Singular Value Decomposition and its Applications in Computer Vision
1 Singular Value Decomposition and its Applications in Computer Vision Subhashis Banerjee Department of Computer Science and Engineering IIT Delhi October 24, 2013
2 Overview
- Linear algebra basics
- Singular value decomposition
- Linear equations and least squares
- Principal component analysis
- Latent semantics and topic discovery
- Clustering?
3 Linear systems
$m$ equations in $n$ unknowns: $Ax = b$, with $A \in \mathbb{R}^{m \times n}$, $x \in \mathbb{R}^n$ and $b \in \mathbb{R}^m$. Two reasons are usually offered for the importance of linearity:
- Superposition: if a force $f_1$ produces $a_1$ and $f_2$ produces $a_2$, then a combined force $f_1 + \alpha f_2$ produces $a_1 + \alpha a_2$.
- Pragmatics: $f(x, y) = 0$ and $g(x, y) = 0$ yield $F(x) = 0$ by elimination, with degree of $F$ = degree of $f$ $\times$ degree of $g$. A system of $m$ quadratic equations gives a polynomial of degree $2^m$. The only case in which the exponential is harmless is when the base is 1 (linear).
4 Linear (in)dependence
Given vectors $a_1, \ldots, a_n$ and scalars $x_1, \ldots, x_n$, the vector $b = \sum_{j=1}^{n} x_j a_j$ is a linear combination of the vectors. The vectors $a_1, \ldots, a_n$ are linearly dependent iff at least one of them is a linear combination of the others (of the ones that precede it). A set of vectors $a_1, \ldots, a_n$ is a basis for a set $B$ of vectors if they are linearly independent and every vector in $B$ can be expressed as a linear combination of $a_1, \ldots, a_n$. Two different bases for the same vector space $B$ have the same number of vectors (the dimension).
5 Inner product and orthogonality
- 2-norm: $\|b\|^2 = \big\|\sum_{j=1}^{m} b_j e_j\big\|^2 = \sum_{j=1}^{m} b_j^2 = b^T b$
- Inner product: $b^T c = \|b\| \, \|c\| \cos\theta$
- Orthogonality: $b^T c = 0$
- Projection of $b$ onto $c$: $\dfrac{c c^T}{c^T c} \, b$
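The projection formula can be checked numerically; a minimal NumPy sketch (the example vectors are mine):

```python
import numpy as np

# Project b onto c using the rank-1 projector (c c^T) / (c^T c).
b = np.array([3.0, 4.0])
c = np.array([1.0, 0.0])
P = np.outer(c, c) / (c @ c)   # projection matrix onto span{c}
p = P @ b                      # component of b along c
r = b - p                      # residual, orthogonal to c
```

The residual `r` satisfies $r^T c = 0$, which is exactly the orthogonality condition above.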
6 Orthogonal subspaces and rank
Any basis $a_1, \ldots, a_n$ for a subspace $\mathcal{A}$ of $\mathbb{R}^m$ can be extended to a basis for $\mathbb{R}^m$ by adding $m - n$ vectors $a_{n+1}, \ldots, a_m$. If $\mathcal{A}$ is a subspace of $\mathbb{R}^m$ for some $m$, then the orthogonal complement $\mathcal{A}^\perp$ of $\mathcal{A}$ is the set of all vectors in $\mathbb{R}^m$ that are orthogonal to all the vectors in $\mathcal{A}$, and $\dim(\mathcal{A}) + \dim(\mathcal{A}^\perp) = m$. $\mathrm{null}(A) = \{x : Ax = 0\}$, with $\dim(\mathrm{null}(A)) = h$ (the nullity). $\mathrm{range}(A) = \{b : Ax = b \text{ for some } x\}$, with $\dim(\mathrm{range}(A)) = r$ (the rank). $r = n - h$. The number of linearly independent rows of $A$ equals its number of linearly independent columns.
7 Solutions of a linear system $Ax = b$
- $\mathrm{range}(A)$: dimension $r = \mathrm{rank}(A)$; $\mathrm{null}(A)$: dimension $h = \mathrm{nullity}(A)$; $\mathrm{range}(A)^\perp$: dimension $m - r$; $\mathrm{null}(A)^\perp$: dimension $n - h$.
- $\mathrm{null}(A)^\perp = \mathrm{range}(A^T)$ and $\mathrm{range}(A)^\perp = \mathrm{null}(A^T)$.
- $b \notin \mathrm{range}(A) \implies$ no solutions. For $b \in \mathrm{range}(A)$:
  - $r = n = m$: invertible, unique solution;
  - $r = n$, $m > n$: redundant, unique solution;
  - $r < n$: underdetermined, $\infty^{\,n-r}$ solutions.
8 Orthogonal matrices
A set of vectors $V$ is orthogonal if its elements are pairwise orthogonal, and orthonormal if, in addition, $\|x\| = 1$ for each $x \in V$. Vectors in an orthonormal set are linearly independent. $V = [v_1, \ldots, v_n]$ is an orthogonal matrix: $V^{-1} = V^T$, i.e. $V^T V = V V^T = I$. The norm of a vector $x$ is not changed by multiplication by an orthogonal matrix: $\|Vx\|^2 = x^T V^T V x = x^T x = \|x\|^2$.
9 Vector norms
10 Matrix norms
11 Overview
- Linear algebra basics
- Singular value decomposition (Golub and Van Loan, 1996; Golub and Kahan, 1965)
- Linear equations and least squares
- Principal component analysis
- Latent semantics and topic discovery
- Clustering?
12 Singular value decomposition
Geometric view: an $m \times n$ matrix $A$ of rank $r$ maps the $r$-dimensional unit hypersphere in $\mathrm{rowspace}(A)$ into an $r$-dimensional hyperellipse in $\mathrm{range}(A)$.
Algebraic view: if $A$ is a real $m \times n$ matrix then there exist orthogonal matrices $U = [u_1 \cdots u_m] \in \mathbb{R}^{m \times m}$ and $V = [v_1 \cdots v_n] \in \mathbb{R}^{n \times n}$ such that $U^T A V = \Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_p) \in \mathbb{R}^{m \times n}$, where $p = \min(m, n)$ and $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_p \ge 0$. Equivalently, $A = U \Sigma V^T$.
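The algebraic view can be verified directly in code; a small NumPy sketch (the random matrix and its shape are my choices):

```python
import numpy as np

# Verify A = U Sigma V^T on a random 4x3 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
U, s, Vt = np.linalg.svd(A)            # full SVD: U is 4x4, Vt is 3x3
Sigma = np.zeros((4, 3))
Sigma[:3, :3] = np.diag(s)             # embed singular values in an m x n matrix
ok_factorisation = np.allclose(U @ Sigma @ Vt, A)   # A = U Sigma V^T
ok_ordering = bool(np.all(s[:-1] >= s[1:]))         # sigma_1 >= ... >= sigma_p >= 0
```

Note that `np.linalg.svd` returns $V^T$ (as `Vt`), not $V$.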
13 Proof (sketch)
Consider all vectors of the form $Ax = b$ for $x$ on the unit hypersphere $\|x\| = 1$, and the scalar function $\|Ax\|$. Let $v_1$ be a vector on the unit sphere in $\mathbb{R}^n$ where this function is maximised, and let $\sigma_1 u_1$ be the corresponding image: $A v_1 = \sigma_1 u_1$ with $\|u_1\| = 1$. Let $u_1$ and $v_1$ be extended to orthonormal bases for $\mathbb{R}^m$ and $\mathbb{R}^n$ respectively, with corresponding matrices $U_1$ and $V_1$. We have
$U_1^T A V_1 = S = \begin{bmatrix} \sigma_1 & w^T \\ 0 & A_1 \end{bmatrix}.$
Consider the length of the vector $S \begin{bmatrix} \sigma_1 \\ w \end{bmatrix} = \begin{bmatrix} \sigma_1^2 + w^T w \\ A_1 w \end{bmatrix}$: it is at least $\sigma_1^2 + w^T w = \sqrt{\sigma_1^2 + w^T w}\, \big\| \begin{bmatrix} \sigma_1 \\ w \end{bmatrix} \big\|$, while by maximality of $\sigma_1$ it is at most $\sigma_1 \big\| \begin{bmatrix} \sigma_1 \\ w \end{bmatrix} \big\|$, so $\sigma_1 \ge \sqrt{\sigma_1^2 + w^T w}$. Conclude $w = 0$ and induct on $A_1$.
14 SVD geometry
1. $\xi = V^T x$, where $V = [v_1 \; v_2]$
2. $\eta = \Sigma \xi$, where $\Sigma = \begin{bmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{bmatrix}$
3. Finally, $b = U \eta$.
15 SVD: structure of a matrix
Suppose $\sigma_1 \ge \cdots \ge \sigma_r > \sigma_{r+1} = 0$. Then
- $\mathrm{rank}(A) = r$
- $\mathrm{null}(A) = \mathrm{span}\{v_{r+1}, \ldots, v_n\}$
- $\mathrm{range}(A) = \mathrm{span}\{u_1, \ldots, u_r\}$
Setting $U_r = U(:, 1{:}r)$, $\Sigma_r = \Sigma(1{:}r, 1{:}r)$ and $V_r = V(:, 1{:}r)$, we have $A = U_r \Sigma_r V_r^T = \sum_{i=1}^{r} \sigma_i u_i v_i^T$. Moreover $\|A\|_2 = \sigma_1$ and $\|A\|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}^2} = \sqrt{\sigma_1^2 + \cdots + \sigma_p^2}$.
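The two norm identities on this slide are easy to confirm numerically; a quick sketch, assuming nothing beyond NumPy:

```python
import numpy as np

# Spectral norm equals the largest singular value; Frobenius norm equals
# the square root of the sum of squared singular values.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
s = np.linalg.svd(A, compute_uv=False)   # singular values only
spectral = np.linalg.norm(A, 2)
frobenius = np.linalg.norm(A, 'fro')
```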
16 SVD: low rank approximation
For any $\nu$ with $0 \le \nu \le r$, define $A_\nu = \sum_{i=1}^{\nu} \sigma_i u_i v_i^T$ (if $\nu = p = \min(m, n)$, define $\sigma_{\nu+1} = 0$). Then
$\|A - A_\nu\|_2 = \inf_{B \in \mathbb{R}^{m \times n},\ \mathrm{rank}(B) \le \nu} \|A - B\|_2 = \sigma_{\nu+1}.$
Proof (sketch): $\|Aw\|$ is maximised by the $w$ that is closest in direction to most of the rows of $A$. The projections of the rows of $A$ onto $v_1$ are given by $A v_1 v_1^T$, and this is indeed the best rank-1 approximation: $\|A - A v_1 v_1^T\|_2 = \|A - \sigma_1 u_1 v_1^T\|_2$ is the smallest value of $\|A - B\|_2$ over all rank-1 matrices $B$.
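The rank-$\nu$ truncation and its approximation error can be illustrated in a few lines (the matrix size, seed and $\nu$ are arbitrary choices of mine):

```python
import numpy as np

# Rank-nu truncation A_nu = sum_{i<=nu} sigma_i u_i v_i^T, and the
# identity ||A - A_nu||_2 = sigma_{nu+1}.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
nu = 2
A_nu = U[:, :nu] @ np.diag(s[:nu]) @ Vt[:nu, :]
err = np.linalg.norm(A - A_nu, 2)       # should equal s[nu] (0-indexed sigma_{nu+1})
```

This is exactly the mechanism behind SVD-based image compression mentioned later in the deck: keep only the leading $\nu$ triples $(\sigma_i, u_i, v_i)$.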
17 Overview
- Linear algebra basics
- Singular value decomposition
- Linear equations and least squares
- Principal component analysis
- Latent semantics and topic discovery
- Clustering?
18 Least squares
The minimum-norm least squares solution to a linear system $Ax = b$, that is, the shortest vector $x$ that achieves $\min_x \|Ax - b\|$, is unique and is given by $x = V \Sigma^{\dagger} U^T b$, where $\Sigma^{\dagger} = \mathrm{diag}(1/\sigma_1, \ldots, 1/\sigma_r, 0, \ldots, 0)$ is an $n \times m$ diagonal matrix. The matrix $A^{\dagger} = V \Sigma^{\dagger} U^T$ is called the pseudoinverse of $A$.
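A sketch of the SVD-based solution against NumPy's built-in solvers (the full-rank random system is my example; with full column rank the rank-$r$ truncation is just the thin SVD):

```python
import numpy as np

# Minimum-norm least-squares solution x = V Sigma^+ U^T b = A^+ b.
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))        # full column rank almost surely
b = rng.standard_normal(6)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)         # V Sigma^{-1} U^T b, since r = n here
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]
x_pinv = np.linalg.pinv(A) @ b
```

All three computations agree; `np.linalg.pinv` is itself computed via this SVD construction.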
19 Pseudoinverse proof (sketch)
$\min_x \|Ax - b\| = \min_x \|U \Sigma V^T x - b\| = \min_x \|U(\Sigma V^T x - U^T b)\| = \min_x \|\Sigma V^T x - U^T b\|.$
Setting $y = V^T x$ and $c = U^T b$, we have
$\min_x \|Ax - b\| = \min_y \|\Sigma y - c\| = \min_y \left\| \mathrm{diag}(\sigma_1, \ldots, \sigma_r, 0, \ldots, 0)\, [y_1, \ldots, y_r, y_{r+1}, \ldots, y_n]^T - [c_1, \ldots, c_r, c_{r+1}, \ldots, c_m]^T \right\|.$
The minimum is attained with $y_i = c_i / \sigma_i$ for $i \le r$; taking $y_i = 0$ for $i > r$ gives the minimum-norm solution.
20 Least squares for homogeneous systems
The solution to $\min_{\|x\| = 1} \|Ax\|$ is given by $v_n$, the last column of $V$. Proof: $\min_{\|x\|=1} \|Ax\| = \min_{\|x\|=1} \|U \Sigma V^T x\| = \min_{\|x\|=1} \|\Sigma V^T x\| = \min_{\|y\|=1} \|\Sigma y\|$, where $y = V^T x$. Clearly this is minimised by the vector $y = [0, \ldots, 0, 1]^T$, i.e. by $x = Vy = v_n$.
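A small numerical check that the last right singular vector attains the minimum (the comparison against random unit vectors is a sanity check, not a proof):

```python
import numpy as np

# min ||Ax|| s.t. ||x|| = 1 is attained at v_n, the last right singular vector,
# with value sigma_n.
rng = np.random.default_rng(4)
A = rng.standard_normal((8, 3))
U, s, Vt = np.linalg.svd(A)
x = Vt[-1]                             # last row of V^T = last column of V
best = np.linalg.norm(A @ x)           # equals s[-1], the smallest singular value
# No random unit vector does better:
for _ in range(100):
    z = rng.standard_normal(3)
    z /= np.linalg.norm(z)
    assert np.linalg.norm(A @ z) >= best - 1e-9
```

This is the standard recipe for homogeneous problems in vision, e.g. estimating a homography or fundamental matrix from point correspondences.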
21 A couple of other least squares problems
- Given an $m \times n$ matrix $A$ with $m \ge n$, find the vector $x$ that minimises $\|Ax\|$ subject to $\|x\| = 1$ and $Cx = 0$.
- Given an $m \times n$ matrix $A$ with $m \ge n$, find the vector $x$ that minimises $\|Ax\|$ subject to $\|x\| = 1$ and $x \in \mathrm{range}(G)$.
22 Overview
- Linear algebra basics
- Singular value decomposition
- Linear equations and least squares
- Principal component analysis (Pearson, 1901; Shlens, 2003)
- Latent semantics and topic discovery
- Clustering?
23 PCA: A toy problem
$X(t) = [x_A(t)\ y_A(t)\ x_B(t)\ y_B(t)\ x_C(t)\ y_C(t)]^T$; the data matrix is $X = [X(1)\ X(2)\ \cdots\ X(n)]$. Is there another basis, a linear combination of the original basis, that best expresses our data set? $PX = Y$.
24 PCA issues: noise and redundancy
$\mathrm{SNR} = \sigma^2_{\mathrm{signal}} / \sigma^2_{\mathrm{noise}} \gg 1$
25 Covariance
Consider zero-mean vectors $a = [a_1\ a_2\ \ldots\ a_n]$ and $b = [b_1\ b_2\ \ldots\ b_n]$.
- Variance: $\sigma_a^2 = \langle a_i a_i \rangle_i$ and $\sigma_b^2 = \langle b_i b_i \rangle_i$
- Covariance: $\sigma_{ab}^2 = \langle a_i b_i \rangle_i = \frac{1}{n-1} a b^T$
If $X = [x_1\ x_2\ \ldots\ x_m]^T$ ($m \times n$), then the covariance matrix is $S_X = \frac{1}{n-1} X X^T$; the $ij$-th entry of $S_X$ is obtained by substituting $x_i$ for $a$ and $x_j$ for $b$. $S_X$ is square, symmetric, $m \times m$. The diagonal entries of $S_X$ are the variances of particular measurement types; the off-diagonal entries are the covariances between measurement types.
26 Solving PCA
$S_Y = \frac{1}{n-1} Y Y^T = \frac{1}{n-1} (PX)(PX)^T = \frac{1}{n-1} P X X^T P^T.$
Writing $X = U \Sigma V^T$, we have $X X^T = U \Sigma \Sigma^T U^T$. Setting $P = U^T$, we have $S_Y = \frac{1}{n-1} \Sigma \Sigma^T$, which is diagonal: the data is maximally uncorrelated. The effective rank $r$ of $\Sigma$ gives the dimensionality reduction.
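The whole recipe — zero-mean the rows, take the SVD, rotate by $U^T$ — fits in a few lines; a sketch with synthetic correlated data (all names and sizes are mine):

```python
import numpy as np

# PCA via SVD: with P = U^T, the transformed covariance S_Y is diagonal.
rng = np.random.default_rng(5)
n = 500
X = rng.standard_normal((3, n))         # 3 measurement types, n samples
X[1] += 0.8 * X[0]                      # introduce redundancy (correlation)
X -= X.mean(axis=1, keepdims=True)      # zero-mean each row
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Y = U.T @ X                             # rotate into the principal axes
S_Y = Y @ Y.T / (n - 1)                 # covariance in the new basis
```

`S_Y` comes out diagonal with entries $\sigma_i^2/(n-1)$, the variances along the principal components, sorted in decreasing order.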
27 PCA: tacit assumptions
- Linearity.
- Mean and variance are sufficient statistics $\implies$ a Gaussian distribution.
- Large variances have important dynamics.
- The principal components are orthogonal.
28 Application: eigenfaces (Turk and Pentland, 1991)
Obtain a set $S$ of $M$ face images, $S = \{\Gamma_1, \ldots, \Gamma_M\}$, and the mean image $\Psi = \frac{1}{M} \sum_{j=1}^{M} \Gamma_j$.
29 Application: eigenfaces (Turk and Pentland, 1991)
Compute the centred images $\Phi_i = \Gamma_i - \Psi$. The covariance matrix is $C = \frac{1}{M} \sum_{j=1}^{M} \Phi_j \Phi_j^T = A A^T$, where $A = [\Phi_1 \cdots \Phi_M]$. Its size is $N^2 \times N^2$: intractable. But if $v_i$ is an eigenvector of $A^T A$ ($M \times M$), then $A v_i$ is an eigenvector of $A A^T$: $A^T A v_i = \mu_i v_i \iff A A^T (A v_i) = \mu_i (A v_i)$.
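The $A^T A$ trick can be demonstrated without ever forming the $N^2 \times N^2$ covariance; a sketch with random stand-in "images" (sizes are illustrative, not from the paper):

```python
import numpy as np

# Eigenvectors of the small M x M matrix A^T A map to eigenvectors of the
# huge N^2 x N^2 matrix A A^T via v -> A v.
rng = np.random.default_rng(6)
Npix, M = 1000, 10                      # N^2 = 1000 pixels, M = 10 images
A = rng.standard_normal((Npix, M))      # columns play the role of centred images Phi_i
mu, V = np.linalg.eigh(A.T @ A)         # small M x M eigenproblem
u = A @ V[:, -1]                        # candidate eigenvector of A A^T (largest mu)
residual = A @ (A.T @ u) - mu[-1] * u   # should vanish: A A^T u = mu u
```

Here `A @ (A.T @ u)` applies $AA^T$ as two matrix-vector products, so the $N^2 \times N^2$ matrix is never materialised.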
30 Application: eigenfaces (Turk and Pentland, 1991)
Recognition: compute the weights $\omega_k = u_k^T (\Gamma - \Psi)$ and find the minimum distance to the database of faces.
31 Overview
- Linear algebra basics
- Singular value decomposition
- Linear equations and least squares
- Principal component analysis
- Latent semantics and topic discovery (Scott et al., 1990; Papadimitriou et al., 1998)
- Clustering?
32 Latent semantics and topic discovery
Consider an $m \times n$ matrix $A$ whose $ij$-th entry denotes the marks obtained by the $i$-th student in the $j$-th test (Naveen Garg, Abhiram Ranade). Are the marks obtained by the $i$-th student in various tests correlated? What are the capabilities of the $i$-th student? What does the $j$-th test evaluate? What is the expected rank of $A$?
33 Latent semantics and topic discovery
Suppose there are really only three abilities (topics) that determine a student's marks in tests: verbal, logical and quantitative. Suppose $v_i$, $l_i$ and $q_i$ characterise these abilities of the $i$-th student, and let $V_j$, $L_j$ and $Q_j$ characterise the extent to which the $j$-th test evaluates these abilities. A generative model for the $ij$-th entry of $A$ may be given as $v_i V_j + l_i L_j + q_i Q_j$.
34 Latent semantics and topic discovery
A new $m \times 1$ term vector $t$ can be projected into the LSI space as $\hat{t} = t^T U_k \Sigma_k^{-1}$. A new $1 \times n$ document vector $d$ can be projected into the LSI space as $\hat{d} = d V_k \Sigma_k^{-1}$.
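A sketch of this folding-in with a toy term-document matrix (the sizes are arbitrary, and folding in an existing column is only a consistency check on the formula):

```python
import numpy as np

# Folding a new m x 1 vector t into a k-dimensional LSI space:
# t_hat = t^T U_k Sigma_k^{-1}.
rng = np.random.default_rng(7)
A = rng.random((12, 8))                  # toy matrix: 12 terms, 8 documents
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 3
U_k, Sigma_k = U[:, :k], np.diag(s[:k])
t = A[:, 0]                              # fold in the first column as a sanity check
t_hat = t @ U_k @ np.linalg.inv(Sigma_k)
```

Folding in an existing column of $A$ reproduces that document's LSI coordinates, i.e. the first $k$ entries of the corresponding column of $V^T$.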
35 Topic discovery example
An example with more than 2000 images and 12 topics (LDA).
36 Overview
- Linear algebra basics
- Singular value decomposition
- Linear equations and least squares
- Principal component analysis
- Latent semantics and topic discovery
- Clustering (Drineas et al., 1999)
37 Clustering
Partition the rows of a matrix so that similar rows (points in $n$-dimensional space) are clustered together. Given points $a_1, \ldots, a_m \in \mathbb{R}^n$, find centres $c_1, \ldots, c_k \in \mathbb{R}^n$ so as to minimise $\sum_i d(a_i, \{c_1, \ldots, c_k\})^2$, where $d(a, S)$ is the smallest distance from a point $a$ to any of the points in $S$ (k-means). Here $k$ is a constant; consider $k = 2$ for simplicity. Even then the problem is NP-complete for arbitrary $n$. We have $k$ centres; if $n = k$ then the problem can be solved in polynomial time.
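The objective being minimised can be written down directly; a minimal sketch of the k-means cost (the sample points and centres are mine, chosen so two well-placed centroids beat two badly-placed ones):

```python
import numpy as np

def kmeans_cost(points, centres):
    """Sum over points of d(a, {c_1, ..., c_k})^2, the k-means objective."""
    # squared distance from every point to every centre, then min over centres
    d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

pts = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]])
good = np.array([[0.0, 0.5], [10.0, 0.5]])   # centroids of the two clusters
bad = np.array([[5.0, 0.0], [5.0, 1.0]])     # centres between the clusters
```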
38 Clustering
The points belonging to the two clusters can be separated by the perpendicular bisector of the line joining the two centres, and the centre selected for a group must be its centroid. There are only a polynomial number of lines to consider: each set of cluster centres defines a Voronoi diagram; each cell is a polyhedron, and the total number of faces in $k$ cells is no more than $\binom{k}{2}$. Enumerate all sets of hyperplanes (faces), each of which contains $k$ independent points of $A$, such that they define exactly $k$ cells; assign each point of $A$ lying on a hyperplane to one of the sides. The best $k$-dimensional subspace can be found using SVD, which gives a 2-approximation.
39 Other applications
- High-dimensional matching
- Graph partitioning
- Metric embedding
- Image compression
- ...
Learn SVD well!
More informationMATH 31 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL
MATH 3 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL MAIN TOPICS FOR THE FINAL EXAM:. Vectors. Dot product. Cross product. Geometric applications. 2. Row reduction. Null space, column space, row space, left
More informationSingular Value Decomposition
Singular Value Decomposition Motivatation The diagonalization theorem play a part in many interesting applications. Unfortunately not all matrices can be factored as A = PDP However a factorization A =
More information1 9/5 Matrices, vectors, and their applications
1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric
More informationIntroduction to Matrix Algebra
Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p
More informationLecture: Face Recognition and Feature Reduction
Lecture: Face Recognition and Feature Reduction Juan Carlos Niebles and Ranjay Krishna Stanford Vision and Learning Lab 1 Recap - Curse of dimensionality Assume 5000 points uniformly distributed in the
More informationLEC 2: Principal Component Analysis (PCA) A First Dimensionality Reduction Approach
LEC 2: Principal Component Analysis (PCA) A First Dimensionality Reduction Approach Dr. Guangliang Chen February 9, 2016 Outline Introduction Review of linear algebra Matrix SVD PCA Motivation The digits
More informationMODULE 8 Topics: Null space, range, column space, row space and rank of a matrix
MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix Definition: Let L : V 1 V 2 be a linear operator. The null space N (L) of L is the subspace of V 1 defined by N (L) = {x
More informationSystem 1 (last lecture) : limited to rigidly structured shapes. System 2 : recognition of a class of varying shapes. Need to:
System 2 : Modelling & Recognising Modelling and Recognising Classes of Classes of Shapes Shape : PDM & PCA All the same shape? System 1 (last lecture) : limited to rigidly structured shapes System 2 :
More informationBasic Calculus Review
Basic Calculus Review Lorenzo Rosasco ISML Mod. 2 - Machine Learning Vector Spaces Functionals and Operators (Matrices) Vector Space A vector space is a set V with binary operations +: V V V and : R V
More informationLinear Systems. Carlo Tomasi. June 12, r = rank(a) b range(a) n r solutions
Linear Systems Carlo Tomasi June, 08 Section characterizes the existence and multiplicity of the solutions of a linear system in terms of the four fundamental spaces associated with the system s matrix
More informationQuantum Computing Lecture 2. Review of Linear Algebra
Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces
More information(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax =
. (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)
More informationMachine Learning. B. Unsupervised Learning B.2 Dimensionality Reduction. Lars Schmidt-Thieme, Nicolas Schilling
Machine Learning B. Unsupervised Learning B.2 Dimensionality Reduction Lars Schmidt-Thieme, Nicolas Schilling Information Systems and Machine Learning Lab (ISMLL) Institute for Computer Science University
More informationFinal Exam, Linear Algebra, Fall, 2003, W. Stephen Wilson
Final Exam, Linear Algebra, Fall, 2003, W. Stephen Wilson Name: TA Name and section: NO CALCULATORS, SHOW ALL WORK, NO OTHER PAPERS ON DESK. There is very little actual work to be done on this exam if
More informationLecture: Face Recognition and Feature Reduction
Lecture: Face Recognition and Feature Reduction Juan Carlos Niebles and Ranjay Krishna Stanford Vision and Learning Lab Lecture 11-1 Recap - Curse of dimensionality Assume 5000 points uniformly distributed
More informationLinear Algebra Methods for Data Mining
Linear Algebra Methods for Data Mining Saara Hyvönen, Saara.Hyvonen@cs.helsinki.fi Spring 2007 1. Basic Linear Algebra Linear Algebra Methods for Data Mining, Spring 2007, University of Helsinki Example
More informationAlgebra C Numerical Linear Algebra Sample Exam Problems
Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric
More information