Exercises* on Principal Component Analysis

Laurenz Wiskott
Institut für Neuroinformatik
Ruhr-Universität Bochum, Germany, EU

4 February 2017

© 2017 Laurenz Wiskott (homepage: …). This work (except for all figures from other sources, if present) is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/4.0/. Figures from other sources have their own copyright, which is generally indicated. Do not distribute parts of these lecture notes showing figures with non-free copyrights (here usually figures I have the rights to publish but you don't, like my own published figures).

Several of my exercises (not necessarily on this topic) were inspired by papers and textbooks by other authors. Unfortunately, I did not document that well, because initially I did not intend to make the exercises publicly available, and now I cannot trace it back anymore. So I cannot give as much credit as I would like to. The concrete versions of the exercises are certainly my own work, though. In cases where I reuse an exercise in different variants, references may be wrong for technical reasons.

* These exercises complement my corresponding lecture notes available at …/Teaching/Material/, where you can also find other teaching material such as programming exercises. The table of contents of the lecture notes is reproduced here to give an orientation as to when the exercises can reasonably be solved. For the best learning effect I recommend first seriously trying to solve the exercises yourself before looking into the solutions.

Contents

1 Intuition
  1.1 Problem statement
    1.1.1 Exercise: Second moment from mean and variance
    1.1.2 Exercise: Second moment of a uniform distribution
  1.2 Projection and reconstruction error
    1.2.1 Exercise: Projection by an inner product is orthogonal
    1.2.2 Exercise: Error function
  1.3 Reconstruction error and variance
  1.4 Covariance matrix
    1.4.1 Exercise: Relation among the elements of a second moment matrix
    1.4.2 Exercise: From data distribution to second-moment matrix
    1.4.3 Exercise: From data distribution to second-moment matrix
    1.4.4 Exercise: From second-moment matrix to data
    1.4.5 Exercise: Data distributions with and without mean
  1.5 Covariance matrix and higher order structure
  1.6 PCA by diagonalizing the covariance matrix

2 Formalism
  2.1 Definition of the PCA-optimization problem
  2.2 Matrix Vᵀ: Mapping from high-dimensional old coordinate system to low-dimensional new coordinate system
  2.3 Matrix V: Mapping from low-dimensional new coordinate system to subspace in old coordinate system
    2.3.1 Exercise: Norm of a vector
  2.4 Matrix Vᵀ V: Identity mapping within new coordinate system
  2.5 Matrix V Vᵀ: Projection from high- to low-dimensional (sub)space within old coordinate system
  2.6 Variance
  2.7 Reconstruction error
  2.8 Covariance matrix
    2.8.1 Exercise: Second-moment matrices are positive semi-definite
    2.8.2 Exercise: Covariance matrix from mean and second-moment matrix
  2.9 Eigenvalue equation of the covariance matrix
    2.9.1 Exercise: Eigenvectors of a symmetric matrix are orthogonal
  2.10 Total variance of the data x
  2.11 Diagonalizing the covariance matrix
  2.12 Variance of y for a diagonalized covariance matrix
  2.13 Constraints of matrix V
  2.14 Finding the optimal subspace
  2.15 Interpretation of the result
    2.15.1 Exercise: Moments of a data distribution: Simple example
    2.15.2 Exercise: From data distribution to second-moment matrix via the eigenvectors
    2.15.3 Exercise: From data distribution to second-moment matrix via the eigenvectors
    2.15.4 Exercise: Dimensionality reduction
  2.16 PCA Algorithm
  2.17 Intuition of the Results
  2.18 Whitening or sphering
    2.18.1 Exercise: Sphered data is uncorrelated
  2.19 Singular value decomposition
3 Application
  3.1 Face processing

4 Acknowledgment


1 Intuition

1.1 Problem statement

1.1.1 Exercise: Second moment from mean and variance

How are mean m, variance v, and second moment s related to each other? In other words, if mean and variance of a one-dimensional distribution were given, how could you compute the corresponding second moment?

Hint: Assume x to be the data values and x̄ their mean. Then play around with the corresponding expressions for mean x̄ = ⟨x⟩, variance ⟨(x − x̄)²⟩, and second moment ⟨x²⟩.

1.1.2 Exercise: Second moment of a uniform distribution

Calculate the second moment of a uniform, i.e. flat, distribution in [−1, +1]. This is a distribution where every value between −1 and +1 is equally likely and other values are impossible.

1.2 Projection and reconstruction error

1.2.1 Exercise: Projection by an inner product is orthogonal

1. We have defined the projected vector x_∥ by

       x_∥ = v vᵀ x    (1)

   where x is the data point and v is the unit vector along the principal axis of the projection. Show that the difference vector

       x_⊥ = x − x_∥    (2)

   between the data point and the projected data point is orthogonal to v.

2. Give a reason why the orthogonality of the two vectors is useful.

1.2.2 Exercise: Error function

Why should the reconstruction error E be defined as the mean of the squared difference of the original and reconstructed data vectors, and not simply the mean of the difference or the mean of the absolute difference?
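The orthogonality claimed in exercise 1.2.1 can also be sanity-checked numerically. Below is a minimal sketch using NumPy; the random data point and the variable names are illustrative assumptions, not part of the exercise:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=3)        # an arbitrary data point
v = rng.normal(size=3)
v /= np.linalg.norm(v)        # unit vector along the projection axis

x_par = v * (v @ x)           # projected vector, eq. (1): x_par = v v^T x
x_perp = x - x_par            # difference vector, eq. (2)

# the inner product with v vanishes up to rounding error
print(np.isclose(v @ x_perp, 0.0))  # True
```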
1.3 Reconstruction error and variance

1.4 Covariance matrix

1.4.1 Exercise: Relation among the elements of a second moment matrix

For a set of data vectors x^µ, µ = 1, ..., M, the second moment matrix C is defined as C_ij := ⟨x_i^µ x_j^µ⟩_µ. What are the upper and lower limits of C_ij if C_ii and C_jj are known?

Hint: Consider ⟨x_i^µ x_j^µ⟩_µ = (1/M) Σ_µ x_i^µ x_j^µ as the scalar product of two vectors.

1.4.2 Exercise: From data distribution to second-moment matrix

Give an estimate of the second moment matrix for the following data distributions.

[Figure: three data distributions (a), (b), (c) plotted over x₁ and x₂. CC BY-SA 4.0]

1.4.3 Exercise: From data distribution to second-moment matrix

Give an estimate of the second moment matrix for the following data distributions.

[Figure: three data distributions (a), (b), (c) plotted over x₁ and x₂. CC BY-SA 4.0]

1.4.4 Exercise: From second-moment matrix to data

Draw a data distribution qualitatively consistent with the following second-moment matrices C:

(a) C = (…)    (b) C = (…)    (c) C = (…)
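Estimates like those asked for in exercises 1.4.2 and 1.4.3 can be checked against a sample by computing the second-moment matrix directly from its definition. The sketch below uses NumPy with made-up data; it also tests the Cauchy-Schwarz bound that the hint of exercise 1.4.1 points towards:

```python
import numpy as np

rng = np.random.default_rng(0)

# M = 1000 two-dimensional sample points, one per row (made-up data)
X = rng.normal(size=(1000, 2)) @ np.array([[2.0, 0.0], [1.0, 0.5]])

# C_ij = <x_i^mu x_j^mu>_mu, i.e. the average over all outer products
C = X.T @ X / len(X)
print(C)

# Cauchy-Schwarz: the off-diagonal element is bounded via the diagonal
assert abs(C[0, 1]) <= np.sqrt(C[0, 0] * C[1, 1]) + 1e-12
```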
1.4.5 Exercise: Data distributions with and without mean

1. Define a procedure by which you can turn any mean-free data distribution into a distribution with finite (non-zero) mean but identical second-moment matrix. (Are there exceptions?)

2. Conversely, define a procedure by which you can turn any data distribution with finite mean into a distribution with zero mean but identical second-moment matrix. (Are there exceptions?)

Hint: Think about what happens if you flip a point at the origin, i.e. if you replace x^µ by −x^µ in the data set.

1.5 Covariance matrix and higher order structure

1.6 PCA by diagonalizing the covariance matrix

2 Formalism

2.1 Definition of the PCA-optimization problem

2.2 Matrix Vᵀ: Mapping from high-dimensional old coordinate system to low-dimensional new coordinate system

2.3 Matrix V: Mapping from low-dimensional new coordinate system to subspace in old coordinate system

2.3.1 Exercise: Norm of a vector

Let b_i, i = 1, ..., N, be an orthonormal basis. Then we have (b_i, b_j) = δ_ij and

    v = Σ_{i=1}^N v_i b_i  with  v_i := (v, b_i)  ∀ v.    (1)

Show that

    ‖v‖² = Σ_{i=1}^N v_i².    (2)
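The identity of equation (2) can be tried out with a random orthonormal basis; a QR decomposition is merely a convenient way to generate one. A minimal NumPy sketch (the dimension and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5

# the columns of Q form a random orthonormal basis b_1, ..., b_N
Q, _ = np.linalg.qr(rng.normal(size=(N, N)))

v = rng.normal(size=N)
coeffs = Q.T @ v              # v_i = (v, b_i) for each basis vector

# the norm is preserved: ||v||^2 equals the sum of squared coefficients
print(np.isclose(v @ v, coeffs @ coeffs))  # True
```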
2.4 Matrix Vᵀ V: Identity mapping within new coordinate system

2.5 Matrix V Vᵀ: Projection from high- to low-dimensional (sub)space within old coordinate system

2.6 Variance

2.7 Reconstruction error

2.8 Covariance matrix

2.8.1 Exercise: Second-moment matrices are positive semi-definite

Show that a second-moment matrix C := ⟨x^µ (x^µ)ᵀ⟩_µ is always positive semi-definite, i.e. for each vector v we find vᵀ C v ≥ 0. For which vectors v does vᵀ C v = 0 hold?

2.8.2 Exercise: Covariance matrix from mean and second-moment matrix

Given some data x^µ, µ = 1, ..., M, with mean

    x̄ := ⟨x⟩ = (…)    (1)

and second-moment matrix

    C := ⟨x xᵀ⟩ = (4, 2; …, …),    (2)

calculate the covariance matrix

    Σ := ⟨(x − x̄)(x − x̄)ᵀ⟩.    (3)

First derive a general formula and then calculate it for the concrete values given.

2.9 Eigenvalue equation of the covariance matrix

2.9.1 Exercise: Eigenvectors of a symmetric matrix are orthogonal

Prove that the eigenvectors of a symmetric matrix are orthogonal if their eigenvalues are different. Proceed as follows:

1. Let A be a symmetric N-dimensional matrix, i.e. A = Aᵀ. Show first that (v, A w) = (A v, w) for any vectors v, w ∈ ℝᴺ, with (·, ·) indicating the Euclidean inner product.

2. Let {a_i} be the eigenvectors of the matrix A with the eigenvalues λ_i. Show with the help of part one that (a_i, a_j) = 0 if λ_i ≠ λ_j.

   Hint: λ_i (a_i, a_j) = ...
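Whatever general formula you derive in exercise 2.8.2 can be validated on synthetic data. The sketch below (NumPy, with made-up data rather than the concrete values of the exercise) checks the standard identity Σ = C − x̄ x̄ᵀ:

```python
import numpy as np

rng = np.random.default_rng(0)

# made-up data: M = 1000 points in R^2 with non-zero mean
X = rng.normal(size=(1000, 2)) + np.array([1.0, -0.5])

mean = X.mean(axis=0)                        # x_bar
C = X.T @ X / len(X)                         # second-moment matrix <x x^T>
Sigma = (X - mean).T @ (X - mean) / len(X)   # covariance matrix

# candidate identity: Sigma = C - x_bar x_bar^T
print(np.allclose(Sigma, C - np.outer(mean, mean)))  # True
```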
2.10 Total variance of the data x

2.11 Diagonalizing the covariance matrix

2.12 Variance of y for a diagonalized covariance matrix

2.13 Constraints of matrix V

2.14 Finding the optimal subspace

2.15 Interpretation of the result

2.15.1 Exercise: Moments of a data distribution: Simple example

Given a data distribution x^µ with

    x¹ = (…), x² = (…), x³ = (…).    (1)

1. Calculate the mean x̄ = ⟨x^µ⟩_µ and the second-moment matrix C = ⟨x^µ x^µᵀ⟩_µ.

2. Determine the normalized eigenvectors c₁ and c₂ of C and the corresponding eigenvalues.

   Hint: Look at the data distribution and guess the eigenvectors on the basis of the symmetry of the distribution. Then insert the guessed eigenvectors into the eigenvalue equation, verify that they are eigenvectors, and calculate the eigenvalues. Otherwise you have to go the hard way via the characteristic polynomial.

3. Determine the first and second moment of

       y^µ = c_αᵀ x^µ,    (2)

   i.e. ⟨y^µ⟩_µ and ⟨(y^µ)²⟩_µ, for α ∈ {1, 2}.

   Hint: You don't have to compute the projected data. There is a simpler way.

2.15.2 Exercise: From data distribution to second-moment matrix via the eigenvectors

Give an estimate of the second-moment matrix for the following data distributions by first guessing the eigenvalues and normalized eigenvectors from the distribution and then calculating the matrix.

[Figure: three data distributions (a), (b), (c) plotted over x₁ and x₂. CC BY-SA 4.0]
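The route taken in exercises 2.15.2 and 2.15.3, from guessed eigenvalues and eigenvectors back to the matrix, rests on the spectral decomposition C = Σ_α λ_α c_α c_αᵀ. A minimal NumPy sketch with made-up guesses (not taken from the figures):

```python
import numpy as np

# made-up guesses: eigenvalues and normalized eigenvectors
lam = np.array([2.0, 0.5])
c1 = np.array([1.0, 1.0]) / np.sqrt(2)
c2 = np.array([1.0, -1.0]) / np.sqrt(2)

# assemble C = sum_alpha lambda_alpha c_alpha c_alpha^T
C = lam[0] * np.outer(c1, c1) + lam[1] * np.outer(c2, c2)
print(C)

# verify the eigenvalue equation C c_alpha = lambda_alpha c_alpha
print(np.allclose(C @ c1, lam[0] * c1), np.allclose(C @ c2, lam[1] * c2))
```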
2.15.3 Exercise: From data distribution to second-moment matrix via the eigenvectors

Give an estimate of the second-moment matrix for the following data distributions by first guessing the eigenvalues and normalized eigenvectors from the distribution and then calculating the matrix.

[Figure: three data distributions (a), (b), (c) plotted over x₁ and x₂. CC BY-SA 4.0]

2.15.4 Exercise: Dimensionality reduction

Given some data in ℝ³ with the corresponding 3×3 second-moment matrix C with eigenvectors c_α and eigenvalues λ_α, with λ₁ = 3, λ₂ = 1, and λ₃ = 3.

1. Define a matrix A ∈ ℝ^{2×3} that maps the data into a two-dimensional space while preserving as much variance as possible.

2. Define a matrix B ∈ ℝ^{3×2} that places the reduced data back into ℝ³ with minimal reconstruction error. How large is the reconstruction error?

3. Prove that AB is an identity matrix. Why would one expect that intuitively?

4. Prove that BA is a projection matrix but not the identity matrix.

2.16 PCA Algorithm

2.17 Intuition of the Results

2.18 Whitening or sphering

2.18.1 Exercise: Sphered data is uncorrelated

Prove that sphered zero-mean data x̂ projected onto two orthogonal vectors n₁ and n₂ is uncorrelated.

Hint: The correlation coefficient for two scalar data sets y₁ and y₂ with means ȳ_i := ⟨y_i⟩ is defined as

    c := ⟨(y₁ − ȳ₁)(y₂ − ȳ₂)⟩ / √( ⟨(y₁ − ȳ₁)²⟩ ⟨(y₂ − ȳ₂)²⟩ ).    (1)
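Exercise 2.18.1 can be explored numerically as well: after sphering, the sample covariance matrix is the identity, so projections onto any two orthogonal directions come out uncorrelated. A minimal NumPy sketch with made-up data; the sphering recipe used here (rotating onto the eigenvectors of the covariance matrix and rescaling by λ^(−1/2)) is one standard variant:

```python
import numpy as np

rng = np.random.default_rng(0)

# made-up zero-mean data in R^2 with correlated components
X = rng.normal(size=(2000, 2)) @ np.array([[2.0, 0.0], [1.2, 0.4]])
X -= X.mean(axis=0)

# sphering: rotate onto the eigenvectors of the covariance matrix
# and rescale each component by 1/sqrt(eigenvalue)
Sigma = X.T @ X / len(X)
lam, U = np.linalg.eigh(Sigma)
X_hat = (X @ U) / np.sqrt(lam)

# the covariance of the sphered data is the identity ...
print(np.allclose(X_hat.T @ X_hat / len(X_hat), np.eye(2)))

# ... so projections onto two orthogonal unit vectors are uncorrelated
n1 = np.array([np.cos(0.7), np.sin(0.7)])
n2 = np.array([-np.sin(0.7), np.cos(0.7)])
print(np.isclose(np.mean((X_hat @ n1) * (X_hat @ n2)), 0.0))
```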
2.19 Singular value decomposition

3 Application

3.1 Face processing

4 Acknowledgment