Designing Information Devices and Systems II Fall 2015 Note 5

Lecture given by Babak Ayazifar (9/10). Notes by: Ankit Mathur.

Spectral Leakage Example

Compute the length-8 and the length-6 DFT of the following signal:

$$x(n) = e^{i\frac{\pi}{2}n}$$

Consider first the period of $x(n)$, which is clearly $P_x = \frac{2\pi}{\pi/2} = 4$.

First, the length-8 DFT, which should work out nicely, since we know that $\omega_0 = \frac{2\pi}{8} = \frac{\pi}{4}$. Since 8 is an integer multiple of $P_x$, there is no complication: the signal contains only a single frequency, and the DFT reflects it that way. The frequency $\frac{\pi}{2} = 2\omega_0$ lands exactly on the second harmonic, so $X_2 \neq 0$ and $X_k = 0$ for $k \neq 2$.

The more complicated case is the length-6 DFT, because $6 \bmod 4 \neq 0$: the window is not an integer number of periods of $x(n)$. The function does not complete a whole number of cycles in the window of interest, so it is cut off. Since the DFT assumes the signal is periodic with period $N$, the function reconstructed from the DFT is not an accurate representation of the original. An example of this effect was shown in lecture.

Let's walk through the example, noting that here $\omega_0 = \frac{2\pi}{6} = \frac{\pi}{3}$:

$$X_k = \sum_{n=0}^{5} x(n)\,e^{-ik\frac{\pi}{3}n} = \sum_{n=0}^{5} e^{i\frac{\pi}{2}n}\,e^{-ik\frac{\pi}{3}n}$$

A quick aside

We need an equation that lets us take a shortcut for geometric series; here is its derivation. Assume $\alpha \neq 1$ (the case $\alpha = 1$ is trivial, and the sum is just $B - A + 1$).

$$S = \sum_{n=A}^{B} \alpha^n = \alpha^A + \alpha^{A+1} + \cdots + \alpha^B$$

$$\alpha S = \alpha^{A+1} + \alpha^{A+2} + \cdots + \alpha^{B+1}$$

$$(\alpha - 1)S = \alpha^{B+1} - \alpha^A \quad\Rightarrow\quad S = \frac{\alpha^{B+1} - \alpha^A}{\alpha - 1}$$

$$\sum_{n=A}^{B} \alpha^n = \frac{\alpha^{B+1} - \alpha^A}{\alpha - 1}$$

We manipulate the $X_k$ formula so that we can apply the geometric series:

$$X_k = \sum_{n=0}^{5} \left(e^{i\left(\frac{\pi}{2} - k\frac{\pi}{3}\right)}\right)^n = \frac{e^{i\left(\frac{\pi}{2} - k\frac{\pi}{3}\right)\cdot 6} - 1}{e^{i\left(\frac{\pi}{2} - k\frac{\pi}{3}\right)} - 1} \neq 0 \quad \text{for any } k$$

However, if we use the synthesis equation

$$x(n) = \frac{1}{N}\sum_{k=0}^{N-1} X_k e^{ik\omega_0 n}$$

to reconstruct one period, we get a discontinuous, incorrect graph, because our assumed period is incorrect.
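The claim above is easy to sanity-check numerically. The following is a minimal sketch, not part of the original note; it assumes NumPy, whose `np.fft.fft` uses the same analysis convention $X_k = \sum_n x(n)e^{-ik\omega_0 n}$ as above. It computes the length-8 and length-6 DFTs of $x(n) = e^{i\pi n/2}$ and also checks the geometric-series closed form for the length-6 case.

```python
import numpy as np

# x(n) = e^{i (pi/2) n}
x = lambda n: np.exp(1j * np.pi / 2 * n)

# Length-8 DFT: 8 is an integer multiple of the period P_x = 4,
# so only the k = 2 bin is nonzero.
X8 = np.fft.fft(x(np.arange(8)))
print(np.round(np.abs(X8), 6))          # [0. 0. 8. 0. 0. 0. 0. 0.]

# Length-6 DFT: 6 is not a multiple of 4, so the energy leaks into every bin.
X6 = np.fft.fft(x(np.arange(6)))
print(np.round(np.abs(X6), 3))          # all six entries are nonzero

# Check the geometric-series closed form against the FFT for N = 6:
# X_k = (alpha^6 - 1) / (alpha - 1), with alpha = e^{i(pi/2 - k pi/3)}
k = np.arange(6)
alpha = np.exp(1j * (np.pi / 2 - k * np.pi / 3))
print(np.allclose(X6, (alpha**6 - 1) / (alpha - 1)))   # True
```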

Length-2 DFT Example

Consider a signal $x = \begin{bmatrix} 4 \\ 2 \end{bmatrix}$. Now consider $\Psi_0 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\Psi_1 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$. Clearly, we can construct $x$ as a linear combination of $\Psi_0$ and $\Psi_1$:

$$4\Psi_0 + 2\Psi_1 = 4\begin{bmatrix} 1 \\ 0 \end{bmatrix} + 2\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 4 \\ 2 \end{bmatrix}$$

Now, we move to a slightly more difficult version of this problem. Say we have two mutually orthogonal vectors that span the 2-dimensional space:

$$\Psi_0(n) = \frac{1}{\sqrt{2}} e^{i\cdot 0\cdot\pi n} = \frac{1}{\sqrt{2}}, \qquad \Psi_1(n) = \frac{1}{\sqrt{2}} e^{i\pi n} = \frac{1}{\sqrt{2}}(-1)^n$$

and, in general,

$$\Psi_k(n) = \frac{1}{\sqrt{N}} e^{ik\omega_0 n}$$

In lollipop plot form, $\Psi_0(n)$ looks like a graph with a constant lollipop of height $\frac{1}{\sqrt{2}}$ at every integer. $\Psi_1(n)$ looks like a graph with lollipops of magnitude $\frac{1}{\sqrt{2}}$ that alternate in sign: positive for even integers and negative for odd integers. Remember that, here, we are dealing with the set of 2-periodic functions, which is why we consider only $\Psi_0(n)$ and $\Psi_1(n)$. We can then conclude that

$$x(n) = \frac{6}{\sqrt{2}}\Psi_0(n) + \frac{2}{\sqrt{2}}\Psi_1(n)$$

This example shows that the DFT is simply a change of basis into a coordinate system that carries useful information about the frequency domain. In the coordinate system for 2-periodic functions (the one built from these two $\Psi$ functions), that change of basis tells us about the frequency content of $x(n)$.
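To make the change-of-basis view concrete, here is a small sketch, not from the original note, assuming NumPy: it projects $x = [4, 2]$ onto the orthonormal basis $\Psi_0, \Psi_1$ defined above and then reconstructs $x$ from the resulting coefficients.

```python
import numpy as np

N = 2
n = np.arange(N)
omega0 = 2 * np.pi / N          # = pi for N = 2

# Rows of Psi are the orthonormal basis vectors Psi_k(n) = e^{i k omega0 n} / sqrt(N).
Psi = np.array([np.exp(1j * k * omega0 * n) / np.sqrt(N) for k in range(N)])

x = np.array([4.0, 2.0])

# Analysis: the coefficients are inner products with the (conjugated) basis vectors.
X = Psi.conj() @ x
print(X)                        # approximately [6/sqrt(2), 2/sqrt(2)]

# Synthesis: x is recovered as X[0] * Psi_0 + X[1] * Psi_1.
x_rec = Psi.T @ X
print(np.allclose(x_rec, x))    # True
```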

BMI and PCA

We started with a diagram (shown in lecture) detailing the different levels at which signals can be recorded from a patient's brain. In this lecture, we discussed the data received from probes all the way down at the neuron level. The basic case is when you are receiving data from only one neuron, along with, of course, some noise. The more difficult case is when several neurons near the probe fire: the challenge is to figure out how each of them contributes to the overall signal being received, which is really a superposition of their individual contributions. Unfortunately, these neurons fire in roughly the same frequency range, so the techniques we used last semester, filters that cut out the frequencies we did not want (the ones that were too high or too low), cannot separate them.

This problem is formalized as something called "Spike Sorting" (http://www.scholarpedia.org/article/spike_sorting); the workflow diagram shown in lecture outlines the standard pipeline. The goal is to map the fuzzy plot in the third row of that diagram to a set of features that we can extract and attribute to specific neurons. We want to capture the directions that allow us to separate the clusters we care about. Usually, 2 or 3 principal directions capture what you want, and the method we use to do this kind of clustering is the singular value decomposition (SVD). Singular values, which play a role similar to eigenvalues, and their corresponding singular vectors give us the directions onto which we project our data; those projections tell us where the information in our recordings is coming from.
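As an illustration of that projection step, here is a minimal sketch, not from the note, assuming NumPy and a hypothetical matrix `spikes` whose rows are aligned spike snippets: it uses the SVD to project each snippet onto its top two principal directions, giving the 2-D feature space in which the clustering is typically done.

```python
import numpy as np

def spike_features(spikes: np.ndarray, num_directions: int = 2) -> np.ndarray:
    """Project mean-removed spike snippets onto their top principal directions.

    spikes: (m, n) array, one aligned spike waveform per row.
    Returns an (m, num_directions) array of features for clustering.
    """
    X = spikes - spikes.mean(axis=0)        # remove the mean waveform
    # Right singular vectors of X are the principal directions of the data.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:num_directions].T        # coordinates along those directions

# Toy usage: 3 noisy copies each of two different spike shapes.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 30)
shape_a, shape_b = np.sin(2 * np.pi * t), np.exp(-10 * (t - 0.3) ** 2)
spikes = np.vstack([shape_a + 0.05 * rng.standard_normal(30) for _ in range(3)]
                   + [shape_b + 0.05 * rng.standard_normal(30) for _ in range(3)])
print(spike_features(spikes).shape)         # (6, 2)
```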

Modeling Neuron Data

Consider a data set with $m$ spikes, each recorded at $n$ time sample points. We use the $m \times n$ matrix

$$A = \begin{bmatrix} \cdots & a_1^T & \cdots \\ \cdots & a_2^T & \cdots \\ & \vdots & \\ \cdots & a_m^T & \cdots \end{bmatrix}$$

to represent this data, where row $a_i^T$ holds the $n$ samples of the $i$-th spike. The following equation averages across the spikes:

$$\bar{a} = \frac{1}{m}\sum_{i=1}^{m} a_i$$

Then, using this, we can write the matrix $X_{m\times n}$, which has the mean removed from every row:

$$X_{m\times n} = \begin{bmatrix} \cdots & a_1^T - \bar{a}^T & \cdots \\ \cdots & a_2^T - \bar{a}^T & \cdots \\ & \vdots & \\ \cdots & a_m^T - \bar{a}^T & \cdots \end{bmatrix} = A - \mathbb{1}\bar{a}^T$$

where $\mathbb{1}$ is the length-$m$ vector of all ones. We are going to consider the covariance matrix

$$C_x = \frac{1}{m-1} X^T X$$

which will help us do some statistical analysis when we look for the singular values.

Consider an example where you have a mass oscillating on a spring while several cameras record it. We will be looking for the camera that tells us the most about the movement of the spring, that is, the camera that shows us the largest variation, which is where variance enters the picture. The multidimensional version of that story is embedded in the covariance matrix. We therefore want to determine a unit vector $q$ such that

$$\operatorname{var}(q) = q^T C_x q$$

is maximized. In the next lecture, we will see how to determine that $q$ from the eigenvalue-eigenvector decomposition of $C_x$.
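A compact numerical sketch of this construction follows; it is not from the note, assumes NumPy, and uses a random matrix as a stand-in for the spike data $A$. It forms the mean-removed matrix $X$, the covariance $C_x = \frac{1}{m-1}X^T X$, and then takes the eigenvector of $C_x$ with the largest eigenvalue as the unit vector $q$ maximizing $q^T C_x q$, which is the fact the next lecture derives.

```python
import numpy as np

def top_principal_direction(A: np.ndarray):
    """Return (q, variance) where q is the unit vector maximizing q^T C_x q."""
    m, n = A.shape
    a_bar = A.mean(axis=0)                      # average spike, length n
    X = A - np.ones((m, 1)) @ a_bar[None, :]    # X = A - 1 a_bar^T
    C_x = (X.T @ X) / (m - 1)                   # n x n covariance matrix
    # C_x is symmetric, so eigh returns real eigenvalues in ascending order;
    # the last eigenvector maximizes the quadratic form q^T C_x q over unit q.
    eigvals, eigvecs = np.linalg.eigh(C_x)
    return eigvecs[:, -1], eigvals[-1]

# Toy usage with a random data matrix standing in for the spike data A.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5)) @ np.diag([3.0, 1.0, 0.5, 0.2, 0.1])
q, var_q = top_principal_direction(A)
C_x = (A - A.mean(0)).T @ (A - A.mean(0)) / 19
print(np.isclose(np.linalg.norm(q), 1.0))       # True: q is a unit vector
print(np.isclose(q @ C_x @ q, var_q))           # True: var(q) = q^T C_x q
```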