Introduction to Multivariate Statistical Analysis (Week 1: Outline, PCA, Challenge). Hung Chen


1 Introduction. Multivariate Statistical Analysis. Hung Chen, Department of Mathematics, Old Math

2 Outline
Week 1
PCA: multivariate normal distribution, typical heads, Principal Components Analysis, some matrix language, interpretation, applications
Challenge

3 Objective. Build knowledge of the key elements in the statistical analysis of multidimensional data: develop the theory and methods, and apply them to real data sets.

4 Course Outline (built one topic at a time over slides 4 to 15)
Introduction
Example 1.2: PCA
Matrix Algebra
Example 1.3: Classification
Random Vectors and the Multivariate Normal Distribution
Statistical Inference for Multivariate Distributions
Principal Component Analysis (PCA)
Factor Analysis (FA)
Discriminant Analysis (DA), also called classification analysis
Cluster Analysis (CA)
Multivariate Analysis of Variance (MANOVA)
Canonical Correlation Analysis (CCA) and Multivariate Regression Analysis (MRA)

16 References and Grading
Textbook: Johnson, R.A. and Wichern, D.W. (2007). Applied Multivariate Statistical Analysis. Pearson Prentice Hall.
Flury, B. (1997). A First Course in Multivariate Statistics. Springer.
Srivastava, M.S. (2002). Methods of Multivariate Statistics. Wiley.
Grading scheme: Homework (30%), Quiz (10%), Midterm (30%), Final (30%)

17 Office Hours
Instructor's office hours: Tuesday 10:00-11:00; Friday 10:10-11:10.
TA: r @ntu.edu.tw (grades homework and provides homework solutions).
Class meetings: Wednesday 8:10 to 10:00 and Friday 9:10 to 10:00 at 403 Freshman Building.

18 Outline
Introduction: explore interrelationships among multiple random variables. Old stuff: regression and ANOVA.
Example 1.2: Principal Components Analysis
Example 1.1: Classification of Midge. Discriminant Analysis (DA, or classification analysis); Logistic Regression
Example 1.3: Wing Length of Water Pipits. Discriminant Analysis (DA, or classification analysis); Logistic Regression; Normal Mixtures
Issues on computation

19 Introduction
Multivariate refers to the presence of multiple random variables. Multivariate data are data thought of as realizations of several random variables. Multivariate analysis can be defined broadly as an inquiry into the structure of interrelationships among multiple random variables.
Regression. Simple: explain variability in Y from X. Multiple: explain variability in Y from $X_1, X_2, \dots, X_r$.
ANOVA. One-way: explain variability in Y based on X, where X is a factor.

20 Introduction (cont.)
The above two examples are not usually considered multivariate analysis. Why? We distinguish between the response variable (Y) and the explanatory variables (the X's), and in the analysis we treat the X's as fixed, so there is only one random variable in each situation. (Note that multiple regression is not the same as multivariate regression.)
One type of analysis that is a good example of a multivariate method is correlation analysis. Correlation analysis measures the linear association between X and Y without distinguishing response versus explanatory.

21 Data
Setting: we have n objects, often denoted by $O_1, \dots, O_n$, and the variables associated with the j-th object are denoted by $x_j = (x_{j1}, \dots, x_{jk}, \dots, x_{jp})$. Then
$$X_{n \times p} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1p} \\ x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & & & \vdots \\ x_{j1} & x_{j2} & \cdots & x_{jp} \\ \vdots & & & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{np} \end{pmatrix}.$$

22 Multivariate normal distribution. Example 1.2 (Flury): head dimensions of 200 young men
n = 200. For the first object the measurements are (113.2, 111.7, 119.6, 53.9, 127.4, 143.6), so p = 6 and the first row is $(x_{11}, x_{12}, \dots, x_{1p})$.
Can we use a multivariate normal distribution to describe those six measurements?
QQ plot: check the distributional assumption.
Mahalanobis distance: formally, the Mahalanobis distance of a multivariate vector $x = (x_1, x_2, \dots, x_p)^T$ from a group of values with mean $\mu = (\mu_1, \mu_2, \dots, \mu_p)^T$ and covariance matrix $S$ is defined as $\left[(x - \mu)^T S^{-1} (x - \mu)\right]^{1/2}$.
Under multivariate normality, the squared distances follow approximately a $\chi^2_6$ distribution.
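A minimal R sketch of this check, using simulated placeholder data in place of the head measurements (the matrix X below is not the Flury data): compute squared Mahalanobis distances and compare them with chi-square(6) quantiles.

```r
set.seed(1)
X <- matrix(rnorm(200 * 6), nrow = 200)   # placeholder for the 200 x 6 head data

xbar <- colMeans(X)                       # sample mean vector
S    <- cov(X)                            # sample covariance matrix
d2   <- mahalanobis(X, center = xbar, cov = S)   # squared Mahalanobis distances

## Under multivariate normality d2 is approximately chi-square with 6 df,
## so this QQ plot should follow the 45-degree line.
qqplot(qchisq(ppoints(length(d2)), df = 6), d2,
       xlab = "Chi-square(6) quantiles", ylab = "Squared Mahalanobis distance")
abline(0, 1)
```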

23 Multivariate normal distribution. Example 1.2: characteristics of the head dimension data
Source: Flury, B.D. and Riedwyl, H. (1988). Multivariate Statistics: A Practical Approach. London: Chapman and Hall.
These data concern head measurements of members of the Swiss Army. The purpose of the project was to give an empirical basis to the construction of new gas masks. After completing the data collection, the investigators were to determine k typical heads on which to model the new masks, where k is a number between two and six. What are typical heads?

24 Multivariate normal distribution: central tendency
If k = 1, what will be your choice of typical head? How do we measure central tendency? Define a loss function (consider the one-dimensional problem first):
$L_1$ norm: $\min_c E|X - c|$
$L_2$ norm: $\min_c E(X - c)^2$
How do you convince others that $k \ge 2$ is necessary? How do we measure spread? Variance versus probability.
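A small numerical illustration of these two loss functions (the exponential sample below is purely hypothetical): the empirical $L_2$ risk is minimized near the sample mean and the empirical $L_1$ risk near the sample median.

```r
set.seed(2)
x <- rexp(1000)                             # a skewed hypothetical sample

l2 <- function(m) mean((x - m)^2)           # empirical L2 risk, E(X - c)^2
l1 <- function(m) mean(abs(x - m))          # empirical L1 risk, E|X - c|

optimize(l2, interval = range(x))$minimum   # close to mean(x)
optimize(l1, interval = range(x))$minimum   # close to median(x)
c(mean = mean(x), median = median(x))
```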

25 Typical heads: measuring variability of multivariate data
The multivariate normal distribution is determined by its mean vector and covariance matrix only.
p sample means: $\bar{x}_k = n^{-1} \sum_{j=1}^n x_{jk}$
p sample variances: $s_{kk} = s_k^2 = n^{-1} \sum_{j=1}^n (x_{jk} - \bar{x}_k)^2$
p(p-1)/2 sample covariances: $s_{ik} = \frac{1}{n} \sum_{j=1}^n (x_{ji} - \bar{x}_i)(x_{jk} - \bar{x}_k)$, $1 \le i, k \le p$.
Sample covariance matrix: a $p \times p$ matrix, symmetric and non-negative definite.
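A short R sketch of these summary statistics, again on simulated placeholder data. Note that cov() and cor() in R use the 1/(n-1) divisor, so the covariance is rescaled below to match the 1/n convention of the slide.

```r
set.seed(3)
X <- matrix(rnorm(200 * 6), nrow = 200)    # placeholder for the 200 x 6 head data

n    <- nrow(X)
xbar <- colMeans(X)                        # the p sample means
S    <- cov(X) * (n - 1) / n               # covariance with the 1/n convention above
R    <- cor(X)                             # the sample correlations r_ik

min(eigen(S, symmetric = TRUE)$values)     # >= 0: S is non-negative definite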

26 Typical heads (cont.)
Sample correlation: $r_{ik} = \dfrac{s_{ik}}{\sqrt{s_{ii}\, s_{kk}}}$, $1 \le i, k \le p$.
Remarks:
1. $-1 \le r_{ik} \le 1$
2. $r_{ik}$ measures the strength of linear association
3. $r_{ik}$ is scale invariant
We would probably want the typical heads to represent the dominant directions of variability in the data. Draw a two-dimensional contour plot.

27 Typical heads. Example 1.2: Principal Components Analysis, Swiss heads data
Six readings on the dimensions of the heads of 200 twenty-year-old soldiers: a data set with 200 observations on the following 6 variables.
MFB: minimum frontal breadth
BAM: breadth of angulus mandibulae
TFH: true facial height
LGAN: length from glabella to apex nasi
LTN: length from tragion to nasion
LTG: length from tragion to gnathion

28 Typical heads (cont.)
PURPOSE: study the variability in the size and shape of young men's heads in order to help design a new protection mask.
Data: search for "flury-package" (The R Project for Statistical Computing, cran.r-project.org), then data(swiss.heads).
How do we describe a distribution on six-dimensional random vectors?

29 Principal Components Analysis. Example 1.2: summary statistics
Low-dimensional information: histograms and pairwise scatterplots, e.g. pairs(swiss.heads) and apply(swiss.heads,2,hist); means, standard deviations, and correlations.
Why do we pay attention to the relationship between LTG and LTN? plot(swiss.heads$ltg, swiss.heads$ltn)
Objective of Principal Component Analysis (PCA): look for a few linear combinations that can be used to summarize the data while losing as little information as possible. Parsimonious summarization.
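A possible R session for this slide, assuming the Flury data package mentioned above is installed and that swiss.heads carries the six columns listed earlier; prcomp() is added as a preview of the principal components computed later in the course.

```r
## Assumes the Flury data package (providing swiss.heads) is installed.
library(Flury)
data(swiss.heads)

pairs(swiss.heads)                 # pairwise scatterplots
apply(swiss.heads, 2, hist)        # marginal histograms
round(cor(swiss.heads), 2)         # correlations; the slide highlights LTG and LTN

pc <- prcomp(swiss.heads)          # principal components on the covariance scale
summary(pc)                        # proportion of variance explained per component
```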

30 Principal Components Analysis: projection and orthogonal least squares
How do we carry it out? Play with the coordinate system by shifting it around and rotating it. If we are lucky, we may find some rotated version of the coordinate system in which the data exhibit no, or almost no, variability in some of the coordinate directions. Then we might claim that the data are well represented by fewer than p coordinate directions, and thus approximate the p-dimensional data in a subspace of lower dimension.

31 Principal Components Analysis: population version
Let X denote a p-variate random vector with $EX = \mu$ and $\mathrm{Cov}(X) = \Sigma$, and let Y denote an orthogonal projection of X onto a line in $\mathbb{R}^p$, i.e., $Y = x_0 + b b^T (X - x_0)$ for some point $x_0 \in \mathbb{R}^p$ and some $b \in \mathbb{R}^p$ with $\|b\| = 1$. Then $\mathrm{MSE}(Y; X)$ is minimal for $Y = Y^{(1)} = \mu + \beta_1 \beta_1^T (X - \mu)$, where $\beta_1$ is a normalized eigenvector associated with the largest eigenvalue $\lambda_1$ of $\Sigma$, and $\mathrm{MSE}(Y^{(1)}; X) = \mathrm{tr}(\Sigma) - \lambda_1$.
How do we determine $x_0$ and $b$ so as to minimize $\mathrm{MSE}(Y; X)$?

32 Principal Components Analysis (cont.)
Write $P = b b^T$ (a projection matrix). Then
$$\mathrm{MSE}(Y; X) = E\big[\|Y - X\|^2\big] = E\big[\|(I_p - P)(X - x_0)\|^2\big] \ge E\big[\|(I_p - P)(X - \mu)\|^2\big] = \mathrm{tr}\{\mathrm{Cov}[(I_p - P)X]\} = \mathrm{tr}[(I_p - P)\Sigma(I_p - P)] = \mathrm{tr}[(I_p - P)^2 \Sigma] = \mathrm{tr}(\Sigma) - \mathrm{tr}(P\Sigma).$$
How do we maximize $\mathrm{tr}(P\Sigma)$? Note that $\mathrm{tr}(P\Sigma) = \mathrm{tr}(b b^T \Sigma) = b^T \Sigma b$, so we have to maximize $\mathrm{Var}(b^T X)$ over all b of unit length.

33 Principal Components Analysis: constrained optimization
Let $h(b) = b^T \Sigma b$ and $h^*(b) = h(b) - \lambda(b^T b - 1)$, where $\lambda$ is a Lagrange multiplier. The method of Lagrange multipliers leads to
$$\frac{\partial h^*}{\partial b} = 2\Sigma b - 2\lambda b = 0,$$
an eigenvalue-eigenvector problem: $\Sigma b = \lambda b$. The critical points are given by the eigenvectors $\beta_i$ and associated eigenvalues $\lambda_i$ of $\Sigma$.

34 Principal Components Analysis (cont.)
Since $\mathrm{Var}(\beta_i^T X) = \beta_i^T \Sigma \beta_i = \lambda_i \beta_i^T \beta_i = \lambda_i$, the variance of $b^T X$ is maximized by choosing $b = \beta_1$, a normalized eigenvector associated with the largest eigenvalue. We conclude that $\mathrm{MSE}(Y^{(1)}; X) = \mathrm{tr}(\Sigma) - \lambda_1$.
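A numerical check of this result in R, with an arbitrary illustrative covariance matrix Sigma (not related to the head data): the variance of $\beta_1^T X$ equals $\lambda_1$, and $\mathrm{tr}(\Sigma) - \lambda_1$ is the minimal MSE.

```r
A     <- matrix(c(2, 1, 0, 1, 3, 1, 0, 1, 1), 3, 3)
Sigma <- crossprod(A)                  # an arbitrary positive definite matrix

e      <- eigen(Sigma, symmetric = TRUE)
lambda <- e$values
beta1  <- e$vectors[, 1]               # normalized eigenvector, largest eigenvalue

drop(t(beta1) %*% Sigma %*% beta1)     # equals lambda[1]
sum(diag(Sigma)) - lambda[1]           # minimal MSE of the best rank-one projection
```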

35 Principal Components Analysis: data version
Pearson's original approach: Pearson, K. (1901). On Lines and Planes of Closest Fit to Systems of Points in Space. Philosophical Magazine, Series 6, 2, 559-572.
Let $x_1, \dots, x_n$ denote n data points in $\mathbb{R}^p$. Let
$$\bar{x} = \frac{1}{n}\sum_i x_i, \qquad S = \frac{1}{n}\sum_i (x_i - \bar{x})(x_i - \bar{x})^T.$$
Let $y_i$ denote the orthogonal projections of the $x_i$ on a straight line. Find the solution in terms of the empirical cdf.
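A two-dimensional sketch of Pearson's orthogonal least-squares idea on simulated data (the sample below is hypothetical): the line of closest fit passes through the sample mean and points along the first eigenvector of S.

```r
set.seed(6)
x1 <- rnorm(100)
x2 <- 0.8 * x1 + rnorm(100, sd = 0.4)
X  <- cbind(x1, x2)

xbar <- colMeans(X)
S    <- cov(X) * (nrow(X) - 1) / nrow(X)     # 1/n convention, as above
b1   <- eigen(S, symmetric = TRUE)$vectors[, 1]

plot(X, asp = 1)                             # asp = 1 keeps orthogonality visible
abline(a = xbar[2] - (b1[2] / b1[1]) * xbar[1],
       b = b1[2] / b1[1], col = "red")       # line of closest fit through the mean
```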

36 Some matrix language
Principal Components Analysis: reduce the dimension of a data set by examining a small number of linear combinations of the original variables that explain most of the variability among the original variables.
Rotations and orthogonal projections: a rotation matrix is a real square matrix whose transpose is its inverse and whose determinant is 1. (Geometrically it corresponds to a linear map that sends each vector to the corresponding vector rotated about the origin by a fixed angle.) A $p \times p$ matrix $\Gamma$ is orthogonal if $\Gamma^T \Gamma = \Gamma \Gamma^T = I_p$. In Euclidean geometry, a rotation is an example of an isometry, a transformation that moves points without changing the distances between them.
Think of the data matrix X as the set of n points in p-dimensional space.

37 Some matrix language (cont.)
For an orthogonal matrix $\Gamma$, what does the set of points $W = X\Gamma$ look like? It looks exactly like X, but rotated or flipped. In particular, $W W^T = X \Gamma \Gamma^T X^T = X X^T$, so each point remains the same distance from 0. The distance between any two points in X is the same as the distance between the corresponding points in W: the distance between points b and c is $\|b - c\|$, so if $x_i$ is the i-th row of X and $w_i$ the i-th row of W, then $\|x_i - x_j\| = \|w_i - w_j\|$.
What is the objective of the rotation of point clouds described here?
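A quick R check of this distance-preserving property, using a random orthogonal matrix obtained from a QR decomposition (the point cloud is simulated).

```r
set.seed(7)
X     <- matrix(rnorm(10 * 3), nrow = 10)         # 10 points in R^3
Gamma <- qr.Q(qr(matrix(rnorm(9), 3, 3)))         # a random 3 x 3 orthogonal matrix
W     <- X %*% Gamma

max(abs(dist(X) - dist(W)))                       # pairwise distances unchanged
max(abs(Gamma %*% t(Gamma) - diag(3)))            # Gamma Gamma' = I_3
```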

38 Interpretation (cont.)
Principal components are the orthonormal combinations that maximize the variance. The idea behind them is that variation is information, so given several variables, one wishes to find the linear combinations that capture as much of the variation in the data as possible. For p variables there are p principal components: the first has the maximal variance any one linear combination (with norm 1) can have, the first two have the maximal total variance any two linear combinations can have, and so on.
For an $n \times p$ data matrix X, the first principal component is the $p \times 1$ vector $g_1$ with $\|g_1\| = 1$ that maximizes the sample variance of $Xg$ over $\|g\| = 1$. For a given linear combination a, the mean and variance of the elements in the vector $Z = Xa$ are easily obtained from the mean and covariance matrix of X:

39 Interpretation (cont.)
$$\bar{z} = \frac{1}{n}\mathbf{1}_n^T Z = \frac{1}{n}\mathbf{1}_n^T X a = \bar{x}^T a, \qquad \mathbf{1}_n = (1, 1, \dots, 1)^T,$$
$$X - \mathbf{1}_n \bar{x}^T = \Big(I_n - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T\Big) X = H_n X,$$
$$s_{zz} = \frac{1}{n} Z^T H_n Z = \frac{1}{n} a^T X^T H_n X a = a^T S a,$$
where S is the covariance matrix of X and $H_n = I_n - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T$ is the centering matrix, with diagonal entries $1 - 1/n$ and off-diagonal entries $-1/n$.
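A small R verification of these identities on simulated data: the centering matrix H_n removes the column means, and the variance of Z = Xa equals $a^T S a$.

```r
set.seed(8)
n <- 50; p <- 4
X <- matrix(rnorm(n * p), n, p)
a <- rnorm(p)

H <- diag(n) - matrix(1 / n, n, n)      # H_n = I_n - (1/n) 1_n 1_n'
round(colMeans(H %*% X), 12)            # centering: column means become 0

S <- (1 / n) * t(X) %*% H %*% X         # covariance matrix with the 1/n divisor
Z <- X %*% a
c((1 / n) * t(Z) %*% H %*% Z,           # s_zz computed directly
  t(a) %*% S %*% a)                     # equals a' S a
```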

40 Interpretation (cont.)
The first principal component is then the $g_1$ that maximizes $g^T S g$ over $\|g\| = 1$. The maximum does exist, but it may not be unique.

41 Applications: PCA in computer vision
Feature extraction and data compression are closely related problems that can be attacked using PCA.
Reference (facial recognition): Zhang, J. et al. Face Recognition: Eigenface, Elastic Matching, and Neural Nets. Proceedings of the IEEE, Vol. 85, No. 9, September 1997.
Representation: a square N by N image can be expressed as an $N^2$-dimensional vector in which the rows of pixels are placed one after the other to form a one-dimensional vector. For example, the first N elements are the first row of the image, the next N elements are the next row, and so on. The values in the vector are the intensity values of the image, possibly a single greyscale value.

42 Applications (cont.): using PCA to find patterns
Say we have 20 images, each N pixels high by N pixels wide. Put all the images together in one big $20 \times N^2$ image matrix and perform PCA on this matrix.
Suppose we want to do facial recognition and our original images were of people's faces. Then the problem is: given a new image, whose face from the original set is it? (Note that the new image is not one of the 20 we started with.) In computer vision with PCA, we measure the difference between the new image and the original images, not along the original axes but along the new axes derived from the PCA.
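A toy eigenface-style sketch of this idea. The "images" below are random matrices standing in for real face images, and the names (faces, new_img, scores) are hypothetical; the point is only that matching is done along the PCA axes.

```r
set.seed(9)
N     <- 8
faces <- matrix(rnorm(20 * N^2), nrow = 20)       # 20 hypothetical vectorized images

pc     <- prcomp(faces)                           # PCA on the image matrix
k      <- 5                                       # keep a few leading components
scores <- pc$x[, 1:k]                             # training images in PCA coordinates

new_img   <- rnorm(N^2)                           # a hypothetical new image
new_score <- drop(t(pc$rotation[, 1:k]) %*% (new_img - pc$center))

## Nearest neighbour along the PCA axes gives the claimed identity
which.min(colSums((t(scores) - new_score)^2))
```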

43 Applications: PCA for image compression
m-bit data are converted to n-bit data, where n < m. Using PCA for image compression is also known as the Hotelling, or Karhunen-Loève (KL), transform. If we have 20 images, each with $N^2$ pixels, we can form $N^2$ vectors, each with 20 dimensions: each vector consists of the intensity values of the same pixel across the pictures. This is different from the previous example because before we had one vector per image, and each item in that vector was a different pixel, whereas now we have one vector per pixel, and each item in the vector comes from a different image.

44 Applications (cont.)
Perform PCA on this set of data and get 20 eigenvectors, because each vector is 20-dimensional. To compress the data, we can then choose to transform the data using only, say, 15 of the eigenvectors. This gives a final data set with only 15 dimensions, which saves 1/4 of the space. However, when the original data are reproduced, the images have lost some information.
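A sketch of this compression step in R, with a hypothetical $N^2 \times 20$ matrix of pixel vectors (here $N^2 = 64$) standing in for the real images.

```r
set.seed(10)
pixels <- matrix(rnorm(64 * 20), nrow = 64)   # hypothetical N^2 x 20 pixel vectors

pc <- prcomp(pixels)
k  <- 15                                      # keep 15 of the 20 eigenvectors
recon <- pc$x[, 1:k] %*% t(pc$rotation[, 1:k])        # rank-15 reconstruction
recon <- sweep(recon, 2, pc$center, "+")              # undo the centering

dim(pc$x[, 1:k])                  # stored scores: 64 x 15 instead of 64 x 20
mean((pixels - recon)^2)          # information lost with the dropped components
```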

45 Applications: Karhunen-Loève (KL) transform
The Wiener process $W_t$ is characterized by three facts:
1. $W_0 = 0$
2. $W_t$ is almost surely continuous
3. $W_t$ has independent increments with $W_t - W_s \sim N(0, t - s)$ for $0 \le s < t$.
Note that the covariance function is $\mathrm{Cov}(W_t, W_s) = \min(s, t)$. The eigenfunctions of the covariance kernel are
$$e_k(t) = \sqrt{2}\,\sin\!\Big(\big(k - \tfrac{1}{2}\big)\pi t\Big)$$
and the corresponding eigenvalues are
$$\lambda_k = \frac{4}{(2k - 1)^2 \pi^2}.$$

46 Applications (cont.)
This gives the following representation.
Theorem. There is a sequence $\{W_k\}$ of independent standard normal random variables such that
$$W_t = \sqrt{2}\,\sum_{k=1}^{\infty} W_k\, \frac{\sin\!\big(\big(k - \tfrac{1}{2}\big)\pi t\big)}{\big(k - \tfrac{1}{2}\big)\pi}.$$
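A short R simulation of this representation, truncating the series at K terms; the simulated path approximates Brownian motion on [0, 1], and the approximation improves as K grows.

```r
set.seed(11)
K     <- 500                                   # number of terms kept
tgrid <- seq(0, 1, length.out = 1000)
Z     <- rnorm(K)                              # independent standard normals
k     <- 1:K

W <- sapply(tgrid, function(s)
  sqrt(2) * sum(Z * sin((k - 0.5) * pi * s) / ((k - 0.5) * pi)))

plot(tgrid, W, type = "l", xlab = "t", ylab = "W_t")   # one approximate Wiener path
```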

47 Swiss bank notes
Six variables are measurements of the size of the bank notes:
length of bank note near the top
left-hand height of bank note
right-hand height of bank note
distance from bottom of bank note to beginning of patterned border
distance from top of bank note to beginning of patterned border
diagonal distance
There are 200 Swiss bank notes, 100 of which are genuine and 100 forged.
Some complications: some of the notes in either group may have been misclassified; forged notes may not form a homogeneous group; we expect higher variability for forged notes.

48 Issues with the Swiss bank notes
Outliers: some of the notes in either group may have been misclassified.
Forged notes may not form a homogeneous group: more than one forger may have been at work, and a single forger may have had short print runs, repeatedly moving premises in order to avoid detection.
Discriminant analysis versus cluster analysis.
Reference: Atkinson, Riani, and Cerioli, Exploring Multivariate Data with the Forward Search.
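A possible first analysis in R, assuming the Swiss bank notes data are available as the banknote data frame in the mclust package (a labelled copy of the Flury-Riedwyl measurements; this data source is an assumption, not part of the course notes).

```r
library(mclust)                    # assumed source of the 'banknote' data frame
data(banknote)

## Variances by group: forged notes are expected to vary more
by(banknote[, -1], banknote$Status, function(d) round(diag(cov(d)), 3))

## PCA on the six measurements; first two scores, coloured by group
pc <- prcomp(banknote[, -1], scale. = TRUE)
plot(pc$x[, 1:2], col = as.integer(factor(banknote$Status)))
```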
