Handling missing values in Multiple Factor Analysis


1 Handling missing values in Multiple Factor Analysis. François Husson, Julie Josse. Applied Mathematics Department, Agrocampus Ouest, Rennes, France. Rabat, 26 March.

2 Multi-block data set: groups of variables (MFA)
Groups of variables are quantitative and/or qualitative (continuous and/or categorical, or contingency tables).
Objectives:
- study the links between the sets of variables
- balance the influence of each group of variables
- give the classical graphs but also specific graphs: groups of variables, partial representations
Examples:
- genomics: DNA, protein data
- sensory analysis: sensory and physico-chemical descriptions of products; comparison of panels (one group = one country)
- comparison of codings (quantitative/qualitative)
- products assessed by judges (napping, flash profile, etc.); comparison of collection methods (sorting task/QDA)
Similarities and specificities of each group? Multiple Factor Analysis (Escofier & Pagès, 1998)

3 Missing values in multi-block data sets
Different patterns of missing values: scattered or structured.
Sensory analysis: each judge can't assess more than a certain number of products (saturation), hence an experimental design (BIB).
=> MFA with missing values

4 Missing values in MFA
MFA balances the influence of the groups: $X = \left[ \frac{X_1}{\sqrt{\lambda_1^1}} ; \frac{X_2}{\sqrt{\lambda_1^2}} ; \dots ; \frac{X_K}{\sqrt{\lambda_1^K}} \right]$, where $\lambda_1^k$ is the first eigenvalue of the separate PCA of group $X_k$.
Complete case: SVD (PCA) on $X$, i.e. minimize $\| X_{n \times p} - U_{n \times S} \Lambda_S^{1/2} V_{p \times S}' \|^2$.
With missing values: weighted least squares $\| W_{n \times p} \odot (X_{n \times p} - U_{n \times S} \Lambda_S^{1/2} V_{p \times S}') \|^2$, with $w_{ij} = 0$ if $x_{ij}$ is missing and $w_{ij} = 1$ otherwise.
Algorithms: weighted alternating least squares; iterative imputation.
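To make the weighted criterion concrete, here is a minimal R sketch (not the authors' code): the 0/1 weight matrix is derived from the missing-data pattern, and `X` and `Xhat` are hypothetical names for the group-weighted data matrix and a candidate rank-$S$ reconstruction.

```r
# Weighted least-squares criterion ||W * (X - Xhat)||^2:
# cells with w_ij = 0 (missing x_ij) do not contribute to the fit.
weighted_ls <- function(X, Xhat) {
  W  <- ifelse(is.na(X), 0, 1)  # w_ij = 0 if x_ij is missing, 1 otherwise
  X0 <- ifelse(is.na(X), 0, X)  # placeholder value, cancelled by the zero weight
  sum((W * (X0 - Xhat))^2)
}
```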

5 Iterative MFA
1. Initialization $\ell = 0$: $X^0$ (mean imputation)
2. Steps of estimation and imputation are repeated:
(a) SVD on the completed data gives $(U^\ell, \Lambda^\ell, V^\ell)$; $S$ dimensions kept
(b) $X^\ell = W \odot X + (1 - W) \odot \hat X^\ell$, with $\hat X^\ell = U^\ell (\Lambda^\ell)^{1/2} (V^\ell)'$
(c) means, standard deviations and eigenvalues are updated
Number of dimensions $S$: cross-validation (Bro, 2008; Josse & Husson, 2011).
The same rationale applies in other multi-block methods: GPA (Commandeur, 1991), PARAFAC (Tomasi & Bro, 2005), GCCA (Van de Velden & Bijmolt, 2006).
Overfitting problems: many parameters relative to the number of observed values ($S$ large, many NAs); data that are very noisy.
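As an illustration, a minimal R sketch of this estimate/impute loop for the plain PCA core on quantitative data; the MFA-specific reweighting of each group by its first singular value is omitted, and the function name, iteration cap and tolerance are placeholders.

```r
# Iterative imputation: mean-impute, then alternate SVD fit and imputation.
iterative_impute <- function(X, S, maxit = 1000, tol = 1e-8) {
  miss <- is.na(X)
  Xl <- X
  Xl[miss] <- colMeans(X, na.rm = TRUE)[col(X)[miss]]  # l = 0: mean imputation
  for (l in seq_len(maxit)) {
    mu <- colMeans(Xl)                                 # (c) means updated each pass
    sv <- svd(sweep(Xl, 2, mu))                        # (a) SVD of the completed data
    Xhat <- sv$u[, 1:S, drop = FALSE] %*% diag(sv$d[1:S], S, S) %*%
            t(sv$v[, 1:S, drop = FALSE])
    Xhat <- sweep(Xhat, 2, mu, "+")
    Xnew <- ifelse(miss, Xhat, X)                      # (b) impute missing cells only
    if (sum((Xnew - Xl)^2) < tol) return(Xnew)         # fixed point reached
    Xl <- Xnew
  }
  Xl
}
```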

9 Regularized iterative MFA (Husson & Josse, 2013)
Initialization, estimation step, imputation step as above. The imputation step $\hat x_{ij}^{\text{MFA}} = \sum_{s=1}^S \sqrt{\lambda_s}\, u_{is} v_{js}$ is replaced by a "shrunk" imputation step:
$\hat x_{ij}^{\text{rMFA}} = \sum_{s=1}^S \left( \sqrt{\lambda_s} - \frac{\hat\sigma^2}{\sqrt{\lambda_s}} \right) u_{is} v_{js}$
A compromise between hard and soft thresholding (Mazumder, Hastie & Tibshirani, 2010):
- $\hat\sigma^2$ small: regularized MFA close to MFA
- $\hat\sigma^2$ large: "kills" the dimensions with small eigenvalues
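A sketch of the shrunk reconstruction in R, taking $\lambda_s$ as the $s$-th squared singular value of the centered completed matrix; estimating $\hat\sigma^2$ as the mean of the discarded eigenvalues is an assumption here, and the paper's exact noise estimator may differ.

```r
# Shrunk rank-S fit: each singular value sqrt(lambda_s) is replaced by
# sqrt(lambda_s) - sigma2 / sqrt(lambda_s), truncated at zero.
shrunk_fit <- function(Xc, S) {          # Xc: centered completed matrix
  sv     <- svd(Xc)
  lambda <- sv$d^2                       # eigenvalues of t(Xc) %*% Xc
  sigma2 <- mean(lambda[-(1:S)])         # noise variance from discarded dims (assumption)
  d_shr  <- pmax(sqrt(lambda[1:S]) - sigma2 / sqrt(lambda[1:S]), 0)
  sv$u[, 1:S, drop = FALSE] %*% diag(d_shr, S, S) %*% t(sv$v[, 1:S, drop = FALSE])
}
```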

10 Example on a wine data set
21 wines described by two groups of variables (olfaction/gustation); 6 rows missing for olfaction.
[Figure: true configuration with partial points (Olfaction, Tasting) for each wine; Dim 1 (65.39 %), Dim 2 (13.22 %)]

11 Example on a wine data set
21 wines described by two groups of variables (olfaction/gustation); 6 rows missing for olfaction.
[Figure: true configuration with partial points (Dim 1: 65.39 %, Dim 2: 13.22 %) alongside the mean-imputation configuration with partial points (Dim 2: 22.13 %)]

12 Example on a wine data set
21 wines described by two groups of variables (olfaction/gustation); 6 rows missing for olfaction.
[Figure: true configuration with partial points (Dim 1: 65.39 %, Dim 2: 13.22 %) alongside the iterative MFA configuration with partial points (Dim 1: 71.74 %, Dim 2: 15.29 %)]

13 Simulations on a napping data set
99 judges, 12 perfumes. Subsets of 6 to 11 products per judge: experimental design.
Criterion: RV coefficient between the "true" MFA product coordinates (12 x 2) and the regularized iterative MFA ones (12 x 2).
[Figure: RV coefficient for iterative MFA and for mean imputation versus the number of products seen by each panellist, for 12, 25, 50, 75 and 99 consumers; panels for 75 judges with 8 products and 99 judges with 12 products]

14 Simulations on the napping data set
[Table: RV coefficient by number of panellists and number of products assessed]
- Better to have 25 judges assessing all the products than 50 assessing half of the products.
- Better to have more judges (50) assessing fewer products (9) than a small number of judges (25) assessing all the products (12).

15 Conclusion
Multi-block methods with missing values (regularized algorithm).
Evaluation of n products with each judge assessing (n - k) products.
R package missMDA: missing values in principal component methods (PCA, MCA, PCA for mixed data, MFA, multilevel methods); single imputation for continuous, categorical and mixed data; single imputation is a first step towards multiple imputation.
Videos on YouTube! FactoMineR package.
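In practice, the workflow for the wine example might look like the sketch below; the data object, group sizes and group names are illustrative, and argument names should be checked against the missMDA and FactoMineR documentation for your version.

```r
library(missMDA)    # imputation for principal component methods
library(FactoMineR) # MFA itself

# wine: quantitative data with NAs, with (hypothetically) 5 olfaction
# variables followed by 3 gustation variables.
imp <- imputeMFA(wine, group = c(5, 3), ncp = 2,
                 type = c("s", "s"), method = "Regularized")
res <- MFA(imp$completeObs, group = c(5, 3), type = c("s", "s"),
           name.group = c("Olfaction", "Gustation"))
```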
