Multivariate Analysis


Multivariate Analysis
Chapter 5: Cluster analysis

Pedro Galeano
Departamento de Estadística, Universidad Carlos III de Madrid
pedro.galeano@uc3m.es

Course 2015/2016. Master in Business Administration and Quantitative Methods and Master in Mathematical Engineering.

Chapter outline

1. Introduction
2. Proximity measures
3. Hierarchical clustering
4. Partition clustering
5. Model-based clustering

Introduction

The purpose of cluster analysis is to group the objects in a multivariate data set into homogeneous groups. This is done by grouping together individuals that are similar according to some appropriate criterion.

Once the clusters are obtained, it is generally useful to describe each group with descriptive tools, to better understand the differences that exist among the groups.

Cluster methods are also known as unsupervised classification methods. These differ from the supervised classification methods, or Classification Analysis, that will be presented in Chapter 7.

Introduction

Clustering techniques are applicable whenever a data set needs to be grouped into meaningful blocks.

In some applications we know that the data naturally fall into a certain number of groups, but in many cases the number of clusters is not known.

Some clustering methods require the user to specify the number of clusters before applying the method. This is not always easy and, unless additional information about the number of clusters exists, one typically explores several values and looks at the potential interpretation of each clustering result.

Introduction

We usually think of multivariate measurements as quantitative random variables, but attributes of objects such as color, shape or species are also relevant and should be integrated into the analysis as much as possible.

For some data, an additional variable that assigns a numerical value to color or species type might be appropriate.

If extra knowledge is available, it should inform our analysis and can guide the choice of the number of clusters.

Introduction

Central to some clustering approaches is the notion of proximity of two random vectors. We measure the degree of proximity of two multivariate observations by a distance measure.

Intuitively, one might think of the Euclidean distance between two vectors; this is typically the first, and also the most common, distance applied in Cluster Analysis.

We will also consider a number of other distance measures, and we will explore their effect on the resulting cluster structure.

Introduction

Some cluster procedures are based on mixtures of distributions. The underlying assumption of these models, namely that the data in the different parts come from certain distributions, is not easy to verify and may not hold.

However, these methods have been shown to be powerful under quite general circumstances.

Introduction

The strength of Cluster Analysis is its exploratory nature. As one varies the number of clusters, the distance measure or the mixture distributions, different cluster patterns appear.

These patterns might provide new insight into the structure of the data. Different cluster patterns can indicate the existence of unexpected substructures, which, in turn, can lead to further or more in-depth investigations of the data.

For this reason, where possible, the interpretation of a cluster analysis should involve a subject expert.

Introduction

There is a vast number of clustering procedures. Here, we will focus on:

Hierarchical clustering: starts with singleton clusters (individual observations) and merges clusters, or starts with a single cluster (the whole data set) and splits clusters.

Partition clustering: starts from a given group definition and proceeds by exchanging elements between groups until a certain criterion is optimized.

Model-based clustering: the random vectors are modeled by mixtures of distributions, and the parameters of the mixture distributions are estimated.

Proximity measures

Two of the clustering methods that we are going to present depend on the notion of proximity.

Proximities also play an important role in other multivariate techniques, such as multidimensional scaling, which will be presented in Chapter 6, and some of the methods for classification in Chapter 7.

We already know two distances between multivariate observations: the Euclidean distance and the Mahalanobis distance. Next, we present alternative distances.

Proximity measures

We begin with the definition of distance and then consider common distances.

Definition: A distance, $d$, between two multivariate random variables $x_i$ and $x_{i'}$, for $i, i' = 1, \ldots, n$, denoted by $d(x_i, x_{i'})$, is a random variable which satisfies:

1. $d(x_i, x_{i'}) \geq 0$, for all $i, i' = 1, \ldots, n$;
2. $d(x_i, x_{i'}) = 0$, if and only if $i = i'$; and
3. $d(x_i, x_{i'}) \leq d(x_i, x_{i''}) + d(x_{i''}, x_{i'})$, for all $i, i', i'' = 1, \ldots, n$.

Proximity measures

The two most common distances in Statistics are the Euclidean distance and the Mahalanobis distance.

The Euclidean distance, $d_E$, between $x_i$ and $x_{i'}$, for $i, i' = 1, \ldots, n$, is given by:

$$d_E(x_i, x_{i'}) = \left[ (x_i - x_{i'})' (x_i - x_{i'}) \right]^{1/2}$$

The Mahalanobis distance, $d_M$, between $x_i$ and $x_{i'}$, for $i, i' = 1, \ldots, n$, is given by:

$$d_M(x_i, x_{i'}) = \left[ (x_i - x_{i'})' \Sigma_x^{-1} (x_i - x_{i'}) \right]^{1/2}$$

where $\Sigma_x$ is the common covariance matrix of $x_i$ and $x_{i'}$.

Note that the Euclidean distance coincides with the Mahalanobis distance if $\Sigma_x = I_p$.

Proximity measures

The weighted $p$-distance, or Minkowski distance, $d_p$, between $x_i$ and $x_{i'}$, for $i, i' = 1, \ldots, n$, is given by:

$$d_p(x_i, x_{i'}) = \left( \sum_{j=1}^{p} \omega_j \left| x_{ij} - x_{i'j} \right|^p \right)^{1/p}$$

where $\omega_1, \ldots, \omega_p$ are positive weights.

If $p = 1$, $d_p$ is called the Manhattan distance. If, in addition, all weights are one, then $d_p$ is called the city block distance.

Proximity measures

The maximum distance, or Chebychev distance, $d_{\max}$, between $x_i$ and $x_{i'}$, for $i, i' = 1, \ldots, n$, is given by:

$$d_{\max}(x_i, x_{i'}) = \max_{j=1,\ldots,p} \left| x_{ij} - x_{i'j} \right|$$

The Canberra distance, $d_{Canb}$, between $x_i$ and $x_{i'}$, for $i, i' = 1, \ldots, n$, is given by:

$$d_{Canb}(x_i, x_{i'}) = \sum_{j=1}^{p} \frac{\left| x_{ij} - x_{i'j} \right|}{x_{ij} + x_{i'j}}$$

The Bhattacharyya distance, $d_{Bhat}$, between $x_i$ and $x_{i'}$, for $i, i' = 1, \ldots, n$, is given by:

$$d_{Bhat}(x_i, x_{i'}) = \sum_{j=1}^{p} \left( x_{ij}^{1/2} - x_{i'j}^{1/2} \right)^2$$

Proximity measures

The cosine distance, $d_{\cos}$, between $x_i$ and $x_{i'}$, for $i, i' = 1, \ldots, n$, is given by:

$$d_{\cos}(x_i, x_{i'}) = 1 - \cos(x_i, x_{i'})$$

where $\cos(x_i, x_{i'})$ is the cosine of the angle between the two random vectors, given by:

$$\cos(x_i, x_{i'}) = \frac{x_i' x_{i'}}{\left\| x_i \right\| \left\| x_{i'} \right\|}$$

and $\|\cdot\|$ denotes the Euclidean norm of a vector.

The correlation distance, $d_{cor}$, between $x_i$ and $x_{i'}$, for $i, i' = 1, \ldots, n$, is given by:

$$d_{cor}(x_i, x_{i'}) = 1 - \rho_{ii'}$$

where $\rho_{ii'}$ is the correlation coefficient between $x_i$ and $x_{i'}$.

Proximity measures

For binary random variables with entries 0 and 1, the Hamming distance, $d_{Hamm}$, between $x_i$ and $x_{i'}$, for $i, i' = 1, \ldots, n$, is given by:

$$d_{Hamm}(x_i, x_{i'}) = \frac{\# \left\{ j : x_{ij} \neq x_{i'j},\ 1 \leq j \leq p \right\}}{p}$$
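As a sketch of how these distances can be computed in practice, base R's dist() implements most of them directly; the Mahalanobis and Hamming computations below are derived from the built-in methods rather than being methods themselves, and the binary matrix B is invented toy data:

```r
# Distances from this section on the standardized states data.
X <- scale(state.x77)

d_euc  <- dist(X, method = "euclidean")
d_man  <- dist(X, method = "manhattan")          # Minkowski with p = 1
d_max  <- dist(X, method = "maximum")            # Chebychev distance
d_canb <- dist(X, method = "canberra")
d_mink <- dist(X, method = "minkowski", p = 3)   # unweighted p-distance

# Mahalanobis distances: whiten the data first, so that Euclidean
# distances on the whitened data equal Mahalanobis distances on X.
R <- chol(cov(X))                                # cov(X) = t(R) %*% R
d_mah <- dist(X %*% solve(R))

# Hamming distance for a 0/1 matrix: mismatch count over p, which is
# the Manhattan distance divided by the number of variables.
B <- matrix(rbinom(50 * 4, 1, 0.5), nrow = 50)   # toy binary data
d_hamm <- dist(B, method = "manhattan") / ncol(B)
```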

Hierarchical clustering

There are two types of hierarchical clustering methods:

1. In agglomerative clustering, one starts with $n$ singleton clusters and merges clusters into larger groupings.
2. In divisive clustering, one starts with a single cluster and divides it into a number of smaller clusters.

Most attention has been paid to agglomerative methods; however, arguments have been made that divisive methods can provide more sophisticated and robust clusterings.

The end result of every hierarchical clustering method is a dendrogram, in which the $k$-cluster solution is obtained by merging some of the clusters of the $(k+1)$-cluster solution.

The result of a hierarchical algorithm depends on the distance considered. In particular, when the variables are in different units of measurement and the distance used does not take this fact into account, it is better to standardize the variables.

Hierarchical clustering

The algorithm for agglomerative hierarchical clustering (agglomerative nesting, or agnes) is given next:

1. Let $x_i$, for $i = 1, \ldots, n$, be the observations. Initially, each observation is a cluster.
2. Compute $D = \{ d_{ii'},\ i, i' = 1, \ldots, n \}$, the matrix that contains the distances between the $n$ observations (clusters).
3. Find the smallest distance in $D$, say $d_{II'}$. Merge clusters $I$ and $I'$ to form a new cluster $II'$.
4. Compute the distances, $d_{II',J}$, between the new cluster $II'$ and all other clusters $J \neq II'$. These distances depend upon which linkage method is used; the linkage methods are detailed in the next slide.
5. Form a new distance matrix, $D$, by deleting the rows and columns of $I$ and $I'$ and adding a new row and column for $II'$ with the distances computed in step 4.
6. Repeat steps 3, 4 and 5 a total of $n - 1$ times. At the last step, all observations are merged into a single cluster.

Hierarchical clustering

The linkage methods used to compute the distance, $d_{II',J}$, between the new cluster $II'$ and any other cluster $J \neq II'$ are:

Single linkage: $d_{II',J} = \min \{ d_{I,J}, d_{I',J} \}$.

Complete linkage: $d_{II',J} = \max \{ d_{I,J}, d_{I',J} \}$.

Average linkage: $d_{II',J} = \sum_{i \in II'} \sum_{i' \in J} d_{i,i'} / (n_{II'} n_J)$, where $n_{II'}$ and $n_J$ are the numbers of items in clusters $II'$ and $J$, respectively.
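All three rules can be written as a single update from the pre-merge distances. A minimal sketch in R (the function name and arguments are illustrative, not from any package); for average linkage, the size-weighted mean of the two cluster-to-cluster averages equals the average over all pairs between $II'$ and $J$:

```r
# Distance from the merged cluster II' to a third cluster J, given the
# distances d(I, J), d(I', J) and the cluster sizes n_I and n_Ip.
linkage_update <- function(d_IJ, d_IpJ, n_I, n_Ip,
                           method = c("single", "complete", "average")) {
  method <- match.arg(method)
  switch(method,
         single   = min(d_IJ, d_IpJ),
         complete = max(d_IJ, d_IpJ),
         average  = (n_I * d_IJ + n_Ip * d_IpJ) / (n_I + n_Ip))
}

linkage_update(2.1, 3.4, n_I = 5, n_Ip = 3, method = "average")  # 2.5875
```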

Hierarchical clustering

The dendrogram allows the user to read off the distance at which clusters are combined to form a new cluster.

Clusters that are similar to each other are combined at low distances, whereas clusters that are more dissimilar are combined at high distances. The difference in distances defines how close clusters are to each other.

A partition of the data into a specified number of groups can be obtained by cutting the dendrogram at an appropriate distance. If we draw a horizontal line on the dendrogram at a given distance, then the number, $K$, of vertical lines cut by that horizontal line identifies a $K$-cluster solution.

The intersection of the horizontal line and one of those $K$ vertical lines represents a cluster, and the items located at the ends of all branches below that intersection constitute the members of the cluster.

Illustrative example (I)

We are going to apply the agnes algorithm to the states data set, comparing the results obtained with the Euclidean and the Manhattan distances. Since the variables are in different units, we use standardized variables.

The next slides show dendrograms for the solutions with these two distances and the three linkage methods (single, complete and average).

Once the solutions are obtained, scatterplot matrices with the assignments are also given. For these we consider the case of 4 groups.
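A sketch of the computations behind the following slides, using agnes() from R's cluster package on state.x77 (R's states data); metric and method select the distance and the linkage:

```r
library(cluster)

X <- scale(state.x77)                        # standardized variables
fit <- agnes(X, metric = "euclidean", method = "single")
fit$ac                                       # agglomerative coefficient
pltree(fit, main = "Euclidean distance and single linkage")

groups <- cutree(as.hclust(fit), k = 4)      # cut the dendrogram at 4 groups
pairs(state.x77, col = groups)               # scatterplot matrix with assignments

# The remaining solutions combine metric = "euclidean" or "manhattan"
# with method = "single", "complete" or "average".
```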

Illustrative example (I)

[Figure: dendrogram of the states data, Euclidean distance and single linkage. Agglomerative coefficient = 0.77.]

Illustrative example (I)

[Figure: scatterplot matrix of Population, Income, Illiteracy, Life.Exp, Murder, HS.Grad, Frost and Area, with the 4-group assignments from Euclidean distance and single linkage.]

Illustrative example (I)

[Figure: dendrogram of the states data, Euclidean distance and complete linkage. Agglomerative coefficient = 0.82.]

Illustrative example (I)

[Figure: scatterplot matrix of the states variables with the 4-group assignments from Euclidean distance and complete linkage.]

Illustrative example (I)

[Figure: dendrogram of the states data, Euclidean distance and average linkage. Agglomerative coefficient = 0.8.]

Illustrative example (I)

[Figure: scatterplot matrix of the states variables with the 4-group assignments from Euclidean distance and average linkage.]

Illustrative example (I)

[Figure: dendrogram of the states data, Manhattan distance and single linkage. Agglomerative coefficient = 0.71.]

Illustrative example (I)

[Figure: scatterplot matrix of the states variables with the 4-group assignments from Manhattan distance and single linkage.]

Illustrative example (I)

[Figure: dendrogram of the states data, Manhattan distance and complete linkage. Agglomerative coefficient = 0.82.]

Illustrative example (I)

[Figure: scatterplot matrix of the states variables with the 4-group assignments from Manhattan distance and complete linkage.]

Illustrative example (I)

[Figure: dendrogram of the states data, Manhattan distance and average linkage. Agglomerative coefficient = 0.78.]

Illustrative example (I)

[Figure: scatterplot matrix of the states variables with the 4-group assignments from Manhattan distance and average linkage.]

Hierarchical clustering

None of the distance/linkage procedures is uniformly best for all clustering problems.

Single linkage often leads to long clusters, joined by singleton observations near each other, a result that does not have much appeal in practice.

Complete linkage tends to produce many small, compact clusters.

Average linkage is dependent upon the sizes of the clusters, while single and complete linkage are not.

Hierarchical clustering

In divisive clustering (divisive analysis, or diana), the idea is that, at each step, the observations are divided into a splinter group (say, cluster A) and the remainder group (say, cluster B).

The splinter group is initiated by extracting the observation that has the largest average distance to all other observations in the data set. That observation is set up as cluster A.

Given the separation of the data into A and B, we next compute, for each observation in cluster B, the following quantities:

1. the average distance between that observation and all other observations in cluster B, and
2. the average distance between that observation and all observations in cluster A.

Hierarchical clustering

Then, we compute the difference between (1) and (2) above for each observation in B. There are two possibilities:

1. If all the differences are negative, we stop the algorithm.
2. If any of the differences are positive, we take the observation in B with the largest positive difference, move it to A, and repeat the procedure.

This algorithm provides a binary split of the data into two clusters A and B. The same procedure can then be used to obtain binary splits of each of the clusters A and B separately.

Illustrative example (I)

We are going to apply the diana algorithm to the states data set, again comparing the results obtained with the Euclidean and the Manhattan distances and using standardized variables.

The next slides show dendrograms for the solutions with these two distances. Once the solutions are obtained, scatterplot matrices with the assignments are also given, again for the case of 4 groups.

It is not difficult to see that this algorithm points out the presence of special states.
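A sketch of the corresponding computations with diana() from the cluster package (Euclidean case shown; the Manhattan solution follows by changing metric):

```r
library(cluster)

X <- scale(state.x77)
fit_div <- diana(X, metric = "euclidean")
fit_div$dc                                   # divisive coefficient
pltree(fit_div, main = "diana, Euclidean distance")

groups <- cutree(as.hclust(fit_div), k = 4)  # 4-cluster solution
pairs(state.x77, col = groups)
```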

Illustrative example (I)

[Figure: dendrogram of the states data from diana, Euclidean distance. Divisive coefficient = 0.81.]

Illustrative example (I)

[Figure: scatterplot matrix of the states variables with the 4-group diana assignments, Euclidean distance.]

Illustrative example (I)

[Figure: dendrogram of the states data from diana, Manhattan distance. Divisive coefficient = 0.8.]

Illustrative example (I)

[Figure: scatterplot matrix of the states variables with the 4-group diana assignments, Manhattan distance.]

Partition clustering

Partition methods simply split the data into a predetermined number $K$ of groups or clusters; there is no hierarchical relationship between the $K$-cluster solution and the $(K+1)$-cluster solution.

Given $K$, we seek to partition the data into $K$ clusters so that the observations within each cluster are similar to each other, whereas observations from different clusters are dissimilar.

Ideally, one would obtain all possible partitions of the data into $K$ clusters and select the best one using some optimality criterion. Clearly, for medium or large data sets such a method rapidly becomes infeasible, requiring an incredible amount of computer time and storage.

As a result, all available partition methods are iterative and work on only a few of the possible partitions.

Partition clustering

The k-means algorithm is the most popular partition method. Because it is extremely efficient, it is often used for large-scale clustering projects.

The algorithm depends on the concept of the centroid of a cluster, which is a representative point of the group. Usually, the centroid is taken as the mean of the observations in the cluster, although this is not always the choice.

Partition clustering

The algorithm is given next:

1. Let $x_i$, for $i = 1, \ldots, n$, be the observations.
2. Do one of the following:
   2.1. Form an initial random assignment of the observations into $K$ clusters and, for cluster $k$, compute its current centroid, $\bar{x}_k$.
   2.2. Pre-specify $K$ cluster centroids, $\bar{x}_k$, for $k = 1, \ldots, K$.
3. Compute the squared Euclidean distance of each observation to its current cluster centroid and sum them all:
$$SSE = \sum_{k=1}^{K} \sum_{c(i)=k} (x_i - \bar{x}_k)' (x_i - \bar{x}_k)$$
where $\bar{x}_k$ is the $k$-th cluster centroid and $c(i)$ is the cluster containing $x_i$.
4. Reassign each observation to its nearest cluster centroid so that $SSE$ is reduced in magnitude. Update the cluster centroids after each reassignment.
5. Repeat steps 3 and 4 until no further reassignment of observations takes place.
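A minimal from-scratch sketch of these steps in R (illustrative only; it does not safeguard against clusters becoming empty, and in practice one would use the built-in kmeans()):

```r
kmeans_sketch <- function(X, K, max_iter = 100) {
  n <- nrow(X)
  cluster <- sample(rep_len(1:K, n))             # step 2.1: random assignment
  for (iter in 1:max_iter) {
    # centroids: mean of the observations currently in each cluster (p x K)
    centroids <- sapply(1:K, function(k)
      colMeans(X[cluster == k, , drop = FALSE]))
    # squared Euclidean distance of every observation to every centroid (n x K)
    D2 <- sapply(1:K, function(k) colSums((t(X) - centroids[, k])^2))
    new_cluster <- max.col(-D2)                  # step 4: nearest centroid
    if (all(new_cluster == cluster)) break       # step 5: stop when stable
    cluster <- new_cluster
  }
  list(cluster = cluster, SSE = sum(D2[cbind(1:n, cluster)]))
}
```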

Partition clustering

The solution (a configuration of the observations into $K$ clusters) will typically not be unique; the algorithm will only find a local minimum of $SSE$.

It is recommended that the algorithm be run with several different initial random assignments of the observations to the $K$ clusters (or with several randomly selected sets of $K$ initial centroids) in order to find the lowest minimum of $SSE$ and, hence, the best clustering solution based upon $K$ clusters.

Illustrative example (I)

We are going to apply the k-means algorithm to the states data set. As with the hierarchical algorithms, we use standardized variables, since the algorithm uses Euclidean distances.

The next slide shows a scatterplot matrix with the assignments made by the algorithm. As before, we consider the case of 4 groups.

We run the algorithm 25 times; in other words, we form 25 initial random assignments of the observations into 4 clusters and keep the best solution. The value of $SSE$ attained by the algorithm is 156.2664.
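This analysis can be reproduced with base R's kmeans(); the nstart argument runs the algorithm from several random assignments and keeps the solution with the smallest $SSE$ (a sketch; the seed is arbitrary):

```r
set.seed(1)                              # results depend on the random starts
fit_km <- kmeans(scale(state.x77), centers = 4, nstart = 25)
fit_km$tot.withinss                      # SSE (156.2664 on the slide)
pairs(state.x77, col = fit_km$cluster)   # scatterplot matrix with assignments
```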

Illustrative example (I)

[Figure: scatterplot matrix of the states variables with the 4-group k-means assignments.]

Partition clustering

Partitioning around medoids (pam) is another partition algorithm. Essentially, pam is a modification of the k-means algorithm: it searches for $K$ representative objects (medoids) among the observations in the data set, rather than for centroids.

As a consequence, the method is expected to be more robust to data anomalies such as outliers.

A disadvantage of the pam algorithm is that, although it runs well on small data sets, it is not efficient enough for clustering large data sets.

Partition clustering

The algorithm is given next:

1. Let $x_i$, for $i = 1, \ldots, n$, be the observations.
2. Compute $D = \{ d_{ii'},\ i, i' = 1, \ldots, n \}$, the matrix that contains the distances between the $n$ observations.
3. Choose $K$ observations as the medoids of $K$ initial clusters.
4. Assign every observation to its closest medoid using the matrix $D$.
5. For each cluster, search for the observation of the cluster (if any) that, when taken as the new medoid, gives the largest reduction in:
$$SSE_{med} = \sum_{k=1}^{K} \sum_{c(i)=k} d_{i, m_k}$$
where $m_k$ denotes the medoid of cluster $k$ (note that $SSE_{med}$ only considers the distances from every observation in a cluster to its medoid).
6. Repeat steps 4 and 5 until no further reduction in $SSE_{med}$ takes place.

Illustrative example (I)

We are going to apply the pam algorithm to the states data set. As with the previous algorithms, we use standardized variables, since we are going to use the Euclidean distance.

The next slide shows a scatterplot matrix with the assignments made by the algorithm. As before, we consider the case of 4 groups.
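A sketch with pam() from the cluster package; an attractive by-product is that the medoids are actual states, which makes the cluster representatives directly interpretable:

```r
library(cluster)

fit_pam <- pam(scale(state.x77), k = 4, metric = "euclidean")
fit_pam$medoids                              # the 4 representative states
pairs(state.x77, col = fit_pam$clustering)   # scatterplot matrix with assignments
```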

Illustrative example (I)

[Figure: scatterplot matrix of the states variables with the 4-group pam assignments, Euclidean distance.]

Model-based clustering

In model-based clustering, it is assumed that the data have been generated by a mixture of $K$ unknown distributions.

Maximum likelihood estimation can be carried out to estimate the parameters of the mixture model. This is usually undertaken using the Expectation-Maximization (EM) algorithm.

Then, once the model parameters have been estimated, each observation is assigned to the mixture component (cluster) with the largest probability of having generated the observation.

Model-based clustering

Thus, we assume that the data set has been generated from a mixture of distributions with pdf given by:

$$f_x(x|\theta) = \sum_{k=1}^{K} \pi_k f_{x,k}(x|\theta_k)$$

where $\theta$ is a vector with all the parameters of the model, including the weights $\pi_k$ and the parameters of the distributions $f_{x,k}(\cdot|\theta_k)$, denoted by $\theta_k$.

Model-based clustering

Then, for a data matrix, $X$, with observations $x_i = (x_{i1}, \ldots, x_{ip})'$, the likelihood function is given by:

$$l(\theta|X) = \prod_{i=1}^{n} f_x(x_i|\theta) = \prod_{i=1}^{n} \left( \sum_{k=1}^{K} \pi_k f_{x,k}(x_i|\theta_k) \right)$$

while the log-likelihood is given by:

$$L(\theta|X) = \sum_{i=1}^{n} \log \left( \sum_{k=1}^{K} \pi_k f_{x,k}(x_i|\theta_k) \right)$$

Model-based clustering

Derivation of closed-form expressions for the MLEs of the mixture parameters is not possible, even in the case of the multivariate Gaussian distribution.

Moreover, although it is possible to apply a Newton-Raphson type algorithm to solve the equations provided by the ML method, the usual approach is to use the EM algorithm to obtain the MLEs (see the references).

Then, let $\hat{\pi}_1, \ldots, \hat{\pi}_K$ and $\hat{\theta}_1, \ldots, \hat{\theta}_K$ be the MLEs of the weights and of the parameters of the group distributions, respectively, obtained with the EM algorithm. The estimated posterior probability that observation $x_i$ belongs to population $k$ is obtained by applying Bayes' theorem:

$$\Pr(k|x_i) = \frac{\hat{\pi}_k f_{x,k}(x_i|\hat{\theta}_k)}{\sum_{g=1}^{K} \hat{\pi}_g f_{x,g}(x_i|\hat{\theta}_g)}$$

The observations are assigned to the density (cluster) $k$ with maximum $\Pr(k|x_i)$.
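A minimal sketch of this assignment step for a fitted Gaussian mixture, using dmvnorm() from the mvtnorm package; the function name and the parameter lists are illustrative, and the estimates would come from an EM fit, not from these slides:

```r
library(mvtnorm)   # for the multivariate Gaussian density dmvnorm()

# Given estimated weights pi_hat (length K), and lists of K estimated means
# mu_hat and covariance matrices Sigma_hat, compute Pr(k | x_i) by Bayes'
# theorem and assign each observation to the component that maximizes it.
posterior_assign <- function(X, pi_hat, mu_hat, Sigma_hat) {
  K <- length(pi_hat)
  num <- sapply(1:K, function(k)     # n x K matrix of pi_k * f_k(x_i)
    pi_hat[k] * dmvnorm(X, mean = mu_hat[[k]], sigma = Sigma_hat[[k]]))
  post <- num / rowSums(num)         # normalize each row (Bayes' theorem)
  list(posterior = post, cluster = max.col(post))
}
```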

Model-based clustering

In model-based clustering, it is possible to select the number of groups, $K$, from the data set. The idea is to compare solutions for different values of $K = 1, 2, \ldots$ and choose the best result.

For that, we can rely on model selection criteria such as the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC). For instance, the BIC selects the number of clusters that minimizes:

$$BIC(K) = -2 L_K(\hat{\theta}|X) + q \log(n)$$

where $L_K(\hat{\theta}|X)$ denotes the maximized log-likelihood assuming $K$ groups and $q$ is the number of parameters of the model.

Model-based clustering

Mclust is a popular method to perform model-based clustering. Mclust assumes Gaussian densities and selects the optimal model according to the BIC.

To reduce the number of parameters to fit, Mclust works with the spectral decomposition of the covariance matrices $\Sigma_k$, given by:

$$\Sigma_k = \lambda_{1,k} V_k \Lambda_k V_k'$$

where $\lambda_{1,k}$ is the largest eigenvalue of $\Sigma_k$, $V_k$ is the matrix that contains the eigenvectors of $\Sigma_k$, and $\Lambda_k$ is the diagonal matrix of the eigenvalues of $\Sigma_k$ divided by $\lambda_{1,k}$.

Model-based clustering

The decomposition allows for different configurations:

1. spherical, equal volume;
2. spherical, unequal volume;
3. diagonal, equal volume and shape;
4. diagonal, varying volume, equal shape;
5. diagonal, equal volume, varying shape;
6. diagonal, varying volume and shape;
7. ellipsoidal, equal volume, shape and orientation;
8. ellipsoidal, equal volume and equal shape;
9. ellipsoidal, equal shape; and
10. ellipsoidal, varying volume, shape and orientation.

Here, (i) spherical, diagonal and ellipsoidal refer to the covariance matrices; (ii) equal volume means that $\lambda_{1,1} = \cdots = \lambda_{1,K}$; (iii) equal shape means $\Lambda_1 = \cdots = \Lambda_K$; and (iv) equal orientation means $V_1 = \cdots = V_K$.

Illustrative example (I)

For the states data set, Mclust selects a diagonal, equal shape model with 4 components.

After estimating the model with the EM algorithm, the procedure computes the posterior probabilities for each state and each component. The results are shown in the next two slides.

The first one shows a scatterplot matrix with the assignments made by the algorithm. The second one shows the first two principal components with the assignments made by the algorithm.
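A sketch of the fit with the mclust package (note that mclust reports the BIC with the opposite sign convention to the previous slide, so it maximizes it):

```r
library(mclust)

fit_mc <- Mclust(scale(state.x77))   # by default compares G = 1:9 components
summary(fit_mc)                      #   over all covariance configurations
pairs(state.x77, col = fit_mc$classification)   # assignments, as on the next slide
head(fit_mc$z)                       # posterior probabilities Pr(k | x_i)
```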

Illustrative example (I)

[Figure: scatterplot matrix of the states variables with the assignments from the Mclust solution.]

Illustrative example (I)

[Figure: the states plotted on the first two principal components, with the assignments from the Mclust solution.]

Model-based clustering

There are alternative procedures for model-based clustering.

For instance, very appealing methodologies for estimating mixtures have been developed from the Bayesian point of view. These procedures include the number of groups as an additional parameter, and posterior probabilities are also provided for this number.

Procedures based on the use of projections (projection pursuit methods) are also popular. The idea is to project the data onto different directions that separate the groups as much as possible and to look for clusters in the univariate projected data.

Chapter outline

1. Introduction
2. Proximity measures
3. Hierarchical clustering
4. Partition clustering
5. Model-based clustering

We are now ready for Chapter 6: Multidimensional scaling.