Clustering. Genome 559: Introduction to Statistical and Computational Genomics Elhanan Borenstein. Some slides adapted from Jacques van Helden


Gene expression profiling: a quick review. Which molecular processes/functions are involved in a certain phenotype (e.g., disease, stress response, etc.)? The Gene Ontology (GO) Project provides a shared vocabulary/annotation; GO terms are linked in a complex structure. Enrichment analysis: find the most differentially expressed genes, then identify functional annotations that are over-represented (modified Fisher's exact test).

Gene Set Enrichment Analysis: a quick review, cont. GSEA calculates a score for the enrichment of an entire set of genes. It does not require setting a cutoff! It identifies the set of relevant genes! It provides a more robust statistical framework! GSEA steps: 1. Calculation of an enrichment score (ES) for each functional category. 2. Estimation of significance level. 3. Adjustment for multiple hypothesis testing.
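Step 1 above can be illustrated with a deliberately simplified, unweighted sketch (not the actual GSEA implementation): walk down the ranked gene list, stepping up when a gene belongs to the set and down otherwise; the ES is the maximum deviation of this running sum.

```python
# Simplified, unweighted enrichment-score sketch (illustrative only;
# real GSEA weights steps by correlation and estimates significance
# by permutation).
def enrichment_score(ranked_genes, gene_set):
    hits = sum(1 for g in ranked_genes if g in gene_set)
    misses = len(ranked_genes) - hits
    running, best = 0.0, 0.0
    for g in ranked_genes:
        # step up for a hit, down for a miss, so the sum ends at 0
        running += 1.0 / hits if g in gene_set else -1.0 / misses
        best = max(best, abs(running))
    return best
```

A gene set concentrated at the top of the ranking yields a high ES; a set scattered uniformly yields an ES near zero.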

The clustering problem The goal of the gene clustering process is to partition the genes into distinct sets such that genes assigned to the same cluster are similar, while genes assigned to different clusters are dissimilar.

Clustering vs. Classification Clustering is an exploratory tool: who's running with whom? It is a very different problem from classification: clustering is about finding coherent groups, while classification is about relating such groups or individual objects to specific labels, mostly to support future prediction. Most clustering algorithms are unsupervised (in contrast, most classification algorithms are supervised). Clustering methods have been used in a vast number of disciplines and fields.

What are we clustering? We can cluster genes, conditions (samples), or both.

Clustering gene expression profiles

Why clustering Clustering genes or conditions is a basic tool for the analysis of expression profiles, and can be useful for many purposes, including: Inferring functions of unknown genes (assuming a similar expression pattern implies a similar function). Identifying disease profiles (tissues with similar pathology should yield similar expression profiles). Deciphering regulatory mechanisms: co-expression of genes may imply co-regulation. Reducing dimensionality.

The clustering problem A good clustering solution should have two features: 1. High homogeneity: homogeneity measures the similarity between genes assigned to the same cluster. 2. High separation: separation measures the distance/dissimilarity between clusters. (If two clusters have similar expression patterns, then they should probably be merged into one cluster.) Note that there is usually a tradeoff between these two features: more clusters mean increased homogeneity but decreased separation; fewer clusters mean increased separation but reduced homogeneity.
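One common (hypothetical, for illustration) way to quantify these two criteria, given clusters as lists of points and a point-level distance function:

```python
# Illustrative sketch: homogeneity as mean within-cluster distance
# (lower = more homogeneous) and separation as mean between-cluster
# distance (higher = better separated). Names are ours, not from a
# specific library.
def homogeneity(cluster, dist):
    pairs = [(a, b) for i, a in enumerate(cluster) for b in cluster[i + 1:]]
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

def separation(c1, c2, dist):
    return sum(dist(a, b) for a in c1 for b in c2) / (len(c1) * len(c2))
```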

One problem, numerous solutions No single solution is necessarily the true/correct mathematical solution! There are many formulations of the clustering problem; most of them are NP-hard. Therefore, in most cases, heuristic methods or approximations are used: hierarchical clustering, k-means, self-organizing maps (SOM), KNN, PCC, CAST, CLICK.

One problem, numerous solutions The results (i.e., the obtained clusters) can vary drastically depending on: the clustering method; the similarity or dissimilarity metric; and parameters specific to each clustering method (e.g., the number of centers for the k-means method, the agglomeration rule for hierarchical clustering, etc.).

Clustering methods We can distinguish between two types of clustering methods: 1. Agglomerative: These methods build the clusters by examining small groups of elements and merging them in order to construct larger groups. 2. Divisive: A different approach, which analyzes large groups of elements in order to divide the data into smaller groups and eventually reach the desired clusters. There is another way to distinguish between clustering methods: 1. Hierarchical: Here we construct a hierarchy or tree-like clustering structure (e.g., hierarchical clustering) to examine the relationship between entities. 2. Non-hierarchical: In non-hierarchical methods (e.g., k-means), the elements are partitioned into non-overlapping groups.

Hierarchical clustering

Hierarchical clustering Hierarchical clustering is an agglomerative clustering method. It takes as input a distance matrix and progressively regroups the closest objects/groups, yielding a tree representation.

Distance matrix:

             object 1  object 2  object 3  object 4  object 5
  object 1       0.00      4.00      6.00      3.50      1.00
  object 2       4.00      0.00      6.00      2.00      4.50
  object 3       6.00      6.00      0.00      5.50      6.50
  object 4       3.50      2.00      5.50      0.00      4.00
  object 5       1.00      4.50      6.50      4.00      0.00

[Tree figure: leaf nodes object 1, object 5, object 4, object 2, object 3; internal branch nodes c1-c4; root at the top.]

mmm Déjà vu anyone?

Measuring similarity/distance An important step in many clustering methods is the selection of a distance measure (metric), defining the distance between 2 data points (e.g., 2 genes). Point 1: [0.1, 0.0, 0.6, 1.0, 2.1, 0.4, 0.2]. Point 2: [0.2, 1.0, 0.8, 0.4, 1.4, 0.5, 0.3]. Genes are points in the multi-dimensional space R^n (where n denotes the number of conditions).

Measuring similarity/distance So how do we measure the distance between two points in a multi-dimensional space? Common distance functions: The Euclidean distance (a.k.a. "distance as the crow flies", or 2-norm distance). The Manhattan distance (a.k.a. taxicab distance, or 1-norm). The maximum norm (a.k.a. infinity distance). The Hamming distance (the number of substitutions required to change one point into another). The first three are special cases of the p-norm. Note also the distinction between symmetric and asymmetric distances.
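The three norm-based distances above can be sketched in a few lines, using the two example points from the previous slide:

```python
# Sketch of the common distance functions between two expression
# profiles (vectors of equal length).
p1 = [0.1, 0.0, 0.6, 1.0, 2.1, 0.4, 0.2]
p2 = [0.2, 1.0, 0.8, 0.4, 1.4, 0.5, 0.3]

def euclidean(a, b):   # 2-norm: sqrt of sum of squared differences
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def manhattan(a, b):   # 1-norm: sum of absolute differences
    return sum(abs(x - y) for x, y in zip(a, b))

def max_norm(a, b):    # infinity-norm: largest absolute difference
    return max(abs(x - y) for x, y in zip(a, b))
```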

Correlation as distance Another approach is to use the correlation between two data points as a distance metric. Pearson Correlation Spearman Correlation Absolute Value of Correlation
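A minimal sketch of the Pearson variant: compute the correlation r, then use 1 - r as the distance, so perfectly correlated profiles are at distance 0 and perfectly anti-correlated profiles at distance 2.

```python
# Pearson correlation from its definition, then 1 - r as a distance.
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def corr_distance(a, b):
    return 1 - pearson(a, b)   # 0 = perfectly correlated, 2 = anti-correlated
```

Using the absolute value of the correlation instead (distance 1 - |r|) treats correlated and anti-correlated profiles alike.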

Metric matters! The metric of choice has a marked impact on the shape of the resulting clusters: some elements may be close to one another in one metric and far from one another in a different metric. Consider, for example, the point (x=1, y=1) and the origin. What's their distance using the 2-norm (Euclidean distance)? What's their distance using the 1-norm (a.k.a. taxicab/Manhattan norm)? What's their distance using the infinity-norm?
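Working the three questions above out for p = (1, 1) versus the origin:

```python
import math

# Distances from (1, 1) to the origin under the three norms.
p = (1.0, 1.0)

two_norm = math.sqrt(p[0] ** 2 + p[1] ** 2)   # Euclidean: sqrt(2) ~ 1.414
one_norm = abs(p[0]) + abs(p[1])              # Manhattan: 2
inf_norm = max(abs(p[0]), abs(p[1]))          # infinity-norm: 1
```

Three metrics, three different answers for the same pair of points.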

Hierarchical clustering algorithm 1. Assign each object to a separate cluster. 2. Find the pair of clusters with the shortest distance, and regroup them into a single cluster. 3. Repeat step 2 until there is a single cluster. The result is a tree, whose intermediate nodes represent clusters and whose branch lengths represent distances between clusters.
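The three steps above can be sketched naively as follows (an O(n^3) illustration, not an efficient implementation; single linkage is assumed here as the group distance, anticipating the next slide):

```python
# Naive agglomerative loop: start with singleton clusters and
# repeatedly merge the closest pair until one cluster remains.
# The returned merge history encodes the tree.
def hierarchical_cluster(points, dist):
    clusters = [[p] for p in points]      # step 1: one cluster per object
    merges = []
    while len(clusters) > 1:              # step 3: repeat until one cluster
        best = None
        for i in range(len(clusters)):    # step 2: find the closest pair
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((clusters[i], clusters[j], d))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return merges
```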

Hierarchical clustering 1. Assign each object to a separate cluster. 2. Find the pair of clusters with the shortest distance, and regroup them into a single cluster. 3. Repeat step 2 until there is a single cluster. One needs to define a (dis)similarity metric between two groups. There are several possibilities: Average linkage: the average distance between objects from groups A and B. Single linkage: the distance between the closest objects from groups A and B. Complete linkage: the distance between the most distant objects from groups A and B.
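The three agglomeration rules translate directly into code, given two clusters A and B (lists of points) and a point-level distance function:

```python
# The three inter-cluster (dis)similarity rules from the slide.
def single_linkage(A, B, dist):    # closest pair across the two groups
    return min(dist(a, b) for a in A for b in B)

def complete_linkage(A, B, dist):  # most distant pair across the two groups
    return max(dist(a, b) for a in A for b in B)

def average_linkage(A, B, dist):   # mean over all cross-group pairs
    return sum(dist(a, b) for a in A for b in B) / (len(A) * len(B))
```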

Impact of the agglomeration rule These four trees were built from the same distance matrix, using 4 different agglomeration rules. Single linkage typically creates nested clusters; complete linkage creates more balanced trees. Note: these trees were computed from a matrix of random numbers. The impression of structure is thus a complete artifact.

Hierarchical clustering result: five clusters.

Clustering in both dimensions