Multivariate analysis of genetic data: investigating spatial structures

Thibaut Jombart (tjombart@imperial.ac.uk)
Imperial College London, MRC Centre for Outbreak Analysis and Modelling

March 26, 2014

Abstract

This practical provides an introduction to the analysis of spatial genetic structures using R. Using an empirical dataset of microsatellites sampled from wild goats in the Bauges mountains (France), we illustrate univariate and multivariate tests of spatial structures, and then introduce the spatial Principal Component Analysis (spca) for uncovering spatial genetic patterns. For a more complete overview of spatial genetics methods, see the basics and spca tutorials for adegenet, which you can open using adegenetTutorial("basics") and adegenetTutorial("spca") (with an internet connection) or from the adegenet website.

Contents

1 An overview of the data
2 Summarising the genetic diversity
3 Mapping and testing PCA results
4 Multivariate tests of spatial structure
5 spatial Principal Component Analysis
6 To go further

The chamois (Rupicapra rupicapra) is a conserved species in France. The Bauges mountains are a protected area in which the species has recently been studied. One of the most important questions for conservation purposes is whether individuals from this area form a single reproductive unit, or whether they are structured into sub-groups, and if so, what causes are likely to induce this structuring. While field observations are very scarce and do not allow this question to be answered, genetic data can be used to tackle the issue, as departure from panmixia should result in genetic structuring. The dataset rupica contains 335 georeferenced genotypes of chamois from the Bauges mountains for 9 microsatellite markers, which we propose to analyse in this exercise.

1 An overview of the data

We first load some required packages and the data:

library(ade4)
library(adegenet)
library(spdep)
library(adehabitat)
data(rupica)
rupica

   #####################
   ### Genind object ###
   #####################
- genotypes of individuals -

S4 class:  genind
@call: NULL
@tab:  335 x 55 matrix of genotypes
@ind.names: vector of 335 individual names
@loc.names: vector of 9 locus names
@loc.nall: number of alleles per locus
@loc.fac: locus factor for the 55 columns of @tab
@all.names: list of 9 components yielding allele names for each locus
@ploidy: 2
@type: codom

Optionnal contents:
@pop: - empty -
@pop.names: - empty -

@other: a list containing: xy  mnt  showbauges

rupica is a typical genind object, which is the class of objects storing genotypes (as opposed to population data) in adegenet. rupica also contains topographic information about the sampled area, which can be displayed by calling rupica$other$showbauges. For instance, the spatial distribution of the sampling can be displayed as follows:

rupica$other$showbauges()
points(rupica$other$xy, col="red", pch=20)

[Figure: map of the study area (scale bar: 5 km) with sampled individuals shown as red dots]

This spatial distribution is clearly not random, but seems arranged into loose clusters. However, superimposed samples can bias our visual assessment of the spatial clustering. Use a two-dimensional kernel density estimation (function s.kde2d) to overcome this possible issue.

rupica$other$showbauges()
s.kde2d(rupica$other$xy, add.plot=TRUE)
points(rupica$other$xy, col="red", pch=20)

[Figure: same map with contours of a 2D kernel density estimate of the sampling intensity]

Is the geographical clustering strong enough to safely assign each individual to a group? Accordingly, shall we analyse these data at the individual or the group level?

2 Summarising the genetic diversity

As a prior clustering of genotypes is not known, we cannot employ the usual FST-based approaches to detect genetic structuring. However, genetic structure could still result in a deficit of heterozygosity. Use the summary of genind objects to compare expected and observed heterozygosity:

rupica.smry <- summary(rupica)

# Total number of genotypes:  335
# Population sample sizes:
335
# Number of alleles per locus:
L1 L2 L3 L4 L5 L6 L7 L8 L9
 7 10  7  6  5  5  6  4  5
# Number of alleles per population:
 1
55
# Percentage of missing data:
[1] 0
# Observed heterozygosity:
    L1     L2     L3     L4     L5     L6     L7     L8     L9
0.5881 0.6209 0.5254 0.7582 0.6597 0.5284 0.6299 0.5552 0.4149
# Expected heterozygosity:
    L1     L2     L3     L4     L5     L6     L7     L8     L9
0.6077 0.6533 0.5315 0.7260 0.6602 0.5706 0.6413 0.5473 0.4071

plot(rupica.smry$Hexp, rupica.smry$Hobs,
     main="Observed vs expected heterozygosity")
abline(0, 1, col="red")

[Figure: plot of observed (rupica.smry$Hobs) against expected (rupica.smry$Hexp) heterozygosity per locus]

The red line indicates identity between the two quantities. What can we say about heterozygosity in this population? How can this be tested? The result below can be reproduced using a standard testing procedure:

t.test(rupica.smry$Hexp, rupica.smry$Hobs, paired=TRUE, var.equal=TRUE)

	Paired t-test

data:  rupica.smry$Hexp and rupica.smry$Hobs
t = 0.9461, df = 8, p-value = 0.3718
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.01025  0.02451
sample estimates:
mean of the differences
               0.007131
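Since the question is specifically about a deficit of heterozygosity (expected greater than observed), a one-sided version of the same paired test may be more natural; this is a sketch rather than part of the original exercise:

# one-sided variant: is Hexp greater than Hobs on average across loci?
t.test(rupica.smry$Hexp, rupica.smry$Hobs, paired=TRUE,
       alternative="greater")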

We can seek a global picture of the genetic diversity among genotypes using a Principal Component Analysis (PCA, [4, 2], dudi.pca in the ade4 package). The analysis is performed on a table of standardised allele frequencies, obtained by scaleGen. Remember to disable the scaling option when performing the PCA. The function dudi.pca displays a barplot of eigenvalues and asks for a number of retained principal components:

rupica.x <- scaleGen(rupica)
rupica.pca1 <- dudi.pca(rupica.x, cent=FALSE, scale=FALSE, scannf=FALSE, nf=2)
barplot(rupica.pca1$eig)

[Figure: barplot of the PCA eigenvalues]

The output produced by dudi.pca is a dudi object. A dudi object contains various information; in the case of PCA, principal axes (loadings), principal components (synthetic variables), and eigenvalues are stored in the $c1, $li, and $eig slots, respectively. Here is the content of the PCA:

rupica.pca1

Duality diagramm
class: pca dudi
$call: dudi.pca(df = rupica.x, center = FALSE, scale = FALSE, scannf = FALSE,
    nf = 2)

$nf: 2 axis-components saved
$rank: 45
eigen values: 1.561 1.34 1.168 1.097 1.071 ...
  vector length mode    content
1 $cw    55     numeric column weights
2 $lw    335    numeric row weights
3 $eig   45     numeric eigen values

  data.frame nrow ncol content
1 $tab       335  55   modified array
2 $li        335  2    row coordinates
3 $l1        335  2    row normed scores
4 $co        55   2    column coordinates
5 $c1        55   2    column normed scores
other elements: cent norm

In general, eigenvalues represent the amount of genetic diversity (as measured by the multivariate method being used) represented by each principal component (PC). Verify that here, each eigenvalue is the variance of the corresponding PC.

head(rupica.pca1$eig)

[1] 1.561 1.340 1.168 1.097 1.071 1.018

apply(rupica.pca1$li, 2, var) * 334/335

Axis1 Axis2
1.561 1.340

An abrupt decrease in eigenvalues is likely to indicate the boundary between true patterns and non-interpretable structures. In this case, how many PCs would you interpret?
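Not part of the original exercise, but a quick look at the share of the total variance carried by each eigenvalue can help answer this:

# proportion of the total variance associated with each eigenvalue
head(rupica.pca1$eig / sum(rupica.pca1$eig))
barplot(100 * rupica.pca1$eig / sum(rupica.pca1$eig),
        ylab="Percentage of variance explained",
        xlab="Principal components")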

Use s.label to display the first two principal components of the analysis. Then, use a kernel density estimate (s.kde2d) for a better assessment of the distribution of the genotypes onto the principal axes:

s.label(rupica.pca1$li)
s.kde2d(rupica.pca1$li, add.p=TRUE, cpoint=0)
add.scatter.eig(rupica.pca1$eig, 2, 1, 2)

[Figure: s.label scatterplot of the first two principal components with individual labels, kernel density contours, and an inset barplot of eigenvalues]

What can we say about the genetic diversity among these genotypes as inferred by PCA? The function loadingplot allows one to visualize the contribution of each allele, expressed as squared loadings, for a given principal component. Using this function, reproduce this figure:

loadingplot(rupica.pca1$c1^2)

[Figure: loading plot of the squared allele loadings on the first principal axis; labelled alleles include Bm203.221, Eth225.136, Eth225.142, Eth225.146, Eth225.148, Eth225.151, Bm4505.260, Bm4505.266, Bmc1009.230, Hel1.120, Ilst030.161, Ilst030.173, Maf70.134 and Maf70.139]

What do we observe?
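The same information can also be read off numerically; a small sketch, not in the original text, listing the alleles with the largest squared loadings on the first axis (contrib is an illustrative name):

# squared loadings of the first principal axis, named by allele
contrib <- setNames(rupica.pca1$c1[, 1]^2, rownames(rupica.pca1$c1))
head(sort(contrib, decreasing=TRUE))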

We can get back to the genotypes for the markers concerned (e.g., Bm203) to check whether the highlighted genotypes are uncommon. truenames extracts the table of allele frequencies from a genind object (restoring original labels for markers, alleles, and individuals):

X <- truenames(rupica)
class(X)

[1] "matrix"

dim(X)

[1] 335  55

bm203.221 <- X[, "Bm203.221"]
table(bm203.221)

bm203.221
                  0 0.00597014925373134                 0.5
                330                   1                   4

Only 4 genotypes possess one copy of the allele 221 of marker Bm203 (the second value corresponds to a replaced missing observation). Which individuals are they?

rownames(X)[bm203.221 == 0.5]

  001   019   029   276
  "8"  "86" "600" "7385"

Conclusion?

3 Mapping and testing PCA results

A frequent practice in spatial genetics is mapping the first principal components (PCs) onto the geographic space. The function s.value is well-suited to do so, using black and white squares of variable size for positive and negative values. To give a legend for this type of representation:

s.value(cbind(1:11, rep(1,11)), -5:5, cleg=0)
text(1:11, rep(1,11), -5:5, col="red", cex=1.5)

[Figure: legend for the s.value representation: values -5 to 5 shown as white (negative) and black (positive) squares, with size proportional to absolute value]

Apply this graphical representation to the first two PCs of the PCA:

showbauges <- rupica$other$showbauges
showbauges()
s.value(rupica$other$xy, rupica.pca1$li[,1], add.p=TRUE, cleg=0.5)
title("PCA - first PC", col.main="yellow", line=-2, cex.main=2)

[Figure: map "PCA - first PC": values of the first PC plotted with s.value over the study area; scale bar: 5 km]

showbauges()
s.value(rupica$other$xy, rupica.pca1$li[,2], add.p=TRUE, csize=0.7)
title("PCA - second PC", col.main="yellow", line=-2, cex.main=2)

[Figure: map "PCA - second PC"; scale bar: 5 km]

What can we say about spatial genetic structure as inferred by PCA? This visual assessment can be complemented by testing the spatial autocorrelation in the first PCs of the PCA. This can be achieved using Moran's I test. Use the function moran.mc in the package spdep to perform these tests. You will first need to define the spatial connectivity between the sampled individuals. For these data, spatial connectivity is best defined as the overlap between the home ranges of individuals. Home ranges will be modelled as disks with a radius of 1150 m. Use chooseCN to create a connection network based on a distance range ("neighbourhood by distance"). What threshold distance do you choose for individuals to be considered as neighbours? (Two home ranges overlap whenever their centres are less than 2 x 1150 = 2300 m apart.)

rupica.graph <- chooseCN(rupica$other$xy, type=5, d1=0, d2=2300,
                         plot=FALSE, res="listw")

The connection network should resemble this:

rupica.graph

Characteristics of weights list object:
Neighbour list object:
Number of regions: 335
Number of nonzero links: 18018
Percentage nonzero weights: 16.06
Average number of links: 53.79

Weights style: W
Weights constants summary:
    n     nn  S0    S1   S2
W 335 112225 335 15.04 1352

plot(rupica.graph, rupica$other$xy)
title("rupica.graph")

[Figure: connection network rupica.graph plotted over the sampling coordinates]

Perform Moran's test for the first two PCs, and plot the results. The first test should be significant:

pc1.mctest <- moran.mc(rupica.pca1$li[,1], rupica.graph, 999)
plot(pc1.mctest)

[Figure: "Density plot of permutation outcomes", Monte Carlo simulation of Moran's I for the first PC]

Compare this result to the map of the first PC of the PCA. What is wrong? When a test gives unexpected results, it is worth looking at the data in more detail. Moran's plot (moran.plot) plots the tested variable against its lagged vector. Use it on the first PC:

moran.plot(rupica.pca1$li[,1], rupica.graph)

[Figure: Moran scatterplot of rupica.pca1$li[, 1] (x-axis) against its spatially lagged values (y-axis); a number of influential observations are labelled]

Actual positive autocorrelation corresponds to a positive correlation between a variable and its lag vector. Is that the case here? How can we explain that Moran's test was significant?
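One way to check this directly (a sketch, not part of the original text) is to compute the lag vector of the first PC with lag.listw from spdep and correlate it with the PC itself; lag.pc1 is an illustrative name:

# spatially lagged values of the first PC, given the connection network
lag.pc1 <- lag.listw(rupica.graph, rupica.pca1$li[,1])
cor(rupica.pca1$li[,1], lag.pc1)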

Repeat these analyses for the second PC. What are your conclusions?

pc2.mctest <- moran.mc(rupica.pca1$li[,2], rupica.graph, 999)
plot(pc2.mctest)

[Figure: "Density plot of permutation outcomes", Monte Carlo simulation of Moran's I for the second PC]

4 Multivariate tests of spatial structure

So far, we have only tested the existence of spatial structures in the first two principal components of a PCA of the data. Therefore, these tests only describe one fragment of the data, and do not encompass the whole diversity in the data. As a complement, we can use a Mantel test (mantel.randtest) to test spatial structures in the whole data, by assessing the correlation between genetic distances and geographic distances. Pairwise Euclidean distances are computed using dist. Perform a Mantel test, using the scaled genetic data you used before in the PCA, and the geographic coordinates.

mtest <- mantel.randtest(dist(rupica.x), dist(rupica$other$xy))
plot(mtest, nclass=30)

[Figure: histogram of the permuted Mantel statistics (sim), with the observed statistic marked]

What is your conclusion? Shall we be looking for spatial structures? If so, how can we explain that PCA did not reveal them?

5 spatial Principal Component Analysis

The spatial Principal Component Analysis (spca, function spca [3]) has been developed specifically to investigate hidden or non-obvious spatial genetic patterns. Like Moran's I test, spca first requires the spatial proximities between genotypes to be modelled. You will reuse the connection network defined previously using chooseCN, and pass it as the cn argument of the function spca. Read the documentation of spca, and apply the function to the dataset rupica. The function will display a barplot of eigenvalues:

rupica.spca1 <- spca(rupica, cn=rupica.graph, scannf=FALSE,
                     nfposi=2, nfnega=0)
barplot(rupica.spca1$eig, col=rep(c("red","grey"), c(2,43)))

[Figure: barplot of the spca eigenvalues; the first two (positive) eigenvalues are shown in red, the others in grey]

This figure illustrates the fundamental difference between PCA and spca. Like dudi.pca, spca displays a barplot of eigenvalues, but unlike in PCA, eigenvalues of spca can also be negative. This is because the criterion optimised by the analysis can have positive and negative values, corresponding respectively to positive and negative autocorrelation. Positive spatial autocorrelation corresponds to greater genetic similarity between geographically closer individuals. Conversely, negative spatial autocorrelation corresponds to greater dissimilarity between neighbours. The spatial autocorrelation of a variable is measured by Moran's I, and interpreted as follows:

- I = I0 = -1/(n - 1): no spatial autocorrelation (x is randomly distributed across space)
- I > I0: positive spatial autocorrelation
- I < I0: negative spatial autocorrelation

Principal components of PCA ensure that var(φ) is maximal (φ referring to one PC). By contrast, spca provides PCs which optimise the quantity var(φ)I(φ). In other words, PCA focuses on variability only, while spca is a compromise between variability (var(φ)) and spatial structure (I(φ)).
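As a sanity check, one can recompute the first eigenvalue from these two components, using moran() and Szero() from spdep. This is a sketch, not part of the original text, and assumes that the spca eigenvalue is exactly the product of the variance (with uniform weights 1/n) and the Moran's I of the corresponding score; phi1, I.phi1 and var.phi1 are illustrative names:

# first spca score, its Moran's I, and its variance (1/n normalisation)
phi1 <- rupica.spca1$li[, 1]
n <- length(phi1)
I.phi1 <- moran(phi1, rupica.graph, n=n, S0=Szero(rupica.graph))$I
var.phi1 <- var(phi1) * (n - 1)/n
var.phi1 * I.phi1   # should be close to rupica.spca1$eig[1]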

In this case, only the principal components associated with the first two positive eigenvalues (in red) shall be retained. The printing of spca objects is more explicit than that of dudi objects, but the components are named with the same conventions:

rupica.spca1

# spatial Principal Component Analysis #
class: spca
$call: spca(obj = rupica, cn = rupica.graph, scannf = FALSE, nfposi = 2,
    nfnega = 0)

$nfposi: 2 axis-components saved
$nfnega: 0 axis-components saved
Positive eigenvalues: 0.03018 0.01408 0.009211 0.006835 0.004529 ...
Negative eigenvalues: -0.008611 -0.006414 -0.004451 -0.003963 -0.003329 ...

  vector length mode    content
1 $eig   45     numeric eigenvalues

  data.frame nrow ncol content
1 $c1        55   2    principal axes: scaled vectors of alleles loadings
2 $li        335  2    principal components: coordinates of entities ('scores')
3 $ls        335  2    lag vector of principal components
4 $as        2    2    pca axes onto spca axes

$xy: matrix of spatial coordinates
$lw: a list of spatial weights (class 'listw')

other elements: NULL

Unlike the eigenvalues of usual multivariate analyses, eigenvalues of spca are composite: they measure both the genetic diversity (variance) and the spatial structure (spatial autocorrelation measured by Moran's I). This decomposition can also be used to choose which principal components to interpret. The function screeplot allows this information to be displayed graphically:

screeplot(rupica.spca1)

[Figure: "Spatial and variance components of the eigenvalues": each eigenvalue λ1 to λ45 plotted by its variance (x-axis) and its spatial autocorrelation, Moran's I (y-axis)]

While λ1 indicates a structure beyond doubt, the second eigenvalue, λ2, is less clearly distinct from the subsequent values. Thus, we shall keep this uncertainty in mind when interpreting the second principal component of the analysis. Try visualising the spca results as you did before with the PCA results. To clarify the possible spatial patterns, you can map the lagged PCs ($ls) instead of the PCs ($li); the lagged scores are a denoised version of the PCs. First, map the first principal component of spca. How would you interpret this result? How does it compare to the first PC of the PCA? What inference can we make about the way the landscape influences gene flow in this population of chamois? Do the same with the second PC of spca. Some field observations suggest that this pattern is not artefactual. How would you interpret this second structure?
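As a starting point for these maps, here is a possible sketch (not part of the original text) for the first lagged score, reusing showbauges and s.value exactly as in the PCA maps above; the second score can be mapped the same way:

# map of the first lagged spca score over the study area
showbauges()
s.value(rupica$other$xy, rupica.spca1$ls[,1], add.p=TRUE, csize=0.7)
title("spca - first PC (lagged scores)", col.main="yellow",
      line=-2, cex.main=2)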

To finish, you can try representing both structures at the same time using the color coding introduced by [1] (see ?colorplot). The final figure should resemble this (although colors may change from one computer to another):

showbauges()
colorplot(rupica$other$xy, rupica.spca1$ls, axes=1:2, transp=TRUE, add=TRUE, cex=2)
title("spca - colorplot of PC 1 and 2\n(lagged scores)",
      col.main="yellow", line=-2, cex=2)

[Figure: "spca - colorplot of PC 1 and 2 (lagged scores)" over the map of the study area; scale bar: 5 km]

6 To go further

More spatial genetics methods are presented in the basics tutorial as well as in the tutorial dedicated to spca, both of which you can access from the adegenet website:

http://adegenet.r-forge.r-project.org/

or by typing:

adegenetTutorial("basics")
adegenetTutorial("spca")

References

[1] L. L. Cavalli-Sforza, P. Menozzi, and A. Piazza. Demic expansions and human evolution. Science, 259:639–646, 1993.

[2] I. T. Jolliffe. Principal Component Analysis. Springer Series in Statistics. Springer, second edition, 2004.

[3] T. Jombart, S. Devillard, A.-B. Dufour, and D. Pontier. Revealing cryptic spatial patterns in genetic variability by a new multivariate method. Heredity, 101:92–103, 2008.

[4] K. Pearson. On lines and planes of closest fit to systems of points in space. Philosophical Magazine, 2:559–572, 1901.