
Multivariate analysis of genetic data: investigating spatial structures

Thibaut Jombart
MRC Centre for Outbreak Analysis and Modelling, Imperial College London
tjombart@imperial.ac.uk

August 19, 2016

Abstract

This practical provides an introduction to the analysis of spatial genetic structures using R. Using an empirical dataset of microsatellites sampled from wild goats in the Bauges mountains (France), we illustrate univariate and multivariate tests of spatial structures, and then introduce the spatial Principal Component Analysis (sPCA) for uncovering spatial genetic patterns. For a more complete overview of spatial genetics methods, see the basics and sPCA tutorials for adegenet, which you can open using adegenetTutorial("basics") and adegenetTutorial("spca") (with an internet connection) or from the adegenet website.

Contents

1 An overview of the data
2 Summarising the genetic diversity
3 Mapping and testing PCA results
4 Multivariate tests of spatial structure
5 spatial Principal Component Analysis
6 To go further

The chamois (Rupicapra rupicapra) is a conserved species in France. The Bauges mountains are a protected area in which the species has recently been studied. One of the most important questions for conservation purposes is whether individuals from this area form a single reproductive unit, or whether they are structured into sub-groups, and if so, which factors are likely to induce this structuring. While field observations are too scarce to answer this question, genetic data can be used to tackle the issue, as departure from panmixia should result in genetic structuring. The dataset rupica contains 335 georeferenced genotypes of chamois from the Bauges mountains typed at 9 microsatellite markers, which we propose to analyse in this exercise.

1 An overview of the data

We first load some required packages and the data:

library(spdep)
library(adehabitat)
library(ade4)
library(adegenet)

/// adegenet 2.0.1 is loaded ////////////
> overview: ?adegenet
> tutorials/doc/questions: adegenetWeb()
> bug reports/feature requests: adegenetIssues()

data(rupica)
rupica

/// GENIND OBJECT /////////
// 335 individuals; 9 loci; 55 alleles; size: 217.2 Kb

// Basic content
@tab: 335 x 55 matrix of allele counts
@loc.n.all: number of alleles per locus (range: 4-10)
@loc.fac: locus factor for the 55 columns of @tab
@all.names: list of allele names for each locus
@ploidy: ploidy of each individual (range: 2-2)
@type: codom
@call: NULL

// Optional content
@other: a list containing: xy mnt showbauges
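The @other slot holds the spatial information used throughout this practical. As a quick check (a minimal sketch, not part of the original exercise; nInd, nLoc and other are standard adegenet accessors), you can inspect the basic dimensions of the data and the coordinates:

nInd(rupica)            # number of genotypes (335)
nLoc(rupica)            # number of microsatellite loci (9)
head(other(rupica)$xy)  # spatial coordinates of the first individuals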

rupica is a typical genind object, the class of objects used by adegenet to store individual genotypes (as opposed to population data). rupica also contains topographic information about the sampled area, which can be displayed by calling rupica$other$showbauges. For instance, the spatial distribution of the sampling can be displayed as follows:

other(rupica)$showbauges()
points(other(rupica)$xy, col="red", pch=20)

[Figure: map of the Bauges area, with sampled individuals shown as red dots.]

This spatial distribution is clearly not random, but seems arranged into loose clusters. However, superimposed samples can bias our visual assessment of the spatial clustering. Use a two-dimensional kernel density estimation (function s.kde2d) to overcome this possible issue:

other(rupica)$showbauges()
s.kde2d(other(rupica)$xy, add.plot=TRUE)
points(other(rupica)$xy, col="red", pch=20)

[Figure: map of the Bauges area with the kernel density of the sampling overlaid on the sampled locations.]

Is the geographical clustering strong enough to safely assign each individual to a group? Accordingly, shall we analyse these data at the individual or at the group level?

2 Summarising the genetic diversity

As a prior clustering of the genotypes is not known, we cannot employ the usual FST-based approaches to detect genetic structuring. However, genetic structure could still result in a deficit of heterozygosity. Use the summary of genind objects to compare expected and observed heterozygosity:

rupica.smry <- summary(rupica)
plot(rupica.smry$Hexp, rupica.smry$Hobs, main="Observed vs expected heterozygosity")
abline(0, 1, col="red")

[Figure: observed heterozygosity (rupica.smry$Hobs) plotted against expected heterozygosity (rupica.smry$Hexp), one point per locus, with the identity line in red.]

The red line indicates identity between the two quantities. What can we say about heterozygosity in this population? How can this be tested? The result below can be reproduced using a standard testing procedure:

t.test(rupica.smry$Hexp, rupica.smry$Hobs, paired=TRUE, var.equal=TRUE)

Paired t-test

data: rupica.smry$Hexp and rupica.smry$Hobs
t = 1.1761, df = 8, p-value = 0.2734
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.007869215  0.024249885
sample estimates:
mean of the differences
            0.008190335
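With only 9 loci, the paired t-test relies on approximate normality of the per-locus differences. As a quick robustness check (a sketch, not part of the original exercise; wilcox.test is base R), a non-parametric paired test can be run on the same quantities:

# non-parametric alternative to the paired t-test above
wilcox.test(rupica.smry$Hexp, rupica.smry$Hobs, paired=TRUE)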

We can seek a global picture of the genetic diversity among genotypes using a Principal Component Analysis (PCA, [4, 2]; dudi.pca in the ade4 package). The analysis is performed on a table of allele frequencies, obtained with tab. The function dudi.pca displays a barplot of eigenvalues and asks for a number of principal components to retain:

rupica.x <- tab(rupica, freq=TRUE, NA.method="mean")
rupica.pca1 <- dudi.pca(rupica.x, cent=TRUE, scannf=FALSE, nf=2)
barplot(rupica.pca1$eig)

[Figure: barplot of the eigenvalues of the PCA.]

The output produced by dudi.pca is a dudi object. A dudi object contains various pieces of information; in the case of PCA, the principal axes (loadings), the principal components (synthetic variables), and the eigenvalues are stored in the $c1, $li, and $eig slots, respectively. Here is the content of the PCA:

rupica.pca1

Duality diagramm
class: pca dudi
$call: dudi.pca(df = rupica.x, center = TRUE, scannf = FALSE, nf = 2)
$nf: 2 axis-components saved
$rank: 51
eigen values: 3.013 2.563 2.271 2.107 2.1 ...

  vector length mode    content
1 $cw    55     numeric column weights
2 $lw    335    numeric row weights
3 $eig   51     numeric eigen values

  data.frame nrow ncol content
1 $tab       335  55   modified array
2 $li        335  2    row coordinates
3 $l1        335  2    row normed scores
4 $co        55   2    column coordinates
5 $c1        55   2    column normed scores

other elements: cent norm

In general, eigenvalues represent the amount of genetic diversity, as measured by the multivariate method being used, that is carried by each principal component (PC). Verify that here, each eigenvalue equals the variance of the corresponding PC (the factor 334/335 converts the sample variance returned by var, which divides by n - 1, into the variance with divisor n = 335 used by dudi.pca):

head(rupica.pca1$eig)

[1] 3.013193 2.562510 2.271093 2.107049 2.099738 1.971703

apply(rupica.pca1$li, 2, var)*334/335

   Axis1    Axis2
3.013193 2.562510

An abrupt decrease in eigenvalues is likely to indicate the boundary between true patterns and non-interpretable structures. In this case, how many PCs would you interpret? Use s.label to display the first two components of the analysis. Then, use a kernel density (s.kde2d) for a better assessment of the distribution of the genotypes onto the principal axes:

s.label(rupica.pca1$li)
s.kde2d(rupica.pca1$li, add.p=TRUE, cpoint=0)
add.scatter.eig(rupica.pca1$eig, 2, 1, 2)
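To help decide how many PCs are worth interpreting, it can also be useful to express each eigenvalue as a proportion of the total variance (a minimal sketch, not part of the original exercise; it only uses the $eig slot described above):

eig.perc <- 100 * rupica.pca1$eig / sum(rupica.pca1$eig)  # percentage of variance per PC
head(round(eig.perc, 1))                                  # first eigenvalues
head(round(cumsum(eig.perc), 1))                          # cumulative percentage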

[Figure: scatterplot of the genotypes on the first two principal components (s.label), with the kernel density overlaid and an inset barplot of the eigenvalues.]

What can we say about the genetic diversity among these genotypes as inferred by PCA? The function loadingplot can be used to visualise the contribution of each allele, expressed as squared loadings, for a given principal component. Using this function, reproduce this figure:

loadingplot(rupica.pca1$c1^2)

[Figure: loading plot of the squared allele loadings on the first principal component; the labelled alleles include Bm203.221, Eth225.136, Eth225.142, Eth225.146, Eth225.148, Bm4505.260, Bmc1009.230, Eth225.151, Hel1.120, Bm4505.266, Ilst030.161, Maf70.134, Maf70.139 and Sr.231.]

What do we observe? We can go back to the genotypes for the markers concerned (e.g., Bm203) to check whether the highlighted genotypes are uncommon. tab extracts the table of allele frequencies from a genind object:

X <- tab(rupica)
class(X)

[1] "matrix"

dim(X)

[1] 335  55

bm203.221 <- X[, "Bm203.221"]
table(bm203.221)

bm203.221
  0   1
331   4

Only 4 genotypes possess one copy of allele 221 of marker Bm203 (the second result corresponds to a replaced missing value). Which individuals are they?

rownames(X)[bm203.221 > 0]

[1] "8"    "86"   "600"  "7385"

Conclusion?

3 Mapping and testing PCA results

A frequent practice in spatial genetics is to map the first principal components (PCs) onto the geographic space. The function s.value is well suited to this task, using black and white squares of variable size for positive and negative values, respectively. To get a legend for this type of representation:

s.value(cbind(1:11, rep(1,11)), -5:5, cleg=0)
text(1:11, rep(1,11), -5:5, col="red", cex=1.5)

[Figure: legend of the s.value representation, with squares of increasing size for values from -5 to 5; white squares indicate negative values and black squares positive values.]

Apply this graphical representation to the first two PCs of the PCA:

showbauges <- other(rupica)$showbauges
showbauges()
s.value(other(rupica)$xy, rupica.pca1$li[,1], add.p=TRUE, cleg=0.5)
title("PCA - first PC", col.main="yellow", line=-2, cex.main=2)

[Figure: map of the first PC of the PCA over the sampled area.]

showbauges()
s.value(other(rupica)$xy, rupica.pca1$li[,2], add.p=TRUE, csize=0.7)
title("PCA - second PC", col.main="yellow", line=-2, cex.main=2)

[Figure: map of the second PC of the PCA over the sampled area.]

What can we say about spatial genetic structure as inferred by PCA? This visual assessment can be complemented by testing the spatial autocorrelation in the first PCs of the PCA. This can be achieved using Moran's I test. Use the function moran.mc in the package spdep to perform these tests. You will first need to define the spatial connectivity between the sampled individuals. For these data, spatial connectivity is best defined as the overlap between the home ranges of individuals. Home ranges will be modelled as disks with a radius of 1150 m. Use chooseCN to create a connection network based on a distance range ("neighbourhood by distance"). What threshold distance do you choose for individuals to be considered as neighbours? (Two disks of radius 1150 m overlap whenever their centres are less than 2300 m apart, hence the threshold used below.)

rupica.graph <- chooseCN(other(rupica)$xy, type=5, d1=0, d2=2300, plot=FALSE, res="listw")

The connection network should resemble this:

rupica.graph

Characteristics of weights list object:
Neighbour list object:
Number of regions: 335
Number of nonzero links: 18018
Percentage nonzero weights: 16.05525
Average number of links: 53.78507

Weights style: W
Weights constants summary:
    n     nn  S0       S1      S2
W 335 112225 335 15.04311 1352.07

plot(rupica.graph, other(rupica)$xy)
title("rupica.graph")

[Figure: plot of the rupica.graph connection network over the sampling locations.]

Perform Moran's test for the first two PCs, and plot the results. The first test should be significant:

pc1.mctest <- moran.mc(rupica.pca1$li[,1], rupica.graph, 999)
plot(pc1.mctest)

[Figure: density plot of the permutation outcomes of the Monte Carlo simulation of Moran's I for the first PC, with the observed statistic indicated.]

Compare this result to the mapping of the first PC of the PCA. What is wrong? When a test gives unexpected results, it is worth looking into the data in more detail. Moran's plot (moran.plot) plots the tested variable against its lagged vector. Use it on the first PC:

moran.plot(rupica.pca1$li[,1], rupica.graph)
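The lag vector can also be computed directly, which makes the link with autocorrelation explicit (a minimal sketch using spdep's lag.listw; not part of the original exercise): the lagged value of an individual is the weighted mean of its neighbours' values, and positive autocorrelation corresponds to a positive correlation between the variable and its lag.

pc1 <- rupica.pca1$li[, 1]
pc1.lag <- lag.listw(rupica.graph, pc1)  # weighted mean of neighbours' values
cor(pc1, pc1.lag)                        # sign indicates the direction of autocorrelation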

[Figure: Moran's plot of the first PC (rupica.pca1$li[, 1]) against its spatially lagged values, with influential points labelled.]

Actual positive autocorrelation corresponds to a positive correlation between a variable and its lag vector. Is this the case here? How can we explain that Moran's test was significant? Repeat these analyses for the second PC. What are your conclusions?

pc2.mctest <- moran.mc(rupica.pca1$li[,2], rupica.graph, 999)
pc2.mctest

Monte-Carlo simulation of Moran I

data: rupica.pca1$li[, 2]
weights: rupica.graph
number of simulations + 1: 1000

statistic = 0.0048162, observed rank = 797, p-value = 0.203
alternative hypothesis: greater

plot(pc2.mctest)

[Figure: density plot of the permutation outcomes of the Monte Carlo simulation of Moran's I for the second PC.]

4 Multivariate tests of spatial structure

So far, we have only tested the existence of spatial structures in the first two principal components of a PCA of the data. Therefore, these tests only describe one fragment of the data, and do not encompass its whole diversity. As a complement, we can use a Mantel test (mantel.randtest) to test spatial structures in the whole dataset, by assessing the correlation between genetic distances and geographic distances. Pairwise Euclidean distances are computed using dist. Perform a Mantel test, using the scaled genetic data you used before in the PCA, and the geographic coordinates:

mtest <- mantel.randtest(dist(rupica.x), dist(other(rupica)$xy))
plot(mtest, nclass=30)
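Printing the test object gives the observed correlation and the permutation p-value directly (a small sketch; mantel.randtest returns a standard ade4 randtest object, whose $obs and $pvalue components hold these values):

mtest         # prints the observed statistic, simulation summary and p-value
mtest$obs     # observed Mantel correlation
mtest$pvalue  # permutation p-value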

[Figure: histogram of the simulated Mantel statistics (sim), with the observed value indicated.]

What is your conclusion? Shall we be looking for spatial structures? If so, how can we explain that PCA did not reveal them?

5 spatial Principal Component Analysis

The spatial Principal Component Analysis (sPCA, function spca, [3]) has been developed specifically to investigate hidden or non-obvious spatial genetic patterns. Like Moran's I test, sPCA first requires the spatial proximities between genotypes to be modelled. You will reuse the connection network defined previously with chooseCN, and pass it as the cn argument of the function spca. Read the documentation of spca, and apply the function to the dataset rupica. The function will display a barplot of eigenvalues:

rupica.spca1 <- spca(rupica, cn=rupica.graph, scannf=FALSE, nfposi=2, nfnega=0)
barplot(rupica.spca1$eig, col=rep(c("red","grey"), c(2, 49)))

[Figure: barplot of the sPCA eigenvalues, showing both positive and negative values, with the first two eigenvalues in red.]

This figure illustrates the fundamental difference between PCA and sPCA. Like dudi.pca, spca displays a barplot of eigenvalues, but unlike in PCA, the eigenvalues of sPCA can also be negative. This is because the criterion optimized by the analysis can take positive and negative values, corresponding respectively to positive and negative autocorrelation. Positive spatial autocorrelation corresponds to greater genetic similarity between geographically closer individuals. Conversely, negative spatial autocorrelation corresponds to greater dissimilarity between neighbours. The spatial autocorrelation of a variable x is measured by Moran's I, and interpreted as follows:

I close to I0 = -1/(n-1): no spatial autocorrelation (x is randomly distributed across space)
I > I0: positive spatial autocorrelation
I < I0: negative spatial autocorrelation

Principal components of PCA ensure that (φ referring to one PC) var(φ) is maximum. By contrast, sPCA provides PCs which decompose the quantity var(φ)I(φ). In other words, PCA focuses on variability only, while sPCA is a compromise between variability (var(φ)) and spatial structure (I(φ)).
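For this dataset the reference value I0 is easy to compute, which helps to read both the Moran's I tests above and the sPCA eigenvalues (a trivial sketch; nInd is the adegenet accessor for the number of individuals):

I0 <- -1 / (nInd(rupica) - 1)  # expected Moran's I under no spatial autocorrelation
I0                             # approximately -0.003 for n = 335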

In this case, only the principal components associated with the first two positive eigenvalues (in red) shall be retained. The printout of spca objects is more explicit than that of dudi objects, but follows the same naming conventions:

rupica.spca1

# spatial Principal Component Analysis #
class: spca
$call: spca(obj = rupica, cn = rupica.graph, scannf = FALSE, nfposi = 2, nfnega = 0)
$nfposi: 2 axis-components saved
$nfnega: 0 axis-components saved
Positive eigenvalues: 0.03001 0.01445 0.00924 0.006851 0.004485 ...
Negative eigenvalues: -0.008643 -0.006407 -0.004488 -0.003977 -0.003248 ...

  vector length mode    content
1 $eig   51     numeric eigenvalues

  data.frame nrow ncol content
1 $c1        55   2    principal axes: scaled vectors of alleles loadings
2 $li        335  2    principal components: coordinates of entities ('scores')
3 $ls        335  2    lag vector of principal components
4 $as        2    2    pca axes onto spca axes

$xy: matrix of spatial coordinates
$lw: a list of spatial weights (class 'listw')

other elements: NULL

Unlike in usual multivariate analyses, the eigenvalues of sPCA are composite: they measure both the genetic diversity (variance) and the spatial structure (spatial autocorrelation measured by Moran's I). This decomposition can also be used to choose which principal components to interpret. The function screeplot displays this information graphically:

screeplot(rupica.spca1)

[Figure: screeplot of the sPCA, decomposing each eigenvalue into its variance and spatial autocorrelation (Moran's I) components; λ1 stands out clearly while λ2 lies closer to the bulk of the remaining eigenvalues.]

While λ1 undoubtedly indicates a structure, the second eigenvalue, λ2, is less clearly distinct from the subsequent values. Thus, we shall keep this uncertainty in mind when interpreting the second principal component of the analysis.

Try visualising the sPCA results as you did before with the PCA results. To clarify the possible spatial patterns, you can map the lagged PCs ($ls), which are a de-noised version of the PCs, instead of the PCs themselves ($li); a short sketch for doing so is given at the end of this section. First, map the first principal component of the sPCA. How would you interpret this result? How does it compare to the first PC of the PCA? What inference can we make about the way the landscape influences gene flow in this population of chamois? Do the same with the second PC of the sPCA. Some field observations suggest that this pattern is not artefactual. How would you interpret this second structure? To finish, you can try representing both structures at the same time using the colour coding introduced by [1] (?colorplot). The final figure should resemble this (although colours may change from one computer to another):

showbauges()
colorplot(other(rupica)$xy, rupica.spca1$ls, axes=1:2, transp=TRUE, add=TRUE, cex=2)
title("sPCA - colorplot of PC 1 and 2\n(lagged scores)", col.main="yellow", line=-2, cex=2)

[Figure: colorplot of the first two sPCA lagged scores over the sampled area.]
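As mentioned above, each sPCA structure can also be mapped on its own using the lagged scores, in the same way as the PCA scores were mapped earlier (a minimal sketch reusing s.value and the objects defined above; the graphical settings are arbitrary):

showbauges()
s.value(other(rupica)$xy, rupica.spca1$ls[,1], add.p=TRUE, csize=0.7)
title("sPCA - first lagged score", col.main="yellow", line=-2, cex.main=2)

showbauges()
s.value(other(rupica)$xy, rupica.spca1$ls[,2], add.p=TRUE, csize=0.7)
title("sPCA - second lagged score", col.main="yellow", line=-2, cex.main=2)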

6 To go further

More spatial genetics methods are presented in the basics tutorial as well as in the tutorial dedicated to sPCA, which you can access from the adegenet website:

http://adegenet.r-forge.r-project.org/

or by typing:

adegenetTutorial("basics")
adegenetTutorial("spca")

References

[1] L. L. Cavalli-Sforza, P. Menozzi, and A. Piazza. Demic expansions and human evolution. Science, 259:639-646, 1993.
[2] I. T. Jolliffe. Principal Component Analysis. Springer Series in Statistics. Springer, second edition, 2004.
[3] T. Jombart, S. Devillard, A.-B. Dufour, and D. Pontier. Revealing cryptic spatial patterns in genetic variability by a new multivariate method. Heredity, 101:92-103, 2008.
[4] K. Pearson. On lines and planes of closest fit to systems of points in space. Philosophical Magazine, 2:559-572, 1901.