Séminaire de statistique IRMA Strasbourg

A nonparametric test for change-point detection in the copula of multivariate observations with changes in the marginal distributions
Illustration of the methods and Monte Carlo simulations

Séminaire de statistique IRMA Strasbourg, 24 November 2015

Tom Rohmer, Université de Bordeaux 2, ISPED

In collaboration with Ivan Kojadinovic (Université de Pau) and Jean-François Quessy (Université du Québec à Trois-Rivières), and with Axel Bücher (Universität Heidelberg) and Johan Segers (Université catholique de Louvain).

Summary

1. Introduction
2. Measuring multivariate dependence
   - Sklar's theorem
   - The role of copulas in testing for breaks
3. A Cramér-von Mises test statistic
   - The empirical copula
   - The test statistic
4. Computation of approximate p-values
   - The sequential empirical copula process
   - A resampling scheme for the sequential empirical copula process
5. Monte Carlo simulations

1. Introduction

Test for break detection

Let X_1, ..., X_n be d-dimensional random vectors, where X_i = (X_{i1}, ..., X_{id}) for i = 1, ..., n. We aim at testing the null hypothesis

H_0: there exists a c.d.f. F such that X_1, ..., X_n all have c.d.f. F.

Examples of alternative hypotheses:

- An abrupt change at time k* in {1, ..., n-1}: there exist distinct c.d.f.s F^{(1)} and F^{(2)} such that X_1, ..., X_{k*} have c.d.f. F^{(1)} and X_{k*+1}, ..., X_n have c.d.f. F^{(2)}.

- A gradual change: for 1 <= k_1 < k_2 <= n-1, X_1, ..., X_{k_1} have c.d.f. F^{(1)}, X_{k_2}, ..., X_n have c.d.f. F^{(2)}, and in between the law of X_i moves gradually from F^{(1)} to F^{(2)}.

Cumulative Sum (CUSUM) test for H_0, serially independent data (I)

Example 1: test for a change in the mean (d = 1):

T_n^\mu = \max_{k=1,\dots,n-1} \frac{k(n-k)}{n^{3/2}} \left| \frac{1}{k}\sum_{i=1}^{k} X_i - \frac{1}{n-k}\sum_{i=k+1}^{n} X_i \right|
        = \sup_{s\in[0,1]} \left| \frac{1}{\sqrt{n}} \sum_{i=1}^{\lfloor ns \rfloor} (X_i - \bar X_n) \right|.

Under H_0, as soon as X_1, ..., X_n are i.i.d.,

T_n^\mu \rightsquigarrow \sigma \sup_{s\in[0,1]} |U(s)|,

where \sigma^2 is the (unknown) variance of X_i and U is a standard Brownian bridge, i.e. a centered Gaussian process with covariance function

\mathrm{cov}\{U(s), U(t)\} = \min(s,t) - st, \qquad s, t \in [0,1].
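To make the CUSUM formula concrete, here is a minimal numpy sketch of T_n^\mu (illustration only; the function name and the simulated example are mine, not from the slides):

```python
import numpy as np

def cusum_mean_statistic(x):
    """T_n^mu = max_k k(n-k)/n^{3/2} * |mean(x_1..x_k) - mean(x_{k+1}..x_n)|."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    return max(
        k * (n - k) / n**1.5 * abs(x[:k].mean() - x[k:].mean())
        for k in range(1, n)
    )

# Quick illustration: no change versus a shift in the mean halfway through the series.
rng = np.random.default_rng(0)
print(cusum_mean_statistic(rng.normal(size=200)))                      # typically small
print(cusum_mean_statistic(np.r_[rng.normal(size=100),
                                 rng.normal(loc=1.0, size=100)]))      # typically large
```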

Cumulative Sum (CUSUM) test for H_0, serially independent data (II)

Example 2: test à la [Csörgő and Horváth (1997)]:

T_n^{\#} = \max_{k=1,\dots,n-1} \frac{k(n-k)}{n^{3/2}} \sup_{x\in\mathbb{R}^d} \left| F_{1:k}(x) - F_{k+1:n}(x) \right|
         = \sup_{s\in[0,1]} \sup_{x\in\mathbb{R}^d} \left| \frac{1}{\sqrt{n}} \sum_{i=1}^{\lfloor ns \rfloor} \{ \mathbf{1}(X_i \le x) - F_{1:n}(x) \} \right|,

where, for 1 \le k \le n, F_{1:k} (resp. F_{k+1:n}) is the empirical c.d.f. of the subsample X_1, ..., X_k (resp. X_{k+1}, ..., X_n):

F_{1:k}(x) = \frac{1}{k} \sum_{i=1}^{k} \mathbf{1}(X_i \le x), \qquad
F_{k+1:n}(x) = \frac{1}{n-k} \sum_{i=k+1}^{n} \mathbf{1}(X_i \le x), \qquad
F_{1:n}(x) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}(X_i \le x).
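A sketch of the multivariate statistic T_n^{\#}, where the supremum over x is approximated by evaluating the two empirical c.d.f.s at the observed data points (an approximation of my own, not the slides' exact computation):

```python
import numpy as np

def ecdf_at(sample, points):
    """Empirical c.d.f. of `sample` evaluated at each row of `points` (componentwise <=)."""
    return np.mean(np.all(sample[None, :, :] <= points[:, None, :], axis=2), axis=1)

def cusum_ecdf_statistic(X):
    """Approximation of T_n^#: the sup over x in R^d is taken over the observed X_i."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    return max(
        k * (n - k) / n**1.5 * np.max(np.abs(ecdf_at(X[:k], X) - ecdf_at(X[k:], X)))
        for k in range(1, n)
    )
```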

Strong mixing conditions

In this work, we do not necessarily assume the observations to be serially independent: the asymptotic validity of the techniques is proved for strongly mixing observations [Rosenblatt (1956)].

Consider a sequence of d-dimensional random vectors (Y_i)_{i \in \mathbb{Z}}. For s, t \in \mathbb{Z} \cup \{-\infty, +\infty\}, let \mathcal{F}_s^t = \sigma(Y_i, s \le i \le t) be the \sigma-algebra generated by the Y_i, s \le i \le t. The strong mixing coefficients are defined by

\alpha_r = \sup_{t\in\mathbb{Z}} \; \sup_{A \in \mathcal{F}_{-\infty}^{t},\, B \in \mathcal{F}_{t+r}^{+\infty}} \left| P(A \cap B) - P(A)P(B) \right|.

The sequence (Y_i)_{i\in\mathbb{Z}} is said to be strongly mixing (\alpha-mixing) if \alpha_r \to 0 as r \to +\infty.

Nasdaq, Dow Jones and Black Monday (1987-10-19):

approximate p-value of the test based on T_n^{\#}: 0.0004 < \alpha = 0.05, so H_0 is rejected.

2. Measuring multivariate dependence

Sklar's theorem

Theorem [Sklar (1959)]. Let X = (X_1, ..., X_d) be a d-dimensional random vector with c.d.f. F and let F_1, ..., F_d be the marginal c.d.f.s of X, assumed continuous. Then there exists a unique function C : [0,1]^d \to [0,1] such that

F(x) = C\{F_1(x_1), ..., F_d(x_d)\}, \qquad x = (x_1, ..., x_d) \in \mathbb{R}^d.

The copula C characterizes the dependence structure of the vector X. It can be expressed as

C(u) = F\{F_1^{-1}(u_1), ..., F_d^{-1}(u_d)\}, \qquad u = (u_1, ..., u_d) \in [0,1]^d.

A. Sklar. Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut de Statistique de l'Université de Paris, 8:229-231, 1959.

Classical copulas

Independence copula:
C^{\Pi}(u) = \prod_{j=1}^{d} u_j;

Normal copulas:
C^{N}_{\Sigma}(u) = \Phi_{d,\Sigma}\{\Phi^{-1}(u_1), ..., \Phi^{-1}(u_d)\};

Gumbel-Hougaard copulas:
C^{GH}_{\theta}(u) = \exp\left( -\left[ \sum_{j=1}^{d} \{-\log(u_j)\}^{\theta} \right]^{1/\theta} \right), \qquad \theta \ge 1;

Clayton copulas:
C^{Cl}_{\theta}(u) = \left( \sum_{j=1}^{d} u_j^{-\theta} - d + 1 \right)^{-1/\theta}, \qquad \theta > 0.
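A small numpy/scipy sketch evaluating these copula c.d.f.s (the function names and the example point are mine; the Normal copula relies on scipy's multivariate normal c.d.f.):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def independence_copula(u):
    return float(np.prod(u))

def gumbel_hougaard_copula(u, theta):          # theta >= 1
    return float(np.exp(-(np.sum((-np.log(u)) ** theta) ** (1.0 / theta))))

def clayton_copula(u, theta):                  # theta > 0
    u = np.asarray(u, dtype=float)
    return float((np.sum(u ** (-theta)) - len(u) + 1.0) ** (-1.0 / theta))

def normal_copula(u, Sigma):
    z = norm.ppf(u)                            # componentwise standard normal quantiles
    return float(multivariate_normal(mean=np.zeros(len(u)), cov=Sigma).cdf(z))

u = np.array([0.3, 0.7])
print(independence_copula(u), gumbel_hougaard_copula(u, 2.0),
      clayton_copula(u, 2.0), normal_copula(u, np.array([[1.0, 0.5], [0.5, 1.0]])))
```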

The role of copulas in testing for breaks

H_0: there exists F such that X_1, ..., X_n have c.d.f. F.

Sklar's theorem allows us to rewrite H_0 as H_{0,m} \cap H_{0,c}, where

H_{0,c}: there exists a copula C such that X_1, ..., X_n have copula C;
H_{0,m}: there exist F_1, ..., F_d such that X_1, ..., X_n have marginal c.d.f.s F_1, ..., F_d.

Goal: construct a test for H_0, based on the CUSUM approach, that is more powerful than its predecessors against alternatives involving a change in the copula. Note that F, F_1, ..., F_d and C are all unknown.

3. A Cramér-von Mises test statistic

Empirical copula (I)

Let X_1, ..., X_n be d-dimensional random vectors with continuous marginal c.d.f.s F_1, ..., F_d and copula C. For i = 1, ..., n, the random vectors

U_i = (F_1(X_{i1}), ..., F_d(X_{id})) \sim C.

If F_1, ..., F_d were known, a natural estimator of C would be the empirical c.d.f. of U_1, ..., U_n. Here F_1, ..., F_d are unknown, so C is estimated by the empirical c.d.f. of pseudo-observations of the copula C: for j = 1, ..., d, let F_{1:n,j} be the empirical c.d.f. of the sample X_{1j}, ..., X_{nj}, and for i = 1, ..., n consider the vectors

\hat U_i^{1:n} = (F_{1:n,1}(X_{i1}), ..., F_{1:n,d}(X_{id})) = \frac{1}{n}(R_{i1}^{1:n}, ..., R_{id}^{1:n}),

where, for j = 1, ..., d, R_{ij}^{1:n} is the maximal rank of X_{ij} among X_{1j}, ..., X_{nj}.

Empirical copula (II)

[Rüschendorf (1976)], [Deheuvels (1979)]:

C_{1:n}(u) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}(\hat U_i^{1:n} \le u), \qquad u \in [0,1]^d.

Let C_{1:k} (resp. C_{k+1:n}) be the empirical copula evaluated on X_1, ..., X_k (resp. X_{k+1}, ..., X_n):

C_{1:k}(u) = \frac{1}{k} \sum_{i=1}^{k} \mathbf{1}(\hat U_i^{1:k} \le u), \qquad
C_{k+1:n}(u) = \frac{1}{n-k} \sum_{i=k+1}^{n} \mathbf{1}(\hat U_i^{k+1:n} \le u), \qquad u \in [0,1]^d.

[Schematic: the ranks R^{1:k} are computed within the subsample X_1, ..., X_k and yield C_{1:k}; the ranks R^{k+1:n} are computed within X_{k+1}, ..., X_n and yield C_{k+1:n}.]
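A minimal sketch of the pseudo-observations and of the empirical copula, using maximal ranks as on the slide (helper names and the example are mine):

```python
import numpy as np
from scipy.stats import rankdata

def pseudo_observations(X):
    """U-hat_i^{1:n} = (R_{i1}/n, ..., R_{id}/n), with maximal ranks in case of ties."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    return np.column_stack([rankdata(col, method="max") for col in X.T]) / n

def empirical_copula(U_hat, u):
    """C_{1:n}(u): proportion of pseudo-observations that are componentwise <= u."""
    return float(np.mean(np.all(U_hat <= np.asarray(u, dtype=float), axis=1)))

# Example: empirical copula of a bivariate Gaussian sample at u = (0.5, 0.5).
rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=500)
print(empirical_copula(pseudo_observations(X), [0.5, 0.5]))
```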

Break detection in the copula

We consider the process

D_n(s,u) = \sqrt{n}\, \frac{\lfloor ns \rfloor}{n} \left(1 - \frac{\lfloor ns \rfloor}{n}\right) \{ C_{1:\lfloor ns \rfloor}(u) - C_{\lfloor ns \rfloor+1:n}(u) \}, \qquad (s,u) \in [0,1]^{d+1},

a Cramér-von Mises statistic

S_{n,k} = \int_{[0,1]^d} D_n^2(k/n, u)\, \mathrm{d}C_{1:n}(u) = \frac{1}{n} \sum_{i=1}^{n} D_n^2(k/n, \hat U_i^{1:n}),

and S_n = \max_{k \in \{1,\dots,n-1\}} S_{n,k}.

Under H_0, for all k \in \{1,\dots,n-1\}, C_{1:k} and C_{k+1:n} estimate the same copula C, so S_n tends to be small. Under an abrupt change in the copula, the unknown break time can be estimated by the integer k that maximizes S_{n,k}.
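Putting the pieces together, a self-contained sketch of S_n and of the estimated break time (a direct, illustrative O(n^3 d) implementation; the function names are mine):

```python
import numpy as np
from scipy.stats import rankdata

def _pseudo_obs(X):
    # pseudo-observations R_{ij}/n built with maximal ranks, column by column
    return np.column_stack([rankdata(col, method="max") for col in X.T]) / X.shape[0]

def _emp_copula_at(U, points):
    # empirical c.d.f. of the pseudo-observations U evaluated at each row of `points`
    return np.mean(np.all(U[None, :, :] <= points[:, None, :], axis=2), axis=1)

def cvm_copula_change_statistic(X):
    """S_n = max_{1<=k<=n-1} (1/n) sum_i D_n(k/n, U-hat_i^{1:n})^2, with the argmax k."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    U_full = _pseudo_obs(X)
    S_n, k_hat = -np.inf, None
    for k in range(1, n):
        C1 = _emp_copula_at(_pseudo_obs(X[:k]), U_full)     # C_{1:k} at the U-hat_i^{1:n}
        C2 = _emp_copula_at(_pseudo_obs(X[k:]), U_full)     # C_{k+1:n} at the U-hat_i^{1:n}
        D = np.sqrt(n) * (k / n) * (1 - k / n) * (C1 - C2)  # D_n(k/n, .)
        S_nk = np.mean(D ** 2)
        if S_nk > S_n:
            S_n, k_hat = S_nk, k
    return S_n, k_hat
```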

Asymptotic behaviour of S_n under H_0

Condition [Segers (2012)]: for every j \in \{1,\dots,d\}, the partial derivative \dot C_j = \partial C / \partial u_j exists and is continuous on V_j = \{ u \in [0,1]^d : u_j \in (0,1) \}.

Proposition. Under H_0,

S_n = \max_{k \in \{1,\dots,n-1\}} \frac{1}{n} \sum_{i=1}^{n} D_n(k/n, \hat U_i^{1:n})^2 \rightsquigarrow S_C = \sup_{s\in[0,1]} \int_{[0,1]^d} \{ D_C(s,u) \}^2 \, \mathrm{d}C(u),

where

D_C(s,u) = (1-s)\, \mathbb{C}_C(0,s,u) - s\, \mathbb{C}_C(s,1,u), \qquad (s,u) \in [0,1]^{d+1},

and, for s \le t \in [0,1] and u \in [0,1]^d,

\mathbb{C}_C(s,t,u) = \{ Z_C(t,u) - Z_C(s,u) \} - \sum_{j=1}^{d} \dot C_j(u) \{ Z_C(t,u^{(j)}) - Z_C(s,u^{(j)}) \},

with Z_C a C-Kiefer-Müller process and u^{(j)} = (1,\dots,1,u_j,1,\dots,1).

4. Computation of approximate p-values

A decomposition of the process D_n

Consider the process

D_n(s,u) = \sqrt{n}\, \frac{\lfloor ns \rfloor}{n} \left(1 - \frac{\lfloor ns \rfloor}{n}\right) \{ C_{1:\lfloor ns \rfloor}(u) - C_{\lfloor ns \rfloor+1:n}(u) \}, \qquad (s,u) \in [0,1]^{d+1}.

D_n can be written as a function of \mathbb{C}_n, the sequential empirical copula process:

D_n(s,u) = \left(1 - \frac{\lfloor ns \rfloor}{n}\right) \mathbb{C}_n(0,s,u) - \frac{\lfloor ns \rfloor}{n}\, \mathbb{C}_n(s,1,u),

where, for s \le t \in [0,1], u \in [0,1]^d and \lambda_n(s,t) = (\lfloor nt \rfloor - \lfloor ns \rfloor)/n,

\mathbb{C}_n(s,t,u) = \sqrt{n}\, \lambda_n(s,t) \{ C_{\lfloor ns \rfloor+1:\lfloor nt \rfloor}(u) - C(u) \}
                    = \frac{1}{\sqrt{n}} \sum_{i=\lfloor ns \rfloor+1}^{\lfloor nt \rfloor} \left\{ \mathbf{1}(\hat U_i^{\lfloor ns \rfloor+1:\lfloor nt \rfloor} \le u) - C(u) \right\}.

Asymptotic behaviour of the sequential empirical copula process

Theorem. Let X_1, ..., X_n be drawn from a strictly stationary sequence (X_i)_{i\in\mathbb{Z}} with continuous margins and whose strong mixing coefficients satisfy \alpha_r = O(r^{-a}) for some a > 1. Under the previous condition on C,

\sup_{(s,t,u)} \left| \mathbb{C}_n(s,t,u) - \tilde{\mathbb{C}}_n(s,t,u) \right| \xrightarrow{P} 0,

where, for u \in [0,1]^d and u^{(j)} = (1,\dots,1,u_j,1,\dots,1),

\tilde{\mathbb{C}}_n(s,t,u) = \mathbb{B}_n(s,t,u) - \sum_{j=1}^{d} \dot C_j(u)\, \mathbb{B}_n(s,t,u^{(j)}),
\qquad
\mathbb{B}_n(s,t,u) = \frac{1}{\sqrt{n}} \sum_{i=\lfloor ns \rfloor+1}^{\lfloor nt \rfloor} \{ \mathbf{1}(U_i \le u) - C(u) \}.

A resampling scheme for the sequential empirical copula process

By the previous theorem, the process \mathbb{C}_n is asymptotically equivalent to

\tilde{\mathbb{C}}_n(s,t,u) = \{ Z_n(t,u) - Z_n(s,u) \} - \sum_{j=1}^{d} \dot C_j(u) \{ Z_n(t,u^{(j)}) - Z_n(s,u^{(j)}) \},

with

Z_n(s,u) = \frac{1}{\sqrt{n}} \sum_{i=1}^{\lfloor ns \rfloor} \{ \mathbf{1}(U_i \le u) - C(u) \}, \qquad (s,u) \in [0,1]^{d+1}.

Moreover, Z_n \rightsquigarrow Z_C in \ell^\infty([0,1]^{d+1}), where Z_C is the C-Kiefer-Müller process, and \mathbb{C}_n \rightsquigarrow \mathbb{C}_C in \ell^\infty(\Delta \times [0,1]^d), with \Delta = \{(s,t) \in [0,1]^2 : s \le t\}.

To resample \mathbb{C}_n, we construct a resampling of Z_n.

A resampling scheme for Z_n, case of i.i.d. observations

i.i.d. multipliers [van der Vaart and Wellner (2000)]. A sequence of i.i.d. multipliers (\xi_i)_{i\in\mathbb{Z}} satisfies the following conditions:
- for all i \in \mathbb{Z}, \xi_i is independent of the observations X_1, ..., X_n;
- E(\xi_0) = 0, var(\xi_0) = 1 and \int_0^\infty \{ P(|\xi_0| > x) \}^{1/2} \, \mathrm{d}x < \infty.

For m = 1, ..., M, consider the processes

Z_n^{(m)}(s,u) = \frac{1}{\sqrt{n}} \sum_{i=1}^{\lfloor ns \rfloor} \xi_i^{(m)} \{ \mathbf{1}(U_i \le u) - C(u) \}, \qquad (s,u) \in [0,1]^{d+1}.

[Holmes, Kojadinovic and Quessy (2013)]:

(Z_n, Z_n^{(1)}, ..., Z_n^{(M)}) \rightsquigarrow (Z_C, Z_C^{(1)}, ..., Z_C^{(M)}) \quad \text{in } \{\ell^\infty([0,1]^{d+1})\}^{M+1},

where Z_C is the C-Kiefer-Müller process and Z_C^{(1)}, ..., Z_C^{(M)} are independent copies of Z_C.
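A sketch of one i.i.d.-multiplier replicate Z_n^{(m)}, evaluated on the grid s = k/n and u = the observed points; in practice C is unknown and replaced by an estimate, and the function and argument names are mine:

```python
import numpy as np

def multiplier_replicate_Z(U, C_at_U, xi):
    """One replicate Z_n^{(m)}(k/n, U_l) for k = 1..n (rows) and l = 1..n (columns).

    U      : (n, d) array of (pseudo-)observations,
    C_at_U : length-n array of C(U_l) (or an estimate such as C_{1:n}(U_l)),
    xi     : length-n array of i.i.d. multipliers with mean 0 and variance 1."""
    n = U.shape[0]
    ind = np.all(U[:, None, :] <= U[None, :, :], axis=2).astype(float)   # 1(U_i <= U_l)
    summands = xi[:, None] * (ind - C_at_U[None, :])                     # xi_i {1(U_i <= u) - C(u)}
    return np.cumsum(summands, axis=0) / np.sqrt(n)                      # row k-1 = Z_n^{(m)}(k/n, .)

# Example with standard normal multipliers and the independence copula as the truth:
rng = np.random.default_rng(2)
U = rng.uniform(size=(100, 2))
Z_rep = multiplier_replicate_Z(U, np.prod(U, axis=1), rng.standard_normal(100))
```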

A resampling scheme for \mathbb{C}_n, case of serially dependent observations (I)

Dependent multipliers [Bühlmann (1993)]. A sequence (\xi_{i,n})_{i\in\mathbb{Z}} of dependent multipliers satisfies the following conditions:
- (\xi_{i,n})_{i\in\mathbb{Z}} is strictly stationary and, for all i \in \mathbb{Z}, \xi_{i,n} is independent of the observations X_1, ..., X_n;
- E(\xi_{0,n}) = 0, var(\xi_{0,n}) = 1 and \sup_{n \ge 1} E(|\xi_{0,n}|^\nu) < \infty for any \nu \ge 1;
- there exists a sequence \ell_n of strictly positive constants such that \ell_n = o(n) and the sequence (\xi_{i,n})_{i\in\mathbb{Z}} is \ell_n-dependent, i.e. \xi_{i,n} is independent of \xi_{i+h,n} for all h > \ell_n and all i \in \mathbb{N};
- there exists a function \varphi : \mathbb{R} \to [0,1], symmetric around 0, continuous at 0, satisfying \varphi(0) = 1 and \varphi(x) = 0 for all |x| > 1, such that E(\xi_{0,n}\xi_{h,n}) = \varphi(h/\ell_n) for all h \in \mathbb{Z}.
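The slides do not prescribe a particular construction; one common recipe in the dependent multiplier bootstrap literature is a kernel-weighted moving average of i.i.d. standard normals, renormalized to unit variance. A minimal sketch, with the triangular kernel and the bandwidth as my own assumptions:

```python
import numpy as np

def dependent_multipliers(n, ell, rng):
    """One sequence xi_{1,n}, ..., xi_{n,n} of dependent multipliers:
    a moving average of i.i.d. N(0,1) with triangular weights on {-ell, ..., ell},
    renormalized so that E(xi) = 0 and var(xi) = 1; xi_i and xi_{i+h} are
    independent as soon as h > 2*ell."""
    w = 1.0 - np.abs(np.arange(-ell, ell + 1)) / (ell + 1.0)   # triangular weights
    w /= np.sqrt(np.sum(w ** 2))                               # unit variance
    z = rng.standard_normal(n + 2 * ell)
    return np.array([w @ z[i:i + 2 * ell + 1] for i in range(n)])

xi = dependent_multipliers(500, ell=10, rng=np.random.default_rng(3))
print(xi.mean(), xi.var())   # approximately 0 and 1
```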

A resampling scheme for \mathbb{C}_n, case of serially dependent observations (II)

Using well-chosen dependent multipliers \xi_{i,n}, we can construct a resampling of Z_n (or \mathbb{B}_n) adapted to the case of dependent data:

\mathbb{B}_n^{(m)}(s,t,u) = \frac{1}{\sqrt{n}} \sum_{i=\lfloor ns \rfloor+1}^{\lfloor nt \rfloor} \xi_{i,n}^{(m)} \{ \mathbf{1}(U_i \le u) - C(u) \}.

For m = 1, ..., M, consider the estimated processes

\hat{\mathbb{B}}_n^{(m)}(s,t,u) = \frac{1}{\sqrt{n}} \sum_{i=\lfloor ns \rfloor+1}^{\lfloor nt \rfloor} \xi_{i,n}^{(m)} \{ \mathbf{1}(\hat U_i^{1:n} \le u) - C_{1:n}(u) \},

and an estimator \dot C_{j,1:n} of \dot C_j obtained by finite differencing of the empirical copula at bandwidth h_n:

\dot C_{j,1:n}(u) = \frac{ C_{1:n}(u + h_n e_j) - C_{1:n}(u - h_n e_j) }{ \min(u_j + h_n, 1) - \max(u_j - h_n, 0) }, \qquad u \in [0,1]^d,

where e_j = (0, ..., 0, 1, 0, ..., 0) is the j-th unit vector and h_n = n^{-1/2}.
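A sketch of the finite-difference estimator of the partial derivative \dot C_j, taking a matrix of pseudo-observations as input (helper name mine):

```python
import numpy as np

def cdot_j_estimate(U_hat, u, j, h=None):
    """Finite-difference estimate of the j-th partial derivative of the copula at u,
    from the (n, d) matrix of pseudo-observations U_hat, with bandwidth h = n^{-1/2}."""
    n = U_hat.shape[0]
    if h is None:
        h = n ** (-0.5)
    def C(v):   # empirical copula C_{1:n}
        return np.mean(np.all(U_hat <= v, axis=1))
    u_plus, u_minus = np.asarray(u, dtype=float).copy(), np.asarray(u, dtype=float).copy()
    u_plus[j] = min(u_plus[j] + h, 1.0)
    u_minus[j] = max(u_minus[j] - h, 0.0)
    return (C(u_plus) - C(u_minus)) / (u_plus[j] - u_minus[j])
```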

A resampling scheme for \mathbb{C}_n, case of serially dependent observations (III)

For the estimated processes

\hat{\mathbb{C}}_n^{(m)}(s,t,u) = \hat{\mathbb{B}}_n^{(m)}(s,t,u) - \sum_{j=1}^{d} \dot C_{j,1:n}(u)\, \hat{\mathbb{B}}_n^{(m)}(s,t,u^{(j)}),

we have the following result [Bücher, Kojadinovic, Rohmer and Segers (2014)]:

Let X_1, ..., X_n be drawn from a strictly stationary sequence (X_i)_{i\in\mathbb{Z}}. Under mixing conditions,

(\mathbb{C}_n, \hat{\mathbb{C}}_n^{(1)}, ..., \hat{\mathbb{C}}_n^{(M)}) \rightsquigarrow (\mathbb{C}_C, \mathbb{C}_C^{(1)}, ..., \mathbb{C}_C^{(M)}) \quad \text{in } \{\ell^\infty(\Delta \times [0,1]^d)\}^{M+1},

where \mathbb{C}_C^{(1)}, ..., \mathbb{C}_C^{(M)} are independent copies of \mathbb{C}_C.

A resampling scheme for \mathbb{C}_n, case of serially dependent observations (IV)

Consider also the processes

\check{\mathbb{B}}_n^{(m)}(s,t,u) = \frac{1}{\sqrt{n}} \sum_{i=\lfloor ns \rfloor+1}^{\lfloor nt \rfloor} \xi_{i,n}^{(m)} \{ \mathbf{1}(\hat U_i^{\lfloor ns \rfloor+1:\lfloor nt \rfloor} \le u) - C_{\lfloor ns \rfloor+1:\lfloor nt \rfloor}(u) \}

and

\check{\mathbb{C}}_n^{(m)}(s,t,u) = \check{\mathbb{B}}_n^{(m)}(s,t,u) - \sum_{j=1}^{d} \dot C_{j,\lfloor ns \rfloor+1:\lfloor nt \rfloor}(u)\, \check{\mathbb{B}}_n^{(m)}(s,t,u^{(j)}).

[Bücher, Kojadinovic, Rohmer and Segers (2014)]: Let X_1, ..., X_n be drawn from a strictly stationary sequence (X_i)_{i\in\mathbb{Z}}. Under mixing conditions,

(\mathbb{C}_n, \check{\mathbb{C}}_n^{(1)}, ..., \check{\mathbb{C}}_n^{(M)}) \rightsquigarrow (\mathbb{C}_C, \mathbb{C}_C^{(1)}, ..., \mathbb{C}_C^{(M)}) \quad \text{in } \{\ell^\infty(\Delta \times [0,1]^d)\}^{M+1}.

A resampling scheme for S_n

The process D_n can be rewritten as

D_n(s,u) = \lambda_n(s,1)\, \mathbb{C}_n(0,s,u) - \lambda_n(0,s)\, \mathbb{C}_n(s,1,u).

Two possibilities to resample D_n:

\hat D_n^{(m)}(s,u) = \lambda_n(s,1)\, \hat{\mathbb{C}}_n^{(m)}(0,s,u) - \lambda_n(0,s)\, \hat{\mathbb{C}}_n^{(m)}(s,1,u),
\qquad
\check D_n^{(m)}(s,u) = \lambda_n(s,1)\, \check{\mathbb{C}}_n^{(m)}(0,s,u) - \lambda_n(0,s)\, \check{\mathbb{C}}_n^{(m)}(s,1,u),

and correspondingly for S_n:

\hat S_n^{(m)} = \max_{k \in \{1,\dots,n-1\}} \frac{1}{n} \sum_{i=1}^{n} \{ \hat D_n^{(m)}(k/n, \hat U_i^{1:n}) \}^2,
\qquad
\check S_n^{(m)} = \max_{k \in \{1,\dots,n-1\}} \frac{1}{n} \sum_{i=1}^{n} \{ \check D_n^{(m)}(k/n, \hat U_i^{1:n}) \}^2.

Asymptotic validity of the resampling scheme

[Bücher, Kojadinovic, Rohmer and Segers (2014)]: Under H_0, with the same mixing conditions and well-chosen \xi_{i,n},

(S_n, \hat S_n^{(1)}, ..., \hat S_n^{(M)}) \rightsquigarrow (S_C, S_C^{(1)}, ..., S_C^{(M)})
\quad \text{and} \quad
(S_n, \check S_n^{(1)}, ..., \check S_n^{(M)}) \rightsquigarrow (S_C, S_C^{(1)}, ..., S_C^{(M)})
\quad \text{in } \mathbb{R}^{M+1},

where S_C^{(1)}, ..., S_C^{(M)} are independent copies of S_C. Hence \hat S_n^{(1)}, ..., \hat S_n^{(M)} and \check S_n^{(1)}, ..., \check S_n^{(M)} can be interpreted as M almost independent copies of S_n.

We compute two approximate p-values for S_n as

\hat p_{M,n} = \frac{1}{M} \sum_{m=1}^{M} \mathbf{1}(\hat S_n^{(m)} \ge S_n)
\qquad \text{or} \qquad
\check p_{M,n} = \frac{1}{M} \sum_{m=1}^{M} \mathbf{1}(\check S_n^{(m)} \ge S_n).

Pros and cons

+ More powerful than its predecessors at detecting a change in the copula.
+ Consistent tests.
+ Adapted to strongly mixing observations.
- Less sensitive to a change in the marginal distributions.
- Without the assumption that the margins are constant, a rejection cannot be attributed to a change in the copula.

Break detection in the copula when the marginal distributions change at time m

Consider the following null hypothesis H_0^m = H_{1,m} \cap H_{0,c}, where

H_{0,c}: there exists a copula C such that X_1, ..., X_n have copula C;
H_{1,m}: there exist F_1, ..., F_d and G_1, ..., G_d such that X_1, ..., X_m have marginal c.d.f.s F_1, ..., F_d and X_{m+1}, ..., X_n have marginal c.d.f.s G_1, ..., G_d.

Taking the change at time m into account, an estimator of the copula C can be built from the following pseudo-observations:

\hat U_{i,m}^{1:n} =
\begin{cases}
(F_{1:m,1}(X_{i1}), ..., F_{1:m,d}(X_{id})), & i \in \{1, ..., m\}, \\
(F_{m+1:n,1}(X_{i1}), ..., F_{m+1:n,d}(X_{id})), & i \in \{m+1, ..., n\},
\end{cases}

where, for j = 1, ..., d, F_{1:m,j} (resp. F_{m+1:n,j}) is the empirical c.d.f. computed from X_{1j}, ..., X_{mj} (resp. X_{m+1,j}, ..., X_{nj}).
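A sketch of these margin-adjusted pseudo-observations, where the ranks are computed separately before and after the known change time m (function name mine):

```python
import numpy as np
from scipy.stats import rankdata

def pseudo_obs_with_margin_change(X, m):
    """Pseudo-observations U-hat_{i,m}^{1:n}: ranks (hence empirical marginal c.d.f.s)
    are computed within X_1..X_m and within X_{m+1}..X_n separately, so that a known
    change in the margins at time m does not contaminate the copula estimation."""
    X = np.asarray(X, dtype=float)
    def _po(block):
        return np.column_stack(
            [rankdata(col, method="max") for col in block.T]) / block.shape[0]
    return np.vstack([_po(X[:m]), _po(X[m:])])
```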

In the same way, we construct C_{1:k,m} and C_{k+1:n,m} from the subsamples X_1, ..., X_k and X_{k+1}, ..., X_n, for k in \{1, ..., n-1\}.

[Schematic, case m \le k: the ranks R^{1:m} computed within X_1, ..., X_m yield C_{1:m} and the ranks R^{m+1:k} computed within X_{m+1}, ..., X_k yield C_{m+1:k}, which together give C_{1:k,m}; the ranks R^{k+1:n} computed within X_{k+1}, ..., X_n yield C_{k+1:n} = C_{k+1:n,m}.]

The process

The empirical c.d.f.s C_{1:k,m} and C_{k+1:n,m}, computed from the pseudo-observations \hat U_{1,m}^{1:k}, ..., \hat U_{k,m}^{1:k} and \hat U_{k+1,m}^{k+1:n}, ..., \hat U_{n,m}^{k+1:n}, are given by

C_{1:k,m}(u) =
\begin{cases}
\dfrac{m}{k}\, C_{1:m}(u) + \dfrac{k-m}{k}\, C_{m+1:k}(u), & m \in [1,k], \\
C_{1:k}(u), & m \notin [1,k],
\end{cases}
\qquad
C_{k+1:n,m}(u) =
\begin{cases}
\dfrac{m-k}{n-k}\, C_{k+1:m}(u) + \dfrac{n-m}{n-k}\, C_{m+1:n}(u), & m \in [k+1,n], \\
C_{k+1:n}(u), & m \notin [k+1,n].
\end{cases}

The test statistic is

S_{n,m} = \max_{k=1,\dots,n-1} \frac{1}{n} \sum_{i=1}^{n} D_{n,m}^2(k/n, \hat U_{i,m}^{1:n}),
\qquad
D_{n,m}(k/n, u) = \frac{k(n-k)}{n^{3/2}} \{ C_{1:k,m}(u) - C_{k+1:n,m}(u) \}.

Asymptotic behaviour of S_{n,m} under H_0^m

Theorem. Let X_1, ..., X_n be n independent (or strongly mixing) random vectors with copula C such that, for a fixed integer m, X_1, ..., X_m have marginal c.d.f.s F_1, ..., F_d and X_{m+1}, ..., X_n have marginal c.d.f.s G_1, ..., G_d. Then, under conditions similar to the previous ones,

S_{n,m} \rightsquigarrow S_C = \sup_{s\in[0,1]} \int_{[0,1]^d} D_C^2(s,u)\, \mathrm{d}C(u).

The asymptotic distribution does not depend on m! This result can be generalized to the case of multiple changes in the marginal distributions.

5. Monte Carlo simulations

Cramér-von Mises statistic, case of i.i.d. observations

Consider the alternative hypothesis H_{1,c}:

H_{1,c}: there exist distinct copulas C_1 and C_2 and t \in (0,1) such that X_1, ..., X_{\lfloor nt \rfloor} have copula C_1 and X_{\lfloor nt \rfloor+1}, ..., X_n have copula C_2.

Percentages of rejection of the hypothesis H_0 are computed from 1000 samples of size n \in \{50, 100, 200\} generated under:

- H_0 = H_{0,c} \cap H_{0,m};
- H_{1,c} \cap H_{0,m};

and, with a change in the marginal c.d.f.s:

- H_0^m = H_{0,c} \cap H_{1,m};
- H_{1,c} \cap H_{1,m}.

Nasdaq, Dow Jones and Black Monday (1987-10-19)

Suppose there is a unique change in the marginal c.d.f.s at time m = 202 (1987-10-19). Approximate p-value of the test: 0.201, so there is no evidence against H_{0,c}.

Bibliography

A. Bücher and I. Kojadinovic. A dependent multiplier bootstrap for the sequential empirical copula process under strong mixing. arXiv:1306.3930, 2013.

A. Bücher, I. Kojadinovic, T. Rohmer and J. Segers. Detecting changes in cross-sectional dependence in multivariate time series. Journal of Multivariate Analysis, 132:111-128, 2014.

A. Bücher and M. Ruppert. Consistent testing for a constant copula under strong mixing based on the tapered block multiplier technique. Journal of Multivariate Analysis, 116:208-229, 2013.

P. Bühlmann. The blockwise bootstrap in time series and empirical processes. PhD thesis, ETH Zürich, 1993. Diss. ETH No. 10354.

M. Csörgő and L. Horváth. Limit theorems in change-point analysis. Wiley Series in Probability and Statistics. John Wiley & Sons, Chichester, UK, 1997.

P. Deheuvels. La fonction de dépendance empirique et ses propriétés : un test non paramétrique d'indépendance. Acad. Roy. Belg. Bull. Cl. Sci. 5th Ser., 65:274-292, 1979.

M. Holmes, I. Kojadinovic and J.-F. Quessy. Nonparametric tests for change-point detection à la Gombay and Horváth. Journal of Multivariate Analysis, 115:16-32, 2013.

I. Kojadinovic, J.-F. Quessy and T. Rohmer. Testing the constancy of Spearman's rho in multivariate time series. arXiv e-prints, July 2014.

T. Rohmer. Some results on change-point detection in cross-sectional dependence of multivariate data with changes in marginal distributions. arXiv:1506.01894, July 2015.

M. Rosenblatt. A central limit theorem and a strong mixing condition. Proceedings of the National Academy of Sciences of the United States of America, 42(1):43, 1956.

L. Rüschendorf. Asymptotic distributions of multivariate rank order statistics. Annals of Statistics, 4:912-923, 1976.

J. Segers. Asymptotics of empirical copula processes under nonrestrictive smoothness assumptions. Bernoulli, 18:764-782, 2012.

A. Sklar. Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut de Statistique de l'Université de Paris, 8:229-231, 1959.

A.W. van der Vaart and J.A. Wellner. Weak convergence and empirical processes. Springer, New York, second edition, 2000.

D. Wied, H. Dehling, M. van Kampen and D. Vogel. A fluctuation test for constant Spearman's rho with nuisance-free limit distribution. Computational Statistics and Data Analysis, 76:723-736, 2014.

Thank you for your attention!