Cramér-von Mises Gaussianity test in Hilbert space

Cramér-von Mises Gaussianity test in Hilbert space

Gennady MARTYNOV

Institute for Information Transmission Problems of the Russian Academy of Sciences; Higher School of Economics, Moscow, Russia

Statistique Asymptotique des Processus Stochastiques X, Le Mans, March 2015

CONTENTS

One-dimensional CvM test: $\omega_n^2 = n \int_0^1 \psi^2(t)\,(F_n(t) - t)^2\,dt$

Multidimensional CvM test: $\omega_n^2 = n \int_{[0,1]^d} \psi^2(t)\,(F_n(t) - t_1\cdots t_d)^2\,dt$

CvM test that the process is Gaussian: $\omega_n^2 = n \int_{[0,1]^\infty} \psi^2(t)\,(F_n(t) - F(t))^2\,dt$

WEIGHTED CRAMÉR-VON MISES STATISTICS

The one-dimensional weighted Cramér-von Mises statistic is

$$\omega_n^2 = n \int_0^1 \psi^2(t)\,(F_n(t) - t)^2\,dt, \qquad (1)$$

where $F_n(t)$ is the empirical distribution function based on the sample $X_1, X_2, \dots, X_n$ from the uniform distribution on $[0,1]$, and $\psi(t)$ is a weight function. The statistic (1) is designed to test the hypothesis $H_0\colon F(t) = t$ against the alternative $H_1\colon F(t) \neq t$, where $F$ is a continuous distribution function.
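As a quick illustration (not from the slides), the statistic (1) can be approximated on a midpoint grid; the weight functions and the grid size `m` below are arbitrary illustrative choices:

```python
# A minimal sketch: the weighted CvM statistic (1) on a midpoint grid.
import numpy as np

def weighted_cvm(x, psi=lambda t: np.ones_like(t), m=100_000):
    """omega_n^2 = n * integral_0^1 psi^2(t) (F_n(t) - t)^2 dt."""
    n = len(x)
    t = (np.arange(m) + 0.5) / m                            # midpoint grid on (0, 1)
    Fn = np.searchsorted(np.sort(x), t, side="right") / n   # empirical CDF at t
    return n * np.mean(psi(t) ** 2 * (Fn - t) ** 2)

rng = np.random.default_rng(0)
u = rng.uniform(size=200)
print(weighted_cvm(u))                             # classical CvM, psi = 1
print(weighted_cvm(u, psi=lambda t: t ** -0.25))   # power weight, beta = -1/4
```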

If the condition

$$\int_0^1 \psi^2(t)\, t(1-t)\,dt < \infty$$

is fulfilled, then the statistic $\omega_n^2$ converges in distribution to

$$\omega^2 = \int_0^1 \xi^2(t)\,dt, \qquad (2)$$

where $\xi(t)$, $t \in [0,1]$, is the Gaussian process with zero mean and covariance function

$$K_\psi(t,\tau) = \psi(t)\psi(\tau)\,(\min(t,\tau) - t\tau)$$

(see, for example, van der Vaart and Wellner [1996, p. 50]).

The Gaussian process $\xi(t)$ can be expanded in the Karhunen-Loève series

$$\xi(t) = \sum_{k=1}^{\infty} \frac{x_k\, \varphi_k(t)}{\sqrt{\lambda_k}},$$

where $x_k \sim N(0,1)$, $k = 1, 2, \dots$, are independent random variables, and $\lambda_k$ and $\varphi_k(t)$, $k = 1, 2, \dots$, are the eigenvalues and eigenfunctions of the Fredholm integral equation

$$\varphi(t) = \lambda \int_0^1 \psi(t)\psi(\tau)\,(\min(t,\tau) - t\tau)\,\varphi(\tau)\,d\tau. \qquad (3)$$

Differentiating (3) twice with respect to $t$, we obtain the differential equation

$$h''(t) + \lambda\,\psi^2(t)\,h(t) = 0$$

with the conditions $h(0) = h(1) = 0$, where $h(t) = \varphi(t)/\psi(t)$.
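For the unweighted case $\psi \equiv 1$, equation (3) has the classical Brownian-bridge solutions $\lambda_k = (k\pi)^2$, $\varphi_k(t) = \sqrt{2}\sin(k\pi t)$, so the limit $\omega^2$ in (2) can be simulated directly from a truncated Karhunen-Loève sum. A minimal sketch (truncation level and simulation size are arbitrary):

```python
# Simulating omega^2 = sum_k x_k^2 / lambda_k with lambda_k = (k*pi)^2,
# the eigenvalues of (3) for psi = 1 (Brownian bridge).
import numpy as np

rng = np.random.default_rng(1)
K, nsim = 500, 20_000
lam = (np.arange(1, K + 1) * np.pi) ** 2
omega2 = (rng.standard_normal((nsim, K)) ** 2 / lam).sum(axis=1)
print(np.quantile(omega2, 0.95))   # classical 5% critical value, ~0.46
```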

CRAMÉR-VON MISES STATISTIC WITH THE POWER WEIGHT FUNCTION $\psi(t) = t^\beta$

Deheuvels and Martynov (2003) obtained the following result. Let $\{B(t) : 0 \le t \le 1\}$ be the Brownian bridge. Then, for each $\beta = \frac{1}{2\nu} - 1 > -1$, the Karhunen-Loève expansion of $\{t^\beta B(t) : 0 < t \le 1\}$ is given by

$$t^\beta B(t) = \sum_{k=1}^{\infty} \sqrt{\lambda_{B,k}}\,\omega_k\, e_{B,k}(t).$$

Here $\{\omega_k : k \ge 1\}$ are i.i.d. $N(0,1)$ random variables, and, for $k = 1, 2, \dots$, the eigenvalues are $\lambda_{B,k} = (2\nu/z_{\nu,k})^2$, the corresponding eigenfunctions are

$$e_{B,k}(t) = \frac{t^{\frac{1}{2\nu} - \frac{1}{2}}\, J_\nu\!\big(z_{\nu,k}\, t^{\frac{1}{2\nu}}\big)}{\sqrt{\nu}\, J_{\nu-1}(z_{\nu,k})}, \qquad 0 < t \le 1,$$

and $z_{\nu,k}$, $k = 1, 2, \dots$, are the positive zeros of the Bessel function $J_\nu(z)$.

Let $\{W(t) : 0 \le t \le 1\}$ be the Wiener process. Then, for each $\beta = \frac{1}{2\nu} - 1 > -1$, the Karhunen-Loève expansion of $\{t^\beta W(t) : 0 < t \le 1\}$ is

$$t^\beta W(t) = \sum_{k=1}^{\infty} \sqrt{\lambda_{W,k}}\,\omega_k\, e_{W,k}(t).$$

Here the eigenvalues are $\lambda_{W,k} = (2\nu/z_{\nu-1,k})^2$, $k = 1, 2, \dots$, and the corresponding eigenfunctions are

$$e_{W,k}(t) = \frac{t^{\frac{1}{2\nu} - \frac{1}{2}}\, J_\nu\!\big(z_{\nu-1,k}\, t^{1/(2\nu)}\big)}{\sqrt{\nu}\, J_\nu(z_{\nu-1,k})}.$$

Deheuvels, P., Martynov, G. (2003). Karhunen-Loève expansions for weighted Wiener processes and Brownian bridges via Bessel functions. Progress in Probability 55, 57-93. Birkhäuser, Basel.
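These eigenvalues need only the positive zeros $z_{\nu,k}$ of $J_\nu$, which scipy exposes directly for integer order only; for fractional $\nu$ they can be bracketed by a sign scan. A sketch (scan step is an arbitrary choice), with the sanity check that $\beta = 0$, i.e. $\nu = 1/2$, recovers the classical bridge eigenvalues $1/(k\pi)^2$:

```python
# Eigenvalues lambda_k = (2*nu / z_{nu,k})^2 of the weighted Brownian
# bridge t^beta B(t), where z_{nu,k} are the positive zeros of J_nu.
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

def bessel_zeros(nu, k, step=0.1):
    """First k positive zeros of J_nu, located by scanning for sign changes."""
    zeros, x = [], max(nu, 1e-6)   # the first zero of J_nu lies to the right of nu
    while len(zeros) < k:
        if jv(nu, x) * jv(nu, x + step) < 0:
            zeros.append(brentq(lambda z: jv(nu, z), x, x + step))
        x += step
    return np.array(zeros)

beta = 0.0                        # beta = 1/(2*nu) - 1  =>  nu = 1/(2*(beta+1))
nu = 1 / (2 * (beta + 1))
z = bessel_zeros(nu, 5)           # zeros of J_{1/2} are k*pi
print((2 * nu / z) ** 2)          # ~ 1/(k*pi)^2 = 0.1013, 0.0253, 0.0113, ...
```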

UNIFORMITY TEST ON $[0,1]^d$

$H_0$: $U$ has the uniform distribution on $[0,1]^d$. One can use the statistic

$$\omega_n^2 = n \int_{[0,1]^d} (F_n(t) - t_1 \cdots t_d)^2\,dt, \qquad F_n(t) = \frac{1}{n}\sum_{i=1}^{n} I_{\{U_i \le t\}}.$$

The empirical process $\xi_n(t) = n^{1/2}(F_n(t) - t_1 \cdots t_d)$ has the covariance function

$$K(s,t) = \prod_{i=1}^{d} \min(s_i, t_i) - \prod_{i=1}^{d} s_i t_i,$$

where $s = (s_1, s_2, \dots, s_d)$, $t = (t_1, t_2, \dots, t_d)$.

First we consider the covariance function $K_0(s,t) = \prod_{i=1}^{d} \min(s_i, t_i)$. The corresponding covariance operator has the eigenfunctions

$$\varphi_{k_1 \cdots k_d}(t) = 2^{d/2} \sin\big(\pi(k_1 - 1/2)t_1\big) \cdots \sin\big(\pi(k_d - 1/2)t_d\big)$$

and the eigenvalues

$$\lambda_{k_1 \cdots k_d} = \big[\pi^{2d}(k_1 - 1/2)^2 \cdots (k_d - 1/2)^2\big]^{-1},$$

where the multi-index $(k_1, \dots, k_d)$ runs over all $d$-tuples of positive integers.
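Since $K_0$ is a product kernel, its eigenvalues are simply products of the one-dimensional eigenvalues of $\min(s,t)$; a short check for $d = 2$:

```python
# Eigenvalues of the product kernel for d = 2: products of the
# one-dimensional eigenvalues 1/(pi^2 (k - 1/2)^2) of min(s, t).
import numpy as np

k = np.arange(1, 6)
lam1 = 1 / (np.pi ** 2 * (k - 0.5) ** 2)            # 1-d eigenvalues
lam2 = np.sort(np.outer(lam1, lam1).ravel())[::-1]  # d = 2: all products
print(lam2[:5])
```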

The eigenvalues corresponding to $K(s,t)$ are:

1) the numbers $2^{2d}/\big(\pi^{2d}(2N+1)^2\big)$, $N \ge 1$, with multiplicities $q_N$ such that $q_N + 1$ equals the number of ways of writing $2N+1$ as an ordered product of $d$ positive integers;

2) simple eigenvalues, computed as the roots $\mu$ of the equation

$$\sum_{k \ge 1} \frac{(q_k + 1)\,\lambda_k^2}{\lambda_k - \mu} = \frac{1}{2^d}.$$

Tables of the considered statistic were computed for $d = 2$ and $d = 3$.

Krivyakova, E.N., Martynov, G.V., Tyurin, Yu.N. (1977). On the distribution of the omega-square statistic in the multidimensional case. Theory Probab. Appl. 22, 406-410.

CRAMÉR-VON MISES GAUSSIANITY TEST FOR A PROCESS IN HILBERT SPACE

Martynov, G.V. (1979); Martynov, G.V. (1992).

Hypothesis to be tested. We consider the problem of testing the hypothesis that an observed random process $x(t)$ on the interval $[0,1]$ is Gaussian. The hypothesis is

$H_{0x}$: $x(t)$, $t \in [0,1]$, is the Gaussian process with $Ex(t) = 0$ and $E(x(t)x(\tau)) = K_x(t,\tau)$, $t, \tau \in [0,1]$.

The alternative $H_{1x}$ to $H_{0x}$ is the set of all other processes on the interval $[0,1]$. Hypothesis testing is performed using $n$ observations $x_1(t), x_2(t), \dots, x_n(t)$ of the process $x(t)$.

Realizations of the process $x(t)$ belong with probability 1 to the separable Hilbert space $H = L_2([0,1])$. As a basis for $H$ we choose the orthonormal basis formed by the eigenfunctions $g_1(t), g_2(t), \dots$ of the integral equation

$$g(t) = \lambda \int_0^1 K_x(t,\tau)\,g(\tau)\,d\tau. \qquad (4)$$

Denote by $\lambda_1, \lambda_2, \lambda_3, \dots$ the eigenvalues of equation (4). Under the hypothesis, the process $x(t)$ can be represented in the form of the Karhunen-Loève expansion

$$x(t) = \sum_{j=1}^{\infty} \frac{z_j\, g_j(t)}{\sqrt{\lambda_j}}, \qquad (5)$$

where $z_j$, $j = 1, 2, \dots$, are independent standard normal variables.

Let $h = (h_1, h_2, h_3, \dots)$ denote the coordinates of $x(t)$ in $H$. Here

$$h_j = \frac{z_j}{\sqrt{\lambda_j}} = \int_0^1 x(t)\,g_j(t)\,dt, \qquad j = 1, 2, \dots \qquad (6)$$

Analogously, the observations $x_i(t)$ of the random process $x(t)$ can be represented in $H$ as $h_i = (h_{i1}, h_{i2}, h_{i3}, \dots)$, $i = 1, 2, \dots, n$, where

$$h_{ij} = \int_0^1 x_i(t)\,g_j(t)\,dt, \qquad i = 1, 2, \dots, n, \quad j = 1, 2, \dots \qquad (7)$$

General formulation of the Gaussianity hypothesis. We consider the probability space $(\mathcal{X}, \mathcal{B}, \nu)$, where $\mathcal{X}$ is a separable Hilbert space of elementary events, $\mathcal{B}$ is the σ-algebra of Borel sets on $\mathcal{X}$, and $\nu$ is the probability measure of a random element $X$ of $(\mathcal{X}, \mathcal{B}, \nu)$. Suppose we have $n$ observations $X_1, X_2, \dots, X_n$ of $X$. Let $\mu$ be a Gaussian measure on $(\mathcal{X}, \mathcal{B})$ defined by the mathematical expectation 0 and a covariance operator $K$ with kernel $K(z,w)$, $z, w \in \mathcal{X}$. The covariance function $K(z,w)$ is supposed to be known. Under $\mu$, $X$ is a Gaussian random element with the characteristic functional

$$E\, e^{i(X,t)} = e^{-(Kt,t)/2}, \qquad t \in \mathcal{X}. \qquad (8)$$

We test the hypothesis $H_{0\mu}\colon \nu = \mu$ versus the alternative $H_{1\mu}\colon \nu$ is a measure different from $\mu$.

Let $e = (e_1, e_2, \dots)$ be the orthonormal basis of eigenvectors of $K$ and $\sigma_1^2, \sigma_2^2, \dots$ be the eigenvalues of $K$. Let $x = (x_1, x_2, \dots)$ be the representation of $x$ in the basis $e$. The random element $X = (X_1, X_2, \dots)$ has independent components with distributions $N(0, \sigma_j^2)$, $j = 1, 2, \dots$. We transform the probability space $(\mathcal{X}, \mathcal{B}, \nu)$ to a probability space $([0,1]^\infty, \mathcal{C}, \Gamma)$. Here $\mathcal{C}$ is the Borel σ-algebra on $[0,1]^\infty$ and $\Gamma$ is the measure corresponding to $\nu$.

We denote the transformed random element $X$ as $T = (T_1, T_2, \dots)$. Here $T_j = \Phi(X_j; 0, \sigma_j^2)$, $j = 1, 2, \dots$, where $\Phi(\cdot\,; \mu, \sigma^2)$ is the normal distribution function with mean $\mu$ and variance $\sigma^2$. The observations $X_i$ of $X$ are transformed to $T_i = (T_{i1}, T_{i2}, T_{i3}, \dots)$, $i = 1, 2, \dots, n$, with $T_{ij} = \Phi(X_{ij}; 0, \sigma_j^2)$, $i = 1, \dots, n$, $j = 1, 2, \dots$. All $T_i$ belong to $[0,1]^\infty$. Let $\Upsilon$ be the uniform measure on $([0,1]^\infty, \mathcal{C})$. Now we test the hypothesis

$$H_{0\Upsilon}\colon \Gamma = \Upsilon \qquad (9)$$

versus the alternative

$$H_{1\Upsilon}\colon \Gamma \neq \Upsilon. \qquad (10)$$

EXAMPLE

We test the hypothesis $H_0$ that the observed random process on $[0,1]$ is Gaussian with zero mean and covariance function $K_0(t,\tau) = \min(t,\tau)$. This process (and all its observations) can be multiplied by $1/\sqrt{t}$. The resulting process has unit variance, and its covariance function is

$$K(t,\tau) = \frac{\min(t,\tau)}{\sqrt{t\tau}}.$$

The corresponding covariance operator has the eigenvalues $\lambda_k = (z_{0,k}/2)^2$ and eigenfunctions $\varphi_k(t) = J_1(z_{0,k}\sqrt{t})/J_1(z_{0,k})$, where $z_{0,k}$, $k = 1, 2, \dots$, are the zeros of $J_0$.
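A short numerical check of this example: the zeros of $J_0$ give the eigenvalues directly.

```python
# Eigenvalues lambda_k = (z_{0,k}/2)^2 from the first zeros of J_0.
from scipy.special import jn_zeros

z = jn_zeros(0, 5)       # 2.4048, 5.5201, 8.6537, 11.7915, 14.9309
print((z / 2) ** 2)      # 1.4458, 7.6180, 18.7217, ...
```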

GAUSSIANITY STATISTIC DEFINITION

To apply a Cramér-von Mises-type test to the hypothesis $H_{0\mu}$, one needs to introduce a function $F(t)$ on $[0,1]^\infty$ that defines the measure $\Upsilon$. In the finite-dimensional case, the obvious distribution function can be chosen as $F(t)$; this is not possible in the case considered. Instead, one can propose the function

$$F(t) = P\{T_1 \le Q_1(t_1),\, T_2 \le Q_2(t_2),\, T_3 \le Q_3(t_3), \dots\} = Q_1(t_1)\,Q_2(t_2)\,Q_3(t_3)\cdots, \qquad t = (t_1, t_2, t_3, \dots), \qquad (11)$$

in which the functions $Q_i(t)$, $i = 1, 2, \dots$, increase monotonically. A convenient example of such a distribution function is

$$F(t) = t_1^{r_1}\, t_2^{r_2}\, t_3^{r_3} \cdots, \qquad (12)$$

where $1 = r_1 \ge r_2 \ge r_3 \ge \cdots$ and $r_i \to 0$. The powers $r_i$ must tend to zero sufficiently rapidly.

Let $T_i = (T_{i1}, T_{i2}, \dots)$ be the observations of $T$. The empirical distribution function is

$$F_n(t) = \frac{1}{n}\sum_{i=1}^{n} I_{\{T_{i1} \le t_1^{r_1},\, T_{i2} \le t_2^{r_2}, \dots\}} = \frac{1}{n}\sum_{i=1}^{n} \prod_{j} I_{\{T_{ij} \le t_j^{r_j}\}}. \qquad (13)$$

The Cramér-von Mises statistic can be written analogously to the one- and multi-dimensional cases:

$$\omega_n^2 = n \int_{[0,1]^\infty} \Big(F_n(t) - \prod_j t_j^{r_j}\Big)^2\, dt_1\, dt_2 \cdots \qquad (14)$$

The corresponding empirical process is

$$\xi_n(t) = \sqrt{n}\,\Big(F_n(t) - \prod_j t_j^{r_j}\Big), \qquad t \in [0,1]^\infty,$$

or

$$\xi_n(t) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \Big(\prod_j I_{\{T_{ij} < t_j^{r_j}\}} - \prod_j t_j^{r_j}\Big), \qquad t \in [0,1]^\infty. \qquad (15)$$

The covariance function of $\xi_n(t)$ is

$$K^*(s,t) = \prod_j \min(s_j^{r_j}, t_j^{r_j}) - \prod_j s_j^{r_j}\, t_j^{r_j}, \qquad s, t \in [0,1]^\infty. \qquad (16)$$

The empirical process can be written as a normalized sum of independent identically distributed random functions:

$$\xi_n(t) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} g_i(t), \qquad t \in [0,1]^\infty, \qquad (17)$$

where $g_i(t) = \prod_j I_{\{T_{ij} < t_j^{r_j}\}} - \prod_j t_j^{r_j}$. Under the condition

$$\int_{[0,1]^\infty} K^*(s,s)\,ds < \infty, \qquad (18)$$

the empirical process converges weakly in $L_2([0,1]^\infty)$ to the Gaussian process with covariance function $K^*(s,t)$.

The inequality (18) becomes

$$\int_{[0,1]^\infty} K^*(s,s)\,ds = \prod_j \frac{1}{r_j + 1} - \prod_j \frac{1}{2r_j + 1} < \infty.$$

This inequality holds for every positive sequence $\{r_j,\ j = 1, 2, \dots\}$, since each factor lies in $(0,1]$. We are interested in the case when $1 \ge r_j \to 0$, $j \to \infty$, and the empirical process does not degenerate, i.e.

$$\int_{[0,1]^\infty} K^*(s,s)\,ds > 0. \qquad (19)$$

To fulfill this condition, it is sufficient to find a sequence satisfying

$$\prod_j \frac{1}{r_j + 1} > 0. \qquad (20)$$

As such a sequence one can select $r_j = j^{-a}$, $a > 1$, $j = 1, 2, \dots$. The sequence $r_j$ does not necessarily begin its decrease from 1.
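A quick numerical check (not on the slides) that condition (20) holds for $r_j = j^{-a}$, $a > 1$, and that the trace integral stays strictly positive:

```python
# Checking the non-degeneracy condition (20) for r_j = j^(-a), a > 1.
import numpy as np

a = 2.5
r = np.arange(1, 10_000, dtype=float) ** -a
p1 = np.prod(1 / (1 + r))        # prod_j 1/(r_j + 1): positive limit
p2 = np.prod(1 / (1 + 2 * r))    # prod_j 1/(2 r_j + 1)
print(p1, p2, p1 - p2)           # p1 - p2 = integral of K*(s, s) ds > 0
```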

It can be written that $K^*(s,t) = K_0^*(s,t) - w(s)w(t)$, where

$$K_0^*(s,t) = \prod_j K_{0j}^*(s_j, t_j) = \prod_j \min(s_j^{r_j}, t_j^{r_j}), \qquad w(s) = \prod_j s_j^{r_j}, \qquad s, t \in [0,1]^\infty.$$

Monte Carlo results. The limiting distribution of the statistic $\Omega_n^2$ was also calculated by the Monte Carlo method. The Cramér-von Mises statistic can be represented as

$$\Omega_n^2 = n \int_{[0,1]^\infty} \Big(\frac{1}{n}\sum_{i=1}^{n} \prod_j I_{\{T_{ij} < t_j^{r_j}\}} - \prod_j t_j^{r_j}\Big)^2\,dt.$$

Here double-loop Monte Carlo calculations were used: the value of the statistic $\Omega_n^2$ is computed in the inner loop, while its distribution is modeled in the outer loop. The number of dimensions in the integral was 100, the number of Monte Carlo iterations to calculate the statistic $\Omega_n^2$ was 500, and the number of Monte Carlo iterations for calculating the percentage points was 10000. Here are a few estimates for the limiting distribution of $\Omega_n^2$, with $r_i = i^{-a(1 - i^{-b})}$. When $a = 2.5$ and $b = 0.5$, the exact expectation is $E\Omega^2 \approx 0.1039$ and the estimated expectation is $\hat{E}\Omega^2 \approx 0.104$; the corresponding percentage points are $P\{\Omega^2 \le 0.17\} \approx 0.90$ and $P\{\Omega^2 \le 0.21\} \approx 0.95$. When $a = 3$ and $b = 0.5$, the exact expectation is $E\Omega^2 \approx 0.1306$ and the estimated expectation is $\hat{E}\Omega^2 \approx 0.132$; the corresponding percentage points are $P\{\Omega^2 \le 0.22\} \approx 0.90$ and $P\{\Omega^2 \le 0.28\} \approx 0.95$.
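A scaled-down sketch of that double-loop scheme (under $H_0$ the coordinates $T_{ij}$ are i.i.d. uniform, so they are generated directly; the dimension, sample size, grid size, and replication count below are illustrative and smaller than the values quoted above):

```python
# Scaled-down double-loop Monte Carlo: the inner function evaluates the
# statistic by Monte Carlo integration over t, the outer loop builds
# up its null distribution.
import numpy as np

rng = np.random.default_rng(2)
d, n, M, nsim = 20, 100, 500, 1000
a, b = 2.5, 0.5
j = np.arange(1, d + 1)
r = j ** (-a * (1 - j ** (-b)))               # r_i = i^(-a(1 - i^(-b)))

def omega2(T):                                # T: (n, d) uniforms under H0
    t = rng.uniform(size=(M, d))              # integration nodes
    tr = t ** r                               # t_j^{r_j}
    Fn = (T[None, :, :] <= tr[:, None, :]).all(axis=2).mean(axis=1)
    return n * np.mean((Fn - tr.prod(axis=1)) ** 2)

vals = np.array([omega2(rng.uniform(size=(n, d))) for _ in range(nsim)])
print(vals.mean(), np.quantile(vals, [0.90, 0.95]))
```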

EIGENVALUES AND EIGENFUNCTIONS OF $K^*$

Eigenvalues and eigenfunctions of $K_{0j}^*(s_j, t_j) = \min(s_j^{r_j}, t_j^{r_j})$.

Eigenvalues and eigenfunctions of $K_0^*(s,t) = \prod_j \min(s_j^{r_j}, t_j^{r_j})$.

Eigenvalues and eigenfunctions of $K^*(s,t) = K_0^*(s,t) - w(s)w(t) = \prod_j \min(s_j^{r_j}, t_j^{r_j}) - \prod_j s_j^{r_j} t_j^{r_j}$, $s, t \in [0,1]^\infty$.

Limit distribution of the test statistic under the null hypothesis.

Eigenvalues and eigenfunctions of $K_{0j}^*$. First, we consider the elementary kernel $K_{0j}^*(t,\tau)$ in the one-dimensional case with arbitrary $0 < r \le 1$:

$$K_{0j}^*(t,\tau) = \min(t^r, \tau^r), \qquad 0 \le t, \tau \le 1.$$

The eigenfunctions and eigenvalues of $K_{0j}^*(t,\tau)$ can be found from the Fredholm integral equation

$$\varphi_r(t) = \lambda_r \int_0^1 \min(t^r, \tau^r)\,\varphi_r(\tau)\,d\tau, \qquad t \in [0,1]. \qquad (21)$$

Setting $\varphi_r(t) = h_r(t^r)$, equation (21) can be transformed to the differential equation

$$h_r''(x) + \rho_r\, x^{1/r - 1}\, h_r(x) = 0, \qquad h_r(0) = 0, \quad h_r'(1) = 0. \qquad (22)$$

Let $\mu = r/(1+r)$. The eigenvalues of equation (21) are

$$\lambda_{r,k} = \rho_{r,k}\, r = r \left(\frac{z_{\mu-1,k}}{2\mu}\right)^2, \qquad k = 1, 2, \dots \qquad (23)$$

As $r$ varies from 1 to zero, $\mu$ varies from 1/2 to zero.

The corresponding eigenfunctions are

$$\varphi_{r,k}(t) = t^{\mu/(2(1-\mu))}\, J_\mu\!\big(z_{\mu-1,k}\, t^{1/(2(1-\mu))}\big), \qquad k = 1, 2, \dots \qquad (24)$$

The squared normalizing divisor for $\varphi_{r,k}(t)$ is

$$D_{r,k}^2 = \frac{(\mu - 1)\, z_{\mu-1,k}^{2\mu}\, \Gamma(2+\mu)\, \Gamma(1+2\mu)}{\pi}\; {}_1F_2\!\left(\tfrac12 + \mu;\ 2 + \mu,\ 1 + 2\mu;\ -z_{\mu-1,k}^2\right).$$

Here ${}_pF_q(a_1, \dots, a_p; b_1, \dots, b_q; z)$ is the generalized hypergeometric function defined by the series

$${}_pF_q(a_1, \dots, a_p; b_1, \dots, b_q; z) = 1 + \sum_{m=1}^{\infty} \frac{(a_1)_m \cdots (a_p)_m}{(b_1)_m \cdots (b_q)_m\, m!}\, z^m, \qquad (25)$$

and $(a)_m = a(a+1)\cdots(a+m-1)$, with $(a)_0 = 1$, is the Pochhammer symbol, $|z| < 1$. We shall use the notation $\bar{\varphi}_{r,k}(t) = \varphi_{r,k}(t)/D_{r,k}$.

Eigenvalues and eigenfunctions of $K_0^*$. Next we obtain the eigenvalues and eigenfunctions of the kernel $K_0^*(t,\tau)$, $t, \tau \in [0,1]^\infty$. In this case the Fredholm equation becomes

$$\varphi(t) = \lambda \int_{[0,1]^\infty} \prod_j \min(t_j^{r_j}, \tau_j^{r_j})\,\varphi(\tau)\,d\tau, \qquad t, \tau \in [0,1]^\infty. \qquad (26)$$

Here $r_j$, $j = 1, 2, \dots$, is a prescribed sequence of numbers monotonically decreasing from 1 to 0 and satisfying condition (20). Denote by $\alpha_{j,k} = 1/\lambda_{r_j,k}$, $k = 1, 2, 3, \dots$, the characteristic numbers of equation (21), and let $A_j = \{\alpha_{j,1}, \alpha_{j,2}, \alpha_{j,3}, \dots\}$ be the set of characteristic numbers of equation (21) corresponding to the value $r = r_j$. Then the set of characteristic numbers of equation (26) consists of all possible products

$$\alpha_{k_1,k_2,k_3,\dots} = \prod_j \alpha_{j,k_j}, \qquad (27)$$

where $(k_1, k_2, k_3, \dots)$ runs over all sequences of positive integers. The eigenfunction corresponding to $\alpha_{k_1,k_2,k_3,\dots}$ is

$$\varphi_{k_1,k_2,k_3,\dots}(t) = \prod_j \bar{\varphi}_{r_j,k_j}(t_j). \qquad (28)$$

We expect (but do not prove) that the characteristic numbers $\alpha_{k_1,k_2,k_3,\dots}$ have multiplicities equal to 1. We arrange all the characteristic numbers in descending order, giving them serial numbers: $\alpha_1 > \alpha_2 > \alpha_3 > \cdots$.

The following theorem gives an idea of the behavior of these characteristic numbers.

THEOREM. If $r \to 0$, then the limit form of the Fredholm equation (21) has a unique eigenvalue, equal to one.

PROOF. The limit form of equation (21) is

$$\varphi(t) = \lambda \int_0^1 \varphi(\tau)\,d\tau, \qquad t \in [0,1]. \qquad (29)$$

Hence $\varphi(t)$ is a constant, and it can be proved that, in the form (24), $\varphi(t) \to 1$. Hence equation (21) in this limit has the single eigenvalue $\lambda = 1$. The assertion follows from Parseval's equality, because here, formally, $K(t,t) \equiv 1$ and $\int_0^1 K(t,t)\,dt = 1$.

It follows from this theorem that $\lambda_{r_j,1} \to 1$, while $\lambda_{r_j,k} \to \infty$ (i.e. $\alpha_{j,k} \to 0$), $k = 2, 3, \dots$, as $j \to \infty$.

The behavior of the characteristic numbers as $r_j$ tends to zero is shown in Table 1. Here and below we take $r_i = i^{-a(1 - i^{-b})}$ with $a = 2.5$, $b = 0.5$.

Table 1.

 j    r_j       α_{j,1}   α_{j,2}   α_{j,3}   α_{j,4}   α_{j,5}
 1    1         0.4053    0.0450    0.0162    0.0083    0.0050
 2    0.60197   0.5298    0.0463    0.0160    0.0081    0.0048
 3    0.31323   0.6829    0.0400    0.0132    0.0065    0.0039
 4    0.17678   0.7917    0.0303    0.0097    0.0047    0.0028
 5    0.10816   0.8610    0.0219    0.0068    0.0033    0.0019
10    0.01952   0.9716    0.0050    0.0015    0.0007    0.0004
25    0.00160   0.9976    0.0004    0.0001    0.0001    0.0000
50    0.00023   0.9997    0.0001    0.0000    0.0000    0.0000
99    0.00003   1.0000    0.0000    0.0000    0.0000    0.0000
 ∞    0         1         0         0         0         0
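Entries of Table 1 can be reproduced from (23): for each $r_j$ compute $\mu = r/(1+r)$, locate the zeros of $J_{\mu-1}$, and invert the eigenvalues. A sketch (the sign-scan step size is an arbitrary choice):

```python
# Reproducing rows of Table 1: alpha_{j,k} = 1/lambda_{r_j,k}, where
# lambda_{r,k} = r * (z_{mu-1,k} / (2*mu))^2 and mu = r/(1+r).
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

def bessel_zeros(nu, k, step=0.05):
    """First k positive zeros of J_nu (here -1 < nu < 0), by sign scan."""
    zeros, x = [], 1e-6
    while len(zeros) < k:
        if jv(nu, x) * jv(nu, x + step) < 0:
            zeros.append(brentq(lambda z: jv(nu, z), x, x + step))
        x += step
    return np.array(zeros)

a, b = 2.5, 0.5
for j in (1, 2, 3):
    r = j ** (-a * (1 - j ** (-b)))
    mu = r / (1 + r)
    lam = r * (bessel_zeros(mu - 1, 5) / (2 * mu)) ** 2
    print(j, round(r, 5), np.round(1 / lam, 4))   # row j of Table 1
```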

We can compute the largest characteristic number:

$$\alpha_1 = \prod_j \alpha_{j,1} \approx 0.0642. \qquad (30)$$

It is also obvious that the second largest characteristic number is

$$\alpha_2 = \alpha_1\, \alpha_{1,2}/\alpha_{1,1} \approx 0.007137. \qquad (31)$$

The corresponding eigenfunctions are $\varphi_1(t) = \prod_j \bar{\varphi}_{r_j,1}(t_j)$ and $\varphi_2(t) = \varphi_1(t)\, \bar{\varphi}_{r_1,2}(t_1)/\bar{\varphi}_{r_1,1}(t_1)$. Notice that the sequence $k_1, k_2, k_3, \dots$ in (27) may contain only a finite number of terms different from 1. Denote $v_{j,k} = \alpha_{j,k}/\alpha_{j,1}$, $k \ge 2$. Then the $m$-th characteristic number can be represented as

$$\alpha_m = \alpha_1\, v_{j_{m,1},k_{m,1}}\, v_{j_{m,2},k_{m,2}} \cdots v_{j_{m,s_m},k_{m,s_m}}, \qquad (32)$$

where $v_{j_{m,1},k_{m,1}}\, v_{j_{m,2},k_{m,2}} \cdots v_{j_{m,s_m},k_{m,s_m}}$ is the $m$-th largest value among the finite products

$$v_{j_1,k_1}\, v_{j_2,k_2} \cdots v_{j_s,k_s}, \qquad k_i \ge 2, \quad 1 \le j_1 < j_2 < \cdots < j_s, \quad s \ge 1,$$

ranked in descending order.
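The ranking in (32) can be generated mechanically from the ratios $v_{j,k}$ built out of Table 1. A brute-force sketch that enumerates single and double swaps, which suffices for the leading values here because every $v_{j,k} \le 0.12$, so double products fall below 0.01:

```python
# Ranking the products (32) from v_{j,k} = alpha_{j,k}/alpha_{j,1},
# using the Table 1 values; single and double swaps only.
from itertools import combinations

alpha = {  # alpha_{j,k}, j = 1..5, k = 1..5, from Table 1
    1: [0.4053, 0.0450, 0.0162, 0.0083, 0.0050],
    2: [0.5298, 0.0463, 0.0160, 0.0081, 0.0048],
    3: [0.6829, 0.0400, 0.0132, 0.0065, 0.0039],
    4: [0.7917, 0.0303, 0.0097, 0.0047, 0.0028],
    5: [0.8610, 0.0219, 0.0068, 0.0033, 0.0019],
}
alpha1 = 0.0642      # alpha_1 = prod_j alpha_{j,1}, eq. (30)
v = {j: [x / alpha[j][0] for x in alpha[j][1:]] for j in alpha}

prods = [1.0]                                      # no swap: alpha_1 itself
prods += [v[j][k] for j in v for k in range(4)]    # single swaps
prods += [v[j1][k1] * v[j2][k2]                    # double swaps, j1 < j2
          for j1, j2 in combinations(v, 2)
          for k1 in range(4) for k2 in range(4)]
for m, p in enumerate(sorted(prods, reverse=True)[:10], start=1):
    print(m, alpha1 * p)     # alpha_m; m = 2 gives ~0.00714, cf. (31)
```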

TABLE 2. Symbolic table for computing the first eigenvalues and eigenfunctions of $K_0^*(s,t)$. (The table itself, a large array of indices $s_{jk}$, is not legible in this transcription.) Table 2 represents the realisation of formula (32) for the considered sequence $r_i$; denote its elements by $s_{jk}$.

Column $k$ of the table corresponds to the $k$-th characteristic number $\alpha_k$ of $K_0^*$, in descending order, via the formula $\alpha_k = \prod_j \alpha_{j,s_{jk}}$. Table 3 shows some actual values of the characteristic numbers in accordance with Table 2.

Table 3 (columns k = 1, ..., 5 and k = 6, ..., 10; row j shows the factor taken from A_j):

j=1:  α_{1,1}  α_{1,2}  α_{1,1}  α_{1,1}  α_{1,3}  |  α_{1,1}  α_{1,1}  α_{1,1}  α_{1,4}  α_{1,1}
j=2:  α_{2,1}  α_{2,1}  α_{2,2}  α_{2,1}  α_{2,1}  |  α_{2,1}  α_{2,3}  α_{2,1}  α_{2,1}  α_{2,1}
j=3:  α_{3,1}  α_{3,1}  α_{3,1}  α_{3,2}  α_{3,1}  |  α_{3,1}  α_{3,1}  α_{3,1}  α_{3,1}  α_{3,3}
j=4:  α_{4,1}  α_{4,1}  α_{4,1}  α_{4,1}  α_{4,1}  |  α_{4,2}  α_{4,1}  α_{4,1}  α_{4,1}  α_{4,1}
j=5:  α_{5,1}  α_{5,1}  α_{5,1}  α_{5,1}  α_{5,1}  |  α_{5,1}  α_{5,1}  α_{5,2}  α_{5,1}  α_{5,1}

Eigenvalues of $K^*$. The characteristic numbers $\alpha$ of $K^*(s,t)$ are solutions of the equation

$$\sum_{k=1}^{\infty} \frac{C_k^2\, \alpha_k\, \alpha}{\alpha - \alpha_k} = 1,$$

where $C_k = \prod_j C_{j,k_j}$ and

$$C_{j,k} = \int_0^1 w_j(t)\, \bar{\varphi}_{j,k}(t)\,dt = \int_0^1 t^{\mu_j/(1-\mu_j)}\, \bar{\varphi}_{j,k}(t)\,dt, \qquad j = 1, 2, \dots$$

From this it can be derived that

$$C_{r,k}^2 = \left(\frac{2^{-\mu}(\mu - 1)}{\Gamma(2+\mu)}\; z_{\mu-1,k}^{\mu}\;\, {}_0F_1\!\big(;\ 2+\mu;\ -\tfrac14 z_{\mu-1,k}^2\big)\right)^2 \Big/ D_{r,k}^2,$$

where ${}_0F_1$ is the generalized hypergeometric function

$${}_0F_1(;a;z) = \sum_{k=0}^{\infty} \frac{z^k}{(a)_k\, k!}.$$

Smirnov formula. In the previous section we obtained the eigenvalues $\lambda_k = 1/\alpha_k$ of the covariance function $K^*(s,t)$. To calculate the limiting distribution function of $\Omega^2$ we can use the Smirnov formula: for $t > 0$,

$$P\{\Omega^2 > t\} = \frac{1}{\pi} \sum_{k=1}^{\infty} (-1)^{k+1} \int_{\lambda_{2k-1}}^{\lambda_{2k}} \frac{e^{-tu/2}}{u \sqrt{\Big|\prod_{j=1}^{\infty}\big(1 - \frac{u}{\lambda_j}\big)\Big|}}\, du.$$

The Smirnov formula is designed for distinct eigenvalues.
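As a sanity check of the formula (a sketch, not the talk's computation), one can apply it with the classical eigenvalues $\lambda_k = (k\pi)^2$, truncating the infinite product; the tail at the classical 5% point $t \approx 0.4614$ should come out near 0.05:

```python
# Smirnov's formula with the classical eigenvalues lambda_k = (k*pi)^2,
# truncating the infinite product at K terms. The 1/sqrt endpoint
# singularities are integrable; quad handles them adaptively.
import numpy as np
from scipy.integrate import quad

K = 200
lam = (np.arange(1, K + 1) * np.pi) ** 2

def integrand(u, t):
    prod = np.prod(1.0 - u / lam)        # ~ sin(sqrt(u))/sqrt(u)
    return np.exp(-t * u / 2) / (u * np.sqrt(abs(prod)))

def tail(t, terms=5):
    s = 0.0
    for k in range(1, terms + 1):        # alternating sum over the intervals
        val, _ = quad(integrand, lam[2 * k - 2], lam[2 * k - 1],
                      args=(t,), limit=200)
        s += (-1) ** (k + 1) * val
    return s / np.pi

print(tail(0.4614))   # should be close to 0.05
```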

The limiting distribution of $\Omega_n^2$. One thousand values $\alpha_i = 1/\lambda_i$ were obtained. The distribution of the statistic $\Omega_n^2$ is approximated by the finite quadratic form

$$Q_{100} = \sum_{k=1}^{100} \frac{z_k^2}{\lambda_k},$$

where $z_k$ are independent identically distributed random variables with the standard normal distribution. The residue

$$\sum_{k=101}^{\infty} \frac{z_k^2}{\lambda_k}$$

of the full quadratic form is replaced by its expectation 0.0296327. As a result, the following percentage points of the statistic $\Omega^2$ were obtained:

$P\{\Omega^2 \le 0.16450\} \approx 0.90$, $P\{\Omega^2 \le 0.20371\} \approx 0.95$, $P\{\Omega^2 \le 0.30039\} \approx 0.99$, $P\{\Omega^2 \le 0.34350\} \approx 0.995$, $P\{\Omega^2 \le 0.44563\} \approx 0.999$.
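A sketch of this approximation scheme for a generic eigenvalue sequence; the talk's eigenvalues of $K^*$ are not available here, so the classical $\lambda_k = (k\pi)^2$ stand in purely to show the mechanics:

```python
# The 100-term quadratic form plus the expected residue, as on the slide,
# with stand-in eigenvalues lambda_k = (k*pi)^2.
import numpy as np

rng = np.random.default_rng(3)
lam = (np.arange(1, 101) * np.pi) ** 2
residue = sum(1 / (k * np.pi) ** 2 for k in range(101, 100_000))  # E residue
Q = (rng.standard_normal((100_000, 100)) ** 2 / lam).sum(axis=1) + residue
print(np.quantile(Q, [0.90, 0.95, 0.99]))
```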

BIBLIOGRAPHY

Deheuvels, P., Martynov, G. (2003). Karhunen-Loève expansions for weighted Wiener processes and Brownian bridges via Bessel functions. Progress in Probability 55, 57-93. Birkhäuser, Basel.

Darling, D.A. (1955). The Cramér-Smirnov test in the parametric case. Ann. Math. Statist. 26, 1-20.

Martynov, G.V. (1975). Computation of distribution functions of quadratic forms of normally distributed random variables. Theory Probab. Appl. 20, 782-793.

Krivyakova, E.N., Martynov, G.V., Tyurin, Yu.N. (1977). The distribution of the omega-square statistic in the multivariate case. Theory Probab. Appl. 22, 415-420.

Martynov, G.V. (1979). The Omega-Square Tests. Nauka, Moscow, 80 pp. (in Russian).

Martynov, G.V. (1992). Statistical tests based on empirical processes and related questions. J. Soviet Math. 61, 2195-2271.

van der Vaart, A.W., Wellner, J.A. (1996). Weak Convergence and Empirical Processes. Springer-Verlag, 508 pp.

END