Bayesian inference with M-splines on spectral measure of bivariate extremes
Methodology and Computing in Applied Probability manuscript No. (will be inserted by the editor)

Bayesian inference with M-splines on spectral measure of bivariate extremes

Khader Khadraoui · Pierre Ribereau

Received: date / Accepted: date

Abstract We consider a Bayesian methodology with M-splines for the spectral measure of a bivariate extreme-value distribution. The tail of a bivariate distribution function F in the max-domain of attraction of an extreme-value distribution function G may be approximated by that of its extreme-value attractor. The function G is characterized by a probability measure with expectation equal to 1/2, called the spectral measure, and by two extreme-value indices. The spectral measure determines the tail dependence structure of F. The approximation of the spectral measure is obtained through a nonparametric Bayesian estimator that is guaranteed to fulfill a moment constraint and a shape constraint. The problem of routine calculation of posterior distributions for both the coefficients and the knots of the M-splines is addressed using the reversible-jump Markov chain Monte Carlo (MCMC) simulation technique.

Keywords Bayesian inference · M-splines · Bivariate extremes · Spectral measure · Monotone shape · MCMC

Mathematics Subject Classification (2010) Primary 62G32, 62F15, 62G05; Secondary 62F30, 60J22, 65D07

1 Introduction

Suppose that we observe a random sample (X_i1, X_i2), i = 1, ..., n, from an unknown bivariate distribution F in the max-domain of attraction of a bivariate

Khader Khadraoui: Laval University, Department of Mathematics and Statistics, Quebec City G1V 0A6, Canada. E-mail: khader.khadraoui@mat.ulaval.ca
Pierre Ribereau: Université Lyon 1, Institut Camille Jordan ICJ UMR 5208 CNRS, 69622, Lyon, France.
extreme-value distribution G. It is known that the tail of F is well approximated by the tail of G except in the case of asymptotic independence. Each margin of G is characterized by three parameters. The dependence structure of G is characterized by a spectral measure, which is a finite Borel measure on a compact interval. Hence, approximate inference on the tail of F can be done via inference on the six marginal parameters and on the spectral measure. The estimation of the six parameters is well understood, while estimation of the spectral measure remains a problem that deserves further investigation, although some works in the literature address this question. The literature on the spectral measure usually focuses on frequentist inference, both within parametric approaches (see for instance Coles and Tawn (1991, 1994); Joe et al. (1992); Smith (1994); Ledford and Tawn (1996); de Haan et al. (2008); Boldi and Davison (2007)) and with non-parametric procedures (see for instance de Haan and de Ronde (1998); de Haan and Sinha (1999); Einmahl et al. (2006, 2001); Schmidt and Stadtmüller (2006); Einmahl and Segers (2009)). Recently, using the method of Lagrange multipliers, a constrained piecewise linear estimate was proposed in Einmahl and Segers (2009) in order to incorporate the moment constraint characterizing the class of spectral measures. A review of dependence function and spectral measure estimators can be found in the monographs Coles (2001); de Haan and Ferreira (2006). Bayesian approaches to the spectral measure under the moment constraint are rather rare. The aim of our article is to propose a smooth non-parametric estimate of the spectral measure using an M-spline basis. This estimate is obtained via a Bayesian approach that, through the prior distribution, guarantees that some desirable constraints (concerning the expectation and the shape of the spectral measure) are satisfied.
The coherence of the Bayesian paradigm with inference on univariate and multivariate extremes has been argued in the literature (Aitchison and Dunsmore, 1975; Coles and Tawn, 1996b,a; Guillotte et al., 2011). The contributions of this paper are twofold: first, we propose a nonparametric Bayesian estimator of the spectral measure which fulfills both the moment and the monotonicity restrictions; second, we use an M-spline basis in the construction of a monotone smooth estimator, which, to our knowledge, is the only example of a constrained estimator with M-splines in the spectral measure estimation framework. Splines are important because they are used wherever curves are to be fitted: being polynomial, they can be evaluated quickly; being piecewise polynomial, they are very flexible; and their representation in terms of M-splines or B-splines provides geometric information and insight. Our methodology is presented in the following by constructing the estimator and selecting a prior distribution for the spectral measure. The spectral measure is then estimated by the posterior mode, as the mode necessarily fulfills the moment and shape constraints. Note that this is not necessarily the case for the posterior mean in the presence of shape constraints (Abraham and Khadraoui, 2015; Khadraoui, 2017a). The posterior mode is computed using simulations from the posterior distribution. The problem of routine simulation from the posterior distribution of both the coefficients and the knots of the M-splines is
addressed using the reversible-jump Markov chain Monte Carlo (MCMC) simulation technique (Green, 1995). The paper is organized as follows. In Section 2, we review the general theoretical results for spectral measures. The construction of the subspace of spectral measures and the selection of the prior distribution are done in Section 3. Section 4 is devoted to the Bayesian inference. In Section 5, we provide a simulation study to compare the performance of the Bayes M-splines estimator of the spectral measure introduced in this paper with three estimators proposed recently in the literature. Section 6 discusses further research.

2 Bivariate tail approximation

In this section, we describe the dependence structure of a bivariate tail in various equivalent ways: from the spectral measure Φ introduced in de Haan and Resnick (1977) to the spectral measure H in Coles and Tawn (1991). We consider an observed vector (X_1, X_2) of one realization from a continuous distribution function F with marginal distribution functions F_1 and F_2. We define D = [0, ∞]^2 \ {(0, 0)} and, for l = 1, 2, we put

Z_l = 1 / (1 − F_l(X_l)).    (2.1)

For every continuous γ : D → R with compact support, and assuming that, for ξ → ∞,

ξ P[ξ^(−1) (Z_1, Z_2) ∈ · ] →_v ν( · ),    (2.2)

where →_v stands for vague convergence of measures (in D), we have lim_{ξ→∞} ξ E[γ(ξ^(−1)(Z_1, Z_2))] = ∫_D γ dν. We point out that the exponent measure ν enjoys two crucial properties: homogeneity and standardized marginals,

ν(c · ) = c^(−1) ν( · ),  0 < c < ∞,    (2.3)

ν([z, ∞] × [0, ∞]) = ν([0, ∞] × [z, ∞]) = 1/z,  0 < z ≤ ∞.    (2.4)

From (2.4) it is easy to see that ν([0, ∞]^2 \ [0, ∞)^2) = 0. Let ‖·‖ be an arbitrary norm on R^2. Consider the following polar coordinates (r, φ) of (z_1, z_2) ∈ [0, ∞)^2 \ {(0, 0)}:

r = ‖(z_1, z_2)‖ ∈ (0, ∞),  φ = arctan(z_1/z_2) ∈ [0, π/2].    (2.5)
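As an aside, the standardization (2.1) and the pseudo-polar coordinates (2.5) are easy to mimic on data; the sketch below is our own illustration (numpy assumed), with the empirical distribution functions standing in for the unknown margins F_l.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Any continuous bivariate sample will do; a Gaussian sample is used here
# purely for illustration.
x = rng.standard_normal((n, 2))

# Empirical analogue of (2.1): Z_l = 1 / (1 - F_l(X_l)), with F_l replaced
# by the empirical distribution function rescaled by n + 1 to keep Z finite.
ranks = x.argsort(axis=0).argsort(axis=0) + 1      # marginal ranks 1..n
z = 1.0 / (1.0 - ranks / (n + 1.0))                # unit-Pareto scale

# Pseudo-polar coordinates (2.5), here with the Euclidean norm.
r = np.hypot(z[:, 0], z[:, 1])                     # radius in (0, infinity)
phi = np.arctan2(z[:, 0], z[:, 1])                 # arctan(z1/z2) in [0, pi/2]
```

Large values of r single out the joint extremes, whose angles phi carry the dependence information summarized by Φ.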
Now, using the polar coordinates (r, φ) given in (2.5), we define a Borel measure Φ on [0, π/2] by

Φ( · ) = ν({(z_1, z_2) ∈ [0, ∞)^2 : r ≥ 1, φ ∈ · }).    (2.6)

We can interpret the spectral measure defined in (2.6) as follows:

ξ P[‖(Z_1, Z_2)‖ ≥ ξ, arctan(Z_1/Z_2) ∈ · ] →_v Φ( · ),  ξ → ∞.    (2.7)

We set z_1(r, φ) = r sin φ / ‖(sin φ, cos φ)‖ and z_2(r, φ) = r cos φ / ‖(sin φ, cos φ)‖. For every ν-integrable γ : D → R, using the crucial homogeneity property (2.3), we have

∫_D γ dν = ∫_{[0, π/2]} ∫_0^∞ γ(z_1(r, φ), z_2(r, φ)) r^(−2) dr Φ(dφ),    (2.8)

so that the exponent measure ν is, in the polar coordinate system (r, φ), a product measure r^(−2) dr Φ(dφ) and, in particular, is completely determined by its spectral measure Φ. The crucial standardization property (2.4) of the exponent measure ν translates into a moment constraint on Φ:

∫_{[0, π/2]} cos φ / ‖(sin φ, cos φ)‖ Φ(dφ) = ∫_{[0, π/2]} sin φ / ‖(sin φ, cos φ)‖ Φ(dφ) = 1.    (2.9)

The bivariate tail dependence structure is analyzed as follows: X_1 and X_2 are completely tail dependent, that is, ξ P[Z_1 ≥ ξ, Z_2 ≥ ξ] → 1 as ξ → ∞, if and only if Φ is concentrated on {φ = π/4} (equivalently, ν is concentrated on the main diagonal); similarly, X_1 and X_2 are tail independent, that is, ξ P[Z_1 ≥ ξ, Z_2 ≥ ξ] → 0 as ξ → ∞, if and only if Φ is concentrated on {φ = 0, φ = π/2} (equivalently, ν is concentrated on the coordinate axes). Note that in the case of complete tail dependence Φ({π/4}) = ‖(1, 1)‖, while in the case of tail independence Φ({0}) = Φ({π/2}) = 1. Up to now, we have made no assumptions on the marginal distributions F_1 and F_2 except for continuity. However, if in addition to (2.2) we consider F on quadrants of the form [u_1, ∞) × [u_2, ∞) (where u_1 and u_2 are high thresholds), then the domain of attraction condition on a bivariate distribution function G yields a good approximation for F. We note that F is well approximated only on a subset of its support.
Precisely, if (2.2) holds and if there exist real sequences a_nl > 0 and b_nl, for l ∈ {1, 2}, and a bivariate cumulative distribution function G with non-degenerate margins, then, for all x, y ∈ R,

F^n(a_n1 x + b_n1, a_n2 y + b_n2) → G(x, y),  n → ∞,

G(x, y) = exp[−l{−log G_1(x), −log G_2(y)}],    (2.10)
where the stable tail dependence function l (Drees and Huang, 1998; Huang, 1992) can be expressed in terms of the spectral measures ν, Φ or H through

l(x_1, x_2) = ν({(z_1, z_2) ∈ [0, ∞]^2 : z_1 ≥ x_1^(−1) or z_2 ≥ x_2^(−1)})
= ∫_{[0, π/2]} max(x_1 sin φ, x_2 cos φ) / ‖(sin φ, cos φ)‖ Φ(dφ)
= 2 ∫_{[0, 1]} max(w x_1, (1 − w) x_2) dH(w),

for (x_1, x_2) ∈ [0, ∞)^2. The spectral measure H is a probability measure on [0, 1] with mean equal to 1/2. The stable tail dependence function l was introduced by D.M. Mason in an unpublished 1991 manuscript and by Huang (1992). Drees and Huang (1998) showed that the following empirical tail dependence function attains the optimal rate of convergence for estimators of the stable tail dependence function,

l̂(x_1, ..., x_d) = (1/k) Σ_{i=1}^n 1{∃ j ∈ {1, ..., d} : r_ij > n + 1 − k x_j},

where 1_A denotes the indicator function of the set A and r_ij denotes the rank of X_ij among X_1j, ..., X_nj (more precisely, r_ij = Σ_{s=1}^n 1(X_sj ≤ X_ij)), with k = k_n → ∞ and k/n → 0. For more details on the connections between the stable tail dependence function on the one hand and the exponent and spectral measures on the other hand, we refer the interested reader to the references Beirlant et al. (2004); de Haan and Ferreira (2006). Given large thresholds u_1 and u_2, the marginal cumulative distribution functions can be given, for l ∈ {1, 2}, by

−log{G_l(x_l | δ_l)} = ζ_l (1 + η_l (x_l − u_l)/σ_l)^(−1/η_l),    (2.11)

for x_l such that η_l(x_l − u_l) + σ_l > 0, where η_l is a shape parameter (the extreme-value index), σ_l is a scale parameter and δ_l = (ζ_l, η_l, σ_l). We note that 0 < ζ_l = −log{G_l(u_l | δ_l)} and, as u_l is large, we approximate ζ_l by the marginal probability of exceeding the threshold (ζ_l ≈ 1 − G_l(u_l | δ_l)).
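The rank-based estimator l̂ above can be sketched in a few lines; the following is our own illustration (numpy assumed; the toy sample, the value of k and the evaluation points are arbitrary choices, not taken from the paper).

```python
import numpy as np

def l_hat(x, coords, k):
    """Empirical stable tail dependence function:
    (1/k) * sum_i 1{exists j : r_ij > n + 1 - k * x_j}."""
    n, d = x.shape
    r = x.argsort(axis=0).argsort(axis=0) + 1      # marginal ranks 1..n
    thresholds = n + 1.0 - k * np.asarray(coords)
    return np.any(r > thresholds, axis=1).sum() / k

rng = np.random.default_rng(1)
u = rng.random(2000)
pairs = np.column_stack([u, u])                    # completely dependent toy data
val = l_hat(pairs, (1.0, 1.0), k=100)
```

Under complete dependence l(1, 1) = 1, while under tail independence l(1, 1) = 2; the estimator also recovers the marginal property l(1, 0) = 1 up to the 1/k discretization.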
Therefore, the aim from now on is to study the domain of attraction of the bivariate max-stable distribution G, which may be characterized by its marginal parameters (δ_1, δ_2) and its spectral measure H with expectation equal to 1/2.

3 M-spline estimator and prior distribution

In this section we use an M-spline basis to propose an estimator for the spectral measure. As we saw in Section 2, the dependence structure of a bivariate tail can be described in various equivalent ways. For the sake of simplicity, we propose in this paper to estimate H. From now on, we use the same symbol to denote the spectral measure and its cumulative distribution function, i.e., H(w) = H([0, w]) for w ∈ [0, 1].
3.1 M-spline estimator for the spectral measure

In this subsection, we construct a class H of smooth spectral measures whose only atoms, if any, are at 0 and at 1. Our class H will be supported on the set of smooth functions constructed from a spline basis. Fix some order q, a natural number for which we assume in this paper q ≥ 3, and let K ≥ 2 be another natural number, which will increase with n, and partition the open unit interval (0, 1) into K subintervals ((k − 1)/K, k/K) for k = 1, ..., K. Consider the linear space of splines of order q relative to this partition, that is, all functions s : (0, 1) → R which are piecewise polynomial of degree < q and which are, in the case q ≥ 2, q − 2 times continuously differentiable. A thorough presentation of splines is given in de Boor (1987). It can be shown that this is a J = (q + K − 1)-dimensional vector space. A convenient basis for our study is the set of M-splines. Let t = (t_1, ..., t_{K−1+2q}) be the nondecreasing sequence of knots

t := (−(q − 1)/K, ..., −1/K, 0, 1/K, 2/K, ..., (K − 1)/K, 1, (K + 1)/K, ..., (K + q − 1)/K),

and let M_1,q, ..., M_J,q be the M-spline functions of order q with complete knot vector t (interior and exterior knots). In this connection, it is worthwhile to stress that the term M-spline refers to a certain normalized B-spline on its minimal support [t_j, t_{j+q}), i.e.,

M_j,q := (q/(t_{j+q} − t_j)) B_j,q,    (3.1)

where B_j,q denotes the usual jth B-spline function of order q. This brings us to an immediate property:

∫_{−∞}^{+∞} M_j,q = ∫_{t_j}^{t_{j+q}} M_j,q = 1.

We point out that the spectral measure H is not absolutely continuous with respect to the Lebesgue measure on [0, 1], because it gives positive probabilities a_0 and a_1 to the boundary points 0 and 1. The Lebesgue decomposition of H into absolutely continuous H^c and singular H^s parts reads

H(dw) = a_0 δ_0(dw) + h^β_J(w) dw + a_1 δ_1(dw),    (3.2)

where δ_z is the Dirac mass at state z and dw is the Lebesgue measure on (0, 1).
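The basis construction (3.1) and the unit-integral property can be checked numerically. The sketch below builds the M-splines from scipy's B-spline elements (an implementation convenience we assume here, not something prescribed by the text), with q = 3 and K = 5 chosen arbitrarily.

```python
import numpy as np
from scipy.interpolate import BSpline

q, K = 3, 5                       # spline order and number of subintervals
J = K + q - 1                     # dimension of the spline space
# Complete equidistant knot vector t_1, ..., t_{K-1+2q} as in the text.
t = np.arange(-(q - 1), K + q) / K

def m_spline(j):
    """M_{j,q} = q / (t_{j+q} - t_j) * B_{j,q}, supported on [t_j, t_{j+q})."""
    b = BSpline.basis_element(t[j:j + q + 1], extrapolate=False)
    scale = q / (t[j + q] - t[j])
    return lambda w: np.nan_to_num(b(w)) * scale

# Trapezoidal check of the normalisation: each M_{j,q} integrates to one.
w = np.linspace(t[0], t[-1], 20001)
dw = w[1] - w[0]
integrals = []
for j in range(J):
    v = m_spline(j)(w)
    integrals.append(dw * (v.sum() - 0.5 * (v[0] + v[-1])))
```

The same construction is reused below whenever a basis of M-splines is needed.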
Since H is a probability measure, we have a_0 + a_1 = 1 − ∫_0^1 h^β_J(w) dw = 1 − H^β_J(1), where, for β ∈ R^J, we define the continuous part of the spectral measure H^β_J : (0, 1) → R by a linear combination of the M-splines, i.e., a function of the form

H^β_J(w) = Σ_{j=1}^J β_j M_j,q(w),    (3.3)
where it will be clear in Proposition 1 that the spline H^β_J( · ) can be made monotone by suitably controlling the values of the coefficients (β_1, ..., β_J). Thus, the spectral measure H is absolutely continuous with respect to the reference measure µ on [0, 1] given by

µ(dw) = δ_0(dw) + dw + δ_1(dw),    (3.4)

with density

h(w) = a_0, if w = 0;  h^β_J(w), if w ∈ (0, 1);  a_1, if w = 1.    (3.5)

Remark 1 The class of spectral measures satisfying the Lebesgue decomposition (3.2) may seem somewhat restrictive, since it does not contain the atomless (having no atoms) continuous measures which are not absolutely continuous. An example of such a measure is the Cantor distribution. For q = 1 the linear space of splines consists of piecewise constant functions with cell boundaries k/K for k = 0, 1, ..., K. Our construction (3.3) therefore contains the Cantor spectral measure constructed on piecewise constant functions as a special case (H ≈ Σ_{j=1}^J β_j M_j,1).

Now, as the moment constraint is ∫_0^1 w dH(w) = 1/2, using (3.3) and the derivative formula for M-spline functions given in (de Boor, 1987, Ch. X), we can write

∫_0^1 w dH^β_J(w) = ∫_0^1 w d(Σ_{j=1}^J β_j M_j,q(w))
= [w Σ_{j=1}^J β_j M_j,q(w)]_0^1 − Σ_{j=1}^J β_j ∫_0^1 M_j,q(w) dw    (3.6)
= Σ_{j=1}^J β_j M_j,q(1) − Σ_{j=1}^J β_j ∫_0^1 M_j,q(w) dw    (3.7)
= Q(β_3:J, M_3:J,q) − β_1 ∫_0^1 M_1,q(w) dw − β_2 ∫_0^1 M_2,q(w) dw = 1/2 − a_1,

where Q(β_3:J, M_3:J,q) := Σ_{j=3}^J β_j {M_j,q(1) − ∫_0^1 M_j,q(w) dw} and M_1,q(1) = M_2,q(1) = 0 since K ≥ 2. We point out that the transition from (3.6) to (3.7) is obtained thanks to the normalization property of the M-splines (∫ M_j,q = 1), and in (3.7) the value M_j,q(1) can be computed through the recurrence M_j,q(1) = w_j,q(1) M_j,q−1(1) +
{1 − w_j+1,q(1)} M_j+1,q−1(1), where w_j,q(w) = (w − t_j)/(t_{j+q−1} − t_j) if t_j < t_{j+q−1}, and w_j,q(w) = 0 otherwise. We recall that the M-spline function of order 1 is M_j,1(w) = K 1_{[t_j, t_{j+1})}(w). Then, in order to introduce the moment and the monotone shape constraints, it will be convenient to work with the following parameterization. Suppose that we have a countable collection of candidate M-spline bases {M_J, J ∈ J}, where J = {4, ..., J_sup} and J_sup ∈ N. The model with basis M_J has a vector of unknown parameters assumed to lie in Θ^a_J ⊂ R^{d_J}, where the dimension d_J = J + 2 may vary from model to model. Globally, all the parameters vary over Θ = ∪_{J∈J}({J} × Θ^a_J), where Θ^a_J is defined as follows. There exists a constant C^{β_a}_M that depends on β_a = (a_0, β_1, ..., β_J, a_1) and on M = (M_1,q, ..., M_J,q), the M-spline basis, such that

Θ^a_J = {(a_0, β_1, ..., β_{J−1}, a_1) ∈ D : 0 < a_0, a_1 < 1/2 and β_1 ≤ ... ≤ β_J},    (3.8)

where D is a compact subset of R^{J+1} and the Jth coefficient β_J is a function of β_1, ..., β_{J−1}, a_1 and the M-spline basis via the mean restriction (3.7):

β_J = (1/2 − a_1 + β_1 ∫_0^{t_{q+1}} M_1,q(w) dw + β_2 ∫_0^{t_{q+2}} M_2,q(w) dw − Q(β_3:J−1, M_3:J−1,q)) / (M_J,q(1) − ∫_{t_{K+q−1}}^1 M_J,q(w) dw) = C^{β_a}_M.    (3.9)

Example 1 Take K = 2 and q = 2. Then we obtain the sequence of knots t = (−0.5, 0, 0.5, 1, 1.5). In Figure 1, we plot the M-spline basis considered in this example. It is easily seen from (3.7) that β_3 should satisfy

β_3 = (1/2 − a_1 + β_1 ∫_0^{1/2} M_1,2(w) dw + β_2 ∫_0^1 M_2,2(w) dw) / (M_3,2(1) − ∫_{1/2}^1 M_3,2(w) dw) = (1 + β_1 + 2β_2 − 2a_1)/3.

Remark 2 Arguably, although in the parameter subspace Θ^a_J defined by (3.8) the atoms are restricted to (a_0, a_1) ∈ (0, 1/2)^2, the Bernoulli(1/2) spectral measure can still be approximated closely when (a_0, a_1) is close to (1/2, 1/2). Likewise, the Cantor spectral measure can still be approximated closely by taking (a_0, a_1) close to (0, 0) and q = 1 in the M-spline basis.
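As a sanity check of Example 1, the closed form β_3 = (1 + β_1 + 2β_2 − 2a_1)/3 (worked out here from the moment constraint ∫_0^1 w dH^β_J(w) = 1/2 − a_1, so treat it as our own reconstruction rather than a quoted value) can be verified numerically, with scipy assumed.

```python
import numpy as np
from scipy.interpolate import BSpline

# Example 1: K = 2, q = 2, knots t = (-1/2, 0, 1/2, 1, 3/2), J = 3.
t = np.array([-0.5, 0.0, 0.5, 1.0, 1.5])
q = 2

def m_spline(j):
    """M-spline of order 2 built from the corresponding B-spline element."""
    b = BSpline.basis_element(t[j:j + q + 1], extrapolate=False)
    return lambda w: np.nan_to_num(b(w)) * q / (t[j + q] - t[j])

# Free quantities; beta_3 is then forced by the moment constraint.
a1, b1, b2 = 0.3, 0.1, 0.2
b3 = (1.0 + b1 + 2.0 * b2 - 2.0 * a1) / 3.0

# Check int_0^1 w dH(w) = H(1) - int_0^1 H(w) dw = 1/2 - a_1
# (integration by parts of the Stieltjes integral).
w = np.linspace(0.0, 1.0, 200001)
H = b1 * m_spline(0)(w) + b2 * m_spline(1)(w) + b3 * m_spline(2)(w)
dw = w[1] - w[0]
int_H = dw * (H.sum() - 0.5 * (H[0] + H[-1]))
moment = H[-1] - int_H
```

Note that the chosen coefficients also satisfy the shape constraint β_1 ≤ β_2 ≤ β_3 of (3.8).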
Fig. 1 The linear M-spline basis considered in Example 1 (K = 2, q = 2), with knots t_1 = −1/2, t_2 = 0, t_3 = 1/2, t_4 = 1, t_5 = 3/2.

Consider µ the finite positive measure on [0, 1] (whose use was specified in (3.4)) and H the space of smooth spectral measures H whose only atoms, if any, are at 0 and at 1, where by smooth we mean all measures having a Lebesgue density h^β_J belonging to the Hölder space C^α(0, 1). (This is the set of all functions that have α_0 derivatives, with α_0 the largest integer strictly smaller than α, the α_0th derivative being Lipschitz of order α − α_0.) From (3.2) and (3.3), we consider H as a random function from a probability space (Θ, A, P) into (H, B), where B is the Borel σ-field. Clearly, for (J, β_a) ∈ Θ, the σ-field on H is the smallest one such that the map (J, β_a) → H is measurable. More precisely, a set E ⊂ H is measurable if the set of β_a ∈ ∪_{J∈J} R^{J+2} fulfilling constraints (3.8)-(3.9) and such that H ∈ E is a Borel set. For every H ∈ H, let β* be the vector of coordinates of the orthogonal projection of H onto the vector subspace C generated by the M-spline bases {M_J, J ∈ J}, which is the approximating subspace. Thus, β* is the unique β such that

inf_{J∈J, β_a∈Θ^a_J} ‖H^c − H^β_J‖ = ‖H^c − H^{β*}_J‖,

where β* in Θ^a_J satisfies the shape constraint β*_1 ≤ ... ≤ β*_J with β*_J = C^{β_a}_M. We now turn to the control of the monotone shape restriction and explain how one may achieve it by (simply) controlling the vector of coefficients β. To this end, we present in the following some statements concerning the link between the shape of H^β_J given by (3.3) and its coefficients (the monotonicity of H^β_J can be read off from its coefficients). These statements are useful in order to take into account the geometric prior information on H known from extreme value theory.
Proposition 1 For w ∈ (0, 1), let H^β_J(w) = Σ_{j=1}^J β_j M_j,q(w), where the coefficients (β_1, ..., β_J) ∈ Θ^a_J, β_J = C^{β_a}_M and (M_1,q, ..., M_J,q) is an M-spline basis of order q > 2. If the coefficients verify β_1 ≤ β_2 ≤ ... ≤ β_J, then the absolutely continuous part H^β_J of the spectral measure is monotone increasing: (H^β_J)^(1)(w) ≥ 0 for every w ∈ (0, 1).
Proof Using the derivative formula for splines given in (de Boor, 1987, p. 138) and the knot sequence t (equidistant knots), we deduce that for q > 1

(Σ_{j=1}^J β_j M_j,q(w))^(1) = K Σ_{j=2}^J (β_j − β_{j−1}) M_j,q−1(w),    (3.10)

where M_j,q−1 is again an M-spline with the same knot sequence as M_j,q but of one order lower. Since M_j,q−1(w) ≥ 0 for every w ∈ (0, 1), if (β_j − β_{j−1}) ≥ 0 holds for j = 2, ..., J, then the derivative (Σ_{j=1}^J β_j M_j,q(w))^(1) ≥ 0.

We note that Proposition 1 provides a sufficient (but not necessary) condition under which H^β_J is a monotone increasing smooth function whose function and derivative values at 0 and 1 may be prescribed to be 0.

Remark 3 From (3.10), it is straightforward to express a convex (or concave) shape constraint using the M-spline basis in terms of the combinations (β_j + β_{j−2} − 2β_{j−1}) for j = 3, ..., J. Note that (Σ_{j=1}^J β_j M_j,q(w))^(2) = K^2 Σ_{j=3}^J (β_j − 2β_{j−1} + β_{j−2}) M_j,q−2(w) for q > 2 and w ∈ (0, 1). As a consequence, we are able to provide a sufficient condition under which the spline is a convex smooth function (if β_j + β_{j−2} − 2β_{j−1} ≥ 0 for j = 3, ..., J) or a concave smooth function (if β_j + β_{j−2} − 2β_{j−1} ≤ 0 holds for j = 3, ..., J). The convex shape is useful to estimate the Pickands dependence function, a thorough analysis of which is beyond the scope of the present paper. Clearly, controlling the shape of the continuous part of the spectral measure reduces to controlling the shape of a finite sequence of coefficients β_1, ..., β_J (Khadraoui, 2017b). Thus, for a monotonicity constraint, it is straightforward to construct a set Θ^a_J ⊂ R^{J+2} such that H^β_J necessarily fulfills the shape and moment constraints. The following Proposition 2 provides the Schoenberg spline approximation for the absolutely continuous part of the spectral measure under the monotone shape constraint.
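Proposition 1 and Remark 3 can be illustrated numerically: nondecreasing coefficients yield a nondecreasing spline, and nonnegative second differences of the coefficients yield a convex one. The sketch below is our own illustration (numpy/scipy assumed), reusing the B-spline-based construction of the M-spline basis.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(4)
q, K = 3, 8
J = K + q - 1
t = np.arange(-(q - 1), K + q) / K                 # equidistant complete knots

def spline_sum(beta):
    """Evaluate sum_j beta_j M_{j,q} on a grid over (0, 1)."""
    w = np.linspace(0.0, 1.0, 5001)
    out = np.zeros_like(w)
    for j, b_j in enumerate(beta):
        elem = BSpline.basis_element(t[j:j + q + 1], extrapolate=False)
        out += b_j * np.nan_to_num(elem(w)) * q / (t[j + q] - t[j])
    return out

beta_mono = np.sort(rng.random(J))                 # beta_1 <= ... <= beta_J
H_mono = spline_sum(beta_mono)                     # should be nondecreasing

beta_cvx = np.cumsum(np.cumsum(rng.random(J)))     # nonnegative second differences
H_cvx = spline_sum(beta_cvx)                       # should be convex
```

This is only the sufficient condition of Proposition 1 at work; a spline can of course be monotone without its coefficients being ordered.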
Proposition 2 Consider the subsets S_J and S defined respectively by

S_J := {H^β_J = Σ_{j=1}^J β_j M_j,q : β_j ∈ R, β_a ∈ Θ^a_J},

and

S := {Σ_i b_i S_i : S_i ∈ ∪_{J∈J} S_J and b_i ≥ 0 for i = 1, 2, ...}.

Then the closure of S in the uniform norm is precisely the set of increasing continuous parts of spectral measures on (0, 1).
Proof For a given function g on (0, 1), the Schoenberg spline approximation is defined by

V_g(w) = (1/K) Σ_{j=1}^J g(t*_j) M_j,q(w),    (3.11)

where t*_j := (t_{j+1} + ... + t_{j+q−1})/(q − 1). It is obvious that the closure S̄ of S is contained in the set of increasing continuous parts of spectral measures on (0, 1). To prove the converse, it is sufficient to take H^c an increasing continuous part of a spectral measure; then, from the spline construction (3.11), it is easily seen that V_{H^c} is in S. From de Boor (1987), if H^c is twice continuously differentiable, there exists a constant const_{H^c,q} (that may depend on H^c and q) such that

sup_{w∈(0,1)} |H^c(w) − V_{H^c}(w)| ≤ const_{H^c,q} |t|^2,    (3.12)

where |t| = max_j |t_j − t_{j−1}|. It follows from (3.12) that V_{H^c} converges to H^c uniformly if |t| is small enough. This shows that H^c is in S̄, which completes the proof.

In other words, Proposition 2 means that any twice continuously differentiable increasing continuous part H^c can be approximated by a spline in S generated from an M-spline basis.

3.2 Prior distribution

The model approximating the tail of the unknown bivariate distribution function F is specified through a spectral measure H ∈ H and marginal parameters (δ_1, δ_2) ∈ T^2, where δ_l = (ζ_l, η_l, σ_l), l ∈ {1, 2}, and T = (0, ∞) × (−∞, ∞) × (0, ∞). Since the parameter space for H is given by Θ, the complete parameter space is Ω = Θ × T^2. In the sequel, we put θ = (J, β_a), and the model is parametrized by (θ, δ_1, δ_2), which defines F through the factorization

F : (∪_{J∈J}({J} × Θ^a_J)) × (−∞, ∞)^2 × (0, ∞)^4 → F,  (θ, δ_1, δ_2) → F(x, y),

where F denotes the set of bivariate extreme-value distributions and we put

F(x, y) = exp[−l{−log F_1(x), −log F_2(y)}], for (x, y) ∈ R^2,
F_l(x) = exp{−ζ_l (1 + η_l (x − u_l)/σ_l)^(−1/η_l)}, for l ∈ {1, 2},
l(x_1, x_2) = 2 ∫_{[0,1]} max(w x_1, (1 − w) x_2) dH(w), for (x_1, x_2) ∈ [0, ∞)^2.
The model is completed with a non-parametric prior distribution π for (θ, δ_1, δ_2) ∈ (∪_{J∈J}({J} × Θ^a_J)) × (−∞, ∞)^2 × (0, ∞)^4. We point out that there should be no confusion as to whether π refers to the constant or to the prior
distribution in the rest of the article. In the following, we construct in particular a probability measure on the parameter space Θ, where the map (J, β) → H^β_J induces the absolutely continuous part of a probability measure on H, which we specify as the prior for H^c. We use π as a generic notation for a probability density.

Proposition 3 Assume that π_J(J) > 0 for J = 4, 5, ..., J_sup, together with π_a(a) > 0, and that the conditional density π_{J,β_a}(J, β_a) of π( · | {J} × Θ^a_J) has support Θ. Let H^c be a given continuous part of a spectral measure (an increasing and continuous function on (0, 1) with expectation equal to 1/2 − a_1). Then, for every ε > 0,

π_{J,β_a}{(J, β_a) ∈ ∪_{J∈J}({J} × Θ^a_J) : ‖H^β_J − H^c‖ < ε} > 0.    (3.13)

Proof At first, we put V^J_{H^c}(w) = Σ_{j=1}^J H^c(t*_j) M_j,q(w) for every w ∈ (0, 1). The application of the Schoenberg spline approximation, choosing K_l sufficiently large (|t| small enough), enables us to write ‖V^{J_l}_{H^c} − H^c‖ ≤ ε/2, π_J(J_l) > 0, and π( · | {J_l} × Θ^a_{J_l}) has support Θ^a_{J_l}. Then, using the fact that ‖H^β_J − V^J_{H^c}‖ ≤ max_{j=1,...,J} |β_j − H^c(t*_j)|, for β_1 ≤ ... ≤ β_J and β_J = C^{β_a}_M, we can write

π{(J, β_a) ∈ Θ : ‖H^β_J − H^c‖ < ε} ≥ π{(J_l, β_{a_l}) ∈ Θ : ‖H^β_{J_l} − V^{J_l}_{H^c}‖ < ε/2}
≥ π{(J_l, β_{a_l}) ∈ {J_l} × Θ^a_{J_l} : max_{j=1,...,J_l} |β_j − H^c(t*_j)| < ε/2} > 0,

which completes the proof.

Proposition 3 shows that the support of the M-spline prior for the spectral measure can be quite large. Under the parameterization described previously, the prior distribution π is expressed as a trans-dimensional prior distribution on the random vector (θ, δ_1, δ_2), which, for convenience, factorizes as π_J(J) π_a(a) π_β(β_J | J, a) π_{δ1}(δ_1) π_{δ2}(δ_2), where β_J = (β_1, ..., β_{J−1}).
The prior distributions, with respect to the Lebesgue measure or the counting measure, for the spectral measure are specified as follows:

- (β_J | J, a, τ^2) ~ N^{J−1}_{Θ_J}(m, τ^2 V), with density proportional to exp{−β_J' V^(−1) β_J / (2τ^2)} 1{β_J ∈ Θ_J};
- τ^2 ~ IG(τ_1, τ_2), with density equal to (τ_2^{τ_1} / Γ(τ_1)) (τ^2)^(−(τ_1+1)) exp{−τ_2/τ^2};
- a ~ U_{(0,1/2)^2}, with density proportional to 1_{(0,1/2)^2}(a_0, a_1);
- J ~ P(λ_1), with density equal to exp(−λ_1) λ_1^J / J!;
where

Θ_J = {(β_1, ..., β_{J−1}) : (a_0, β_1, ..., β_{J−1}, a_1) ∈ Θ^a_J},    (3.14)

m = (0, ..., 0) is the prior expectation and V is the (J−1) × (J−1) variance-covariance matrix that will be specified in Proposition 4 below in terms of the mean and shape restrictions imposed on the coefficients. Concerning the marginal parameters, we consider independent priors for both margins given by

π_{δl}(δ_l) ∝ exp{−ζ_l^2/2} exp{−λ_2 η_l} σ_l exp{−σ_l/λ_3}, for l ∈ {1, 2},

where ζ_l, η_l and σ_l respectively follow independent normal, exponential and gamma distributions. Concerning the adjustment of the prior hyperparameters τ_1, τ_2, λ_1, λ_2 and λ_3, we set τ_1 = τ_2 = 0.1 in the prior of τ^2 (the variance of the truncated Gaussian prior π_β), so that the mode of the inverse-gamma density is situated at τ_2/(τ_1 + 1). We avoid setting τ_1 and τ_2 to zero, which would yield an improper prior, and we also avoid large values, to reduce the sensitivity of the posterior inference to τ_1 and τ_2. In the same spirit, we assign λ_1 = λ_2 = 5 and λ_3 = 2.

4 Bayesian inference

We introduce in this section the explicit likelihood used to develop the Bayesian inference. In this context, as in Ledford and Tawn (1996), we adopt a censoring approach in the following manner. Let (X*_1, X*_2) = (X_1 ∧ u_1, X_2 ∧ u_2), set I = (1_{[u_1,∞)}(X_1), 1_{[u_2,∞)}(X_2)) and define:

f*(X*_1, X*_2 | θ, τ^2, δ_1, δ_2) :=
  F(u_1, u_2 | θ, τ^2, δ_1, δ_2), if I = (0, 0);
  (∂/∂X_1) F(X_1, u_2 | θ, τ^2, δ_1, δ_2), if I = (1, 0);
  (∂/∂X_2) F(u_1, X_2 | θ, τ^2, δ_1, δ_2), if I = (0, 1);
  (∂^2/∂X_1 ∂X_2) F(X_1, X_2 | θ, τ^2, δ_1, δ_2), if I = (1, 1);

where an explicit expression for f* will be obtained in the sequel.
For σ_l + η_l(X_l − u_l) > 0, we consider

f_l(X_l) = (d/dX_l) F_l(X_l) = (ζ_l/σ_l) (1 + η_l (X_l − u_l)/σ_l)^(−(1/η_l + 1)) F_l(X_l), for l ∈ {1, 2},

where we write H({0}) = a_0, H({1}) = a_1 and assume that H is absolutely continuous on (0, 1) with Radon-Nikodym derivative h. It is straightforward
that

(∂/∂x_1) l(x_1, x_2) = 2 {a_1 + ∫_{x_2/(x_1+x_2)}^1 w h(w) dw},
(∂/∂x_2) l(x_1, x_2) = 2 {a_0 + ∫_0^{x_2/(x_1+x_2)} (1 − w) h(w) dw},
(∂^2/∂x_1 ∂x_2) l(x_1, x_2) = −2 x_1 x_2 (x_1 + x_2)^(−3) h(x_2/(x_1 + x_2)),

where l(x_1, x_2) can be written

l(x_1, x_2) = x_1 + x_2 + ∫_0^{x_1} ∫_0^{x_2} (∂^2/∂x'_1 ∂x'_2) l(x'_1, x'_2) dx'_1 dx'_2,  (x_1, x_2) ∈ [0, ∞)^2.

We indicated previously that if H({0}) > 0 and H({1}) > 0 (i.e., if the spectral measure has atoms at 0 and 1), then f* is positive on the set {(X*_1, X*_2) : σ_l + η_l(X*_l − u_l) > 0, l = 1, 2}. We are now able to give an exact expression for f*, putting θ* = (θ, τ^2, δ_1, δ_2), as follows:

f*(X*_1, X*_2 | θ*) =
  F(u_1, u_2 | θ*), if I = (0, 0);
  (f_1(X_1)/F_1(X_1)) (∂/∂x_1) l(x_1, −log{F_2(u_2)}) F(X_1, u_2 | θ*), if I = (1, 0);
  (f_2(X_2)/F_2(X_2)) (∂/∂x_2) l(−log{F_1(u_1)}, x_2) F(u_1, X_2 | θ*), if I = (0, 1);
  {Π_{l=1}^2 f_l(X_l)/F_l(X_l)} l*(x_1, x_2) F(X_1, X_2 | θ*), if I = (1, 1);

where x_1 = −log{F_1(X_1)}, x_2 = −log{F_2(X_2)} and

l*(x_1, x_2) = (∂/∂x_1) l(x_1, x_2) · (∂/∂x_2) l(x_1, x_2) − (∂^2/∂x_1 ∂x_2) l(x_1, x_2).

Let X = {(X_i1, X_i2) : i = 1, ..., n} stand for an observed sample from F and X* = {(X*_i1, X*_i2) : i = 1, ..., n} for the corresponding censored sample. Then the censored likelihood is given by

L(X* | θ*) = Π_{i=1}^n f*(X*_i1, X*_i2 | θ*).    (4.1)

It is natural that the censored likelihood (4.1) depends on the thresholds u_1 and u_2. The joint posterior density for θ* is computed from Bayes' theorem, up to a normalizing constant, as follows:

π(β_J, a, τ^2, J, δ_1, δ_2 | X*) ∝ L(X* | θ*) π_β(β_J | τ^2, a, J) π_τ(τ^2) π_J(J) π_a(a) π_{δ1}(δ_1) π_{δ2}(δ_2).    (4.2)

Simulations from the posterior distribution (4.2) can be obtained by a reversible-jump Metropolis-Hastings algorithm (similarly to Khadraoui (2017a)).
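The expressions for l and its partial derivatives can be cross-checked numerically on a toy spectral measure satisfying the mean constraint: atoms a_0 = a_1 = 1/4 and flat density h ≡ 1/2 on (0, 1) (an illustrative choice of ours, not an estimate from the paper); scipy's quadrature is assumed.

```python
import numpy as np
from scipy import integrate

a0, a1 = 0.25, 0.25                      # atoms at 0 and 1
h = lambda w: 0.5                        # density on (0, 1); the mean of H is 1/2

def l(x1, x2):
    """l(x1, x2) = 2 * int_[0,1] max(w x1, (1 - w) x2) dH(w), atoms included.
    The integral is split at the kink c = x2 / (x1 + x2)."""
    c = x2 / (x1 + x2)
    i1, _ = integrate.quad(lambda w: (1.0 - w) * x2 * h(w), 0.0, c)
    i2, _ = integrate.quad(lambda w: w * x1 * h(w), c, 1.0)
    return 2.0 * (a0 * x2 + a1 * x1 + i1 + i2)

def d2l(x1, x2):
    """Closed form of the mixed partial: -2 x1 x2 (x1 + x2)^{-3} h(x2/(x1+x2))."""
    return -2.0 * x1 * x2 * (x1 + x2) ** -3 * h(x2 / (x1 + x2))

# Finite-difference cross-check of the mixed partial at an arbitrary point.
x1, x2, e = 0.7, 1.3, 1e-2
fd = (l(x1 + e, x2 + e) - l(x1 + e, x2 - e)
      - l(x1 - e, x2 + e) + l(x1 - e, x2 - e)) / (4.0 * e * e)
```

The same kind of cross-check applies to the first-order partials, and l* in the censored likelihood combines exactly these three quantities.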
Because the prior distribution π is expressed as a trans-dimensional prior distribution, the implementation of the MCMC algorithm requires exact knowledge of π_β(β_J | τ^2, a, J), i.e., knowledge of the normalizing constant of the truncated Gaussian density N^{J−1}_{Θ_J}(m, τ^2 V).
Proposition 4 Under the moment and the monotone constraints and for K ≥ 2, the truncated prior density of the coefficients (β_J | τ^2, a, J) with respect to the Lebesgue measure on R^{J−1} is given exactly by

π_β(β_J | τ^2, a, J) = {C^{β_a}_M}^(−(J−1)) {∫_0^{1/(J−1)} exp(−w^2/(2τ^2)) dw}^(−(J−1)) exp{−β_J' V^(−1) β_J / (2τ^2)} 1{β_J ∈ Θ_J},    (4.3)

where V^(−1) = (v_i,j)_{1≤i,j≤J−1} is the (J−1) × (J−1) inverse variance-covariance matrix given by

v_i,j = 2, if i = j = 1, ..., J−2;  1, if i = j = J−1;  −1, if j = i ± 1;  0, otherwise.

Proof There exists a subset

S_J = {(β_1, ..., β_{J−1}) : 0 ≤ β_1 ≤ ... ≤ β_{J−1} ≤ 1},    (4.4)

such that we can write Θ_J = C^{β_a}_M S_J. We use λ(Θ_J) and λ(S_J) to denote the normalizing constants corresponding respectively to Θ_J and S_J. Then we can write

λ(Θ_J) = {C^{β_a}_M}^(J−1) λ(S_J).    (4.5)

For the sake of simplicity, and in order to obtain an expression for λ(S_J), we let ω = (ω_1, ..., ω_{J−1}) be a random vector with Gaussian distribution, each component being truncated to [0, 1/(J−1)], so that ω ~ N_{J−1}(0, τ^2 I_{J−1}), where I_{J−1} is the (J−1) × (J−1) identity matrix. Thus, the density of ω is given by

π(ω | τ^2, J) = λ_0 exp(−ω'ω/(2τ^2)) Π_{j=1}^{J−1} 1{0 ≤ ω_j ≤ 1/(J−1)},

where λ_0 is a constant given by

λ_0^(−1) = ∫_0^{1/(J−1)} exp(−ω_1^2/(2τ^2)) dω_1 ... ∫_0^{1/(J−1)} exp(−ω_{J−1}^2/(2τ^2)) dω_{J−1}.

Now, using the random vector ω, we can easily construct a vector β_J with Gaussian distribution satisfying the monotone constraint as follows: β_J = T_J^(−1) ω, where T_J^(−1) = (t_i,j)_{1≤i,j≤J−1} is the (J−1) × (J−1) matrix given by

t_i,j = 1, if i ≥ j;  0, else.
By inverting the matrix $T_J^{-1}$, we can also express $\omega$ in terms of $\beta_J$ in the following way: $\omega = T_J \beta_J$, where $T_J = (t_{i,j})_{1 \le i,j \le J-1}$ is the matrix inverse of $T_J^{-1}$, given by
$$t_{i,j} = \begin{cases} 1, & \text{if } i = j,\\ -1, & \text{if } j = i - 1,\ i = 2, \dots, J-1,\\ 0, & \text{otherwise.} \end{cases}$$
Therefore, the probability density of $\beta_J$ can be deduced from that of $\omega$ as follows:
$$\pi(\omega \mid \tau^2, J)\, d\omega = \pi(T_J \beta_J \mid \tau^2, J)\, \Big|\frac{d\omega}{d\beta_J}\Big|\, d\beta_J = \lambda\, 1\{0 \le \beta_1 \le \dots \le \beta_{J-1} \le 1\}\, \exp\Big(-\frac{\beta_J' (T_J' T_J) \beta_J}{2\tau^2}\Big)\, d\beta_J,$$
where the Jacobian $|d\omega / d\beta_J| = 1$. Finally, we deduce the normalizing constant
$$\lambda(\Theta_J) = \{C^{\beta a}_M\}^{(J-1)} \Big\{ \int_0^{1/(J-1)} \exp\Big(-\frac{w^2}{2\tau^2}\Big)\, dw \Big\}^{(J-1)}, \qquad (4.6)$$
which completes the proof.

In the following section, we explore the numerical performance of our methodology in order to better qualify its contribution and provide some validation.

5 Numerical results

In this section, we compare the performance of the Bayes M-splines spectral measure estimator introduced in this paper with three estimators proposed recently in the literature, in Einmahl et al. (2001), Einmahl and Segers (2009) and de Carvalho et al. (2013). To be complete, let us briefly recall these three methods. Consider i.i.d. observations $x = \{(x_{i,1}, x_{i,2}),\ i = 1, \dots, n\}$ sampled from a distribution in the domain of attraction of a bivariate extreme value distribution. We start by fixing a threshold $k_n > 0$. The choice of the threshold is both important and delicate, since it determines the observations that will be used in the statistical inference. It is well known that the threshold must be lower than $n$; more precisely, it must satisfy $k_n \to \infty$ together with $k_n / n \to 0$. We return to this aspect in more detail later. Once the threshold is established, we use again $r_{i,j}$, the rank of $x_{i,j}$ in $\{x_{1,j}, \dots, x_{n,j}\}$, for $i = 1, \dots, n$ and $j \in \{1, 2\}$. Let $z_{i,j} = n / (n + 1 - r_{i,j})$. For $i = 1, \dots, n$ we denote $s_i = z_{i,1} + z_{i,2}$ and $\omega_i = z_{i,1} / s_i$.
Let $I_{n,k_n}$ denote the set of indices $i = 1, \dots, n$ such that $s_i > n / k_n$, and let $I$ be the cardinality of $I_{n,k_n}$. Then, we can view $(\omega_i,\ i \in I_{n,k_n})$ as a sample from the spectral measure. It is therefore possible to do inference on $H$ from
$(\omega_i,\ i \in I_{n,k_n})$. However, it must be noted that a transformation such as the one we have just described creates dependence between the $\omega_i$, even when there is none between the $(X_{i,1}, X_{i,2})$, $i = 1, \dots, n$. We are now in a position to describe the three methods considered in the comparison:

Empirical spectral measure (ESM): The authors in Einmahl et al. (2001) proposed, from the previous transformation, the empirical estimator $\hat H_{ESM}$ of the spectral measure defined by
$$\hat H_{ESM}(w) = \frac{1}{I} \sum_{i \in I_{n,k_n}} 1_{[\omega_i, 1]}(w), \qquad w \in [0, 1]. \qquad (5.1)$$
This simple estimator is problematic in that it does not satisfy the moment constraint.

Spectral measure based on empirical maximum likelihood (MELE): The authors in Einmahl and Segers (2009) proposed an estimator $\hat H_{MELE}$ that satisfies the moment constraint, defined by
$$\hat H_{MELE}(w) = \sum_{i \in I_{n,k_n}} \hat p_i\, 1_{[\omega_i, 1]}(w), \qquad w \in [0, 1], \qquad (5.2)$$
where $\hat p = (\hat p_1, \dots, \hat p_I)$ is the solution to the constrained optimization problem
$$\max_{p = (p_1, \dots, p_I) \in \mathbb R_+^I} \sum_{i \in I_{n,k_n}} \log(p_i), \quad \text{subject to} \quad \sum_{i \in I_{n,k_n}} p_i = 1, \quad \sum_{i \in I_{n,k_n}} \omega_i p_i = \frac{1}{2}. \qquad (5.3)$$
The solution is given by $\hat p_i = \frac{1}{I} \cdot \frac{1}{1 + \hat\mu (\omega_i - 1/2)}$, $i \in I_{n,k_n}$, where $\hat\mu \in \mathbb R$ is the Lagrange multiplier that solves
$$\sum_{i \in I_{n,k_n}} \frac{\omega_i - 1/2}{1 + \hat\mu (\omega_i - 1/2)} = 0.$$

The Euclidean maximum likelihood estimator (EMSM): The estimator $\hat H_{EMSM}$, proposed in de Carvalho et al. (2013), follows the same principle as $\hat H_{MELE}$, with the difference that the optimization problem it solves is significantly simpler. It is defined by
$$\hat H_{EMSM}(w) = \sum_{i \in I_{n,k_n}} \hat p_i\, 1_{[\omega_i, 1]}(w), \qquad w \in [0, 1], \qquad (5.4)$$
where $\hat p$ is the solution to the constrained optimization problem
$$\max_{p = (p_1, \dots, p_I) \in \mathbb R_+^I} -\frac{1}{2} \sum_{i \in I_{n,k_n}} (I p_i - 1)^2, \quad \text{subject to} \quad \sum_{i \in I_{n,k_n}} p_i = 1, \quad \sum_{i \in I_{n,k_n}} \omega_i p_i = \frac{1}{2}. \qquad (5.5)$$
The solution is given by $\hat p_i = \frac{1}{I} \big( 1 - (\bar\omega - \tfrac12) S^{-2} (\omega_i - \bar\omega) \big)$, with $\bar\omega = \frac{1}{I} \sum_{i \in I_{n,k_n}} \omega_i$ and $S^2 = \frac{1}{I} \sum_{i \in I_{n,k_n}} (\omega_i - \bar\omega)^2$.
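As a concrete companion to these definitions, here is a small sketch (illustrative, not the authors' code) of the rank transform and of the MELE and EMSM weights; solving the MELE Lagrange-multiplier equation by bisection is our choice here:

```python
def ranks(v):
    """Rank of each entry within v (1 = smallest), assuming no ties."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def pseudo_angles(x, k_n):
    """Unit-Pareto rank transform; keep the angles with radius s_i > n/k_n."""
    n = len(x)
    r1, r2 = ranks([p[0] for p in x]), ranks([p[1] for p in x])
    out = []
    for i in range(n):
        z1, z2 = n / (n + 1 - r1[i]), n / (n + 1 - r2[i])
        s = z1 + z2
        if s > n / k_n:
            out.append(z1 / s)
    return out

def emsm_weights(w):
    """Closed-form Euclidean likelihood weights (EMSM)."""
    I = len(w)
    wbar = sum(w) / I
    s2 = sum((v - wbar) ** 2 for v in w) / I
    return [(1.0 - (wbar - 0.5) / s2 * (v - wbar)) / I for v in w]

def mele_weights(w):
    """Empirical likelihood weights (MELE): bisection on the multiplier."""
    c = [v - 0.5 for v in w]
    # feasibility requires 1 + mu * c_i > 0 for every i
    lo = -1.0 / max(c) + 1e-9 if max(c) > 0 else -1e8
    hi = -1.0 / min(c) - 1e-9 if min(c) < 0 else 1e8
    g = lambda mu: sum(ci / (1.0 + mu * ci) for ci in c)  # decreasing in mu
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    mu = 0.5 * (lo + hi)
    return [1.0 / (len(w) * (1.0 + mu * ci)) for ci in c]
```

Both weight vectors sum to one and give the pseudo-angles a weighted mean of 1/2, which is exactly the moment constraint that the plain empirical estimator (5.1) fails to satisfy.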
As we have just seen, the previous three methods are based on the rank-transformed sample $(\omega_i,\ i \in I_{n,k_n})$, whereas our Bayes M-splines methodology is based on the censored sample $x^\star$. We now proceed to compare our method with these three methods. For each estimator, we draw 1000 samples (each of size $n = 1000$) from three well-known models: two logistic models and one asymmetric model (more details about these models are given below). For the methods based on the rank sample, we consider 10 different thresholds $k_n = n^\alpha$, with $\alpha \in [0.55, 0.7]$ chosen on an equidistant grid. For the Bayes method, the two thresholds $u_1$ and $u_2$ are chosen in such a way that we obtain the same number of observations in the tail region as that determined by the threshold $k_n$ (for each $\alpha$ and each sample). For each value of the threshold, we assess the performance of the estimators via the mean integrated squared error
$$\mathrm{MISE}(\hat H) = \mathbb E\Big[ \int_0^1 \big(\hat H(w) - H(w)\big)^2\, dw \Big]. \qquad (5.6)$$
The first model used is the logistic model, which is very useful in extreme value theory because of its great adaptability. Indeed, for $e > 1$, its spectral measure is given by
$$H_e(w) = \frac{1}{2} \Big( 1 - \big( (1-w)^{e-1} - w^{e-1} \big) \big( (1-w)^e + w^e \big)^{-1 + 1/e} \Big). \qquad (5.7)$$
In this model (5.7), when $e \to 1$ we obtain the case of independence, where $H$ puts a mass $1/2$ at $0$ and at $1$; conversely, when $e \to \infty$ the spectral measure concentrates at $1/2$, which coincides with the case of total dependence. For the simulations, we take $e = 2$, which corresponds to a moderate dependence, and $e = 4$, which corresponds to a very marked dependence. The second model used is the asymmetric logistic model. It generalizes the logistic model by allowing an asymmetry of the marginal distributions through the introduction of two additional parameters. For $e > 1$, it is given by
$$H_{e,\phi_1,\phi_2}(w) = \frac{1}{2} \Big( 1 + \phi_1 - \phi_2 - \big( \phi_1^e (1-w)^{e-1} - \phi_2^e w^{e-1} \big) \big( \phi_1^e (1-w)^e + \phi_2^e w^e \big)^{-1 + 1/e} \Big). \qquad (5.8)$$
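Both spectral measures are straightforward to evaluate, and the moment constraint can be checked numerically through $E[W] = \int_0^1 (1 - H(w))\, dw$. A short sketch (the midpoint quadrature and the grid size are illustration choices):

```python
def H_logistic(w, e=2.0):
    """Logistic spectral measure (5.7); no atoms for e > 1."""
    if w <= 0.0:
        return 0.0
    if w >= 1.0:
        return 1.0
    a = (1.0 - w) ** (e - 1.0) - w ** (e - 1.0)
    b = ((1.0 - w) ** e + w ** e) ** (1.0 / e - 1.0)
    return 0.5 * (1.0 - a * b)

def H_asym(w, e=2.5, p1=0.4, p2=0.6):
    """Asymmetric logistic spectral measure (5.8); atoms at 0 and 1."""
    if w <= 0.0:
        return 0.0
    if w >= 1.0:
        return 1.0
    a = p1 ** e * (1.0 - w) ** (e - 1.0) - p2 ** e * w ** (e - 1.0)
    b = (p1 ** e * (1.0 - w) ** e + p2 ** e * w ** e) ** (1.0 / e - 1.0)
    return 0.5 * (1.0 + p1 - p2 - a * b)

def mean_of(H, m=50000):
    """E[W] = integral of (1 - H(w)) over [0, 1], midpoint rule on m cells."""
    return sum(1.0 - H((i + 0.5) / m) for i in range(m)) / m
```

For $e = 2.5$, $\phi_1 = 0.4$, $\phi_2 = 0.6$ this reproduces the atoms $H(\{0\}) = (1-\phi_2)/2 = 0.2$ and $H(\{1\}) = (1-\phi_1)/2 = 0.3$, and both measures integrate to a mean of $1/2$.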
In the simulations we take $e = 2.5$, $\phi_1 = 0.4$ and $\phi_2 = 0.6$, which enables us to obtain a spectral measure that presents two atoms, at $0$ and at $1$, given respectively by $H_{e,\phi_1,\phi_2}(\{0\}) = (1 - \phi_2)/2 = 0.2$ and $H_{e,\phi_1,\phi_2}(\{1\}) = (1 - \phi_1)/2 = 0.3$. It is known that the three estimators presented above are unable to detect the atoms (we always have $\omega_i \in (0, 1)$), and this example is a good illustration of the extent to which the estimates based on the rank sample deteriorate in the presence of atoms. The simulation results are summarized in Figures 2, 3 and 4. One remarkable aspect of this numerical study is how well our estimator performs in taking into account the presence of atoms together with the monotonicity and mean constraints. For different values of $\alpha$ (different thresholds), the mean integrated squared error
Fig. 2 Simulation results for the logistic model with e = 2. (a) MISE (×10⁻³) of each estimator (Bayes, EMSM, MELE, ESM) as a function of the threshold, for α ∈ [0.55, 0.7] and 1000 samples of size n = 1000. (b) Bayes M-splines spectral measure estimate with 95% credible intervals, with α = 0.575. (c) Posterior distribution of the number of interior knots (J − q), with α = 0.575.

(MISE) is sensitive to the choice of the threshold. The sensitivity of the estimation error to $\alpha$ is illustrated by Figures 2(a), 3(a) and 4(a). In general, we note that the differences between the errors of the ESM, MELE and EMSM methods are not very significant. For the logistic model with $e = 2$ and $e = 4$ (Figures 2 and 3) there are no atoms, and the estimation error of the three methods based on the rank sample is greater than that of our method. This feature can be explained by the fact that our estimate is smooth, since it is a spline (whereas the others are piecewise linear), together with the fact that the free-knot approach considered here better detects the high- and low-variability regions of the data and facilitates the insertion of more coefficients in the high-variability regions. For the asymmetric model (see Figure 4), the presence of atoms at $0$ and at $1$ deteriorates the estimation error of the
Fig. 3 Simulation results for the logistic model with e = 4. (a) MISE (×10⁻³) of each estimator (Bayes, EMSM, MELE, ESM) as a function of the threshold, for α ∈ [0.55, 0.7] and 1000 samples of size n = 1000. (b) Bayes M-splines spectral measure estimate with 95% credible intervals, with α = 0.65. (c) Posterior distribution of the number of interior knots (J − q), with α = 0.65.

three methods (ESM, MELE and EMSM), whereas our estimate remains accurate on $[0, 1]$.

6 Discussion

The simulation results of Section 5 indicate that our Bayes M-splines method does considerably better than the three methods based on the rank sample for the logistic and the asymmetric models studied in this paper. On the basis of the numerical evidence, it appears that our method is a good robust choice, since it is always competitive with the other estimates of the spectral measure and does considerably better for spectral measures with atoms, such as the
Fig. 4 Simulation results for the asymmetric model with e = 2.5, φ₁ = 0.4 and φ₂ = 0.6. (a) MISE (×10⁻¹) of each estimator (Bayes, EMSM, MELE, ESM) as a function of the threshold, for α ∈ [0.55, 0.7] and 1000 samples of size n = 1000. (b) Bayes M-splines spectral measure estimate with 95% credible intervals, with α = 0.575. (c) Posterior distribution of the number of interior knots (J − q), with α = 0.575.

asymmetric model. In all cases, it is preferable to choose the Bayes M-splines estimator, which yields an estimate of the spectral measure with better estimation properties and a smaller estimation error. Of course, these better results come at the expense of additional model complexity. Such improvements in numerical efficiency have also been pointed out in Guillotte et al. (2011). We note here that B-splines and M-splines modeling of the tail dependence was introduced first by K. Khadraoui and P. Ribereau in an unpublished 2013 manuscript (Khadraoui and Ribereau, 2013) and second in Topyurek et al. (2013). The methodology developed in this article opens the door to other complex problems, such as the extension to multivariate spectral measure estimation
or the asymptotic analysis of the estimator. Clearly, a theoretical study characterizing the asymptotic properties (consistency and rate of convergence) of the Bayes estimate would be interesting to carry out.

Acknowledgements We thank a reviewer for a careful reading of the paper and for many helpful suggestions. Khader Khadraoui acknowledges the financial support of the Natural Sciences and Engineering Research Council of Canada. Pierre Ribereau acknowledges the French national program LEFE/INSU and the LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program Investissements d'Avenir (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR).

References

Abraham C, Khadraoui K (2015) Bayesian regression with B-splines under combinations of shape constraints and smoothness properties. Statistica Neerlandica 69:150-170
Aitchison J, Dunsmore IR (1975) Statistical prediction analysis. Cambridge University Press, Cambridge
Beirlant J, Goegebeur Y, Segers J, Teugels J (2004) Statistics of Extremes: Theory and Applications. Wiley, Chichester
Boldi MO, Davison AC (2007) A mixture model for multivariate extremes. J R Statist Soc (B) 69
Coles SG (2001) An introduction to statistical modelling of extreme values. Springer, New York
Coles SG, Tawn JA (1991) Modelling extreme multivariate events. J R Statist Soc (B) 53
Coles SG, Tawn JA (1994) Statistical methods for multivariate extremes: an application to structural design (with discussion). Appl Statist 43:1-48
Coles SG, Tawn JA (1996a) Bayesian modelling of extreme surges on the UK east coast. Phil Trans R Soc Lond (A) 363
Coles SG, Tawn JA (1996b) A Bayesian analysis of extreme rainfall data. Appl Statist 45
de Boor C (1987) A practical guide to splines. Springer, New York
de Carvalho M, Oumow B, Segers J, Warchoł M (2013) A Euclidean likelihood estimator for bivariate tail dependence.
Communications in Statistics - Theory and Methods 42
de Haan L, de Ronde J (1998) Sea and wind: multivariate extremes at work. Extremes 1:7-45
de Haan L, Ferreira A (2006) Extreme value theory: An introduction. Springer, New York
de Haan L, Resnick S (1977) Limit theory for multidimensional sample extremes. Z Wahrsch Verw Gebiete 40
de Haan L, Sinha AK (1999) Estimating the probability of a rare event. Ann Stat 27
de Haan L, Neves C, Peng L (2008) Parametric tail copula estimation and model testing. J Multiv Anal 99
Drees H, Huang X (1998) Best attainable rates of convergence for estimates of the stable tail dependence function. J Multivariate Anal 64:25-45
Einmahl JHJ, Segers J (2009) Maximum empirical likelihood estimation of the spectral measure of an extreme-value distribution. Ann Stat 37
Einmahl JHJ, de Haan L, Piterbarg V (2001) Nonparametric estimation of the spectral measure of an extreme value distribution. Ann Stat 29
Einmahl JHJ, de Haan L, Li D (2006) Weighted approximations to tail copula processes with application to testing the bivariate extreme value condition. Ann Stat 34
Green PJ (1995) Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika 82
Guillotte S, Perron F, Segers J (2011) Nonparametric Bayesian inference on bivariate extremes. J R Statist Soc (B) 73
Huang X (1992) Statistics of Bivariate Extremes. Ph.D. thesis, Erasmus Univ., Rotterdam
Joe H, Smith RL, Weissman I (1992) Bivariate threshold methods for extremes. J R Statist Soc (B) 54
Khadraoui K (2017a) Nonparametric adaptive Bayesian regression using priors with tractable normalizing constants and under qualitative assumptions. International Journal of Approximate Reasoning 80
Khadraoui K (2017b) A smoothing stochastic simulated annealing method for localized shapes approximation. Journal of Mathematical Analysis and Applications 446
Khadraoui K, Ribereau P (2013) Bayesian estimation with M-splines of the spectral measure of an extreme-value distribution. In: 41st Annual Meeting of the Statistical Society of Canada
Ledford AW, Tawn JA (1996) Statistics for near independence in multivariate extreme values. Biometrika 83
Schmidt R, Stadtmüller U (2006) Nonparametric estimation of tail dependence. Scand J Statist 33
Smith RL (1994) Multivariate threshold methods.
In: Galambos J, Lechner J, Simiu E (eds) Extreme value theory and applications. Springer, Boston, MA
Topyurek N, Khadraoui K, Rivest LP (2013) New spline estimator of spectral measure on extreme value theory. Master thesis, Télécom ParisTech, Paris
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 7 Approximate
More informationOn the Fisher Bingham Distribution
On the Fisher Bingham Distribution BY A. Kume and S.G Walker Institute of Mathematics, Statistics and Actuarial Science, University of Kent Canterbury, CT2 7NF,UK A.Kume@kent.ac.uk and S.G.Walker@kent.ac.uk
More informationMachine Learning 2017
Machine Learning 2017 Volker Roth Department of Mathematics & Computer Science University of Basel 21st March 2017 Volker Roth (University of Basel) Machine Learning 2017 21st March 2017 1 / 41 Section
More informationOn the smallest eigenvalues of covariance matrices of multivariate spatial processes
On the smallest eigenvalues of covariance matrices of multivariate spatial processes François Bachoc, Reinhard Furrer Toulouse Mathematics Institute, University Paul Sabatier, France Institute of Mathematics
More informationTheorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1
Chapter 2 Probability measures 1. Existence Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension to the generated σ-field Proof of Theorem 2.1. Let F 0 be
More informationD I S C U S S I O N P A P E R
I N S T I T U T D E S T A T I S T I Q U E B I O S T A T I S T I Q U E E T S C I E N C E S A C T U A R I E L L E S ( I S B A ) UNIVERSITÉ CATHOLIQUE DE LOUVAIN D I S C U S S I O N P A P E R 2014/06 Adaptive
More informationBayesian Inference. Chapter 9. Linear models and regression
Bayesian Inference Chapter 9. Linear models and regression M. Concepcion Ausin Universidad Carlos III de Madrid Master in Business Administration and Quantitative Methods Master in Mathematical Engineering
More informationOn a Class of Multidimensional Optimal Transportation Problems
Journal of Convex Analysis Volume 10 (2003), No. 2, 517 529 On a Class of Multidimensional Optimal Transportation Problems G. Carlier Université Bordeaux 1, MAB, UMR CNRS 5466, France and Université Bordeaux
More informationarxiv:math/ v1 [math.st] 16 May 2006
The Annals of Statistics 006 Vol 34 No 46 68 DOI: 04/009053605000000886 c Institute of Mathematical Statistics 006 arxiv:math/0605436v [mathst] 6 May 006 SPATIAL EXTREMES: MODELS FOR THE STATIONARY CASE
More informationStatistical Inference and Methods
Department of Mathematics Imperial College London d.stephens@imperial.ac.uk http://stats.ma.ic.ac.uk/ das01/ 31st January 2006 Part VI Session 6: Filtering and Time to Event Data Session 6: Filtering and
More informationThe Bias-Variance dilemma of the Monte Carlo. method. Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel
The Bias-Variance dilemma of the Monte Carlo method Zlochin Mark 1 and Yoram Baram 1 Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel fzmark,baramg@cs.technion.ac.il Abstract.
More informationA Bayesian perspective on GMM and IV
A Bayesian perspective on GMM and IV Christopher A. Sims Princeton University sims@princeton.edu November 26, 2013 What is a Bayesian perspective? A Bayesian perspective on scientific reporting views all
More informationSTA414/2104 Statistical Methods for Machine Learning II
STA414/2104 Statistical Methods for Machine Learning II Murat A. Erdogdu & David Duvenaud Department of Computer Science Department of Statistical Sciences Lecture 3 Slide credits: Russ Salakhutdinov Announcements
More informationThe Mixture Approach for Simulating New Families of Bivariate Distributions with Specified Correlations
The Mixture Approach for Simulating New Families of Bivariate Distributions with Specified Correlations John R. Michael, Significance, Inc. and William R. Schucany, Southern Methodist University The mixture
More informationClassification via kernel regression based on univariate product density estimators
Classification via kernel regression based on univariate product density estimators Bezza Hafidi 1, Abdelkarim Merbouha 2, and Abdallah Mkhadri 1 1 Department of Mathematics, Cadi Ayyad University, BP
More informationOn the occurrence times of componentwise maxima and bias in likelihood inference for multivariate max-stable distributions
On the occurrence times of componentwise maxima and bias in likelihood inference for multivariate max-stable distributions J. L. Wadsworth Department of Mathematics and Statistics, Fylde College, Lancaster
More informationStochastic Design Criteria in Linear Models
AUSTRIAN JOURNAL OF STATISTICS Volume 34 (2005), Number 2, 211 223 Stochastic Design Criteria in Linear Models Alexander Zaigraev N. Copernicus University, Toruń, Poland Abstract: Within the framework
More informationBayesian Inference for DSGE Models. Lawrence J. Christiano
Bayesian Inference for DSGE Models Lawrence J. Christiano Outline State space-observer form. convenient for model estimation and many other things. Bayesian inference Bayes rule. Monte Carlo integation.
More informationLong-Run Covariability
Long-Run Covariability Ulrich K. Müller and Mark W. Watson Princeton University October 2016 Motivation Study the long-run covariability/relationship between economic variables great ratios, long-run Phillips
More informationBayesian Multivariate Extreme Value Thresholding for Environmental Hazards
Bayesian Multivariate Extreme Value Thresholding for Environmental Hazards D. Lupton K. Abayomi M. Lacer School of Industrial and Systems Engineering Georgia Institute of Technology Institute for Operations
More informationRATE-OPTIMAL GRAPHON ESTIMATION. By Chao Gao, Yu Lu and Harrison H. Zhou Yale University
Submitted to the Annals of Statistics arxiv: arxiv:0000.0000 RATE-OPTIMAL GRAPHON ESTIMATION By Chao Gao, Yu Lu and Harrison H. Zhou Yale University Network analysis is becoming one of the most active
More informationStatistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation
Statistics 62: L p spaces, metrics on spaces of probabilites, and connections to estimation Moulinath Banerjee December 6, 2006 L p spaces and Hilbert spaces We first formally define L p spaces. Consider
More informationTail dependence coefficient of generalized hyperbolic distribution
Tail dependence coefficient of generalized hyperbolic distribution Mohalilou Aleiyouka Laboratoire de mathématiques appliquées du Havre Université du Havre Normandie Le Havre France mouhaliloune@gmail.com
More informationBayesian Inference in GLMs. Frequentists typically base inferences on MLEs, asymptotic confidence
Bayesian Inference in GLMs Frequentists typically base inferences on MLEs, asymptotic confidence limits, and log-likelihood ratio tests Bayesians base inferences on the posterior distribution of the unknowns
More informationMultivariate generalized Pareto distributions
Bernoulli 12(5), 2006, 917 930 Multivariate generalized Pareto distributions HOLGER ROOTZÉN 1 and NADER TAJVIDI 2 1 Chalmers University of Technology, S-412 96 Göteborg, Sweden. E-mail rootzen@math.chalmers.se
More informationThe Skorokhod reflection problem for functions with discontinuities (contractive case)
The Skorokhod reflection problem for functions with discontinuities (contractive case) TAKIS KONSTANTOPOULOS Univ. of Texas at Austin Revised March 1999 Abstract Basic properties of the Skorokhod reflection
More informationStat 542: Item Response Theory Modeling Using The Extended Rank Likelihood
Stat 542: Item Response Theory Modeling Using The Extended Rank Likelihood Jonathan Gruhl March 18, 2010 1 Introduction Researchers commonly apply item response theory (IRT) models to binary and ordinal
More informationMaximum likelihood estimation of a log-concave density based on censored data
Maximum likelihood estimation of a log-concave density based on censored data Dominic Schuhmacher Institute of Mathematical Statistics and Actuarial Science University of Bern Joint work with Lutz Dümbgen
More informationThe Skorokhod problem in a time-dependent interval
The Skorokhod problem in a time-dependent interval Krzysztof Burdzy, Weining Kang and Kavita Ramanan University of Washington and Carnegie Mellon University Abstract: We consider the Skorokhod problem
More informationOptimal global rates of convergence for interpolation problems with random design
Optimal global rates of convergence for interpolation problems with random design Michael Kohler 1 and Adam Krzyżak 2, 1 Fachbereich Mathematik, Technische Universität Darmstadt, Schlossgartenstr. 7, 64289
More informationBayesian Model Averaging for Multivariate Extreme Values
Bayesian Model Averaging for Multivariate Extreme Values Philippe Naveau naveau@lsce.ipsl.fr Laboratoire des Sciences du Climat et l Environnement (LSCE) Gif-sur-Yvette, France joint work with A. Sabourin
More informationINDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS
INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS STEVEN P. LALLEY AND ANDREW NOBEL Abstract. It is shown that there are no consistent decision rules for the hypothesis testing problem
More informationKneib, Fahrmeir: Supplement to "Structured additive regression for categorical space-time data: A mixed model approach"
Kneib, Fahrmeir: Supplement to "Structured additive regression for categorical space-time data: A mixed model approach" Sonderforschungsbereich 386, Paper 43 (25) Online unter: http://epub.ub.uni-muenchen.de/
More information2 Chance constrained programming
2 Chance constrained programming In this Chapter we give a brief introduction to chance constrained programming. The goals are to motivate the subject and to give the reader an idea of the related difficulties.
More informationLECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov,
LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES Sergey Korotov, Institute of Mathematics Helsinki University of Technology, Finland Academy of Finland 1 Main Problem in Mathematical
More informationModelling geoadditive survival data
Modelling geoadditive survival data Thomas Kneib & Ludwig Fahrmeir Department of Statistics, Ludwig-Maximilians-University Munich 1. Leukemia survival data 2. Structured hazard regression 3. Mixed model
More information