Conditional Simulation of Max-Stable Processes
Clément Dombry, Université de Franche-Comté
Marco Oesting, INRA, AgroParisTech
Mathieu Ribatet, Université Montpellier 2 and ISFA, Université Lyon 1

CONTENTS
Abstract
1.1 Introduction: the prediction problem and conditional distribution
1.2 Conditional distribution of max-stable processes
    1.2.1 Extremal functions, subextremal functions and hitting scenarios
    1.2.2 The general structure of conditional distributions
    1.2.3 Examples of regular models
    1.2.4 Distribution of the extremal functions
1.3 Conditional sampling of max-stable processes
    1.3.1 A 3-step procedure for conditional sampling
    1.3.2 Gibbs sampling for the conditional hitting scenario
1.4 Simulation Study
    1.4.1 Gibbs Sampler
    1.4.2 Conditional simulations
Acknowledgements
References

Abstract
Max-stable processes are well established models for spatial extremes. In this chapter, we address the prediction problem: suppose a max-stable process is observed at some locations only, how can we use these observations to predict the behavior of
Extreme Value Modeling and Risk Analysis: Methods and Applications

the process at other unobserved locations? Mathematically, the prediction problem is related to the conditional distribution of the process given the observations. Recently, Dombry and Eyi-Minko (2013) provided an explicit theoretical formula for the conditional distributions of max-stable processes. The result relies on the spectral representation of the max-stable process as the pointwise maximum over an infinite number of spectral functions belonging to a Poisson point process. The effect of conditioning on the Poisson point process is analyzed, resulting in the notions of hitting scenario and extremal or subextremal functions. Due to the complexity of the structure of the conditional distributions, conditional simulation appears at the same time challenging and important to assess characteristics that are analytically intractable, such as the conditional median or quantiles. The issue of conditional simulation was considered by Dombry et al. (2012), who proposed a three-step procedure for conditional sampling. As the conditional simulation of the hitting scenario becomes computationally very demanding even for a moderate number of conditioning points, a Gibbs sampler approach was proposed for this step. The results are illustrated on some simulation studies and we propose several diagnostics to check the performance.

1.1 Introduction: the prediction problem and conditional distribution

In classical geostatistics, Gaussian random fields play a central role in the statistical theory based on the Central Limit Theorem. In a similar manner, max-stable random fields turn out to be fundamental models for spatial extremes since they extend the well known univariate and multivariate extreme value theory to the infinite dimensional setting.
Max-stable random fields arise naturally when considering the component-wise maxima of a large number of independent and identically distributed random fields and seeking a limit under a suitable affine normalization. In this multivariate or functional setting, the notion of dependence is crucial: how does an extreme event occurring in some region affect the behavior of the random field at other locations? This is related to the prediction problem, which is an important and long-standing challenge in extreme value theory. Suppose that a max-stable random field Z = (Z(x))_{x∈X} is observed at some stations x_1, ..., x_k ∈ X only, yielding

    Z(x_1) = z_1, ..., Z(x_k) = z_k.    (1.1)

How can we benefit from these observations and predict the random field Z at other places? We are naturally led to consider the conditional distribution of (Z(x))_{x∈X} given the observations (1.1).

In the classical Gaussian framework, i.e., if Z is a centered Gaussian random field, it is well known that the corresponding conditional distribution remains Gaussian and simple formulas give the conditional mean and covariance structure. This theory is strongly linked with the theory of Hilbert spaces: for example, the conditional expectation of Z(x) can be obtained as the L²-projection of Z(x)
onto the Gaussian subspace generated by the variables {Z(x_i), 1 ≤ i ≤ k}, resulting in the linear combination

    E[Z(x) | Z(x_1), ..., Z(x_k)] = α_1(x)Z(x_1) + ··· + α_k(x)Z(x_k),   x ∈ X,

for some weight functions α_i(x), i = 1, ..., k, that are obtained via kriging theory (cf. Chilès and Delfiner (1999), for example). A conditional simulation of Z given the observations (1.1) is then easily performed by setting

    Z̃(x) = Z(x) + α_1(x)(z_1 − Z(x_1)) + ··· + α_k(x)(z_k − Z(x_k)),   x ∈ X,

where Z denotes the realization of an unconditional simulation of the random field Z.

In extreme value theory, the prediction problem turns out to be more difficult. A similar kriging theory for max-stable processes is very appealing and a first approach in that direction was made by Davis and Resnick (1989, 1993). They introduced an L¹-metric between max-stable variables and proposed a kind of projection onto max-stable spaces. To some extent, this work mimics the corresponding L²-theory for Gaussian spaces. However, unlike the exceptional Gaussian case, there is no clear relationship between the predictor obtained by projection onto the max-stable space generated by the variables {Z(x_i), 1 ≤ i ≤ k} and the conditional distributions of Z with respect to these variables. Conditional distributions were first considered by Weintraub (1991) in the case of a bivariate vector. A major contribution is the work by Wang and Stoev (2011) where the authors consider max-linear random fields, a special class of max-stable random fields with discrete spectral measure, and give an exact expression of the conditional distributions as well as efficient algorithms. The max-linear structure plays an essential role in their work and provides major simplifications since in this case Z admits the simple representation

    Z(x) = ⋁_{j=1}^{q} F_j f_j(x),   x ∈ X,

where the symbol ⋁ denotes the maximum, f_1, ..., f_q are deterministic functions and F_1, ..., F_q are i.i.d. random variables with unit Fréchet distribution.
The authors determine the conditional distributions of (F_j)_{1≤j≤q} given the observations (1.1) and deduce the conditional distribution of Z. Another approach by Oesting and Schlather (2014) deals with max-stable random fields with a mixed moving maxima representation.

We present and discuss here the recent results from Dombry and Eyi-Minko (2013) and Dombry et al. (2013) providing formulas for the conditional distributions of max-stable random fields as well as efficient algorithms for conditional sampling. Note that the results can be stated in the more general framework of max-infinitely divisible processes, but for the sake of simplicity, we stick here to the max-stable case. Section 2 reviews the theoretical results on conditional distributions: we introduce the notions of extremal functions and hitting scenarios and make explicit the exact distribution of the max-stable process given the observations. Here, we focus on the
framework of regular models that yields tractable formulas. In Section 3, we put the emphasis on efficient simulation and discuss a 3-step procedure for conditional sampling. A difficult step is the conditional sampling of the hitting scenario, for which a Gibbs sampler approach is proposed. Section 4 is devoted to simulation studies and we present several diagnostics to check the performance of the suggested methods.

1.2 Conditional distribution of max-stable processes

In the sequel, we consider Z a sample continuous max-stable random field on X ⊂ R^d. We can assume without loss of generality that Z has unit Fréchet margins. Thus, the process Z possesses a spectral representation (see for example de Haan (1984), Penrose (1992), Schlather (2002) or de Haan and Ferreira (2006))

    Z(x) = max_{i≥1} ζ_i Y_i(x),   x ∈ X,    (1.2)

where {ζ_i}_{i≥1} are the points of a Poisson process on (0, ∞) with intensity ζ^{-2} dζ, (Y_i)_{i≥1} are independent replicates of a non-negative stochastic process Y with continuous sample paths such that E[Y(x)] = 1 for all x ∈ X, and (ζ_i)_{i≥1} and (Y_i)_{i≥1} are independent.

We now introduce a fundamental object in our analysis, a function-valued Poisson point process associated with the representation (1.2). Let C = C(X, [0, +∞)) be the space of continuous non-negative functions on X. We consider the C-valued point process Φ = {φ_i}_{i≥1} where φ_i(x) = ζ_i Y_i(x) with ζ_i and Y_i as in (1.2). It is well known (de Haan and Ferreira, 2006) that Φ is a Poisson point process with intensity measure Λ given by

    Λ(A) = ∫_0^∞ P[ζY ∈ A] ζ^{-2} dζ,   A ⊂ C Borel.

Our strategy is to work at the level of the Poisson point process Φ rather than at the level of the max-stable process Z and to derive the conditional distribution of Φ given the observations (1.1).
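The spectral representation (1.2) can be simulated approximately on a grid by truncating the Poisson point process at finitely many atoms, using ζ_i = 1/Γ_i with Γ_i a unit-rate Poisson process on (0, ∞). The particular choice of Y below (a log-Gaussian process with E[Y(x)] = 1) and all parameter values are illustrative assumptions for this sketch, not one of the chapter's models, and the truncation introduces a small bias.

```python
# Approximate simulation of Z(x) = max_i zeta_i Y_i(x) from (1.2) on a
# grid, truncating the Poisson point process at n_atoms atoms.
# The log-Gaussian choice of Y is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)

def simulate_max_stable(x, n_atoms=1000, sigma=1.0):
    """Truncated spectral-representation simulation on grid x."""
    # zeta_i = 1 / Gamma_i gives a Poisson process with intensity
    # zeta^{-2} d zeta on (0, inf).
    zeta = 1.0 / np.cumsum(rng.exponential(size=n_atoms))
    # Y_i(x) = exp(sigma G_i(x) - sigma^2 / 2), where G_i(x) is standard
    # Gaussian for every fixed x, so that E[Y_i(x)] = 1 as required.
    v = rng.standard_normal((n_atoms, 1))
    w = rng.standard_normal((n_atoms, 1))
    g = v * np.cos(x)[None, :] + w * np.sin(x)[None, :]
    y = np.exp(sigma * g - 0.5 * sigma**2)
    return np.max(zeta[:, None] * y, axis=0)

x = np.linspace(0.0, 2.0 * np.pi, 50)
z = simulate_max_stable(x)
```

The returned field is positive everywhere and has approximately unit Fréchet margins; more accurate unconditional simulation schemes are discussed in the chapter on unconditional simulation referred to later in the text.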
The conditional distribution of Z is then deduced easily.

1.2.1 Extremal functions, subextremal functions and hitting scenarios

The observations (1.1) together with the spectral representation (1.2) yield the constraint

    max_{i≥1} φ_i(x_1) = z_1, ..., max_{i≥1} φ_i(x_k) = z_k.

A first step in our strategy is the analysis of the way these constraints are met, i.e., the way the maxima are attained. An important preliminary remark is that for each j = 1, ..., k, the maximum Z(x_j) = max_{i≥1} φ_i(x_j) is almost surely attained by a unique function φ_i, which can easily be seen from the fact that the point
process {φ_i(x_j), i ≥ 1} on (0, ∞) has intensity ζ^{-2} dζ. This leads to the following definition.

Definition. The (almost surely) unique function φ ∈ Φ such that φ(x_j) = Z(x_j) is called the extremal function at x_j and denoted by φ_{x_j}^+. Furthermore, we define the extremal point process Φ^+ = {φ_{x_j}^+}_{1≤j≤k} as the set of extremal functions with respect to the conditioning points {x_j}_{1≤j≤k}.

Note that the notation Φ^+ = {φ_{x_j}^+}_{1≤j≤k} may include the multiple occurrence of some functions, for example if φ_{x_1}^+ = φ_{x_2}^+. This is not taken into account in the extremal point process Φ^+ where each point has multiplicity one. The possible redundancies are captured by the so-called hitting scenario. We first introduce this notion with simple examples. Assume first that there are k = 2 conditioning points x_1, x_2 and hence two extremal functions φ_{x_1}^+ and φ_{x_2}^+. Two cases can occur: either φ_{x_1}^+ = φ_{x_2}^+, i.e., the maxima at points x_1 and x_2 are reached by the same extremal function, in which case the hitting scenario is θ = ({x_1, x_2}); or φ_{x_1}^+ ≠ φ_{x_2}^+, i.e., the maxima at points x_1 and x_2 are reached by different extremal functions, in which case the hitting scenario is θ = ({x_1}, {x_2}).

In the case of k = 3 conditioning points, there are 5 different possibilities for the hitting scenario: ({x_1, x_2, x_3}), ({x_1, x_2}, {x_3}), ({x_1, x_3}, {x_2}), ({x_1}, {x_2, x_3}) and ({x_1}, {x_2}, {x_3}). The interpretation is straightforward: for instance, the hitting scenario θ = ({x_1, x_3}, {x_2}) corresponds to the case when the maxima at x_1 and x_3 are attained by the same extremal function but the maximum at x_2 corresponds to a different extremal event, i.e., φ_{x_1}^+ = φ_{x_3}^+ ≠ φ_{x_2}^+. The general definition is as follows.

Definition. The hitting scenario θ is the random partition (θ_1, ..., θ_ℓ) of the conditioning points {x_1, ..., x_k} such that for any j_1 ≠ j_2, x_{j_1} and x_{j_2} are in the same component of θ if and only if φ_{x_{j_1}}^+ = φ_{x_{j_2}}^+.
The hitting scenario θ takes into account the redundancies in Φ^+ = {φ_{x_1}^+, ..., φ_{x_k}^+} and it is straightforward that the number of blocks ℓ of θ is exactly the number of distinct extremal functions and hence the cardinality of Φ^+. Hence we can rewrite Φ^+ = {φ_1^+, ..., φ_ℓ^+} where φ_j^+ = φ_x^+ for all x ∈ θ_j.

So far, we considered only those functions φ_i that hit the maximum Z at some conditioning points {x_j}_{1≤j≤k}. The remaining functions are called subextremal, as in the following definition.

Definition. A function φ ∈ Φ satisfying φ(x_j) < Z(x_j) for all 1 ≤ j ≤ k is called subextremal. The set of subextremal functions is called the subextremal point process Φ^−.
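Given finitely many simulated atoms evaluated at the conditioning points, the extremal functions, the subextremal functions and the hitting scenario can be read off directly. The small numeric example below is made up for illustration (zero-based indices; the chapter does not prescribe this bookkeeping).

```python
# Reading off the hitting scenario from finitely many atoms phi_i
# evaluated at the conditioning points; the numbers are illustrative.
import numpy as np

# phi[i, j] = phi_i(x_j) for 5 atoms and k = 4 conditioning points.
phi = np.array([
    [3.0, 2.5, 0.4, 0.2],
    [0.5, 0.6, 2.0, 0.3],
    [0.1, 0.2, 0.3, 1.5],
    [0.9, 0.8, 0.7, 0.6],
    [0.2, 0.1, 0.4, 0.3],
])

winners = phi.argmax(axis=0)      # extremal-function index at each x_j
extremal = sorted(set(winners))    # distinct extremal functions: Phi^+
subextremal = [i for i in range(phi.shape[0]) if i not in extremal]  # Phi^-

# Hitting scenario: group the conditioning points that share one
# extremal function.
theta = [tuple(j for j in range(phi.shape[1]) if winners[j] == i)
         for i in extremal]
```

Here atom 0 is extremal at x_1 and x_2, atom 1 at x_3 and atom 2 at x_4, so the hitting scenario is θ = ({x_1, x_2}, {x_3}, {x_4}) and atoms 3 and 4 are subextremal.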
This yields a disjoint decomposition of the point process Φ = Φ^+ ∪ Φ^− into its extremal and subextremal parts. This is illustrated in Figure 1.1, where the extremal functions (black) and the subextremal functions (gray) are depicted as well as the corresponding hitting scenarios.

FIGURE 1.1
Two realizations of the Poisson point process Φ and of the corresponding hitting scenario θ with conditioning points x_i = i, i = 1, ..., 4, represented by the circles. Left: the hitting scenario is θ = ({x_1}, {x_2, x_3}, {x_4}). Right: the hitting scenario is θ = ({x_1, x_2}, {x_3, x_4}).

A key result in the study of the conditional distribution is the following theorem providing the joint distribution of (θ, Φ^+, Φ^−). We denote by P_k the set of partitions of {x_1, ..., x_k} and by M_p(C) the set of C-valued point measures. We introduce some vectorial notation. Let x = (x_1, ..., x_k) and, for τ = (τ_1, ..., τ_ℓ) ∈ P_k, write x_{τ_j} = (x_i)_{x_i ∈ τ_j}. For any vector s = (s_1, ..., s_m) ∈ X^m and any function f: X → R, we write f(s) = (f(s_1), ..., f(s_m)). If f_1, f_2: X → R are two functions, the notation f_1(s) < f_2(s) means that f_1(s_j) < f_2(s_j), j = 1, ..., m.

Theorem. For any partition τ = (τ_1, ..., τ_ℓ) in P_k and any Borel sets A ⊂ C^ℓ and B ⊂ M_p(C), we have

    P[θ = τ, (φ_1^+, ..., φ_ℓ^+) ∈ A, Φ^− ∈ B]
      = ∫_{C^ℓ} 1_{{max_{j'≠j} f_{j'}(x_{τ_j}) < f_j(x_{τ_j}), j = 1, ..., ℓ}} 1_{{(f_1, ..., f_ℓ) ∈ A}}
        × P[Φ ∈ B and ∀φ ∈ Φ: φ(x) < max_{1≤j≤ℓ} f_j(x)] Λ(df_1) ··· Λ(df_ℓ).

The main technical tool for this proof is the Slivnyak–Mecke formula from stochastic geometry, see e.g. Stoyan et al. (1987). We sketch here the main lines of the proof.

Proof. First note that the event {θ = τ, (φ_1^+, ..., φ_ℓ^+) ∈ A, Φ^− ∈ B} is realized if and only if there exists an ℓ-tuple (φ_1, ..., φ_ℓ) ∈ Φ^ℓ satisfying the following conditions:
i) Φ^+ = {φ_1, ..., φ_ℓ};
ii) Φ^− = Φ \ {φ_1, ..., φ_ℓ};
iii) θ = τ;
iv) (φ_1, ..., φ_ℓ) ∈ A;
v) Φ^− ∈ B.

Clearly, if such an ℓ-tuple does exist, it is necessarily unique and equal to (φ_1^+, ..., φ_ℓ^+). We deduce that the probability of interest can be written in the form

    P[θ = τ, (φ_1^+, ..., φ_ℓ^+) ∈ A, Φ^− ∈ B]
      = E[ Σ_{(φ_1, ..., φ_ℓ) ∈ Φ^ℓ} F(φ_1, ..., φ_ℓ, Φ \ {φ_1, ..., φ_ℓ}) ]    (1.3)

with F: C^ℓ × M_p(C) → {0, 1} defined by

    F(φ_1, ..., φ_ℓ, Φ \ {φ_1, ..., φ_ℓ}) = 1 if conditions i)–v) are satisfied, and 0 otherwise.

By the Slivnyak–Mecke formula (Stoyan et al., 1987), we deduce that the expectation (1.3) is equal to

    ∫_{C^ℓ} E[F(f_1, ..., f_ℓ, Φ)] Λ(df_1) ··· Λ(df_ℓ).

The theorem follows after a more explicit description of the functional F. Conditions i), ii) and iii) together are equivalent to

    max_{j'≠j} φ_{j'}(x_{τ_j}) < φ_j(x_{τ_j}),   j = 1, ..., ℓ,

and

    ∀φ ∈ Φ \ {φ_1, ..., φ_ℓ}: φ(x) < max_{1≤j≤ℓ} φ_j(x).

Condition iv) is clear and condition v) is equivalent to Φ \ {φ_1, ..., φ_ℓ} ∈ B.

1.2.2 The general structure of conditional distributions

In the previous section, we have seen that we are able to compute the joint distribution of the hitting scenario θ, the extremal functions Φ^+ and the subextremal functions Φ^−. We now consider the conditional joint distribution given the observations (1.1). It can be computed on the basis of the theorem above only, because the observations (Z(x_1), ..., Z(x_k)) can be expressed as a function of θ and Φ^+, i.e., Z(x_i) = φ_j^+(x_i) with j such that x_i ∈ θ_j.
Nevertheless, this computation is rather tedious; we give here the result without proof and consider only the so-called regular case. We say that the intensity measure Λ of the point process Φ is regular at some point s = (s_1, ..., s_p) ∈ X^p if the marginal spectral measure Λ_s is absolutely continuous with respect to the Lebesgue measure on (0, +∞)^p. More precisely, we define the marginal spectral measure

    Λ_s(dz_1, ..., dz_p) = Λ(f(s_1) ∈ dz_1, ..., f(s_p) ∈ dz_p)

and assume that, on (0, +∞)^p,

    Λ_s(dz_1, ..., dz_p) = λ_s(z_1, ..., z_p) dz_1 ··· dz_p.

The proof of the following result and more details are to be found in Dombry and Eyi-Minko (2013). Note that, in the non-regular case, some further analysis of the different hitting scenarios is needed, as some hitting scenarios have to be excluded due to the conditioning data. For examples of non-regular cases and conditional sampling procedures in these cases, see Wang and Stoev (2011) and Oesting and Schlather (2014), who consider max-linear and mixed moving maxima processes, respectively.

Theorem. Assume that Λ is regular at x = (x_1, ..., x_k). For τ = (τ_1, ..., τ_ℓ) ∈ P_k and j = 1, ..., ℓ, define I_j = {i : x_i ∈ τ_j}, x_{τ_j} = (x_i)_{i∈I_j}, z_{τ_j} = (z_i)_{i∈I_j}, x_{τ_j^c} = (x_i)_{i∉I_j} and z_{τ_j^c} = (z_i)_{i∉I_j}. The conditional distribution of (θ, Φ^+, Φ^−) with respect to the observations (1.1) is obtained as follows:

1. For τ = (τ_1, ..., τ_ℓ) ∈ P_k,

    P[θ = τ | Z(x) = z] = (1/C_{x,z}) ∏_{j=1}^ℓ ∫_{{u_j < z_{τ_j^c}}} λ_{(x_{τ_j}, x_{τ_j^c})}(z_{τ_j}, u_j) du_j,    (1.4)

where the normalization constant C_{x,z} is such that Σ_{τ∈P_k} P[θ = τ | Z(x) = z] = 1.

2. Conditionally on the observations (1.1) and on the hitting scenario τ = (τ_1, ..., τ_ℓ) ∈ P_k, the extremal functions φ_1^+, ..., φ_ℓ^+ are independent with distribution

    P[φ_j^+ ∈ df | Z(x) = z, θ = τ] = Λ[df | f(x_{τ_j}) = z_{τ_j}, f(x_{τ_j^c}) < z_{τ_j^c}],   j = 1, ..., ℓ.    (1.5)

3. Conditionally on the observations (1.1), Φ^− is independent of Φ^+ = {φ_1^+, ..., φ_ℓ^+} and has the same distribution as a Poisson point process on C with intensity 1_{{f(x) < z}} Λ(df).
This theorem allows us to reconstruct the conditional distribution of the point process Φ given the observations Z(x) = z via a three-step procedure: first construct the conditional hitting scenario θ and the extremal functions φ_1^+, ..., φ_ℓ^+ (steps 1 and 2), which yields Φ^+; independently construct the conditional subextremal point process Φ^− (step 3); finally set Φ = Φ^+ ∪ Φ^− to obtain the conditional point process Φ.

The conditional probabilities appearing in steps 2 and 3 are quite natural. Given the observations Z(x) = z and the hitting scenario θ = τ, the extremal function φ_j^+ must satisfy the constraints φ_j^+(x_{τ_j}) = z_{τ_j} and φ_j^+(x_{τ_j^c}) < z_{τ_j^c}, so that it seems natural to obtain the conditional intensity Λ given these constraints. Similarly, given Z(x) = z, the subextremal functions φ ∈ Φ^− must satisfy the constraint φ(x) < z, which naturally leads to the point process intensity Λ restricted to the set of functions {f ∈ C, f(x) < z}.

1.2.3 Examples of regular models

The theorem above requires the regularity of the intensity measure Λ at the point x = (x_1, ..., x_k). We now recall some popular models for max-stable processes that are regular and give the corresponding intensity function λ_x. These models are mainly based on Gaussian or related distributions and provide a nice framework for explicit computations.

Example 1: Brown–Resnick process
The Brown–Resnick process introduced by Kabluchko et al. (2009) is a stationary max-stable random field on X = R^d corresponding to the case where Y(x) = exp{W(x) − γ(x)} in (1.2), with W a centered Gaussian process with stationary increments, semi-variogram γ and such that W(o) = 0 almost surely. One can show that the intensity measure Λ is regular at x ∈ X^k as long as the covariance matrix Σ_x of the random vector W(x) is positive definite. We denote by g_x the Gaussian density

    g_x(u) = (2π)^{-k/2} det(Σ_x)^{-1/2} exp{ −(1/2) u^T Σ_x^{-1} u }.

The marginal intensity measure is computed as follows: for all Borel sets A ⊂ (0, +∞)^k,
    Λ_x(A) = ∫_0^∞ P[ζ Y(x) ∈ A] ζ^{-2} dζ
           = ∫_0^∞ P[ζ exp{W(x) − γ(x)} ∈ A] ζ^{-2} dζ
           = ∫_A ∫_0^∞ g_x(log z − log ζ + γ(x)) ∏_{i=1}^k z_i^{-1} ζ^{-2} dζ dz
           = ∫_A λ_x(z) dz
with

    λ_x(z) = ∫_0^∞ g_x(log z − log ζ + γ(x)) ∏_{i=1}^k z_i^{-1} ζ^{-2} dζ.

Some standard but tedious computations for Gaussian integrals reveal that

    λ_x(z) = C_x exp( −(1/2) (log z)^T Q_x log z + L_x log z ) ∏_{i=1}^k z_i^{-1},   z ∈ (0, ∞)^k,

with 1_k = (1, ..., 1)^T ∈ R^k, γ_x = (γ(x_i))_{i=1,...,k},

    Q_x = Σ_x^{-1} − (Σ_x^{-1} 1_k 1_k^T Σ_x^{-1}) / (1_k^T Σ_x^{-1} 1_k),

    L_x = ((1_k^T Σ_x^{-1} γ_x − 1) / (1_k^T Σ_x^{-1} 1_k)) 1_k^T Σ_x^{-1} − γ_x^T Σ_x^{-1},

    C_x = (2π)^{(1−k)/2} det(Σ_x)^{-1/2} (1_k^T Σ_x^{-1} 1_k)^{-1/2}
          × exp{ (1/2) (1_k^T Σ_x^{-1} γ_x − 1)² / (1_k^T Σ_x^{-1} 1_k) − (1/2) γ_x^T Σ_x^{-1} γ_x }.

See Dombry et al. (2013) for more details.

Example 2: Schlather process
The Schlather process (Schlather, 2002), also called extremal Gaussian process, is a max-stable random field on X = R^d corresponding to the case where Y(x) = (2π)^{1/2} max{0, W(x)} in (1.2), with W a standard Gaussian process with correlation function ρ. For x ∈ X^k, and provided the covariance matrix Σ_x of the random vector W(x) is positive definite, the intensity function is

    λ_x(z) = π^{-(k-1)/2} det(Σ_x)^{-1/2} a_x(z)^{-(k+1)/2} Γ((k+1)/2),   z ∈ (0, +∞)^k,

where a_x(z) = z^T Σ_x^{-1} z. See Dombry et al. (2013) for more details on the computation.

Example 3: extremal t-process
The extremal t-process is a generalization of the Schlather process above, obtained with an extra parameter ν > 0 that yields more flexibility in the model. It is a max-stable random field on X = R^d obtained with Y(x) = c_ν max{0, W(x)}^ν in (1.2), with W a standard Gaussian process with correlation function ρ and c_ν = √π 2^{-(ν-2)/2} Γ((ν+1)/2)^{-1}. Provided the covariance matrix Σ_x of the random vector W(x) is positive definite, the intensity function is

    λ_x(z) = c_ν ν^{-k+1} 2^{(ν-2)/2} π^{-k/2} det(Σ_x)^{-1/2} a_x(z, ν)^{-(k+ν)/2} Γ((k+ν)/2) ∏_{j=1}^k z_j^{(1−ν)/ν}

for z ∈ (0, +∞)^k and with a_x(z, ν) = (z^{1/ν})^T Σ_x^{-1} z^{1/ν}. See Ribatet (2013).
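The Schlather intensity can be transcribed directly into code. As a sanity check, for k = 1 and unit variance the exponent-measure density must reduce to the unit Fréchet intensity z^{-2}. The function name below is an illustrative choice, not from the chapter.

```python
# Direct transcription of the Schlather intensity
#   lambda_x(z) = pi^{-(k-1)/2} det(Sigma)^{-1/2} a(z)^{-(k+1)/2} Gamma((k+1)/2)
# with a(z) = z^T Sigma^{-1} z; the helper name is illustrative.
import numpy as np
from math import gamma, pi

def schlather_intensity(z, sigma):
    """Evaluate the Schlather intensity at z for covariance matrix sigma."""
    z = np.asarray(z, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    k = z.size
    a = z @ np.linalg.solve(sigma, z)  # a(z) = z^T Sigma^{-1} z
    return (pi ** (-(k - 1) / 2) * np.linalg.det(sigma) ** (-0.5)
            * a ** (-(k + 1) / 2) * gamma((k + 1) / 2))

# Sanity check in dimension one: lambda(z) = z^{-2}.
val = schlather_intensity([2.0], [[1.0]])  # expect 2^{-2} = 0.25
```

For k = 1, a(z) = z², det(Σ) = 1 and Γ(1) = 1, so the expression collapses to z^{-2}, the exponent-measure density of the unit Fréchet distribution.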
1.2.4 Distribution of the extremal functions

The distribution of the extremal functions appearing in Equation (1.5) is still quite theoretical at this stage and it is unclear how one can sample from this distribution. There are also theoretical questions related to its definition: the intensity measure Λ has an infinite total mass and the conditioning event {f(x_{τ_j}) = z_{τ_j}, f(x_{τ_j^c}) < z_{τ_j^c}} has zero measure.

A way to bypass these difficulties is to consider only the finite dimensional margins of the extremal functions φ_j^+ and not the full random process. In practice, this is enough for simulation purposes since one simulates the random field on a finite grid only and not on the whole state space X. Let s = (s_1, ..., s_m) ∈ X^m be the set of new locations for the conditional sampling; we focus on the conditional distribution of Z(s) given Z(x) = z. Equation (1.5) can be simplified if we assume the regularity of the intensity measure Λ at (x, s) ∈ X^{k+m}. We can then introduce the conditional intensity function

    λ_{s|x,z}(u) = λ_{(s,x)}(u, z) / λ_x(z),   u ∈ (0, ∞)^m.

Equation (1.5) can be rewritten as

    P[φ_j^+(s) ∈ dv | Z(x) = z, θ = τ]
      = (1/C_j) ( ∫ 1_{{u_j < z_{τ_j^c}}} λ_{(s, x_{τ_j^c}) | x_{τ_j}, z_{τ_j}}(v, u_j) du_j ) dv    (1.6)

with C_j the normalization constant

    C_j = ∫∫ 1_{{u_j < z_{τ_j^c}}} λ_{(s, x_{τ_j^c}) | x_{τ_j}, z_{τ_j}}(v, u_j) du_j dv.

In words, the conditional law of φ_j^+(s) is equal to the distribution of the random variable V obtained as the first component of (V, U) with density λ_{(s, x_{τ_j^c}) | x_{τ_j}, z_{τ_j}}(v, u) conditioned on the event U < z_{τ_j^c}.
Example 1 continued: Brown–Resnick process
In the case of the Brown–Resnick process, for all (s, x) ∈ X^{m+k}, (u, z) ∈ (0, ∞)^{m+k} and provided the covariance matrix Σ_{(s,x)} is positive definite, the conditional intensity function corresponds to a multivariate log-normal probability density function

    λ_{s|x,z}(u) = (2π)^{-m/2} det(Σ_{s|x})^{-1/2}
                   × exp{ −(1/2)(log u − µ_{s|x,z})^T Σ_{s|x}^{-1} (log u − µ_{s|x,z}) } ∏_{i=1}^m u_i^{-1},

where µ_{s|x,z} ∈ R^m and Σ_{s|x} are the mean and covariance matrix of the underlying normal distribution and are given by

    Σ_{s|x}^{-1} = J_{m,k}^T Q_{(s,x)} J_{m,k},
    µ_{s|x,z} = { L_{(s,x)} J_{m,k} − (log z)^T J̃_{m,k}^T Q_{(s,x)} J_{m,k} } Σ_{s|x},
with

    J_{m,k} = [ Id_m ; 0_{k,m} ],   J̃_{m,k} = [ 0_{m,k} ; Id_k ],

where Id_k denotes the k × k identity matrix and 0_{m,k} the m × k null matrix.

Example 2 continued: Schlather process
For (s, x) ∈ X^{m+k}, (u, z) ∈ R^{m+k} and provided that the covariance matrix Σ_{(s,x)} is positive definite, the conditional intensity function λ_{s|x,z}(u) corresponds to a multivariate Student distribution

    λ_{s|x,z}(u) = π^{-m/2} (k+1)^{-m/2} det(Σ̃)^{-1/2}
                   × { 1 + (u − µ̃)^T Σ̃^{-1} (u − µ̃)/(k+1) }^{-(m+k+1)/2} Γ((m+k+1)/2) / Γ((k+1)/2),

with k+1 degrees of freedom, location parameter µ̃ = Σ_{s:x} Σ_x^{-1} z and scale matrix

    Σ̃ = (a_x(z)/(k+1)) ( Σ_s − Σ_{s:x} Σ_x^{-1} Σ_{x:s} ),

where

    Σ_{(s,x)} = [ Σ_s, Σ_{s:x} ; Σ_{x:s}, Σ_x ].

Example 3 continued: extremal t-process
The results are similar and generalize the case of the Schlather process. For (s, x) ∈ X^{m+k} and (u, z) ∈ R^{m+k}, we set µ̃ = Σ_{s:x} Σ_x^{-1} z^{1/ν} and

    Σ̃ = (k+ν)^{-1} a_x(z, ν) ( Σ_s − Σ_{s:x} Σ_x^{-1} Σ_{x:s} ).

Then we have

    λ_{s|x,z}(u) = π^{-m/2} (k+ν)^{-m/2} det(Σ̃)^{-1/2}
                   × { 1 + (u^{1/ν} − µ̃)^T Σ̃^{-1} (u^{1/ν} − µ̃)/(k+ν) }^{-(m+k+ν)/2}
                   × Γ((m+k+ν)/2) / Γ((k+ν)/2) × ν^{-m} ∏_{j=1}^m u_j^{(1−ν)/ν}.

The last term in the previous equation corresponds to the Jacobian of the mapping u ↦ u^{1/ν}. Hence we recognize that the conditional intensity function is the density of the random vector T^ν where T is a Student random vector with k+ν degrees of freedom, location µ̃ and scale matrix Σ̃.
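Location-scale multivariate Student vectors such as the one appearing in the Schlather conditional intensity can be drawn with the standard normal/chi-square mixture construction. The location, scale matrix and degrees of freedom below are illustrative stand-ins, not values computed from conditioning data.

```python
# Standard normal/chi-square construction for a location-scale
# multivariate Student vector; loc, scale and df are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def rmvt(n, df, loc, scale):
    """Draw n samples from a multivariate Student distribution with df
    degrees of freedom, location vector loc and scale matrix scale."""
    loc = np.asarray(loc, dtype=float)
    L = np.linalg.cholesky(np.asarray(scale, dtype=float))
    z = rng.standard_normal((n, loc.size)) @ L.T        # correlated normals
    chi2 = rng.chisquare(df, size=(n, 1))               # mixing variable
    return loc + z / np.sqrt(chi2 / df)

# With k = 2 conditioning points, the Schlather conditional intensity has
# k + 1 = 3 degrees of freedom.
samples = rmvt(100_000, 3, loc=[1.0, 2.0], scale=[[1.0, 0.3], [0.3, 1.0]])
```

Since the degrees of freedom exceed one, the sample mean converges to the location vector, which gives a quick empirical check. For the extremal t-process one would additionally raise the (positive) samples to the power ν, following the T^ν identification above.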
1.3 Conditional sampling of max-stable processes

We now consider the conditional sampling of max-stable processes following Dombry et al. (2013). We present a 3-step sampling procedure that is quite straightforwardly derived from the theorem of Section 1.2.2. We particularly focus on the first step and discuss a Gibbs sampling approach for the conditional hitting scenario.

1.3.1 A 3-step procedure for conditional sampling

The theorem of Section 1.2.2 provides the conditional distribution of the point process Φ given the observations (1.1). In practice, we are rather interested in the conditional simulation of the max-stable process Z(s) at some locations s = (s_1, ..., s_m) ∈ X^m. The connection is rather straightforward and is made explicit in the next proposition.

Proposition. Assume that Λ is regular at (x, s) ∈ X^{k+m}. Consider the three-step procedure:

1. Draw a random partition θ ∈ P_k with distribution (1.4);
2. Given θ = (τ_1, ..., τ_ℓ), draw ℓ independent random vectors φ_1^+(s), ..., φ_ℓ^+(s) with distribution (1.6) and define the random vector Z^+(s) = max_{j=1,...,ℓ} φ_j^+(s);
3. Independently draw a Poisson point process {ζ_i}_{i≥1} on (0, ∞) with intensity ζ^{-2} dζ and {Y_i}_{i≥1} independent copies of Y, and define the random vector Z^−(s) = max_{i≥1} ζ_i Y_i(s) 1_{{ζ_i Y_i(x) < z}}.

Then the random vector Z̃(s) = max{Z^+(s), Z^−(s)} follows the conditional distribution of Z(s) given Z(x) = z.

This 3-step procedure, which, in general, can also be applied in the non-regular case, is illustrated in Figure 1.2 on the simple case of a 1-D Smith storm process with Gaussian shape. In step 3, it is worth noting that Z^−(s) follows the conditional distribution of Z(s) given Z(x) < z. It is not difficult to show that the conditional cumulative distribution function of Z(s) given Z(x) = z is given by

    P[Z(s) ≤ a | Z(x) = z] = P[Z(s) ≤ a | Z(x) < z] Σ_{τ∈P_k} π_{x,z}(τ) ∏_{j=1}^ℓ F_{τ,j}(a),
FIGURE 1.2
The 3-step procedure for conditional sampling. First line: construction of the conditional hitting scenario (step 1, left), of the extremal functions (step 2, middle) and of the associated process Z^+ (right). Second line: construction of the subextremal functions (step 3, left), of the associated process Z^− (middle) and of the conditioned max-stable process Z̃ = max(Z^+, Z^−) (right).

where π_{x,z}(τ) = P[θ = τ | Z(x) = z] is given by Equation (1.4) and

    F_{τ,j}(a) = P[φ_j^+(s) ≤ a | Z(x) = z, θ = τ]
               = ∫_{{u_j < z_{τ_j^c}, v ≤ a}} λ_{(s, x_{τ_j^c}) | x_{τ_j}, z_{τ_j}}(v, u_j) du_j dv
                 / ∫_{{u_j < z_{τ_j^c}}} λ_{x_{τ_j^c} | x_{τ_j}, z_{τ_j}}(u_j) du_j.

1.3.2 Gibbs sampling for the conditional hitting scenario

In the above 3-step procedure for conditional sampling, step 2 requires sampling the extremal functions. As we have seen in Examples 1, 2 and 3, where log-normal and Student distributions appear, these can be of a relatively simple structure, although the imposed equality and inequality constraints may cause additional difficulties. Step 3 can be performed by an unconditional simulation of the max-stable process Z according to an appropriate spectral representation, rejecting all those functions that do not respect the constraints given by the conditioning data. For details on unconditional simulation of max-stable processes we refer to the corresponding chapter.

However, the conditional sampling of the hitting scenario (step 1) remains the most challenging step. According to Equation (1.4), it is given by

    π_{x,z}(τ) = P[θ = τ | Z(x) = z] = (1/C_{x,z}) ∏_{j=1}^ℓ ω_{τ_j}
where

    ω_{τ_j} = λ_{x_{τ_j}}(z_{τ_j}) ∫_{{u_j < z_{τ_j^c}}} λ_{x_{τ_j^c} | x_{τ_j}, z_{τ_j}}(u_j) du_j.

The last integral is the multivariate cumulative distribution function of the conditional intensity function λ_{x_{τ_j^c} | x_{τ_j}, z_{τ_j}}. Hence this is a multivariate log-normal cdf in the Brown–Resnick model and a multivariate Student cdf in the Schlather or extremal-t models.

In the following, we assume that the weights ω_{τ_j} can be accurately computed numerically and we focus on how to sample from π_{x,z}, a discrete probability measure on the set P_k of partitions of {x_1, ..., x_k}. The main difficulty is that the norming constant C_{x,z} is unknown and a naive computation of this constant requires the computation of the weight ∏_{j=1}^ℓ ω_{τ_j} for all τ ∈ P_k. The number of terms, or equivalently the cardinality of P_k, is given by the so-called Bell number B_k. The first 10 Bell numbers are

    k    1  2  3  4   5   6    7    8     9      10
    B_k  1  2  5  15  52  203  877  4140  21147  115975

Hence determining the discrete probability π_{x,z} requires the computation of 52 weights for k = 5 and 115975 for k = 10. Even worse, B_k grows super-exponentially with k, so that the storage of all partitions quickly exceeds the memory capacities of a standard computer. This is the so-called phenomenon of combinatorial explosion and we need an alternative to the naive discrete enumeration of all possibilities.

It is customary to use Markov chain Monte Carlo sampling when the target distribution is known up to a multiplicative constant only. The aim is to construct a Markov chain with stationary distribution equal to the target distribution and with good mixing properties. Two main methods exist: the Metropolis–Hastings algorithm and the Gibbs sampler. We focus here on the second option.

For τ ∈ P_k and j ∈ {1, ..., k}, let τ_{−j} be the restriction of τ to the set {x_1, ..., x_k} \ {x_j}. As usual with Gibbs samplers, our goal is to simulate from

    P[θ | θ_{−j} = τ_{−j}],    (1.7)

where θ ∈ P_k is a random partition which follows the target distribution π_{x,z}(·) and τ is typically the current state of the Markov chain.
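The Bell numbers B_k discussed above satisfy the classical Bell-triangle recurrence, which makes the combinatorial explosion easy to quantify; a minimal pure-Python sketch (the function name is an illustrative choice):

```python
# Bell numbers via the Bell triangle: each row starts with the last entry
# of the previous row, and B_k is the last entry of row k.
def bell_numbers(n):
    """Return [B_1, ..., B_n], where B_k counts the partitions of a
    k-element set."""
    bells, prev = [], []
    for _ in range(n):
        row = [prev[-1]] if prev else [1]   # row starts with previous B
        for val in prev:
            row.append(row[-1] + val)       # triangle recurrence
        bells.append(row[-1])
        prev = row
    return bells
```

For instance, `bell_numbers(10)` reproduces the table of the first 10 Bell numbers, ending with B_10 = 115975, already far too many weights to enumerate routinely.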
It is easy to see that the number of possible updates according to (1.7) is always less than or equal to k, so that the combinatorial explosion is avoided. Indeed, the point x_j can be reallocated to any of the components of τ_{−j} or to a new component containing the single point x_j. We deduce that the number of possible updates τ' ∈ P_k such that τ'_{−j} = τ_{−j} is

    b_j^+ = ℓ     if {x_j} is a partitioning set of τ,
    b_j^+ = ℓ+1   if {x_j} is not a partitioning set of τ.

For illustration, consider the set {x_1, x_2, x_3} and let τ = ({x_1, x_2}, {x_3}). Then the possible partitions τ' such that τ'_{−2} = τ_{−2} are

    ({x_1, x_2}, {x_3}),  ({x_1}, {x_2}, {x_3}),  ({x_1}, {x_2, x_3}),    (1.8)
while there exist only two partitions such that τ'_{−3} = τ_{−3}, i.e.,

    ({x_1, x_2}, {x_3}),  ({x_1, x_2, x_3}).

For our work we use a random scan implementation of the Gibbs sampler (Liu et al., 1995), meaning that one iteration of the Gibbs sampler randomly selects an element j of {1, ..., k} and then updates the current state τ according to the proposal distribution (1.7). For the sake of simplicity, we use the uniform random scan, i.e., j is selected according to the uniform distribution on {1, ..., k}. Figure 1.3 shows two successive iterations of this random scan Gibbs sampler.

FIGURE 1.3
Two successive iterations of the Gibbs sampler for the conditional hitting scenario with k = 8 conditioning points: after choosing a random point, the arrows show the different possible reallocations, the bold arrow representing the chosen one.

The distribution (1.7) has nice properties. For all τ' ∈ P_k with τ'_{−j} = τ_{−j} we have

    P[θ = τ' | θ_{−j} = τ_{−j}] = π_{x,z}(τ') / Σ_{τ̃ ∈ P_k} π_{x,z}(τ̃) 1_{{τ̃_{−j} = τ_{−j}}} ∝ ∏_{j'=1}^{ℓ'} ω_{τ'_{j'}},    (1.9)

where ℓ' denotes the number of blocks of τ'. Since τ and τ' share many components, it can be seen that many factors on the right-hand side of (1.9) cancel out, except at most four of them. In the previous example (1.8), the corresponding weights are proportional to

    1,   ω_{{x_1}} ω_{{x_2}} / ω_{{x_1,x_2}},   ω_{{x_1}} ω_{{x_2,x_3}} / (ω_{{x_1,x_2}} ω_{{x_3}}),

respectively. This makes the Gibbs sampler especially convenient. We finally provide some practical details on the computation of these conditional weights as well as a detailed algorithm (see Algorithm 1 in the supplementary material).
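The enumeration of reallocations discussed above is easy to sketch once partitions are stored as integer label vectors with a canonical relabeling (the storage convention is detailed in the next paragraphs). Zero-based indices and helper names below are illustrative; this is a sketch of the bookkeeping, not the chapter's Algorithm 1.

```python
# Partition bookkeeping for the Gibbs update: canonical integer-vector
# encoding plus enumeration of all reallocations of one point j.
def canonicalize(a):
    """Recode an integer-vector partition into its canonical form,
    e.g. (2, 2, 1) -> (1, 1, 2)."""
    relabel, out = {}, []
    for label in a:
        if label not in relabel:
            relabel[label] = len(relabel) + 1  # first-appearance order
        out.append(relabel[label])
    return tuple(out)

def gibbs_proposals(tau, j):
    """All partitions tau' with tau'_{-j} = tau_{-j}: point j may join
    any existing block of tau_{-j} or open a new singleton block."""
    others = [tau[i] for i in range(len(tau)) if i != j]
    labels = sorted(set(others))
    proposals = set()
    for b in labels + [max(labels) + 1]:   # existing blocks + new block
        cand = list(tau)
        cand[j] = b
        proposals.add(canonicalize(cand))
    return sorted(proposals)

tau = (1, 1, 2)  # the partition ({x_1, x_2}, {x_3})
```

For τ = ({x_1, x_2}, {x_3}), reallocating the second point yields the three partitions of (1.8), while reallocating the third point yields only two, matching the counts derived above.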
We first describe how each partition of {x_1, ..., x_k} is stored. To illustrate, consider the set {x_1, x_2, x_3} and the partition ({x_1, x_2}, {x_3}). With our convention, this partition is encoded as (1, 1, 2), indicating that x_1 and x_2 belong to the same partitioning set labeled 1 and x_3 belongs to the partitioning set 2. There exist several equivalent notations for this partition: for example one can use (2, 2, 1) or (1, 1, 3). However, there is a one-to-one mapping between P_k and the set

    P*_k = { (a_1, ..., a_k) : a_1 = 1, a_i ≤ max_{1≤j<i} a_j + 1, a_i ∈ Z, i ∈ {2, ..., k} }.

Consequently, we shall restrict our attention to the partitions that live in P*_k and, going back to our example, we see that (1, 1, 2) is valid but (2, 2, 1) and (1, 1, 3) are not.

For τ ∈ P*_k of size ℓ, let r_1 = Σ_{i=1}^k δ_{τ_i = a_j} and r_2 = Σ_{i=1}^k δ_{τ_i = b}, i.e., the number of conditioning locations that belong to the partitioning sets a_j and b, where b ∈ {1, ..., b_j^+} and

    b_j^+ = ℓ     if r_1 = 1,
    b_j^+ = ℓ+1   if r_1 ≠ 1.

Then the conditional probability distribution P[τ_j = b | τ_i = a_i, i ≠ j] given by Equation (1.9) is proportional to

    1                                            (b = a_j),                      (1.10a)
    w_{τ',b} / (w_{τ,b} w_{τ,a_j})               (r_1 = 1, r_2 ≠ 0, b ≠ a_j),    (1.10b)
    w_{τ',b} w_{τ',a_j} / (w_{τ,b} w_{τ,a_j})    (r_1 ≠ 1, r_2 ≠ 0, b ≠ a_j),    (1.10c)
    w_{τ',b} w_{τ',a_j} / w_{τ,a_j}              (r_1 ≠ 1, r_2 = 0, b ≠ a_j),    (1.10d)

where τ' = (a_1, ..., a_{j−1}, b, a_{j+1}, ..., a_k) and w_{τ,a_i} = w_{{x_{k'} : τ_{k'} = a_i}}. It is worth stressing that although τ' does not necessarily belong to P*_k, it corresponds to a unique partition of P_k and we can use the bijection P_k → P*_k to recode τ' into an element of P*_k.

1.4 Simulation Study

1.4.1 Gibbs Sampler

In this section we check whether the Gibbs sampler is able to sample from π_{x,z}(·). To illustrate a typical sample path, Figure 1 in the supplementary material shows the trace plot of a simulated chain of length 2000 with k = 5 conditioning locations and a thinning lag of 5, and compares the theoretical probabilities {π_{x,z}(τ), τ ∈ P_k} to the empirical ones estimated from the Markov chain.
As mentioned in the previous section, only a few states have a significant probability of occurring, and these states most often differ only slightly. As expected, for this particular simulation the empirical probabilities match the theoretical ones.
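The recoding of an arbitrary label vector into its unique representative in P*_k, described in the previous section, can be sketched as follows (a minimal illustration; the function name is ours):

```python
def canonicalize(labels):
    """Relabel a partition encoding so that blocks are numbered 1, 2, ...
    in order of first appearance, i.e., map the encoding to its unique
    representative in P*_k."""
    mapping = {}
    out = []
    for a in labels:
        if a not in mapping:
            mapping[a] = len(mapping) + 1  # next unused block label
        out.append(mapping[a])
    return tuple(out)
```

For instance, the equivalent encodings (2, 2, 1) and (1, 1, 3) are both recoded to (1, 1, 2), the valid representative of ({x1, x2}, {x3}).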
To better assess whether our uniform random scan Gibbs sampler is able to sample from π_{x,z}(·), we simulate 250 Markov chains of length 1000, after removing a burn-in period and thinning the chain, and, similarly to Figure 1 in the supplementary material, compare the theoretical probabilities {π_{x,z}(τ), τ ∈ P_k} to the empirical ones using a χ² test for each of these chains. Since the computation of these theoretical probabilities is CPU intensive, the number of conditioning locations is at most 5 in our simulation study.

FIGURE 1.4
QQ-plots of the sample χ² test p-values against U(0, 1) quantiles for a varying number of conditioning locations k (from left to right, k = 2, 3, 4 and 5), with inserts showing the p-values of a Kolmogorov-Smirnov test for uniformity. The dashed lines show the 95% confidence envelopes.

Figure 1.4 plots the sample p-values of these χ² tests against the quantiles of a U(0, 1) distribution for a varying number of conditioning locations. This figure corroborates what we saw above: the sample p-values seem to follow a U(0, 1) distribution, indicating that the sampler is able to sample from π_{x,z}(·).

1.4.2 Conditional simulations

In this section we check whether our algorithm is able to produce realistic conditional simulations of Brown-Resnick processes with semi-variogram γ(h) = (h/λ)^ν. In this case, the spectral process Y(·) in (1.2) is a fractional Brownian motion with Hurst index ν/2. To have a broad overview, we consider three different sample path properties, as summarized below.
Sample path properties: γ1 (wiggly), γ2 (smooth) and γ3 (very smooth), each specified through its own values of the parameters λ and ν. The variogram parameters are set to ensure that the extremal coefficient function satisfies θ(115) = 1.7. Figure 1.5 shows one realization for each sample path configuration as well as the corresponding extremal coefficient function. These realizations will serve as the basis for our conditioning events. In order to check whether our sampling procedure is accurate, given a single conditioning event {Z(x) = z}, we generated 1000 conditional realizations of a Brown-Resnick process with standard Gumbel margins and semi-variogram γ_j (j = 1, 2, 3).
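To make the calibration of the range parameter concrete, here is a sketch of how λ could be chosen so that θ(115) = 1.7 for a given ν. It assumes the Hüsler-Reiss form θ(h) = 2Φ(√γ(h)/2) for a Brown-Resnick process with variogram γ(h) = (h/λ)^ν (Kabluchko et al., 2009); whether the chapter's γ denotes the variogram or the semi-variogram changes this formula by a factor of two, so this illustrates the calibration idea rather than reproducing the exact parameter values used here.

```python
from statistics import NormalDist

_Phi = NormalDist().cdf
_Phi_inv = NormalDist().inv_cdf

def theta(h, lam, nu):
    # Pairwise extremal coefficient under the assumed convention:
    # theta(h) = 2 * Phi(sqrt(gamma(h)) / 2), gamma(h) = (h / lam)**nu.
    return 2.0 * _Phi(((h / lam) ** nu) ** 0.5 / 2.0)

def calibrate_lam(nu, h=115.0, target=1.7):
    # Invert theta(h) = target for lam:
    # sqrt((h / lam)**nu) / 2 = Phi^{-1}(target / 2).
    q = _Phi_inv(target / 2.0)
    return h / ((2.0 * q) ** 2.0) ** (1.0 / nu)
```

With this convention the three smoothness levels ν each yield their own λ, all honoring the same dependence constraint at distance h = 115.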
FIGURE 1.5
Three realizations of a Brown-Resnick process with standard Gumbel margins and semi-variograms γ1, γ2 and γ3 (from left to right). The squares correspond to the 15 conditioning values that will be used in the simulation study. The right panel shows the associated extremal coefficient functions, where the solid, dashed and dotted lines correspond respectively to γ1, γ2 and γ3.

Figure 1.6 shows the pointwise sample quantiles obtained from these 1000 simulated paths in comparison to unit Gumbel quantiles. As expected, the conditional sample paths inherit the regularity driven by the Hurst index ν/2 of the process Y(·) in (1.2), and there is less variability in regions close to the conditioning locations. Conversely, although for this specific type of variogram θ(h) → 2 as h → ∞, in regions far away from any conditioning location the sample quantiles converge to those of a standard Gumbel distribution, indicating that the conditioning event no longer has any influence. In addition, the sample paths used to generate the conditioning events (see Figure 1.5) lie most of the time within the 95% pointwise confidence intervals, corroborating that our sampling procedure seems to be accurate: the coverage ranges between 0.93 and 1.00.

So far we have checked that the proposed sampling procedure yields the expected coverage as well as the right marginal properties as we move far away from any conditioning location. The last point to be fulfilled is to assess whether the simulation procedure honors the spatial dependence driven by the semi-variogram γ(·). To this aim we use the F-madogram (Cooley et al., 2006) to compare the pairwise extremal coefficient estimates to the theoretical extremal coefficient function. Since Z(·) | {Z(x) = z} is not max-stable, the F-madogram cannot be used directly.
However, by integrating out the conditioning event we recover the original Brown-Resnick distribution and the max-stability property. We therefore generate independently 1000 conditioning events {Z(x) = z}, x being fixed, and for each conditioning event one conditional realization of a Brown-Resnick process. Figure 1.7 compares the pairwise extremal coefficient estimates based on these simulations to the theoretical extremal coefficient function. As expected, whatever the number of conditioning locations and the semi-variogram, the (binned) pairwise estimates match the theoretical curve, indicating that the spatial dependence is honored.
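The F-madogram check described above can be sketched as follows. Given independent pairs (Z(x), Z(x + h)) with standard Gumbel margins, the estimator uses the relation θ(h) = (1 + 2ν_F(h)) / (1 − 2ν_F(h)) of Cooley et al. (2006), where ν_F(h) = E|F(Z(x + h)) − F(Z(x))| / 2. This is a minimal sketch for a single distance h; the binning over distance classes used for Figure 1.7 is omitted.

```python
import math

def gumbel_cdf(z):
    # Standard Gumbel distribution function F(z) = exp(-exp(-z)).
    return math.exp(-math.exp(-z))

def theta_fmadogram(pairs):
    """Pairwise extremal coefficient estimate from i.i.d. pairs
    (Z(x), Z(x + h)) with standard Gumbel margins, via the F-madogram."""
    n = len(pairs)
    nu_f = sum(abs(gumbel_cdf(a) - gumbel_cdf(b)) for a, b in pairs) / (2.0 * n)
    return (1.0 + 2.0 * nu_f) / (1.0 - 2.0 * nu_f)
```

As a sanity check, completely dependent pairs (a = b) give ν_F = 0 and hence θ = 1, the complete-dependence value.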
FIGURE 1.6
Pointwise sample quantiles estimated from 1000 conditional simulations of Brown-Resnick processes with standard Gumbel margins and semi-variograms γ1, γ2 and γ3 (left to right) and with k = 5, 10, 15 conditioning locations (top to bottom). The solid black lines show the pointwise 0.025, 0.5 and 0.975 sample quantiles and the dashed grey lines those of a standard Gumbel distribution. The squares show the conditioning points {(x_i, z_i)}_{i=1,...,k}. The solid grey lines correspond to the simulated paths used to generate the conditioning events. The inserts give the proportion of points lying in the 95% pointwise confidence intervals.
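The pointwise quantile bands and coverage proportions reported in Figure 1.6 can be computed along the following lines (a sketch using a crude order-statistic quantile; `paths` holds the simulated sample paths on a common grid and `true_path` the path used to build the conditioning event, both assumed given):

```python
def pointwise_bands(paths, lo=0.025, hi=0.975):
    """Pointwise empirical quantile bands across simulated paths."""
    n = len(paths)
    bands = []
    for t in range(len(paths[0])):
        vals = sorted(p[t] for p in paths)
        # crude empirical quantiles via order statistics
        bands.append((vals[int(lo * (n - 1))], vals[int(hi * (n - 1))]))
    return bands

def coverage(true_path, bands):
    """Proportion of grid points where the true path lies in the band."""
    inside = sum(1 for z, (a, b) in zip(true_path, bands) if a <= z <= b)
    return inside / len(true_path)
```

Repeating the coverage computation over independent conditioning events gives the range of values (0.93 to 1.00) quoted in the text.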
FIGURE 1.7
Comparison of the extremal coefficient estimates (using a binned F-madogram with 250 bins) and the theoretical extremal coefficient function for different semi-variograms, with k = 5 (left) or k = 15 (right) conditioning points. The o, + and × symbols correspond respectively to γ1, γ2 and γ3. The solid, dashed and dotted grey lines correspond to the theoretical extremal coefficient functions for γ1, γ2 and γ3.

Acknowledgements

M. Oesting and M. Ribatet acknowledge the ANR project McSim for financial support.

References

Chilès, J.-P. and Delfiner, P. (1999), Geostatistics: Modeling Spatial Uncertainty, Wiley Series in Probability and Statistics, John Wiley & Sons, New York.
Cooley, D., Naveau, P., and Poncet, P. (2006), Variograms for spatial max-stable random fields, in Dependence in Probability and Statistics, vol. 187 of Lecture Notes in Statistics, Springer, New York.
Davis, R. A. and Resnick, S. I. (1989), Basic properties and prediction of max-ARMA processes, Adv. in Appl. Probab., 21.
Davis, R. A. and Resnick, S. I. (1993), Prediction of stationary max-stable processes, Ann. Appl. Probab., 3.
de Haan, L. (1984), A spectral representation for max-stable processes, Ann. Probab., 12.
de Haan, L. and Ferreira, A. (2006), Extreme Value Theory: An Introduction, Springer Series in Operations Research and Financial Engineering, Springer, New York.
Dombry, C. and Eyi-Minko, F. (2013), Regular conditional distributions of continuous max-infinitely divisible random fields, Electron. J. Probab., 18, no. 7.
Dombry, C., Éyi-Minko, F., and Ribatet, M. (2013), Conditional simulation of max-stable processes, Biometrika, 100.
Kabluchko, Z., Schlather, M., and de Haan, L. (2009), Stationary max-stable fields associated to negative definite functions, Ann. Probab., 37.
Liu, J. S., Wong, W. H., and Kong, A. (1995), Covariance structure and convergence rate of the Gibbs sampler with various scans, J. Roy. Statist. Soc. Ser. B, 57.
Oesting, M. and Schlather, M. (2014), Conditional sampling for max-stable processes with a mixed moving maxima representation, Extremes, 17.
Penrose, M. D. (1992), Semi-min-stable processes, Ann. Probab., 20.
Ribatet, M. (2013), Spatial extremes: max-stable processes at work, J. SFdS, 154.
Schlather, M. (2002), Models for stationary max-stable random fields, Extremes, 5.
Stoyan, D., Kendall, W. S., and Mecke, J. (1987), Stochastic Geometry and Its Applications, vol. 69 of Mathematische Lehrbücher und Monographien, Akademie-Verlag, Berlin.
Wang, Y. and Stoev, S. A. (2011), Conditional sampling for spectrally discrete max-stable random fields, Adv. in Appl. Probab., 43.
Weintraub, K. S. (1991), Sample and ergodic properties of some min-stable processes, Ann. Probab., 19.
More informationApplication of Copulas as a New Geostatistical Tool
Application of Copulas as a New Geostatistical Tool Presented by Jing Li Supervisors András Bardossy, Sjoerd Van der zee, Insa Neuweiler Universität Stuttgart Institut für Wasserbau Lehrstuhl für Hydrologie
More informationThreshold Boolean Form for Joint Probabilistic Constraints with Random Technology Matrix
Threshold Boolean Form for Joint Probabilistic Constraints with Random Technology Matrix Alexander Kogan, Miguel A. Leeune Abstract We develop a new modeling and exact solution method for stochastic programming
More informationMobile Robot Localization
Mobile Robot Localization 1 The Problem of Robot Localization Given a map of the environment, how can a robot determine its pose (planar coordinates + orientation)? Two sources of uncertainty: - observations
More informationA spatio-temporal model for extreme precipitation simulated by a climate model
A spatio-temporal model for extreme precipitation simulated by a climate model Jonathan Jalbert Postdoctoral fellow at McGill University, Montréal Anne-Catherine Favre, Claude Bélisle and Jean-François
More informationHierarchical models. Dr. Jarad Niemi. August 31, Iowa State University. Jarad Niemi (Iowa State) Hierarchical models August 31, / 31
Hierarchical models Dr. Jarad Niemi Iowa State University August 31, 2017 Jarad Niemi (Iowa State) Hierarchical models August 31, 2017 1 / 31 Normal hierarchical model Let Y ig N(θ g, σ 2 ) for i = 1,...,
More information6 Markov Chain Monte Carlo (MCMC)
6 Markov Chain Monte Carlo (MCMC) The underlying idea in MCMC is to replace the iid samples of basic MC methods, with dependent samples from an ergodic Markov chain, whose limiting (stationary) distribution
More informationPart 8: GLMs and Hierarchical LMs and GLMs
Part 8: GLMs and Hierarchical LMs and GLMs 1 Example: Song sparrow reproductive success Arcese et al., (1992) provide data on a sample from a population of 52 female song sparrows studied over the course
More informationNormal approximation of Poisson functionals in Kolmogorov distance
Normal approximation of Poisson functionals in Kolmogorov distance Matthias Schulte Abstract Peccati, Solè, Taqqu, and Utzet recently combined Stein s method and Malliavin calculus to obtain a bound for
More informationSupplementary Notes: Segment Parameter Labelling in MCMC Change Detection
Supplementary Notes: Segment Parameter Labelling in MCMC Change Detection Alireza Ahrabian 1 arxiv:1901.0545v1 [eess.sp] 16 Jan 019 Abstract This work addresses the problem of segmentation in time series
More informationYaming Yu Department of Statistics, University of California, Irvine Xiao-Li Meng Department of Statistics, Harvard University.
Appendices to To Center or Not to Center: That is Not the Question An Ancillarity-Sufficiency Interweaving Strategy (ASIS) for Boosting MCMC Efficiency Yaming Yu Department of Statistics, University of
More informationFaithful couplings of Markov chains: now equals forever
Faithful couplings of Markov chains: now equals forever by Jeffrey S. Rosenthal* Department of Statistics, University of Toronto, Toronto, Ontario, Canada M5S 1A1 Phone: (416) 978-4594; Internet: jeff@utstat.toronto.edu
More informationStochastic volatility models: tails and memory
: tails and memory Rafa l Kulik and Philippe Soulier Conference in honour of Prof. Murad Taqqu 19 April 2012 Rafa l Kulik and Philippe Soulier Plan Model assumptions; Limit theorems for partial sums and
More information