Conditional Sampling for Max Stable Random Fields


Conditional Sampling for Max-Stable Random Fields
Yizao Wang, Department of Statistics, the University of Michigan
April 30th, GSPC at Duke University, Durham, North Carolina
Joint work with Stilian A. Stoev

An Illustrating Example

Given the model and some observations, how to do prediction?

Figure: A sample from the de Haan-Pereira model (de Haan and Pereira 2006): a stationary (moving maxima) max-stable random field. Parameters: ρ = 0., β_1 = 1., β_2 = 0.7.

An Illustrating Example

Figure: Four conditional samples from the de Haan-Pereira model.

Difficulties:
- An analytical formula is often impossible.
- The naïve Monte Carlo method does not apply.

Conditional Sampling for Max-Stable Random Fields

Our contribution:
- Obtained an explicit formula of the regular conditional probability for max-linear models (including a large class of max-stable random fields).
- Developed efficient software (R package maxlinear) for large-scale conditional sampling.
- Potential applications in prediction for extremal phenomena, e.g., environmental and financial problems.

Max-Linear Models

Formula:
X_i = ⋁_{j=1}^{p} a_{i,j} Z_j = max_{1≤j≤p} a_{i,j} Z_j,  1 ≤ i ≤ n,  denoted by X = A ⊙ Z (max-matrix product),
where
- Z_j ~ f_{Z_j} are independent, continuous, nonnegative random variables;
- A = (a_{i,j})_{1≤i≤n, 1≤j≤p}, with a_{i,j} ≥ 0.

When Z_1, ..., Z_p are independent α-Fréchet, P(Z_j ≤ t) = exp{−σ_j^α t^{−α}}, t > 0:
- {X_i}_{1≤i≤n} is an α-Fréchet process;
- with p sufficiently large, this approximates an arbitrary max-stable process (random field);
- it models spatial extremes arising in meteorology, geology, and environmental applications.
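To make the construction concrete, here is a minimal simulation sketch in Python/NumPy (not the talk's R package maxlinear; the function names are illustrative). It draws independent α-Fréchet factors by inverting the CDF exp{−σ^α t^{−α}} and forms X = A ⊙ Z coordinatewise.

```python
import numpy as np

def simulate_frechet(p, alpha=1.0, sigma=1.0, rng=None):
    """Draw p independent alpha-Frechet variables via the inverse CDF:
    P(Z <= t) = exp(-sigma**alpha * t**(-alpha))  =>  Z = sigma * (-log U)**(-1/alpha)."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=p)
    return sigma * (-np.log(u)) ** (-1.0 / alpha)

def max_linear(A, Z):
    """Max-matrix product: X_i = max_j a_{ij} * Z_j."""
    return (A * Z[None, :]).max(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.uniform(size=(4, 6))          # n = 4 observations, p = 6 factors
    Z = simulate_frechet(p=6, alpha=1.0, rng=rng)
    X = max_linear(A, Z)
    print("Z =", np.round(Z, 3))
    print("X =", np.round(X, 3))
```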

Conditional Sampling for Max-Linear Models

Consider the model (A known): X = A ⊙ Z. Observations: X = x.
- What is the conditional distribution of Z, given X = x?
- Prediction of Y = B ⊙ Z, given X = x.

Remarks:
- Theoretical issue: P(Z ∈ E | X = x) is not well defined, since the conditioning event has probability zero. Rigorous treatment: the regular conditional probability ν(x, E) : R^n_+ × B_{R^p_+} → [0, 1].
- Computational issue: dim(A) = n × p, dim(B) = n_B × p, with n small and n_B, p large.

A Toy Example

Consider X = A ⊙ Z with
A = ( 1 1 1 ; 1 0 0 ),  i.e.  X_1 = Z_1 ∨ Z_2 ∨ Z_3,  X_2 = Z_1.

We have the (in)equality constraints:
Z_1 ≤ min(X_1, X_2) =: ẑ_1,  Z_2 ≤ X_1 =: ẑ_2,  Z_3 ≤ X_1 =: ẑ_3.

Two cases:
(i) (red) If 0 < X_1 = X_2 = a, then ẑ_1 = ẑ_2 = ẑ_3 = a and Z_1 = ẑ_1, Z_2 ≤ ẑ_2, Z_3 ≤ ẑ_3.
(ii) (blue) If 0 < a = X_2 < X_1 = b, then ẑ_1 = a, ẑ_2 = ẑ_3 = b and
    Z_1 = ẑ_1, Z_2 = ẑ_2, Z_3 ≤ ẑ_3,  or  Z_1 = ẑ_1, Z_2 ≤ ẑ_2, Z_3 = ẑ_3.

When Z_j = ẑ_j, we say Z_j hits ẑ_j ⇒ different hitting scenarios.

Intuition of the Conditional Distribution of Z | X = x

Define ẑ_j := min_{1≤i≤n} x_i / a_{i,j} and C(A, x) := {z ∈ R^p_+ : x = A ⊙ z}.
We need a distribution on C(A, x), for each x.

Partition of C(A, x) according to the equality constraints (blue case):
C(A, x) = C_{1,2}(A, x) ∪ C_{1,3}(A, x) ∪ C_{1,2,3}(A, x), with
C_{1,2}(A, x) = {z_1 = ẑ_1, z_2 = ẑ_2, z_3 < ẑ_3},
C_{1,3}(A, x) = {z_1 = ẑ_1, z_2 < ẑ_2, z_3 = ẑ_3},
C_{1,2,3}(A, x) = {z_1 = ẑ_1, z_2 = ẑ_2, z_3 = ẑ_3}.

Hitting scenarios: the sets J ⊆ {1, ..., p} with C_J(A, x) ≠ ∅.
Conditional distribution: a mixture of distributions, indexed by the hitting scenarios
J(A, x) = {J : C_J(A, x) ≠ ∅}.
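For small p these objects are easy to prototype. The following sketch (Python/NumPy; function names are illustrative, not the talk's software) computes ẑ_j = min_i x_i / a_{i,j}, the 0/1 matrix recording the equalities a_{i,j} ẑ_j = x_i (the "hitting matrix" introduced later in the talk), and enumerates J(A, x) and J_r(A, x) by brute force, reproducing the blue case of the toy example.

```python
import itertools
import numpy as np

def hat_z(A, x):
    """hat z_j = min_i x_i / a_{ij}, taken over rows with a_{ij} > 0."""
    with np.errstate(divide="ignore"):
        ratios = np.where(A > 0, x[:, None] / A, np.inf)
    return ratios.min(axis=0)

def hitting_matrix(A, x, tol=1e-10):
    """h_{ij} = 1 iff a_{ij} * hat z_j == x_i (column j can 'hit' row i)."""
    zh = hat_z(A, x)
    return (np.abs(A * zh[None, :] - x[:, None]) <= tol * np.maximum(x[:, None], 1.0)).astype(int)

def hitting_scenarios(H):
    """All J with C_J(A, x) nonempty, i.e. the columns in J cover every row of H."""
    n, p = H.shape
    scen = []
    for size in range(1, p + 1):
        for J in itertools.combinations(range(p), size):
            if H[:, list(J)].max(axis=1).min() == 1:      # every row covered
                scen.append(set(J))
    r = min(len(J) for J in scen)
    return scen, [J for J in scen if len(J) == r]

if __name__ == "__main__":
    # toy example, blue case: X1 = Z1 v Z2 v Z3 = b, X2 = Z1 = a, with a < b
    A = np.array([[1.0, 1.0, 1.0], [1.0, 0.0, 0.0]])
    x = np.array([2.0, 1.0])                              # b = 2, a = 1
    H = hitting_matrix(A, x)
    all_J, minimal_J = hitting_scenarios(H)
    print("H =\n", H)                   # expected [[0, 1, 1], [1, 0, 0]]
    print("J(A, x)  =", all_J)          # {0,1}, {0,2}, {0,1,2}  (0-based indices)
    print("J_r(A,x) =", minimal_J)      # {0,1}, {0,2}
```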

Intuition of the Conditional Distribution of Z | X = x

Define ν_J(x, ·) for each hitting scenario J: the coordinates in J are fixed at ẑ_j, the others follow their laws truncated below ẑ_j,

ν_J(x, E) := ∏_{j∈J} δ_{ẑ_j}(π_j(E)) · ∏_{j∈J^c} P{Z_j ∈ π_j(E) | Z_j < ẑ_j},

where π_j denotes the projection onto the j-th coordinate.

It suffices to concentrate on the relevant hitting scenarios
J_r(A, x) = {J ∈ J(A, x) : |J| = r},  with  r = min{|J| : J ∈ J(A, x)}.
Clearly, C_{1,2,3} is negligible compared to C_{1,2} and C_{1,3} (C_{1,2,3} has lower dimension).
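Sampling from a single ν_J(x, ·) only requires setting Z_j = ẑ_j for j ∈ J and drawing each remaining coordinate from its law truncated to (0, ẑ_k). For α-Fréchet factors (as in the models above) the truncation has an explicit inverse CDF; a minimal sketch with illustrative names:

```python
import numpy as np

def truncated_frechet(z_max, alpha=1.0, sigma=1.0, rng=None):
    """Sample Z | Z < z_max when P(Z <= t) = exp(-sigma**alpha * t**(-alpha)).
    Inverse CDF of the truncated law: F(z) = U * F(z_max)  =>
    z = (sigma**alpha / (sigma**alpha * z_max**(-alpha) - log U)) ** (1/alpha)."""
    rng = np.random.default_rng(rng)
    u = rng.uniform()
    s = sigma ** alpha
    return (s / (s * z_max ** (-alpha) - np.log(u))) ** (1.0 / alpha)

def sample_nu_J(J, z_hat, alpha=1.0, sigma=1.0, rng=None):
    """One draw from nu_J(x, .): coordinates in J hit z_hat, the rest stay below z_hat."""
    rng = np.random.default_rng(rng)
    z = np.empty_like(z_hat)
    for j, zj in enumerate(z_hat):
        z[j] = zj if j in J else truncated_frechet(zj, alpha, sigma, rng)
    return z

if __name__ == "__main__":
    z_hat = np.array([1.0, 2.0, 2.0])            # blue case of the toy example
    print(sample_nu_J({0, 1}, z_hat, rng=1))     # Z1 = 1, Z2 = 2, Z3 < 2
    print(sample_nu_J({0, 2}, z_hat, rng=2))     # Z1 = 1, Z3 = 2, Z2 < 2
```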

A Toy Example

Consider X = A ⊙ Z with A = ( 1 1 1 ; 1 0 0 ), i.e. X_1 = Z_1 ∨ Z_2 ∨ Z_3, X_2 = Z_1.
In this case, ẑ_1 = min(X_1, X_2), ẑ_2 = ẑ_3 = X_1.

Two cases:
(i) (red) If X_1 = X_2 = a with 0 < a, then ẑ_1 = ẑ_2 = ẑ_3 = a and Z_1 = ẑ_1, Z_2 ≤ ẑ_2, Z_3 ≤ ẑ_3.
    J(A, X) = {{1}, {1,2}, {1,3}, {1,2,3}} and J_r(A, X) = {{1}}.
(ii) (blue) If 0 < a = X_2 < X_1 = b, then ẑ_1 = a, ẑ_2 = ẑ_3 = b and
    Z_1 = ẑ_1, Z_2 = ẑ_2, Z_3 ≤ ẑ_3, or Z_1 = ẑ_1, Z_2 ≤ ẑ_2, Z_3 = ẑ_3.
    J(A, X) = {{1,2}, {1,3}, {1,2,3}} and J_r(A, X) = {{1,2}, {1,3}}.

Different X ⇒ different hitting scenarios ⇒ different hitting distributions.

Conditional Distribution for Max-Linear Models

Theorem (W. and Stoev). The regular conditional probability ν(x, E) of Z w.r.t. X equals

ν(x, E) = ∑_{J ∈ J_r(A,x)} p_J(A, x) ν_J(x, E),  E ∈ B_{R^p_+},  for P_X-a.a. x ∈ A ⊙ (R^p_+),

where
p_J(A, x) = w_J / ∑_{K ∈ J_r(A,x)} w_K,  w_J := ∏_{j∈J} ẑ_j f_{Z_j}(ẑ_j) ∏_{j∈J^c} F_{Z_j}(ẑ_j),  so that ∑_{J ∈ J_r(A,x)} p_J = 1.

The proof works directly with the definition of regular conditional probability.

Algorithm I for conditional sampling of Z | X = x:
(1) compute ẑ_j, J(A, x), r(J(A, x)) and p_J(A, x), and
(2) sample Z ~ ν(x, ·).

Not the end of the story! We have not yet discussed the identification of J_r(A, x), which is closely related to the NP-hard set covering problem.
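A brute-force version of Algorithm I for small p, assuming standard 1-Fréchet factors (σ = α = 1, a placeholder choice): this is only an illustrative sketch in Python/NumPy, not the talk's maxlinear package, and it enumerates J_r(A, x) naively rather than via the set-covering machinery discussed next.

```python
import itertools
import numpy as np

ALPHA, SIGMA = 1.0, 1.0            # standard 1-Frechet factors Z_j (placeholder choice)

def F(z):                          # CDF: P(Z <= z) = exp(-sigma^alpha * z^-alpha)
    return np.exp(-SIGMA**ALPHA * z ** (-ALPHA))

def zf(z):                         # z * f(z), with f the Frechet density
    return ALPHA * SIGMA**ALPHA * z ** (-ALPHA) * F(z)

def conditional_draw(A, x, rng=None):
    """One draw of Z | X = x for X = A (max-matrix) Z: brute-force Algorithm I."""
    rng = np.random.default_rng(rng)
    n, p = A.shape
    with np.errstate(divide="ignore"):
        z_hat = np.where(A > 0, x[:, None] / A, np.inf).min(axis=0)    # hat z_j
    H = np.isclose(A * z_hat[None, :], x[:, None])                     # hitting matrix
    # minimal hitting scenarios J_r(A, x): smallest column sets covering every row of H
    covers = [set(J) for k in range(1, p + 1)
              for J in itertools.combinations(range(p), k)
              if H[:, list(J)].any(axis=1).all()]
    r = min(map(len, covers))
    J_r = [J for J in covers if len(J) == r]
    # mixture weights p_J proportional to prod_{j in J} z_hat_j f(z_hat_j) * prod_{j not in J} F(z_hat_j)
    w = np.array([np.prod([zf(z_hat[j]) if j in J else F(z_hat[j]) for j in range(p)])
                  for J in J_r])
    J = J_r[rng.choice(len(J_r), p=w / w.sum())]
    # coordinates in J hit z_hat; the others are Frechet draws truncated below z_hat (inverse CDF)
    u = rng.uniform(size=p)
    s = SIGMA**ALPHA
    trunc = (s / (s * z_hat ** (-ALPHA) - np.log(u))) ** (1.0 / ALPHA)
    return np.where([j in J for j in range(p)], z_hat, trunc)

if __name__ == "__main__":
    A = np.array([[1.0, 1.0, 1.0], [1.0, 0.0, 0.0]])      # toy example from the talk
    x = np.array([2.0, 1.0])                              # blue case: X2 = 1 < X1 = 2
    for _ in range(3):
        print(np.round(conditional_draw(A, x), 3))
```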

Set Covering Problem

Let H = (h_{i,j})_{n×p} with h_{i,j} ∈ {0, 1}. Write [m] := {1, 2, ..., m}, m ∈ N. The column j ∈ [p] covers the row i ∈ [n] if h_{i,j} = 1. The goal is to cover all rows with the fewest columns. This is equivalent to solving

min_{δ_j ∈ {0,1}} ∑_{j∈[p]} δ_j,  subject to  ∑_{j∈[p]} h_{i,j} δ_j ≥ 1,  i ∈ [n].   (1)

An example:
H = ( 1 1 0 ; 1 0 1 ; 0 1 1 ).
The minimum-cost coverings are the column sets {1, 2}, {1, 3}, and {2, 3}.
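For instances as small as this example, problem (1) can be solved by brute force over δ ∈ {0,1}^p (the NP-hardness only bites for large p). A minimal sketch, with illustrative names, that recovers the three minimum covers:

```python
import itertools
import numpy as np

def min_covers(H):
    """Brute-force solutions of (1): 0/1 vectors delta with H @ delta >= 1 rowwise, of minimum cost."""
    p = H.shape[1]
    feasible = [d for d in itertools.product([0, 1], repeat=p)
                if (H @ np.array(d) >= 1).all()]
    best = min(sum(d) for d in feasible)
    return sorted(tuple(j + 1 for j, dj in enumerate(d) if dj)     # 1-based column labels
                  for d in feasible if sum(d) == best)

if __name__ == "__main__":
    H = np.array([[1, 1, 0],
                  [1, 0, 1],
                  [0, 1, 1]])
    print(min_covers(H))    # [(1, 2), (1, 3), (2, 3)]
```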

Identification of J_r(A, x) and the Set Covering Problem

J ∈ J_r(A, x)  ⟺  (1_{j∈J})_{j∈[p]} is a solution of (1) with h_{i,j} = 1_{a_{i,j} ẑ_j = x_i}.

A toy example: consider X = A ⊙ Z with A = ( 1 1 1 ; 1 0 0 ), i.e. X_1 = Z_1 ∨ Z_2 ∨ Z_3, X_2 = Z_1. Then, in the blue case 0 < a = X_2 < X_1 = b,

H = ( 0 1 1 ; 1 0 0 ).

The minimum-cost coverings are the column sets {1, 2} and {1, 3}.

Write J_r(H) = J_r(A, x), with H referred to as the hitting matrix.

The set covering problem is NP-hard.

Simple Cases of the Set Covering Problem

Two types of H (7 × 7):

H_1 = ( 0 0 0 0 1 1 0 ;
        0 0 0 0 1 0 1 ;
        0 0 0 0 0 1 1 ;
        1 1 0 0 0 0 0 ;
        1 0 1 0 0 0 0 ;
        0 1 0 1 0 0 0 ;
        0 0 1 1 0 0 0 )

It takes a while to solve for r(J(H_1)) and J_r(H_1).

H_2 = ( 0 0 0 0 1 1 0 ;
        0 0 0 0 1 0 1 ;
        0 0 0 0 1 1 1 ;
        1 1 0 0 0 0 0 ;
        1 0 1 0 0 0 0 ;
        1 1 0 1 0 0 0 ;
        1 0 1 1 0 0 0 )

Clearly, columns 1 and 5 are dominating. Therefore, r(J(H_2)) = 2 and J_r(H_2) = {{1, 5}}.

Lemma (W. and Stoev). W.p.1, H has a nice structure (example in the auxiliary slides).

Factorization of the Regular Conditional Probability Formula

Theorem (W. and Stoev). With probability one,

ν(X, E) = ∏_{s=1}^{r} ν^(s)(X, E),  with  {Z_j}_{j∈Ĵ^(s)} | X = x  ~  ν^(s)(x, ·).

Blocking structure: ⋃_{s=1}^{r} Ĵ^(s) = {1, ..., p} and ⋃_{s=1}^{r} I_s = {1, ..., n}.

Conditional independence: restricted to {X = x},

X_i = ⋁_{j=1}^{p} a_{i,j} Z_j = ⋁_{j∈Ĵ^(s)} a_{i,j} Z_j,  i ∈ I_s.

Algorithm II: sample the blocks independently,
{Z_j}_{j∈Ĵ^(1)} ~ ν^(1)(x, ·), ..., {Z_j}_{j∈Ĵ^(r)} ~ ν^(r)(x, ·),
and then Z =_d ν(x, ·) = ∑_{J∈J_r} p_J ν_J(x, ·).
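The blocks themselves do not require solving any covering problem: two rows belong to the same block exactly when they are linked through shared columns of H, so the blocks are connected components of a bipartite row/column graph. A sketch (Python/NumPy, illustrative; a union-find or graph library would also do) that computes I_s, J̄^(s) and Ĵ^(s) by flood fill, using the matrix H_2 from the slide above:

```python
import numpy as np

def blocks(H):
    """Split rows of the hitting matrix into blocks I_1, ..., I_r and return, per block,
    the columns hitting every row (J_bar) and the columns hitting some row (J_hat)."""
    n, p = H.shape
    labels = -np.ones(n, dtype=int)
    r = 0
    for start in range(n):
        if labels[start] >= 0:
            continue
        stack, labels[start] = [start], r
        while stack:                                    # flood fill over shared columns
            i = stack.pop()
            cols = np.flatnonzero(H[i])
            for k in np.flatnonzero(H[:, cols].any(axis=1)):
                if labels[k] < 0:
                    labels[k] = r
                    stack.append(k)
        r += 1
    out = []
    for s in range(r):
        I_s = np.flatnonzero(labels == s)
        J_bar = np.flatnonzero(H[I_s].all(axis=0))
        J_hat = np.flatnonzero(H[I_s].any(axis=0))
        out.append((I_s, J_bar, J_hat))
    return out

if __name__ == "__main__":
    H = np.array([[0, 0, 0, 0, 1, 1, 0],
                  [0, 0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 0, 1, 1, 1],
                  [1, 1, 0, 0, 0, 0, 0],
                  [1, 0, 1, 0, 0, 0, 0],
                  [1, 1, 0, 1, 0, 0, 0],
                  [1, 0, 1, 1, 0, 0, 0]])
    for I_s, J_bar, J_hat in blocks(H):
        print("I =", I_s + 1, " J_bar =", J_bar + 1, " J_hat =", J_hat + 1)
    # expected: I = [1 2 3], J_bar = [5], J_hat = [5 6 7]
    #           I = [4 5 6 7], J_bar = [1], J_hat = [1 2 3 4]
```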

Computational Efficiency

Table: time (in seconds) for identifying the blocking structure for Algorithm II, for several values of n and p (p up to the thousands); even the largest cases take on the order of seconds.

Comparison of the two formulas:

∑_{J∈J_r(A,x)} p_J ν_J(x, ·)  =  ν(x, ·)  =  ∏_{s=1}^{r} [ ∑_{j∈J̄^(s)} w_j^(s) ν_j^(s)(x, ·) ],  each bracket being ν^(s)(x, ·).

Example: suppose that, given X = x, there are r blocks, each with the same number m of candidate hitting coordinates. Then using ν(x, ·) directly requires storing |J_r(A, x)| = m^r weights, while applying {ν^(s)(x, ·)}_{s∈[r]} requires only r·m of the w_j^(s).

Applications

- Simulations based on the de Haan-Pereira model (de Haan and Pereira 2006).
- Computational tools for prediction: real-data analysis of rainfall maxima at Bourgogne, France.

De Haan-Pereira Model

A stationary max-stable random field model (de Haan and Pereira 2006):

X_t = ∫^e_{R^2} φ(t − u) M_α(du),  t = (t_1, t_2) ∈ R^2  (an extremal integral),

with the Gaussian-density shape function

φ(t_1, t_2) := (β_1 β_2) / (2π √(1 − ρ^2)) · exp{ −[β_1^2 t_1^2 − 2ρ β_1 β_2 t_1 t_2 + β_2^2 t_2^2] / (2(1 − ρ^2)) }.

Consistent estimators are known for ρ, β_1, β_2.

A discretized version:

X_t = h^{2/α} ⋁_{−q ≤ j_1, j_2 ≤ q−1} φ(t − u_{j_1 j_2}) Z_{j_1 j_2},

with u_{j_1 j_2} = ((j_1 + 1/2)h, (j_2 + 1/2)h) and the Z_{j_1 j_2} i.i.d. 1-Fréchet.
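The discretized field is itself a max-linear model: each grid cell u_{j_1 j_2} contributes one column with entries h^{2/α} φ(t − u_{j_1 j_2}) across the sites t. A small sketch (Python/NumPy; the grid size, spacing and parameter values are placeholders, and φ is taken as written above) that builds this coefficient matrix and draws one unconditional realization:

```python
import numpy as np

def phi(t1, t2, rho=0.0, beta1=1.0, beta2=1.0):
    """Gaussian-density storm shape of the de Haan-Pereira model."""
    c = beta1 * beta2 / (2 * np.pi * np.sqrt(1 - rho**2))
    q = beta1**2 * t1**2 - 2 * rho * beta1 * beta2 * t1 * t2 + beta2**2 * t2**2
    return c * np.exp(-q / (2 * (1 - rho**2)))

def discretized_model(sites, q=10, h=0.2, alpha=1.0, **kw):
    """Coefficient matrix A of the discretized field:
    X_t = max_{j1,j2} h^(2/alpha) * phi(t - u_{j1 j2}) * Z_{j1 j2}."""
    centers = np.array([((j1 + 0.5) * h, (j2 + 0.5) * h)
                        for j1 in range(-q, q) for j2 in range(-q, q)])
    d = sites[:, None, :] - centers[None, :, :]               # (n_sites, n_cells, 2)
    return h ** (2 / alpha) * phi(d[..., 0], d[..., 1], **kw)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sites = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 1.0]])    # prediction sites t
    A = discretized_model(sites, q=10, h=0.2, alpha=1.0, rho=0.0, beta1=1.0, beta2=1.0)
    Z = (-np.log(rng.uniform(size=A.shape[1]))) ** (-1.0)     # i.i.d. standard 1-Frechet
    X = (A * Z[None, :]).max(axis=1)                          # one realization of the field
    print(A.shape, np.round(X, 3))
```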

Simulations

Figure: Conditional samplings from the de Haan-Pereira model. Parameters: ρ = 0, β_1 = 1, β_2 = 1.

Simulations

Figure: 95% quantile of the conditional marginal deviation. Parameters: ρ = 0., β_1 = 1., β_2 = 0.7.

Review

- Obtained an explicit formula of the regular conditional probability for max-linear models (including a large class of max-stable random fields).
- Investigated the conditional independence structure.
- Developed efficient software (R package maxlinear) for large-scale conditional sampling.
- Potential applications in prediction for extremal phenomena, e.g., environmental and financial problems.

Thank you.

Website: http://www.stat.lsa.umich.edu/~yizwang/software/maxlinear/

Auxiliary Results

Regular Conditional Probability

The regular conditional probability ν of Z given σ(X) is a function

ν : R^n_+ × B_{R^p_+} → [0, 1]

such that
(i) ν(x, ·) is a probability measure, for all x ∈ R^n_+;
(ii) the function ν(·, E) is measurable, for all Borel sets E ∈ B_{R^p_+};
(iii) for all E ∈ B_{R^p_+} and D ∈ B_{R^n_+} (writing P_X(·) := P(X ∈ ·)),

P(Z ∈ E, X ∈ D) = ∫_D ν(x, E) P_X(dx).   (2)

We will first guess a formula for ν and then prove (2).

A Heuristic Proof

Consider a neighborhood of C_J(A, x) (a set of P-measure 0):

C_J(A, x) = { z ∈ R^p_+ : z_j = ẑ_j, j ∈ J, z_k < ẑ_k, k ∈ J^c },
C^ε_J(A, x) := { z ∈ R^p_+ : z_j ∈ [ẑ_j(1 − ε), ẑ_j(1 + ε)], j ∈ J, z_k < ẑ_k(1 − ε), k ∈ J^c },

for small enough ε > 0, and let C^ε(A, x) := ⋃_{J ∈ J(A,x)} C^ε_J(A, x).
The sets A ⊙ (C^ε(A, x)) shrink to the point x as ε → 0.

Proposition (W. and Stoev). For all x ∈ A ⊙ (R^p_+), we have, as ε → 0,

P(Z ∈ E | Z ∈ C^ε(A, x)) → ν(x, E),  E ∈ B_{R^p_+}.

Remarks:
- Proved by a Taylor expansion.
- The choice of C^ε_J(A, x) is delicate.

Nice Structure of the Hitting Matrix H

Blocks of the matrix H: write i_1 ↔_j i_2 if h_{i_1,j} = h_{i_2,j} = 1. Define an equivalence relation on [n]:

i_1 ∼ i_2  if  i_1 = ĩ_0 ↔_{j_1} ĩ_1 ↔_{j_2} ··· ↔_{j_m} ĩ_m = i_2.   (3)

r blocks: the equivalence relation (3) induces [n] = ⋃_{s=1}^{r} I_s. Further, define

J̄^(s) := {j ∈ [p] : h_{i,j} = 1 for all i ∈ I_s},
Ĵ^(s) := {j ∈ [p] : h_{i,j} = 1 for some i ∈ I_s}.

Example:

H = ( 0 0 0 0 1 1 0 ;
      0 0 0 0 1 0 1 ;
      0 0 0 0 1 1 1 ;
      1 1 0 0 0 0 0 ;
      1 0 1 0 0 0 0 ;
      1 1 0 1 0 0 0 ;
      1 0 1 1 0 0 0 )

Two blocks:
1. I_1 = {1, 2, 3}, J̄^(1) = {5}, Ĵ^(1) = {5, 6, 7};
2. I_2 = {4, 5, 6, 7}, J̄^(2) = {1}, Ĵ^(2) = {1, 2, 3, 4}.

Lemma (W. and Stoev). W.p.1, the hitting matrix H of the max-linear model X = A ⊙ Z has the nice structure: (i) r = r(J(H)) = r(J(A, x)), and (ii) J̄^(s) is nonempty for every s ∈ [r].

When Do We Have a Bad H? Another Toy Example

Consider X = A ⊙ Z with a 3 × 3 matrix A. Different configurations of Z give different hitting matrices H:

- Z_1 > Z_2, Z_2 > Z_3  ⟹  an H with the nice block structure;
- Z_1 < Z_2 < 2Z_1, Z_2 > Z_3  ⟹  another H with the nice block structure (different blocks);
- Z_1 = Z_2, Z_2 > Z_3  ⟹  a "bad" H: the exact tie makes extra equalities hold, so extra 1's appear in H and the block structure of the lemma fails.

Such ties occur with probability zero, which is why the lemma holds only w.p.1.

Factorization of the Regular Conditional Probability Formula

Theorem (W. and Stoev). With probability one, we have:

(i) for all J ⊆ [p], J ∈ J_r(A, A ⊙ Z) if and only if J can be written as J = {j_1, ..., j_r} with j_s ∈ J̄^(s), s ∈ [r];

(ii) for the regular conditional probability ν(x, E),

ν(X, E) = ∏_{s=1}^{r} ν^(s)(X, E),  with  ν^(s)(X, E) = [ ∑_{j∈J̄^(s)} w_j^(s) ν_j^(s)(X, E) ] / [ ∑_{j∈J̄^(s)} w_j^(s) ],

where, for all j ∈ J̄^(s),

w_j^(s) = ẑ_j f_{Z_j}(ẑ_j) ∏_{k∈Ĵ^(s)\{j}} F_{Z_k}(ẑ_k),
ν_j^(s)(x, E) = δ_{ẑ_j}(π_j(E)) ∏_{k∈Ĵ^(s)\{j}} P(Z_k ∈ π_k(E) | Z_k < ẑ_k).