Max stable Processes & Random Fields: Representations, Models, and Prediction

Max-stable Processes & Random Fields: Representations, Models, and Prediction. Stilian Stoev, University of Michigan, Ann Arbor. March 2, 2011. Based on joint works with Yizao Wang and Murad S. Taqqu.

1 Preliminaries
2 Representations
3 Further Examples
4 Prediction for Max-stable Random Fields
5 Future work
6 References
7 Appendix

Preliminaries

Max-stable processes: Definition and Motivation

A stochastic process (field) $X = \{X_t\}_{t \in T}$ is max-stable if for all $n \in \mathbb{N}$,
$$\{X_t\}_{t\in T} \stackrel{fdd}{=} \Big\{ a_n \max_{1\le i\le n} X_t^{(i)} + b_n \Big\}_{t\in T},$$
for some $a_n > 0$, $b_n \in \mathbb{R}$, where $X^{(i)}$, $i = 1, 2, \dots$, are independent copies of $X$.

Fact: If for i.i.d. processes $\xi^{(i)} = \{\xi_t^{(i)}\}_{t\in T}$, $i = 1, 2, \dots$, we have
$$a_n \bigvee_{i=1}^n \xi_t^{(i)} + b_n \stackrel{fdd}{\longrightarrow} X_t, \quad \text{as } n \to \infty,$$
for some $a_n > 0$, $b_n \in \mathbb{R}$, then the limit $X = \{X_t\}_{t\in T}$ is max-stable.

Extreme Value Distributions

The following is the counterpart to the CLT for maxima:

Theorem (Fisher-Tippett (1928) & Gnedenko (1943)). Suppose that $X_n$, $n = 1, 2, \dots$ are i.i.d. random variables and
$$a_n \max_{1\le i\le n} X_i + b_n \stackrel{d}{\longrightarrow} Z, \quad \text{as } n \to \infty,$$
for some $a_n > 0$, $b_n \in \mathbb{R}$. Then $G(x) = P\{Z \le x\}$ is one of three types of extreme value distributions (EVD):
$$G(x) = \begin{cases} e^{-x^{-\alpha}}, & x > 0 \ (\alpha > 0) & \text{(Fréchet)} \\ e^{-e^{-x}}, & x \in \mathbb{R} & \text{(Gumbel)} \\ e^{-(-x)^{\alpha}}, & x < 0 \ (\alpha > 0) & \text{(negative Fréchet)} \end{cases}$$
The type of the limit is determined by the tails of the $X_i$'s. Max-stable processes have extreme value marginals.
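As a quick numerical illustration of the Fréchet case of this theorem (a sketch, not part of the talk): for Pareto-type tails $P(X > x) = x^{-\alpha}$, the maxima normalized by $a_n = n^{-1/\alpha}$, $b_n = 0$ are approximately standard α-Fréchet. The parameter values and the evaluation point below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n, reps = 3.0, 10_000, 2_000

# Pareto tails P(X > x) = x^{-alpha} for x >= 1; numpy's pareto is shifted, so add 1.
samples = rng.pareto(alpha, size=(reps, n)) + 1.0

# With a_n = n^{-1/alpha}, b_n = 0, the normalized maxima follow the Frechet law.
maxima = samples.max(axis=1) / n ** (1.0 / alpha)

x = 1.5
empirical = (maxima <= x).mean()
frechet_cdf = np.exp(-x ** (-alpha))   # G(x) = exp(-x^{-alpha})
print(empirical, frechet_cdf)          # the two should be close
```

The agreement improves as $n$ grows, since the finite-$n$ c.d.f. is $(1 - x^{-\alpha}/n)^n \to e^{-x^{-\alpha}}$.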

α-Fréchet max-stable processes

$Z$ is said to be α-Fréchet if $P(Z \le x) = \exp\{-\sigma^\alpha / x^\alpha\}$, $x > 0$, with some scale parameter $\sigma > 0$.

Heavy tails: $P(Z > x) = 1 - e^{-\sigma^\alpha/x^\alpha} \sim \sigma^\alpha / x^\alpha$, as $x \to \infty$.

The process $X = \{X_t\}_{t\in T}$ is called α-Fréchet if the max-linear combination
$$\bigvee_{i=1}^n a_i X_{t_i} \quad \text{is α-Fréchet, for all } a_i > 0,\ t_i \in T,\ 1 \le i \le n.$$
Note: recall the definition of Gaussian processes (via linear combinations).

Fact: A process $X$ with α-Fréchet marginals is max-stable if and only if it is an α-Fréchet process. Thus, no generality is lost if one focuses on the class of α-Fréchet max-stable processes.
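A hedged aside (not in the slides): α-Fréchet variables are easy to sample by inverting the c.d.f., $Z = \sigma(-\log U)^{-1/\alpha}$ for $U \sim$ Uniform(0,1), and the heavy-tail asymptotic above can then be checked empirically. The parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def rfrechet(size, alpha, sigma, rng):
    # Inverse-CDF sampling: if U ~ Uniform(0,1), then sigma * (-log U)^(-1/alpha)
    # has CDF exp(-sigma^alpha / x^alpha), i.e. it is alpha-Frechet with scale sigma.
    u = rng.uniform(size=size)
    return sigma * (-np.log(u)) ** (-1.0 / alpha)

alpha, sigma = 2.0, 1.5
z = rfrechet(200_000, alpha, sigma, rng)

# Heavy-tail check: P(Z > x) ~ sigma^alpha / x^alpha as x grows.
x = 20.0
tail_emp = (z > x).mean()
tail_asym = sigma ** alpha / x ** alpha
print(tail_emp, tail_asym)
```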

Asymptotics in a Picture: The Chocolate Hills. Where? Bohol Island, Philippines.

Representations

The de Haan spectral representation

Theorem (de Haan (1984)). Let $X = \{X_t\}_{t\in\mathbb{R}}$ be a stochastically continuous α-Fréchet max-stable process. Then there exist functions $f_t(u) \in L_+^\alpha([0,1], du)$ such that
$$\{X_t\}_{t\in\mathbb{R}} \stackrel{f.d.d.}{=} \Big\{ \bigvee_{i\ge 1} \frac{f_t(U_i)}{\Gamma_i^{1/\alpha}} \Big\}_{t\in\mathbb{R}},$$
where $0 < \Gamma_1 < \Gamma_2 < \cdots$ are the points of a standard Poisson point process on $(0,\infty)$ and the $U_i$'s are independent Uniform(0,1) random variables, independent of the $\Gamma_i$'s.
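The theorem suggests a direct simulation scheme: truncate the Poisson expansion at finitely many points. A minimal sketch follows, with a hypothetical spectral function $f_t$ chosen only for illustration; since the $\Gamma_i$ increase, late terms rarely contribute, though truncation does introduce a small bias.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, n_points = 1.0, 5_000

# Points 0 < Gamma_1 < Gamma_2 < ... of a standard Poisson process on (0, inf):
gammas = np.cumsum(rng.exponential(size=n_points))
u = rng.uniform(size=n_points)              # U_i ~ Uniform(0,1), independent of the Gamma_i

# A hypothetical nonnegative spectral function f_t(u) in L^alpha_+([0,1], du):
t_grid = np.linspace(0.0, 1.0, 101)
f = np.exp(-10.0 * np.abs(t_grid[:, None] - u[None, :]))   # shape (len(t_grid), n_points)

# Truncated de Haan representation: X_t = sup_i f_t(U_i) / Gamma_i^{1/alpha}.
X = (f / gammas[None, :] ** (1.0 / alpha)).max(axis=1)
print(X.shape)
```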

Extremal integral representations

Theorem (S. & Taqqu (2006); S. (2008)). Let $X = \{X_t\}_{t\in T}$ be an α-Fréchet max-stable process (separable in probability). Then
$$\{X_t\}_{t\in T} \stackrel{fdd}{=} \Big\{ \int_S^{\!e} f_t(u)\, M_\alpha(du) \Big\}_{t\in T}$$
for some deterministic $f_t(u) \in L_+^\alpha(S, \mu)$. Here $M_\alpha$ is an α-Fréchet random sup measure.

The sup measure: Given a measure space $(S, \mathcal{S}, \mu)$, $M_\alpha$ is said to be an α-Fréchet sup measure with control measure $\mu$ if:
- independently scattered: the $M_\alpha(A_i)$'s are independent, for disjoint $A_i$'s.
- α-Fréchet: $P(M_\alpha(A) \le x) = \exp\{-\mu(A) x^{-\alpha}\}$, $x > 0$.
- σ-sup-additive: for disjoint measurable $A_i$'s, $M_\alpha(\cup_{i=1}^\infty A_i) = \sup_{i\in\mathbb{N}} M_\alpha(A_i)$, almost surely.

Extremal Integrals

For a simple non-negative function $f(u) := \bigvee_{i=1}^n a_i 1_{A_i}(u)$, with disjoint $A_i \in \mathcal{S}$, we set
$$\int_S^{\!e} f(u)\, M_\alpha(du) \stackrel{def}{=} \bigvee_{i=1}^n a_i M_\alpha(A_i).$$

Properties:
- α-Fréchet: for all $x > 0$, we have
$$P\Big( \int_S^{\!e} f\, dM_\alpha \le x \Big) = \exp\Big\{ -\sum_{i=1}^n a_i^\alpha \mu(A_i)\, x^{-\alpha} \Big\} = \exp\big\{ -\|f\|_{L^\alpha(\mu)}^\alpha / x^\alpha \big\}.$$
- max-linearity: for all $a, b > 0$ and $f, g \in L_+^\alpha(\mu)$,
$$\int_S^{\!e} (af \vee bg)\, dM_\alpha = a \int_S^{\!e} f\, dM_\alpha \,\vee\, b \int_S^{\!e} g\, dM_\alpha.$$
- convergence: $\int_S^{\!e} f_n\, dM_\alpha \stackrel{P}{\to} \xi$ if and only if $\|f_n - f\|_\alpha \to 0$ for some $f \in L_+^\alpha(\mu)$. Thus, the extremal integral extends by continuity in probability to all $f \in L_+^\alpha(\mu)$, with $\xi := \int_S^{\!e} f\, dM_\alpha$.

Back to max-stable processes

For any collection of non-negative $f_t \in L_+^\alpha(\mu)$, the process
$$X_t := \int_S^{\!e} f_t\, dM_\alpha, \quad (t \in T)$$
is α-Fréchet (max-stable):
$$P(X_{t_i} \le x_i,\ 1 \le i \le n) = \exp\Big\{ -\int_S \max_{1\le i\le n} \Big( \frac{f_{t_i}(u)}{x_i} \Big)^\alpha \mu(du) \Big\}, \quad (x_i > 0).$$
This follows from the max-linearity of the extremal integral.

Examples: stationary α-Fréchet processes/fields.
- (moving maxima) With $(S, \mathcal{S}, \mu) \equiv (\mathbb{R}, \mathcal{B}, \text{Leb})$,
$$X_t := \int_{\mathbb{R}}^{\!e} f(t - x)\, M_\alpha(dx), \quad t \in \mathbb{R}.$$
- (mixed moving maxima) With $(S, \mathcal{S}, \mu) \equiv (\mathbb{R}^d \times V, \mathcal{B}_{\mathbb{R}^d \times V}, \text{Leb} \otimes \nu)$,
$$X_t := \int_{\mathbb{R}^d \times V}^{\!e} f(t - x, v)\, M_\alpha(dx, dv), \quad t \in \mathbb{R}^d.$$
Known as the M3 process in Zhang & Smith (2004).

A Basic Example: The Extremal Process

Take $(S, \mathcal{S}, \mu) \equiv (\mathbb{R}, \mathcal{B}, \text{Leb})$ and let
$$X_t = \int_{[0,t)}^{\!e} dM_\alpha \equiv M_\alpha([0,t)), \quad t > 0.$$

For all $t > 0$, we have $X_t = M_\alpha([0,t)) \stackrel{d}{=} t^{1/\alpha} Z$, where $Z$ is standard α-Fréchet: $P(Z \le x) = e^{-x^{-\alpha}}$, $x > 0$.

The process $\{X_t\}_{t\ge 0}$ has independent max-increments:
$$(X_{t_1}, X_{t_2}, \dots, X_{t_n}) \stackrel{d}{=} \big( t_1^{1/\alpha} Z_1,\ t_1^{1/\alpha} Z_1 \vee (t_2 - t_1)^{1/\alpha} Z_2,\ \dots,\ t_1^{1/\alpha} Z_1 \vee \cdots \vee (t_n - t_{n-1})^{1/\alpha} Z_n \big),$$
where the $Z_i$'s are i.i.d. standard α-Fréchet. Can be viewed as an analog of Brownian motion.
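Because of the independent max-increments, the extremal process can be simulated on a grid by accumulating running maxima of i.i.d. scaled Fréchet increments; a small sketch (grid and parameters are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, n_steps, T = 3.0, 1_000, 1000.0
dt = T / n_steps

# On a cell [s, s + dt), M_alpha([s, s + dt)) =_d dt^{1/alpha} * Z with Z standard
# alpha-Frechet, independently across disjoint cells.
z = (-np.log(rng.uniform(size=n_steps))) ** (-1.0 / alpha)
increments = dt ** (1.0 / alpha) * z

# X_t = M_alpha([0, t)) is then the running maximum of the cell values:
path = np.maximum.accumulate(increments)
print(path.shape)
```

By construction each path is non-decreasing, jumping only where a new record increment arrives, which matches the staircase look of the figure below.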

α Fréchet Extremal Processes: Sample Paths 16 Five paths of the α Frechet Extremal Process: α = 3 14 12 10 X(t) 8 6 4 2 0 0 200 400 600 800 1000 t

Further Examples

Smith Random Fields

Let $\Sigma$ be a positive definite $d \times d$ symmetric matrix and set
$$\varphi_\Sigma(x) := \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\Big\{ -\frac{x^T \Sigma^{-1} x}{2} \Big\}, \quad x \in \mathbb{R}^d,$$
for the Normal density with covariance $\Sigma$. Define the random field
$$X_t := \bigvee_{i\ge 1} \frac{\varphi_\Sigma(t - U_i)}{\Gamma_i^{1/\alpha}}, \quad t \in \mathbb{R}^d,$$
where $\{(\Gamma_i, U_i)\}$ is a Poisson point process on $(0,\infty) \times \mathbb{R}^d$ with the Lebesgue intensity. Then $\{X_t\}_{t\in\mathbb{R}^d}$ is a moving-maxima α-Fréchet random field:
$$\{X_t\}_{t\in\mathbb{R}^d} \stackrel{f.d.d.}{=} \Big\{ \int_{\mathbb{R}^d}^{\!e} \varphi_\Sigma(t - x)\, M_\alpha(dx) \Big\}_{t\in\mathbb{R}^d}.$$
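A rough simulation sketch of this construction (not from the talk): restrict the storm centres $U_i$ to a box $B$ somewhat larger than the observation window, in which case the associated $\Gamma$-marks form a Poisson process of rate $|B|$ on $(0,\infty)$. Truncation and edge effects introduce bias; the window, $\Sigma = I_2$, and the number of storms are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, n_storms = 3.0, 1_000

# Storm centres restricted to the box B = [lo, hi]^2; the Gamma-marks of these
# points are a Poisson process on (0, inf) of rate |B|.
lo, hi = -5.0, 15.0
area = (hi - lo) ** 2
gammas = np.cumsum(rng.exponential(size=n_storms)) / area
centres = rng.uniform(lo, hi, size=(n_storms, 2))

def phi(x):
    # Isotropic Gaussian storm shape phi_Sigma with Sigma = I_2 (d = 2):
    return np.exp(-0.5 * (x ** 2).sum(axis=-1)) / (2.0 * np.pi)

grid = np.linspace(0.0, 10.0, 31)
xx, yy = np.meshgrid(grid, grid)
pts = np.stack([xx, yy], axis=-1)                         # (31, 31, 2)

# X_t = sup_i phi_Sigma(t - U_i) / Gamma_i^{1/alpha}:
diffs = pts[:, :, None, :] - centres[None, None, :, :]    # (31, 31, n_storms, 2)
field = (phi(diffs) / gammas ** (1.0 / alpha)).max(axis=-1)
print(field.shape)
```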

Smith Random Fields: Sample Surface

[Figure: sample surface of a 2D Smith random field with α = 3, ρ = 0.8.]

Brown-Resnick Processes

Let $w_t(\omega')$ be a standard Brownian motion on $(\Omega', \mathcal{F}', P')$, and consider an α-Fréchet random sup measure $M_\alpha(d\omega')$ on $(\Omega', \mathcal{F}', P')$ with control measure $P'$. Define the α-Fréchet process
$$X_t := \int_{\Omega'}^{\!e} e^{w_t(\omega') - \alpha |t|/2}\, M_\alpha(d\omega'), \quad t \in \mathbb{R}.$$

Surprisingly, the process $\{X_t\}_{t\in\mathbb{R}}$ is stationary. It is called a Brown-Resnick stationary α-Fréchet process.

More generally, if $\{w_t\}_{t\in\mathbb{R}^d}$ is a zero-mean Gaussian random field with stationary increments and variance $\sigma_t^2 = \mathrm{Var}(w_t)$, then
$$X_t := \int_{\Omega'}^{\!e} e^{w_t(\omega') - \alpha \sigma_t^2/2}\, M_\alpha(d\omega'), \quad t \in \mathbb{R}^d,$$
is a stationary α-Fréchet random field. For more details, see Kabluchko, Schlather & de Haan (2009).
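Combining this with the de Haan representation gives a crude simulation recipe (a sketch, not the talk's method): draw i.i.d. Brownian paths $w^{(i)}$ as the spectral variables from $P'$, and truncate the Poisson expansion. The truncation is known to be biased in the tails; exact algorithms exist in the literature on exact simulation of Brown-Resnick fields. All parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, n_paths, n_steps, T = 1.0, 3_000, 200, 2.0
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)

# i.i.d. Brownian paths w^{(i)}: the "spectral" draws from P'.
steps = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
w = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)], axis=1)

gammas = np.cumsum(rng.exponential(size=n_paths))

# f_t(w) = exp(w_t - alpha * sigma_t^2 / 2), with sigma_t^2 = t for standard BM:
spectral = np.exp(w - alpha * t[None, :] / 2.0)

# Truncated representation: X_t = sup_i f_t(w^{(i)}) / Gamma_i^{1/alpha}.
X = (spectral / gammas[:, None] ** (1.0 / alpha)).max(axis=0)
print(X.shape)
```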

Prediction

Prediction for Max-stable Random Fields

Given is a max-stable random field
$$X_t := \int_S^{\!e} f_t(u)\, M_\alpha(du), \quad t \in T. \tag{1}$$
Problem: We observe $X_{t_1}, \dots, X_{t_n}$.
Goal: Predict $X_{s_1}, \dots, X_{s_m}$.
Need: To compute functionals of the conditional distribution
$$(X_{s_1}, \dots, X_{s_m}) \,\big|\, (X_{t_1}, \dots, X_{t_n}).$$
Challenges:
- The c.d.f.'s are theoretically available from (1), but the densities are not.
- The current state of the art: only bivariate densities are known in closed form, for just a couple of models.

The Max-Linear Model

Consider the model $X = A \odot Z$, that is,
$$X_i = \bigvee_{j=1}^p a_{ij} Z_j, \quad (1 \le i \le n)$$
where $A = (a_{ij})_{n\times p}$, $X = (X_i)_{i=1}^n$, $Z = (Z_j)_{j=1}^p$.

If the $Z_j$'s are i.i.d. α-Fréchet, then $X$ is max-stable. In fact, $X_i = \int_{[0,1]}^{\!e} f_i\, dM_\alpha$, with
$$f_i(u) = \bigvee_{j=1}^p p^{1/\alpha} a_{ij}\, 1_{[(j-1)/p,\, j/p)}(u).$$

Max-linear models with large $p$ can approximate arbitrary max-stable models $X_t = \int_{[0,1]}^{\!e} f_t\, dM_\alpha$.
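The max-matrix product $A \odot Z$ is one line of numpy; a sketch follows (matrix, sample size, and the evaluation point are arbitrary), together with a Monte Carlo check of the implied marginal: $X_i$ is α-Fréchet with scale $(\sum_j a_{ij}^\alpha)^{1/\alpha}$.

```python
import numpy as np

def max_linear(A, Z):
    # X = A "max-times" Z: X_i = max_j a_ij * Z_j, applied to a batch of Z rows.
    A = np.asarray(A, dtype=float)
    Z = np.atleast_2d(np.asarray(Z, dtype=float))
    return (A[None, :, :] * Z[:, None, :]).max(axis=2)

rng = np.random.default_rng(6)
alpha = 2.0
A = np.array([[1.0, 0.5],
              [0.2, 2.0]])

# i.i.d. standard alpha-Frechet inputs Z_j:
Z = (-np.log(rng.uniform(size=(100_000, 2)))) ** (-1.0 / alpha)
X = max_linear(A, Z)

# Marginal check: P(X_i <= x) = exp(-(sum_j a_ij^alpha) / x^alpha).
x = 3.0
emp = (X[:, 0] <= x).mean()
theo = np.exp(-(A[0] ** alpha).sum() / x ** alpha)
print(emp, theo)
```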

Another Representation: Spectral Measure

For all max-stable (α-Fréchet) random vectors $X$:
$$P(X \le x) = \exp\Big\{ -\int_{S_+^{n-1}} \max_{1\le i\le n} \Big( \frac{w_i}{x_i} \Big)^\alpha \Gamma(dw) \Big\}, \quad (x \ge 0)$$
where $S_+^{n-1} = \{ w \ge 0 : \|w\|_\infty = \max_{1\le i\le n} w_i = 1 \}$, and $\Gamma$ is a unique finite measure on $S_+^{n-1}$: the spectral measure of $X$.

For the max-linear model, $\Gamma$ is discrete:
$$\Gamma(dw) = \sum_{j=1}^p \|a_j\|_\infty^\alpha\, \delta_{a_j/\|a_j\|_\infty}(dw),$$
where $a_j = (a_{ij})_{i=1}^n \in \mathbb{R}_+^n$. Hence, the max-linear models are called spectrally discrete.

Examples

Spectrally discrete random field: Let $\varphi_j(t) \ge 0$ be deterministic functions and
$$X_t := \bigvee_{j=1}^p \varphi_j(t) Z_j, \quad t \in T.$$
For any $t_1, \dots, t_n$ and $s_1, \dots, s_m$, we have $X = A \odot Z$, with $a_{ij} := \varphi_j(t_i)$ and $X = (X_{t_i})_{i=1}^n$, and $Y = B \odot Z$, with $b_{ij} := \varphi_j(s_i)$ and $Y = (X_{s_i})_{i=1}^m$.

Moving maxima: Let $\varphi(u) \in L_+^\alpha(\mathbb{R}^d, du)$; then
$$X_t := \int_{\mathbb{R}^d}^{\!e} \varphi(t - u)\, M_\alpha(du) \approx_P \bigvee_{j \in [-M, M]^d} \varphi(t - u_j) Z_j.$$

Prediction: a Computational Solution

Problem: Consider the max-linear model $X = A \odot Z$, where $A = (a_{ij})_{n\times p}$, $a_{ij} > 0$, and the $Z_j$'s are independent α-Fréchet.

Goal: Sample from the conditional distribution of $Z$ given $X$.

Plug in: $Y = B \odot Z$, and obtain samples from the conditional distribution $Y \mid X$:
$$\underbrace{(X_{s_1}, \dots, X_{s_m})}_{=\,Y} \,\big|\, \underbrace{(X_{t_1}, \dots, X_{t_n})}_{=\,X}.$$
Idea: If $Y_1, \dots, Y_N$ are independent samples from $Y \mid X = x$, then by the LLN, for $P_X$-almost all $x$:
$$\frac{1}{N} \sum_{i=1}^N h(Y_i) \stackrel{a.s.}{\longrightarrow} E\big( h(Y) \mid \sigma(X) \big), \quad \text{as } N \to \infty.$$

Benefits & Caveats

Benefits:
- Approximations to the optimal predictor for $h(Z)$ given $X$ in the mean-square sense.
- Monte Carlo approximations to conditional medians, quantiles, other optimal predictors, and prediction intervals.

Caveats:
- Need to sample efficiently from the conditional distribution.
- Must be able to handle large $p = \dim(Z)$!

Basic Observations

Suppose
$$x = A \odot Z. \tag{2}$$
Observations:
- For all $1 \le j \le p$:
$$Z_j \le \hat{z}_j(x) := \min_{1\le i\le n} \frac{x_i}{a_{ij}} \quad \Big( \text{with the convention } \frac{x_i}{0} = \infty \Big).$$
- For some $j$'s the upper bounds are attained, so that (2) holds!
- Hitting scenario: a set $J = J(x) \subset \{1, \dots, p\}$ with
$$Z_j = \hat{z}_j(x),\ j \in J \quad \text{and} \quad Z_j < \hat{z}_j(x),\ j \notin J,$$
that yields (2).

The Simple Solution

Theorem (Wang & S. (2010)). The regular conditional probability $p_{Z|X}(\cdot \mid x)$ equals
$$p_{Z|X}(E \mid x) = \sum_{J \text{ hitting scenario}} p_J(x)\, \nu_J(E \mid x),$$
where $p_J(x) = w_J / \sum_K w_K$ for some weights $w_J$, and
$$\nu_J(E \mid x) = \underbrace{\prod_{j\in J} \delta_{\hat{z}_j}(\pi_j(E))}_{\text{active constraints}} \times \underbrace{\prod_{j\notin J} P(Z_j \in \pi_j(E) \mid Z_j < \hat{z}_j)}_{\text{inequality constraints}}.$$

The weights: $w_J = 0$ if $|J| > r(x)$, and if $|J| = r(x)$:
$$w_J = \prod_{j\in J} \hat{z}_j f_{Z_j}(\hat{z}_j) \cdot \prod_{j\notin J} F_{Z_j}(\hat{z}_j).$$

Notes: Here $r(x)$ is the minimal number of equality constraints:
$$r(x) = \min_{J \text{ hitting scenario for } x} |J|.$$
The result applies to independent $Z_j$'s with p.d.f.'s $f_{Z_j}$ and c.d.f.'s $F_{Z_j}$.

Example

Consider the toy model
$$\begin{pmatrix} X_1 \\ X_2 \\ X_3 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix} \odot \begin{pmatrix} Z_1 \\ Z_2 \\ Z_3 \end{pmatrix}.$$
We have $X_1 = Z_1$, $X_2 = Z_1 \vee Z_2$, $X_3 = Z_1 \vee Z_2 \vee Z_3$.

If $x = (2\ 2\ 2)^T$, then the hitting scenarios are
$$J_1 = \{1\}, \quad J_2 = \{1,2\}, \quad J_3 = \{1,3\}, \quad J_4 = \{1,2,3\}.$$
Thus, $r = r(x) = 1$, and the conditional probability
$$p_{Z|X}(dz \mid x) = \delta_2(dz_1)\, P(Z_2 \in dz_2 \mid Z_2 < 2)\, P(Z_3 \in dz_3 \mid Z_3 < 2)$$
has only one component.
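The toy example can be checked mechanically (a sketch, with 0-based column indices): compute the bounds $\hat{z}_j$, build the hitting matrix of the next slide, and enumerate the subsets of columns that cover every row; brute-force enumeration is fine only because $p = 3$ here.

```python
import numpy as np
from itertools import combinations

A = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
x = np.array([2.0, 2.0, 2.0])
n, p = A.shape

# Upper bounds z_hat_j = min_i x_i / a_ij, with the convention x_i / 0 = inf:
ratios = np.where(A > 0, x[:, None] / np.where(A > 0, A, 1.0), np.inf)
zhat = ratios.min(axis=0)

# Hitting matrix: h_ij = 1 iff a_ij * z_hat_j == x_i.
H = (A * zhat[None, :] == x[:, None]).astype(int)

# J is a hitting scenario iff its columns of H jointly cover every row.
scenarios = [J for r in range(1, p + 1)
             for J in combinations(range(p), r)
             if H[:, list(J)].max(axis=1).min() == 1]
r_x = min(len(J) for J in scenarios)
print(zhat)        # [2. 2. 2.]
print(scenarios)   # [(0,), (0, 1), (0, 2), (0, 1, 2)] -- the slide's J_1, ..., J_4
print(r_x)         # 1
```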

The Efficient Solution: Implementation

Big Problem: Finding all hitting scenarios by brute force is NP-hard (as a function of $n$ and $p$): it is harder than the set covering problem.

Big Solution: Yizao found an equivalent representation of the regular conditional probability.

Theorem (Wang & S. (2010)). For $P_X$-almost all $x$, we have
$$p_{Z|X}(E \mid x) = \prod_{k=1}^{r(x)} \nu^{(k)}(E \mid x), \quad \text{with} \quad \nu^{(k)}(E \mid x) = \sum_{j \in J^{(k)}(x)} p_j^{(k)}(x)\, \nu_j^{(k)}(E \mid x),$$
where
$$p_j^{(k)}(x) = \frac{w_j^{(k)}(x)}{\sum_{i\in J^{(k)}(x)} w_i^{(k)}(x)}, \qquad w_j^{(k)}(x) = \hat{z}_j f_{Z_j}(\hat{z}_j) \prod_{l\in J^{(k)}\setminus\{j\}} F_{Z_l}(\hat{z}_l),$$
$$\nu_j^{(k)}(E \mid x) = \delta_{\hat{z}_j}(\pi_j(E)) \prod_{l\in J^{(k)}\setminus\{j\}} P(Z_l \in \pi_l(E) \mid Z_l \le \hat{z}_l).$$

Some ingredients in the proof

Hitting matrix: Given $x = A \odot Z$, construct $H = (h_{ij})_{n\times p}$, where
$$h_{ij} = \begin{cases} 0, & \text{if } a_{ij}\hat{z}_j < x_i \\ 1, & \text{if } a_{ij}\hat{z}_j = x_i. \end{cases}$$
Intuition: we want to pick $r = r(x)$ columns of $H$ (i.e., $j$'s) that cover all rows.

Key facts: With $P_X$-probability one, the structure of $H = H(x)$ is nice:
- One can decompose $\{1, \dots, p\}$ into $r = r(x)$ disjoint classes $J^{(k)}$, $k = 1, \dots, r$.
- One and only one $j$ in each $J^{(k)}$, $k = 1, \dots, r$, is active.
For more details, please see Wang & S. (2010).

An application: Max-AR(q)

Davis & Resnick (1989): Max-AR(q) processes are the stationary solutions to
$$X_t = \varphi_1 X_{t-1} \vee \cdots \vee \varphi_q X_{t-q} \vee Z_t, \quad t \in \mathbb{Z}, \tag{3}$$
with $\varphi_i \ge 0$, $1 \le i \le q$, and i.i.d. standard 1-Fréchet $Z_t$'s.

If $\|\varphi\| = \sum_{i=1}^q \varphi_i < 1$, then (3) has a unique solution
$$X_t = \bigvee_{j=0}^\infty \psi_j Z_{t-j}.$$

The $\psi_j$'s decay exponentially: by truncation, we get a max-linear model
$$X_t \approx_P \widetilde{X}_t = \bigvee_{j=0}^M \psi_j Z_{t-j}.$$

Max-AR(q): we get the projection predictors by recursively solving
$$\widehat{X}_{t+h} = \varphi_1 \widehat{X}_{t+h-1} \vee \cdots \vee \varphi_q \widehat{X}_{t+h-q}.$$
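For $q = 1$ the moving-maxima coefficients are simply $\psi_j = \varphi^j$, and the pieces above fit in a few lines (a sketch with arbitrary $\varphi$ and sample size, not the talk's implementation): run the recursion, confirm it reproduces the moving-maxima representation exactly, and form the $h$-step projection predictor $\widehat{X}_{t+h} = \varphi^h X_t$.

```python
import numpy as np

rng = np.random.default_rng(7)
phi, n = 0.6, 5_000          # Max-AR(1): X_t = phi * X_{t-1} v Z_t, with phi < 1

# i.i.d. standard 1-Frechet innovations:
Z = 1.0 / (-np.log(rng.uniform(size=n)))

# Recursion, started from X_0 = Z_0:
X = np.empty(n)
X[0] = Z[0]
for t in range(1, n):
    X[t] = max(phi * X[t - 1], Z[t])

# Check against the moving-maxima representation X_t = max_j phi^j * Z_{t-j},
# which the recursion reproduces term by term:
ma = max(phi ** j * Z[n - 1 - j] for j in range(n))

# h-step projection predictor from the last observation:
h = 3
pred = phi ** h * X[-1]
print(np.isclose(X[-1], ma), pred > 0)
```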

Prediction for Max-AR

Using i.i.d. conditional samples, we get:
- the conditional median and prediction intervals;
- estimates of $P\{X_{t+h} \in \cdot \mid X_s,\ s \le t\}$.

[Figure: prediction of a MARMA(3,0) process, showing the process, the conditional quantiles, the conditional median, and the projection predictor. Accompanying table: empirical coverage and width of the prediction intervals and of the projection predictor, at lags h = 1 through 40.]

Moving Maxima Random Fields

Smith moving-maxima random field model: $X_t = \int_{\mathbb{R}^2}^{\!e} \varphi(t - u)\, M_\alpha(du)$, where
$$\varphi(t) = \frac{\beta_1 \beta_2}{2\pi\sqrt{1-\rho^2}} \exp\Big\{ -\frac{\beta_1^2 t_1^2 - 2\rho\beta_1\beta_2 t_1 t_2 + \beta_2^2 t_2^2}{2(1-\rho^2)} \Big\}.$$
[Figure on the left: 4 conditional samples with $\beta_1 = \beta_2 = 1$, $\rho = 0$, given 7 observed values (all equal).]

The Conditional Median

[Figure: conditional median of the Smith model; parameters $\rho = 0$, $\beta_1 = 1$, $\beta_2 = 1$.]

An application: Rainfall in Bourgogne, France

The model: Discretization of the two-component moving maxima (Smith models)
$$X_t = \int_{\mathbb{R}^2}^{\!e} \varphi_{\beta_1}(t - u)\, M_\alpha^{(1)}(du) \,\vee\, \int_{\mathbb{R}^2}^{\!e} \varphi_{\beta_2}(t - u)\, M_\alpha^{(2)}(du),$$
where
$$\varphi_\beta(u_1, u_2) = \frac{1}{2\pi\beta^2}\, e^{-(u_1^2 + u_2^2)/2\beta^2}, \quad \beta > 0.$$
Intuition:
- Need 2 components to represent large- and small-scale effects.
- For simplicity, use isotropic components ($\rho = 0$ and equal scales).

A Realization of the Two-component Smith Model

[Figure: a realization of a two-component Smith model.]

Observations: X ti, for i = 1,, 146 stations maxima over 1 years of daily rainfall. 00 6000 600 7000 700 1800 19000 1900 Lambert Longitude Lambert Latitude 100 0 100 200 Conditional Median

Rainfall in Bourgogne, France Observations: X ti, for i = 1,, 146 stations maxima over 1 years of daily rainfall. 00 6000 600 7000 700 1800 19000 1900 Lambert Longitude 100 0 100 200 Conditional Median Model fitting via Cross Validation Do a grid search over (β 1, β 2 ): 1. Condition on largest stations. 2. Compute p values for the rest 141. 3. Check if uniform. If not: 4. Change (β 1, β 2 ), goto 1.

Future work

- General estimation methodology? Hard, but recent exciting work via partial-likelihood (sandwich) methods is available.
- Construction and estimation of max-stable models: an area of interest.
- Spatio-temporal models: many environmental applications.
- Computer networks: extremal dependence in delays/loads/congestion.
- Prediction for the spectrally continuous case? Important, but hard!

Thank you!

Some References

Brown, B. M. & Resnick, S. I. (1977). Extreme values of independent stochastic processes. J. Appl. Probability 14(4), 732-739.
de Haan, L. (1984). A spectral representation for max-stable processes. Annals of Probability 12(4), 1194-1204.
Kabluchko, Z. & Schlather, M. (2009). Ergodic properties of max-infinitely divisible processes. Stoch. Process. Appl.
Smith, R. L. (1990). Max-Stable Processes and Spatial Extremes. Unpublished manuscript.
S. (2008). On the ergodicity and mixing of max-stable processes. Stoch. Process. Appl.
Wang, Y. (2010). maxLinear: an R package. http://www.stat.lsa.umich.edu/ yizwang/software/maxlinear.
Wang, Y. & S. (2010). On the structure and representations of max-stable processes. Adv. Appl. Probab. 42(3).
Wang, Y. & S. (2010). Conditional sampling for spectrally discrete max-stable random fields. Adv. Appl. Probab., to appear.
Zhang, Z. & Smith, R. L. (2004). The behavior of multivariate maxima of moving maxima processes. Journal of Applied Probability 41(4), 1113-1123.

Appendix: Some Derivations

The idea behind de Haan's spectral representation

$$P\{X_{t_k} \le x_k,\ k = 1, \dots, n\} = P\Big\{ \bigvee_{i\ge 1} \max_{1\le k\le n} \frac{f_{t_k}(U_i)/x_k}{\Gamma_i^{1/\alpha}} \le 1 \Big\} =: P\Big\{ \bigvee_{i\ge 1} \frac{g(U_i)}{\Gamma_i^{1/\alpha}} \le 1 \Big\} = P\{N_A = 0\} = e^{-\mu(A)},$$
where $g(u) := \max_{1\le k\le n} f_{t_k}(u)/x_k$, $\mu$ is the intensity measure of the Poisson point process $N = \{(U_i, \Gamma_i)\}$, and
$$A = \{(u, x) : g(u) > x^{1/\alpha}\} \equiv \{(u, x) : g(u)^\alpha > x\}.$$
We have (by Fubini) that
$$\mu(A) = \int_0^1 \int_0^\infty 1_{\{g^\alpha(u) > x\}}\, dx\, du = \int_0^1 g^\alpha(u)\, du.$$
Thus,
$$P\{X_{t_k} \le x_k,\ k = 1, \dots, n\} = \exp\Big\{ -\int_0^1 \max_{1\le k\le n} \big( f_{t_k}(u)/x_k \big)^\alpha\, du \Big\}.$$

Stationarity of the standard Brown-Resnick process

For simplicity take $\alpha = 1$, and let $0 \le t_1 \le \cdots \le t_n$, $\tau > 0$:
$$P\{X_{t_k+\tau} \le x_k,\ 1 \le k \le n\} = \exp\Big\{ -E_{P'} \max_{1\le k\le n} x_k^{-1} e^{w_{t_k+\tau} - (t_k+\tau)/2} \Big\}$$
$$= \exp\Big\{ -E_{P'} \Big( \max_{1\le k\le n} x_k^{-1} e^{(w_{t_k+\tau} - w_\tau) - t_k/2} \Big)\, e^{w_\tau - \tau/2} \Big\}$$
$$= \exp\Big\{ -E_{P'} \max_{1\le k\le n} x_k^{-1} e^{w_{t_k} - t_k/2} \Big\} = P\{X_{t_k} \le x_k,\ 1 \le k \le n\},$$
since $\{w_t\}_{t\ge 0}$ has stationary and independent increments (so the increments $(w_{t_k+\tau} - w_\tau)_k$ are independent of $w_\tau$ and have the same law as $(w_{t_k})_k$) and $E_{P'} e^{w_\tau} = e^{\tau/2}$.