ASYMPTOTICS OF MARKOV KERNELS AND THE TAIL CHAIN

SIDNEY I. RESNICK AND DAVID ZEBER

Abstract. An asymptotic model for extreme behavior of certain Markov chains is the tail chain. Generally taking the form of a multiplicative random walk, it is useful in deriving extremal characteristics such as point process limits. We place this model in a more general context, formulated in terms of extreme value theory for transition kernels, and extend it by formalizing the distinction between extreme and non-extreme states. We make the link between the update function and transition kernel forms considered in previous work, and we show that the tail chain model leads to a multivariate regular variation property of the finite-dimensional distributions under assumptions on the marginal tails alone.

Key words and phrases. Extreme values, multivariate regular variation, Markov chain, transition kernel, tail chain, heavy tails. S. I. Resnick and D. Zeber were partially supported by ARO Contract W911NF and NSA Grant H at Cornell University.

1. Introduction

A method of approximating the extremal behavior of discrete-time Markov chains is to use an asymptotic process called the tail chain, under an asymptotic assumption on the transition kernel of the chain. Loosely speaking, if the distribution of the next state converges under some normalization as the current state becomes extreme, then the Markov chain behaves approximately as a multiplicative random walk upon leaving a large initial state. This approach leads to intuitive extremal models in such cases as autoregressive processes with random coefficients, which include a class of ARCH models. The focus on Markov kernels was introduced by Smith [24]. Perfekt [18, 19] extended the approach to higher dimensions, and Segers [23] rephrased the conditions in terms of update functions.

Though not restrictive in practice, the previous approach tends to mask aspects of the process's extremal behaviour. Markov chains which admit the tail chain approximation fall into one of two categories. Starting from an extreme state, the chain either remains extreme over any finite time horizon, or drops to a non-extreme state of lower order after a finite amount of time. The latter case is problematic in that the tail chain model is not sensitive to possible subsequent jumps from a non-extreme state to an extreme one. Previous developments handle this by ruling out the class of processes exhibiting this behaviour via a technical condition, which we refer to as the regularity condition. Also, most previous work has assumed stationarity, since interest focused on computing the extremal index or deriving limits for the exceedance point processes, drawing on the theory established for stationary processes with mixing by Leadbetter et al. [17]. However, stationarity is not fundamental in determining the extremal behaviour of the finite-dimensional distributions.

We place the tail chain approximation in the context of an extreme value theory for Markovian transition kernels, which a priori does not necessitate any such restrictions on the class of processes to which it may be applied. In particular, we introduce the concept of a boundary distribution, which controls tail chain transitions from non-extreme to extreme states. Although distributional convergence results are more naturally phrased in terms of transition kernels, we treat the equivalent update function forms as an integral component to interfacing with applications, and we phrase relevant

assumptions in terms of both. While not making explicit a complete tail chain model for the class of chains excluded previously, we demonstrate the extent to which previous models may be viewed as a partial approximation within our framework. This is accomplished by formalizing the division between extreme and non-extreme states as a level we term the extremal boundary. We show that, in general, the tail chain approximates the extremal component, the portion of the original chain having yet to cross below this boundary. Phrased in these terms, the regularity condition requires that the distinction between the original chain and its extremal component disappear asymptotically.

After introducing our extreme value theory for transition kernels, along with a representation in terms of update functions, we derive limits of finite-dimensional distributions conditional on the initial state, as it becomes extreme. We then examine the effect of the regularity condition on these results. Finally, adding the assumption of marginal regularly varying tails leads to convergence results for the unconditional distributions akin to regular variation.

1.1. Notation and Conventions.

We review notation and relevant concepts. If not explicitly specified, assume that any space S under discussion is a topological space paired with its Borel σ-field B(S) generated by the open sets, to form a measurable space. Denote by K(S) the collection of its compact sets; by C(S) the space of real-valued continuous, bounded functions on S; and by C_K^+(S) the space of non-negative continuous functions with compact support. Weak convergence of probability measures is represented by ⇒. For a space E which is locally compact with countable base (for example, a subset of [−∞, ∞]^d), M_+(E) is the space of non-negative Radon measures on B(E); point measures consisting of single point masses at x are written ε_x. A sequence of measures {μ_n} ⊂ M_+(E) converges vaguely to μ ∈ M_+(E) (written μ_n →v μ) if ∫_E f dμ_n → ∫_E f dμ as n → ∞ for any f ∈ C_K^+(E). The shorthand μ(f) = ∫ f dμ is handy. That the distribution of a random vector X is regularly varying on a cone E ⊂ [0, ∞]^d \ {0} means that t P[X/b(t) ∈ ·] →v μ in M_+(E) as t → ∞ for some non-degenerate limit measure μ ∈ M_+(E) and scaling function b(t). The limit μ is necessarily homogeneous in the sense that μ(c·) = c^{−α} μ(·) for some α > 0. The regular variation is standard if b(t) = t.

If X = (X_0, X_1, X_2, ...) is a homogeneous Markov chain and K is a Markov transition kernel, we write X ∼ K to mean that the dependence structure of X is specified by K, i.e. P[X_{n+1} ∈ · | X_n = x] = K(x, ·), n = 0, 1, .... We adopt the standard shorthand P_x[(X_1, ..., X_m) ∈ ·] = P[(X_1, ..., X_m) ∈ · | X_0 = x]. Some useful technical results are assembled in Section 8 (p. 21).

2. Extremal Theory for Markov Kernels

We begin by focusing on the Markov transition kernels rather than the stochastic processes they determine, and introduce a class of kernels we term tail kernels, which we will view as scaling limits of certain kernels. Antecedents include Segers' [23] definition of back-and-forth tail chains that approximate certain Markov chains started from an extreme value. For a Markov chain X ∼ K on [0, ∞), it is reasonable to expect that the extremal behaviour of X is determined by the pairs (X_n, X_{n+1}), and one way to control such pairs is to assume that (X_n, X_{n+1}) belongs to a bivariate domain of attraction (cf. [5, 24]). In the context of regular variation, writing

(2.1)    t P[X_n/b(t) ∈ A_0, X_{n+1}/b(t) ∈ A_1] = ∫_{A_0} K(b(t)u, b(t)A_1) t P[X_n/b(t) ∈ du]

suggests combining marginal regular variation of X_n with a scaling kernel limit to derive extremal properties of the finite-dimensional distributions (fdds) [18, 19, 23], and this is the direction we take.

We first discuss the kernel scaling operation. For simplicity, we assume the state space of the Markov chain is [0, ∞), although with suitable modifications it is relatively straightforward to extend the results to R^d. Henceforth G and H will denote probability distributions on [0, ∞).

2.1. Tail Kernels.

The tail kernel associated with G, with boundary distribution H, is

(2.2)    K*(y, A) = G(y^{−1}A) if y > 0,    K*(y, A) = H(A) if y = 0,

for any measurable set A. Thus, the class of tail kernels on [0, ∞) is parameterized by the pair of probability distributions (G, H). Such kernels are characterized by a scaling property:

Proposition 2.1. A Markov transition kernel K* is a tail kernel associated with some (G, H) if and only if it satisfies the relation

(2.3)    K*(uy, A) = K*(y, u^{−1}A) when y > 0,

for any u > 0, in which case G = K*(1, ·). The property (2.3) extends to y = 0 iff H = ε_0.

Proof. If K* is a tail kernel, (2.3) follows directly from the definition. Conversely, assuming (2.3), for y > 0 we can write K*(y, A) = K*(1, y^{−1}A), demonstrating that K* is a tail kernel associated with K*(1, ·), with boundary distribution H = K*(0, ·). To verify the second assertion, fixing u > 1 (without loss of generality), we must show that H(u^{−1}·) = H(·) iff H = ε_0. On the one hand, ε_0(u^{−1}A) = ε_0(A). On the other, H((0, ∞)) = lim_n H((u^{−n}, ∞)) = H((1, ∞)), so H((0, 1]) = 0. A similar argument shows that H((1, ∞)) = 0 as well.

We call the Markov chain T ∼ K* a tail chain associated with (G, H). Such a chain can be represented as

(2.4)    T_n = ξ_n T_{n−1} + ξ*_n 1_{{T_{n−1} = 0}},    n = 1, 2, ...,

where the {ξ_n} iid ∼ G and the {ξ*_n} iid ∼ H are independent of each other and of T_0. If H = ε_0, then T becomes a multiplicative random walk with step distribution G and absorbing barrier at {0}: T_n = T_0 ξ_1 ··· ξ_n.

2.2. Convergence to Tail Kernels.

The tail chain approximates the behaviour of a Markov chain X ∼ K in extreme states. Asymptotic results require that the normalized distribution of X_1 be well-approximated by some distribution G when X_0 is large, and we interpret this requirement as a domain of attraction condition for kernels.

Definition. A Markov transition kernel K : [0, ∞) × B([0, ∞)) → [0, 1] is in the domain of attraction of G, written K ∈ D(G), if, as t → ∞,

(2.5)    K(t, t·) ⇒ G on [0, ∞].

Note that D(G) contains at least the class of tail kernels associated with G (i.e. with any boundary distribution H). A simple scaling argument extends (2.5) to

(2.6)    K(tu, t·) ⇒ G(u^{−1}·) =: K*(u, ·),    u > 0,

where K* is any tail kernel associated with G; this is the form appearing in (2.1). Thus tail kernels are scaling limits for kernels in a domain of attraction. In fact, tail kernels are the only possible limits:

Proposition 2.2. Let K be a transition kernel and H an arbitrary distribution on [0, ∞). If for each u > 0 there exists a distribution G_u such that K(tu, t·) ⇒ G_u as t → ∞, then the function K* defined on [0, ∞) × B([0, ∞)) as

K*(u, A) := G_u(A) if u > 0,    K*(u, A) := H(A) if u = 0,

is a tail kernel associated with G_1.

Proof. It suffices to show that G_u = G_1(u^{−1}·) for any u > 0. But this follows directly from the uniqueness of weak limits, since (2.6) shows that K(tu, t·) ⇒ G_1(u^{−1}·).

A version of (2.6) uniform in u is needed for fdd convergence results.

Proposition 2.3. Suppose K ∈ D(G), and K* is a tail kernel associated with G. Then, for any u > 0 and any non-negative function u_t = u(t) such that u_t → u as t → ∞, we have

(2.7)    K(tu_t, t·) ⇒ K*(u, ·),    t → ∞.

Proof. Suppose u_t → u > 0. Observe that K(tu_t, t·) = K(tu_t, tu_t u_t^{−1}·), and put h_t(x) = u_t x, h(x) = ux. Writing P_t = K(tu_t, tu_t·), we have K(tu_t, t·) = P_t ∘ h_t^{−1} ⇒ G ∘ h^{−1} = G(u^{−1}·) = K*(u, ·) by [2, Theorem 5.5, p. 34].

The measure G controls X upon leaving an extreme state, and H describes the possibility of jumping from a non-extreme state to an extreme one. The traditional assumption (2.5) provides no information about H, and in fact (2.7) may fail if u = 0 (see Example 6.2). However, the choice of H cannot be ignored if 0 is an accessible point of the state space, especially in cases where G({0}) = K*(y, {0}) > 0. We propose pursuing the implications of the traditional assumption (2.5) alone, and will add conditions as needed to understand the boundary behaviour of X.

Alternative, more general formulations of (2.5) include replacing K(t, t·) with K(t, a(t)·) or K(t, a(t)· + b(t)) with appropriate functions a(t) > 0 and b(t), in analogy with the usual domain of attraction conditions in extreme value theory. Indeed, the second choice coincides with the original presentation by Perfekt [18], and relates to the conditional extreme value model [8, 13, 14]. For clarity, and to maintain ties with regular variation, we retain the standard choice a(t) = t, b(t) = 0.

2.3. Representation.

How do we characterize kernels belonging to D(G)? From (2.4), for chains transitioning according to a tail kernel, the next state is a random multiple of the previous one, provided the prior state is non-zero. We expect chains transitioning according to K ∈ D(G) to behave approximately like this upon leaving a large state, and this is best expressed in terms of a function describing how a new state depends on the prior one.

Given a kernel K, we can always find a sample space E, a measurable function ψ : [0, ∞) × E → [0, ∞) and an E-valued random element V such that ψ(y, V) ∼ K(y, ·) for all y. Given a random variable X_0, if we define the process X = (X_0, X_1, X_2, ...) recursively by X_{n+1} = ψ(X_n, V_{n+1}), n ≥ 0, where {V_n} is an iid sequence equal in distribution to V and independent of X_0, then X is a Markov chain with transition kernel K. Call the function ψ an update function corresponding to K. If in addition K ∈ D(G), the domain of attraction condition (2.5) becomes

t^{−1} ψ(t, V) ⇒ ξ,

where ξ ∼ G. Applying the probability integral transform or the Skorohod representation theorems [3, Theorem 3.2, p. 6], [4, Theorem 6.7, p. 70], we get the following result.

Proposition 2.4. If K is a transition kernel, then K ∈ D(G) if and only if there exist a measurable function ψ* : [0, ∞) × [0, 1] → [0, ∞) and a random variable ξ ∼ G on the uniform probability space ([0, 1], B, λ) such that

(2.8)    t^{−1} ψ*(t, u) → ξ(u) for all u ∈ [0, 1] as t → ∞,

and ψ* is an update function corresponding to K in the sense that λ[ψ*(y, ·) ∈ A] = K(y, A) for measurable sets A. Think of the update function as ψ*(y, U), where U(u) = u is a uniform random variable on [0, 1].

Proof. If there exist such ψ* and ξ satisfying (2.8), then clearly K ∈ D(G). Conversely, suppose (ψ, V) is an update function corresponding to K. According to Skorohod's representation theorem (cf. Billingsley [4], p. 70, with the necessary modifications to allow for an uncountable index set), there exist a random variable ξ* and a stochastic process {Y*_t ; t ≥ 0} defined on the uniform probability space ([0, 1], B, λ), taking values in [0, ∞), such that ξ* ∼ G, Y*_0 =d ψ(0, V), Y*_t =d t^{−1} ψ(t, V) for t > 0, and Y*_t(u) → ξ*(u) as t → ∞ for every u ∈ [0, 1]. Now, define ψ* : [0, ∞) × [0, 1] → [0, ∞) by ψ*(0, u) = Y*_0(u) and ψ*(t, u) = t Y*_t(u), t > 0, u ∈ [0, 1]. It is evident that λ[ψ*(y, ·) ∈ A] = P[ψ(y, V) ∈ A] for y ∈ [0, ∞), so ψ* is indeed an update function corresponding to K, and ψ* satisfies (2.8) by construction.

Update functions corresponding to K are not unique, and some of them may fail to converge pointwise as in (2.8). However, (2.8) is convenient, and Proposition 2.4 shows that Segers' [23] Condition 2.2 in terms of update functions is equivalent to our weak convergence formulation K ∈ D(G). Pointwise convergence in (2.8) gives an intuitive representation of kernels in a domain of attraction.

Corollary 2.1. K ∈ D(G) iff there exist a random variable ξ ∼ G defined on the uniform probability space and a measurable function φ : [0, ∞) × [0, 1] → R satisfying t^{−1} φ(t, u) → 0 for all u ∈ [0, 1], such that

(2.9)    ψ(y, u) := ξ(u) y + φ(y, u)

is an update function corresponding to K.

Proof. If such ξ and φ exist, then t^{−1} ψ(t, u) = ξ(u) + t^{−1} φ(t, u) → ξ(u) for all u, so ψ satisfies (2.8). The converse follows from (2.8) upon setting φ(y, u) := ψ*(y, u) − ξ(u) y.

Many Markov chains, such as ARCH, GARCH and autoregressive processes, are specified by structured recursions that allow quick recognition of update functions corresponding to kernels in a domain of attraction. A common example is the update function ψ(y, (Z, W)) = Zy + W, which behaves like ψ*(y, Z) = Zy when y is large (compare ψ* to the form (2.4) discussed for tail kernels). In general, if K has an update function ψ of the form

(2.10)    ψ(y, (Z, W)) = Zy + φ(y, W)

for a random variable Z ≥ 0 and a random element W, where t^{−1} φ(t, w) → 0 whenever w ∈ C for a set C with P[W ∈ C] = 1, then K ∈ D(G) with G = P[Z ∈ ·]. We will refer to update functions satisfying (2.10) as being in canonical form.
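To make the canonical form concrete, here is a minimal simulation sketch in Python; the distributional choices (an atom of G at 0 plus an exponential component, exponential noise W) are illustrative assumptions of ours, not taken from the paper. Driving the chain and the tail chain with the same multipliers Z_n shows the normalized path X_n/t tracking the tail chain (2.4) with H = ε_0: once some Z_n = 0, the chain drops to a state of order W/t ≈ 0 while the tail chain is absorbed at 0.

import numpy as np

rng = np.random.default_rng(0)

def chain_step(y, z, w):
    # Canonical-form update (2.10): psi(y, (Z, W)) = Z*y + W, so phi(y, W) = W
    # and t^{-1} psi(t, V) -> Z; hence K is in D(G) with G the law of Z.
    return z * y + w

def tail_chain_step(s, z):
    # Tail chain (2.4) with boundary distribution H = eps_0: a multiplicative
    # random walk with absorbing barrier at 0.
    return z * s

t, u, m = 1e6, 1.0, 8
# Illustrative G: atom G({0}) = 0.25, otherwise standard exponential.
Z = np.where(rng.random(m) < 0.25, 0.0, rng.exponential(size=m))
W = rng.exponential(size=m)

x, s = t * u, u   # chain started from the extreme state t*u, tail chain from u
for n in range(m):
    x, s = chain_step(x, Z[n], W[n]), tail_chain_step(s, Z[n])
    print(f"n={n+1}: X_n/t = {x / t:.6f}   T_n = {s:.6f}")

Coupling the two recursions through the same Z_n is only a device for displaying the approximation path by path; the convergence results below are distributional.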

3. Finite-Dimensional Convergence and the Extremal Component

Given a Markov chain X ∼ K with K ∈ D(G), we show that the finite-dimensional distributions (fdds) of X, started from an extreme state, converge to those of the tail chain T defined in (2.4). We initially develop results that depend only on G but not on H, and then clarify what behaviour of X is controlled by G and H respectively. We make explicit links with prior work that did not consider the notion of boundary distribution.

If G({0}) = 0, the choice of H is inconsequential, since P[T eventually hits {0}] = 0 and T is indistinguishable from the multiplicative random walk {T*_n = T_0 ξ_1 ··· ξ_n, n ≥ 0}, where T_0 > 0 and the {ξ_n} are iid ∼ G and independent of T_0. In this case, assume without loss of generality that H = ε_0. However, if G({0}) > 0, any result not depending on H must be restricted to fdds conditional on the tail chain not having yet hit {0}. For example, consider the trajectory of (X_1, ..., X_4), started from X_0 = t, through the region (t, ∞)^2 × [0, δ] × (t, ∞), where t is a high level. The tail chain would model this as a path through (0, ∞]^2 × {0} × (0, ∞], which requires specifying H to control transitions away from {0}.

This raises the question of how to interpret the first hitting time of {0} by T in terms of the original Markov chain X. Such hitting times are important in the study of Markov chain point process models of exceedance clusters based on the tail chain. Intuitively, a transition to {0} by T represents a transition from an extreme state to a non-extreme state by X. We make this notion precise in Section 3.2 by viewing such transitions as downcrossings of a certain level we term the extremal boundary.

We assume X is a Markov chain on [0, ∞) with transition kernel K ∈ D(G), K* is a tail kernel associated with G with unspecified boundary distribution H, and T is a Markov chain on [0, ∞) with kernel K*. The finite-dimensional distributions of X, conditional on X_0 = y, are given by

P_y[(X_1, ..., X_m) ∈ dx] = K(y, dx_1) K(x_1, dx_2) ··· K(x_{m−1}, dx_m),

and analogously for T.

3.1. FDDs Conditional on the Initial State.

Define the conditional distributions

(3.1)    π_t^m(u, ·) = P_{tu}[t^{−1}(X_1, ..., X_m) ∈ ·]  and  π^m(u, ·) = P_u[(T_1, ..., T_m) ∈ ·],    m ≥ 1, t > 0,

on B([0, ∞]^m). We consider when π_t^m ⇒ π^m on [0, ∞]^m pointwise in u. If G({0}) = 0, this is a direct consequence of the domain of attraction condition (2.5), but if G({0}) > 0, more thought is required. We begin by restricting the convergence to the smaller space E_m := (0, ∞]^{m−1} × [0, ∞]. Relatively compact sets in E_m are contained in rectangles (a, ∞]^{m−1} × [0, ∞], where a ∈ (0, 1).

Theorem 3.1. Let u_t = u(t) be a non-negative function such that u_t → u > 0 as t → ∞.

(a) The restrictions to E_m,

(3.2)    μ_t^m(u, ·) := π_t^m(u, · ∩ E_m)  and  μ^m(u, ·) := π^m(u, · ∩ E_m),

satisfy

(3.3)    μ_t^m(u_t, ·) →v μ^m(u, ·) in M_+(E_m),    t → ∞.

(b) If G({0}) = 0, we have

(3.4)    π_t^m(u_t, ·) ⇒ π^m(u, ·) on [0, ∞]^m,    t → ∞.

Proof. The Markov structure suggests an induction argument facilitated by Lemma 8.2 (p. 21). Consider (a) first. If m = 1, then (3.3) reduces to (2.7). Assume m ≥ 2, and let f ∈ C_K^+(E_m).

Writing E_m = (0, ∞] × E_{m−1}, we can find a > 0 and B ∈ K(E_{m−1}) such that f is supported on (a, ∞] × B. Now, observe that

μ_t^m(u_t, f) = ∫_{(0,∞]} K(tu_t, t dx_1) ∫_{E_{m−1}} K(tx_1, t dx_2) ··· K(tx_{m−1}, t dx_m) f(x) = ∫_{(0,∞]} K(tu_t, t dx_1) ∫_{E_{m−1}} μ_t^{m−1}(x_1, (dx_2, ..., dx_m)) f(x).

Defining

h_t(v) = ∫_{E_{m−1}} μ_t^{m−1}(v, dx) f(v, x)  and  h(v) = ∫_{E_{m−1}} μ^{m−1}(v, dx) f(v, x),

the previous expression becomes

μ_t^m(u_t, f) = ∫_{(0,∞]} K(tu_t, t dv) h_t(v).

Now, suppose v_t → v > 0; we verify

(3.5)    h_t(v_t) → h(v).

By continuity, we have f(v_t, x^t) → f(v, x) whenever x^t → x, and the induction hypothesis provides μ_t^{m−1}(v_t, ·) →v μ^{m−1}(v, ·). Also, f(v, ·) has compact support B, and without loss of generality μ^{m−1}(v, ∂B) = 0. Combining these facts, (3.5) follows from Lemma 8.2(b). Next, since the h_t and h have common compact support (a, ∞], and recalling from Proposition 2.3 that K(tu_t, t·) ⇒ K*(u, ·), Lemma 8.2(a) yields

μ_t^m(u_t, f) → ∫_{(0,∞]} K*(u, dv) h(v) = μ^m(u, f).

Implication (b) follows from essentially the same argument. For m ≥ 2, suppose f ∈ C([0, ∞]^m). Replacing μ by π and E_{m−1} by [0, ∞]^{m−1} in the definitions of h_t and h, we have

π_t^m(u_t, f) = ∫_{[0,∞]} K(tu_t, t dv) h_t(v).

This time Lemma 8.2(a) shows that h_t(v_t) → h(v) if v_t → v > 0, and since K*(u, (0, ∞]) = 1, resorting to Lemma 8.2(a) once more yields

π_t^m(u_t, f) → ∫_{[0,∞]} K*(u, dv) h(v) = π^m(u, f).

If G({0}) > 0, then K*(u, (0, ∞]) = 1 − G({0}) < 1, and for (3.4) to hold we would also need to know the behaviour of h_t(v_t) when v_t → 0. Behaviour near zero is controlled by an asymptotic condition related to the boundary distribution H. Previous work handled this using the regularity condition discussed in Section 4.

3.2. The Extremal Boundary.

The normalization employed in the domain of attraction condition (2.5) suggests that, starting from a large state t, the extreme states are approximately scalar multiples of t. For example, we would consider a transition from t into (t/3, 2t] to remain extreme. Thus, we think of states which can be made smaller than tδ for any δ, if t is large enough, as non-extreme. In this context, the set (0, √t] would consist of non-extreme states. Under (2.5), a tail chain path through (0, ∞) models the original chain X travelling among extreme states, and all of the non-extreme states are compacted into the state {0} in the state space of T. Therefore, if X is started from an extreme state, the portion of the tail chain depending solely on G is informative up until the first time X crosses down to a non-extreme state. If G({0}) = 0,

such a transition would become more and more unlikely as the initial state increases, in which case G provides a complete description of the behaviour of X in any finite number of steps following a visit to an extreme state (Theorem 3.1(b)). Drawing upon this interpretation, we develop a rigorous formulation of the distinction between extreme and non-extreme states, and we recast Theorem 3.1 as convergence on the unrestricted space [0, ∞]^m of the conditional fdds, given that X has not yet reached a non-extreme state.

Definition. Suppose K ∈ D(G). An extremal boundary for K is a non-negative function y(t) defined on (0, ∞), satisfying lim_{t→∞} y(t) = 0 and

(3.6)    K(t, t(0, y(t)]) → G({0}) as t → ∞.

Such a function is guaranteed to exist by Lemma 8.5 (p. 23). If G({0}) = 0, then y(t) ≡ 0 is a trivial choice. For any function 0 ≤ y(t) → 0, we have lim sup_t K(t, t(0, y(t)]) ≤ G({0}), so (3.6) is equivalent to

(3.7)    lim inf_t K(t, t(0, y(t)]) ≥ G({0}).

If y(t) is an extremal boundary, it follows that any function 0 ≤ ỹ(t) → 0 with ỹ(t) ≥ y(t) for t ≥ t_0 is also an extremal boundary for K. Taking ỹ(t) = sup_{s≥t} y(s) shows that, without loss of generality, we may assume y(t) to be non-increasing.

The extremal boundary has a natural formulation in terms of the update function. As in (2.10), let ψ(y, (Z, W)) = Zy + φ(y, W) be an update function in canonical form, where y is extreme. If Z > 0, then the next state is approximately Zy, another extreme state. Otherwise, if Z = 0, the next state is φ(y, W), and a transition from an extreme to a non-extreme state has taken place. This suggests choosing an extremal boundary whose order lies between t and φ(t, w).

Proposition 3.1. Suppose ψ(y, (Z, W)) is an update function in canonical form as in (2.10). If ζ(t) > 0 is a function on (0, ∞) such that

(3.8)    φ(t, w)/ζ(t) → 0 as t → ∞ whenever w ∈ B, for a set B with P[W ∈ B] = 1,

then lim inf_t K(t, [0, ζ(t)]) ≥ G({0}). Provided lim_t ζ(t)/t = 0, an extremal boundary is given by y(t) := ζ(t)/t.

Thus, if φ(t, w) = o(ζ(t)) and ζ(t) = o(t), then ζ(t)/t is an extremal boundary. For example, if ψ(y, (Z, W)) = Zy + W, so that φ(t, w) = w, then choosing ζ(t) to be any function with ζ(t) → ∞ and ζ(t) = o(t) makes ζ(t)/t an extremal boundary. Choosing ζ(t) = √t, we find that y(t) = 1/√t is an extremal boundary.

Proof. Since

P[ψ(t, (Z, W)) ≤ ζ(t), Z = 0] = P[φ(t, W) ≤ ζ(t), Z = 0] ≥ P[Z = 0] − P[φ(t, W)/ζ(t) > 1],

we have

lim inf_t K(t, [0, ζ(t)]) = lim inf_t P[ψ(t, (Z, W)) ≤ ζ(t)] ≥ P[Z = 0].

We will need an extremal boundary for which (3.6) still holds upon replacing the initial state t with tu_t, where u_t → u > 0. Compare the following extension with Proposition 2.3.

Proposition 3.2. If K ∈ D(G), then there exists an extremal boundary y*(t) such that

(3.9)    K(tu_t, t(0, y*(t)]) → G({0}) as t → ∞

for any non-negative function u_t = u(t) → u > 0.

We will refer to y* as a uniform extremal boundary.

Proof. Let y(t) be an extremal boundary for K. As a first step, fix u_0 > 1, and suppose u_0^{−1} ≤ u < u_0. Define ỹ(t) = u_0 y(t u_0^{−1}). Now, if u_t → u, then y^{(u)}(t) := u_t y(t u_t) satisfies (3.9), since

K(tu_t, t(0, y^{(u)}(t)]) = K(tu_t, tu_t (0, y(tu_t)]) → G({0}).

Here y^{(u)} depends on the choice of function u_t. However, since eventually u_0^{−1} < u_t < u_0 for t large enough, it follows that ỹ(t) ≥ y^{(u)}(t) for such t (recall y may be taken non-increasing). Hence ỹ(t) satisfies (3.9) for any u_t → u with u_0^{−1} < u < u_0.

Next, we remove the restriction on u via a diagonalization argument. For k = 2, 3, ..., let y_k(t) be extremal boundaries such that K(tu_t, t(0, y_k(t)]) → G({0}) whenever u_t → u for u ∈ (k^{−1}, k), and put y_0 = y_1 = y_2. Next, define the sequence {(s_k, x_k) : k = 0, 1, ...} inductively as follows. Setting s_0 = 0 and x_0 = y_0(1), choose s_k ≥ s_{k−1} + 1 such that y_j(t) ≤ k^{−1} x_{k−1} for all j = 0, ..., k whenever t ≥ s_k, and put x_k = max{y_j(s_k) : j = 0, ..., k}. Note that x_k ≤ k^{−1} x_{k−1}, so x_k ↓ 0, and s_k → ∞. Finally, set

y*(t) = Σ_{k=0}^∞ x_k 1_{[s_k, s_{k+1})}(t).

Observe that 0 ≤ y*(t) → 0, and suppose u_t → u > 0. Then u ∈ (k_0^{−1}, k_0) for some k_0, so K(tu_t, t(0, y_{k_0}(t)]) → G({0}), and for k ≥ k_0 our construction ensures that, whenever s_k ≤ t < s_{k+1}, we have y_{k_0}(t) ≤ y_{k_0}(s_k) ≤ x_k = y*(t). Therefore y*(t) ≥ y_{k_0}(t) for t ≥ s_{k_0}, so y* satisfies (3.9).

Henceforth, we assume any K ∈ D(G) is accompanied by a uniform extremal boundary denoted y(t), and we consider the extreme states on the order of t to be (t y(t), ∞]. If G({0}) = 0, then all positive states are extreme states. We now use the extremal boundary to reformulate the convergence of Theorem 3.1 on the larger space [0, ∞]^m. Put E_m^t = (y(t), ∞]^{m−1} × [0, ∞], so that E_m^t ↑ E_m = (0, ∞]^{m−1} × [0, ∞]. Recall the notation μ_t^m and μ^m from (3.1), (3.2) in Theorem 3.1 (p. 6).

Theorem 3.2. Let u_t = u(t) be a non-negative function such that u_t → u > 0 as t → ∞. Taking

μ̂_t^m(u, ·) = π_t^m(u, · ∩ E_m^t),

we have

μ̂_t^m(u_t, ·) →v μ^m(u, ·) in M_+([0, ∞]^m),    t → ∞.

Proof. Note that we can just as well write μ̂_t^m(u, ·) = μ_t^m(u, · ∩ E_m^t). Suppose m ≥ 2 and let f ∈ C_K^+([0, ∞]^m). For δ > 0, define A_δ = (δ, ∞]^{m−1} × [0, ∞], and choose δ such that μ^m(u, ∂A_δ) = 0. On the one hand, for large t we have

μ̂_t^m(u_t, f) = ∫_{[0,∞]^m} f(x) 1_{E_m^t}(x) μ_t^m(u_t, dx) ≥ ∫_{E_m} f(x) 1_{A_δ}(x) μ_t^m(u_t, dx) → ∫_{E_m} f(x) 1_{A_δ}(x) μ^m(u, dx)

as t → ∞ by Lemma 8.3 (p. 22). Letting δ ↓ 0 yields

(3.10)    lim inf_t μ̂_t^m(u_t, f) ≥ μ^m(u, f)

by monotone convergence. On the other hand, fixing δ, we can decompose the space according to the first downcrossing of δ:

(3.11)    μ̂_t^m(u_t, f) = ∫_{[0,∞]^m} f(x) 1_{A_δ}(x) μ̂_t^m(u_t, dx) + Σ_{k=1}^{m−1} ∫_{[0,∞]^m} f(x) 1_{A_δ^k}(x) μ̂_t^m(u_t, dx),

where A_δ^k = (δ, ∞]^{k−1} × [0, δ] × [0, ∞]^{m−k}. On the subsets A_δ^k we appeal to the bound on f, say M, to obtain

∫_{[0,∞]^m} f(x) 1_{A_δ^k}(x) μ̂_t^m(u_t, dx) ≤ M μ̂_t^m(u_t, A_δ^k).

Now,

(3.12)    μ̂_t^m(u_t, A_δ^k) ≤ μ_t^k(u_t, (δ, ∞]^{k−1} × (y(t), δ]) = μ_t^k(u_t, (δ, ∞]^{k−1} × [0, δ]) − μ_t^k(u_t, (δ, ∞]^{k−1} × [0, y(t)]).

Considering the second term, we have

μ_t^k(u_t, (δ, ∞]^{k−1} × [0, y(t)]) = ∫ K(tu_t, t dx_1) 1_{(δ,∞]}(x_1) ··· K(t x_{k−2}, t dx_{k−1}) 1_{(δ,∞]}(x_{k−1}) K(t x_{k−1}, t[0, y(t)]) = ∫_{[0,∞]^{k−1}} μ_t^{k−1}(u_t, dx) h_t(x),

where

h_t(x) = K(t x_{k−1}, t[0, y(t)]) 1_{(δ,∞]^{k−1}}(x),    x = (x_1, ..., x_{k−1}).

Moreover, if x^t → x ∈ (δ, ∞]^{k−1}, then

h_t(x^t) = K(t x^t_{k−1}, t[0, y(t)]) 1_{(δ,∞]^{k−1}}(x^t) → G({0}) 1_{(δ,∞]^{k−1}}(x),

using the fact that y(t) is a uniform extremal boundary. Since μ^{k−1}(u, ∂((δ, ∞]^{k−1})) = 0 (without loss of generality, by the choice of δ), we conclude that

μ_t^k(u_t, (δ, ∞]^{k−1} × [0, y(t)]) → G({0}) μ^{k−1}(u, (δ, ∞]^{k−1}) = μ^k(u, (δ, ∞]^{k−1} × {0})

as t → ∞. Now, let us return to (3.12). Given any ε > 0, by choosing δ small enough we can make

lim sup_t μ_t^k(u_t, (δ, ∞]^{k−1} × (y(t), δ]) ≤ μ^k(u, (δ, ∞]^{k−1} × [0, δ]) − μ^k(u, (δ, ∞]^{k−1} × {0}) ≤ μ^k(u, (0, ∞]^{k−1} × [0, δ]) − μ^k(u, (δ, ∞]^{k−1} × {0}) < (μ^k(u, (0, ∞]^{k−1} × {0}) + ε/2) − (μ^k(u, (0, ∞]^{k−1} × {0}) − ε/2) = ε,

i.e.

(3.13)    lim sup_t μ̂_t^m(u_t, A_δ^k) < ε,    k = 1, ..., m−1.

Therefore, (3.11) implies that, given ε > 0,

lim sup_t μ̂_t^m(u_t, f) ≤ ∫_{E_m} f(x) 1_{A_δ}(x) μ^m(u, dx) + M Σ_{k=1}^{m−1} lim sup_t μ̂_t^m(u_t, A_δ^k) < μ^m(u, f) + (m−1) M ε

for small enough δ. Since ε was arbitrary, combining this with (3.10) yields the result.
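The defining property (3.6) of the extremal boundary is straightforward to check numerically. The sketch below is a hedged illustration whose distributions are our own assumptions: it uses the chain ψ(y, (Z, W)) = Zy + W with P[Z = 0] = p > 0, for which Proposition 3.1 gives the extremal boundary y(t) = 1/√t, and estimates the one-step probability K(t, t(0, y(t)]) of landing in the non-extreme band (0, √t], which should approach G({0}) = p as t grows.

import numpy as np

rng = np.random.default_rng(1)
n, p = 200_000, 0.3
Z = np.where(rng.random(n) < p, 0.0, rng.exponential(size=n))
W = rng.exponential(size=n)     # positive noise, so phi(t, w) = w = o(sqrt(t))
for t in [1e2, 1e4, 1e6]:
    X1 = Z * t + W              # one step of the chain started from the state t
    # t*y(t) = sqrt(t); estimate K(t, t(0, y(t)]) = P[0 < X1 <= sqrt(t)]
    prob = np.mean((X1 > 0) & (X1 <= np.sqrt(t)))
    print(f"t = {t:.0e}: K(t, t(0, y(t)]) ~= {prob:.4f}   (G({{0}}) = {p})")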

3.3. The Extremal Component.

Having thus formalized the distinction between extreme and non-extreme states, we return to the question of phrasing a fdd limit result for X when H is unspecified. The extremal boundary allows us to interpret the first hitting time of {0} by the tail chain as approximating the time of the first transition from extreme down to non-extreme. In this terminology, Theorem 3.2 provides a result given that such a transition has yet to occur.

Define the first hitting time of a non-extreme state

τ(t) = inf{n ≥ 0 : X_n ≤ t y(t)}.

For a Markov chain started from tu_t, where u_t → u > 0, we have tu_t > t y(t) for large t, so τ(t) is the first downcrossing of the extremal boundary. For the tail chain T, put τ = inf{n ≥ 0 : T_n = 0}. Given T_0 > 0, we may write τ = inf{n ≥ 1 : ξ_n = 0}, where the {ξ_n} ∼ G are iid and independent of T_0; i.e. τ follows a Geometric distribution with parameter p = G({0}). Thus P[τ = m] = p(1−p)^{m−1} for m ≥ 1 if p > 0, and P[τ = ∞] = 1 if p = 0. Theorem 3.2 becomes

(3.14)    P_{tu_t}[(t^{−1}(X_1, ..., X_m), τ(t)) ∈ ·] →v P_u[((T_1, ..., T_m), τ) ∈ ·],

implying that τ approximates τ(t):

(3.15)    P_{tu_t}[τ(t) = m] → P[τ = m],    t → ∞, u_t → u > 0.

So if G({0}) > 0, X takes an average of approximately G({0})^{−1} steps to return to a non-extreme state; but if G({0}) = 0, then P_{tu_t}[τ(t) ≤ m] → 0 for any m ≥ 1, so starting from a larger and larger initial state, it takes longer and longer for X to cross down to a non-extreme state.

Let T* be the tail chain associated with (G, ε_0). For {ξ_n} ∼ G iid and independent of T*_0,

(3.16)    T*_n = T*_0 ξ_1 ··· ξ_n.

We restate (3.14) in terms of a process derived from X, called the extremal component of X, whose fdds converge weakly to those of T*. The extremal component is the part of X whose asymptotic behavior is controlled by G alone.

Definition. The extremal component of X relative to t is the process X^t defined for t > 0 by

X^t_n = X_n 1_{{n < τ(t)}},    n = 0, 1, ....

Observe that X^t is a Markov chain on [0, ∞) with transition kernel

K^t(x, A) = K(x, A ∩ (t y(t), ∞)) + ε_0(A) K(x, [0, t y(t)]) if x > t y(t),    K^t(x, A) = ε_0(A) if x ≤ t y(t).

It follows that K^t(t, t·) ⇒ G as t → ∞, and additionally that K^t(t, {0}) → G({0}). The relation between the component processes X^t, T* and the complete ones is

P_{tu_t}[t^{−1}(X^t_1, ..., X^t_m) ∈ ·, τ(t) > m] = P_{tu_t}[t^{−1}(X_1, ..., X_m) ∈ ·, τ(t) > m]

and

P_u[(T*_1, ..., T*_m) ∈ ·, τ > m] = P_u[(T_1, ..., T_m) ∈ ·, τ > m].

Theorem 3.3. Let u_t = u(t) ≥ 0 satisfy u_t → u > 0 as t → ∞. Then, on [0, ∞]^m,

π̂_t^m(u_t, ·) := P_{tu_t}[t^{−1}(X^t_1, ..., X^t_m) ∈ ·] ⇒ P_u[(T*_1, ..., T*_m) ∈ ·],    t → ∞.

Proof. Suppose m ≥ 2 and f ∈ C([0, ∞]^m), and assume first that f ≥ 0. Then f ∈ C_K^+([0, ∞]^m) as well, since the space is compact. Recall the notation of Theorem 3.2. Conditioning on τ(t), and using the fact that X^t is absorbed at {0}, we can write

π̂_t^m(u_t, f) = ∫_{(0,∞]^m} f(x) π̂_t^m(u_t, dx) + Σ_{k=1}^m ∫_{(0,∞]^{k−1}×{0}} f(x_1, ..., x_{k−1}, 0, ..., 0) π̂_t^k(u_t, dx)

by the Markov property. Since

π̂_t^m(u_t, · ∩ (0, ∞]^m) = P_{tu_t}[t^{−1}(X_1, ..., X_m) ∈ ·, τ(t) > m] = μ̂_t^{m+1}(u_t, (· ∩ (0, ∞]^m) × [0, ∞]),

the first term becomes

μ̂_t^{m+1}(u_t, f ⊗ 1) → μ^{m+1}(u, f ⊗ 1) = ∫_{(0,∞]^m} f(x) π^m(u, dx) = ∫_{(0,∞]^m} f(x) P_u[(T*_1, ..., T*_m) ∈ dx]

as t → ∞, where (f ⊗ 1)(x, x_{m+1}) = f(x) 1_{(0,∞]^m}(x). Next, for any measurable A ⊂ [0, ∞]^k, write A_0 = {x ∈ [0, ∞]^{k−1} : (x, 0) ∈ A}, and observe that

π̂_t^k(u_t, A ∩ ((0, ∞]^{k−1} × {0})) = P_{tu_t}[t^{−1}(X^t_1, ..., X^t_{k−1}) ∈ A_0 ∩ (0, ∞]^{k−1}, X^t_k = 0] = P_{tu_t}[t^{−1}(X_1, ..., X_{k−1}) ∈ A_0 ∩ (y(t), ∞]^{k−1}, t^{−1} X_k ≤ y(t)] = μ̂_t^k(u_t, A_0 × [0, ∞]) − μ̂_t^{k+1}(u_t, A_0 × [0, ∞]^2).

Applying this reasoning to the terms in the summation yields

∫ f(x_1, ..., x_{k−1}, 0, ..., 0) μ̂_t^k(u_t, dx) − ∫ f(x_1, ..., x_{k−1}, 0, ..., 0) μ̂_t^{k+1}(u_t, dx) → ∫ f(x_1, ..., x_{k−1}, 0, ..., 0) μ^k(u, dx) − ∫ f(x_1, ..., x_{k−1}, 0, ..., 0) μ^{k+1}(u, dx) = ∫_{(0,∞]^{k−1}×{0}} f(x_1, ..., x_{k−1}, 0, ..., 0) π^k(u, dx) = ∫ f(x) P_u[(T*_1, ..., T*_m) ∈ dx],

the last integral taken over {x : x_1, ..., x_{k−1} > 0, x_k = 0}. Combining these limits shows that E_{tu_t} f(t^{−1}(X^t_1, ..., X^t_m)) → E_u f(T*_1, ..., T*_m) as t → ∞. Finally, if f is not non-negative, write f = f⁺ − f⁻. Since each of f⁺ and f⁻ is non-negative, bounded, and continuous, we can apply the above argument to each.

4. The Regularity Condition

Previous work on the tail chain derives fdd convergence of X to T under a single assumption analogous to our domain of attraction condition (2.5). As we observed in Section 3.1, when G({0}) = 0, fdd convergence of {t^{−1}X} follows directly, but when G({0}) > 0, it was common to assume an additional technical condition which made (2.5) imply fdd convergence to T as well. This condition, which we refer to as the regularity condition, is an asymptotic convergence assumption prescribing the boundary distribution to be H = ε_0. We consider equivalences between different forms appearing in the literature, in terms of both kernels and update functions, and show that, under the regularity condition, the extremal behaviour of X is asymptotically the same as that of its extremal component X^t.
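Before examining the various forms the regularity condition has taken, here is a quick Monte Carlo sketch of the hitting-time approximation (3.15), under assumed illustrative distributions: for the chain ψ(y, (Z, W)) = Zy + W with p = P[Z = 0] > 0 and extremal boundary y(t) = 1/√t, the first downcrossing time τ(t) from X_0 = t should be approximately Geometric(p).

import numpy as np

rng = np.random.default_rng(2)
p, t, n_paths, horizon = 0.4, 1e8, 50_000, 6
counts = np.zeros(horizon + 1)
for _ in range(n_paths):
    x = t                               # start from the extreme state X_0 = t
    for n in range(1, horizon + 1):
        z = 0.0 if rng.random() < p else rng.exponential()
        x = z * x + rng.exponential()   # X_n = Z_n X_{n-1} + W_n
        if x <= np.sqrt(t):             # downcrossing of t*y(t) = sqrt(t)
            counts[n] += 1
            break
empirical = counts[1:] / n_paths
geometric = p * (1 - p) ** np.arange(horizon)   # P[tau = m] = p(1-p)^(m-1)
print("empirical :", np.round(empirical, 4))
print("geometric :", np.round(geometric, 4))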

In cases where G({0}) > 0, Perfekt [18, 19] requires that

(4.1)    lim_{δ↓0} lim sup_{t→∞} sup_{u∈(0,δ]} K(tu, t(1, ∞]) = 0,

while Segers [23] stipulates that the chosen update function corresponding to K must be of at most linear order in the initial state:

(4.2)    lim sup_{t→∞} sup_{0≤y≤t} t^{−1} ψ(y, v) < ∞ for all v ∈ B_0, where P[V ∈ B_0] = 1.

Smith [24] used a variant of (4.1). We deem a formulation in terms of distributional convergence to be instructive in our context.

Definition. A Markov transition kernel K ∈ D(G) satisfies the regularity condition if

(4.3)    K(tu_t, t·) ⇒ ε_0 on [0, ∞] as t → ∞

for any non-negative function u_t = u(t) → 0.

Note that in (2.7) (p. 4) we had u_t → u > 0. We interpret (4.3) as designating the boundary distribution H to be ε_0. We now consider the relationships between (4.1), (4.2) and (4.3), and propose an intuitive equivalent for update functions in canonical form.

Proposition 4.1. Suppose K ∈ D(G), and let (ψ, V) be an update function corresponding to K such that

(4.4)    t^{−1} ψ(t, v) → ξ(v) whenever v ∈ B, for a set B with P[V ∈ B] = 1, and ξ(V) ∼ G.

Then:

(a) Condition (4.1) is necessary and sufficient for K to satisfy the regularity condition (4.3).

(b) Condition (4.2) is sufficient for K to satisfy the regularity condition (4.3).

(c) If ψ is in canonical form, i.e. ψ(y, (Z, W)) = Zy + φ(y, W), then ψ satisfies (4.2) if and only if φ(·, w) is bounded on bounded neighbourhoods of 0 for each w ∈ C, a set for which P[W ∈ C] = 1.

Proof. (a) Assume (4.1), and suppose u_t → 0. We show K(tu_t, t(x, ∞]) → 0 for any x > 0. Write

ω(t, δ) = sup_{u∈(0,δ]} K(tu, t(1, ∞]).

Let ε > 0 be given, and choose δ small enough that lim sup_t ω(t, δ) < ε/2. Then, for t large enough that u_t ≤ δx,

K(tu_t, t(x, ∞]) = K(tx (u_t/x), tx(1, ∞]) ≤ ω(tx, δ) < lim sup_s ω(s, δ) + ε/2 < ε

for t large enough, by our choice of δ.

Conversely, assume that K satisfies (4.3) but that (4.1) fails. Choose ε > 0 and a sequence δ_n ↓ 0 such that lim sup_t ω(t, δ_n) ≥ ε for n = 1, 2, .... Then for each n we can find a sequence t_k^{(n)} → ∞ as k → ∞ such that ω(t_k^{(n)}, δ_n) ≥ ε for each k. Diagonalize to find k_1 < k_2 < ··· such that s_n = t_{k_n}^{(n)} → ∞ and ω(s_n, δ_n) ≥ ε for all n. Finally, for n = 1, 2, ..., choose u_n ∈ (0, δ_n] such that K(s_n u_n, s_n(1, ∞]) > ω(s_n, δ_n) − ε/2, and put u(t) = Σ_n u_n 1_{[s_n, s_{n+1})}(t). Clearly u(t) → 0, but K(s_n u(s_n), s_n(1, ∞]) ≥ ε/2 for all n, contradicting (4.3).

(b) Write M(v) = lim sup_{t→∞} sup_{0≤y≤t} t^{−1} ψ(y, v). Since, for δ > 0,

sup_{0≤y≤δ} t^{−1} ψ(ty, v) = δ sup_{0≤x≤tδ} (tδ)^{−1} ψ(x, v),

we have

lim sup_{t→∞} sup_{0≤y≤δ} t^{−1} ψ(ty, v) = δ M(v).

Now, suppose u_t → 0. Given any δ > 0, we have

t^{−1} ψ(tu_t, v) ≤ sup_{0≤y≤δ} t^{−1} ψ(ty, v)

provided t is large enough, so lim sup_t t^{−1} ψ(tu_t, v) ≤ δ M(v). Consequently, lim sup_t t^{−1} ψ(tu_t, v) = 0 for every v such that M(v) < ∞. Under (4.2), this means that P[t^{−1} ψ(tu_t, V) → 0] = 1, implying (4.3).

(c) Suppose first that χ_w(a) = sup_{0≤y≤a} φ(y, w) < ∞ for all a > 0, whenever w ∈ C. Fixing w ∈ C and z ≥ 0, note that

sup_{0≤y≤t} t^{−1} ψ(y, (z, w)) ≤ z + sup_{0≤y≤t} t^{−1} φ(y, w),

and observe for any a > 0 that

sup_{0≤y≤t} t^{−1} φ(y, w) ≤ t^{−1} χ_w(a) + sup_{a≤y} y^{−1} φ(y, w).

Choosing a large enough that sup_{a≤y} y^{−1} φ(y, w) ≤ 1, say, it follows that

lim sup_t sup_{0≤y≤t} t^{−1} ψ(y, (z, w)) ≤ z + 1,

so v = (z, w) ∈ B_0. Therefore P[(Z, W) ∈ B_0] ≥ P[Z ≥ 0, W ∈ C] = 1. Conversely, suppose there is a set D with P[W ∈ D] > 0 such that w ∈ D implies χ_w(a) = ∞ for some 0 < a < ∞. Since sup_{0≤y≤t} t^{−1} ψ(y, (z, w)) ≥ t^{−1} χ_w(t) = ∞ for t ≥ a, we have [0, ∞) × D ⊂ B_0^c, contradicting (4.2).

The exclusion of necessity from part (b) results from the fact that a kernel K does not uniquely specify an update function ψ. Even when K satisfies the regularity condition (4.3), it may be possible to choose a nasty update function ψ which satisfies (4.4) but not (4.2). However, in such cases there may exist a different update function ψ* corresponding to K which does satisfy (4.2). Here is an example of such a situation. We exhibit an update function ψ for which (i) (4.4) holds; (ii) (4.2) fails, because condition (c) in Proposition 4.1 fails; but yet (iii) the corresponding kernel satisfies the regularity condition (4.3). Furthermore, we present a different choice of update function corresponding to the same kernel which satisfies (4.2).

Define ψ(y, V), V = (Z, W), by ψ(y, V) = Zy + φ(y, W), where

φ(y, w) = Σ_{k=1}^∞ k 1_{{yw = 1/k}}

and W ∼ U(0, 1).

(i) Since φ(t, w) = 0 for t > 1/w, it is clear that ψ satisfies (4.4) with ξ = Z.

(ii) Observe that for any w ∈ (0, 1), φ(·, w) is unbounded on the interval (0, 1], since φ(1/(kw), w) = k for all large k. Therefore, by part (c) of Proposition 4.1, (4.2) cannot hold for ψ.

(iii) However, the corresponding kernel does satisfy the regularity condition (4.3). Suppose u_t → 0 and a > 0 is arbitrarily large. Write

P[t^{−1} ψ(tu_t, (Z, W)) > x] = P[Z u_t + t^{−1} φ(tu_t, W) > x] ≤ P[t^{−1} φ(tu_t, W) > x′] + P[Z > a],

choosing 0 < x′ < x − a u_t, which is possible for large t since u_t → 0. Since, for any t, {w : φ(tu_t, w) > tx′} ⊂ {(tu_t k)^{−1} : k = 1, 2, ...}, a set of measure 0 with respect to P[W ∈ ·], (4.3) follows by letting a → ∞.

On the other hand, the update function ψ*(y, Z) = Zy does satisfy (4.2), and for any y,

P[ψ*(y, Z) ≠ ψ(y, (Z, W))] = P[W ∈ {(yk)^{−1} : k = 1, 2, ...}] = 0,

so ψ* does indeed correspond to K.

The regularity condition (4.3) restricts attention to Markov chains for which the probability of returning to an extreme state within the next m steps after falling below the extremal boundary is asymptotically negligible. For such chains, as well as for those for which y(t) ≡ 0 is an extremal boundary for K, X has the same asymptotic behaviour as its extremal component, as described next.

Theorem 4.1. Suppose X ∼ K with K ∈ D(G), and let ρ be a metric on R^m. If y(t) ≡ 0 is an extremal boundary for K, or if K satisfies the regularity condition (4.3), then for any ε > 0 we have

(4.5)    P_{tu_t}[ρ(t^{−1}(X^t_1, ..., X^t_m), t^{−1}(X_1, ..., X_m)) > ε] → 0,    t → ∞, u_t → u > 0.

Consequently,

(4.6)    P_{tu_t}[t^{−1}(X_1, ..., X_m) ∈ ·] ⇒ P_u[(T*_1, ..., T*_m) ∈ ·],    t → ∞, u_t → u > 0.

First let us extend the regularity condition to higher-order transition kernels.

Lemma 4.1. If K satisfies (4.3), then so do the m-step transition kernels K^m.

Proof. This is established by induction. Let u_t → 0 and f ∈ C([0, ∞]). For m ≥ 2, we have

∫ K^m(tu_t, t dx) f(x) = ∫_{[0,∞]} K^{m−1}(tu_t, t dv) ∫_{[0,∞]} K(tv, t dx) f(x).

Assume that K^{m−1}(tu_t, t·) ⇒ ε_0; (4.3) implies that ∫ K(tv_t, t dx) f(x) → f(0) whenever v_t → 0. Therefore, by Lemma 8.2(a) (p. 21), we conclude that ∫ K^m(tu_t, t dx) f(x) → f(0) = ε_0(f).

Proof of Theorem 4.1. Suppose ε > 0 and u_t → u > 0. Write

P_{tu_t}[ρ(t^{−1}X^t, t^{−1}X) > ε] = Σ_{k=1}^m P_{tu_t}[ρ(t^{−1}X^t, t^{−1}X) > ε, τ(t) = k],

where X^t and X abbreviate the vectors of the first m states. Since X_j = X^t_j while j < τ(t), for the k-th summand to converge to 0 it is sufficient that

P_{tu_t}[|X^t_j − X_j|/t > δ, τ(t) = k] = P_{tu_t}[X_j/t > δ, τ(t) = k] → 0

for j = k, ..., m and any δ > 0. If j = k, we have

P_{tu_t}[X_j/t > δ, τ(t) = k] ≤ P_{tu_t}[X_k/t > δ, X_k/t ≤ y(t)] = 0

for large t. For j > k, recalling the notation of Theorem 3.2,

P_{tu_t}[X_j/t > δ, τ(t) = k] = ∫_{E_k^t} 1_{[0,y(t)]}(x_k) P_{tu_t}[X_j/t > δ | X_k/t = x_k] P_{tu_t}[t^{−1}(X_1, ..., X_k) ∈ dx] = ∫_{[0,∞]^k} P_{t x_k}[X_{j−k} > tδ] 1_{[0,y(t)]}(x_k) μ̂_t^k(u_t, dx)

using the Markov property. We claim that this integral → 0 as t → ∞. If y(t) ≡ 0, this follows directly. Otherwise, recall that μ̂_t^k(u_t, ·) →v μ^k(u, ·), and consider

h_t(x) = P_{t x_k}[X_{j−k} > tδ] 1_{[0,y(t)]}(x_k),    x = (x_1, ..., x_k).

Suppose x^t → x ∈ [0, ∞]^k. If x_k > 0, then h_t(x^t) = 0 for large t, because y(t) → 0. Otherwise, if x_k = 0, we have h_t(x^t) → 0, since Lemma 4.1 implies that P_{t x^t_k}[X_{j−k} > tδ] → 0 as t → ∞. Lemma 8.2(b) establishes (4.5); (4.6) follows by Slutsky's theorem.

Therefore, X converges to T* in fdds under (a) G({0}) = 0, (b) G({0}) > 0 combined with (4.3), or (c) G({0}) > 0 combined with the extremal boundary y(t) ≡ 0. In each case, we will be able to replace the extremal component X^t with the complete chain X in the results of Sections 5.1 and 5.2. However, that y(t) ≡ 0 is an extremal boundary, and consequently that (4.6) holds, does not imply that the regularity condition holds, regardless of G({0}); in particular, a kernel for which G({0}) = 0 need not satisfy (4.3). This is illustrated in Example 6.3.

5. Convergence of the Unconditional FDDs

5.1. Effect of a Regularly Varying Initial Distribution.

So far, our convergence results required that the initial state become large, and the only distributional assumption was that the transition kernel K determining X be attracted to some distribution G. To obtain a result for the unconditional distribution of (X_0, ..., X_m), we require an additional assumption about how likely the initial observation X_0 is to be large. Using Lemma 8.4, the results of the previous sections extend to multivariate regular variation on the cone E_m = (0, ∞] × [0, ∞]^m when the distribution of X_0 has a regularly varying tail. This cone is smaller than the cone [0, ∞]^{m+1} \ {0} traditionally employed in extreme value theory, because the kernel domain of attraction condition (2.5) is uninformative when the initial state is not extreme. This is analogous to the setting of the Conditional Extreme Value Model considered in [8, 13].

Proposition 5.1. Assume X ∼ K with K ∈ D(G), and X_0 ∼ F, where F is a distribution on [0, ∞) with a regularly varying tail. This means that, as t → ∞, for some scaling function b(t),

t F(b(t)·) →v ν_α in M_+((0, ∞]),

where ν_α((x, ∞]) = x^{−α} and α > 0. Define the measure ν^m on E_m = (0, ∞] × [0, ∞]^m by

(5.1)    ν^m(dx_0, dx) = ν_α(dx_0) P_{x_0}[(T*_1, ..., T*_m) ∈ dx].

Then, for m = 1, 2, ..., the following convergences take place as t → ∞:

(a) In M_+((0, ∞]^m × [0, ∞]),

t P[b(t)^{−1}(X_0, X_1, ..., X_m) ∈ · ∩ ((0, ∞]^m × [0, ∞])] →v ν^m(· ∩ ((0, ∞]^m × [0, ∞])).

(b) In M_+(E_m),

t P[b(t)^{−1}(X_0, X^{b(t)}_1, ..., X^{b(t)}_m) ∈ ·] →v ν^m.

(c) If either G({0}) = 0, y(t) ≡ 0 is an extremal boundary, or K satisfies the regularity condition (4.3), then, in M_+(E_m),

t P[b(t)^{−1}(X_0, X_1, ..., X_m) ∈ ·] →v ν^m.

(d) In M_+((0, ∞]),

t P[X_0/b(t) ∈ dx_0, τ(b(t)) ≥ m] →v (1 − G({0}))^{m−1} ν_α(dx_0).

Remark. These convergence statements may be reformulated equivalently as, say,

P[b(t)^{−1}(X_0, X_1, ..., X_m) ∈ · | X_0 > b(t)] ⇒ P[(T*_0, T*_1, ..., T*_m) ∈ ·],

where T*_0 ∼ Pareto(α). This is the form considered by Segers [23].

Proof. Apply Lemma 8.4 (p. 22) to the results of Theorems 3.1, 3.3 and 4.1, and (3.14).
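A numerical sketch of the simplest case m = 1 of Proposition 5.1 (all distributional choices here are our own illustrative assumptions): with X_0 Pareto(α), so that b(t) = t^{1/α}, and one chain step X_1 = Z X_0 + W with Z bounded, the normalized tail t P[X_1 > b(t)x] should stabilize near E[Z^α] x^{−α}, which is the ν^1-mass of (0, ∞] × (x, ∞] (see the computation immediately below).

import numpy as np

rng = np.random.default_rng(3)
alpha, n = 1.5, 2_000_000
X0 = rng.pareto(alpha, size=n) + 1.0    # P[X_0 > x] = x^(-alpha) for x >= 1
Z = rng.uniform(size=n)                 # G = Uniform(0,1); E[Z^alpha] = 1/(1+alpha)
X1 = Z * X0 + rng.exponential(size=n)   # one chain step: psi(y, (Z, W)) = Z*y + W
x = 1.0
for t in [1e2, 1e3, 1e4]:
    b = t ** (1 / alpha)                # scaling function b(t) = t^(1/alpha)
    est = t * np.mean(X1 > b * x)       # estimate of t * P[X_1 > b(t) x]
    print(f"t = {t:.0e}: t*P[X1 > b(t)x] ~= {est:.3f}   "
          f"(limit E[Z^alpha]*x^(-alpha) = {1 / (1 + alpha):.3f})")

At the largest t, only on the order of a hundred of the two million samples exceed b(t)x, so some Monte Carlo noise around the limiting value is expected.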

In the case m = 1, E_1 is a rotated version of the cone used in the conditional extreme value model in [8, 9], and the limit can be expressed as

ν^1((x_0, ∞] × [0, x_1]) = ∫_{(x_0,∞]} ν_α(du) P[ξ ≤ x_1/u] = x_0^{−α} P[ξ ≤ x_1/x_0] − x_1^{−α} E[ξ^α 1_{{ξ ≤ x_1/x_0}}]

for (x_0, x_1) ∈ (0, ∞] × [0, ∞], where ξ ∼ G (and E ξ^α may be infinite). Since ν^1((x_0, ∞] × {0}) = x_0^{−α} P[ξ = 0] and ν^1((0, ∞] × (x_1, ∞]) = x_1^{−α} E ξ^α, sets on the x_0-axis incur mass proportional to G({0}), and sets bounded away from this axis are weighted according to E ξ^α. A consequence of the second observation is that

lim inf_{t→∞} t P[X_1/b(t) > x] ≥ (E ξ^α) x^{−α}.

Thus, knowledge concerning the tail behaviour of X_1 imposes a restriction, via the α-th moment, on the distributions G to which K can be attracted. For example, if t P[X_1/b(t) ∈ ·] →v ν_α, then we must have E ξ^α ≤ 1; this property will be examined further in the next section, and appears in various forms in Segers [23] and Basrak and Segers [1], in the stationary setting.

5.2. Joint Tail Convergence.

What additional assumptions are necessary for convergences (b) and (c) of the previous result to take place on the larger cone E*_m = [0, ∞]^{m+1} \ {0}? This was considered in [1, 23] for stationary Markov chains. In (b), the dependence on the extremal threshold, and hence on t, means we are in the context of a triangular array and not, strictly speaking, in the setting of joint regular variation. However, the result is still useful, for example, to derive a point process convergence via the Poisson transform [21, p. 183].

As a first step, we characterize convergence on the larger cone by decomposing it into smaller, more familiar cones. This is similar to Theorem 6.1 in [23] and to one of the implications of Theorem 2.1 in [1]. As a convention in what follows, set [0, ∞]^0 × A = A. Also, recall the notation E_m = (0, ∞] × [0, ∞]^m.

Proposition 5.2. Suppose Y_t = (Y_{t,0}, Y_{t,1}, ..., Y_{t,m}) is a random vector on [0, ∞]^{m+1} for each t > 0. Then there exists a non-null Radon measure μ on E*_m = [0, ∞]^{m+1} \ {0} such that

(5.2)    t P[(Y_{t,0}, Y_{t,1}, ..., Y_{t,m}) ∈ ·] →v μ in M_+(E*_m),    t → ∞,

if and only if, for j = 0, ..., m, there exist Radon measures μ_j on E_{m−j} = (0, ∞] × [0, ∞]^{m−j}, not all null, such that

(5.3)    t P[(Y_{t,j}, ..., Y_{t,m}) ∈ ·] →v μ_j in M_+(E_{m−j}).

The relation between the limit measures is the following: for j = 0, ..., m,

μ_j = μ([0, ∞]^j × ·) on E_{m−j},

and

μ([0, x]^c) = Σ_{j=0}^m μ_j((x_j, ∞] × [0, x_{j+1}] × ··· × [0, x_m])    for x ∈ E*_m.

Furthermore, given j ∈ {0, ..., m−1}, if A ⊂ [0, ∞]^{m−j} \ {0} is relatively compact, then μ_j((0, ∞] × A) < ∞.

Proof. Assume first that (5.2) holds. Fixing j ∈ {0, ..., m}, define μ_j := μ([0, ∞]^j × ·) (so that μ_0 = μ). Let A ⊂ E_{m−j} be relatively compact with μ_j(∂A) = 0. Then A* = [0, ∞]^j × A is relatively compact in E*_m, and ∂_{E*_m} A* = [0, ∞]^j × ∂_{E_{m−j}} A, so μ(∂A*) = μ_j(∂A) = 0. Therefore,

t P[(Y_{t,j}, ..., Y_{t,m}) ∈ A] = t P[(Y_{t,0}, ..., Y_{t,m}) ∈ A*] → μ(A*) = μ_j(A),

establishing (5.3). Conversely, suppose we have (5.3) for j = 0, ..., m. For x ∈ [0, ∞]^{m+1}, define

h(x) = Σ_{j=0}^m μ_j((x_j, ∞] × [0, x_{j+1}] × ··· × [0, x_m]).

Decompose [0, x]^c as a disjoint union

(5.4)    [0, x]^c = ⋃_{j=0}^m [0, ∞]^j × (x_j, ∞] × [0, x_{j+1}] × ··· × [0, x_m],

and observe that, at points of continuity of the limit,

(5.5)    t P[Y_t ∈ [0, x]^c] = Σ_{j=0}^m t P[(Y_{t,j}, ..., Y_{t,m}) ∈ (x_j, ∞] × [0, x_{j+1}] × ··· × [0, x_m]] → h(x).

Hence (5.2) holds with the limit measure μ defined by μ([0, x]^c) = h(x). Indeed, given f ∈ C_K^+(E*_m), we can find δ > 0 such that x_δ = (δ, ..., δ) is a continuity point of h and f is supported on [0, x_δ]^c. Therefore,

t E f(Y_t) ≤ (sup_{x∈E*_m} f(x)) sup_{t>0} t P[Y_t ∈ [0, x_δ]^c] < ∞,

implying that the set {t P[Y_t ∈ ·] : t > 0} is relatively compact in M_+(E*_m). Furthermore, if t_k P[Y_{t_k} ∈ ·] →v μ′ and s_k P[Y_{s_k} ∈ ·] →v μ″ as k → ∞, then μ′ = μ″ = μ on sets [0, x]^c which are continuity sets of μ, by (5.5). This extends to measurable rectangles in E*_m bounded away from 0 whose vertices are continuity points of h, leading us to the conclusion that μ′ = μ″ = μ on E*_m. Moreover, since we can decompose [0, x]^c for any x ∈ E*_m as in (5.4), it is clear that μ is non-null iff not all of the μ_j are null.

Finally, for 0 ≤ j ≤ m−1, if A ⊂ [0, ∞]^{m−j} \ {0} is relatively compact, then it is contained in [(0, ..., 0), (x_{j+1}, ..., x_m)]^c for some (x_{j+1}, ..., x_m) ∈ [0, ∞)^{m−j}. Applying (5.4) once again, we find that

μ_j((0, ∞] × A) ≤ μ([0, ∞]^{j+1} × A) ≤ Σ_{k=j+1}^m μ([0, ∞]^k × (x_k, ∞] × [0, x_{k+1}] × ··· × [0, x_m]) = Σ_{k=j+1}^m μ_k((x_k, ∞] × [0, x_{k+1}] × ··· × [0, x_m]) < ∞.

Consequently, the extension of the convergences in Proposition 5.1 to the larger cone E*_m follows from regular variation of the marginal tails.

Theorem 5.1. Suppose X ∼ K with K ∈ D(G), and let b(t) be a scaling function and α > 0. Then

(5.6)    t P[b(t)^{−1}(X_0, X^{b(t)}_1, ..., X^{b(t)}_m) ∈ ·] →v μ in M_+(E*_m),    t → ∞,

where μ restricted to E_m is given by

μ(dx_0, dx) = ν_α(dx_0) P_{x_0}[(T*_1, ..., T*_m) ∈ dx] = ν^m(dx_0, dx),

if and only if

(5.7)    t P[X^{b(t)}_j / b(t) ∈ ·] →v c_j ν_α in M_+((0, ∞]),    j = 0, ..., m,

with c_0 = 1 and (E ξ^α)^j ≤ c_j < ∞ for j = 1, ..., m.

Proof. Assume first that (5.6) holds. It follows that t P[X_0 > b(t)x] → μ((x, ∞] × [0, ∞]^m) = x^{−α} for x > 0. Hence b(t) ∈ RV_{1/α}, so by (5.6) again we have, for j ≥ 1 and x > 0,

t P[X^{b(t)}_j > b(t)x] → μ([0, ∞]^j × (x, ∞] × [0, ∞]^{m−j}) = c_j x^{−α},

and

c_j ≥ μ((0, ∞] × [0, ∞]^{j−1} × (1, ∞] × [0, ∞]^{m−j}) = ∫_{(0,∞]} ν_α(du) P[ξ_1 ··· ξ_j > u^{−1}] = E(ξ_1 ··· ξ_j)^α = (E ξ^α)^j.

Conversely, suppose that (5.7) holds for j = 0, ..., m. Lemma 8.4 implies that, in M_+(E_{m−j}),

t P[b(t)^{−1}(X^{b(t)}_j, ..., X^{b(t)}_m) ∈ (dx_0, dx)] →v c_j ν_α(dx_0) P_{x_0}[(T*_1, ..., T*_{m−j}) ∈ dx] =: μ_j(dx_0, dx)

by the Markov property, and Proposition 5.2 yields (5.6), with μ restricted to E_m equal to μ_0 = ν^m.

At the end of Section 4, cases were outlined in which we can replace X^{b(t)}_j by X_j. Theorem 5.1 is most striking for these, since it shows that, for a Markov chain whose kernel is in a domain of attraction, it is enough to know that the marginal tails are regularly varying in order to obtain joint regular variation of the fdds. In particular, if X has a regularly varying stationary distribution, then the fdds are jointly regularly varying. This result was presented by Segers [23], and Basrak and Segers [1] showed that, for a general stationary process, joint regular variation of the fdds is equivalent to the existence of a tail process, which reduces to the tail chain in the case of Markov chains. However, what Proposition 5.1 emphasizes is that it is the marginal tail behaviour alone, rather than stationarity, which provides the link with joint regular variation.

Theorem 5.1 also extends the observation made in Section 5.1 that, for a Markov chain whose kernel is in a domain of attraction, knowledge of the marginal tail behaviour constrains the class of possible limit distributions G via its moments. If a particular choice of regularly varying initial distribution leads to t P[X_j > b(t)·] →v a_j ν_α, then we have E ξ^α ≤ a_j^{1/j}. In particular, if X admits a stationary distribution whose tail is RV_{−α}, then E ξ^α ≤ 1.

Our first example illustrates the main results.

6. Examples

Example 6.1. Let V = (Z, W) be any random vector on [0, ∞) × R. Consider the update function ψ(y, V) = (Zy + W)^+ and its canonical form

ψ(y, V) = Zy + φ(y, W) = Zy + W 1_{{W > −Zy}} − Zy 1_{{W ≤ −Zy}}.

For y > 0, the transition kernel has the form K(y, (x, ∞)) = P[Zy + W > x] for x ≥ 0. Since t^{−1} ψ(t, V) = (Z + t^{−1}W)^+ → Z a.s., we have K ∈ D(G) with G = P[Z ∈ ·]. Furthermore, using Proposition 3.1, the function ζ(t) = √t is of larger order than φ(t, w), so y(t) = 1/√t is an extremal boundary. Since

φ(·, w) is bounded on bounded neighbourhoods of 0, Proposition 4.1(c) implies that K satisfies the regularity condition (4.3). Consequently, from Theorem 4.1, we obtain fdd convergence of t^{−1}X to T* as in (4.6).

If K does not satisfy the regularity condition (4.3), Theorem 4.1 may fail to hold, and, starting from tu, t^{−1}X may fail to converge to T* started from u.

Example 6.2. Let V = (Z, W, W*) be any non-degenerate random vector on [0, ∞)^3, and consider the Markov chain determined by the update function

ψ(y, V) = (Zy + W y^{−1}) 1_{{y>0}} + W* 1_{{y=0}}.

For y > 0, the transition kernel is K(y, (x, ∞)) = P[Zy + W y^{−1} > x], and since t^{−1} ψ(t, V) = Z + W t^{−2} → Z a.s., we have K ∈ D(G) with G = P[Z ∈ ·]. Furthermore, using Proposition 3.1, the function ζ(t) ≡ 1 is of larger order than φ(t, w), so y(t) = 1/t is an extremal boundary. However, note that φ(y, (W, W*)) = W y^{−1} 1_{{y>0}} + W* 1_{{y=0}} is unbounded near 0, implying that Segers' boundedness condition (4.2) does not hold. In fact, our form of the regularity condition (4.3) fails for K. Indeed,

K(tu_t, t(x, ∞)) = P[Z tu_t + W/(tu_t) > tx] = P[Z u_t + W/(t^2 u_t) > x].

Choosing u_t = t^{−2} yields K(tu_t, t(x, ∞)) → P[W > x]. For appropriate x, this shows that (4.3) fails.

Not only does (4.3) fail, but so does Theorem 4.1, since the asymptotic behaviour of X is not the same as that of X^t. We show directly that the conditional fdds of t^{−1}X fail to converge to those of T*. The idea is that if X_k is below the extremal boundary, of order t^{−1}, there is a positive probability that X_{k+1} > t. We illustrate this for m = 2. Take f ∈ C([0, ∞]^2) and u > 0. Observe that if X_0 = tu > 0, from the definition of ψ,

X_1 = Z_1 tu + W_1/(tu)  and  X_2 = (Z_2 X_1 + W_2/X_1) 1_{{X_1>0}} + W*_2 1_{{X_1=0}}.

Furthermore, on {Z_1 > 0} we have X_1 > 0 and X_2 = Z_2 X_1 + W_2/X_1; on {Z_1 = 0, W_1 > 0}, again X_1 > 0 and X_2 = Z_2 X_1 + W_2/X_1; and on {Z_1 = 0, W_1 = 0} we have X_1 = 0 and X_2 = W*_2. Therefore,

E_{tu} f(X_1/t, X_2/t) = E_{tu}[f(X_1/t, X_2/t) 1_{{Z_1>0}}] + E_{tu}[f(X_1/t, X_2/t) 1_{{Z_1=0, W_1>0}}] + E_{tu}[f(X_1/t, X_2/t) 1_{{Z_1=0, W_1=0}}] =: A + B + C.

For A, as t → ∞, we have

A = E[f(Z_1 u + W_1/(t^2 u), Z_2 (Z_1 u + W_1/(t^2 u)) + W_2/(Z_1 t^2 u + W_1 u^{−1})) 1_{{Z_1>0}}] → E[f(Z_1 u, Z_1 Z_2 u) 1_{{Z_1>0}}],

while for B we obtain, as t → ∞,

B = E[f(W_1/(t^2 u), Z_2 W_1/(t^2 u) + W_2 u/W_1) 1_{{Z_1=0, W_1>0}}] → E[f(0, u W_2/W_1) 1_{{Z_1=0, W_1>0}}].

Finally, for C,

C = E[f(0, W*_2/t) 1_{{Z_1=0, W_1=0}}] = P[Z_1 = 0, W_1 = 0] E f(0, W*_2/t) → P[Z_1 = 0, W_1 = 0] f(0, 0).

Observe that lim(A + B + C) ≠ E_u f(T*_1, T*_2) = E f(uZ_1, uZ_1Z_2) in general, because of the middle term B.

In the final example, the conditional distributions of t^{−1}X converge to those of the tail chain T*, even though the regularity condition does not hold. This includes cases for which G({0}) = 0, and for which G({0}) > 0 with extremal boundary y(t) ≡ 0.

Example 6.3. Let {(ξ_j, η_j), j ≥ 1} be iid copies of the non-degenerate random vector (ξ, η) on [0, ∞)^2. Taking V = (ξ, η), consider a Markov chain which transitions according to the update function

ψ(y, V) = ξ (y + y^{−1}) 1_{{y>0}} + η 1_{{y=0}} = ξ y + (ξ y^{−1} 1_{{y>0}} + η 1_{{y=0}}),


More information

arxiv: v1 [math.pr] 17 May 2009

arxiv: v1 [math.pr] 17 May 2009 A strong law of large nubers for artingale arrays Yves F. Atchadé arxiv:0905.2761v1 [ath.pr] 17 May 2009 March 2009 Abstract: We prove a artingale triangular array generalization of the Chow-Birnbau- Marshall

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE7C (Spring 018: Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee7c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee7c@berkeley.edu October 15,

More information

Learnability and Stability in the General Learning Setting

Learnability and Stability in the General Learning Setting Learnability and Stability in the General Learning Setting Shai Shalev-Shwartz TTI-Chicago shai@tti-c.org Ohad Shair The Hebrew University ohadsh@cs.huji.ac.il Nathan Srebro TTI-Chicago nati@uchicago.edu

More information

Shannon Sampling II. Connections to Learning Theory

Shannon Sampling II. Connections to Learning Theory Shannon Sapling II Connections to Learning heory Steve Sale oyota echnological Institute at Chicago 147 East 60th Street, Chicago, IL 60637, USA E-ail: sale@athberkeleyedu Ding-Xuan Zhou Departent of Matheatics,

More information

Revealed Preference and Stochastic Demand Correspondence: A Unified Theory

Revealed Preference and Stochastic Demand Correspondence: A Unified Theory Revealed Preference and Stochastic Deand Correspondence: A Unified Theory Indraneel Dasgupta School of Econoics, University of Nottingha, Nottingha NG7 2RD, UK. E-ail: indraneel.dasgupta@nottingha.ac.uk

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 11 10/15/2008 ABSTRACT INTEGRATION I

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 11 10/15/2008 ABSTRACT INTEGRATION I MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 11 10/15/2008 ABSTRACT INTEGRATION I Contents 1. Preliinaries 2. The ain result 3. The Rieann integral 4. The integral of a nonnegative

More information

On Certain C-Test Words for Free Groups

On Certain C-Test Words for Free Groups Journal of Algebra 247, 509 540 2002 doi:10.1006 jabr.2001.9001, available online at http: www.idealibrary.co on On Certain C-Test Words for Free Groups Donghi Lee Departent of Matheatics, Uni ersity of

More information

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis E0 370 tatistical Learning Theory Lecture 6 (Aug 30, 20) Margin Analysis Lecturer: hivani Agarwal cribe: Narasihan R Introduction In the last few lectures we have seen how to obtain high confidence bounds

More information

CSE525: Randomized Algorithms and Probabilistic Analysis May 16, Lecture 13

CSE525: Randomized Algorithms and Probabilistic Analysis May 16, Lecture 13 CSE55: Randoied Algoriths and obabilistic Analysis May 6, Lecture Lecturer: Anna Karlin Scribe: Noah Siegel, Jonathan Shi Rando walks and Markov chains This lecture discusses Markov chains, which capture

More information

e-companion ONLY AVAILABLE IN ELECTRONIC FORM

e-companion ONLY AVAILABLE IN ELECTRONIC FORM OPERATIONS RESEARCH doi 10.1287/opre.1070.0427ec pp. ec1 ec5 e-copanion ONLY AVAILABLE IN ELECTRONIC FORM infors 07 INFORMS Electronic Copanion A Learning Approach for Interactive Marketing to a Custoer

More information

Principles of Optimal Control Spring 2008

Principles of Optimal Control Spring 2008 MIT OpenCourseWare http://ocw.it.edu 16.323 Principles of Optial Control Spring 2008 For inforation about citing these aterials or our Ters of Use, visit: http://ocw.it.edu/ters. 16.323 Lecture 10 Singular

More information

12 Towards hydrodynamic equations J Nonlinear Dynamics II: Continuum Systems Lecture 12 Spring 2015

12 Towards hydrodynamic equations J Nonlinear Dynamics II: Continuum Systems Lecture 12 Spring 2015 18.354J Nonlinear Dynaics II: Continuu Systes Lecture 12 Spring 2015 12 Towards hydrodynaic equations The previous classes focussed on the continuu description of static (tie-independent) elastic systes.

More information

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay A Low-Coplexity Congestion Control and Scheduling Algorith for Multihop Wireless Networks with Order-Optial Per-Flow Delay Po-Kai Huang, Xiaojun Lin, and Chih-Chun Wang School of Electrical and Coputer

More information

Supplement to: Subsampling Methods for Persistent Homology

Supplement to: Subsampling Methods for Persistent Homology Suppleent to: Subsapling Methods for Persistent Hoology A. Technical results In this section, we present soe technical results that will be used to prove the ain theores. First, we expand the notation

More information

Some Proofs: This section provides proofs of some theoretical results in section 3.

Some Proofs: This section provides proofs of some theoretical results in section 3. Testing Jups via False Discovery Rate Control Yu-Min Yen. Institute of Econoics, Acadeia Sinica, Taipei, Taiwan. E-ail: YMYEN@econ.sinica.edu.tw. SUPPLEMENTARY MATERIALS Suppleentary Materials contain

More information

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval Unifor Approxiation and Bernstein Polynoials with Coefficients in the Unit Interval Weiang Qian and Marc D. Riedel Electrical and Coputer Engineering, University of Minnesota 200 Union St. S.E. Minneapolis,

More information

Constrained Consensus and Optimization in Multi-Agent Networks arxiv: v2 [math.oc] 17 Dec 2008

Constrained Consensus and Optimization in Multi-Agent Networks arxiv: v2 [math.oc] 17 Dec 2008 LIDS Report 2779 1 Constrained Consensus and Optiization in Multi-Agent Networks arxiv:0802.3922v2 [ath.oc] 17 Dec 2008 Angelia Nedić, Asuan Ozdaglar, and Pablo A. Parrilo February 15, 2013 Abstract We

More information

Multi-Dimensional Hegselmann-Krause Dynamics

Multi-Dimensional Hegselmann-Krause Dynamics Multi-Diensional Hegselann-Krause Dynaics A. Nedić Industrial and Enterprise Systes Engineering Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu B. Touri Coordinated Science Laboratory

More information

Using EM To Estimate A Probablity Density With A Mixture Of Gaussians

Using EM To Estimate A Probablity Density With A Mixture Of Gaussians Using EM To Estiate A Probablity Density With A Mixture Of Gaussians Aaron A. D Souza adsouza@usc.edu Introduction The proble we are trying to address in this note is siple. Given a set of data points

More information

Fixed-to-Variable Length Distribution Matching

Fixed-to-Variable Length Distribution Matching Fixed-to-Variable Length Distribution Matching Rana Ali Ajad and Georg Böcherer Institute for Counications Engineering Technische Universität München, Gerany Eail: raa2463@gail.co,georg.boecherer@tu.de

More information

Non-Parametric Non-Line-of-Sight Identification 1

Non-Parametric Non-Line-of-Sight Identification 1 Non-Paraetric Non-Line-of-Sight Identification Sinan Gezici, Hisashi Kobayashi and H. Vincent Poor Departent of Electrical Engineering School of Engineering and Applied Science Princeton University, Princeton,

More information

Biostatistics Department Technical Report

Biostatistics Department Technical Report Biostatistics Departent Technical Report BST006-00 Estiation of Prevalence by Pool Screening With Equal Sized Pools and a egative Binoial Sapling Model Charles R. Katholi, Ph.D. Eeritus Professor Departent

More information

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical IEEE TRANSACTIONS ON INFORMATION THEORY Large Alphabet Source Coding using Independent Coponent Analysis Aichai Painsky, Meber, IEEE, Saharon Rosset and Meir Feder, Fellow, IEEE arxiv:67.7v [cs.it] Jul

More information

ON A CLASS OF DISTRIBUTIONS STABLE UNDER RANDOM SUMMATION

ON A CLASS OF DISTRIBUTIONS STABLE UNDER RANDOM SUMMATION Applied Probability Trust (6 Deceber 200) ON A CLASS OF DISTRIBUTIONS STABLE UNDER RANDOM SUMMATION L.B. KLEBANOV, Departent of Probability and Statistics of Charles University A.V. KAKOSYAN, Yerevan State

More information

Computable Shell Decomposition Bounds

Computable Shell Decomposition Bounds Coputable Shell Decoposition Bounds John Langford TTI-Chicago jcl@cs.cu.edu David McAllester TTI-Chicago dac@autoreason.co Editor: Leslie Pack Kaelbling and David Cohn Abstract Haussler, Kearns, Seung

More information

Numerically repeated support splitting and merging phenomena in a porous media equation with strong absorption. Kenji Tomoeda

Numerically repeated support splitting and merging phenomena in a porous media equation with strong absorption. Kenji Tomoeda Journal of Math-for-Industry, Vol. 3 (C-), pp. Nuerically repeated support splitting and erging phenoena in a porous edia equation with strong absorption To the eory of y friend Professor Nakaki. Kenji

More information

Estimating Parameters for a Gaussian pdf

Estimating Parameters for a Gaussian pdf Pattern Recognition and achine Learning Jaes L. Crowley ENSIAG 3 IS First Seester 00/0 Lesson 5 7 Noveber 00 Contents Estiating Paraeters for a Gaussian pdf Notation... The Pattern Recognition Proble...3

More information

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lesson 1 4 October 2017 Outline Learning and Evaluation for Pattern Recognition Notation...2 1. The Pattern Recognition

More information

Feature Extraction Techniques

Feature Extraction Techniques Feature Extraction Techniques Unsupervised Learning II Feature Extraction Unsupervised ethods can also be used to find features which can be useful for categorization. There are unsupervised ethods that

More information

The Transactional Nature of Quantum Information

The Transactional Nature of Quantum Information The Transactional Nature of Quantu Inforation Subhash Kak Departent of Coputer Science Oklahoa State University Stillwater, OK 7478 ABSTRACT Inforation, in its counications sense, is a transactional property.

More information

Research Article On the Isolated Vertices and Connectivity in Random Intersection Graphs

Research Article On the Isolated Vertices and Connectivity in Random Intersection Graphs International Cobinatorics Volue 2011, Article ID 872703, 9 pages doi:10.1155/2011/872703 Research Article On the Isolated Vertices and Connectivity in Rando Intersection Graphs Yilun Shang Institute for

More information

1 Proof of learning bounds

1 Proof of learning bounds COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #4 Scribe: Akshay Mittal February 13, 2013 1 Proof of learning bounds For intuition of the following theore, suppose there exists a

More information

Lost-Sales Problems with Stochastic Lead Times: Convexity Results for Base-Stock Policies

Lost-Sales Problems with Stochastic Lead Times: Convexity Results for Base-Stock Policies OPERATIONS RESEARCH Vol. 52, No. 5, Septeber October 2004, pp. 795 803 issn 0030-364X eissn 1526-5463 04 5205 0795 infors doi 10.1287/opre.1040.0130 2004 INFORMS TECHNICAL NOTE Lost-Sales Probles with

More information

LORENTZ SPACES AND REAL INTERPOLATION THE KEEL-TAO APPROACH

LORENTZ SPACES AND REAL INTERPOLATION THE KEEL-TAO APPROACH LORENTZ SPACES AND REAL INTERPOLATION THE KEEL-TAO APPROACH GUILLERMO REY. Introduction If an operator T is bounded on two Lebesgue spaces, the theory of coplex interpolation allows us to deduce the boundedness

More information

TEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES

TEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES TEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES S. E. Ahed, R. J. Tokins and A. I. Volodin Departent of Matheatics and Statistics University of Regina Regina,

More information

Distributed Subgradient Methods for Multi-agent Optimization

Distributed Subgradient Methods for Multi-agent Optimization 1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions

More information

The isomorphism problem of Hausdorff measures and Hölder restrictions of functions

The isomorphism problem of Hausdorff measures and Hölder restrictions of functions The isoorphis proble of Hausdorff easures and Hölder restrictions of functions Doctoral thesis András Máthé PhD School of Matheatics Pure Matheatics Progra School Leader: Prof. Miklós Laczkovich Progra

More information

Tail Estimation of the Spectral Density under Fixed-Domain Asymptotics

Tail Estimation of the Spectral Density under Fixed-Domain Asymptotics Tail Estiation of the Spectral Density under Fixed-Doain Asyptotics Wei-Ying Wu, Chae Young Li and Yiin Xiao Wei-Ying Wu, Departent of Statistics & Probability Michigan State University, East Lansing,

More information

Poisson processes and their properties

Poisson processes and their properties Poisson processes and their properties Poisson processes. collection {N(t) : t [, )} of rando variable indexed by tie t is called a continuous-tie stochastic process, Furtherore, we call N(t) a Poisson

More information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information Cite as: Straub D. (2014). Value of inforation analysis with structural reliability ethods. Structural Safety, 49: 75-86. Value of Inforation Analysis with Structural Reliability Methods Daniel Straub

More information

Block designs and statistics

Block designs and statistics Bloc designs and statistics Notes for Math 447 May 3, 2011 The ain paraeters of a bloc design are nuber of varieties v, bloc size, nuber of blocs b. A design is built on a set of v eleents. Each eleent

More information

CS Lecture 13. More Maximum Likelihood

CS Lecture 13. More Maximum Likelihood CS 6347 Lecture 13 More Maxiu Likelihood Recap Last tie: Introduction to axiu likelihood estiation MLE for Bayesian networks Optial CPTs correspond to epirical counts Today: MLE for CRFs 2 Maxiu Likelihood

More information

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization Recent Researches in Coputer Science Support Vector Machine Classification of Uncertain and Ibalanced data using Robust Optiization RAGHAV PAT, THEODORE B. TRAFALIS, KASH BARKER School of Industrial Engineering

More information

2 Q 10. Likewise, in case of multiple particles, the corresponding density in 2 must be averaged over all

2 Q 10. Likewise, in case of multiple particles, the corresponding density in 2 must be averaged over all Lecture 6 Introduction to kinetic theory of plasa waves Introduction to kinetic theory So far we have been odeling plasa dynaics using fluid equations. The assuption has been that the pressure can be either

More information

A Markov Framework for the Simple Genetic Algorithm

A Markov Framework for the Simple Genetic Algorithm A arkov Fraework for the Siple Genetic Algorith Thoas E. Davis*, Jose C. Principe Electrical Engineering Departent University of Florida, Gainesville, FL 326 *WL/NGS Eglin AFB, FL32542 Abstract This paper

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

Tight Information-Theoretic Lower Bounds for Welfare Maximization in Combinatorial Auctions

Tight Information-Theoretic Lower Bounds for Welfare Maximization in Combinatorial Auctions Tight Inforation-Theoretic Lower Bounds for Welfare Maxiization in Cobinatorial Auctions Vahab Mirrokni Jan Vondrák Theory Group, Microsoft Dept of Matheatics Research Princeton University Redond, WA 9805

More information

Lecture #8-3 Oscillations, Simple Harmonic Motion

Lecture #8-3 Oscillations, Simple Harmonic Motion Lecture #8-3 Oscillations Siple Haronic Motion So far we have considered two basic types of otion: translation and rotation. But these are not the only two types of otion we can observe in every day life.

More information

ADVANCES ON THE BESSIS- MOUSSA-VILLANI TRACE CONJECTURE

ADVANCES ON THE BESSIS- MOUSSA-VILLANI TRACE CONJECTURE ADVANCES ON THE BESSIS- MOUSSA-VILLANI TRACE CONJECTURE CHRISTOPHER J. HILLAR Abstract. A long-standing conjecture asserts that the polynoial p(t = Tr(A + tb ] has nonnegative coefficients whenever is

More information

arxiv: v3 [quant-ph] 18 Oct 2017

arxiv: v3 [quant-ph] 18 Oct 2017 Self-guaranteed easureent-based quantu coputation Masahito Hayashi 1,, and Michal Hajdušek, 1 Graduate School of Matheatics, Nagoya University, Furocho, Chikusa-ku, Nagoya 464-860, Japan Centre for Quantu

More information

On the Use of A Priori Information for Sparse Signal Approximations

On the Use of A Priori Information for Sparse Signal Approximations ITS TECHNICAL REPORT NO. 3/4 On the Use of A Priori Inforation for Sparse Signal Approxiations Oscar Divorra Escoda, Lorenzo Granai and Pierre Vandergheynst Signal Processing Institute ITS) Ecole Polytechnique

More information

Generalized eigenfunctions and a Borel Theorem on the Sierpinski Gasket.

Generalized eigenfunctions and a Borel Theorem on the Sierpinski Gasket. Generalized eigenfunctions and a Borel Theore on the Sierpinski Gasket. Kasso A. Okoudjou, Luke G. Rogers, and Robert S. Strichartz May 26, 2006 1 Introduction There is a well developed theory (see [5,

More information

Data-Driven Imaging in Anisotropic Media

Data-Driven Imaging in Anisotropic Media 18 th World Conference on Non destructive Testing, 16- April 1, Durban, South Africa Data-Driven Iaging in Anisotropic Media Arno VOLKER 1 and Alan HUNTER 1 TNO Stieltjesweg 1, 6 AD, Delft, The Netherlands

More information

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley osig 1 Winter Seester 2018 Lesson 6 27 February 2018 Outline Perceptrons and Support Vector achines Notation...2 Linear odels...3 Lines, Planes

More information

G G G G G. Spec k G. G Spec k G G. G G m G. G Spec k. Spec k

G G G G G. Spec k G. G Spec k G G. G G m G. G Spec k. Spec k 12 VICTORIA HOSKINS 3. Algebraic group actions and quotients In this section we consider group actions on algebraic varieties and also describe what type of quotients we would like to have for such group

More information

Polygonal Designs: Existence and Construction

Polygonal Designs: Existence and Construction Polygonal Designs: Existence and Construction John Hegean Departent of Matheatics, Stanford University, Stanford, CA 9405 Jeff Langford Departent of Matheatics, Drake University, Des Moines, IA 5011 G

More information

Information Overload in a Network of Targeted Communication: Supplementary Notes

Information Overload in a Network of Targeted Communication: Supplementary Notes Inforation Overload in a Network of Targeted Counication: Suppleentary Notes Tiothy Van Zandt INSEAD 10 June 2003 Abstract These are suppleentary notes for Van Zandt 2003). They include certain extensions.

More information

PHY 171. Lecture 14. (February 16, 2012)

PHY 171. Lecture 14. (February 16, 2012) PHY 171 Lecture 14 (February 16, 212) In the last lecture, we looked at a quantitative connection between acroscopic and icroscopic quantities by deriving an expression for pressure based on the assuptions

More information

Supplementary Information for Design of Bending Multi-Layer Electroactive Polymer Actuators

Supplementary Information for Design of Bending Multi-Layer Electroactive Polymer Actuators Suppleentary Inforation for Design of Bending Multi-Layer Electroactive Polyer Actuators Bavani Balakrisnan, Alek Nacev, and Elisabeth Sela University of Maryland, College Park, Maryland 074 1 Analytical

More information

In this chapter, we consider several graph-theoretic and probabilistic models

In this chapter, we consider several graph-theoretic and probabilistic models THREE ONE GRAPH-THEORETIC AND STATISTICAL MODELS 3.1 INTRODUCTION In this chapter, we consider several graph-theoretic and probabilistic odels for a social network, which we do under different assuptions

More information

New upper bound for the B-spline basis condition number II. K. Scherer. Institut fur Angewandte Mathematik, Universitat Bonn, Bonn, Germany.

New upper bound for the B-spline basis condition number II. K. Scherer. Institut fur Angewandte Mathematik, Universitat Bonn, Bonn, Germany. New upper bound for the B-spline basis condition nuber II. A proof of de Boor's 2 -conjecture K. Scherer Institut fur Angewandte Matheati, Universitat Bonn, 535 Bonn, Gerany and A. Yu. Shadrin Coputing

More information

Random Process Review

Random Process Review Rando Process Review Consider a rando process t, and take k saples. For siplicity, we will set k. However it should ean any nuber of saples. t () t x t, t, t We have a rando vector t, t, t. If we find

More information

Work, Energy and Momentum

Work, Energy and Momentum Work, Energy and Moentu Work: When a body oves a distance d along straight line, while acted on by a constant force of agnitude F in the sae direction as the otion, the work done by the force is tered

More information

Ph 20.3 Numerical Solution of Ordinary Differential Equations

Ph 20.3 Numerical Solution of Ordinary Differential Equations Ph 20.3 Nuerical Solution of Ordinary Differential Equations Due: Week 5 -v20170314- This Assignent So far, your assignents have tried to failiarize you with the hardware and software in the Physics Coputing

More information

Soft Computing Techniques Help Assign Weights to Different Factors in Vulnerability Analysis

Soft Computing Techniques Help Assign Weights to Different Factors in Vulnerability Analysis Soft Coputing Techniques Help Assign Weights to Different Factors in Vulnerability Analysis Beverly Rivera 1,2, Irbis Gallegos 1, and Vladik Kreinovich 2 1 Regional Cyber and Energy Security Center RCES

More information

Proc. of the IEEE/OES Seventh Working Conference on Current Measurement Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES

Proc. of the IEEE/OES Seventh Working Conference on Current Measurement Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES Proc. of the IEEE/OES Seventh Working Conference on Current Measureent Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES Belinda Lipa Codar Ocean Sensors 15 La Sandra Way, Portola Valley, CA 98 blipa@pogo.co

More information

Graphical Models in Local, Asymmetric Multi-Agent Markov Decision Processes

Graphical Models in Local, Asymmetric Multi-Agent Markov Decision Processes Graphical Models in Local, Asyetric Multi-Agent Markov Decision Processes Ditri Dolgov and Edund Durfee Departent of Electrical Engineering and Coputer Science University of Michigan Ann Arbor, MI 48109

More information

arxiv: v2 [math.co] 3 Dec 2008

arxiv: v2 [math.co] 3 Dec 2008 arxiv:0805.2814v2 [ath.co] 3 Dec 2008 Connectivity of the Unifor Rando Intersection Graph Sion R. Blacburn and Stefanie Gere Departent of Matheatics Royal Holloway, University of London Egha, Surrey TW20

More information

Fairness via priority scheduling

Fairness via priority scheduling Fairness via priority scheduling Veeraruna Kavitha, N Heachandra and Debayan Das IEOR, IIT Bobay, Mubai, 400076, India vavitha,nh,debayan}@iitbacin Abstract In the context of ulti-agent resource allocation

More information

Sharp Time Data Tradeoffs for Linear Inverse Problems

Sharp Time Data Tradeoffs for Linear Inverse Problems Sharp Tie Data Tradeoffs for Linear Inverse Probles Saet Oyak Benjain Recht Mahdi Soltanolkotabi January 016 Abstract In this paper we characterize sharp tie-data tradeoffs for optiization probles used

More information

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels Extension of CSRSM for the Paraetric Study of the Face Stability of Pressurized Tunnels Guilhe Mollon 1, Daniel Dias 2, and Abdul-Haid Soubra 3, M.ASCE 1 LGCIE, INSA Lyon, Université de Lyon, Doaine scientifique

More information