Monte Carlo methods for improper target distributions


Electronic Journal of Statistics
Vol. 8 (2014)
ISSN: 1935-7524
DOI: 10.1214/14-EJS969

Monte Carlo methods for improper target distributions

Krishna B. Athreya
Department of Mathematics and Statistics, Iowa State University, Ames, Iowa
e-mail: kbathreya@gmail.com

and

Vivekananda Roy
Department of Statistics, Iowa State University, Ames, Iowa
e-mail: vroy@iastate.edu

Abstract: Monte Carlo methods (based on iid sampling or Markov chains) for estimating integrals with respect to a proper target distribution (that is, a probability distribution) are well known in the statistics literature. If the target distribution π happens to be improper, then it is shown here that the standard time average estimator based on Markov chains with π as its stationary distribution will converge to zero with probability 1, and hence is not appropriate. In this paper, we present some limit theorems for regenerative sequences and use these to develop some algorithms to produce strongly consistent estimators (called regeneration and ratio estimators) that work whether π is proper or improper. These methods may be referred to as regenerative sequence Monte Carlo (RSMC) methods. The regenerative sequences include Markov chains as a special case. We also present an algorithm that uses the domination of the given target π by a probability distribution π_0. Examples are given to illustrate the use and limitations of our algorithms.

AMS 2000 subject classifications: Primary 65C05, 62M99; secondary 60G50.
Keywords and phrases: Importance sampling, improper posterior, Markov chains, MCMC, null recurrence, ratio limit theorem, regenerative sequence.

Received February 2014.

1. Introduction

Let π be a distribution (that is, a measure) on a set S and let f be a real valued function on S, integrable with respect to π, that is, ∫_S |f| dπ < ∞. Let λ := ∫_S f dπ. The problem addressed in this paper is to find appropriate statistical procedures to estimate λ. That is, the problem is to find a way to generate data (X_1, X_2, ..., X_n) and find a statistic λ̂_n such that for large n, λ̂_n is close to λ in a suitable sense.

When π is known to be a probability distribution, that is, when π(S) = 1, a well established statistical procedure is the classical (iid) Monte Carlo method. This method requires one to generate iid observations (X_1, X_2, ..., X_n) with distribution π and estimate λ by λ̂_n = Σ_{i=1}^n f(X_i)/n.
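As a small illustration of this classical recipe (our own sketch, not code from the paper), the following R snippet takes the proper target π to be the standard normal distribution and f(x) = x², so that λ = 1; both choices are assumptions made only for this example.

    # Classical iid Monte Carlo. Illustrative assumptions (not from the
    # paper): pi = N(0,1) and f(x) = x^2, so that lambda = E[X^2] = 1.
    set.seed(1)
    n <- 1e5
    x <- rnorm(n)               # iid draws from the proper target pi
    lambda_hat <- mean(x^2)     # sum of f(X_i)/n; converges to lambda by the LLN
    lambda_hat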

By the law of large numbers for iid random variables, λ̂_n → λ as n → ∞. If in addition ∫_S f² dπ < ∞, then the accuracy of the estimate λ̂_n, that is, the behavior of the quantity (λ̂_n − λ), is (by the classical CLT) of the order n^(−1/2).

Over the past twenty years an alternative statistical procedure known as Markov chain Monte Carlo (MCMC) has come into extensive use in the statistical literature. In particular, if generating iid observations from π is not easy, then classical Monte Carlo cannot be used to estimate λ. The MCMC method is to generate a Markov chain {X_n}_{n≥0} with π as its invariant distribution that is suitably irreducible. One then appeals to the law of large numbers for such chains, which says that for any initial value X_0, the time average Σ_{j=0}^n f(X_j)/n converges almost surely to the space average λ = ∫_S f dπ. Indeed, the pioneering paper by [15] did introduce this method in a setting where the target distribution π, although a probability distribution, turned out to be the so-called Gibbs measure on the configuration space in statistical mechanics, and it was difficult to simulate from π. In this fundamental paper, they constructed a Markov chain {X_n}_{n≥0} that is appropriately irreducible and has π as its stationary distribution. This idea was adapted to some statistical problems by [10]. This whole apparatus was rediscovered in the late 80's and early 90's by a number of statisticians, and since then the subject has exploded [see e.g. 21].

Both of the above methods, that is, the iid Monte Carlo and the MCMC, depend crucially on the assumption that π is a probability distribution. A natural question suggested by the above discussion is the following. Suppose π is not proper (that is, π(S) = ∞) but f : S → ℝ is such that ∫_S |f| dπ < ∞. How do we estimate λ by statistical tools, as distinct from classical numerical analytic tools? Clearly, iid Monte Carlo generating data from π is not feasible since π(S) = ∞. However, it may be possible to generate a Markov chain {X_n}_{n≥0} that is appropriately irreducible and admits the (improper) distribution π as its stationary distribution. In such situations, it can be shown (see Corollary 2) that the usual estimator Σ_{j=0}^n f(X_j)/n will indeed not work, that is, it will not converge to λ but rather go to zero with probability 1. In particular, in Bayesian statistics, if an improper prior is assigned to the parameters, then the resulting posterior distribution π (for the given data) is not guaranteed to be proper. If posterior propriety is not checked and π happens to be improper, then Corollary 2 implies that the usual MCMC estimator Σ_{j=0}^n f(X_j)/n based on a Markov chain {X_n}_{n≥0} with π as its stationary (improper) distribution will simply not work, in the sense that it will go to zero with probability 1 and not to λ. The use of MCMC for improper posterior distributions does not seem to have received much attention in the literature. The exceptions include [22, 12], and [13].

In this paper we show that, given a distribution π on some space S that may or may not be proper, it may be possible to generate data {X_n}_{n≥0} such that more general time averages (called regeneration and ratio estimators in Theorem 1) than the standard Monte Carlo estimator Σ_{j=0}^n f(X_j)/n will converge to λ. This is based on the idea of regenerative sequences (Definition 1), which include iid samples and Markov chains (null or positive recurrent) as special cases (see Section 2).

For Harris recurrent Markov chains, the possibility of using the ratio estimator proposed here was previously noted by [22]. Based on these theoretical results we present several algorithms for statistical estimation of λ (see Sections 2 and 3). Section 3 discusses the use of a special class of regenerative sequences, namely regenerative sequences (not necessarily Markov chains) based on null recurrent random walks, for estimating countable sums and integrals when S = ℝ^d for any d < ∞. Some examples are given in Section 4. Some of these examples illustrate the use of null recurrent Gibbs chains for consistent estimation of quantities like λ. This use of null recurrent Markov chains does not seem to have been noted before in the statistical literature. Section 4 also presents a generalization of the importance sampling method and its potential application in Bayesian sensitivity analysis. Another method discussed in this paper uses the idea of dominating the given target π by a probability distribution π_0 and then using the standard Monte Carlo methodology (iid or MCMC) with π_0 as its stationary distribution. This is discussed in Section 5. Some concluding remarks appear in Section 6. Finally, proofs of theorems and corollaries are given in Appendix B, and some technical results appear in Appendix A and Appendix C.

2. Convergence results for regenerative sequences

In this section we introduce the concept of regenerative sequences and establish some convergence results for these. This in turn provides some useful statistical tools for estimating parameters. These tools are more general than the Monte Carlo methods based on iid observations or Markov chains. In particular, they are applicable to the case when the target distribution is not proper. The following definition of a regenerative sequence is given in [2].

Definition 1. Let (Ω, F, P) be a probability space and (S, 𝒮) be a measurable space. A sequence of random variables {X_n}_{n≥0} defined on (Ω, F, P) with values in (S, 𝒮) is called regenerative if there exists a sequence of integer (random) times T_0 = 0 < T_1 < T_2 < ... such that the excursions {X_n : T_j ≤ n < T_{j+1}, τ_{j+1}}_{j≥0} are iid, where τ_j = T_j − T_{j−1} for j = 1, 2, ..., that is,

    P(τ_{j+1} = k_j, X_{T_j+q} ∈ A_{q,j}, 0 ≤ q < k_j, j = 0, ..., r) = ∏_{j=0}^r P(τ_1 = k_j, X_{T_0+q} ∈ A_{q,j}, 0 ≤ q < k_j),

for all k_0, k_1, ..., k_r ∈ ℕ (the set of positive integers), A_{q,j} ∈ 𝒮, 0 ≤ q < k_j, j = 0, 1, ..., r, and r ≥ 1. The random times {T_n}_{n≥0} are called regeneration times and {τ_j}_{j≥1} are the excursion times.

Example 1. Let {X_n}_{n≥0} be an irreducible, recurrent Markov chain with countable state space S = {s_0, s_1, s_2, ...}. Let X_0 = s_0, T_0 = 0, T_{r+1} = inf{n : n ≥ T_r + 1, X_n = s_0}, r = 0, 1, 2, .... Then {X_n}_{n≥0} is regenerative with regeneration times {T_n}_{n≥0}.

Example 2. Let {X_n}_{n≥0} be a Markov chain with general state space (S, 𝒮). Assume that 𝒮 is countably generated and {X_n}_{n≥0} is Harris recurrent. Then it has been independently shown by [3] and [18] that for some n_0 ≥ 1, the Markov chain {Z_n}_{n≥0} has an atom that it visits infinitely often, where Z_n ≡ X_{n n_0} for n = 0, 1, .... By the Markov property, the excursions between consecutive visits to this atom are iid, making {Z_n}_{n≥0} a regenerative sequence (see e.g. Meyn and Tweedie [16] or Athreya and Lahiri [2]).

Example 3. Let {Y_n}_{n≥0} be a Harris recurrent Markov chain as in Example 2. Given {Y_n = y_n}_{n≥0}, let {A_n}_{n≥0} be independent positive integer valued random variables. Set

    X_n = y_0 for 0 ≤ n < A_0,  X_n = y_1 for A_0 ≤ n < A_0 + A_1,  and so on.

Then {X_n}_{n≥0} is called a semi-Markov chain with embedded Markov chain {Y_n = y_n}_{n≥0} and sojourn times {A_n}_{n≥0}. Since {Y_n}_{n≥0} is regenerative, it follows that {X_n}_{n≥0} is regenerative.

The following theorem presents some useful limit results for regenerative sequences that can be used for constructing consistent estimators of integrals with respect to improper measures.

Theorem 1. Let {X_n}_{n≥0} be a regenerative sequence with regeneration times {T_n}_{n≥0}. Let

    π̌(A) = E( Σ_{j=0}^{T_1−1} I_A(X_j) ) for A ∈ 𝒮.   (2.1)

(The measure π̌ is known as the canonical (or occupation) measure for {X_n}_{n≥0}.) Let N_n = k if T_k ≤ n < T_{k+1}, k, n = 0, 1, 2, ..., that is, N_n is the number of regenerations by time n. Suppose f, g : S → ℝ are two measurable functions such that ∫_S |f| dπ̌ < ∞, ∫_S |g| dπ̌ < ∞, and ∫_S g dπ̌ ≠ 0. Then, as n → ∞,

(i) (regeneration estimator)

    λ̂_n = Σ_{j=0}^n f(X_j) / N_n → ∫_S f dπ̌,   (2.2)

(ii) (ratio estimator)

    R̂_n = Σ_{j=0}^n f(X_j) / Σ_{j=0}^n g(X_j) → ∫_S f dπ̌ / ∫_S g dπ̌.   (2.3)

A proof of Theorem 1 is given in Appendix B. Theorem 1(i) for continuous time regenerative sequences is available in [1]. Theorem 1(ii) for Harris recurrent Markov chains is available in Meyn and Tweedie [16]. In Definition 1, if the excursions {X_n : T_j ≤ n < T_{j+1}, τ_{j+1}} are assumed to be iid only for j ≥ 1, then {X_n}_{n≥0} is called a delayed regenerative process. A straightforward modification of the proof of Theorem 1 shows that it holds for any delayed regenerative process.
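In code, the two estimators are one-liners once a trajectory and a way to count regenerations are available. The following R sketch is ours (not code from the paper) and assumes, as in Example 1, that regenerations are marked by visits to a distinguished atom; f and g are assumed vectorized.

    # Estimators of Theorem 1 for a trajectory x[1], ..., x[n+1] of a
    # regenerative sequence. Regenerations are assumed to be marked by
    # visits to a distinguished atom, as in Example 1.
    regen_est <- function(f, x, atom) {
      Nn <- sum(x == atom)        # N_n: number of regenerations by time n
      sum(f(x)) / Nn              # regeneration estimator (2.2)
    }
    ratio_est <- function(f, g, x) {
      sum(f(x)) / sum(g(x))       # ratio estimator (2.3)
    }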

The next theorem describes the asymptotic behavior of the regeneration times T_n and of the number of regenerations N_n. The result on the asymptotic distribution of T_n presented in the theorem below is already proved in Feller [8, Chapter XIII, Section 6].

Theorem 2. Let {T_j}_{j≥0} and {τ_j}_{j≥1} be as defined in Definition 1. Suppose in addition that P(τ_1 > x) ~ x^(−α) L(x) as x → ∞, where 0 < α < 1 and L(x) is slowly varying at ∞ (i.e., L(cx)/L(x) → 1 as x → ∞, for all 0 < c < ∞).

(i) Then

    T_n / a_n → V_α in distribution, as n → ∞,   (2.4)

where {a_n} satisfies n a_n^(−α) L(a_n) → 1 and V_α is a positive random variable with a stable distribution of index α such that

    E(exp(−s V_α)) = exp(−s^α Γ(1 − α)), 0 ≤ s < ∞,   (2.5)

where for p > 0, Γ(p) = ∫_0^∞ u^(p−1) exp(−u) du is the Gamma function.

(ii) Also

    N_n / b_n → Y_α in distribution, as n → ∞,   (2.6)

where b_n = n^α / L(n) and

    P(Y_α ≤ x) = P(V_α ≥ x^(−1/α)) for 0 < x < ∞,

that is, Y_α is distributed as V_α^(−α), with V_α as defined in (2.5).

As mentioned before, the proof of Theorem 2(i) is available in Feller [8, Chapter XIII, Section 6]. The proof of Theorem 2(ii) for random walk Markov chains on ℝ using the method of moments is presented in [4, page 229]. We provide a proof of Theorem 2(ii) in Appendix B.

Recall from the Introduction that our goal is to obtain a consistent estimate of λ = ∫_S f dπ for a given (possibly infinite) measure π and an integrable function f : S → ℝ. Theorem 1 yields the following generic recipe for estimating λ.

1. Construct a regenerative sequence {X_n}_{n≥0} with π as its canonical measure.
2. As defined in Theorem 1, let N_n be the number of regenerations by time n. Estimate λ by λ̂_n = Σ_{j=0}^n f(X_j)/N_n. By Theorem 1(i), λ̂_n → λ.
3. If we can find a function g such that E_π g := ∫_S g dπ (≠ 0) is known, then we can estimate λ consistently by (E_π g) R̂_n, where R̂_n is defined in (2.3).

In order to use the above recipe, one needs to find a regenerative sequence with π as its canonical measure. Later in this section, we see that a Markov chain {X_n}_{n≥0} that has a recurrent atom and has π as its invariant measure can be used for this purpose.

The following corollary uses (2.6) to provide an alternative estimator of λ.

Corollary 1. From (2.2) and (2.6) it follows that under the hypotheses of Theorem 1 and Theorem 2, Σ_{j=0}^n f(X_j)/b_n → (∫_S f dπ̌) Y_α in distribution. If E(Y_α) is known and is finite, then we can estimate ∫_S f dπ̌ in the following way. Run k independent copies of {X_n} and let D_n^r = Σ_{j=0}^n f(X_j^r)/b_n, where {X_j^r}_{j=0}^n are the realized values of the chain in the r-th run, r = 1, 2, ..., k. Since for large n, D_n^r, r = 1, 2, ..., k, are independent draws from an approximation to (∫_S f dπ̌) Y_α, we can estimate ∫_S f dπ̌ by λ̂_{n,k} = Σ_{r=1}^k D_n^r / (k E(Y_α)). It can be shown that there exist subsequences k_{j_l} and n_{k_{j_l}} such that λ̂_{n_{k_{j_l}}, k_{j_l}} → ∫_S f dπ̌ as l → ∞. A proof of Corollary 1 is given in Appendix B.

For the Markov chains in Example 1, it can be shown that P(X_n = s_i for some 1 ≤ n < ∞ | X_0 = s_j) = 1 for all s_i, s_j ∈ S, that is, for such chains any s_i is a recurrent point and the Markov chain regenerates every time it reaches s_i. If {X_n}_{n≥0} is a Markov chain with general state space (S, 𝒮) and is φ-irreducible with a nontrivial σ-finite reference measure φ, then by the regeneration method of [3] and [18], the Markov chain {X_n}_{n≥0} is equivalent to an appropriate Markov chain {X̌_n} on an enlarged space Š so that {X̌_n} admits an atom Δ in Š [16, Theorem 5.1.3]. Now if in addition we assume that {X_n}_{n≥0} is Harris recurrent as in Example 2, then P(X̌_n = Δ for some 1 ≤ n < ∞ | X̌_0 = x) = 1 for all x ∈ Š, i.e., Δ is a recurrent point for {X̌_n}.

In Appendix A we present some results on the existence and the uniqueness of invariant measures for Markov chains. Using these results and Theorem 1, the following remark discusses how integrals with respect to a given target distribution can be estimated using Markov chains.

Remark 1. Let π be a given (proper or improper) target distribution and {X_n}_{n≥0} be a Markov chain as in Examples 1 or 2 with invariant distribution π. Then from Corollary 3 in Appendix A we know that π is the unique (up to a multiplicative constant) invariant measure for {X_n}_{n≥0}. Also from Theorem 1 we know that for any two integrable functions f, g : S → ℝ with ∫_S g dπ ≠ 0, no matter what the distribution of the starting value X_0, as n → ∞ the ratio R̂_n = Σ_{j=0}^n f(X_j) / Σ_{j=0}^n g(X_j) converges almost surely to ∫_S f dπ / ∫_S g dπ. If we choose g such that E_π g = ∫_S g dπ is easy to evaluate, then λ = ∫_S f dπ can be consistently estimated by R̂_n (E_π g). In particular, if g(·) = I_K(·), where K is a (possibly cylinder) subset of S with known π(K) < ∞, then R̂_n = Σ_{j=0}^n f(X_j)/N_n(K), where N_n(K) = Σ_{j=0}^n I_K(X_j) is the number of times the Markov chain visits K by time n. Further, from Corollary 3 in Appendix A we note that one may use the regeneration estimator λ̂_n as in (2.2) if π({Δ}) is known.

From Remark 1 we see that when π is not proper, the regeneration estimator (2.2) and the ratio estimator (2.3) based on a Markov chain provide consistent estimates of λ.

On the contrary, in Corollary 2 below we show that when π is improper, the usual time average MCMC estimator Σ_{j=0}^n f(X_j)/n is not consistent for λ. Indeed, we show that it converges to zero with probability 1. A result similar to Corollary 2, but under a different set of hypotheses, was established by [13].

Theorem 3. Let {X_n}_{n≥0} be a regenerative sequence with occupation measure π̌ as in (2.1). Let π̌(S) = E(T_1) = ∞. Then for any f : S → ℝ such that ∫_S |f| dπ̌ < ∞,

    lim_{n→∞} (1/n) Σ_{i=1}^n f(X_i) = 0 with probability 1.

Corollary 2. Let {X_n}_{n≥0} be a Harris recurrent Markov chain with improper invariant measure π, that is, π(S) = ∞. Let f : S → ℝ be such that ∫_S |f| dπ < ∞. Then the standard time average estimator Σ_{i=1}^n f(X_i)/n converges to 0 with probability 1.

The proof of Theorem 3 is given in Appendix B. Corollary 2 follows from Theorem 3 since, as pointed out in Example 2, every Harris recurrent Markov chain is regenerative. In the improper target case, N_n, the number of regenerations, grows slower than n, as noted in (2.6). On the other hand, from (2.2) we know that the time sums Σ_{i=1}^n f(X_i) grow exactly at the rate N_n. This may explain why the usual time average MCMC estimator is not consistent for λ when π is improper.

3. Algorithms based on random walks

In this section we focus on the use of a special class of regenerative sequences, namely regenerative sequences (not necessarily Markov chains) based on null recurrent random walks on ℤ and its variants, for estimating countable sums and integrals. We begin with the case when S is countable. Let S be any countable set and f : S → ℝ be such that Σ_{s∈S} |f(s)| < ∞. Our goal is to obtain a consistent estimator of λ = Σ_{s∈S} f(s). Without loss of generality (see Remark 3) we take S to be ℤ, the set of integers.

Let {X_n}_{n≥0} be a simple symmetric random walk (SSRW) on ℤ starting at X_0. That is, X_{n+1} = X_n + δ_{n+1}, n ≥ 0, where {δ_n}_{n≥1} are iid with distribution P(δ_1 = +1) = 1/2 = P(δ_1 = −1) and independent of X_0. This Markov chain {X_n}_{n≥0} is null recurrent with the counting measure {π(i) ≡ 1 : i ∈ ℤ} being its unique (up to a multiplicative constant) invariant measure [see e.g. 16, Section 8.4.3]. From Example 1 of Section 2, we know that {X_n}_{n≥0} is regenerative, and it satisfies the hypothesis of Corollary 3 in Appendix A with Δ = 0. Since π({Δ}) = 1, Corollary 3 in Appendix A allows us to identify the occupation measure π̌ with the counting measure, and hence from (2.2) it follows that for any initial X_0,

    λ̂_n = Σ_{j=0}^n f(X_j) / N_n → Σ_{i∈ℤ} f(i) = λ,   (3.1)

where N_n = Σ_{j=0}^n I(X_j = 0) is the number of visits to zero by {X_j}_{j=0}^n. So λ̂_n is a consistent estimator of λ.

Let X_0 = 0, T_0 = 0, T_{r+1} = inf{n : n ≥ T_r + 1, X_n = 0}, r = 0, 1, 2, ..., and τ_i = T_i − T_{i−1}, i ≥ 1. Then {τ_i}_{i≥1} are iid positive random variables with P(τ_1 > n) ~ √(2/(πn)) as n → ∞ [see e.g. 7, page 203]. So from Theorem 2 (that is, from (2.6)), using b_n = √(πn/2), we have N_n/b_n → Y_{1/2} in distribution, where Y_{1/2} is distributed as V_{1/2}^(−1/2) and V_α is as defined in (2.5). Since the density function of V_{1/2} is available [see e.g. 8, page 173], simple calculations show that Y_{1/2} is distributed as |Z|, where Z ~ N(0,1). (We can also use (2.5) to find the distribution of Y_{1/2}.) So Σ_{j=0}^n f(X_j)/√n → √(π/2) λ |Z| in distribution, and since E(|Z|) = √(2/π) is known, we can use Corollary 1 to obtain an alternative estimator of λ.

Remark 2. It can be shown [see 23] that the number of distinct values of X_j for 0 ≤ j ≤ n grows at the rate √n. Thus, after n steps of running the SSRW Markov chain, the computation of Σ_{j=0}^n f(X_j) involves evaluation of f at only approximately √n sites in ℤ.

The above discussion implies the following algorithm, which can be used to estimate λ = Σ_{s∈S} f(s)π(s), where π is a given (possibly infinite) measure on a countable set S and f : S → ℝ is a function satisfying Σ_{s∈S} |f(s)| π(s) < ∞. Again, without loss of generality (see Remark 3), we assume that S = ℤ.

Algorithm I:
1. Run the simple symmetric random walk (SSRW) {X_n}_{n≥0} on ℤ.
2. Let λ̂_n = Σ_{j=0}^n f(X_j) π(X_j) / N_n, where N_n is the number of visits by {X_j}_{j=0}^n to 0 by time n. Then λ̂_n → λ, that is, λ̂_n is a consistent estimator of λ.
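As an illustration (our own R sketch, not code from the paper; the target anticipates the example of Section 4.1), take π to be the counting measure and f(s) = 1/s² for s ≥ 1 and f(s) = 0 otherwise, so that λ = Σ_{m≥1} 1/m² = π²/6.

    # Algorithm I: SSRW on Z with regenerations at returns to 0.
    # Illustrative target (an assumption): f(s) = 1/s^2 for s >= 1, else 0,
    # and pi = counting measure, so lambda = pi^2/6 = 1.6449...
    set.seed(1)
    f <- function(s) ifelse(s >= 1, 1 / s^2, 0)
    n <- 1e6
    x <- c(0L, cumsum(sample(c(-1L, 1L), n, replace = TRUE)))  # SSRW from 0
    Nn <- sum(x == 0L)               # visits to 0 = number of regenerations
    lambda_hat <- sum(f(x)) / Nn     # estimator (3.1); here pi(X_j) = 1
    lambda_hat

As the simulations in Section 4.1 show, the convergence can be slow, so large n is typically needed.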

Remark 3. Let S be any countable set and, as before, let ℕ be the set of positive integers. Then there exists a 1-1, onto map ζ : ℕ → S. Consider the map ξ : ℤ → ℕ defined as follows:

    ξ(j) = 2j if j = 1, 2, ...,  and  ξ(j) = −2j + 1 if j = 0, −1, −2, ....

Then ξ is a 1-1, onto map from ℤ to ℕ, and so ζ∘ξ is a 1-1, onto map from ℤ to S. Using Algorithm I, we can estimate λ = Σ_{s∈S} f(s)π(s) by Σ_{j=0}^n (f∘ζ∘ξ)(X_j) (π∘ζ∘ξ)(X_j) / N_n. We now present some examples of the function ζ.

(i) Note that, if S = ℤ, we can take ζ to be simply ξ^(−1).

(ii) When S = ℤ², define A = {2^i 3^j : i ≥ 1, j ≥ 1}. There exists a 1-1, onto map ρ_1 : ℕ → A; for example, take ρ_1(1) to be the smallest element of A, 2·3 = 6, ρ_1(2) to be the second smallest element, 2²·3 = 12, and so on. Define another map ρ_2 : A → ℕ² by ρ_2(2^i 3^j) = (i, j). Lastly, consider ρ_3 : ℕ² → ℤ², where ρ_3(i, j) = (φ(i), φ(j)) and φ = ξ^(−1), that is,

    φ(n) = n/2 if n is even,  and  φ(n) = −(n − 1)/2 if n is odd.

So ζ = ρ_3∘ρ_2∘ρ_1 : ℕ → S = ℤ² is 1-1, onto.

(iii) We can similarly define ζ when S = ℤ^d for d ≥ 3.

Now we consider the case S = ℝ. Let f : ℝ → ℝ be Lebesgue integrable and let λ = ∫_ℝ f(x) dx. Our goal is to obtain a consistent estimator of λ. Let {η_i}_{i≥1} be iid Uniform(−1/2, 1/2) random variables. Let X_{n+1} = X_n + η_{n+1}, n ≥ 0, X_0 = x for some x ∈ ℝ. Then it can be shown (see e.g. Durrett [7, page 195], Meyn and Tweedie [16, Proposition 9.4.5]) that {X_n}_{n≥0} is a Harris recurrent Markov chain with state space ℝ and the Lebesgue measure as its invariant measure (unique up to a multiplicative constant). From Example 2 of Section 2, we know that {X_n}_{n≥0} is regenerative, and Corollary 3 in Appendix A identifies the occupation measure π̌ with (up to a multiplicative constant) the Lebesgue measure. Since the Lebesgue measure of (−1/2, 1/2) is 1, it follows from (2.3) of Theorem 1 that for any initial X_0,

    R̂_n = Σ_{j=0}^n f(X_j) / N_n(K) → ∫_ℝ f(x) dx,

where N_n(K) = Σ_{j=0}^n I_K(X_j) is the number of visits to K = (−1/2, 1/2) by {X_j}_{j=0}^n. So R̂_n is a consistent estimator of λ. Since E(η_i) = 0, Var(η_i) = 1/12, and the length of the interval (−1/2, 1/2) is 1, straightforward calculations using the theorem in [4] show that N_n(K)/√(12n) → |Z| in distribution, Z ~ N(0,1). So, as in the countable case, an alternative estimator of λ can be formed following Corollary 1.

Next we consider the case S = ℝ². Let f : ℝ² → ℝ be Lebesgue integrable and let λ = ∫_{ℝ²} f(x) dx. Let X_{n+1} = X_n + η_{n+1}, n ≥ 0, X_0 = (0, 0), and {η_i}_{i≥1} ≡ {(η_i^1, η_i^2)}_{i≥1} be iid random variables, where η_i^1, η_i^2 are iid Uniform(−1/2, 1/2). From Durrett [7] we know that starting anywhere X_0 = x, {X_n}_{n≥0} visits any circle infinitely often with probability one. Also, since η_i has an absolutely continuous distribution, the density of η_i is bounded below by ε, for any 0 < ε < 1 [see e.g. 7, Example 6.8.2]. Thus a regeneration scheme as in [3] exists, making {X_n}_{n≥0} regenerative. Since the Lebesgue measure on ℝ² is an invariant measure for {X_n}_{n≥0}, by Corollary 3 in Appendix A and (2.3) of Theorem 1, a consistent estimator of λ is given by

    R̂_n = Σ_{j=0}^n f(X_j) / N_n(K) → ∫_{ℝ²} f(x) dx,

where N_n(K) = Σ_{j=0}^n I_K(X_j) is the number of visits to K = [−1/2, 1/2] × [−1/2, 1/2] by {X_j}_{j=0}^n.

Since X_0 = (0, 0), note that

    P(X_n ∈ K) = [ P( Σ_{i=1}^n η_i^1 ∈ [−1/2, 1/2] ) ]² = [ P( (1/√n) Σ_{i=1}^n η_i^1 ∈ [−1/(2√n), 1/(2√n)] ) ]².

By the local central limit theorem, as n → ∞,

    √n P( (1/√n) Σ_{i=1}^n η_i^1 ∈ [−1/(2√n), 1/(2√n)] ) → c, 0 < c < ∞.

Since E(η_i^1) = 0 and Var(η_i^1) = 1/12, we have c = √12/√(2π). Thus n P(X_n ∈ K) → c² as n → ∞, and hence

    E[N_n(K)] = 1 + Σ_{j=1}^n P(X_j ∈ K) ~ c² Σ_{j=1}^n 1/j ~ c² log n.

(Here, as n → ∞, c_n ~ d_n means that lim_{n→∞} c_n/d_n = 1.) This yields the following proposition.

Proposition 1. Let N_n(K) be as defined above. Then

    E( N_n(K)/log n ) → 6/π.

Next, using discrete renewal theory and Karamata's Tauberian theorem, arguing as in [4, Theorem 10.30], one can show that for each r = 1, 2, ...,

    lim_{n→∞} E( (N_n(K)/log n)^r ) = μ_r < ∞ exists,

and {μ_r}_{r≥1} is the moment sequence of an exponential distribution, and hence satisfies Carleman's condition for determining the distribution. This implies that the following theorem holds.

Theorem 4. Let N_n(K) be as defined above. Then

    N_n(K)/log n → (6/π) Y in distribution,

where Y is an Exponential(1) random variable.

Some recent results (analogous to Theorem 4) for random walks on ℤ² can be found in [9] and [19]. As in the case of S = ℝ, we can use Theorem 4 to obtain an alternative estimator of λ (Corollary 1).

Let S = ℝ and π be absolutely continuous with density h. From the above discussion we see that the following algorithm can be used to estimate λ = ∫_ℝ f(x) π(dx), where f : ℝ → ℝ is π-integrable.

Algorithm II:
1. Run the mean 0 random walk {X_n}_{n≥0} on ℝ with increment distribution Uniform(−1/2, 1/2).
2. Let R̂_n = Σ_{j=0}^n f(X_j) h(X_j) / N_n(K), where N_n(K) is the number of visits by {X_j}_{j=0}^n to the unit interval K = (−1/2, 1/2). Then R̂_n → λ, that is, R̂_n is a consistent estimator of λ.
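A minimal R sketch of Algorithm II follows (ours, not from the paper). It takes π to be Lebesgue measure, so h ≡ 1, and f(x) = 1/x² for x ≥ 1 and 0 otherwise, so that λ = ∫_1^∞ x^(−2) dx = 1; these are illustrative assumptions matching the example of Section 4.1.

    # Algorithm II: mean-zero Uniform(-1/2,1/2) random walk on R, ratio
    # estimator with K = (-1/2, 1/2) (Lebesgue measure of K is 1).
    # Illustrative assumptions: f(x) = 1/x^2 for x >= 1, else 0, and
    # pi = Lebesgue measure (h = 1), so lambda = 1.
    set.seed(1)
    f <- function(x) ifelse(x >= 1, 1 / x^2, 0)
    h <- function(x) rep(1, length(x))       # density of pi w.r.t. Lebesgue
    n <- 1e6
    x <- c(0, cumsum(runif(n, -0.5, 0.5)))   # random walk started at 0
    NnK <- sum(abs(x) < 0.5)                 # visits to K = (-1/2, 1/2)
    R_hat <- sum(f(x) * h(x)) / NnK          # Algorithm II estimator
    R_hat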

From the above discussion we know that Algorithm II can be suitably modified for the case S = ℝ².

Remark 4. When S is countable, the method proposed here works with the SSRW Markov chain replaced by any mean 0 random walk on ℤ such that X_n/n → 0 in probability. By the Chung-Fuchs theorem, this random walk is recurrent [7, page 195]. Similarly, when S = ℝ or ℝ², any mean 0, finite variance random walk with an absolutely continuous distribution can be used. On the other hand, a symmetric random walk on ℝ may not be recurrent. For example, consider a random walk Markov chain {X_n}_{n≥0} on ℝ where the increment random variable η has a symmetric stable distribution with index α, 0 < α < 1, that is, the characteristic function of η is ψ(t) = exp(−|t|^α), 0 < α < 1. It can be shown that {X_n}_{n≥0} is not recurrent [see 6, page 253], in the sense that for any nondegenerate interval I in ℝ, Σ_{n=0}^∞ P(X_n ∈ I) < ∞, and hence the number of visits by {X_n}_{n≥0} to I is finite with probability 1.

Lastly, we present an algorithm that is based on the SSRW on ℤ and a randomization tool to estimate integrals with respect to an absolutely continuous measure π on any ℝ^d, d < ∞. Let f : ℝ^d → ℝ be Borel measurable and π be an absolutely continuous distribution on ℝ^d with Radon-Nikodym derivative p(·). Assume that ∫_{ℝ^d} |f(x)| p(x) dx < ∞. The following theorem provides a consistent estimator of λ = ∫_{ℝ^d} f(x) p(x) dx.

Theorem 5. Let {X_n}_{n≥0} be a SSRW on ℤ. Let {U_{ij} : i = 0, 1, ...; j = 1, 2, ..., d} be a sequence of iid Uniform(−1/2, 1/2) random variables, independent of {X_n}_{n≥0}. Let κ : ℤ → ℤ^d be 1-1 and onto. Define Y_n := κ(X_n) + U_n, n = 0, 1, ..., where U_n = (U_{n1}, U_{n2}, ..., U_{nd}). Then

    λ̂_n := Σ_{j=0}^n f(Y_j) p(Y_j) / N_n → ∫_{ℝ^d} f(x) p(x) dx,   (3.2)

where N_n = Σ_{j=0}^n I(X_j = 0) is the number of visits to zero by {X_j}_{j=0}^n.

It may be noted that the sequence {Y_n}_{n≥0} defined above is not a Markov chain, but it is indeed regenerative as in Theorem 1, with the {T_n} sequence corresponding to the returns of the SSRW {X_n}_{n≥0} to zero. The proof of Theorem 5 is given in Appendix B. Note that the function κ can be chosen as κ = ζ∘ξ, where ζ and ξ are as given in Remark 3. Theorem 5 suggests the following variation of Algorithm I.

Algorithm III:
1. Generate the sequence {Y_n}_{n≥0} as in Theorem 5.
2. Estimate λ = ∫_{ℝ^d} f(x) p(x) dx by λ̂_n defined in (3.2).
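For d = 1 the bijection κ : ℤ → ℤ can simply be the identity, which gives the following R sketch of Algorithm III (ours, under illustrative assumptions: p(x) = 1/x² for x ≥ 1 and 0 otherwise, and f ≡ 1, so λ = ∫ p = 1).

    # Algorithm III for d = 1 (sketch). kappa is taken to be the identity
    # on Z. Illustrative assumptions: p(x) = 1/x^2 for x >= 1, else 0,
    # and f = 1, so lambda = 1.
    set.seed(1)
    p <- function(x) ifelse(x >= 1, 1 / x^2, 0)
    n <- 1e6
    x <- c(0L, cumsum(sample(c(-1L, 1L), n, replace = TRUE)))  # SSRW on Z
    u <- runif(n + 1, -0.5, 0.5)      # iid Uniform(-1/2,1/2), indep. of x
    y <- x + u                        # Y_n = kappa(X_n) + U_n
    Nn <- sum(x == 0L)                # returns of the SSRW to 0
    lambda_hat <- sum(p(y)) / Nn      # estimator (3.2) with f = 1
    lambda_hat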

4. Examples

In this section we demonstrate the use of the statistical algorithms developed in the previous sections with some examples. Some of our simulations also illustrate that the convergence of these algorithms could be rather slow.

4.1. Simulation examples

We first consider estimating λ = Σ_{m=1}^∞ 1/m² using the SSRW (Algorithm I) with X_0 = 0. The left panel in Figure 1 shows the estimates for 6 values (log₁₀ n = 3, 4, ..., 8), where log₁₀ denotes logarithm base 10. The plot includes the median estimates and the 90% empirical confidence interval estimates based on 30 independent repetitions. The median and 90% interval estimates for n = 10⁸ are 1.654 and (1.560, ...), respectively. Note that the true value λ is π²/6 = 1.6449.... The time to run the SSRW chain for 10⁸ steps using R [20] on an old Intel Q... GHz machine running Windows 7 is about 3 seconds.

Next we use the same chain (SSRW with X_0 = 0) to estimate λ = Σ_{m=1001}^∞ 1/(m − 1000)² and found that the first 10⁵ iterations yielded an estimate of 0 (red lines in the right panel in Figure 1). However, estimation of λ can be improved if we start the SSRW at X_0 = 1000 and define regenerations as the returns to 1000 (green lines in the right panel in Figure 1).

Next we consider estimating λ = ∫_1^∞ 1/x² dx (see the left panel in Figure 2) using Algorithms II (red line) and III (green line) for the same six n values mentioned above. We observe that the interval estimates based on Algorithm II are much wider than those for Algorithm III, particularly for smaller values of n. In the examples of this section, for applying Algorithm III we use κ = ζ∘ξ, where ζ and ξ are as given in Remark 3.

Fig 1. Point and 90% interval estimates of λ = Σ_{m=1}^∞ 1/m² (left panel) and λ = Σ_{m=1001}^∞ 1/(m − 1000)² (right panel).

Fig 2. The left panel shows the point and 90% interval estimates of λ = ∫_1^∞ (1/x²) dx. The right panel shows the behavior of the standard MCMC estimator based on the Gibbs chain.

Lastly, we consider two examples with S = ℝ². The first one is the following. Let

    f(x, y) = exp[−(x² − 2xy + y²)/2], (x, y) ∈ ℝ².   (4.1)

Define the measure π by π(A) = ∫_A f(x, y) dx dy, A ∈ B(ℝ²). Note that π is a σ-finite, infinite measure on ℝ². Note that

    f_{X|Y}(x|y) := f(x, y) / ∫_ℝ f(x, y) dx = φ(x − y)

is a probability density on ℝ, where φ(u) = exp(−u²/2)/√(2π), u ∈ ℝ, is the standard normal density. Our goal is to estimate π(A), where A = [−1, 1] × [−1, 1]. Notice that the integral π(A) is not available in closed form. Let {(X_n, Y_n)}_{n≥0} be the Gibbs sampler Markov chain that uses f_{X|Y}(·|y) and f_{Y|X}(·|x) alternately. Then {(X_n, Y_n)}_{n≥0} has π as its invariant measure. From Theorem 3 we know that the usual MCMC estimator Σ_{j=0}^n I_A(X_j, Y_j)/n → 0, since π is an improper measure (see the right panel in Figure 2). It can be verified that {(X_n, Y_n)}_{n≥0} is regenerative (a proof of this is given in Appendix C). So by Theorem 1, for any initial (X_0, Y_0) and for any −∞ < a < b < ∞,

    R̂_n = Σ_{j=0}^n I_A(X_j, Y_j) / Σ_{j=0}^n I_{[a,b]}(Y_j) → π(A) / (√(2π)(b − a)),

as π(ℝ × [a, b]) = √(2π)(b − a). Thus √(2π)(b − a) R̂_n is a consistent estimator of π(A).
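The null recurrent Gibbs chain and the ratio estimator above are easy to code. The following R sketch is ours; the conditionals X|Y = y ~ N(y, 1) and Y|X = x ~ N(x, 1) follow from (4.1) since f(x, y) = exp(−(x − y)²/2), and the choice [a, b] = [−10, 10] is purely illustrative.

    # Null recurrent Gibbs sampler for (4.1): X | Y = y ~ N(y, 1) and
    # Y | X = x ~ N(x, 1). Ratio estimator of pi(A), A = [-1,1] x [-1,1].
    # The interval [a, b] = [-10, 10] is an illustrative choice.
    set.seed(1)
    n <- 1e5; a <- -10; b <- 10
    x <- numeric(n + 1); y <- numeric(n + 1)   # chain started at (0, 0)
    for (j in 1:n) {
      x[j + 1] <- rnorm(1, mean = y[j], sd = 1)      # draw from f_{X|Y}
      y[j + 1] <- rnorm(1, mean = x[j + 1], sd = 1)  # draw from f_{Y|X}
    }
    num <- sum(abs(x) <= 1 & abs(y) <= 1)      # visits to A
    den <- sum(y >= a & y <= b)                # visits to R x [a, b]
    pi_A_hat <- sqrt(2 * pi) * (b - a) * num / den
    pi_A_hat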

Fig 3. Estimates using Gibbs sampler and random walks.

The solid lines in the left panel in Figure 3 show the median (based on 30 independent repetitions) estimates of π((−1, 1) × (−1, 1)) using Algorithm II with uniform random walk in ℝ² (red line), Algorithm III (green line), and the above ratio estimator based on the Gibbs sampler (with a = −b = −1000) (blue line), for the same six n values mentioned above. The median estimates for n = 10⁸ corresponding to the Gibbs sampler, Algorithm II (with uniform random walk in ℝ²), and Algorithm III are 4.438, 2.801, and ..., respectively. Numerical integration yields ... as an estimate of π(A). As in the previous example, here also Algorithm III performs better than Algorithm II.

The next example is as follows. Let

    f(x, y) = exp(−xy), x, y ∈ ℝ₊,   (4.2)

where ℝ₊ = (0, ∞). Let

    f_{X|Y}(x|y) := f(x, y) / ∫_{ℝ₊} f(x, y) dx = y exp(−xy), 0 < x, y < ∞.

Then for each y, f_{X|Y}(·|y) is a probability density on ℝ₊. Consider the Gibbs sampler {(X_n, Y_n)}_{n≥0} that uses the two conditional densities f_{X|Y}(·|y) and f_{Y|X}(·|x) alternately. [5] considered this example and found that the usual estimator Σ_{j=0}^n f_{X|Y}(x̌|Y_j)/n for the marginal density f_X(x̌) = ∫_{ℝ₊} f(x̌, y) dy = ∫_{ℝ₊} f_{X|Y}(x̌|y) f_Y(y) dy = 1/x̌ breaks down. Indeed, from Theorem 3 we know that this estimator converges with probability 1 to zero, as the above Gibbs sampler chain is regenerative (see Proposition 3 in Appendix C) with an improper invariant measure. ([11] showed that the Gibbs sampler {(X_n, Y_n)}_{n≥0} is a null recurrent Markov chain, in the sense that Σ_{n=0}^∞ E[I_A(X_n, Y_n)] = ∞ for any set A with positive Lebesgue measure on ℝ².) [5] suggested as a remedy that one restrict the conditional densities to compact intervals. Theorem 1 provides alternatives that do not require this restriction. By Theorem 1 we have the following consistent estimator of f_X(x̌). Let 0 < a < b < ∞. Then for all x̌ ∈ ℝ₊,

    log(b/a) R̂_n = log(b/a) Σ_{j=0}^n f_{X|Y}(x̌|Y_j) / Σ_{j=0}^n I_{[a,b]}(Y_j)
                 = log(b/a) Σ_{j=0}^n Y_j exp(−x̌ Y_j) / Σ_{j=0}^n I_{[a,b]}(Y_j) → 1/x̌ = f_X(x̌),

as ∫∫ f_{X|Y}(x̌|y) f(x, y) dx dy = ∫_{ℝ₊} f_{X|Y}(x̌|y) f_Y(y) dy = f_X(x̌) and ∫∫ I_K(x, y) f(x, y) dx dy = log(b/a) with K = ℝ₊ × [a, b].
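In R, the estimator log(b/a) R̂_n reads as follows (our sketch, not from the paper). The conditionals implied by (4.2) are the exponential densities f_{X|Y}(x|y) = y e^(−xy) and f_{Y|X}(y|x) = x e^(−xy); the interval [a, b] and the short chain length are illustrative choices, the latter because, as noted in the text below, this chain can wander outside machine precision for long runs.

    # Null recurrent Gibbs sampler for (4.2): X | Y = y ~ Exp(rate = y),
    # Y | X = x ~ Exp(rate = x). Estimator of f_X(x0) = 1/x0 at x0 = 2.
    # [a, b] = [0.5, 2] and n = 1e4 are illustrative choices; long runs
    # can under/overflow, as the text warns.
    set.seed(1)
    n <- 1e4; a <- 0.5; b <- 2; x0 <- 2
    x <- numeric(n + 1); y <- numeric(n + 1)
    x[1] <- 1; y[1] <- 1                       # chain started at (1, 1)
    for (j in 1:n) {
      x[j + 1] <- rexp(1, rate = y[j])         # draw from f_{X|Y}
      y[j + 1] <- rexp(1, rate = x[j + 1])     # draw from f_{Y|X}
    }
    fx_hat <- log(b / a) * sum(y * exp(-x0 * y)) / sum(y >= a & y <= b)
    fx_hat                                     # compare with f_X(2) = 1/2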

Although log(b/a) R̂_n is a consistent estimator of f_X(x̌) based on the Gibbs sampler, our simulations show that in this case the Gibbs chain is apt to take values too large or too small (positive) that are outside the range of machine precision. Here we use Algorithms II and III to estimate f_X(2) = 1/2. The right panel in Figure 3 shows the estimates (median and 90% interval based on 30 repetitions) using Algorithms II (red line) and III (green line) for the same six n values mentioned above. The median estimates corresponding to the red and green lines for n = 10⁸ are 0.503 and ..., respectively.

Remark 5. Note that in order to use the ratio estimator (2.3) based on the Gibbs sampler {(X_n, Y_n)}_{n≥0} in the above two examples, by Theorem 1 it suffices to establish that these chains are regenerative. This avoids having to establish the Harris recurrence of {(X_n, Y_n)}_{n≥0}, which is required to apply the theorem in Meyn and Tweedie [16].

4.2. Generalization of importance sampling

Importance sampling is a Monte Carlo technique in which a sample from one density is used to estimate an expectation with respect to another. Suppose π is a probability density on S (with respect to a measure ν) and we want to estimate E_π f = ∫_S f π dν, where f : S → ℝ is such that ∫_S |f| π dν < ∞. Often such a π is known only up to a normalizing constant, that is, π(x) = c p(x), where c > 0 is a finite constant, not easy to evaluate, and p(x) is a known function. Suppose π̃ is another density (not necessarily proper) with respect to the measure ν such that {x : π(x) > 0} ⊆ {x : π̃(x) > 0}. Now note that

    E_π f = ∫_S f π dν = ∫_S f(x) [π(x)/π̃(x)] π̃(x) ν(dx) / ∫_S [π(x)/π̃(x)] π̃(x) ν(dx)
          = ∫_S f(x) [p(x)/π̃(x)] π̃(x) ν(dx) / ∫_S [p(x)/π̃(x)] π̃(x) ν(dx).

Note that ∫_S f(x) [p(x)/π̃(x)] π̃(x) ν(dx) = E_π f / c is well defined and finite, and 0 < ∫_S [p(x)/π̃(x)] π̃(x) ν(dx) = 1/c < ∞. So if we can generate a Harris recurrent Markov chain (regenerative sequence) {X_n}_{n≥0} with invariant (occupation) density π̃, then by Theorem 1 we have, for any initial X_0,

    R̂_n = Σ_{j=0}^n f(X_j) [p(X_j)/π̃(X_j)] / Σ_{j=0}^n [p(X_j)/π̃(X_j)] → E_π f.   (4.3)

Thus R̂_n is a consistent estimator of E_π f. Note that here the ratio limit theorem (Theorem 1(ii)) is naturally applicable since the above importance sampling estimator of E_π f has the form of a ratio. That is, in order to use the above importance sampling estimator, we do not need to find any function g such that ∫_S g dπ̃ (≠ 0) is known.
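As a concrete illustration (ours, not from the paper), take ν to be Lebesgue measure on ℝ, the improper importance density π̃ ≡ 1, the unnormalized target p(x) = exp(−x²/2) (so π is N(0,1) and c = 1/√(2π)), and f(x) = x²; the chain is the Uniform(−1/2, 1/2) random walk of Section 3.

    # Generalized importance sampling (4.3) with an improper importance
    # density. Illustrative assumptions: pi_tilde = 1 (Lebesgue),
    # p(x) = exp(-x^2/2) (so pi = N(0,1)), f(x) = x^2, hence E_pi f = 1.
    set.seed(1)
    f <- function(x) x^2
    p <- function(x) exp(-x^2 / 2)            # unnormalized target density
    n <- 1e6
    x <- c(0, cumsum(runif(n, -0.5, 0.5)))    # Harris recurrent random walk
    w <- p(x)                                 # weights p(x)/pi_tilde(x), pi_tilde = 1
    R_hat <- sum(f(x) * w) / sum(w)           # estimator (4.3)
    R_hat

Note that the improper flat π̃ trivially has heavier tails than the target, which is exactly the situation exploited in the sensitivity analysis below.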

Note that the standard importance sampling using MCMC requires π̃ to be a probability density and {X_n}_{n≥0} to be positive recurrent [10]. Here we are able to drop both these conditions.

4.3. An example involving Bayesian sensitivity analysis

The importance sampling estimator (4.3) can be useful in Bayesian analysis when one wishes to see the effect of a change in the prior distribution on a posterior expectation. Let {π_h(θ), h ∈ H} be a family of prior densities, where h is called a hyperparameter. The resulting posterior densities are denoted by π_h(θ|y) ∝ l(θ|y) π_h(θ), where y denotes the observed data and l(θ|y) is the likelihood function. Let f(θ) be a quantity of interest, where f(·) is a function. In sensitivity analysis one wants to assess the changes in the posterior expectation E_h(f(θ)|y) as h varies. If two different values of h result in almost the same posterior expectations, then the use of either of those two values will not be controversial. On the other hand, if E_h(f(θ)|y) changes significantly as h varies, one may want to make a plot of the posterior expectations to see what values of h cause big changes. If posterior expectations are not available in closed form, then the importance sampling estimator (4.3) based on a single chain may be used to estimate E_h(f(θ)|y) for all h ∈ H. Note that here the parameter θ corresponds to x in (4.3), and π_h(θ|y) plays the role of π. In general, the importance density π̃ in (4.3) is preferred to have heavier tails than π_h(θ|y) for all h ∈ H, as otherwise the ratio estimate will tend to be unstable because the weights p_h(θ)/π̃(θ) can be arbitrarily large, where p_h(θ) = l(θ|y) π_h(θ). On the other hand, it may be difficult to construct a proper importance density π̃ which has heavier tails than π_h(θ|y) for all h ∈ H. Since the importance sampling estimator (4.3) can be used even when π̃ is improper, in the following example we take π̃ to be the improper uniform density on ℝ.

Let y ≡ (y_1, y_2, ..., y_m) be a random sample from N(μ, 1). Suppose the conjugate normal prior N(μ_0, σ_0²) is assumed for μ. Here h = (μ_0, σ_0). Let {μ_j}_{j≥0} be the random walk on ℝ with Uniform(−1/2, 1/2) increment distribution. As mentioned in Section 3, {μ_j}_{j≥0} is a Harris recurrent Markov chain with state space ℝ and the Lebesgue measure as its invariant measure. The estimator (4.3) for E_h(f(μ)|y) in this case becomes

    R̂_n = Σ_{j=0}^n f(μ_j) exp(−m(ȳ − μ_j)²/2) φ(μ_j; μ_0, σ_0²) / Σ_{j=0}^n exp(−m(ȳ − μ_j)²/2) φ(μ_j; μ_0, σ_0²),   (4.4)

where ȳ = Σ_{i=1}^m y_i/m, and φ(x; a, b) is the density of the normal distribution with mean a and variance b, evaluated at the point x.
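A sketch of this computation in R (ours, with the settings m = 5 and ȳ = 0 taken from the plots described below, and a shortened chain for illustration) reuses one random walk chain for every hyperparameter value h = (μ_0, σ_0).

    # Sensitivity analysis via (4.4): one Uniform(-1/2,1/2) random walk
    # chain reused for all hyperparameter values. m = 5 and ybar = 0
    # follow the example; the chain length is shortened for illustration.
    set.seed(1)
    m <- 5; ybar <- 0; n <- 1e6
    mu_chain <- c(0, cumsum(runif(n, -0.5, 0.5)))     # random walk from 0
    post_mean <- function(mu0, sigma0) {
      w <- exp(-m * (ybar - mu_chain)^2 / 2) *
           dnorm(mu_chain, mean = mu0, sd = sigma0)   # weights p_h(mu)
      sum(mu_chain * w) / sum(w)                      # (4.4) with f(mu) = mu
    }
    mu0_grid <- seq(-10, 10, by = 0.5)
    est <- sapply(mu0_grid, post_mean, sigma0 = 0.5)  # cf. left panel, Fig 4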

A single random walk chain {μ_j}_{j=0}^n started at 0 with n = 10⁷ is used to estimate E_h(f(μ)|y), with f(μ) ≡ μ, for different values of (μ_0, σ_0). The plots in Figure 4 give graphs of the estimate (4.4) as μ_0 and σ_0 vary. For making these plots, we took m = 5 and ȳ = 0. We fixed σ_0 = 0.5 for the graph in the left panel in Figure 4, and the plot is based on the estimate (4.4) for 41 values of μ_0 (we took μ_0 ranging from −10 to 10 by increments of 0.5). The right panel in Figure 4 is based on the estimate (4.4) for 410 values of (μ_0, σ_0), with μ_0 ranging from −10 to 10 by increments of 0.5 and σ_0 ranging from 0.1 to 1 by increments of 0.1. Both plots are median estimates of the posterior expectations E_h(μ|y) based on 30 independent repetitions of random walk chains of length n = 10⁷ started at 0. The pointwise empirical 90% confidence intervals are too narrow to distinguish these quantiles from the median estimates in these plots. The median estimates also match (up to two decimal places) the true values E_{(μ_0,σ_0)}(μ|y) = mσ_0²ȳ/(mσ_0² + 1) + μ_0/(mσ_0² + 1).

Fig 4. Posterior expectations for different values of hyperparameters.

5. Reduction to proper target distribution

In the previous sections we have presented several algorithms for estimating integrals with respect to a measure π that may be proper or improper. In this section we propose one more algorithm that involves reduction to a proper target distribution. We begin with the following result.

Lemma 1. Let π be a σ-finite measure on a measurable space (S, 𝒮).

(i) Let {A_n}_{n≥1} be a finite or countable sequence of 𝒮-sets such that the A_n's are disjoint, π(A_n) < ∞ for all n ≥ 1, and S = ∪_{n=1}^∞ A_n. Let J = {j : π(A_j) > 0} and let {p_j : j ∈ J} be such that p_j > 0 for all j ∈ J and Σ_{j∈J} p_j = 1. Then π is dominated by the probability measure π_0 defined by

    π_0(A) = Σ_{j∈J} p_j π(A ∩ A_j)/π(A_j), A ∈ 𝒮.   (5.1)

That is, for A ∈ 𝒮, π_0(A) = 0 implies π(A) = 0.

(ii) Further, the Radon-Nikodym derivative h = dπ/dπ_0 is given by

    h(·) = Σ_{j∈J} [π(A_j)/p_j] I_{A_j}(·).   (5.2)

Remark 6. Since there can be many choices of {A_n}_{n≥1}, and since we can use any {p_j : j ∈ J} such that p_j > 0 for all j ∈ J and Σ_{j∈J} p_j = 1, the above lemma shows that, given a σ-finite measure π, there are uncountably many probability measures π_0 dominating π.

Let π be the given σ-finite (possibly infinite, that is, π(S) = ∞) measure on (S, 𝒮) and suppose we can find a probability measure π_0 on S such that π is dominated by π_0. (For example, if π is absolutely continuous on ℝ^d, we can take π_0 to be the probability measure corresponding to the d-variate Student's t distribution.) Let h ≡ dπ/dπ_0 be the Radon-Nikodym derivative of π with respect to π_0, and suppose we know h only up to a normalizing constant, that is, h = c h*, where the finite constant c > 0 is unknown and h* is known. In that case, if we have an integrable (with respect to π) function g : S → ℝ such that ∫_S g dπ (≠ 0) is known, we have a consistent estimator of λ = ∫_S f dπ via a new ratio estimator

    R̃_n = Σ_{j=0}^n f(X_j) h*(X_j) / Σ_{j=0}^n g(X_j) h*(X_j) → ∫_S f h dπ_0 / ∫_S g h dπ_0 = ∫_S f dπ / ∫_S g dπ,   (5.3)

where {X_j}_{j=0}^n is either an iid sample from π_0 or realizations of a positive Harris recurrent Markov chain with invariant measure π_0, started at any initial X_0 = x_0.

Lemma 1 and the above discussion are formalized in the following algorithm for estimating λ = ∫_S f dπ, where π is a σ-finite measure on S.

Algorithm IV:
1. Find a probability measure π_0 dominating π (there are infinitely many such probability measures). Let h be the density of π with respect to π_0.
2. Draw an iid sample {X_j}_{j=0}^n from π_0, or run a positive Harris recurrent Markov chain {X_j}_{j=0}^n (starting at any X_0 = x_0) with stationary distribution π_0. Estimate λ by the usual estimator Σ_{j=0}^n f(X_j)h(X_j)/(n + 1) if h is completely known; otherwise, if h is known up to a multiplicative constant, estimate λ via the ratio estimator (5.3).

One of the difficulties with Algorithm IV is that if π_0 has thinner tails than π, then the function h (or h*) can take rather large values for some X_j's, making the estimator highly variable.

Example 5.1. Let S = ℝ and let f : ℝ → ℝ be Lebesgue integrable. Our goal is to obtain a consistent estimator of λ = ∫_ℝ f(x) π(dx) using Algorithm IV, where π(dx) = dx is the Lebesgue measure on ℝ. Define A_j = [j, j + 1) for all j ∈ ℤ, and let p_j > 0, j ∈ ℤ, be such that Σ_{j∈ℤ} p_j = 1. Since π([j, j + 1)) = 1, from (5.1) we have

    π_0(A) = Σ_{j∈ℤ} p_j π(A ∩ [j, j + 1)), A ∈ B(ℝ).

Suppose U ~ Uniform(0, 1) and let N be a ℤ-valued random variable with P(N = j) = p_j, j ∈ ℤ. Let U and N be independent and define X = N + U. Then it is easy to see that X has an absolutely continuous distribution with probability density function f_X(x) = p_j if j ≤ x < j + 1, j ∈ ℤ; that is, X ~ π_0. From (5.2), the Radon-Nikodym derivative h = dπ/dπ_0 = 1/p_j if j ≤ x < j + 1, j ∈ ℤ. Let X_1, X_2, ..., X_n be an iid random sample with common distribution π_0. Then a consistent estimator of λ is Σ_{j=1}^n f(X_j)h(X_j)/n = (1/n) Σ_{j=1}^n f(X_j)/p_{⌊X_j⌋}.
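The construction of Example 5.1 is simple to simulate. The following R sketch is ours; the weights p_j = 2^(−|j|)/3 and the target f(x) = 1/x² for x ≥ 1 (so λ = 1) are illustrative choices only.

    # Algorithm IV / Example 5.1: pi = Lebesgue measure on R, dominated by
    # pi_0 built from A_j = [j, j+1) with p_j = 2^(-|j|)/3 (illustrative).
    # Target (also illustrative): f(x) = 1/x^2 for x >= 1, so lambda = 1.
    set.seed(1)
    f <- function(x) ifelse(x >= 1, 1 / x^2, 0)
    p <- function(j) 2^(-abs(j)) / 3          # p_j, summing to 1 over Z
    n <- 1e6
    mag <- ifelse(runif(n) < 1/3, 0L, rgeom(n, 0.5) + 1L)  # |N|
    sgn <- sample(c(-1L, 1L), n, replace = TRUE)
    N <- sgn * mag                            # P(N = j) = p_j
    X <- N + runif(n)                         # X ~ pi_0
    lambda_hat <- mean(f(X) / p(floor(X)))    # (1/n) sum f(X_j)/p_{floor(X_j)}
    lambda_hat

Here π_0 has much thinner tails than the problem warrants, so the weights 1/p_{⌊X⌋} are heavy tailed and the estimator, while consistent, is highly variable, which is exactly the difficulty with Algorithm IV noted above.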

6. Concluding remarks

In this paper we have considered the problem of estimating λ ≡ ∫_S f dπ, where f : S → ℝ, ∫_S |f| dπ < ∞, but π(S) could be ∞. We have shown that the regular (iid or Markov chain) Monte Carlo methods fail in the case when π(S) = ∞. Here we have presented several statistical algorithms for estimating λ (when π(S) is finite or infinite) based on the notion of regenerative sequences. It may be noted that while the general recipe in Section 2 and Algorithm III use regenerative sequences that may not be Markov chains, Algorithms I and II use null recurrent Markov chains. All these methods may be referred to as regenerative sequence Monte Carlo (RSMC) methods. Algorithm IV uses positive recurrent Markov chains.

Although the Algorithms I-IV do provide consistent estimators, in this paper we have not addressed the important question of the second order accuracy, that is, the rate of decay of (λ̂_n − λ), where λ̂_n is the estimator of λ. This is a matter for further research and has been addressed in a forthcoming paper by the authors. In MCMC with a proper target (that is, π is a probability measure), [17] discuss the estimation of the standard error of λ̂_n using the regeneration of the underlying Markov chain. In the present context of improper target distributions, something similar based on standard errors computed from the observed excursions of the underlying regenerative sequences could be tried. The results in [14] may be applicable to some of the Markov chains discussed here, although verifying Karlsen and Tjøstheim's (2001) conditions for functional limit theorems does not seem to be easy. In the case of reduction to a proper target distribution π_0 (that is, Algorithm IV), it is possible to establish the rate of decay of (λ̂_n − λ) under strong conditions on f as well as on the mixing rates of the chain used (with stationary distribution π_0).

Appendices

Appendix A: Results on the existence and the uniqueness of invariant measures for Markov chains

Theorem 6. Let {X_n}_{n≥0} be a Markov chain with state space (S, 𝒮). Let Δ ∈ S be a singleton such that {Δ} ∈ 𝒮. Let T_Δ be the hitting time (also known as the first passage time) of Δ, that is, T_Δ = min{n : 1 ≤ n < ∞, X_n = Δ}, and T_Δ = ∞ if X_n ≠ Δ for all n < ∞. Let

    π(A) = E( Σ_{j=0}^{T_Δ−1} I_A(X_j) | X_0 = Δ ) for A ∈ 𝒮.   (A.1)

(i) (Existence of a subinvariant measure) Let P_Δ(·) denote P(·|X_0 = Δ); then we have

    π(A) = I_A(Δ) P_Δ(T_Δ = ∞) + ∫_S P(x, A) π(dx) for all A ∈ 𝒮,   (A.2)

and hence π(·) is a subinvariant measure with respect to P, that is,

    π(A) ≥ ∫_S P(x, A) π(dx) for all A ∈ 𝒮.   (A.3)

(ii) (Uniqueness of the subinvariant measure) If π̃ is a measure on (S, 𝒮) that is subinvariant with respect to P, then

    π̃(A) ≥ π̃({Δ}) π*(A) for all A ∈ 𝒮,   (A.4)

where π*(A) := π(A) − I_A(Δ) P_Δ(T_Δ = ∞) = ∫_S P(x, A) π(dx) = (πP)(A).

Corollary 3. Let π(·) be as defined in (A.1). Then

    π(A) = ∫_S P(x, A) π(dx) for all A ∈ 𝒮,   (A.5)

that is, π(·) is an invariant measure with respect to P, if and only if Δ is recurrent, that is, P_Δ(T_Δ < ∞) = 1. Further, Δ is recurrent if and only if π = π* = πP. Also, if Δ is accessible from everywhere in S, that is, if for all x ∈ S there exists n(x) ≥ 1 such that P^{n(x)}(x, {Δ}) > 0, then for every subinvariant measure π̃ with π̃({Δ}) < ∞, we have π̃(A) = π̃({Δ}) π(A) for all A ∈ 𝒮.

A version of Theorem 6 can be found in Meyn and Tweedie [16, Section 10.2]. For the sake of completeness we give proofs of Theorem 6 and Corollary 3 in Appendix B.

Appendix B: Proofs of results

Proof of Theorem 1. Let h : S → ℝ₊ be a nonnegative function. Recall that N_n = k if T_k ≤ n < T_{k+1}, k, n = 0, 1, 2, .... Since h ≥ 0,

    Σ_{j=0}^{T_{N_n}−1} h(X_j) ≤ Σ_{j=0}^n h(X_j) ≤ Σ_{j=0}^{T_{N_n+1}−1} h(X_j).

Let ξ_r = Σ_{j=T_r}^{T_{r+1}−1} h(X_j) for r = 0, 1, 2, .... So we have

    (1/N_n) Σ_{r=0}^{N_n−1} ξ_r ≤ (1/N_n) Σ_{j=0}^n h(X_j) ≤ (1/N_n) Σ_{r=0}^{N_n} ξ_r,   (B.1)

for all n ≥ 1. Since {X_n}_{n≥0} is a regenerative sequence, {ξ_i}_{i≥0} are iid nonnegative random variables with E(ξ_0) = ∫_S h dπ̌, where π̌ is as defined in (2.1). Also, we have N_n → ∞ as n → ∞. By the strong law of large numbers, Σ_{r=0}^{N_n} ξ_r / N_n → E(ξ_0) as n → ∞. Using (B.1) we then have

    ∫_S h dπ̌ ≤ liminf_{n→∞} (1/N_n) Σ_{j=0}^n h(X_j) ≤ limsup_{n→∞} (1/N_n) Σ_{j=0}^n h(X_j) ≤ ∫_S h dπ̌,

and hence

    lim_{n→∞} (1/N_n) Σ_{j=0}^n h(X_j) = ∫_S h dπ̌.   (B.2)

Let f⁺ and f⁻ be the positive and negative parts of the function f. Applying the above argument to f⁺ and f⁻ separately yields (i) of Theorem 1. Next, the proof of (ii) of the theorem follows by noting that

    Σ_{j=0}^n f(X_j) / Σ_{j=0}^n g(X_j) = ( Σ_{j=0}^n f⁺(X_j) − Σ_{j=0}^n f⁻(X_j) ) / ( Σ_{j=0}^n g⁺(X_j) − Σ_{j=0}^n g⁻(X_j) ),

and applying (B.2) to each of the four terms in the above ratio.

Proof of Theorem 2. To prove Theorem 2(ii), note that P(N_n ≤ k) = P(T_{k+1} > n) for all integers k ≥ 0 and n ≥ 0. This implies that for 0 < x < ∞,

    P(N_n ≤ b_n x) = P(N_n ≤ ⌊b_n x⌋) = P( T_{⌊b_n x⌋+1} > n ) = P( T_{⌊b_n x⌋+1} / a_{⌊b_n x⌋+1} > n / a_{⌊b_n x⌋+1} ),

where {a_n} is as in (2.4) and b_n > 0 will be chosen shortly. Suppose b_n is such that for each 0 < x < ∞,

    n / a_{⌊b_n x⌋+1} → x^(−1/α).   (B.3)

Then, since V_α has a continuous distribution, by (2.4) it follows that as n → ∞,

    P( T_{⌊b_n x⌋+1} / a_{⌊b_n x⌋+1} > n / a_{⌊b_n x⌋+1} ) → P(V_α > x^(−1/α)).

(This uses the fact that if W_n is a sequence of random variables such that W_n → W in distribution with P(W = w_0) = 0, and w_n → w_0, then P(W_n > w_n) → P(W > w_0).) Now we find b_n so that (B.3) holds. Let Ψ(x) ≡ P(τ_1 > x). Then Ψ(x) is monotone in x. Thus Ψ^(−1)(·) is well defined on (0, 1). Also, by the hypothesis of Theorem 2, Ψ(x) ~ x^(−α) L(x) as x → ∞. We know that n a_n^(−α) L(a_n) → 1, that is, nΨ(a_n) → 1 as n → ∞. This in turn implies that as n → ∞, Ψ(a_n) ~ 1/n, or a_n ~ Ψ^(−1)(1/n), that is, a_{⌊b_n x⌋+1} ~ Ψ^(−1)(1/(⌊b_n x⌋ + 1)). Thus (B.3) will hold if, as n → ∞,

    n / Ψ^(−1)(1/(⌊b_n x⌋ + 1)) → x^(−1/α),

that is, Ψ^(−1)(1/(⌊b_n x⌋ + 1)) ~ n x^(1/α), or 1/(⌊b_n x⌋ + 1) ~ Ψ(n x^(1/α)) ~ (n x^(1/α))^(−α) L(n x^(1/α)). This implies that as n → ∞,

    1/(b_n x) ~ n^(−α) x^(−1) L(n x^(1/α)).

Since L(·) is slowly varying at ∞, L(n x^(1/α))/L(n) → 1, and so the above will hold if b_n = n^α/L(n), thus establishing (B.3). Hence (2.6) is proved.

Proof of Corollary 1. Let λ̌ ≡ ∫_S f dπ̌. Note that D_n^r → W_r in distribution as n → ∞, where W_r is distributed as λ̌ Y_α, and the W_r's are independent for r = 1, 2, ..., k. Since D_n^1, ..., D_n^k are also independent, (D_n^1, ..., D_n^k) → (W_1, ..., W_k) in distribution, which, by the continuous mapping theorem, implies that λ̂_{n,k} = Σ_{r=1}^k D_n^r/(k E(Y_α)) → Σ_{r=1}^k W_r/(k E(Y_α)) ≡ W̄_k in distribution as n → ∞. By Skorohod's theorem, for each k there exist λ̃_{n,k}, equal in distribution to λ̂_{n,k}, and W̃_k, equal in distribution to W̄_k, such that λ̃_{n,k} → W̃_k almost surely as n → ∞. Since, by the strong law of large numbers, W̄_k → λ̌ as k → ∞, and W̃_k equals W̄_k in distribution, we have W̃_k → λ̌ in probability as k → ∞. Since λ̌ is a constant and convergence in probability implies almost sure convergence along a subsequence, there exists k_j → ∞ such that W̃_{k_j} → λ̌. So λ̃_{n_{k_j}, k_j} → λ̌. However, λ̃_{n,k} equals λ̂_{n,k} in distribution, implying λ̂_{n_{k_j}, k_j} → λ̌ as j → ∞. The corollary follows since there exist further subsequences k_{j_l} and n_{k_{j_l}} such that λ̂_{n_{k_{j_l}}, k_{j_l}} → λ̌ as l → ∞.

Proof of Theorem 3. Note that

    (1/n) Σ_{i=1}^n f(X_i) = (N_n/n) · [ (1/N_n) Σ_{i=1}^n f(X_i) ],   (B.4)

and by Theorem 1 we know that Σ_{i=1}^n f(X_i)/N_n → ∫_S f dπ̌. Let T_j and τ_j be as in Definition 1. Now

    T_{N_n}/N_n = ( Σ_{j=1}^{N_n} τ_j ) / N_n ≤ n/N_n ≤ ( Σ_{j=1}^{N_n+1} τ_j ) / N_n = T_{N_n+1}/N_n.

Since the τ_i's are iid, by the SLLN, T_n/n → E(τ_1) = π̌(S) = ∞. So lim_{n→∞} n/N_n exists and lim_{n→∞} n/N_n = ∞ with probability 1, which implies that N_n/n → 0, and hence the proof follows from (B.4).

Proof of Theorem 5. Let T_0 = 0, T_{r+1} = inf{n : n ≥ T_r + 1, X_n = 0} for r ≥ 0 be the successive return times of the SSRW {X_n}_{n≥0} to the state 0. Let h : ℝ^d → ℝ₊ be a nonnegative function. Since {Y_n}_{n≥0} is regenerative with regeneration times the T_r's, from (B.2) we have

    lim_{n→∞} (1/N_n) Σ_{j=0}^n h(Y_j) = E( Σ_{j=0}^{T_1−1} h(Y_j) ).   (B.5)

Note that

    E( Σ_{j=0}^{T_1−1} h(Y_j) ) = Σ_{j=0}^∞ E( h(Y_j) I(j < T_1) )
        = Σ_{j=0}^∞ E( E( h(κ(X_j) + U_j) I(j < T_1) | {X_i}_{i=0}^j ) )
        = Σ_{j=0}^∞ E( E( h(κ(X_j) + U_j) | {X_i}_{i=0}^j ) I(j < T_1) )
        = Σ_{j=0}^∞ E( ψ(X_j) I(j < T_1) ),

where ψ : ℤ → ℝ is defined as ψ(x) := ∫_K h(κ(x) + u) du, with K = [−1/2, 1/2] × [−1/2, 1/2] × ... × [−1/2, 1/2] (d times). Since

    Σ_{j=0}^∞ E( ψ(X_j) I(j < T_1) ) = E( Σ_{j=0}^{T_1−1} ψ(X_j) ) = Σ_{j∈ℤ} ψ(j),

it follows that

    E( Σ_{j=0}^{T_1−1} h(Y_j) ) = Σ_{j∈ℤ} ∫_K h(κ(j) + u) du = ∫_{ℝ^d} h(x) dx.

From (B.5) it follows that lim_{n→∞} Σ_{j=0}^n h(Y_j)/N_n = ∫_{ℝ^d} h(x) dx. Applying the above arguments to f⁺p and f⁻p separately yields (3.2).

Proof of Lemma 1. The measure π_0 defined in (5.1) is a probability measure since

    π_0(S) = Σ_{j∈J} p_j π(S ∩ A_j)/π(A_j) = Σ_{j∈J} p_j = 1.

Now the first part of the lemma follows from the fact that

    π_0(A) = 0 implies π(A ∩ A_j) = 0 for all j ∈ J, which implies π(A) = 0.

In order to prove Lemma 1(ii), note that

    π(A) = Σ_{j∈J} π(A ∩ A_j) = Σ_{j∈J} [ π(A ∩ A_j) / (π(A_j) p_j) ] π(A_j) p_j.   (B.6)

Since the A_n's are disjoint, for any j_0 ∈ J,

    π_0(A ∩ A_{j_0}) = Σ_{j∈J} p_j π(A ∩ A_{j_0} ∩ A_j)/π(A_j) = p_{j_0} π(A ∩ A_{j_0})/π(A_{j_0}).

So from (B.6) we have, for all A ∈ 𝒮,

    π(A) = Σ_{j∈J} [π(A_j)/p_j] π_0(A ∩ A_j) = ∫_A Σ_{j∈J} [π(A_j)/p_j] I_{A_j}(·) dπ_0,

and hence the Radon-Nikodym derivative is

    h = dπ/dπ_0 = Σ_{j∈J} [π(A_j)/p_j] I_{A_j}(·).

Proof of Theorem 6. Let π(·) be as defined in (A.1). Then for any A ∈ 𝒮,

    π(A) = E( Σ_{j=0}^{T_Δ−1} I_A(X_j) | X_0 = Δ ) = I_A(Δ) + E( Σ_{j=1}^∞ I_A(X_j) I(T_Δ − 1 ≥ j) | X_0 = Δ ).

Now letting E_Δ(·) denote E(·|X_0 = Δ), we have

    E_Δ( I_A(X_j) I(T_Δ − 1 ≥ j) ) = E_Δ[ I_A(X_j) I(T_Δ > j − 1) − I_A(X_j) I(T_Δ = j) ].

Thus, since X_j = Δ on {T_Δ = j},

    π(A) = I_A(Δ) + Σ_{j=1}^∞ E_Δ( I_A(X_j) I(T_Δ > j − 1) ) − Σ_{j=1}^∞ I_A(Δ) P_Δ(T_Δ = j)
         = I_A(Δ) + Σ_{j=1}^∞ E_Δ( I_A(X_j) I(T_Δ > j − 1) ) − I_A(Δ) P_Δ(T_Δ < ∞)
         = I_A(Δ) P_Δ(T_Δ = ∞) + Σ_{j=1}^∞ E_Δ( I_A(X_j) I(T_Δ > j − 1) ).   (B.7)

Now for j ≥ 1, {T_Δ > j − 1} = {T_Δ ≤ j − 1}^c belongs to σ(X_0, X_1, ..., X_{j−1}), the σ-algebra generated by X_0, X_1, ..., X_{j−1}. By the Markov property of {X_n}_{n≥0}, for j ≥ 1,

    E_Δ( I_A(X_j) I(T_Δ > j − 1) ) = E_Δ( I(T_Δ > j − 1) E[I_A(X_j) | X_{j−1}] ) = E_Δ( I(T_Δ > j − 1) P(X_{j−1}, A) ).

Thus

    Σ_{j=1}^∞ E_Δ( I_A(X_j) I(T_Δ > j − 1) ) = E_Δ( Σ_{j=1}^∞ P(X_{j−1}, A) I(T_Δ > j − 1) )
        = E_Δ( Σ_{j=0}^{T_Δ−1} P(X_j, A) ) = ∫_S P(x, A) π(dx),

where the last equality follows from the definition of π(·) as in (A.1). So from (B.7) we get

    π(A) = I_A(Δ) P_Δ(T_Δ = ∞) + ∫_S P(x, A) π(dx),

establishing (A.2), that is, proving Theorem 6(i).

Next, let π̃ be a σ-finite measure on (S, 𝒮) that satisfies (A.3). We need to show that π̃(A) ≥ π̃({Δ}) π*(A) for all A ∈ 𝒮. From (A.3) we have

    π̃(A) ≥ π̃({Δ}) P(Δ, A) + ∫_{S∖{Δ}} P(x, A) π̃(dx).

But

    ∫_{S∖{Δ}} P(x, A) π̃(dx) ≥ π̃({Δ}) ∫_{S∖{Δ}} P(Δ, dx) P(x, A) + ∫_{S∖{Δ}} ( ∫_{S∖{Δ}} P(y, dx) π̃(dy) ) P(x, A)
        = π̃({Δ}) P_Δ(X_2 ∈ A, X_1 ∈ S∖{Δ}) + ∫_{S∖{Δ}} ∫_{S∖{Δ}} P(y, dx) P(x, A) π̃(dy).

Iterating this, we find that (A.3) implies that for n ≥ 2,

    π̃(A) ≥ π̃({Δ}) ( P_Δ(X_1 ∈ A) + P_Δ(X_2 ∈ A, T_Δ > 1) + ... + P_Δ(X_n ∈ A, T_Δ > n − 1) ).


More information

JUST THE MATHS UNIT NUMBER DIFFERENTIATION 2 (Rates of change) A.J.Hobson

JUST THE MATHS UNIT NUMBER DIFFERENTIATION 2 (Rates of change) A.J.Hobson JUST THE MATHS UNIT NUMBER 10.2 DIFFERENTIATION 2 (Rates of change) by A.J.Hobson 10.2.1 Introuction 10.2.2 Average rates of change 10.2.3 Instantaneous rates of change 10.2.4 Derivatives 10.2.5 Exercises

More information

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R. Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions

More information

Robust Forward Algorithms via PAC-Bayes and Laplace Distributions. ω Q. Pr (y(ω x) < 0) = Pr A k

Robust Forward Algorithms via PAC-Bayes and Laplace Distributions. ω Q. Pr (y(ω x) < 0) = Pr A k A Proof of Lemma 2 B Proof of Lemma 3 Proof: Since the support of LL istributions is R, two such istributions are equivalent absolutely continuous with respect to each other an the ivergence is well-efine

More information

Some Examples. Uniform motion. Poisson processes on the real line

Some Examples. Uniform motion. Poisson processes on the real line Some Examples Our immeiate goal is to see some examples of Lévy processes, an/or infinitely-ivisible laws on. Uniform motion Choose an fix a nonranom an efine X := for all (1) Then, {X } is a [nonranom]

More information

The derivative of a function f(x) is another function, defined in terms of a limiting expression: f(x + δx) f(x)

The derivative of a function f(x) is another function, defined in terms of a limiting expression: f(x + δx) f(x) Y. D. Chong (2016) MH2801: Complex Methos for the Sciences 1. Derivatives The erivative of a function f(x) is another function, efine in terms of a limiting expression: f (x) f (x) lim x δx 0 f(x + δx)

More information

The Exact Form and General Integrating Factors

The Exact Form and General Integrating Factors 7 The Exact Form an General Integrating Factors In the previous chapters, we ve seen how separable an linear ifferential equations can be solve using methos for converting them to forms that can be easily

More information

LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION

LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION The Annals of Statistics 1997, Vol. 25, No. 6, 2313 2327 LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION By Eva Riccomagno, 1 Rainer Schwabe 2 an Henry P. Wynn 1 University of Warwick, Technische

More information

Monte Carlo Methods with Reduced Error

Monte Carlo Methods with Reduced Error Monte Carlo Methos with Reuce Error As has been shown, the probable error in Monte Carlo algorithms when no information about the smoothness of the function is use is Dξ r N = c N. It is important for

More information

ALGEBRAIC AND ANALYTIC PROPERTIES OF ARITHMETIC FUNCTIONS

ALGEBRAIC AND ANALYTIC PROPERTIES OF ARITHMETIC FUNCTIONS ALGEBRAIC AND ANALYTIC PROPERTIES OF ARITHMETIC FUNCTIONS MARK SCHACHNER Abstract. When consiere as an algebraic space, the set of arithmetic functions equippe with the operations of pointwise aition an

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

A Modification of the Jarque-Bera Test. for Normality

A Modification of the Jarque-Bera Test. for Normality Int. J. Contemp. Math. Sciences, Vol. 8, 01, no. 17, 84-85 HIKARI Lt, www.m-hikari.com http://x.oi.org/10.1988/ijcms.01.9106 A Moification of the Jarque-Bera Test for Normality Moawa El-Fallah Ab El-Salam

More information

19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control

19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control 19 Eigenvalues, Eigenvectors, Orinary Differential Equations, an Control This section introuces eigenvalues an eigenvectors of a matrix, an iscusses the role of the eigenvalues in etermining the behavior

More information

A Weak First Digit Law for a Class of Sequences

A Weak First Digit Law for a Class of Sequences International Mathematical Forum, Vol. 11, 2016, no. 15, 67-702 HIKARI Lt, www.m-hikari.com http://x.oi.org/10.1288/imf.2016.6562 A Weak First Digit Law for a Class of Sequences M. A. Nyblom School of

More information

Perfect Matchings in Õ(n1.5 ) Time in Regular Bipartite Graphs

Perfect Matchings in Õ(n1.5 ) Time in Regular Bipartite Graphs Perfect Matchings in Õ(n1.5 ) Time in Regular Bipartite Graphs Ashish Goel Michael Kapralov Sanjeev Khanna Abstract We consier the well-stuie problem of fining a perfect matching in -regular bipartite

More information

Math 115 Section 018 Course Note

Math 115 Section 018 Course Note Course Note 1 General Functions Definition 1.1. A function is a rule that takes certain numbers as inputs an assigns to each a efinite output number. The set of all input numbers is calle the omain of

More information

A simple tranformation of copulas

A simple tranformation of copulas A simple tranformation of copulas V. Durrleman, A. Nikeghbali & T. Roncalli Groupe e Recherche Opérationnelle Créit Lyonnais France July 31, 2000 Abstract We stuy how copulas properties are moifie after

More information

Extreme Values by Resnick

Extreme Values by Resnick 1 Extreme Values by Resnick 1 Preliminaries 1.1 Uniform Convergence We will evelop the iea of something calle continuous convergence which will be useful to us later on. Denition 1. Let X an Y be metric

More information

Lecture 5. Symmetric Shearer s Lemma

Lecture 5. Symmetric Shearer s Lemma Stanfor University Spring 208 Math 233: Non-constructive methos in combinatorics Instructor: Jan Vonrák Lecture ate: January 23, 208 Original scribe: Erik Bates Lecture 5 Symmetric Shearer s Lemma Here

More information

Discrete Mathematics

Discrete Mathematics Discrete Mathematics 309 (009) 86 869 Contents lists available at ScienceDirect Discrete Mathematics journal homepage: wwwelseviercom/locate/isc Profile vectors in the lattice of subspaces Dániel Gerbner

More information

Tutorial on Maximum Likelyhood Estimation: Parametric Density Estimation

Tutorial on Maximum Likelyhood Estimation: Parametric Density Estimation Tutorial on Maximum Likelyhoo Estimation: Parametric Density Estimation Suhir B Kylasa 03/13/2014 1 Motivation Suppose one wishes to etermine just how biase an unfair coin is. Call the probability of tossing

More information

Linear First-Order Equations

Linear First-Order Equations 5 Linear First-Orer Equations Linear first-orer ifferential equations make up another important class of ifferential equations that commonly arise in applications an are relatively easy to solve (in theory)

More information

arxiv: v4 [math.pr] 27 Jul 2016

arxiv: v4 [math.pr] 27 Jul 2016 The Asymptotic Distribution of the Determinant of a Ranom Correlation Matrix arxiv:309768v4 mathpr] 7 Jul 06 AM Hanea a, & GF Nane b a Centre of xcellence for Biosecurity Risk Analysis, University of Melbourne,

More information

Equilibrium in Queues Under Unknown Service Times and Service Value

Equilibrium in Queues Under Unknown Service Times and Service Value University of Pennsylvania ScholarlyCommons Finance Papers Wharton Faculty Research 1-2014 Equilibrium in Queues Uner Unknown Service Times an Service Value Laurens Debo Senthil K. Veeraraghavan University

More information

Separation of Variables

Separation of Variables Physics 342 Lecture 1 Separation of Variables Lecture 1 Physics 342 Quantum Mechanics I Monay, January 25th, 2010 There are three basic mathematical tools we nee, an then we can begin working on the physical

More information

1 dx. where is a large constant, i.e., 1, (7.6) and Px is of the order of unity. Indeed, if px is given by (7.5), the inequality (7.

1 dx. where is a large constant, i.e., 1, (7.6) and Px is of the order of unity. Indeed, if px is given by (7.5), the inequality (7. Lectures Nine an Ten The WKB Approximation The WKB metho is a powerful tool to obtain solutions for many physical problems It is generally applicable to problems of wave propagation in which the frequency

More information

Math 300 Winter 2011 Advanced Boundary Value Problems I. Bessel s Equation and Bessel Functions

Math 300 Winter 2011 Advanced Boundary Value Problems I. Bessel s Equation and Bessel Functions Math 3 Winter 2 Avance Bounary Value Problems I Bessel s Equation an Bessel Functions Department of Mathematical an Statistical Sciences University of Alberta Bessel s Equation an Bessel Functions We use

More information

Final Exam Study Guide and Practice Problems Solutions

Final Exam Study Guide and Practice Problems Solutions Final Exam Stuy Guie an Practice Problems Solutions Note: These problems are just some of the types of problems that might appear on the exam. However, to fully prepare for the exam, in aition to making

More information

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i :=

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i := 2.7. Recurrence and transience Consider a Markov chain {X n : n N 0 } on state space E with transition matrix P. Definition 2.7.1. A state i E is called recurrent if P i [X n = i for infinitely many n]

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 3

MATH 56A: STOCHASTIC PROCESSES CHAPTER 3 MATH 56A: STOCHASTIC PROCESSES CHAPTER 3 Plan for rest of semester (1) st week (8/31, 9/6, 9/7) Chap 0: Diff eq s an linear recursion (2) n week (9/11...) Chap 1: Finite Markov chains (3) r week (9/18...)

More information

Schrödinger s equation.

Schrödinger s equation. Physics 342 Lecture 5 Schröinger s Equation Lecture 5 Physics 342 Quantum Mechanics I Wenesay, February 3r, 2010 Toay we iscuss Schröinger s equation an show that it supports the basic interpretation of

More information

Implicit Differentiation

Implicit Differentiation Implicit Differentiation Thus far, the functions we have been concerne with have been efine explicitly. A function is efine explicitly if the output is given irectly in terms of the input. For instance,

More information

6 General properties of an autonomous system of two first order ODE

6 General properties of an autonomous system of two first order ODE 6 General properties of an autonomous system of two first orer ODE Here we embark on stuying the autonomous system of two first orer ifferential equations of the form ẋ 1 = f 1 (, x 2 ), ẋ 2 = f 2 (, x

More information

SHARP BOUNDS FOR EXPECTATIONS OF SPACINGS FROM DECREASING DENSITY AND FAILURE RATE FAMILIES

SHARP BOUNDS FOR EXPECTATIONS OF SPACINGS FROM DECREASING DENSITY AND FAILURE RATE FAMILIES APPLICATIONES MATHEMATICAE 3,4 (24), pp. 369 395 Katarzyna Danielak (Warszawa) Tomasz Rychlik (Toruń) SHARP BOUNDS FOR EXPECTATIONS OF SPACINGS FROM DECREASING DENSITY AND FAILURE RATE FAMILIES Abstract.

More information

Diophantine Approximations: Examining the Farey Process and its Method on Producing Best Approximations

Diophantine Approximations: Examining the Farey Process and its Method on Producing Best Approximations Diophantine Approximations: Examining the Farey Process an its Metho on Proucing Best Approximations Kelly Bowen Introuction When a person hears the phrase irrational number, one oes not think of anything

More information

Levy Process and Infinitely Divisible Law

Levy Process and Infinitely Divisible Law Stat205B: Probability Theory (Spring 2003) Lecture: 26 Levy Process an Infinitely Divisible Law Lecturer: James W. Pitman Scribe: Bo Li boli@stat.berkeley.eu Levy Processes an Infinitely Divisible Law

More information

Lower bounds on Locality Sensitive Hashing

Lower bounds on Locality Sensitive Hashing Lower bouns on Locality Sensitive Hashing Rajeev Motwani Assaf Naor Rina Panigrahy Abstract Given a metric space (X, X ), c 1, r > 0, an p, q [0, 1], a istribution over mappings H : X N is calle a (r,

More information

Construction of the Electronic Radial Wave Functions and Probability Distributions of Hydrogen-like Systems

Construction of the Electronic Radial Wave Functions and Probability Distributions of Hydrogen-like Systems Construction of the Electronic Raial Wave Functions an Probability Distributions of Hyrogen-like Systems Thomas S. Kuntzleman, Department of Chemistry Spring Arbor University, Spring Arbor MI 498 tkuntzle@arbor.eu

More information

WEIGHTING A RESAMPLED PARTICLE IN SEQUENTIAL MONTE CARLO. L. Martino, V. Elvira, F. Louzada

WEIGHTING A RESAMPLED PARTICLE IN SEQUENTIAL MONTE CARLO. L. Martino, V. Elvira, F. Louzada WEIGHTIG A RESAMPLED PARTICLE I SEQUETIAL MOTE CARLO L. Martino, V. Elvira, F. Louzaa Dep. of Signal Theory an Communic., Universia Carlos III e Mari, Leganés (Spain). Institute of Mathematical Sciences

More information

u!i = a T u = 0. Then S satisfies

u!i = a T u = 0. Then S satisfies Deterministic Conitions for Subspace Ientifiability from Incomplete Sampling Daniel L Pimentel-Alarcón, Nigel Boston, Robert D Nowak University of Wisconsin-Maison Abstract Consier an r-imensional subspace

More information

Quantile function expansion using regularly varying functions

Quantile function expansion using regularly varying functions Quantile function expansion using regularly varying functions arxiv:705.09494v [math.st] 9 Aug 07 Thomas Fung a, an Eugene Seneta b a Department of Statistics, Macquarie University, NSW 09, Australia b

More information

arxiv:math/ v1 [math.pr] 19 Apr 2001

arxiv:math/ v1 [math.pr] 19 Apr 2001 Conitional Expectation as Quantile Derivative arxiv:math/00490v math.pr 9 Apr 200 Dirk Tasche November 3, 2000 Abstract For a linear combination u j X j of ranom variables, we are intereste in the partial

More information

Proof of SPNs as Mixture of Trees

Proof of SPNs as Mixture of Trees A Proof of SPNs as Mixture of Trees Theorem 1. If T is an inuce SPN from a complete an ecomposable SPN S, then T is a tree that is complete an ecomposable. Proof. Argue by contraiction that T is not a

More information

LOGISTIC regression model is often used as the way

LOGISTIC regression model is often used as the way Vol:6, No:9, 22 Secon Orer Amissibilities in Multi-parameter Logistic Regression Moel Chie Obayashi, Hiekazu Tanaka, Yoshiji Takagi International Science Inex, Mathematical an Computational Sciences Vol:6,

More information

The total derivative. Chapter Lagrangian and Eulerian approaches

The total derivative. Chapter Lagrangian and Eulerian approaches Chapter 5 The total erivative 51 Lagrangian an Eulerian approaches The representation of a flui through scalar or vector fiels means that each physical quantity uner consieration is escribe as a function

More information

Lecture 2: Correlated Topic Model

Lecture 2: Correlated Topic Model Probabilistic Moels for Unsupervise Learning Spring 203 Lecture 2: Correlate Topic Moel Inference for Correlate Topic Moel Yuan Yuan First of all, let us make some claims about the parameters an variables

More information

Transforms. Convergence of probability generating functions. Convergence of characteristic functions functions

Transforms. Convergence of probability generating functions. Convergence of characteristic functions functions Transforms For non-negative integer value ranom variables, let the probability generating function g X : [0, 1] [0, 1] be efine by g X (t) = E(t X ). The moment generating function ψ X (t) = E(e tx ) is

More information

Solutions to Math 41 Second Exam November 4, 2010

Solutions to Math 41 Second Exam November 4, 2010 Solutions to Math 41 Secon Exam November 4, 2010 1. (13 points) Differentiate, using the metho of your choice. (a) p(t) = ln(sec t + tan t) + log 2 (2 + t) (4 points) Using the rule for the erivative of

More information

A. Exclusive KL View of the MLE

A. Exclusive KL View of the MLE A. Exclusive KL View of the MLE Lets assume a change-of-variable moel p Z z on the ranom variable Z R m, such as the one use in Dinh et al. 2017: z 0 p 0 z 0 an z = ψz 0, where ψ is an invertible function

More information

On colour-blind distinguishing colour pallets in regular graphs

On colour-blind distinguishing colour pallets in regular graphs J Comb Optim (2014 28:348 357 DOI 10.1007/s10878-012-9556-x On colour-blin istinguishing colour pallets in regular graphs Jakub Przybyło Publishe online: 25 October 2012 The Author(s 2012. This article

More information

Time-of-Arrival Estimation in Non-Line-Of-Sight Environments

Time-of-Arrival Estimation in Non-Line-Of-Sight Environments 2 Conference on Information Sciences an Systems, The Johns Hopkins University, March 2, 2 Time-of-Arrival Estimation in Non-Line-Of-Sight Environments Sinan Gezici, Hisashi Kobayashi an H. Vincent Poor

More information

Introduction to variational calculus: Lecture notes 1

Introduction to variational calculus: Lecture notes 1 October 10, 2006 Introuction to variational calculus: Lecture notes 1 Ewin Langmann Mathematical Physics, KTH Physics, AlbaNova, SE-106 91 Stockholm, Sween Abstract I give an informal summary of variational

More information

HITTING TIMES FOR RANDOM WALKS WITH RESTARTS

HITTING TIMES FOR RANDOM WALKS WITH RESTARTS HITTING TIMES FOR RANDOM WALKS WITH RESTARTS SVANTE JANSON AND YUVAL PERES Abstract. The time it takes a ranom walker in a lattice to reach the origin from another vertex x, has infinite mean. If the walker

More information

Markov Chain Monte Carlo (MCMC)

Markov Chain Monte Carlo (MCMC) Markov Chain Monte Carlo (MCMC Dependent Sampling Suppose we wish to sample from a density π, and we can evaluate π as a function but have no means to directly generate a sample. Rejection sampling can

More information

Lectures - Week 10 Introduction to Ordinary Differential Equations (ODES) First Order Linear ODEs

Lectures - Week 10 Introduction to Ordinary Differential Equations (ODES) First Order Linear ODEs Lectures - Week 10 Introuction to Orinary Differential Equations (ODES) First Orer Linear ODEs When stuying ODEs we are consiering functions of one inepenent variable, e.g., f(x), where x is the inepenent

More information

Euler equations for multiple integrals

Euler equations for multiple integrals Euler equations for multiple integrals January 22, 2013 Contents 1 Reminer of multivariable calculus 2 1.1 Vector ifferentiation......................... 2 1.2 Matrix ifferentiation........................

More information

Fall 2016: Calculus I Final

Fall 2016: Calculus I Final Answer the questions in the spaces provie on the question sheets. If you run out of room for an answer, continue on the back of the page. NO calculators or other electronic evices, books or notes are allowe

More information

d dx But have you ever seen a derivation of these results? We ll prove the first result below. cos h 1

d dx But have you ever seen a derivation of these results? We ll prove the first result below. cos h 1 Lecture 5 Some ifferentiation rules Trigonometric functions (Relevant section from Stewart, Seventh Eition: Section 3.3) You all know that sin = cos cos = sin. () But have you ever seen a erivation of

More information

February 21 Math 1190 sec. 63 Spring 2017

February 21 Math 1190 sec. 63 Spring 2017 February 21 Math 1190 sec. 63 Spring 2017 Chapter 2: Derivatives Let s recall the efinitions an erivative rules we have so far: Let s assume that y = f (x) is a function with c in it s omain. The erivative

More information

A note on asymptotic formulae for one-dimensional network flow problems Carlos F. Daganzo and Karen R. Smilowitz

A note on asymptotic formulae for one-dimensional network flow problems Carlos F. Daganzo and Karen R. Smilowitz A note on asymptotic formulae for one-imensional network flow problems Carlos F. Daganzo an Karen R. Smilowitz (to appear in Annals of Operations Research) Abstract This note evelops asymptotic formulae

More information

Lyapunov Functions. V. J. Venkataramanan and Xiaojun Lin. Center for Wireless Systems and Applications. School of Electrical and Computer Engineering,

Lyapunov Functions. V. J. Venkataramanan and Xiaojun Lin. Center for Wireless Systems and Applications. School of Electrical and Computer Engineering, On the Queue-Overflow Probability of Wireless Systems : A New Approach Combining Large Deviations with Lyapunov Functions V. J. Venkataramanan an Xiaojun Lin Center for Wireless Systems an Applications

More information

Sturm-Liouville Theory

Sturm-Liouville Theory LECTURE 5 Sturm-Liouville Theory In the three preceing lectures I emonstrate the utility of Fourier series in solving PDE/BVPs. As we ll now see, Fourier series are just the tip of the iceberg of the theory

More information

Tractability results for weighted Banach spaces of smooth functions

Tractability results for weighted Banach spaces of smooth functions Tractability results for weighte Banach spaces of smooth functions Markus Weimar Mathematisches Institut, Universität Jena Ernst-Abbe-Platz 2, 07740 Jena, Germany email: markus.weimar@uni-jena.e March

More information

On the enumeration of partitions with summands in arithmetic progression

On the enumeration of partitions with summands in arithmetic progression AUSTRALASIAN JOURNAL OF COMBINATORICS Volume 8 (003), Pages 149 159 On the enumeration of partitions with summans in arithmetic progression M. A. Nyblom C. Evans Department of Mathematics an Statistics

More information

Regular tree languages definable in FO and in FO mod

Regular tree languages definable in FO and in FO mod Regular tree languages efinable in FO an in FO mo Michael Beneikt Luc Segoufin Abstract We consier regular languages of labele trees. We give an effective characterization of the regular languages over

More information

The Principle of Least Action

The Principle of Least Action Chapter 7. The Principle of Least Action 7.1 Force Methos vs. Energy Methos We have so far stuie two istinct ways of analyzing physics problems: force methos, basically consisting of the application of

More information

THE EFFICIENCIES OF THE SPATIAL MEDIAN AND SPATIAL SIGN COVARIANCE MATRIX FOR ELLIPTICALLY SYMMETRIC DISTRIBUTIONS

THE EFFICIENCIES OF THE SPATIAL MEDIAN AND SPATIAL SIGN COVARIANCE MATRIX FOR ELLIPTICALLY SYMMETRIC DISTRIBUTIONS THE EFFICIENCIES OF THE SPATIAL MEDIAN AND SPATIAL SIGN COVARIANCE MATRIX FOR ELLIPTICALLY SYMMETRIC DISTRIBUTIONS BY ANDREW F. MAGYAR A issertation submitte to the Grauate School New Brunswick Rutgers,

More information

First Order Linear Differential Equations

First Order Linear Differential Equations LECTURE 6 First Orer Linear Differential Equations A linear first orer orinary ifferential equation is a ifferential equation of the form ( a(xy + b(xy = c(x. Here y represents the unknown function, y

More information

Acute sets in Euclidean spaces

Acute sets in Euclidean spaces Acute sets in Eucliean spaces Viktor Harangi April, 011 Abstract A finite set H in R is calle an acute set if any angle etermine by three points of H is acute. We examine the maximal carinality α() of

More information

Chapter 6: Energy-Momentum Tensors

Chapter 6: Energy-Momentum Tensors 49 Chapter 6: Energy-Momentum Tensors This chapter outlines the general theory of energy an momentum conservation in terms of energy-momentum tensors, then applies these ieas to the case of Bohm's moel.

More information

EXISTENCE OF SOLUTIONS TO WEAK PARABOLIC EQUATIONS FOR MEASURES

EXISTENCE OF SOLUTIONS TO WEAK PARABOLIC EQUATIONS FOR MEASURES EXISTENCE OF SOLUTIONS TO WEAK PARABOLIC EQUATIONS FOR MEASURES VLADIMIR I. BOGACHEV, GIUSEPPE DA PRATO, AND MICHAEL RÖCKNER Abstract. Let A = (a ij ) be a Borel mapping on [, 1 with values in the space

More information

A Sketch of Menshikov s Theorem

A Sketch of Menshikov s Theorem A Sketch of Menshikov s Theorem Thomas Bao March 14, 2010 Abstract Let Λ be an infinite, locally finite oriente multi-graph with C Λ finite an strongly connecte, an let p

More information