Countable Partially Exchangeable Mixtures


arXiv:1306.5631v1 [math.PR] 24 Jun 2013

Cecilia Prosdocimi, Dipartimento di Economia e Finanza, Università LUISS, viale Romania 32, Roma, Italy, cprosdocimi@luiss.it

Lorenzo Finesso, Istituto di Ingegneria Biomedica, Consiglio Nazionale delle Ricerche, corso Stati Uniti 4, Padova, Italy, lorenzo.finesso@isib.cnr.it

May 22, 2014

Abstract

Partially exchangeable sequences representable as mixtures of Markov chains are completely specified by de Finetti's mixing measure. The paper characterizes, in terms of a subclass of hidden Markov models, the partially exchangeable sequences with mixing measure concentrated on a countable set, both for discrete valued and for Polish valued sequences.

1 Introduction

In the Hewitt-Savage generalization of de Finetti's theorem, Polish space valued exchangeable sequences are shown to be in one to one correspondence with mixtures of i.i.d. sequences, and ultimately with the mixing measure defining the mixture. The mixing measure thus acts as a model of the exchangeable sequence, and its properties shed light on the random mechanism generating the sequence. In this regard [5], using the connection with the Markov moment problem, characterizes the subclass of discrete valued exchangeable sequences whose mixing measures are absolutely continuous and

have densities in $L^p$. It is also of interest to characterize the subclass for which the mixing measure is singular. A contribution in this direction has been given by [3], where it is proved that a discrete valued exchangeable sequence has de Finetti mixing measure concentrated on a countable set if and only if it is a hidden Markov model (HMM), i.e. a probabilistic function of a homogeneous Markov chain. The class of partially exchangeable sequences is in one to one correspondence with mixtures of Markov chains. As noted in [5], the results on the regularity of the mixing measure carry over to partially exchangeable sequences. On the other hand, to the best of our knowledge, no results have been reported concerning partially exchangeable sequences with singular mixing measures. The goal of the present paper is to contribute a first result in this direction, characterizing, in the spirit of [3], countable mixtures of Markov chains. Our results hold both for discrete and for Polish valued sequences. This has required the development of a few special results for HMMs, previously not available in the literature. Of independent interest are Proposition 1, on the rows of the array of the successors of an HMM, and most of the Appendix, on properties of sequences of stopping times with respect to the filtration generated by the state and the output of an HMM. In the Polish valued case it has also been necessary to first extend [3] to show the equivalence between exchangeable HMMs and countable mixtures of Polish valued i.i.d. sequences.

In Section 2 we review the basic notions in the setup most convenient for our purpose. The reader should be aware of the fact that slightly different notions of partial exchangeability coexist in the literature for discrete valued sequences. We review the original definition, introduced in [2] and elaborated upon in [8]. The latter paper clarifies the relationship with the alternative definition given in [4]. Section 3 concerns discrete valued sequences, Section 4 Polish valued sequences. In Section 3.1 we constructively prove Proposition 1, which is instrumental in the balance of the paper. In Section 3.2 we characterize discrete valued countable mixtures of Markov chains. In Section 4 we extend the result of [3] to exchangeable sequences with values in a Polish space. This allows a characterization as HMMs of Polish valued partially exchangeable sequences with singular mixing measure. A few final remarks and a hint at possible extensions are in Section 5. The Appendix contains some results on sequences of stopping times for the HMM, unavailable elsewhere in the literature, and a technical result for a special class of

HMMs representable as mixtures of i.i.d. sequences.

2 Preliminaries

$(Y_n)_{n \ge 0}$ denotes a sequence of random variables on a probability space $(\Omega, \mathcal{F}, P)$, taking values in a Polish space $S$ endowed with the Borel $\sigma$-field $\mathcal{S}$. The generic element of $S$ is denoted $y$, and $y_0^N = (y_0, \dots, y_N)$ is an element (a string) of the $(N+1)$-fold Cartesian product $S^{N+1}$; likewise $(Y_0^N = y_0^N)$ is the event $(Y_0 = y_0, Y_1 = y_1, \dots, Y_N = y_N)$.

Exchangeable and partially exchangeable sequences. An infinite sequence of random variables $(Y_n)$ is exchangeable if it is distributionally invariant under finite permutations (see [9], page 24). An infinite sequence of random variables $(Y_n)$ is partially exchangeable if its successors array $V$ is distributionally invariant under finite permutations within each of its rows. For the precise definition of the successors array and for notations we refer to [8], Section 3 for discrete state space and Section 4 for Polish state space. Note that the rows of the successors array of a partially exchangeable sequence are exchangeable.

Mixtures of i.i.d. sequences and of Markov chains. The set of all probability measures on $(S, \mathcal{S})$ is denoted $\mathcal{M}_S$ and it is equipped with the $\sigma$-field generated by the maps $p \mapsto p(A)$, varying $p$ in $\mathcal{M}_S$ and $A$ in $\mathcal{S}$.

Definition 1. $(Y_n)$ is a mixture of i.i.d. sequences if there exists a random variable $\tilde{p}$ with values in $\mathcal{M}_S$ such that for any $A_0, \dots, A_N \in \mathcal{S}$
$$P(Y_0 \in A_0, \dots, Y_N \in A_N \mid \tilde{p}) = \tilde{p}(A_0) \cdots \tilde{p}(A_N) \quad P\text{-a.s.} \tag{1}$$

Definition 2. A mixture of i.i.d. sequences is countable (finite) if the random variable $\tilde{p}$ is countably (finitely) valued.

Let $H$ be a countable (finite) set of indices. For a countable (finite) mixture of i.i.d. sequences, denote with $(p_h(\cdot))_{h \in H}$ the possible values taken by $\tilde{p}$. Integrating over $\Omega$, equation (1) reads
$$P(Y_0 \in A_0, \dots, Y_N \in A_N) = \sum_{h \in H} \mu_h \, p_h(A_0) \cdots p_h(A_N), \tag{2}$$
with $\mu_h := P(\tilde{p} = p_h) > 0$ and $\sum_{h \in H} \mu_h = 1$.
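Equation (2) translates directly into a two-stage sampling scheme: draw the component index $h$ with probability $\mu_h$, then generate the whole string i.i.d. from $p_h$. The following minimal Python sketch (not from the paper; the weights and component laws are illustrative assumptions) simulates a finite mixture of i.i.d. sequences on a three-point space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative finite mixture: H = {0, 1} with weights mu_h,
# and two component laws p_h on the discrete space S = {0, 1, 2}.
mu = np.array([0.3, 0.7])
p = np.array([[0.8, 0.1, 0.1],   # p_0
              [0.1, 0.2, 0.7]])  # p_1

def sample_iid_mixture(N):
    """Draw (Y_0, ..., Y_N) as in equation (2): first pick the
    component h ~ mu, then sample the string i.i.d. from p_h."""
    h = rng.choice(len(mu), p=mu)
    return rng.choice(p.shape[1], size=N + 1, p=p[h])

print(sample_iid_mixture(10))
```

Conditionally on $h$ the draws are i.i.d., hence permutation invariant, so the output sequence is exchangeable; here $\tilde{p}$ takes the two values $p_0, p_1$ with probabilities $\mu_0, \mu_1$.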

Partially exchangeable sequences are in one to one correspondence with mixtures of Markov chains whose transition kernels $k_t(\cdot,\cdot) : \bar{S} \times \bar{\mathcal{S}} \to [0,1]$ are constant, with respect to the first variable, on the elements $(E_j)_{j \ge 0}$ of a partition of $\bar{S} = S \cup \{\delta\}$, where $\delta$ is a fictitious state and $\bar{\mathcal{S}}$ is the Borel $\sigma$-field of $\bar{S}$. For the precise definitions and notations refer again to [8]. In short terms, let $T$ be the set of kernels $t : \mathbb{N}_0 \times \mathcal{S} \to [0,1]$ and, for any $t \in T$, define
$$k_t(y, A) := \sum_{j \ge 0} I_{E_j}(y) \, t(j, A), \tag{3}$$
where $I_{E_j}(\cdot)$ is the indicator function of $E_j$. We will consider mixtures of Markov chains, say $(Y_n)$, with fixed initial state $y_0 \in E_1$, i.e. satisfying

Condition 1. $(Y_0 = y_0) = \Omega$, with $y_0 \in E_1$.

We are ready to give the following

Definition 3. $(Y_n)$ satisfying Condition 1 is a mixture of homogeneous Markov chains if there exists a random element $\tilde{t}$ with values in $T$ such that, for any choice of $A_1, \dots, A_N \in \mathcal{S}$,
$$P(Y_1 \in A_1, \dots, Y_N \in A_N \mid \tilde{t}) = \int_{A_1} \cdots \int_{A_N} k_{\tilde{t}}(y_0, dy_1) \cdots k_{\tilde{t}}(y_{N-1}, dy_N) \quad P\text{-a.s.} \tag{4}$$

Note that, by definition of $k_t$,
$$k_t(y_0, dy_1) \cdots k_t(y_{N-1}, dy_N) = t(1, dy_1) \cdots t(j_{N-1}, dy_N),$$
where $j_n$ is the element of the partition to which $y_n$ belongs.

Definition 4. A mixture of Markov chains is countable (finite) if the random element $\tilde{t}$ is countably (finitely) valued.

For a countable (finite) mixture of Markov chains equation (4) reads
$$P(Y_1 \in A_1, \dots, Y_N \in A_N) = \sum_{h \in H} \mu_h \int_{A_1} \cdots \int_{A_N} t_h(1, dy_1) \cdots t_h(j_{N-1}, dy_N), \tag{5}$$
where $H$ is a countable (finite) set, $t_h$ are the possible values taken by $\tilde{t}$, and $\mu_h := P(\tilde{t} = t_h)$.

Remark 1. To define mixtures of Markov chains when $S$ is discrete, fewer technicalities are needed, since transition kernels reduce to transition matrices. When this is the case
$$P(Y_1^N = y_1^N \mid \tilde{P}) = \tilde{P}_{y_0 y_1} \tilde{P}_{y_1 y_2} \cdots \tilde{P}_{y_{N-1} y_N} \quad P\text{-a.s.}, \tag{6}$$
where $\tilde{P} = (\tilde{P}_{i,j})_{i,j \in S}$ is a random matrix varying on the set $\mathcal{P}$ of the transition matrices. The analog of equation (5) is
$$P(Y_1^N = y_1^N) = \sum_{h \in H} \mu_h P^h_{y_0 y_1} \cdots P^h_{y_{N-1} y_N}, \tag{7}$$
where $(P^h)_{h \in H}$ are the possible values taken by $\tilde{P}$ (a simulation sketch of (7) is given below, after Proposition 1).

Representation theorems. The classic de Finetti representation theorem characterizes exchangeable sequences as mixtures of i.i.d. sequences. The statement, the proof, and an extensive discussion of the ramifications of the theorem can be found in [1] and [9]. Partially exchangeable sequences are in one-to-one correspondence with mixtures of Markov chains, as proven by the representation theorem of [8]. An earlier representation theorem, for discrete state space and based on a slightly different notion of partial exchangeability, was given in [4].

3 Countable Markov mixtures with discrete state space

In this section $S$ is a discrete set.

3.1 The successors array of hidden Markov models

The definition and some useful properties of HMMs are given in the Appendix. The following result will be instrumental later in the paper and is also of independent interest.

Proposition 1. Let $(Y_n)$ be a HMM with recurrent¹ underlying Markov chain; then each row $(V_{y,n})$ of the successors array is a HMM with recurrent underlying Markov chain.

¹ A homogeneous Markov chain $(X_n)$ is recurrent if $P(X_n = x \text{ i.o. in } n \mid X_0 = x) = 1$. Such Markov chains have no transient states but possibly more than one recurrence class.
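As a concrete companion to Remark 1 and to the statement of Proposition 1, here is a hedged simulation sketch (Python/NumPy; the weights and transition matrices are illustrative assumptions, not taken from the paper). It samples a trajectory of a finite mixture of two Markov chains as in (7) and extracts the successors row $V_{y,\cdot}$ of a state $y$, i.e. the sequence of values observed immediately after each visit to $y$; by Proposition 1 this row is itself a HMM with recurrent underlying chain.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative finite mixture of Markov chains on S = {0, 1}:
# weights mu_h and transition matrices P^h, as in equation (7).
mu = np.array([0.5, 0.5])
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],     # P^1
              [[0.3, 0.7],
               [0.6, 0.4]]])    # P^2

def sample_markov_mixture(N, y0=0):
    """Draw (Y_0, ..., Y_N) with fixed initial state y0 (Condition 1):
    pick the component h ~ mu once, then run the chain P^h."""
    h = rng.choice(len(mu), p=mu)
    traj = [y0]
    for _ in range(N):
        traj.append(rng.choice(2, p=P[h][traj[-1]]))
    return np.array(traj)

def successors_row(traj, y):
    """Row V_{y,.} of the successors array: the values observed
    immediately after each visit of the trajectory to y."""
    idx = np.flatnonzero(traj[:-1] == y)
    return traj[idx + 1]

traj = sample_markov_mixture(10_000)
print(successors_row(traj, y=0)[:20])
```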

Proof. Denote with $X$ the discrete state space of the Markov chain $(X_n)_{n \ge 0}$ underlying the process $(Y_n)$, and let, for any $x \in X$ and $y \in S$,
$$f_x(y) := P(Y_n = y \mid X_n = x)$$
be the read-out distributions. Fix $y \in S$. To prove the proposition, we construct a recurrent Markov chain $(W^y_n)_{n \ge 1}$ such that $(V_{y,n})_{n \ge 1}$ is a HMM with underlying Markov chain $(W^y_n)$. The Markov chain $(W^y_n)$ is based on the values of $(X_n)$ immediately following certain random times $(\tau^y_n)$. More precisely, define inductively the random times of the $n$-th visit of $(Y_n)$ to the state $y$:
$$\tau^y_1 := \inf\{t \ge 0 \mid Y_t = y\}, \qquad \tau^y_n := \inf\{t > \tau^y_{n-1} \mid Y_t = y\},$$
with the usual convention $\inf \emptyset = +\infty$. The random times $(\tau^y_n)$ are stopping times with respect to the filtration spanned by $(Y_n)$, and so are the times $(\tau^y_n + 1)$. Define the sequence
$$W^y_n := \begin{cases} \varepsilon & \text{for } \tau^y_n = +\infty, \\ X_{\tau^y_n + 1} & \text{for } \tau^y_n < +\infty, \end{cases} \tag{8}$$
where $\varepsilon$ is a fictitious state. The sequence $(W^y_n)$ is either identically equal to $\varepsilon$, or it never hits it, since the $\tau^y_n$ are either all finite or all infinite². We first check that $(W^y_n)$ is a Markov chain, then we verify that it is recurrent, and finally we show that $(V_{y,n})$ is a HMM with underlying Markov chain $(W^y_n)$. If the case $W^y_n \equiv \varepsilon$ obtains, $(W^y_n)$ is trivially a recurrent Markov chain. Otherwise a direct computation gives, for any $x_1, \dots, x_N \in X$,
$$P(W^y_N = x_N \mid W^y_{N-1} = x_{N-1}, \dots, W^y_1 = x_1) = P(X_{\tau^y_N + 1} = x_N \mid X_{\tau^y_{N-1} + 1} = x_{N-1}, \dots, X_{\tau^y_1 + 1} = x_1)$$
$$= P(X_{\tau^y_N + 1} = x_N \mid X_{\tau^y_{N-1} + 1} = x_{N-1}) = P(W^y_N = x_N \mid W^y_{N-1} = x_{N-1}),$$
where Remark 4 in the Appendix applies, as the $(\tau^y_n + 1)$ are hitting times of $X$ (see Definition 6 in the Appendix). Thus $(W^y_n)$ is Markov. Since
$$P(W^y_n = x \text{ i.o. in } n \mid W^y_1 = x) = P(X_{\tau^y_n + 1} = x \text{ i.o. in } n \mid X_{\tau^y_1 + 1} = x),$$

² If $Y_n = y$ for some finite $n$, then $X_n = x$ for some $x$ such that $f_x(y) > 0$. By recurrence, $X$ hits $x$ infinitely many times, and thus $Y$ hits $y$ infinitely many times.

to check the recurrence of $(W^y_n)$ we have to verify that, for all $x \in X$,
$$P(X_{\tau^y_n + 1} = x \text{ i.o. in } n \mid X_{\tau^y_1 + 1} = x) = 1. \tag{9}$$
Choose $x \in X$ such that $f_x(y) > 0$ and $P(X_{n+1} = x \mid X_n = x) > 0$ (there exists at least one such $x$). Define the auxiliary sequence of stopping times with respect to the $\sigma$-field spanned by $(X_n, Y_n)$:
$$\sigma^{x,y}_1 := \inf\{t \ge 0 \mid X_t = x,\ Y_t = y\}, \qquad \sigma^{x,y}_n := \inf\{t > \sigma^{x,y}_{n-1} \mid X_t = x,\ Y_t = y\}.$$
The stopping times $(\sigma^{x,y}_n)$ are finite whenever the $(\tau^y_n)$ are finite, and the sequence $(\sigma^{x,y}_n)$ is a random subsequence of $(\tau^y_n)$ (with $\sigma^{x,y}_n > \tau^y_1$ for any $n > 1$), thus
$$(X_{\sigma^{x,y}_n + 1} = x) \subseteq \bigcup_{m \ge n} (X_{\tau^y_m + 1} = x), \tag{10}$$
and trivially
$$(X_{\sigma^{x,y}_n + 1} = x \text{ i.o. in } n) \subseteq (X_{\tau^y_n + 1} = x \text{ i.o. in } n). \tag{11}$$
The events $(X_{\sigma^{x,y}_n + 1} = x)$ are independent under the law $P(\cdot \mid X_{\tau^y_1 + 1} = x)$, since $\{(X_{\sigma^{x,y}_n + 1} = x)_n, (X_{\tau^y_1 + 1} = x)\}$ is a $P$-independent set. In fact, for any fixed sequence $m_1 < \dots < m_n \in \mathbb{N}$,
$$P(X_{\sigma^{x,y}_{m_n} + 1} = x, X_{\sigma^{x,y}_{m_{n-1}} + 1} = x, \dots, X_{\sigma^{x,y}_{m_1} + 1} = x, X_{\tau^y_1 + 1} = x)$$
$$= P(X_{\sigma^{x,y}_{m_n} + 1} = x, X_{\sigma^{x,y}_{m_n}} = x, \dots, X_{\sigma^{x,y}_{m_1} + 1} = x, X_{\sigma^{x,y}_{m_1}} = x, X_{\tau^y_1 + 1} = x)$$
$$= P(X_{\sigma^{x,y}_{m_n} + 1} = x \mid X_{\sigma^{x,y}_{m_n}} = x) \cdots P(X_{\sigma^{x,y}_{m_1} + 1} = x \mid X_{\sigma^{x,y}_{m_1}} = x) \, P(X_{\sigma^{x,y}_{m_1}} = x \mid X_{\tau^y_1 + 1} = x) \, P(X_{\tau^y_1 + 1} = x)$$
$$= P(X_{\sigma^{x,y}_{m_n} + 1} = x) \cdots P(X_{\sigma^{x,y}_{m_1} + 1} = x) \, P(X_{\tau^y_1 + 1} = x),$$
where the second equality follows by Lemma 4 in the Appendix, and the first and last equalities follow noting that $X_{\sigma^{x,y}_{m_n}} = x$ by definition of $\sigma^{x,y}$.³

³ The case $m_1 = 1$ and $\sigma^{x,y}_{m_1} = \tau^y_1$ needs some care, but the goal is to apply the Borel-Cantelli lemma, therefore the initial events do not matter.

The events $(X_{\sigma^{x,y}_n + 1} = x)$ are equiprobable, with strictly positive probability. By the Borel-Cantelli lemma
$$P(X_{\sigma^{x,y}_n + 1} = x \text{ i.o. in } n \mid X_{\tau^y_1 + 1} = x) = 1. \tag{12}$$
Equations (11) and (12) taken together give
$$P(X_{\tau^y_n + 1} = x \text{ i.o. in } n \mid X_{\tau^y_1 + 1} = x) \ge P(X_{\sigma^{x,y}_n + 1} = x \text{ i.o. in } n \mid X_{\tau^y_1 + 1} = x) = 1.$$
Condition (9) is satisfied, thus the recurrence of $(W^y_n)$ is proved.

We now prove that $(V_{y,n})$ is a HMM with underlying Markov chain $(W^y_n)$. Set $P(V_{y,n} = \delta \mid W^y_n = \varepsilon) = 1$. For $\varepsilon \ne x \in X$ and $\delta \ne \bar{y} \in S$, the couple $(W^y_n, V_{y,n})$ inherits the read-out distributions of $(X_n, Y_n)$:
$$P(V_{y,n} = \bar{y} \mid W^y_n = x) = P(Y_{\tau^y_n + 1} = \bar{y} \mid X_{\tau^y_n + 1} = x) = f_x(\bar{y}); \tag{13}$$
this can be seen disintegrating the stopping time $\tau^y_n + 1$:
$$P(Y_{\tau^y_n + 1} = \bar{y}, X_{\tau^y_n + 1} = x) = \sum_{m \in \mathbb{N}} P(\tau^y_n + 1 = m, Y_m = \bar{y}, X_m = x)$$
$$= \sum_{m \in \mathbb{N}} P(Y_m = \bar{y} \mid \tau^y_n + 1 = m, X_m = x) \, P(\tau^y_n + 1 = m, X_m = x)$$
$$= \sum_{m \in \mathbb{N}} P(Y_m = \bar{y} \mid X_m = x) \, P(\tau^y_n + 1 = m, X_m = x)$$
$$= f_x(\bar{y}) \sum_{m \in \mathbb{N}} P(\tau^y_n + 1 = m, X_m = x) = f_x(\bar{y}) \, P(X_{\tau^y_n + 1} = x),$$
where the third equality follows by the conditional independence of the observations. To show that $(V_{y,n})_n$ is a HMM, it is enough to check that the observations are conditionally independent given the Markov chain $(W^y_n)$, i.e.
$$P(V_{y,1} = y_1, \dots, V_{y,N} = y_N \mid W^y_1 = x_1, \dots, W^y_N = x_N) = \prod_{n=1}^N P(V_{y,n} = y_n \mid W^y_n = x_n),$$

as it follows from the direct computation
$$P(V_{y,1} = y_1, \dots, V_{y,N} = y_N \mid W^y_1 = x_1, \dots, W^y_N = x_N)$$
$$= P(Y_{\tau^y_1 + 1} = y_1, \dots, Y_{\tau^y_N + 1} = y_N \mid X_{\tau^y_1 + 1} = x_1, \dots, X_{\tau^y_N + 1} = x_N)$$
$$= \prod_{n=1}^N P(Y_{\tau^y_n + 1} = y_n \mid X_{\tau^y_n + 1} = x_n) = \prod_{n=1}^N P(V_{y,n} = y_n \mid W^y_n = x_n),$$
where the second equality is a direct consequence of Remark 5 of the Appendix. The sequence $(V_{y,n})_n$ is therefore a HMM with recurrent underlying Markov chain, and this concludes the proof of the proposition.

An easier proof (see [10]) of the above proposition is possible if HMMs are equivalently defined as deterministic functions of Markov chains. The need for a more general proof arises since for Polish valued sequences, which are used in Section 4 of the paper, the latter definition of HMM does not make sense.

3.2 Representation of countable Markov mixtures

The paper [3] contains a characterization of countable mixtures of i.i.d. sequences, linking HMMs to the class of exchangeable sequences. The main result of [3] can be rephrased as follows.

Theorem 1. (Dharmadhikari) Let $(Y_n)$ be an exchangeable sequence. $(Y_n)$ is a countable mixture of i.i.d. sequences if and only if $(Y_n)$ is a HMM with recurrent underlying Markov chain.

Note that in the original formulation of Theorem 1 the stationarity of the underlying Markov chain is one of the hypotheses, but close inspection of the proof in [3] reveals that only the absence of transient states is required. The aim of this section is to extend the above theorem to partially exchangeable sequences, i.e. to characterize countable mixtures of Markov chains. The analog of Theorem 1 for mixtures of Markov chains is as follows.

Theorem 2. Let $(Y_n)$ be a partially exchangeable sequence satisfying Condition 1. $(Y_n)$ is a countable mixture of Markov chains if and only if $(Y_n)$ is a HMM with recurrent underlying Markov chain.

Proof. We first prove that if $(Y_n)$ is a HMM, then it is a countable mixture of Markov chains, i.e., in the notations of Remark 1, $\tilde{P}$ takes countably many values. By the partial exchangeability of $(Y_n)$, the sequence $(V_{y,n})$ is exchangeable for any $y \in S$, and thus a mixture of i.i.d. sequences. As proved e.g. in Lemma 2.15 of [1] or in the corresponding proposition of [9],
$$\lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N I_{V_{y,n}}(\cdot) = \tilde{p}_y(\cdot) \quad P\text{-a.s.}, \tag{14}$$
where the limit has to be interpreted in the weak convergence topology, and where $\tilde{p}_y$ is the random variable with values in $\mathcal{M}_S$ corresponding to $\tilde{p}$ in Definition 1. It follows from the proof of Theorem 1 in [8] that the random measure $\tilde{p}_y$ is the $y$-th row of the random element $\tilde{P}$. By Proposition 1, each row $(V_{y,n})$ is a HMM, and by Theorem 1 it is thus a countable mixture of i.i.d. sequences. Thus $\tilde{p}_y$ takes countably many values, and so does the $y$-th row of $\tilde{P}$. Repeating the same reasoning for each $y$, we conclude that $\tilde{P}$ takes countably many values.

We now prove the converse: if $(Y_n)$ is a countable mixture of Markov chains, i.e. $\tilde{P}$ takes countably many values, then $(Y_n)$ is a HMM. Let us indicate with $\{P^h\}_{h \in H}$, where $H$ is a countable set, the possible values of $\tilde{P}$, and let the finite distributions of $(Y_n)$ be as in (7). We now construct a Markov chain $(X_n)$ and a sequence $(\tilde{Y}_n)$ according to Definition 5. Proving that $(\tilde{Y}_n)$ has the same distributions as $(Y_n)$, we get that $(Y_n)$ is a HMM with underlying Markov chain $(X_n)$. Let us start with the Markov chain. Let $P$ be the direct sum of the transition matrices $P^h$:
$$P := \begin{pmatrix} P^1 & O & O & \cdots \\ O & P^2 & O & \cdots \\ \vdots & & \ddots & \end{pmatrix}.$$
Let $(X_n)$ be the Markov chain with state space⁴ $S \times H$, transition matrix $P$, and initial distribution $\pi$, with
$$\pi(y, h) = \begin{cases} \mu_h & \text{for } y = y_0, \\ 0 & \text{for } y \ne y_0, \end{cases}$$
with $y \in S$ and $h \in H$, and $\mu_h := P(\tilde{P} = P^h)$. We have to show that $(X_n)$ is recurrent. By Theorem 1 in [8], $(Y_n)$ is conditionally recurrent, therefore

⁴ e.g. ordering the states in first lexical order as follows: $(1,1), (2,1), \dots, (1,2), (2,2), \dots$

the matrices $\{P^h\}$ in the mixture correspond to recurrent chains. Since $P$ is the direct sum of such matrices, $(X_n)$ is recurrent. Consider now a sequence $(\tilde{Y}_n)$, jointly distributed with $(X_n)$, with fixed initial state $\tilde{Y}_0 = y_0$, conditionally independent given $(X_n)$, and with read-out distributions defined as follows:
$$f_{(x,h)}(y) := P(\tilde{Y}_n = y \mid X_n = (x,h)) = \delta_{x,y},$$
where $\delta$ is the Kronecker symbol. By its very definition $(\tilde{Y}_n)$ satisfies the properties in Definition 5. Let us compute the finite distributions of $(\tilde{Y}_n)$, for any $y_1^N \in S^N$ (recall $\tilde{Y}_0$ is fixed at $y_0$):
$$P(\tilde{Y}_1^N = y_1^N) = \sum_{(x_0^N, h_0^N)} P(\tilde{Y}_0^N = y_0^N, X_0^N = (x_0^N, h_0^N))$$
$$= \sum_{(x_0^N, h_0^N)} P(X_0 = (x_0, h_0)) \prod_{n=0}^N P(\tilde{Y}_n = y_n \mid X_n = (x_n, h_n)) \prod_{n=0}^{N-1} P(X_{n+1} = (x_{n+1}, h_{n+1}) \mid X_n = (x_n, h_n))$$
$$= \sum_{(x_0^N, h_0^N)} \pi(x_0, h_0) \prod_{n=0}^N f_{(x_n, h_n)}(y_n) \prod_{n=0}^{N-1} P_{(x_n, h_n), (x_{n+1}, h_{n+1})}$$
$$= \sum_{h \in H} \mu_h P^h_{y_0 y_1} \cdots P^h_{y_{N-1} y_N},$$
where the second equality follows by the conditional independence of $(\tilde{Y}_n)$ given $(X_n)$, and the fourth follows by the definition of the read-out densities and by the block structure of $P$. Comparing (7) with the last expression, we see that $(Y_n)$ and $(\tilde{Y}_n)$ have the same distributions, thus $(Y_n)$ is a HMM and the theorem is proved.

Note that, for $S$ finite, in Theorem 2, as well as in Theorem 1, the state space of the underlying Markov chain is finite if and only if the mixture is finite.
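The converse half of the proof of Theorem 2 is constructive, and the construction is easy to sketch in code. Below is a hedged Python/NumPy illustration (the component matrices, weights, and state ordering are assumptions of the example): the hidden chain lives on $S \times H$ with the direct-sum transition matrix $P$, the initial distribution puts mass $\mu_h$ on the copy of $y_0$ in block $h$, and the read-out reports the $S$-coordinate of the hidden state.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative mixture on S = {0, 1}: weights and transition matrices.
mu = np.array([0.5, 0.5])
P1 = np.array([[0.9, 0.1], [0.2, 0.8]])
P2 = np.array([[0.3, 0.7], [0.6, 0.4]])
y0 = 0  # fixed initial state (Condition 1)

# Hidden state space S x H, ordered (0,1),(1,1),(0,2),(1,2);
# P is the direct sum of the blocks P^1, P^2.
P = np.zeros((4, 4))
P[:2, :2] = P1
P[2:, 2:] = P2

# Initial distribution pi((y,h)) = mu_h if y = y0, else 0.
pi = np.array([mu[0], 0.0, mu[1], 0.0])

def sample_hmm(N):
    """Run the hidden chain on S x H; the output is the S-coordinate,
    i.e. the read-out f_{(x,h)}(y) = delta_{x,y} of the proof."""
    x = rng.choice(4, p=pi)
    ys = [x % 2]
    for _ in range(N):
        x = rng.choice(4, p=P[x])
        ys.append(x % 2)
    return ys

print(sample_hmm(10))
```

Since $P$ is block diagonal, the hidden chain never leaves the block it starts in; conditionally on the block $h$ the output is a Markov chain with matrix $P^h$, which is exactly the mixture (7).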

4 Countable Markov mixtures with Polish state space

In this section $S$ is a Polish space.

4.1 The successors array

The proposition below is the analog of Proposition 1 for uncountable state space $S$.

Proposition 2. Let $(Y_n)$ be a HMM with recurrent underlying Markov chain; then each row of $(V_{j,n})$ is a HMM with recurrent underlying Markov chain.

Proof. Let $(X_n)$ be the underlying Markov chain of $(Y_n)$. Consider the partition $E = (E_j)_{j \ge 0}$ of $S$, and for any element $E_j$ of the partition define
$$\tau^{E_j}_1 := \inf\{t \ge 0 \mid Y_t \in E_j\}, \qquad \tau^{E_j}_n := \inf\{t > \tau^{E_j}_{n-1} \mid Y_t \in E_j\}.$$
The proof can be carried out exactly as the proof of Proposition 1, substituting $\tau^y_n$ there with $\tau^{E_j}_n$, and $\sigma^{x,y}_n$ with $\sigma^{x,E_j}_n$, defined below:
$$\sigma^{x,E_j}_1 := \inf\{t \ge 0 \mid X_t = x,\ Y_t \in E_j\}, \qquad \sigma^{x,E_j}_n := \inf\{t > \sigma^{x,E_j}_{n-1} \mid X_t = x,\ Y_t \in E_j\},$$
where $x \in X$ is such that $f_x(E_j) > 0$ and $P(X_{n+1} = x \mid X_n = x) > 0$ ($x$ has the same role as in equation (9)).

4.2 Representation of countable mixtures

The main result of the section is the characterization of countable mixtures of Markov chains, given in Theorem 4 below. The result for Markov chains is based on the corresponding characterization of countable mixtures of i.i.d. sequences which, for discrete state spaces, was derived in [3] and is reported in this paper as Theorem 1. To the best of our knowledge the analogue of Theorem 1 for general state space is not available in the literature. As a preliminary result we thus give, in Theorem 3 below, the needed extension of Theorem 1. Note that the result of [3] cannot be directly generalized to uncountable state spaces as it relies on a definition of HMM unsuitable for general spaces.

4.2.1 Representation of countable i.i.d. mixtures

Theorem 3. Let $(Y_n)$ be an exchangeable sequence. $(Y_n)$ is a countable mixture of i.i.d. sequences if and only if $(Y_n)$ is a HMM with recurrent underlying Markov chain.

Proof. By Remark 2 in the Appendix, if $(Y_n)$ is a countable mixture of i.i.d. sequences, then it is a HMM. Let us prove the converse. Let $(Y_n)$ be an exchangeable HMM, with underlying Markov chain $(X_n)$ with initial distribution $\pi$ and transition matrix $P$. $(X_n)$ is recurrent, thus it has no transient states, but possibly more than one recurrence class. As noted in [3], by the exchangeability of $(Y_n)$, one can substitute $P$ with the Cesàro limit $\bar{P} := \lim_n \frac{1}{n} \sum_{k=1}^n P^k$, where $P^k$ is the $k$-th power of $P$. By the ergodic theorem $\bar{P}$ has a block structure, being the direct sum of matrices $\bar{P}^h$ with identical rows, one block $\bar{P}^h$ for each recurrence class. By Lemma 6 of the Appendix, $(Y_n)$ is a countable mixture of i.i.d. sequences.

4.2.2 Representation of countable Markov mixtures

Theorem 4. Let $(Y_n)$ be a partially exchangeable sequence satisfying Condition 1. $(Y_n)$ is a countable mixture of Markov chains if and only if $(Y_n)$ is a HMM with recurrent underlying Markov chain.

Proof. Refer to the notations of Definition 3. If $(Y_n)$ is a HMM with recurrent Markov chain, the proof can be carried out as for Theorem 2, noting that $\frac{1}{N} \sum_{n=1}^N I_{V_{j,n}}$ converges, for $N \to \infty$, to a probability measure $\theta_j$ on $S$, which plays the role of $t(j, \cdot)$, i.e. $\theta_j(\cdot) = t(j, \cdot)$. We also need to use Proposition 2 instead of Proposition 1, and Theorem 3 instead of Theorem 1.

We now prove the converse. Let $\tilde{t}$ take countably many values, denoted by $t_h$, with $h$ varying in a countable set $H$, and let
$$\mu_h := P(\tilde{t} = t_h), \qquad h \in H. \tag{15}$$
The finite distributions of $(Y_n)$ are as in equation (5). We will construct a sequence $(\tilde{Y}_n)$ with the same distributions as $(Y_n)$, so that $(Y_n)$ is a HMM. The construction that worked for the proof of Theorem 2 cannot be used here: $(Y_n)$ takes values in an uncountable state space, while we need a HMM with a discrete underlying Markov chain. Consider thus a Markov chain $(X_n)$ taking

values in $H \times \mathbb{N}_0 \times \mathbb{N}_0$, with initial distribution and transition probabilities as follows (by Condition 1 the initial value $y_0 \in E_1$):
$$P(X_0 = (h,i,j)) = \begin{cases} \mu_h \, \delta_{j,1} \, \dfrac{t_h(i, E_j)}{t_h(i, dy_0)} & \text{for } t_h(i, dy_0) \ne 0, \\ 0 & \text{for } t_h(i, dy_0) = 0, \end{cases}$$
$$P(X_n = (h_n, i_n, j_n) \mid X_{n-1} = (h_{n-1}, i_{n-1}, j_{n-1})) = \delta_{h_{n-1}, h_n} \, \delta_{i_n, j_{n-1}} \, t_{h_n}(j_{n-1}, E_{j_n}).$$
The components $(h_n, i_n, j_n)$ of $X_n$ represent, respectively: the index of the running chain in the mixture, the discretized value of $Y_{n-1}$, and the discretized value of $Y_n$. The Markov chain $(X_n)$ is recurrent: the kernels $t_h(\cdot, \cdot)$ correspond to recurrent Markov chains by Theorem 4 in [8]. Consider now a sequence $(\tilde{Y}_n)$ jointly distributed with $(X_n)$, with fixed initial value $\tilde{Y}_0 = y_0$, conditionally independent given $(X_n)$, and with read-out distributions defined as follows:
$$P(\tilde{Y}_n \in dy_n \mid X_n = (h,i,j)) = \begin{cases} I_{E_j}(y_n) \, \dfrac{t_h(i, dy_n)}{t_h(i, E_j)} & \text{for } t_h(i, E_j) \ne 0, \\ 0 & \text{for } t_h(i, E_j) = 0. \end{cases}$$
Note that, by its very definition, $(\tilde{Y}_n)$ satisfies the properties in Definition 5.

The distributions of $(\tilde{Y}_n)$ are computed as follows:
$$P(\tilde{Y}_1 \in dy_1, \dots, \tilde{Y}_N \in dy_N) = \sum_{h_0^N, i_0^N, j_0^N} P(\tilde{Y}_0 \in dy_0, \dots, \tilde{Y}_N \in dy_N, X_0 = (h_0,i_0,j_0), \dots, X_N = (h_N,i_N,j_N))$$
$$= \sum_{h_0^N, i_0^N, j_0^N} P(\tilde{Y}_0 \in dy_0, \dots, \tilde{Y}_N \in dy_N \mid X_0 = (h_0,i_0,j_0), \dots, X_N = (h_N,i_N,j_N)) \, P(X_0 = (h_0,i_0,j_0), \dots, X_N = (h_N,i_N,j_N))$$
$$= \sum_{h_0^N, i_0^N, j_0^N} P(X_0 = (h_0,i_0,j_0)) \prod_{n=0}^N P(\tilde{Y}_n \in dy_n \mid X_n = (h_n,i_n,j_n)) \prod_{n=1}^N P(X_n = (h_n,i_n,j_n) \mid X_{n-1} = (h_{n-1},i_{n-1},j_{n-1}))$$
$$= \sum_{h_0^N, i_0^N, j_0^N} \mu_{h_0} \, \delta_{j_0,1} \, \frac{t_{h_0}(i_0, E_{j_0})}{t_{h_0}(i_0, dy_0)} \prod_{n=0}^N I_{E_{j_n}}(y_n) \frac{t_{h_n}(i_n, dy_n)}{t_{h_n}(i_n, E_{j_n})} \prod_{n=1}^N \delta_{h_{n-1},h_n} \, \delta_{i_n, j_{n-1}} \, t_{h_n}(j_{n-1}, E_{j_n})$$
$$= \sum_{h \in H, i_0} \mu_h \, \frac{t_h(i_0, E_1)}{t_h(i_0, dy_0)} \, \frac{t_h(i_0, dy_0)}{t_h(i_0, E_1)} \prod_{n=1}^N \frac{t_h(l_{n-1}, dy_n)}{t_h(l_{n-1}, E_{l_n})} \prod_{n=1}^N t_h(l_{n-1}, E_{l_n})$$
$$= \sum_{h \in H} \mu_h \, t_h(1, dy_1) \cdots t_h(l_{N-1}, dy_N),$$
where $l_n$ indicates the element of the partition to which $y_n$ belongs, recalling $l_0 = 1$. Integrating the expression above over $A_1, \dots, A_N$, and comparing it with equation (5), one concludes that the distributions of $(\tilde{Y}_n)$ coincide with those of $(Y_n)$, therefore proving that $(Y_n)$ is a HMM.

5 Concluding remarks

Throughout the paper we referred to the notion of partial exchangeability originally given by de Finetti and to the corresponding representation theorem as given in [8]. For discrete state space, partial exchangeability can be defined in a slightly different way, and a representation theorem in this

alternative framework is proved in [4]. According to [4], a sequence of random variables is partially exchangeable if the probability is invariant under all permutations of a string that preserve the first value and the transition counts between any couple of states. A characterization of countable mixtures of Markov chains can be given also in the setup of [4], using different mathematical tools. The result is in [7], but for a complete proof see [10]. By the same token the characterization of countable mixtures of Markov chains of order $k$ holds true; for the proof see [10]. Unfortunately the approach of [7] and [10] does not readily generalize to Polish state space. Based on the results in [8], a de Finetti type representation theorem for mixtures of semi-Markov processes has been proved in [6]. The authors are confident that a characterization of countable mixtures of semi-Markov processes in terms of HMMs can be given by properly adapting the proof of Proposition 2 and Theorem 4.

6 Appendix

6.1 Hidden Markov models

Let $S$ be a Polish space endowed with the Borel $\sigma$-field $\mathcal{S}$.

Definition 5. Let $(\tilde{Y}_n)_{n \ge 0}$ be a random sequence with values in $(S, \mathcal{S})$ and such that, for some random sequence $(X_n)_{n \ge 0}$ on a discrete space $X$, it holds that $(X_n)$ is a homogeneous Markov chain, and $(\tilde{Y}_n)$ is conditionally independent given $(X_n)$, i.e.
$$P(\tilde{Y}_0 \in dy_0, \dots, \tilde{Y}_N \in dy_N \mid X_0^N = x_0^N) = \prod_{n=0}^N P(\tilde{Y}_n \in dy_n \mid X_n = x_n).$$
A Hidden Markov Model (HMM) is a random sequence $(Y_n)$ sharing finite distributions with $(\tilde{Y}_n)$, i.e. such that
$$P(Y_0 \in dy_0, \dots, Y_N \in dy_N) = P(\tilde{Y}_0 \in dy_0, \dots, \tilde{Y}_N \in dy_N) \quad \text{for all } N.$$
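Definition 5 suggests an immediate two-layer sampler: run the hidden chain, then emit each observation independently from the read-out of the current state. A minimal hedged sketch follows (Python/NumPy; the specific chain and read-outs are illustrative assumptions, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative HMM: hidden space X = {0, 1}, output space S = {0, 1, 2}.
pi = np.array([0.5, 0.5])          # initial distribution of X_0
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])         # transition matrix of (X_n)
f = np.array([[0.6, 0.3, 0.1],
              [0.1, 0.3, 0.6]])    # read-out distributions f_x(.)

def sample_hmm(N):
    """Sample per Definition 5: (X_n) is Markov, and the Y_n are
    conditionally independent given (X_n), with Y_n ~ f_{X_n}."""
    x = rng.choice(2, p=pi)
    ys = [rng.choice(3, p=f[x])]
    for _ in range(N):
        x = rng.choice(2, p=A[x])
        ys.append(rng.choice(3, p=f[x]))
    return ys

print(sample_hmm(10))
```

Remark 2 below is the special case in which the transition matrix is the identity: the hidden index is drawn once from $(\mu_h)$ and never changes, so the output is a countable mixture of i.i.d. sequences.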

A HMM is characterized by the initial distribution $\pi$ on $X$, by the transition matrix $P = (P_{ij})_{i,j \in X}$ of the Markov chain $(X_n)$, and by the read-out distributions $f_x(dy)$, where $f_x(dy) := P(\tilde{Y}_n \in dy \mid X_n = x)$. We refer to the sequence $(X_n)$ as the underlying Markov chain of the HMM, and to $(Y_n)$ as the output sequence.

Remark 2. A countable mixture of i.i.d. sequences as in equation (2) is trivially the output $(Y_n)$ of an HMM: take as $(X_n)$ the Markov chain with values in $H$, with identity transition matrix, initial distribution $(\mu_1, \dots, \mu_h, \dots)_{h \in H}$, and read-out distributions $P(Y_n \in dy \mid X_n = h) = p_h(dy)$.

6.1.1 Strong Markov and strong conditional independence for HMMs

This section contains a few useful properties of HMMs.

Lemma 1. (Splitting property) Let $(Y_n)$ be a HMM with underlying Markov chain $(X_n)$. Then the couple $(X_n, Y_n)$ is a Markov chain; moreover
$$P(X_N = x, Y_N \in dy \mid X_1^{N-1} = x_1^{N-1}, Y_1^{N-1} \in dy_1^{N-1}) = P(X_N = x, Y_N \in dy \mid X_{N-1} = x_{N-1}).$$

Proof.
$$P(X_N = x_N, Y_N \in dy_N \mid X_1^{N-1} = x_1^{N-1}, Y_1^{N-1} \in dy_1^{N-1}) = \frac{P(X_1^N = x_1^N, Y_1^N \in dy_1^N)}{P(X_1^{N-1} = x_1^{N-1}, Y_1^{N-1} \in dy_1^{N-1})}$$
$$= \frac{\prod_{n=1}^N P(Y_n \in dy_n \mid X_n = x_n) \, P(X_1^N = x_1^N)}{\prod_{n=1}^{N-1} P(Y_n \in dy_n \mid X_n = x_n) \, P(X_1^{N-1} = x_1^{N-1})}$$
$$= P(Y_N \in dy_N \mid X_N = x_N) \, P(X_N = x_N \mid X_{N-1} = x_{N-1})$$
$$= P(Y_N \in dy_N \mid X_N = x_N, X_{N-1} = x_{N-1}) \, P(X_N = x_N \mid X_{N-1} = x_{N-1})$$
$$= P(Y_N \in dy_N, X_N = x_N \mid X_{N-1} = x_{N-1}),$$

where the second and fourth equalities follow by the conditional independence of the observations $(Y_n)$ in the definition of HMM.

Lemma 2. (Strong splitting property) Let $(Y_n)$ be a HMM with underlying Markov chain $(X_n)$, and let $\gamma$ be a stopping time for $(X_n, Y_n)$; then, for any $x, \tilde{x}, \bar{x} \in X$ and any $y, \tilde{y}, \bar{y} \in S$, it holds that
$$P(X_{\gamma+k} = x, Y_{\gamma+k} \in dy \mid X_\gamma = \tilde{x}, Y_\gamma \in d\tilde{y}, X_{\gamma-n} = \bar{x}, Y_{\gamma-n} \in d\bar{y}) = P(X_{\gamma+k} = x, Y_{\gamma+k} \in dy \mid X_\gamma = \tilde{x}). \tag{16}$$

Proof. We manipulate separately the LHS and the RHS of equation (16). For readability denote
$$C_r := (\gamma = r, X_r = \tilde{x}, Y_r \in d\tilde{y}, X_{r-n} = \bar{x}, Y_{r-n} \in d\bar{y}).$$
Applying Lemma 1, the numerator of the conditional probability on the LHS of equation (16) is
$$P(X_{\gamma+k} = x, Y_{\gamma+k} \in dy, X_\gamma = \tilde{x}, Y_\gamma \in d\tilde{y}, X_{\gamma-n} = \bar{x}, Y_{\gamma-n} \in d\bar{y}) = \sum_{r \ge 1} P(X_{r+k} = x, Y_{r+k} \in dy \mid C_r) \, P(C_r)$$
$$= \sum_{r \ge 1} P(X_{r+k} = x, Y_{r+k} \in dy \mid X_r = \tilde{x}) \, P(C_r) = \sum_{r \ge 1} P(Y_{r+k} \in dy \mid X_{r+k} = x) \, P(X_{r+k} = x \mid X_r = \tilde{x}) \, P(C_r)$$
$$= f_x(dy) \, P^{(k)}_{\tilde{x},x} \sum_{r \ge 1} P(C_r) = f_x(dy) \, P^{(k)}_{\tilde{x},x} \, P(X_\gamma = \tilde{x}, Y_\gamma \in d\tilde{y}, X_{\gamma-n} = \bar{x}, Y_{\gamma-n} \in d\bar{y}),$$
where $P^{(k)}_{\tilde{x},x}$ is the $(\tilde{x}, x)$-entry of the $k$-step transition matrix of the Markov chain $(X_n)$. The numerator of the conditional probability on the RHS of equation (16), again applying Lemma 1, is

$$P(X_{\gamma+k} = x, Y_{\gamma+k} \in dy, X_\gamma = \tilde{x}) = \sum_{r \ge 1} P(X_{r+k} = x, Y_{r+k} \in dy \mid \gamma = r, X_r = \tilde{x}) \, P(\gamma = r, X_r = \tilde{x})$$
$$= \sum_{r \ge 1} P(X_{r+k} = x, Y_{r+k} \in dy \mid X_r = \tilde{x}) \, P(\gamma = r, X_r = \tilde{x}) = f_x(dy) \, P^{(k)}_{\tilde{x},x} \, P(X_\gamma = \tilde{x}).$$
The lemma is proved comparing the expressions of the LHS and the RHS derived above.

Definition 6. Let $(Y_n)$ be a HMM with underlying Markov chain $(X_n)$, and let $A \subseteq X \times S$. We say that the sequence of random times $(\gamma_n)_{n \ge 1}$ is a sequence of hitting times of $A$ if
$$\gamma_1 := \inf\{t \ge 0 \mid (X_t, Y_t) \in A\}, \qquad \gamma_n := \inf\{t > \gamma_{n-1} \mid (X_t, Y_t) \in A\}.$$

Lemma 3. (Generalized strong splitting property) Let $(Y_n)$ be a HMM with underlying Markov chain $(X_n)$. Let $(\gamma_n)$ be a sequence of hitting times of $A$ for $(X_n, Y_n)$, where $A \subseteq X \times S$. Then for any $N$, and any $(x_1, y_1), \dots, (x_N, y_N) \in A$,
$$P(X_{\gamma_N} = x_N, Y_{\gamma_N} \in dy_N \mid X_{\gamma_1}^{\gamma_{N-1}} = x_1^{N-1}, Y_{\gamma_1}^{\gamma_{N-1}} \in dy_1^{N-1}) = P(X_{\gamma_N} = x_N, Y_{\gamma_N} \in dy_N \mid X_{\gamma_{N-1}} = x_{N-1}).$$

Proof. Denote with $A^c$ the complement of $A$ in $X \times S$, and with $(A^c)^r$ the $r$-th fold Cartesian product of $A^c$. Let $B := (X_{\gamma_1}^{\gamma_{N-1}} = x_1^{N-1}, Y_{\gamma_1}^{\gamma_{N-1}} \in dy_1^{N-1})$. Applying Lemma 2 in the third equality below, the numerator of the conditional probability on the LHS is

$$P(X_{\gamma_1}^{\gamma_N} = x_1^N, Y_{\gamma_1}^{\gamma_N} \in dy_1^N) = \sum_{r \ge 1} P(\gamma_N = \gamma_{N-1} + r, X_{\gamma_{N-1}+r} = x_N, Y_{\gamma_{N-1}+r} \in dy_N \mid B) \, P(B)$$
$$= \sum_{r \ge 1} \int_{(A^c)^{r-1}} P(X_{\gamma_{N-1}+r} = x_N, Y_{\gamma_{N-1}+r} \in dy_N, X_{\gamma_{N-1}+1}^{\gamma_{N-1}+r-1} = \bar{x}_1^{r-1}, Y_{\gamma_{N-1}+1}^{\gamma_{N-1}+r-1} \in d\bar{y}_1^{r-1} \mid B) \, P(B)$$
$$= \sum_{r \ge 1} \int_{(A^c)^{r-1}} P(X_{\gamma_{N-1}+r} = x_N, Y_{\gamma_{N-1}+r} \in dy_N, X_{\gamma_{N-1}+1}^{\gamma_{N-1}+r-1} = \bar{x}_1^{r-1}, Y_{\gamma_{N-1}+1}^{\gamma_{N-1}+r-1} \in d\bar{y}_1^{r-1} \mid X_{\gamma_{N-1}} = x_{N-1}) \, P(B)$$
$$= \sum_{r \ge 1} P(\gamma_N = \gamma_{N-1} + r, X_{\gamma_{N-1}+r} = x_N, Y_{\gamma_{N-1}+r} \in dy_N \mid X_{\gamma_{N-1}} = x_{N-1}) \, P(B)$$
$$= P(X_{\gamma_N} = x_N, Y_{\gamma_N} \in dy_N \mid X_{\gamma_{N-1}} = x_{N-1}) \, P(B),$$
and dividing by $P(B)$ the lemma is proved.

Remark 3. By the same token, for any $(x_1, y_1), \dots, (x_N, y_N) \in X \times S$,
$$P(X_{\gamma_N+1} = x_N, Y_{\gamma_N+1} \in dy_N \mid X_{\gamma_1+1}^{\gamma_{N-1}+1} = x_1^{N-1}, Y_{\gamma_1+1}^{\gamma_{N-1}+1} \in dy_1^{N-1}) = P(X_{\gamma_N+1} = x_N, Y_{\gamma_N+1} \in dy_N \mid X_{\gamma_{N-1}+1} = x_{N-1}).$$

Lemma 4. Let $(Y_n)$ be a HMM with underlying Markov chain $(X_n)$, and let $(\gamma_n)$ be a sequence of hitting times of $A$ for $(X_n, Y_n)$, where $A = A_1 \times A_2 \subseteq X \times S$; then, for any $x_1, \dots, x_N \in A_1$,
$$P(X_{\gamma_N} = x_N \mid X_{\gamma_1}^{\gamma_{N-1}} = x_1^{N-1}) = P(X_{\gamma_N} = x_N \mid X_{\gamma_{N-1}} = x_{N-1}).$$

Proof. For readability let $D := (X_{\gamma_1}^{\gamma_{N-1}} = x_1^{N-1}, Y_{\gamma_1}^{\gamma_{N-1}} \in dy_1^{N-1})$. The numerator of the LHS is
$$P(X_{\gamma_1}^{\gamma_N} = x_1^N) = \int_{(A_2)^N} P(X_{\gamma_N} = x_N, Y_{\gamma_N} \in dy_N \mid D) \, P(D)$$
$$= \int_{(A_2)^N} P(X_{\gamma_N} = x_N, Y_{\gamma_N} \in dy_N \mid X_{\gamma_{N-1}} = x_{N-1}) \, P(D)$$
$$= P(X_{\gamma_N} = x_N \mid X_{\gamma_{N-1}} = x_{N-1}) \, P(X_{\gamma_1}^{\gamma_{N-1}} = x_1^{N-1}).$$

Remark 4. By the same token, and using Remark 3, for any $x_1, \dots, x_N \in X$,
$$P(X_{\gamma_N+1} = x_N \mid X_{\gamma_1+1}^{\gamma_{N-1}+1} = x_1^{N-1}) = P(X_{\gamma_N+1} = x_N \mid X_{\gamma_{N-1}+1} = x_{N-1}).$$

Lemma 5. (Strong conditional independence) Let $(Y_n)$ be a HMM with underlying Markov chain $(X_n)$, and let $(\gamma_n)$ be a sequence of hitting times of $A$ for $(X_n, Y_n)$, where $A \subseteq X \times S$; then, for any $(x_1, y_1), \dots, (x_N, y_N) \in A$,
$$P(Y_{\gamma_1}^{\gamma_N} \in dy_1^N \mid X_{\gamma_1}^{\gamma_N} = x_1^N) = \prod_{k=1}^N P(Y_{\gamma_k} \in dy_k \mid X_{\gamma_k} = x_k).$$

Proof. Let $E := (Y_{\gamma_1}^{\gamma_{N-1}} \in dy_1^{N-1}, X_{\gamma_1}^{\gamma_{N-1}} = x_1^{N-1})$ for readability.
$$P(Y_{\gamma_1}^{\gamma_N} \in dy_1^N, X_{\gamma_1}^{\gamma_N} = x_1^N) = P(Y_{\gamma_N} \in dy_N, X_{\gamma_N} = x_N \mid E) \, P(E)$$
$$= P(Y_{\gamma_N} \in dy_N, X_{\gamma_N} = x_N \mid X_{\gamma_{N-1}} = x_{N-1}) \, P(E)$$
$$= P(Y_{\gamma_N} \in dy_N \mid X_{\gamma_N} = x_N) \, P(X_{\gamma_N} = x_N \mid X_{\gamma_{N-1}} = x_{N-1}) \, P(E)$$
$$= \prod_{k=1}^N P(Y_{\gamma_k} \in dy_k \mid X_{\gamma_k} = x_k) \, P(X_{\gamma_1}^{\gamma_N} = x_1^N),$$
where the second equality follows by Lemma 3 and the last equality follows iterating the procedure.

Remark 5. By the same token, using Remark 3, for any $(x_1, y_1), \dots, (x_N, y_N) \in X \times S$,
$$P(Y_{\gamma_1+1}^{\gamma_N+1} \in dy_1^N \mid X_{\gamma_1+1}^{\gamma_N+1} = x_1^N) = \prod_{k=1}^N P(Y_{\gamma_k+1} \in dy_k \mid X_{\gamma_k+1} = x_k).$$

6.2 HMMs and countable mixtures of i.i.d. sequences

The following fact was used in the proof of Theorem 3: if the HMM $(Y_n)$ has an underlying Markov chain with block structured transition probability matrix, with identical rows within blocks, then $(Y_n)$ is a countable mixture of i.i.d. sequences. Consider a Markov chain $(X_n)$ with values in $X$ and transition matrix $P$ as follows:
$$P := \begin{pmatrix} P^1 & O & O & \cdots \\ O & P^2 & O & \cdots \\ \vdots & & \ddots & \end{pmatrix}, \qquad P^h := \begin{pmatrix} p^h_{c^h_1} & p^h_{c^h_2} & \cdots & p^h_{c^h_{l_h}} \\ p^h_{c^h_1} & p^h_{c^h_2} & \cdots & p^h_{c^h_{l_h}} \\ \vdots & \vdots & & \vdots \\ p^h_{c^h_1} & p^h_{c^h_2} & \cdots & p^h_{c^h_{l_h}} \end{pmatrix}, \tag{17}$$

with $h \in H$, a countable set. The block $P^h$ has size $l_h$. Some of the $p^h_c$ can be null. The Markov chain $(X_n)$ has clearly $|H|$ recurrence classes, one for each block, and no transient states. Let us indicate with $C_h$ the $h$-th recurrence class, corresponding to the states of the $h$-th block; set $C_h = \{c^h_1, \dots, c^h_{l_h}\}$, where $l_h$ can be infinite. Trivially $X = \cup_{h \in H} C_h$. An invariant distribution associated with the $h$-th block is $p^h := (p^h_{c^h_1}, \dots, p^h_{c^h_{l_h}})$, and for any sequence $\mu_h > 0$ with $\sum_{h \in H} \mu_h = 1$, the vector
$$\pi = (\mu_1 p^1, \dots, \mu_h p^h, \dots) \tag{18}$$
is an invariant distribution for $P$.

Lemma 6. Consider a HMM $(Y_n)$ where the underlying Markov chain $(X_n)$ has transition matrix $P$ as in (17), invariant measure $\pi$ as in (18), and assigned read-out distributions $f_x(dy)$; then $(Y_n)$ is a countable mixture of i.i.d. sequences where $\tilde{p}$ takes values in the set $\{F_h,\ h \in H\}$, with
$$F_h(dy) := p^h_{c^h_1} f_{c^h_1}(dy) + \dots + p^h_{c^h_{l_h}} f_{c^h_{l_h}}(dy),$$
and $P(\tilde{p} = F_h) = \mu_h$.

Proof. Let us compute the finite distributions of $(Y_n)$. Let $y_0^N \in S^{N+1}$:
$$P(Y_0^N \in dy_0^N) = \sum_{x_0^N \in X} P(Y_0^N \in dy_0^N, X_0^N = x_0^N)$$
$$= \sum_{x_0^N \in X} P(X_0 = x_0) \prod_{n=0}^N P(Y_n \in dy_n \mid X_n = x_n) \prod_{n=1}^N P(X_n = x_n \mid X_{n-1} = x_{n-1})$$
$$= \sum_{x_0^N \in X} \pi_{x_0} f_{x_0}(dy_0) \, P_{x_0, x_1} f_{x_1}(dy_1) \cdots P_{x_{N-1}, x_N} f_{x_N}(dy_N)$$
$$= \sum_{h \in H} \sum_{x_0^N \in C_h} \mu_h \, p^h_{x_0} f_{x_0}(dy_0) \, p^h_{x_1} f_{x_1}(dy_1) \cdots p^h_{x_N} f_{x_N}(dy_N)$$
$$= \sum_{h \in H} \mu_h \Big( \sum_{x_0 \in C_h} p^h_{x_0} f_{x_0}(dy_0) \Big) \Big( \sum_{x_1 \in C_h} p^h_{x_1} f_{x_1}(dy_1) \Big) \cdots \Big( \sum_{x_N \in C_h} p^h_{x_N} f_{x_N}(dy_N) \Big)$$
$$= \sum_{h \in H} \mu_h \, F_h(dy_0) F_h(dy_1) \cdots F_h(dy_N),$$

where the second equality follows by the HMM properties, the fifth equality follows noting that $P_{x_n, x_{n+1}}$ is null for $x_n$ and $x_{n+1}$ in different recurrence classes, and is equal to $p^h_{x_{n+1}}$ for $x_n$ and $x_{n+1}$ in the same recurrence class $C_h$. The expression above coincides with the representation of countable mixtures of i.i.d. sequences given in (2), thus completing the proof.

References

[1] D.J. Aldous, (1985), Exchangeability and Related Topics, in École d'Été de Probabilités de Saint-Flour XIII, LNM 1117, Springer, Berlin.

[2] B. de Finetti, (1938), Sur la condition d'équivalence partielle, in Actualités Scientifiques et Industrielles, Hermann, Paris, vol. 739.

[3] S.W. Dharmadhikari, (1964), Exchangeable Processes which are Functions of Stationary Markov Chains, in The Annals of Mathematical Statistics, vol. 35.

[4] P. Diaconis and D. Freedman, (1980), de Finetti's Theorem for Markov Chains, in Annals of Probability, vol. 8.

[5] P. Diaconis and D. Freedman, (2004), The Markov Moment Problem and de Finetti's Theorem, Part I and Part II, in Math. Z., vol. 247.

[6] I. Epifani, S. Fortini, and L. Ladelli, (2002), A characterization for mixtures of semi-Markov processes, in Statist. and Prob. Letters, vol. 60.

[7] L. Finesso and C. Prosdocimi, (2009), Partially Exchangeable Hidden Markov Models, in Proceedings of the European Control Conference, ECC'09, Budapest, Hungary.

[8] S. Fortini, L. Ladelli, G. Petris, and E. Regazzini, (2002), On mixtures of distributions of Markov chains, in Stoch. Proc. Appl., vol. 100.

[9] O. Kallenberg, (2005), Probabilistic Symmetries and Invariance Principles, Springer-Verlag.

[10] C. Prosdocimi, (2010), Partial Exchangeability and Change Detection for Hidden Markov Models, PhD thesis, Ciclo XXII, University of Padova.


More information

LECTURE 3. Last time:

LECTURE 3. Last time: LECTURE 3 Last time: Mutual Information. Convexity and concavity Jensen s inequality Information Inequality Data processing theorem Fano s Inequality Lecture outline Stochastic processes, Entropy rate

More information

Non-homogeneous random walks on a semi-infinite strip

Non-homogeneous random walks on a semi-infinite strip Non-homogeneous random walks on a semi-infinite strip Chak Hei Lo Joint work with Andrew R. Wade World Congress in Probability and Statistics 11th July, 2016 Outline Motivation: Lamperti s problem Our

More information

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University Chapter 4 Data Transmission and Channel Capacity Po-Ning Chen, Professor Department of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30050, R.O.C. Principle of Data Transmission

More information

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales.

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. Lecture 2 1 Martingales We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. 1.1 Doob s inequality We have the following maximal

More information

Marked Point Processes in Discrete Time

Marked Point Processes in Discrete Time Marked Point Processes in Discrete Time Karl Sigman Ward Whitt September 1, 2018 Abstract We present a general framework for marked point processes {(t j, k j ) : j Z} in discrete time t j Z, marks k j

More information

Insert your Booktitle, Subtitle, Edition

Insert your Booktitle, Subtitle, Edition C. Landim Insert your Booktitle, Subtitle, Edition SPIN Springer s internal project number, if known Monograph October 23, 2018 Springer Page: 1 job: book macro: svmono.cls date/time: 23-Oct-2018/15:27

More information

A product of γ-sets which is not Menger.

A product of γ-sets which is not Menger. A product of γ-sets which is not Menger. A. Miller Dec 2009 Theorem. Assume CH. Then there exists γ-sets A 0, A 1 2 ω such that A 0 A 1 is not Menger. We use perfect sets determined by Silver forcing (see

More information

Point-Map Probabilities of a Point Process

Point-Map Probabilities of a Point Process Point-Map Probabilities of a Point Process F. Baccelli, UT AUSTIN Joint work with M.O. Haji-Mirsadeghi TexAMP, Austin, November 23, 24 Structure of the talk Point processes Palm probabilities Mecke s invariant

More information

Introduction to Ergodic Theory

Introduction to Ergodic Theory Introduction to Ergodic Theory Marius Lemm May 20, 2010 Contents 1 Ergodic stochastic processes 2 1.1 Canonical probability space.......................... 2 1.2 Shift operators.................................

More information

Kesten s power law for difference equations with coefficients induced by chains of infinite order

Kesten s power law for difference equations with coefficients induced by chains of infinite order Kesten s power law for difference equations with coefficients induced by chains of infinite order Arka P. Ghosh 1,2, Diana Hay 1, Vivek Hirpara 1,3, Reza Rastegar 1, Alexander Roitershtein 1,, Ashley Schulteis

More information

Jónsson Properties for Non-Ordinal Sets Under the Axiom of Determinacy

Jónsson Properties for Non-Ordinal Sets Under the Axiom of Determinacy Jónsson Properties for Non-Ordinal Sets Under the Axiom of Determinacy Fifth NY Graduate Student Logic Conference The Jónsson Property For κ a cardinal and n ω, [κ] n = {(α 1,, α n ) κ n : α 1 < < α n

More information

On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables

On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables Deli Li 1, Yongcheng Qi, and Andrew Rosalsky 3 1 Department of Mathematical Sciences, Lakehead University,

More information

4 Inverse function theorem

4 Inverse function theorem Tel Aviv Universit, 2013/14 Analsis-III,IV 53 4 Inverse function theorem 4a What is the problem................ 53 4b Simple observations before the theorem..... 54 4c The theorem.....................

More information

General Glivenko-Cantelli theorems

General Glivenko-Cantelli theorems The ISI s Journal for the Rapid Dissemination of Statistics Research (wileyonlinelibrary.com) DOI: 10.100X/sta.0000......................................................................................................

More information

Random Bernstein-Markov factors

Random Bernstein-Markov factors Random Bernstein-Markov factors Igor Pritsker and Koushik Ramachandran October 20, 208 Abstract For a polynomial P n of degree n, Bernstein s inequality states that P n n P n for all L p norms on the unit

More information

Convergence of Feller Processes

Convergence of Feller Processes Chapter 15 Convergence of Feller Processes This chapter looks at the convergence of sequences of Feller processes to a iting process. Section 15.1 lays some ground work concerning weak convergence of processes

More information

arxiv: v1 [math.gr] 8 Nov 2008

arxiv: v1 [math.gr] 8 Nov 2008 SUBSPACES OF 7 7 SKEW-SYMMETRIC MATRICES RELATED TO THE GROUP G 2 arxiv:0811.1298v1 [math.gr] 8 Nov 2008 ROD GOW Abstract. Let K be a field of characteristic different from 2 and let C be an octonion algebra

More information

Lecture Notes Introduction to Ergodic Theory

Lecture Notes Introduction to Ergodic Theory Lecture Notes Introduction to Ergodic Theory Tiago Pereira Department of Mathematics Imperial College London Our course consists of five introductory lectures on probabilistic aspects of dynamical systems,

More information

Gärtner-Ellis Theorem and applications.

Gärtner-Ellis Theorem and applications. Gärtner-Ellis Theorem and applications. Elena Kosygina July 25, 208 In this lecture we turn to the non-i.i.d. case and discuss Gärtner-Ellis theorem. As an application, we study Curie-Weiss model with

More information

Ergodic Theory. Constantine Caramanis. May 6, 1999

Ergodic Theory. Constantine Caramanis. May 6, 1999 Ergodic Theory Constantine Caramanis ay 6, 1999 1 Introduction Ergodic theory involves the study of transformations on measure spaces. Interchanging the words measurable function and probability density

More information

Applied Stochastic Processes

Applied Stochastic Processes Applied Stochastic Processes Jochen Geiger last update: July 18, 2007) Contents 1 Discrete Markov chains........................................ 1 1.1 Basic properties and examples................................

More information

Introduction to Dynamical Systems

Introduction to Dynamical Systems Introduction to Dynamical Systems France-Kosovo Undergraduate Research School of Mathematics March 2017 This introduction to dynamical systems was a course given at the march 2017 edition of the France

More information

process on the hierarchical group

process on the hierarchical group Intertwining of Markov processes and the contact process on the hierarchical group April 27, 2010 Outline Intertwining of Markov processes Outline Intertwining of Markov processes First passage times of

More information

THE MULTIPLICATIVE ERGODIC THEOREM OF OSELEDETS

THE MULTIPLICATIVE ERGODIC THEOREM OF OSELEDETS THE MULTIPLICATIVE ERGODIC THEOREM OF OSELEDETS. STATEMENT Let (X, µ, A) be a probability space, and let T : X X be an ergodic measure-preserving transformation. Given a measurable map A : X GL(d, R),

More information

The Game of Normal Numbers

The Game of Normal Numbers The Game of Normal Numbers Ehud Lehrer September 4, 2003 Abstract We introduce a two-player game where at each period one player, say, Player 2, chooses a distribution and the other player, Player 1, a

More information

Markov Processes on Discrete State Spaces

Markov Processes on Discrete State Spaces Markov Processes on Discrete State Spaces Theoretical Background and Applications. Christof Schuette 1 & Wilhelm Huisinga 2 1 Fachbereich Mathematik und Informatik Freie Universität Berlin & DFG Research

More information

= P{X 0. = i} (1) If the MC has stationary transition probabilities then, = i} = P{X n+1

= P{X 0. = i} (1) If the MC has stationary transition probabilities then, = i} = P{X n+1 Properties of Markov Chains and Evaluation of Steady State Transition Matrix P ss V. Krishnan - 3/9/2 Property 1 Let X be a Markov Chain (MC) where X {X n : n, 1, }. The state space is E {i, j, k, }. The

More information

RESEARCH INSTITUTE FOR MATHEMATICAL SCIENCES RIMS On self-intersection of singularity sets of fold maps. Tatsuro SHIMIZU.

RESEARCH INSTITUTE FOR MATHEMATICAL SCIENCES RIMS On self-intersection of singularity sets of fold maps. Tatsuro SHIMIZU. RIMS-1895 On self-intersection of singularity sets of fold maps By Tatsuro SHIMIZU November 2018 RESEARCH INSTITUTE FOR MATHEMATICAL SCIENCES KYOTO UNIVERSITY, Kyoto, Japan On self-intersection of singularity

More information