
PROBABILITY AND MATHEMATICAL STATISTICS
Vol. 20, Fasc. 2 (2000), pp. 359-372

DISCRETE PROBABILITY MEASURES ON 2 x 2 STOCHASTIC MATRICES AND A FUNCTIONAL EQUATION ON [0, 1]

A. MUKHERJEA AND J. S. RATTI (TAMPA, FL)

Abstract. In this paper, we consider the following natural problem: suppose $\mu_1$ and $\mu_2$ are two probability measures with finite supports $S(\mu_1)$, $S(\mu_2)$, respectively, such that $|S(\mu_1)| = |S(\mu_2)|$ and $S(\mu_1) \cup S(\mu_2) \subset$ 2 x 2 stochastic matrices, and $\mu_1^n$ (the $n$-th convolution power of $\mu_1$ under matrix multiplication), as well as $\mu_2^n$, converges weakly to the same probability measure $\lambda$, where $S(\lambda) \subset$ 2 x 2 stochastic matrices with rank one. Then when does it follow that $\mu_1 = \mu_2$? What if $S(\mu_1) = S(\mu_2)$? In other words, can two different random walks, in this context, have the same invariant probability measure? Here, we consider related problems.

1. Introduction: Statement of the problem. Let $\mu_1$ be a probability measure on 2 x 2 stochastic matrices such that its support $S(\mu_1)$, consisting of $n$ points, is given by $S(\mu_1) = \{A_1, A_2, \ldots, A_n\}$, where $A_i = (x_i, y_i)$ denotes the stochastic matrix whose first column is $(x_i, y_i)$, $0 < x_i < 1$, $0 < y_i < 1$ and $x_i > y_i$. The matrix $(t, t)$ will be denoted simply by $t$. Then it is well known that the convolution iterates $\mu_1^n$, defined by

$\mu_1^{n+1}(B) = \int \mu_1^n\{y : yx \in B\}\, \mu_1(dx),$

converge weakly to a probability measure $\lambda$ whose support consists of 2 x 2 stochastic matrices with identical rows. Thus the elements in $S(\mu_1)$ can be represented by points below the diagonal in the unit square, and the elements in $S(\lambda)$ can be represented by points on the diagonal. Considering $\lambda$ as a probability measure on the unit interval [0, 1], let $G$ be the distribution function of $\lambda$. Then, since $\lambda$ is uniquely determined by the convolution equation

(1.1) $\lambda * \mu_1 = \lambda,$

the function $G$ is uniquely determined by the functional equation

(1.2) $G(x) = \sum_{i=1}^{n} p_i\, G\!\left(\frac{x - y_i}{x_i - y_i}\right),$

where $p_i = \mu_1(A_i)$, $0 < p_i < 1$, $p_1 + p_2 + \cdots + p_n = 1$.
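The convolution iterates above are easy to explore numerically. The following Python sketch is not part of the paper: the helper names are illustrative, and the particular support and weights are simply the data of $\mu_1$ from Example 2.2 below. It multiplies i.i.d. matrices drawn from a finitely supported $\mu_1$ and records the (asymptotically common) first-column entry of the product, which gives an empirical approximation of $\lambda$ and of its distribution function $G$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative finitely supported measure mu_1 (numbers from Example 2.2 below):
# support points A_i = (x_i, y_i), i.e. first columns of 2x2 stochastic matrices
# with x_i > y_i, and weights p_i.
support = [(3/8, 5/24), (5/6, 1/6), (19/24, 5/8)]
weights = [1/6, 2/3, 1/6]

def as_matrix(x, y):
    # stochastic matrix whose first column is (x, y)
    return np.array([[x, 1.0 - x], [y, 1.0 - y]])

def sample_from_limit(n_factors=60):
    # Multiply n_factors i.i.d. matrices drawn from mu_1.  Because x_i > y_i,
    # the product approaches a rank-one matrix with identical rows, and the
    # common first-column entry is (approximately) a sample from lambda.
    idx = rng.choice(len(support), size=n_factors, p=weights)
    prod = np.eye(2)
    for k in idx:
        prod = prod @ as_matrix(*support[k])
    return prod[0, 0]

samples = np.array([sample_from_limit() for _ in range(5000)])
for q in (0.3, 0.5, 0.7):
    # empirical values of the distribution function G of lambda on [0, 1]
    print(f"G({q}) ~= {np.mean(samples <= q):.3f}")
```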

Writing $g(x) = G(Lx + t_1)$, where $t_i = \lim_{n\to\infty} A_i^n$, $L = t_n - t_1$, $t_1 < t_2 < \cdots < t_n$, it is easily verified that (1.2) becomes

(1.3) $g(x) = \sum_{i=1}^{n} p_i\, g(a_i x - a_i\alpha_i + \alpha_i),$

where $0 \le x \le 1$, $1/a_i = x_i - y_i$, $\alpha_i = (t_i - t_1)/(t_n - t_1)$. It is easily shown that $g(x) > 0$ for $x > 0$ and $g(x) < 1$ for $x < 1$.

In this paper, we study the problem of when the limit $\lambda$ determines uniquely the probability measure $\mu_1$. This problem was earlier examined in [2] in the case when $n = 2$. See also [1], and [3], p. 159. Such problems come up in a natural manner in the theory of iterated function systems in the context of fractals/attractors. In that context, the measure $\mu$ in (1.1) is the distribution that induces the random walk with values in a set of stochastic matrices, and the measure $\lambda$ in (1.1) is the distribution that uniquely determines the attractor corresponding to the random walk induced by $\mu$. The problem is whether two different systems can give rise to the same attractor. In terms of the functional equation (1.3), the problem can be stated as follows: if the function $g$ in (1.3) also satisfies the equation

(1.4) $g(x) = \sum_{i=1}^{n} p_i'\, g(a_i' x - a_i'\alpha_i' + \alpha_i'),$

where the quantities $p_i'$, $a_i'$, $\alpha_i'$ correspond to another probability measure $\mu_2$ (with exactly the same meanings as before) such that $\mu_2^n$ also converges weakly to the same probability measure $\lambda$, then when can we conclude that $\mu_1 = \mu_2$ or, in other words, that for each $i \ge 1$, $p_i = p_i'$ and $(x_i, y_i) = (x_i', y_i')$? The theorem in the next section is an attempt to answer this question.

2. Main result: A theorem.

THEOREM 2.1. Let $\mu_1$ and $\mu_2$ be two probability measures, each with an $n$-point support, such that

$S(\mu_1) = \{A_1, A_2, \ldots, A_n\}, \qquad S(\mu_2) = \{A_1', A_2', \ldots, A_n'\},$

where $A_i = (x_i, y_i)$, $A_i' = (x_i', y_i')$, $x_i - y_i > 0$, and $x_i' - y_i' > 0$. Suppose that both $\mu_1^n$ and $\mu_2^n$ converge weakly, as $n \to \infty$, to the same probability measure $\lambda$. Let $t_i = y_i/[1 - (x_i - y_i)]$ and $t_i' = y_i'/[1 - (x_i' - y_i')]$, so that $\lim_{n\to\infty} A_i^n = (t_i, t_i)$ and $\lim_{n\to\infty} (A_i')^n = (t_i', t_i')$. Suppose that the following conditions hold:

(i) for $1 < i \le n$, $t_1 = t_1' < \min\{t_i, t_i'\}$;

(ii) the map $x \mapsto t_1 \cdot x$ is one-to-one on $S(\mu_1) \cup S(\mu_2)$.

Then $\mu_1 = \mu_2$.
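The quantities $t_i$, $a_i$, $\alpha_i$ of (1.3) and the hypotheses of Theorem 2.1 can be computed directly from the support points. The sketch below is an illustration only (the function names are not from the paper); it uses exact rational arithmetic and checks conditions (i) and (ii) for a pair of supports, here the two supports that will appear in Example 2.2.

```python
from fractions import Fraction as F

def params(support):
    """For support points A_i = (x_i, y_i), listed so that t_1 < ... < t_n, return
    t_i = y_i/(1-(x_i-y_i)), a_i = 1/(x_i-y_i), alpha_i = (t_i-t_1)/(t_n-t_1) as in (1.3)."""
    t = [y / (1 - (x - y)) for x, y in support]
    a = [1 / (x - y) for x, y in support]
    alpha = [(ti - t[0]) / (t[-1] - t[0]) for ti in t]
    return t, a, alpha

def check_theorem_2_1_conditions(S1, S2):
    """(i): t_1 = t_1' is strictly the smallest fixed point;
    (ii): x -> t_1 . x, i.e. (x, y) -> t_1*(x - y) + y, is one-to-one on S1 u S2."""
    t1, _, _ = params(S1)
    t2, _, _ = params(S2)
    cond_i = (t1[0] == t2[0]) and all(t1[0] < v for v in t1[1:] + t2[1:])
    t = t1[0]
    images = [t * (x - y) + y for (x, y) in set(S1) | set(S2)]
    cond_ii = len(images) == len(set(images))
    return cond_i, cond_ii

# Illustrative check with the two supports of Example 2.2 (exact arithmetic):
S1 = [(F(3, 8), F(5, 24)), (F(5, 6), F(1, 6)), (F(19, 24), F(5, 8))]
S2 = [(F(3, 8), F(5, 24)), (F(14, 15), F(2, 15)), (F(91, 120), F(29, 40))]
print(check_theorem_2_1_conditions(S1, S2))   # expected: (True, False) -- condition (ii) fails
```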

[Let us remark that condition (ii) means that if $(a, b)$ and $(c, d)$ are two different points in $S(\mu_1) \cup S(\mu_2)$, then $t_1(a - b) + b \ne t_1(c - d) + d$. Geometrically, this means that if we consider the points $P = (t_1, t_1)$, $A = (1, 0)$, $A_i = (x_i, y_i)$ and $A_i' = (x_i', y_i')$ in the unit square, then the line through $A_i$ (respectively, $A_i'$) parallel to the line $PA$ does not contain any of the points $A_j$, $j \ne i$ (respectively, $A_j'$, $j \ne i$), nor any of the points $A_j'$, $1 \le j \le n$ (respectively, $A_j$, $1 \le j \le n$). Let us also remark that the theorem remains true if we replace conditions (i) and (ii) above by the following conditions:

(i') for $1 \le i < n$, $t_n = t_n' > \max\{t_i, t_i'\}$;

(ii') the map $x \mapsto t_n \cdot x$ is one-to-one on $S(\mu_1) \cup S(\mu_2)$.]

Proof. The function $g$ corresponding to $\lambda$ satisfies the equations

(2.1) $g(x) = \sum_{i=1}^{n} p_i\, g(a_i x - a_i\alpha_i + \alpha_i)$

and

(2.2) $g(x) = \sum_{i=1}^{n} p_i'\, g(a_i' x - a_i'\alpha_i' + \alpha_i').$

Substituting one into the other, from (2.1) and (2.2) we have

(2.3) $g(x) = \sum_{i=1}^{n}\sum_{j=1}^{n} p_i p_j'\, g(a_i a_j' x - a_i a_j'\alpha_i + a_j'\alpha_i - a_j'\alpha_j' + \alpha_j') = \sum_{i=1}^{n}\sum_{j=1}^{n} p_i' p_j\, g(a_i' a_j x - a_i' a_j\alpha_i' + a_j\alpha_i' - a_j\alpha_j + \alpha_j).$

Writing

$h(x) = g(x) - p_1 p_1'\, g(a_1 a_1' x) - p_n p_n'\, g(a_n a_n' x - a_n a_n' + 1),$

we then have

$h(x) = \sum p_i p_j'\, g(a_i a_j' x - a_i a_j'\alpha_i + a_j'\alpha_i - a_j'\alpha_j' + \alpha_j') = \sum p_i' p_j\, g(a_i' a_j x - a_i' a_j\alpha_i' + a_j\alpha_i' - a_j\alpha_j + \alpha_j),$

where the summations in both expressions above are over $i = 1$ to $n$ and $j = 1$ to $n$ such that $(i, j) \ne (1, 1)$ and $(i, j) \ne (n, n)$. Now notice that

$a_i a_j' x - a_i a_j'\alpha_i + a_j'\alpha_i - a_j'\alpha_j' + \alpha_j' \le 0 \quad\text{if and only if}\quad x \le \alpha_i(1 - 1/a_i) + \frac{1}{a_i}\,\alpha_j'(1 - 1/a_j').$

Since $g(x) > 0$ for $x > 0$, we have

(2.4) $\min\{\alpha_i(1 - 1/a_i) + (1/a_i)\,\alpha_j'(1 - 1/a_j') : (i, j) \ne (1, 1),\ (i, j) \ne (n, n)\} = \min\{\alpha_i'(1 - 1/a_i') + (1/a_i')\,\alpha_j(1 - 1/a_j) : (i, j) \ne (1, 1),\ (i, j) \ne (n, n)\}.$

Note that $\alpha_1 = \alpha_1' = 0$, and that also

$\min\{\alpha_i(1 - 1/a_i) + (1/a_i)\,\alpha_j'(1 - 1/a_j') : (i, j) \ne (1, 1),\ (i, j) \ne (n, n)\} = \min\Big\{\min_{i \ne 1}\alpha_i(1 - 1/a_i),\ \min_{j \ne 1}\frac{1}{a_1}\,\alpha_j'(1 - 1/a_j')\Big\}.$

This means that

$\min\Big\{\min_{i \ne 1}\alpha_i(1 - 1/a_i),\ \min_{j \ne 1}\frac{1}{a_1}\,\alpha_j'(1 - 1/a_j')\Big\} = \min\Big\{\min_{i \ne 1}\alpha_i'(1 - 1/a_i'),\ \min_{j \ne 1}\frac{1}{a_1'}\,\alpha_j(1 - 1/a_j)\Big\}.$

Since $g(x) < 1$ for $x < 1$, instead of considering the "minimum", if we considered the "maximum" above, we would obtain similarly (after some calculations)

(2.5) $\max\{(1 - \alpha_i)(1 - 1/a_i) : i \ne n\} = \max\{(1 - \alpha_j')(1 - 1/a_j') : j \ne n\}.$

Let us now make the following observation. The points $A_i$, $A_i'$ and $A_j'$ are the points $(x_i, y_i)$, $(x_i', y_i')$ and $(x_j', y_j')$, respectively. Note that the condition

(2.6) $\alpha_i(1 - 1/a_i) = \alpha_j'(1 - 1/a_j')$

is equivalent to the condition that the points $t_1 A_i$ and $t_1 A_j'$ are identical.

Similarly, the condition

(2.7) $\alpha_i'(1 - 1/a_i') = \alpha_j(1 - 1/a_j)$

is equivalent to the condition that the points $t_1 A_i'$ and $t_1 A_j$ are identical. Now suppose that we are given that the points $A_1$ and $A_1'$ are the same and that there are no points $A_i \in S(\mu_1)$ and $A_j' \in S(\mu_2)$ such that the condition (2.6) holds unless they are the same. Then, of course, it follows from (2.4) that the points $(x_{i_0}, y_{i_0}) = (x_{j_0}', y_{j_0}')$ for some $i_0 > 1$ and $j_0 > 1$. But notice that

(2.8) $g(x) = p_1\, g(a_1 x) + p_{i_0}\, g\big(a_{i_0}[x - \alpha_{i_0}(1 - 1/a_{i_0})]\big), \quad 0 \le x \le \beta_1,$

and also

(2.9) $g(x) = p_1'\, g(a_1' x) + p_{j_0}'\, g\big(a_{j_0}'[x - \alpha_{j_0}'(1 - 1/a_{j_0}')]\big), \quad 0 \le x \le \beta_1',$

where $\beta_1 = \min\{\alpha_i(1 - 1/a_i) : i \ne 1,\ i \ne i_0\}$ and $\beta_1' = \min\{\alpha_j'(1 - 1/a_j') : j \ne 1,\ j \ne j_0\}$. Since $g(x) > 0$ for $x > 0$ and since $A_1 = A_1'$ and $A_{i_0} = A_{j_0}'$, it follows from (2.8) and (2.9) that

(2.10) $p_{i_0} = p_{j_0}'.$

As a result, $a_{i_0} = a_{j_0}'$ and $\alpha_{i_0} = \alpha_{j_0}'$. Now we can go back to (2.3) and subtract appropriate terms from both sides. Writing

$\tilde h(x) = h(x) - p_{i_0} p_{j_0}'\, g\big(a_{i_0} a_{j_0}' x - a_{i_0} a_{j_0}'\alpha_{i_0} + a_{j_0}'\alpha_{i_0} - a_{j_0}'\alpha_{j_0}' + \alpha_{j_0}'\big) - p_1 p_{j_0}'\, g\big(a_1 a_{j_0}' x - a_{j_0}'\alpha_{j_0}' + \alpha_{j_0}'\big) - p_{i_0} p_1'\, g\big(a_{i_0} a_1' x - a_{i_0} a_1'\alpha_{i_0} + a_1'\alpha_{i_0}\big),$

we then have

(2.11) $\tilde h(x) = \sum p_i p_j'\, g\big(a_i a_j' x - a_i a_j'\alpha_i + a_j'\alpha_i - a_j'\alpha_j' + \alpha_j'\big) = \sum p_i' p_j\, g\big(a_i' a_j x - a_i' a_j\alpha_i' + a_j\alpha_i' - a_j\alpha_j + \alpha_j\big),$

where the summation in both expressions on the right is over $i = 1$ to $n$ and $j = 1$ to $n$ such that $(i, j) \ne (1, 1)$, $(i, j) \ne (n, n)$, $(i, j) \ne (i_0, j_0)$, $(i, j) \ne (1, j_0)$ and $(i, j) \ne (i_0, 1)$. Again, following the same analysis as before, we now have

(2.12) $\min\{\alpha_i(1 - 1/a_i) + (1/a_i)\,\alpha_j'(1 - 1/a_j')\} = \min\{\alpha_i'(1 - 1/a_i') + (1/a_i')\,\alpha_j(1 - 1/a_j)\},$

both minima being taken over the same reduced set of indices.

It follows from (2.12) that there exist $i_1$ and $j_1$ such that $i_1 \notin \{1, i_0\}$, $j_1 \notin \{1, j_0\}$ and $A_{i_1} = A_{j_1}'$. As in (2.9) and (2.10), we can again show that $p_{i_1} = p_{j_1}'$. The induction process continues, and it follows that $\mu_1 = \mu_2$. ■

Let us now give an example to show that in Theorem 2.1 the condition that the map $x \mapsto t_1 \cdot x$ is one-to-one cannot be removed.

EXAMPLE 2.2. Consider the probability measures $\mu_1$ and $\mu_2$ such that $S(\mu_1) = \{A_1, A_2, A_3\}$, where

$A_1 = (3/8,\ 5/24), \quad A_2 = (5/6,\ 1/6), \quad A_3 = (19/24,\ 5/8),$

with $\mu_1(A_1) = 1/6$, $\mu_1(A_2) = 2/3$, $\mu_1(A_3) = 1/6$, and $S(\mu_2) = \{A_1', A_2', A_3'\}$, where

$A_1' = A_1 = (3/8,\ 5/24), \quad A_2' = (14/15,\ 2/15), \quad A_3' = (91/120,\ 29/40),$

with $\mu_2(A_1') = 1/6$, $\mu_2(A_2') = 4/5$, $\mu_2(A_3') = 1/30$. It is easily verified that

$t_1 = t_1' = 1/4, \quad t_2 = 1/2, \quad t_2' = 2/3, \quad t_3 = t_3' = 3/4.$

Notice that $t_1 \cdot A_2 = t_1 \cdot A_2' = (1/3, 1/3)$, so that the map $x \mapsto t_1 \cdot x$ is not one-to-one on $S(\mu_1) \cup S(\mu_2)$. Let $\lambda_1 = (w)\lim_{n\to\infty} \mu_1^n$. Then the function $g_1$ corresponding to $\lambda_1$ satisfies the equation

(2.13) $g_1(x) = \tfrac{1}{6}\, g_1(6x) + \tfrac{2}{3}\, g_1\!\left(\tfrac{3}{2}x - \tfrac{1}{4}\right) + \tfrac{1}{6}\, g_1(6x - 5)$

for $0 \le x \le 1$. Similarly, if $\lambda_2 = (w)\lim_{n\to\infty} \mu_2^n$, then the function $g_2$ corresponding to $\lambda_2$ satisfies the equation

(2.14) $g_2(x) = \tfrac{1}{6}\, g_2(6x) + \tfrac{4}{5}\, g_2\!\left(\tfrac{5}{4}x - \tfrac{5}{24}\right) + \tfrac{1}{30}\, g_2(30x - 29)$

for $0 \le x \le 1$. Observing that $g_1$ and $g_2$ both satisfy $g_1(x) = 1$, $g_2(x) = 1$ for $x \ge 1$, and $g_1(x) = 0 = g_2(x)$ for $x \le 0$, it follows immediately that the function

$g(x) = x, \quad 0 \le x \le 1 \qquad (\text{with } g(x) = 0 \text{ for } x < 0 \text{ and } g(x) = 1 \text{ for } x > 1),$

satisfies both (2.13) and (2.14). Thus, though $\mu_1 \ne \mu_2$, it follows that $\lambda_1 = \lambda_2$.
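Example 2.2 can also be checked numerically. The short Python sketch below is an illustration, not part of the paper: it evaluates the right-hand sides of (2.13) and (2.14) at the function $g(x) = x$ (with the stated boundary values) on a grid and confirms that both equations hold up to rounding error, in agreement with the conclusion $\lambda_1 = \lambda_2$.

```python
import numpy as np

def g(u):
    # the common solution: g(u) = u on [0, 1], 0 below, 1 above
    return np.clip(u, 0.0, 1.0)

def rhs_2_13(x):
    # right-hand side of (2.13), coefficients computed from S(mu_1)
    return (1/6) * g(6*x) + (2/3) * g(1.5*x - 0.25) + (1/6) * g(6*x - 5)

def rhs_2_14(x):
    # right-hand side of (2.14), coefficients computed from S(mu_2)
    return (1/6) * g(6*x) + (4/5) * g(1.25*x - 5/24) + (1/30) * g(30*x - 29)

xs = np.linspace(0.0, 1.0, 1001)
print(np.max(np.abs(g(xs) - rhs_2_13(xs))))   # ~0 up to rounding
print(np.max(np.abs(g(xs) - rhs_2_14(xs))))   # ~0 up to rounding
```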

The following result, though it seems limited, does not seem to be trivial.

THEOREM 2.3. Suppose $\mu_1$ and $\mu_2$ are two probability measures on 2 x 2 stochastic matrices such that both $S(\mu_1)$ and $S(\mu_2)$ consist of $n$ points. Suppose that

(i) $(w)\lim_{n\to\infty}\mu_1^n = (w)\lim_{n\to\infty}\mu_2^n = \lambda$;

(ii) $\mu_1(A_i) = \mu_2(A_i') = p_i$, $1 \le i \le n$, where $S(\mu_1) = \{A_1, A_2, \ldots, A_n\}$ and $S(\mu_2) = \{A_1', A_2', \ldots, A_n'\}$.

We assume: $t_i < t_{i+1}$, $1 \le i \le n-1$. Then, if $n \le 4$, then $\mu_1 = \mu_2$.

Proof. We prove only the case where $n = 4$, which is not trivial. Let $g$ be the function (as defined earlier) corresponding to $\lambda$, so that $g$ satisfies the equations

(2.15) $g(x) = \sum_{i=1}^{4} p_i\, g(a_i x - a_i\alpha_i + \alpha_i) = \sum_{i=1}^{4} p_i\, g(a_i' x - a_i'\alpha_i' + \alpha_i'),$

where $a_i$, $\alpha_i$, $a_i'$, $\alpha_i'$ have the same meanings as in (1.3) and (1.4). [We are assuming here, for simplicity, $t_1 < t_2 < t_3 < t_4$, where $t_i = \lim_{n\to\infty} A_i^n$, and also $t_1' < t_2' < t_3' < t_4'$, where $t_i' = \lim_{n\to\infty} (A_i')^n$.] It follows from (2.15), since $\alpha_1 = \alpha_1' = 0$ and $\alpha_4 = \alpha_4' = 1$, that

(2.16) $p_1\, g(a_1 x) = p_1\, g(a_1' x)$ for all $x$ sufficiently close to 0,

and that

(2.17) $p_4\, g(a_4(x - 1) + 1) = p_4\, g(a_4'(x - 1) + 1)$ for all $x$ sufficiently close to 1.

Notice that (2.16) implies that, for some $\delta > 0$,

$g(a_1 x) = g(a_1' x), \quad 0 \le x \le \delta.$

Thus, if $a_1 > a_1'$, then

$g(a_1 x) = g(a_1' x) = g\!\left(a_1\frac{a_1'}{a_1}x\right) = g\!\left(a_1'\frac{a_1'}{a_1}x\right) = \cdots = g\!\left(a_1\Big(\frac{a_1'}{a_1}\Big)^{k}x\right) \quad\text{for every } k \ge 1,$

so that, for $0 < x < \delta$, $g(a_1 x) = 0$, which contradicts the fact that $g(x) > 0$ for $x > 0$. Thus $a_1 \le a_1'$. Similarly, $a_1' \le a_1$ and, consequently, $a_1 = a_1'$. It follows from (2.17) that there exists $\delta > 0$ such that $g(a_4(x-1)+1) = g(a_4'(x-1)+1)$ for $1 - \delta \le x \le 1$. Writing $y$ for $x - 1$ and putting $h(y) = g(y + 1)$, we obtain

$h(a_4 y) = h(a_4' y), \quad -\delta \le y \le 0.$

Noting that $g(x) < 1$ for $x < 1$ and $g(1) = 1$, so that $h(y) < 1$ for $y < 0$ and $h(0) = 1$, we will again get a contradiction unless $a_4 = a_4'$. Thus, it follows that $A_1 = A_1'$ and $A_4 = A_4'$, since $t_1 = t_1'$ and $t_4 = t_4'$. Now we infer from (2.15) that

(2.18) $p_2\, g\big(a_2[x - \alpha_2(1 - 1/a_2)]\big) + p_3\, g\big(a_3[x - \alpha_3(1 - 1/a_3)]\big) = p_2\, g\big(a_2'[x - \alpha_2'(1 - 1/a_2')]\big) + p_3\, g\big(a_3'[x - \alpha_3'(1 - 1/a_3')]\big)$

for $0 \le x \le 1$. It follows from (2.18) that

(2.19) $\min\{\alpha_2(1 - 1/a_2),\ \alpha_3(1 - 1/a_3)\} = \min\{\alpha_2'(1 - 1/a_2'),\ \alpha_3'(1 - 1/a_3')\}$

and

(2.20) $\max\{1/a_2 + \alpha_2(1 - 1/a_2),\ 1/a_3 + \alpha_3(1 - 1/a_3)\} = \max\{1/a_2' + \alpha_2'(1 - 1/a_2'),\ 1/a_3' + \alpha_3'(1 - 1/a_3')\}.$

First, we consider the following:

Case 1:

(2.21) $\alpha_2(1 - 1/a_2) = \alpha_2'(1 - 1/a_2').$

If $\alpha_2(1 - 1/a_2) < \alpha_3(1 - 1/a_3)$ and $\alpha_2'(1 - 1/a_2') < \alpha_3'(1 - 1/a_3'),$

then it follows from (2.18) that there exists $\delta > 0$ such that

(2.22) $p_2\, g\big(a_2[x - \alpha_2(1 - 1/a_2)]\big) = p_2\, g\big(a_2'[x - \alpha_2'(1 - 1/a_2')]\big)$ for $\alpha_2(1 - 1/a_2) \le x \le \alpha_2(1 - 1/a_2) + \delta$.

It follows from (2.22) that $a_2 = a_2'$ and therefore, from (2.21), $\alpha_2 = \alpha_2'$, and $A_2 = A_2'$. From (2.18) it follows that $A_3 = A_3'$. Therefore, we assume that

(2.23) $\alpha_2(1 - 1/a_2) = \alpha_2'(1 - 1/a_2') = \alpha_3(1 - 1/a_3) < \alpha_3'(1 - 1/a_3').$

Since $\alpha_2 < \alpha_3$, we have $1/a_2 < 1/a_3$. Then from (2.20) we obtain

(2.24) $1/a_3 + \alpha_3(1 - 1/a_3) = \max\{1/a_2' + \alpha_2'(1 - 1/a_2'),\ 1/a_3' + \alpha_3'(1 - 1/a_3')\}.$

If

$1/a_3 + \alpha_3(1 - 1/a_3) = 1/a_3' + \alpha_3'(1 - 1/a_3'),$

then, by (2.18), for

$\max\{1/a_2 + \alpha_2(1 - 1/a_2),\ 1/a_2' + \alpha_2'(1 - 1/a_2')\} < x < 1/a_3 + \alpha_3(1 - 1/a_3)$

we have

$g\big(a_3[x - \alpha_3(1 - 1/a_3)]\big) = g\big(a_3'[x - \alpha_3'(1 - 1/a_3')]\big).$

It follows that there exists $\delta > 0$ such that

$h(a_3 y) = h(a_3' y), \quad -\delta \le y \le 0,$

where $h(0) = 1$ and $h(y) < 1$ for $y < 0$. It follows that $a_3 = a_3'$, and therefore $\alpha_3 = \alpha_3'$ and $A_3 = A_3'$. Consequently, from (2.18), we get $A_2 = A_2'$. Therefore, we assume that

(2.25) $1/a_3 + \alpha_3(1 - 1/a_3) = 1/a_2' + \alpha_2'(1 - 1/a_2') > 1/a_3' + \alpha_3'(1 - 1/a_3').$

Then, from (2.23) and (2.25) we get

(2.26) $A_2' = A_3,$

and from (2.18) we obtain

(2.27) $(p_2 - p_3)\, g\big(a_3[x - \alpha_3(1 - 1/a_3)]\big) = p_2\, g\big(a_2[x - \alpha_2(1 - 1/a_2)]\big) - p_3\, g\big(a_3'[x - \alpha_3'(1 - 1/a_3')]\big).$

By (2.25), for

$1/a_3 + \alpha_3(1 - 1/a_3) > x \ge \max\{1/a_2 + \alpha_2(1 - 1/a_2),\ 1/a_3' + \alpha_3'(1 - 1/a_3')\}$

we then have

$g\big(a_2[x - \alpha_2(1 - 1/a_2)]\big) = 1 \quad\text{and}\quad g\big(a_3'[x - \alpha_3'(1 - 1/a_3')]\big) = 1,$

which implies that

$(p_2 - p_3)\, g\big(a_3[x - \alpha_3(1 - 1/a_3)]\big) = p_2 - p_3.$

Consequently, from (2.27) we get $p_2 = p_3$, and then

$g\big(a_2[x - \alpha_2(1 - 1/a_2)]\big) = g\big(a_3'[x - \alpha_3'(1 - 1/a_3')]\big)$ for $0 \le x \le 1$.

It follows that

(2.28) $A_2 = A_3'.$

Then from (2.26) and (2.28) we obtain $A_2' = A_3$, $A_2 = A_3'$, which is a contradiction since $t_1 < t_2 < t_3 < t_4$ and $t_1' < t_2' < t_3' < t_4'$. Thus, we must assume that

(2.29) $1/a_3 + \alpha_3(1 - 1/a_3) = 1/a_2' + \alpha_2'(1 - 1/a_2') = 1/a_3' + \alpha_3'(1 - 1/a_3') > 1/a_2 + \alpha_2(1 - 1/a_2).$

By (2.18), then there exists $\delta > 0$ such that

(2.30) $p_2\big[1 - g\big(a_2'[x - \alpha_2'(1 - 1/a_2')]\big)\big] = p_3\big[g\big(a_3'[x - \alpha_3'(1 - 1/a_3')]\big) - g\big(a_3[x - \alpha_3(1 - 1/a_3)]\big)\big]$

for $M - \delta \le x < M$, where $M$ denotes the common value in (2.29). It follows that $p_2 \le p_3$. Also, from (2.18) (see also (2.23)), for $x < \alpha_3'(1 - 1/a_3')$ we get

$p_2\, g\big(a_2[x - \alpha_2(1 - 1/a_2)]\big) + p_3\, g\big(a_3[x - \alpha_3(1 - 1/a_3)]\big) = p_2\, g\big(a_2'[x - \alpha_2'(1 - 1/a_2')]\big),$

so that $p_2 \ge p_3$. Hence $p_2 = p_3$. This gives in (2.30):

$g\big(a_2'[x - \alpha_2'(1 - 1/a_2')]\big) = 1$ for $x < 1/a_2' + \alpha_2'(1 - 1/a_2'),$

a contradiction. Thus, we cannot assume (2.23). Similarly, we cannot assume

$\alpha_2(1 - 1/a_2) = \alpha_2'(1 - 1/a_2') = \alpha_3'(1 - 1/a_3') < \alpha_3(1 - 1/a_3).$

Thus, we must assume, if possible,

$\alpha_2(1 - 1/a_2) = \alpha_2'(1 - 1/a_2') = \alpha_3(1 - 1/a_3) = \alpha_3'(1 - 1/a_3').$

Then $1/a_2 < 1/a_3$ and $1/a_2' < 1/a_3'$. From (2.20) we get

$1/a_3 + \alpha_3(1 - 1/a_3) = 1/a_3' + \alpha_3'(1 - 1/a_3').$

It follows that $A_3 = A_3'$. From (2.18) we obtain $A_2 = A_2'$.

Case 2:

$\alpha_2(1 - 1/a_2) = \alpha_3'(1 - 1/a_3').$

In this case,

$\alpha_3'(1 - 1/a_3') < \alpha_2'(1 - 1/a_2'),$

so that, since $\alpha_3' > \alpha_2'$,

$1 - 1/a_3' < 1 - 1/a_2', \quad\text{or}\quad 1/a_3' > 1/a_2'.$

Thus, since

$\max\{1/a_2 + \alpha_2(1 - 1/a_2),\ 1/a_3 + \alpha_3(1 - 1/a_3)\} = \max\{1/a_2' + \alpha_2'(1 - 1/a_2'),\ 1/a_3' + \alpha_3'(1 - 1/a_3')\},$

it follows that we must have one of the following two possibilities:

(i) $1/a_2 + \alpha_2(1 - 1/a_2) = 1/a_3' + \alpha_3'(1 - 1/a_3')$;

(ii) $1/a_3 + \alpha_3(1 - 1/a_3) = 1/a_3' + \alpha_3'(1 - 1/a_3')$.

In the first situation, since $\alpha_2(1 - 1/a_2) = \alpha_3'(1 - 1/a_3')$, we have $a_2 = a_3'$ and $\alpha_2 = \alpha_3'$. This, of course, means that $t_2 = t_3'$, so that $A_2 = A_3'$. Then we have

(2.31) $(p_2 - p_3)\, g\big(a_2[x - \alpha_2(1 - 1/a_2)]\big) = p_2\, g\big(a_2'[x - \alpha_2'(1 - 1/a_2')]\big) - p_3\, g\big(a_3[x - \alpha_3(1 - 1/a_3)]\big).$

This means that if

$1/a_2' + \alpha_2'(1 - 1/a_2') < 1/a_2 + \alpha_2(1 - 1/a_2) \quad\text{and}\quad 1/a_3 + \alpha_3(1 - 1/a_3) < 1/a_2 + \alpha_2(1 - 1/a_2),$

then, writing $B = \max\{1/a_2' + \alpha_2'(1 - 1/a_2'),\ 1/a_3 + \alpha_3(1 - 1/a_3)\}$, for $B < x < 1/a_2 + \alpha_2(1 - 1/a_2)$ the right-hand side of (2.31) is $p_2 - p_3$, so that

$g\big(a_2[x - \alpha_2(1 - 1/a_2)]\big) = 1,$

which contradicts that $g(y) < 1$ for $y < 1$. Thus, we can now assume that

$1/a_2 + \alpha_2(1 - 1/a_2) \le D := \max\{1/a_2' + \alpha_2'(1 - 1/a_2'),\ 1/a_3 + \alpha_3(1 - 1/a_3)\}.$

Note that, for $1/a_2' + \alpha_2'(1 - 1/a_2') < x < D$, we have

$(p_2 - p_3)\, g\big(a_2[x - \alpha_2(1 - 1/a_2)]\big) = p_2 - p_3\, g\big(a_3[x - \alpha_3(1 - 1/a_3)]\big),$

which means that $p_2 \le p_3$. Now, if we have

$\alpha_2(1 - 1/a_2) < \alpha_3(1 - 1/a_3)$

and also

$\alpha_2'(1 - 1/a_2') < \alpha_3'(1 - 1/a_3'),$

then for

$\alpha_2(1 - 1/a_2) = \alpha_3'(1 - 1/a_3') < x < \min\{1/a_3 + \alpha_3(1 - 1/a_3),\ 1/a_3' + \alpha_3'(1 - 1/a_3')\}$

we obtain

(2.32) $(p_2 - p_3)\, g\big(a_2[x - \alpha_2(1 - 1/a_2)]\big) = p_2\, g\big(a_2'[x - \alpha_2'(1 - 1/a_2')]\big) - p_3\, g\big(a_3[x - \alpha_3(1 - 1/a_3)]\big).$

Since $A_2 = A_3'$, we have $p_2 = p_3$. If $\alpha_2(1 - 1/a_2) = \alpha_3(1 - 1/a_3)$, then it follows from (2.32) that $A_2 = A_3$, which is not possible. Also, if

$\alpha_2(1 - 1/a_2) \le \alpha_3(1 - 1/a_3) \quad\text{and}\quad \alpha_2'(1 - 1/a_2') \le \alpha_3(1 - 1/a_3),$

then either one of these is equal to $\alpha_3(1 - 1/a_3)$, in which case $A_2 = A_3$ (a contradiction), or each one is less than $\alpha_3(1 - 1/a_3)$, so that for $\alpha_2(1 - 1/a_2) < x < \alpha_3(1 - 1/a_3)$ we have

$(p_2 - p_3)\, g\big(a_2[x - \alpha_2(1 - 1/a_2)]\big) = p_2\, g\big(a_2'[x - \alpha_2'(1 - 1/a_2')]\big),$

which means that $p_2 \ge p_3$. Thus, in this case, $p_2 = p_3$, so that

$g\big(a_2'[x - \alpha_2'(1 - 1/a_2')]\big) = 0$ for $x > \alpha_2'(1 - 1/a_2'),$

a contradiction. Thus, we have $p_2 = p_3$, $A_2 = A_2'$ and, consequently, from (2.18) we obtain

$g\big(a_3[x - \alpha_3(1 - 1/a_3)]\big) = g\big(a_3'[x - \alpha_3'(1 - 1/a_3')]\big),$

which is a contradiction, since $1/a_3 + \alpha_3(1 - 1/a_3) < 1/a_3' + \alpha_3'(1 - 1/a_3')$ in this case. Thus, the only possibility is that (i) does not occur and (ii) occurs, that is,

$M := 1/a_3 + \alpha_3(1 - 1/a_3) = 1/a_3' + \alpha_3'(1 - 1/a_3').$

Let

$\max\{1/a_2 + \alpha_2(1 - 1/a_2),\ 1/a_2' + \alpha_2'(1 - 1/a_2')\} < x < M.$

Then we infer from (2.18) that there exists $\delta > 0$ such that

$g\big(a_3[x - \alpha_3(1 - 1/a_3)]\big) = g\big(a_3'[x - \alpha_3'(1 - 1/a_3')]\big)$ for $M - \delta < x < M$.

Writing $y = x - M$ and $h(y) = g(y + 1)$, we have

$h(a_3 y) = h(a_3' y), \quad -\delta < y < 0.$

Note that $h(0) = 1$, and if $y < 0$, then $h(y) < 1$. For any $y < 0$, if $a_3 > a_3'$, then

$h(a_3 y) = h\!\left(a_3\Big(\frac{a_3'}{a_3}\Big)^{k} y\right) \to 1 \quad (k \to \infty),$

a contradiction. Thus, $a_3 \le a_3'$. Similarly, $a_3 \ge a_3'$, so that $a_3 = a_3'$, $\alpha_3 = \alpha_3'$. Hence $A_3 = A_3'$. It follows from (2.18) that

$g\big(a_2[x - \alpha_2(1 - 1/a_2)]\big) = g\big(a_2'[x - \alpha_2'(1 - 1/a_2')]\big)$

for $0 \le x \le 1$. Thus,

$\alpha_2(1 - 1/a_2) = \alpha_2'(1 - 1/a_2')$

and

$1/a_2 + \alpha_2(1 - 1/a_2) = 1/a_2' + \alpha_2'(1 - 1/a_2').$

It follows that $A_2 = A_2'$. ■

We can also prove the following theorem.

THEOREM 2.4. Suppose $\mu_1$ and $\mu_2$ are two probability measures on 2 x 2 stochastic matrices such that $S(\mu_1)$ and $S(\mu_2)$ consist of $n$ points. Suppose that

(i) $(w)\lim_{n\to\infty}\mu_1^n = (w)\lim_{n\to\infty}\mu_2^n = \lambda$;

(ii) $S(\mu_1) = S(\mu_2) = \{A_1, A_2, \ldots, A_n\}$.

We assume: $t_i < t_{i+1}$. Then, if $n \le 4$, $\mu_1 = \mu_2$.

We omit the proof (which is simpler than that of Theorem 2.3).
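Theorem 2.4 can be illustrated by simulation: two different weight vectors on the same support (with distinct fixed points $t_i$ and $n \le 4$) should produce different limit distributions. The Python sketch below is illustrative only; the particular support, the weight vectors and the grid-iteration scheme are assumptions, not taken from the paper. It iterates the functional equation (1.3) on a grid for two weight vectors over a common 3-point support and compares the resulting approximations of $g$.

```python
import numpy as np

def iterate_g(support, weights, n_grid=2001, n_iter=60):
    """Iterate the functional equation (1.3) on a grid, starting from g_0(x) = x.
    Repeated application drives the starting distribution function toward the
    distribution function g of the limit.  Assumes the support points are
    ordered by their fixed points t_i."""
    x, y = np.array(support).T
    a = 1.0 / (x - y)
    t = y / (1.0 - (x - y))
    alpha = (t - t[0]) / (t[-1] - t[0])
    grid = np.linspace(0.0, 1.0, n_grid)
    g = grid.copy()
    for _ in range(n_iter):
        g_new = np.zeros_like(grid)
        for p, ai, al in zip(weights, a, alpha):
            arg = ai * grid - ai * al + al
            # g is 0 to the left of 0 and 1 to the right of 1; interpolate in between
            g_new += p * np.interp(arg, grid, g, left=0.0, right=1.0)
        g = g_new
    return grid, g

support = [(3/8, 5/24), (5/6, 1/6), (19/24, 5/8)]     # a common 3-point support (illustrative)
grid, g1 = iterate_g(support, [1/6, 2/3, 1/6])
_,    g2 = iterate_g(support, [1/3, 1/3, 1/3])        # different weights, same support
print(np.max(np.abs(g1 - g2)))   # clearly positive: the two limits differ
```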

REFERENCES

[1] S. Dhar, A. Mukherjea and J. S. Ratti, A non-linear functional equation arising from convolution iterates of a probability measure on 2 x 2 stochastic matrices, J. Nonlinear Analysis 36 (2) (1999), pp. 151-176.

[2] A. Mukherjea, Limit theorems: stochastic matrices, ergodic Markov chains, and measures on semigroups, in: Probabilistic Analysis and Related Topics, Vol. 2, Academic Press, 1979, pp. 143-203.

[3] M. Rosenblatt, Markov Processes: Structure and Asymptotic Behavior, Springer, 1971.

Department of Mathematics
University of South Florida
Tampa, FL 33620-5700, U.S.A.

Received on 4.11.1999