Efficient Generation of Random Bits from Finite State Markov Chains


Efficient Generation of Random Bits from Finite State Markov Chains
Hongchao Zhou and Jehoshua Bruck, Fellow, IEEE

Abstract: The problem of random number generation from an uncorrelated random source (of unknown probability distribution) dates back to von Neumann's 1951 work. Elias (1972) generalized von Neumann's scheme and showed how to achieve optimal efficiency in unbiased random bits generation. Hence, a natural question is: what if the sources are correlated? Both Elias and Samuelson proposed methods for generating unbiased random bits in the case of correlated sources (of unknown probability distribution); specifically, they considered finite Markov chains. However, their proposed methods are not efficient or have implementation difficulties. Blum (1986) devised an algorithm for efficiently generating random bits from degree-2 finite Markov chains in expected linear time; however, his beautiful method is still far from optimality in information efficiency. In this paper, we generalize Blum's algorithm to arbitrary-degree finite Markov chains and combine it with Elias's method for efficient generation of unbiased bits. As a result, we provide the first known algorithm that generates unbiased random bits from an arbitrary finite Markov chain, operates in expected linear time and achieves the information-theoretic upper bound on efficiency.

Index Terms: Random sequence, random bits generation, Markov chain.

I. INTRODUCTION

The problem of random number generation dates back to von Neumann [9], who considered the problem of simulating an unbiased coin by using a biased coin with unknown probability. He observed that when one focuses on a pair of coin tosses, the events HT and TH have the same probability (H is for "head" and T is for "tail"); hence, HT produces the output symbol 0 and TH produces the output symbol 1. The other two possible events, namely HH and TT, are ignored: they do not produce any output symbols.
More efficient algorithms for generating random bits from a biased coin were proposed by Hoeffding and Simons [7], Elias [4], Stout and Warren [17] and Peres [12]. Elias [4] was the first to devise an optimal procedure in terms of information efficiency; namely, the expected number of unbiased random bits generated per coin toss is asymptotically equal to the entropy of the biased coin. In addition, Knuth and Yao [8] presented a simple procedure for generating sequences with arbitrary probability distributions from an unbiased coin (the probability of H and T is 1/2). Han and Hoshi [5] generalized this approach and considered the case where the given coin has an arbitrary known bias.

(Manuscript received December 23, 2010; revised June 2, 2011; accepted October 4, 2011. This work was supported in part by the NSF Expeditions in Computing Program under grant CCF. This paper was presented in part at the IEEE International Symposium on Information Theory (ISIT), Austin, Texas, June 2010. Hongchao Zhou and Jehoshua Bruck are with the Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125, USA; e-mail: hzhou@caltech.edu; bruck@caltech.edu. DOI 10.1109/TIT. Copyright (c) 2011 IEEE.)

In this paper, we study the problem of generating random bits from an arbitrary and unknown finite Markov chain (the transition matrix is unknown). The input to our problem is a sequence of symbols that represent a random trajectory through the states of the Markov chain; given this input sequence, our algorithm generates an independent unbiased binary sequence called the output sequence. This problem was first studied by Samuelson [14]. His approach was to focus on a single state (ignoring the other states) and treat the transitions out of this state as the input process, hence reducing the problem of correlated sources to the problem of a single independent random source; obviously, this method is not efficient.
Elias [4] suggested utilizing the sequences related to all states: producing an independent output sequence from the transitions out of every state and then pasting (concatenating) the collection of output sequences to generate a long output sequence. However, neither Samuelson nor Elias proved that their methods work for arbitrary Markov chains; namely, they did not prove that the transitions out of each state are independent. In fact, Blum [2] probably realized this, as he mentioned that: (i) Elias's algorithm is excellent, but certain difficulties arise in trying to use it (or the original von Neumann scheme) to generate bits in expected linear time from a Markov chain, and (ii) "Elias has suggested a way to use all the symbols produced by a MC (Markov chain). His algorithm approaches the maximum possible efficiency for a one-state MC. For a multi-state MC, his algorithm produces arbitrarily long finite sequences. He does not, however, show how to paste these finite sequences together to produce infinitely long independent unbiased sequences." Blum [2] derived a beautiful algorithm to generate random bits from a degree-2 Markov chain in expected linear time by utilizing the von Neumann scheme for generating random bits from biased coin flips. While his approach can be extended to arbitrary out-degrees (the general Markov chain model used in this paper), the information efficiency is still far from optimal due to the low information efficiency of the von Neumann scheme. In this paper, we generalize Blum's algorithm to arbitrary-degree finite Markov chains and combine it with existing methods for efficient generation of unbiased bits from biased coins, such as Elias's method. As a result, we provide the first known algorithm that generates unbiased random bits from arbitrary finite Markov chains, operates in expected linear time and achieves the information-theoretic upper bound on efficiency. Specifically, we propose an algorithm (that we call Algorithm

A), which is a simple modification of Elias's suggestion for generating random bits; it operates on finite sequences and its efficiency can asymptotically reach the information-theoretic upper bound for long input sequences. In addition, we propose a second algorithm, called Algorithm B, which is a combination of Blum's and Elias's algorithms; it generates infinitely long sequences of random bits in expected linear time. One of our key ideas for generating random bits is that we explore equal-probability sequences of the same length. Hence, a natural question is: can we improve the efficiency by utilizing as many equal-probability sequences as possible? We provide a positive answer to this question and describe Algorithm C, which is the first known polynomial-time and optimal algorithm (optimal in terms of information efficiency for an arbitrary input length) for random bit generation from finite Markov chains.

In this paper, we use the following notations:

x_a : the a-th element of X
X[a] : same as x_a, the a-th element of X
X[a:b] : the subsequence of X from the a-th to the b-th element
X^a : X[1:a]
XY : the concatenation of X and Y; e.g., s_1 s_2 ∗ s_2 s_1 = s_1 s_2 s_2 s_1
Y ≡ X : Y is a permutation of X; e.g., s_1 s_2 s_2 s_3 ≡ s_3 s_2 s_2 s_1
Y ≐ X : Y is a permutation of X and y_|Y| = x_|X|, namely the last element is fixed; e.g., s_1 s_2 s_2 s_3 ≐ s_2 s_2 s_1 s_3, where s_3 is fixed

The remainder of this paper is organized as follows. Section II reviews existing schemes for generating random bits from arbitrarily biased coins. Section III discusses the challenge in generating random bits from arbitrary finite Markov chains and presents our main lemma, which characterizes the exit sequences of Markov chains. Algorithm A is presented and analyzed in Section IV; it is related to Elias's ideas for generating random bits from Markov chains. Algorithm B is presented in Section V; it is a generalization of Blum's algorithm. An optimal algorithm, called Algorithm C, is described in Section VI.
Finally, Section VII provides numerical evaluations of our algorithms.

II. GENERATING RANDOM BITS FROM BIASED COINS

Consider a sequence of length N generated by a biased n-face coin, X = x_1 x_2 ... x_N ∈ {s_1, s_2, ..., s_n}^N, such that the probability to get s_i is p_i, with ∑_{i=1}^n p_i = 1. While we are given such a sequence X, the probabilities p_1, p_2, ..., p_n are unknown. The question is: how can we efficiently generate an independent and unbiased sequence of 0s and 1s from X? The definition of efficiency for a generation algorithm is given as follows and will be used throughout this paper.

Definition 1. Let X be a random sequence in {s_1, s_2, ..., s_n}^N and let Ψ : {s_1, s_2, ..., s_n}^N → {0,1}^* be an algorithm generating random bits from X. Then, given X, the efficiency (information efficiency) of Ψ is defined as the ratio between the expected length of the output sequence and the length of the input sequence, i.e.,

η = E[|Ψ(X)|] / N.

In this section we describe three existing solutions for the problem of random bit generation from biased coins.

A. The von Neumann Scheme

In 1951, von Neumann [9] considered this question for biased coins and described a simple procedure for generating an independent unbiased binary sequence z_1 z_2 ... from the input sequence X = x_1 x_2 .... In his original procedure the coin is binary; however, it can be simply generalized to the case of an n-face coin. We divide the input sequence into pairs x_1 x_2, x_3 x_4, ... and use the following mapping for each pair:

s_i s_j (i < j) → 0,    s_i s_j (i > j) → 1,    s_i s_i → ϕ,

where ϕ denotes the empty sequence. As a result, by concatenating the outputs of all the pairs, we get a binary sequence which is independent and unbiased. The von Neumann scheme is computationally (very) fast; however, its information efficiency is far from optimal. For example, when the input sequence is binary, the probability for a pair of input bits to generate an output bit (rather than ϕ) is 2p_1p_2; hence the efficiency is p_1p_2, which is 1/4 at p_1 = p_2 = 1/2 and less elsewhere.
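The pairwise mapping above is easy to prototype. The following sketch is our own illustration (not code from the paper) of the generalized von Neumann scheme, with the n faces encoded as comparable values such as integers:

```python
def von_neumann(seq):
    """Generalized von Neumann scheme for an n-face coin.
    Scan non-overlapping pairs (a, b): output 0 if a < b, 1 if a > b,
    and nothing (the empty sequence) if a == b."""
    out = []
    for k in range(0, len(seq) - 1, 2):
        a, b = seq[k], seq[k + 1]
        if a != b:
            out.append(0 if a < b else 1)
    return out
```

For instance, von_neumann([1, 2, 2, 1, 3, 3]) returns [0, 1]: the pairs (1, 2), (2, 1) and (3, 3) map to 0, 1 and ϕ respectively.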
B. The Elias Scheme

In 1972, Elias [4] proposed an optimal (in terms of efficiency) algorithm as a generalization of the von Neumann scheme; for the sake of completeness we describe it here. Elias's method is based on the following idea: the n^N possible input sequences of length N can be partitioned into classes such that all the sequences in the same class have the same number of s_k's for each 1 ≤ k ≤ n. Note that for every class, the members of the class have the same probability to be generated. For example, let n = 2 and N = 4; we can divide the n^N = 16 possible input sequences into 5 classes:

S_0 = {s_1 s_1 s_1 s_1}
S_1 = {s_1 s_1 s_1 s_2, s_1 s_1 s_2 s_1, s_1 s_2 s_1 s_1, s_2 s_1 s_1 s_1}
S_2 = {s_1 s_1 s_2 s_2, s_1 s_2 s_1 s_2, s_1 s_2 s_2 s_1, s_2 s_1 s_1 s_2, s_2 s_1 s_2 s_1, s_2 s_2 s_1 s_1}
S_3 = {s_1 s_2 s_2 s_2, s_2 s_1 s_2 s_2, s_2 s_2 s_1 s_2, s_2 s_2 s_2 s_1}
S_4 = {s_2 s_2 s_2 s_2}

Now, our goal is to assign a string of bits (the output) to each possible input sequence, such that any two output sequences Y and Y' with the same length (say k) have the same probability to be generated, namely c_k/2^k for some c_k ≥ 0. The idea is that for any given class we partition the members of the class into sets whose sizes are powers of 2; to a set with 2^i members (for some i) we assign the 2^i binary strings of length i. Note that when the class size is odd we have to exclude one member

of this class. We now demonstrate the idea by continuing the example above. Note that we cannot assign any bits to the single sequence in S_0; so if the input sequence is s_1 s_1 s_1 s_1, the output sequence is ϕ (the empty sequence). There are 4 sequences in S_1 and we assign the binary strings as follows:

s_1 s_1 s_1 s_2 → 00,  s_1 s_1 s_2 s_1 → 01,
s_1 s_2 s_1 s_1 → 10,  s_2 s_1 s_1 s_1 → 11.

Similarly, for S_2, there are 6 sequences that can be divided into a set of 4 and a set of 2:

s_1 s_1 s_2 s_2 → 00,  s_1 s_2 s_1 s_2 → 01,
s_1 s_2 s_2 s_1 → 10,  s_2 s_1 s_1 s_2 → 11,
s_2 s_1 s_2 s_1 → 0,   s_2 s_2 s_1 s_1 → 1.

In general, for a class with W members that have not been assigned yet, assign the 2^j possible output binary sequences of length j to 2^j distinct unassigned members, where 2^j ≤ W < 2^{j+1}. Repeat this procedure for the rest of the members that were not assigned. Note that when a class has an odd number of members, there will be one and only one member assigned to ϕ.

Given an input sequence X of length N, using the method above, the output sequence can be written as a function of X, denoted by Ψ_E(X) and called the Elias function. In [13], Ryabko and Matchikina showed that the Elias function of an input sequence of length N (generated by a biased coin with two faces) is computable in O(N log^3 N log log N) time. We can prove that their conclusion remains valid in the general case of a coin with n faces for any n > 2.
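The assignment above can be prototyped directly. The following brute-force sketch is our own illustration (it enumerates the whole equal-probability class, so it is exponential in the input length, whereas [13] gives a near-linear-time ranking method):

```python
from itertools import permutations

def elias(X):
    """Elias function (brute force): rank X within its class (all distinct
    rearrangements of its symbols, in sorted order), then assign fixed-length
    bit strings to maximal power-of-2 blocks, longest strings first; in a
    class of odd size exactly one member gets the empty string."""
    cls = sorted(set(permutations(X)))      # the equal-probability class of X
    r = cls.index(tuple(X))                 # rank of X within its class
    W = len(cls)
    offset = 0
    for j in range(W.bit_length() - 1, -1, -1):
        if W & (1 << j):                    # a block of 2**j members
            if r < offset + (1 << j):
                return format(r - offset, '0%db' % j) if j else ''
            offset += 1 << j
    return ''
```

This reproduces the N = 4 example above: elias(('s1','s1','s2','s2')) gives '00', elias(('s2','s1','s2','s1')) gives '0', and elias(('s1','s1','s1','s1')) gives the empty string.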
C. The Peres Scheme

In 1992, Peres [12] demonstrated that iterating the original von Neumann scheme on the discarded information can asymptotically achieve optimal efficiency. Let us define the function corresponding to the von Neumann scheme as Ψ_1 : {0,1}^* → {0,1}^*. The iterated procedures Ψ_v with v ≥ 2 are defined inductively. Given an input sequence x_1 x_2 ... x_{2m}, let i_1 < i_2 < ... < i_k denote all the integers i ≤ m for which x_{2i} = x_{2i−1}; then Ψ_v is defined as

Ψ_v(x_1, x_2, ..., x_{2m}) = Ψ_1(x_1, x_2, ..., x_{2m}) ∗ Ψ_{v−1}(x_1 ⊕ x_2, ..., x_{2m−1} ⊕ x_{2m}) ∗ Ψ_{v−1}(x_{2i_1}, ..., x_{2i_k}),

where ∗ denotes concatenation. Note that on the right-hand side of the equation above, the first term corresponds to the random bits generated by the von Neumann scheme, while the second and third terms correspond to the symmetric information discarded by the von Neumann scheme. For example, when the input sequence is X = 010011, the output sequence based on the von Neumann scheme is Ψ_1(010011) = 0, but based on the Peres scheme we have

Ψ_v(010011) = Ψ_1(010011) ∗ Ψ_{v−1}(100) ∗ Ψ_{v−1}(01),

which is 010, longer than that generated by the von Neumann scheme. Finally, we can define Ψ_v for sequences of odd length by

Ψ_v(x_1, x_2, ..., x_{2m+1}) = Ψ_v(x_1, x_2, ..., x_{2m}).

Surprisingly, this simple iterative procedure achieves the optimal efficiency asymptotically. The computational complexity and memory requirements of this scheme are substantially smaller than those of the Elias scheme. However, a drawback of this scheme is that its generalization to the case of an n-face coin with n > 2 is not obvious.
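The recursion Ψ_v can be sketched compactly. The following fully iterated version (our own illustration, corresponding to letting v grow until the recursion bottoms out) reproduces the example above:

```python
def peres(bits):
    """Fully iterated Peres scheme on a list of 0/1 values: von Neumann bits
    from the unequal pairs, then recurse on (i) the XORs of all pairs and
    (ii) the kept bits of the equal pairs; odd-length inputs drop the last bit."""
    if len(bits) < 2:
        return ''
    if len(bits) % 2:
        bits = bits[:-1]
    vn, xors, kept = [], [], []
    for i in range(0, len(bits), 2):
        a, b = bits[i], bits[i + 1]
        xors.append(a ^ b)
        if a != b:
            vn.append(str(a))      # von Neumann: 01 -> 0, 10 -> 1
        else:
            kept.append(a)         # discarded by von Neumann, reused here
    return ''.join(vn) + peres(xors) + peres(kept)
```

Here peres([0, 1, 0, 0, 1, 1]) returns '010', matching Ψ_v(010011) in the text, while the plain von Neumann scheme yields only '0'.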
D. Properties of the Schemes

Let Ψ : {s_1, s_2, ..., s_n}^N → {0,1}^* denote a scheme that generates independent unbiased sequences from any biased coin (with unknown probabilities); Ψ can be the von Neumann scheme, the Elias scheme, the Peres scheme, or any other such scheme. Let X be a sequence of length N generated by an arbitrary biased coin. A property of Ψ is that for any Y ∈ {0,1}^* and Y' ∈ {0,1}^* with |Y| = |Y'|, we have P[Ψ(X) = Y] = P[Ψ(X) = Y']; namely, two output sequences of equal length have equal probability. This leads to the following property of Ψ: given the numbers of s_i's for all i with 1 ≤ i ≤ n, the number of input sequences yielding a binary sequence Y equals the number yielding Y' whenever Y and Y' have the same length. It further implies that, conditioned on knowing the numbers of s_i's for all i with 1 ≤ i ≤ n, the output sequence of Ψ is still independent and unbiased. This property is due to the linear independence of the probability functions of the sequences with different numbers of the s_i's.

Lemma 1. Let S_{k_1,k_2,...,k_n} be the subset of {s_1, s_2, ..., s_n}^N consisting of all sequences with k_i appearances of s_i for all 1 ≤ i ≤ n, where k_1 + k_2 + ... + k_n = N. Let B_Y denote the set {X | Ψ(X) = Y}. Then for any Y ∈ {0,1}^* and Y' ∈ {0,1}^* with |Y| = |Y'|, we have |S_{k_1,k_2,...,k_n} ∩ B_Y| = |S_{k_1,k_2,...,k_n} ∩ B_{Y'}|.

Proof: In S_{k_1,k_2,...,k_n}, each sequence has k_i appearances of s_i for all 1 ≤ i ≤ n. Given a biased coin with n faces, the probability of generating any particular sequence in S_{k_1,k_2,...,k_n} is

β_{k_1,k_2,...,k_n}(p_1, p_2, ..., p_n) = ∏_{i=1}^n p_i^{k_i},

where p_i is the probability to get s_i with the biased coin. Then the probability of generating Y is

∑_{k_1+k_2+...+k_n=N} |S_{k_1,...,k_n} ∩ B_Y| β_{k_1,...,k_n}(p_1, ..., p_n),

and the probability of generating Y' is

∑_{k_1+k_2+...+k_n=N} |S_{k_1,...,k_n} ∩ B_{Y'}| β_{k_1,...,k_n}(p_1, ..., p_n).

Since Ψ generates unbiased random sequences, we have P[Ψ(X) = Y] = P[Ψ(X) = Y']. As a result,

∑_{k_1+...+k_n=N} (|S_{k_1,...,k_n} ∩ B_Y| − |S_{k_1,...,k_n} ∩ B_{Y'}|) β_{k_1,...,k_n}(p_1, ..., p_n) = 0.

The set of polynomials {β_{k_1,...,k_n}(p_1, ..., p_n)} is linearly independent in the vector space of functions on {(p_1, ..., p_n) ∈ [0,1]^n | p_1 + p_2 + ... + p_n = 1}, so we can conclude that |S_{k_1,k_2,...,k_n} ∩ B_Y| = |S_{k_1,k_2,...,k_n} ∩ B_{Y'}|.

III. SOME PROPERTIES OF MARKOV CHAINS

Our goal is to efficiently generate random bits from a Markov chain with unknown transition probabilities. In our model, a Markov chain generates the sequence of states that it visits, and this sequence of states is the input to our algorithm for generating random bits. Specifically, we express an input sequence as X = x_1 x_2 ... x_N with x_i ∈ {s_1, s_2, ..., s_n}, where s_1, s_2, ..., s_n are the states of the Markov chain.

One idea is that, for a given Markov chain, we can treat each state, say s_i, as a coin and consider the next states (the states the chain has transitioned to after being at state s_i) as the results of tosses of that coin. Namely, we can generate a collection of sequences π(X) = [π_1(X), π_2(X), ..., π_n(X)], called exit sequences, where π_i(X) is the sequence of states following s_i in X; namely,

π_i(X) = {x_{j+1} | x_j = s_i, 1 ≤ j < N}.

For example, assume that the input sequence is

X = s_1 s_4 s_2 s_1 s_3 s_2 s_3 s_1 s_1 s_2 s_3 s_4 s_1.

Considering the states following each occurrence of s_1, s_2, s_3 and s_4, the exit sequences are:

π_1(X) = s_4 s_3 s_1 s_2
π_2(X) = s_1 s_3 s_3
π_3(X) = s_2 s_1 s_4
π_4(X) = s_2 s_1

Lemma 2 (Uniqueness). An input sequence X can be uniquely determined by x_1 and π(X).

Proof: Given x_1 and π(X), according to the work of Blum in [2], x_1 x_2 ... x_N can be uniquely constructed in the following way: initially, set the starting state as x_1; inductively, if x_i = s_k, then set x_{i+1} as the first element in π_k(X) and remove the first element of π_k(X). Finally, we can uniquely generate the sequence x_1 x_2 ... x_N.

Lemma 3 (Equal-probability). Two input sequences X = x_1 x_2 ... x_N and Y = y_1 y_2 ... y_N with x_1 = y_1 have the same probability to be generated if π_i(X) ≡ π_i(Y) for all 1 ≤ i ≤ n.

Proof: Note that the probability of generating X is

P[X] = P[x_1] P[x_2 | x_1] ... P[x_N | x_{N−1}],

and the probability of generating Y is

P[Y] = P[y_1] P[y_2 | y_1] ... P[y_N | y_{N−1}].

By permuting the terms in the expressions above, it is not hard to see that P[X] = P[Y] if x_1 = y_1 and π_i(X) ≡ π_i(Y) for all 1 ≤ i ≤ n. Basically, the exit sequences describe the edges that are used in the trajectory in the Markov chain; the edges in the trajectories that correspond to X and Y are identical, hence P[X] = P[Y].

In [14], Samuelson considered a two-state Markov chain, and he pointed out that one may generate unbiased random bits by applying the von Neumann scheme to the exit sequence of state s_1. Later, in [4], in order to increase the efficiency, Elias suggested a scheme that uses all the symbols produced by a Markov chain. His main idea was to create the final output sequence by concatenating the output sequences that correspond to π_1(X), π_2(X), .... However, neither Samuelson nor Elias proved that their methods produce random output sequences that are independent and unbiased. In fact, their proposed methods are not correct for some cases. To demonstrate this, we consider: (1) Ψ(π_1(X)) as the final output; (2) Ψ(π_1(X)) Ψ(π_2(X)) ... as the final output. For example, consider the two-state Markov chain in which P[s_2 | s_1] = p_1 and P[s_1 | s_2] = p_2, as shown in Fig. 1.

[Fig. 1. An example of a Markov chain with two states: transitions s_1 → s_2 with probability p_1, s_1 → s_1 with probability 1 − p_1, s_2 → s_1 with probability p_2, and s_2 → s_2 with probability 1 − p_2.]

Assume that an input sequence of length N = 4 is generated from this Markov chain and the starting state is s_1; then the probabilities of the possible input sequences and their corresponding output sequences are given in Table I. In the table we can see that the probabilities to produce 0 or 1 are different for some p_1 and p_2 in both methods, presented in columns 3 and 4, respectively.

The problem of generating random bits from an arbitrary Markov chain is challenging, as Blum said in [2]: "Elias's algorithm is excellent, but certain difficulties arise in trying to use it (or the original von Neumann scheme) to generate random bits in expected linear time from a Markov chain." It seems that the exit sequence of a state is independent, since each exit of the state will not affect the other exits. However, this is not always true when the length of the input sequence

is given, say N.

TABLE I
PROBABILITIES OF EXIT SEQUENCES: AN EXAMPLE WHERE SIMPLE CONCATENATION DOES NOT WORK.

Input sequence | Probability | Ψ(π_1(X)) | Ψ(π_1(X)) Ψ(π_2(X))
s_1 s_1 s_1 s_1 | (1 − p_1)^3 | ϕ | ϕ
s_1 s_1 s_1 s_2 | (1 − p_1)^2 p_1 | 0 | 0
s_1 s_1 s_2 s_1 | (1 − p_1) p_1 p_2 | 0 | 0
s_1 s_1 s_2 s_2 | (1 − p_1) p_1 (1 − p_2) | 0 | 0
s_1 s_2 s_1 s_1 | p_1 p_2 (1 − p_1) | 1 | 1
s_1 s_2 s_1 s_2 | p_1^2 p_2 | ϕ | ϕ
s_1 s_2 s_2 s_1 | p_1 (1 − p_2) p_2 | ϕ | 1
s_1 s_2 s_2 s_2 | p_1 (1 − p_2)^2 | ϕ | ϕ

Let us still consider the example of the two-state Markov chain in Fig. 1. Assume the starting state of this Markov chain is s_1. If p_1 > 0, then with non-zero probability we have π_1(X) = s_1 s_1 ... s_1 of length N − 1, but it is impossible to have π_1(X) = s_2 s_2 ... s_2 of length N − 1. That means π_1(X) is not an independent sequence. The main reason is that although each exit of a state will not affect the other exits, it will affect the length of the exit sequence. In fact, π_1(X) is an independent sequence if the length of π_1(X) is given, instead of giving the length of X.

In this paper, we consider this problem from another perspective. According to Lemma 3, permuting the exit sequences does not change the probability of a sequence; however, the permuted sequences have to correspond to a trajectory in the Markov chain, and in some cases they do not. Consider the following example:

X = s_1 s_4 s_2 s_1 s_3 s_2 s_3 s_1 s_1 s_2 s_3 s_4 s_1,

with

π(X) = [s_4 s_3 s_1 s_2, s_1 s_3 s_3, s_2 s_1 s_4, s_2 s_1].

If we permute the last exit sequence s_2 s_1 to s_1 s_2, we cannot get a new sequence whose starting state is s_1 and whose exit sequences are

[s_4 s_3 s_1 s_2, s_1 s_3 s_3, s_2 s_1 s_4, s_1 s_2].

This can be verified by attempting to construct the sequence using Blum's method (which is given in the proof of Lemma 2).
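This check is mechanical. The following sketch is our own illustration (states represented by the integers 1 to 4): it computes exit sequences and attempts Blum's reconstruction from the proof of Lemma 2, returning None when the given collection of exit sequences is infeasible:

```python
def exit_sequences(X, states):
    """Compute the exit sequences pi_i(X): the states following each
    occurrence of state s in X (the last symbol has no successor)."""
    pi = {s: [] for s in states}
    for j in range(len(X) - 1):
        pi[X[j]].append(X[j + 1])
    return pi

def reconstruct(x1, pi):
    """Blum's reconstruction (proof of Lemma 2): starting from x1,
    repeatedly consume the first unused element of the current state's
    exit sequence. Returns None if the current state's exit sequence is
    exhausted while unused exits remain elsewhere (infeasible case)."""
    pi = {s: list(seq) for s, seq in pi.items()}   # work on copies
    X, cur = [x1], x1
    while any(pi.values()):
        if not pi[cur]:
            return None
        cur = pi[cur].pop(0)
        X.append(cur)
    return X
```

Running reconstruct on the original exit sequences recovers X exactly; with the last exit sequence permuted to (s_1, s_2) it gets stuck and returns None, confirming the infeasibility claimed above.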
Notice that if we permute the first exit sequence s_4 s_3 s_1 s_2 into s_1 s_2 s_3 s_4, we can find such a new sequence, which is

Y = s_1 s_1 s_2 s_1 s_3 s_2 s_3 s_1 s_4 s_2 s_3 s_4 s_1.

This observation motivated us to study the characterization of exit sequences that are feasible in Markov chains (or finite state machines).

Definition 2 (Feasibility). Given a Markov chain, a starting state s_α and a collection of sequences Λ = [Λ_1, Λ_2, ..., Λ_n], we say that (s_α, Λ) is feasible if and only if there exists a sequence X that corresponds to a trajectory in the Markov chain such that x_1 = s_α and π(X) = Λ.

Based on the definition of feasibility, we present the main technical lemma of the paper. Repeating the notation from the beginning of the paper, we say that a sequence Y is a tail-fixed permutation of X, denoted Y ≐ X, if and only if (1) Y is a permutation of X, and (2) X and Y have the same last element, namely, y_|Y| = x_|X|.

Lemma 4 (Main Lemma: Feasibility and equivalence of exit sequences). Given a starting state s_α and two collections of sequences Λ = [Λ_1, Λ_2, ..., Λ_n] and Γ = [Γ_1, Γ_2, ..., Γ_n] such that Λ_i ≐ Γ_i (tail-fixed permutation) for all 1 ≤ i ≤ n, (s_α, Λ) is feasible if and only if (s_α, Γ) is feasible.

The proof of this main lemma is given in the Appendix. According to the main lemma, we have the following equivalent statement.

Lemma 5 (Feasible permutations of exit sequences). Let X = x_1 x_2 ... x_N with x_N = s_χ be a sequence produced by a Markov chain, and let [Λ_1, Λ_2, ..., Λ_n] be an arbitrary collection of sequences that corresponds to the exit sequences of X as follows:
1) Λ_i is a permutation (≡) of π_i(X) for i = χ;
2) Λ_i is a tail-fixed permutation (≐) of π_i(X) for i ≠ χ.
Then there exists a feasible sequence X' = x'_1 x'_2 ... x'_N such that x'_1 = x_1 and π(X') = [Λ_1, Λ_2, ..., Λ_n]. For this X', we have x'_N = x_N.

One might reason that Lemma 5 is stronger than the main lemma (Lemma 4). However, we will show that these two lemmas are equivalent.
It is obvious that if the statement in Lemma 5 is true, then the main lemma is also true. We now show that if the main lemma is true, then the statement in Lemma 5 is also true.

Proof: Given X = x_1 x_2 ... x_N, let us add one more symbol s_{n+1} to the end of X (s_{n+1} is different from all the states in X); then we get a new sequence x_1 x_2 ... x_N s_{n+1}, whose exit sequences are

[π_1(X), π_2(X), ..., π_χ(X) s_{n+1}, ..., π_n(X), ϕ].

According to the main lemma, we know that there exists another sequence x'_1 x'_2 ... x'_N x'_{N+1} such that its exit sequences are

[Λ_1, Λ_2, ..., Λ_χ s_{n+1}, ..., Λ_n, ϕ]

6 6 and x = x. Definitey, the ast symbo of this sequence is s n+, i.e., x + = s n+. As a resut, we have x = s χ. ow, by removing the ast eement from x x 2...x x +, we can get a new sequence x = x x 2...x such that its exit sequences are [Λ, Λ 2,..., Λ χ,..., Λ n ] and x = x. We aso have x = s χ. This competes the proof. We demonstrate the resut above by considering the exampe at the beginning of this section. Let X = s s 4 s 2 s s 3 s 2 s 3 s s s 2 s 3 s 4 s with χ = and its exit sequences are given by [s 4 s 3 s s 2, s s 3 s 3, s 2 s s 4, s 2 s ] After permutating a the exit sequences (for i, we keep the ast eement of the i th sequence fixed), we get a new group of exit sequences [s s 2 s 3 s 4, s 3 s s 3, s s 2 s 4, s 2 s ] Based on these new exit sequences, we can generate a new input sequence X = s s s 2 s 3 s s 3 s 2 s s 4 s 2 s 3 s 4 s This accords with the statements above. IV. ALGORITHM A : MODIFICATIO OF ELIAS S SUGGESTIO Eias suggested to generate random bits from an arbitrary Markov chain by concatenating the outputs of different exit sequences. In the above section, we showed that direct concatenation cannot aways work. This motivates us to derive Agorithm A, which is a simpe modification of Eias s suggestion and is abe to generate random bits from any Markov chain efficienty. Agorithm A Input: A sequence X = x x 2...x produced by a Markov chain, where x i S = {s, s 2,..., s n }. Output: A sequence Y of 0 s and s. Main Function: Suppose x = s χ. for i := to n do if i = χ then Output Ψ(π i (X)). ese Output Ψ(π i (X) π i(x) ) end if end for Comment: () Ψ(X) can be any scheme that generates random bits from biased coins. For exampe, we can use the Eias function. (2) When i = χ, we can aso output Ψ(π i (X) π i(x) ) for simpicity, but the efficiency may be reduced a itte. The ony difference between Agorithm A and direct concatenation is that: Agorithm A ignores the ast symbos of some exit sequences. 
Let us go back to the example of the two-state Markov chain with P[s_2 | s_1] = p_1 and P[s_1 | s_2] = p_2 in Fig. 1, which demonstrates that direct concatenation does not always work well. Still assuming that an input sequence of length N = 4 is generated from this Markov chain with starting state s_1, the probability of each possible input sequence and its corresponding output sequence (based on Algorithm A) are:

Input sequence | Probability | Output sequence
s_1 s_1 s_1 s_1 | (1 − p_1)^3 | ϕ
s_1 s_1 s_1 s_2 | (1 − p_1)^2 p_1 | ϕ
s_1 s_1 s_2 s_1 | (1 − p_1) p_1 p_2 | 0
s_1 s_1 s_2 s_2 | (1 − p_1) p_1 (1 − p_2) | ϕ
s_1 s_2 s_1 s_1 | p_1 p_2 (1 − p_1) | 1
s_1 s_2 s_1 s_2 | p_1^2 p_2 | ϕ
s_1 s_2 s_2 s_1 | p_1 (1 − p_2) p_2 | ϕ
s_1 s_2 s_2 s_2 | p_1 (1 − p_2)^2 | ϕ

We can see that when the input sequence length is N = 4, a bit 0 and a bit 1 have the same probability of being generated and no longer sequences are generated. In this case, the output sequence is independent and unbiased. In order to prove that all the sequences generated by Algorithm A are independent and unbiased, we need to show that any two output sequences Y and Y' of the same length have the same probability of being generated.

Theorem 6 (Algorithm A). Let the sequence generated by a Markov chain be used as input to Algorithm A; then the output of Algorithm A is an independent unbiased sequence.

Proof: Let us first divide all the possible sequences in {s_1, s_2, ..., s_n}^N into classes, and use G to denote the set of these classes. Two sequences X and X' are in the same class if and only if
1) x_1 = x'_1 and x_N = x'_N = s_χ for some χ;
2) if i = χ, π_i(X') ≡ π_i(X);
3) if i ≠ χ, π_i(X') ≐ π_i(X).

Let Ψ_A denote Algorithm A. For Y ∈ {0,1}^*, let B_Y be the set of sequences X of length N such that Ψ_A(X) = Y. We show that for any S ∈ G, |S ∩ B_Y| = |S ∩ B_{Y'}| whenever |Y| = |Y'|. If S is empty, this conclusion is trivial; in the following, we consider only the case where S is not empty.
Now, given a class S, for i = χ let us define S_i as the set consisting of all the permutations of π_i(X) for X ∈ S, and for i ≠ χ let us define S_i as the set consisting of all the permutations of π_i(X)^{|π_i(X)|−1} for X ∈ S. For all 1 ≤ i ≤ n and Y_i ∈ {0,1}^*, we continue to define

S_i(Y_i) = {Λ_i ∈ S_i | Ψ(Λ_i) = Y_i},

which is the subset of S_i consisting of all sequences yielding Y_i. Based on Lemma 1, we know that |S_i(Y_i)| = |S_i(Y'_i)| whenever |Y_i| = |Y'_i|. This implies that |S_i(Y_i)| is a function of |Y_i|, which can be written as M_i(|Y_i|).

For any partition of Y into Y_1, Y_2, ..., Y_n such that Y_1 Y_2 ... Y_n = Y, we have the following conclusion: for all Λ_1 ∈ S_1(Y_1), Λ_2 ∈ S_2(Y_2), ..., Λ_n ∈ S_n(Y_n), we can always find a sequence X ∈ S ∩ B_Y such that π_i(X) = Λ_i for i = χ and π_i(X)^{|π_i(X)|−1} = Λ_i for all i ≠ χ. This conclusion is immediate from Lemma 5. As a result, we have

|S ∩ B_Y| = ∑_{Y_1 Y_2 ... Y_n = Y} ∏_{i=1}^n |S_i(Y_i)|.

Let l_1, l_2, ..., l_n be a group of nonnegative integers partitioning |Y|; then the formula above can be rewritten as

|S ∩ B_Y| = ∑_{l_1+l_2+...+l_n=|Y|} ∏_{i=1}^n M_i(l_i).

Similarly, we also have

|S ∩ B_{Y'}| = ∑_{l_1+l_2+...+l_n=|Y'|} ∏_{i=1}^n M_i(l_i),

which tells us that |S ∩ B_Y| = |S ∩ B_{Y'}| if |Y| = |Y'|. Note that all the sequences in the same class S have the same probability of being generated. So when |Y| = |Y'|, the probability of generating Y is

P[X ∈ B_Y] = ∑_{S∈G} P[S] P[X ∈ B_Y | X ∈ S] = ∑_{S∈G} P[S] |S ∩ B_Y| / |S| = ∑_{S∈G} P[S] |S ∩ B_{Y'}| / |S| = P[X ∈ B_{Y'}],

which implies that the output sequence is independent and unbiased.

Theorem 7 (Efficiency). Let X be a sequence of length N generated by a Markov chain, which is used as input to Algorithm A, and let Ψ in Algorithm A be the Elias function. Suppose the length of the output sequence is M; then the limiting efficiency η_N = E[M]/N as N → ∞ realizes the upper bound lim_{N→∞} H(X)/N.

Proof: Here, the upper bound is provided by Elias [4], and we can use the same argument as in Elias's paper [4] to prove this theorem. For all 1 ≤ i ≤ n, let X_i denote the next state following s_i in the Markov chain. Then X_i is a random variable on {s_1, s_2, ..., s_n} with distribution {p_{i1}, p_{i2}, ..., p_{in}}, where p_{ij} with 1 ≤ i, j ≤ n is the transition probability from state s_i to state s_j. The entropy of X_i is denoted by H(X_i). Let U = (u_1, u_2, ..., u_n) denote the stationary distribution of the Markov chain; then we have [3]

lim_{N→∞} H(X)/N = ∑_{i=1}^n u_i H(X_i).

As N → ∞, there exists an ϵ_N → 0 such that, with probability 1 − ϵ_N, |π_i(X)| > N(u_i − ϵ_N) for all 1 ≤ i ≤ n. Using Algorithm A, with probability 1 − ϵ_N the length M of the output sequence is bounded from below by

∑_{i=1}^n (1 − ϵ_N)(|π_i(X)| − 1) η_i,

where η_i is the efficiency of Ψ when the input is π_i(X) or π_i(X)^{|π_i(X)|−1}.
According to Theorem 2 in Elias's paper [4], we know that as |π_i(X)| → ∞, η_i → H(X_i). So with probability 1 − ϵ_N, the length M of the output sequence is bounded from below by

∑_{i=1}^n (1 − ϵ_N)(N(u_i − ϵ_N) − 1)(1 − ϵ_N) H(X_i).

Then we have

lim_{N→∞} E[M]/N ≥ lim_{N→∞} (1 − ϵ_N)^3 ∑_{i=1}^n ((u_i − ϵ_N) − 1/N) H(X_i) = lim_{N→∞} H(X)/N.

At the same time, E[M]/N is upper bounded by H(X)/N. So we get

lim_{N→∞} E[M]/N = lim_{N→∞} H(X)/N,

which completes the proof.

Given an input sequence, it is efficient to generate independent unbiased sequences using Algorithm A. However, it has some limitations: (1) the complete input sequence has to be stored; (2) for a long input sequence it is computationally intensive, as the cost depends on the input length; (3) the method works for finite-length sequences and does not lend itself to stream processing. In order to address these limitations, we propose two variants of Algorithm A.

In the first variant of Algorithm A, instead of applying Ψ directly to Λ_i = π_i(X) for i = χ (or Λ_i = π_i(X)^{|π_i(X)|−1} for i ≠ χ), we first split Λ_i into several segments with lengths k_{i1}, k_{i2}, ..., and then apply Ψ to each of the segments separately. It can be proved that this variant of Algorithm A generates independent unbiased sequences from an arbitrary Markov chain, as long as k_{i1}, k_{i2}, ... do not depend on the order of elements in each exit sequence. For example, we can split Λ_i into two segments of lengths ⌊|Λ_i|/2⌋ and ⌈|Λ_i|/2⌉; we can also split it into three segments of lengths (a, a, |Λ_i| − 2a), and so on. Generally, the shorter each segment is, the faster we can obtain the final output, but at the same time we may have to sacrifice a little information efficiency.

The second variant of Algorithm A is based on the following idea: for a given sequence from a Markov chain, we can split it into shorter sequences such that they are independent of each other; therefore we can apply Algorithm A to each of the sequences and then concatenate their output sequences together as the final output. In order to do this, given a sequence X = x_1 x_2 ..., we can use x_1 = s_α as a special state to split it.
For example, in practice, we can set a constant k; if there exists a minimal integer i such that x_i = s_α and i > k, then we can split X into two sequences x_1 x_2 ... x_i and x_i x_{i+1} ...

(note that both of the sequences contain the element x_i). For the second sequence x_i x_{i+1} ..., we can repeat the same procedure. Iteratively, we can split a sequence X into several sequences that are independent of each other. These sequences, with the exception of the last one, start and end with s_α, and their lengths are usually slightly longer than k.

V. ALGORITHM B: GENERALIZATION OF BLUM'S ALGORITHM

In [2], Blum proposed a beautiful algorithm to generate an independent unbiased sequence of 0's and 1's from any Markov chain by extending von Neumann's scheme. His algorithm can deal with infinitely long sequences and uses only constant space and expected linear time. The only drawback of his algorithm is that its efficiency is still far from the information-theoretic upper bound, due to the limitation (compared to the Elias algorithm) of von Neumann's scheme. In this section, we generalize Blum's algorithm by replacing von Neumann's scheme with Elias's. As a result, we get Algorithm B: it maintains the good properties of Blum's algorithm and its efficiency approaches the information-theoretic upper bound.

Algorithm B
Input: A sequence (or a stream) x_1 x_2 ... produced by a Markov chain, where x_i ∈ {s_1, s_2, ..., s_n}.
Parameter: n positive integer functions (window sizes) ϖ_i(k) with k ≥ 1 for all 1 ≤ i ≤ n.
Output: A sequence (or a stream) Y of 0's and 1's.
Main Function:
  E_i = ϕ (empty) for all 1 ≤ i ≤ n.
  k_i = 1 for all 1 ≤ i ≤ n.
  c: the index of the current state, namely, s_c = x_1.
  while the next input symbol is s_j (≠ null) do
    E_c = E_c s_j (add s_j to E_c).
    if |E_j| ≥ ϖ_j(k_j) then
      Output Ψ(E_j).
      E_j = ϕ.
      k_j = k_j + 1.
    end if
    c = j.
  end while

In the algorithm above, we apply the function Ψ to E_j to generate random bits if and only if the window for E_j is completely filled and the Markov chain is currently at state s_j.
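The main loop above can be rendered as a small streaming sketch. Here Ψ is passed in as a parameter: the paper instantiates it with Elias's function, and with ϖ_i(k) = 2 and von Neumann's Ψ the loop reduces to Blum's algorithm.

```python
def algorithm_b(stream, n, window, psi):
    """Streaming sketch of Algorithm B.

    stream : iterable of state indices in {0, ..., n-1}
    window : window-size function window(i, k), the paper's w_i(k)
    psi    : maps a list of symbols to a list of output items
    Yields psi's outputs as they are produced.
    """
    E = [[] for _ in range(n)]    # exit-sequence buffer E_i per state
    k = [1] * n                   # window index k_i per state
    it = iter(stream)
    c = next(it)                  # current state s_c = x_1
    for j in it:                  # next input symbol s_j
        E[c].append(j)            # add s_j to E_c
        # flush E_j only when its window is full AND the chain sits at s_j
        if len(E[j]) >= window(j, k[j]):
            yield from psi(E[j])
            E[j] = []
            k[j] += 1
        c = j
```

On the 9-symbol example discussed in this section (with ϖ ≡ 4), only the window of E_2 is flushed, illustrating why timing matters.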
For example, set ϖ_i(k) = 4 for all 1 ≤ i ≤ n and all k, and assume that the input sequence is

X = s_1 s_1 s_1 s_2 s_2 s_2 s_1 s_2 s_2.

After reading the second-to-last (8th) symbol s_2, we have

E_1 = s_1 s_1 s_2 s_2,    E_2 = s_2 s_2 s_1.

In this case, |E_1| ≥ 4, so the window for E_1 is full, but we do not apply Ψ to E_1 because the current state of the Markov chain is s_2, not s_1. After reading the last (9th) symbol s_2, we get

E_1 = s_1 s_1 s_2 s_2,    E_2 = s_2 s_2 s_1 s_2.

Since the current state of the Markov chain is s_2 and |E_2| ≥ 4, we output Ψ(E_2) = Ψ(s_2 s_2 s_1 s_2) and reset E_2 to ϕ. So, treating X as input to Algorithm B, we get the output sequence Ψ(s_2 s_2 s_1 s_2). The algorithm does not output Ψ(E_1) = Ψ(s_1 s_1 s_2 s_2) until the Markov chain reaches state s_1 again. Timing is crucial!

Note that Blum's algorithm is the special case of Algorithm B obtained by setting the window size functions ϖ_i(k) = 2 for all 1 ≤ i ≤ n and k ∈ {1, 2, ...}. Namely, Algorithm B is a generalization of Blum's algorithm; the key point is that when we increase the window sizes, we can apply more efficient schemes (compared to von Neumann's scheme) for Ψ.

Assume a sequence of N symbols X = x_1 x_2 ... x_N with x_N = s_χ has been read by the algorithm above. We want to show that for any N, the output sequence is always independent and unbiased. Unfortunately, Blum's proof for the case of ϖ_i(k) = 2 cannot be applied to our proposed scheme. For all 1 ≤ i ≤ n, we can write

π_i(X) = F_i1 F_i2 ... F_im_i E_i,

where the F_ij with 1 ≤ j ≤ m_i are the segments used to generate outputs. For all i, j, we have |F_ij| = ϖ_i(j) and

0 ≤ |E_i| < ϖ_i(m_i + 1) if i = χ,    0 < |E_i| ≤ ϖ_i(m_i + 1) otherwise.

See Fig. 2 for a simple illustration.

Fig. 2. The simplified expressions for the exit sequences of X.

Theorem 8 (Algorithm B). Let the sequence generated by a Markov chain be used as input to Algorithm B. Then Algorithm B generates an independent unbiased sequence of bits in expected linear time.

Proof: In the following proof, we use the same idea as in the proof for Algorithm A.
Let us first divide all the possible input sequences in {s_1, s_2, ..., s_n}^N into classes, and use G to denote the set consisting of all the classes. Two sequences X and X' are in the same class if and only if:
1) x_1 = x'_1 and x_N = x'_N.
2) For all 1 ≤ i ≤ n,
   π_i(X) = F_i1 F_i2 ... F_im_i E_i,
   π_i(X') = F'_i1 F'_i2 ... F'_im_i E'_i,
   where the F_ij and F'_ij are the segments used to generate outputs.

3) For all i, j, F_ij ≡ F'_ij (i.e., F'_ij is a permutation of F_ij).
4) For all i, E_i = E'_i.

Let Ψ_B denote Algorithm B. For Y ∈ {0, 1}*, let B_Y be the set of sequences X of length N such that Ψ_B(X) = Y. We show that for any S ∈ G, |S ∩ B_Y| = |S ∩ B_{Y'}| whenever |Y| = |Y'|. If S is empty, this conclusion is trivial; in the following, we only consider the case that S is not empty.

Now, given a class S, let S_ij be the set consisting of all the permutations of F_ij for X ∈ S. Given Y_ij ∈ {0, 1}*, we continue to define

S_ij(Y_ij) = {Λ_ij ∈ S_ij | Ψ(Λ_ij) = Y_ij}

for all 1 ≤ i ≤ n and 1 ≤ j ≤ m_i, which is the subset of S_ij consisting of all sequences yielding Y_ij. According to Lemma 1, we know that |S_ij(Y_ij)| = |S_ij(Y'_ij)| whenever |Y_ij| = |Y'_ij|. This implies that |S_ij(Y_ij)| is a function of |Y_ij|, which can be written as M_ij(|Y_ij|).

Let l_11, l_12, ..., l_1m_1, l_21, ..., l_nm_n be non-negative integers such that their sum is |Y|. We want to prove that

|S ∩ B_Y| = Σ_{l_11 + ... + l_nm_n = |Y|} Π_{i=1}^{n} Π_{j=1}^{m_i} M_ij(l_ij).

The proof is by induction. Let w = Σ_{i=1}^{n} m_i. First, the conclusion holds for w = 1. Assume the conclusion holds for some w ≥ 1; we want to prove that it also holds for w + 1. Note that for all 1 ≤ i ≤ n, if j_1 < j_2, then F_ij_1 generates an output before F_ij_2 in Algorithm B. So given an input sequence X ∈ S, the last segment that generates an output (the output can be an empty string) is F_im_i for some i with 1 ≤ i ≤ n. Now, we show that this i is fixed for all the sequences in S, i.e., the position of the last segment generating an output remains unchanged.

To prove this, given a sequence X ∈ S, consider the first a symbols of X, denoted X^a, such that the last segment F_im_i generates an output just after reading x_a when the input sequence is X. Based on our algorithm, X^a has the following properties:
1) The last symbol is x_a = s_i.
2) π_i(X^a) = F_i1 F_i2 ... F_im_i.
3) π_j(X^a) = F_j1 F_j2 ... F_jm_j Ẽ_j for j ≠ i, where |Ẽ_j| > 0.

Now, let us permute each segment of F_11, F_12, ..., F_nm_n to F'_11, F'_12, ..., F'_nm_n; then we get another sequence X' ∈ S.
According to Lemma 5, if we consider the first a symbols of X', i.e., X'^a, it has the same properties as X^a:
1) The last symbol is x'_a = s_i.
2) π_i(X'^a) = F'_i1 F'_i2 ... F'_im_i.
3) π_j(X'^a) = F'_j1 F'_j2 ... F'_jm_j Ẽ'_j for j ≠ i, where |Ẽ'_j| > 0.

This implies that when the input sequence is X', F'_im_i generates an output just after reading x'_a, and it is the last segment to do so. So we can conclude that for all the sequences in S, the last segments generating outputs are at the same position.

Let us fix the last segment F_im_i and assume that F_im_i generates the last l_im_i bits of Y. How many sequences in S ∩ B_Y have F_im_i as the last segment that generates an output? To get the answer, we concatenate F_im_i with E_i into a new E_i. As a result, we have Σ_{i=1}^{n} m_i − 1 = w segments to generate the first |Y| − l_im_i bits of Y. Based on our induction assumption, the number of such sequences is

Σ_{l_11 + ... + l_i(m_i − 1) + l_(i+1)1 + ... + l_nm_n = |Y| − l_im_i} Π_{(k,j) ≠ (i,m_i)} M_kj(l_kj),

where l_11, ..., l_i(m_i − 1), l_(i+1)1, ..., l_nm_n are non-negative integers. For each l_im_i, there are M_im_i(l_im_i) different choices for F_im_i. Therefore, |S ∩ B_Y| can be obtained by multiplying the number above by M_im_i(l_im_i) and summing over l_im_i. This yields the conclusion above. According to this conclusion, if |Y| = |Y'|, then |S ∩ B_Y| = |S ∩ B_{Y'}|. Using the same argument as in Theorem 6, we complete the proof of the theorem.

Normally, the window size functions ϖ_i(k) for 1 ≤ i ≤ n can be any positive integer functions. Here, we fix all the window size functions to a constant ϖ. By increasing the value of ϖ, we can increase the efficiency of the scheme, but at the same time it may cost more storage space and require more waiting time. It is helpful to analyze the relationship between scheme efficiency and window size ϖ.

Theorem 9 (Efficiency). Let X be a sequence of length N generated by a Markov chain with transition matrix P, which is used as input to Algorithm B with constant window size ϖ.
Then as the length N of the sequence goes to infinity, the limiting efficiency of Algorithm B is

η(ϖ) = Σ_{i=1}^{n} u_i η_i(ϖ),

where U = (u_1, u_2, ..., u_n) is the stationary distribution of the Markov chain, and η_i(ϖ) is the efficiency of Ψ when the input sequence of length ϖ is generated by an n-face coin with distribution (p_i1, p_i2, ..., p_in).

Proof: When N → ∞, there exists an ϵ_N → 0 such that with probability 1 − ϵ_N,

N(u_i − ϵ_N) < |π_i(X)| < N(u_i + ϵ_N)

for all 1 ≤ i ≤ n. The efficiency of Algorithm B can be written as η_N(ϖ), which satisfies

(1/N) Σ_{i=1}^{n} ⌊|π_i(X)|/ϖ⌋ η_i(ϖ) ϖ ≤ η_N(ϖ) ≤ (1/N) Σ_{i=1}^{n} ⌈|π_i(X)|/ϖ⌉ η_i(ϖ) ϖ.

With probability 1 − ϵ_N, we have

(1/N) Σ_{i=1}^{n} ⌊N(u_i − ϵ_N)/ϖ⌋ η_i(ϖ) ϖ ≤ η_N(ϖ) ≤ (1/N) Σ_{i=1}^{n} ⌈N(u_i + ϵ_N)/ϖ⌉ η_i(ϖ) ϖ.

So when N → ∞, we have

η(ϖ) = Σ_{i=1}^{n} u_i η_i(ϖ).

This completes the proof.

Let us define α(l) = Σ_k n_k 2^{n_k}, where Σ_k 2^{n_k} is the standard binary expansion of l. Assume Ψ is the Elias function; then

η_i(ϖ) = (1/ϖ) Σ_{k_1 + ... + k_n = ϖ} α(ϖ!/(k_1! k_2! ... k_n!)) p_i1^{k_1} p_i2^{k_2} ... p_in^{k_n}.

Based on this formula, we can numerically study the relationship between the limiting efficiency and the window size (see

Section VII). In fact, when the window size becomes large, the limiting efficiency approaches the information-theoretic upper bound.

VI. ALGORITHM C: AN OPTIMAL ALGORITHM

Both Algorithm A and Algorithm B are asymptotically optimal, but when the length of the input sequence is finite they may not be optimal. In this section, we construct an optimal algorithm, called Algorithm C, whose information efficiency is maximized when the length of the input sequence is finite. Before presenting this algorithm, following the idea of Pae and Loui [11], we first discuss the equivalent condition for a function f to generate random bits from an arbitrary Markov chain, and then present a sufficient condition for f to be optimal.

Lemma 10 (Equivalent condition). Let K = {k_ij} be an n × n non-negative integer matrix with Σ_{i=1}^{n} Σ_{j=1}^{n} k_ij = N − 1. We define S_(α,K) as

S_(α,K) = {X ∈ {s_1, s_2, ..., s_n}^N | k_j(π_i(X)) = k_ij, x_1 = s_α},

where k_j(X) is the number of occurrences of s_j in X. A function f : {s_1, s_2, ..., s_n}^N → {0, 1}* can generate random bits from an arbitrary Markov chain if and only if for any (α, K) and any two binary sequences Y and Y' with |Y| = |Y'|,

|S_(α,K) ∩ B_Y| = |S_(α,K) ∩ B_{Y'}|,

where B_Y = {X | X ∈ {s_1, s_2, ..., s_n}^N, f(X) = Y} is the set of sequences of length N that yield Y.

Proof: It is easy to see that if |S_(α,K) ∩ B_Y| = |S_(α,K) ∩ B_{Y'}| for all (α, K) and |Y| = |Y'|, then Y and Y' have the same probability of being generated. In this case, f can generate random bits from an arbitrary Markov chain. In the rest, we only need to prove the converse claim. If f can generate random bits from an arbitrary Markov chain, then P[f(X) = Y] = P[f(X) = Y'] for any two binary sequences Y and Y' of the same length.
Here, let p_ij be the transition probability from state s_i to state s_j for all 1 ≤ i, j ≤ n. We can write

P[f(X) = Y] = Σ_{α} Σ_{K ∈ G} |S_(α,K) ∩ B_Y| φ_K(p_11, p_12, ..., p_nn) P(x_1 = s_α),

where

G = {K | k_ij ∈ {0} ∪ Z^+, Σ_{i,j} k_ij = N − 1}

and

φ_K(p_11, p_12, ..., p_nn) = Π_{i=1}^{n} Π_{j=1}^{n} p_ij^{k_ij}.

Similarly,

P[f(X) = Y'] = Σ_{α} Σ_{K ∈ G} |S_(α,K) ∩ B_{Y'}| φ_K(p_11, p_12, ..., p_nn) P(x_1 = s_α).

As a result,

Σ_{α} Σ_{K ∈ G} (|S_(α,K) ∩ B_Y| − |S_(α,K) ∩ B_{Y'}|) φ_K(p_11, ..., p_nn) P(x_1 = s_α) = 0.

Since P(x_1 = s_α) can be any value in [0, 1], for all 1 ≤ α ≤ n we have

Σ_{K ∈ G} (|S_(α,K) ∩ B_Y| − |S_(α,K) ∩ B_{Y'}|) φ_K(p_11, ..., p_nn) = 0.

It can be proved that {φ_K(p_11, p_12, ..., p_nn)}_{K ∈ G} are linearly independent in the vector space of functions on the transition probabilities, namely on

{(p_11, p_12, ..., p_nn) | p_ij ∈ [0, 1], Σ_{j=1}^{n} p_ij = 1}.

Based on this fact, we can conclude that |S_(α,K) ∩ B_Y| = |S_(α,K) ∩ B_{Y'}| for all (α, K) if |Y| = |Y'|.

Let us define α(l) = Σ_k n_k 2^{n_k}, where Σ_k 2^{n_k} is the standard binary expansion of l. Then we have the following sufficient condition for an optimal function.

Lemma 11 (Sufficient condition for an optimal function). Let f be a function that generates random bits from an arbitrary Markov chain with unknown transition probabilities. If for any α and any n × n non-negative integer matrix K with Σ_{i=1}^{n} Σ_{j=1}^{n} k_ij = N − 1 the following equation is satisfied:

Σ_{X ∈ S_(α,K)} |f(X)| = α(|S_(α,K)|),

then f generates independent unbiased random bits with optimal information efficiency. Note that |f(X)| is the length of f(X) and |S_(α,K)| is the size of S_(α,K).

Proof: Let h denote an arbitrary function that is able to generate random bits from any Markov chain. According to Lemma 2.9 in [11], we know that

Σ_{X ∈ S_(α,K)} |h(X)| ≤ α(|S_(α,K)|).

Then the average output length of h satisfies

E[|h(X)|] = Σ_{(α,K)} Σ_{X ∈ S_(α,K)} |h(X)| φ_K P[x_1 = s_α]
          ≤ Σ_{(α,K)} α(|S_(α,K)|) φ_K P[x_1 = s_α]
          = Σ_{(α,K)} Σ_{X ∈ S_(α,K)} |f(X)| φ_K P[x_1 = s_α]
          = E[|f(X)|].

So f is optimal. This completes the proof.

Here, we construct the following algorithm (Algorithm C), which satisfies all the conditions in Lemma 10 and Lemma 11.
As a result, it can generate unbiased random bits from an arbitrary Markov chain with optimal information efficiency.

Algorithm C

Input: A sequence X = x_1 x_2 ... x_N produced by a Markov chain, where x_i ∈ S = {s_1, s_2, ..., s_n}.
Output: A sequence Y of 0's and 1's.
Main Function:
1) Get the matrix K = {k_ij} with k_ij = k_j(π_i(X)).
2) Define S(X) as
   S(X) = {X' | k_j(π_i(X')) = k_ij for all i, j; x'_1 = x_1},
   then compute |S(X)|.
3) Compute the rank r(X) of X in S(X) with respect to a given order. The rank with respect to a lexicographic order will be given later.
4) According to |S(X)| and r(X), determine the output sequence. Let Σ_k 2^{n_k} be the standard binary expansion of |S(X)| with n_1 > n_2 > ..., and let the starting value of r(X) be 0. If r(X) < 2^{n_1}, the output is the n_1-digit binary representation of r(X). If Σ_{k=1}^{i} 2^{n_k} ≤ r(X) < Σ_{k=1}^{i+1} 2^{n_k}, the output is the n_{i+1}-digit binary representation of r(X) − Σ_{k=1}^{i} 2^{n_k}.
Comment: Fast calculations of |S(X)| and r(X) are given in the rest of this section.

In Algorithm A, when we use Elias's function as Ψ, the limiting efficiency η_N = E[M]/N (as N → ∞) realizes the bound H(X)/N. Algorithm C is optimal, so it has the same or higher efficiency. Therefore, the limiting efficiency of Algorithm C as N → ∞ also realizes the bound H(X)/N.

In Algorithm C, for an input sequence X, we rank it with respect to the lexicographic order of θ(X) and σ(X). Here, we define

θ(X) = (π_1(X)[|π_1(X)|], ..., π_n(X)[|π_n(X)|]),

which is the vector of the last symbols of the π_i(X) for 1 ≤ i ≤ n, and σ(X) is the complement of θ(X) in π(X), namely,

σ(X) = (π_1(X)^{|π_1(X)|−1}, ..., π_n(X)^{|π_n(X)|−1}).

For example, when the input sequence is

X = s_1 s_4 s_2 s_1 s_3 s_2 s_3 s_1 s_1 s_2 s_3 s_4 s_1,

its exit sequences are

π(X) = [s_4 s_3 s_1 s_2, s_1 s_3 s_3, s_2 s_1 s_4, s_2 s_1].

Then for this input sequence X, we have

θ(X) = [s_2, s_3, s_4, s_1],    σ(X) = [s_4 s_3 s_1, s_1 s_3, s_2 s_1, s_2].

Based on the lexicographic order defined above, both |S(X)| and r(X) can be obtained using a brute-force search. However, this approach is not computationally efficient.
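Such a brute-force search can be written down directly. The following toy rendering (a sketch for checking small cases only; it enumerates all candidate sequences, so it is exponential in N by design, and it orders the class by plain tuple order rather than the θ/σ lexicographic order, which by Lemma 10 still yields unbiased bits):

```python
from itertools import product

def exit_seqs(X, n):
    """pi(X): the exit sequence of each state (lists of next-state indices)."""
    pi = [[] for _ in range(n)]
    for a, b in zip(X, X[1:]):
        pi[a].append(b)
    return pi

def algorithm_c_bruteforce(X, n):
    """Output bits for X: rank X within its class S(X), then assign bits
    block by block following the binary expansion of |S(X)| (step 4)."""
    key = lambda S: [sorted(e) for e in exit_seqs(S, n)]   # the counts k_ij
    target = key(X)
    # S(X): same first symbol and same exit-sequence counts, in a fixed order
    cls = [S for S in product(range(n), repeat=len(X))
           if S[0] == X[0] and key(S) == target]
    rank, size = cls.index(tuple(X)), len(cls)
    for m in reversed(range(size.bit_length())):           # n_1 > n_2 > ...
        if (size >> m) & 1:
            if rank < (1 << m):
                return format(rank, "0{}b".format(m)) if m else ""
            rank -= 1 << m
```

For instance, the class of X = s_1 s_1 s_2 s_2 s_1 over two states contains exactly two sequences, so one of them maps to "0" and the other to "1".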
Here, we describe an efficient algorithm for computing |S(X)| and r(X) when n is a small constant, such that Algorithm C is computable in O(N log^3 N log log N) time. This method is inspired by the algorithm for computing the Elias function described in [13]. However, when n is not small, the complexity of computing |S(X)| (or r(X)) has an exponential dependence on n, which makes this algorithm much slower than the previous ones.

Lemma 12. Let

Z = Π_{i=1}^{n} (k_i1 + k_i2 + ... + k_in)! / (k_i1! k_i2! ... k_in!)

and let N = 1 + Σ_{i=1}^{n} Σ_{j=1}^{n} k_ij. Then Z is computable in O(N log^3 N log log N) time (independent of n).

Proof: It is known that, given two numbers of length n bits, their multiplication or division is computable in O(n log n log log n) time using the Schönhage–Strassen algorithm [1]. We can calculate Z based on this fast multiplication. For simplification, we denote k_i = Σ_{j=1}^{n} k_ij. Note that we can write Z as a product of N − 1 fractional terms, denoted

ρ^0_1, ρ^0_2, ..., ρ^0_{N−1},

where each ρ^0_i can be represented using 2 log_2 N bits (log_2 N for the numerator and log_2 N for the denominator). The total time to compute all of them is much less than O(N log^3 N log log N). Based on these notations, we write Z as

Z = ρ^0_1 ρ^0_2 ... ρ^0_{N−1}.

Suppose that log_2(N − 1) is an integer; otherwise, we can add trivial terms (equal to 1) to the formula above to make log_2(N − 1) an integer. In order to calculate Z quickly, the following calculations are performed:

ρ^s_i = ρ^{s−1}_{2i−1} ρ^{s−1}_{2i},    s = 1, 2, ..., log_2(N − 1);  i = 1, 2, ..., (N − 1)/2^s.

Then we are able to compute Z iteratively and finally get

Z = ρ^{log_2(N−1)}_1.

To calculate the ρ^1_i for i = 1, 2, ..., (N − 1)/2, it takes 2((N − 1)/2) multiplications of numbers of length log_2 N bits. Similarly, to calculate the ρ^s_i for i = 1, 2, ..., (N − 1)/2^s, it takes 2((N − 1)/2^s) multiplications of numbers of length 2^{s−1} log_2 N bits. So the time complexity of computing Z is

Σ_{s=1}^{log_2(N−1)} 2((N − 1)/2^s) O(2^s log_2 N log(2^s log_2 N) log log(2^s log_2 N)).

This value is not greater than

O(N log^2 N log(N log N) log log(N log N)),

which yields the result in the lemma.

Lemma 13.
Let n be a small constant. Then |S(X)| in Algorithm C is computable in O(N log^3 N log log N) time.

Proof: The idea for computing |S(X)| in Algorithm C is to divide S(X) into classes, denoted S(X, θ) for the different values of θ, such that

S(X, θ) = {X' | for all i, j: k_j(π_i(X')) = k_ij, θ(X') = θ},

where k_ij = k_j(π_i(X)) is the number of s_j's in π_i(X) for all 1 ≤ i, j ≤ n, and θ(X') is the vector of the last symbols of π(X') defined above. As a result, we have |S(X)| = Σ_θ |S(X, θ)|. Although it is not easy to calculate |S(X)| directly, it is much easier to compute |S(X, θ)| for a given θ.

For a given θ = (θ_1, θ_2, ..., θ_n), we first determine whether S(X, θ) is empty or not. To do this, we quickly construct a collection of exit sequences Λ = [Λ_1, Λ_2, ..., Λ_n] by moving the first θ_i in π_i(X) to the end, for all 1 ≤ i ≤ n. According to the main lemma, S(X, θ) is empty if and only if π_i(X) does not include θ_i for some i, or (x_1, Λ) is not feasible. If S(X, θ) is not empty, then (x_1, Λ) is feasible. In this case, based on the main lemma, we have

|S(X, θ)| = Π_{i=1}^{n} (k_i1 + k_i2 + ... + k_in − 1)! / (k_i1! ... (k_iθ_i − 1)! ... k_in!)
          = Z Π_{i=1}^{n} k_iθ_i / (k_i1 + k_i2 + ... + k_in),

where

Z = Π_{i=1}^{n} (k_i1 + k_i2 + ... + k_in)! / (k_i1! k_i2! ... k_in!).

Then we get

|S(X)| = Σ_θ |S(X, θ)| = Z Σ_θ Π_{i=1}^{n} k_iθ_i / (k_i1 + k_i2 + ... + k_in).

According to Lemma 12, Z is computable in O(N log^3 N log log N) time. So if n is a small constant, then |S(X)| is also computable in O(N log^3 N log log N) time. However, when n is not small, we have to enumerate all possible combinations of θ, which takes O(n^n) time and is not computationally efficient.

Lemma 14. Let n be a small constant. Then r(X) in Algorithm C is computable in O(N log^3 N log log N) time.

Proof: Based on the calculations in the lemma above, we can obtain r(X) when X is ranked with respect to the lexicographic order of θ(X) and σ(X). Let r(X, θ(X)) denote the rank of X in S(X, θ(X)); then we have

r(X) = Σ_{θ < θ(X)} |S(X, θ)| + r(X, θ(X)),

where < is the lexicographic order. In this formula, when n is a small constant, Σ_{θ < θ(X)} |S(X, θ)| can be obtained in O(N log^3 N log log N) time by computing

Z Σ_{θ < θ(X): |S(X,θ)| > 0} Π_{i=1}^{n} k_iθ_i / (k_i1 + k_i2 + ... + k_in),

where Z is defined in the last lemma and the second factor can be calculated quickly when n is a small constant.
(However, n cannot be big, since the complexity of computing the second factor has an exponential dependence on n.)

So far, we only need to compute r(X, θ(X)) with respect to the lexicographic order of σ(X). Here, we write σ(X) as the concatenation of a group of sequences, namely

σ(X) = σ_1(X) σ_2(X) ... σ_n(X),

such that σ_i(X) = π_i(X)^{|π_i(X)|−1} for all 1 ≤ i ≤ n. There are M = (N − 1) − n symbols in σ(X). Let r_i(X) be the number of sequences in S(X, θ(X)) such that their first M − i symbols are σ(X)[1, M − i] and their (M − i + 1)th symbol is smaller than σ(X)[M − i + 1]. Then we get

r(X, θ(X)) = Σ_{i=1}^{M} r_i(X).

Assume that σ(X)[M − i + 1] = s_{w_i} for some w_i, and that it is the u_i-th symbol in σ_{v_i}(X). For simplicity, we denote σ_{v_i}(X)[u_i, |σ_{v_i}(X)|] by ζ_i. For example, when n = 3 and [σ_1(X), σ_2(X), σ_3(X)] = [s_1 s_2, s_2 s_3, s_1 s_1 s_1], we have

ζ_1 = s_1, ζ_2 = s_1 s_1, ζ_3 = s_1 s_1 s_1, ζ_4 = s_3, ζ_5 = s_2 s_3, ...

To calculate r_i(X), we can count all the sequences generated by permuting the symbols of ζ_i, σ_{v_i+1}(X), ..., σ_n(X) such that the (M − i + 1)th symbol of the new sequence is smaller than s_{w_i}. Then we get

r_i(X) = (Σ_{j < w_i} (|ζ_i| − 1)! / (k_1(ζ_i)! ... (k_j(ζ_i) − 1)! ... k_n(ζ_i)!)) Π_{l=v_i+1}^{n} |σ_l(X)|! / (k_1(σ_l(X))! k_2(σ_l(X))! ... k_n(σ_l(X))!),

where k_j(X) counts the number of s_j's in X.

Let us define ρ^0_i = |ζ_i| / k_{w_i}(ζ_i) for all 1 ≤ i ≤ M. In this expression, k_{w_i}(ζ_i) is the number of s_{w_i}'s in ζ_i, and s_{w_i} is the first symbol of ζ_i. It is easy to show that for 1 ≤ i ≤ M,

ρ^0_i ρ^0_{i−1} ... ρ^0_2 ρ^0_1 = (|ζ_i|! / (k_1(ζ_i)! ... k_n(ζ_i)!)) Π_{l=v_i+1}^{n} |σ_l(X)|! / (k_1(σ_l(X))! k_2(σ_l(X))! ... k_n(σ_l(X))!).

If we also define λ^0_i = Σ_{j < w_i} k_j(ζ_i) / |ζ_i| for all 1 ≤ i ≤ M, then we have

r_i(X) = λ^0_i ρ^0_i ρ^0_{i−1} ... ρ^0_1

and

r(X, θ(X)) = Σ_{i=1}^{M} λ^0_i ρ^0_i ρ^0_{i−1} ... ρ^0_2 ρ^0_1.

Suppose that log_2 M is an integer; otherwise, we can add trivial terms to the formula above to make log_2 M an

integer. In order to quickly calculate r(X, θ(X)), the following calculations are performed for s from 1 to log_2 M:

ρ^s_i = ρ^{s−1}_{2i−1} ρ^{s−1}_{2i},    i = 1, 2, ..., 2^{−s} M,
λ^s_i = λ^{s−1}_{2i−1} + λ^{s−1}_{2i} ρ^{s−1}_{2i−1},    i = 1, 2, ..., 2^{−s} M,

where the values at s = 0 are the ρ^0_i above and λ^0_i ρ^0_i, respectively. By computing all the ρ^s_i and λ^s_i iteratively for s from 1 to log_2 M, we get

r(X, θ(X)) = λ^{log_2 M}_1.

Now, we use the same idea as in [13] to analyze the computational complexity. Note that every ρ^0_i and λ^0_i can be represented using 2 log_2 M bits (log_2 M for the numerator and log_2 M for the denominator), and all of them can be calculated quickly. To calculate the ρ^1_i for i = 1, 2, ..., M/2, it takes at most 2(M/2) multiplications of numbers of length log_2 M bits. To calculate the λ^1_i for i = 1, 2, ..., M/2, it takes 3(M/2) multiplications of numbers of length log_2 M bits; that is because we can write λ^1_i as a/b + c/d = (ad + bc)/(bd) for some integers a, b, c, d of length log_2 M bits. Similarly, to calculate all the ρ^s_i and λ^s_i for a given s, it takes at most 5(M/2^s) multiplications of numbers of length 2^s log_2 M bits. As a result, the time complexity of computing r(X, θ(X)) is

Σ_{s=1}^{log_2 M} 5(M/2^s) O(2^s log_2 M log(2^s log_2 M) log log(2^s log_2 M)),

so it is computable in O(M log^3 M log log M) time. As a result, for a small constant n, r(X) is computable in O(N log^3 N log log N) time.

Based on the discussion above, we know that Algorithm C is computable in O(N log^3 N log log N) time when n is a small constant. However, when n is not a constant, this algorithm is not computationally efficient, since its time complexity depends exponentially on n.

VII. NUMERICAL RESULTS

In this section, we describe numerical results related to the implementations of Algorithm A, Algorithm B, and Algorithm C. We use the Elias function for Ψ. In the first experiment, we use a randomly generated transition matrix P for a Markov chain with three states. Consider a sequence of length 12 generated by this Markov chain, and assume that s_1 is the first state of the sequence; namely, there are 3^11 = 177147 possible input sequences.
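The efficiency values quoted in this section can be spot-checked by evaluating η_i(ϖ) directly from the formula following Theorem 9; the sketch below uses exact rationals (function names are illustrative, not from the paper):

```python
from fractions import Fraction
from math import factorial

def alpha(l):
    # alpha(l) = sum of n_k * 2^{n_k} over the binary expansion l = sum 2^{n_k}
    return sum(m * (1 << m) for m in range(l.bit_length()) if (l >> m) & 1)

def compositions(total, parts):
    # all tuples (k_1, ..., k_parts) of non-negative integers summing to total
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def eta_i(w, p):
    """eta_i(w): per-symbol efficiency of Elias's function on length-w blocks
    drawn i.i.d. from distribution p = (p_i1, ..., p_in)."""
    total = Fraction(0)
    for ks in compositions(w, len(p)):
        size = factorial(w)
        for k in ks:
            size //= factorial(k)          # multinomial class size
        prob = Fraction(1)
        for k, pj in zip(ks, p):
            prob *= Fraction(pj) ** k      # probability of one class member
        total += alpha(size) * prob
    return total / w
```

For the uniform chain, η(ϖ) = η_i(ϖ) for every i; with ϖ = 2 this reproduces the Blum efficiencies 1/4 (n = 2) and 1/3 (n = 3) discussed below.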
For each possible input sequence, we can compute its generating probability and the corresponding output sequences of the three algorithms. Table II presents the results of calculating the probabilities of all possible output sequences for the three algorithms. The results show that the outputs of the algorithms are indeed independent unbiased sequences. Also, Algorithm C has the highest information efficiency (it is optimal), and Algorithm A has a higher information efficiency than Algorithm B (with window size 4).

TABLE II. The probability of each possible output sequence and the expected output length, for Algorithm A, Algorithm B (with ϖ = 4), and Algorithm C.

Fig. 3. The limiting efficiency of Algorithm B as a function of the window size ϖ for different numbers of states n, where we assume that the transition probabilities are p_ij = 1/n for all 1 ≤ i, j ≤ n.

In the second calculation, we test the influence of the window size ϖ (assuming ϖ_i(k) = ϖ for all 1 ≤ i ≤ n) on the efficiency of Algorithm B. Since the efficiency depends on the transition matrix of the Markov chain, we evaluate the efficiency for the uniform transition matrix, namely, all the entries are 1/n, where n is the number of states. We assume that the input length N is infinitely large. In this case, the stationary distribution of the Markov chain is {1/n, 1/n, ..., 1/n}. Fig. 3 shows that when ϖ = 2 (Blum's algorithm), the limiting efficiencies for n = (2, 3, 5) are (1/4, 1/3, 2/5), respectively. When ϖ = 5,

their corresponding efficiencies are (0.7228, 1.1342, 1.5827). So if the input sequence is long enough, by changing ϖ from 2 to 5, the efficiency increases by 189% for n = 2, 240% for n = 3 and 296% for n = 5. When ϖ is small, we can increase the efficiency of Algorithm B significantly by increasing the window size ϖ. As ϖ becomes larger, the efficiency of Algorithm B converges to the information-theoretic upper bound, namely, log_2 n.

Note that 3 is not a good value for the window size in the algorithm. That is because the Elias function is not very efficient when the length of the input sequence is 3. Consider a biased coin with two states s_1, s_2. If the input sequence is s_1 s_1 s_1 or s_2 s_2 s_2, the Elias function generates nothing. In all other cases, it has only a 2/3 chance of generating one bit and a 1/3 chance of generating nothing. As a result, the efficiency is no better than the efficiency when the length of the input sequence equals 2.

VIII. CONCLUDING REMARKS

We considered the classical problem of generating independent unbiased bits from an arbitrary Markov chain with unknown transition probabilities. Our main contribution is the first known algorithm that has expected linear time complexity and achieves the information-theoretic upper bound on efficiency.

Our work is related to a number of interesting results in both computer science and information theory. In computer science, attention has focused on extracting randomness from a general weak random source (introduced by Zuckerman [18]). Hence, the concept of an extractor was introduced; it uses a small number of truly random bits as the seed (catalyst) for randomness extraction. During the past two decades, extractors and their applications have been studied extensively; see [10], [15] for surveys on the topic. While our algorithms generate truly random bits (given a perfect Markov chain as a source), the goal of extractors is to generate random sequences that are asymptotically close to random bits in the sense of statistical distance.
In information theory, it was discovered that optimal source codes can be used as universal random bit generators from arbitrary stationary ergodic random sources [6], [16] (the Markov chains studied in our paper are special cases of stationary ergodic sources). When the input sequence is generated by a stationary ergodic process and is long enough, one can obtain an output sequence that behaves like truly random bits in the sense of normalized divergence. However, in some cases, the notion of normalized divergence is not strong enough. For example, suppose Y is a sequence of unbiased random bits in the sense of normalized divergence, and Y' is Y with a 1 concatenated at the beginning. If the sequence Y is long enough, the sequence Y' is still a sequence of unbiased random bits in the sense of normalized divergence; however, Y' might not be useful in applications that are sensitive to the randomness of the first bit.

APPENDIX

In this appendix, we prove the main lemma.

Lemma 4 (Main Lemma: Feasibility and equivalence of exit sequences). Given a starting state s_α and two collections of sequences Λ = [Λ_1, Λ_2, ..., Λ_n] and Γ = [Γ_1, Γ_2, ..., Γ_n] such that Λ_i ≐ Γ_i (tail-fixed permutation) for all 1 ≤ i ≤ n, (s_α, Λ) is feasible if and only if (s_α, Γ) is feasible.

To illustrate the claim in the lemma, we express s_α and Λ by a directed graph that has labels on the vertices and edges; we call this graph a sequence graph. For example, when s_α = s_1 and Λ = [s_4 s_3 s_1 s_2, s_1 s_3 s_3, s_2 s_1 s_4, s_2 s_1], we have the directed graph in Fig. 4. Let V denote the vertex set; then V = {s_0, s_1, s_2, ..., s_n}, and the edge set is

E = {(s_i, Λ_i[k])} ∪ {(s_0, s_α)}.

For each edge (s_i, Λ_i[k]), the label of this edge is k. For the edge (s_0, s_α), the label is 1. Namely, the label set of the outgoing edges of each state is {1, 2, ...}.

Fig. 4. An example of a sequence graph G.
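Feasibility of a pair (s_α, Λ) can be tested directly by greedily consuming the exit sequences; this is a sketch of that check (states are 0-indexed, and the helper name is illustrative):

```python
def is_feasible(alpha, exits):
    """Test whether (s_alpha, exits) is feasible: walk from state alpha,
    always consuming the next unused symbol of the current state's exit
    sequence; the pair is feasible iff the walk consumes every symbol."""
    n = len(exits)
    pos = [0] * n                 # next unread index in each exit sequence
    state = alpha
    while pos[state] < len(exits[state]):
        nxt = exits[state][pos[state]]
        pos[state] += 1
        state = nxt
    return all(pos[i] == len(exits[i]) for i in range(n))
```

With the example above (s_α = s_1 and Λ = [s_4 s_3 s_1 s_2, s_1 s_3 s_3, s_2 s_1 s_4, s_2 s_1]), the walk consumes every exit symbol, so the pair is feasible.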
Given the labeling of the directed graph as defined above, we say that it contains a complete walk if there is a path in the graph that visits all the edges, without visiting an edge twice, in the following way: (1) start from s_0; (2) at each vertex, choose the unvisited outgoing edge with the minimal label to follow. Obviously, the labeling corresponding to (s_α, Λ) is a complete walk if and only if (s_α, Λ) is feasible. In this case, for short, we also say that (s_α, Λ) is a complete walk. Before continuing to prove the main lemma, we first give Lemma 15 and Lemma 16.

Lemma 15. Assume (s_α, Λ) with Λ = [Λ_1, Λ_2, ..., Λ_χ, ..., Λ_n] is a complete walk, which ends at state s_χ. Then (s_α, Γ) with Γ = [Λ_1, ..., Γ_χ, ..., Λ_n] is also a complete walk ending at s_χ, if Λ_χ ≡ Γ_χ (permutation).

Proof: (s_α, Λ) and (s_α, Γ) correspond to different labelings, L_1 and L_2, of the same directed graph G. Since L_1 is a complete walk, it travels all the edges in G one by one, denoted as

(s_{i_1}, s_{j_1}), (s_{i_2}, s_{j_2}), ..., (s_{i_N}, s_{j_N}),

where s_{i_1} = s_0 and s_{j_N} = s_χ. We call {1, 2, ..., N} the indexes of the edges. Based on L_2, let us walk on G starting from s_0 until there is no unvisited outgoing edge to select. In this walk, assume the following edges have been visited:

(s_{i_{w_1}}, s_{j_{w_1}}), (s_{i_{w_2}}, s_{j_{w_2}}), ..., (s_{i_{w_M}}, s_{j_{w_M}}),

where w_1, w_2, ..., w_M are distinct indexes chosen from {1, 2, ..., N} and s_{i_{w_1}} = s_0. In order to prove that L_2 is a complete walk, we need to show that (1) s_{j_{w_M}} = s_χ and (2) M = N.

First, let us prove that s_{j_{w_M}} = s_χ. In G, let d^(out)_i denote the number of outgoing edges of s_i and let d^(in)_i denote the number of incoming edges of s_i. Then we have

d^(in)_0 = 0,  d^(out)_0 = 1,
d^(in)_χ = d^(out)_χ + 1,
d^(in)_i = d^(out)_i for i ≠ 0, χ.

Based on these relations, we know that once we have a walk starting from s_0 in G, this walk will finally end at state s_χ: we can always get out of s_i, due to d^(in)_i = d^(out)_i, if i ≠ χ, 0.

Now, we prove that M = N. This can be proved by contradiction. Assume M ≠ N; then we define

V_1 = {w_1, w_2, ..., w_M},
V_2 = {1, 2, ..., N} \ {w_1, w_2, ..., w_M},

where V_1 corresponds to the visited edges based on L_2 and V_2 corresponds to the unvisited edges based on L_2. Let v = min(V_2); then (s_{i_v}, s_{j_v}) is the unvisited edge with the minimal index. Let l = i_v; then (s_{i_v}, s_{j_v}) is an outgoing edge of s_l. Here l ≠ χ, because all the outgoing edges of s_χ have been visited. Assume the number of visited incoming edges of s_l is M^(in)_l and the number of visited outgoing edges of s_l is M^(out)_l; then M^(in)_l = M^(out)_l; see Fig. 5 for an example.

Fig. 5. An illustration of the incoming and outgoing edges of s_l, in which the solid arrows indicate visited edges and the dashed arrows indicate unvisited edges.

Note that the labels of the outgoing edges of s_l are the same in L_1 and L_2, since l ≠ χ, 0. Therefore, based on L_1, before visiting the edge (s_{i_v}, s_{j_v}), there must be M^(out)_l outgoing edges of s_l that have been visited. As a result, based on L_1, there must be M^(out)_l + 1 = M^(in)_l + 1 incoming edges of s_l that have been visited before visiting (s_{i_v}, s_{j_v}). Among all these M^(in)_l + 1 incoming edges, there exists at least one edge (s_{i_u}, s_{j_u}) such that u ∈ V_2, since only M^(in)_l incoming edges of s_l have been visited based on L_2. According to our assumption, both u, v ∈ V_2 and v is the minimal element of V_2, so u > v. On the other hand, we know that (s_{i_u}, s_{j_u}) is visited before (s_{i_v}, s_{j_v}) based on L_1, so u < v. Here, the contradiction happens. Therefore, M = N. This completes the proof.

Here, let us give an example of the lemma above. We know that, when s_α = s_1 and Λ = [s_4 s_3 s_1 s_2, s_1 s_3 s_3, s_2 s_1 s_4, s_2 s_1], (s_α, Λ) is feasible. The labeling of the directed graph corresponding to (s_α, Λ) is given in Fig. 4; it is a complete walk starting at state s_0 and ending at state s_1. The path of the walk is

s_0 s_1 s_4 s_2 s_1 s_3 s_2 s_3 s_1 s_1 s_2 s_3 s_4 s_1.

By permuting the labels of the outgoing edges of s_1, we can obtain the graph shown in Fig. 6. The new labeling of G is also a complete walk ending at state s_1, and its path is

s_0 s_1 s_1 s_2 s_1 s_3 s_2 s_3 s_1 s_4 s_2 s_3 s_4 s_1.

Fig. 6. The sequence graph G with new labels.

Based on Lemma 15, we have the following result.

Lemma 16. Given a starting state s_α and two collections of sequences Λ = [Λ_1, Λ_2, ..., Λ_k, ..., Λ_n] and Γ = [Λ_1, ..., Γ_k, ..., Λ_n] such that Γ_k ≐ Λ_k (tail-fixed permutation), (s_α, Λ) and (s_α, Γ) have the same feasibility.

Proof: We prove that if (s_α, Λ) is feasible, then (s_α, Γ) is also feasible. If (s_α, Λ) is feasible, there exists a sequence X such that s_α = x_1 and Λ = π(X). Suppose its last element is x_N = s_χ. When k = χ, according to Lemma 15, we know that (s_α, Γ) is feasible. When k ≠ χ, we assume that Λ_k = π_k(X) = x_{k_1+1} x_{k_2+1} ... x_{k_w+1}, where k_1 < k_2 < ... < k_w are the positions at which X visits s_k. Let us consider the subsequence X' = x_1 x_2 ... x_{k_w} of X.

Then π_k(X′) = Λ_k′, where Λ_k′ denotes Λ_k with its last element removed, and the last element of X′ is s_k. According to Lemma 5, we get that there exists a sequence x′_1 x′_2 ... x′_{k_w − 1} with x′_1 = x_1 and x′_{k_w − 1} = x_{k_w − 1} such that

π(x′_1 x′_2 ... x′_{k_w − 1}) = [π_1(X′), ..., Γ_k′, π_{k+1}(X′), ..., π_n(X′)],

since Γ_k′ (Γ_k with its last element removed) is a permutation of Λ_k′. Let x′_{k_w} x′_{k_w + 1} ... x′_N = x_{k_w} x_{k_w + 1} ... x_N, i.e., by concatenating x_{k_w} x_{k_w + 1} ... x_N to the end of x′_1 x′_2 ... x′_{k_w − 1}, we can generate a sequence x′_1 x′_2 ... x′_N such that its exit sequence of state s_k is Γ_k′ x_{k_w} = Γ_k and its exit sequence of state s_i with i ≠ k is Λ_i = π_i(X). So if (s_α, Λ) is feasible, then (s_α, Γ) is also feasible. Similarly, if (s_α, Γ) is feasible, then (s_α, Λ) is feasible. As a result, (s_α, Λ) and (s_α, Γ) have the same feasibility.

According to the lemma above, we know that (s_α, [Λ_1, Λ_2, ..., Λ_n]) and (s_α, [Γ_1, Λ_2, ..., Λ_n]) have the same feasibility; (s_α, [Γ_1, Λ_2, ..., Λ_n]) and (s_α, [Γ_1, Γ_2, ..., Λ_n]) have the same feasibility; ...; (s_α, [Γ_1, Γ_2, ..., Γ_{n−1}, Λ_n]) and (s_α, [Γ_1, Γ_2, ..., Γ_{n−1}, Γ_n]) have the same feasibility. So the statement in the main lemma is true.

REFERENCES

[1] A. V. Aho, J. E. Hopcroft, and J. D. Ullman, The Design and Analysis of Computer Algorithms. Reading, MA: Addison-Wesley, 1974.
[2] M. Blum, "Independent unbiased coin flips from a correlated biased source: a finite state Markov chain," Combinatorica, vol. 6, 1986.
[3] T. M. Cover and J. A. Thomas, Elements of Information Theory, Second Edition, Wiley, July 2006.
[4] P. Elias, "The efficient construction of an unbiased random sequence," Ann. Math. Statist., vol. 43, 1972.
[5] T. S. Han and M. Hoshi, "Interval algorithm for random number generation," IEEE Trans. on Information Theory, vol. 43, no. 2, 1997.
[6] T. S. Han, "Folklore in source coding: information-spectrum approach," IEEE Trans. on Information Theory, vol. 51, no. 2, 2005.
[7] W. Hoeffding and G. Simons, "Unbiased coin tossing with a biased coin," Ann. Math. Statist., vol. 41, 1970.
[8] D. Knuth and A.
Yao, "The complexity of nonuniform random number generation," Algorithms and Complexity: New Directions and Recent Results, 1976.
[9] J. von Neumann, "Various techniques used in connection with random digits," Appl. Math. Ser., Notes by G. E. Forsythe, Nat. Bur. Stand., vol. 12, 1951.
[10] N. Nisan, "Extracting randomness: how and why. A survey," in Proc. Eleventh Annual IEEE Conference on Computational Complexity, 1996.
[11] S. Pae and M. C. Loui, "Optimal random number generation from a biased coin," in Proc. Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2005.
[12] Y. Peres, "Iterating von Neumann's procedure for extracting random bits," Ann. Statist., vol. 20, 1992.
[13] B. Y. Ryabko and E. Matchikina, "Fast and efficient construction of an unbiased random sequence," IEEE Trans. on Information Theory, vol. 46, 2000.
[14] P. A. Samuelson, "Constructing an unbiased random sequence," J. Amer. Statist. Assoc., 1968.
[15] R. Shaltiel, "Recent developments in explicit constructions of extractors," Bulletin of the European Association for Theoretical Computer Science, vol. 77, 2002.
[16] K. Visweswariah, S. R. Kulkarni, and S. Verdú, "Source codes as random number generators," IEEE Trans. on Information Theory, vol. 44, no. 2, 1998.
[17] Q. Stout and B. Warren, "Tree algorithms for unbiased coin tossing with a biased coin," Ann. Probab., vol. 12, 1984.
[18] D. Zuckerman, "General weak random sources," in Proc. of the 31st IEEE Symposium on Foundations of Computer Science, 1990.

Hongchao Zhou received the B.S. degree in physics and mathematics and the M.S. degree in control science and engineering from Tsinghua University, Beijing, China, in 2006 and 2008, respectively. Currently, he is pursuing the Ph.D. degree in electrical engineering at the California Institute of Technology, Pasadena. He worked with IBM China Research Laboratory, Beijing, as a research intern from January 2006 to June. His current research interests include information theory, stochastic computation, and networks.

Jehoshua Bruck (F'01) received the B.S.
and M.S. degrees in electrical engineering from the Technion - Israel Institute of Technology, Haifa, Israel, in 1982 and 1985, respectively, and the Ph.D. degree in electrical engineering from Stanford University, Stanford, CA, in 1989. He is the Gordon and Betty Moore Professor of computation and neural systems and electrical engineering with the Department of Electrical Engineering, California Institute of Technology, Pasadena. His extensive industrial experience includes working with the IBM Almaden Research Center, San Jose, CA, as well as cofounding and serving as the Chairman of Rainfinity, acquired by EMC, Hopkinton, MA, in 2005, and XtremIO, Cupertino, CA. His current research interests include information theory and systems and the theory of computation in biological networks. Dr. Bruck is a recipient of the Feynman Prize for Excellence in Teaching, the Sloan Research Fellowship, the National Science Foundation Young Investigator Award, the IBM Outstanding Innovation Award, and the IBM Outstanding Technical Achievement Award. His papers were recognized in journals and conferences, including winning the 2009 IEEE Communications Society Best Paper Award in Signal Processing and Coding for Data Storage for his paper on rank modulation for flash memories, the 2005 A. Schelkunoff Transactions Prize Paper Award from the IEEE Antennas and Propagation Society for his paper on signal propagation in wireless networks, and the 2003 Best Paper Award in the 2003 Design Automation Conference for his paper on cyclic combinational circuits.


More information

$, (2.1) n="# #. (2.2)

$, (2.1) n=# #. (2.2) Chapter. Eectrostatic II Notes: Most of the materia presented in this chapter is taken from Jackson, Chap.,, and 4, and Di Bartoo, Chap... Mathematica Considerations.. The Fourier series and the Fourier

More information

Target Location Estimation in Wireless Sensor Networks Using Binary Data

Target Location Estimation in Wireless Sensor Networks Using Binary Data Target Location stimation in Wireess Sensor Networks Using Binary Data Ruixin Niu and Pramod K. Varshney Department of ectrica ngineering and Computer Science Link Ha Syracuse University Syracuse, NY 344

More information

Source and Relay Matrices Optimization for Multiuser Multi-Hop MIMO Relay Systems

Source and Relay Matrices Optimization for Multiuser Multi-Hop MIMO Relay Systems Source and Reay Matrices Optimization for Mutiuser Muti-Hop MIMO Reay Systems Yue Rong Department of Eectrica and Computer Engineering, Curtin University, Bentey, WA 6102, Austraia Abstract In this paper,

More information

Integrality ratio for Group Steiner Trees and Directed Steiner Trees

Integrality ratio for Group Steiner Trees and Directed Steiner Trees Integraity ratio for Group Steiner Trees and Directed Steiner Trees Eran Haperin Guy Kortsarz Robert Krauthgamer Aravind Srinivasan Nan Wang Abstract The natura reaxation for the Group Steiner Tree probem,

More information

Homework 5 Solutions

Homework 5 Solutions Stat 310B/Math 230B Theory of Probabiity Homework 5 Soutions Andrea Montanari Due on 2/19/2014 Exercise [5.3.20] 1. We caim that n 2 [ E[h F n ] = 2 n i=1 A i,n h(u)du ] I Ai,n (t). (1) Indeed, integrabiity

More information

APPENDIX C FLEXING OF LENGTH BARS

APPENDIX C FLEXING OF LENGTH BARS Fexing of ength bars 83 APPENDIX C FLEXING OF LENGTH BARS C.1 FLEXING OF A LENGTH BAR DUE TO ITS OWN WEIGHT Any object ying in a horizonta pane wi sag under its own weight uness it is infinitey stiff or

More information

Stat 155 Game theory, Yuval Peres Fall Lectures 4,5,6

Stat 155 Game theory, Yuval Peres Fall Lectures 4,5,6 Stat 155 Game theory, Yuva Peres Fa 2004 Lectures 4,5,6 In the ast ecture, we defined N and P positions for a combinatoria game. We wi now show more formay that each starting position in a combinatoria

More information

Primal and dual active-set methods for convex quadratic programming

Primal and dual active-set methods for convex quadratic programming Math. Program., Ser. A 216) 159:469 58 DOI 1.17/s117-15-966-2 FULL LENGTH PAPER Prima and dua active-set methods for convex quadratic programming Anders Forsgren 1 Phiip E. Gi 2 Eizabeth Wong 2 Received:

More information

V.B The Cluster Expansion

V.B The Cluster Expansion V.B The Custer Expansion For short range interactions, speciay with a hard core, it is much better to repace the expansion parameter V( q ) by f(q ) = exp ( βv( q )) 1, which is obtained by summing over

More information

Explicit overall risk minimization transductive bound

Explicit overall risk minimization transductive bound 1 Expicit overa risk minimization transductive bound Sergio Decherchi, Paoo Gastado, Sandro Ridea, Rodofo Zunino Dept. of Biophysica and Eectronic Engineering (DIBE), Genoa University Via Opera Pia 11a,

More information

8 Digifl'.11 Cth:uits and devices

8 Digifl'.11 Cth:uits and devices 8 Digif'. Cth:uits and devices 8. Introduction In anaog eectronics, votage is a continuous variabe. This is usefu because most physica quantities we encounter are continuous: sound eves, ight intensity,

More information

arxiv:quant-ph/ v3 6 Jan 1995

arxiv:quant-ph/ v3 6 Jan 1995 arxiv:quant-ph/9501001v3 6 Jan 1995 Critique of proposed imit to space time measurement, based on Wigner s cocks and mirrors L. Diósi and B. Lukács KFKI Research Institute for Partice and Nucear Physics

More information

How many random edges make a dense hypergraph non-2-colorable?

How many random edges make a dense hypergraph non-2-colorable? How many random edges make a dense hypergraph non--coorabe? Benny Sudakov Jan Vondrák Abstract We study a mode of random uniform hypergraphs, where a random instance is obtained by adding random edges

More information

New Efficiency Results for Makespan Cost Sharing

New Efficiency Results for Makespan Cost Sharing New Efficiency Resuts for Makespan Cost Sharing Yvonne Beischwitz a, Forian Schoppmann a, a University of Paderborn, Department of Computer Science Fürstenaee, 3302 Paderborn, Germany Abstract In the context

More information

Completion. is dense in H. If V is complete, then U(V) = H.

Completion. is dense in H. If V is complete, then U(V) = H. Competion Theorem 1 (Competion) If ( V V ) is any inner product space then there exists a Hibert space ( H H ) and a map U : V H such that (i) U is 1 1 (ii) U is inear (iii) UxUy H xy V for a xy V (iv)

More information

LECTURE NOTES 9 TRACELESS SYMMETRIC TENSOR APPROACH TO LEGENDRE POLYNOMIALS AND SPHERICAL HARMONICS

LECTURE NOTES 9 TRACELESS SYMMETRIC TENSOR APPROACH TO LEGENDRE POLYNOMIALS AND SPHERICAL HARMONICS MASSACHUSETTS INSTITUTE OF TECHNOLOGY Physics Department Physics 8.07: Eectromagnetism II October 7, 202 Prof. Aan Guth LECTURE NOTES 9 TRACELESS SYMMETRIC TENSOR APPROACH TO LEGENDRE POLYNOMIALS AND SPHERICAL

More information

Expectation-Maximization for Estimating Parameters for a Mixture of Poissons

Expectation-Maximization for Estimating Parameters for a Mixture of Poissons Expectation-Maximization for Estimating Parameters for a Mixture of Poissons Brandon Maone Department of Computer Science University of Hesini February 18, 2014 Abstract This document derives, in excrutiating

More information

Copyright information to be inserted by the Publishers. Unsplitting BGK-type Schemes for the Shallow. Water Equations KUN XU

Copyright information to be inserted by the Publishers. Unsplitting BGK-type Schemes for the Shallow. Water Equations KUN XU Copyright information to be inserted by the Pubishers Unspitting BGK-type Schemes for the Shaow Water Equations KUN XU Mathematics Department, Hong Kong University of Science and Technoogy, Cear Water

More information