Adiabatic times for Markov chains and applications
Kyle Bradford and Yevgeniy Kovchegov
Department of Mathematics, Oregon State University, Corvallis, OR, USA

Abstract

We state and prove a generalized adiabatic theorem for Markov chains and provide examples and applications related to Glauber dynamics of the Ising model over $\mathbb{Z}^d/n\mathbb{Z}^d$. The theorems derived in this paper describe a type of adiabatic dynamics for $\ell^1(\mathbb{R}^n_+)$ norm preserving, time inhomogeneous Markov transformations, while quantum adiabatic theorems deal with $\ell^2(\mathbb{C}^n)$ norm preserving ones, i.e. gradually changing unitary dynamics in $\mathbb{C}^n$.

Keywords: time inhomogeneous Markov processes; ergodicity; mixing times; adiabatic; Glauber dynamics; Ising model

AMS Subject Classification: 60J10, 60J27, 60J28

1 Introduction

The long-term stability of time inhomogeneous Markov processes is an active area of research in the field of stochastic processes and their applications. See [9] and [10], and references therein. The adiabatic time, as introduced in [5], is a way to quantify the stability of a certain class of time inhomogeneous Markov processes. In order to introduce the reader to the type of adiabatic results that we will be working with in this paper, let us first mention the earlier results published in [5], postponing a more elaborate discussion of the matter until subsection 1.2.

1.1 Preliminaries

The mixing time quantifies the time it takes for a Markov chain to reach a state that is close enough to its stationary distribution. For the discrete-time finite-state case we look at the evolution of the Markov chain through its probability transition matrix. See [6] for a systematized account of mixing time theory and examples. Let $\|\cdot\|_{TV}$ denote the total variation distance.
Definition 1. Suppose $P$ is a discrete-time finite Markov chain with a unique stationary distribution $\pi$, i.e. $\pi P = \pi$. Given an $\epsilon > 0$, the mixing time $t_{mix}(\epsilon)$ is defined as

$$t_{mix}(\epsilon) = \inf\left\{t : \|\nu P^t - \pi\|_{TV} \le \epsilon \text{ for all probability distributions } \nu\right\}.$$

To define the adiabatic time in its first and simplest form, which we will expand and generalize a few pages down, we have to consider a time inhomogeneous Markov chain whose probability transition matrix evolves linearly from an initial probability transition matrix $P_{initial}$ to a final probability transition matrix $P_{final}$. Namely, we consider two transition probability operators, $P_{initial}$ and $P_{final}$, on a finite state space $\Omega$, and we suppose there is a unique stationary distribution $\pi_f$ of $P_{final}$. We let

$$P_s = (1-s)P_{initial} + sP_{final}. \qquad (1)$$

We use (1) to define a time inhomogeneous Markov chain $\{P_{t/T}\}$ over the time interval $[0, T]$. The adiabatic time quantifies how gradual the transition from $P_{initial}$ to $P_{final}$ should be so that at time $T$ the distribution is close to the stationary distribution $\pi_f$ of $P_{final}$.

Definition 2. Given $\epsilon > 0$, a time $T$ is called the adiabatic time if it is the least $T$ such that

$$\max_{\nu}\left\|\nu P_{1/T} P_{2/T} \cdots P_{(T-1)/T} P_1 - \pi_f\right\|_{TV} \le \epsilon,$$

where the maximum is taken over all probability distributions $\nu$ over $\Omega$.

With these definitions one would naturally ask how adiabatic and mixing times compare. This is especially relevant given the emergence of quantum adiabatic computation and some instances of using adiabatic algorithms to solve certain classical computation problems. See [1] and [8]. It can be speculated that there may be scenarios in which the adiabatic time is more convenient to compute than the mixing time. If we find the relationship between the two, it will give us an understanding of the adiabatic transition, which is more prevalent in a context of physics, in terms of mixing times and vice versa. The following adiabatic theorem was proved in [5].

Theorem (Kovchegov 2009). Let $t_{mix}$ denote the mixing time for $P_{final}$.
Then the adiabatic time

$$T = O\left(\frac{t_{mix}^2(\epsilon/2)}{\epsilon}\right).$$

In subsection 1.4 we will give an example showing that the order $t_{mix}^2(\epsilon/2)/\epsilon$ is the best bound for the adiabatic time in this setting. There $\Omega = \{0, 1, 2, \ldots, n\}$, $P_{initial}$ is the transition matrix that resets any state to $0$ in one step, and $P_{final}$ is the transition matrix of the deterministic walk that moves from $j$ to $j+1$ and holds at $n$.
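To make Definitions 1 and 2 concrete, here is a brute-force sketch in Python (the two-state kernels, tolerances, and all names are our own toy choices, not taken from the paper): it computes $t_{mix}(\epsilon)$ for a final kernel and the worst-case total variation error of the adiabatic evolution $\nu P_{1/T}P_{2/T}\cdots P_1$ for several transition times $T$.

```python
# Toy illustration of Definitions 1 and 2 (our own two-state example).

def tv(mu, nu):
    # total variation distance: (1/2) * sum_i |mu_i - nu_i|
    return 0.5 * sum(abs(a - b) for a, b in zip(mu, nu))

def apply_kernel(mu, P):
    n = len(P)
    return [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]

def mixing_time(P, pi, eps):
    # t_mix(eps) = inf { t : ||nu P^t - pi||_TV <= eps for all nu };
    # by convexity it suffices to check the point masses nu = delta_i.
    n = len(P)
    dists = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
    t = 0
    while max(tv(mu, pi) for mu in dists) > eps:
        dists = [apply_kernel(mu, P) for mu in dists]
        t += 1
    return t

def adiabatic_error(P_init, P_final, pi_f, T):
    # worst case over point-mass starts of || nu P_{1/T} ... P_1 - pi_f ||_TV,
    # with P_s = (1 - s) P_initial + s P_final as in (1)
    n = len(P_init)
    worst = 0.0
    for start in range(n):
        mu = [1.0 if k == start else 0.0 for k in range(n)]
        for t in range(1, T + 1):
            s = t / T
            P_s = [[(1 - s) * P_init[i][j] + s * P_final[i][j]
                    for j in range(n)] for i in range(n)]
            mu = apply_kernel(mu, P_s)
        worst = max(worst, tv(mu, pi_f))
    return worst

P_init = [[0.5, 0.5], [0.5, 0.5]]
P_final = [[0.9, 0.1], [0.3, 0.7]]
pi_f = [0.75, 0.25]                 # stationary distribution of P_final
t_mix = mixing_time(P_final, pi_f, 0.01)
errs = [adiabatic_error(P_init, P_final, pi_f, T) for T in (2, 16, 64)]
```

For this toy pair the error decays as $T$ grows, in line with the theorem: a transition time of order $t_{mix}^2(\epsilon/2)/\epsilon$ suffices.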
Similar adiabatic results hold in the case of continuous-time Markov chains. There, the concept of an adiabatic time is defined within the same setting and a relationship with the mixing time is shown. Let us state a continuous adiabatic result from [5], and then prove a more general statement of the theorem in the next section. Once again we define the mixing time as a measurement of the time it takes for a Markov chain to reach a state that is close enough to its stationary distribution. For the continuous-time, finite-state case we look at the evolution of the Markov chain through its probability transition matrix as a function of time.

Definition 3. Suppose $P^t$ is a finite continuous-time Markov chain with a unique stationary distribution $\pi$. Given an $\epsilon > 0$, the mixing time $t_{mix}(\epsilon)$ is defined as

$$t_{mix}(\epsilon) = \inf\left\{t : \|\nu P^t - \pi\|_{TV} \le \epsilon \text{ for all probability distributions } \nu\right\}.$$

To define an adiabatic time we have to look at the linear evolution from a generator for the initial probability transition matrix to a generator for the final probability transition matrix. Suppose $Q_{initial}$ and $Q_{final}$ are two bounded generators for continuous-time Markov processes on a finite state space $\Omega$, and $\pi_f$ is the unique stationary distribution for $Q_{final}$. Let us define a time inhomogeneous generator

$$Q[s] = (1-s)Q_{initial} + sQ_{final}. \qquad (2)$$

Given $T > 0$ and $0 \le t_1 \le t_2 \le T$, let $P_T(t_1, t_2)$ denote the matrix of transition probabilities of the Markov process generated by $Q[t/T]$ over the time interval $[t_1, t_2]$. With this new generator we define the adiabatic time to be the smallest transition time $T$ such that, regardless of our starting distribution, the continuous-time Markov chain generated by $Q[t/T]$ arrives at a state close enough to the stationary distribution $\pi_f$.

Definition 4. Given $\epsilon > 0$, a time $T$ is called the adiabatic time if it is the least $T$ such that

$$\max_{\nu}\|\nu P_T(0, T) - \pi_f\|_{TV} \le \epsilon,$$

where the maximum is taken over all probability distributions $\nu$ over $\Omega$.
The above definition for continuous-time Markov chains is similar to the one in the discrete-time setting. The corresponding adiabatic theorem for the continuous-time case was proved in [5].

Theorem (Kovchegov 2009). Let $t_{mix}$ denote the mixing time for $Q_{final}$. Take $\lambda$ such that

$$\lambda \ge \max_{i \in \Omega}\sum_{j: j \ne i} q^{initial}_{i,j} \quad \text{and} \quad \lambda \ge \max_{i \in \Omega}\sum_{j: j \ne i} q^{final}_{i,j},$$

where $q^{initial}_{i,j}$ and $q^{final}_{i,j}$ are the rates in $Q_{initial}$ and $Q_{final}$ respectively. Then the adiabatic time

$$T \le \frac{4\lambda\, t_{mix}^2(\epsilon/2)}{\epsilon} + \theta, \quad \text{where } \theta = t_{mix}(\epsilon/2) + \frac{\epsilon}{4\lambda}.$$

This is once again the best bound, as can be shown through the corresponding example. In the next section we will state the adiabatic results for Markov chains that generalize the above mentioned theorems in [5] and provide examples of applications in statistical mechanics. Section 2 is dedicated to proofs.
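A matching sketch for the continuous-time setting of Definitions 3 and 4 (again a toy example of our own; the generators, horizon, and Euler step are arbitrary choices): we integrate the forward equation $\frac{d\mu_t}{dt} = \mu_t Q[t/T]$ along the linear path $Q[s] = (1-s)Q_{initial} + sQ_{final}$ and compare $\mu_T$ with $\pi_f$.

```python
# Toy illustration of Definitions 3 and 4 (our own two-state generators):
# crude Euler integration of d(mu_t)/dt = mu_t Q[t/T].

def q_interp(Q0, Q1, s):
    # Q[s] = (1 - s) Q_initial + s Q_final
    n = len(Q0)
    return [[(1 - s) * Q0[i][j] + s * Q1[i][j] for j in range(n)] for i in range(n)]

def evolve(mu, Q0, Q1, T, steps_per_unit=200):
    steps = int(T * steps_per_unit)
    h = T / steps
    for k in range(steps):
        Q = q_interp(Q0, Q1, (k * h) / T)
        # forward Euler step; the list comprehension reads the old mu throughout
        mu = [mu[j] + h * sum(mu[i] * Q[i][j] for i in range(len(mu)))
              for j in range(len(mu))]
    return mu

Q_init = [[-1.0, 1.0], [1.0, -1.0]]     # rows sum to zero
Q_final = [[-1.0, 1.0], [3.0, -3.0]]
pi_f = [0.75, 0.25]                      # solves pi_f Q_final = 0
mu_T = evolve([1.0, 0.0], Q_init, Q_final, T=20.0)
err = 0.5 * (abs(mu_T[0] - pi_f[0]) + abs(mu_T[1] - pi_f[1]))
```

Because each $Q[s]$ has zero row sums, the Euler step preserves total mass exactly, and for a slow transition ($T = 20$ here) the final law tracks $\pi_f$ closely.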
1.2 Results and discussion

Here we extend the results from [5], and thus expand the range of problems that can be analyzed with these types of adiabatic theorems. One such problem, which we discuss in subsection 1.3, deals with adiabatic Glauber dynamics for the Ising model. Now, in order to solve a larger class of problems, we redefine the adiabatic transition for both the discrete and continuous cases. We consider an adiabatic dynamics where transition probabilities change gradually from $P_{initial} = \{p^{initial}_{i,j}\}$ to $P_{final} = \{p^{final}_{i,j}\}$ so that, for each pair of states $i$ and $j$, the corresponding mutation of $p_{i,j}$ from $p^{initial}_{i,j}$ to $p^{final}_{i,j}$ is implemented differently and not always linearly. In the case of discrete time steps, this means defining

$$p_{i,j}[s] = (1 - \varphi_{i,j}(s))\,p^{initial}_{i,j} + \varphi_{i,j}(s)\,p^{final}_{i,j}, \qquad (3)$$

where the $\varphi_{i,j}: [0,1] \to [0,1]$ are continuous functions such that $\varphi_{i,j}(0) = 0$ and $\varphi_{i,j}(1) = 1$ for all pairs of locations. We require the functions $\varphi_{i,j}$ to be such that the new operators $P_s = \{p_{i,j}[s]\}$ are Markov chains for all $s \in [0,1]$, e.g. $\varphi_{i,j} = \varphi_{i,k}$ for all $j, k \in \Omega$. The above definition generalizes (1). If we suppose there is a unique stationary distribution $\pi_f$ for $P_{final}$, then Definition 2 of the adiabatic time given in the previous section holds for the adiabatic dynamics defined in (3). The new $T$ is related to the mixing time via the following adiabatic theorem, which we prove in section 2.

Theorem 1 (Discrete Adiabatic Theorem). Let $P_{t/T} = \{p_{i,j}[t/T]\}$ be an inhomogeneous discrete-time Markov chain over $[0, T]$. Let $\varphi(s) = \min_{i,j}\varphi_{i,j}(s)$ be the pointwise minimum function of all of the $\varphi_{i,j}$ functions. If $m \ge 1$ is an integer such that $\varphi$ is $m+1$ times continuously differentiable in a neighborhood of $1$, $\varphi^{(k)}(1) = 0$ for all integers $k$ such that $1 \le k < m$, and $\varphi^{(m)}(1) \ne 0$, then

$$T = O\left(\left[\frac{t_{mix}^{m+1}(\epsilon/2)}{\epsilon}\right]^{1/m}\right).$$

The above is, in fact, the best bound in the new setting, as shown through the example given later. See subsection 1.4. We now extend the notion of adiabatic dynamics to continuous-time Markov generators as follows.
We let

$$q_{i,j}[s] = (1 - \varphi_{i,j}(s))\,q^{initial}_{i,j} + \varphi_{i,j}(s)\,q^{final}_{i,j} \quad \text{for all pairs } i \ne j, \qquad (4)$$

where once again the $\varphi_{i,j}: [0,1] \to [0,1]$ are continuous functions such that $\varphi_{i,j}(0) = 0$ and $\varphi_{i,j}(1) = 1$ for all pairs of locations. Also, we let $Q[s]$ denote the corresponding Markov generator.
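The role of the schedule functions in (3) and (4) can be illustrated with a small sketch (the kernels and the particular schedules below are our own choices): row-constant schedules keep every interpolated row stochastic, and a schedule whose first derivative vanishes at $1$ yields $m = 2$ in the sense of Theorem 1.

```python
# Sketch of the generalized interpolation (3) with non-linear schedules
# (our own toy choices): one schedule per row keeps each row a probability
# vector, which is the role of the condition phi_{i,j} = phi_{i,k}.

def p_s(P0, P1, phis, s):
    # p_ij[s] = (1 - phi_i(s)) p_ij^initial + phi_i(s) p_ij^final
    n = len(P0)
    return [[(1 - phis[i](s)) * P0[i][j] + phis[i](s) * P1[i][j]
             for j in range(n)] for i in range(n)]

P0 = [[0.5, 0.5], [0.5, 0.5]]
P1 = [[0.9, 0.1], [0.2, 0.8]]

# Both schedules are continuous with phi(0) = 0 and phi(1) = 1.  Their
# pointwise minimum is 1 - (1 - s)^2, whose first derivative vanishes at
# s = 1 while the second derivative does not, so m = 2 in Theorem 1.
phis = [lambda s: 1.0 - (1.0 - s) ** 2,
        lambda s: 1.0 - (1.0 - s) ** 3]

grid = [k / 100 for k in range(101)]
rows_ok = all(abs(sum(row) - 1.0) < 1e-12
              for s in grid for row in p_s(P0, P1, phis, s))
min_is_first = all(phis[0](s) <= phis[1](s) + 1e-12 for s in grid)
```

With $m = 2$, Theorem 1 gives $T = O\left(\left[t_{mix}^3(\epsilon/2)/\epsilon\right]^{1/2}\right)$ for this family, an asymptotically shorter transition than the $O(t_{mix}^2(\epsilon/2)/\epsilon)$ of the $m = 1$ linear schedule.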
If there is a unique stationary distribution $\pi_f$ for $Q_{final}$, then Definition 4 of the adiabatic time applies to the extended adiabatic dynamics in (4), and the new $T$ can again be related to the mixing time.

Theorem 2 (Continuous Adiabatic Theorem). Let $Q[t/T]$, $t \in [0, T]$, generate an inhomogeneous continuous-time Markov chain. Let $\varphi(s) = \min_{i,j}\varphi_{i,j}(s)$ be the pointwise minimum function of all of the $\varphi_{i,j}$ functions. Suppose $m \ge 1$ is an integer such that $\varphi$ is $m+1$ times continuously differentiable in a neighborhood of $1$, $\varphi^{(k)}(1) = 0$ for all integers $k$ such that $1 \le k < m$, and $\varphi^{(m)}(1) \ne 0$. Take $\lambda$ such that

$$\lambda \ge \max_{i \in \Omega}\sum_{j: j \ne i} q^{initial}_{i,j} \quad \text{and} \quad \lambda \ge \max_{i \in \Omega}\sum_{j: j \ne i} q^{final}_{i,j},$$

where $q^{initial}_{i,j}$ and $q^{final}_{i,j}$ are the rates in $Q_{initial}$ and $Q_{final}$ respectively. Then

$$T = O\left(\left[\frac{\lambda\, t_{mix}^{m+1}(\epsilon/2)}{\epsilon}\right]^{1/m}\right).$$

The reader can find the proof of this theorem in section 2. Again this is the best bound in the new setting, as can be shown through the same example. See subsection 1.4. Observe that we could take a $\varphi: [0,1] \to [0,1]$ such that $\varphi(s) \le \min_{i,j}\varphi_{i,j}(s)$ in both adiabatic theorems, thus guaranteeing that the function is nice enough.

Now we check that the above continuous adiabatic theorem is scale invariant. For a positive $M$, we scale the initial and final generators to be $\frac{1}{M}Q_{initial}$ and $\frac{1}{M}Q_{final}$ respectively. Then the adiabatic evolution is slowed down $M$ times, and the new adiabatic time should be of order

$$M\left[\frac{\lambda\, t_{mix}^{m+1}(\epsilon/2)}{\epsilon}\right]^{1/m}$$

with the old $t_{mix}$ and $\lambda$ taken before scaling. On the other hand, the new mixing time will be $Mt_{mix}$, and the new $\lambda$ is $\frac{\lambda}{M}$, as the rates are $M$ times lower. Plugging the new parameters into the expression in the theorem, we obtain

$$\left[\frac{\frac{\lambda}{M}\,(Mt_{mix})^{m+1}}{\epsilon}\right]^{1/m} = M\left[\frac{\lambda\, t_{mix}^{m+1}}{\epsilon}\right]^{1/m},$$

confirming that the theorem is invariant under time scaling.

Let us revisit adiabatic theorems in physics and quantum mechanics. The reader can find a version of the quantum adiabatic theorem in [7] and multiple other sources.
The adiabatic results in physics consider a system that transitions from one state to another while the energy function changes from an initial $H_{initial}$ to $H_{final}$. If the change in the energy function happens slowly enough, then for a system that is initially at one of the equilibrium states, i.e. an eigenstate of the initial energy function $H_{initial}$, the resulting state will end up $\epsilon$-close to the corresponding eigenvector of the final energy function $H_{final}$. That is, provided the change in the external conditions is
gradual enough, the $j$th eigenstate of $H_{initial}$ is carried to an $\epsilon$-proximity of the $j$th eigenstate of $H_{final}$. Often the adiabatic results concern only one eigenstate, the ground state. Thinking of the Schrödinger equation as an $\ell^2(\mathbb{C}^n)$ norm preserving linear dynamics, and of a finite Markov process as a natural description of an $\ell^1(\mathbb{R}^n_+)$ norm preserving linear dynamics, the ground state of one would correspond to the stationary state of the other. It is important to mention that, in addition to all of the above properties, the quantum adiabatic theorems often require the transition to be gradual enough for the state to be within an $\epsilon$-proximity of the corresponding ground state at each time during the transition. Taking this into account, the complete analogue of the quantum adiabatic theorem for $\ell^1(\mathbb{R}^n_+)$ would be the one in which the initial distribution is $\mu_0 = \pi_{initial}$ and

$$\|\mu_t - \pi_t\|_{TV} < \epsilon \quad \text{for all } t \in [0, T],$$

where $\mu_t = \mu_0 P_{1/T} P_{2/T} \cdots P_{t/T}$ is the distribution of the time inhomogeneous Markov chain at time $t \in [0, T]$, $\pi_{initial}$ is the stationary distribution of $P_{initial}$, and $\pi_t$ is the stationary distribution of $P_{t/T}$. See [8] for a related result. While we are currently working on proving the above mentioned complete analogue in both the discrete and continuous cases, the adiabatic results of this section are sufficiently strong to answer our questions concerning adiabatic Glauber dynamics, as stated in the following subsection. Observe that the results of the next subsection could not be obtained using the adiabatic theorems of [5].

Finally, we would like to point out that the models of adiabatic Markov evolution considered in this paper are similar to simulated annealing. See [3], [4], and [2]. We expect some of the adiabatic Markov chain results to be used for a class of optimization problems by introducing Monte Carlo Markov chains with self-adjusting rates.
1.3 Applications to Ising models with adiabatic Glauber dynamics

Let us first state a version of the quantum adiabatic theorem. Given are two Hamiltonians, $H_{initial}$ and $H_{final}$, acting on a quantum system. Let

$$H(s) = (1-s)H_{initial} + sH_{final}. \qquad (5)$$

Suppose the system evolves according to $H(t/T)$ from time $t = 0$ to time $T$. Then if $T$ is large enough, the final state of the system will be close to the ground state of $H_{final}$. They are close in the $\ell^2$ norm whenever $T \ge \frac{C}{\Delta^3}$, where $\Delta$ is the least spectral gap of $H(s)$ over all $s \in [0,1]$, and $C$ depends linearly on the square of the distance between $H_{initial}$ and $H_{final}$.

Now, switching to canonical ensembles of statistical mechanics will land us in a Gibbs measure space with familiar probabilistic properties, i.e. the Markov property of statistical independence. We consider a nearest-neighbor Ising model. There the spins can be of two types, $-1$ and $+1$, and the spins interact only with nearest neighbors. A
Hamiltonian determines the energy value of the interactions of the configuration of spins. Here, for a microstate, we multiply its energy by the thermodynamic beta and call it the Hamiltonian of the microstate. In other words, letting $x$ be a configuration of spins, the Hamiltonian we use in this paper is defined as

$$H(x) = -\beta\sum_{i,j} M_{i,j}\, x(i)x(j),$$

where $\beta$ is the thermodynamic beta, i.e. its inverse is the temperature times Boltzmann's constant, and $M = \{M_{i,j}\}$ is a symmetric matrix such that, for locations $i$ and $j$, $M_{i,j} = 0$ if $i$ is not a nearest neighbor of $j$ and $M_{i,j} = 1$ if $i$ is a nearest neighbor of $j$. The Markov property of statistical independence is reflected through the local Hamiltonian defined at every location $j$ as follows:

$$H^{loc}(x, j) = -\beta\sum_{i: i \sim j} x(i)x(j),$$

where $i \sim j$ means $i$ and $j$ are nearest neighbors on the graph. In the original, non-adiabatic case, the Glauber dynamics is used to generate the Gibbs distribution

$$\pi(x) = \frac{1}{Z(\beta)}\, e^{-H(x)}$$

over all spin configurations $x \in \{-1, +1\}^S$, where $S$ denotes the set of all sites of the graph, and $Z(\beta)$ is the normalization constant. Let us describe how the Glauber dynamics works in the case when each vertex of the connected graph is of the same degree. There, for each location $j$, we have an independent exponential clock with parameter one associated with it. When the clock rings, the spin $x(j)$ of the configuration $x$ at the site $j$ on the graph is reselected using the following probability:

$$P(x(j) = +1) = \frac{e^{-H^{loc}(x^+_j)}}{e^{-H^{loc}(x^-_j)} + e^{-H^{loc}(x^+_j)}} = \frac{1}{2}\left(1 - \tanh H^{loc}(x^+_j)\right),$$

where $x^+_j(i) = x^-_j(i) = x(i)$ for $i \ne j$, $x^+_j(j) = +1$ and $x^-_j(j) = -1$. Here $P(x(j) = -1) = 1 - P(x(j) = +1)$. Also $H^{loc}(x^-_j) = -H^{loc}(x^+_j)$. We now have a continuous-time Markov process whose state space is the collection of configurations of spins.

Now, consider an adiabatic evolution of Hamiltonians as in (5). There, at each time $t$, $H(s) = (1-s)H_{initial} + sH_{final}$, where $s = \frac{t}{T}$. The local Hamiltonians must therefore evolve accordingly,

$$H^{loc}_s = (1-s)H^{loc}_{initial} + sH^{loc}_{final},$$
and the adiabatic Glauber dynamics is the one where, when the clock rings, the spin $x(j)$ is reselected with probabilities

$$P_s(x(j) = +1) = \frac{e^{-H^{loc}_s(x^+_j)}}{e^{-H^{loc}_s(x^-_j)} + e^{-H^{loc}_s(x^+_j)}}.$$

Here too, $H^{loc}_s(x^-_j) = -H^{loc}_s(x^+_j)$. The stationary distribution of the $Q_{final}$-generated Markov process, i.e. Glauber dynamics with the $H_{final}$ energy, is, for a configuration $x$,

$$\pi(x) = \frac{e^{-H_{final}(x)}}{\sum_{\text{config. } x'} e^{-H_{final}(x')}}.$$

Let $\beta_0$ and $\beta_1$ denote the values of the thermodynamic beta for $H_{initial}$ and $H_{final}$ respectively.

1.3.1 Adiabatic Glauber dynamics on $\mathbb{Z}^2/n\mathbb{Z}^2$

Consider nonlinear adiabatic Glauber dynamics of an Ising model on the two-dimensional torus $\mathbb{Z}^2/n\mathbb{Z}^2$. There any two neighboring spin configurations $x$ and $y$ in $\{-1,+1\}^{n^2}$ differ at only one site on the graph, say $v \in \mathbb{Z}^2/n\mathbb{Z}^2$. That is,

$$y(u) = \begin{cases} x(u) & \text{if } u \ne v, \\ -x(u) & \text{if } u = v. \end{cases}$$

The transition rates evolve according to the adiabatic Glauber dynamics rules, and they can be represented as

$$q_{x,y}[s] = (1 - \varphi_{x,y}(s))\,q^{initial}_{x,y} + \varphi_{x,y}(s)\,q^{final}_{x,y}$$

as in (4). Here the functions $\varphi_{x,y}(s)$ for two neighbors $x$ and $y$ depend entirely on the spins around the discrepancy site $v$. Namely, if all four neighbors of $v$ are of the same spin, $+1$ or $-1$, then

$$\varphi_{x,y}(s) = \frac{\cosh(4\beta_1)\sinh\left(s(4\beta_0 - 4\beta_1)\right)}{\sinh(4\beta_0 - 4\beta_1)\cosh\left(4\beta_0 - s(4\beta_0 - 4\beta_1)\right)}.$$

If it is three of one kind and one of the other, i.e. three $+1$ and one $-1$, or three $-1$ and one $+1$, then

$$\varphi_{x,y}(s) = \frac{\cosh(2\beta_1)\sinh\left(s(2\beta_0 - 2\beta_1)\right)}{\sinh(2\beta_0 - 2\beta_1)\cosh\left(2\beta_0 - s(2\beta_0 - 2\beta_1)\right)}.$$

If there are two of each kind, any function works, as both the initial and the final local Hamiltonians produce the same transition rates $q^{initial}_{x,y} = q^{final}_{x,y} = 1/2$.
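The update rule and the schedules above can be checked numerically. The sketch below is our own minimal implementation (the torus size and the values of $\beta_0$, $\beta_1$ are arbitrary, and the closed form for $\varphi_{x,y}$ is the reconstruction used in this section): it performs one heat-bath update and verifies that the schedule interpolates the flip rates of the linearly evolving Hamiltonian, i.e. that $q[s] = (1-\varphi(s))\,q^{initial} + \varphi(s)\,q^{final}$ with $\beta_s = (1-s)\beta_0 + s\beta_1$.

```python
# One Glauber update on the torus Z^2/nZ^2, plus a consistency check of the
# reconstructed schedule phi_{x,y}.  Assumes beta_0 != beta_1.
import math, random

def local_hamiltonian(x, j, beta, n):
    # H_loc(x, j) = -beta * sum over nearest neighbours i ~ j of x(i) x(j)
    r, c = j
    nbrs = [((r - 1) % n, c), ((r + 1) % n, c), (r, (c - 1) % n), (r, (c + 1) % n)]
    return -beta * sum(x[i] * x[j] for i in nbrs)

def glauber_update(x, j, beta, n, rng):
    # reselect the spin at j: P(x(j) = +1) = e^{-H+} / (e^{-H+} + e^{-H-})
    xp, xm = dict(x), dict(x)
    xp[j], xm[j] = +1, -1
    hp = local_hamiltonian(xp, j, beta, n)
    hm = local_hamiltonian(xm, j, beta, n)
    p_plus = math.exp(-hp) / (math.exp(-hp) + math.exp(-hm))
    x[j] = +1 if rng.random() < p_plus else -1
    return p_plus

def phi(s, b0, b1, k):
    # reconstructed schedule; k = 4 when all four neighbours of the
    # discrepancy site agree, k = 2 in the three-against-one case
    a = k * (b0 - b1)
    return (math.cosh(k * b1) * math.sinh(s * a)) / (math.sinh(a) * math.cosh(k * b0 - s * a))

def flip_rate(beta, k):
    # rate of flipping a spin aligned with a neighbour field of strength k*beta
    return 0.5 * (1.0 - math.tanh(k * beta))

n, b0, b1 = 4, 0.7, 0.3
rng = random.Random(0)
x = {(r, c): 1 for r in range(n) for c in range(n)}   # all-plus start
p = glauber_update(x, (0, 0), b1, n, rng)

# interpolation check at an interior point, with beta_s linear in s
s, k = 0.4, 4
lhs = flip_rate((1 - s) * b0 + s * b1, k)
rhs = (1 - phi(s, b0, b1, k)) * flip_rate(b0, k) + phi(s, b0, b1, k) * flip_rate(b1, k)
```

Since the flip rate is affine in $\tanh(k\beta)$, the schedule is exactly the interpolation coefficient in $\tanh$-space, which is why `lhs` and `rhs` agree to machine precision.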
Suppose $\tanh(\beta_1) < \frac{1}{4}$. Observe that

$$\frac{\cosh(4\beta_1)\sinh\left(s(4\beta_0 - 4\beta_1)\right)}{\sinh(4\beta_0 - 4\beta_1)\cosh\left(4\beta_0 - s(4\beta_0 - 4\beta_1)\right)} \ge \frac{\cosh(2\beta_1)\sinh\left(s(2\beta_0 - 2\beta_1)\right)}{\sinh(2\beta_0 - 2\beta_1)\cosh\left(2\beta_0 - s(2\beta_0 - 2\beta_1)\right)}$$

for $s \in [0,1]$. Therefore, by Theorem 15.1 in [6] and Theorem 2 of this paper, the adiabatic time

$$T = O\left(\frac{C\,n^2\left[\log n + \log \epsilon^{-1}\right]^2}{\epsilon}\right),$$

where

$$C = \frac{2(\beta_0 - \beta_1)\left[\coth\left(2(\beta_0 - \beta_1)\right) + \tanh(2\beta_1)\right]}{1 - 4\tanh(\beta_1)}.$$

Here, at every vertex of the torus we attached a Poisson clock with rate one, and therefore we can take $\lambda = n^2$. Also $m = 1$ in the theorem, and one can find the expression for $t_{mix}$ in [6].

1.3.2 Adiabatic Glauber dynamics on $\mathbb{Z}^d/n\mathbb{Z}^d$

The adiabatic Glauber dynamics of an Ising model on a $d$-dimensional torus $\mathbb{Z}^d/n\mathbb{Z}^d$ is solved similarly. There the minimum function $\varphi(s)$ of Theorem 2 is the same as in the case of $d = 2$:

$$\varphi(s) = \frac{\cosh(2\beta_1)\sinh\left(s(2\beta_0 - 2\beta_1)\right)}{\sinh(2\beta_0 - 2\beta_1)\cosh\left(2\beta_0 - s(2\beta_0 - 2\beta_1)\right)},$$

and the adiabatic time

$$T = O\left(\frac{C\,n^d\left[\log n + \log \epsilon^{-1}\right]^2}{\epsilon}\right) \quad \text{if } \tanh(\beta_1) < \frac{1}{2d},$$

where again $C$ is the constant above.

Notice that the time scaling argument that followed the statement of Theorem 2 works here as well. That is, if we use one Poisson clock of rate one for all vertices, or equivalently place Poisson clocks of rate $n^{-d}$ at every individual vertex, the new adiabatic time will be

$$T = O\left(\frac{C\,n^{2d}\left[\log n + \log \epsilon^{-1}\right]^2}{\epsilon}\right),$$

as $\lambda = 1$ here.

1.4 Optimality of the bound

In this subsection we give examples showing that the $t_{mix}^2$ order of the adiabatic time given in Kovchegov [5], and the $t_{mix}^{\frac{m+1}{m}}$ order for the more general settings of this paper, are in fact optimal. We consider the discrete probability transition matrices $P_{initial}$, which resets any state to $0$ in one step, and $P_{final}$, the deterministic walk that moves from $j$ to $j+1$ and holds at $n$,
over the $n+1$ states $\{0, 1, 2, \ldots, n\}$, and we let the discrete-time adiabatic probability transition matrix be $P_s = (1-s)P_{initial} + sP_{final}$ as in [5]. Let $\pi_f$ again denote the stationary distribution for $P_{final}$. Here $\pi_f = (0, \ldots, 0, 1)$ and the mixing time $t_{mix}(\epsilon) = n$ for any $\epsilon \in (0, 1)$.

Now, since $\mu P_{initial} = \rho$ for any probability distribution $\mu$, where $\rho = (1, 0, \ldots, 0)$, each step of the chain can be sampled by applying $P_{final}$ at time $t$ with probability $\frac{t}{T}$ and $P_{initial}$ otherwise. Hence

$$\nu P_{1/T} P_{2/T} \cdots P_1 = \sum_{j=0}^{T}\mathbb{P}(K = j)\,\rho P_{final}^{\,j},$$

where $K$ is the length of the final run of $P_{final}$ applications, so that

$$\mathbb{P}(K \ge j) = \prod_{t=T-j+1}^{T}\frac{t}{T} = \frac{T!}{(T-j)!\,T^j}.$$

Observe that $\rho P_{final}^{\,j} = \delta_{\min(j,n)}$, so $\left\|\rho P_{final}^{\,j} - \pi_f\right\|_{TV} = 1$ for any $0 \le j < n$. Therefore

$$\left\|\nu P_{1/T} P_{2/T} \cdots P_1 - \pi_f\right\|_{TV} \ge \mathbb{P}(K < n) = 1 - \frac{T!}{(T-n)!\,T^n} = 1 - \prod_{j=0}^{n-1}\left(1 - \frac{j}{T}\right) \ge 1 - e^{-\frac{n(n-1)}{2T}} \ge 1 - e^{-\frac{n^2}{4T}}$$

for $n \ge 2$. Thus $\left\|\nu P_{1/T} P_{2/T} \cdots P_1 - \pi_f\right\|_{TV} \le \epsilon$ implies

$$T \ge \frac{n^2}{4\log\frac{1}{1-\epsilon}} = \frac{t_{mix}^2}{4\log\frac{1}{1-\epsilon}},$$

proving that the order of the adiabatic time $T = O\left(\frac{t_{mix}^2(\epsilon/2)}{\epsilon}\right)$ in [5] is optimal.

Optimal bound for general settings

The same example works in the more general setting considered in this paper. For the same $P_{initial}$ and $P_{final}$, let $p_{i,j}[s] = (1 - \varphi(s))\,p^{initial}_{i,j} + \varphi(s)\,p^{final}_{i,j}$ as in (3). Suppose $\varphi_{i,j}(s) = \varphi(s)$ for all pairs of states $(i,j)$, and suppose $m \ge 1$ is an integer such that $\varphi$ is $m+1$ times continuously differentiable in a neighborhood of $1$, $\varphi^{(k)}(1) = 0$ for all integers $k$ such that $1 \le k < m$,
and $\varphi^{(m)}(1) \ne 0$. Then, with $K$ again denoting the length of the final run of $P_{final}$ applications,

$$\nu P_{1/T} P_{2/T} \cdots P_1 = \sum_{l=0}^{T-1}\left(1 - \varphi\!\left(\tfrac{T-l}{T}\right)\right)\left(\prod_{j=T-l+1}^{T}\varphi\!\left(\tfrac{j}{T}\right)\right)\rho P_{final}^{\,l} + \left(\prod_{j=1}^{T}\varphi\!\left(\tfrac{j}{T}\right)\right)\nu P_{final}^{\,T},$$

and therefore

$$\left\|\nu P_{1/T} P_{2/T} \cdots P_1 - \pi_f\right\|_{TV} \ge \mathbb{P}(K < n) = 1 - \prod_{j=T-n+1}^{T}\varphi\!\left(\tfrac{j}{T}\right).$$

The minimum function satisfies

$$\varphi(x) = 1 + \frac{\varphi^{(m)}(1)(x-1)^m}{m!} + O\left((x-1)^{m+1}\right)$$

near $x = 1$, and, as $\log(1+x) \le x$,

$$\left\|\nu P_{1/T} P_{2/T} \cdots P_1 - \pi_f\right\|_{TV} \ge 1 - e^{\frac{(-1)^m\varphi^{(m)}(1)}{m!\,T^m}\sum_{j=1}^{n-1}j^m + O\left((n/T)^{m+1}\right)}.$$

It is a well-known fact that

$$\sum_{j=1}^{n-1}j^k = \frac{1}{k+1}\sum_{j=0}^{k}\binom{k+1}{j}B_j\,n^{k+1-j}, \qquad (6)$$

where $B_j$ is the $j$th Bernoulli number. Suppose $\left\|\nu P_{1/T} P_{2/T} \cdots P_1 - \pi_f\right\|_{TV} \le \epsilon$; then

$$\log\frac{1}{1-\epsilon} \ge \frac{(-1)^{m+1}\varphi^{(m)}(1)}{m!\,T^m}\left[\frac{n^{m+1}}{m+1} + \sum_{j=1}^{m}\binom{m+1}{j}\frac{B_j\,n^{m+1-j}}{m+1}\right] + O\left(\left(\frac{n}{T}\right)^{m+1}\right),$$

so that $T^m$ must be at least of order $n^{m+1} = t_{mix}^{m+1}$, thus confirming that the order of the adiabatic time $T = O\left(\left[\frac{t_{mix}^{m+1}(\epsilon/2)}{\epsilon}\right]^{1/m}\right)$ in Theorem 1 is optimal.
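The lower-bound mechanism can be seen in a short Monte-Carlo sketch (the reset-to-zero kernel and the deterministic shift are our reading of the example's matrices, and $n$, $T$, and the trial counts are arbitrary choices): at step $t$ the chain follows $P_{final}$ with probability $t/T$, so it ends at state $n$ only if the last $n$ steps all picked $P_{final}$, which forces $T$ to be of order $n^2 = t_{mix}^2$.

```python
# Monte-Carlo sketch of the lower-bound example (our reading of the elided
# matrices: P_initial resets to state 0, P_final shifts j -> j+1 and holds
# at n).  Step t applies P_final with probability t/T, else P_initial.
import random

def run_once(n, T, rng):
    state = 0
    for t in range(1, T + 1):
        if rng.random() < t / T:
            state = min(state + 1, n)   # P_final: deterministic shift
        else:
            state = 0                   # P_initial: reset
    return state

def miss_probability(n, T, trials, rng):
    # estimates || nu_T - pi_f ||_TV = P(final state != n)
    return sum(run_once(n, T, rng) != n for _ in range(trials)) / trials

rng = random.Random(1)
n = 10                                               # t_mix of P_final is n
short = miss_probability(n, 2 * n, 1000, rng)        # T of order t_mix
long_ = miss_probability(n, 40 * n * n, 1000, rng)   # T of order t_mix^2
```

The contrast between `short` and `long_` mirrors the computation above: with $T \asymp t_{mix}$ a trailing all-$P_{final}$ run of length $n$ is unlikely, while $T \asymp t_{mix}^2$ makes it typical.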
Naturally, there is a similar example in the continuous case, with generators $Q_{initial}$ and $Q_{final}$ that are the continuous-time analogues of $P_{initial}$ and $P_{final}$ above.

2 Proofs

In this section we give formal proofs of both adiabatic theorems, Theorem 1 and Theorem 2.

2.1 Proof of Theorem 1

Proof. We write

$$p_{i,j}[s] = (1 - \varphi_{i,j}(s))\,p^{initial}_{i,j} + (\varphi_{i,j}(s) - \varphi(s))\,p^{final}_{i,j} + \varphi(s)\,p^{final}_{i,j}$$

and define the transition probability matrix $\hat{P}_s = \{\hat{p}_{i,j}\}$ to be such that

$$(1 - \varphi(s))\,\hat{p}_{i,j} = (1 - \varphi_{i,j}(s))\,p^{initial}_{i,j} + (\varphi_{i,j}(s) - \varphi(s))\,p^{final}_{i,j}.$$

We will thus have that

$$P_s = (1 - \varphi(s))\hat{P}_s + \varphi(s)P_{final}.$$

Observe that

$$\nu P_{1/T} P_{2/T} \cdots P_1 = \left(\prod_{j=N+1}^{T}\varphi\!\left(\tfrac{j}{T}\right)\right)\nu_N P_{final}^{\,T-N} + E,$$

where $\nu_N = \nu P_{1/T} P_{2/T} \cdots P_{N/T}$, $E$ is the rest of the terms, and both $T$ and $N$ are natural numbers with $N < T$. By the triangle inequality, we have

$$\max_{\nu}\left\|\nu P_{1/T} P_{2/T} \cdots P_1 - \pi_f\right\|_{TV} \le \max_{\nu}\left\|\nu_N P_{final}^{\,T-N} - \pi_f\right\|_{TV} + S_N,$$

where $0 \le S_N \le 2\left(1 - \prod_{j=N+1}^{T}\varphi\!\left(\tfrac{j}{T}\right)\right)$.

Let us set $T - N = t_{mix}(\epsilon/2)$, where $\epsilon > 0$ is small. Then we have that

$$\max_{\nu}\left\|\nu_N P_{final}^{\,T-N} - \pi_f\right\|_{TV} \le \frac{\epsilon}{2},$$

and hence

$$\max_{\nu}\left\|\nu P_{1/T} P_{2/T} \cdots P_1 - \pi_f\right\|_{TV} \le \frac{\epsilon}{2} + S_N.$$
Setting $2\left(1 - \prod_{j=N+1}^{T}\varphi(j/T)\right) \le \frac{\epsilon}{2}$, we obtain

$$\log\left(1 - \frac{\epsilon}{4}\right) \le \sum_{j=N+1}^{T}\log\varphi\!\left(\tfrac{j}{T}\right).$$

We plug in the approximation of the minimum function $\varphi$ around $x = 1$,

$$\varphi(x) = 1 + \frac{\varphi^{(m)}(1)(x-1)^m}{m!} + O\left((x-1)^{m+1}\right),$$

obtaining

$$\log\frac{1}{1-\epsilon/4} \ge \frac{(-1)^{m+1}\varphi^{(m)}(1)}{m!\,T^m}\sum_{j=1}^{T-N-1}j^m + O\left(\frac{(T-N)^{m+1}}{T^{m+1}}\right).$$

Observe that $(-1)^{m+1}\varphi^{(m)}(1) \ge 0$, as $\varphi: [0,1] \to [0,1]$ and $\varphi(1) = 1$. By (6),

$$\sum_{j=1}^{t_{mix}(\epsilon/2)-1}j^m = \frac{1}{m+1}\sum_{k=0}^{m}\binom{m+1}{k}B_k\,t_{mix}^{m+1-k}(\epsilon/2),$$

where $B_k$ is the $k$th Bernoulli number, and therefore

$$\log\frac{1}{1-\epsilon/4} \ge \frac{(-1)^{m+1}\varphi^{(m)}(1)}{(m+1)!\,T^m}\sum_{k=0}^{m}\binom{m+1}{k}B_k\,t_{mix}^{m+1-k}(\epsilon/2) + O\left(\frac{t_{mix}^{m+1}(\epsilon/2)}{T^{m+1}}\right).$$

In order for the right-hand side of the above inequality to stay below $\log\frac{1}{1-\epsilon/4}$, which is of order $\epsilon$ for small $\epsilon$, it is sufficient for $T$ to be of order

$$O\left(\left[\frac{t_{mix}^{m+1}(\epsilon/2)}{\epsilon}\right]^{1/m}\right). \qquad \square$$

2.2 Proof of Theorem 2

Proof. Define $\hat{Q}_s$ to be a Markov generator with off-diagonal entries

$$\hat{q}_{i,j} = \frac{(1 - \varphi_{i,j}(s))\,q^{initial}_{i,j} + (\varphi_{i,j}(s) - \varphi(s))\,q^{final}_{i,j}}{1 - \varphi(s)}.$$

Then writing

$$q_{i,j}[s] = (1 - \varphi_{i,j}(s))\,q^{initial}_{i,j} + (\varphi_{i,j}(s) - \varphi(s))\,q^{final}_{i,j} + \varphi(s)\,q^{final}_{i,j}$$

would imply

$$Q[s] = (1 - \varphi(s))\hat{Q}_s + \varphi(s)Q_{final}.$$

Observe that

$$\lambda \ge \max_{i \in \Omega}\sum_{j: j \ne i}\hat{q}_{i,j} \quad \text{and} \quad \lambda \ge \max_{i \in \Omega}\sum_{j: j \ne i}q_{i,j}\!\left[\tfrac{t}{T}\right],$$
as $\lambda \ge \max_{i \in \Omega}\sum_{j: j \ne i}q^{initial}_{i,j}$ and $\lambda \ge \max_{i \in \Omega}\sum_{j: j \ne i}q^{final}_{i,j}$.

Let $P_{final}(t) = e^{tQ_{final}}$ denote the transition probability matrix associated with the generator $Q_{final}$, and let

$$P_0 = I + \frac{1}{\lambda}\hat{Q}_s \quad \text{and} \quad P_1 = I + \frac{1}{\lambda}Q_{final}.$$

Then $P_0$ and $P_1$ are discrete-time Markov chains. Conditioning on the number of arrivals of a rate $\lambda$ Poisson process within the $[N, T]$ time interval,

$$\nu P_T(0, T) = \nu_N P_T(N, T) = \nu_N\sum_{n=0}^{\infty}e^{-\lambda(T-N)}\frac{(\lambda(T-N))^n}{n!}\,I_n,$$

where $\nu_N = \nu P_T(0, N)$ and

$$I_n = \frac{n!}{(T-N)^n}\int\limits_{N < s_1 < \cdots < s_n < T}\prod_{i=1}^{n}\left[\left(1 - \varphi\!\left(\tfrac{s_i}{T}\right)\right)P_0 + \varphi\!\left(\tfrac{s_i}{T}\right)P_1\right]ds_1\cdots ds_n,$$

i.e. we integrate over the order statistics of the $n$ arrival times. Therefore, combining the terms with $P_{final}$, we obtain

$$\nu P_T(0, T) = \nu_N\sum_{n=0}^{\infty}e^{-\lambda(T-N)}\frac{\left(\lambda\int_N^T\varphi(s/T)\,ds\right)^n}{n!}\,P_1^n + E,$$

where $E$ denotes the rest of the terms. Take $K > 0$ and define

$$T = \left[\int_{1-\frac{1}{K}}^{1}\varphi(x)\,dx\right]^{-1}t_{mix}(\epsilon/2) \quad \text{and} \quad N = \frac{K-1}{K}\left[\int_{1-\frac{1}{K}}^{1}\varphi(x)\,dx\right]^{-1}t_{mix}(\epsilon/2),$$

so that $\lambda\int_N^T\varphi(s/T)\,ds = \lambda\,t_{mix}(\epsilon/2)$. Recall the approximation of the minimum function $\varphi$ around $x = 1$:

$$\varphi(x) = 1 + \frac{\varphi^{(m)}(1)(x-1)^m}{m!} + O\left((x-1)^{m+1}\right),$$

and therefore

$$\int_{1-\frac{1}{K}}^{1}\varphi(x)\,dx = \frac{1}{K}\left(1 + \frac{\gamma(K)}{K^m}\right),$$
where $\gamma(K) = \frac{(-1)^m\varphi^{(m)}(1)}{(m+1)!} + O(K^{-1})$. Thus we can write

$$\nu P_T(0, T) = \nu_N\sum_{n=0}^{\infty}e^{-\lambda\left(1+\frac{\gamma(K)}{K^m}\right)^{-1}t_{mix}(\epsilon/2)}\frac{(\lambda\,t_{mix}(\epsilon/2))^n}{n!}\,P_1^n + E,$$

as $T - N = \frac{T}{K} = \left(1 + \frac{\gamma(K)}{K^m}\right)^{-1}t_{mix}(\epsilon/2)$. We see, using the standard uniformization argument, that

$$\nu P_T(0, T) = e^{-\frac{\lambda\gamma(K)\,t_{mix}(\epsilon/2)}{K^m + \gamma(K)}}\,\nu_N\exp\left\{Q_{final}\,t_{mix}(\epsilon/2)\right\} + E.$$

Now, since $(-1)^m\varphi^{(m)}(1) \le 0$, we have that, for any probability distribution $\nu$,

$$\left\|\nu P_T(0, T) - \pi_f\right\|_{TV} \le \left\|\nu_N\exp\left\{Q_{final}\,t_{mix}(\epsilon/2)\right\} - \pi_f\right\|_{TV}\,e^{-\frac{\lambda\gamma(K)\,t_{mix}(\epsilon/2)}{K^m + \gamma(K)}} + S_N,$$

where, by the triangle inequality,

$$0 \le S_N \le 1 - e^{\frac{\lambda\gamma(K)\,t_{mix}(\epsilon/2)}{K^m + \gamma(K)}},$$

and, by the definition of $t_{mix}$,

$$\left\|\nu_N\exp\left\{Q_{final}\,t_{mix}(\epsilon/2)\right\} - \pi_f\right\|_{TV} \le \frac{\epsilon}{2}.$$

Taking $K = c\left(\frac{\lambda}{\epsilon}\right)^{\frac{1}{m}}t_{mix}^{\frac{1}{m}}(\epsilon/2)$ with a constant $c \gg \left(\frac{(-1)^{m+1}\varphi^{(m)}(1)}{(m+1)!}\right)^{\frac{1}{m}}$, we obtain

$$-\frac{\lambda\gamma(K)\,t_{mix}(\epsilon/2)}{K^m + \gamma(K)} < \log\frac{1}{1 - \epsilon/4},$$

and therefore

$$0 \le S_N \le 1 - e^{\frac{\lambda\gamma(K)\,t_{mix}(\epsilon/2)}{K^m + \gamma(K)}} < \frac{\epsilon}{4}.$$

Thus

$$T = \frac{K\,t_{mix}(\epsilon/2)}{1 + \gamma(K)K^{-m}} = O\left(\left[\frac{\lambda\,t_{mix}^{m+1}(\epsilon/2)}{\epsilon}\right]^{1/m}\right). \qquad \square$$

References

[1] D. Aharonov, W. van Dam, J. Kempe, Z. Landau, S. Lloyd and O. Regev, Adiabatic quantum computation is equivalent to standard quantum computation, SIAM Review, Vol. 50, No. 4, 2008.

[2] V. Cerny, A thermodynamical approach to the travelling salesman problem: an efficient simulation algorithm, Journal of Optimization Theory and Applications, 45, 1985, 41-51.
[3] S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, Optimization by simulated annealing, Science, Vol. 220, Number 4598, 1983.

[4] S. Kirkpatrick, Optimization by simulated annealing: quantitative studies, Journal of Statistical Physics, Vol. 34, Numbers 5-6, 1984.

[5] Y. Kovchegov, A note on adiabatic theorem for Markov chains, Statistics & Probability Letters, 80, 2010.

[6] D. A. Levin, Y. Peres and E. L. Wilmer, Markov Chains and Mixing Times, Amer. Math. Soc., Providence, RI, 2008.

[7] A. Messiah, Quantum Mechanics, John Wiley and Sons, NY, 1958.

[8] S. Rajagopalan, D. Shah and J. Shin, Network adiabatic theorem: an efficient randomized protocol for contention resolution, Proc. of the eleventh intl. joint conference on measurement and modeling of computer systems, 2009.

[9] L. Saloff-Coste and J. Zúñiga, Merging and stability for time inhomogeneous finite Markov chains, arXiv preprint [math.PR], 2010.

[10] L. Saloff-Coste and J. Zúñiga, Time inhomogeneous Markov chains with wave like behavior, Annals of Applied Probability, Vol. 20, Number 5, 2010.
More informationErgodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.
Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions
More informationQuantum mechanics in one hour
Chapter 2 Quantum mechanics in one hour 2.1 Introduction The purpose of this chapter is to refresh your knowledge of quantum mechanics and to establish notation. Depending on your background you might
More informationQuantum Annealing in spin glasses and quantum computing Anders W Sandvik, Boston University
PY502, Computational Physics, December 12, 2017 Quantum Annealing in spin glasses and quantum computing Anders W Sandvik, Boston University Advancing Research in Basic Science and Mathematics Example:
More informationAdiabatic quantum computation a tutorial for computer scientists
Adiabatic quantum computation a tutorial for computer scientists Itay Hen Dept. of Physics, UCSC Advanced Machine Learning class UCSC June 6 th 2012 Outline introduction I: what is a quantum computer?
More information8.1 Concentration inequality for Gaussian random matrix (cont d)
MGMT 69: Topics in High-dimensional Data Analysis Falll 26 Lecture 8: Spectral clustering and Laplacian matrices Lecturer: Jiaming Xu Scribe: Hyun-Ju Oh and Taotao He, October 4, 26 Outline Concentration
More informationMARKOV CHAINS AND HIDDEN MARKOV MODELS
MARKOV CHAINS AND HIDDEN MARKOV MODELS MERYL SEAH Abstract. This is an expository paper outlining the basics of Markov chains. We start the paper by explaining what a finite Markov chain is. Then we describe
More informationQuantum and classical annealing in spin glasses and quantum computing. Anders W Sandvik, Boston University
NATIONAL TAIWAN UNIVERSITY, COLLOQUIUM, MARCH 10, 2015 Quantum and classical annealing in spin glasses and quantum computing Anders W Sandvik, Boston University Cheng-Wei Liu (BU) Anatoli Polkovnikov (BU)
More informationQuantum Annealing for Prime Factorization
Quantum Annealing for Prime Factorization Shuxian Jiang, Sabre Kais Department of Chemistry and Computer Science, Purdue University Keith A. Britt, Alexander J. McCaskey, Travis S. Humble Quantum Computing
More informationFinding is as easy as detecting for quantum walks
Finding is as easy as detecting for quantum walks Jérémie Roland Hari Krovi Frédéric Magniez Maris Ozols LIAFA [ICALP 2010, arxiv:1002.2419] Jérémie Roland (NEC Labs) QIP 2011 1 / 14 Spatial search on
More informationThe query register and working memory together form the accessible memory, denoted H A. Thus the state of the algorithm is described by a vector
1 Query model In the quantum query model we wish to compute some function f and we access the input through queries. The complexity of f is the number of queries needed to compute f on a worst-case input
More informationComputing the Initial Temperature of Simulated Annealing
Computational Optimization and Applications, 29, 369 385, 2004 c 2004 Kluwer Academic Publishers. Manufactured in he Netherlands. Computing the Initial emperature of Simulated Annealing WALID BEN-AMEUR
More informationarxiv: v1 [math.co] 3 Nov 2014
SPARSE MATRICES DESCRIBING ITERATIONS OF INTEGER-VALUED FUNCTIONS BERND C. KELLNER arxiv:1411.0590v1 [math.co] 3 Nov 014 Abstract. We consider iterations of integer-valued functions φ, which have no fixed
More informationPotentially useful reading Sakurai and Napolitano, sections 1.5 (Rotation), Schumacher & Westmoreland chapter 2
Problem Set 2: Interferometers & Random Matrices Graduate Quantum I Physics 6572 James Sethna Due Friday Sept. 5 Last correction at August 28, 2014, 8:32 pm Potentially useful reading Sakurai and Napolitano,
More informationDerivation of the GENERIC form of nonequilibrium thermodynamics from a statistical optimization principle
Derivation of the GENERIC form of nonequilibrium thermodynamics from a statistical optimization principle Bruce Turkington Univ. of Massachusetts Amherst An optimization principle for deriving nonequilibrium
More informationINTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING
INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING ERIC SHANG Abstract. This paper provides an introduction to Markov chains and their basic classifications and interesting properties. After establishing
More informationIntro. Each particle has energy that we assume to be an integer E i. Any single-particle energy is equally probable for exchange, except zero, assume
Intro Take N particles 5 5 5 5 5 5 Each particle has energy that we assume to be an integer E i (above, all have 5) Particle pairs can exchange energy E i! E i +1andE j! E j 1 5 4 5 6 5 5 Total number
More informationMixing Times and Hitting Times
Mixing Times and Hitting Times David Aldous January 12, 2010 Old Results and Old Puzzles Levin-Peres-Wilmer give some history of the emergence of this Mixing Times topic in the early 1980s, and we nowadays
More informationCONSTRAINED PERCOLATION ON Z 2
CONSTRAINED PERCOLATION ON Z 2 ZHONGYANG LI Abstract. We study a constrained percolation process on Z 2, and prove the almost sure nonexistence of infinite clusters and contours for a large class of probability
More informationExample: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected
4. Markov Chains A discrete time process {X n,n = 0,1,2,...} with discrete state space X n {0,1,2,...} is a Markov chain if it has the Markov property: P[X n+1 =j X n =i,x n 1 =i n 1,...,X 0 =i 0 ] = P[X
More information1 Mathematical preliminaries
1 Mathematical preliminaries The mathematical language of quantum mechanics is that of vector spaces and linear algebra. In this preliminary section, we will collect the various definitions and mathematical
More informationHertz, Krogh, Palmer: Introduction to the Theory of Neural Computation. Addison-Wesley Publishing Company (1991). (v ji (1 x i ) + (1 v ji )x i )
Symmetric Networks Hertz, Krogh, Palmer: Introduction to the Theory of Neural Computation. Addison-Wesley Publishing Company (1991). How can we model an associative memory? Let M = {v 1,..., v m } be a
More informationRandom dynamical systems with microstructure
Random dynamical systems with microstructure Renato Feres Washington University, St. Louis ESI, July 2011 1/37 Acknowledgements Different aspects of this work are joint with: Tim Chumley (Central limit
More informationMarkov Chains for Tensor Network States
Sofyan Iblisdir University of Barcelona September 2013 arxiv 1309.4880 Outline Main issue discussed here: how to find good tensor network states? The optimal state problem Markov chains Multiplicative
More informationWhere Probability Meets Combinatorics and Statistical Mechanics
Where Probability Meets Combinatorics and Statistical Mechanics Daniel Ueltschi Department of Mathematics, University of Warwick MASDOC, 15 March 2011 Collaboration with V. Betz, N.M. Ercolani, C. Goldschmidt,
More informationA. R. SOLTANI AND Z. SHISHEBOR
GEORGIAN MAHEMAICAL JOURNAL: Vol. 6, No. 1, 1999, 91-98 WEAKLY PERIODIC SEQUENCES OF BOUNDED LINEAR RANSFORMAIONS: A SPECRAL CHARACERIZAION A. R. SOLANI AND Z. SHISHEBOR Abstract. Let X and Y be two Hilbert
More informationIntroduction to Adiabatic Quantum Computation
Introduction to Adiabatic Quantum Computation Vicky Choi Department of Computer Science Virginia Tech April 6, 2 Outline Motivation: Maximum Independent Set(MIS) Problem vs Ising Problem 2 Basics: Quantum
More informationDynamics for the critical 2D Potts/FK model: many questions and a few answers
Dynamics for the critical 2D Potts/FK model: many questions and a few answers Eyal Lubetzky May 2018 Courant Institute, New York University Outline The models: static and dynamical Dynamical phase transitions
More informationMonte Carlo Methods for Computation and Optimization (048715)
Technion Department of Electrical Engineering Monte Carlo Methods for Computation and Optimization (048715) Lecture Notes Prof. Nahum Shimkin Spring 2015 i PREFACE These lecture notes are intended for
More informationarxiv:quant-ph/ v2 26 Mar 2005
Adiabatic Quantum Computation is Equivalent to Standard Quantum Computation arxiv:quant-ph/0405098v2 26 Mar 2005 Dorit Aharonov School of Computer Science and Engineering, Hebrew University, Jerusalem,
More information6.842 Randomness and Computation March 3, Lecture 8
6.84 Randomness and Computation March 3, 04 Lecture 8 Lecturer: Ronitt Rubinfeld Scribe: Daniel Grier Useful Linear Algebra Let v = (v, v,..., v n ) be a non-zero n-dimensional row vector and P an n n
More informationd 3 r d 3 vf( r, v) = N (2) = CV C = n where n N/V is the total number of molecules per unit volume. Hence e βmv2 /2 d 3 rd 3 v (5)
LECTURE 12 Maxwell Velocity Distribution Suppose we have a dilute gas of molecules, each with mass m. If the gas is dilute enough, we can ignore the interactions between the molecules and the energy will
More informationLarge deviations and averaging for systems of slow fast stochastic reaction diffusion equations.
Large deviations and averaging for systems of slow fast stochastic reaction diffusion equations. Wenqing Hu. 1 (Joint work with Michael Salins 2, Konstantinos Spiliopoulos 3.) 1. Department of Mathematics
More information3 Undirected Graphical Models
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.438 Algorithms For Inference Fall 2014 3 Undirected Graphical Models In this lecture, we discuss undirected
More informationarxiv: v1 [math.pr] 1 Jan 2013
The role of dispersal in interacting patches subject to an Allee effect arxiv:1301.0125v1 [math.pr] 1 Jan 2013 1. Introduction N. Lanchier Abstract This article is concerned with a stochastic multi-patch
More informationStochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property
Chapter 1: and Markov chains Stochastic processes We study stochastic processes, which are families of random variables describing the evolution of a quantity with time. In some situations, we can treat
More information3.320 Lecture 18 (4/12/05)
3.320 Lecture 18 (4/12/05) Monte Carlo Simulation II and free energies Figure by MIT OCW. General Statistical Mechanics References D. Chandler, Introduction to Modern Statistical Mechanics D.A. McQuarrie,
More informationCONVERGENCE THEOREM FOR FINITE MARKOV CHAINS. Contents
CONVERGENCE THEOREM FOR FINITE MARKOV CHAINS ARI FREEDMAN Abstract. In this expository paper, I will give an overview of the necessary conditions for convergence in Markov chains on finite state spaces.
More informationMarkov Chains and MCMC
Markov Chains and MCMC Markov chains Let S = {1, 2,..., N} be a finite set consisting of N states. A Markov chain Y 0, Y 1, Y 2,... is a sequence of random variables, with Y t S for all points in time
More informationVerschränkung versus Stosszahlansatz: The second law rests on low correlation levels in our cosmic neighborhood
Verschränkung versus Stosszahlansatz: The second law rests on low correlation levels in our cosmic neighborhood Talk presented at the 2012 Anacapa Society Meeting Hamline University, Saint Paul, Minnesota
More informationEquilibrium, out of equilibrium and consequences
Equilibrium, out of equilibrium and consequences Katarzyna Sznajd-Weron Institute of Physics Wroc law University of Technology, Poland SF-MTPT Katarzyna Sznajd-Weron (WUT) Equilibrium, out of equilibrium
More informationNotes on Householder QR Factorization
Notes on Householder QR Factorization Robert A van de Geijn Department of Computer Science he University of exas at Austin Austin, X 7872 rvdg@csutexasedu September 2, 24 Motivation A fundamental problem
More informationFourier analysis of boolean functions in quantum computation
Fourier analysis of boolean functions in quantum computation Ashley Montanaro Centre for Quantum Information and Foundations, Department of Applied Mathematics and Theoretical Physics, University of Cambridge
More informationReflections in Hilbert Space III: Eigen-decomposition of Szegedy s operator
Reflections in Hilbert Space III: Eigen-decomposition of Szegedy s operator James Daniel Whitfield March 30, 01 By three methods we may learn wisdom: First, by reflection, which is the noblest; second,
More informationAnalysis of Polya-Gamma Gibbs sampler for Bayesian logistic analysis of variance
Electronic Journal of Statistics Vol. (207) 326 337 ISSN: 935-7524 DOI: 0.24/7-EJS227 Analysis of Polya-Gamma Gibbs sampler for Bayesian logistic analysis of variance Hee Min Choi Department of Statistics
More informationStein s Method for concentration inequalities
Number of Triangles in Erdős-Rényi random graph, UC Berkeley joint work with Sourav Chatterjee, UC Berkeley Cornell Probability Summer School July 7, 2009 Number of triangles in Erdős-Rényi random graph
More informationCONVERGENCE OF KAC S RANDOM WALK
CONVERGENCE OF KAC S RANDOM WALK IGOR PAK AND SERGIY SIDENKO Abstract. We study a long standing open problem on the mixing time of Kac s random walk on SOn, R) by random rotations. We obtain an upper bound
More information8.334: Statistical Mechanics II Problem Set # 4 Due: 4/9/14 Transfer Matrices & Position space renormalization
8.334: Statistical Mechanics II Problem Set # 4 Due: 4/9/14 Transfer Matrices & Position space renormalization This problem set is partly intended to introduce the transfer matrix method, which is used
More informationARTICLE IN PRESS European Journal of Combinatorics ( )
European Journal of Combinatorics ( ) Contents lists available at ScienceDirect European Journal of Combinatorics journal homepage: www.elsevier.com/locate/ejc Proof of a conjecture concerning the direct
More informationRandomized Algorithms
Randomized Algorithms Saniv Kumar, Google Research, NY EECS-6898, Columbia University - Fall, 010 Saniv Kumar 9/13/010 EECS6898 Large Scale Machine Learning 1 Curse of Dimensionality Gaussian Mixture Models
More informationDetailed Proof of The PerronFrobenius Theorem
Detailed Proof of The PerronFrobenius Theorem Arseny M Shur Ural Federal University October 30, 2016 1 Introduction This famous theorem has numerous applications, but to apply it you should understand
More informationLecture 19: November 10
CS294 Markov Chain Monte Carlo: Foundations & Applications Fall 2009 Lecture 19: November 10 Lecturer: Prof. Alistair Sinclair Scribes: Kevin Dick and Tanya Gordeeva Disclaimer: These notes have not been
More informationMixing Rates for the Gibbs Sampler over Restricted Boltzmann Machines
Mixing Rates for the Gibbs Sampler over Restricted Boltzmann Machines Christopher Tosh Department of Computer Science and Engineering University of California, San Diego ctosh@cs.ucsd.edu Abstract The
More informationStochastic Simulation
Stochastic Simulation Ulm University Institute of Stochastics Lecture Notes Dr. Tim Brereton Summer Term 2015 Ulm, 2015 2 Contents 1 Discrete-Time Markov Chains 5 1.1 Discrete-Time Markov Chains.....................
More informationCollective Effects. Equilibrium and Nonequilibrium Physics
1 Collective Effects in Equilibrium and Nonequilibrium Physics: Lecture 2, 24 March 2006 1 Collective Effects in Equilibrium and Nonequilibrium Physics Website: http://cncs.bnu.edu.cn/mccross/course/ Caltech
More informationRECURRENCE IN COUNTABLE STATE MARKOV CHAINS
RECURRENCE IN COUNTABLE STATE MARKOV CHAINS JIN WOO SUNG Abstract. This paper investigates the recurrence and transience of countable state irreducible Markov chains. Recurrence is the property that a
More informationQuestion: My computer only knows how to generate a uniform random variable. How do I generate others? f X (x)dx. f X (s)ds.
Simulation Question: My computer only knows how to generate a uniform random variable. How do I generate others?. Continuous Random Variables Recall that a random variable X is continuous if it has a probability
More informationMinicourse on: Markov Chain Monte Carlo: Simulation Techniques in Statistics
Minicourse on: Markov Chain Monte Carlo: Simulation Techniques in Statistics Eric Slud, Statistics Program Lecture 1: Metropolis-Hastings Algorithm, plus background in Simulation and Markov Chains. Lecture
More informationSolutions to Problem Set 8
Cornell University, Physics Department Fall 2014 PHYS-3341 Statistical Physics Prof. Itai Cohen Solutions to Problem Set 8 David C. sang, Woosong Choi 8.1 Chemical Equilibrium Reif 8.12: At a fixed temperature
More informationEfficient time evolution of one-dimensional quantum systems
Efficient time evolution of one-dimensional quantum systems Frank Pollmann Max-Planck-Institut für komplexer Systeme, Dresden, Germany Sep. 5, 2012 Hsinchu Problems we will address... Finding ground states
More informationUniversity of Chicago Autumn 2003 CS Markov Chain Monte Carlo Methods. Lecture 7: November 11, 2003 Estimating the permanent Eric Vigoda
University of Chicago Autumn 2003 CS37101-1 Markov Chain Monte Carlo Methods Lecture 7: November 11, 2003 Estimating the permanent Eric Vigoda We refer the reader to Jerrum s book [1] for the analysis
More informationLecture 2: September 8
CS294 Markov Chain Monte Carlo: Foundations & Applications Fall 2009 Lecture 2: September 8 Lecturer: Prof. Alistair Sinclair Scribes: Anand Bhaskar and Anindya De Disclaimer: These notes have not been
More informationUniformly Uniformly-ergodic Markov chains and BSDEs
Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,
More informationMolecular propagation through crossings and avoided crossings of electron energy levels
Molecular propagation through crossings and avoided crossings of electron energy levels George A. Hagedorn Department of Mathematics and Center for Statistical Mechanics and Mathematical Physics Virginia
More informationThe Quantum Heisenberg Ferromagnet
The Quantum Heisenberg Ferromagnet Soon after Schrödinger discovered the wave equation of quantum mechanics, Heisenberg and Dirac developed the first successful quantum theory of ferromagnetism W. Heisenberg,
More informationVector spaces. DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis.
Vector spaces DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Vector space Consists of: A set V A scalar
More informationReinforcement Learning
Reinforcement Learning March May, 2013 Schedule Update Introduction 03/13/2015 (10:15-12:15) Sala conferenze MDPs 03/18/2015 (10:15-12:15) Sala conferenze Solving MDPs 03/20/2015 (10:15-12:15) Aula Alpha
More informationSpectral Analysis of Random Walks
Graphs and Networks Lecture 9 Spectral Analysis of Random Walks Daniel A. Spielman September 26, 2013 9.1 Disclaimer These notes are not necessarily an accurate representation of what happened in class.
More informationConvergence of Random Walks and Conductance - Draft
Graphs and Networks Lecture 0 Convergence of Random Walks and Conductance - Draft Daniel A. Spielman October, 03 0. Overview I present a bound on the rate of convergence of random walks in graphs that
More informationMath Stochastic Processes & Simulation. Davar Khoshnevisan University of Utah
Math 5040 1 Stochastic Processes & Simulation Davar Khoshnevisan University of Utah Module 1 Generation of Discrete Random Variables Just about every programming language and environment has a randomnumber
More informationMarkov Chains Handout for Stat 110
Markov Chains Handout for Stat 0 Prof. Joe Blitzstein (Harvard Statistics Department) Introduction Markov chains were first introduced in 906 by Andrey Markov, with the goal of showing that the Law of
More informationPath Integrals. Andreas Wipf Theoretisch-Physikalisches-Institut Friedrich-Schiller-Universität, Max Wien Platz Jena
Path Integrals Andreas Wipf Theoretisch-Physikalisches-Institut Friedrich-Schiller-Universität, Max Wien Platz 1 07743 Jena 5. Auflage WS 2008/09 1. Auflage, SS 1991 (ETH-Zürich) I ask readers to report
More informationSTOCHASTIC PROCESSES Basic notions
J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving
More information