Quantum algorithms for searching, resampling, and hidden shift problems


Quantum algorithms for searching, resampling, and hidden shift problems

by

Māris Ozols

A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Doctor of Philosophy in Combinatorics & Optimization (Quantum Information)

Waterloo, Ontario, Canada, 2012

© Māris Ozols 2012

I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may be made electronically available to the public.

Abstract

This thesis is on quantum algorithms. It has three main themes: (1) quantum walk based search algorithms, (2) quantum rejection sampling, and (3) the Boolean function hidden shift problem. The first two parts deal with generic techniques for constructing quantum algorithms, and the last part is on quantum algorithms for a specific algebraic problem.

In the first part of this thesis we show how certain types of random walk search algorithms can be transformed into quantum algorithms that search quadratically faster. More formally, given a random walk on a graph with an unknown set of marked vertices, we construct a quantum walk that finds a marked vertex in a number of steps that is quadratically smaller than the hitting time of the random walk. The main idea of our approach is to interpolate the random walk from one that does not stop when a marked vertex is found to one that stops. The quantum equivalent of this procedure drives the initial superposition over all vertices to a superposition over marked vertices. We present an adiabatic as well as a circuit version of our algorithm, and apply it to the spatial search problem on the 2D grid.

In the second part we study a quantum version of the problem of resampling one probability distribution to another. More formally, given query access to a black box that produces a coherent superposition of unknown quantum states with given amplitudes, the problem is to prepare a coherent superposition of the same states with different specified amplitudes. Our main result is a tight characterization of the number of queries needed for this transformation. By utilizing the symmetries of the problem, we prove a lower bound using a hybrid argument and semidefinite programming.
For the matching upper bound we construct a quantum algorithm that generalizes the rejection sampling method first formalized by von Neumann in 1951. We describe quantum algorithms for the linear equations problem and quantum Metropolis sampling as applications of quantum rejection sampling.

In the third part we consider a hidden shift problem for Boolean functions: given oracle access to f(x + s), where f(x) is a known Boolean function, determine the hidden shift s. We construct quantum algorithms for this problem using the pretty good measurement and quantum rejection sampling. Both algorithms use the Fourier transform, and their complexity can be expressed in terms of the Fourier spectrum of f (in particular, in the second case it relates to water-filling of the spectrum). We also construct algorithms for variations of this problem where the task is to verify a given shift or extract only a single bit of information about it.

Acknowledgements

First and most of all I would like to acknowledge my supervisors: I was extremely lucky to have four of them! Andrew Childs and Debbie Leung supervised me at the University of Waterloo, and Jérémie Roland and Martin Rötteler supervised me at NEC Laboratories America in Princeton. I would also like to acknowledge all my close friends: you have taught me many lessons that are far more important than what I have learned at university. I cherish the time we have spent together; it has shaped my personality in significant ways. I would not have become who I am without you all.

Table of Contents

List of Tables
List of Figures

1 Introduction
    Overview
        Part I
        Part II
        Part III
    Mathematical preliminaries

2 Semi-absorbing Markov chains and the extended hitting time
    Classical random walks
        Preliminaries
        Ergodicity
        Perron-Frobenius theorem
    Semi-absorbing Markov chains
        Definition
        Stationary distribution
        Reversibility
    Discriminant matrix
        Definition
        Spectral decomposition
        Principal eigenvector
        Derivative
    Hitting time
        Definition
        Extended hitting time
        Dependence on s

3 Adiabatic condition and the quantum hitting time of Markov chains
    Introduction
    Overview of the main result
    Interpolating Hamiltonian
        Definition
        Spectral decomposition
    The adiabatic condition
        The relevant subspace for adiabatic evolution
        The quantum adiabatic theorem
    Analysis of running time
        Choice of the schedule
        Running time
        Relation to the classical hitting time
    Conclusion and discussion

4 Finding is as easy as detecting for quantum walks
    Introduction
        Related work
        Our approach and contributions
    Preliminaries
        Spatial search on graphs
        Random walks
        Quantum walks
        Classical hitting time
        Quantum hitting time
    Discrete-time quantum walk
        Szegedy's construction
        Quantum circuit for W(s)
    Quantum search algorithms
        Algorithm with known values of p_M and HT(P, M)
        Algorithms with approximately known p_M
        Algorithms with a given bound on p_M or HT(P, M)
    Application to the 2D grid

5 Quantum rejection sampling
    Introduction
        Rejection sampling
        Related work
        Our results
    Definition of the problem
    Query complexity of quantum resampling
    Quantum rejection sampling algorithm
        Intuitive description of the algorithm
        Amplitude amplification subroutine and quantum rejection sampling algorithm
        Strong quantum rejection sampling algorithm
    Applications
        Linear systems of equations
        Quantum Metropolis sampling
        Boolean function hidden shift problem
    Conclusion and open problems

6 Quantum algorithms for the Boolean function hidden shift problem
    Introduction
        Hidden subgroup problem
        Hidden shift problem
        Hidden shift problem for Z_d-valued functions
        Outline
    Notation and basic definitions
        Boolean Fourier transform
        Quantum Fourier transform
        Convolution
        Influence
        Bent functions
    Quantum algorithms for preparing the t-fold Fourier states
        Computing w_s in the phase to prepare |Φ_t(s)⟩
        Computing w_s in the register to prepare |Ψ_t(s)⟩
    Quantum algorithms for finding a hidden shift
        The PGM (Pretty Good Measurement) approach
        The Grover approach (quantum rejection sampling)
        The Simon approach (sampling and classical post-processing)
    Quantum algorithms for related problems
        Parity extraction
        Verification algorithms
    Zeroes in the Fourier spectrum
        Undetectable shifts and anti-shifts
        Decision trees
        Zeroes in the t-fold Fourier spectrum
    Conclusions
        Open problems

APPENDICES

A Water-filling vector is optimal for the SDP

References

List of Tables

Summary of results on quantum search algorithms
Reduction from quantum algorithm for linear system of equations to a quantum resampling problem
Reduction from quantum Metropolis algorithm to a quantum resampling problem
Summary of quantum query complexity upper bounds for the Boolean function hidden shift problem

List of Figures

Structure of this thesis
Markov chain P and the corresponding graph
Summary of Markov chain properties
Markov chain P and the corresponding absorbing chain P′
Rotation of |v_n(s)⟩ in a two-dimensional subspace
Region corresponding to the double sum
Extended hitting time HT(s) as a function of s
Vectors |U⟩, |M⟩, and |v_n(s)⟩
Classical rejection sampling
Classes of quantum query complexity problems
Symmetrized algorithm
Quantum circuit for implementing U_ε
Quantum algorithm for preparing the t-fold Fourier state |Φ_t(s)⟩
Quantum algorithm for preparing the t-fold Fourier state |Ψ_t(s)⟩
SWAP test
Decision tree for function f

Chapter 1

Introduction

1.1 Overview

This thesis consists of six chapters that are grouped into three parts (see Fig. 1.1). All three parts are self-contained and can be read independently from each other. The last part uses a result from the previous part, but it is not essential to know the details of this result to understand the application.

Figure 1.1: Structure of this thesis. 1. Introduction; Part I: 2. Markov chains [KOR10, KMOR10], 3. Quantum search (adiabatic version) [KOR10], 4. Quantum search (circuit version) [KMOR10]; Part II: 5. Quantum rejection sampling [ORR12]; Part III: 6. Hidden shift problem.

Part I

The first part of this thesis is on quantizing Markov chains and is based on joint work with Hari Krovi, Frédéric Magniez, and Jérémie Roland. Most of the research for this

part was done in the summer of 2009 during my internship at NEC Laboratories America, and it was completed during subsequent short visits to Princeton. It is based on papers [KOR10, KMOR10] and consists of three chapters (see Fig. 1.1). Chapter 2 is purely classical, while Chapter 3 and Chapter 4 describe an adiabatic and a circuit-based version of a quantum walk algorithm that finds a marked vertex in a graph quadratically faster than any randomized classical algorithm.

Chapter 2 contains results on Markov chains which are common to papers [KOR10] and [KMOR10]. Their proofs do not require any knowledge of quantum computing, but use only linear algebra and probability theory. The main result of this chapter is the formula

    HT(s) = p_M² HT(P, M) / (1 − s(1 − p_M))²   (1.1)

from Theorem 2.22. It relates HT(P, M), the hitting time of Markov chain P, to the extended hitting time HT(s). This formula is used to analyze the quantum algorithms in Chapter 3 and Chapter 4, where it is important that HT(s) is monotonically increasing as a function of s.

In Chapter 3 we describe an adiabatic quantum algorithm for finding a marked vertex in a graph that has some set M of its vertices marked. A classical version of this algorithm is a random walk with transition matrix P(s) = (1 − s)P + sP′, where the parameter s slowly changes from s = 0 to s = 1 as the walk proceeds. During this evolution the transition matrix P(s) changes from the initial matrix P to the absorbing matrix P′, which has no outgoing transitions from marked vertices. The main result of Chapter 3 is Theorem 3.6, which states that this algorithm finds a marked element with high probability in time O(√HT(P, M)), where HT(P, M) is the corresponding time for the classical random walk P.

In Chapter 4 we use ideas from the previous chapter to design a search algorithm in the quantum circuit model, which is based on eigenvalue estimation.
This algorithm also achieves a quadratic speed-up over the classical case, which is the main result of this chapter (see Theorem 4.10). We also provide several variations of this algorithm that relax the assumptions in the main theorem, as well as apply these results to the spatial search problem on the 2D grid.

Part II

The second part of this thesis is on quantum rejection sampling. It consists of Chapter 5 and is almost identical to [ORR12], which is joint work with Martin Rötteler and Jérémie

Roland. This result originally came about as a byproduct of an algorithm for solving the Boolean function hidden shift problem, which is discussed in the third part of this thesis. I started to work on the hidden shift problem during my second internship at the NEC Laboratories America in the summer of 2010, when we discovered a Grover-like approach for solving this problem (see Sect. 6.4.2). The main ingredient in this approach is an amplitude amplification subroutine, which uses an oracle to implement a certain transformation between two unknown quantum states. This subroutine seemed to be of independent interest, so we decided to deviate from the original problem and study the abstract quantum state conversion problem solved by this subroutine. Only later did we come to realize that it is the quantum equivalent of a simple probabilistic procedure known as rejection sampling, which was studied by von Neumann [vN51].

Chapter 5 contains three main results: a query lower bound for the quantum resampling problem (see Sect. 5.3), the quantum rejection sampling algorithm (see Sect. 5.4), and applications to the linear systems of equations and quantum Metropolis sampling problems (see Sect. 5.5).

Part III

The third part of this thesis is based on unpublished work together with Andrew Childs, Martin Rötteler, and Jérémie Roland. It consists of Chapter 6, where we study a version of the hidden shift problem for Boolean functions. We assume that the underlying function is known and consider upper bounds on quantum query complexity by constructing quantum algorithms for this problem.

The most important part of Chapter 6 is Sect. 6.4, where three different approaches for solving this problem are considered. The first approach is based on the pretty good measurement: it corresponds to making all queries in parallel and performing a joint measurement of the obtained states (see Sect. 6.4.1).
The second approach is based on quantum rejection sampling: it uses amplitude amplification and performs all queries sequentially (see Sect. 6.4.2). The third approach is due to [Röt10, GRR11] and resembles Simon's algorithm [Sim94]: it makes independent queries, each immediately followed by a measurement, and classically post-processes the obtained data (see Sect. 6.4.3). We also consider the problem of extracting one bit of information about the hidden shift, namely, determining the inner product with a given string (see Sect. 6.5.1), as well as the problem of verifying a given shift (see Sect. 6.5.2). The idea of using decision trees to construct Boolean functions that have a large fraction of the Fourier spectrum equal to zero was suggested by Dmitry Gavinsky (see Sect. 6.6.2).

1.2 Mathematical preliminaries

We assume that the reader is familiar with linear algebra and the basics of quantum computing (for an introduction to quantum computing see [KLM07, KSV02]; for a more comprehensive overview see [NC10]). In particular, the reader should be familiar with two basic tools for constructing quantum algorithms: quantum amplitude amplification (see [KLM07, p. 163] or the original paper [BHMT00]) and eigenvalue estimation (see [KSV02, p. 125], [KLM07, p. 125], [NC10, p. 221], or the papers [Kit95, CEMM98]).

Chapter 2

Semi-absorbing Markov chains and the extended hitting time

Contents
2.1 Classical random walks
    2.1.1 Preliminaries
    2.1.2 Ergodicity
    2.1.3 Perron-Frobenius theorem
2.2 Semi-absorbing Markov chains
    2.2.1 Definition
    2.2.2 Stationary distribution
    2.2.3 Reversibility
2.3 Discriminant matrix
    2.3.1 Definition
    2.3.2 Spectral decomposition
    2.3.3 Principal eigenvector
    2.3.4 Derivative
2.4 Hitting time
    2.4.1 Definition
    2.4.2 Extended hitting time
    2.4.3 Dependence on s

In this chapter we study a special class of Markov chains that can be described by a one-parameter family P(s) corresponding to convex combinations of some chain P and the corresponding absorbing chain P′. Intuitively, P(s) has states that are hard to escape,

which is controlled by the interpolation parameter s. For this reason we call such chains semi-absorbing. We will consider various properties of these chains as functions of the interpolation parameter s.

We begin by discussing some preliminaries and defining basic concepts related to Markov chains, such as ergodicity (Sect. 2.1). Next, we define the interpolated Markov chain P(s) and consider several of its properties, such as the stationary distribution and reversibility (Sect. 2.2). We proceed by applying these concepts to define and study the discriminant matrix of P(s), which encodes all relevant properties of P(s), such as its eigenvalues and principal eigenvector, but has a much more convenient form (Sect. 2.3). Finally, we define the hitting time HT and the extended hitting time HT(s) and relate the two via Theorem 2.22, which is the main result of this chapter (Sect. 2.4). Results from this chapter will be used in Chapter 3 and Chapter 4 to construct quantum algorithms based on adiabatic evolution and discrete-time quantum walks, respectively.

2.1 Classical random walks

2.1.1 Preliminaries

Let us consider a Markov chain^1 on a discrete state space X of size n. Its transition probabilities can be described by a row-stochastic matrix P, i.e., an n × n matrix whose entries are real and non-negative and each of whose rows sums to one:

    ∀x ∈ X: Σ_{y∈X} P_xy = 1.   (2.1)

Here P_xy is the probability to go from state x to y. A Markov chain P with state space X has a corresponding underlying directed graph with n vertices labelled by elements of X, and directed arcs labelled by non-zero probabilities P_xy (see Fig. 2.1).

We will represent probability distributions by row vectors whose entries are also real, non-negative, and sum to one. If p is the initial probability distribution, then the probability distribution p′ after one step of P is obtained by multiplying p by the transition

^1 We will use the terms random walk, Markov chain, and stochastic matrix interchangeably. The same holds for state, vertex, and element.

matrix from the left-hand side: p′ = pP. A probability distribution π that satisfies πP = π is called the stationary distribution of P. For more background on Markov chains see, e.g., [GS97, KS60, KS07].

Figure 2.1: Markov chain P and the corresponding graph with transition probabilities.

2.1.2 Ergodicity

Definition 2.1. A Markov chain is called

irreducible, if any state in the underlying directed graph can be reached from any other by a finite number of steps (i.e., the graph is strongly connected);

aperiodic, if there is no integer k > 1 that divides the length of every directed cycle of the underlying directed graph;

ergodic, if it is both irreducible and aperiodic.

Figure 2.2: The order in which Markov chain properties from Definition 2.1 are typically imposed: stochastic, irreducible, aperiodic (together, ergodic), reversible. Reversibility will be defined in Sect. 2.2.3.

2.1.3 Perron-Frobenius theorem

The following theorem will be very useful for us. It is essentially the standard Perron-Frobenius theorem [HJ90, Theorem 8.4.4, p. 508], but adapted for Markov chains. (This theorem is also known as the Ergodic Theorem for Markov chains [KS07, Theorem 5.9, p. 72].) The version presented here is based on the extensive overview of Perron-Frobenius theory in [Mey00, Chapter 8].

Theorem (Perron-Frobenius). Let P be a stochastic matrix. Then

all eigenvalues of P are at most 1 in absolute value, and 1 is an eigenvalue of P;

if P is irreducible, then the 1-eigenvector is unique and strictly positive (i.e., it is of the form cπ for some non-vanishing probability distribution π and c ≠ 0);

if in addition to being irreducible, P is also aperiodic (i.e., P is ergodic), then the remaining eigenvalues of P are strictly smaller than 1 in absolute value.

If P is irreducible but not aperiodic, it has some complex eigenvalues on the unit circle (which can be shown to be roots of unity) [Mey00, Chapter 8]. However, when in addition we also impose aperiodicity (and hence ergodicity), we are guaranteed that there is a unique eigenvalue of absolute value 1 and, in fact, it is equal to 1.

2.2 Semi-absorbing Markov chains

2.2.1 Definition

Assume that a subset M ⊆ X of size m := |M| of the states is marked (throughout this chapter we assume that M is not empty). Let P′ be the Markov chain obtained from P by turning all outgoing transitions from marked states into self-loops (see Fig. 2.3). We call P′ the absorbing version of P (see [KS60, Chapter III] and [GS97, Sect. 11.2]). Note that P′ differs from P only in the rows corresponding to the marked states (where it contains all zeros on non-diagonal elements, and ones on the diagonal). If we arrange the states of X so that the unmarked states U := X \ M come first, the matrices P and P′ have the following block structure:

    P := (P_UU, P_UM; P_MU, P_MM),   P′ := (P_UU, P_UM; 0, I),   (2.2)

where P_UU and P_MM are square matrices of size (n − m) × (n − m) and m × m, respectively, while P_UM and P_MU are matrices of size (n − m) × m and m × (n − m), respectively.

Figure 2.3: Directed graphs underlying Markov chain P (left) and the corresponding absorbing chain P′ (right). Outgoing arcs from vertices in the marked set M have been turned into self-loops in P′.

Let us define an interpolated Markov chain that interpolates between P and P′:

    P(s) := (1 − s)P + sP′,   0 ≤ s ≤ 1.   (2.3)
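The constructions in Eqs. (2.2) and (2.3) translate directly into code. A minimal sketch (our own, on a hypothetical 3-state chain): replace the marked rows of P by self-loops to obtain P′, then form the interpolation P(s); we also check the identity (P′^t)_UU = (P_UU)^t proved in Prop. 2.3 below.

```python
# Our own sketch of Eqs. (2.2)-(2.3); the chain and marked set are hypothetical.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
n = P.shape[0]
M = [2]                                  # marked states
U = [x for x in range(n) if x not in M]  # unmarked states

P_abs = P.copy()                         # the absorbing chain P'
P_abs[M, :] = 0.0
P_abs[M, M] = 1.0                        # self-loops on marked states

def P_interp(s):
    """Interpolated chain P(s) = (1-s)P + sP' for 0 <= s <= 1."""
    return (1 - s) * P + s * P_abs

assert np.allclose(P_interp(0.0), P)
assert np.allclose(P_interp(1.0), P_abs)
# Unmarked rows are unaffected by the interpolation, cf. Eq. (2.4).
assert np.allclose(P_interp(0.3)[U, :], P[U, :])
# The identity (P'^t)_UU = (P_UU)^t of Prop. 2.3.
t = 5
assert np.allclose(np.linalg.matrix_power(P_abs, t)[np.ix_(U, U)],
                   np.linalg.matrix_power(P[np.ix_(U, U)], t))
```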

This expression bears some resemblance to adiabatic quantum computation, where similar interpolations are usually defined for quantum Hamiltonians [FGGS00]. Indeed, we will use the interpolated Markov chain P(s) in Chapter 3 to construct an adiabatic quantum algorithm. Note that P(0) = P, P(1) = P′, and P(s) has the following block structure:

    P(s) = (P_UU, P_UM; (1 − s)P_MU, (1 − s)P_MM + sI).   (2.4)

Proposition 2.2. If P is ergodic then so is P(s) for s ∈ [0, 1). P(1) is not ergodic.

Proof. A non-zero transition probability in P remains non-zero also in P(s) for s ∈ [0, 1). Thus the ergodicity of P implies that P(s) is also ergodic for s ∈ [0, 1). However, P(1) is not irreducible, since states in U are not reachable from M. Thus P(1) is not ergodic.

Proposition 2.3. (P′^t)_UU = (P_UU)^t.

Proof. Let us derive an expression for P′^t, the matrix of transition probabilities corresponding to t applications of P′. Notice that (a, b; 0, 1)(a, b; 0, 1) = (a², ab + b; 0, 1). By induction we get

    P′^t = (P_UU^t, Σ_{k=0}^{t−1} P_UU^k P_UM; 0, I).   (2.5)

When restricted to U, it acts as P_UU^t.

Proposition 2.4 ([GS97, Theorem 11.3, p. 417]). If P is irreducible then lim_{k→∞} P_UU^k = 0.

Intuitively this means that the sub-stochastic process defined by P_UU eventually dies out or, equivalently, that the unmarked states of P′ eventually get absorbed (by Prop. 2.3).

Proof. Let us fix an unmarked initial state x. Since P is irreducible, we can reach a marked state from x in a finite number of steps. Note that this also holds true for P′. Let us denote the smallest number of steps by l_x and the corresponding probability by p_x. Thus in l := max_x l_x steps of P′ we are guaranteed to reach a marked state with probability at least p := min_x p_x > 0, independently of the initial state x ∈ U. Notice that the probability to still be in an unmarked state after lk steps is at most (1 − p)^k, which approaches zero as we increase k.

Proposition 2.5 ([KS60, Theorem 3.2.1, p. 46]). If P is irreducible then I − P_UU is invertible.

Proof. Notice that

    (I − P_UU)(I + P_UU + P_UU² + ⋯ + P_UU^{k−1}) = I − P_UU^k   (2.6)

and take the determinant of both sides. From Prop. 2.4 we see that lim_{k→∞} det(I − P_UU^k) = 1. By continuity, there exists k_0 such that det(I − P_UU^{k_0}) > 0, so the determinant of the left-hand side is non-zero as well. Using the multiplicativity of the determinant, we conclude that det(I − P_UU) ≠ 0 and thus I − P_UU is invertible.

In the Markov chain literature (I − P_UU)^{−1} is called the fundamental matrix of P′.

2.2.2 Stationary distribution

From now on let us demand that P is ergodic. Then according to the Perron-Frobenius Theorem it has a unique and non-vanishing stationary distribution π. Let π_U and π_M be row vectors of length n − m and m that are obtained by restricting π to the sets U and M, respectively. Then

    π = (π_U  π_M),   π′ := (0_U  π_M),   (2.7)

where 0_U is the all-zeroes row vector indexed by elements of U, and π′ satisfies π′P′ = π′. Let p_M := Σ_{x∈M} π_x be the probability to pick a marked element from the stationary distribution. In analogy to the definition of P(s) in Eq. (2.3), let π(s) be a convex combination of π and π′, appropriately normalized:

    π(s) := [(1 − s)π + sπ′] / [(1 − s) + s·p_M] = ((1 − s)π_U  π_M) / (1 − s(1 − p_M)).   (2.8)

Proposition 2.6. π(s) is the unique stationary distribution of P(s) for s ∈ [0, 1). At s = 1 any distribution with support only on marked states is stationary, including π(1).

Proof. Notice that

    (π − π′)(P − P′) = (π_U  0)·(0, 0; P_MU, P_MM − I) = 0,   (2.9)

which is equivalent to

    πP′ + π′P = πP + π′P′.   (2.10)

Using this equation we can check that π(s)P(s) = π(s) for any s ∈ [0, 1] (we verify this for the unnormalized vector; the normalization constant is unaffected):

    ((1 − s)π + sπ′)((1 − s)P + sP′)   (2.11)
    = (1 − s)²πP + (1 − s)s(πP′ + π′P) + s²π′P′   (2.12)
    = (1 − s)²π + (1 − s)s(π + π′) + s²π′   (2.13)
    = ((1 − s)π + sπ′)((1 − s) + s)   (2.14)
    = (1 − s)π + sπ′.   (2.15)

Recall from Prop. 2.2 that P(s) is ergodic for s ∈ [0, 1), so π(s) is the unique stationary distribution by the Perron-Frobenius Theorem. Since P′ acts trivially on marked states, any distribution with support only on marked states is stationary for P(1).

2.2.3 Reversibility

Definition 2.7. A Markov chain P is called reversible if it is ergodic and satisfies the so-called detailed balance condition

    ∀x, y ∈ X: π_x P_xy = π_y P_yx,   (2.16)

where π is the unique stationary distribution of P.

Intuitively this means that the net flow of probability in the stationary distribution between every pair of states is zero. Note that Eq. (2.16) is equivalent to

    diag(π) P = P^T diag(π) = (diag(π) P)^T,   (2.17)

where diag(π) is a diagonal matrix whose diagonal is given by the vector π. Thus Eq. (2.16) is equivalent to saying that the matrix diag(π)P is symmetric.

Proposition 2.8. If P is reversible then so is P(s) for any s ∈ [0, 1]. Hence, P(s) satisfies the extended detailed balance equation

    ∀s ∈ [0, 1], ∀x, y ∈ X: π_x(s)P_xy(s) = π_y(s)P_yx(s).   (2.18)

Proof. First, notice that the absorbing walk P′ is reversible^2 since

    diag(π′)P′ = (0, 0; 0, diag(π_M))·(P_UU, P_UM; 0, I) = (0, 0; 0, diag(π_M)) = diag(π′),   (2.19)

^2 Strictly speaking, the definition of reversibility also includes ergodicity for the stationary distribution to be uniquely defined. However, we will relax this requirement for P′ since, by continuity, π′ is the natural choice of the unique stationary distribution.

which is symmetric. Next, notice that

    diag(π − π′)(P − P′) = (diag(π_U), 0; 0, 0)·(0, 0; P_MU, P_MM − I) = 0,   (2.20)

which gives us an analogue of Eq. (2.10):

    diag(π′)P + diag(π)P′ = diag(π)P + diag(π′)P′.   (2.21)

Here the right-hand side is symmetric due to the reversibility of P and P′, thus so is the left-hand side. Using this we can check that P(s) is reversible:

    diag((1 − s)π + sπ′)·((1 − s)P + sP′)   (2.22)
    = (1 − s)² diag(π)P + (1 − s)s (diag(π)P′ + diag(π′)P) + s² diag(π′)P′,   (2.23)

where the first and last terms are symmetric since P and P′ are reversible, and the middle term is symmetric due to Eq. (2.21).

2.3 Discriminant matrix

2.3.1 Definition

The discriminant matrix of a Markov chain P(s) is

    D(s) := √(P(s) ∘ P(s)^T),   (2.24)

where the Hadamard product ∘ and the square root are computed entry-wise. This matrix was introduced by Szegedy in [Sze04a, Sze04b]. We prefer to work with D(s) rather than P(s), since the matrix of transition probabilities is not necessarily symmetric while its discriminant matrix is.

Proposition 2.9. If P is reversible then

    D(s) = diag(√π(s)) P(s) diag(√π(s))^{−1},   s ∈ [0, 1);   (2.25)

    D(1) = (diag(√π_U) P_UU diag(√π_U)^{−1}, 0; 0, I).   (2.26)

Here the square roots are also computed entry-wise, and M^{−1} denotes the matrix inverse of M. Notice that for s ∈ [0, 1) the right-hand side of Eq. (2.25) is well-defined, since P(s) is ergodic by Prop. 2.2 and thus according to the Perron-Frobenius Theorem has a unique and non-vanishing stationary distribution. However, recall from Prop. 2.6 that π(1) vanishes on U, so the right-hand side of Eq. (2.25) is no longer well-defined at s = 1. For this reason we have an alternative expression for D(1).

Proof (of Prop. 2.9). For a reversible Markov chain P the extended detailed balance condition in Eq. (2.18) implies that D_xy(s) = √(P_xy(s)P_yx(s)) = P_xy(s)·√(π_x(s)/π_y(s)). This is equivalent to Eq. (2.25). At s = 1, from Eq. (2.24) we have:

    D(1) = √(P(1) ∘ P(1)^T) = √((P_UU ∘ P_UU^T, 0; 0, I)) = (√(P_UU ∘ P_UU^T), 0; 0, I).   (2.27)

In the same way one can use Eq. (2.24) to compute the upper left block of D(s) for any s, and notice that it does not depend on s:

    D_UU(s) = √(P_UU ∘ P_UU^T) = D_UU(0) = diag(√π_U) P_UU diag(√π_U)^{−1},   (2.28)

where the last equality follows from Eq. (2.25) at s = 0. Together with Eq. (2.27) this gives us the desired expression in Eq. (2.26).

2.3.2 Spectral decomposition

Recall from Eq. (2.24) that D(s) is real and symmetric. Therefore, its eigenvalues are real and it has an orthonormal set of real eigenvectors. Let

    D(s) = Σ_{i=1}^n λ_i(s) |v_i(s)⟩⟨v_i(s)|   (2.29)

be the spectral decomposition of D(s) with eigenvalues λ_i(s) and eigenvectors^3 |v_i(s)⟩. Moreover, let us arrange the eigenvalues so that

    λ_1(s) ≤ λ_2(s) ≤ ⋯ ≤ λ_n(s).   (2.30)

^3 There is no need to use bra-ket notation at this point; nevertheless we adopt it, since the vectors |v_i(s)⟩ will later be used as quantum states.
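As a numerical sanity check on the definition in Eq. (2.24), the following sketch (our own; the symmetric 3-state chain is hypothetical, and a symmetric P is reversible with uniform π) builds D(s) for a small reversible chain and verifies the facts established around it: D(s) is symmetric, shares its eigenvalues with P(s), and has √π(s)^T as a (+1)-eigenvector.

```python
# Our own sketch; example chain is hypothetical. A symmetric P is reversible.
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])
M = [2]
P_abs = P.copy()
P_abs[M, :] = 0.0
P_abs[M, M] = 1.0

s = 0.7
Ps = (1 - s) * P + s * P_abs             # P(s), Eq. (2.3)
D = np.sqrt(Ps * Ps.T)                   # D(s), Eq. (2.24), entrywise

assert np.allclose(D, D.T)               # symmetric by construction
# Same spectrum as P(s).
assert np.allclose(np.sort(np.linalg.eigvals(Ps).real),
                   np.sort(np.linalg.eigvalsh(D)))
# Principal eigenvector sqrt(pi(s))^T with eigenvalue 1.
vals, vecs = np.linalg.eig(Ps.T)
pi_s = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi_s = pi_s / pi_s.sum()
v = np.sqrt(pi_s)
assert np.allclose(D @ v, v)
```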

From now on we will assume that P is reversible (and hence ergodic) without explicitly mentioning it. Under this assumption the matrices P(s) and D(s) are similar (see Prop. 2.10 below). This means that D(s) essentially has the same properties as P(s), but in addition it also admits a spectral decomposition with orthogonal eigenvectors. This will be very useful in Chapter 3, where we will find the spectral decomposition of the Hamiltonian H(s) in terms of that of D(s), and use it to relate properties of H(s) and P(s).

Proposition 2.10. The matrices P(s) and D(s) are similar for any s ∈ [0, 1] and therefore have the same eigenvalues. In particular, the eigenvalues of P(s) are real.

Proof. From Eq. (2.25) we see that the matrices D(s) and P(s) are similar for s ∈ [0, 1). From Eq. (2.26) we see that D(1) is similar to P′′ := (P_UU, 0; 0, I). To verify that P′′ and P(1) = (P_UU, P_UM; 0, I) are similar, let M := (P_UU − I, P_UM; 0, I). One can check that MP(1)M^{−1} = P′′, where M^{−1} = ((P_UU − I)^{−1}, −(P_UU − I)^{−1}P_UM; 0, I) exists, since P_UU − I is invertible according to Prop. 2.5. By transitivity, D(1) is also similar to P(1).

Proposition 2.11. The largest eigenvalue of D(s) is 1. It has multiplicity 1 when s ∈ [0, 1) and multiplicity m when s = 1. In other words,

    λ_{n−1}(s) < λ_n(s) = 1,   s ∈ [0, 1),   (2.31)
    λ_{n−m}(1) < λ_{n−m+1}(1) = ⋯ = λ_n(1) = 1.   (2.32)

Proof. Let us argue about P(s), since it has the same eigenvalues as D(s) by Prop. 2.10. From the Perron-Frobenius Theorem we have that λ_i(s) ≤ 1 for all i, and λ_n(s) = 1. In addition, by Prop. 2.2 the Markov chain P(s) is ergodic for any s ∈ [0, 1), so λ_i(s) < 1 for all i ≠ n. Finally, note by Eq. (2.26) that for s = 1 the eigenvalue 1 has multiplicity at least m. Recall from Eq. (2.28) that D_UU(1) and P_UU are similar. From Prop. 2.5 we conclude that all eigenvalues of P_UU are strictly less than 1. Thus the multiplicity of the eigenvalue 1 of D(1) is exactly m.

2.3.3 Principal eigenvector

Let us prove an analogue of Prop. 2.6 for the matrix D(s).
Proposition 2.12. √π(s)^T is the unique (+1)-eigenvector of D(s) for s ∈ [0, 1). At s = 1 any vector with support only on marked states is a (+1)-eigenvector, including √π(1)^T.

Proof. Since P(s) is row-stochastic, P(s)·1_X^T = 1_X^T, where 1_X is the all-ones row vector. Thus we can check that for s ∈ [0, 1),

    D(s)·√π(s)^T = diag(√π(s)) P(s) diag(√π(s))^{−1} √π(s)^T   (2.33)
    = diag(√π(s)) P(s) 1_X^T   (2.34)
    = diag(√π(s)) 1_X^T   (2.35)
    = √π(s)^T.   (2.36)

Uniqueness for s ∈ [0, 1) follows from the uniqueness of π(s) and Prop. 2.11. For the s = 1 case, notice from Eq. (2.26) that D(1) acts trivially on marked elements and recall from Eq. (2.8) that π(1) = (0_U  π_M)/p_M.

According to the above proposition, for any s ∈ [0, 1] we can choose the principal eigenvector |v_n(s)⟩ in the spectral decomposition of D(s) in Eq. (2.29) to be

    |v_n(s)⟩ := √π(s)^T.   (2.37)

We would like to have an intuitive understanding of how |v_n(s)⟩ evolves as a function of s. Let us introduce some useful notation that we will also need later. Let 0_U and 1_U (respectively, 0_M and 1_M) be the all-zeros and all-ones row vectors of dimension n − m (respectively, m) whose entries are indexed by elements of U (respectively, M). Furthermore, let

    π̃_U := π_U/(1 − p_M),   π̃_M := π_M/p_M   (2.38)

be the normalized row vectors describing the stationary distribution π restricted to the unmarked and marked states. Let us also define the following unit vectors in R^n:

    |U⟩ := √(π̃_U  0_M)^T = (1/√(1 − p_M)) Σ_{x∈U} √π_x |x⟩,   (2.39)
    |M⟩ := √(0_U  π̃_M)^T = (1/√p_M) Σ_{x∈M} √π_x |x⟩.   (2.40)

Then we can express |v_n(s)⟩ as a linear combination of |U⟩ and |M⟩.

Proposition. v_n(s) = cos θ(s) |U⟩ + sin θ(s) |M⟩, where

  cos θ(s) = √( (1 − s)(1 − p_M) / (1 − s(1 − p_M)) ),  sin θ(s) = √( p_M / (1 − s(1 − p_M)) ).   (2.41)

Proof. By substituting π(s) from Eq. (2.8) into Eq. (2.37) we get

  v_n(s) = √π(s)^T = √( ((1 − s)π_U, π_M) / (1 − s(1 − p_M)) )^T
         = √( ((1 − s)(1 − p_M) π̄_U, p_M π̄_M) / (1 − s(1 − p_M)) )^T,   (2.42)

which is the desired expression.

Thus v_n(s) lies in the two-dimensional subspace span{|U⟩, |M⟩} and is subject to a rotation as we change the parameter s (see Fig. 2.4). In particular,

  v_n(0) = √(1 − p_M) |U⟩ + √p_M |M⟩,  v_n(1) = |M⟩.   (2.43)

Figure 2.4: As s changes from zero to one, the evolution of the principal eigenvector v_n(s) corresponds to a rotation in the two-dimensional subspace span{|U⟩, |M⟩}.

Proposition. θ(s) and its derivative θ̇(s) := (d/ds) θ(s) are related as follows:

  2 θ̇(s) = sin θ(s) cos θ(s) / (1 − s).   (2.44)

Proof. Notice that

  (d/ds) sin² θ(s) = 2 θ̇(s) sin θ(s) cos θ(s).   (2.45)

On the other hand, according to Eq. (2.41) we have

  (d/ds) sin² θ(s) = (d/ds) ( p_M / (1 − s(1 − p_M)) ) = p_M (1 − p_M) / (1 − s(1 − p_M))² = sin² θ(s) cos² θ(s) / (1 − s).   (2.46)

By comparing both equations we get the desired result.

Derivative

Proposition. D(s) and its derivative Ḋ(s) := (d/ds) D(s) are related as follows:

  Ḋ(s) = (1 / (2(1 − s))) {Π_M, I − D(s)},   (2.47)

where {X, Y} := XY + YX is the anticommutator of X and Y, and Π_M := Σ_{x∈M} |x⟩⟨x| is the projector onto the m-dimensional subspace spanned by the marked states M.

Proof. Recall from Eq. (2.24) that D(s) = √(P(s) ∘ P(s)^T), where ∘ denotes the entrywise product and the square root is taken entrywise. The block structure of P(s) is given in Eq. (2.4). First, let us derive an expression for D_MM(s), the lower right block of D(s):

  D_MM(s) = √( P_MM(s) ∘ P_MM(s)^T )   (2.48)
          = √( ((1 − s)P_MM + sI) ∘ ((1 − s)P_MM^T + sI) ).   (2.49)

Let us separately consider the diagonal and off-diagonal entries of D_MM(s). For x, y ∈ M we have

  D_xy(s) = (1 − s) √(P_xy P_yx)  if x ≠ y,
            (1 − s) P_xx + s      if x = y.   (2.50)

Thus we can write D_MM(s) as

  D_MM(s) = (1 − s) √(P_MM ∘ P_MM^T) + sI.   (2.51)

Expressions for the remaining blocks of D(s) can be derived in a straightforward way. By putting all blocks together we get

  D(s) = [ √(P_UU ∘ P_UU^T)           √(1 − s) √(P_UM ∘ P_MU^T)      ]
         [ √(1 − s) √(P_MU ∘ P_UM^T)  (1 − s) √(P_MM ∘ P_MM^T) + sI ].   (2.52)

When we take the derivative with respect to s we find

  Ḋ(s) = [ 0                                    −(1/(2√(1 − s))) √(P_UM ∘ P_MU^T) ]
         [ −(1/(2√(1 − s))) √(P_MU ∘ P_UM^T)    I − √(P_MM ∘ P_MM^T)             ].   (2.53)

To relate Ḋ(s) and the original matrix D(s), observe that

  Π_M D(s) + D(s) Π_M = [ 0                          √(1 − s) √(P_UM ∘ P_MU^T)        ]
                        [ √(1 − s) √(P_MU ∘ P_UM^T)  2(1 − s) √(P_MM ∘ P_MM^T) + 2sI ],   (2.54)

which can be seen by overlaying the second column and row of D(s) given in Eq. (2.52). When we rescale this by an appropriate constant, we get

  −(1/(2(1 − s))) {Π_M, D(s)} = [ 0                                   −(1/(2√(1 − s))) √(P_UM ∘ P_MU^T)   ]
                                [ −(1/(2√(1 − s))) √(P_MU ∘ P_UM^T)   −√(P_MM ∘ P_MM^T) − (s/(1 − s)) I ].   (2.55)

This is very similar to the expression for Ḋ(s) in Eq. (2.53), except for a slightly different coefficient for the identity matrix in the lower right corner. We can correct this by adding Π_M with an appropriate constant:

  −(1/(2(1 − s))) {Π_M, D(s)} + (1/(1 − s)) Π_M = Ḋ(s).

Since {Π_M, I} = 2Π_M, the left-hand side is exactly (1/(2(1 − s))) {Π_M, I − D(s)}, which proves Eq. (2.47).

2.4 Hitting time

2.4.1 Definition

To define the hitting time of a Markov chain P, let us consider a simple classical algorithm for finding a marked element in the state space X using a random walk based on P.

Random Walk Algorithm
1. Sample a vertex x ∈ X according to the stationary distribution π of P.
2. If x is marked, output x and exit.
3. Otherwise, update x according to P and go back to step 2.

The hitting time of P is the expected number of applications of P during this algorithm (notice that the algorithm stops as soon as a marked element is reached, thus effectively it uses the absorbing Markov chain P′). Here is a more formal definition:

Definition. Let P be an ergodic Markov chain, and let M be a set of marked states. The hitting time of P with respect to M, denoted by HT(P, M), is the expected number of executions of the last step of the Random Walk Algorithm, conditioned on the initial vertex being unmarked.

Proposition. The hitting time of Markov chain P with respect to marked set M is

  HT(P, M) = Σ_{t=0}^∞ ⟨U| D^t(1) |U⟩.   (2.56)

Proof. The expected number of iterations in the Random Walk Algorithm is

  HT(P, M) := Σ_{l=1}^∞ l · Pr[need exactly l steps]   (2.57)
            = Σ_{l=1}^∞ Σ_{t=1}^{l} Pr[need exactly l steps]   (2.58)
            = Σ_{t=1}^∞ Σ_{l=t}^∞ Pr[need exactly l steps]   (2.59)
            = Σ_{t=1}^∞ Pr[need at least t steps]   (2.60)
            = Σ_{t=0}^∞ Pr[need more than t steps],   (2.61)

where the region corresponding to the double sum is shown in Fig. 2.5.

Figure 2.5: The region corresponding to the double sum in Eqs. (2.58) and (2.59).

Hence, we can consider the probability that no marked vertex is found after t steps, starting from an unmarked vertex distributed according to π̄_U = π_U/(1 − p_M). Let us find an explicit expression for this probability. The distribution of vertices at the first execution of step 3 of the Random Walk Algorithm is (π̄_U, 0_M), hence

  Pr[need more than t steps] = (π̄_U, 0_M) P′^t (1_U, 0_M)^T.   (2.62)

Recall from Prop. 2.3 that (P′^t)_UU = P_UU^t, so we can simplify Eq. (2.62) as follows:

  Pr[need more than t steps] = (π̄_U, 0_M) P′^t (1_U, 0_M)^T   (2.63)
    = (π_U / (1 − p_M)) P_UU^t 1_U^T   (2.64)
    = (√π_U / √(1 − p_M)) diag(√π_U) P_UU^t diag(√π_U)^{−1} (√π_U^T / √(1 − p_M))   (2.65)
    = ⟨U| D^t(1) |U⟩,   (2.66)

where the last equality follows from the expression for the discriminant matrix D(1) in Eq. (2.26). By plugging this back into Eq. (2.61) we get the desired result.

2.4.2 Extended hitting time

Let us define the following extension of the hitting time, based on Eq. (2.56):

  HT(s) := Σ_{t=0}^∞ ⟨U| ( D^t(s) − |v_n(s)⟩⟨v_n(s)| ) |U⟩.   (2.67)

Note that HT(1) = HT(P, M), since ⟨U|v_n(1)⟩ = ⟨U|M⟩ = 0. This justifies calling HT(s) the extended hitting time. We can use the similarity transformation between D(s) and P(s) from Prop. 2.9 to obtain an alternative expression for HT(s):

  HT(s) = Σ_{t=0}^∞ (π̄_U, 0_M) ( P^t(s) − Q(s) ) (1_U, 0_M)^T,   (2.68)

where Q(s) := lim_{t→∞} P^t(s) is a stochastic matrix all of whose rows are equal to π(s). Intuitively, HT(s) may be understood as the time it takes for P(s) to converge to its stationary distribution π(s), starting from (π̄_U, 0_M). For s = 1, the walk P(1) = P′ converges to the (non-unique) stationary distribution (0_U, π_M), which only has support over marked elements.
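The Random Walk Algorithm and the formula (2.56) can be illustrated numerically. The sketch below, on a hypothetical 4-state reversible chain (all names illustrative, not from the text), estimates HT(P, M) by direct simulation and compares it with a truncation of the series Σ_t ⟨U| D^t(1) |U⟩:

```python
import numpy as np

# Hypothetical 4-state reversible chain with a single marked state.
rng = np.random.default_rng(1)
n = 4
U, M = [0, 1, 2], [3]

W = rng.random((n, n)); W = W + W.T         # symmetric weights => reversible P
P = W / W.sum(axis=1, keepdims=True)
pi = W.sum(axis=1) / W.sum()
p_M = pi[M].sum()

def run_once():
    """One run of the Random Walk Algorithm, conditioned on starting unmarked."""
    x = rng.choice(U, p=pi[U] / (1 - p_M))  # step 1, conditioned on x in U
    steps = 0
    while x not in M:                       # step 2: stop at a marked vertex
        x = rng.choice(n, p=P[x])           # step 3: one application of P
        steps += 1
    return steps

est = np.mean([run_once() for _ in range(20000)])

# Eq. (2.56): HT(P, M) = sum_t <U| D^t(1) |U>, truncated at large t.
Pabs = P.copy(); Pabs[M, :] = 0.0; Pabs[M, M] = 1.0
D1 = np.sqrt(Pabs * Pabs.T)                 # discriminant of the absorbing walk
uket = np.zeros(n); uket[U] = np.sqrt(pi[U] / (1 - p_M))   # |U>
Dt, ht = np.eye(n), 0.0
for _ in range(2000):                       # geometric tail, so truncation is safe
    ht += uket @ Dt @ uket
    Dt = Dt @ D1

assert abs(est - ht) < 0.1 * ht             # Monte Carlo estimate agrees
```

The truncation works because all eigenvalues of D_UU(1) are strictly below 1, so the summands decay geometrically.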

Proposition. The extended hitting time can be expressed as

  HT(s) = ⟨U| A(s) |U⟩,  A(s) := Σ_{k : λ_k(s) ≠ 1} |v_k(s)⟩⟨v_k(s)| / (1 − λ_k(s)).   (2.69)

Proof. Rewrite Eq. (2.67) using the spectral decomposition of D(s) from Eq. (2.29):

  HT(s) = Σ_{t=0}^∞ Σ_{k ≠ n} λ_k^t(s) ⟨U|v_k(s)⟩⟨v_k(s)|U⟩ = Σ_{k : λ_k(s) ≠ 1} |⟨v_k(s)|U⟩|² / (1 − λ_k(s)),   (2.70)

where we exchanged the sums and used the expansion (1 − x)^{−1} = Σ_{t=0}^∞ x^t.

For technical reasons it will be important later that all eigenvalues of P(s) are non-negative. We can guarantee this using a standard trick: we replace the original Markov chain P with the lazy walk (P + I)/2, where I is the n × n identity matrix. In fact, we can assume without loss of generality that the original Markov chain already is lazy, since this affects the hitting time only by a constant factor, as shown below.

Proposition. Let P be an ergodic and reversible Markov chain. Then for any s ∈ [0, 1] the eigenvalues of (P(s) + I)/2 are between 0 and 1. Moreover, if the extended hitting time of P is HT(s), then the extended hitting time of (P + I)/2 is 2 HT(s).

Proof. Since P is reversible, so is P(s) by Prop. 2.2. Thus the eigenvalues of P(s) are real by Prop. 2.9. If λ_k(s) is an eigenvalue of P(s) then λ_k(s) ∈ [−1, 1] according to the Perron–Frobenius Theorem. Thus, the eigenvalues of (P(s) + I)/2 satisfy (λ_k(s) + 1)/2 ∈ [0, 1]. Recall from Prop. 2.9 that P(s) and D(s) are similar. Thus, the discriminant matrix of (P(s) + I)/2 is (D(s) + I)/2, which has the same eigenvectors as D(s). By the previous proposition we see that the extended hitting time of (P(s) + I)/2 is given by

  Σ_{k : λ_k(s) ≠ 1} |⟨v_k(s)|U⟩|² / ( 1 − (λ_k(s) + 1)/2 ).   (2.71)

Since 1 − (λ_k(s) + 1)/2 = (1 − λ_k(s))/2, the above expression is equal to 2 HT(s), as claimed.

The following property of A(s) will be useful on several occasions.

Proposition. A(s)|M⟩ = −(cos θ(s) / sin θ(s)) A(s)|U⟩.

Proof. Recall that λ_n(s) = 1, so A(s)|v_n(s)⟩ = 0 by definition. If we substitute v_n(s) = cos θ(s)|U⟩ + sin θ(s)|M⟩ in this equation, we get the desired formula.
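The three facts just proved — the spectral formula (2.69), the lazy-walk doubling, and the relation between A(s)|M⟩ and A(s)|U⟩ — are easy to confirm numerically. The sketch below again uses a hypothetical 5-state reversible chain (illustrative names only):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
U, M = [0, 1, 2], [3, 4]

W = rng.random((n, n)); W = W + W.T
P = W / W.sum(axis=1, keepdims=True)
pi = W.sum(axis=1) / W.sum()
p_M = pi[M].sum()
Pabs = P.copy(); Pabs[M, :] = 0.0; Pabs[M, M] = 1.0

uket = np.zeros(n); uket[U] = np.sqrt(pi[U] / (1 - p_M))   # |U>, Eq. (2.39)
mket = np.zeros(n); mket[M] = np.sqrt(pi[M] / p_M)         # |M>, Eq. (2.40)

def A_of(D):
    """A = sum over eigenvalues != 1 of |v_k><v_k| / (1 - lambda_k), Eq. (2.69)."""
    lam, V = np.linalg.eigh(D)
    A = np.zeros_like(D)
    for k in range(len(lam)):
        if not np.isclose(lam[k], 1.0):
            A += np.outer(V[:, k], V[:, k]) / (1 - lam[k])
    return A

s = 0.6
Ps = (1 - s) * P + s * Pabs
Ds = np.sqrt(Ps * Ps.T)
A = A_of(Ds)
HTs = uket @ A @ uket                       # HT(s) = <U| A(s) |U>

# The lazy walk (P + I)/2 has discriminant (D(s) + I)/2 and doubled HT(s).
assert np.isclose(uket @ A_of((Ds + np.eye(n)) / 2) @ uket, 2 * HTs)

# A(s)|M> = -(cos θ / sin θ) A(s)|U>, with the angles of Eq. (2.41).
c = np.sqrt((1 - s) * (1 - p_M) / (1 - s * (1 - p_M)))      # cos θ(s)
sn = np.sqrt(p_M / (1 - s * (1 - p_M)))                     # sin θ(s)
assert np.allclose(A @ mket, -(c / sn) * (A @ uket))

# Consistency with the series definition (2.67) of HT(s).
v = c * uket + sn * mket                    # principal eigenvector v_n(s)
Dt, series = np.eye(n), 0.0
for _ in range(3000):
    series += uket @ Dt @ uket - (uket @ v) ** 2
    Dt = Dt @ Ds
assert np.isclose(series, HTs, rtol=1e-3)
```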

2.4.3 Dependence on s

The goal of this section is to express HT(s) as a function of s and the hitting time HT(P, M) of the original Markov chain. The main idea is to relate (d/ds) HT(s) to HT(s) and then solve the resulting differential equation.

Lemma. The derivative of HT(s) is related to HT(s) as

  (d/ds) HT(s) = ( 2(1 − p_M) / (1 − s(1 − p_M)) ) HT(s),   (2.72)

where p_M is the probability to pick a marked state from the stationary distribution π of P.

Proof. Recall that HT(s) = ⟨U|A(s)|U⟩, where A(s) may be written as A(s) = B(s)^{−1} − Π_n(s) with

  B(s) := I − D(s) + Π_n(s),  Π_n(s) := |v_n(s)⟩⟨v_n(s)|.   (2.73)

Recall from Prop. 2.12 that v_n(s) is the unique (+1)-eigenvector of D(s) for s ∈ [0, 1), thus B(s) is indeed invertible when s is in this range. From now on we will not write the dependence on s explicitly. We will also often use ḟ as a shorthand for (d/ds) f(s). Let us start with

  (d/ds) HT = ⟨U| Ȧ |U⟩   (2.74)

and expand Ȧ using Eq. (2.73). To find (d/ds)(B^{−1}), take the derivative of both sides of B^{−1}B = I and get (d/ds)(B^{−1}) B + B^{−1} Ḃ = 0. Thus (d/ds)(B^{−1}) = −B^{−1} Ḃ B^{−1} and

  Ȧ = −B^{−1} Ḃ B^{−1} − Π̇_n.   (2.75)

Notice from Eq. (2.73) that Ḃ = −Ḋ + Π̇_n, thus Ȧ = B^{−1}(Ḋ − Π̇_n)B^{−1} − Π̇_n and (d/ds) HT = h_1 + h_2 + h_3, where

  h_1 := ⟨U| B^{−1} Ḋ B^{−1} |U⟩,   (2.76)
  h_2 := −⟨U| B^{−1} Π̇_n B^{−1} |U⟩,   (2.77)
  h_3 := −⟨U| Π̇_n |U⟩.   (2.78)

Let us evaluate each of these terms separately.

To evaluate the first term h_1, we substitute Ḋ = (1/(2(1 − s))) {Π_M, I − D} from Eq. (2.47) and replace I − D by B − Π_n according to Eq. (2.73):

  2(1 − s) h_1 = ⟨U| B^{−1} {Π_M, B − Π_n} B^{−1} |U⟩   (2.79)
              = ⟨U| B^{−1} ( {Π_M, B} − {Π_M, Π_n} ) B^{−1} |U⟩   (2.80)
              = ⟨U| {B^{−1}, Π_M} |U⟩ − ⟨U| B^{−1} {Π_M, Π_n} B^{−1} |U⟩.   (2.81)

Recall that Π_M = Σ_{x∈M} |x⟩⟨x| is the projector onto the marked states. Thus Π_M|U⟩ = 0 and the first term vanishes. Note that B has the same eigenvectors as D. In particular, B^{−1}|v_n⟩ = |v_n⟩ and thus B^{−1}Π_n = Π_n = Π_n B^{−1}. Using this we can expand the anticommutator in the second term: B^{−1}{Π_M, Π_n}B^{−1} = B^{−1}Π_M Π_n + Π_n Π_M B^{−1}. Since all three matrices in this expression are real and symmetric and |U⟩ is also real, both terms of the anticommutator have the same contribution, so we get

  2(1 − s) h_1 = −2 ⟨U| B^{−1} Π_M Π_n |U⟩.   (2.82)

Recall that v_n = cos θ |U⟩ + sin θ |M⟩, so we see that Π_M Π_n |U⟩ = Π_M |v_n⟩⟨v_n|U⟩ = sin θ |M⟩ cos θ. Moreover, B^{−1} = A + Π_n according to Eq. (2.73), so

  2(1 − s) h_1 = −2 sin θ cos θ ⟨U| (A + Π_n) |M⟩.   (2.83)

Recall from the last proposition of Sect. 2.4.2 that sin θ ⟨U|A|M⟩ = −cos θ ⟨U|A|U⟩. To simplify the second term, notice that ⟨U|Π_n|M⟩ = ⟨U|v_n⟩⟨v_n|M⟩ = cos θ sin θ. When we put this together, we get

  2(1 − s) h_1 = 2 cos² θ ⟨U|A|U⟩ − 2 sin² θ cos² θ,   (2.84)

or simply

  h_1 = (cos² θ / (1 − s)) ( ⟨U|A|U⟩ − sin² θ ).   (2.85)

Let us now consider the second term h_2 = −⟨U|B^{−1} Π̇_n B^{−1}|U⟩. First, we compute Π̇_n = |v̇_n⟩⟨v_n| + |v_n⟩⟨v̇_n|. Using B^{−1}|v_n⟩ = |v_n⟩ we get B^{−1} Π̇_n B^{−1} = B^{−1}|v̇_n⟩⟨v_n| + |v_n⟩⟨v̇_n|B^{−1}. Since ⟨v_n|U⟩ = cos θ we have

  h_2 = −2 ⟨U| B^{−1} |v̇_n⟩ cos θ,   (2.86)

where the factor two comes from the fact that all vectors involved are real and the matrix B^{−1} is real and symmetric. Let us compute

  |v̇_n⟩ = θ̇ ( −sin θ |U⟩ + cos θ |M⟩ ).   (2.87)

Notice that ⟨v_n|v̇_n⟩ = 0 and thus Π_n|v̇_n⟩ = 0. By substituting B^{−1} = A + Π_n from Eq. (2.73) we get

  h_2 = −2 ⟨U| A |v̇_n⟩ cos θ.   (2.88)

Next, we substitute |v̇_n⟩ and get

  h_2 = −2 θ̇ ( −sin θ ⟨U|A|U⟩ + cos θ ⟨U|A|M⟩ ) cos θ.   (2.89)

Now we use the relation ⟨U|A|M⟩ = −(cos θ / sin θ) ⟨U|A|U⟩ to eliminate ⟨U|A|M⟩:

  h_2 = 2 θ̇ ( sin θ + cos² θ / sin θ ) ⟨U|A|U⟩ cos θ = 2 θ̇ (cos θ / sin θ) ⟨U|A|U⟩.   (2.90)

Finally, we substitute 2 θ̇ = sin θ cos θ / (1 − s) from Eq. (2.44) and get

  h_2 = (cos² θ / (1 − s)) ⟨U|A|U⟩.   (2.91)

For the last term h_3 = −⟨U|Π̇_n|U⟩ we observe that ⟨U|v̇_n⟩⟨v_n|U⟩ = −θ̇ sin θ cos θ, thus h_3 = 2 θ̇ sin θ cos θ, where the factor two comes from symmetry. After substituting 2 θ̇ from Eq. (2.44) we get

  h_3 = (cos² θ / (1 − s)) sin² θ.   (2.92)

When we compare Eqs. (2.85), (2.91), and (2.92) we notice that h_2 = h_1 + h_3. Thus the derivative of the hitting time is (d/ds) HT = h_1 + h_2 + h_3 = 2 h_2. Recall from Eq. (2.69) that HT = ⟨U|A|U⟩. Thus

  (d/ds) HT(s) = ( 2 cos² θ(s) / (1 − s) ) HT(s).   (2.93)

By substituting cos θ(s) from Eq. (2.41) we get the desired result.

Theorem 2.22. The extended hitting time HT(s) is related to HT(P, M), the hitting time of Markov chain P with marked states M, as follows:

  HT(s) = ( p_M² / (1 − s(1 − p_M))² ) HT(P, M),   (2.94)

where p_M is the probability to pick a marked state from the stationary distribution π of P.

Proof. We will prove this theorem by solving the differential equation from the Lemma above. In particular, let us consider Eq. (2.93). Recall from Eq. (2.44) that 2 θ̇ = sin θ cos θ / (1 − s), so we can rewrite the coefficient in this equation as

  2 cos² θ / (1 − s) = ( 2 sin θ cos θ / (1 − s) ) · (cos θ / sin θ) = 4 θ̇ cos θ / sin θ = 4 ( (d/ds) sin θ ) / sin θ.   (2.95)

Now we can rewrite the differential equation as

  ( (d/ds) HT(s) ) / HT(s) = 4 ( (d/ds) sin θ(s) ) / sin θ(s).   (2.96)

By integrating both sides we get

  ln HT(s) = 4 ln sin θ(s) + C   (2.97)

for some constant C. Recall from Sect. 2.4.2 that HT(1) = HT(P, M), and from Eq. (2.41) that sin θ(1) = 1, so the boundary condition at s = 1 gives us C = ln HT(P, M). Since all quantities are non-negative, we can omit the absolute value signs. After exponentiating both sides we get

  HT(s) = sin⁴ θ(s) · HT(P, M).   (2.98)

We get the desired expression when we substitute sin θ(s) from Eq. (2.41).

In the next two chapters we consider several quantum search algorithms whose running time depends on HT(s) for some values of s ∈ [0, 1]. Theorem 2.22 is a crucial ingredient in the analysis of these algorithms, since it relates HT(s) to the usual hitting time HT(P, M). In particular, it is important that HT(s) is monotonically increasing as a function of s (some example plots of HT(s) are shown in Fig. 2.6).


More information

Quantum Complexity of Testing Group Commutativity

Quantum Complexity of Testing Group Commutativity Quantum Complexity of Testing Group Commutativity Frédéric Magniez 1 and Ashwin Nayak 2 1 CNRS LRI, UMR 8623 Université Paris Sud, France 2 University of Waterloo and Perimeter Institute for Theoretical

More information

Lecture 14: Random Walks, Local Graph Clustering, Linear Programming

Lecture 14: Random Walks, Local Graph Clustering, Linear Programming CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 14: Random Walks, Local Graph Clustering, Linear Programming Lecturer: Shayan Oveis Gharan 3/01/17 Scribe: Laura Vonessen Disclaimer: These

More information

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory Physics 202 Laboratory 5 Linear Algebra Laboratory 5 Physics 202 Laboratory We close our whirlwind tour of numerical methods by advertising some elements of (numerical) linear algebra. There are three

More information

The query register and working memory together form the accessible memory, denoted H A. Thus the state of the algorithm is described by a vector

The query register and working memory together form the accessible memory, denoted H A. Thus the state of the algorithm is described by a vector 1 Query model In the quantum query model we wish to compute some function f and we access the input through queries. The complexity of f is the number of queries needed to compute f on a worst-case input

More information

Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE

Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 55, NO. 9, SEPTEMBER 2010 1987 Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE Abstract

More information

Powerful tool for sampling from complicated distributions. Many use Markov chains to model events that arise in nature.

Powerful tool for sampling from complicated distributions. Many use Markov chains to model events that arise in nature. Markov Chains Markov chains: 2SAT: Powerful tool for sampling from complicated distributions rely only on local moves to explore state space. Many use Markov chains to model events that arise in nature.

More information

6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities

6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 1 Outline Outline Dynamical systems. Linear and Non-linear. Convergence. Linear algebra and Lyapunov functions. Markov

More information

Eigenvalue comparisons in graph theory

Eigenvalue comparisons in graph theory Eigenvalue comparisons in graph theory Gregory T. Quenell July 1994 1 Introduction A standard technique for estimating the eigenvalues of the Laplacian on a compact Riemannian manifold M with bounded curvature

More information

Stochastic Processes (Week 6)

Stochastic Processes (Week 6) Stochastic Processes (Week 6) October 30th, 2014 1 Discrete-time Finite Markov Chains 2 Countable Markov Chains 3 Continuous-Time Markov Chains 3.1 Poisson Process 3.2 Finite State Space 3.2.1 Kolmogrov

More information

Convex Optimization CMU-10725

Convex Optimization CMU-10725 Convex Optimization CMU-10725 Simulated Annealing Barnabás Póczos & Ryan Tibshirani Andrey Markov Markov Chains 2 Markov Chains Markov chain: Homogen Markov chain: 3 Markov Chains Assume that the state

More information

Quantum Complexity Theory and Adiabatic Computation

Quantum Complexity Theory and Adiabatic Computation Chapter 9 Quantum Complexity Theory and Adiabatic Computation 9.1 Defining Quantum Complexity We are familiar with complexity theory in classical computer science: how quickly can a computer (or Turing

More information

Assignment #10: Diagonalization of Symmetric Matrices, Quadratic Forms, Optimization, Singular Value Decomposition. Name:

Assignment #10: Diagonalization of Symmetric Matrices, Quadratic Forms, Optimization, Singular Value Decomposition. Name: Assignment #10: Diagonalization of Symmetric Matrices, Quadratic Forms, Optimization, Singular Value Decomposition Due date: Friday, May 4, 2018 (1:35pm) Name: Section Number Assignment #10: Diagonalization

More information

arxiv:quant-ph/ v1 15 Apr 2005

arxiv:quant-ph/ v1 15 Apr 2005 Quantum walks on directed graphs Ashley Montanaro arxiv:quant-ph/0504116v1 15 Apr 2005 February 1, 2008 Abstract We consider the definition of quantum walks on directed graphs. Call a directed graph reversible

More information

Lecture 2: Linear operators

Lecture 2: Linear operators Lecture 2: Linear operators Rajat Mittal IIT Kanpur The mathematical formulation of Quantum computing requires vector spaces and linear operators So, we need to be comfortable with linear algebra to study

More information

Unitary Dynamics and Quantum Circuits

Unitary Dynamics and Quantum Circuits qitd323 Unitary Dynamics and Quantum Circuits Robert B. Griffiths Version of 20 January 2014 Contents 1 Unitary Dynamics 1 1.1 Time development operator T.................................... 1 1.2 Particular

More information

2.1 Laplacian Variants

2.1 Laplacian Variants -3 MS&E 337: Spectral Graph heory and Algorithmic Applications Spring 2015 Lecturer: Prof. Amin Saberi Lecture 2-3: 4/7/2015 Scribe: Simon Anastasiadis and Nolan Skochdopole Disclaimer: hese notes have

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Stochastic Realization of Binary Exchangeable Processes

Stochastic Realization of Binary Exchangeable Processes Stochastic Realization of Binary Exchangeable Processes Lorenzo Finesso and Cecilia Prosdocimi Abstract A discrete time stochastic process is called exchangeable if its n-dimensional distributions are,

More information

Exponential algorithmic speedup by quantum walk

Exponential algorithmic speedup by quantum walk Exponential algorithmic speedup by quantum walk Andrew Childs MIT Center for Theoretical Physics joint work with Richard Cleve Enrico Deotto Eddie Farhi Sam Gutmann Dan Spielman quant-ph/0209131 Motivation

More information

MAA704, Perron-Frobenius theory and Markov chains.

MAA704, Perron-Frobenius theory and Markov chains. November 19, 2013 Lecture overview Today we will look at: Permutation and graphs. Perron frobenius for non-negative. Stochastic, and their relation to theory. Hitting and hitting probabilities of chain.

More information

Notes on the Matrix-Tree theorem and Cayley s tree enumerator

Notes on the Matrix-Tree theorem and Cayley s tree enumerator Notes on the Matrix-Tree theorem and Cayley s tree enumerator 1 Cayley s tree enumerator Recall that the degree of a vertex in a tree (or in any graph) is the number of edges emanating from it We will

More information

THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS

THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS KEITH CONRAD. Introduction The easiest matrices to compute with are the diagonal ones. The sum and product of diagonal matrices can be computed componentwise

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in 806 Problem Set 8 - Solutions Due Wednesday, 4 November 2007 at 4 pm in 2-06 08 03 Problem : 205+5+5+5 Consider the matrix A 02 07 a Check that A is a positive Markov matrix, and find its steady state

More information

Link Analysis Ranking

Link Analysis Ranking Link Analysis Ranking How do search engines decide how to rank your query results? Guess why Google ranks the query results the way it does How would you do it? Naïve ranking of query results Given query

More information

MARKOV CHAIN MONTE CARLO

MARKOV CHAIN MONTE CARLO MARKOV CHAIN MONTE CARLO RYAN WANG Abstract. This paper gives a brief introduction to Markov Chain Monte Carlo methods, which offer a general framework for calculating difficult integrals. We start with

More information

COMP 558 lecture 18 Nov. 15, 2010

COMP 558 lecture 18 Nov. 15, 2010 Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to

More information

7. Symmetric Matrices and Quadratic Forms

7. Symmetric Matrices and Quadratic Forms Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value

More information

Fourier analysis of boolean functions in quantum computation

Fourier analysis of boolean functions in quantum computation Fourier analysis of boolean functions in quantum computation Ashley Montanaro Centre for Quantum Information and Foundations, Department of Applied Mathematics and Theoretical Physics, University of Cambridge

More information

Improved Quantum Algorithm for Triangle Finding via Combinatorial Arguments

Improved Quantum Algorithm for Triangle Finding via Combinatorial Arguments Improved Quantum Algorithm for Triangle Finding via Combinatorial Arguments François Le Gall The University of Tokyo Technical version available at arxiv:1407.0085 [quant-ph]. Background. Triangle finding

More information

By allowing randomization in the verification process, we obtain a class known as MA.

By allowing randomization in the verification process, we obtain a class known as MA. Lecture 2 Tel Aviv University, Spring 2006 Quantum Computation Witness-preserving Amplification of QMA Lecturer: Oded Regev Scribe: N. Aharon In the previous class, we have defined the class QMA, which

More information

Quantum Computing Lecture 6. Quantum Search

Quantum Computing Lecture 6. Quantum Search Quantum Computing Lecture 6 Quantum Search Maris Ozols Grover s search problem One of the two most important algorithms in quantum computing is Grover s search algorithm (invented by Lov Grover in 1996)

More information

No class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1.

No class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1. Stationary Distributions Monday, September 28, 2015 2:02 PM No class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1. Homework 1 due Friday, October 2 at 5 PM strongly

More information

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8.1 Review 8.2 Statistical Equilibrium 8.3 Two-State Markov Chain 8.4 Existence of P ( ) 8.5 Classification of States

More information

Graph isomorphism, the hidden subgroup problem and identifying quantum states

Graph isomorphism, the hidden subgroup problem and identifying quantum states 1 Graph isomorphism, the hidden subgroup problem and identifying quantum states Pranab Sen NEC Laboratories America, Princeton, NJ, U.S.A. Joint work with Sean Hallgren and Martin Rötteler. Quant-ph 0511148:

More information

Generators for Continuous Coordinate Transformations

Generators for Continuous Coordinate Transformations Page 636 Lecture 37: Coordinate Transformations: Continuous Passive Coordinate Transformations Active Coordinate Transformations Date Revised: 2009/01/28 Date Given: 2009/01/26 Generators for Continuous

More information

Polynomiality of Linear Programming

Polynomiality of Linear Programming Chapter 10 Polynomiality of Linear Programming In the previous section, we presented the Simplex Method. This method turns out to be very efficient for solving linear programmes in practice. While it is

More information

FREE PROBABILITY THEORY

FREE PROBABILITY THEORY FREE PROBABILITY THEORY ROLAND SPEICHER Lecture 4 Applications of Freeness to Operator Algebras Now we want to see what kind of information the idea can yield that free group factors can be realized by

More information

MP463 QUANTUM MECHANICS

MP463 QUANTUM MECHANICS MP463 QUANTUM MECHANICS Introduction Quantum theory of angular momentum Quantum theory of a particle in a central potential - Hydrogen atom - Three-dimensional isotropic harmonic oscillator (a model of

More information

MATH 564/STAT 555 Applied Stochastic Processes Homework 2, September 18, 2015 Due September 30, 2015

MATH 564/STAT 555 Applied Stochastic Processes Homework 2, September 18, 2015 Due September 30, 2015 ID NAME SCORE MATH 56/STAT 555 Applied Stochastic Processes Homework 2, September 8, 205 Due September 30, 205 The generating function of a sequence a n n 0 is defined as As : a ns n for all s 0 for which

More information