CS3719 Theory of Computation and Algorithms


Any mechanical (automatic), discrete computation for problem solving involves at least three components: - problem description - computational tool - analysis

Problem descriptions Formalize a problem

Sort the names of this class into alphabetical order. - Abstract version of the problem: Instance: A set of names (last name follows middle name follows given name). Question: Find a list of these names in lexicographical order. - Decision version: Yes/No Question: Is the output list in lexicographical order?

- Concrete version: by a reasonable encoding method, convert the decision version of the problem into binary strings, say 01-strings. This version is machine acceptable. Problem and algorithm -> program in a high-level language (C++, Java, etc.) -> assembly language (compiler) -> machine code: 01-strings (assembler).

A language L over an alphabet Σ (a finite set of symbols, say {0,1}) is any set of strings made up of symbols from Σ. - L = {10, 11, 101, 111, 1011, 1101, 10001, ...} is the language of binary representations of prime numbers. - The concatenation of two languages L' and L'' is the language L = {x'x'' | x' in L' and x'' in L''}.

Problems can be formalized as languages. - The decision problem PATH: Instance: A graph G=(V,E), vertices u and v, and a nonnegative integer k. Question: Does a path exist in G between u and v whose length is at most k? - The formal language version of PATH: PATH = {<G, u, v, k> | G=(V,E) is a graph, u, v in V, k >= 0, and there exists a path from u to v in G whose length is at most k}.
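Membership in the PATH language can be decided with breadth-first search, which computes shortest path lengths in an unweighted graph. A minimal Python sketch (the function name and the edge-list encoding of G are illustrative assumptions, not from the notes):

```python
from collections import deque

def path_at_most_k(V, E, u, v, k):
    """Decide PATH: is there a path from u to v in G=(V,E)
    of length (number of edges) at most k?"""
    adj = {x: [] for x in V}
    for a, b in E:
        adj[a].append(b)
        adj[b].append(a)          # assuming an undirected graph
    dist = {u: 0}                 # BFS distances from u
    q = deque([u])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return v in dist and dist[v] <= k

V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "c")]
print(path_at_most_k(V, E, "a", "d", 2))   # True: a-c-d has length 2
print(path_at_most_k(V, E, "a", "d", 1))   # False
```

Since BFS runs in time polynomial in the size of <G, u, v, k>, PATH is a polynomial-time decidable language.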

Tools: - human hand with pen and paper, - calculator, - computer. These are computational models. Different models have different power.

What is a computer? What is the computational power of a computer? - The Turing machine. Resources: storage (space) and time.

All machine models we will study are variants of the Turing machine: - a finite amount of storage (finite state automata), - an amount of storage linear in the size of the input (linear bounded automata), - an array -tape- with an unlimited number of entries (Turing machine).

- Regarding the analysis question, there are practical limitations (complexity) and logical limitations (computability); we approach both issues formally, i.e., mathematically.

Complexity: Algorithms And Problems Hierarchy of problems according to the complexity of algorithms to solve these problems. Undecidable (unsolvable) problems. Decidable (solvable) problems. NP-hard, NP-complete problems. Polynomial time solvable problems.

Figure 1: A simple illustration of the complexity of problems

Undecidable (unsolvable) problems: (no algorithm exists) The Halting problem: Does there exist a program (algorithm/Turing machine) Q that determines, for an arbitrary given program P and input data D, whether or not P will halt on D?

Post's correspondence problem A correspondence system is a finite set P of ordered pairs of nonempty strings. A match of P is any string w in Σ* such that for some n > 0 and some pairs (u1, v1), (u2, v2), ..., (un, vn) in P, w = u1 u2 ... un = v1 v2 ... vn.

For example, if P = {(a, ab), (b, ca), (ca, a), (abc, c)}, then w = abcaaabc is a match of P: for the sequence of five pairs (a, ab), (b, ca), (ca, a), (a, ab), (abc, c), the concatenation of the first components and the concatenation of the second components are both equal to w = abcaaabc. Post's correspondence problem is to determine, given a correspondence system, whether that system has a match.
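A match can be searched for mechanically by trying index sequences of increasing length. The Python sketch below is only a bounded search (the length cap is an artificial assumption; PCP itself is undecidable, so no search procedure can be a decider), but it does confirm the example above:

```python
from itertools import product

def find_match(pairs, max_len=6):
    """Brute-force search for a PCP match: a sequence of pair
    indices whose top concatenation equals its bottom
    concatenation.  Bounded, so only a semi-decision sketch."""
    for n in range(1, max_len + 1):
        for seq in product(range(len(pairs)), repeat=n):
            top = "".join(pairs[i][0] for i in seq)
            bottom = "".join(pairs[i][1] for i in seq)
            if top == bottom:
                return seq, top
    return None

P = [("a", "ab"), ("b", "ca"), ("ca", "a"), ("abc", "c")]
print(find_match(P))   # ((0, 1, 2, 0, 3), 'abcaaabc')
```

The sequence (0, 1, 2, 0, 3) is exactly the five-pair sequence of the example, and it is the shortest match for this system.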

Hilbert's tenth problem: to devise an algorithm that tests whether or not a polynomial has an integral root. A polynomial is a sum of terms, for example 6x^3yz^2 + 3xy^2 - x^3 - 10. An integral root is an assignment of integer values to the variables that makes the polynomial evaluate to zero.

For example, the above polynomial has an integral root x=5, y=3, z=0 (0 + 135 - 125 - 10 = 0). Let D denote the set of polynomials D = {p | p is a polynomial with an integral root}. Hilbert's tenth problem becomes: is D decidable? The answer is that it is not decidable.

A brief idea of the proof. Let D1 be the special case of D in which p has one variable, i.e., D1 = {p | p is a polynomial over x with an integral root}. For example, 4x^3 - 2x^2 + x - 7. Let M1 be a Turing machine with input a polynomial p over the variable x. Program: evaluate p with x set successively to the values 0, 1, -1, 2, -2, 3, -3, ... If at any point the polynomial evaluates to zero, accept.

M1 recognizes D1. For a general polynomial we can devise a Turing machine M to recognize D similarly: set the variables successively to all combinations of integer values, and accept if the polynomial ever evaluates to zero. For example, with two variables:
x: 0  0  1  0 -1  0  2  0 ...
y: 0  1  0 -1  0  2  0 -2 ...
Can you set the value pattern as
x: 0  0  0  0  0  1  1  1  1  1 ...
y: 0  1 -1  2 -2  0  1 -1  2 -2 ...?
No: the second pattern never finishes enumerating the infinitely many values of y for x = 0, so pairs with x = 1 would never be reached. The enumeration must dovetail over all pairs.
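The value pattern question above is the crux: the enumeration must dovetail so that every pair of integers is reached after finitely many steps. A Python sketch of such a recognizer for two-variable polynomials follows; the step cap is an artificial assumption added so the demonstration terminates, whereas the real machine runs forever on polynomials with no root:

```python
from itertools import count

def int_pairs():
    """Dovetail over all integer pairs (x, y): enumerate in shells
    max(|x|, |y|) = d, so every pair is reached after finitely
    many steps."""
    for d in count(0):
        for x in range(-d, d + 1):
            for y in range(-d, d + 1):
                if max(abs(x), abs(y)) == d:   # only the new shell
                    yield (x, y)

def find_root(p, max_steps=10_000):
    """Recognizer sketch: accept if a root of p(x, y) is found.
    The cap max_steps is hypothetical, only for this demo."""
    for step, (x, y) in enumerate(int_pairs()):
        if p(x, y) == 0:
            return (x, y)
        if step >= max_steps:
            return None

print(find_root(lambda x, y: x*x + y - 5))   # (-2, 1): 4 + 1 - 5 = 0
```

Every pair lies in some finite shell, so if a root exists, this search finds one.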

M1 can be modified to decide D1: a bound on the absolute value of the single variable can be calculated. For example, any integral root of 4x^3 - 2x^2 + x - 7 satisfies |x| < 7. Since the bound is finite, if every value within the bound has been tried and the polynomial never evaluates to zero, then stop and reject. Thus M1 can decide D1. The general bound for D1 is |x| < k c_max / c_1, where k is the number of terms, c_max is the largest absolute value of a coefficient, and c_1 is the coefficient of the highest-order term. Matijasevič proved that no such bound exists for the multivariable case, so D is not decidable.

An intuitive proof of the Halting problem Let us assume there exists an algorithm Q with the stated property: for an arbitrary algorithm P with input data D, Q(P, D) always halts and correctly reports whether P(D) halts or runs forever.

A new algorithm B Note that algorithm P is a string, and data D is a string too; thus (P, P) is a legal input to Q, regarding P itself as data. Design a new algorithm B(X), for any algorithm X, such that B(X): halts if Q reports that X(X) runs forever; runs forever if Q reports that X(X) halts.

The construction of B Note that B can be constructed because Q can be constructed. For example, we may build B on Q as follows: when Q detects that X(X) halts (and so Q would report 'halts'), the modified machine, called B, instead runs forever; when Q detects that X(X) runs forever (and so Q would report 'runs forever'), B instead stops.

Contradiction Run B with input data B. Then B(B) either halts or runs forever, and Q(B, B) correctly reports which. If B(B) halts, then Q(B, B) reports 'halts', which by the construction of B forces B(B) to run forever: B(B) both halts and runs forever.

Continued If B(B) runs forever, then Q(B, B) reports 'runs forever', which forces B(B) to halt: again B(B) both halts and runs forever. All statements follow logically from the assumption, so the assumption is wrong: there cannot exist such a program Q.
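The self-reference in this argument can even be typed out. The Python sketch below uses hypothetical names: given any concrete function playing the role of Q (here, single-argument form, as in Q(X(X))), it builds the B of the proof, and that Q is then provably wrong about B:

```python
def make_B(Q):
    """Construct the algorithm B of the proof from a claimed
    halting decider Q(P) -> True if P() halts, False otherwise."""
    def B():
        if Q(B):          # Q claims B halts...
            while True:   # ...so B runs forever instead
                pass
        return "halted"   # Q claims B runs forever, so B halts
    return B

# Any concrete Q is wrong about its own B.  Here Q always answers
# False ("runs forever"), yet the resulting B then halts at once.
Q = lambda P: False
B = make_B(Q)
print(B())   # "halted", contradicting Q's verdict on B
```

Whatever Q answers about B, the construction makes B do the opposite, which is the contradiction of the proof.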

The diagonalization method This method is due to Georg Cantor (1873). Definitions: a function f : A -> B is one-to-one if it never maps two different elements to the same place; f is onto if it hits every element of B; f is a correspondence if it is both one-to-one and onto. A correspondence pairs up the elements of A and B.

A correspondence can be used to compare the sizes of two sets, and Cantor extended this idea to infinite sets. Definition: a set A is countable if it is either finite or has the same size as the set of natural numbers N. For example, N = {1,2,3,...}, E = {2,4,6,...}, and O = {1,3,5,...} all have the same size and hence are countable. Let Q = {m/n | m, n in N} be the set of rational numbers.

1/1 1/2 1/3 1/4 1/5 ...
2/1 2/2 2/3 2/4 2/5 ...
3/1 3/2 3/3 3/4 3/5 ...
4/1 4/2 4/3 4/4 4/5 ...
5/1 5/2 5/3 5/4 5/5 ...
Listing the entries of this table along its anti-diagonals reaches every m/n after finitely many steps, so Q is countable.
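The anti-diagonal walk can be written out directly. A small Python generator (skipping duplicate values such as 2/2 = 1/1 is an implementation choice of this sketch, not part of the notes):

```python
from fractions import Fraction
from itertools import islice

def rationals():
    """Enumerate the positive rationals by walking the m/n grid
    along anti-diagonals (m + n constant), skipping repeats."""
    seen = set()
    s = 2                          # current anti-diagonal: m + n = s
    while True:
        for m in range(1, s):
            q = Fraction(m, s - m)
            if q not in seen:      # Fraction normalizes, so 2/2 == 1/1
                seen.add(q)
                yield q
        s += 1

first6 = list(islice(rationals(), 6))
print(first6)   # [1, 1/2, 2, 1/3, 3, 1/4]
```

Every rational m/n sits on anti-diagonal m + n, so each one is emitted after finitely many steps: exactly the correspondence with N that countability requires.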

The set of real numbers R is uncountable. Suppose f were a correspondence between N and R:
n   f(n)
1   3.14159265...
2   55.55555555...
3   1.41427689...
4   0.50000000...
...
Construct a real number x by giving its decimal representation, such that x differs from every f(n).

To do that, let the first digit of x (after the decimal point) differ from the first digit of the first real, say x = .2...; then let the second digit of x differ from the second digit of the second real, say x = .34...; and so forth. The new real number x = .34... differs from every real in the table in at least one digit. Therefore x does not correspond to any natural number, and R is uncountable. (Can we choose 0 and 9 as digits of x?)
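The diagonal construction for the table above can be sketched in Python. The rows hold the fractional digits of the listed reals (an encoding assumption of this sketch), and digits 0 and 9 are avoided, anticipating the parenthetical question about dual representations like 0.1999... = 0.2000...:

```python
def diagonal(digit_rows):
    """Build a decimal string differing from the i-th row in its
    i-th digit.  Uses only digits 2 and 3, avoiding 0 and 9."""
    out = []
    for i, row in enumerate(digit_rows):
        d = int(row[i])
        out.append("2" if d != 2 else "3")   # any digit != d works
    return "0." + "".join(out)

# Fractional digits of the four reals in the table.
rows = ["14159265", "55555555", "41427689", "50000000"]
x = diagonal(rows)
print(x)   # "0.2222": differs from row i at digit i
```

By construction x disagrees with the i-th listed real in the i-th decimal place, so x appears nowhere in the table.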

Diagonalization for the Halting problem Let M1, M2, M3, ... be all Turing machines, listed in the rows of an infinite table. They include the machines P, Q, B above (an algorithm regarded as a machine). Let (M1), (M2), (M3), ... be their descriptions (as strings), listed in the columns. Let entry (i, j) represent the result of running the i-th machine on the j-th description as input.

        (M1)       (M2)       (M3)      ...
M1      accept     rej/nstop  accept
M2      accept     accept     rej/nstop
M3      rej/nstop  accept     rej/nstop
...
When a machine M runs on a description as input, it either accepts, rejects, or does not stop.

When the machine Q runs on a description (machine M running on input D), it always halts, with accept or reject:
        (M1)    (M2)    (M3)    ...
M1      accept  reject  accept
M2      accept  accept  reject
M3      reject  accept  reject
...

When the machine B runs on the description of B itself, it must both accept and reject: the entry at row B, column (B) cannot be filled consistently.
        (M1)    (M2)    (M3)    ...  (B)  ...
M1      accept  reject  accept
M2      accept  accept  reject
M3      reject  accept  reject
...
B       reject  reject  accept       ?
...

Polynomial-time decidable problems: (algorithms exist and are relatively efficient) Sorting a set of elements. Finding the maximum, minimum, and median of a set of elements. Matrix multiplication. Matrix-chain multiplication. Single-source shortest paths. Convex hull of a set of points. Voronoi diagrams. Delaunay triangulations.

NP-hard, NP-complete problems: (algorithms exist, but none known to be efficient) Boolean satisfiability problem. Vertex cover problem. Hamiltonian-cycle problem: a Hamiltonian cycle of an undirected graph G=(V,E) is a simple cycle that contains each vertex in V; does a graph G have a Hamiltonian cycle? Traveling salesperson problem (the naive algorithm takes Ω(m!) time, where m is the number of vertices in V).

The measurement of the efficiency of algorithms: (1) The worst-case time (and space). Insertion sort takes O(n^2) worst-case time. (2) The average-case time. Quicksort takes O(n log n) average-case time and O(n^2) worst-case time. Other analysis methods:

The amortized analysis. The randomized analysis. In an amortized analysis, the time complexity is obtained by taking the average over all the operations performed. Even though a single operation in the sequence may be very expensive, the average cost of all operations in the sequence may be low. Example: incrementing a binary counter. We count the number of bit flips in the counter as we repeatedly add one at its lowest bit.

0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1
0 0 0 0 0 0 1 0
0 0 0 0 0 0 1 1
0 0 0 0 0 1 0 0
0 0 0 0 0 1 0 1
0 0 0 0 0 1 1 0
0 0 0 0 0 1 1 1
0 0 0 0 1 0 0 0
...
1 1 1 1 1 1 1 1

Increment(A)
  i <- 0
  while i < length[A] and A[i] = 1
    do A[i] <- 0
       i <- i + 1
  if i < length[A]
    then A[i] <- 1

In the conventional worst-case analysis, consider the case that all k bits in the counter are 1's; the next increment causes k flips, so n increments cause O(kn) flips. Note, however, that A[0] flips every time, A[1] every other time, A[2] every fourth time, ..., A[i] every 2^i-th time. Thus the total number of flips over n increments is sum_{i=0}^{floor(log n)} floor(n/2^i) < n * sum_{i=0}^{infinity} 1/2^i = 2n. The amortized cost of each increment is O(1), not O(k).
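The 2n bound is easy to check empirically. A Python sketch of Increment that also counts flips (the flip counter is instrumentation added for this experiment, not part of the original procedure):

```python
def increment(A):
    """Increment a binary counter stored as a list of bits
    (A[0] = lowest bit); return the number of bit flips."""
    flips = 0
    i = 0
    while i < len(A) and A[i] == 1:
        A[i] = 0                  # clear the run of trailing 1's
        flips += 1
        i += 1
    if i < len(A):
        A[i] = 1                  # set the first 0 bit
        flips += 1
    return flips

k, n = 16, 1000
A = [0] * k
total = sum(increment(A) for _ in range(n))
print(total, total / n)   # total flips < 2n: about O(1) per increment
```

For n = 1000 the total is sum_{i} floor(1000/2^i) = 1994 flips, comfortably under 2n = 2000, matching the amortized O(1) bound.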

Optimal algorithms Upper bound of a problem: (1) the number of basic operations sufficient to solve the problem; (2) the minimum time complexity among all known algorithms for solving the problem; (3) an upper bound can be established by exhibiting an algorithm.

Lower bound of a problem: (1) the number of basic operations necessary to solve the problem; (2) a time bound that every algorithm solving the problem must meet or exceed; (3) a lower bound is much more difficult to establish.

An algorithm is optimal if its time complexity (i.e., its upper bound) matches the lower bound of the problem. For example, consider sorting n elements by comparisons. Lower bound: log2(n!), since there are n! different outcomes (permutations) and any decision tree with n! leaves must have height >= log2(n!) = Ω(n log n). Clearly, merge sort (O(n log n)) is optimal and insertion sort (O(n^2)) is not.

While you may have already learned some methods for establishing lower bounds, such as decision trees and adversary (oracle) arguments, we shall also introduce a very useful method: establishing upper and lower bounds via transformable problems. - Decision tree. - Adversary. - Transformation.

Figure 2: Transfer of upper and lower bounds between transformable problems.

Suppose we have two problems, problem a and problem b, which are related so that problem a can be solved as follows: 1. The input to problem a is converted into a suitable input to problem b. 2. Problem b is solved. 3. The output of problem b is transformed into a correct solution to problem a. We then say that problem a has been transformed to problem b. If steps 1 and 3 together can be done in O(t(N)) time, where N is the size of problem a, then we say that a is t(N)-transformable to b.

Proposition 1 (lower bound via transformability). If problem a is known to require at least T(N) time and a is t(N)-transformable to problem b, then b requires at least T(N) - O(t(N)) time. Proposition 2 (upper bound via transformability). If problem b can be solved in T(N) time and problem a is t(N)-transformable to b, then a can be solved in at most T(N) + O(t(N)) time.

For example, Element Uniqueness: given N real numbers, decide whether any two of them are equal. (Denote this problem as a.) This problem has a known lower bound: in the algebraic decision tree model, any algorithm that determines whether the members of a set of N real numbers are distinct requires Ω(N log N) tests. Now consider another problem, Closest Pair: given N points in the Euclidean plane, find the closest pair of points (the shortest Euclidean distance). Denote this as b.

We want to find a lower bound for this problem. (Can we use the decision tree method or the adversary method here?)

We transform the Element Uniqueness problem to the Closest Pair problem. Given a set of real numbers (x1, x2, ..., xN) (an input to a), treat them as points on the line y = 0 in the xy-coordinate system (convert them into a suitable input of b). Apply any algorithm to solve b; the solution is the closest pair. If the distance between this pair is nonzero, then the numbers are distinct; otherwise they are not. (This converts the solution of b to a solution of a.) Here t(N) = O(N). By Proposition 1, b takes at least Ω(N log N) - O(N) = Ω(N log N) time, which is the desired lower bound.
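The three steps of the transformation are easy to code. The Python sketch below uses a brute-force closest-pair solver as a stand-in for a real O(N log N) algorithm (an assumption made for brevity; the point here is the reduction, not the solver):

```python
from math import dist, inf

def closest_pair(points):
    """Brute-force stand-in for any closest-pair algorithm
    (O(N^2) here; a real solver runs in O(N log N))."""
    best = inf
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            best = min(best, dist(points[i], points[j]))
    return best

def all_distinct(xs):
    """Element Uniqueness via the transformation in the text."""
    points = [(x, 0.0) for x in xs]   # step 1: lift onto line y = 0
    d = closest_pair(points)          # step 2: solve problem b
    return d > 0                      # step 3: convert the output

print(all_distinct([3.0, 1.0, 2.0]))   # True
print(all_distinct([3.0, 1.0, 3.0]))   # False
```

Both conversion steps are O(N), so any o(N log N) closest-pair algorithm would contradict the Ω(N log N) bound for Element Uniqueness.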

Using the same method, we can prove that the lower bound for sorting by comparisons is Ω(n log n), by transforming Element Uniqueness to Sorting. The lower bounds of a whole chain of problems can be proved in this manner.

Reduction for intractability The above transformation method can also be used to prove that a problem is intractable or tractable, provided the cost of the transformation is bounded by a polynomial. For example, CLIQUE: Instance: a graph G=(V,E) and a positive integer J <= |V|.

Question: Does G contain a clique of size J or more? That is, a subset V' of V such that |V'| >= J and every two vertices in V' are joined by an edge in E. VERTEX COVER (VC): Instance: a graph G=(V,E) and a positive integer k <= |V|. Question: Is there a vertex cover of size k or less for G? That is, a subset V' of V such that |V'| <= k and, for each edge in E, at least one of the endpoints is in V'.

Let A be VC and B be CLIQUE. Every instance of A can be converted to an instance of B in polynomial time: let G=(V,E) and k <= |V| be an instance of VC; the corresponding instance of CLIQUE is the complement graph G^c with the integer j = |V| - k. Converting the output of B to an output of A takes constant time (yes/no). => If A is intractable, then B is intractable.

Likewise, every instance of B can be converted to an instance of A in polynomial time: let G=(V,E) and j <= |V| be an instance of CLIQUE; the corresponding instance of VC is G^c with the integer k = |V| - j. => If B is tractable, then A is tractable.
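Both directions of the conversion rest on the complement graph: V' is a vertex cover of G exactly when V - V' is a clique in G^c. A Python sketch with brute-force solvers (exponential, for tiny instances only; the vertex/edge encoding is an assumption of this illustration):

```python
from itertools import combinations

def complement(n, edges):
    """Edge set of the complement of a graph on vertices 0..n-1."""
    all_edges = {frozenset(e) for e in combinations(range(n), 2)}
    return all_edges - {frozenset(e) for e in edges}

def has_clique(n, edges, j):
    """Brute-force clique test: some j-subset with all pairs joined."""
    es = {frozenset(e) for e in edges}
    if j <= 1:
        return True               # any single vertex is a clique
    return any(all(frozenset(p) in es for p in combinations(S, 2))
               for S in combinations(range(n), j))

def has_vertex_cover(n, edges, k):
    """Decide VC via the reduction in the text:
    (G, k) maps to the CLIQUE instance (G^c, |V| - k)."""
    return has_clique(n, complement(n, edges), n - k)

# Triangle {0,1,2} plus a pendant edge 2-3.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(has_vertex_cover(4, edges, 2))   # True: {0, 2} covers all edges
print(has_vertex_cover(4, edges, 1))   # False: the triangle needs 2
```

The conversion itself (complementing the graph and adjusting the integer) is clearly polynomial; only the stand-in solvers are exponential.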

Reduction for decidability Mapping reducibility: A is mapping reducible to B, written A <=m B, if there is a computable function f such that for every w, w is in A if and only if f(w) is in B. f is called the reduction of A to B. If A <=m B and B is decidable, then A is decidable. If A <=m B and A is undecidable, then B is undecidable.
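A toy mapping reduction between two decidable languages illustrates the definition. The languages are chosen for illustration (they are not from the notes): A = {even natural numbers}, B = {binary strings ending in 0}, with f(n) = the binary representation of n, so that n is in A iff f(n) is in B:

```python
def f(n):
    """The computable reduction: n in A  <=>  f(n) in B."""
    return format(n, "b")          # binary representation of n

def in_B(s):
    """Decider for B = {binary strings ending in '0'}."""
    return s.endswith("0")

def in_A(n):
    """Decide A using the decider for B, as the theorem states:
    if A <=m B and B is decidable, then A is decidable."""
    return in_B(f(n))

evens = [n for n in range(8) if in_A(n)]
print(evens)   # [0, 2, 4, 6]
```

The same mechanism, run in reverse, gives the undecidability transfer: if A were undecidable yet B decidable, composing f with B's decider would decide A, a contradiction.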

Post correspondence problem Some instances obviously have no match: for (abc, ab), (ca, a), (acc, ba), the first element of each ordered pair is longer than the second, so the top concatenation is always longer than the bottom. Let us define PCP more precisely: PCP = {[P] | P is an instance of the Post correspondence problem with a match},

where P = {t1/b1, t2/b2, ..., tk/bk}, and a match is a sequence i1, i2, ..., is such that t_{i1} t_{i2} ... t_{is} = b_{i1} b_{i2} ... b_{is}. Proof idea: show that for any TM M and input w we can construct an instance P such that a match is an accepting computation history for M on w. Thus, if we could determine whether the instance P has a match, we could determine whether M accepts w (the acceptance problem A_TM, which is undecidable).

Let us call [ti/bi] a domino. In the construction of P, we choose the dominos so that a match forces a simulation of M accepting w. Let us first consider a simpler case: M on w never moves its head off the left-hand end of the tape, and the instance requires a match to start with [t1/b1]. Call this problem MPCP: MPCP = {[P] | P is an instance of the Post correspondence problem with a match starting at [t1/b1]}.

Proof. Assume TM R decides PCP, and construct TM S that decides A_TM. Let M = (Q, Σ, Γ, δ, q0, q_accept, q_reject), where Q is the set of states, Σ is the input alphabet, Γ is the tape alphabet, and δ is the transition function of M. S constructs an instance P of PCP that has a match if and only if M accepts w. The construction of the MPCP instance P consists of 7 parts.

1. Let [# / # q0 w1 w2 ... wn #] be the first domino in P, where C1 = q0 w = q0 w1 w2 ... wn is the first configuration of M on w and # is the separator. This domino forces the top string of the partial match to be extended in order to form a match.

To do so, we provide additional dominos that allow this extension but at the same time force a single-step simulation of M, shown in the bottom part of each domino. Parts 2, 3, and 4 are as follows: 2. For every a, b in Γ and every q, r in Q with q != q_reject, if δ(q, a) = (r, b, R), put [qa/br] into P. (head moves right)

3. For every a, b, c in Γ and every q, r in Q with q != q_reject, if δ(q, a) = (r, b, L), put [cqa/rcb] into P. (head moves left) 4. For every a in Γ, put [a/a] into P. (tape symbols away from the head are copied unchanged) What do these construction parts mean? Consider the following example: let Γ = {0, 1, 2, e}, where e denotes the blank symbol, let w = 0100, and let the start state of M be q0.

Part 1 puts in the first domino [# / # q0 0100 #], and the match starts: top = #, bottom = # q0 0100 #. Suppose M in q0 reads 0, writes a 2 on the tape, enters q7, and moves its head right, i.e., δ(q0, 0) = (q7, 2, R). Part 2 puts in [q0 0 / 2 q7], extending the partial match to top = # q0 0, bottom = # q0 0100 # 2 q7. Part 3 puts in nothing, and Part 4 puts in [0/0], [1/1], [2/2], and [e/e], which copy the remaining symbols: top = # q0 0100, bottom = # q0 0100 # 2 q7 100.

Part 5 copies the # symbol to separate successive configurations of M: put [#/#] and [#/e#] into P. The second domino allows an extra blank symbol e to be appended at the end of a configuration, representing the infinitely many blanks to the right. The current partial match now holds two configurations separated by #: top = # q0 0100 #, bottom = # q0 0100 # 2 q7 100 #.

Now suppose M in q7 reads 1, writes a 0, enters q5, and moves right, i.e., δ(q7, 1) = (q5, 0, R). We have top = # q0 0100 # 2 q7 100 #, bottom = # q0 0100 # 2 q7 100 # 20 q5 00 #.

Then suppose M in q5 reads 0, writes a 2, enters q9, and moves left, i.e., δ(q5, 0) = (q9, 2, L). Part 3 gives the dominos [0 q5 0 / q9 02], [1 q5 0 / q9 12], [2 q5 0 / q9 22], and [e q5 0 / q9 e2]; only the first one fits. Top = # q0 0100 # 2 q7 100 # 20 q5 00 #, bottom = # q0 0100 # 2 q7 100 # 20 q5 00 # 2 q9 020 #. This process of matching and simulating M on w continues until q_accept is reached.

The top of the partial match always lags one configuration behind the bottom, so we need a way for the top to catch up. This is Part 6: 6. For every a in Γ, put [a q_accept / q_accept] and [q_accept a / q_accept] into P. These add pseudo-steps to M after it has halted, in which the head 'eats' the adjacent symbols until no symbol is left. Suppose that M in q9 reads 0 and enters q_accept.

top = # q0 0100 # 2 q7 100 # 20 q5 00 #, bottom = # q0 0100 # 2 q7 100 # 20 q5 00 # 2 q9 020 #
Then the accepting configurations are eaten away one symbol at a time:
top = ... # 20 q5 00 # 2 q9 020 #, bottom = ... # 20 q5 00 # 2 q9 020 # 2 q_accept 20 #
top = ... # 2 q9 020 # 2 q_accept 20 #, bottom = ... # 2 q9 020 # 2 q_accept 20 # 2 q_accept 0 #
top = ... # 2 q_accept 20 # 2 q_accept 0 #, bottom = ... # 2 q_accept 20 # 2 q_accept 0 # 2 q_accept #

7. Finally, we add the domino [q_accept ## / #] to complete the match:
top = ... # 2 q_accept 0 # 2 q_accept #, bottom = ... # 2 q_accept 0 # 2 q_accept # q_accept #
top = ... # 2 q_accept # q_accept ##, bottom = ... # 2 q_accept # q_accept ##
The top and bottom strings are now equal.

To remove the restriction that the match must start at the first domino, we add marker symbols to the dominos of P. If P = {[t1/b1], [t2/b2], ..., [tk/bk]} is the MPCP instance, let P' = {[*t1 / *b1*], [*t2 / b2*], ..., [*tk / bk*], [*o/o]}, where * is inserted before every symbol of each top string and after every symbol of each bottom string, and o is a new symbol. Clearly, any match of P' must start at the first domino, since it is the only one whose top and bottom begin with the same symbol, and the extra domino [*o/o] allows the top to catch up at the very end of the match.