
CSC 707-001: Homework #3
William J. Cook
wjcook@math.ncsu.edu
Monday, March 15, 2004

1 Exercise 4.13 on page 118 (3-Register RAM)

Exercise 4.13. Verify that a RAM need not have an unbounded number of registers. Use prime encoding to show that a three-register RAM can simulate a k-register RAM for any fixed k > 3.

1.1 RAM Operations

I will assume that a RAM has the following basic operations:

    Inc Rj                 adds one to register j.
    Dec Rj                 subtracts one from register j, unless register j is zero,
                           in which case Dec Rj does nothing.
    JumpOnZero Rj, label   jumps to the line of code labeled "label" if register j
                           is zero; otherwise does nothing.
    Halt                   halts the RAM.
    Rj ← l                 gives register j the value l.
    Rj ← Rl                sets register j to the value of register l.
    Rj ← Rj + Rl           adds register l to register j.
    Rj ← Rj − Rl           subtracts register l from register j, unless register l
                           holds a greater value than register j, in which case
                           register j is set to zero.
    Rj ← Rj · l            multiplies register j by the constant l.
    Rj ← Rj ÷ l            divides register j by the constant l (integer division).
    Rj ← Rj mod l          sets register j to the remainder of dividing register j by l.
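
To make this instruction set concrete, here is a minimal Python interpreter for it. This is only a sketch: the tuple program format, the opcode names, and the label handling are my own assumptions, not part of the exercise.

```python
def run(program, regs):
    """Run a list of (label, op, args) tuples on a register file (a list)."""
    labels = {lab: i for i, (lab, _, _) in enumerate(program) if lab}
    pc = 0
    while pc < len(program):
        _, op, a = program[pc]
        pc += 1
        if op == "Inc":          regs[a[0]] += 1
        elif op == "Dec":        regs[a[0]] = max(0, regs[a[0]] - 1)
        elif op == "JumpOnZero":
            if regs[a[0]] == 0:  pc = labels[a[1]]
        elif op == "Halt":       break
        elif op == "SetConst":   regs[a[0]] = a[1]           # Rj <- l
        elif op == "Copy":       regs[a[0]] = regs[a[1]]     # Rj <- Rl
        elif op == "Add":        regs[a[0]] += regs[a[1]]    # Rj <- Rj + Rl
        elif op == "Sub":        regs[a[0]] = max(0, regs[a[0]] - regs[a[1]])
        elif op == "MulConst":   regs[a[0]] *= a[1]          # Rj <- Rj * l
        elif op == "DivConst":   regs[a[0]] //= a[1]         # Rj <- Rj / l
        elif op == "ModConst":   regs[a[0]] %= a[1]          # Rj <- Rj mod l
    return regs

# Example: move R1 into R0 by repeated Dec/Inc.  R2 stays 0, so
# "JumpOnZero R2, loop" acts as an unconditional jump.
prog = [
    ("loop", "JumpOnZero", (1, "end")),
    (None,   "Dec",        (1,)),
    (None,   "Inc",        (0,)),
    (None,   "JumpOnZero", (2, "loop")),
    ("end",  "Halt",       ()),
]
assert run(prog, [4, 3, 0]) == [7, 0, 0]
```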

1.2 Unboundedness is Unnecessary

The program of any RAM must be finite in length. Say the program for some given RAM is a list of n basic instructions. Each instruction refers to at most two different registers, so the program can involve at most 2n different registers. Therefore, any given RAM uses only a finite number of registers.

1.3 Prime Encoding

Let k be the maximum register index referred to by the program (this exists since the list of registers used is finite). These k + 1 registers (R0, R1, ..., Rk) can be stored in a single register using prime encoding. Let p_0, p_1, ..., p_k, p_{k+1}, p_{k+2}, p_{k+3} be k + 4 distinct primes (I will need a few extra primes for some of the simulations). If register j holds the value r_j for all 0 ≤ j ≤ k, then we can encode all of the values r_j in the first register via

    R0 ← ∏_{i=0}^{k} p_i^{r_i}

Notice that r_{k+1} = r_{k+2} = r_{k+3} = 0. As a matter of convention, I will refer to the (k+1)-register machine's registers as r_0, r_1, ..., r_k and to the 3-register machine's registers as R0, R1, and R2. We begin with the input encoded in R0.

1.4 Simulating Instructions

Now let's simulate each basic instruction with a list of instructions which use only three registers. The (k+1)-register machine's program instructions are to be replaced by copies of the following simulations in order to construct the program for the 3-register machine. Note: each copy of a simulation will need to have its labels adjusted so that labels remain unique. Also, some simulations are built out of previous ones; I put quotation marks around the instructions which must themselves be replaced by the corresponding simulation.

Replace Inc Rj with:

    R0 ← R0 · p_j

This increases the exponent of p_j by one. Thus r_j ← r_j + 1.
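
The encoding and the Inc simulation can be sketched in Python (the helper names `encode` and `decode` are mine):

```python
PRIMES = [2, 3, 5, 7, 11, 13, 17]   # p_0, p_1, ... (enough for a small k)

def encode(regs):
    # R0 <- prod_i p_i^{r_i}
    R0 = 1
    for p, r in zip(PRIMES, regs):
        R0 *= p ** r
    return R0

def decode(R0, k):
    # Recover r_0..r_k by counting how often each prime divides R0.
    regs = []
    for p in PRIMES[:k + 1]:
        r = 0
        while R0 % p == 0:
            R0 //= p
            r += 1
        regs.append(r)
    return regs

R0 = encode([3, 1, 4])     # r_0 = 3, r_1 = 1, r_2 = 4
R0 *= PRIMES[1]            # simulate Inc R1: multiply by p_1
assert decode(R0, 2) == [3, 2, 4]
```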

Replace Dec Rj with:

    R1 ← R0
    R1 ← R1 mod p_j
    JumpOnZero R1, divisible
    R1 ← 0
    JumpOnZero R1, notdivisible
divisible:
    R0 ← R0 ÷ p_j
notdivisible:
    R1 ← 0

First we need to check whether r_j = 0. If it is, then dividing the copy of R0 by p_j leaves a non-zero remainder, so we end up jumping to notdivisible and finishing. Otherwise, we divide R0 by p_j and thus decrement the exponent r_j by one (r_j ← r_j − 1).

Replace JumpOnZero Rj, label with:

    R1 ← R0
    R1 ← R1 mod p_j
    JumpOnZero R1, divisible
    R1 ← 0
    JumpOnZero R1, label
divisible:
    R1 ← 0

We check whether r_j = 0 just as in Dec Rj. We jump to divisible only when r_j ≠ 0, which dumps us out of the simulation; if r_j = 0, we jump to label instead. Note: the last R1 ← 0 functions as a no-op. I will continue to use this device throughout the simulations to guarantee that each label has an instruction next to it. You may also want to note that all simulations end with R1 = R2 = 0.

Halt needs no replacement.

Replace Rj ← l with:

again:
    R1 ← R0
    R1 ← R1 mod p_j
    JumpOnZero R1, divisible
    R1 ← 0
    JumpOnZero R1, notdivisible
divisible:
    R0 ← R0 ÷ p_j
    JumpOnZero R1, again
notdivisible:
    R0 ← R0 · p_j^l

As long as r_j ≠ 0, we decrement r_j by dividing R0 by p_j. Once r_j = 0, we set it to l by multiplying R0 by p_j^l. That is: r_j ← l.
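
In Python, the divisibility tests behind these two simulations amount to the following (function names are mine):

```python
def dec(R0, p_j):
    # Dec Rj: strip one factor of p_j, but only if r_j > 0
    # (i.e. only if p_j actually divides R0).
    return R0 // p_j if R0 % p_j == 0 else R0

def reg_is_zero(R0, p_j):
    # The test behind JumpOnZero Rj: r_j = 0 iff p_j does not divide R0.
    return R0 % p_j != 0

R0 = 2**3 * 5**2                   # r_0 = 3, r_1 = 0, r_2 = 2
assert reg_is_zero(R0, 3)          # r_1 = 0
assert dec(R0, 3) == R0            # Dec R1 does nothing at zero
assert dec(R0, 2) == 2**2 * 5**2   # Dec R0: the exponent drops to 2
```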

Replace Rj ← Rl with:

    "Rj ← 0"
    R1 ← R0
again:
    R2 ← R1
    R2 ← R2 mod p_l
    JumpOnZero R2, divisible
    R2 ← 0
    JumpOnZero R2, notdivisible
divisible:
    R1 ← R1 ÷ p_l
    R0 ← R0 · p_j
    JumpOnZero R2, again
notdivisible:
    R1 ← 0

First we use the previous simulation to set r_j = 0. We then copy all the registers in R0 into R1, so that we can use R1 like a second set of registers. As long as r_l ≠ 0 (in R1), we decrement r_l (in R1, leaving the original r_l in R0 untouched) and increment r_j in R0. Thus r_j in R0 gets incremented r_l times. Hence r_j ← r_l.

Replace Rj ← Rj + Rl with:

    R1 ← R0
again:
    R2 ← R1
    R2 ← R2 mod p_l
    JumpOnZero R2, divisible
    R2 ← 0
    JumpOnZero R2, notdivisible
divisible:
    R1 ← R1 ÷ p_l
    R0 ← R0 · p_j
    JumpOnZero R2, again
notdivisible:
    R1 ← 0

This works exactly like the previous simulation except that we do not start off by resetting r_j to zero. We get r_j ← r_j + r_l.
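
The loop behind Rj ← Rj + Rl is easy to check directly in Python (a sketch; `add_reg` is my name):

```python
def add_reg(R0, p_j, p_l):
    # Rj <- Rj + Rl: keep a scratch copy (playing the role of R1) and,
    # while p_l still divides the copy, strip one factor of p_l from the
    # copy and multiply one factor of p_j into R0.  The loop runs r_l times.
    R1 = R0
    while R1 % p_l == 0:
        R1 //= p_l
        R0 *= p_j
    return R0

R0 = 2**2 * 3**3                          # r_0 = 2, r_1 = 3
assert add_reg(R0, 2, 3) == 2**5 * 3**3   # r_0 <- 2 + 3 = 5
```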

Replace Rj ← Rj − Rl with:

    R1 ← R0
again:
    R2 ← R1
    R2 ← R2 mod p_l
    JumpOnZero R2, divisible
    R2 ← 0
    JumpOnZero R2, notdivisible
divisible:
    R2 ← R0
    R2 ← R2 mod p_j
    JumpOnZero R2, divisibletoo
    R2 ← 0
    JumpOnZero R2, notdivisible
divisibletoo:
    R1 ← R1 ÷ p_l
    R0 ← R0 ÷ p_j
    JumpOnZero R2, again
notdivisible:
    R1 ← 0

First, copy memory into R1. If r_l = 0 (in R1), there is nothing to subtract, so we jump to notdivisible and end the simulation. Next, if r_j = 0 (in R0), we cannot subtract anything more from r_j, so again we jump to notdivisible and end the simulation. If both are non-zero, we decrement r_l (in R1) and decrement r_j (in R0), then repeat the process. Thus, if r_j ≥ r_l, we decrement r_j a total of r_l times and get r_j ← r_j − r_l; otherwise we end up decrementing until r_j = 0.

Replace Rj ← Rj · l with:

    R1 ← l
    JumpOnZero R1, zerocase
    "R(k+1) ← l"
    "R(k+2) ← Rj"
    "Rj ← 0"
again:
    "Rj ← Rj + R(k+2)"
    "Dec R(k+1)"
    "JumpOnZero R(k+1), done"
    "JumpOnZero R(k+3), again"
zerocase:
    "Rj ← 0"
done:
    "R(k+2) ← 0"
    R1 ← 0

We can piece together previous simulations to make this one. First, if l = 0, we have a degenerate case: we jump to zerocase and set r_j = 0. Otherwise, we save a copy of r_j in r_{k+2}, reset r_j to zero, and use r_{k+1} as a counter. This is okay since the original RAM only uses the registers up through r_k. We then add the saved copy into r_j a total of l = r_{k+1} times, giving r_j ← r_j · l. (Since r_{k+3} is always zero, the final JumpOnZero acts as an unconditional jump back to again.) Note: at the end of each simulation not only are R1 and R2 set to zero, but also r_{k+1}, r_{k+2}, and r_{k+3}.
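
Truncated subtraction on the encoding can be sketched the same way (`sub_reg` is my name):

```python
def sub_reg(R0, p_j, p_l):
    # Rj <- Rj - Rl, truncated at zero: while both the scratch copy of r_l
    # and r_j itself are non-zero, decrement each by one.  The loop runs
    # min(r_j, r_l) times.
    R1 = R0
    while R1 % p_l == 0 and R0 % p_j == 0:
        R1 //= p_l
        R0 //= p_j
    return R0

assert sub_reg(2**5 * 3**2, 2, 3) == 2**3 * 3**2   # 5 - 2 = 3
assert sub_reg(2 * 3**4, 2, 3) == 3**4             # 1 - 4 truncates to 0
```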

Replace Rj ← Rj ÷ l with:

    "R(k+1) ← Rj"
    "R(k+2) ← l"
    "Rj ← 0"
again:
    "R(k+3) ← R(k+1)"
    "R(k+3) ← R(k+3) − R(k+2)"
    "R(k+3) ← R(k+3) + R(k+2)"
    "R(k+3) ← R(k+3) − R(k+1)"
    "JumpOnZero R(k+3), bigenough"
    "R(k+3) ← 0"
    "JumpOnZero R(k+3), done"
bigenough:
    "R(k+1) ← R(k+1) − R(k+2)"
    "Inc Rj"
    "JumpOnZero R(k+1), done"
    "JumpOnZero R(k+3), again"
done:
    "R(k+1) ← 0"
    "R(k+2) ← 0"

We divide r_j by l (some positive integer) by counting how many times we can subtract l from r_j before the remainder drops below l. First we set up the calculation by copying r_j into r_{k+1}, copying l into r_{k+2}, and setting r_j to zero (r_j is our counter). Because subtraction never gives an answer less than zero, we can find out whether r_{k+1} is less than l by subtracting l = r_{k+2} from it, adding l back, and then subtracting the original r_{k+1}. For example, with l = 5: 3 − 5 = 0 and 0 + 5 = 5, but 5 − 3 = 2 ≠ 0, thus 3 < 5. Hence, if r_{k+1} was at least l, the subtract-then-add leaves its value unchanged, so r_{k+3} = r_{k+1} − r_{k+1} = 0. In that case r_{k+1} is big enough to have a copy of l subtracted from it, so we subtract l and increment our counter r_j. We repeat this process until the remainder is too small (less than l).

Replace Rj ← Rj mod l with:

    "R(k+1) ← Rj"
    "R(k+1) ← R(k+1) ÷ l"
    "R(k+1) ← R(k+1) · l"
    "Rj ← Rj − R(k+1)"
    "R(k+1) ← 0"

Just note that the remainder is given by Rj − [(Rj ÷ l) · l].

1.5 Conclusion

An arbitrary RAM uses only finitely many registers; finitely many registers can be stored in a single register using prime encoding; and all instructions can be simulated using just three registers when the memory is encoded in the first register. Hence any RAM can be simulated by a 3-register RAM.
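
As an end-to-end sanity check of the conclusion, the following Python sketch (all names and the particular register values are my own illustration) pushes a short sequence of simulated operations through a single encoded integer and confirms the decoded registers come out right:

```python
# p_0..p_3 for a hypothetical 4-register source machine,
# tracked through one encoded integer.
P = [2, 3, 5, 7]

def decode4(R0):
    # Read all four registers back out of the encoding.
    out = []
    for p in P:
        r = 0
        while R0 % p == 0:
            R0, r = R0 // p, r + 1
        out.append(r)
    return out

R0 = 1
for p, r in zip(P, [3, 1, 0, 2]):   # load (r_0, r_1, r_2, r_3) = (3, 1, 0, 2)
    R0 *= p ** r

R0 *= P[2]            # Inc R2
R0 //= P[0]           # Dec R0 (p_0 divides R0 since r_0 = 3 > 0)
scratch = R0          # the role of R1: a scratch copy for R3 <- R3 + R1
while scratch % P[1] == 0:
    scratch //= P[1]
    R0 *= P[3]        # each stripped factor of p_1 adds one to r_3

assert decode4(R0) == [2, 1, 1, 3]
```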

2 Recognizing L = {w2w | w ∈ {0,1}*}

Conjecture: A one-tape Turing Machine requires time Ω(n²) to recognize the language

    L = {w2w | w ∈ {0,1}*}

The conjecture is true. We proved in class that any one-tape Turing Machine (TM) requires Ω(n²) time to recognize the language of palindromes PAL = {w ∈ {0,1,#}* | w = w^R}. A simple adaptation of that proof will work for the language L also.

2.1 Notation and Defining L_n

First, let M be a Turing Machine which recognizes the language L, and let Q be the set of states of M. (Note: obviously, any machine which recognizes L must have more than one state, i.e. |Q| > 1.) If this machine accepts L, it must also accept every string in any subset of L, including

    L_n = {w0^n 2 w0^n | w ∈ {0,1}^n}

Notice that w ∈ L_n implies |w| = 4n + 1 ∈ O(n). I will show that there is some element of L_n which cannot be recognized in time less than Ω(n²).

2.2 Crossing Sequences

Consider the following crossing sequences. For w ∈ L_n and n ≤ j ≤ 2n, let c_j(w) = (q_1, q_2, ...) be the ordered list of the states that M is in when crossing between the j-th and (j+1)-st symbols on the tape. This list is finite since M must eventually halt and accept w. Let C(w) = {c_j(w) | n ≤ j ≤ 2n} (the crossing sequences in the first region of 0's in w = σ0^n 2 σ0^n).

2.3 For w, x ∈ L_n with w ≠ x, C(w) ∩ C(x) = ∅

Just as in the proof from class, we can see that different elements of L_n have disjoint sets of crossing sequences. Suppose that w, x ∈ L_n with w ≠ x. Write w = σ0^n 2 σ0^n and x = τ0^n 2 τ0^n for some σ, τ ∈ {0,1}^n; because w ≠ x, we must have σ ≠ τ. Now suppose that c ∈ C(w) ∩ C(x). Then we have, for some n ≤ i, j ≤ 2n, c = c_i(w) = c_j(x). Let w = w′w″ be the partition of w into its first i and last 4n + 1 − i symbols. Likewise, let x = x′x″ be the partition of x into its first j and last 4n + 1 − j symbols. Now, because the crossing sequences are equal, M cannot tell whether it is moving out of w′ into w″ or out of x′ into x″, and the same holds when it transitions back from w″ into w′ or from x″ into x′.
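
The cut-and-paste step can be checked concretely. Here is a small Python sketch (the particular σ, τ and the membership test `in_L` are my own illustration) verifying that every paste w′x″ of two distinct members of L_n lands outside L:

```python
def in_L(s):
    # L = { w 2 w : w in {0,1}* }
    if s.count("2") != 1:
        return False
    left, right = s.split("2")
    return left == right

n = 3
sigma, tau = "010", "011"                     # sigma != tau
w = sigma + "0" * n + "2" + sigma + "0" * n   # w in L_n
x = tau + "0" * n + "2" + tau + "0" * n       # x in L_n
assert in_L(w) and in_L(x)

for i in range(n, 2 * n + 1):        # cut w after its first i symbols
    for j in range(n, 2 * n + 1):    # cut x after its first j symbols
        pasted = w[:i] + x[j:]
        # w'x'' has the predicted shape sigma 0^{n+i-j} 2 tau 0^n ...
        assert pasted == sigma + "0" * (n + i - j) + "2" + tau + "0" * n
        assert not in_L(pasted)      # ... and never lies in L
```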
In essence, the machine's computation (for any given word) can be divided into two separate worlds (at any partition of the word), and if the crossing states match up for two different words at some partition, we can paste these two worlds together. Therefore, because M accepts w and x, it must also accept w′x″.

Let's look at w′x″. For w′ we have n ≤ |w′| = i ≤ 2n, so w′ = σ0^{i−n}. Also 2n + 1 ≤ |x″| = 4n + 1 − j ≤ 3n + 1, so x″ = 0^{2n−j} 2 τ0^n. Therefore w′x″ = σ0^{n+i−j} 2 τ0^n ∉ L: even if |σ0^{n+i−j}| = |τ0^n|, we know that σ ≠ τ, hence σ0^{n+i−j} ≠ τ0^n. So M must not accept w′x″. This is a contradiction, and thus c = c_i(w) = c_j(x) cannot exist. Therefore we have proved: for w, x ∈ L_n with w ≠ x, C(w) ∩ C(x) = ∅.

2.4 Counting Crossing Sequences

Next, note that the number of crossing sequences of length exactly i is |Q|^i. Thus the number of crossing sequences of length at most m is (recall |Q| > 1):

    ∑_{i=0}^{m} |Q|^i = (|Q|^{m+1} − 1) / (|Q| − 1)

For each w ∈ L_n we can pick out some c_w ∈ C(w) such that |c_w| = min_{c ∈ C(w)} |c| (a sequence of minimal length). Since the sets of crossing sequences are disjoint, we get a distinct minimal-length sequence for each w ∈ L_n. So we need at least |L_n| = 2^n minimal-length crossing sequences. Let m = max_{w ∈ L_n} |c_w| (the length of the longest minimal crossing sequence). Thus we must have:

    (|Q|^{m+1} − 1) / (|Q| − 1) ≥ |L_n| = 2^n

Taking logs we get: m·log|Q| + (some constant) ≥ log(|Q|^{m+1} − 1) − log(|Q| − 1) ≥ n·log 2. This implies m ∈ Ω(n). Hence there exists some element ŵ ∈ L_n such that |c_ŵ| = m ∈ Ω(n), and thus every crossing sequence in C(ŵ) has length Ω(n). So in order to accept ŵ, M must cross over each of the n tape boundaries between the (n+1)-st and (2n)-th symbols Ω(n) times. Hence accepting ŵ ∈ L_n ⊆ L takes n·Ω(n) time. In other words, accepting words of length Θ(n) in L can take Ω(n²) time.

2.5 Final Comments

Actually, it is easy to see that no more than O(n²) time is required to recognize L. Consider the following TM:

1: Scan over the input to make sure that there is exactly one 2. If not, reject; else return to the beginning of the tape.
2: Erase the first symbol if it is 0 or 1 and remember it. If it is 2, go to step 5.
3: Scan past all 2's and check whether the first symbol to the right of the 2's matches the remembered symbol. If not, reject; else replace this symbol with a 2.
4: Scan back to the beginning of the tape and go back to step 2.
5: Scan right and check whether all remaining symbols are 2's. If so, accept; if not, reject.

Step 1 takes time O(n). Steps 2–4 take O(n) and can loop at most O(n) times.
Step 5 takes time O(n). Thus this machine takes no more than 2·O(n) + O(n)·O(n) = O(n²) time. Hence L can be recognized in time O(n²), and by the previous argument we can do no better asymptotically.
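
The five-step machine is easy to prototype. Here is a Python sketch (using '_' for an erased cell; the control flow mirrors steps 1–5 above, though of course it is not a literal Turing machine):

```python
def recognize(s):
    tape = list(s)
    if tape.count("2") != 1:                    # Step 1: exactly one 2
        return False
    while True:
        head = 0
        while tape[head] == "_":                # skip already-erased cells
            head += 1
        sym = tape[head]
        if sym == "2":                          # Step 2 found a 2 -> Step 5
            return all(c == "2" for c in tape[head:])
        tape[head] = "_"                        # Step 2: erase, remember sym
        while tape[head] != "2":                # Step 3: scan to the 2-block
            head += 1
        while head < len(tape) and tape[head] == "2":
            head += 1                           # scan past all 2's
        if head == len(tape) or tape[head] != sym:
            return False                        # mismatch: reject
        tape[head] = "2"                        # matched: overwrite with 2
        # Step 4: loop back to the left end and repeat Step 2

assert recognize("01201")
assert recognize("2")
assert not recognize("0120")
assert not recognize("0121")
assert not recognize("0101")   # no 2 at all
```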