Theory of Computation

Theory of Computation. Dr. Sarmad Abbasi.

Lecture 21 Overview: Big-Oh notation, little-o notation, time complexity classes, non-deterministic TMs, the class P.

Resource Bounded Computations. Let us now look at resource-bounded computations. The most important resources are:
1. Time
2. Space

Resource Bounded Computations. Let A = {0^k 1^k : k ≥ 0}. We saw an algorithm that accepts A in time O(n^2).

Big Oh Notation. Let f, g : N → R⁺. We say that f(n) = O(g(n)) if there are positive integers c and n_0 such that for all n ≥ n_0, f(n) ≤ c·g(n). When f(n) = O(g(n)) we say that g is an asymptotic upper bound on f. Sometimes we just say that g is an upper bound on f.

Big Oh Notation. Let f(n) = 13n^4 + 7n^2 + 13. Then f(n) = O(n^4). Furthermore, f(n) = O(n^5). But it is not true that f(n) = O(n^3).
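
As a small worked check (my own addition, not on the slide), one explicit choice of constants witnessing f(n) = O(n^4) is
\[
13n^4 + 7n^2 + 13 \;\le\; 13n^4 + 7n^4 + 13n^4 \;=\; 33n^4 \qquad \text{for all } n \ge 1,
\]
so the definition is satisfied with c = 33 and n_0 = 1.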

Big Oh Notation. Suppose that f(n) = log_b n, where b is a constant. Note that log_b n = log_2 n / log_2 b. Hence we can say f(n) = O(log n). Let f_2(n) = 5n log n + 200n log log n + 17; then f_2(n) = O(n log n).

Big Oh Notation. We can use O notation in exponents also. Thus 2^{O(n)} stands for a function that is upper bounded by 2^{cn} for some constant c.

Big Oh Notation. We have to be careful though. For example, consider 2^{O(log n)}. Remember that O(log n) has a constant attached to it, so this function is upper bounded by 2^{c log n} for some constant c. Now, 2^{c log n} = (2^{log n})^c = n^c. Thus 2^{O(log n)} is polynomial.

Big Oh Notation. Bounds of the form n^c are called polynomial bounds, whereas bounds of the form 2^{n^c} (with c > 0) are called exponential bounds.

Little Oh Notation. The Big-Oh notation roughly says that f is upper bounded by g. Now we want to say that f grows strictly slower than g. Let f, g : N → R⁺. We say that f(n) = o(g(n)) if lim_{n→∞} f(n)/g(n) = 0. Thus the ratio of f and g becomes closer and closer to 0.

Little Oh Notation. Examples:
1. √n = o(n)
2. n = o(n log log n)
3. n log log n = o(n log n)
4. n log n = o(n^2)
5. n^2 = o(n^3)
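
To see these limits numerically, here is a small illustrative Python snippet (not part of the lecture; the function name ratio is made up):

import math

def ratio(f, g, ns):
    # Print f(n)/g(n) for increasing n; when f(n) = o(g(n)) the ratio tends to 0.
    for n in ns:
        print(n, f(n) / g(n))

# Item 4 above: n log n = o(n^2).
ratio(lambda n: n * math.log(n), lambda n: n * n, [10, 10**3, 10**6, 10**9])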

Little Oh Notation. Let's look at a new TM that recognizes A = {0^k 1^k : k ≥ 0}.
1. Scan the tape and make sure all the 0s come before the 1s.
2. Repeat while both 0s and 1s remain on the tape:
   2.1. Scan the tape; if the total number of remaining 0s and 1s is odd, reject.
   2.2. Scan again, crossing off every other 0 and every other 1.
3. If all 0s and 1s are crossed off, accept; otherwise reject.
Thus if we have 13 0s, in the next stage there would be 6.
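
The same crossing-off procedure, written as a Python sketch over a list rather than a tape (illustrative only; decide_A is not from the slides):

def decide_A(w):
    # Decide A = {0^k 1^k : k >= 0} by the crossing-off procedure above.
    tape = list(w)
    # Stage 1: reject if the input is not of the form 0...01...1.
    if any(c not in "01" for c in tape) or "10" in w:
        return False
    # Stage 2: repeat while both 0s and 1s remain.
    while "0" in tape and "1" in tape:
        remaining = sum(1 for c in tape if c in "01")
        if remaining % 2 == 1:        # odd total number of 0s and 1s: reject
            return False
        for symbol in "01":           # cross off every other 0, then every other 1
            seen = 0
            for i, c in enumerate(tape):
                if c == symbol:
                    if seen % 2 == 0:
                        tape[i] = "x"
                    seen += 1
    # Stage 3: accept iff nothing is left uncrossed.
    return "0" not in tape and "1" not in tape

print(decide_A("000111"), decide_A("00111"))   # True False

Each pass halves the number of surviving 0s and 1s, which is where the O(n log n) bound on the next slide comes from.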

Little Oh Notation. This algorithm shows that there is an O(n log n)-time TM that decides A. Since n log n = o(n^2), this points to a real algorithmic improvement.

Time Complexity Classes. Let t : N → N be a function. Define the time complexity class TIME(t(n)) to be: TIME(t(n)) = {L | L is decided by an O(t(n))-time Turing machine}.

So, for example:
1. TIME(n) is the set of all languages L for which there is a linear-time TM that accepts L.
2. TIME(n^2) is the set of all languages L for which there is a quadratic-time TM that accepts L.

Recall A = {0^k 1^k : k ≥ 0}. Our first algorithm (or TM) showed that A ∈ TIME(n^2). The second one showed that A ∈ TIME(n log n). Notice that TIME(n log n) ⊆ TIME(n^2).

What will happen if we allow ourselves a 2-tape TM? Can we have a faster TM? Yes, if we have two tapes we can do the following:

1. Scan across to find out whether all the 0s come before the 1s.
2. Copy all the 0s to the second tape.
3. Scan the 1s on the first tape in the forward direction and the 0s on the second tape in the reverse direction, crossing off 0s and 1s in pairs.
4. If all 0s and all 1s have been crossed off, accept; else reject.
In fact, one can design a 2-tape TM that accepts A while scanning the input only once!
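
For contrast, a Python sketch of the two-tape idea (illustrative only, not from the slides; the second tape is modelled as a stack):

def decide_A_two_tapes(w):
    # Mimic the 2-tape TM: copy the 0s to a second tape, then match them against the 1s.
    if any(c not in "01" for c in w) or "10" in w:
        return False                       # a 0 occurs after a 1, or a bad symbol
    tape2 = [c for c in w if c == "0"]     # step 2: copy all 0s to the second tape
    for c in w:                            # step 3: scan the 1s, crossing off 0s in reverse
        if c == "1":
            if not tape2:
                return False               # more 1s than 0s
            tape2.pop()
    return not tape2                       # step 4: accept iff every 0 was matched

print(decide_A_two_tapes("0011"), decide_A_two_tapes("001"))   # True False

Every symbol is handled only a constant number of times, matching the O(n) two-tape bound.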

This leads to an important question. Question: Can we design a 1-tape TM that accepts A in o(n log n) time, preferably linear time? The answer is NO!

In fact, there is a language L such that:
1. L is accepted by a 2-tape TM in time O(n).
2. L cannot be accepted by any 1-tape TM in time o(n^2).

Thus the definition of TIME(t(n)) changes depending on whether we talk about 1-tape TMs or 2-tape TMs. Fortunately, it does not change by too much, as the following theorem shows. Theorem: Let t(n) be a function where t(n) ≥ n. Then every t(n)-time k-tape TM has an equivalent O(t^2(n))-time single-tape TM.

Notice the difference from computability theory. Previously we showed: Theorem: Every k-tape TM has an equivalent 1-tape TM. (We ignored time.) Now we are doing a more detailed analysis.

The proof of the theorem is just a more careful study of the previous proof. Remember, given a k-tape TM M, we made a 1-tape TM N. N works as follows:
1. On input x, convert the input to the start configuration of M, #q_0 x# # ... # (x is on the first tape, the rest of the tapes are empty, and the machine is in state q_0).
2. In each pass over the tape, change the current configuration to the next one.
3. If an accepting configuration is reached, accept.
4. If a rejecting configuration is reached, reject.
Now we just have to estimate how much time N requires.

On any input x of length n, we make the following claims:
1. M uses at most t(n) cells on each of its k tapes.
2. Thus any configuration has length at most k·t(n) = O(t(n)).
3. Thus each pass of N requires at most O(t(n)) steps.
4. N makes at most t(n) passes over its tape (after that M must have halted).
This shows that N runs in time O(t(n) · t(n)) = O(t^2(n)).

Where did we use the fact that t(n) ≥ n? In the first step the machine converts x to the initial configuration, which takes time O(n). Thus the total time is actually O(n) + O(t^2(n)) = O(t^2(n)). The assumption t(n) ≥ n is reasonable, as M requires time O(n) just to read its input, which is needed for most problems: the answer usually depends on the entire input.

You have studied many non-deterministic models, such as:
1. NFA: non-deterministic finite automata.
2. NPDA: non-deterministic pushdown automata.
We will now define non-deterministic TMs.

A non-deterministic TM is formally given as M = (Σ, Γ, Q, q_0, q_a, q_r, δ), where:
1. Σ is the input alphabet.
2. Γ is the work alphabet.
3. Q is the set of states, and q_0, q_a, q_r are the start, accept and reject states.
4. δ is now a transition function δ : Q × Γ → P(Q × Γ × {L, R}).
For example, we may have δ(q_5, a) = {(q_7, b, R), (q_4, c, L), (q_3, a, R)}. Thus the machine has three possible ways to proceed.
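
A minimal sketch (not from the slides) of how such a transition function can be represented in Python; all state names are illustrative:

# delta maps a (state, symbol) pair to a *set* of (next_state, write_symbol, move) triples.
delta = {
    ("q5", "a"): {("q7", "b", "R"), ("q4", "c", "L"), ("q3", "a", "R")},
    # ... one entry per (state, tape symbol) pair that has legal moves ...
}

def choices(state, symbol):
    # Return the set of legal moves; a deterministic TM would return exactly one.
    return delta.get((state, symbol), set())

print(choices("q5", "a"))   # the three possible ways to proceed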

Time Complexity. We can think of a non-deterministic computation as a tree: the root is the start configuration, and the children of a node are the configurations reachable from it in one step under the different non-deterministic choices.

We will say that a non-deterministic TM M accepts x if x is accepted on at least one branch. On the other hand, M rejects x if all branches of the tree reach the reject state.

The nondeterministic running time of M is the length of the longest branch. More formally: the running time of a nondeterministic TM M is the function f : N → N such that f(n) is the maximum number of steps that M uses on any branch of its computation on any input of length n.

A few comments on the definition:
1. The notion of non-deterministic time does not correspond to any real-world machine we can build.
2. It is merely a mathematical definition.
3. We will see that this definition is extremely useful: it allows us to capture many interesting problems.

Let us prove a relation between non-deterministic time and deterministic time. Theorem: Let t(n) be a function, where t(n) ≥ n. Then every t(n)-time nondeterministic single-tape TM has an equivalent 2^{O(t(n))}-time deterministic single-tape TM. Thus this theorem says that we can get rid of non-determinism at the cost of exponentiating the time!

Let N be a non-deterministic TM that runs in time t(n). Let us first estimate how big the computation tree of N can be:
1. The tree has depth t(n).
2. Let b be the maximum number of legal choices given by N's transition function.
3. Then this tree can have at most b^{t(n)} leaves.
4. The total number of nodes is at most 1 + b + b^2 + ... + b^{t(n)} ≤ 2·b^{t(n)}.
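
The bound in item 4 is just a geometric series (a small step not spelled out on the slide); assuming b ≥ 2,
\[
1 + b + b^2 + \cdots + b^{t(n)} \;=\; \frac{b^{t(n)+1} - 1}{b - 1} \;\le\; \frac{b}{b-1}\, b^{t(n)} \;\le\; 2\, b^{t(n)}.
\]
(If b = 1 the tree is a single path and the bound is immediate.)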

A deterministic TM D simulates the computation tree of N. There are several ways to do this; one is given in the book, and here is a slightly different one. To begin with, D is going to be a multi-tape TM.

Consider a string of length t(n) over R = {1, 2, ..., b}. The string 12313412, for example, corresponds to the computation of N in which we:
1. Start at the root and go to its first child, c_1.
2. Then go to the second child of c_1, call it c_2.
3. Then go to the third child of c_2, call it c_3.
4. Then go to the first child of c_3, and so on.
Thus each such string corresponds to a branch in the tree.

Thus D is the following TM:
1. On input x of length n:
2. For every string s ∈ R^{t(n)}:
3. Simulate N on x, making the choices dictated by s.
4. If N accepts on some branch, accept.
5. If all branches of N reject, reject.
This method assumes that t(n) can be computed quickly; however, we can also eliminate this requirement.
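
The brute-force structure of D as a Python sketch (illustrative only; simulate_branch and its behaviour are assumptions made for this example, not part of the lecture):

from itertools import product

def deterministic_simulation(x, t, b, simulate_branch):
    # simulate_branch(x, s) is assumed to run N on input x, resolving the i-th
    # nondeterministic choice with s[i], and to return "accept" or "reject".
    for s in product(range(1, b + 1), repeat=t):   # every string in R^t, R = {1, ..., b}
        if simulate_branch(x, s) == "accept":
            return "accept"                        # some branch accepts
    return "reject"                                # no branch accepted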

For each string s, the simulation takes O(t(n)) time. There are b^{t(n)} strings to simulate. Thus the total time is O(t(n)) · b^{t(n)} = 2^{O(t(n))}. Our last step is to convert this multi-tape TM into a single-tape TM. If we use the previous theorem, the total time becomes (2^{O(t(n))})^2 = 2^{2·O(t(n))} = 2^{O(t(n))}. We will come back to NTMs again.
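
To fill in the arithmetic behind O(t(n)) · b^{t(n)} = 2^{O(t(n))} (my own step, not on the slide): for all sufficiently large n we have c·t(n) ≤ 2^{t(n)}, so
\[
O(t(n)) \cdot b^{t(n)} \;\le\; 2^{t(n)} \cdot 2^{t(n)\log_2 b} \;=\; 2^{(1+\log_2 b)\, t(n)} \;=\; 2^{O(t(n))}.
\]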

We want to know which problems can be solved in practice. Thus we define the class P as follows: P = ∪_k TIME(n^k), or, written out more elaborately, P = TIME(n) ∪ TIME(n^2) ∪ TIME(n^3) ∪ TIME(n^4) ∪ ...

The class P is important because:
1. P is invariant for all models of computation that are polynomially equivalent to the deterministic single-tape machine.
2. P roughly corresponds to the class of problems that are realistically solvable on these models.
Thus in defining P we can talk about single-tape TMs, multi-tape TMs, or Java programs: they all give the same class P.

The second point is that problems in P can be solved in reasonable time. Some people may object to this by pointing out that if a problem can only be solved in time O(n^5000), then such an algorithm is of no practical value. This objection is valid up to a point. However, taking P as the threshold between practical and impractical problems has been extremely useful.