IE661: Scheduling Theory (Fall 2003) Primer to Complexity Theory Satyaki Ghosh Dastidar


Turing Machine

A Turing machine is an abstract representation of a computing device. It consists of a read/write head that scans a (possibly infinite) one-dimensional (bidirectional) tape divided into squares, each of which is inscribed with a 0 or 1. Computation begins with the machine, in a given "state", scanning a square. It erases what it finds there, prints a 0 or 1, moves to an adjacent square, and goes into a new state. The machine stops after transferring to the special HALT state.

Figure 1: A Turing Machine

The control unit of a Turing machine is a finite state machine (FSM), or finite automaton. Significantly, the machine separates information into two elements: that held in its internal state, and that read externally from the tape. Its behavior is completely determined by three parameters: (1) the state the machine is in, (2) the symbol on the square it is scanning, and (3) a table of instructions (the transition table).

An instruction is defined as a 5-tuple:

(starting state, starting value, new state, new value, movement)

The table of instructions specifies, for each state and binary input, what the machine should write, which direction it should move in, and which state it should go into (e.g., "If in State 1 scanning a 0: print 1, move left, and go into State 3"). The transition table can be given as a chart describing the actions for each state, or as a state transition diagram representing the same information graphically.

The tape head is capable of only three actions: (1) write on the tape (or erase from it), only on the square being scanned, (2) change the internal state, and (3) move the tape 0 or 1 squares, to the left or right. Usually the states are named S0, S1, S2, etc., and the alphabet is 'B' (for blank), '1', and '0'.
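To make the model concrete, here is a minimal deterministic TM simulator in Python (an illustrative sketch, not part of the original notes; the "successor" machine and its state names are hypothetical). The transition table is a dictionary keyed by (state, scanned symbol), exactly the 5-tuple instructions described above.

```python
def run_tm(delta, tape, state="S0", halt="HALT", blank="B"):
    """Run a deterministic Turing machine. `delta` maps
    (state, scanned symbol) -> (new state, symbol to write, move),
    with move in {"L", "R"}. Returns the non-blank tape contents."""
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    while state != halt:
        state, write, move = delta[(state, cells.get(head, blank))]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# A toy machine using the unary convention adopted later in these notes
# (a string of n+1 ones represents n): scan right over the input and
# append one more 1, i.e. compute n -> n + 1.
succ = {
    ("S0", "1"): ("S0", "1", "R"),    # keep moving right over the 1s
    ("S0", "B"): ("HALT", "1", "R"),  # first blank: write a 1 and halt
}
run_tm(succ, "111")  # "1111" (2 in unary becomes 3)
```

Representing the tape as a dictionary lets the head move freely in both directions without preallocating an "infinite" tape.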
Numbers can (but need not) be represented by a string of 1s of length n+1 for the number n. The table can list only finitely many states, each of which is implicitly defined by the role it plays in the table of instructions. These states are often referred to as the "functional states" of the machine.

A probabilistic automaton can be defined as a Turing machine in which the transition from input and state to output and state takes place with a certain probability (e.g., "If in State 1 scanning a 0: (a) with probability 60% print 1, move left, and go into State 3; (b) with probability 40% print 0, move left, and go into State 2"). Technically, a valid TM should have an action defined for every state/symbol pair that might occur.

The formal definition of a Turing Machine M is a 7-tuple (Q, Σ, Γ, δ, q0, #, F), where

Q = set of states,
Σ = finite set of symbols, the input alphabet,
Γ = finite set of symbols, the tape alphabet,
δ = partial transition function,
# = blank,

q0 ∈ Q = initial state, and
F ⊆ Q = set of final states.

The transition function for Turing machines is given by

δ: Q × Γ → Q × Γ × {L, R}.

This means that when the machine is in a given state (Q) and reads a given symbol (Γ) from the tape, it replaces the symbol on the tape with some other symbol (Γ), goes to some other state (Q), and moves the tape head one square left (L) or right (R).

Consider the transition table described in Table 1; Fig. 2 shows the corresponding state transition diagram.

State  Read  Write  Move  Next state
S1     0     0      L     S1
S1     B     1      L     S2
S1     1     B      R     S1
S2     0     1      R     S2
S2     B     0      R     S2
S2     1     1      L     S1

Table 1: State Transition Table for a Turing Machine

Figure 2: Transition State Diagram for Turing Machine

There are several conventions commonly used in Turing machine computations. We adopt the convention that numbers are represented in unary notation, i.e., a string of n+1 successive 1s represents the non-negative integer n. Furthermore, if we want to compute a function f(n1, n2, ..., nk), we assume that initially the tape consists of n1, n2, ..., nk, properly encoded, each separated from the previous one by a single blank, with the tape head initially positioned at the left-most bit of the first argument and the Turing machine in some specified initial state. We say that the Turing machine has computed m = f(n1, n2, ..., nk) if, when the machine halts, the tape consists of n1, n2, ..., nk, m, properly encoded and separated by single blanks, with the read/write head back at the left-most bit of the first argument.

For example, suppose we wish to create a Turing machine to compute the function

m := multiply(n1, n2) := n1 * n2.

Suppose the input tape reads

_<1>1 1 1 _ 1 1 1 1 1 _

which encodes 3 and 4 respectively in unary notation (here the position of the read/write head is marked). Then the Turing machine should HALT with its tape reading

_<1>1 1 1 _ 1 1 1 1 1 _ 1 1 1 1 1 1 1 1 1 1 1 1 1 _

which encodes 3, 4, and 12 in unary notation.

For our purposes we will assume that a Turing machine is equivalent to a real computer: anything a Turing machine can compute in polynomial time, a real computer can also compute in polynomial time. An important result is: a function is computable if and only if it can be computed by a Turing machine.

Complexity Theory

Complexity Theory is part of theoretical computer science. It is important to understand the theoretical limits of computation; otherwise, how do you know what is feasible and what is not? Complexity Theory is about decision problems: those that have a TRUE/FALSE answer. For example,

PRIMES = { x ∈ N : x is prime }.

Given x ∈ N, determine whether x is prime. (Note: PRIMES ∈ P has been proved.)
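The unary input/output convention used in the multiply example can be checked with a short helper (an illustrative sketch, not part of the original notes; '_' stands for a blank square):

```python
def encode_args(*args):
    """Unary tape encoding from the notes: n is a string of n+1 ones,
    arguments separated by single blanks ('_')."""
    return "_" + "_".join("1" * (n + 1) for n in args) + "_"

def decode_tape(tape):
    """Read the numbers back off a halted tape."""
    return [len(block) - 1 for block in tape.strip("_").split("_")]

encode_args(3, 4)                           # "_1111_11111_"
decode_tape("_1111_11111_1111111111111_")   # [3, 4, 12]
```

The second call decodes the halting tape of the multiply example: the arguments 3 and 4 followed by the result 12.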

COMPOSITES = { x ∈ N : x is not prime }. Given x ∈ N, determine whether x is not prime.

COLOURABLE-k = { G : G is a graph colorable in k colors }. Given a graph G, determine whether or not it is colorable in k colors.

Polynomial Time

A decision problem is said to lie in the complexity class P if one can solve it in polynomial time (in terms of the input size). These are the easy problems. Notice that a problem can lie in P even if we do not yet know a polynomial-time algorithm for it: it lies in P, but we do not yet know that it lies in P.

Complexity Classes P and co-P

Formally: let I denote an instance of a problem (e.g., an integer), and let S denote a subset of all possible instances. A decision problem is then to decide whether I ∈ S. For the decision problem PRIMES, I is an integer and S is the set of primes.

A problem lies in P if, whenever I ∈ S, this can be determined (accepted) in polynomial time. A problem lies in co-P if it can be determined (accepted) in polynomial time whether I ∉ S.

P and co-P

Note: if a problem lies in P and we enter an instance I ∉ S, there is no guarantee that an accepting algorithm even terminates. Similarly, if a problem lies in co-P and we enter an instance I ∈ S, there is no guarantee that an accepting algorithm terminates.

P = co-P

Suppose a decision problem D lies in P; we wish to show it must lie in co-P. Since D lies in P, there is an algorithm that accepts instances I ∈ S in time n^c (for some constant c, where n is the input size). Now suppose we call the algorithm on an instance such that I ∉ S. Terminate the algorithm when it reaches n^c + 1 steps and output "do not accept". Hence we can turn an algorithm for accepting positive instances into one that accepts negative instances.
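The clocking argument can be sketched in Python by modelling the accepter as a generator that yields once per "step" (a toy model with hypothetical names, not part of the notes; a real machine model counts steps differently):

```python
def clocked_complement(stepper, c):
    """Given an accepter that accepts every positive instance within
    n**c steps (and may run forever on negative ones), build an
    accepter for the complement by clocking it at n**c + 1 steps."""
    def co_accepts(instance):
        budget = len(instance) ** c + 1
        gen = stepper(instance)
        for _ in range(budget):
            try:
                next(gen)                 # let the accepter take one step
            except StopIteration as stop:
                return not stop.value     # it answered in time: invert
        return True                       # out of time: it would never accept
    return co_accepts

def contains_a_stepper(s):
    """Toy accepter for 'the string contains an "a"': scans one
    character per step and loops forever on negative instances."""
    i = 0
    while True:
        if i < len(s) and s[i] == "a":
            return True
        if i >= len(s):
            while True:       # negative instance: never halt
                yield
        yield
        i += 1

co = clocked_complement(contains_a_stepper, 1)
co("xa")  # False: "xa" is a positive instance of the original problem
co("xy")  # True: the clocked run never accepted
```

The point mirrors the proof: the complement algorithm never needs the original accepter to terminate on negative instances, only a step counter.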
Non-Deterministic Polynomial Time

A decision problem X lies in NP if, whenever an instance of X is true, there is a certificate C (or proof) that can be checked in polynomial time. This means the certificate must be polynomial in length; it says nothing about how long it may take to find the certificate. In other words, there is a proof that I ∈ S which can be checked in polynomial time.

Complexity Class NP

COMPOSITES clearly lies in NP, since we can always exhibit a factor as a certificate, which can be checked in polynomial time. We do not, however, know how to find a factor in polynomial time. COLOURABLE-k clearly lies in NP, since we can exhibit a coloring as a witness.
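As an illustration of certificate checking (a sketch, not from the notes; the edge-list graph encoding is an assumption), verifying a proposed k-coloring takes time linear in the size of the graph:

```python
def check_coloring(edges, coloring, k):
    """Polynomial-time check of a COLOURABLE-k certificate: `coloring`
    maps each vertex to one of k colors; it is valid if no edge joins
    two vertices of the same color."""
    return (all(c in range(k) for c in coloring.values())
            and all(coloring[u] != coloring[v] for u, v in edges))

triangle = [(0, 1), (1, 2), (0, 2)]              # needs 3 colors
check_coloring(triangle, {0: 0, 1: 1, 2: 2}, 3)  # True
check_coloring(triangle, {0: 0, 1: 1, 2: 0}, 3)  # False
```

Checking the witness is easy; the hard part, as the notes say, is finding it.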

We do not, however, know how to find the coloring in polynomial time.

Complexity Class co-NP

The class co-NP is the set of problems for which, whenever an instance X is false, there is a certificate C (or proof) that can be checked in polynomial time. This is the set of problems for which there is a proof that I ∉ S such that the proof can be checked in polynomial time. We may not know how to generate this proof/witness/certificate in polynomial time, though.

Clearly COMPOSITES ∈ co-NP, since PRIMES ∈ P ⊆ NP. We have

COMPOSITES ∈ NP ∩ co-NP and P ⊆ NP ∩ co-NP.

NP and co-NP

Unlike the classes P and co-P, where we had P = co-P, it is believed that NP ≠ co-NP.

Is P = NP?

It turns out that almost all interesting problems lie in NP, and P is the set of easy problems. So are all interesting problems easy, i.e., do we have P = NP? This is the main open question in computer science. It is like other great questions: Is there intelligent life in the universe? What is the meaning of life? Will you get a job when you graduate?

Complexity Class NP-Complete

Most people believe P ≠ NP. There is a set of problems for which, if we knew a polynomial-time solution for any one of them, we would know P = NP. In some sense these are the hardest problems in NP; they are called the NP-Complete problems. Note: they are difficult in the worst case, which says nothing about the average case.

Examples of NP-Complete problems

SAT: given a Boolean expression (e.g., x ∧ (y ∨ ¬z)), can one find an assignment to the variables that makes the expression hold? The best known algorithms for this are exponential in the number of variables, e.g., try all 2^n possible assignments in turn. The problem is NP-Complete.

Knapsack Problem: given a set of n items with weights w_i, is it possible to put items into a sack to make a specific weight S, i.e., such that S = b_1 w_1 + b_2 w_2 + ... + b_n w_n for some b_i ∈ {0, 1}? The time taken to solve this seems to grow exponentially with the number of weights. The Knapsack problem clearly lies in NP, as we can exhibit the b_i as a witness.
The Knapsack problem is an NP-Complete problem. Note that the problem as stated is a decision problem, but it can also be stated as a computational problem: if we can put items into a sack to make a specific weight S, find b_i ∈ {0, 1} such that S = b_1 w_1 + b_2 w_2 + ... + b_n w_n.
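Checking the knapsack witness b_i is a single linear-time sum (an illustrative sketch, not from the notes):

```python
def check_knapsack_certificate(w, b, S):
    """Verify the witness b: every b[i] is 0 or 1 and the chosen
    weights sum to exactly S. Runs in time linear in n."""
    return (all(x in (0, 1) for x in b)
            and sum(x * wi for x, wi in zip(b, w)) == S)

check_knapsack_certificate([3, 5, 7], [1, 0, 1], 10)  # True: 3 + 7 = 10
check_knapsack_certificate([3, 5, 7], [1, 1, 0], 10)  # False: 3 + 5 = 8
```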

The decision and the computational problems are related: we can turn an oracle for the decision knapsack problem into one for the computational knapsack problem. Consider the following algorithm, which assumes an oracle O(w[1], ..., w[n], S) for the decision knapsack problem:

    if (O(w[1], ..., w[n], S) == false) {
        output "no solution";
    } else {
        T = S;
        b[1] = b[2] = ... = b[n] = 0;
        for (i = 1; i <= n; i++) {
            if (T == 0)
                break;
            if (O(w[i+1], ..., w[n], T - w[i]) == true) {
                T = T - w[i];
                b[i] = 1;
            }
        }
        output (b[1], ..., b[n]);
    }

This algorithm assumes only one such assignment of the b_i exists.

k-Colorability

Given a graph G, can one color G with k colors so that no two adjacent vertices have the same color? Again, this is a very hard problem in general; it is NP-Complete even for k = 3. The point, however, is that it is easy on average: most graphs are not 3-colorable, and the trivial recursive backtracking algorithm will determine whether a graph is 3-colorable in constant time on average, no matter how large the input graph is. Often we do not care that a problem is hard in the worst case; we only care whether it is easy on average. Since 3-Colorability is NP-Complete, this shows that an NP-Complete problem can nevertheless be easy on average.
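The self-reduction for knapsack can be run end to end with a brute-force stand-in for the oracle (a sketch with hypothetical names; a real oracle would of course not enumerate all subsets):

```python
from itertools import product

def oracle(w, S):
    """Brute-force stand-in for the decision oracle O(w[1],...,w[n], S):
    is there a 0/1 vector b with sum(b[i] * w[i]) == S?"""
    return any(sum(b * wi for b, wi in zip(bits, w)) == S
               for bits in product((0, 1), repeat=len(w)))

def solve_knapsack(w, S):
    """Recover an explicit assignment b from the decision oracle,
    following the algorithm sketched in the notes."""
    if not oracle(w, S):
        return None
    T, b = S, [0] * len(w)
    for i in range(len(w)):
        if T == 0:
            break
        # keep item i exactly when the remaining items can still
        # make up the rest of the target weight
        if oracle(w[i + 1:], T - w[i]):
            T -= w[i]
            b[i] = 1
    return b

solve_knapsack([3, 5, 7], 10)  # [1, 0, 1]
solve_knapsack([3, 5, 7], 4)   # None
```

Each item costs one extra oracle call, so a polynomial-time decision oracle yields a polynomial-time search algorithm.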