CP405 Theory of Computation


BB(3)
     q0       q1       q2
0    q1 1R    q2 0R    q2 1L
1    H 1R     q1 1R    q0 1L
(Entry "q1 1R" means: go to state q1, write 1, move right; H = halt.)

Growing Fast
BB(3) = 6 (known)
BB(4) = 13 (known)
BB(5) = 4098 (possible)
BB(6) = 3.515 x 10^18267 (possible)
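
A quick sanity check on the table above: a minimal Python sketch (not from the slides) that simulates the 3-state machine and counts the 1's left on the tape. It halts with six 1's, matching BB(3) = 6.

# delta[(state, symbol)] = (symbol to write, head move, next state); 'H' = halt
delta = {
    ('q0', 0): (1, +1, 'q1'), ('q0', 1): (1, +1, 'H'),
    ('q1', 0): (0, +1, 'q2'), ('q1', 1): (1, +1, 'q1'),
    ('q2', 0): (1, -1, 'q2'), ('q2', 1): (1, -1, 'q0'),
}
tape = {}                      # sparse tape: unwritten cells hold 0
head, state, steps = 0, 'q0', 0
while state != 'H':
    write, move, state = delta[(state, tape.get(head, 0))]
    tape[head] = write
    head += move
    steps += 1
print(sum(tape.values()), "ones written in", steps, "steps")   # 6 ones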

- Language: a set of strings. Computation: a test of whether a string is in a language.
- DFA: limited-memory automaton. Regular language: a language accepted by a DFA.
- Regular operations (regular languages are closed under them): union A ∪ B, concatenation AB, star A*, complement, intersection A ∩ B.
- NFA: non-deterministic finite automaton. DFA == NFA == regex (computational power). GNFA: NFA with regex transitions.
- Prove a language is non-regular with the pumping lemma.
- POSIX extended regexes are not formally regular (no corresponding DFA).
- CFG: context-free grammar, generates a CFL. Some languages are context-free and not regular. All regular languages are context-free. CFLs are used for programming languages.
- Every CFG can be converted to Chomsky Normal Form (CNF): rules are two variables, one terminal, or start -> epsilon. A CNF derivation of a string of length n takes 2n - 1 steps.
- L-systems: formal grammars for modeling morphogenesis.
- PDA == CFG (computational power). Non-deterministic PDA > deterministic PDA (computational power).
- Context-sensitive grammar > context-free grammar (computational power).
- Prove a language is not context-free with the context-free pumping lemma.
- TM > PDA (computational power). A TM can accept, reject, or loop forever. TMs match the intuitive definition of an algorithm. Java, Python, etc. are == TM (computational power).
- Linear bounded automaton (LBA) == TM whose tape is limited to the input; LBA == context-sensitive grammars (computational power).
- Post tag systems chop and concatenate strings (== TM). A queue machine is like a PDA with a queue in place of the stack (== TM). A PDA with two stacks == TM. Counter machines (pebbles in holes) with two registers == TM. Random access machines provide indirect addressing (== TM).
- TMs that always halt correctly are called deciders. If TM M is a decider, then L(M) is a decidable language.
- A_DFA, A_NFA, A_CFG, and A_LBA are decidable. A_TM is undecidable (the acceptance problem). HALT is undecidable (the halting problem).
- Rice's Theorem: any non-trivial language property of TMs is undecidable.
- TMs that accept correctly (but may loop) are called recognizers. Enumerators == recognizers that print. If TM M is a recognizer, then L(M) is an enumerable language.
- A_TM and HALT are enumerable; their complements are not enumerable. EQ_TM and its complement are both not enumerable.
- There are countably many enumerable languages and uncountably many languages that are not enumerable.
- The Post Correspondence Problem (dominos) is undecidable. Never play dominos with Matthew.
- A quine is a program that prints out itself. TMs can obtain their own descriptions.
- The busy beaver function grows faster than any computable function.

Computability Theory Automata Theory Complexity Theory

Complexity Theory: Algorithmic Complexity, Reductions, P and NP, NP-Completeness

Algorithmic Complexity Time Complexity: How long does the algorithm take to run? Space Complexity: How much memory does the algorithm require?

What happens when the input size gets big?

Big-O Notation (Algorithm Order)
When input sizes get big, the largest term dominates the expression.
We can describe the order of the growth function by O(largest term):
2n^2 + 3n + 7 ∈ O(n^2)
77n + 1 ∈ O(n)
122 ∈ O(1)
3 log n + 60 ∈ O(log n)

Big-O Notation Formal Definition
The function f(n) is a member of the set O(g(n)) if there exist positive constants c and n₀ such that: 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀

[Figure: c·g(n) plotted above f(n) for all n ≥ n₀]

Is 17n^2 + 16n + 2 ∈ O(n^2)? Is 17n^2 + 16n + 2 ∈ O(n^157)? Is 17n^2 + 16n + 2 ∈ O(2^n)?
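
A worked check of the first question, using one valid choice of constants from the formal definition: for n ≥ 1, 16n ≤ 16n^2 and 2 ≤ 2n^2, so 17n^2 + 16n + 2 ≤ 35n^2. Taking c = 35 and n₀ = 1 satisfies 0 ≤ f(n) ≤ c·g(n), so yes. The same f(n) is also in O(n^157) (since n^2 ≤ n^157 for n ≥ 1) and in O(2^n) (since n^2 ≤ 2^n for n ≥ 4).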

Big-O Notation (Algorithm Order)
Big-O gives an upper bound on the growth of a function.
Two algorithms with the same order are considered roughly equivalent. Is this always a good idea?

Ω-Notation Formal Definition
The function f(n) is a member of the set Ω(g(n)) if there exist positive constants c and n₀ such that: 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀

[Figure: f(n) plotted above c·g(n) for all n ≥ n₀]

Θ-Notation Formal Definition
The function f(n) is a member of the set Θ(g(n)) if there exist positive constants c₁, c₂, and n₀ such that: 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀

[Figure: f(n) sandwiched between c₁·g(n) and c₂·g(n) for all n ≥ n₀]

Algorithm Running Times
Linear search? Binary search? Selection sort? Quicksort? Generating future chess positions? Database select query?
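
To make the first two concrete, here is a minimal Python sketch (function names are mine, not from the slides): linear search may inspect every element, O(n); binary search halves the remaining range each step, O(log n) on sorted input.

def linear_search(items, target):
    # Worst case looks at every element: O(n)
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def binary_search(sorted_items, target):
    # Each iteration halves the search range: O(log n), but requires sorted input
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(0, 1000, 3))
print(linear_search(data, 999), binary_search(data, 999))   # both find index 333

For reference, selection sort is Θ(n^2), a typical quicksort is O(n log n) on average, and generating future chess positions grows exponentially with search depth.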

TM Running Times Input length: n Time complexity: f(n) Where f is the maximum number of steps taken by the TM for an input of size n

TM Example
TM, M, with L(M) = {0^k 1^k}
M's algorithm:
1. Scan across the tape and reject if a 0 is found to the right of a 1.
2. Repeat: cross off a single 0 and a single 1.
3. If extra 0's or 1's remain, then reject. Otherwise, accept.

Cost of each step:
Step 1: scan across the tape, then move the R/W head back to the beginning: n + n = 2n steps, O(n).
Step 2: crossing off one pair takes O(n), and there are at most n/2 pairs: (n/2)·O(n) = O(n^2).
Step 3: a single scan to check for extras: O(n).
Whole algorithm: O(n) + O(n^2) + O(n) = O(n^2)
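
A rough Python sketch (mine, not from the slides) that mimics the crossing-off procedure and counts head moves as a stand-in for TM steps; the count grows roughly quadratically in the input length, consistent with the O(n^2) analysis.

def single_tape_steps(w):
    # Count head moves of the single-tape crossing-off algorithm on input w
    tape, moves = list(w), 0
    # Step 1: scan for a 0 to the right of a 1, then rewind: about 2n moves
    seen_one = False
    for c in tape:
        moves += 1
        if c == '1':
            seen_one = True
        elif c == '0' and seen_one:
            return moves                  # reject
    moves += len(tape)                    # move the head back to the start
    # Step 2: cross off one 0 and one 1 per pass, rewinding each time
    while '0' in tape and '1' in tape:
        tape[tape.index('0')] = 'X'
        tape[tape.index('1')] = 'X'
        moves += 2 * len(tape)            # one scan out and back per pair (upper bound)
    # Step 3: one final scan for leftover symbols
    moves += len(tape)
    return moves

for k in (10, 20, 40):
    print(k, single_tape_steps('0' * k + '1' * k))   # doubling k roughly quadruples the count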

TM with 2 Tapes Example
TM, M2, with L(M2) = {0^k 1^k}
M2's algorithm:
1. Scan across tape 1 and reject if a 0 is found to the right of a 1.
2. Scan across the 0's on tape 1 and copy each 0 to tape 2.
3. Scan across the 1's on tape 1 and cross out a 0 on tape 2 for each 1.
4. If we run out of 0's on tape 2 or we have extra 0's, then reject. Otherwise, accept.

Time complexity? O(n). Time complexity depends on the computational model selected.
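
For contrast, a small Python sketch (mine) of the two-tape idea, with tape 2 modeled as a list of copied 0's: every phase is a single pass over the input, so the head-move count stays linear in n.

def two_tape_steps(w):
    # Head-move count for the 2-tape algorithm: each phase is one pass
    moves = 0
    moves += len(w)                          # phase 1: check no 0 appears after a 1
    if '10' in w:                            # some 0 follows a 1: reject
        return False, moves
    tape2 = [c for c in w if c == '0']       # phase 2: copy the 0's to tape 2
    moves += len(w)
    accepted = (w.count('1') == len(tape2))  # phase 3: cross off one 0 per 1
    moves += len(w)
    return accepted, moves

print(two_tape_steps('0' * 20 + '1' * 20))   # (True, 120): about 3n moves, O(n)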

TM with k Tapes vs. TM with 1 Tape
[Figure: three tapes, e.g. "a b b b a", "b a b a b", "a a a a b", stored on a single tape as "a b b b a # b a b a b # a a a a b"]
How long does a single-tape simulation of a k-tape machine take?
Each single transition of the k-tape machine takes two full scans of the single tape:
1. Get the current location of the R/W head for each simulated tape.
2. Actually make the required changes.
Simulated tapes can get bigger: if the k-tape machine takes t(n) time, then the used tape length grows to O(t(n)).
(# steps in the k-tape machine) × (cost per step in the 1-tape machine) = t(n) · O(t(n)) = O(t^2(n))

Nondeterministic TM Decider
To be a decider, every branch must halt.
We can simulate a nondeterministic TM with a deterministic TM: build a branching tree of TM configurations in a breadth-first fashion.
If an ND TM has time complexity t(n), then there is a corresponding deterministic TM with time complexity 2^O(t(n)).

Class P
P = {A : the language A is decidable in polynomial time on a deterministic, single-tape TM}
An algorithm M decides a language in polynomial time if there exists an integer k ≥ 1 such that M's running time is O(n^k).

Example 1: Find a Graph Path
PATH = { <G, s, e> : G is a directed graph with a directed path from start node s to end node e }
[Figure: a small directed graph with nodes A (start) through E (end)]
Breadth-first search from start to end takes O(|G|) time, so PATH ∈ P.
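
A minimal Python sketch of the polynomial-time test (not from the slides; the adjacency-dict encoding and the example edges are assumptions for illustration): breadth-first search processes each node and edge at most once, so it runs in time linear in the size of <G>.

from collections import deque

def path_exists(graph, start, end):
    # BFS: each node and edge is handled at most once, so O(|G|)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == end:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

G = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E']}
print(path_exists(G, 'A', 'E'))   # True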

Example 2: Relatively Prime
RELPRIME = { <x, y> : x and y are relatively prime (1 is the largest integer that evenly divides both) }
Ex: 10 and 21 are relatively prime.

Brute-force algorithm: input x and y, try all possible divisors up to min(x, y).
Problem: |<10,21>| = 5 but divisors to test = 9; |<101,56>| = 6 but divisors to test = 55. The brute-force test is exponential in the length of the input.

A new encoding? In unary, |<111111,1111>| = 11 with divisors to test = 3, and |<1111111,11111111>| = 16 with divisors to test = 6. Doesn't look too bad... but unary encoding is cheating.

RELPRIME ∈ P, using Euclid's GCD algorithm:
while y != 0:
    x = x % y
    swap(x, y)
return x
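
The same loop as runnable Python (a sketch mirroring the pseudocode above): the arguments shrink by at least half every two iterations, so the number of iterations is polynomial in the number of bits of x and y, which is why RELPRIME ∈ P.

def gcd(x, y):
    # Euclid's algorithm: polynomially many iterations in the bit-length of the inputs
    while y != 0:
        x = x % y
        x, y = y, x        # swap(x, y)
    return x

def relprime(x, y):
    return gcd(x, y) == 1

print(relprime(10, 21), relprime(101, 56), relprime(10, 22))   # True True False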

An interesting problem
E = (x1 ∨ x2 ∨ x3)
Is there a combination of x's so that E is TRUE?

E = (x1 ∨ x2 ∨ ~x3) ∧ (x2 ∨ ~x4 ∨ x3)
Is this solution correct? x1 = TRUE, x2 = FALSE, x3 = TRUE, x4 = TRUE
E = (T ∨ F ∨ F) ∧ (F ∨ F ∨ T)
E = (T) ∧ (T)
E = T
It reduces to TRUE. How hard was it to test?

How can we produce a set of variable values that satisfies the formula? If n is the number of variables, we can enumerate the possible solutions: there are 2^n of them.

Satisfiability Problem
Checking whether an answer is correct isn't too bad. Finding a correct solution seems harder.
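
A small Python sketch of both halves of that observation (the clause representation and names are mine): verifying a proposed assignment touches each literal once, while the naive search may try all 2^n assignments.

from itertools import product

# E = (x1 or x2 or not x3) and (x2 or not x4 or x3), as a list of clauses;
# each literal is (variable index, is_negated)
E = [[(1, False), (2, False), (3, True)],
     [(2, False), (4, True), (3, False)]]

def verify(formula, assignment):
    # Checking a candidate assignment: one pass over the literals, polynomial time
    return all(any(assignment[var] != neg for var, neg in clause)
               for clause in formula)

def brute_force(formula, n_vars):
    # Finding a satisfying assignment: up to 2^n candidates to try
    for values in product([True, False], repeat=n_vars):
        assignment = dict(enumerate(values, start=1))
        if verify(formula, assignment):
            return assignment
    return None

print(verify(E, {1: True, 2: False, 3: True, 4: True}))   # True, as on the slide
print(brute_force(E, 4))                                   # some satisfying assignment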

Class NP
NP = {A : there exists a polynomial p and a verification language B ∈ P such that, for all strings w: w ∈ A iff there exists an s with |s| ≤ p(|w|) and <w, s> ∈ B}
In words: every string w in A has a solution (certificate) s that is within polynomial length of w, and the solution can be checked quickly.
Equivalently: NP = the languages that can be decided by a nondeterministic TM in polynomial time.

P ⊆ NP. How do we know this? A deterministic polynomial-time decider is itself a (trivially) nondeterministic one; equivalently, it can serve as its own verifier, ignoring the certificate.

Does P = NP? This is the most famous unsolved problem in theoretical computer science

P = NP? If P = NP... "Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss." (Scott Aaronson)

Which one is correct?

Polynomial Time Reduction
Given two languages A and B, A is polynomial-time reducible to B if a polynomial-time computable function f exists such that:
A ≤P B: w ∈ A iff f(w) ∈ B

Polynomial Time Reduction
If A ≤P B and B ∈ P, then A ∈ P. (On input w, compute f(w) in polynomial time and run B's polynomial-time decider on it; a polynomial composed with a polynomial is still a polynomial.)

Satisfiability: a special problem. Every other problem in NP can be transformed to SAT in polynomial time. SAT is in the class called NP-Complete.

NP-Complete Class
Imagine you just figured out a polynomial time solution to SAT. You can now solve any NP problem in polynomial time! Then: P = NP.

Use an Oracle
Think of your solution to SAT as an oracle that runs in polynomial time. Other NP problems can simply ask your oracle for help (think function call). Since every other NP problem can be reduced to SAT in polynomial time, all of them can be solved in polynomial time.

Subset Sum Problem
Given a set of integers, determine if there is a non-empty subset that sums to 0.
Set = {17, 2, -15, -9, 1, 22, -5}
Subset = {1, 2, 17, -15, -5}
Subset Sum is NP-Complete.
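
A short Python sketch contrasting the two directions (function names are mine): checking a proposed subset is a single sum, while the obvious search examines up to 2^n subsets.

from itertools import combinations

def verify_subset(subset):
    # Certificate check: just sum the proposed subset
    return len(subset) > 0 and sum(subset) == 0

def find_zero_subset(values):
    # Brute force: examine every non-empty subset, up to 2^n of them
    for r in range(1, len(values) + 1):
        for subset in combinations(values, r):
            if sum(subset) == 0:
                return subset
    return None

s = [17, 2, -15, -9, 1, 22, -5]
print(verify_subset((1, 2, 17, -15, -5)))   # True: the subset from the slide
print(find_zero_subset(s))                  # finds some zero-sum subset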

Knapsack Problem Given a set of items each with a weight and a value, determine the maximum value while staying under a set weight limit Items = {(3 lbs, $4), (6 lbs, $2), (2 lbs, $1)}

Knapsack Problem
0-1 Knapsack Problem: for each item, you either take it or leave it.
Unbounded Knapsack Problem: for each item, you may take as many copies of it as you like.
Both are NP-Complete.
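
A brute-force Python sketch of the 0-1 variant for the items above (the weight limit is an assumed example value): it tries all 2^n take-or-leave choices, fine for three items but quickly infeasible as n grows.

from itertools import product

items = [(3, 4), (6, 2), (2, 1)]   # (weight in lbs, value in $), from the slide
limit = 7                          # example weight limit (assumption)

best_value, best_choice = 0, ()
for choice in product([0, 1], repeat=len(items)):            # 2^n take-or-leave vectors
    weight = sum(take * w for take, (w, v) in zip(choice, items))
    value = sum(take * v for take, (w, v) in zip(choice, items))
    if weight <= limit and value > best_value:
        best_value, best_choice = value, choice

print(best_value, best_choice)   # 5 (1, 0, 1): take items 1 and 3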

[Figure: nested language classes] Finite Languages ⊆ Regular Languages ⊆ Context-Free Languages ⊆ P ⊆ NP ⊆ Decidable Languages ⊆ Enumerable Languages ⊆ all languages (L)

P = NP: A possible proof?
Theorem: P = NP
Proof: Set N = 1. Clearly, P = NP.