CPSC 421: Tutorial #1

October 14, 2016

Set Theory

1. Let A be an arbitrary set, and let B = {x ∈ A : x ∉ x}. That is, B contains all sets in A that do not contain themselves: for all y,

    (∗) y ∈ B if and only if (y ∈ A and y ∉ y).

Can it be that B ∈ A?

Solution. Notice that B ⊆ A. If B ∈ A, then either B ∈ B or B ∉ B. If B ∈ B, then by (∗), B ∉ B, which is absurd. If B ∉ B, then also by (∗), the assumption that B ∈ A implies that B ∈ B, which is a contradiction. The conclusion is: B ∉ A. What we have proved is that there exists something (the set B) that does not belong to A. Since the set A was arbitrary, we have shown that there is no universal set! That is, it is not enough to spell some magic words to specify a set.

2. Let 𝒜 be a family of sets. The union of the sets in 𝒜 is

    ⋃_{A ∈ 𝒜} A = {a : a ∈ A for some A ∈ 𝒜},

and the intersection of the sets in 𝒜 is

    ⋂_{A ∈ 𝒜} A = {a : a ∈ A for all A ∈ 𝒜}.

(a) Show that (⋃_{A ∈ 𝒜} A)ᶜ = ⋂_{A ∈ 𝒜} Aᶜ. Note that 𝒜 need not be countable.

Solution. Suppose that a ∈ (⋃_{A ∈ 𝒜} A)ᶜ. Then a ∉ ⋃_{A ∈ 𝒜} A; i.e., there is no A ∈ 𝒜 for which a ∈ A. Then a ∉ A for all A ∈ 𝒜, i.e., a ∈ Aᶜ for all A ∈ 𝒜; that is, a ∈ ⋂_{A ∈ 𝒜} Aᶜ. Conversely, if a ∈ ⋂_{A ∈ 𝒜} Aᶜ, then a ∈ Aᶜ for all A ∈ 𝒜, so a ∉ A for all A ∈ 𝒜; i.e., there is no A ∈ 𝒜 for which a ∈ A, so a ∉ ⋃_{A ∈ 𝒜} A, or equivalently a ∈ (⋃_{A ∈ 𝒜} A)ᶜ.

(b) What is ⋂_{A ∈ ∅} A? (Which a's fail to satisfy the definition of intersection?)

Solution. If it is not true that a ∈ A for every A ∈ ∅, then a ∉ A for some A ∈ ∅, i.e., a ∈ Aᶜ for some A ∈ ∅. Since, however, there are no A's in ∅, this is absurd. In other words, no a fails to satisfy the definition of intersection, or, equivalently, every a satisfies it. That is, the a's in ⋂_{A ∈ ∅} A exhaust the (nonexistent) universe.

An alphabet is any nonempty finite set. Elements of an alphabet are called symbols. A string over an alphabet is a finite sequence of symbols from that alphabet. A language over an alphabet is any set of strings (possibly empty).
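The two facts above can be sanity-checked on small finite sets. The Python sketch below is an illustrative addition, not part of the tutorial; the ambient universe U and the family are made-up values, since a complement only makes sense relative to some fixed universe (which is exactly the point of part (b)).

```python
# Sanity check of De Morgan's law for a family of sets (illustrative values).
# Complements are taken relative to an assumed finite universe U, echoing
# part (b): without a universe, the empty intersection is not a set.
U = set(range(10))
family = [{1, 2, 3}, {2, 3, 4}, {3, 5}]

lhs = U - set().union(*family)                    # (union of the family)^c
rhs = set.intersection(*[U - A for A in family])  # intersection of complements
assert lhs == rhs
```

Note that `set.intersection(*[])` raises a TypeError: in code, as in part (b), the intersection of an empty family only makes sense relative to an explicitly supplied universe.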

A language is regular if any of the following holds (the list is not exhaustive):

1. There is a DFA or NFA that recognizes it;
2. There is a regular expression that generates it;
3. It can be accepted by a read-only Turing machine;
4. It is finite. (If L = {s₁, ..., sₙ} for some fixed n ≥ 0, then the regular expression s₁ ∪ ⋯ ∪ sₙ generates L.)

DFA: A finite automaton is a 5-tuple (Q, Σ, δ, q₀, F), where

1. Q is a finite set called the states,
2. Σ is a finite set called the alphabet,
3. δ : Q × Σ → Q is the transition function,
4. q₀ ∈ Q is the start state, and
5. F ⊆ Q is the set of accept states.

NFA: Same as a DFA, except that

1. δ : Q × (Σ ∪ {ε}) → P(Q). Convention: if δ(q, a) = ∅ for some q ∈ Q and a ∈ Σ ∪ {ε}, then the NFA transfers to a fictitious non-accepting "death" state and stays there for the rest of the input; in other words, the branch in which this transition occurs does not accept the input; and
2. ε-transitions (transitions on the empty string, the nondeterministic choice) are allowed.

Example: Construct an NFA that accepts C-like comments delimited by /* and */. Do not handle nested comments (assume they are not allowed). For simplicity, use Σ = {/, *, c}, where c stands for any other (non-comment) character in the language.

Pumping Lemma. If A is a regular language, then there is a p ≥ 1 (the pumping length) such that for every s ∈ A with |s| ≥ p, there exist x, y, z ∈ Σ* (which depend on s) with s = xyz such that all of the following hold:

1. xyⁱz ∈ A for each i ≥ 0,
2. |y| > 0, and
3. |xy| ≤ p.

Notes:

1. The pumping length p in the statement of the pumping lemma works for all the strings in A; i.e., p depends on the language A, and not on a specific string. The order of quantifiers is important;
2. The pumping lemma is a necessary condition for a language to be regular; i.e., if it does not hold, then the language is not regular;

3. The converse of the pumping lemma is not true: if a language satisfies the pumping lemma, it might or might not be regular;
4. Thus, one uses the pumping lemma to prove that a language is not regular: start with the assumption that the language is regular and show that the pumping lemma cannot be satisfied;
5. The lemma holds vacuously for finite languages. If A is finite, let l_max = max{|s| : s ∈ A}. Then for p = l_max + 1, no s ∈ A satisfies |s| ≥ p = l_max + 1 (since |s| ≤ l_max for all s ∈ A). The lemma cannot be applied to any word in A. In other words, for this p, the antecedent of the logical statement "if s is any string in A of length at least p, then s may be pumped", namely "s is any string in A of length at least p", is always false, and since falsehood implies anything, the pumping lemma still holds. [No s with |s| ≥ p satisfies the antecedent.]
6. Contrapositive (equivalent statement): Suppose that for every p ≥ 1 there is some s ∈ A with |s| ≥ p such that for all x, y, z ∈ Σ*, at least one of the following holds:
   1. s ≠ xyz;
   2. there exists i ≥ 0 such that xyⁱz ∉ A;
   3. |y| = 0; or
   4. |xy| > p.
   Then A is not regular.

Use the pumping lemma to show that A = {a^(2^n) : n ≥ 0} is not regular.

Solution. Assume that A = {a^(2^n) : n ≥ 0} is regular. Let p be the pumping length given by the pumping lemma, and choose s to be the string a^(2^p). Because s is a member of A and s is longer than p, the pumping lemma guarantees that s can be split into three pieces, s = xyz, satisfying the three conditions of the lemma. The third condition tells us that |xy| ≤ p, so |y| ≤ |xy| ≤ p < 2^p; i.e., |y| < 2^p. Therefore, |xyyz| = |xyz| + |y| < 2^p + 2^p = 2^(p+1). The second condition requires |y| > 0, so 2^p < |xyyz| < 2^(p+1). The length of xyyz therefore cannot be a power of 2, so xyyz is not a member of A, a contradiction. Therefore, A is not regular.

Write a regular expression for the language consisting of the set of all strings of 0s and 1s such that every pair of adjacent 0s appears before any pair of adjacent 1s.

Solution.
The trick is to start by writing an expression for the set of strings that have no two adjacent 1s. Here is one such expression:

    (10 ∪ 0)*(ε ∪ 1)    [in POSIX-style notation, (ε ∪ 1) is written 1?]

To see why this expression works: the first part matches all strings in which every 1 is followed by a 0. To that, we have only to add the possibility of a 1 at the end, which will not be followed by a 0; that is the job of (ε ∪ 1). Now we can rethink the question as asking for strings that have a prefix with no adjacent 1s followed by a suffix with no adjacent 0s. The former is the expression we just developed, and the latter is the same expression with 0 and 1 interchanged. Thus, a solution to this problem is

    (10 ∪ 0)*(ε ∪ 1)(01 ∪ 1)*(ε ∪ 0).

Note that the (ε ∪ 1) term in the middle is actually unnecessary, as a 1 matching that factor can be obtained from the (01 ∪ 1)* factor instead.

A context-free grammar is a 4-tuple (V, Σ, R, S), where

1. V is a finite set called the variables,

2. Σ is a finite set, disjoint from V, called the terminals,
3. R is a finite set of rules, each rule consisting of a variable and a string of variables and terminals, and
4. S ∈ V is the start variable.

Notes:

1. Only variables appear on the LHS of a production rule;
2. Both variables and terminals may appear on the RHS of a production rule;
3. Thus, R is a finite relation from V to (V ∪ Σ)*; that is, R ⊆ V × (V ∪ Σ)*;
4. Let u, v, and w be strings of variables and terminals, i.e., u, v, w ∈ (V ∪ Σ)*, and let A → w be a rule of the grammar. We say that uAv yields uwv ∈ (V ∪ Σ)*, written uAv ⇒ uwv. We say that u derives v, written u ⇒* v, if u = v or if a sequence u₁, u₂, ..., u_k ∈ (V ∪ Σ)* exists for k ≥ 0 and u ⇒ u₁ ⇒ u₂ ⇒ ⋯ ⇒ u_k ⇒ v;
5. The language of the grammar is {w ∈ Σ* : S ⇒* w};
6. Convention: the entry point to the CFG is the rule corresponding to the start variable S;
7. The set of context-free languages is closed under union, concatenation, and Kleene star; it is not closed under complementation or intersection;
8. A grammar is ambiguous if there is a string that has two or more different left-most derivations;
9. A context-free language is inherently ambiguous if there is no unambiguous grammar that generates it; every grammar that generates it is ambiguous. Example: {aⁿ : n ≥ 1} is generated by S → SS | a (consider the string aaa; to verify the ambiguity, note that every binary parse tree with 2n + 1 nodes produces a^(n+1), and for n ≥ 2 there is more than one such tree), which is an ambiguous grammar, but it is also generated by the unambiguous grammar S → Sa | a, so this language is not inherently ambiguous.

Write a CFG generating the language

    {x₁#x₂#⋯#xₙ : n ≥ 1, each xᵢ ∈ {a, b}*, and xᵢ = xⱼᴿ for some i and j}.

Solution.

    S → UPV
    P → aPa | bPb | a | b | T | ε
    T → #MT | #
    U → M#U | ε
    V → #MV | ε
    M → aM | bM | ε

Here M generates any string in {a, b}*, U generates prefixes of the form x₁# ⋯ xᵢ₋₁#, V generates suffixes of the form #xⱼ₊₁ ⋯ #xₙ, and P generates either a single palindromic block xᵢ (the case i = j) or a string xᵢ# ⋯ #xⱼ with xⱼ = xᵢᴿ, using T for the middle blocks. Note that we need to allow for the case i = j, that is, some xᵢ being a palindrome. Also, ε is in the language, since it is a palindrome.

A Turing machine is a 7-tuple (Q, Σ, Γ, δ, q₀, q_accept, q_reject), where Q, Σ, Γ are all finite sets and

1. Q is the set of states,
2. Σ is the input alphabet, not containing the blank symbol ␣,
3. Γ is the tape alphabet, where ␣ ∈ Γ and Σ ⊆ Γ,
4. δ : (Q \ {q_accept, q_reject}) × Γ → Q × Γ × {L, R} is the transition function,
5. q₀ ∈ Q is the start state,
6. q_accept ∈ Q is the accept state, and
7. q_reject ∈ Q is the reject state, where q_reject ≠ q_accept.

Example: a DTM to recognize the language consisting of the set of even-length palindromes, PAL = {xxᴿ : x ∈ {0, 1}*}.

Solution. Σ = {0, 1}, Γ = {␣, 0, 1}, and Q = {q₀, q₁, q₂, q₃, q₄, q_accept, q_reject}, with the following meaning:

q₀: initial state;
q₁: leftmost non-blank symbol was 0 (and this 0 was erased);
q₂: rightmost non-blank symbol was 0 (and this 0 was erased);
q₃: leftmost non-blank symbol was 1 (and this 1 was erased);
q₄: rightmost non-blank symbol was 1 (and this 1 was erased).

If the TM is in state q₀ and reads ␣, it accepts (this means that the whole input has been checked). If the TM is in state q₀ and reads a 0, it replaces the 0 with ␣ and enters state q₁, then keeps moving right until the first blank symbol. It then goes one step to the left: if the symbol read is 1, reject; if it is 0, the TM replaces the 0 with ␣, transitions from state q₁ to state q₂, and keeps moving left until it sees the first blank. Then it enters state q₀ again, goes one step to the right, and reads the symbol on that cell (now the second cell). This process repeats. The exact same process is used if the leftmost symbol is a 1, with states q₃ and q₄ in place of q₁ and q₂.

NTM: The only difference is that δ : (Q \ {q_accept, q_reject}) × Γ → P(Q × Γ × {L, R}). A computation is a tree: an NTM together with an input yields a configuration tree. An NTM accepts an input if the resulting configuration tree contains at least one accepting configuration (node); an input is rejected if all branches reject. An NTM is a decider if all branches halt on all inputs.

A language is recognizable if there is a TM that recognizes it.
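The even-palindrome machine described above can be simulated directly. The Python sketch below is an illustrative addition: the informal prose lumps "scan right" and "check the symbol at the right end" into q₁/q₃, so the transition table here adds helper states q1c and q3c, an assumption of this sketch rather than part of the original 7-state description.

```python
# Deterministic simulation of the even-palindrome TM sketched above.
# q1c/q3c ("check the rightmost remaining symbol") are helper states added
# for this sketch; the prose description folds them into q1/q3.
BLANK = '_'
ACCEPT, REJECT = 'accept', 'reject'

# delta[(state, symbol)] = (next_state, symbol_to_write, head_move)
delta = {
    ('q0', BLANK): (ACCEPT, BLANK, +1),    # whole input checked
    ('q0', '0'): ('q1', BLANK, +1),        # erase leftmost 0
    ('q0', '1'): ('q3', BLANK, +1),        # erase leftmost 1
    ('q1', '0'): ('q1', '0', +1), ('q1', '1'): ('q1', '1', +1),
    ('q1', BLANK): ('q1c', BLANK, -1),     # right end found; step left
    ('q1c', '0'): ('q2', BLANK, -1),       # matches: erase rightmost 0
    ('q1c', '1'): (REJECT, '1', -1),       # mismatch
    ('q1c', BLANK): (REJECT, BLANK, +1),   # odd-length input
    ('q3', '0'): ('q3', '0', +1), ('q3', '1'): ('q3', '1', +1),
    ('q3', BLANK): ('q3c', BLANK, -1),
    ('q3c', '1'): ('q4', BLANK, -1),       # matches: erase rightmost 1
    ('q3c', '0'): (REJECT, '0', -1),
    ('q3c', BLANK): (REJECT, BLANK, +1),
    ('q2', '0'): ('q2', '0', -1), ('q2', '1'): ('q2', '1', -1),
    ('q2', BLANK): ('q0', BLANK, +1),      # back at left end; restart
    ('q4', '0'): ('q4', '0', -1), ('q4', '1'): ('q4', '1', -1),
    ('q4', BLANK): ('q0', BLANK, +1),
}

def run(word):
    """Run the machine on word; return True iff it accepts."""
    tape = dict(enumerate(word))           # sparse one-way-infinite tape
    state, head = 'q0', 0
    while state not in (ACCEPT, REJECT):
        symbol = tape.get(head, BLANK)
        state, tape[head], move = delta[(state, symbol)]
        head = max(head + move, 0)         # never fall off the left end
    return state == ACCEPT
```

For example, run("0110"), run("1001"), and run("") accept, while run("01"), run("011"), and run("0") reject.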
A language is decidable if some TM decides it. Decider: a TM that halts on every input; it never loops.

Example NTM problem: Given a graph G = (V, E) and an integer k > 0, determine whether there is a subset C ⊆ V such that

1. |C| ≥ k;
2. any two vertices in C are adjacent (C is a clique).
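Deterministically, this property can be checked by brute force. The Python sketch below is an added illustration (the concrete graph encoding is an assumption); each candidate subset it tries corresponds to one nondeterministic branch of the NTM described next, which accepts iff at least one branch succeeds.

```python
from itertools import combinations

def has_clique(vertices, edges, k):
    """Return True iff the graph has a clique with at least k vertices.

    Since any subset of a clique is a clique, it suffices to look for a
    clique of size exactly k. Each k-subset tried below plays the role of
    one nondeterministic guess of the NTM.
    """
    for cand in combinations(vertices, k):
        if all(frozenset((u, v)) in edges for u, v in combinations(cand, 2)):
            return True
    return False

# Illustrative encoding: vertices as ints, edges as 2-element frozensets.
V = {1, 2, 3, 4}
E = {frozenset(e) for e in [(1, 2), (2, 3), (1, 3), (3, 4)]}
triangle = has_clique(V, E, 3)   # {1, 2, 3} is a clique
no_k4 = not has_clique(V, E, 4)  # but there is no 4-clique
```

This brute-force search takes exponential time in the worst case, which mirrors the exponential blow-up of simulating all branches of the NTM deterministically.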

Present the problem as one of accepting a language L; describe an NTM to accept L.

Solution. Define the language L as follows:

    L = {⟨G, k⟩ : G has a clique of size at least k}.

/* we assume that there is some standard way of presenting G as a string over a finite alphabet */

Assume there is a TM that, given two vertices of G, answers whether these vertices are adjacent. An NTM that accepts L:

1. Place ⟨G, k⟩ onto the tape;
2. Append the symbol $ to the string;
3. While moving from the left toward $, nondeterministically select some vertices v ∈ V(G); when the TM reads $, it is at a leaf configuration;
4. (Here we are at a leaf configuration.) Check whether the number of selected vertices is at least k;
5. For every two selected vertices, check whether they are adjacent;
6. Accept if all pairs are adjacent.

Are NTMs more powerful than DTMs? That is, is the set of languages recognized by DTMs a proper subset of the set of languages recognized by NTMs?

Solution. No, they are not. A DTM is a special case of an NTM, and an NTM can be simulated by a DTM by performing a BFS on the configuration graph of the NTM.

Let ℳ be the set of all TMs, and let Result : ℳ × I → {yes, no, nohalt}, where I ⊆ Σ* is the set of inputs and Result(M, w) is the result of running TM M on input w ∈ I. There is a TM U ∈ ℳ such that Result(U, ⟨M, w⟩) = Result(M, w) for all M ∈ ℳ and w ∈ I. U is the universal Turing machine. It simulates every step of M on input w in a finite number of steps. Note that U is not specific to a certain TM and input; it works for all TM–input pairs (the order of quantifiers is important).

The halting problem is

    H_TM = {⟨M, w⟩ : M is a TM, w ∈ I, and Result(M, w) ≠ nohalt}.

The acceptance problem is

    A_TM = {⟨M, w⟩ : M is a TM, w ∈ I, and Result(M, w) = yes}.

We say that language A is Turing-reducible to language B, denoted A ≤_T B, if there is a program (TM) that decides A using a decider for B as a subroutine.

1. Usually used to prove that language B is undecidable via a reduction from a known undecidable language A;
2.
This is an example of the concept of "relative computability"; intuitively, A ≤_T B means that, in a sense, B is no easier than A, or A is no harder than B;

3. Notice that the definition mentions nothing about recognizability; Turing-reducibility is about decidability ((un)recognizability questions are answered using mapping reductions);
4. Example: if we suppose that B has a decider M and that M can be used to decide H_TM (as a subroutine in a program that decides H_TM), then H_TM ≤_T B, and therefore B is undecidable, since a decider for B would yield a decider for H_TM, contradicting the fact that H_TM is undecidable;
5. If A is undecidable and A ≤_T B (the (un)decidability of B is not known), then B is undecidable;
6. If A ≤_T B and B is decidable (the (un)decidability of A is not known), then A is decidable;
7. If the (un)decidability of A is unknown, A ≤_T B, and B is undecidable, is A decidable? Answer: we cannot tell.
8. If the (un)decidability of B is unknown, A ≤_T B, and A is decidable, is B decidable? Answer: we cannot tell.
9. If A ≤_m B and B is decidable, is A decidable? Answer: yes.
10. If A ≤_m B and A is undecidable, is B undecidable? Answer: yes.
11. If A ≤_m B and B is recognizable, is A recognizable? Answer: yes. The proof is the same as that of Theorem 5.22, except that M and N are recognizers instead of deciders. This also implies that if A ≤_m B and B is co-recognizable, then A is co-recognizable.
12. If A ≤_m B and A is not Turing-recognizable, then B is not Turing-recognizable.
13. Is the complement of H_TM recognizable? Answer: no. We know that the complement of A_TM is not recognizable, and A_TM ≤_m H_TM implies that (the complement of A_TM) ≤_m (the complement of H_TM), so by the previous point, the complement of H_TM is not recognizable.

A Turing Reduction. A useless state in a Turing machine is one that is never entered on any input string. Consider the problem of determining whether a Turing machine has any useless states. Formulate this problem as a language and show that it is undecidable.

Solution. Let USELESS_TM = {⟨M⟩ : M is a TM with one or more useless states}. We show that USELESS_TM is undecidable by reducing A_TM to it. Let R be a TM that decides USELESS_TM, and construct a TM S that decides A_TM as follows:

S = On input ⟨M, w⟩:
1.
First construct a TM N that simulates M on w while ensuring that none of M's original states is useless. TM N has input alphabet {0, 1, 2} and works as follows: on input 0, N goes through each of the original states of M, except for q_accept and q_reject, until all non-halting states of M have been used, and then enters q_accept; on input 1, N enters q_reject;

on input 2, N simulates M on w. Upon reading input 2, N erases its input tape, writes w on it, and branches to M's start state to simulate M. If that simulation is about to enter M's accept state, N instead enters a new state r.
2. Run R on input ⟨N⟩. If it accepts, reject. If it rejects, accept.

The only possible useless state of N is the state r, because on inputs 0 and 1 it uses all other states. The only way for N to use r is if M accepts w. If M does not accept w, then the simulation enters an infinite loop or rejects, and in both cases N never reaches r. Thus, if R accepts ⟨N⟩, then N has a useless state, and since the only possibly-useless state of N is r, which is unreachable exactly when M does not accept w, it follows that ⟨M, w⟩ ∉ A_TM. If R rejects ⟨N⟩, then N has no useless state, so the state r is reachable, which happens only when M accepts w, so ⟨M, w⟩ ∈ A_TM. This problem is reminiscent of the code-reachability, or dead-code, problem: determining whether a piece of code is never reached on any input.

Mapping Reduction. Consider the language L_INFINITE = {⟨M⟩ : L(M) is infinite}. By Rice's theorem, this language is not decidable. We want to show that L_INFINITE is not even recognizable. Idea: a mapping reduction from the complement of H_TM; that is, we want to show that (the complement of H_TM) ≤_m L_INFINITE.

Diagonalization. True/False.

(a) If language L is recognizable but undecidable, then its complement Lᶜ is not recognizable. True, since if a language is both recognizable and co-recognizable, then it is decidable.

(b) The complement of A_TM is co-recognizable. True. The statement means that A_TM is recognizable, which is known.

(c) The complement of A_TM is recognizable. False. We know that A_TM is recognizable but undecidable, so by (a), its complement cannot be recognizable.

(d) {⟨M⟩ : M is a TM and L(M) is uncountable} is undecidable. False. This is the empty language, since any language is at most countable (over finite alphabets and finite-length strings), and the empty language is decidable.

(e) {⟨M⟩ : M is a TM and L(M) is countable} is decidable. True.
This is the language of all TM encodings, since there are no uncountable languages (over finite alphabets and finite-length strings).

(f) There are some languages recognized by a 5-tape, nondeterministic Turing machine that cannot be recognized by a 1-tape, deterministic Turing machine. False.

(g) The language {0ⁿ1ⁿ : 0 ≤ n ≤ 1000} is regular. True. This language is finite.

(h) Every subset of a regular language is regular. False. The language Σ* is regular (a DFA with one state that is accepting recognizes it), but {0ⁿ1ⁿ : n ≥ 0} is a subset of Σ* and is not regular.

(i) Let L₄ = L₁L₂L₃ (concatenation, as written). If L₁ and L₂ are regular and L₃ is not regular, it is possible that L₄ is regular. True: for example, if L₁ = ∅, then L₄ = ∅, which is regular.

(j) Every non-regular language is infinite. True. This is the contrapositive of the statement that every finite language is regular.

(k) The intersection of any two non-regular languages is non-regular. False. If L₁ = {aⁿbⁿ : n ≥ 0} and L₂ = {bⁿaⁿ : n ≥ 0}, then L₁ ∩ L₂ = {ε}, which is regular.

(l) If each of the languages L₁, L₂, ... is regular, then ⋃_{i=1}^∞ Lᵢ is regular as well. False. If Lᵢ = {aⁱbⁱ}, then ⋃_{i=1}^∞ Lᵢ = {aⁱbⁱ : i ≥ 1}, which is not regular.

(m) If a TM's language is decidable, then this TM is a decider. False. There exists a TM M whose language is decidable but which is not a decider. Let M be the TM that loops indefinitely on all inputs, so that L = L(M) = ∅. L is decidable (the TM that rejects all inputs is a decider for L), but M is not a decider. This shows that just because a TM's language is decidable, it is not necessarily the case that the TM itself must be a decider.

(n) Every subset of a (Turing-)decidable language is decidable. False. There are languages A ⊆ B such that B is decidable but A is not. Let A be any undecidable language, say A_TM, and B = Σ*. Then B is decidable (via the TM that accepts every input), but A is undecidable. This shows that a subset of a decidable language is not necessarily decidable; i.e., bigger languages are not necessarily harder.

(o) The set of all Turing-undecidable languages is countable. False. The set of valid TM encodings is countable, so there are at most countably many deciders. Since a language is decidable only if some TM decides it, the number of Turing-decidable languages is at most countable.
However, the set of all languages is uncountable, so the set of undecidable languages must be uncountable; for otherwise, the union of the sets of decidable and undecidable languages would be countable, and since every language is either decidable or undecidable, it would follow that the set of all languages is countable, which is absurd.

(p) If L₁ is undecidable and L₂ is decidable, then L₁ △ L₂ must be undecidable, where △ denotes the symmetric difference operator, L₁ △ L₂ = (L₁ \ L₂) ∪ (L₂ \ L₁). True. First note that L₁ \ L₂ = L₁ ∩ L₂ᶜ. Assume for the purpose of contradiction that L₁ △ L₂ is decidable. Then (L₁ △ L₂) △ L₂ is decidable, because the set of decidable languages is closed under symmetric difference (the symmetric difference can be expressed using complements, unions, and intersections, operations under which the decidable languages are closed). However, (L₁ △ L₂) △ L₂ = L₁, which is undecidable by assumption. This gives a contradiction.

Why is (L₁ △ L₂) △ L₂ = L₁? Writing X △ Y = (X ∩ Yᶜ) ∪ (Xᶜ ∩ Y), we have L₁ △ L₂ = (L₁ ∩ L₂ᶜ) ∪ (L₁ᶜ ∩ L₂), so

    (L₁ △ L₂) ∩ L₂ᶜ = (L₁ ∩ L₂ᶜ ∩ L₂ᶜ) ∪ (L₁ᶜ ∩ L₂ ∩ L₂ᶜ) = L₁ ∩ L₂ᶜ,
    (L₁ △ L₂)ᶜ ∩ L₂ = (L₁ᶜ ∪ L₂) ∩ (L₁ ∪ L₂ᶜ) ∩ L₂ = (L₁ ∪ L₂ᶜ) ∩ L₂ = L₁ ∩ L₂,

and therefore

    (L₁ △ L₂) △ L₂ = (L₁ ∩ L₂ᶜ) ∪ (L₁ ∩ L₂) = L₁ ∩ (L₂ᶜ ∪ L₂) = L₁.

(q) The intersection of a recognizable language and an unrecognizable language is always unrecognizable. False. Let L₁ be any unrecognizable language and let L₂ = ∅, which is recognizable. Then L₁ ∩ L₂ = ∅, which is recognizable.

(r) Every context-free language is decidable. True. Given a context-free grammar and a word w, there is an algorithm to decide whether the grammar generates w.

(s) There is an undecidable language L whose complement Lᶜ is context-free. False. If Lᶜ is context-free, then Lᶜ is decidable, so L = (Lᶜ)ᶜ must also be decidable.

Prove that for every language L, there is a decider M that accepts every string in L and another decider M′ that rejects every string not in L. Explain why this does not prove that every language is decidable.

Solution. We can take M to be the TM that accepts all inputs and M′ to be the TM that rejects all inputs; then M accepts all strings in L, M′ rejects all strings not in L, and both are deciders. This does not show that L is decidable, since the definition of decidability requires the same TM to accept all strings in L and reject all strings not in L, not two different TMs. Contrast this with Theorem 4.22 in Sipser (3rd ed.), which states that a language is decidable iff it is both recognizable and co-recognizable: in our case, the TM that always accepts, M, does not recognize L, since it accepts strings that are not in L.
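As an added sanity check (not in the original tutorial), the symmetric-difference identity used in (p) can be verified exhaustively over all subsets of a small universe in Python, where the ^ operator on sets is exactly symmetric difference:

```python
from itertools import combinations

# Exhaustively check (L1 ^ L2) ^ L2 == L1 on all subsets of {0, 1, 2}.
# Python's ^ on sets computes the symmetric difference.
universe = {0, 1, 2}
subsets = [set(c) for r in range(len(universe) + 1)
           for c in combinations(universe, r)]

ok = all((L1 ^ L2) ^ L2 == L1 for L1 in subsets for L2 in subsets)
assert ok  # the identity holds on all 8 * 8 = 64 pairs
```

Of course, a finite check is not a proof; the set-algebra derivation above is the actual argument.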