Three Analog Neurons Are Turing Universal

Jiří Šíma

Institute of Computer Science, Czech Academy of Sciences, P. O. Box 5, 18207 Prague 8, Czech Republic, sima@cs.cas.cz

Abstract. The languages accepted online by binary-state neural networks with rational weights have been shown to be context-sensitive when an extra analog neuron is added (1ANNs). In this paper, we provide an upper bound on the number of additional analog units needed to achieve Turing universality. We prove that any Turing machine can be simulated by a binary-state neural network extended with three analog neurons (3ANNs) having rational weights, with a linear-time overhead. Thus, the languages accepted offline by 3ANNs with rational weights are recursively enumerable, which refines the classification of neural networks within the Chomsky hierarchy.

Keywords: analog neural network, Turing machine, Chomsky hierarchy

1 Introduction

The computational power of (recurrent) neural networks with the saturated-linear activation function depends on the descriptive complexity of their weight parameters [, 9]. Neural nets with integer weights, corresponding to binary-state networks, coincide with finite automata [,,, 8, 6, ]. Rational weights make the analog-state networks computationally equivalent to Turing machines [, ], and thus (by a real-time simulation []) polynomial-time computations of such networks are characterized by the complexity class P. Moreover, neural nets with arbitrary real weights can even derive super-Turing computational capabilities []. In particular, their polynomial-time computations correspond to the nonuniform complexity class P/poly, while any input/output mapping (including undecidable problems) can be computed within exponential time []. In addition, a proper hierarchy of nonuniform complexity classes between P and P/poly has been established for polynomial-time computations of neural nets with increasing Kolmogorov complexity of real weights [].

As can be seen, our understanding of the computational power of super-recursive (super-Turing) neural networks is satisfactorily fine-grained when changing from rational to arbitrary real weights. In contrast, there is still a gap between integer and rational weights, which results in a jump from regular to recursively enumerable languages in the Chomsky hierarchy.

(Research was done with institutional support RVO: and partially supported by the grant of the Czech Science Foundation No. P0//G06. The results are valid for more general classes of activation functions [6,, 5, ] including the logistic function [5].)

In the effort of refining the analysis of subrecursive neural nets, we have introduced a model of binary-state networks extended with one extra analog-state neuron (1ANNs) [7], as already a few additional analog units allow for Turing universality [, ]. Although this model of 1ANNs has been inspired by theoretical issues, neural networks combining different types of units and layers are widely used in practical applications, e.g. in deep learning [0], which also calls for such an analysis.

In our previous work, we have characterized syntactically the class of languages accepted online by 1ANNs [8] in terms of so-called cut languages [0]. The online input/output protocol means that a (potentially infinite) input word x is sequentially read symbol after symbol, each being processed with a constant-time overhead, while the neural network simultaneously signals via its output neuron whether the prefix of x that has been read so far belongs to the underlying language [, ]. By using this characterization we have shown that the languages recognized online by 1ANNs with rational weights are context-sensitive, and we have presented explicit examples of such languages that are not context-free. In addition, we have formulated a sufficient condition, in terms of the quasi-periodicity of its real weight parameters, under which a 1ANN accepts only a regular language. For example, 1ANNs whose weights are taken from the smallest field extension Q(β) of the rational numbers including a Pisot number β = 1/w (e.g. an integer β ∈ Z or the plastic constant), where w is the self-loop weight of the analog unit, have only the power of finite automata. These results refine the classification of subrecursive neural networks with weights between integer and rational, within the Chomsky hierarchy.

A natural question arises concerning an upper bound on the number of extra analog units with rational weights that are sufficient for simulating a Turing machine. In this paper, we prove that any language accepted by a Turing machine in time T(n) can be accepted offline by a binary-state neural network with three extra analog units (3ANN) having rational weights in time O(T(n)). The offline input/output protocol assumes that an input word x of length n is read by the neural network at the beginning of a computation, either sequentially symbol after symbol in time O(n), or already encoded in an initial state of an analog unit. The neural network then carries out its computation until it possibly halts and decides whether x belongs to the underlying language, which is indicated by its output neurons []. Thus, for rational weights, the languages accepted online by 1ANNs or offline by 3ANNs are context-sensitive or recursively enumerable, respectively, while the classification of offline 1ANNs or even 2ANNs (with two analog units) within the Chomsky hierarchy remains open for further research.

The proof exploits the classical technique of implementing the two stacks of a two-stack pushdown automaton, a model equivalent to a Turing machine, by two analog units, including the encoding of stack contents based on a Cantor-like set []. In order to minimize the number of analog neurons, the first stack allows only for the push operation while the second one realizes the top and pop operations. To compensate for these restrictions, a third analog unit is introduced, which is used to swap the contents of the two stacks by adding and subtracting their codes appropriately.
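Before formalizing the network, it may help to see the two-stack view of the tape and the restricted stack operations in ordinary code. The following Python sketch is purely illustrative (it is not the paper's construction): it keeps the tape in two Python lists, where s1 is push-only, s2 is top/pop-only with the scanned symbol on top, and a swap exchanges them, so that one Turing-machine move becomes a short (2-operation) or long (5-operation) sequence of stack operations, as detailed in Section 3.

    # Illustrative two-stack view of a Turing-machine tape (not the paper's
    # network): s1 is a push-only stack, s2 supports only top/pop and keeps
    # the scanned symbol on top, and swap() exchanges the two stacks.

    class TwoStackTape:
        def __init__(self, word):
            self.s1 = []                          # push-only stack
            self.s2 = list(reversed(word))        # s2[-1] is the symbol under the head
            self.side = "R"                       # which side of the tape s2 holds

        def top(self):
            return self.s2[-1] if self.s2 else 0  # 0 encodes the blank padding

        def pop(self):
            return self.s2.pop() if self.s2 else 0

        def push(self, b):
            self.s1.append(b)

        def swap(self):
            self.s1, self.s2 = self.s2, self.s1
            self.side = "L" if self.side == "R" else "R"

        def move(self, b, d):
            """Overwrite the scanned symbol with b and move the head in direction d."""
            if d != self.side:                    # long instruction: 5 stack operations
                self.push(self.top()); self.pop(); self.swap()
            self.push(b); self.pop()              # short instruction: 2 stack operations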

The paper is organized as follows. In Section 2, we introduce a formal model of binary-state neural networks with three extra analog units. Section 3 shows a simulation of a Turing machine by a 3ANN with rational weights. We present some open problems in Section 4.

2 Neural Networks With Three Analog Units

In this section, we specify a formal computational model of binary-state neural networks with three extra analog units (shortly, 3ANN), denoted N, which will be used for simulating a Turing machine. The network N consists of s units (neurons), indexed as V = {1, ..., s}. All the units in N are assumed to be binary-state (shortly binary) neurons except for the first three neurons 1, 2, 3 ∈ V, which are analog-state (shortly analog) units. The neurons are connected into a directed graph representing the architecture of N, in which each edge (i, j), leading from unit i to unit j, is labeled with a real weight w(i, j) ∈ R. The absence of a connection within the architecture corresponds to a zero weight between the respective neurons, and vice versa.

The computational dynamics of N determines for each unit j ∈ V its state (output) y_j^{(t)} at discrete time instants t = 0, 1, 2, .... The outputs y_1^{(t)}, y_2^{(t)}, y_3^{(t)} of the analog units 1, 2, 3 ∈ V are real numbers from the unit interval I = [0, 1], whereas the states y_j^{(t)} of the remaining s − 3 neurons j ∈ V \ {1, 2, 3} are binary values from {0, 1}. This establishes the network state y^{(t)} = (y_1^{(t)}, y_2^{(t)}, y_3^{(t)}, y_4^{(t)}, ..., y_s^{(t)}) ∈ I^3 × {0, 1}^{s−3} at each discrete time instant t ≥ 0.

Without loss of efficiency [9], we assume a synchronous fully parallel mode for notational simplicity. At the beginning of a computation, the neural network N is placed in an initial state y^{(0)} ∈ I^3 × {0, 1}^{s−3}, which may also include an external input. At discrete time instant t ≥ 0, the excitation of any neuron j ∈ V is defined as

    ξ_j^{(t)} = Σ_{i=0}^{s} w(i, j) y_i^{(t)},

including a real bias value w(0, j) ∈ R, which can be viewed as the weight from a formal constant unit input y_0^{(t)} = 1 for every t ≥ 0 (i.e. 0 ∉ V). At the next instant t + 1, all the neurons j ∈ V compute their new outputs y_j^{(t+1)} in parallel by applying an activation function σ_j : R → I to ξ_j^{(t)}, that is, y_j^{(t+1)} = σ_j(ξ_j^{(t)}). The analog units j ∈ {1, 2, 3} employ the saturated-linear function σ_j(ξ) = σ(ξ), where

    σ(ξ) = 1 for ξ ≥ 1,   σ(ξ) = ξ for 0 < ξ < 1,   σ(ξ) = 0 for ξ ≤ 0,    (1)

while for neurons j ∈ V \ {1, 2, 3} with binary states y_j ∈ {0, 1}, the Heaviside activation function σ_j(ξ) = H(ξ) is used, where

    H(ξ) = 1 for ξ ≥ 0   and   H(ξ) = 0 for ξ < 0.    (2)
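As a quick illustration of the dynamics just defined, the following sketch performs one fully parallel update with the saturated-linear activation (1) on the three analog units and the Heaviside activation (2) on the remaining binary neurons. The weight-matrix layout (row 0 holding the biases) and the neuron ordering are assumptions made for this example only.

    import numpy as np

    def saturated_linear(xi):
        """Saturated-linear activation sigma of Eq. (1)."""
        return np.clip(xi, 0.0, 1.0)

    def heaviside(xi):
        """Heaviside activation H of Eq. (2): 1 for xi >= 0, else 0."""
        return (xi >= 0.0).astype(float)

    def network_step(W, y):
        """One synchronous update y(t) -> y(t+1) of an s-unit network.

        W is an (s+1) x s rational weight matrix whose row 0 holds the biases
        w(0, j); y is the current state vector with y[0:3] the analog units.
        """
        xi = W.T @ np.concatenate(([1.0], y))    # excitations xi_j = sum_i w(i, j) y_i
        y_new = heaviside(xi)                    # binary neurons
        y_new[:3] = saturated_linear(xi[:3])     # analog neurons 1, 2, 3
        return y_new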

In this way, the new network state y^{(t+1)} ∈ I^3 × {0, 1}^{s−3} at time t + 1 is determined.

The computational power of neural networks has been studied analogously to that of traditional models of computation, so that the networks are exploited as acceptors of formal languages L ⊆ Σ* [9]. For simplicity, we assume the binary alphabet Σ = {0, 1} and we use the following offline input/output protocol []. For a finite 3ANN N, an input word (string) x = x_1 ... x_n ∈ {0, 1}^n of arbitrary length n ≥ 0 can be encoded by the initial state of the analog input unit inp ∈ {1, 2, 3}, using an encoding γ : {0, 1}* → I, that is,

    y_inp^{(0)} = γ(x_1 ... x_n).    (3)

We assume that the encoding γ can be evaluated by N in linear time O(n) if x is read online and stored, bit after bit, in the state of inp. After N carries out its computation, deciding within T(n) updates whether the input word belongs to L, the network halts and produces the result, which is indicated by the two neurons halt, out ∈ V as

    y_halt^{(t)} = 1 if t = T(n), and 0 if t ≠ T(n);   y_out^{(T(n))} = 1 if x ∈ L, and 0 if x ∉ L.    (4)

Note that the computation of N over x may not terminate. We say that a language L ⊆ {0, 1}* is accepted by the 3ANN N, which is denoted by L = L(N), if for any input word x ∈ {0, 1}*, N halts and accepts x iff x ∈ L.

3 Simulating a Turing Machine

The following theorem shows how to simulate a Turing machine by a 3ANN with rational weights and a linear-time overhead.

Theorem 1. Given a Turing machine M that accepts a language L = L(M) in time T(n), there is a 3ANN N with rational weights which accepts the same language L = L(N) in time O(T(n)).

Proof. Without loss of generality, we assume that the given Turing machine M satisfies the following technical conditions. Its tape is arbitrarily extendable to the left and to the right, and the tape alphabet is {0, 1}, which is sufficient for encoding the blank symbol uniquely (e.g. each symbol is encoded by two bits), so that there is the infinite string 0^ω to the left and to the right of the used tape. At startup, M begins with an input word x = x_1 ... x_n ∈ {0, 1}^n written on the tape so that x_1 is under the tape head.

We will construct a 3ANN N with the set of neurons V, simulating the Turing machine M by using two stacks s_1 and s_2. One stack holds the contents of the tape to the left of the head of M while the other stack stores the right part of the tape.
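Referring back to the offline protocol of Eqs. (3) and (4), a minimal driver for such a network could look as follows. The helpers step (one synchronous update, as in the previous sketch) and encode_input (playing the role of γ in (3)), as well as the positions of the inp, halt, and out neurons, are all assumptions of this example.

    import numpy as np

    def run_offline(step, W, x_bits, encode_input, inp=2, halt=-2, out=-1, max_steps=10**6):
        """Offline acceptance: encode the input, run until halt fires, read out."""
        y = np.zeros(W.shape[1])
        y[inp] = encode_input(x_bits)        # y_inp^(0) = gamma(x_1 ... x_n), Eq. (3)
        for _ in range(max_steps):
            y = step(W, y)
            if y[halt] == 1.0:               # t = T(n): the decision is available
                return y[out] == 1.0         # Eq. (4): 1 iff x belongs to L
        raise RuntimeError("no decision within max_steps")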

We assume that the first stack s_1 implements only the push(b) operation, adding a given element b to the top of s_1, whereas the second stack s_2 allows only for the top and pop operations, which read and remove the top element of s_2, respectively. In addition, the top element of s_2 models the symbol currently under the head of M. In order to compensate for these restrictions, we introduce the swap operation, which exchanges the contents of s_1 and s_2, while the control unit remembers in its current state which part of the tape contents, c_cur ∈ {L, R}, either to the left of the head for c_cur = L or to the right for c_cur = R, is stored in the second stack s_2.

We show how to implement one instruction of M by using the two stacks s_1 and s_2 and their operations push(b), top, pop, and swap. The transition function δ of M specifies, for its current state q_cur and a symbol x under the head, its new state q_new, a symbol b to overwrite x, and a direction d ∈ {L, R} for the tape head to move, which is either to the left for d = L or to the right for d = R, that is, δ(q_cur, x) = (q_new, b, d). The transition from q_cur to q_new is realized by the control unit, while the tape update takes place in the stacks, for which we distinguish two cases, a so-called short and long instruction.

The short instruction applies when d = c_cur. In this case, the two operations

    push(b); pop    (5)

implement the corresponding update of the tape contents, so that x under the head of M is overwritten by b, the head moves to a new symbol which is next in the desired direction d and appears at the top of s_2, while c_new = c_cur is preserved. For the long instruction, when d ≠ c_cur, the following sequence of five operations

    push(top); pop; swap; push(b); pop    (6)

is employed, where the first two operations push(top); pop shift the current symbol x = top under the head of M from the top of s_2 to the top of s_1. Then the swap operation exchanges the contents of s_1 and s_2, so that x is back at the top of s_2. Now c_new = d ≠ c_cur, which ensures the conversion to the previous case, and hence the last two operations of (6) coincide with the short instruction (5).

The stacks s_1 and s_2 are implemented by analog neurons of N having the same names, s_1, s_2 ∈ V. The contents a_1 ... a_p ∈ {0, 1}* of stack s_k for k ∈ {1, 2}, where a_1 is the top element of s_k, is represented by the analog state of neuron s_k, using the encoding γ : {0, 1}* → I,

    y_{s_k} = γ(a_1 ... a_p) = Σ_{i=1}^{p} 6(a_i + ·)(·/7)^{i+·} ∈ [0, ·/7) ⊂ I.    (7)

Note that the empty string ε is encoded by zero. All the possible analog state values generated by the encoding (7) create a Cantor-like set, so that two strings with distinct top symbols are represented by two sufficiently separated numbers []. In particular, for a_1 ... a_p ≠ ε, we have

    a_1 = 0 if γ(a_1 ... a_p) ∈ [·/6, ·/8),   and   a_1 = 1 if γ(a_1 ... a_p) ∈ [·/7, ·/7),    (8)
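Equation (7) codes a stack as a point of a Cantor-like set so that, as in (8), the top symbol is read off by a threshold and, as described below, push and pop become affine updates. The following sketch illustrates this mechanism with the classical base-4 constants of Siegelmann-Sontag-style constructions; they are used here only for illustration and differ from the rational constants of (7).

    from fractions import Fraction

    # Illustrative Cantor-like stack encoding: gamma(a_1 ... a_p) =
    # sum_i (2*a_i + 1) / 4**i, with bits[0] the top of the stack.
    # These base-4 constants are illustrative; the paper's encoding (7)
    # uses different rational constants.

    def encode(bits):
        return sum(Fraction(2 * a + 1, 4**(i + 1)) for i, a in enumerate(bits))

    def top(code):
        """Codes with top bit 1 lie in [3/4, 1); with top bit 0 in [1/4, 1/2)."""
        return 1 if code >= Fraction(3, 4) else 0

    def push(code, b):
        """push(b) is a single affine update, computable by a sigma-neuron."""
        return Fraction(2 * b + 1, 4) + code / 4

    def pop(code):
        """pop is the inverse affine update, using the top bit read from the code."""
        return 4 * code - (2 * top(code) + 1)

    assert pop(push(encode([0, 1, 1]), 1)) == encode([0, 1, 1])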

which can be used for reading the top element of stack s_2 (which models the current tape symbol under the head of M) by a binary neuron employing the Heaviside activation function (2):

    top = 1 iff −· + 8 y_{s_2} ≥ 0.    (9)

Furthermore, the push(b) and pop operations can be implemented by the analog neurons s_1 and s_2, respectively, employing the linear part of the activation function (1), as

    push(b): y_{s_1}^{new} = ξ_{s_1}^{cur} = 6(b + ·) + (·/7) y_{s_1}^{cur} = ·b + · + (·/7) y_{s_1}^{cur} ∈ [0, ·/7),    (10)
    pop: y_{s_2}^{new} = ξ_{s_2}^{cur} = 6(top + ·) + (·) y_{s_2}^{cur} = (6/7) top + · + ·7 y_{s_2}^{cur} ∈ [0, ·/7),    (11)

according to (7), where y_{s_k}^{new} = σ(ξ_{s_k}^{cur}) and y_{s_k}^{cur}, for k ∈ {1, 2}, denote the analog states (excitations) of neuron s_k encoding the new and the current contents of stack s_k, respectively.

According to [], one can construct a binary-state (size-optimal) neural network N′ with integer weights that implements the finite control of the Turing machine M (i.e. a finite automaton). In particular, N′ is a subnetwork of the 3ANN N with binary neurons in V′ ⊆ V, which evaluates the transition function δ of M within four time steps by using the method of threshold circuit synthesis [7] (cf. [6]). Moreover, one can ensure that N′ operates in the fully parallel mode by using the technique of [9]. Thus, N′ holds internally a current state q_cur of M and receives the current symbol x ∈ {0, 1} under the tape head of M (which is stored at the top of stack s_2) via the neuron hd ∈ V implementing the top operation. Then, N′ computes δ(q_cur, x) = (q_new, b, d) within four computational steps, replaces the current state q_cur with q_new, and outputs the symbol b ∈ {0, 1} to overwrite x via the neuron ow ∈ V. In addition, N′ holds the current value of c_cur ∈ {L, R}, which, together with the calculated direction d ∈ {L, R} of the head move, decides whether a short or a long instruction applies, depending on whether or not c_cur = d.

At the beginning of a computation, N′ holds the initial state of M, and the stacks s_1, s_2 contain the initial tape contents including the input word x = x_1 ... x_n ∈ {0, 1}^n, which is encoded by the input neuron inp = s_2 according to (3) and (7). Thus, y_{s_2}^{(0)} = γ(x) = γ(x 0^ω) is obtained from (7) by summing the terms for x_1, ..., x_n together with the geometric series corresponding to the infinite suffix 0^ω, while y_{s_1}^{(0)} = γ(0^ω) is the value of this geometric series alone, both lying in [0, ·/7); N can evaluate these codes in linear time O(n) by using (10). One computational step of M is simulated within one macrostep of N, which takes 7 computational steps for a short instruction, while a long one consumes 18 steps of N.
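The initial stack states combine the input word with the code of the infinite blank suffix 0^ω, which is just a geometric series. The sketch below computes both initial codes with the same illustrative base-4 constants as before; the paper's encoding (7), with its denominator-7 constants, is handled in exactly the same way.

    from fractions import Fraction

    def gamma_zero_tail(shift):
        """Code of the suffix 0^omega placed after `shift` symbols:
        sum_{i > shift} 1/4**i = (1/3) * 4**(-shift)."""
        return Fraction(1, 3) / 4**shift

    def gamma_with_blank_suffix(x):
        """gamma(x_1 ... x_n 0^omega): the finite prefix plus the geometric tail."""
        prefix = sum(Fraction(2 * b + 1, 4**(i + 1)) for i, b in enumerate(x))
        return prefix + gamma_zero_tail(len(x))

    # Initial states of the stack neurons for an input word x (illustrative):
    # s2 holds x followed by blanks (x_1 on top, under the head), s1 holds
    # the all-blank left part of the tape.
    x = [1, 0, 1]
    y_s2_init = gamma_with_blank_suffix(x)     # = gamma(1 0 1 0^omega)
    y_s1_init = gamma_zero_tail(0)             # = gamma(0^omega) = 1/3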

Hereafter, the computational time t of N is related to the macrostep. At the beginning of the macrostep, when t = 0, the states of the analog neurons s_1, s_2 encode the stack contents according to (7), that is, y_{s_k}^{(0)} = z_k ∈ [0, ·/7) for k ∈ {1, 2}. Then, N reads the top element of s_2 via the neuron hd ∈ V at time instant t = 1 of the macrostep, which is implemented by the integer weights

    w(0, hd) = −·,   w(s_2, hd) = 8,    (12)

implying y_hd^{(1)} = top by (9). On the other hand, N outputs the symbol b ∈ {0, 1} to overwrite the current tape cell under the head via the neuron ow ∈ V, either at time instant t = 6 for a short instruction (i.e. y_ow^{(6)} = b) or at time instant t = 17 for a long one (i.e. y_ow^{(17)} = b), whereas the state of ow ∈ V is 0 at all other times, thus producing the sequence 0^5 b 0 or 0^{16} b 0, respectively.

We further extend N with the four control neurons c_1, c_2, c_3, c_4 ∈ V for synchronizing the stack operations. Within each macrostep of N, the control neurons c_1, c_2, c_3, c_4 produce four fixed sequences of binary output values, of length 7 for a short instruction or of length 18 for a long instruction, respectively, which can easily be implemented by a finite automaton and incorporated within N′. For realizing the stack operations, the binary neurons pop_1, pop_2, bias ∈ V and the third auxiliary analog unit s_3 ∈ V are introduced in N. In Table 1, the incoming rational weights to the neurons in V \ V′ are defined in the form of a weight matrix with entry w(i, j) ∈ Q in the i-th row and j-th column, where the analog neurons are separated from the binary ones by double lines. For example, the entries in the column of pop_1 ∈ V \ V′ and the rows of the control neuron c_1 ∈ V, the analog neuron s_2, and the formal unit 0 are the weight w(c_1, pop_1), the weight w(s_2, pop_1), and the bias w(0, pop_1), respectively.

We will verify the implementation of the long instruction, including the short one, within one macrostep of N, which is composed of 18 network state updates. The state evolution of the neurons during the macrostep is presented in Table 2, which also covers the short instruction when the block bounded by the horizontal double lines, corresponding to the time interval from t = 6 to t = 16 within the long instruction, is skipped. Moreover, alternatives for the short instruction are presented after the slash symbol; e.g. t = 17/6 means the seventeenth/sixth computational step of the long/short instruction within the macrostep.

Observe that for every t = 1, ..., 18 and k ∈ {1, 2},

    y_{pop_k}^{(t)} = 0 if (k = 1 & t ≠ 6) or (k = 2 & t ≠ 17),    (13)

since

    ξ_{pop_k}^{(t−1)} = w(0, pop_k) + w(c_k, pop_k) y_{c_k}^{(t−1)} + w(s_2, pop_k) y_{s_2}^{(t−1)},    (14)

by Table 1, reducing to ξ_{pop_k}^{(t−1)} = −· + · y_{s_2}^{(t−1)} < 0 for y_{c_k}^{(t−1)} = 1, which holds for (k = 1 & t ≠ 6) or (k = 2 & t ≠ 17). Similarly, we have

    y_bias^{(t)} = 1 iff y_c^{(t−1)} = 0 iff t = 8, for every t = 1, ..., 18,    (15)

because ξ_bias^{(t−1)} = w(c, bias) y_c^{(t−1)} ≥ 0 iff y_c^{(t−1)} = 0, where c denotes the control neuron wired to bias.
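The read-out (12) is a single Heaviside neuron comparing the analog stack code against a threshold separating the two sub-intervals of (8). With the illustrative base-4 encoding used in the sketches above, any threshold between 1/2 and 3/4 works; the integer weights below (bias −5, gain 8, threshold 5/8) are therefore placeholders rather than the paper's values.

    def heaviside(xi):
        return 1 if xi >= 0 else 0

    def read_top(y_s2):
        """y_hd = H(w(0, hd) + w(s2, hd) * y_s2) with illustrative integer weights:
        codes with top bit 1 (>= 3/4) fire, codes with top bit 0 (< 1/2) do not."""
        return heaviside(-5 + 8 * y_s2)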

Table 1. The weight matrix with w(i, j) in the i-th row and j-th column for j ∈ V \ V′; the columns correspond to pop_1, pop_2, bias, s_1, s_2, s_3, and the rows to 0, ow, c_1, c_2, c_3, c_4, pop_1, pop_2, bias, s_1, s_2, s_3, with the analog neurons separated from the binary ones by double lines.

Furthermore,

    y_{s_3}^{(t)} = 0 if t ≠ 8, for every t = 1, ..., 18,    (16)

since ξ_{s_3}^{(t−1)} = w(0, s_3) + w(c, s_3) y_c^{(t−1)} + w(s_1, s_3) y_{s_1}^{(t−1)} + w(s_2, s_3) y_{s_2}^{(t−1)}, which is negative for y_c^{(t−1)} = 1, and this holds for every t ≠ 8.

For the symbol under the head of M held in y_hd^{(1)} at time instant t = 1 according to (12), the binary-state subnetwork N′ evaluates the transition function δ of M during the four computational steps t = 2, 3, 4, 5, deciding whether a long or a short instruction occurs, which is indicated through the state y_{c_1}^{(5)} of the control neuron c_1 at time instant t = 5; that is, y_{c_1}^{(5)} = 0 iff a long instruction applies. In the meantime, the state of the analog unit s_k for k ∈ {1, 2}, starting with y_{s_k}^{(0)} = z_k ∈ [0, ·/7), is multiplied by its self-loop weight w(s_k, s_k) at each time instant t = 1, ..., 6, producing

    y_{s_k}^{(t)} = w(s_k, s_k)^t z_k   for t = 0, ..., 6,    (17)

since the neurons ow, c, pop_1, pop_2, bias, and s_3 all output 0 for every t = 0, ..., 5, due to (13), (15), and (16). For a long instruction, we have y_{c_1}^{(5)} = 0, which implies

    ξ_{pop_1}^{(5)} = w(0, pop_1) + w(c_1, pop_1) y_{c_1}^{(5)} + w(s_2, pop_1) y_{s_2}^{(5)} = −· + 8 z_2    (18)

according to (14) and (17).
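Conditions (13)-(16) all rely on the same gating pattern: a binary neuron receives a strongly negative weight from its control line, so it is forced to output 0 at every step except the one where the control neuron is 0. A minimal sketch of this mechanism, with placeholder weights consistent with the illustrative encoding above:

    def heaviside(xi):
        return 1 if xi >= 0 else 0

    def gated_pop(control, y_s2, bias=-5, w_control=-8, w_s2=8):
        """y_pop = H(bias + w_control*control + w_s2*y_s2): reads the top symbol
        from the stack code y_s2, but only at the step where control = 0;
        for control = 1 the excitation stays negative for every code in [0, 1)."""
        return heaviside(bias + w_control * control + w_s2 * y_s2)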

Table 2. The macrostep of the 3ANN N simulating one long/short instruction of the TM M: for t = 0, ..., 18, the table lists the states y_hd^{(t)}, y_ow^{(t)}, y_{c_1}^{(t)}, ..., y_{c_4}^{(t)}, y_{pop_1}^{(t)}, y_{pop_2}^{(t)}, y_bias^{(t)}, y_{s_1}^{(t)}, y_{s_2}^{(t)}, y_{s_3}^{(t)}, with the alternatives for the short instruction given after the slash symbol.

Hence, y_{pop_1}^{(6)} = top by (9), which gives

    y_{s_1}^{(7)} = w(ow, s_1) y_ow^{(6)} + w(c, s_1) y_c^{(6)} + w(pop_1, s_1) y_{pop_1}^{(6)} + w(bias, s_1) y_bias^{(6)} + w(s_1, s_1) y_{s_1}^{(6)} + w(s_2, s_1) y_{s_2}^{(6)} = z_1' ∈ [0, ·/7)    (19)

by Table 1, since y_ow^{(6)} = y_bias^{(6)} = y_{s_3}^{(6)} = 0, y_c^{(6)} = 1, and y_{s_1}^{(6)} is the corresponding multiple of z_1, due to (15), (16), and (17). It follows from (10) and (19) that z_1' encodes the contents of the stack s_1 after the first operation push(top) of the long instruction (6) has been applied to γ^{−1}(z_1). Similarly,

    y_{s_2}^{(7)} = w(c, s_2) y_c^{(6)} + w(pop_1, s_2) y_{pop_1}^{(6)} + w(pop_2, s_2) y_{pop_2}^{(6)} + w(s_2, s_2) y_{s_2}^{(6)} = z_2' ∈ [0, ·/7)    (20)

due to y_{pop_2}^{(6)} = 0 and y_{s_2}^{(6)} being the corresponding multiple of z_2, by (13) and (17), respectively. According to (11) and (20), we thus know that z_2' encodes the contents of the stack s_2 after the second operation pop of the long instruction (6) has been applied to γ^{−1}(z_2).

The swap operation starts at time instant t = 8, when

    y_{s_1}^{(8)} = w(s_1, s_1) y_{s_1}^{(7)},    (21)
    y_{s_2}^{(8)} = w(s_2, s_2) y_{s_2}^{(7)},    (22)
    y_{s_3}^{(8)} = w(0, s_3) + w(s_1, s_3) y_{s_1}^{(7)} + w(s_2, s_3) y_{s_2}^{(7)},    (23)

according to (19), (20), and Table 1, since y_ow^{(7)} = y_c^{(7)} = y_{pop_1}^{(7)} = y_{pop_2}^{(7)} = y_bias^{(7)} = 0 due to (13) and (15). At time instant t = 9, we have

    y_{s_1}^{(9)} = w(bias, s_1) y_bias^{(8)} + w(s_1, s_1) y_{s_1}^{(8)} + w(s_3, s_1) y_{s_3}^{(8)},    (24)
    y_{s_2}^{(9)} = w(bias, s_2) y_bias^{(8)} + w(s_2, s_2) y_{s_2}^{(8)} + w(s_3, s_2) y_{s_3}^{(8)},    (25)

by (21)-(23) and Table 1, since y_ow^{(8)} = y_c^{(8)} = y_{pop_1}^{(8)} = y_{pop_2}^{(8)} = 0 and y_bias^{(8)} = 1 due to (13) and (15). This means that the respective multiples of z_1' and z_2' are exchanged between s_1 and s_2; cf. (21), (22) and (24), (25), respectively. Similarly to (17), the state of the analog unit s_k for k ∈ {1, 2}, starting with y_{s_k}^{(9)} given by (24) and (25), respectively, is further multiplied by its self-loop weight w(s_k, s_k) at each time instant t = 10, ..., 17, producing

    y_{s_k}^{(t)} = w(s_k, s_k)^{t−9} y_{s_k}^{(9)}   for t = 9, ..., 17,    (26)

since the neurons ow, c, pop_1, pop_2, bias, and s_3 all output 0 for every t = 9, ..., 16, due to (13), (15), and (16). Thus, the swap operation is finished when y_{s_1}^{(t)} = z_2' and y_{s_2}^{(t)} = z_1'. Analogously to (18), y_{c_2}^{(16)} = 0 ensures

    ξ_{pop_2}^{(16)} = w(0, pop_2) + w(c_2, pop_2) y_{c_2}^{(16)} + w(s_2, pop_2) y_{s_2}^{(16)} = −· + 8 z_1'    (27)

according to (14) and (26), which implies

    y_{pop_2}^{(17)} = top    (28)

by (9).
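The swap of (21)-(25) uses only additions and subtractions that a saturated-linear unit can perform: the third analog neuron first accumulates the sum of the two codes, and each stack neuron then subtracts its own old value from that sum. The sketch below shows this arithmetic in isolation; the interleaved scaling by the self-loop weights and the exact constants of the paper are omitted, and the values are illustrative.

    from fractions import Fraction

    def swap_via_third_unit(y_s1, y_s2):
        """Exchange two stack codes using only affine updates and one helper unit."""
        y_s3 = y_s1 + y_s2        # step t = 8: s3 accumulates the sum of both codes
        new_s1 = y_s3 - y_s1      # step t = 9: s1 := sum - old s1 = old s2
        new_s2 = y_s3 - y_s2      #             s2 := sum - old s2 = old s1
        return new_s1, new_s2, 0  # s3 returns to 0 afterwards, cf. (16)

    a, b = Fraction(5, 28), Fraction(11, 56)
    assert swap_via_third_unit(a, b)[:2] == (b, a)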

At time instant t = 18, (19) reads as

    y_{s_1}^{(18)} = w(ow, s_1) y_ow^{(17)} + w(c, s_1) y_c^{(17)} + w(s_1, s_1) y_{s_1}^{(17)} = z_1'' ∈ [0, ·/7),    (29)

since y_{pop_1}^{(17)} = y_bias^{(17)} = y_{s_3}^{(17)} = 0, y_ow^{(17)} = b, y_c^{(17)} = 1, and y_{s_1}^{(17)} is the corresponding multiple of z_2', due to (13), (15), (16), and (26). It follows from (10) and (29) that z_1'' encodes the contents of the stack s_1 after the fourth operation push(b) of the long instruction (6) has been applied to γ^{−1}(z_2'). Similarly to (20),

    y_{s_2}^{(18)} = w(c, s_2) y_c^{(17)} + w(pop_2, s_2) y_{pop_2}^{(17)} + w(s_2, s_2) y_{s_2}^{(17)} = z_2'' ∈ [0, ·/7)    (30)

by (26) and (28). According to (11) and (30), we thus know that z_2'' encodes the contents of the stack s_2 after the fifth operation pop of (6) has been applied to γ^{−1}(z_1'), which completes the macrostep of N for a long instruction.

For a short instruction, when y_{c_1}^{(5)} = 1, the configuration at time instant t = 5 (with y_{s_1}^{(5)} and y_{s_2}^{(5)} being the corresponding multiples of z_1 and z_2) coincides with that of a long instruction at time instant t = 16, and the push(b) and pop operations of (5) are implemented analogously. Finally, if M reaches its final state at the computational time T(n) and accepts the input word x, then this is indicated by the two neurons halt, out ∈ V of the subnetwork N′ during macrostep T(n), according to (4). Hence, N simulates M in time O(T(n)), because each macrostep takes only a constant number of network updates, which completes the proof of the theorem.

4 Conclusion

In this paper, we have achieved an upper bound on the number of extra analog units that are sufficient to make binary-state neural networks Turing universal. We have proven that three additional analog neurons with rational weights suffice for simulating a Turing machine with a linear-time overhead, which complements the lower bound that neural networks with one extra analog unit and rational weights accept online context-sensitive languages. It is an open question whether this upper bound can be improved, that is, whether only one or two extra rational-weight analog units suffice for simulating any Turing machine. Another challenge for further research is to generalize the characterization of languages [8] to those accepted offline or by networks employing two (or even more) extra analog units. Nevertheless, the ultimate goal is to prove a proper natural hierarchy of neural networks between integer and rational weights, similarly as it is known between rational and real weights [], and possibly to map it onto known hierarchies of regular and context-free languages. This problem is related to the more general issue of finding suitable complexity measures for subrecursive neural networks that establish such complexity hierarchies and could be employed in practical neurocomputing, e.g. the precision of weight parameters, energy complexity [6], temporal coding, etc.

References

1. Alon, N., Dewdney, A.K., Ott, T.J.: Efficient simulation of finite automata by neural nets. Journal of the ACM 38(2), 495-514 (1991)
2. Balcázar, J.L., Gavaldà, R., Siegelmann, H.T.: Computational power of neural networks: A characterization in terms of Kolmogorov complexity. IEEE Transactions on Information Theory 43(4), 1175-1183 (1997)
3. Horne, B.G., Hush, D.R.: Bounds on the complexity of recurrent neural network implementations of finite state machines. Neural Networks 9(2), 243-252 (1996)
4. Indyk, P.: Optimal simulation of automata by neural nets. In: Proceedings of the STACS 1995 Twelfth Annual Symposium on Theoretical Aspects of Computer Science. LNCS, vol. 900, pp. 337-348 (1995)
5. Kilian, J., Siegelmann, H.T.: The dynamic universality of sigmoidal neural networks. Information and Computation 128(1), 48-56 (1996)
6. Koiran, P.: A family of universal recurrent networks. Theoretical Computer Science 168(2), 473-480 (1996)
7. Lupanov, O.B.: On the synthesis of threshold circuits. Problemy Kibernetiki 26, 109-140 (1973)
8. Minsky, M.: Computation: Finite and Infinite Machines. Prentice-Hall, Englewood Cliffs (1967)
9. Orponen, P.: Computing with truly asynchronous threshold logic networks. Theoretical Computer Science 174(1-2), 123-136 (1997)
10. Schmidhuber, J.: Deep learning in neural networks: An overview. Neural Networks 61, 85-117 (2015)
11. Siegelmann, H.T.: Recurrent neural networks and finite automata. Computational Intelligence 12(4) (1996)
12. Siegelmann, H.T.: Neural Networks and Analog Computation: Beyond the Turing Limit. Birkhäuser, Boston (1999)
13. Siegelmann, H.T., Sontag, E.D.: Analog computation via neural networks. Theoretical Computer Science 131(2), 331-360 (1994)
14. Siegelmann, H.T., Sontag, E.D.: On the computational power of neural nets. Journal of Computer and System Sciences 50(1), 132-150 (1995)
15. Šíma, J.: Analog stable simulation of discrete neural networks. Neural Network World 7(6) (1997)
16. Šíma, J.: Energy complexity of recurrent neural networks. Neural Computation 26(5) (2014)
17. Šíma, J.: The power of extra analog neuron. In: Proceedings of the TPNC 2014 Third International Conference on Theory and Practice of Natural Computing. LNCS, vol. 8890 (2014)
18. Šíma, J.: Neural networks between integer and rational weights. In: Proceedings of the IJCNN 2017 Thirtieth International Joint Conference on Neural Networks. IEEE (2017)
19. Šíma, J., Orponen, P.: General-purpose computation with neural networks: A survey of complexity theoretic results. Neural Computation 15(12), 2727-2778 (2003)
20. Šíma, J., Savický, P.: Quasi-periodic β-expansions and cut languages. Theoretical Computer Science 720 (2018)
21. Šíma, J., Wiedermann, J.: Theory of neuromata. Journal of the ACM 45(1), 155-178 (1998)
22. Šorel, M., Šíma, J.: Robust RBF finite automata. Neurocomputing 62, 93-110 (2004)
