Continuous time models are equivalent to Turing machines.


Continuous time models are equivalent to Turing machines. A characterization of P and NP with ordinary differential equations.

Olivier Bournez 1, Daniel S. Graça 2,3, and Amaury Pouly 1,2
1 Ecole Polytechnique, LIX, Palaiseau Cedex, France
2 CEDMES/FCT, Universidade do Algarve, C. Gambelas, Faro, Portugal
3 SQIG/Instituto de Telecomunicações, Lisbon, Portugal

April 1, 2014

Abstract

In this paper, we give an implicit characterization of the P and NP complexity classes in terms of ordinary differential equations: we define two complexity classes P_GPAC and NP_GPAC in terms of differential equations with polynomial right-hand side and show that P = P_GPAC and NP = NP_GPAC. This result gives a purely continuous (in both time and space), elegant and simple characterization of P and NP. This is the first time such classes have been characterized using only ordinary differential equations. The result can also be interpreted in terms of analog computers, more precisely using the General Purpose Analog Computer (GPAC) model. The GPAC is probably the most famous continuous time model of computation, and is a physically feasible model in the sense that it can be implemented in practice through the use of analog electronics or mechanical devices. We get that the GPAC is provably equivalent to Turing machines both at the computability and complexity levels, a fact that had never been established before. This may also provide a new point of view on the P vs NP problem, and on the understanding of purely analog models versus other models with analog aspects like quantum models: first, any potential extra power provably does not come from the analog aspects alone; second, polynomial dynamics are nevertheless sufficient to get the full power of Turing machines.

1 Introduction

Since the introduction of the P and NP complexity classes, much work has been done to build a well-developed complexity theory based on Turing machines.
In particular, classical computational complexity theory is based on limiting resources used by Turing machines, like time and space. Another approach is implicit computational complexity. The term implicit in implicit computational complexity can be understood in various ways, but a common point of many of these characterizations is that they provide (Turing or equivalent) machine-independent alternative definitions of classical complexity. Implicit characterization theory has gained enormous interest in the last decade. This has led to many alternative characterizations of complexity classes using recursive functions, function algebras, rewriting systems, neural networks, lambda calculus, and so on. However, most if not all of these models or characterizations are essentially discrete: in particular, they are based on underlying models working in discrete time on objects which are essentially discrete, such as words, terms, etc., that can be considered as being defined over a discrete space.

Models of computation working on a continuous space have also been considered: they include Blum Shub Smale machines [BCSS98] and, in some sense, Computable Analysis [Wei00], or quantum computers [Fey82], which usually feature discrete time and continuous space. Machine-independent characterizations of the corresponding complexity classes have also been devised: see e.g. [BCdNM05, GM95]. However, the resulting characterizations are still essentially discrete, since time is still considered to be discrete. In this paper, we provide a purely analog machine-independent characterization of the classes P and NP. Indeed, our characterization relies only on a simple and natural class of ordinary differential equations: P and NP are characterized using ordinary differential equations with polynomial right-hand side. This shows first that (classical) complexity theory can be presented in terms of problems about ordinary differential equations, and that classical questions, such as the P vs NP question, can be presented as questions about ordinary differential equations. It also shows, as a side effect, that solving ordinary differential equations leads to complete problems, even in fixed dimension. None of these results had been established before. These results, which we believe open a new perspective on classical complexity theory, are interesting in their own right. They actually originated as a side effect of an attempt to understand continuous-time analog models of computation: in particular, the General Purpose Analog Computer (GPAC) of Claude Shannon [Sha41]. The GPAC is a physically feasible model in the sense that it can be implemented in practice through the use of analog electronics or mechanical devices. As a testament to its physical reality, let us recall that the GPAC was precisely devised as a model of a certain type of physical device (the Differential Analyzer) [Bus31], which was in use until the 1960s.
In this context, one way to interpret our results is to state that it is indeed possible to define a complexity theory for the GPAC, and that the resulting complexity theory turns out to be equivalent to classical complexity theory. In a sense, we are only stating that the GPAC is one among all the models which satisfy the effective Church-Turing thesis, which states that reasonable models of computation are polynomially related to Turing machines. However, one must understand that, although it is clear in 2014 that analog models of computation like the GPAC are equivalent to Turing machines from a computability point of view (in particular following [BCGH07], [GCB08]), the question of whether they could solve some problems faster from a complexity point of view is more intriguing and much less clear. In particular, note that not all dynamics can be simulated in polynomial time (see e.g. the constructions in [Moo96], [Bou97], [Bou99], [AD90], [CP02], [Dav01], [Cop98], [Cop02]): can such dynamics be used to compute faster? Recall that the physical Church-Turing thesis states that one cannot achieve super-Turing power, in computability terms, with physically realistic models of computation. However, there has been some hope that one might be able to present realistic models which compute faster than Turing machines in computational complexity terms. Perhaps the most well-known candidate for such a model is the quantum computer, first introduced by Feynman in [Fey82]: see e.g. the famous result on integer factorization in polynomial time of [Sho97]. A fundamental difficulty one faces when trying to talk about time complexity for continuous time models of computation is that time is a problematic notion for many natural classes of systems: indeed, it is known that Turing machines can be simulated by various classes of ordinary differential equations or analog models.
This can often be done even in real time: the state y(T) at time T ∈ N of the solution of the ordinary differential equation encodes the state after the execution of T steps of the Turing machine. However, a troubling problem is that many models exhibit the so-called Zeno's phenomenon, that is to say the possibility of realizing an infinite number of discrete transitions in a finite amount of time. This was done in [EN02] using black holes, as mentioned earlier, to solve the Halting problem. Another example is in [AM98], where it is shown that arithmetical sets can be recognized in finite time by piecewise constant derivative systems: basically, such constructions are based on a time contraction. Actually, many continuous time systems can undergo arbitrary time contractions to simulate the computation of a Turing machine in an arbitrarily short time (see e.g. [Ruo93], [Ruo94], [Moo96], [Bou97], [Bou99], [AD90], [CP02], [Dav01], [Cop98], [Cop02]).

In this paper we give a fundamental contribution to this question: the time of a computation, for the GPAC, can be measured as the length of the curve (i.e. the length of the solution curve of the ordinary differential equation associated to the GPAC). Doing so, we obtain well-defined complexity classes, which turn out to be the same as the traditional complexity classes. In the end, this is so natural that it might seem trivial, merely a statement that continuous time models of computation do satisfy the effective Church-Turing thesis; but, once again, one must understand that various attempts to define a clear complexity theory for continuous time models of computation have previously failed [BC08], and that stating that the effective Church-Turing thesis covers continuous time models of computation such as the GPAC requires, at a minimum, a formal proof. In that spirit, our contribution is to point out that the effective Church-Turing thesis indeed holds for continuous time systems, if "reasonable" is understood as defined by an ordinary differential equation with polynomial right-hand side, and if time is measured by the length of the curve. Previous attempts failed either because they considered classes of dynamical systems which were too general, or because they used a notion of time which was not robust: see [BC08] for discussions of various attempts. Technically, we define a complexity class P_GPAC which roughly corresponds to any physical device whose dynamics is governed by an ordinary differential equation with polynomial right-hand side, and which has a finite number of stable states: given an initial condition, the device will evolve (compute) and quickly reach a stable state. If we interpret some of these states as accepting and the others as rejecting, one can view this device as a language recognizer.
We claim that given reasonable constraints on the convergence (the device quickly reaches a stable state) and on the speed (the device does not reach supersonic speeds, or equivalently the length of the curve remains polynomially bounded), this class corresponds exactly to the class P defined with Turing machines. In order to model non-determinism, we draw inspiration from control theory. A common situation is to consider a physical device which is subjected to perturbations introduced by a controller, and a typical objective is to manipulate the system via the controller in order to obtain some desired effect. More precisely, we define a complexity class NP_GPAC of physical devices subjected to controllers, and ask whether, for a given initial condition, there is a control which makes the device reach an accepting stable state. Allowing arbitrary controllers is not quite realistic though, so we restrict the class to devices which are subjected to digital controllers, that is, controllers which act as a digital device (a clock for example, or an on/off switch). We show that for this particular class of controllers, NP_GPAC = NP. As a result, one can reformulate the P vs NP question in the following (physical) terms: is a self-evolving device as good a recognizer as a device subject to digital control? We also get as a side effect that solving ordinary differential equations with polynomial right-hand side is a P-complete problem; i.e., from a control theory point of view, the reachability problem for (polynomial) dynamical systems is P-complete, and the reachability problem for open dynamical systems is NP-complete. This holds even when the dimension is considered fixed.

2 Ordinary Differential Equations with Polynomial Right-Hand Side

In this paper we will deal with the following class of differential equations:

    y'(t) = p(y(t)),    y(0) = y_0    (1)

defined over R^d for some d, where p is a vector of polynomials.
Such systems are sometimes called PIVPs, for polynomial initial value problems [GBC09]. A function f : R → R is said to be a PIVP function if there exists such a system with f(t) = y_1(t) for all t, where y_1 denotes the first component of the vector y defined in R^d [GBC09].
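As a concrete illustration (not part of the paper), a PIVP can be integrated numerically; the sketch below uses a simple fixed-step Euler scheme (the helper name `euler_pivp` is hypothetical) to solve y' = y, y(0) = 1, whose solution y_1(t) = e^t is a PIVP function.

```python
import math

def euler_pivp(p, y0, t_end, h=1e-4):
    """Fixed-step Euler integration of y' = p(y), y(0) = y0.
    p maps a state vector to its derivative vector."""
    y = list(y0)
    t = 0.0
    while t < t_end:
        dy = p(y)
        y = [yi + h * di for yi, di in zip(y, dy)]
        t += h
    return y

# y' = y, y(0) = 1  =>  y(t) = e^t
y = euler_pivp(lambda y: [y[0]], [1.0], 1.0)
print(abs(y[0] - math.e) < 1e-2)
```

Euler is used here only because it is the shortest possible sketch; any standard ODE solver would do.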

The class of PIVP functions is known to be a very wide class of functions, which includes the usual functions of Analysis (polynomials, trigonometric functions, the exponential, etc.). Furthermore, PIVP functions are closed under a very wide class of operations, including composition, sum, product, division, etc. [GBC09]. The basic idea of this paper is to restrict ourselves to physical systems that can be written as (or possibly as projections of) systems of the form (1). We call such systems GPACs, for reasons that will become clear later. The fact that we restrict ourselves to such systems instead of more general ordinary differential equations is motivated by the fact that, as we will see, this class is broad enough to let us characterize classical complexity, since one can simulate Turing machines with GPACs, but not so broad that it suffers from pathological behavior. By analogy with classical discrete time complexity (Turing machines for example), we want to use GPAC systems as language recognizers, and we claim there is a natural way to do so. Intuitively, such a system must have a number of properties to make it useful:

existence: the solution y to the GPAC system must exist at all times; we do not want our device to blow up.

stability: we choose one component of the system (arbitrarily y_1) as an indicator of whether the input is to be accepted or not. We define two stable states for our device: y_1 >= 1 (the accepting state) and y_1 <= -1 (the rejecting state). Since these are stable states, once the system enters such a state, it must stay confined in it forever.

reachability: for every input, we want our system to eventually choose to accept or not, so we require that it reaches either the accepting or the rejecting state.

Definition 1: A language L is recognized by a GPAC if and only if there exist a vector of polynomials p and a simple encoding function ψ (for example ψ(w) = Σ_{i=1}^{|w|} w_i 2^i if the alphabet is {0, 1}) such that for any word w, the solution y to y' = p(y), y(0) = ψ(w) satisfies:

- y is defined over R [existence]
- if y_1(t) >= 1 (resp. y_1(t) <= -1) then for all u >= t, y_1(u) >= 1 (resp. y_1(u) <= -1) [stability]
- if w ∈ L (resp. w ∉ L) then there exists t > 0 s.t. y_1(t) >= 1 (resp. y_1(t) <= -1) [reachability]

The class of all recognized languages is called R_GPAC. It is known that the solution to a GPAC system is always computable and, conversely, that one can simulate a Turing machine with a GPAC system [GCB08], [CG09]. Together, these results show that our R_GPAC is nothing more and nothing less than the class of recursive languages.

Theorem 2 ([GCB08, CG09]): R_GPAC = Recursive Languages

However, the question of what happens at the complexity level was left open by these results. The rest of this paper is, in a sense, about generalizing this result to the complexity level.

Remark 3: Observe that, even if the GPAC presented below looks like circuits, to avoid misunderstanding, we insist on the fact that we are not considering a circuit approach (whose sense is not even clear): we want the same GPAC/differential equation to work for all inputs. In particular, we want a fixed dimension, working for all inputs: hence the need for an encoding function, just to state how the input is fed to the system, as one needs to state how the input is fed to a Turing machine. What we get is really stronger than what one would get with a circuit approach. Making sense of what we call a circuit approach is admittedly not so clear, but we have realized that readers sometimes expect circuit-like constructions when they see things looking like circuits. GPACs are not circuits in the sense of classical circuit complexity.

3 The Deterministic Polynomial Time

Intuitively, some physical systems decide more quickly than others, but can we make this intuition formal? This is not as easy as it seems. Indeed, analog systems such as the GPAC are subject to time and space contraction: one can rewrite the system so that it decides arbitrarily quickly and is still in the considered class. In other words, time itself is not a good measure of complexity. This is a well-known issue, but it is not so easy to come up with a solution. In [BGP11], we investigated the complexity of solving differential systems with some extra assumptions on the growth of the involved functions. However, these assumptions lack a geometric interpretation. For this reason, we introduce a notion of complexity for these systems which we claim is a natural counterpart of time in analog systems: the length of the curve. Indeed, in discrete models of computation like Turing machines, the notion of time or steps coincides with the length of the path of computation. On the contrary, in analog systems, the notion of time is independent of the length of the path of computation, and using the length is a first easy way to avoid having to cope with time rescaling.

Definition 4 (Length of a curve): Let I = [a, b] be a finite interval and y : I → R^n be a rectifiable curve.
Define the length of the curve over I by

    length(y)(I) = ∫_a^b ||y'(t)||_2 dt,    where ||(x_1, ..., x_n)||_2 = sqrt(x_1^2 + ... + x_n^2).

Definition 5: A language L is recognized in polynomial time by a GPAC if and only if there exist a vector of polynomials p, a simple encoding function ψ (here we will use ψ(w) = (|w|, Σ_{i=1}^{|w|} w_i 2^{-i})), and a polynomial q such that for any word w, the solution y to y' = p(y), y(0) = ψ(w) satisfies:

- y is defined over R
- if y_1(t) >= 1 (resp. y_1(t) <= -1) then for all u >= t, y_1(u) >= 1 (resp. y_1(u) <= -1)
- if w ∈ L (resp. w ∉ L) then y_1(t) >= 1 (resp. y_1(t) <= -1) for some t with length(y)([0, t]) <= q(|w|)
- for all t >= 0, length(y)([0, t]) >= t

The class of all recognized languages is called P_GPAC.

Remark 6: Notice that the definition is similar to the definition of R_GPAC, except that we specify a maximum length after which the GPAC must have recognized the input.
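The length from Definition 4 is easy to approximate numerically; the sketch below (helper name `curve_length` is hypothetical, not from the paper) checks it on the unit circle y(t) = (cos t, sin t), whose length over [0, 2π] is 2π.

```python
import math

def curve_length(dy, a, b, n=100_000):
    """Approximate length(y)([a, b]) = integral over [a, b] of ||y'(t)||_2 dt
    with a midpoint Riemann sum; dy maps t to the derivative vector y'(t)."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        total += math.sqrt(sum(v * v for v in dy(t))) * h
    return total

# y(t) = (cos t, sin t): length over [0, 2*pi] is 2*pi
L = curve_length(lambda t: [-math.sin(t), math.cos(t)], 0.0, 2 * math.pi)
print(abs(L - 2 * math.pi) < 1e-6)
```

Note that on this curve the speed ||y'(t)||_2 is identically 1, so length and elapsed time coincide, which is exactly the normalization imposed by the last condition of Definition 5.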

Figure 1: Circuit presentation of the GPAC: a circuit built from basic units (a constant unit, an adder unit (u, v) ↦ u + v, a multiplier unit (u, v) ↦ uv, and an integrator unit w = ∫ u dv).

Remark 7: The last, technical, condition about the length of the curve is needed to prevent pathological cases where the length of the curve is polynomial but the time needed to reach it is exponential (see the proof for an example). In practice, it is reasonable to assume the system is moving at some minimum speed.

Remark 8: It is very important to notice that the encoding function we use for this definition is not the same as the previous one. Indeed, if we want the length of the curve to be polynomial, the previous encoding (ψ(w) = Σ_{i=1}^{|w|} w_i 2^i) is not very good, because ψ(w) can be of the order of 2^{|w|}, which is exponential. For this reason, we use a more compact encoding using rational numbers, such that ||ψ(w)|| is of the order of |w|. Notice that the previous result clearly still holds with this encoding.

Our first main result is the following:

Theorem 9: P_GPAC = P

4 Relation to the General Purpose Analog Computer of Claude Shannon

In 1941, Claude Shannon introduced a model of the Differential Analyzer [Bus31], on which he had worked as an operator, called the General Purpose Analog Computer (GPAC) [Sha41]; it was later refined in [PE74], [GC03]. Originally it was presented as a model based on circuits (see Figure 1), where several units performing basic operations (e.g. sums, integration) are interconnected (see Figures 2 and 3). However, Shannon himself realized that the functions computed by a GPAC are nothing more than solutions of a special class of polynomial differential equations. In particular, it can be shown that a function is computed by Shannon's model [Sha41], [GC03] if and only if it is a (component of the) solution of a polynomial differential equation

    y'(t) = p(y(t)),    y(0) = y_0    (2)

where p is a vector of polynomials. In other words, a function is computed by Shannon's model iff it is a solution of a GPAC system.
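The two-variable circuit of Figure 2 (see the next page) corresponds to the GPAC system y' = z, z' = -y with y(0) = 0, z(0) = 1, which computes sine and cosine. As a sanity check (not part of the paper), the sketch below integrates that system with a generic classical Runge-Kutta step and compares it against sin and cos.

```python
import math

def rk4_step(f, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(y)."""
    k1 = f(y)
    k2 = f([yi + h / 2 * k for yi, k in zip(y, k1)])
    k3 = f([yi + h / 2 * k for yi, k in zip(y, k2)])
    k4 = f([yi + h * k for yi, k in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

f = lambda s: [s[1], -s[0]]      # y' = z, z' = -y
state, h = [0.0, 1.0], 1e-3      # y(0) = 0, z(0) = 1
for _ in range(1000):            # integrate up to t = 1
    state = rk4_step(f, state, h)
print(abs(state[0] - math.sin(1.0)) < 1e-9,
      abs(state[1] - math.cos(1.0)) < 1e-9)
```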
Hence our results not only define the complexity class P (and NP in the next section) in terms of differential equations, but they also say that the functions computed by a GPAC using a reasonable amount of resources (i.e. where the length of the computation is polynomially bounded) are those which belong to P (a similar result holds for NP if we introduce controllers for Shannon's GPAC as in the following section). We get:

Figure 2: Example of a GPAC circuit: computing sine and cosine with two variables, via y'(t) = z(t), z'(t) = -y(t), y(0) = 0, z(0) = 1, so that y(t) = sin(t) and z(t) = cos(t).

Figure 3: Example of GPAC circuits which can explode: with n integrators, y_1' = y_1 gives y_1(t) = e^t, y_2' = y_2 y_1 gives y_2(t) = e^{e^t}, ..., and y_n' = y_n y_{n-1} gives y_n(t) equal to a tower of n exponentials.

Theorem 10: The GPAC is equivalent to Turing machines both at the computability and complexity levels: the GPAC model does satisfy the effective Church-Turing thesis.

Corollary 11: No model that can be expressed as an ordinary differential equation with polynomial right-hand side can compute faster than a Turing machine, up to polynomial time.

5 The Nondeterministic Polynomial Time

We believe that our main result is the previous characterization of P, and all that it implies; see the discussions above. The purpose of this section is to show that it can be generalized to NP without too many problems. We have seen that it is possible to obtain a natural characterization of P in terms of differential systems which decide quickly. However, not all systems fall into the scope of this class. A typical example would be a car: it is a dynamical system subjected to an external control (the driver). One could argue that the driver itself is a dynamical system, but it might be too complex to model, or we might not want to model it. We claim that this form of control is related to nondeterminism in Turing machines in the following sense. If a controller (the driver, for example) with limited capabilities controls a system such as the one of the previous section, then it reaches the NP power of decision. Furthermore, all NP languages can be recognized this way. The difficulty is to give a natural definition of a controller with limited capabilities. We give two different definitions, which we show to be equivalent from the point of view of control.
Our key notion is that of a digital controller, which intuitively means that the controllers are allowed to give instructions among a finite number of possibilities at a fixed rate, just like a clock. Although we restrict ourselves, for simplicity, to a binary choice at rate 1, it should be clear that this does not change anything about the defined class (below, d denotes distance).

Remark 12: Because the solution to a GPAC is continuous with respect to the initial condition and to the controller, we cannot define a non-trivial class of languages if we allow arbitrary controllers. Indeed, in that case, there would always be controllers such that y_1(t) lies in between -1 and 1 at all times. Therefore, to allow arbitrary controllers, one would probably need one-sided classes, that is, classes where acceptance is defined but nothing is said about rejection.

Definition 13: A function u : R → R is said to be a digital controller if there exist two constants m, M > 0 such that:

- u is continuous and for all t ∈ R, |u(t)| <= M
- for all n ∈ Z and all t ∈ [n, n + 1/2], d(u(t), {0, 1}) <= m <= 1/4

Intuitively, a digital controller must satisfy two key properties. First, it must be continuous and bounded (although this is a technical condition, it is a reasonable assumption about a controller expected to give binary values). Second, half of the time, the controller must be very close to either 0 or 1 (formally, the distance from u(t) to {0, 1} must be small). This definition captures the case of an external controller which we do not want to model, in the sense that we do not require that the controller itself be a dynamical system. It is interesting to study the case where we further require that the controller is also a dynamical system, that is, when the controller is internal to the considered class of systems.

Definition 14: A function u : R → R is said to be a GPAC digital controller if u is a digital controller and there exists a GPAC q such that for all t, u(t) = z_1(t), where z'(t) = q(z(t)).

We can now extend our definition of languages recognized by a GPAC to include controllers. We do not specify the type of controllers in the definition; thus there is one class per type of controller.
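To make Definition 13 concrete, here is a minimal sketch (helper name hypothetical, domain restricted to t >= 0 for simplicity) of a controller built from a bit sequence: on each interval [n, n + 1/2] it sits exactly at a bit in {0, 1} (so the distance condition holds with m arbitrarily small), and on [n + 1/2, n + 1] it ramps continuously to the next bit.

```python
def digital_controller(bits):
    """Controller in the spirit of Definition 13: constant and equal to
    bits[n] on [n, n + 1/2], then a continuous linear ramp to the next
    bit on [n + 1/2, n + 1]. Bounded by M = 1."""
    def u(t):
        n = int(t)
        frac = t - n
        b0 = bits[n % len(bits)]
        b1 = bits[(n + 1) % len(bits)]
        if frac <= 0.5:
            return float(b0)
        return b0 + (b1 - b0) * (2 * frac - 1)  # ramp over (n + 1/2, n + 1)
    return u

u = digital_controller([0, 1, 1, 0])
print(u(0.25), u(1.3), u(0.75))  # 0.0 1.0 0.5
```

A linear ramp is used only for brevity; a smooth (e.g. trigonometric) transition works just as well, and is what one would need to realize the controller as a GPAC digital controller in the sense of Definition 14.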
Definition 15: A language L is recognized in polynomial time with controllers by a GPAC if and only if there exist a vector of polynomials p, a simple encoding function ψ, and a polynomial q, such that for any word w and controller u, the solution y to y' = p(y, u), y(0) = ψ(w) satisfies:

- y is defined over R
- if y_1(t) >= 1 (resp. y_1(t) <= -1) then for all s >= t, y_1(s) >= 1 (resp. y_1(s) <= -1)
- y_1(t) >= 1 or y_1(t) <= -1 for some t with length(y)([0, t]) <= q(|w|)
- if w ∈ L (resp. w ∉ L), there exists at least one controller (resp. there exists no controller) such that y_1(t) >= 1 for some t
- for all t >= 0, length(y)([0, t]) >= t

Definition 16: The class of languages recognized in polynomial time by a GPAC with a digital controller is called NP_GPAC.

The definition is slightly more complicated than that of polynomial time because we added an extra condition. The system is still required to take a decision quickly, but this time the answer need not be correct for all controllers: it only needs to be correct for at least one controller if the word is in the language.
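The existential quantifier over controllers in Definition 15 mirrors the guessing of a nondeterministic machine. The toy sketch below (everything in it is hypothetical, for intuition only) makes that structure explicit: a word is accepted iff some finite sequence of binary controller choices drives a black-box device to acceptance.

```python
from itertools import product

def recognized_with_controller(accepts, n_choices):
    """NP-style acceptance: the input is accepted iff SOME sequence of binary
    controller choices leads to the accepting state. `accepts` is a
    (hypothetical) black box mapping a choice tuple to True/False."""
    return any(accepts(bits) for bits in product((0, 1), repeat=n_choices))

# Toy device: accepts iff the choices start with 1 and contain an even
# number of 1s (an arbitrary stand-in for reaching y_1 >= 1).
demo = lambda bits: bits[0] == 1 and sum(bits) % 2 == 0
print(recognized_with_controller(demo, 3))  # a witness exists, e.g. (1, 1, 0)
```

Of course the brute-force enumeration here takes exponential time; the point of Theorem 18 is precisely that this existential power, realized by digital controllers, captures NP and nothing more.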

One can make an analogy with a car trying to park: the car can successfully be parked if there is at least one controller (a driver) which can control the car and park it. On the contrary, a failure to park the car (a bad driver) does not indicate that the car is impossible to park, unless all drivers fail. We will now prove two main results. First, we show that recognizing in polynomial time with a digital controller is equivalent to recognizing with an internal digital controller. Second, we prove that the class of languages recognized in polynomial time with a digital controller is exactly NP.

Theorem 17: A language L is recognized in polynomial time by a GPAC with a digital controller if and only if it is recognized in polynomial time by a GPAC with a GPAC digital controller.

Theorem 18: NP_GPAC = NP

6 Some Other Consequences

6.1 On solving initial value problems

Theorem 19: The problem of recognizing languages in polynomial time with a GPAC is P-complete.

Proof. Follows from Theorem 9.

Theorem 20: The problem of recognizing languages in polynomial time with a GPAC using a digital controller is NP-complete.

Proof. Follows from Theorem 18.

Recall that GPACs correspond to ordinary differential equations with polynomial right-hand side.

6.2 P = NP expressed as a problem about ordinary differential equations

Theorem 21: P = NP iff any language recognized in polynomial time by a GPAC with a digital controller can be recognized in polynomial time by a GPAC without controllers.

Proof. Follows from the previous two theorems.

Stated in terms of ordinary differential equations:

Corollary 22: P = NP iff any language recognized in polynomial time by an ordinary differential equation with polynomial right-hand side with a digital controller can be recognized in polynomial time by an ordinary differential equation with polynomial right-hand side without controllers.
7 Summary and Conclusion

In this paper we have shown that it is possible to characterize the complexity classes P and NP through the use of polynomial differential equations. This is one of the first examples of continuous implicit characterizations of such classes. Furthermore, we have seen that these classes make sense when viewed from the perspective of a model of analog computation, Shannon's GPAC, also allowing us to define a robust measure of complexity for analog models of computation: the length of the curve. This is, in our opinion, a significant development since, due to Zeno's phenomenon, it has been difficult to present a robust definition of computational complexity for analog models of computation, except for very restricted

systems. The length of the curve seems to be a natural definition of complexity and, due to its nature, completely removes problems such as the above-mentioned Zeno's phenomenon. As a motivation for further work, it would be interesting to use this new measure of computational complexity on other classes of analog models of computation, and to see whether the equivalence with P and NP still holds. In other words, it would be interesting to determine whether, using this complexity measure, the effective Church-Turing thesis still holds for other models of analog computation.

8 An overview of the techniques required in the following proofs

In this section, we give an informal overview of the techniques used to prove the results of this paper. The interested reader can find all technical details in the appendix. These techniques serve both as a motivation for Definitions 5 and 15 and as a proof outline of Theorems 9 and 18. We mostly focus here on the characterization of P. As suggested by the equivalence in Theorem 2, there are two different aspects to this work. First, we need to study the computational complexity of differential equations with polynomial right-hand side. This mostly follows from constructions in a previous paper [BGP12], whose main theorem is, informally, the following:

Theorem 23 ([BGP12]): One can solve (1) in time polynomial in length(y)([t_0, t]), deg(p), log Σp, log ||y_0|| and -log ε, where ε is the requested precision. The exponent of the polynomial is bounded by the dimension of the system.

Since we are dealing with decision problems, we only need to be able to discriminate one component of the system between -1 (reject) and 1 (accept). This requires only constant precision, thus making -log ε = O(1). So finally, for the algorithm to be polynomial time (recall this is our target), we only need the length of the curve to be polynomial in the size of the input.
This places a natural necessary condition on the class of differential equations to consider: length(y)([0, t]) <= poly(|w|). We then observe that our hypotheses yield this condition. Second, we need to study the computational power of differential equations with polynomial right-hand side, in order to show that this class is not trivial and actually contains all Turing computable functions. In practice, we are going to simulate a Turing machine, but there are a few useful remarks to make before going into the details. Without loss of generality, we can simulate a universal Turing machine, which means the system we build will be fixed except for the input. Concretely, this translates into taking deg p, Σp, d = O(1) in Theorem 23, making it polynomial time in the size of the input (log ||y_0||), the number of bits of precision (-log ε) and the length of the curve (length(y)([t_0, t])). (Recall that length(y) is the length of the curve (see Definition 4), Σp is the sum of the absolute values of all the coefficients of p, and deg(p) is the degree of p.) This restriction prevents us from using the existing simulation in [GCB08], and we will explain why. To simplify the discussion, one must realize that the bound on the length implies that the supremum of the norm (sup_t ||y(t)||) is bounded by a polynomial in the size of the input. In other words, we need to ensure that at all times, our simulation of a Turing machine using a differential equation satisfies the following condition:

    ||y(t)|| <= poly(|w|)

If one thinks about it, this is actually a very strong condition. Indeed, the simulated machine runs in polynomial time, so it uses polynomial space in the size of the input. Obviously, some of the components of the system are going to encode the tape, one way or another. This bound more or less means that they must be bounded by the size of the tape. As a consequence, any integer encoding

like w ↦ Σ_{i=1}^{|w|} w_i 2^i (for a binary tape), as used in [GCB08], cannot work, because it is exponential in the size of the tape. The next natural choice is thus a rational encoding, w ↦ Σ_{i=1}^{|w|} w_i 2^{-i}; this is the one we use in the proofs. To understand the difficulty, one must understand the key property of integer encodings which we no longer have: error correction. Indeed, integers form a discrete set, so it is possible to build a differential equation which, given a value, finds the closest integer very quickly and with very good precision. This makes it possible to build a simulation which makes errors at each step of the machine but corrects them immediately after. A rational encoding does not have this property, which prevents any kind of elementary error correction. This is where another key idea is needed. Let f be the function simulating one step of the machine (with a proper encoding). Now assume that we are able to build a function f*, satisfying a differential equation like (1), such that f* simulates one step of the Turing machine, except that it makes some intrinsic error and also amplifies any input error:

    f*(c + ε) = f(c) + η + αε,    α > 1

After n steps, the error grows to the order of α^n η, thus quickly interfering with the encoding of the tape and breaking the simulation. We could arrange for η to be small enough to absorb all errors for the necessary duration of the simulation, but this would mean we know that duration in advance, which might not be the case; and even if we knew it, that would require adding it to the input of the system, making the definition very awkward. To solve this issue, we introduce the notion of Zig-Zag Turing Machine. These are Turing machines with the extra property that they zig-zag, i.e. they reach the left and right borders of the tape very often. Furthermore, the transition function has an extra output telling whether the machine is at a border or not. This property is the key tool we need to cancel errors.
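The effect of the error recursion ε_{n+1} = η + α ε_n, and of the periodic reset provided by the zig-zag trick, can be illustrated with a toy numerical model (all names and constants below are hypothetical, chosen only to make the growth visible):

```python
def amplified_error(eps0, eta, alpha, steps, reset_every=None, reset_err=1e-12):
    """Toy model of the error recursion eps_{n+1} = eta + alpha * eps_n.
    If reset_every is set (the zig-zag trick), the accumulated error is
    periodically replaced by a tiny 'reset error' instead of compounding."""
    eps = eps0
    for n in range(1, steps + 1):
        eps = eta + alpha * eps
        if reset_every and n % reset_every == 0:
            eps = reset_err  # head hit a tape border: rewrite a clean 0
    return eps

free = amplified_error(1e-9, 1e-9, 2.0, 60)                  # grows like 2^60
zigzag = amplified_error(1e-9, 1e-9, 2.0, 60, reset_every=4)  # stays tiny
print(free > 1.0, zigzag < 1e-6)
```

Without resets the error quickly dwarfs the rational encoding (whose significant digits live below 1); with frequent resets it stays bounded for arbitrarily long simulations, which is exactly what the zig-zag property buys.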
Indeed, assume that the head reaches the left border. Then, with a proper encoding, the left tape will be encoded by 0. Usually, this would be encoded by 0 plus some error ε, and the simulation would amplify this error. But since we can detect this case with the transition function, we can simply bypass the simulation of the step on the left part of the tape and just write 0. Since we cannot actually write 0 but only some approximation of it, we will write 0 plus some reset error. The whole point is of course that we can choose the reset error to be small enough to cancel errors for a small time... until we reach the border again! Since the machine is zig-zagging all the time, this actually allows us to cancel errors indefinitely, thus making the simulation completely robust, although quite challenging to analyse. The rest of the constructions are essentially substantial improvements of constructions and ideas from [GCB08], adapted to our context and dealing with iterations and errors in a controlled way.

9 Acknowledgments

D.S. Graça was partially supported by Fundação para a Ciência e a Tecnologia and EU FEDER POCTI/POCI via SQIG - Instituto de Telecomunicações through the FCT project PEst-OE/EEI/LA0008/2013. Olivier Bournez and Amaury Pouly were partially supported by DGA project CALCULS, and indirectly supported by ANR project DISPLEXITY, and by DIM LSC DISCOVER project.

References

[AD90] Rajeev Alur and David L. Dill. Automata for modeling real-time systems. In Mike Paterson, editor, Automata, Languages and Programming, 17th International Colloquium, ICALP90, Warwick University, England, July 16-20, 1990, Proceedings, volume 443 of Lecture Notes in Computer Science. Springer, 1990.

[AM98] Eugene Asarin and Oded Maler. Achilles and the tortoise climbing up the arithmetical hierarchy. Journal of Computer and System Sciences, 57(3), December 1998.

[BC08] Olivier Bournez and Manuel L. Campagnolo. New Computational Paradigms. Changing Conceptions of What is Computable, chapter A Survey on Continuous Time Computations. Springer-Verlag, New York, 2008.

[BCdNM05] Olivier Bournez, Felipe Cucker, Paulin Jacobé de Naurois, and Jean-Yves Marion. Implicit complexity over an arbitrary structure: Sequential and parallel polynomial time. Journal of Logic and Computation, 15(1):41-58, 2005.

[BCGH07] O. Bournez, M. L. Campagnolo, D. S. Graça, and E. Hainry. Polynomial differential equations compute all real computable functions on computable compact intervals. J. Complexity, 23(3), 2007.

[BCSS98] L. Blum, F. Cucker, M. Shub, and S. Smale. Complexity and Real Computation. Springer, 1998.

[BGP11] Olivier Bournez, Daniel S. Graça, and Amaury Pouly. Solving analytic differential equations in polynomial time over unbounded domains. In MFCS, 2011.

[BGP12] Olivier Bournez, Daniel S. Graça, and Amaury Pouly. On the complexity of solving initial value problems. In 37th International Symposium on Symbolic and Algebraic Computation (ISSAC), 2012.

[Bou97] Olivier Bournez. Some bounds on the computational power of piecewise constant derivative systems (extended abstract). In ICALP, 1997.

[Bou99] Olivier Bournez. Achilles and the Tortoise climbing up the hyper-arithmetical hierarchy. Theoret. Comput. Sci., 210(1):21-71, 1999.

[Bus31] V. Bush. The differential analyzer. A new machine for solving differential equations. J. Franklin Inst., 212, 1931.

[CG09] P. Collins and D. S. Graça. Effective computability of solutions of differential inclusions: the ten thousand monkeys approach. Journal of Universal Computer Science, 15(6), 2009.

[Cop98] B. Jack Copeland. Even Turing machines can compute uncomputable functions. In C.S. Calude, J. Casti, and M.J. Dinneen, editors, Unconventional Models of Computation. Springer-Verlag, 1998.

[Cop02] B. Jack Copeland. Accelerating Turing machines. Minds and Machines, 12, 2002.

[CP02] C. S. Calude and B. Pavlov. Coins, quantum measurements, and Turing's barrier. Quantum Information Processing, 1(1-2), April 2002.

[CPSW05] D. C. Carothers, G. E. Parker, J. S. Sochacki, and P. G. Warne. Some properties of solutions to polynomial systems of differential equations. Electron. J. Diff. Eqns., 2005(40), April 2005.

[Dav01] E. B. Davies. Building infinite machines. The British Journal for the Philosophy of Science, 52, 2001.

[EN02] Gábor Etesi and István Németi. Non-Turing computations via Malament-Hogarth spacetimes. International Journal of Theoretical Physics, 41, 2002.

[Fey82] R. P. Feynman. Simulating physics with computers. Internat. J. Theoret. Phys., 21(6/7), 1982.

[GBC09] D. S. Graça, J. Buescu, and M. L. Campagnolo. Computational bounds on polynomial differential equations. Appl. Math. Comput., 215(4), 2009.

[GC03] D. S. Graça and J. F. Costa. Analog computers and recursive functions over the reals. J. Complexity, 19(5), 2003.

[GCB08] D. S. Graça, M. L. Campagnolo, and J. Buescu. Computability with polynomial differential equations. Adv. Appl. Math., 40(3), 2008.

[GM95] Erich Grädel and Klaus Meer. Descriptive complexity theory over the real numbers. In Proceedings of the Twenty-Seventh Annual ACM Symposium on the Theory of Computing, Las Vegas, Nevada, 29 May - 1 June 1995. ACM Press.

[HW91] J. H. Hubbard and B. H. West. Differential Equations: A Dynamical Systems Approach. Ordinary Differential Equations. Springer, 1991.

[Moo96] Cristopher Moore. Recursion theory on the reals and continuous-time computation. Theoretical Computer Science, 162(1):23-44, 5 August 1996.

[PE74] M. B. Pour-El. Abstract computability and its relations to the general purpose analog computer. Trans. Amer. Math. Soc., 199:1-28, 1974.

[Ruo93] Keijo Ruohonen. Undecidability of event detection for ODEs. Journal of Information Processing and Cybernetics, 29, 1993.

[Ruo94] Keijo Ruohonen. Event detection for ODEs and nonrecursive hierarchies. In Proceedings of the Colloquium in Honor of Arto Salomaa. Results and Trends in Theoretical Computer Science (Graz, Austria, June 10-11, 1994), volume 812 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, 1994.

[Sha41] C. E. Shannon. Mathematical theory of the differential analyzer. J. Math. Phys. MIT, 20, 1941.

[Sho97] P. W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput., 26, 1997.

[Wei00] K. Weihrauch. Computable Analysis: an Introduction. Springer, 2000.

A Proof of our statements

A.1 Basic notation

Throughout the paper we will use the following notation:

    f^[0] = id,  f^[n] = f ∘ f^[n−1]  for n ≥ 1      (iterated composition)
    ‖(x_1, ..., x_n)‖ = max_{1≤i≤n} |x_i|
    int(x) = ⌊x⌋                 frac(x) = x − ⌊x⌋
    int_n(x) = min(n, int(x))    frac_n(x) = x − int_n(x)
    sgn(x) = −1 if x < 0,  0 if x = 0,  1 if x > 0
    d(x, U) = inf_{u∈U} ‖x − u‖
    R* = R \ {0}

A.2 Basic definitions related to the GPAC

It is well known [GC03] that a function is generable by a GPAC iff it is a component of the solution to the initial-value problem

    ẏ = p(y),   y(t_0) = y_0        (3)

where p : R^d → R^d is a vector of polynomials. This motivates the definition of the following classes of functions.

Definition 24: Let I ⊆ R be an open interval and f, g : I → R. We say that f ∈ GSPACE(I, g) if there exist d ∈ N, a vector of polynomials p, t_0 ∈ I and y_0 ∈ R^d such that for all t ∈ I, f(t) = y_1(t) and ‖y(t)‖ ≤ g(t), where y : I → R^d is the unique solution over I of

    ẏ = p(y),   y(t_0) = y_0

Let f : I → R^d. We say that f ∈ GSPACE_d(I, g) if f_i ∈ GSPACE(I, g) for all 1 ≤ i ≤ d.

Remark 25: The notion of space here has nothing to do with space in a Turing machine. It should be seen as space in the sense that if one draws the function on paper, it will take more space if the values are higher (without rescaling). Alternatively, it can be thought of as a bound on the value.

Remark 26: Contrary to classical arithmetic circuits, in this context it does not make sense to measure the complexity of a GPAC by its number of gates or variables. Indeed the circuit can contain loops (feedback connections are allowed) and our notion of complexity is uniform: we build one circuit and not one circuit per input size. Finally, it should be noted that there exist (efficient, such as the one built in this paper) universal GPACs, which shows that these notions of complexity would not make sense.

The previous definition only considered functions from R to R^d because it is natural. We now extend it to functions over R^d by describing their behavior under composition.

Definition 27: Let I ⊆ R^d be an open set and f, s_f : I → R. We say that f ∈ GSPACE(I, s_f) if for all J ⊆ R and s_g : J → R such that g ∈ GSPACE_d(J, s_g) and g(J) ⊆ I, we have f ∘ g ∈ GSPACE(J, max(s_g, s_f ∘ s_g)).
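As a concrete instance of Definition 24: sin is the first component of the polynomial (in fact linear) system y_1' = y_2, y_2' = −y_1 with y(0) = (0, 1), whose solution is bounded by 1, which is the shape of bound that GSPACE records. A quick numerical check (the RK4 integrator and step size below are chosen for the illustration, not part of the paper):

```python
import math

# Polynomial right-hand side p(y) of the system y1' = y2, y2' = -y1,
# whose solution from y(0) = (0, 1) is (sin t, cos t).
def p(y):
    return [y[1], -y[0]]

def rk4(f, y, h, n):
    # Classical 4th-order Runge-Kutta: n steps of size h from t = 0.
    for _ in range(n):
        k1 = f(y)
        k2 = f([y[i] + h / 2 * k1[i] for i in range(len(y))])
        k3 = f([y[i] + h / 2 * k2[i] for i in range(len(y))])
        k4 = f([y[i] + h * k3[i] for i in range(len(y))])
        y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(len(y))]
    return y

y = rk4(p, [0.0, 1.0], 0.001, 2000)   # integrate up to t = 2
print(y[0], math.sin(2.0))            # y1 tracks sin t, and |y(t)| <= 1
```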

Remark 28: In the special case I ⊆ R, Definition 27 matches Definition 24. These classes enjoy many stability properties. For example, if f ∈ GSPACE_d(I, s_f) and g ∈ GSPACE_d(J, s_g) for some s_f and s_g, then f + g ∈ GSPACE_d(I ∩ J, s_f + s_g). More importantly, if f ∈ GSPACE_d(I, s_f), g ∈ GSPACE_d(J, s_g) and f(I) ⊆ J, then g ∘ f ∈ GSPACE_d(I, max(s_f, s_g ∘ s_f)). Many elementary functions also belong to these classes; for example sin, tanh ∈ GSPACE_1(R, x ↦ 1). The proof techniques are similar to [CPSW05].

A.3 Some Basic Considerations on Turing Machines

A.3.1 Assumptions

Let M = (Q, Σ, b, δ, q_0, F) be a deterministic Turing machine. Without loss of generality we assume that Q = {0, ..., m−1} is the set of states of the machine, F ⊆ Q is the set of accepting states, Σ = {0, ..., 2} is the alphabet with b = 0 the blank symbol, q_0 ∈ Q is the initial state, and δ : Q × Σ → Q × Σ × {L, R} is the transition function, with L = 0 and R = 1. We write δ_1, δ_2, δ_3 for the components of δ, that is δ(q, σ) = (δ_1(q, σ), δ_2(q, σ), δ_3(q, σ)), where δ_1 is the new state, δ_2 the new symbol and δ_3 the head move direction. We assume M to be fixed from now on.

Remark 29: The choice of Σ = {0, ..., 2} will be crucial in the following sections. See Lemma 31.

Consider a configuration c = (x, σ, y, q) of the machine, where x is the part of the tape at the left of the head, y the part at the right, σ the symbol under the head and q the current state. Both x and y are words over Σ. For example, x_0 is the symbol immediately at the left of the head and y_0 the symbol immediately at the right.

Definition 30: Let c = (x, σ, y, q) be a configuration. The rational encoding of c is [c] = (0.x, σ, 0.y, q), where 0.x = Σ_{i=0}^{n} x_i 4^{−i−1} (the digits of x read in base 4; since Σ = {0, 1, 2}, the digit 3 is never used).

Lemma 31: Let c be a reachable configuration and [c] = (0.x, σ, 0.y, q). Then 0.x ∈ [0, 2/3], and similarly for 0.y.

Proof. 0 ≤ 0.x = Σ_{i=0}^{n} x_i 4^{−i−1} ≤ Σ_{i=0}^{n} 2 · 4^{−i−1} ≤ 2/3.
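The rational encoding and the bound of Lemma 31 are easy to check numerically, assuming a base-4 encoding of the alphabet {0, 1, 2} (illustrative; Definition 30 fixes the actual base). A bonus of this encoding is that reading, popping and pushing the symbol nearest to the head are plain arithmetic:

```python
# Rational encoding of a half-tape over Sigma = {0, 1, 2} in base 4:
# 0.x = sum_i x_i * 4**-(i+1). The digit 3 is never used, which keeps
# encodings of distinct tapes well separated.
def encode(word):
    return sum(s * 4.0 ** -(i + 1) for i, s in enumerate(word))

def pop(v):
    sym = int(4 * v)          # symbol nearest to the head
    return sym, 4 * v - sym   # remaining half-tape
def push(sym, v):
    return (sym + v) / 4      # prepend a symbol to the half-tape

w = [2, 0, 1, 2]
v = encode(w)
assert 0 <= v <= 2 / 3        # bound of Lemma 31
s, rest = pop(v)
assert s == w[0] and abs(rest - encode(w[1:])) < 1e-12
assert abs(push(s, rest) - v) < 1e-12
```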
A.3.2 Zig-Zag Turing Machines

In this section we define a particular kind of Turing machine, called Zig-Zag Turing Machines, which, informally, are able to track the borders and the size of the tape, and zig-zag between the left and right borders very often. We recall that a reachable configuration is a configuration reachable from an initial configuration (that is, one of the form (ε, σ, x, q_0)).

Definition 32: Let w ∈ Σ*, and define tapesize(w) = min{ j | ∀i ≥ j, w_i = b }, where b is the blank character. Let c = (x, σ, y, q) be a configuration, and define tapesize(c) = 1 + tapesize(y) + tapesize(x).

Definition 33: A Border Aware Turing Machine (BATM) is a tuple (Q, Σ, b, δ, q_0, F, δ_b, δ_s) such that (Q, Σ, b, δ, q_0, F) is a Turing machine in the sense of Section A.3.1, δ_b : Q × Σ → {LB, NB, RB} is the border function, with LB = −1, NB = 0, RB = 1, and δ_s : Q × Σ → {−1, 0, 1} is the tape size function. Furthermore, for any reachable configuration c = (x, σ, y, q):

- if δ_b(q, σ) = LB then the head is at the left border of the tape, i.e. x only contains blank characters;
- if δ_b(q, σ) = RB then the head is at the right border of the tape, i.e. y only contains blank characters;
- if c' is the next configuration of the machine (M is deterministic) then tapesize(c') = tapesize(c) + δ_s(q, σ).

For the sake of simplicity, we write δ_b(c) = δ_b(q, σ) and δ_s(c) = δ_s(q, σ) when c = (·, σ, ·, q), but recall that these only depend on the state and the symbol under the head.

Remark 34: Note that δ_b does not need to always indicate when the head reaches a border, but when it does indicate it, the indication has to be correct. Also note that, by definition of δ_s, the machine cannot change its tape size by more than one at each step.

Definition 35: Let c be a reachable configuration. Consider the (infinite) sequence c_0, c_1, ... of configurations starting from c. For a ∈ {LB, NB, RB}, the time to border of c is time2border_a(c) = min{ i ≥ 1 | δ_b(c_i) = a }.

Definition 36: A Zig-Zag Turing Machine (ZZTM) is a BATM such that for any reachable configuration c of M, time2border_LB(c), time2border_RB(c) ≤ p(tapesize(c)), where p is a polynomial which only depends on the machine.

Theorem 37: Any Turing machine can be turned into a ZZTM which computes the same function, at the cost of a polynomial slow-down.

Proof. It is sufficient to simulate the original Turing machine and do a zig-zag after each step. To do that, we introduce two symbols to track the borders, and after each step we update the borders to reflect the actual size of the tape.

A.4 Some Helper functions

A.4.1 GPAC specific

In order to be able to simulate a Turing machine with differential equations, we will need to build robust functions which can handle small errors on the input. Our simulation notably relies on the computation of the integer and fractional parts, but these functions are not analytic and are thus unusable for our simulation. This is why we need to build analytic approximations of them.
Depending on the context, we will either need arbitrary precision over a bounded domain, or fixed precision over an unbounded domain. Finally, we will need a special function, which acts as a clock, to simulate iterations with differential equations.

Definition 38: For any x, y, λ ∈ R, define ξ(x, y, λ) = tanh(xyλ).

Intuitively, ξ is a good approximation of the sign function. The y parameter controls how good the approximation is, and λ controls the size of the interval around 0 where the approximation is not guaranteed.

Lemma 39: For any x ∈ R, λ > 0 and y ≥ 1, |sgn(x) − ξ(x, y, λ)| < 1, and if |x| ≥ λ^{−1} then |sgn(x) − ξ(x, y, λ)| < e^{−y}. Finally, ξ ∈ GSPACE(R³, x ↦ 1).

Proof. Use that 0 ≤ 1 − tanh(t) ≤ e^{−t} for t ≥ 1.
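A quick numerical check of the Lemma 39 bound (the parameter values below are chosen arbitrarily for the illustration):

```python
import math

# xi(x, y, lam) = tanh(x * y * lam): a smooth approximation of sgn.
def xi(x, y, lam):
    return math.tanh(x * y * lam)

def sgn(x):
    return (x > 0) - (x < 0)

y, lam = 5.0, 10.0
# Outside the window (-1/lam, 1/lam), the error is below e^-y ...
for x in [0.1, 0.5, -0.3, 2.0]:
    assert abs(x) >= 1 / lam
    assert abs(sgn(x) - xi(x, y, lam)) < math.exp(-y)
# ... while inside the window it is only guaranteed to be below 1.
assert abs(sgn(0.01) - xi(0.01, y, lam)) < 1
```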

Figure 4: Graphical representation of ξ and σ_1 (plots of ξ(x, 1, 4), ξ(x, 20, 4) and σ_1(x, 20, 100)).

Figure 5: Graphical representation of σ*.

Figure 6: Graphical representation of σ_p (plots of σ_3(x, 10, 10) and σ_4(x, 2, 4)).

Figure 7: Graphical representation of s_δ (plots of s_{1/4}(t, 1) and s_{1/100}(t, 100)).

Definition 40: For any x, y, λ ∈ R, define σ_1(x, y, λ) = (1 + ξ(x − 1, y, λ))/2. For any p ∈ N and x, y, λ ∈ R, define σ_p(x, y, λ) = Σ_{i=0}^{p−1} σ_1(x − i, y + ln p, λ).

Intuitively, σ_p is an approximation of int_p, that is, the integer part (restricted to [0, p]). Again, y controls the quality of the approximation and λ the size of the interval around each integer where the approximation is uncontrolled.

Lemma 41: For any p ∈ N, x ∈ R, y > 0 and λ > 2, |int_p(x) − σ_p(x, y, λ)| ≤ 1/2 + e^{−y}, and if x < 1 − λ^{−1} or x > p + λ^{−1} or d(x, N) > λ^{−1}, then |int_p(x) − σ_p(x, y, λ)| < e^{−y}. Finally, σ_p ∈ GSPACE(R³, p).

Proof. Use Lemma 39: some of the σ_1 are close to 0 and the others close to 1, and p e^{−y−ln p} = e^{−y}.

Definition 42: For any x ∈ R, define σ*(x) = x − sin(2πx)/(2π).
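The definitions of σ_1 and σ_p translate directly into code; here is a numerical check of the Lemma 41 error bound away from the integers (parameter values picked for the illustration):

```python
import math

# sigma_1: smooth step from 0 to 1 around x = 1, built from xi = tanh.
def sigma_1(x, y, lam):
    return (1 + math.tanh((x - 1) * y * lam)) / 2

# sigma_p: sum of p shifted steps, approximating int_p = min(p, floor(x)).
def sigma_p(p, x, y, lam):
    return sum(sigma_1(x - i, y + math.log(p), lam) for i in range(p))

p, y, lam = 5, 10.0, 4.0
x = 2.5                                  # distance 1/2 > 1/lam from every integer
assert abs(sigma_p(p, x, y, lam) - 2) < math.exp(-y)   # int_5(2.5) = 2
```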

Contrary to the previous function, σ* is an approximation of the integer part over all of R, but this comes at a cost: the approximation quality is fixed once and for all. By iterating σ* we can get an arbitrary (but fixed) quality, at the cost of using more space.

Lemma 43: For any n ∈ Z and |ε| ≤ 1/4, |σ*(n + ε) − n| ≤ (7/8)|ε|, and σ* ∈ GSPACE(R, x ↦ max(7x, 1)).

Proof. Use the Lagrange bound on the remainder of the Taylor series.

Corollary 44: For any η > 0, there exist k_η ∈ N and A_η ∈ R such that f = σ*^[k_η] satisfies: for all n ∈ Z and |ε| ≤ 1/4, |f(n + ε) − n| ≤ η, and f ∈ GSPACE(R, x ↦ A_η max(x, 1)).

Proof. Use Lemma 43 and the fact that σ* is a contraction mapping close to integers.

Definition 45: For any t ∈ R, 0 < δ < 1/4 and λ > 0, define

    s_δ(t, λ) = (1 + ξ(sin(2πt) − 1/k, k, λ + log 2))/2    where k = 2/sin(2πδ)

and ξ is given by Definition 38.

Intuitively, s_δ is an approximation of a digital clock function which alternates between 0 and 1, except that λ controls how close to 0 and 1 the approximation is. The δ parameter controls the size of the switching interval and must be fixed once and for all.

Lemma 46: For any 0 < δ < 1/4 and λ > 0, s_δ(·, λ) is 1-periodic and positive. Furthermore, for all t ∈ [δ, 1/2 − δ], |1 − s_δ(t, λ)| ≤ e^{−λ}, and for all t ∈ [1/2, 1], |s_δ(t, λ)| ≤ e^{−λ}.

Proof. Use Lemma 39 and some simple inequalities on sin.

A.4.2 Polynomial interpolation

In order to implement the transition function of the Turing machine, we will use a polynomial interpolation scheme (Lagrange interpolation). But since our simulation may have to deal with some amount of error in the inputs, we have to investigate how this error propagates through the interpolating polynomial.

Definition 47 (Lagrange polynomial): Let d ∈ N and f : G → R, where G is a finite subset of R^d. We define

    L_f(x) = Σ_{x̄∈G} f(x̄) Π_{i=1}^{d} Π_{ȳ∈G, ȳ_i ≠ x̄_i} (x_i − ȳ_i)/(x̄_i − ȳ_i)

Recall that, by definition, L_f(x) = f(x) for all x ∈ G, so the interesting part is to know what happens for values of x not in G but close to G, that is, to relate |L_f(x) − L_f(x̄)| to ‖x − x̄‖.
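The contraction behaviour of σ* (Lemma 43 and, by iteration, Corollary 44) is easy to observe numerically:

```python
import math

# sigma_star(x) = x - sin(2*pi*x)/(2*pi): an analytic map that pulls
# points within 1/4 of an integer toward that integer.
def sigma_star(x):
    return x - math.sin(2 * math.pi * x) / (2 * math.pi)

n, eps = 3, 0.2
x = n + eps
# One application already satisfies the Lemma 43 bound ...
assert abs(sigma_star(x) - n) <= 7 / 8 * abs(eps)
# ... and iterating drives the error to (numerically) zero, as in
# Corollary 44 with f an iterate of sigma_star.
for _ in range(10):
    x = sigma_star(x)
assert abs(x - n) < 1e-12
```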
Lemma 48: Let d ∈ N, K > 0 and f : G → R, where G is a finite subset of R^d. Then for all x, z ∈ [−K, K]^d, |L_f(x) − L_f(z)| ≤ A ‖x − z‖, and L_f ∈ GSPACE([−K, K]^d, B), where A and B depend only on d, K, f and G.
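A small check of Definition 47 on a two-dimensional grid, taking for f a made-up table playing the role of a transition function (the grid and values below are arbitrary; duplicate coordinate values are merged per axis, which gives the same result at grid points):

```python
from itertools import product

# Multivariate Lagrange interpolation of f : G -> R on a finite grid G:
# L_f(x) = sum over xbar in G of f(xbar) * prod over coordinates i and
# grid values a != xbar_i of (x_i - a)/(xbar_i - a).
def lagrange(G, f, x):
    d = len(x)
    # distinct values taken by each coordinate over the grid
    axes = [sorted({pt[i] for pt in G}) for i in range(d)]
    total = 0.0
    for xbar in G:
        w = f[xbar]
        for i in range(d):
            for a in axes[i]:
                if a != xbar[i]:
                    w *= (x[i] - a) / (xbar[i] - a)
        total += w
    return total

# A toy "transition table" on {0,1,2}^2 (values are arbitrary).
G = list(product(range(3), repeat=2))
f = {pt: (2 * pt[0] + pt[1]) % 3 for pt in G}

# Exact on the grid points, as Definition 47 requires ...
for pt in G:
    assert abs(lagrange(G, f, pt) - f[pt]) < 1e-9
# ... and a small input error stays small, as in Lemma 48.
assert abs(lagrange(G, f, (1.001, 2.0)) - f[(1, 2)]) < 0.1
```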


More information

Theory of computation: initial remarks (Chapter 11)

Theory of computation: initial remarks (Chapter 11) Theory of computation: initial remarks (Chapter 11) For many purposes, computation is elegantly modeled with simple mathematical objects: Turing machines, finite automata, pushdown automata, and such.

More information

Computational Models Lecture 9, Spring 2009

Computational Models Lecture 9, Spring 2009 Slides modified by Benny Chor, based on original slides by Maurice Herlihy, Brown University. p. 1 Computational Models Lecture 9, Spring 2009 Reducibility among languages Mapping reductions More undecidable

More information

Further discussion of Turing machines

Further discussion of Turing machines Further discussion of Turing machines In this lecture we will discuss various aspects of decidable and Turing-recognizable languages that were not mentioned in previous lectures. In particular, we will

More information

arxiv: v1 [cs.lo] 16 Jul 2017

arxiv: v1 [cs.lo] 16 Jul 2017 SOME IMPROVEMENTS IN FUZZY TURING MACHINES HADI FARAHANI arxiv:1707.05311v1 [cs.lo] 16 Jul 2017 Department of Computer Science, Shahid Beheshti University, G.C, Tehran, Iran h farahani@sbu.ac.ir Abstract.

More information

CSE 2001: Introduction to Theory of Computation Fall Suprakash Datta

CSE 2001: Introduction to Theory of Computation Fall Suprakash Datta CSE 2001: Introduction to Theory of Computation Fall 2013 Suprakash Datta datta@cse.yorku.ca Office: CSEB 3043 Phone: 416-736-2100 ext 77875 Course page: http://www.cse.yorku.ca/course/2001 11/7/2013 CSE

More information

Computational Tasks and Models

Computational Tasks and Models 1 Computational Tasks and Models Overview: We assume that the reader is familiar with computing devices but may associate the notion of computation with specific incarnations of it. Our first goal is to

More information

The symmetric subset-sum problem over the complex numbers

The symmetric subset-sum problem over the complex numbers The symmetric subset-sum problem over the complex numbers Mihai Prunescu Universität Freiburg, Abteilung Mathematische Logik, Eckerstr. 1, D-78103 Freiburg im Breisgau, Deutschland. Abstract A problem

More information

CSE 2001: Introduction to Theory of Computation Fall Suprakash Datta

CSE 2001: Introduction to Theory of Computation Fall Suprakash Datta CSE 2001: Introduction to Theory of Computation Fall 2012 Suprakash Datta datta@cse.yorku.ca Office: CSEB 3043 Phone: 416-736-2100 ext 77875 Course page: http://www.cs.yorku.ca/course/2001 11/13/2012 CSE

More information

Lecture 4 : Quest for Structure in Counting Problems

Lecture 4 : Quest for Structure in Counting Problems CS6840: Advanced Complexity Theory Jan 10, 2012 Lecture 4 : Quest for Structure in Counting Problems Lecturer: Jayalal Sarma M.N. Scribe: Dinesh K. Theme: Between P and PSPACE. Lecture Plan:Counting problems

More information

Chapter One. The Real Number System

Chapter One. The Real Number System Chapter One. The Real Number System We shall give a quick introduction to the real number system. It is imperative that we know how the set of real numbers behaves in the way that its completeness and

More information

15-251: Great Theoretical Ideas in Computer Science Lecture 7. Turing s Legacy Continues

15-251: Great Theoretical Ideas in Computer Science Lecture 7. Turing s Legacy Continues 15-251: Great Theoretical Ideas in Computer Science Lecture 7 Turing s Legacy Continues Solvable with Python = Solvable with C = Solvable with Java = Solvable with SML = Decidable Languages (decidable

More information

CPSC 467b: Cryptography and Computer Security

CPSC 467b: Cryptography and Computer Security CPSC 467b: Cryptography and Computer Security Michael J. Fischer Lecture 10 February 19, 2013 CPSC 467b, Lecture 10 1/45 Primality Tests Strong primality tests Weak tests of compositeness Reformulation

More information

Lecture 22: Quantum computational complexity

Lecture 22: Quantum computational complexity CPSC 519/619: Quantum Computation John Watrous, University of Calgary Lecture 22: Quantum computational complexity April 11, 2006 This will be the last lecture of the course I hope you have enjoyed the

More information

Great Theoretical Ideas in Computer Science. Lecture 7: Introduction to Computational Complexity

Great Theoretical Ideas in Computer Science. Lecture 7: Introduction to Computational Complexity 15-251 Great Theoretical Ideas in Computer Science Lecture 7: Introduction to Computational Complexity September 20th, 2016 What have we done so far? What will we do next? What have we done so far? > Introduction

More information

6-1 Computational Complexity

6-1 Computational Complexity 6-1 Computational Complexity 6. Computational Complexity Computational models Turing Machines Time complexity Non-determinism, witnesses, and short proofs. Complexity classes: P, NP, conp Polynomial-time

More information

Decision Problems with TM s. Lecture 31: Halting Problem. Universe of discourse. Semi-decidable. Look at following sets: CSCI 81 Spring, 2012

Decision Problems with TM s. Lecture 31: Halting Problem. Universe of discourse. Semi-decidable. Look at following sets: CSCI 81 Spring, 2012 Decision Problems with TM s Look at following sets: Lecture 31: Halting Problem CSCI 81 Spring, 2012 Kim Bruce A TM = { M,w M is a TM and w L(M)} H TM = { M,w M is a TM which halts on input w} TOTAL TM

More information

Undecidability COMS Ashley Montanaro 4 April Department of Computer Science, University of Bristol Bristol, UK

Undecidability COMS Ashley Montanaro 4 April Department of Computer Science, University of Bristol Bristol, UK COMS11700 Undecidability Department of Computer Science, University of Bristol Bristol, UK 4 April 2014 COMS11700: Undecidability Slide 1/29 Decidability We are particularly interested in Turing machines

More information

198:538 Lecture Given on February 23rd, 1998

198:538 Lecture Given on February 23rd, 1998 198:538 Lecture Given on February 23rd, 1998 Lecturer: Professor Eric Allender Scribe: Sunny Daniels November 25, 1998 1 A Result From theorem 5 of the lecture on 16th February 1998, together with the

More information

A Note on Turing Machine Design

A Note on Turing Machine Design CS103 Handout 17 Fall 2013 November 11, 2013 Problem Set 7 This problem explores Turing machines, nondeterministic computation, properties of the RE and R languages, and the limits of RE and R languages.

More information

Introduction to Turing Machines

Introduction to Turing Machines Introduction to Turing Machines Deepak D Souza Department of Computer Science and Automation Indian Institute of Science, Bangalore. 12 November 2015 Outline 1 Turing Machines 2 Formal definitions 3 Computability

More information

Lecture 2: From Classical to Quantum Model of Computation

Lecture 2: From Classical to Quantum Model of Computation CS 880: Quantum Information Processing 9/7/10 Lecture : From Classical to Quantum Model of Computation Instructor: Dieter van Melkebeek Scribe: Tyson Williams Last class we introduced two models for deterministic

More information

The Polynomial Hierarchy

The Polynomial Hierarchy The Polynomial Hierarchy Slides based on S.Aurora, B.Barak. Complexity Theory: A Modern Approach. Ahto Buldas Ahto.Buldas@ut.ee Motivation..synthesizing circuits is exceedingly difficulty. It is even

More information

Turing Machines Part III

Turing Machines Part III Turing Machines Part III Announcements Problem Set 6 due now. Problem Set 7 out, due Monday, March 4. Play around with Turing machines, their powers, and their limits. Some problems require Wednesday's

More information

where Q is a finite set of states

where Q is a finite set of states Space Complexity So far most of our theoretical investigation on the performances of the various algorithms considered has focused on time. Another important dynamic complexity measure that can be associated

More information

Lecture notes on Turing machines

Lecture notes on Turing machines Lecture notes on Turing machines Ivano Ciardelli 1 Introduction Turing machines, introduced by Alan Turing in 1936, are one of the earliest and perhaps the best known model of computation. The importance

More information

CHAPTER 5. Interactive Turing Machines

CHAPTER 5. Interactive Turing Machines CHAPTER 5 Interactive Turing Machines The interactive Turing machine, which was introduced by Van Leeuwen and Wiedermann[28, 30], is an extension of the classical Turing machine model. It attempts to model

More information

Complexity Theory Part I

Complexity Theory Part I Complexity Theory Part I Problem Problem Set Set 77 due due right right now now using using a late late period period The Limits of Computability EQ TM EQ TM co-re R RE L D ADD L D HALT A TM HALT A TM

More information

Parallel Turing Machines on a Two-Dimensional Tape

Parallel Turing Machines on a Two-Dimensional Tape Czech Pattern ecognition Workshop 000, Tomáš Svoboda (Ed.) Peršlák, Czech epublic, February 4, 000 Czech Pattern ecognition Society Parallel Turing Machines on a Two-Dimensional Tape Daniel Průša František

More information

Verifying whether One-Tape Turing Machines Run in Linear Time

Verifying whether One-Tape Turing Machines Run in Linear Time Electronic Colloquium on Computational Complexity, Report No. 36 (2015) Verifying whether One-Tape Turing Machines Run in Linear Time David Gajser IMFM, Jadranska 19, 1000 Ljubljana, Slovenija david.gajser@fmf.uni-lj.si

More information

(a) Definition of TMs. First Problem of URMs

(a) Definition of TMs. First Problem of URMs Sec. 4: Turing Machines First Problem of URMs (a) Definition of the Turing Machine. (b) URM computable functions are Turing computable. (c) Undecidability of the Turing Halting Problem That incrementing

More information

FORMAL LANGUAGES, AUTOMATA AND COMPUTATION

FORMAL LANGUAGES, AUTOMATA AND COMPUTATION FORMAL LANGUAGES, AUTOMATA AND COMPUTATION DECIDABILITY ( LECTURE 15) SLIDES FOR 15-453 SPRING 2011 1 / 34 TURING MACHINES-SYNOPSIS The most general model of computation Computations of a TM are described

More information

UNIT-VIII COMPUTABILITY THEORY

UNIT-VIII COMPUTABILITY THEORY CONTEXT SENSITIVE LANGUAGE UNIT-VIII COMPUTABILITY THEORY A Context Sensitive Grammar is a 4-tuple, G = (N, Σ P, S) where: N Set of non terminal symbols Σ Set of terminal symbols S Start symbol of the

More information

Advanced topic: Space complexity

Advanced topic: Space complexity Advanced topic: Space complexity CSCI 3130 Formal Languages and Automata Theory Siu On CHAN Chinese University of Hong Kong Fall 2016 1/28 Review: time complexity We have looked at how long it takes to

More information

an efficient procedure for the decision problem. We illustrate this phenomenon for the Satisfiability problem.

an efficient procedure for the decision problem. We illustrate this phenomenon for the Satisfiability problem. 1 More on NP In this set of lecture notes, we examine the class NP in more detail. We give a characterization of NP which justifies the guess and verify paradigm, and study the complexity of solving search

More information

arxiv: v2 [cs.fl] 29 Nov 2013

arxiv: v2 [cs.fl] 29 Nov 2013 A Survey of Multi-Tape Automata Carlo A. Furia May 2012 arxiv:1205.0178v2 [cs.fl] 29 Nov 2013 Abstract This paper summarizes the fundamental expressiveness, closure, and decidability properties of various

More information

Finite Automata Theory and Formal Languages TMV027/DIT321 LP4 2018

Finite Automata Theory and Formal Languages TMV027/DIT321 LP4 2018 Finite Automata Theory and Formal Languages TMV027/DIT321 LP4 2018 Lecture 15 Ana Bove May 17th 2018 Recap: Context-free Languages Chomsky hierarchy: Regular languages are also context-free; Pumping lemma

More information

Lies My Calculator and Computer Told Me

Lies My Calculator and Computer Told Me Lies My Calculator and Computer Told Me 2 LIES MY CALCULATOR AND COMPUTER TOLD ME Lies My Calculator and Computer Told Me See Section.4 for a discussion of graphing calculators and computers with graphing

More information

Lecture 2. 1 More N P-Compete Languages. Notes on Complexity Theory: Fall 2005 Last updated: September, Jonathan Katz

Lecture 2. 1 More N P-Compete Languages. Notes on Complexity Theory: Fall 2005 Last updated: September, Jonathan Katz Notes on Complexity Theory: Fall 2005 Last updated: September, 2005 Jonathan Katz Lecture 2 1 More N P-Compete Languages It will be nice to find more natural N P-complete languages. To that end, we ine

More information

Recognizing Tautology by a Deterministic Algorithm Whose While-loop s Execution Time Is Bounded by Forcing. Toshio Suzuki

Recognizing Tautology by a Deterministic Algorithm Whose While-loop s Execution Time Is Bounded by Forcing. Toshio Suzuki Recognizing Tautology by a Deterministic Algorithm Whose While-loop s Execution Time Is Bounded by Forcing Toshio Suzui Osaa Prefecture University Saai, Osaa 599-8531, Japan suzui@mi.cias.osaafu-u.ac.jp

More information

Chapter 11 - Sequences and Series

Chapter 11 - Sequences and Series Calculus and Analytic Geometry II Chapter - Sequences and Series. Sequences Definition. A sequence is a list of numbers written in a definite order, We call a n the general term of the sequence. {a, a

More information

Randomized Complexity Classes; RP

Randomized Complexity Classes; RP Randomized Complexity Classes; RP Let N be a polynomial-time precise NTM that runs in time p(n) and has 2 nondeterministic choices at each step. N is a polynomial Monte Carlo Turing machine for a language

More information

Lecture 3. 1 Terminology. 2 Non-Deterministic Space Complexity. Notes on Complexity Theory: Fall 2005 Last updated: September, 2005.

Lecture 3. 1 Terminology. 2 Non-Deterministic Space Complexity. Notes on Complexity Theory: Fall 2005 Last updated: September, 2005. Notes on Complexity Theory: Fall 2005 Last updated: September, 2005 Jonathan Katz Lecture 3 1 Terminology For any complexity class C, we define the class coc as follows: coc def = { L L C }. One class

More information

5.4 Continuity: Preliminary Notions

5.4 Continuity: Preliminary Notions 5.4. CONTINUITY: PRELIMINARY NOTIONS 181 5.4 Continuity: Preliminary Notions 5.4.1 Definitions The American Heritage Dictionary of the English Language defines continuity as an uninterrupted succession,

More information

Handouts. CS701 Theory of Computation

Handouts. CS701 Theory of Computation Handouts CS701 Theory of Computation by Kashif Nadeem VU Student MS Computer Science LECTURE 01 Overview In this lecturer the topics will be discussed including The Story of Computation, Theory of Computation,

More information

CS 125 Section #10 (Un)decidability and Probability November 1, 2016

CS 125 Section #10 (Un)decidability and Probability November 1, 2016 CS 125 Section #10 (Un)decidability and Probability November 1, 2016 1 Countability Recall that a set S is countable (either finite or countably infinite) if and only if there exists a surjective mapping

More information

Decidability and Undecidability

Decidability and Undecidability Decidability and Undecidability Major Ideas from Last Time Every TM can be converted into a string representation of itself. The encoding of M is denoted M. The universal Turing machine U TM accepts an

More information

Non-emptiness Testing for TMs

Non-emptiness Testing for TMs 180 5. Reducibility The proof of unsolvability of the halting problem is an example of a reduction: a way of converting problem A to problem B in such a way that a solution to problem B can be used to

More information

2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1

2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1 Lectures - Week 11 General First Order ODEs & Numerical Methods for IVPs In general, nonlinear problems are much more difficult to solve than linear ones. Unfortunately many phenomena exhibit nonlinear

More information

Turing Machines Part II

Turing Machines Part II Turing Machines Part II Hello Hello Condensed Slide Slide Readers! Readers! This This lecture lecture is is almost almost entirely entirely animations that that show show how how each each Turing Turing

More information