CHAOTIC COMPUTATION ABRAHAM MILIOTIS


CHAOTIC COMPUTATION

By

ABRAHAM MILIOTIS

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

© 2009 Abraham Miliotis

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
   1.1 Dissertation Overview
   1.2 Chaos
      Logistic Map, Topological Transitivity and Period Three
      The Tent Map, Topological Conjugacy and Universality
      Threshold Control and Excess Overflow Propagation
   Conclusion

2 INTRODUCTION TO CHAOTIC COMPUTATION
   Number Encoding
      Excess Overflow as a Number
      Periodic Orbits for Number Representation
      Representation of Numbers in Binary
   Arithmetic Operations
      Decimal Addition
      Binary Addition
      Decimal Multiplication and Least Common Multiple
   Binary Operations
      Logic Gates
      Parallel Logic and the Half Adder
      The Deutsch-Jozsa Problem
   Conclusion

3 SEARCHING AN UNSORTED DATABASE
   Encoding and Storing Information
   Searching for Information
   Encoding, Storing and Searching: An Example
   Discussion

4 A SIMPLE ELECTRONIC IMPLEMENTATION OF CHAOTIC COMPUTATION
   An Iterated Nonlinear Map
   Threshold Control of Chaos into Different Periods
   Electronic Analog Circuit: Experimental Results
   4.4 Fundamental Logic Gates with a Chaotic Circuit
   Encoding and Searching a Database Using Chaotic Elements
   Conclusion

5 LOGIC OPERATIONS FROM EVOLUTION OF DYNAMICAL SYSTEMS
   Generation of a Sequence of (2-input) Logic Gate Operations
   The Full Adder and 3-Input XOR and NXOR
   Conclusion

6 MANIPULATING TIME FOR COMPUTATION
   Introduction
   Flexible Logic Gates
   Search Algorithm
   Neural Implementation
      Neural Models
      Algorithm Implementations
   Electronic Implementation
   Discussion

7 CONCLUSION

REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

1-1 Summary of the transitions in behaviour of the different intervals described in Section
Topological conjugacy
Experimental measurements of Feigenbaum's constant (δ) in different systems based on their period doubling
Truth-table for AND, OR, XOR, NOR, NAND, NOT, and WIRE
Necessary and sufficient conditions for a chaotic element to satisfy AND, OR, XOR, NOR, NAND, NOT, WIRE
Initial values, x_prog, and threshold values, x*, required to implement the logic gates AND, OR, XOR, NOR, NAND, NOT, and the identity operation (WIRE), with δ =
Truth table for XOR and AND logic gates on the same set of inputs
Truth table for two AND gates operating on independent inputs
Required conditions to satisfy parallel implementation of the XOR and AND gates
Required conditions for implementing two AND gates on independent sets of inputs
Examples of initial values, x_prog, y_prog, and thresholds, x*, y*, yielding the parallel operation of XOR and AND gates
Examples of initial values x_prog, y_prog, and thresholds, yielding operation of two AND gates on independent inputs
Truth-table for the five fundamental logic gates NOR, NAND, AND, OR and XOR
Necessary and sufficient conditions to be satisfied by a chaotic element in order to implement the logical operations NOR, NAND, AND, OR and XOR
Numerical values of x_prog for implementing the logical operations NOR, NAND, AND, OR and XOR
Updated state values, x_1 = f(x_0), of a chaotic element in order to implement the logical operations NOR, NAND, AND, OR and XOR
The truth table of the five basic logic operations NAND, AND, NOR, XOR, OR
5-2 Necessary and sufficient conditions to be satisfied by a chaotic element in order to implement NAND, AND, NOR, XOR and OR on subsequent iterations
The truth table of the full adder, and necessary conditions to be satisfied
The truth table of the 3-input XOR and NXOR logic operations, and necessary and sufficient conditions to be satisfied by the map
The truth table of each of the five fundamental logic gates, AND, NAND, OR, NOR, XOR
Time values, in arbitrary time units, for each of the five gates considered
Appropriate time sample instances, based on simulation points, to perform each of the five gates considered
Appropriate R values to time-shift an action potential in order to perform each of the five gates considered
Delay times for Vs to implement each of the five gates considered

LIST OF FIGURES

1-1 Bifurcation diagram for the Logistic map
Forward and Backward evolution of F_4
Indicative behaviour of I under multiple applications of F_μ for (a) μ < 1 and (b) 1 < μ
Plots of F^2 and F
Exhibition of the trapping of two points in two different configurations of F_μ
Renormalization of two cases of F_μ^2
Demonstration of the birth of odd-periodicity fixed points
Plot of the function F : I → I
Plot of the function T : I → I
Bifurcation diagram for the Tent map
Topological conjugacy between evolved states, up to n =
Logistic map bifurcation diagram for some values within 3 < μ < μ
Threshold Control Mechanism
Threshold values for confining the logistic map on orbits of periodicity 2 to
Emitted excess by thresholding the logistic map in the interval [0, 0.75]
Encoding the set of integers {0, 1, ..., 100}
Number encoding in binary format
Serial Addition
Decimal parallel addition
The branching algorithm can be extended to a larger treelike structure
Schematic representation of the serial addition method for binary numbers
Schematic representation of the parallel addition method for binary numbers
Schematic representation of the method for the Least Common Multiple of four numbers
Basis function T - Tent Map
2-11 Basis function T - Inverted Tent Map
Four realizations of the chaotic Deutsch-Jozsa algorithm for the case k =
The total excess emitted from each of the 72 functions
The Tent map under the threshold mechanism
Schematic representation of the changes in the state of different elements
Searching for l
Searching for e
Searching for x
Bifurcation diagram of the iterated map for various values of α and β
Graphical form of the map to be implemented by an electronic circuit
Effect of threshold value x* on the dynamics of the system
Circuit diagram of the nonlinear device of Equation
Voltage response characteristics of the nonlinear device
Schematic diagram for implementing the threshold controlled nonlinear map
Circuit diagram of the threshold controller
PSPICE simulation results of the experimental circuit
Searching for b
Searching for o
Searching for d
Graphical representation of five iterations of the Logistic map
Patterns of binary two-input symmetric operations
Schematic representation of a flexible 2-input logic gate
Construction of a generic signal
Schematic representation of the time-based search method
Time Delay Unit (TDU)
Demonstration of operating a NOR gate with a single neuron using different sampling times
6-6 Demonstration of operating a NOR gate with a neural circuit using different delay times
Schematic representation of an electronic circuit for logic using time
Demonstration of operating a NOR gate using an electronic circuit utilizing time-dependent computation
Schematic representation of an electronic circuit for the time-dependent search method
Demonstration of performing a search using an electronic circuit utilizing time-dependent computation

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

CHAOTIC COMPUTATION

By Abraham Miliotis

May 2009

Chair: William L. Ditto
Major: Biomedical Engineering

Chaotic Computation is the exploitation of chaotic systems to perform computational tasks. The abundance of uncountably many distinct behaviours exhibited by chaotic systems, along with their embedded determinism, positions such systems as perfect candidates for developing a new computational environment. The present dissertation focuses on algorithms developed over the past decade within the realm of Chaotic Computation. After a brief exposition of general Chaos Theory, we proceed to give detailed instructions for performing such algorithms, as well as specific examples of implementations. We begin with multiple methods for number representation and basic arithmetic manipulations, providing from the start evidence of the flexibility of Chaotic Computation. Compatibility with Turing machines is subsequently shown through an algorithm for logic operations whose general form is a recurrent theme. We soon proceed beyond Turing machines, however, and present a solution to the Deutsch-Jozsa problem for arbitrary binary functions. A practical issue is also handled, by showing how chaotic systems have a natural way of selecting matches of a searched item from within an unsorted database. Finally, we present our latest results in handling prolonged evolution of chaotic systems. Specifically, we demonstrate that selecting the appropriate behaviour for a computational task matters more than being exact with specific state values, or even being confined to specific physical quantities.

CHAPTER 1
INTRODUCTION

Over the last five decades the advancement of computational machines has been an invaluable and extremely beneficial achievement of our society. Moore's law [1], either as a self-fulfilling prophecy or as a simple description of this progress, has been closely obeyed up to now. The immense importance of preserving this progress has driven many to investigate possible reasons that could hinder further increase in computational performance. Obviously the ultimate limits imposed by the physical nature of our world were the first to be identified. The three most important are the speed-of-light limit on transmission of information, the limit on the amount of information a finite system can store, and the thermodynamic limit of thermal energy dissipation by the erasure of information [2], the latter being the most relevant to current semiconductor technology. Perhaps most important at this time, though, are the limitations inherent in semiconductor technology, which is the basis of contemporary computers. Regularly over the last decade the International Technology Roadmap for Semiconductors (ITRS) has been published with the aim of indicating the trend of specific features of semiconductor technology [3], for example lithography resolution, transistor size and connecting wire diameter. Current technology is relatively far from the fundamental limits for these attributes (molecular sizes and thermal noise), but approaching them at an alarming rate; for example, Austin et al. report a method for creating a memory half-pitch size equivalent to the size of an insulin molecule (6 nm) [4]. As the need for alternative computational paradigms becomes progressively evident, many physical systems have been proposed as alternative bases for computational machines. The most prominent are quantum computing [5, 6] and DNA computing [7-10].
Even after many years of intense research, though, both quantum and DNA computing are still facing some fundamental problems. The main problem with both

those paradigms is that they are confined, by their nature, to a single physical realization. In each case this confinement presents different problems that are difficult to overcome. For quantum computing the dominant issue is that of thermal noise, which randomizes individual qubits and leads to decoherence of the system. DNA computing is mainly restricted by the time-scale on which the necessary chemical reactions take place, making any computational task very slow in producing a result. Regardless of the physical system used for computation, it is clear that there will be ultimate limitations in effectiveness for accomplishing any individual task. The common theme in all alternative computational paradigms is flexibility and parallelization; this is also pursued by conventional computer science research. The common goal is the construction of a methodology providing a computational model able to solve each computational problem in the most efficient way and, if possible, multiple problems simultaneously. The main driving force towards such a model is the fact that the most powerful computational machine we have, the human brain, is able both to solve problems in multiple ways and to handle multiple problems simultaneously. Chaotic systems have, over the last few decades, attracted the attention of a large portion of the scientific community. The main reason is the abundance of such systems in nature and the extensive repertoire of behaviours they exhibit [11, 12], compounded by the fact that they are deterministic systems and can be described by a small set of equations. More recently there has been intense research into methods to control chaotic systems, inspired primarily by the work of Ott, Grebogi and Yorke [13]. Control mechanisms have provided the means to confine a chaotic system to a specific behaviour.
Such techniques have been taken advantage of in many fields of contemporary research, including control, synchronization, communications and information encoding [14, 15]. Many attempts have been made, since before the 1990s, at bridging dynamics and computation, and many are still active fields of research [16-23]. Probably, though, the clearest suggestion that a chaotic system, specifically, can be used for computation

can be attributed to C. Moore [24]. It wasn't until the late 1990s, though, that the use of a non-feedback control mechanism on a chaotic system to perform specific computational tasks was demonstrated [25, 26]. Over the last nine years non-feedback control has been utilized to show how chaotic systems can perform arithmetic and binary logic operations, and further to solve more complex problems like the Deutsch-Jozsa problem and searching an unsorted database [27, 28]. Non-feedback control is typically achieved either by the use of a threshold on a state variable or by selecting specific values for the system parameters, in either case confining a system to a specific subset of all available points. The key idea is that, from the wide variety of behaviours embedded in a chaotic system, one can find a specific pattern of behaviour that can accomplish a specific task. Control can confine a system to the required behavioural pattern without loss of the ability to switch to a different behaviour and perform a different task. This is accomplished by rigorous investigation of a chaotic system to identify the behaviours that can reliably represent computational tasks, and the state variable thresholds, or parameter values, that will confine the system to evolve in the required manner. This allows us to envision a computational machine which has as its building block a chaotic system: each element identical to all others, providing redundancy and reliability; each element able to perform a multitude of functions, providing flexibility; and each element independent of all others, providing parallelism of tasks. The reason we are not specific about which chaotic system to use is the Universality of chaos [29]. Chaotic systems, even if their governing equations differ, are behaviourally the same.
This not only allows us to choose the most convenient system for theoretical development of algorithms, but more importantly does not confine the physical realization to a specific technology. As mentioned above, chaotic systems exist in abundance in nature, including high-speed electronic circuits, lasers, and even neurons; any of which can be used as the building block of a chaos-based computer.
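The threshold mechanism mentioned above is simple enough to sketch in a few lines. The following toy example (our own illustration, not the dissertation's circuit parameters) clips the logistic map at μ = 4 to a threshold x*; since F_4(x*) > x* for any 0 < x* < 3/4, the state is pinned at x* after the first clipping and a constant excess is emitted on every subsequent iteration:

```python
def thresholded_logistic(x0, xstar, mu=4.0, steps=10):
    """Iterate the logistic map, clipping the state to xstar and
    recording the emitted excess at every time step."""
    states, excesses = [], []
    x = x0
    for _ in range(steps):
        x = mu * x * (1.0 - x)          # free evolution of the map
        excess = max(0.0, x - xstar)    # overflow above the threshold
        x = min(x, xstar)               # clip the state back to xstar
        states.append(x)
        excesses.append(excess)
    return states, excesses

states, excesses = thresholded_logistic(x0=0.23, xstar=0.6)
# After the first clip the state is pinned at xstar = 0.6 and the
# excess settles to F_4(0.6) - 0.6 = 0.96 - 0.6 = 0.36 per iteration.
```

Other threshold ranges confine the map to higher-period orbits in the same way; the emitted excess is what the later chapters use to encode numbers.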

1.1 Dissertation Overview

This dissertation can be considered to be of four parts. The first part is the remainder of this introductory chapter; the second part is chapters two and three; the third part is chapter four, which can be considered to stand on its own; and finally chapters five and six comprise the final part of our exposition of Chaotic Computation. In the remainder of this introductory chapter we provide the necessary parts of Chaos Theory so as to relate it to computation. Specifically, we expose the reader to a view of Chaos Theory from a loose Set Theory approach, with the intention of suggesting to the reader the connections from Set Theory, to Chaos Theory, to Computation and general Mathematical Logic. As this dissertation is concerned solely with Chaotic Computation, we do not delve in depth into the peripheral issues and direct the reader to appropriate references [30, 31]. Chapters two and three can be considered to be the main part of this dissertation; they are an exposition of established algorithms of Chaotic Computation [25-28, 32, 33]. We begin in chapter two with the earliest algorithms of Chaotic Computation, developed over the last nine years by Ditto et al.: algorithms for number representation, arithmetic operations and a solution to the Deutsch-Jozsa problem. The third chapter presents exclusively the recent algorithm for searching an unsorted database [28]. In the relatively small third part, that is chapter four, we present a recent implementation of most abilities of Chaotic Computation with an extremely simple electronic circuit [34]. Despite its brevity, the aims of this part are twofold: first and foremost, to give substance of realization to the whole concept of Chaotic Computation through a physical system implementation; and beyond that, to present a concrete demonstration of the Universality of chaos and the ease with which we can translate our results to any chaotic system.
The final part presents our most recent results, the cutting-edge developments in Chaotic Computation [33]. Chapter five, while focusing solely on binary operations, introduces the importance of what can be considered to be the "time dimension"; in addition it

expands on our treatment of state values, by showing more concretely the importance of specific behaviours over exact state values. Finally, chapter six expands further the idea of manipulating time. Through demonstrations with electronic circuits, and mainly neural circuits, we exhibit implementations of Chaotic Computation algorithms using time instances as the medium for computational commands. We wish the reader an encounter with this exposition that is both informative and enjoyable.

1.2 Chaos

Chaos Theory is the third major revolution of Physics of the 20th century. Even though its birth date can be placed as contemporary to Relativity and Quantum Mechanics, circa 1900, it wasn't until much later, in the early 1960s, that the field actually attracted enough attention to attain critical mass; a slightly American-biased popular science recount of the history of Chaos Theory can be found in [35]. This dissertation is not about Chaos Theory as such, but more about some of the features of Chaos Theory. We utilize much of what is defined, predicted, and expected from Chaos Theory to propose a whole new realm for computation. The problem is that Chaos Theory is a very closely packed theory, with each part relating to some other part and the whole to the details (a concept which is a result of the theory as well). We will present the features of Chaos Theory that are necessary for projecting to the reader our results, and direct the reader to: Devaney (1982) [36] for a solid mathematical exploration of Chaos and the origin of one of the most accepted definitions of Chaos; Peitgen, Jürgens and Saupe (1992) [37] for a more hands-on demonstration with heavy emphasis on fractals; and Ott (1993) [38] for a midpoint approach to Chaos Theory, with the addition of an exposition of Quantum Chaos.

1.2.1 Logistic Map, Topological Transitivity and Period Three

In this introductory chapter we provide a somewhat non-traditional view of chaos.
The approach we take is one closer to a mathematician's than a physicist's, since

computation, our main concern, is more of a mathematical concept than a physical one. We present chaos in a loose set-theory context; that is, we treat topological spaces, intervals and even state variables as sets of points, and we try to avoid any linearizing tool, like differentiation, and any representation that involves discretization of our set of points, like statistical manipulations. At the same time our exposition is loose, since we do not go into extreme formal mathematical precision; that is, we omit providing proofs and extensive definitions of terms used. We direct the reader to two seminal sources, the Principia Mathematica of Whitehead and Russell [30] and Kurt Gödel's ever-important paper "On formally undecidable propositions of Principia Mathematica and related systems" (1931) [31], both for more background information on our approach, especially the missing mathematical details, and, more importantly, for justification of our approach and how it relates to the concept of computation. Our main tool for presenting the necessary parts of Chaos Theory is the discrete-time Logistic map:

    F_μ^{n+1}(x_n) = x_{n+1} = μ x_n (1 - x_n),    (1-1)

where x is the state of the system, n is the time step [1] and μ > 0 can be thought of as a growth rate. Actually the origins of this quadratic equation are in population dynamics; specifically, it models the behaviour of a population with limited resources. In that context x represents the current fraction of the population with respect to the maximum possible sustainable population (x = 1). In its continuous-time form the model can be solved analytically and gives rise to the most common sigmoid function, a well-behaved function of continuous growth and eventual saturation [2]. When we consider discrete time

[1] We shall drop the superscript for the single-step case n → n + 1, unless we wish to emphasise the use of a single application of F_μ.

[2] A theorem by Poincaré and Bendixson [39] guarantees that continuous-time two-dimensional planar systems cannot exhibit chaos.

though, for example with n denoting the current generation of beings in our population, we have Equation 1-1, and the behaviour of x is no longer simple continuous growth leading to eventual saturation; in fact, in this discrete case, if the population does manage to reach saturation it collapses to zero. For our purposes we need a closer look at Equation 1-1. At first glance it is clear the function is governed by the three variables x, μ, n, each in principle unbounded. There is no need, though, to go to such lengths, as we can confine ourselves to the domains 0 ≤ x ≤ 1, 0 < μ ≤ 4, 0 < n. This is truly a very small portion of the whole available space, so let us briefly justify this confinement; we will provide more justification as we focus further within these domains. Starting with 0 < n: we can set the beginning of time at any moment in time, and we will actually see below that this is totally arbitrary; in fact, for some results of Chaos Theory, reversing the arrow of time, n → n - 1, is very useful. In the case of μ things are a bit more complicated, except for the case μ = 0: for μ < 0 we can show the same behaviours as for 0 < μ; for 4 < μ most x values escape to infinity, and the ones that are trapped form a Cantor set, which is the prototypical fractal and, again, shares many of the features present in 0 < μ ≤ 4; subsequently we will confine μ further. Finally for x: with the other two variables confined to the above domains, values with x < 0 or 1 < x escape to infinity or eventually collapse to zero. Therefore our concern is what happens to a state x ∈ [0, 1] for values of μ ∈ (0, 4], in the forward direction of time n → n + 1. An immediate view of the behaviours in this domain can be obtained through a bifurcation diagram, see Figure 1-1.
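Equation 1-1 is trivial to iterate, and the bifurcation diagram just mentioned amounts to sampling the tail of such iterations for each μ. A minimal sketch (our own illustration; the transient and sample lengths are arbitrary choices):

```python
def logistic(x, mu):
    """One application of the logistic map, Eq. (1-1)."""
    return mu * x * (1.0 - x)

# In discrete time, reaching saturation collapses the population:
orbit = [0.5]
for _ in range(3):
    orbit.append(logistic(orbit[-1], 4.0))
# orbit is [0.5, 1.0, 0.0, 0.0]: x = 1 maps straight to 0.

def attractor_samples(mu, x0=0.5, transient=300, keep=200):
    """Discard a transient, then return the distinct (rounded) states
    visited afterwards: one vertical slice of a bifurcation diagram."""
    x = x0
    for _ in range(transient):
        x = logistic(x, mu)
    tail = set()
    for _ in range(keep):
        x = logistic(x, mu)
        tail.add(round(x, 6))
    return sorted(tail)

# mu = 2.5: attraction to the fixed point 1 - 1/mu = 0.6 (one value);
# mu = 3.2: attraction to a period-2 orbit (two values).
slice_fixed = attractor_samples(2.5)
slice_period2 = attractor_samples(3.2)
```

Stacking `attractor_samples(mu)` over a fine grid of μ values and plotting the results reproduces a diagram like Figure 1-1.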
Such a diagram is obtained by evolving an initial state x_0 (here 1/2, the critical point of the map; using the critical point we are guaranteed to evolve into an attracting state, if one exists) for a large number of time steps and plotting the state at the final few time steps; in this case we evolved the map 500 time steps and plotted the last 200 states. From this empirical approach we can already see the

richness that this simple map provides; at least three behaviours are evident: collapse to zero, attraction to a non-zero fixed point, and attraction to a periodic orbit.

Figure 1-1. Bifurcation diagram for the Logistic map. Eventual behaviour of an initial state x_0 under repeated applications of F_μ.

This static picture of what happens, even though extremely rich, is a generalization of the details of the behaviours of x. For example, it should be clear, though it is not clearly shown, that this picture is almost completely invariant to the initial x_0; that is, what happens to a state x ∈ [0, 1] happens to almost any other initial state: the map has a local effect that is global, and vice versa. It is not clear, though, what happens at the onset of chaos, when almost all points behave the same way and at the same time all the rest of the points behave in a different way! To see the details inside the bifurcation diagram we take a more dynamic approach with respect to the three available variables. Starting with n, we will consider the question "Where do states come from?"; we are not going to address the issue of time reversal,

but rather take a step back in time and consider:

    x_0 = μ x_{-1} (1 - x_{-1}) = F_μ(x_{-1}),    (1-2)

where x_0 ∈ [0, 1] and 0 < μ ≤ 4. Figure 1-2 shows two plots of the function F_4, each showing how the function evolves in one direction of time.

Figure 1-2. Forward and Backward evolution of F_4. (a) Forward evolution: an empty circle marks an arbitrary initial point x_0; we can track its forward evolution by moving up (or down) to meet the function F_4 and then right (or left) back to the diagonal; each full cycle of these two steps is equivalent to a single application of F_4; thus the full circle marks the point F_4^5(x_0) = x_5. (b) Backward evolution: the steps of the forward evolution can also be reversed; i.e., from an arbitrary initial point x_0 (empty circle) we can move left and right (or left twice) to the two points on F_4, and then up and down (or down twice) to meet the diagonal at the points of F_4^{-1}(x_0) = {x_{-1}^+, x_{-1}^-} (the two empty squares); and of course further back to the four points of F_4^{-2} (the four full squares), and so on.

By solving Equation 1-2 we observe the following: by setting x_0 = x_{-1} we find the fixed points at x* = 0 and x* = 1 - 1/μ; now, setting x_0 = 0 and x_0 = 1 - 1/μ, we find for each fixed point its two pre-images, specifically x_{-1} ∈ {0, 1} → 0 and x_{-1} ∈ {1/μ, 1 - 1/μ} → 1 - 1/μ;

and finally, by solving the quadratic for any x_0, we reach

    x_{-1} = 1/2 ± sqrt(μ^2 - 4 μ x_0) / (2 μ),

which is real only for x_0 ≤ μ/4; that is, for x > μ/4, F_μ^{-1}(x) ⊂ C: the pre-images of any x value greater than a fourth of the growth factor (μ) of the applied map are in the complex plane. This is an interesting result and the start of another long story; it shows how the dynamics of the logistic map extend into the complex plane. This is beyond our current scope, but in passing we note that the logistic map is a reduction of the quadratic maps Q_c(z) = z^2 + c, where z, c ∈ C, the source of the famous Mandelbrot and Julia fractals [40]; this is an initial hint of the concept of Universality, which we will address in the section on the Tent map and Topological Conjugacy. We are now ready to begin to investigate how μ changes the behaviour of x ∈ [0, 1]. We cannot explain what happens at every value of μ, so we consider the change before and after a point of transition. Clearly at μ = 1 we have a major change: the fixed point x* = 1 - 1/μ enters the domain [0, 1] and becomes an attracting fixed point; actually the big event at this μ-value is that the two fixed points cross paths. At the same time 0 becomes a repelling fixed point. These conclusions can be drawn by taking the absolute value of the derivative of the map at the fixed points, |dF_μ/dx (x*)|, a standard technique for the characterization of dynamical systems. This is, though, a local linearization of the map; we prefer to show this, and subsequent transitions, in terms of global effects, thus preparing the ground for what happens at higher values of μ. We need to deviate for a while to single out some special points on the interval I = [0, 1]. We have already seen the four points that map to the fixed points, {0, 1, 1/μ, 1 - 1/μ}, and we add to this list {1/4, 1/2, 3/4}.
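The pre-image formula just derived can be checked numerically. A minimal sketch (our own illustration), using `cmath` so that the complex branch for x_0 > μ/4 appears automatically:

```python
import cmath

def preimages(x0, mu):
    """The two solutions x_{-1} of x0 = mu * x * (1 - x)."""
    d = cmath.sqrt(mu * mu - 4.0 * mu * x0)
    return 0.5 + d / (2.0 * mu), 0.5 - d / (2.0 * mu)

mu = 4.0
a, b = preimages(0.5, mu)     # 0.5 < mu/4 = 1: real pre-images
check = mu * a * (1 - a)      # applying F forward recovers x0 = 0.5

c, _ = preimages(0.9, 3.0)    # 0.9 > mu/4 = 0.75: complex pre-images
# c has a non-zero imaginary part; the pre-images left the real line.
```

Iterating `preimages` on each returned branch generates the backward tree of Figure 1-2(b).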
In general, 1/2 is the most important point of F_μ, not because of its value, but because it is the critical point of the map, and its backward and forward evolution can actually characterize the map extensively; the theory behind the evolution of the critical point is called kneading theory [36]. The importance of the points 1/4 and 3/4 is topological, following F_μ(1/4)/F_μ(1/2) = 3/4 and of course F_μ(1/4) = F_μ(3/4), in addition to F_μ(1/2) = sup F_μ(I); visually, this means that what will happen to the interval [1/4, 3/4], F_μ^n([1/4, 3/4]), is what happened previously to the interval [3/4, 1], F_μ^{n+1}([3/4, 1]), and

what is going to happen to [3/4, 1], F_μ^{n+1}([3/4, 1]), is what happened to [0, 1], F_μ^n([0, 1]), in a normalized sense. Consider the interval [0, 1] separated into three parts, I_1 = [0, 1/4), I_2 = [1/4, 3/4], I_3 = (3/4, 1]. No matter the μ-value, we know the following: F_μ(0) = 0, F_μ(1/4) = F_μ(3/4) = (3/16) μ, F_μ(1/2) = (1/4) μ and F_μ(1) = 0, so F_μ({0, 1/4, 1/2, 3/4, 1}) = {0, (3/4)(μ/4), (μ/4)}; divide out by a factor of (μ/4) and it is like nothing happened! We need to emphasise that this picture is only for visualization purposes and applies only to a single application of the map, a snapshot if you like that we can keep track of while iterating n; below we will discuss the proper way to view the evolution of intervals, the renormalization operator. Returning to the question "What happens as μ < 1 → 1 < μ?": there are multiple ways we can analyse the situation; we look at it with respect to changes in the behaviour of the aforementioned intervals. Globally, before the point of transition (μ < 1), we simply have F_μ(x) < x on I, so it is clear that in the limit n → ∞ the whole interval will collapse to zero. To see the change at the transition, consider F_μ(I_1) ⊂ I_1^{n+1} ∪ I_2^{n+1}; that is, I_1 will become the union of what can be considered to be the I_1 and I_2 of the next application (n + 1) of the map, at which point F_μ^{n+1}(I_2^{n+1}) ⊂ I_3^{n+2} and F_μ^{n+2}(I_3^{n+2}) ⊂ I_1^{n+3} ∪ I_2^{n+3}, completing the circle. Actually it is a double spiral with a moving pivot, the sequence of post-images of 1/4 and 3/4, denoted as x̃. So at μ < 1 we have F_μ(1/2) < 1/4, which implies that F_μ^n(I_2) ∩ I_2 = ∅; this is enough, in some sense, to characterize the evolution, since we have seen that in some sense both I_3 and I_1 end up, at least partly, in an interval that can be called I_2^m; another way to describe this evolution is F_μ^n(x̃) → 0 as n → ∞.
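Both claims in this passage can be verified in a few lines (a sketch; the specific μ values are our own choices): the images of the special points {0, 1/4, 1/2, 3/4, 1} are always (μ/4)·{0, 3/4, 1}, and for μ < 1 the post-images of the pivot 1/4 collapse to zero.

```python
def f(x, mu):
    return mu * x * (1.0 - x)

# Rescaling: F_mu({0, 1/4, 1/2, 3/4, 1}) = (mu/4) * {0, 3/4, 1},
# independently of mu (rounding absorbs floating-point noise).
rescaled = {
    mu: sorted({round(f(p, mu) / (mu / 4.0), 12)
                for p in (0, 0.25, 0.5, 0.75, 1)})
    for mu in (0.5, 1.0, 2.7, 4.0)
}

# Collapse for mu < 1: the post-image sequence of 1/4 shrinks to zero.
mu, x = 0.9, 0.25
for _ in range(500):
    x = f(x, mu)
# x is now numerically indistinguishable from zero
```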
With this description of the evolution of F_μ for μ < 1, we can see what changes at μ = 1: specifically, F_μ(I_2) ∩ I_2 ≠ ∅, with the single point of contact F_1(1/2) = 1/4. The consequence of this change can be expressed in multiple ways: since the post-images of I_1^n and I_3^n are in I_2^{n+1}, the relationship F_μ(I_2) ∩ I_2 ≠ ∅ can be extended to F_μ^n(I_2) ∩ I_2^n ≠ ∅ even as n → ∞, so I_2 contains a single point, which is the eventual post-image of all those points that get trapped in the sequence of I_2^m; or, simply, as n → ∞, F_μ^n(x̃) → 1 - 1/μ (= 0, for μ = 1); and even though the pivot point does converge to zero eventually, there

will always be at least one more point between its value and 0; these points, we will see, are central to what chaos is. So now an increase in μ, toppling it over 1, separates 0 and 1 - 1/μ by a more concrete amount, giving substance to the fixed point at 1 - 1/μ and to the points inside the interval (0, 1 - 1/μ); for a view of the evolution of points in either of these two regions (μ < 1 or 1 < μ) see Figure 1-3. We will come across transitions like the one we just described an infinite number of times going from μ = 1 to μ = 4.

Figure 1-3. Indicative behaviour of I under multiple applications of F_μ for (a) μ < 1 and (b) 1 < μ. (a) (μ = 1/2) The two fixed points are one at 0 and the other outside I. (b) (μ = 3/2) 0 remains a fixed point and now 1 - 1/μ ∈ I, with one pre-image being itself at 1 - 1/μ = 1/3 and the other at (μ/4 <) 1/μ = 2/3. Empty circles mark initial points and full circles final points (not the fixed point).

Now we are in the 1 < μ region and, even though the bifurcation diagram does not show it, the next point of transition is at μ = 2. We have seen how the behaviour of the whole interval I changed at μ = 1, and with it we should change our view of its subintervals. We could continue the discussion with I_1, I_2 and I_3, and we will return to them when necessary. Our aim, though, is to make the behaviour of the whole interval I as

visual as possible, and so, given that now 1 − 1/μ exists in I (as well as both its pre-images {1 − 1/μ, 1/μ}), we have a better choice of subintervals to use, specifically J̃_1 = [0, 1 − 1/μ), J̃_2 = [1 − 1/μ, 1/μ], J̃_3 = (1/μ, 1]. As we have done for the previous transition, first we will see how the interval I behaves with 1 < μ < 2. A quick side-note: taking the derivative at the fixed points we find the nature of 0 to be repelling and of 1 − 1/μ attracting. The details, though, make all the difference: as we have defined the intervals J̃_i we see that F_μ(J̃_3) = J̃_1 = F_μ(J̃_1). The interval J̃_3 gets mapped onto J̃_1 and J̃_1 onto itself, so it seems that J̃_1 is not changing, and even more it seems that J̃_1 and J̃_3 are disconnected⁴ from J̃_2. So it seems we have three types of points: the set of four points {0, 1 − 1/μ, 1/μ, 1}, and the two intervals (1 − 1/μ, 1/μ) and (0, 1 − 1/μ) ∪ (1/μ, 1). The easy part is the fixed points and their pre-images, F_μ({0, 1}) = 0 and F_μ({1 − 1/μ, 1/μ}) = 1 − 1/μ; in addition, since μ/4 < {1/μ, 1}, we know all other pre-images of the fixed points are in the complex plane. For (1 − 1/μ, 1/μ): since F_μ : (1 − 1/μ, 1/μ) → (1 − 1/μ, μ/4] and 1 − 1/μ < μ/4 < 1/μ, we see that all points in the limit n → ∞ are squeezed closer and closer to 1 − 1/μ. Finally, for (0, 1 − 1/μ) ∪ (1/μ, 1), the simple picture is that for x ∈ (0, 1 − 1/μ) and y ∈ (1/μ, 1) we have F_μ^n(x) > x, so as n → ∞, x → 1 − 1/μ; and, as we have seen, there exists x such that F_μ(y) = x, so F_μ^{n+1}(y) → 1 − 1/μ. Therefore every point moves away from zero (or one) and towards 1 − 1/μ. We have, though, seen that F_μ(J̃_1) = J̃_1; note that for every x ∈ J̃_1, x < μ/4, so every point in J̃_1 has two real pre-images: one of them is of course in J̃_3 and has no pre-images of its own, but the other one is in J̃_1 with x_{−1} < x_0, so that pre-image also has two pre-images, and so on.
So the complete picture of the evolution of J̃_1 is that each x_0 does evolve to x_1 (> x_0) and eventually, as n → ∞, to 1 − 1/μ, but on every application of the map it is replaced by two other points; in other words, points from the neighbourhood of 0, and of course of 1, eventually escape this neighbourhood and evolve into the neighbourhood of 1 − 1/μ: 0 is a repelling fixed point and 1 − 1/μ attracting.

⁴ Disconnected here is used as in everyday language; in fact, since F_μ^n(J̃_3) → 1 − 1/μ as n → ∞, the intervals are of course still connected.
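The two regimes described above (collapse towards 0 for μ < 1, attraction to 1 − 1/μ for 1 < μ) are easy to check numerically; a minimal sketch in Python (the parameter values 0.5 and 1.5 are chosen here purely for illustration):

```python
def logistic(mu, x):
    """One application of the Logistic map, F_mu(x) = mu * x * (1 - x)."""
    return mu * x * (1 - x)

def iterate(mu, x0, n):
    """Apply F_mu to the initial state x0, n times."""
    x = x0
    for _ in range(n):
        x = logistic(mu, x)
    return x

# mu < 1: every point of I is eventually squeezed onto the fixed point 0
x_low = iterate(0.5, 0.9, 200)

# 1 < mu (< 3): points instead converge to the fixed point 1 - 1/mu
x_high = iterate(1.5, 0.9, 200)
```

Here x_low ends up numerically indistinguishable from 0, while x_high settles on 1 − 1/1.5 = 1/3, the attracting fixed point discussed above.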

As soon as we cross from μ < 2 into 2 < μ we shall see more clearly the importance of considering this behaviour. At μ = 2 we reach the second transition point, 1 − 1/μ = 1/μ = 1/2; in some sense, just as at μ = 1 we had 1 − 1/μ = 0, two of the points that lead to a fixed point cross paths. The difference in this case is that it is two points that lead to the same fixed point (1 − 1/μ and 1/μ). Before we go on to the exciting changes that happen in 2 < μ, we note in passing that at μ = 2 the derivative at the fixed point is 0, and so 1/2 is called a super-attractive point; 1/2 has no pre-images, real or imaginary, and in a sense all points (even from the complex plane) converge to 1/2. More importantly, though, it is the only point in J̃_2; as a result we now have, in addition to F_2(J̃_3) = F_2(J̃_1) = J̃_1, also F_2(J̃_2) = J̃_2, that is, all our intervals are eventually invariant⁵; see Figure 1-4(a) for a view of the behaviour at μ = 2.

We are now in the 2 < μ region; there are two important general changes in behaviour. First, the fixed point now acquires an infinite number of pre-images, since 1/μ < μ/4, and as a consequence topological mixing is now possible: any closed interval contains points that have pre-images, or even post-images, outside the interval, and the set of points that lead to the fixed point has more than one point; more visually, notice that the images of J̃_1, J̃_2 and J̃_3 now overlap, see Figure 1-4(b). This leads us to the first concept that is a requirement for chaos, topological transitivity.

Definition 1.1. Topological Transitivity. Given a map f : J → J, f is said to be topologically transitive on J if for any two open subintervals U, V ⊂ J there exist an x ∈ U and a y ∈ V such that f^n(x) = y, for some 0 < n.

In other words, given an interval, any point in it can, eventually, be mapped on any other point in the interval. This is not yet active just above μ = 2, but the fact

⁵ We shall see that the transition from eventually invariant to invariant makes all the difference.

that we have intervals mixing is the first step: it is the difference between eventually invariant and invariant. Before we proceed, let us redefine our intervals so that they are not mixed before we even apply the map, and so as to keep the picture of how the intervals evolve as it has been up to now. So we re-define J_1 = [0, 1/μ), J_2 = [1/μ, 1 − 1/μ], J_3 = (1 − 1/μ, 1], basically acknowledging the fact that the pre-images of the fixed point (1 − 1/μ) have switched sides around 1/2; see Figure 1-4(b) for the mixing of the J̃_i intervals and their redefinition as J_i, where i ∈ {1, 2, 3}. The behaviour of these intervals is straightforward: we still have F_μ(J_3) = F_μ(J_1), but now F_μ(J_1) = J_1 ∪ J_2 and F_μ(J_2) ⊆ {1 − 1/μ} ∪ J_3. Visually we have J_3 stretched and rotated around 1 − 1/μ onto J_1 ∪ J_2, J_1 simply stretched onto J_1 ∪ J_2, and J_2 folded inside itself, with 1/2 as the pivot of the folding, and again rotated around 1 − 1/μ to land inside J_3; overall, all points are spiralling around and in towards 1 − 1/μ, which is the situation we had at μ < 1, so it should come as no surprise what happens at μ = 3.

Figure 1-4. Plots of F_2 and F_2.5. (a) Display of the super-attractive case, when 1/2 is the fixed point. (b) Display of the six intervals {J̃_1, J̃_2, J̃_3} and {J_1, J_2, J_3}; the overlap of the images of the J̃_i shows the initiation of topological mixing.
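The super-attractive case at μ = 2 noted above can be made concrete: writing e_n = 1/2 − x_n, the map F_2 gives e_{n+1} = 2 e_n² identically, so the distance to the fixed point 1/2 is squared (up to a factor of 2) on every step, rather than merely multiplied by a constant. A small sketch (the initial value 0.3 is arbitrary):

```python
def F2(x):
    """Logistic map at the super-attractive parameter mu = 2."""
    return 2 * x * (1 - x)

x = 0.3
e = 0.5 - x            # distance from the fixed point 1/2
x_next = F2(x)
e_next = 0.5 - x_next  # algebraically: 1/2 - 2x(1-x) = 2 (x - 1/2)^2 = 2 e^2

# a few iterations already reach the fixed point to machine precision
y = 0.3
for _ in range(6):
    y = F2(y)
```

After six steps the error has been squared six times, which is why the text speaks of all points converging to 1/2 so decisively.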

As the points are spiralling in towards the fixed point and μ is gradually raised, we have in J_3 what was happening to I: the points collapsing towards the fixed point, until there is a point left out, specifically when the distance F_μ(1/2) − (1 − 1/μ) grows large enough compared with 1 − (1 − 1/μ);⁶ or, more visually, when the stretching of J_3 (and J_1) creates more points than the available points in J_1 ∪ J_2, so that a point is left out that will never spiral onto 1 − 1/μ.⁷ All this happens at μ = 3 and, as at μ = 1, the point left out is a post-image of 1/2, only this time we have, for 0 < n, F_3^{2n−1}(1/2) → J_3 and F_3^{2n}(1/2) → J_2 as n → ∞; so we have a fixed point created in J_3 and one in J_2, thus initiating the famous period-doubling route to chaos. Before we follow this route, we need to note what happens to 1 − 1/μ, our current fixed point. Just as at μ = 1 the fixed point at 0 changed from attracting to repelling, in the same manner the nature of 1 − 1/μ changes, only with an important difference: as we have already noted, unlike for 0, the number of pre-images of 1 − 1/μ is infinite, but at least countable, for now.

It is time to provide the promised further justification for neglecting the regions 4 < μ and μ < 0. The creation of the two fixed points by entrapment is exactly what happens to points on I for 4 < μ, with the slight difference that the trapped points for 4 < μ collapse to 0; basically, the top part of the parabola is protruding over some value, and the points that happen to be on it are captured; Figure 1-5 displays this process. For μ < 0 consider F_3^2(J_2): it is an inverted parabola, differing from F_μ^1 for 0 < μ only by a shift of 2/3 and a small curvature deviation caused by the higher-power terms of x, see Figure 1-6(b). In fact we can take F_3^2, or F_μ^2 for any μ for that matter, and, using the renormalization operator, translate it to a form on I, as follows. Define x* = 1 − 1/μ and x̂ = 1/μ.

⁶ Compare with F_1(0.5) and (1 − 0).
⁷ This is really a simplistic picture, as it can be argued that there are always enough points in an interval, but its visual aid is of great value.

L_μ(y) = (y − x*) / (x̂ − x*)   and   L_μ^{−1}(y) = x* + (x̂ − x*) y.

We have:

RF_μ(x) = L_μ( F_μ^2( L_μ^{−1}(x) ) ) = L_μ ∘ F_μ^2 ∘ L_μ^{−1}(x),

where RF_μ is the renormalized function of F_μ^2 that translates J_2 = [1/μ, 1 − 1/μ] onto I = [0, 1], both by reflecting it upwards and rotating it around so that 1 − 1/μ → 0 and 1/μ → 1; see Figure 1-6 for two explicit examples of renormalization, relating to the issues of μ < 0 and 4 < μ. This should make it clear how everything that happened to F_μ^1 will happen to F_μ^2; of course, appropriate L_μ and L_μ^{−1} can be found for all n, but this is beyond our current scope; see the work of Kenneth G. Wilson [41, 42]. This sidetrack⁸ was intended to show how the actual value of μ is very relative, as well as the actual interval of x, or, to be more precise, the actual set of x values.

We return to the 3 < μ region and how the complexity of behaviours increases exponentially with increasing μ. At the moment we have two attracting fixed points, each with its two pre-images: one of them being the other fixed point, while the other pre-image has its own two pre-images, and so on. The process we have explored up to now will repeat, in fact infinitely many times, as μ is raised further. We will have a super-attractive case, when F_μ^2(1/2) = 1/2, at μ = 1 + √5 ≈ 3.2361, and of course the trapping of yet another two points for each one of the current two-cycle fixed points, and thus the creation of the four-cycle, at μ = 1 + √6 ≈ 3.4495, and of course the change of the two-cycle fixed points into repelling points. The four-cycle will become an eight-cycle, and so on for all powers of 2. In our visual picture we still have the action of the primordial fixed point, 1 − 1/μ: even though at 3 < μ it is repelling, its pivot action in the spiralling-in still exists. In addition we have more local effects from the newborn fixed points acting as pivots for a spiralling-in of their local surrounding points. In effect we have vortices within vortices.
⁸ Another reason for this diversion is to provide a small glimpse, without going into details, of the picture of F_μ^{2^k} as k → ∞, which we will meet soon.
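The renormalization operator just defined is also easy to experiment with numerically. A sketch (μ = 3.2 is an arbitrary example): RF_μ sends both endpoints of I to 0, just as F_μ itself does, because L_μ^{−1} maps {0, 1} onto {x*, x̂}, and both of those land on the fixed point x* after two applications of F_μ.

```python
def F(mu, x):
    """Logistic map F_mu."""
    return mu * x * (1 - x)

def RF(mu, x):
    """Renormalized map RF_mu = L_mu o F_mu^2 o L_mu^{-1}, with
    x* = 1 - 1/mu (the fixed point) and x^ = 1/mu (its pre-image)."""
    xs = 1 - 1 / mu                 # x*
    xh = 1 / mu                     # x^
    z = xs + (xh - xs) * x          # L_mu^{-1}(x)
    z = F(mu, F(mu, z))             # F_mu^2
    return (z - xs) / (xh - xs)     # L_mu
```

RF(3.2, 0.0) and RF(3.2, 1.0) both come out as 0 (up to rounding), mirroring F_μ(0) = F_μ(1) = 0, while the interior of I is mapped to positive values: the rescaled second iterate is again a unimodal map on [0, 1].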

Figure 1-5. Exhibition of the trapping of two points in two different configurations of F_μ. (a) Plot of both F_{3.2}^1, solid line, and F_{3.2}^2, dashed line, showing the eventual evolution of different initial points to the 2-cycle fixed points. The solid triangles mark the sequence between the two fixed points. Note that F^2 and the diagonal can be used, like F^1 in Figure 1-2, to track the 2n-step evolution of points. (b) Plot of F_{2+√5}^1 as well as the evolutions of some different initial points under multiple applications of F_{2+√5}. Note that two points are trapped by F_{2+√5}^1, while their four pre-images are trapped by F_{2+√5}^2, and so on to F_{2+√5}^4, and even further to F_{2+√5}^8.

As more and more fixed points are created, the gaps between them, in which all this swirling takes place, get smaller and smaller, leading to smaller and smaller increments of μ needed to create the subsequent doubling; we will return to this evolution in Section 1.2.2 when we talk about Universality. For now we are interested in what happens at the accumulation point, also known as the Feigenbaum point, named after its discoverer Mitchell Feigenbaum, μ_∞ ≈ 3.5699. Let us attempt to account for all the points in I and their behaviour, starting with the easiest part, {0, 1}, the two points that collapse to 0. We have the fixed point of F_μ^1, that is 1 − 1/μ; with it we still have its pre-image 1/μ, and, going further

Figure 1-6. Renormalization of two cases of F_μ^2. (a) Case of F_{3.8}^2, for which L_{3.8} ∘ F_{3.8}^2 ∘ L_{3.8}^{−1}(x) translates into (b), a plot of RF_μ(x), showing how representative points, marked by symbols, are translated onto points of similar topology. It should also make clear how cases of 4 < μ are contained in some form in 0 < μ ≤ 4. (c) Case of F_3^2, for which L_3 ∘ F_3^2 ∘ L_3^{−1}(x) translates into (d), a plot of RF_μ(x), showing how representative points, marked by symbols, are translated onto points of similar topology. It should also make clear how cases of μ < 0 are contained in some form in 0 < μ ≤ 4.

back in time, all the points that lead to it, as we mentioned an infinite number of them. From F_μ^2 we have the two fixed points, of course different from 1 − 1/μ and 1/μ; each of these two fixed points has the other one as one of its pre-images, but in addition each point has one more pre-image, which of course has its own pre-images, once again infinite in number. Continuing this for each F_μ^n with n = 2^k as k → ∞, we can account for all fixed points of periodicity a power of 2, as already explained above through the period-doubling route, and all their pre-images. With all these points accounted for, it might seem we have run out of points for F_μ^{2^k} at k → ∞ to have any fixed points; in fact, though, we still have just as many points as we have accounted for: we have a Cantor set once again, and technically our first instance of chaos. The details of the behaviour at the Feigenbaum point (μ_∞) of F_μ^2,

and, for that matter, of F_μ^1, are complicated to describe, both visually and mathematically, without the introduction of symbolic dynamics, further exposition of Cantor sets, and the renormalization group operator, concepts which are beyond our current scope. Instead we will proceed by increasing μ further and show how F_μ^1 becomes chaotic on the whole interval I = [0, 1] at μ = 4, at which point the behaviours are the same as for F_μ^2 at μ_∞, but easier to describe and visualize.

We utilize the period-doubling route to the Feigenbaum point to introduce the second requirement for chaos, fixed points of periodicity three. The original definition of a chaotic system by Devaney [36] required of a system: (i) topological transitivity, (ii) the set of periodic points to be dense in the space of the system, and (iii) sensitive dependence on initial conditions. Banks et al. [43], though, showed how transitivity and density of the set of periodic points imply sensitive dependence on initial conditions, while Li and Yorke [44] proved that the existence of a period-three fixed point implies that the set of periodic points is dense; in fact they also show that the set of non-periodic points is dense. Before Li and Yorke, though, it was Sharkovsky [45, 46] who introduced his famous theorem,⁹ which actually claims more than just that period three implies all other periodicities: it provides an ordering of the natural¹⁰ numbers that will guide us in finding fixed points of periodicities other than a power of 2.

Theorem 1.1. Sharkovsky's Theorem. Given a continuous map f : R → R, consider the ordering of the natural numbers (where a ≻ b means a precedes b):

3 ≻ 5 ≻ 7 ≻ ⋯ ≻ 2·3 ≻ 2·5 ≻ ⋯ ≻ 2²·3 ≻ 2²·5 ≻ ⋯ ≻ 2^n ≻ ⋯ ≻ 2² ≻ 2 ≻ 1,

⁹ Even though the theorem applies only for maps on the real line, its strong conclusion makes it important to note in any exposition of chaos.
¹⁰ Actually the ordering covers more than just the natural (N) numbers: it covers as many periodicities as all the reals (R) [47].

and if the map has a fixed point of periodicity q, then for every p with q ≻ p the map also has a fixed point of periodicity p. If the map does not have a fixed point of periodicity p, then it does not have a fixed point of any periodicity q with q ≻ p. Further, for any p a continuous map does exist which has fixed points of periodicity p but no fixed points of any periodicity q with q ≻ p.

The most obvious implication of Sharkovsky's theorem, which is analogous to the Li-Yorke theorem, is that if a map has a fixed point of period three then it has fixed points of all natural-number periodicities. Of course, the specification of the ordering allows us to say more: if a map has a finite number of periodic points then their periods are all powers of 2, and, conversely, if a map has a periodic point whose period is not a power of 2 then it has an infinite number of periodic points. The second part of the theorem allows for the existence of maps with an infinite number of points of distinct periodicities and yet without all natural-number periodicities. Actually, the existence of such maps can be utilized in the proof of the theorem to establish this ordering of the numbers and, perhaps more importantly, to guarantee the success of the threshold control mechanism for chaotic systems, see Section 1.2.3. Furthermore, there is a corollary to Sharkovsky's theorem which states that if the map f depends on a varying parameter, the ordering of the birth of new periodic fixed points is given by the Sharkovsky ordering (read in reverse), which brings us back to F_μ and the further changes as we increase μ. We have already seen the birth of all periods that are powers of 2, up to the Feigenbaum point; our next step is to show the birth of all other fixed points of even periodicity by increasing μ over the Feigenbaum point towards μ_2,¹¹ and to show how this region of μ allows only even periodicities.
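The ordering in Theorem 1.1 can be generated mechanically; a small sketch (restricted, for illustration, to the natural numbers up to a chosen bound):

```python
def sharkovsky_order(limit):
    """The Sharkovsky ordering of 1..limit: the odd numbers >= 3, then
    2 * (odds), 4 * (odds), ..., and finally the powers of two, descending."""
    order = []
    power = 1
    while 3 * power <= limit:
        order.extend(power * odd for odd in range(3, limit // power + 1, 2))
        power *= 2
    # what remains are the pure powers of two, listed in decreasing order
    p = 1
    while 2 * p <= limit:
        p *= 2
    while p >= 1:
        order.append(p)
        p //= 2
    return order
```

For example, `sharkovsky_order(10)` gives `[3, 5, 7, 9, 6, 10, 8, 4, 2, 1]`: period 3 heads the ordering, so it implies all the rest, and the powers of two close it, matching the observation that a map with finitely many periodic points has only power-of-two periods.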
Readjusting the intervals under observation, we define J_4 = [F_μ^2(1/2), F_μ(1/2)] or, more clearly, writing h ≡ F_μ with h(1/2) = μ/4, J_4 = [h(μ/4), μ/4]; in addition we define J_5 = [h(μ/4), 1 − 1/μ]

¹¹ μ_2 is the real root of μ³ − 2μ² − 4μ − 8 = 0, with the algebraic expression μ_2 = (2/3)[4/u + u + 1], where u = (19 + 3√33)^{1/3}, so μ_2 ≈ 3.6786; obtained either from F_μ^2(1/2) = 1/μ or from sup{F_μ^{−1}(1/μ)} = μ/4.

and J_6 = [1 − 1/μ, μ/4]; we therefore have F_μ(J_5) ⊆ J_6 and F_μ(J_6) ⊆ J_5, for μ_∞ < μ < μ_2. Before focusing on these intervals, let us consider the rest of the points in I. Of course F_μ({0, 1}) = 0, nothing new there; F_μ : (μ/4, 1) → (0, h(μ/4)) and F_μ^n : (0, h(μ/4)) → J_4 as n → ∞. So it is clear that all points in I are eventually mapped inside J_4, and once there we have … → J_5 → J_6 → J_5 → …, so it is impossible to have a point that is mapped back onto itself after an odd number of applications of the map; furthermore, points of all even periodicities are possible and do exist. At μ_2 we can see what happens to F_μ^2, and it is precisely what has happened at each other even periodicity while increasing μ from the Feigenbaum point, with a small but important difference in the behaviour of 1/2. Specifically, F_{μ_2}^2(1/2) = 1/μ; that is, 1/2 is mapped to the pre-image of the primordial fixed point, so 1/2 is a pre-image of a fixed point, but not itself fixed as in the super-attractive case. Visually, this means that there is at least one interval that is completely wrapped around onto itself, in contrast to previously, when 1/2 was eventually mapped onto a fixed point. We can also now see the difference between eventually invariant and invariant: F_{μ_2}^2(J_5) = J_5 and F_{μ_2}^2(J_6) = J_6, note the equality, and see Figure 1-7(a) for a visual of the intervals. Perhaps most importantly, both pre-images of 1/μ are now less than μ/4, which implies that both have two real pre-images, and at least two of those pre-images each have two pre-images of their own, and so on; the fixed point 1 − 1/μ now has a set of pre-images which forms a Cantor set. When we put all these implications together, along with the fact that the Sharkovsky ordering for F_μ^2 is the same as the ordering for F_μ^1 without the last leg of the odd integers (so all natural-number periodicities exist for F_{μ_2}^2), it should be clear that F_{μ_2}^2 is chaotic.
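The defining property of μ_2, namely F_{μ_2}^2(1/2) = 1/μ_2 (footnote 11), can be confirmed directly from the closed-form expression; a short sketch (the closed form is as quoted in the footnote):

```python
# Closed form for mu_2, the real root of mu^3 - 2*mu^2 - 4*mu - 8 = 0:
u = (19 + 3 * 33 ** 0.5) ** (1 / 3)
mu2 = (2 / 3) * (1 + u + 4 / u)     # ~3.678573...

def F(x, mu=mu2):
    return mu * x * (1 - x)

x2 = F(F(0.5))   # F_mu2^2(1/2): lands on 1/mu2, the pre-image of the fixed point
x3 = F(x2)       # ... so one more step reaches the fixed point 1 - 1/mu2 itself
```

That the critical point 1/2 lands, after two steps, exactly on the pre-image of the primordial fixed point (and hence, after three, on the fixed point itself) is the "completely wrapped around onto itself" picture described above.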
It is important to note that F_{μ_2}^2 is chaotic on J_4, not the whole of I.¹² We are therefore now left with the last leg of the Sharkovsky ordering, the odd-number periodicities, the most important obviously being period three, and the easiest one to visualize since it is relatively short. But before we focus on period three, let us see

¹² Actually F_μ^2 is independently chaotic on J_5 and J_6.

how odd periodicities become possible in the first place. So at μ_2 < μ we have seen that F_μ^2(1/2) < 1/μ, so F_μ^2(J_2) ∩ J_1 ≠ ∅ or, to relate two sets from different views, J_4 ∩ J_1 ≠ ∅; let us call this intersection J_7 ⊂ J_5. Most importantly, J_7 < 1/μ implies F_μ(J_7) < 1 − 1/μ, which implies F_μ(J_7) ⊂ J_5, so we have

J_7 → J_5 → J_6 → J_5 → ⋯ → J_6 → J_7,

with the middle J_5 → J_6 portion repeated infinitely⁹ many (even numbers of) times; hence the highest odd-periodicity orbit is born, see Figure 1-7(b).

Figure 1-7. Demonstration of the birth of odd-periodicity fixed points. (a) Plot of F_{μ_2}, solid line, and F_{μ_2}^2, dashed line. We also mark the intervals J_4, J_5 and J_6; it should be clear that F_{μ_2} : J_5 → J_6 and F_{μ_2} : J_6 → J_5, and F_{μ_2}^2 is chaotic on both these intervals. Note also that F_{μ_2} : I − ({0, 1} ∪ J_4) → J_4. (b) Plot of F_μ, solid line, and F_μ^2, dashed line, at a value of μ for which the period-three fixed points are attractive. Note how J_5 ∩ J_1 = J_7. Within J_7 we have one of the period-three trapping regions, see inset. (c) Inset, showing a plot of F_μ^3(x), for the same μ and x ∈ [0.1358, ...], topologically the same as some F_μ(x), for x ∈ I.

⁹ Or none, in the case of period three.

We return now to the issue of the birth of the fixed points of periodicity three for F_μ^1. At μ_3 = 1 + 2√2 ≈ 3.8284¹⁴ we have the birth of the period-three orbit; the simplest way to view it is that solutions of the equation F_μ^3(x) − x = 0 become real from complex. In terms of our visual picture, we have a growth factor which is large enough for the image of 1/2 to fall outside the trapping region. The situation is very similar to what happened at μ = 1, when we had the introduction of a new fixed point in I through F_1(I_2) ⊂ I_2, only this time the set of this intersection does not contain a post-image of 1/2, so this set contains some other point, or more correctly points. Since now it is not the maximal point that gets captured, the intersection cannot be collapsed to a single point: there will always be at least two points, marking the boundaries of this intersection interval, and since at least both these points are trapped we have two new fixed points for each trapping region. Once we consider that we are dealing with F_μ^3, there are three trapping regions, so a total of six new fixed points which, along with the two primordial ones, {0, 1 − 1/μ}, makes eight in total, agreeing with the fact that F_μ^3 is an eighth-degree polynomial.¹⁵ The nature of these points is that one of each pair is attracting and the other repelling; since we are talking about an interval being trapped, the points that actually exist between the two fixed points are also trapped, by eventually collapsing to the attracting fixed point. So the overall situation in a trapping region is identical to the situation we had in I for 1 < μ, see inset (c) of Figure 1-7.
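That an attracting period-three orbit exists just above μ_3 = 1 + 2√2 can be seen numerically; a sketch (μ = 3.835 is an arbitrarily chosen value inside the period-three window):

```python
MU = 3.835   # inside the period-three window, just above mu_3 = 1 + 2*sqrt(2)

def F(x, mu=MU):
    return mu * x * (1 - x)

# let the transient die out; the orbit settles on the attracting 3-cycle
x = 0.5
for _ in range(5000):
    x = F(x)

cycle = sorted([x, F(x), F(F(x))])   # the three points of the orbit
```

Each point p of the cycle satisfies F³(p) = p while F(p) ≠ p, i.e. it is fixed for F³ but not for F, which is exactly the six-new-fixed-points picture above: three attracting points (this cycle) plus their three repelling partners.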
Therefore it should be of no surprise that everything that we have talked about up to now for I happens within each trapping region, over and over again, as we increase μ even further; in some sense this is the reason why period three implies chaos, and not simply the fact that it allows all other periodicities, even though one argument implies the other.

¹⁴ Derivation of this value can be found in [48-50].
¹⁵ In fact, since F_μ is a second-order polynomial, any F_μ^k will be a polynomial of order 2^k, so any equation F_μ^k(x) − x = 0 will have an even number of real roots.

Finally, μ_3 < μ ≤ 4, with μ = 4 the most important part, shown in Figure 1-8. In the region between μ_3 and 4 everything we have seen repeats an infinite number of times. In some subset of I or other, we have the creation of new fixed points that turn from stable to unstable, giving birth to more and more fixed points in the same way, all the way to the creation of the period-three orbit for the subset in question, when, of course, we start over again! At μ = 4 we have the final brick in the wall: F_4(1/2) = 1.¹⁶ We have seen the consequences of this before, at μ_2: 1/2 is mapped (via 1) to 0, the other fixed point, and, as before but more importantly, the pre-images of 0 form a Cantor set; after this long journey we finally have F : I → I chaotic on the whole interval I. We have seen all that has been created as we increased μ, and everything is still there, so we will not repeat ourselves. Instead, and in conclusion to this section, we will provide a holistic picture of the points in I. We have a set of points which are fixed, of some period or other, and this set is dense in I. There is a difficult issue here: we can exclude from this set the fixed points of F_μ^{2^∞},¹⁷ and we still have the fixed points of all the enumerable periodicities, and this set is still dense in I. The set of fixed points of F_μ^{2^∞} we can now put on their own; it is in itself a Cantor set, of course dense in I, and, even more, uncountable. We are not done, though: there are still many points left, specifically the pre-images of all fixed points of enumerable periodicity, which, as we have seen, form for each fixed point a Cantor set of their own. Hence we can consider the following two sets: the fixed points of F_μ^{2^∞} (and their pre-images in some sense), and the set of all other fixed points and their pre-images; both sets are an infinite collection of Cantor sets, both dense in I, both uncountable.
¹⁶ Subsequently we shall drop the subscript on F_4 and refer to the function at this parameter value simply as F.
¹⁷ Note that μ here is not the Feigenbaum-point value of μ; instead we are considering all points of all F^{2^∞}-type functions, even the ones created after period three restarts the process.

For its

immense importance, our final point in this section is to emphasise the following: we have seen how a function on its own is not chaotic, just as a set on its own is not chaotic; it is the combination of a many-to-one function on a specific set, under specific parameters, that creates chaos.

Figure 1-8. Plot of the function F : I → I. Also marked are the intervals I_1, I_3 and I_2, the last broken up into I_2L and I_2R, which are the two intervals that are mapped completely onto each other.

Table 1-1. Summary of the transitions in behaviour of the different intervals described in Section 1.2. Each arrow represents a single application of F_μ; a double arrow (⇒) indicates that more than one point is trapped in the interval, while a single arrow (→) indicates that a single point is trapped (typically an endpoint of the interval).

μ ∈ (0, 1):      I = I_1 ∪ I_2 ∪ I_3;  I_1 → I_2 ← I_3, with a single point trapped in I_2.
μ ∈ (1, 2):      I = J̃_1 ∪ J̃_2 ∪ J̃_3;  J̃_3 ⇒ J̃_1 ⇒ J̃_1 and J̃_2 ⇒ J̃_2.
μ ∈ (2, 3):      I = J_1 ∪ J_2 ∪ J_3;  J_1 ⇒ J_1 ∪ J_2, J_2 ⇒ {1 − 1/μ} ∪ J_3, J_3 ⇒ J_1 ∪ J_2.
μ ∈ (3, μ_∞):    as above, with period-doubling inside the trapping regions.
μ ∈ (μ_∞, μ_2):  I ⊃ J_4 = J_5 ∪ J_6;  J_5 ⇒ J_6 ⇒ J_5.
μ ∈ (μ_2, μ_3):  as above, with J_7 = J_4 ∩ J_1 and J_7 ⇒ J_5 (the transition shown actually occurs at μ = μ_2).
μ ∈ (μ_3, 4]:    odd periodicities appear; each trapping region repeats the whole structure.

1.2.2 The Tent Map, Topological Conjugacy and Universality

This section will briefly introduce the Tent map, which, along with the Logistic map, is one of the two main systems we use for chaos-based algorithm development. The Tent map is a piecewise linear map that can also exhibit chaos. Instead of presenting the chaos in the Tent map in detail, as we have done in the previous section for the Logistic map, we will introduce the concept of Topological Conjugacy, which relates one map to the other, guaranteeing that the properties and behaviours of one map exist in the other. In closing the section we will move further than Topological Conjugacy, to the promised property of chaos known as Universality. Universality plays a very important role in computation with chaotic systems, since it releases us from confinement to any single physical realization; as long as a system is chaotic, the algorithms we develop can be implemented by the system, regardless of whether the nature of the system is electrical, optical, chemical, or even a realization in some other physical realm.

The Tent map. Just like the Logistic map, the Tent map is discrete in time, but instead of a continuous polynomial it is a piecewise linear mapping (with a discontinuous derivative at x = 1/2), of the form

    T_μ^{n+1}(x_n) = x_{n+1} = { μ x_n,        for x_n < 1/2,
                                { μ (1 − x_n),  for 1/2 ≤ x_n,          (1-3)

where x ∈ [0, 1] (I), 0 < n, and for our purposes we set μ = 2.¹⁸ On this domain and at this μ value the Tent map, except for the curvature from the polynomial terms, looks and behaves like the Logistic map; we can even mark sub-intervals Ĩ_1 = [0, 1/3], Ĩ_2 = [1/3, 2/3] and Ĩ_3 = [2/3, 1] with behaviour similar to the I_i of F, see Figure 1-9. In addition, the bifurcation diagram for the Tent map is practically the same as for the Logistic map, see

¹⁸ As we have done for the Logistic map, for the Tent map in the case of n → n + 1 and μ = 2 we will not use the super- and sub-scripts and refer to the function simply as T, unless emphasis is required.

Figure 1-10; even though it looks more compressed and the bifurcations for the period doubling are closely packed, basically everything that we showed for the Logistic map applies to the Tent map as well. We can actually show this identity mathematically through the Topological conjugacy of the two maps.

Figure 1-9. Plot of the function T : I → I. From the graph we can see how T : Ĩ_1 → Ĩ_1 ∪ Ĩ_2, T : Ĩ_3 → Ĩ_1 ∪ Ĩ_2, and T : Ĩ_2L → Ĩ_3 and T : Ĩ_2R → Ĩ_3; and of course how 1/2 is a pre-image of 0.

Topological Conjugacy. Topological conjugacy has its roots in set theory. Consider two distinct sets, X and Y, and for each set a relation, say P and Q respectively, between its individual elements. For each set consider the larger set of the union of the elements that are in the domain and converse domain of the relation,

Figure 1-10. Bifurcation diagram for the Tent map: eventual evolution of an arbitrary initial state x_0. Inset: magnification of the region μ ∈ [1, 1.1] and x ∈ [0.495, 0.5], showing the period-doubling more clearly.

the field of the set; for example, given x ∈ X with P(x) = x′, both x and x′ are in the field of X and P, and analogously for Y and Q. If there is a relation S which is one-to-one and has as domain the field of {X, P} and as converse domain the field of {Y, Q}, then the relations P and Q are similar; that means P and Q have identical properties, in other words their effects on their respective sets are indistinguishable. When the sets considered are topological spaces, as in the case of the Logistic and Tent maps on I, and the relations are actual functions, then the functions are said to be Topologically conjugate to each other. To make this concept more concrete, and show, as promised, that there is no difference between the Logistic and Tent maps, consider the two functions F : I → I and T : I → I; the fact that I acts as the domain and converse domain for both functions should not worry us, for two reasons: first, the sets (or spaces) considered are actually arbitrary

Table 1-2. Topological conjugacy. Given that the Tent map takes every x ∈ I to some x′ ∈ I, and the Logistic map takes every y ∈ I to some y′ ∈ I, they are topologically conjugate when a function G exists that takes every x ∈ I to a y ∈ I and every x′ ∈ I to a y′ ∈ I, so that G ∘ T(x) = F ∘ G(x) = y′. Specifically, G(x) = sin²(π x / 2).

        x ∈ I   --T-->   x′ ∈ I
         |G               |G
        y ∈ I   --F-->   y′ ∈ I

and in reality unchanged;¹⁹ it is the relations (or functions) that we will actually manipulate. Second, the actual set of points in I for each function is different: both functions are applied on the same interval, which contains the same points, but the points themselves are different for each function, except {0, 1/2, 1}. So, following Table 1-2, we have the relationships T : I → I, F : I → I, G : I → I and, of course, for any x, y, x′, y′ ∈ I, T(x) = x′, F(y) = y′, G(x) = y and G(x′) = y′, which imply G ∘ T(x) = G(x′) = y′ and F ∘ G(x) = F(y) = y′. For the Logistic and Tent maps the conjugating function is given by G(x) = sin²(π x / 2); specifically:

    y_1 = F ∘ G(x_0):
        y_1 = 4 y_0 (1 − y_0)
            = 4 sin²(π x_0 / 2) (1 − sin²(π x_0 / 2))
            = 4 sin²(π x_0 / 2) cos²(π x_0 / 2)
            = sin²(π x_0),

    y_1 = G ∘ T(x_0):
        for x_0 < 1/2:              for 1/2 ≤ x_0:
        x_1 = 2 x_0                 x_1 = 2 (1 − x_0)
        y_1 = sin²(π x_1 / 2)       y_1 = sin²(π x_1 / 2)
            = sin²(π x_0)               = sin²(π − π x_0)
                                        = sin²(π x_0).

¹⁹ You can actually consider each domain and converse domain as fixed points in a set, and the relations as just a virtual link between these points in each set, not actually affecting the points.

Of course, by induction, if not by simply considering function composition, this result applies for any, and at any, n: F^n ∘ G = G ∘ T^n; see Figure 1-11 for an illustration. When we consider the spatial correlation between functions given by Topological conjugacy, in addition to the temporal correlation between a function and its future iterates (e.g. F and F^n) given by renormalization, we have a universal correlation between any functions that exhibit similar behaviours.

Figure 1-11. Topological conjugacy between evolved states, up to n = 5. The dashed line follows the evolution of G(x_0) under the Logistic map (F), while the dash-dotted line follows the evolution of T^n(x_0). The action of G(x), both as the first step (x_0 → y_0) and as the last step (x_5 → y_5), is shown along the triangular arrows. (f(x) is {F (parabola), T (piecewise linear), G (sigmoid), I (identity)}.)
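The relation F^n ∘ G = G ∘ T^n is also easy to verify numerically; a sketch (initial states arbitrary, and n kept small, since rounding errors are themselves amplified by the chaotic dynamics):

```python
import math

def F(y):
    """Logistic map at mu = 4."""
    return 4 * y * (1 - y)

def T(x):
    """Tent map at mu = 2."""
    return 2 * x if x < 0.5 else 2 * (1 - x)

def G(x):
    """Conjugating function G(x) = sin^2(pi*x/2)."""
    return math.sin(math.pi * x / 2) ** 2

def conjugacy_gap(x0, n):
    """|G(T^n(x0)) - F^n(G(x0))|: zero, up to rounding, by conjugacy."""
    x, y = x0, G(x0)
    for _ in range(n):
        x, y = T(x), F(y)
    return abs(G(x) - y)
```

For small n the gap is on the order of machine precision; for large n the two orbits drift apart only because floating-point rounding is stretched by the dynamics, not because the conjugacy fails.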

Universality. The theory of Universality is, as the name implies, extremely wide and intricate. It correlates many branches of Physics, from the theory of critical phenomena to Hamiltonian mechanics, and many branches of Mathematics, from statistics to vector spaces. The aim of this section is to present some important qualitative results, both as a hint of the proof of Universality and as a statement of its main consequence: the independence of behaviours from the actual physical system or mathematical interpretation. The most formal origins of Universality come from Mitchell Feigenbaum [29, 51, 52], circa 1975, when he discovered the universal constant δ = 4.669… Originally δ was observed within a single system as the rate of onset of period doubling:

(μ_{2^{n+1}} − μ_{2^n}) / (μ_{2^{n+2}} − μ_{2^{n+1}}) → δ, as n → ∞,    (1-4)

where μ_{2^k} is the μ value of onset of the 2^k cycle; in terms of Figure 1-12 it is the limit of the sequence b₁/b₂, b₂/b₃, b₃/b₄, … → δ. Almost immediately after this initial observation came both the theoretical and experimental confirmation (see Table 1-3) that extended the existence of δ from within a single system to every system that undergoes period doubling. More importantly, and as the actual basis of Universality, out of the theoretical treatment emerges a convergent function that encapsulates all such systems. A very rudimentary way to describe this function is the following: consider any function that relates points in a set in the manner we have been concerned with in this chapter. Further, consider each such function as itself a point in a set of functions; these function-points converge to a single function-point, in some sense just as the ratios bᵢ/bᵢ₊₁ converge to δ.
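The convergence in Equation 1-4 can be observed numerically. The sketch below is our own construction, not a procedure from the text: instead of the onset values μ_{2^n}, it locates the superstable parameters s_n, at which the orbit of the critical point x = 1/2 closes after exactly 2^n iterations; their spacings shrink by the same ratio δ.

```python
def logistic(x, mu):
    return mu * x * (1 - x)

def g(mu, n):
    # g(mu) = F_mu^(2^n)(1/2) - 1/2; its roots are superstable parameters
    x = 0.5
    for _ in range(2 ** n):
        x = logistic(x, mu)
    return x - 0.5

def superstable(guess, n):
    # Newton's method on g, with a central finite-difference derivative
    mu, h = guess, 1e-8
    for _ in range(40):
        d = (g(mu + h, n) - g(mu - h, n)) / (2 * h)
        mu -= g(mu, n) / d
    return mu

s = [2.0, superstable(3.2, 1)]          # periods 1 and 2: mu = 2 and 1 + sqrt(5)
for n in range(2, 7):                   # periods 4, 8, ..., 64
    # extrapolate the next guess geometrically, then refine with Newton
    s.append(superstable(s[-1] + (s[-1] - s[-2]) / 4.7, n))
deltas = [(s[i] - s[i - 1]) / (s[i + 1] - s[i]) for i in range(1, 6)]
print(deltas)   # the ratios approach Feigenbaum's delta = 4.669...
```

The fixed extrapolation factor 4.7 is only a convenience for supplying Newton with a good starting point; the estimated ratios themselves settle near 4.669 regardless.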
We have seen in the previous section how x*, n, and μ of F can all be varied in some way to create a combination of F and an interval from which chaos emerges; now we see that the functional form of F itself can also vary.

Threshold Control and Excess Overflow Propagation

In this short section we will introduce two processes which have extended the breadth of influence of chaos theory, and are widely used in algorithms of chaos-based

Figure 1-12. Logistic map bifurcation diagram for some values within 3 < μ < μ∞. The // marks on the axis indicate discontinuity in the displayed points; b₁ and b₂ are actually longer than shown. For the Logistic map the sequence of ratios b₁/b₂, b₂/b₃, b₃/b₄, … converges to δ. In other systems the bᵢ can be different from those of the Logistic map, but the convergent value of the ratios is the same. Inset: Figure 1-1.

Table 1-3. Experimental measurements of Feigenbaum's constant (δ) in different systems, based on their period doubling. (Adapted from [53].)

System:     Water (hydrodynamic)  Mercury (hydrodynamic)  Diode (electronic)  Transistor (electronic)  Laser (optic)  Helium (acoustic)
Observed δ: 4.3 ± …  … ± …  … ± …  … ± …  … ± …  … ± 0.6

computation. We will not go into detail on either process, as both are now quite extensive subjects in their own right. Control and synchronisation of chaotic systems [13-15, 54-64] have been studied for almost 20 years now, with numerous results; we simply demonstrate the threshold control mechanism, as already mentioned in relation to the second part of the Sharkovsky theorem. The other process we introduce, excess overflow propagation [65-68], also

has a long history and connections to a wide variety of other fields, such as critical phenomena, phase transitions, cooperative behaviours, and more.

Threshold Control Mechanism. Following Sharkovsky's theorem, and specifically its second part, we know that a map exists for every periodicity, and more: once periodicity q is established, periodicity p is guaranteed for every p that follows q in the Sharkovsky ordering. Therefore if we start with a map which has period 3, the Logistic map F₄ for example, this same map can be of some other maximal²⁰ period simply by confining the domain of the map; even more, this period will seem attractive²¹. Basically, instead of looking for a whole new map of maximal periodicity q, we take F₄: I → I and consider the map which is the part of F₄ restricted to J → J, where J = (0, x*] and x* is the maximal point value of the sequence of points that form the desired orbit; hence the name of the method, threshold control. The mechanism can be defined as:

F*(x) = F(x),  for F(x) ≤ x*,
F*(x) = x*,    for F(x) > x*,    (1-5)

that is, the map F(x) is evolved as usual unless it exceeds x*, in which case it is limited back to x* and normal evolution continues; F*(x) is the resulting part of F(x) which is the map of the desired periodicity. See Figure 1-13 for a specific example of period-4 selection and a demonstration of the process. Specifically for the logistic map, the interval (3/4, 1) provides thresholds for periodic orbits of all orders greater than 2; for example 3/4 < x* ≤ … forces the map into a cycle of order 2, x* ≈ … of order 3, … < x* < … of order 4, and so forth; see Figure 1-14 for a more extensive list. Obviously thresholds from [0, 3/4] produce fixed points (period 1).

²⁰ Maximal in the sense of the Sharkovsky ordering.
²¹ Only because all other periods are repelling.
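Equation 1-5 is straightforward to simulate. A minimal sketch (our own; the threshold x* = 0.91 sits inside a period-4 window of the thresholded logistic map, which we verified by direct iteration):

```python
def logistic(x):
    return 4 * x * (1 - x)

def thresholded_orbit(x_star, steps):
    # Evolve F*: iterate F, and clip the state back to x* whenever F(x) > x*
    x, orbit = x_star, []
    for _ in range(steps):
        x = logistic(x)
        if x > x_star:
            x = x_star          # the excess x - x* would be emitted here
        orbit.append(x)
    return orbit

orbit = thresholded_orbit(0.91, 12)
print(orbit[:4])   # this 4-point cycle then repeats exactly
```

Because the clip resets the state to x* exactly, the orbit is exactly periodic from the start: every fourth iterate crosses the threshold and is pushed back, reproducing the period-4 selection of Figure 1-13.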

Figure 1-13. Threshold Control Mechanism. F₄, the dotted parabola, is confined to the interval J = (0, x*], and as a result the map F*₄ is produced, the solid parabola. In this case x* = …. Empty circles (○) mark three different initial conditions and their paths, in dotted lines, showing how they are pushed onto the period-4 sequence of points x₁, x₂, x₃, x₄, marked with full circles (●).

Excess Overflow Propagation. Up to now we have been considering, in some form or other, just a single chaotic system, and even though the potential of even a single chaotic system is immense, two is always more than one. Since we are considering more than one system, we need to define some way in which the two systems interact. Currently the method used in chaotic computation is the excess overflow propagation method. We define a monitoring value, x*, for the state of the emitting system, f(x⁽¹⁾);

Figure 1-14. Threshold values for confining the logistic map on orbits of periodicity 2 to 50.

once this monitoring value is crossed over by the actual state of the system, f(x⁽¹⁾) > x*, the difference between the actual state of the system and the monitoring value, the excess overflow E = f(x⁽¹⁾) − x*, is propagated to the receiving system(s): f(x⁽²⁾ + E). Note that the emitting system is not affected in any way, unlike with the threshold control method, where we confine the state of the system; also, at the receiving system the incorporation of the excess overflow can happen both before an iteration of the system, f(x⁽²⁾ + E), and after it, f(x⁽²⁾) + E. The method is very similar to the threshold control method, but independent, in the sense that the monitoring value can act as a threshold control but need not; the two values can be different, providing us more flexibility.

1.3 Conclusion

We have presented, as briefly as possible, the parts of Chaos Theory that are the basis for Chaotic Computation. Even though this exposition was made in the spirit of Set Theory and Logic, the exact connections were not actually laid out, since our forthcoming

presentation of algorithms developed for Chaotic Computation does not actually reach such lengths. The possibility is there, though, and we plan to explore this view further in our future work.

CHAPTER 2
INTRODUCTION TO CHAOTIC COMPUTATION

Chaotic computation is the exploitation of the rich behaviours of chaotic systems to perform computational tasks. This chapter deals with different manipulations of state variables that lead to the selection of a specific behaviour, without relinquishing access to other behaviours, demonstrating the inherent versatility. Natural parallelism emerges both through cooperative processes of independent chaotic systems and through the exploitation of multi-dimensional systems. A very good abstraction of chaotic computation is the translation of functions and data operators from the software realm onto the hardware: direct implementation of the objective task in hardware. Here we almost exclusively present the theoretical development of algorithms and methods of chaotic computation, and use the Logistic map to illustrate specific examples. For actual physical realizations, and verifications, of these results we direct the reader to results of electronic implementations, using Chua's circuit [69] and a Logistic map circuit [70, 71], and to simulation results [26] of an IR NH₃ laser using the Lorenz system [72, 73]. In addition we can direct the reader to the very recent physical realization results for chaotic computation using synchronization [74] and stochastic resonance [75].

2.1 Number Encoding

The primary requirement for any computational system is the representation of data. There need to be methods through which data can be recognized, stored, and reproduced; obviously these three processes alone do not make a very capable computer. The most universal language is mathematics, and so numbers are the most generic form of data representation. Therefore we begin with the different methods available in chaotic computation for number representation.

Excess Overflow as a Number

This is the simplest method for encoding numbers using a chaotic computer and follows from the excess overflow propagation mechanism.
For any given threshold (x*), the

amount by which the monitored system variable exceeds the threshold, E = fⁿ(x*) − x*, for some nth iteration such that fⁿ(x*) > x*, is named the excess overflow and is used to represent an integer. More specifically, we utilize an interval, K₁, in which the dynamical system that drives our chaotic computer contains fixed points for a single iteration, n = 1, under threshold control, i.e. f(x*) > x* for all x* ∈ K₁. Given a requirement of representing the set of integers {1, …, N}, we can find in the interval K₁ the x* which produces the largest value of excess, E_max, and equate this with the largest integer we wish to encode, N. As a result, we can define E_max/N ≡ δ, where δ is called the unit overflow, and every integer in the set {1, …, N} can be represented by an excess overflow of z·δ, where z ∈ {1, …, N}. Obviously the integer 0 is represented by zero excess, obtained by setting the system at a natural fixed point, i.e. fⁿ(x) = x for all n. As an illustrative example we use the Logistic map, Equation 1-1. For the Logistic map the interval [0, 0.75] produces fixed points under threshold control; note that 0 and 0.75 are the natural fixed points of this system and either can be used to represent the integer 0. By taking the derivative of F(x*) − x* we find the threshold that produces the maximum excess overflow at x* = 3/8, with an emitted excess of E_max = 9/16, see Figure 2-1. Given the set of integers [0, 100] we can encode them using threshold-controlled logistic map elements following the steps described above, i.e. δ = (9/16)/100 and x* = (3 − √(9 − 16·z·δ))/8 for z ∈ [0, 100], see Figure 2-2. The extensive versatility of this method will be demonstrated in the following sections.
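A sketch of this encoding for the logistic map. The closed-form threshold below is the smaller root of the quadratic 3x* − 4x*² = z·δ implied by E(x*) = F(x*) − x*, consistent with x* = 3/8 and E_max = 9/16 above:

```python
import math

E_MAX = 9 / 16        # largest one-step excess of the logistic map, at x* = 3/8
N = 100               # largest integer to be encoded
DELTA = E_MAX / N     # unit overflow

def threshold_for(z):
    # Solve F(x*) - x* = 3x* - 4x*^2 = z * DELTA for the smaller root
    return (3 - math.sqrt(9 - 16 * z * DELTA)) / 8

def emitted_excess(x_star):
    # Excess emitted on one chaotic update under threshold control
    return 4 * x_star * (1 - x_star) - x_star

z = 42
decoded = round(emitted_excess(threshold_for(z)) / DELTA)
print(decoded)  # recovers 42
```

Decoding simply divides the observed excess by the unit overflow δ and rounds, which is robust to floating-point noise in the threshold.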
We will show implementations of this method not only for other number representation methods, but also in algorithms for decimal and binary arithmetic, as well as for boolean logic operations.

Periodic Orbits for Number Representation

The most immediate extension of the excess overflow encoding method is to consider the behaviour of the dynamical system outside the region already being utilized, i.e. x* ∉ K₁. We can utilize the effect of the threshold control mechanism on this interval, K₂,

Figure 2-1. Emitted excess obtained by thresholding the logistic map in the interval [0, 0.75]. The threshold x* = 3/8 produces the largest excess, E_max = 9/16.

to stabilize the system onto a periodic orbit. Following Sharkovsky's theorem (see Theorem 1.1 on page 30), since we have a period-three orbit, orbits with periods of all other integer values, and more, are guaranteed to exist. Therefore it is an obvious extension to utilize the appropriate orbit to represent its respective integer number. More specifically, for each n > 1 we find an x* ∈ K₂ for which fⁿ(x*) > x* and f^m(x*) < x* for all m < n. So for every integer we have an x* that forces the system to emit excess at periodicity n ∈ {2, 3, …}, in some sense the converse of what is shown in Figure 1-14. Clearly any of the thresholds for n = 1 fixed points found in the previous method can be utilized for the representation of the integer 1, and as before a natural fixed point can be used for the integer 0. We will show implementations of this method for integer multiplication, for calculating the least common multiple of any set of integers, and, in conjunction with the excess

Figure 2-2. Encoding the set of integers {0, 1, …, 100}. We use δ = E_max/100 = (9/16)/100 to obtain each excess z·δ and the threshold x* that produces it.

overflow method, in a third method for representing numbers more in the spirit of binary representation.

Representation of Numbers in Binary

In this final example of number encoding methods we combine the previous two methods: we use both periodic orbits and excess overflow. We can represent a number in binary format by coupling together elements whose periodicity is determined by their position away from the radix point, as 2^(#digits − position). The farther away from the radix point an element is, the shorter the periodicity we give it, with the bit farthest away being on period one. The elements are joined together serially, so that the generic overflow generated by each element cascades through the array until it reaches the open end at the most significant digit, where we have the readout, see Figure 2-3. To be more specific, a binary number a_N … a_1 will be represented by N elements. Each element j, as in the encoding based on periodicity, will be set at a threshold so as to produce a

generic overflow at periodicity 2^(N−j) if the binary digit a_j = 1, and at a threshold of 0 if a_j = 0, thus generating no overflow. The resulting array is updated 2^(N−1) times, resulting in a multiple Σ_{j=1}^{N} a_j·2^(j−1) of the unit overflow at the readout. An illustrative example using the Logistic map is shown in Figure 2-3, where we are using four elements, N = 4, to represent the binary number 1111. Following the example, a₄, the farthest element from the radix point, is set to a threshold that emits excess on every update, i.e. periodicity one (x* ∈ (0, 0.75)); a₃ is given a threshold x* ∈ (0.75, 0.905), to produce excess on every second update; a₂ a threshold from x* ∈ (0.905, 0.925), for an excess every fourth update; and a₁ a threshold from x* ∈ (0.925, 0.926), for an excess every eighth update. As a result we will collect emitted excess of 8 + 4 + 2 + 1 = 15 units, respectively from each element, as is desired for encoding 1111 in binary. This method allows us to encode large numbers without the need for thresholds that produce proportionally large periodicities, or for thresholds that produce excess in steps of a very small δ. This is a good example of the flexibility of chaotic computation: in this specific example we sacrificed efficiency, in the number of elements per encoded integer, to allow short periodicities to encode larger numbers. Obviously, the method can easily be modified to encode numbers in any other base representation.

2.2 Arithmetic Operations

We have established not one but three methods for representing numbers; we should see them in action. The most obvious starting point is the simplest arithmetic operation, addition, which we extend into multiplication, which we extend to the least common multiple problem.

Decimal Addition

There are multiple chaotic computation algorithms for decimal addition. We will present three such algorithms, each one building on its predecessor, from serial addition to parallel to branching.
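The bookkeeping of the binary encoding above can be sketched abstractly, counting one unit of cascaded excess per emission (the thresholds themselves are abstracted away here):

```python
def encode_binary(bits):
    # bits = [a_N, ..., a_1], most significant digit first, as in the text.
    # Element j (j = 1 farthest from the readout) emits one unit of excess
    # every 2^(N - j) updates when a_j = 1; the array is updated 2^(N-1) times.
    N = len(bits)
    total_units = 0
    for step in range(1, 2 ** (N - 1) + 1):
        for j, a_j in enumerate(reversed(bits), start=1):
            if a_j and step % (2 ** (N - j)) == 0:
                total_units += 1    # one unit cascades to the readout
    return total_units

print(encode_binary([1, 1, 1, 1]))  # 15, as in the four-element example
```

Element j emits 2^(N−1)/2^(N−j) = 2^(j−1) times over the run, so the readout collects exactly Σ a_j·2^(j−1) units, the encoded binary value.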

Figure 2-3. Number encoding in binary format. The excess from each element cascades to the one above it until the readout is reached. The element closest to the readout, a₄, emits on every update, and as we move down the chain the elements emit with periods increasing in powers of 2. The overall result is that after 8 updates we have 8 units of excess from a₄, 4 from a₃, 2 from a₂ and 1 from a₁, giving us a total of 15 units. Any binary number can be represented with this method. (Adapted from [26].)

Serial. The most straightforward algorithm for addition utilizes the excess overflow encoding method in a serial manner. In reality it is a natural extension of the encoding method: since each number is encoded as a proportional excess, we can chain-link the elements and cascade the emitted excess from each element into its neighbour, all the way to the end of the chain. Each excess builds up on the one after it, all naturally summing up at the edge of the chain, see Figure 2-4. As we have seen in the section on excess overflow encoding, the unit overflow (δ) works as the proportionality constant between integers and emitted excess; therefore the addition of the integers i, j, k, l is simply replicated by

i·δ + j·δ + k·δ + l·δ = (i + j + k + l)·δ, using the avalanching of the excesses. For this algorithm the computational time depends on the adaptive process and the number of terms in the sum. Specifically, after a single chaotic update of all the elements, it takes as many adaptation steps as there are terms in the sum to complete the operation.

Figure 2-4. Serial Addition. We recruit as many elements as there are terms in a given sum. Each element is assigned a number from the sum, encoded using the excess overflow method. The elements are coupled together in a chain such that the excess can flow down the chain. The result is that at the open end we collect the sum of all the emitted excesses as a multiple of δ. (Adapted from [25].)

Parallel and Branching. Following from the previous example, consider the sum of four terms, i, j, k, l, using the serial addition algorithm. If we take a closer look at the dynamics of the last element in the chain (in this case the one encoding l), we see that this element will receive the excess of all preceding elements simultaneously. To visualize this, consider the local dynamics of three elements once the chaotic update is complete. The excess of the first element is avalanched to the second, where it builds up with the local excess at the second element, so the third element will receive the combined excess of the two previous to it as a single excess. This can also be achieved by introducing the two excesses independently of each other, but simultaneously, i.e. arrive

at the same time and build up on each other locally at the third element, instead of at the second. Of course this can be extended to any number of elements preceding the last element in the chain. Turning back to our example of adding i, j, k, l, the topology of the connectivity, instead of being a chain, is now a tree diagram, see Figure 2-5. By turning the chain into a tree we have collapsed the serial addition algorithm to a two-step serial addition, regardless of the number of terms in the sum. The first step is to sum all but one of the terms in parallel, and then serially combine the parallel sum with the remaining term, before reading the result at the open end. We need to note that we cannot fully parallelize the operation by connecting all terms directly to the open end, since we are not attributing any dynamical properties to the open end; i.e. if the open end had the ability to correctly build up excesses, it would be identical to all the other dynamical systems used in the sum, making it the last element¹. Obviously, for a sum of N terms the shortest computational time for the operation is to connect N − 1 terms in parallel to the last term and perform the operation in two steps; this is analogous to increasing the number of branches in the network. In case of connectivity and/or spatial restrictions we could also extend the algorithm by increasing the number of trunks in the network, as shown in Figure 2-6; note that now our computational time depends on the number of trunks. This is another case of chaotic computation exhibiting its flexibility: we can sacrifice temporal performance to satisfy spatial constraints.

Binary Addition

Extending the above addition algorithms to the binary number encoding method is straightforward.
The serial addition of binary numbers is realized by connecting the end bit of one number with the end bit of the following number, in a chain manner, all the way to the last number and then to the open end, see Figure 2-7. Similarly, for

¹ This could actually be overcome by inserting 0 as the last term of any sum.

Figure 2-5. Decimal parallel addition. The excess from the elements encoding i, j, k is simultaneously propagated to the element encoding l on the first avalanching step, and on the second avalanching step the collective sum is read at the open edge. (Adapted from [26].)

Figure 2-6. The branching algorithm can be extended to a larger treelike structure. The computational time in this case is proportional to the number of branches in the longest path that terminates at the last element. The above network will sum 15 terms in four avalanching steps. (Adapted from [26].)

implementing the parallel addition method for binary numbers, we connect the end bit of all terms in the sum but one to the end bit of the single term chosen to act as the collection hub for all the excesses, before sending the result to the open end, see Figure 2-8.

Figure 2-7. Schematic representation of the serial addition method for binary numbers. The numbers 7, 5, 2, 1 are encoded as explained in the binary encoding section above, and the elements are serially connected. The excess overflow builds up as it moves through the network, and 15 units of excess are collected at the OUTPUT. (Adapted from [26].)
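Both topologies deliver the same avalanched sum at the readout; only the number of avalanching steps differs. An abstract sketch of the two schemes (excess arithmetic only; the chaotic updates are abstracted away):

```python
DELTA = (9 / 16) / 100   # unit overflow, as in the decimal encoding example

def serial_add(terms):
    # Chain topology: the excess avalanches element by element,
    # one adaptation step per term in the sum.
    excess, steps = 0.0, 0
    for z in terms:
        excess += z * DELTA   # local excess z*DELTA builds on the incoming one
        steps += 1
    return excess, steps

def parallel_add(terms):
    # Tree topology: every term but the last avalanches simultaneously into
    # the last element (step 1), which then avalanches to the open end (step 2).
    hub = sum(z * DELTA for z in terms[:-1])
    return hub + terms[-1] * DELTA, 2

total, steps = serial_add([7, 5, 2, 1])
print(round(total / DELTA), steps)   # 15 units in 4 steps
```

Swapping `serial_add` for `parallel_add` leaves the readout unchanged but collapses the step count to 2, which is the trade made by the branching topology of Figure 2-8.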

Figure 2-8. Schematic representation of the parallel addition method for binary numbers. In this case a branching topology is used for the network, where one of the systems acts as a collection hub for the simultaneous build-up of excess. (Adapted from [26].)

Decimal Multiplication and Least Common Multiple

We can extend any addition method to perform multiplication in the usual way: the product of two numbers m × n is a sum of n terms of the number m (and of course vice versa, m terms of the number n). With our two addition methods above we have two obvious ways of implementing multiplication, the serial and the parallel summation of the terms. Furthermore, using the excess overflow encoding of numbers we have a third method: we can use a single dynamical system that emits the appropriate excess on every update, specifically the amount that represents one of the numbers, m·δ for instance, and we update the system n times, collecting a total excess equal to (m × n)·δ, the product

of the multiplication. This third method utilizes time as a computational quantity, and leads to a fourth multiplication method, which we will also use as a stepping stone to the method for the least common multiple of many numbers. Following the periodic orbit encoding method, we can represent each number of a two-number product² using, for each number, a chaotic element set to emit excess at the appropriate periodicity. Therefore, given a product m × n we utilize two dynamical systems, one set to emit every m updates and the other to emit every n updates; the product of the two numbers is given by the number of the update on which the two elements emit simultaneously. More specifically, the element emitting every m updates will have its nth emission on the (m × n)th update, and the same applies for the other element. The simple extension of this method to a larger number of terms results in an algorithm for the least common multiple of all the terms in consideration.

Binary Operations

The power of conventional computers lies in their ability to perform boolean algebra. Even so, their building blocks are restricted by manufacture to one of the two fundamental gates, NOR or NAND, from which, with suitable combinations of either, the other logic gates can be reproduced; for example AND(X,Y) ≡ NOR(NOR(X,X), NOR(Y,Y)). This clearly is not the most efficient method for boolean algebra; a much more efficient alternative is for each gate to require only one building block, or conversely, for each building block to be able to perform all gates. We show how a single chaotic element can represent each of the logic gates through simple state manipulations, removing the need for combining elements [32, 76]. Furthermore we show how multi-dimensionality can lead to natural parallelism, and we go even further, exceeding the capabilities of conventional algorithms and addressing a problem designed for quantum computation.
² The case of three numbers in a product is handled serially: find the product of two of the terms, and multiply it by the third term.
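The simultaneous-emission criterion can be sketched directly. Note that for coprime m and n the first simultaneous emission lands on update m × n; for a general set of terms it lands on their least common multiple, which is the quantity the extended algorithm computes:

```python
def lcm_by_emissions(numbers, max_updates=10**6):
    # Element i emits excess every numbers[i] updates; the collecting element
    # first sees every element emit on the same update at the LCM of the terms.
    for step in range(1, max_updates + 1):
        if all(step % m == 0 for m in numbers):
            return step
    raise RuntimeError("no simultaneous emission within max_updates")

print(lcm_by_emissions([4, 6, 10]))  # 60
print(lcm_by_emissions([7, 9]))      # 63 = 7 x 9, coprime terms
```

The collecting element of Figure 2-9 performs exactly this test in hardware: its own emitted excess equals the number of term elements emitting on the current update, and the first update on which that count reaches the number of terms is the result.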

Figure 2-9. Schematic representation of the method for computing the Least Common Multiple of four numbers. Each element emits excess to the collecting element at the appropriate periodicity for its encoded number. The result at the OUTPUT is the excess of the collecting element, with magnitude equal to the number of term elements that emitted on the current update. The number of updates that causes the collecting element to emit excess equal to the number of terms (i.e. all term elements emitting simultaneously) is the Least Common Multiple of the terms. (Adapted from [26].)

Logic Gates

As we have shown, an important characteristic of chaotic computation is the versatility we have in implementing the same algorithm with different methods. This of course extends to implementations for the representation of logic gates, i.e. there are multiple ways we can achieve this representation. In this section we focus on the most

straightforward method developed, and in order to present the method more clearly we specialize it to the logistic map³, Equation 1-1. The method we use consists of three steps: (a) initialization, (b) chaotic update, and (c) threshold control and excess overflow. Compared to the methods presented in previous sections, the new concept in this method is initialization, i.e. the setting of the initial state of the system, x₀, just before the first chaotic update, based on specific rules. This initial condition of an element is used to define which logic operation it performs and on what set of inputs. Specifically, we initialize a logistic map element by setting its initial value x₀ according to:

x₀ = x_prog + x_I1 + x_I2, for gates that operate on two inputs,
x₀ = x_prog + x_I, for gates that operate on one input,

where x_prog can be thought of as programming the gate and the x_Ii as the input values. For an input of logical 1, x_Ii = δ, and for an input of logical 0, x_Ii = 0. As before, a chaotic update implies the application of the logistic map: x₀ → F(x₀). The control and overflow mechanism remains the same as well: E = 0 if F(x₀) ≤ x*, and E = F(x₀) − x* if F(x₀) > x*, where x* is the threshold for the element and E the excess overflow generated. Here, in the context of binary algebra where the set of integers contains only 0 and 1, E and δ are actually equivalent⁴. Turning to the specific logic gates to be implemented, Table 2-1 summarizes the input-output relationships we are to represent. The task is to identify initial conditions, x_prog + x_I1 + x_I2, and threshold values, x*, for which a chaotic update will result in F(x₀) ≤ x* where the outputs in the above table

³ Universality of chaotic systems allows us to assume demonstrations on the logistic map can be carried over to any other chaotic system.
⁴ Equivalence of inputs and outputs is actually a soft requirement.

Table 2-1. Truth-table for the logic operations AND, OR, XOR, NOR, NAND, NOT, and the identity operation (WIRE).

I₁ I₂ | AND OR XOR NOR NAND
0  0  |  0   0   0   1    1
0  1  |  0   1   1   0    1
1  0  |  0   1   1   0    1
1  1  |  1   1   0   0    0

I | NOT WIRE
0 |  1    0
1 |  0    1

are 0, and F(x₀) > x* where the output is 1. As a specific example, for the OR gate we have the following requirements:

1. I₁ = I₂ = 0, which implies x_I1 = x_I2 = 0, i.e. x₀ = x_prog. The required output is 0, which implies F(x_prog) ≤ x*.
2. I₁ = 0 and I₂ = 1, which implies x_I1 = 0 and x_I2 = δ, i.e. x₀ = x_prog + δ. The required output is 1, which implies F(x_prog + δ) − x* = δ. This requirement is symmetric to I₁ = 1 and I₂ = 0, so the conditions for satisfying both requirements are identical.
3. I₁ = I₂ = 1, which implies x_I1 = x_I2 = δ, i.e. x₀ = x_prog + 2δ. The required output is 1, which implies F(x_prog + 2δ) − x* = δ.

All of the above requirements need to be satisfied by the same values of x_prog and x*, such that all three conditions hold true regardless of inputs. In a similar manner we can provide the required conditions for a chaotic element to represent every gate, see Table 2-2.

Table 2-2. Necessary and sufficient conditions for a chaotic element to satisfy the logic operations AND, OR, XOR, NOR, NAND, NOT, and the identity operation (WIRE).

Input (x_I1 + x_I2) | AND                     | OR                      | XOR
0                   | F(x_prog) ≤ x*          | F(x_prog) ≤ x*          | F(x_prog) ≤ x*
δ                   | F(x_prog + δ) ≤ x*      | F(x_prog + δ) − x* = δ  | F(x_prog + δ) − x* = δ
2δ                  | F(x_prog + 2δ) − x* = δ | F(x_prog + 2δ) − x* = δ | F(x_prog + 2δ) ≤ x*

Input (x_I1 + x_I2) | NOR                 | NAND
0                   | F(x_prog) − x* = δ  | F(x_prog) − x* = δ
δ                   | F(x_prog + δ) ≤ x*  | F(x_prog + δ) − x* = δ
2δ                  | F(x_prog + 2δ) ≤ x* | F(x_prog + 2δ) ≤ x*

Input (x_I) | NOT                 | WIRE
0           | F(x_prog) − x* = δ  | F(x_prog) ≤ x*
δ           | F(x_prog + δ) ≤ x*  | F(x_prog + δ) − x* = δ

Values that simultaneously satisfy the above conditions are easily found. Specifically, choosing δ = 0.25, the OR gate can be realized by choosing x_prog = 1/8 and x* = 11/16:

1. F(x_prog) = F(1/8) = 7/16 ≤ x* (= 11/16),
2. F(x_prog + δ) = F(3/8) − 11/16 = 15/16 − 11/16 = 1/4 (= δ),
3. F(x_prog + 2δ) = F(5/8) − 11/16 = 15/16 − 11/16 = 1/4 (= δ).

In fact, values that satisfy the conditions for all the gates with δ = 0.25 have been identified and are summarized in Table 2-3.

Table 2-3. Initial values, x_prog, and threshold values, x*, required to implement the logic gates AND, OR, XOR, NOR, NAND, NOT, and the identity operation (WIRE), with δ = 0.25.

Value   | AND | OR    | XOR | NOR | NAND  | NOT | WIRE
x_prog  | 0   | 1/8   | 1/4 | 1/2 | 3/8   | 1/2 | 1/4
x*      | 3/4 | 11/16 | 3/4 | 3/4 | 11/16 | 3/4 | 3/4

The fact that we have a method to implement all logic gates is not impressive; what is impressive is that we can switch from one gate to another from one computational step to the next, and moreover switch very easily and fast, on a relative timescale. This leads to the concept of on-the-fly hardware re-programming: an architecture based on conventional computation, so that all other components can be easily imported, but with the efficiency chaotic computation offers.

Parallel Logic and the Half Adder

In this section we present what is probably the most important extension of performing boolean logic with chaotic systems [32]. The same procedure we illustrated above for the logistic map is implemented on the 2-dimensional neuron model [77]:

x_n = (x_{n−1})² · exp(y_{n−1} − x_{n−1}) + k    (2-1)
y_n = a·y_{n−1} − b·x_{n−1} + c    (2-2)

where a = 0.89, b = 0.18, c = 0.28, k = 0.03; these parameter values keep the model dynamics completely chaotic. Two distinct cases are investigated:

1. performing an XOR gate and an AND gate in parallel (a Half Adder);
2. performing two AND gates independently.

The first case, specifically, is the application of the two gates to the same set of inputs, each gate being performed in a different dimension. The second case involves operating the two AND gates on different sets of inputs, again with each dimension performing one operation. As with the case of the logistic map, the first step is to define the necessary conditions needed to satisfy the truth table of each case; specifically, following the truth tables (Tables 2-4, 2-5) we convert them to the conditional Tables 2-6, 2-7. Following the process of the previous section, we determine values for the programming state shift and the threshold value that satisfy these conditions, see Tables 2-8, 2-9.

Table 2-4. Truth table for XOR and AND logic gates on the same set of inputs. (Case 1)

I₁ I₂ | XOR AND
0  0  |  0   0
0  1  |  1   0
1  0  |  1   0
1  1  |  0   1

We should mention that each case is investigated independently, i.e. there is no requirement that the number of iterations (n) be identical for both cases, or that the value representing a logical 1 (δ) be the same. We have already seen a type of parallel operation with chaotic computing, with the parallel addition above; here, though, we see a clear demonstration of parallelism: the dynamical system performs two completely different operations simultaneously. In fact, the next section builds further on the parallel capabilities of chaotic computation by tackling the complex Deutsch-Jozsa problem.
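The single-update gate scheme of the previous section can be verified directly from Table 2-3 (δ = 0.25); a Python sketch:

```python
DELTA = 0.25
GATES = {  # name: (x_prog, x_star), values from Table 2-3
    "AND": (0.0, 3 / 4), "OR": (1 / 8, 11 / 16), "XOR": (1 / 4, 3 / 4),
    "NOR": (1 / 2, 3 / 4), "NAND": (3 / 8, 11 / 16),
    "NOT": (1 / 2, 3 / 4), "WIRE": (1 / 4, 3 / 4),
}

def gate(name, *inputs):
    x_prog, x_star = GATES[name]
    x0 = x_prog + sum(DELTA * i for i in inputs)   # (a) initialization
    x1 = 4 * x0 * (1 - x0)                         # (b) one chaotic update
    return 1 if x1 > x_star else 0                 # (c) excess emitted = logical 1

pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([gate("OR", a, b) for a, b in pairs])    # [0, 1, 1, 1]
print([gate("XOR", a, b) for a, b in pairs])   # [0, 1, 1, 0]
```

Re-programming a gate is just a change of `x_prog` and `x_star` between updates, which is the on-the-fly hardware re-programming discussed above.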

Table 2-5. Truth table for two AND gates operating on independent inputs. (Case 2)
I1^1  I2^1  I1^2  I2^2  AND(I1^1, I2^1)  AND(I1^2, I2^2)
0     0     0     0     0                0
0     0     0     1     0                0
0     0     1     0     0                0
0     0     1     1     0                1
0     1     0     0     0                0
0     1     0     1     0                0
0     1     1     0     0                0
0     1     1     1     0                1
1     0     0     0     0                0
1     0     0     1     0                0
1     0     1     0     0                0
1     0     1     1     0                1
1     1     0     0     1                0
1     1     0     1     1                0
1     1     1     0     1                0
1     1     1     1     1                1

Table 2-6. Required conditions to satisfy the parallel implementation of the XOR and AND gates. (Case 1)
Initial conditions                 XOR              AND
x_prog, y_prog                     x_n ≤ x*         y_n ≤ y*
x_prog + δ1, y_prog + δ2           x_n − x* = δ1    y_n ≤ y*
x_prog + 2δ1, y_prog + 2δ2         x_n ≤ x*         y_n − y* = δ2

Table 2-7. Required conditions for implementing two AND gates on independent sets of inputs. (Case 2)
Initial conditions                 AND(I1^1, I2^1)   AND(I1^2, I2^2)
x_prog, y_prog                     x_n ≤ x*          y_n ≤ y*
x_prog, y_prog + δ2                x_n ≤ x*          y_n ≤ y*
x_prog, y_prog + 2δ2               x_n ≤ x*          y_n − y* = δ2
x_prog + δ1, y_prog                x_n ≤ x*          y_n ≤ y*
x_prog + δ1, y_prog + δ2           x_n ≤ x*          y_n ≤ y*
x_prog + δ1, y_prog + 2δ2          x_n ≤ x*          y_n − y* = δ2
x_prog + 2δ1, y_prog               x_n − x* = δ1     y_n ≤ y*
x_prog + 2δ1, y_prog + δ2          x_n − x* = δ1     y_n ≤ y*
x_prog + 2δ1, y_prog + 2δ2         x_n − x* = δ1     y_n − y* = δ2
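The conditions in these tables parallel those used for the single logistic-map gates of the previous section. As a concrete check of that simpler construction, the sketch below (Python; helper names such as `chaotic_gate` are ours, not from the text) applies one logistic-map update F(x) = 4x(1−x) to x_prog + (I1 + I2)·δ, using the (x_prog, x*) pairs of Table 2-3 with δ = 0.25, and reads off a logical 1 exactly when the excess F(x) − x* (= δ) is emitted.

```python
# One-step chaotic logic gates on the logistic map F(x) = 4x(1-x).
# (x_prog, x_star) pairs are the values of Table 2-3, with delta = 0.25.

DELTA = 0.25

def F(x):
    return 4.0 * x * (1.0 - x)

GATES = {  # gate name: (x_prog, x_star)
    "AND":  (0.0,       3.0 / 4),
    "OR":   (1.0 / 8,  11.0 / 16),
    "XOR":  (1.0 / 4,   3.0 / 4),
    "NOR":  (1.0 / 2,   3.0 / 4),
    "NAND": (3.0 / 8,  11.0 / 16),
}

def chaotic_gate(name, i1, i2):
    x_prog, x_star = GATES[name]
    x1 = F(x_prog + (i1 + i2) * DELTA)  # inputs shift the programming state
    return 1 if x1 > x_star else 0      # logical 1 iff the excess delta is emitted

truth = {g: [chaotic_gate(g, a, b) for (a, b) in ((0, 0), (0, 1), (1, 0), (1, 1))]
         for g in GATES}
```

The single-input operations of Table 2-3 (NOT and WIRE) follow the same recipe with x0 = x_prog + I·δ.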

Table 2-8. Examples of initial values, x_prog, y_prog, and thresholds, x*, y*, that satisfy the conditions presented in Table 2-6, yielding the parallel operation of XOR and AND gates. In this range δ ≈ 0.7, and the number of iterations required to yield the correct result is n = 10. (Case 1)
x_prog   y_prog   x*   y*

Table 2-9. Examples of initial values, x_prog, y_prog, and thresholds, that satisfy the conditions presented in Table 2-7, yielding the operation of two AND gates on independent inputs. For these values of δ the number of iterations required is n = 20; note that the high number of required iterations, and the need for more decimal precision, are a direct consequence of the large number of conditions that must be satisfied simultaneously. (Case 2)
x_prog   y_prog   x*   y*

An important difference from the previous section on single binary gates is that here the number of iterations necessary for the conditions to be satisfied is greater than 1; indeed, in Chapter 5 we report extensive progress in involving the time dimension in chaotic computation.

The Deutsch-Jozsa Problem

The Deutsch-Jozsa problem and its solution algorithm [78] constitute a celebrated result of quantum computing, being the first example of a quantum algorithm outperforming its classical counterparts. In addition, it has been the stepping stone to the other two major results of quantum computing, Shor's factoring algorithm [79] and Grover's search algorithm [80] (details of the chaotic-computation search algorithm are given in Chapter 3). In this section we demonstrate the chaotic-computing algorithm for solving the Deutsch-Jozsa problem; it is just as efficient as its quantum counterpart, providing the answer in a single computational step, but is in contrast much more readily realizable and provides a more apparent result.

The problem can be stated as follows. Given a binary domain, i.e. a discrete domain of 2^k states, and given an arbitrary binary function f : {0,1}^k → {0,1}, i.e. a function that maps every point of the binary domain to a 0 or a 1, determine whether the function is constant (maps the whole domain exclusively to 0 or exclusively to 1) or balanced (maps the domain in equal parts to 0 and to 1)^5. The conventional approach is to evaluate f for every given point^6 and count the number of resulting 0s and 1s. In the best case the conventional algorithm therefore requires evaluating the function at only 2 points: the function gives a 0 (or 1) for the first point considered and the opposite for the second point, so the function is balanced. In the worst case, though, i.e. when the first N/2 points give the same result, it takes N/2 + 1 = 2^(k−1) + 1 evaluations to conclude whether the function is constant or balanced; the number of computational steps needed grows exponentially with the number of bits that define the domain.

In the context of chaotic computation the situation is much simpler, since we can apply the given function to all domain points simultaneously and, furthermore, read the result in one step. To make the demonstration of the algorithm clearer we explicitly use the tent map, Equation 1-3, and work in a state space appropriate to this map. We partition the explanation of the algorithm into three steps: (a) definition of the domain, (b) definition of the function space, and (c) implementation of the function along with reading the result. As is typical of chaotic computation there are at least three different

^5 We are guaranteed by the problem that the function will be either constant or balanced, exclusively.
^6 The number of points in a given domain is N = 2^k, where k is the number of binary digits considered.

implementations of the algorithm [26]; below we present, with some modifications from [27], the most elaborate but clearest realization.

(a) Consider a binary domain space (B_k) whose points are strings of k binary digits, so that the number of points in B_k is N = 2^k. Since we are to work with the tent map, we translate every point in B_k onto a point in the tent map's domain [0, 1]. Writing any of the 2^k points as a_1 a_2 a_3 ... a_k, where each a_i ∈ {0, 1}, we map each such point onto [0, 1] using^7:

x = 2^−(k+1) + Σ_{i=1}^{k} a_i 2^−i    (2-3)

We encode the whole domain on an array of N elements, denoted X_N, each element j having a state x(j) given by Equation 2-3. To illustrate, consider the case k = 3: the binary domain (B_3) has eight points {000, 001, 010, ..., 111}, which translate onto the tent map domain as {1/16, 3/16, 5/16, 7/16, 9/16, 11/16, 13/16, 15/16}; each of these points is used as the state value of a dynamical element in an eight-element array. So the whole domain under consideration is encoded by a single array as: X_8 = {x(1) = 1/16, x(2) = 3/16, x(3) = 5/16, x(4) = 7/16, x(5) = 9/16, x(6) = 11/16, x(7) = 13/16, x(8) = 15/16}.

(b) The function space of the functions f : B_k → {0, 1} consists of 2^(2^k) functions: for each of the N = 2^k points there are two output possibilities. Regardless of the total number of possible functions, only two are constant: the function whose output is 1 for all input points, and the function whose output is 0 for all input points. The number of balanced functions, though, depends on k and is given by simple combinatorics as C^L_M = L! / ((L−M)! M!), where for a set of L items made up of only two distinct objects, C^L_M is the number of combinations with M items of one of the two objects. In the case of balanced functions on a binary domain of N points, the above relationship becomes

^7 The 2^−(k+1) term is added so that the points 0 and 0.5 are not used for encoding.

C^N_{N/2} = N! / ((N − N/2)! (N/2)!). More specifically, for our example of k = 3 we have a total of 2^(2^3) = 256 possible functions, out of which two are constant and C^8_4 = 70 are balanced. With the function domain represented by an array of tent maps as described above, the function space is populated with functions constructed out of combinations of the following two basis functions:

x_{n+1} = T(x_n) = 1 − 2|x_n − 1/2|              (2-4)
x_{n+1} = T~(x_n) = 1 − T(x_n) = 2|x_n − 1/2|    (2-5)

where T is the tent map, T~ the inverted tent map, and n denotes the time step. Each of the 2^(2^k) functions is constructed as one of the possible combinations of the two basis functions of length 2^k; specifically the function space looks like: {T(1)T(2)...T(j), T(1)T(2)...T~(j), ..., T~(1)T~(2)...T(j−1)T~(j), ..., T~(1)T~(2)...T~(j)}, where j = 2^k, and each sequence of length j of the two basis functions is one of the possible functions (F) to be applied to the domain space^8. Referring back to our concrete example of k = 3, we have 256 combinations of T and T~ in the function space, ranging from the single sequence of eight consecutive T, through the 70 combinations of four T and four T~, to the single case of eight consecutive T~; and of course the other 184 combinations^9.

^8 In our notation here we use j instead of N to indicate the relation between the functions and the array of dynamical elements encoding the domain space.
^9 Take care not to confuse the 70 functions of four T and four T~ with the 70 balanced functions! A function that is balanced in T and T~ is not necessarily balanced in its output.
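The encoding of Equation 2-3 and the counting above are easy to verify directly; a minimal sketch (Python, exact rationals; the helper name `encode` is ours):

```python
from fractions import Fraction
from math import comb

def encode(bits):
    """Equation 2-3: map a binary string a1 a2 ... ak into (0, 1)."""
    k = len(bits)
    x = Fraction(1, 2 ** (k + 1))          # offset avoiding the points 0 and 1/2
    for i, a in enumerate(bits, start=1):
        x += int(a) * Fraction(1, 2 ** i)
    return x

K = 3
domain = [encode(format(p, "03b")) for p in range(2 ** K)]

n_functions = 2 ** (2 ** K)                # 2^(2^3) = 256 possible functions
n_balanced = comb(2 ** K, 2 ** K // 2)     # C(8, 4) = 70 balanced functions
n_other = n_functions - 2 - n_balanced     # the remaining mixed combinations
```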

Figure. Basis function T: the Tent Map.

(c) Turning to the specific process of solving the problem, we focus on the example of k = 3 to make this section more illustrative. As we have seen, for the 3-bit case there are 256 functions. In the context of the problem we are given one of these functions (F), i.e. a specific sequence of eight T and/or T~, and are assured that it is either constant or balanced; the task is to determine which. In a straightforward manner we set up the array of dynamical elements, X_8, to encode the binary domain as explained in (a), and then apply the given function to the whole array, F(X^n_8) = X^{n+1}_8, where n denotes the time step. At the scale of the individual elements we have T(x^n(j)) = x^{n+1}(j) or T~(x^n(j)) = x^{n+1}(j), where j = 1, ..., 8; whether

Figure. Basis function T~: the Inverted Tent Map.

it is T or T~ that is applied to the j-th element is determined by which function (F) we are given. Once the function is applied, we threshold the elements at x* = 0.5 and collect the excess in the usual way. Basically we use 0.5 as a separatrix of the state space: if x^{n+1}(j) > 0.5 the emitted excess is E_j = x^{n+1}(j) − 0.5, while if x^{n+1}(j) ≤ 0.5 then E_j = 0. This is a standard application of symbolic dynamics. Once we have the collected excess, in a single step we also have the answer: if the collected excess is 0, then the function we have is the constant function F : B_k → {0},

i.e. the function whose output is all 0; if the collected excess^10 is 2^k/4 = 8/4 = 2, which is the maximum possible excess for B_3, then we have the constant function F : B_k → {1}, i.e. the function whose output is all 1. If our collected excess is any other value, we have a balanced function. This is as much as the original problem requires, but we can do even better: using the collected excess we can determine to which of five classes the given balanced function belongs.

The importance of this algorithm lies not in the actual task it accomplishes, which is of little practical use. The primary importance, as for the quantum analogue, is in demonstrating far higher efficiency than conventional computation on problems of this class. Further, for chaotic computation it is a milestone measured against quantum computation as well: it shows that chaotic computation can handle such problems as well as quantum computation, if not better.

2.4 Conclusion

This chapter dealt almost exclusively with theoretical developments from the first four years of chaotic computation, while we relegated the experimental realizations to the references, so as to maintain a uniform tone throughout this dissertation. In closing, and without wishing to undermine the other results, we once again draw attention to the important result of the solution of the Deutsch-Jozsa problem, and ask the reader to consider the connections between this problem, set theory, logic, and, more specifically, the structure of a chaotic function at the Feigenbaum point.

^10 The factor of 1/4 comes from the fact that the average excess emitted per element, E_j, is 1/4.

Figure 2-12. Four realizations of the chaotic Deutsch-Jozsa algorithm for the case k = 3. There are eight binary inputs 000, 001, 010, 011, 100, 101, 110, 111, and through Equation 2-3 we obtain the state values 1/16, 3/16, 5/16, 7/16, 9/16, 11/16, 13/16, 15/16, each given to an element of an array. The vertical lines mark the state value of each element x(j). The horizontal line is the separatrix at 0.5. (a) Given the constant function F : B_k → {1}, the sequence of basis functions that lifts every element over 0.5, the maximum excess of 2 is emitted from the array. (b) Given the constant function F : B_k → {0}, the sequence that keeps all array elements under 0.5, zero excess results. (c) The balanced function of eight consecutive T produces an excess of 1. (d) A randomly chosen balanced function: four elements are over 0.5 and four under.
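The panels of Figure 2-12 can be reproduced in a few lines. The sketch below (Python, exact rationals; function names are ours) encodes the k = 3 domain, applies a given sequence of the basis maps, collects the excess over the separatrix, and classifies the function; the two constant sequences are written out as the unique combinations that lift every element above (respectively keep every element below) 0.5.

```python
from fractions import Fraction

HALF = Fraction(1, 2)

def T(x):        # tent map, Equation 2-4
    return 1 - 2 * abs(x - HALF)

def T_inv(x):    # inverted tent map T~, Equation 2-5
    return 2 * abs(x - HALF)

# k = 3 domain from Equation 2-3: the odd multiples of 1/16.
X8 = [Fraction(n, 16) for n in range(1, 16, 2)]

def collected_excess(seq):
    """Apply a sequence of basis maps elementwise; sum the excess over 1/2."""
    return sum(max(f(x) - HALF, 0) for f, x in zip(seq, X8))

def deutsch_jozsa(seq):
    e = collected_excess(seq)
    if e == 0:
        return "constant 0"
    if e == 2:                   # maximum excess, 2^k / 4
        return "constant 1"
    return "balanced"

# Constant-1 lifts every element above 1/2; constant-0 keeps every element
# below; eight consecutive T is one of the balanced functions (excess 1).
const_one = [T_inv, T_inv, T, T, T, T, T_inv, T_inv]
const_zero = [T, T, T_inv, T_inv, T_inv, T_inv, T, T]
```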

Figure. The total excess emitted by each of the 72 functions. The two squares indicate the two constant functions, referring back to Figures 2-12(a,b). As can be seen, the balanced functions (circles) separate into five groups. The points (c) and (d) refer to the respective panels of Figure 2-12.
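For comparison, the 72 promise functions plotted above can also be classified by the conventional point-by-point evaluation described at the start of this section; the sketch below (Python; names are ours) counts the queries and confirms the worst case of 2^(k−1) + 1 = 5 evaluations for k = 3.

```python
from itertools import combinations

K = 3
N = 2 ** K

# The 72 promise functions for k = 3: two constant, C(8, 4) = 70 balanced,
# each represented as the tuple of its N outputs.
functions = [tuple([0] * N), tuple([1] * N)] + \
            [tuple(1 if i in ones else 0 for i in range(N))
             for ones in combinations(range(N), N // 2)]

def classify_classically(f):
    """Evaluate f point by point until the promise decides the answer."""
    queries = 1
    for i in range(1, N):
        queries += 1
        if f[i] != f[0]:
            return "balanced", queries
        if queries == N // 2 + 1:    # N/2 + 1 equal outputs: must be constant
            return "constant", queries
    return "constant", queries

results = [classify_classically(f) for f in functions]
worst_case = max(q for _, q in results)
```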

CHAPTER 3
SEARCHING AN UNSORTED DATABASE

In this chapter we present a chaotic-computation algorithm for searching an unsorted database for a match between a queried item and the contents of the database [28]. The dynamical system we use to demonstrate the algorithm is the Tent map, given in Equation 1-3. Most commonly used devices for storing and processing information are based on the binary encoding of information, i.e. upon bits; larger chunks of information are encoded by combining consecutive bits into bytes and words. Here we show a different approach to information encoding and storage, based on the wide variety of patterns that can be extracted from nonlinear dynamical systems. We specifically demonstrate the use of arrays of nonlinear dynamical systems (or elements) to stably encode and store information (such as patterns and strings). Furthermore, we demonstrate how this storage method enables efficient and rapid searches for specified items of information in the data store. It is the nonlinear dynamics of each array element that provides flexible-capacity storage, as well as the means to preprocess data for exact and inexact pattern matching. In particular, we choose chaotic systems to store and process data through the natural evolution of their dynamics. Perhaps more importantly, our method involves just a single procedural step, is naturally set up for parallel implementation, and can be realized with hardware currently employed for chaos-based computing architectures.

We first show a slightly modified storing and encoding scheme, based on the excess-overflow scheme of Section 2.1.1, in which we use the actual fixed-point state of the system instead of the excess generated. Specifically, we demonstrate this scheme as applied to the Tent map for storage, and associate the storage with different encodings, not just numbers.
We then show the actual search process and how its results are a direct consequence of the nonlinear nature of the Tent map. Finally, with

specific examples we show implementations of the method not only for exact matches, but also for inexact, approximate matches to a given target.

3.1 Encoding and Storing Information

Consider a list of N data storage elements (labeled j = 1, 2, ..., N) in an array, where each element stores and encodes one of M distinct items. N can be arbitrarily large and M is determined by the kind of data being stored. For instance, when storing English text we can consider the letters of the alphabet to each be a naturally distinct item, so M = 26. For data stored in decimal representation M = 10, and for work in bioinformatics (manipulating the symbols A, T, C, and G) we have M = 4. We can also consider strings and patterns as the items; for instance, for manipulating English text we could use a large set of keywords as the basis, necessitating a very large M.

We store this list of N items in N dynamically evolving chaotic elements. The state of the elements at discrete time n is given by X^j_n[m], where j = 1, 2, ..., N indexes each element of our list and m = 1, 2, ..., M indexes an item in our alphabet (namely one of the M distinct items). To reliably store information one must confine each dynamical system to fixed-point behaviour, i.e. a state that is stable and constant throughout the dynamical evolution of the system over time n, so that the list remains unchanged. We therefore employ a threshold control mechanism (see Section 1.2.3) to flexibly control the dynamical elements onto the large set of period-1 fixed points. Specifically for the tent map, thresholds in the range [0, 2/3] yield fixed points, namely X_n = T* for all time, where T* is a threshold with 0 ≤ T* ≤ 2/3. See Figure 3-1 for a schematic of the tent map under the threshold mechanism, which is effectively described by a beheaded map.
It is clear from Figure 3-1 that in the range [0, 2/3] the value of X_{n+1} lies above X_n, implying that a system with state X_n at threshold T* will be mapped to a state higher than T* at the subsequent iterate and will thus be clipped back to T*. Another way of seeing this graphically is to note that fixed-point solutions are obtained where the line X_{n+1} = X_n intersects the beheaded tent map. The value of X at the

intersection yields the value of the fixed point, and the slope at the intersection gives the stability of the fixed point. It is clear from Figure 3-1 that in the range [0, 2/3] this intersection lies on the plateau, so the fixed-point solution is equal to the threshold value. Further, this fixed point is superstable, as the slope is exactly zero on the plateau. This makes the thresholded state very robust and quite insensitive to noise.

Figure 3-1. The Tent map under the threshold mechanism, shown for two cases of threshold control, at 1/4 and 1/2. Effectively each threshold value sits on a plateau, yielding a fixed point. For X values in the range [0, 2/3], the action of the Tent map sends the thresholded value up (X_n < X_{n+1}), while the control clips it back (X_{n+1} → T*).

Returning to our data over a given alphabet, we take a set of thresholds {T*[1], T*[2], ..., T*[M]} from the fixed-point range, setting up a one-to-one correspondence between these M thresholds and the M distinct items of our data. This allows each item m to be uniquely encoded by a specific threshold T*[m], with m = 1, 2, ..., M. The number of distinct items that can be stored in a single dynamical element is thus typically large, as the size of M is limited only by the precision and resolution of the threshold setting and the noise characteristics of the physical system employed.

Therefore, given an unsorted list of N data chosen from the alphabet of M items, we set up an array of N elements (labeled j = 1, 2, ..., N), each a Tent map, each with a threshold T*_j[m] reliably storing and encoding the appropriate item of the list. That is, if element j holds item m in the unsorted list, the threshold value of element j is set to T*_j[m], without changing or in any way affecting any other parameter of the list. So, denoting the threshold of element j by T*_j[m], we have the following: if the state of element j, X^j_n[m], exceeds its prescribed threshold T*_j[m] (i.e. when X^j_n[m] > T*_j[m]), the state X^j_n[m] is reset to T*_j[m]. Since the thresholds lie in the range yielding fixed points of period 1, this enables each element to hold its state at the value X^j_n[m] = T*_j[m] for all times n. In our encoding, for a reason that will become apparent in the next section, the thresholds are chosen from the interval (0, 1/2), a subset of the fixed-point window [0, 2/3]. For specific illustration, and without loss of generality, consider each item to be represented by an integer m in the range [1, M].
Defining a resolution r between consecutive integers as

r = 1 / (2(M + 1)),    (3-1)

gives us a lookup table mapping each encoded item to a threshold, specifically relating the integers m in the range [1, M] to the thresholds T*_j[m] in the range [r, 1/2 − r] by:

T*_j[m] = m · r.    (3-2)

We thereby obtain a direct correspondence between the set of integers 1 to M, where each integer can represent any item, and a set of M threshold values of a dynamical system. Moreover, we can store a list of N elements by setting appropriate thresholds, via Equation 3-2, on N dynamical elements. As mentioned before, the thresholded states encoding different items are very robust to noise, since they are superstable fixed points. Finally, this correspondence, or representation, is important for encoding information in an M-level representation and, as we shall see below, it is of primary importance for the process of searching the list for specific items of information, which exploits a particular property of the system.
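The storage scheme of this section reduces to a few lines of code. A sketch (Python, exact rationals; names such as `step_with_threshold` are ours) that clips the tent map at the threshold of Equations 3-1 and 3-2 and checks that each stored letter's state never drifts:

```python
from fractions import Fraction

def tent(x):
    return 1 - abs(2 * x - 1)

def step_with_threshold(x, t):
    """One tent-map update with threshold control: clip any excess back to t."""
    x1 = tent(x)
    return t if x1 > t else x1

M = 26                            # alphabet size for English text
r = Fraction(1, 2 * (M + 1))      # resolution, Equation 3-1: r = 1/54

def threshold_for(letter):
    m = ord(letter) - ord("a") + 1   # items numbered m = 1 .. 26
    return m * r                     # Equation 3-2

def store(word, n_iter=50):
    """Hold each element at its letter's threshold and let it evolve."""
    states = []
    for ch in word:
        t = x = threshold_for(ch)
        for _ in range(n_iter):
            x = step_with_threshold(x, t)
        states.append(x)
    return states
```

Because every threshold m·r lies below 1/2, the map always overshoots the threshold and is clipped straight back, so the stored states are exact fixed points.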

where k is the item being searched for and T*[k] its respective threshold. This addition shifts the interval that the list elements can span from [r, 1/2 − r] to [r + Q[k], 1/2 − r + Q[k]], where Q[k] is the globally applied shift. Note that what we are searching for is the representation of the item, not the item itself. For example, we can encode each letter of the alphabet by a number, such that the lowest threshold T*_j[1] represents the letter A, the next highest T*_j[2] represents B, and so on. When we search for A, we are really searching for the element whose threshold is T*_j[1].

The item being searched for is encoded in a manner complementary to the encoding of the items in the list (much like a key that fits a particular lock); i.e. Q[k] + T*[k] adds up to 1/2. This guarantees that only the element(s) matching the queried item have their state shifted to 1/2. The value 1/2 is special in that it is the only state value that, on the subsequent update of the system, reaches the value 1.0, the maximum state value of the Tent map. So only the elements holding an item matching the queried item reach the extremal value 1.0 on the dynamical update following a search query. The important feature here is that the nonlinear dynamics maps the state 1/2, and only that state, to 1, while all other states (both higher and lower than 1/2) get mapped to values lower than 1. See Figure 3-2 for a schematic of this process. The salient characteristic of the point 1/2 is that it is the unique critical point, and so it acts as the pivot for the nonlinear dynamical folding of the interval [r + Q[k], 1/2 − r + Q[k]] at the next update. This provides a single global monitoring operation that pushes the states of all elements matching the queried item to the unique maximal point in parallel. The crucial ingredient is the use of the map's existing critical point to implement selection.
Chaos is not strictly necessary here. It is evident that for unimodal maps higher nonlinearities allow larger operational ranges for the search operation and also enhance the resolution of the encoding. For the Tent map specifically, it can be shown that the minimal nonlinearity necessary for the above search operation to work places the map in the chaotic region.

Figure 3-2. Schematic representation of the changes in the state of an element for (i) a matching queried item, (ii) an item higher than the queried item, and (iii) an item lower than the queried item. The key value used is Q = 1/4, so the matched item has the value 0.25. The behaviour of three state values is shown: 0.1, 0.25 and 0.3. The application of the key does not affect the three values relative to one another; it is a simple linear translation. The subsequent application of the map, though, clearly sends both 0.1 and 0.3 to states lower than the maximal state, which is attained solely by 0.25.
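The numbers in Figure 3-2 can be reproduced in two lines: shift the three states by the key Q = 1/4 and apply the tent map once; only the matching state reaches the maximum. A sketch (Python, exact rationals):

```python
from fractions import Fraction as Fr

def tent(x):
    return 1 - abs(2 * x - 1)

Q = Fr(1, 4)                                  # the search key of Figure 3-2
states = [Fr(1, 10), Fr(1, 4), Fr(3, 10)]     # items 0.1, 0.25 (match), 0.3

shifted = [x + Q for x in states]             # global linear shift by the key
after = [tent(x) for x in shifted]            # one tent-map update: the fold
```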

Another specific feature of the tent map is that its piecewise linearity makes the encoding and search operations very simple indeed. Of course, to complete the search we must now detect the maximal state located at 1. This can be accomplished in a variety of ways; for example, one can simply employ a level detector to register all elements at the maximal state, which directly gives the total number of matches, if any. The total search process is thus rendered simpler, as the state with the matching pattern is selected out and mapped to the maximal value, allowing easy detection.

Moreover, by relaxing the detection level by a prescribed tolerance, we can check for the existence in our list of numbers or patterns that are close to the item or pattern being searched for. Here "close to" means having a representation that is close to the representation of the item for which we are searching. Using the earlier example of English letters encoded with the lowest threshold T*_j[1] for A, the next higher threshold for B, etc., relaxing the detection threshold by a small amount allows us to find mistyped words where L or N were substituted for M, or where X or Z were substituted for Y. However, had we chosen a representation whose ordering put T and U before and after Y (as on a standard QWERTY keyboard), then our relaxed search would find spellings of "bot" or "bou" where "boy" was intended. Thus nearness is defined by the choice of representation and can be chosen advantageously depending on the intended use. Figure 3-5 gives an illustrative example of detecting such inexact matches. So nonlinear dynamics works as a powerful preprocessing tool, reducing the determination of matching patterns to the detection of maximal states, an operation that can conceivably be accomplished by simple addition and in parallel.
3.3 Encoding, Storing and Searching: An Example

Consider the case where our data is English-language text, encoded as described above by an array of tent maps. Here the distinct items are the letters of the English alphabet, so M = 26, and from Equation 3-1 we obtain r = 1/54 ≈ 0.0185; the appropriate threshold level for each item is obtained from Equation 3-2. More specifically, consider as our list the sentence "strawberry fields". Each letter^1 in this sentence is an element of the list, with a value selected from our 26 possible values, and can be encoded using the appropriate threshold, as in Figure 3-3(a).

The list, encoded as above, can now be searched for specific items. Figure 3-3 presents the example of searching for the letter "l". To do so, the search key value corresponding to the letter "l" (from Equation 3-3, Q[l] = 15/54) is added globally to the state of all elements. Then, through their natural evolution, at the next time step the state of the element(s) containing the letter "l" is maximized. In Figure 3-4 we perform an analogous query for the letter "e", which is present twice in our list, to show that multiple occurrences of the same item can be detected. Finally, in Figure 3-5 we search for an item that is not part of our list, the letter "x". As expected, Figure 3-5(c) shows that none of the elements is maximized. By lowering the detection level to the value 1 − 2r, we can detect whether items adjacent to the queried one are present; specifically, we detect that the letters "w" and "y" are contained in the list. This demonstrates that inexact matches can also be found by this scheme.

3.4 Discussion

A significant feature of the presented search method is that it employs a single, simple global shift operation and does not entail accessing each item separately at any stage. It achieves this through the use of nonlinear folding to select out the matched item, and this nonlinear operation is the result of the natural dynamical evolution of the elements. So the search effort is considerably simplified, because it uses the native responses of the nonlinear dynamical elements.
We can then think of this as a natural application, at the machine level, in a computing machine consisting of chaotic modules [25-27, 32, 69, 70, 76, 81-85]. It is equally potent as a special-applications search chip, which can be added on to regular circuitry and should prove especially useful in machines that are repeatedly employed for selection/search operations.

^1 The space between the words is ignored.

Figure 3-3. Searching for "l". (a) Threshold levels encoding the sentence "strawberry fields"; (b) the search key value for the letter "l" is added to all elements; (c) the elements update to the next time step. For clarity, elements that reach the detection level are marked solid black.

In terms of the processor timescale, the search operation requires one dynamical step, namely one unit of the processor's intrinsic update time. The principal point here is the scope for parallelism in our scheme: the selection occurs through one global shift, which implies that there is (in principle) no scale-up with the size N. Additionally, conventional search algorithms work with ordered lists, and the time required for ordering generically scales with N as O(N log N). Here, in contrast, there is no need for ordering, which further reduces the search time.

Regarding information storage capacity, note that we employ an M-state encoding, where M can in principle be very large. This offers a considerable gain in encoding capacity. As

Figure 3-4. Searching for "e". (a) Threshold levels encoding the sentence "strawberry fields"; (b) the search key value for the letter "e" is added to all elements; (c) the elements update to the next time step. For clarity, elements that reach the detection level are marked solid black.

in the example we present above, the letters of the alphabet are encoded by one element each; binary coding would require much more hardware to do the same. Specifically, consider the illustrative example of encoding a list of names and then searching the list for the existence of a certain name. In the standard ASCII encoding technique, each letter is encoded as two hexadecimal digits, i.e. 8 bits. Assuming a maximum name length of k letters, this implies that one has to use 8k binary bits per name, so the search operation typically scales as O(8kN). Consider in comparison what our scheme offers: if base 26 ("alphabetical" representation) is used, each letter is encoded into one dynamical system (an "alphabit"). As mentioned before, the system is capable of this dense encoding, as it can be controlled onto 26 distinct fixed points, each corresponding to a letter. Again assuming a maximum length of k letters per name, one needs to use k alphabits per

Figure 3-5. Searching for "x". (a) Threshold levels encoding the sentence "strawberry fields"; (b) the search key value for the letter "x" is added to all elements; (c) the elements update to the next time step. It is clear that no elements reach the detection level at 1.0; (d) by lowering the detection level we can detect whether items adjacent to "x" are present. For clarity, elements that reach the detection level ("w" and "y") are marked solid black.

name. So the search effort scales as kN. Namely, the storage is 8 times more efficient and the search can be done roughly 8 times faster as well. In general, if base-S encoding is employed, for example where S is the set of all possible names (size(S) ≥ N), then each name is encoded into one dynamical system with S fixed points (a "superbit"). One then needs just 1 superbit per name, implying that the search effort scales simply as N, i.e. 8k times faster than the binary-encoded case. Moreover, in practice the final step of detecting the maximal values can conceivably be performed in parallel. This would reduce the search effort to two time steps: one to map the matching item to the maximal value, and one to detect the maximal values simultaneously. In that case the search would be 8kN times faster than the binary benchmark.

Alternate ideas for implementing the increasingly important problem of search have included the use of quantum computers [80]. However, our nonlinear dynamical scheme has the distinct advantage that the enabling technology for practical implementation need not be very different from conventional silicon devices: the physical design of a dynamical search chip should be realizable through conventional CMOS circuitry. Implemented at the machine level, this scheme can perform unsorted searches efficiently. CMOS circuit realizations of chaotic systems, like the tent map, already operate beyond 1 MHz [86, 87]. Thus a complete search for an item, comprising search key addition, update, threshold detection, and list restoration, can be performed at 250 kHz, regardless of the length of the list. Furthermore, nonlinear systems are abundant in nature, so embodiments of this concept can be conceived in many different physical systems, ranging from fluids to electronics to optics. Potentially good candidates for physical realization of the scheme include nonlinear electronic circuits and optical devices [88]. Also, systems such as single electron tunneling junctions [89], which are naturally piecewise-linear maps, can conceivably be employed to make such search devices.

In summary, we have presented a method to efficiently and flexibly store information using nonlinear dynamical elements. We demonstrated how a single element can store M distinct items, where M can be large and can vary to best suit the nature of the data being stored and the application at hand. Namely, we have information storage elements of flexible capacity, capable of naturally storing data in different bases, in different alphabets, or with multilevel logic. This cuts down space requirements by a factor of log2(M) in relation to elements storing via binary bits. Further, we have shown how this method of storing information can be naturally exploited for searching for information.
In particular, we demonstrated a method to determine the existence of an item in an unsorted list. The method involves a single global shift operation applied simultaneously to all the elements comprising the list, such that the next dynamical step pushes the element(s) storing the matching item (and only those) to a unique, maximal state. This extremal state can then

be detected by a simple level detector, directly giving the number of matches. Moreover, the maximal state can be treated as a maximal range, in which case approximate matches are identified as well.

CHAPTER 4
A SIMPLE ELECTRONIC IMPLEMENTATION OF CHAOTIC COMPUTATION

This chapter is a short exposition of the results of our publication [34] concerning an iterated map with a very simple (i.e. minimal) electronic implementation. We first propose and characterize the map and then provide the circuit that implements it. We proceed to determine control thresholds for flexibly representing the five fundamental logic gates, and demonstrate how this map (and circuit) can be used to implement the search algorithm introduced in Chapter 3.

4.1 An Iterated Nonlinear Map

We begin by considering an iterated map governed by the following equation:

x_{n+1} = α x_n / (1 + x_n^β),    (4-1)

where α and β are system parameters. Figure 4-1 shows the bifurcation diagrams for different values of α and β. It is evident that this map yields dynamics ranging from fixed points through chaos. It is also clear that the map follows the period-doubling route to chaos with respect to α, and it does so as well with respect to β. In the following sections we consider the map in the chaotic regime, with α = 2 and β = 10, namely the chaotic map given by:

x_{n+1} = 2 x_n / (1 + x_n^10).    (4-2)

This operating point is indicated by the dotted line in the bottom right panel of Figure 4-1, and the graphical form of this map is presented in Figure 4-2.

4.2 Threshold Control: Chaos into Different Periods

Using the map given by Equation 4-2, we wish to construct a system that can represent M distinct states, where M can be large. The size of M will be limited only by our ability to distinguish one state from the next in the presence of noise. To do this we

Figure 4-1. Bifurcation diagram of the iterated map of Equation 4-1 for various values of α and β. The dotted line in the bottom right panel indicates the chosen operating point, as prescribed by Equation 4-2. Here x is the value taken by the map after initial transients have died out.

use the simple and easily implementable threshold control mechanism described in Section 1.2.3. Specifically, we place the state variable x under control as:

x_{n+1} = 2 x_n / (1 + x_n^10), if 2 x_n / (1 + x_n^10) ≤ x*,
x_{n+1} = x*, otherwise,    (4-3)

where x* is the imposed threshold value. The effect of this control is to limit the available phase space by clipping the state variable. In this method no parameters are adjusted, and only one state variable is occasionally reset. Note that this scheme is computationally simple and requires no costly run-time computations. Figure 4-3 illustrates the behaviour of the system under varying thresholds. Depending on the value of the threshold x*, this control method produces a wide variety of nonlinear dynamical behaviours, ranging from fixed points to periodic behaviour of various

Figure 4-2. Graphical form of the map to be implemented by an electronic circuit. The parameters for this form are set at α = 2 and β = 10.

periodicities to chaos. As indicated in the figure, the system is controlled onto fixed points for thresholds x* < 1. When the threshold is above unity, many different periodic (as well as chaotic) orbits become available.

4.3 Electronic Analog Circuit: Experimental Results

The realization of the discrete map of Equation 4-2 in circuitry is depicted in Figure 4-4. In the circuit, V_in and V_o denote the input and output voltages, corresponding to x_n and x_{n+1} of the equation, respectively. A simple nonlinear device is constructed by coupling two complementary (n-channel and p-channel) junction field-effect transistors (Q1, Q2) [90], mimicking the nonlinear characteristic curve f(x) = 2x / (1 + x^10). The voltage across resistor R1 is amplified by a factor of 5 using the operational amplifier U1 in order to scale the output voltage back into the range of the input voltage, a necessary condition for a circuit based on a map. The resulting voltage characteristics of the nonlinear device are depicted in Figure 4-5; compare with the mathematical form of the map in Figure 4-2.
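The behaviour summarized in Figure 4-3 can be sketched numerically. The following is a minimal simulation of the threshold-controlled map of Equations 4-2 and 4-3; the period-detection helper and its parameters are our own illustration, not part of the circuit description.

```python
# Sketch of the threshold-controlled map of Equations 4-2 and 4-3:
# the uncontrolled update is f(x) = 2x / (1 + x^10); whenever the update
# would exceed the threshold xstar, the state is clipped back to xstar.

def f(x):
    return 2.0 * x / (1.0 + x ** 10)

def controlled_step(x, xstar):
    xn = f(x)
    return xstar if xn > xstar else xn

def settled_period(xstar, x0=0.5, transient=1000, max_period=64, tol=1e-9):
    """Iterate past transients, then find the smallest repeat period."""
    x = x0
    for _ in range(transient):
        x = controlled_step(x, xstar)
    orbit = [x]
    for _ in range(max_period):
        x = controlled_step(x, xstar)
        orbit.append(x)
    for p in range(1, max_period):
        if abs(orbit[p] - orbit[0]) < tol:
            return p
    return None  # chaotic, or period longer than max_period

# Any threshold below 1 pins the state on a period-1 fixed point at xstar,
# since x < f(x) on (0, 1) means every update is clipped back to xstar.
print(settled_period(0.5))   # 1
```

Scanning `settled_period` over thresholds above unity reproduces the variety of periodicities indicated in Figure 4-3.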

Figure 4-3. Effect of the threshold value x* on the dynamics of the system given by Equation 4-3.

To realize the map of Equation 4-3, we require two sample-and-hold circuits in addition to a threshold controller circuit; see Figure 4-6. The first sample-and-hold (S/H) circuit holds the input signal (x_n) in response to a clock signal CK1. The output from this circuit is fed as input to the nonlinear device for the subsequent mapping, that is, Equation 4-2. A second S/H circuit takes the output from the nonlinear device in response to a clock signal CK2. In the absence of control, the output from the 2nd sample-and-hold circuit (x_{n+1}) closes the loop directly as the input to the 1st sample-and-hold circuit; under control it passes through the threshold control circuit. The main purpose of the two sample-and-hold circuits is to introduce discreteness into the system and additionally

Figure 4-4. Circuit diagram of the nonlinear device of Equation 4-3. (Left) Intrinsic (resistorless) complementary device made of two (n-type and p-type) JFETs; Q1: 2N5457, Q2: 2N5460. (Right) Amplifier circuitry to scale the output voltage back into the range of the input voltage; R1: 535 Ω, U1: AD712 op-amp, R2: 100 kΩ and R3: 450 kΩ. Here V_in = x_n and V_o = x_{n+1}.

to set the iteration speed. To implement the control for nonlinear dynamical computing, the output from the 2nd sample-and-hold circuit is input to the threshold controller described by Equation 4-3. The output from this threshold controller then becomes the input to the 1st sample-and-hold circuit. In Figure 4-6, the sample-and-hold circuits are realized with National Semiconductor's sample-and-hold IC LF398, triggered by delayed timing clock pulses CK1 and CK2 [70]. Here a clock rate of either 10 kHz or 20 kHz may be used. The threshold controller circuit, shown in Figure 4-7, is realized with an AD712 operational amplifier, a 1N4148 diode, a 1 kΩ series resistor, and the threshold control voltage x* (= V_con). Figure 4-8(a) displays the uncontrolled chaotic waveform, and Figures 4-8(b-d) show representative results of the chaotic system under different threshold values x* (= V_con). It is clear that adjusting the threshold yields cycles of varying periodicities. Also, note that simply setting the threshold beyond the bounds of the attractor (5 V)

Figure 4-5. Voltage response characteristics of the nonlinear device, based on Equation 4-2 and the circuit of Figure 4-4.

gives back the original dynamics, so the controller is easily switched on and off. A detailed comparison shows complete agreement between experimental observations and analytical results. For instance, the threshold that needs to be set in order to obtain a certain periodicity, and the trajectory of the controlled orbit, can be worked out exactly through symbolic-dynamics techniques. Further, the control transience is very short here (typically of the order of 10^3 times the controlled cycle length), and the perturbation involved in threshold control is usually small. This method is thus especially useful in situations where one wishes to design controllable components that can switch flexibly between different behaviours. Calibrating the system's characteristics at the outset with

Figure 4-6. Schematic diagram for implementing the threshold-controlled nonlinear map. CK1 and CK2 are clock timing signals, while the modules designated S/H are sample-and-hold circuits.

Figure 4-7. Circuit diagram of the threshold controller. V_in and V_o are the input and output, D is a 1N4148 diode, R = 1 kΩ, and U2 is an AD712 op-amp. V_con = x* (controller input voltage).

respect to threshold gives one a look-up table from which to directly and simply extract widely varying temporal patterns.

Figure 4-8. PSPICE simulation results of the experimental circuit. The ordinate is x_n and the abscissa is the discrete time n measured in ms. (a) Uncontrolled chaos: x* = 6 V, (b) period-5 cycle: x* = 4 V, (c) period-2 cycle: x* = 3.7 V and (d) period-1 cycle: x* = 3.5 V.

4.4 Fundamental Logic Gates with a Chaotic Circuit

Here we explicitly show how, by using the threshold-controlled map of Equation 4-3, we obtain the clearly defined logic gate operations NOR, NAND, AND, OR, and XOR. The state of the system is represented by the state value of x, and the initial state of the system is represented as x_0. In our method all five basic gate operations involve the following steps: specification of x_0 based on the operation and the inputs through threshold control, nonlinear update (evolution of the circuit dynamics), and output interpretation through threshold monitoring, in the spirit of the Excess Overflow Propagation of Section 1.2.3. Specifically:

1. Inputs and programming: x_0 = x_prog + x_I1 + x_I2. Here x_prog is a programming shift that fixes the initial state x_0 of the system, based on the gate to be operated. Letting a finite voltage δ denote a logical 1, we set x_Ii = δ for an input of logical 1 and x_Ii = 0 for an input of logical 0.

2. Nonlinear update: x_0 → f(x_0), where f(x) is the nonlinear function given by Equation 4-2.

3. Thresholding to obtain the output Z, defined as:

Z = 0, for f(x) ≤ x*,
Z = f(x) − x*, for x* < f(x),

where x* = 1 is the threshold reference signal, which is set the same for all gates, unlike the earlier case of the Logistic map. The output is interpreted as logic output 0 if Z = 0 and logic output 1 if Z > δ/2.

Since the system is nonlinear (and may be chaotic), in order to specify the initial x_0 accurately in hardware experiments one needs a controlling mechanism. Here we employ a threshold controller to set the initial x_0; that is, we use the clipping action of the threshold controller to achieve the initialization. A comparator is used to recover the output from the state of Z. The logical operations are defined by the input-to-output mappings depicted in the truth table of Table 4-1.

Table 4-1. Truth-table for the five fundamental logic gates NOR, NAND, AND, OR and XOR.

I1  I2 | NOR  NAND  AND  OR  XOR
 0   0 |  1    1     0    0    0
 0   1 |  0    1     0    1    1
 1   0 |  0    1     0    1    1
 1   1 |  0    0     1    1    0

From the definition of f(x), x*, and the above truth table, we obtain a set of inequality conditions that need to be satisfied simultaneously, shown in Table 4-2. Note that the symmetry with respect to the inputs reduces the four conditions in the truth table of Table 4-1 to three distinct conditions, with rows 2 and 3 of Table 4-1 leading to the single condition of row 2 in Table 4-2. The above inequalities have many possible solutions, depending on the size of δ. By setting δ = 0.3 we can easily solve the equations for the different programming shifts that each gate requires. The specific x_prog values for the five logical operations are listed in Table 4-3.
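The three-step procedure above can be sketched as a brute-force parameter scan. The particular x_prog solutions this sketch finds are illustrative values satisfying the stated inequalities, not necessarily the values tabulated in the text, and the scan itself is our own addition.

```python
# Brute-force scan for programming shifts x_prog that satisfy the gate
# conditions for f(x) = 2x/(1 + x^10), monitoring threshold xstar = 1 and
# logic level delta = 0.3, as described in steps 1-3 above.

def f(x):
    return 2.0 * x / (1.0 + x ** 10)

XSTAR, DELTA = 1.0, 0.3

# Required outputs for inputs (0,0), (0,1)/(1,0) and (1,1), per Table 4-1.
GATES = {
    "NOR":  (1, 0, 0),
    "NAND": (1, 1, 0),
    "AND":  (0, 0, 1),
    "OR":   (0, 1, 1),
    "XOR":  (0, 1, 0),
}

def implements(x_prog, outputs, margin=0.5 * DELTA):
    """Check the three symmetry-reduced conditions, requiring Z > delta/2
    for a logic 1 and Z = 0 for a logic 0."""
    for n_ones, want in zip((0, 1, 2), outputs):
        z = f(x_prog + n_ones * DELTA) - XSTAR   # excess overflow Z
        if want == 1 and z <= margin:
            return False
        if want == 0 and z > 0:
            return False
    return True

for gate, outputs in GATES.items():
    x_prog = next(x / 1000 for x in range(1000)
                  if implements(x / 1000, outputs))
    print(gate, round(x_prog, 3))
```

A solution exists for every gate at δ = 0.3, confirming that all five truth tables are reachable from a single element by changing only the programming shift.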

Table 4-2. Necessary and sufficient conditions to be satisfied by a chaotic element in order to implement the logical operations NOR, NAND, AND, OR and XOR. Here f(x) is given by Equation 4-2 and x* = 1 is the monitoring threshold for interpretation of the logic output.

I1  I2   | NOR                  | NAND                 | AND
0   0    | 1 < f(x_prog)        | 1 < f(x_prog)        | f(x_prog) ≤ 1
0/1 1/0  | f(x_prog + δ) ≤ 1    | 1 < f(x_prog + δ)    | f(x_prog + δ) ≤ 1
1   1    | f(x_prog + 2δ) ≤ 1   | f(x_prog + 2δ) ≤ 1   | 1 < f(x_prog + 2δ)

I1  I2   | OR                   | XOR
0   0    | f(x_prog) ≤ 1        | f(x_prog) ≤ 1
0/1 1/0  | 1 < f(x_prog + δ)    | 1 < f(x_prog + δ)
1   1    | 1 < f(x_prog + 2δ)   | f(x_prog + 2δ) ≤ 1

Table 4-3. Numerical values of x_prog for implementing the logical operations NOR, NAND, AND, OR and XOR, based on Equation 4-2.

Setting up the map with the initial condition x_0 as defined above, we allow the map to update to a new value x_1 = f(x_0) and compare this value to the monitoring threshold x* = 1: if the new state of the map is greater than the threshold, the output is a logical 1; if it is less than the threshold, the output is a logical 0. The updated states of a chaotic element following Equation 4-2, satisfying the conditions of Table 4-2 with the x_prog values given in Table 4-3, are shown in Table 4-4. The circuitry described in the section above has been tested on the logic operations described here and shows complete agreement with the simulation results.

Table 4-4. Updated state values, x_1 = f(x_0), of a chaotic element satisfying the conditions in Table 4-2 in order to implement the logical operations NOR, NAND, AND, OR and XOR.

We have presented a proof-of-principle device that demonstrates the capability of this nonlinear map to implement the five fundamental logic gates by exploiting the nonlinear response of the system. The main benefit is the ability to exploit a single chaotic

element to reconfigure into different logic gates through a threshold-based morphing mechanism. Contrast this with a conventional field-programmable gate array element, where reconfiguration is achieved by switching between multiple single-purpose gates.

4.5 Encoding and Searching a Database Using Chaotic Elements

In the spirit of Chapter 3, we apply that method to the map, and circuit, given by Equation 4-3. Specifically, we show how this map can be utilized to stably encode and store various items of information (such as patterns and strings) to create a database. Further, we demonstrate how this storage method allows one to efficiently determine the number of matches (if any) to some specified item [28]. Consider an array of elements, each of which evolves according to Equation 4-3. The nonlinear dynamics of the array elements will be utilized for flexible-capacity storage, as well as for pre-processing data for exact (and inexact) pattern-matching tasks.

Encoding information. We consider a database of length N, with each member of the database encoded in an element obeying Equation 4-3; we index these elements with j = 1, 2, ..., N, so the state of the whole array at time n can be represented as X_n^j. The database is made up of items from an alphabet of M unique items, indexed by m = 1, 2, ..., M. We correlate each item m with a threshold x* for Equation 4-3 such that the element is confined on a fixed point of period 1, and we define T^j[m] as the threshold for the j-th element encoding the m-th item. For this map, thresholds ranging from 0 to 1 yield fixed points, as depicted in Figure 4-3. Namely, X_n^j = T^j[m] for all time n when the threshold is chosen as 0 < T^j[m] < 1. This follows from the fact that x < f(x) for all x in (0, 1), implying that the subsequent iteration of a state at T^j[m] will always exceed T^j[m] and thus be reset to T^j[m].

In our encoding, the thresholds are chosen from the interval (0, 1/2), namely a subset of the fixed-point window (0, 1).¹ Without loss of generality, consider each item to be represented by an integer z from the range [1, M]. Defining a resolution r between each threshold as:

r = 1/(2M),    (4-4)

gives a lookup map from the encoded integer to the threshold, relating the integers z in the set [1, M] to thresholds T^j[m] in the range [r, 1/2] by:

T^j[m] = z · r.    (4-5)

Therefore we obtain a direct correspondence between a set of integers ranging from 1 to M, where each integer represents an item, and a set of M threshold values. So we can store N database elements by setting appropriate thresholds (via Equation 4-5) on N dynamical elements. Clearly, from Equation 4-5, if the threshold setting has better resolution (smaller r), then a larger range of values can be encoded. Note, however, that precision is not a restrictive issue here, as different data representations can always be chosen to suit a given precision of the threshold mechanism.

Processing information. Once we have a given database stored by setting appropriate thresholds on N dynamical elements, we can query for the existence of a specific item in the database in one global operational step. This is achieved by globally shifting the state variable of all elements of the database up by an amount that represents the item being searched for.

¹ Actually we can use as much as the interval (0, 0.803), since 0.803 (= 9^(-1/10)) is the pre-image of the maximum.
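The encode-and-search procedure, anticipating the search-key construction described in the following paragraphs, can be sketched as below. The value xhat = 9^(-0.1) is derived by us from f(x) = 2x/(1 + x^10) as the pre-image of the map's maximum, and the detection tolerance is our own choice.

```python
# Sketch of the storage/search scheme: each letter of a 26-letter alphabet
# is pinned on a fixed point via its threshold T = z*r, r = 1/(2M)
# (Equations 4-4 and 4-5). Searching adds the key Q[k] = xhat - T[k] to
# every element, so only matching elements land on xhat and are maximized
# by the next iteration of the map.

def f(x):
    return 2.0 * x / (1.0 + x ** 10)

M = 26
R = 1.0 / (2 * M)        # resolution between thresholds
XHAT = 9.0 ** -0.1       # pre-image of the maximum of f (derived, ~0.803)
XMAX = f(XHAT)           # maximal reachable state value (~1.445)

def encode(text):
    """Store each letter as a threshold, i.e. the element's pinned state."""
    return [(ord(c) - ord('a') + 1) * R for c in text if c.isalpha()]

def search(thresholds, letter):
    """Global shift by the search key, one map iteration, level detection."""
    key = XHAT - (ord(letter) - ord('a') + 1) * R
    updated = [f(t + key) for t in thresholds]
    return [i for i, v in enumerate(updated) if v >= XMAX - 1e-9]

db = encode("the quick brown fox")
print(search(db, 'o'))   # two matches: the 'o' in "brown" and in "fox"
print(search(db, 'd'))   # no matches
```

Because f is flat at its maximum, the nearest non-matching elements land measurably below XMAX, which is what makes the level detection, and the relaxed detection of adjacent items, robust.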

Noting that the maximal state-variable value for this system is approximately 1.445, one raises the state X_n^j of each element j to X_n^j + Q[k], where Q[k] is a search key given by:

Q[k] = 0.803 − T[k],    (4-6)

where k is the index of the integer (item) being queried for, and 0.803 is the unique value of this system that evolves to the maximal value on an iteration of the system. So the value of the search key is simply the pre-image of the maximal state-variable value minus the threshold value corresponding to the item being searched for; given the index k of item z, we have T[k] = k · r.

The addition of the search key Q[k] shifts the interval that the database elements can span from [r, 1/2] to [r + Q[k], 1/2 + Q[k]]. Since Q[k] + T[k] adds up to 0.803, it is guaranteed that only the element(s) matching the item being queried for will have its (their) state shifted to 0.803, which is the only state which, after the subsequent iteration, is maximized to the value of 1.445.² So the total search process is rendered simple, as the state with the matching pattern is selected out and mapped to the maximal value, allowing easy detection. Further, by relaxing the detection level by a prescribed tolerance, we can check for the existence within our database of numbers or patterns that are "close to" the queried item.³

Representative example. Consider the case where our data is English-language text, encoded as described above on a letter-by-letter basis by an array of maps, following Equation 4-5. In this case the distinct items are the letters of the English alphabet and we have M = 26. We obtain r = 1/52 ≈ 0.019 from Equation 4-4, and the appropriate threshold level for each item is obtained via Equation 4-5. More concretely, consider as

² Note that all other states (both higher and lower than 0.803) get mapped to values lower than 1.445.
³ Where "close to" is defined by the designer of the database.

our database the phrase "the quick brown fox"; each letter in this phrase is an element of the database and can be encoded using the appropriate threshold, as in Figure 4-9(a).

Figure 4-9. Searching for 'b'. (a) Threshold levels encoding the phrase "the quick brown fox"; (b) the search key value for the letter 'b' is added to all elements; (c) the elements update to the next time step. For clarity, we mark black the elements that reached the detection level.

Now we query the database regarding the existence of specific items. Figure 4-9 presents the example of querying for the letter 'b'. To do so, the search key value corresponding to the letter 'b' (z = 2) is added globally to the states of all elements, Figure 4-9(b). Then, through their natural evolution, upon the next time step the state(s) of the element(s) containing the letter 'b' is (are) maximized, Figure 4-9(c). In Figure 4-10 we perform an analogous query for the letter 'o', which happens to be present twice in our database, to show that multiple occurrences of the same item can be detected. Finally, in Figure 4-11 we query for an item that is not part of our given database, the letter 'd'. As expected, Figure 4-11(c) shows that none of the elements are maximized. By lowering the

Figure 4-10. Searching for 'o'. (a) Threshold levels encoding the phrase "the quick brown fox"; (b) the search key value for the letter 'o' is added to all elements; (c) the elements update to the next time step. For clarity, we mark black any elements that reached the detection level.

detection level just one step down from the maximal, we detect whether items adjacent to the desired one are present. Specifically, we detect that the letters 'c' and 'e' are contained in our database. This demonstrates that inexact matches can also be found, just as easily.

4.6 Conclusion

In summary, we introduced a simple map having rich nonlinear dynamics, and a simple electronic circuit realization. We then demonstrated the direct and flexible implementation of the five basic logic gates using this simple nonlinear map (circuit). Further, we showed how the dynamics of this map can be utilized to provide an efficient database search method. We have experimentally implemented the electronic circuit analog

Figure 4-11. Searching for 'd'. (a) Threshold levels encoding the phrase "the quick brown fox"; (b) the search key value for the letter 'd' is added to all elements; (c) the elements update to the next time step; it is clear that no elements reach the detection level at the maximal value. (d) By lowering the detection level we can detect whether items adjacent to 'd' are present ('c' and 'e').

of this nonlinear map and have demonstrated the efficacy of the threshold controller in yielding different controlled responses from this map circuit.

CHAPTER 5
LOGIC OPERATIONS FROM EVOLUTION OF DYNAMICAL SYSTEMS

In this chapter we propose the direct and flexible implementation of logic operations using the dynamical evolution of a nonlinear system [33]. The concept involves observing the state of the system at different time instances to obtain different logic outputs. We explicitly implement the basic NAND, AND, NOR, OR and XOR logic gates, as well as multiple-input XOR and NXOR logic gates. Further, we demonstrate how a single dynamical system can perform more complex operations, such as bit-by-bit addition, in just two iterations. The concept uses the nonlinear characteristics of the time dependence of the state of the dynamical system to extract different responses from the system. The highlight of this method is that a single nonlinear system is capable of yielding a time sequence of different logic operations. Further, we explicitly demonstrate, through three examples, how results from this method can be obtained by varying any of the defining variables (x_0, x_init, δ, n).

5.1 Generation of a Sequence of (2-Input) Logic Gate Operations

We outline a method for obtaining the five basic logic gates using different dynamical iterates of a single nonlinear system. In particular, consider a chaotic system whose state is represented by a value x. The state of the system evolves according to some dynamical rule; for instance, the updates of the state of the element from time n to n + 1 may be well described by a map, i.e., x_{n+1} = f(x_n), where f(x) is a nonlinear function. This element receives inputs before the first iteration (i.e., at n = 0) and outputs a signal after evolving for a (short) specified time or number of iterations. The method can be applied for any sequence of gates; for illustrative purposes we chose the sequence NAND, AND, NOR, XOR and OR (see Table 5-1 for the truth table). In general the method involves the following steps:

1. Input definition (for a 2-input operation): x_0 = x_init + x_I1 + x_I2, where x_init is the initial state of the system (comparable to x_prog in previous chapters, but now defining not a single gate but a sequence of gates or, more generally, operations)

before data inputs are introduced, while x_0 is the actual initial state of the system that includes the data to be operated on. As previously, we set x_Ii = δ for the logical input I_i = 1, and x_Ii = 0 for the logical input I_i = 0. So we need to consider the following three cases:

(a) Both I_1 and I_2 are 0 (row 1 in Table 5-1): the initial state of the system is x_0 = x_init + 0 + 0 = x_init.
(b) Either I_1 = 0 and I_2 = 1, or I_1 = 1 and I_2 = 0 (row 2 or 3 in Table 5-1): the initial state is x_0 = x_init + δ.
(c) Both I_1 and I_2 are 1 (row 4 in Table 5-1): the initial state is x_0 = x_init + δ + δ = x_init + 2δ.

2. Chaotic evolution over some prescribed number of steps, i.e. f^n(x_0) = x_n, for n ≥ 1.

3. The evolved states f^n(x_0) yield the logic output at each n as follows:

Logic output = 0, if f^n(x_0) ≤ x*_n,
Logic output = 1, if f^n(x_0) > x*_n,

where x*_n is a reference monitoring value at time instance n.

Table 5-1. The truth table of the five basic logic operations NAND, AND, NOR, XOR, OR.

I1  I2 | NAND  AND  NOR  XOR  OR
 0   0 |  1     0    1    0    0
 0   1 |  1     0    0    1    1
 1   0 |  1     0    0    1    1
 1   1 |  0     1    0    0    1

Since the system is chaotic, in order to specify the initial x_0 accurately we employ the threshold control mechanism (see Section 1.2.3); we note that this mechanism can be invoked at any subsequent iteration as well. For logic recovery, the updated or evolved value of f(x) is compared with the x*_n value using a comparator action, in the spirit of Excess Overflow Propagation, again as in Section 1.2.3. In order to obtain all the desired input-output responses of the different gates, as displayed in Table 5-1, we need to satisfy the conditions enumerated in Table 5-2. Note that the symmetry of inputs to outputs reduces the four conditions in the truth table of Table 5-1 to three distinct conditions, with the 2nd and 3rd rows of Table 5-1 leading to the 2nd condition of Table 5-2.

Table 5-2. Necessary and sufficient conditions to be satisfied by a chaotic element in order to implement the logic operations NAND, AND, NOR, XOR and OR on subsequent iterations. Here x_init = 0.325 and δ = 0.25; x*_n = 0.75 for n = 1, 2, 3, 4 (the NAND, AND, NOR and XOR operations) and x*_5 = 0.4 for the OR operation.

Logic gate:      NAND            AND             NOR             XOR             OR
Iteration (n):   1               2               3               4               5

Condition 1, logic input (0,0), x_0 = x_init = 0.325:
  f(x_0) > x*_1  f(x_1) ≤ x*_2   f(x_2) > x*_3   f(x_3) ≤ x*_4   f(x_4) ≤ x*_5
  x_1 = 0.88     x_2 = 0.43      x_3 = 0.98      x_4 = 0.08      x_5 = 0.28

Condition 2, logic input (0,1) or (1,0), x_0 = x_init + δ = 0.575:
  f(x_0) > x*_1  f(x_1) ≤ x*_2   f(x_2) ≤ x*_3   f(x_3) > x*_4   f(x_4) > x*_5
  x_1 = 0.98     x_2 = 0.09      x_3 = 0.33      x_4 = 0.87      x_5 = 0.45

Condition 3, logic input (1,1), x_0 = x_init + 2δ = 0.825:
  f(x_0) ≤ x*_1  f(x_1) > x*_2   f(x_2) ≤ x*_3   f(x_3) ≤ x*_4   f(x_4) > x*_5
  x_1 = 0.58     x_2 = 0.98      x_3 = 0.1       x_4 = 0.34      x_5 = 0.9

So, given dynamics f(x), we must find values of the threshold(s) x*_n, initial state x_init and δ satisfying the conditions derived from the specific truth table to be implemented. Using, as usual, the Logistic map (Equation 1-1), we incorporated in Table 5-2 actual values of x_init and x*_n for n = 1, 2, 3, 4, 5 which satisfy the conditions imposed by the truth table considered. For illustrative purposes, the graphical representation of five iterations of the Logistic map is shown in Figure 5-1, displaying the results in Table 5-2. In summary, the inputs set up the initial state x_init + x_I1 + x_I2. Then the system evolves over n iterative time steps to each updated state x_n. The evolved state is compared to a monitoring threshold x*_n at every n. If the state at iteration n is greater than the threshold, a logical 1 is the output; if the state is less than the threshold, a logical 0 is the output. This process is repeated for each subsequent iteration.
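The sequential-gate scheme summarized above can be sketched in a few lines, assuming Equation 1-1 is the Logistic map at full nonlinearity, f(x) = 4x(1 − x), and taking x_init = 0.325 and δ = 0.25 as values consistent with the iterates listed in Table 5-2.

```python
# Sketch of the time-sequenced gate scheme: one Logistic-map element
# (assumed f(x) = 4x(1-x)) is initialized with the two logic inputs, then
# read out against a monitoring threshold at each iteration. Iterations
# 1..5 realize NAND, AND, NOR, XOR and OR in turn.

def f(x):
    return 4.0 * x * (1.0 - x)

X_INIT, DELTA = 0.325, 0.25
THRESHOLDS = {1: 0.75, 2: 0.75, 3: 0.75, 4: 0.75, 5: 0.4}
SEQUENCE = {1: "NAND", 2: "AND", 3: "NOR", 4: "XOR", 5: "OR"}

def gate_outputs(i1, i2):
    """Evolve x_0 = x_init + x_I1 + x_I2, thresholding at each iteration."""
    x = X_INIT + DELTA * i1 + DELTA * i2
    out = {}
    for n in range(1, 6):
        x = f(x)
        out[SEQUENCE[n]] = 1 if x > THRESHOLDS[n] else 0
    return out

for inputs in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(inputs, gate_outputs(*inputs))
```

Running the loop reproduces all five truth tables of Table 5-1 from the single chaotic element, one gate per iteration.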
Note that in the above example we present results for which x_init is not varied and x*_n is varied; the emphasis is on the specific behaviour that represents a computational

Figure 5-1. Graphical representation of five iterations of the Logistic map. Three different initial conditions are considered, each representing one of the three cases of two logic inputs: (0,0), (0,1)/(1,0), and (1,1). At each iteration a comparison with a monitoring threshold is performed: 0.75 for n = 1, 2, 3, 4 and 0.4 for n = 5. The results from two gates are also shown: the circles mark n = 1 (the NAND gate), compared with 0.75, with the colouring corresponding to the appropriate input case; the squares mark n = 5 (the OR gate), compared with 0.4, again with colouring appropriate to the inputs.

task, and not the actual values as such. The case of δ is slightly more complicated, since it actually represents a constant entity; varying its actual value is nevertheless also possible, but more care is needed, hence in this example we simply set δ = 0.25. In a more general context, therefore, we are relating inputs and required outputs to specific behavioural patterns; conversely, we can find the behavioural pattern that represents a needed operation. Hence the actual state values are not of great importance, and we can generate templates like Figure 5-2. To generate this specific template we set x*_n = 0.65 and varied x_init, in contrast to the example above; once again δ was kept constant at 0.25. Basically, the behaviour of each combination of variables is then interpreted as one of the eight possible symmetric binary operations. It is clear from this figure that we are not confined to n ≤ 5, so the length of the sequence of operations can be extended, and the actual order of the sequence of operations can also be changed. Theoretically, all operations and all sequences of operations exist under some combination of actual values of the variables. We should note that, as is clear from Figure 5-2, the range of each operation decreases in size with increasing iterations, and since the dynamics are chaotic we will at some point lose definition; as mentioned above, though, the threshold control mechanism can be invoked to re-initialize the system.

5.2 The Full Adder and 3-Input XOR and NXOR

This section is a direct extension of the previous one, and hence comprises two demonstrations. We extend the above method for sequential logic operations in two ways: first to more than two data inputs, and second to more than logic gate operations. Specifically, we show the implementation of the binary full adder and the implementations of 3-input XOR and NXOR gates.
For these implementations we employ the usual three steps, modified for 3 inputs as follows:

Figure 5-2. Patterns of binary two-input symmetric operations. We use the Logistic map with δ = 0.25, x_n = 0.65 for all n, and vary x_init.

1. Input definition (for a 3-input operation): x_0 = x_init + x_I1 + x_I2 + x_I3, as usual but with the addition of one extra input. In the context of the full adder, I_1 corresponds to the input binary number A, I_2 corresponds to the input binary number B, and I_3 corresponds to the carry input C_in (the carry from the previous positional digit addition), as in Table 5-3. So we need to consider the following four cases:
(a) If all inputs are 0 (1st row in Table 5-3), the initial state of the system is x_0 = x_init + 0 + 0 + 0 = x_init.
(b) If any one input equals 1 (2nd, 3rd and 5th rows in Table 5-3), the initial state is x_0 = x_init + 0 + 0 + δ = x_init + 0 + δ + 0 = x_init + δ + 0 + 0 = x_init + δ.
(c) If any two inputs equal 1 (4th, 6th and 7th rows in Table 5-3), the initial state is x_0 = x_init + 0 + δ + δ = x_init + δ + 0 + δ = x_init + δ + δ + 0 = x_init + 2δ.
(d) If all three inputs equal 1 (8th row in Table 5-3), the initial state is x_0 = x_init + δ + δ + δ = x_init + 3δ.
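The collapse of the eight truth-table rows into these four initial-state classes can be checked with a short sketch. The values x_init = 0.0 and δ = 0.25 here are illustrative assumptions only; the dissertation's tables supply the actual programmed values.

```python
from itertools import product

def initial_state(x_init, delta, a, b, c_in):
    """Encode the three binary inputs (A, B, C_in) into the initial
    state of the chaotic element: each high input adds delta."""
    return x_init + (a + b + c_in) * delta

x_init, delta = 0.0, 0.25   # illustrative values only

# Group the eight input rows by their count of high inputs; every row
# in a group must map to the same initial state.
classes = {}
for a, b, c in product((0, 1), repeat=3):
    x0 = initial_state(x_init, delta, a, b, c)
    classes.setdefault(a + b + c, set()).add(round(x0, 10))

print(classes)  # four classes, one distinct x_0 per class
```

Because the encoding depends only on the number of high inputs, the full adder needs to distinguish just these four states rather than all eight input rows.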


Solutions of a PT-symmetric Dimer with Constant Gain-loss Solutions of a PT-symmetric Dimer with Constant Gain-loss G14DIS Mathematics 4th Year Dissertation Spring 2012/2013 School of Mathematical Sciences University of Nottingham John Pickton Supervisor: Dr

More information

The Finite-Difference Time-Domain (FDTD) Algorithm

The Finite-Difference Time-Domain (FDTD) Algorithm The Finite-Difference Time-Domain (FDTD) Algorithm James R. Nagel Overview: It is difficult to overstate the importance of simulation to the world of engineering. Simulation is useful because it allows

More information