Neural Assemblies and Finite State Automata


2013 BRICS Congress on Computational Intelligence (1st BRICS Countries Congress on Computational Intelligence) & 11th Brazilian Congress on Computational Intelligence

Neural Assemblies and Finite State Automata

João Ranhel
Department of Electronics and Systems, UFPE Universidade Federal de Pernambuco, Recife, Brazil

Abstract — Neural assembly computing (NAC) is a framework for investigating computational operations realized by spiking cell assemblies and for designing spiking neural machines. NAC concerns the way assemblies interact and how their interactions result in information processing with causal and hierarchical relations. In addition, NAC investigates how assemblies represent states of the world, how they control the data flux carried by spike streams, how they create parallel processes by branching and dismantling other assemblies, and how they reverberate and create memory loops, among other issues. As cell coalitions interact they realize logical functions. Memory loops and logical functions are the elements engineers use to create finite state machines (FSM). This paper provides an overview of NAC, presents a methodology for implementing FSM in NAC, designs a finite state automaton, and shows a simulation and its results. Supplemental materials are available for download. Discussions of how FSM on NAC, and NAC itself, can contribute to the design of new types of spiking neural machines are presented.

Keywords — Neural assembly computing, cell ensembles, finite state machines, neural coalition, finite-state automaton.

I. INTRODUCTION

Neural assembly computing (NAC) is an approach that tries to explain how ensembles of neural cells represent and compute. The idea behind this framework is rather simple: neurons represent internal/external objects and states of the world by means of cell assemblies. Interactions among these coalitions perform logical functions (AND, OR, NOT), and assemblies may form reverberating loops which memorize information much like flip-flops in digital circuits. Logical gates and flip-flops are the elements engineers use to construct computers, finite state machines (FSM), and other electronic systems. Hence, in NAC the tools, knowledge, and technical procedures used in digital system design are applied to the design of spiking neural networks (SNN). The NAC approach was introduced in [1], and the construction of FSM (or finite-state automata, FSA) using NAC is discussed in this article.

In Section II, the concept of cell assembly is revisited, and its biological inspiration and plausibility are discussed. Section III gives an overview of NAC. Section IV reviews FSA concepts and how FSAs relate to biological systems; a methodology for designing FSA in NAC is introduced, and the simulations and results that support the claims made in the paper are presented. Section V discusses how FSA in NAC can help in developing new classes of SNN, and points out directions for further investigation of NAC. Section VI presents the conclusions.

II. NEURAL ASSEMBLIES AND COMPUTING

The idea that groups of neurons (instead of single cells) represent, memorize, and underlie brain processes is relatively old. In 1949, in The Organization of Behavior [2], D. Hebb proposed that concepts are represented by the activity of partially-overlapping neural groups, and that the co-activation of cell assemblies could be responsible for representing things.
A formalization of the cell assembly concept can be found in [3], where Wickens and Miller idealized a small block of homogeneous cortical tissue with local connections and used realistic physiological and anatomical parameters in order to determine the relations governing neural assembly ignitability, the number of strengthened synapses needed to produce a spike in a single member neuron, the upper and lower limits of neural assembly size, and the capacity for independent activation, among other quantities. The model implicitly contains the idea of digital coalitions, since the authors consider that there is a fraction of an assembly's members which, once triggered, inevitably makes the whole assembly ignite [3]. In their conception, after reaching a threshold in the number of active members, the assembly operates in an all-or-none fashion.

Other authors describe assemblies more informally, for example, relating coalitions to spatiality: "[Assembly:] A spatially distributed set of cells that are activated in a coherent fashion and are part of the same representation" [4]. However, other descriptions take an assembly as a transiently active ensemble of neurons [5], independently of spatial position or neighboring relations. This is how coalitions are considered in NAC. Functionally speaking, whenever a neural group fires together it forms a coalition, and its neurons do not have to be located in the same spatial region.

So far, investigations in NAC consider mainly digital coalitions. Digital assemblies (DA) are neural groups in which the coalition's behavior is determined by the great majority of its members [1]. A DA is a set of cells working in a dual-state manner: all (or almost all) member neurons fire together, or none of them fires for that assembly. However, individual neurons may be active for other assemblies outside a specific assembly's time window, or may simply fire due to noise. Thus, whenever a group behaves like an active or inactive coalition it is considered a DA.
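
As a toy illustration of this all-or-none regime (a sketch with arbitrary numbers, not the quantitative model of [3]): if each member fires upon receiving a perturbation Θ, and each active member contributes a recurrent weight of Θ/m, then the assembly ignites exactly when the number of active members reaches the threshold m.

  % All-or-none ignition of a digital assembly (toy numbers).
  N = 20;            % assembly size
  m = 8;             % active members required for ignition
  Theta = 24;        % perturbation that makes one member fire
  w = Theta / m;     % recurrent weight from each active member
  for active = 0:N
      drive = w * active;   % input seen by each silent member
      fprintf('%2d active members -> ignites: %d\n', active, drive >= Theta);
  end

Below the threshold the activity dies out; at or above it, every remaining member receives Θ and the whole coalition fires, which is the dual-state behavior of a DA.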

There is a strong intuition that cell assemblies can support many brain computing processes, but the mechanisms underlying such operations are not yet understood [6]. Although modern algorithms can detect and isolate units and group activities in massive recordings (including in vivo recordings from operant animals), determining causal relations among units and groups is quite complicated [7]. Considering such difficulty, one way of trying to figure out how cell ensembles compute is to propose plausible models and run simulations.

III. NAC OVERVIEW

Three important aspects in NAC are: the synaptic weights among neurons, the propagation delays between them, and the neuron types used for creating the SNN. The neuron type characterizes the computation a unit can perform, how it integrates the stimuli reaching it, and how it behaves in response to such stimuli. Izhikevich's simple model [8], [9] has been used in NAC mainly because it makes it possible to simulate the behaviors of different neuron types.

Concerning the synaptic weight, it is possible to reduce this issue to a single scalar by determining an excitatory postsynaptic perturbation (Θ) that guarantees a single spike for a certain neuron type, a method also used in [3]. If Θ is divided by the number of synaptic inputs (k) between two assemblies, the minimum synaptic weight (sw) between such assemblies can be calculated. In other words, the minimum synaptic weight necessary for causing post-synaptic neurons to fire is:

sw = Θ/k    (1)

where sw is the synaptic strength common to all neurons connecting two assemblies, Θ is the total input current (the perturbation) capable of triggering a single spike in a neuron, and k is the number of pre-synaptic active inputs. Since all neurons in a DA fire, k is equal to the number of the assembly's members. For simulations, a minimum of five and a maximum of two hundred members per coalition have been considered. As a solution for reducing the synaptic connection issue, synaptic weights are a priori assumed to be equally distributed, with values calculated by equation (1).

An important issue introduced in NAC designs is the spike propagation delay between neurons from pre- to post-synaptic assemblies. Spikes do not spread instantaneously along axons, so delays should be considered in SNN. The neuroscience literature reports spike velocities in large ranges, such as 1 to 100 m/s [10], sometimes as fast as 150 m/s [11], or even as slow as 0.15 m/s in non-myelinated axons [12]. The last one may cause delays of ~10 ms in short ensembles (at 0.15 m/s a spike takes 10 ms to travel 1.5 mm). These three elements acting together create the phenomena that are the fundamentals of NAC.

A. Logical Functions and Memory

The conjunction of propagation delays, synaptic weights, neuron types, and network connections makes spikes spread out and reach many neurons at different instants. Consequently, sets of neurons are reached by simultaneous stimuli and fire together, in synchrony or in a patterned, time-locked fashion. Sometimes spikes from an assembly A or from an assembly B can independently trigger a third assembly C. It means that A OR B is able to trigger C. This is equivalent to the logical function OR (denoted by the Boolean notation C ← A + B, read as "C is caused by spikes from A or from B"). In other situations, spikes from A (associated with the synaptic weights A→C) are not strong enough to trigger C alone, and the same may occur for spikes coming from B (with the synaptic weights B→C). But when spikes from A AND B occur coincidently, they become strong enough to trigger C. It means that this interaction performs the logical function AND (in Boolean notation C ← A·B, read as "C is caused by spikes from A and B simultaneously"). What is meant by "strong enough" is: if the synaptic weights from A and from B are sw/2, only the coincidence of spikes from both A and B produces Θ, which triggers each neuron of coalition C.
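
To make this weight arithmetic concrete, here is a minimal sketch (Matlab/Octave, not the simulator from [14]; the value Θ = 24 and the 1 ms Euler step are illustrative assumptions) in which a single Izhikevich regular-spiking neuron [8] receives Θ/2 from A and Θ/2 from B. The pulse from A alone is sub-threshold, while the coincident pulses trigger a spike; with full-weight inputs (Θ each), either input alone would fire the cell, implementing OR instead.

  % Coincidence (AND) detection by one Izhikevich neuron (sketch).
  a = 0.02; b = 0.2; c = -65; d = 8;  % regular-spiking parameters [8]
  Theta = 24;                         % 1 ms pulse yielding one spike
  v = -70; u = b*v;                   % resting state
  I = zeros(1, 500);                  % 500 ms of input current
  I(100) = Theta/2;                   % spikes from A alone: no output
  I(300) = Theta/2 + Theta/2;         % A AND B coincide: one output spike
  for t = 1:500                       % Euler integration, dt = 1 ms
      v = v + (0.04*v^2 + 5*v + 140 - u + I(t));
      u = u + a*(b*v - u);
      if v >= 30                      % spike detected: reset
          fprintf('spike at t = %d ms\n', t);  % prints only near t = 300
          v = c; u = u + d;
      end
  end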
A reader who knows Boolean algebra and digital circuits may be confused because, for the AND function, the inputs A AND B must remain active high (logic level "1") in order to keep the output C high, whereas spiking neurons fire only for a brief instant, in general 1 ms. This is an important point: an assembly executes a logical function (AND, OR) only during the ephemeral time the coalition is happening. After that, the resulting spikes simply become travelers to other nodes. Therefore, NAC machines may be seen as flux machines, in which information is most of the time travelling along axons. As the reader can infer, A can singly trigger another assembly, and an event A can also trigger (cause) more than one assembly, a process called branching. Each new branch is a parallel process triggered within the SNN.

Now consider that an event (a smell, a vision, a pain, etc.) creates an assembly. It means that the assembly represents that event. Considering that coalitions stand for something (they represent external or internal states of the world), nervous systems should not let such ephemeral events disappear, since they might carry important information. Therefore, nature probably found a way of retaining information by using assemblies. In other words, cell coalitions must interact in order to retain important representations, events, states, etc. The solution can be reverberation among assemblies: an assembly A may trigger an assembly B that triggers A back, forming a loop. Simple loops performing binary counters were shown in [13]. However, loops may be much more complex, such as A triggers B, which triggers C, which triggers D, which triggers A back, or even more intricate arrangements (see [1]).

These reverberating loops are called Bistable Neural Assemblies (BNA) because they can be in two states: ON (active) or OFF (inactive). As soon as a BNA is triggered, it retains one bit of information without demanding any plasticity mechanism, so no synaptic changes are required for retaining information. Functionally, a BNA may be comparable to a flip-flop digital circuit, but remember that the output state is not a fixed, stable value: it is a firing pattern recurring within a time window, after which the information becomes travelling spikes again.

Thus, the state of such a BNA can only be changed during interactions with the member nodes (the member assemblies). Moreover, since it is a loop, a BNA would in theory remain active indefinitely. Hence, other coalitions are necessary to play the veto, or inhibition, role. An assembly can dismantle a BNA or any branch by inhibiting another assembly; in this case, such a coalition executes the NOT logical function. Moreover, two assemblies may be necessary (AND) in order to inhibit another assembly, and such a conjunction executes the NAND logical function (an AND associated with a NOT). The same reasoning holds for OR associated with inhibition, which creates the NOR logical function. Fig. 1 shows a raster with the functions described above being executed in a simulation. The Matlab code and a tutorial for this simulation are available at [14], [15].

Fig. 1. Fundamental relations of DAs. This raster plot shows the logical relations performed by digital assemblies, which are polychronous groups with 20 neurons per assembly. Coalitions K2-K3-K4 perform a bistable loop, as do K6-K7-K8. K10 performs the OR logical function (K10 ← K2 + K7 + K8), while K12 performs an AND (K12 ← K4·K8). K14 performs the NOT function inhibiting K8, acting as a NAND function (¬K8 ← K1·K12). Each neuron is numbered and plotted on the ordinate axis.

Assemblies are distinguishable as sequential groups of cells firing together, labeled from K0 to K14. Each assembly has 20 neurons. Two coalitions are reserved by the system: K0 starts the simulation process and K1 serves as a constant "1" input every time K2 fires. Whenever users want the input (K1) to be "0" they must specify it in the InputString vector. For example, if InputString = [1 0 0 0 0 1 ...] then K1 fires at the first K2 event, does not fire for the next four K2 occurrences, fires again coincidently at the sixth K2 event, and so on. By doing so it is possible to trigger the system only once (at K0); all other events are then activated internally by coalitions, except for the K1 veto, performed by the system whenever users do not want K1 firing with K2 (whenever a "0" bit is found within the string). In this sense, K2 works as a pacemaker for the string inputs.

In order to demonstrate the logical functions, two groups are used as rhythm generators, similar to clock signals in digital computers. One rhythm is sustained by K2, which triggers K3, which triggers K4, which triggers K2 back, forming the reverberating loop RTH1. Note that when any coalition (K2, K3 or K4) is triggered the RTH1 loop is turned ON. The other rhythm is sustained by K6, which triggers K7, which triggers K8, which triggers K6 back, forming RTH2. When K0 starts the simulator it triggers both K2 and K6, so these two loops are synchronized (see Fig. 1), meaning the events K2 and K6, K3 and K7, K4 and K8 occur respectively at the same time.

Users can change the network topology by changing the matrix Topol in the simulator. For example, the first vectors read:

Topol = [ 0 2 sw tinev dnt; 0 6 sw tinev dnt;

meaning that K0 is connected to K2 with the default synaptic weight sw, the spike propagation delay between these coalitions is held in tinev (ms), and the default neuron type (dnt) is used for K2.
The second vector says that K0 is also connected to K6 with synaptic weight sw, tinev (ms) as the spike propagation delay between their neurons, and the default neuron type dnt for the neurons of assembly K6. Now, let us analyze the loops RTH1 and RTH2. In order to construct these loops the following vectors are added:

2 3 sw tbd dnt; 3 4 sw tbd dnt; 4 2 sw tbd dnt;
6 7 sw tbd dnt; 7 8 sw tbd dnt; 8 6 sw tbd dnt;

meaning that K2 is connected to K3 with the default synaptic weight sw, the propagation delay between them held in tbd (40 ms), and K3 having the neuron type dnt. The next vectors describe that K3 is connected to K4, and K4 is connected to K2, closing the loop. The vectors on the subsequent line describe that K6 is connected to K7, K7 to K8, and K8 back to K6. All connections have the same propagation delay (tbd); thus, regular firing intervals are expected for these coalitions, which can be observed in the raster plot (Fig. 1).

In order to construct an OR function, the following vector line is added:

2 10 sw tbd dnt; 7 10 sw tbd dnt; 8 10 sw tbd dnt;

These vectors say that the neurons of assembly K2 are connected to the neurons of assembly K10, that K7 is also connected to K10, and that K8 is connected to K10. All neurons in these three groups are connected with synaptic weight sw, the same propagation delay, and the default neuron type. Thus, the sum of the connections weighted sw equals Θ; hence, when spikes from K2, K7, or K8 reach K10's neurons this assembly is triggered. It means that any of these coalitions can trigger K10 independently. That is why the frequency of K10 is three times greater than that of K2, K7 or K8: each occurrence of these events triggers K10, so the K10 occurrence is tripled. In terms of logical functions, the raster plot (Fig. 1) shows that K10 can be triggered by K2 OR K7 OR K8:

K10 ← K2 + K7 + K8    (2)

The vector line that constructs an AND logical function is:

4 12 sw/2 tbd dnt; 8 12 sw/2 tbd dnt;

These two vectors connect assembly K4 to K12 and K8 to K12, but each with only half of the synaptic weight (sw/2). It means that neither K4 nor K8 alone can trigger K12.
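
Gathered in one place, the connection lists quoted so far would form the topology matrix below (a sketch following the tutorial's row format of pre-assembly, post-assembly, weight, delay, neuron type; the symbols sw, tinev, tbd and dnt are assumed to be defined by the simulator [14], [15], and the inhibitory NAND connections are added in the next paragraphs):

  % Sketch: Topol rows = [pre post weight delay neuronType]
  Topol = [ 0  2  sw    tinev  dnt;   % start: K0 triggers K2 ...
            0  6  sw    tinev  dnt;   % ... and K6 (loops start together)
            2  3  sw    tbd    dnt;   % RTH1 loop: K2 -> K3
            3  4  sw    tbd    dnt;   %            K3 -> K4
            4  2  sw    tbd    dnt;   %            K4 -> K2 (closes loop)
            6  7  sw    tbd    dnt;   % RTH2 loop: K6 -> K7
            7  8  sw    tbd    dnt;   %            K7 -> K8
            8  6  sw    tbd    dnt;   %            K8 -> K6 (closes loop)
            2 10  sw    tbd    dnt;   % OR: any of K2, K7, K8 ...
            7 10  sw    tbd    dnt;   % ... alone delivers Theta to K10
            8 10  sw    tbd    dnt;
            4 12  sw/2  tbd    dnt;   % AND: K4 and K8 must coincide ...
            8 12  sw/2  tbd    dnt ]; % ... for K12 to receive Theta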

Note in Fig. 1 that K12 occurs only 40 ms (tbd) after the coincident occurrence of K4 and K8: 40 ms after K4 AND K8 fire, their spikes reach K12, making this assembly fire:

K12 ← K4·K8    (3)

In order to produce an inhibitory function, the following vector line is added to the simulation:

1 14 sw/2 tbd dnt; 12 14 sw/2 tbd dnt; 14 8 -sw tbd dnt;

These vectors connect K1 to K14 and K12 to K14, each using sw/2, so K14 is triggered when K1 and K12 coincide (an AND); but note that K14 is connected to K8 with a negative synaptic weight (-sw). It means that when K14 happens it inhibits K8, so K14 functions as a NAND. Since K8 receives the spikes from K7 and from K14 at the same time, they cancel each other and the event K8 does not happen. Concisely, when InputString has a "1" bit in its sequence the system does not apply the K1 veto, so K1 fires; K1 and K12 then trigger the NAND K14, which dismantles the loop RTH2.

How is it possible to confirm that the logical functions were executed before and after dismantling RTH2? Note that after RTH2 is dismantled, K7 and K8 can no longer trigger K10, but K2 remains firing. As K2 is one input of the OR function and is connected to K10 with weight sw, it triggers K10 on its own. Hence, after K14, coalition K10 has the same frequency as K2, although phase-shifted by tbd milliseconds. After K14, K12 becomes silent because it depends on K4 and K8 occurring simultaneously, and K8 becomes silent after the dismantling of RTH2. Since K4 contributes only half of the perturbation (sw/2) and K8 is OFF, K12 cannot be triggered. Even after RTH2 is dismantled, RTH1 remains firing, because no inhibitory event is applied to RTH1 members.

Note that both RTH1 and RTH2 perform memory loops. They retain one bit of information: the event that caused them, the event they are representing. Being reverberating loops, they would remain active indefinitely if not dismantled. In this case, RTH2 was dismantled but RTH1 remains memorizing the initial event K0. In this sense, they can be short- or long-term memories. Thus, it is no surprise that inhibition plays important roles in NAC (and in biological neural networks) in dismantling branches and active loops; otherwise the spiking network would accumulate active reverberating loops in an epileptic manner.

These are the fundamental concepts in NAC. They are also the fundamental elements engineers use to create sequential digital machines and computers. Thus, let us analyze how NAC can perform finite state machines.

IV. FINITE AUTOMATON ON NAC

A FSA is a mathematical abstraction, a computational model of a machine with finite memory, consisting of a finite set of states, a start state, an input alphabet, and a transition function that maps input symbols and current states to some set of next states [16]. In other words, a FSA is an abstract machine that can be in only one of a finite number of states at a time. A specific condition or a triggering event may cause a transition, which changes the machine from one state to another. Thus, a particular FSA, or FSM, is defined in terms of a set of states and the triggering conditions that cause transitions. Automata are often classified by the class of formal languages they can recognize: finite, regular, context-free, and context-sensitive languages (see overview in [16]). Further discussion of automata theory is out of the scope of this paper.
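
In the standard notation (a formalization added here for reference), such a machine is a 5-tuple M = (Q, Σ, δ, q0, F), where Q is the finite set of states, Σ the input alphabet, δ: Q × Σ → Q the transition function, q0 ∈ Q the start state, and F ⊆ Q the set of accepting states. The detector designed in Section IV-B below is the instance Q = {S0, S1, S2, S3}, Σ = {0, 1}, q0 = S0, F = {S3}.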
A. FSA and Biological Systems

NAC is concerned with realizing biologically plausible networks; thus, it is important to seek support in biological findings when proposing some kind of computation on NAC. Central pattern generators (CPG) are neural kernels capable of generating patterned rhythms and signals for information processing, but CPGs are also pointed out as responsible for generating patterned movements, such as the gaits for legged locomotion in animals [17]. As pointed out by Ijspeert: "there are several interesting properties that make CPG models useful for the control of locomotion in robots as an alternative to methods based on finite-state machines, sine-generators, prerecorded reference trajectories, ..." [17]. One can design certain types of CPG using FSA, in which each state of the automaton generates an output pattern. This can be useful for gait generation in robots, or for sequential output in serial machines. In summary, pattern generation is common in biological systems and it can be implemented by means of FSA, but this is not the focus of this article. FSAs are also investigated as pattern recognizers in other stereotyped biological behaviors, such as language recognition and language production in humans and animals; for example, birdsong syntax has been studied with FSA as a fundamental component (review in [16]).

B. Implementing FSA

Let us join what was described in the previous sections and design a FSM capable of recognizing a pattern. Let us limit the alphabet of our machine to the characters "0" and "1", and let us arbitrarily choose "100" as the sequence to be detected.

Fig. 2. States Diagram and Equations. State diagram of a FSA capable of detecting "100" as an input pattern. The top equations describe how to trigger new states, while the equations below describe how to dismantle the current state. For example: once in S0, if the input is "1", S0 dismantles itself (¬S0 ← S0·1) and this situation triggers S1 (S1 ← S0·1).

As discussed earlier, assemblies may represent "things"; actually, they stand for the event that triggers them. A FSA created on NAC memorizes states in reverberating loops (BNAs). In this sense, "100" may signify a sequence of events with a semantic meaning: if the animal is hungry ("1"), and no food is visible ("0"), and no food smell is sensed ("0"), then trigger a pattern generator for foraging. This is one meaning, or application, for an automaton like the one described here.

The first step in designing a FSA on NAC is to draft a state diagram, as shown in Fig. 2. In this diagram all the possible states are represented by circles, each labeled with a unique name. In Fig. 2 the state S0 is the initial state, indicated by the init arrow. When the automaton is in S0, if a "0" is applied to the input the FSA has to stay put, but if a "1" is applied the machine must go to state S1: once a "1" comes to the input the first element of the sequence is detected, so the machine goes to S1. Then, once the machine is in S1 and a "0" is applied to the input, a transition forces the machine to state S2; in this case, "10" has been detected. When the FSA is in S2 and another "0" is applied to the input, the machine goes to the final state S3 (the double-circled state). When the FSA reaches state S3 it may stop, or it may continue indefinitely detecting the same sequence in an input stream (a string). This is the case here: once in S3, if a "0" comes from the string the FSA restarts at S0, but if the input is "1" the first character of the sequence "100" has already arrived, and the cycle is repeated from S1.

Starting from S0 or S1 and reaching S3 is the forward flux the machine follows when detecting the "100" sequence. However, all the other possibilities must be anticipated when designing the FSA. Hence, when in S0 and a "0" comes, the machine must remain in S0. When in S1 and a "1" comes, the first character of the sequence has already appeared, so the machine stays in S1. When in S2, "10" has already appeared, but if a "1" comes the sequence is broken ("101"); however, since the first element "1" has already appeared, from S2 the FSA must go to S1 when a "1" comes.

Adaptations were made to the diagram in Fig. 2, which differs a little from Moore and Mealy diagrams (two common methods for designing sequential machines). Note that each transition has a self-dismantling branch. As said, in NAC a state is represented by a bistable loop; thus, each transition must dismantle the state it is leaving, a simple task that can be performed by an interneuron group.

Continuing with the methodology, after defining the state diagram, Boolean equations can be obtained for this specific machine. For instance:

S0 ← S0·0 + S3·0

can be read "S0 is triggered whenever the FSA is in state S0 and a '0' comes, OR whenever the FSA is in S3 and a '0' comes". Some algebra reduces the sentence to:

S0 ← 0·(S0 + S3)

which means: when a "0" comes and the machine is in S0 OR in S3, execute the transition to S0. Equations must also be obtained for the inhibitory transitions. For instance, when leaving S3 the automaton must dismantle the S3 state. Another example: the equation for dismantling S0, in Boolean notation, is:

¬S0 ← S0·1

which can be read as: dismantling S0 (NOT S0) occurs whenever the machine is in S0 and a "1" comes as input.
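
Before mapping these equations onto assemblies, the transition function itself can be checked with an ordinary table-driven automaton, independent of any neural machinery (a sketch; the variable names are ours, not the simulator's):

  % Table-driven check of the "100" detector of Fig. 2.
  % next(s, x+1) = state reached from state s on input bit x.
  S0 = 1; S1 = 2; S2 = 3; S3 = 4;
  next = [ S0 S1;     % from S0: '0' -> S0, '1' -> S1
           S2 S1;     % from S1: '0' -> S2, '1' -> S1
           S3 S1;     % from S2: '0' -> S3 ("100" seen), '1' -> S1
           S0 S1 ];   % from S3: restart on '0', reuse the '1' on '1'
  input = [1 0 0 1 1 0 0 0];   % an arbitrary test string
  s = S0;
  for k = 1:numel(input)
      s = next(s, input(k) + 1);
      if s == S3
          fprintf('"100" detected at position %d\n', k);
      end
  end

Running it on the string 1 0 0 1 1 0 0 0 reports detections at positions 3 and 7, matching the behavior described above: after S3, a "1" restarts the cycle from S1 rather than from S0.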
C. FSA Simulation

The FSA described above was simulated in Matlab; the code is available at [14], a tutorial is available at [15], and the results are shown in Fig. 3. Users can change the input sequence by changing the vector InputString[] in the code. Other parameters can also be changed: for example, coalitions can be simulated as synfire chains (firing synchronously [18]) or as polychronous groups (firing time-locked [19], [20]). In addition, the global perturbation (Θ) can be altered in Teta, and the default propagation delay in tbd, among others. Please refer to the tutorial for further details.

Thirty-nine coalitions were used to create the FSA. Coalition K0 starts the simulator; it triggers both the rhythm bistable loop (K2-K6) and the starting-state loop S0 (K12-K16). Every time K2 fires, the system presents a new character to the FSA. Users can read the input string by looking at the K1 coalition: when K1 is firing the input character is "1", otherwise the input is "0".

Fig. 3. FSA detecting "100". Raster plot showing the simulation of a FSA detecting "100" in an input string. At each K2 occurrence one input character is presented to the FSA. If K1 fires the input is "1", otherwise it is "0". The group of coalitions S3 is the final state, reached when "100" is detected.

In order to isolate the FSA from the influence of the system, some interneuron coalitions were added: K8, K9, and K10. Coalitions K7, K11, K15, K18, K25, and K32 were intentionally left unused to make the raster plot easier to read. Assemblies K17, K24, K31, and K38 are inhibitory interneuron coalitions that dismantle S0, S1, S2 and S3, respectively. Remember that S0, S1, S2 and S3 are BNAs, which need to be inhibited in order to become inactive.

Similar to the clock in digital circuits, the rhythm signal is dictated by K2, K3, K4, K5 and K6 (a BNA), a loop that remains active indefinitely, creating the pattern that governs the timing of this automaton. State S0 is represented by K12, K13, K14, K15, and K16 (a BNA), with K17 dismantling S0. State S1 is represented by the BNA K19, K20, K21, K22, and K23, associated with K24, which is responsible for dismantling S1. State S2 is represented by K26, K27, K28, K29, and K30, while K31 is responsible for dismantling S2. Completing the description, S3 is represented by the BNA K33, K34, K35, K36, and K37, while K38 dismantles the state S3.

D. Results

Now it is possible to read the raster plot in Fig. 3. Note the inputs occurring in synchrony with K2 and how they may cause transitions in the FSA. For instance, S3 (K33-K37) fires only after S2 (K26-K30) is ON (firing) AND K1 is OFF ("0", not firing). This happens only at the moment K2 is firing. It means the final state (S3 firing) has been reached and the sequence "100" was detected. After this, at the next K2 occurrence, the machine goes to S1 (K19-K23 firing) if K1 is "1", otherwise it goes to S0 (K12-K16 firing) if K1 is "0". Randomized noise was introduced (by changing the noisefactor) from 0.05 up to 0.15 pA (neuron type = 1) and the FSA continued executing the function perfectly, which indicates that this machine running in NAC is quite robust.

V. DISCUSSION

In biological terms, the distribution of synaptic weights in NAC seems unrealistic and biased. Actually, it is. In organisms, tissues and organs result from genetically biased protein creation. If complex organs can emerge from genes, why not admit that important species-specific neural structures can also be created by genes? For some structures, synaptic connections are strongly biased in NAC: it is assumed that some topologies have hardwired, biased synaptic strengths and that some assemblies are interconnected by favored synaptic weights. Thus, the machines discussed in this paper are (for now) a priori deterministic. Learning in NAC is understood as neurons changing their correlations with some strongly biased neighboring structure. In other words, NAC machines may have strong predefined networks executing logical functions, bistable loops, and FSAs. Our intuition is that neighboring neural circuitry may become a kind of adaptive finite state automaton as the network goes through experiences. Learning is not shown here; learning on NAC is under development.

VI. CONCLUSIONS

An overview of the main concepts of NAC was presented. It was shown that spiking neural coalitions interact performing the AND, OR, NOT, NOR, and NAND logical functions. In their interactions, assemblies also create reverberating loops capable of memorizing one bit. In order to demonstrate that serial machines can be implemented on NAC, a FSA was designed, and simulations and related discussions were presented. Other FSA pattern recognizers have been simulated using the same methodology and the same code available for download. The findings described above are important because they show a way to implement digital automata on spiking neural networks. FSAs as pattern detectors are studied in several biological systems, for instance in birdsong, in both learning and generating song patterns.
FSAs as pattern generators are also used for gait generation and other actuator systems, and they can be used in robot and agent implementations. The creation of FSA on NAC opens new perspectives for SNN design, as well as new ways of thinking about how brains compute.

REFERENCES

[1] J. Ranhel, "Neural assembly computing," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 6, 2012.
[2] D. O. Hebb, The Organization of Behavior: A Neuropsychological Theory. Mahwah, NJ: Lawrence Erlbaum Associates, Inc., 2002; originally published: New York: Wiley, 1949.
[3] J. Wickens and R. Miller, "A formalisation of the neural assembly concept: 1. Constraints on neural assembly size," Biological Cybernetics, vol. 77, no. 5, 1997.
[4] A. K. Engel, P. Fries, and W. Singer, "Dynamic predictions: oscillations and synchrony in top-down processing," Nature Reviews Neuroscience, vol. 2, no. 10, 2001.
[5] G. Buzsáki, "Neural syntax: Cell assemblies, synapsembles, and readers," Neuron, vol. 68, no. 3, 2010.
[6] G. Buzsáki and A. Draguhn, "Neuronal oscillations in cortical networks," Science, vol. 304, no. 5679, 2004.
[7] V. Lopes-dos-Santos, S. Conde-Ocazionez, M. A. L. Nicolelis, S. T. Ribeiro, and A. B. L. Tort, "Neuronal assembly detection and cell membership specification by principal component analysis," PLoS ONE, vol. 6, no. 6, p. e20996, 2011.
[8] E. M. Izhikevich, "Simple model of spiking neurons," IEEE Transactions on Neural Networks, vol. 14, no. 6, 2003.
[9] E. M. Izhikevich, "Which model to use for cortical spiking neurons?" IEEE Transactions on Neural Networks, vol. 15, no. 5, 2004.
[10] E. Kandel, J. Schwartz, and T. Jessel, Principles of Neural Science, 4th ed. New York, NY: McGraw-Hill Health Professions Division, 2000.
[11] D. Purves, G. J. Augustine, D. Fitzpatrick, W. C. Hall, A.-S. Lamantia, J. O. McNamara, and M. S. Williams, Neuroscience, 3rd ed. Sunderland, MA: Sinauer Associates, Inc., 2004.
[12] E. Izhikevich, J. Gally, and G. Edelman, "Spike-timing dynamics of neuronal groups," Cerebral Cortex, vol. 14, no. 8, 2004.
[13] J. Ranhel, C. Lima, J. Monteiro, J. Kogler Jr., and M. Netto, "Bistable memory and binary counters in spiking neural network," in Proceedings of the 2011 IEEE Symposium on Foundations of Computational Intelligence (FOCI), vol. 1. IEEE Press, 2011.
[14] J. Ranhel, "Matlab file for simulating NAC basics," [Online].
[15] J. Ranhel, "Tutorial for FSA on NAC simulator," [Online].
[16] R. C. Berwick, K. Okanoya, G. J. L. Beckers, and J. J. Bolhuis, "Songs to syntax: the linguistics of birdsong," Trends in Cognitive Sciences, vol. 15, no. 3, 2011.
[17] A. J. Ijspeert, "Central pattern generators for locomotion control in animals and robots: A review," Neural Networks, vol. 21, no. 4, 2008.
[18] M. Abeles, "Synfire chains," Scholarpedia, vol. 4, no. 7, p. 1441, 2009. [Online].
[19] E. Bienenstock, "A model of neocortex," Network: Computation in Neural Systems, vol. 6, no. 1, 1995.
[20] E. M. Izhikevich, "Polychronization: computation with spikes," Neural Computation, vol. 18, no. 2, 2006.


More information

Digital electronics form a class of circuitry where the ability of the electronics to process data is the primary focus.

Digital electronics form a class of circuitry where the ability of the electronics to process data is the primary focus. Chapter 2 Digital Electronics Objectives 1. Understand the operation of basic digital electronic devices. 2. Understand how to describe circuits which can process digital data. 3. Understand how to design

More information

Chapter 3 Digital Logic Structures

Chapter 3 Digital Logic Structures Chapter 3 Digital Logic Structures Original slides from Gregory Byrd, North Carolina State University Modified by C. Wilcox, M. Strout, Y. Malaiya Colorado State University Computing Layers Problems Algorithms

More information

From Sequential Circuits to Real Computers

From Sequential Circuits to Real Computers From Sequential Circuits to Real Computers Lecturer: Guillaume Beslon Original Author: Lionel Morel Computer Science and Information Technologies - INSA Lyon Fall 2018 1 / 39 Introduction I What we have

More information

Neural Modeling and Computational Neuroscience. Claudio Gallicchio

Neural Modeling and Computational Neuroscience. Claudio Gallicchio Neural Modeling and Computational Neuroscience Claudio Gallicchio 1 Neuroscience modeling 2 Introduction to basic aspects of brain computation Introduction to neurophysiology Neural modeling: Elements

More information

Synchrony in Neural Systems: a very brief, biased, basic view

Synchrony in Neural Systems: a very brief, biased, basic view Synchrony in Neural Systems: a very brief, biased, basic view Tim Lewis UC Davis NIMBIOS Workshop on Synchrony April 11, 2011 components of neuronal networks neurons synapses connectivity cell type - intrinsic

More information

Synchronous Sequential Circuit Design. Dr. Ehab A. H. AL-Hialy Page 1

Synchronous Sequential Circuit Design. Dr. Ehab A. H. AL-Hialy Page 1 Synchronous Sequential Circuit Design Dr. Ehab A. H. AL-Hialy Page Motivation Analysis of a few simple circuits Generalizes to Synchronous Sequential Circuits (SSC) Outputs are Function of State (and Inputs)

More information

(Feed-Forward) Neural Networks Dr. Hajira Jabeen, Prof. Jens Lehmann

(Feed-Forward) Neural Networks Dr. Hajira Jabeen, Prof. Jens Lehmann (Feed-Forward) Neural Networks 2016-12-06 Dr. Hajira Jabeen, Prof. Jens Lehmann Outline In the previous lectures we have learned about tensors and factorization methods. RESCAL is a bilinear model for

More information

Sample Test Paper - I

Sample Test Paper - I Scheme G Sample Test Paper - I Course Name : Computer Engineering Group Marks : 25 Hours: 1 Hrs. Q.1) Attempt any THREE: 09 Marks a) Define i) Propagation delay ii) Fan-in iii) Fan-out b) Convert the following:

More information

arxiv: v1 [q-bio.nc] 30 Apr 2012

arxiv: v1 [q-bio.nc] 30 Apr 2012 Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure Xiumin Li 1, 2, and Michael Small 3, 2 1 College of Automation, Chongqing University, Chongqing 444, China 2

More information

Ch 7. Finite State Machines. VII - Finite State Machines Contemporary Logic Design 1

Ch 7. Finite State Machines. VII - Finite State Machines Contemporary Logic Design 1 Ch 7. Finite State Machines VII - Finite State Machines Contemporary Logic esign 1 Finite State Machines Sequential circuits primitive sequential elements combinational logic Models for representing sequential

More information

CSE370: Introduction to Digital Design

CSE370: Introduction to Digital Design CSE370: Introduction to Digital Design Course staff Gaetano Borriello, Brian DeRenzi, Firat Kiyak Course web www.cs.washington.edu/370/ Make sure to subscribe to class mailing list (cse370@cs) Course text

More information

Keywords- Source coding, Huffman encoding, Artificial neural network, Multilayer perceptron, Backpropagation algorithm

Keywords- Source coding, Huffman encoding, Artificial neural network, Multilayer perceptron, Backpropagation algorithm Volume 4, Issue 5, May 2014 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Huffman Encoding

More information

Appendix B. Review of Digital Logic. Baback Izadi Division of Engineering Programs

Appendix B. Review of Digital Logic. Baback Izadi Division of Engineering Programs Appendix B Review of Digital Logic Baback Izadi Division of Engineering Programs bai@engr.newpaltz.edu Elect. & Comp. Eng. 2 DeMorgan Symbols NAND (A.B) = A +B NOR (A+B) = A.B AND A.B = A.B = (A +B ) OR

More information

Lecture 10: Sequential Networks: Timing and Retiming

Lecture 10: Sequential Networks: Timing and Retiming Lecture 10: Sequential Networks: Timing and Retiming CSE 140: Components and Design Techniques for Digital Systems Diba Mirza Dept. of Computer Science and Engineering University of California, San Diego

More information

EECS150 - Digital Design Lecture 23 - FSMs & Counters

EECS150 - Digital Design Lecture 23 - FSMs & Counters EECS150 - Digital Design Lecture 23 - FSMs & Counters April 8, 2010 John Wawrzynek Spring 2010 EECS150 - Lec22-counters Page 1 One-hot encoding of states. One FF per state. State Encoding Why one-hot encoding?

More information

Part I: Definitions and Properties

Part I: Definitions and Properties Turing Machines Part I: Definitions and Properties Finite State Automata Deterministic Automata (DFSA) M = {Q, Σ, δ, q 0, F} -- Σ = Symbols -- Q = States -- q 0 = Initial State -- F = Accepting States

More information

Spiking Neural Networks as Timed Automata

Spiking Neural Networks as Timed Automata Spiking Neural Networks as Timed Automata Giovanni Ciatto 1,2, Elisabetta De Maria 2, and Cinzia Di Giusto 2 1 Università di Bologna, Italy 2 Université Côté d Azur, CNRS, I3S, France Abstract In this

More information

Sequential Logic Circuits

Sequential Logic Circuits Chapter 4 Sequential Logic Circuits 4 1 The defining characteristic of a combinational circuit is that its output depends only on the current inputs applied to the circuit. The output of a sequential circuit,

More information

Spiking Neural P Systems with Anti-Spikes as Transducers

Spiking Neural P Systems with Anti-Spikes as Transducers ROMANIAN JOURNAL OF INFORMATION SCIENCE AND TECHNOLOGY Volume 14, Number 1, 2011, 20 30 Spiking Neural P Systems with Anti-Spikes as Transducers Venkata Padmavati METTA 1, Kamala KRITHIVASAN 2, Deepak

More information

Overview of Chapter 4

Overview of Chapter 4 Overview of hapter 4 Types of Sequential ircuits Storage Elements Latches Flip-Flops Sequential ircuit Analysis State Tables State Diagrams Sequential ircuit Design Specification Assignment of State odes

More information

Patterns, Memory and Periodicity in Two-Neuron Delayed Recurrent Inhibitory Loops

Patterns, Memory and Periodicity in Two-Neuron Delayed Recurrent Inhibitory Loops Math. Model. Nat. Phenom. Vol. 5, No. 2, 2010, pp. 67-99 DOI: 10.1051/mmnp/20105203 Patterns, Memory and Periodicity in Two-Neuron Delayed Recurrent Inhibitory Loops J. Ma 1 and J. Wu 2 1 Department of

More information

CMP 309: Automata Theory, Computability and Formal Languages. Adapted from the work of Andrej Bogdanov

CMP 309: Automata Theory, Computability and Formal Languages. Adapted from the work of Andrej Bogdanov CMP 309: Automata Theory, Computability and Formal Languages Adapted from the work of Andrej Bogdanov Course outline Introduction to Automata Theory Finite Automata Deterministic Finite state automata

More information

Neural Networks. Chapter 18, Section 7. TB Artificial Intelligence. Slides from AIMA 1/ 21

Neural Networks. Chapter 18, Section 7. TB Artificial Intelligence. Slides from AIMA   1/ 21 Neural Networks Chapter 8, Section 7 TB Artificial Intelligence Slides from AIMA http://aima.cs.berkeley.edu / 2 Outline Brains Neural networks Perceptrons Multilayer perceptrons Applications of neural

More information

Finite State Machine. By : Ali Mustafa

Finite State Machine. By : Ali Mustafa Finite State Machine By : Ali Mustafa So Far We have covered the memory elements issue and we are ready to implement the sequential circuits. We need to know how to Deal(analyze) with a sequential circuit?

More information

Synchronous Sequential Circuit Design

Synchronous Sequential Circuit Design Synchronous Sequential Circuit Design 1 Sequential circuit design In sequential circuit design, we turn some description into a working circuit We first make a state table or diagram to express the computation

More information