Quasi Analog Formal Neuron and Its Learning Algorithm Hardware


Karen Nazaryan

Division of Microelectronics and Biomedical Devices, State Engineering University of Armenia, 375009, Terian Str. 105, Yerevan, Armenia
nakar@freenet.am

Abstract. A hardware implementation of the learning algorithm for a new neuron model, the quasi analog formal neuron (QAFN), is considered in this paper. Owing to presynaptic interaction of the AND type, a wide functional class (including all Boolean functions) is provided for the QAFN by only one neuron. There are two main approaches to the hardware implementation of neurons, neural networks (NNs) and their learning algorithms: analog and digital. The QAFN and its learning algorithm hardware are based on both approaches simultaneously. Weight reprogrammability is realized using an EEPROM technique that is compatible with CMOS technology. The QAFN and its learning algorithm hardware are suitable for implementation in VLSI technology.

1 Introduction

Interest in artificial neural networks and neurons has grown considerably, especially in recent years. Research shows that computing machines based on NNs can easily solve difficult problems that standard algorithmic approaches alone cannot. These neural networks and neurons are based on models of biological neurons [1, 2]; such a model is presented in fig.1 (below, the word "neuron" denotes the artificial model of a biological neuron). A neuron has one or more inputs x_1, x_2, x_3, ..., x_n and one output y. The inputs are weighted, i.e. input x_i is multiplied by weight w_i. The weighted inputs s_1, s_2, s_3, ..., s_n are summed, yielding the algebraic sum S. In general, a neuron also has a threshold value, which is likewise summed algebraically with S, and the result of this whole sum is the argument of the neuron's activation function f. One of the most attractive features of NNs is their capability to learn and adapt to various problems. That adaptability is provided by learning algorithms, whose aim is to find the optimum weight sets for the neurons. There are two main approaches to performing a learning algorithm: hardware and software.

V.N. Alexandrov et al. (Eds.): ICCS 2001, LNCS 2074, pp. 356-365, 2001. © Springer-Verlag Berlin Heidelberg 2001

In general, neural models differ in their output activation functions and in the values their inputs and weights accept. For instance, the activation function of the classical formal neuron (FN) is a Boolean one: its inputs accept binary values and its weights integer values [1, 3]. In the analog neuron model, the output function, inputs and weights accept analog values, etc. [4, 5, 6]. The neuron of fig.1 thus computes

    y = f(S) = f( Σ_{i=1}^{n} w_i x_i − Θ )    (1)

where Θ is the threshold value.

Fig. 1. Mathematical model of a biological neuron: inputs x_1 ... x_n, weights w_1 ... w_n, weighted inputs s_1 ... s_n, threshold Θ, activation function f, output y
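As a concrete illustration of equation (1), the following minimal sketch (hypothetical code, not part of the original design) evaluates a formal neuron in software; with a Boolean step activation it behaves as the classical FN.

    def formal_neuron(x, w, theta, f):
        """Evaluate y = f(sum_i w_i * x_i - theta), i.e. equation (1)."""
        S = sum(wi * xi for wi, xi in zip(w, x)) - theta
        return f(S)

    # Classical FN: binary inputs, integer weights, Boolean step activation.
    step = lambda s: 1 if s >= 0 else 0
    print(formal_neuron([1, 0, 1], [2, -1, 3], 4, step))  # 2 + 0 + 3 - 4 = 1 -> 1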

2 The QAFN Model

Currently, there exist many neural models and variants of their hardware implementation. This work considers learning algorithm hardware for the new neuron model named QAFN (fig.2) [7, 8], which is based on the classical FN and extends the FN's functional possibilities.

Fig. 2. Quasi analog formal neuron: presynaptic interaction (PI) unit, summing parts I and II with weights w' and w'', subtractor III with coefficients a_1 and a_2, output f_QAFN

On the other hand, research shows [8, 9, 10, 11] that the proposed QAFN model simplifies hardware implementation while simultaneously improving several technical parameters and features: operating speed, accuracy, area consumed on the semiconductor crystal, etc. The QAFN realizes the following mathematical equation:

    f_QAFN = a_1 Σ_{i=0}^{d} c_i w'_i − a_2 Σ_{i=0}^{d} c_i w''_i

where the activation function f_QAFN is a quasi analog one, the synaptic inputs c_i accept logic 1 and 0 values, the weights w'_i and w''_i accept nonnegative integer values (0, 1, 2, 3, ...), a_1 and a_2 are constants, and d is the number of synaptic inputs (fig.3). The separation of summing parts I and II makes it easy to form positive and negative weights, since those parts have identical hardware solutions [9, 10, 11]. The input interaction, or presynaptic interaction (PI), unit realizes a logical interaction between the values of the input vector X = (x_1, x_2, ..., x_n), forming the logical values of the vector C = (c_1, c_2, ..., c_d). The values c_i comprise all possible variants of logical interaction between the input values, including all inputs without interaction; consequently, d = 2^n − 1 (where n is the number of information inputs x). For instance, in the n = 3 case (d = 7), the values of the c_i are the following:

    Without interaction:    AND-type interaction:     OR-type interaction:
    c_1 = x_1               c_4 = x_1 x_2             c_4 = x_1 ∨ x_2
    c_2 = x_2               c_5 = x_1 x_3             c_5 = x_1 ∨ x_3
    c_3 = x_3               c_6 = x_2 x_3             c_6 = x_2 ∨ x_3
                            c_7 = x_1 x_2 x_3         c_7 = x_1 ∨ x_2 ∨ x_3

Due to the PI unit, a wide operating functional class of the QAFN model is provided for both Boolean and quasi analog functions by only one neuron (without constructing a network) [1, 9, 12, 13]. This is one of the most important advantages of the QAFN model. The type of input interaction (AND or OR) is mathematically insignificant for QAFN functioning; it matters only from the hardware implementation point of view.
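To make the PI unit's enumeration concrete, here is a minimal software sketch (a hypothetical illustration, not the paper's logic circuit) that generates all d = 2^n − 1 AND-type interaction terms for a binary input vector, in the same order as the n = 3 example above.

    from itertools import combinations

    def presynaptic_interactions(x):
        """Return all d = 2**n - 1 AND-type terms c_1..c_d for binary inputs x."""
        n = len(x)
        c = []
        for size in range(1, n + 1):              # subsets of 1, 2, ..., n inputs
            for idx in combinations(range(n), size):
                term = 1
                for i in idx:                     # AND (product) of the chosen inputs
                    term &= x[i]
                c.append(term)
        return c

    # n = 3 yields (x1, x2, x3, x1x2, x1x3, x2x3, x1x2x3):
    print(presynaptic_interactions([1, 0, 1]))    # [1, 0, 1, 0, 1, 0, 0]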

In this model, the weight values are defined by a binary weight word; consequently, weight storage has a simple realization using the EEPROM technique. From the learning point of view, neural networks are usually easier to train if the weights of the neurons are allowed to assume both positive and negative values [5]. A developed version of the QAFN [9] that provides this opportunity for all weights is illustrated in fig.3. In this case a_1 = a_2 = 1 and, for each weight, only one of the summing parts (I) or (II) is used; the positive or negative weighted influence is defined by switching the weighted inputs s_1, s_2, s_3, ..., s_d either to the positive or to the negative input of the subtractor:

    s_i = c_i w_i = c_i (w'_i − w''_i) = {  c_i w'_i,   if w''_i = 0;
                                           −c_i w''_i,  if w'_i  = 0.     (2)

The most significant bit (MSB) of the weight binary word defines the sign of the given weight, as is done in conventional computing systems. The MSB controls switch (IV), steering its output to the corresponding positive or negative input of the subtractor (III), depending on the weight sign. For example, if the number of bits in the weight binary word is eight, the values of w_i are the following:

    w^(7)  w^(6)  w^(5)  w^(4)  w^(3)  w^(2)  w^(1)  w^(0)     w
      0      1      1      1      1      1      1      1      +127
      0      1      1      1      1      1      1      0      +126
     ...
      0      0      0      0      0      0      1      0        +2
      0      0      0      0      0      0      0      1        +1
      0      0      0      0      0      0      0      0         0
      1      1      1      1      1      1      1      1        −1
      1      1      1      1      1      1      1      0        −2
     ...
      1      0      0      0      0      0      0      1      −127
      1      0      0      0      0      0      0      0      −128
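The table is simply the two's-complement encoding. The following sketch (hypothetical, software-only) decodes a weight word and mimics switch (IV) by routing the weighted input to the positive or negative side of the subtractor according to the MSB.

    def decode_weight(word, bits=8):
        """Two's-complement value of a weight word; the MSB is the sign bit."""
        word &= (1 << bits) - 1
        return word - (1 << bits) if word >> (bits - 1) else word

    def subtractor_inputs(c, word):
        """Route c * |w| to the (positive, negative) subtractor input, like switch IV."""
        w = decode_weight(word)
        s = c * abs(w)
        return (s, 0) if w >= 0 else (0, s)

    print(decode_weight(0b01111111))            # +127
    print(decode_weight(0b10000000))            # -128
    print(subtractor_inputs(1, 0b11111110))     # (0, 2): weight -2 is subtracted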

Thus, the QAFN overcomes functional drawbacks of classical neurons, increases functional possibilities and adaptability, and improves technical parameters (speed, consumed area, etc.).

3 The QAFN Learning

3.1 The Learning Algorithm

A brief description of a learning algorithm for the QAFN is considered here. The output function f(k) for the k-th combination of the inputs x, generating the synaptic inputs C(k) = (c_1(k), c_2(k), ..., c_d(k)), is given by

    f(k) = S(k) = Σ_{i=0}^{d} w_i(k) c_i(k)    (3)

If f(k) is equal to the desired value for the k-th combination of input variables, we say that C(k) belongs to the w_1 functional class; otherwise C(k) belongs to the w_2 functional class [12, 14]. Due to the presynaptic interaction unit of the QAFN, the w_1 and w_2 functional classes may overlap [9].

Fig. 3. QAFN block scheme with weight switches: each weighted input s_i is steered by switch IV to the positive or negative input of subtractor III according to the sign of w_i

Let W(k) be an arbitrarily selected initial weight vector, and let two learning multitudes be given that represent the w_1 and w_2 functional classes (they may be linearly unseparable), respectively. The k-th step of the neuron learning is then the following [12, 14]:

1. If X(k) ∈ w_1 and W(k)·C(k) < 0, then the weight vector W(k) is replaced by W(k+1) = W(k) + αC(k);
2. If X(k) ∈ w_2 and W(k)·C(k) ≥ 0, then the weight vector W(k) is replaced by W(k+1) = W(k) − αC(k);
3. Otherwise, the weight vector W(k) is not changed, i.e. W(k+1) = W(k).

Here α is the correcting factor, a real positive constant that influences the learning rate; it is advisable to choose α depending on the given problem. The learning process is accomplished at the k_r-th step, that is,

    W(k_r) = W(k_r + 1) = W(k_r + 2) = ... = W(k_r + g),    (4)

where g is the number of training multitudes [14]. The neuron learning algorithm described above converges after a finite number of iterations, as is proved in [14].
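One step of this class-based rule can be sketched in a few lines (hypothetical code; names follow the text above):

    import numpy as np

    def learning_step(W, C, cls, alpha):
        """One step of items 1-3: cls = 1 if X(k) is in w1, cls = 2 if in w2."""
        a = np.dot(W, C)
        if cls == 1 and a < 0:        # item 1: w1 sample scored negative
            return W + alpha * C
        if cls == 2 and a >= 0:       # item 2: w2 sample scored nonnegative
            return W - alpha * C
        return W                      # item 3: weights unchanged

    W = np.zeros(7)                   # d = 7 synaptic inputs for n = 3
    C = np.array([1, 0, 1, 0, 1, 0, 0])
    print(learning_step(W, C, cls=2, alpha=1.0))   # item 2: [-1. 0. -1. 0. -1. 0. 0.]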

A brief description of the learning algorithm for the QAFN is considered below. Assign the output error

    E = T − A,    (5)

where T and A are the target (desired) and actual outputs, respectively. Considering that T and A accept quasi analog values while the x, and consequently the c, are logic 0 or 1, the learning algorithm for the QAFN, based on the algorithms described in [1, 12, 13, 14], can be written in the following way:

    W(k+1) = W(k) + D(k) C(k),    (6)

where

    D(k) = { −α,  if T(k) < A(k), i.e. E < 0 (2nd item);
              0,  if T(k) = A(k), i.e. E = 0 (3rd item);    (7)
             +α,  if T(k) > A(k), i.e. E > 0 (1st item).

Here α corresponds to the unit weight value; it is obvious that E/α is always equal to an integer value. Schematically, the learning can be presented as shown in fig.4: the output error generator (OEG) generates D(k) by comparing the target (T) and actual (A) values (see equation (7)).
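Equations (6)-(7) in runnable form (a hypothetical sketch; the hardware computes this with comparators and counters, as section 3.2 describes):

    def qafn_update(W, C, T, A, alpha):
        """W(k+1) = W(k) + D(k)C(k), with D(k) from the sign of E = T - A."""
        E = T - A
        D = alpha if E > 0 else (-alpha if E < 0 else 0)   # eq. (7)
        return [w + D * c for w, c in zip(W, C)]           # eq. (6)

    print(qafn_update([2, 0, 1], [1, 0, 1], T=5.0, A=3.0, alpha=1))  # [3, 0, 2]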

3.2 The Learning Hardware

Computation of the synaptic weight values is carried out within the chip by digital and analog circuit blocks, and the synaptic weight values are stored in digital memory. Before illustrating the details of the hardware implementation, a technical particularity should be taken into consideration: from the engineering point of view, it is very difficult to produce an analog value exactly equal to the target one, because of the technological mismatch of the semiconductor components. To overcome that problem, the following is recommended:

    U_target = (U⁺ + U⁻)/2    (8)

    ΔU = U⁺ − U⁻    (9)

where U⁺ and U⁻ are the upper and lower voltage bounds between which the actual output value of the QAFN is assumed equal to the target one. Of course, ΔU should be as small as possible, within the limits of the hardware implementation, in order to approach the ideal case; obviously, ΔU and the unit weight should be of the same order. Thus, the learning target value is given by the analog values U⁺ and U⁻, the difference ΔU being the same for all desired output values and input combinations. A voltage level-shifting circuit block (controlled by U_target only) that generates the U⁺ and U⁻ values solves that problem.

The hardware of the QAFN learning algorithm is illustrated in fig.5. A simple OEG is designed to generate the up/down signals: a pair of voltage comparators produces a digital code by comparing the target and actual analog values, and that code is used in up/down signal generation while the clock signal is present. The corresponding digital weight (the one for which c_i = 1) is increased or decreased by its up/down counter accordingly. Up/down signal generation is shown in Table 1. Each bit of the weight word is stored in a memory cell and can be renewed thanks to the EEPROM (electrically erasable programmable read-only memory) technique [15, 16]. To refresh the weights, access (EN, enable) to the triggers of the corresponding counter is required. The operational modes of the neuron are presented in Table 2.

Fig. 4. Schematic representation of QAFN learning: the PI unit and weighted sum form the output f_QAFN = S, while the output error generator compares the actual output A with the target T and drives the learning algorithm block with D

Table 1. Up/Down signal generation

    Clk   Condition                 c_i   up   down   Learning algorithm
     0    --                        --    0     0     item 3
     1    U_actual < U⁻             1     1     0     item 1
     1    U_actual > U⁺             1     0     1     item 2
     1    U⁻ < U_actual < U⁺        1     0     0     item 3
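The clocked behaviour of Table 1 can be modelled in a few lines (a hypothetical software analogue of the counter loop; actual_output stands in for the analog QAFN output):

    def learning_cycle(weights, C, actual_output, u_minus, u_plus, max_clocks=1000):
        """Model of Table 1: on each clock, weights with c_i = 1 count up or
        down by one unit until the output lands inside [U-, U+]."""
        for _ in range(max_clocks):
            u = actual_output(weights)
            if u < u_minus:
                step = +1                 # up = 1: item 1
            elif u > u_plus:
                step = -1                 # down = 1: item 2
            else:
                break                     # inside the window: item 3, done
            for i, c in enumerate(C):
                if c:                     # only weights with c_i = 1 are counted
                    weights[i] += step
        return weights

    # Toy run: the output is the weighted sum itself; window [3.5, 4.5] around T = 4.
    C = [1, 0, 1]
    out = lambda W: sum(w * c for w, c in zip(W, C))
    print(learning_cycle([0, 0, 0], C, out, 3.5, 4.5))   # -> [2, 0, 2]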

Table 2. The neuron operational modes

    Mode (comments)                         CS   OE   U_PR         Weight storage   Trigger inputs   Trigger outputs
    Working mode                            1    1    5 V          read             off              R
    Training mode:
      previous weight setting in counter    1    1    5 V          read             on               R
      learning algorithm realization        0    0    5 V          --               off              cnt.
      previous weight erasing               0    1    18 V pulse   --               off              cnt.
      new weight storing                    0    0    18 V pulse   write            off              cnt.

4 Conclusions

The neuron suggested in this paper could be used in complex digital-analog computational systems: digital filters, data analyzers, controllers, complex Boolean and quasi analog function generation, digital-analog signal processing systems, etc., wherever high speed and adaptability are crucial. Weight discretization is the most reliable way of eliminating the leakage problems associated with, for instance, capacitive analog storage. Using static digital storage is a convenient solution; it is, however, quite area-consuming. Full Boolean functionality and a wide functional class of quasi analog functions are provided by only one QAFN, without network construction, which is one of the most important advantages of this neuron model. Convenient hardware has been designed for performing the neuron learning algorithm. The fabrication mismatches of the semiconductor components are taken into account in the design of the output error generator, based on engineering approaches. Since the weight-correcting value is fixed compatibly with the QAFN hardware and the learning algorithm is performed for only one neuron, the learning rate is relatively high; it depends only on the clock signal frequency and the limitations of the hardware implementation. The hardware approach to implementing the QAFN learning algorithm thus provides substantially high speed for the learning process. Reprogrammability of the weights is realized with the EEPROM technique. The CMOS QAFN and its learning algorithm are suitable for implementation in VLSI technology.

Fig. 5. Schematic of QAFN learning algorithm hardware with analog/digital circuit blocks and digital weight storage: per-weight up/down counters with input/output control, floating-gate (EEPROM) weight storage cells, and the OEG driven by U⁺ and U⁻

References

1. Mkrtichyan, S.: Computer Logical Devices Design on the Neural Elements. Energia, Moscow (1977) (in Russian)
2. Mkrtichyan, S.: Neurons and Neural Nets. Energia, Moscow (1971) (in Russian)
3. Mkrtichyan, S., Mkrtichyan, A.S., Lazaryan, A.F., Nazaryan, K.M., et al.: Binary Neurotriggers for Digital Neurocomputers. In: Proc. of NEUREL '97, Belgrade (1997) 34-36
4. Mead, C.: Analogue VLSI Implementation of Neural Systems. Kluwer, Boston (1989)
5. Haykin, S.: Neural Networks: A Comprehensive Foundation. Macmillan, New York (1994)
6. Chua, L., Yang, L.: Cellular Neural Networks: Theory. IEEE Trans. on Circuits and Systems 35 (1988) 1257-1290
7. Mkrtichyan, S., Nazaryan, K.: Synthesis of Quasi Analog Formal Neuron. Report of Science Academy of Armenia, No. 3, Yerevan (1997) 262-266 (in Armenian)
8. Nazaryan, K.: Research and Design of Formal Neurons. MSc thesis, State Engineering University of Armenia, Yerevan (1997) (in Armenian)
9. Nazaryan, K.: Circuitry Realizations of Neural Networks. PhD thesis, State Engineering University of Armenia, Yerevan (1999) (in Armenian)
10. Nazaryan, K.: Circuit Realization of a Formal Neuron: Quasi Analog Formal Neuron. In: Proc. of NEUREL '97, Belgrade (1997) 94-98

11. Nazaryan, K.: The Quasi Analog Formal Neuron Implementation Using a Lateral Weight Transistor. In: Proc. of NEURAP '98, Marseilles (1998) 205-209
12. Mkrtichyan, S., Lazaryan, A.: Learning Algorithm of Neuron and Its Hardware Implementation. In: Proc. of NEUREL '97, Belgrade (1997) 37-42
13. Mkrtichyan, S., Navasardyan, A., Lazaryan, A., Nazaryan, K., et al.: Adaptive Threshold Element. Patent AR #472, Yerevan (1997) (in Armenian)
14. Tou, J., Gonzalez, R.: Pattern Recognition Principles. Mir, Moscow (1978) (in Russian)
15. Holler, M., Tam, S., Castro, H., Benson, R.: An Electrically Trainable Artificial Neural Network (ETANN) with 10240 "Floating Gate" Synapses. In: Proc. of IEEE/INNS Int. Joint Conf. on Neural Networks, vol. 2 (1989) 191-196
16. Lee, B., Yang, H., Sheu, B.: Analog Floating-Gate Synapses for General-Purpose VLSI Neural Computation. IEEE Trans. on Circuits and Systems 38 (1991) 654-658