Design of Basic Logic Gates using CMOS and Artificial Neural Networks (ANN)


AMSE JOURNALS-2016-Series: Advances D; Vol. 21; No. 1; pp. 54-65
Submitted July 2016; Revised Oct. 11, 2016; Accepted Nov. 15, 2016

Design of Basic Logic Gates using CMOS and Artificial Neural Networks (ANN)

R. K. Mandal
Department of Computer Science & Application, University of North Bengal,
Raja Rammohunpur, PO: NB, Distt: Darjeeling, West Bengal 734013, India
(rakesh_it2002@yahoo.com, http://www.nbu.ac.in)

Abstract: This paper presents an approach to simplifying electronic circuits built from Complementary Metal Oxide Semiconductor (CMOS) transistors by developing equivalent Artificial Neural Networks (ANNs). Circuits of this type lead to a simple hardware implementation of ANN models. Here, the three basic gates already implemented in CMOS are designed as simple Self Organizing Map (SOM) ANNs, which are unsupervised.

Key-words: Complementary Metal Oxide Semiconductor (CMOS), Artificial Neural Networks (ANNs), Self Organizing Map (SOM), Gates

I. Introduction

Nowadays computing is converging towards intelligent computing, and intelligence can best be implemented in computers using Artificial Neural Networks (ANNs) [1, 2]. Research is ongoing into models for applications such as medical informatics, handwriting recognition, speech recognition, and other pattern-recognition tasks [3, 4, 5, 6]. Once such models are developed, the major challenge is to implement them in hardware circuits. If ANNs are developed that are equivalent to CMOS circuits, it becomes simple to map the ANNs onto CMOS hardware [7, 8, 9, 10]. Forssell has worked on the hardware implementation of Artificial Neural Networks [11]. Earlier work in this field designed a CMOS circuit that accepts synaptic inputs and generates a pulse-width-modulated output waveform of constant frequency on the basis of the activation level [12]. Logic gates have been implemented as single-layer and two-layer feed-forward neural networks trained with supervised learning [13]. In another approach, an ANN demonstrates how a biological system processes signals in the analog domain, using analog components such as a Gilbert cell multiplier, an adder, and a neuron activation function [14]. Hui et al. applied artificial neural networks to segmented arc heater failure prediction [15].

The remainder of this paper is organized as follows: Section II discusses the implementation of a simple ANN-CMOS NOT gate, Section III that of a simple ANN-CMOS AND gate, and Section IV that of a simple ANN-CMOS OR gate.

II. NOT-Gate using CMOS and Equivalent ANN

In Figure 1, an ANN has been developed which can replace a NOT gate. The ANN is modeled on the CMOS inverter, where x is the input of the ANN and y is the output.
T1 and T2 are the artificial neurons which replace the two transistors of the CMOS NOT gate. x and vdd are the inputs of T1 with weights w1 and w3; x and vss are the inputs of T2 with weights w2 and w4. Another neuron, R, is used as a referee neuron: the outputs p and q of neurons T1 and T2 are its inputs, with weights w5 and w6, and its output y is the output of the NOT gate. Neuron T1 uses the following algorithm to generate its output.

Algorithm 1: Output generation of the upper transistor of the ANN-CMOS NOT gate.
Step 1: The weights w1 and w3 are fixed and assigned the values w1 = -1 and w3 = 1.
Step 2: Assign vdd = -1.
Step 3: Calculate y1 = x*w1 and y2 = vdd*w3.
Step 4: If y1 >= 1 then p = 1, else p = y2.
Step 5: Stop.

Figure 1: ANN NOT-Gate using CMOS (inputs x, vdd, vss; neurons T1, T2 and referee R; weights w1-w6; intermediate values y1-y4, p, q; output y)

Neuron T2 uses the following algorithm to generate its output.

Algorithm 2: Output generation of the lower transistor of the ANN-CMOS NOT gate.
Step 1: The weights w2 and w4 are fixed and assigned the values w2 = 1 and w4 = 1.
Step 2: Assign vss = 1.
Step 3: Calculate y3 = x*w2 and y4 = vss*w4.

Step 4: If y3 < 0 then q = y4, else q = -1.
Step 5: Stop.

The referee neuron is a simple perceptron having two inputs, no bias, and one output, which is the output of the inverter.

Example 1: This example shows the working of the ANN-CMOS inverter. Let the value of x be -1.

w1 = -1 and w3 = 1; vdd = -1
y1 = (-1) x (-1) = 1
y2 = (-1) x 1 = -1
y1 >= 1, therefore p = 1

Now, w2 = 1 and w4 = 1; vss = 1
y3 = (-1) x 1 = -1
y4 = 1 x 1 = 1
y3 < 0, therefore q = y4 = 1

Now p = 1 and q = 1.

Table 1: Truth Table for ANN-CMOS NOT gate

 x  w1  w2  w3  w4  w5  w6  vdd  vss  y1  y2  y3  y4   p   q  y_out   y
-1  -1   1   1   1   1   1   -1    1   1  -1  -1   1   1   1     2    1
 1  -1   1   1   1   1   1   -1    1  -1  -1   1   1  -1  -1    -2   -1

With w5 = 1 and w6 = 1, applying the perceptron rule:
y_out = p*w5 + q*w6 = (1 x 1) + (1 x 1)

y_out = 2, which is > 0, therefore y = 1. [Hence, for x = -1 we get y = 1.]

Let the value of x be 1.

w1 = -1 and w3 = 1; vdd = -1
y1 = 1 x (-1) = -1
y2 = (-1) x 1 = -1
y1 = -1, which is less than 1, therefore p = y2 = -1

Now, w2 = 1 and w4 = 1; vss = 1
y3 = 1 x 1 = 1
y4 = 1 x 1 = 1
y3 >= 0, therefore q = -1

Now p = -1 and q = -1. With w5 = 1 and w6 = 1, applying the perceptron rule:
y_out = p*w5 + q*w6 = (-1 x 1) + (-1 x 1) = -2, which is < 0, therefore y = -1. [Hence, for x = 1 we get y = -1.]

III. AND-Gate using CMOS and Equivalent ANN

In Figure 2, an ANN has been developed which can replace an AND gate. The ANN is modeled on the CMOS AND gate, where x1 and x2 are the inputs of the ANN and y is the output. T1, T2, T3 and T4 are the artificial neurons which replace the four transistors of the CMOS AND gate. x1 and vdd are the inputs of T1 with weights w1 and w2, and the output of the neuron is t1. x2 and vdd are the inputs of T2 with weights w8 and w7, and the output of the neuron is t2. x1 and t1 are the inputs of T3 with weights w3 and w4, and the output of the neuron is t3. x2 and t3 are the inputs of T4 with weights

w5 and w6, and the output of the neuron is t4. Another neuron R is used, called the referee neuron. t1 and t2 are the inputs of R with weights w9 and w10, and the output of the neuron is y, which is presented to an ANN-CMOS NOT gate to generate the output of the ANN-CMOS AND gate. The transistors T1 and T2 are reverse biased, so the weights of the xi inputs, w1 and w8, are set to -1, and the weights of the vdd inputs, w2 and w7, are also set to -1. The transistors T3 and T4 are forward biased, so the weights of the xi inputs, w3 and w5, are set to 1, and the other weights, w4, w6, w9 and w10, are also set to 1.

Figure 2: ANN AND-Gate using CMOS (inputs x1, x2, vdd; neurons T1-T4 and referee R; weights w1-w10; intermediate outputs t1-t4; output y fed to an ANN-CMOS NOT gate)

For the neurons T1, T2, T3, T4 and R, the outputs are calculated with the following equations:

tout1 = x1*w1 + vdd*w2    (equation 1)
t1 = f(tout1)             (equation 2)

where f = { 1 if tout1 >= 0, else -1 }

Similarly,

tout2 = x2*w8 + vdd*w7    (equation 3)
t2 = f(tout2)             (equation 4)
tout3 = x1*w3 + t1*w4     (equation 5)
t3 = f(tout3)             (equation 6)
tout4 = x2*w5 + t3*w6     (equation 7)
t4 = f(tout4)             (equation 8)
yout = t2*w10 + t1*w9     (equation 9)
y = f(yout)               (equation 10)

Table 2 displays the weight matrix of the ANN-CMOS AND gate for the different possible inputs, and Table 3 displays its truth table (the last column is the output of the final ANN-CMOS NOT stage).

Table 2: Weight Matrix for ANN-CMOS AND gate

x1  x2  w1  w2  w3  w4  w5  w6  w7  w8  w9  w10
-1  -1  -1  -1   1   1   1   1  -1  -1   1    1
-1   1  -1  -1   1   1   1   1  -1  -1   1    1
 1  -1  -1  -1   1   1   1   1  -1  -1   1    1
 1   1  -1  -1   1   1   1   1  -1  -1   1    1

Table 3: Truth Table for ANN-CMOS AND gate

x1  x2  t1  t2  t3  t4   y  y (after NOT)
-1  -1   1   1   1   1   1  -1
-1   1   1  -1   1   1   1  -1
 1  -1  -1   1   1   1   1  -1
 1   1  -1  -1   1   1  -1   1

IV. OR-Gate using CMOS and Equivalent ANN

In Figure 3, an ANN has been developed which can replace an OR gate. The ANN is modeled on the CMOS OR gate, where x1 and x2 are the inputs of the ANN and y is the output. T1, T2, T3 and T4 are the artificial neurons which replace the four transistors of the CMOS OR gate. x1 and vdd are the inputs of T1 with weights w1 and w2, and the output of the neuron is t1. x2 and vdd are the inputs of T2 with weights w3 and w4, and the output of the neuron is t2. x1 and vdd are the inputs of T3 with weights w5 and w6, and the output of the neuron is t3. x2 and vdd are the inputs of T4 with

weights w7 and w8, and the output of the neuron is t4. Another neuron R is used, called the referee neuron. t3 and t4 are the inputs of R with weights w9 and w10, and the output of the neuron is y, the output of the ANN-CMOS OR gate. The transistors T1 and T2 are reverse biased, so the weights of the xi inputs, w1 and w3, are set to -1, and the weights of the vdd inputs, w2 and w4, are also set to -1. The transistors T3 and T4 are forward biased, so the weights of the xi inputs, w5 and w7, are set to 1, and the other weights, w6, w8, w9 and w10, are also set to 1.

Figure 3: ANN OR-Gate using CMOS (inputs x1, x2, vdd; neurons T1-T4 and referee R; weights w1-w10; intermediate outputs t1-t4; output y)

For the neurons T1, T2, T3, T4 and R, the outputs are calculated with the following equations:

tout1 = x1*w1 + vdd*w2    (equation 1)
t1 = f(tout1)             (equation 2)

where f = { 1 if tout1 > 0, else -1 }

Similarly,

tout2 = x2*w3 + vdd*w4    (equation 3)
t2 = f(tout2)             (equation 4)
tout3 = x1*w5 + vdd*w6    (equation 5)

t3 = f(tout3)             (equation 6)
tout4 = x2*w7 + vdd*w8    (equation 7)
t4 = f(tout4)             (equation 8)
yout = t4*w10 + t3*w9     (equation 9)
y = f(yout)               (equation 10)

where, for the output neuron, f = { 1 if yout >= 0, else -1 }

Table 4 displays the weight matrix of the ANN-CMOS OR gate for the different possible inputs, and Table 5 displays its truth table.

Table 4: Weight Matrix for ANN-CMOS OR gate

x1  x2  w1  w2  w3  w4  w5  w6  w7  w8  w9  w10
-1  -1  -1  -1  -1  -1   1   1   1   1   1    1
-1   1  -1  -1  -1  -1   1   1   1   1   1    1
 1  -1  -1  -1  -1  -1   1   1   1   1   1    1
 1   1  -1  -1  -1  -1   1   1   1   1   1    1

Table 5: Truth Table for ANN-CMOS OR gate

x1  x2  t1  t2  t3  t4   y
-1  -1  -1  -1  -1  -1  -1
-1   1  -1  -1  -1   1   1
 1  -1  -1  -1   1  -1   1
 1   1  -1  -1   1   1   1

V. Discussion

This paper is an approach to designing ANN models for the basic logic gates, based on their CMOS circuits. The ANNs are simple Self Organizing Maps (SOMs) that use unsupervised learning, so the synaptic weights are fixed. Some work has already been done in this field, as discussed in the introduction, but the circuits developed in this paper are very simple, and keeping a circuit simple makes it easy to implement. This approach can be used to simplify complex circuits in the future. It deviates slightly from the overall functioning and design of CMOS gates; a similar approach that follows the CMOS gates more closely has been taken in another paper.
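As a concrete cross-check of the fixed weights and activations above, the three gate networks can be sketched in software. This is a minimal illustrative sketch, not part of the original design; the supply values are assumptions read off the tables (vdd = -1 and vss = 1 for the NOT network, per Algorithms 1 and 2, and vdd = 1 for the AND and OR networks, which reproduces Tables 3 and 5).

```python
# Minimal software sketch of the three fixed-weight gate networks.
# Bipolar encoding throughout: -1 stands for logic 0 and +1 for logic 1.

def ann_not(x, vdd=-1, vss=1):
    """ANN-CMOS NOT gate of Section II (Algorithms 1 and 2 plus referee R)."""
    w1, w3 = -1, 1                      # upper neuron T1 (Algorithm 1)
    y1, y2 = x * w1, vdd * w3
    p = 1 if y1 >= 1 else y2
    w2, w4 = 1, 1                       # lower neuron T2 (Algorithm 2)
    y3, y4 = x * w2, vss * w4
    q = y4 if y3 < 0 else -1
    w5, w6 = 1, 1                       # referee neuron R: bias-free perceptron
    return 1 if p * w5 + q * w6 > 0 else -1

def f(t):
    """Activation of Section III: 1 if t >= 0, else -1."""
    return 1 if t >= 0 else -1

def ann_and(x1, x2, vdd=1):
    """ANN-CMOS AND gate of Section III (equations 1-10); vdd = 1 is assumed."""
    w1 = w2 = w7 = w8 = -1              # reverse-biased T1 and T2
    w3 = w4 = w5 = w6 = w9 = w10 = 1    # forward-biased T3, T4 and referee R
    t1 = f(x1 * w1 + vdd * w2)          # equations 1-2
    t2 = f(x2 * w8 + vdd * w7)          # equations 3-4
    t3 = f(x1 * w3 + t1 * w4)           # equations 5-6
    t4 = f(x2 * w5 + t3 * w6)           # equations 7-8
    y = f(t2 * w10 + t1 * w9)           # equations 9-10 (referee output)
    return ann_not(y)                   # final ANN-CMOS NOT stage

def ann_or(x1, x2, vdd=1):
    """ANN-CMOS OR gate of Section IV (equations 1-10); vdd = 1 is assumed."""
    w1 = w2 = w3 = w4 = -1              # reverse-biased T1 and T2
    w5 = w6 = w7 = w8 = w9 = w10 = 1    # forward-biased T3, T4 and referee R
    g = lambda t: 1 if t > 0 else -1    # hidden activation uses a strict inequality
    t1 = g(x1 * w1 + vdd * w2)          # equations 1-2 (not used by referee R)
    t2 = g(x2 * w3 + vdd * w4)          # equations 3-4 (not used by referee R)
    t3 = g(x1 * w5 + vdd * w6)          # equations 5-6
    t4 = g(x2 * w7 + vdd * w8)          # equations 7-8
    return f(t4 * w10 + t3 * w9)        # equations 9-10 (output uses >= 0)
```

Evaluating the four input combinations through ann_and and ann_or reproduces the truth tables of Tables 3 and 5, and ann_not reproduces the inverter of Example 1.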

Nowadays complex electronic circuits are simplified using VLSI design, which can be done with CMOS circuits. A lot of work has already been done on ANNs, which try to model the human brain using artificial neurons that take binary inputs and produce binary outputs. If CMOS circuits can be designed using ANNs, the circuits can be further simplified through software implementation. Complex circuits are expensive; simpler, less expensive circuits can be designed instead. Complex functions can be carried out by ANN models, and these models can be mapped onto simple circuits.

VI. Conclusion

The approach in this paper is to develop simple circuits that realize the basic logic gates using CMOS. CMOS circuits are used in many devices nowadays, and ANNs are also becoming popular. This approach will lead to the development of simple circuits, and the hardware implementation of ANN models will become simple as the technology advances. The aim here is to initiate the work of mapping simple CMOS hardware circuits onto ANN models that can solve complex problems, starting by modeling the logic gates with ANNs. This paper is an approach to designing the basic gates applying this logic.

References

[1.] L. Fausett, Fundamentals of Neural Networks: Architectures, Algorithms and Applications, Pearson Education, India, 2009.
[2.] G. N. Swamy, G. Vijay Kumar, Neural Networks, Scitech, India, 2007.
[3.] Ashutosh Aggarwal, Rajneesh Rani, Renu Dhir, Handwritten Devanagari Character Recognition using Gradient Features, International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 2, Issue 5, May 2012, pp. 85-90.
[4.] Sandeep Saha, Nabarag Paul, Sayam Kumar Das, Sandip Kundu, Optical Character Recognition using 40-point Feature Extraction and Artificial Neural Network, International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 3, Issue 4, April 2013, ISSN 2277-128X, pp. 495-502.
[5.]
Ali Borji, Mandana Hamidi, Fariborz Mahmoudi, Robust Handwritten Character Recognition with Features Inspired by Visual Ventral Stream, Springer Science+Business Media, LLC, 2008, published online 31 August 2008, pp. 97-111.

[6.] Y. Perwej and A. Chaturvedi, Neural Networks for Handwritten English Alphabet Recognition, International Journal of Computer Applications, Vol. 20, No. 7, pp. 1-5, 2011.
[7.] Frye R. C., Rietman E. A., and Wong C. C., Back-propagation learning and non-idealities in analog neural network hardware, IEEE Transactions on Neural Networks, Vol. 2, No. 1, pp. 110-117, 1991.
[8.] Jung S. and Kim S. S., Hardware implementation of a real-time neural network controller with a DSP and an FPGA for nonlinear systems, IEEE Transactions on Industrial Electronics, Vol. 54, No. 1, pp. 265-271, 2007.
[9.] Hikawa H., FPGA implementation of self organizing map with digital phase locked loops, Neural Networks, Vol. 18, No. 5-6, pp. 514-522, 2005 (IJCNN 2005). Available online: http://www.sciencedirect.com/science/article/pii/s0893608005001103
[10.] Merolla P. A., Arthur J. V., Alvarez-Icaza R., Cassidy A. S., Sawada J., Akopyan F., Jackson B. L., Imam N., Guo C., Nakamura Y., Brezzo B., Vo I., Esser S. K., Appuswamy R., Taba B., Amir A., Flickner M. D., Risk W. P., Manohar R., and Modha D. S., A million spiking-neuron integrated circuit with a scalable communication network and interface, Science, Vol. 345, No. 6197, pp. 668-673, 2014. Available online: http://www.sciencemag.org/content/345/6197/668.abstract
[11.] Forssell M., Hardware Implementation of Artificial Neural Networks, 18-859E Information Flow in Networks, pp. 1-4. Available: http://users.ece.cmu.edu/~pggrover/teaching/files/NeuromorphicComputing.pdf (Accessed: 2016)
[12.] Yellamraju S., Kumari Swati, Girolkar S., Chourasia S. and Tete A. D., Design of Various Logic Gates in Neural Networks, Annual IEEE India Conference (INDICON), 2013, Mumbai, India.
[13.] Hawas N. M., Rekaby B. K. A., ANN Based On Learning Rule of Neuron Activation Function Using Electronic Devices, International Journal of Advanced Computer Technology (IJACT), Vol. 4, No. 3, pp. 19-22, 2015.
[14.]
Kale N. B., Padole V. B., Compression and Decompression of Signal Using CMOS Technology: A Review, International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 4, Issue 3, pp. 53-55, 2014.

[15.] Hui W., Dejang C., Wei Z., Ping Z., Yongsheng L., Application of artificial neural networks to segmented arc heater failure prediction, AMSE Journals, Advances in Modeling, Series B, Vol. 54-1, Issue 1, 2011, pp. 17-29.