Addressing Challenges in Neuromorphic Computing with Memristive Synapses


Addressing Challenges in Neuromorphic Computing with Memristive Synapses
Vishal Saxena 1, Xinyu Wu 1 and Maria Mitkova 2
1 Analog Mixed-Signal and Photonic IC (AMPIC) Lab, 2 Nanoionic Materials and Devices Lab
ECE Department, Boise State University, ID. http://lumerink.com
ORNL, June 30, 2016

Agenda
- Introduction
- Synapse Devices and Challenges
- Mixed-Signal Neural Learning Architecture
- Realizing Multibit Synapses
- Conclusion

Reinventing Computing
Von Neumann computing in the nano-CMOS regime offers a massive transistor count, but it suffers from device variability and the dark-silicon problem, and it is not well suited for processing massively parallel data streams and pattern-recognition tasks that come easily to a brain.
US Office of Science and Technology Policy (OSTP) RFI, June 2015: "Create a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain."
Related programs: DOE, BRAIN Initiative, AFOSR, DARPA, SRC.

Brain-Inspired Neuromorphic System
(Figure: overview of a brain-inspired neuromorphic system; our current research focus. Source: DARPA SyNAPSE Program)

Revisiting Biological Inspiration
(Figures: biological neuron with dendrites, soma and axon; synapses from previous neurons, including an inhibitive connection; membrane potential V_mem integrating input spikes V_spk,in toward the threshold V_thr before emitting V_spk,out; synaptic plasticity Δw versus Δt = t_post - t_pre for pre-before-post and post-before-pre spike pairs.)
- Emerging understanding of neural computation from computational neuroscience
- Hierarchical spiking neural networks with localized learning
- Synapses strengthen or weaken according to the relative firing times of the pre- and post-neurons
- Plasticity rule: Spike-Timing Dependent Plasticity (STDP)

Neuromorphic Computing with RRAMs
(Figure: two-layer spiking network with input and output neuron layers, ReRAM synapses between pre- and post-synaptic neurons, forward and backward spikes, and a shared WTA bus.)
- To achieve biology-like synapse density, emerging nanoscale devices are under consideration: Resistive RAM (RRAM) or memristors, including chalcogenide-based Conductive Bridge RAM (CBRAM) devices from the Mitkova group at Boise State
- Three necessary criteria for neuromorphic hardware capable of deep learning: (1) non-volatility of the trained weights, (2) massive synaptic density, and (3) ultra-low-power operation

CBRAM I-V Characteristics
(Figures: measured I-V sweep and the corresponding resistance states.)

Stochastic CBRAM Behavior
(Figures: measured distributions of the SET threshold voltage V_TH_Set and of the ON resistance R_ON.)
- The CBRAM switching behavior is stochastic
- The analog synapse value depends upon the electroforming process; a weak bridge yields more intermediate resistance values
- The analog resistance value relaxes within a few minutes to hours
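To make the stochastic picture concrete, the following is a minimal behavioral sketch rather than a calibrated device model: the SET threshold and ON resistance of each cell are drawn from assumed normal/log-normal distributions, and an unrefreshed analog state relaxes back toward the OFF resistance over an assumed time constant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed distribution parameters (illustrative only, not measured CBRAM data)
VTH_SET_MEAN, VTH_SET_STD = 0.25, 0.05          # volts
RON_MEAN_LOG, RON_STD_LOG = np.log(20e3), 0.4   # ohms, log-normal spread
ROFF = 10e6                                     # high-resistance (OFF) state, ohms

class StochasticCBRAM:
    """Bistable CBRAM cell with a stochastic SET threshold and ON resistance."""

    def __init__(self):
        self.v_th_set = rng.normal(VTH_SET_MEAN, VTH_SET_STD)
        self.r = ROFF                            # start in the OFF state

    def apply_pulse(self, v_pulse):
        """SET the cell if the pulse exceeds this cell's (random) threshold."""
        if v_pulse >= self.v_th_set and self.r == ROFF:
            self.r = rng.lognormal(RON_MEAN_LOG, RON_STD_LOG)
        return self.r

    def relax(self, hours, tau_hours=1.0):
        """Volatile retention: the programmed state drifts back toward ROFF."""
        g_on, g_off = 1.0 / self.r, 1.0 / ROFF
        g = g_off + (g_on - g_off) * np.exp(-hours / tau_hours)
        self.r = 1.0 / g
        return self.r

cells = [StochasticCBRAM() for _ in range(1000)]
set_r = [c.apply_pulse(0.3) for c in cells]
print("fraction switched by a 0.3 V pulse:", np.mean([r < ROFF for r in set_r]))
print("cell 0 resistance after 2 h relaxation: %.2f Mohm" % (cells[0].relax(2) / 1e6))
```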

Spike Timing Dependent Plasticity (STDP)
(Figures: (A) pre- and post-synaptic spike pairs for Δt > 0 and Δt < 0, with the net voltage V_net = V_post - V_pre appearing across the synapse; (B) memristor conductance change ΔG_mr (µS) versus Δt (µs) over roughly ±4 µs.)
Δw or ΔG_mr follows the relative spike timing Δt = t_post - t_pre: the synapse is potentiated when the pre-spike precedes the post-spike (Δt > 0) and depressed when it follows it (Δt < 0).
Source: X. Wu, V. Saxena and K. Zhu, "Homogeneous Spiking Neuromorphic System for Real-World Pattern Recognition," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 5(3), 2015.
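The measured ΔG_mr(Δt) curve is commonly summarized by an exponential STDP window. The sketch below implements that standard pair-based rule; the amplitudes A_PLUS/A_MINUS and time constants are assumed round numbers on the same scale as the figure, not values fitted to the CBRAM data.

```python
import numpy as np

# Assumed STDP window parameters (illustrative, not fitted to the CBRAM measurements)
A_PLUS, A_MINUS = 0.10, 0.12      # peak conductance change, µS
TAU_PLUS, TAU_MINUS = 1.0, 1.0    # decay time constants, µs

def stdp_window(dt_us):
    """Pair-based STDP: dG as a function of dt = t_post - t_pre (µs)."""
    if dt_us > 0:       # pre before post -> potentiation
        return A_PLUS * np.exp(-dt_us / TAU_PLUS)
    elif dt_us < 0:     # post before pre -> depression
        return -A_MINUS * np.exp(dt_us / TAU_MINUS)
    return 0.0

for dt in (-4, -1, -0.2, 0.2, 1, 4):
    print(f"dt = {dt:+.1f} µs -> dG = {stdp_window(dt):+.4f} µS")
```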

CBRAM Synapse Challenges
- Variability: device characteristics such as the switching threshold voltages vary from device to device, and the switching characteristics depend upon the initial forming step and the compliance current
- Resistive load: thousands of devices in parallel present a difficult (low-resistance) load for the neuron circuit to drive (see the parallel-load estimate after this list)
- Resolution: it is challenging to realize long-term persistence of weights at more than 1-bit resolution in CBRAM devices
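To put the resistive-load challenge in numbers, this minimal back-of-the-envelope sketch computes the equivalent resistance of N nominally 1 MΩ synapses in parallel and the static current needed to hold an assumed bias across them. Both the bias and the synapse resistance are round illustrative values, not the chip's operating point; the takeaway is that the equivalent load collapses to a few hundred ohms and the drive current scales linearly with fan-out, which is what the dynamic biasing described later addresses.

```python
R_SYNAPSE = 1e6   # ohms, nominal synapse resistance (illustrative)
V_BIAS = 0.01     # volts, assumed voltage held across the synapses (illustrative)

for n in (100, 1_000, 10_000):
    r_eq = R_SYNAPSE / n            # N equal resistors in parallel
    i_drive = V_BIAS / r_eq         # static current the neuron must source
    print(f"{n:>6} synapses: R_eq = {r_eq:9.1f} ohm, I = {i_drive * 1e6:7.1f} µA")
```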

CMOS Integrate-and-Fire Neuron
(Figure: neuron schematic with membrane capacitor C_mem, leak transistor M_leaky, amplifier A_v, switches SW1-SW3, spike generator, and phase controller / bus interface driving the WTA bus V_wtab.)
- Asynchronous integrate-and-fire operation
- Accommodates RRAM: provides a current-summing node, sustains a fixed voltage for the synapses, and uses dynamic biasing for large current drive
- STDP compatible: an STDP-compatible spike generator enables in-situ STDP learning
- Winner-takes-all (WTA) bus for effective local decision making, with a single contact point to each synapse
Source: X. Wu, V. Saxena and K. Zhu, "Homogeneous Spiking Neuromorphic System for Real-World Pattern Recognition," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 5(3), 2015.
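As a software analogue of the integrate-and-fire behavior described above, the sketch below is a minimal discrete-time leaky integrate-and-fire model with a refractory period. The capacitance, leak, threshold and refractory values are assumed for illustration and do not correspond to the fabricated circuit.

```python
import numpy as np

# Assumed model parameters (illustrative, not the fabricated neuron's values)
DT = 1e-7          # time step, 100 ns
C_MEM = 1e-12      # membrane capacitance, 1 pF
G_LEAK = 1e-8      # leak conductance, 10 nS
V_REST = 0.0       # resting potential, V
V_THR = 0.3        # firing threshold, V
T_REFR = 2e-6      # refractory period, s

def run_if_neuron(i_syn):
    """Integrate synaptic current i_syn (A, one sample per DT); return spike times (s)."""
    v = V_REST
    refr_left = 0.0
    spikes = []
    for k, i_in in enumerate(i_syn):
        if refr_left > 0:                    # hold at rest during the refractory period
            refr_left -= DT
            v = V_REST
            continue
        # dV/dt = (I_in - G_leak * (V - V_rest)) / C_mem
        v += DT * (i_in - G_LEAK * (v - V_REST)) / C_MEM
        if v >= V_THR:                       # threshold crossing -> emit spike, reset
            spikes.append(k * DT)
            v = V_REST
            refr_left = T_REFR
    return spikes

current = np.full(2000, 50e-9)               # constant 50 nA input current
print("spike times (µs):", [round(t * 1e6, 1) for t in run_if_neuron(current)])
```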

Novel Tri-mode Operation
(Figures: the neuron front-end in its three operating modes, (A) leaky integrator, (B) voltage buffer and (C) discharging, selected by the phase signals Фd and Фf through switches SW1-SW3.)
- A first CMOS neuron that is compatible with RRAMs and drives a large load
- Several resistive devices in parallel load the neuron
Source: X. Wu, V. Saxena and K. Zhu, "Homogeneous Spiking Neuromorphic System for Real-World Pattern Recognition," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 5(3), 2015.

Simulation: Output Spikes
(Figure: simulated output spike waveform, with a positive pulse of amplitude α+ and width τ+ followed by a negative tail of amplitude α- with slope s and width τ-.)

Parameter             Symbol  Unit    Min     Max    Step
Positive amplitude    α+      mV      0       360    24
Negative amplitude    α-      mV      0       360    24
Positive pulse width  τ+      ns      48      396    23
Negative pulse width  τ-      ns      282     1125   56
Negative pulse slope  s       mV/ns   0.011   0.52   0.034

Source: X. Wu, V. Saxena and K. Zhu, "A CMOS Spiking Neuron for Dense Memristor-Synapse Connectivity for Brain-Inspired Computing," International Joint Conference on Neural Networks (IJCNN), 2015.
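The sketch below assembles one such programmable spike shape from the table's parameters. The exact analog waveform produced by the chip's spike generator is not reproduced here, so treat this piecewise-linear construction (a flat positive pulse followed by a linearly decaying negative tail) as an assumption for illustration.

```python
import numpy as np

def spike_waveform(alpha_p_mv, tau_p_ns, alpha_n_mv, tau_n_ns, dt_ns=1.0):
    """Piecewise-linear STDP spike: positive pulse followed by a decaying negative tail.

    alpha_p_mv / tau_p_ns : amplitude and width of the positive pulse
    alpha_n_mv / tau_n_ns : amplitude and width of the negative tail
    Returns (time_ns, v_mv) arrays.
    """
    t_pos = np.arange(0, tau_p_ns, dt_ns)
    v_pos = np.full_like(t_pos, alpha_p_mv, dtype=float)   # flat positive pulse
    t_neg = np.arange(0, tau_n_ns, dt_ns)
    v_neg = -alpha_n_mv * (1.0 - t_neg / tau_n_ns)          # ramps back to 0 with slope s
    t = np.concatenate([t_pos, tau_p_ns + t_neg])
    v = np.concatenate([v_pos, v_neg])
    return t, v

# Mid-range settings from the parameter table above
t, v = spike_waveform(alpha_p_mv=180, tau_p_ns=200, alpha_n_mv=180, tau_n_ns=700)
print("peak %+0.0f mV, trough %+0.0f mV, tail slope %.3f mV/ns"
      % (v.max(), v.min(), 180 / 700))
```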

Test Chip Interfaced with CBRAM Devices
(Figures A, B: pre-spike V_pre and post-spike V_post waveforms with positive heads and negative tails; their overlap produces a net potential V_net = V_post - V_pre across the synapse that exceeds the device thresholds +V_t,p / -V_t,m only for an STDP spike pair separated by Δt.)
- Mixed-signal IC in 180-nm CMOS, with neurons connected to CBRAM synapses
- A first RRAM-compatible STDP-learning neuron
- The neuron consumes 9.3 pJ/spike/synapse (assuming 10,000 synapses, each with roughly 1 MΩ resistance)
Source: X. Wu, V. Saxena, K. Zhu and S. Balagopal, "A CMOS Spiking Neuron for Brain-Inspired Neural Networks With Resistive Synapses and In Situ Learning," IEEE TCAS-II, 62(11), pp. 1088-1092.
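As a quick sanity check on that energy figure, the arithmetic below simply scales the reported per-synapse energy to the full fan-out and to a sustained spike rate. The 100 Hz rate is an assumed example, not a measured operating point.

```python
E_PER_SPIKE_PER_SYN = 9.3e-12   # J, reported figure
N_SYNAPSES = 10_000             # fan-out assumed in the report
SPIKE_RATE_HZ = 100             # assumed sustained firing rate (example only)

energy_per_spike = E_PER_SPIKE_PER_SYN * N_SYNAPSES      # J per neuron spike
power = energy_per_spike * SPIKE_RATE_HZ                 # average W per neuron

print(f"energy per neuron spike: {energy_per_spike * 1e9:.1f} nJ")        # 93.0 nJ
print(f"average power at {SPIKE_RATE_HZ} Hz: {power * 1e6:.1f} µW")        # 9.3 µW
```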

Associative Learning Experiment
(Figures: two pre-synaptic neurons driving one post-synaptic neuron through resistive synapses, with the second pre-spike arriving a delay Δt before the post-spike; measured input and output spike trains over time.)
- Pre-Neuron 1 (IFN #1): "Sight of Food"
- Pre-Neuron 2 (IFN #2): "Sound of Bell"
- Post-Neuron (IFN #3): "Salivation"
Source: X. Wu, V. Saxena, K. Zhu and S. Balagopal, "A CMOS Spiking Neuron for Brain-Inspired Neural Networks With Resistive Synapses and In Situ Learning," IEEE TCAS-II, 62(11), pp. 1088-1092.
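The sketch below is a toy software abstraction of the Pavlovian pairing idea: whenever the "bell" pre-spike arrives just before a "food"-driven post-spike, its synapse is potentiated by a fixed STDP-like step, until the bell alone can trigger "salivation". Every number in it (weights, threshold, potentiation step) is an assumed toy value, not data from the chip experiment.

```python
# Toy Pavlovian-conditioning run with an STDP-like potentiation step (all values assumed).
W_FOOD, W_BELL = 1.0, 0.2        # initial synaptic weights
THRESHOLD = 0.8                   # post-neuron fires if the summed input >= threshold
A_PLUS = 0.1                      # potentiation step applied when pre precedes post

def post_fires(inputs, weights):
    return sum(w * x for w, x in zip(weights, inputs)) >= THRESHOLD

weights = [W_FOOD, W_BELL]
for trial in range(10):
    # Conditioning: bell and food presented together; food alone drives the post-spike,
    # and the bell pre-spike (arriving just before it) is potentiated.
    if post_fires([1, 1], weights):
        weights[1] = min(1.0, weights[1] + A_PLUS)

# Test phase: bell alone
print("bell weight after conditioning:", round(weights[1], 2))
print("salivation on bell alone?", post_fires([0, 1], weights))
```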

Dynamic Resistive Load Driving
(Figures: total current (µA) versus number of synapses, comparing the proposed driving scheme against the baseline; drive current (µA) versus time (µs) for 100, 1,000 and 10,000 synapses, with annotated current levels of 56 µA and 13 µA.)
- 97% efficiency when driving 10,000 x 1 MΩ synapses
Source: X. Wu, V. Saxena and K. Zhu, "Homogeneous Spiking Neuromorphic System for Real-World Pattern Recognition," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 5(3), 2015.

Spike-Based Pattern Recognition Circuit
(Figure: an i x j input neuron matrix driven by the image, connected through the synaptic network to n output neurons that share a WTA bus.)
Source: X. Wu, V. Saxena and K. Zhu, "Homogeneous Spiking Neuromorphic System for Real-World Pattern Recognition," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 5(3), 2015.

Asynchronous WTA Bus Interface
(Figures: neuron schematic with its phase controller, tri-state buffer and delay element driving the shared WTA bus V_wtab; network diagrams contrasting explicit one-to-one inhibitory connections with implicit inhibition via the shared WTA bus.)
- Explicit one-to-one inhibitory connections are replaced by implicit inhibition over the shared WTA bus
Source: X. Wu, V. Saxena and K. Zhu, "Homogeneous Spiking Neuromorphic System for Real-World Pattern Recognition," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 5(3), 2015.
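A behavioral sketch of the implicit-inhibition idea, under the assumption that the first neuron to cross threshold asserts the shared bus and every neuron observing an asserted bus resets instead of firing. The event loop below is a software abstraction, not the asynchronous circuit protocol.

```python
# Behavioral winner-take-all over a shared bus (software abstraction; assumed protocol:
# the first neuron to reach threshold asserts the bus, and the assertion resets everyone).
THRESHOLD = 1.0

class BusNeuron:
    def __init__(self, name):
        self.name = name
        self.v = 0.0

    def integrate(self, current, dt):
        self.v += current * dt
        return self.v >= THRESHOLD

def step_network(neurons, currents, dt=0.1):
    bus_asserted_by = None
    for n, i in zip(neurons, currents):
        if n.integrate(i, dt) and bus_asserted_by is None:
            bus_asserted_by = n.name            # winner grabs the shared bus
    if bus_asserted_by is not None:
        for n in neurons:
            n.v = 0.0                           # bus assertion resets every neuron
    return bus_asserted_by

neurons = [BusNeuron(f"N{k}") for k in range(4)]
currents = [0.8, 1.2, 0.5, 1.0]                 # N1 integrates fastest, so it should win
winner = None
while winner is None:
    winner = step_network(neurons, currents)
print("winner:", winner)
```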

Demo: Handwritten Digit Classification
(Figures: input spike trains for the first 20 digits, trained in the original sequence and tested class by class; output spike trains without learning; output spike trains with learning; conductance evolution in the synapses (µS) over roughly 7,660 µs of training.)
Source: X. Wu, V. Saxena and K. Zhu, "Homogeneous Spiking Neuromorphic System for Real-World Pattern Recognition," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 5(3), 2015.

Parallel Multibit Synapses
- Emerging spike-based deep-learning architectures require at least 4 bits of synaptic resolution
- The devices are fundamentally bistable, with nonvolatile states and stochastic switching behavior
- Recent approach: use several of these devices in parallel as a compound synapse; variation in the individual threshold voltages yields multiple effective resistance states
(Figure: input-layer forward spikes integrated as current through a compound multibit synapse, with backward spikes from the output neuron.)
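To illustrate how a bank of bistable devices can behave like a multibit weight, the sketch below sums the conductances of N binary cells whose SET thresholds vary from device to device, so a shared programming pulse switches only a subset of them. The device count, conductances and threshold spread are assumed values; this is a statistical illustration, not the group's device model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed values for illustration only
N_DEVICES = 16                  # bistable cells forming one compound synapse
G_ON, G_OFF = 50e-6, 0.1e-6     # siemens, ON and OFF conductances
VTH_MEAN, VTH_STD = 0.25, 0.05  # volts, device-to-device SET threshold spread

class CompoundSynapse:
    """N bistable devices in parallel acting as one multi-level weight."""

    def __init__(self):
        self.vth = rng.normal(VTH_MEAN, VTH_STD, N_DEVICES)
        self.on = np.zeros(N_DEVICES, dtype=bool)   # all cells start in the OFF state

    def program(self, v_pulse):
        """A shared pulse SETs only the devices whose threshold it exceeds."""
        self.on |= v_pulse >= self.vth
        return self.conductance()

    def conductance(self):
        return float(np.where(self.on, G_ON, G_OFF).sum())

syn = CompoundSynapse()
for v in (0.18, 0.22, 0.26, 0.30, 0.34):
    g = syn.program(v)
    print(f"pulse {v:.2f} V -> {int(syn.on.sum()):2d}/{N_DEVICES} ON, G = {g*1e6:6.1f} µS")
```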

Multibit Synapses: Dendritic Processing
- Dendritic processing adds sensitivity to the time overlap between the pre- and post-synaptic spikes
- Implemented as a shift in time or in pulse voltage of the back-propagating spikes
- Simulations exhibit remarkable STDP learning behavior; experiments with physical devices are underway
Source: X. Wu and V. Saxena, "Realizing Multibit Synapses with Bistable RRAM Devices," unpublished result.

Conclusion
- Emerging nanoscale memory devices present an opportunity for realizing non-von Neumann computing
- Mixed-signal architectures for spiking neural networks can realize a versatile neuromorphic computing motif
- Further work is needed to characterize device capabilities and limitations for large-scale integration
- Algorithm and architecture development should be driven from the ground up by emerging device capabilities

References
1. X. Wu, V. Saxena and K. Zhu, "Homogeneous Spiking Neuromorphic System for Real-World Pattern Recognition," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 5(3), 2015.
2. X. Wu, V. Saxena and K. Zhu, "A CMOS Spiking Neuron for Dense Memristor-Synapse Connectivity for Brain-Inspired Computing," International Joint Conference on Neural Networks (IJCNN), 2015.
3. X. Wu, V. Saxena and K. A. Campbell, "Energy-Efficient STDP-Based Learning Circuits with Memristor Synapses," SPIE Machine Intelligence and Bio-inspired Computation: Theory and Applications VIII, 2014.
4. X. Wu, V. Saxena and K. Zhu, "A CMOS Spiking Neuron for Brain-Inspired Neural Networks with Resistive Synapses and In-Situ Learning," IEEE Transactions on Circuits and Systems II: Express Briefs, 62(11), pp. 1088-1092.
5. X. Wu and V. Saxena, "Associative Learning on CMOS-Memristor Spiking Neuromorphic Chip," IEEE Transactions on Nanotechnology, in 2nd-round review.

Questions?