Thermal noise driven computing

Published: Applied Physics Letters 89, 144104 (2006), October 2; arxiv.org/abs/physics/0607007

Thermal noise driven computing

Laszlo B. Kish a)
Texas A&M University, Department of Electrical and Computer Engineering, College Station, TX 77843-3128, USA; email: Laszlo.Kish@ece.tamu.edu (July 2006)

Abstract. A new type of computing, in which thermal noise is both the information carrier and the clock of the computer, is proposed. The energy requirement/dissipation is studied in a simple digital system with zero threshold voltage, where the thermal noise is equal to or greater than the digital signal. In a simple realization of a thermal noise driven gate, the lower limit of the energy needed to generate the digital signal is $\approx 1.1\,kT$/bit. The arrangement has potentially improved energy efficiency and is free of leakage-current, crosstalk and ground-plane electromagnetic interference problems. A disadvantage is the larger number of required elements.

Keywords: Johnson noise; energy dissipation; suprathreshold stochastic resonator; leakage current.

a) Until 1999: L.B. Kiss

Recently, it has been shown [1-4] that modulated thermal noise (Johnson noise) can serve as a special type of information carrier: it can be used for stealth communication [1], totally secure non-quantum communication [2,3], and the realization of totally secure classical networks [4]. These results inspire a further question: if Johnson noise is such a peculiar information carrier, can it perhaps be applied to data processing and computing, too? Though we do not know the full answer to this question, in this Letter we show a potential application of Johnson noise to reduce the problems of energy dissipation, leakage current, crosstalk and ground-plane electromagnetic interference (EMI) in microprocessors. This idea may at first look strange, because the ultimate lower limits of energy dissipation in computers are dictated by Johnson noise, see [5-8] and references therein. Here, however, we attempt to put thermal noise to work for us by letting it drive the logic devices. We shall analyze the information channel capacity of a particular realization of the thermal noise driven digital information channel and the lower limit of the energy required to transfer information through such a channel.

Our other inspiration is the fact that neural signals are stochastic, which indicates that the brain uses noise as an information carrier [9], while at the same time the brain is an extremely energy-efficient signal processor [10]. It is tempting to assume that the great energy efficiency of the brain is somehow related to the stochastic nature of neural signals [10]. Note that John von Neumann [11], Forshaw and coworkers [12,13], and Palem and coworkers [14,15] have proposed efficient ways of working with probabilistic switches, which are noisy digital logic units with relatively high error probability.

Palem and coworkers have pointed out that this may be a way to reduce power dissipation [14,15]. However, though these approaches can be relevant to future developments of the ideas outlined in the present paper, they are very different from our present approach. The system we propose works in the regime of huge error probability, approaching 0.5, with zero logic threshold voltage and in the sub-noise signal-amplitude limit, outside the range used by others. In the thermal noise driven computer, the voltage in the channel is essentially a noise, and the statistical properties of this noise carry the information. Moreover, we base our study on Shannon's channel coding theorem, because we believe this theory is the proper tool to characterize the ultimate information channel capacity and the energy efficiency of the relevant digital channels.

The thermal noise driven computer is therefore a computer in which (most of) the logic gates are driven by a very small DC digital signal voltage $U_s$, which is zero in the "low" logic state, $U_{sl} = 0$, and equal to or less than the effective Johnson noise voltage $\sigma$ in the "high" logic state, $U_{sh} \leq \sigma$. This situation produces an extremely high error rate, close to the limit of 0.5 error probability in the digital channel, which is the case of zero information content. Thus the information channel has very low information content; however, as we shall show, the energy requirement of creating the digital signal is also very low. Concerning the energy efficiency of such a computer, the important question is the energy requirement of handling a single bit (Joule/bit). We shall calculate the lower limit of this energy requirement.

The highest possible information rate in a digital channel is given by Shannon's channel coding theorem:

$$C_{dig} = f_c \left[ 1 + p \log_2 p + (1-p) \log_2 (1-p) \right], \qquad (1)$$

where $C_{dig}$ is the information channel capacity, $f_c$ is the clock frequency, $p$ is the probability of a correct bit, and $1-p$ is the error probability; see Figure 1. Though Eq. (1) provides the exact value of the information channel capacity, it does not tell us what kind of encoding is needed to approach this limit. The lowest (second-order) term of the Taylor expansion of Eq. (1) around the value $p = 0.5$ yields

$$C_{dig}\big|_{p \approx 0.5} = \frac{2 (\Delta p)^2}{\ln 2}\, f_c, \qquad (2)$$

where $\Delta p = p - 0.5$. The parabolic approximation given by Eq. (2) is very accurate in the interesting range ($p \approx 0.5$). Even at $p = 1$, which is far outside our range and where the inaccuracy of the approximation is maximal, the relative inaccuracy is less than 30%. We will see that not only $C_{dig}$ but also the electrical power required to generate the signal scales with $(\Delta p)^2$ in the interesting range. Therefore, when the error rate approaches 0.5 and the information content goes to zero, the energy requirement of generating a bit converges to a fixed value, which may depend on the particular realization of the computer. A quick numerical check of this approximation is sketched below.
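As a sanity check of Eqs. (1) and (2), the short Python sketch below compares the exact channel capacity with the parabolic approximation. It is only an illustration; the probability values chosen are arbitrary examples, not values from the Letter.

```python
import math

def capacity_exact(p, f_c=1.0):
    """Exact per-channel capacity of Eq. (1), in bits per second (f_c in Hz)."""
    if p in (0.0, 1.0):
        return f_c  # the binary entropy term vanishes at p = 0 or p = 1
    return f_c * (1.0 + p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def capacity_parabolic(p, f_c=1.0):
    """Second-order Taylor approximation of Eq. (2) around p = 0.5."""
    dp = p - 0.5
    return f_c * 2.0 * dp**2 / math.log(2.0)

# Near p = 0.5 the two expressions agree closely ...
for p in (0.51, 0.55, 0.60):
    print(p, capacity_exact(p), capacity_parabolic(p))

# ... while at p = 1 the relative error of the approximation stays below ~30%
err = 1.0 - capacity_parabolic(1.0) / capacity_exact(1.0)
print(f"relative inaccuracy at p = 1: {err:.2%}")   # about 28%
```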

For a possible realization of thermal noise driven computing, let us suppose that a weak digital signal $U_s(t)$, whose "low" and "high" levels satisfy $U_{sl} = 0$ and $0 < U_{sh} \leq \sigma$, drives a simple comparator [16] with zero reference voltage, $U_{ref} = 0$; see Fig. 2. The internal resistance of the output of the driving logic gate is represented by the resistance R, and the total driven capacitance (the sum of the comparator's input capacitance and the parasitic capacitances) is represented by the capacitor C. The comparator is a logic representation of the input stage of the subsequent gate; we do not deal with the rest of the logic operations performed by that gate. The output of the comparator provides a digital voltage via the signum operation [16,17]: if the input voltage is greater than zero, the output is "high", and if the input voltage is less than zero, the output is "low". When the computer is running, in the case of $U_s(t) = 0$ ("low" input) the comparator output shows $p = 0.5$, because the Johnson noise has zero mean and a symmetric (Gaussian) amplitude density function around zero. However, for $U_s(t) = U_{sh}$ ("high" input), the input Johnson noise is superimposed on the nonzero DC signal $U_{sh}$, thus $p > 0.5$ due to the small asymmetry of the resultant amplitude density function around zero (a minimal Monte Carlo sketch of this behavior is given below). Note that comparator units driven by analog signals and additive noise have been used by Stocks [16,17] to propose and demonstrate the so-called suprathreshold stochastic resonance, where the noise acts as an information carrier of the analog signal; there the noise was band-limited Gaussian white noise.

In this Letter, we avoid any question concerning the practical realization of this computer, including the problem of the comparator. We are allowed to do that because we are looking for the lower limit of the energy requirement of processing a single bit. Because the digital signal must be generated in the information channel to run the computer, the energy requirement of generating the digital signal under idealistic conditions is an absolute lower limit of the energy dissipation.
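As an illustration of the comparator behavior described above, the following Python sketch estimates the probability $p$ of a correct "high" decision by Monte Carlo when Gaussian Johnson noise is added to the DC level, and compares it with the small-asymmetry estimate $g(0)\,U_{sh}$ used in Eq. (4) below. The noise rms, the ratio $U_{sh}/\sigma = 0.2$, and the sample size are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_correct_high(u_sh, sigma, n=1_000_000):
    """Monte Carlo estimate of p = Prob(noise + U_sh > 0) for a zero-threshold comparator."""
    noise = rng.normal(0.0, sigma, n)   # Gaussian Johnson noise with rms value sigma
    return np.mean(noise + u_sh > 0.0)

sigma = 1.0            # noise rms (arbitrary units)
u_sh = 0.2 * sigma     # sub-noise "high" level, U_sh <= sigma

p = p_correct_high(u_sh, sigma)
dp_estimate = u_sh / (np.sqrt(2.0 * np.pi) * sigma)   # g(0)*U_sh, cf. Eq. (4)

print(f"simulated p       = {p:.4f}")               # slightly above 0.5
print(f"0.5 + g(0)*U_sh   = {0.5 + dp_estimate:.4f}")
# For U_s = 0 ("low" input) the same comparator gives p = 0.5, i.e. zero information.
```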

If we alternate the channel signal between the "low" and "high" levels at the clock frequency (see Fig. 2), then in each clock period we dissipate the charging energy of the capacitor C twice, first when we charge the capacitor and again when we discharge it [7]; thus the power dissipation needed to generate the digital signal is

$$P_s = \frac{1}{2}\, f_c\, C\, U_{sh}^2 . \qquad (3)$$

By approximating the Gaussian amplitude distribution $g(u)$ of the thermal noise by its peak value $g(0)$ (in our regime, where $p \approx 0.5$) and using the Johnson relation $\sigma = \sqrt{kT/C}$ [7], we get

$$\Delta p \approx g(0)\, U_s = \frac{U_s}{\sqrt{2\pi}\,\sigma} = \frac{U_s}{\sqrt{2\pi kT/C}} . \qquad (4)$$

Supposing a symmetric time distribution of "low" and "high" bits, the effective $\Delta p$ is the average of the two cases, and from Eqs. (2) and (4) we obtain

$$C_{dig}\big|_{p \approx 0.5} = \frac{U_{sh}^2}{\pi \ln 2\; (kT/C)}\, f_c . \qquad (5)$$

Thus, from Eqs. (3) and (5) we find that the mean energy cost per bit operation is constant:

$$\frac{P_s}{C_{dig}\big|_{p \approx 0.5}} = \frac{\pi \ln 2}{2}\; kT/\mathrm{bit} \approx 1.1\; kT/\mathrm{bit} . \qquad (6)$$

Conversely, the energy efficiency $\eta$ of the data processing is impressive:

$$\eta = \frac{C_{dig}\big|_{p \approx 0.5}}{P_s} = \frac{2}{\pi \ln 2}\; \mathrm{bit}/kT \approx 0.9\; \mathrm{bit}/kT \approx 2.3 \times 10^{20}\; \mathrm{bit/Joule} . \qquad (7)$$
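To put numbers to Eqs. (3)-(7), the sketch below evaluates them for an assumed parameter set (C = 1 fF, T = 300 K, f_c = 1 GHz, U_sh = sigma); these values are illustrative choices, not values from the Letter. As Eqs. (6) and (7) predict, the kT-normalized energy cost and the bit/Joule efficiency do not depend on the particular choices of C, f_c and U_sh, only on the temperature.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

# Illustrative (assumed) parameters
T   = 300.0          # temperature, K
C   = 1e-15          # total driven capacitance, F (1 fF)
f_c = 1e9            # clock frequency, Hz

sigma = math.sqrt(k_B * T / C)   # Johnson (kT/C) noise rms, sigma = sqrt(kT/C)
U_sh  = sigma                    # "high" level at the sub-noise limit U_sh <= sigma

P_s   = 0.5 * f_c * C * U_sh**2                                   # Eq. (3), W
C_dig = U_sh**2 / (math.pi * math.log(2) * k_B * T / C) * f_c     # Eq. (5), bit/s

energy_per_bit = P_s / C_dig                                      # Eq. (6), J/bit
print(f"noise rms sigma     = {sigma*1e3:.2f} mV")
print(f"energy cost per bit = {energy_per_bit / (k_B*T):.2f} kT/bit")   # ~1.09
print(f"efficiency          = {C_dig / P_s:.2e} bit/J")                 # ~2.2e20, Eq. (7)
```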

It is interesting to note that the above results contradict the general opinion that no digital signal can be used if the energy difference between logic levels is less than $kT \ln 2$, or that otherwise a potential barrier of similar height must separate the two states. Such a statement is valid at most for certain types of digital memories. The existence of Shannon's Eq. (1) and the results above indicate that digital signal channels without information storage elements can process information at an arbitrary, nonzero energy difference between the logic states.

Today's microprocessors dissipate >> 25,000 kT/bit [8] (though for low-error operation about 70 kT/bit would be enough [6-8]), so the present 1.1 kT/bit value may look promising. However, we should keep in mind that today's Turing-type general-purpose computers need error-free operation, which would require error-correcting units (redundancy) and/or error-correcting algorithms. The need for error correction could increase the energy dissipation in the thermal noise driven computer potentially by orders of magnitude, and though the information channel capacity would also increase, the resulting $P_s / C_{dig}$ is a non-trivial problem. These results strongly depend on the method of error correction [11-13] and the computer architecture.

In conclusion, the main advantage of such a hypothetical thermal noise driven computer would be a potentially improved energy efficiency and an obvious lack of leakage-current, crosstalk and ground EMI problems due to the very low DC voltages. An apparent disadvantage is the large number of extra (redundancy) elements required for error reduction [11-13].

Finally, we list some of the most important open questions we have not been able to address here:

1. Do we need error correction at all (except for input/output operations) when we want to simulate the way the brain works?
2. Is there any way to realize a non-Turing machine with stochastic elements without excessive hardware/software-based redundancy?
3. How should redundancy and error correction be used efficiently to run the thermal noise driven computer as a Turing machine?
4. How much energy would such error correction cost?
5. How much energy is needed to feed the comparator?
6. What is the impact of the noise of the comparator, and how can it be reduced?

Though all these questions are relevant for the ultimate energy dissipation of thermal noise driven computers, the lower limit given in this Letter remains valid, because it is the energy needed to generate the digital signal.

References

1. L.B. Kish, Appl. Phys. Lett. 87, 234109 (2005).
2. L.B. Kish, Phys. Lett. A 352, 178-182 (2006); also at http://arxiv.org/physics/0509136.
3. A. Cho, Science 309, 2148 (2005).
4. L.B. Kish and R. Mingesz, Fluct. Noise Lett. 6, C9-C21 (2006).
5. W. Porod, Appl. Phys. Lett. 52, 2191 (1988), and references therein; W. Porod, R.O. Grondin, D.K. Ferry, Phys. Rev. Lett. 52, 232-235 (1984); W. Porod, R.O. Grondin, D.K. Ferry, G. Porod, Phys. Rev. Lett. 52, 1206 (1984), and references therein.
6. R.K. Cavin, V.V. Zhirnov, J.A. Hutchby, G.I. Bourianoff, Fluct. Noise Lett. 5, C29-C38 (2005).
7. L.B. Kish, Phys. Lett. A 305, 144-149 (2002).
8. L.B. Kish, "On the fundamental limits of digital computing", Emerging Research Device Architectures Workshop (Semiconductor Research Corporation, International Technology Roadmap for Semiconductors, and National Science Foundation), San Francisco, 9 July 2006 (relevant paper under preparation).

9. G. Balazsi, L.B. Kish, Phys. Lett. A 265, 304-316 (2000).
10. S.M. Bezrukov, L.B. Kish, Smart Mater. Struct. 11, 800-803 (2002).
11. J. von Neumann, in Automata Studies, eds. C.E. Shannon, J. McCarthy (Princeton Univ. Press, 1956), pp. 329-378.
12. K. Nikolic, A. Sadek, M. Forshaw, Nanotechnology 13, 357-362 (2002).
13. A.S. Sadek, K. Nikolic, M. Forshaw, Nanotechnology 15, 192-21 (2004), and references therein.
14. K.V. Palem, IEEE Trans. Comput. 54, 1123 (2005), and references therein.
15. P. Korkmaz, B.E.S. Akgul, K.V. Palem, L.N. Chakrapani, Jpn. J. Appl. Phys. 45, 3307-3316 (2006).
16. N.G. Stocks, Phys. Lett. A 279, 308-312 (2001).
17. N.G. Stocks, Phys. Rev. E 63, 041114 (2001).

Figure captions

Figure 1. Shannon's channel capacity of digital channels and the working regimes of the thermal noise driven and classical computers, respectively.

Figure 2. Model circuitry of the information channel of a possible realization of the thermal noise driven computer. $U_{th}(t)$ is the inherent Johnson noise voltage of the resistor R.

[Figure 2 schematic labels: noise source $U_{th}(t)$, resistance R, switch positions 1 and 2, signal level $U_{sh}$, capacitance C, comparator reference $U_{ref} = 0$.]