From Physics to Logic


This course aims to introduce you to the layers of abstraction of modern computer systems. We won't spend much time below the level of bits, bytes, words, and functional units, but you should at least be aware of the richness that exists below this level and of what it implies for the higher layers.

Simulating Classical Physics with Quantum Physics

Modern computer systems are ultimately based on quantum physics. However, they use the quantum effects of materials essentially to simulate classical physics, and most mathematical theories of computation also assume classical physics. How we build computers and how we think about computers have matched for over half a century: to a reasonable approximation, all current computers can be thought of as Universal Turing Machines (UTMs), we can think of UTMs in terms of a Newtonian game of billiards, and we can implement that game by exploiting the quantum physics of certain materials.

Starting in the early 1980s, models of computation based directly on quantum physics have been developed. These models may be more powerful than UTMs in a theoretical sense. In the mid-1990s, Peter Shor published an algorithm that factors large numbers into primes very quickly on a computational model known as a Quantum Turing Machine. The believed difficulty of this task on classical computers undergirds much of modern cryptography, and thus Shor's result generated great interest. At this point, however, we don't know whether it is feasible to build a large-scale quantum computer.

Semiconductors

The materials from which processors, memory, and other devices are constructed are known as semiconductors. While semiconductors are created from very mundane materials (silicon is the main ingredient of sand!), sophisticated chemistry is used to grow giant, nearly perfect silicon crystals and to subtly dope them with impurities in order to carefully control their properties.

In addition to semiconductors, ultra-pure conductors (often copper) and insulators (typically silicon dioxide; sand rust!) are the core materials of computers.
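An aside on why Shor's result mattered: the best-known classical methods for factoring scale very badly with the size of the number, and even naive trial division makes that asymmetry easy to feel. A minimal Python sketch, purely illustrative (real factoring attacks, and real cryptosystems, use far more sophisticated machinery):

```python
def trial_division(n):
    """Factor n by trying every divisor up to sqrt(n).

    The outer loop runs on the order of sqrt(n) times, so the cost grows
    exponentially in the number of digits of n -- which is why factoring
    a several-hundred-digit number this way is hopeless.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever remains is itself prime
        factors.append(n)
    return factors
```

Small inputs factor instantly (trial_division(91) gives [7, 13]), but each extra digit in n multiplies the work by roughly three.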

Transistors

Semiconductors, insulators, and conductors are used to construct passive electronic components such as wires, resistors, capacitors, and (rarely) inductors. However, the most important component built from semiconductors is an active one: the transistor. Transistors are three-terminal devices, and there are many kinds. In the MOSFET, the typical kind of transistor in modern computer chips, the voltage on one terminal (the gate) controls how difficult it is to send electrical current between the two other terminals (the source and the drain). In the nonlinear circuitry of computer chips (as opposed to the linear circuitry of, say, an audio amplifier), we usually use the gate to turn the flow of current on and off, like a simple valve. Below, you can see the symbol for a MOSFET and a cross-section of its typical construction. On a current processor chip, a MOSFET has a size measured in tens of nanometers, about 1,000 to 10,000 times smaller than the width of a human hair.

Logic and Memory

Using transistors and capacitors, we can create the combinational logic and memory that are the basis of computers. For example, the following shows the symbol for a NAND ("Not And") gate at the left and its implementation using four MOSFETs at right. The solid circles (often drawn as open circles as well) represent inversion. Basically, a circle at the gate of a MOSFET indicates that as the gate voltage decreases, it gets easier to send current from source to drain; without a circle, decreasing the voltage makes it harder.

The two inputs to the NAND represent truth values (true or false). The output of the NAND is true unless both inputs are true, in which case the output is false. While this might seem like a strange logical operation, NAND is powerful because it is a universal logical operation: any other combinational logic operation (AND, OR, NOT, XOR, etc.), or any logic circuit built out of them, can be implemented by wiring NANDs together.

We can also use transistors (and other components) to build memory cells. Below, at left we see a DRAM cell (1 bit of memory), which is what main memory on most computers consists of. At right, we see an SRAM cell (again, 1 bit of memory), which is the typical cell in cache memories, processor register files, and other components. There is a third kind of memory cell, the latch, which is widely used in implementing processor pipelines.

An important thing to notice is that an SRAM cell uses no capacitors, but it uses six transistors instead of just one. SRAM cells are much faster than DRAM cells, but also much more expensive; that's why they tend to be used in caches. An SRAM cell will also hold its bit as long as there is power. In a DRAM cell, because the bit is held in a capacitor and capacitors always leak charge, the bit must be frequently refreshed by external circuitry. In a typical computer, all of main memory is read and rewritten 10-20 times per second to keep its contents from being lost.

Photolithography and Chips

Transistors were first developed in the late 1940s. By the late 1950s, engineers had learned how to mass-produce individual transistors inexpensively. Around this time, Jack Kilby, an engineer at Texas Instruments, had a vision of building multiple transistors and passive devices on a single semiconductor substrate.
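Before moving on, the universality claim about NAND can be checked mechanically. A small Python sketch, with 0 and 1 standing in for the two voltage levels, builds NOT, AND, OR, and XOR out of nothing but a two-input NAND:

```python
# Every gate below is built from a single primitive: two-input NAND.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):        # NOT x  ==  x NAND x
    return nand(a, a)

def and_(a, b):     # AND    ==  NOT of NAND
    return not_(nand(a, b))

def or_(a, b):      # OR     ==  (NOT a) NAND (NOT b), by De Morgan's law
    return nand(not_(a), not_(b))

def xor(a, b):      # XOR from four NANDs, a classic construction
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))
```

De Morgan's law is what makes the OR construction work: a OR b equals NOT((NOT a) AND (NOT b)), and NAND supplies both the NOT and the AND.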
He developed the first monolithic integrated circuit (IC, or "chip"), inventing a technology that is still going strong today.

A chip is produced using a process called photolithography, which you can think of as a strange kind of photographic printing. If you've ever done black-and-white printing in a darkroom, you understand the gist of it already. Essentially, a negative called a mask is produced that is the inverse of the layer of material to be put on the semiconductor substrate. The substrate is coated with the material that will be deposited and then covered in a material called photoresist. The mask is then placed over the photoresist and a bright light (ultraviolet light these days) is shone on it. Next, the photoresist is developed, and then the unexposed photoresist is etched away (like fixing a photographic print). Then the next layer can be created over the current one. Chips are built up out of multiple layers like this.

Below you'll see a bird's-eye view of the Intel Core i7-3930K processor, which costs about $500. The whole chip has an area of only 430 square millimeters (about double the size of the nail on your big toe), yet it contains about 2.2 billion transistors.
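Those figures imply a striking density, and they are easy to sanity-check. A quick back-of-envelope sketch in Python (the 100-micrometer hair width and 30-nanometer feature size are assumed round figures, not from the chip's datasheet):

```python
# Back-of-envelope check of the chip figures quoted above.
transistors = 2.2e9              # transistor count from the text
die_area_mm2 = 430               # die area in mm^2 from the text

density = transistors / die_area_mm2   # transistors per square millimeter
# density is roughly 5 million transistors in every square millimeter.

# Feature size versus a human hair (assumed ~100 micrometers wide):
feature_nm = 30                  # "tens of nanometers"
hair_nm = 100_000                # 100 micrometers in nanometers
ratio = hair_nm / feature_nm
# ratio is about 3,300, squarely inside the 1,000-10,000x claim.
```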

Moore's Law, Its Limits, and So Many Transistors

In the late 1960s, Gordon Moore made a plot of the number of transistors on a chip as a function of the year the chip came out and found that the growth was exponential: roughly every 18 months, the number of transistors would double, and chips would become much faster because the transistors were smaller. He predicted that this trend would continue, and he was right. Moore's Law still holds true, though the doubling period is now closer to two years. Experts expect it to continue for at least another ten years. The current ways in which we build processors cannot make use of so many transistors, and thus there is much ongoing research on how to actually exploit having billions of transistors on a chip.

Multicore Processors, GPUs, and Parallelism

An important recent shift in processors has been the advent of the multicore era. In a multicore processor, the chip contains multiple copies of the processor, called cores. Even commonplace desktop microprocessors today have four cores (four copies of the processor). Our class's server machine has 40 cores, while a $10,000 server can easily have 64 cores today; the Intel Xeon Phi within the class server supports over 240 hardware threads. Since the number of transistors per chip is expected to keep scaling exponentially for some time, the number of cores per chip will also grow exponentially for some time, probably doubling every year or two.

In addition to such completely general-purpose processor cores, manufacturers of graphics hardware have been making their hardware more general purpose, creating GPUs (Graphics Processing Units) that have a much larger number of cores, each of which is much smaller than a general-purpose core. For example, the $3,000 NVIDIA Tesla K20, which fits into a typical desktop or server computer, sports almost 2,500 cores. Our class's server machine also has one of these.
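Repeated doubling compounds faster than intuition suggests. A tiny arithmetic sketch (illustrative only, not a prediction about any actual product):

```python
# Exponential growth from repeated doubling.
def after_doublings(start, n):
    """Value of a quantity that starts at `start` and doubles n times."""
    return start * 2 ** n

# A 2.2-billion-transistor chip, doubling every two years, passes
# 35 billion transistors after four doublings (eight years):
transistor_projection = after_doublings(2.2e9, 4)   # 3.52e10

# The same arithmetic applies to core counts: 4 cores doubling every
# two years reaches 64 cores after four doublings:
core_projection = after_doublings(4, 4)             # 64
```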
Having k cores, no matter what kind, does not mean that existing programs will run k times faster. A multicore chip is a parallel computer, meaning that it can do several things simultaneously at the hardware level. Parallel computers have been around for over 30 years, but the focus of research in parallel systems has so far largely been on supporting scientific computations. With parallel computers on everyone's desktop and in everyone's cell phone, existing and new ideas from parallel systems will need to be employed much more broadly. The challenge is how to feed the parallelism: currently it is the programmer who needs to think about the k things the chip should be doing simultaneously, and this is unlikely to change anytime soon.
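The gap between k cores and k-fold speedup can be quantified. Amdahl's law (not named in the notes above, but the standard tool for this) says that if only a fraction p of a program's work can be parallelized, the serial remainder caps the overall speedup. A short Python sketch:

```python
# Amdahl's law: with a fraction p of the work parallelized perfectly
# across k cores, the overall speedup is 1 / ((1 - p) + p / k).
def speedup(p, k):
    return 1.0 / ((1.0 - p) + p / k)

# Even with 95% of the work parallel, 40 cores give nowhere near 40x:
s_40 = speedup(0.95, 40)     # ~13.6x

# And no number of cores can beat 1 / (1 - p), here 20x:
s_big = speedup(0.95, 10_000)
```

The takeaway matches the text: the hard part is not adding cores but finding enough parallel work to feed them.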