History & Binary Representation


C. R. da Cunha (creq@if.ufrgs.br)
Instituto de Física, Universidade Federal do Rio Grande do Sul, RS 91501-970, Brazil.

August 30, 2017

Abstract: In this lesson we review the history of electronics and the development of the first microprocessors.

Keywords: Microcontrollers; Microprocessors; Electronics; Digital.

1 History

Let us begin with the history of semiconductors and contemporary electronics. Although the field started in Germany, it flourished at Bell Labs in the United States.

1874: Diode effect discovered by Ferdinand Braun at the University of Berlin.
1906: Diode patented.
1925: Bell Labs is founded.
1925: MOS transistor is patented by Julius Lilienfeld.
1929: Walter Brattain joins Bell Labs.
1934: Another MOS patent, by Oskar Heil.
1936: Mervin Kelly becomes director of Bell Labs.
1936: William Shockley joins Bell Labs.
1945: John Bardeen joins Bell Labs.
1947: Bardeen and Brattain conceive the point-contact transistor.
1948: Shockley invents the bipolar junction transistor.
1953: First transistor computer built at the University of Manchester by Dick Grimsdale.
1954: Transistors are fabricated in Si by Morris Tanenbaum at Bell Labs.
1956: Nobel Prize for the invention of the transistor.
1956: Shockley founds Shockley Semiconductor Laboratory in Mountain View, CA.
1957: Robert Noyce, Gordon Moore, Jean Hoerni and the rest of the "traitorous eight" found Fairchild Semiconductor.
1958: Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor independently develop the first integrated circuit.
1959: Planar transistor developed by Jean Hoerni at Fairchild Semiconductor.
1968: Robert Noyce and Gordon Moore found Intel Corporation.

Soon after Intel was founded, a new revolution began as electronic circuits started to mimic the behavior of machines such as Charles Babbage's difference engine of 1822, Alan Turing's a-machine of 1936, and John Atanasoff and Clifford Berry's automatic electronic digital computer of 1942. Shortly after, in 1946, John Brainerd at the University of Pennsylvania developed the Electronic Numerical Integrator and Computer (ENIAC). The first transistorized computer appeared in 1955 at the University of Manchester under the leadership of Tom Kilburn. It took another 15 years for integrated processors to be developed.

Figure 1: DIP, PLCC and PGA packages.

Model           | Year | W/Mem. [bits] | Clock [MHz] | MIPS      | Applications
----------------|------|---------------|-------------|-----------|----------------
Intel 4004      | 1971 | 4 BCD/12      | 0.74        | 0.056     | Calculators
Intel 8008      | 1972 | 8/14          | 0.80        | 0.12      | Calc./Robots
Intel 4040      | 1974 | 4 BCD/13      | 0.74        | 0.060     | Calculators
TI TMS1000      | 1974 | 4/8           | 0.4         | 0.050     | Calculators
Intel 8080      | 1974 | 8/16          | 3.125       | 0.29      | Cash registers
G.In. CP1600*   | 1974 | 8/16          | 5           | 0.2       | Video games
Zilog Z80       | 1976 | 8/16          | 8           | 0.40      | Video games
MOS Tech. 6502  | 1976 | 8/16          | 8           | 3.4       | Video games
Intel MCS-48    | 1976 | 8/8           | 11          | 0.5       | Controllers
Intel 8085      | 1976 | 8/16          | 6.5         | 1         | Controllers
Intel 8086      | 1978 | 16/16         | 10          | 0.75      | PC-XT
Intel 8087      | 1979 | 16/-          | 10          | 50 kFLOPS | FPU
Motorola 68000  | 1979 | 32/24         | 7.67        | 1         | Mac/Video games
Intel 8051      | 1980 | 8/16          | 12          | 1         | Controllers
Intel 80186     | 1982 | 16/20         | 25          | 1         | PC-AT
Acorn ARM1/2    | 1985 | 32/26         | 6-8         |           | Computers
Atmel AVR       | 1996 | 8/8           | 8           |           | Controllers

(*) Microchip's PIC was born from this design.

2 Microprocessors vs. Microcontrollers

Microprocessors are units responsible for processing the flow of information in an electronic system, whereas microcontrollers are units that incorporate a processor, a memory and other subunits to perform intelligent operations on their own. Simple microcontrollers can come in packages as simple as the DIP16 of the Intel 4004. The most common microcontrollers are found in a DIP40 package, whereas modern microprocessors come in packages such as an 82-contact PLCC. Some of these packages are shown in Fig. 1.

3 Binary Representation

Let us begin our study of microcontrollers by reviewing binary representation and binary operations. A binary number uses only two symbols: we will use 1 to represent the high state and 0 to represent the low state. Thus, we can have a binary number such as 011001. It can be converted to decimal by

X = Σ_n b_n 2^n = 1×2^0 + 0×2^1 + 0×2^2 + 1×2^3 + 1×2^4 + 0×2^5 = 1 + 8 + 16 = 25.   (1)

The reverse operation is performed by successively dividing the decimal number by 2 and collecting the remainders. For example:

25/2 = 12, r[1]
12/2 =  6, r[0]
 6/2 =  3, r[0]
 3/2 =  1, r[1]
 1/2 =  0, r[1],   (2)

where r[x] denotes the remainder of the division. Reading the remainders from the last to the first, 25 is represented as 11001.

Now, let us take the number 9 in binary:

9/2 = 4, r[1]
4/2 = 2, r[0]
2/2 = 1, r[0]
1/2 = 0, r[1]   (3)

It needs 4 bits to be represented. We could have reached the same result by taking log2(9) ≈ 3.17 → 4. Thus, we can represent each decimal digit as a group of four bits. This is called binary-coded decimal, or BCD. For instance, 25 would be represented as 0010 0101. BCD requires more bits, but it has the advantage of simplicity.

3.1 Fixed-Point Representation

How do we represent real numbers? There are two possibilities. The first is to use a fixed-point representation. The trick here is to introduce a binary point. For example, let us take again the number 25 in binary (11001) and place a point so that we have 110.01. In this case we have:

110.01 = 1×2^2 + 1×2^1 + 0×2^0 + 0×2^-1 + 1×2^-2 = 4 + 2 + 0.25 = 6.25.   (4)

In our case, both 25 and 6.25 have exactly the same representation in binary; the only difference is the position of the point.

3.1.1 1's Complement

What about negative numbers? One strategy is to use the 1's complement, obtained by simply negating every bit. For example, 25 is 011001 in a 6-bit notation, and -25 would be 100110. In a 3-bit representation we would have:

-3 100
-2 101
-1 110
-0 111
+0 000
+1 001
+2 010
+3 011   (5)

This has some problems. For example, the 1's complement of 000000 (0) is 111111 (-0), so zero has two representations. Furthermore, arithmetic operations become cumbersome. Take for example 3 - 1 in a 3-bit representation. This is 011 + 110, which produces 001 with a carry-out bit of 1. The carry has to be added back to the result, and we obtain 010, which is the expected result.

3.1.2 2's Complement

We can improve computations by using the 2's complement, obtained by negating the bits and then adding one. For example, 3 in a 3-bit representation is 011, so -3 in 1's complement is 100. In 2's complement we only have to add 1 and obtain 101. Thus, again in a 3-bit representation, we would have:

-4 100
-3 101
-2 110
-1 111
 0 000
+1 001
+2 010
+3 011   (6)
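The conversion-by-division recipe of Eq. (2) and the complement rules above can be sketched in Python. This is a minimal illustration; the function names are ours, not part of the text.

```python
# Decimal -> binary by successive division (Eq. (2)): divide by 2,
# collect the remainders r[x], and read them from last to first.
def to_binary(n):
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, 2)        # quotient and remainder
        digits.append(str(r))
    return "".join(reversed(digits))

# 2's complement in a fixed width: negate every bit (1's complement),
# then add one. Masking a Python integer does exactly this.
def twos_complement(x, bits=3):
    return x & ((1 << bits) - 1)   # -3 -> pattern 101 in 3 bits

def decode(u, bits=3):
    """Read a bit pattern back as a signed integer."""
    return u - (1 << bits) if u & (1 << (bits - 1)) else u

print(to_binary(25))                        # -> 11001
print(format(twos_complement(-3), "03b"))   # -> 101
# 3 - 1 computed as 011 + 111, discarding the carry out of bit 3:
print(decode((0b011 + 0b111) & 0b111))      # -> 2
```

Note how the carry-out in 2's complement is simply masked away, whereas the 1's complement scheme would require adding it back in.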

This way, we not only avoid the problem of -0 but also simplify the arithmetic. Consider the example of 3 - 1 again, now in 2's complement: 011 + 111 = 010 = 2, with a carry bit that can be completely discarded. In 2's complement, overflow can be detected when summing two numbers of the same sign produces a number of the opposite sign.

Let us now see how this works for signed fixed-point numbers:

-2.0 100
-1.5 101
-1.0 110
-0.5 111
 0.0 000
+0.5 001
+1.0 010
+1.5 011   (7)

3.1.3 Arithmetic

In fixed-point representation, addition and subtraction are exactly the same operations as for integer numbers. For example:

0.5 + 1.0 = 00.1 + 01.0 = 01.1 = 1.5
1.5 - 0.5 = 01.1 + 11.1 = 01.0 = 1.0   (8)

Now let us see how multiplication works. For simplicity, let us drop the radix point and take care of it later:

1.0 × 0.5 = 010 × 001:

    010
  × 001
  -----
    010
   000
+ 000
-------
  00010   (9)

We must now account for the radix point. Since each multiplicand has one bit after the point, the result must have the point two bits from the right-hand end, giving 000.10. However, our representation includes only one bit for the fractional part and two for the integer part. We therefore must either truncate or round the result to the appropriate number of bits. In this case it is simple to just truncate it to 00.1, which in decimal is 0.5, the result that we expected.

Let us now use two bits each for the integer and the fractional parts. Our correspondence table becomes:

 0.00 0000
+0.25 0001    -0.25 1111
+0.50 0010    -0.50 1110
+0.75 0011    -0.75 1101
+1.00 0100    -1.00 1100
+1.25 0101    -1.25 1011
+1.50 0110    -1.50 1010
+1.75 0111    -1.75 1001
              -2.00 1000   (10)

Let us multiply 0.25 × (-2.00) = 0001 × 1000. Treating both operands naively, the only non-zero partial product is 0001 shifted left three times:

      0001
    × 1000
    ------
  00001000   (11)

Our multiplicands each have two bits in the fractional part, so the result should have four: 0000.1000. Truncating it to our format we get 00.10 = +0.5, which is completely wrong. This happened because we did not account for the sign. One way to compensate for it is to expand the sign bit of the negative multiplicand:

      1000
    × 0001
    ------
  11111000   (12)

We must place the point four bits from the right-hand end. The truncated result becomes 11.10, which corresponds to -0.5 as expected. Why did we do this 1-filling operation? Because we are multiplying a negative number, and its partial products must be 1-filled up to the full width of the result. This is known as sign extension.

Let us now calculate it the other way around, with the partial product entered as the 2's complement of 0001 (1111), shifted left three times:

      0001
    × 1000
    ------
  11111000   (13)
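The signed fixed-point products worked out here can be reproduced with scaled integers. In the sketch below (our own illustration, not from the text), a Q2.2 value from table (10) is stored as an integer scaled by 2^2, and Python's arithmetic right shift plays the role of sign extension plus truncation:

```python
FRAC = 2  # fractional bits in the 4-bit format of table (10)

def encode(x):
    """Q2.2 encode: 0.25 -> 1, -2.0 -> -8 (bit pattern 1000)."""
    return int(round(x * (1 << FRAC)))

def decode(u):
    return u / (1 << FRAC)

def q_mul(a, b):
    p = a * b          # the product carries 2*FRAC fractional bits
    return p >> FRAC   # arithmetic shift: sign-extends and truncates

print(decode(q_mul(encode(0.25), encode(-2.0))))   # -> -0.5
print(decode(q_mul(encode(-0.75), encode(0.75))))  # -> -0.75
```

Truncation here is a floor operation, which is why -0.75 × 0.75 lands on -0.75 rather than the exact -0.5625.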

Taking four bits for the fractional part in (13) and truncating, we again get 11.10, which is the expected result. Note, however, that this time we took the 2's complement. This happened because we are multiplying one argument by the sign bit; it is like multiplying the argument by -1.

Let us now take a look at another example, -0.75 × 0.75 = 1101 × 0011:

      1101
    × 0011
    ------
  11111101    (1101, sign-extended)
+ 11111010    (1101 shifted left once, sign-extended)
----------
  11110111   (14)

where the carry out of the eighth bit has been discarded. Placing the point, we get 1111.0111. If we simply truncate the result we get 11.01, which corresponds to -0.75. The exact result would be -0.5625, so we are left with a quantization error that cannot be accounted for in this representation with a restricted number of bits.

For the sake of practice, let us make the same calculation the other way around:

      0011
    × 1101
    ------
  00000011    (bit 0: 0011)
  00001100    (bit 2: 0011 shifted left twice)
+ 11101000    (sign bit: 2's complement of 0011, shifted left three times, sign-extended)
----------
  11110111   (15)

which is exactly the same value.

3.2 Floating Point

With floating point we have a completely different story. In floating point, a number is represented as

S E3 E2 E1 E0 M5 M4 M3 M2 M1 M0,   (16)

where S is a bit for the sign, E is the exponent, and M is the mantissa. The value represented is (-1)^S × M × 2^E. Typically, in floating-point representation, the mantissa is normalized. For example:

110.100 = 1.10100 × 2^2
0.00101 = 1.01000 × 2^-3.   (17)

Also, following the IEEE 754 standard, the exponent is stored with a bias of 127. This means that 127 (01111111) is added to the exponent before it is stored. For example, an exponent of 12 (00001100) is stored as 139 (10001011), and -5 is stored as 122 (01111010).
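The sign/exponent/mantissa split with a 127 bias is exactly what Python's standard `struct` module exposes for IEEE 754 single precision. The following sketch (the function name is ours) extracts the three fields:

```python
import struct

def fields(x):
    """Split a float into IEEE 754 single-precision fields:
    1 sign bit, 8 biased exponent bits, 23 mantissa bits."""
    (u,) = struct.unpack(">I", struct.pack(">f", x))
    sign = u >> 31
    exponent = (u >> 23) & 0xFF   # stored with the 127 bias
    mantissa = u & 0x7FFFFF       # the leading 1 of the mantissa is dropped
    return sign, exponent, mantissa

s, e, m = fields(-9.75)           # -9.75 = -1.00111 x 2^3
print(s, e, e - 127)              # -> 1 130 3
```

Subtracting 127 from the stored exponent recovers the true exponent, just as in the examples above.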

Let us now put it all together. A number in floating-point notation is stored as SIGN EXPONENT MANTISSA, and the 1 at the left-hand side of the mantissa's radix point is dropped. Let us look at some examples:

-10.0110 = -1.00110 × 2^1  →  1 10000000 00110...
+0.00101 = +1.01000 × 2^-3 →  0 01111100 01000...   (18)

3.2.1 Arithmetic

Addition and subtraction are quite simple: we first bring both operands to the same exponent and then perform a standard sum of the mantissas, keeping that exponent. Multiplication and division are also simple: we perform a standard multiplication of the mantissas, sum the exponents and subtract the bias. The sign bits are simply added, and the carry is dropped. Thus, for a floating-point representation with 2 bits for the exponent, a bias of 2, and 3 bits for the mantissa, we have:

2.5 + 0.5 = 10.1 + 0.1
          = 1.01 × 2^1 + 1.00 × 2^-1   (0 11 010 + 0 01 000)
          = 101.00 × 2^-1 + 1.00 × 2^-1
          = 110.00 × 2^-1
          = 1.10 × 2^1 = 0 11 100 = 3.0   (19)

-2.0 × 0.25 = -10.0 × 0.01
            = -1.00 × 2^1 × 1.00 × 2^-2   (1 11 000 × 0 00 000)
            = 1 01 000
            = -1.000 × 2^-1 = -0.100 = -0.5   (20)
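The toy format used in (19) and (20) can be checked in code. The sketch below (our own, with illustrative helper names) follows the recipe in the text: add the sign bits and drop the carry, add the exponents and subtract the bias, and multiply the mantissas:

```python
BIAS = 2  # 2 exponent bits stored with a bias of 2; 3 mantissa bits

def decode(s, e, m):
    """Value of a field triple: (-1)^s * 1.mmm * 2^(e - BIAS)."""
    return (-1) ** s * (1 + m / 8) * 2 ** (e - BIAS)

def multiply(a, b):
    sa, ea, ma = a
    sb, eb, mb = b
    s = (sa + sb) & 1              # add the sign bits, drop the carry
    e = ea + eb - BIAS             # add the exponents, subtract the bias
    mant = (1 + ma / 8) * (1 + mb / 8)
    if mant >= 2:                  # renormalize so the mantissa is 1.mmm
        mant, e = mant / 2, e + 1
    return s, e, round((mant - 1) * 8)

# -2.0 x 0.25, i.e. (1 11 000) x (0 00 000) as in (20):
print(decode(*multiply((1, 0b11, 0b000), (0, 0b00, 0b000))))  # -> -0.5
```

Decoding the result 1 01 000 gives -1.000 × 2^-1 = -0.5, matching the hand calculation in (20).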