COMP 120 Computer Organization, Spring 2006
Problem Set #1 Solutions
For any doubts in the following, contact Agam, Room 023.

Problem 1. Miss Information

[A] The first card can be any one of 52 possibilities, so when we learn exactly which card it is (1 possibility) we receive
log2(52/1) = 5.7004 (approx.) bits of information.

Similarly, the 10th card can be any one of 43 remaining possibilities, so when we learn exactly which card it is we receive
log2(43/1) = 5.4263 (approx.) bits of information.

Finally, the second-to-last card narrows the choice of the last card down to just one, so turning the last card over gives us no new information: log2(1/1) = 0 bits.

[B] Total possibilities before we know anything = 52 (the number of cards in a deck).

After we learn the card is red, the possibility space halves to 26:
information received = log2(52/26) = 1 bit.

After we learn it is a face card, the possibility space shrinks by a ratio of 3/13 (the ratio of face cards to all cards in any suit), from 26 to 6:
additional information received = log2(26/6) = 2.1155 (approx.) bits.

After we learn it is a diamond, the possibility space halves again (there are only two red suits, diamonds and hearts), from 6 to 3:
additional information received = log2(6/3) = 1 bit.

The remaining information, received when we finally learn exactly which card it is:
log2(3/1) = 1.585 (approx.) bits.

[C] With the encoding A -> 0, B -> 10, C -> 11:
100101001100000 -> BABBACAAAAA

[D] There are many ways to do this. One easy way:
Expected number of A's = (0.7)(1000) = 700, contributing (700)(1) = 700 bits.
Similarly, the B's contribute (200)(2) = 400 bits and the C's contribute (100)(2) = 200 bits.
Total expected length = 700 + 400 + 200 = 1300 bits.
Compare with 1000 * log2(3/1) = 1584.9625 bits, the length needed if the three symbols were equally likely: our encoding is shorter.

[D] By the formula specified in the handout,
information per flip of the crooked coin = -[(0.4)log2(0.4) + (0.6)log2(0.6)] = -[(-0.52877) + (-0.44218)] = 0.97095 bits.
By comparison, information per flip of a fair coin = -2(0.5)log2(0.5) = 1 bit.
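These log-ratio and entropy calculations are easy to reproduce numerically. The short Python sketch below is an added illustration (not part of the original solution); it recomputes the values from parts [A], [B] and [D].

    import math

    def info_bits(before, after):
        # Information (in bits) gained when the number of equally likely
        # possibilities shrinks from `before` to `after`.
        return math.log2(before / after)

    # Part [A]: turning over the first, tenth, and last cards
    print(info_bits(52, 1))   # ~5.7004 bits
    print(info_bits(43, 1))   # ~5.4263 bits
    print(info_bits(1, 1))    # 0 bits

    # Part [B]: red -> face card -> diamond -> exact card
    print(info_bits(52, 26))  # 1 bit
    print(info_bits(26, 6))   # ~2.1155 bits
    print(info_bits(6, 3))    # 1 bit
    print(info_bits(3, 1))    # ~1.585 bits

    def entropy(probs):
        # Average information per outcome: -sum(p * log2(p)).
        return -sum(p * math.log2(p) for p in probs)

    # Part [D]: crooked coin vs. fair coin
    print(entropy([0.4, 0.6]))  # ~0.97095 bits
    print(entropy([0.5, 0.5]))  # 1.0 bit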

[E] Clearly, we have an upper bound on the information: the length of the file, 27358 * 8 bits = 218864 bits.
For the lower bound, we use the fact that Shakespeare wrote 154 sonnets; since the information is essentially which sonnet is in the file,
lower bound on information = log2(154) = 7.2668 (approx.) bits.

[F] 4328 bytes = 4328 * 8 bits = 34624 bits.
Since 7.2668 < 34624 < 218864, this is consistent with the answer to part (e).

2. Modular Arithmetic and 2's Complement Representation

[A] A 32-bit word holds 32 bits of information (by definition) and can therefore represent 2^32 different values.

[B] (6 marks)
Representation of 0 = 00000000000000000000000000000000 (32 zeroes).
Representation of the most positive integer = 01111111111111111111111111111111 (first digit zero, rest all ones) = 2^31 - 1 = 2147483647.
Representation of the most negative integer = 10000000000000000000000000000000 (first digit one, rest all zeroes) = -2^31 = -2147483648.
The negation of the most negative integer is 2^31, which cannot be represented in 32-bit 2's complement (naively negating it yields the same bit pattern back).

[C] (5 marks)
(1) 37_10 = 00000025_16
(2) -32768_10 = FFFF8000_16
(3) 11011110101011011011111011101111_2 = DEADBEEF_16
(4) 10101011101011011100101011111110_2 = ABADCAFE_16
(5) -1_10 = FFFFFFFF_16

[D] (6*2 + 3 = 15 marks) All sums below use 6-bit 2's complement.

(1) 14 + 7
      001110
    + 000111
    --------
      010101  = 21

(2) 21 - 15
      010101
    + 110001
    --------
     1000110  = 6 (discarding the leftmost carry bit)

(3) 15 - 21
      001111
    + 101011
    --------
      111010  = -6 (taking the 2's complement of 111010 gives 000110 = 6)

(4) 21 - 6
      010101
    + 111010
    --------
     1001111  = 15 (discarding the leftmost carry bit)

(5) -6 + 21
      111010
    + 010101
    --------
     1001111  = 15 (discarding the leftmost carry bit)

(6) 21 + (-21)
      010101
    + 101011
    --------
     1000000  = 0 (discarding the leftmost carry bit)

(7) 21 + 21
      010101
    + 010101
    --------
      101010  which is not the expected decimal result of 42: the leftmost bit of the 6-bit result is 1, so it reads as the negative value -22. 42 lies outside the 6-bit 2's complement range, i.e. the addition has overflowed.
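These conversions and sums are mechanical, so they are easy to double-check in a few lines of Python. The sketch below is an added illustration (the helper names are mine, not from the handout); it interprets and produces 6-bit 2's complement values and the 32-bit hex forms from part [C].

    BITS = 6

    def to_twos(x, bits=BITS):
        # Encode a Python int as an n-bit 2's complement bit string.
        return format(x & ((1 << bits) - 1), "0{}b".format(bits))

    def from_twos(s):
        # Decode an n-bit 2's complement bit string back to a signed int.
        v = int(s, 2)
        return v - (1 << len(s)) if s[0] == "1" else v

    def add(a, b, bits=BITS):
        # Add two values in n bits, discarding any carry out of the top bit.
        return (a + b) & ((1 << bits) - 1)

    # Part [D]: a few of the worked sums
    print(to_twos(14), to_twos(7), to_twos(add(14, 7)))   # 001110 000111 010101 (= 21)
    print(from_twos(to_twos(add(21, -15))))               # 6
    print(from_twos(to_twos(add(15, -21))))               # -6
    print(from_twos(to_twos(add(21, 21))))                # -22: overflow, 42 does not fit in 6 bits

    # Part [C]: 32-bit hexadecimal representations
    print(format(37 & 0xFFFFFFFF, "08X"))                 # 00000025
    print(format(-32768 & 0xFFFFFFFF, "08X"))             # FFFF8000
    print(format(-1 & 0xFFFFFFFF, "08X"))                 # FFFFFFFF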

[E] Brief mathematical explanation. Working with n-bit words (arithmetic modulo 2^n):

-A = 0 - A
   = -2^n + (2^n - A)
   = -2^n + [(2^(n-1) + ... + 2 + 1) + 1 - A]
   = -2^n + {[(2^(n-1) + ... + 2 + 1) - A] + 1}

The term in square brackets subtracts A from the all-ones pattern 2^(n-1) + ... + 2 + 1 = 2^n - 1, which is exactly the bitwise complement of A, and the leading -2^n is discarded in n-bit arithmetic. This is precisely our "complement and add one" operation: taking the 2's complement amounts to subtracting the number from a string of powers of 2 and then adding 1.
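As a quick sanity check of this identity, here is a short Python sketch (an added illustration, not part of the original solution) verifying that complement-and-add-one agrees with negation modulo 2^n for every 6-bit value.

    def twos_complement_negate(a, bits):
        # Negate an n-bit value by complementing its bits and adding 1.
        mask = (1 << bits) - 1
        return ((a ^ mask) + 1) & mask

    bits = 6
    for a in range(1 << bits):
        assert twos_complement_negate(a, bits) == (-a) % (1 << bits)
    print("complement-and-add-one matches negation mod 2**6 for all 64 values")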

3. Rotary Shaft Encoders

[A] Slots are created as follows (shown as a linear strip for convenience, with the previous slots in the upper row and the new slots in the lower row):

    0 1 0 1 0 1 0 1 0
    0 1 1 0 0 1 1 0 0

As we can see, the 2-bit sequence formed by concatenating the lower sequence with the upper one yields 00, 01, 10, 11 from left to right, and the reverse from right to left.

[B] A possible different 3-bit sequence is the following:
000, 001, 011, 010, 110, 111, 101, 100

[C] In the general case, we have to ensure that successive codes differ in just one digit. One easy way to do it:
Divide N into two parts, say n1 and n2, such that n1 + n2 = N.
Write out the numbers 0 to 2^n1 - 1 and 0 to 2^n2 - 1 as sequences that already satisfy this property (so the construction is recursive).
All 2^N codes then fit in a matrix indexed by these two sequences, and adjacent entries differ in at most one digit.
Now simply move along adjacent cells until all the codes are covered; a concrete sketch of the reflected special case follows.
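For reference, here is a short Python sketch of the classic reflected construction (an added illustration, equivalent to choosing n1 = 1 and n2 = N-1 in the scheme above); successive codes, including the wrap-around from last to first, differ in exactly one bit.

    def gray_code(n):
        # Return the 2**n reflected Gray code strings of length n.
        if n == 0:
            return [""]
        prev = gray_code(n - 1)
        # Prefix the previous list with 0, then its reversal with 1:
        # consecutive codes (and the wrap-around) differ in exactly one bit.
        return ["0" + c for c in prev] + ["1" + c for c in reversed(prev)]

    codes = gray_code(3)
    print(codes)  # ['000', '001', '011', '010', '110', '111', '101', '100'], the sequence in part [B]

    # Verify the single-change property, including the wrap-around.
    for a, b in zip(codes, codes[1:] + codes[:1]):
        assert sum(x != y for x, y in zip(a, b)) == 1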

4. Error Detection and Correction

[A]
(1) bits add up to 13 => error
(2) bits add up to 12 => correct
(3) bits add up to 13 => error
(4) bits add up to 0 => correct

[B]
(1) Correct data: 0011 0110 0011 011
(2) Correct data: 1100 0000 0101 100
(3) Correct data: 000 101 10
(4) Correct data: 0110 1001 0110 100

[C] A 2-bit error will be detected (and mis-corrected) as a single-bit error when the two errors occur in the same row or column, or when the two changed bits are parity bits, one from the parity column and one from the parity row. E.g., if a 2-bit error turns

    011
    101
    11

into

    010
    101
    01

then it will be viewed as a single-bit error and "corrected" to:

    110
    101
    01

[D] Bit no. 13 is faulty; its index in binary is 1101.

[E] A parity error in p_i means e_i = 1, i.e.
an error in p0 => the rightmost bit of the index is 1,
an error in p1 => the second bit from the right of the index is 1,
and so on.
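Parts [D] and [E] describe the usual Hamming syndrome trick: each failing parity check contributes one bit of the faulty position's index. The Python sketch below is an added illustration, assuming the common layout in which parity bit p_i sits at position 2^i of a 1-indexed codeword and even parity is used; under those assumptions the syndrome is simply the XOR of the positions holding 1s.

    def hamming_syndrome(bits):
        # bits[i] is the received bit at 1-indexed position i (bits[0] is unused).
        # For a single-bit error the syndrome equals the index of the faulty bit;
        # a syndrome of 0 means every parity check passed.
        syndrome = 0
        for i in range(1, len(bits)):
            if bits[i]:
                syndrome ^= i
        return syndrome

    received = [0] * 16      # positions 1..15; the all-zero word is a valid codeword
    received[13] ^= 1        # inject a single-bit error at position 13
    s = hamming_syndrome(received)
    print(s, format(s, "04b"))   # 13 1101 -> bit no. 13 is faulty, as in part [D]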

[F] Example of a 2-bit error that will be missed: if a parity bit and a bit position in its check subset are both in error (with that bit position belonging to only one other check subset), the error will be detected as a single-bit error in the parity bit of that other check subset. A simple modification: introduce an additional overall parity bit at position 0.

5. Huffman Coding

[A] Using our earlier definition,
information per digit = log2(10/1) = 3.3219 bits.

[B] F(d) = 1 => d = 0
F(d) = 2 => d = 1 / 3 / 7
F(d) = 3 => d = 2 / 5 / 8
F(d) = 5 => d = 4 / 9
F(d) = 7 => d = 6

Information in an output of 1 (1 possibility only) = log2(10/1) = 3.3219 (approx.) bits.
Information in an output of 3 (3 possibilities) = log2(10/3) = 1.737 (approx.) bits.
Information in an output of 5 (2 possibilities) = log2(10/2) = 2.3219 (approx.) bits.

[C] Average amount = weighted sum of the information in each F(d) output value
= 2(0.1)(3.3219) + 2(0.3)(1.737) + (0.2)(2.3219)
= 0.6644 + 1.0422 + 0.4644
= 2.1710 bits.

[D] It will take n-1 iterations: each step removes two members from the set and adds one back, so the set shrinks by one per iteration until a single tree remains.

[E] (3 marks)

(Huffman tree diagram not reproduced here; the resulting code is tabulated below.)

Symbol   Probability   Encoding
  1          0.1          000
  2          0.3          10
  3          0.3          11
  5          0.2          01
  7          0.1          001

[F] Average length = 3(0.1) + 2(0.3) + 2(0.3) + 2(0.2) + 3(0.1)
= 0.3 + 0.6 + 0.6 + 0.4 + 0.3
= 2.2 bits per symbol.
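The tree in part [E] can be reproduced mechanically. Below is a short Python sketch (an added illustration, not part of the handout) that runs the Huffman merging procedure from part [D] on the given probabilities and reports the code lengths and the average length from part [F]. Tie-breaking among equal probabilities is arbitrary, so the actual 0/1 codewords may differ from the table above, but the lengths and the 2.2 average come out the same.

    import heapq
    from itertools import count

    def huffman_code_lengths(probs):
        # Run Huffman's merging procedure and return {symbol: code length}.
        tick = count()  # tie-breaker so heap entries never compare symbol lists
        heap = [(p, next(tick), [sym]) for sym, p in probs.items()]
        heapq.heapify(heap)
        depth = {sym: 0 for sym in probs}
        while len(heap) > 1:           # n - 1 iterations, as noted in part [D]
            p1, _, syms1 = heapq.heappop(heap)
            p2, _, syms2 = heapq.heappop(heap)
            for sym in syms1 + syms2:  # every symbol under the merged node
                depth[sym] += 1        # moves one level deeper in the tree
            heapq.heappush(heap, (p1 + p2, next(tick), syms1 + syms2))
        return depth

    probs = {1: 0.1, 2: 0.3, 3: 0.3, 5: 0.2, 7: 0.1}
    lengths = huffman_code_lengths(probs)
    print(lengths)                                    # {1: 3, 2: 2, 3: 2, 5: 2, 7: 3}
    print(sum(probs[s] * lengths[s] for s in probs))  # ~2.2 bits per symbol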