LECTURE 5 Noise and ISI
MIT 6.02 DRAFT Lecture Notes, Spring 2010 (Last update: February 22, 2010). Comments, questions or bug reports? Please contact 6.02-staff@mit.edu

Sometimes theory tells you: stop trying that approach, it will NEVER work, and you can try something else. Sometimes theory tells you: you've done the best, at least under a carefully chosen metric for "best", and you can decide you are finished. But far more often, theory tells you: under certain difficult-to-satisfy conditions, your solution is sure to have this seemingly important property, provided you use a very particular approach. Though you may also get the seemingly important property using something like the very particular approach, and maybe the seemingly important property is not really that important. And maybe all you get is a suggestion as to what to try next.

If there is intersymbol interference (ISI) in a communication channel, then the signal detected at the receiver depends not only on the particular bit being sent by the transmitter, but also on bits the transmitter sent previously, on bits the transmitter will send next, or both. As you will see below, if there is ISI, then determining the probability of a bit error is more complicated than in the ISI-free case. We will examine the ISI-plus-noise case by returning to the example of additive white Gaussian noise, making use of the unit Normal cumulative distribution function, denoted Φ, as well as conditional probabilities. Following the analysis of bit error in the face of ISI, we will return to the subject of eliminating ISI using deconvolution. But this time we will take the view that we are willing to accept imperfect deconvolution in the noise-free case, if the resulting strategy produces reasonable results in the noisy case.
5.1 The Unit Normal Cumulative Distribution Function

The reason we emphasize the Normal (or Gaussian) cumulative distribution function (CDF) is that a wide variety of noise processes are well-described by the Normal distribution.
Why is this the case? The answer follows from the central limit theorem that was mentioned, but not described, in the previous lecture. A rough statement of the central limit theorem is that the CDF for the sum of a large number of independent random variables is nearly Normal, almost regardless of the CDFs of the individual variables. The "almost" allows us to avoid delving into some technicalities. Since noise in communication systems is often the result of the combined effects of many sources of interference, it is not surprising that the central limit theorem should apply. So, noise in communication systems is often, but not always, well-described by the Normal CDF. We will return to this issue in subsequent lectures, but for now we will assume that noise is Normally distributed.

The unit Normal probability density function (PDF) is just the Normal (or Gaussian) PDF for the zero-mean (µ = 0), unit standard-deviation (σ = 1) case. Simplifying the formula from Lecture 4, the unit Normal PDF is

    f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}},    (5.1)

and the associated unit Normal cumulative distribution function is

    \Phi(x) = \int_{-\infty}^{x} f_X(x') \, dx' = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}} e^{-\frac{x'^2}{2}} \, dx'.    (5.2)

There is no closed-form formula for the unit Normal CDF, Φ, but most computer math libraries include a function for its evaluation. This might seem foolish, given that one is unlikely to be lucky enough to have a noise process with a standard deviation of exactly one. However, there is a simple way to use the unit Normal CDF for any Normally distributed random variable. For example, suppose we have a Normally distributed zero-mean noise process, noise[n], with standard deviation σ. The probability that noise[n] < x is given by

    P(noise[n] < x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{x'^2}{2\sigma^2}} \, dx' = \Phi\!\left(\frac{x}{\sigma}\right).    (5.3)

That is, we can evaluate the CDF for a zero-mean, standard-deviation-σ process just by scaling the argument before evaluating Φ. Note, this does NOT imply that one can just scale the argument when evaluating the PDF!
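Since most math libraries expose the error function, Φ can be computed directly. The following Python sketch (the function names are illustrative, not from the notes) implements Φ via math.erf together with the argument-scaling trick of (5.3):

```python
import math

def phi(x):
    """Unit Normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_noise_below(x, sigma):
    """P(noise[n] < x) for zero-mean Normal noise with standard
    deviation sigma, using the argument scaling of equation (5.3)."""
    return phi(x / sigma)
```

Here phi(0.0) returns 0.5, consistent with half the probability density lying below zero; note that the same argument scaling would NOT be correct for the PDF, which also needs a 1/σ amplitude factor.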
As with any CDF, lim_{x→−∞} Φ(x) = 0 and lim_{x→∞} Φ(x) = 1. In addition, the symmetry of the zero-mean Normal PDF, f_X(x) = f_X(−x), implies Φ(0) = 0.5 (half the density in the PDF corresponds to negative values for the random variable). Another identity that follows from the symmetry of the Normal PDF, and one we will use subsequently, is

    \Phi(x) = 1 - \Phi(-x).    (5.4)

5.2 ISI and BER

Recall from last lecture that if our noise model is additive white Gaussian noise, and if we assume the receiver and transmitter have exactly the same bit period and never drift apart,
Figure 5-1: A noise-free eye diagram, showing a bit detection sample with no ISI effects.

then

    y[i + ks] = y_{nf}[i + ks] + noise[i + ks],    (5.5)

where i + ks is the index of the bit detection sample for the k-th transmitted bit, y[i + ks] is the value of the received voltage at that sample, and y_{nf}[i + ks] is what the received voltage would have been in the absence of noise. If there are no ISI effects, then y_{nf}[i + ks] is equal to either the receiver maximum voltage or the receiver minimum voltage, depending on whether a 1 bit or a 0 bit is being received, as shown in the eye diagram in Figure 5-1.

For the eye diagram in Figure 5-2, there are three bits of intersymbol interference, and as can be seen in the figure, y_{nf}[i + ks] can take on any of sixteen possible values. The eight upper values are associated with receiving a 1 bit, and the eight lower values are associated with receiving a 0 bit. In order to determine the probability of a bit error in the case shown in Figure 5-2, we must determine the probability of a bit error for each of the sixteen cases associated with the sixteen possible values for y_{nf}[i + ks].

For example, suppose v_L^j is one of the possible voltage values for the bit detection sample associated with receiving a 0 bit. Then for a digitization threshold voltage v_{th}, the probability of a bit error, given y_{nf}[i + ks] = v_L^j, is

    P(y[i + ks] > v_{th} \mid y_{nf}[i + ks] = v_L^j) = P(noise[i + ks] > (v_{th} - v_L^j) \mid y_{nf}[i + ks] = v_L^j),    (5.6)

where we have used the notation P(a | b) to indicate the probability that a is true, given it is known that b is true. Similarly, suppose v_H^j is one of the possible voltage values for a bit detection sample associated with receiving a 1 bit. Then the probability of a bit error, given y_{nf}[i + ks] = v_H^j,
Figure 5-2: A noise-free eye diagram showing a bit detection sample with three bits of ISI.

is

    P(noise[i + ks] < (v_{th} - v_H^j) \mid y_{nf}[i + ks] = v_H^j).    (5.7)

Comparing (5.7) to (5.6), there is a flip in the direction of the inequality. Note also that (v_{th} − v_H^j) must be negative, or bit errors would occur even in the noise-free case. Equation (5.6) means that if the transmitter was sending a 0 bit, and if the sequence of transmitted bits surrounding that 0 bit would have produced voltage v_L^j at the receiver (in the absence of noise), then there will be a bit error if the noise is more positive than the distance between the threshold voltage and v_L^j. Similarly, (5.7) means that if the transmitter was sending a 1 bit, and if the sequence of transmitted bits surrounding that 1 bit would have produced voltage v_H^j at the receiver (in the absence of noise), then there will be a bit error if the noise is negative enough to offset how far v_H^j is above the threshold voltage.

If the noise samples are Normally distributed random variables with zero mean (µ = 0) and standard deviation σ, the probabilities in (5.6) and (5.7) can be expressed using the unit Normal CDF. Specifically,

    P(noise[i + ks] > (v_{th} - v_L^j) \mid y_{nf}[i + ks] = v_L^j) = 1 - \Phi\!\left(\frac{v_{th} - v_L^j}{\sigma}\right) = \Phi\!\left(\frac{v_L^j - v_{th}}{\sigma}\right)    (5.8)

and

    P(noise[i + ks] < (v_{th} - v_H^j) \mid y_{nf}[i + ks] = v_H^j) = \Phi\!\left(\frac{v_{th} - v_H^j}{\sigma}\right),    (5.9)

where the right-most equality in (5.8) follows from (5.4).
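As a concrete check of (5.8) and (5.9), the two conditional error probabilities can be evaluated with any unit Normal CDF routine. This Python sketch builds Φ from math.erf; the function names and the numbers in the usage note are illustrative choices, not values from the notes:

```python
import math

def phi(x):
    # Unit Normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_err_given_zero(v_l, v_th, sigma):
    """Equation (5.8): P(error | 0 sent, noise-free sample v_l).
    An error requires noise > v_th - v_l."""
    return phi((v_l - v_th) / sigma)

def p_err_given_one(v_h, v_th, sigma):
    """Equation (5.9): P(error | 1 sent, noise-free sample v_h).
    An error requires noise < v_th - v_h (a negative quantity)."""
    return phi((v_th - v_h) / sigma)
```

With v_L = 0, v_H = 1, a midpoint threshold v_th = 0.5, and σ = 0.25, both probabilities equal Φ(−2) ≈ 0.0228, as symmetry demands.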
If all bit sequences are equally likely, then each of the possible voltage values for the bit detection sample is equally likely.¹ Therefore, the probability of a bit error is just the sum of the conditional probabilities associated with each of the possible voltage values for a bit detection sample, divided by the number of possible values. More specifically, if there are J_L voltage values for the bit detection sample associated with a transmitted 0 bit and J_H voltage values associated with a transmitted 1 bit, and all voltage values are equally likely, then the probability of a bit error is given by

    P(\text{bit error}) = \frac{1}{J_H + J_L} \left( \sum_{j=1}^{J_L} \Phi\!\left(\frac{v_L^j - v_{th}}{\sigma}\right) + \sum_{j=1}^{J_H} \Phi\!\left(\frac{v_{th} - v_H^j}{\sigma}\right) \right).    (5.10)

A number of popular bit encoding schemes, including the 8b/10b encoding scheme described in the first lab, reduce the probability of certain transmitter bit patterns. In cases like these, it may be preferable to track transmitter bit sequences rather than voltage values of the bit detection sample. A general notation for tracking the error probabilities as a function of transmitter bit sequence is quite unwieldy, so we will consider a simple case.

Suppose the ISI is such that only the previous bit interferes with the current bit. In this limited ISI case, the received bit detection sample can take on one of only four possible values. Let v_L^1 denote the value associated with transmitting two 0 bits in a row (bit sequence 00); v_L^2, the value associated with transmitting a 1 bit just before a 0 bit (bit sequence 10); v_H^1, the value associated with transmitting a 0 bit just before a 1 bit (bit sequence 01); and v_H^2, the value associated with transmitting two 1 bits in a row (bit sequence 11). See, even in this simple case, the notation is already getting awkward.
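Equation (5.10) is straightforward to evaluate once the noise-free eye values are known. A minimal Python sketch (illustrative names; phi is the unit Normal CDF helper built from math.erf):

```python
import math

def phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bit_error_rate(v_lows, v_highs, v_th, sigma):
    """Equation (5.10): average the per-eye-value error probabilities,
    assuming every noise-free eye value is equally likely."""
    total = sum(phi((v_l - v_th) / sigma) for v_l in v_lows)
    total += sum(phi((v_th - v_h) / sigma) for v_h in v_highs)
    return total / (len(v_lows) + len(v_highs))
```

For a symmetric eye such as v_lows = [0.0, 0.1], v_highs = [0.9, 1.0] with v_th = 0.5, the result is just the average of the matching Φ terms.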
We use P(00→1) to denote the probability that the receiver erroneously detected a 1 bit given a transmitted bit sequence of 00, P(01→0) to denote the probability that the receiver erroneously detected a 0 bit given a transmitted bit sequence of 01, and so forth. From (5.6) and (5.7),

    P(00 \to 1) = P(noise[i + ks] > (v_{th} - v_L^1) \mid y_{nf}[i + ks] = v_L^1),    (5.11)

    P(10 \to 1) = P(noise[i + ks] > (v_{th} - v_L^2) \mid y_{nf}[i + ks] = v_L^2),    (5.12)

    P(01 \to 0) = P(noise[i + ks] < (v_{th} - v_H^1) \mid y_{nf}[i + ks] = v_H^1),    (5.13)

    P(11 \to 0) = P(noise[i + ks] < (v_{th} - v_H^2) \mid y_{nf}[i + ks] = v_H^2).    (5.14)

If we denote the probability of transmitting the bit sequence 00 as P_00, the probability of transmitting the bit sequence 01 as P_01, and so forth, then the probability of a bit error is given by

    P(\text{bit error}) = P(00 \to 1) P_{00} + P(10 \to 1) P_{10} + P(01 \to 0) P_{01} + P(11 \to 0) P_{11}.    (5.15)

Whew! Too much notation.

¹The degenerate case, where multiple bit patterns result in identical voltage values for the bit detection sample, can be treated without altering the following analysis, as is shown in the worked example companion to this text.
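When the sequence probabilities are unequal, (5.15) weights each conditional error probability by the probability of its bit sequence. A hedged Python sketch (the dictionary keys and function names are my own, not from the notes):

```python
import math

def phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ber_one_bit_isi(v, p, v_th, sigma):
    """Equation (5.15) for one previous bit of ISI.  v maps each two-bit
    sequence to its noise-free sample value; p maps it to its probability."""
    return (p['00'] * phi((v['00'] - v_th) / sigma) +   # P(00->1) * P_00
            p['10'] * phi((v['10'] - v_th) / sigma) +   # P(10->1) * P_10
            p['01'] * phi((v_th - v['01']) / sigma) +   # P(01->0) * P_01
            p['11'] * phi((v_th - v['11']) / sigma))    # P(11->0) * P_11
```

With all four sequence probabilities equal to 0.25, this reduces to (5.10) with J_L = J_H = 2.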
5.3 Deconvolution and Noise

Many problems in inference and estimation have deconvolution as the noise-free optimal solution, and finding methods for succeeding with deconvolution in the presence of noise arises in a variety of applications, including communication systems, medical imaging, light microscopy and telescopy, and audio and image restoration. The subject is enormous, and we will touch on just a few aspects of the problem that will help us understand how to reduce some of the impact of intersymbol interference.

When we use deconvolution, we are trying to eliminate the effects of an LTI channel that cause intersymbol interference. An alternative view of deconvolution is to view it as just one of many strategies for estimating the transmitted samples given the noisy received samples. Diagrammatically,

    X → CHANNEL → Y_{nf} → ADD NOISE → Y → DECONVOLVER → W ≈ X    (5.16)

where X is the sequence of transmitted samples, Y_nf is the sequence of noise-free samples produced by the channel, Y is the sequence of noisy samples and is the only sequence of samples accessible to the receiver, and the sequence W is the estimate of the sequence X generated by the deconvolver.

If the LTI channel has unit sample response H, and H is used to deconvolve the noisy received samples, the resulting sequence W satisfies

    \sum_{m=0}^{n} h[m] w[n-m] = y_{nf}[n] + noise[n],    (5.17)

where the sequence Y_nf can be determined by convolving X with the unit sample response of the channel,

    \sum_{m=0}^{n} h[m] x[n-m] = y_{nf}[n].    (5.18)

If there is no noise, W = X, and we have perfect deconvolution. But we also know that deconvolution can be very sensitive to noise. Since we cannot change H, our plan will be to approximate H by H̃ when deconvolving, and to try to design the unit sample response of H̃ so that we still get effective deconvolution, but with reduced noise sensitivity.
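Because h[0] multiplies the newest unknown, equation (5.17) can be solved one sample at a time by forward substitution. The sketch below (function names are mine) implements this "plug and chug" deconvolver, together with the sufficient stability test that the rest of this section develops, namely that the first tap should dominate the sum of the remaining tap magnitudes:

```python
def deconvolve(h, y):
    """Solve sum_m h[m] w[n-m] = y[n] for w by forward substitution,
    assuming h[0] != 0 and w[n] = 0 for n < 0."""
    w = []
    for n in range(len(y)):
        acc = y[n]
        for m in range(1, min(n, len(h) - 1) + 1):
            acc -= h[m] * w[n - m]
        w.append(acc / h[0])
    return w

def first_tap_dominates(h):
    """A sufficient condition for a well-behaved deconvolver:
    sum_{m>=1} |h[m]| < |h[0]|   (condition (5.21) below)."""
    return sum(abs(c) for c in h[1:]) < abs(h[0])
```

In the noise-free case deconvolve recovers X exactly: with h = [1.0, 0.5] and y the channel output for x = [1, 0, 1, 1], it returns [1.0, 0.0, 1.0, 1.0, 0.0].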
As a first observation, if we assume h[n] = 0 for n > K, then in the usual deconvolution case W satisfies the difference equation

    h[0] w[n] + h[1] w[n-1] + \cdots + h[K] w[n-K] = y_{nf}[n] + noise[n].    (5.19)

Suppose the input to this difference equation, y_nf[n] + noise[n], is set to 0 for n > N. If the difference equation is unstable, lim_{n→∞} |w[n]| = ∞. Clearly, if the difference equation in (5.19) is unstable, then noise could easily cause w[n] to head towards plus or minus infinity. It is possible to precisely determine the stability of the difference equation by examining the roots of a (K+1)-th order polynomial whose coefficients are given by h[0], h[1], ..., h[K]. Unfortunately, such an analysis does not appear to offer much intuition into how to construct a good H̃. To try to develop more intuition, consider reorganizing the equation in a form to apply
plug and chug,

    w[n] = -\sum_{m=1}^{K} \frac{h[m]}{h[0]} w[n-m] + \frac{1.0}{h[0]} \left( y_{nf}[n] + noise[n] \right).    (5.20)

When viewed in this plug-and-chug form, one can already note that small values of h[0] are likely to be a problem. One can be a little more precise by noting that if

    \sum_{m=1}^{K} \left| \frac{h[m]}{h[0]} \right| < 1,    (5.21)

then the solution to

    h[0] w[n+N] + h[1] w[n-1+N] + \cdots + h[K] w[n-K+N] = 0    (5.22)

(the difference equation after the input has been forced to zero) will go to zero as n → ∞, regardless of the values of w[n], w[n−1], etc.

The above analysis suggests that if we are going to replace H with H̃, we should pick an H̃ so that h̃[0] is large compared to |h̃[m]|, m > 0. In addition, since the step response, s[n], is

    s[n] = \sum_{m=0}^{n} h[m],    (5.23)

preserving this sum, at least for larger values of n, would imply that in the case of transmitting a long sequence of 1 bits, the deconvolver output voltage samples and the transmitted voltage samples would both settle to the same value (in the noise-free case).

5.4 Appendix

One can be far more specific about the relationship between the noise and the estimates for x[n] generated by deconvolution with an approximation to H, denoted H̃, but the derivation is lengthy. It is presented below for the truly dedicated reader.

Consider an LTI channel with unit sample response H and a received signal polluted by additive noise. If H is used to deconvolve the noisy received samples, the resulting sequence W satisfies

    \sum_{m} h[m] w[n-m] = y_{nf}[n] + noise[n],    (5.24)

where y_nf[n] is, as usual, the noise-free output of the LTI channel, given by

    \sum_{m} h[m] x[n-m] = y_{nf}[n],    (5.25)

and X is the sequence of transmitted samples.
Subtracting (5.25) from (5.24),

    \sum_{m} h[m] \left( w[n-m] - x[n-m] \right) = noise[n].    (5.26)

Defining e[n] as the error at the n-th sample, that is, the difference between the transmitted sample x[n] and the estimated transmitted sample w[n],

    \sum_{m} h[m] e[n-m] = noise[n].    (5.27)

Reorganizing the error equation into the form for solving by plug and chug,

    e[n] = -\sum_{m=1}^{K} \frac{h[m]}{h[0]} e[n-m] + \frac{1.0}{h[0]} noise[n].    (5.28)

Then if

    \sum_{m=1}^{K} \left| \frac{h[m]}{h[0]} \right| \equiv \Gamma < 1,    (5.29)

the error at the n-th step is bounded by

    |e[n]| \le \Gamma \max_{0 \le m < n} |e[m]| + \frac{1.0}{|h[0]|} |noise[n]|.    (5.30)

Equation (5.30) means that the error in estimating x[n] is no worse than the noise in the n-th received sample, scaled by 1/|h[0]|, plus Γ < 1 times the worst-case error of the prior steps. So, the error at the zeroth step is bounded by

    |e[0]| \le \frac{1.0}{|h[0]|} |noise[0]|,    (5.31)

and the error at the first step is bounded by

    |e[1]| \le \Gamma |e[0]| + \frac{1.0}{|h[0]|} |noise[1]|,    (5.32)

or, in terms of only noise samples,

    |e[1]| \le \frac{1.0}{|h[0]|} \left( \Gamma |noise[0]| + |noise[1]| \right).    (5.33)

For step two,

    |e[2]| \le \Gamma \max\{ |e[0]|, |e[1]| \} + \frac{1.0}{|h[0]|} |noise[2]|,    (5.34)

or

    |e[2]| \le \frac{1.0}{|h[0]|} \left( \Gamma^2 |noise[0]| + \Gamma |noise[1]| + |noise[2]| \right).    (5.35)
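The step-by-step bounds above can be checked numerically: run the error recursion (5.28) on a fixed noise sequence and compare against the accumulated bound. A Python sketch (names and noise values are my own illustrative choices):

```python
def deconv_error_seq(h, noise):
    """Iterate e[n] = -sum_{m>=1}(h[m]/h[0]) e[n-m] + noise[n]/h[0]  (5.28)."""
    e = []
    for n in range(len(noise)):
        acc = noise[n] / h[0]
        for m in range(1, min(n, len(h) - 1) + 1):
            acc -= (h[m] / h[0]) * e[n - m]
        e.append(acc)
    return e

def error_bound(h, noise, n):
    """Geometric accumulation of the step bounds, as in (5.31)-(5.35)."""
    gamma = sum(abs(c) for c in h[1:]) / abs(h[0])
    return sum(gamma ** (n - m) * abs(noise[m]) for m in range(n + 1)) / abs(h[0])
```

For a two-tap h the recursion expands exactly, so the accumulated bound holds term by term; more generally, the uniform bound max|e[n]| ≤ max|noise[m]| / ((1 − Γ)|h[0]|) follows by induction from (5.30).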
At the n-th step,

    |e[n]| \le \frac{1.0}{|h[0]|} \sum_{m=0}^{n} \Gamma^{n-m} |noise[m]|,    (5.36)

or, since 0 ≤ Γ < 1,

    |e[n]| \le \frac{1.0}{(1 - \Gamma)\,|h[0]|} \max_{m} |noise[m]|,    (5.37)

which finally means that if Γ is small, then the difference between w[n] and x[n] will be no worse than the constant, 1.0/((1 − Γ)|h[0]|), times the maximum noise sample seen so far. A large value of |h[0]| will yield a small value for Γ and a small value for 1.0/|h[0]|, leading to a small value for the error.

The upshot of the above lengthy analysis is that if deconvolution is applied to a channel that happens to have a value of h[0] that is large compared to |h[m]|, m > 0, then the deconvolver will not be that sensitive to noise. However, the channel's unit sample response is not under our control, so what do we do when h[0] is small? We could try deconvolving with an approximation to the original H, denoted H̃, and hope that we can design the approximate deconvolver's unit sample response to yield effective deconvolution with reduced noise sensitivity.

If we approximate h[n] by h̃[n] when deconvolving, then W satisfies

    \sum_{m} \tilde{h}[m] w[n-m] = y_{nf}[n] + noise[n],    (5.38)

but Y_nf still satisfies

    \sum_{m} h[m] x[n-m] = y_{nf}[n].    (5.39)

Subtracting,

    \sum_{m} \tilde{h}[m] \left( w[n-m] - x[n-m] \right) + \sum_{m} \left( \tilde{h}[m] - h[m] \right) x[n-m] = noise[n],    (5.40)

or

    \sum_{m} \tilde{h}[m] e[n-m] = noise[n] - \sum_{m} \left( \tilde{h}[m] - h[m] \right) x[n-m].    (5.41)

And once again putting the error equation in the form for solving by plug and chug,

    e[n] = -\sum_{m=1}^{K} \frac{\tilde{h}[m]}{\tilde{h}[0]} e[n-m] - \frac{1.0}{\tilde{h}[0]} \sum_{m=0}^{K} \left( \tilde{h}[m] - h[m] \right) x[n-m] + \frac{1.0}{\tilde{h}[0]} noise[n].    (5.42)

If

    \sum_{m=1}^{K} \left| \frac{\tilde{h}[m]}{\tilde{h}[0]} \right| \equiv \Gamma < 1    (5.43)

and

    \sum_{m=0}^{K} \left| \tilde{h}[m] - h[m] \right| \equiv \beta,    (5.44)
where β will be small if H̃ is close to H, then if x[n] is always between 0 and 1,

    |e[n]| \le \frac{1.0}{(1 - \Gamma)\,|\tilde{h}[0]|} \left( \beta + \max_{m} |noise[m]| \right).    (5.45)

Therefore, the error in w[n], our estimate of x[n], is bounded by a scale factor multiplied by the worst-case noise plus the absolute sum of the differences between the unit sample response of the original system and the unit sample response of the approximate deconvolver.

What the above lengthy analysis tries to make a little more precise is the same observation we made based on stability. One way to make a deconvolver that is insensitive to noise, but still produces good estimates of the transmitted samples, is to try to match the original unit sample response as well as you can, but also try to make the first nonzero coefficient in the deconvolving system as large as you can. Finally, it is absolutely KEY to note that these are upper-bound results, and the actual errors in the estimates produced by a given approximate deconvolver may be far lower than these upper bounds.
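The looseness of these upper bounds is easy to see in a small end-to-end simulation: push known bits through a channel, add a fixed "noise" sequence, deconvolve with an approximate response, and compare the worst actual error to the bound of (5.45). This Python sketch uses illustrative channel taps and noise values of my own choosing:

```python
def simulate(h, h_approx, x, noise):
    """Return (worst actual error, bound 5.45) for deconvolving the noisy
    channel output with h_approx instead of h.  Assumes 0 <= x[n] <= 1."""
    n_samp = len(x)
    # channel output plus noise (x is taken as zero outside its range)
    y = [sum(h[m] * x[n - m] for m in range(len(h)) if 0 <= n - m < n_samp)
         + noise[n] for n in range(n_samp)]
    # plug-and-chug deconvolution with the approximate response
    w = []
    for n in range(n_samp):
        acc = y[n]
        for m in range(1, min(n, len(h_approx) - 1) + 1):
            acc -= h_approx[m] * w[n - m]
        w.append(acc / h_approx[0])
    # Gamma (5.43), beta (5.44), and the bound (5.45)
    gamma = sum(abs(c) for c in h_approx[1:]) / abs(h_approx[0])
    k = max(len(h), len(h_approx))
    hp = h + [0.0] * (k - len(h))
    ha = h_approx + [0.0] * (k - len(h_approx))
    beta = sum(abs(a - b) for a, b in zip(ha, hp))
    bound = (beta + max(abs(v) for v in noise)) / ((1.0 - gamma) * abs(h_approx[0]))
    return max(abs(wv - xv) for wv, xv in zip(w, x)), bound
```

With, say, h = [1.0, 0.6, 0.2] and the gentler h̃ = [1.0, 0.3, 0.1] (so Γ = 0.4 and β = 0.4), the worst actual error comes out well below the bound, illustrating the final caution above.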
This document was written and copyrighted by Paul Dawkins. Use of this document and its online version is governed by the Terms and Conditions of Use located at. The online version of this document is
More informationSolving a Series. Carmen Bruni
A Sample Series Problem Question: Does the following series converge or diverge? n=1 n 3 + 3n 2 + 1 n 5 + 14n 3 + 4n First Attempt First let s think about what this series is - maybe the terms are big
More information6.080 / Great Ideas in Theoretical Computer Science Spring 2008
MIT OpenCourseWare http://ocw.mit.edu 6.080 / 6.089 Great Ideas in Theoretical Computer Science Spring 2008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
More informationLecture 4: Proof of Shannon s theorem and an explicit code
CSE 533: Error-Correcting Codes (Autumn 006 Lecture 4: Proof of Shannon s theorem and an explicit code October 11, 006 Lecturer: Venkatesan Guruswami Scribe: Atri Rudra 1 Overview Last lecture we stated
More informationEfficient Decoding of Permutation Codes Obtained from Distance Preserving Maps
2012 IEEE International Symposium on Information Theory Proceedings Efficient Decoding of Permutation Codes Obtained from Distance Preserving Maps Yeow Meng Chee and Punarbasu Purkayastha Division of Mathematical
More informationSolving Equations by Adding and Subtracting
SECTION 2.1 Solving Equations by Adding and Subtracting 2.1 OBJECTIVES 1. Determine whether a given number is a solution for an equation 2. Use the addition property to solve equations 3. Determine whether
More informationBusiness Statistics. Lecture 9: Simple Regression
Business Statistics Lecture 9: Simple Regression 1 On to Model Building! Up to now, class was about descriptive and inferential statistics Numerical and graphical summaries of data Confidence intervals
More informationIntroduction to Cryptology Dr. Sugata Gangopadhyay Department of Computer Science and Engineering Indian Institute of Technology, Roorkee
Introduction to Cryptology Dr. Sugata Gangopadhyay Department of Computer Science and Engineering Indian Institute of Technology, Roorkee Lecture 05 Problem discussion on Affine cipher and perfect secrecy
More informationMath 2 Variable Manipulation Part 1 Algebraic Equations
Math 2 Variable Manipulation Part 1 Algebraic Equations 1 PRE ALGEBRA REVIEW OF INTEGERS (NEGATIVE NUMBERS) Numbers can be positive (+) or negative (-). If a number has no sign it usually means that it
More informationMITOCW MITRES18_006F10_26_0602_300k-mp4
MITOCW MITRES18_006F10_26_0602_300k-mp4 FEMALE VOICE: The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational
More informationLecture 6 - Random Variables and Parameterized Sample Spaces
Lecture 6 - Random Variables and Parameterized Sample Spaces 6.042 - February 25, 2003 We ve used probablity to model a variety of experiments, games, and tests. Throughout, we have tried to compute probabilities
More informationThe Completion of a Metric Space
The Completion of a Metric Space Let (X, d) be a metric space. The goal of these notes is to construct a complete metric space which contains X as a subspace and which is the smallest space with respect
More informationSequential Logic (3.1 and is a long difficult section you really should read!)
EECS 270, Fall 2014, Lecture 6 Page 1 of 8 Sequential Logic (3.1 and 3.2. 3.2 is a long difficult section you really should read!) One thing we have carefully avoided so far is feedback all of our signals
More informationMATH 115, SUMMER 2012 LECTURE 4 THURSDAY, JUNE 21ST
MATH 115, SUMMER 2012 LECTURE 4 THURSDAY, JUNE 21ST JAMES MCIVOR Today we enter Chapter 2, which is the heart of this subject. Before starting, recall that last time we saw the integers have unique factorization
More informationAppendix A. Review of Basic Mathematical Operations. 22Introduction
Appendix A Review of Basic Mathematical Operations I never did very well in math I could never seem to persuade the teacher that I hadn t meant my answers literally. Introduction Calvin Trillin Many of
More informationTaylor Series: Notes for CSCI3656
Taylor Series: Notes for CSCI3656 Liz Bradley Department of Computer Science University of Colorado Boulder, Colorado, USA 80309-0430 c 2002 lizb@cs.colorado.edu Research Report on Curricula and Teaching
More informationMITOCW MITRES_6-007S11lec09_300k.mp4
MITOCW MITRES_6-007S11lec09_300k.mp4 The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for
More informationIntroduction to Systems of Equations
Introduction to Systems of Equations Introduction A system of linear equations is a list of m linear equations in a common set of variables x, x,, x n. a, x + a, x + Ù + a,n x n = b a, x + a, x + Ù + a,n
More informationDesigning Information Devices and Systems I Fall 2018 Homework 3
Last Updated: 28-9-5 :8 EECS 6A Designing Information Devices and Systems I Fall 28 Homework 3 This homework is due September 4, 28, at 23:59. Self-grades are due September 8, 28, at 23:59. Submission
More informationTAYLOR POLYNOMIALS DARYL DEFORD
TAYLOR POLYNOMIALS DARYL DEFORD 1. Introduction We have seen in class that Taylor polynomials provide us with a valuable tool for approximating many different types of functions. However, in order to really
More information2. Polynomials. 19 points. 3/3/3/3/3/4 Clearly indicate your correctly formatted answer: this is what is to be graded. No need to justify!
1. Short Modular Arithmetic/RSA. 16 points: 3/3/3/3/4 For each question, please answer in the correct format. When an expression is asked for, it may simply be a number, or an expression involving variables
More informationElementary factoring algorithms
Math 5330 Spring 018 Elementary factoring algorithms The RSA cryptosystem is founded on the idea that, in general, factoring is hard. Where as with Fermat s Little Theorem and some related ideas, one can
More informationFinal Solutions Fri, June 8
EE178: Probabilistic Systems Analysis, Spring 2018 Final Solutions Fri, June 8 1. Small problems (62 points) (a) (8 points) Let X 1, X 2,..., X n be independent random variables uniformly distributed on
More informationAP Calculus Chapter 9: Infinite Series
AP Calculus Chapter 9: Infinite Series 9. Sequences a, a 2, a 3, a 4, a 5,... Sequence: A function whose domain is the set of positive integers n = 2 3 4 a n = a a 2 a 3 a 4 terms of the sequence Begin
More informationInstructor (Brad Osgood)
TheFourierTransformAndItsApplications-Lecture17 Instructor (Brad Osgood):Is the screen fading and yes, Happy Halloween to everybody. Only one noble soul here came dressed as a Viking. All right. All right.
More informationECE 564/645 - Digital Communications, Spring 2018 Midterm Exam #1 March 22nd, 7:00-9:00pm Marston 220
ECE 564/645 - Digital Communications, Spring 08 Midterm Exam # March nd, 7:00-9:00pm Marston 0 Overview The exam consists of four problems for 0 points (ECE 564) or 5 points (ECE 645). The points for each
More informationNo Solution Equations Let s look at the following equation: 2 +3=2 +7
5.4 Solving Equations with Infinite or No Solutions So far we have looked at equations where there is exactly one solution. It is possible to have more than solution in other types of equations that are
More informationDesigning Information Devices and Systems I Fall 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way
EECS 16A Designing Information Devices and Systems I Fall 018 Lecture Notes Note 1 1.1 Introduction to Linear Algebra the EECS Way In this note, we will teach the basics of linear algebra and relate it
More informationGaussian Quiz. Preamble to The Humble Gaussian Distribution. David MacKay 1
Preamble to The Humble Gaussian Distribution. David MacKay Gaussian Quiz H y y y 3. Assuming that the variables y, y, y 3 in this belief network have a joint Gaussian distribution, which of the following
More informationirst we need to know that there are many ways to indicate multiplication; for example the product of 5 and 7 can be written in a variety of ways:
CH 2 VARIABLES INTRODUCTION F irst we need to know that there are many ways to indicate multiplication; for example the product of 5 and 7 can be written in a variety of ways: 5 7 5 7 5(7) (5)7 (5)(7)
More informationPredicting BER to Very Low Probabilities
Predicting BER to Very Low Probabilities IBIS Summit, DAC, Anaheim, CA June 15, 2010 0.25 0.2 0.15 0.1 0.05 0-0.05-0.1 Arpad Muranyi arpad_muranyi@mentor.com Vladimir Dmitriev-Zdorov -0.15-0.2-0.25-0.5-0.4-0.3-0.2-0.1
More informationCircuits for Analog System Design Prof. Gunashekaran M K Center for Electronics Design and Technology Indian Institute of Science, Bangalore
Circuits for Analog System Design Prof. Gunashekaran M K Center for Electronics Design and Technology Indian Institute of Science, Bangalore Lecture No. # 08 Temperature Indicator Design Using Op-amp Today,
More informationLab 0 Appendix C L0-1 APPENDIX C ACCURACY OF MEASUREMENTS AND TREATMENT OF EXPERIMENTAL UNCERTAINTY
Lab 0 Appendix C L0-1 APPENDIX C ACCURACY OF MEASUREMENTS AND TREATMENT OF EXPERIMENTAL UNCERTAINTY A measurement whose accuracy is unknown has no use whatever. It is therefore necessary to know how to
More informationLecture 6 I. CHANNEL CODING. X n (m) P Y X
6- Introduction to Information Theory Lecture 6 Lecturer: Haim Permuter Scribe: Yoav Eisenberg and Yakov Miron I. CHANNEL CODING We consider the following channel coding problem: m = {,2,..,2 nr} Encoder
More informationSpecial Theory of Relativity Prof. Shiva Prasad Department of Physics Indian Institute of Technology, Bombay. Lecture - 15 Momentum Energy Four Vector
Special Theory of Relativity Prof. Shiva Prasad Department of Physics Indian Institute of Technology, Bombay Lecture - 15 Momentum Energy Four Vector We had started discussing the concept of four vectors.
More informationSolving Polynomial and Rational Inequalities Algebraically. Approximating Solutions to Inequalities Graphically
10 Inequalities Concepts: Equivalent Inequalities Solving Polynomial and Rational Inequalities Algebraically Approximating Solutions to Inequalities Graphically (Section 4.6) 10.1 Equivalent Inequalities
More informationDesigning Information Devices and Systems I Spring 2019 Homework 7
Last Updated: 2019-03-16 22:56 1 EECS 16A Designing Information Devices and Systems I Spring 2019 Homework 7 This homework is due March 15, 2019 at 23:59. Self-grades are due March 19, 2019, at 23:59.
More information1 Maintaining a Dictionary
15-451/651: Design & Analysis of Algorithms February 1, 2016 Lecture #7: Hashing last changed: January 29, 2016 Hashing is a great practical tool, with an interesting and subtle theory too. In addition
More informationPartial Fractions. June 27, In this section, we will learn to integrate another class of functions: the rational functions.
Partial Fractions June 7, 04 In this section, we will learn to integrate another class of functions: the rational functions. Definition. A rational function is a fraction of two polynomials. For example,
More informationPower System Analysis Prof. A. K. Sinha Department of Electrical Engineering Indian Institute of Technology, Kharagpur. Lecture - 21 Power Flow VI
Power System Analysis Prof. A. K. Sinha Department of Electrical Engineering Indian Institute of Technology, Kharagpur Lecture - 21 Power Flow VI (Refer Slide Time: 00:57) Welcome to lesson 21. In this
More informationConditional densities, mass functions, and expectations
Conditional densities, mass functions, and expectations Jason Swanson April 22, 27 1 Discrete random variables Suppose that X is a discrete random variable with range {x 1, x 2, x 3,...}, and that Y is
More informationNote that we are looking at the true mean, μ, not y. The problem for us is that we need to find the endpoints of our interval (a, b).
Confidence Intervals 1) What are confidence intervals? Simply, an interval for which we have a certain confidence. For example, we are 90% certain that an interval contains the true value of something
More informationCIS 2033 Lecture 5, Fall
CIS 2033 Lecture 5, Fall 2016 1 Instructor: David Dobor September 13, 2016 1 Supplemental reading from Dekking s textbook: Chapter2, 3. We mentioned at the beginning of this class that calculus was a prerequisite
More informationCS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares
CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search
More informationDiscrete Mathematics and Probability Theory Spring 2016 Rao and Walrand Discussion 6A Solution
CS 70 Discrete Mathematics and Probability Theory Spring 2016 Rao and Walrand Discussion 6A Solution 1. Polynomial intersections Find (and prove) an upper-bound on the number of times two distinct degree
More information