
NH-67, TRICHY MAIN ROAD, PULIYUR, C.F. - 639114, KARUR DT.
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

COURSE MATERIAL
Subject Name: Communication Theory
Subject Code: EC2252
Class/Sem: BE (ECE)/IV
Staff Name: Devilatha D

UNIT III - NOISE THEORY

Syllabus: Review of probability, random variables and random processes; Gaussian process; noise - shot noise, thermal noise and white noise; narrowband noise; noise temperature; noise figure.

Overview: The theory of probability is used to predict the occurrence of noise in the future behaviour of a random process. The common sources of electrical noise affecting communication systems are usually white in nature, i.e. they have a flat power spectrum, and the majority of noise has a Gaussian probability distribution.

Objectives:
1. To discuss the probability distribution functions.
2. To know the classification of random processes.
3. To discuss the different types of noise.
4. To know about noise temperature and noise figure.

Types of Noise

Most man-made electromagnetic noise occurs at frequencies below 500 MHz. The most significant sources include:

1. Hydro lines
2. Ignition systems
3. Fluorescent lights
4. Electric motors

This is why deep-space receiving networks are placed out in the desert, far from these sources of interference. There is also a wide range of natural noise sources which cannot be so easily avoided, namely:

1. Atmospheric noise - lightning; significant below about 20 MHz.
2. Solar noise - the sun, with its 11-year sunspot cycle.
3. Cosmic noise - roughly 8 MHz to 1.5 GHz.
4. Thermal or Johnson noise - due to free electrons striking vibrating ions.
5. White noise - noise with a constant spectral density over a specified range of frequencies. Johnson noise is an example of white noise.
6. Gaussian noise - completely random in nature; however, the probability of any particular amplitude value follows the normal distribution curve. Johnson noise is Gaussian in nature.
7. Shot noise - occurs in semiconductor junctions such as bipolar transistors; it depends on the electron charge q = 1.6 x 10^-19 coulombs (see Section 6).
8. Excess noise (flicker, 1/f, or pink noise) - significant below about 1 kHz; inversely proportional to frequency and directly proportional to temperature and DC current.
9. Transit-time noise - occurs when the electron transit time across a junction is comparable to the period of the signal.

The thermal noise power is given by:

P_n = kTB

where k = Boltzmann's constant (1.38 x 10^-23 J/K), T = temperature in kelvins, and B = bandwidth in Hz. Strictly, this equation applies to copper wire-wound resistors, but it is close enough to be used for all resistors. Maximum power transfer occurs when the source and load impedance are equal.
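As a quick numerical illustration of P_n = kTB (a minimal sketch; the temperature and bandwidth values are arbitrary choices, not taken from the text):

```python
# Thermal noise power P_n = k*T*B -- a quick numerical check.
k = 1.38e-23  # Boltzmann's constant, J/K

def thermal_noise_power(T_kelvin, bandwidth_hz):
    """Available thermal noise power in watts."""
    return k * T_kelvin * bandwidth_hz

# Example: 290 K source, 1 MHz bandwidth (assumed values).
P_n = thermal_noise_power(290, 1e6)
print(f"P_n = {P_n:.3e} W")  # ~4.0e-15 W
```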

1. RANDOM VARIABLES

In probability and statistics, a random variable or stochastic variable is a variable whose value is not known in advance. Its possible values might represent the possible outcomes of a yet-to-be-performed experiment, or the potential values of a quantity whose already-existing value is uncertain (e.g., as a result of incomplete information or imprecise measurements). Intuitively, a random variable can be thought of as a quantity whose value is not fixed but which can take on different values; a probability distribution is used to describe the probabilities of the different values occurring. Realizations of a random variable are called random variates.

Random variables are usually real-valued, but one can consider arbitrary types such as boolean values, complex numbers, vectors, matrices, sequences, trees, sets, shapes, manifolds, functions, and processes. The term random element is used to encompass all such related concepts. A related concept is the stochastic process, a set of indexed random variables (typically indexed by time or space).

1.1 Examples

The possible outcomes for one coin toss can be described by the state space Ω = {heads, tails}. We can introduce a real-valued random variable Y as follows: Y = 1 if the coin lands heads and Y = 0 otherwise. If the coin is equally likely to land on either side, Y has the probability mass function

f_Y(y) = 1/2 for y = 0 and y = 1.

A random variable can also be used to describe the process of rolling a die and the possible outcomes. The most obvious representation is to take the set Ω = {1, 2, 3, 4, 5, 6} as the state space, defining the random variable X as the number rolled. In this case,

f_X(x) = 1/6 for x = 1, 2, ..., 6.

An example of a continuous random variable would be one based on a spinner that can choose a horizontal direction. The values taken by the random variable are then directions. We could represent these directions by North West, East South East, and so on. However, it is commonly more convenient to map the sample space to a random variable which takes real-number values. This can be done, for example, by mapping a direction to a bearing in degrees clockwise from North. The random variable then takes values in the interval [0, 360), with all parts of the range being "equally likely". In this case, X = the angle spun. Any individual real number has probability zero of being selected, but a positive probability can be assigned to any range of values. For example, the probability of choosing a number in [0, 180] is 1/2. Instead of speaking of a probability mass function, we say that the probability density of X is 1/360. The probability of a subset of [0, 360) can be calculated by multiplying the measure of the set by 1/360. In general, the probability of a set for a given continuous random variable can be calculated by integrating the density over the given set.

An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, X = -1; otherwise X = the value of the spinner, as in the preceding example. There is a probability of 1/2 that this random variable will have the value -1, and any other range of values has half the probability it had in the last example.
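The coin-plus-spinner construction above is easy to simulate; a minimal sketch (the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_rv(n):
    """Flip a fair coin; on tails X = -1, on heads X = the spinner angle in [0, 360)."""
    heads = rng.random(n) < 0.5
    spinner = rng.uniform(0.0, 360.0, size=n)
    return np.where(heads, spinner, -1.0)

x = mixed_rv(100_000)
print("P(X = -1)       ~", np.mean(x == -1.0))               # ~0.5
print("P(0 <= X < 180) ~", np.mean((x >= 0) & (x < 180)))    # ~0.25, half of the spinner-only case
```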

1.2 Functions of random variables

If we have a random variable X on Ω and a Borel measurable function g: R -> R, then Y = g(X) will also be a random variable on Ω, since the composition of measurable functions is measurable. (However, this is not necessarily true if g is merely Lebesgue measurable.) The same procedure that allowed one to go from a probability space to the distribution of X can be used to obtain the distribution of Y. The cumulative distribution function of Y is

F_Y(y) = P(g(X) <= y).

If the function g is invertible, i.e. g^-1 exists, and is increasing, then the previous relation can be extended to obtain

F_Y(y) = F_X(g^-1(y)),

and, again with the same hypothesis of invertibility of g, assuming also differentiability, we can find the relation between the probability density functions by differentiating both sides with respect to y:

f_Y(y) = f_X(g^-1(y)) |d g^-1(y) / dy|.

If g is not invertible but each y admits at most a countable number of roots (i.e. a finite, or countably infinite, number of x_i such that y = g(x_i)), then the previous relation between the probability density functions can be generalized to

f_Y(y) = Σ_i f_X(x_i) |d g_i^-1(y) / dy|, where x_i = g_i^-1(y).

The formulas for the densities do not demand g to be increasing.
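A small numerical check of the change-of-variables formula, using the invertible, increasing map g(x) = e^x on a standard normal X (so Y is lognormal); the sample size and evaluation grid are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# X ~ N(0,1); Y = g(X) = exp(X), so g^-1(y) = ln(y) and |d g^-1/dy| = 1/y.
x = rng.standard_normal(200_000)
y = np.exp(x)

def f_X(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def f_Y(y):
    # f_Y(y) = f_X(g^-1(y)) * |d g^-1(y)/dy|
    return f_X(np.log(y)) / y

# Compare the analytic density with a histogram estimate at a few points.
hist, edges = np.histogram(y, bins=200, range=(0.01, 5.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for c, h in zip(centers[::50], hist[::50]):
    print(f"y={c:5.2f}  histogram={h:.3f}  formula={f_Y(c):.3f}")
```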

2. RANDOM PROCESS

In probability theory, a stochastic process, or sometimes random process, is the counterpart to a deterministic process (or deterministic system). Instead of dealing with only one possible way the process might evolve over time (as is the case, for example, for solutions of an ordinary differential equation), in a stochastic or random process there is some indeterminacy in its future evolution, described by probability distributions. This means that even if the initial condition (or starting point) is known, there are many paths the process might take, some more probable than others.

In the simplest possible case (discrete time), a stochastic process amounts to a sequence of random variables known as a time series (for example, see Markov chain); a random-walk sketch of this case follows this subsection. Another basic type of stochastic process is a random field, whose domain is a region of space; in other words, a random function whose arguments are drawn from a range of continuously changing values. One approach to stochastic processes treats them as functions of one or several deterministic arguments (inputs, in most cases regarded as time) whose values (outputs) are random variables: non-deterministic (single) quantities which have certain probability distributions. Random variables corresponding to various times (or points, in the case of random fields) may be completely different. The main requirement is that these different random quantities all take values in the same space. Although the random values of a stochastic process at different times may be independent random variables, in most commonly considered situations they exhibit complicated statistical correlations.

Familiar examples of processes modeled as stochastic time series include stock market and exchange rate fluctuations; signals such as speech, audio and video; medical data such as a patient's EKG, EEG, blood pressure or temperature; and random movement such as Brownian motion or random walks. Examples of random fields include static images, random terrain (landscapes), and composition variations of a heterogeneous material.

2.1 Formal definition and basic properties

2.1.1 Definition

Given a probability space, a stochastic process (or random process) with state space X is a collection of X-valued random variables indexed by a set T ("time"). That is, a stochastic process F is a collection {F_t : t ∈ T} where each F_t is an X-valued random variable.

A modification G of the process F is a stochastic process on the same state space, with the same parameter set T, such that P(F_t = G_t) = 1 for all t ∈ T. A modification G is indistinguishable from F if P(F_t = G_t for all t ∈ T) = 1.
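The random-walk sketch promised above: the simplest discrete-time stochastic process, a sequence of partial sums of i.i.d. steps (step distribution and path length are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random walk: S_0 = 0, S_n = S_{n-1} + Z_n with i.i.d. +/-1 steps Z_n.
n_steps = 1000
steps = rng.choice([-1, 1], size=n_steps)
path = np.concatenate(([0], np.cumsum(steps)))

print("final value S_n:", path[-1])
print("E[S_n] = 0, Var[S_n] = n =", n_steps)  # each realization is one sample path
```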

2.2 Finite-dimensional distributions

Let F be an X-valued stochastic process. For every finite subset T' = {t_1, ..., t_k} of T, we may consider the restriction F_T' = (F_{t_1}, ..., F_{t_k}), which is a random variable taking values in X^k. The distribution of this random variable is a probability measure on X^k. Such distributions are called the finite-dimensional distributions of F. Under suitable topological restrictions, a suitably "consistent" collection of finite-dimensional distributions can be used to define a stochastic process (see the Kolmogorov extension theorem).

3. GAUSSIAN PROCESS

In probability theory and statistics, a Gaussian process is a stochastic process whose realisations consist of random values associated with every point in a range of times (or of space) such that each such random variable has a normal distribution. Moreover, every finite collection of those random variables has a multivariate normal distribution.

Gaussian processes are important in statistical modelling because of properties inherited from the normal distribution. For example, if a random process is modelled as a Gaussian process, the distributions of various derived quantities can be obtained explicitly. Such quantities include the average value of the process over a range of times, and the error in estimating the average using sample values at a small set of times.

A Gaussian process is a stochastic process {X_t ; t ∈ T} for which any finite linear combination of samples is normally distributed (or, more generally, any linear functional applied to the sample function X_t gives a normally distributed result).

3.1 History

The concept is named after Carl Friedrich Gauss simply because the normal distribution is sometimes called the Gaussian distribution, although Gauss was not the first to study that distribution: see the history of the normal/Gaussian distribution.

3.2 Alternative definitions

Alternatively, a process is Gaussian if and only if, for every finite set of indices t_1, ..., t_k in the index set T, the vector (X_{t_1}, ..., X_{t_k}) is a vector-valued Gaussian random variable.

Using characteristic functions of random variables, the Gaussian property can also be formulated as follows: {X_t ; t ∈ T} is Gaussian if and only if, for every finite set of indices t_1, ..., t_k, there are reals σ_lj with σ_jj > 0 and reals µ_j such that

E[ exp( i Σ_l θ_l X_{t_l} ) ] = exp( -(1/2) Σ_{l,j} σ_lj θ_l θ_j + i Σ_j µ_j θ_j ).

The numbers σ_lj and µ_j can be shown to be the covariances and means of the variables in the process.

3.3 Important Gaussian processes

The Wiener process is perhaps the most widely studied Gaussian process. It is not stationary, but it has stationary increments. The Ornstein-Uhlenbeck process is a stationary Gaussian process. The Brownian bridge is a Gaussian process whose increments are not independent. Fractional Brownian motion is a Gaussian process whose covariance function is a generalisation of that of the Wiener process.

3.4 Applications

A Gaussian process can be used as a prior probability distribution over functions in Bayesian inference. (Given any set of N points in the desired domain of your functions, take a multivariate Gaussian whose covariance matrix parameter is the Gram matrix of your N points with some desired kernel, and sample from that Gaussian.) Inference of continuous values with a Gaussian process prior is known as Gaussian process regression, or kriging. Gaussian processes are also useful as a powerful non-linear interpolation tool.
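The recipe in Section 3.4 (Gram matrix of N points plus a kernel, then sample a multivariate Gaussian) translates directly into code. A minimal sketch; the squared-exponential kernel, its length scale, and the domain are assumed choices, not specified in the text:

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf_kernel(t1, t2, length_scale=0.5):
    """Squared-exponential covariance k(t1, t2) = exp(-(t1-t2)^2 / (2*l^2))."""
    d = t1[:, None] - t2[None, :]
    return np.exp(-d**2 / (2 * length_scale**2))

# N points in the desired domain; the Gram matrix is the covariance.
t = np.linspace(0.0, 5.0, 100)
K = rbf_kernel(t, t) + 1e-9 * np.eye(len(t))  # small jitter for numerical stability

# One realization of the zero-mean Gaussian process prior.
sample = rng.multivariate_normal(np.zeros(len(t)), K)
print(sample[:5])
```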

considered "white noise" if it is observed to have a flat spectrum over a medium's widest possible bandwidth. 4.1 White noise in a spatial context While it is usually applied in the context of frequency domain signals, the term white noise is also commonly applied to a noise signal in the spatial domain. In this case, it has an auto correlation which can be represented by a delta function over the relevant space dimensions. The signal is then "white" in the spatial frequency domain (this is equally true for signals in the angular frequency domain, e.g., the distribution of a signal across all angles in the night sky). 4.2 Statistical properties An example realization of a Gaussian white noise process. The image to the right displays a finite length, discrete time realization of a white noise process generated from a computer. Being uncorrelated in time does not restrict the values a signal can take. Any distribution of values is possible (although it must have zero DC component). Even a binary signal which can only take on the values 1 or -1 will be white if the sequence is statistically uncorrelated. Noise having a continuous distribution, such as a normal distribution, can of course be white. It is often incorrectly assumed that Gaussian noise (i.e., noise with a Gaussian amplitude distribution see normal distribution) is necessarily white noise, yet neither property implies the other. Gaussianity refers to the probability distribution with respect to the value i.e. the probability that the signal has a certain given value, while the term 'white' refers to the way the signal power is distributed over time or among frequencies.

Figure 4.2: FFT spectrogram of pink noise (left) and white noise (right), shown with a linear frequency axis (vertical).

We can therefore have Gaussian white noise, but also Poisson, Cauchy, and other white noises. Thus, the two words "Gaussian" and "white" are often both specified in mathematical models of systems. Gaussian white noise is a good approximation of many real-world situations and generates mathematically tractable models. These models are used so frequently that the term additive white Gaussian noise has a standard abbreviation: AWGN. Gaussian white noise has the useful statistical property that its values are independent (see statistical independence). White noise is the generalized mean-square derivative of the Wiener process, or Brownian motion.

4.3 Mathematical definition

4.3.1 White random vector

A random vector w is a white random vector if and only if its mean vector and autocorrelation matrix satisfy

µ_w = E{w} = 0,
R_ww = E{w w^T} = σ^2 I.

That is, it is a zero-mean random vector, and its autocorrelation matrix is a multiple of the identity matrix. When the autocorrelation matrix is a multiple of the identity, we say that it has spherical correlation.

4.3.2 White random process (white noise)

A continuous-time random process w(t) is a white noise process if and only if its mean function and autocorrelation function satisfy

µ_w(t) = E{w(t)} = 0,
R_ww(t_1, t_2) = E{w(t_1) w(t_2)} = (N_0 / 2) δ(t_1 - t_2),

i.e. it is a zero-mean process for all time and has infinite power at zero time shift, since its autocorrelation function is the Dirac delta function. Because the Fourier transform of the delta function is equal to 1, the above autocorrelation function implies the power spectral density

S_ww(f) = N_0 / 2.

Since this power spectral density is the same at all frequencies, we call the process white, as an analogy to the frequency spectrum of white light. A generalization to random elements on infinite-dimensional spaces, such as random fields, is the white noise measure.

4.3.3 Random vector transformations

Two theoretical applications of a white random vector are the simulation and the whitening of another arbitrary random vector. To simulate an arbitrary random vector, we transform a white random vector with a carefully chosen matrix; the transformation matrix is chosen so that the mean and covariance matrix of the transformed white random vector match the mean and covariance matrix of the arbitrary random vector being simulated. To whiten an arbitrary random vector, we transform it by a different carefully chosen matrix so that the output random vector is a white random vector. These two ideas are crucial in applications such as channel estimation and channel equalization in communications and audio, and the same concepts are used in data compression.

4.3.4 Simulating a random vector

Suppose that a random vector x has covariance matrix K_xx. Since this matrix is Hermitian symmetric and positive semidefinite, by the spectral theorem from linear algebra we can diagonalize or factor the matrix as

K_xx = E Λ E^T,

where E is the orthogonal matrix of eigenvectors and Λ is the diagonal matrix of eigenvalues. We can simulate the 1st and 2nd moment properties of a random vector with mean µ and covariance matrix K_xx via the following transformation of a white vector of unit variance:

x = E Λ^(1/2) w + µ,

where w is a white random vector with E{w} = 0 and E{w w^T} = I. Thus, the output of this transformation has expectation E{x} = µ and covariance matrix

E{(x - µ)(x - µ)^T} = E Λ^(1/2) E{w w^T} Λ^(1/2) E^T = K_xx.

4.3.5 Whitening a random vector

The method for whitening a vector x with mean µ and covariance matrix K_xx is to perform the following calculation:

w = Λ^(-1/2) E^T (x - µ).

Thus, the output of this transformation has expectation E{w} = 0 and covariance matrix

E{w w^T} = Λ^(-1/2) E^T K_xx E Λ^(-1/2).

By diagonalizing K_xx = E Λ E^T, this reduces to

Λ^(-1/2) E^T (E Λ E^T) E Λ^(-1/2) = Λ^(-1/2) Λ Λ^(-1/2) = I.

Thus, with the above transformation, we can whiten the random vector to have zero mean and the identity covariance matrix.
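Sections 4.3.4-4.3.5 translate almost line for line into code. A sketch with an arbitrary (assumed) 3x3 covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(4)

# An arbitrary symmetric positive definite covariance K_xx (assumed example;
# the small diagonal term keeps the eigenvalues away from zero).
A = rng.standard_normal((3, 3))
K_xx = A @ A.T + 0.1 * np.eye(3)
mu = np.array([1.0, -2.0, 0.5])

# Diagonalize: K_xx = E diag(lam) E^T (spectral theorem).
lam, E = np.linalg.eigh(K_xx)

# Simulate: color a white vector w (zero mean, identity covariance).
n = 100_000
w = rng.standard_normal((3, n))
x = E @ (np.sqrt(lam)[:, None] * w) + mu[:, None]
print("sample covariance ~ K_xx:\n", np.cov(x))

# Whiten: invert the transformation on the simulated data.
w_hat = (1.0 / np.sqrt(lam))[:, None] * (E.T @ (x - mu[:, None]))
print("whitened covariance ~ I:\n", np.cov(w_hat))
```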

5. THERMAL NOISE

Johnson-Nyquist noise (thermal noise, Johnson noise, or Nyquist noise) is the electronic noise generated by the thermal agitation of the charge carriers (usually the electrons) inside an electrical conductor at equilibrium, which happens regardless of any applied voltage. Thermal noise is approximately white, meaning that its power spectral density is nearly equal throughout the frequency spectrum (however, see the section below on extremely high frequencies). Additionally, the amplitude of the signal has very nearly a Gaussian probability density function.

5.1 Noise voltage and power

Thermal noise is distinct from shot noise, which consists of additional current fluctuations that occur when a voltage is applied and a macroscopic current starts to flow. For the general case, the above definition applies to charge carriers in any type of conducting medium (e.g. ions in an electrolyte), not just resistors. Thermal noise can be modeled by a voltage source representing the noise of the non-ideal resistor in series with an ideal noise-free resistor.

The power spectral density, or voltage variance (mean square) per hertz of bandwidth, is given by

v_n^2 = 4 k_B T R,

where k_B is Boltzmann's constant in joules per kelvin, T is the resistor's absolute temperature in kelvins, and R is the resistor value in ohms (Ω). For quick calculations at room temperature this works out to

v_n ≈ 0.13 sqrt(R) nV/sqrt(Hz), with R in ohms.

For example, a 1 kΩ resistor at a temperature of 300 K has v_n ≈ 4 nV/sqrt(Hz). For a given bandwidth, the root mean square (RMS) of the voltage, v_n, is given by

v_n = sqrt(4 k_B T R Δf),

where Δf is the bandwidth in hertz over which the noise is measured. For a 1 kΩ resistor at room temperature and a 10 kHz bandwidth, the RMS noise voltage is about 400 nV.

A useful rule of thumb to remember is that 50 Ω in a 1 Hz bandwidth corresponds to about 1 nV of noise at room temperature.

A resistor in a short circuit dissipates a noise power of

P = v_n^2 / R = 4 k_B T Δf.

The noise generated at the resistor can transfer to the remaining circuit; the maximum noise power transfer happens with impedance matching, when the Thevenin equivalent resistance of the remaining circuit is equal to the noise-generating resistance. In this case each of the two participating resistors dissipates noise both in itself and in the other resistor. Since only half of the source voltage drops across any one of these resistors, the resulting noise power is given by

P = k_B T Δf,

where P is the thermal noise power in watts. Notice that this is independent of the noise-generating resistance.

5.2 Noise current

The noise source can also be modeled by a current source in parallel with the resistor, by taking the Norton equivalent, which corresponds simply to dividing by R. This gives the root mean square value of the current source as

i_n = sqrt(4 k_B T Δf / R).

Thermal noise is intrinsic to all resistors and is not a sign of poor design or manufacture, although resistors may also have excess noise.

5.3 Noise power in decibels

Signal power is often measured in dBm (decibels relative to 1 milliwatt, assuming a 50 ohm load). From the equation above, noise power in a resistor at room temperature, in dBm, is

P_dBm = 10 log10(k_B T Δf × 1000),

where the factor of 1000 is present because the power is given in milliwatts rather than watts. Separating the constant parts from the bandwidth, this is more commonly seen approximated as

P_dBm ≈ -174 + 10 log10(Δf),

with Δf in hertz.
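The voltage, current, and dBm formulas above gathered into one small calculator; the resistor, temperature, and bandwidth values are chosen to match the worked examples (a sketch):

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_voltage_rms(R, T, df):
    """RMS thermal noise voltage: v_n = sqrt(4 k_B T R df)."""
    return np.sqrt(4 * k_B * T * R * df)

def johnson_current_rms(R, T, df):
    """RMS thermal noise current (Norton equivalent): i_n = sqrt(4 k_B T df / R)."""
    return np.sqrt(4 * k_B * T * df / R)

def noise_power_dbm(T, df):
    """Available noise power k_B*T*df expressed in dBm."""
    return 10 * np.log10(k_B * T * df * 1000)

print(johnson_voltage_rms(1e3, 300, 10e3))  # ~4.1e-7 V, the 400 nV example
print(johnson_current_rms(1e3, 300, 10e3))  # ~4.1e-10 A
print(noise_power_dbm(300, 1.0))            # ~ -174 dBm in a 1 Hz bandwidth
```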

Noise power at different bandwidths is then simple to calculate: for example, a 1 Hz bandwidth gives -174 dBm, a 1 kHz bandwidth -144 dBm, and a 1 MHz bandwidth -114 dBm.

5.4 Thermal noise on capacitors

Thermal noise on capacitors is referred to as kTC noise. Thermal noise in an RC circuit has an unusually simple expression, as the value of the resistance (R) drops out of the equation: higher R contributes more filtering as well as more noise. The noise bandwidth of the RC circuit is 1/(4RC), which can be substituted into the formula above to eliminate R. The mean-square and RMS noise voltages generated in such a filter are

v_n^2 = k_B T / C,   v_n = sqrt(k_B T / C).

Thermal noise accounts for 100% of kTC noise, whether it is attributed to the resistance or to the capacitance. In the extreme case of the reset noise left on a capacitor by opening an ideal switch, the resistance is infinite, yet the formula still applies; however, the RMS must now be interpreted not as a time average, but as an average over many such reset events, since the voltage is constant when the bandwidth is zero. In this sense, the Johnson noise of an RC circuit can be seen to be inherent, an effect of the thermodynamic distribution of the amount of charge on the capacitor, even without the involvement of a resistor.

The noise is not caused by the capacitor itself, but by the thermodynamic equilibrium of the amount of charge on the capacitor. Once the capacitor is disconnected from a conducting circuit, the thermodynamic fluctuation is frozen at a random value with the standard deviation given above. The reset noise of capacitive sensors is often a limiting noise source, for example in image sensors. As an alternative to the voltage noise, the reset noise on the capacitor can also be quantified as an electrical charge standard deviation,

Q_n = sqrt(k_B T C).

Since the charge variance is k_B T C, this noise is often called kTC noise.
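A quick numerical check of the kT/C expressions, reproducing entries of the capacitor-noise table that follows (300 K assumed, as in that table):

```python
import numpy as np

k_B = 1.380649e-23  # J/K
q = 1.602e-19       # elementary charge, C
T = 300.0           # kelvins, as in the table below

for C in [1e-15, 1e-12, 1e-9]:            # 1 fF, 1 pF, 1 nF
    v_rms = np.sqrt(k_B * T / C)          # RMS voltage on the capacitor
    electrons = np.sqrt(k_B * T * C) / q  # charge standard deviation in electrons
    print(f"C = {C:.0e} F: v_n = {v_rms:.2e} V, ~{electrons:.0f} e-")
```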

Any system in thermal equilibrium has state variables with a mean energy of kT/2 per degree of freedom. Using the formula for the energy on a capacitor (E = (1/2) C V^2), the mean noise energy on a capacitor can be seen to also be (1/2) C (kT/C) = kT/2. Thermal noise on a capacitor can be derived from this relationship, without consideration of resistance. The kTC noise is the dominant noise source at small capacitances.

Noise of capacitors at 300 K:

Capacitance    Noise voltage    Electrons
1 fF           2 mV             12.5 e-
10 fF          640 µV           40 e-
100 fF         200 µV           125 e-
1 pF           64 µV            400 e-
10 pF          20 µV            1250 e-
100 pF         6.4 µV           4000 e-
1 nF           2 µV             12500 e-

5.5 Noise at very high frequencies

The above equations are good approximations at any practical radio frequency in use (i.e. frequencies below about 80 gigahertz). In the most general case, which includes optical frequencies, the power spectral density of the voltage across the resistor R, in V^2/Hz, is given by

S(f) = 4 R h f / (e^(h f / (k_B T)) - 1),

where f is the frequency, h is Planck's constant, k_B is the Boltzmann constant and T is the temperature in kelvins. If the frequency is low enough, meaning

h f << k_B T

(this assumption is valid up to a few terahertz at room temperature), then the exponential can be expanded in a Taylor series and the relationship becomes

S(f) ≈ 4 R k_B T.

In general, both R and T depend on frequency. To find the total noise it is enough to integrate over the whole bandwidth. Since the signal is real, it is possible to integrate over only the positive frequencies and then multiply by 2. Assuming that R and T are constant over the bandwidth Δf, the root mean square (RMS) value of the voltage across a resistor due to thermal noise is

v_n = sqrt(4 k_B T R Δf),

that is, the same formula as above.

6. SHOT NOISE

Shot noise is a type of electronic noise that occurs when the finite number of particles that carry energy (such as electrons in an electronic circuit or photons in an optical device) is small enough to give rise to detectable statistical fluctuations in a measurement. It is important in electronics, telecommunications, and fundamental physics. The term also refers to an analogous noise in particle simulations, where, due to the small number of particles, the simulation exhibits detectable statistical fluctuations not observed in the real-world system.

The magnitude of this noise increases with the average magnitude of the current or the intensity of the light. However, since the magnitude of the average signal increases more rapidly than that of the shot noise (its relative strength decreases with increasing signal), shot noise is often only a problem with small currents or light intensities.

The intensity of a source determines the average number of photons collected, but knowing the average number of photons which will be collected does not give the actual number collected. The actual number collected will be more than, equal to, or less than the average, and its distribution about that average will be a Poisson distribution. Since the Poisson distribution approaches a normal distribution for large numbers, the photon noise in a signal will approach a normal distribution for large numbers of photons collected. The standard deviation of the photon noise is equal to the square root of the average number of photons. The signal-to-noise ratio is then

SNR = N / sqrt(N) = sqrt(N),

where N is the average number of photons collected. When N is very large, the signal-to-noise ratio is very large as well. It can be seen that photon noise becomes more important when the number of photons collected is small.

Shot noise in electronic devices consists of random fluctuations of the electric current in many electrical conductors, due to the current being carried by discrete charges (electrons) whose number per unit time fluctuates. This is often an issue in p-n junctions. In metal wires it is not an issue, as correlations between individual electrons remove these random fluctuations.

Shot noise is distinct from current fluctuations in thermal equilibrium, which happen without any applied voltage and without any average current flowing. These thermal equilibrium current fluctuations are known as Johnson-Nyquist noise or thermal noise.

Shot noise is a Poisson process, and the charge carriers which make up the current follow a Poisson distribution. The current fluctuations have a standard deviation of

σ_i = sqrt(2 q I Δf),

where q is the elementary charge, Δf is the bandwidth in hertz over which the noise is measured, and I is the average current through the device. All quantities are assumed to be in SI units. For a current of 100 mA this gives a value of about 0.18 nA if the noise current is filtered with a filter having a bandwidth of 1 Hz. If this noise current is fed through a resistor R, the resulting noise power will be

P = σ_i^2 R = 2 q I Δf R.

If the charge is not fully localized in time but has a temporal distribution q F(t), where the integral of F(t) over t is unity, then the power spectral density of the noise current signal will be

S(f) = 2 q I |Ψ(f)|^2,

where Ψ(f) is the Fourier transform of F(t).
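Plugging numbers into σ_i = sqrt(2 q I Δf) reproduces the 100 mA figure quoted above; the 50 Ω load is an assumed example value:

```python
import numpy as np

q = 1.602e-19  # elementary charge, C

def shot_noise_rms(I, df):
    """RMS shot-noise current: sigma_i = sqrt(2 q I df)."""
    return np.sqrt(2 * q * I * df)

sigma_i = shot_noise_rms(0.1, 1.0)      # 100 mA average current, 1 Hz bandwidth
print(f"sigma_i = {sigma_i:.2e} A")     # ~1.8e-10 A, i.e. 0.18 nA

R = 50.0                                # an assumed load resistor
print(f"noise power = {sigma_i**2 * R:.2e} W")
```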

7. NARROW BAND NOISE

In most communication systems we deal with band-pass filtering of signals: wideband noise is shaped into band-limited noise. If the bandwidth of the band-limited noise is relatively small compared to the carrier frequency, we refer to it as narrowband noise.

(Figure: generation of narrowband noise.)

We can derive the power spectral density G_n(f) and the autocorrelation function R_nn(τ) of the narrowband noise and use them to analyse the performance of linear systems. In practice, however, we often deal with mixing (multiplication), which is a non-linear operation, and the system analysis becomes difficult. In such cases it is useful to express the narrowband noise as

n(t) = x(t) cos(2π f_c t) - y(t) sin(2π f_c t),

where f_c is the carrier frequency within the band occupied by the noise. x(t) and y(t) are known as the quadrature components of the noise n(t). The Hilbert transform of n(t) is

n^(t) = H[n(t)] = x(t) sin(2π f_c t) + y(t) cos(2π f_c t).

(Figure: generation of the quadrature components of n(t).)
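A sketch of both steps: narrowband noise generated by band-pass filtering white noise, and the quadrature components recovered via the analytic signal (Hilbert transform). The sample rate, carrier, bandwidth, and filter order are assumed values, not from the text:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(5)

# Generate narrowband noise: band-pass filter a white noise sequence.
fs, fc, bw, n = 100_000.0, 10_000.0, 1_000.0, 2**16   # assumed parameters
white = rng.standard_normal(n)
b, a = butter(4, [(fc - bw / 2) / (fs / 2), (fc + bw / 2) / (fs / 2)], btype="band")
nb = filtfilt(b, a, white)   # narrowband noise n(t)

# Quadrature components: the analytic signal nb + j*H[nb] equals
# (x(t) + j*y(t)) * exp(j 2 pi fc t), so downconverting recovers x and y.
t = np.arange(n) / fs
baseband = hilbert(nb) * np.exp(-2j * np.pi * fc * t)
x_t, y_t = baseband.real, baseband.imag   # in-phase and quadrature components

# Sanity check: E[n^2] = (E[x^2] + E[y^2]) / 2 for narrowband noise.
print(np.var(nb), (np.var(x_t) + np.var(y_t)) / 2)
```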

8. NOISE TEMPERATURE

In electronics, noise temperature is a temperature (in kelvins) assigned to a component such that the noise power delivered by the noisy component to a noiseless matched resistor is given by

P_RL = k_B T_s B_n

where:
P_RL = noise power delivered to the matched load, in watts
k_B = the Boltzmann constant (1.381 × 10^-23 J/K, joules per kelvin)
T_s = the noise temperature, in kelvins
B_n = the noise bandwidth, in hertz

Engineers often model noisy components as an ideal component in series with a noisy resistor. The source resistor is often assumed to be at room temperature, conventionally taken as 290 K (17 °C, 62 °F).

8.1 Measuring noise temperature

The direct measurement of a component's noise temperature is a difficult process. Suppose that the noise temperature of a low noise amplifier (LNA) is measured by connecting a noise source to the LNA with a piece of transmission line. From the cascade noise temperature it can be seen that the noise temperature of the

transmission line has the potential to be the largest contributor to the output measurement (especially when one considers that LNAs can have noise temperatures of only a few kelvins). To accurately measure the noise temperature of the LNA, the noise from the input coaxial cable needs to be accurately known. This is difficult because poor surface finishes and reflections in the transmission line make actual noise temperature values higher than those predicted by theoretical analysis.

Similar problems arise when trying to measure the noise temperature of an antenna. Since the noise temperature is heavily dependent on the orientation of the antenna, the direction in which the antenna was pointed during the test needs to be specified. In receiving systems, the system noise temperature has three main contributors: the antenna, the transmission line, and the receiver circuitry. The antenna noise temperature is considered the most difficult to measure because the measurement must be made in the field on an open system. One technique for measuring antenna noise temperature involves using cryogenically cooled loads to calibrate a noise figure meter before measuring the antenna. This provides a direct reference comparison in the range of very low antenna noise temperatures, so that little extrapolation of the collected data is required.

9. NOISE FIGURE

The noise figure is a measure of the degradation of the signal-to-noise ratio (SNR) caused by components in a radio frequency (RF) signal chain. It is defined as the ratio of the output noise power of a device to the portion thereof attributable to thermal noise in the input termination at standard noise temperature T_0 (usually 290 K). The noise figure is thus the ratio of actual output noise to that which would remain if the device itself did not introduce noise. It is a number by which the performance of a radio receiver can be specified.

The noise factor of a system is defined as

F = SNR_in / SNR_out,

where SNR_in and SNR_out are the input and output power signal-to-noise ratios, respectively. The noise figure is defined as

NF = SNR_in,dB - SNR_out,dB,

where SNR_in,dB and SNR_out,dB are in decibels (dB). Equivalently, the noise figure is the noise factor given in dB:

NF = 10 log10(F).

These formulae are only valid when the input termination is at standard noise temperature T_0, although in practice small differences in temperature do not significantly affect the values. The noise factor of a device is related to its noise temperature T_e by

F = 1 + T_e / T_0.

Devices with no gain (e.g., attenuators) have a noise factor F equal to their attenuation L (absolute value, not in dB) when their physical temperature equals T_0. More generally, for an attenuator at a physical temperature T, the noise temperature is T_e = (L - 1) T, giving a noise factor of

F = 1 + (L - 1) T / T_0.

If several devices are cascaded, the total noise factor can be found with Friis' formula:

F = F_1 + (F_2 - 1)/G_1 + (F_3 - 1)/(G_1 G_2) + ... + (F_n - 1)/(G_1 G_2 ... G_{n-1}),

where F_n is the noise factor of the n-th device and G_n is the power gain (linear, not in dB) of the n-th device. In a well-designed receive chain, only the noise factor of the first amplifier should be significant.
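Friis' formula in code, with an assumed three-stage receive chain (the stage noise figures and gains are illustrative values, not from the text):

```python
import math

def friis_noise_factor(stages):
    """Cascade noise factor from (noise_factor, gain) pairs, both linear."""
    total_f, gain_product = 0.0, 1.0
    for i, (f, g) in enumerate(stages):
        total_f += f if i == 0 else (f - 1.0) / gain_product
        gain_product *= g
    return total_f

def db_to_lin(db):
    return 10 ** (db / 10)

# Assumed chain: LNA (NF 1 dB, gain 20 dB), mixer (NF 8 dB, gain -6 dB),
# IF amplifier (NF 10 dB, gain 30 dB).
stages = [(db_to_lin(1), db_to_lin(20)),
          (db_to_lin(8), db_to_lin(-6)),
          (db_to_lin(10), db_to_lin(30))]
F = friis_noise_factor(stages)
print(f"cascade noise figure = {10 * math.log10(F):.2f} dB")  # ~2.2 dB with these values
```

Note how the LNA's gain divides down the contribution of every later stage, which is why the first amplifier dominates in a well-designed chain.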

Self-test questions:

1. Which of thermal noise and shot noise is related to the non-uniform distribution of charge on a conductor due to the random, erratic wandering of electrons?
2. If noise is fed as input to a narrowband filter, it gives a sinusoid-like output. Is that correct?
3. Probability can be defined as the limiting value of the relative frequency of occurrence. Is this true?
4. Is Bayes' theorem related to joint probability?
5. A random variable cannot be discrete; it is always continuous. Is this correct?
6. How are the error function erf(u) and the complementary error function erfc(u) related?
7. Does the binomial distribution approximate the Poisson distribution?
8. Does the Rayleigh distribution consider two signal components, each having a Gaussian distribution?
9. What is the significance of the central limit theorem?
10. To design the decision threshold for a discrete message receiver, the a priori probabilities of the messages and the noise density function are important. Is the statement correct?
11. Can we say that random variables are instances of a random process (or stochastic process)?
12. Is a band-pass signal defined by in-phase and quadrature components?
13. If the input to a linear system is Gaussian, the output need not be Gaussian. Is this true?
14. For which random process is the ensemble average always equal to the time average - stationary or ergodic?
15. An NRZ signal passes 90 percent of its power through an ideal LPF with cut-off frequency f_b, where f_b is the clock frequency, but a biphase signal passes only 70 percent. Is this correct?

Possible questions:

1. What is meant by noise equivalent bandwidth? Illustrate it with a diagram.
2. What is narrowband noise? Discuss the properties of the quadrature components of narrowband noise.
3. Derive the expression for the output signal-to-noise ratio of a DSB-SC receiver using coherent detection.
4. Write short notes on noise in SSB receivers.
5. Explain narrowband noise.
6. What is noise temperature? Derive the expression for the effective noise temperature of a cascaded system.
7. Discuss the types, causes and effects of the various forms of noise in a receiver.
8. Discuss the following:
   I. Noise equivalent bandwidth
   II. Narrowband noise
   III. Noise temperature
   IV. Noise power spectral density
9. How is a sine wave plus noise represented? Obtain the joint pdf of such a noise component.
10. Discuss the effect of noise when amplifiers are connected in cascade.
11. Discuss the effect of noise in reactive circuits.
12. Define noise figure and obtain an expression for the noise figure of an amplifier.
13. Obtain an expression for noise figure in terms of equivalent noise resistance.
14. Show how narrowband noise can be represented as n(t) = n_c(t) cos(ω_c t) - n_s(t) sin(ω_c t), where n_c(t) and n_s(t) are the in-phase and quadrature components of the noise, respectively.
15. Draw the tone-modulated line spectrum with β = 2 for the above and comment on the transmission bandwidth.