Problem Set 3, Due Oct. 5
EE226: Random Processes in Systems (Fall 06)
Lecturer: Jean C. Walrand
GSI: Assane Gueye

This problem set essentially reviews detection theory. Not all exercises are to be turned in. Only those marked with the sign are due on Thursday, October 5th, at the beginning of the class. Although the remaining exercises are not graded, you are encouraged to go through them. We will discuss some of the exercises during discussion sections. Please feel free to point out errors and notions that need to be clarified.

Exercise 3.1. We have seen in class that a convenient way to write the error probability in Gaussian detection is to use the Q-function, defined as

$$Q(x) = \int_x^\infty \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\, dt.$$

In this exercise we will study some properties of the Q-function.

(a) Show that $Q(-x) = 1 - Q(x)$.

(b) Show that

$$\frac{1}{\sqrt{2\pi}}\left(\frac{1}{x} - \frac{1}{x^3}\right) e^{-x^2/2} \;\le\; Q(x) \;\le\; \frac{1}{\sqrt{2\pi}\,x}\, e^{-x^2/2}.$$

Hint: rewrite the integrand (multiply and divide by $t$) and use integration by parts.

Solution: (a) We will use the definition and a change of variable:

$$Q(-x) = \int_{-x}^\infty \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\, dt
= \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\, dt \qquad \text{(change of variable } t \to -t\text{)}$$
$$= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\, dt - \int_{x}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\, dt = 1 - Q(x).$$
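As a quick numerical sanity check of part (a) (not part of the assignment), the identity can be verified in Python using the standard closed form $Q(x) = \tfrac12\,\mathrm{erfc}(x/\sqrt{2})$:

```python
import math

def Q(x):
    # Q(x) = P(N(0,1) > x), via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

# Property (a): Q(-x) = 1 - Q(x)
for x in [0.0, 0.5, 1.0, 2.3, 4.0]:
    assert abs(Q(-x) - (1 - Q(x))) < 1e-12
```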
(b) For the upper and lower bounds we will use integration by parts; we drop the $1/\sqrt{2\pi}$ factor in the calculations. Writing $e^{-t^2/2} = \frac{1}{t}\, t e^{-t^2/2}$ and integrating by parts with $u = 1/t$, $dv = t e^{-t^2/2}\,dt$ (so $v = -e^{-t^2/2}$):

$$\int_x^\infty e^{-t^2/2}\, dt = \frac{e^{-x^2/2}}{x} - \int_x^\infty \frac{1}{t^2}\, e^{-t^2/2}\, dt. \qquad (3.1)$$

Since the second term is nonnegative, dropping it gives the upper bound $Q(x) \le \frac{1}{\sqrt{2\pi}\,x}\, e^{-x^2/2}$.

For the lower bound we perform another integration by parts on the second term of equation (3.1), with $u = 1/t^3$, $dv = t e^{-t^2/2}\,dt$:

$$\int_x^\infty \frac{1}{t^2}\, e^{-t^2/2}\, dt = \frac{e^{-x^2/2}}{x^3} - 3 \int_x^\infty \frac{1}{t^4}\, e^{-t^2/2}\, dt \;\le\; \frac{e^{-x^2/2}}{x^3},$$

so that

$$Q(x) \ge \frac{1}{\sqrt{2\pi}}\left(\frac{1}{x} - \frac{1}{x^3}\right) e^{-x^2/2}.$$

Exercise 3.2. The output of a system is normal $N(\mu_0, v_0)$ if the input $X$ is equal to 0, and normal $N(\mu_1, v_1)$ if $X$ is equal to 1 (the specific numeric values are those of Example 8.6 of the 06 notes). Observing the output of the system, we would like to compute our best estimate of $X$. Find the best estimate of $i$ given the output of the system, best in terms of maximizing the probability of $X = i$ given the system output. We assume that the input is $X = 1$ with probability 0.6. The solution is given in the 06 notes, Example 8.6.

Exercise 3.3. The Binary Phase-Shift Keying (BPSK) and Quadrature Phase-Shift Keying (QPSK) modulation schemes are shown in Figure 3.1. We consider that in both cases the symbols $S$ are sent over an additive Gaussian channel with zero mean and variance $\sigma^2$. Assuming that the symbols are equally likely, compute the average error probability for each scheme. Which one is better?

Note that the comparison is not fair because the two schemes do not have the same rate (apples and oranges!). But let us compute the error probabilities and compare them. The BPSK error probability is the one given in the ML-detection example of Gallager's notes (with $b_{Gal} = a$ and $a_{Gal} = -a$). Thus the error probability is given by

$$P_e^{bpsk} = Q\!\left(\frac{a}{\sigma}\right).$$
[Figure 3.1: BPSK and QPSK modulations. BPSK uses the two symbols $\pm a$ on a line; QPSK uses four symbols $S_0, \dots, S_3$, one per quadrant.]

For QPSK, consider that signal $S_1$ was sent and observe that an error occurs if the received signal does not fall in the first quadrant. By considering the probability of detecting any other signal, we can see that the error probability is equal to

$$P_e^{qpsk} = 2\,Q\!\left(\frac{a}{\sigma}\right) - Q\!\left(\frac{a}{\sigma}\right)^2.$$

For $a$ large enough we have $P_e^{qpsk} \approx 2\, P_e^{bpsk}$.

Exercise 3.4. Given $X = i$, $i \in \{0, 1\}$, $Z$ is an exponential random variable with mean $\lambda_i$. Give the MAP decision rule for $X$ given the observation $Z$. What is the probability of error? For the MAP, assume that $X = 1$ with probability $p$. Solution given in the 06 notes, Example 8.6.

Exercise 3.5. The conditional pmf of $Y$ given $X$ is given by the table below.

[Table of $p(Y \mid X)$; the entries were not recovered in this transcription.]

Find the MAP and MLE of $X$ given $Y$. For the MAP, assume that $X = 1$ with probability $p$. Solution given in the 06 notes.

Exercise 3.6. The purpose of a radar system is to detect the presence of a target and extract useful information about the target. Suppose that in such a system, hypothesis $H_0$ is that there is no target present, so that the observation is $s = w$, where $w$ is a normal random variable with zero mean and variance $\sigma^2$. Hypothesis $H_1$ corresponds to the presence of the
target, and the observation is $s = a + w$, where $a$ is an echo produced by the target and is assumed to be known. Evaluate:

- The probability of false alarm, defined as the probability that the receiver decides a target is present when it is not.
- The probability of missed detection, defined as the probability that the receiver decides a target is not present when it is.

Write down the decision rule ($H_0$ if $y < a/2$ and $H_1$ otherwise) and compute the errors.

Exercise 3.7. Suppose that we want to send at the same time two independent binary symbols $X_1 \in \{-1, +1\}$ and $X_2 \in \{-1, +1\}$ over an additive Gaussian channel, where $-1$ and $+1$ are equally likely. A clever communication engineer proposes a transmission scheme that uses the channel twice to send at each time a linear combination of $X_1$ and $X_2$. The channel input-output relation is given by:

$$\begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix} = \begin{pmatrix} h_1 & h_2 \\ h_2 & -h_1 \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} + \begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix}$$

where $Y_1$ and $Y_2$ are the channel outputs, the $h_i$, $i = 1, 2$, are known real constants, and the $Z_i$, $i = 1, 2$, are iid standard normal random variables. We are interested in detecting both $X_1$ and $X_2$, i.e. the vector $(X_1, X_2)^T$.

(a) Show that the vector detection problem can be decomposed into two scalar detection problems. Hint: observe that the rows of the matrix $H$ are orthogonal.

(b) Give the decision rule for detecting $X_1$ and write the error probability using the Q-function.

Solution: The scheme presented in this exercise is the Alamouti scheme, well known in communication systems, mainly in multi-antenna communications.

(a) Observing that the rows and columns of the matrix $H$ are orthogonal, let us project the received signal on the directions $H_1 = (h_1, h_2)^T$ and $H_2 = (h_2, -h_1)^T$. We have:

$$\tilde Y_1 = H_1^T Y = (h_1^2 + h_2^2)\, X_1 + h_1 Z_1 + h_2 Z_2$$
$$\tilde Y_2 = H_2^T Y = (h_1^2 + h_2^2)\, X_2 + h_2 Z_1 - h_1 Z_2$$

Thus we have rewritten the vector detection problem as two scalar detection problems $\tilde Y_i = (h_1^2 + h_2^2)\, X_i + W_i$, $i = 1, 2$, where $W_i \sim N(0,\, h_1^2 + h_2^2)$. To be sure that the problem can be decoupled,
we need to check that the $W_i$'s are uncorrelated:

$$E[W_1 W_2] = E[(h_1 Z_1 + h_2 Z_2)(h_2 Z_1 - h_1 Z_2)]
= h_1 h_2 E[Z_1^2] - h_1^2 E[Z_1 Z_2] + h_2^2 E[Z_1 Z_2] - h_1 h_2 E[Z_2^2] = h_1 h_2 - h_1 h_2 = 0.$$

Thus $W_1$ and $W_2$ are uncorrelated. Since they are Gaussian, they are also independent. Hence the vector detection problem can be decoupled into two independent scalar detection problems.

(b) We can use the same technique as in the binary-detection example of the lecture notes. We decide $\hat x_1 = -1$ if $\tilde Y_1 < 0$ and $\hat x_1 = +1$ if $\tilde Y_1 > 0$. Since the two symbols are equally likely, the error probability is equal to the probability of error given that $-1$ was sent. Given that $-1$ was sent, $\tilde Y_1 \sim N(-(h_1^2 + h_2^2),\, h_1^2 + h_2^2)$, so

$$P_e = P\big(N(-(h_1^2 + h_2^2),\, h_1^2 + h_2^2) > 0\big) = P\big(N(0, 1) > \sqrt{h_1^2 + h_2^2}\big) = Q\big(\sqrt{h_1^2 + h_2^2}\big).$$

Exercise 3.8. In this exercise we consider a binary detection problem, as in the lecture-note example, where $n$ independent observations $Y_i = \pm a + Z_i$ are available. Assume that the $Z_i$, $i = 1, \dots, n$, are iid $N(0, \sigma^2)$. Hypotheses $H_0$ and $H_1$ are mapped to $-a$ and $+a$ respectively. Find the decision rule for the ML and the MAP detectors. Hint: find a sufficient statistic first. Convince yourself that the sample mean $\bar Y = \frac{1}{n} \sum_{i=1}^n Y_i$ is a sufficient statistic and that the ML decision rule is: $H_0$ if $\bar Y < 0$ and $H_1$ otherwise. Now compute the distribution of $\bar Y$ and deduce the error probabilities.

Exercise 3.9. We observe $n$ independent realizations $Y_1, \dots, Y_n$ of a random variable $Y$ whose distribution depends on another variable $X$. If $X = 0$ (resp. $X = 1$), then $Y$ is an exponential random variable with mean $\lambda_0$ (resp. $\lambda_1$). We assume that $X$ takes the value 1 with probability $p \in (0, 1)$. Compute the optimal detector of $X$ given the $Y_i$'s, optimal in terms of maximizing the probability of $X$ given the observation $Y^n = (Y_1, \dots, Y_n)$. What is this detector when $p = 1/2$? Solution given in an example of the 06 notes.
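The decoupling in Exercise 3.7 is easy to check numerically. The sketch below (with arbitrarily chosen values for $h_1$ and $h_2$) verifies that the two projections recover each symbol exactly in the noiseless case, and that a simulated error rate for the $X_1$ branch matches $Q(\sqrt{h_1^2 + h_2^2})$:

```python
import math, random

random.seed(0)
h1, h2 = 0.8, -1.3            # arbitrary known channel constants
g = h1 * h1 + h2 * h2         # h1^2 + h2^2, the common gain of both subchannels

def transmit(x1, x2, z1, z2):
    # Y = H x + Z with H = [[h1, h2], [h2, -h1]]
    return (h1 * x1 + h2 * x2 + z1,
            h2 * x1 - h1 * x2 + z2)

def project(y1, y2):
    # inner products with the (orthogonal) rows H1 = (h1, h2), H2 = (h2, -h1)
    return (h1 * y1 + h2 * y2,
            h2 * y1 - h1 * y2)

def Qf(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

# noiseless check: each projection isolates one symbol, scaled by g
for x1 in (-1, 1):
    for x2 in (-1, 1):
        t1, t2 = project(*transmit(x1, x2, 0.0, 0.0))
        assert abs(t1 - g * x1) < 1e-12 and abs(t2 - g * x2) < 1e-12

# Monte Carlo estimate of the error probability of the X1 branch
n, errs = 100000, 0
for _ in range(n):
    x1, x2 = random.choice((-1, 1)), random.choice((-1, 1))
    t1, _ = project(*transmit(x1, x2, random.gauss(0, 1), random.gauss(0, 1)))
    errs += (1 if t1 > 0 else -1) != x1
assert abs(errs / n - Qf(math.sqrt(g))) < 0.01
```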
Exercise 3.10. Consider a modified version of the example of the course notes (Gallager) where the source output is no longer 0 or 1 but takes values in the set $S = \{0, 1, \dots, I-1\}$. If the source output is $i \in S$, the modulator produces the real vector $a_i = (a_{i1}, \dots, a_{in})$. Given that the source output is $i$, the observation is $Y = a_i + Z$, where the noise is assumed to be $N(0, I_n)$.

(a) Show that the ML decision rule is given by $\hat i = \arg\min_i \|Y - a_i\|$, where $\|\cdot\|$ is the Euclidean distance in $R^n$.

(b) For the binary case, use a one-dimensional sufficient statistic to compute the decision rule.

(c) Compare your decision rule to the one in (a). Explain.

Solution: (a) The ML decision point is the one that maximizes the likelihood, i.e. the conditional distribution. It is given by

$$\hat i = \arg\max_i f_{Y \mid H}(y \mid i)
= \arg\max_i \frac{1}{(2\pi)^{n/2}} \exp\Big( -\tfrac12 (Y - a_i)^T (Y - a_i) \Big) \qquad \text{(definition of the conditional density)}$$
$$= \arg\max_i \Big( -\tfrac12 (Y - a_i)^T (Y - a_i) \Big) \qquad \text{(taking logs)}
= \arg\min_i (Y - a_i)^T (Y - a_i) = \arg\min_i \|Y - a_i\|.$$

Thus the decision point is the one that minimizes the Euclidean distance between the received signal $Y$ and the data symbols $a_i$.

(b) A one-dimensional sufficient statistic for the binary case is $\Theta = (a_1 - a_0)^T Y$. This is the inner product between $Y$ and the direction of the signal. Given that $S = i$, $i \in \{0, 1\}$, was sent, $\Theta \sim N\big( (a_1 - a_0)^T a_i,\; \|a_1 - a_0\|^2 \big)$. Now we can compute the LLR of $\Theta$, which is equal to

$$\mathrm{LLR}(\Theta) = (a_1 - a_0)^T y - (a_1 - a_0)^T \frac{a_0 + a_1}{2} = (a_1 - a_0)^T \Big( y - \frac{a_0 + a_1}{2} \Big).$$

We will then decide $\hat S = s_1$ whenever $\mathrm{LLR}(\Theta) \ge 0$. To understand this decision rule, note first that $y \mapsto y - \frac{a_0 + a_1}{2}$ is just changing the origin from the zero vector to $\frac{a_0 + a_1}{2}$ (to simplify, we may assume $a_0 = -a_1$, in which case the new origin is the old one). Think of the axes as changed also, from the usual ones to $\frac{a_1 - a_0}{\|a_1 - a_0\|}$, the direction of the signal, together with the directions perpendicular to the signal. Let $y = \alpha (a_1 - a_0) + \beta (a_1 - a_0)^\perp$ in the new coordinate system.
Taking the inner product,

$$(a_1 - a_0)^T y = (a_1 - a_0)^T \big( \alpha (a_1 - a_0) + \beta (a_1 - a_0)^\perp \big) = \alpha \|a_1 - a_0\|^2.$$

Our decision rule becomes the simple test of whether $\alpha$ is greater than $\frac{(a_1 - a_0)^T (a_0 + a_1)}{2 \|a_1 - a_0\|^2}$. Geometrically, we have reduced our vector detection problem to a one-dimensional detection problem, where the detection is along the direction of the signal.

Important remarks:

1. Our decomposition was possible because the noise was Gaussian, uncorrelated, and of unit variance.
2. If the noise is still Gaussian but correlated, with an arbitrary nonsingular covariance matrix $K_Z$, we need to re-scale the received signal before the projection. This is the job of the term $K_Z^{-1}$ in the general sufficient statistic $(a_1 - a_0)^T K_Z^{-1} Y$.
3. The case where the noise is Gaussian with a singular covariance matrix is discussed in Exercise 3.12 below.
4. For a general distribution of the noise, there is no such result (as far as I know!)... this gives an idea why Gaussians are important for us.

(c) It is not hard to verify that the two decision rules are the same (they have to be!). For that, note that in part (a) we decide $S = s_0$ if

$$\|y - a_0\| < \|y - a_1\|
\iff -2 a_0^T y + \|a_0\|^2 < -2 a_1^T y + \|a_1\|^2
\iff (a_1 - a_0)^T y < \frac{\|a_1\|^2 - \|a_0\|^2}{2} = (a_1 - a_0)^T \frac{a_1 + a_0}{2},$$

which is the decision in part (b). The method in part (b) is quite intuitive and brings us back to the simple scalar detection case, but it is applicable only for binary detection. If there are more than 2 hypotheses, only the method in part (a) can be used.

Exercise 3.11. Gallager's notes, Exercise 3.1. This exercise was just a verification of results presented in Gallager's notes.

(a) Given $H_0$, an error occurs if $\mathrm{LLR}(Y) \ge 0$. Write out the error probability to get (3.3).

(b) Similar to part (a).

(c) $E[U] = E[(b - a)^T Y] = (b - a)^T E[Y]$.
Thus, given $H_0$, $E[U] = (b - a)^T a$, and given $H_1$, $E[U] = (b - a)^T b$. The variances are given by

$$\mathrm{var}(U \mid H_0) = \sigma^2 \|b - a\|^2 = \mathrm{var}(U \mid H_1).$$

Now, using the decision rule given by (3.7), we easily get (3.3).

(d) Again, $U$ is a Gaussian random variable with mean and variance given above. Just write the ratio of the conditional densities and take the log.

(e) Since $U$ is one-dimensional, we can now apply (3.8) and get (3.3) again.

The whole point of this exercise was to study properties of the sufficient statistic. The message to take away is that the sufficient statistic is easier to work with: it is usually of lower dimension than the original variable, it has the same LLR, and it leads to the same decision rule as the observation.

Exercise 3.12. Gallager's notes, Exercise 3.2. Parts (a), (b), (c) are very similar to what we did in the previous problem. The only tricky part is (d).

(d) As has been noted in a previous homework, if the correlation matrix $K_Z$ is singular, then the noise lies in a lower-dimensional subspace. If $b - a$ lies entirely in that subspace, then the problem can be reduced to a lower-dimensional estimation problem. If some component of $b - a$ is not in that subspace, then that component will be received without noise. By observing that particular component, we can always detect without error.

Exercise 3.13. Gallager's notes, Exercise 3.5.

(a) Observe that given $H_0$, $Y \sim N(a, AA^T)$, and given $H_1$, $Y \sim N(b, AA^T)$. The log-likelihood ratio is then given by

$$\mathrm{LLR}(Y) = (b - a)^T (A^{-1})^T A^{-1} Y + \tfrac12 \big[ a^T (AA^T)^{-1} a - b^T (AA^T)^{-1} b \big]
= \big( A^{-1} (b - a) \big)^T v + \tfrac12 \big[ a^T (AA^T)^{-1} a - b^T (AA^T)^{-1} b \big],$$

which shows that the dependency of the LLR on $Y$ is only through $A^{-1} Y = v$. Thus $v$ is a sufficient statistic.

(b) Observe that given $H_0$, $v \sim N(A^{-1} a, I)$, and given $H_1$, $v \sim N(A^{-1} b, I)$. Now compute the LLR of $v$ and verify that it is equal to the one computed in (a).

(c) Note that the noise is uncorrelated, so we can apply the techniques used in the example
of Gallager's notes. After some calculation steps, we obtain:

$$P(e \mid H_0) = Q\left( \frac{\log \eta}{\gamma} + \frac{\gamma}{2} \right), \qquad \text{where } \gamma = \|A^{-1} (b - a)\|.$$

Note: The goal of this exercise was to study the whitening filter. Whenever the noise covariance matrix can be written as $K_Z = AA^T$ with $A$ a nonsingular matrix (this is always possible if $K_Z$ is nonsingular), the detection problem $Y = a_i + Z$ is equivalent to $\tilde Y = A^{-1} a_i + W$, where $\tilde Y = A^{-1} Y$ and $W$ is zero-mean and uncorrelated ($N(0, I)$ in the case of Gaussian noise). The process of multiplying by $A^{-1}$ is called the whitening filter. You will see more of it later in the course.

Exercise 3.14. Gallager's notes, Exercise 3.9.

Solution: Conditioned on $H_0$, $(Y_1, Y_2)$ and $(Y_3, Y_4)$ are independent. Each of these two-dimensional distributions is circularly symmetric in the plane, so the conditional density of $(Y_1, Y_2, Y_3, Y_4)$ given $H_0$ must depend on $(Y_1, Y_2, Y_3, Y_4)$ only through $Y_1^2 + Y_2^2$ and $Y_3^2 + Y_4^2$. Since this is also true for the conditional density given $H_1$, $(V_1, V_2) = (Y_1^2 + Y_2^2,\; Y_3^2 + Y_4^2)$ must be a sufficient statistic. By the symmetry of the transmitted signal and the noise,

$$f_{V_1, V_2 \mid H_0}(v_1, v_2) = f_{V_1, V_2 \mid H_1}(v_2, v_1).$$

We also expect that $f_{V_1, V_2 \mid H_0}(v_1, v_2) > f_{V_1, V_2 \mid H_1}(v_1, v_2)$ whenever $v_1 > v_2$ (proving this is technical, and not required). Then the decision rule must be the one stated.
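A small simulation of the stated rule, compare $V_1 = Y_1^2 + Y_2^2$ against $V_2 = Y_3^2 + Y_4^2$, confirms it behaves sensibly: the error rate under $H_0$ (known amplitude $a$, uniformly random phase) decreases as the amplitude grows. The values of $a$ and $\sigma$ below are arbitrary, chosen only for illustration:

```python
import math, random

random.seed(4)
sigma = 1.0  # per-component noise standard deviation (hypothetical value)

def error_rate(a, trials=60000):
    # P(decide H1 | H0): signal of amplitude a with uniform random phase in (Y1, Y2)
    errs = 0
    for _ in range(trials):
        theta = random.uniform(0, 2 * math.pi)
        y1 = a * math.cos(theta) + random.gauss(0, sigma)
        y2 = a * math.sin(theta) + random.gauss(0, sigma)
        y3 = random.gauss(0, sigma)
        y4 = random.gauss(0, sigma)
        errs += (y1 * y1 + y2 * y2) < (y3 * y3 + y4 * y4)  # V1 < V2: decide H1
    return errs / trials

# the rule "decide H0 when V1 > V2" does better and better as a grows
assert error_rate(0.5) > error_rate(2.0) > error_rate(4.0)
assert error_rate(4.0) < 0.05
```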
The technical details follow:

$$f_{Y \mid H_0}(y) = \frac{1}{2\pi} \int_0^{2\pi} f_{Y \mid \phi, H_0}(y \mid \theta)\, d\theta
= \frac{1}{2\pi} \int_0^{2\pi} \frac{1}{(2\pi\sigma^2)^2} \exp\left( -\frac{(y_1 - a\cos\theta)^2 + (y_2 - a\sin\theta)^2 + y_3^2 + y_4^2}{2\sigma^2} \right) d\theta$$
$$= \frac{1}{(2\pi\sigma^2)^2} \exp\left( -\frac{y_1^2 + y_2^2 + y_3^2 + y_4^2 + a^2}{2\sigma^2} \right) \frac{1}{2\pi} \int_0^{2\pi} \exp\big( a (y_1 \cos\theta + y_2 \sin\theta)/\sigma^2 \big)\, d\theta$$
$$= \frac{1}{(2\pi\sigma^2)^2} \exp\left( -\frac{y_1^2 + y_2^2 + y_3^2 + y_4^2 + a^2}{2\sigma^2} \right) \frac{1}{2\pi} \int_0^{2\pi} \exp\left( \frac{a \sqrt{y_1^2 + y_2^2}}{\sigma^2} \cos\psi \right) d\psi \qquad (\psi = \theta - \arctan(y_2 / y_1))$$
$$= \frac{1}{(2\pi\sigma^2)^2} \exp\left( -\frac{y_1^2 + y_2^2 + y_3^2 + y_4^2 + a^2}{2\sigma^2} \right) I_0\left( \frac{a \sqrt{y_1^2 + y_2^2}}{\sigma^2} \right),$$

where $I_0$ is the modified Bessel function of the first kind and 0th order. Similarly,

$$f_{Y \mid H_1}(y) = \frac{1}{(2\pi\sigma^2)^2} \exp\left( -\frac{y_1^2 + y_2^2 + y_3^2 + y_4^2 + a^2}{2\sigma^2} \right) I_0\left( \frac{a \sqrt{y_3^2 + y_4^2}}{\sigma^2} \right).$$

Since $I_0(z)$ is an increasing function of $z$, the ML rule is of the stated form.

Another way to convince yourself is to compute the LLR. We know that given $H_i$, $i \in \{0, 1\}$, $Y \sim N(0, \Sigma_i)$, where

$$\Sigma_0 = \mathrm{diag}\big( a^2 + \sigma^2,\; a^2 + \sigma^2,\; \sigma^2,\; \sigma^2 \big),$$

and $\Sigma_1$ has the two blocks exchanged. The pdf of $Y$ given $H_0$ is given by

$$f_{Y \mid H_0}(y) = \frac{1}{(2\pi)^2\, \sigma^2 (a^2 + \sigma^2)} \exp\left( -\frac12 \left[ \frac{y_1^2 + y_2^2}{a^2 + \sigma^2} + \frac{y_3^2 + y_4^2}{\sigma^2} \right] \right).$$

We can also write $f_{Y \mid H_1}(y)$ and see that it depends only on $v_1$ and $v_2$. Now we can write the LLR and find that it is

$$\mathrm{LLR}(Y) = \frac{a^2}{2 \sigma^2 (a^2 + \sigma^2)}\, (v_1 - v_2).$$
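The claim that the LLR reduces to a scaled difference of $v_1$ and $v_2$ can be checked numerically: the sketch below evaluates the two diagonal-Gaussian log-densities directly and compares their difference to $\frac{a^2}{2\sigma^2(a^2+\sigma^2)}(v_1 - v_2)$. The values of $a$ and $\sigma$ and the test points are arbitrary:

```python
import math

a, sigma = 1.2, 0.7   # hypothetical values
s2 = sigma ** 2
sa2 = a * a + s2

def log_density(y, var_front, var_back):
    # four independent zero-mean Gaussians:
    # Var(y1) = Var(y2) = var_front, Var(y3) = Var(y4) = var_back
    return (-2 * math.log(2 * math.pi) - math.log(var_front * var_back)
            - 0.5 * ((y[0] ** 2 + y[1] ** 2) / var_front
                     + (y[2] ** 2 + y[3] ** 2) / var_back))

for y in ([0.3, -1.1, 0.8, 0.2], [2.0, 0.5, -0.4, 1.7], [0.0, 0.0, 1.0, -1.0]):
    v1 = y[0] ** 2 + y[1] ** 2
    v2 = y[2] ** 2 + y[3] ** 2
    llr = log_density(y, sa2, s2) - log_density(y, s2, sa2)  # log f0(y)/f1(y)
    assert abs(llr - a * a / (2 * s2 * sa2) * (v1 - v2)) < 1e-9
```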
We will thus choose $H_0$ whenever $v_1 > v_2$, and $H_1$ otherwise.

(b) Given $H_0$, $V_1 = Y_1^2 + Y_2^2$, the distribution of which we found in a previous exercise:

$$f_{V_1 \mid H_0}(v) = \frac{1}{2(a^2 + \sigma^2)} \exp\left( -\frac{v}{2(a^2 + \sigma^2)} \right), \qquad v \ge 0.$$

Similarly, $V_2 = Y_3^2 + Y_4^2$ has distribution

$$f_{V_2 \mid H_0}(v) = \frac{1}{2\sigma^2} \exp\left( -\frac{v}{2\sigma^2} \right), \qquad v \ge 0.$$

(c) Let $U = V_1 - V_2$ and compute $P(U > u \mid H_0)$. Conditioning on $V_2$, and using the independence of $V_1$ and $V_2$:

$$P(U > u \mid H_0) = \int_0^\infty P(V_1 > u + v \mid H_0, V_2 = v)\, f_{V_2 \mid H_0}(v)\, dv
= \int_0^\infty P(V_1 > u + v \mid H_0)\, f_{V_2 \mid H_0}(v)\, dv$$
$$= \begin{cases} \dfrac{a^2 + \sigma^2}{a^2 + 2\sigma^2} \exp\left( -\dfrac{u}{2(a^2 + \sigma^2)} \right), & u \ge 0 \\[2mm] 1 - \dfrac{\sigma^2}{a^2 + 2\sigma^2} \exp\left( \dfrac{u}{2\sigma^2} \right), & u < 0. \end{cases}$$

Thus the pdf is given by

$$f_{U \mid H_0}(u) = \begin{cases} \dfrac{1}{2(a^2 + 2\sigma^2)} \exp\left( -\dfrac{u}{2(a^2 + \sigma^2)} \right), & u \ge 0 \\[2mm] \dfrac{1}{2(a^2 + 2\sigma^2)} \exp\left( \dfrac{u}{2\sigma^2} \right), & u < 0. \end{cases}$$

(d) Given $H_0$, an error occurs whenever $U < 0$. The corresponding probability is

$$P(U < 0 \mid H_0) = \int_{-\infty}^0 \frac{1}{2(a^2 + 2\sigma^2)} \exp\left( \frac{u}{2\sigma^2} \right) du = \frac{\sigma^2}{a^2 + 2\sigma^2} = \frac{1}{2 + a^2/\sigma^2}.$$

(e) By the symmetry of the transmitted signals and the noise, this is also the overall error probability.
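The error probability of part (d) can be cross-checked by sampling $V_1$ and $V_2$ directly from the two exponential laws of part (b) and counting how often $U = V_1 - V_2$ falls below zero. The values of $a$ and $\sigma$ below are arbitrary:

```python
import random

random.seed(3)
a, sigma = 1.5, 1.0                # hypothetical values
m1 = 2 * (a * a + sigma * sigma)   # mean of V1 under H0
m2 = 2 * sigma * sigma             # mean of V2 under H0

n, errs = 200000, 0
for _ in range(n):
    v1 = random.expovariate(1 / m1)  # expovariate takes the rate, not the mean
    v2 = random.expovariate(1 / m2)
    errs += v1 < v2                  # U = V1 - V2 < 0: error under H0

p_theory = 1 / (2 + (a / sigma) ** 2)  # = sigma^2 / (a^2 + 2 sigma^2)
assert abs(errs / n - p_theory) < 0.01
```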
Assignment 9: Orthogonal Projections, Gram-Schmidt, and Least Squares Due date: Friday, April 0, 08 (:pm) Name: Section Number Assignment 9: Orthogonal Projections, Gram-Schmidt, and Least Squares Due
More informationVectors and Matrices Statistics with Vectors and Matrices
Vectors and Matrices Statistics with Vectors and Matrices Lecture 3 September 7, 005 Analysis Lecture #3-9/7/005 Slide 1 of 55 Today s Lecture Vectors and Matrices (Supplement A - augmented with SAS proc
More information18.02 Multivariable Calculus Fall 2007
MIT OpenCourseWare http://ocw.mit.edu 18.02 Multivariable Calculus Fall 2007 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 18.02 Problem Set 1 Due Thursday
More informationBASICS OF DETECTION AND ESTIMATION THEORY
BASICS OF DETECTION AND ESTIMATION THEORY 83050E/158 In this chapter we discuss how the transmitted symbols are detected optimally from a noisy received signal (observation). Based on these results, optimal
More informationDigital Transmission Methods S
Digital ransmission ethods S-7.5 Second Exercise Session Hypothesis esting Decision aking Gram-Schmidt method Detection.K.K. Communication Laboratory 5//6 Konstantinos.koufos@tkk.fi Exercise We assume
More information2. What are the tradeoffs among different measures of error (e.g. probability of false alarm, probability of miss, etc.)?
ECE 830 / CS 76 Spring 06 Instructors: R. Willett & R. Nowak Lecture 3: Likelihood ratio tests, Neyman-Pearson detectors, ROC curves, and sufficient statistics Executive summary In the last lecture we
More informationInverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1
Inverse of a Square Matrix For an N N square matrix A, the inverse of A, 1 A, exists if and only if A is of full rank, i.e., if and only if no column of A is a linear combination 1 of the others. A is
More informationECE Homework Set 2
1 Solve these problems after Lecture #4: Homework Set 2 1. Two dice are tossed; let X be the sum of the numbers appearing. a. Graph the CDF, FX(x), and the pdf, fx(x). b. Use the CDF to find: Pr(7 X 9).
More informationEE6604 Personal & Mobile Communications. Week 15. OFDM on AWGN and ISI Channels
EE6604 Personal & Mobile Communications Week 15 OFDM on AWGN and ISI Channels 1 { x k } x 0 x 1 x x x N- 2 N- 1 IDFT X X X X 0 1 N- 2 N- 1 { X n } insert guard { g X n } g X I n { } D/A ~ si ( t) X g X
More informationElementary Linear Algebra
Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We
More informationEE456 Digital Communications
EE456 Digital Communications Professor Ha Nguyen September 5 EE456 Digital Communications Block Diagram of Binary Communication Systems m ( t { b k } b k = s( t b = s ( t k m ˆ ( t { bˆ } k r( t Bits in
More informationList Decoding: Geometrical Aspects and Performance Bounds
List Decoding: Geometrical Aspects and Performance Bounds Maja Lončar Department of Information Technology Lund University, Sweden Summer Academy: Progress in Mathematics for Communication Systems Bremen,
More information4 Bias-Variance for Ridge Regression (24 points)
Implement Ridge Regression with λ = 0.00001. Plot the Squared Euclidean test error for the following values of k (the dimensions you reduce to): k = {0, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500,
More informationLeast Squares with Examples in Signal Processing 1. 2 Overdetermined equations. 1 Notation. The sum of squares of x is denoted by x 2 2, i.e.
Least Squares with Eamples in Signal Processing Ivan Selesnick March 7, 3 NYU-Poly These notes address (approimate) solutions to linear equations by least squares We deal with the easy case wherein the
More informationMetric-based classifiers. Nuno Vasconcelos UCSD
Metric-based classifiers Nuno Vasconcelos UCSD Statistical learning goal: given a function f. y f and a collection of eample data-points, learn what the function f. is. this is called training. two major
More informationDetection Theory. Chapter 3. Statistical Decision Theory I. Isael Diaz Oct 26th 2010
Detection Theory Chapter 3. Statistical Decision Theory I. Isael Diaz Oct 26th 2010 Outline Neyman-Pearson Theorem Detector Performance Irrelevant Data Minimum Probability of Error Bayes Risk Multiple
More informationNotes on Linear Algebra and Matrix Theory
Massimo Franceschet featuring Enrico Bozzo Scalar product The scalar product (a.k.a. dot product or inner product) of two real vectors x = (x 1,..., x n ) and y = (y 1,..., y n ) is not a vector but a
More informationIntroduction to Probability Theory for Graduate Economics Fall 2008
Introduction to Probability Theory for Graduate Economics Fall 008 Yiğit Sağlam October 10, 008 CHAPTER - RANDOM VARIABLES AND EXPECTATION 1 1 Random Variables A random variable (RV) is a real-valued function
More informationDetection and Estimation Theory
ESE 524 Detection and Estimation Theory Joseph A. O Sullivan Samuel C. Sachs Professor Electronic Systems and Signals Research Laboratory Electrical and Systems Engineering Washington University 2 Urbauer
More informationC/CS/Phys C191 Grover s Quantum Search Algorithm 11/06/07 Fall 2007 Lecture 21
C/CS/Phys C191 Grover s Quantum Search Algorithm 11/06/07 Fall 2007 Lecture 21 1 Readings Benenti et al, Ch 310 Stolze and Suter, Quantum Computing, Ch 84 ielsen and Chuang, Quantum Computation and Quantum
More information2016 Spring: The Final Exam of Digital Communications
2016 Spring: The Final Exam of Digital Communications The total number of points is 131. 1. Image of Transmitter Transmitter L 1 θ v 1 As shown in the figure above, a car is receiving a signal from a remote
More informationSimultaneous SDR Optimality via a Joint Matrix Decomp.
Simultaneous SDR Optimality via a Joint Matrix Decomposition Joint work with: Yuval Kochman, MIT Uri Erez, Tel Aviv Uni. May 26, 2011 Model: Source Multicasting over MIMO Channels z 1 H 1 y 1 Rx1 ŝ 1 s
More informationIntroduction...2 Chapter Review on probability and random variables Random experiment, sample space and events
Introduction... Chapter...3 Review on probability and random variables...3. Random eperiment, sample space and events...3. Probability definition...7.3 Conditional Probability and Independence...7.4 heorem
More informationMA 114 Worksheet #01: Integration by parts
Fall 8 MA 4 Worksheet Thursday, 3 August 8 MA 4 Worksheet #: Integration by parts. For each of the following integrals, determine if it is best evaluated by integration by parts or by substitution. If
More informationCommunication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi
Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 41 Pulse Code Modulation (PCM) So, if you remember we have been talking
More information(x 3)(x + 5) = (x 3)(x 1) = x + 5
RMT 3 Calculus Test olutions February, 3. Answer: olution: Note that + 5 + 3. Answer: 3 3) + 5) = 3) ) = + 5. + 5 3 = 3 + 5 3 =. olution: We have that f) = b and f ) = ) + b = b + 8. etting these equal
More informationBayesian Updating with Continuous Priors Class 13, Jeremy Orloff and Jonathan Bloom
Bayesian Updating with Continuous Priors Class 3, 8.05 Jeremy Orloff and Jonathan Bloom Learning Goals. Understand a parameterized family of distributions as representing a continuous range of hypotheses
More informationMassachusetts Institute of Technology
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.011: Introduction to Communication, Control and Signal Processing QUIZ, April 1, 010 QUESTION BOOKLET Your
More informationNational University of Singapore Department of Electrical & Computer Engineering. Examination for
National University of Singapore Department of Electrical & Computer Engineering Examination for EE5139R Information Theory for Communication Systems (Semester I, 2014/15) November/December 2014 Time Allowed:
More informationMapper & De-Mapper System Document
Mapper & De-Mapper System Document Mapper / De-Mapper Table of Contents. High Level System and Function Block. Mapper description 2. Demodulator Function block 2. Decoder block 2.. De-Mapper 2..2 Implementation
More information4. Comparison of Two (K) Samples
4. Comparison of Two (K) Samples K=2 Problem: compare the survival distributions between two groups. E: comparing treatments on patients with a particular disease. Z: Treatment indicator, i.e. Z = 1 for
More information8 Basics of Hypothesis Testing
8 Basics of Hypothesis Testing 4 Problems Problem : The stochastic signal S is either 0 or E with equal probability, for a known value E > 0. Consider an observation X = x of the stochastic variable X
More informationMath Review 1: Vectors
Math Review 1: Vectors Coordinate System Coordinate system: used to describe the position of a point in space and consists of 1. An origin as the reference point 2. A set of coordinate axes with scales
More informationTrellis Coded Modulation
Trellis Coded Modulation Trellis coded modulation (TCM) is a marriage between codes that live on trellises and signal designs We have already seen that trellises are the preferred way to view convolutional
More information