
ÉCOLE POLYTECHNIQUE FÉDÉRALE DE LAUSANNE
School of Computer and Communication Sciences

Handout 0, Principles of Digital Communications
Solutions to Problem Set 4, Mar. 6, 2018

Solution 1.

If H = 0, we have Y_2 = Z_1 ⊕ Z_2 = Y_1 ⊕ Z_2, and if H = 1, we have Y_2 = 1 ⊕ Z_1 ⊕ Z_2 = Y_1 ⊕ Z_2. Therefore, Y_2 = Y_1 ⊕ Z_2 in all cases. Now, since Z_2 is independent of H, we clearly have H → Y_1 → (Y_1, Y_1 ⊕ Z_2). Hence, Y_1 is a sufficient statistic.

Solution 2.

(a) The MAP decoder Ĥ(y) is given by

    Ĥ(y) = arg max_i P_{Y|H}(y|i) = { 0  if y = 0 or y = 1,
                                      1  if y = 2 or y = 3.

T(Y) takes two values, with the conditional probabilities

    P_{T|H}(t|0) = { 0.7  if t = 0,          P_{T|H}(t|1) = { 0.3  if t = 0,
                     0.3  if t = 1,                           0.7  if t = 1.

Therefore, the MAP decoder Ĥ(T(y)) is

    Ĥ(T(y)) = arg max_i P_{T(Y)|H}(t|i) = { 0  if t = 0 (y = 0 or y = 1),
                                            1  if t = 1 (y = 2 or y = 3).

Hence, the two decoders are equivalent.

(b) We have

    Pr{Y = 0 | T(Y) = 0, H = 0} = Pr{Y = 0, T(Y) = 0 | H = 0} / Pr{T(Y) = 0 | H = 0} = 0.4/0.7 = 4/7

and

    Pr{Y = 0 | T(Y) = 0, H = 1} = Pr{Y = 0, T(Y) = 0 | H = 1} / Pr{T(Y) = 0 | H = 1} = 0.1/0.3 = 1/3.

Thus Pr{Y = 0 | T(Y) = 0, H = 0} ≠ Pr{Y = 0 | T(Y) = 0, H = 1}, hence H → T(Y) → Y is not true, although the MAP decoders are equivalent.
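As a quick numerical check of Solution 2, the following Python sketch confirms that the MAP decoders based on Y and on T(Y) coincide while Y is not conditionally independent of H given T(Y). The full table P_{Y|H} from the problem statement is not reproduced above, so the entries below are assumed values, chosen only to be consistent with the probabilities quoted in the solution.

```python
# Sketch for Solution 2 with an assumed P_{Y|H} table consistent with the values above.
import numpy as np

# P_{Y|H}(y|i): row i is the hypothesis, column y in {0,1,2,3}.
P_Y_given_H = np.array([
    [0.4, 0.3, 0.2, 0.1],   # assumed values for H = 0 (first two sum to 0.7)
    [0.1, 0.2, 0.3, 0.4],   # assumed values for H = 1 (first two sum to 0.3)
])

T = lambda y: 0 if y in (0, 1) else 1   # T(y) = 0 for y in {0,1}, 1 for y in {2,3}

# P_{T|H}(t|i), obtained by summing the columns of P_{Y|H} that map to t.
P_T_given_H = np.array([[P_Y_given_H[i, [0, 1]].sum(),
                         P_Y_given_H[i, [2, 3]].sum()] for i in range(2)])

# With a uniform prior, the MAP decisions from Y and from T(Y) agree for every y.
map_from_Y = [int(np.argmax(P_Y_given_H[:, y])) for y in range(4)]
map_from_T = [int(np.argmax(P_T_given_H[:, T(y)])) for y in range(4)]
print("MAP from Y   :", map_from_Y)
print("MAP from T(Y):", map_from_T)

# Pr{Y=0 | T=0, H=0} = 0.4/0.7 = 4/7 differs from Pr{Y=0 | T=0, H=1} = 0.1/0.3 = 1/3,
# so H -> T(Y) -> Y fails even though the two MAP decoders coincide.
print(P_Y_given_H[0, 0] / P_T_given_H[0, 0], P_Y_given_H[1, 0] / P_T_given_H[1, 0])
```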

Solution 3.

(a) The MAP decision rule can always be written as

    Ĥ(y) = arg max_i f_{Y|H}(y|i) P_H(i) = arg max_i g_i(T(y)) h(y) P_H(i) = arg max_i g_i(T(y)) P_H(i).

The last step is valid because h(y) is a non-negative constant which is independent of i and thus does not give any further information for our decision.

(b) Let us define the event B = {y : T(y) = t}. Then,

    f_{Y|H,T(Y)}(y|i, t) = f_{Y,T(Y)|H}(y, t|i) P_H(i) / [f_{T(Y)|H}(t|i) P_H(i)]
                         = Pr{Y = y, T(Y) = t | H = i} / Pr{T(Y) = t | H = i}
                         = Pr{Y = y, Y ∈ B | H = i} / Pr{Y ∈ B | H = i}
                         = f_{Y|H}(y|i) 1_B(y) / ∫_B f_{Y|H}(y'|i) dy'.

If f_{Y|H}(y|i) = g_i(T(y)) h(y), then

    f_{Y|H,T(Y)}(y|i, t) = g_i(T(y)) h(y) 1_B(y) / ∫_B g_i(T(y')) h(y') dy'
                         = g_i(t) h(y) 1_B(y) / [g_i(t) ∫_B h(y') dy']
                         = h(y) 1_B(y) / ∫_B h(y') dy'.

Hence, we see that f_{Y|H,T(Y)}(y|i, t) does not depend on i, so H → T(Y) → Y.

(c) Note that P_{Y_k|H}(1|i) = p_i, P_{Y_k|H}(0|i) = 1 − p_i, and

    P_{Y_1,...,Y_n|H}(y_1, ..., y_n|i) = P_{Y_1|H}(y_1|i) ··· P_{Y_n|H}(y_n|i).

Thus, we have

    P_{Y_1,...,Y_n|H}(y_1, ..., y_n|i) = p_i^t (1 − p_i)^(n−t),

where t = ∑_k y_k. Choosing g_i(t) = p_i^t (1 − p_i)^(n−t) and h(y) = 1, we see that P_{Y_1,...,Y_n|H}(y_1, ..., y_n|i) fulfills the condition in the question.

(d) Because Y_1, ..., Y_n are independent,

    f_{Y_1,...,Y_n|H}(y_1, ..., y_n|i) = ∏_{k=1}^n (1/√(2π)) e^(−(y_k − m_i)²/2)
                                       = (2π)^(−n/2) e^(−(1/2) ∑_{k=1}^n (y_k − m_i)²)
                                       = (2π)^(−n/2) e^(−(1/2) ∑_{k=1}^n y_k²) · e^(n m_i ((1/n) ∑_{k=1}^n y_k − m_i/2)).

Choosing g_i(t) = e^(n m_i (t − m_i/2)), with t = (1/n) ∑_{k=1}^n y_k, and h(y_1, ..., y_n) = (2π)^(−n/2) e^(−(1/2) ∑_k y_k²), we see that f_{Y_1,...,Y_n|H}(y_1, ..., y_n|i) = g_i(T(y_1, ..., y_n)) h(y_1, ..., y_n). Hence the condition in the question is fulfilled.
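A small numerical sketch of the factorization in Solution 3(c): with Y_1, ..., Y_n i.i.d. Bernoulli(p_i) under hypothesis H = i, the posterior of H given the observation depends on it only through t = ∑_k y_k. The values of p_0, p_1, n and the uniform prior below are illustrative assumptions.

```python
# Sketch for Solution 3(c): the posterior of H depends on y only through t = sum(y).
from itertools import product

p = [0.2, 0.6]   # assumed p_0, p_1
n = 5

def likelihood(y, i):
    t = sum(y)
    return p[i] ** t * (1 - p[i]) ** (n - t)   # g_i(t), with h(y) = 1

def posterior_H1(y):
    l0, l1 = likelihood(y, 0), likelihood(y, 1)
    return l1 / (l0 + l1)                      # uniform prior on H

# Group all binary sequences by t: within each group the posterior is constant.
by_t = {}
for y in product([0, 1], repeat=n):
    by_t.setdefault(sum(y), set()).add(round(posterior_H1(y), 12))

for t, values in sorted(by_t.items()):
    print(f"t = {t}: posterior values {values}")   # each set contains a single value
```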

Solution 4.

(a) With the observation Y being Y_1,

    f_{Y_1|X}(y_1|+1) = (1/√(2π)) e^(−(y_1−1)²/2)   and   f_{Y_1|X}(y_1|−1) = (1/√(2π)) e^(−(y_1+1)²/2).

Thus the MAP rule is

    (1/√(2π)) e^(−(y_1−1)²/2) p   ≷   (1/√(2π)) e^(−(y_1+1)²/2) (1 − p)

(deciding +1 when the left side is larger and −1 otherwise), which can be further simplified to obtain

    y_1   ≷   (1/2) ln((1 − p)/p).

(b) Observe that

    f_{Y_1 Y_2|X}(y_1, y_2|+1) = (1/2) 1{y_2 ∈ [0, 2]} (1/√(2π)) e^(−(y_1−1)²/2),
    f_{Y_1 Y_2|X}(y_1, y_2|−1) = (1/4) 1{y_2 ∈ [−3, 1]} (1/√(2π)) e^(−(y_1+1)²/2).

With

    g_{+1}(u, y_1) = 1{u ≥ 0} (1/(2√(2π))) e^(−(y_1−1)²/2),
    g_{−1}(u, y_1) = 1{u ≤ 0} (1/(4√(2π))) e^(−(y_1+1)²/2),
    h(y_1, y_2) = 1{−3 ≤ y_2 ≤ 2},

we find f_{Y_1 Y_2|X}(y_1, y_2|x) = g_x(u, y_1) h(y_1, y_2), and the Fisher–Neyman theorem lets us conclude that t = (u, y_1) is a sufficient statistic.

(c) The MAP rule minimizes the error probability and is given by the likelihood-ratio test

    Λ(y_1, y_2) = log [ f_{Y_1 Y_2|X}(y_1, y_2|+1) / f_{Y_1 Y_2|X}(y_1, y_2|−1) ]   ≷   log((1 − p)/p),

deciding +1 when Λ exceeds the threshold and −1 otherwise. Note that

    Λ(y_1, y_2) = 2 y_1 + log 2   if 0 ≤ y_2 ≤ 1,
                  +∞              if 1 < y_2 ≤ 2,
                  −∞              if −3 ≤ y_2 < 0.

So the decision region looks as follows (with θ = (1/2) log((1 − p)/(2p))):

[Figure: decision regions "decide +1" and "decide −1" in the (y_1, y_2) plane, with y_2 ranging over [−3, 2] and the threshold θ on y_1.]
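A minimal Python sketch of the likelihood-ratio test in (c), using the conditional densities as written in (b) above; the prior p is an illustrative value, not taken from the problem statement. It confirms that on the overlap 0 ≤ y_2 ≤ 1 the LLR is 2 y_1 + log 2, so the MAP test reduces to comparing y_1 with θ.

```python
# Sketch for Solution 4(c): LLR test vs. the simplified threshold on y1.
import numpy as np

p = 0.3                                            # Pr{X = +1} (assumed value)

def llr(y1, y2):
    """Log-likelihood ratio log f(y|+1)/f(y|-1) for the densities of part (b)."""
    if 1 < y2 <= 2:
        return np.inf                              # only X = +1 can produce such y2
    if -3 <= y2 < 0:
        return -np.inf                             # only X = -1 can produce such y2
    f_plus  = 0.5  * np.exp(-(y1 - 1) ** 2 / 2)    # the 1/sqrt(2*pi) factor cancels
    f_minus = 0.25 * np.exp(-(y1 + 1) ** 2 / 2)
    return np.log(f_plus / f_minus)                # equals 2*y1 + log(2) here

theta = 0.5 * np.log((1 - p) / (2 * p))            # threshold from part (c)
for y1 in np.linspace(-2, 2, 9):
    via_llr       = llr(y1, 0.5) >= np.log((1 - p) / p)   # MAP test via the LLR
    via_threshold = y1 >= theta                           # simplified rule
    assert via_llr == via_threshold
    print(f"y1 = {y1:+.1f}: LLR = {llr(y1, 0.5):+.3f}, decide {'+1' if via_llr else '-1'}")
```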

(d) When −1 is sent, an error will happen either when y_2 > 1, or when 0 ≤ y_2 ≤ 1 and y_1 ≥ θ. The first of these cannot happen, and the second happens with probability (1/4) Q(1 + θ).

When +1 is sent, an error will happen either when y_2 < 0, or when 0 ≤ y_2 ≤ 1 and y_1 ≤ θ. The first of these cannot happen, and the second happens with probability (1/2) Q(1 − θ).

So the error probability is given by

    ((1 − p)/4) Q(1 + θ) + (p/2) Q(1 − θ),

with θ = (1/2) log((1 − p)/(2p)).
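A Monte Carlo sketch of the error probability in (d), under the conditional densities as written above (Y_1 | X = ±1 ~ N(±1, 1); Y_2 uniform on [0, 2] given +1 and on [−3, 1] given −1, independent of Y_1 given X). The prior p, the random seed, and the sample size are illustrative choices.

```python
# Monte Carlo check of the error probability formula in Solution 4(d).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
p = 0.3                                       # Pr{X = +1} (assumed value)
theta = 0.5 * np.log((1 - p) / (2 * p))       # threshold from part (c)
Q = norm.sf                                   # Q-function

def decide(y1, y2):
    if y2 > 1:                                # only +1 can produce y2 in (1, 2]
        return +1
    if y2 < 0:                                # only -1 can produce y2 in [-3, 0)
        return -1
    return +1 if y1 >= theta else -1          # threshold rule on the overlap

n, errors = 200_000, 0
for _ in range(n):
    x = +1 if rng.random() < p else -1
    y1 = rng.normal(loc=x, scale=1.0)
    y2 = rng.uniform(0, 2) if x == +1 else rng.uniform(-3, 1)
    errors += (decide(y1, y2) != x)

analytic = (1 - p) / 4 * Q(1 + theta) + p / 2 * Q(1 - theta)
print("Monte Carlo estimate:", errors / n)
print("Analytic expression :", analytic)
```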

Solution 5.

(a) Inequality (a) follows from the Bhattacharyya bound. Using the definition of a DMC, it is straightforward to see that

    P_{Y|X}(y|c_0) = ∏_{i=1}^n P_{Y|X}(y_i|c_{0,i})   and   P_{Y|X}(y|c_1) = ∏_{i=1}^n P_{Y|X}(y_i|c_{1,i}),

and (b) follows by substituting the above values in (a). Equality (c) is obtained by observing that ∑_y is the same as ∑_{y_1,...,y_n} (the first one being a vector notation for the sum over all possible y_1, ..., y_n). In (c), we see that we want the sum of all possible products. This is the same as summing over each y_i and taking the product of the resulting sums for all i. This results in equality (d). We obtain (e) by writing (d) in a more concise form.

When c_{0,i} = c_{1,i},

    √( P_{Y|X}(y_i|c_{0,i}) P_{Y|X}(y_i|c_{1,i}) ) = P_{Y|X}(y_i|c_{0,i}).

Therefore,

    ∑_{y_i} √( P_{Y|X}(y_i|c_{0,i}) P_{Y|X}(y_i|c_{1,i}) ) = ∑_{y_i} P_{Y|X}(y_i|c_{0,i}) = 1.

This does not affect the product, so we are only interested in the terms where c_{0,i} ≠ c_{1,i}. We form the product of all such sums where c_{0,i} ≠ c_{1,i}. We then look out for terms where c_{0,i} = a and c_{1,i} = b, a ≠ b, and raise the sum to the appropriate power. (E.g., if we have the product prpqrpqrr, we would write it as p³q²r⁴.) Hence equality (f).

(b) For a binary input channel, we have only two source symbols, X = {a, b}. Thus,

    P_e ≤ z^(n(a,b)) z^(n(b,a)) = z^(n(a,b)+n(b,a)) = z^(d_H(c_0, c_1)).

(c) The value of z is:

(i) For a binary input Gaussian channel,

    z = ∫ √( f_{Y|X}(y|0) f_{Y|X}(y|1) ) dy = exp(−E/2).

(ii) For the Binary Symmetric Channel (BSC),

    z = √( Pr{y = 0|x = 0} Pr{y = 0|x = 1} ) + √( Pr{y = 1|x = 0} Pr{y = 1|x = 1} ) = 2 √( δ(1 − δ) ).

(iii) For the Binary Erasure Channel (BEC),

    z = √( Pr{y = 0|x = 0} Pr{y = 0|x = 1} ) + √( Pr{y = E|x = 0} Pr{y = E|x = 1} ) + √( Pr{y = 1|x = 0} Pr{y = 1|x = 1} )
      = 0 + δ + 0 = δ.
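A numerical sketch for Solution 5(c). For the Gaussian case it assumes antipodal inputs ±√E in unit-variance noise, which is the reading behind exp(−E/2) above; the values of E and δ are illustrative.

```python
# Numerical check of the Bhattacharyya parameters in Solution 5(c).
import numpy as np
from scipy.integrate import quad

E, delta = 1.7, 0.1   # illustrative values

# (i) Binary-input Gaussian channel with inputs +-sqrt(E), noise variance 1 (assumption).
f = lambda y, m: np.exp(-(y - m) ** 2 / 2) / np.sqrt(2 * np.pi)
z_gauss, _ = quad(lambda y: np.sqrt(f(y, np.sqrt(E)) * f(y, -np.sqrt(E))), -20, 20)
print(z_gauss, np.exp(-E / 2))                    # the two numbers agree

# (ii) BSC(delta): sum over y in {0, 1} of sqrt(P(y|0) P(y|1)).
z_bsc = np.sqrt((1 - delta) * delta) + np.sqrt(delta * (1 - delta))
print(z_bsc, 2 * np.sqrt(delta * (1 - delta)))

# (iii) BEC(delta): only the erasure output has positive probability under both inputs.
z_bec = np.sqrt((1 - delta) * 0) + np.sqrt(delta * delta) + np.sqrt(0 * (1 - delta))
print(z_bec, delta)
```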

Solution 6.

By symmetry:

    P_{00} = Pr{(N_1 ≤ a) ∩ (N_2 ≤ a)} = Pr{N_1 ≤ a} Pr{N_2 ≤ a} = [1 − Q(a)]²,

    P_{01} = P_{03} = Pr{(N_1 ≥ b − a) ∩ (N_2 ≤ a)} = Pr{N_1 ≥ b − a} Pr{N_2 ≤ a} = Q(b − a) [1 − Q(a)],

    P_{02} = Pr{(N_1 ≥ b − a) ∩ (N_2 ≥ b − a)} = Pr{N_1 ≥ b − a} Pr{N_2 ≥ b − a} = [Q(b − a)]².

Therefore,

    P_{0δ} = Pr{(Y ∉ R_0) ∩ (Y ∉ R_1) ∩ (Y ∉ R_2) ∩ (Y ∉ R_3) | c_0 was sent}
           = 1 − P_{00} − P_{01} − P_{02} − P_{03}
           = 1 − [1 − Q(a)]² − 2 Q(b − a)[1 − Q(a)] − [Q(b − a)]²
           = 1 − [1 − Q(a) + Q(b − a)]².

Equivalently,

    P_{0δ} = Pr{(N_1 ∈ [a, b − a]) ∪ (N_2 ∈ [a, b − a])}
           = Pr{N_1 ∈ [a, b − a]} + Pr{N_2 ∈ [a, b − a]} − Pr{(N_1 ∈ [a, b − a]) ∩ (N_2 ∈ [a, b − a])}
           = 2 [Q(a) − Q(b − a)] − [Q(a) − Q(b − a)]²,

which gives the same result as before.
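A numerical sketch for Solution 6, assuming N_1 and N_2 are i.i.d. standard Gaussian, as in the Q(·) expressions above; the values of a and b are illustrative (with b > 2a so that the interval [a, b − a] is non-empty). It checks the transition probabilities by Monte Carlo and verifies that the two expressions for P_{0δ} agree.

```python
# Monte Carlo / algebraic check of the expressions in Solution 6.
import numpy as np
from scipy.stats import norm

Q = norm.sf                        # Q-function (Gaussian tail)
a, b = 0.8, 2.5                    # illustrative geometry parameters (b > 2a)
rng = np.random.default_rng(1)
N1, N2 = rng.standard_normal((2, 500_000))

P00 = np.mean((N1 <= a) & (N2 <= a))
P02 = np.mean((N1 >= b - a) & (N2 >= b - a))
P0d = np.mean(((N1 >= a) & (N1 <= b - a)) | ((N2 >= a) & (N2 <= b - a)))

print(P00, (1 - Q(a)) ** 2)                                   # [1 - Q(a)]^2
print(P02, Q(b - a) ** 2)                                     # [Q(b - a)]^2
print(P0d, 1 - (1 - Q(a) + Q(b - a)) ** 2)                    # first expression
print(P0d, 2 * (Q(a) - Q(b - a)) - (Q(a) - Q(b - a)) ** 2)    # second expression
```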