Introduction to Coding Theory - Spring 2010

Solutions 1

Exercise 1.1. See Examples 1.2 and 1.11 in the course notes.

Exercise 1.2. Observe that the Hamming distance between two vectors is the minimum number of bit flips required to transform one into the other. Using this, the first three conditions are trivial to verify. As for the triangle inequality
$$d(x, z) \le d(x, y) + d(y, z), \qquad (1)$$
consider each position $i$ of the vectors $x$, $y$ and $z$. If $x_i = z_i$, the corresponding position contributes 0 to the left-hand side of equation (1). In this case, either $y_i = x_i = z_i$, thus contributing 0 to the right-hand side as well, or $y_i \ne x_i, z_i$, thus contributing 2 to the right-hand side. If $x_i \ne z_i$, so that position $i$ contributes 1 to the left-hand side of equation (1), then $y_i$ must differ from at least one of $x_i$ and $z_i$, thus contributing at least 1 to the right-hand side. Summing over all positions $i$ we readily obtain the triangle inequality.

Exercise 1.3. This is very similar to the case of $\mathrm{BSC}(\varepsilon)$ considered in the course notes. For a received vector $y \in \Sigma^n$ and any codeword $z$, we have
$$p(y \mid z) = \prod_{i=1}^{n} p(y_i \mid z_i).$$
From the definition of our channel, $p(y_i \mid z_i) = \varepsilon/(q-1)$ for $y_i \ne z_i$ (this is the case for $d(y, z)$ coordinates) and $p(y_i \mid z_i) = 1 - \varepsilon$ for $y_i = z_i$ (this is the case for $n - d(y, z)$ coordinates). Therefore
$$p(y \mid z) = \left(\frac{\varepsilon}{q-1}\right)^{d(y,z)} (1 - \varepsilon)^{n - d(y,z)} = (1 - \varepsilon)^n \left(\frac{\varepsilon/(q-1)}{1 - \varepsilon}\right)^{d(y,z)}.$$
Since $\varepsilon \le (q-1)/q$, the ratio $\frac{\varepsilon/(q-1)}{1 - \varepsilon} \le 1$, so that the codeword $z$ that maximizes $p(y \mid z)$ is the one that minimizes $d(y, z)$.
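In other words, maximum-likelihood decoding on the $q$-ary symmetric channel reduces to nearest-codeword search in Hamming distance. A minimal Python sketch of this rule; the ternary codebook and received word below are invented for illustration:

```python
def hamming_distance(x, y):
    """Number of positions in which x and y differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def ml_decode(y, codebook):
    """On a q-ary symmetric channel with epsilon <= (q-1)/q, maximum-likelihood
    decoding is minimum Hamming distance decoding."""
    return min(codebook, key=lambda z: hamming_distance(y, z))

# Toy ternary repetition codebook (hypothetical, for illustration only).
codebook = [(0, 0, 0, 0), (1, 1, 1, 1), (2, 2, 2, 2)]
y = (1, 1, 2, 1)                 # received word with one corrupted symbol
print(ml_decode(y, codebook))    # -> (1, 1, 1, 1)
```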

Exercise. 1. Let $A(n) := H\left(\frac{1}{n}, \ldots, \frac{1}{n}\right)$. We first show that
$$A(s^m) = m \, A(s). \qquad (2)$$
To see this, note that $A(s^m) = H\left(\frac{1}{s^m}, \ldots, \frac{1}{s^m}\right)$ corresponds to a choice between $s^m$ equally likely events. We can group each $s$ of these events together using Axiom 3. For example, grouping the first $s$ events gives us
$$A(s^m) = H\left(\frac{1}{s^{m-1}}, \frac{1}{s^m}, \ldots, \frac{1}{s^m}\right) + \frac{1}{s^{m-1}} \, A(s).$$
Similarly grouping all the other events $s$ by $s$, we obtain
$$A(s^m) = A(s^{m-1}) + A(s).$$
We can now repeat this procedure recursively to obtain
$$A(s^m) = A(s^{m-1}) + A(s) = A(s^{m-2}) + 2 A(s) = \cdots = m \, A(s).$$
Now for $s$ and $t$ integers, and for $n$ arbitrarily large, we can always find $m$ such that
$$s^m \le t^n < s^{m+1}. \qquad (3)$$
On one hand, taking logarithms and dividing by $n \log s$, this gives us
$$\frac{m}{n} \le \frac{\log t}{\log s} < \frac{m}{n} + \frac{1}{n}. \qquad (4)$$
On the other hand, from Axiom 2, we know that $A$ is a monotonically increasing function of its argument, so that equation (3) gives us $A(s^m) \le A(t^n) < A(s^{m+1})$. From equation (2), this is equivalent to saying that
$$m \, A(s) \le n \, A(t) < (m+1) \, A(s),$$
which gives us
$$\frac{m}{n} \le \frac{A(t)}{A(s)} < \frac{m}{n} + \frac{1}{n}. \qquad (5)$$
As we let $n$ grow to infinity, equations (4) and (5) give us
$$\frac{A(t)}{A(s)} = \lim_{n \to \infty} \frac{m}{n} = \frac{\log t}{\log s},$$
so that $A(t)$ must be of the form $A(t) = K \log t$ for a constant $K$, where $K$ must be positive to satisfy Axiom 2.
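Equation (2) and the resulting formula $A(t) = K \log t$ are easy to check numerically. A small Python sketch, taking logarithms in base 2 so that $K = 1$:

```python
import math

def entropy(p):
    """Shannon entropy in bits, with the convention 0 log 0 = 0."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def A(n):
    """A(n) = H(1/n, ..., 1/n), the entropy of n equally likely events."""
    return entropy([1.0 / n] * n)

s, m = 3, 4
print(A(s**m), m * A(s))     # both equal m * log2(s), about 6.3399
print(A(5) / math.log2(5))   # K = 1 with logarithms in base 2
```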

2. Suppose the $p_i$ are commensurable probabilities, so that $p_i = \frac{n_i}{\sum_j n_j}$ for positive integers $n_i$. Consider choosing an event from $\sum_i n_i$ equiprobable events. From the expression we derived above for $A(n)$, we know that the entropy of this choice is $K \log \sum_i n_i$. But using Axiom 3, we can also view this choice in the following equivalent manner: we can break down a choice from $\sum_j n_j$ equiprobable events into a choice from $n$ events with probabilities $p_1, \ldots, p_n$, followed, if the $i$th event is chosen, by a second choice between $n_i$ equiprobable events. The entropy of this composite choice is
$$H(p_1, \ldots, p_n) + \sum_i p_i K \log n_i.$$
We thus obtain
$$\begin{aligned} H(p_1, \ldots, p_n) &= K \left( \log \sum_i n_i - \sum_i p_i \log n_i \right) \\ &= K \left( \sum_i p_i \log \sum_j n_j - \sum_i p_i \log n_i \right) \\ &= -K \sum_i p_i \log \frac{n_i}{\sum_j n_j} \\ &= -K \sum_i p_i \log p_i. \end{aligned}$$

3. Now suppose the $p_i$ are incommensurable. Since the rationals are dense in the reals, we can approximate the $p_i$ with rational numbers: we can find rationals $\tilde{p}_1, \ldots, \tilde{p}_{n-1}$ such that $|\tilde{p}_i - p_i| < \varepsilon$ for any $\varepsilon > 0$. Define $\tilde{p}_n$ as $1 - \sum_{i=1}^{n-1} \tilde{p}_i$. This ensures that $(\tilde{p}_1, \ldots, \tilde{p}_n)$ is indeed a probability distribution, and $|\tilde{p}_n - p_n| < (n-1)\varepsilon$ can be made as small as we want. By continuity of $H$ (Axiom 1), $H(\tilde{p}_1, \ldots, \tilde{p}_n)$ tends to $H(p_1, \ldots, p_n)$ as $\varepsilon \to 0$, and by the commensurable case $H(\tilde{p}_1, \ldots, \tilde{p}_n) = -K \sum_i \tilde{p}_i \log \tilde{p}_i$. But by continuity of the function $f(x_1, \ldots, x_n) = -K \sum_i x_i \log x_i$ (defined over real probability vectors $(x_1, \ldots, x_n)$), $-K \sum_i \tilde{p}_i \log \tilde{p}_i$ tends to $-K \sum_i p_i \log p_i$. Thus the expression holds in general.

Exercise. 1. We have
$$\begin{aligned} I(X; Y) &= \sum_{x,y} p(x, y) \log \frac{p(x, y)}{p(x) \, p(y)} = \sum_{x,y} p(x, y) \log \frac{p(x \mid y)}{p(x)} \\ &= -\sum_{x,y} p(x, y) \log p(x) + \sum_{x,y} p(x, y) \log p(x \mid y) \\ &= -\sum_x p(x) \log p(x) + \sum_{x,y} p(x, y) \log p(x \mid y) \\ &= H(X) - H(X \mid Y). \end{aligned}$$
We prove similarly that
$$I(X; Y) = H(Y) - H(Y \mid X).$$
Likewise,
$$\begin{aligned} I(X; Y) &= \sum_{x,y} p(x, y) \log \frac{p(x, y)}{p(x) \, p(y)} \\ &= -\sum_{x,y} p(x, y) \log p(x) - \sum_{x,y} p(x, y) \log p(y) + \sum_{x,y} p(x, y) \log p(x, y) \\ &= -\sum_x p(x) \log p(x) - \sum_y p(y) \log p(y) + \sum_{x,y} p(x, y) \log p(x, y) \\ &= H(X) + H(Y) - H(X, Y). \end{aligned}$$
We can clearly see that $I(X; Y)$ is symmetric in its arguments.
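These three expressions for $I(X; Y)$ are easy to sanity-check numerically. A minimal Python sketch; the joint pmf is made up for illustration:

```python
import math

def H(dist):
    """Shannon entropy in bits of an iterable of probabilities."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# An arbitrary joint pmf p(x, y) on {0,1} x {0,1}.
pxy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
px = {x: sum(p for (a, _), p in pxy.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (_, b), p in pxy.items() if b == y) for y in (0, 1)}

# Direct definition of mutual information.
I = sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())
print(I)
print(H(px.values()) + H(py.values()) - H(pxy.values()))  # H(X)+H(Y)-H(X,Y)
```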

For the special case $Y = X$, we have
$$I(X; X) = \sum_x p(x, x) \log \frac{p(x, x)}{p(x) \, p(x)} = \sum_x p(x) \log \frac{1}{p(x)} = H(X).$$
We could also obtain this formula by noting that $I(X; X) = H(X) - H(X \mid X) = H(X)$.

2. Using the chain rule for two variables, we have
$$H(X_1, X_2) = H(X_1) + H(X_2 \mid X_1),$$
$$H(X_1, X_2, X_3) = H(X_1) + H(X_2, X_3 \mid X_1) = H(X_1) + H(X_2 \mid X_1) + H(X_3 \mid X_2, X_1).$$
Continuing in this manner, we obtain
$$H(X_1, \ldots, X_n) = H(X_1) + H(X_2 \mid X_1) + \cdots + H(X_n \mid X_{n-1}, \ldots, X_1) = \sum_{i=1}^n H(X_i \mid X_{i-1}, \ldots, X_1).$$

3. To prove the chain rule for relative entropy, note that
$$\begin{aligned} D(p(x, y) \,\|\, q(x, y)) &= \sum_x \sum_y p(x, y) \log \frac{p(x, y)}{q(x, y)} \\ &= \sum_x \sum_y p(x, y) \log \frac{p(x) \, p(y \mid x)}{q(x) \, q(y \mid x)} \\ &= \sum_x \sum_y p(x, y) \log \frac{p(x)}{q(x)} + \sum_x \sum_y p(x, y) \log \frac{p(y \mid x)}{q(y \mid x)} \\ &= D(p(x) \,\|\, q(x)) + D(p(y \mid x) \,\|\, q(y \mid x)). \end{aligned}$$
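The chain rule for relative entropy can likewise be verified on a toy example. A minimal Python sketch; both joint pmfs on $\{0,1\} \times \{0,1\}$ are invented for illustration:

```python
import math

def D(p, q):
    """Relative entropy D(p || q) in bits; assumes q > 0 wherever p > 0."""
    return sum(a * math.log2(a / b) for a, b in zip(p, q) if a > 0)

# Joint pmfs listed as [p(0,0), p(0,1), p(1,0), p(1,1)].
p = [0.4, 0.1, 0.2, 0.3]
q = [0.25, 0.25, 0.25, 0.25]

# Marginals on x, then the conditional term D(p(y|x) || q(y|x)).
px, qx = [p[0] + p[1], p[2] + p[3]], [q[0] + q[1], q[2] + q[3]]
cond = sum(
    px[x] * D([p[2 * x + y] / px[x] for y in (0, 1)],
              [q[2 * x + y] / qx[x] for y in (0, 1)])
    for x in (0, 1)
)
print(D(p, q))           # left-hand side of the chain rule
print(D(px, qx) + cond)  # right-hand side; the two agree
```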

Exercise. 1. Let $\chi$ be the alphabet of the random variable $X$ and let $A = \{x : p(x) > 0\}$ be the support set of the probability distribution $p$. We have
$$\begin{aligned} -D(p \,\|\, q) &= -\sum_{x \in A} p(x) \log \frac{p(x)}{q(x)} = \sum_{x \in A} p(x) \log \frac{q(x)}{p(x)} \\ &\le \log \sum_{x \in A} p(x) \frac{q(x)}{p(x)} = \log \sum_{x \in A} q(x) \\ &\le \log \sum_{x \in \chi} q(x) = \log 1 = 0, \end{aligned}$$
where the first inequality is Jensen's inequality applied to the strictly concave function $\log t$; since the concavity is strict, equality there requires $q(x)/p(x)$ to be constant on $A$, and equality in the second inequality then forces this constant to be 1. Therefore
$$D(p \,\|\, q) \ge 0 \qquad (6)$$
with equality if and only if $p(x) = q(x)$ for all $x$.

For any pair $X, Y$ of random variables, $I(X; Y) = D(p(x, y) \,\|\, p(x) \, p(y))$. Equation (6) gives us
$$I(X; Y) \ge 0, \qquad (7)$$
with equality if and only if $p(x, y) = p(x) \, p(y)$ for all values $x, y$, that is, if and only if $X$ and $Y$ are independent.

2. Let $X$ take values over $\chi$ with some probability distribution $p$, and let $u$ be the uniform distribution over $\chi$, so that $u(x) = \frac{1}{|\chi|}$ for all $x$. Consider the quantity
$$D(p \,\|\, u) = \sum_x p(x) \log \frac{p(x)}{u(x)} = \sum_x p(x) \log p(x) - \sum_x p(x) \log u(x) = \log |\chi| - H(X).$$
From equation (6), we have that
$$H(X) \le \log |\chi|,$$
with equality if and only if $p$ and $u$ are the same distribution.

3. From equation (7), we have
$$I(X; Y) = H(X) - H(X \mid Y) \ge 0,$$
so that
$$H(X \mid Y) \le H(X),$$
with equality if and only if $I(X; Y) = 0$, i.e., if and only if $X$ and $Y$ are independent. Thus conditioning reduces entropy.
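The identity $D(p \,\|\, u) = \log |\chi| - H(X)$ and the resulting bound are easy to confirm numerically. A small Python sketch with a randomly generated distribution:

```python
import math
import random

def H(p):
    """Shannon entropy in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def D(p, q):
    """Relative entropy D(p || q) in bits."""
    return sum(a * math.log2(a / b) for a, b in zip(p, q) if a > 0)

# A random distribution on an alphabet of size 8, versus the uniform one.
raw = [random.random() for _ in range(8)]
p = [x / sum(raw) for x in raw]
u = [1 / 8] * 8

print(D(p, u), math.log2(8) - H(p))  # equal: D(p||u) = log|chi| - H(X)
print(H(p) <= math.log2(8))          # True: the uniform law maximizes entropy
```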

In the previous exercise, we saw the chain rule for the entropy of $n$ variables:
$$H(X_1, \ldots, X_n) = \sum_{i=1}^n H(X_i \mid X_{i-1}, \ldots, X_1).$$
Each conditional entropy term satisfies
$$H(X_i \mid X_{i-1}, \ldots, X_1) \le H(X_i),$$
with equality if and only if $X_i$ is independent of the $(i-1)$-tuple $X_1, \ldots, X_{i-1}$. We finally get
$$H(X_1, \ldots, X_n) \le \sum_{i=1}^n H(X_i),$$
with equality if and only if the $X_i$ are independent.
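To see how loose this bound can be under dependence, consider a perfectly correlated pair, for which the joint entropy is 1 bit while the bound gives 2 bits. A minimal Python sketch:

```python
import math

def H(dist):
    """Shannon entropy in bits, skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Perfectly correlated pair: X uniform on {0, 1} and Y = X.
pxy = [0.5, 0.0, 0.0, 0.5]   # p(0,0), p(0,1), p(1,0), p(1,1)
px = [0.5, 0.5]
py = [0.5, 0.5]

print(H(pxy))          # 1.0 bit: the pair carries a single bit
print(H(px) + H(py))   # 2.0 bits: the independence bound, loose here
```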
