EQUIVALENT FORMULATIONS OF HYPERCONTRACTIVITY USING INFORMATION MEASURES


CHANDRA NAIR
Dept. of Information Engineering, The Chinese University of Hong Kong
Corresponding author: Chandra Nair, chandra.nair@gmail.com. © The Author(s) 2014.

Abstract. We derive alternate characterizations for the hypercontractive region of a pair of random variables using information measures.

1. Introduction

A pair of random variables $(X, Y)$ defined on some probability space $(\Omega, \mathcal{F}, \mu)$ is said to be $(p, q)$-hypercontractive for $1 \le q \le p < \infty$ if the inequality
$$\| \mathrm{E}[g(Y) \mid X] \|_p \le \| g(Y) \|_q$$
holds for every bounded measurable function $g(Y)$. The following is another well-known equivalent definition (the equivalence being a direct application of Hölder's inequality): a pair of random variables $(X, Y)$ defined on some probability space $(\Omega, \mathcal{F}, \mu)$ is said to be $(p, q)$-hypercontractive for $1 \le q \le p < \infty$ if the inequality
$$\mathrm{E}[f(X) g(Y)] \le \| f(X) \|_{p'} \, \| g(Y) \|_q$$
holds for every pair of bounded measurable functions $f(X), g(Y)$. Here $p' = \frac{p}{p-1}$ denotes the Hölder conjugate of $p$. For any $p \ge 1$ one can define
$$q_p(X; Y) = \inf \{ q : (X, Y) \text{ is } (p, q)\text{-hypercontractive} \}.$$
Define the ratio $r_p(X; Y) = \frac{q_p(X; Y)}{p}$.

Hypercontractive inequalities have found a variety of applications in quantum physics [3], theoretical computer science [4], analysis [6], and in information theory [1, 2]. In this talk we present the following alternate characterizations of $r_p(X; Y)$ using information measures.

A very useful property of the hypercontractive parameter $r_p(X; Y)$ is the so-called tensorization property, which states: if $\{(X_i, Y_i)\}_{i=1}^n$ are independent random variables, then $r_p(X^n; Y^n) = \max_{i=1}^{n} r_p(X_i; Y_i)$.

For the purpose of this manuscript, let us assume that the random variables $X, Y$ take values in a finite alphabet space $\mathcal{X} \times \mathcal{Y}$. Thus we can talk about a probability mass function $\mu_{XY}(x, y)$ and the induced marginal distributions $\mu_X(x), \mu_Y(y)$. We assume, without loss of generality, that $\mu_X(x) > 0$ for all $x \in \mathcal{X}$ and $\mu_Y(y) > 0$ for all $y \in \mathcal{Y}$. We will omit the subscripts on the measures when there is no ambiguity.
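To make the definitions above concrete, the following small numerical sketch (not part of the original manuscript) estimates $q_p(X; Y)$, and hence $r_p(X; Y)$, for a finite-alphabet pair by bisecting over $q$ and randomly searching over non-negative test functions $g$. The function names, the random-search heuristic, and the doubly symmetric binary source used as the example are illustrative assumptions.

# A minimal numerical sketch (not from the manuscript): for a finite joint pmf mu_XY,
# estimate q_p(X;Y) = inf{q : ||E[g(Y)|X]||_p <= ||g(Y)||_q for all g >= 0}
# by randomly searching over test functions g and bisecting over q.
import numpy as np

rng = np.random.default_rng(0)

def norms_ratio(mu_xy, g, p, q):
    """Return ||E[g(Y)|X]||_p / ||g(Y)||_q under the joint pmf mu_xy (rows index x)."""
    mu_x = mu_xy.sum(axis=1)
    mu_y = mu_xy.sum(axis=0)
    cond_y_given_x = mu_xy / mu_x[:, None]          # row x is P(Y = . | X = x)
    Eg_given_x = cond_y_given_x @ g                  # E[g(Y) | X = x]
    lhs = (mu_x @ Eg_given_x**p) ** (1.0 / p)        # ||E[g(Y)|X]||_p
    rhs = (mu_y @ g**q) ** (1.0 / q)                 # ||g(Y)||_q
    return lhs / rhs

def is_hypercontractive(mu_xy, p, q, trials=20000):
    """Heuristically test (p,q)-hypercontractivity by random search over g >= 0."""
    ny = mu_xy.shape[1]
    for _ in range(trials):
        g = rng.exponential(size=ny)                 # random non-negative test function
        if norms_ratio(mu_xy, g, p, q) > 1 + 1e-9:
            return False
    return True

def estimate_q_p(mu_xy, p, tol=1e-3):
    """Bisect over q in [1, p] for the smallest q that (heuristically) works."""
    lo, hi = 1.0, p
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_hypercontractive(mu_xy, p, mid):
            hi = mid
        else:
            lo = mid
    return hi

if __name__ == "__main__":
    rho = 0.1                                        # doubly symmetric binary source, crossover rho
    mu = np.array([[(1 - rho) / 2, rho / 2],
                   [rho / 2, (1 - rho) / 2]])
    p = 2.0
    q_p = estimate_q_p(mu, p)
    print("estimated q_p:", q_p, " ratio r_p:", q_p / p)

For the doubly symmetric binary source the classical Bonami-Beckner computation gives $q_p(X; Y) = 1 + (p - 1)(1 - 2\rho)^2$, so with $\rho = 0.1$ and $p = 2$ the search should return a value close to $1.64$ (i.e. $r_p \approx 0.82$); since the search over $g$ is heuristic, the returned value can only underestimate $q_p$.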

Consider two measures $\nu$ and $\mu$ on a space, say $\mathcal{X}$. If the measure $\nu$ is absolutely continuous with respect to the measure $\mu$, then we denote it as $\nu \ll \mu$. Let
$$D(\nu(x) \| \mu(x)) = \sum_{x \in \mathcal{X}} \nu(x) \log_2 \frac{\nu(x)}{\mu(x)}$$
be the relative entropy between the two measures $\nu(x)$ and $\mu(x)$, when $\nu \ll \mu$. For $p \ge 1$, define
$$k_p(X; Y) := \sup_{\substack{\nu_{XY} \ll \mu_{XY} \\ \nu_{XY} \ne \mu_{XY}}} \frac{D(\nu(y) \| \mu(y))}{p D(\nu(x, y) \| \mu(x, y)) - (p - 1) D(\nu(x) \| \mu(x))}.$$

Let
$$\mathcal{P}_\mu = \Big\{ \nu_{UXY}(u, x, y) : u \in \mathcal{U}, \ |\mathcal{U}| < \infty, \ \sum_{u \in \mathcal{U}} \nu(u, x, y) = \mu(x, y) \ \forall x \in \mathcal{X}, y \in \mathcal{Y} \Big\}$$
be the collection of measures induced by random variables $(U, X, Y)$ such that the marginal law of $(X, Y)$ is consistent with $\mu(X, Y)$. Here $U$ is a random variable taking finitely many values and is often referred to as an auxiliary random variable in multiuser information theory. Later we will see that we can restrict the size of $\mathcal{U}$ to $|\mathcal{X}||\mathcal{Y}|$. For $p \ge 1$ define
$$u_p(X; Y) = \sup_{\nu_{UXY} \in \mathcal{P}_\mu} \frac{I(U; Y)}{p I(U; XY) - (p - 1) I(U; X)}.$$

Theorem 1. The following equivalence holds:
$$r_p(X; Y) = k_p(X; Y) = u_p(X; Y).$$
Further, the common value is also equal to
$$\inf \big\{ \lambda : H(Y) - \lambda p H(X, Y) + \lambda (p - 1) H(X) = \mathcal{K}[H(Y) - \lambda p H(X, Y) + \lambda (p - 1) H(X)]_{\mu_{XY}} \big\},$$
where $\mathcal{K}[f]_{\mu_{XY}}$ represents the lower convex envelope of the function $f$ evaluated at $\mu_{XY}$.

Remark: The above result generalizes the equivalence results in both [1] and [2], which deal with the limiting case $p \to \infty$.

2. Proof of the main result

The equality in Theorem 1 will be established by showing a sequence of inequalities. In particular, we will show that
(i) $r_p(X; Y) \le k_p(X; Y)$,
(ii) $k_p(X; Y) \le u_p(X; Y)$,
(iii) $u_p(X; Y) \le r_p(X; Y)$.

Proof of $r_p(X; Y) \le k_p(X; Y)$: Given $p > 1$ and $\epsilon > 0$ arbitrary, let non-negative functions $f(X)$ and $g(Y)$ satisfy
$$\mathrm{E}(f(X) g(Y)) > \| f(X) \|_{p'} \, \| g(Y) \|_{(r_p - \epsilon) p}. \tag{1}$$
(Such $f$ and $g$ exist since $(r_p - \epsilon)p < q_p(X; Y)$, so $(X, Y)$ is not $(p, (r_p - \epsilon)p)$-hypercontractive.) W.l.o.g. assume that $\| f(X) \|_{p'} = \| g(Y) \|_{(r_p - \epsilon) p} = 1$. Define $f(x)^{p'} = h(x)$ and $g(y)^{(r_p - \epsilon) p} = j(y)$. We have $\sum_x \mu(x) h(x) = \sum_y \mu(y) j(y) = 1$. Since $\mathrm{E}(f(X) g(Y)) > 1$, let $C < 1$ be such that $\sum_{x, y} C \mu(x, y) h(x)^{\frac{1}{p'}} j(y)^{\frac{1}{(r_p - \epsilon) p}} = 1$. Define
$$\nu(x, y) := C \mu(x, y) h(x)^{\frac{1}{p'}} j(y)^{\frac{1}{(r_p - \epsilon) p}}.$$
One easily verifies that $\nu_{XY} \ll \mu_{XY}$ and $\nu_{XY} \ne \mu_{XY}$.

Now observe that
$$p D(\nu(x, y) \| \mu(x, y)) - (p - 1) D(\nu(x) \| \mu(x))$$
$$= p \log C + (p - 1) \sum_x \nu(x) \log h(x) + \frac{1}{r_p - \epsilon} \sum_y \nu(y) \log j(y) - (p - 1) \sum_x \nu(x) \log \frac{\nu(x)}{\mu(x)}$$
$$= p \log C + (p - 1) \sum_x \nu(x) \log \frac{\mu(x) h(x)}{\nu(x)} + \frac{1}{r_p - \epsilon} \sum_y \nu(y) \log \frac{\mu(y) j(y)}{\nu(y)} + \frac{1}{r_p - \epsilon} \sum_y \nu(y) \log \frac{\nu(y)}{\mu(y)}$$
$$\le \frac{1}{r_p - \epsilon} \sum_y \nu(y) \log \frac{\nu(y)}{\mu(y)} = \frac{1}{r_p - \epsilon} D(\nu(y) \| \mu(y)).$$
Here the last inequality follows since $C < 1$, $\sum_x \mu(x) h(x) = 1$, and $\sum_y \mu(y) j(y) = 1$ (so that, by Jensen's inequality, the second and third terms are non-positive). Thus $k_p(X; Y) \ge r_p - \epsilon$, and since $\epsilon > 0$ is arbitrary we are done.

Proof of $k_p(X; Y) \le u_p(X; Y)$: The argument below is identical to the argument in [2], which in turn is motivated by similar arguments in [5]. Let $\delta \in (0, k_p(X; Y))$ be arbitrary. Let $\nu_{XY} \ll \mu_{XY}$, $\nu_{XY} \ne \mu_{XY}$, be any distribution satisfying
$$\frac{D(\nu(y) \| \mu(y))}{p D(\nu(x, y) \| \mu(x, y)) - (p - 1) D(\nu(x) \| \mu(x))} > k_p(X; Y) - \delta.$$
Let $\mathcal{U}_\epsilon := \{1, 2\}$. Fix a sufficiently small¹ $\epsilon > 0$ and define a triple $(U_\epsilon, X, Y)$ according to:
$$P(U_\epsilon = 1) = \epsilon; \quad \text{conditional distribution of } (X, Y) \text{ given } \{U_\epsilon = 1\} \text{ is } \nu_{XY},$$
$$P(U_\epsilon = 2) = 1 - \epsilon; \quad \text{conditional distribution of } (X, Y) \text{ given } \{U_\epsilon = 2\} \text{ is } \mu_{XY} + \tfrac{\epsilon}{1 - \epsilon}(\mu_{XY} - \nu_{XY}) = \tfrac{1}{1 - \epsilon}\mu_{XY} - \tfrac{\epsilon}{1 - \epsilon}\nu_{XY}.$$
Note that for any such $\epsilon > 0$ the distribution of $(U_\epsilon, X, Y)$ belongs to $\mathcal{P}_\mu$. For any $0 < \lambda < k_p(X; Y) - \delta$ define the function
$$g(\epsilon) := I(U_\epsilon; Y) - \lambda \big( p I(U_\epsilon; X, Y) - (p - 1) I(U_\epsilon; X) \big).$$
Elementary calculations yield that
$$\frac{d g(\epsilon)}{d \epsilon} \Big|_{\epsilon \downarrow 0} = D(\nu(y) \| \mu(y)) - \lambda \big( p D(\nu(x, y) \| \mu(x, y)) - (p - 1) D(\nu(x) \| \mu(x)) \big) > 0,$$
where the last inequality is because of the choice of $\nu_{XY}$ and as $0 < \lambda < k_p(X; Y) - \delta$. Since $g(0) = 0$ this implies that for some $\epsilon^* > 0$ we have
$$I(U_{\epsilon^*}; Y) - \lambda \big( p I(U_{\epsilon^*}; X, Y) - (p - 1) I(U_{\epsilon^*}; X) \big) > 0.$$
Note also that $I(U_{\epsilon^*}; X, Y) > 0$ since $\nu_{XY} \ne \mu_{XY}$. This implies that
$$u_p(X; Y) = \sup_{\nu_{UXY} \in \mathcal{P}_\mu} \frac{I(U; Y)}{p I(U; XY) - (p - 1) I(U; X)} \ge \frac{I(U_{\epsilon^*}; Y)}{p I(U_{\epsilon^*}; X, Y) - (p - 1) I(U_{\epsilon^*}; X)} > \lambda.$$
Since the above holds for all $\lambda < k_p(X; Y) - \delta$ we have $u_p(X; Y) \ge k_p(X; Y) - \delta$. Finally, since $\delta > 0$ is arbitrary, we are done.

¹ For instance $\epsilon < \min_{(x, y) : \nu(x, y) > 0} \frac{\mu(x, y)}{\nu(x, y)}$ and $\epsilon < 1 - \max_{(x, y)} \mu(x, y)$ suffices. Observe that such an $\epsilon$ exists if $X$ and $Y$ are not constants.
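The two inequalities established so far can be checked numerically. The following sketch (not part of the original manuscript, written under the finite-alphabet assumptions of Section 1) evaluates, for a hand-picked perturbation $\nu_{XY}$ of $\mu_{XY}$, the relative-entropy ratio appearing in the definition of $k_p(X; Y)$, and then forms the two-point auxiliary variable $U_\epsilon$ used in the proof above and evaluates the corresponding mutual-information ratio that lower-bounds $u_p(X; Y)$. The chosen $\mu_{XY}$, $\nu_{XY}$, $\epsilon$, and all names are illustrative.

# A minimal numerical illustration (not from the manuscript) of the quantities in the
# proof of k_p <= u_p: the relative-entropy ratio for a fixed nu_XY, and the
# mutual-information ratio of the two-point auxiliary U_eps built from it.
import numpy as np

def kl(nu, mu):
    """Relative entropy D(nu || mu) in bits, assuming nu << mu."""
    mask = nu > 0
    return float(np.sum(nu[mask] * np.log2(nu[mask] / mu[mask])))

def re_ratio(nu_xy, mu_xy, p):
    """D(nu_Y||mu_Y) / (p D(nu_XY||mu_XY) - (p-1) D(nu_X||mu_X)); rows index x."""
    num = kl(nu_xy.sum(axis=0), mu_xy.sum(axis=0))
    den = p * kl(nu_xy, mu_xy) - (p - 1) * kl(nu_xy.sum(axis=1), mu_xy.sum(axis=1))
    return num / den

def mi_ratio(p_u, cond_xy_given_u, p):
    """I(U;Y) / (p I(U;XY) - (p-1) I(U;X)) for a finite auxiliary U."""
    joint = p_u[:, None, None] * cond_xy_given_u          # nu(u, x, y)
    mu_xy = joint.sum(axis=0)
    def mi(marg_given_u, marg):                            # I(U; .) as an average KL
        return float(sum(p_u[u] * kl(marg_given_u[u], marg) for u in range(len(p_u))))
    i_uy  = mi(cond_xy_given_u.sum(axis=1), mu_xy.sum(axis=0))
    i_ux  = mi(cond_xy_given_u.sum(axis=2), mu_xy.sum(axis=1))
    i_uxy = mi(cond_xy_given_u.reshape(len(p_u), -1), mu_xy.reshape(-1))
    return i_uy / (p * i_uxy - (p - 1) * i_ux)

if __name__ == "__main__":
    p = 2.0
    rho = 0.1
    mu = np.array([[(1 - rho) / 2, rho / 2],
                   [rho / 2, (1 - rho) / 2]])
    nu = np.array([[0.55, 0.05],
                   [0.05, 0.35]])                          # some nu_XY << mu_XY, nu != mu
    print("relative-entropy ratio (lower bound on k_p):", re_ratio(nu, mu, p))

    eps = 0.05                                             # small enough for the U_eps construction
    cond = np.stack([nu, (mu - eps * nu) / (1 - eps)])     # conditionals given U = 1 and U = 2
    p_u = np.array([eps, 1 - eps])
    print("mutual-information ratio (lower bound on u_p):", mi_ratio(p_u, cond, p))

As the proof above indicates, letting $\epsilon \downarrow 0$ in this construction drives the mutual-information ratio toward the relative-entropy ratio of the chosen $\nu_{XY}$, which is exactly what the argument for $k_p(X; Y) \le u_p(X; Y)$ exploits.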

Proof of $u_p(X; Y) \le r_p(X; Y)$: This proof uses the tensorization property of $r_p(X; Y)$ as well as the notion of typical sequences often employed in multi-terminal information theory.

Consider any $(U, X, Y) \sim \nu_{UXY} \in \mathcal{P}_\mu$. Consider $(U^n, X^n, Y^n)$ distributed according to $\prod_i \nu_{UXY}(u_i, x_i, y_i)$, i.e. the components are independent and identically distributed. Pick a single $u^n$ such that
$$\big| |\{ i : u_i = u \}| - n \nu_U(u) \big| \le |\mathcal{U}| \quad \text{for all } u.$$
For instance, let $\mathcal{U} = \{1, 2, \ldots, m\}$ and set the first $\lceil n \nu_U(1) \rceil$ entries of $u^n$ to be $1$, the next $\lceil n \nu_U(2) \rceil$ entries of $u^n$ to be $2$, and so on. At the end one would have at least
$$n - \sum_{i=1}^{m-1} \lceil n \nu_U(i) \rceil \ge n - (m - 1) - n \sum_{i=1}^{m-1} \nu_U(i) = n \nu_U(m) - (m - 1)$$
entries taking the final value $m$.

Define two sets according to
$$A = \Big\{ x^n : \big| |\{ i : (u_i, x_i) = (u, x) \}| - n \nu_{UX}(u, x) \big| \le \sqrt{n \log(n)}\, \nu_{UX}(u, x) \text{ for all } (u, x) \Big\}$$
and
$$B = \Big\{ y^n : \big| |\{ i : (u_i, y_i) = (u, y) \}| - n \nu_{UY}(u, y) \big| \le \sqrt{n \log(n)}\, \nu_{UY}(u, y) \text{ for all } (u, y) \Big\}.$$
In the language used in network information theory, these are the sets of sequences $x^n$ and $y^n$, respectively, that are jointly typical with the $u^n$ sequence chosen earlier.

Note that for any sets $A$ and $B$ we have (Lemma 1 in [1])
$$P(X^n \in A, Y^n \in B) = \mathrm{E}\big(1_A \, \mathrm{E}(1_B \mid X^n)\big) \le P(A)^{1 - \frac{1}{p}} \, \| \mathrm{E}(1_B \mid X^n) \|_p \le P(A)^{1 - \frac{1}{p}} \, P(B)^{\frac{1}{r_p p}}. \tag{2}$$
The first inequality follows from Hölder and the second one by the definition and the tensorization property of $r_p(X; Y)$ (which implies $r_p(X^n; Y^n) = r_p(X; Y)$ when $\{(X_i, Y_i)\}$ are i.i.d. according to $(X, Y)$). It is known (by a simple counting argument) that $-\frac{1}{n} \log_2 P(A) \to I(U; X)$ and $-\frac{1}{n} \log_2 P(B) \to I(U; Y)$ as $n \to \infty$.

Define
$$C = \Big\{ (x^n, y^n) : \big| |\{ i : (u_i, x_i, y_i) = (u, x, y) \}| - n \nu_{UXY}(u, x, y) \big| \le \sqrt{n \log(n)}\, \nu_{UXY}(u, x, y) \text{ for all } (u, x, y) \Big\}.$$
Clearly if $(x^n, y^n) \in C$ then $x^n \in A$ and $y^n \in B$. Thus $P(C) \le P(X^n \in A, Y^n \in B)$. A counting argument again shows that $-\frac{1}{n} \log_2 P(C) \to I(U; XY)$.

Since we have $P(C) \le P(X^n \in A, Y^n \in B) \le P(A)^{1 - \frac{1}{p}} P(B)^{\frac{1}{r_p p}}$, by taking logarithms, dividing by $n$, and letting $n$ go to infinity, we obtain that
$$I(U; XY) \ge \Big( 1 - \frac{1}{p} \Big) I(U; X) + \frac{1}{r_p p} I(U; Y).$$
This implies that $r_p \big( p I(U; XY) - (p - 1) I(U; X) \big) \ge I(U; Y)$ for every $\nu_{UXY} \in \mathcal{P}_\mu$. When $I(U; XY) > 0$ we see that $p I(U; XY) - (p - 1) I(U; X) > 0$, implying
$$r_p \ge \sup_{\nu_{UXY} \in \mathcal{P}_\mu} \frac{I(U; Y)}{p I(U; XY) - (p - 1) I(U; X)} = u_p(X; Y),$$
as desired.

The remaining part of the proof is to show that the common value is also given by
$$\inf \big\{ \lambda : H(Y) - \lambda p H(X, Y) + \lambda (p - 1) H(X) = \mathcal{K}[H(Y) - \lambda p H(X, Y) + \lambda (p - 1) H(X)]_{\mu_{XY}} \big\}. \tag{3}$$

It is an easy exercise to observe that
$$\mathcal{K}[H(Y) - \lambda p H(X, Y) + \lambda (p - 1) H(X)]_{\mu_{XY}} = \inf_{\nu_{UXY} \in \mathcal{P}_\mu} H(Y|U) - \lambda p H(X, Y|U) + \lambda (p - 1) H(X|U).$$
Thus if the equality
$$H(Y) - \lambda p H(X, Y) + \lambda (p - 1) H(X) = \mathcal{K}[H(Y) - \lambda p H(X, Y) + \lambda (p - 1) H(X)]_{\mu_{XY}}$$
holds for some $\lambda$, then for every $\nu_{UXY} \in \mathcal{P}_\mu$
$$H(Y|U) - \lambda p H(X, Y|U) + \lambda (p - 1) H(X|U) \ge H(Y) - \lambda p H(X, Y) + \lambda (p - 1) H(X).$$
Rearrangement yields
$$I(U; Y) \le \lambda \big( p I(U; XY) - (p - 1) I(U; X) \big),$$
implying
$$\lambda \ge \sup_{\nu_{UXY} \in \mathcal{P}_\mu} \frac{I(U; Y)}{p I(U; XY) - (p - 1) I(U; X)} = u_p(X; Y).$$
Let $\lambda_p(X; Y)$ denote the infimum of $\lambda$ satisfying (3). Then we have $\lambda_p(X; Y) \ge u_p(X; Y)$.

On the other hand, for any $\epsilon > 0$ there must exist a $\nu_{UXY} \in \mathcal{P}_\mu$ such that
$$H(Y|U) - (\lambda_p(X; Y) - \epsilon) p H(X, Y|U) + (\lambda_p(X; Y) - \epsilon)(p - 1) H(X|U)$$
$$< H(Y) - (\lambda_p(X; Y) - \epsilon) p H(X, Y) + (\lambda_p(X; Y) - \epsilon)(p - 1) H(X).$$
Re-arrangement yields
$$(\lambda_p(X; Y) - \epsilon) \big( p I(U; XY) - (p - 1) I(U; X) \big) < I(U; Y).$$
For such a $\nu_{UXY}$ clearly $I(U; XY) > 0$, hence
$$\lambda_p(X; Y) - \epsilon < \frac{I(U; Y)}{p I(U; XY) - (p - 1) I(U; X)} \le u_p(X; Y).$$
Taking $\epsilon \to 0$ yields that $\lambda_p(X; Y) \le u_p(X; Y)$, completing the proof of the equivalence.

The last part of this section is to show that in the above calculations one can restrict to random variables $U$ that take at most $|\mathcal{X}||\mathcal{Y}|$ distinct values. (These are also standard arguments in network information theory.) For any distribution $\nu_{XY}$ on $\mathcal{X} \times \mathcal{Y}$ define the function $f(\nu) = H(Y) - \lambda p H(X, Y) + \lambda (p - 1) H(X)$, where the entropies are evaluated at the distribution $\nu_{XY}$. Observe that (by the Carathéodory-Fenchel-Bunt theorem) the lower convex envelope of $f(\nu)$ at the distribution $\mu_{XY}$, denoted earlier as $\mathcal{K}[H(Y) - \lambda p H(X, Y) + \lambda (p - 1) H(X)]_{\mu_{XY}}$, can be computed as a convex combination of at most $|\mathcal{X}||\mathcal{Y}|$ distributions $\nu^{(i)}_{XY}$, $i = 1, \ldots, |\mathcal{X}||\mathcal{Y}|$. Consider $\nu^{(i)}_{XY}$ to be the conditional distribution of $(X, Y)$ when $U = i$ and thus observe that we have
$$\mathcal{K}[H(Y) - \lambda p H(X, Y) + \lambda (p - 1) H(X)]_{\mu_{XY}} = \inf_{\substack{\nu_{UXY} \in \mathcal{P}_\mu \\ |\mathcal{U}| \le |\mathcal{X}||\mathcal{Y}|}} H(Y|U) - \lambda p H(X, Y|U) + \lambda (p - 1) H(X|U).$$

Remark 1. This remark is for those unfamiliar with the cardinality bounding arguments in network information theory. To apply the Carathéodory-Fenchel-Bunt theorem one considers the continuous mapping from $\nu_{XY}$ to $S \subseteq \mathbb{R}^{|\mathcal{X}||\mathcal{Y}|}$, where the first $|\mathcal{X}||\mathcal{Y}| - 1$ co-ordinates represent the values of $\nu_{XY}(i, j)$ (except the entry $i = |\mathcal{X}|, j = |\mathcal{Y}|$, whose value is forced by the remaining entries since $\nu_{XY}$ is a probability vector) and the last co-ordinate is the value $f(\nu)$. One is interested in obtaining a particular point in the convex hull of $S$ using as few points from $S$ as possible. Since $S$ is connected, it suffices to use $|\mathcal{X}||\mathcal{Y}|$ distinct points. This improvement over Carathéodory is due to Fenchel and later generalized by Bunt.
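As a complement to the proof, the characterization $u_p(X; Y) = \sup_{\nu_{UXY} \in \mathcal{P}_\mu} I(U; Y) / (p I(U; XY) - (p - 1) I(U; X))$, together with the cardinality bound $|\mathcal{U}| \le |\mathcal{X}||\mathcal{Y}|$ just established, lends itself to direct numerical evaluation: every $\nu_{UXY} \in \mathcal{P}_\mu$ is generated by some conditional law $P(U \mid X, Y)$, so one can search over such conditionals. The sketch below (not part of the original manuscript) does this by crude random search; the names, the Dirichlet sampling scheme, and the example distribution are illustrative assumptions.

# A minimal numerical sketch (not from the manuscript) of
# u_p(X;Y) = sup_{nu_UXY in P_mu} I(U;Y) / (p I(U;XY) - (p-1) I(U;X)),
# obtained by random search over conditionals P(U | X, Y) with |U| = |X||Y|.
import numpy as np

rng = np.random.default_rng(1)

def mutual_info(joint):
    """I(A;B) in bits for a joint pmf given as a 2-D array."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])))

def u_p_lower_bound(mu_xy, p, trials=20000):
    nx, ny = mu_xy.shape
    nu_size = nx * ny                                                # cardinality bound |U| <= |X||Y|
    best = 0.0
    for _ in range(trials):
        # Random conditional P(U | X=x, Y=y): one Dirichlet draw per (x, y) pair.
        cond_u = rng.dirichlet(np.ones(nu_size), size=nx * ny)       # shape (|X||Y|, |U|)
        joint_uxy = cond_u.T * mu_xy.reshape(-1)                     # nu(u, (x, y)), XY-marginal is mu
        joint_u_y = joint_uxy.reshape(nu_size, nx, ny).sum(axis=1)   # nu(u, y)
        joint_u_x = joint_uxy.reshape(nu_size, nx, ny).sum(axis=2)   # nu(u, x)
        i_uxy = mutual_info(joint_uxy)
        if i_uxy < 1e-12:
            continue                                                 # degenerate U, ratio undefined
        ratio = mutual_info(joint_u_y) / (p * i_uxy - (p - 1) * mutual_info(joint_u_x))
        best = max(best, ratio)
    return best

if __name__ == "__main__":
    rho = 0.1
    mu = np.array([[(1 - rho) / 2, rho / 2],
                   [rho / 2, (1 - rho) / 2]])
    p = 2.0
    print("random-search lower bound on u_p = r_p:", u_p_lower_bound(mu, p))

By Theorem 1 the value found this way is a lower bound on $r_p(X; Y)$ and, with enough search effort, should approach the estimate of $q_p(X; Y)/p$ obtained directly from the hypercontractivity definition (about $0.82$ for the doubly symmetric binary source with $\rho = 0.1$ and $p = 2$).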

Historical background and Acknowledgements

This work is directly motivated by the work of the author with Venkat Anantharam, Amin Gohari, and Sudeep Kamath in [2], which deals with the equivalent version of Theorem 1 when $p \to \infty$. Trying to generalize this result to finite $p$, the author had obtained that both $u_p(X; Y)$ and $k_p(X; Y)$ were lower bounds to $r_p(X; Y)$. Numerical experiments conducted by Sudeep Kamath suggested that these bounds were tight, prompting the author to complete the proof of the equivalence. The author wishes to thank Venkat Anantharam, Amin Gohari, and Sudeep Kamath for various stimulating discussions relating to the connections between hypercontractivity parameters and information measures. This work was supported by an AoE grant (Project No. AoE/E-02/08) and two GRF grants from the UGC of the Hong Kong Special Administrative Region, China.

References

[1] Rudolf Ahlswede and Peter Gács, Spreading of sets in product spaces and hypercontraction of the Markov operator, The Annals of Probability 4 (1976), no. 6, 925-939.
[2] Venkat Anantharam, Amin Aminzadeh Gohari, Sudeep Kamath, and Chandra Nair, On maximal correlation, hypercontractivity, and the data processing inequality studied by Erkip and Cover, CoRR abs/1304.6133 (2013).
[3] E. Brian Davies, Leonard Gross, and Barry Simon, Hypercontractivity: a bibliographic review, Ideas and Methods in Quantum and Statistical Physics (Oslo, 1988), Cambridge Univ. Press, Cambridge, 1992. MR 93g:47052.
[4] Jeff Kahn, Gil Kalai, and Nathan Linial, The influence of variables on Boolean functions, Proceedings of the 29th Annual Symposium on Foundations of Computer Science, 1988, pp. 68-80.
[5] J. Körner and K. Marton, Comparison of two noisy channels, Topics in Information Theory (ed. by I. Csiszár and P. Elias), Keszthely, Hungary (August 1975).
[6] Michel Talagrand, On Russo's approximate zero-one law, The Annals of Probability 22 (1994), no. 3, 1576-1587.
