On the Convergence of the Polarization Process in the Noisiness/Weak-* Topology

Rajai Nasser

arXiv v2 [cs.IT], 26 Oct 2018

Abstract—Let W be a channel whose input alphabet is endowed with an Abelian group operation, and let (W_n)_{n≥0} be Arıkan's channel-valued polarization process obtained from W using this operation. We prove that the process (W_n)_{n≥0} converges almost surely to deterministic homomorphism channels in the noisiness/weak-* topology. This provides a simple proof of multilevel polarization for a large family of channels, containing, among others, discrete memoryless channels (DMCs) and channels with continuous output alphabets. It also shows that any continuous channel functional converges almost surely (even if the functional does not induce a submartingale or a supermartingale).

I. INTRODUCTION

Polar codes are a family of capacity-achieving codes that were first introduced by Arıkan for binary-input channels [1]. The construction of polar codes relies on a phenomenon called polarization: a collection of independent copies of a channel is converted into a collection of synthetic channels that are extreme, i.e., almost useless or almost perfect. The construction of polar codes was later generalized to channels with arbitrary (but finite) input alphabets [2], [3], [4], [5], [6], [7]. Note that for channels whose input alphabet size is not prime, polarization is not necessarily a two-level polarization (to useless and perfect channels): we may have multilevel polarization, where the polarized channels can be neither useless nor perfect.

In this paper, we are interested in the general multilevel polarization phenomenon that occurs when we apply an Arıkan-style transformation based on an Abelian group operation. It was shown in [4] that as the number of polarization steps becomes large, the behavior of the synthetic channels resembles that of deterministic homomorphism channels projecting their input onto a quotient group. This resemblance was formulated in [4] using Bhattacharyya parameters. We may say (informally) that as the number of polarization steps goes to infinity, the synthetic channels converge to deterministic homomorphism channels. One reason why this statement is informal is that the synthetic channels do not have the same output alphabet, so in order to make the statement formal, we must define a space in which we can topologically compare channels with different output alphabets. In [8], we defined the space of all channels with a fixed input alphabet and an arbitrary (but finite) output alphabet. This space was first quotiented by an equivalence relation, and then several topological structures were defined on it.

In this paper, we show that Arıkan's polarization process does converge in the noisiness/weak-* topology to deterministic homomorphism channels. The proof uses the Blackwell measure of channels (Blackwell measures were used in [9] and [10] for analyzing the polarization of binary-input channels), and hence can be generalized to all channels whose equivalence class is determined by the Blackwell measure. This family of channels contains, among others, all discrete memoryless channels and all channels with continuous output alphabets. Another advantage of our proof is that it implies the convergence of all channel functionals that are continuous in the noisiness/weak-* topology. Therefore, we have convergence of those functionals even if they do not induce a submartingale or a supermartingale process.

In Section II, we introduce the preliminaries of this paper. In Section III, we recall the multilevel polarization phenomenon.
In Section IV, we show the convergence of the polarization process in the noisiness/weak-* topology. For simplicity, we only discuss discrete memoryless channels, but the proof is valid for any channel whose equivalence class is determined by the Blackwell measure.

II. PRELIMINARIES

A. Meta-Probability Measures

Let X be a finite set. The set of probability distributions on X is denoted as Δ_X. We associate Δ_X with its Borel σ-algebra. A meta-probability measure on X is a probability measure on the Borel sets of Δ_X. It is called a meta-probability measure because it is a probability measure on the set of probability distributions on X. We denote the set of meta-probability measures on X as MP(X).

A meta-probability measure MP on X is said to be balanced if

∫_{Δ_X} p dMP(p) = π_X,

where π_X is the uniform probability distribution on X. The set of balanced meta-probability measures on X is denoted as MP_b(X). The set of balanced and finitely supported meta-probability measures on X is denoted as MP_bf(X).

B. DMC Spaces

Let X and Y be two finite sets. The set of discrete memoryless channels (DMCs) with input alphabet X and output alphabet Y is denoted as DMC_{X,Y}. The set of channels with input alphabet X is defined as

DMC_{X,*} = ⊔_{n≥1} DMC_{X,[n]},

where [n] = {1,...,n} and ⊔ is the disjoint union symbol. The * symbol in DMC_{X,*} means that the output alphabet is arbitrary but finite.

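To make the balanced condition concrete, here is a minimal numerical sketch (our own illustration; the variable names and the convention of storing distributions as rows of a NumPy array are assumptions, not notation from the paper): for a finitely supported meta-probability measure, the condition ∫_{Δ_X} p dMP(p) = π_X reduces to a weighted average of the support points being uniform.

```python
import numpy as np

# A finitely supported meta-probability measure on X = {0, 1}:
# point masses at the three distributions below, with the given weights.
support = np.array([[0.9, 0.1],
                    [0.1, 0.9],
                    [0.5, 0.5]])
weights = np.array([0.25, 0.25, 0.5])

mean_dist = weights @ support                 # the integral of p over the measure
print(np.allclose(mean_dist, [0.5, 0.5]))     # True, so the measure is balanced
```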
Let X, Y and Z be three finite sets, and let W ∈ DMC_{X,Y} and V ∈ DMC_{Y,Z}. The composition of V and W is the channel V∘W ∈ DMC_{X,Z} defined as

(V∘W)(z|x) = Σ_{y∈Y} V(z|y) W(y|x).

A channel W' ∈ DMC_{X,Y'} is said to be degraded from another channel W ∈ DMC_{X,Y} if there exists a channel V ∈ DMC_{Y,Y'} such that W' = V∘W. Two channels are said to be equivalent if each one is degraded from the other. It is well known that if two channels are equivalent, then every code has the same probability of error (under ML decoding) for both channels. This is why it makes sense, from an information-theoretic point of view, to identify equivalent channels and consider them as one object in the space of equivalent channels. The quotient of DMC_{X,*} by the equivalence relation is denoted as DMC^(o)_{X,*}. The equivalence class of a channel W ∈ DMC_{X,*} is denoted as Ŵ.

C. A Necessary and Sufficient Condition for Degradedness

Let U, X and Y be three finite sets and let W ∈ DMC_{X,Y}. For every p ∈ Δ_{U×X}, define

P_c(p, W) = sup_{D ∈ DMC_{Y,U}} Σ_{u∈U, x∈X, y∈Y} p(u,x) W(y|x) D(u|y).

P_c(p, W) can be interpreted as follows: let (U, X) be a pair of random variables in U×X distributed as p. Send X through the channel W and let Y be the output. P_c(p, W) is the optimal probability of correctly guessing U from Y among all random decoders D ∈ DMC_{Y,U}.

Now let X, Y and Y' be three finite sets and let W ∈ DMC_{X,Y} and W' ∈ DMC_{X,Y'}. Buscemi proved in [11] that W' is degraded from W if and only if P_c(p, W') ≤ P_c(p, W) for every p ∈ Δ_{U×X} and every finite set U. This means that W and W' are equivalent if and only if P_c(p, W) = P_c(p, W') for every p ∈ Δ_{U×X} and every finite set U. Therefore, if p ∈ Δ_{U×X} and Ŵ ∈ DMC^(o)_{X,*}, we can define P_c(p, Ŵ) = P_c(p, W) for any W ∈ Ŵ.
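For finite alphabets, P_c(p, W) is easy to compute: the supremum over random decoders is attained by the deterministic MAP decoder, giving P_c(p, W) = Σ_y max_u Σ_x p(u, x) W(y|x). The sketch below is our own illustration (the function name and the array conventions p[u, x] and W[x, y] = W(y|x) are assumptions, not notation from the paper); it also checks the easy direction of Buscemi's criterion, namely that degrading a channel can only decrease P_c.

```python
import numpy as np

def p_c(p, W):
    """P_c(p, W) = sum_y max_u sum_x p(u, x) * W(y | x), attained by the MAP decoder."""
    scores = p @ W                      # scores[u, y] = sum_x p(u, x) W(y | x)
    return float(scores.max(axis=0).sum())

rng = np.random.default_rng(1)
W = rng.random((3, 4)); W /= W.sum(axis=1, keepdims=True)   # a channel W(y | x)
V = rng.random((4, 2)); V /= V.sum(axis=1, keepdims=True)   # a degrading channel V(z | y)
p = rng.random((5, 3)); p /= p.sum()                        # a joint distribution p(u, x)

# V o W is degraded from W, so its optimal guessing probability can only be smaller.
print(p_c(p, W @ V) <= p_c(p, W) + 1e-12)                   # True
```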
D. Blackwell Measures

Let W ∈ DMC_{X,Y}. Let (X, Y) be a pair of random variables in X×Y distributed as P_{X,Y}(x,y) = (1/|X|) W(y|x). In other words, X is uniformly distributed in X and Y is the output of the channel W when X is the input. For every y ∈ Y satisfying P_Y(y) > 0, let W_y^{-1} ∈ Δ_X be the posterior probability distribution of the input assuming y was received. More precisely,

W_y^{-1}(x) = P_{X|Y}(x|y) = W(y|x) / Σ_{x'∈X} W(y|x').

The Blackwell measure of W is the meta-probability measure MP_W ∈ MP(X) which describes the random variable W_Y^{-1}. It is easy to see that

MP_W = Σ_{y∈Y: P_Y(y)>0} P_Y(y) δ_{W_y^{-1}} ∈ MP(X).

The following proposition, which is easy to prove, characterizes the Blackwell measures of DMCs:

Proposition 1 ([12]). A meta-probability measure MP ∈ MP(X) is the Blackwell measure of a DMC with input alphabet X if and only if MP ∈ MP_bf(X).

The following proposition shows that the Blackwell measure characterizes the equivalence class of a channel:

Proposition 2 ([12]). Two channels W ∈ DMC_{X,Y} and W' ∈ DMC_{X,Y'} are equivalent if and only if MP_W = MP_{W'}.

For every Ŵ ∈ DMC^(o)_{X,*}, define MP_Ŵ = MP_W for any W ∈ Ŵ. Proposition 2 shows that MP_Ŵ is well defined.

E. The Noisiness/Weak-* Topology

In [8], we defined the noisiness metric d^(o)_{X,*} on DMC^(o)_{X,*} as follows:

d^(o)_{X,*}(Ŵ, Ŵ') = sup_{m≥1, p∈Δ_{[m]×X}} |P_c(p, Ŵ) − P_c(p, Ŵ')|,

where [m] = {1,...,m}. d^(o)_{X,*} is called the noisiness metric because it compares the noisiness of Ŵ with that of Ŵ': if P_c(p, Ŵ) is close to P_c(p, Ŵ') for every random encoder p, then Ŵ and Ŵ' have close noisiness levels. The topology on DMC^(o)_{X,*} which is induced by the metric d^(o)_{X,*} is denoted as T^(o)_{X,*}.

Another way to topologize the space DMC^(o)_{X,*} is through Blackwell measures: Proposition 1 implies that the mapping Ŵ ↦ MP_Ŵ is a bijection from DMC^(o)_{X,*} to MP_bf(X). We call this mapping the canonical bijection from DMC^(o)_{X,*} to MP_bf(X). By choosing a topology on MP_bf(X), we can construct a topology on DMC^(o)_{X,*} through the canonical bijection. We showed in [8] that the topology obtained in this way from the weak-* topology is exactly the same as T^(o)_{X,*}. This is why we call T^(o)_{X,*} the noisiness/weak-* topology.

Remark 1. Since we identify DMC^(o)_{X,*} and MP_bf(X) through the canonical bijection, we can use d^(o)_{X,*} to define a metric on MP_bf(X). Furthermore, since MP_bf(X) is dense in MP_b(X) (see, e.g., [8]), we can extend the definition of d^(o)_{X,*} to MP_b(X) by continuity. Similarly, we can extend the definition of any channel parameter or operation which is continuous in the noisiness/weak-* topology (such as the symmetric capacity, Arıkan's polar transformations, etc. [13]) to MP_b(X).

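The Blackwell measure of a DMC is straightforward to compute from its transition matrix. The following is a minimal sketch (our own illustration; the array convention W[x, y] = W(y|x) and the function name are assumptions): it collects the posteriors W_y^{-1} together with their probabilities P_Y(y), and checks that the resulting measure is balanced, as Proposition 1 requires.

```python
import numpy as np

def blackwell_measure(W, tol=1e-12):
    """Return (weights P_Y(y), posteriors W_y^{-1}) of the DMC W[x, y] = W(y|x),
    with the input X uniformly distributed on the input alphabet."""
    p_y = W.mean(axis=0)                               # P_Y(y) = (1/|X|) sum_x W(y|x)
    ys = np.flatnonzero(p_y > tol)                     # keep outputs with P_Y(y) > 0
    posteriors = (W[:, ys] / W[:, ys].sum(axis=0)).T   # row y: W_y^{-1}(x)
    return p_y[ys], posteriors

W = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3]])
weights, posts = blackwell_measure(W)
print(weights @ posts)     # [0.5 0.5] = pi_X, so MP_W is balanced (Proposition 1)
```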
III. THE POLARIZATION PHENOMENON

A. Useful Notations

Throughout this paper, (G, +) denotes a finite Abelian group. If W is a channel, we denote the symmetric capacity of W, i.e., the mutual information between a uniformly distributed input and the output, as I(W). For every subgroup H of G, define the channel D_H ∈ DMC_{G,G/H} as

D_H(A|x) = 1 if x ∈ A, and D_H(A|x) = 0 otherwise.

In other words, D_H is the deterministic channel whose output is the coset to which the input belongs. It is easy to see that I(D_H) = log|G/H|. We denote the set {D_H : H is a subgroup of G} as DH_G.

Now let Y be a finite set and let W ∈ DMC_{G,Y}. For every subgroup H of G, define the channel W[H] ∈ DMC_{G/H,Y} as

W[H](y|A) = (1/|A|) Σ_{x∈A} W(y|x) = (1/|H|) Σ_{x∈A} W(y|x).

Remark 2. Let X be a random variable uniformly distributed in G and let Y be the output of the channel W. It is easy to see that I(W[H]) = I(X mod H; Y).

Let δ > 0. We say that a channel W ∈ DMC_{G,Y} is δ-determined by a subgroup H of (G,+) if

|I(W) − log|G/H|| < δ and |I(W[H]) − log|G/H|| < δ.

We say that W is δ-determined if there exists at least one subgroup H which δ-determines W. It is easy to see that if δ is small enough, there exists at most one subgroup that δ-determines W.

Intuitively, if δ is small and W is δ-determined by a subgroup H, then the channel W is almost equivalent to D_H: let X be a random variable that is uniformly distributed in G and let Y be the output of W when X is the input. We have:
- The inequality |I(X mod H; Y) − log|G/H|| < δ means that X mod H can be determined from Y with high probability.
- The inequality |I(X; Y) − log|G/H|| < δ means that there is almost no other information about X which can be determined from Y.
Due to the above two observations, we can (informally) say that if W is δ-determined by H, then W is almost equivalent to D_H.

B. The Polarization Process

Let W ∈ DMC_{G,Y} be a channel with input alphabet G. Define the channels W^- ∈ DMC_{G,Y²} and W^+ ∈ DMC_{G,Y²×G} as follows:

W^-(y_1, y_2 | u_1) = (1/|G|) Σ_{u_2∈G} W(y_1 | u_1 + u_2) W(y_2 | u_2),

and

W^+(y_1, y_2, u_1 | u_2) = (1/|G|) W(y_1 | u_1 + u_2) W(y_2 | u_2).

For every n ≥ 1 and every s = (s_1, ..., s_n) ∈ {−,+}^n, define W^s = (((W^{s_1})^{s_2})···)^{s_n}.

Remark 3. It can be shown that if W and V are equivalent, then W^- (resp. W^+) and V^- (resp. V^+) are equivalent. This allows us to write Ŵ^- and Ŵ^+ to denote the equivalence classes of W^- and W^+, respectively.

It was shown in [3], [4] and [5] that as n becomes large, the behavior of almost all the synthetic channels (W^s)_{s∈{−,+}^n} approaches the behavior of deterministic homomorphism channels projecting their input onto quotient groups. One way to formalize the above statement was given in [5] as follows: for every δ > 0, we have

lim_{n→∞} (1/2^n) |{s ∈ {−,+}^n : W^s is δ-determined}| = 1.    (1)

Definition 1. Let (B_n)_{n≥1} be a sequence of independent and uniformly distributed Bernoulli random variables in {−,+}. Define the channel-valued random process (W_n)_{n≥0} as follows:
- W_0 = W.
- W_n = W_{n−1}^{B_n} = W^{(B_1,...,B_n)} if n ≥ 1.

Equation (1) can be rewritten as:

lim_{n→∞} P[W_n is δ-determined] = 1.    (2)

One informal way to interpret Equation (2) is to say that the process (W_n)_{n≥0} converges to channels in DH_G. This statement will be made formal in the following section.

IV. CONVERGENCE OF THE POLARIZATION PROCESS

Throughout this section, we identify a channel W ∈ DMC^(o)_{G,*} with its Blackwell measure MP_W ∈ MP_bf(G). We also extend the definition of the + and − operations to all balanced measures in MP_b(G) (as discussed in Remark 1).
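As a concrete illustration of the minus and plus transforms defined in Section III-B, the sketch below (our own; the array convention W[x, y] = W(y|x), the choice of the group Z_q with addition mod q, and all function names are assumptions, not part of the paper) builds the transition matrices of W^- and W^+ and numerically checks the identity I(W^-) + I(W^+) = 2I(W) used in the proof of Lemma 1 below.

```python
import numpy as np

def symmetric_capacity(W):
    """I(W) in bits: mutual information between a uniform input and the output."""
    q = W.shape[0]
    p_y = W.mean(axis=0)                                  # output law under uniform input
    ratio = np.divide(W, p_y, out=np.ones_like(W), where=W > 0)
    return float(np.sum((W / q) * np.log2(ratio)))

def minus_transform(W):
    """W^-((y1, y2) | u1) = (1/|G|) * sum_{u2} W(y1 | u1 + u2) W(y2 | u2)."""
    q, m = W.shape
    Wm = np.zeros((q, m * m))
    for u1 in range(q):
        for u2 in range(q):
            Wm[u1] += np.outer(W[(u1 + u2) % q], W[u2]).ravel() / q
    return Wm

def plus_transform(W):
    """W^+((y1, y2, u1) | u2) = (1/|G|) * W(y1 | u1 + u2) W(y2 | u2)."""
    q, m = W.shape
    Wp = np.zeros((q, q * m * m))
    for u2 in range(q):
        for u1 in range(q):
            Wp[u2, u1 * m * m:(u1 + 1) * m * m] = np.outer(W[(u1 + u2) % q], W[u2]).ravel() / q
    return Wp

rng = np.random.default_rng(0)
W = rng.random((4, 5))
W /= W.sum(axis=1, keepdims=True)                         # a random channel over G = Z_4
print(symmetric_capacity(minus_transform(W)) + symmetric_capacity(plus_transform(W)),
      2 * symmetric_capacity(W))                          # the two numbers agree
```

The identity holds because (U_1, U_2) ↦ (U_1 + U_2, U_2) is a bijection of G², so I(W^-) + I(W^+) = I(U_1; Y_1 Y_2) + I(U_2; Y_1 Y_2 U_1) = I(X_1 X_2; Y_1 Y_2) = 2I(W).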
Lemma 1. Let (W_n)_{n≥0} be the channel-valued process defined in Definition 1. Almost surely, the sequence (|I(W_n^-) − I(W_n)|)_{n≥0} converges to zero.

Proof. It is well known that I(W^-) + I(W^+) = 2I(W) for every channel with input alphabet G. Hence, we have

E[I(W_{n+1}) | W_n] = (1/2) I(W_n^-) + (1/2) I(W_n^+) = I(W_n).

This shows that the process (I(W_n))_{n≥0} is a martingale, hence it converges almost surely. This means that the process (|I(W_{n+1}) − I(W_n)|)_{n≥0} almost surely converges to zero. On the other hand, we have

|I(W_{n+1}) − I(W_n)| = |I(W_n^-) − I(W_n)| if B_{n+1} = −, and |I(W_{n+1}) − I(W_n)| = |I(W_n^+) − I(W_n)| if B_{n+1} = +,

and in both cases

|I(W_{n+1}) − I(W_n)| (a)= |I(W_n^-) − I(W_n)|,

where (a) follows from the fact that I(W_n^-) + I(W_n^+) = 2I(W_n), which means that I(W_n) − I(W_n^-) = I(W_n^+) − I(W_n). Therefore, |I(W_n^-) − I(W_n)| = |I(W_{n+1}) − I(W_n)| almost surely converges to zero.

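Before continuing, here is a toy numerical illustration of Lemma 1 (our own, not part of the paper). It follows a single random path of the process for a binary erasure channel (so G = Z_2), used here only because its polar transforms have the well-known closed forms ε ↦ 2ε − ε² and ε ↦ ε², with I(BEC(ε)) = 1 − ε, so that |I(W_n^-) − I(W_n)| = ε_n(1 − ε_n).

```python
import numpy as np

rng = np.random.default_rng(0)
eps, gaps = 0.4, []
for n in range(60):
    gaps.append(eps * (1 - eps))   # = |I(W_n^-) - I(W_n)|, since I(BEC(eps)) = 1 - eps
    # B_{n+1} uniform in {-, +}: "-" maps eps to 2*eps - eps**2, "+" maps eps to eps**2
    eps = 2 * eps - eps**2 if rng.integers(2) == 0 else eps**2
print(gaps[:3], gaps[-3:])         # along a typical path the last values are vanishingly small
```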
Now define the set

POL_G = {MP ∈ MP_b(G) : I(MP^-) = I(MP)}.

The next lemma shows that the Blackwell measures of channels with a small |I(W^-) − I(W)| are close (in the noisiness metric sense) to measures in POL_G.

Lemma 2. For every ε > 0, there exists δ > 0 such that for every MP ∈ MP_b(G), if |I(MP^-) − I(MP)| < δ, then d^(o)_{G,*}(MP, POL_G) < ε.

Proof. Define the function f : MP_b(G) → R_+ as f(MP) = |I(MP^-) − I(MP)|. Since the symmetric capacity and the − transformation are continuous in the noisiness/weak-* topology (see [13]), the function f is also continuous in the same topology. Since POL_G = f^{-1}({0}) and f is continuous, the set POL_G is closed. Now for every ε > 0, define the set

POL_{G,ε} = {MP ∈ MP_b(G) : d^(o)_{G,*}(MP, POL_G) < ε},

and let

δ = inf f(POL_{G,ε}^c) = inf {f(MP) : MP ∈ MP_b(G) and d^(o)_{G,*}(MP, POL_G) ≥ ε}.

Since the set POL_G is closed, we can see that the set POL_{G,ε}^c is closed as well. Furthermore, since the space MP_b(G) is compact (see, e.g., [8]), the set POL_{G,ε}^c is compact as well. Therefore, the set f(POL_{G,ε}^c) is compact in R_+, which means that its infimum is achieved, i.e., there exists MP_ε ∈ POL_{G,ε}^c such that δ = f(MP_ε). But POL_{G,ε}^c ∩ POL_G = ∅ and POL_G = f^{-1}({0}), so we must have δ = f(MP_ε) > 0. From the definition of δ, we have

d^(o)_{G,*}(MP, POL_G) ≥ ε ⟹ f(MP) ≥ δ.

Hence, by contraposition, we have

f(MP) < δ ⟹ d^(o)_{G,*}(MP, POL_G) < ε.

In the rest of this section, we analyze the balanced meta-probability measures that are in POL_G (i.e., those that satisfy I(MP^-) = I(MP)). For every p ∈ Δ_G, we denote the entropy of p as H(p). For every p, q ∈ Δ_G, define p∗q ∈ Δ_G as follows:

(p∗q)(u_1) = Σ_{u_2∈G} p(u_1 + u_2) q(u_2).

Lemma 3. For every MP ∈ MP_b(G), we have

|I(MP^-) − I(MP)| = ∫∫ (H(p∗q) − H(p)) dMP(p) dMP(q).

Proof. It is sufficient to show this for Blackwell measures of DMCs (because we can then extend the equation to MP_b(G) by continuity). Let W be a DMC with input alphabet G. We have I(W^-) ≤ I(W), so |I(MP_W^-) − I(MP_W)| = I(W) − I(W^-). From [13, Proposition 8], we have

I(W) = log|G| − ∫ H(p) dMP_W(p) = log|G| − ∫∫ H(p) dMP_W(p) dMP_W(q).

Similarly,

I(W^-) = log|G| − ∫ H(p) dMP_{W^-}(p) = log|G| − ∫∫ H(p∗q) dMP_W(p) dMP_W(q),

where the second equality follows from [13, Proposition 10], the definitions of the (−,+)-convolution and of the C^{−,+} map (see page 20 of [13]), the properties of the push-forward probability measure, and Fubini's theorem. The lemma now follows from the fact that |I(W^-) − I(W)| = I(W) − I(W^-).

For every p ∈ Δ_G and every u ∈ G, define p^{+u} ∈ Δ_G as p^{+u}(x) = p(x + u). Let p, q ∈ Δ_G. We have

p∗q = Σ_{u_2∈G} q(u_2) p^{+u_2}.

Due to the strict concavity of entropy, we have H(p∗q) ≥ H(p). Moreover, we have

H(p∗q) = H(p) ⟺ p^{+u_2} = p^{+u_2'} for all u_2, u_2' ∈ supp(q)
         ⟺ p^{+u} = p for all u ∈ Δsupp(q)
         ⟺ p^{+u} = p for all u ∈ ⟨Δsupp(q)⟩,

where Δsupp(q) = {u_2 − u_2' : u_2, u_2' ∈ supp(q)} and ⟨Δsupp(q)⟩ is the subgroup of G generated by Δsupp(q).

Lemma 4. Let MP ∈ MP_b(G). We have I(MP^-) = I(MP) if and only if for every p, q ∈ supp(MP), we have p = p^{+u} for every u ∈ ⟨Δsupp(q)⟩.

Proof. Define the function F : Δ_G × Δ_G → R_+ as F(p,q) = H(p∗q) − H(p). Since F is continuous and nonnegative, the integral

∫∫ F(p,q) dMP(p) dMP(q)    (3)

is equal to zero if and only if the function F is equal to zero on supp(MP) × supp(MP). The lemma now follows from Lemma 3, Equation (3) and the chain of equivalences above.

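Before stating the characterization of POL_G, it is instructive to spell out the "easy to check" step that the next lemma uses, namely that the Blackwell measure of a deterministic homomorphism channel satisfies the condition of Lemma 4 (this is only an expanded version of the first paragraph of the proof of Lemma 5):

```latex
\[
  \mathrm{MP}_{D_H} \;=\; \frac{1}{|G/H|}\sum_{A\in G/H}\delta_{\pi_A},
  \qquad
  \operatorname{supp}(\mathrm{MP}_{D_H}) \;=\; \{\pi_A : A\in G/H\}.
\]
Take $p=\pi_A$ and $q=\pi_B$ in the support. Then $\operatorname{supp}(q)=B$, so
$\Delta\operatorname{supp}(q)=B-B=H$ and $\langle\Delta\operatorname{supp}(q)\rangle=H$.
For every $u\in H$ and every $x\in G$,
\[
  p^{+u}(x)\;=\;\pi_A(x+u)\;=\;\pi_A(x),
\]
since $x\in A \iff x+u\in A$ when $A$ is a coset of $H$ and $u\in H$. Hence the condition of
Lemma~4 holds, $I(\mathrm{MP}_{D_H}^-)=I(\mathrm{MP}_{D_H})$, and
$\mathrm{MP}_{D_H}\in \mathrm{POL}_G$.
```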
5 5 is equal to zero if and only if the function F is equal to zero on suppmp) suppmp). The lemma now follows from Lemma 3 and Equation 3). Lemma 5. POL G = MP D : D DH G }. Proof. Let H be a subgroup of G. It is easy to see that 1 MP DH = δ πa, where δ πa is a Dirac measure G/H centered at π A the uniform distribution on A). It is easy to check that MP DH satisfies the condition of Lemma 4, hence IMP D H ) = IMP DH ) and so MP D : D DH G } POL G. Now suppose that MP POL G. For everyp suppmp), define A p = suppp) and H p = A p. Lemma 4 shows that p = p u for every u H p. Let x,x A p. We have: px ) = px+x x) = p x xx) a) = px), where a) follows from the fact that x x H p. This shows that p is the uniform distribution on A p. Moreover, for every u H p, we have px+u) = p u x+u) = px+u u) = px) > 0. This implies that A p = x+h p, which means that the support of p is a coset of the subgroup H p. Now let p,q suppmp). Letx A p andu H q. Lemma 4 implies that px+u) = p u x) = px) > 0, hence u = x + u x H p. This shows that H q H p. Similarly, we can show that H p H q. Therefore, H p = H q for every p,q suppmp). This means that the support of MP consists of uniform distributions over cosets of the same subgroup of G. Let H be this subgroup. The above discussion shows that MP = α A δ πa, for some distribution α A : A G/H} over the quotient group G/H. Fix A G/H and let x A. We have: 1 G = π Gx) a) = px)dmpp) = 1 α B π B x) = α A A = α A H, B G/H where a) follows from the fact that MP is balanced. Hence α A = H G = 1 1, and so MP = δ πa. This G/H G/H means that MP = MP DH, thus MP MP D : D DH G }, and so POL G MP D : D DH G }. We conclude that POL G = MP D : D DH G }. Theorem 1. Let W n ) n 0 be the channel-valued process defined in Definition 1. Almost surely, there exists a subgroup H of G such that the sequence Ŵn) n 0 converges to ˆD H in the noisiness/weak- topology. Proof. Lemma 1 shows that almost surely, the sequence IW n ) IW n ) ) n 0 converges to zero. Let W n) n 0 be a sample of the process for which the sequence IW n ) IW n ) ) n 0 converges to zero. Lemma 2 implies that d o) G, MP W n,pol G ) ) n 0 converges to zero. Now since POL G is finite see Lemma 5), the sequence MP Wn ) n 0 converges to an element in POL G. Lemma 5 now implies that there exists a subgroup H of G such that the sequence ˆD H in the noisiness/weak- topol- Ŵn) n 0 converges to ogy. Corollary 1. For any channel functional F : DMC R which is invariant under channel equivalence, and which is continuous in the noisiness/weak- topology, the process FWn ) ) n 0 almost surely converges. More precisely,fw n) converges to FD H ) if Ŵ n converges to ˆD H. V. DISCUSSION Our proof can be used verbatim) to show the almost sure convergence of the polarization process associated to any channel whose equivalence class is determined by the Blackwell measure. This family of channels is large and contains almost any sensible channel we can think of [12]. ACKNOWLEDGMENT I would like to thank Emre Telatar and Maxim Raginsky for helpful discussions. REFERENCES [1] E. Arıkan, Channel polarization: A method for constructing capacityachieving codes for symmetric binary-input memoryless channels, Information Theory, IEEE Transactions on, vol. 55, no. 7, pp , [2] E. Şaşoğlu, E. Telatar, and E. Arıkan, Polarization for arbitrary discrete memoryless channels, in Information Theory Workshop, ITW IEEE, 2009, pp [3] W. Park and A. Barg, Polar codes for q-ary channels, Information Theory, IEEE Transactions on, vol. 59, no. 2, pp , [4] A. G. 
[4] A. G. Sahebi and S. S. Pradhan, "Multilevel channel polarization for arbitrary discrete memoryless channels," IEEE Transactions on Information Theory, vol. 59, no. 12, Dec. 2013.
[5] R. Nasser and E. Telatar, "Polar codes for arbitrary DMCs and arbitrary MACs," IEEE Transactions on Information Theory, vol. 62, no. 6, June 2016.
[6] R. Nasser, "An ergodic theory of binary operations, part I: Key properties," IEEE Transactions on Information Theory, vol. 62, no. 12, Dec. 2016.
[7] R. Nasser, "An ergodic theory of binary operations, part II: Applications to polarization," IEEE Transactions on Information Theory, vol. 63, no. 2, Feb. 2017.
[8] R. Nasser, "Topological structures on DMC spaces," Entropy, vol. 20, no. 5, 2018.
[9] M. Raginsky, "Channel polarization and Blackwell measures," in Proc. 2016 IEEE International Symposium on Information Theory (ISIT), July 2016.
[10] N. Goela and M. Raginsky, "Channel polarization through the lens of Blackwell measures," arXiv preprint.
[11] F. Buscemi, "Degradable channels, less noisy channels, and quantum statistical morphisms: An equivalence relation," Problems of Information Transmission, vol. 52, no. 3, Jul. 2016.
[12] E. Torgersen, Comparison of Statistical Experiments, ser. Encyclopedia of Mathematics and its Applications. Cambridge University Press, 1991.
[13] R. Nasser, "Continuity of channel parameters and operations under various DMC topologies," Entropy, vol. 20, no. 5, 2018.
