Stochastic Interpretation for the Arimoto Algorithm
Serge Tridenski, EE - Systems Department, Tel Aviv University, Tel Aviv, Israel, sergetr@post.tau.ac.il
Ram Zamir, EE - Systems Department, Tel Aviv University, Tel Aviv, Israel, zamir@eng.tau.ac.il

arXiv v2 [cs.IT] 11 Mar 2015

Abstract—The Arimoto algorithm computes the Gallager function $\max_Q E_0(\rho,Q)$ for a given channel $P(y|x)$ and parameter $\rho$, by means of alternating maximization. Along the way, it generates a sequence of input distributions $Q_1(x), Q_2(x), \ldots$ that converges to the maximizing input $Q^*(x)$. We propose a stochastic interpretation for the Arimoto algorithm. We show that for a random i.i.d. codebook with a distribution $Q_k(x)$, the next distribution $Q_{k+1}(x)$ in the Arimoto algorithm is equal to the type $Q'$ of the feasible transmitted codeword that maximizes the conditional Gallager exponent, i.e., the error exponent conditioned on the type of the transmitted codeword. This interpretation is a first step toward finding a stochastic mechanism for on-line channel input adaptation.

Index Terms—Arimoto-Blahut algorithm, Gallager error exponent, natural type selection, channel input adaptation.

I. INTRODUCTION

Optimization of the transmitter output, under a set of constraints and channel conditions, is an important goal in digital communication. In information theoretic terms, it corresponds to selecting the channel input distribution $Q$ that maximizes the mutual information (to achieve the Shannon capacity $C$ of the channel) or the Gallager function (to achieve the best error exponent $E(R)$). Since the channel conditions are often changing, practical schemes tend to gradually adapt to these conditions, using feedback from the receiver to the transmitter.

A previous work [1], [2] found the phenomenon of natural type selection (NTS) in adaptive lossy source coding: the empirical distribution, or type, $Q'$ of the first $D$-matching codeword in an ordered random codebook, generated i.i.d.
according to a distribution $Q$, amounts asymptotically (as the word length goes to infinity) to a single iteration, from $Q$ to $Q'$, of the Blahut algorithm for rate-distortion function computation [3]. By iterating the NTS step, the sequence of codebook distributions $Q, Q', Q'', \ldots$ converges to the reproduction distribution $Q^*$ that realizes the rate-distortion function $R(D)$. This result gives a stochastic flavor to the Blahut algorithm, by interpreting each iteration as an instance of the conditional limit theorem of large deviation theory.

In this paper we provide a first step toward the channel-coding counterpart of NTS. The underlying idea is that the type of a good codeword (good in the sense of having a low individual error probability) indicates the direction in which the codebook distribution should evolve. Similarly to the source coding case, the type of the best codeword in the above sense amounts to a single iteration of the Arimoto error-exponent computation algorithm [4]. Thus, by iteration of the NTS step we obtain a sequence of input distributions $Q, Q', Q'', \ldots$ that converges to $Q^*$, the input that maximizes the Gallager function $E_0(\rho,Q)$. By a proper selection of the parameter $\rho$, this input leads to the optimum random-coding error exponent $E(R)$ at a coding rate $R$, and to the channel capacity $C$ in the limit as $\rho \to 0$.

Section II gives some background on the Arimoto algorithm and presents our main result. Section III proves the main result. Discussion is given in Section IV.

[Footnote 1: This work of S. Tridenski and R. Zamir was supported in part by the Israeli Science Foundation (ISF), grant # 870/11.]

II. BACKGROUND AND STATEMENT OF RESULT

In our search for a dual phenomenon in channel coding we consider an alternating maximization algorithm for random coding exponent computation, discovered by Arimoto in 1976 [4]. This algorithm is a refinement of Arimoto's earlier work from 1972 on capacity computation [5] and of Blahut's work [3] on rate-distortion function and capacity computation.
A. Capacity and Error Exponent

Consider a discrete memoryless channel with a transition probability distribution $P(y|x)$ from the input $x$ to the output $y$. Let $Q(x)$ denote an arbitrary probability assignment on the channel input alphabet. It is well known [6], [7] that the capacity $C$ of this channel is given by maximizing the mutual information $I(Q,P)$ over the input $Q$:

$$C = \max_Q I(Q,P), \quad (1)$$

where

$$I(Q,P) = \sum_{x,y} Q(x) P(y|x) \log \frac{P(y|x)}{\sum_{x'} Q(x') P(y|x')}. \quad (2)$$

Furthermore, for any coding rate $R < I(Q,P)$, if we generate $M = 2^{nR}$ length-$n$ codewords by an i.i.d. distribution $Q$, then the error probability in maximum likelihood decoding is bounded from above by

$$P_e \le \exp\{-n[E_0(\rho,Q) - \rho R]\}, \quad (3)$$

for all $0 \le \rho \le 1$, where

$$E_0(\rho,Q) = -\log \sum_y \Big[ \sum_x Q(x) P(y|x)^{1/(1+\rho)} \Big]^{1+\rho} \quad (4)$$
is the Gallager function defined in [6]. The optimum value of the parameter $\rho$ that maximizes the error exponent in (3) decreases as the rate $R$ increases, and vanishes as $R$ approaches $I(Q,P)$. In particular, $I(Q,P) = \lim_{\rho \to 0} E_0(\rho,Q)/\rho$.

For a fixed value of the parameter $\rho$, the channel input distribution that maximizes the Gallager function (4),

$$Q^*_\rho = \arg\max_Q E_0(\rho,Q), \quad (5)$$

also maximizes the error exponent in (3).² It follows that $Q^*_\rho \to Q^*_C$ as $\rho \to 0$, where $Q^*_C$ is the capacity-achieving input distribution in (1). Unfortunately, however, except for symmetric channels, where the optimum input is uniform [6], these optimum distributions do not have a closed-form solution. This motivates the Arimoto and Blahut iterative algorithms [5], [3], [4].

B. Arimoto's Alternating Maximization

Instead of maximizing $E_0(\rho,Q)$ directly over $Q$, which may be a hard task, the Arimoto algorithm alternates between two maximization steps. To this end, the Gallager function (4) can be rewritten as [4]

$$E_0(\rho,Q) = -\log \sum_y \Big[ \sum_x Q(x) P(y|x)^{1/(1+\rho)} \Big]^{1+\rho} = \max_\Phi \Big\{ -\log \sum_{x,y} Q(x) P(y|x) \Big[ \frac{Q(x)}{\Phi(x|y)} \Big]^{\rho} \Big\}, \quad (6)$$

where the $\Phi$ achieving the equality in (6) is a transition probability matrix given by

$$\Phi(x|y) = \frac{Q(x) P(y|x)^{1/(1+\rho)}}{\sum_{x'} Q(x') P(y|x')^{1/(1+\rho)}}. \quad (7)$$

If we keep $\Phi$ frozen and maximize the right hand side of (6) with respect to the probability vector $Q$, the maximizing distribution is given by [4]

$$Q'(x) = \frac{1}{K} \Big[ \sum_y P(y|x) \Phi(x|y)^{-\rho} \Big]^{-1/\rho}, \quad (8)$$

where $K$ is the normalizing constant that keeps $\sum_x Q'(x) = 1$. The expression (7) contains the previous distribution $Q$. Plugging (7) into (8), we get a recursive expression for the new distribution in terms of the previous one:

$$Q'(x) = \frac{1}{K} Q(x) \Big[ \sum_y P(y|x)^{1/(1+\rho)} \Big( \sum_{x'} Q(x') P(y|x')^{1/(1+\rho)} \Big)^{\rho} \Big]^{-1/\rho}. \quad (9)$$

At this point we have completed only one of the two maximization steps of the Arimoto algorithm, and it is still unclear whether the new distribution $Q'$, determined by (9), does not result in a lower value of the Gallager function $E_0(\rho,Q)$. As a next step, we observe that the right hand side expression in the definition (7) has an extra meaning: it is the maximizer of (6) as a function of a dummy transition probability matrix $\Phi$.

[Footnote 2: For a non-redundant channel input alphabet, the optimum input $Q^*_\rho$ is unique [6, Ch. 4.5, Cor. 3].]
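The alternating maximization above can be sketched numerically. The minimal Python example below implements the Gallager function (4) and one pass of the recursion (9), and checks that $E_0(\rho,Q_k)$ increases monotonically along the iterations; the 2-input/2-output channel matrix and the parameter values are hypothetical, chosen only for illustration:

```python
import math

# Illustrative (hypothetical) DMC: P[x][y] = P(y|x).
P = [[0.9, 0.1], [0.2, 0.8]]

def E0(rho, Q):
    # Gallager function, eq. (4), in nats
    return -math.log(sum(
        sum(Q[x] * P[x][y] ** (1 / (1 + rho)) for x in range(len(Q))) ** (1 + rho)
        for y in range(len(P[0]))))

def arimoto_step(rho, Q):
    # eq. (9): Q'(x) prop. to
    # Q(x) [ sum_y P(y|x)^{1/(1+rho)} (sum_x' Q(x') P(y|x')^{1/(1+rho)})^rho ]^{-1/rho}
    s = 1 / (1 + rho)
    qy = [sum(Q[xp] * P[xp][y] ** s for xp in range(len(Q))) for y in range(len(P[0]))]
    w = [Q[x] * sum(P[x][y] ** s * qy[y] ** rho for y in range(len(P[0]))) ** (-1 / rho)
         for x in range(len(Q))]
    K = sum(w)  # normalization so that the output is a probability vector
    return [wx / K for wx in w]

rho = 0.5
Q = [0.1, 0.9]
values = [E0(rho, Q)]
for _ in range(200):
    Q = arimoto_step(rho, Q)
    values.append(E0(rho, Q))

print(Q, values[0], values[-1])  # E0 increases along the iterations
```

Each step first (implicitly) refreshes $\Phi$ via (7) and then maximizes over $Q$, so the Gallager function can never decrease between iterations.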
Therefore, replacing $Q$ with $Q'$ in both (7) and (6), we get $E_0(\rho,Q')$ and ascertain that $E_0(\rho,Q') \ge E_0(\rho,Q)$. Furthermore, as shown in [4], an iterative application of the recursion (9), starting from an arbitrary nonzero probability vector $Q_1$, produces a monotonically converging sequence $E_0(\rho,Q_k) \nearrow \max_Q E_0(\rho,Q)$, with $Q_k \to Q^*_\rho$ as $k \to \infty$.

Note that the limit of (9) as $\rho \to 0$ corresponds to computing the capacity $C$. Specifically, by L'Hopital's rule, (9) becomes

$$Q'(x) = \frac{1}{K} Q(x) \exp\Big\{ \sum_y P(y|x) \log \frac{P(y|x)}{\sum_{x'} Q(x') P(y|x')} \Big\}, \quad (10)$$

which is one iteration in computing the capacity-achieving distribution in Arimoto's original algorithm from 1972 [5].

Our goal is to find a stochastic mechanism which produces a new type of the form (9). Gallager's upper bound on error probability seems to be a relevant tool for this purpose.

C. Conditional Gallager Exponent for a Given Codeword

Suppose that $M$ messages are used to communicate through an $n$-dimensional channel $P(\mathbf{y}|\mathbf{x})$, not necessarily memoryless. Each message $m$, $1 \le m \le M$, is represented by a codeword $\mathbf{x}_m$ of length $n$, selected independently with the probability measure $Q(\mathbf{x})$. Let $\mathbf{x}_m$ be a fixed channel input selected to represent a certain message $m$. Given this $\mathbf{x}_m$, consider the probability to get an error at the decoder, after we have independently generated (according to $Q$) the codewords for each of the $M-1$ alternative messages $m' \ne m$ and have sent the message $m$ through the channel. This probability, denoted $P_{e,m}$, can be bounded from above as

$$P_{e,m} \le M^{\rho} \sum_{\mathbf{y}} P(\mathbf{y}|\mathbf{x}_m)^{1-s\rho} \Big[ \sum_{\mathbf{x}} Q(\mathbf{x}) P(\mathbf{y}|\mathbf{x})^{s} \Big]^{\rho}, \quad (11)$$

where $s > 0$ and $0 \le \rho \le 1$. This is the Gallager bound [6], except for the averaging over the transmitted word $\mathbf{x}_m$. For the rest of the paper, we assume that both the probability measure $Q$ and the channel $P$ are memoryless: $Q(\mathbf{x}) = \prod_k Q(x_k)$, $P(\mathbf{y}|\mathbf{x}) = \prod_k P(y_k|x_k)$. In this case, the conditional bound (11) depends only on the type of the fixed word $\mathbf{x}_m$, and takes the form shown by the following lemma, which is proved in the Appendix.

Lemma 1 (Conditional Gallager exponent): For memoryless $Q$ and $P$, and a transmitted codeword $\mathbf{x}_m$,

$$P_{e,m} \le \exp\{-n[E_0(s,\rho,Q,\hat{Q}_m) - \rho R]\}. \quad (12)$$
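The conditional exponent of Lemma 1, evaluated at the choice $s = 1/(1+\rho)$ used later in the paper, relates to the unconditional Gallager function: adding the divergence $D(\hat{Q}\|Q)$ and minimizing over types $\hat{Q}$ recovers $E_0(\rho,Q)$, with the minimum attained at $\hat{Q}(x) \propto Q(x)\beta(x)$. A small numerical sanity check of this relation, with an illustrative (hypothetical) channel and parameter values:

```python
import math, random

P = [[0.9, 0.1], [0.2, 0.8]]   # hypothetical DMC, P[x][y] = P(y|x)
rho, Q = 0.5, [0.3, 0.7]
s = 1 / (1 + rho)

qy = [sum(Q[xp] * P[xp][y] ** s for xp in range(len(Q))) for y in range(len(P[0]))]
# beta(x) = sum_y P(y|x)^{1/(1+rho)} (sum_x' Q(x') P(y|x')^{1/(1+rho)})^rho
beta = [sum(P[x][y] ** s * qy[y] ** rho for y in range(len(P[0]))) for x in range(len(Q))]
E0 = -math.log(sum(v ** (1 + rho) for v in qy))  # unconditional Gallager function, eq. (4)

def E0_cond(hatQ):
    # conditional Gallager exponent at s = 1/(1+rho): -sum_x hatQ(x) log beta(x)
    return -sum(hatQ[x] * math.log(beta[x]) for x in range(len(Q)))

def D(hatQ):
    # divergence D(hatQ || Q)
    return sum(hatQ[x] * math.log(hatQ[x] / Q[x]) for x in range(len(Q)) if hatQ[x] > 0)

# equality at the minimizing type hatQ*(x) prop. to Q(x) beta(x)
Z = sum(Q[x] * beta[x] for x in range(len(Q)))
hat_star = [Q[x] * beta[x] / Z for x in range(len(Q))]
print(E0_cond(hat_star) + D(hat_star), E0)  # equal up to float rounding

# and E0_cond + D >= E0 for random binary types hatQ
random.seed(0)
checks = [E0_cond([a, 1 - a]) + D([a, 1 - a]) - E0
          for a in (random.random() for _ in range(100))]
print(min(checks))  # non-negative
```

The check mirrors the type-averaging argument: codewords of type $\hat{Q}$ appear with frequency about $\exp\{-nD(\hat{Q}\|Q)\}$, so averaging the conditional bound over types produces the unconditional exponent.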
Here $R \triangleq \frac{1}{n}\log M$ is the coding rate, $\hat{Q}_m$ is the type of $\mathbf{x}_m$, and

$$E_0(s,\rho,Q,\hat{Q}) \triangleq -\sum_x \hat{Q}(x) \log \sum_y P(y|x)^{1-s\rho} \Big[ \sum_{x'} Q(x') P(y|x')^{s} \Big]^{\rho}. \quad (13)$$

It can be shown that the bound (12) can be exponentially tight for a certain choice of the parameters $s$ and $\rho$, which is determined by the type $\hat{Q}_m$ and the code rate $R$. Nevertheless, to enhance the relation to Gallager's analysis [6], we shall restrict attention to the generally suboptimal choice $s = 1/(1+\rho)$, and use the simplified notation

$$E_0(\rho,Q,\hat{Q}) \triangleq E_0\Big(s = \tfrac{1}{1+\rho},\, \rho,\, Q,\, \hat{Q}\Big) = -\sum_x \hat{Q}(x) \log \sum_y P(y|x)^{1/(1+\rho)} \Big[ \sum_{x'} Q(x') P(y|x')^{1/(1+\rho)} \Big]^{\rho}. \quad (14)$$

The conditional Gallager function $E_0(\rho,Q,\hat{Q}_m)$ plays a role similar to that of the Gallager function $E_0(\rho,Q)$, conditioned on a transmitted codeword of type $\hat{Q}_m$. In fact, it is easy to show from (4) and (14) that

$$E_0(\rho,Q) = \min_{\hat{Q}} \big\{ E_0(\rho,Q,\hat{Q}) + D(\hat{Q}\|Q) \big\}, \quad (15)$$

where

$$D(\hat{Q}\|Q) \triangleq \sum_x \hat{Q}(x) \log \frac{\hat{Q}(x)}{Q(x)} \quad (16)$$

denotes the divergence (Kullback-Leibler distance) between the distributions $\hat{Q}$ and $Q$. The relation (15) can be explained by averaging (12) over the transmitted words $\mathbf{x}_m$, and noting that the frequency of codewords of type $\hat{Q}$ in a random code generated by a distribution $Q$ is proportional to $\exp\{-n D(\hat{Q}\|Q)\}$; see [7].

Figure 1 shows the steps of the Arimoto algorithm with respect to both the conditional and unconditional Gallager functions, for a binary symmetric channel BSC(0.2), at $\rho = 0.1$, starting from an initial input distribution $Q_1 = (0.1, 0.9)$ and converging to $Q^* = (0.5, 0.5)$.

D. Statement of Main Result

Let $\{X_m\}_{m=1}^{+\infty}$ be an infinite list of codewords of length $n$, selected independently with the probability measure $Q(\mathbf{x})$. This list can be viewed alternatively as a sequence of nested random codebooks of size $M$, $\mathcal{C}_M \triangleq \{X_1, X_2, \ldots, X_M\}$, i.e., growing in size with each codeword. Let $\hat{Q}_m$ denote the type of the codeword $X_m$. We define the codeword with the maximum conditional Gallager exponent in two steps:

1) In each codebook $\mathcal{C}_M$, choose the codeword with the maximum conditional Gallager exponent (12), at $s = 1/(1+\rho)$, with respect to this codebook's size $M$.

[Fig. 1. Steps of the Arimoto algorithm: the exponents $E_0(\rho, Q_k, Q_{k+1})$ and $E_0(\rho, Q_k)$ versus the iteration index $k$, converging toward $E_0(\rho, Q^*)$.]

2) Among all the codewords chosen in 1), one for each $M = 1, 2, 3, \ldots$,
choose the codeword which has the maximum such exponent, which corresponds to

$$\max_M \max_{m \le M} \Big[ E_0(\rho, Q, \hat{Q}_m) - \rho \frac{\log M}{n} \Big]. \quad (17)$$

Note that implicit in the definition (17) is that each codeword competes at least with all its predecessors in the list, which means that the conditional Gallager exponent is maximized only among the feasible codewords. Denote the index of the codeword determined by (17) as $N_n$. Then the type $\hat{Q}_{N_n}$ of the codeword $X_{N_n}$ is close to $Q'$:

Theorem 1 (Favorite type):

$$\hat{Q}_{N_n} \xrightarrow[n \to \infty]{} Q' \quad \text{in probability}, \quad (18)$$

where $Q'$ is the maximizing distribution (9) in an iteration of the Arimoto algorithm.

III. PROOF OF MAIN RESULT

The following lemma gives an alternative optimization meaning to the maximizing distribution (9) in one iteration of the Arimoto algorithm. As we shall see later, this lemma serves as a basis for our result.

Lemma 2 (Standard optimization):

$$Q' = \arg\max_{\hat{Q}} \big\{ E_0(\rho,Q,\hat{Q}) - \rho D(\hat{Q}\|Q) \big\} = \arg\max_{\hat{Q}:\, D(\hat{Q}\|Q) \le D(Q'\|Q)} E_0(\rho,Q,\hat{Q}), \quad (19)$$

where $Q'$ is the maximizing distribution (9) in an iteration of the Arimoto algorithm.

Since $\rho$ and $Q$ are fixed, in the remainder of this section we shall use the simplified notation $E(\hat{P}) \triangleq E_0(\rho,Q,\hat{P})$, and also $R_M \triangleq \frac{1}{n}\log M$. The $\hat{P}$ here stands for an arbitrary
channel input distribution, and should not be confused with the channel transition distribution.

Proof: Since $E(\hat{P})$ is a linear function of $\hat{P}$ and $D(\hat{P}\|Q)$ is a convex function of $\hat{P}$, the following expression is a concave function of $\hat{P}$:

$$L(\hat{P}) = E(\hat{P}) - \rho \big[ D(\hat{P}\|Q) - D(Q'\|Q) \big] + \lambda \Big[ \sum_x \hat{P}(x) - 1 \Big], \quad (20)$$

for any $\rho > 0$ and $\lambda$. Differentiating (20) and equating the derivative to zero, we obtain the maximizing distribution of the LHS of (19):

$$\frac{\partial L}{\partial \hat{P}(x)} = -\log \sum_y P(y|x)^{1/(1+\rho)} \Big[ \sum_{x'} Q(x') P(y|x')^{1/(1+\rho)} \Big]^{\rho} - \rho \log \frac{\hat{P}(x)}{Q(x)} - \rho + \lambda = 0$$

$$\Rightarrow \quad \hat{P}(x) = \frac{1}{K} Q(x) \Big[ \sum_y P(y|x)^{1/(1+\rho)} \Big( \sum_{x'} Q(x') P(y|x')^{1/(1+\rho)} \Big)^{\rho} \Big]^{-1/\rho} = Q'(x).$$

Observe that $\hat{P} = Q'$ is also the maximizer of $E(\hat{P})$ under the constraint $D(\hat{P}\|Q) \le D(Q'\|Q)$. Indeed, suppose there exists a better distribution $\hat{P}'$, such that $E(\hat{P}') > E(Q')$ and $D(\hat{P}'\|Q) \le D(Q'\|Q)$; but then also $L(\hat{P}') > L(Q')$, since $\rho > 0$, which is a contradiction. ∎

The key idea of the proof of Theorem 1 is to compare upper and lower bounds on the maximum

$$\max_m \big[ E(\hat{Q}_m) - \rho R_m \big], \quad (21)$$

which is equivalent to the double maximum in (17), as shown below. Note that (21) is a random variable, because the type of the $m$-th codeword is random. As a result of the remaining lemmas, the upper bound on (21) becomes a function of the random type $\hat{Q}_{N_n}$, whereas the lower bound becomes a tight deterministic bound. We start with a pair of bounds which are valid with probability 1:

Lemma 3:

$$E(\hat{Q}_{N_n}) - \rho R_{N_n} \ge \max_{m \le M} E(\hat{Q}_m) - \rho R_M, \quad \text{for all } M. \quad (22)$$

Proof:

$$E(\hat{Q}_{N_n}) - \rho R_{N_n} = \max_m \big[ E(\hat{Q}_m) - \rho R_m \big] = \max_M \max_{m \le M} \big[ E(\hat{Q}_m) - \rho R_m \big] \ge \max_M \Big[ \max_{m \le M} E(\hat{Q}_m) - \rho R_M \Big] \ge \max_{m \le M} E(\hat{Q}_m) - \rho R_M, \quad \text{for all } M. \;\; ∎$$

Aiming at Lemma 2, in a somewhat symmetric manner we would like to replace $\max_{m \le M} E(\hat{Q}_m)$ on the RHS of (22) with $E(Q')$, and $R_{N_n}$ on the LHS of (22) with $D(\hat{Q}_{N_n}\|Q)$. Both of these substitutions can be accomplished only in probability, using properties of types, as the next two lemmas show.

Lemma 4: Choose $M_n = \exp\{n D(Q'\|Q)\}$. Then for any $\epsilon > 0$ and $n$ sufficiently large, with high probability the RHS of (22) is further bounded from below as

$$\max_{m \le M_n} E(\hat{Q}_m) - \rho R_{M_n} \ge \max_{\hat{P}:\, D(\hat{P}\|Q) \le D(Q'\|Q)} E(\hat{P}) - \rho R_{M_n} - \epsilon.$$

Proof: Given a list of $M_n$ words, generated independently with the probability measure $Q$, the types $\hat{Q}$ satisfying $D(\hat{Q}\|Q) < D(Q'\|Q)$ will be found in the list with high probability.
On the other hand, the types satisfying $D(\hat{Q}\|Q) > D(Q'\|Q)$ with high probability will not be found in the list. If we choose the word which has the highest $E(\hat{Q})$ in the list, as the RHS of (22) suggests, this procedure translates into the following optimization problem: maximize $E(\hat{P})$ subject to $D(\hat{P}\|Q) \le D(Q'\|Q)$. Equivalently, for any $\epsilon > 0$ and $n$ sufficiently large, with high probability

$$\max_{m \le M_n} E(\hat{Q}_m) \ge \max_{\hat{P}:\, D(\hat{P}\|Q) \le D(Q'\|Q)} E(\hat{P}) - \epsilon. \;\; ∎$$

Lemma 5: For any $\epsilon > 0$ and $n$ sufficiently large, with high probability the LHS of (22) is further bounded from above as

$$E(\hat{Q}_{N_n}) - \rho R_{N_n} \le E(\hat{Q}_{N_n}) - \rho D(\hat{Q}_{N_n}\|Q) + \epsilon.$$

Proof: Observe that the codeword $X_{N_n}$ has the minimal index among the codewords of the type $\hat{Q}_{N_n}$ in the list. For any given type $\hat{Q}$, define a random variable

$$R_{\hat{Q}} \triangleq \min_{m:\, \hat{Q}_m = \hat{Q}} R_m,$$

which corresponds to the (normalized log) index of the first instance of the type $\hat{Q}$ in the list. Then

$$\Pr\big\{ R_{N_n} \le D(\hat{Q}_{N_n}\|Q) - \epsilon \big\} \le \Pr\Big\{ \bigcup_{\hat{Q}} \big[ R_{\hat{Q}} \le D(\hat{Q}\|Q) - \epsilon \big] \Big\} \le \sum_{\hat{Q}} \Pr\big\{ R_{\hat{Q}} \le D(\hat{Q}\|Q) - \epsilon \big\} \le \sum_{\hat{Q}} e^{n[D(\hat{Q}\|Q) - \epsilon]} \cdot e^{-n D(\hat{Q}\|Q)} = \sum_{\hat{Q}} e^{-n\epsilon} = e^{-n(\epsilon + o(1))}.$$
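The behavior of the first-occurrence quantity $R_{\hat{Q}}$ can be illustrated by a small simulation. The setup below is hypothetical (binary alphabet, a short block length, arbitrary $Q$ and target type, all values chosen only for the example): the normalized log-index of the first codeword of a given type is rarely much smaller than $D(\hat{Q}\|Q)$, which is the direction the bound above exploits.

```python
import math, random

random.seed(1)
n = 14      # block length (illustrative)
q1 = 0.3    # Q assigns probability 0.3 to symbol "1", so Q = (0.7, 0.3)
k = 7       # target type: exactly k ones, i.e. hatQ = (0.5, 0.5)
# D(hatQ || Q) in nats
D = 0.5 * math.log(0.5 / q1) + 0.5 * math.log(0.5 / (1 - q1))

def first_index_of_type():
    # index of the first i.i.d. word whose type matches the target
    m = 0
    while True:
        m += 1
        if sum(1 for _ in range(n) if random.random() < q1) == k:
            return m

samples = [math.log(first_index_of_type()) / n for _ in range(200)]
frac_below = sum(1 for r in samples if r < D - 0.05) / len(samples)
print(sum(samples) / len(samples), D, frac_below)
```

At such short block lengths the polynomial (sub-exponential) factors are visible, so the sample average sits somewhat above $D(\hat{Q}\|Q)$; the one-sided statement used in the proof, that the index is rarely exponentially smaller than $e^{nD(\hat{Q}\|Q)}$, is what the simulation reflects.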
The bound above implies that, with high probability, $R_{N_n} \ge D(\hat{Q}_{N_n}\|Q) - \epsilon$, and the lemma follows. ∎

Combining Lemmas 3, 4, and 5, we obtain

$$E(\hat{Q}_{N_n}) - \rho D(\hat{Q}_{N_n}\|Q) \ge \max_{\hat{P}:\, D(\hat{P}\|Q) \le D(Q'\|Q)} E(\hat{P}) - \rho R_{M_n} - 2\epsilon. \quad (23)$$

Since $R_{M_n} \to D(Q'\|Q)$ as $n \to \infty$, by Lemma 2

$$\max_{\hat{P}:\, D(\hat{P}\|Q) \le D(Q'\|Q)} E(\hat{P}) - \rho R_{M_n} \;\xrightarrow[n \to \infty]{}\; \max_{\hat{P}} \big[ E(\hat{P}) - \rho D(\hat{P}\|Q) \big].$$

Therefore, we can rewrite (23) as

$$E(\hat{Q}_{N_n}) - \rho D(\hat{Q}_{N_n}\|Q) \ge \max_{\hat{P}} \big[ E(\hat{P}) - \rho D(\hat{P}\|Q) \big] - 2\epsilon.$$

By continuity, applying Lemma 2 again, we have (18). ∎

IV. DISCUSSION

It follows from (15) and (19), from the non-negativity of the divergence [7], and from the fact that $Q^*_\rho$ is the unique fixed point of the Arimoto algorithm, that

$$E_0(\rho,Q,Q') - \rho D(Q'\|Q) \ge E_0(\rho,Q), \quad (24)$$

with equality if and only if $Q$ is the optimum input $Q^*_\rho$ of (5). In particular, it can be shown directly, with the help of Jensen's inequality, that

$$E_0(\rho,Q,Q') - E_0(\rho,Q) \ge \rho D(Q'\|Q). \quad (25)$$

Therefore, the type $Q'$ gives a conditional error exponent (12) which is higher than the unconditional error exponent $E_0(\rho,Q)$ by a gap of at least $\rho D(Q'\|Q)$.

Our results suggest the existence of an interesting phenomenon: the lowest individual error probabilities in random codebooks occur not at the lowest rates, but as a result of a trade-off between the probabilities of good types and the number of competing codewords. When this trade-off is combined with the conditional Gallager exponent, we obtain a better type for random codebook generation, which, surprisingly, coincides with the outcome of the Arimoto algorithm and thereby guarantees convergence to the optimal random codebook distribution for a given channel.

One difficulty in such an input adaptation scheme is that it relies on channel knowledge, since the conditional Gallager exponent assumes maximum likelihood decoding. Furthermore, the optimum selection of the parameter $\rho$ depends on the channel and the coding rate. Another issue is that the system requires estimation of the error exponent of each codeword, which is possible only after many transmissions. Yet another weakness of the current analysis is that it assumes working at a fixed rate $R < C$.
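The gap in (24) can be checked numerically. The sketch below (illustrative, hypothetical binary channel and parameter values) evaluates the slack $E_0(\rho,Q,Q') - \rho D(Q'\|Q) - E_0(\rho,Q)$ for an initial $Q$, and again after many Arimoto iterations, when $Q$ is close to the fixed point $Q^*_\rho$ and the slack should vanish:

```python
import math

P = [[0.9, 0.1], [0.2, 0.8]]   # hypothetical DMC, P[x][y] = P(y|x)
rho = 0.5
s = 1 / (1 + rho)

def step_and_gap(Q):
    # one Arimoto step, eq. (9), together with the slack in eq. (24) at this Q
    qy = [sum(Q[xp] * P[xp][y] ** s for xp in range(len(Q))) for y in range(len(P[0]))]
    beta = [sum(P[x][y] ** s * qy[y] ** rho for y in range(len(P[0]))) for x in range(len(Q))]
    E0 = -math.log(sum(v ** (1 + rho) for v in qy))                  # eq. (4)
    w = [Q[x] * beta[x] ** (-1 / rho) for x in range(len(Q))]
    Qp = [wx / sum(w) for wx in w]                                   # Q' from eq. (9)
    Econd = -sum(Qp[x] * math.log(beta[x]) for x in range(len(Q)))   # conditional exponent at Q'
    Div = sum(Qp[x] * math.log(Qp[x] / Q[x]) for x in range(len(Q)))
    return Qp, Econd - rho * Div - E0                                # slack in eq. (24)

Q = [0.1, 0.9]
Q, gap_first = step_and_gap(Q)
for _ in range(300):
    Q, gap_last = step_and_gap(Q)

print(gap_first, gap_last)  # positive at first, near zero at the fixed point
```

This matches the discussion: away from $Q^*_\rho$ the favorite type strictly improves on the unconditional exponent, and the improvement disappears exactly at the fixed point of the algorithm.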
So the current result is still far from our final goal of a universal adaptive scheme for channel input adaptation that approaches the true channel capacity $C$. Some modification of the analysis is required to support a mechanism which is completely independent of the channel, and which relies only on tentative estimates of codeword goodness at the receiver, and on feedback to the transmitter. This future work will be reported elsewhere.

APPENDIX
PROOF OF LEMMA 1

For memoryless $Q$ and $P$, the bound (11) becomes

$$M^{\rho} \sum_{\mathbf{y}} P(\mathbf{y}|\mathbf{x}_m)^{1-s\rho} \Big[ \sum_{\mathbf{x}} Q(\mathbf{x}) P(\mathbf{y}|\mathbf{x})^{s} \Big]^{\rho} = M^{\rho} \prod_{k=1}^{n} \sum_{y_k} P(y_k|x_{m,k})^{1-s\rho} \Big[ \sum_x Q(x) P(y_k|x)^{s} \Big]^{\rho}$$
$$= M^{\rho} \exp\Big\{ n \cdot \frac{1}{n} \sum_{k=1}^{n} \log \sum_y P(y|x_{m,k})^{1-s\rho} \Big[ \sum_x Q(x) P(y|x)^{s} \Big]^{\rho} \Big\}$$
$$= M^{\rho} \exp\Big\{ n \sum_x \hat{Q}_m(x) \log \sum_y P(y|x)^{1-s\rho} \Big[ \sum_{x'} Q(x') P(y|x')^{s} \Big]^{\rho} \Big\}$$
$$= \exp\{-n[E_0(s,\rho,Q,\hat{Q}_m) - \rho R]\}.$$

ACKNOWLEDGEMENT

We thank Yuval Kochman for interesting discussions in the beginning of this research, and Nir Weinberger for helpful comments.

REFERENCES

[1] R. Zamir and K. Rose, "Natural type selection in adaptive lossy compression," IEEE Trans. on Information Theory, vol. 47, no. 1, Jan. 2001.
[2] Y. Kochman and R. Zamir, "Adaptive parametric vector quantization by natural type selection," in Proceedings of the Data Compression Conference, Snowbird, Utah, Mar. 2002.
[3] R. Blahut, "Computation of channel capacity and rate-distortion functions," IEEE Trans. on Information Theory, vol. 18, no. 4, pp. 460-473, Jul. 1972.
[4] S. Arimoto, "Computation of random coding exponent functions," IEEE Trans. on Information Theory, vol. 22, no. 6, Nov. 1976.
[5] ——, "An algorithm for computing the capacity of arbitrary discrete memoryless channels," IEEE Trans. on Information Theory, vol. 18, pp. 14-20, Jan. 1972.
[6] R. Gallager, Information Theory and Reliable Communication. John Wiley & Sons, 1968.
[7] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York: Wiley, 1991.
A Comparison of Two Achievable Rate Regions for the Interference Channel Hon-Fah Chong, Mehul Motani, and Hari Krishna Garg Electrical & Computer Engineering National University of Singapore Email: {g030596,motani,eleghk}@nus.edu.sg
More informationSteepest descent on factor graphs
Steepest descent on factor graphs Justin Dauwels, Sascha Korl, and Hans-Andrea Loeliger Abstract We show how steepest descent can be used as a tool for estimation on factor graphs. From our eposition,
More informationLecture 4: Proof of Shannon s theorem and an explicit code
CSE 533: Error-Correcting Codes (Autumn 006 Lecture 4: Proof of Shannon s theorem and an explicit code October 11, 006 Lecturer: Venkatesan Guruswami Scribe: Atri Rudra 1 Overview Last lecture we stated
More informationSample-based Optimal Transport and Barycenter Problems
Sample-based Optimal Transport and Barcenter Problems MAX UANG New York Universit, Courant Institute of Mathematical Sciences AND ESTEBAN G. TABA New York Universit, Courant Institute of Mathematical Sciences
More informationECE 587 / STA 563: Lecture 5 Lossless Compression
ECE 587 / STA 563: Lecture 5 Lossless Compression Information Theory Duke University, Fall 28 Author: Galen Reeves Last Modified: September 27, 28 Outline of lecture: 5. Introduction to Lossless Source
More informationAn Achievable Error Exponent for the Mismatched Multiple-Access Channel
An Achievable Error Exponent for the Mismatched Multiple-Access Channel Jonathan Scarlett University of Cambridge jms265@camacuk Albert Guillén i Fàbregas ICREA & Universitat Pompeu Fabra University of
More informationECE 587 / STA 563: Lecture 5 Lossless Compression
ECE 587 / STA 563: Lecture 5 Lossless Compression Information Theory Duke University, Fall 2017 Author: Galen Reeves Last Modified: October 18, 2017 Outline of lecture: 5.1 Introduction to Lossless Source
More information3.0 PROBABILITY, RANDOM VARIABLES AND RANDOM PROCESSES
3.0 PROBABILITY, RANDOM VARIABLES AND RANDOM PROCESSES 3.1 Introduction In this chapter we will review the concepts of probabilit, rom variables rom processes. We begin b reviewing some of the definitions
More informationDiscrete Memoryless Channels with Memoryless Output Sequences
Discrete Memoryless Channels with Memoryless utput Sequences Marcelo S Pinho Department of Electronic Engineering Instituto Tecnologico de Aeronautica Sao Jose dos Campos, SP 12228-900, Brazil Email: mpinho@ieeeorg
More informationSquare Estimation by Matrix Inverse Lemma 1
Proof of Two Conclusions Associated Linear Minimum Mean Square Estimation b Matri Inverse Lemma 1 Jianping Zheng State e Lab of IS, Xidian Universit, Xi an, 7171, P. R. China jpzheng@idian.edu.cn Ma 6,
More informationRepresentation of Correlated Sources into Graphs for Transmission over Broadcast Channels
Representation of Correlated s into Graphs for Transmission over Broadcast s Suhan Choi Department of Electrical Eng. and Computer Science University of Michigan, Ann Arbor, MI 80, USA Email: suhanc@eecs.umich.edu
More informationTowards control over fading channels
Towards control over fading channels Paolo Minero, Massimo Franceschetti Advanced Network Science University of California San Diego, CA, USA mail: {minero,massimo}@ucsd.edu Invited Paper) Subhrakanti
More informationSHARED INFORMATION. Prakash Narayan with. Imre Csiszár, Sirin Nitinawarat, Himanshu Tyagi, Shun Watanabe
SHARED INFORMATION Prakash Narayan with Imre Csiszár, Sirin Nitinawarat, Himanshu Tyagi, Shun Watanabe 2/41 Outline Two-terminal model: Mutual information Operational meaning in: Channel coding: channel
More informationJournal of Inequalities in Pure and Applied Mathematics
Journal of Inequalities in Pure and Applied Mathematics http://jipam.vu.edu.au/ Volume 2, Issue 2, Article 15, 21 ON SOME FUNDAMENTAL INTEGRAL INEQUALITIES AND THEIR DISCRETE ANALOGUES B.G. PACHPATTE DEPARTMENT
More informationApproaching Blokh-Zyablov Error Exponent with Linear-Time Encodable/Decodable Codes
Approaching Blokh-Zyablov Error Exponent with Linear-Time Encodable/Decodable Codes 1 Zheng Wang, Student Member, IEEE, Jie Luo, Member, IEEE arxiv:0808.3756v1 [cs.it] 27 Aug 2008 Abstract We show that
More informationCapacity of the Discrete Memoryless Energy Harvesting Channel with Side Information
204 IEEE International Symposium on Information Theory Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information Omur Ozel, Kaya Tutuncuoglu 2, Sennur Ulukus, and Aylin Yener
More informationStrain Transformation and Rosette Gage Theory
Strain Transformation and Rosette Gage Theor It is often desired to measure the full state of strain on the surface of a part, that is to measure not onl the two etensional strains, and, but also the shear
More informationReview of Essential Skills and Knowledge
Review of Essential Skills and Knowledge R Eponent Laws...50 R Epanding and Simplifing Polnomial Epressions...5 R 3 Factoring Polnomial Epressions...5 R Working with Rational Epressions...55 R 5 Slope
More informationCommunication Limits with Low Precision Analog-to-Digital Conversion at the Receiver
This full tet paper was peer reviewed at the direction of IEEE Communications Society subject matter eperts for publication in the ICC 7 proceedings. Communication Limits with Low Precision Analog-to-Digital
More informationCapacity Upper Bounds for the Deletion Channel
Capacity Upper Bounds for the Deletion Channel Suhas Diggavi, Michael Mitzenmacher, and Henry D. Pfister School of Computer and Communication Sciences, EPFL, Lausanne, Switzerland Email: suhas.diggavi@epfl.ch
More informationA Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources
A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources Wei Kang Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College
More informationNash Equilibrium and the Legendre Transform in Optimal Stopping Games with One Dimensional Diffusions
Nash Equilibrium and the Legendre Transform in Optimal Stopping Games with One Dimensional Diffusions J. L. Seton This version: 9 Januar 2014 First version: 2 December 2011 Research Report No. 9, 2011,
More informationQuantum Sphere-Packing Bounds and Moderate Deviation Analysis for Classical-Quantum Channels
Quantum Sphere-Packing Bounds and Moderate Deviation Analysis for Classical-Quantum Channels (, ) Joint work with Min-Hsiu Hsieh and Marco Tomamichel Hao-Chung Cheng University of Technology Sydney National
More informationEntropies & Information Theory
Entropies & Information Theory LECTURE I Nilanjana Datta University of Cambridge,U.K. See lecture notes on: http://www.qi.damtp.cam.ac.uk/node/223 Quantum Information Theory Born out of Classical Information
More informationThe Method of Types and Its Application to Information Hiding
The Method of Types and Its Application to Information Hiding Pierre Moulin University of Illinois at Urbana-Champaign www.ifp.uiuc.edu/ moulin/talks/eusipco05-slides.pdf EUSIPCO Antalya, September 7,
More information2.1 Rates of Change and Limits AP Calculus
.1 Rates of Change and Limits AP Calculus.1 RATES OF CHANGE AND LIMITS Limits Limits are what separate Calculus from pre calculus. Using a it is also the foundational principle behind the two most important
More informationOptimal scaling of the random walk Metropolis on elliptically symmetric unimodal targets
Optimal scaling of the random walk Metropolis on ellipticall smmetric unimodal targets Chris Sherlock 1 and Gareth Roberts 2 1. Department of Mathematics and Statistics, Lancaster Universit, Lancaster,
More informationIN [1], Forney derived lower bounds on the random coding
Exact Random Coding Exponents for Erasure Decoding Anelia Somekh-Baruch and Neri Merhav Abstract Random coding of channel decoding with an erasure option is studied By analyzing the large deviations behavior
More informationService Outage Based Power and Rate Allocation for Parallel Fading Channels
1 Service Outage Based Power and Rate Allocation for Parallel Fading Channels Jianghong Luo, Member, IEEE, Roy Yates, Member, IEEE, and Predrag Spasojević, Member, IEEE Abstract The service outage based
More informationIntroduction to Information Theory
Introduction to Information Theory Gurinder Singh Mickey Atwal atwal@cshl.edu Center for Quantitative Biology Kullback-Leibler Divergence Summary Shannon s coding theorems Entropy Mutual Information Multi-information
More informationError Exponent Region for Gaussian Broadcast Channels
Error Exponent Region for Gaussian Broadcast Channels Lihua Weng, S. Sandeep Pradhan, and Achilleas Anastasopoulos Electrical Engineering and Computer Science Dept. University of Michigan, Ann Arbor, MI
More informationAll parabolas through three non-collinear points
ALL PARABOLAS THROUGH THREE NON-COLLINEAR POINTS 03 All parabolas through three non-collinear points STANLEY R. HUDDY and MICHAEL A. JONES If no two of three non-collinear points share the same -coordinate,
More informationDigital Communications III (ECE 154C) Introduction to Coding and Information Theory
Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Tara Javidi These lecture notes were originally developed by late Prof. J. K. Wolf. UC San Diego Spring 2014 1 / 8 I
More informationIntroduction to Differential Equations. National Chiao Tung University Chun-Jen Tsai 9/14/2011
Introduction to Differential Equations National Chiao Tung Universit Chun-Jen Tsai 9/14/011 Differential Equations Definition: An equation containing the derivatives of one or more dependent variables,
More informationAnd similarly in the other directions, so the overall result is expressed compactly as,
SQEP Tutorial Session 5: T7S0 Relates to Knowledge & Skills.5,.8 Last Update: //3 Force on an element of area; Definition of principal stresses and strains; Definition of Tresca and Mises equivalent stresses;
More information8. BOOLEAN ALGEBRAS x x
8. BOOLEAN ALGEBRAS 8.1. Definition of a Boolean Algebra There are man sstems of interest to computing scientists that have a common underling structure. It makes sense to describe such a mathematical
More informationUpper Bounds on the Capacity of Binary Intermittent Communication
Upper Bounds on the Capacity of Binary Intermittent Communication Mostafa Khoshnevisan and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame, Indiana 46556 Email:{mhoshne,
More informationNoise-Shaped Predictive Coding for Multiple Descriptions of a Colored Gaussian Source
Noise-Shaped Predictive Coding for Multiple Descriptions of a Colored Gaussian Source Yuval Kochman, Jan Østergaard, and Ram Zamir Abstract It was recently shown that the symmetric multiple-description
More informationDesign of Optimal Quantizers for Distributed Source Coding
Design of Optimal Quantizers for Distributed Source Coding David Rebollo-Monedero, Rui Zhang and Bernd Girod Information Systems Laboratory, Electrical Eng. Dept. Stanford University, Stanford, CA 94305
More informationCSCI 2570 Introduction to Nanocomputing
CSCI 2570 Introduction to Nanocomputing Information Theory John E Savage What is Information Theory Introduced by Claude Shannon. See Wikipedia Two foci: a) data compression and b) reliable communication
More informationBounded Expected Delay in Arithmetic Coding
Bounded Expected Delay in Arithmetic Coding Ofer Shayevitz, Ram Zamir, and Meir Feder Tel Aviv University, Dept. of EE-Systems Tel Aviv 69978, Israel Email: {ofersha, zamir, meir }@eng.tau.ac.il arxiv:cs/0604106v1
More informationA Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying
A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying Ahmad Abu Al Haija, and Mai Vu, Department of Electrical and Computer Engineering McGill University Montreal, QC H3A A7 Emails: ahmadabualhaija@mailmcgillca,
More informationInterspecific Segregation and Phase Transition in a Lattice Ecosystem with Intraspecific Competition
Interspecific Segregation and Phase Transition in a Lattice Ecosstem with Intraspecific Competition K. Tainaka a, M. Kushida a, Y. Ito a and J. Yoshimura a,b,c a Department of Sstems Engineering, Shizuoka
More information