On Code Design for Simultaneous Energy and Information Transfer


Anshoo Tandon, Electrical and Computer Engineering, National University of Singapore. Email: anshoo@nus.edu.sg
Mehul Motani, Electrical and Computer Engineering, National University of Singapore. Email: motani@nus.edu.sg
Lav R. Varshney, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign. Email: varshney@illinois.edu

Abstract—We consider the problem of binary code design for simultaneous energy and information transfer where the receiver relies completely on the received signal for fulfilling its real-time power requirements. The receiver, in this scenario, needs a certain amount of energy (derived from the received signal) within a sliding time window for its continuous operation. In order to meet this energy requirement at the receiver, the transmitter should use only those codewords which carry sufficient energy. In this paper, we assume that the transmitter uses on-off keying, where bit one corresponds to the transmission of a high energy signal. The transmitter uses only those codewords which have at least d ones in a sliding window of W = d + 1 bits. We show that with this constraint, the noiseless code capacity is achieved by sequences generated from a finite state Markov machine. We also investigate achievable rates when such constrained codes are used on noisy communication channels. Although a few of these results are well known for run-length limited codes used for data storage, they do not seem to appear in the literature in the form presented here.

I. INTRODUCTION

When the receiver relies completely on the received information-bearing signal for its real-time power requirements, the problem at the transmitter is to design codes which maximize the information rate while constraining codewords to carry sufficient energy for smooth receiver operation. Even when the receiver harvests part of the received energy for future use, the approach of using the signal as a simultaneous source of both energy and information is more efficient than using the signal in a time-shared manner for energy-only and information-only reception tasks [1]-[3]. Practical applications of wireless power and information transfer include communication between a passive tag and an active reader in radio frequency identification (RFID) systems. Batteryless implantable biomedical devices have been proposed for healthcare systems which receive both energy and control signals wirelessly from an external unit [4], [5]. Other examples of simultaneous energy and information transfer include powerline communication, where information is sent over the same lines which carry electric power [6].

An early work analyzing the fundamental tradeoffs of simultaneous information and energy transfer is [1]. The tradeoff involved the study of the limits of information transfer through the use of codewords for which the average received energy exceeds a threshold. When on-off keying is employed (where 1 (resp. 0) is represented by the presence (resp. absence) of a carrier), a majority of ones in the transmission indicates a greater opportunity for the receiver to use the signal to fulfill its power requirements. Given a specific constraint on the minimum number of ones in a time window, the task is to design codes which maximize the information rate while satisfying the window constraint. In this paper, we study binary codes in which each codeword is constrained to have at least d ones in a sliding window of W = d + 1 consecutive bits.
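As a concrete illustration of this sliding-window constraint, here is a minimal Python sketch of our own (the function names and example words are not from the paper); it checks that every window of W = d + 1 consecutive bits contains at least d ones, together with the equivalent run-length form discussed next.

def satisfies_window_constraint(bits, d):
    # Every window of W = d + 1 consecutive bits must contain at least d ones,
    # i.e., each window may contain at most one zero.
    W = d + 1
    return all(sum(bits[i:i + W]) >= d for i in range(len(bits) - W + 1))

def satisfies_d_inf_rll(bits, d):
    # Equivalent Type-1 (d, infinity) RLL view: at least d ones between successive zeros.
    last_zero = None
    for i, b in enumerate(bits):
        if b == 0:
            if last_zero is not None and i - last_zero - 1 < d:
                return False
            last_zero = i
    return True

# With d = 2: the word 1 1 0 1 1 1 0 1 satisfies the constraint; 1 0 1 0 1 1 1 1 does not.
print(satisfies_window_constraint([1, 1, 0, 1, 1, 1, 0, 1], 2),
      satisfies_d_inf_rll([1, 0, 1, 0, 1, 1, 1, 1], 2))   # True False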
This constraint is equivalent to having at least d ones between successive zeros, which in turn defines a Type-1 (d, ∞) run-length limited (RLL) code. Note that, in general, a Type-1 (d, k) RLL code is one in which the number of ones between successive zeros in each codeword is at least d and at most k. We give a probabilistic proof that the noiseless capacity of a (d, ∞) RLL code can be achieved by using a (d + 1)-state Markov chain. The state transition probabilities for this Markov chain are explicitly provided, and any sequence obtained from these state transitions satisfies the given codeword constraint. We also give analytical expressions for achievable rates when these constrained codes are used on the (i) binary symmetric channel, (ii) Z-channel, and (iii) binary erasure channel. Although a few of these results are well known for run-length limited codes used for data storage, they do not seem to appear in the literature in the form presented here.
Type-0 (d, k) RLL codes (where the number of zeros between successive ones is at least d and at most k) have been used for magnetic and optical recording, and researchers usually refer to Type-0 (d, k) RLL codes simply as (d, k) codes [7]-[11]. However, unless specified otherwise, in this paper by (d, k) codes and (d, ∞) codes we shall mean Type-1 (d, k) RLL codes and Type-1 (d, ∞) RLL codes, respectively. Note that, apart from the physical interpretation of the constraints, there is no combinatorial difference between Type-0 and Type-1 RLL codes when they are used on symmetric channels. However, differences arise when these codes are employed on asymmetric channels, like the Z-channel. A (d, ∞) code can be represented by transitions between d + 1 states, as shown by the finite-state machine in Fig. 1 [10], [11]. We remark that a (d, k) code can be represented by a state machine with k + 1 states, but it is not desirable to view a (d, ∞) code as a special case of a (d, k) code with k = ∞, since the state space then becomes infinite.

Fig. 1. State transition diagram of a (d, ∞) code. (a) State transitions labeled by output bit. (b) State transitions labeled by transition probabilities.

The use of (d, k) codes for simultaneous energy and information transfer has been proposed in [12], [13]. In [12], the tag-to-reader channel in RFID systems is modeled as a discretized Gaussian shift channel and the frame error rate of finite blocklength (d, k) codes is compared through simulations. In this paper, on the other hand, we consider alternate channel models and seek transmission rates at which the probability of error may be brought arbitrarily close to zero by increasing the blocklength. In [13], the receiver is assumed to be equipped with a finite energy buffer and the energy requirements at the receiver are modeled stochastically; the performance of different (d, k) codes is compared through numerical optimization over the state transition probabilities. In comparison, in this paper we do not assume an energy buffering mechanism at the receiver, and we derive analytical expressions for the state transition probabilities which maximize the information rates of (d, ∞) codes.

We analyze the noiseless capacity in Section II, while achievable rates for noisy channels are investigated in Section III. Numerical results and conclusions are presented in Sections IV and V, respectively.

II. (d, ∞) CODE CAPACITY

If M_N denotes the maximum number of distinct binary sequences of length N satisfying the (d, ∞) constraint, then the (d, ∞) code capacity is given by

C = lim_{N→∞} (log₂ M_N) / N.  (1)

This was first studied by Shannon [14], and the code capacity is given by the logarithm of the largest root of the following characteristic equation (see [8] and [15]):

z^{d+1} - z^d - 1 = 0.  (2)

It is interesting to note that solutions to the above equation, for different values of d, are related to certain constants, called Meru constants, obtained from recurrence relations studied by the ancient mathematician Pingala in his work on rhythm and meter in Sanskrit poetry [16]. In particular, the first, second, and fourth Meru constants can be shown to be equal to the largest root of (2) for d = 0, d = 1, and d = 3, respectively.

Let S_n denote the state of the Markov machine in Fig. 1 at time n, where S_n ∈ {1, 2, ..., d+1}. A transition from state S_{n-1} to state S_n produces the bit X_n. From Fig. 1(a) it follows that X_n = 0 when the machine transitions from state d+1 to state 1, and X_n = 1 for all other state transitions. From Fig. 1(b) we note that the transition probability from state d+1 to state 1 is denoted by p₀, and the transition probability from state d+1 to itself is equal to 1 - p₀. This Markov machine is irreducible and aperiodic. Thus, it has a stationary probability distribution, which we denote by {π_j}, j = 1, ..., d+1, where π_j is the steady-state probability of the Markov chain being in state j. For this Markov chain, the entropy rate (or information rate) is

R = H(S₂ | S₁) = π_{d+1} H(p₀),  (3)

where H(p₀) = p₀ log₂(1/p₀) + (1 - p₀) log₂(1/(1 - p₀)). The value of R is uniquely determined by the choice of d and the transition probability p₀. We will now prove that the sequences generated using the finite state machine in Fig. 1 achieve the binary (d, ∞) code capacity through an appropriate choice of the transition probability p₀. That is, for a given d, the following relation is satisfied:

C = max_{p₀} R.  (4)
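As a quick numerical illustration of (3) and (4), the following Python sketch of our own (nothing here is prescribed by the paper) builds the (d+1)-state transition matrix of Fig. 1 for a trial p₀, computes its stationary distribution, evaluates R, and approximates the maximization in (4) by a grid search over p₀.

import numpy as np

def entropy_bits(p):
    # Binary entropy H(p) in bits.
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def entropy_rate(d, p0):
    # Transition matrix of Fig. 1: states 1..d advance deterministically; state d+1
    # returns to state 1 with probability p0 (emitting a 0) or stays put (emitting a 1).
    n = d + 1
    A = np.zeros((n, n))
    for i in range(d):
        A[i, i + 1] = 1.0
    A[n - 1, 0] = p0
    A[n - 1, n - 1] = 1.0 - p0
    # Stationary distribution: left eigenvector of A for eigenvalue 1.
    evals, evecs = np.linalg.eig(A.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    pi = pi / pi.sum()
    return pi[n - 1] * entropy_bits(p0)    # R = pi_{d+1} H(p0), eq. (3)

d = 1
grid = np.linspace(0.01, 0.99, 981)
rates = [entropy_rate(d, p) for p in grid]
best = int(np.argmax(rates))
print(rates[best], grid[best])    # about 0.694 bits at p0 about 0.38 for d = 1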
The asymptotic equipartition property (AEP) holds for Markov sources, and the number of typical sequences of length n is approximately 2^{nH(S₂|S₁)}, where H(S₂|S₁) is the entropy rate (or information rate) of the Markov source [17]. In general, if a constrained code is represented by a finite Markov model then, using the AEP, it can be proved that there exist state transition probabilities such that the entropy rate of the Markov source matches the constrained code capacity [10]. A similar result is also obtained in [9] by enumerating the distinct sequences that a Markov source can generate using its associated connection matrix. Here, we give a simple proof of (4) when (d, ∞) constrained codes are represented by the Markov model in Fig. 1. A useful outcome of our proof is that closed-form expressions for the optimized transition probabilities are explicitly presented. This seems not to have appeared in the literature before.

Theorem 1. The maximum information rate of the Markov source governed by the state machine in Fig. 1 is equal to the (d, ∞) code capacity, given by the logarithm of the largest real root of the characteristic equation z^{d+1} - z^d - 1 = 0. This capacity is achieved by choosing p₀ as the largest real value for which p₀ = (1 - p₀)^{d+1}.

Proof: The steady-state probability distribution satisfies

[π₁ π₂ ... π_{d+1}] = [π₁ π₂ ... π_{d+1}] A,  (5)

where A denotes the transition probability matrix for the finite state machine in Fig. 1, and the (i, j) entry of A is the transition probability from state i to state j:

        [ 0    1    0   ...   0      0    ]
        [ 0    0    1   ...   0      0    ]
    A = [ .    .    .   ...   .      .    ]   (6)
        [ 0    0    0   ...   0      1    ]
        [ p₀   0    0   ...   0   1 - p₀  ]

The diagonal of A is all zeros except the bottom-right entry, which is 1 - p₀. Solving (5), we get

π₁ = π₂ = ... = π_d = p₀ π_{d+1}.  (7)

Since the steady-state probabilities sum to one, we have

π_{d+1} = 1/(1 + d p₀)  and  π₁ = ... = π_d = p₀/(1 + d p₀).  (8)

Using (3) and (8), the information rate is given by

R = H(p₀)/(1 + d p₀).  (9)

To solve for the p₀ which maximizes R, we equate the derivative of R with respect to p₀ to zero:

∂R/∂p₀ = [(1 + d p₀) log₂((1 - p₀)/p₀) - d H(p₀)] / (1 + d p₀)² = 0.  (10)

The above equation yields

p₀ = (1 - p₀)^{d+1}.  (11)

Substituting (11) in (9), we get

R = log₂(1/(1 - p₀)).  (12)

From (11), we see that (1 - p₀) satisfies the equation z^{d+1} = 1 - z. Equivalently, 1/(1 - p₀) satisfies the equation

1/z^{d+1} = 1 - 1/z, i.e., z^{d+1} - z^d - 1 = 0.  (13)

Thus, the maximum information rate is given by the logarithm of the largest real root of the equation z^{d+1} - z^d - 1 = 0.
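Numerically, the fixed point in Theorem 1 is easy to compute. The following sketch of our own (the helper names are not from the paper) solves p₀ = (1 − p₀)^{d+1} by bisection, evaluates C = log₂(1/(1 − p₀)) as in (12), and cross-checks the result against the largest real root of the characteristic equation (2).

import numpy as np

def optimal_p0(d, tol=1e-12):
    # f(p) = p - (1 - p)^(d+1) is increasing on [0, 1] with f(0) < 0 < f(1),
    # so bisection finds the unique root, i.e., the p0 of Theorem 1.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid - (1.0 - mid) ** (d + 1) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def noiseless_capacity(d):
    # C = log2(1 / (1 - p0)), eq. (12).
    return -np.log2(1.0 - optimal_p0(d))

for d in range(6):
    # Largest real root of z^(d+1) - z^d - 1 = 0, eq. (2).
    coeffs = np.zeros(d + 2)
    coeffs[0], coeffs[1] = 1.0, -1.0
    coeffs[-1] += -1.0
    largest = max(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9)
    print(d, round(noiseless_capacity(d), 4), round(np.log2(largest), 4))
    # d = 0 gives 1.0 (unconstrained); d = 1 gives 0.6942 (log2 of the golden ratio).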
III. ACHIEVABLE RATE USING (d, ∞) CODES ON MEMORYLESS CHANNELS

Consider a memoryless channel with input sequence X^N (satisfying the (d, ∞) constraint) and output sequence Y^N = (Y₁, ..., Y_N). The channel capacity in this scenario is equal to

C = lim_{N→∞} sup_{P(X^N)} I(X^N; Y^N) / N  (14)
  = lim_{N→∞} sup_{P(S^N)} I(S^N; Y^N) / N,  (15)

where, in the first equality, the supremum is taken over all probabilities P(X^N) for the input sequence. In the second equality, the supremum is taken over all probabilities P(S^N) for the sequence of states. The second equality follows since, given the initial state, the sequences X^N and S^N are in one-to-one correspondence, and the initial state does not affect the average mutual information [8].

Although the channel capacity using (d, ∞) codes given by (15) is difficult to obtain for noisy channels, a useful lower bound on the capacity for a stationary Markovian source over memoryless channels is given by [7]

C ≥ C_LB = sup_{P(S₁, S₂)} I(S₂; Y₂ | S₁).  (16)

In general, for a constrained code, an analytical expression for C_LB is not available, and thus its computation is performed either through numerical optimization [7] or through approximation [8]. In this work we obtain analytical expressions for C_LB when the finite state machine in Fig. 1 generates the (d, ∞) code, for the following channels: 1) Binary Symmetric Channel, 2) Z-Channel, 3) Binary Erasure Channel.

A. Binary Symmetric Channel (BSC)

A BSC is a binary-input binary-output memoryless channel with an associated crossover probability, Pr(0|1) = Pr(1|0), which we denote by q. The crossover probability represents the probability of bit error by a hard-decision information decoder at the receiver due to channel noise. It may be tempting to interpret the transition of information bit 1 to information bit 0 due to channel noise as an energy loss, but the energy harvester at the receiver harvests the energy radiated by the transmitter independent of the information decoder. Thus, imposing the (d, ∞) code constraint at the transmitter helps to meet the energy requirement at the receiver even on noisy channels. The following proposition evaluates the achievable rate given by (16) for a BSC using (d, ∞) constrained codes.

Proposition 1. The lower bound on the capacity of a BSC with crossover probability q, when the Markovian state machine in Fig. 1 is used to generate the (d, ∞) code, is given by

C_LB = [H(p₀ + q - 2p₀q) - H(q)] / (1 + d p₀),  (17)

where p₀ satisfies the equation

(1 - p₀ - q + 2p₀q)^{1 + d - 2q - dq} = q^{dq} (1 - q)^{d - dq} (p₀ + q - 2p₀q)^{1 - 2q - dq}.  (18)

Proof: The average conditional mutual information in this case is given by

I(S₂; Y₂ | S₁) = H(S₂ | S₁) + H(Y₂ | S₁) - H(S₂, Y₂ | S₁)  (19)
             = π_{d+1} [H(p₀ + q - 2p₀q) - H(q)]  (20)
             = [H(p₀ + q - 2p₀q) - H(q)] / (1 + d p₀),  (21)

where the last equality follows from (8). To solve for the p₀ which maximizes I(S₂; Y₂ | S₁), we equate its derivative with respect to p₀ to zero:

∂I/∂p₀ = [(1 + d p₀)(1 - 2q) log₂((1 - r)/r) - d (H(r) - H(q))] / (1 + d p₀)² = 0,  (22)

where r = p₀ + q - 2p₀q. Using the relation

H(r) = r log₂((1 - r)/r) - log₂(1 - r),  (23)

equation (22) can be simplified to obtain (18). Finally, (17) follows from (16) and (21).

Remark 1. The following observations can be made on the application of Prop. 1 for the special cases of q = 0 and d = 0. When the BSC crossover probability q = 0, (17) and (18) reduce to (9) and (11), respectively. In this case, C_LB corresponds to the (d, ∞) code capacity C. For this reason, C is also called the noiseless capacity under the (d, ∞) RLL code constraint. When d = 0, the code becomes unconstrained, and p₀ represents the probability of a zero in a codeword. In this case, (18) reduces to p₀ = 1/2, and (17) corresponds to the unconstrained BSC capacity 1 - H(q).
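To make Proposition 1 concrete, here is a small numerical sketch of our own (it maximizes (21) directly over a grid of p₀ values instead of solving the fixed-point equation (18)); for q = 0 it recovers the noiseless capacity, and for d = 0 it recovers the unconstrained BSC capacity 1 − H(q).

import numpy as np

def H(p):
    # Binary entropy in bits, clipped to avoid log(0).
    p = np.clip(p, 1e-15, 1 - 1e-15)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def c_lb_bsc(d, q, num=9999):
    # Maximize eq. (21): [H(p0 + q - 2 p0 q) - H(q)] / (1 + d p0) over p0 in (0, 1).
    p0 = np.linspace(1e-4, 1 - 1e-4, num)
    vals = (H(p0 + q - 2 * p0 * q) - H(q)) / (1 + d * p0)
    i = int(np.argmax(vals))
    return vals[i], p0[i]

print(c_lb_bsc(d=1, q=0.0))   # about (0.6942, 0.382): the noiseless capacity for d = 1
print(c_lb_bsc(d=0, q=0.1))   # about (0.531, 0.5): unconstrained BSC capacity 1 - H(0.1)
print(c_lb_bsc(d=1, q=0.1))   # C_LB for d = 1 on a BSC with crossover probability 0.1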

B. Z-Channel

The Z-channel is memoryless with input alphabet X = {0, 1} and output alphabet Y = {0, 1}, and it satisfies Pr(1|0) = 0. We denote the probability Pr(0|1) by q. The following proposition gives an explicit expression for the achievable rate C_LB, given by (16), for the Z-channel.

Proposition 2. When the Markovian state machine in Fig. 1 is used to generate the (d, ∞) code for a Z-channel with q = Pr(0|1), the achievable rate C_LB is given by

C_LB = log₂(1/(1 - p₀)) - q log₂(1 + p₀/(q(1 - p₀))),  (24)

where p₀ satisfies the equation

(1 - p₀)^{(d+1)(1-q)} = (q(1 - p₀) + p₀)^{1 - (d+1)q} q^{(d+1)q}.  (25)

Proof: The average conditional mutual information in this case can be expressed as

I(S₂; Y₂ | S₁) = [H((1 - p₀)(1 - q)) - (1 - p₀) H(q)] / (1 + d p₀).  (26)

To solve for the p₀ which maximizes I(S₂; Y₂ | S₁), we equate its derivative with respect to p₀ to zero and simplify to get (25). We get (24) by substituting the constraint (25) in (26).

Remark 2. The following observations can be made on the application of Prop. 2 for the special cases of q = 0 and d = 0. When q = 0, (24) and (25) reduce to (12) and (11), respectively, and C_LB becomes equal to C. When d = 0, the code becomes unconstrained, and p₀ represents the probability of a zero in a codeword. In this case, (25) reduces to

p₀ = 1 - 1/[(1 - q)(1 + 2^{H(q)/(1-q)})],  (27)

which is the probability of the occurrence of 0 that achieves the unconstrained capacity of the Z-channel.

C. Binary Erasure Channel

In this subsection we consider the BEC, a memoryless channel with input alphabet X = {0, 1}, output alphabet Y = {0, ε, 1}, and transition probabilities Pr(ε|0) = q, Pr(0|0) = 1 - q, Pr(ε|1) = q, Pr(1|1) = 1 - q, where q is called the erasure probability. The following proposition evaluates the achievable rate for the BEC using (d, ∞) constrained codes.

Proposition 3. When the Markovian state machine in Fig. 1 is used to generate the (d, ∞) code for a BEC with erasure probability q, the achievable rate C_LB is given by

C_LB = (1 - q) C,  (28)

where C is the noiseless code capacity of the (d, ∞) code.

Proof: The average conditional mutual information in this case can be shown to satisfy

I(S₂; Y₂ | S₁) = (1 - q) H(p₀) / (1 + d p₀).  (29)

To solve for the p₀ which maximizes I(S₂; Y₂ | S₁), we equate its derivative with respect to p₀ to zero and simplify to get (11). Substituting (11) in (29), we get

C_LB = (1 - q) log₂(1/(1 - p₀)) = (1 - q) C.  (30)

Remark 3. The following observations can be made from Prop. 3 and its proof. The optimized value of p₀ in this case, which maximizes the average conditional mutual information, is independent of q; for a given d, it is equal to the corresponding value for the noiseless case. The lower bound C_LB is tight for the case q = 0, where it equals the noiseless code capacity C.
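Analogously, the sketch below (ours; it maximizes (26) and (29) over a grid of p₀ values rather than using the closed forms (24)-(25) and (28)) evaluates the two lower bounds; for d = 0 the Z-channel value matches the unconstrained Z-channel capacity, and the BEC value is (1 − q) times the noiseless capacity, consistent with Proposition 3.

import numpy as np

def H(p):
    p = np.clip(p, 1e-15, 1 - 1e-15)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def c_lb_z(d, q, num=9999):
    # Maximize eq. (26): [H((1 - p0)(1 - q)) - (1 - p0) H(q)] / (1 + d p0).
    p0 = np.linspace(1e-4, 1 - 1e-4, num)
    vals = (H((1 - p0) * (1 - q)) - (1 - p0) * H(q)) / (1 + d * p0)
    i = int(np.argmax(vals))
    return vals[i], p0[i]

def c_lb_bec(d, q, num=9999):
    # Maximize eq. (29): (1 - q) H(p0) / (1 + d p0); the maximizer is the noiseless p0.
    p0 = np.linspace(1e-4, 1 - 1e-4, num)
    vals = (1 - q) * H(p0) / (1 + d * p0)
    i = int(np.argmax(vals))
    return vals[i], p0[i]

print(c_lb_z(d=0, q=0.1))    # about 0.763 bits: the unconstrained Z-channel capacity
print(c_lb_z(d=1, q=0.1))    # Proposition 2 lower bound for d = 1
print(c_lb_bec(d=1, q=0.2))  # about 0.8 * 0.6942 = 0.5554, as predicted by Proposition 3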
IV. NUMERICAL EXAMPLE

The noiseless (d, ∞) code capacity C is plotted in Fig. 2 as a function of d. The dotted curves correspond to evaluation of the entropy rate, given by (9), for fixed values of the state transition probability p₀. The optimized value of p₀ which achieves C is obtained by solving p₀ = (1 - p₀)^{d+1} and is tabulated in Table I. A higher value of d implies increased transmission of ones in every codeword and hence a greater opportunity for the receiver to use the received signal to fulfill its energy requirements.

Fig. 2. C as a function of d for (d, ∞) codes.

Fig. 3. C_LB as a function of the BSC crossover probability.

TABLE I. C and the optimized p₀ as a function of d.

Fig. 3 plots C_LB, the lower bound on the achievable rate using (d, ∞) codes, versus the BSC crossover probability q. For q = 0, the lower bound is tight and is equal to the noiseless code capacity C. The lower bound is also tight for the case d = 0, in which case it is equal to the unconstrained BSC capacity. The optimized value of p₀ which satisfies the constraint given by (18) and maximizes the average conditional mutual information I(S₂; Y₂ | S₁) for the BSC is plotted in Fig. 4. We observe that the optimized value of p₀ varies both with d and with the BSC crossover probability q.

The lower bound on the achievable rate using (d, ∞) codes on the Z-channel is plotted in Fig. 5. This lower bound is tight for d = 0, in which case it is equal to the unconstrained Z-channel capacity. The optimized value of p₀ which satisfies (25) and maximizes I(S₂; Y₂ | S₁) is shown in Fig. 6. The curve corresponding to d = 0 in Fig. 6 depicts the probability of occurrence of zeros that achieves the unconstrained capacity of the Z-channel.

The lower bound on the achievable rate using (d, ∞) codes on the BEC is plotted in Fig. 7. The optimized value of p₀ in this case satisfies the constraint (11) and is independent of the probability of erasure. These optimized values of p₀ are the same as in the noiseless case and are tabulated in Table I as a function of d.

Fig. 4. Optimized p₀ for the BSC.

V. CONCLUSION

We analyzed achievable rates using Type-1 (d, ∞) run-length limited codes for noiseless and noisy channels. The impact of increasing d on the achievable information rate was presented through optimization of a single parameter. The relation which this parameter, the state transition probability p₀, must satisfy in order to maximize the information rate was given explicitly for different channel models. The use of (d, ∞) codes was motivated by the codeword constraint of having at least d ones in a moving window of size W = d + 1 bits. The case when binary codewords are constrained to have at least d ones in a window of size W > d + 1 opens interesting problems on quantifying achievable rates for different channels. These problems may be generalized to the study of constrained codewords over alphabets of size greater than two.

Fig. 5. C_LB for the Z-channel.

Fig. 6. Optimized p₀ for the Z-channel.

Fig. 7. C_LB for the BEC.

REFERENCES

[1] L. R. Varshney, "Transporting information and energy simultaneously," in Proc. 2008 IEEE Int. Symp. Inf. Theory, Jul. 2008, pp. 1612-1616.
[2] P. Grover and A. Sahai, "Shannon meets Tesla: Wireless information and power transfer," in Proc. 2010 IEEE Int. Symp. Inf. Theory, Jun. 2010, pp. 2363-2367.
[3] L. R. Varshney, "On energy/information cross-layer architectures," in Proc. 2012 IEEE Int. Symp. Inf. Theory, Jul. 2012.
[4] R. R. Harrison, P. T. Watkins, R. J. Kier, R. O. Lovejoy, D. J. Black, B. Greger, and F. Solzbacher, "A low-power integrated circuit for a wireless 100-electrode neural recording system," IEEE J. Solid-State Circuits, vol. 42, no. 1, pp. 123-133, 2007.
[5] A. Yakovlev, S. Kim, and A. Poon, "Implantable biomedical devices: Wireless powering and communication," IEEE Commun. Mag., vol. 50, no. 4, pp. 152-159, 2012.
[6] S. Huczynska, "Powerline communication and the 36 officers problem," Phil. Trans. R. Soc. A, vol. 364, no. 1849, pp. 3199-3214, Dec. 2006.
[7] E. Zehavi and J. K. Wolf, "On runlength codes," IEEE Trans. Inf. Theory, vol. 34, no. 1, pp. 45-54, 1988.
[8] S. Shamai and Y. Kofman, "On the capacity of binary and Gaussian channels with run-length-limited inputs," IEEE Trans. Commun., vol. 38, no. 5, pp. 584-594, 1990.
[9] K. A. S. Immink, Codes for Mass Data Storage Systems. Shannon Foundation Publishers, The Netherlands, 1999.
[10] B. H. Marcus, R. M. Roth, and P. H. Siegel, "An introduction to coding for constrained systems," Oct. 2001. [Online]. Available: http://www.math.ubc.ca/~marcus/handbook/index.html
[11] C. V. Freiman and A. D. Wyner, "Optimum block codes for noiseless input restricted channels," Inf. Control, vol. 7, no. 3, pp. 398-415, 1964.
[12] Á. I. Barbero, E. Rosnes, G. Yang, and Ø. Ytrehus, "Constrained codes for passive RFID communication," in Proc. 2011 Inf. Theory Appl. Workshop, Feb. 2011.
[13] A. M. Fouladgar, O. Simeone, and E. Erkip, "Constrained codes for joint energy and information transfer," 2013, arXiv:1311.1187 [cs.IT].
[14] C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J., vol. 27, pp. 379-423, 1948.
[15] J. J. Ashley and P. H. Siegel, "A note on the Shannon capacity of run-length-limited codes," IEEE Trans. Inf. Theory, vol. 33, no. 4, pp. 601-605, 1987.
[16] L. R. Varshney, "Local fidelity, constrained codes, and the Meru Prastāra," IEEE Potentials, vol. 27, no. 2, pp. 27-32, Mar. 2008.
[17] R. G. Gallager, Principles of Digital Communication. Cambridge University Press, 2008.