The Generalized Area Theorem and Some of its Consequences

Cyril Méasson, Andrea Montanari, Tom Richardson, and Rüdiger Urbanke

arXiv:cs.IT/0511039 v1, 9 Nov 2005

Abstract—There is a fundamental relationship between belief propagation and maximum a posteriori decoding. The case of transmission over the binary erasure channel was investigated in detail in a companion paper. This paper investigates the extension to general memoryless channels (paying special attention to the binary case). An area theorem for transmission over general memoryless channels is introduced and some of its many consequences are discussed. We show that this area theorem gives rise to an upper bound on the maximum a posteriori threshold for sparse graph codes. In situations where this bound is tight, the extrinsic soft bit estimates delivered by the belief propagation decoder coincide with the correct a posteriori probabilities above the maximum a posteriori threshold. More generally, it is conjectured that the fundamental relationship between the maximum a posteriori and the belief propagation decoder which was observed for transmission over the binary erasure channel carries over to the general case. We finally demonstrate that in order for the design rate of an ensemble to approach capacity under belief propagation decoding the component codes have to be perfectly matched, a statement which is well known for the special case of transmission over the binary erasure channel.

Index Terms—belief propagation, maximum a posteriori, maximum likelihood, Maxwell construction, threshold, phase transition, area theorem, EXIT curve, entropy

I. INTRODUCTION

It was shown in [3]–[5] that, when transmission takes place over the binary erasure channel (BEC) using sparse graph codes, there exists a surprising and fundamental relationship between the belief propagation (BP) and the maximum a posteriori (MAP) decoder. This relationship emerges in the limit of large blocklengths. Operationally, this relationship is furnished for the BEC by the so-called Maxwell decoder.
This decoder bridges the gap between BP and MAP decoding by augmenting the BP decoder with an additional guessing device. Analytically, the relationship between BP and MAP decoding is given in terms of the so-called extended BP EXIT (EBP EXIT) function. Fig. 1 shows this curve (double "S"-shaped curve) for transmission over the BEC and the ensemble LDPC((3x + 3x² + 4x¹³)/10, x⁶) (the degree distributions are from an edge perspective). The BP EXIT curve is the envelope of the EBP EXIT curve (let a ball run slowly down the slope). The MAP EXIT curve, on the other hand, is conjectured to be derived in general from the EBP EXIT curve by the so-called Maxwell construction. This Maxwell construction consists of converting the EBP EXIT curve into a single-valued function by cutting the EBP EXIT curve at the two "S"-shaped spots in such a way that there is a local balance of the cut areas.

[Affiliations: EPFL, School of Computer and Communication Sciences, CH-1015 Lausanne, Switzerland, cyril.measson@epfl.ch, ruediger.urbanke@epfl.ch. ENS, Laboratoire de Physique Théorique, F-75231 Paris, France, montanari@lpt.ens.fr. Flarion Technologies, Bedminster, USA, tjr@flarion.com. Parts of the material were presented in [1], [2].]

Fig. 1. The EBP EXIT curve (double "S"-shaped curve), the corresponding BP EXIT curve (dashed and solid line; the envelope of the EBP EXIT curve) and the MAP EXIT curve (thick solid line; constructed by cutting the EBP EXIT curve at the two "S"-shaped spots in such a way that there is a local balance of the areas shown in gray) for the ensemble LDPC((3x + 3x² + 4x¹³)/10, x⁶).

A detailed discussion of this relationship in the case of transmission over the BEC can be found in [5]. Let us summarize. For transmission over the BEC using sparse graph codes from long ensembles, BP decoding is asymptotically characterized by its BP EXIT curve and MAP decoding is characterized by its MAP EXIT curve. These two curves are linked via the EBP EXIT curve.
A. Overview of Results

The pleasing picture shown in Fig. 1 seems to have a fairly complete analog in the general setting. Unfortunately, we are not able to prove this claim in any generality. But we show how several of the key ingredients can be suitably extended to the general case and we will be able to prove some of their fundamental properties. Namely, we introduce a general area theorem (GAT). This area theorem, when applied to the BEC, leads back to the notion of EXIT functions as shown in the companion paper [5]. For the general case, however, it is necessary to use a distinct function (but similar in many respects to EXIT). We call it the generalized EXIT (GEXIT) function. We then show that GEXIT functions share some of the key properties with EXIT functions. In particular, we are able to extend the upper bound on the MAP threshold presented in [3] (or, more generally, the lower bound on the conditional entropy) to general channels. In [6], [7] Guo, Shamai and Verdú showed that for Gaussian channels the derivative (with respect to the signal-to-noise ratio) of the mutual information is equal to the mean square

error (MSE), and in [6] they showed that a similar relationship holds for Poisson channels. One can think of GEXIT functions as providing such a relationship in a more general setting (where the generalization is with respect to the admissible channel families). For some channel families, GEXIT functions have particularly nice interpretations. E.g., for Gaussian channels, we not only have the interpretation of the derivative in terms of the MSE detector, but this interpretation can be simplified even further in the binary case: the derivative of the mutual information can be seen as the magnetization of the system, as was shown by Macris in [8]. The results in [9], which have appeared since the introduction of GEXIT functions in [1], can be reformulated to give an interpretation of GEXIT functions for the class of additive channels (see also [1]). It is likely that interpretations for other classes of channels will be found in the future.

B. Paper Outline

In Section II we review the necessary background material and in particular recall the GAT first stated in [1]. Starting from this GAT, we introduce in Section III GEXIT functions. We will see that for transmission over the BEC, GEXIT functions coincide with standard EXIT functions, but that this is no longer true for general channels. In Section V we then concentrate on LDPC ensembles. In particular, we define the quantities which appear in the asymptotic setting. In Section IV we then prove one of the fundamental properties of GEXIT functions, namely that GEXIT kernels preserve the ordering implied by physical degradation. This fact is then exploited in Section VI, where we show how to compute an upper bound on the threshold under MAP decoding (or, more generally, a lower bound on the conditional entropy) by considering the BP GEXIT function, which results from the regular GEXIT function if we substitute the MAP density by its equivalent BP density.
In Section VII we define extended BP GEXIT (EBP GEXIT) functions, which include the unstable branches, present several examples of these functions, and discuss how they provide a bridge between belief propagation and maximum a posteriori decoding. Several properties of EBP GEXIT functions are discussed in Section VIII, together with a numerical procedure for constructing them. We show that they satisfy an area theorem as well. Section IX presents some partial results on the smoothness and uniqueness of EBP GEXIT functions. In Section X we show the surprising fact that, in case the previously computed upper bound on the MAP threshold is tight, then the a posteriori probabilities on the bits are equal to the corresponding BP estimates. Section XI contains a proof that iterative coding systems cannot achieve reliable communication above capacity, using only density evolution and the area theorem (and not the standard Fano inequality). A matching condition for component codes of capacity-achieving sequences follows. In the appendices we collect some technical derivations and a discussion of several equivalent forms of the GEXIT functions for Gaussian channels. We finally conclude with some remarks in Section XII.

II. REVIEW AND NOTATIONS

Let 𝒳 denote the channel input alphabet (which we always assume finite) and 𝒴 the channel output alphabet (typically, 𝒴 = R). All channels considered in this paper are memoryless (M). Rather than looking at a single memoryless channel, we usually consider families of memoryless channels parameterized by a real-valued parameter ε, which we denote by {M(ε)}_ε. Each channel from such a family is characterized by its transition probability density p_{Y|X}(y|x) (where x ∈ 𝒳 and y ∈ 𝒴). We adopt here the convention of formally denoting channels by their transition density even when such a density does not exist, and write ∫ f(y) p_{Y|X}(y|x) dy as a proxy for the corresponding expectation. Transmission over binary-input memoryless output-symmetric^1 (BMS) channels plays a particularly important role.
In this case, it will be convenient to assume that the input bit X_i takes values x_i ∈ 𝒳 = {+1, −1}. The channel indexed by parameter ε is generically denoted by BMS(ε). In the sequel we will often assume that the channel family {BMS(ε)}_ε is ordered by physical degradation (see [11] for a discussion of this concept). It is well known that the standard families {BEC(ε)}_{ε=0}^{1} (binary erasure channels with erasure parameter ε), {BSC(ε)}_{ε=0}^{1/2} (binary symmetric channels with crossover probability ε), and {BAWGNC(σ)}_{σ=0}^{∞} (binary-input additive white Gaussian noise channels Y = X + N, where X takes values in 𝒳 and the noise N has zero mean and standard deviation σ) all have this property. For notational simplicity we will use a shorthand and say that a channel family is degraded. In the binary case, an important role is played by the distribution of the log-likelihood ratio

L = log( p_{Y|X}(Y|+1) / p_{Y|X}(Y|−1) ),

assuming X = 1. We denote the corresponding density by c(l) and call it an L-density. In fact, without loss of generality we can assume that the log-likelihood ratio L is already included in the channel description. This is justified since the random variable L constitutes a sufficient statistic. This inclusion of the L-processing is equivalent to assuming that p_{Y|X}(l|+1) = c(l). Further facts regarding BMS channels can be found in [11]. As far as LDPC and iterative coding systems are concerned, we will keep the formalism introduced in the companion paper [5] and which is found, e.g., in [12]–[15]. In the case of a non-binary input alphabet 𝒳, the log-likelihood mapping y ↦ log( p_{Y|X}(y|+1) / p_{Y|X}(y|−1) ) will be replaced by the canonical representation of the channel output,

y ↦ ν(y) = { p_{Y|X}(y|x)/z(y) : x ∈ 𝒳 }, where z(y) = Σ_{x∈𝒳} p_{Y|X}(y|x).

Notice that ν(y) belongs to the (|𝒳|−1)-dimensional simplex S_{|𝒳|−1}. In the binary case, the log-likelihood ratio is just a particular parameterization of the one-dimensional simplex.
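As a small illustration of the canonical representation and its log-likelihood parameterization, the following Python sketch (our own illustration, not code from the paper; the function names are ours) computes ν(y) and the log-likelihood ratio for a BSC with crossover probability 0.1 and checks that the LLR is exactly a parameterization of the one-dimensional simplex:

```python
import math

def channel_bsc(eps):
    # Transition probabilities p(y|x) of a BSC with crossover eps,
    # inputs and outputs in {+1, -1}.
    return lambda y, x: (1 - eps) if y == x else eps

def canonical_representation(p, y, alphabet=(+1, -1)):
    # nu(y) = {p(y|x)/z(y) : x in alphabet}, a point on the simplex.
    z = sum(p(y, x) for x in alphabet)
    return {x: p(y, x) / z for x in alphabet}

def llr(p, y):
    # Log-likelihood ratio L(y) = log p(y|+1)/p(y|-1).
    return math.log(p(y, +1) / p(y, -1))

p = channel_bsc(0.1)
for y in (+1, -1):
    nu = canonical_representation(p, y)
    # nu lies on the 1-dimensional simplex ...
    assert abs(sum(nu.values()) - 1.0) < 1e-12
    # ... and the LLR is just a parameterization of it:
    assert abs(llr(p, y) - math.log(nu[+1] / nu[-1])) < 1e-12

print(llr(p, +1))  # → log(0.9/0.1) ≈ 2.197
```

Note that the normalization z(y) cancels in the ratio ν(+1)/ν(−1), which is why the LLR carries the same information as the simplex point.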
In what follows we will often be concerned with how certain quantities (e.g., the conditional entropy H(X|Y)) behave as we change the channel parameter.

[Footnote 1: A binary memoryless channel is said to be symmetric (or, more precisely, output-symmetric) when the transition probability satisfies p_{Y|X}(y|+1) = p_{Y|X}(−y|−1).]

In order to ensure that

the involved objects exist, we need to impose some regularity conditions on the channel family with respect to the channel parameter. This can be done in various ways, but to be concrete we will impose the following restriction.

Definition 1 (Channel Smoothness): Consider a family of memoryless channels with input and output alphabets 𝒳 and 𝒴, respectively, and characterized by their transition probability p_{Y|X}(y|x) (with y taking the canonical form described above). Assume that the family is parameterized by ε, where ε takes values in some interval I ⊆ R. The channel family is said to be smooth with respect to the parameter ε if for all x ∈ 𝒳 and all bounded continuously differentiable functions f(y) on S_{|𝒳|−1}, the integral ∫ f(y) p_{Y|X}(y|x) dy exists and is a continuously differentiable function with respect to ε, ε ∈ I.

In the sequel we often say as a shorthand that a channel BMS(ε) is smooth to mean that we are transmitting over the channel BMS(ε) and that the channel family {BMS(ε)}_ε is smooth at the point ε. If BMS(ε) is smooth, the derivative (d/dε) ∫ f(y) p_{Y|X}(y|x) dy exists and is a linear functional of f. It is therefore consistent to formally define the derivative of p_{Y|X}(y|x) with respect to ε by setting

(d/dε) ∫ f(y) p_{Y|X}(y|x) dy = ∫ f(y) (d p_{Y|X}(y|x)/dε) dy.   (1)

For a large class of channel families it is straightforward to check that they are smooth. This is, e.g., the case if 𝒴 is finite and the transition probabilities are differentiable functions of ε, or if p_{Y|X}(·|x) admits a density with respect to the Lebesgue measure and this density is differentiable for each y. In these cases, the formal derivative (1) coincides with the ordinary derivative.

Example 1 (Smooth Channels): It is straightforward to check that the families {BEC(ε)}_{ε=0}^{1}, {BSC(ε)}_{ε=0}^{1/2}, and {BAWGNC(σ)}_{σ=0}^{∞} are all smooth.

In the case of transmission over a BMS channel it is useful to parameterize the channels in such a way that the parameter reflects the channel entropy.
More precisely, we denote by h the conditional entropy H(X|Y) when the channel input X is chosen uniformly at random from {+1, −1} and the corresponding output is Y. Consider a family of BMS channels characterized by their L-densities. We then write this family of L-densities as {c_h} if H(c_h) = h, where the entropy operator is defined as (see, e.g., [11])

H(c) = ∫ c(y) log₂(1 + e^{−y}) dy = ∫ c(y) l(y) dy.   (2)

This integral always exists, as can be seen by writing it in the equivalent form of a Riemann–Stieltjes integral ∫ h₂( e^{−y}/(1 + e^{−y}) ) d|C|(y). In the above definition we have introduced the kernel l(y) = log₂(1 + e^{−y}). For reasons that will become clearer in Lemma 1, we call l(y) the EXIT kernel. The channel family is said to be complete if h ranges from 0 to 1. For the binary erasure channel the natural parameter ε (the erasure probability) already represents an entropy. Nevertheless, to be consistent we will write in the future BEC(h). By some abuse of notation, we write BSC(h) to denote the BSC with crossover probability equal to ε(h) = h₂^{−1}(h), where h₂(x) = −x log₂(x) − (1−x) log₂(1−x) is the binary entropy function. In the same manner, BAWGNC(h) denotes the BAWGNC with a standard deviation of the noise such that the channel entropy is equal to h.

We will encounter cases where it is useful to allow each bit of a codeword to be transmitted through a different (family of) BMS channel(s). By some abuse of notation, we will denote the i-th channel family by {BMS(h_i)}_{h_i}. A situation in which this more general view appears naturally is when we consider punctured ensembles. We can describe this case by assuming that some bits are passed through an erasure channel with erasure probability equal to one, whereas the remaining bits are passed through some other BMS channel. In such cases it is convenient to assume that all individual families {BMS(h_i)}_{h_i} are parameterized in a smooth (differentiable) way by a single real parameter, call it ε, i.e., h_i = h_i(ε). In this way, by changing ε all channels change according to h_i(ε) and they describe a path through channel space.
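The entropy operator (2) can be evaluated in closed form when the L-density consists of point masses. The following Python sketch (our own illustration) checks that for the BSC(ε), whose L-density puts mass 1−ε at +log((1−ε)/ε) and mass ε at −log((1−ε)/ε), the operator returns exactly the binary entropy h₂(ε), consistent with the parameterization by the channel entropy h:

```python
import math

def h2(x):
    # Binary entropy function h2(x) = -x log2(x) - (1-x) log2(1-x).
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def bsc_L_density(eps):
    # L-density of BSC(eps) given X = +1: mass (1-eps) at +m and eps at -m,
    # with m = log((1-eps)/eps). Represented as (location, weight) pairs.
    m = math.log((1 - eps) / eps)
    return [(+m, 1 - eps), (-m, eps)]

def H(density):
    # Entropy operator (2) for a point-mass L-density:
    # H(c) = sum of weight * log2(1 + e^{-y}) over the mass points.
    return sum(w * math.log2(1 + math.exp(-y)) for y, w in density)

eps = 0.11
assert abs(H(bsc_L_density(eps)) - h2(eps)) < 1e-12
print(H(bsc_L_density(eps)))  # equals h2(0.11)
```

The identity is exact: with m = log((1−ε)/ε) one has 1 + e^{−m} = 1/(1−ε) and 1 + e^{m} = 1/ε, so the two terms reduce to the two terms of h₂(ε).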
The general area theorem (GAT), first introduced in [1], plays center stage in the remainder of this paper.

Theorem 1 (General Area Theorem): Let X be chosen with probability p_X(x) from 𝒳^n. Let the channel from X to Y be memoryless, where Y_i is the result of passing X_i through the smooth family {M(ε_i)}_{ε_i}, ε_i ∈ I_i. Let Ω be a further observation of X so that p_{Ω|X,Y}(ω|x, y) = p_{Ω|X}(ω|x). Then

dH(X|Y, Ω) = Σ_{i=1}^{n} ( ∂H(X_i|Y, Ω)/∂ε_i ) dε_i.   (3)

Proof: For i ∈ [n], the chain rule of entropy gives H(X|Y, Ω) = H(X_i|Y, Ω) + H(X_{∼i}|X_i, Y, Ω), where X_{∼i} (respectively Y_{∼i}) denotes the vector X (respectively Y) with the i-th component removed. We claim that

p(x_{∼i}|x_i, y, ω) = p(x_{∼i}|x_i, y_{∼i}, ω),   (4)

which is true since the channel is memoryless and p_{Ω|X,Y}(ω|x, y) = p_{Ω|X}(ω|x). Furthermore, H(X_i|Y, Ω) is differentiable with respect to ε_i as a consequence of the channel smoothness (it is straightforward to write the conditional entropy as the expectation of a differentiable kernel, cf. Lemma 2 and the remarks below). Therefore H(X_{∼i}|X_i, Y, Ω) = H(X_{∼i}|X_i, Y_{∼i}, Ω), which does not depend on ε_i, and so ∂H(X|Y, Ω)/∂ε_i = ∂H(X_i|Y, Ω)/∂ε_i. From this the total derivative as stated in (3) follows immediately.

III. GEXIT FUNCTIONS

Let X be chosen with probability p_X(x) from 𝒳^n. Assume that the i-th component of X is transmitted over a memoryless erasure channel (not necessarily binary) with erasure probability ε_i; denote it by EC(ε_i). Then

H(X_i|Y) = (1−ε_i) H(X_i|Y_i = X_i, Y_{∼i}) + ε_i H(X_i|Y_i = ?, Y_{∼i}) = ε_i H(X_i|Y_{∼i}).

Apply equation (3) in Theorem 1 assuming that ε_i = ε, i ∈ [n]. To remind ourselves that Y is a function of the parameter ε we write Y(ε). Then

(1/n) (d/dε) H(X|Y(ε)) = (1/n) Σ_{i=1}^{n} H(X_i|Y_{∼i}(ε)).

The function h_i(ε) = H(X_i|Y_{∼i}(ε)) is known in the literature as the EXIT function associated to the i-th bit of the given code, and h(ε) = (1/n) Σ_{i=1}^{n} H(X_i|Y_{∼i}(ε)) is the (average) EXIT

function. We conclude that for transmission over EC(ε),

h(ε) = (1/n) (d/dε) H(X|Y(ε)).

If we integrate this relationship with respect to ε from 0 to 1 and note that H(X|Y(0)) = 0 and H(X|Y(1)) = H(X), then we get the basic form of the area theorem for the EC(ε):

∫₀¹ h(ε) dε = H(X)/n.

This statement was first proved, in the binary case, by Ashikhmin, Kramer, and ten Brink in [16] using a different framework.

Example 2 (Area Theorem for Repetition Code and BEC): Consider the binary repetition code with parameters [n, 1, n], where the first component describes the blocklength, the second component denotes the dimension of the code, and the final component gives the minimum (Hamming) distance. By symmetry, h_i(h) = h(h) = h^{n−1} for all i ∈ [n]. We have ∫₀¹ h(h) dh = 1/n = H(X)/n, as predicted.

The above scenario can easily be generalized by allowing the various components of the code to be transmitted over different erasure channels. Consider, e.g., a binary repetition code of length n in which the first component is transmitted through BEC(δ), where δ is constant, but the remaining components are passed through BEC(h). In this case we have ∫₀¹ h(h) dh = ( H(X|Y(δ, 1, ..., 1)) − H(X|Y(δ, 0, ..., 0)) )/n = δ/n (assuming that X is chosen uniformly at random from the set of codewords). We will get back to this point shortly when we introduce GEXIT functions in Definition 3.

The concept of EXIT functions extends to general channels in the natural way. To simplify notation somewhat, let us focus on the binary case.

Definition 2 (EXIT Function for BMS Channels): Let X be a binary vector of length n chosen with probability p_X(x). Assume that transmission takes place over the family {BMS(h)}. Then

h_i(h) = H(X_i|Y_{∼i}(h)),
h(h) = (1/n) Σ_{i=1}^{n} H(X_i|Y_{∼i}(h)) = (1/n) Σ_{i=1}^{n} h_i(h).

This is the definition of the EXIT function introduced by ten Brink [17]–[21] (see footnote 2). We get a more explicit representation if we consider transmission using binary linear codes. In this context recall that a binary linear code is proper if it possesses a generator matrix with no zero columns.
As a consequence, in a proper binary linear code half the codewords take on the value +1 and half the value −1 in each given position.

Lemma 1 (EXIT Function for Linear Codes and BMS Channels): Let X be chosen uniformly at random from a proper binary linear code and assume that transmission takes place over the family {BMS(h)}. Define

φ_i(y_{∼i}) = log( p_{X_i|Y_{∼i}}(+1|y_{∼i}) / p_{X_i|Y_{∼i}}(−1|y_{∼i}) ),   (5)

and Φ_i = φ_i(Y_{∼i}). Let a_i denote the density of Φ_i, assuming that the all-one codeword was transmitted, and let a = (1/n) Σ_{i=1}^{n} a_i. Then

h_i(h) = H(a_i),   h(h) = H(a),

where H(·) is the entropy operator introduced in (2).

[Footnote 2: More precisely, EXIT functions are usually defined as I(X_i; Y_{∼i}(ε)) = H(X_i) − H(X_i|Y_{∼i}(ε)), which differs from our definition only in a trivial way.]

Proof: Note that X_i → Φ_i → Y_{∼i} forms a Markov chain.^3 Equivalently, we claim that Φ_i is a sufficient statistic for X_i. From this we conclude (see [·, Section 2.8]) that H(X_i|Y_{∼i}) = H(X_i|Φ_i). Now note that since we assume that X was chosen uniformly at random from a proper binary linear code, it follows that the prior for each X_i is the uniform one. Therefore, Φ_i is in fact a log-likelihood ratio. It is shown in [11, Lemma 3.37] that, assuming that X is chosen uniformly at random from a proper binary linear code, the binary channel p(φ_i|x_i) is symmetric. Further, note that the density of Φ_i conditioned on X_i = 1 is equal to the density of Φ_i conditioned on the event that the all-one codeword was transmitted.^4 By assumption this L-density is equal to a_i. We conclude that H(X_i|Φ_i) = H(a_i).

As the next example shows, the EXIT function does not fulfill the area theorem in the general case.

Example 3 (EXIT Function for General BMS Channels): Fig. 2 shows the EXIT function for the [3,1,3] repetition code as well as for the [6,5,2] single parity-check code for BEC(h), BSC(h), and BAWGNC(h). E.g., the EXIT function for the [n, n−1, 2] single parity-check code over BSC(h) is given by

h_i(h) = h(h) = h₂( (1 − (1 − 2ε(h))^{n−1}) / 2 ),

where ε(h) = h₂^{−1}(h). Note that these EXIT functions are ordered.
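The single parity-check EXIT expression above is easy to integrate numerically. The following Python sketch (an independent numerical check, not code from the paper; h₂ is inverted by bisection) evaluates the area under the curve for n = 3 over the BSC family and shows that it falls short of the BEC value of 2/3:

```python
import math

def h2(x):
    # Binary entropy function, extended by continuity at the endpoints.
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def h2_inv(h):
    # Inverse of h2 on [0, 1/2], computed by bisection.
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if h2(mid) < h:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def exit_spc_bsc(h, n=3):
    # EXIT value of the [n, n-1, 2] single parity-check code over BSC(h):
    # h2((1 - (1 - 2*eps)^(n-1)) / 2) with eps = h2_inv(h).
    eps = h2_inv(h)
    return h2((1 - (1 - 2 * eps) ** (n - 1)) / 2)

# Midpoint rule over the channel entropy h in [0, 1].
N = 4000
area = sum(exit_spc_bsc((i + 0.5) / N) for i in range(N)) / N
print(area)  # roughly 0.64, strictly below 2/3
assert area < 2 / 3
```

For the BEC the same integral equals exactly 2/3 (the code rate), so the gap quantifies by how much the BSC EXIT curve fails the area theorem.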
More precisely, for a repetition code we get the highest extrinsic entropy at the output for the channel family {BSC(h)} and we get the lowest such entropy if we use instead the family {BEC(h)}. Indeed, one can show that these two families are the least and most informative families of channels over the whole class of BMS channels for a repetition code, [3]–[5]. The roles are exactly exchanged at a check node. Since we know that the EXIT function for the BEC fulfills the area theorem, it follows from these extremality properties that the EXIT functions for the BSC and the BAWGNC do not fulfill the area theorem. Indeed, for a single parity-check code with n = 3 and the BSC(h) the area under the EXIT function is given by

∫₀¹ h₂( (1 − (1 − 2ε(h))²)/2 ) dh ≈ 0.6437 < 2/3.

Although the above fact might be disappointing, it is not surprising.

[Footnote 3: For z ∈ R, let y_{∼i} be an element of φ_i^{−1}(z), so that z = φ_i(y_{∼i}). Then p_{X_i|Y_{∼i},Φ_i}(x_i|y_{∼i}, z) = ( (1+x_i) + (1−x_i) e^{−z} ) / ( 2(1 + e^{−z}) ) = p_{X_i|Φ_i}(x_i|z). From this we conclude that p_{Y_{∼i}|X_i,Φ_i}(y_{∼i}|x_i, z) = p_{Y_{∼i}|Φ_i}(y_{∼i}|z).]

[Footnote 4: To see this, note that, using the symmetry of the channel and the equal prior on the codewords, we can write p_{X_i|Y}(x_i|y) = c(y) Σ_{x'∈C: x'_i = x_i} p_{Y|X}(y|x'·1), where c(y) is a constant independent of x_i, C denotes the code, and 1 denotes the all-one codeword. In the same manner, if x̄ ∈ C, then p_{X_i|Y}(x_i|y·x̄) = c′(y) Σ_{x'∈C: x'_i = x_i} p_{Y|X}(y·x̄|x'·x̄). Compare the density of the log-likelihood ratio assuming that the all-one codeword was transmitted to the one assuming that the codeword x̄ was transmitted. The claim follows by noting that for any y ∈ 𝒴^n, p_{Y|X}(y|1) = p_{Y|X}(y·x̄|x̄), and that in this case also p_{Y|X}(y|x'·1) = p_{Y|X}(y·x̄|x'·x̄).]

As it should be clear from the discussion at the

Fig. 2. The EXIT function of the [3,1,3] repetition code and the [6,5,2] parity-check code for the BEC(h) (solid curve), BSC(h) (dashed curve) and BAWGNC(h) (dotted curve).

beginning of this section, the EXIT function is related to the GAT only in the case of the erasure channel. Let us therefore go back to the GAT and define the function which fulfills the area theorem in the general case.

Definition 3 (GEXIT Function): Let X be a vector of length n chosen with probability p_X(x) from 𝒳^n. Let the channel from X to Y be memoryless, where Y_i is the result of passing X_i through the smooth family {M(ε_i)}_{ε_i}, ε_i ∈ [0, 1]. Assume that all individual channels are parameterized in a smooth (differentiable) way by a common parameter ε, i.e., ε_i = ε_i(ε), i ∈ [n]. Let Ω be a further observation of X so that p_{Ω|X,Y}(ω|x, y) = p_{Ω|X}(ω|x). Then the i-th and the (average) generalized EXIT (GEXIT) function are defined by

g_i(ε) = ( ∂H(X_i|Y, Ω)/∂ε_i )|_ε · ( dε_i/dε )|_ε,
g(ε) = (1/n) Σ_{i=1}^{n} g_i(ε).

Discussion: The definition is stated in quite general terms. First note that if we consider the integral ∫_{\underline{ε}}^{\overline{ε}} g(ε) dε, then from Theorem 1 we conclude that the result is (1/n)( H(X|Y(\overline{ε}), Ω) − H(X|Y(\underline{ε}), Ω) ). In words, if we smoothly change the individual channel parameters ε_i as a function of ε, then the integral of g_i(ε) tells us how much the conditional entropy of the system changes due to the total change of the parameters ε_i. To be concrete, assume, e.g., that all bits are sent through Gaussian channels. We can imagine that we first only change the parameter of the Gaussian channel through which bit 1 is sent from its initial to its final value, then the parameter of the second channel, and so on. Alternatively, we can imagine that all channel parameters are changed simultaneously. In the two cases the integrals of the individual GEXIT functions g_i differ, but their sum is the same and it equals the total change of the conditional entropy due to the change of channel parameters.
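This bookkeeping can be checked in the most elementary case: a single uniform bit (n = 1) sent over the BSC(ε), for which H(X|Y(ε)) = h₂(ε). The following Python sketch (our own illustration; the derivative is approximated by finite differences) confirms that integrating the GEXIT function over the parameter path recovers the total change of the conditional entropy:

```python
import math

def h2(x):
    # Binary entropy function, extended by continuity at the endpoints.
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

# For a single uniform bit sent over BSC(eps), H(X|Y(eps)) = h2(eps),
# so the (sole) GEXIT function is g(eps) = d h2(eps)/d eps.
def g(eps, d=1e-6):
    return (h2(eps + d) - h2(eps - d)) / (2 * d)

# Integrating g from eps = 0 to eps = 1/2 must recover the total change
# of the conditional entropy, H(X|Y(1/2)) - H(X|Y(0)) = 1 - 0 = 1.
N = 20000
integral = sum(g((i + 0.5) / (2 * N)) for i in range(N)) / (2 * N)
assert abs(integral - 1.0) < 5e-3
print(round(integral, 2))  # → 1.0
```

The same accounting holds coordinate-wise for n > 1: each g_i integrates to the entropy change attributable to the i-th channel along the chosen path.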
Therefore, GEXIT functions can be considered to be a local way of measuring the change of the conditional entropy of a system. One should think of the common parameter ε as a convenient way of parameterizing the path through channel space that we are taking. In many applications all channels are identical, and formulas simplify significantly. In Section VII we will see a case in which the extra degree of freedom afforded by allowing different channels is important. The additional observation Ω is useful if we consider the design of iterative systems and component-wise GEXIT functions. For what follows, though, we will not need it. Hence, we will drop Ω in the sequel. If we assume that the input is binary we obtain a more explicit expression for the GEXIT functions.

Lemma 2 (GEXIT Function for BM Channels): Let X be a binary vector of length n chosen with probability p_X(x). Let the channel from X to Y be memoryless, where Y_i is the result of passing X_i over the smooth family {BM(h_i)}_{h_i}, h_i ∈ [0, 1]. Assume that all individual channel families are parameterized in a smooth (differentiable) way by a common parameter ε, i.e., h_i = h_i(ε), i ∈ [n]. Then the i-th and the (average) generalized EXIT (GEXIT) function are given by

g_i(ε) = ∫_{φ_i, y_i} Σ_{x_i} p(x_i) p(φ_i|x_i) (d p(y_i|x_i)/d h_i) log₂( Σ_{x'_i∈𝒳} p(x'_i|φ_i) p(y_i|x'_i) / ( p(x_i|φ_i) p(y_i|x_i) ) ) (d h_i/dε) dy_i dφ_i,   (6)

g(ε) = (1/n) Σ_{i=1}^{n} g_i(ε),   (7)

where φ_i(y_{∼i}) and Φ_i are defined as in (5).

Discussion: As mentioned above, the derivative of p(y_i|x_i) in Eq. (6) has to be interpreted in general as in Eq. (1). Moreover, writing the same expression as g_i(ε) = ∫ f(y_i) (d/d h_i) p(y_i|x_i) dy_i, the existence of such a derivative follows from the channel smoothness and the differentiability of f (if written as a function of the log-likelihood ratio log( p(y|+1)/p(y|−1) )).

Proof: We proceed as in the proof of Lemma 1. We claim that X_i → (Φ_i, Y_i) → Y_{∼i} forms a Markov chain (equivalently, (Φ_i, Y_i) constitutes a sufficient statistic). To see this, fix z ∈ R and let y_{∼i} be an element of φ_i^{−1}(z), so that z = φ_i(y_{∼i}).
Then, using the fact that Y_{∼i} is conditionally independent of Y_i, given X_i = x_i, we may write

p_{X_i|Y_i,Y_{∼i},Φ_i}(x_i|y_i, y_{∼i}, z) = p_{Y_i|X_i}(y_i|x_i) p_{X_i|Y_{∼i},Φ_i}(x_i|y_{∼i}, z) / Σ_{x'_i∈𝒳} p_{Y_i|X_i}(y_i|x'_i) p_{X_i|Y_{∼i},Φ_i}(x'_i|y_{∼i}, z).

Since X_i → Φ_i → Y_{∼i} forms a Markov chain (as already shown in the proof of Lemma 1), we have p_{X_i|Y_{∼i},Φ_i}(x_i|y_{∼i}, z) = p_{X_i|Φ_i}(x_i|z). Substituting in the above equation, we get p_{X_i|Y_i,Y_{∼i},Φ_i}(x_i|y_i, y_{∼i}, z) = p_{X_i|Y_i,Φ_i}(x_i|y_i, z), as claimed. Therefore, we can rewrite g_i(ε) as

g_i(ε) = ( ∂H(X_i|Y)/∂h_i )|_ε · (d h_i/dε) = ( ∂H(X_i|Φ_i, Y_i)/∂h_i )|_ε · (d h_i/dε).

Expand H(X_i|Φ_i, Y_i) as

− ∫_{φ_i, y_i} Σ_{x_i} p(x_i, φ_i, y_i) log₂( p(x_i|φ_i, y_i) ) dy_i dφ_i
= − ∫_{φ_i, y_i} Σ_{x_i} p(x_i) p(φ_i|x_i) p(y_i|x_i) log₂( p(x_i|φ_i) p(y_i|x_i) / Σ_{x'_i∈𝒳} p(x'_i|φ_i) p(y_i|x'_i) ) dy_i dφ_i.

This form has the advantage that the dependence of H(X_i|Φ_i, Y_i) upon the channel at position i is completely

explicit. Let us therefore differentiate the above expression with respect to h_i, the parameter which governs the transition probability p(y_i|x_i). The terms obtained by differentiating with respect to the channel inside the log vanish. For instance, when differentiating with respect to the p(y_i|x_i) in the numerator, we get

∫_{φ_i, y_i} Σ_{x_i} p(x_i) p(φ_i|x_i) (d/d h_i) p(y_i|x_i) dy_i dφ_i = ∫_{φ_i} Σ_{x_i} p(x_i) p(φ_i|x_i) (d/d h_i) ∫_{y_i} p(y_i|x_i) dy_i dφ_i = 0,

since ∫_{y_i} p(y_i|x_i) dy_i = 1 does not depend on h_i. When differentiating with respect to the outer p(y_i|x_i) we get the stated result.

Although the last lemma was stated for the case of binary channels, it poses no difficulty to generalize it. It is in fact sufficient to replace φ_i(y_{∼i}) with any sufficient statistic of X_i, given Y_{∼i} = y_{∼i}. For instance, one may take φ_i(y_{∼i}) = { p_{X_i|Y_{∼i}}(x_i|y_{∼i}) : x_i ∈ 𝒳 }, which takes values on the (|𝒳|−1)-dimensional simplex, or any parameterization of it. The log-likelihood ratio can be regarded as a particular parameterization of the 1-dimensional simplex. More generally, p_{X_i|Y_{∼i}}(x_i|y_{∼i}) is a natural quantity appearing in iterative decoding. The proof (as well as the statement) applies verbatim to this case. We get an even more compact description if we assume that transmission takes place using a proper binary linear code and that the channel is symmetric.

Lemma 3 (GEXIT Function for Linear Codes and BMS Channels): Let X be chosen uniformly at random from a proper binary linear code of length n. Let the channel from X to Y be memoryless, where Y_i is the result of passing X_i over the smooth family {BMS(h_i)}_{h_i}. Assume that all individual channels are parameterized in a smooth (differentiable) way by a common parameter ε, i.e., h_i = h_i(ε), i ∈ [n]. Let the i-th channel be characterized by its L-density, which by some abuse of notation we denote by c_{BMS(h_i)}. Let φ_i and Φ_i be as defined in (5) and let a_i denote the density of Φ_i, assuming that the all-one codeword was transmitted. Then

g_i(ε) = ∫ a_i(z) l^{c_{BMS(h_i)}}(z) dz,

where

l^{c_{BMS(h_i)}}(z) = ∫ ( ∂c_{BMS(h_i)}(w)/∂ε ) log₂(1 + e^{−z−w}) dw.
Discussion: The remarks made after Lemma 2 apply in particular to the present case: We write ∫ ( ∂c_{BMS(h_i)}(w)/∂ε ) log₂(1 + e^{−z−w}) dw as a proxy for (∂/∂ε){ ∫ c_{BMS(h_i)}(w) log₂(1 + e^{−z−w}) dw }. The latter expression exists, since log₂(1 + e^{−z−w}) is continuously differentiable as a function of w and by assumption the channel family is smooth. Note further that l^{c_{BMS(h_i)}}(z) is continuous and non-negative, so that g_i(ε) exists as well.

Proof: Consider the expression for g_i(ε) as given in (6). By assumption, p(y_i|x_i) is symmetric for all i ∈ [n]. Further, as already remarked in the proof of Lemma 1, the channel p(φ_i|x_i) is symmetric as well. It follows from this and the fact that p_{X_i}(+1) = p_{X_i}(−1) (due to the assumption that the code is proper and that codewords are chosen with uniform probability) that the contributions to g_i(ε) for x_i = +1 and x_i = −1 are identical. We can therefore assume without loss of generality that x_i = +1. Recall that the density of Φ_i assuming that X_i = 1 is equal to the density of Φ_i assuming that the all-one codeword was transmitted. The latter is by definition equal to a_i. As remarked earlier, a_i is symmetric. Further, as discussed in the introduction, we can assume that the i-th BMS channel outputs already log-likelihood ratios. Therefore, p_{Y_i|X_i}(y_i|+1) = c_{BMS(h_i)}(y_i). Finally, consider the expression within the log. If x_i = +1 then the numerator and denominator are equal and we get one. If on the other hand x_i = −1 then we get by the previous remarks the product of the likelihoods. Putting this all together we get

g_i(ε) = ∫∫ a_i(z) ( ∂c_{BMS(h_i)}(w)/∂ε ) log₂(1 + e^{−z−w}) dz dw.

The claim follows by rearranging terms.

Example 4 (Alternative Kernel Representations): Note that because of the symmetry property of L-densities we can write

g(ε) = ∫ a(z) l^{c_{BMS(h)}}(z) dz = ∫₀^∞ |a|(z) ( l^{c_{BMS(h)}}(z) + e^{−z} l^{c_{BMS(h)}}(−z) ) / (1 + e^{−z}) dz.
This means that the kernel is uniquely specified on the absolute value domain [0, ∞), but that for each z ∈ [0, ∞) we can split the weight of the kernel in any desired way between +z and −z, so long as l^{c_{BMS(h)}}(z) + e^{−z} l^{c_{BMS(h)}}(−z) equals the desired value. In the sequel we will use this degree of freedom to bring some kernels into a more convenient form. Although it constitutes some abuse of notation, we will in the sequel make no notational distinction between equivalent such kernels even though pointwise they might not represent the same function.

As we have already remarked in the discussion right after Definition 3, the GEXIT functions g_i(ε) allow us to locally measure the change of the conditional entropy of a system. This property is particularly apparent in the representation of Lemma 3, where we see that the local measurement has two components: (i) the kernel, which depends on the derivative of the channel seen at the given position, and (ii) the distribution a_i, which encapsulates all our ignorance about the code behavior with respect to the i-th position. This representation is very intuitive. If we improve the observation of a particular bit (derivative of the channel with respect to the parameter) then the amount by which the conditional entropy of the overall system changes clearly depends on how well this particular bit was already known via the code constraints and the observations of the other bits (extrinsic posterior density): if the bit was already perfectly known then the additional observation afforded will be useless, whereas if nothing was known about the bit one would expect that the additional reduction in entropy of this bit fully translates to a reduction of the entropy of the overall system. We will see some quantitative statements of this nature in Section IV.

In the next three examples we compute the kernels l^{c_{BMS(h_i)}}(z) for the standard families {BEC(h)}, {BSC(h)}, and {BAWGNC(h)}. If we consider a single family of BMS channels parameterized by the entropy it is convenient to

normalize the GEXIT kernel so that it measures the progress per dh. This means that in the following examples we compute

  l^{c_{BMS(h)}}(z) = ∫ (∂c_{BMS(h)}(w)/∂ǫ) log₂(1 + e^{-z-w}) dw / ∫ (∂c_{BMS(h)}(w)/∂ǫ) log₂(1 + e^{-w}) dw.   (8)

Example 5 (GEXIT Kernel, L-Domain, {BEC(h)}): If we take the family {c_{BEC(h)}}, where h = ǫ denotes both the channel (intrinsic) entropy and the erasure probability, then a quick calculation shows that l^{c_{BEC(h)}}(z) = log₂(1 + e^{-z}) = l(z). In words, the GEXIT kernel with respect to the family {BEC(h)} is the regular EXIT kernel.

Example 6 (GEXIT Kernel, L-Domain, {BSC(h)}): Let us now look at the family {c_{BSC(h)}}. Some calculus shows that

  l^{c_{BSC(h)}}(z) = log((1 + (ǫ/ǭ) e^{-z}) / (1 + (ǭ/ǫ) e^{-z})) / log(ǫ/ǭ),

where ǫ = h₂⁻¹(h) and ǭ = 1 - ǫ. For a fixed z ∈ ℝ, the kernel behaves like 1 + z/log(ǫ), and hence converges to 1, as h → 0, whereas the limit when h → 1 is equal to 2/(1 + e^{z}).

Example 7 (GEXIT Kernel, L-Domain, {BAWGNC(h)}): Consider now the family {c_{BAWGNC(h)}}, where h denotes the channel entropy. This family is defined in Example 1. Recall that the noise is assumed to be Gaussian with zero mean and variance σ². A convenient parameterization for this case is ǫ = 2/σ², so that in the following h = H(c_{BAWGNC(σ² = 2/ǫ)}). After some steps of calculus, shown in Appendix I and Lemma 18, we get

  l^{c_{BAWGNC(h)}}(z) = ∫ e^{-(w-ǫ)²/(4ǫ)}/(1 + e^{w+z}) dw / ∫ e^{-(w-ǫ)²/(4ǫ)}/(1 + e^{w}) dw.

In Appendix I we give alternative representations and interpretations of this kernel. In particular, we discuss the relationship to the formulation presented by Guo, Shamai, and Verdú in [7], [6], using a connection to the MMSE detector, as well as the formulation by Macris in [8] based on the Nishimori identity.

One convenient feature of standard EXIT functions is that they are fairly similar for a given code across the whole range of BMS channels. Is this still true for GEXIT functions? GEXIT functions depend on the channel both through the kernel and through the extrinsic densities. Let us therefore compare the shape of the various kernels.
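To make the comparison concrete, the closed-form BSC kernel of Example 6 can be evaluated numerically, and the two limits stated there are easy to confirm. This is an illustrative sketch of ours (the test points and tolerances are arbitrary choices, not from the paper):

```python
import math

def l_bsc(z, eps):
    # L-domain GEXIT kernel of the family {BSC(h)}, h = h2(eps) (Example 6).
    ebar = 1 - eps
    return math.log((1 + (eps / ebar) * math.exp(-z)) /
                    (1 + (ebar / eps) * math.exp(-z))) / math.log(eps / ebar)

# At z = 0 the kernel equals 1 for every channel parameter.
assert abs(l_bsc(0.0, 0.11) - 1.0) < 1e-12

z = 1.7
# As h -> 1 (eps -> 1/2) the kernel tends to 2 / (1 + e^z).
assert abs(l_bsc(z, 0.499999) - 2 / (1 + math.exp(z))) < 1e-4
# As h -> 0 (eps -> 0) it behaves like 1 + z / log(eps), hence tends to 1.
assert abs(l_bsc(z, 1e-9) - (1 + z / math.log(1e-9))) < 1e-3
```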
It is most convenient to compare the kernels not in the L-domain but in the |D|-domain. A change of variables shows that, in general, the L-domain kernel, call it l^{c}(·), and the associated |D|-domain kernel, denote it by d^{c}(·), are linked by

  d^{c}(s) = ((1-s)/2) l^{c}(log((1-s)/(1+s))) + ((1+s)/2) l^{c}(log((1+s)/(1-s))).   (9)

If we apply this transformation to the previous examples, we get the following results.

Example 8 (GEXIT Kernel, |D|-Domain, {BEC(h)}): We get d^{c_{BEC(h)}}(s) = h₂((1+s)/2).

Example 9 (GEXIT Kernel, |D|-Domain, {BSC(h)}): Some calculus shows that

  d^{c_{BSC(h)}}(s) = 1 - (s / log((1-ǫ)/ǫ)) log((1 + s(1-2ǫ)) / (1 - s(1-2ǫ))).

The limiting values are seen to be lim_{h→1} d^{c_{BSC(h)}}(s) = 1 - s² and lim_{h→0} d^{c_{BSC(h)}}(s) = 1.

Example 10 (GEXIT Kernel, |D|-Domain, {BAWGNC(h)}): Using Example 7 and (9), it is straightforward to write the kernel in the |D|-domain as

  d^{c_{BAWGNC(h)}}(s) = Σ_{i∈{-1,+1}} ∫ ((1-s²)/2) e^{-(w-ǫ)²/(4ǫ)} / ((1+is) + (1-is)e^{w}) dw / ∫ e^{-(w-ǫ)²/(4ǫ)} / (1 + e^{w}) dw.

As shown in Appendix I, the limiting values are the same as for the BSC, i.e., lim_{h→1} d^{c_{BAWGNC(h)}}(s) = 1 - s² and lim_{h→0} d^{c_{BAWGNC(h)}}(s) = 1.

In Fig. 3 we compare the EXIT kernel (which is also the GEXIT kernel for the BEC) with the GEXIT kernels for the BSC(h) and the BAWGNC(h) in the |D|-domain for several channel parameters. Note that these kernels are distinct but quite similar. In particular, for h = 0.5 the GEXIT kernel with respect to the BAWGNC(h) is hardly distinguishable from the regular EXIT kernel. The GEXIT kernel for the BSC shows more variation.

Fig. 3. Comparison of the kernels d^{c_{BEC(h)}}(s) (dashed line) with d^{c_{BSC(h)}}(s) (dotted line) and d^{c_{BAWGNC(h)}}(s) (solid line) at channel entropy h = 0.1 (left), h = 0.5 (middle), and h = 0.9 (right).

Example 11 (Repetition Code): Consider the [n, 1, n] repetition code. Let {c_h} characterize a smooth family of BMS channels. For n ∈ ℕ, let c_h^{⊛n} denote the n-fold convolution of c_h. The GEXIT function for the [n, 1, n] repetition code is then given by g(h) = (1/n) dH(c_h^{⊛n})/dh. Explicitly, we get g^{BEC}(h) = h^{n-1}, which coincides with the EXIT function h^{BEC}(h).
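The change of variables (9) can likewise be verified numerically: the BEC kernel must map to h₂((1+s)/2), and the transform of the L-domain BSC kernel must agree with the closed form of Example 9. A small sketch of ours (function names are our own, not the paper's):

```python
import math

def h2(p):
    # Binary entropy function (base 2).
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def to_abs_d_domain(l_kernel, s):
    # Change of variables (9): |D|-domain kernel from an L-domain kernel.
    z = math.log((1 + s) / (1 - s))
    return (1 - s) / 2 * l_kernel(-z) + (1 + s) / 2 * l_kernel(z)

# BEC: the L-domain GEXIT kernel log2(1 + e^{-z}) maps to h2((1+s)/2).
l_bec = lambda z: math.log2(1 + math.exp(-z))
for s in (0.1, 0.5, 0.9):
    assert abs(to_abs_d_domain(l_bec, s) - h2((1 + s) / 2)) < 1e-9

# BSC: the transform of the L-domain kernel of Example 6 matches the
# closed form of Example 9.
eps = 0.11
delta = 1 - 2 * eps

def l_bsc(z):
    ebar = 1 - eps
    return math.log((1 + (eps / ebar) * math.exp(-z)) /
                    (1 + (ebar / eps) * math.exp(-z))) / math.log(eps / ebar)

def d_bsc(s):
    return 1 - s / math.log((1 - eps) / eps) * math.log((1 + s * delta) /
                                                        (1 - s * delta))

for s in (0.1, 0.5, 0.9):
    assert abs(to_abs_d_domain(l_bsc, s) - d_bsc(s)) < 1e-9
```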
As a further example, g^{BSC} is given in parametric form by

  { h₂(ǫ),  Σ_{j=±1} j Σ_{i=0}^{n-1} C(n-1, i) ǫ^{i} ǭ^{n-1-i} log₂(1 + (ǫ/ǭ)^{n-2i-1+j}) / log₂(ǫ/ǭ) },

with ǭ = 1 - ǫ.

Example 12 (Single Parity-Check Code): Consider the dual code, i.e., the [n, n-1, 2] single parity-check code. Some calculations show that g^{BSC} is given in parametric form by

  { h₂(ǫ),  1 - ((1-2ǫ)^{n-1} / log₂((1-ǫ)/ǫ)) log₂((1 + (1-2ǫ)^{n}) / (1 - (1-2ǫ)^{n})) }.

No simple analytic expressions are known for the case of transmission over the BAWGNC. Fig. 4 compares EXIT to GEXIT curves for some repetition and some single parity-check codes.

Example 13 (Hamming Code): Consider the [7, 4, 3] Hamming code. When transmission takes place over the BEC(ǫ), it is a tedious but conceptually simple exercise to show that the EXIT function is

  h(ǫ) = 3ǫ² + 4ǫ³ - 15ǫ⁴ + 12ǫ⁵ - 3ǫ⁶,
see, e.g., [3], [16]. In a similar way, using the derivative of the conditional entropy, one can give an analytic expression for the GEXIT function assuming transmission takes place over the BSC. Both expressions are evaluated in Fig. 5 (left). A comparison between GEXIT and EXIT functions for the Hamming code and the BSC is shown in Fig. 5 (right).

Example 14 (Simplex Code): Consider now the dual of the Hamming code, i.e., the [7, 3, 4] Simplex code. For transmission over the BEC we have h(ǫ) = 4ǫ³ - 6ǫ⁵ + 3ǫ⁶. Fig. 5 compares GEXIT and EXIT functions for this code when transmission takes place over the BEC and over the BSC.

Fig. 4. The EXIT (dashed) and GEXIT (dotted) functions of the [n, 1, n] repetition code and the [n, n-1, 2] parity-check code, assuming that transmission takes place over the BSC(h) (left picture) or the BAWGNC(h) (right picture), n ∈ {2, 3, 4, 5, 6}.

Fig. 5. Comparison of the GEXIT functions for the [7, 4, 3] Hamming code and its dual, the [7, 3, 4] Simplex code. Left picture: comparison between GEXIT functions when transmitting over the BEC (dashed line) and over the BSC (solid line). Right picture: comparison between GEXIT (solid line) and EXIT (dashed line) functions when transmission takes place over the BSC.

IV. BASIC PROPERTIES OF GEXIT FUNCTIONS

GEXIT functions fulfill the GAT by definition. Let us state a few more of their properties. We first show that the GEXIT function preserves the partial order implied by physical degradation.

Lemma 4: Let X be chosen with probability p_X(x) from X^n. Let the channel from X to Y be memoryless, where Y_i is the result of passing X_i through the smooth and degraded family {M(ǫ_i)}, ǫ_i ∈ I_i. If X → Y_{∼i} → Φ_i forms a Markov chain, then

  ∂H(X_i | Y)/∂ǫ_i ≤ ∂H(X_i | Y_i, Φ_i)/∂ǫ_i.   (10)

Proof: Since the derivatives in Eq. (10) are known to exist a.e., the above statement is in fact equivalent to saying that, for any ǫ'_i ≥ ǫ_i,

  H(X_i | Y_i(ǫ'_i), Y_{∼i}) - H(X_i | Y_i(ǫ_i), Y_{∼i}) ≤ H(X_i | Y_i(ǫ'_i), Φ_i) - H(X_i | Y_i(ǫ_i), Φ_i).

Here, Y_i(ǫ_i) and Y_i(ǫ'_i) are the results of transmitting X_i through the channels with parameters ǫ_i and ǫ'_i, respectively. We claim that

  X → Y_i(ǫ_i) → Y_i(ǫ'_i),  X → Y_{∼i} → Φ_i,  (Y_i(ǫ_i), Y_i(ǫ'_i)) → X → (Y_{∼i}, Φ_i)

form Markov chains. The first claim follows from the assumption that the channel family is degraded, and the second claim is also part of the assumption. Finally, the third claim is true since the channel is memoryless. The thesis is therefore a consequence of Lemma 5, stated below, by making the following substitutions: Y_i(ǫ_i) → Y, Y_i(ǫ'_i) → Y', Y_{∼i} → Z, Φ_i → Z'.

Lemma 5: Assume that X → Y → Y', X → Z → Z', as well as (Y, Y') → X → (Z, Z') form Markov chains. Then

  H(X | Y', Z) - H(X | Y, Z) ≤ H(X | Y', Z') - H(X | Y, Z').   (11)

Proof: The statement is equivalent to

  H(X | Y', Z, Z') - H(X | Y, Y', Z, Z') ≤ H(X | Y', Z') - H(X | Y, Y', Z').

Let us now condition on an event (Y' = y', Z' = z'). The proof is completed by showing that (here the conditioning upon Y' = y', Z' = z' is left implicit for the sake of simplicity)

  H(X | Y, Z) - H(X | Y) - H(X | Z) + H(X) ≥ 0.   (12)

This inequality can be written in terms of mutual information as I(Y; X | Z) ≤ I(Y; X). The statement is therefore a well-known consequence of the data processing inequality, see [2, p. 33], if we can show that, conditioned on Y' = y' and Z' = z', Y → X → Z forms a Markov chain. In formulae, we have to show that p(y, z | x, y', z') = p(y | x, y', z') p(z | x, y', z'), which in turn follows if we can show that p(z | x, y', z') / p(z | x, y, y', z') = 1. The last equality can be shown by first applying Bayes' rule, then expanding all terms in the order x, z', y' and y, further canceling common terms and, finally, repeatedly using the conditions that X → Y → Y', X → Z → Z', as well as (Y, Y') → X → (Z, Z') form Markov chains.

In the case of linear codes, and communication over a smooth and degraded family of BMS channels, Lemma 3 provides an explicit representation of the GEXIT function in terms of L-densities. In this case Lemma 4 becomes a statement on the corresponding kernel.
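The order-preservation expressed by Lemma 4 can be illustrated numerically on the kernel representation. The sketch below is ours, not from the paper: it assumes the closed-form BSC kernel (Example 6) and two-point BSC L-densities, and checks that the GEXIT functional assigns a larger value to the physically degraded density.

```python
import math

def l_bsc_kernel(z, eps):
    # L-domain GEXIT kernel of the family {BSC(h)} (Example 6).
    ebar = 1 - eps
    return math.log((1 + (eps / ebar) * math.exp(-z)) /
                    (1 + (ebar / eps) * math.exp(-z))) / math.log(eps / ebar)

def gexit_functional(eps_density, eps_channel):
    # Apply the kernel to the two-point L-density of a BSC(eps_density):
    # mass (1 - eps) at +m and eps at -m, with m = log((1-eps)/eps).
    m = math.log((1 - eps_density) / eps_density)
    return ((1 - eps_density) * l_bsc_kernel(m, eps_channel)
            + eps_density * l_bsc_kernel(-m, eps_channel))

# BSC(0.2) is physically degraded with respect to BSC(0.05); the GEXIT
# functional must preserve this partial order for any channel parameter.
for eps_channel in (0.05, 0.11, 0.3, 0.45):
    g_a = gexit_functional(0.05, eps_channel)
    g_b = gexit_functional(0.20, eps_channel)
    assert g_a <= g_b <= 1.0
```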
For completeness, let us state the corresponding condition explicitly.

Corollary 1 (l^{c_{BMS(h)}}(z) Preserves the Partial Order): Consider a smooth and degraded family of BMS channels characterized by the associated family of L-densities {c_{BMS(h)}}. Let a and
b denote two symmetric L-densities such that a ≺ b, i.e., b is physically degraded with respect to a. Then

  ∫ a(z) l^{c_{BMS(h)}}(z) dz ≤ ∫ b(z) l^{c_{BMS(h)}}(z) dz.

An alternative proof of this statement is provided in Appendix II.

We continue by examining some limiting cases. In the sequel, E denotes the error-probability operator. In the L-domain it is defined as E(a) = ½ ∫ a(z) e^{-(z/2 + |z|/2)} dz.

Lemma 6 (Bounds for the GEXIT Kernel): Let d^{c_{BMS(h)}}(z) be the kernel associated to a smooth and degraded family of BMS channels characterized by their family of L-densities {c_{BMS(h)}}. Then 1 - z ≤ d^{c_{BMS(h)}}(z) ≤ 1. Therefore, if a is a symmetric L-density, we have

  2 E(a) ≤ ∫ l^{c_{BMS(h)}}(z) a(z) dz ≤ 1.

Proof: In Appendix II we show that d^{c_{BMS(h)}}(z) is non-increasing and concave. The upper bound follows from d^{c_{BMS(h)}}(z) ≤ d^{c_{BMS(h)}}(z = 0) = 1. The lower bound is proved in a similar way by using concavity and observing that d^{c_{BMS(h)}}(z = 1) = 0. The final claim now follows from the fact that the |D|-domain kernel associated to E is equal to (1 - z)/2.

Lemma 7 (Further Properties of GEXIT Functions): Let g(h) be the GEXIT function associated to a proper binary linear code of minimum distance larger than 1, and to transmission over a complete and smooth family of BMS channels. Then g(0) = 0 and g(1) = 1. If the minimum distance of the code is larger than k, then

  d^{k-1} g(h)/dh^{k-1} |_{h=0} = 0.

Further, g(h) is a non-decreasing function of h.

Proof: Consider the first two assertions. If h = 0, then the associated L-density corresponds to a delta at infinity (this is an easy consequence of the minimum distance being at least 2). On the other hand, if h = 1, then the corresponding L-density is a delta at zero. The claim in both cases now follows by a direct calculation. In order to prove the last claim, we use the definition of g(h) to write

  d^{k-1} g(h)/dh^{k-1} |_{h=0} = (1/n) d^{k} H(X | Y(h))/dh^{k} |_{h=0}.

In order to evaluate the last derivative, we can first assume that the i-th bit is transmitted through a channel BMS(h_i). Next we take k-th order partial derivatives with respect to the entropies {h_i}. Finally, we set h_i = 0 for all bits i.
We therefore get (neglecting the factor 1/n)

  Σ_{i₁,...,i_k} ∂^{k} H(X | Y) / (∂h_{i₁} ··· ∂h_{i_k}) |_{h_i = 0}.

Of course, h_i can be set to 0 right at the beginning for all the bits that are not differentiated over. This is equivalent to passing the exact bits X_i. We get the expression

  Σ_{i₁,...,i_k} ∂^{k} H(X | Y_{i₁}(h_{i₁}), ..., Y_{i_k}(h_{i_k}), X_{∼{i₁,...,i_k}}) / (∂h_{i₁} ··· ∂h_{i_k}),

to be evaluated at h_{i₁} = ··· = h_{i_k} = 0. If the code has minimum distance larger than k, then any n - k bits determine the whole codeword, and H(X | Y_{i₁}(h_{i₁}), ..., Y_{i_k}(h_{i_k}), X_{∼{i₁,...,i_k}}) = 0. This finishes the proof.

So far we have used the compact notation g(h) for the GEXIT function. In some circumstances it is more convenient to use a notation that makes the dependence of the functional on the involved densities more explicit.

Definition 4 (Alternative Notation for the GEXIT Functional): Consider a binary linear code and transmission over a smooth family of BMS channels characterized by the associated family of L-densities {c_ǫ}. Let {a_ǫ} denote the associated family of average extrinsic MAP densities (which we assume to be smooth). Define

  G(c_ǫ, a_ǫ) = ∫ a_ǫ(z) l^{c_ǫ}(z) dz,

where

  l^{c_ǫ}(z) = ∫ (dc_ǫ(w)/dǫ) log(1 + e^{-z-w}) dw / ∫ (dc_ǫ(w)/dǫ) log(1 + e^{-w}) dw.

Lemma 8 (GEXIT and Dual GEXIT Function): Consider a binary code C and transmission over a complete and smooth family of BMS channels characterized by the associated family of L-densities {c_ǫ}. Let {a_ǫ} denote the corresponding family of (average) extrinsic MAP densities. Then the standard GEXIT curve is given in parametric form by {H(c_ǫ), G(c_ǫ, a_ǫ)}. The dual GEXIT curve is defined by {G(a_ǫ, c_ǫ), H(a_ǫ)}. Both the standard and the dual GEXIT curve have an area equal to r(C), the rate of the code.

Discussion: Note that both curves are comparable in that the first component measures the channel c and the second component measures the MAP density a. The difference between the two lies in the choice of measure which is applied to each component.

Proof: The statement that {H(c_ǫ), G(c_ǫ, a_ǫ)} represents the standard GEXIT function follows by unwinding the corresponding definitions.
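The area claim of Lemma 8 can be spot-checked numerically. The sketch below is an illustration under stated assumptions, not code from the paper: for the [5, 4, 2] single parity-check code over the BSC, the extrinsic MAP density of a bit is, in the |D|-domain, a point mass at (1-2ǫ)^{n-1}, and we evaluate the closed-form |D|-domain BSC kernel (an assumption of this sketch, cf. Example 9) at that point. By the area theorem the area under the standard GEXIT curve should equal the rate 4/5.

```python
import math

n = 5  # [5, 4, 2] single parity-check code, rate (n-1)/n = 4/5

def h2(eps):
    # Binary entropy function (base 2).
    return -eps * math.log2(eps) - (1 - eps) * math.log2(1 - eps)

def g(eps):
    # GEXIT value: |D|-domain BSC kernel at the point mass s = (1-2*eps)^(n-1).
    delta = 1 - 2 * eps
    s = delta ** (n - 1)
    return 1 - s / math.log((1 - eps) / eps) * math.log((1 + s * delta) /
                                                        (1 - s * delta))

# Trapezoidal integration of g dh, parameterized by eps in (0, 1/2].
N = 200000
area, prev_h, prev_g = 0.0, 0.0, 0.0   # limits as eps -> 0: h -> 0, g -> 0
for k in range(1, N + 1):
    eps = 0.5 * k / N
    if k == N:
        cur_h, cur_g = 1.0, 1.0        # limits as eps -> 1/2: h -> 1, g -> 1
    else:
        cur_h, cur_g = h2(eps), g(eps)
    area += (cur_h - prev_h) * (cur_g + prev_g) / 2
    prev_h, prev_g = cur_h, cur_g

assert abs(area - (n - 1) / n) < 1e-3
```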
The only statement that requires a proof is the one concerning the area under the dual GEXIT curve. We proceed as follows. Consider the entropy H(c_ǫ ⊛ a_ǫ). We have

  H(c_ǫ ⊛ a_ǫ) = ∫ (∫ c_ǫ(w) a_ǫ(v-w) dw) log(1 + e^{-v}) dv = ∫∫ c_ǫ(w) a_ǫ(z) log(1 + e^{-w-z}) dw dz.

Consider now dH(c_ǫ ⊛ a_ǫ)/dǫ. Using the previous representation, we get

  dH(c_ǫ ⊛ a_ǫ)/dǫ = ∫∫ (dc_ǫ(w)/dǫ) a_ǫ(z) log(1 + e^{-w-z}) dw dz + ∫∫ c_ǫ(w) (da_ǫ(z)/dǫ) log(1 + e^{-w-z}) dw dz.
The first expression can be identified with the standard GEXIT curve, except that it is parameterized by a generic parameter ǫ. The second expression is essentially the same, but the roles of the two densities are exchanged. Integrate now this relationship over the whole range of ǫ and assume that this range goes from the perfect channel to the useless one. The integral on the left clearly equals 1. To perform the integrals on the right, reparameterize the first expression with respect to h = ∫ c_ǫ(w) log(1 + e^{-w}) dw, so that the integral is equal to the area under the standard GEXIT curve given by {H(c_ǫ), G(c_ǫ, a_ǫ)}. In the same manner, reparameterize the second expression by h = ∫ a_ǫ(w) log(1 + e^{-w}) dw. Therefore, the value of the second expression is equal to the area under the curve given by {H(a_ǫ), G(a_ǫ, c_ǫ)}. Since the sum of the two areas equals one, and the area under the standard GEXIT curve equals r(C), it follows that the area under the second curve equals 1 - r(C). Finally, note that if we consider the inverse of the second curve by exchanging the two coordinates, i.e., if we consider the curve {G(a_ǫ, c_ǫ), H(a_ǫ)}, then the area under this curve is equal to 1 - (1 - r(C)) = r(C), as claimed.

Example 15 (GEXIT Versus Dual GEXIT): Fig. 6 shows the standard GEXIT function and the dual GEXIT function for the [5, 4, 2] code and transmission over the BSC. Although the two curves have quite distinct shapes, the area under the two curves is the same.

Fig. 6. Standard and dual GEXIT function of the [5, 4, 2] code and transmission over the BSC.

V. ENSEMBLES: CONCENTRATION AND ASYMPTOTIC SETTING

For simple codes like, e.g., single parity-check codes or repetition codes, h and g are relatively easy to compute. In general, though, it is not a trivial matter to determine the density of Φ_i required for the calculation. What we can typically compute are the extrinsic estimates if we use the BP decoder instead of the MAP decoder.
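For the BEC, where GEXIT and EXIT coincide, the extrinsic BP estimates of a regular LDPC ensemble can be computed in the large-blocklength limit by density evolution. A minimal sketch of ours for the (3,6)-regular ensemble (the function name and iteration count are our own choices, not from the paper):

```python
# Density evolution for the (3,6)-regular LDPC ensemble over the BEC.
# x is the erasure probability of variable-to-check messages at the fixed
# point; the returned value is the erasure probability of the extrinsic
# BP estimate of a bit (all dv outgoing check messages erased).

def bp_exit_bec(eps, dv=3, dc=6, iters=2000):
    x = eps  # initial variable-to-check erasure probability
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
    return (1 - (1 - x) ** (dc - 1)) ** dv

# Below the BP threshold (about 0.4294 for (3,6)) density evolution
# converges to x = 0, so the BP EXIT value vanishes; above the threshold
# it jumps to a positive value.
assert bp_exit_bec(0.40) < 1e-6
assert bp_exit_bec(0.45) > 0.3
```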
It is therefore natural to look at the equivalent of EXIT and GEXIT functions if we substitute the extrinsic MAP estimates with their equivalent extrinsic BP estimates. Although most of the subsequent definitions and statements can be derived as easily for EXIT as for GEXIT functions, we focus on the latter. After all, these are the natural objects to study, as suggested by the GAT.

Definition 5 (g^{BP} for Linear Codes and BMS Channels): Let X be chosen uniformly at random from a proper binary linear code. Let the channel from X to Y be memoryless, where Y_i is the result of passing X_i through the smooth family {BMS(h_i)}, h_i ∈ [0, 1]. Assume that all individual channels are parameterized in a smooth way by a common parameter ǫ, i.e., h_i = h_i(ǫ), i ∈ [n]. Let Φ_i^{BP,ℓ} denote the extrinsic estimate of the i-th bit at the ℓ-th round of BP decoding, assuming an arbitrary but fixed representation of the code by a Tanner graph as well as an arbitrary but fixed schedule of the decoder. Then the BP GEXIT function is defined as

  g_i^{BP,ℓ}(ǫ) = (∂H(X_i | Φ_i^{BP,ℓ}, Y_i)/∂h_i) (dh_i/dǫ).

The following statement, which is a direct consequence of the previous definition and Lemma 4, confirms the intuitive fact that the BP GEXIT function (which is associated to the suboptimal BP decoder) is at least as large as the GEXIT function itself, assuming only that the channel family is degraded.

Corollary (GEXIT Versus BP GEXIT): Let X be chosen uniformly at random from a proper binary linear code. Let the channel from X to Y be memoryless, where Y_i is the result of passing X_i through a smooth and degraded family {BMS(h_i)}, h_i ∈ [0, 1]. Assume that all individual channels are parameterized in a smooth (differentiable) way by a common parameter ǫ, i.e., h_i = h_i(ǫ), i ∈ [n]. Let g_i(ǫ) and g_i^{BP,ℓ}(ǫ) be as defined in Definitions 3 and 5. Then g_i(ǫ) ≤ g_i^{BP,ℓ}(ǫ).

Definition 6 (Asymptotic BP EXIT and GEXIT Functions): Consider a dd pair (λ, ρ) and the corresponding sequence of ensembles LDPC(n, λ, ρ). Further consider a smooth and degraded family {BMS(h)}.
Assume that all bits of X are sent through the channel BMS(h). For G ∈ LDPC(n, λ, ρ) and i ∈ [n], let g_i(G, ǫ) and g_i^{BP,ℓ}(G, ǫ) denote the i-th MAP and BP GEXIT function associated to the code G. By some abuse of notation, define the asymptotic (and average) quantities

  g(h) = limsup_{n→∞} E_G[(1/n) Σ_{i∈[n]} g_i(G, h)],
  g^{BP,ℓ}(h) = lim_{n→∞} E_G[(1/n) Σ_{i∈[n]} g_i^{BP,ℓ}(G, h)],
  g^{BP}(h) = lim_{ℓ→∞} g^{BP,ℓ}(h).

For notational simplicity we suppress the dependence of the above quantities on the dd pair and on the channel family {BMS(h)}. In the above definitions we have taken the average of the individual curves over the ensemble. Let us now justify this approach by showing that the quantities are concentrated. The proof of the following statement, which asserts the concentration of the conditional entropy, can be found in [5].

Theorem (Concentration of Conditional Entropy): Let G(n) be chosen uniformly at random from LDPC(n, λ, ρ). Assume that G(n) is used to transmit over a BMS(h) channel. By some abuse of notation, let H_{G(n)} = H_{G(n)}(X | Y) be the associated conditional entropy. Then, for any ξ > 0,

  Pr{ |H_{G(n)} - E_{G(n)}[H_{G(n)}]| > nξ } ≤ 2 e^{-nBξ²},

More information

2.11 That s So Derivative

2.11 That s So Derivative 2.11 Tat s So Derivative Introduction to Differential Calculus Just as one defines instantaneous velocity in terms of average velocity, we now define te instantaneous rate of cange of a function at a point

More information

Excursions in Computing Science: Week v Milli-micro-nano-..math Part II

Excursions in Computing Science: Week v Milli-micro-nano-..math Part II Excursions in Computing Science: Week v Milli-micro-nano-..mat Part II T. H. Merrett McGill University, Montreal, Canada June, 5 I. Prefatory Notes. Cube root of 8. Almost every calculator as a square-root

More information

Analytic Functions. Differentiable Functions of a Complex Variable

Analytic Functions. Differentiable Functions of a Complex Variable Analytic Functions Differentiable Functions of a Complex Variable In tis capter, we sall generalize te ideas for polynomials power series of a complex variable we developed in te previous capter to general

More information

Combining functions: algebraic methods

Combining functions: algebraic methods Combining functions: algebraic metods Functions can be added, subtracted, multiplied, divided, and raised to a power, just like numbers or algebra expressions. If f(x) = x 2 and g(x) = x + 2, clearly f(x)

More information

MVT and Rolle s Theorem

MVT and Rolle s Theorem AP Calculus CHAPTER 4 WORKSHEET APPLICATIONS OF DIFFERENTIATION MVT and Rolle s Teorem Name Seat # Date UNLESS INDICATED, DO NOT USE YOUR CALCULATOR FOR ANY OF THESE QUESTIONS In problems 1 and, state

More information

Numerical Differentiation

Numerical Differentiation Numerical Differentiation Finite Difference Formulas for te first derivative (Using Taylor Expansion tecnique) (section 8.3.) Suppose tat f() = g() is a function of te variable, and tat as 0 te function

More information

0.1 Differentiation Rules

0.1 Differentiation Rules 0.1 Differentiation Rules From our previous work we ve seen tat it can be quite a task to calculate te erivative of an arbitrary function. Just working wit a secon-orer polynomial tings get pretty complicate

More information

1watt=1W=1kg m 2 /s 3

1watt=1W=1kg m 2 /s 3 Appendix A Matematics Appendix A.1 Units To measure a pysical quantity, you need a standard. Eac pysical quantity as certain units. A unit is just a standard we use to compare, e.g. a ruler. In tis laboratory

More information

Exercises for numerical differentiation. Øyvind Ryan

Exercises for numerical differentiation. Øyvind Ryan Exercises for numerical differentiation Øyvind Ryan February 25, 2013 1. Mark eac of te following statements as true or false. a. Wen we use te approximation f (a) (f (a +) f (a))/ on a computer, we can

More information

2.8 The Derivative as a Function

2.8 The Derivative as a Function .8 Te Derivative as a Function Typically, we can find te derivative of a function f at many points of its domain: Definition. Suppose tat f is a function wic is differentiable at every point of an open

More information

Introduction to Machine Learning. Recitation 8. w 2, b 2. w 1, b 1. z 0 z 1. The function we want to minimize is the loss over all examples: f =

Introduction to Machine Learning. Recitation 8. w 2, b 2. w 1, b 1. z 0 z 1. The function we want to minimize is the loss over all examples: f = Introduction to Macine Learning Lecturer: Regev Scweiger Recitation 8 Fall Semester Scribe: Regev Scweiger 8.1 Backpropagation We will develop and review te backpropagation algoritm for neural networks.

More information

3.1 Extreme Values of a Function

3.1 Extreme Values of a Function .1 Etreme Values of a Function Section.1 Notes Page 1 One application of te derivative is finding minimum and maimum values off a grap. In precalculus we were only able to do tis wit quadratics by find

More information

Derivatives of Exponentials

Derivatives of Exponentials mat 0 more on derivatives: day 0 Derivatives of Eponentials Recall tat DEFINITION... An eponential function as te form f () =a, were te base is a real number a > 0. Te domain of an eponential function

More information

232 Calculus and Structures

232 Calculus and Structures 3 Calculus and Structures CHAPTER 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS FOR EVALUATING BEAMS Calculus and Structures 33 Copyrigt Capter 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS 17.1 THE

More information

Financial Econometrics Prof. Massimo Guidolin

Financial Econometrics Prof. Massimo Guidolin CLEFIN A.A. 2010/2011 Financial Econometrics Prof. Massimo Guidolin A Quick Review of Basic Estimation Metods 1. Were te OLS World Ends... Consider two time series 1: = { 1 2 } and 1: = { 1 2 }. At tis

More information

Regularized Regression

Regularized Regression Regularized Regression David M. Blei Columbia University December 5, 205 Modern regression problems are ig dimensional, wic means tat te number of covariates p is large. In practice statisticians regularize

More information

Phase space in classical physics

Phase space in classical physics Pase space in classical pysics Quantum mecanically, we can actually COU te number of microstates consistent wit a given macrostate, specified (for example) by te total energy. In general, eac microstate

More information

Chapter 4: Numerical Methods for Common Mathematical Problems

Chapter 4: Numerical Methods for Common Mathematical Problems 1 Capter 4: Numerical Metods for Common Matematical Problems Interpolation Problem: Suppose we ave data defined at a discrete set of points (x i, y i ), i = 0, 1,..., N. Often it is useful to ave a smoot

More information

REVIEW LAB ANSWER KEY

REVIEW LAB ANSWER KEY REVIEW LAB ANSWER KEY. Witout using SN, find te derivative of eac of te following (you do not need to simplify your answers): a. f x 3x 3 5x x 6 f x 3 3x 5 x 0 b. g x 4 x x x notice te trick ere! x x g

More information

Order of Accuracy. ũ h u Ch p, (1)

Order of Accuracy. ũ h u Ch p, (1) Order of Accuracy 1 Terminology We consider a numerical approximation of an exact value u. Te approximation depends on a small parameter, wic can be for instance te grid size or time step in a numerical

More information

NUMERICAL DIFFERENTIATION. James T. Smith San Francisco State University. In calculus classes, you compute derivatives algebraically: for example,

NUMERICAL DIFFERENTIATION. James T. Smith San Francisco State University. In calculus classes, you compute derivatives algebraically: for example, NUMERICAL DIFFERENTIATION James T Smit San Francisco State University In calculus classes, you compute derivatives algebraically: for example, f( x) = x + x f ( x) = x x Tis tecnique requires your knowing

More information

158 Calculus and Structures

158 Calculus and Structures 58 Calculus and Structures CHAPTER PROPERTIES OF DERIVATIVES AND DIFFERENTIATION BY THE EASY WAY. Calculus and Structures 59 Copyrigt Capter PROPERTIES OF DERIVATIVES. INTRODUCTION In te last capter you

More information

Math 312 Lecture Notes Modeling

Math 312 Lecture Notes Modeling Mat 3 Lecture Notes Modeling Warren Weckesser Department of Matematics Colgate University 5 7 January 006 Classifying Matematical Models An Example We consider te following scenario. During a storm, a

More information

Sin, Cos and All That

Sin, Cos and All That Sin, Cos and All Tat James K. Peterson Department of Biological Sciences and Department of Matematical Sciences Clemson University Marc 9, 2017 Outline Sin, Cos and all tat! A New Power Rule Derivatives

More information

Section 15.6 Directional Derivatives and the Gradient Vector

Section 15.6 Directional Derivatives and the Gradient Vector Section 15.6 Directional Derivatives and te Gradient Vector Finding rates of cange in different directions Recall tat wen we first started considering derivatives of functions of more tan one variable,

More information

A.P. CALCULUS (AB) Outline Chapter 3 (Derivatives)

A.P. CALCULUS (AB) Outline Chapter 3 (Derivatives) A.P. CALCULUS (AB) Outline Capter 3 (Derivatives) NAME Date Previously in Capter 2 we determined te slope of a tangent line to a curve at a point as te limit of te slopes of secant lines using tat point

More information

Gradient Descent etc.

Gradient Descent etc. 1 Gradient Descent etc EE 13: Networked estimation and control Prof Kan) I DERIVATIVE Consider f : R R x fx) Te derivative is defined as d fx) = lim dx fx + ) fx) Te cain rule states tat if d d f gx) )

More information

Teaching Differentiation: A Rare Case for the Problem of the Slope of the Tangent Line

Teaching Differentiation: A Rare Case for the Problem of the Slope of the Tangent Line Teacing Differentiation: A Rare Case for te Problem of te Slope of te Tangent Line arxiv:1805.00343v1 [mat.ho] 29 Apr 2018 Roman Kvasov Department of Matematics University of Puerto Rico at Aguadilla Aguadilla,

More information

Recall from our discussion of continuity in lecture a function is continuous at a point x = a if and only if

Recall from our discussion of continuity in lecture a function is continuous at a point x = a if and only if Computational Aspects of its. Keeping te simple simple. Recall by elementary functions we mean :Polynomials (including linear and quadratic equations) Eponentials Logaritms Trig Functions Rational Functions

More information

Quantum Numbers and Rules

Quantum Numbers and Rules OpenStax-CNX module: m42614 1 Quantum Numbers and Rules OpenStax College Tis work is produced by OpenStax-CNX and licensed under te Creative Commons Attribution License 3.0 Abstract Dene quantum number.

More information

Polynomial Interpolation

Polynomial Interpolation Capter 4 Polynomial Interpolation In tis capter, we consider te important problem of approximatinga function fx, wose values at a set of distinct points x, x, x,, x n are known, by a polynomial P x suc

More information

1 2 x Solution. The function f x is only defined when x 0, so we will assume that x 0 for the remainder of the solution. f x. f x h f x.

1 2 x Solution. The function f x is only defined when x 0, so we will assume that x 0 for the remainder of the solution. f x. f x h f x. Problem. Let f x x. Using te definition of te derivative prove tat f x x Solution. Te function f x is only defined wen x 0, so we will assume tat x 0 for te remainder of te solution. By te definition of

More information

Lab 6 Derivatives and Mutant Bacteria

Lab 6 Derivatives and Mutant Bacteria Lab 6 Derivatives and Mutant Bacteria Date: September 27, 20 Assignment Due Date: October 4, 20 Goal: In tis lab you will furter explore te concept of a derivative using R. You will use your knowledge

More information

Function Composition and Chain Rules

Function Composition and Chain Rules Function Composition and s James K. Peterson Department of Biological Sciences and Department of Matematical Sciences Clemson University Marc 8, 2017 Outline 1 Function Composition and Continuity 2 Function

More information

ERROR BOUNDS FOR THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BRADLEY J. LUCIER*

ERROR BOUNDS FOR THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BRADLEY J. LUCIER* EO BOUNDS FO THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BADLEY J. LUCIE* Abstract. Te expected error in L ) attimet for Glimm s sceme wen applied to a scalar conservation law is bounded by + 2 ) ) /2 T

More information

Lecture 21. Numerical differentiation. f ( x+h) f ( x) h h

Lecture 21. Numerical differentiation. f ( x+h) f ( x) h h Lecture Numerical differentiation Introduction We can analytically calculate te derivative of any elementary function, so tere migt seem to be no motivation for calculating derivatives numerically. However

More information

Precalculus Test 2 Practice Questions Page 1. Note: You can expect other types of questions on the test than the ones presented here!

Precalculus Test 2 Practice Questions Page 1. Note: You can expect other types of questions on the test than the ones presented here! Precalculus Test 2 Practice Questions Page Note: You can expect oter types of questions on te test tan te ones presented ere! Questions Example. Find te vertex of te quadratic f(x) = 4x 2 x. Example 2.

More information

7.1 Using Antiderivatives to find Area

7.1 Using Antiderivatives to find Area 7.1 Using Antiderivatives to find Area Introduction finding te area under te grap of a nonnegative, continuous function f In tis section a formula is obtained for finding te area of te region bounded between

More information

Click here to see an animation of the derivative

Click here to see an animation of the derivative Differentiation Massoud Malek Derivative Te concept of derivative is at te core of Calculus; It is a very powerful tool for understanding te beavior of matematical functions. It allows us to optimize functions,

More information

LOW-density parity-check (LDPC) codes were invented

LOW-density parity-check (LDPC) codes were invented IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 54, NO 1, JANUARY 2008 51 Extremal Problems of Information Combining Yibo Jiang, Alexei Ashikhmin, Member, IEEE, Ralf Koetter, Senior Member, IEEE, and Andrew

More information

Math 31A Discussion Notes Week 4 October 20 and October 22, 2015

Math 31A Discussion Notes Week 4 October 20 and October 22, 2015 Mat 3A Discussion Notes Week 4 October 20 and October 22, 205 To prepare for te first midterm, we ll spend tis week working eamples resembling te various problems you ve seen so far tis term. In tese notes

More information

A Reconsideration of Matter Waves

A Reconsideration of Matter Waves A Reconsideration of Matter Waves by Roger Ellman Abstract Matter waves were discovered in te early 20t century from teir wavelengt, predicted by DeBroglie, Planck's constant divided by te particle's momentum,

More information

Monoidal Structures on Higher Categories

Monoidal Structures on Higher Categories Monoidal Structures on Higer Categories Paul Ziegler Monoidal Structures on Simplicial Categories Let C be a simplicial category, tat is a category enriced over simplicial sets. Suc categories are a model

More information

Characterization of Relay Channels Using the Bhattacharyya Parameter

Characterization of Relay Channels Using the Bhattacharyya Parameter Caracterization of Relay Cannels Using te Battacaryya Parameter Josepine P. K. Cu, Andrew W. Ecford, and Raviraj S. Adve Dept. of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario,

More information

2.3 Algebraic approach to limits

2.3 Algebraic approach to limits CHAPTER 2. LIMITS 32 2.3 Algebraic approac to its Now we start to learn ow to find its algebraically. Tis starts wit te simplest possible its, and ten builds tese up to more complicated examples. Fact.

More information

WYSE Academic Challenge 2004 Sectional Mathematics Solution Set

WYSE Academic Challenge 2004 Sectional Mathematics Solution Set WYSE Academic Callenge 00 Sectional Matematics Solution Set. Answer: B. Since te equation can be written in te form x + y, we ave a major 5 semi-axis of lengt 5 and minor semi-axis of lengt. Tis means

More information

1 + t5 dt with respect to x. du = 2. dg du = f(u). du dx. dg dx = dg. du du. dg du. dx = 4x3. - page 1 -

1 + t5 dt with respect to x. du = 2. dg du = f(u). du dx. dg dx = dg. du du. dg du. dx = 4x3. - page 1 - Eercise. Find te derivative of g( 3 + t5 dt wit respect to. Solution: Te integrand is f(t + t 5. By FTC, f( + 5. Eercise. Find te derivative of e t2 dt wit respect to. Solution: Te integrand is f(t e t2.

More information

Exponentials and Logarithms Review Part 2: Exponentials

Exponentials and Logarithms Review Part 2: Exponentials Eponentials and Logaritms Review Part : Eponentials Notice te difference etween te functions: g( ) and f ( ) In te function g( ), te variale is te ase and te eponent is a constant. Tis is called a power

More information

Math 34A Practice Final Solutions Fall 2007

Math 34A Practice Final Solutions Fall 2007 Mat 34A Practice Final Solutions Fall 007 Problem Find te derivatives of te following functions:. f(x) = 3x + e 3x. f(x) = x + x 3. f(x) = (x + a) 4. Is te function 3t 4t t 3 increasing or decreasing wen

More information

IEOR 165 Lecture 10 Distribution Estimation

IEOR 165 Lecture 10 Distribution Estimation IEOR 165 Lecture 10 Distribution Estimation 1 Motivating Problem Consider a situation were we ave iid data x i from some unknown distribution. One problem of interest is estimating te distribution tat

More information

Stationary Gaussian Markov Processes As Limits of Stationary Autoregressive Time Series

Stationary Gaussian Markov Processes As Limits of Stationary Autoregressive Time Series Stationary Gaussian Markov Processes As Limits of Stationary Autoregressive Time Series Lawrence D. Brown, Pilip A. Ernst, Larry Sepp, and Robert Wolpert August 27, 2015 Abstract We consider te class,

More information

Section 2.7 Derivatives and Rates of Change Part II Section 2.8 The Derivative as a Function. at the point a, to be. = at time t = a is

Section 2.7 Derivatives and Rates of Change Part II Section 2.8 The Derivative as a Function. at the point a, to be. = at time t = a is Mat 180 www.timetodare.com Section.7 Derivatives and Rates of Cange Part II Section.8 Te Derivative as a Function Derivatives ( ) In te previous section we defined te slope of te tangent to a curve wit

More information

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4.

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4. December 09, 20 Calculus PracticeTest s Name: (4 points) Find te absolute extrema of f(x) = x 3 0 on te interval [0, 4] Te derivative of f(x) is f (x) = 3x 2, wic is zero only at x = 0 Tus we only need

More information