Network Error Correction Model
Chapter 2: Network Error Correction Model

In the last chapter we introduced network coding and, in particular, described linear network coding. From this chapter on, we discuss network error correction coding specifically. As before, let a communication network be represented by a finite acyclic directed graph with unit-capacity channels. The network is used for single-source multicast: there is a single source node, which produces messages, and multiple sink nodes, all of which demand the messages. All remaining nodes are internal nodes. In this chapter we give the basic model of network error correction.

2.1 Basic Model

When an error occurs on a channel $e$, the channel model is additive, i.e., the output of the channel $e$ is $\tilde{U}_e = U_e + z_e$, where $U_e \in \mathcal{F}$ is the message that should be transmitted on $e$ and $z_e \in \mathcal{F}$ is the error occurring on $e$. Further, let the error vector be the $|E|$-dimensional row vector $z = [z_e : e \in E]$ over the field $\mathcal{F}$, with each component $z_e$ representing the error on the corresponding channel $e$.

Recall some notation from linear network coding that will be used frequently in our discussion of network error correction coding. The system transfer matrix (also called the one-step transfer matrix) $K = (k_{d,e})_{d \in E, e \in E}$ is an $|E| \times |E|$ matrix, where $k_{d,e}$ is the local encoding coefficient if $\mathrm{head}(d) = \mathrm{tail}(e)$, and $k_{d,e} = 0$ if $\mathrm{head}(d) \neq \mathrm{tail}(e)$. As the transfers over all channels accumulate, the overall transfer matrix is $F = I + K + K^2 + K^3 + \cdots$. Because the network is finite and acyclic, $K^N = 0$ for some positive integer $N$, so we can write
$$F = I + K + K^2 + K^3 + \cdots = (I - K)^{-1}.$$

X. Guang and Z. Zhang, Linear Network Error Correction Coding, SpringerBriefs in Computer Science, DOI / , © The Author(s)
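The identity $F = (I-K)^{-1}$ can be checked numerically on a small instance. The sketch below uses a hypothetical three-channel path network and real arithmetic (rather than a finite field); the topology and coefficients are illustrative assumptions, not from the text. It verifies that the truncated power series equals the closed-form inverse once $K$ is nilpotent.

```python
import numpy as np

# Hypothetical 3-channel path network: e1 -> e2 -> e3.
# K[d, e] is the local encoding coefficient, nonzero only when head(d) = tail(e).
K = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

# Acyclicity makes K nilpotent: here K^3 = 0.
assert np.all(np.linalg.matrix_power(K, 3) == 0)

# Overall transfer matrix as the (finite) geometric series I + K + K^2.
F_series = sum(np.linalg.matrix_power(K, n) for n in range(3))

# ... and as the closed form (I - K)^{-1}.
F_closed = np.linalg.inv(np.eye(3) - K)

assert np.allclose(F_series, F_closed)
print(F_series)  # each (d, e) entry accumulates the path gains from d to e
```

Each entry $F_{d,e}$ sums the products of local coefficients over all paths from $d$ to $e$, which is why the series terminates on an acyclic network.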
In addition, recall the $\omega \times |E|$ matrix $B = (k_{d,e})_{d \in \mathrm{In}(s), e \in E}$, where $k_{d,e} = 0$ for $e \notin \mathrm{Out}(s)$ and $k_{d,e}$ is the local encoding coefficient for $e \in \mathrm{Out}(s)$. Let $\rho \subseteq E$ represent a set of channels, and for $\rho$ define a $|\rho| \times |E|$ matrix $A_\rho = (A_{d,e})_{d \in \rho, e \in E}$ satisfying:
$$A_{d,e} = \begin{cases} 1 & d = e; \\ 0 & \text{otherwise.} \end{cases} \qquad (2.1)$$
This matrix, for different $\rho$, will be used frequently throughout the book. Moreover, we say that $\rho$ is an error pattern if errors may occur on the channels in it.

Therefore, when a source message vector $x \in \mathcal{F}^{\omega}$ is transmitted and an error vector $z \in \mathcal{F}^{|E|}$ occurs, the received vector $y \in \mathcal{F}^{|\mathrm{In}(t)|}$ at the sink node $t$ is
$$y = x B F A_{\mathrm{In}(t)}^{\top} + z F A_{\mathrm{In}(t)}^{\top}. \qquad (2.2)$$
Note that the decoding matrix is $F_t = [f_e : e \in \mathrm{In}(t)] = B F A_{\mathrm{In}(t)}^{\top}$, and further let $G_t \triangleq F A_{\mathrm{In}(t)}^{\top}$. Then Eq. (2.2) can be rewritten as
$$y = x F_t + z G_t = (x\ z) \begin{bmatrix} F_t \\ G_t \end{bmatrix}.$$
Further, let $\mathcal{X}$ be the source message set and $\mathcal{Z}$ be the error vector set. Clearly, $\mathcal{X} = \mathcal{F}^{\omega}$ and $\mathcal{Z} = \mathcal{F}^{|E|}$. For any sink node $t \in T$, let $\mathcal{Y}_t$ be the received message set with respect to $t$, that is,
$$\mathcal{Y}_t = \{ x F_t + z G_t : \text{all } x \in \mathcal{X},\ z \in \mathcal{Z} \}.$$
Furthermore, note that $A_{\mathrm{In}(t)} G_t = I_{\mathrm{In}(t)}$, an $|\mathrm{In}(t)| \times |\mathrm{In}(t)|$ identity matrix. Hence, for any $y \in \mathcal{F}^{|\mathrm{In}(t)|}$, we can choose an error vector $z \in \mathcal{Z}$ satisfying $z_{\mathrm{In}(t)} = y$ and $z_e = 0$ for all $e \in E \setminus \mathrm{In}(t)$, where $z_{\mathrm{In}(t)}$ consists only of the components corresponding to the channels in $\mathrm{In}(t)$, i.e., $z_{\mathrm{In}(t)} = [z_e : e \in \mathrm{In}(t)]$, such that
$$0 \cdot F_t + z G_t = z G_t = z_{\mathrm{In}(t)} A_{\mathrm{In}(t)} G_t = z_{\mathrm{In}(t)} = y.$$
This shows
$$\mathcal{Y}_t = \{ x F_t + z G_t : \text{all } x \in \mathcal{X},\ z \in \mathcal{Z} \} = \mathcal{F}^{|\mathrm{In}(t)|}.$$
Consequently, for each sink node $t \in T$, we define an encoding function $\mathrm{En}^{(t)}$ as follows:
$$\mathrm{En}^{(t)}: \mathcal{X} = \mathcal{F}^{\omega} \to \mathcal{Y}_t = \mathcal{F}^{|\mathrm{In}(t)|}, \qquad x \mapsto x F_t.$$
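To make Eq. (2.2) concrete, here is a minimal sketch over GF(2) on a hypothetical three-channel network (source $s$, relay $u$, sink $t$; the topology and all coefficient choices are illustrative assumptions, not from the text). It also checks the identity $A_{\mathrm{In}(t)} G_t = I$ used above.

```python
import numpy as np

# Assumed toy network: source s sends x1 on e1 (s->u) and x2 on e2 (s->t);
# relay u forwards e1 onto e3 (u->t). Channel order: (e1, e2, e3).
# In(t) = (e2, e3); the field is GF(2).
B = np.array([[1, 0, 0],                # x1 enters on e1
              [0, 1, 0]])               # x2 enters on e2
K = np.zeros((3, 3), dtype=int)
K[0, 2] = 1                             # head(e1) = tail(e3) = u
F = (np.eye(3, dtype=int) + K) % 2      # K^2 = 0, so F = I + K
A = np.array([[0, 1, 0],                # row of A_{In(t)} for e2
              [0, 0, 1]])               # row of A_{In(t)} for e3
F_t = B @ F @ A.T % 2                   # decoding matrix: contribution of x to y
G_t = F @ A.T % 2                       # contribution of the error vector z to y

# The submatrix identity A_{In(t)} G_t = I, as claimed in the text.
assert np.array_equal(A @ G_t % 2, np.eye(2, dtype=int))

x = np.array([1, 0])                    # source message vector
z = np.array([0, 0, 1])                 # a single error, on channel e3
y = (x @ F_t + z @ G_t) % 2             # received vector at t, Eq. (2.2)
print(y)                                # the e3 error corrupts the second symbol
```

Without the error the sink would receive $x F_t = (0, 1)$; the error on $e_3$ adds $z G_t = (0, 1)$, so the received vector is $(0, 0)$.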
Definition 2.1. A linear network code is called a regular code if, for any sink node $t \in T$, $\mathrm{Rank}(F_t) = \omega$.

If the considered linear network code is regular, i.e., $\mathrm{Rank}(F_t) = \omega$, then $\mathrm{En}^{(t)}$ is an injection, and hence $\mathrm{En}^{(t)}$ is well-defined. Otherwise, if the linear network code is not regular, i.e., $\mathrm{Rank}(F_t) < \omega$ for at least one sink node $t \in T$, then even in the error-free case the code is not decodable at at least one sink node, let alone capable of network error correction. Therefore, we assume that the codes considered below are regular.

Now, we can define a linear network error correction (LNEC) code as $\mathrm{En}^{(t)}(\mathcal{X}) \triangleq \{ \mathrm{En}^{(t)}(x) : x \in \mathcal{X} \}$. To be specific, for any message vector $x \in \mathcal{F}^{\omega}$, $x F_t$ is called a codeword for the sink node $t$, and let
$$C_t \triangleq \{ x F_t : \text{all } x \in \mathcal{F}^{\omega} \},$$
which is the set of all codewords for the sink node $t \in T$, called the codebook for $t$.

2.2 Distance and Weight

As in classical coding theory, we can define the distance between two received vectors at each sink node $t \in T$ in order to characterize their discrepancy.

Definition 2.2. For any two received vectors $y_1, y_2 \in \mathcal{F}^{|\mathrm{In}(t)|}$ at the sink node $t \in T$, the distance between $y_1$ and $y_2$ with respect to $t$ is defined as:
$$d^{(t)}(y_1, y_2) \triangleq \min\{ w_H(z) : z \in \mathcal{F}^{|E|} \text{ such that } y_1 = y_2 + z G_t \},$$
where $w_H(z)$ represents the Hamming weight of the error vector $z$, i.e., the number of nonzero components of $z$.

In particular, for any two codewords $x_1 F_t, x_2 F_t \in C_t$ (or equivalently, any two message vectors $x_1, x_2 \in \mathcal{X}$), the distance between $x_1 F_t$ and $x_2 F_t$ at the sink node $t \in T$ is:
$$d^{(t)}(x_1 F_t, x_2 F_t) = \min\{ w_H(z) : z \in \mathcal{F}^{|E|} \text{ such that } x_1 F_t = x_2 F_t + z G_t \} = \min\{ w_H(z) : z \in \mathcal{F}^{|E|} \text{ such that } (x_1 - x_2) F_t = z G_t \}.$$

First, we show that this distance has the following property.

Proposition 2.1. For any two received vectors $y_1, y_2 \in \mathcal{F}^{|\mathrm{In}(t)|}$ at the sink node $t \in T$,
$$0 \le d^{(t)}(y_1, y_2) \le |\mathrm{In}(t)|.$$
Proof. We know
$$d^{(t)}(y_1, y_2) = \min\{ w_H(z) : z \in \mathcal{F}^{|E|} \text{ such that } y_1 = y_2 + z G_t \} = \min\{ w_H(z) : z \in \mathcal{F}^{|E|} \text{ such that } y_1 - y_2 = z G_t \},$$
and recall that $A_{\mathrm{In}(t)} G_t = I_{\mathrm{In}(t)}$ is an $|\mathrm{In}(t)| \times |\mathrm{In}(t)|$ identity matrix. Furthermore, let $z \in \mathcal{F}^{|E|}$ be an error vector satisfying $z_{\mathrm{In}(t)} = y_1 - y_2$ and $z_e = 0$ for all $e \in E \setminus \mathrm{In}(t)$. It follows that
$$z G_t = z_{\mathrm{In}(t)} A_{\mathrm{In}(t)} G_t = z_{\mathrm{In}(t)} = y_1 - y_2,$$
which implies
$$d^{(t)}(y_1, y_2) \le w_H(z) = w_H(y_1 - y_2) \le |\mathrm{In}(t)|.$$
On the other hand, it is evident that $d^{(t)}(y_1, y_2) \ge 0$, since $w_H(z) \ge 0$ for any $z \in \mathcal{F}^{|E|}$. Combining the above, the proof is completed.

Further, the following result shows that this distance is an actual metric.

Theorem 2.1. The distance $d^{(t)}(\cdot,\cdot)$ defined on the vector space $\mathcal{F}^{|\mathrm{In}(t)|}$ is a metric, that is, the following three properties hold for any $y_1, y_2, y_3 \in \mathcal{F}^{|\mathrm{In}(t)|}$:
1. (Positive definiteness) $d^{(t)}(y_1, y_2) \ge 0$, and $d^{(t)}(y_1, y_2) = 0$ if and only if $y_1 = y_2$.
2. (Symmetry) $d^{(t)}(y_1, y_2) = d^{(t)}(y_2, y_1)$.
3. (Triangle inequality) $d^{(t)}(y_1, y_3) \le d^{(t)}(y_1, y_2) + d^{(t)}(y_2, y_3)$.
Thus, the pair $(\mathcal{F}^{|\mathrm{In}(t)|}, d^{(t)})$ is a metric space.

Proof. 1. It is evident that $d^{(t)}(y_1, y_2) \ge 0$ for any $y_1, y_2 \in \mathcal{F}^{|\mathrm{In}(t)|}$, because the Hamming weight is always nonnegative. Below we show that $d^{(t)}(y_1, y_2) = 0$ if and only if $y_1 = y_2$. First, it is easily seen that $d^{(t)}(y_1, y_2) = 0$ if $y_1 = y_2$. On the other hand, assume that
$$0 = d^{(t)}(y_1, y_2) = \min\{ w_H(z) : z \in \mathcal{F}^{|E|} \text{ such that } y_1 = y_2 + z G_t \},$$
which means that there exists an error vector $z \in \mathcal{Z}$ with $w_H(z) = 0$ such that $y_1 = y_2 + z G_t$. Since $w_H(z) = 0$ only if $z = 0$, we have $y_1 = y_2$.

2. Assume that $d^{(t)}(y_1, y_2) = d_1$, that is, there exists an error vector $z \in \mathcal{F}^{|E|}$ with $w_H(z) = d_1$ such that $y_1 = y_2 + z G_t$, which is equivalent to $y_2 = y_1 - z G_t$. Together with the definition of the distance, this implies that
$$d^{(t)}(y_2, y_1) \le w_H(-z) = w_H(z) = d_1 = d^{(t)}(y_1, y_2).$$
Similarly, we can also obtain
$$d^{(t)}(y_1, y_2) \le d^{(t)}(y_2, y_1).$$
Therefore, it follows that $d^{(t)}(y_1, y_2) = d^{(t)}(y_2, y_1)$.
3. Let $d^{(t)}(y_1, y_2) = d_{1,2}$, $d^{(t)}(y_2, y_3) = d_{2,3}$, and $d^{(t)}(y_1, y_3) = d_{1,3}$. Correspondingly, there exist three error vectors $z_{1,2}, z_{2,3}, z_{1,3} \in \mathcal{F}^{|E|}$ with Hamming weights $w_H(z_{1,2}) = d_{1,2}$, $w_H(z_{2,3}) = d_{2,3}$, and $w_H(z_{1,3}) = d_{1,3}$ such that
$$y_1 = y_2 + z_{1,2} G_t, \qquad (2.3)$$
$$y_2 = y_3 + z_{2,3} G_t, \qquad (2.4)$$
$$y_1 = y_3 + z_{1,3} G_t. \qquad (2.5)$$
Combining the equalities (2.3) and (2.4), we have
$$y_1 = y_3 + (z_{1,2} + z_{2,3}) G_t.$$
So it is shown that
$$d_{1,3} = d^{(t)}(y_1, y_3) \le w_H(z_{1,2} + z_{2,3}) \le w_H(z_{1,2}) + w_H(z_{2,3}) = d_{1,2} + d_{2,3}.$$
Combining (1), (2), and (3) above, the theorem is proved.

Subsequently, we can define the minimum distance of a linear network error correction code at the sink node $t \in T$.

Definition 2.3. The minimum distance of a linear network error correction code at the sink node $t \in T$ is defined as:
$$d_{\min}^{(t)} = \min\{ d^{(t)}(x_1 F_t, x_2 F_t) : \text{all distinct } x_1, x_2 \in \mathcal{F}^{\omega} \} = \min_{\substack{x_1, x_2 \in \mathcal{F}^{\omega} \\ x_1 \neq x_2}} d^{(t)}(x_1 F_t, x_2 F_t).$$

Intuitively, the weight of an error vector should be a measure of the seriousness of the error. So a weight measure for error vectors should satisfy at least the following conditions:
1. The weight measure $w(\cdot)$ of any error vector should be nonnegative, that is, it is a mapping $w: \mathcal{Z} \to \mathbb{Z}^{+} \cup \{0\}$, where $\mathbb{Z}^{+}$ is the set of all positive integers.
2. For each message vector $x \in \mathcal{X}$, when $x$ is transmitted and two distinct error vectors $z_1$ and $z_2$ occur respectively, if the two outputs are always the same, then these two error vectors should have the same weight.

Actually, it is easy to see that the Hamming weight used in classical coding theory satisfies the two conditions mentioned. Further, recall that in the classical
error-correcting coding theory, the Hamming weight can be induced by the Hamming distance, that is, for any $z \in \mathcal{Z}$, $w_H(z) = d_H(0, z)$, where $d_H(\cdot,\cdot)$ represents the Hamming distance. Motivated by this idea, for linear network error correction codes, the distance defined above should also induce a weight measure for error vectors with respect to each sink node $t \in T$. We state it below.

Definition 2.4. For any error vector $z \in \mathcal{F}^{|E|}$, the weight measure of $z$ induced by the distance $d^{(t)}(\cdot,\cdot)$ with respect to the sink $t \in T$ is defined as:
$$w^{(t)}(z) = d^{(t)}(0, z G_t) = \min\{ w_H(z') : \text{all error vectors } z' \in \mathcal{F}^{|E|} \text{ such that } 0 = z G_t - z' G_t \} = \min\{ w_H(z') : \text{all error vectors } z' \in \mathcal{F}^{|E|} \text{ such that } z G_t = z' G_t \},$$
which is called the network Hamming weight of the error vector $z$ with respect to the sink node $t$.

Now we show that this weight of errors satisfies the two conditions on weight measures stated above. First, it is easily seen that $w^{(t)}(z)$ is nonnegative for any error vector $z \in \mathcal{F}^{|E|}$. On the other hand, assume that $z_1$ and $z_2$ are two arbitrary error vectors in $\mathcal{F}^{|E|}$ satisfying, for any message vector $x \in \mathcal{X}$,
$$x F_t + z_1 G_t = x F_t + z_2 G_t,$$
or equivalently, $z_1 G_t = z_2 G_t$. Together with the definition
$$w^{(t)}(z_i) = \min\{ w_H(z') : \text{all error vectors } z' \in \mathcal{F}^{|E|} \text{ such that } z_i G_t = z' G_t \}, \quad i = 1, 2,$$
it follows that $w^{(t)}(z_1) = w^{(t)}(z_2)$.

Further, if any two error vectors $z_1, z_2 \in \mathcal{F}^{|E|}$ satisfy $z_1 G_t = z_2 G_t$, we denote this by $z_1 \overset{w^{(t)}}{\sim} z_2$, and then we can obtain the following result easily.

Proposition 2.2. The relation $\overset{w^{(t)}}{\sim}$ is an equivalence relation, that is, for any three error vectors $z_1, z_2, z_3 \in \mathcal{F}^{|E|}$, it has the following three properties:
1. $z_1 \overset{w^{(t)}}{\sim} z_1$;
2. if $z_1 \overset{w^{(t)}}{\sim} z_2$, then $z_2 \overset{w^{(t)}}{\sim} z_1$;
3. if $z_1 \overset{w^{(t)}}{\sim} z_2$ and $z_2 \overset{w^{(t)}}{\sim} z_3$, then $z_1 \overset{w^{(t)}}{\sim} z_3$.

In addition, the weight $w^{(t)}(\cdot)$ of error vectors at the sink node $t \in T$ also induces a weight measure for received vectors in the vector space $\mathcal{Y}_t = \mathcal{F}^{|\mathrm{In}(t)|}$, denoted by $W^{(t)}(\cdot)$.
To be specific, let $y \in \mathcal{Y}_t$ be a received vector at the sink node $t$; for any error vector $z \in \mathcal{Z}$ such that $y = z G_t$, define:
$$W^{(t)}(y) = w^{(t)}(z).$$
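Definition 2.4 is easy to exercise by brute force on a tiny instance. The sketch below works over GF(2) with a hypothetical $G_t$ (taken from an assumed three-channel network; all names here are illustrative, not from the text) and searches all of $\mathcal{Z}$ for the lightest $z'$ with $z' G_t = z G_t$. It shows how errors that cancel at the sink get network Hamming weight 0 even though their ordinary Hamming weight is positive.

```python
import itertools
import numpy as np

def network_hamming_weight(z, G_t):
    """w^(t)(z) = min{ w_H(z') : z' G_t = z G_t } over GF(2).

    Brute force over all of Z = F^{|E|}; only feasible for tiny |E|."""
    target = np.asarray(z) @ G_t % 2
    best = None
    for zp in itertools.product((0, 1), repeat=G_t.shape[0]):
        if np.array_equal(np.array(zp) @ G_t % 2, target):
            w = sum(zp)                       # ordinary Hamming weight of z'
            best = w if best is None else min(best, w)
    return best

# Assumed G_t of a 3-channel toy network: errors on (e1, e2, e3) map to In(t).
G_t = np.array([[0, 1],
                [1, 0],
                [0, 1]])

# Errors on e1 and e3 cancel at the sink, so their sum has network weight 0,
# even though its Hamming weight is 2.
print(network_hamming_weight([1, 0, 1], G_t))  # 0
print(network_hamming_weight([1, 0, 0], G_t))  # 1
```

This cancellation is exactly why the network Hamming weight, rather than the plain Hamming weight, is the right measure of how serious an error is at a given sink.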
It is easily seen that $W^{(t)}(\cdot)$ is a mapping from the received message space $\mathcal{Y}_t = \mathcal{F}^{|\mathrm{In}(t)|}$ to $\mathbb{Z}^{+} \cup \{0\}$. Further, $W^{(t)}(\cdot)$ is well-defined, because the values $w^{(t)}(z)$ for all $z \in \mathcal{Z}$ satisfying $z G_t = y$ are the same by the definition of the weight of errors, and, for any $y \in \mathcal{Y}_t$, there always exists an error vector $z \in \mathcal{Z}$ such that $y = z G_t$, as $G_t$ is full-rank, i.e., $\mathrm{Rank}(G_t) = |\mathrm{In}(t)|$. In other words, for any $y \in \mathcal{Y}_t$,
$$W^{(t)}(y) = \min\{ w_H(z) : z \in \mathcal{Z} \text{ such that } y = z G_t \}.$$
In particular, $W^{(t)}(z G_t) = w^{(t)}(z)$ for any $z \in \mathcal{Z}$. Then, for any two received vectors $y_1, y_2 \in \mathcal{Y}_t$,
$$d^{(t)}(y_1, y_2) = \min\{ w_H(z) : z \in \mathcal{Z} \text{ such that } y_1 = y_2 + z G_t \} = \min\{ w_H(z) : z \in \mathcal{Z} \text{ such that } y_1 - y_2 = z G_t \} = W^{(t)}(y_1 - y_2). \qquad (2.6)$$
In particular, for any two codewords $x_1 F_t$ and $x_2 F_t$ at the sink node $t \in T$, we deduce
$$d^{(t)}(x_1 F_t, x_2 F_t) = W^{(t)}(x_1 F_t - x_2 F_t) = W^{(t)}((x_1 - x_2) F_t),$$
which further implies that
$$d_{\min}^{(t)} = \min\{ d^{(t)}(x_1 F_t, x_2 F_t) : x_1, x_2 \in \mathcal{X} = \mathcal{F}^{\omega} \text{ with } x_1 \neq x_2 \} = \min\{ W^{(t)}((x_1 - x_2) F_t) : x_1, x_2 \in \mathcal{X} = \mathcal{F}^{\omega} \text{ with } x_1 \neq x_2 \} = \min_{x \in \mathcal{F}^{\omega} \setminus \{0\}} W^{(t)}(x F_t) \qquad (2.7)$$
$$= \min_{c \in C_t \setminus \{0\}} W^{(t)}(c) \triangleq W^{(t)}(C_t). \qquad (2.8)$$
We call $W^{(t)}(C_t)$ the weight of the codebook $C_t$ with respect to the sink node $t \in T$, defined as the minimum weight of all nonzero codewords in $C_t$.

2.3 Error Correction and Detection Capabilities

In the last section, we established the network error correction model over linear network coding. In this section, we discuss the error correction and detection capabilities. Similar to the minimum distance decoding principle for classical error-correcting codes, we can also apply the minimum distance decoding principle to linear network error correction codes at each sink node.
Definition 2.5 (Minimum Distance Decoding Principle). When a vector $y \in \mathcal{Y}_t$ is received at a sink node $t \in T$, choose a codeword $c \in C_t$ (or equivalently, $x F_t \in C_t$) that is closest to the received vector $y$. To be specific, choose a codeword $c \in C_t$ such that
$$d^{(t)}(y, c) = \min_{c' \in C_t} d^{(t)}(y, c') = \min_{x' \in \mathcal{F}^{\omega}} d^{(t)}(y, x' F_t).$$
We refer to this as the minimum distance decoding principle at the sink node $t \in T$.

Definition 2.6 (d-error-detecting). Let $d$ be a positive integer. A linear network error correction code is $d$-error-detecting at the sink node $t \in T$ if, whenever any source message vector in $\mathcal{X}$ is transmitted by this code and any error vector $z \in \mathcal{Z}$ with weight $1 \le w^{(t)}(z) \le d$ occurs, the received vector at the sink node $t$ is not a codeword. A linear network error correction code is exactly $d$-error-detecting at the sink node $t \in T$ if it is $d$-error-detecting but not $(d+1)$-error-detecting at $t$.

Definition 2.7 (d-error-correcting). Let $d$ be a positive integer. A linear network error correction code is $d$-error-correcting at the sink node $t \in T$ if the minimum distance decoding principle at the sink node $t$ is able to correct any error vector in $\mathcal{Z}$ with weight no more than $d$, assuming that the incomplete decoding rule¹ is used. A linear network error correction code is exactly $d$-error-correcting at the sink node $t$ if it is $d$-error-correcting but not $(d+1)$-error-correcting at $t$.

In the following two theorems, it is shown that, as in classical coding theory, the error detection and error correction capabilities of a linear network error correction code at each sink node are characterized by its corresponding minimum distance.

Theorem 2.2. A linear network error correction code is exactly $d$-error-detecting at sink node $t$ if and only if $d_{\min}^{(t)} = d + 1$.

Proof. First, assume that $d_{\min}^{(t)} = d + 1$.
Then we claim that this linear network error correction code is $d$-error-detecting at the sink node $t$, that is, for any message vector $x_1 \in \mathcal{F}^{\omega}$ and any error vector $z \in \mathcal{F}^{|E|}$ with $1 \le w^{(t)}(z) \le d$,
$$x_1 F_t + z G_t \notin C_t = \{ x F_t : \text{all } x \in \mathcal{F}^{\omega} \}.$$
On the contrary, suppose that there exist some message vector $x_1 \in \mathcal{F}^{\omega}$ and some error vector $z \in \mathcal{F}^{|E|}$ with $1 \le w^{(t)}(z) \le d$ such that $x_1 F_t + z G_t \in C_t$,

¹ Incomplete decoding refers to the rule that, when a word is received, the closest codeword is found; if there is more than one such codeword, a retransmission is requested. The counterpart is complete decoding, in which, if there is more than one such codeword, one of them is selected arbitrarily.
which means that there exists another message vector $x_2 \in \mathcal{F}^{\omega}$ such that
$$x_1 F_t + z G_t = x_2 F_t.$$
Together with the equality (2.6), this implies that
$$d^{(t)}(x_1 F_t, x_2 F_t) = W^{(t)}(x_1 F_t - x_2 F_t) = W^{(t)}(z G_t) = w^{(t)}(z) \le d,$$
which contradicts $d_{\min}^{(t)} = d + 1 > d$.

In the following, we show that this linear network error correction code is not $(d+1)$-error-detecting at $t$. Since $d_{\min}^{(t)} = d + 1$, there exist two codewords $x_1 F_t$ and $x_2 F_t$ satisfying
$$d^{(t)}(x_1 F_t, x_2 F_t) = d + 1.$$
That is, there exists an error vector $z$ with $w^{(t)}(z) = d + 1$ such that $x_1 F_t = x_2 F_t + z G_t$, which indicates that this linear network error correction code is not $(d+1)$-error-detecting at the sink node $t \in T$.

On the other hand, let this linear network error correction code be exactly $d$-error-detecting at the sink node $t \in T$; to be specific, it is $d$-error-detecting and not $(d+1)$-error-detecting at $t$. The $d$-error-detecting property shows that $x F_t + z G_t \notin C_t$ for any codeword $x F_t \in C_t$ and any error vector $z \in \mathcal{Z}$ with $1 \le w^{(t)}(z) \le d$. Thus, it follows that $d_{\min}^{(t)} \ge d + 1$. To see this, assume to the contrary that $d_{\min}^{(t)} < d + 1$. Notice that $d_{\min}^{(t)} = \min_{x \in \mathcal{F}^{\omega} \setminus \{0\}} W^{(t)}(x F_t)$ from (2.7), which implies that there is a codeword $x_1 F_t \in C_t$ satisfying $W^{(t)}(x_1 F_t) = d_{\min}^{(t)} < d + 1$. Further, let $z \in \mathcal{Z}$ be an error vector satisfying $x_1 F_t = z G_t$, and thus one has
$$1 \le w^{(t)}(z) = W^{(t)}(z G_t) = W^{(t)}(x_1 F_t) = d_{\min}^{(t)} < d + 1.$$
Thus, for any codeword $x_2 F_t \in C_t$, we have
$$x_2 F_t + z G_t = x_2 F_t + x_1 F_t = (x_1 + x_2) F_t \in C_t,$$
which, together with $1 \le w^{(t)}(z) \le d$, violates the $d$-error-detecting property at the sink node $t$. In addition, this LNEC code is not $(d+1)$-error-detecting at the sink node $t$; that is, there exist a codeword $x F_t \in C_t$ and an error vector $z \in \mathcal{F}^{|E|}$ with $w^{(t)}(z) = d + 1$ such that $x F_t + z G_t \in C_t$. Let $x_1 F_t = x F_t + z G_t$ for some $x_1 \in \mathcal{F}^{\omega}$, and then
$$d_{\min}^{(t)} \le d^{(t)}(x_1 F_t, x F_t) = W^{(t)}(x_1 F_t - x F_t) = W^{(t)}(z G_t) = w^{(t)}(z) = d + 1.$$
Combining the above, it is shown that $d_{\min}^{(t)} = d + 1$. This completes the proof.

Theorem 2.3. A linear network error correction code is exactly $d$-error-correcting at sink node $t$ if and only if $d_{\min}^{(t)} = 2d + 1$ or $d_{\min}^{(t)} = 2d + 2$.
Proof. First, we assume that a linear network error correction code is $d$-error-correcting at the sink node $t \in T$, which means that, for any codeword $x F_t \in C_t$ and any error vector $z \in \mathcal{Z}$ with weight no more than $d$,
$$d^{(t)}(x F_t, x F_t + z G_t) < d^{(t)}(x' F_t, x F_t + z G_t)$$
for any other codeword $x' F_t \in C_t$.

We will show that $d_{\min}^{(t)} > 2d$. Assume to the contrary that $d_{\min}^{(t)} \le 2d$, that is, there exist two distinct codewords $x_1 F_t, x_2 F_t \in C_t$ satisfying $d^{(t)}(x_1 F_t, x_2 F_t) \le 2d$. Subsequently, we can always choose an error vector $z \in \mathcal{Z}$ such that $x_1 F_t - x_2 F_t = z G_t$, since $A_{\mathrm{In}(t)} G_t = I_{\mathrm{In}(t)}$, where again $I_{\mathrm{In}(t)}$ is an $|\mathrm{In}(t)| \times |\mathrm{In}(t)|$ identity matrix. Thus, one has
$$2d \ge d^{(t)}(x_1 F_t, x_2 F_t) = W^{(t)}(x_1 F_t - x_2 F_t) = W^{(t)}(z G_t) = w^{(t)}(z) \triangleq \hat{d}.$$
Note that $w^{(t)}(z) = \min\{ w_H(z') : \text{all } z' \in \mathcal{Z} \text{ such that } z G_t = z' G_t \}$. Thus, without loss of generality, we assume that $w^{(t)}(z) = w_H(z)$. Further, let $d_1 = \lfloor \hat{d}/2 \rfloor$ and $d_2 = \lfloor (\hat{d}+1)/2 \rfloor$. It is not difficult to see that $d_1 \le d_2 \le d$ and $d_1 + d_2 = \hat{d}$. Thus, we can claim that there exist two error vectors $z_1, z_2 \in \mathcal{Z}$ satisfying:
- $w_H(z_1) = d_1$ and $w_H(z_2) = d_2$, respectively;
- $\rho_{z_1} \cap \rho_{z_2} = \emptyset$ and $\rho_{z_1} \cup \rho_{z_2} = \rho_z$;
- $z = z_1 + z_2$;
where $\rho_z$ is called the error pattern induced by an error vector $z \in \mathcal{Z}$, defined as the set of channels on which the corresponding coordinates of $z$ are nonzero.

Below, we indicate that $w_H(z_i) = w^{(t)}(z_i)$, $i = 1, 2$. Clearly, we know $w^{(t)}(z_1) \le w_H(z_1)$. Further, if $w^{(t)}(z_1) < w_H(z_1)$, there exists an error vector $z_1' \in \mathcal{Z}$ satisfying $z_1' G_t = z_1 G_t$ and
$$w_H(z_1') = w^{(t)}(z_1') = w^{(t)}(z_1) < w_H(z_1). \qquad (2.9)$$
Thus, we have
$$(z_1' + z_2) G_t = z_1' G_t + z_2 G_t = z_1 G_t + z_2 G_t = (z_1 + z_2) G_t = z G_t,$$
which leads to
$$w_H(z_1' + z_2) \le w_H(z_1') + w_H(z_2) < w_H(z_1) + w_H(z_2) \qquad (2.10)$$
$$= w_H(z_1 + z_2) \qquad (2.11)$$
$$= w_H(z) = w^{(t)}(z),$$
where the inequality (2.10) follows from (2.9) and the equality (2.11) follows from $\rho_{z_1} \cap \rho_{z_2} = \emptyset$. This means that there exists an error vector $z_1' + z_2 \in \mathcal{Z}$ satisfying $(z_1' + z_2) G_t = z G_t$ and $w_H(z_1' + z_2) < w^{(t)}(z)$, which evidently violates the fact that
$$w^{(t)}(z) = \min\{ w_H(z') : \text{all } z' \in \mathcal{Z} \text{ such that } z' G_t = z G_t \}.$$
Therefore, one obtains $w_H(z_1) = w^{(t)}(z_1)$. Similarly, we can also deduce $w_H(z_2) = w^{(t)}(z_2)$.

Combining the above, we have $x_1 F_t - x_2 F_t = z G_t = z_1 G_t + z_2 G_t$, and subsequently,
$$x_1 F_t - z_1 G_t = x_2 F_t + z_2 G_t.$$
This implies that
$$d^{(t)}(x_1 F_t, x_2 F_t + z_2 G_t) = d^{(t)}(x_1 F_t, x_1 F_t - z_1 G_t) = W^{(t)}(z_1 G_t) = w^{(t)}(z_1) = w_H(z_1) = d_1 \le d_2 = w_H(z_2) = w^{(t)}(z_2) = W^{(t)}(z_2 G_t) = d^{(t)}(x_2 F_t, x_2 F_t + z_2 G_t),$$
which further implies that when the message vector $x_2$ is transmitted and the error vector $z_2$ occurs, the minimum distance decoding principle cannot decode $x_2$ successfully at the sink node $t$. This contradicts the $d$-error-correcting property at the sink node $t \in T$. So the hypothesis is not true, which indicates $d_{\min}^{(t)} \ge 2d + 1$.

On the other hand, if this LNEC code is not $(d+1)$-error-correcting, then $d_{\min}^{(t)} \le 2d + 2$. To see this, suppose to the contrary that $d_{\min}^{(t)} \ge 2d + 3$. The fact that this LNEC code is not $(d+1)$-error-correcting at the sink node $t \in T$ means that there exist a codeword $x_1 F_t \in C_t$ and an error vector $z_1 \in \mathcal{Z}$ with $w^{(t)}(z_1) = d + 1$ such that, when the message vector $x_1$ is transmitted and the error vector $z_1$ occurs, the minimum distance decoding principle cannot decode $x_1$ successfully at the sink node $t$. To be specific, there exists another message $x_2 \in \mathcal{X}$ satisfying:
$$d^{(t)}(x_2 F_t, x_1 F_t + z_1 G_t) \le d^{(t)}(x_1 F_t, x_1 F_t + z_1 G_t),$$
that is, there exists an error vector $z_2 \in \mathcal{Z}$ with $w^{(t)}(z_2) \le d + 1$ satisfying
$$x_2 F_t + z_2 G_t = x_1 F_t + z_1 G_t.$$
Therefore,
$$d^{(t)}(x_1 F_t, x_2 F_t) \le d^{(t)}(x_1 F_t, x_1 F_t + z_1 G_t) + d^{(t)}(x_2 F_t, x_2 F_t + z_2 G_t) = W^{(t)}(z_1 G_t) + W^{(t)}(z_2 G_t) = w^{(t)}(z_1) + w^{(t)}(z_2) \le 2d + 2,$$
which violates $d_{\min}^{(t)} \ge 2d + 3$. Therefore, combining the two cases $d_{\min}^{(t)} \ge 2d + 1$ and $d_{\min}^{(t)} \le 2d + 2$, we obtain $d_{\min}^{(t)} = 2d + 1$ or $2d + 2$.

In the following, we prove the sufficiency of the theorem. First, we consider the case $d_{\min}^{(t)} = 2d + 1$. For this case, this LNEC code is $d$-error-correcting at the sink node $t \in T$, that is, for any codeword $x F_t \in C_t$ and any error vector $z \in \mathcal{Z}$ with weight no more than $d$,
$$d^{(t)}(x F_t, x F_t + z G_t) < d^{(t)}(x' F_t, x F_t + z G_t)$$
for any other codeword $x' F_t \in C_t$. Assume the contrary that there are two distinct codewords $x_1 F_t, x_2 F_t \in C_t$ and an error vector $z \in \mathcal{Z}$ with weight no more than $d$ such that
$$d^{(t)}(x_2 F_t, x_1 F_t + z G_t) \le d^{(t)}(x_1 F_t, x_1 F_t + z G_t).$$
Therefore, we have
$$d^{(t)}(x_1 F_t, x_2 F_t) \le d^{(t)}(x_1 F_t, x_1 F_t + z G_t) + d^{(t)}(x_2 F_t, x_1 F_t + z G_t) \le 2 d^{(t)}(x_1 F_t, x_1 F_t + z G_t) = 2 W^{(t)}(z G_t) = 2 w^{(t)}(z) \le 2d,$$
which contradicts $d_{\min}^{(t)} = 2d + 1$. On the other hand, this LNEC code is not $(d+1)$-error-correcting at the sink node $t \in T$. As $d_{\min}^{(t)} = 2d + 1$, there exist two codewords $x_1 F_t, x_2 F_t \in C_t$ such that
$$d^{(t)}(x_1 F_t, x_2 F_t) = 2d + 1,$$
which further shows that there exists an error vector $z \in \mathcal{Z}$ with $w^{(t)}(z) = 2d + 1$ such that
$$x_1 F_t = x_2 F_t + z G_t. \qquad (2.12)$$
Together with the definition of weight
$$w^{(t)}(z) = \min\{ w_H(z') : \text{all } z' \in \mathcal{Z} \text{ such that } z' G_t = z G_t \},$$
we may, without loss of generality, assume that $z \in \mathcal{Z}$ satisfies $w_H(z) = w^{(t)}(z) = 2d + 1$. Subsequently, let $z_1, z_2 \in \mathcal{Z}$ be two error vectors satisfying $z = z_1 + z_2$ with the following two conditions:
- $\rho_{z_1} \cap \rho_{z_2} = \emptyset$ and $\rho_{z_1} \cup \rho_{z_2} = \rho_z$;
- $|\rho_{z_1}| = d$ and $|\rho_{z_2}| = d + 1$.
Now, we claim that $w_H(z_1) = w^{(t)}(z_1) = d$ and $w_H(z_2) = w^{(t)}(z_2) = d + 1$. On the contrary, assume that $w_H(z_1) \neq w^{(t)}(z_1)$, which, together with $w_H(z_1) \ge w^{(t)}(z_1)$, indicates $w_H(z_1) > w^{(t)}(z_1)$. Again notice that
$$w^{(t)}(z_1) = \min\{ w_H(z_1') : \text{all } z_1' \in \mathcal{Z} \text{ such that } z_1' G_t = z_1 G_t \}.$$
It follows that there must exist an error vector $z_1' \in \mathcal{Z}$ such that $w_H(z_1') = w^{(t)}(z_1)$ and $z_1' G_t = z_1 G_t$. Thus, it is shown that
$$(z_1' + z_2) G_t = z_1' G_t + z_2 G_t = z_1 G_t + z_2 G_t = z G_t,$$
which implies that $w^{(t)}(z_1' + z_2) = w^{(t)}(z)$. However,
$$w_H(z_1' + z_2) \le w_H(z_1') + w_H(z_2) = w^{(t)}(z_1) + w_H(z_2) < w_H(z_1) + w_H(z_2) = w_H(z_1 + z_2) = w_H(z) = w^{(t)}(z) = w^{(t)}(z_1' + z_2),$$
which is impossible, since the network Hamming weight never exceeds the Hamming weight. Similarly, we can also obtain $w_H(z_2) = w^{(t)}(z_2) = d + 1$. Therefore, from (2.12), one has
$$x_1 F_t = x_2 F_t + z G_t = x_2 F_t + z_1 G_t + z_2 G_t,$$
that is,
$$x_1 F_t - z_1 G_t = x_2 F_t + z_2 G_t.$$
So when the message vector $x_2$ is transmitted and the error vector $z_2$ occurs, the minimum distance decoding principle cannot decode $x_2$ successfully at the sink node $t$; that is, the code is not $(d+1)$-error-correcting at the sink node $t \in T$.
Combining the above, this LNEC code is exactly $d$-error-correcting at the sink node $t \in T$. Similarly, we can show that $d_{\min}^{(t)} = 2d + 2$ also implies exactly $d$-error-correcting at the sink node $t \in T$. The proof is completed.

Corollary 2.1. For a linear network error correction code, $d_{\min}^{(t)} = d$ if and only if it is exactly $\lfloor (d-1)/2 \rfloor$-error-correcting at the sink node $t \in T$.

Remark 2.1. In [51], the authors indicated an interesting discovery that, for nonlinear network error correction codes, the number of correctable errors can be more than half of the number of detectable errors, which is in contrast to the linear case as stated above.

In classical coding theory, if the positions of the errors that occurred are known to the decoder, the errors are called erasure errors. When network coding is under consideration, this scheme can be extended to linear network error correction codes. To be specific, we assume that the collection of channels on which errors may occur during the network transmission is known to the sink nodes. For this case, we will characterize the capability for correcting erasure errors with respect to the sink nodes, called the erasure error correction capability. This characterization is a generalization of the corresponding result in classical coding theory.

Let $\rho$ be an error pattern consisting of the channels on which errors may happen, and say that an error vector $z = [z_e : e \in E]$ matches an error pattern $\rho$ if $z_e = 0$ for all $e \in E \setminus \rho$. Since the error pattern $\rho$ is known to the sink nodes, a weight measure $w^{(t)}$ of the error pattern $\rho$, with respect to the weight measure $w^{(t)}$ of error vectors at the sink node $t$, is defined as:
$$w^{(t)}(\rho) \triangleq \max_{z \in \mathcal{Z}_\rho} w^{(t)}(z),$$
where $\mathcal{Z}_\rho$ represents the collection of all error vectors matching the error pattern $\rho$, i.e., $\mathcal{Z}_\rho = \{ z : z \in \mathcal{F}^{|E|} \text{ and } z \text{ matches } \rho \}$. Naturally, we still use the minimum distance decoding principle to correct erasure errors at the sink nodes. First, we give the following definition.
Definition 2.8 (d-erasure-error-correcting). Let $d$ be a positive integer. A linear network error correction code is $d$-erasure-error-correcting at the sink node $t \in T$ if the minimum distance decoding principle at the sink node $t$ is able to correct all error vectors in $\mathcal{Z}$ matching any error pattern $\rho$ with weight no more than $d$, assuming that $\rho$ is known by $t$ and the incomplete decoding rule is used.

The following result characterizes the erasure error correction capability of linear network error correction codes.

Theorem 2.4. A linear network error correction code is $d$-erasure-error-correcting at the sink node $t \in T$ if and only if $d_{\min}^{(t)} \ge d + 1$.
Proof. First, we prove that $d_{\min}^{(t)} \ge d + 1$ is a sufficient condition. Assume that $d_{\min}^{(t)} \ge d + 1$ and, to the contrary, that this linear network error correction code is not $d$-erasure-error-correcting at the sink node $t$. That is, for some error pattern $\rho$ with weight no more than $d$ with respect to $t$, there are two distinct message vectors $x_1, x_2 \in \mathcal{F}^{\omega}$ and two distinct error vectors $z_1, z_2$ matching the error pattern $\rho$ (or equivalently, $z_1, z_2 \in \mathcal{Z}_\rho$) such that
$$x_1 F_t + z_1 G_t = x_2 F_t + z_2 G_t.$$
Subsequently,
$$x_1 F_t - x_2 F_t = z_2 G_t - z_1 G_t,$$
which further shows
$$d^{(t)}(x_1 F_t, x_2 F_t) = W^{(t)}((z_2 - z_1) G_t) = w^{(t)}(z_2 - z_1) \le w^{(t)}(\rho) \le d, \qquad (2.13)$$
where the first inequality in (2.13) follows from the fact that both $z_1$ and $z_2$ match $\rho$, so $z_2 - z_1 \in \mathcal{Z}_\rho$. This violates $d_{\min}^{(t)} \ge d + 1$.

On the other hand, we prove the necessity of the condition by contradiction. Assume a linear network error correction code has $d_{\min}^{(t)} \le d$ for some sink node $t$. We will find an error pattern $\rho$ with weight $w^{(t)}(\rho) \le d$ which cannot be corrected. Since $d_{\min}^{(t)} \le d$, there exist two codewords $x_1 F_t, x_2 F_t \in C_t$ such that $d^{(t)}(x_1 F_t, x_2 F_t) \le d$. Further, there exists an error vector $z \in \mathcal{Z}$ such that $x_1 F_t - x_2 F_t = z G_t$ and
$$w^{(t)}(z) = W^{(t)}(z G_t) = W^{(t)}(x_1 F_t - x_2 F_t) = d^{(t)}(x_1 F_t, x_2 F_t) \le d.$$
Note that $w^{(t)}(z) = \min\{ w_H(z') : z' \in \mathcal{Z} \text{ such that } z' G_t = z G_t \}$. Without loss of generality, assume that the error vector $z$ satisfies $w_H(z) = w^{(t)}(z) \le d$. Let $\rho_z$ further be the error pattern induced by the error vector $z$, that is, the collection of channels on which the corresponding coordinates of $z$ are nonzero. Hence,
$$w^{(t)}(\rho_z) \le |\rho_z| = w_H(z) \le d.$$
This shows that when the message vector $x_2$ is transmitted and the erasure error vector $z$ matching $\rho_z$ with $w^{(t)}(\rho_z) \le d$ occurs, the minimum distance decoding principle at the sink node $t$ will decode $x_1$ instead of $x_2$; that is, $x_2$ cannot be decoded successfully at the sink node $t$, although $\rho_z$ is known. This leads to a contradiction.
Combining the above, we complete the proof.
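The minimum distance decoding principle of Definition 2.5 and the bound of Theorem 2.3 can be checked by exhaustive search on a toy instance. The sketch below assumes a hypothetical repetition-style network over GF(2) ($\omega = 1$ with three parallel channels $s \to t$, so $F_t = [1\ 1\ 1]$, $G_t = I$, and $d_{\min}^{(t)} = 3$, i.e. exactly 1-error-correcting); it is an illustration under these assumptions, not the book's construction.

```python
import itertools
import numpy as np

def distance(y, c, G_t):
    """d^(t)(y, c) = min{ w_H(z) : y = c + z G_t } over GF(2), by brute force."""
    target = (np.asarray(y) - np.asarray(c)) % 2
    return min(sum(z) for z in itertools.product((0, 1), repeat=G_t.shape[0])
               if np.array_equal(np.array(z) @ G_t % 2, target))

def md_decode(y, F_t, G_t):
    """Minimum distance decoding with the incomplete decoding rule:
    return the unique closest message vector, or None (request retransmission)."""
    ranked = sorted(
        (distance(y, np.array(x) @ F_t % 2, G_t), x)
        for x in itertools.product((0, 1), repeat=F_t.shape[0]))
    if len(ranked) > 1 and ranked[0][0] == ranked[1][0]:
        return None                       # tie: incomplete decoding gives up
    return np.array(ranked[0][1])

# Assumed repetition-style toy network: omega = 1, three parallel s -> t channels.
F_t = np.array([[1, 1, 1]])
G_t = np.eye(3, dtype=int)

y = np.array([1, 0, 1])                   # codeword (1,1,1) hit by one error on e2
print(md_decode(y, F_t, G_t))             # one error (= d): decoded correctly to [1]

y2 = np.array([1, 0, 0])                  # two errors: beyond d = 1, so the
print(md_decode(y2, F_t, G_t))            # decoder returns the wrong message [0]
```

The second call illustrates why $d_{\min}^{(t)} = 2d + 1$ is tight: two errors move the received vector closer to the wrong codeword, exactly as in the proof of Theorem 2.3.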
More informationChapter 7. Error Control Coding. 7.1 Historical background. Mikael Olofsson 2005
Chapter 7 Error Control Coding Mikael Olofsson 2005 We have seen in Chapters 4 through 6 how digital modulation can be used to control error probabilities. This gives us a digital channel that in each
More information4 An Introduction to Channel Coding and Decoding over BSC
4 An Introduction to Channel Coding and Decoding over BSC 4.1. Recall that channel coding introduces, in a controlled manner, some redundancy in the (binary information sequence that can be used at the
More informationSection 3 Error Correcting Codes (ECC): Fundamentals
Section 3 Error Correcting Codes (ECC): Fundamentals Communication systems and channel models Definition and examples of ECCs Distance For the contents relevant to distance, Lin & Xing s book, Chapter
More informationLecture 11: Polar codes construction
15-859: Information Theory and Applications in TCS CMU: Spring 2013 Lecturer: Venkatesan Guruswami Lecture 11: Polar codes construction February 26, 2013 Scribe: Dan Stahlke 1 Polar codes: recap of last
More informationEE229B - Final Project. Capacity-Approaching Low-Density Parity-Check Codes
EE229B - Final Project Capacity-Approaching Low-Density Parity-Check Codes Pierre Garrigues EECS department, UC Berkeley garrigue@eecs.berkeley.edu May 13, 2005 Abstract The class of low-density parity-check
More informationCMSC 451: Max-Flow Extensions
CMSC 51: Max-Flow Extensions Slides By: Carl Kingsford Department of Computer Science University of Maryland, College Park Based on Section 7.7 of Algorithm Design by Kleinberg & Tardos. Circulations with
More information3.3 Easy ILP problems and totally unimodular matrices
3.3 Easy ILP problems and totally unimodular matrices Consider a generic ILP problem expressed in standard form where A Z m n with n m, and b Z m. min{c t x : Ax = b, x Z n +} (1) P(b) = {x R n : Ax =
More informationUSA Mathematics Talent Search
ID#: 036 16 4 1 We begin by noting that a convex regular polygon has interior angle measures (in degrees) that are integers if and only if the exterior angle measures are also integers. Since the sum of
More informationLecture 4: Proof of Shannon s theorem and an explicit code
CSE 533: Error-Correcting Codes (Autumn 006 Lecture 4: Proof of Shannon s theorem and an explicit code October 11, 006 Lecturer: Venkatesan Guruswami Scribe: Atri Rudra 1 Overview Last lecture we stated
More informationLecture 14: Hamming and Hadamard Codes
CSCI-B69: A Theorist s Toolkit, Fall 6 Oct 6 Lecture 4: Hamming and Hadamard Codes Lecturer: Yuan Zhou Scribe: Kaiyuan Zhu Recap Recall from the last lecture that error-correcting codes are in fact injective
More informationMATH 291T CODING THEORY
California State University, Fresno MATH 291T CODING THEORY Spring 2009 Instructor : Stefaan Delcroix Chapter 1 Introduction to Error-Correcting Codes It happens quite often that a message becomes corrupt
More informationMa/CS 6b Class 25: Error Correcting Codes 2
Ma/CS 6b Class 25: Error Correcting Codes 2 By Adam Sheffer Recall: Codes V n the set of binary sequences of length n. For example, V 3 = 000,001,010,011,100,101,110,111. Codes of length n are subsets
More informationRobust Network Codes for Unicast Connections: A Case Study
Robust Network Codes for Unicast Connections: A Case Study Salim Y. El Rouayheb, Alex Sprintson, and Costas Georghiades Department of Electrical and Computer Engineering Texas A&M University College Station,
More informationThe Hamming Codes and Delsarte s Linear Programming Bound
The Hamming Codes and Delsarte s Linear Programming Bound by Sky McKinley Under the Astute Tutelage of Professor John S. Caughman, IV A thesis submitted in partial fulfillment of the requirements for the
More informationCoding for Errors and Erasures in Random Network Coding
Coding for Errors and Erasures in Random Network Coding Ralf Koetter Institute for Communications Engineering TU Munich D-80333 Munich ralf.koetter@tum.de Frank R. Kschischang The Edward S. Rogers Sr.
More informationIEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 5, MAY
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 5, MAY 2008 2303 Linear Network Codes and Systems of Polynomial Equations Randall Dougherty, Chris Freiling, and Kenneth Zeger, Fellow, IEEE Abstract
More informationBerlekamp-Massey decoding of RS code
IERG60 Coding for Distributed Storage Systems Lecture - 05//06 Berlekamp-Massey decoding of RS code Lecturer: Kenneth Shum Scribe: Bowen Zhang Berlekamp-Massey algorithm We recall some notations from lecture
More informationCodes for Partially Stuck-at Memory Cells
1 Codes for Partially Stuck-at Memory Cells Antonia Wachter-Zeh and Eitan Yaakobi Department of Computer Science Technion Israel Institute of Technology, Haifa, Israel Email: {antonia, yaakobi@cs.technion.ac.il
More informationChain Independence and Common Information
1 Chain Independence and Common Information Konstantin Makarychev and Yury Makarychev Abstract We present a new proof of a celebrated result of Gács and Körner that the common information is far less than
More informationA Polynomial-Time Algorithm for Pliable Index Coding
1 A Polynomial-Time Algorithm for Pliable Index Coding Linqi Song and Christina Fragouli arxiv:1610.06845v [cs.it] 9 Aug 017 Abstract In pliable index coding, we consider a server with m messages and n
More informationLecture 6 I. CHANNEL CODING. X n (m) P Y X
6- Introduction to Information Theory Lecture 6 Lecturer: Haim Permuter Scribe: Yoav Eisenberg and Yakov Miron I. CHANNEL CODING We consider the following channel coding problem: m = {,2,..,2 nr} Encoder
More informationEE512: Error Control Coding
EE512: Error Control Coding Solution for Assignment on Linear Block Codes February 14, 2007 1. Code 1: n = 4, n k = 2 Parity Check Equations: x 1 + x 3 = 0, x 1 + x 2 + x 4 = 0 Parity Bits: x 3 = x 1,
More informationMATH Examination for the Module MATH-3152 (May 2009) Coding Theory. Time allowed: 2 hours. S = q
MATH-315201 This question paper consists of 6 printed pages, each of which is identified by the reference MATH-3152 Only approved basic scientific calculators may be used. c UNIVERSITY OF LEEDS Examination
More informationOn the Duality between Multiple-Access Codes and Computation Codes
On the Duality between Multiple-Access Codes and Computation Codes Jingge Zhu University of California, Berkeley jingge.zhu@berkeley.edu Sung Hoon Lim KIOST shlim@kiost.ac.kr Michael Gastpar EPFL michael.gastpar@epfl.ch
More informationAn Introduction to (Network) Coding Theory
An Introduction to (Network) Coding Theory Anna-Lena Horlemann-Trautmann University of St. Gallen, Switzerland July 12th, 2018 1 Coding Theory Introduction Reed-Solomon codes 2 Introduction Coherent network
More informationFinite Mathematics. Nik Ruškuc and Colva M. Roney-Dougal
Finite Mathematics Nik Ruškuc and Colva M. Roney-Dougal September 19, 2011 Contents 1 Introduction 3 1 About the course............................. 3 2 A review of some algebraic structures.................
More informationArrangements, matroids and codes
Arrangements, matroids and codes first lecture Ruud Pellikaan joint work with Relinde Jurrius ACAGM summer school Leuven Belgium, 18 July 2011 References 2/43 1. Codes, arrangements and matroids by Relinde
More informationInformation Theory CHAPTER. 5.1 Introduction. 5.2 Entropy
Haykin_ch05_pp3.fm Page 207 Monday, November 26, 202 2:44 PM CHAPTER 5 Information Theory 5. Introduction As mentioned in Chapter and reiterated along the way, the purpose of a communication system is
More informationIntroduction to Graph Theory
Introduction to Graph Theory George Voutsadakis 1 1 Mathematics and Computer Science Lake Superior State University LSSU Math 351 George Voutsadakis (LSSU) Introduction to Graph Theory August 2018 1 /
More informationThe Capacity of a Network
The Capacity of a Network April Rasala Lehman MIT Collaborators: Nick Harvey and Robert Kleinberg MIT What is the Capacity of a Network? Source a Source b c d e Sink f Sink What is the Capacity of a Network?
More informationChapter 2: Random Variables
ECE54: Stochastic Signals and Systems Fall 28 Lecture 2 - September 3, 28 Dr. Salim El Rouayheb Scribe: Peiwen Tian, Lu Liu, Ghadir Ayache Chapter 2: Random Variables Example. Tossing a fair coin twice:
More informationLOW-density parity-check (LDPC) codes were invented
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 54, NO 1, JANUARY 2008 51 Extremal Problems of Information Combining Yibo Jiang, Alexei Ashikhmin, Member, IEEE, Ralf Koetter, Senior Member, IEEE, and Andrew
More informationMetric Space Topology (Spring 2016) Selected Homework Solutions. HW1 Q1.2. Suppose that d is a metric on a set X. Prove that the inequality d(x, y)
Metric Space Topology (Spring 2016) Selected Homework Solutions HW1 Q1.2. Suppose that d is a metric on a set X. Prove that the inequality d(x, y) d(z, w) d(x, z) + d(y, w) holds for all w, x, y, z X.
More informationError Correcting Index Codes and Matroids
Error Correcting Index Codes and Matroids Anoop Thomas and B Sundar Rajan Dept of ECE, IISc, Bangalore 5612, India, Email: {anoopt,bsrajan}@eceiiscernetin arxiv:15156v1 [csit 21 Jan 215 Abstract The connection
More information(Classical) Information Theory III: Noisy channel coding
(Classical) Information Theory III: Noisy channel coding Sibasish Ghosh The Institute of Mathematical Sciences CIT Campus, Taramani, Chennai 600 113, India. p. 1 Abstract What is the best possible way
More informationWe saw in the last chapter that the linear Hamming codes are nontrivial perfect codes.
Chapter 5 Golay Codes Lecture 16, March 10, 2011 We saw in the last chapter that the linear Hamming codes are nontrivial perfect codes. Question. Are there any other nontrivial perfect codes? Answer. Yes,
More informationChapter 3 Linear Block Codes
Wireless Information Transmission System Lab. Chapter 3 Linear Block Codes Institute of Communications Engineering National Sun Yat-sen University Outlines Introduction to linear block codes Syndrome and
More informationDecoding Reed-Muller codes over product sets
Rutgers University May 30, 2016 Overview Error-correcting codes 1 Error-correcting codes Motivation 2 Reed-Solomon codes Reed-Muller codes 3 Error-correcting codes Motivation Goal: Send a message Don t
More informationChapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University
Chapter 4 Data Transmission and Channel Capacity Po-Ning Chen, Professor Department of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30050, R.O.C. Principle of Data Transmission
More informationLecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity
5-859: Information Theory and Applications in TCS CMU: Spring 23 Lecture 8: Channel and source-channel coding theorems; BEC & linear codes February 7, 23 Lecturer: Venkatesan Guruswami Scribe: Dan Stahlke
More informationHomework #2 Solutions Due: September 5, for all n N n 3 = n2 (n + 1) 2 4
Do the following exercises from the text: Chapter (Section 3):, 1, 17(a)-(b), 3 Prove that 1 3 + 3 + + n 3 n (n + 1) for all n N Proof The proof is by induction on n For n N, let S(n) be the statement
More informationLecture 2 Linear Codes
Lecture 2 Linear Codes 2.1. Linear Codes From now on we want to identify the alphabet Σ with a finite field F q. For general codes, introduced in the last section, the description is hard. For a code of
More informationApproximately achieving Gaussian relay. network capacity with lattice-based QMF codes
Approximately achieving Gaussian relay 1 network capacity with lattice-based QMF codes Ayfer Özgür and Suhas Diggavi Abstract In [1], a new relaying strategy, quantize-map-and-forward QMF scheme, has been
More informationLecture 12. Block Diagram
Lecture 12 Goals Be able to encode using a linear block code Be able to decode a linear block code received over a binary symmetric channel or an additive white Gaussian channel XII-1 Block Diagram Data
More information2012 IEEE International Symposium on Information Theory Proceedings
Decoding of Cyclic Codes over Symbol-Pair Read Channels Eitan Yaakobi, Jehoshua Bruck, and Paul H Siegel Electrical Engineering Department, California Institute of Technology, Pasadena, CA 9115, USA Electrical
More informationDefinition 2.1. Let w be a word. Then the coset C + w of w is the set {c + w : c C}.
2.4. Coset Decoding i 2.4 Coset Decoding To apply MLD decoding, what we must do, given a received word w, is search through all the codewords to find the codeword c closest to w. This can be a slow and
More informationX 1 : X Table 1: Y = X X 2
ECE 534: Elements of Information Theory, Fall 200 Homework 3 Solutions (ALL DUE to Kenneth S. Palacio Baus) December, 200. Problem 5.20. Multiple access (a) Find the capacity region for the multiple-access
More informationLECTURE 13. Last time: Lecture outline
LECTURE 13 Last time: Strong coding theorem Revisiting channel and codes Bound on probability of error Error exponent Lecture outline Fano s Lemma revisited Fano s inequality for codewords Converse to
More informationAdaptive Cut Generation for Improved Linear Programming Decoding of Binary Linear Codes
Adaptive Cut Generation for Improved Linear Programming Decoding of Binary Linear Codes Xiaojie Zhang and Paul H. Siegel University of California, San Diego, La Jolla, CA 9093, U Email:{ericzhang, psiegel}@ucsd.edu
More informationTopological properties
CHAPTER 4 Topological properties 1. Connectedness Definitions and examples Basic properties Connected components Connected versus path connected, again 2. Compactness Definition and first examples Topological
More informationDiscrete Optimization
Prof. Friedrich Eisenbrand Martin Niemeier Due Date: April 15, 2010 Discussions: March 25, April 01 Discrete Optimization Spring 2010 s 3 You can hand in written solutions for up to two of the exercises
More informationNetwork Coding for Error Correction
Network Coding for Error Correction Thesis by Svitlana S. Vyetrenko svitlana@caltech.edu In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy California Institute of Technology
More informationLecture 4 : Introduction to Low-density Parity-check Codes
Lecture 4 : Introduction to Low-density Parity-check Codes LDPC codes are a class of linear block codes with implementable decoders, which provide near-capacity performance. History: 1. LDPC codes were
More informationMath 5707: Graph Theory, Spring 2017 Midterm 3
University of Minnesota Math 5707: Graph Theory, Spring 2017 Midterm 3 Nicholas Rancourt (edited by Darij Grinberg) December 25, 2017 1 Exercise 1 1.1 Problem Let G be a connected multigraph. Let x, y,
More informationZangwill s Global Convergence Theorem
Zangwill s Global Convergence Theorem A theory of global convergence has been given by Zangwill 1. This theory involves the notion of a set-valued mapping, or point-to-set mapping. Definition 1.1 Given
More informationECE Information theory Final (Fall 2008)
ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1
More informationNotes on uniform convergence
Notes on uniform convergence Erik Wahlén erik.wahlen@math.lu.se January 17, 2012 1 Numerical sequences We begin by recalling some properties of numerical sequences. By a numerical sequence we simply mean
More informationSum-networks from undirected graphs: Construction and capacity analysis
Electrical and Computer Engineering Conference Papers Posters and Presentations Electrical and Computer Engineering 014 Sum-networks from undirected graphs: Construction and capacity analysis Ardhendu
More informationEquivalence for Networks with Adversarial State
Equivalence for Networks with Adversarial State Oliver Kosut Department of Electrical, Computer and Energy Engineering Arizona State University Tempe, AZ 85287 Email: okosut@asu.edu Jörg Kliewer Department
More informationCMPSCI 611: Advanced Algorithms
CMPSCI 611: Advanced Algorithms Lecture 12: Network Flow Part II Andrew McGregor Last Compiled: December 14, 2017 1/26 Definitions Input: Directed Graph G = (V, E) Capacities C(u, v) > 0 for (u, v) E and
More informationLecture 3: Error Correcting Codes
CS 880: Pseudorandomness and Derandomization 1/30/2013 Lecture 3: Error Correcting Codes Instructors: Holger Dell and Dieter van Melkebeek Scribe: Xi Wu In this lecture we review some background on error
More informationIndex Coding With Erroneous Side Information
Index Coding With Erroneous Side Information Jae-Won Kim and Jong-Seon No, Fellow, IEEE 1 Abstract arxiv:1703.09361v1 [cs.it] 28 Mar 2017 In this paper, new index coding problems are studied, where each
More informationClimbing an Infinite Ladder
Section 5.1 Section Summary Mathematical Induction Examples of Proof by Mathematical Induction Mistaken Proofs by Mathematical Induction Guidelines for Proofs by Mathematical Induction Climbing an Infinite
More informationCapacity of a Two-way Function Multicast Channel
Capacity of a Two-way Function Multicast Channel 1 Seiyun Shin, Student Member, IEEE and Changho Suh, Member, IEEE Abstract We explore the role of interaction for the problem of reliable computation over
More informationIowa State University. Instructor: Alex Roitershtein Summer Homework #5. Solutions
Math 50 Iowa State University Introduction to Real Analysis Department of Mathematics Instructor: Alex Roitershtein Summer 205 Homework #5 Solutions. Let α and c be real numbers, c > 0, and f is defined
More informationInformation Theory. Lecture 10. Network Information Theory (CT15); a focus on channel capacity results
Information Theory Lecture 10 Network Information Theory (CT15); a focus on channel capacity results The (two-user) multiple access channel (15.3) The (two-user) broadcast channel (15.6) The relay channel
More informationTopology. Xiaolong Han. Department of Mathematics, California State University, Northridge, CA 91330, USA address:
Topology Xiaolong Han Department of Mathematics, California State University, Northridge, CA 91330, USA E-mail address: Xiaolong.Han@csun.edu Remark. You are entitled to a reward of 1 point toward a homework
More informationSphere Packing and Shannon s Theorem
Chapter 2 Sphere Packing and Shannon s Theorem In the first section we discuss the basics of block coding on the m-ary symmetric channel. In the second section we see how the geometry of the codespace
More informationLecture 28: Generalized Minimum Distance Decoding
Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 007) Lecture 8: Generalized Minimum Distance Decoding November 5, 007 Lecturer: Atri Rudra Scribe: Sandipan Kundu & Atri Rudra 1
More informationLecture 3: Channel Capacity
Lecture 3: Channel Capacity 1 Definitions Channel capacity is a measure of maximum information per channel usage one can get through a channel. This one of the fundamental concepts in information theory.
More informationLecture 5 Channel Coding over Continuous Channels
Lecture 5 Channel Coding over Continuous Channels I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw November 14, 2014 1 / 34 I-Hsiang Wang NIT Lecture 5 From
More informationMATH 102 INTRODUCTION TO MATHEMATICAL ANALYSIS. 1. Some Fundamentals
MATH 02 INTRODUCTION TO MATHEMATICAL ANALYSIS Properties of Real Numbers Some Fundamentals The whole course will be based entirely on the study of sequence of numbers and functions defined on the real
More informationOn queueing in coded networks queue size follows degrees of freedom
On queueing in coded networks queue size follows degrees of freedom Jay Kumar Sundararajan, Devavrat Shah, Muriel Médard Laboratory for Information and Decision Systems, Massachusetts Institute of Technology,
More informationSPACES ENDOWED WITH A GRAPH AND APPLICATIONS. Mina Dinarvand. 1. Introduction
MATEMATIČKI VESNIK MATEMATIQKI VESNIK 69, 1 (2017), 23 38 March 2017 research paper originalni nauqni rad FIXED POINT RESULTS FOR (ϕ, ψ)-contractions IN METRIC SPACES ENDOWED WITH A GRAPH AND APPLICATIONS
More informationOn non-antipodal binary completely regular codes
On non-antipodal binary completely regular codes J. Borges, J. Rifà Department of Information and Communications Engineering, Universitat Autònoma de Barcelona, 08193-Bellaterra, Spain. V.A. Zinoviev Institute
More informationIN this paper, we consider the capacity of sticky channels, a
72 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 1, JANUARY 2008 Capacity Bounds for Sticky Channels Michael Mitzenmacher, Member, IEEE Abstract The capacity of sticky channels, a subclass of insertion
More informationMATH 291T CODING THEORY
California State University, Fresno MATH 291T CODING THEORY Fall 2011 Instructor : Stefaan Delcroix Contents 1 Introduction to Error-Correcting Codes 3 2 Basic Concepts and Properties 6 2.1 Definitions....................................
More informationOptimal File Sharing in Distributed Networks
Optimal File Sharing in Distributed Networks Moni Naor Ron M. Roth Abstract The following file distribution problem is considered: Given a network of processors represented by an undirected graph G = (V,
More informationAn Introduction to (Network) Coding Theory
An to (Network) Anna-Lena Horlemann-Trautmann University of St. Gallen, Switzerland April 24th, 2018 Outline 1 Reed-Solomon Codes 2 Network Gabidulin Codes 3 Summary and Outlook A little bit of history
More informationHybrid Noncoherent Network Coding
Hybrid Noncoherent Network Coding Vitaly Skachek, Olgica Milenkovic, Angelia Nedić University of Illinois, Urbana-Champaign 1308 W. Main Street, Urbana, IL 61801, USA Abstract We describe a novel extension
More informationAnalytical Performance of One-Step Majority Logic Decoding of Regular LDPC Codes
Analytical Performance of One-Step Majority Logic Decoding of Regular LDPC Codes Rathnakumar Radhakrishnan, Sundararajan Sankaranarayanan, and Bane Vasić Department of Electrical and Computer Engineering
More informationMAS309 Coding theory
MAS309 Coding theory Matthew Fayers January March 2008 This is a set of notes which is supposed to augment your own notes for the Coding Theory course They were written by Matthew Fayers, and very lightly
More information