Connecting Multiple-unicast and Network Error Correction: Reduction and Unachievability

Wentao Huang (California Institute of Technology), Michael Langberg (University at Buffalo, SUNY), Joerg Kliewer (New Jersey Institute of Technology)

arXiv:1504.04930v1 [cs.IT] 20 Apr 2015

Abstract: We show that solving a multiple-unicast network coding problem can be reduced to solving a single-unicast network error correction problem, where an adversary may jam at most a single edge in the network. Specifically, we present an efficient reduction that maps a multiple-unicast network coding instance to a network error correction instance while preserving feasibility. The reduction holds for both the zero probability of error model and the vanishing probability of error model. Previous reductions are restricted to the zero-error case. As an application of the reduction, we present a constructive example showing that the single-unicast network error correction capacity may not be achievable, a result of separate interest.

(This work has been supported in part by NSF grants CCF-1440014, CCF-1440001, CCF-1439465, and CCF-1321129.)

I. INTRODUCTION

Consider the problem in which a source wishes to reliably communicate to a terminal over a network with point-to-point noiseless channels, in the presence of an adversary. The adversary is characterized by a collection A of subsets of channels, so that it may choose an arbitrary A ∈ A and control the channels in the set A. Under the assumptions that 1) all channels have uniform capacity and 2) A is the collection of all subsets containing z channels, Yeung and Cai [1] show that the cut-set bound characterizes the network error correction capacity. Efficient capacity-achieving network error correction codes under this setting are proposed in [2]-[5]. In the settings where either the channel capacities are not uniform or A is arbitrary, determining the network capacity remains an open problem. For both cases it is shown that linear codes are not sufficient to achieve capacity [6], [7]. Capacity bounds and achievable strategies for network error correction with unequal channel capacities are studied in [6]. Achievable strategies for network error correction with non-uniform A, i.e., where A includes subsets of different sizes, are studied in [8]-[10].

The single-source single-terminal network error correction problem with arbitrary A is shown in a previous work of the authors [11] to be at least as hard as the multiple-unicast network coding problem (without adversarial errors), using the following reduction technique. For a general multiple-unicast network coding problem I, a corresponding network error correction problem I_c can be constructed, so that a rate is feasible with zero error in I if and only if a corresponding rate is feasible with zero error in I_c. Therefore, the problem of determining the zero-error feasibility of a rate in I is reduced to the problem of determining the zero-error feasibility of a rate in I_c. Under the model in which a vanishing probability of error is allowed, the connection between feasibility in I and in I_c is also studied in [11]. However, in this case the result therein is weaker and has a gap, so that the if-and-only-if connection between I and I_c is broken. Hence for this case the reduction from I to I_c is not established.

In this paper, we close this gap and complete the reduction from multiple-unicast to network error correction under the vanishing-error model. For a general multiple-unicast network coding problem I, we show that a corresponding network error correction problem I_c can be constructed, so that a rate is feasible with vanishing error in I if and only if a corresponding rate is feasible with vanishing error in I_c. We construct I_c in the same way as in [11].
However, compared to the (implicit) information-theoretic approach used in [11], we present a way to explicitly construct the network code for I from the network code for I_c. Furthermore, the new approach enables a stronger result, simplifies the proofs, and generalizes the result for the zero-error model as a special case.

As there are connections between I and I_c for both zero-error feasibility and vanishing-error feasibility, it is natural to ask if the connection extends to the case of asymptotic feasibility. We answer this question negatively by constructing a counter-example. By applying our analysis to this example, we further show that the (single-source single-terminal) network error correction capacity is not achievable in general, which is a result of separate interest. Previous works [12], [13] have studied the unachievability of capacity for multiple-unicast networks and sum-networks, respectively.

II. MODELS

A. Multiple-unicast Network Coding

A network is a directed graph G = (V, E), where vertices represent network nodes and edges represent channels. Each edge e ∈ E has a capacity c_e, which is the number of bits that can be transmitted on e in one transmission. An instance I = (G, S, T, B) of the multiple-unicast network coding problem includes a network G, a set of source nodes S ⊂ V, a set of terminal nodes T ⊂ V, and an |S| by |T| requirement matrix B. The (i, j)-th entry of B equals 1 if terminal j requires the information from source i and equals 0 otherwise. B is assumed to be a permutation matrix, so that each source is paired with a single terminal. Denote by s(t) the source required by terminal t, and denote [n] = {1, ..., n}. Each source s ∈ S is associated with an independent message, represented by a random variable M_s uniformly distributed over [2^{nR_s}].

A network code of length n is a set of encoding functions φ_e for every e ∈ E and a set of decoding functions φ_t for every t ∈ T. For each e = (u, v), the encoding function φ_e takes as input the signals received from the incoming edges of node u, as well as the random variable M_u if u ∈ S; φ_e evaluates to a value in [2^{n c_e}], which is the signal transmitted on e. For each t ∈ T, the decoding function φ_t maps the tuple of signals received from the incoming edges of t to an estimated message M̂_{s(t)} with values in [2^{n R_{s(t)}}].

A network code {φ_e, φ_t}_{e∈E, t∈T} is said to satisfy a terminal t under transmission (m_s, s ∈ S) if M̂_{s(t)} = m_{s(t)} when (M_s, s ∈ S) = (m_s, s ∈ S). A network code is said to satisfy the multiple-unicast network coding problem I with error probability ε if the probability that all t ∈ T are simultaneously satisfied is at least 1 − ε. The probability is taken over the joint distribution on (M_s, s ∈ S). Formally, the network code satisfies I with error probability ε if

Pr_{(M_s, s∈S)} { ⋂_{t∈T} { t is satisfied under (M_s, s ∈ S) } } ≥ 1 − ε.

For an instance I of the multiple-unicast network coding problem, rate R is said to be feasible if R_s = R for all s ∈ S and, for any ε > 0, there exists a network code that satisfies I with error probability at most ε. Rate R is said to be feasible with zero error if R_s = R for all s ∈ S and there exists a network code that satisfies I with zero error probability. Rate R is said to be asymptotically feasible if for any δ > 0, rate (1 − δ)R is feasible. The model assumes that all sources transmit information at equal rate. There is no loss of generality in this assumption, as a varying-rate source s can be modeled by several equal-rate sources co-located at s.

B. Single-source Single-terminal Network Error Correction

An instance I_c = (G, s, t, A) of the single-source single-terminal network error correction problem includes a network G, a source node s, a terminal node t, and a collection A ⊆ 2^E of subsets of channels susceptible to errors. An error occurs in a channel if the output of the channel is different from the input. More precisely, the output of a channel e is the bitwise xor of the input signal and an error signal r_e. We say there is an error in channel e if r_e is not the zero vector. For a subset A ∈ A of channels, an A-error is said to occur if an error occurs in every channel in A. Since there is only a single source and a single terminal, in this problem we suppress the subscripts and denote by M the random message of the source, by R the rate of M, and by M̂ the output of the decoder at t. Denote by R_A the set of error patterns r = {r_e}_{e∈E} that correspond to an A-error, for any A ∈ A.

A network code {φ_e, φ_t}_{e∈E} is said to satisfy I_c under transmission m if M̂ = m when M = m, regardless of the occurrence of any error pattern r ∈ R_A. A network code is said to satisfy problem I_c with error probability ε if the probability that I_c is satisfied is at least 1 − ε. The probability is taken over the distribution on the source message M. Note that our model targets the worst-case scenario, in the sense that if M = m is transmitted and m is satisfied by the network code, then correct decoding is guaranteed regardless of the error pattern.

For a single-source single-terminal network error correction problem I_c, rate R is said to be feasible if for any ε > 0 there exists a network code that satisfies I_c with error probability at most ε. Rate R is said to be feasible with zero error if there exists a network code that satisfies I_c with zero error probability. Rate R is said to be asymptotically feasible if for any δ > 0, rate (1 − δ)R is feasible. The capacity of I_c is the supremum over all rates that are asymptotically feasible.
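To make the worst-case requirement concrete, the following small Python sketch spells out the set R_A and the per-message satisfaction check for a toy setting in which edges are hashable identifiers and every signal is an n-bit integer. It only illustrates the definitions above; the function names and the stand-in run_code (which would simulate the whole network code on message m under error pattern r) are assumptions of this sketch rather than notation from the paper, and the error-free pattern is included in the enumeration for convenience.

from itertools import product

def error_patterns(edges, adversary_sets, n):
    # Enumerate R_A: for each A in the collection, every pattern r = {r_e} that
    # is nonzero on all channels of A and zero elsewhere.  The output of
    # channel e is (input XOR r_e), so r_e = 0 means "no error on e".
    yield {e: 0 for e in edges}                       # error-free operation
    for A in adversary_sets:
        A = list(A)
        for values in product(range(1, 2 ** n), repeat=len(A)):
            r = {e: 0 for e in edges}
            r.update(zip(A, values))
            yield r

def satisfies(m, run_code, edges, adversary_sets, n):
    # "Satisfy I_c under transmission m" in the worst-case sense: the terminal
    # must output m under every error pattern in R_A.
    return all(run_code(m, r) == m
               for r in error_patterns(edges, adversary_sets, n))

The messages that pass this check for a given code are exactly the ones collected into the set M_good in the next section, and a code satisfies I_c with error probability ε precisely when the uniformly drawn message lands in that set with probability at least 1 − ε.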

III. REDUCTION FROM MULTIPLE-UNICAST TO NETWORK ERROR CORRECTION

Fig. 1: In the single-unicast network error correction problem I_c, the source s wants to communicate with the terminal t. N is a general network with point-to-point noiseless channels. All edges outside N (i.e., edges for which at least one of the end-points does not belong to N) have unit capacity. There is at most one error in this network, and this error may occur at any edge except {a_i, b_i, 1 ≤ i ≤ k}. Namely, A includes all singleton sets of a single edge in the network except {a_i} and {b_i}, i = 1, ..., k. Note that there are k parallel branches in total but only the first and the k-th branches are drawn explicitly. The multiple-unicast network coding problem I is defined on the network N, where the k source-destination pairs are (s_i, t_i), i = 1, ..., k, and all channels are error-free.

The following theorem is the main result of this section.

Theorem 1. Given any multiple-unicast network coding problem I with source-destination pairs {(s_i, t_i), i = 1, ..., k}, a corresponding single-source single-terminal network error correction problem I_c = (G, s, t, A), in which A includes sets with at most a single edge, can be constructed as specified in Figure 1, such that rate k is feasible in I_c if and only if unit rate is feasible in I.

The backward direction of the theorem, i.e., that unit rate in I implies rate k in I_c, is simple and is proved in [11]. In the remainder of this section we prove the forward direction of Theorem 1, i.e., that the feasibility of rate k in I_c implies the feasibility of unit rate in I.

Suppose that in I_c a rate of k is achieved by a network code C = {φ_e, φ_t}_{e∈E} of length n, with probability of error ε. Recall that M is the source message, uniformly distributed over [2^{kn}], and M̂ is the output of the decoder at the terminal. Let M_good = {m ∈ [2^{kn}] : C satisfies I_c under transmission m} be the subset of messages that can be decoded correctly under any error pattern r ∈ R_A. Denote M_bad = [2^{kn}] \ M_good; then for any m ∈ M_bad there exists an error pattern r ∈ R_A such that M̂ ≠ m if M = m and r occurs, i.e., a decoding error occurs. Because C satisfies I_c with error probability ε, it follows that |M_bad| ≤ ε·2^{kn} and thus |M_good| ≥ (1 − ε)·2^{kn}.

We introduce some notation needed in the proof. In problem I_c, under the network code C, for i = 1, ..., k, let x_i(m, r) : [2^{kn}] × R_A → [2^n] be the signal received from channel x_i when M = m and the error pattern r occurs. Let r = 0 denote the case that no error has occurred in the network. Let x_i(m) = x_i(m, 0), x(m, r) = (x_1(m, r), ..., x_k(m, r)) and x(m) = (x_1(m), ..., x_k(m)). We define functions a_i, b_i, y_i, z_i, z'_i and a, b, y, z, z' for problem I_c in a similar way.

Notice that the set of edges a_1, ..., a_k forms a cut-set from s to t, and so does the set of edges b_1, ..., b_k. Therefore, for any m_1, m_2 ∈ M_good with m_1 ≠ m_2, it follows from the decodability constraint that a(m_1) ≠ a(m_2) and b(m_1) ≠ b(m_2). Setting B_good = {b(m) : m ∈ M_good}, it then follows from |M_good| ≥ (1 − ε)·2^{kn} that |B_good| ≥ (1 − ε)·2^{kn}. Setting B_err = [2^n]^k \ B_good, it follows that |B_err| ≤ ε·2^{kn}. We define A_good and A_err similarly, and so |A_good| ≥ (1 − ε)·2^{kn} and |A_err| ≤ ε·2^{kn}.

Let M(ẑ_i, b̂_i) = {m ∈ M_good : z_i(m) = ẑ_i, b_i(m) = b̂_i}. We define a function ψ_i : [2^n] → [2^n] as

ψ_i(ẑ_i) = b̂_{i,ẑ_i} := argmax_{b̂_i} |M(ẑ_i, b̂_i)|.    (1)

The function ψ_i will be useful later, when we design the network code for I. Intuitively, in the absence of adversarial errors, ψ_i estimates the signal transmitted on edge b_i given that the signal transmitted on edge z_i is ẑ_i. In the following we analyze how often ψ_i makes a mistake. Define M_ψ^i = {m ∈ M_good : ψ_i(z_i(m)) ≠ b_i(m)}. Notice that M_ψ^i is the set of messages for which, when they are transmitted by the source, ψ_i makes a mistake in guessing the signal transmitted on b_i. Lemma 1 shows that the size of this set is small.

Lemma 1. |M_ψ^i| ≤ 2ε·2^{kn}.

We make the following combinatorial observation before proving Lemma 1. Lemma 2 is a variation of [11, Lemma 4].

Lemma 2. Let M(ẑ_i) = {m ∈ M_good : z_i(m) = ẑ_i}. Then for any m_1, m_2 ∈ M(ẑ_i) such that b_i(m_1) ≠ b_i(m_2), there exists an element of B_err that will be decoded by terminal t to either m_1 or m_2.

Proof: Consider any m_1, m_2 ∈ M(ẑ_i) such that b_i(m_1) ≠ b_i(m_2). Let r_1 be the error pattern that changes the signal on x_i to x_i(m_2), and let r_2 be the error pattern that changes the signal on y_i to y_i(m_1). Then if m_1 is transmitted by the source and r_1 happens, node B_i will receive the same inputs (x_i(m_2), y_i(m_1), z_i(m_1) = z_i(m_2)) as in the situation where m_2 is transmitted and r_2 happens. Therefore b_i(m_1, r_1) = b_i(m_2, r_2), and so either b_i(m_1, r_1) ≠ b_i(m_1) or b_i(m_2, r_2) ≠ b_i(m_2), because by hypothesis b_i(m_1) ≠ b_i(m_2).

Consider the first case, that b_i(m_1, r_1) ≠ b_i(m_1). Then the tuple of signals (b_1(m_1, r_1), ..., b_k(m_1, r_1)) = (b_1(m_1), ..., b_i(m_1, r_1), ..., b_k(m_1)) will be decoded by the terminal to message m_1, because m_1 ∈ M_good is correctly decodable under any error pattern r ∈ R_A. Therefore this tuple of signals is an element of B_err, since it does not equal b(m_1) = (b_1(m_1), ..., b_k(m_1)) and it does not equal b(m) for any m ≠ m_1, m ∈ M_good, because otherwise it would be decoded by the terminal to m. Similarly, in the latter case that b_i(m_2, r_2) ≠ b_i(m_2), the tuple (b_1(m_2, r_2), ..., b_k(m_2, r_2)) = (b_1(m_2), ..., b_i(m_2, r_2), ..., b_k(m_2)) is an element of B_err and will be decoded by the terminal to m_2. Therefore in both cases we are able to find an element of B_err that will be decoded by the terminal to either m_1 or m_2.

Proof (of Lemma 1): We can partition M_ψ^i as

M_ψ^i = ⋃_{ẑ_i} ( M(ẑ_i) \ M(ẑ_i, b̂_{i,ẑ_i}) ),

and so

|M_ψ^i| = Σ_{ẑ_i} ( |M(ẑ_i)| − |M(ẑ_i, b̂_{i,ẑ_i})| ).    (2)

Consider an arbitrary ẑ_i and the set M(ẑ_i). We define an iterative procedure as follows. Initialize W := M(ẑ_i). If there exist two messages m_1, m_2 ∈ W such that b_i(m_1) ≠ b_i(m_2), then delete both m_1 and m_2 from W. Repeat the operation until there do not exist m_1, m_2 ∈ W such that b_i(m_1) ≠ b_i(m_2). After the procedure terminates, it follows that |W| ≤ |M(ẑ_i, b̂_{i,ẑ_i})|, since otherwise, by the definition of b̂_{i,ẑ_i}, there must exist m_1, m_2 ∈ W such that b_i(m_1) ≠ b_i(m_2). Therefore at least |M(ẑ_i)| − |M(ẑ_i, b̂_{i,ẑ_i})| elements are deleted from M(ẑ_i). By Lemma 2, each pair of deleted elements corresponds to an element of B_err. Also by Lemma 2, the elements of B_err corresponding to different deleted pairs are distinct. Summing over all possible values of ẑ_i, it follows that the total number of deleted pairs is at most the size of B_err:

(1/2) Σ_{ẑ_i} ( |M(ẑ_i)| − |M(ẑ_i, b̂_{i,ẑ_i})| ) ≤ Σ_{ẑ_i} (number of pairs deleted from M(ẑ_i)) ≤ |B_err| ≤ ε·2^{kn}.    (3)

Combining (2) and (3), we have |M_ψ^i| ≤ 2ε·2^{kn}.
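Equation (1) is just a plurality vote over the correctly decodable messages. The short sketch below is my own illustration (the helper name and the dictionary representation are assumptions, not the paper's notation): it builds ψ_i as a lookup table from the error-free pairs (z_i(m), b_i(m)) with m ranging over M_good, and the estimator π_i introduced next is built in exactly the same way from the pairs (b_i(m), a_i(m)).

from collections import Counter, defaultdict

def build_plurality_estimator(pairs):
    # pairs: iterable of (observed, target) values taken over m in M_good, e.g.
    # (z_i(m), b_i(m)) for psi_i in (1).  Returns the plurality rule mapping
    # each observed value to the target value it co-occurs with most often.
    counts = defaultdict(Counter)
    for observed, target in pairs:
        counts[observed][target] += 1
    return {obs: c.most_common(1)[0][0] for obs, c in counts.items()}

# e.g., assuming z_i and b_i are callables giving the error-free signals:
# psi_i = build_plurality_estimator((z_i(m), b_i(m)) for m in M_good)

Lemma 1 says that this plurality rule errs on at most a 2ε fraction of the message space, which is what makes it usable as a decoding step later on.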

Next, let M(â_i, b̂_i) = {m ∈ M_good : a_i(m) = â_i, b_i(m) = b̂_i}. We define a function π_i : [2^n] → [2^n] as

π_i(b̂_i) = â_{i,b̂_i} := argmax_{â_i} |M(â_i, b̂_i)|.    (4)

The function π_i will be useful later for designing the network code for I. Intuitively, in the absence of adversarial errors, π_i estimates the signal transmitted on edge a_i given that the signal transmitted on edge b_i is b̂_i. In the following we analyze how often π_i makes a mistake. Define M_π^i = {m ∈ M_good : π_i(b_i(m)) ≠ a_i(m)}. Notice that M_π^i is the set of messages for which, when they are transmitted by the source, π_i makes a mistake in guessing the signal transmitted on a_i. Lemma 3 shows that the size of this set is small.

Lemma 3. |M_π^i| ≤ 3ε·2^{kn}.

We make the following combinatorial observation before proving Lemma 3. Lemma 4 is a variation of [11, Lemma 3].

Lemma 4. Define M(â_i) = {m ∈ M_good : a_i(m) = â_i}. If |{b_i(m) : m ∈ M(â_i)}| = L, then there exist (L − 1)|M(â_i)| distinct elements of B_err such that each of them will be decoded by terminal t to some message m ∈ M(â_i).

Proof: Assume for concreteness that {b_i(m) : m ∈ M(â_i)} = {b̂_i^(1), ..., b̂_i^(L)}; then there exist L messages m_1, ..., m_L ∈ M(â_i) such that b_i(m_j) = b̂_i^(j), j = 1, ..., L. For j = 1, ..., L, let r_j be the error pattern that changes the signal on z_i to z_i(m_j). Then if a message m_0 ∈ M(â_i) is transmitted by the source and r_j happens, node B_i will receive the same inputs (x_i(m_0), y_i(m_0), z_i(m_j)) as in the situation where m_j is sent and no error happens. Therefore b_i(m_0, r_j) = b̂_i^(j), and so |{b(m_0, r_j)}_{j∈[L]}| = |{b_i(m_0, r_j)}_{j∈[L]}| = L. Since m_0 ∈ M_good, it is correctly decodable under any error pattern r ∈ R_A, and so all elements of {b(m_0, r_j)}_{j∈[L]} will be decoded by the terminal to m_0. Except for the element b(m_0), the other L − 1 elements of {b(m_0, r_j)}_{j∈[L]} are elements of B_err. Summing over all m_0 ∈ M(â_i), the assertion is proved.

Proof (of Lemma 3): Define A_{π,1}^i = {â_i ∈ [2^n] : |M(â_i)| ≤ (1/2)·2^{(k−1)n}} and A_{π,2}^i = {â_i ∈ [2^n] \ A_{π,1}^i : |{b_i(m) : m ∈ M(â_i)}| > 1}. Then define M_{π,1}^i = {m ∈ M_good : a_i(m) ∈ A_{π,1}^i} and M_{π,2}^i = {m ∈ M_good : a_i(m) ∈ A_{π,2}^i}. Notice that by construction A_{π,1}^i and A_{π,2}^i are disjoint, and M_{π,1}^i and M_{π,2}^i are disjoint. We claim that

M_π^i ⊆ M_{π,1}^i ∪ M_{π,2}^i.    (5)

To prove the claim, consider any m ∈ M_good such that m ∉ M_{π,1}^i ∪ M_{π,2}^i. We will show that π_i(b_i(m)) = a_i(m). Suppose for the sake of contradiction that π_i(b_i(m)) = â_i ≠ a_i(m); then it follows that

|M(â_i, b_i(m))| ≥ |M(a_i(m), b_i(m))| = |M(a_i(m))| > (1/2)·2^{(k−1)n},    (6)

where the inequality on the left is due to the definition of π_i, the equality is due to the fact that m ∉ M_{π,2}^i, and the inequality on the right is due to the fact that m ∉ M_{π,1}^i. Let M(b̂_i) = {m' ∈ M_good : b_i(m') = b̂_i}; then M(â_i, b_i(m)) ∪ M(a_i(m)) ⊆ M(b_i(m)). Since â_i ≠ a_i(m), the sets M(â_i, b_i(m)) and M(a_i(m)) are disjoint, and it follows that |M(b_i(m))| ≥ |M(â_i, b_i(m))| + |M(a_i(m))| > 2^{(k−1)n}. However, because |{(b̂_1, ..., b̂_k) ∈ [2^n]^k : b̂_i = b_i(m)}| = 2^{(k−1)n}, by the pigeonhole principle there must exist two messages m_1, m_2 ∈ M(b_i(m)) such that b(m_1) = b(m_2). This is a contradiction, since the terminal cannot distinguish m_1 from m_2. This proves π_i(b_i(m)) = a_i(m), as well as (5).

We next bound the sizes of M_{π,1}^i and M_{π,2}^i. For any â_i ∈ A_{π,1}^i, by definition the set {(â_1, ..., â_k) ∈ [2^n]^k : the i-th coordinate equals â_i} \ {a(m) : m ∈ M(â_i)} is a subset of A_err with size at least (1/2)·2^{(k−1)n}. Therefore each element of A_{π,1}^i contributes at least (1/2)·2^{(k−1)n} distinct elements of A_err. Hence |A_{π,1}^i|·(1/2)·2^{(k−1)n} ≤ |A_err| ≤ ε·2^{kn}, and so |A_{π,1}^i| ≤ 2ε·2^n. It then follows that |M_{π,1}^i| ≤ (1/2)·2^{(k−1)n}·|A_{π,1}^i| ≤ ε·2^{kn}. By Lemma 4, each element of A_{π,2}^i contributes at least (1/2)·2^{(k−1)n} distinct elements of B_err. Therefore |A_{π,2}^i|·(1/2)·2^{(k−1)n} ≤ |B_err| ≤ ε·2^{kn}, and so |A_{π,2}^i| ≤ 2ε·2^n. It then follows that |M_{π,2}^i| ≤ 2^{(k−1)n}·|A_{π,2}^i| ≤ 2ε·2^{kn}. Finally, by (5) we have |M_π^i| ≤ |M_{π,1}^i| + |M_{π,2}^i| ≤ 3ε·2^{kn}.

We are now ready to prove Theorem 1.

Proof (forward direction of Theorem 1): We show that the feasibility of rate k in I_c implies the feasibility of unit rate in I. Let {φ_e, φ_t}_{e∈E} be the network error correction code of length n that achieves rate k in I_c, with probability of error ε. We assume that in this code edge z'_i simply relays the signal from edge a_i. This is without loss of generality, because any network code that needs to process the signal on edge a_i to obtain the signal to be transmitted on edge z'_i is equivalent to one that relays the signal on edge z'_i and performs the processing at the head node of edge z'_i.

Let E_N ⊆ E be the set of edges of the embedded graph N. For the multiple-unicast problem I, we define a length-n network code {τ_e, τ_{t_i} : e ∈ E_N, i ∈ [k]} as follows:

τ_e = φ_e,  e ∈ E_N,
τ_{t_i} = φ_{z'_i} ∘ π_i ∘ ψ_i ∘ φ_{z_i},  i = 1, ..., k,

where ∘ denotes function composition, φ_{z_i} and φ_{z'_i} are the encoding functions of edges z_i and z'_i in problem I_c, ψ_i is defined in (1), and π_i is defined in (4). In the following we show that {τ_e, τ_{t_i} : e ∈ E_N, i ∈ [k]} achieves unit rate in I with probability of error upper bounded by 6kε.

In problem I, let M_i be the random message associated with source s_i; then M_i, i = 1, ..., k, are i.i.d. uniformly distributed over [2^n]. Denote for short M = (M_1, ..., M_k); then, slightly abusing notation, we denote by τ_{t_i}(M) the output of the decoder τ_{t_i} under transmission M. The probability of decoding error is given by

Pr{ ⋃_{i=1}^{k} { τ_{t_i}(M) ≠ M_i } },

where the probability is taken over the joint distribution of the random messages. Let m = (m_1, ..., m_k) be the realization of M. We claim that if there exists a message m̄ of problem I_c (not to be confused with m, a message of I) such that m̄ ∈ M_good, m̄ ∉ M_ψ^i, m̄ ∉ M_π^i and m = z'(m̄), then τ_{t_i}(m) = m_i.

To prove the claim, suppose m = z'(m̄) is transmitted in I. Notice that all edges in N perform the same coding scheme in I as in I_c; therefore terminal node t_i, by invoking the function φ_{z_i}, obtains z_i(m̄). Then by the definition of M_ψ^i, it follows that ψ_i(z_i(m̄)) = b_i(m̄). And by the definition of M_π^i, it follows that π_i(ψ_i(z_i(m̄))) = a_i(m̄). Finally, since m = z'(m̄), it follows that φ_{z'_i}(π_i(ψ_i(z_i(m̄)))) = φ_{z'_i}(a_i(m̄)) = z'_i(m̄) = m_i. Therefore τ_{t_i}(m) = m_i if m ∈ {z'(m̄) ∈ [2^n]^k : m̄ ∈ M_good, m̄ ∉ M_ψ^i, m̄ ∉ M_π^i}.

The probability that τ_{t_i} makes an error, i.e., Pr{τ_{t_i}(M) ≠ M_i}, is therefore upper bounded by the probability of the union of the following three events:

E_1 = {M = m : m ∉ {z'(m̄) ∈ [2^n]^k : m̄ ∈ M_good}},
E_2 = {M = m : m ∈ {z'(m̄) ∈ [2^n]^k : m̄ ∈ M_ψ^i}},
E_3 = {M = m : m ∈ {z'(m̄) ∈ [2^n]^k : m̄ ∈ M_π^i}}.

We upper bound the probabilities of E_1, E_2 and E_3 in turn:

Pr{E_1} = 1 − |{z'(m̄) : m̄ ∈ M_good}| / 2^{kn} = 1 − |M_good| / 2^{kn} ≤ 1 − (1 − ε) = ε,    (7)

where the second equality follows from the fact that z'(m̄) = a(m̄) ≠ a(m̄') = z'(m̄') for any m̄, m̄' ∈ M_good with m̄ ≠ m̄'. By Lemma 1,

Pr{E_2} = |M_ψ^i| / 2^{kn} ≤ 2ε.    (8)

And by Lemma 3,

Pr{E_3} = |M_π^i| / 2^{kn} ≤ 3ε.    (9)

Combining (7), (8) and (9), it follows that Pr{τ_{t_i}(M) ≠ M_i} ≤ Pr{E_1} + Pr{E_2} + Pr{E_3} ≤ 6ε. Finally, by taking the union bound over the k terminals,

Pr{ ⋃_{i=1}^{k} { τ_{t_i}(M) ≠ M_i } } ≤ 6kε.

Hence the probability of error is arbitrarily small, and this establishes the feasibility of unit rate in I.
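For concreteness, the following minimal sketch shows how the new terminal decoders of the proof compose the pieces; all argument names are stand-ins of this sketch (the estimators can be the lookup tables of the earlier sketch wrapped as functions), and it assumes the relay convention for z'_i stated above.

def make_tau(phi_z_i, psi_i, pi_i, phi_zprime_i):
    # tau_{t_i} = phi_{z'_i} o pi_i o psi_i o phi_{z_i} from the proof of Theorem 1.
    #   phi_z_i      : encoding function of edge z_i in I_c, applied by terminal
    #                  t_i to the signals on its incoming edges inside N
    #   psi_i        : estimator (1), z_i-signal -> likely b_i-signal
    #   pi_i         : estimator (4), b_i-signal -> likely a_i-signal
    #   phi_zprime_i : encoding function of edge z'_i; since z'_i relays a_i,
    #                  this recovers the message piece m_i
    def tau_t_i(incoming_signals):
        z_hat = phi_z_i(incoming_signals)   # the signal t_i would place on z_i
        b_hat = psi_i(z_hat)                # plurality guess for the signal on b_i
        a_hat = pi_i(b_hat)                 # plurality guess for the signal on a_i
        return phi_zprime_i(a_hat)          # equals m_i whenever both guesses are right
    return tau_t_i

The encoders of the edges inside N are reused unchanged (τ_e = φ_e), so the terminals are the only new components, and the three events E_1, E_2, E_3 above account for the three ways this chain can fail.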

The proof above shows that the feasibility of rate k with error probability ε in I_c implies the feasibility of unit rate with error probability 6kε in I. By setting ε = 0, we recover the zero-error result of [11] as a special case.

Corollary 1. Given any multiple-unicast network coding problem I with source-destination pairs {(s_i, t_i), i = 1, ..., k}, a corresponding single-source single-terminal network error correction problem I_c = (G, s, t, A), in which A includes sets with at most a single edge, can be constructed as specified in Figure 1, such that rate k is feasible with zero error in I_c if and only if unit rate is feasible with zero error in I.

IV. UNACHIEVABILITY OF NETWORK ERROR CORRECTION CAPACITY

Theorem 1 and Corollary 1 show that there are strong if-and-only-if connections between I_c and I for both zero-error feasibility and vanishing-error feasibility. It is natural to ask whether this connection extends to the case of asymptotic feasibility, i.e., whether it is true that rate k is asymptotically feasible in I_c if and only if unit rate is asymptotically feasible in I. We answer this question negatively by constructing a counter-example.

Fig. 2: Construction of I_c and I. In I_c, the source is s and the terminal is t. A includes all singleton sets of a single edge except {a_i} and {b_i}, i = 1, ..., k. In I, the source-destination pairs are (s_i, t_i), i = 1, ..., k. All edges have unit capacity.

Theorem 2. There exists a multiple-unicast network coding problem I and a corresponding single-source single-terminal network error correction problem I_c, constructed from I as specified in Figure 2, such that rate k is asymptotically feasible in I_c, but unit rate is not asymptotically feasible in I.

Proof: The constructions of I and the corresponding I_c are shown in Figure 2. By the cut-set bounds, any rate larger than 1/k is not feasible in I. This shows the second statement of the theorem.

We prove the first statement of the theorem by describing a network code of length n that achieves rate k − k/n in I_c. First, divide the source message of rate k − k/n into k pieces M = (M_1, ..., M_k), such that M_i, i = 1, ..., k, are i.i.d. uniformly distributed over [2^{n−1}]. We denote by φ_e : [2^{k(n−1)}] → [2^n] the encoding function of edge e (this is called the global encoding function in the context of network coding), which takes the source message M as input and outputs the signal to be transmitted on e when there is no error in the network. For all i = 1, ..., k, we let

φ_{a_i}(M) = φ_{x_i}(M) = φ_{y_i}(M) = φ_{z'_i}(M) = φ_{(s_i,C)}(M) = M_i.

Furthermore, we let

φ_{(C,D)}(M) = φ_{(D,t_i)}(M) = φ_{z_i}(M) = M_1 ⊕ ... ⊕ M_k,  i = 1, ..., k,

where ⊕ denotes bitwise xor. Note that the edges a_i, x_i, y_i, z'_i, (s_i, C), (C, D), (D, t_i), z_i each have the capacity to transmit n bits, but we only require each of them to transmit n − 1 bits. Hence each edge reserves one unused bit.

Node B_i, by observing the (possibly corrupted) signals received from edges x_i, y_i, z_i, performs error correction in the following way. If the signal (of n − 1 bits) received from x_i equals the signal received from y_i, it forwards that signal to edge b_i and then transmits a 0 using the reserved bit. Otherwise, it forwards the signal received from z_i to b_i and then transmits a 1 using the reserved bit.

Finally, terminal t recovers the source message in the following way. For each i = 1, ..., k such that the reserved bit on b_i equals 0, decode the remaining n − 1 bits received from b_i as M̂_i. If M̂_1, ..., M̂_k are all obtained in this way, then decoding is complete. If two or more of the pieces M̂_i are not obtained, then a decoding failure is declared. Otherwise, let M̂_l be the unique piece that is not obtained; then subtract ⊕_{j≠l} M̂_j from the signal received from b_l, and decode the result as M̂_l.

It remains to be shown that M̂_i = M_i, i = 1, ..., k, regardless of the error pattern, and that a decoding failure will never be declared. Notice that the reserved bit on b_i equals 1 only if an error occurs on either x_i or y_i. Since there is at most a single edge in error, there is at most one b_i with reserved bit 1. Therefore the decoder is able to obtain at least k − 1 of the pieces M̂_1, ..., M̂_k during the first phase of decoding and will never declare a failure. Next, notice that if the reserved bit on b_i is 0, then M̂_i ≠ M_i only if errors occur on both x_i and y_i. This is not possible by hypothesis, and therefore M̂_i = M_i if the reserved bit on b_i is 0. Finally, suppose the reserved bit on b_l is 1; then an error must occur on either x_l or y_l, and so z_l is not in error. Therefore M̂_l = (⊕_{j=1}^{k} M_j) ⊕ (⊕_{j≠l} M̂_j) = M_l. This proves the correctness of decoding and the first statement of the theorem.
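The per-branch repair at node B_i and the peeling step at terminal t are simple enough to write out. The sketch below is my own rendering of the scheme in the proof, under the assumptions that signals are (n − 1)-bit integers and that the flag stands for the reserved bit; it is an illustration, not code from the paper.

from functools import reduce

def node_B(x_sig, y_sig, z_sig):
    # Inputs: the signals on x_i and y_i (both carrying M_i) and on z_i
    # (carrying the XOR of all k pieces).  Output: (reserved bit, value) on b_i.
    if x_sig == y_sig:
        return 0, x_sig          # copies agree: forward M_i with flag 0
    return 1, z_sig              # a copy was jammed: forward the XOR with flag 1

def terminal_decode(b_signals):
    # b_signals[i] = (flag_i, value_i) received on edge b_i.  With at most one
    # jammed edge, at most one flag equals 1, so at most one piece is missing
    # and it can be recovered by XOR-ing out the k-1 directly decoded pieces.
    k = len(b_signals)
    est = [value if flag == 0 else None for flag, value in b_signals]
    missing = [i for i, piece in enumerate(est) if piece is None]
    if len(missing) > 1:
        raise ValueError("decoding failure")   # cannot happen with a single error
    if missing:
        l = missing[0]
        xor_of_all = b_signals[l][1]           # the forwarded XOR of all pieces
        xor_of_rest = reduce(lambda a, b: a ^ b,
                             (est[j] for j in range(k) if j != l), 0)
        est[l] = xor_of_all ^ xor_of_rest      # peel off the known pieces
    return est

For example, with k = 3 pieces (5, 2, 7) and a jammed copy on x_2, branch 2 forwards 5 ⊕ 2 ⊕ 7 = 0 with flag 1, and terminal_decode recovers M_2 = 0 ⊕ 5 ⊕ 7 = 2.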
By applying the reduction result of Theorem 1 to the example constructed in Theorem 2, it follows that the same example shows the unachievability of the single-unicast network error correction capacity.

Corollary 2. There exists a single-source single-terminal network error correction problem whose capacity is not feasible.

Proof: The construction of the network error correction problem I_c is shown in Figure 2. By the cut-set bounds, the capacity of I_c is upper bounded by k. By Theorem 2, rate k is asymptotically feasible in I_c, and so the capacity of I_c is k. Also by Theorem 2, unit rate is not feasible in I, and so by Theorem 1, rate k is not feasible in I_c. This shows that the capacity of I_c is not feasible.

Corollary 2 shows that although the network error correction capacity is (by definition) asymptotically feasible, in general it may not be exactly feasible. This is in contrast to the scenario of network error correction with uniform A, i.e., where A is the collection of all subsets containing z channels; in that case the network capacity can be achieved by linear codes. Unachievability of capacity has also been studied for multiple-unicast networks [12] and sum-networks [13]. For both cases, example networks for which the capacity is not achievable are constructed using matroid theory.

V. CONCLUDING REMARKS

We show that determining the feasibility of a rate tuple in a multiple-unicast network coding problem can be efficiently reduced to determining the feasibility of a corresponding rate in a corresponding single-unicast network error correction problem, where an adversary may jam at most a single edge. Note that although our analysis assumes that all source-destination pairs in the multiple-unicast problem transmit at equal rate, this restriction can be relaxed by modeling a varying-rate source s as several equal-rate sources co-located at s. Finally, we apply the reduction to show the unachievability of the single-unicast network error correction capacity.

We note that our results do not imply that finding the capacity of a multiple-unicast network coding problem can be reduced to finding the capacity of a single-unicast network error correction problem. Whether it is possible to construct such a reduction would be an interesting open problem.

REFERENCES

[1] R. W. Yeung and N. Cai, "Network Error Correction, I: Basic Concepts and Upper Bounds," Communications in Information & Systems, vol. 6, no. 1, pp. 19-35, 2006.
[2] Z. Zhang, "Linear network error correction codes in packet networks," IEEE Trans. Info. Theory, vol. 54, no. 1, pp. 209-218, Jan. 2008.
[3] S. Jaggi, M. Langberg, S. Katti, T. Ho, D. Katabi, M. Medard, and M. Effros, "Resilient Network Coding in the Presence of Byzantine Adversaries," IEEE Trans. Info. Theory, vol. 54, no. 6, pp. 2596-2603, 2008.
[4] D. Silva, F. R. Kschischang, and R. Koetter, "A Rank-Metric Approach to Error Control in Random Network Coding," IEEE Trans. Info. Theory, vol. 54, no. 9, pp. 3951-3967, 2008.
[5] R. Koetter and F. R. Kschischang, "Coding for Errors and Erasures in Random Network Coding," IEEE Trans. Info. Theory, vol. 54, no. 8, pp. 3579-3591, 2008.
[6] S. Kim, T. Ho, M. Effros, and A. S. Avestimehr, "Network Error Correction With Unequal Link Capacities," IEEE Trans. Info. Theory, vol. 57, no. 2, pp. 1144-1164, Feb. 2011.
[7] O. Kosut, L. Tong, and D. Tse, "Nonlinear Network Coding is Necessary to Combat General Byzantine Attacks," in Allerton Conference on Communication, Control, and Computing, 2009, pp. 593-599.
[8] O. Kosut, L. Tong, and D. N. C. Tse, "Polytope codes against adversaries in networks," in IEEE ISIT, 2010, pp. 2423-2427.
[9] D. Wang, D. Silva, and F. R. Kschischang, "Robust Network Coding in the Presence of Untrusted Nodes," IEEE Trans. Info. Theory, vol. 56, no. 9, pp. 4532-4538, 2010.
[10] P. H. Che, M. Chen, T. Ho, S. Jaggi, and M. Langberg, "Routing for Security in Networks with Adversarial Nodes," in IEEE NetCod, 2013.
[11] W. Huang, T. Ho, M. Langberg, and J. Kliewer, "Single-source/sink network error correction is as hard as multiple-unicast," in Allerton Conference on Communication, Control, and Computing, 2014.
[12] R. Dougherty, C. Freiling, and K. Zeger, "Unachievability of network coding capacity," IEEE Trans. Info. Theory, vol. 52, no. 6, pp. 2365-2372, Jun. 2006.
[13] B. K. Rai and B. K. Dey, "On Network Coding for Sum-Networks," IEEE Trans. Info. Theory, vol. 58, no. 1, pp. 50-63, Jan. 2012.