
Overcoming Delay, Synchronization and Cyclic Paths
Meir Feder, Tel Aviv University
Joint work with Elona Erez

Root of the problem
Much of network coding assumes instantaneous coding. Instantaneous coding cannot work with cycles. Node delay, which may be beneficial for cycles, introduces a synchronization problem in code implementation. How to deal with node delay in the case of a long input sequence? What about decoding delay? Solution: convolutional codes.

Motivating Example
[Figure: a network with source S, input streams x_1(n) and x_2(n), links carrying x_1(n), x_2(n) and combinations such as x_1(n) + x_2(n), and sinks t_1 through t_6.]
At n = 4 sink t_1 receives x_1(0) on both of its incoming links. At time instant n = 5, t_1 receives x_1(1) and x_1(1) + x_2(0). The effective decoding delay is 5. Only a single memory element is required.

Convolutional Network Codes - Definition
Let F(D) be the ring of polynomials over the binary field. Link e is associated with b(e), whose elements are in F(D).
[Figure: an example network with source S and sinks t_1 through t_6; the edges are labeled with local coding coefficients such as 0 and D.]

The input stream x_i(n) can be represented by a power series:
X_i(D) = \sum_{n=0}^{\infty} x_i(n) D^n,   i = 1, ..., h
In linear convolutional network codes:
Y_e(D) = \sum_{n=0}^{\infty} y_e(n) D^n = \sum_{e' \in \Gamma_I(v)} m_e(e') Y_{e'}(D) = b(e)^T X(D)
where y_e(n) are the symbols transmitted on the link e. To achieve rate h, the global coding vectors on the incoming links to t have to span F[D]^h, where F[D] is the field of rational functions over the binary field.
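To make the encoding rule above concrete, here is a minimal sketch in Python (the streams and the local coefficients m1, m2 are hypothetical, chosen only for illustration) that forms Y_e(D) as a binary combination of two input power series:

```python
# Sketch: Y_e(D) = m1*X_1(D) + m2*X_2(D) over GF(2), with each input stream
# written as a power series X_i(D) = sum_n x_i(n) D^n.
from sympy import symbols, Poly

D = symbols('D')

def to_series(samples):
    """Finite stream x_i(0), x_i(1), ... -> polynomial sum_n x_i(n) D^n."""
    return sum(x * D**n for n, x in enumerate(samples))

X1 = to_series([1, 0, 1, 1])   # hypothetical x_1(n)
X2 = to_series([0, 1, 1, 0])   # hypothetical x_2(n)

m1, m2 = 1 + D, D              # hypothetical local coding coefficients in F(D)

Ye = Poly(m1 * X1 + m2 * X2, D, modulus=2)   # reduce coefficients modulo 2
print(Ye.all_coeffs()[::-1])   # transmitted symbols y_e(0), y_e(1), ...
```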

Dealing with Cycles
Previous Results
Precoding
Code Construction
Algorithm Complexity
Decoding Delay

Previous Results
Ahlswede et al. (00): the cyclic network was unrolled into an acyclic layered network. The resulting scheme is time-variant, and requires complex encoding/decoding and a large delay.
Koetter and Médard (02): if each edge has delay, then there exists a time-invariant linear code with optimal rate.
Li et al. (03): a heuristic code construction for a linear time-invariant code.

Line Graph
Originally the network is modeled as a directed graph G = (V, E). L(V', E') is the line graph with:
Vertex set: V' = E ∪ {s} ∪ T
Edge set: E' = {(e, e') ∈ E^2 : head(e) = tail(e')} ∪ {(s, e) : e outgoing from s} ∪ {(e, t_i) : e incoming to t_i, 1 ≤ i ≤ d}
If there are h edge-disjoint paths between s and t in G, there are h corresponding node-disjoint paths in L.
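As an illustration of this construction, a short sketch (the helper and the example network below are made up for this note, not taken from the talk):

```python
# Build the line graph L(V', E') of a directed graph G: the nodes of L are the
# edges of G plus the source s and the sinks; (e, e') is an edge of L whenever
# head(e) = tail(e').
def line_graph(edges, s, sinks):
    """edges: list of directed edges (u, v) of G. Returns the adjacency of L."""
    L = {e: [] for e in edges}
    L[s] = [e for e in edges if e[0] == s]        # (s, e) for e outgoing from s
    for t in sinks:
        L[t] = []
    for e in edges:
        for f in edges:
            if e[1] == f[0]:                      # head(e) = tail(f)
                L[e].append(f)
        if e[1] in sinks:                         # (e, t_i) for e incoming to t_i
            L[e].append(e[1])
    return L

# Hypothetical example with source 's' and two sinks.
G_edges = [('s', 'a'), ('s', 'b'), ('a', 'c'), ('b', 'c'), ('c', 'd'),
           ('a', 't1'), ('b', 't2'), ('d', 't1'), ('d', 't2')]
L = line_graph(G_edges, 's', ['t1', 't2'])
```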

Recall the following:
h: the minimal min-cut between s and any of the sinks T = {t_1, ..., t_d}
F(D): the ring of polynomials with binary coefficients
F[D]: the field of rational functions over the binary field
v(e): a global coding vector (whose components may be in F[D]) assigned to node e ∈ L
The code can be used for multicast if and only if for all t ∈ T, the global coding vectors incoming into t span F[D]^h.

Precoding
We find a set of nodes ED in L such that if we eliminate them, there will be no directed cycles. To ensure that each cycle will contain at least a single delay, the coding coefficients for this set are restricted to be polynomials with D as a common factor. To maintain the same number of possible coding coefficients, if we choose polynomials with maximal degree M, then for e ∈ ED the maximal polynomial degree is M + 1.

In order to minimize the delay, it is desirable to minimize |ED|. Finding the minimal ED is the long-standing problem of finding the minimum feedback arc set, which is NP-hard. The best known approximation algorithm with polynomial complexity achieves performance ratio O(log |V| log log |V|). For our purposes, any approximate solution can be used to insert enough delays into the cycles.
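For illustration, here is a rough sketch of one simple way to pick such a set (a greedy depth-first heuristic of my own, not the approximation algorithm cited above, and with no optimality guarantee): put the target of every back edge found by a DFS into ED, which leaves the remaining graph acyclic.

```python
import sys

def greedy_cycle_breaking_set(adj):
    """adj: dict node -> list of successors in the line graph L.
    Returns a node set ED whose removal leaves the graph acyclic."""
    sys.setrecursionlimit(10000)
    ED, color = set(), {}              # 0 = unvisited, 1 = on stack, 2 = finished

    def dfs(u):
        color[u] = 1
        for v in adj.get(u, []):
            if v in ED:
                continue               # nodes already in ED are treated as removed
            c = color.get(v, 0)
            if c == 1:                 # back edge: v closes a directed cycle
                ED.add(v)
            elif c == 0:
                dfs(v)
        color[u] = 2

    for u in list(adj):
        if color.get(u, 0) == 0 and u not in ED:
            dfs(u)
    return ED
```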

Code Construction
The code construction goes in steps over the terminals:
Let L_l be the sub-graph consisting only of the nodes that participate in the flow from s to t_l. L_l is acyclic.
Go over the nodes e ∈ L_l in a topological order.
Maintain a list of h nodes C_l = {e_{1,l}, ..., e_{i,l}, ..., e_{h,l}}, each belonging to a different path.

Some definitions:
P_{j,l}: the j-th path of the flow from s to t_l.
p_{j,l} ⊆ P_{j,l}: the set of nodes following e_{j,l} ∈ C_l (not including e_{j,l}).
c_{j,l}: the set of coding coefficients of edges with tail in p_{j,l} and head in L \ p_{j,l}.
r_l: the union of these sets of coefficients, r_l = ∪_{1 ≤ j ≤ h} c_{j,l}.

[Figure: the flow from s to t_l, showing the nodes e_{1,l}, ..., e_{j,l}, ..., e_{h,l} of C_l and their successors e^n_{j,l}, the edges in p_{j,l}, the edges outgoing from p_{j,l}, and m = 0 for the edges in r_l.]

The partial coding vector u(e)
u(e) is defined for all e ∈ C_l as the global coding vector of e when all the coefficients in r_l are set to zero.
V_l = {v(e) : e ∈ C_l}, U_l = {u(e) : e ∈ C_l}.
Requiring V_l to span F[D]^h is not sufficient. Requiring U_l to span F[D]^h is sufficient.
At the end of step l, V_l = U_l.
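The spanning requirements above can be checked as determinant tests over F[D]. A minimal sketch (a 2x2 example of my own, with binary coefficients):

```python
# Two coding vectors in F[D]^2 span F[D]^2 iff the matrix with these vectors as
# columns has a nonzero determinant; here the determinant is reduced modulo 2.
from sympy import symbols, Matrix, Poly

D = symbols('D')
V = Matrix([[1, D],
            [D, 1 + D]])                 # columns: two hypothetical coding vectors
det_mod2 = Poly(V.det(), D, modulus=2)   # det = 1 + D + D^2 over GF(2)
print(det_mod2, not det_mod2.is_zero)    # nonzero -> the vectors form a basis
```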

Requiring V_l to span F[D]^h is not sufficient:
[Figure: a counterexample, with source s, sink t_l, the nodes e_{1,l}, e_{2,l} and their successors e^n_{1,l}, e^n_{2,l}, labeled with the coding vectors v(e_{1,l}), v(e_{2,l}), v(e^n_{1,l}), v(e^n_{2,l}) and the coefficients m(e_{1,l}, e^n_{1,l}), m(e_{2,l}, e^n_{2,l}).]

The dashed edges are edges in L_k for some k < l. The current nodes in C_l are e_{1,l} and e_{2,l}. We have reached e^n_{2,l} in the topological order. The previous value m(e_{2,l}, e^n_{2,l}) = 0 remains, since v(e^n_{2,l}) and v(e_{1,l}) are already a basis. Next we reach e^n_{1,l} and we need to determine m(e_{1,l}, e^n_{1,l}). But for any value of m(e_{1,l}, e^n_{1,l}), we have v(e^n_{1,l}) = v(e^n_{2,l}) and the new set of vectors cannot be a basis!

Returning to the algorithm...
The algorithm has reached node e_{i,l} and wishes to continue to e^n_{i,l}, the following node in P_{i,l}.
So far, the set U_l = {u(e) : e ∈ C_l} is a basis.
A new list is generated: C^n_l = (C_l ∪ {e^n_{i,l}}) \ {e_{i,l}}.
There is a new set of partial coding vectors: U^n_l = {u^n(e_{1,l}), ..., u^n(e^n_{i,l}), ..., u^n(e_{h,l})}.

The algorithm determines a coding coefficient m(e_{i,l}, e^n_{i,l}) between node e_{i,l} and e^n_{i,l} so that U^n_l will be a basis.
Let m'(e_{i,l}, e^n_{i,l}) be the coding coefficient between e_{i,l} and e^n_{i,l} before this stage of the algorithm.
If U^n_l is a basis with m'(e_{i,l}, e^n_{i,l}) - done! Otherwise we have the following theorem:
Theorem 1. Suppose that with m'(e_{i,l}, e^n_{i,l}) the set U^n_l is not a basis. Then with any other value m(e_{i,l}, e^n_{i,l}) the set U^n_l will be a basis.

But what about the previous sinks?
Changing m'(e_{i,l}, e^n_{i,l}) changes the coding vectors incoming at the previous sinks. They may not be a basis anymore!
Theorem 2. Let C_k be the set of nodes incoming into the sink t_k, k < l. Denote by V'_k = {v'(e_{1,k}), ..., v'(e_{h,k})}, e_{j,k} ∈ C_k, the set of global coding vectors of C_k with m'(e_{i,l}, e^n_{i,l}). If V'_k is a basis, then at most a single value of the new coefficient m(e_{i,l}, e^n_{i,l}) will cause the new set V_k = {v(e_{1,k}), ..., v(e_{h,k})} not to be a basis.

Summing it all up
If m'(e_{i,l}, e^n_{i,l}) must be replaced, pick a new value m(e_{i,l}, e^n_{i,l}) according to some enumeration. Check whether the independence condition is satisfied for all sinks; otherwise take the next value for m(e_{i,l}, e^n_{i,l}).
Since for each sink only a single choice of m(e_{i,l}, e^n_{i,l}) is bad, it is sufficient to enumerate over d + 1 coefficients.
The l-th step continues until it reaches the sink t_l. The algorithm terminates when it has gone over all d sinks.
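A sketch of this selection rule (my own pseudocode-level illustration; the two independence tests are left abstract and their names are invented):

```python
def pick_coefficient(candidates, keeps_Ul_basis, keeps_previous_sinks_bases):
    """candidates: at least d + 1 distinct coefficient values.
    keeps_Ul_basis(m): True if U_l^n is a basis when m(e_{i,l}, e^n_{i,l}) = m.
    keeps_previous_sinks_bases(m): True if V_k stays a basis for every t_k, k < l."""
    for m in candidates:
        if keeps_Ul_basis(m) and keeps_previous_sinks_bases(m):
            return m
    raise RuntimeError("unreachable when len(candidates) exceeds the number of sinks")
```

Since Theorem 1 rules out at most one value for the current step and Theorem 2 rules out at most one value per previous sink, the loop always returns within d + 1 iterations.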

Computation of transfer functions
In the construction algorithm the transfer function from a certain node to another node has to be computed in each stage.
Define for the line graph L the |E| x |E| matrix C, where C_{i,j} = m(e_i, e_j) for (e_i, e_j) ∈ L and zero otherwise.
Koetter and Médard (02): the transfer function between e_i and e_j is the entry F_{i,j} of the matrix F = (I - C)^{-1} = I + C + C^2 + ...
F_{i,j} can be computed from C with complexity O(|E|^2).
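A small symbolic sketch of this matrix (a three-node example of my own):

```python
# Line graph with nodes e1 -> e2 -> e3 and coefficients m(e1, e2) = m(e2, e3) = D.
from sympy import symbols, eye, Matrix, simplify

D = symbols('D')
C = Matrix([[0, D, 0],
            [0, 0, D],
            [0, 0, 0]])              # C[i, j] = m(e_i, e_j), zero otherwise

F = simplify((eye(3) - C).inv())     # F = (I - C)^(-1) = I + C + C^2 + ...
print(F[0, 2])                       # D**2: the gain of the path e1 -> e2 -> e3
```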

Complexity of Code Construction
The complexity of the precoding depends on the specific algorithm chosen.
The algorithm begins by finding the d flows from the source s to the sinks at complexity O(d|E|h).
The algorithm has d steps, and in each it may go over all nodes. At each stage, when checking a possible m(·,·), it may compute dh transfer functions at complexity O(dh|E|^2).

It also performs an independence test for U_l at complexity O(h), and independence tests for the other sinks at complexity O(dh^2).
In the average case a constant number of m(·,·)'s is checked, giving stage complexity O(dh^2 + dh|E|^2) = O(dh|E|^2). In the worst case d values are checked, at complexity O(d^2 h|E|^2).
Total complexity: O(d^2 h|E|^3) in the average case and O(d^3 h|E|^3) in the worst case.

Comparison
Jaggi et al. (2003) presented an algorithm for acyclic networks with complexity O(|E|dh^2) on average and O(|E|dh^2 + |E|hd^3) in the worst case.
Our algorithm can also be used for acyclic networks, at complexity O(|E|d^2h^2 + |E|^2) in the average case and O(|E|d^3h^2 + |E|^2) in the worst case.

A single delay in a cycle
In the Koetter and Médard (02) analysis for cyclic networks it is assumed that each node in L has a single delay. But as we have shown, it is sufficient to have only a single delay in each cycle of the network.
Song et al. (05) showed that for this asynchronous transmission the min-cut is an upper bound on the possible rate. Since this bound is achievable, it is tight.

Adding and Removing Sinks
The existing construction algorithms (for acyclic networks) do not provide a simple way to add and remove sinks.
In our algorithm, adding a new sink simply corresponds to a new step of the algorithm, since only coding coefficients in the flow between the source and the new sink might be changed. Removing sinks is analogous to adding sinks.
This efficient procedure for adding and removing sinks can be applied to block or convolutional linear network codes.

Decoding Delay of the Sequential Decoder
I. Acyclic Networks
The delay of the sequential decoder proposed in Erez and Feder (04) is determined by the determinant of the matrix A(D) whose columns are the coding vectors in V_l: if the term with the smallest power of D in the determinant is D^N, then the delay is at most N.
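For illustration, a small sketch (a 2x2 example of my own) of reading the delay bound off the determinant:

```python
# N is the smallest exponent of D appearing in det A(D).
from sympy import symbols, Matrix, Poly

D = symbols('D')
A = Matrix([[1, D],
            [D, D + D**2]])          # columns: hypothetical coding vectors
det = Poly(A.det(), D)               # here det = D
N = min(m[0] for m in det.monoms())  # smallest power of D in the determinant
print(det.as_expr(), N)              # delay is at most N = 1
```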

For each coding coefficient we can choose from M polynomials. For a node e incoming into sink t_l, let P_m(e) be the path from s to e in the flow L_l, and denote by l_m(e) the length of P_m(e). It can be shown that for a random code the average delay at t_l, for any M > d, is bounded by
delay(t_l) ≤ \sum_{e \in \Gamma_{in}(t_l)} l_m(e)

Probability Distribution of Delay
The cost of the flow is given by l_m = \sum_{e \in \Gamma_{in}(t_l)} l_m(e).
For random codes, for large M, the distribution of the delay is
P(delay = q) = \binom{l_m + q - 1}{q} \left(\frac{1}{2}\right)^{q + l_m}
The distribution is better for smaller M, as long as M > d.
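A quick numerical check of this distribution (my own script; the binomial term above is a reconstruction, so treat the exact form as an assumption): it sums to 1 and its mean equals l_m, consistent with the bound on the previous slide.

```python
from math import comb

def delay_pmf(l_m, q):
    # P(delay = q) = C(l_m + q - 1, q) * (1/2)^(q + l_m)
    return comb(l_m + q - 1, q) * 0.5 ** (q + l_m)

l_m = 4                                          # hypothetical flow cost
qs = range(200)
print(sum(delay_pmf(l_m, q) for q in qs))        # ~ 1.0
print(sum(q * delay_pmf(l_m, q) for q in qs))    # ~ l_m
```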

II. Cyclic Networks
For cyclic networks, the precoding stage adds delays even for block codes, so the sequential decoder is useful both for block and convolutional codes.
The elements of A(D) might in general be rational functions, where the denominator of each element is indivisible by D. Therefore the least common multiple of the denominators, denoted lcm, is also indivisible by D.

If we multiply A(D) by lcm to yield Ã(D), then the determinant is multiplied by lcm^h. A(D) and Ã(D) have the same delay with the sequential decoder.
The delay for Ã(D) is determined by the sum of the delays accumulated along the h paths between the source and the sink. In comparison to acyclic networks, this delay might increase only by the number of nodes in ED.
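A small sketch (a 2x2 example of my own, with made-up rational entries) of clearing the denominators and checking the determinant scaling:

```python
from functools import reduce
from sympy import symbols, Matrix, denom, lcm, simplify

D = symbols('D')
A = Matrix([[1/(1 + D), D],
            [D/(1 + D + D**2), 1]])              # hypothetical rational entries
l = reduce(lcm, [denom(simplify(a)) for a in A]) # lcm of the denominators
A_tilde = simplify(l * A)                        # polynomial entries
print(simplify(A_tilde.det() - l**2 * A.det()))  # 0: det scaled by lcm^h with h = 2
```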

Proof of Theorems

Lemma
Consider a set of nodes {e_1, ..., e_h} and their coding vectors W = {w(e_1), ..., w(e_i), ..., w(e_h)}, which may be partial or global coding vectors. Consider now the coding vectors of the same set of nodes W̃ = {w̃(e_1), ..., w̃(e_i), ..., w̃(e_h)} when m(e_i, e) = 0 for all e ∈ L. The set W is a basis iff the set W̃ is a basis.

Proof Outline
Split node e_i into 3 nodes: e_tail, e_mid and e_head, connected by the edges (e_tail, e_mid) and (e_mid, e_head).
G_ee: the transfer function from e_head to e_tail in L \ e_mid.
The relation between w(e_i) and w̃(e_i) is:
w(e_i) = w̃(e_i) + G_ee w̃(e_i) + G_ee^2 w̃(e_i) + ... = \frac{1}{1 - G_ee} w̃(e_i)

The other vectors are given by:
w(e_j) = w̃(e_j) + \frac{F_{ij}}{1 - G_ee} w̃(e_i),   j ≠ i
where F_{ij} is the transfer function from e_i to e_j. The relation between W and W̃ is linear and invertible, so a basis W̃ will be mapped to a basis W and vice versa.

Proof of Theorem 1
Denote the coding vectors of C^n_l when all the coefficients in r_l are zero by Ũ^n_l = {ũ^n(e_{1,l}), ..., ũ^n(e^n_{i,l}), ..., ũ^n(e_{h,l})}.
Assume U_l = {u(e_{1,l}), ..., u(e_{i,l}), ..., u(e_{h,l})} is a basis.
After replacing m'(e_{i,l}, e^n_{i,l}) by m(e_{i,l}, e^n_{i,l}), ũ^n(e^n_{i,l}) equals:
ũ^n(e^n_{i,l}) = ũ'(e^n_{i,l}) + (m(e_{i,l}, e^n_{i,l}) - m'(e_{i,l}, e^n_{i,l})) u(e_{i,l})

Using this relation, it can be shown that if U_l is a basis and if with m'(e_{i,l}, e^n_{i,l}) the set Ũ^n_l is not a basis, then for any other m(e_{i,l}, e^n_{i,l}) the set Ũ^n_l is a basis.
From the Lemma it follows that the set U^n_l is also a basis.

Proof of Theorem 2
Before the replacement of m'(e_{i,l}, e^n_{i,l}), the set V'_k = {v'(e_{1,k}), ..., v'(e_{h,k})} is a basis. We want to analyze when, after the replacement by m(e_{i,l}, e^n_{i,l}), the new set of global coding vectors V_k = {v(e_{1,k}), ..., v(e_{h,k})} is also a basis.
Assume that the edges outgoing from e_{i,l}, except (e_{i,l}, e^n_{i,l}), are Γ_o = {(e_{i,l}, e_1), ..., (e_{i,l}, e_q)}.

The transfer function G_ee can be expressed as G_ee = G_1 + m(e_{i,l}, e^n_{i,l}) G_2.
The global coding vector v(e_{i,l}) is given by:
v(e_{i,l}) = ṽ(e_{i,l}) + G_ee ṽ(e_{i,l}) + ... = \frac{1}{1 - G_ee} ṽ(e_{i,l}) = \frac{1}{1 - m(e_{i,l}, e^n_{i,l}) Q} y(e_{i,l})
where Q = G_2/(1 - G_1) and y(e_{i,l}) = ṽ(e_{i,l})/(1 - G_1).

Using the linearity of the code, it can be shown that for 1 ≤ j ≤ h:
v(e_{j,k}) - v'(e_{j,k}) = f(m(e_{i,l}, e^n_{i,l})) H_j y(e_{i,l})
where H_j = F_{1,j} + Q F_{2,j}; F_{1,j} is the transfer function from e_{i,l} to e_{j,k} when m(e_{i,l}, e) = 0 for e ∈ Γ_o \ (e_{i,l}, e^n_{i,l}), and F_{2,j} is the transfer function when only the coefficient m(e_{i,l}, e^n_{i,l}) = 0.

Suppose the representation of y(e_{i,l}) in the basis V'_k is:
y(e_{i,l}) = β_1 v'(e_{1,k}) + β_2 v'(e_{2,k}) + ... + β_h v'(e_{h,k})
Using this relation and the assumption that V'_k is a basis, the set V_k is not a basis only if the following set of equations has a non-trivial solution:
f(m(e_{i,l}, e^n_{i,l})) \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_h \end{pmatrix} = \begin{pmatrix} H_1 \beta_1 & \cdots & H_h \beta_1 \\ \vdots & \ddots & \vdots \\ H_1 \beta_h & \cdots & H_h \beta_h \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_h \end{pmatrix}

A non-trivial solution exists only if the matrix has the eigenvalue:
λ = f(m(e_{i,l}, e^n_{i,l}))
The matrix has the eigenvalue 0 with multiplicity h - 1, and the eigenvalue
λ = trace(A) = H_1 β_1 + H_2 β_2 + ... + H_h β_h
with multiplicity 1.

It can be shown that V_k is not a basis only for a value of m(e_{i,l}, e^n_{i,l}) satisfying f(m(e_{i,l}, e^n_{i,l})) = trace(A), which is determined by m'(e_{i,l}, e^n_{i,l}), Q and trace(A). Hence for at most a single choice of m(e_{i,l}, e^n_{i,l}) the set V_k will not be a basis.