Convolutional Codes with Maximum Column Sum Rank for Network Streaming
Rafid Mahmood, Ahmed Badr, and Ashish Khisti
School of Electrical and Computer Engineering
University of Toronto
Toronto, ON, M5S 3G4, Canada
{rmahmood, abadr,

Abstract

The column Hamming distance of a convolutional code determines the error correction capability when streaming over a class of packet erasure channels. We show that the column sum rank parallels the column Hamming distance when streaming over a network with link failures. We prove rank analogues of several known column Hamming distance properties and introduce a new family of convolutional codes that maximize the column sum rank up to the code memory. Our construction involves finding a class of super-regular matrices that preserve this property after multiplication with non-singular block diagonal matrices in the ground field.

Index Terms: Column distance, maximum rank distance (MRD) codes, network coding, super-regular matrices, maximum-distance profile (MDP) codes.

I. INTRODUCTION

In streaming communication, source packets arrive sequentially at the transmitter and are only useful for playback by the receiver in the same order. Erased packets must be recovered within a given maximum delay or they are considered permanently lost. Streaming codes are designed to recover packets within deadlines and have been studied in depth for single-link communication [1]-[3]. In [3], [4], it was shown that the column Hamming distance is an appropriate metric for channels with a bounded number of erasures in a window. The column Hamming distance of a code determines the maximum number of erasures that can occur in any window of the stream for decoding to remain successful [5]. If there are fewer erasures than the distance in every window, each source symbol is recovered within the deadline. Convolutional codes such as m-MDS codes are known to achieve the maximum column Hamming distance.
These codes are constructed from block Toeplitz super-regular matrices [4], [6]. Several formulations for such matrices have been proposed in prior work [6], [7].

In network communication, a transmitter streams to several users through intermediate nodes. With linear network codes, the problem of decoding reduces to inverting the transfer matrix between transmitted and received packets [8], [9]. Unreliable links in the network, however, can reduce the rank of the channel, making this infeasible. The topic of correcting rank deficiencies has traditionally been treated under the assumption that the receiver has no knowledge of the transfer matrix, as it was originally motivated by non-coherent multiple-antenna channels [10], [11]. A family of end-to-end matrix codes known as subspace codes can decode over a rank-deficient channel; they are constructed using rank metric codes. Gabidulin codes are the most well-known rank metric codes and often act as constituents for subspace codes [12], [13]. The rank distance of the constituent code determines the maximum loss in the network from which successful decoding is possible. Subspace coding is performed over a single network use, which is transmitted and received instantaneously. It is also possible to code over multiple independent uses, in what is known as multi-shot coding [14]. A symbol lost at one time instance may then be recovered with a delay after subsequent shots. The sum rank of a constituent code gives the maximum number of link failures that can be corrected within the window of transmission by a multi-shot subspace code. As an alternative to block codes, convolutional rank metric codes were introduced for multi-shot communication in [15].

In this work, we assume the receiver has complete knowledge of the channel. Our motivation is internet streaming over the application layer.
The transfer matrix is a function of a linear network code, which is transmitted in real time as header bits of the channel packets [9]. Using a coherent channel implies that rank metric codes, rather than subspace codes, are now sufficient. The rank metric possesses several properties parallel to those of the Hamming metric. We introduce the column sum rank as a counterpart to the column Hamming distance and define a class of convolutional codes that achieve the maximum column sum rank up to the memory. These codes are rank metric analogues of the m-MDS codes used in single-link streaming [5]. Interestingly, there has been little prior work on rank metric convolutional codes. To our knowledge, the only previously studied construction appears in [15], where the authors consider the active column sum rank. Their approach differs from the present work both in the code constructions and in the distance metric.

This paper is outlined as follows. We review fundamentals of extension fields, rank metric block codes, m-MDS codes, and super-regular matrices in Section II. The network streaming problem is introduced in Section III. In Section IV, we define and
derive several properties of the column sum rank. We prove an equivalence between the capability of a convolutional code to recover from rank deficiencies with delay and its column sum rank. Codes maximizing this metric are referred to as Maximum Sum Rank (MSR) codes. Our construction in Section V first presents a new class of matrices that preserve super-regularity after multiplication with block diagonal matrices in the ground field. Such super-regular matrices are used to construct the generator of an MSR code. We conclude with code examples and a discussion of the necessary field size.

II. BACKGROUND INFORMATION

A. Extension Fields

For a prime $q$, let $\mathbb{F}_{q^M}$ be an extension field and $\mathbb{F}_q[X]$ be a polynomial ring over $\mathbb{F}_q$. A primitive element $\alpha \in \mathbb{F}_{q^M}$ is one whose consecutive powers generate the entire field [16]. The minimal polynomial $p(x) \in \mathbb{F}_q[X]$ of a primitive $\alpha$ is the monic polynomial of degree $M$ for which $p(\alpha) = 0$. A minimal polynomial is irreducible by definition, and if any $f(x) \in \mathbb{F}_q[X]$ also has $\alpha$ as a root, then $p(x) \mid f(x)$ [17]. The extension field is isomorphic to the vector space $\mathbb{F}_q^M$. Let $\alpha_0, \ldots, \alpha_{M-1} \in \mathbb{F}_{q^M}$ be a basis for this vector space. A normal basis is one where each $\alpha_i = \alpha^{q^i}$ for some $\alpha \in \mathbb{F}_{q^M}$, commonly referred to as a normal element [17]. We will denote by $\alpha^{[i]} = \alpha^{q^i}$ the $i$-th Frobenius power of $\alpha$. Every element in $\mathbb{F}_{q^M}$ can be represented as a linearized polynomial $f(x) \in \mathbb{F}_q[X]$ evaluated at a normal $\alpha$. A polynomial is referred to as linearized when its monomial terms only have Frobenius powers. The coefficients of a linearized polynomial map to the entries of a vector $f = (f_0, \ldots, f_{M-1})^T \in \mathbb{F}_q^M$, giving an easy isomorphism between the extension field and the vector space:

$$f(\alpha) = \sum_{i=0}^{M-1} f_i \alpha^{[i]} \;\longleftrightarrow\; f = (f_0, \ldots, f_{M-1})^T. \tag{1}$$

The $q$-degree (denoted $\deg_q$) gives the largest Frobenius power in the polynomial. In this paper, we will frequently treat elements of the extension field as linearized polynomials evaluated at $\alpha$.
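As a concrete illustration of Frobenius powers and the isomorphism in (1), the following sketch is not from the paper: the field $\mathbb{F}_{2^3}$ with $p(x) = x^3 + x + 1$ and the element $\alpha = x + 1$ are our own illustrative choices.

```python
# Illustrative sketch: arithmetic in GF(2^3) with minimal polynomial
# p(x) = x^3 + x + 1 (an assumption for this example, not the paper's field).
# Field elements are 3-bit integers; bit i is the coefficient of x^i.
M, POLY = 3, 0b1011

def gf_mul(a, b):
    """Multiply two field elements: carry-less product reduced mod p(x)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> M:          # degree reached M: reduce by p(x)
            a ^= POLY
    return r

def frob(a, i):
    """i-th Frobenius power a^[i] = a^(q^i) with q = 2, via repeated squaring."""
    for _ in range(i):
        a = gf_mul(a, a)
    return a

def lin_eval(f, alpha):
    """Evaluate the linearized polynomial f(X) = sum_i f_i X^[i] at alpha."""
    r = 0
    for i, fi in enumerate(f):
        if fi:
            r ^= frob(alpha, i)   # addition in characteristic 2 is XOR
    return r

alpha = 0b011                      # x + 1: primitive and normal in this field
print([frob(alpha, i) for i in range(M)])   # the normal basis alpha^[0..2]
print(lin_eval([1, 1, 0], alpha))           # f(alpha) = alpha + alpha^[1]
```

One can check that the three Frobenius powers of $x+1$ are linearly independent over $\mathbb{F}_2$, so they do form a normal basis, while the powers of the element $x$ in this field do not.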
For every finite extension of a prime field, there exists at least one element that is both normal and primitive [18]. We refer to such an element as a primitive normal element.

B. Rank Metric Codes

A vector space over the extension field is isomorphic to a matrix space over the ground field. We use the bijection $\phi : \mathbb{F}_{q^M}^n \to \mathbb{F}_q^{M \times n}$ to transform vectors of linearized polynomials into matrices via (1). The column rank of a vector $x$ over the extension field refers to the rank of $\phi(x)$. The rank distance between two vectors $x, \hat{x} \in \mathbb{F}_{q^M}^n$ was defined in [12], where it was also shown to be a metric:

$$d_R(x, \hat{x}) = \operatorname{rank}\left(\phi(x) - \phi(\hat{x})\right).$$

For a linear block code $\mathcal{C}[n,k]$ over $\mathbb{F}_{q^M}$, the minimum rank distance is defined similarly to the minimum Hamming distance, and it must satisfy a Singleton-like bound $d_R(\mathcal{C}) \le \min\left\{1, \frac{M}{n}\right\}(n-k)+1$ [12]. Maximum Rank Distance (MRD) codes achieve this bound with equality. They also have the following property.

Theorem 1 (Gabidulin, [12]). Let $G \in \mathbb{F}_{q^M}^{k \times n}$ be the generator matrix of an MRD code. The product of $G$ with any full-rank matrix $A \in \mathbb{F}_q^{n \times k}$ satisfies $\operatorname{rank}(GA) = k$.

A complementary theorem was proven in [12] for the parity-check matrix of an MRD code. We use the equivalent property for the generator matrix, which arises from the fact that the dual of an MRD code is also an MRD code [12]. We will assume $M \ge n$ throughout this paper; an MRD code is then also MDS. Gabidulin codes are the most well-known family of such block codes [12]. To construct a Gabidulin code, let $g_0, \ldots, g_{n-1} \in \mathbb{F}_{q^M}$ be a set of elements that are linearly independent over $\mathbb{F}_q$. In practice, the $g_i$ are often drawn from a subset of a normal basis. The generator matrix of a Gabidulin code $\mathcal{C}[n,k]$ is given by

$$G = \begin{pmatrix} g_0 & g_1 & \cdots & g_{n-1} \\ g_0^{[1]} & g_1^{[1]} & \cdots & g_{n-1}^{[1]} \\ \vdots & \vdots & & \vdots \\ g_0^{[k-1]} & g_1^{[k-1]} & \cdots & g_{n-1}^{[k-1]} \end{pmatrix}.$$
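The rank of $\phi(x)$ can be computed directly over the ground field. The sketch below is our own illustration, reusing the 3-bit integer encoding of $\mathbb{F}_{2^3}$ from the previous example: it builds the $M \times n$ binary matrix of (1) and computes its $\mathbb{F}_2$ rank, which gives the rank distance.

```python
# Illustrative sketch: the map phi and the rank distance d_R for GF(2^3).
# Symbols are 3-bit integers; bit i of a symbol is its i-th coordinate
# in the chosen basis, so phi(x) is an M x n matrix over GF(2).
M = 3

def phi_rows(x):
    """Rows of phi(x) as n-bit masks: bit t of row i is coordinate i of x[t]."""
    return [sum(((sym >> i) & 1) << t for t, sym in enumerate(x))
            for i in range(M)]

def gf2_rank(rows):
    """Rank of a GF(2) matrix given as a list of integer row masks."""
    rank = 0
    rows = list(rows)
    while rows:
        r = rows.pop()
        if r == 0:
            continue
        rank += 1
        low = r & -r                      # pivot: lowest set bit
        rows = [x ^ r if x & low else x for x in rows]
    return rank

def rank_phi(x):
    return gf2_rank(phi_rows(x))

def d_rank(x, xh):
    # subtraction coincides with XOR in characteristic 2
    return rank_phi([a ^ b for a, b in zip(x, xh)])

print(rank_phi([2, 2]))   # Hamming weight 2, but column rank only 1
print(rank_phi([2, 4]))   # two independent symbols: rank 2
```

The vector $(2,2)$ illustrates why the rank metric is weaker than the Hamming metric coordinate-for-coordinate: repeated symbols add Hamming weight but not rank.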
C. m-MDS Codes

Let $\mathcal{C}[n,k,m]$ be a linear time-invariant convolutional code with memory $m$ over $\mathbb{F}_{q^M}$. For a source $s_{[0,j]} = (s_0, \ldots, s_j)$, the channel packets $x_{[0,j]} = s_{[0,j]} G^{EX}_j$ are found using the extended-form generator matrix

$$G^{EX}_j = \begin{pmatrix} G_0 & G_1 & \cdots & G_j \\ & G_0 & \cdots & G_{j-1} \\ & & \ddots & \vdots \\ & & & G_0 \end{pmatrix}, \tag{2}$$

where $G_j \in \mathbb{F}_{q^M}^{k \times n}$ and $G_j = 0$ for $j > m$ [19]. In this paper, we assume that $G_0$ has full row rank. This is necessary in order to guarantee that $G^{EX}_j$ also has full row rank. The Hamming weight of $x_{[0,j]}$ is measured by summing the Hamming weight of each $x_t$. The $j$-th column distance of a code is the minimum Hamming weight amongst channel packets whose initial source $s_0$ is non-zero [5]:

$$d_H(j, \mathcal{C}) = \min_{x_{[0,j]} \in \mathcal{C},\, s_0 \neq 0} \operatorname{wt}_H\!\left(x_{[0,j]}\right).$$

We will simplify the notation when $\mathcal{C}$ is obvious. The column distance has several properties that were treated in [4], [5]. We will prove rank metric analogues of two of them.

1) Assuming all prior packets have been recovered, if there are at most $d_H(j) - 1$ symbol erasures when a receiver observes $x_{[0,j]}$, then the packet $s_t$ is recoverable by time $t + j$.
2) The $j$-th column distance is upper bounded by $d_H(j) \le (n-k)(j+1) + 1$. If $d_H(j)$ achieves this bound, then $d_H(i)$ does as well for all $i \le j$.

A family of codes known as m-MDS codes achieves the upper bound for $d_H(m)$. The extended generator of an m-MDS code is constructed by carefully taking a sub-matrix of $k(m+1)$ rows from a block Toeplitz super-regular matrix [5]. We next define super-regularity and give a previous construction of such a matrix with a block Toeplitz structure.

D. Super-regular Matrices

In this paper, the rows and columns of a matrix are indexed starting from the 0-th. For $r \in \mathbb{N}$, let $\sigma$ be a permutation of the set $\{0, \ldots, r-1\}$ and let $S_r$ denote the symmetric group, i.e., the set of all such permutations. The sign function $\operatorname{sgn} \sigma$ equals $1$ or $-1$ respectively when $\sigma$ is composed of an even or odd number of transpositions.
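For intuition, the $j$-th column distance of a toy code can be found by exhaustive search over sources with $s_0 \neq 0$. The sketch below is our own example: a rate $1/2$ binary code with memory $1$ and hypothetical blocks $G_0 = [1\ 1]$, $G_1 = [1\ 0]$, not a construction from the paper.

```python
from itertools import product

# Toy binary convolutional code, n = 2, k = 1, m = 1 (illustrative choice).
G = [[1, 1], [1, 0]]        # hypothetical blocks G_0, G_1 over GF(2)
n, m = 2, 1

def packets(s):
    """Channel packets x_t = sum_i s_{t-i} G_i over GF(2)."""
    xs = []
    for t in range(len(s)):
        xt = [0] * n
        for i in range(min(t, m) + 1):
            if s[t - i]:
                xt = [a ^ b for a, b in zip(xt, G[i])]
        xs.append(xt)
    return xs

def col_distance(j):
    """d_H(j): minimum Hamming weight of x_[0,j] over sources with s_0 != 0."""
    best = None
    for s in product([0, 1], repeat=j + 1):
        if s[0] == 0:
            continue
        w = sum(sum(xt) for xt in packets(s))
        best = w if best is None else min(best, w)
    return best

print(col_distance(0), col_distance(1))
```

Both values meet the bound $(n-k)(j+1)+1$ of Property 2, so this toy code is m-MDS up to its memory.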
The Leibniz formula calculates the determinant of a matrix $D = [D_{i,j}]$ of order $r$:

$$\det D = \sum_{\sigma \in S_r} \operatorname{sgn} \sigma \prod_{i=0}^{r-1} D_{i,\sigma(i)}. \tag{3}$$

The product of entries for a single $\sigma$ is referred to as a term of the summation. A trivial determinant is one where each term in the Leibniz formula is equal to 0. A super-regular matrix is one for which every square sub-matrix with a non-trivial determinant is non-singular. From (2), the generator of a convolutional code is a block Toeplitz matrix, and so we focus on super-regular matrices with the same structure. In [5], the authors provided a construction of a Toeplitz super-regular matrix that exists for every prime field $\mathbb{F}_q$. A second block Toeplitz matrix, which we outline below, was proposed in [7]. For $n, m \in \mathbb{N}$, let $M = q^{n(m+2)} - 1$. Let $\alpha \in \mathbb{F}_{q^M}$ be a primitive element and a root of the minimal polynomial $p(x)$. For $j = 0, \ldots, m$, we define $T_j \in \mathbb{F}_{q^M}^{n \times n}$ as

$$T_j = \begin{pmatrix} \alpha^{[nj]} & \alpha^{[nj+1]} & \cdots & \alpha^{[n(j+1)-1]} \\ \alpha^{[nj+1]} & \alpha^{[nj+2]} & \cdots & \alpha^{[n(j+1)]} \\ \vdots & \vdots & & \vdots \\ \alpha^{[n(j+1)-1]} & \alpha^{[n(j+1)]} & \cdots & \alpha^{[n(j+2)-2]} \end{pmatrix}. \tag{4}$$

We then construct the block Toeplitz matrix $T = [\widetilde{T}_{i,j}]$, where blank entries denote zero blocks:

$$T = \begin{pmatrix} & & & T_0 \\ & & T_0 & T_1 \\ & \iddots & \vdots & \vdots \\ T_0 & \cdots & T_{m-1} & T_m \end{pmatrix}. \tag{5}$$

Each non-zero entry is a linearized monomial $\widetilde{T}_{i,j}(X)$ evaluated at $\alpha$. The $q$-degrees of these monomials increase as one
moves further down and to the right inside $T$:

$$\deg_q \widetilde{T}_{i,j}(X) + 1 \le \deg_q \widetilde{T}_{i',j}(X), \quad \text{if } i < i', \tag{6a}$$

$$\deg_q \widetilde{T}_{i,j}(X) + 1 \le \deg_q \widetilde{T}_{i,j'}(X), \quad \text{if } j < j'. \tag{6b}$$

We use the variable $X$ when discussing properties of polynomials and evaluate at $\alpha$ specifically when calculating the determinant of a matrix. To show that $T$ is super-regular, let $D \in \mathbb{F}_{q^M}^{r \times r}$ be any square sub-matrix. If $D$ has a non-trivial determinant, then $\det D$ must be non-zero. Each non-trivial term in the Leibniz formula is now a product of linearized monomials, which we denote $D_\sigma(\alpha) = \prod_{i=0}^{r-1} D_{i,\sigma(i)}(\alpha)$. The determinant $\det D$ becomes a polynomial $D(\alpha) = \sum_{\sigma \in S_r} \operatorname{sgn} \sigma \, D_\sigma(\alpha)$. We bound the degree of $D(X)$ using (6), which is preserved in all sub-matrices of $T$.

Lemma 1 (Almeida et al., [7]). For $T$ defined in (5), let $D$ be any sub-matrix with a non-trivial determinant, and let $D(X)$ be the polynomial which evaluates its determinant. The degree of $D(X)$ is bounded as

$$1 \le \deg D(X) < q^{n(m+2)} - 1.$$

In [7], the matrices in (4) contained entries with double exponents $\alpha^{2^i}$ rather than Frobenius powers $\alpha^{[i]}$. As a result, the version of Lemma 1 in the prior work requires $q = 2$. A direct extension reveals that the Frobenius power can be used in (4), with the corresponding $q$ in the bounds. For completeness, we prove the extension for the upper bound in Appendix A. The lower bound is derived by algorithmically finding the unique $\sigma^* = \arg\max_\sigma \deg D_\sigma(X)$, which generates the highest-degree monomial term. The algorithm does not change in the extension, so we refer the reader to [7] for details. The polynomials $D(X)$ and $D_{\sigma^*}(X)$ share the same degree and therefore $D(X)$ is not the zero polynomial. Given the upper bound, $\deg D(X) < \deg p(x)$ and consequently $p(x) \nmid D(X)$ [17]. As a result, $\alpha$ is not a root of $D(X)$, and any $D$ with a non-trivial determinant is also non-singular.

III. NETWORK STREAMING PROBLEM

The problem is defined in three steps: encoding, the network model, and decoding.
A. Encoder

At each time instance $t \ge 0$, a source packet $s_t \in \mathbb{F}_{q^M}^k$ arrives causally at a transmitter node. A channel packet $x_t \in \mathbb{F}_{q^M}^n$ is a function of the previous source packets, i.e., $x_t = \gamma_t(s_0, \ldots, s_t)$. We consider the class of linear time-invariant encoders for our scenario and will use convolutional codes with the structure in (2). A rate $R = \frac{k}{n}$ encoder with memory $m$ generates the channel packet as

$$x_t = \sum_{i=0}^{m} s_{t-i} G_i.$$

B. Network Model

Channel packets are instantaneously transmitted and received over a network. A receiver sequentially observes $y_t = x_t A_t$, where $A_t \in \mathbb{F}_q^{n \times n}$ is the channel matrix at time $t$ [14], [20]. A single transmission is referred to as a network shot. Each shot is independent of all others. Communication over a window $[t, t+W-1]$ of $W$ shots is therefore given by $y_{[t,t+W-1]} = x_{[t,t+W-1]} A_{[t,t+W-1]}$, where $A_{[t,t+W-1]} = \operatorname{diag}(A_t, \ldots, A_{t+W-1})$ is the block diagonal channel transfer matrix [14]. Let $\rho_t = \operatorname{rank}(A_t)$. Clearly, $\sum_{i=t}^{t+W-1} \rho_i = \operatorname{rank}(A_{[t,t+W-1]})$. When the network is operating perfectly, $\rho_t = n$, but unreliable connections cause a rank-deficient channel matrix. In [21], it is shown that each failing link can reduce the rank of $A_t$ by at most 1. To facilitate the extension from the point-to-point streaming problem, we introduce a network variation of the sliding window model by using rank deficiencies in place of symbol erasures.

Definition 1. Consider a network where the receiver observes $y_t = x_t A_t$, with $\operatorname{rank}(A_t) = \rho_t$. A Rank Deficient Sliding Window Channel $C_R(N, W)$ has the property that in any sliding window of length $W$, the block diagonal channel rank does not decrease by more than $N$, i.e., $\sum_{i=t}^{t+W-1} \rho_i \ge nW - N$ for all $t \ge 0$.

In analysis, we disregard the linearly dependent columns of $A_t$ and the associated received symbols.
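Definition 1 is easy to check mechanically given the per-shot channel ranks. A minimal sketch of our own, with hypothetical rank sequences:

```python
def is_rank_deficient_window_channel(rhos, n, N, W):
    """Check Definition 1: in every sliding window of W shots, the total
    channel rank is at least nW - N."""
    return all(sum(rhos[t:t + W]) >= n * W - N
               for t in range(len(rhos) - W + 1))

# n = 2 symbols per shot, windows of W = 3 shots, at most N = 2 rank losses.
print(is_rank_deficient_window_channel([2, 1, 1, 2, 2], n=2, N=2, W=3))
print(is_rank_deficient_window_channel([2, 0, 1, 2, 2], n=2, N=2, W=3))
```

The second sequence fails because its first window loses three units of rank, one more than the $N = 2$ the channel permits.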
At each time instance, the receiver effectively observes $y_t = x_t A'_t$, where $A'_t \in \mathbb{F}_q^{n \times \rho_t}$ is the reduced channel matrix containing only the linearly independent columns.

C. Decoder

Let $T$ be the maximum delay permitted by the decoder. A packet received at time $t$ must be recovered by $t + T$, i.e., there should exist a sequence of functions that solves $\hat{s}_t = \eta_t(y_0, \ldots, y_{t+T})$. If $\hat{s}_t = s_t$, then the source packet is perfectly recovered by the deadline. Otherwise, the source packet is declared lost. In the linear model, the decoding problem is reduced to inverting
the transfer matrix between $s_t$ and $y_0, \ldots, y_{t+T}$. A linear code $\mathcal{C}$ is declared feasible over $C_R(N, W)$ if there exist an encoder and a decoder for the code which completely recover every source packet with delay $T$. We assume that $T = W - 1$ for the remainder of this paper and will design convolutional codes with memory $m = T$ in application. Coding for channels where $T \neq W - 1$ will be briefly addressed near the end. Our codes aim to maximize the sum rank metric, which is introduced in the next section.

IV. MAXIMUM SUM RANK CODES

The sum rank distance between channel packets $x_{[0,j]}$ and $\hat{x}_{[0,j]}$ is defined as the sum of the rank distances between each $x_t$ and $\hat{x}_t$. Using the bijection $\phi(\cdot)$ from (1) to map vectors in the extension field to matrices in the ground field, we introduce the $j$-th column sum rank of a code as an analogue of the column Hamming distance:

$$d_R(j) = \min_{x_{[0,j]} \in \mathcal{C},\, s_0 \neq 0} \sum_{t=0}^{j} \operatorname{rank}\left(\phi(x_t)\right).$$

The authors in [15] proposed an alternative metric for their code construction: the active column sum rank. Using the state trellis, the active column sum rank is the minimum sum rank amongst channel packets that exit the zero state at time 0 and do not re-enter the zero state between times 1 and $j-1$. In contrast, the column sum rank includes packets that return to the zero state before time $j$. Our metric, rather than the active rank, is necessary and sufficient for network streaming over a window of length $W$.

Theorem 2. Let $\mathcal{C}$ be a convolutional code used to stream over the window $[0, W-1]$. For $t = 0, \ldots, W-1$, let $A'_t \in \mathbb{F}_q^{n \times \rho_t}$ be full column rank matrices. In addition, let $A' = \operatorname{diag}(A'_0, \ldots, A'_{W-1})$ be the channel matrix. If $d_R(W-1) > nW - \operatorname{rank}(A')$, then $s_0$ is recoverable by time $W-1$. Conversely, if $d_R(W-1) \le nW - \operatorname{rank}(A')$, then there exists at least one $s_0$ which cannot be recovered within the window.

Proof: Consider two source packets $s = (s_0, \ldots, s_j)$ and $\hat{s} = (\hat{s}_0, \ldots, \hat{s}_j)$, where $s_0 \neq \hat{s}_0$.
Suppose they respectively generate channel packets $x$ and $\hat{x}$, where both $xA' = y$ and $\hat{x}A' = y$. Then $(x - \hat{x})A' = 0$, where the difference $x - \hat{x}$ is a hypothetical channel packet whose sum rank is at least $d_R(W-1)$. However, $(x_t - \hat{x}_t)A'_t = 0$ implies $\operatorname{rank}(\phi(x_t) - \phi(\hat{x}_t)) \le n - \operatorname{rank}(A'_t)$ for $t \le W-1$. We arrive at the following contradiction on the sum rank of the channel packet by summing each of the inequalities:

$$\sum_{t=0}^{W-1} \operatorname{rank}\left(\phi(x_t) - \phi(\hat{x}_t)\right) \le nW - \operatorname{rank}(A') < d_R(W-1).$$

For the converse, let $s = (s_0, \ldots, s_{W-1})$, where $s_0 \neq 0$, generate $x$ with sum rank equal to $d_R(W-1)$. For $t = 0, \ldots, W-1$, let $\rho_t = n - \operatorname{rank}(\phi(x_t))$. Then there exist $A'_t \in \mathbb{F}_q^{n \times \rho_t}$ such that $x_t A'_t = 0$. By summing all of the $\rho_t$, the matrix $A' = \operatorname{diag}(A'_0, \ldots, A'_{W-1})$ must have rank equal to $nW - d_R(W-1)$, and $s$ is indistinguishable from the all-zero source over this channel.

Since the encoder is time invariant, this theorem suffices to show that all source packets are recoverable over the sliding window channel. Assuming all prior packets are decoded, we recover $s_t$ using the window $[t, t+W-1]$; the contributions of $s_0, \ldots, s_{t-1}$ can simply be subtracted from the received packets. Theorem 2 is a rank metric analogue of Property 1 in Section II-C, which guarantees the necessity and sufficiency of the column Hamming distance in single-link streaming [4]. We have replaced symbol erasures with rank deficiencies and the column Hamming distance with the column sum rank. We next parallel Property 2 in Section II-C. Firstly, $d_R(j) \le (n-k)(j+1) + 1$: the sum rank of a packet cannot exceed its Hamming weight, so the upper bound on the column Hamming distance is inherited by the column sum rank. Furthermore, if the $j$-th column sum rank is maximal, all prior ones are as well.

Lemma 2. If $d_R(j) = (n-k)(j+1) + 1$, then $d_R(i) = (n-k)(i+1) + 1$ for all $i \le j$.

Proof: It suffices to prove the claim for $i = j-1$. Let $\mathcal{C}$ be a code for which $d_R(j-1)$ is at most $(n-k)j$, but $d_R(j)$ achieves the maximum.
Consider the source packets $s_{[0,j-1]}$ which generate $x_{[0,j-1]} = s_{[0,j-1]} G^{EX}_{j-1}$ with sum rank equal to $d_R(j-1)$. We next determine $s_j$ to obtain $x_j = \sum_{t=0}^{j-1} s_t G_{j-t} + s_j G_0$. The first summation produces a vector whose Hamming weight is at most $n$. Because $\operatorname{rank}(G_0) = k$, $s_j$ can be selected specifically to cancel up to $k$ non-zero entries of the first summation. This implies that $\operatorname{wt}_H(x_j) \le n-k$ and consequently $\operatorname{rank}(\phi(x_j)) \le n-k$. Therefore, the sum rank of $x_{[0,j]}$ is upper bounded by $d_R(j-1) + n - k \le (n-k)(j+1)$, which is a contradiction.

Codes achieving the maximum $d_R(j)$ up to the memory are referred to as Maximum Sum Rank (MSR) codes. They directly parallel m-MDS convolutional codes, which maximize the $m$-th column Hamming distance [5]. By Theorem 2, an MSR code with memory $W-1$ can recover each source packet with delay $W-1$, provided that the rank of the channel matrix is at least $kW$ in every sliding window. We will show the existence of MSR codes in the next section. The following theorem
provides further insight on decoding with delay $j$. The theorem also serves as an extension of Theorem 1 to convolutional codes transmitted over independent network uses.

Theorem 3. For $t = 0, \ldots, j$, let $A'_t \in \mathbb{F}_q^{n \times \rho_t}$ be a set of full column rank matrices. Let the $\rho_t$ satisfy the condition

$$\sum_{i=0}^{t} \rho_i \le k(t+1) \tag{7}$$

for all $t \le j$, with equality for $t = j$. We construct $A' = \operatorname{diag}(A'_0, \ldots, A'_j)$. The following statements are equivalent for any convolutional code.

1) $d_R(j) = (n-k)(j+1) + 1$.
2) $G^{EX}_j A'$ is non-singular.

Proof: We first prove 1) implies 2). Consider a code where $d_R(j) = (n-k)(j+1)+1$ and suppose there exists an $A'$ satisfying (7) for which $G^{EX}_j A'$ is singular. Then there exist non-zero channel packets $x_{[0,j]}$ where $x_{[0,j]} A' = 0$. The first packet $x_0$ is not necessarily non-zero, meaning there is no guarantee on the sum rank of $x_{[0,j]}$. We let $l = \arg\min_t \{x_t \neq 0\}$ and consider the vector $x_{[l,j]}$, whose sum rank is at least $d_R(j-l)$. Because $x_t A'_t = 0$ for $t = l, \ldots, j$, we bound $\operatorname{rank}(\phi(x_t)) \le n - \rho_t$ for these instances. The sum rank of $x_{[l,j]}$ is bounded as

$$\sum_{t=l}^{j} \operatorname{rank}\left(\phi(x_t)\right) \le n(j-l+1) - \sum_{t=l}^{j} \rho_t \le (n-k)(j-l+1).$$

The second inequality follows from $\sum_{t=l}^{j} \rho_t \ge k(j-l+1)$, which can be derived when (7) is met with equality for $t = j$. However, $d_R(j-l) = (n-k)(j-l+1)+1$ due to Lemma 2. The sum rank of $x_{[l,j]}$ is less than this, which is a contradiction.

We prove 2) implies 1) by using a code with $d_R(j) \le (n-k)(j+1)$ and constructing an $A'$ for which $G^{EX}_j A'$ is singular. Let $m = \arg\min_i \{d_R(i) \le (n-k)(i+1)\}$ be the first instance where the column sum rank is not maximal. Consider $x_{[0,m]}$ whose sum rank is equal to $d_R(m)$. We assume $m > 0$ and will discuss the case $m = 0$ at the end. The sum rank of $x_{[0,m-1]}$ is equal to $(n-k)m + 1 + k_1$ for some $k_1 \ge 0$, and the sum rank of $x_{[0,m]}$ is equal to $(n-k)(m+1) - k_2$ for some $k_2 \ge 0$. We will show that there exists a set of matrices $A'_t \in \mathbb{F}_q^{n \times \rho_t}$ with $\rho_t$ satisfying both (7) and $x_t A'_t = 0$ for $t = 0, \ldots, m$. In addition, we will construct $A'_{[0,m]}$ to have rank $(m+1)k$.
Let $\rho_t = n - \operatorname{rank}(\phi(x_t))$ for $t = 0, \ldots, m-1$. Clearly, there exist $A'_t$ for which $x_t A'_t = 0$. Summing the $\rho_t$ confirms that these matrices satisfy (7) for $t \le m-1$:

$$\sum_{i=0}^{t} \rho_i = n(t+1) - \sum_{i=0}^{t} \operatorname{rank}\left(\phi(x_i)\right) \le k(t+1) - 1.$$

The inequality follows from the fact that the sum rank of $x_{[0,t]}$ is at least $(n-k)(t+1)+1$. In fact, the exact summation for $t = m-1$ is known to be $\sum_{i=0}^{m-1} \rho_i = mk - 1 - k_1$. It remains to choose an appropriate $\rho_m$. The rank of our $x_m$ is $n - k - 1 - k_1 - k_2$, so there should exist an $A'_m$ with rank $\rho_m = k + k_1 + 1$ that satisfies $x_m A'_m = 0$. To confirm this is possible, we check whether $\rho_m \le n$. The sum rank of our $x_{[0,m-1]}$ cannot exceed the sum rank of $x_{[0,m]}$, which is bounded by $(n-k)(m+1)$. Using the sum rank of $x_{[0,m-1]}$, we bound $k_1 \le n - k - 1$. Substituting this inequality into the equation for $\rho_m$ guarantees that $\rho_m \le n$ and that $A'_m$ possesses full column rank. Alternatively, if $m = 0$, then $\operatorname{rank}(\phi(x_0)) \le n-k$, and we simply use an $A'_0$ with rank $\rho_0 = k$. The remaining $A'_{m+1}, \ldots, A'_j$ can be any full-rank $n \times k$ matrices, thus satisfying (7) for all $t \le j$. The product $G^{EX}_j A'$ is given by

$$G^{EX}_j A' = \begin{pmatrix} G^{EX}_m & X \\ 0 & Y \end{pmatrix} \begin{pmatrix} A'_{[0,m]} & \\ & A'_{[m+1,j]} \end{pmatrix} = \begin{pmatrix} G^{EX}_m A'_{[0,m]} & X A'_{[m+1,j]} \\ 0 & Y A'_{[m+1,j]} \end{pmatrix},$$

where $X$ and $Y$ denote the remaining blocks that comprise $G^{EX}_j$. Note that $G^{EX}_m A'_{[0,m]}$ is a square singular matrix. Therefore, $\det G^{EX}_j A'$ is also zero.

The constraints in (7) ensure that $[0, j]$ is a feasible decoding window. For every $t \le j$, if $\sum_{i=0}^{t} \rho_i < k(t+1)$, then the decoder does not possess sufficient information to decode. If the bound is achieved for some $t$, then we can invert $G^{EX}_t A'_{[0,t]}$ in order to recover $s_0$ with delay $t$. In the next section, we will construct an extended generator matrix. This theorem will be useful afterwards to verify that the generator produces an MSR code.
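The $j$-th column sum rank of a small code can likewise be found by exhaustive search. The sketch below is our own toy example over $\mathbb{F}_4 = \mathbb{F}_{2^2}$ with $p(x) = x^2 + x + 1$; the generator blocks $G_0 = [1, \alpha]$ and $G_1 = [\alpha, \alpha]$ are hypothetical choices, not a construction from this paper.

```python
from itertools import product

M, POLY = 2, 0b111                 # GF(4) with p(x) = x^2 + x + 1 (assumed)
n, k, m = 2, 1, 1
G = [[0b01, 0b10], [0b10, 0b10]]   # hypothetical blocks G_0 = [1, a], G_1 = [a, a]

def gf_mul(a, b):
    """Carry-less multiply reduced modulo p(x)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> M:
            a ^= POLY
    return r

def gf2_rank(rows):
    """Rank over GF(2) of a matrix given as integer row masks."""
    rank = 0
    rows = list(rows)
    while rows:
        r = rows.pop()
        if r == 0:
            continue
        rank += 1
        low = r & -r
        rows = [x ^ r if x & low else x for x in rows]
    return rank

def rank_phi(x):
    """rank of phi(x): rows are the GF(2)-coordinate masks of the symbols."""
    return gf2_rank([sum(((sym >> i) & 1) << t for t, sym in enumerate(x))
                     for i in range(M)])

def column_sum_rank(j):
    """d_R(j): minimum sum rank over sources with s_0 != 0."""
    best = None
    for s in product(range(1 << M), repeat=j + 1):
        if s[0] == 0:
            continue
        total = 0
        for t in range(j + 1):
            xt = [0] * n
            for i in range(min(t, m) + 1):
                xt = [a ^ gf_mul(s[t - i], g) for a, g in zip(xt, G[i])]
            total += rank_phi(xt)
        best = total if best is None else min(best, total)
    return best

print(column_sum_rank(0), column_sum_rank(1))
```

Both values meet $(n-k)(j+1)+1$, so this toy code is MSR up to its memory. By contrast, choosing $G_1 = [\alpha, \alpha^2] = \alpha G_0$ lets the source $s_1 = \alpha s_0$ zero out $x_1$ entirely, and $d_R(1)$ drops to 2.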
V. CODE CONSTRUCTION

A. Preservation of Super-regularity

Because MSR codes are also m-MDS, it is natural to assume that their generators can be constructed using super-regular matrices. MRD block codes, however, have an additional property over MDS block codes, given by Theorem 1. Theorem 3 extends this property to the convolutional counterparts. In this section, we connect Theorem 3 to super-regularity.

Consider the case when the element $\alpha$ generating the matrix in (4) is primitive normal. $T$ remains super-regular, but now each of the blocks $T_j$ resembles the generator matrix of a rate $R = 1$ Gabidulin code. Let $A_t \in \mathbb{F}_q^{n \times n}$ be a non-singular matrix in the ground field and construct $A = \operatorname{diag}(A_t, \ldots, A_t)$ from $m+1$ copies. The product $F = TA$ has the structure

$$F = \begin{pmatrix} & & & F_0 \\ & & F_0 & F_1 \\ & \iddots & \vdots & \vdots \\ F_0 & \cdots & F_{m-1} & F_m \end{pmatrix}, \tag{8}$$

where we have let $F_j = T_j A_t$ for $j = 0, \ldots, m$. It can be shown that each $F_j$ has the structure

$$F_j = \begin{pmatrix} f_0^{[nj]} & f_1^{[nj]} & \cdots & f_{n-1}^{[nj]} \\ f_0^{[nj+1]} & f_1^{[nj+1]} & \cdots & f_{n-1}^{[nj+1]} \\ \vdots & \vdots & & \vdots \\ f_0^{[n(j+1)-1]} & f_1^{[n(j+1)-1]} & \cdots & f_{n-1}^{[n(j+1)-1]} \end{pmatrix},$$

where $f = (f_0, \ldots, f_{n-1}) = (\alpha^{[0]}, \ldots, \alpha^{[n-1]}) A_t$ is a linearly independent set over $\mathbb{F}_q$ [12]. In addition, each element $f_i$ is a linearized polynomial $f_i(X)$ evaluated at $\alpha$, with coefficients from $A_t = [A_{k,i}]$. We can then write every non-zero entry of $F$ as

$$f_i^{[j]} = \sum_{k=0}^{n-1} A_{k,i} \alpha^{[k+j]}. \tag{9}$$

The polynomials in any given column have the same set of coefficients, but the degrees of each monomial term increase as one moves downwards along $F$, i.e., $f_i^{[j]}(X) = f_i(X^{[j]})$. The degree of each linearized polynomial is bounded as

$$j \le \deg_q f_i^{[j]}(X) \le n - 1 + j. \tag{10}$$

These bounds depend primarily on the Frobenius power $j$. Consequently, the polynomial entries on any fixed row of a block $F_j$ all share the same bounds. Although $A$ was constructed by repeating a single $A_t$, similar results are reached when letting $A = \operatorname{diag}(A_0, \ldots, A_m)$ be constructed from different matrices.
The non-zero entries remain linearized polynomials with the same bounds, but each column block of $F$ is now generated using a different $A_t$. Consequently, there is a different set of linearly independent $f_0, \ldots, f_{n-1}$ for different column blocks. Overall, the structure of $F$ remains similar to $T$, but with polynomials of varying degrees rather than monomials with fixed degrees. Consequently, we propose a weakened version of (6).

Lemma 3. For $t = 0, \ldots, m$, let $A_t \in \mathbb{F}_q^{n \times n}$ be non-singular matrices. We construct $A = \operatorname{diag}(A_0, \ldots, A_m)$. Let $T$ be the super-regular matrix in (5). The product $F = TA$ satisfies the following.

1) $\deg_q F_{i,j}(X) + 1 \le \deg_q F_{i',j}(X)$, if $i < i'$.
2) $\deg_q F_{i,j'}(X) + 1 \le \deg_q F_{i,j}(X)$, if $F_{i,j'}$ is an entry of a different column block to the left of that from which $F_{i,j}$ is drawn.

Statement 1 in Lemma 3 is identical to (6a) and follows directly from (10). Statement 2 is a weakened variation of (6b) that only holds when the two entries are drawn from different column blocks of $F$. The above lemma is used to show that $F$ is also a super-regular matrix when $\alpha$ is primitive normal.

Theorem 4. For $t = 0, \ldots, m$, let $A_t \in \mathbb{F}_q^{n \times n}$ be any non-singular matrices. We construct $A = \operatorname{diag}(A_0, \ldots, A_m)$. Let $T$ over $\mathbb{F}_{q^M}$ be the super-regular matrix in (5). If $M \ge q^{n(m+2)} - 1$ and $\alpha$ is primitive normal, then $F = TA$ is super-regular.

Proof: We show that $F$ is super-regular in three parts, moving from specific to increasingly general cases. The problem in the subsequent cases can be converted to the first case, which we prove super-regular directly. Furthermore, we assume without loss of generality that $A_0 = A_1 = \cdots = A_m = A_t$. This simplifies notation and allows us to freely use the previous polynomial structures and bounds.

Case 1: Consider when the degrees of the polynomials $f_i(X)$ are strictly increasing, i.e., $\deg f_0(X) < \cdots < \deg f_{n-1}(X)$.
[Fig. 1: The result of the Gaussian elimination applied on a single row of $D_i$. Non-zero entries, confined to rows $j$ through $n-1+j$, are denoted by X. The inverse isomorphism of the result generates a vector of polynomials with increasing degrees.]

Because the polynomials are linearized, the degrees are forced to be the following:

$$\deg f_0(X) = 1, \quad \deg f_1(X) = q, \quad \ldots, \quad \deg f_{n-1}(X) = q^{n-1}. \tag{11}$$

In this case, $F$ fully satisfies (6), as opposed to only Lemma 3. Every polynomial entry in $F$ has the same degree as the monomial at the same position in $T$. If we let $D$ be a sub-matrix of $F$, then we can construct a sub-matrix $\widetilde{D}$ of $T$ using the same row and column indices. The monomial entries of $\widetilde{D}$ share the same degrees as the polynomials in the corresponding positions of $D$. The Leibniz formula for the determinant yields $\deg D(X) = \deg \widetilde{D}(X)$. Lemma 1 holds for $D(X)$ and it follows that $D$ is non-singular. Therefore, $F$ is super-regular. It is clear that $F$ or its sub-matrices need only fully satisfy (6) in order for Lemma 1 to apply. In the remaining cases, we will modify the matrices to return to Case 1.

Case 2: Consider a more general scenario where the degrees of the $f_i(X)$ are distinct but not necessarily in increasing order. As a result, (6b) does not always hold, but (6a) remains true by Lemma 3. Permuting the columns of $F$ allows us to re-arrange each column block to produce a matrix $\hat{F}$ that satisfies (11). The sets of sub-matrices of $F$ and $\hat{F}$ are identical up to column permutations and therefore every sub-matrix of $F$ is non-singular.

Case 3: We now consider the scenario with no restrictions on the degrees of the $f_i(X)$. Naturally, there may exist multiple polynomials sharing the same degree. Column permutations alone cannot transform $F$ to satisfy (11). As a result, we will show how, by using elementary column operations, each sub-matrix of $F$ with a non-trivial determinant can be transformed to one with an increasing degree distribution.
The matrix is then interpreted as a sub-matrix of a super-regular $\hat{F}$ that satisfies (11). Let $D$ be a sub-matrix of $F$ with a non-trivial determinant. Lemma 3 is clearly preserved. The authors in [7] revealed that $D$ takes the following shape: ordering the column blocks $h, \ldots, 1, 0$ from left to right, each column block $i$ consists of a zero matrix $O_i$ stacked above a matrix $D_i$, where $O_0$ is empty, giving the staircase

$$D = \begin{pmatrix} O_h & \cdots & O_1 & \\ D_h & \cdots & D_1 & D_0 \end{pmatrix}. \tag{12}$$

Each $O_i$ is a zero matrix and each $D_i$ is a matrix containing non-zero polynomials drawn from a single column block of $F$. Let $k_i$ be the number of columns of each $D_i$. The polynomials on each row of $D_i$ share the same bounds on degrees and are linearly independent amongst themselves. We apply elementary column operations on each $D_i$ separately in order to ensure that $D$ satisfies (6b). Using the isomorphism in (1), each row of $D_i$ maps to a matrix in $\mathbb{F}_q^{M \times k_i}$. This matrix may possess non-zero entries only between the $j$-th and $(n-1+j)$-th rows. Because the polynomials are linearly independent, the matrix has full column rank. Using Gaussian elimination, we transform it to reduced column echelon form. An example of the desired structure is provided in Fig. 1. Applying the inverse isomorphism on the resulting matrix gives a vector of polynomials with strictly increasing degrees. By (9), the column operations that modify one row will modify all other rows to the same degree differences. The operations imply that there exists a matrix $M_i \in \mathbb{F}_q^{k_i \times k_i}$ that ensures $D_i M_i$ satisfies the conditions of (6b). By constructing $M = \operatorname{diag}(M_h, \ldots, M_0)$, we produce $\hat{D} = DM$, for which (6) is completely satisfied. $\hat{D}$ can be seen as a sub-matrix of a super-regular matrix meeting (11). Then $\det \hat{D} = \det D \det M$ implies that $D$ is non-singular and $F$ is super-regular.

For this proof, we had let $A_0 = A_1 = \cdots = A_m$. Without this assumption, the product $F$ uses a different set of $f_i(X)$ in each column block. Because column operations are performed for each $D_i$ independently, the polynomial degrees can always be transformed to satisfy (11). The key technique in this proof is the elementary column operation matrix $M$.
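The Gaussian elimination step on a single row of $D_i$ can be sketched over $\mathbb{F}_2$. In the sketch below, which is our own illustration, columns are encoded as integer bitmasks (bit $r$ being the entry in row $r$); a full-column-rank binary matrix is put into reduced column echelon form, so that distinct pivot rows correspond to strictly increasing polynomial degrees under the inverse isomorphism.

```python
def reduced_col_echelon(cols, nrows):
    """Reduced column echelon form over GF(2). Columns are integer bitmasks
    (bit r = entry in row r); each pivot row ends up with a single 1."""
    cols, out = list(cols), []
    for row in range(nrows):
        # find a not-yet-used column with a 1 in this row
        piv = next((i for i, c in enumerate(cols) if (c >> row) & 1), None)
        if piv is None:
            continue
        p = cols.pop(piv)
        # clear this row from all other columns (remaining and finished)
        cols = [c ^ p if (c >> row) & 1 else c for c in cols]
        out = [c ^ p if (c >> row) & 1 else c for c in out]
        out.append(p)
    return out

# Two columns supported on rows 1..2, mimicking the band structure of D_i:
# column 0 has 1s in rows 1 and 2, column 1 has a single 1 in row 1.
print([bin(c) for c in reduced_col_echelon([0b0110, 0b0010], nrows=4)])
```

The two output columns are supported on rows 1 and 2 respectively, so their top degrees, and hence the degrees of the corresponding linearized polynomials, are strictly increasing.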
An example is provided in Appendix B. It is interesting to note that, in general, F can be transformed using elementary column operations from the onset to a
super-regular matrix whose polynomials satisfy (11). However, it is not obvious whether such a transformation preserves super-regularity: while column permutations do not change the set of sub-matrices, column addition operations on F produce a matrix with an entirely different set of sub-matrices. As a result, our proof opts to directly show that each sub-matrix is indeed non-singular.

B. Encoder

We permute the rows of T to resemble the extended generator matrix in (2). This re-arranged matrix,

     [ T_0  T_1  ...  T_m     ]
T̃ =  [      T_0  ...  T_{m−1} ]   (13)
     [            ...    :    ]
     [                  T_0   ]

is also super-regular. G_m^EX is constructed as a sub-matrix of k(m+1) rows taken from T̃. This process parallels the construction of m-MDS generators [5].

Theorem 5. Let T be the super-regular matrix generated over F_{q^M} using Theorem 4. Let 0 ≤ i_1 < ⋯ < i_k < n and construct a (m+1)k × (m+1)n sub-matrix G_m^EX of T̃ from the rows indexed jn + i_1, ..., jn + i_k for j = 0, ..., m. This matrix is the extended generator of an MSR convolutional code C[n, k, m].

Proof: We will show that G_m^EX satisfies Theorem 3. Assume without loss of generality that i_1 = 0, ..., i_k = k − 1. Each T_i is divided into

T_i = [ G_i ]
      [ T̄_i ],

where the G_i ∈ F_{q^M}^{k×n} are the blocks of the extended generator matrix. For t = 0, ..., m, let A_t ∈ F_q^{n×n} be non-singular matrices. We similarly divide A_t = [ A′_t  A″_t ], where the two blocks A′_t ∈ F_q^{n×ρ_t} and A″_t ∈ F_q^{n×(n−ρ_t)} represent the reduced channel matrix and some remaining matrix, respectively. Let A = diag(A_0, ..., A_m). The product can be written as

      [ T_0A_0  T_1A_1  ...  T_mA_m     ]
T̃A =  [         T_0A_1  ...  T_{m−1}A_m ]
      [                 ...     :       ]
      [                     T_0A_m      ]

where

T_iA_j = [ G_iA′_j  G_iA″_j ]
         [ T̄_iA′_j  T̄_iA″_j ].

The sub-matrix of T̃A containing only the rows and columns involving the blocks G_iA′_j is equal to the product G_m^EX A′, with A′ = diag(A′_0, ..., A′_m). If the ranks ρ_t satisfy the conditions in (7), then G_m^EX A′ has a non-trivial determinant [7]. By Theorem 4, this matrix is then non-singular. Therefore, G_m^EX satisfies Theorem 3, and C[n, k, m] achieves d_R(m) = (n − k)(m + 1) + 1.
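The row selection in Theorem 5 is purely an indexing operation and can be sketched as follows. This is an illustrative helper of our own (with integer stand-in blocks; the real construction requires the super-regular T of Theorem 4 over F_{q^M}): it builds the block upper-triangular Toeplitz matrix T̃ and keeps the rows jn + i_1, ..., jn + i_k for j = 0, ..., m.

```python
def block_toeplitz(blocks):
    """Form the block upper-triangular Toeplitz matrix
    [T_0 T_1 ... T_m; 0 T_0 ... T_{m-1}; ...; 0 ... 0 T_0]
    from square blocks T_0, ..., T_m (each n x n)."""
    m1, n = len(blocks), len(blocks[0])
    zero = [[0] * n for _ in range(n)]
    rows = []
    for j in range(m1):                       # block-row j: j zero blocks, then T_0..T_{m-j}
        block_row = [zero] * j + blocks[: m1 - j]
        for r in range(n):
            rows.append([x for B in block_row for x in B[r]])
    return rows

def extended_generator(T_tilde, n, offsets):
    """Keep rows j*n + i for each offset i (= i_1 < ... < i_k) and each
    block-row j, giving the (m+1)k x (m+1)n matrix G_m^EX."""
    m1 = len(T_tilde) // n
    return [T_tilde[j * n + i] for j in range(m1) for i in offsets]
```

With n = 2, m = 1 and offsets (0,), the helper keeps rows 0 and 2 of the 4 × 4 matrix, i.e. the first row of each block-row, matching the indices jn + i_l of the theorem.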
The above technique can be used to construct convolutional codes of any parameters that achieve the maximum column sum rank up to the code memory. We can now directly use an MSR code for network streaming over a sliding window channel. An MSR code C[n, k, T] can feasibly recover every packet over C_R(N, W) with delay T = W − 1 if N < d_R(W − 1). In practice, the decoding deadline T is not always exactly equal to W − 1. If the decoder relaxes the delay constraint, i.e., T ≥ W, the same code C[n, k, T] achieves perfect recovery over every sliding window channel that has N < d_R(W − 1). However, if T < W − 1, then the maximum achievable N is d_R(T) − 1.

C. Example

Although the bound on the field size given in Theorem 4 is large, it is only a sufficiency constraint required for the proof. We present examples of MSR codes over fields with smaller sizes below.

Example 1. Let α ∈ F_{2^7} be a primitive normal element. Let T be the super-regular matrix from (1) generated over F_{2^7}, but with dimensions given by n = 3 and m = 2. We pick the rows i_1 = 0, i_2 = 2 and let G_j be defined as follows:

G_j = [ α^[jn]    α^[jn+1]  α^[jn+2] ]
      [ α^[jn+2]  α^[jn+3]  α^[jn+4] ].

Numerically, G_2^EX can be shown to satisfy Theorem 3, therefore making it the generator matrix of an MSR code C[3, 2, 2].

Example 2. Let β ∈ F_{2^10} be a primitive normal element. Let T be the super-regular matrix from (1) generated over F_{2^10}, but with dimensions given by n = 4 and m = 1. We pick the rows i_1 = 0, i_2 = 3 and let G_j be defined as below:

G_j = [ β^[jn]    β^[jn+1]  β^[jn+2]  β^[jn+3] ]
      [ β^[jn+3]  β^[jn+4]  β^[jn+5]  β^[jn+6] ].
G_1^EX is the generator matrix for an MSR code C[4, 2, 1]. In both cases, Theorem 4 guarantees the construction only if M ≥ 2^11, i.e., only if α and β are primitive normal elements of a far larger field F_{2^M}.

VI. CONCLUSION

In this paper, we map the relationship between the column sum rank and network streaming. We prove several properties of the column sum rank that parallel analogous properties of the column Hamming distance. A convolutional code construction maximizing this metric up to the code memory is proposed. This MSR code is a rank-metric counterpart to the m-MDS convolutional code. We use matrices over extension fields that preserve super-regularity after multiplication with block diagonal matrices in the ground field. The proof requires large field sizes, but we numerically show that codes can be constructed for smaller fields. Future work involves pursuing a more detailed study of the field size requirements. Moreover, we have only considered a specific class of rank-deficient sliding window channels. In single-link streaming over burst erasure or mixed erasure channels, structured constructions using m-MDS codes as constituents have been revealed as more powerful alternatives [3]. MSR codes may be similarly useful constituents for analogous codes over networks having burst or mixed link failures. As MSR codes directly extend m-MDS codes, it is likely that rank-metric parallels exist for convolutional codes maximizing other Hamming distance metrics.

APPENDIX A
PROOF OF LEMMA 1

Proof: The bounds were proven in [7] for q = 2 in order to show that the matrix is super-regular. We will prove the upper bound for more general q, but refer the reader to the original work for the lower bound. The degree of D(X) is bounded by considering the degrees of its polynomial terms. When constructing each D_σ(X), up to two entries from the last row and column can be used.
Choosing D_{r−1,r−1} prevents selecting a second entry, but due to (6), D_{r−1,r−1}(X) has a greater degree than the sum of any other two entries. In addition, deg D_{r−1,r−1}(X) ≤ q^{n(m+2)−2}. We next consider the second-to-last row and column; the degrees of the entries here are bounded by q^{n(m+2)−4}. This argument is used recursively to show

sum_{i=0}^{r−1} deg D_{i,σ(i)}(X) ≤ sum_{k=0}^{r−1} q^{n(m+2)−2−2k}
                                 = q^{n(m+2)−2} sum_{k=0}^{r−1} q^{−2k}
                                 < q^{n(m+2)} / (q^2 − 1)
                                 < q^{n(m+2)−1}.

APPENDIX B
EXAMPLE OF THE COLUMN TRANSFORMATION IN THEOREM 4

Example 3. Let n = 4, m = 1 and let T be the super-regular matrix of Theorem 4, generated from a primitive normal element α. Let A_0 and A_1 be two fixed non-singular square matrices over the ground field. We will use A = diag(A_0, A_1) as the block-diagonal matrix and generate the product TA. The entries f_0 = α^[1] + α^[2], f_1 = α^[0], f_2 = α^[0] + α^[2], f_3 = α^[3] are linearly independent polynomials. Similarly, the entries g_0 = α^[0] + α^[1], g_1 = α^[1] + α^[2] + α^[3], g_2 = α^[1], g_3 = α^[0] + α^[2] are also respectively linearly independent. Note that the q-degrees of these polynomials are bounded between 0 and 3. Furthermore, for any fixed row of TA, the polynomials derived from the f_i(X) always have a lower degree than those derived from the g_i(X).
     [ T_0  T_1 ] [ A_0  0   ]   [ T_0A_0  T_1A_1 ]
TA = [          ] [          ] = [                ]
     [ 0    T_0 ] [ 0    A_1 ]   [ 0       T_0A_1 ]

Every entry of the resulting 8 × 8 matrix is a Frobenius power of one of the polynomials f_i or g_i above. Now consider the sub-matrix of the product formed from the rows R_i, i ∈ {1, 2, 4, 5}, and the columns C_j, j ∈ {3, 4, 5, 6}. We denote this matrix D. This matrix does not satisfy (6b): the degrees of the polynomials in C_4 and C_6 are the same, and lower than the degree of the polynomials in C_5. We apply the following elementary column operations on D to construct a new matrix D̂:

1) C_5 ↔ C_6
2) C_4 − C_5 → C_4

For the new matrix, the entries in C_4 are generated from Frobenius powers of ĝ_0 = g_0 − g_2 = α^[0]. D̂ now completely satisfies (6), and it follows that it is non-singular.

REFERENCES

[1] E. Martinian and C.-E. W. Sundberg, Burst erasure correction codes with low decoding delay, IEEE Trans. on Inf. Theory, vol. 50, no. 10, 2004.
[2] D. Leong, A. Qureshi, and T. Ho, On coding for real-time streaming under packet erasures, in IEEE Int. Symp. on Inf. Theory (ISIT), 2013.
[3] A. Badr, P. Patil, A. Khisti, W. Tan, and J. Apostolopoulos, Layered constructions for low-delay streaming codes, to appear in IEEE Trans. on Inf. Theory.
[4] V. Tomas, J. Rosenthal, and R. Smarandache, Decoding of convolutional codes over the erasure channel, IEEE Trans. on Inf. Theory, vol. 58, no. 1, 2012.
[5] H. Gluesing-Luerssen, J. Rosenthal, and R. Smarandache, Strongly-MDS convolutional codes, IEEE Trans. on Inf. Theory, vol. 52, no. 2, 2006.
[6] R. Hutchinson, R. Smarandache, and J. Trumpf, Superregular matrices and the construction of convolutional codes having a maximum distance profile, Lin. Algebra and Its App., vol. 428, 2008.
[7] P. Almeida, D. Napp, and R. Pinto, A new class of super regular matrices and MDP convolutional codes, Lin. Algebra and Its App., vol. 439, 2013.
[8] R. Ahlswede, N. Cai, S. Li, and R. Yeung, Network information flow, IEEE Trans. on Inf. Theory, vol. 46, no. 4, 2000.
[9] T. Ho, M. Medard, R. Kotter, D. R. Karger, M. Effros, J. Shi, and B. Leong, A random linear network coding approach to multicast, IEEE Trans. on Inf. Theory, vol. 52, no. 10, 2006.
[10] R. Kotter and F. R. Kschischang, Coding for errors and erasures in random network coding, IEEE Trans. on Inf. Theory, vol. 54, no. 8, 2008.
[11] D. Silva, F. R. Kschischang, and R. Kotter, A rank-metric approach to error control in random network coding, IEEE Trans. on Inf. Theory, vol. 54, no. 9, 2008.
[12] E. M. Gabidulin, Theory of codes with maximum rank distance, Problems Inf. Transm., vol. 21, no. 1, pp. 1–12, 1985.
[13] R. Roth, Maximum-rank array codes and their application to crisscross error correction, IEEE Trans. on Inf. Theory, vol. 37, no. 2, pp. 328–336, 1991.
[14] R. W. Nobrega and B. F. Uchoa-Filho, Multishot codes for network coding using rank-metric codes, in IEEE Wireless Network Coding Conf. (WiNC), 2010.
[15] A. Wachter-Zeh, M. Stinner, and V. Sidorenko, Convolutional codes in rank metric with application to random network coding, to appear in IEEE Trans. on Inf. Theory.
[16] R. Lidl and H. Niederreiter, Introduction to Finite Fields and Their Applications. Cambridge University Press.
[17] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. North-Holland, 1977.
[18] H. W. Lenstra and R. J. Schoof, Primitive normal bases for finite fields, Math. of Comp., vol. 48, no. 177, 1987.
[19] R. Johannesson and K. S. Zigangirov, Fundamentals of Convolutional Coding. IEEE Press, 1999.
[20] A. Khisti, D. Silva, and F. R. Kschischang, Secure-broadcast codes over linear-deterministic channels, in IEEE Int. Symp. on Inf. Theory (ISIT), 2010.
[21] S. Jalali, M. Effros, and T. Ho, On the impact of a single edge on the network coding capacity, in Inf. Theory and App. (ITA), 2011, pp. 1–5.
On Systems of Diagonal Forms II Michael P Knapp 1 Introduction In a recent paper [8], we considered the system F of homogeneous additive forms F 1 (x) = a 11 x k 1 1 + + a 1s x k 1 s F R (x) = a R1 x k
More informationLDPC Codes. Intracom Telecom, Peania
LDPC Codes Alexios Balatsoukas-Stimming and Athanasios P. Liavas Technical University of Crete Dept. of Electronic and Computer Engineering Telecommunications Laboratory December 16, 2011 Intracom Telecom,
More informationFeasibility Conditions for Interference Alignment
Feasibility Conditions for Interference Alignment Cenk M. Yetis Istanbul Technical University Informatics Inst. Maslak, Istanbul, TURKEY Email: cenkmyetis@yahoo.com Tiangao Gou, Syed A. Jafar University
More informationUNIT MEMORY CONVOLUTIONAL CODES WITH MAXIMUM DISTANCE
UNIT MEMORY CONVOLUTIONAL CODES WITH MAXIMUM DISTANCE ROXANA SMARANDACHE Abstract. Unit memory codes and in particular, partial unit memory codes are reviewed. Conditions for the optimality of partial
More informationEnhancing Binary Images of Non-Binary LDPC Codes
Enhancing Binary Images of Non-Binary LDPC Codes Aman Bhatia, Aravind R Iyengar, and Paul H Siegel University of California, San Diego, La Jolla, CA 92093 0401, USA Email: {a1bhatia, aravind, psiegel}@ucsdedu
More informationReverse Edge Cut-Set Bounds for Secure Network Coding
Reverse Edge Cut-Set Bounds for Secure Network Coding Wentao Huang and Tracey Ho California Institute of Technology Michael Langberg University at Buffalo, SUNY Joerg Kliewer New Jersey Institute of Technology
More informationOn MBR codes with replication
On MBR codes with replication M. Nikhil Krishnan and P. Vijay Kumar, Fellow, IEEE Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore. Email: nikhilkrishnan.m@gmail.com,
More informationTC08 / 6. Hadamard codes SX
TC8 / 6. Hadamard codes 3.2.7 SX Hadamard matrices Hadamard matrices. Paley s construction of Hadamard matrices Hadamard codes. Decoding Hadamard codes A Hadamard matrix of order is a matrix of type whose
More information1 The linear algebra of linear programs (March 15 and 22, 2015)
1 The linear algebra of linear programs (March 15 and 22, 2015) Many optimization problems can be formulated as linear programs. The main features of a linear program are the following: Variables are real
More informationInteresting Examples on Maximal Irreducible Goppa Codes
Interesting Examples on Maximal Irreducible Goppa Codes Marta Giorgetti Dipartimento di Fisica e Matematica, Universita dell Insubria Abstract. In this paper a full categorization of irreducible classical
More informationLinear Codes, Target Function Classes, and Network Computing Capacity
Linear Codes, Target Function Classes, and Network Computing Capacity Rathinakumar Appuswamy, Massimo Franceschetti, Nikhil Karamchandani, and Kenneth Zeger IEEE Transactions on Information Theory Submitted:
More informationDivision of Trinomials by Pentanomials and Orthogonal Arrays
Division of Trinomials by Pentanomials and Orthogonal Arrays School of Mathematics and Statistics Carleton University daniel@math.carleton.ca Joint work with M. Dewar, L. Moura, B. Stevens and Q. Wang
More informationFundamental rate delay tradeoffs in multipath routed and network coded networks
Fundamental rate delay tradeoffs in multipath routed and network coded networks John Walsh and Steven Weber Drexel University, Dept of ECE Philadelphia, PA 94 {jwalsh,sweber}@ecedrexeledu IP networks subject
More informationRecent Developments in Compressed Sensing
Recent Developments in Compressed Sensing M. Vidyasagar Distinguished Professor, IIT Hyderabad m.vidyasagar@iith.ac.in, www.iith.ac.in/ m vidyasagar/ ISL Seminar, Stanford University, 19 April 2018 Outline
More informationVector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)
Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational
More informationA Field Extension as a Vector Space
Chapter 8 A Field Extension as a Vector Space In this chapter, we take a closer look at a finite extension from the point of view that is a vector space over. It is clear, for instance, that any is a linear
More informationChapter 5. Cyclic Codes
Wireless Information Transmission System Lab. Chapter 5 Cyclic Codes Institute of Communications Engineering National Sun Yat-sen University Outlines Description of Cyclic Codes Generator and Parity-Check
More informationA Polynomial-Time Algorithm for Pliable Index Coding
1 A Polynomial-Time Algorithm for Pliable Index Coding Linqi Song and Christina Fragouli arxiv:1610.06845v [cs.it] 9 Aug 017 Abstract In pliable index coding, we consider a server with m messages and n
More information