CODE DECOMPOSITION IN THE ANALYSIS OF A CONVOLUTIONAL CODE

Bol. Soc. Esp. Mat. Apl. no. 42 (2008), 183-193

E. FORNASINI, R. PINTO

Department of Information Engineering, University of Padua, 35131 Padova, Italy. E-mail: fornasini@dei.unipd.it.
Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal. E-mail: raquel@ua.pt.

Supported in part by the Portuguese Science Foundation (FCT) through the Unidade de Investigação Matemática e Aplicações of the University of Aveiro, Portugal.

Abstract

A convolutional code can be decomposed into smaller codes if it admits decoupled encoders. In this paper we show that if a code can be decomposed into smaller codes (subcodes), then its column distances are the minima of the column distances of its subcodes. Moreover, the j-th column distance of a convolutional code C is equal to the j-th column distance of the convolutional code generated by truncating the entries of a canonical encoder of C to degree at most j. We show that if one of these truncated codes can be decomposed into smaller codes, then so can all the others.

Key words: convolutional codes, decoupled encoders, code decomposition, free distance, column distance

AMS subject classifications:

1 Introduction

Some convolutional codes can be decomposed into smaller codes (subcodes). This happens when they admit decoupled encoders [2]. The free distance and the column distances are the most common distance measures for a convolutional code. It was shown in [1] that the free distance of a decomposable code is equal to the minimum of the free distances of its subcodes. In this paper we show that, analogously, the j-th column distance of a convolutional code C is equal to the minimum of the j-th column distances of its subcodes. Moreover, to calculate the j-th column distance of C we may truncate the entries of a canonical encoder of C to polynomials of degree at most j (truncation to degree j). The matrix obtained in this way generates a different convolutional code with the same i-th column distances as C for all i ≤ j. So, if C admits a canonical encoder whose truncation to degree j is a decoupled encoder G_d(d), then the j-th column distance of C is equal to the minimum of the j-th column distances of the subcodes of the convolutional code generated by G_d(d). We will also see that if a convolutional code has such a canonical encoder, then all the convolutional codes generated by the truncations to degree j of canonical encoders of C admit decoupled encoders as well.

2 Convolutional codes

We consider convolutional codes constituted by left compact sequences of (F^p)^Z, where p ∈ N and F is a finite field. Such sequences are naturally represented by Laurent series ŵ(d) = Σ_t w_t d^t ∈ F((d))^p, and any left compact sequence can be multiplied by a scalar Laurent series s(d) = Σ_t s_t d^t ∈ F((d)). In fact, F((d))^p is a vector space over the field F((d)). As usual, F[d] and F(d) denote the ring of polynomials and the field of rational functions with coefficients in F, respectively.

A [p,m]-convolutional code C is an m-dimensional subspace of F((d))^p which has a rational (and polynomial) basis, i.e., which is generated by a full row rank rational matrix G(d) ∈ F(d)^{m×p}:

$$C = \mathrm{Im}\, G(d) = \{\hat w(d) : \hat w(d) = \hat u(d)\, G(d),\ \hat u(d) \in \mathbb{F}((d))^m\}.$$

C is said to have rate m/p, and G(d) is called an encoder of C. G(d) produces a codeword ŵ(d) = û(d)G(d) for each information sequence û(d) ∈ F((d))^m.

A convolutional code admits many encoders. Two encoders that generate the same code are called equivalent encoders; they are related by a nonsingular rational left factor in F(d)^{m×m}. The encoders that can be realized by a physical device are called causal encoders. A causal encoder induces a non-anticipatory input-output map, i.e., it produces codewords that start at the same time as, or after, the corresponding information sequences.

Among the causal encoders of a convolutional code we have the polynomial encoders, and within the class of polynomial encoders we distinguish the canonical encoders, which are the left prime and row reduced ones. Two canonical encoders of a convolutional code C have the same row degrees φ_i, i = 1,...,m, up to a row permutation, and these row degrees are called the Forney indices of C. The maximum of the Forney indices is the memory of C, and their sum is the degree of C, denoted deg(C) = ν [3, 5].

The free distance of a convolutional code C [5] is defined as

$$d_{free}(C) := \min\Big\{\, w_H(\hat w(d)) = \sum_t w_H(w_t) \ :\ \hat w(d) = \sum_t w_t d^t \in C \setminus \{0\} \,\Big\},$$

where w_H(w_t) denotes the Hamming weight of w_t, and it is bounded by

$$d_{free}(C) \le (p-m)\Big(\Big\lfloor \frac{\nu}{m} \Big\rfloor + 1\Big) + \nu + 1.$$

This bound is called the generalized Singleton bound. If the free distance of C is equal to the corresponding generalized Singleton bound, then C is said to be an MDS code [6].
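The generalized Singleton bound above is straightforward to evaluate numerically. The following helper is a minimal sketch (the function name is ours, and ν is read as the quantity appearing in the bound):

```python
def generalized_singleton_bound(p: int, m: int, nu: int) -> int:
    """Upper bound on d_free for a [p, m]-convolutional code, with nu
    the parameter appearing in the generalized Singleton bound."""
    return (p - m) * (nu // m + 1) + nu + 1

# A [2, 1] code with nu = 2 can have free distance at most 6, and a
# [4, 2] code with nu = 1 (as in Example 4.1 below) at most 4.
print(generalized_singleton_bound(2, 1, 2))  # -> 6
print(generalized_singleton_bound(4, 2, 1))  # -> 4
```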

A distance measure associated with each causal encoder of a convolutional code C is the column distance [5].

Definition 2.1 The j-th order column distance of a causal encoder G(d) ∈ F(d)^{m×p} is the minimum of the Hamming weights of the truncated codewords ŵ(d)_{[0,j]}, where ŵ(d) is a codeword corresponding to an information sequence û(d) = Σ_{t≥0} u_t d^t with u_0 ≠ 0. (Here, if ŵ(d) = Σ_{t≥k} w_t d^t, then ŵ(d)_{[0,j]} = Σ_{t=0}^{j} w_t d^t, with w_t = 0 for t < k if k > 0.)

The column distance is an encoder property, and two equivalent causal encoders can have different column distances. However, the causal encoders of a convolutional code which are delay-free (i.e., which produce codewords starting at the same time as the corresponding information sequences) all have the same column distances, which leads to the definition of the column distance of the code. In this definition we only refer to polynomial encoders, for simplicity. A polynomial encoder G(d) is delay-free if and only if G(0) has full row rank. Observe that a convolutional code always admits such encoders, the canonical encoders being examples of delay-free polynomial encoders.

Definition 2.2 The j-th order column distance of a convolutional code C is the j-th order column distance of any polynomial encoder G(d) of C such that G(0) has full row rank.

If G(d) is an m×p polynomial matrix of degree l (we take the degree of a polynomial matrix to be the maximum of the degrees of its entries), we can write G(d) = G_0 + G_1 d + ... + G_l d^l, with G_i ∈ F^{m×p}, i = 0,...,l, and G_l ≠ 0. Let G(d) = G_0 + G_1 d + ... + G_l d^l be a polynomial encoder of the convolutional code C, with G(0) = G_0 full row rank, and let

$$M(G(d)) = \begin{bmatrix} G_0 & G_1 & \cdots & G_l & & \\ & G_0 & G_1 & \cdots & G_l & \\ & & \ddots & & \ddots & \end{bmatrix}$$

be the corresponding semi-infinite matrix. Denote by M_j^c(G(d)) the truncation of M(G(d)) after j+1 (block) columns,

$$M^c_j(G(d)) = \begin{bmatrix} G_0 & G_1 & G_2 & \cdots & G_j \\ & G_0 & G_1 & \cdots & G_{j-1} \\ & & G_0 & \cdots & G_{j-2} \\ & & & \ddots & \vdots \\ & & & & G_0 \end{bmatrix}, \qquad (1)$$

where G_i = 0 for i > l. Then the j-th order column distance of C is

$$d^c_j(C) = \min_{u_0 \neq 0}\{\, w_H([u_0\ u_1\ \ldots\ u_j]\, M^c_j(G(d))) \,\},$$

with u_i ∈ F^m, i = 0,1,...,j, for all j ∈ N [5]. For every j ≥ 0, the j-th column distance of a [p,m]-convolutional code C is bounded by [4]

$$d^c_j(C) \le (p-m)(j+1) + 1.$$

3 Decoupled encoders and code decomposition

A convolutional code is decomposable into smaller codes if it admits encoders in block diagonal form, called decoupled encoders. In this section we present a brief introduction to such encoders; for more details see [2].

Definition 3.1 Let p_1,...,p_k be positive integers such that Σ_{i=1}^k p_i = p, and let P be a permutation matrix. An encoder G(d) of C is said to be a (p_1,...,p_k)-decoupled encoder of C associated with P if there exist positive integers m_1,...,m_k with Σ_{i=1}^k m_i = m such that

$$G(d)\, P = \mathrm{diag}\{G^{(1)}(d), \ldots, G^{(k)}(d)\}, \qquad (2)$$

with G^{(i)}(d) ∈ F(d)^{m_i×p_i}, i = 1,...,k.

If G(d) is a (p_1,...,p_k)-decoupled encoder satisfying (2) and û(d) = [û_1(d) ... û_k(d)] ∈ F((d))^m, with û_i(d) ∈ F((d))^{m_i}, is an information sequence, then the corresponding codeword ŵ(d) = û(d)G(d) has the form ŵ(d) = [ŵ_1(d) ... ŵ_k(d)] P^{-1}, where ŵ_i(d) = û_i(d) G^{(i)}(d), i = 1,...,k. Consequently, up to a permutation of the components of its codewords,

$$C = C^{(1)} \oplus \cdots \oplus C^{(k)},$$

where C^{(i)} is the [p_i,m_i]-convolutional code generated by G^{(i)}(d), i = 1,...,k, and we say that C is decomposable into C^{(1)},...,C^{(k)}. If C does not have a (p_1,p_2)-decoupled encoder for any p_1,p_2 ∈ N, then C is said to be an undecomposable code.

Definition 3.2 A (p_1,...,p_k)-decoupled encoder G(d) of C associated with a permutation matrix P,

$$G(d) = \mathrm{diag}\{G^{(1)}(d), \ldots, G^{(k)}(d)\}\, P^{-1},$$

with G^{(i)}(d) ∈ F(d)^{m_i×p_i}, i = 1,...,k, and Σ_{i=1}^k m_i = m, is said to be maximally decoupled if every C^{(i)} = Im G^{(i)}(d) is undecomposable, i = 1,...,k.

The determination of a decoupled encoder of a [p,m]-convolutional code C is directly related to a partition of the columns of the encoders of C. We will consider that the columns of any encoder of C constitute a set of nonzero generators of F((d))^m. (Indeed, if the i-th column of an encoder of C is zero, the same happens for all equivalent encoders and, moreover, the i-th component of every codeword of C is also zero. Therefore, to determine the decoupled encoders of C it is sufficient to consider the subcode of C constituted by its codewords with the i-th component removed, whose encoders are the encoders of C without the i-th column.)

Definition 3.3 A set of nonzero generators of F((d))^m, G = {v̂_1(d), v̂_2(d), ..., v̂_p(d)}, and a decomposition of F((d))^m into a direct sum

$$\mathbb{F}((d))^m = V_1 \oplus V_2 \oplus \cdots \oplus V_k, \qquad (3)$$

are compatible if every vector of G belongs to a summand of (3) (and, obviously, to only one).

If a generator set G is compatible with (3), then:

(i) G_1 ∪ G_2 ∪ ... ∪ G_k, with G_i := V_i ∩ G, i = 1,...,k, is a partition of G, and V_i = span(G_i), i = 1,...,k;

(ii) if B := {v̂_{i_1}(d), ..., v̂_{i_m}(d)} ⊆ G is a basis of F((d))^m, then B_i := G_i ∩ B is a basis of span(G_i);

(iii) there exists a unique finest direct sum decomposition

$$\mathbb{F}((d))^m = V_1 \oplus V_2 \oplus \cdots \oplus V_h \qquad (4)$$

compatible with G. Each summand of any other compatible decomposition of F((d))^m can be expressed as a suitable sum of some of the V_i in (4).

The following algorithm determines the partition of G = {v̂_1(d), ..., v̂_p(d)} associated with (4).

Algorithm 1:

Input: G(d) = [v̂_1(d) ... v̂_p(d)].

Step 1: Select an m×m nonsingular submatrix B(d) of G(d) and put X(d) = B(d)^{-1} G(d).

Step 2: Construct the m×p boolean matrix A defined by

$$A_{ij} = \begin{cases} 1 & \text{if } X_{ij} \neq 0, \\ 0 & \text{if } X_{ij} = 0. \end{cases}$$

Step 3: Compute (A^T A)^{p-1} and determine a permutation matrix P ∈ F^{p×p} such that

$$P^T (A^T A)^{p-1} P = \mathrm{diag}\{N^{(1)}, \ldots, N^{(h)}\},$$

where N^{(i)} = [1 ... 1]^T [1 ... 1] ∈ F^{p_i×p_i}, i = 1,...,h.

Step 4: Partition P = [P_1 ... P_h], where P_i ∈ F^{p×p_i}, i = 1,...,h, and define 𝒫 := [G(d)P_1, G(d)P_2, ..., G(d)P_h].

Output: P and 𝒫.

Let G(d) be an encoder of C, let 𝒫 = [G_1(d) ... G_h(d)], with G_i(d) ∈ F(d)^{m×p_i}, i = 1,...,h, be the partition of the columns of G(d) obtained by applying Algorithm 1, and let P be the corresponding permutation matrix. Then [G_1(d) ... G_h(d)] = G(d)P, with

$$\mathbb{F}((d))^m = \mathrm{span}\, G_1(d) \oplus \cdots \oplus \mathrm{span}\, G_h(d).$$

Let also [B_1(d) ... B_h(d)] be an m×m nonsingular matrix such that B_i(d) ∈ F(d)^{m×m_i} is a submatrix of G_i(d), with m_i = rank G_i(d), i = 1,...,h. Then

$$\bar G(d) := [B_1(d) \ldots B_h(d)]^{-1}\, G(d) = \mathrm{diag}\{\bar G^{(1)}(d), \ldots, \bar G^{(h)}(d)\}\, P^{-1}, \qquad (5)$$

with Ḡ^{(i)}(d) ∈ F(d)^{m_i×p_i}, i = 1,...,h. Ḡ(d) is a (p_1,...,p_h)-decoupled encoder of C, and it is maximally decoupled: [G_1(d) ... G_h(d)] is the partition of the columns of G(d) associated with the finest direct sum decomposition of F((d))^m, which implies that the [p_i,m_i]-convolutional codes C^{(i)} = Im Ḡ^{(i)}(d), i = 1,...,h, are undecomposable. Moreover, any other maximally decoupled encoder G̃(d) of C satisfies G̃(d)P = diag{G̃^{(1)}(d), ..., G̃^{(h)}(d)}, with G̃^{(i)}(d) ∈ F(d)^{m_i×p_i}, i = 1,...,h.

Proposition 3.1 If C admits a (p_1,...,p_k)-decoupled encoder associated with a permutation matrix P, then it also admits a (p_1,...,p_k)-decoupled encoder associated with P which is canonical.

The following result is immediate.

Corollary 3.1 Every convolutional code admits maximally decoupled canonical encoders.

4 Code decomposition in the analysis of a convolutional code

Let C be a [p,m]-convolutional code with free distance d_free(C). Suppose that C can be decomposed into smaller codes, i.e., that it admits a decoupled encoder

$$G(d) = \mathrm{diag}\{G^{(1)}(d), \ldots, G^{(k)}(d)\}\, P,$$

with k ≥ 2, G^{(i)}(d) ∈ F(d)^{m_i×p_i}, i = 1,...,k, Σ_{i=1}^k m_i = m, Σ_{i=1}^k p_i = p, and P a permutation matrix. Let C^{(i)} be the [p_i,m_i]-convolutional code generated by G^{(i)}(d), i = 1,...,k. It is easy to see that [1]

$$d_{free}(C) = \min_{1 \le i \le k} d_{free}(C^{(i)}). \qquad (6)$$
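The column grouping performed by Steps 2-4 of Algorithm 1 can be read as a connected-components computation: the boolean closure (A^T A)^{p-1} links two columns exactly when they are joined by a chain of shared row supports. A minimal sketch, assuming Step 1 has already produced the support pattern A of X(d) (the function name and the union-find realization are ours):

```python
def column_partition(A):
    """Steps 2-4 of Algorithm 1: group the columns of the boolean support
    matrix A into the blocks of the finest compatible decomposition.
    Columns sharing the support of some row are merged transitively,
    which is what the boolean closure (A^T A)^(p-1) computes."""
    p = len(A[0])
    parent = list(range(p))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for row in A:
        support = [j for j, a in enumerate(row) if a]
        for i, j in zip(support, support[1:]):
            union(i, j)

    blocks = {}
    for j in range(p):
        blocks.setdefault(find(j), []).append(j)
    return sorted(blocks.values())

# Support pattern of X(d) for a (2,3)-decoupled encoder:
A = [[1, 1, 0, 0, 0],
     [0, 0, 1, 1, 0],
     [0, 0, 0, 1, 1]]
print(column_partition(A))  # -> [[0, 1], [2, 3, 4]]
```

Over a symbolic field, Step 1 itself (X(d) = B(d)^{-1} G(d)) requires exact rational-function arithmetic, which is deliberately left out of this sketch.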

So, if we have a code C which can be decomposed into smaller codes with different free distances, we obtain a better code simply by taking the subcode with the largest free distance. If all the subcodes have the same free distance but different rates, then C has a smaller rate than the subcode with the highest rate. Moreover, if C is an MDS code, it cannot be decomposed into smaller codes, as stated in the following proposition.

Proposition 4.1 If C is an MDS code, then C is undecomposable.

Proof: Assume by contradiction that C is an MDS code that admits a (p_1,p_2)-decoupled encoder G(d) ∈ F(d)^{m×p} for some positive integers p_1, p_2 such that p_1 + p_2 = p. Then C also admits a canonical (p_1,p_2)-decoupled encoder G_c(d) such that

$$G_c(d)\, P = \begin{bmatrix} G^{(1)}(d) & 0 \\ 0 & G^{(2)}(d) \end{bmatrix},$$

for some permutation matrix P and G^{(i)}(d) ∈ F(d)^{m_i×p_i}, i = 1,2, with m_1 + m_2 = m. Let C^{(i)} = Im G^{(i)}(d), and write ν_1 = deg(C^{(1)}), ν_2 = deg(C^{(2)}) and ν = deg(C). Observe that ν_1 + ν_2 = ν. Since C is an MDS code,

$$d_{free}(C) = (p-m)\Big(\Big\lfloor \frac{\nu}{m} \Big\rfloor + 1\Big) + \nu + 1.$$

Let us consider two cases: ν_2 m_1 ≥ ν_1 m_2 and ν_2 m_1 < ν_1 m_2.

Case 1: ν_2 m_1 ≥ ν_1 m_2. Since p_1 + p_2 = p, m_1 + m_2 = m and ν_1 + ν_2 = ν, we have

$$d_{free}(C) = (p_1 + p_2 - m_1 - m_2)\Big(\Big\lfloor \tfrac{\nu_1+\nu_2}{m_1+m_2} \Big\rfloor + 1\Big) + \nu_1 + \nu_2 + 1$$
$$= (p_1 - m_1)\Big(\Big\lfloor \tfrac{\nu_1}{m_1} \Big\rfloor + 1\Big) + \nu_1 + 1 + (p_1 - m_1)\Big(\Big\lfloor \tfrac{\nu_1+\nu_2}{m_1+m_2} \Big\rfloor - \Big\lfloor \tfrac{\nu_1}{m_1} \Big\rfloor\Big) + (p_2 - m_2)\Big(\Big\lfloor \tfrac{\nu_1+\nu_2}{m_1+m_2} \Big\rfloor + 1\Big) + \nu_2.$$

But d_free(C^{(1)}) ≤ (p_1 − m_1)(⌊ν_1/m_1⌋ + 1) + ν_1 + 1, which implies

$$d_{free}(C) \ge d_{free}(C^{(1)}) + (p_1 - m_1)\Big(\Big\lfloor \tfrac{\nu_1+\nu_2}{m_1+m_2} \Big\rfloor - \Big\lfloor \tfrac{\nu_1}{m_1} \Big\rfloor\Big) + (p_2 - m_2)\Big(\Big\lfloor \tfrac{\nu_1+\nu_2}{m_1+m_2} \Big\rfloor + 1\Big) + \nu_2,$$

and therefore d_free(C) > d_free(C^{(1)}), since in this case ⌊(ν_1+ν_2)/(m_1+m_2)⌋ ≥ ⌊ν_1/m_1⌋ and (p_2 − m_2)(⌊(ν_1+ν_2)/(m_1+m_2)⌋ + 1) + ν_2 ≥ 1, which contradicts (6).

Case 2: ν_2 m_1 < ν_1 m_2. Proceeding in the same way we conclude that d_free(C) > d_free(C^{(2)}), which also contradicts (6). Hence C is undecomposable.
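The rearrangement of the generalized Singleton bound used in Case 1, and the strict inequality it yields, are purely arithmetic and can be spot-checked numerically. A small sketch (the sampling ranges are arbitrary; p_2 > m_2 is assumed so that the final term is positive):

```python
def gsb(p, m, nu):
    # Generalized Singleton bound of a [p, m]-convolutional code with parameter nu.
    return (p - m) * (nu // m + 1) + nu + 1

for m1 in range(1, 4):
    for m2 in range(1, 4):
        for p1 in range(m1, m1 + 3):
            for p2 in range(m2 + 1, m2 + 4):      # p2 > m2
                for nu1 in range(5):
                    for nu2 in range(5):
                        if nu2 * m1 < nu1 * m2:   # keep Case 1 only
                            continue
                        f = (nu1 + nu2) // (m1 + m2)
                        # the rewriting of d_free(C) used in the proof ...
                        assert gsb(p1 + p2, m1 + m2, nu1 + nu2) == (
                            gsb(p1, m1, nu1)
                            + (p1 - m1) * (f - nu1 // m1)
                            + (p2 - m2) * (f + 1)
                            + nu2
                        )
                        # ... and the strict inequality it yields
                        assert gsb(p1 + p2, m1 + m2, nu1 + nu2) > gsb(p1, m1, nu1)
print("all cases check")
```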
Observe that the converse of the above proposition is not true, as shown in the next example.

Example 4.1 Consider the [4,2]-convolutional code C over the binary field with canonical encoder

$$G_c(d) = \begin{bmatrix} 1 & 0 & d & 0 \\ 0 & 1 & 1 & 1 \end{bmatrix}.$$

It is easy to see that C is an undecomposable code which is not an MDS code, since it has free distance 2 while the corresponding generalized Singleton bound is 4.

A result similar to (6) holds for the column distances of a convolutional code that can be decomposed into smaller codes, as stated in the following proposition.

Proposition 4.2 Let G(d) be a (p_1,...,p_k)-decoupled encoder of C associated with a permutation matrix P, G(d)P = diag{G^{(1)}(d), ..., G^{(k)}(d)}, with G^{(i)}(d) ∈ F(d)^{m_i×p_i}, and let C^{(i)} be the [p_i,m_i]-convolutional code generated by G^{(i)}(d), i = 1,...,k. Then

$$d^c_j(C) = \min_{1 \le i \le k} d^c_j(C^{(i)}).$$

Proof: By Proposition 3.1 we can assume without loss of generality that G(d) is canonical. Writing G^{(i)}(d) = G^{(i)}_0 + G^{(i)}_1 d + ... + G^{(i)}_ν d^ν, with G^{(i)}_r ∈ F^{m_i×p_i}, i = 1,...,k, r = 0,...,ν, we have

$$G(d) = \mathrm{diag}\{G^{(1)}_0, \ldots, G^{(k)}_0\}\, P + \mathrm{diag}\{G^{(1)}_1, \ldots, G^{(k)}_1\}\, P\, d + \cdots + \mathrm{diag}\{G^{(1)}_\nu, \ldots, G^{(k)}_\nu\}\, P\, d^\nu,$$

and then for all j ≥ 0 there exist permutation matrices P_1 and P_2 such that

$$M^c_j(G(d)) = P_1\, \mathrm{diag}\{M^c_j(G^{(1)}(d)), \ldots, M^c_j(G^{(k)}(d))\}\, P_2,$$

where P_1 is such that if u_n = [u^{(1)}_n ... u^{(k)}_n], with u^{(i)}_n ∈ F^{m_i}, i = 1,...,k, n = 0,...,j, then

$$[u_0 \ldots u_j]\, P_1 = [u^{(1)}_0 \ldots u^{(1)}_j\ \ \ldots\ \ u^{(k)}_0 \ldots u^{(k)}_j].$$

Consequently, for u_n ∈ F^m, n = 0,...,j,

$$[u_0 \ldots u_j]\, M^c_j(G(d)) = [u^{(1)}_0 \ldots u^{(1)}_j\ \ \ldots\ \ u^{(k)}_0 \ldots u^{(k)}_j]\, \mathrm{diag}\{M^c_j(G^{(1)}(d)), \ldots, M^c_j(G^{(k)}(d))\}\, P_2,$$

which implies that d^c_j(C) = min_{1≤i≤k} d^c_j(C^{(i)}).

Let G_c(d) = G_0 + G_1 d + ... + G_ν d^ν, with G_i ∈ F^{m×p}, i = 0,...,ν, be a canonical encoder of C, and define

$$G_c(d)_{[0,j]} = G_0 + G_1 d + \cdots + G_j d^j, \qquad (7)$$

for j = 0,1,...,ν. Since G_c(d) is left prime, G_0 has full row rank, and then so does G_c(d)_{[0,j]}, j = 0,1,...,ν. Therefore we can define C_{[j]} to be the [p,m]-convolutional code generated by G_c(d)_{[0,j]}, j = 0,1,...,ν. It is immediate that

$$d^c_j(C) = d^c_j(C_{[j]}), \qquad j = 0,1,\ldots,\nu.$$

Observe that if G_c(d) and G̃_c(d) are equivalent canonical encoders, and C_{[j]} and C̃_{[j]} are the convolutional codes generated by G_c(d)_{[0,j]} and G̃_c(d)_{[0,j]}, respectively, it is not true in general that C_{[j]} and C̃_{[j]} coincide, as shown in the following example.

Example 4.2 Consider the [5,3]-convolutional code C generated by the equivalent canonical encoders

$$G_c(d) = \begin{bmatrix} 1 & 0 & d^2 & d^3 & 0 \\ 0 & d & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1+d \end{bmatrix} \quad \text{and} \quad \tilde G_c(d) = \begin{bmatrix} 1 & d^2 & d^2+d & d^3+d^2 & d^3+d^2 \\ 0 & d & 1 & 1 & 1+d \\ 0 & d & 1 & -1 & -1-d \end{bmatrix},$$

and let j = 1. It is easy to see that the convolutional codes

$$C_{[1]} = \mathrm{Im}\, G_c(d)_{[0,1]} = \mathrm{Im} \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & d & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1+d \end{bmatrix}$$

and

$$\tilde C_{[1]} = \mathrm{Im}\, \tilde G_c(d)_{[0,1]} = \mathrm{Im} \begin{bmatrix} 1 & 0 & d & 0 & 0 \\ 0 & d & 1 & 1 & 1+d \\ 0 & d & 1 & -1 & -1-d \end{bmatrix}$$

are distinct.

However, the codes C_{[j]} and C̃_{[j]} have similar decoupling properties, as stated in the following proposition.

Proposition 4.3 Let G_c(d) and G̃_c(d) in F[d]^{m×p} be equivalent canonical encoders of degree ν, and let C_{[j]} and C̃_{[j]} be the convolutional codes generated by G_c(d)_{[0,j]} and G̃_c(d)_{[0,j]}, respectively, for j = 0,1,...,ν. If

$$G_c(d)_{[0,j]}\, P = \mathrm{diag}\{G^{(1)}(d), \ldots, G^{(k)}(d)\},$$

with G^{(i)}(d) ∈ F[d]^{m_i×p_i}, i = 1,...,k, Σ_{i=1}^k m_i = m, Σ_{i=1}^k p_i = p, and P a permutation matrix, then C̃_{[j]} admits an encoder whose truncation to degree j is the decoupled encoder diag{G^{(1)}(d), ..., G^{(k)}(d)} P^{-1}.
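The truncation (7) and the identity d_j^c(C) = d_j^c(C_{[j]}) lend themselves to a direct computational check; the identity is immediate, since M_j^c in (1) only involves G_0, ..., G_j, which truncation to degree j preserves. A brute-force sketch over F_2 (reading the Example 4.2 encoder over the binary field is our simplification; the example itself does not fix its field):

```python
from itertools import product

# Encoder G_c(d) of Example 4.2, read over F_2; each entry is a
# coefficient list, lowest degree first.
G = [
    [[1], [0], [0, 0, 1], [0, 0, 0, 1], [0]],   # 1, 0, d^2, d^3, 0
    [[0], [0, 1], [1], [0], [0]],               # 0, d, 1, 0, 0
    [[0], [0], [0], [1], [1, 1]],               # 0, 0, 0, 1, 1+d
]

def coeff(entry, t):
    return entry[t] if t < len(entry) else 0

def coefficient_matrix(G, t):
    """G_t in G(d) = G_0 + G_1 d + ... (the matrix of d^t coefficients)."""
    return [[coeff(e, t) for e in row] for row in G]

def truncate(G, j):
    """G(d)_[0,j]: keep only the terms of degree <= j in every entry (eq. (7))."""
    return [[entry[:j + 1] for entry in row] for row in G]

def column_distance(G, j):
    """Brute-force d_j^c over F_2: minimum Hamming weight of the first
    j+1 output blocks [u_0 ... u_j] M_j^c(G(d)) over all u_0 != 0."""
    m, p = len(G), len(G[0])
    Gt = [coefficient_matrix(G, t) for t in range(j + 1)]
    best = None
    for u in product([0, 1], repeat=m * (j + 1)):       # u = [u_0 ... u_j]
        u_blocks = [u[t * m:(t + 1) * m] for t in range(j + 1)]
        if not any(u_blocks[0]):
            continue                                    # u_0 must be nonzero
        weight = 0
        for t in range(j + 1):                          # t-th output block w_t
            for c in range(p):
                w = sum(u_blocks[s][r] * Gt[t - s][r][c]
                        for s in range(t + 1) for r in range(m)) % 2
                weight += w
        best = weight if best is None else min(best, weight)
    return best

for j in range(3):
    assert column_distance(G, j) == column_distance(truncate(G, j), j)
print(column_distance(G, 0), column_distance(G, 1))  # -> 1 1
```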

Proof: Since G_c(d) and G̃_c(d) are equivalent canonical encoders, there exists a unimodular matrix U(d) ∈ F[d]^{m×m} such that G_c(d) = U(d) G̃_c(d). Therefore Ḡ(d) := U(d) G̃_c(d)_{[0,j]} is an encoder of C̃_{[j]} such that

$$\bar G(d)_{[0,j]}\, P = (U(d)\, \tilde G_c(d)_{[0,j]})_{[0,j]}\, P = (U(d)\, \tilde G_c(d))_{[0,j]}\, P = G_c(d)_{[0,j]}\, P = \mathrm{diag}\{G^{(1)}(d), \ldots, G^{(k)}(d)\}.$$

Propositions 4.2 and 4.3 immediately imply the following result.

Corollary 4.1 Let C be a [p,m]-convolutional code. If d^c_j(C) = (p−m)(j+1) + 1 for some j ≥ 0, then C does not have a canonical encoder G_c(d) such that C_{[j]} := Im G_c(d)_{[0,j]} is decomposable into k ≥ 2 smaller codes.

5 Conclusions

We have shown that, as for the free distance, a convolutional code that can be decomposed into smaller codes has j-th column distance equal to the minimum of the j-th column distances of its subcodes. Moreover, a convolutional code C can be undecomposable and still admit a canonical encoder G_c(d) such that G_c(d)_{[0,j]} is decoupled for some j (as in Example 4.2, where C is undecomposable but G_c(d)_{[0,1]} is a (3,2)-decoupled encoder). The j-th column distance of such a code C is equal to the j-th column distance of C_{[j]} := Im G_c(d)_{[0,j]}, which can be decomposed into smaller codes; consequently, the j-th column distance of C is equal to the minimum of the j-th column distances of the subcodes of C_{[j]}. The study of these codes is a subject of future investigation: although they do not seem to be the best codes in terms of their distances, they seem to present good performance in terms of decoding.

References

[1] J.-J. Climent, V. Herranz, C. Perea, New convolutional codes from old convolutional codes, Electronic Proceedings of the 16th International Symposium on Mathematical Theory of Networks and Systems (MTNS 2004), 2004.

[2] E. Fornasini, R. Pinto, Matrix fraction descriptions in convolutional coding, Linear Algebra and its Applications 392 (2004), 119-158.

[3] G.D. Forney Jr., R. Johannesson, Z. Wan, Minimal and canonical rational generator matrices for convolutional codes, IEEE Trans. Inform. Theory 42:6 (1996), 1865-1880.

[4] H. Gluesing-Luerssen, J. Rosenthal, R. Smarandache, Strongly-MDS convolutional codes, IEEE Trans. Inform. Theory 52:2 (2006), 584-598.

[5] R. Johannesson, K.Sh. Zigangirov, Fundamentals of Convolutional Coding, IEEE Press, Piscataway, NJ, 1999.

[6] J. Rosenthal, R. Smarandache, Maximum distance separable convolutional codes, Applicable Algebra in Engineering, Communication and Computing 10:1 (1999), 15-32.