Fraction-free Row Reduction of Matrices of Skew Polynomials


Bernhard Beckermann, Laboratoire d'Analyse Numérique et d'Optimisation, Université des Sciences et Technologies de Lille, France, bbecker@ano.univ-lille1.fr

Howard Cheng and George Labahn, Department of Computer Science, University of Waterloo, Waterloo, Canada, {hchcheng,glabahn}@scg.math.uwaterloo.ca

ABSTRACT

We present a new algorithm for row reduction of a matrix of skew polynomials. The algorithm can be used for finding full rank decompositions and other rank revealing transformations of such matrices. The algorithm can be used for computation in exact arithmetic domains where the growth of coefficients in intermediate computations is a central concern. This coefficient growth is controlled by using fraction-free methods. This allows us to obtain a polynomial-time algorithm: for an m × s matrix of input skew polynomials of degree N with coefficients whose lengths are bounded by K, the algorithm has a worst case complexity of O(m^5 s^4 N^4 K^2) bit operations.

1. INTRODUCTION

In [2] Abramov and Bronstein give a new fast algorithm for determining what they call rank revealing transformations of skew polynomial matrices. These are transformations which convert a matrix of skew polynomials into one where the rank is determined entirely by the leading or trailing coefficient matrix. They show that their algorithm can be used for a number of applications, including the desingularization of linear recurrence systems and the computation of rational solutions of a large class of linear functional systems. In the former application their rank revealing approach improves on the EG-elimination method of Abramov [1]. In the commutative case the algorithm of [2] is the same as that given by Beckermann and Labahn [6]. In both cases the algorithms are fast, but their methods require exact arithmetic while not handling coefficient growth except through coefficient GCD computations. Without the GCD computations the coefficient growth can be exponential.
The main contribution of this paper is a new algorithm which performs row reductions to bring a given matrix of skew polynomials into one having a full rank leading or trailing coefficient matrix. The reductions can be used to find a full rank decomposition of a matrix of skew polynomials, along with the rank revealing transformations used by Abramov and Bronstein. The main tool used in the algorithm is the order basis, which describes all solutions of a given order problem. The algorithm is noteworthy because it uses only fraction-free arithmetic without coefficient GCD computations, while at the same time controlling the coefficient growth of intermediate computations. This is similar to the process used by the subresultant algorithm for computing the GCD of two scalar polynomials [9, 10, 11]. The algorithm is based on the FFFG fraction-free method of Beckermann and Labahn [8], which was developed for fraction-free computation of matrix rational approximants, matrix GCDs and generalized Richardson extrapolation processes. In the scalar case the FFFG algorithm generalizes the subresultant GCD algorithm [7]. We also give a complexity analysis of our algorithm. For an m × s matrix of input skew polynomials whose coefficients have lengths bounded by K, the algorithm has a worst case complexity of O(m^5 s^4 N^4 K^2) bit operations.

The remainder of the paper is organized as follows. The next section gives the basic definitions for the problem and introduces order bases of skew polynomials, the primary tool that will be used to solve our problem. Section 3 gives a linear algebra formulation of our problem, while the following section gives our fraction-free recurrence. Section 5 discusses the stopping criterion and complexity of our algorithm. The paper ends with a conclusion along with a discussion of future work.

2. PRELIMINARIES

Let ID be an integral domain with quotient field Q, and let Q[Z; σ] be the Ore domain of skew polynomials over Q with automorphism σ and δ = 0.
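As a concrete illustration (ours, not part of the paper), the arithmetic of such an Ore domain can be modelled directly. The sketch below assumes ID = ℤ[n] with the shift σ(a(n)) = a(n+1), so that the variable Z and the coefficients interact via Z a = σ(a) Z; all names (sigma, poly_mul, skew_mul) are hypothetical helpers:

```python
# Minimal model of the skew polynomial ring ID[Z; sigma] for ID = Z[n],
# sigma(a(n)) = a(n+1).  A coefficient a(n) is a list [c0, c1, ...]
# meaning c0 + c1*n + c2*n^2 + ...; a skew polynomial sum_i a_i(n) Z^i
# is a list of such coefficient lists.
from math import comb

def sigma(a):
    """The shift automorphism: a(n) -> a(n+1)."""
    out = [0] * len(a)
    for k, c in enumerate(a):
        for j in range(k + 1):       # expand c*(n+1)^k = c * sum_j C(k,j) n^j
            out[j] += c * comb(k, j)
    return out

def poly_add(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i] += c
    return out

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] += ca * cb
    return out

def skew_mul(f, g):
    """Multiply skew polynomials using the rule Z^i a = sigma^i(a) Z^i."""
    res = [[0] for _ in range(len(f) + len(g) - 1)]
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            shifted = gj
            for _ in range(i):       # push Z^i through the coefficient
                shifted = sigma(shifted)
            res[i + j] = poly_add(res[i + j], poly_mul(fi, shifted))
    return res

# The commutation rule Z * n = (n+1) * Z:
Zvar = [[0], [1]]                    # the skew polynomial Z
coeff_n = [[0, 1]]                   # the coefficient n (degree 0 in Z)
print(skew_mul(Zvar, coeff_n))       # -> [[0, 0], [1, 1]], i.e. (n+1)*Z
```

Note that skew_mul(Zvar, coeff_n) and skew_mul(coeff_n, Zvar) give different results, reflecting the non-commutativity of the ring.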
Thus the elements of Q interact with the variable Z via Z a = σ(a) Z. An example of such a domain is ID = IK[n], Q = IK(n) with the shift operator Z and σ(a(n)) = a(qn + d) for some integers q and d with q ≠ 0. The case σ(a(n)) = a(n + 1) is particularly important in applications. We remark that, as in [2], we can map our problems to general skew polynomial domains, including linear differential, difference and q-difference operators.

Given a matrix of skew polynomials, we are interested in applying row operations which transform the matrix into a matrix of skew polynomials which has the property that the rank is revealed by either the trailing or

leading coefficient. We will focus on the case of trailing coefficients. In this section we provide the preliminary definitions and tools which form the basis of our approach.

We assume that we are given F(Z), a rectangular m × s matrix of skew polynomials with entries in Q[Z; σ]:

F(Z) = Σ_{j=0}^{N} F_j Z^j,  with F_j ∈ ID^{m×s}.

We adopt the convention of denoting the elements of F(Z) by F(Z)^{k,ℓ}, and the elements of F_j by F_j^{k,ℓ}. For any vector of integers ~ω = (ω_1, …, ω_m), we let Z^{~ω} denote the matrix of skew polynomials having Z^{ω_i} on the diagonal and 0 everywhere else. A matrix of skew polynomials is said to have row (column) degree ~ν if the i-th row (column) has maximal degree ν_i. The vector ~e is the vector consisting only of 1s.

The goal of this paper is to construct the type of rank revealing transformations needed by Abramov and Bronstein [2] for their applications. Let Q[Z; σ][Z^{-1}; σ^{-1}] be the iterated domain where we have the identities

Z Z^{-1} = Z^{-1} Z = 1,  a Z^{-1} = Z^{-1} σ(a),  Z^{-1} a = σ^{-1}(a) Z^{-1}  for all a ∈ Q.

The rank revealing transformations of Abramov and Bronstein can be formalized as follows. Given F(Z) ∈ ID[Z; σ]^{m×s} (possibly after a shift with Z^{-ℓ}), we wish to find T(Z^{-1}) ∈ ID[Z^{-1}; σ^{-1}]^{m×m} such that

T(Z^{-1}) · F(Z) = W(Z) ∈ ID[Z; σ]^{m×s},

with the number of nonzero rows r of W(Z) coinciding with the rank of the trailing coefficient W_0, and hence with the rank of W(Z). In addition we require the existence of S(Z) ∈ Q[Z; σ]^{m×m} such that S(Z) · T(Z^{-1}) = I_m. Notice that this second identity tells us that the process of elimination producing W(Z) is invertible. More precisely, we obtain for F(Z) the full rank decomposition

F(Z) = S(Z) · W(Z) = S̃(Z) · W̃(Z)    (1)

with W̃(Z) ∈ ID[Z; σ]^{r×s} obtained by extracting the nonzero rows of W(Z), and S̃(Z) ∈ Q[Z; σ]^{m×r} obtained by extracting from S(Z) the corresponding columns. Moreover, the trailing coefficient of W̃(Z) has full row rank r, and this quantity coincides with the rank of W̃(Z).
Finally, from this last equation we see that the rank of F(Z) is bounded above by r, whereas the first equation tells us that rank F(Z) ≥ rank W(Z) = r. Thus we have found r = rank F(Z).

2.1 Order Basis

In this subsection we introduce the notion of order and of order bases for a given matrix of skew polynomials. These are the primary tools which will be used in our algorithm.

Definition 2.1. Let P(Z) ∈ Q[Z; σ]^{1×m} be a vector of skew polynomials and ~ω a multi-index of integers. Then P(Z) is said to have order ~ω if

P(Z) · F(Z) = R(Z) · Z^{~ω}    (2)

with R(Z) ∈ Q[Z; σ]^{1×s}. The R(Z) in (2) is called a residual.

In contrast to previous papers dealing with order bases [4, 5, 6, 8], we have chosen an order condition on the right. This has the advantage that, writing

F(Z) = Σ_j F_j Z^j,  P(Z) = Σ_k P_k Z^k,

we have

P(Z) · F(Z) = Σ_j S_j Z^j,  S_j = Σ_k P_k σ^k(F_{j-k}).    (3)

Hence the unknowns P_k can be obtained from a linear system built by setting the undesired coefficients of the S_j equal to zero. In order to specify these coefficients (see equation (5) below), let us write more explicitly c_j^{k,ℓ}(P(Z)) := S_j^{k,ℓ} for the coefficients occurring in (3). Notice that, though the algebra Q[Z; σ] is non-commutative, the matrix of coefficients will have elements in the field Q. Thus we may build determinants and apply other techniques known from fraction-free algorithms, which enable us to control the size of intermediate quantities and to predict common factors.

In what follows we will construct elements of Q[Z; σ]^{m×m} (and more precisely of ID[Z; σ]^{m×m}) which will enable us to describe the entire set of vectors of a given order.

Definition 2.2. Let ~ω be some multi-index. A matrix of skew polynomials M(Z) ∈ Q[Z; σ]^{m×m} is said to be an order basis of order ~ω and degree ~μ if there exists a multi-index ~μ = (μ_1, …, μ_m) such that

a) every row of M(Z) has order ~ω,

b) for every P(Z) ∈ Q[Z; σ]^{1×m} of order ~ω there exists a Q(Z) ∈ Q[Z; σ]^{1×m} such that P(Z) = Q(Z) · M(Z), where deg Q(Z)^{1,j} ≤ deg P(Z) − μ_j for all j,

c) there exists a nonzero d ∈ Q such that M(Z) = d · Z^{~μ} + L(Z), where

deg L(Z)^{k,ℓ} ≤ min{μ_k − 1, μ_ℓ − 1} for ℓ ≥ k,  deg L(Z)^{k,ℓ} ≤ min{μ_k, μ_ℓ − 1} for ℓ < k.

Remark 2.3. Note that when ~ω = p~e for a given p, then every row of Z^{p~e} has order ~ω. Hence part (b) of Definition

2.2 implies that there exists an M*(Z) ∈ Q[Z; σ]^{m×m} such that

M*(Z) · M(Z) = Z^{p~e},  deg M*(Z)^{k,ℓ} ≤ p − μ_ℓ,

that is, a type of shifted inverse. The existence of a shifted left inverse M*(Z) will enable us to generalize the notion of unimodular transformations of order bases to the case of the non-commutative algebra Q[Z; σ].

An essential implication of Definition 2.2(b) is the following:

Theorem 2.4. Suppose that there exists an order basis M(Z) of order ~ω and degree ~μ. Then the only row vector P(Z) with column degree ~μ − ~e and order ~ω is the trivial vector P(Z) = 0. Thus, for any k, a row vector with column degree ~μ − ~e + ~e_k and order ~ω is unique up to multiplication by an element of Q. In particular, an order basis is unique up to multiplication by constants from Q.

Proof: We only need to show the first part, concerning row vectors P(Z) with column degree ~μ − ~e and order ~ω, the other parts being an immediate consequence of the first. Suppose that P(Z) ≠ 0, let d = deg P(Z), and let Q(Z) be as in Definition 2.2(b). We first claim that there exists at least one index j with

μ_j + deg Q(Z)^{1,j} = d.    (4)

Otherwise, deg Q(Z)^{1,k} M(Z)^{k,ℓ} ≤ μ_k + deg Q(Z)^{1,k} < d for all k and ℓ, since deg M(Z)^{k,ℓ} ≤ μ_k by Definition 2.2(c), in contradiction with the definition of d. Let j be the largest index verifying (4). Then, again by Definition 2.2(c),

deg Σ_{k=j+1}^{m} Q(Z)^{1,k} M(Z)^{k,j} ≤ d − μ_k − 1 + deg M(Z)^{k,j} ≤ d − 1,
deg Σ_{k=1}^{j−1} Q(Z)^{1,k} M(Z)^{k,j} ≤ deg Q(Z)^{1,k} + μ_k − 1 ≤ d − 1,
deg Q(Z)^{1,j} M(Z)^{j,j} = deg Q(Z)^{1,j} + μ_j = d.

This implies that deg P(Z)^{1,j} = d ≥ μ_j, which contradicts the assumption on the degree of P(Z).

3. DETERMINANTAL REPRESENTATIONS AND MAHLER SYSTEMS

In what follows we will propose an algorithm for computing order bases M(Z) recursively for increasing order vectors. In order to predict the size of these objects and to predict common factors, we derive in this section a determinantal representation together with a particular choice of the constant d. Suppose that we are looking for a row vector P(Z) of column degree ~ν having order ~ω.
Comparing with (3), we know that P(Z) has order ~ω if and only if, for all ℓ = 1, …, s and j = 0, …, ω_ℓ − 1,

0 = c_j^{1,ℓ}(P(Z)) = Σ_{k=1}^{m} Σ_{κ=0}^{min{j, ν_k}} P_κ^{1,k} σ^κ(F_{j−κ}^{k,ℓ}).

This leads to a system of linear equations of the form

(P_0^{1,1}, …, P_{ν_1}^{1,1}, …, P_0^{1,m}, …, P_{ν_m}^{1,m}) · K(~ν + ~e, ~ω) = 0,    (5)

where the generalized Sylvester matrix is of the form K(~ν + ~e, ~ω) = (K^{k,ℓ}(ν_k + 1, ω_ℓ))_{k=1,…,m; ℓ=1,…,s}, and where the κ × ω submatrix K^{k,ℓ}(κ, ω) equals

[ σ^0(F_0^{k,ℓ})   σ^0(F_1^{k,ℓ})   ⋯   σ^0(F_{ω−1}^{k,ℓ}) ]
[ 0                σ^1(F_0^{k,ℓ})   ⋯   σ^1(F_{ω−2}^{k,ℓ}) ]
[ ⋮                                 ⋱   ⋮                   ]
[ 0                ⋯  σ^{κ−1}(F_0^{k,ℓ})  ⋯  σ^{κ−1}(F_{ω−κ}^{k,ℓ}) ]

Clearly, K^{k,ℓ}(ν_k + 1, ω_ℓ)^T (and thus K(~ν + ~e, ~ω)^T) may be written as a striped Krylov matrix [8], that is, a matrix of the form

[ F^{(1)}  C F^{(1)}  ⋯  C^{ν_1} F^{(1)} | ⋯ | F^{(m)}  C F^{(m)}  ⋯  C^{ν_m} F^{(m)} ].

However, in stepping from one column to the next, we not only multiply with a lower shift matrix but also apply the automorphism σ. Thus, in contrast to [8], here we obtain a striped Krylov matrix with a matrix C having operator-valued elements.

How can we exploit this representation in order to derive a determinantal representation of order bases? According to (5), it follows from Theorem 2.4 that if there exists an order basis M(Z) of order ~ω and degree ~μ, then K(~μ, ~ω) has full row rank, and more precisely

for k = 1, …, m:  rank K(~μ, ~ω) = rank K(~μ + ~e_k, ~ω) = |~μ|.    (6)

Suppose more generally that ~μ and ~ω are multi-indices verifying (6). We call a multigradient d = d(~μ, ~ω) any constant ±1 times the determinant of a regular submatrix K*(~μ, ~ω) of maximal order of K(~μ, ~ω), and a Mahler system corresponding to (~μ, ~ω) a matrix polynomial M(Z) with rows having order ~ω and degree structure

M(Z) = d · Z^{~μ} + lower order column degrees.

In order to show that such a system exists, we write down explicitly the linear system of equations needed to compute the unknown coefficients of the k-th row of M(Z): denote by b_k(~μ, ~ω) the row added while passing from K(~μ, ~ω) to K(~μ + ~e_k, ~ω).
Then, by (5), the vector of coefficients is a solution of the (overdetermined) system

x · K(~μ, ~ω) = d · b_k(~μ, ~ω),

which by (6) is equivalent to the system

x · K*(~μ, ~ω) = d · b*_k(~μ, ~ω),    (7)

where in b*_k(~μ, ~ω) and in K*(~μ + ~e_k, ~ω) we keep the same columns as in K*(~μ, ~ω). Notice that, by Cramer's rule, (7)

leads to a solution with coefficients in ID. Moreover, we may formally write down a determinantal representation of the elements of such an order basis, namely

M(Z)^{k,ℓ} = det [ K*(~μ + ~e_k, ~ω)   E_{ℓ, μ_ℓ − 1 + δ_{ℓ,k}}(Z) ]    (8)

with

E_{ℓ,κ}(Z) = [0, …, 0 | 1, Z, …, Z^κ | 0, …, 0]^T,    (9)

the nonzero entries in E_{ℓ,κ}(Z) occurring in the ℓ-th stripe. In addition, we have that

Σ_j M(Z)^{k,j} F(Z)^{j,ℓ} = det [ K*(~μ + ~e_k, ~ω)   E*_{ℓ, ~μ + ~e_k}(Z) ],    (10)

where

E*_{ℓ,~ν}(Z) = [F(Z)^{1,ℓ}, …, Z^{ν_1 − 1} F(Z)^{1,ℓ} | ⋯ | F(Z)^{m,ℓ}, …, Z^{ν_m − 1} F(Z)^{m,ℓ}]^T.

In both (8) and (10) the matrices have commutative entries in all but the last column. It is understood that the determinant in both cases is expanded along the last column. We finally mention that, by the uniqueness result of Theorem 2.4, any order basis of degree ~μ and order ~ω coincides, up to multiplication by some element of Q, with a Mahler system associated to (~μ, ~ω), which therefore itself is an order basis of the same degree and order. The converse statement is generally not true. However, by a particular pivoting technique we may recover order bases by computing Mahler systems.

4. THE ALGORITHM

In this section we show how to recursively compute order bases in a fraction-free way. For an order basis M(Z) of a given type (~μ, ~ω) having a Mahler system normalization, we look at the first terms of the residuals. If they are all equal to zero, then we have an order basis of a higher order. Otherwise, we give a recursive formula for building an order basis of higher order and degree. A priori this new system has coefficients in Q, since we divide through by certain factors. In our case, however, the new system will be a Mahler system according to the existence and uniqueness results established before, and hence we keep objects with coefficients in ID.

In the following theorem we give a recurrence relation which closely follows the commutative case of [8, Theorem 6.1(c)], with the resulting order bases having properties similar to [8, Theorem 7.2]
and [8, Theorem 7.3].

Theorem 4.1. Let M(Z) be an order basis corresponding to (~μ, ~ω), and let ~ω̃ := ~ω + ~e_λ. Furthermore, denote by r_j = c_{ω_λ}^{j,λ}(M(Z)) the first term of the residual for the j-th row and the λ-th column of M(Z).

a) If r_1 = ⋯ = r_m = 0, then M̃(Z) := M(Z) is an order basis of degree ~μ̃ := ~μ and order ~ω̃.

b) Otherwise, let π be the smallest index with r_π ≠ 0 and ~μ_π = min_j {~μ_j : r_j ≠ 0}. Then an order basis M̃(Z) of degree ~μ̃ := ~μ + ~e_π and order ~ω̃ with coefficients in Q is obtained via the formulas

p_π · M̃(Z)^{ℓ,k} = r_π · M(Z)^{ℓ,k} − r_ℓ · M(Z)^{π,k}    (11)

for ℓ, k = 1, 2, …, m, ℓ ≠ π, and

σ(p_π) · M̃(Z)^{π,k} = r_π · Z · M(Z)^{π,k} − Σ_{ℓ≠π} σ(p_ℓ) · M̃(Z)^{ℓ,k}    (12)

for k = 1, 2, …, m, where p_j = coefficient(M(Z)^{π,j}, Z^{μ_j + δ_{π,j} − 1}).

c) If in addition M(Z) is a Mahler system with respect to (~μ, ~ω), then M̃(Z) is also a Mahler system with respect to (~μ̃, ~ω̃). In particular, M̃(Z) has coefficients in ID.

Proof: Part (a) is clear from the fact that the rows of M(Z) have order ~ω̃ when r_1 = ⋯ = r_m = 0. For part (b) notice first that M̃(Z) has order ~ω̃ by construction, as required in Definition 2.2(a). Also, verifying the new degree constraints of Definition 2.2(c) (with ~μ replaced by ~μ̃) for the matrix M̃(Z) is straightforward and is the same as in the commutative case, see [8, Theorem 7.2]. Also notice that the leading coefficient of each M̃(Z)^{ℓ,ℓ} equals r_π by construction (though for the moment we are not sure to obtain a new order basis with coefficients in ID).

We now focus on the property of Definition 2.2(b). If P(Z) ∈ Q[Z; σ]^{1×m} has order ~ω̃, then it has order ~ω, and so there exists a Q(Z) ∈ Q[Z; σ]^{1×m} such that P(Z) = Σ_{j=1}^{m} Q(Z)^{1,j} M(Z)^{j,·} with deg Q(Z)^{1,j} ≤ deg P(Z) − μ_j, where M(Z)^{j,·} denotes the j-th row of M(Z). Applying the first set of row operations (11) to the rows ℓ ≠ π results in

P(Z) = Σ_{j≠π} Q̂(Z)^{1,j} M̃(Z)^{j,·} + Q̂(Z)^{1,π} M(Z)^{π,·},    (13)

where

Q̂(Z)^{1,j} = Q(Z)^{1,j} p_π / r_π for all j ≠ π,  and  Q̂(Z)^{1,π} = Σ_{i=1}^{m} Q(Z)^{1,i} r_i / r_π.

Note that deg Q̂(Z)^{1,j} ≤ deg P(Z) − μ_j = deg P(Z) − μ̃_j for all j ≠ π, while deg Q̂(Z)^{1,π} ≤ deg P(Z) − μ_π because of the minimality of μ_π.
Since P(Z) and all the M̃(Z)^{j,·} terms have order ~ω̃, this must also be the case for Q̂(Z)^{1,π} M(Z)^{π,·}. Hence Q̂_0^{1,π} r_π = 0, and so by the assumption on π we have that Q̂_0^{1,π} = 0. Writing Q̂(Z)^{1,π} = Q̄(Z)^{1,π} · Z gives

P(Z) = Σ_{j≠π} Q̂(Z)^{1,j} M̃(Z)^{j,·} + Q̄(Z)^{1,π} · Z · M(Z)^{π,·}    (14)

with deg Q̄(Z)^{1,π} ≤ deg P(Z) − (μ_π + 1) = deg P(Z) − μ̃_π. Completing the row operations which normalize the degrees of M̃(Z) in (12) gives a Q̃(Z) with P(Z) = Q̃(Z) M̃(Z) having the correct degree bounds. Consequently, the property of Definition 2.2(b) holds.

Finally, for establishing part (c), we know already from Section 3 and the existence of order bases of a specified degree and order that both (~μ, ~ω) and (~μ̃, ~ω̃) satisfy (6). By the uniqueness result of Theorem 2.4 we only need to show that the "leading coefficient" d̃ of M̃(Z) in Definition 2.2(c) is a multigradient of (~μ̃, ~ω̃), the latter implying that M̃(Z) is a Mahler system and in particular has coefficients in ID. Denote by d the corresponding "leading coefficient" of M(Z). In the case discussed in part (a), we do not increase the rank in going from K(~μ, ~ω) to K(~μ, ~ω̃) (we just add one column and keep full row rank); hence d̃ = d, being a multigradient with respect to (~μ, ~ω), is also a multigradient with respect to (~μ, ~ω̃). In the final case, described in part (b), we have d̃ = r_π. Using formula (10) for the residual of the π-th row of M(Z), we learn that r_π coincides (up to a sign) with the determinant of a submatrix of order |~μ̃| of K(~μ̃, ~ω̃). Since r_π ≠ 0 by construction, it follows that d̃ = r_π is a new multigradient, as required for the conclusion.

Theorem 4.1 gives a computational procedure that results in the FFreduce algorithm given in Table 1. The stopping criterion and the complexity of this algorithm are given in the next section.

Example 4.2. For the domain ID = ℤ[n], let

F(Z) = n 0 3?1 0 0?1 0 0 0 1 3n

At the first iteration we have ~ω = ~μ = (0, 0), and the constant coefficients in the first column are [n^2 + 2, −1]^T. Choosing π = 1 and performing the reduction, we obtain

M(Z) = [ (n^2 + 2) Z    0       ]
       [ 1              n^2 + 2 ]

and

M(Z) F(Z) = 0 0 n 4 n 3 5n 4n 6 0 3?1 3n 64?n? n 3n 3 : 64n

Note that the constant coefficients in the second column of R(Z) are zero, so no operations are required in increasing ~ω to (1, 1). Now ~ω = (1, 1), ~μ = (1, 0), and d = n^2 + 2.
The coefficients of Z in the first column are [n^4 + 2n^3 + 5n^2 + 4n + 6, 3]^T. Choosing π = 2 and performing the row reductions, we see that M(Z) is

[ 3Z − n^2 − 2n − 3    −n^4 − 2n^3 − 5n^2 − 4n − 6 ]
[ 1                    3Z + n^2 + 2                ]

M(Z) F(Z) is given in Figure 1, where we have divided row 1 by d and row 2 by σ(d) = n^2 + 2n + 3.

Table 1: The FFreduce Algorithm

Algorithm FFreduce
Input: matrix of skew polynomials F ∈ ID[Z; σ]^{m×s}.
Output: Mahler system M ∈ ID[Z; σ]^{m×m}, residual R ∈ ID[Z; σ]^{m×s}, degree ~μ, order ~ω, rank ρ.
Initialization: M ← I_m, R ← F, d ← 1, ~μ ← ~0, ~ω ← ~0, ρ ← 0.
While (number of zero rows of R) ≠ m − ρ:
  ρ ← 0
  For λ = 1, …, s do
    Calculate for ℓ = 1, …, m the first term of the residuals: r_ℓ ← constant coefficient of R^{ℓ,λ}
    Define the set Λ = {ℓ ∈ {1, …, m} : r_ℓ ≠ 0}
    If Λ ≠ {} then
      Choose the pivot π = min {ℓ ∈ Λ : ~μ_ℓ = min {~μ_j : j ∈ Λ}}
      Calculate for ℓ = 1, …, m, ℓ ≠ π: p_ℓ ← coefficient(M^{π,ℓ}, Z^{μ_ℓ + δ_{π,ℓ} − 1})
      Increase order for ℓ = 1, …, m, ℓ ≠ π:
        M^{ℓ,·} ← (1/d) [r_π M^{ℓ,·} − r_ℓ M^{π,·}]
        R^{ℓ,·} ← (1/d) [r_π R^{ℓ,·} − r_ℓ R^{π,·}]
      Increase order and adjust degree constraints for row π:
        M^{π,·} ← (1/σ(d)) [r_π Z M^{π,·} − Σ_{ℓ≠π} σ(p_ℓ) M^{ℓ,·}]
        R^{π,·} ← (1/σ(d)) [r_π Z R^{π,·} − Σ_{ℓ≠π} σ(p_ℓ) R^{ℓ,·}]
      Update multigradient, degree and rank: d ← r_π, ~μ ← ~μ + ~e_π, ρ ← ρ + 1
    end if
    Adjust residual in column λ: for ℓ = 1, …, m: R^{ℓ,λ} ← R^{ℓ,λ} / Z (formally)
    ~ω ← ~ω + ~e_λ
  end for
end while

Finally, with respect to the rank revealing transformation mentioned in Section 1, we have the following.

Corollary 4.3. Let M(Z) be the final order basis of order ~ω = k~e and degree ~μ, and let M*(Z) be the shifted left inverse of M(Z) as explained in Remark 2.3. Then the quantities

W(Z) = Z^{−k~e} R(Z) Z^{k~e},  T(Z^{−1}) = Z^{−k~e} M(Z),  S(Z) = Z^{−k~e} M*(Z) Z^{k~e}

solve the full rank decomposition problem (1).
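To make the recurrence of Theorem 4.1 and Table 1 concrete, the following is a minimal sketch (ours, not taken from the paper) of a commutative specialization: σ = id and scalar coefficients, i.e. ID = ℤ. It follows the cross-multiplication updates (11)-(12) with division by the previous multigradient d, and the assertions inside pdiv_int check that every such division is exact, which is the fraction-free property of Theorem 4.1(c); all function names are hypothetical helpers:

```python
# Commutative-case sketch of FFreduce (sigma = id, ID = Z).
# A polynomial in Z is a list of ints [c0, c1, ...]; a matrix entry is such a list.
from copy import deepcopy

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def coeff(p, k):
    return p[k] if 0 <= k < len(p) else 0

def padd(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a): out[i] += c
    for i, c in enumerate(b): out[i] += c
    return trim(out)

def pscale(p, c):
    return trim([c * x for x in p])

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] += ca * cb
    return trim(out)

def ptimesZ(p):
    return trim([0] + p)

def pdiv_int(p, d):
    out = []
    for c in p:
        q, r = divmod(c, d)
        assert r == 0, "fraction-free division must be exact"
        out.append(q)
    return trim(out)

def cross(row_a, row_b, ra, rb, d):
    # fraction-free cross-multiplication (11): (ra*row_a - rb*row_b)/d
    return [pdiv_int(padd(pscale(x, ra), pscale(y, -rb)), d)
            for x, y in zip(row_a, row_b)]

def ffreduce_commutative(F, sweeps):
    m, s = len(F), len(F[0])
    M = [[[1] if i == j else [0] for j in range(m)] for i in range(m)]
    R = deepcopy(F)
    d, mu, omega = 1, [0] * m, [0] * s
    for _ in range(sweeps):
        for lam in range(s):
            r = [coeff(R[l][lam], 0) for l in range(m)]   # first residual terms
            nz = [l for l in range(m) if r[l] != 0]
            if nz:
                pi = min(nz, key=lambda l: (mu[l], l))    # pivot row
                for l in range(m):
                    if l != pi:
                        M[l] = cross(M[l], M[pi], r[pi], r[l], d)
                        R[l] = cross(R[l], R[pi], r[pi], r[l], d)
                p = [coeff(M[pi][l], mu[l] - 1) for l in range(m)]
                newM = [pscale(ptimesZ(q), r[pi]) for q in M[pi]]  # r_pi * Z * row pi
                newR = [pscale(ptimesZ(q), r[pi]) for q in R[pi]]
                for l in range(m):
                    if l != pi:                            # degree adjustment (12)
                        newM = [padd(x, pscale(y, -p[l])) for x, y in zip(newM, M[l])]
                        newR = [padd(x, pscale(y, -p[l])) for x, y in zip(newR, R[l])]
                M[pi] = [pdiv_int(q, d) for q in newM]
                R[pi] = [pdiv_int(q, d) for q in newR]
                d = r[pi]
                mu[pi] += 1
            for l in range(m):                             # divide column lam by Z
                assert coeff(R[l][lam], 0) == 0
                R[l][lam] = trim(R[l][lam][1:]) or [0]
            omega[lam] += 1
    return M, R, mu, omega
```

For example, for the 2 × 1 matrix F(Z) = [2 + Z; 3 + Z], two sweeps yield an integer matrix M(Z) with both entries of M(Z)·F(Z) divisible by Z^2, and no exactness assertion fires along the way.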

Figure 1: M(Z) F(Z) after two steps in Example 4.2.

In the case where one is only interested in determining W̃(Z) in a full rank decomposition (1), one can make a simple modification of our algorithm to obtain an answer with smaller coefficients. Indeed, it will be shown in the next section (Remark 5.4) that one can in fact use the residual one iteration before completion for our rank revealing transformations. One simply uses the rows corresponding to the pivot rows of the last iteration.

Example 4.4. Let ID = ℤ[n, 2^n] and consider

F(Z) = 0?1?80 0 0?1356?988480?809?3 0?103771 750 0 0?19698?300 3 0 1 0 10 4 n (n 1) 0 0 3077(n 1) 5

which is the same as the example from [2] except that it is multiplied (on the right) by Z^4. Using our algorithm, we terminate at ~ω = (6, 6), where the residuals are not all zero in the last two steps. The trailing coefficient of the residual R(Z) obtained one iteration previously (that is, two steps earlier) at ~ω = (5, 5) has a determinant that is an integer constant times 2^n (n + 1) − 80. Writing W(Z) = Z^{−(5,5)} R(Z) Z^{(5,5)}, the determinant of the trailing coefficient of W(Z) is the same as that in [2], up to a constant. We remark that the product of all the factors removed during the complete process is

350159171880334461640000000000 · (2^n (n + 1) − 80).

5. STOPPING CRITERIA AND COMPLEXITY

In this section, we show that the stopping criterion of our algorithm ensures that the result is correct, and we discuss the worst case complexity of the algorithm. For convenience, we call the computation that increases |~ω| by 1 a step, and the computation that increases ~ω = k~e to ~ω′ = (k+1)~e an iteration, so that there are s steps in each iteration. As in the algorithm itself, we drop the need to specify the variable Z. Let M_k be the Mahler system of degree ~μ_k and R_k the residual after the k-th iteration, with R_k(0) denoting the trailing coefficient of R_k.

We first prove a lemma which relates the number of pivots used during the (k+1)-st iteration to the rank of R_k(0).

Lemma 5.1. Let k ≥ 0, and let r be the number of times ρ is incremented during the (k+1)-st iteration of FFreduce. Then r = rank R_k(0).

Proof: Denote by H_λ ∈ ID^{m×s}, λ = 1, …, s, the coefficient of Z^k of M·F during the (k+1)-st iteration at the beginning of the single step ~ω → ~ω + ~e_λ (thus H_λ is transformed into H_{λ+1} during this step). First, we claim that when row π is chosen as a pivot for column λ, the subspace generated by the rows of H_λ is the same as the subspace generated by row π of H_λ (called a pivot row) together with the rows of H_{λ+1}. This is clearly true after the order has been increased for the rows ℓ ≠ π, as the recurrence (11) is invertible. Multiplying row π of M by Z produces zeros in row π of the updated matrix, so that row π of H_λ must be kept. Finally, the adjustment from the rows ℓ ≠ π is again invertible. Thus, the subspaces are the same.

In particular, it follows that, after the k-th iteration, the rows of H_1 = R_k(0) span the same space as all pivot rows together with the rows of H_{s+1}. Recalling that the first λ − 1 columns of H_λ are zero by the order condition for Mahler systems, we see that H_{s+1} = 0. Since in addition the λ-th component of the pivot row at stage λ equals r_π ≠ 0, the pivot rows form a full row rank upper echelon matrix. Hence rank R_k(0) = r.

We remark that the algorithm of [2] also avoids the use of fractions, by taking advantage of the fraction-free Gaussian elimination of Bareiss [3] for the kernel computations used in their algorithm. The size of the coefficients in the kernel vectors can be reduced by removing the greatest common factor among the components, as done in the implementation of [2] in Maple 8. However, extraneous factors introduced in previous iterations are not removed by such a process.

We next prove a lemma on the pivots used in each iteration.

Lemma 5.2. The pivots used in one iteration of FFreduce are distinct, that is, ~μ_{k+1} ≤ ~μ_k + ~e. Moreover, rank R_k(0) is increasing in k.

Proof: By Definition 2.2(b), there exists a polynomial Q ∈ Q[Z; σ]^{m×m} such that Z · M_k = Q · M_{k+1}, with deg Q^{j,ℓ} ≤ μ_k^j + 1 − μ_{k+1}^ℓ for all j, ℓ. Comparing the coefficients at Z^{μ_k^j + 1} in position (j, ℓ), we have on the left a nonsingular lower triangular matrix (with σ(d) on the diagonal), and on the right the leading row coefficient matrix B of Q (with coefficients at the powers Z^{μ_k^j + 1 − μ_{k+1}^ℓ}) multiplied by a lower triangular matrix A. Since we are now

in the quotient field, A must be nonsingular, and so B is also nonsingular and hence lower triangular. Hence the degrees on the diagonal cannot be smaller than 0, showing that μ_k^j + 1 ≥ μ_{k+1}^j, or, in other words, ~μ_{k+1} ≤ ~μ_k + ~e. Thus, the pivots in one iteration are distinct. Also, denoting by C the trailing coefficient of Q, we easily obtain that C · R_{k+1}(0) coincides with the matrix obtained by applying σ to all elements of R_k(0) (which has the same rank as R_k(0)). Hence the rank of R_k(0) is increasing.

We are now ready to prove the correctness of the algorithm.

Theorem 5.3. The matrix R returned by FFreduce satisfies rank R(0) = rank F.

Proof: Since the rank cannot increase after multiplication by a square matrix, we get from M_k F = R_k Z^{k~e} the relation rank F ≥ rank (R_k Z^{k~e}) = rank R_k. On the other hand, using the shifted left inverse M*_k of M_k from Remark 2.3, we have that

rank F = rank (Z^{k~e} F) = rank (M*_k R_k Z^{k~e}) ≤ rank (R_k Z^{k~e}) = rank R_k,

showing that rank F = rank R_k for all k. If the algorithm stops after the (k+1)-st iteration, then

r = rank R_k(0) ≤ rank R_{k+1}(0) ≤ rank R_{k+1} ≤ r

by Lemma 5.1, Lemma 5.2 and the fact that R_{k+1} contains r nonzero rows. Consequently, we have equality everywhere, and r = rank R_{k+1} = rank F.

Remark 5.4. We have shown implicitly in the proof that, if we stop after iteration k + 1, the trailing coefficient of R_k already has full rank r. Since all the pivots in one iteration are distinct by Lemma 5.2, the set of pivot rows in R_k(0) also has rank r. Therefore, the submatrix of R_k consisting of the pivot rows already leads to a full rank decomposition (using the corresponding columns of the shifted left inverse of the previous Mahler system).
In order to determine the bit complexity of the FFreduce algorithm, we make the assumption that the lengths and costs of coefficient arithmetic satisfy $\operatorname{length}(a \cdot b) \le \operatorname{length}(a) + \operatorname{length}(b)$ and $\operatorname{cost}(a \cdot b) = O(\operatorname{length}(a) \cdot \operatorname{length}(b))$. Here length measures the total storage needed, while cost measures the number of boolean operations. With this assumption we give our complexity in the following.

Theorem 5.5. If $F(Z)$ has total degree $N$, then Algorithm FFreduce requires at most $m(N+1)$ iterations. Moreover, if $K$ is a bound on the lengths of the coefficients appearing in $F(Z), Z\,F(Z), \ldots, Z^{m(N+1)-1}F(Z)$, then the bit complexity of the algorithm is $O(m^5 s^4 N^4 K^2)$.

Proof: From the definition of $R_k$ we see that $(N-k)\vec{e} + \vec{\mu}^{(k)}$ is a componentwise upper bound for the row degrees of $R_k$. After iteration $k+1$, we know from Lemma 5.2 that each component of this upper bound either remains constant (for pivot rows) or otherwise decreases by one. Since, by Lemma 5.1, in all but the last iteration there is at least one nonzero row in $R_k$ which is not a pivot row (but which potentially could become a zero row in $R_{k+1}$), we may conclude that either the sum of the degree bounds of the nontrivial rows of $R_{k+1}$ is lowered by at least 1, or an additional zero row has been created. Taking into account that the initial sum is bounded above by $mN$, we conclude that there are at most $m(N+1)$ iterations. For bounding the bit complexity, we follow the complexity analysis in [8], where the upper bound $O(m\,|\vec{\omega}|^4 K^2)$ has been established. The key observation is that all coefficients can be written as determinants by (8) and (10). Thus Hadamard's inequality can be applied to obtain a bound on the lengths of the coefficients.

Remark 5.6. We remark that, since the cost of arithmetic in $\mathbb{K}[n]$ satisfies our complexity model when we count field operations in $\mathbb{K}$, the same complexity bound applies for $\mathbb{D} = \mathbb{K}[n]$, with $K$ now being the maximum degree (in $n$) of all the coefficients appearing in $F(Z), Z\,F(Z), \ldots, Z^{m(N+1)-1}F(Z)$.
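The fraction-free philosophy behind this analysis is the classical one of Bareiss [3]: every intermediate quantity is a minor of the input, so the divisions performed are exact and Hadamard's inequality bounds the coefficient lengths. The following Python sketch (ours, for illustration; it is commutative one-step Bareiss elimination, not the skew FFreduce algorithm itself) shows the exact-division property on an integer matrix.

```python
# Sketch (illustration only): classical one-step Bareiss fraction-free
# Gaussian elimination over the integers.  Every entry produced is a minor
# of the input, so the division by the previous pivot is exact, and the
# last pivot equals det(A).  For brevity this sketch assumes all pivots
# encountered are nonzero (no pivoting).

def bareiss(a):
    """Fraction-free elimination on a copy of a square integer matrix."""
    n = len(a)
    a = [row[:] for row in a]
    prev = 1                      # previous pivot (the empty minor is 1)
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                num = a[k][k] * a[i][j] - a[i][k] * a[k][j]
                q, r = divmod(num, prev)
                assert r == 0     # exact division: the hallmark of Bareiss
                a[i][j] = q
            a[i][k] = 0
        prev = a[k][k]
    return a

A = [[2, 1, 1],
     [1, 3, 2],
     [1, 0, 0]]
U = bareiss(A)
print(U[2][2])   # last pivot = det(A) = -1
```

FFreduce plays the same game, with the minors of (8) and (10) taking the place of the minors of an integer matrix.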
Note that our complexity analysis takes into account the coefficient growth during the algorithm.

6. CONCLUSION

In this paper we have given a fraction-free algorithm for transforming a given matrix of skew polynomials into one whose rank is determined by the trailing or leading coefficient matrix alone. The algorithm is a modification of the FFFG algorithm of [8] for the commutative case. We have shown that our algorithm runs in polynomial time, with near-linear growth in the sizes of the coefficients of the intermediate results.

There are a number of topics for future research. We plan to investigate methods for improving the efficiency of our fraction-free approach. At present, our algorithm does not appear to do as well in practice as the algorithm of Abramov and Bronstein [2] (as implemented in Maple 8) unless $s$ is small. This is because the coefficient growth in our algorithm is large, even though it is controlled. While the algorithm in [2] may have exponential coefficient growth, in practice it takes a very large input before our algorithm becomes advantageous. We would like to find fraction-free methods that more closely follow the approach of [2], where linearly independent rows in the leading or trailing coefficient matrix are not modified during an iteration. Furthermore, it is well known that modular algorithms improve on fraction-free methods by an order of magnitude. We plan to investigate such algorithms for our rank-revealing computations.

Our approach increases the degrees of the rows of an order basis from first to last. In fact, it is easy to see that one can alter this order, for example based on the minimum degree of the rows of the residual, while still making use of the fraction-free recursion given in this paper. This will be discussed in a forthcoming paper. We also plan to see how our algorithm can be used to compute GCDs of scalar skew polynomials, and to compare it with the subresultant algorithms of Li [12]. Finally, we are interested in extending our results to nested skew polynomial domains, allowing for computations in Weyl algebras. This is a difficult extension, since the corresponding associated linear systems then do not have commutative elements; as such, the standard tools that we use from linear algebra, namely determinants and Cramer's rule, do not exist in the classical sense.

7. REFERENCES

[1] S. Abramov. EG-eliminations. Journal of Difference Equations and Applications, 5:393–433, 1999.
[2] S. Abramov and M. Bronstein. On solutions of linear functional systems. In Proceedings of ISSAC 2001, London, pages 1–6. ACM Press, 2001.
[3] E. Bareiss. Sylvester's identity and multistep integer-preserving Gaussian elimination. Math. Comp., 22(103):565–578, 1968.
[4] B. Beckermann and G. Labahn. A uniform approach for Hermite Padé and simultaneous Padé approximants and their matrix generalization. Numerical Algorithms, 3:45–54, 1992.
[5] B. Beckermann and G. Labahn. A uniform approach for the fast, reliable computation of matrix-type Padé approximants. SIAM J. Matrix Anal. Appl., 15:804–823, 1994.
[6] B. Beckermann and G. Labahn. Recursiveness in matrix rational interpolation problems. J. Comput. Appl. Math., 77:5–34, 1997.
[7] B. Beckermann and G. Labahn. Effective computation of rational approximants and interpolants. Reliable Computing, 6:365–390, 2000.
[8] B. Beckermann and G. Labahn. Fraction-free computation of matrix rational interpolants and matrix GCDs. SIAM J. Matrix Anal. Appl., 22(1):114–144, 2000.
[9] W. Brown and J. Traub. On Euclid's algorithm and the theory of subresultants. J. ACM, 18:505–514, 1971.
[10] G. Collins. Subresultant and reduced polynomial remainder sequences. J. ACM, 14:128–142, 1967.
[11] K. Geddes, S. Czapor, and G. Labahn. Algorithms for Computer Algebra. Kluwer, Boston, MA, 1992.
[12] Z. Li. A Subresultant Theory for Linear Differential, Linear Difference and Ore Polynomials, with Applications. PhD thesis, Johannes Kepler University Linz, Austria, 1996.