Computing Approximate GCD of Univariate Polynomials by Structure Total Least Norm 1)


MM Research Preprints, 375-387, MMRC, AMSS, Academia Sinica, No. 24, December 2004

Computing Approximate GCD of Univariate Polynomials by Structure Total Least Norm 1)

Lihong Zhi and Zhengfeng Yang
Key Laboratory of Mathematics Mechanization
Institute of Systems Science, AMSS, Academia Sinica, Beijing 100080, China
(lzhi,zyang)@mmrc.iss.ac.cn

Abstract. The problem of approximating the greatest common divisor (GCD) of polynomials with inexact coefficients can be formulated as a low-rank approximation problem for the Sylvester matrix. This paper presents a method based on Structured Total Least Norm (STLN) for constructing the nearest Sylvester matrix of given lower rank. We present algorithms for computing the nearest GCD and a certified ε-GCD for a given tolerance ε. The running time of our algorithm is polynomial in the degrees of the polynomials. We also show the performance of the algorithms on a set of univariate polynomials.

1. Introduction

Approximate GCD computation for univariate polynomials has been studied by many authors [23, 18, 7, 9, 3, 15, 14, 19, 25, 8, 29]. In particular, in [7, 9, 25, 8, 29], the Singular Value Decomposition (SVD) of the Sylvester matrix is used to obtain an upper bound on the degree of the approximate GCD and to compute it. Suppose the SVD of the Sylvester matrix S is S = UΣV^T, where U and V are orthogonal and Σ = diag(σ_1, σ_2, ..., σ_n); then σ_k is the 2-norm distance to the nearest matrix of rank strictly less than k. However, the SVD is not appropriate for computing the distance to the nearest structured (for example, Sylvester-structured) matrix of given rank deficiency. In [6], structured low-rank approximation was proposed for computing approximate GCDs. The basic idea of that method is to alternate projections between the set of rank-k matrices and the set of structured matrices, so that the rank constraint and the structural constraint are satisfied alternately while the distance in between is reduced.
Although that algorithm converges linearly, it can be very slow, as shown by the tests in the last section. We present a method based on STLN [20] for structure-preserving low-rank approximation of a Sylvester matrix. STLN is an efficient method for obtaining an approximate solution of an overdetermined linear system AX ≈ B that preserves the given linear structure in the minimal perturbation [E F] such that (A + E)X = B + F. An algorithm for Hankel-structure-preserving low-rank approximation using STLN with the L_p norm is described in [20].

1) Partially supported by a National Key Basic Research Project of China and the Chinese National Science Foundation under Grant 10401035.

In this paper, we adapt their algorithm to compute a structure-preserving rank reduction of a Sylvester matrix. Furthermore, we derive a stable and efficient way to compute the nearest GCD and ε-GCD of univariate polynomials.

The organization of this paper is as follows. In Section 2, we introduce some notation and discuss the equivalence between the GCD problems and the low-rank approximation of a Sylvester matrix. In Section 3, we consider solving an overdetermined system with Sylvester structure based on STLN. Finally, in Section 4, we present algorithms based on STLN for computing the nearest GCD and ε-GCD, and we illustrate the experiments of our algorithms on a set of univariate polynomials.

2. Preliminaries

In this paper we are concerned with the following task.

PROBLEM 2.1 Given two polynomials f, g ∈ C[x] with deg(f) = m, deg(g) = n, and a given positive integer k ≤ min(m, n), find the perturbations Δf, Δg minimizing ‖Δf‖₂² + ‖Δg‖₂² such that deg(Δf) ≤ m, deg(Δg) ≤ n and deg(gcd(f + Δf, g + Δg)) ≥ k.

When k = 1, the above problem is the same as finding the nearest GCD defined in [16]. If we are given a tolerance ε, then the above problem is close to computing an ε-GCD as in [7]. Suppose S(f, g) is the Sylvester matrix of the polynomials f and g. We know that the degree of the GCD of f and g is equal to the rank deficiency of S. The above problem can therefore be formulated as

    min_{deg(gcd(f̃, g̃)) ≥ k} ‖f − f̃‖₂² + ‖g − g̃‖₂²  =  min_{rank(S̃) ≤ n+m−k} ‖f − f̃‖₂² + ‖g − g̃‖₂²,

where S̃ is the Sylvester matrix generated by f̃ and g̃. If we solve the overdetermined system

    A X ≈ B                                                            (1)

using STLN [22, 21, 20] for S = [B A], where B consists of the first k columns of S and A of the remaining columns, then we obtain a minimal Sylvester-structured perturbation [F E] such that

    B + F ⊆ Range(A + E).

Therefore S̃ = [B + F, A + E] is a solution that has Sylvester structure and rank(S̃) ≤ n + m − k.
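The correspondence between the gcd degree and the rank deficiency of the Sylvester matrix is easy to check numerically. The following NumPy sketch (the `sylvester` helper and its names are ours; it uses the paper's column convention of shifted copies of f followed by shifted copies of g, coefficients ordered from highest degree down) builds S for the polynomials of Example 2.1 below:

```python
import numpy as np

def sylvester(f, g):
    """Sylvester matrix of f and g (coefficient lists, highest degree first).

    Columns are x^(n-1)*f, ..., f, x^(m-1)*g, ..., g, matching the
    convention used in the paper.
    """
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for j in range(n):                 # n shifted copies of f
        S[j:j + m + 1, j] = f
    for j in range(m):                 # m shifted copies of g
        S[j:j + n + 1, n + j] = g
    return S

# f = x(x+1), g = (x+3)(x+1): gcd(f, g) = x + 1 has degree 1
f = [1.0, 1.0, 0.0]
g = [1.0, 4.0, 3.0]
S = sylvester(f, g)
deficiency = S.shape[0] - np.linalg.matrix_rank(S)
print(deficiency)   # 1 = deg gcd(f, g)
```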
The reason why we choose the first k columns to form the overdetermined system (1) can be seen from the following example and theorem.

EXAMPLE 2.1 Suppose f = x² + x = x(x + 1), g = x² + 4x + 3 = (x + 3)(x + 1), and S is the Sylvester matrix of f and g:

        [ 1  0  1  0 ]
    S = [ 1  1  4  1 ]
        [ 0  1  3  4 ]
        [ 0  0  0  3 ]

The rank deficiency of S is 1. We partition S in two ways: S = [A₁ b₁] = [b₂ A₂], where b₁ is the last column of S and b₂ is the first column of S. The overdetermined system A₁X = b₁ has no solution, while the system A₂X = b₂ has the exact solution X = [−3, 1, 0]^T.

Theorem 2.1 Given f, g ∈ C[x] with deg(f) = m, deg(g) = n and a nonzero integer k ≤ min(m, n), suppose S is the Sylvester matrix of f and g. Partition S = [B A], where B consists of the first k columns of S and A of the last n + m − k columns. Then

    rank(S) ≤ n + m − k  ⟺  AX = B has a solution.                     (2)

Proof of Theorem 2.1: ⟸: Let AX = B have a solution; then the columns of B lie in Range(A). Since B consists of the first k columns of S, it is obvious that the rank of S = [B A] is at most n + m − k.

⟹: Suppose rank(S) ≤ n + m − k. Multiplying the vector [x^{n+m−1}, ..., x, 1] on both sides of the equation AX = B, it turns out to be

    [x^{n−k−1} f, ..., f, x^{m−1} g, ..., g] X = [x^{n−1} f, ..., x^{n−k} f].   (3)

A solution X of (3) corresponds to the coefficients of polynomials u_i, v_i, with deg(u_i) ≤ n − k − 1, deg(v_i) ≤ m − 1, satisfying

    x^{n−i} f = u_i f + v_i g,   i = 1, ..., k.                        (4)

Let d = GCD(f, g), f₁ = f/d, g₁ = g/d. Since rank(S) ≤ n + m − k, we have deg(d) ≥ k and deg(f₁) ≤ m − k, deg(g₁) ≤ n − k. For any 1 ≤ i ≤ k, dividing x^{n−i} by g₁, we obtain a quotient a_i and a remainder b_i such that

    x^{n−i} = a_i g₁ + b_i,

where deg(a_i) ≤ deg(d) − i and deg(b_i) ≤ n − k − 1. Now we can check that

    u_i = b_i,   v_i = a_i f₁

are solutions of (4), since deg(u_i) ≤ n − k − 1,

    deg(v_i) = deg(a_i) + deg(f₁) ≤ deg(d) − i + deg(f₁) ≤ m − 1,

and

    v_i g + u_i f = a_i f₁ d g₁ + b_i f = f a_i g₁ + f b_i = f x^{n−i}.

Next, we show that for any given Sylvester matrix, when all the elements are allowed to be perturbed, it is always possible to find matrices [F E] with Sylvester structure such that B + F ⊆ Range(A + E), where B consists of the first k columns of S and A of the last n + m − k columns.
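Example 2.1 and the theorem can be replayed numerically: a least-squares solve makes the contrast between the two partitions visible (a small sketch with our own variable names):

```python
import numpy as np

# Sylvester matrix of f = x^2 + x and g = x^2 + 4x + 3 (Example 2.1)
S = np.array([[1., 0., 1., 0.],
              [1., 1., 4., 1.],
              [0., 1., 3., 4.],
              [0., 0., 0., 3.]])

# Partition 1: b1 = last column of S
A1, b1 = S[:, :3], S[:, 3]
x1, *_ = np.linalg.lstsq(A1, b1, rcond=None)
print(np.linalg.norm(A1 @ x1 - b1))   # large residual: no exact solution

# Partition 2: b2 = first column of S
A2, b2 = S[:, 1:], S[:, 0]
x2, *_ = np.linalg.lstsq(A2, b2, rcond=None)
print(np.linalg.norm(A2 @ x2 - b2))   # ~0: exact solution [-3, 1, 0]
print(x2)
```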

Theorem 2.2 Given integers m, n and k with k ≤ min(m, n), there exists a Sylvester matrix S ∈ C^{(m+n)×(m+n)} with rank m + n − k.

Proof of Theorem 2.2: For m and n, we can always construct polynomials f, g ∈ C[x] such that deg(f) = m, deg(g) = n, and the degree of GCD(f, g) is k. Hence the Sylvester matrix S generated by f, g has rank m + n − k.

Corollary 2.3 Given integers m, n and k with k ≤ min(m, n), for any Sylvester matrix S = [B A] ∈ C^{(m+n)×(m+n)}, where A ∈ C^{(m+n)×(m+n−k)} and B ∈ C^{(m+n)×k}, it is always possible to find a Sylvester-structured perturbation [F E] ∈ C^{(m+n)×(m+n)} such that B + F ⊆ Range(A + E).

3. STLN for Overdetermined Systems with Sylvester Structure

In this section, we illustrate how to solve the overdetermined system

    A X ≈ B                                                            (5)

where A ∈ C^{(m+n)×(m+n−k)}, B ∈ C^{(m+n)×k}, and [B A] is the Sylvester matrix generated by polynomials f, g of degrees m and n respectively. We seek a minimal Sylvester-structured perturbation [F E] such that B + F ⊆ Range(A + E).

3.1. The case k = 1

In this case, (5) becomes the overdetermined system

    A x ≈ b                                                            (6)

where A ∈ C^{(m+n)×(m+n−1)}, b ∈ C^{(m+n)×1}, and S = [b A] is a Sylvester matrix. According to Theorem 2.2 and Corollary 2.3, there always exists a Sylvester-structured perturbation [f E] such that (b + f) ∈ Range(A + E). In the following, we illustrate how to find the minimal such perturbation using STLN.

First, suppose the Sylvester matrix S is generated by

    f = a_m x^m + a_{m−1} x^{m−1} + ... + a_1 x + a_0,
    g = b_n x^n + b_{n−1} x^{n−1} + ... + b_1 x + b_0.

The Sylvester-structured perturbation [f E] of S can be represented by a vector η ∈ C^{(m+n+2)×1},

    η = (η₁, η₂, ..., η_{m+n+1}, η_{m+n+2})^T,                         (7)

where [f E] is the Sylvester matrix generated by the perturbation polynomials with coefficients η₁, ..., η_{m+1} and η_{m+2}, ..., η_{m+n+2}:

            [ η₁        0     ...  0        | η_{m+2}     0       ...  0          ]
            [ η₂        η₁    ...  0        | η_{m+3}   η_{m+2}   ...  0          ]
    [f E] = [ ...       ...   ...  ...      | ...       ...       ...  ...        ]
            [ 0   ...  η_{m+1}    η_m       | 0   ...  η_{m+n+2}  η_{m+n+1}       ]
            [ 0   ...   0     η_{m+1}       | 0   ...   0         η_{m+n+2}       ]

Here f is the first column of the above matrix. Thus we can find a matrix P₁ ∈ R^{(m+n)×(m+n+2)} such that f = P₁ η:

    P₁ = [ I_{m+1}  0 ]
         [ 0        0 ]                                                (8)

where I_{m+1} is the (m+1) × (m+1) identity matrix. We define the structured residual

    r = r(η, x) = b + f − (A + E) x.                                   (9)

We are going to solve the equality-constrained least squares problem

    min_{η,x} ‖η‖₂   subject to   r = 0.                               (10)

Formulation (10) can be solved by the penalty method [1]: (10) is transformed into

    min_{η,x} ‖ [ w r(η, x) ; η ] ‖₂,                                  (11)

where w is a large penalty value such as 10^{10}. As in [21, 20], we use a linear approximation of r(η, x) to compute the minimization of (11). Let Δη and Δx represent small changes in η and x respectively, and let ΔE represent the corresponding change in E. The first-order approximation of r(η + Δη, x + Δx) is

    r(η + Δη, x + Δx) = b + P₁(η + Δη) − (A + E + ΔE)(x + Δx)
                      ≈ b + P₁ η − (A + E) x + P₁ Δη − (A + E) Δx − ΔE x.

Introduce the Sylvester-structured matrix Y₁ ∈ C^{(m+n)×(m+n+2)} satisfying

    Y₁ Δη = ΔE x,                                                      (12)

where x = (x₁, x₂, ..., x_{m+n−1})^T. Writing u = x₁ x^{n−2} + ... + x_{n−1} and v = x_n x^{m−1} + ... + x_{m+n−1} for the polynomials formed from x, Y₁ consists of two convolution blocks: an (m+n) × (m+1) block built from the coefficients of u, acting on the perturbation coefficients of f, and an (m+n) × (n+1) block built from the coefficients of v, acting on the perturbation coefficients of g (see Example 3.1). Using (12), the first-order approximation becomes r + (P₁ − Y₁)Δη − (A + E)Δx, so (11) can be written as

    min_{Δη,Δx} ‖ [ w(P₁ − Y₁)   −w(A + E) ] [ Δη ]  +  [ w r ] ‖
                  [ I_{m+n+2}    0         ] [ Δx ]     [ η   ] ‖₂      (13)
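Both structure identities, f = P₁η from (8) and Y₁Δη = ΔE x from (12), are plain convolution facts. The following sketch (helper names are ours) checks them for m = n = 3, k = 1 with random data:

```python
import numpy as np

def conv_map(c, in_len, rows):
    """rows x in_len matrix mapping p to convolve(c, p), zero-padded at the top."""
    M = np.zeros((rows, in_len))
    pad = rows - (len(c) + in_len - 1)
    for j in range(in_len):
        M[pad + j:pad + j + len(c), j] = c
    return M

m = n = 3                                    # the sizes of Example 3.1
rng = np.random.default_rng(1)
eta = rng.standard_normal(m + n + 2)         # perturbation parameters, cf. (7)
x = rng.standard_normal(m + n - 1)           # current solution of A x ~ b

df, dg = eta[:m + 1], eta[m + 1:]            # coefficients of the f- and g-perturbations

# [f E]: Sylvester matrix generated by the perturbation polynomials
fE = np.hstack([conv_map(df, n, m + n), conv_map(dg, m, m + n)])
f_col, E = fE[:, 0], fE[:, 1:]

# (8): f is picked out of eta by P1
P1 = np.zeros((m + n, m + n + 2))
P1[:m + 1, :m + 1] = np.eye(m + 1)
assert np.allclose(f_col, P1 @ eta)

# (12): Y1 eta = E x, with the roles of eta and x exchanged
u, v = x[:n - 1], x[n - 1:]                  # polynomials formed from x
Y1 = np.hstack([conv_map(u, m + 1, m + n), conv_map(v, n + 1, m + n)])
assert np.allclose(Y1 @ eta, E @ x)
print("structure identities verified")
```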

EXAMPLE 3.1 Suppose m = n = 3, k = 1. Then

        [ 0   0   b₃  0   0  ]        [ a₃ ]
        [ a₃  0   b₂  b₃  0  ]        [ a₂ ]
    A = [ a₂  a₃  b₁  b₂  b₃ ],   b = [ a₁ ]
        [ a₁  a₂  b₀  b₁  b₂ ]        [ a₀ ]
        [ a₀  a₁  0   b₀  b₁ ]        [ 0  ]
        [ 0   a₀  0   0   b₀ ]        [ 0  ]

Then

    P₁ = [ I₄  0 ]
         [ 0   0 ]

and

         [ 0   0   0   0  | x₃  0   0   0  ]
         [ x₁  0   0   0  | x₄  x₃  0   0  ]
    Y₁ = [ x₂  x₁  0   0  | x₅  x₄  x₃  0  ]
         [ 0   x₂  x₁  0  | 0   x₅  x₄  x₃ ]
         [ 0   0   x₂  x₁ | 0   0   x₅  x₄ ]
         [ 0   0   0   x₂ | 0   0   0   x₅ ]

It is easy to see that the coefficient matrix of the vector [Δη Δx]^T in (13) also has block Toeplitz structure. We could apply a fast least squares method [12] to solve it quickly.

3.2. The case k > 1

In this case, (5) becomes the overdetermined system

    A X ≈ B                                                            (14)

where A ∈ C^{(m+n)×(m+n−k)}, B ∈ C^{(m+n)×k}, and [B A] has Sylvester structure. By stacking the columns of X = [x₁, ..., x_k] and B = [b₁, ..., b_k] into long vectors, (14) can be written as the equivalent structured overdetermined system with a single right-hand-side vector

    [ A  0  ...  0 ] [ x₁  ]     [ b₁  ]
    [ 0  A  ...  0 ] [ x₂  ]  ≈  [ b₂  ]                               (15)
    [ ...      ... ] [ ... ]     [ ... ]
    [ 0  ...  0  A ] [ x_k ]     [ b_k ]

However, the dimension of the block diagonal matrix diag(A, ..., A) is k(m+n) × k(m+n−k): the size of the linear overdetermined system (15) grows quadratically with k, so the algorithm proposed in the preceding subsection may suffer from efficiency and stability problems. In the following, rather than working on the big system (15), we introduce the k-th Sylvester matrix S_k, the (m+n−k+1) × (m+n−2k+2) submatrix of S = [B A] obtained by deleting the last k − 1 rows of S and the last k − 1 columns of coefficients of f and g

separately in S:

          [ a_m      0    ...  0    | b_n      0    ...  0   ]
          [ a_{m−1}  a_m  ...  0    | b_{n−1}  b_n  ...  0   ]
    S_k = [ ...      ...  ...  ...  | ...      ...  ...  ... ]          (16)
          [ 0        ...  0    a_0  | 0        ...  0    b_0 ]
            ⟵ n−k+1 columns ⟶        ⟵ m−k+1 columns ⟶

For k = 1, S₁ = S is the Sylvester matrix.

Theorem 3.1 Given f, g ∈ C[x] with deg(f) = m, deg(g) = n, and 1 ≤ k ≤ min(m, n), let S(f, g) be the Sylvester matrix of f and g and S_k the k-th Sylvester matrix of f and g. Then the following statements are equivalent:

(a) rank(S) ≤ m + n − k.
(b) deg(GCD(f, g)) ≥ k.
(c) rank(S_k) ≤ m + n + 1 − 2k.
(d) The rank deficiency of S_k is at least one.

Proof of Theorem 3.1: The equivalence of (a) and (b) is a classic result. Since the column dimension of S_k is m + n + 2 − 2k, it is obvious that (c) and (d) are equivalent. We only need to prove that (a) and (d) are equivalent.

Suppose rank(S) ≤ m + n − k. Denote p = GCD(f, g), f = p·u and g = p·v, where GCD(u, v) = 1. Then deg(p) ≥ k, deg(u) ≤ m − k and deg(v) ≤ n − k. We have

    v f − u g = 0.

It can be verified that

    [0_{n−k−deg(v)}, v, 0_{m−k−deg(u)}, −u]^T

is a nonzero null vector of S_k, where u and v denote the coefficient vectors of the polynomials u and v respectively. So the rank deficiency of S_k is at least one.

On the other hand, suppose the rank deficiency of S_k is at least one; then there exists a nonzero vector w ∈ C^{(m+n+2−2k)×1} such that S_k w = 0. We can form polynomials v and u from the first n − k + 1 rows and the last m − k + 1 rows of w. It is easy to check that v f + u g = 0. Let p = GCD(f, g); since f/p is a factor of u, we have deg(f/p) ≤ deg(u) ≤ m − k, i.e. deg(p) ≥ k.

By the above theorem, the overdetermined system (14) can be reformulated with a single right-hand-side vector as

    A_k x ≈ b_k                                                        (17)
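Theorem 3.1 gives a cheap numerical test for deg(GCD) ≥ k: build S_k and look at its rank deficiency. A small sketch (the `sylvester_k` helper is ours; coefficient vectors are ordered from highest degree down):

```python
import numpy as np

def sylvester_k(f, g, k):
    """k-th Sylvester matrix: (m+n-k+1) x (m+n-2k+2), shifts of f then of g."""
    m, n = len(f) - 1, len(g) - 1
    rows = m + n - k + 1
    S = np.zeros((rows, m + n - 2 * k + 2))
    for j in range(n - k + 1):                  # n-k+1 shifted copies of f
        S[j:j + m + 1, j] = f
    for j in range(m - k + 1):                  # m-k+1 shifted copies of g
        S[j:j + n + 1, n - k + 1 + j] = g
    return S

# f and g share the degree-2 factor x^2 + 2x + 4 (Example 4.1 without noise)
p = np.array([1., 2., 4.])
f = np.convolve(p, [2., 5.])    # (x^2 + 2x + 4)(2x + 5)
g = np.convolve(p, [1., 5.])    # (x^2 + 2x + 4)(x + 5)

for k in (1, 2, 3):
    Sk = sylvester_k(f, g, k)
    deficient = Sk.shape[1] - np.linalg.matrix_rank(Sk) >= 1
    print(k, deficient)          # True exactly for k <= deg gcd = 2
```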

where S_k = [b_k A_k]. The Sylvester-structured perturbation [f_k E_k] of S_k can still be represented by the vector η = (η₁, η₂, ..., η_{m+n+1}, η_{m+n+2})^T as in (7):

                [ η₁        0     ...  0        | η_{m+2}     0       ...  0        ]
                [ η₂        η₁    ...  0        | η_{m+3}   η_{m+2}   ...  0        ]
    [f_k E_k] = [ ...       ...   ...  ...      | ...       ...       ...  ...      ]
                [ 0   ...  η_{m+1}    η_m       | 0   ...  η_{m+n+2}  η_{m+n+1}     ]
                [ 0   ...   0     η_{m+1}       | 0   ...   0         η_{m+n+2}     ]
                  ⟵ n−k+1 columns ⟶               ⟵ m−k+1 columns ⟶

As in subsection 3.1, we can find a matrix P_k ∈ C^{(m+n−k+1)×(m+n+2)} of the same form as (8) such that f_k = P_k η, and a Sylvester-structured matrix Y_k ∈ C^{(m+n−k+1)×(m+n+2)} satisfying Y_k η = E_k x. Writing u = x₁ x^{n−k−1} + ... + x_{n−k} and v = x_{n−k+1} x^{m−k} + ... + x_{m+n+1−2k}, Y_k is the block convolution matrix

    Y_k = [ C_u | C_v ],                                               (18)

where C_u ∈ C^{(m+n−k+1)×(m+1)} and C_v ∈ C^{(m+n−k+1)×(n+1)} are the convolution matrices of u and v, acting on the perturbation coefficients of f and g respectively (see Example 3.2). We can use STLN to solve the minimization problem

    min_{Δη,Δx} ‖ [ w(P_k − Y_k)   −w(A_k + E_k) ] [ Δη ]  +  [ w r ] ‖
                  [ I_{m+n+2}      0             ] [ Δx ]     [ η   ] ‖₂   (19)

EXAMPLE 3.2 Suppose m = n = 3, k = 2. Then the submatrix S₂ = [b₂ A₂], where

          [ 0   b₃  0  ]        [ a₃ ]
          [ a₃  b₂  b₃ ]        [ a₂ ]         [ I₄  0 ]
    A₂ =  [ a₂  b₁  b₂ ],  b₂ = [ a₁ ],  P₂ =  [ 0   0 ]
          [ a₁  b₀  b₁ ]        [ a₀ ]
          [ a₀  0   b₀ ]        [ 0  ]

and

         [ 0   0   0   0  | x₂  0   0   0  ]
         [ x₁  0   0   0  | x₃  x₂  0   0  ]
    Y₂ = [ 0   x₁  0   0  | 0   x₃  x₂  0  ]
         [ 0   0   x₁  0  | 0   0   x₃  x₂ ]
         [ 0   0   0   x₁ | 0   0   0   x₃ ]

4. Approximate GCD Algorithm and Experiments

The following algorithm is designed to solve Problem 2.1.

Algorithm AppSylv-k

Input: A Sylvester matrix S generated by two polynomials f and g of degrees m and n (m ≥ n) respectively; an integer 1 ≤ k ≤ n; and a tolerance tol.

Output: Two polynomials f̃ and g̃ with rank(S(f̃, g̃)) ≤ m + n − k such that the Euclidean distance ‖f − f̃‖₂² + ‖g − g̃‖₂² is reduced to a minimum.

1. Compute the k-th Sylvester matrix S_k as in (16); set b_k to be the first column of S_k and A_k the last m + n − 2k + 1 columns of S_k. Let E_k = 0, f_k = 0, η = 0.

2. Compute x from min ‖A_k x − b_k‖₂ and r = b_k − A_k x. Form P_k and Y_k as shown in the previous section.

3. Repeat
   (a) Solve the least squares problem
           min_{Δη,Δx} ‖ [ w(P_k − Y_k)   −w(A_k + E_k) ] [ Δη ]  +  [ w r ] ‖
                         [ I_{m+n+2}      0             ] [ Δx ]     [ η   ] ‖₂
   (b) Set x = x + Δx, η = η + Δη.
   (c) Construct the matrices E_k and f_k from η, and Y_k from x. Set r = b_k + f_k − (A_k + E_k) x.
   until (‖Δx‖₂ ≤ tol and ‖Δη‖₂ ≤ tol).

4. Output the polynomials f̃ and g̃ formed from [b_k + f_k, A_k + E_k].

Given a tolerance ε, the algorithm AppSylv-k can be used to compute an ε-GCD of polynomials f and g of degrees m and n (m ≥ n) respectively. The method starts with k = n, using AppSylv-k to compute the minimum N = ‖f − f̃‖₂² + ‖g − g̃‖₂² with rank(S(f̃, g̃)) ≤ m + n − k. If N < ε, then we can compute the ε-GCD from the matrix S_k(f̃, g̃) [10, 27]; otherwise, we reduce k by one and repeat the AppSylv-k algorithm. Another method tests the degree of the ε-GCD by computing the singular value decomposition of the Sylvester matrix S(f, g) and finding an upper bound r on the degree of the ε-GCD, as shown in [7, 9]. We can then start with k = r rather than k = n to compute the certified ε-GCD of highest degree.

EXAMPLE 4.1 Given the polynomials p = x² + 2x + 4,

    f = p·(2x + 5) + 0.04x² + 0.03x + 0.05,
    g = p·(x + 5) + 0.01x² + 0.02x + 0.04,

and ε = 10⁻³, we are going to compute the ε-GCD of f and g. Set tol = 10⁻³, k = 2, and let S be the Sylvester matrix of f and g.
The singular values of S are:

    54.505, 32.971, 17.869, 2.472, 0.00671, 0.00197.

Construct the submatrix S₂ of S:

          [ 2.00   0      1      0     ]
          [ 9.04   2.00   7.01   1     ]
    S₂ =  [ 18.03  9.04   14.02  7.01  ]
          [ 20.05  18.03  20.04  14.02 ]
          [ 0      20.05  0      20.04 ]

Applying the algorithm AppSylv-k to S₂, after three iterations we obtain the minimally perturbed S̃₂ of rank deficiency 1:

          [ 2.0001   0        0.9933   0       ]
          [ 9.0337   2.0001   7.0176   0.9933  ]
    S̃₂ =  [ 18.0332  9.0337   14.0178  7.0176  ]
          [ 20.0500  18.0332  20.0392  14.0178 ]
          [ 0        20.0500  0        20.0392 ]

The singular values of S̃₂ are:

    48.845, 23.028, 2.4666, 1.3364·10⁻¹⁵.

We can form the polynomials f̃, g̃ from S̃₂:

    f̃ = 2.0001x³ + 9.0337x² + 18.0332x + 20.0500,
    g̃ = 0.9933x³ + 7.0176x² + 14.0178x + 20.0392.

The minimum perturbation is

    N = ‖f − f̃‖₂² + ‖g − g̃‖₂² = 0.0001583.

It is easy to run the algorithm again for k = 3: we find that the least perturbation for which f and g have a degree-three GCD is 5.91735. So for ε = 10⁻³ we can say that the highest degree of an ε-GCD of f and g is 2. Moreover, this GCD can be computed from S̃₂ as shown in [10]:

    p̃ = x² + 1.99982x + 3.98301.

The forward error of this ε-GCD is ‖p/‖p‖₂ − p̃/‖p̃‖₂‖₂ = 0.001785.

EXAMPLE 4.2 The following example is given in Karmarkar and Lakshman's paper [16]:

    f = x² − 6x + 5 = (x − 1)(x − 5),
    g = x² − 6.3x + 5.72 = (x − 1.1)(x − 5.2).

We are going to find the minimal perturbation such that f̃ and g̃ have a common root. We consider this problem in two cases: the leading coefficients can be perturbed, and the leading coefficients cannot be perturbed.

1. Case 1: the leading coefficients can be perturbed. Applying the algorithm AppSylv-k to f, g with k = 1 and tol = 10⁻³, after three iterations we obtain the polynomials

    f̃ = 0.9850x² − 6.0029x + 4.9994,
    g̃ = 1.0150x² − 6.2971x + 5.7206,

with minimum distance

    N = ‖f − f̃‖₂² + ‖g − g̃‖₂² = 0.0004663.

The common root of f̃ and g̃ is 5.09890429.

2. Case 2: the leading coefficients cannot be perturbed. This case corresponds to fixing the first and the fourth entries of η at zero. Running the algorithm AppSylv-k with k = 1 and tol = 10⁻³, after three iterations we get the polynomials

    f̃ = x² − 6.0750x + 4.9853,
    g̃ = x² − 6.2222x + 5.7353,

with minimum distance

    N = ‖f − f̃‖₂² + ‖g − g̃‖₂² = 0.01213604583.

The common root of f̃ and g̃ is 5.0969478. In the paper [16], the authors restricted themselves to the second case, where the perturbed polynomials remain monic. The minimum perturbation they obtained is 0.01213605293, corresponding to the perturbed common root 5.096939087.

    Ex | deg(f), deg(g) | k | coeff. error | iterative num. (Chu) | iterative num. | backward error (Zeng) | backward error | σ_k     | σ̃_k
    1  | 2, 2           | 1 | 2.74e−2      | 7                    | 2              | 6.79e−3               | 3.20e−3        | 5.93e−2 | 10⁻⁹
    2  | 3, 2           | 2 | 1.96e−2      | 7                    | 2              | 5.96e−4               | 3.77e−4        | 1.95e−2 | 10⁻⁹
    3  | 3, 2           | 2 | 1.71         | 15                   | 3              | 3.36e−1               | 2.08e−1        | 3.71e−1 | 10⁻⁹
    4  | 3, 3           | 2 | 2.46e−2      | 11                   | 2              | 1.91e−2               | 9.49e−3        | 1.05e−1 | 10⁻¹⁰
    5  | 6, 4           | 3 | 2.76e−2      | 10                   | 2              | 1.30e−2               | 3.99e−3        | 6.38e−2 | 10⁻⁹
    6  | 6, 4           | 3 | 3.16         | 55                   | 3              | 5.74e−1               | 2.39e−1        | 4.14e−1 | 10⁻⁹
    7  | 10, 10         | 5 | 6.73e−2      | 58                   | 2              | 2.18e−2               | 8.49e−3        | 8.03e−2 | 10⁻⁹
    8  | 14, 13         | 7 | 8.78e−2      | 39                   | 4              | 6.73e−2               | 2.05e−2        | 2.13e−1 | 10⁻⁹

    Table 1. Algorithm performance on benchmarks (univariate case)

In Table 1, we show the performance of our new algorithms for computing ε-GCDs of univariate polynomials randomly generated in Maple. deg(f) and deg(g) denote the degrees of the polynomials f and g; k is the given integer; coeff. error is the range of perturbations we add to f and g; iterative num. (Chu) is the number of iterations needed by Chu's method [6], whereas iterative num.
denotes the number of iterations of the AppSylv-k algorithm. σ_k and σ̃_k are the k-th smallest singular values of S(f, g) and S(f̃, g̃) respectively. We now explain the backward error computation. Suppose the polynomial p̃ is the computed GCD of f and g, and ũ and ṽ are the corresponding cofactors of f and g. Then the backward error in Table 1 is

    ‖f − p̃ ũ‖₂² + ‖g − p̃ ṽ‖₂².

backward error (Zeng) denotes the backward error of Zeng's algorithm [27], whereas backward error is the backward error of our algorithm.
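For concreteness, the whole AppSylv-k iteration can be condensed into a short NumPy sketch. This is our own illustrative implementation of the scheme of Sections 3-4, not the authors' code: the penalty weight w = 10⁸, the plain `lstsq` solve of the linearized problem, and the stopping rule are assumptions made here for simplicity.

```python
import numpy as np

def conv_map(c, in_len, rows):
    """rows x in_len matrix mapping p to convolve(c, p), zero-padded at the top."""
    M = np.zeros((rows, in_len))
    pad = rows - (len(c) + in_len - 1)
    for j in range(in_len):
        M[pad + j:pad + j + len(c), j] = c
    return M

def app_sylv_k(f, g, k, w=1e8, tol=1e-10, maxit=50):
    """STLN iteration forcing deg(gcd) >= k; returns the perturbed (f~, g~)."""
    f = np.asarray(f, float); g = np.asarray(g, float)
    m, n = len(f) - 1, len(g) - 1
    rows, p = m + n - k + 1, m + n + 2            # rows of S_k, length of eta
    Sk = np.hstack([conv_map(f, n - k + 1, rows),
                    conv_map(g, m - k + 1, rows)])
    b0, A0 = Sk[:, 0], Sk[:, 1:]                  # fixed data; eta holds the perturbation
    x = np.linalg.lstsq(A0, b0, rcond=None)[0]
    eta = np.zeros(p)
    Pk = np.zeros((rows, p)); Pk[:m + 1, :m + 1] = np.eye(m + 1)
    for _ in range(maxit):
        df, dg = eta[:m + 1], eta[m + 1:]
        fk = Pk @ eta
        Ek = np.hstack([conv_map(df, n - k + 1, rows)[:, 1:],
                        conv_map(dg, m - k + 1, rows)])
        r = b0 + fk - (A0 + Ek) @ x               # structured residual (9)
        u, v = x[:n - k], x[n - k:]
        Yk = np.hstack([conv_map(u, m + 1, rows), # Y_k eta = E_k x, cf. (12)/(18)
                        conv_map(v, n + 1, rows)])
        # linearized penalty least squares, cf. (13)/(19)
        M = np.block([[w * (Pk - Yk), -w * (A0 + Ek)],
                      [np.eye(p), np.zeros((p, A0.shape[1]))]])
        d = np.linalg.lstsq(M, -np.concatenate([w * r, eta]), rcond=None)[0]
        eta += d[:p]; x += d[p:]
        if np.linalg.norm(d) < tol:
            break
    return f + eta[:m + 1], g + eta[m + 1:]

# Example 4.1 data: f, g nearly share the degree-2 factor x^2 + 2x + 4
f = [2.0, 9.04, 18.03, 20.05]
g = [1.0, 7.01, 14.02, 20.04]
ft, gt = app_sylv_k(f, g, k=2)
N = np.sum((ft - f) ** 2) + np.sum((gt - g) ** 2)
print(N)   # small perturbation yielding a degree-2 common divisor
```

On this data the returned pair should make S₂(f̃, g̃) numerically rank deficient while keeping the perturbation N small, in line with Example 4.1.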

References

[1] Anda, A. A. and Park, H. Fast plane rotations with dynamic scaling. SIAM J. Matrix Anal. Appl., vol. 15, pp. 162-174, 1994.
[2] Anda, A. A. and Park, H. Self-scaling fast rotations for stiff and equality-constrained linear least squares problems. Linear Algebra and its Applications, vol. 234, pp. 137-161, 1996.
[3] Beckermann, B. and Labahn, G. When are two polynomials relatively prime? Journal of Symbolic Computation, vol. 26, no. 6, pp. 677-689, 1998. Special issue of the JSC on Symbolic Numeric Algebra for Polynomials, S. M. Watt and H. J. Stetter, editors.
[4] Björck, Å. and Paige, C. C. Loss and recapture of orthogonality in the modified Gram-Schmidt algorithm. SIAM J. Matrix Anal. Appl., vol. 13, pp. 176-190, 1992.
[5] Chin, P., Corless, R. M. and Corless, C. F. Optimization strategies for the approximate GCD problem. In Proc. 1998 Internat. Symp. Symbolic Algebraic Comput. (ISSAC 98), pp. 228-235.
[6] Chu, M. T., Funderlic, R. E. and Plemmons, R. J. Structured low rank approximation. Manuscript, 1998.
[7] Corless, R. M., Gianni, P. M., Trager, B. M. and Watt, S. M. The singular value decomposition for polynomial systems. In Proc. 1995 Internat. Symp. Symbolic Algebraic Comput. (ISSAC 95), pp. 195-207.
[8] Corless, R. M., Watt, S. M. and Zhi, L. QR factoring to compute the GCD of univariate approximate polynomials. Submitted, 2002.
[9] Emiris, I. Z., Galligo, A. and Lombardi, H. Certified approximate univariate GCDs. J. Pure Applied Algebra, 117 & 118 (May 1997), pp. 229-251. Special issue on Algorithms for Algebra.
[10] Gao, S., Kaltofen, E., May, J., Yang, Z. and Zhi, L. Approximate factorization of multivariate polynomials via differential equations. In Proc. 2004 Internat. Symp. Symbolic Algebraic Comput. (ISSAC 04), pp. 167-174.
[11] Golub, G. H. and Van Loan, C. F. Matrix Computations. Johns Hopkins, 3rd edition, 1996.
[12] Gu, M. Stable and efficient algorithms for structured systems of linear equations. SIAM J. Matrix Anal. Appl., vol. 19, pp. 279-306, 1997.
[13] Hitz, M. A., Kaltofen, E. and Lakshman Y. N. Efficient algorithms for computing the nearest polynomial with a real root and related problems. In Proc. 1999 Internat. Symp. Symbolic Algebraic Comput. (ISSAC 99) (New York, N.Y., 1999), S. Dooley, Ed., ACM Press, pp. 205-212.
[14] Hribernig, V. and Stetter, H. J. Detection and validation of clusters of polynomial zeros. J. Symb. Comput., vol. 24, pp. 667-681, 1997.
[15] Karmarkar, N. and Lakshman Y. N. Approximate polynomial greatest common divisors and nearest singular polynomials. In International Symposium on Symbolic and Algebraic Computation, Zürich, Switzerland, 1996, pp. 35-42, ACM.
[16] Karmarkar, N. K. and Lakshman Y. N. On approximate GCDs of univariate polynomials. Journal of Symbolic Computation, vol. 26, no. 6, pp. 653-666, 1998. Special issue of the JSC on Symbolic Numeric Algebra for Polynomials, S. M. Watt and H. J. Stetter, editors.
[17] Li, T. Y. and Zeng, Z. A rank-revealing method and its application. Manuscript available at http://www.neiu.edu/~zzeng/papers.htm, 2003.
[18] Noda, M. T. and Sasaki, T. Approximate GCD and its application to ill-conditioned algebraic equations. J. Comput. Appl. Math., vol. 38, pp. 335-351, 1991.
[19] Pan, V. Y. Numerical computation of a polynomial GCD and extensions. Information and Computation, vol. 167, pp. 71-85, 2001.

[20] Park, H., Zhang, L. and Rosen, J. B. Low rank approximation of a Hankel matrix by structured total least norm. BIT Numerical Mathematics, vol. 39, no. 4, pp. 757-779, 1999.
[21] Rosen, J. B., Park, H. and Glick, J. Total least norm formulation and solution for structured problems. SIAM J. Matrix Anal. Appl., vol. 17, no. 1, pp. 110-128, 1996.
[22] Rosen, J. B., Park, H. and Glick, J. Structured total least norm for nonlinear problems. SIAM J. Matrix Anal. Appl., vol. 20, no. 1, pp. 14-30, 1999.
[23] Schönhage, A. Quasi-gcd computations. Journal of Complexity, vol. 1, pp. 118-137, 1985.
[24] Van Loan, C. F. On the method of weighting for equality constrained least squares problems. SIAM J. Numer. Anal., vol. 22, pp. 851-864, 1985.
[25] Zeng, Z. A method computing multiple roots of inexact polynomials. In Proc. 2003 Internat. Symp. Symbolic Algebraic Comput. (ISSAC 03), pp. 266-272.
[26] Zeng, Z. The approximate GCD of inexact polynomials. Part I: a univariate algorithm. Manuscript, 2004.
[27] Zeng, Z. and Dayton, B. H. The approximate GCD of inexact polynomials. Part II: a multivariate algorithm. In Proc. 2004 Internat. Symp. Symbolic Algebraic Comput. (ISSAC 2004), pp. 320-327.
[28] Zhi, L. and Noda, M. T. Approximate GCD of multivariate polynomials. In Proc. ASCM 2000, World Scientific Press, pp. 9-18, 2000.
[29] Zhi, L. Displacement structure in computing approximate GCD of univariate polynomials. In Proc. Sixth Asian Symposium on Computer Mathematics (ASCM 2003) (Singapore, 2003), Z. Li and W. Sit, Eds., vol. 10 of Lecture Notes Series on Computing, World Scientific, pp. 288-298.