Computing Approximate GCD of Univariate Polynomials by Structure Total Least Norm 1)


MM Research Preprints, 375–387
MMRC, AMSS, Academia Sinica, No. 24, December 2004

Lihong Zhi and Zhengfeng Yang
Key Laboratory of Mathematics Mechanization
Institute of Systems Science, AMSS, Academia Sinica, Beijing, China

1) Partially supported by a National Key Basic Research Project of China and the Chinese National Science Foundation.

Abstract. The problem of approximating the greatest common divisor (GCD) of polynomials with inexact coefficients can be formulated as a low rank approximation problem for the Sylvester matrix. This paper presents a method based on Structured Total Least Norm (STLN) for constructing the nearest Sylvester matrix of given lower rank. We present algorithms for computing the nearest GCD and a certified ɛ-GCD for a given tolerance ɛ. The running time of our algorithms is polynomial in the degrees of the polynomials. We also show the performance of the algorithms on a set of univariate polynomials.

1. Introduction

Approximate GCD computation for univariate polynomials has been studied by many authors [23, 18, 7, 9, 3, 15, 14, 19, 25, 8, 29]. In particular, in [7, 9, 25, 8, 29], the Singular Value Decomposition (SVD) of the Sylvester matrix is used to obtain an upper bound on the degree of the approximate GCD and to compute it. Suppose the SVD of the Sylvester matrix S is S = UΣV^T, where U and V are orthogonal and Σ = diag(σ₁, σ₂, …, σₙ); then σ_k is the 2-norm distance to the nearest matrix of rank strictly less than k. However, the SVD is not appropriate for computing the distance to the nearest structured (for example, Sylvester-structured) matrix of given rank deficiency. In [6], structured low rank approximation was proposed for computing approximate GCDs. The basic idea of that method is to alternate projections between the set of rank-k matrices and the set of structured matrices, so that the rank constraint and the structural constraint are satisfied alternately while the distance between the two sets is being reduced. Although the algorithm is linearly convergent, it can be very slow, as shown by the tests in the last section.

We present a method based on STLN [20] for structure-preserving low rank approximation of a Sylvester matrix. STLN is an efficient method for obtaining an approximate solution X of an overdetermined linear system AX ≈ B that preserves a given linear structure in the minimal perturbation [E F] such that (A + E)X = B + F. An algorithm for Hankel-structure-preserving low rank approximation using STLN with the L_p norm is described in [20].
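For readers who want to experiment, here is a minimal Python/NumPy sketch (our illustration, not code from the paper) of a Sylvester matrix in the column convention used in Example 3.1 below, together with the SVD-based rank test just mentioned; the helper name sylvester is our own choice.

```python
import numpy as np

def sylvester(f, g):
    """Sylvester matrix S(f, g), sized (m+n) x (m+n).

    f, g are coefficient sequences [a_m, ..., a_0], [b_n, ..., b_0]
    (highest degree first).  The first n columns hold the shifted
    coefficients of f, the remaining m columns those of g.
    """
    f, g = np.asarray(f, float), np.asarray(g, float)
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for j in range(n):                 # column j is x^(n-1-j) * f
        S[j:j + m + 1, j] = f
    for j in range(m):                 # column n+j is x^(m-1-j) * g
        S[j:j + n + 1, n + j] = g
    return S

# f = x^2 + x and g = x^2 + 4x + 3 share the factor x + 1, so the
# smallest singular value of S must vanish (rank deficiency 1).
print(np.linalg.svd(sylvester([1, 1, 0], [1, 4, 3]), compute_uv=False))
```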

In this paper, we adapt their algorithm to compute a structure-preserving rank reduction of a Sylvester matrix. Furthermore, we derive a stable and efficient way to compute the nearest GCD and an ɛ-GCD of univariate polynomials.

The organization of this paper is as follows. In Section 2, we introduce some notation and discuss the equivalence between the GCD problem and the low rank approximation of a Sylvester matrix. In Section 3, we consider solving an overdetermined system with Sylvester structure based on STLN. Finally, in Section 4, we present algorithms based on STLN for computing the nearest GCD and ɛ-GCD, and we illustrate the experiments of our algorithms on a set of univariate polynomials.

2. Preliminaries

In this paper we are concerned with the following task.

PROBLEM 2.1 Given two polynomials f, g ∈ C[x] with deg(f) = m, deg(g) = n, and a positive integer k ≤ min(m, n), find the perturbation minimizing ‖Δf‖₂² + ‖Δg‖₂² such that deg(Δf) ≤ m, deg(Δg) ≤ n and deg(GCD(f + Δf, g + Δg)) ≥ k.

When k = 1, the above problem is the same as finding the nearest GCD defined in [16]. If we are given a tolerance ɛ, then the above problem is close to computing an ɛ-GCD as in [7].

Suppose S(f, g) is the Sylvester matrix of the polynomials f and g. The degree of the GCD of f and g is equal to the rank deficiency of S. The above problem can therefore be formulated as

$$\min_{\deg(\gcd(\tilde f,\tilde g))\ge k}\ \|f-\tilde f\|_2^2+\|g-\tilde g\|_2^2\quad\Longleftrightarrow\quad\min_{\operatorname{rank}(\tilde S)\le n+m-k}\ \|f-\tilde f\|_2^2+\|g-\tilde g\|_2^2,$$

where S̃ is the Sylvester matrix generated by f̃ and g̃. If we solve the overdetermined system

  A X ≈ B   (1)

by STLN [22, 21, 20] for S = [B A], where B consists of the first k columns of S and A of the remaining columns, then we obtain a minimal Sylvester-structured perturbation [F E] such that

  B + F ∈ Range(A + E).

Therefore, S̃ = [B + F, A + E] is a solution that has Sylvester structure and rank(S̃) ≤ n + m − k. The reason why we choose the first k columns to form the overdetermined system (1) can be seen from the following example and theorem.

EXAMPLE 2.1 Suppose f = x² + x = x·(x + 1) and g = x² + 4x + 3 = (x + 3)·(x + 1), and let S be the Sylvester matrix of f and g:

$$S=\begin{pmatrix}1&0&1&0\\ 1&1&4&1\\ 0&1&3&4\\ 0&0&0&3\end{pmatrix}.$$

The rank deficiency of S is 1. We partition S in two ways: S = [A₁ b₁] = [b₂ A₂], where b₁ is the last column of S, whereas b₂ is the first column of S. The overdetermined system A₁X = b₁ has no solution, while the system A₂X = b₂ has the exact solution X = [−3, 1, 0]^T.
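Continuing the sketch above (again our own illustration), one can check numerically that the choice of partition matters exactly as Example 2.1 states:

```python
import numpy as np

S = sylvester([1, 1, 0], [1, 4, 3])   # the 4 x 4 matrix of Example 2.1

# S = [A1 b1]: b1 is the last column.  The residual stays far from 0.
A1, b1 = S[:, :3], S[:, 3]
x1 = np.linalg.lstsq(A1, b1, rcond=None)[0]
print(np.linalg.norm(A1 @ x1 - b1))   # > 0: A1 X = b1 has no solution

# S = [b2 A2]: b2 is the first column.  Exact solution X = [-3, 1, 0]^T.
A2, b2 = S[:, 1:], S[:, 0]
x2 = np.linalg.lstsq(A2, b2, rcond=None)[0]
print(x2, np.linalg.norm(A2 @ x2 - b2))   # residual ~ 0
```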

Theorem 2.1 Given f, g ∈ C[x] with deg(f) = m, deg(g) = n and a nonzero integer k ≤ min(m, n), suppose S is the Sylvester matrix of f and g. Partition S = [B A], where B consists of the first k columns of S and A of the last n + m − k columns of S. Then

  rank(S) ≤ n + m − k  ⟺  AX = B has a solution.   (2)

Proof of Theorem 2.1:
⟸ Let AX = B have a solution; then B ⊆ Range(A). Since B consists of the first k columns of S, it is obvious that the rank of S = [B A] is at most n + m − k.
⟹ Suppose rank(S) ≤ n + m − k. Multiplying both sides of the equation AX = B by the row vector [x^{n+m−1}, …, x, 1], it turns into

  [x^{n−k−1} f, …, f, x^{m−1} g, …, g] X = [x^{n−1} f, …, x^{n−k} f].   (3)

A solution X of (3) corresponds to the coefficients of polynomials u_i, v_i, with deg(u_i) ≤ n − k − 1 and deg(v_i) ≤ m − 1, satisfying

  x^{n−i} f = u_i f + v_i g,  i = 1, …, k.   (4)

Let d = GCD(f, g), f₁ = f/d, g₁ = g/d. Since rank(S) ≤ n + m − k, we have deg(d) ≥ k, and hence deg(f₁) ≤ m − k and deg(g₁) ≤ n − k. For any 1 ≤ i ≤ k, dividing x^{n−i} by g₁ gives a quotient a_i and a remainder b_i such that

  x^{n−i} = a_i g₁ + b_i,

where deg(a_i) ≤ deg(d) − i and deg(b_i) ≤ n − k − 1. Now we can check that

  u_i = b_i,  v_i = a_i f₁

are solutions of (4), since deg(u_i) ≤ n − k − 1,

  deg(v_i) = deg(a_i) + deg(f₁) ≤ deg(d) − i + deg(f₁) ≤ m − 1,

and

  v_i g + u_i f = a_i f₁ d g₁ + b_i f = f a_i g₁ + f b_i = f x^{n−i}.  □
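Theorem 2.1 is easy to test numerically. The sketch below (our illustration, reusing the helper above) builds f and g with a common factor of degree k = 2 and checks that B, the first k columns, lies in the range of the remaining columns.

```python
import numpy as np

rng = np.random.default_rng(0)
d  = rng.standard_normal(3)                # common factor, degree 2
f1 = rng.standard_normal(4)                # cofactor, degree 3
g1 = rng.standard_normal(3)                # cofactor, degree 2
f, g = np.polymul(d, f1), np.polymul(d, g1)    # m = 5, n = 4, k = 2

S = sylvester(f, g)
m, n, k = 5, 4, 2
print(np.linalg.matrix_rank(S), m + n - k)     # rank(S) = m + n - k

B, A = S[:, :k], S[:, k:]                      # partition S = [B A]
X = np.linalg.lstsq(A, B, rcond=None)[0]
print(np.linalg.norm(A @ X - B))               # ~ 0: AX = B is solvable
```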

Next, we show that for any given Sylvester matrix, when all the entries are allowed to be perturbed, it is always possible to find matrices [F E] with Sylvester structure such that B + F ∈ Range(A + E), where B consists of the first k columns of S and A of the last n + m − k columns of S.

Theorem 2.2 Given integers m, n and k with k ≤ min(m, n), there exists a Sylvester matrix S ∈ C^{(m+n)×(m+n)} with rank m + n − k.

Proof of Theorem 2.2: For m and n, we can always construct polynomials f, g ∈ C[x] such that deg(f) = m, deg(g) = n, and the degree of GCD(f, g) is k. Then S, the Sylvester matrix generated by f and g, has rank m + n − k. □

Corollary 2.3 Given integers m, n and k with k ≤ min(m, n), for any Sylvester matrix S = [B A] ∈ C^{(m+n)×(m+n)}, where A ∈ C^{(m+n)×(m+n−k)} and B ∈ C^{(m+n)×k}, it is always possible to find a Sylvester-structured perturbation [F E] ∈ C^{(m+n)×(m+n)} such that B + F ∈ Range(A + E).

3. STLN for Overdetermined Systems with Sylvester Structure

In this section, we illustrate how to solve the overdetermined system

  A X ≈ B   (5)

where A ∈ C^{(m+n)×(m+n−k)}, B ∈ C^{(m+n)×k}, and [B A] is the Sylvester matrix generated by polynomials f, g of degrees m and n respectively. We seek the minimal Sylvester-structured perturbation [F E] such that

  B + F ∈ Range(A + E).

3.1. The case k = 1

In this case, (5) becomes the overdetermined system

  A x ≈ b   (6)

where A ∈ C^{(m+n)×(m+n−1)}, b ∈ C^{(m+n)×1}, and S = [b A] is a Sylvester matrix. According to Theorem 2.2 and Corollary 2.3, there always exists a Sylvester-structured perturbation [f E] such that (b + f) ∈ Range(A + E). In the following, we illustrate how to find the minimal such perturbation using STLN.

First, suppose the Sylvester matrix S is generated by

  f = a_m x^m + a_{m−1} x^{m−1} + ⋯ + a_1 x + a_0,  g = b_n x^n + b_{n−1} x^{n−1} + ⋯ + b_1 x + b_0.

The Sylvester-structured perturbation [f E] of S can be represented by a vector η ∈ C^{(m+n+2)×1},

  η = (η₁, η₂, …, η_{m+n+1}, η_{m+n+2})^T,   (7)

where

$$[f\;E]=\begin{pmatrix}
\eta_1 & & & \eta_{m+2} & & \\
\eta_2 & \ddots & & \eta_{m+3} & \ddots & \\
\vdots & & \eta_1 & \vdots & & \eta_{m+2}\\
\eta_{m+1} & & \eta_2 & \eta_{m+n+2} & & \eta_{m+3}\\
 & \ddots & \vdots & & \ddots & \vdots\\
 & & \eta_{m+1} & & & \eta_{m+n+2}
\end{pmatrix}$$

with n columns in the first block and m columns in the second. The perturbation f of b is the first column of the above matrix. Thus, we can find a matrix P₁ ∈ R^{(m+n)×(m+n+2)} such that f = P₁ η:

$$P_1=\begin{pmatrix} I_{m+1} & 0\\ 0 & 0\end{pmatrix},\qquad (8)$$

where I_{m+1} is the (m+1) × (m+1) identity matrix.
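Since [f E] is itself the Sylvester matrix of the two perturbation polynomials, it is easy to generate from η. The sketch below (our naming) does this and also builds the selection matrix P₁ of (8).

```python
import numpy as np

def perturbation(eta, m, n):
    """[f E]: the Sylvester-structured perturbation built from eta.

    eta[:m+1] are the perturbations of the coefficients of f and
    eta[m+1:] those of g, so [f E] is simply sylvester() applied
    to the two perturbation polynomials.
    """
    return sylvester(eta[:m + 1], eta[m + 1:])

def P_matrix(m, n):
    """P1 of (8): f = P1 @ eta extracts the first column of [f E]."""
    P = np.zeros((m + n, m + n + 2))
    P[:m + 1, :m + 1] = np.eye(m + 1)
    return P

eta = np.arange(1.0, 9.0)                # m = n = 3, eta_1..eta_8
fE = perturbation(eta, 3, 3)
print(np.allclose(fE[:, 0], P_matrix(3, 3) @ eta))   # True
```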

We define the structured residual

  r = r(η, x) = b + f − (A + E)x.   (9)

We are going to solve the equality-constrained least squares problem

  min_{η,x} ‖η‖₂  subject to  r = 0.   (10)

Formulation (10) can be solved by the penalty method [1]: (10) is transformed into

$$\min_{\eta,x}\left\|\begin{pmatrix} w\,r(\eta,x)\\ \eta\end{pmatrix}\right\|_2,\qquad (11)$$

where w is a large penalty value. As in [21, 20], we use a linear approximation of r(η, x) to compute the minimizer of (11). Let Δη and Δx represent small changes in η and x respectively, and let ΔE represent the corresponding change in E. The first order approximation of r(η + Δη, x + Δx) is

  r(η + Δη, x + Δx) = b + P₁(η + Δη) − (A + E + ΔE)(x + Δx)
            ≈ b + P₁η − (A + E)x + P₁Δη − (A + E)Δx − ΔE·x.

Introduce a Sylvester-structured matrix Y₁ ∈ C^{(m+n)×(m+n+2)} satisfying

  Y₁ Δη = ΔE·x,   (12)

where x = (x₁, x₂, …, x_{m+n−1})^T. Y₁ has the form

$$Y_1=\begin{pmatrix}
0 & & & x_n & & \\
x_1 & \ddots & & x_{n+1} & \ddots & \\
\vdots & & 0 & \vdots & & x_n\\
x_{n-1} & & x_1 & x_{m+n-1} & & x_{n+1}\\
 & \ddots & \vdots & & \ddots & \vdots\\
 & & x_{n-1} & & & x_{m+n-1}
\end{pmatrix}$$

with m + 1 columns in the first block and n + 1 columns in the second. Then (11) can be written as

$$\min_{\Delta x,\,\Delta\eta}\left\|\begin{pmatrix} w(Y_1-P_1) & w(A+E)\\ I_{m+n+2} & 0\end{pmatrix}\begin{pmatrix}\Delta\eta\\ \Delta x\end{pmatrix}+\begin{pmatrix}-w\,r\\ \eta\end{pmatrix}\right\|_2.\qquad (13)$$
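One linearized step of (13) can be written down directly. The following sketch (our helper names; w is an assumed penalty value) forms Y₁ and solves the least squares problem for (Δη, Δx).

```python
import numpy as np

def Y1_matrix(x, m, n):
    """Y1 of (12): Y1 @ d_eta equals dE @ x in the k = 1 case.

    First block: m+1 shifted copies of (0, x_1..x_{n-1}, 0, ...);
    second block: n+1 shifted copies of (x_n..x_{m+n-1}, 0, ...).
    """
    N = m + n
    Y = np.zeros((N, m + n + 2))
    c1 = np.zeros(N); c1[1:n] = x[:n - 1]
    c2 = np.zeros(N); c2[:m] = x[n - 1:]
    for j in range(m + 1):
        Y[j:, j] = c1[:N - j]
    for j in range(n + 1):
        Y[j:, m + 1 + j] = c2[:N - j]
    return Y

def stln_step(A, E, P, Y, r, eta, w=1e8):
    """Solve the linearized problem (13) for (d_eta, d_x)."""
    p, q = P.shape[1], A.shape[1]
    M = np.block([[w * (Y - P), w * (A + E)],
                  [np.eye(p), np.zeros((p, q))]])
    rhs = np.concatenate([w * r, -eta])    # right-hand side of (13)
    d = np.linalg.lstsq(M, rhs, rcond=None)[0]
    return d[:p], d[p:]
```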

EXAMPLE 3.1 Suppose m = n = 3 and k = 1. Then

$$A=\begin{pmatrix}
0&0&b_3&0&0\\
a_3&0&b_2&b_3&0\\
a_2&a_3&b_1&b_2&b_3\\
a_1&a_2&b_0&b_1&b_2\\
a_0&a_1&0&b_0&b_1\\
0&a_0&0&0&b_0
\end{pmatrix},\qquad
b=\begin{pmatrix}a_3\\a_2\\a_1\\a_0\\0\\0\end{pmatrix},$$

and

$$P_1=\begin{pmatrix} I_4 & 0\\ 0 & 0\end{pmatrix}\in\mathbb{R}^{6\times 8},\qquad
Y_1=\begin{pmatrix}
0&0&0&0&x_3&0&0&0\\
x_1&0&0&0&x_4&x_3&0&0\\
x_2&x_1&0&0&x_5&x_4&x_3&0\\
0&x_2&x_1&0&0&x_5&x_4&x_3\\
0&0&x_2&x_1&0&0&x_5&x_4\\
0&0&0&x_2&0&0&0&x_5
\end{pmatrix}.$$

It is easy to see that the coefficient matrix of the vector [Δη Δx]^T in (13) is also of block Toeplitz structure. We could apply a fast least squares method [12] to solve it quickly.
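As a sanity check of (12) (our illustration, reusing the helpers above): for random η and x in the setting of Example 3.1, Y₁η must coincide with E·x, where E is [f E] without its first column.

```python
import numpy as np

m = n = 3
rng = np.random.default_rng(1)
eta = rng.standard_normal(m + n + 2)
x = rng.standard_normal(m + n - 1)

E = perturbation(eta, m, n)[:, 1:]     # drop the first column f
print(np.allclose(Y1_matrix(x, m, n) @ eta, E @ x))   # True
```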

3.2. The case k > 1

In this case, (5) becomes the overdetermined system

  A X ≈ B   (14)

where A ∈ C^{(m+n)×(m+n−k)}, B ∈ C^{(m+n)×k}, and [B A] has Sylvester structure. By stacking the columns of X = [x₁, …, x_k] and B = [b₁, …, b_k] into long vectors, (14) can be written as the equivalent structured overdetermined system with a single right-hand side vector:

$$\begin{pmatrix} A & & \\ & \ddots & \\ & & A\end{pmatrix}\begin{pmatrix}x_1\\ \vdots\\ x_k\end{pmatrix}\approx\begin{pmatrix}b_1\\ \vdots\\ b_k\end{pmatrix}.\qquad (15)$$

However, the dimension of the block diagonal matrix diag(A, …, A) is k(m+n) × k(m+n−k), so the size of the linear overdetermined system (15) increases quadratically with k, and the algorithm proposed in the previous subsection may suffer from efficiency and stability problems. In the following, rather than working on the big system (15), we introduce the k-th Sylvester matrix S_k, the (m+n−k+1) × (m+n−2k+2) submatrix of S = [B A] obtained by deleting the last k − 1 rows of S and the last k − 1 columns of coefficients of f and of g separately in S:

$$S_k=\begin{pmatrix}
a_m & & & b_n & & \\
a_{m-1} & \ddots & & b_{n-1} & \ddots & \\
\vdots & & a_m & \vdots & & b_n\\
a_0 & & a_{m-1} & b_0 & & b_{n-1}\\
 & \ddots & \vdots & & \ddots & \vdots\\
 & & a_0 & & & b_0
\end{pmatrix}\qquad (16)$$

with n − k + 1 columns in the first block and m − k + 1 columns in the second. For k = 1, S₁ = S is the Sylvester matrix.

Theorem 3.1 Given f, g ∈ C[x] with deg(f) = m, deg(g) = n, and 1 ≤ k ≤ min(m, n), let S(f, g) be the Sylvester matrix of f and g and S_k the k-th Sylvester matrix of f and g. Then the following statements are equivalent:
(a) rank(S) ≤ m + n − k.
(b) deg(GCD(f, g)) ≥ k.
(c) rank(S_k) ≤ m + n + 1 − 2k.
(d) The rank deficiency of S_k is at least one.

Proof of Theorem 3.1: The equivalence of (a) and (b) is a classical result. Since the column dimension of S_k is m + n + 2 − 2k, it is obvious that (c) and (d) are equivalent. We only need to prove that (a) and (d) are equivalent.

Suppose rank(S) ≤ m + n − k. Denote p = GCD(f, g), f = p·u and g = p·v, where GCD(u, v) = 1. Then deg(p) ≥ k, deg(u) ≤ m − k and deg(v) ≤ n − k. We have

  v f − u g = 0.

It can be verified that

  [0_{n−k−deg(v)}, v, 0_{m−k−deg(u)}, −u]^T

is a nonzero null vector of S_k, where u and v denote the coefficient vectors of the polynomials u and v respectively. So the rank deficiency of S_k is at least one.

On the other hand, suppose the rank deficiency of S_k is at least one; then there exists a nonzero vector w ∈ C^{(m+n+2−2k)×1} such that S_k w = 0. We can form polynomials v and u from the first n − k + 1 and the last m − k + 1 entries of w. It is easy to check that v f + u g = 0. Suppose p = GCD(f, g); since f/p is a factor of u, we have deg(f/p) ≤ deg(u) ≤ m − k, and hence deg(p) ≥ k. □

By the above theorem, the overdetermined system (14) can be reformulated with a single right-hand side vector:

  A_k x ≈ b_k,   (17)

where S_k = [b_k A_k].
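Since S_k is just S with the last k − 1 rows and the last k − 1 shifted copies of f and of g removed, it is one slicing step away from the earlier sketch (helper name ours):

```python
def sylvester_k(f, g, k):
    """k-th Sylvester matrix S_k of (16), sized
    (m+n-k+1) x (m+n-2k+2); S_1 is the full Sylvester matrix."""
    m, n = len(f) - 1, len(g) - 1
    S = sylvester(f, g)
    cols = list(range(n - k + 1)) + list(range(n, n + m - k + 1))
    return S[:m + n - k + 1, cols]
```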

The Sylvester-structured perturbation [f_k E_k] of S_k can still be represented by the vector η = (η₁, η₂, …, η_{m+n+1}, η_{m+n+2})^T of (7), with

$$[f_k\;E_k]=\begin{pmatrix}
\eta_1 & & & \eta_{m+2} & & \\
\eta_2 & \ddots & & \eta_{m+3} & \ddots & \\
\vdots & & \eta_1 & \vdots & & \eta_{m+2}\\
\eta_{m+1} & & \eta_2 & \eta_{m+n+2} & & \eta_{m+3}\\
 & \ddots & \vdots & & \ddots & \vdots\\
 & & \eta_{m+1} & & & \eta_{m+n+2}
\end{pmatrix}$$

with n − k + 1 columns in the first block and m − k + 1 columns in the second. As in Subsection 3.1, we can find a matrix P_k ∈ C^{(m+n−k+1)×(m+n+2)} of the same form as (8) such that f_k = P_k η, and a Sylvester-structured matrix Y_k ∈ C^{(m+n−k+1)×(m+n+2)} satisfying

  Y_k η = E_k x,

where x = (x₁, …, x_{m+n−2k+1})^T and Y_k has the Sylvester structure

$$Y_k=\begin{pmatrix}
0 & & & x_{n+1-k} & & \\
x_1 & \ddots & & x_{n+2-k} & \ddots & \\
\vdots & & 0 & \vdots & & x_{n+1-k}\\
x_{n-k} & & x_1 & x_{m+n+1-2k} & & x_{n+2-k}\\
 & \ddots & \vdots & & \ddots & \vdots\\
 & & x_{n-k} & & & x_{m+n+1-2k}
\end{pmatrix}\qquad (18)$$

with m + 1 columns in the first block and n + 1 columns in the second. We can use STLN to solve the minimization problem

$$\min_{\Delta x,\,\Delta\eta}\left\|\begin{pmatrix} w(Y_k-P_k) & w(A_k+E_k)\\ I_{m+n+2} & 0\end{pmatrix}\begin{pmatrix}\Delta\eta\\ \Delta x\end{pmatrix}+\begin{pmatrix}-w\,r\\ \eta\end{pmatrix}\right\|_2.\qquad (19)$$

EXAMPLE 3.2 Suppose m = n = 3 and k = 2. Then the submatrix S₂ = [b₂ A₂], where

$$A_2=\begin{pmatrix}0&b_3&0\\ a_3&b_2&b_3\\ a_2&b_1&b_2\\ a_1&b_0&b_1\\ a_0&0&b_0\end{pmatrix},\qquad
b_2=\begin{pmatrix}a_3\\a_2\\a_1\\a_0\\0\end{pmatrix},\qquad
P_2=\begin{pmatrix}I_4&0\\ 0&0\end{pmatrix}\in\mathbb{R}^{5\times 8},$$

and

$$Y_2=\begin{pmatrix}
0&0&0&0&x_2&0&0&0\\
x_1&0&0&0&x_3&x_2&0&0\\
0&x_1&0&0&0&x_3&x_2&0\\
0&0&x_1&0&0&0&x_3&x_2\\
0&0&0&x_1&0&0&0&x_3
\end{pmatrix}.$$
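The general Y_k of (18) differs from Y₁ only in the band lengths; a sketch (our naming; it reduces to Y1_matrix for k = 1):

```python
import numpy as np

def Yk_matrix(x, m, n, k):
    """Y_k of (18): Y_k @ d_eta equals dE_k @ x, with x of length
    m + n - 2k + 1.  Block 1: m+1 shifted copies of
    (0, x_1..x_{n-k}, 0, ...); block 2: n+1 shifted copies of
    (x_{n-k+1}..x_{m+n-2k+1}, 0, ...)."""
    N = m + n - k + 1
    Y = np.zeros((N, m + n + 2))
    c1 = np.zeros(N); c1[1:n - k + 1] = x[:n - k]
    c2 = np.zeros(N); c2[:m - k + 1] = x[n - k:]
    for j in range(m + 1):
        Y[j:, j] = c1[:N - j]
    for j in range(n + 1):
        Y[j:, m + 1 + j] = c2[:N - j]
    return Y
```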

4. Approximate GCD Algorithm and Experiments

The following algorithm is designed to solve Problem 2.1.

Algorithm AppSylv-k

Input: A Sylvester matrix S generated by two polynomials f and g of degrees m and n (m ≥ n) respectively; an integer 1 ≤ k ≤ n; a tolerance tol.

Output: Two polynomials f̃ and g̃ with rank(S(f̃, g̃)) ≤ m + n − k such that the Euclidean distance ‖f − f̃‖₂² + ‖g − g̃‖₂² is reduced to a minimum.

1. Compute the k-th Sylvester matrix S_k as in (16); let b_k be the first column of S_k and A_k the last m + n − 2k + 1 columns of S_k. Set E_k = 0, f_k = 0.

2. Compute x from min_x ‖A_k x − b_k‖₂ and set r = b_k − A_k x. Form P_k and Y_k as shown in the previous section.

3. Repeat
(a) Compute Δη and Δx by solving the least squares problem (19).
(b) Set x = x + Δx, η = η + Δη.
(c) Construct the matrix E_k and the vector f_k from η, and Y_k from x. Set Ã_k = A_k + E_k, b̃_k = b_k + f_k, and r = b̃_k − Ã_k x.
until ‖Δx‖₂ ≤ tol and ‖Δη‖₂ ≤ tol.

4. Output the polynomials f̃ and g̃ formed from [b̃_k Ã_k].

Given a tolerance ɛ, the algorithm AppSylv-k can be used to compute an ɛ-GCD of polynomials f and g of degrees m ≥ n respectively. The method starts with k = n, using AppSylv-k to compute the minimum N = ‖f − f̃‖₂² + ‖g − g̃‖₂² with rank(S(f̃, g̃)) ≤ m + n − k. If N < ɛ, then we can compute the ɛ-GCD from the matrix S_k(f̃, g̃) [10, 27]; otherwise, we reduce k by one and repeat the AppSylv-k algorithm. Another method tests the degree of the ɛ-GCD by computing the singular value decomposition of the Sylvester matrix S(f, g) and finding an upper bound r for the degree of the ɛ-GCD, as shown in [7, 9]. We can then start with k = r rather than k = n to compute the certified ɛ-GCD of highest degree.
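Putting the pieces together, here is a compact sketch of the AppSylv-k iteration (ours, not the authors' code; tol, w and the iteration cap are assumed values). It keeps A_k, b_k fixed and rebuilds E_k, f_k from the accumulated η, which by linearity of the structure is equivalent to the incremental updates in step 3(c).

```python
import numpy as np

def appsylv(f, g, k, tol=1e-12, w=1e8, maxit=50):
    """STLN iteration of Algorithm AppSylv-k (sketch).

    Returns perturbed coefficient vectors (f + df, g + dg) whose
    k-th Sylvester matrix is (nearly) rank deficient."""
    f = np.asarray(f, float); g = np.asarray(g, float)
    m, n = len(f) - 1, len(g) - 1
    Sk = sylvester_k(f, g, k)
    bk, Ak = Sk[:, 0], Sk[:, 1:]
    p = m + n + 2
    P = np.zeros((Sk.shape[0], p)); P[:m + 1, :m + 1] = np.eye(m + 1)
    eta = np.zeros(p)
    x = np.linalg.lstsq(Ak, bk, rcond=None)[0]
    for _ in range(maxit):
        FE = sylvester_k(eta[:m + 1], eta[m + 1:], k)   # [f_k E_k]
        fk, Ek = FE[:, 0], FE[:, 1:]
        r = bk + fk - (Ak + Ek) @ x                     # residual (9)
        M = np.block([[w * (Yk_matrix(x, m, n, k) - P), w * (Ak + Ek)],
                      [np.eye(p), np.zeros((p, x.size))]])
        d = np.linalg.lstsq(M, np.concatenate([w * r, -eta]),
                            rcond=None)[0]
        deta, dx = d[:p], d[p:]
        eta += deta; x += dx
        if np.linalg.norm(dx) <= tol and np.linalg.norm(deta) <= tol:
            break
    return f + eta[:m + 1], g + eta[m + 1:]
```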

EXAMPLE 4.1 Given the polynomials

  p = x² + 2x + 4,  f = p·(5 + 2x) + …,  g = p·(5 + x) + …,

and ɛ = 10⁻³, we are going to compute the ɛ-GCD of f and g. Set tol = 10⁻³ and k = 2, and let S be the Sylvester matrix of f and g. The singular values of S are

  …, …, …, 2.472, …, … .

Construct the submatrix S₂ of S. Applying the algorithm AppSylv-k to S₂, after three iterations of step 3 we obtain the minimally perturbed S̃₂ of rank deficiency 1. The singular values of S̃₂ are

  …, …, …, … .

We can form the polynomials f̃, g̃ from S̃₂:

  f̃ = …x³ + …x² + …x + …,  g̃ = …x³ + …x² + …x + … .

The minimum perturbation is

  N = ‖f − f̃‖₂² + ‖g − g̃‖₂² = … .

It is easy to run the algorithm again for k = 3; the least perturbation for which f and g have a degree-three GCD is … . So for ɛ = 10⁻³, we can say that the highest degree of an ɛ-GCD of f and g is 2. Moreover, this GCD can be computed from S̃₂ as shown in [10]:

  p̃ = x² + …x + … .

The forward error of this ɛ-GCD is ‖p − p̃‖₂ / ‖p‖₂ = … .

EXAMPLE 4.2 The following example is given in Karmarkar and Lakshman's paper [16]:

  f = x² − 6x + 5 = (x − 1)(x − 5),  g = x² − 6.3x + 5.72 = (x − 1.1)(x − 5.2).

We are going to find the minimal perturbation for which f and g have a common root. We consider this problem in two cases: the leading coefficients can be perturbed, or the leading coefficients cannot be perturbed.
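A usage sketch for Case 1 with the helpers above (our illustration; the printed values depend on the assumed tolerances):

```python
import numpy as np

f = [1, -6.0, 5.0]        # (x - 1)(x - 5)
g = [1, -6.3, 5.72]       # (x - 1.1)(x - 5.2)

ft, gt = appsylv(f, g, k=1, tol=1e-3)
N = np.sum((np.asarray(f) - ft) ** 2) + np.sum((np.asarray(g) - gt) ** 2)
print(ft, gt, N)
print(np.roots(ft), np.roots(gt))   # the nearly common root appears
```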

Case 1: the leading coefficients can be perturbed. Applying the algorithm AppSylv-k to f, g with k = 1 and tol = 10⁻³, after three iterations we obtain polynomials f̃ and g̃:

  f̃ = …x² + …x + …,  g̃ = …x² + …x + …,

with minimum distance

  N = ‖f − f̃‖₂² + ‖g − g̃‖₂² = … .

The common root of f̃ and g̃ is … .

Case 2: the leading coefficients cannot be perturbed. This case corresponds to fixing the first and the fourth entries of η to be zero. Running the algorithm AppSylv-k for k = 1 and tol = 10⁻³, after three iterations we get the polynomials f̃ and g̃:

  f̃ = x² + …x + …,  g̃ = x² + …x + …,

with minimum distance

  N = ‖f − f̃‖₂² + ‖g − g̃‖₂² = … .

The common root of f̃ and g̃ is … .

In the paper [16], the authors restricted themselves to the second case, where the perturbed polynomials remain monic. The minimum perturbation they obtained is …, corresponding to the perturbed common root … .

Ex. | deg(f), deg(g) | k | coeff. error | iter. num. (Chu) | iter. num. | backward error (Zeng) | backward error | σ_k | σ̃_k
 …  | …              | … | …            | …                | …          | …                     | …              | …   | …

Table 1. Algorithm performance on benchmarks (univariate case)

In Table 1, we show the performance of our new algorithm for computing ɛ-GCDs of univariate polynomials randomly generated in Maple. deg(f) and deg(g) denote the degrees of the polynomials f and g; k is the given integer; coeff. error is the range of the perturbations we add to f and g; iter. num. (Chu) is the number of iterations needed by Chu's method [6], whereas iter. num. denotes the number of iterations of the AppSylv-k algorithm; σ_k and σ̃_k are the k-th smallest singular values of S(f, g) and S(f̃, g̃) respectively. We also explain the backward error computation: suppose the polynomial p is the computed GCD of f and g, and u and v are the corresponding cofactors of f and g; then the backward error in Table 1 is ‖f − p·u‖₂² + ‖g − p·v‖₂². backward error (Zeng) denotes the backward error of Zeng's algorithm [27], whereas backward error is the backward error of our algorithm.
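The backward error used in Table 1 is straightforward to evaluate once p, u, v are available; a sketch under the same conventions (coefficient vectors, highest degree first; function name ours):

```python
import numpy as np

def backward_error(f, g, p, u, v):
    """|| f - p*u ||_2^2 + || g - p*v ||_2^2, as in Table 1."""
    ef = np.asarray(f, float) - np.polymul(p, u)
    eg = np.asarray(g, float) - np.polymul(p, v)
    return ef @ ef + eg @ eg
```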

References

[1] Anda, A. A. and Park, H. Fast plane rotations with dynamic scaling. SIAM J. Matrix Anal. Appl., vol. 15, 1994.
[2] Anda, A. A. and Park, H. Self-scaling fast rotations for stiff and equality-constrained linear least squares problems. Linear Algebra and its Applications, vol. 234, 1996.
[3] Bernhard Beckermann and George Labahn. When are two polynomials relatively prime? Journal of Symbolic Computation, vol. 26, no. 6, 1998. Special issue of the JSC on Symbolic Numeric Algebra for Polynomials, S. M. Watt and H. J. Stetter, editors.
[4] Björck, Å. and Paige, C. C. Loss and recapture of orthogonality in the modified Gram–Schmidt algorithm. SIAM J. Matrix Anal. Appl., vol. 13, 1992.
[5] P. Chin, R. M. Corless, and G. F. Corless. Optimization strategies for the approximate GCD problem. In Proc. Internat. Symp. Symbolic Algebraic Comput. (ISSAC '98), 1998.
[6] Moody T. Chu, Robert E. Funderlic, and Robert J. Plemmons. Structured low rank approximation. Manuscript.
[7] R. M. Corless, P. M. Gianni, B. M. Trager, and S. M. Watt. The singular value decomposition for polynomial systems. In Proc. Internat. Symp. Symbolic Algebraic Comput. (ISSAC '95), pp. 195–207, 1995.
[8] Robert M. Corless, Stephen M. Watt, and Lihong Zhi. QR factoring to compute the GCD of univariate approximate polynomials. Submitted.
[9] Emiris, I. Z., Galligo, A., and Lombardi, H. Certified approximate univariate GCDs. J. Pure Applied Algebra, vol. 117 & 118, May 1997. Special Issue on Algorithms for Algebra.
[10] S. Gao, E. Kaltofen, J. May, Z. Yang, and L. Zhi. Approximate factorization of multivariate polynomials via differential equations. In Proc. Internat. Symp. Symbolic Algebraic Comput. (ISSAC '04), 2004.
[11] Gene H. Golub and Charles F. Van Loan. Matrix Computations. Johns Hopkins University Press, 3rd edition, 1996.
[12] M. Gu. Stable and efficient algorithms for structured systems of linear equations. SIAM J. Matrix Anal. Appl., vol. 19, 1998.
[13] Hitz, M. A., Kaltofen, E., and Lakshman, Y. N. Efficient algorithms for computing the nearest polynomial with a real root and related problems. In Proc. 1999 Internat. Symp. Symbolic Algebraic Comput. (ISSAC '99) (New York, N.Y., 1999), S. Dooley, Ed., ACM Press.
[14] V. Hribernig and H. J. Stetter. Detection and validation of clusters of polynomial zeros. J. Symb. Comput., vol. 24, 1997.
[15] N. Karmarkar and Lakshman Y. N. Approximate polynomial greatest common divisors and nearest singular polynomials. In Internat. Symp. Symbolic Algebraic Comput. (ISSAC '96), Zürich, Switzerland, 1996. ACM.
[16] N. K. Karmarkar and Lakshman Y. N. On approximate GCDs of univariate polynomials. Journal of Symbolic Computation, vol. 26, no. 6, 1998. Special issue of the JSC on Symbolic Numeric Algebra for Polynomials, S. M. Watt and H. J. Stetter, editors.
[17] Li, T. Y. and Zeng, Z. A rank-revealing method and its applications. Manuscript available at ~zzeng/papers.htm, 2003.
[18] M. T. Noda and T. Sasaki. Approximate GCD and its application to ill-conditioned algebraic equations. J. Comput. Appl. Math., vol. 38, pp. 335–351, 1991.
[19] V. Y. Pan. Numerical computation of a polynomial GCD and extensions. Information and Computation, vol. 167, pp. 71–85, 2001.

[20] Park, H., Zhang, L., and Rosen, J. B. Low rank approximation of a Hankel matrix by structured total least norm. BIT Numerical Mathematics, vol. 39, no. 4, pp. 757–779, 1999.
[21] Rosen, J. B., Park, H., and Glick, J. Total least norm formulation and solution for structured problems. SIAM J. Matrix Anal. Appl., vol. 17, no. 1, pp. 110–126, 1996.
[22] Rosen, J. B., Park, H., and Glick, J. Structured total least norm for nonlinear problems. SIAM J. Matrix Anal. Appl., vol. 20, no. 1, pp. 14–30, 1998.
[23] A. Schönhage. Quasi-GCD computations. Journal of Complexity, vol. 1, pp. 118–137, 1985.
[24] C. F. Van Loan. On the method of weighting for equality constrained least squares problems. SIAM J. Numer. Anal., vol. 22, 1985.
[25] Zeng, Z. A method computing multiple roots of inexact polynomials. In Proc. Internat. Symp. Symbolic Algebraic Comput. (ISSAC '03), 2003.
[26] Zeng, Z. The approximate GCD of inexact polynomials. Part I: a univariate algorithm. Manuscript.
[27] Zeng, Z. and Dayton, B. H. The approximate GCD of inexact polynomials. Part II: a multivariate algorithm. In Proc. Internat. Symp. Symbolic Algebraic Comput. (ISSAC 2004), 2004.
[28] Zhi, L. and Noda, M. T. Approximate GCD of multivariate polynomials. In Proc. ASCM 2000, pp. 9–18. World Scientific Press, 2000.
[29] Zhi, L. Displacement structure in computing approximate GCD of univariate polynomials. In Proc. Sixth Asian Symposium on Computer Mathematics (ASCM 2003) (Singapore, 2003), Z. Li and W. Sit, Eds., vol. 10 of Lecture Notes Series on Computing, World Scientific.
