Extended companion matrix for approximate GCD

Paola Boito
XLIM - DMI, Université de Limoges - CNRS
123 avenue Albert Thomas, 87060 Limoges CEDEX
paola.boito@unilim.fr

Olivier Ruatta
XLIM - DMI, Université de Limoges - CNRS
123 avenue Albert Thomas, 87060 Limoges CEDEX
olivier.ruatta@unilim.fr

ABSTRACT

We study a variant of the univariate approximate GCD problem, where the coefficients of one polynomial f(x) are known exactly, whereas the coefficients of the second polynomial g(x) may be perturbed. Our approach relies on the properties of the matrix that describes the operator of multiplication by g in the quotient ring C[x]/(f). In particular, the structure of the null space of the multiplication matrix contains all the essential information about GCD(f, g). Moreover, the multiplication matrix exhibits a displacement structure that allows us to design a fast algorithm for approximate GCD computation, with quadratic complexity with respect to the polynomial degrees.

Categories and Subject Descriptors: G.0 [Mathematics of Computing]: General; G.1.2 [Mathematics of Computing]: Numerical Analysis - Approximation; G.1.3 [Mathematics of Computing]: Numerical Analysis - Numerical Linear Algebra; I.1.2 [Computing Methodologies]: Symbolic and Algebraic Manipulation - Algorithms

General Terms: Theory; Algorithms

Keywords: Approximate greatest common divisor; approximate geometry; symbolic-numeric computation; structured matrices

1. INTRODUCTION

The approximate polynomial greatest common divisor (denoted as AGCD) is a central object of symbolic-numeric computation. The main difficulty of the problem comes from the fact that there is no universal notion of AGCD. One can find different approaches and different notions for AGCD. We will not give a review of all the existing work on this subject, but we will recall one of the most popular approaches, in order to show how our work brings a different point of view on the problem.

The main approach to the computation of an AGCD consists in considering two univariate polynomials whose coefficients are known with uncertainty. This uncertainty can result from the fact that the polynomials have floating point coefficients coming from previous computations (and so are subject to round-off errors). The most frequently adopted formulation is related to semi-algebraic optimization: given two approximate polynomials f and g, find two polynomials f̃ and g̃ such that ‖f − f̃‖ and ‖g − g̃‖ are small (lower than a given tolerance, for instance) and such that the degree of gcd(f̃, g̃) is maximal. That is, one looks for the most singular system close to the input (f, g). An ε-gcd is obtained if the conditions ‖f − f̃‖ < ε and ‖g − g̃‖ < ε are satisfied. One can try to compute the tolerance on the perturbation of the input polynomials by direct computation (for instance, from a jump in the singular values of a particular matrix). This last approach has received great interest following the work of Zeng using Sylvester-like matrices ([22]), which came after several previous works [3], [4], [7], [8], [16], [17] and [18].

Here, we consider a slightly different problem. One of the polynomials, say f, is known exactly (it is the result of an exact model), and the
second one, say g, is an approximate polynomial (the result of measurements or of a previous approximation, for instance). This case occurs in applications such as model checking (to compare the results of an exact model with measurements). There are many other instances of such a problem, such as the simplification of fractions when one of the polynomials is known exactly but the other one is not. We give an example of such a situation. When modeling an electromagnetic filter, one may want to parametrize its behavior with respect to the frequency. But one may need to do so even if there are singularities, and to this end one may use Padé approximations of the electromagnetic signal at each point as a function of the frequency. In some cases of interest, one can know all the singularities and so compute an exact polynomial, called characteristic. The Padé approximations are computed independently for each point by a numerical process, and their denominators may have a non-trivial gcd with the characteristic polynomial. The denominators are not known exactly. So, in order to identify unwanted common factors in the denominators, one has to compute approximate gcds between an exact and an inexact polynomial. This AGCD problem can also be interpreted as an optimization problem: given f exactly and g approximately, compute a polynomial g̃ close to g such that g̃ has a gcd of maximal degree with f.

Our approach takes advantage of the asymmetry of the problem and of the structure of the quotient algebra C[x]/(f(x)) (more precisely, of the displacement rank of the multiplication operator in this algebra). So, we address the following problem:

Problem 1. Let f(x) ∈ C[x] be a given polynomial and g(x) another polynomial. Find g̃(x) close to g(x) (in a sense that will be explained) such that f(x) and g̃(x) have a gcd of maximal degree.

This may also be an interesting approach when one has two polynomials, one known with high confidence and the other with worse accuracy. Our approach may take advantage of this asymmetry, which would not be possible in the classical frameworks based on Sylvester or Bézout matrices. All the previous methods allow deformations of both input polynomials: they give a symmetrical role to the two polynomials, and if one of them is known with high confidence, they cannot take this information into account. Here, the polynomial used to construct the quotient algebra is supposed to be exact, or known with higher confidence than the one used to compute the multiplication matrix. Furthermore, just as in [17] and [18], we can translate our problem into an optimization problem. This is important in order to be able to design the numerical part of our algorithm. Both the approach and the algorithm seem new, since we address a new problem. The proposed algorithm is fast, since the exponent of its complexity in the degree of the input polynomials is better than the classical linear algebra exponent.

Organisation of the paper: The second section is devoted to some basic results on algebra needed afterwards; the third section gives an algebraic method for gcd computation based on linear algebra; the fourth section recalls Barnett's formula, which allows us to compute the multiplication matrix without division; the fifth gives the displacement rank structure of the multiplication matrix; the sixth describes the final algorithm and experiments, before finishing with conclusions and perspectives.

2. EUCLIDEAN STRUCTURE AND QUOTIENT ALGEBRA

In this section, we recall the basic algebraic results needed to understand the principle of our approach. All the material in this section can be found (even in the non-reduced case and in the multivariate setting) in [19], for instance.

Assume that K is an algebraically closed field (here we think of C). Let f(x), g(x) ∈ K[x] and assume that f(x) = ∏_{i=1}^{d} (x − ζ_i) with ζ_i ≠ ζ_j for all i ≠ j in {1, …, d}. Let A = K[x]/(f) and let π : K[x] → A be the natural projection. For i ∈ {1, …, d}, we define

    L_i(x) = ∏_{j≠i} (x − ζ_j) / ∏_{j≠i} (ζ_i − ζ_j),

the i-th Lagrange polynomial associated with {ζ_1, …, ζ_d}. Clearly, since deg(L_i) < deg(f), we have π(L_i) = L_i for all i ∈ {1, …, d}. Let Â = Hom_K(A, K) be the usual dual space of A. For all i ∈ {1, …, d}, we define 1_{ζ_i} : A → K by 1_{ζ_i}(p) = p(ζ_i) for all p ∈ A. The following is obvious from the definition of the polynomials L_i: for i and j in {1, …, d}, we have

    L_i(ζ_j) = 1 if i = j, and L_i(ζ_j) = 0 otherwise.

This implies that the set {L_1, …, L_d} is a basis of A. A well-known fact is that the set {1_{ζ_1}, …, 1_{ζ_d}} forms a basis of Â, dual to the basis {L_1, …, L_d} of A. As a corollary, we have the Lagrange interpolation formula: each p ∈ A can be written p(x) = Σ_{i=1}^{d} 1_{ζ_i}(p) L_i(x). A direct consequence is that, if we choose {L_1, …, L_d} as a basis of A, then for all g ∈ K[x] the remainder π(g) resulting from the Euclidean division of g by f is given by (g(ζ_1), …, g(ζ_d)) in the basis {L_1, …, L_d}, i.e. r = Σ_{i=1}^{d} g(ζ_i) L_i(x). In other words, dividing g by f is equivalent to evaluating g at the roots of f. The general philosophy of this last assertion will allow us to write many proofs in a very simple way.
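As an illustration of this principle (ours, not from the paper), the following NumPy sketch checks on a toy example that the remainder of g modulo a squarefree f is the interpolating polynomial of the values of g at the roots of f. The example polynomials are arbitrary.

```python
import numpy as np

# f squarefree: reduction modulo f amounts to evaluation at the roots of f.
f = np.array([1.0, -6.0, 11.0, -6.0])      # f(x) = (x-1)(x-2)(x-3), decreasing degree
g = np.array([1.0, 0.0, 0.0, 0.0, -1.0])   # g(x) = x^4 - 1

zeta = np.roots(f)                         # the d = 3 roots of f
values = np.polyval(g, zeta)               # g evaluated at the roots

# Interpolating the pairs (zeta_i, g(zeta_i)) at degree < d recovers the
# remainder of the Euclidean division of g by f.
r_interp = np.polyfit(zeta, values, len(f) - 2)
r_div = np.polydiv(g, f)[1]                # remainder by actual division
print(np.allclose(r_interp, r_div))        # True, up to rounding
```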
For example, it is very easy to describe the arithmetic operations of A in this representation. Let g and h be two elements of A; then we have g + h = Σ_{i=1}^{d} (g(ζ_i) + h(ζ_i)) L_i(x) and g · h = Σ_{i=1}^{d} g(ζ_i) h(ζ_i) L_i(x) in A. This allows us to avoid the use of a section of the surjection π. In fact, the Lagrange polynomials L_1, …, L_d reveal a deeper structure on the algebra A: the polynomials L_1, …, L_d are the idempotents of A, i.e.

    L_i · L_j = L_i if i = j, and L_i · L_j = 0 otherwise.

Thanks to this description of the quotient algebra, it is easy to derive algorithms for both polynomial solving and gcd computation, even though the two problems are of very different nature. Remark that we have expressed everything in the monomial basis, since it is the most widely used basis for polynomials. Other bases can be used; a particular one is the Chebyshev basis, where all the results are exactly the same, since it is a graded basis.

3. AN ALGEBRAIC ALGORITHM FOR GCD COMPUTATION

To give a first idea of how to exploit the section above in order to design an algorithm for gcd, we recall a classical method for polynomial solving (see [6], for instance). Proofs are given for the sake of completeness, and because very similar ideas will lead us to the AGCD computation.

3.1 Roots via eigenvalues

Let f(x) = Σ_{i=0}^{d} f_i x^i ∈ C[x] be a polynomial of degree d, which we assume monic (f_d = 1). We consider the operator of multiplication by x in C[x]/(f). Its matrix in the monomial basis 1, x, …, x^{d−1} is

    Frob(f) =
        [ 0  0  …  0  −f_0     ]
        [ 1  0  …  0  −f_1     ]
        [ 0  1  …  0  −f_2     ]
        [ ⋮        ⋱   ⋮        ]
        [ 0  0  …  1  −f_{d−1} ],

well known as the Frobenius companion matrix associated with f.

Proposition 1. Let f(x) ∈ C[x] be a polynomial of degree d with d distinct roots Z(f) = {z_1, …, z_d}. Then the eigenvalues of Frob(f) are the roots of f(x), i.e. Spec(Frob(f)) = {z_1, …, z_d}.
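Here is a quick numerical check of Proposition 1 (our sketch; the polynomial is an arbitrary example and f is taken monic):

```python
import numpy as np

def frobenius(f_low):
    """Companion matrix of a monic f, where f_low = [f_0, ..., f_{d-1}]
    lists the non-leading coefficients by increasing degree; the matrix
    acts on the monomial basis 1, x, ..., x^{d-1}."""
    d = len(f_low)
    C = np.zeros((d, d))
    C[1:, :-1] = np.eye(d - 1)      # x * x^j = x^{j+1} for j < d-1
    C[:, -1] = -np.asarray(f_low)   # x * x^{d-1} = x^d = -f_0 - ... - f_{d-1} x^{d-1}
    return C

f_low = [-6.0, 11.0, -6.0]          # f(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
print(np.sort(np.linalg.eigvals(frobenius(f_low))))   # ~ [1, 2, 3] = Z(f)
```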

Thus, to compute the roots of f(x), one can compute the eigenvalues of its Frobenius companion matrix. This is the object of the method proposed (or rather reintroduced) by Edelman and Murakami [9] and revisited by Fortune [10] and many others, trying to use the displacement structure of the companion matrix. In fact, these authors realized that the monomial basis of the quotient algebra is often not the most suitable one, and proposed to express the matrix of the same linear map in other bases. In the case of the Chebyshev basis, this algorithm was already known to Barnett [2] and, later, to Cardinal [6]. In the next section, we also take advantage of the structure of the quotient algebra to design an algorithm for gcd computation that uses mainly linear algebra (eigenvalues are used in the theory, but never computed).

3.2 Structure of the quotient and gcd

Let f(x), g(x) ∈ K[x] be both monic. As above, we denote A = K[x]/(f) and d = deg(f). We denote by {ζ_1, …, ζ_d} the set of roots of f(x), and we assume that f(x) is squarefree, i.e. ζ_i ≠ ζ_j if i ≠ j. We define

    M_g : A → A,  h ↦ π(g h),

where π(p) denotes the remainder of p(x) ∈ K[x] divided by f(x). We denote by M_g the matrix of M_g in the monomial basis 1, x, …, x^{d−1} of A, but other bases can be used. A matrix representing the map M_g is called an extended companion matrix.

Proposition 2. The eigenvalues of M_g are {g(ζ_1), …, g(ζ_d)}.

Proof. It is a direct corollary of the previous proposition: if we write the matrix of this linear map in the Lagrange basis associated with {ζ_1, …, ζ_d}, it is

    diag(g(ζ_1), …, g(ζ_d)),

which trivially gives the wanted result.

Corollary 1. We have corank(M_g) = deg(f) − rank(M_g) = deg(gcd(f, g)).

The i-th column of the matrix M_g contains the coefficients of π(x^{i−1} g(x)). Let p_1, …, p_l be a basis of Ker(M_g) and let P_1(x), …, P_l(x) be the corresponding polynomials. First we remark that Ann(g) = {P(x) ∈ A | P(x) g(x) = 0} is an ideal of A.

Lemma 1. The ideal Ann(g) is a principal ideal.

Proof. Let us define

    v(x) = ∏_{ζ ∈ Z(f) \ (Z(f) ∩ Z(g))} (x − ζ).

For all h ∈ Ann(g), it is clear that Z(h) ⊇ Z(f) \ (Z(f) ∩ Z(g)), and then v divides h. Furthermore, v ∈ Ann(g), since in the Lagrange basis

    v(x) g(x) = Σ_{i=1}^{d} v(ζ_i) g(ζ_i) L_i(x) = 0.

This shows that Ann(g) = (v).

To compute v(x), we build the matrix whose columns are p_1, …, p_l, and we triangularize it by operating only on the columns. In this way we obtain the polynomial of minimal degree expressible as a linear combination of P_1(x), …, P_l(x), and it is easily seen that this is v(x), up to a multiplicative scalar factor.

Lemma 2. The first column of a column echelon form of the matrix K_g built from a basis of Ker(M_g) is the generator of Ann(g), i.e. it is the vector of the coefficients of v(x), up to multiplication by a scalar.

Proof. Since the columns of a column echelon form of the matrix K_g are linearly independent, they form a basis of Ann(g) as a K-vector space. So v(x) is a linear combination of the polynomials associated with those columns. The polynomials associated with the columns of the echelon form of K_g all have different degrees (because it is an echelon form), and so v(x) is a linear combination of those polynomials. Because v(x) has the lowest possible degree, it is a scalar multiple of the polynomial associated with the first column.

Proposition 3. gcd(f(x), g(x)) = f(x) / v(x).

Proof. By construction, we have v(x) g(x) ≡ 0 mod f(x), and so v(x) divides f(x). We also have gcd(f(x), g(x)) = f(x)/v(x), since the roots of f(x)/v(x) are the roots of f(x) where g(x) vanishes. Since deg(f(x)/v(x)) = deg(gcd(f(x), g(x))), we have the wanted result.
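To make the whole section concrete, here is a dense NumPy prototype of ours (the paper will later replace the SVD-based steps with fast structured factorizations) that builds M_g, finds the minimal-degree null-space polynomial v, and recovers gcd(f, g) = f/v on a toy example:

```python
import numpy as np

def poly_mod(p, f):
    """Remainder of p modulo a monic f, coefficients by increasing degree."""
    r = np.polydiv(p[::-1], f[::-1])[1][::-1]
    return np.pad(r, (0, len(f) - 1 - len(r)))

def mult_matrix(f, g):
    """Matrix of h -> pi(g*h) in the monomial basis of C[x]/(f), f monic."""
    d = len(f) - 1
    M = np.zeros((d, d))
    col = poly_mod(np.asarray(g, dtype=float), np.asarray(f, dtype=float))
    for j in range(d):
        M[:, j] = col                                   # coefficients of pi(x^j g)
        col = np.concatenate(([0.0], col[:-1])) - col[-1] * np.asarray(f[:d])
    return M

f = np.array([24.0, -50.0, 35.0, -10.0, 1.0])   # (x-1)(x-2)(x-3)(x-4), monic
g = np.array([10.0, -13.0, 2.0, 1.0])           # (x-1)(x-2)(x+5), monic

M = mult_matrix(f, g)
d = len(f) - 1
s = np.linalg.svd(M, compute_uv=False)
print(d - np.sum(s > 1e-8 * s[0]))              # corank = deg gcd = 2

# v = generator of Ann(g): the null-space polynomial of minimal degree,
# found here as the null vector of the smallest rank-deficient leading
# column block (a stand-in for the column echelon form of Lemma 2).
for t in range(1, d + 1):
    Mt = M[:, : t + 1]                          # columns pi(g), ..., pi(x^t g)
    st = np.linalg.svd(Mt, compute_uv=False)
    if st[-1] < 1e-8 * st[0]:
        v = np.linalg.svd(Mt)[2][-1]
        v = v / v[-1]                           # make v monic
        break

print(np.round(np.polydiv(f[::-1], v[::-1])[0], 6))   # [1, -3, 2] ~ (x-1)(x-2)
```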
Throughout this section, we did not specify whether the polynomials are given in the monomial basis or, for instance, in the Chebyshev basis. In fact, in order to have an algebraic algorithm we only need to be able to perform Euclidean division, and this is always possible if the polynomial basis is graded (as are the monomial basis, the Chebyshev basis, and most orthogonal bases).

4. BEZOUTIAN AND BARNETT'S FORMULA

A classical matricial formulation of the resultant is given by the Bézout matrix. In this part, we recall the construction of the Bézout matrix and a special factorization of the multiplication matrix expressed in the monomial basis. This factorization is called Barnett's formula (see [2]). Barnett's formula allows us to build the classical extended companion matrix without Euclidean division, using only numerically stable computations. Furthermore, this factorization reveals that the extended companion matrix has a special rank structure, and we will use this fact later to design a fast algorithm for AGCD computation.

Definition 1. Let f, g ∈ C[x] of degrees m and n respectively (with n ≤ m). We write

    Θ_{f,g}(x, y) = (f(x) g(y) − f(y) g(x)) / (x − y) = Σ_{i,j} θ_{i,j} x^i y^j = Σ_{j=0}^{m−1} κ_{f,g,j}(x) y^j.

The Bézout matrix associated with f and g is

    B_{f,g} = (θ_{i,j})_{i,j ∈ {0,…,m−1}}.

Remark that since Θ_{f,g}(x, y) = Θ_{f,g}(y, x), the matrix B_{f,g} is symmetric. The polynomials κ_{f,g,j}(x) are univariate polynomials of degree at most m − 1. One particular case of interest is f = 1: in this case the Bézout matrix has a Hankel structure, i.e. θ_{i,j} = θ_{i−1,j+1}.
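The Bézout matrix is easy to generate from the definition: expand the numerator f(x)g(y) − f(y)g(x) as a coefficient matrix and peel off the factor (x − y) by a two-term recurrence. A small sketch of ours follows; sign conventions for Θ vary across the literature, and here we fix the denominator as x − y.

```python
import numpy as np

def bezout(f, g):
    """Bezout matrix (theta_{i,j}) of f and g, coefficients by increasing
    degree, for Theta(x,y) = (f(x)g(y) - f(y)g(x)) / (x - y)."""
    m = max(len(f), len(g)) - 1
    fp = np.pad(np.asarray(f, dtype=float), (0, m + 1 - len(f)))
    gp = np.pad(np.asarray(g, dtype=float), (0, m + 1 - len(g)))
    N = np.outer(fp, gp) - np.outer(gp, fp)     # coefficients of f(x)g(y) - f(y)g(x)
    B = np.zeros((m, m))
    for q in range(m):                          # (x - y) * Theta = N  gives the recurrence
        for i in range(m):                      # theta_{i,q} = N_{i+1,q} + theta_{i+1,q-1}
            B[i, q] = N[i + 1, q] + (B[i + 1, q - 1] if (q > 0 and i + 1 < m) else 0.0)
    return B

f = [-1.0, 0.0, 1.0]                 # f(x) = x^2 - 1
print(bezout(f, [0.0, 1.0]))         # B_{f,g} for g(x) = x: symmetric
print(bezout([1.0], f))              # B_{1,f}: Hankel, theta_{i,j} = theta_{i-1,j+1}
```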

For i ∈ {0, …, m−1}, we write H_{g,i}(x) = κ_{1,g,i}(x); these polynomials are called the Horner polynomials associated with g.

Proposition 4. For i ∈ {0, …, m−1}, the polynomial H_{g,i}(x) = g_{m−i} + g_{m−i+1} x + ⋯ + g_m x^i has degree i; since they have pairwise different degrees, the H_{g,i} form a basis of C[x]/(g). Furthermore,

    Θ_{1,g}(x, y) = Σ_{i=0}^{m−1} H_{g,m−1−i}(x) y^i.

Corollary 2. The matrix B_{1,g} is the basis conversion from the Horner basis H_{g,0}, …, H_{g,m−1} to the monomial basis 1, x, …, x^{m−1} of C[x]/(g).

This leads us to the following theorem, known as Barnett's formula (see [2]):

Theorem 1. Let M_g be the multiplication matrix associated with g in C[x]/(f), expressed in the monomial basis. Then

    M_g = B_{f,g} B_{1,f}^{−1}.

Proof. We have

    Θ_{f,g}(x, y) = f(x) (g(y) − g(x))/(x − y) + g(x) (f(x) − f(y))/(x − y),

and so Θ_{f,g}(x, y) ≡ g(x) (f(x) − f(y))/(x − y) in C[x, y]/(f(x)). So, for each i ∈ {0, …, deg(f) − 1}, we have κ_{f,g,i}(x) ≡ g(x) κ_{1,f,i}(x). This last equality means that B_{f,g} B_{1,f}^{−1} is the matrix of the multiplication by g(x) in C[x]/(f). The result follows directly from this fact.

Barnett's formula reveals the rank structure of the multiplication matrix. Furthermore, this formula remains valid if we choose the Chebyshev basis instead of the monomial basis to express the polynomials, and the matrices involved have exactly the same nature.

5. STRUCTURED MATRICES AND ASYMPTOTICALLY FAST ALGORITHMS

In this section, we briefly recall some basics on displacement structured matrices and related algorithms. We are especially interested in two forms of displacement structure: Toeplitz-like and Cauchy-like. See [15] for a more detailed treatment of this topic.

5.1 Displacement structure

Given an integer n and a complex number ϑ with |ϑ| = 1, define the ϑ-circulant matrix

    Z_n^ϑ =
        [ 0  0  …  0  ϑ ]
        [ 1  0  …  0  0 ]
        [ 0  1  …  0  0 ]
        [ ⋮        ⋱  ⋮ ]
        [ 0  0  …  1  0 ].

Next, define the Toeplitz-like displacement operator as the linear operator

    ∇_T : C^{m×n} → C^{m×n},  ∇_T(A) = Z_m^1 A − A Z_n^ϑ.

A matrix A ∈ C^{m×n} is said to be Toeplitz-like if ∇_T(A) has small rank (where "small" means small with respect to the matrix size). The number α = rank(∇_T(A)) is called the displacement rank of A. If A is Toeplitz-like, then there exist (non-unique) displacement generators G ∈ C^{m×α} and H ∈ C^{α×n} such that ∇_T(A) = G H. Other formally different (but essentially equivalent) definitions of the Toeplitz-like displacement operator are found in the literature; as a consequence, the exact displacement rank of a given matrix may vary slightly, depending on the chosen definition.

Toeplitz-like structure is a generalization of Toeplitz structure: indeed, it is easy to see that Toeplitz matrices are also Toeplitz-like. Moreover, displacement structure is preserved under inversion: the inverse of a Toeplitz-like matrix is again Toeplitz-like, with the same displacement rank. Sums and products of Toeplitz-like matrices are also Toeplitz-like, even though the displacement rank may increase. More examples of Toeplitz-like matrices include: Toeplitz-block matrices, such as the Sylvester matrix associated with two given polynomials; Bézout matrices (which can be defined via sums of products of Sylvester matrices, see e.g. [2]); and the multiplication matrix M_g (which is a product of Bézoutians). In particular, M_g has Toeplitz-like displacement rank equal to 2, regardless of its size, with respect to the displacement operator defined above.

A similar definition holds for Cauchy-like structure; here the relevant displacement operator is

    ∇_C : C^{m×n} → C^{m×n},  ∇_C(A) = D_1 A − A D_2,

where D_1 and D_2 are diagonal matrices of appropriate size with disjoint spectra. A matrix A is said to be Cauchy-like if ∇_C(A) has small rank. One can then define the notions of Cauchy-like displacement generators and Cauchy-like displacement rank.
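Combining the last two sections, here is a sketch of ours, reusing bezout() and mult_matrix() from the previous sketches: Barnett's formula reproduces M_g, and the displaced matrix Z_d^1 M_g − M_g Z_d^{−1} indeed has rank 2. Note that with the sign convention chosen in bezout() the factorization comes out as M_g = B_{f,g} B_{f,1}^{−1}; since B_{1,f} = −B_{f,1}, this agrees with Theorem 1 up to the sign convention for Θ.

```python
import numpy as np
# Reuses bezout() and mult_matrix() defined in the previous sketches.

f = np.array([24.0, -50.0, 35.0, -10.0, 1.0])   # (x-1)(x-2)(x-3)(x-4)
g = np.array([10.0, -13.0, 2.0, 1.0])           # (x-1)(x-2)(x+5)
d = len(f) - 1

# Barnett's formula (with the sign convention of bezout()).
M = bezout(f, g) @ np.linalg.inv(bezout(f, [1.0]))
print(np.allclose(M, mult_matrix(f, g)))        # True

def Zc(n, theta):
    """Circulant Z_n^theta: unit subdiagonal and theta in the top-right corner."""
    Z = np.zeros((n, n))
    Z[1:, :-1] = np.eye(n - 1)
    Z[0, -1] = theta
    return Z

# Toeplitz-like displacement rank of the extended companion matrix:
print(np.linalg.matrix_rank(Zc(d, 1.0) @ M - M @ Zc(d, -1.0)))   # 2, whatever d is
```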
5.2 Fast solution of displacement structured linear systems

Gaussian elimination with partial pivoting (GEPP) is a well-known and reliable algorithm that computes the solution of a linear system. Its arithmetic complexity for an n × n matrix is asymptotically O(n³). But if the system matrix exhibits displacement structure, it is possible to apply a variant of GEPP with complexity O(n²). The main idea consists in operating on displacement generators rather than on the whole matrix; see [11] for a detailed description of the algorithm (which will be denoted as GKO in the following).

Strictly speaking, the GKO algorithm performs GEPP (or, equivalently, computes the PLU factorization) for Cauchy-like matrices. However, several authors have pointed out (see [11], [13], [20]) that Toeplitz-like matrices can be stably and cheaply transformed into Cauchy-like matrices; the same is true for displacement generators. Consider, for instance, the case of square matrices, with ϑ = −1. Recall that the Fourier matrix F of size n × n, which defines the discrete Fourier transform, is such that

    F_{jk} = (1/√n) e^{2πi(j−1)(k−1)/n}.
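The transformation is cheap because F diagonalizes circulant matrices, so the Toeplitz-like displacement operator is conjugated into a Cauchy-like one by FFTs. A numerical check of this mechanism (ours; the spectrum of the diagonal D_2 below comes out ordered slightly differently than in the proposition that follows):

```python
import numpy as np
# Reuses Zc() from the previous sketch.

n = 8
k = np.arange(n)
F = np.exp(2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)   # Fourier matrix

# F diagonalizes the unit circulant: F Z_1 F^H = D_1.
D1 = np.diag(np.exp(2j * np.pi * k / n))
print(np.allclose(F @ Zc(n, 1.0) @ F.conj().T, D1))        # True

# A Toeplitz matrix A has Toeplitz-like displacement rank <= 2 ...
t = np.random.default_rng(0).standard_normal(2 * n - 1)
A = t[np.subtract.outer(k, k) + n - 1]
print(np.linalg.matrix_rank(Zc(n, 1.0) @ A - A @ Zc(n, -1.0)))   # 2

# ... and F A D F^H is Cauchy-like with the same displacement rank.
D = np.diag(np.exp(1j * np.pi * k / n))
D2 = np.diag(np.exp(1j * np.pi * (2 * k - 1) / n))
Ahat = F @ A @ D @ F.conj().T
print(np.linalg.matrix_rank(D1 @ Ahat - Ahat @ D2))              # 2
```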

We have the following result (taken from [11]):

Proposition 5. Let A be an n × n Toeplitz-like matrix with generators G and H. Denote by D the matrix diag(1, e^{πi/n}, …, e^{(n−1)πi/n}) and let F be the Fourier matrix of size n × n. Then the matrix

    Â = F A D F^H

is Cauchy-like, with the same displacement rank as A, with respect to the displacement operator defined by D_1 = diag(1, e^{2πi/n}, …, e^{2πi(n−1)/n}) and D_2 = diag(e^{πi/n}, e^{3πi/n}, …, e^{(2n−1)πi/n}). Its Cauchy-like generators can be computed as Ĝ = F G and Ĥ^H = F D H^H. Here M^H denotes the transpose conjugate of a given matrix M.

Generalization to the case of m × n rectangular matrices is possible. In this case, the parameter ϑ should be chosen so that the spectra of D_1 and D_2 are well separated (see [1] and [5]). We also point out that the GKO algorithm can be adapted to pivoting techniques other than partial pivoting ([12], [21]). This is especially useful in case of instability due to internal growth of the generator entries. A Matlab implementation of the GKO algorithm that takes into account several pivoting strategies is found in the package DRSolve described in [1].

In our implementation, we use the pivoting strategy proposed in [12]. At each step, the displacement generators G and H are redefined, so that the columns of the new first generator G̃ are orthogonal. This can be achieved by computing the QR factorization G = QR, where Q has size n × 2, and defining the new generators G̃ = Q and H̃ = R H. This ensures good conditioning of G̃. One then performs pivoted Gaussian elimination on the matrix column corresponding to the column of H̃ with maximum 2-norm. Such a technique involves both row and column permutations; it is not equivalent to complete pivoting, which would be too expensive, but numerical results show that it generally allows a good choice of pivots and reduces the growth of the generator entries. Error analysis for this pivoting strategy shows that the backward error essentially depends on the Cauchy-like displacement operator and on the magnitude of the computed upper triangular factor (but not on the magnitude of the generators).

6. A STRUCTURED APPROACH TO AGCD COMPUTATION

We propose here an algorithm that exploits the algebraic and displacement structure of the multiplication matrix to compute the AGCD of two given polynomials with real coefficients (as defined in Section 1).

6.1 Rank estimation

It has been pointed out in Section 3 that the rank deficiency of the multiplication matrix equals the gcd degree. In an approximate setting, the relevant notion is the approximate rank (or ε-rank): recall that a matrix M has ε-rank k if there exists a matrix M̂ of exact rank k such that ‖M − M̂‖₂ ≤ ε. The AGCD degree can then be estimated using the approximate rank of the multiplication matrix. Observe that the technique of computing the AGCD degree via the approximate rank of a resultant (e.g., Sylvester) matrix is often exploited in the literature: see for instance [7], [8], [16], [5].

Here we use a structured pivoted LU decomposition to estimate the approximate rank of the multiplication matrix. Recall that M_g has a Toeplitz-like structure with displacement rank 2; it can then be transformed into a Cauchy-like matrix M̂_g as described in Section 5.2. Fast pivoted Gaussian elimination yields a factorization M̂_g = P_1 L U P_2, where L is a square, nonsingular, lower triangular matrix with diagonal entries equal to 1, U is upper triangular, and P_1, P_2 are permutation matrices. Inspection of the diagonal entries (or of the row norms) of U allows us to estimate the approximate rank of M̂_g and, therefore, of M_g.
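As a dense stand-in for this step (ours: SciPy's LU with partial pivoting applied to M_g itself, instead of the O(n²) GKO elimination on the Cauchy-like transform), the decay of the diagonal of U exposes the approximate rank, hence the AGCD degree:

```python
import numpy as np
from scipy.linalg import lu
# Reuses mult_matrix() from the earlier sketch.

f = np.array([24.0, -50.0, 35.0, -10.0, 1.0])    # exact: (x-1)(x-2)(x-3)(x-4)
g = np.array([10.0, -13.0, 2.0, 1.0])            # gcd(f, g) = (x-1)(x-2)
g_noisy = g + 1e-9 * np.random.default_rng(0).standard_normal(g.shape)

P, L, U = lu(mult_matrix(f, g_noisy))            # Gaussian elimination with partial pivoting
piv = np.abs(np.diag(U))
print(piv)                                       # two pivots O(1), two pivots O(1e-9)
eps_rank = int(np.sum(piv > 1e-6 * piv.max()))
print("estimated AGCD degree:", len(f) - 1 - eps_rank)   # 2
```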
Developing a reliable and numerically robust factorization-based strategy to compute the AGCD degree is not an easy task. As far as the LU factorization is concerned, we mention that Gaussian elimination with complete pivoting is rank-revealing in the exact case, but in principle it might not detect near rank-deficiency. Since we cannot expect the pivoting strategy we use to perform better than complete pivoting, we conclude that our strategy cannot give an a priori certification of the approximate rank, nor of the AGCD degree. There is, however, a relationship between the GKO pivots and the distance from a rank-deficient matrix. Indeed, at any GKO step we have

    |a| ≤ ‖Ũ_k‖₂ ≤ (3/2) ρ |a|,

where Ũ_k is the trailing block of order k that has not yet been factorized, a is the current pivot, and ρ is a positive constant depending on the Cauchy-like displacement operator. This bound can be derived directly using the results presented in [12]; see also [5]. We propose in the future to give a more detailed theoretical and numerical analysis of the effectiveness of our degree estimation strategy; the numerical experiments proposed here are a first step in this direction.

6.2 Minimization of a quadratic functional

Let us suppose that:
- the polynomial f(x) = Σ_{j=0}^{n} f_j x^j is exactly known,
- the polynomial g(x) = Σ_{j=0}^{m} g_j x^j is approximately known and may be perturbed, so that we consider its coefficients as variables,
- the AGCD degree is known.

Then we can reformulate the problem of AGCD computation as the minimization of a quadratic functional. Indeed, recall that the cofactor v(x) with respect to f(x) is defined by the shortest vector (i.e., the vector with the maximum number of trailing zeros) that belongs to the null space of M_g. We assume v(x) to be monic; we denote its degree by k, so that

    M_g v = M_g [v_0, …, v_{k−1}, 1, 0, …, 0]^T = 0.

Also observe that the entries of M_g are linear functions of the coefficients of g(x). Then the equation M_g v = 0 can be rewritten as F(g, v) = 0, where the functional F is defined as

    F : C^{m+1} × C^k → R_+,  F(g, v) = ‖M_g v‖₂².
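A dense prototype of this minimization (ours): Gauss-Newton on F(g, v) = ‖M_g v‖₂², with a finite-difference Jacobian in place of the structured displacement representation, and NumPy's lstsq supplying the step. It reuses mult_matrix() from above; the example data are arbitrary.

```python
import numpy as np

def residual(z, f, m, k):
    """r(g, v) = M_g [v_0, ..., v_{k-1}, 1, 0, ..., 0]^T with z = (g, v)."""
    d = len(f) - 1
    g, v = z[: m + 1], z[m + 1 :]
    v_full = np.zeros(d)
    v_full[: k + 1] = np.append(v, 1.0)          # monic cofactor of degree k
    return mult_matrix(f, g) @ v_full

def gauss_newton(z0, f, m, k, steps=10, h=1e-7):
    z = np.asarray(z0, dtype=float)
    for _ in range(steps):
        r = residual(z, f, m, k)
        J = np.empty((len(r), len(z)))
        for i in range(len(z)):                  # finite-difference Jacobian
            dz = np.zeros_like(z); dz[i] = h
            J[:, i] = (residual(z + dz, f, m, k) - r) / h
        z = z - np.linalg.lstsq(J, r, rcond=None)[0]   # minimum-norm step
    return z

f = np.array([24.0, -50.0, 35.0, -10.0, 1.0])
g = np.array([10.0, -13.0, 2.0, 1.0]) + 1e-5 * np.random.default_rng(1).standard_normal(4)
m, k = 3, 2                                      # deg g, deg of the cofactor v
z = gauss_newton(np.concatenate([g, [12.0, -7.0]]), f, m, k)   # crude initial v
print(np.linalg.norm(residual(z, f, m, k)))      # ~ 0: refined g has a degree-2 gcd with f
```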

For a preliminary study of the problem, we have chosen to solve the equation F(g, v) = 0 by means of Newton's method, applied so as to exploit structure. Denote by z = [g_0, …, g_m, v_0, …, v_{k−1}]^T the vector of unknowns; then each Newton step has the form

    z^{(j+1)} = z^{(j)} − J(g^{(j)}, v^{(j)})^† M_{g^{(j)}} v^{(j)}.

In particular, notice that the Jacobian matrix J associated with F is an n × (m + k + 1) Toeplitz-like matrix of displacement rank 3. This property allows us to compute a solution of the linear system J(g^{(j)}, v^{(j)}) y = M_{g^{(j)}} v^{(j)} in a fast way; therefore, the arithmetic complexity of each iteration is quadratic with respect to the degrees of the input polynomials. We propose in the future to take into consideration other optimization methods in the quasi-Newton family, such as BFGS.

6.3 Computation of displacement generators

In order to perform fast factorization of the multiplication matrix M_g, we need to compute its Toeplitz-like displacement generators. It turns out that the range of ∇_T(M_g) is spanned by the first and last columns of the displaced matrix, and that the columns of indices from 2 to n − 1 are multiples of the first one. Therefore, it suffices to compute a few rows and columns of M_g in order to obtain displacement generators. This can be done in a fast and stable way by using Barnett's formula. If we denote by e_j the j-th vector of the canonical basis of C^n, then the computation of the j-th column of M_g can be seen as

    M_g(:, j) = B_{f,g} (B_{1,f})^{−1} e_j,

that is, it consists in solving a triangular Hankel linear system and computing a matrix-vector product. For row computation, recall that the Bezoutian is a symmetric matrix; we have, analogously,

    M_g(j, :) = e_j^T B_{f,g} (B_{1,f})^{−1} = ((B_{1,f})^{−1} B_{f,g} e_j)^T,

so that the computation of a row of M_g amounts to performing a matrix-vector product and solving a triangular Hankel system. A similar approach holds for the computation of the displacement generators of the Jacobian matrix J(g, v) associated with the functional F(g, v).

6.4 Description of the algorithm

Input: coefficients of the polynomials f(x) and g(x).
Output: a perturbed polynomial g̃(x) such that f and g̃ have a nontrivial common factor.

1. Estimate the approximate rank k of M_g by computing a fast pivoted LU decomposition of the associated Cauchy-like matrix.
2. Again by using fast LU, compute a vector v = [v_0, v_1, …, v_{k−1}, 1, 0, …, 0]^T in the approximate null space of M_g.
3. Apply structured Newton with initial guess (g, v), and compute polynomials g̃ and ṽ such that f and g̃ have a common factor of degree deg f − k, and ṽ is the monic cofactor for f.

The algebraic complexity of step 1 is O(deg(f)²), and this is also the complexity of step 2 and of each iteration of step 3. This asymptotic bound seems to be realistic, even if the degrees used in our experiments are not large enough to truly confirm it in practice. The number of Newton iterations needed does not seem, in our tests, to grow with the degree, as far as our implementation allows us to go. So, in our tests, the complexity of the rank computation dominates the arithmetical cost of the algorithm.
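Assembled end to end with the dense stand-ins from the previous sketches (ours; the paper performs steps 1 and 2 with the fast structured LU instead of SVDs, and step 3 with the structured Newton):

```python
import numpy as np
# Reuses mult_matrix() and gauss_newton() from the previous sketches.

def agcd(f, g, tol=1e-6):
    d = len(f) - 1
    M = mult_matrix(f, g)
    s = np.linalg.svd(M, compute_uv=False)
    k = int(np.sum(s > tol * s[0]))              # step 1: approximate rank = deg v

    v0 = np.linalg.svd(M[:, : k + 1])[2][-1]     # step 2: approximate null vector
    v0 = (v0 / v0[-1])[:-1]                      # monic normalization

    m = len(g) - 1                               # step 3: Newton refinement of (g, v)
    z = gauss_newton(np.concatenate([g, v0]), f, m, k)
    return z[: m + 1], np.append(z[m + 1 :], 1.0)

f = np.array([24.0, -50.0, 35.0, -10.0, 1.0])
g = np.array([10.0, -13.0, 2.0, 1.0]) + 1e-7 * np.random.default_rng(2).standard_normal(4)
g_t, v_t = agcd(f, g)
print(np.round(np.polydiv(f[::-1], v_t[::-1])[0], 4))   # common factor ~ (x-1)(x-2)
print(np.linalg.norm(g - g_t))                          # perturbation applied to g, ~ 1e-7
```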
6.5 Numerical experiments and computational issues

We have written a preliminary implementation of the proposed method in Matlab (available at the URL). The results of a few numerical experiments are shown below. The polynomials f and g are monic and have random coefficients uniformly distributed over [−1, 1]. They have an exact GCD of prescribed degree. A perturbation is then added to g. The perturbation vector has random entries uniformly distributed over [−η, η], and its norm is of the order of magnitude of η. We show:
- the residual F(g̃, ṽ),
- the 2-norm distance between the exact and the computed cofactor v,
- the 2-norm distance between the exact and the computed perturbed polynomial g̃ (which is expected to be roughly of the same order of magnitude as η).

In the following table we have taken η = 1e-5:

    n, m, deg gcd(f, g)    F(g̃, ṽ)     ‖v − ṽ‖₂     ‖g − g̃‖₂
    8, 7, 3                2e-5         9e-5          4e-5
    15, 14, 5              5e-5         2.26e-5       3.5e-4
    22, 22, 7              2.7e-3       4e-3          2.2e-4
    36, 36, 11             9e-2         5.7e-4        2

Here are the results for η = 1e-8:

    n, m, deg gcd(f, g)    F(g̃, ṽ)     ‖v − ṽ‖₂     ‖g − g̃‖₂
    8, 7, 3                5.49e-5      6.3e-5        5.85e-8
    28, 27, 3              7.9e-4       8.98e-4       6.5e-7
    38, 37, 3              4.88e-2      4.26e-2       2.3e-5
    58, 57, 23             2.3e-2       4.4e-2        2.54e-4

There are several issues in our approach that deserve further investigation. Let us mention in particular:
- the choice of a threshold (or of a more refined technique) for estimating the approximate rank;
- the normalization of the polynomials: here we mostly work with monic polynomials, but other normalizations may be considered;
- the structured implementation of the optimization step (minimizing F(g, v)).

We have used for now a heuristic structured version of the Gauss-Newton algorithm. Observe that each step of classical Gauss-Newton applied to our problem has the form z^{(j+1)} = z^{(j)} − y^{(j)}, where z^{(j)} is the vector containing the coefficients of the j-th iterate polynomials g^{(j)} and v^{(j)}, and y^{(j)} is the least-norm solution of the underdetermined system J(g^{(j)}, v^{(j)}) y^{(j)} = M_{g^{(j)}} v^{(j)}. Computing this least-norm solution in a structured and fast way is a difficult point that will require more work. Our implementation gives a solution which is not, in general, the least-norm one, even though it is typically quite close.
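For completeness (our illustration): the least-norm solution of an underdetermined system is exactly what the LAPACK-backed dense solvers return, so the unstructured version of this step is a one-liner; it is the O(n²) structured analogue that remains open.

```python
import numpy as np

rng = np.random.default_rng(3)
J = rng.standard_normal((4, 6))              # underdetermined: 4 equations, 6 unknowns
r = rng.standard_normal(4)

y = np.linalg.lstsq(J, r, rcond=None)[0]     # minimum-norm solution
print(np.allclose(J @ y, r))                 # True: the system is solved exactly
print(np.allclose(y, np.linalg.pinv(J) @ r)) # True: same as the pseudoinverse solution
```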

Further work will also include a study of other possible optimization methods that lend themselves well to a structured approach.

7. CONCLUSIONS

We have proposed and implemented a fast structured-matrix-based approach to a variant of the AGCD problem, namely, the problem of computing an approximate greatest common divisor of two univariate polynomials, one of which is known to be exact. To our knowledge, this variant has so far been neglected in the existing literature. It may also be interesting when one polynomial is known with high accuracy and the other is not. Our approach is based on the structure of the multiplication matrix and on the subsequent reformulation of the problem as the minimization of a suitably defined functional. Our choice of the multiplication matrix M_g over other resultant matrices (e.g., Sylvester, Bézout) is motivated by:
- the smaller size of M_g, with respect, e.g., to the Sylvester matrix;
- the strong link between the null space of M_g and the gcd, and in particular the fact that the null space of M_g immediately yields a gcd cofactor;
- the displacement structure of M_g;
- the possibility of computing selected rows and columns of M_g in a stable and cheap way, thanks to Barnett's formula.

This is, however, a preliminary study. Further work will include generalizations of the proposed problem and a more thorough analysis of the optimization part of the algorithm. Furthermore, this approach can be generalized in several interesting ways:
- using better bases than the monomial one;
- extending it to some multivariate settings, to compute the cofactor of a polynomial g in C[x_1, …, x_n]/(f_1, …, f_n) when f_1, …, f_n define a complete intersection, since Barnett's formula still holds;
- to compute the AGCD of f with g_1, …, g_k, where f is known with accuracy but g_1, …, g_k are inaccurate, one can apply our method to a linear combination g of g_1, …, g_k and succeed with high probability.

8. REFERENCES

[1] A. Aricò, G. Rodriguez. A fast solver for linear systems with displacement structure. Numer. Algorithms (2010).
[2] S. Barnett. Polynomials and linear control systems. Monographs and Textbooks in Pure and Applied Mathematics, 77, Marcel Dekker, Inc., New York, 1983.
[3] B. Beckermann, G. Labahn. When are two numerical polynomials relatively prime? Journal of Symbolic Computation, 26 (1998), no. 6, 677-689.
[4] B. Beckermann, G. Labahn. A fast and numerically stable Euclidean-like algorithm for detecting relatively prime numerical polynomials. Journal of Symbolic Computation, 26 (1998), no. 6, 691-714.
[5] D. A. Bini, P. Boito. A fast algorithm for approximate polynomial gcd based on structured matrix computations. In Numerical Methods for Structured Matrices and Applications: Georg Heinig memorial volume, Operator Theory: Advances and Applications, vol. 199, Birkhäuser (2010), 155-173.
[6] J.-P. Cardinal. On two iterative methods for approximating roots of a polynomial. In J. Renegar, M. Shub and S. Smale, editors, Proc. SIAM-AMS Summer Seminar on Math. of Numerical Analysis, Vol. 32 of Lectures in Applied Math., AMS Press (1996).
[7] R. Corless, P. Gianni, B. Trager, S. Watt. The Singular Value Decomposition for approximate polynomial systems. Proceedings of ISSAC 1995 (Montreal, Canada), 195-207, ACM Press (1995).
[8] R. Corless, S. Watt, L. Zhi. QR factoring to compute the GCD of univariate approximate polynomials. IEEE Trans. Signal Process., 52 (2004), no. 12, 3394-3402.
[9] A. Edelman, H. Murakami. Polynomial roots from companion matrix eigenvalues. Math. Comp. 64 (1995), no. 210, 763-776.
[10] S. Fortune. An iterated eigenvalue algorithm for approximating roots of univariate polynomials. Journal of Symbolic Computation, 33 (2002), no. 5, 627-646.
[11] I. Gohberg, T. Kailath, V. Olshevsky. Fast Gaussian elimination with partial pivoting for matrices with displacement structure. Math. Comp. 64 (1995), no. 212, 1557-1576.
[12] M. Gu. Stable and efficient algorithms for structured systems of linear equations. SIAM J. Matrix Anal. Appl. 19 (1998), 279-306.
[13] G. Heinig. Inversion of generalized Cauchy matrices and other classes of structured matrices. Linear Algebra in Signal Processing, IMA Volumes in Mathematics and its Applications 69 (1994), 95-114.
[14] U. Helmke, P. Fuhrmann. Bezoutians. Linear Algebra Appl. 122-124 (1989), 1039-1097.
[15] T. Kailath, A. H. Sayed. Displacement structure: theory and applications. SIAM Review 37 (1995), no. 3, 297-386.
[16] E. Kaltofen, Z. Yang, L. Zhi. Structured low rank approximation of a Sylvester matrix. In Dongming Wang and Lihong Zhi (eds.), Proc. International Workshop on Symbolic-Numeric Computation (2005).
[17] N. Karmarkar, Y. N. Lakshman. Approximate polynomial greatest common divisors and nearest singular polynomials. Proc. Int. Symp. Symbolic Algebraic Comput., Zurich, Switzerland, 1996.
[18] N. Karmarkar, Y. N. Lakshman. On approximate GCDs of univariate polynomials. Journal of Symbolic Computation, 26 (1998), no. 6, 653-666.
[19] B. Mourrain, O. Ruatta. Relations between roots and coefficients, interpolation and application to system solving. Journal of Symbolic Computation, 33 (2002), no. 5, 679-699.
[20] V. Pan. On computations with dense structured matrices. Math. Comp. 55 (1990), no. 191, 179-190.
[21] M. Stewart. Stable pivoting for the fast factorization of Cauchy-like matrices. Preprint (1997).
[22] Z. Zeng. The approximate GCD of inexact polynomials. Part I: a univariate algorithm. Available at zzeng/uvgcd.pdf.


More information

Numerical Methods. Elena loli Piccolomini. Civil Engeneering. piccolom. Metodi Numerici M p. 1/??

Numerical Methods. Elena loli Piccolomini. Civil Engeneering.  piccolom. Metodi Numerici M p. 1/?? Metodi Numerici M p. 1/?? Numerical Methods Elena loli Piccolomini Civil Engeneering http://www.dm.unibo.it/ piccolom elena.loli@unibo.it Metodi Numerici M p. 2/?? Least Squares Data Fitting Measurement

More information

The Seven Dwarfs of Symbolic Computation and the Discovery of Reduced Symbolic Models. Erich Kaltofen North Carolina State University google->kaltofen

The Seven Dwarfs of Symbolic Computation and the Discovery of Reduced Symbolic Models. Erich Kaltofen North Carolina State University google->kaltofen The Seven Dwarfs of Symbolic Computation and the Discovery of Reduced Symbolic Models Erich Kaltofen North Carolina State University google->kaltofen Outline 2 The 7 Dwarfs of Symbolic Computation Discovery

More information

Fast Polynomial Multiplication

Fast Polynomial Multiplication Fast Polynomial Multiplication Marc Moreno Maza CS 9652, October 4, 2017 Plan Primitive roots of unity The discrete Fourier transform Convolution of polynomials The fast Fourier transform Fast convolution

More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

More information

The Berlekamp algorithm

The Berlekamp algorithm The Berlekamp algorithm John Kerl University of Arizona Department of Mathematics 29 Integration Workshop August 6, 29 Abstract Integer factorization is a Hard Problem. Some cryptosystems, such as RSA,

More information

WORKING WITH MULTIVARIATE POLYNOMIALS IN MAPLE

WORKING WITH MULTIVARIATE POLYNOMIALS IN MAPLE WORKING WITH MULTIVARIATE POLYNOMIALS IN MAPLE JEFFREY B. FARR AND ROMAN PEARCE Abstract. We comment on the implementation of various algorithms in multivariate polynomial theory. Specifically, we describe

More information

Computing Minimal Nullspace Bases

Computing Minimal Nullspace Bases Computing Minimal Nullspace ases Wei Zhou, George Labahn, and Arne Storjohann Cheriton School of Computer Science University of Waterloo, Waterloo, Ontario, Canada {w2zhou,glabahn,astorjoh}@uwaterloo.ca

More information

Algorithms to solve block Toeplitz systems and. least-squares problems by transforming to Cauchy-like. matrices

Algorithms to solve block Toeplitz systems and. least-squares problems by transforming to Cauchy-like. matrices Algorithms to solve block Toeplitz systems and least-squares problems by transforming to Cauchy-like matrices K. Gallivan S. Thirumalai P. Van Dooren 1 Introduction Fast algorithms to factor Toeplitz matrices

More information

Introduction to Applied Linear Algebra with MATLAB

Introduction to Applied Linear Algebra with MATLAB Sigam Series in Applied Mathematics Volume 7 Rizwan Butt Introduction to Applied Linear Algebra with MATLAB Heldermann Verlag Contents Number Systems and Errors 1 1.1 Introduction 1 1.2 Number Representation

More information

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition AM 205: lecture 8 Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition QR Factorization A matrix A R m n, m n, can be factorized

More information

Chap. 3. Controlled Systems, Controllability

Chap. 3. Controlled Systems, Controllability Chap. 3. Controlled Systems, Controllability 1. Controllability of Linear Systems 1.1. Kalman s Criterion Consider the linear system ẋ = Ax + Bu where x R n : state vector and u R m : input vector. A :

More information

Algebra II. Paulius Drungilas and Jonas Jankauskas

Algebra II. Paulius Drungilas and Jonas Jankauskas Algebra II Paulius Drungilas and Jonas Jankauskas Contents 1. Quadratic forms 3 What is quadratic form? 3 Change of variables. 3 Equivalence of quadratic forms. 4 Canonical form. 4 Normal form. 7 Positive

More information

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b AM 205: lecture 7 Last time: LU factorization Today s lecture: Cholesky factorization, timing, QR factorization Reminder: assignment 1 due at 5 PM on Friday September 22 LU Factorization LU factorization

More information

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b) Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)

More information

Algebra Exam Syllabus

Algebra Exam Syllabus Algebra Exam Syllabus The Algebra comprehensive exam covers four broad areas of algebra: (1) Groups; (2) Rings; (3) Modules; and (4) Linear Algebra. These topics are all covered in the first semester graduate

More information

COMP 558 lecture 18 Nov. 15, 2010

COMP 558 lecture 18 Nov. 15, 2010 Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to

More information

Local Fields. Chapter Absolute Values and Discrete Valuations Definitions and Comments

Local Fields. Chapter Absolute Values and Discrete Valuations Definitions and Comments Chapter 9 Local Fields The definition of global field varies in the literature, but all definitions include our primary source of examples, number fields. The other fields that are of interest in algebraic

More information

CPE 310: Numerical Analysis for Engineers

CPE 310: Numerical Analysis for Engineers CPE 310: Numerical Analysis for Engineers Chapter 2: Solving Sets of Equations Ahmed Tamrawi Copyright notice: care has been taken to use only those web images deemed by the instructor to be in the public

More information

Solving linear equations with Gaussian Elimination (I)

Solving linear equations with Gaussian Elimination (I) Term Projects Solving linear equations with Gaussian Elimination The QR Algorithm for Symmetric Eigenvalue Problem The QR Algorithm for The SVD Quasi-Newton Methods Solving linear equations with Gaussian

More information

Math 471 (Numerical methods) Chapter 3 (second half). System of equations

Math 471 (Numerical methods) Chapter 3 (second half). System of equations Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular

More information

Resolution of Singularities in Algebraic Varieties

Resolution of Singularities in Algebraic Varieties Resolution of Singularities in Algebraic Varieties Emma Whitten Summer 28 Introduction Recall that algebraic geometry is the study of objects which are or locally resemble solution sets of polynomial equations.

More information

nonlinear simultaneous equations of type (1)

nonlinear simultaneous equations of type (1) Module 5 : Solving Nonlinear Algebraic Equations Section 1 : Introduction 1 Introduction Consider set of nonlinear simultaneous equations of type -------(1) -------(2) where and represents a function vector.

More information

Matrix decompositions

Matrix decompositions Matrix decompositions Zdeněk Dvořák May 19, 2015 Lemma 1 (Schur decomposition). If A is a symmetric real matrix, then there exists an orthogonal matrix Q and a diagonal matrix D such that A = QDQ T. The

More information

A fast randomized algorithm for overdetermined linear least-squares regression

A fast randomized algorithm for overdetermined linear least-squares regression A fast randomized algorithm for overdetermined linear least-squares regression Vladimir Rokhlin and Mark Tygert Technical Report YALEU/DCS/TR-1403 April 28, 2008 Abstract We introduce a randomized algorithm

More information

A connection between number theory and linear algebra

A connection between number theory and linear algebra A connection between number theory and linear algebra Mark Steinberger Contents 1. Some basics 1 2. Rational canonical form 2 3. Prime factorization in F[x] 4 4. Units and order 5 5. Finite fields 7 6.

More information

arxiv: v1 [math.na] 5 May 2011

arxiv: v1 [math.na] 5 May 2011 ITERATIVE METHODS FOR COMPUTING EIGENVALUES AND EIGENVECTORS MAYSUM PANJU arxiv:1105.1185v1 [math.na] 5 May 2011 Abstract. We examine some numerical iterative methods for computing the eigenvalues and

More information

Math 4310 Solutions to homework 7 Due 10/27/16

Math 4310 Solutions to homework 7 Due 10/27/16 Math 4310 Solutions to homework 7 Due 10/27/16 1. Find the gcd of x 3 + x 2 + x + 1 and x 5 + 2x 3 + x 2 + x + 1 in Rx. Use the Euclidean algorithm: x 5 + 2x 3 + x 2 + x + 1 = (x 3 + x 2 + x + 1)(x 2 x

More information

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra 1.1. Introduction SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that

More information