SVD-BASED REMOVAL OF THE MULTIPLICITIES OF ALL ROOTS OF A MULTIVARIATE POLYNOMIAL SYSTEM

KIM BATSELIER, PHILIPPE DREESEN, AND BART DE MOOR

Abstract. In this article we present a numerical SVD-based algorithm to remove all multiplicities of the roots of a multivariate polynomial system. The algorithm consists of two steps. First, univariate polynomials are computed by means of elimination. Then, their square-free parts are determined and added to the original polynomial system. The main computational tools are principal angles and vectors, which are determined from an SVD. No symbolical Gröbner basis computations are needed. Tolerances required by the algorithm are derived using perturbation results on principal angles. Numerical experiments demonstrate the effectiveness of the proposed algorithm together with the improvement of the conditioning of the roots after removing their multiplicities.

Key words. Macaulay matrix, multivariate polynomials, multiple roots, elimination, principal angles, radical ideal

AMS subject classifications. 15A03, 15B05, 15A18, 15A23

1. Introduction. It is well established that multiple roots of a univariate polynomial p(x) are ill-conditioned. Indeed, let z be a root of multiplicity m of p(x), viz.

p(z) = p'(z) = ... = p^(m−1)(z) = 0.

Suppose now that the coefficients of p(x) are perturbed by Δp(x) such that z + Δz is a simple root of p̃(x) = p(x) + Δp(x). We can write this as

(1.1) p̃(z + Δz) = p(z + Δz) + Δp(z + Δz) = 0.

Substituting the Taylor series of p(z + Δz) allows us to write (1.1) as

p(z) + p'(z) Δz + ... + p^(m−1)(z) (Δz)^(m−1)/(m−1)! + p^(m)(z) (Δz)^m/m! + O((Δz)^(m+1)) + Δp(z + Δz) = 0,

from which the first m terms vanish. This then allows us to deduce the following upper bound on the forward error Δz:

(1.2) |Δz| ≤ | m! Δp(z + Δz) / p^(m)(z) |^(1/m).

The ill-conditioning is a direct result of the 1/m exponent. A similar expression for the condition number of a multiple root can be found in [12].

The inequality (1.2) also tells us that it is not possible to write an expression such as

forward error ≤ condition number × backward error + higher order terms

Kim Batselier and Philippe Dreesen are research assistants at the KU Leuven, Belgium. Bart De Moor is a full professor at the KU Leuven, Belgium. Research supported by Research Council KUL: GOA/10/09 MaNet, PFV/10/002 (OPTEC), several PhD/postdoc & fellow grants; Flemish Government: IOF: IOF/KP/SCORES4CHEM; FWO: PhD/postdoc grants, projects: G (Brainmachine), G (Mechatronics MPC), G (Structured systems); IWT: PhD Grants, projects: SBO LeCoPro, SBO Climaqs, SBO POM, EUROSTARS SMART, iMinds 2012; Belgian Federal Science Policy Office: IUAP P7/19 (DYSCO, Dynamical systems, control and optimization); EU: ERNSI, FP7-EMBOCON (ICT), FP7-SADCO (MC ITN), ERC ST HIGHWIND, ERC AdG A-DATADRIVE-B; COST: Action ICO806: IntelliCIS. Department of Electrical Engineering ESAT-STADIUS Center for Dynamical Systems, Signal Processing and Data Analytics, KU Leuven / IBBT Future Health Department, 3001 Leuven, Belgium.
when a polynomial has multiple roots. We will write h.o.t. instead of higher order terms from here on. The roots of a univariate polynomial are mathematically equivalent to the eigenvalues of its Frobenius companion matrix, and multiple eigenvalues are hence also ill-conditioned [26]. The roots of a multivariate polynomial system f_1,...,f_s can also be found from eigenvalue problems, namely their corresponding Stetter eigenvalue problems [2, 24, 25], and hence the same ill-conditioning applies when there are multiple roots. The matrices in the Stetter eigenvalue problem are not companion matrices and therefore do not contain the coefficients of the polynomial system. Instead, they express the multiplication operation in the quotient ring C^n/⟨f_1,...,f_s⟩, where C^n is the ring of n-variate polynomials and ⟨f_1,...,f_s⟩ is the polynomial ideal generated by f_1,...,f_s. There are two ways of getting rid of this ill-conditioning of multiple roots. One way is to rephrase the root-finding problem such that one needs to solve a regular nonlinear least squares problem on a pejorative manifold [29] instead of solving the ill-conditioned eigenvalue problem. The pejorative manifold of a given multiplicity structure contains all perturbations of a univariate polynomial p(x) such that the multiplicity structure of its roots is preserved [17]. As a consequence, this approach only works for univariate polynomials. The second way of getting rid of the ill-conditioning is by removing the multiplicities of the roots. This is also called computing the radical ideal, and it is this approach that will be followed in this article. Numerical implementations of computing the radical ideal will always result in an approximate radical ideal.
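The effect of the 1/m exponent in the bound (1.2) is easy to observe numerically. The following Python/NumPy sketch is our own illustration (the paper's software is the MATLAB PNLA package); the cubic and the perturbation size eps are arbitrary choices:

```python
import numpy as np

# p(x) = (x - 1)^3 has the triple root z = 1 (m = 3). Perturbing the
# constant coefficient by eps gives (x - 1)^3 + eps = 0, so the bound
# (1.2) predicts |dz| ~ (m! * |eps| / |p'''(1)|)^(1/m) = eps^(1/3).
eps = 1e-12
p = np.array([1.0, -3.0, 3.0, -1.0])          # coefficients of (x - 1)^3
p_pert = p + np.array([0.0, 0.0, 0.0, eps])   # backward error of size eps
dz = np.max(np.abs(np.roots(p_pert) - 1.0))
print(dz)   # a backward error of 1e-12 moves the root by about eps^(1/3) = 1e-4
```

Twelve digits of backward accuracy thus yield only about four correct digits in the root, which is exactly the loss quantified by (1.2).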
There are basically two methods of doing this: using the matrix of traces [14, 15, 16], or adding the square-free parts of the univariate polynomials p(x_i) (i = 1,...,n) in the ideal ⟨f_1,...,f_s⟩ to the polynomial system [9, 10]. In this article we follow the latter approach. Our main algorithm is a direct application of the elimination method in [6] and of the method to compute an approximate GCD in [5], and consists of two steps: the computation of each of the univariate polynomials p(x_i), and the computation of their square-free parts. This is achieved by checking principal angles, also called canonical angles [7], between certain subspaces to detect nontrivial intersections, and by solving a least squares problem. The main computational tool in all these algorithms is the singular value decomposition (SVD), which can be computed in a numerically backward stable way [13]. In addition to the development of the algorithms, we also show how a perturbation result on the principal angles can be used to determine suitable numerical tolerances for these algorithms. The outline of this article is as follows. First, we provide in Section 2 some definitions and introduce the notation. Then, in Section 3 we recall the main theorem from algebraic geometry [9, 10] that allows us to remove the multiplicities of all roots of a given multivariate polynomial system. In Section 4 we introduce the Macaulay matrix, which will be the key matrix in the algorithms, and give an interpretation of its row space. The numerical elimination algorithm from [6] that is used in this article to compute the univariate polynomials p(x_i) is presented in Section 5, together with a discussion on how to choose the numerical tolerances. A new result here is the determination of the condition number of elimination.
In Section 6 we discuss the numerical algorithm to compute the square-free part of p(x i ), based on the approximate GCD algorithm from [5], also with a discussion on choosing the tolerance. Finally, we conclude the article with the illustration of the algorithms on various examples in Section 7. All algorithms and numerical examples presented in this article are implemented in a Matlab [23] package called PNLA and are freely available from
2. Preliminaries. The ring of multivariate polynomials in n variables with complex coefficients is denoted by C^n. It is easy to show that the subset of C^n containing all multivariate polynomials of total degree 0 up to d forms a vector space. We will denote this vector space by C_d^n. We consider multivariate polynomials that appear in engineering applications and therefore limit ourselves, without loss of generality, to multivariate polynomials with only real coefficients. Throughout this article we will use a monomial basis as a basis for C_d^n. The total degree of a monomial x^a = x_1^{a_1} ··· x_n^{a_n} is defined as |a| = Σ_{i=1}^n a_i. The degree of a polynomial p, deg(p), then corresponds with the degree of the monomial of p with highest degree. An important concept that we will need is that of a polynomial ideal.

Definition 2.1. ([10, p. 30]) Let f_1,...,f_s ∈ C^n. Then we set

(2.1) ⟨f_1,...,f_s⟩ = { Σ_{i=1}^s h_i f_i : h_1,...,h_s ∈ C^n }

and call it the ideal generated by f_1,...,f_s.

The ideal hence contains all polynomial combinations Σ_{i=1}^s h_i f_i without any constraints on the degrees of h_1,...,h_s. For this reason, the polynomials f_1,...,f_s are also called the generators of the polynomial ideal. When the polynomial system f_1,...,f_s has a finite number of affine roots, we call the corresponding ideal ⟨f_1,...,f_s⟩ zero-dimensional. We denote all polynomials of the ideal ⟨f_1,...,f_s⟩ of degree 0 up to d by ⟨f_1,...,f_s⟩_d. Observe that this implies ⟨f_1,...,f_s⟩_d ⊆ C_d^n, and ⟨f_1,...,f_s⟩_d is therefore also a vector space. The set of generators is not unique for a given polynomial ideal. An important set of generators for a given polynomial ideal ⟨f_1,...,f_s⟩ is the Gröbner basis [8]. With each polynomial ideal I we can associate a set of affine roots V. The polynomial ideal that has the same set of affine roots but where none of the roots have multiplicities is called the radical of I.

Definition 2.2.
([10, p. 176]) Let I = ⟨f_1,...,f_s⟩ be a polynomial ideal. The radical of I, denoted √I, is the set

√I = { f : f^m ∈ I for some integer m ≥ 1 }.

It can be shown that √I is also an ideal and that I ⊆ √I. One therefore has to enlarge the polynomial ideal I to √I by adding extra generators in order to get rid of the multiplicities of the roots.

3. Removing the multiplicities of the roots of a multivariate polynomial system. In this section we will review the theorem from algebraic geometry that removes the multiplicities of the affine roots and consequently also removes all roots at infinity. As we will see, a great advantage of this theorem is that no knowledge of the roots and their multiplicities is required. The key ingredient will be the square-free parts of the univariate polynomials that lie in ⟨f_1,...,f_s⟩. The fundamental theorem of algebra states that every non-zero univariate polynomial with complex coefficients has exactly as many complex roots as its degree, counting multiplicities. This means that any univariate polynomial p(x) of degree d with r distinct roots z_1,...,z_r, of multiplicities m_1,...,m_r, can be factorized as

p(x) = c (x − z_1)^{m_1} (x − z_2)^{m_2} ··· (x − z_r)^{m_r},
with c ∈ C and m_1 + ··· + m_r = d. The reduced or square-free part of the polynomial p is then found by stripping away the multiplicities of the roots.

Definition 3.1. ([10, p. 180]) The square-free (or reduced) part of a univariate polynomial p(x) of degree d with distinct roots z_1,...,z_r of multiplicities m_1,...,m_r is the polynomial

p_red(x) = (x − z_1)(x − z_2) ··· (x − z_r).

An obvious way to find p_red(x) from a given p(x) would be to first compute its roots and determine their respective multiplicities. The following lemma provides a way of computing p_red(x) from p(x) without the need of computing its roots.

Lemma 3.2. ([10, p. 181]) Let p(x) ∈ C_d^1 and let p'(x) be its first derivative. Then

(3.1) p_red(x) = p(x) / GCD(p(x), p'(x)),

where GCD(p(x), p'(x)) stands for the greatest common divisor of p(x) and p'(x).

Proof. Since p(x) = c (x − z_1)^{m_1} (x − z_2)^{m_2} ··· (x − z_r)^{m_r}, we have

p'(x) = c Π_{j=1}^r (x − z_j)^{m_j − 1} H(x)

with

H(x) = Σ_{k=1}^r m_k Π_{j ≠ k} (x − z_j)

a polynomial vanishing at none of the z_1,...,z_r. Clearly

GCD(p(x), p'(x)) = c Π_{j=1}^r (x − z_j)^{m_j − 1},

which proves (3.1).

If a polynomial system has a finite number of affine roots, then we can find for each variable x_i (i = 1,...,n) a univariate polynomial p(x_i) ∈ M_d of minimal degree. The following theorem tells us how we can remove the multiplicities of the affine roots and obtain the radical ideal √I by adding the square-free parts of those univariate polynomials.

Theorem 3.3. ([9, p. 41]) Let I = ⟨f_1,...,f_s⟩ be a zero-dimensional ideal. For each i = 1,...,n, let p(x_i) be the univariate polynomial of minimal degree that lies in M_d, and let p_red(x_i) be the square-free part of p(x_i). Then

√I = ⟨f_1,...,f_s, p_red(x_1),...,p_red(x_n)⟩.

Converting Theorem 3.3 into an algorithm is rather straightforward, see Algorithm 3.1. It basically consists of 2 steps: an elimination step to compute for each
variable x_i the univariate polynomial p(x_i) of minimal degree, and a GCD step to compute their respective square-free parts. In the next sections we will discuss numerical implementations of these 2 steps. Central in these implementations is the Macaulay matrix. We start with defining the Macaulay matrix and giving an interpretation of its row space in the next section.

Algorithm 3.1. Compute square-free generators of the radical ideal √I
Input: generators f_1,...,f_s of a zero-dimensional ideal I
Output: generators of the radical ideal √I
for i = 1,...,n do
    p(x_i) ← univariate polynomial in x_i of minimal degree
    p_red(x_i) ← square-free part of p(x_i)
end for
return f_1,...,f_s, p_red(x_1),...,p_red(x_n)

4. Macaulay matrix. The Macaulay matrix is central to the algorithms described in this article. We first give its proper definition, after which we discuss its size and give an interpretation of its row space. This interpretation will be required to understand the elimination algorithm in the next section.

Definition 4.1. Given a set of polynomials f_1,...,f_s ∈ C_d^n, each of degree d_i (i = 1,...,s), the Macaulay matrix of degree d is the matrix containing the coefficients of

(4.1) M(d) = ( f_1  x_1 f_1  ...  x_n^{d−d_1} f_1  f_2  x_1 f_2  ...  x_n^{d−d_s} f_s )^T,

where each polynomial f_i is multiplied with all monomials from degree 0 up to d − d_i, for all i = 1,...,s.

When constructing the Macaulay matrix, it is more practical to start with the coefficient vectors of the original polynomial system f_1,...,f_s, after which all the rows corresponding to the multiplied polynomials x^a f_i up to degree max(d_1,...,d_s) are added. The x^a here are the multivariate monomials x^a = x_1^{a_1} ··· x_n^{a_n}. Then one can add the coefficient vectors of all polynomials x^a f_i of one degree higher, and so forth, until the desired degree d is reached. Increasing the degree from d to d + 1 is performed by adding, for each polynomial f_i, extra rows to M(d).
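The construction just described is easy to prototype. The following Python/NumPy sketch is our own illustration (the paper's implementation is the MATLAB PNLA package); monomials and macaulay are hypothetical helpers, and the graded monomial order is one arbitrary choice:

```python
from itertools import combinations_with_replacement
import numpy as np

def monomials(n, d):
    """Exponent tuples of all monomials in n variables of total
    degree 0 up to d, listed degree by degree (a graded order)."""
    out = []
    for deg in range(d + 1):
        for c in combinations_with_replacement(range(n), deg):
            e = [0] * n
            for v in c:
                e[v] += 1
            out.append(tuple(e))
    return out

def macaulay(polys, n, d):
    """Macaulay matrix M(d). Each polynomial is a dict mapping an
    exponent tuple to a coefficient; every f_i of degree d_i is
    multiplied by all monomials x^a of degree 0 up to d - d_i."""
    cols = {m: j for j, m in enumerate(monomials(n, d))}
    rows = []
    for f in polys:
        di = max(sum(e) for e in f)
        for shift in monomials(n, d - di):
            row = np.zeros(len(cols))
            for e, c in f.items():
                row[cols[tuple(a + b for a, b in zip(e, shift))]] = c
            rows.append(row)
    return np.vstack(rows)

# f1 = x1^2 + x2^2 - 2 (degree 2), f2 = x1 - x2 (degree 1), n = 2
f1 = {(2, 0): 1.0, (0, 2): 1.0, (0, 0): -2.0}
f2 = {(1, 0): 1.0, (0, 1): -1.0}
M = macaulay([f1, f2], n=2, d=3)
# p(3) = C(3,2) + C(4,2) = 3 + 6 = 9 rows, q(3) = C(5,2) = 10 columns
print(M.shape)   # (9, 10)
```

The row and column counts of this toy example match the size formulas (4.2) and (4.3) given below.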
When the Macaulay matrix is constructed in this way it will have a quasi-Toeplitz structure, in the sense of being almost or nearly Toeplitz [22]. The Macaulay matrix depends explicitly on the degree d for which it is defined, hence the notation M(d). The reason (4.1) is called the Macaulay matrix is that it was Macaulay who introduced this matrix, drawing from earlier work by Sylvester [27], in his work on elimination theory, resultants and solving multivariate polynomial systems [20, 21]. It is in fact a generalization of the Sylvester matrix to n variables and an arbitrary degree d. For a given degree d, the number of rows p(d) of M(d) is given by the polynomial

(4.2) p(d) = Σ_{i=1}^s C(d − d_i + n, n) = s/n! · d^n + O(d^{n−1})

and the number of columns q(d) by

(4.3) q(d) = C(d + n, n) = 1/n! · d^n + O(d^{n−1}).
From these two expressions it is clear that the number of rows grows faster than the number of columns as soon as the total number of multivariate polynomials s > 1. We denote the rank of M(d) by r(d) and the dimension of its null space by c(d). Given a set of polynomials f_1,...,f_s, we can interpret the row space of M(d), denoted M_d, as the vector space

(4.4) M_d = { Σ_{i=1}^s h_i f_i : h_i ∈ C_{d−d_i}^n (i = 1,...,s) }.

This interpretation will be needed when we discuss the elimination step of our algorithm.

5. Numerical computation of the univariate polynomial p(x_i). The first step of Algorithm 3.1 is to find for each variable x_i its corresponding univariate polynomial p(x_i) of minimal degree. In this section we give a short overview of the elimination method as described in [6]. The key idea is that the desired univariate polynomial p(x_i) lies in the intersection

M_d ∩ span{1, x_i, x_i^2, x_i^3, ..., x_i^d}

for an unknown degree d. This requirement is easily understood: p(x_i) ∈ M_d since it belongs to the polynomial ideal ⟨f_1,...,f_s⟩, and p(x_i) ∈ span{1, x_i, x_i^2, x_i^3, ..., x_i^d} implies that it is univariate in x_i. Let the columns of the orthogonal matrix E(d) form a canonical basis for span{1, x_i, x_i^2, x_i^3, ..., x_i^d}. Each column of E(d) therefore corresponds with a particular canonical vector e_j, where j = 1,...,d+1 is the position of the monomial x_i^{j−1} according to the monomial ordering that is used. Checking whether there is a nontrivial intersection can be done by inspecting the smallest principal angle between the two vector spaces. When this angle is zero there is a nontrivial intersection, and p(x_i) can then be computed as a basis vector for the intersection. For small principal angles it is numerically better to compute the sine of the angles, using Theorem 5.1. Since we assume that the coefficients of the polynomials are real, we state the theorem for this particular case.
It is, however, also valid for complex entries by replacing the transpose (·)^T with the Hermitian transpose (·)^H.

Theorem 5.1. ([7] and [18, p. 6]) Assume that the columns of Q_1 and Q_2 are orthogonal bases for two subspaces of R^m. Let

A = Q_1^T Q_2,

and let the SVD of this r_1 × r_2 matrix be

A = Y C Z^T,  C = diag(σ_1,...,σ_{r_2}).

If we assume that σ_1 ≥ σ_2 ≥ ... ≥ σ_{r_2}, then the principal angles and principal vectors associated with this pair of subspaces are given by

cos(θ_k) = σ_k(A),  U = Q_1 Y,  V = Q_2 Z.
The singular values µ_1,...,µ_m of the matrix Q_2 − Q_1 Q_1^T Q_2 are given by

µ_k = √(1 − σ_k^2).

Moreover, the principal angles satisfy the equalities

θ_k = arcsin(µ_k).

The right principal vectors can be computed as

v_k = Q_2 z_k,  k = 1,...,r_2,

where the z_k are the corresponding right singular vectors of Q_2 − Q_1 Q_1^T Q_2. The left principal vectors are then computed by

u_k = Q_1 Q_1^T v_k / σ_k.

Let the columns of U be an orthonormal basis for M_d. Then the columns of the q(d) × (d+1) matrix E(d) − U U^T E(d) span the orthogonal projection of span{1, x_i, x_i^2, x_i^3, ..., x_i^d} onto the orthogonal complement of M_d. If there is a nontrivial intersection, then the right singular vector E(d) z_{d+1} corresponding with µ_{d+1} is a unit basis vector for this vector space. This basis vector is hence also the desired univariate polynomial p(x_i). If the columns of the q(d) × c(d) matrix N constitute an orthonormal basis for the null space of M(d), then E(d) − U U^T E(d) can be replaced by N N^T E(d), since

E(d) − U U^T E(d) = (I − U U^T) E(d) = N N^T E(d).

The right singular vectors of N N^T E(d) are identical to the right singular vectors of the c(d) × (d+1) matrix N^T E(d), which is much smaller than the original q(d) × (d+1) matrix. Furthermore, the coefficient matrix E(d) never needs to be explicitly constructed. Indeed, each column of E(d) contains a single nonzero entry. Let i be the vector of row indices of the nonzero entries of E(d); then N^T E(d) can be rewritten as N(i, :)^T in MATLAB notation. Since the degree d at which there is a nontrivial intersection is not known, our algorithm needs to iterate over increasing degrees. Two tolerances τ_1, τ_2 are thereby needed: τ_1 to decide the numerical rank of M(d) and τ_2 to decide whether the principal angle is numerically zero. The most robust way to determine the numerical rank of M(d) and an orthogonal basis N for the numerical null space is the SVD.
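The intersection test just described can be sketched in a few lines of Python/NumPy. This is our own standalone illustration, not the paper's MATLAB implementation (punivar.m in PNLA); univariate_in_rowspace is a hypothetical helper, and the small matrix below simply stands in for M(d):

```python
import numpy as np

def univariate_in_rowspace(M, idx, tol=None):
    """Sine-based intersection test of Theorem 5.1: N spans the
    numerical null space of M, and the smallest singular value of
    N[idx, :]^T (the paper's N^T E(d)) is the sine of the smallest
    principal angle between the row space of M and the canonical
    subspace spanned by the coordinates in idx. A zero angle flags
    a nontrivial intersection; the matching right singular vector
    yields the intersection vector E(d) z."""
    U, s, Vt = np.linalg.svd(M)            # full Vt by default
    if tol is None:                        # default rank tolerance tau_1
        tol = max(M.shape) * s[0] * 2.0**-53
    r = int(np.sum(s > tol))               # numerical rank
    N = Vt[r:].T                           # orthonormal null-space basis
    _, mu, Zt = np.linalg.svd(N[idx, :].T)
    angle = np.arcsin(np.clip(mu[-1], 0.0, 1.0))
    vec = np.zeros(M.shape[1])
    vec[idx] = Zt[-1]                      # E(d) z for the smallest mu
    return angle, vec

# the coefficient vector e0 + e2 lies in the row space of M and is
# supported on the canonical coordinates {0, 1, 2}
M = np.array([[1.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0, 1.0]])
angle, vec = univariate_in_rowspace(M, idx=[0, 1, 2])
print(angle)          # ~0: nontrivial intersection detected
print(np.abs(vec))    # ~[0.707, 0, 0.707, 0, 0], i.e. (e0 + e2)/sqrt(2)
```

The recovered unit vector plays the role of the coefficient vector of the univariate polynomial p(x_i) in Algorithm 5.1 below.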
Let M(d) = U S V^T be the SVD of M(d), with U, V orthogonal and S a diagonal matrix containing the singular values σ_1 ≥ ... ≥ σ_q. Then the numerical rank r is chosen such that

σ_1 ≥ ... ≥ σ_r ≥ τ_1 > σ_{r+1} ≥ ... ≥ σ_q,

and the singular vectors of V corresponding with the singular values σ_{r+1},...,σ_q constitute an orthonormal basis for the numerical null space of M(d). The approxi-rank gap σ_r/σ_{r+1} [19, p. 920] then serves as a measure of how well the numerical rank is defined. Indeed, if there is a large gap between σ_r and σ_{r+1} and τ_1 lies between these two values, then small changes in τ_1 will not affect the determination of the numerical rank. Numerical experiments indicate that the numerical rank of M(d) is typically very well defined, with large approxi-rank gaps being common. Indeed, the polysys collection of our PNLA package contains over 100 multivariate polynomial systems with their corresponding approxi-rank gaps, almost all of which are large. As will be shown in the numerical experiments in Section 7, a default choice of

τ_1 = max(q(d), p(d)) ||M(d)||_2 u

works very well for the numerical rank test. Here u is the unit roundoff, which is 2^−53 in double precision. A second tolerance τ_2 is needed to determine whether the computed principal angle is numerically zero. We have established that finding the univariate polynomial p(x_i) corresponds with the computation of a principal angle between two vector spaces and its corresponding principal vector. We therefore define the condition number of finding p(x_i) as the condition number of the principal angle θ. It is shown in [7] that the condition number of the principal angle θ between the row spaces of M(d) and E(d) is essentially max(κ(M), κ(E)), where κ denotes the condition number of a matrix. More specifically, let ΔM, ΔE be the perturbations of M(d), E(d) respectively, with

||ΔM||_2 / ||M||_2 ≤ ε_M,  ||ΔE||_2 / ||E||_2 ≤ ε_E.

Then the following relationship [7, p. 585] holds:

(5.1) |θ̃ − θ| ≤ 2 (ε_M κ(M) + ε_E κ(E)) + h.o.t.,

where θ̃ is the principal angle between the perturbed vector spaces. E(d) is exact and unperturbed, so we can set ε_E = 0. Also, when there is a nontrivial intersection, θ = 0. This allows us to simplify (5.1) to

θ̃ ≤ 2 ε_M κ(M),

which indicates that the condition number of the principal angle is the condition number of M(d). The Macaulay matrix of a consistent polynomial system is singular for almost all degrees, and we therefore need to define its condition number as

κ(M) = σ_1 / σ_r.

Furthermore, it is shown in [7, p. 587] that when the perturbations are due to numerical computations and the orthogonal basis is computed using Householder transformations, then

|θ̃ − θ| ≤ (p κ(M) + c κ(E)) 2^−53 + h.o.t.,
where p is the number of rows of M(d) and c = dim(range(E(d))) = d + 1. The factor 2^−53 is due to the fact that we work in double precision. This allows us to set

τ_2 = (p κ(M) + d + 1) 2^−53.

The most computationally expensive step is the computation of the SVD of the p × q Macaulay matrix M(d). Only the singular values and an orthonormal basis for the right null space of M(d) are needed. This costs 4pq^2 + 8q^3 flops [13]. Using (4.2) and (4.3), this computational complexity can be expressed in terms of n, s and d as O((s + 2) d^{3n}/(n!)^3), where both s and n are fixed for a given problem. The complete numerical algorithm to compute the univariate polynomial p(x_i) is summarized in Algorithm 5.1 and is implemented in the MATLAB PNLA package as punivar.m. An obvious optimization is to use the recursive updating scheme for the orthogonal basis N described in [4]. This recursive algorithm takes the N of M(d) and updates it to the N of M(d + 1), using only the additional rows that are required to construct M(d + 1) from M(d). This orthogonalization scheme reduces the computational complexity to O((s + 2) d^{3n−3}/((n−1)!)^3).

Algorithm 5.1. Numerical computation of the univariate polynomial p(x_i)
Input: f_1,...,f_s of degrees d_1,...,d_s, monomial x_i
Output: univariate p(x_i)
p(x_i) ← ∅
d ← max(d_1,...,d_s)
compute tolerances τ_1, τ_2
N ← orthonormal basis for null space of M(d) from SVD
i ← row indices of nonzero entries of E(d)
while p(x_i) = ∅ do
    [W S Z] ← SVD(N(i, :)^T)
    if arcsin(µ_{d+1}) < τ_2 then
        p(x_i) ← z_{d+1}
    else
        d ← d + 1
        compute tolerances τ_1, τ_2
        N ← orthonormal basis for null space of M(d) from SVD
        add row index of additional nonzero entry of E(d) to i
    end if
end while

6. Numerical computation of the square-free part of p(x_i). In this section we will discuss how the square-free part of p(x_i) can be computed numerically. First, the assumption is made that the univariate polynomial p(x_i) from Algorithm 5.1 is exact. Then we will explain how to take into account that only an approximation of p(x_i) is available. The following theorem relates the least common multiple and greatest common divisor of two multivariate polynomials to each other and is of crucial importance in our method.

Theorem 6.1. ([10, p. 190]) Let f_1, f_2 ∈ C^n and let l, g be their exact least common multiple and greatest common divisor respectively. Then

(6.1) l · g = f_1 · f_2.

Remember from Lemma 3.2 that the square-free part of p(x_i) can be computed as

p_red(x_i) = p(x_i) / GCD(p(x_i), p'(x_i)).

Setting f_1 = p(x_i), f_2 = p'(x_i) in (6.1) and rewriting it as

p(x_i) / g = l / p'(x_i)

shows that the desired square-free part is found from the polynomial division of l by p'(x_i).
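In the univariate case, Lemma 3.2 is easy to check numerically. The following Python/NumPy sketch is our own standalone illustration with a tolerance-based Euclidean GCD (poly_gcd is a hypothetical helper), not the SVD-based approximate GCD of [5] that the paper actually uses:

```python
import numpy as np

def poly_gcd(a, b, tol=1e-8):
    """Monic GCD of two univariate polynomials via the Euclidean
    algorithm; coefficients in NumPy order (highest degree first).
    Numerically tiny remainders are treated as zero."""
    a, b = np.trim_zeros(a, 'f'), np.trim_zeros(b, 'f')
    while b.size > 0:
        _, r = np.polydiv(a, b)
        r = np.trim_zeros(r, 'f')
        if r.size and np.max(np.abs(r)) < tol * np.max(np.abs(a)):
            r = np.array([])   # remainder is numerical noise
        a, b = b, r
    return a / a[0]            # normalize to a monic polynomial

# p(x) = (x - 1)^2 (x - 2) = x^3 - 4x^2 + 5x - 2
p = np.array([1.0, -4.0, 5.0, -2.0])
g = poly_gcd(p, np.polyder(p))     # GCD(p, p') = x - 1
p_red, _ = np.polydiv(p, g)        # p_red = (x - 1)(x - 2) = x^2 - 3x + 2
print(np.round(p_red, 6))          # [ 1. -3.  2.]
```

The double root at x = 1 is stripped away while both distinct roots survive, exactly as (3.1) predicts.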
The least common multiple l of p(x_i) and p'(x_i) obviously satisfies

l = h_1 p(x_i) = h_2 p'(x_i)
where the polynomial factors h_1, h_2 are of minimal degree. This can be rewritten as

(6.2) l = h_1 M_p(d_l) = h_2 M_{p'}(d_l),

where l, h_1, h_2 are row vectors and M_p(d_l), M_{p'}(d_l) are the Macaulay matrices of p(x_i) and p'(x_i) respectively at degree d_l = deg(l). This means that once we have computed l, the square-free part of p(x_i) is the solution of the overdetermined linear system

(6.3) M_{p'}(d_l)^T h_2^T = l^T.

Solving this linear system is in fact equivalent to the polynomial division l/p'(x_i). The least common multiple l can be found as a vector in the nontrivial intersection of the row spaces of M_p(d_l) and M_{p'}(d_l). Since the degree d_l of l is unknown, the algorithm will iterate over increasing degrees d. The highest attainable value for d_l is deg(p(x_i)) + deg(p'(x_i)), and the iterations of the algorithm therefore do not need to go beyond this degree. Again, the criterion to decide whether there is a nontrivial intersection is the smallest principal angle between the row spaces of M_p(d) and M_{p'}(d). Just like in the previous section, the sine of the smallest principal angle is found from the SVD of N^T U, where the columns of U and N are an orthonormal basis for the row space of M_p(d) and the null space of M_{p'}(d) respectively. When the smallest principal angle is numerically zero, the principal vector associated with that angle is the least common multiple. In order to decide whether a principal angle is numerically zero, one needs to set a tolerance τ. It is important to realize that the polynomials p(x_i) and p'(x_i) will not be exact, due to numerical errors made in Algorithm 5.1. Instead, we will have the perturbed polynomials

p̃(x_i) = p(x_i) + e_1,  p̃'(x_i) = p'(x_i) + e_2.

This means that their respective Macaulay matrices are also perturbed by structured matrices ΔM_p, ΔM_{p'}.
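The linear system (6.3) can be sketched in the univariate case with Python/NumPy. In this illustration of our own, conv_matrix is a hypothetical helper playing the role of the Macaulay matrix of a univariate polynomial, and the least common multiple l is supplied in closed form instead of being computed from principal angles:

```python
import numpy as np

def conv_matrix(f, deg_total):
    """Univariate analogue of the Macaulay matrix of f: row j holds
    the coefficients of x^j * f(x), so a row vector h times this
    matrix gives the coefficient vector of h(x) * f(x). Coefficients
    are stored in ascending order: f[k] belongs to x^k."""
    df = len(f) - 1
    M = np.zeros((deg_total - df + 1, deg_total + 1))
    for j in range(deg_total - df + 1):
        M[j, j:j + df + 1] = f
    return M

# p(x) = (x - 1)^2 (x - 2), ascending coefficients
p  = np.array([-2.0, 5.0, -4.0, 1.0])
dp = np.array([5.0, -8.0, 3.0])          # p'(x) = 3x^2 - 8x + 5
# here GCD(p, p') = x - 1 and p'/GCD = 3x - 5, so by (6.1) the exact
# least common multiple is l = p * (3x - 5)
l  = np.polynomial.polynomial.polymul(p, np.array([-5.0, 3.0]))
# (6.3): solve M_p'(d_l)^T h2^T = l^T in the least squares sense
M  = conv_matrix(dp, len(l) - 1)
h2, *_ = np.linalg.lstsq(M.T, l, rcond=None)
print(np.round(h2, 6))   # [ 2. -3.  1.], i.e. p_red(x) = x^2 - 3x + 2
```

The least squares solution h2 = l/p' is indeed the square-free part (x − 1)(x − 2), confirming that solving (6.3) performs the polynomial division.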
Suppose that ||e_1||_2 ≤ ε_1 and ||e_2||_2 ≤ ε_2; then it is shown in [5] that

||ΔM_p||_2 / ||M_p||_2 ≤ ε_1,  ||ΔM_{p'}||_2 / ||M_{p'}||_2 ≤ ε_2

holds. A straightforward application of the perturbation result (5.1) leads to

θ_k ≤ 2 (ε_1 κ(M_p) + ε_2 κ(M_{p'})) + h.o.t.

Assuming that ε_1, ε_2 are of the same order of magnitude ε, we can hence choose the tolerance τ as

(6.4) τ = 2 ε (κ(M_p) + κ(M_{p'})).

Deriving upper bounds for e_1, e_2 is far from trivial. The univariate polynomials p(x_i) are the result of two consecutive SVDs, and the forward error of any p(x_i) therefore depends on the perturbation of the computed singular subspaces. The sensitivity of the singular vectors to perturbations is, according to Wedin's theorem [28], determined by the gap between σ_r and σ_{r+1}. As the numerical experiments in Section 7 show, this separation is well determined. Using the tolerance τ_1 of the rank test as ε in (6.4) works well in practice, but future work is required to determine a more theoretically meaningful value for ε. The whole algorithm to compute the least common multiple is summarized in Algorithm 6.1. As it also finds a nontrivial intersection between two vector spaces, it is very similar to Algorithm 5.1. The computational complexity is also here determined by the SVDs of M_p and M_{p'}. Note that the matrices M_p and M_{p'} will always be smaller than the M(d) of the elimination step. This implies that the computational cost of Algorithm 3.1 is completely dominated by the elimination step.

Algorithm 6.1. Numerical computation of a least common multiple l
Input: polynomials f_1, f_2, noise level ε
Output: least common multiple l
l ← ∅
d ← max(deg(f_1), deg(f_2))
compute tolerance τ
U ← orthogonal basis for row space of M_{f_1}(d)
N ← orthogonal basis for null space of M_{f_2}(d)
while l = ∅ and d ≤ deg(f_1) + deg(f_2) do
    [W S Z] ← SVD(N^T U)
    if arcsin(µ_{d+1}) < τ then
        l ← U z_{d+1}
    else
        d ← d + 1
        compute tolerance τ
        U ← orthogonal basis for row space of M_{f_1}(d)
        N ← orthogonal basis for null space of M_{f_2}(d)
    end if
end while

Once the least common multiple l is found, the desired square-free part can be computed as

p_red(x_i) = argmin_x || M_{p'}(d_l)^T x − l^T ||_2^2.

The MATLAB/Octave function in the PNLA package that computes both the least common multiple l and the solution of the overdetermined system (6.3) is getlcm.m. Algorithm 3.1 is implemented in the PNLA package as getrad.m.

7. Numerical experiments. In this section we illustrate the application of Algorithm 3.1 on 3 multivariate polynomial systems with multiple roots. All computations were done in MATLAB on a 2.66 GHz quad-core desktop with 8 GB RAM. Forward errors and residuals of the least squares solutions were computed in the 2-norm, and the forward errors were determined using the exact result from the Groebner package in Maple [1]. All polynomials were normalized prior to the computations by dividing their coefficient vectors by their 2-norm. The roots of the multivariate polynomials were computed numerically without a Gröbner basis, using the method described in [3, 11].

Example 7.1. Consider the polynomial system ([15, p.
7]) x x 1 x 2 6x 1 + 6x x = 0, x x 2 1x 2 7x x 1 x x 1 x x 1 x x x = 0, x x 2 1x 2 5x x 1 x x 1 x x 1 x x x = 0, which has 2 affine roots: (1, 1) with a multiplicity of 3 and (−1, 2) with a multiplicity of 2. Computing the roots by means of the Stetter eigenvalue problem returns for the root
(1, 1) 3 results with a relative forward error of and for the root (−1, 2) 2 results with a relative forward error of Algorithm 5.1 returns the univariate polynomial p(x_1) = x x x x x_1^5 at degree d = 5, with a relative forward error of The Macaulay matrix M(5) has a numerical rank of 16, with tolerance τ_1 = and an approxi-rank gap of Computing the roots of p(x_1) as the eigenvalues of its companion matrix results in i, i, i, i, The root −1 has a relative forward error of and the root 1 has a relative forward error of due to its higher multiplicity. These results are consistent with the results of solving the Stetter eigenvalue problems. Applying the differential operator D_{x_1} to p(x_1) results in p'(x_1) = x x x x_1^4. The least common multiple of p(x_1) and p'(x_1) is found for d = 6, with a principal angle of The computed square-free part is p_red(x_1) = x x_1^2, with a residual of The x_1 term can be taken to be zero since it is smaller than the numerical tolerance τ = The univariate polynomial in x_2 that is computed by Algorithm 5.1 is p(x_2) = x x x_2^3 for d = 3, with a relative forward error of The 5 × 10 Macaulay matrix is of full row rank with a tolerance of Computing the derivative of p(x_2) yields p'(x_2) = x x_2^2. The least common multiple of p(x_2) and p'(x_2) is found for d = 4, with a principal angle of The computed square-free part is p_red(x_2) = x x_2^2, with a residual of Adding the 2 approximate square-free parts p_red(x_1) and p_red(x_2) to the original polynomial system and computing the roots by means of the Stetter eigenvalue problems returns the following 2 roots: ( , ) with a relative forward error of and ( , )
each with a relative forward error corresponding to 13 correct digits. The number of correct digits of the result has therefore increased from 5 and 7, respectively, to 13 for both roots. Example 7.2. The polynomial system consists of three polynomial equations in x_1, x_2, x_3 and has 27 affine roots: 9 real regular roots, 6 complex conjugate ones, and the solutions (1, 0, 0), (0, 1, 0), (0, 0, 1), each of multiplicity 4. If one tries to compute the roots of this system directly, then the 3 roots (1, 0, 0), (0, 1, 0), (0, 0, 1) can only be determined accurately up to 6 digits. Algorithm 5.1 computes the univariate polynomials p(x_1), p(x_2), p(x_3) at d = 14 with small relative forward errors. The Macaulay matrix at this degree has a rank of 653 with a large approxi-rank gap. The same rank-test tolerance tau_1 is used in all 3 cases, together with the principal-angle tolerance tau_2; the computed principal angles are all smaller than this tolerance. All computed square-free parts have small relative forward errors. Adding p_red(x_1), p_red(x_2) and p_red(x_3) to the original polynomial system reduces the total number of affine roots to 18 and results in an accuracy of 12 digits for the roots (1, 0, 0), (0, 1, 0), (0, 0, 1). Example 7.3. The KSS5 system [30] consists of 5 polynomials in 5 variables, f_i(x_1, ..., x_5) = x_i^2 + sum_{j=1}^{5} x_j - 2x_i - 4 (i = 1, ..., 5), and has 32 affine roots, which include the root (1, ..., 1) of multiplicity 16. This root can at best be computed numerically with 3 accurate digits (1.003) for 5 of the 16 results; the other results do not have a single correct digit.
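The loss of accuracy caused by a multiple root, and its recovery once the multiplicity is removed, can be reproduced in a small univariate sketch. This is illustrative only: it uses NumPy's companion-matrix root finder and the exact square-free part, not the SVD-based Macaulay-matrix computations of the previous sections.

```python
import numpy as np

# p(x) = (x - 1)^4 (x + 1): the root 1 has multiplicity 4.
p = np.poly([1, 1, 1, 1, -1])

# Roots via the eigenvalues of the companion matrix (what np.roots does).
# The quadruple root splits into a cluster of radius ~ eps^(1/4) ~ 1e-4,
# i.e. only about 4 correct digits, in line with the bound (1.2).
roots_p = np.roots(p)
near_one = [r for r in roots_p if abs(r - 1) < 0.5]
err_multiple = max(abs(r - 1) for r in near_one)

# Square-free part p / gcd(p, p') = (x - 1)(x + 1); here it is written
# down exactly, whereas the paper computes it numerically via an SVD.
p_red = np.poly([1, -1])
err_simple = max(min(abs(r - 1), abs(r + 1)) for r in np.roots(p_red))

print(err_multiple)  # on the order of eps**(1/4)
print(err_simple)    # on the order of eps: the simple roots are well-conditioned
```

The gap between the two errors is exactly the 1/m-exponent effect discussed in the introduction, and mirrors the digit improvements reported in Examples 7.1 to 7.3.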
The univariate polynomials are all computed at d = 10 by Algorithm 5.1 with small relative forward errors. The Macaulay matrix at this degree has a rank of 2971 and a large approxi-rank gap. The same rank-test tolerance tau_1 is used for all 5 univariate polynomials, together with the principal-angle tolerance tau_2; all computed principal angles are bounded from above by this tolerance. All square-free parts are computed with small relative forward errors. Adding these approximate square-free polynomials to the original polynomial system reduces the number of affine roots to 17. The root (1, ..., 1) can now be determined with a small relative forward error.
REFERENCES
[1] Maple 16, Maplesoft, a division of Waterloo Maple Inc., Waterloo, Ontario.
[2] W. Auzinger and H. J. Stetter, An elimination algorithm for the computation of all zeros of a system of multivariate polynomial equations, in Int. Conf. on Numerical Mathematics, Singapore 1988, Birkhäuser ISNM 86, 1988.
[3] K. Batselier, P. Dreesen, and B. De Moor, Prediction Error Method Identification is an Eigenvalue Problem, Proc. 16th IFAC Symposium on System Identification (SYSID 2012), 2012.
[4] K. Batselier, P. Dreesen, and B. De Moor, A fast iterative orthogonalization scheme for the Macaulay matrix. Submitted, 2013.
[5] K. Batselier, P. Dreesen, and B. De Moor, A geometrical approach to finding multivariate approximate LCMs and GCDs, Linear Algebra and its Applications, 438 (2013).
[6] K. Batselier, P. Dreesen, and B. De Moor, The Geometry of Multivariate Polynomial Division and Elimination, SIAM Journal on Matrix Analysis and Applications, 34 (2013).
[7] Å. Björck and G. H. Golub, Numerical Methods for Computing Angles Between Linear Subspaces, Mathematics of Computation, 27 (1973).
[8] B. Buchberger, Ein Algorithmus zum Auffinden der Basiselemente des Restklassenringes nach einem nulldimensionalen Polynomideal, PhD thesis, Mathematical Institute, University of Innsbruck, Austria.
[9] D. A. Cox, J. B. Little, and D. O'Shea, Using Algebraic Geometry, Graduate Texts in Mathematics, Springer-Verlag.
[10] D. A. Cox, J. B. Little, and D. O'Shea, Ideals, Varieties and Algorithms, Springer-Verlag, third ed.
[11] P. Dreesen, K. Batselier, and B. De Moor, Back to the roots: Polynomial system solving, linear algebra, systems theory, Proc. 16th IFAC Symposium on System Identification (SYSID 2012), 2012.
[12] R. T. Farouki and V. T. Rajan, On the Numerical Condition of Polynomials in Bernstein Form, Comput. Aided Geom. Des., 4 (1987).
[13] G. H. Golub and C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press, 3rd ed.
[14] I. Janovitz-Freireich, B. Mourrain, L. Rónyai, and A. Szántó, On the computation of matrices of traces and radicals of ideals, Journal of Symbolic Computation, 47 (2012).
[15] I. Janovitz-Freireich, L. Rónyai, and A. Szántó, Approximate Radical for Clusters: A Global Approach Using Gaussian Elimination or SVD, Mathematics in Computer Science, 1 (2007).
[16] I. Janovitz-Freireich, A. Szántó, B. Mourrain, and L. Rónyai, Moment matrices, trace matrices and the radical of ideals, in Proceedings of the twenty-first international symposium on Symbolic and algebraic computation, ISSAC '08, New York, NY, USA, 2008, ACM.
[17] W.
Kahan, Conserving confluence curbs ill-condition, technical report.
[18] A. V. Knyazev and M. E. Argentati, Principal Angles between Subspaces in an A-based Scalar Product: Algorithms and Perturbation Estimates, SIAM Journal on Scientific Computing, 23 (2002).
[19] T. Y. Li and Z. Zeng, A rank-revealing method with updating, downdating, and applications, SIAM Journal on Matrix Analysis and Applications, 26 (2005).
[20] F. S. Macaulay, On some formulae in elimination, Proc. London Math. Soc., 35 (1902).
[21] F. S. Macaulay, The algebraic theory of modular systems, Cambridge University Press.
[22] B. Mourrain and V. Y. Pan, Multivariate polynomials, duality, and structured matrices, Journal of Complexity, 16 (2000).
[23] MATLAB R2012a, The MathWorks Inc., Natick, Massachusetts.
[24] H. J. Stetter, Matrix eigenproblems are at the heart of polynomial system solving, SIGSAM Bulletin, 30 (1996).
[25] H. J. Stetter, Numerical Polynomial Algebra, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA.
[26] G. W. Stewart and J.-G. Sun, Matrix Perturbation Theory (Computer Science and Scientific Computing), Academic Press.
[27] J. J. Sylvester, On a theory of syzygetic relations of two rational integral functions, comprising an application to the theory of Sturm's function and that of the greatest algebraical common measure, Trans. Roy. Soc. Lond. (1853).
[28] P.-Å. Wedin, Perturbation bounds in connection with singular value decomposition, BIT Numerical Mathematics, 12 (1972).
[29] Z. Zeng, Computing multiple roots of inexact polynomials, Mathematics of Computation, 74 (2005).
[30] Z. Zeng, The closedness subspace method for computing the multiplicity structure of a polynomial system, in Interactions of Classical and Numerical Algebraic Geometry, 2009.
Class notes: Approximation Introduction Vector spaces, linear independence, subspace The goal of Numerical Analysis is to compute approximations We want to approximate eg numbers in R or C vectors in R
More informationContents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2
Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition
More information