Multivariate Polynomial System Solving Using Intersections of Eigenspaces


J. Symbolic Computation (2001) 32. Available online at http://www.idealibrary.com

H. MICHAEL MÖLLER AND RALF TENBERG
FB Mathematik der Universität Dortmund, 44221 Dortmund, Germany

The solutions of a polynomial system can be computed using eigenvalues and eigenvectors of certain endomorphisms. There are two different approaches: one using the (right) eigenvectors of the representation matrices, and one using the (right) eigenvectors of their transposes, i.e. their left eigenvectors. For both approaches, we describe the common eigenspaces and give an algorithm for computing the solutions of the algebraic system. As a byproduct, we present a new method for computing radicals of zero-dimensional ideals. © 2001 Academic Press

1. Introduction

The problem of finding the common solutions of a system of polynomial equations,

    f_1(x_1, …, x_n) = 0, …, f_s(x_1, …, x_n) = 0,

or, in algebraic terms, the problem of finding the variety V of the ideal A := (f_1, …, f_s), can be transformed into one or several eigenspace problems if the set of solutions is finite. See for instance the motivating note Corless (1996) or the more detailed exposition in Cox et al. (1998, Chap. 2, Section 4). It is well known that for n = s = 1 the Frobenius companion matrix has the solutions as eigenvalues. For n > 1, there are classical results which describe, for a fixed polynomial f of the given polynomial ring P, invariants of the endomorphism

    Φ_f : P/A → P/A,    [g] ↦ [f·g].

The set {f(y) | y ∈ V} is the set of eigenvalues of Φ_f. So these theorems describe just the image of V under f, or parts of it; see for instance Gonzalez-Vega et al. (1999). By an eigenspace method, we understand a method which uses one or several sets of eigenvalues f(V) (and their eigenspaces) for the reconstruction of V. A naive method is as follows. One tests every combination (λ_1, …, λ_n), λ_i an eigenvalue of a representation matrix of Φ_{x_i}, for a common zero of f_1, …, f_s.
So each test requires s polynomial evaluations. This is reasonable if there are only a few different values λ_i; see for instance p. 63 of Gonzalez-Vega et al. (1999). In the worst case, if all λ_i are different, one has card(V)^n tests for finding the card(V) solutions, in a certain sense the search for m needles in a haystack of size m^n. Using u-resultants, Lazard (1981) considered methods for combining eigenvalues of two endomorphisms Φ_{x_i} and Φ_{x_j} in order to reduce the haystack investigation. A result

of Schur shows that there is a basis of P/A such that the representation matrices of all Φ_{x_i}, i = 1, …, n, are upper triangular. Denoting by α_i^{(j)} the jth diagonal element of the representation matrix of Φ_{x_i}, Yokoyama et al. (1992) proved that the points α^{(j)} := (α_1^{(j)}, …, α_n^{(j)}) are the solutions of the given system of equations.

A totally different way was proposed first by Auzinger and Stetter (1988). Let M_f denote the transpose of a representation matrix corresponding to Φ_f. M_f is called the multiplication matrix. Then for every z = (z_1, …, z_n) ∈ V there is a right eigenvector v to the eigenvalue f(z),

    M_f v = f(z) v,    (1)

with v independent of f, which allows one to read off all components z_i of z. If all eigenspaces of M_f have dimension 1, in other words if M_f is non-derogatory, then such a v is easily detected. Whereas in Auzinger and Stetter (1988) resultant methods were used, the method was described and extended in Möller and Stetter (1995) using Gröbner techniques. In that paper, procedures were also proposed for reconstructing the points of V. A discussion of the stability of such procedures can be found in several papers of Stetter, for instance Stetter (1996). Another method, based on a Schur factorization of the multiplication matrices, was proposed by Corless et al. (1997). They also consider stability aspects of their procedure. There are also other eigenspace methods for special instances, like Manocha's for the complete intersection case, which is presented in Cox et al. (1998, Chap. 3, Section 6) as an application of u-resultants.

In this paper, we discuss the eigenspaces of both the representation matrices and the multiplication matrices. For this purpose, we briefly present the necessary definitions from Gröbner techniques and (for a better understanding of multiple points) the concept of dual bases.
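The naive combination test from the introduction can be sketched in a few lines of Python. The toy system, its hardcoded eigenvalue lists, and the function name below are our own illustration, not taken from the paper; normally the eigenvalues would come from the representation matrices.

```python
from itertools import product

# Hypothetical toy system f1 = x^2 - 1, f2 = x - y with V = {(1, 1), (-1, -1)};
# the eigenvalue lists of the representation matrices of Phi_x and Phi_y are
# {1, -1} each (assumed known here, normally computed numerically).
fs = [lambda x, y: x**2 - 1, lambda x, y: x - y]
eigs = [[1, -1], [1, -1]]

def naive_eigenvalue_search(fs, eigs, tol=1e-10):
    """Test every combination (lambda_1, ..., lambda_n):
    card(V)^n candidate points, s polynomial evaluations each."""
    return [c for c in product(*eigs)
            if all(abs(f(*c)) < tol for f in fs)]

# keeps (1, 1) and (-1, -1), discards the two spurious combinations
```

Even on this tiny system, half of the candidate combinations are spurious, which is the haystack effect discussed above.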
The investigation of common eigenspaces of the representation matrices leads to a new and simple calculation of the radical √A of a zero-dimensional ideal A as √A = A : q, q being a polynomial. We extend the subspace method that was already indicated in Möller and Stetter (1995) to a method for computing the variety V starting with either the representation matrices or the multiplication matrices of the endomorphisms Φ_{x_1}, …, Φ_{x_n}. We stress explicitly that our method is direct. It needs only a (numerical) method for computing the eigenspaces of a matrix. We need no points or coordinates in general position, which often means a finite number of trials until they are found. (Remember that you always have only a finite number of trials!) On the contrary, if the eigenspaces computed so far do not yet contain sufficient information to reconstruct the vector v in (1), then we do not discard them but analyse them more carefully until we obtain v. This reuse of information is, in a sense, the ecologically correct procedure.

2. Basic Definitions and Results

Gröbner bases are introduced in most textbooks on Computer Algebra. Unfortunately, notations differ slightly. Since one of our motivations is to clear up misunderstandings about the eigenvalue method, we begin by fixing the basic notations. Let P denote the ring of polynomials in the variables x_1, …, x_n with complex coefficients, i.e. P = C[x_1, …, x_n].

Definition 1. (Terms, Ordering) Let the set of terms

    T := {x_1^{i_1} ··· x_n^{i_n} | i_1, …, i_n ∈ N_0}

be ordered by an admissible ordering <_T, i.e.

(a) 1 <_T t for all t ∈ T \ {1},
(b) if t_1 <_T t_2, then t_1·t <_T t_2·t for all t ∈ T.

The admissible term ordering is called a degree ordering if deg(t_1) < deg(t_2) implies t_1 <_T t_2 for arbitrary terms t_1, t_2.

Having fixed an admissible term ordering, every f ∈ P \ {0} has in its representation as a linear combination of terms a maximal term w.r.t. <_T, which is called the leading term lt(f) = lt_{<_T}(f). (In the following we omit the dependence of lt on <_T.) In our definition, we followed B. Buchberger, who introduced the name Gröbner bases and many related notations. For us, a monomial is a term multiplied with an element from the ground field, representing in a sense the singular where polynomial is the plural.

Definition 2. (Normal Set, Normal Form) Let A be an ideal and <_T an admissible term ordering. The set N of all terms not contained in the multiplicative semigroup {lt(f) | f ∈ A \ {0}} is called the normal set of A. The projection NF : P → span N along A, i.e.

    NF(f) − f ∈ A for all f ∈ P,    NF(f) = f for all f ∈ span N,

is called the normal form mapping, and NF(f) the normal form of f.

Instead of normal set, the names standard monomials and basis monomials are often used in the literature. We prefer the notation normal set because it reminds one of the normal form mapping NF. The normal set, and hence the normal form, depends on the ideal and (by the leading terms) on the term ordering.

Definition 3. (Gröbner Basis) Let A ⊆ P be an ideal and <_T an admissible ordering. The set G = {g_1, …, g_s} ⊆ A is called a Gröbner basis if for every f ∈ A \ {0} there exists a g ∈ G such that lt(g) divides lt(f).

From a Gröbner basis G of the ideal A w.r.t. the term order <_T, the associated normal set can be computed as

    N = {t ∈ T | there is no g ∈ G such that lt(g) divides t}.

Gröbner bases also allow the efficient computation of the normal form NF(f) of f; see for instance Cox et al. (1998).
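A minimal sketch of Definitions 2 and 3, using SymPy's `groebner` (our choice of tool, not the paper's): the normal set is read off from the leading terms of a Gröbner basis. The toy ideal is the one reused in Example 1 later in the paper; listing the generators as (y, x) makes x < y, and 'grlex' is then a degree ordering.

```python
from sympy import symbols, groebner, Poly

x, y = symbols('x y')
# Toy zero-dimensional ideal A = (2y^2 + x^2 - 2y, xy - x, x^3)
G = groebner([2*y**2 + x**2 - 2*y, x*y - x, x**3], y, x, order='grlex')

# leading terms lt(g), as exponent tuples (e_y, e_x)
lead = [Poly(g, y, x).monoms(order='grlex')[0] for g in G.exprs]

def divides(m, t):
    return all(a <= b for a, b in zip(m, t))

# Normal set: all terms divisible by no lt(g).  It is finite here, so a
# small enumeration bound (our choice) suffices.
N = [t for t in ((i, j) for i in range(4) for j in range(4))
     if not any(divides(m, t) for m in lead)]
```

The result is N = {1, x, x², y}, matching the normal set used in the examples; `G.reduce` then yields normal forms such as NF(y²) = y − x²/2.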
For a fixed ideal A,

    P/A = {[f] | f ∈ P},    [f] = {g | f − g ∈ A},

is an algebra. We consider it as a vector space. Then, as stated in Möller and Stetter (1995), the following holds.

Proposition 1. If N is the normal set of the ideal A w.r.t. an arbitrary admissible term ordering, then {[t] | t ∈ N} is a linear basis of the vector space P/A. Let

    NF(f) = Σ_{t∈N} c_t(f)·t;

then [f] = Σ_{t∈N} c_t(f)·[t].

Proof. f − NF(f) ∈ A, hence [f] = [NF(f)]. By linearity of the mapping g ↦ [g], {[t] | t ∈ N} generates P/A. It is also a basis, because

    Σ_{t∈N} c_t [t] = [0]  ⟹  h := Σ_{t∈N} c_t t ∈ A,

and h ≠ 0 leads to the contradiction lt(h) ∈ N (by h ∈ span N) and lt(h) ∉ N (by h ∈ A). □

Definition 4. (Multiplication Matrix) Let A be an ideal and N = {t_1, …, t_r} its normal set w.r.t. a given term order <_T. If

    [f·t_i] = Σ_{j=1}^r m_{ij}(f) [t_j],    i = 1, …, r,

then the complex r×r matrix M_f = (m_{ij}(f))_{i,j=1}^r is called the multiplication matrix corresponding to f ∈ P.

If we consider the linear endomorphism Φ_f : P/A → P/A, [g] ↦ [f·g], and look at the representation matrix B_f w.r.t. the basis which consists of the elements of the normal set, then we easily see that B_f = M_f^T.

Remark 1. This intimate connection of the multiplication matrix M_f and the representation matrix B_f led to confusion in previous papers on the eigenvalue method. Sometimes both were called multiplication matrices or tables. Left eigenvectors of one matrix are right eigenvectors of the transposed one. Since we want to study both systems of eigenvectors, and right and left depend on which of the matrices we prefer, we hope to clarify our presentation by formulating results only for one type of eigenvectors of the respective matrix. For an r×r matrix A we denote by

    E(λ, A) := {v ∈ C^r | Av = λv}

the eigenspace to the eigenvalue λ of A.

Remark 2. The main idea of the method proposed in this paper is to find specific vectors which are eigenvectors of all M_f, f ∈ P, resp. of all B_f, f ∈ P. Anticipating a result of Theorem 2, the eigenvalues of M_f, and hence also of B_f = M_f^T, are of type f(z), z ranging over the common zeros of A; we have to compute

    ⋂_{f∈P} E(f(z), M_f)  and  ⋂_{f∈P} E(f(z), B_f).

This infinite intersection can be avoided, because the commuting families {M_f | f ∈ P} resp. {B_f | f ∈ P} are generated by {M_{x_1}, …, M_{x_n}} resp. {B_{x_1}, …, B_{x_n}}:

    ⋂_{f∈P} E(f(z), M_f) = ⋂_{i=1}^n E(z_i, M_{x_i}),  where z = (z_1, …, z_n),
    ⋂_{f∈P} E(f(z), B_f) = ⋂_{i=1}^n E(z_i, B_{x_i}).
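The multiplication matrices of Definition 4 can be set up directly from normal forms. A sketch with SymPy (our tooling choice; the toy ideal is the one used in Example 1 later, with normal set N = {1, x, y, x²}):

```python
from sympy import symbols, groebner, Poly, Rational

x, y = symbols('x y')
# Toy ideal A = (2y^2 + x^2 - 2y, xy - x, x^3), degree ordering with x < y
G = groebner([2*y**2 + x**2 - 2*y, x*y - x, x**3], y, x, order='grlex')
N = [1, x, y, x**2]
N_exp = [(0, 0), (1, 0), (0, 1), (2, 0)]   # exponents of N w.r.t. gens (x, y)

def mult_matrix(f):
    """Row i of M_f holds the coefficients of NF(f*t_i) w.r.t. N (Definition 4)."""
    rows = []
    for t in N:
        nf = Poly(G.reduce(f * t)[1], x, y)    # normal form of f*t_i
        rows.append([nf.nth(*m) for m in N_exp])
    return rows

M_x, M_y = mult_matrix(x), mult_matrix(y)
```

For instance the third row of M_y encodes NF(y·y) = y − ½x², and B_f is then simply the transpose of M_f.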

3. Dual Bases

The multiplicity of a common zero z of a set of polynomials F is a useful piece of information. If one is interested in the geometric interpretation, i.e. in how the algebraic surfaces f = 0, f ∈ F, intersect in z, one needs more than a mere number. Using the concept of dual bases, which is investigated in Marinari et al. (1993, 1996) and which dates back to Lasker (1905), Macaulay (1916) and, especially, Gröbner (1970), one gets a better insight into the geometric configuration at the intersection points. In addition, and this is why we introduce the concept here, it enables us to describe eigenspaces and to explain the decomposition of the multiplication matrices into a kind of Jordan normal form following Möller and Stetter (1995).

Definition 5. (Variety, Zero-dimensional) Let A be an ideal. The set

    V(A) = {z ∈ C^n | f(z) = 0 for all f ∈ A}

is called the variety of A. An ideal A is called zero-dimensional if its variety consists of a finite set of points.

By the finiteness theorem (Cox et al., 1998), A is zero-dimensional if and only if its normal set is finite. Hence exactly for zero-dimensional ideals finite multiplication matrices and representation matrices can be defined. By Hom(P, C) we denote the set of all linear complex valued functionals on P.

Definition 6. (Dual Basis) Let V ⊆ P be a vector space. If {L^{(1)}, …, L^{(s)}} is a basis of the vector space

    V^⊥ := {L ∈ Hom(P, C) | L(f) = 0 for all f ∈ V},

then it is called a dual basis of V.

It is a nice linear algebra exercise to show that exactly the vector spaces V of finite codimension possess a dual basis, and under which additional conditions on the dual basis V is an ideal. Here we concentrate on functionals of a special type.

Definition 7. (Differential Operators, Shifts) For terms t = x_1^{i_1} ··· x_n^{i_n} ∈ T we define the elementary differential operators

    D(t) : P → P,    p ↦ (1/(i_1! ··· i_n!)) · ∂^{i_1+···+i_n} p / (∂x_1^{i_1} ··· ∂x_n^{i_n}).

Finite linear combinations of these elementary operators will be called differential operators.
For u ∈ T the shift σ_u of the elementary differential operator D(t) is defined to be D(t/u) if u divides t, and 0 otherwise. By linearity the definition of shifts is extended to arbitrary differential operators. Obviously, the set of all non-zero shifts of an elementary differential operator D(t) is finite. Hence the same holds for arbitrary differential operators. Let L be an arbitrary differential operator. Then a non-zero constant multiple of D(1), the identity operator, is among its shifts, because L is a linear combination of elementary operators D(t); if D(u) is one of them with maximal differentiation order, then σ_u L = c·D(1) with a constant c ≠ 0.

Given a finite dimensional subspace Δ of span{D(t) | t ∈ T} and a point z ∈ C^n, the set of all f ∈ P with L(f)(z) = 0 for all L ∈ Δ is a vector space. We need an additional condition on Δ to ensure that this vector space is an ideal.

Definition 8. (Closed Subspace) A linear subspace Δ of span{D(t) | t ∈ T} is called closed if its dimension is finite and if

    L ∈ Δ  ⟹  σ_{x_i} L ∈ Δ,    i = 1, …, n.

Hence also σ_u L ∈ Δ if L ∈ Δ and u ∈ T. In particular, every non-zero closed subspace contains D(1), the identity operator. We remark that in Möller and Stetter (1995) closed subspaces were by definition non-zero spaces. Here we modify the definition to avoid case distinctions in the theorems of Section 5.

For z ∈ C^n and L ∈ span{D(t) | t ∈ T} we define the functional

    L_z : P → C,    L_z(f) := (Lf)(z),

that is, point evaluation of the polynomial Lf at z. For convenience, whenever we use the notation L_z, z ∈ C^n, for a functional, we implicitly mean that L is a differential operator (followed by a point evaluation at z). The correspondence between dual bases of zero-dimensional ideals and closed subspaces of differential operators is given by the next theorem.

Theorem 1. If A is a zero-dimensional ideal with V(A) = {z_1, …, z_m}, then there are closed vector spaces Δ^{(k)} of differential operators such that

    f ∈ A  ⟺  L_{z_k}(f) = 0 for all L ∈ Δ^{(k)}, k = 1, …, m.

Let {L^{(1,k)}, …, L^{(s_k,k)}} denote a basis of Δ^{(k)}, k = 1, …, m; then the s_1 + ··· + s_m functionals L^{(j,k)}_{z_k} : f ↦ (L^{(j,k)} f)(z_k) are linearly independent and constitute a dual basis of the ideal A.

Proof. (Marinari et al., 1993) □

Conditions equivalent to L_{z_k}(f) = 0 for all L ∈ Δ^{(k)}, like L^{(i,k)}_{z_k}(f) = 0, i = 1, …, s_k, are called Max Noether conditions (see Lasker, 1905). They, more precisely the differential operators L ∈ Δ^{(k)}, allow one to understand better how the manifolds p = 0, p ∈ A, intersect in z_k. For instance, by considering the differential operators of order one, one finds all tangents in z_k common to the manifolds. (These manifolds pass through z_k since D(1) ∈ Δ^{(k)}.)
These Max Noether conditions give more insight than the mere number s_k = dim Δ^{(k)}, the multiplicity of z_k as a point of the variety of A. Therefore we will call Δ^{(k)} the Max Noether space of A at z_k. The space Δ^{(k)}_{z_k} := {L_{z_k} | L ∈ Δ^{(k)}} is often called the local dual space of A at z_k. Having a basis {L^{(1)}, …, L^{(s_k)}} of the Max Noether space of A at z_k, the set {L^{(1)}_{z_k}, …, L^{(s_k)}_{z_k}} is a basis of Δ^{(k)}_{z_k} and hence a specific local dual basis (of A at z_k). An arbitrary local dual basis does not necessarily give as much geometrical information as a Max Noether space, because a local dual basis consists of functionals which may also be given as, say, definite integrals.

The ideal

    Q_k := {p ∈ P | L_{z_k}(p) = 0 for all L ∈ Δ^{(k)}}

is a primary ideal with V(Q_k) = {z_k}, and by construction Δ^{(k)} is the Max Noether space of Q_k at z_k. As an easy consequence of Theorem 1, we get

    A = ⋂_{k=1}^m Q_k.

Hence the dual basis of A as given in Theorem 1 reflects the primary decomposition of A.

In Möller (1993) the definition of a consistently ordered basis was introduced. This is an ordered list {L^{(1)}, …, L^{(s)}} of differential operators, constituting a basis of a closed subspace Δ, such that for every degree l > 0 there is a j such that {L^{(1)}, …, L^{(j)}} is a basis of {L ∈ Δ | deg L < l}. Such a basis can always be computed by starting with L^{(1)} := D(1) and enlarging a basis of {L ∈ Δ | deg L < l} to a basis of {L ∈ Δ | deg L ≤ l} for increasing l. The first element in a consistently ordered basis is always c·D(1), c a non-zero constant. For convenience, we always assume D(1) to be the first element if the basis is consistently ordered.

A frequently used formula is the application of a differential operator to a product of two polynomials. For L ∈ span{D(t) | t ∈ T} and f, g ∈ P the Leibniz rule gives

    L(f·g) = Σ_{u∈T} D(u)f · σ_u L(g).    (2)

4. Eigenspaces of M_f

Theorem 2. Let A be a zero-dimensional ideal, V(A) = {z_1, …, z_m}, N = {t_1, …, t_r} its normal set, and M_f the multiplication matrix corresponding to f ∈ P. Then f(z_1), …, f(z_m) are the eigenvalues of M_f. Let Δ^{(k)} be the Max Noether space of A at z_k, k = 1, …, m. If {L^{(1,k)}, …, L^{(s_k,k)}} is a consistently ordered basis of Δ^{(k)}, then the vectors

    D^{(j,k)}_{z_k} := (L^{(j,k)}_{z_k}(t_1), L^{(j,k)}_{z_k}(t_2), …, L^{(j,k)}_{z_k}(t_r))^T ∈ C^r

satisfy

    M_f D^{(1,k)}_{z_k} − f(z_k) D^{(1,k)}_{z_k} = 0,    (3)

    M_f D^{(j,k)}_{z_k} − f(z_k) D^{(j,k)}_{z_k} = Σ_{u∈T\{1}} (D(u)f)(z_k) · (σ_u D^{(j,k)})_{z_k},    j = 2, …, s_k.    (4)

Proof. (Möller and Stetter, 1995) □

In this theorem, the vector (σ_u D^{(j,k)})_{z_k} stands for ((σ_u L^{(j,k)})_{z_k}(t_1), (σ_u L^{(j,k)})_{z_k}(t_2), …, (σ_u L^{(j,k)})_{z_k}(t_r))^T.
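The relations (3) and the matrix identity they lead to can be checked by hand in plain Python. The matrices below are our own working for the ideal treated in Example 1 at the end of this section (normal set N = (1, x, y, x²)); S, J_x, J_y are the matrices that appear there.

```python
# Multiplication matrices of the Example 1 ideal w.r.t. N = (1, x, y, x^2),
# together with S, J_x, J_y; entries worked out by hand from its Groebner basis.
M_x = [[0, 1, 0, 0], [0, 0, 0, 1], [0, 1, 0, 0], [0, 0, 0, 0]]
M_y = [[0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 1, -0.5], [0, 0, 0, 1]]
S   = [[1, 1, 0, 0], [0, 0, 1, 0], [0, 1, 0, -0.5], [0, 0, 0, 1]]
J_x = [[0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
J_y = [[0, 0, 0, 0], [0, 1, 0, -0.5], [0, 0, 1, 0], [0, 0, 0, 1]]

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# (3): D(1)_z = (1, z_x, z_y, z_x^2)^T is a common eigenvector for z in V(A),
# and its entries display the coordinates of z.
for zx, zy in [(0, 0), (0, 1)]:
    v = [1, zx, zy, zx**2]
    assert matvec(M_x, v) == [zx * c for c in v]
    assert matvec(M_y, v) == [zy * c for c in v]

# Matrix form of (3)/(4): M_f S = S J_f with block upper triangular J_f.
assert matmul(M_x, S) == matmul(S, J_x)
assert matmul(M_y, S) == matmul(S, J_y)
```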
By the consistent ordering of the basis {L^{(1,k)}, …, L^{(s_k,k)}}, the shifted operator σ_u L^{(j,k)}, u ∈ T \ {1}, is a linear combination of operators L^{(i,k)}, i < j. Therefore, the identities (3) and (4) of Theorem 2 give, in matrix-vector notation, the following.

Corollary 1. Let A, V(A), and the vectors D^{(j,k)}_{z_k} be as in Theorem 2. Define the matrix

    S := (D^{(1,1)}_{z_1}, …, D^{(s_1,1)}_{z_1}, D^{(1,2)}_{z_2}, …, D^{(s_m,m)}_{z_m}).

Then for arbitrary f ∈ P

    M_f S = S J_f,

where J_f is a block diagonal matrix whose kth diagonal block is an s_k × s_k upper triangular matrix with diagonal entries f(z_k).

All D^{(1,k)}_{z_k} are eigenvectors to the eigenvalue f(z_k) by (3). Hence, using Remark 2,

    D^{(1,k)}_{z_k} ∈ ⋂_{f∈P} E(f(z_k), M_f) = ⋂_{i=1}^n E(z_{ki}, M_{x_i}).    (5)

We will now prove that the vectors D^{(1,k)}_{z_k}, k = 1, …, m, (and their scalar multiples) are the only common eigenvectors. This result is also proved in Mourrain (1998).

Theorem 3. Let A, V(A), M_f, and the vectors D^{(j,k)}_{z_k} be as in Theorem 2. Then

    dim ⋂_{f∈P} E(f(z_k), M_f) = 1

and therefore

    ⋂_{f∈P} E(f(z_k), M_f) = span{D^{(1,k)}_{z_k}}.

Proof. We assume for simplicity that A is primary with V(A) = {z}. Let N = {t_1, …, t_r} be its normal set. As Theorem 2 shows, every eigenvector to eigenvalue f(z) of M_f is of type (L_z(t_1), …, L_z(t_r))^T, L denoting an element of the Max Noether space of A at z. For i = 1, …, n let f := x_i. Then on the one hand

    M_{x_i} (L_z(t_1), …, L_z(t_r))^T = z_i (L_z(t_1), …, L_z(t_r))^T,    (6)

where f(z) = z_i. On the other hand, by the Leibniz rule L(x_i t_j) = x_i L(t_j) + (σ_{x_i}L)(t_j), one gets

    (L_z(x_i t_1), …, L_z(x_i t_r))^T = z_i (L_z(t_1), …, L_z(t_r))^T + ((σ_{x_i}L)_z(t_1), …, (σ_{x_i}L)_z(t_r))^T.    (7)

The left-hand sides of (6) and (7) are equal since M_{x_i} is the multiplication table. Hence, comparing the right-hand sides, one gets (σ_{x_i}L)_z(t_j) = 0 for j = 1, …, r. (σ_{x_i}L)_z annihilates A, because L is in the Max Noether space and, by closedness, σ_{x_i}L is also. Therefore (σ_{x_i}L)_z = 0. This holds only if σ_{x_i}L = 0. Since i was arbitrary, we obtain σ_u L = 0 for arbitrary proper shifts. This implies L = c·D(1), c a constant. □

The vector D^{(1,k)}_{z_k} = (L^{(1,k)}_{z_k}(t_1), L^{(1,k)}_{z_k}(t_2), …, L^{(1,k)}_{z_k}(t_r))^T allows one to read off the components of the point z_k, since L^{(1,k)} is the identity D(1) and the variables x_1, …, x_n are contained in the normal set {t_1, …, t_r}; or, if an x_i is not contained, its value at

z_k can easily be reconstructed by evaluating the Gröbner basis element g with lt(g) = x_i at z_k (cf. Möller and Stetter, 1995).

Example 1. We consider the ideal A given by the polynomials

    g_1(x, y) = 2y² + x² − 2y,    g_2(x, y) = xy − x,    g_3(x, y) = x³.

The set G = {g_1, g_2, g_3} is a Gröbner basis w.r.t. a degree ordering with x < y, and we get N = {1, x, y, x²}. The variety of A is V(A) = {(0, 0), (0, 1)}. We can use the algorithm described in Marinari et al. (1996) to compute bases for the Max Noether spaces. It results in

    {D(1)} at z_1 = (0, 0),    {D(1), D(x), D(x²) − ½D(y)} at z_2 = (0, 1).

Hence the dual basis by Theorem 1 is

    {D(1)_{(0,0)}, D(1)_{(0,1)}, D(x)_{(0,1)}, (D(x²) − ½D(y))_{(0,1)}}.

The multiplication matrices w.r.t. x and y, with rows and columns indexed by (1, x, y, x²), are easily set up:

    M_x := ( 0 1 0 0          M_y := ( 0 0 1  0
             0 0 0 1                   0 1 0  0
             0 1 0 0                   0 0 1 −½
             0 0 0 0 ),                0 0 0  1 ).

Here, the matrix S of Corollary 1 is (using L for D(x²) − ½D(y))

    S := ( D(1)_{(0,0)}(1)   D(1)_{(0,1)}(1)   D(x)_{(0,1)}(1)   L_{(0,1)}(1)
           D(1)_{(0,0)}(x)   D(1)_{(0,1)}(x)   D(x)_{(0,1)}(x)   L_{(0,1)}(x)
           D(1)_{(0,0)}(y)   D(1)_{(0,1)}(y)   D(x)_{(0,1)}(y)   L_{(0,1)}(y)
           D(1)_{(0,0)}(x²)  D(1)_{(0,1)}(x²)  D(x)_{(0,1)}(x²)  L_{(0,1)}(x²) )

       = ( 1 1 0  0
           0 0 1  0
           0 1 0 −½
           0 0 0  1 ).

This gives M_x S = S J_x and M_y S = S J_y with the Jordan-like matrices

    J_x := ( 0 0 0 0          J_y := ( 0 0 0  0
             0 0 1 0                   0 1 0 −½
             0 0 0 1                   0 0 1  0
             0 0 0 0 ),                0 0 0  1 ).

The first two columns of S are the eigenvectors (5). Their second and third components are the x- and y-coordinates of the variety points (0, 0) and (0, 1).

5. Eigenspaces of B_f

To begin with, let us repeat a definition from eigenvalue theory.

Definition 9. (Principal Vector) Let A be an r×r matrix with eigenvalue λ. An eigenvector to eigenvalue λ is called a principal vector of first order. If v is a principal

vector of order k to eigenvalue λ, then a vector w satisfying Aw = λw + v is called a principal vector (to eigenvalue λ) of order k + 1.

In the following, we fix a polynomial f and a zero-dimensional ideal A ⊂ P with variety V(A) = {z_1, …, z_m} and normal set N = {t_1, …, t_r}.

Theorem 4. Let B_f = M_f^T be the representation matrix of the endomorphism Φ_f : P/A → P/A, [g] ↦ [f·g], w.r.t. the basis {[t_1], …, [t_r]}. For every positive integer k the following statements are equivalent:

(a) b := (b_1, …, b_r)^T is a principal vector of order k to eigenvalue λ of B_f.
(b) Σ_{i=1}^r b_i t_i ∈ A : (f − λ)^k, but Σ_{i=1}^r b_i t_i ∉ A : (f − λ)^{k−1}.

Proof. Induction on k. Let k = 1. Then

    B_f b = λb  ⟺  (f − λ) Σ_{i=1}^r b_i t_i ∈ A  ⟺  Σ_{i=1}^r b_i t_i ∈ A : (f − λ).

In addition, Σ_{i=1}^r b_i t_i ∈ A = A : (f − λ)^0 is equivalent to b = 0.

k − 1 → k: Let p := Σ_{i=1}^r b_i t_i ∈ A : (f − λ)^k, but p ∉ A : (f − λ)^{k−1}. Then q := p·(f − λ) ∈ A : (f − λ)^{k−1}, but not in A : (f − λ)^{k−2}. Let q =: Σ g_i t_i + a with a ∈ A. By the induction hypothesis, g := (g_1, …, g_r)^T is a principal vector of order k − 1. By definition q = p·(f − λ), hence B_f b − λb = g, i.e. b is a principal vector of order k. (a) ⟹ (b) follows by reversing the conclusions. □

By (3) of Theorem 2 and M_f^T = B_f, the eigenvalues of B_f are also {f(z) | z ∈ V(A)}. Hence for every z ∈ V(A), the polynomials having principal vectors of B_f as coefficient vectors can be found in the ascending chain

    A ⊆ A : (f − f(z)) ⊆ A : (f − f(z))² ⊆ ··· ⊆ A : (f − f(z))^r = A : (f − f(z))^{r+1} = ···,

which becomes stationary because P is Noetherian. A : (f − f(z))^j contains all polynomials corresponding to principal vectors of order at most j.

Theorem 5. Let {L^{(1,k)}, …, L^{(s_k,k)}} be a basis of the Max Noether space of A at z_k, k = 1, …, m. Then the Max Noether space of ⋂_{i=1}^n A : (x_i − z_{ki}) at z_j is generated by

    ⋃_{i=1}^n {σ_{x_i} L^{(1,k)}, …, σ_{x_i} L^{(s_k,k)}}  if j = k,
    {L^{(1,j)}, …, L^{(s_j,j)}}  if j ≠ k.

Proof.
Since D(t)(x_i − z_{ki}) is 1 for t = x_i and 0 for t ∈ T \ {1, x_i}, the Leibniz rule (2) gives

    L^{(l,j)}((x_i − z_{ki})·g) = (x_i − z_{ki})·L^{(l,j)}(g) + σ_{x_i} L^{(l,j)}(g).

Now g ∈ ⋂_{i=1}^n A : (x_i − z_{ki}) is equivalent to

    0 = L^{(l,j)}_{z_j}((x_i − z_{ki})·g) = (z_{ji} − z_{ki})·L^{(l,j)}_{z_j}(g) + (σ_{x_i} L^{(l,j)})_{z_j}(g)

for l = 1, …, s_j, j = 1, …, m, i = 1, …, n. If j ≠ k, then z_{ji} ≠ z_{ki} for an i ∈ {1, …, n}. Hence, using that σ_{x_i}L always has a smaller (differentiation) order than L, and inspecting for j ≠ k the L^{(l,j)} in increasing order, we see that for j ≠ k the Max Noether space at z_j is generated by the L^{(l,j)}, l = 1, …, s_j, and for j = k it is generated by the σ_{x_i} L^{(l,j)}, i = 1, …, n, l = 1, …, s_j. □

A Max Noether space at a point z is the null space if and only if z does not belong to the variety. This happens in the theorem iff z_k is a simple zero of A (then the local dual basis is exactly {D(1)_{z_k}}); such a z_k is not in the variety of ⋂_{i=1}^n A : (x_i − z_{ki}). If we know that

    g ∈ ⋂_{i=1}^n A : (x_i − z_{ki}),    g ∉ A,    (8)

without knowing all components z_{ki} of z_k explicitly, then we can reconstruct them, because the coefficient vector of g is an eigenvector to the eigenvalue z_{ki} of B_{x_i} by Theorem 4.

Example 2. We consider the same ideal A = (g_1, g_2, g_3) as at the end of the previous section. The representation matrices w.r.t. x and y are

    B_x = M_x^T = ( 0 0 0 0          B_y = M_y^T = ( 0 0  0 0
                    1 0 1 0                          0 1  0 0
                    0 0 0 0                          1 0  1 0
                    0 1 0 0 ),                       0 0 −½ 1 ).

By means of the matrices

    T_x := ( −1 0 0 1          T_y := ( −2 0  0 0
              0 0 1 0                    0 0  0 1
              1 0 0 0                    2 0 −2 0
              0 1 0 0 ),                 1 1  0 0 )

we get the Jordan normal forms

    B_x T_x = T_x ( 0 0 0 0          B_y T_y = T_y ( 0 0 0 0
                    0 0 1 0                          0 1 1 0
                    0 0 0 1                          0 0 1 0
                    0 0 0 0 ),                       0 0 0 1 ).

In T_x the first two columns are first-order principal vectors (eigenvectors). They are coefficient vectors of y − 1 and x², resp. Hence by Theorem 4

    A : x = (g_1, g_2, g_3, y − 1, x²) = (y − 1, x²).

In T_y the first column is an eigenvector to the eigenvalue 0, hence

    A : y = (g_1, g_2, g_3, x² + 2y − 2) = (g_1, g_2, x² + 2y − 2),

and the second and fourth columns of T_y are eigenvectors to 1, giving, by Theorem 4,

    A : (y − 1) = (g_1, g_2, g_3, x², x) = (y² − y, x).
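Theorem 4 (case k = 1) can be verified on Example 2 with a few lines of plain Python: a coefficient vector b is an eigenvector of B_f to the eigenvalue λ exactly when the corresponding polynomial lies in A : (f − λ). The matrices and vectors below are our own transcription of the example, with basis order (1, x, y, x²).

```python
# Representation matrices B_x = M_x^T, B_y = M_y^T of Example 2.
def transpose(M):
    return [list(col) for col in zip(*M)]

M_x = [[0, 1, 0, 0], [0, 0, 0, 1], [0, 1, 0, 0], [0, 0, 0, 0]]
M_y = [[0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 1, -0.5], [0, 0, 0, 1]]
B_x, B_y = transpose(M_x), transpose(M_y)

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]

# y - 1 and x^2 give eigenvectors of B_x to eigenvalue 0: A : x = (y - 1, x^2)
for b in ([-1, 0, 1, 0], [0, 0, 0, 1]):
    assert matvec(B_x, b) == [0, 0, 0, 0]
# x^2 + 2y - 2 gives an eigenvector of B_y to 0, x and x^2 to eigenvalue 1:
assert matvec(B_y, [-2, 0, 2, 1]) == [0, 0, 0, 0]
assert matvec(B_y, [0, 1, 0, 0]) == [0, 1, 0, 0]
assert matvec(B_y, [0, 0, 0, 1]) == [0, 0, 0, 1]
```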

By Theorem 5, we get the following bases of the Max Noether spaces of the quotients:

    A : x ∩ A : y :        {0} at (0, 0),     {D(1), D(x), D(x²) − ½D(y)} at (0, 1),
    A : x ∩ A : (y − 1) :  {D(1)} at (0, 0),  {D(1), D(x)} at (0, 1).

6. Generators of Max Noether Spaces

Definition 10. Let A be zero-dimensional and z ∈ V(A). We say the Max Noether space Δ of A at z is generated by exactly k differential operators L^{(1)}, …, L^{(k)} if

    Δ = span{σ_u L^{(j)} | u ∈ T, j = 1, …, k},

and if fewer than k elements of Δ generate only proper subspaces of Δ.

In the univariate case, every Max Noether space is generated by one element, since closed subspaces are for n = 1 of type span{D(1), …, D(x^s)}, as already remarked in Möller and Stetter (1995). We also remark that Max Noether spaces generated by one element are well known and deeply studied. Macaulay (1916) called them principal systems; they are nowadays connected with Gorenstein rings (Eisenbud, 1994, p. 529f).

If a Max Noether space Δ of a primary ideal Q at (the single zero) z is generated by one L, then Q is irreducible, i.e. for any primaries Q_1, Q_2 with Q = Q_1 ∩ Q_2 either Q_1 = Q or Q_2 = Q holds. The reason is that V(Q_1) = V(Q_2) = V(Q) by Q = Q_1 ∩ Q_2, and L is either in the Max Noether space of Q_1 or in that of Q_2. But then this space contains Δ, which means Q_1 or Q_2 is a subideal of Q. Since Q = Q_1 ∩ Q_2, it cannot be a proper subideal; see also Marinari et al. (1993).

If we take a differential operator and all its shifts, then the space generated by them is closed. Hence it is a Max Noether space of an irreducible primary ideal. We can construct in this way, for each of the k differential operators, a Max Noether space of a primary ideal. Thus we easily see that, if a Max Noether space of a primary ideal Q with variety {z}, generated by exactly k differential operators, is given, then Q is the intersection of exactly k different irreducible primary ideals.

Theorem 6. Let A be a zero-dimensional ideal and z ∈ V(A).
The Max Noether space Δ of A at z is generated by exactly k elements if and only if

    dim ⋂_{f∈P} E(f(z), B_f) = k.

Proof. For simplicity, we may assume V(A) = {z}, i.e. A is a primary ideal. Let Δ be generated by k differential operators. These k operators together with their shifts generate Δ as a vector space. Hence, by cancelling linearly dependent elements, a basis {L^{(1)}, …, L^{(s)}} is obtained containing the k generators. Let L^{(j_1)}, …, L^{(j_k)} denote these generators. Take q_1, …, q_s ∈ span N biorthogonal to the local dual basis {L^{(1)}_z, …, L^{(s)}_z}, i.e. L^{(i)}_z(q_j) = δ_{ij}. Then by biorthogonality (σ_u L^{(j_i)})_z(q_{j_i}) = 0 for all proper shifts. Hence

the Leibniz rule (2) simplifies for arbitrary polynomials f to

    L^{(j)}_z(f·q_{j_i}) = f(z)·L^{(j)}_z(q_{j_i}),    j = 1, …, s.

This means (f − f(z))·q_{j_i} is annihilated by L^{(1)}_z, …, L^{(s)}_z. Therefore (f − f(z))·q_{j_i} ∈ A, or in other words q_{j_i} ∈ A : (f − f(z)), meaning by Theorem 4 that the coefficient vector of q_{j_i} is an eigenvector of B_f for all f ∈ P. Since q_{j_1}, …, q_{j_k} are linearly independent, every eigenspace E(f(z), B_f) has at least dimension k.

Assume there is a vector v ∈ ⋂_{f∈P} E(f(z), B_f) which is not a linear combination of the coefficient vectors of the polynomials q_{j_1}, …, q_{j_k}. Let v be the coefficient vector of q̃ ∈ span N. We may replace q̃ by

    q := q̃ − Σ_{i=1}^k L^{(j_i)}_z(q̃)·q_{j_i}.

Then L^{(j_i)}_z(q) = 0, i = 1, …, k. The Leibniz rule gives for all f ∈ P

    L^{(i)}_z(f·q) = f(z)·L^{(i)}_z(q) + Σ_{u∈T\{1}} D(u)_z(f)·(σ_u L^{(i)})_z(q).

Since the coefficient vector of q is an eigenvector to the eigenvalue f(z), the sum over all u ∈ T \ {1} has to be 0,

    Σ_{u∈T\{1}} D(u)_z(f)·(σ_u L^{(i)})_z(q) = 0.

This holds for arbitrary polynomials f. Hence every (σ_u L^{(i)})_z(q) has to vanish. In the given basis of Δ, every operator is either an L^{(j_i)} or a proper shift. Therefore, every functional from the local dual basis annihilates q ∈ span N. This means q = 0, hence q̃, and with it v, is a linear combination of the q_{j_1}, …, q_{j_k} resp. their coefficient vectors, contradicting the assumption. □

Remark 3. Theorem 6 shows that the number of linearly independent common eigenvectors of the representation matrices is the number of irreducible primary components of the original ideal. This contrasts with the result of Theorem 3. There it was shown that the number of linearly independent common eigenvectors of the multiplication matrices is the number of maximal primary components, i.e. the cardinality of the variety. Hence the number of linearly independent common eigenvectors of the multiplication matrices is at most as large as the corresponding number for the representation matrices.

As a byproduct of Theorem 6 we obtain a new method for computing radicals.

Theorem 7.
Let A ⊂ P be a zero-dimensional ideal with variety V(A) = {z_1, …, z_m} and normal set N = {t_1, …, t_r}. Furthermore, let

    0 ≠ v_k ∈ ⋂_{i=1}^n E(z_{ki}, B_{x_i}),    k = 1, …, m,

and

    q_k := v_k^T t  with  t = (t_1, …, t_r)^T.

Then q := q_1 + ··· + q_m satisfies

    A : q = √A.
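Theorem 7 can be checked on the running example (anticipating Example 3 below). The exact matrices, the scaling of the eigenvectors, and the observation that the coefficient vector of NF(f·g) is B_f applied to that of g are our own working, a sketch rather than the paper's procedure.

```python
from fractions import Fraction as F

# Exact representation matrices B_x = M_x^T, B_y = M_y^T of the running
# example; coefficient vectors refer to the basis N = (1, x, y, x^2).
B_x = [[0, 0, 0, 0], [1, 0, 1, 0], [0, 0, 0, 0], [0, 1, 0, 0]]
B_y = [[0, 0, 0, 0], [0, 1, 0, 0], [1, 0, 1, 0], [0, 0, F(-1, 2), 1]]

def matvec(M, v):
    return [sum(F(M[i][j]) * v[j] for j in range(4)) for i in range(4)]

# Common eigenvectors, scaled so that q comes out monic (our normalization):
v1 = [-1, 0, 1, F(1, 2)]   # multiple of (-2, 0, 2, 1)^T, eigenvalues (0, 0)
v2 = [0, 0, 0, F(-1, 2)]   # multiple of (0, 0, 0, 1)^T,  eigenvalues (0, 1)
q = [a + b for a, b in zip(v1, v2)]     # q = q_1 + q_2 = y - 1

assert q == [-1, 0, 1, 0]                # coefficient vector of y - 1
assert matvec(B_x, q) == [0, 0, 0, 0]    # x*q in A, i.e. x in A : q
yq = matvec(B_y, q)                      # NF(y*q) = -1/2 x^2
assert matvec(B_y, yq) == yq             # (y^2 - y)*q in A, i.e. y^2 - y in A : q
assert any(c != 0 for c in q)            # NF(q) != 0: 1 not in A : q
```

The memberships found are exactly the generators (x, y² − y) of the radical.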

Proof. Let Δ^{(k)} be the Max Noether space of A at z_k with basis {L^{(1,k)}, …, L^{(s_k,k)}}, k = 1, …, m. By Theorems 4 and 5, every q_k satisfies

    L^{(i,j)}_{z_j}(q_k) = 0  for i = 1, …, s_j, j ≠ k,
    (σ_{x_l} L^{(i,k)})_{z_k}(q_k) = 0  for i = 1, …, s_k, l = 1, …, n.

q_k ∉ A because 0 ≠ q_k ∈ span N. Hence L^{(i,k)}_{z_k}(q_k) ≠ 0 for at least one i = j(k). Let f ∈ P. Then the Leibniz formula reduces to

    L^{(i,k)}_{z_k}(f·q) = f(z_k)·L^{(i,k)}_{z_k}(q) = f(z_k)·L^{(i,k)}_{z_k}(q_k).    (9)

If f ∈ A : q, then L^{(i,k)}_{z_k}(f·q) = 0 for all i = 1, …, s_k and k = 1, …, m. Using (9) with i = j(k), one gets f(z_k) = 0, k = 1, …, m, i.e. f ∈ √A. If f ∈ √A, then by (9) L^{(i,k)}_{z_k}(f·q) = 0 for all i, k. This means f·q ∈ A, i.e. f ∈ A : q. □

Example 3. Continuing the above example, we have

    E(0, B_x) ∩ E(0, B_y) = span{(−2, 0, 2, 1)^T}  and  E(0, B_x) ∩ E(1, B_y) = span{(0, 0, 0, 1)^T}.

Choosing

    v_1 := (−1, 0, 1, ½)^T  and  v_2 := (0, 0, 0, −½)^T,

we get with t = (1, x, y, x²)^T

    q := Σ_{k=1}^2 v_k^T t = y − 1.

We compute a Gröbner basis of A : q similarly to the FGLM technique, as Lakshman pointed out in his thesis (Lakshman, 1990). Because of NF(q) = q and

    NF(xq) = 0,    NF(yq) = −½x²,    NF(y²q) = −½x²,

we get x ∈ A : q, y² − y ∈ A : q, 1 ∉ A : q, and no y − c, c a constant, in A : q. Hence

    (x, y² − y) = A : q = √A.

7. How to Compute the Common Zeros Using Eigenvectors

We already indicated in the preceding example how to compute an ideal quotient A : q if a normal form algorithm for A is known and q is an arbitrary polynomial. The basic version of the algorithm is as follows (Lakshman, 1990).

Algorithm 1. (Ideal Quotient Computation)
Given: A zero-dimensional ideal A, an admissible ordering <_1, and a procedure NF which computes for given f ∈ P the normal form of f w.r.t. an admissible ordering <_2.
Output: A Gröbner basis G of A : q w.r.t. <_1.

15 Step. Initialize T o := T, the set of terms; G := ; N :=. Step 2. Let t o := min < T o ; T o := T o \ {t o }. Step 3. If NF(t o q) depends linearly on {NF(tq) t N }, NF(t o q) = t N Multivariate Polynomial System Solving 527 c t NF(tq), then enlarge G by t o t c tt and remove from T o all multiples of t o otherwise enlarge N by t o. Step 4. If T o then go to step 2, otherwise return G and stop. The FGLM-algorithm can be considered as special instance q =. For the practical computation of normal forms, recursive relations like NF(x i g) = NF(x i NF(g)) reduce the computational amount drastically. So one might think that the computation of the radical as given in Theorem 7 is not that difficult. The problem is to find an element v k of n i= E(i, B xi ). The computation of the sets n i= E(i, B xi ) or n i= E(i, M xi ) plays an essential role in the algorithm we propose in the following. It is based on the easily proved fact that v E(λ, A i ) = A j v E(λ, A i ) if A i A j = A j A i, and the fact that the matrices B x,..., B xn and M x,..., M xn resp. commute. We describe first the basic version of the algorithm, which holds without modification for both sets of matrices, B x,..., B xn and M x,..., M xn. Hence we use A i := M xi or A i := B xi resp. Algorithm. (Subspace Method, Basic Version) Given: Multiplication matrices (or representation matrices resp.) A,..., A n C r r. Output: The points of V (A). Initialization: Compute the set Λ of all eigenvalues of A and for every α Λ the eigenspace V α := E(α, A ). Loop: For j =,..., n compute for every V α,...,α j the set Λ α,...,α j of all eigenvalues α j+ of Φ α,...,α j : V α,...,α j V α,...,α j, v A j+ v. Denote by V α,...,α j+ the eigenspace corresponding to α j+. Final: Return all (α,..., α n ) corresponding to eigenspaces V α,...,α n. This algorithm terminates obviously. 
The result is correct because, by the commutativity of the matrices A_1, ..., A_n, the mappings Φ_{α_1,...,α_j} are endomorphisms, and by construction v ∈ V_{α_1,...,α_j} if and only if v is an eigenvector for the eigenvalue α_i of A_i, i = 1, ..., j. Hence

    V_{α_1,...,α_n} = ⋂_{i=1}^{n} E(α_i, A_i).

⋂_{i=1}^{n} E(α_i, A_i) is the null space if and only if (α_1, ..., α_n) is not in V(A), but V_{α_1,...,α_n} is by construction not the null space.

A shortcut to V_{α_1,...,α_n} is obtained if dim V_{α_1,...,α_j} = 1 for a j < n. Then all subsequent V_{α_1,...,α_i}, i > j, as non-null subspaces of V_{α_1,...,α_j}, have dimension 1; hence especially V_{α_1,...,α_n} = V_{α_1,...,α_j}. Therefore, an advanced version of the subspace method always tests the dimension of the actual V_{α_1,...,α_j}. If the dimension is 1, then V_{α_1,...,α_n} is found and the missing

coordinates α_{j+1}, ..., α_n of the point α = (α_1, ..., α_n) ∈ V(A) are either read off as described in Section 4, if A_i = M_{x_i}, or they are computed as eigenvalues of A_i, i > j, if A_i = B_{x_i}, as described in Section 5.

The advanced version works best if the initial matrix A_1 is already non-derogatory, i.e. if its eigenspaces all have dimension 1. If only an A_j, j > 1, is non-derogatory, then V_{α_1,...,α_j} = ⋂_{i=1}^{j} E(α_i, A_i) has dimension 1, and then the subsequent computation of eigenspaces is superfluous.

We state explicitly that, in any case, whenever a V_{α_1,...,α_j} is computed, there is at least one point of type (α_1, ..., α_j, ...) in the variety. And if dim V_{α_1,...,α_j} > 1, then all subsequently computed eigenspaces usually have a smaller dimension. More precisely,

    dim V_{α_1,...,α_j} ≥ Σ_{α_{j+1}} dim V_{α_1,...,α_{j+1}}.

The basic and the advanced versions of the algorithm work with both types of matrices, the multiplication matrices M_{x_i} and the representation matrices B_{x_i}. But there are differences in performance when applied to the two types. The subspace method using the representation matrices B_{x_i} has drawbacks compared with the method using the multiplication matrices M_{x_i}. First of all, the computation of eigenspaces needs floating point arithmetic. In the beginning, the Gröbner basis and hence the matrices B_{x_i} and M_{x_i}, respectively, are free of rounding errors. But the successive computation of eigenspaces V_{α_1,...,α_j}, j = 1, ..., n, induces rounding errors. The more computations are needed, the more the data are misrepresented. The dimension of the spaces V_{α_1,...,α_j} is an indicator of the complexity. In the case of multiplication matrices, the algorithm starts with an eigenspace V_{α_1} of dimension at least m, where m denotes the number of distinct points of the variety with the same first component α_1. This means that here the multiplicity of the individual zeros is ignored, see Theorem 2.
And at termination, the spaces V_{α_1,...,α_n} are all one-dimensional by Theorem 3. This contrasts with the case of representation matrices: here the spaces V_{α_1,...,α_n} can have a dimension greater than 1 by Theorem 6, although every V_{α_1,...,α_n} corresponds to one single point of the variety. In a sense, the multiplicity of common zeros causes a greater dimension of eigenspaces in the B_{x_i} case. Only in the case of a complete intersection, i.e. if A is generated by n polynomials f_1, ..., f_n, do we know in advance that the usage of representation matrices also leads to one-dimensional common eigenspaces V_{α_1,...,α_n} (Eisenbud, 1994, p. 529).

A second drawback becomes apparent if the computation terminates with a one-dimensional V_{α_1,...,α_j}, j < n. In the multiplication matrix case A_i = M_{x_i}, the missing α_k, k > j, can be read off without extra arithmetic operations, apart from an eventual normalization. In the case A_i = B_{x_i}, however, the α_k, k > j, have to be computed as eigenvalues of B_{x_k}, given the eigenvector, the generator of V_{α_1,...,α_j}. This means for every k > j a matrix-vector multiplication plus an eventual normalization.

Example 4. We consider the ideal A given by the polynomials

    g_1(x, y, z) = y³ − y²,
    g_2(x, y, z) = z² − 4zx − 25y² + 12x − 8,
    g_3(x, y, z) = zy + 3y² − 4y,
    g_4(x, y, z) = x² − 6y² − 3x + 2,

    g_5(x, y, z) = xy − 2y² − 2y.

The set G = {g_1, g_2, g_3, g_4, g_5} is a Gröbner basis w.r.t. a degree ordering with y < x < z, and we get N = {1, y, x, z, y², zx}. The multiplication matrices w.r.t. x, y, and z are easily set up; the i-th row holds the coefficients of the normal form of the product of x, y, and z, respectively, with the i-th element of N:

    M_x := [  0  0   1   0   0   0 ]
           [  0  2   0   0   2   0 ]
           [ -2  0   3   0   6   0 ]
           [  0  0   0   0   0   1 ]
           [  0  0   0   0   4   0 ]
           [  0  0   0  -2   6   3 ],

    M_y := [  0  1   0   0   0   0 ]
           [  0  0   0   0   1   0 ]
           [  0  2   0   0   2   0 ]
           [  0  4   0   0  -3   0 ]
           [  0  0   0   0   1   0 ]
           [  0  8   0   0  -4   0 ],

    M_z := [  0  0   0   1   0   0 ]
           [  0  4   0   0  -3   0 ]
           [  0  0   0   0   0   1 ]
           [  8  0 -12   0  25   4 ]
           [  0  0   0   0   1   0 ]
           [ 24  0 -28  -8  52  12 ].

We first compute the eigenspaces of M_x and obtain 4, 1, 2 as eigenvalues and

    V_4 = span{v_4^(1)}, V_1 = span{v_1^(1), v_1^(2)}, V_2 = span{v_2^(1), v_2^(2), v_2^(3)}

with

    v_4^(1) = (1, 1, 4, 1, 1, 4)^T,
    v_1^(1) = (1, 0, 1, 0, 0, 0)^T, v_1^(2) = (0, 0, 0, 1, 0, 1)^T,
    v_2^(1) = (1, 0, 2, 0, 0, 0)^T, v_2^(2) = (0, 0, 0, 1, 0, 2)^T, v_2^(3) = (0, 1, 0, 0, 0, 0)^T.

For α_1 = 4 we have dim V_4 = 1, such that (1, 1, 4, 1, 1, 4)^T is a common eigenvector of all matrices and we read off the zero (4, 1, 1) ∈ V(A). We have to apply M_y on V_1 and V_2:

    M_y [v_1^(1) v_1^(2)] = [v_1^(1) v_1^(2)] N_1 and M_y [v_2^(1) v_2^(2) v_2^(3)] = [v_2^(1) v_2^(2) v_2^(3)] N_2

with

    N_1 = [ 0 0 ]        N_2 = [ 0 0 1 ]
          [ 0 0 ],              [ 0 0 4 ]
                                [ 0 0 0 ].

The eigenspaces of N_1 and N_2

are, as an easy computation gives,

    E(0, N_1) = span{(1, 0)^T, (0, 1)^T},  E(0, N_2) = span{(1, 0, 0)^T, (0, 1, 0)^T}.

Therefore in every case α_2 = 0 and

    V_{1,0} = span{v_1^(1), v_1^(2)},  V_{2,0} = span{v_2^(1), v_2^(2)}.

No V_{α_1,α_2} has dimension 1. Hence, continuing with M_z, we get

    M_z [v_1^(1) v_1^(2)] = [v_1^(1) v_1^(2)] N_1 and M_z [v_2^(1) v_2^(2)] = [v_2^(1) v_2^(2)] N_2

with the matrices

    N_1 = [  0  1 ]      N_2 = [   0  1 ]
          [ -4  4 ],            [ -16  8 ].

These have only one eigenvalue each and eigenspaces

    E(2, N_1) = span{(1, 2)^T},  E(4, N_2) = span{(1, 4)^T}.

Therefore α_3 = 2 if (α_1, α_2) = (1, 0), and α_3 = 4 if (α_1, α_2) = (2, 0), and computation gives

    V_{1,0,2} = span{v_1^(1) + 2 v_1^(2)} = span{(1, 0, 1, 2, 0, 2)^T},
    V_{2,0,4} = span{v_2^(1) + 4 v_2^(2)} = span{(1, 0, 2, 4, 0, 8)^T}.

So we obtain three different roots:

    z_1 = (4, 1, 1) from V_4 = V_{4,1,1},
    z_2 = (1, 0, 2) from V_{1,0,2},
    z_3 = (2, 0, 4) from V_{2,0,4}.

Of course, the computation of the eigenvectors generating V_{1,0,2} and V_{2,0,4}, respectively, was not necessary. It served only to illustrate Theorem 3.

The computations for the representation matrices are similar. V_4 is spanned by v := (0, 0, 0, 0, 1, 0)^T, the coefficient vector of y². The first component α_1 is 4; the y- and z-components are eigenvalues of B_y and B_z, respectively, with corresponding eigenvector v each, resulting in α_2 = α_3 = 1. Computation gives, furthermore,

    V_{1,0,2} = span{(4, 0, -2, -2, 2, 1)^T},
    V_{2,0,4} = span{(-4, 0, 4, 1, -9, -1)^T, (0, 1, 0, 0, -1, 0)^T}.

This implies that the Max Noether spaces at (4, 1, 1) and (1, 0, 2) are generated by one differential operator and its shifts, whereas the Max Noether space at (2, 0, 4) is generated by two differential operators.
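Example 4 can be checked numerically. In the sketch below (our own verification code, not part of the paper), the multiplication matrices are set up row-wise from the Gröbner basis by reducing x·b, y·b and z·b for every b in N = {1, y, x, z, y², zx}; we then confirm that the three matrices commute pairwise and that the evaluation vector of N at each of the three roots is a joint eigenvector with the root's coordinates as eigenvalues.

```python
import numpy as np

# Multiplication matrices of Example 4 w.r.t. N = (1, y, x, z, y^2, zx);
# row i holds the normal form coefficients of x*b_i, y*b_i, z*b_i respectively.
M_x = np.array([[ 0, 0,  1,  0,  0,  0],
                [ 0, 2,  0,  0,  2,  0],
                [-2, 0,  3,  0,  6,  0],
                [ 0, 0,  0,  0,  0,  1],
                [ 0, 0,  0,  0,  4,  0],
                [ 0, 0,  0, -2,  6,  3]], dtype=float)
M_y = np.array([[ 0, 1,  0,  0,  0,  0],
                [ 0, 0,  0,  0,  1,  0],
                [ 0, 2,  0,  0,  2,  0],
                [ 0, 4,  0,  0, -3,  0],
                [ 0, 0,  0,  0,  1,  0],
                [ 0, 8,  0,  0, -4,  0]], dtype=float)
M_z = np.array([[ 0, 0,  0,   1,  0,  0],
                [ 0, 4,  0,   0, -3,  0],
                [ 0, 0,  0,   0,  0,  1],
                [ 8, 0, -12,  0, 25,  4],
                [ 0, 0,  0,   0,  1,  0],
                [24, 0, -28, -8, 52, 12]], dtype=float)

# The three matrices commute pairwise, as multiplication in P/A is commutative.
for A in (M_x, M_y, M_z):
    for B in (M_x, M_y, M_z):
        assert np.allclose(A @ B, B @ A)

# Evaluation vectors of N at the roots are joint eigenvectors,
# with the coordinates of the root as eigenvalues.
for (x, y, z) in [(4.0, 1.0, 1.0), (1.0, 0.0, 2.0), (2.0, 0.0, 4.0)]:
    v = np.array([1.0, y, x, z, y * y, z * x])
    assert np.allclose(M_x @ v, x * v)
    assert np.allclose(M_y @ v, y * v)
    assert np.allclose(M_z @ v, z * v)

# Eigenvalues of M_x with multiplicities 1, 2, 3 for 4, 1, 2:
evals = sorted(float(t) for t in np.round(np.linalg.eigvals(M_x).real, 6))
print(evals)   # [1.0, 1.0, 2.0, 2.0, 2.0, 4.0]
```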

References

Auzinger, W., Stetter, H. (1988). An elimination algorithm for the computation of all zeros of a system of multivariate polynomial equations. In Numerical Mathematics, Proceedings of the International Conference, Singapore 1988, volume 86 of Int. Ser. Numer. Math., pp. 11-30.
Corless, R. (1996). Editor's Corner: Gröbner bases and matrix eigenproblems. SIGSAM Bull., 30.
Corless, R., Gianni, P. M., Trager, B. M. (1997). A reordered Schur factorization method for zero-dimensional polynomial systems with multiple roots. In Kuechlin, W. ed., Proceedings of ISSAC 1997. ACM.
Cox, D., Little, J., O'Shea, D. (1998). Using Algebraic Geometry, volume 185 of Graduate Texts in Mathematics. New York, Springer-Verlag.
Eisenbud, D. (1994). Commutative Algebra with a View Toward Algebraic Geometry, volume 150 of Graduate Texts in Mathematics. New York, Springer-Verlag.
Gonzalez-Vega, L., Rouillier, F., Roy, M. F. (1999). Symbolic recipes for polynomial system solving. In Cohen, A. M., Cuypers, H., Sterk, H. eds, Some Tapas of Computer Algebra. Berlin, Springer-Verlag.
Gröbner, W. (1970). Algebraische Geometrie II, B.I.-Hochschultaschenbücher 737/737a. Mannheim, Bibliographisches Institut AG.
Lakshman, Y. N. (1990). On the complexity of computing Gröbner bases for zero dimensional polynomial ideals. Ph.D. Thesis, Rensselaer Polytechnic Institute, New York.
Lasker, E. (1905). Zur Theorie der Moduln und Ideale. Math. Ann., 60, 20-116.
Lazard, D. (1981). Résolution des systèmes d'équations algébriques. Theor. Comput. Sci., 15, 77-110.
Macaulay, F. S. (1916). The Algebraic Theory of Modular Systems. Cambridge University Press.
Marinari, M., Möller, H. M., Mora, T. (1993). Gröbner bases of ideals defined by functionals with an application to ideals of projective points. Appl. Algebra Eng. Commun. Comput., 4.
Marinari, M., Möller, H. M., Mora, T. (1996). On multiplicities in polynomial system solving. Trans. Am. Math. Soc., 348.
Möller, H. M. (1993). Systems of algebraic equations solved by means of endomorphisms. In Cohen, G. et al. eds, Applied Algebra, Algebraic Algorithms and Error-Correcting Codes, AAECC-10, LNCS 673. Springer-Verlag.
Möller, H. M., Stetter, H. J. (1995). Multivariate polynomial equations with multiple zeros solved by matrix eigenproblems. Numer. Math., 70.
Mourrain, B. (1998). Computing the isolated roots by matrix methods. J. Symb. Comput., 26.
Stetter, H. J. (1996). Analysis of zero clusters in multivariate polynomial systems. In Caviness, R. ed., Proceedings of ISSAC 1996. ACM.
Yokoyama, K., Noro, M., Takeshima, T. (1992). Solutions of systems of algebraic equations and linear maps on residue class rings. J. Symb. Comput., 14.

Originally Received 3 November 1999
Accepted 27 April 2000
Published electronically 4 August 2000


More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Linear Algebra II Lecture 13

Linear Algebra II Lecture 13 Linear Algebra II Lecture 13 Xi Chen 1 1 University of Alberta November 14, 2014 Outline 1 2 If v is an eigenvector of T : V V corresponding to λ, then v is an eigenvector of T m corresponding to λ m since

More information

1 Matrices and Systems of Linear Equations. a 1n a 2n

1 Matrices and Systems of Linear Equations. a 1n a 2n March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real

More information

Chapter 1 Vector Spaces

Chapter 1 Vector Spaces Chapter 1 Vector Spaces Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 110 Linear Algebra Vector Spaces Definition A vector space V over a field

More information

LS.1 Review of Linear Algebra

LS.1 Review of Linear Algebra LS. LINEAR SYSTEMS LS.1 Review of Linear Algebra In these notes, we will investigate a way of handling a linear system of ODE s directly, instead of using elimination to reduce it to a single higher-order

More information

MODULE 7. where A is an m n real (or complex) matrix. 2) Let K(t, s) be a function of two variables which is continuous on the square [0, 1] [0, 1].

MODULE 7. where A is an m n real (or complex) matrix. 2) Let K(t, s) be a function of two variables which is continuous on the square [0, 1] [0, 1]. Topics: Linear operators MODULE 7 We are going to discuss functions = mappings = transformations = operators from one vector space V 1 into another vector space V 2. However, we shall restrict our sights

More information

LINEAR DEPENDENCE OF QUOTIENTS OF ANALYTIC FUNCTIONS OF SEVERAL VARIABLES WITH THE LEAST SUBCOLLECTION OF GENERALIZED WRONSKIANS

LINEAR DEPENDENCE OF QUOTIENTS OF ANALYTIC FUNCTIONS OF SEVERAL VARIABLES WITH THE LEAST SUBCOLLECTION OF GENERALIZED WRONSKIANS LINEAR DEPENDENCE OF QUOTIENTS OF ANALYTIC FUNCTIONS OF SEVERAL VARIABLES WITH THE LEAST SUBCOLLECTION OF GENERALIZED WRONSKIANS RONALD A. WALKER Abstract. We study linear dependence in the case of quotients

More information

Change of Ordering for Regular Chains in Positive Dimension

Change of Ordering for Regular Chains in Positive Dimension Change of Ordering for Regular Chains in Positive Dimension X. Dahan, X. Jin, M. Moreno Maza, É. Schost University of Western Ontario, London, Ontario, Canada. École polytechnique, 91128 Palaiseau, France.

More information

. = V c = V [x]v (5.1) c 1. c k

. = V c = V [x]v (5.1) c 1. c k Chapter 5 Linear Algebra It can be argued that all of linear algebra can be understood using the four fundamental subspaces associated with a matrix Because they form the foundation on which we later work,

More information

3 (Maths) Linear Algebra

3 (Maths) Linear Algebra 3 (Maths) Linear Algebra References: Simon and Blume, chapters 6 to 11, 16 and 23; Pemberton and Rau, chapters 11 to 13 and 25; Sundaram, sections 1.3 and 1.5. The methods and concepts of linear algebra

More information

STABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin

STABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin On the stability of invariant subspaces of commuting matrices Tomaz Kosir and Bor Plestenjak September 18, 001 Abstract We study the stability of (joint) invariant subspaces of a nite set of commuting

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

IDEAL INTERPOLATION: MOURRAIN S CONDITION VS D-INVARIANCE

IDEAL INTERPOLATION: MOURRAIN S CONDITION VS D-INVARIANCE **************************************** BANACH CENTER PUBLICATIONS, VOLUME ** INSTITUTE OF MATHEMATICS POLISH ACADEMY OF SCIENCES WARSZAWA 200* IDEAL INTERPOLATION: MOURRAIN S CONDITION VS D-INVARIANCE

More information

ADVANCED TOPICS IN ALGEBRAIC GEOMETRY

ADVANCED TOPICS IN ALGEBRAIC GEOMETRY ADVANCED TOPICS IN ALGEBRAIC GEOMETRY DAVID WHITE Outline of talk: My goal is to introduce a few more advanced topics in algebraic geometry but not to go into too much detail. This will be a survey of

More information

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.)

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.) 4 Vector fields Last updated: November 26, 2009. (Under construction.) 4.1 Tangent vectors as derivations After we have introduced topological notions, we can come back to analysis on manifolds. Let M

More information

A linear algebra proof of the fundamental theorem of algebra

A linear algebra proof of the fundamental theorem of algebra A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional

More information

SUMMARY OF MATH 1600

SUMMARY OF MATH 1600 SUMMARY OF MATH 1600 Note: The following list is intended as a study guide for the final exam. It is a continuation of the study guide for the midterm. It does not claim to be a comprehensive list. You

More information

Linear Algebra I. Ronald van Luijk, 2015

Linear Algebra I. Ronald van Luijk, 2015 Linear Algebra I Ronald van Luijk, 2015 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents Dependencies among sections 3 Chapter 1. Euclidean space: lines and hyperplanes 5 1.1. Definition

More information

4.4 Noetherian Rings

4.4 Noetherian Rings 4.4 Noetherian Rings Recall that a ring A is Noetherian if it satisfies the following three equivalent conditions: (1) Every nonempty set of ideals of A has a maximal element (the maximal condition); (2)

More information

Non-commutative reduction rings

Non-commutative reduction rings Revista Colombiana de Matemáticas Volumen 33 (1999), páginas 27 49 Non-commutative reduction rings Klaus Madlener Birgit Reinert 1 Universität Kaiserslautern, Germany Abstract. Reduction relations are

More information

Elementary maths for GMT

Elementary maths for GMT Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1

More information

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F has characteristic zero. The following are facts (in

More information

On the BMS Algorithm

On the BMS Algorithm On the BMS Algorithm Shojiro Sakata The University of Electro-Communications Department of Information and Communication Engineering Chofu-shi, Tokyo 182-8585, JAPAN Abstract I will present a sketch of

More information

Computing syzygies with Gröbner bases

Computing syzygies with Gröbner bases Computing syzygies with Gröbner bases Steven V Sam July 2, 2008 1 Motivation. The aim of this article is to motivate the inclusion of Gröbner bases in algebraic geometry via the computation of syzygies.

More information

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2 Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition

More information