A Study of Numerical Elimination for the Solution of Multivariate Polynomial Systems


W. Auzinger and H. J. Stetter

Abstract. In an earlier paper we motivated and described an algorithm for the computation of the zeros of multivariate polynomial systems by means of multiplication tables and the solution of an associated eigenvalue problem. In this paper we give a more transparent description of this approach, including degenerate situations. Both the theoretical results and the algorithmic procedures are applied to examples, and the results are displayed and discussed.

1 Introduction

We consider systems of multivariate polynomial equations:

$$F(x) := \begin{pmatrix} f_1(x_1,\dots,x_n) \\ \vdots \\ f_n(x_1,\dots,x_n) \end{pmatrix} = 0. \tag{1.1}$$

Here $f_\nu(x) := \sum_j a_j^{(\nu)} x^j := \sum_j a_{j_1,\dots,j_n}^{(\nu)}\, x_1^{j_1} \cdots x_n^{j_n}$, with $\sum_{\nu=1}^n j_\nu \le m_\nu$ and $a_j^{(\nu)} \in \mathbb{C}$. The zeros of (1.1) are denoted by $\xi_\mu := (\xi_{\mu 1},\dots,\xi_{\mu n}) \in \mathbb{C}^n$. The $n$ polynomials $f_\nu$, $\nu = 1(1)n$, generate the polynomial ideal $\mathcal{F}$; $g(x) \equiv h(x) \bmod \mathcal{F}$ will denote the fact that $g(x) - h(x) \in \mathcal{F}$. We will also consider a further linear "dummy" polynomial $f_0(x) := a_1 x_1 + \dots + a_n x_n + a_0$ with indeterminate coefficients $a_\nu$; the ideal of the $n+1$ polynomials $f_\nu$, $\nu = 0(1)n$, will be called $\mathcal{F}_0$.

It is useful to consider also the natural embedding of (1.1) into the projective $n$-space $\mathbb{CP}^n$ (homogenization):

$$\bar f_\nu(x_0, x_1,\dots,x_n) := x_0^{m_\nu}\, f_\nu\!\left(\frac{x_1}{x_0},\dots,\frac{x_n}{x_0}\right) = 0, \quad \nu = 1(1)n. \tag{1.2}$$

Each zero of (1.1) is a zero of (1.2); but (1.2) may have solutions with $\xi_{\mu 0} = 0$ and some other $\xi_{\mu\nu} \ne 0$; these are called zeros at infinity of (1.1).

Institut für Angewandte und Numerische Mathematik, Technische Universität Wien, Wiedner Hauptstrasse 6-10, A-1040 Wien, Austria

The multiplicity of an isolated zero of (1.2) is defined by the factorization of the resultant $R(a_0,\dots,a_n)$ of the polynomial system $f_0, f_1,\dots,f_n$; cf. any text on classical algebra. $R(a_\nu)$ is a homogeneous polynomial of degree $m = \prod_{\nu=1}^n m_\nu$ (the Bézout number of (1.1)) in the dummy coefficients $a_\nu$, and it factors into

$$R(a_\nu) = \prod_{\mu=1}^{m} \left( a_0\,\xi_{\mu 0} + a_1\,\xi_{\mu 1} + \dots + a_n\,\xi_{\mu n} \right), \tag{1.3}$$

where $(\xi_{\mu 0}, \xi_{\mu 1},\dots,\xi_{\mu n}) \in \mathbb{CP}^n$ are the isolated zeros of (1.2). For the finite zeros of (1.2), the multiplicity carries over to the corresponding zero of (1.1). Unfortunately, $R(a_\nu)$ vanishes identically when (1.2) has solution manifolds. If (1.2) has no solution manifolds, it has precisely $m$ isolated zeros counting multiplicities (cf. (1.3)), and (1.1) has at most $m$ isolated zeros.

In [1], the following method for the numerical determination of all zeros of a multivariate polynomial system (1.1) has been derived: Consider a set $Z$ of $m$ power products (PPs) of the unknowns $x_\nu$, $x^j := x_1^{j_1} \cdots x_n^{j_n}$, $j \in J \subset \mathbb{N}^n$, $|J| = m$, $0 \in J$, and denote them as basis power products (BPPs). Let $Z$ also denote the formal vector of these BPPs in some agreed order, with $x^0 = 1$ as last component. Assume that we are able to find, for $\nu = 1(1)n$, $m$-vectors of multivariate polynomials $X_\lambda^{(\nu)}(x) = (\chi_{\lambda\mu}^{(\nu)}(x))$, $\lambda = 1(1)n$, $\mu = 1(1)m$, such that the associated polynomial combinations of the $f_\lambda$ reduce to

$$\sum_{\lambda=1}^{n} X_\lambda^{(\nu)}(x)\, f_\lambda(x) = \left( B_\nu - x_\nu I \right) Z, \quad \nu = 1(1)n. \tag{1.4}$$

The $B_\nu$ are $m \times m$ matrices with real resp. complex elements for real resp. complex $f_\lambda$. Then we say that the $B_\nu$ determine a multiplication table mod $\mathcal{F}$ w.r.t. the PP basis $Z$.

Consider a zero $\xi_\mu = (\xi_{\mu 1},\dots,\xi_{\mu n}) \in \mathbb{C}^n$ of (1.1) and denote the numerical vector which arises after substitution of $\xi_\mu$ into $Z$ by $z_\mu \in \mathbb{C}^m$. By (1.4), $z_\mu$ has to satisfy

$$\left( B_\nu - \xi_{\mu\nu} I \right) z_\mu = 0 \quad \text{for } \nu = 1(1)n, \tag{1.5}$$

i.e. $z_\mu$ has to be a joint eigenvector of the $B_\nu$, with resp. eigenvalues $\xi_{\mu\nu}$. Thus the joint eigenvectors of the $B_\nu$ are the only candidates for BPP vectors $z_\mu$ associated with a zero of (1.1) ("zero BPP vectors"). Actually, it turns out that (1.5) is a full characterization of the zero BPP vectors of the system (1.1): For a suitable basis $Z$ of PPs, each joint eigenvector of the $B_\nu$ of (1.4) defines a zero of (1.1), through its eigenvalues $\xi_{\mu\nu}$ as well as through its components corresponding to BPPs $x_\nu$. Thus the determination of all finite zeros of (1.1) is reduced to the generation of the $B_\nu$ for an appropriate basis $Z$ and to the computation of the $m$ joint eigenvectors of the $B_\nu$ (they commute), and the computation of the zeros proceeds entirely in the context of linear algebra.

In [1], we had motivated and described the generation of multiplication tables (1.4) and the computation of the zeros from associated eigenvalue problems only for the non-degenerate case where all zeros of (1.1) are finite; we will use the terms "non-deficient" resp. "deficient" in this paper. Even for this case, a more refined analysis of the situation had not been given. In the present paper, we give a more transparent presentation of the relation between the ideal $\mathcal{F}$ of the polynomials in (1.1) and the multiplication tables mod $\mathcal{F}$ w.r.t. a PP basis, for the non-deficient case as well as for the deficient case. Furthermore, we describe the algorithmic generation of the PP basis $Z$ and the multiplication table matrices $B_\nu$ in the general situation. Both the theoretical results and the algorithmic procedures are applied to examples, and the results are displayed and discussed.
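As a small illustration of (1.4)/(1.5), not taken from the paper, consider the system $f_1 = x_1^2 - 1$, $f_2 = x_2^2 - x_1$ with the PP basis $Z = (x_1x_2,\, x_1,\, x_2,\, 1)^T$. The multiplication matrices $B_1, B_2$ can be written down by hand, and the eigenvectors of $B_2$ (which has simple eigenvalues) reproduce the four zeros $(1, \pm 1)$, $(-1, \pm i)$. A sketch in Python/NumPy:

```python
import numpy as np

# PP basis Z = (x1*x2, x1, x2, 1); x^0 = 1 is kept as the last component.
# Multiplication by x1 mod (x1^2 - 1, x2^2 - x1):
#   x1*(x1*x2) = x2,  x1*x1 = 1,  x1*x2 = x1*x2,  x1*1 = x1
B1 = np.array([[0., 0, 1, 0],
               [0., 0, 0, 1],
               [1., 0, 0, 0],
               [0., 1, 0, 0]])
# Multiplication by x2:
#   x2*(x1*x2) = x1*x1 = 1 (using x2^2 = x1),  x2*x1 = x1*x2,  x2*x2 = x1,  x2*1 = x2
B2 = np.array([[0., 0, 0, 1],
               [1., 0, 0, 0],
               [0., 1, 0, 0],
               [0., 0, 1, 0]])

assert np.allclose(B1 @ B2, B2 @ B1)   # the B_nu commute (cf. Proposition 3.2)

# B2 has simple eigenvalues (the 4th roots of unity), so each of its
# eigenvectors is a zero BPP vector in the sense of (1.5).
lam, V = np.linalg.eig(B2)
zeros = []
for k in range(4):
    z = V[:, k] / V[-1, k]         # normalize so that the x^0-component is 1
    xi1, xi2 = z[1], lam[k]        # x1-component of z, eigenvalue of B2
    # residuals of f1 = x1^2 - 1 and f2 = x2^2 - x1 vanish:
    assert abs(xi1**2 - 1) < 1e-9 and abs(xi2**2 - xi1) < 1e-9
    zeros.append((xi1, xi2))
```

Note that $B_1$ has the double eigenvalues $\pm 1$, so an eigenanalysis of $B_1$ alone would not separate the zeros; this is the situation of Remark 3.9 below.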

Figure 2.1: Partitioning of J

2 Preparations

Naturally, the PP basis $Z$ introduced in Section 1 must be a basis for the residue class ring of the polynomial ideal $\mathcal{F}$ if it is to characterize the zeros of (1.1) fully. However, we have not been able to find a constructive approach to the generation of such bases in the algebraic literature. Therefore, we have based our algorithmic approach to the generation of the PP basis $Z$ on the resultant $R(a_\nu)$, about which many constructive results may be found in the older algebraic literature.

By (1.3), the knowledge of the resultant $R(a_\nu)$ of $f_0, f_1,\dots,f_n$, which is also called the n-resultant of (1.1) in the algebraic literature, opens the way to the zeros of (1.1), at least theoretically. According to classical results, $R(a_\nu)$ permits a (not unique) representation

$$R(a_\nu) = \sum_{\nu=0}^{n} \psi_\nu(x)\, f_\nu(x); \tag{2.1}$$

this makes it natural to look for multivariate polynomials $\psi_\nu$ which make all PPs except $x^0$ cancel out of the polynomial combination (2.1). The following ansatz¹ for the $\psi_\nu$ is also classical, cf. e.g. [2]: Consider the set

$$J := \Bigl\{\, j \in \mathbb{N}^n : \sum_{\nu=1}^n j_\nu \le \sum_{\nu=1}^n m_\nu - n + 1 \,\Bigr\}, \qquad |J| =: N + m. \tag{2.2}$$

$J$ may be partitioned into disjoint sets (cf. Fig. 2.1):

$$J_0 := \{\, j \in J : j_\lambda < m_\lambda,\ \lambda = 1(1)n \,\},$$
$$J_\nu := \{\, j \in J : j_\nu \ge m_\nu,\ j_\lambda < m_\lambda \text{ for } \lambda = \nu{+}1(1)n \,\}, \quad \nu = 1(1)n. \tag{2.3}$$

Note that this partition depends on the order of the polynomials in (1.1), and that $|J_0| = m = \prod_{\nu=1}^n m_\nu$, so that $N = \sum_{\nu=1}^n |J_\nu|$. Now we compose the $\psi_\nu$ in (2.1) from the PPs in the sets $J_0$ resp.

$$\check J_\nu := J_\nu - (0,\dots,0,\!\!\underbrace{m_\nu}_{\nu\text{-th position}}\!\!,0,\dots,0), \quad \nu = 1(1)n, \tag{2.4}$$

¹ We represent sets of PPs in n variables by the sets of their exponents in $\mathbb{N}^n$.

i.e., the $\check J_\nu$ are defined by shifting the $J_\nu$ towards the origin. Then $J$ represents the set of all PPs which may occur in the right-hand side of (2.1); this makes (2.1) a linear homogeneous system for the coefficients $c_j^{(\nu)}$ of the $\psi_\nu$ and the unknown left-hand side $R$, with an $(N{+}m) \times (N{+}m)$-matrix $A(a_\nu)$ described below. The irreducibility and normalization required for the resultant imply that

$$\tilde R(a_\nu) := \det A(a_\nu) = \rho\, R(a_\nu), \tag{2.5}$$

where $\rho$ is a polynomial in the coefficients of the $f_\nu$ of (1.1) only and does not contain the dummy coefficients $a_\nu$. Actually, $\rho$ is the determinant of a certain submatrix of $A$, according to [2]. By (1.3) and (2.5), the factorization of $\tilde R(a_\nu)$ fully determines the zeros of (1.1), except when $\tilde R$ and $\rho$ vanish while $R(a_\nu)$ is not identically zero. In this case one may perturb (1.1) into a non-deficient system for which $\rho$ will not vanish, and then let the perturbation parameter $\delta$ approach zero:

$$R(a_\nu) = \lim_{\delta \to 0} \frac{\tilde R(a_\nu;\, \delta)}{\rho(\delta)}. \tag{2.6}$$

As both $\rho$ and $\tilde R$ are polynomials in $\delta$, this is a purely algebraic process on the elements of the matrix $A(a_\nu)$. Thus this matrix determines the resultant, and hence its factorization and the zeros of (1.1), in the deficient case as well. Only if (1.1) possesses solution manifolds and $R(a_\nu)$ vanishes identically is the matrix $A$ of (2.1)/(2.4) not a suitable starting point for the computation of the isolated zeros of (1.1).

To define $A$ formally, we denote by $\check Z_\nu$ the vector of the PPs in $\check J_\nu$, $\nu = 1(1)n$. The order of the PPs within these vectors is irrelevant, except that we assume $x^0 = 1$ to be the last element of $Z_0$. The set resp. vector $Z$ of all PPs in $J$ will be segmented into

$$Z = \begin{pmatrix} Z_1 \\ Z_0 \end{pmatrix} \begin{matrix} \}\, N \\ \}\, m \end{matrix} \tag{2.7}$$

with a fixed but arbitrary order of the PPs in $Z_1$ (the PPs with exponents in $J_1 \cup \dots \cup J_n$), while $Z_0$ contains the PPs with exponents in $J_0$. Now the polynomial combination (cf. (2.1) and (2.4))

$$\begin{pmatrix} \check Z_1\, f_1(x) \\ \vdots \\ \check Z_n\, f_n(x) \\ Z_0\, f_0(a_\nu, x) \end{pmatrix} =: \begin{pmatrix} A_{11} & A_{10} \\ A_{01}(a_\nu) & A_{00}(a_\nu) \end{pmatrix} \begin{pmatrix} Z_1 \\ Z_0 \end{pmatrix} =: A(a_\nu)\, Z \tag{2.8}$$

defines the matrix $A(a_\nu)$ and its segmentation corresponding to (2.7). Obviously, each row of $A$ contains the coefficients of a particular $f_\nu$ as non-zero elements; their positions depend on the PP within $\check Z_\nu$ with which $f_\nu$ has been multiplied and on the order of the PPs in $Z$. The dummy coefficients $a_\nu$ of $f_0$ occur in $A_{01}$ and $A_{00}$ only; due to the linearity of $f_0$,

$$A_{0i}(a_\nu) =: \sum_{\nu=0}^{n} a_\nu\, A_{0i}^{\nu}, \quad i = 0, 1, \tag{2.9}$$

and each matrix $A^\nu := (A_{01}^\nu \; A_{00}^\nu)$, $\nu = 0(1)n$, has one element 1 in each row as the only non-zero element. By (2.8), the $A^\nu$ satisfy

$$x_\nu\, Z_0 = A^\nu Z, \quad \nu = 1(1)n, \tag{2.10}$$

with

$$A^\nu = \left( A_{01}^\nu \;\; A_{00}^\nu \right). \tag{2.11}$$

The system (2.1) for the coefficients $c_j^{(\nu)}$ of the $\psi_\nu$ now takes the form

$$\left( c^{(1)T} \; c^{(2)T} \; \cdots \; c^{(n)T} \; c^{(0)T} \right) A = \left( 0 \; \cdots \; 0 \; R \right), \tag{2.12}$$

which explains once more (2.5). However, (2.12) will not play a role in our further development. Also, the introduction of the dummy polynomial $f_0(a_\nu, x)$ served only to relate our approach to the classical algebraic literature; for our purposes, we could as well have defined $A^\nu$ and $A_{01}^\nu, A_{00}^\nu$ by (2.10)/(2.11).

3 Multiplication Tables mod F; Case I: A_11 Regular

We will now explain the generation of the multiplication tables introduced in Section 1 and establish their properties. For simplicity, we treat the non-degenerate case at first, i.e. we assume at first that $A_{11}$ in (2.8) is regular. In this case $Z_0$ will take the role of the PP basis $Z$ of Section 1. According to (2.8) and (2.11) we have, for $\nu = 1(1)n$,

$$-A_{01}^{\nu}\, A_{11}^{-1} \begin{pmatrix} \check Z_1\, f_1(x) \\ \vdots \\ \check Z_n\, f_n(x) \end{pmatrix} = \left( B_\nu - x_\nu I \right) Z_0, \tag{3.1}$$

with

$$B_\nu := A_{00}^\nu - A_{01}^\nu\, A_{11}^{-1}\, A_{10} \in \mathbb{R}^{m\times m} \text{ or } \mathbb{C}^{m\times m} \text{ resp.}^2 \tag{3.2}$$

Note that we have formed a polynomial combination of the $f_\nu$ in $\mathcal{F}$ only; cf. the end of the previous section. (3.1) establishes the $B_\nu$ of (3.2) as the matrices determining a multiplication table mod $\mathcal{F}$ w.r.t. the PP basis $Z_0$:

$$x_\nu\, Z_0 \equiv B_\nu\, Z_0 \bmod \mathcal{F}, \quad \nu = 1(1)n, \tag{3.3}$$

and we have (cf. Section 1 for "zero BPP vector"):

Proposition 3.1: For each finite zero $\xi_\mu = (\xi_{\mu 1},\dots,\xi_{\mu n}) \in \mathbb{C}^n$ of (1.1), the associated BPP vector $z_\mu \in \mathbb{C}^m$ w.r.t. $Z_0$ satisfies

$$B_\nu\, z_\mu = \xi_{\mu\nu}\, z_\mu, \quad \nu = 1(1)n, \tag{3.4}$$

i.e. the joint eigenvectors of the $B_\nu$ are the only candidates for the zero BPP vectors of (1.1).

The fact that the $B_\nu$ have common eigenvectors suggests that they commute:

² For simplicity, we will assume that the $f_\nu$ have real coefficients. There is no change in the formal development if the coefficients $a_j^{(\nu)}$ are complex numbers.
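The Schur-complement structure of (3.2) is straightforward to realize numerically. A minimal sketch (the block names are ours, and a dense solve stands in for the elimination of Section 5):

```python
import numpy as np

def mult_table(A00_nu, A01_nu, A11, A10):
    """B_nu = A00^nu - A01^nu A11^{-1} A10, cf. (3.2); A11 assumed regular."""
    return A00_nu - A01_nu @ np.linalg.solve(A11, A10)

# sanity check: if A01^nu = 0, the multiplication table is just A00^nu
rng = np.random.default_rng(0)
A11 = rng.standard_normal((6, 6)) + 6 * np.eye(6)   # well-conditioned test block
A10 = rng.standard_normal((6, 4))
A00 = rng.standard_normal((4, 4))
B = mult_table(A00, np.zeros((4, 6)), A11, A10)
assert np.allclose(B, A00)
```

In practice one would of course exploit the sparsity of $A_{11}$ rather than solving densely; cf. Remark (iii) in Section 5.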

Proposition 3.2: The $B_\nu$ of (3.2) commute.

Proof: By (3.3), $B_\nu B_\lambda Z_0 \equiv x_\nu x_\lambda Z_0 = x_\lambda x_\nu Z_0 \equiv B_\lambda B_\nu Z_0 \bmod \mathcal{F}$. The existence of a non-vanishing row $c^T$ in $B_\nu B_\lambda - B_\lambda B_\nu$ would thus imply the existence of a nontrivial linear dependence $c^T Z_0 \equiv 0 \bmod \mathcal{F}$ within the BPPs of $Z_0$. However, according to [2], the matrix $(A_{11}\; A_{10})$ contains all information about the ideal $\mathcal{F}$, and the regularity of $A_{11}$ contradicts the possibility of eliminating $Z_1$ from $(A_{11}\; A_{10})\,(Z_1^T, Z_0^T)^T \equiv 0 \bmod \mathcal{F}$ so as to leave a relation among the BPPs only. □

On the basis of Proposition 3.2, we may define, for all $j \in \mathbb{N}^n$:

$$B^j := B_1^{j_1} \cdots B_n^{j_n} \in \mathbb{R}^{m\times m}, \tag{3.5}$$

$$b_j^T := (0 \,\cdots\, 0 \; 1)\, B^j \in \mathbb{R}^m. \tag{3.6}$$

The $b_j^T$ furnish the basis representation of all PPs mod $\mathcal{F}$:

Proposition 3.3: For all $j \in \mathbb{N}^n$,

$$x^j \equiv b_j^T\, Z_0 \bmod \mathcal{F}. \tag{3.7}$$

Proof: $x^j = 1 \cdot x^j \equiv (0 \,\cdots\, 0\; 1)\, B^j Z_0 = b_j^T Z_0 \bmod \mathcal{F}$, by (3.3), (3.5), (3.6). □

As an immediate consequence of (3.7), we have

$$b_j^T = (0 \,\cdots\, 0 \; 1 \; 0 \,\cdots\, 0) \quad \text{for } j \in J_0, \tag{3.8}$$

where the position of the only non-vanishing entry 1 is determined by the position of $x^j$ in $Z_0$.

Proposition 3.4: For any joint eigenvector $z \in \mathbb{C}^m$ of the $B_\nu$, the last component is $\ne 0$:

$$(0 \,\cdots\, 0\; 1)\, z \ne 0. \tag{3.9}$$

Proof: Assume that the last component of $z$ vanishes. Then, for each $j \in J_0$, the component $z_j$ of $z$ in the position of $x^j$ satisfies, with the eigenvalues $\xi_\nu$ of $z$ for the $B_\nu$ and $\xi^j := \xi_1^{j_1} \cdots \xi_n^{j_n}$,

$$z_j \overset{(3.8)}{=} b_j^T z \overset{(3.6)}{=} (0 \,\cdots\, 0\; 1)\, B^j z \overset{\text{hyp.}}{=} \xi^j\, (0 \,\cdots\, 0\; 1)\, z \overset{\text{ass.}}{=} 0.$$

Hence our assumption would imply $z = 0$. □

Thus we may normalize each joint eigenvector of the $B_\nu$ by setting its last component equal to 1.

The following relations link the polynomials $f_\nu$ directly to the multiplication tables:

Proposition 3.5: For $\nu = 1(1)n$,

$$\sum_j a_j^{(\nu)}\, b_j^T = 0. \tag{3.10}$$

Proof: $0 \equiv f_\nu(x) = \sum_j a_j^{(\nu)} x^j \equiv \bigl( \sum_j a_j^{(\nu)} b_j^T \bigr) Z_0 \bmod \mathcal{F}$, by (3.7). But a nontrivial linear dependence mod $\mathcal{F}$ of the BPPs in $Z_0$ cannot exist; cf. the proof of Proposition 3.2. □

Theorem 3.6: Let $z_\mu \in \mathbb{C}^m$ be a joint eigenvector of the $B_\nu$, with eigenvalues $\xi_{\mu\nu}$, $\nu = 1(1)n$, resp., normalized such that its last component equals 1. Then $\xi_\mu = (\xi_{\mu 1},\dots,\xi_{\mu n}) \in \mathbb{C}^n$ is a zero of (1.1).

Proof:

$$f_\nu(\xi_\mu) = \sum_j a_j^{(\nu)}\, \xi_\mu^j = \sum_j a_j^{(\nu)}\, \xi_{\mu 1}^{j_1} \cdots \xi_{\mu n}^{j_n}\, (0 \,\cdots\, 0\; 1)\, z_\mu \overset{(3.4)}{=} \sum_j a_j^{(\nu)}\, (0 \,\cdots\, 0\; 1)\, B^j z_\mu \overset{(3.6)}{=} \Bigl( \sum_j a_j^{(\nu)}\, b_j^T \Bigr) z_\mu \overset{(3.10)}{=} 0. \; \square$$

For regular $A_{11}$, Theorem 3.6 and Proposition 3.1 establish a complete equivalence of a multiplication table (3.3) and the system (1.1) in the following sense:

Proposition 3.7: If $A_{11}$ is regular, (1.1) has no zero at infinity and hence exactly $m$ isolated zeros (counting multiplicities).

Proof: From the definition of $A$ in (2.8) it follows that

$$\det A(a_\nu) = \det A_{11}\, \det\Bigl( A_{00}(a_\nu) - A_{01}(a_\nu)\, A_{11}^{-1} A_{10} \Bigr) = \det A_{11}\, \det\Bigl( \sum_{\nu=0}^{n} a_\nu\, B_\nu \Bigr) \tag{3.11}$$

by (2.9) and (3.2), with $B_0 = I$. Hence $\tilde R(a_\nu) = \det A(a_\nu)$ has the full degree $m$ in $a_0$, so that the factorization (1.3) of the resultant $R$ must have a term $a_0\, \xi_{\mu 0}$ with $\xi_{\mu 0} \ne 0$ in each factor, which implies the assertion. □

Proposition 3.8: If the $m$ zeros of (1.1) are simple, the multiplication table matrices $B_\nu$ have exactly $m$ joint eigenvectors.

Proof: A trivial consequence of Proposition 3.1. □

Remark 3.9: If simple zeros of (1.1) have some coinciding components (say $\xi_{\mu_1 1} = \xi_{\mu_2 1}$), then the corresponding $B_\nu$ will have multiple eigenvalues with eigenspaces of dimension greater than 1 (here $B_1$ will have a double eigenvalue with an eigenspace of dimension 2). The fact that a zero BPP vector has to be a joint eigenvector of all $B_\nu$ serves to distinguish the zero BPP vectors within the eigenspace.

Proposition 3.10: If (1.1) has a zero $\xi_\mu$ of multiplicity $k$, then each $B_\nu$ has an eigenvalue $\xi_{\mu\nu}$ with algebraic multiplicity $k$ but geometric multiplicity 1, and the associated eigenvector is the same for all $B_\nu$, viz. the zero BPP vector for $\xi_\mu$.

Proof: By (3.11), (2.5) and (1.3), we have for $\nu = 1(1)n$:

$$\det\left( B_\nu - \lambda I \right) = \frac{1}{\det A_{11}}\; \tilde R\bigl( a_0 = -\lambda,\ a_\nu = 1,\ a_\lambda = 0 \ (\lambda \ne \nu) \bigr) = \frac{\rho}{\det A_{11}}\; \prod_{\mu=1}^{m} \left( -\lambda\, \xi_{\mu 0} + \xi_{\mu\nu} \right), \tag{3.12}$$

so that the algebraic multiplicities of the zeros of (1.1) and of the corresponding eigenvalues of the $B_\nu$ are the same. If we consider a perturbation of (1.1) which takes the $k$-fold zero $\xi_\mu$ into $k$ simple zeros and then regard the limit process back to (1.1), the corresponding procedure for the multiplication tables will at first produce $k$ separate but close joint eigenvectors of the $B_\nu$, which must merge into one eigenvector. □

Naturally, the situations of Remark 3.9 and Proposition 3.10 may occur in combination; e.g. zeros of a multiplicity $k > 1$ may coincide in some components, with obvious consequences for the $B_\nu$ and their eigenvalues and eigenvectors. The preceding results may be combined into

Theorem 3.11: If a particular multiplication table matrix $B_\nu$ has only eigenspaces of dimension 1, then the zeros of (1.1) may be completely determined by an eigenanalysis of this $B_\nu$. If $B_\nu$ has an eigenspace of a dimension greater than 1, then the selection of the zero BPP vectors must use further $B_\nu$'s.

Remark 3.12: Note that $Z_0$ contains all linear powers $x_\nu$, $\nu = 1(1)n$, if $m_\nu \ge 2$ for all $\nu$; cf. (2.3). In this case, the $\xi_{\mu\nu}$ occur explicitly in the normalized eigenvectors of each $B_\nu$. If there are linear equations in (1.1), they must be linearly independent to exclude solution manifolds; hence the $\xi_{\mu\nu}$ missing in the eigenvectors may be computed from the linear part of (1.1). Of course, it should be advantageous to reduce the number of unknowns in (1.1) by means of the linear equations before a solution of (1.1) is attempted.

4 Multiplication Tables mod F; Case II: A_11 Singular

For the matrix $A_1 := (A_{11}\; A_{10})$ we define

$$\bar N := \operatorname{rank} A_1 \le N. \tag{4.1}$$

Note that the case $\bar N = N$ is perfectly possible for singular $A_{11}$.

The following assumption appears to be necessary and sufficient for a transfer of the results of Section 3 to the more general situation.

Assumption M: The $N + m$ PPs in $Z$ (cf. (2.7)) may be ordered and segmented into

$$Z = \begin{pmatrix} \hat Z_1 \\ \bar Z_1 \\ \bar Z_0 \end{pmatrix} \begin{matrix} \}\, \hat N \\ \}\, \bar N \\ \}\, \bar m \end{matrix}, \qquad \hat N + \bar N + \bar m = N + m, \tag{4.2}$$

such that the following conditions hold:

(i) The ordering and segmentation of $A_1$ induced by (4.2) (cf. (2.8)),

$$\begin{pmatrix} \check Z_1\, f_1(x) \\ \vdots \\ \check Z_n\, f_n(x) \end{pmatrix} = \left( \hat A_{11} \;\; \bar A_{11} \;\; \bar A_{10} \right) \begin{pmatrix} \hat Z_1 \\ \bar Z_1 \\ \bar Z_0 \end{pmatrix}, \tag{4.3}$$

generates a matrix $\bar A_{11}$ of rank $\bar N$, i.e. the $\bar N$ columns of $\bar A_{11}$ are linearly independent.

(ii) No $x_\nu$-neighbor of $\bar Z_0$ is in $\hat Z_1$, i.e. (cf. (2.10))

$$x_\nu\, \bar Z_0 =: \bar A^\nu Z = \left( 0 \;\; \bar A_{01}^\nu \;\; \bar A_{00}^\nu \right) \begin{pmatrix} \hat Z_1 \\ \bar Z_1 \\ \bar Z_0 \end{pmatrix} \quad \text{for } \nu = 1(1)n. \tag{4.4}$$

(iii) For a left pseudoinverse $\bar A_{11}^+$ of $\bar A_{11}$ (i.e. $\bar A_{11}^+ \bar A_{11} = I \in \mathbb{R}^{\bar N \times \bar N}$),

$$\bar A_{01}^\nu\, \bar A_{11}^+\, \hat A_{11} = 0 \quad \text{for } \nu = 1(1)n. \tag{4.5}$$

Under this assumption, which we suppose to be true in the following, we may form multiplication tables mod $\mathcal{F}$ w.r.t. the PP basis $\bar Z_0$ (cf. (1.4) and (3.1)-(3.3)):

$$-\bar A_{01}^\nu\, \bar A_{11}^+ \begin{pmatrix} \check Z_1\, f_1(x) \\ \vdots \\ \check Z_n\, f_n(x) \end{pmatrix} = \left( \bar B_\nu - x_\nu I \right) \bar Z_0, \tag{4.6}$$

with

$$\bar B_\nu := \bar A_{00}^\nu - \bar A_{01}^\nu\, \bar A_{11}^+\, \bar A_{10} \in \mathbb{R}^{\bar m \times \bar m}, \quad \nu = 1(1)n, \tag{4.7}$$

so that

$$x_\nu\, \bar Z_0 \equiv \bar B_\nu\, \bar Z_0 \bmod \mathcal{F}, \quad \nu = 1(1)n. \tag{4.8}$$

This shows that, under Assumption M, we are able to achieve the situation described in Section 1.

Proposition 4.1: For a given admissible segmentation (4.2), the $\bar B_\nu$ are uniquely defined, i.e. independent of the choice of the pseudoinverse $\bar A_{11}^+$.

Proof: Since $\bar N$ is the rank of $A_1$, the columns of $\bar A_{10}$ must be linear combinations of the $\bar N$ columns of $\bar A_{11}$. Hence $\bar A_{11}^+ \bar A_{10}$ is independent of the particular choice of $\bar A_{11}^+$, since $\bar A_{11}^+ \bar A_{11} = I$. □

As a consequence of (4.6), Proposition 3.1 remains valid, with $m, z_\mu, Z_0, B_\nu$ replaced by $\bar m, \bar z_\mu, \bar Z_0, \bar B_\nu$. For the validity of the analogously adapted Propositions 3.2-3.5, we need

Proposition 4.2: There exists no nontrivial linear dependence

$$c^T\, \bar Z_0 \equiv 0 \bmod \mathcal{F}, \tag{4.9}$$

i.e. $\bar Z_0$ is a basis of the residue class ring of $\mathcal{F}$.

Proof: We distinguish two cases:

a) $\bar N = N$: Then $\bar A_{11} \in \mathbb{R}^{\bar N \times \bar N}$ is regular, and we may argue as in the proof of Proposition 3.2.

b) $\bar N = \operatorname{rank} A_1 < N$: Assume that (4.9) holds. The row $(0 \,\cdots\, 0\; c^T)$ is obviously linearly independent of any $\bar N$ linearly independent rows in $A_1$ for which the segments in $\bar A_{11}$ are also linearly independent. But at the same time, (4.9) must be formable as a linear combination of rows in (4.3), according to (2.6), which constitutes a contradiction to $\bar N = \operatorname{rank} A_1$. □

With Proposition 4.2, Propositions 3.2-3.5 may be immediately adapted, and we have, in analogy to Proposition 3.1 and Theorem 3.6:

Theorem 4.3: For each finite zero $\xi_\mu = (\xi_{\mu 1},\dots,\xi_{\mu n}) \in \mathbb{C}^n$ of (1.1), the associated zero BPP vector $\bar z_\mu \in \mathbb{C}^{\bar m}$ w.r.t. $\bar Z_0$ satisfies

$$\bar B_\nu\, \bar z_\mu = \xi_{\mu\nu}\, \bar z_\mu, \quad \nu = 1(1)n, \tag{4.10}$$

i.e. it is a joint eigenvector of the $\bar B_\nu$. Conversely, if $\bar z_\mu \in \mathbb{C}^{\bar m}$ is a joint eigenvector of the $\bar B_\nu$, with eigenvalues $\xi_{\mu\nu}$, $\nu = 1(1)n$, then $\xi_\mu := (\xi_{\mu 1},\dots,\xi_{\mu n})$ is a zero of (1.1).
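Proposition 4.1 can be checked numerically: when the columns of $\bar A_{10}$ lie in the range of the full-column-rank block $\bar A_{11}$, every left inverse yields the same product $\bar A_{11}^+ \bar A_{10}$. A sketch with the Moore-Penrose pseudoinverse (all blocks are random test data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
A11bar = rng.standard_normal((8, 5))          # full column rank, Nbar = 5
C = rng.standard_normal((5, 3))
A10bar = A11bar @ C                           # columns of A10bar in range(A11bar)

pinv = np.linalg.pinv(A11bar)                 # one particular left inverse
assert np.allclose(pinv @ A11bar, np.eye(5))  # left-inverse property
assert np.allclose(pinv @ A10bar, C)          # independent of the choice, cf. Prop. 4.1

# a different left inverse: (W A11bar)^(-1) W for a random W
W = rng.standard_normal((5, 8))
left_inv = np.linalg.solve(W @ A11bar, W)
assert np.allclose(left_inv @ A10bar, C)      # same result
```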

Furthermore, due to the mutual commutativity of the $\bar m \times \bar m$-matrices $\bar B_\nu$ under Assumption M, we must have a set of $\bar m$ joint eigenvectors (counting multiplicities in the sense of Propositions 3.8 and 3.10 and Remark 3.9); thus there are precisely $\bar m$ finite zeros of system (1.1), counting multiplicities.

Let us now discuss the requirements of Assumption M:

Condition (i) merely serves to distinguish a set of $\bar N$ linearly independent columns in the matrix $A_1$; because of $\bar N = \operatorname{rank} A_1$, such a set must exist, and there cannot be more than $\bar N$ linearly independent columns. Algorithmically, one must select the columns of $\bar A_{11}$ such that their linear independence is numerically well established, i.e. with column pivoting; see Section 5. The determination of the numerical rank of $A_1$ is then a byproduct of the determination of $\bar A_{11}$.

Condition (ii) is a control criterion for the separation of the remaining columns of $A_1$ into $\hat A_{11}$ and $\bar A_{10}$ or, equivalently, of the remaining PPs in $Z$ into $\hat Z_1$ and $\bar Z_0$ (cf. (4.3)): Each PP in $\bar Z_0$ must have all its $x_\nu$-neighbors in $\bar Z_0$ or in $\bar Z_1$. Since $x^0 = 1$ must be in $\bar Z_0$, this requires that each increasing path in $J$ which issues from $0$ must reach a point in $\bar Z_1$; the PPs for all previous points must be in $\bar Z_0$. Geometrically, this means that the points in $J$ associated with $\bar Z_1$ must separate the origin from the outer part of $\mathbb{N}^n$.

Condition (iii) is a genuine hypothesis; it can only be tested after the separation (4.3) has been effected.

If there are no finite solution manifolds for the system (1.1), the residue class ring of $\mathcal{F}$ has a finite dimension, and thus there exists a finite PP basis for it containing the element 1. On the other hand, no finite bases for the residue class ring exist when there are finite solution manifolds of a positive dimension. Thus it is trivial that Assumption M cannot be satisfied if there are finite solution manifolds, and any attempt to effect a separation (4.2) satisfying (i)-(iii) must fail. The critical question is the reverse one: May a reasonable algorithm for the selection of the PPs in $\bar Z_1$ fail although there exists a finite PP basis for the residue class ring? Such a failure could be due to three reasons:

a) Any PP basis contains PPs not in $Z$ (cf. (2.7)). This would seem to contradict the constructive theory for the resultant $R(a_\nu)$, except possibly in the case where $R(a_\nu)$ vanishes identically due to a zero manifold at infinity.

b) The selection process for independent columns in $A_1$ leads to a set $\bar Z_1$ which does not separate the PP 1 from the far-field PPs, although appropriate sets $\bar Z_1$ would exist.

c) The selection process leads to a separation (4.2) of $Z$ which satisfies (4.3) and (4.4) but not (4.5).

Conjecture 4.4: If (1.1) has no finite zero manifolds, the algorithm explained in Section 5 (and any other reasonable algorithm for the separation (4.2)) will succeed, possibly after a suitable reduction of $A_1$ by an omission of appropriate rows in the case that (1.1) has zero manifolds at infinity.

Remark 4.5: In this conjecture, "success" refers to an execution of the algorithm in precise arithmetic (i.e. in $\mathbb{R}$ or $\mathbb{C}$ resp.). For a floating-point algorithm it is clear that it may also fail numerically if (1.1) is very close to a system with zero manifolds.

Remark 4.6: It appears that the ideas of T. Y. Li (e.g. [3]) may be used for an appropriate reduction of the matrix $A_1$ in the case of zero manifolds at infinity.

5 Algorithmic Realization

Our algorithm may indeed be called an elimination algorithm, since the multiplication tables $B_\nu$ resp. $\bar B_\nu$ are obtained by row elimination and back-substitution within the matrix $A_1 = (A_{11}\; A_{10})$. In Sections 3 resp. 4 above, the non-deficient ($A_{11}$ non-singular) and the deficient case ($A_{11}$ singular) were treated separately; for an algorithmic realization, however, such a distinction is not necessary, as will become clear below.

1. Preparations. For the purpose of internal data representation, a suitable order has to be chosen for the exponents (multiindices) $j = (j_1,\dots,j_n)$ within the hyper-tetrahedron $J$ (cf. (2.2) and Fig. 2.1) associated with the power products $x^j = x_1^{j_1} \cdots x_n^{j_n} \in Z$. Lexicographic ordering, for instance, is a convenient choice, since it is easy to manage algorithmically. By the "address" of a PP $x^j$ we understand the position of its exponent $j \in J$ w.r.t. the prescribed order (e.g., its lexicographic position within $J$). Note that we do not a priori mark a set of basis power products, except $x^0$, which must be a BPP.

The auxiliary functions required for the management of the underlying lexicographic data structure include, among others:

GETNPP(j): returns the lexicographic successor of a given multiindex j;
ADDRPP(j, l): returns the lexicographic address l of a given multiindex j;
PPADDR(l, j): returns the multiindex j whose lexicographic address is given by l (inverse of ADDRPP).

GETNPP is mainly used within the generation of the matrix $A_1$ (step 2 below); a modified version of GETNPP is also required to generate the sets $\check J_\nu$ (cf. (2.4) and step 2 below). ADDRPP and PPADDR perform combinatorial tasks providing the necessary link between the columns of $A_1$ and their corresponding multiindices $j \in J$ (steps 3-6 below). These auxiliary functions are used in a black-box manner; thus the algorithm can easily be adjusted, by substituting GETNPP etc., to another ordering or even to a reduced ansatz for the efficient treatment of special problem types (cf. Remark 4.6).

2. Generation of the matrix $A_1$. Now we compose the $N \times (N{+}m)$-matrix $A_1$ according to the rules described in Section 2: Each column of $A_1$ is associated, via the chosen ordering³, with a PP $x^j$, $j \in J$; each row of $A_1$ represents a sparse linear relation mod $\mathcal{F}$ between these PPs,

$$x^{\check\jmath}\, f_\nu(x) = \sum_j a_j^{(\nu)}\, x^{j+\check\jmath} = 0, \tag{5.1}$$

which is a shifted version of one of the given polynomial equations $f_\nu(x) = 0$, $\nu = 1,\dots,n$ (cf. for instance (2.8)). The various shift factors $x^{\check\jmath}$ for $f_\nu$ are taken from the set of PPs with exponents from $\check J_\nu$ (cf. (2.3), (2.4)) and are processed in some convenient order (e.g., lexicographically)⁴. Now, according to (5.1), the polynomial coefficient $a_j^{(\nu)}$ becomes the $l$-th entry in the corresponding row of $A_1$, where $l$ is the address of $j + \check\jmath$. Thus the generation of $A_1$ is easy to perform algorithmically.

³ In Section 2, a different arrangement of columns, $A_1 = (A_{11}\; A_{10})$, was used for theoretical purposes.
⁴ Recall that, by construction, any $j + \check\jmath$ ever occurring in (5.1) is indeed contained in $J$; cf. Section 2.

3. Elimination. Next we apply row eliminations in conjunction with a special column pivoting strategy (involving column permutations) to determine the numerical rank $\bar N \le N$ of the sparse rectangular matrix
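The bookkeeping functions GETNPP, ADDRPP and PPADDR are easy to realize on top of any fixed enumeration of J. A sketch in Python (the data layout, with a dict for the address map, is ours):

```python
from itertools import product

def enumerate_J(m):
    """All multiindices j in N^n with |j| <= sum(m) - n + 1, in lexicographic order."""
    n, D = len(m), sum(m) - len(m) + 1
    return [j for j in product(range(D + 1), repeat=n) if sum(j) <= D]

def make_addressing(m):
    J = enumerate_J(m)
    addr = {j: l for l, j in enumerate(J)}
    addrpp = lambda j: addr[j]           # ADDRPP: multiindex -> address
    ppaddr = lambda l: J[l]              # PPADDR: address -> multiindex
    getnpp = lambda j: J[addr[j] + 1]    # GETNPP: lexicographic successor within J
    return J, addrpp, ppaddr, getnpp

J, addrpp, ppaddr, getnpp = make_addressing((2, 2))   # n = 2, m1 = m2 = 2
assert len(J) == 10                                   # |J| = N + m = 6 + 4, cf. Example 6.1
assert all(ppaddr(addrpp(j)) == j for j in J)         # PPADDR inverts ADDRPP
```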

$A_1$ and to select a suitable set of $\bar N$ linearly independent columns⁵. Basically, we use conventional column pivoting with the aim of optimal numerical stability⁶. However, we restrict the pivot search by distinguishing different types of columns (PPs $x^j \in Z$, exponents $j \in J$):

(δ) First, a maximal linearly independent set is selected among the PPs $x^j$ where $j$ is in

$$J_\delta := \Bigl\{\, j \in \mathbb{N}^n : \sum_{\nu=1}^n j_\nu = \sum_{\nu=1}^n m_\nu - n + 1 \,\Bigr\}, \tag{5.2}$$

i.e. the exterior boundary of $J$. These are considered first because none of the $x^j$, $j \in J_\delta$, can be a BPP (no $x_\nu$-neighbor is in $J$, and so Assumption M, (ii), would be violated).

(γ) Next, all other PPs in our ansatz, except those in $Z_0$ (cf. (β)), are considered as pivot candidates.

(β) The PPs $x^j \in Z_0$ (cf. (2.3)), which form a PP basis in the non-deficient case (cf. Section 3), are pivot candidates of lowest priority, except $x^0$ (cf. (α)): They are only considered as pivot PPs if it has not been possible to find a full set of $\bar N$ linearly independent columns of the types (δ) and (γ).

(α) The PP $x^0$ must be a BPP; so it is not admitted as a pivot candidate.

This procedure leads to an assortment of $\bar N \le N$ pivot PPs of the types (δ), (γ) and, possibly, (β), which form the set $\bar Z_1$ in the sense of Section 4. The remaining (non-pivot) PPs of the types (α), (β) and (γ) (not (δ)) constitute the set of "basis candidates" $Z \setminus \bar Z_1$, from which a definitive PP basis will be selected in step 5.

4. Back-substitution. By back-substitution within the $\bar N$ linearly independent rows of the triangularized matrix $A_1$ arising from step 3, we obtain for each of the $\bar N$ pivot PPs $x^k \in \bar Z_1$ a representation

$$x^k \equiv \sum_{j:\, x^j \in Z \setminus \bar Z_1} b_{k,j}\, x^j \bmod \mathcal{F} \tag{5.3}$$

in terms of non-pivot PPs $x^j \in Z \setminus \bar Z_1$.

5. Determination of a PP basis. Now we attempt to extract a PP basis $\bar Z_0$ from the set of basis candidates (cf. step 3) which satisfies Assumption M. To this end we choose an initial minimal test basis $\tilde Z$, simply consisting of the element $x^0$, and apply the following recursive testing-and-extension procedure:

Consider all $x_\nu$-neighbors ($\nu = 1,\dots,n$) of the current test basis $\tilde Z$ which are not contained in $\tilde Z$. Let $x^k$ denote the $x_\nu$-neighbor under consideration. We distinguish several cases:

If $x^k$ is a basis candidate, it is added to $\tilde Z$. Continue with the new test basis.

If $x^k$ is a pivot PP (cf. step 3), the search path has reached its endpoint w.r.t. the $x_\nu$-direction, and $x^k$ permits a representation (5.3) in terms of basis candidates. Three cases are possible:

If $x^k$ is representable by PPs from $\tilde Z$ solely, such that (5.3) reduces to

$$x^k \equiv \sum_{j:\, x^j \in \tilde Z} b_{k,j}\, x^j \bmod \mathcal{F}, \tag{5.4}$$

there is no need to extend $\tilde Z$. ("Passed.") Continue with the current test basis.

⁵ Cf. Remark (ii) at the end of this section.
⁶ Cf. Remark (iii) at the end of this section.
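The rank determination by row elimination with column pivoting in step 3 can be sketched in a few lines; the priority classes (δ), (γ), (β), (α) and the sparse storage of the text are not modeled here, and the tolerance-based rank test is a simple stand-in for a careful numerical rank decision:

```python
import numpy as np

def pivoted_elimination(A1, tol=1e-10):
    """Row elimination with column pivoting: returns the numerical rank
    and the list of pivot columns (candidates for Z1bar)."""
    A = np.array(A1, dtype=float)
    nrows, ncols = A.shape
    pivots, row = [], 0
    for _ in range(min(nrows, ncols)):
        sub = np.abs(A[row:, :])
        sub[:, pivots] = 0.0                 # exclude columns already pivoted
        i, col = divmod(np.argmax(sub), ncols)
        if sub[i, col] <= tol:
            break                            # no significant entry left
        A[[row, row + i]] = A[[row + i, row]]                     # row swap
        A[row + 1:] -= np.outer(A[row + 1:, col] / A[row, col], A[row])
        pivots.append(col)
        row += 1
    return row, pivots

# a 4 x 6 test matrix of rank 2
A1 = (np.outer([1., 2, 3, 4], [1., 0, 1, 0, 1, 0])
      + np.outer([0., 1, 0, 1], [0., 1, 0, 1, 0, 1]))
rank, pivots = pivoted_elimination(A1)
assert rank == 2 and len(pivots) == 2
```

The triangularized array A left behind by the elimination is exactly what step 4 back-substitutes in.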

If in (5.3) there occur non-vanishing⁷ coefficients $b_{k,j}$ for which $x^j$ is not contained in $\tilde Z$ but is a basis candidate, the respective PPs $x^j$ are added to $\tilde Z$. Continue with the new test basis.

If in (5.3) there occurs at least one non-vanishing⁷ coefficient $b_{k,j}$ for which $x^j$ is of the type (δ), the algorithm has failed, since $x^j$ is not admissible as a basis PP (cf. step 3).

Otherwise, $x^k$ is of the type (δ) and is neither a basis candidate nor a pivot PP. This means that the search path has met the exterior boundary $J_\delta$, and the algorithm has failed.

Once all PPs in the current test basis $\tilde Z$ pass the above test, a minimal PP basis $\tilde Z =: \bar Z_0$ of dimension $\bar m \le m$ has been found which, by construction, satisfies Assumption M, and the system (1.1) has $\bar m$ finite isolated zeros. In the case of failure, the fact has been established that (1.1) has zero manifolds (cf. Conjecture 4.4).

6. Multiplication tables, solution of eigenproblems, extraction of the zeros. If step 5 has been successful, we form, for some $\nu \in \{1,\dots,n\}$, the multiplication table mod $\mathcal{F}$ w.r.t. the chosen PP basis: For each $x^k \in \bar Z_0$,⁸

$$x_\nu\, x^k = x^{k+i_\nu} \equiv \sum_{j:\, x^j \in \bar Z_0} b_{k+i_\nu,\, j}\, x^j \bmod \mathcal{F}, \tag{5.5}$$

with $b_{k+i_\nu, j}$ from (5.3) if $x^{k+i_\nu} \notin \bar Z_0$, and

$$b_{k+i_\nu,\, j} := \delta_{k+i_\nu,\, j} \quad \text{if } x^{k+i_\nu} \in \bar Z_0, \tag{5.6}$$

and the $\bar m \times \bar m$ coefficients $b_{k+i_\nu, j}$ in (5.5)/(5.6) define the multiplication table $\bar B_\nu$ in the sense of Section 4.

Now we solve the eigenproblem $\bar B_\nu z = \lambda z$ by means of a standard library procedure and obtain $\bar m$ eigenvalues $\lambda_\mu$ and their associated eigenvectors $\bar z_\mu$, which we normalize such that the $x^0$-component is 1. If there are simple eigenvalues only, we can immediately extract the components of the $\bar m$ isolated zeros $\xi_\mu$: For $\mu = 1,\dots,\bar m$,

$$\xi_{\mu l} := \begin{cases} \lambda_\mu, & l = \nu, \\ \text{component of } \bar z_\mu \text{ at the position of the linear PP } x_l \text{ in } \bar Z_0, & l \ne \nu. \end{cases} \tag{5.7}$$

It may occur that a linear PP $x_l$ ($l \ne \nu$) is not a member of $\bar Z_0$, e.g. if the $l$-th equation in (1.1) is linear. In this case the solution component $\xi_{\mu l}$ may be obtained as a scalar product, on the basis of the representation (5.5)⁹ for $x_l$, with the numerical vector $\bar z_\mu$ substituted for $\bar Z_0$.

If there occur multiple eigenvalues, we solve the eigenproblems for further multiplication tables $\bar B_\nu$ and merge the results if necessary; cf. the end of Section 3 (Remark 3.9 ff.); we omit the details.

Remarks:

(i) In the non-deficient case, our algorithm automatically selects the generic PP basis $Z_0$ (cf. Sections 2 and 3), due to the strategy used in steps 3 and 4 above.

⁷ Naturally, "non-vanishing" has to be understood in a numerical sense.
⁸ $i_\nu$ denotes the $\nu$-th unit exponent $(0,\dots,0,1,0,\dots,0)$.
⁹ If $x_l$ is not a BPP, then it permits a representation (5.5), because 1 is a BPP and Assumption M is satisfied.

(ii) Recall that the PP $x^0$ is not admitted as a pivot candidate in step 3 above. Thus, for $\bar N < N$, it is possible that the addition of the $x^0$-column of $A_1$ to the pivot columns selected in step 3 increases the rank by 1. This has to be checked after step 3 has been completed. If so, there exists a linear combination of shifted equations (5.1) which results in "1 = 0", and the fact has been established that the given system (1.1) is inconsistent.

(iii) The number $N$ of equations represented by the matrix $A_1$ strongly increases with increasing $m_\nu$ and $n$ (cf. (2.2)), but $A_1$ has a very sparse structure. In special cases, $N$ can be chosen much smaller than in (2.2); cf. Remark 4.6. The use of sparse matrix techniques is therefore indispensable for an efficient implementation, and it will be favourable to combine the column selection procedure in step 3 with a fill-in minimization strategy.

(iv) For the treatment of cases where the determination of rank $A_1$ is numerically critical, one may also use a suitably adapted QR decomposition in our elimination procedure.

(v) Recall that our algorithm fails if there are finite zero manifolds (cf. Section 6, Example 6.5). We have, however, been successful in computing all finite isolated zeros for a number of systems with a zero manifold at infinity (cf. Section 6, Example 6.6). Obviously, the information about these manifolds is filtered out in steps 3 and 4.

6 Numerical Examples; Discussion

We have implemented a (not yet completely optimized) version of our algorithm in FORTRAN 77 and have successfully solved a number of non-deficient as well as deficient systems. In the following we present some simple examples featuring different types of deficiency and briefly comment on the results.

Example 6.1: $n = 2$, $m_1 = m_2 = 2$; $m = 4$, $N = 6$. Non-deficient case. The system

x x = 0,  x 2 x 1 4x 2 + x 1 = 0  (6.1)

has a full set of 4 finite isolated zeros. The $6 \times 6$-matrix $A_{11}$ in the sense of Section 2 is nonsingular, i.e. the 6 columns of the types (δ) and (γ) (cf. Section 5, step 3) are linearly independent. Thus our algorithm selects the generic PP basis $\bar Z_0 = Z_0 = \{ x^j : j = (0,0),\, (0,1),\, (1,0),\, (1,1) \}$ and delivers the isolated real zeros $(2a, 2a)$, $(-2a, -2a)$, $(4a, a)$ and $(-4a, -a)$, $a = 1/\sqrt5$, on the basis of the multiplication table for $\nu = 1$ or $\nu = 2$.

Example 6.2: $n = 2$, $m_1 = m_2 = 2$; $m = 4$, $N = 6$. One zero at infinity. Consider

$$x_1 x_2 - x_1 = 0, \qquad x_1^2 - x_2 = 0. \tag{6.2}$$

Due to the first equation in (6.2), the PPs $x_1 \in Z_0$ and $x_1 x_2 \in Z_0$ are not independent; so this is a deficient situation. Indeed, (6.2) has only 3 finite isolated zeros and one isolated zero at infinity.
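Taking the system of Example 6.2 as $f_1 = x_1x_2 - x_1$, $f_2 = x_1^2 - x_2$, the reduced table $\bar B_1$ over the basis $\bar Z_0 = (x_2, x_1, 1)$ can be written down by hand ($x_1 \cdot x_2 = x_1$ by $f_1$, $x_1 \cdot x_1 = x_2$ by $f_2$), and its eigenvectors reproduce the three finite zeros. A sketch:

```python
import numpy as np

# basis Z0bar = (x2, x1, 1), with x^0 = 1 as last component;
# multiplication by x1 mod F:  x1*x2 = x1,  x1*x1 = x2,  x1*1 = x1
B1 = np.array([[0., 1, 0],
               [1., 0, 0],
               [0., 1, 0]])
lam, V = np.linalg.eig(B1)
zeros = set()
for k in range(3):
    z = V[:, k] / V[-1, k]           # normalize: x^0-component = 1, cf. (5.7)
    # (xi_1, xi_2) = (eigenvalue of B1, x2-component of z)
    zeros.add((round(lam[k].real, 6), round(z[0].real, 6)))
assert zeros == {(0.0, 0.0), (1.0, 1.0), (-1.0, 1.0)}

# multiplication by x2:  x2*x2 = x2,  x2*x1 = x1,  x2*1 = x2
B2 = np.array([[1., 0, 0],
               [0., 1, 0],
               [1., 0, 0]])
assert np.allclose(B1 @ B2, B2 @ B1)
# for nu = 2 the eigenvalue x2 = 1 is 2-fold, as noted in the text:
assert sorted(np.round(np.linalg.eigvals(B2).real, 6)) == [0.0, 1.0, 1.0]
```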

Let us have a closer look at the matrix A^(1) for this example. Each column is labelled by its corresponding PP exponent; the six rows are the shifted equations f_1, x_1 f_1, x_2 f_1, f_2, x_1 f_2, x_2 f_2:

            (0,0) (0,1) (0,2) (0,3) (1,0) (1,1) (1,2) (2,0) (2,1) (3,0)

              0     0     0     0    -1     1     0     0     0     0
              0     0     0     0     0     0     0    -1     1     0
   A^(1) =    0     0     0     0     0    -1     1     0     0     0
              0    -1     0     0     0     0     0     1     0     0
              0     0     0     0     0    -1     0     0     0     1
              0     0    -1     0     0     0     0     0     1     0

A^(1) has the full rank N^0 = N = 6. But (0,3) is a zero column; in step 3 our algorithm chooses the PP x_1 x_2 as a pivot PP instead of x_2^3 and selects the reduced PP basis Z~ = { x^j : j = (0,0), (0,1), (1,0) } of the correct dimension m~ = 3; the zero column (0,3) is simply ignored. The isolated zeros (0,0), (1,1) and (-1,1) are obtained on the basis of the multiplication table for ν = 1. Note that for ν = 2 there occurs the 2-fold eigenvalue x_2 = 1.

Example 6.3: n = 2, m_1 = m_2 = 2; m = 4, N = 6. Deficient case with a full set of finite isolated zeros.

    x_1 x_2 - x_1 - 1 = 0
    x_1^2 + x_2^2 - x_2 = 0                                      (6.3)

has a full set of 4 finite isolated zeros. However, similarly as in (6.2), the first equation in (6.3) relates the PPs 1, x_1 and x_1 x_2 ∈ Z^0, such that Z^0 cannot be a PP basis. A^(1) has the full rank N^0 = N = 6. Here no zero columns occur; a linearly independent set of columns is obtained by choosing x_1 x_2 as a pivot PP instead of x_1^2. This leads to the PP basis Z~ = { x^j : j = (0,0), (0,1), (1,0), (2,0) } of the correct dimension m~ = m = 4, and our algorithm delivers two pairs of complex conjugate zeros on the basis of the multiplication table for ν = 1 or ν = 2.

Example 6.4: n = 3, m_1 = m_2 = 2, m_3 = 3; m = 12, N = 44. Several zeros at infinity.

The system of two quadratics and one cubic (its printed coefficients are not legible in this transcription) has 7 finite isolated zeros and 5 isolated zeros at infinity. It turns out that the matrix A^(1) has the rank N^0 = 39; thus 5 equations in A^(1) are redundant. In step 4 it turns out that 10 columns can be ignored (they correspond to the PP set Ẑ_1 of Section 4); our algorithm selects the PP basis Z~ = { x^j : j = (0,0,0), (0,0,1), (0,1,0), (0,1,2), (1,0,0), (1,0,1), (1,1,1) } of the correct dimension m~ = 7 and delivers the isolated zeros (three real zeros and two complex conjugate pairs) on the basis of any of the multiplication tables.
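For Example 6.2 the multiplication table can also be written down by hand: on the reduced PP basis {1, x_2, x_1}, multiplication by x_1 reduces mod the two equations (x_1 x_2 ≡ x_1, x_1^2 ≡ x_2), and the eigenpairs of the resulting 3x3 matrix B_1 reproduce the three finite zeros as in (1.5). A sketch (our own illustration; the basis ordering is our choice):

```python
import numpy as np

# Basis vector z = (1, x2, x1)^T for the reduced PP basis of Example 6.2.
# Multiplication by x1, reduced mod the ideal (x1*x2 ≡ x1, x1^2 ≡ x2):
#   x1 * 1  = x1           -> row (0, 0, 1)
#   x1 * x2 = x1*x2 ≡ x1   -> row (0, 0, 1)
#   x1 * x1 = x1^2   ≡ x2  -> row (0, 1, 0)
B1 = np.array([[0.0, 0.0, 1.0],
               [0.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]])

vals, vecs = np.linalg.eig(B1)
zeros = []
for k in range(3):
    v = vecs[:, k] / vecs[0, k]            # normalize the 1-component to 1
    zeros.append((v[2].real, v[1].real))   # read (x1, x2) off the BPP vector
print(sorted(zeros))   # ≈ [(-1.0, 1.0), (0.0, 0.0), (1.0, 1.0)]
```

The eigenvalues of B_1 are the x_1-coordinates of the zeros, and each eigenvector, scaled to leading component 1, is the corresponding zero BPP vector z_μ, exactly as (1.5) asserts.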

Example 6.5: n = 2, m_1 = m_2 = 2; m = 4, N = 6. Finite zero manifold.

    x_1 (x_1 - x_2) = 0
    (x_2 - 1)(x_1 - x_2) = 0                                     (6.5)

has a finite zero manifold x_1 = x_2, and our algorithm fails in step 4: it turns out that the PP x_1 x_2^2 does not permit a representation (5.3), which would be necessary to construct a valid PP basis.

Example 6.6: n = 3, m_1 = m_2 = m_3 = 2; m = 8, N = 27. Zero manifold at infinity.

The system of three quadratics (its printed coefficients are not legible in this transcription) has a zero manifold at infinity besides 4 finite isolated zeros. Here the matrix A^(1) has the rank N^0 = 24; thus 3 equations in A^(1) are redundant. In step 4 it turns out that 7 columns can be ignored (they correspond to the PP set Ẑ_1 of Section 4); our algorithm selects the PP basis Z~ = { x^j : j = (0,0,0), (0,0,1), (0,1,0), (1,0,0) } of the correct dimension m~ = 4 and delivers the isolated zeros (1, 3, 2), (5, 5, 2), (2, 3, 7) and (3, 3, 2) on the basis of the multiplication table for ν = 1. The occurrence of a zero manifold at infinity does not hurt us. Note, however, that for Example 6.6 a suitably reduced ansatz would lead to a more efficient algorithm.

References

[1] W. Auzinger, H. J. Stetter, An Elimination Algorithm for the Computation of All Zeros of a System of Multivariate Polynomial Equations, Intern. Series of Numer. Math. 86.
[2] F. S. Macaulay, Some Formulae in Elimination, Proc. London Math. Soc. (1) 35.
[3] T. Y. Li, T. Sauer, J. A. Yorke, The Random Product Homotopy and Deficient Polynomial Systems, Numer. Math. 51.


More information

4 Matrix Diagonalization and Eigensystems

4 Matrix Diagonalization and Eigensystems 14.102, Math for Economists Fall 2004 Lecture Notes, 9/21/2004 These notes are primarily based on those written by George Marios Angeletos for the Harvard Math Camp in 1999 and 2000, and updated by Stavros

More information

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det What is the determinant of the following matrix? 3 4 3 4 3 4 4 3 A 0 B 8 C 55 D 0 E 60 If det a a a 3 b b b 3 c c c 3 = 4, then det a a 4a 3 a b b 4b 3 b c c c 3 c = A 8 B 6 C 4 D E 3 Let A be an n n matrix

More information

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8.

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8. Linear Algebra M1 - FIB Contents: 5 Matrices, systems of linear equations and determinants 6 Vector space 7 Linear maps 8 Diagonalization Anna de Mier Montserrat Maureso Dept Matemàtica Aplicada II Translation:

More information

Topological Data Analysis - Spring 2018

Topological Data Analysis - Spring 2018 Topological Data Analysis - Spring 2018 Simplicial Homology Slightly rearranged, but mostly copy-pasted from Harer s and Edelsbrunner s Computational Topology, Verovsek s class notes. Gunnar Carlsson s

More information

Chapter 2. Vector Spaces

Chapter 2. Vector Spaces Chapter 2 Vector Spaces Vector spaces and their ancillary structures provide the common language of linear algebra, and, as such, are an essential prerequisite for understanding contemporary applied mathematics.

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB Glossary of Linear Algebra Terms Basis (for a subspace) A linearly independent set of vectors that spans the space Basic Variable A variable in a linear system that corresponds to a pivot column in the

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors Philippe B. Laval KSU Fall 2015 Philippe B. Laval (KSU) Eigenvalues and Eigenvectors Fall 2015 1 / 14 Introduction We define eigenvalues and eigenvectors. We discuss how to

More information

Euler characteristic of the truncated order complex of generalized noncrossing partitions

Euler characteristic of the truncated order complex of generalized noncrossing partitions Euler characteristic of the truncated order complex of generalized noncrossing partitions D. Armstrong and C. Krattenthaler Department of Mathematics, University of Miami, Coral Gables, Florida 33146,

More information

9. Integral Ring Extensions

9. Integral Ring Extensions 80 Andreas Gathmann 9. Integral ing Extensions In this chapter we want to discuss a concept in commutative algebra that has its original motivation in algebra, but turns out to have surprisingly many applications

More information

SOLUTION OF GENERALIZED LINEAR VECTOR EQUATIONS IN IDEMPOTENT ALGEBRA

SOLUTION OF GENERALIZED LINEAR VECTOR EQUATIONS IN IDEMPOTENT ALGEBRA , pp. 23 36, 2006 Vestnik S.-Peterburgskogo Universiteta. Matematika UDC 519.63 SOLUTION OF GENERALIZED LINEAR VECTOR EQUATIONS IN IDEMPOTENT ALGEBRA N. K. Krivulin The problem on the solutions of homogeneous

More information

arxiv: v1 [math.na] 5 May 2011

arxiv: v1 [math.na] 5 May 2011 ITERATIVE METHODS FOR COMPUTING EIGENVALUES AND EIGENVECTORS MAYSUM PANJU arxiv:1105.1185v1 [math.na] 5 May 2011 Abstract. We examine some numerical iterative methods for computing the eigenvalues and

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information