Solving Systems of Polynomial Equations


David Eberly, Geometric Tools, Redmond WA 98052
https://www.geometrictools.com/

This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

Created: October 30, 2000
Last Modified: June 30, 2008

Contents

1 Introduction
2 Linear Equations in One Formal Variable
3 Any Degree Equations in One Formal Variable
  3.1 Case n = 2 and m = 1
  3.2 Case n = 2 and m = 2
  3.3 General Case n ≥ m
4 Any Degree Equations in Any Formal Variables
5 Two Variables, One Quadratic Equation, One Linear Equation
6 Two Variables, Two Quadratic Equations
7 Three Variables, One Quadratic Equation, Two Linear Equations
8 Three Variables, Two Quadratic Equations, One Linear Equation
9 Three Variables, Three Quadratic Equations

1 Introduction

It is, of course, well known how to solve systems of linear equations. Given n equations in m unknowns, $\sum_{j=0}^{m-1} a_{ij} x_j = b_i$ for $0 \le i < n$, let the system be represented in matrix form by $Ax = b$, where $A = [a_{ij}]$ is $n \times m$, $x = [x_j]$ is $m \times 1$, and $b = [b_i]$ is $n \times 1$. The $n \times (m+1)$ augmented matrix $[A \mid b]$ is constructed and row-reduced to $[E \mid c]$. The row-reduced matrix has the properties:

- The first nonzero entry in each row is 1.
- If the first nonzero entry in row r is in column c, then all other entries in column c are 0.
- All zero rows occur last in the matrix.
- If the first nonzero entries in rows 1 through r occur in columns $c_1$ through $c_r$, then $c_1 < \cdots < c_r$.

If there is a row whose first m entries are zero but whose last entry is not zero, then the system of equations has no solution. If there is no such row, let $\rho = \mathrm{rank}([E \mid c])$ denote the number of nonzero rows of the augmented matrix. If $\rho = m$, the system has exactly one solution; in this case $E = I_m$, the $m \times m$ identity matrix, and the solution is $x = c$. If $\rho < m$, the system has infinitely many solutions, the solution set having dimension $m - \rho$. In this case the zero rows can be omitted to obtain the $\rho \times (m+1)$ matrix $[I_\rho \mid F \mid c^+]$, where $I_\rho$ is the $\rho \times \rho$ identity matrix, $F$ is $\rho \times (m - \rho)$, and $c^+$ consists of the first $\rho$ entries of c. Let x be partitioned into its first $\rho$ components $x^+$ and its remaining $m - \rho$ components $x^-$. The general solution to the system is $x^+ = c^+ - F x^-$, where the components of $x^-$ are the free parameters in the system.

Generic numerical linear system solvers for square systems (n = m) use row-reduction methods so that (1) the time required by the algorithm is small, in this case $O(n^3)$, and (2) the calculations are robust in the presence of a floating-point number system. It is possible to solve a linear system using cofactor expansions, but the time required is $O(n!)$, which makes this an expensive method for large n. However, n = 3 for many computer graphics applications. The overhead of a generic row-reduction solver normally costs more cycles than a simple cofactor expansion, and the matrix of coefficients in such applications is usually not singular (or nearly singular), so robustness is not an issue; for a system of this size the cofactor expansion is a better choice.

Systems of polynomial equations also arise regularly in computer graphics applications. For example, determining the intersection points of two circles in 2D is equivalent to solving two quadratic equations in two unknowns. Determining if two ellipsoids in 3D intersect is equivalent to showing that a system of three quadratic equations in three unknowns does not have any real-valued solutions. Computing the intersection points between a line and a polynomial patch involves setting up and solving systems of polynomial equations. A method for solving such systems involves eliminating variables in much the same way that you do for linear systems. However, the formal calculations have a flavor of cofactor expansions rather than row reductions.

2 Linear Equations in One Formal Variable

To motivate the general idea, consider a single equation $a_0 + a_1 x = 0$ in the variable x. If $a_1 \ne 0$, there is a unique solution $x = -a_0/a_1$. If $a_1 = 0$ and $a_0 \ne 0$, there are no solutions. If $a_0 = a_1 = 0$, any x is a solution.

Now consider two equations in the same variable, $a_0 + a_1 x = 0$ and $b_0 + b_1 x = 0$, where $a_1 \ne 0$ and $b_1 \ne 0$. The first equation is multiplied by $b_1$, the second equation is multiplied by $a_1$, and the two equations are subtracted to obtain $a_0 b_1 - a_1 b_0 = 0$. This is a necessary condition for a value x to be a solution of both equations. If the condition is satisfied, then solving the first equation yields $x = -a_0/a_1$. In terms of the row-reduction method for linear systems discussed in the last section, n = 2, m = 1, and the augmented matrix is listed below with its reduction steps:
$$\left[\begin{array}{cc} a_1 & -a_0 \\ b_1 & -b_0 \end{array}\right] \to \left[\begin{array}{cc} a_1 b_1 & -a_0 b_1 \\ a_1 b_1 & -a_1 b_0 \end{array}\right] \to \left[\begin{array}{cc} a_1 b_1 & -a_0 b_1 \\ 0 & a_0 b_1 - a_1 b_0 \end{array}\right] \to \left[\begin{array}{cc} 1 & -a_0/a_1 \\ 0 & a_0 b_1 - a_1 b_0 \end{array}\right]$$
The condition $a_0 b_1 - a_1 b_0 = 0$ is exactly the one mentioned in the previous section to guarantee that there is at least one solution.

The row reduction presented here is a formal construction. The existence of solutions and the solution x itself are obtained as functions of the parameters $a_0$, $a_1$, $b_0$, and $b_1$ of the system. These parameters are not necessarily known scalars and can themselves depend on other variables. Suppose that $a_0 = c_0 + c_1 y$ and $b_0 = d_0 + d_1 y$. The original two equations are $a_1 x + c_1 y + c_0 = 0$ and $b_1 x + d_1 y + d_0 = 0$, a system of two linear equations in two unknowns. The condition for existence of solutions is
$$0 = a_0 b_1 - a_1 b_0 = (c_0 + c_1 y) b_1 - a_1 (d_0 + d_1 y) = (b_1 c_0 - a_1 d_0) + (b_1 c_1 - a_1 d_1) y .$$
This condition is the result of starting with two equations in unknowns x and y and eliminating x to obtain a single equation for y. The y-equation has a unique solution as long as $b_1 c_1 - a_1 d_1 \ne 0$. Once y is computed, then $a_0 = c_0 + c_1 y$ is computed and $x = -a_0/a_1$ is computed.

Let us modify the problem once more and additionally set $a_1 = e_0 + e_1 y$ and $b_1 = f_0 + f_1 y$. The two equations are
$$e_1 xy + e_0 x + c_1 y + c_0 = 0$$
$$f_1 xy + f_0 x + d_1 y + d_0 = 0 .$$
This is a system of two quadratic equations in two unknowns. The condition for existence of solutions is
$$0 = a_0 b_1 - a_1 b_0 = (c_0 + c_1 y)(f_0 + f_1 y) - (e_0 + e_1 y)(d_0 + d_1 y) = (c_0 f_0 - e_0 d_0) + ((c_0 f_1 - e_0 d_1) + (c_1 f_0 - e_1 d_0)) y + (c_1 f_1 - e_1 d_1) y^2 .$$
This equation has at most two real-valued solutions for y. Each solution leads to a value $x = -a_0/a_1 = -(c_0 + c_1 y)/(e_0 + e_1 y)$. The two equations define hyperbolas in the plane whose asymptotes are axis-aligned. Geometrically, two such hyperbolas can intersect in at most two points.

Similar constructions arise when there are additional linear equations. For example, if $a_0 + a_1 x = 0$, $b_0 + b_1 x = 0$, and $c_0 + c_1 x = 0$, then solving pairwise leads to the conditions for existence: $a_0 b_1 - a_1 b_0 = 0$ and $a_0 c_1 - a_1 c_0 = 0$. If both are satisfied, then a solution is $x = -a_0/a_1$. Allowing $a_0 = a_{00} + a_{10} y + a_{01} z$, $b_0 = b_{00} + b_{10} y + b_{01} z$, and $c_0 = c_{00} + c_{10} y + c_{01} z$ leads to three linear equations in three unknowns. The two conditions for existence are two linear equations in y and z, an elimination of the variable x. These two equations can be further reduced by eliminating y in the same manner.

Note that in using this approach, there are many quantities of the form $AB - CD$. This is where my earlier comment comes in about the method having a flavor of cofactor expansions. These terms are essentially determinants of $2 \times 2$ submatrices of the augmented matrix.
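As a concrete check of the elimination just described, the following sketch solves the bilinear system by forming the quadratic in y and back-substituting for x. The function name and parameter layout are mine, not from the original text.

```python
import math

def solve_bilinear_system(e0, e1, c0, c1, f0, f1, d0, d1):
    """Solve e1*x*y + e0*x + c1*y + c0 = 0 and f1*x*y + f0*x + d1*y + d0 = 0
    by eliminating x; the resultant is a quadratic in y."""
    # Coefficients of (c1 f1 - e1 d1) y^2 + (c0 f1 + c1 f0 - e0 d1 - e1 d0) y
    #                + (c0 f0 - e0 d0) = 0.
    A = c1 * f1 - e1 * d1
    B = c0 * f1 + c1 * f0 - e0 * d1 - e1 * d0
    C = c0 * f0 - e0 * d0
    if A == 0:
        ys = [] if B == 0 else [-C / B]
    else:
        disc = B * B - 4 * A * C
        if disc < 0:
            return []          # no real-valued solutions
        r = math.sqrt(disc)
        ys = [(-B - r) / (2 * A), (-B + r) / (2 * A)]
    solutions = []
    for y in ys:
        a1 = e0 + e1 * y       # x-coefficient of the first equation at this y
        if a1 != 0:
            solutions.append((-(c0 + c1 * y) / a1, y))
    return solutions
```

For example, xy = 1 and x + y = 2 correspond to (e0, e1, c0, c1) = (0, 1, -1, 0) and (f0, f1, d0, d1) = (1, 0, -2, 1), and the solver reports the double intersection point (1, 1).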

3 Any Degree Equations in One Formal Variable

Consider the polynomial equation in x, $f(x) = \sum_{i=0}^{n} a_i x^i = 0$. The roots of this equation can be found either by closed-form solutions when $n \le 4$ or by numerical methods for any degree. How you go about computing polynomial roots is not discussed in this document. If you have a second polynomial equation in the same variable, $g(x) = \sum_{j=0}^{m} b_j x^j = 0$, the problem is to determine conditions for existence of a common solution, just as we did in the last section. The assumption is that $a_n \ne 0$ and $b_m \ne 0$. The last section handled the case n = m = 1.

3.1 Case n = 2 and m = 1

The equations are $f(x) = a_2 x^2 + a_1 x + a_0 = 0$ and $g(x) = b_1 x + b_0 = 0$, where $a_2 \ne 0$ and $b_1 \ne 0$. It must also be the case that
$$0 = b_1 f(x) - a_2 x g(x) = (a_1 b_1 - a_2 b_0) x + a_0 b_1 =: c_1 x + c_0 ,$$
where the coefficients $c_0$ and $c_1$ are defined by the last equality in the displayed equation. The two equations are now reduced to two linear equations, $b_1 x + b_0 = 0$ and $c_1 x + c_0 = 0$.

A bit more work must be done as compared to the last section. In that section the assumption was made that the leading coefficients were nonzero ($b_1 \ne 0$ and $c_1 \ne 0$). In the current construction, $c_1$ is derived from previously specified information, so we need to deal with the case when it is zero. If $c_1 = 0$, then $c_0 = 0$ is necessary for there to be a solution. Since $b_1 \ne 0$ by assumption, $c_0 = 0$ implies $a_0 = 0$. The condition $c_1 = 0$ implies $a_1 b_1 = a_2 b_0$. When $a_0 = 0$, a solution to the quadratic is x = 0. For x = 0 also to be a solution of $g(x) = 0$, we need $0 = g(0) = b_0$, which in turn implies $0 = a_2 b_0 = a_1 b_1$, or $a_1 = 0$ since $b_1 \ne 0$. In summary, this is the case $f(x) = a_2 x^2$ and $g(x) = b_1 x$. Also when $a_0 = 0$, the other root of the quadratic is determined by $a_2 x + a_1 = 0$. This equation and $b_1 x + b_0 = 0$ are the case discussed in the last section and can be reduced appropriately.
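The reduction chain can be checked numerically: form $c_1 = a_1 b_1 - a_2 b_0$ and $c_0 = a_0 b_1$, then apply the linear-pair condition from the previous section. A minimal sketch (the function name is mine):

```python
def quad_lin_resultant(a2, a1, a0, b1, b0):
    """Condition for f(x) = a2 x^2 + a1 x + a0 and g(x) = b1 x + b0
    to have a common root, via the reduction h(x) = b1 f(x) - a2 x g(x)."""
    c1 = a1 * b1 - a2 * b0   # coefficient of x in h(x)
    c0 = a0 * b1             # constant term of h(x)
    # Linear pair g(x) = 0, h(x) = 0: a common root requires c0 b1 - c1 b0 = 0,
    # which expands to a2 b0^2 - a1 b0 b1 + a0 b1^2.
    return c0 * b1 - c1 * b0
```

The returned quantity vanishes exactly when the quadratic and the linear polynomial share a root; for example it is 0 for f(x) = (x - 2)(x - 3) paired with g(x) = x - 2, and nonzero when paired with g(x) = x - 1.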
One could also directly solve $g(x) = 0$ for $x = -b_0/b_1$, substitute into the quadratic, and multiply by $b_1^2$ to obtain the existence condition $a_2 b_0^2 - a_1 b_0 b_1 + a_0 b_1^2 = 0$.

3.2 Case n = 2 and m = 2

The equations are $a_2 x^2 + a_1 x + a_0 = 0$ and $b_2 x^2 + b_1 x + b_0 = 0$, where $a_2 \ne 0$ and $b_2 \ne 0$. It must also be the case that
$$0 = b_2 f(x) - a_2 g(x) = (a_1 b_2 - a_2 b_1) x + (a_0 b_2 - a_2 b_0) =: c_1 x + c_0 .$$
The two quadratic equations are reduced to a single linear equation whose coefficients $c_0$ and $c_1$ are defined by the last equality in the displayed equation. If $c_1 = 0$, then for there to be solutions it is also necessary that $c_0 = 0$. In this case, consider that
$$0 = b_0 f(x) - a_0 g(x) = (a_2 b_0 - a_0 b_2) x^2 + (a_1 b_0 - a_0 b_1) x = -c_0 x^2 + (a_1 b_0 - a_0 b_1) x = (a_1 b_0 - a_0 b_1) x .$$
If $a_1 b_0 - a_0 b_1 \ne 0$, then the solution must be x = 0 and the consequences are $0 = f(0) = a_0$ and $0 = g(0) = b_0$. But this contradicts $a_1 b_0 - a_0 b_1 \ne 0$. Therefore, if $a_1 b_2 - a_2 b_1 = 0$ and $a_0 b_2 - a_2 b_0 = 0$, then $a_1 b_0 - a_0 b_1 = 0$ must follow. These three conditions imply that $(a_0, a_1, a_2) \times (b_0, b_1, b_2) = (0, 0, 0)$, so $(b_0, b_1, b_2)$ is a multiple

of $(a_0, a_1, a_2)$, and the two quadratic equations were really only one equation. Now if $c_1 \ne 0$, we have reduced the problem to the case n = 2 and m = 1, which was discussed in the previous subsection.

A variation is to compute
$$a_2 g(x) - b_2 f(x) = (a_2 b_1 - a_1 b_2) x + (a_2 b_0 - a_0 b_2) = 0$$
and
$$b_1 f(x) - a_1 g(x) = (a_2 b_1 - a_1 b_2) x^2 + (a_0 b_1 - a_1 b_0) = 0 .$$
Solve the first equation for $x = (a_0 b_2 - a_2 b_0)/(a_2 b_1 - a_1 b_2)$, replace it in the second equation, and multiply by the denominator term to obtain
$$(a_2 b_1 - a_1 b_2)(a_1 b_0 - a_0 b_1) - (a_2 b_0 - a_0 b_2)^2 = 0 .$$

3.3 General Case n ≥ m

The elimination process is recursive. Given that the elimination process has already been established for the cases with degrees smaller than n, we need only reduce the current case, f(x) of degree n and g(x) of degree $m \le n$, to one with smaller degrees. It is assumed here that $a_n \ne 0$ and $b_m \ne 0$. Define $h(x) = b_m f(x) - a_n x^{n-m} g(x)$. The conditions f(x) = 0 and g(x) = 0 imply that
$$0 = h(x) = b_m \sum_{i=0}^{n} a_i x^i - a_n x^{n-m} \sum_{i=0}^{m} b_i x^i = \sum_{i=0}^{n} a_i b_m x^i - \sum_{i=0}^{m} a_n b_i x^{n-m+i} = \sum_{i=0}^{n-m-1} a_i b_m x^i + \sum_{i=n-m}^{n-1} (a_i b_m - a_n b_{i-(n-m)}) x^i ,$$
where it is understood that $\sum_{i=0}^{-1} (\cdot) = 0$ (summations are zero whenever the upper index is smaller than the lower index). The polynomial h(x) has degree at most n − 1. Therefore, the polynomials g(x) and h(x) both have degrees smaller than n, so the smaller-degree algorithms already exist to solve them.

4 Any Degree Equations in Any Formal Variables

A general system of polynomial equations can always be written formally as a system of polynomial equations in one of the variables. The conditions for existence, as constructed formally in the last section, are new polynomial equations in the remaining variables. Moreover, these equations typically have higher degree than the original equations. As variables are eliminated, the degrees of the reduced equations increase. Eventually the system is reduced to a single (high-degree) polynomial equation in one variable. Given solutions to this equation, they can be substituted into the previous conditions of existence to solve for other variables. This is similar to the back substitution used in linear system solvers.

5 Two Variables, One Quadratic Equation, One Linear Equation

The equations are $Q(x, y) = \alpha_{00} + \alpha_{10} x + \alpha_{01} y + \alpha_{20} x^2 + \alpha_{11} xy + \alpha_{02} y^2 = 0$ and $L(x, y) = \beta_{00} + \beta_{10} x + \beta_{01} y = 0$. These can be written formally as polynomials in y,
$$f(y) = (\alpha_{02}) y^2 + (\alpha_{11} x + \alpha_{01}) y + (\alpha_{20} x^2 + \alpha_{10} x + \alpha_{00}) = a_2 y^2 + a_1 y + a_0$$
and
$$g(y) = (\beta_{01}) y + (\beta_{10} x + \beta_{00}) = b_1 y + b_0 .$$
Applying the condition $a_2 b_0^2 - a_1 b_0 b_1 + a_0 b_1^2 = 0$ of Section 3.1, the condition for existence of a common solution to f(y) = 0 and g(y) = 0 is $h(x) = h_0 + h_1 x + h_2 x^2 = 0$, where
$$h_0 = \alpha_{02} \beta_{00}^2 - \alpha_{01} \beta_{00} \beta_{01} + \alpha_{00} \beta_{01}^2$$
$$h_1 = \alpha_{10} \beta_{01}^2 + 2 \alpha_{02} \beta_{00} \beta_{10} - \alpha_{11} \beta_{00} \beta_{01} - \alpha_{01} \beta_{01} \beta_{10}$$
$$h_2 = \alpha_{20} \beta_{01}^2 - \alpha_{11} \beta_{01} \beta_{10} + \alpha_{02} \beta_{10}^2$$
Given a root $\bar{x}$ of $h(x) = 0$, the formal value of y is obtained from $L(\bar{x}, y) = 0$ as $y = -(\beta_{00} + \beta_{10} \bar{x})/\beta_{01}$.

6 Two Variables, Two Quadratic Equations

Consider two quadratic equations $F(x, y) = \alpha_{00} + \alpha_{10} x + \alpha_{01} y + \alpha_{20} x^2 + \alpha_{11} xy + \alpha_{02} y^2 = 0$ and $G(x, y) = \beta_{00} + \beta_{10} x + \beta_{01} y + \beta_{20} x^2 + \beta_{11} xy + \beta_{02} y^2 = 0$. These can be written formally as polynomials in x,
$$f(x) = (\alpha_{20}) x^2 + (\alpha_{11} y + \alpha_{10}) x + (\alpha_{02} y^2 + \alpha_{01} y + \alpha_{00}) = a_2 x^2 + a_1 x + a_0$$
and
$$g(x) = (\beta_{20}) x^2 + (\beta_{11} y + \beta_{10}) x + (\beta_{02} y^2 + \beta_{01} y + \beta_{00}) = b_2 x^2 + b_1 x + b_0 .$$
The condition for existence is
$$0 = (a_2 b_1 - a_1 b_2)(a_1 b_0 - a_0 b_1) - (a_2 b_0 - a_0 b_2)^2 = \sum_{i=0}^{4} h_i y^i =: h(y)$$
where
$$h_0 = d_{00} d_{10} - d_{20}^2$$
$$h_1 = d_{01} d_{10} + d_{00} d_{11} - 2 d_{20} d_{21}$$
$$h_2 = d_{01} d_{11} + d_{00} d_{12} - d_{21}^2 - 2 d_{20} d_{22}$$
$$h_3 = d_{01} d_{12} + d_{00} d_{13} - 2 d_{21} d_{22}$$
$$h_4 = d_{01} d_{13} - d_{22}^2$$
with
$$d_{00} = \alpha_{20} \beta_{10} - \beta_{20} \alpha_{10}, \quad d_{01} = \alpha_{20} \beta_{11} - \beta_{20} \alpha_{11},$$
$$d_{10} = \alpha_{10} \beta_{00} - \beta_{10} \alpha_{00},$$
$$d_{11} = \alpha_{11} \beta_{00} + \alpha_{10} \beta_{01} - \beta_{11} \alpha_{00} - \beta_{10} \alpha_{01},$$
$$d_{12} = \alpha_{11} \beta_{01} + \alpha_{10} \beta_{02} - \beta_{11} \alpha_{01} - \beta_{10} \alpha_{02},$$
$$d_{13} = \alpha_{11} \beta_{02} - \beta_{11} \alpha_{02},$$
$$d_{20} = \alpha_{20} \beta_{00} - \beta_{20} \alpha_{00}, \quad d_{21} = \alpha_{20} \beta_{01} - \beta_{20} \alpha_{01}, \quad d_{22} = \alpha_{20} \beta_{02} - \beta_{20} \alpha_{02} .$$
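Assuming numpy is available, the d-coefficients and the quartic h(y) can be assembled into a small numeric solver for two quadratic curves. The function name and the dictionary-of-coefficients interface below are mine, not part of the original:

```python
import numpy as np

def intersect_quadratics(alpha, beta):
    """Find real common roots of two quadratics
    F(x,y) = sum alpha[i,j] x^i y^j = 0 and G(x,y) = sum beta[i,j] x^i y^j = 0,
    by forming the quartic resultant h(y) and back-substituting into F."""
    a, b = alpha, beta
    d00 = a[2,0]*b[1,0] - b[2,0]*a[1,0]
    d01 = a[2,0]*b[1,1] - b[2,0]*a[1,1]
    d10 = a[1,0]*b[0,0] - b[1,0]*a[0,0]
    d11 = a[1,1]*b[0,0] + a[1,0]*b[0,1] - b[1,1]*a[0,0] - b[1,0]*a[0,1]
    d12 = a[1,1]*b[0,1] + a[1,0]*b[0,2] - b[1,1]*a[0,1] - b[1,0]*a[0,2]
    d13 = a[1,1]*b[0,2] - b[1,1]*a[0,2]
    d20 = a[2,0]*b[0,0] - b[2,0]*a[0,0]
    d21 = a[2,0]*b[0,1] - b[2,0]*a[0,1]
    d22 = a[2,0]*b[0,2] - b[2,0]*a[0,2]
    # h(y) coefficients, highest degree first (numpy.roots convention).
    h = [d01*d13 - d22**2,
         d01*d12 + d00*d13 - 2*d21*d22,
         d01*d11 + d00*d12 - d21**2 - 2*d20*d22,
         d01*d10 + d00*d11 - 2*d20*d21,
         d00*d10 - d20**2]
    sols = []
    for y in np.roots(h):
        if abs(y.imag) > 1e-8:
            continue
        y = y.real
        # Solve the quadratic F(x, y) = 0 for x; keep roots that satisfy G.
        fx = [a[2,0], a[1,1]*y + a[1,0], a[0,2]*y**2 + a[0,1]*y + a[0,0]]
        for x in np.roots(fx):
            if abs(x.imag) > 1e-8:
                continue
            x = x.real
            g = sum(c * x**i * y**j for (i, j), c in b.items())
            if abs(g) < 1e-6:
                sols.append((x, y))
    return sols
```

As a usage example, the unit circle $x^2 + y^2 - 1 = 0$ and the shifted circle $x^2 - 2x + y^2 = 0$ intersect at $(1/2, \pm\sqrt{3}/2)$, which the sketch recovers.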

For each root $\bar y$ of $h(y) = 0$, the quadratic $F(x, \bar y) = 0$ can be solved for values $\bar x$. To make sure you have a solution to both equations, test that $G(\bar x, \bar y) = 0$.

7 Three Variables, One Quadratic Equation, Two Linear Equations

Let the three equations be $F(x, y, z) = \sum_{0 \le i+j+k \le 2} \alpha_{ijk} x^i y^j z^k = 0$, $G(x, y, z) = \sum_{0 \le i+j+k \le 1} \beta_{ijk} x^i y^j z^k = 0$, and $H(x, y, z) = \sum_{0 \le i+j+k \le 1} \gamma_{ijk} x^i y^j z^k = 0$. As polynomial equations in x, these are written as $f(x) = a_2 x^2 + a_1 x + a_0 = 0$, $g(x) = b_1 x + b_0 = 0$, and $h(x) = c_1 x + c_0 = 0$, where
$$a_0 = \sum_{0 \le j+k \le 2} \alpha_{0jk} y^j z^k, \quad a_1 = \sum_{0 \le j+k \le 1} \alpha_{1jk} y^j z^k, \quad a_2 = \alpha_{200},$$
$$b_0 = \beta_{010} y + \beta_{001} z + \beta_{000}, \quad b_1 = \beta_{100},$$
$$c_0 = \gamma_{010} y + \gamma_{001} z + \gamma_{000}, \quad c_1 = \gamma_{100} .$$
The condition for existence of x-solutions to f = 0 and g = 0 is
$$0 = a_2 b_0^2 - a_1 b_0 b_1 + a_0 b_1^2 = \sum_{0 \le i+j \le 2} d_{ij} y^i z^j =: D(y, z)$$
where
$$d_{20} = \alpha_{200} \beta_{010}^2 - \beta_{100} \alpha_{110} \beta_{010} + \beta_{100}^2 \alpha_{020}$$
$$d_{11} = 2 \alpha_{200} \beta_{010} \beta_{001} - \beta_{100} (\alpha_{110} \beta_{001} + \alpha_{101} \beta_{010}) + \beta_{100}^2 \alpha_{011}$$
$$d_{02} = \alpha_{200} \beta_{001}^2 - \beta_{100} \alpha_{101} \beta_{001} + \beta_{100}^2 \alpha_{002}$$
$$d_{10} = 2 \alpha_{200} \beta_{010} \beta_{000} - \beta_{100} (\alpha_{110} \beta_{000} + \alpha_{100} \beta_{010}) + \beta_{100}^2 \alpha_{010}$$
$$d_{01} = 2 \alpha_{200} \beta_{001} \beta_{000} - \beta_{100} (\alpha_{101} \beta_{000} + \alpha_{100} \beta_{001}) + \beta_{100}^2 \alpha_{001}$$
$$d_{00} = \alpha_{200} \beta_{000}^2 - \beta_{100} \alpha_{100} \beta_{000} + \beta_{100}^2 \alpha_{000}$$
The condition for existence of x-solutions to g = 0 and h = 0 is
$$0 = b_0 c_1 - b_1 c_0 = e_{10} y + e_{01} z + e_{00} =: E(y, z)$$
where
$$e_{10} = \beta_{010} \gamma_{100} - \gamma_{010} \beta_{100}, \quad e_{01} = \beta_{001} \gamma_{100} - \gamma_{001} \beta_{100}, \quad e_{00} = \beta_{000} \gamma_{100} - \gamma_{000} \beta_{100} .$$
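Because the $d_{ij}$ table is easy to mistype, it is worth spot-checking it against the defining identity $a_2 b_0^2 - a_1 b_0 b_1 + a_0 b_1^2$ at random points $(y, z)$. A sketch (the function name and the tuple-keyed coefficient dictionaries are mine):

```python
import random

def check_section7_table(alpha, beta, trials=5):
    """Verify numerically that D(y,z) built from the d_ij table equals
    a2 b0^2 - a1 b0 b1 + a0 b1^2, where alpha/beta map (i,j,k) -> coefficient
    of x^i y^j z^k in the quadratic F and the linear G."""
    A, B = alpha, beta
    d20 = A[2,0,0]*B[0,1,0]**2 - B[1,0,0]*A[1,1,0]*B[0,1,0] + B[1,0,0]**2*A[0,2,0]
    d11 = (2*A[2,0,0]*B[0,1,0]*B[0,0,1]
           - B[1,0,0]*(A[1,1,0]*B[0,0,1] + A[1,0,1]*B[0,1,0]) + B[1,0,0]**2*A[0,1,1])
    d02 = A[2,0,0]*B[0,0,1]**2 - B[1,0,0]*A[1,0,1]*B[0,0,1] + B[1,0,0]**2*A[0,0,2]
    d10 = (2*A[2,0,0]*B[0,1,0]*B[0,0,0]
           - B[1,0,0]*(A[1,1,0]*B[0,0,0] + A[1,0,0]*B[0,1,0]) + B[1,0,0]**2*A[0,1,0])
    d01 = (2*A[2,0,0]*B[0,0,1]*B[0,0,0]
           - B[1,0,0]*(A[1,0,1]*B[0,0,0] + A[1,0,0]*B[0,0,1]) + B[1,0,0]**2*A[0,0,1])
    d00 = A[2,0,0]*B[0,0,0]**2 - B[1,0,0]*A[1,0,0]*B[0,0,0] + B[1,0,0]**2*A[0,0,0]
    for _ in range(trials):
        y, z = random.uniform(-1, 1), random.uniform(-1, 1)
        a0 = sum(A[0,j,k]*y**j*z**k for j in range(3) for k in range(3) if j + k <= 2)
        a1 = A[1,0,0] + A[1,1,0]*y + A[1,0,1]*z
        a2 = A[2,0,0]
        b0 = B[0,0,0] + B[0,1,0]*y + B[0,0,1]*z
        b1 = B[1,0,0]
        direct = a2*b0**2 - a1*b0*b1 + a0*b1**2
        table = d20*y**2 + d11*y*z + d02*z**2 + d10*y + d01*z + d00
        assert abs(direct - table) < 1e-9
    return True
```

Running this with randomly generated coefficients confirms the restored minus signs in the table.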

We now have two equations in two unknowns, a quadratic equation $D(y, z) = 0$ and a linear equation $E(y, z) = 0$. This case was handled in an earlier section. For each solution $(\bar y, \bar z)$, a corresponding x-value is computed by solving either $G(x, \bar y, \bar z) = 0$ or $H(x, \bar y, \bar z) = 0$ for x.

8 Three Variables, Two Quadratic Equations, One Linear Equation

Let the three equations be $F(x, y, z) = \sum_{0 \le i+j+k \le 2} \alpha_{ijk} x^i y^j z^k = 0$, $G(x, y, z) = \sum_{0 \le i+j+k \le 2} \beta_{ijk} x^i y^j z^k = 0$, and $H(x, y, z) = \sum_{0 \le i+j+k \le 1} \gamma_{ijk} x^i y^j z^k = 0$. As polynomial equations in x, these are written as $f(x) = a_2 x^2 + a_1 x + a_0 = 0$, $g(x) = b_2 x^2 + b_1 x + b_0 = 0$, and $h(x) = c_1 x + c_0 = 0$, where
$$a_0 = \sum_{0 \le j+k \le 2} \alpha_{0jk} y^j z^k, \quad a_1 = \sum_{0 \le j+k \le 1} \alpha_{1jk} y^j z^k, \quad a_2 = \alpha_{200},$$
$$b_0 = \sum_{0 \le j+k \le 2} \beta_{0jk} y^j z^k, \quad b_1 = \sum_{0 \le j+k \le 1} \beta_{1jk} y^j z^k, \quad b_2 = \beta_{200},$$
$$c_0 = \gamma_{010} y + \gamma_{001} z + \gamma_{000}, \quad c_1 = \gamma_{100} .$$
The condition for existence of x-solutions to f = 0 and h = 0 is
$$0 = a_2 c_0^2 - a_1 c_0 c_1 + a_0 c_1^2 = \sum_{0 \le i+j \le 2} d_{ij} y^i z^j =: D(y, z)$$
where
$$d_{20} = \alpha_{200} \gamma_{010}^2 - \gamma_{100} \alpha_{110} \gamma_{010} + \gamma_{100}^2 \alpha_{020}$$
$$d_{11} = 2 \alpha_{200} \gamma_{010} \gamma_{001} - \gamma_{100} (\alpha_{110} \gamma_{001} + \alpha_{101} \gamma_{010}) + \gamma_{100}^2 \alpha_{011}$$
$$d_{02} = \alpha_{200} \gamma_{001}^2 - \gamma_{100} \alpha_{101} \gamma_{001} + \gamma_{100}^2 \alpha_{002}$$
$$d_{10} = 2 \alpha_{200} \gamma_{010} \gamma_{000} - \gamma_{100} (\alpha_{110} \gamma_{000} + \alpha_{100} \gamma_{010}) + \gamma_{100}^2 \alpha_{010}$$
$$d_{01} = 2 \alpha_{200} \gamma_{001} \gamma_{000} - \gamma_{100} (\alpha_{101} \gamma_{000} + \alpha_{100} \gamma_{001}) + \gamma_{100}^2 \alpha_{001}$$
$$d_{00} = \alpha_{200} \gamma_{000}^2 - \gamma_{100} \alpha_{100} \gamma_{000} + \gamma_{100}^2 \alpha_{000}$$
The condition for existence of x-solutions to g = 0 and h = 0 is
$$0 = b_2 c_0^2 - b_1 c_0 c_1 + b_0 c_1^2 = \sum_{0 \le i+j \le 2} e_{ij} y^i z^j =: E(y, z)$$

where
$$e_{20} = \beta_{200} \gamma_{010}^2 - \gamma_{100} \beta_{110} \gamma_{010} + \gamma_{100}^2 \beta_{020}$$
$$e_{11} = 2 \beta_{200} \gamma_{010} \gamma_{001} - \gamma_{100} (\beta_{110} \gamma_{001} + \beta_{101} \gamma_{010}) + \gamma_{100}^2 \beta_{011}$$
$$e_{02} = \beta_{200} \gamma_{001}^2 - \gamma_{100} \beta_{101} \gamma_{001} + \gamma_{100}^2 \beta_{002}$$
$$e_{10} = 2 \beta_{200} \gamma_{010} \gamma_{000} - \gamma_{100} (\beta_{110} \gamma_{000} + \beta_{100} \gamma_{010}) + \gamma_{100}^2 \beta_{010}$$
$$e_{01} = 2 \beta_{200} \gamma_{001} \gamma_{000} - \gamma_{100} (\beta_{101} \gamma_{000} + \beta_{100} \gamma_{001}) + \gamma_{100}^2 \beta_{001}$$
$$e_{00} = \beta_{200} \gamma_{000}^2 - \gamma_{100} \beta_{100} \gamma_{000} + \gamma_{100}^2 \beta_{000}$$

We now have two equations in two unknowns, quadratic equations $D(y, z) = 0$ and $E(y, z) = 0$. This case was handled in an earlier section. For each solution $(\bar y, \bar z)$, a corresponding x-value is computed by solving $F(x, \bar y, \bar z) = 0$ for values $\bar x$. It should be verified that $G(\bar x, \bar y, \bar z) = 0$ and $H(\bar x, \bar y, \bar z) = 0$.

9 Three Variables, Three Quadratic Equations

Let the three equations be $F(x, y, z) = \sum_{0 \le i+j+k \le 2} \alpha_{ijk} x^i y^j z^k = 0$, $G(x, y, z) = \sum_{0 \le i+j+k \le 2} \beta_{ijk} x^i y^j z^k = 0$, and $H(x, y, z) = \sum_{0 \le i+j+k \le 2} \gamma_{ijk} x^i y^j z^k = 0$. As polynomial equations in x, these are written as $f(x) = a_2 x^2 + a_1 x + a_0 = 0$, $g(x) = b_2 x^2 + b_1 x + b_0 = 0$, and $h(x) = c_2 x^2 + c_1 x + c_0 = 0$, where
$$a_0 = \sum_{0 \le j+k \le 2} \alpha_{0jk} y^j z^k, \quad a_1 = \sum_{0 \le j+k \le 1} \alpha_{1jk} y^j z^k, \quad a_2 = \alpha_{200},$$
$$b_0 = \sum_{0 \le j+k \le 2} \beta_{0jk} y^j z^k, \quad b_1 = \sum_{0 \le j+k \le 1} \beta_{1jk} y^j z^k, \quad b_2 = \beta_{200},$$
$$c_0 = \sum_{0 \le j+k \le 2} \gamma_{0jk} y^j z^k, \quad c_1 = \sum_{0 \le j+k \le 1} \gamma_{1jk} y^j z^k, \quad c_2 = \gamma_{200} .$$
The condition for existence of x-solutions to f = 0 and g = 0 is
$$0 = (a_2 b_1 - a_1 b_2)(a_1 b_0 - a_0 b_1) - (a_2 b_0 - a_0 b_2)^2 = \sum_{0 \le i+j \le 4} d_{ij} y^i z^j =: D(y, z) .$$
The condition for existence of x-solutions to f = 0 and h = 0 is
$$0 = (a_2 c_1 - a_1 c_2)(a_1 c_0 - a_0 c_1) - (a_2 c_0 - a_0 c_2)^2 = \sum_{0 \le i+j \le 4} e_{ij} y^i z^j =: E(y, z) .$$
The coefficients $d_{ij}$ and $e_{ij}$ are obtained by expanding these products; the expansions are lengthy and are not listed here.

The two polynomials D(y, z) and E(y, z) are fourth degree. The equations D(y, z) = 0 and E(y, z) = 0 can be written formally as polynomial equations in y, $d(y) = \sum_{i=0}^{4} \delta_i y^i = 0$ and $e(y) = \sum_{i=0}^{4} \epsilon_i y^i = 0$, where the coefficients are polynomials in z with $\mathrm{degree}(\delta_i(z)) = 4 - i$ and $\mathrm{degree}(\epsilon_i(z)) = 4 - i$. The construction for eliminating y results in a polynomial in z obtained by computing the determinant of the Bézout matrix for d and e, the $4 \times 4$ matrix $M = [M_{ij}]$ with
$$M_{ij} = \sum_{k = \max(4-i,\,4-j)}^{\min(4,\,7-i-j)} w_{k,\,7-i-j-k} \quad \text{for } 0 \le i \le 3 \text{ and } 0 \le j \le 3,$$
with $w_{i,j} = \delta_i \epsilon_j - \delta_j \epsilon_i$ for $0 \le i \le 4$ and $0 \le j \le 4$. In expanded form,
$$M = \left[\begin{array}{cccc}
w_{4,3} & w_{4,2} & w_{4,1} & w_{4,0} \\
w_{4,2} & w_{3,2} + w_{4,1} & w_{3,1} + w_{4,0} & w_{3,0} \\
w_{4,1} & w_{3,1} + w_{4,0} & w_{2,1} + w_{3,0} & w_{2,0} \\
w_{4,0} & w_{3,0} & w_{2,0} & w_{1,0}
\end{array}\right] .$$
The degree of $w_{i,j}$ as a polynomial in z is $8 - i - j$. The Bézout determinant $\det(M(z))$ is a polynomial of degree 16 in z. For each solution $\bar z$ of $\det(M(z)) = 0$, corresponding values $\bar y$ are obtained by solving the quartic equation $D(y, \bar z) = 0$. Finally, corresponding values $\bar x$ are obtained by solving the quadratic equation $F(x, \bar y, \bar z) = 0$. Any potential solution $(\bar x, \bar y, \bar z)$ should be tested to verify that $G(\bar x, \bar y, \bar z) = 0$ and $H(\bar x, \bar y, \bar z) = 0$.
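As a sketch of the final elimination step, the Bézout matrix and its determinant can be computed directly once numeric values are substituted for z, so that the $\delta_i$ and $\epsilon_i$ are plain numbers rather than polynomials in z. This simplification, the use of numpy, and the function name are my assumptions:

```python
import numpy as np

def bezout_det(delta, eps):
    """Determinant of the 4x4 Bezout matrix of two quartics
    d(y) = sum_i delta[i] y^i and e(y) = sum_i eps[i] y^i.
    The determinant vanishes when d and e have a common root."""
    w = lambda i, j: delta[i] * eps[j] - delta[j] * eps[i]
    M = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            # M[i,j] = sum of w(k, 7-i-j-k) over the valid index range.
            M[i, j] = sum(w(k, 7 - i - j - k)
                          for k in range(max(4 - i, 4 - j), min(4, 7 - i - j) + 1))
    return np.linalg.det(M)
```

For example, with d(y) = (y-1)(y-2)(y-3)(y-4) and e(y) = (y-1)(y+1)(y+2)(y+3), which share the root y = 1, the determinant is (numerically) zero, while pairing d with $y^4 + 1$ gives a determinant that is far from zero.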