Topics in algebraic geometry and geometric modeling

Pål Hermunn Johansen


Contents

1 Introduction

2 The tangent developable
   2.1 Introduction
   2.2 Tangent developables
   2.3 Local properties of a real tangent developable
   2.4 Illustrations
   2.5 The tangent developable of a complex algebraic curve

3 Closest points, moving surfaces and algebraic geometry
   3.1 Introduction
   3.2 The underlying idea
   3.3 Degrees of the moving surfaces
   3.4 Implementation of a test algorithm
   3.5 Discussion

4 Solving a closest point problem by subdivision
   4.1 Introduction
   4.2 Definition of the problem
      4.2.1 The basic method
      4.2.2 Quality of the output and special cases
   4.3 Improving the basic method
      4.3.1 Changing the subdivision
      4.3.2 Changing the multiplication algorithm to allow an early exit
      4.3.3 Introducing a box test and a plane test
      Using the second order derivatives
      The recursive algorithm explained
      The basic method with the box and plane tests
      Doing a preconditioned constant sign test
      Speed measurements
   4.4 Error analysis
   4.5 Conclusion

5 Monoid hypersurfaces
   5.1 Introduction
   5.2 Basic properties
   5.3 Monoid surfaces
   5.4 Quartic monoid surfaces

6 The strata of quartic monoids
   6.1 Definition of the strata
   Types 1 to ...
   Type 9 - the tangent cone is smooth

Chapter 1

Introduction

This PhD thesis was started as a part of the European Community funded project Intersection algorithms for geometry based IT-applications using approximate algebraic methods. The project was coordinated by Tor Dokken of SINTEF in Oslo and included partners from many countries: the University of Cantabria from Spain, INRIA and the University of Nice from France, think3 from Italy and France, the University of Linz from Austria, and SINTEF and the University of Oslo from Norway. The vision of the project was to bring algebraic geometry and approximation theory together and apply it to problems in Computer Aided Geometric Design (CAGD). The imperfect quality of intersection algorithms in CAGD systems imposes high costs on the product creation process in industry, and it was deemed necessary to find new and better methods for solving these problems. A better understanding of the geometrical objects in CAGD is needed to fulfill this goal.

The vision and goal of the project have certainly influenced my work and focus as a PhD student. As a direct result of that, the topics in this thesis cover different parts of the project plan. Chapters 2, 5 and 6 provide insight into objects that are interesting in CAGD and geometric modelling. Chapters 3 and 4, on the other hand, investigate an important problem in CAGD and other industrial applications, the closest point problem. The investigation covers both theoretical aspects and pure optimization. The closest point problem is so common that having fast algorithms is of great importance. The topics of the chapters may seem unrelated, but they all address central problems in applied geometry.

Chapter 2 is a study of tangent developables and was published [15] in the proceedings of the first COMPASS workshop. Developable surfaces are common in CAGD. A good understanding of these objects is helpful for programmers who wish to use them in CAGD application programs. The tangent developable of a curve C ⊂ P^3 is a singular surface with cuspidal edges along C and the flex tangents of C. It also contains a multiple curve, typically double. We express the degree of this curve in terms of the invariants of C. In many cases we can describe the intersections of C with the multiple curve, and pictures of these cases are provided.

Chapter 3 describes a method for computing closest points to a parametric surface patch. The chapter is the result of collaboration with Jan B. Thomassen and Tor Dokken and was published [41] in the proceedings of the conference Spline curves and surfaces in Tromsø. The article was a collaboration between three people, and my responsibility was mainly the degree formulas and the text in Section 3.3. The method for computing closest points to a given parametric surface patch is based on moving surfaces. For each parametric surface there are two natural moving surfaces, one for each parameter direction. These two objects let us reduce the closest point problem for a given point to solving two univariate polynomial equations. We also describe an implementation of our algorithm which, although not fast, is very reliable.

Chapter 4 is a written and improved version of a talk given at the MEGA 2005 conference. This chapter deals with the same problem as the previous chapter, but solves it by subdivision techniques. Different ways of solving this problem through subdivision are explored, and different optimizations are timed. An error analysis for subdivision methods is carried out, and this gives the user full control over the guaranteed accuracy of the subdivision methods.

Chapter 5 is an article written with my advisor Ragni Piene and one of her other students, Magnus Løberg. This article has been accepted for the proceedings of the COMPASS 2 workshop, and is a study of monoid hypersurfaces. A monoid hypersurface is an irreducible hypersurface of degree d which has a singular point of multiplicity d - 1. Any monoid hypersurface admits a rational parameterization, and is hence of potential interest in computer aided geometric design. We study properties of monoids in general and of monoid surfaces in particular. The main results include a description of the possible real forms of the singularities on a monoid surface other than the point of multiplicity d - 1. These results are applied to the classification of singularities on quartic monoid surfaces, complementing earlier work on the subject.

My contribution has been formulating and proving the lemmas and propositions in this chapter, building on the work started by the other authors. In particular, Proposition 5.8 and its constructive proof have been my contribution, as has the extension of the work in [38] into a complete classification of singularities away from the triple point.

In Chapter 6 the classification of monoids is continued by considering the space of quartic monoids in P^3 with only isolated singularities. This space has a natural stratification based on geometric invariants related to the singularities of the monoids. The strata of monoids are defined by first defining the invariants, and then defining when two different monoids are considered to have the same set of invariants. The result is a very high number of strata of monoids. By using the classification in the previous chapter, we are able to calculate the dimension of each stratum. Also, if a stratum is associated to a singular tangent cone, then the stratum can be expressed as an image of a certain map, and this construction lets us recover the number of components of the stratum.

During the work on the thesis over the last four years I have met many interesting people and made several new friends. Many of these have inspired and helped me complete my work, and I am happy to mention some of them here. First of all I would like to thank my advisor, Ragni Piene, for always being positive and supportive in my efforts. I would also like to thank her for answering lots of questions, for asking me the right questions, and for providing small hints when my research has been incomplete or temporarily stuck. I would also like to thank my cand.scient. advisor Jan Christophersen for his effort in turning me into a worthy PhD candidate. Many thanks to Tor Dokken for leading the successful GAIA II project and providing insight into the world of CAGD. I would like to thank Mohammed Elkadi, Bernard Mourrain and André Galligo for help and advice during my stay in Nice. Finally, I would like to thank the many fellow students with whom I shared an office, Torquil Macdonald Sørensen, Tore Halsne Flåtten, Le Thi Ha, An Ta Thi Kieu, Guillaume Chèze, Ola Nilsson, my good friends George Harry Hitching and Oliver Labs, and, most of all, my girlfriend Maria Samuelsen. You have all made the work on this thesis a better experience.


Chapter 2

The tangent developable

2.1 Introduction

If we have a curve on which tangents can be defined, then the associated tangent developable is the surface swept out by the tangents. Tangent developables have a cuspidal edge, and are easy to generate. Since most developable surfaces are tangent developables, the Computer Aided Geometric Design community should be interested in their properties. This article describes the local and global geometry of tangent developables.

For the local study of tangent developables we consider analytic real curves. Cleave showed in [5] that for most curves the tangent developable has a cuspidal edge along most of the curve. This was extended by Mond in [23] and [24], where he analyzed the tangent developable of more special curves. This work was further extended by Ishikawa in [13], and results from that article are used in section 2.3. The following section contains figures illustrating the local behavior of tangent developables, and one may want to have a brief look at these before reading the rest of the text.

In section 2.5 the tangent developables of complex projective algebraic curves are described. Algebraic geometrical invariants are introduced and relations between these invariants are taken from [31]. We also show that tangent developables of rational curves of degree at least 4 have a double curve.

Many thanks go to Ragni Piene for lots of good advice and considerable help with this article.

2.2 Tangent developables

Given a curve in some space, its tangent developable is the union of the tangent lines to the curve. The tangent line at a singular point is defined as the limit of tangent lines at non-singular points. If the curve is algebraic, then its tangent developable will be an algebraic surface.

Assume we have a parameterization of a curve with a non-vanishing derivative. Then we can make a map that parameterizes the corresponding tangent developable. Let U ⊂ R and let γ : U → R^3 be a map with a non-vanishing derivative. Define the map Γ : U × R → R^3 by

Γ(t, u) = γ(t) + uγ'(t). (2.1)

In this case the tangent developable of γ(U) is the image of Γ. The following example uses this technique to calculate the implicit equation of a tangent developable.

Example 2.1 (The tangent developable of the twisted cubic). Consider the twisted cubic curve parameterized by γ : R → R^3 where γ(t) = (t, t^2, t^3). The tangent developable is then the image of Γ : R^2 → R^3 where Γ(t, u) = (t + u, t^2 + 2ut, t^3 + 3ut^2). The algebra program Singular [10] can calculate the implicit equation of the surface:

z^2 - 6xyz + 4x^3z + 4y^3 - 3x^2y^2 = 0.

In this case the implicit equation describes the same set of points as the image of Γ. However, when dealing with real parameterizations this is not always true. Calculating the Jacobian ideal shows us that the tangent developable is singular exactly at γ(R). Moreover, if the surface is intersected with a general plane, the resulting curve will have a cusp singularity at each intersection point with γ(R).

Definition 2.2 (The type of a germ). Let γ be a smooth (C^∞) curve germ, γ : (R, p) → (R^3, q). We say that the germ is of finite type if the vectors γ'(p), γ''(p), γ'''(p), γ^(4)(p), ... span R^3. In this case, let

a_i = min{k : dim span{γ'(p), γ''(p), ..., γ^(k)(p)} = i}

and define the type of the germ to be the triple (a_1, a_2, a_3).
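The Gröbner basis computation of Example 2.1 is easy to reproduce in any computer algebra system. The following sketch uses SymPy instead of Singular (an illustrative choice, not the tool used in the thesis) and eliminates the parameters t and u from the parameterization of Γ:

```python
# A minimal sketch reproducing the implicitization of Example 2.1 with SymPy:
# eliminate t and u from x - (t+u), y - (t^2 + 2ut), z - (t^3 + 3ut^2).
from sympy import symbols, groebner

t, u, x, y, z = symbols('t u x y z')
gens = [x - (t + u), y - (t**2 + 2*u*t), z - (t**3 + 3*u*t**2)]

# Lexicographic order with t, u first eliminates the parameters; the basis element
# free of t and u is the implicit equation of the tangent developable.
G = groebner(gens, t, u, x, y, z, order='lex')
implicit = [g for g in G if not (g.free_symbols & {t, u})]
print(implicit)   # a scalar multiple of z^2 - 6xyz + 4x^3z + 4y^3 - 3x^2y^2
```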

In this article we will only look at parameterizations where all the germs are of finite type.

What does a tangent developable look like? Along most of the curve, the tangent developable has a cuspidal edge singularity, so it is never smooth.

2.3 Local properties of a real tangent developable

We now want to study the local properties of the tangent developable close to the curve. We are no longer forced to use complex numbers, so we choose to study only real tangent developables. Since this is a local study, we now look at germs of curves γ : (R, 0) → (R^3, 0), as in definition 2.2.

Cleave shows in [5] that the tangent developable of most smooth curves γ has a cuspidal edge along most of the curve. That is, the cuspidal edge exists at intervals of points of type (1, 2, 3). We have already decided only to look at curves where all the points are of finite type, and for all of these curves we will have a cuspidal edge along most of the curve. In the language of Cleave: given a curve with nonzero curvature and torsion at a point γ(t_0), if the tangent developable is intersected with a general plane through γ(t_0), the resulting curve will have a cusp at that point.

In [24] Mond provides drawings of the tangent developable at points of type (1, 2, k) for 3 ≤ k ≤ 7. This is (in the language of differential geometry) when the torsion vanishes to order at most 4. This was extended by Goo Ishikawa in [13], where he proves the following: the local diffeomorphism class of the tangent developable is determined by the type of the point if and only if the type is one of the following: (1, 2, 2+r) where r is a positive integer, (1, 3, 4), (1, 3, 5), (2, 3, 4) or (3, 4, 5). In other words, for these types we can restrict our study to curves of the form

x = t^(l_1+1) =: t^a
y = t^(l_2+2) =: t^b
z = t^(l_3+3) =: t^c

at the origin. For other types we have to include more terms (of the power series) in the local parameterizations to study the point. In these cases we can get several different real pictures, but since points of other types are quite exotic, they will not be analyzed here.
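The tangent developable of one of these model curves t ↦ (t^a, t^b, t^c) is straightforward to sample numerically from the parameterization (2.1); this is how pictures like those in section 2.4 can be produced. The snippet below is only an illustration: the function name, the sampling ranges and the resolution are arbitrary choices.

```python
# Sample points on the tangent developable of the model curve t -> (t^a, t^b, t^c),
# i.e. on the image of Gamma(t, u) = gamma(t) + u * gamma'(t).
import numpy as np

def tangent_developable_samples(a, b, c, n=200, m=41, tmax=1.0, umax=1.0):
    t = np.linspace(-tmax, tmax, n)[:, None]
    u = np.linspace(-umax, umax, m)[None, :]
    gamma  = np.stack([t**a, t**b, t**c], axis=-1)
    dgamma = np.stack([a*t**(a-1), b*t**(b-1), c*t**(c-1)], axis=-1)
    return gamma + u[..., None] * dgamma        # array of shape (n, m, 3)

pts = tangent_developable_samples(1, 2, 4)      # the type (1, 2, 4) surface of section 2.4
```

The resulting point grid can be fed to any surface plotting routine.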

Knowing this we can calculate local self intersection curves at points of type (1, 2, k) quite easily:

Example 2.3 ((a, b, c) = (1, 2, k) for k ≥ 3). To find local self intersection curves we need to solve the equation Γ(t, u) = Γ(s, v) where Γ is defined as in equation (2.1),

Γ(t, u) = (t + u, t^2 + 2tu, t^k + kt^(k-1)u).

Some straightforward calculations lead us to solving

-(t^2 - s^2) + 2w(t - s) = 0
(1 - k)(t^k - s^k) + kw(t^(k-1) - s^(k-1)) = 0,

where w = t + u = s + v. Assuming s ≠ t we (eventually) get

0 = 2(1 - k)(t^k - s^k) + k(t + s)(t^(k-1) - s^(k-1)),

and, dividing by t - s,

0 = (2 - k)(t^(k-1) + s^(k-1)) + 2(t^(k-2)s + t^(k-3)s^2 + ... + ts^(k-2)).

It is not hard to prove that s = -t is the only possible real self intersection by analyzing the polynomial f(t) = (2 - k)(t^(k-1) + 1) + 2(t^(k-2) + t^(k-3) + ... + t) and its derivative. The real self intersection occurs exactly when k is even. This is compatible with what Mond found in [23], but since Mond looked at C^∞ curves he could only draw the conclusion for k ≤ 7. Note that we have complex self intersections for all k ≥ 5.

Example 2.4 (Types (1, 3, 4), (1, 3, 5), (2, 3, 4) and (3, 4, 5)). Points of types (1, 3, 4), (1, 3, 5) and (2, 3, 4) each have one local real self intersection curve, while points of type (3, 4, 5) have no real self intersection curves. This was calculated using Singular [10]. The following section contains pictures of all of these types.
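The computation in Example 2.3 is easy to check symbolically. The snippet below is an illustration (not taken from the thesis): it substitutes s = -t into the reduced self intersection equation and confirms that it vanishes exactly when k is even.

```python
# Check of Example 2.3: the reduced equation vanishes at s = -t exactly for even k.
from sympy import symbols, simplify

t, s = symbols('t s')

def reduced_equation(k):
    return (2 - k)*(t**(k - 1) + s**(k - 1)) + 2*sum(t**(k - 1 - j)*s**j for j in range(1, k - 1))

for k in range(3, 9):
    print(k, simplify(reduced_equation(k).subs(s, -t)))   # 0 for k = 4, 6, 8; nonzero otherwise
```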

For most curves, the only types are (1, 2, 3) and (1, 2, 4). The following figure shows a point of type (1, 2, 4):

The following figures show points of type (1, 2, k).

The tangent developable of the curve (t, t^2, t^5)

The tangent developable of the curves (t, t^2, t^6) and (t, t^2, t^7)

The rest of the figures come from example 2.4. Note that for the points where k_1(0) = 1 (types (1, 3, 4) and (1, 3, 5)) the line which is a cuspidal edge, but not part of the curve, is an inflectional tangent line. This corresponds to the Plücker formula mentioned in section 2.5, c = r_0 + k_1, where c is the degree of the cuspidal edge.

The tangent developable of the curve (t, t^3, t^4)

The tangent developable of the curve (t, t^3, t^5)

The tangent developable of the curve (t^2, t^3, t^4)

The tangent developable of the curve (t^3, t^4, t^5)

2.5 The tangent developable of a complex algebraic curve

To any projective algebraic curve, there are associated several invariants, most importantly the degree and genus of the curve. Classical algebraic geometry gives many relations between these values and the geometry of the curve. In [31] Piene obtained results for the tangent developable, and the formulas have been taken from that article.

In this section a curve will be a reduced algebraic curve C_0 in the projective complex 3-space P^3_C. We also assume that the curve spans the space. Let X ⊂ P^3_C denote the tangent developable of C_0. Let h : C → C_0 be the normalization map, so that C is the desingularization of C_0. Let g denote the (geometric) genus of the curve and r_0 the degree. The rank r_1 is defined as the number of tangents that intersect a general line. Clearly this is the same as the degree of the tangent developable. The class r_2 is defined as the number of osculating planes to C_0 that contain a general point. The osculating plane at a point on the curve is the plane with the highest order of contact with the curve at that point. Another point of view is that the osculating plane at a point x_0 is the limit of the planes containing x_0, x_1 and x_2 as x_1, x_2 → x_0.

For each point p ∈ C, we can choose affine coordinates around h(p) such that the branch of C_0 determined by p has a (formal) parameterization at h(p) equal to

x = t^(l_1+1) + ...
y = t^(l_2+2) + ...
z = t^(l_3+3) + ...

with l_0 := 0 ≤ l_1 ≤ l_2 ≤ l_3. This (formal) parameterization is also a curve germ γ : (C, 0) → (C^3, 0). Because of this we extend the notion of the type to the complex domain, and say that the type of the germ determined by p is equal to (l_1 + 1, l_2 + 2, l_3 + 3). The coordinates are chosen such that h(p) is the origin, the tangent is the line y = z = 0, and the osculating plane is z = 0. We call k_i(p) = l_(i+1) - l_i the ith stationary index of p. Since k_i(p) ≠ 0 for only a finite number of points p, we can define k_i = Σ_(p ∈ C) k_i(p).

If l_1 = 0, then the germ is nonsingular. If l_1 ≥ 1 we say that the germ has a cusp, and if l_1 = 1 the cusp is said to be ordinary. If l_1 = 0 and l_2 ≥ 1 we call the point h(p) an inflection point or flex, and if l_2 = 1 the flex is ordinary. If l_1 = l_2 = 0 and l_3 ≥ 1 we say that the curve has a stall or a point of hyperosculation. For most curves we will have no cusps and no flexes.

Now it is time to state the relations between these values, all taken from [31]:

r_1 = 2r_0 + 2g - 2 - k_0 (2.2)
r_2 = 3(r_0 + 2g - 2) - 2k_0 - k_1 (2.3)
k_2 = 4(r_0 + 3g - 3) - 3k_0 - 2k_1 (2.4)

Note that r_1 ≥ 3, since r_1 is the degree of the tangent developable, and no quadric surface with a cuspidal edge exists. Furthermore, r_2 ≥ 3, since r_2 is the degree of the dual curve, and the dual curve must span the space. From the definition we get k_2 ≥ 0.

The tangent developable X of C_0 has degree µ_0 = r_1, rank µ_1 = r_2 (defined as the class of the intersection of the tangent developable with a general plane, a plane curve) and class µ_2 = 0 (defined as the number of tangent planes containing a general line). Its cuspidal edge consists of C_0 and the flex tangents of C_0. The cuspidal edge has degree c = r_0 + k_1. Formulas involving algebraic invariants, such as those above, are often called Plücker formulas, and such formulas are central in enumerative algebraic geometry. There are lots of Plücker formulas, relating many different algebraic invariants.

In addition to the cuspidal edge, X has a double (or higher order multiple) curve, sometimes called the nodal curve of C_0. It consists of points that are on more than one tangent of C_0. Any bitangents are part of the nodal curve. Let b denote the degree of the nodal curve. If the nodal curve is double and the flexes of C_0 are ordinary, then [31] gives the following expressions for b:

2b = µ_0(µ_0 - 1) - µ_1 - 3c
   = r_1(r_1 - 1) - r_2 - 3(r_0 + k_1)
   = r_1(r_1 - 4) - k_0 - 2k_1
   = (2r_0 + 2g - 2 - k_0)(2r_0 + 2g - 6 - k_0) - k_0 - 2k_1.

For rational curves, g = 0, so then

2b = (2r_0 - 2 - k_0)(2r_0 - 6 - k_0) - k_0 - 2k_1.

In this case we see that

k_2 = 4(r_0 - 3) - 3k_0 - 2k_1 ≥ 0

implies k_0 ≤ (4/3)r_0 - (2/3)k_1 - 4 ≤ (4/3)r_0 - 4.
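The relations (2.2)-(2.4) and the expression for 2b above are easy to evaluate mechanically. The following helper is an illustrative sketch, not part of the thesis; it simply collects the formulas in one place.

```python
# Evaluate (2.2)-(2.4) and, for a double nodal curve with ordinary flexes, 2b.
def plucker_invariants(r0, g, k0, k1):
    r1 = 2*r0 + 2*g - 2 - k0                  # (2.2): rank = degree of the tangent developable
    r2 = 3*(r0 + 2*g - 2) - 2*k0 - k1         # (2.3): class
    k2 = 4*(r0 + 3*g - 3) - 3*k0 - 2*k1       # (2.4): second stationary index
    two_b = r1*(r1 - 1) - r2 - 3*(r0 + k1)    # twice the degree of a double nodal curve
    return r1, r2, k2, two_b

# The twisted cubic (r0 = 3, g = 0, k0 = k1 = 0) gives r1 = 4, r2 = 3, k2 = 0, b = 0,
# and the singular quartic of Example 2.5 below (r0 = 4, g = 0, k0 = 1, k1 = 0) gives b = 2.
print(plucker_invariants(3, 0, 0, 0))   # (4, 3, 0, 0)
print(plucker_invariants(4, 0, 1, 0))   # (5, 4, 1, 4)
```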

We can find a lower bound for b for rational curves of degree r_0 ≥ 4 by first eliminating k_1 (using equation (2.4)) in the expression for b:

2b = (2r_0 - 2 - k_0)(2r_0 - 6 - k_0) - k_0 - 2k_1
   = (2r_0 - 2 - k_0)(2r_0 - 6 - k_0) + 2k_0 + k_2 - 4r_0 + 12
   ≥ (2r_0 - 2 - k_0)(2r_0 - 6 - k_0) + 2k_0 - 4r_0 + 12

(using k_2 ≥ 0). As a function in k_0 the expression above is strictly decreasing (for k_0 ≤ (4/3)r_0 - 4). In other words, we can set k_0 = (4/3)r_0 - 4 and not break the inequality:

2b ≥ (2r_0 - 2 - ((4/3)r_0 - 4))(2r_0 - 6 - ((4/3)r_0 - 4)) + 2((4/3)r_0 - 4) - 4r_0 + 12 = (4/9)r_0(r_0 - 3).

We conclude that rational curves with b = 0 must have degree 3, and the twisted cubic is the only one of these that is not planar. It follows that every rational curve C_0 of degree greater than 3 gives a tangent developable with a nodal curve of positive degree.

We want to check if b = 1 is possible. If g = 0 and b = 1, then 2b ≥ (4/9)r_0(r_0 - 3) implies r_0 = 4. Also, k_0 ≤ (4/3)r_0 - 4 = 4/3. This leads us to consider two cases, k_0 = 0 and k_0 = 1. If k_0 = 0 the formula for b implies k_1 = 5, and equation (2.4) gives k_2 = -6. If k_0 = 1 the formula for b implies k_1 = 1, and equation (2.4) gives k_2 = -1. The second stationary index k_2 cannot be negative, so b = 1 is impossible.

The following example shows that b = 2 actually can occur for g = 0 and r_0 = 4:

Example 2.5 (A singular curve of degree 4). Let the curve γ_0 : C → C^3 be given by γ_0(t) = (t, t^2, t^3 + t^4). This is an imbedding that is one to one on points, so the degree is 4. Note that γ_0 is nonsingular, but if we take the projective completion γ : P^1 → P^3 given by γ(s; t) = (s^4; s^3t; s^2t^2; st^3 + t^4), we get a singular curve. In fact, setting t = 1 yields the local parameterization at (0; 1), s ↦ (s^4; s^3; s^2; s + 1). Let (w; x; y; z) be the projective coordinates for P^3_C. Since 1/(1 + s) = 1 - s + s^2 - ... in a neighborhood of 0, setting z = 1 gives the local parameterization

w = s^4 - s^5 + s^6 - ...
x = s^3 - s^4 + s^5 - ...
y = s^2 - s^3 + s^4 - ...

We see that the type of the local parameterization is (2, 3, 4), and thus k_0(γ(0; 1)) = 1 and k_1(γ(0; 1)) = k_2(γ(0; 1)) = 0. At any other point the first and second derivatives are linearly independent, so every other point is of type (1, 2, n) for some value of n. This means that we have k_0 = 1 and k_1 = 0. The degree of the curve is r_0 = 4, and the genus of the curve is g = 0 since the curve is rational. Now we can calculate the rest of the invariants mentioned above. From the formulas we get the rank of the curve, r_1 = 5, the class of the curve, r_2 = 4, the second stationary index, k_2 = 1, the degree of the surface µ_0 = r_1 = 5, the rank of the surface µ_1 = r_2 = 4, and finally the degree of the nodal curve, b = 2.

Using Singular [10], we can verify some of the results. A Gröbner basis computation gives us the implicit equation of the surface:

F = 3wx 2 y x 3 y 2 4w 2 y 3 14wxy 3 + 8x 2 y 3 9wy 4 4wx 3 z 16x 4 z + 6w 2 xyz + 24wx 2 yz 6w 2 y 2 z w 3 z 2

This equation is, predictably, of degree µ_0 = r_1 = 5. We can find the singular locus by setting the four partial derivatives equal to zero. The last one,

(1/2) ∂F/∂z = -2wx^3 - 8x^4 + 3w^2xy + 12wx^2y - 3w^2y^2 - w^3z,

leads us to consider two cases, w = 0 and w ≠ 0. The first case implies x = 0 from ∂F/∂z = 0, and then ∂F/∂w = 0 gives y = 0. This leaves us with one point, namely (0; 0; 0; 1) = γ(0; 1), the singular point of the curve. If w ≠ 0 we can choose w = 1 and solve the system of equations quite easily. This is because ∂F/∂z = 0 becomes

0 = -2x^3 - 8x^4 + 3xy + 12x^2y - 3y^2 - z, (2.5)

so we can substitute z into the other equations. In other words, assuming ∂F/∂z = 0, the equation ∂F/∂y = 0 gives

0 = 16x^6 + 8x^5 - 32x^4y + x^4 - 16x^3y + 16x^2y^2 - 2x^2y + 8xy^2 + y^2 = (4x + 1)^2(x^2 - y)^2.

If x^2 - y = 0, then equation (2.5) gives z = x^3 + x^4, as expected. Setting x = -1/4 in the rest of the equations gives us a solution for every y, so z is a polynomial of degree 2 in y given by (2.5). This is the degree of the nodal curve that we calculated earlier.

Note that most curves will have k_0 = k_1 = 0, with a nodal curve of degree b = 2(r_0 + g - 1)(r_0 + g - 3). Unless r_0 = 3 and g = 0, the nodal curve will not be empty.

The cuspidal edge and the nodal curve may both be singular, and they will usually intersect. If the nodal curve is double and the flexes are ordinary, X will have a finite number of points with multiplicity 3. These points can be of different types. If the nodal curve has a node at q outside the cuspidal edge, then q must lie on at least 3 tangents, and therefore the nodal curve must have multiplicity at least 3 at q, since any selection of two out of three tangents will give a branch in the nodal curve. The total number T of triple points of the tangent developable X of C_0 is given in [31] and is

T = (1/6)(r_1 - 4)((r_1 - 3)(r_1 - 2) - 6g). (2.6)

The formula (2.6) is valid when the nodal curve is double. When the nodal curve is more than double we have to use a generalized formula for the degree of the multiple curves (also found in [31]). If the nodal curve consists of curves D_j, where D_j is ordinary j-multiple, then the degrees b_j of the D_j satisfy

Σ_j j(j - 1)b_j = r_1(r_1 - 1) - r_2 - 3(r_0 + k_1), (2.7)

still assuming the flexes to be ordinary. Note that this is a very special case, and that producing interesting examples with high j may be hard. An example where the nodal curve is triple can be found in [40, p. 65], and we have calculated the details using Singular [10].¹

Example 2.6 (The equianharmonic rational quartic). Let α = (1/3)√(-3), let C_0 be the rational curve defined by the map γ : P^1_C → P^3_C where

γ(s; t) = (αs^4 - s^2t^2; αs^3t; αst^3; αt^4 - s^2t^2),

and let X be its tangent developable. A Gröbner basis computation gives us the implicit equation F = 0 of the surface X. Here F is a polynomial of degree 6 in the projective coordinates (w; x; y; z):

F = 12w 2 x 3 y + 3w 4 y 2 72αw 2 x 2 y w 2 xy 3 256αx 3 y 3 +18αw 3 xyz + 24wx 3 yz + 6w 3 y 2 z + 48αwx 2 y 2 z + 24wxy 3 z +3w 2 x 2 z 2 12αw 2 xyz x 3 yz 2 + 3w 2 y 2 z 2 72αx 2 y 2 z 2 +12xy 3 z 2 + 4αw 3 z 3 + 6wx 2 z αwxyz 3 + 3x 2 z 4

Taking a primary decomposition of the Jacobian ideal of F, we find that the singular locus of X consists of two components, the curve C_0 and the conic D defined by z^2 + 4xy = 0 in the plane w + z = 0. We want to show that D is a triple curve of X.

The conic D can be parameterized by θ : P^1_C → P^3_C where θ(u; v) = (2uv; v^2; -u^2; -2uv). Using this parameterization we find the following: the point θ(u; v) lies on the tangent to C_0 at γ(s; t) if and only if

G(s, t, u, v) := s^3u - 3αst^2u + 3αs^2tv - t^3v = 0.

For a fixed (u; v) ∈ P^1_C, the zeros of G(s, t, u, v) correspond to points on C_0 whose tangent contains θ(u; v). For most (u; v) ∈ P^1_C we will get three distinct tangents. In fact, let Δ(u, v) denote the discriminant of G with respect to (s; t). In this case

Δ(u, v) = (u^2 + (3α + 1)uv - v^2)(u^2 - (3α + 1)uv - v^2).

If Δ(u, v) ≠ 0, then the point θ(u; v) lies on three distinct tangents to C_0. Let A denote the four points on D corresponding to Δ(u, v) = 0. We conclude that each point on D not in A lies on exactly three tangents of C_0. This means that D is a triple curve of X. Moreover, A is exactly the intersection of D and C_0, and these four points are the only points on C_0 whose local parameterization is not of type (1, 2, 3). In fact, the local parameterization at each point of A is of type (1, 2, 4). This means that k_0 = k_1 = 0 and k_2 = 4. Furthermore, the degree of C_0 is r_0 = 4, and the formulas give the rank r_1 = 6 and the class r_2 = 6 of the curve. The multiple curve has only one component, the triple curve D, and this corresponds to b_3 = 2 in equation (2.7).

¹There is an error in [40]: m is not supposed to be 3, but the same as α in this example.
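The discriminant computation in Example 2.6 can be checked directly. The snippet below is an illustration only; it assumes the value α = (1/3)√(-3) used above and verifies that the discriminant of G with respect to (s; t) agrees with Δ(u, v) up to a nonzero constant factor.

```python
# Verify that disc_{(s;t)} G(s, t, u, v) is a constant multiple of Delta(u, v).
from sympy import symbols, sqrt, discriminant, simplify

s, t, u, v = symbols('s t u v')
alpha = sqrt(-3) / 3

G = s**3*u - 3*alpha*s*t**2*u + 3*alpha*s**2*t*v - t**3*v
D = discriminant(G.subs(t, 1), s)   # dehomogenized; equals the binary discriminant up to a constant
target = (u**2 + (3*alpha + 1)*u*v - v**2) * (u**2 - (3*alpha + 1)*u*v - v**2)
print(simplify(D / target))         # a nonzero constant, so the two have the same zero set
```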

The set A forms an equianharmonic set on D, and that is why C_0 is called the equianharmonic rational quartic. Note that this example is very special and arises from the thorough study [40] of the rational normal curve in P^4_C. The curve C_0 is constructed by projecting the rational normal curve in P^4_C from a general point on a quadric called the nucleus of the polarity. The equation of the nucleus of the polarity is x_0x_4 - 4x_1x_3 + 3x_2^2, and the projection centre of this example is (1, 0, α, 0, 1).

All the formulas in this section hold for curves in P^3_C. We cannot state similar equalities for real curves, but the projective invariants of the complex curve give results for the real part in the form of inequalities. However, these inequalities will not be made explicit in this article.

Chapter 3

Closest points, moving surfaces and algebraic geometry¹

Jan B. Thomassen, Pål H. Johansen, and Tor Dokken

3.1 Introduction

In this paper, we present a new method for calculating closest points to a parametric surface. The method is based on algebraic techniques, in particular on moving surfaces. Moving surfaces are objects that have previously been used for implicitization [34], but the closest point problem now provides another application of these.

Recently, there has been renewed interest in exploring links between geometric modeling and algebraic geometry [9]. The work presented in this paper is a part of this trend, and extends work from the European Commission project GAIA II. Algebraic geometry has many uses in geometric modeling, including such applications as point classification, implicitization, intersection and self-intersection problems, ray-tracing, etc. It was therefore natural to ask whether algebraic geometry also can be used in algorithms for computing closest points.

The closest point problem is a generic problem in CAGD. Applications include surface smoothing, surface fitting, and curve or surface selection. The closest point problem can be described in the following way. We are given a parametric surface p(u, v) and a point x_0 in space. We want to find the point p_cl on the surface that is closest to x_0, or more precisely, we want to find the parameters (u_cl, v_cl) of p_cl.

The conventional way to compute closest points involves iterative methods, like Newton's method, to minimize the distance function from x_0 to a point on the surface. This leads to solving a set of two polynomial equations in u and v,

(x_0 - p(u, v)) · p_u(u, v) = 0,
(x_0 - p(u, v)) · p_v(u, v) = 0, (3.1)

for the footpoints to x_0. We recall that a footpoint p to x_0 is a point on the surface such that the vector (x_0 - p) is orthogonal to the tangent plane at p. Eqs. (3.1) express an orthogonality condition: the vector (x_0 - p_cl) is orthogonal to the tangent vectors p_u(u_cl, v_cl) and p_v(u_cl, v_cl) at the closest point. This is illustrated in Fig. 3.1.

Figure 3.1: Orthogonality conditions for the closest point.

One disadvantage of iterative methods is that we need an initial guess. It is a problem to come up with a good initial guess [20]. A bad guess may give a sequence of iterations that does not converge, or that converges to the wrong solution. Furthermore, if a large number of closest points needs to be computed, the method may be slow.

¹This chapter is the article [15]. My contribution is mainly the theory in Section 3.3.
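To make the role of the initial guess concrete, here is a minimal sketch of such an iteration on Eqs. (3.1). It is illustrative only: the callables p, p_u, p_v and the second derivatives are assumed to be supplied by the surface representation, and no safeguards (step damping, domain clamping) are included.

```python
# Newton iteration on the footpoint equations (3.1), starting from a guess (u, v).
import numpy as np

def newton_footpoint(p, p_u, p_v, p_uu, p_uv, p_vv, x0, uv, steps=20, tol=1e-12):
    u, v = uv
    for _ in range(steps):
        r = x0 - p(u, v)
        g = np.array([r @ p_u(u, v), r @ p_v(u, v)])            # the two equations (3.1)
        J = np.array([[r @ p_uu(u, v) - p_u(u, v) @ p_u(u, v),
                       r @ p_uv(u, v) - p_u(u, v) @ p_v(u, v)],
                      [r @ p_uv(u, v) - p_u(u, v) @ p_v(u, v),
                       r @ p_vv(u, v) - p_v(u, v) @ p_v(u, v)]])
        du, dv = np.linalg.solve(J, -g)
        u, v = u + du, v + dv
        if abs(du) + abs(dv) < tol:
            break
    return u, v
```

Whether the iteration converges, and to which footpoint, depends entirely on the starting values, which is exactly the issue discussed above.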

Another way to solve Eqs. (3.1) is to use subdivision techniques. An example of this is Bézier clipping [27]. These methods are often robust and effective, but may be unstable and take a long time to converge for some difficult surfaces, like surfaces with singularities. Such methods are probably the methods of choice in real applications, but we will not discuss them further here.

The method we propose in this paper for solving the closest point problem uses moving surfaces, as already mentioned. A moving surface in our setting is a one-parameter family of surfaces. We construct two such surfaces: one moving in the u-direction and one moving in the v-direction. The two moving surfaces give us two polynomial equations that are univariate. Univariate polynomial equations can be solved fast with a recursive solver, and all roots may be found on the interval of interest within a predefined accuracy. This will give an algorithm that does not need any initial guess, has no convergence problems, and is fast when many closest points are to be calculated.

We may use elimination theory and Sylvester resultants to construct the moving surfaces in our method. From this construction, we obtain formulas for the algebraic degrees of the geometric objects involved when the surfaces addressed are Bézier surfaces. We also describe an implementation of an algorithm for computing closest points based on the moving surface method. In this implementation, we construct the moving surfaces by solving a system of linear equations, rather than by using resultants. The implementation produced accurate results when run on test cases of biquadratic Bézier surfaces. Unfortunately, it couldn't be applied to bicubic surfaces due to memory shortage when building certain matrices necessary for the construction of the moving surfaces.

The organization of the paper is the following. In the following section, we describe the way we use moving surfaces and the idea behind our method. In Section 3.3, we analyze the method by using elimination theory and Sylvester's resultant, which gives us formulas for the algebraic degrees of the moving surfaces in the scheme. In Section 3.4, we present an algorithm for our method and describe some results we have obtained from implementing it. Finally, Section 3.5 is a discussion of these results.

3.2 The underlying idea

Our method involves moving surfaces, which have been introduced by Sederberg for implicitization [34]. In that context, a moving surface is an implicit surface depending on two parameters, but in our setting a moving surface is a one-parameter family of implicit surfaces. Let us make the assumption that we are dealing with parametric surfaces that are single rational patches. Thus, a moving surface q(x; u), depending on the parameter u, is given by

q(x; u) = Σ_(i=0)^N q_i(x) B_(i,N)(u). (3.2)

Here, the B_(i,N)(u) are Bernstein basis polynomials of degree N, and the q_i(x) are a set of N + 1 algebraic functions. In other words, q is given in terms of a Bernstein polynomial in u of degree N, where the coefficients q_i are implicit surfaces. Furthermore, the moving surface q follows a surface p(u, v) (in the parameter u) if

q(p(u, v); u) = 0. (3.3)

Moving surfaces may in this way follow a parametric surface in either the u or the v direction.

How can we make use of such moving surfaces? Suppose we find a moving surface q_1(x; u) with the following properties:

- q_1 follows the given surface p(u, v) in u. This means that the surface defined by q_1(x; u) = 0 intersects p in u-isoparameter curves.
- q_1 is orthogonal to p for each u.
- q_1 is ruled for each u. More precisely, it is swept out by lines spanned by the normal n(u, v) along the u-isoparameter curves.

Then, for a given point x_0 in space, the equation

q_1(x_0; u) = 0 (3.4)

is a univariate equation for the u-parameter of all footpoints to x_0. An example of a moving surface with these properties is shown in Figure 3.2.

Figure 3.2: A moving surface q_1 that intersects p at u-isocurves, is orthogonal to it, and is ruled.

Clearly, we may have a similar moving surface q_2 in the v direction. In the following, the subscript 1 or 2 on q refers to either u or v.

A possible exception to this situation is that we are dealing with certain nongeneric surfaces, like surfaces of revolution. For these surfaces some points (like those lying on the axis of revolution) may give, not footpoints, but footcurves, i.e. the set of points with the same distance to x_0 is a curve on the surface. This is presumably a problem for most methods of computing closest points, and requires a separate discussion. For simplicity we assume that all the surfaces we consider are sufficiently generic for this not to happen.

Based on the considerations above, we propose a method for computing closest points in two steps:

1. Preprocessing. Construct two moving surfaces: q_1 for the u-direction, and q_2 for the v-direction. This is done once for each surface.

2. For each given point x_0, use the two moving surfaces to get two univariate equations in u and v:

   q_1(x_0; u) = 0,
   q_2(x_0; v) = 0. (3.5)

   Check each pair of solutions (u, v) to these equations, along with the closest point on the border, to find which one corresponds to the closest point. Finding the closest point on the border amounts to running a similar algorithm for the four border curves.
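Step 2 only involves univariate root finding. The sketch below is illustrative (not the implementation of Section 3.4): it assumes the coefficient surfaces q_{1,i} are available as callables, evaluates them at x_0 to obtain the Bernstein coefficients of q_1(x_0; u), and finds the real roots in [0, 1].

```python
# Evaluate a moving surface at a fixed point x0 and return candidate footpoint parameters.
import numpy as np
from math import comb
from numpy.polynomial import Polynomial

def footpoint_parameters(q1_coeffs, x0, eps=1e-10):
    """q1_coeffs[i](x0) is the ith Bernstein coefficient of the univariate polynomial q_1(x0; .)."""
    b = np.array([q(x0) for q in q1_coeffs], dtype=float)
    N = len(b) - 1
    # Convert from the Bernstein basis B_{i,N} to the power basis so numpy's root finder applies.
    power = np.zeros(N + 1)
    for i, bi in enumerate(b):
        for k in range(i, N + 1):
            power[k] += bi * comb(N, i) * comb(N - i, k - i) * (-1) ** (k - i)
    roots = Polynomial(power).roots()
    return sorted(r.real for r in roots if abs(r.imag) < eps and -eps <= r.real <= 1 + eps)
```

In the article the Bernstein-form equation is instead solved directly with a recursive solver, which avoids the basis conversion used in this sketch.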

A sketch of a situation where we get a solution u_0 and v_0 from Step 2 is shown in Figure 3.3.

Figure 3.3: When the solutions u_0 and v_0 are found in Step 2, we can draw the moving surfaces for these two parameter values. The point x_0, the closest point p_cl, and the straight line between them lie on both of these surfaces.

Let us also make a remark about curves. A similar construction works for curves, both in 2D and 3D. In 2D we have moving lines, while in 3D we have moving planes. Since lines and planes are described implicitly by algebraic functions that are linear, the algorithms become simpler. Furthermore, for curves there is only one equation in Step 2. This equation is in fact equivalent to the orthogonality condition (x_0 - p(t)) · p'(t) = 0.

3.3 Degrees of the moving surfaces

The two moving surfaces described in the previous section can be analyzed more formally. In this section we will use elimination theory, in particular Sylvester's resultant, to perform this analysis [6]. We will assume that the surface p is a single polynomial patch, i.e. a Bézier patch. In this case we obtain formulas for the algebraic degrees involved in q_1 and q_2 given a parametric surface of bidegree (n_u, n_v). Referring back to the form (3.2) of a moving surface, the required degrees are:

d_1 ≡ deg_x(q_1) = the degree of q_1 (or the q_(1,i)) in x
d_2 ≡ deg_x(q_2) = the degree of q_2 (or the q_(2,i)) in x
N ≡ deg_u(q_1) = the degree of q_1 (or B_(i,N)) in u

It turns out that deg_u(q_1) is equal to deg_v(q_2), so we need only one N. This is connected with the fact that N counts the number of possible footpoints, and this is given by the number of roots of q_1 and q_2, respectively.

Thus we have a parameterized surface p : R^2 → R^3, where p is given by three polynomials p_1, p_2, p_3 ∈ R[u, v] of degree (n_u, n_v). We assume that this description of the surface is sufficiently general, so that the degrees cannot be reduced. Now let V be the set of points (u, v, x) ∈ R × R × R^3 such that x is on the normal of p given by the parameter values (u, v). The set V is described by the two equations

F_1(u, v, x) := (x - p) · p_u = 0,
F_2(u, v, x) := (x - p) · p_v = 0. (3.6)

The points satisfying these equations make a variety in R^5. Using a resultant, we can eliminate one variable, and get one polynomial defining a hypersurface in R^4. If we eliminate u, this set of points is exactly

{(v, x) ∈ R × R^3 : there exists u ∈ R such that F_1(u, v, x) = F_2(u, v, x) = 0},

which corresponds to the moving surface q_2. We want to determine the degrees in v and x of the equation defining this set. First, we write

F_1(u, v, x) = Σ_(i=0)^(2n_u - 1) f_i(v, x) u^i,
F_2(u, v, x) = Σ_(i=0)^(2n_u) g_i(v, x) u^i, (3.7)

and then use the Sylvester resultant to eliminate u. By examining the Sylvester matrix, we can determine the degrees of the defining equation. The Sylvester matrix is a square matrix of size (4n_u - 1) × (4n_u - 1): its first 2n_u columns contain shifted copies of the coefficients f_0, ..., f_(2n_u - 1), and its last 2n_u - 1 columns contain shifted copies of the coefficients g_0, ..., g_(2n_u).

There are 2n_u columns to the left, and the degree in v is 2n_v for each of these entries. There are 2n_u - 1 columns to the right, each entry being of degree 2n_v - 1. The total degree in v of the resultant is thus

N = 4n_u n_v + (2n_u - 1)(2n_v - 1). (3.8)

Note the symmetry of this expression with respect to n_u and n_v, which confirms what we said previously about needing only one N.

The degree in x, that is, d_2, is a little trickier to work out. The polynomials f_0, ..., f_(n_u - 1) are of degree 1 in x, but the polynomials f_(n_u), ..., f_(2n_u - 1) are of degree 0. Furthermore, the polynomials g_0, ..., g_(n_u) are of degree 1 and the polynomials g_(n_u + 1), ..., g_(2n_u) are of degree 0 in x. This means that the bottom n_u rows are of degree 0 in x and the rest of the 3n_u - 1 rows are of degree 1. The total degrees are therefore

d_1 = 3n_v - 1, (3.9)
d_2 = 3n_u - 1. (3.10)

As mentioned, the degree formulas for d_1,2 and N are derived for general parametrized surfaces, and as such are upper bounds. For some surfaces the degrees could be effectively lower. This happens, for example, if the degree of p is artificially high, so that it can be obtained from a degree elevation of a lower-degree parametrization. The degree can drop in other cases, but if the degree in v drops, then the corresponding degree in u will typically drop in the same way. For this reason, there will still be only one N.

A similar analysis can be carried out for rational surface patches. The degree formulas are then:

N = 9n_u n_v + (3n_u - 2)(3n_v - 2)
d_1 = 4n_v - 2 (3.11)
d_2 = 4n_u - 2
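The degree formulas (3.8)-(3.10) can be checked on small examples by computing the resultant explicitly. The sketch below does this for a bilinear patch (n_u = n_v = 1); the control points are illustrative choices, and for special patches the degrees can of course drop below the generic values.

```python
# Check the degrees of the eliminated equation for a bilinear Bezier patch with SymPy.
from sympy import symbols, resultant, Poly, expand

u, v, x1, x2, x3 = symbols('u v x1 x2 x3')

# Control points of an arbitrary bilinear (n_u = n_v = 1) patch.
c = {(0, 0): (0, 0, 0), (1, 0): (3, 1, 2), (0, 1): (1, 4, 1), (1, 1): (2, 2, 5)}
def coord(k):
    return sum(c[(i, j)][k] * (u if i else 1 - u) * (v if j else 1 - v)
               for i in (0, 1) for j in (0, 1))

p = [coord(k) for k in range(3)]
pu = [pk.diff(u) for pk in p]
pv = [pk.diff(v) for pk in p]
X = [x1, x2, x3]

F1 = expand(sum((X[k] - p[k]) * pu[k] for k in range(3)))
F2 = expand(sum((X[k] - p[k]) * pv[k] for k in range(3)))

R = expand(resultant(F1, F2, u))              # defining equation of the moving surface q_2
print(Poly(R, v).degree())                    # compare with N = 4 + 1*1 = 5
print(Poly(R, x1, x2, x3).total_degree())     # compare with d_2 = 3*n_u - 1 = 2
```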

Examples of the degrees for Bézier and rational surfaces of degrees (n, n) with n ranging from 1 to 4 are shown in Table 3.1. As far as we know, these results are new.² The numbers d_1 and d_2 are the degrees of an algebraic surface that is perpendicular to a parametric surface along an entire isocurve, and this has not been noted before.

Table 3.1: Degrees for Bézier and rational surfaces of degrees of the form (n, n). Since n_u = n_v we also have d_1 = d_2.

            Bézier                      Rational
         (1,1)  (2,2)  (3,3)  (4,4)   (1,1)  (2,2)  (3,3)  (4,4)
d_1,2      2      5      8     11       2      6     10     14
N          5     25     61    113      10     52    130    244

²We thank the referee for urging us to make this point.

3.4 Implementation of a test algorithm

To test our ideas, we have implemented an algorithm for computing closest points for tensor product Bézier surfaces. We have chosen not to use resultants for this. Instead we rely on solving a system of linear equations, which will be explained below. The reason is that this is a numerically very stable method, which allows us to use the Bernstein form for all polynomials in a straightforward way, which would not have been the case for resultant based methods. Besides, we do not get into possible problems with base points.

A central object in our implementation is the moving ruled surface

r(u, v, w) = p(u, v) + w n(u, v), (3.12)

where p is the given surface, n is the normal vector, and w is an additional parameter. This can be thought of as a trivariate tensor product Bézier object. For fixed (u_0, v_0), the line r(u_0, v_0, w), w ∈ R, is orthogonal to the surface at p(u_0, v_0). In other words, all points on this line have p(u_0, v_0) as a footpoint.

Another property we have used in our implementation is that evaluating an algebraic function q(x) on an n-variate Bernstein tensor polynomial r(u_1, ..., u_n) yields a new n-variate Bernstein tensor polynomial. If we write q(x) = Σ_j b_j x^j, where j is a multi-index, x^j is a monomial in (x, y, z) in multi-index form, and the b_j are the coefficients, we have a factorization

q(r(u_1, ..., u_n)) = b^T D^T B(u_1, ..., u_n). (3.13)

Here, b is the coefficients b_j organized in a vector, D is a matrix of numbers, and B(u_1, ..., u_n) is a basis of n-variate Bernstein tensor polynomials, also organized in a vector. If q is a degree d algebraic function and r is a degree (m_1, ..., m_n) Bernstein polynomial, then q(r) is a degree (dm_1, ..., dm_n) Bernstein polynomial. In our implementation, we use evaluation routines for algebraic functions on Bernstein polynomials in order to find such matrix factorizations.

The moving surfaces q_1 and q_2 are defined by an array of coefficients. For q_1(x; u) we need to determine the coefficients b_(1,i;j) of q_(1,i)(x) = Σ_j b_(1,i;j) x^j, see Eq. (3.2). This means that we can use numerical linear algebra to find the vector b_1 of coefficients in q_1. More precisely, we need to find a vector in the null-space of D_1. We used a technique based on Gauss elimination and back-substitution for this, which is faster than, say, an SVD of D_1. This way of using numerical linear algebra has previously been used in implicitization, see [7].

The algorithm follows the two-step structure described in Section 3.2.

Step 1. Preprocessing

Input: A parametric surface p(u, v).

1. Construct a moving ruled surface r(u, v, w).
2. Insert r into q_1 to get the equation q_1(r(u, v, w); u) = 0. This can be factored into the linear equation
   B^T(u, v, w) D_1 b_1 = 0, (3.14)
   where b_1 is the vector of coefficients in the q_(1,i). Similarly for the v-direction.
3. Solve the matrix equation D_1 b_1 = 0 by e.g. Gauss elimination and back-substitution. Similarly for the v-direction.

Output: The vectors b_1 and b_2, or equivalently, the moving surfaces q_1 and q_2.

Step 2. For each given point x_0

Input: A point x_0 in space.

1. Find the closest point on the boundary curves.
2. Insert x_0 in q_1 and q_2 to get univariate equations in u and v:
   q_1(x_0; u) = 0,
   q_2(x_0; v) = 0. (3.15)
3. Find all roots u_i and v_j.
4. Check each pair (u_i, v_j) and the closest point on the boundary to find the closest point.

Output: The parameters (u_cl, v_cl) of the closest point p_cl.

As an example, we tested the algorithm on a set of ten random biquadratic Bézier surfaces. That is, the control points were random points in the unit cube. We expect that within the family of biquadratic surfaces such surfaces will be a challenge for any closest point algorithm. For each surface, 1000 points were generated randomly in the bounding box, and their closest points on the surface were computed. However, points whose closest points were found to lie on the boundary were discarded. For comparison, we also implemented an algorithm based on Newton's method. We used a PC with two Intel Pentium 4 2.8GHz processors to run these algorithms. The results are shown in Table 3.2.

Table 3.2: Average results for running the two closest point algorithms on ten random biquadratic surfaces. For details, see the text.

                                 Moving surfaces    Newton's method
Average no. of hits
Running times, full algorithm        1-2 min
Running times, just Step 2             1 s                5 s
Accuracy

Table 3.2 reports the average number of closest points found (hits) for each surface. For both algorithms, less than half of the 1000 random points gave hits, because a majority of the closest points were on the boundaries of the surfaces. (The surfaces had complicated geometries with lots of self-intersections.) The moving surfaces algorithm was consistently better for getting hits; Newton's method produced a lot of "No convergence" messages. Running times were considerably longer for the full moving surfaces algorithm, at 1-2 minutes. Most of this time is spent in the preprocessing step where the moving surfaces are constructed. When only Step 2 of the algorithm is considered it is much faster.

Finally, the accuracy given in Table 3.2 is an average of the errors for the reported closest points. The averaging is over the order of magnitude of the errors, i.e. it is an average of the log of the errors for each point. The error for a single point was defined in terms of the angle θ between the vector (x_0 - p_cl) and the normal n(u_cl, v_cl) at the computed closest point, see Figure 3.4.

Figure 3.4: The error can be measured by the angle θ between the normal n at p_cl and the vector to the point x_0.

As we can see, Newton's method produced much better accuracy than the moving surfaces. Sources of error for the moving surfaces method are the building of the matrix D, the Gauss elimination, the insertion of x_0 to get the polynomial equations, and the solving of these equations. The results in Table 3.2, however, do not include iterative refinements. Looking into the details for each surface (not shown in Table 3.2), it turns out that there is a complementary property for the two algorithms: surfaces that had a low accuracy for moving surfaces also had a high number of hits. For example, one random surface had an accuracy of 10^-5 vs. ... for moving surfaces and Newton's method, respectively, while the hit numbers were 265 vs. ...

It is necessary to make some remarks about problems with the memory usage of our implementation of the moving surfaces algorithm. The amount of memory needed for the matrix D_1 (or D_2) was about 350 Mbytes for the biquadratic surface. This is a lot, but does not cause any problems. For a bicubic Bézier surface, however, the corresponding memory requirement is about 4 Gbytes with double precision! But even going to single precision was too much to handle for our PCs.

3.5 Discussion

The moving surfaces method provides a new way of computing closest points to a parametric surface. It is an alternative to the conventional algorithms based on iterations and Newton's method, or to subdivision methods. Compared to a Newton based closest point algorithm, it takes a long time to set up the system of moving surfaces for each parametric surface, but a short time to compute the closest point once a point in space is given. This suggests that the potential use of the moving surfaces method is for situations where a large number of closest points are to be computed for each given surface, and where long preprocessing times are acceptable.

Furthermore, the moving surfaces method is better than Newton's method at actually finding the closest points for surfaces with complex geometry. Thus, if this kind of stability is desired, the moving surfaces method may also be a better choice, combined with an iterative refinement of the resulting closest points. In other words, the closest points found from the moving surfaces method could be used as the starting point of the Newton iterations.

However, we had problems with the implementation of the algorithm, due to memory shortage, when applied to the realistic case of bicubic surfaces. The large amount of memory is mainly used for building the matrices D_1 and D_2. In contrast, the amount of memory needed for storing the moving surfaces themselves corresponds to only (N + 1)(d_1,2 + 1)(d_1,2 + 2)(d_1,2 + 3)/6 doubles (i.e. the dimensions of the vectors b_1 and b_2, respectively). In the bicubic case this number is 10230. This shows that we must find another way of implementing the algorithm, or that we must find a way to use moving surfaces together with approximations. But if we can afford the preprocessing of the coefficients of the moving surfaces, we can get fast and accurate calculations of closest points.
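For reference, the storage estimate quoted above follows directly from the degree formulas of Section 3.3; the small computation below is an illustration, not part of the article.

```python
# Storage needed for one moving surface coefficient vector, in doubles.
def moving_surface_doubles(n_u, n_v):
    N = 4*n_u*n_v + (2*n_u - 1)*(2*n_v - 1)        # degree in the moving parameter, Eq. (3.8)
    d = 3*n_v - 1                                  # degree of the coefficient surfaces, Eq. (3.9)
    return (N + 1)*(d + 1)*(d + 2)*(d + 3) // 6

print(moving_surface_doubles(2, 2))   # biquadratic case: 1456 doubles
print(moving_surface_doubles(3, 3))   # bicubic case: 10230 doubles, i.e. about 80 kB
```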


Chapter 4

Solving a closest point problem by subdivision

4.1 Introduction

In this paper, a closest point problem is solved by using subdivision techniques, and it is shown that a considerable speed-up is possible if relatively simple ideas are used.

Closest point problems are heavily used in CAGD applications. Applications include surface smoothing, surface fitting, and curve and surface selection. It is also important to note that in some applications, solving closest point problems becomes the bottleneck of the algorithm. This fact alone makes it interesting to be able to solve such problems fast.

The common way of solving closest point problems is by using iterative methods (see [12] and references therein). These are generally fast, but the need for an initial guess makes them error prone. Sometimes these kinds of errors, even when highly uncommon, can ruin the result. In these cases, a subdivision method can be the best way of ensuring high quality output. The closest point problem can be solved by using a numerically stable algebraic polynomial solver, and this approach is explored in [19, Section 4.2]. This method is robust, but it is much slower than what one can get from a good iterative method or a subdivision algorithm.

Note that closest point problems for discrete sets are very different from the problem of finding closest points on smooth surfaces. However, in a CAGD system it may be interesting to find the closest point on a model consisting of many smooth surface patches.

Given such a problem, the techniques concerning the discrete case found in [37] will be useful in addition to a good understanding of the smooth case considered here.

Section 4.2 defines the problem and explains the basic way of solving it by subdivision. Notes about how different implementations should be evaluated and what happens in special cases are also included here. Section 4.3 explores different changes to the basic algorithm and how these changes affect the speed of implementations; a table summarizing run times for the different implementations in Section 4.3 is included at the end of that section. Section 4.4 contains an error analysis of subdivision methods. A formula giving a bound on the error is produced, and this formula can be used when determining if the output is accurate enough for a given application. The chapter ends with Section 4.5, the conclusion.

4.2 Definition of the problem

In this chapter, a surface patch ϕ of (bi)degree (d, e) is a map ϕ : [0, 1]^2 → R^3 given by an array of control points c_ij ∈ R^3 and the formula

ϕ(u, v) = Σ_(i=0)^d Σ_(j=0)^e (d choose i)(e choose j) c_ij (1 - u)^i u^(d-i) (1 - v)^j v^(e-j).

This surface is called a tensor product Bézier surface with control points c_ij. For a point x ∈ R^3, we want to find the parameter point (u_0, v_0) ∈ [0, 1]^2 that minimizes the distance function (u, v) ↦ ‖ϕ(u, v) - x‖. Note that if x is in the image of ϕ, then the closest point problem specializes to the inverse problem. This problem has been studied in, for example, [21]. In a CAGD system one would like to solve the same problem in a more general setting, for example where ϕ is a tensor product spline. However, since we can reduce the spline case to the Bézier case by knot insertion, considering only the Bézier case is not a big limitation.

4.2.1 The basic method

The idea was to start by implementing a simple method for finding closest points, and then introduce different changes that might or might not speed up the implementation. The basic method described here will be improved in Section 4.3.

Let F : [0, 1]^2 → R be defined as the distance squared function, F(u, v) = ‖ϕ(u, v) - x‖^2. The critical points of F are the zero set of the two partial derivatives, F_u(u, v) and F_v(u, v). If we evaluate one of these we get

F_u(u, v) = 2(ϕ(u, v) - x) · ϕ_u(u, v),

where ϕ_u = ∂ϕ/∂u. Note that this is a very natural orthogonality condition: when F_u(u', v') and F_v(u', v') are both zero, the vector ϕ(u', v') - x is orthogonal to the tangent plane of ϕ associated to the parameter value (u', v'), or the parameterization is degenerate at this point. Abusing language, we say that (u', v') is a critical point for x if F_u(u', v') = F_v(u', v') = 0. It is clear that the closest point (u_0, v_0) will be either a critical point or on the border of the parameter domain.

The basic method is straightforward. Find the closest point on the border, and then find all the critical points by applying a subdivision solver to the tensor product functions F_u and F_v, and compare the distances. The shortest distance will give the closest point. One note on the closest point on the border may be useful: to find the closest point on one side, for example (0, 1) × {0}, it is sufficient to evaluate F(u, 0) at zeros of the univariate polynomial F_u(u, 0). Every implementation in this chapter will start by finding the closest point on the border.

4.2.2 Quality of the output and special cases

Even though the algorithms described in this chapter have been tested thoroughly, it is the application of the output data that should determine if any algorithm is accurate enough.

When doing subdivision, one may be very unlucky, and the subdivision will take a lot of time. This happens when the set of critical points is one-dimensional. The possibility of a one-dimensional set of solutions is by far the biggest problem with subdivision methods, and usually must be handled with great care. When this happens in our case, all points on each connected component of the set of critical points will give the same distance to x, so any point on one such component will do as a solution to the problem. For this reason special cases are not such a big problem, at least when we can detect them. We solve this by counting the number of calls to the recursive function, and, if it is called too many times, we abort and start over with a lower maximum recursion depth. Together with iterative refinement of best points, this should give us good answers in practically all cases.

When the processing of one point is completed, we know to which depth the recursion has been completed successfully. Then the error analysis in Section 4.4 can be used to determine if the guaranteed accuracy is sufficient.

4.3 Improving the basic method

All implementations follow this pattern:

- Initialize (u_0, v_0, D) = (0, 0, F(0, 0)). This set of variables represents the best parameter point so far together with its distance squared.
- For each (u, v) ∈ {(0, 1), (1, 0), (1, 1)}, evaluate F(u, v) and, if F(u, v) < D, update (u_0, v_0, D).
- For each side of the unit square, solve a univariate polynomial by subdivision and update (u_0, v_0, D) if a new closest point is found.
- Recursively find candidate points in the interior of the unit square and update (u_0, v_0, D).

It is the last item that will change when improving the basic method. But before we discuss these changes, we need some simple notation. Let ϕ_c : [0, 1]^2 → R^n be given by (d + 1)(e + 1) control points c = (c_ij) and the formula

ϕ_c(u, v) = Σ_(i=0)^d Σ_(j=0)^e (d choose i)(e choose j) c_ij (1 - u)^i u^(d-i) (1 - v)^j v^(e-j).

We say that c = (c_ij) represents ϕ_c on the square [0, 1]^2 ⊂ R^2. Furthermore, we say that b = (b_ij) represents ϕ_c on the rectangle [a_1, b_1] × [a_2, b_2] if

ϕ_b(u, v) = ϕ_c((1 - u)a_1 + ub_1, (1 - v)a_2 + vb_2).

Such representations can be calculated easily by using de Casteljau's algorithm, and if 0 ≤ a_i ≤ b_i ≤ 1 for i = 1, 2, the calculations are stable.

The input to the basic recursive solver is a rectangle [a_1, b_1] × [a_2, b_2] and two sets of control points (scalars) representing F_u and F_v. If either F_u or F_v has control points of constant sign, then the solver concludes that there are no critical points in the rectangle and returns. Otherwise it subdivides the rectangle and calls itself with new control points representing F_u and F_v on the subdivided rectangles, once for each sub-rectangle. The algorithm also keeps track of the recursion level, and when it reaches the bottom it evaluates the midpoint and updates (u_0, v_0, D) if a new closest point has been found. Now we will focus on different changes to the implementations.

4.3.1 Changing the subdivision

The first change is to subdivide ϕ(u, v) - x, ϕ_u(u, v) and ϕ_v(u, v) instead of subdividing F_u and F_v. Since the cost of subdividing is roughly proportional to the cube of the degree, subdividing these three vector valued functions is faster than subdividing the two scalar functions F_u and F_v. However, this change means that the dot product must be carried out when checking whether the control points of F_u or F_v have constant sign. The cost of the multiplication of tensor product Bézier functions is roughly proportional to the fourth power of the degree, and that indicates that changing the subdivision is a bad idea. However, even though changing the subdivision turned out to make the algorithm slower, it allowed us to do other optimizations (see Sections 4.3.2 and 4.3.3) that let us do the full multiplication less often.

Experiments showed that only subdividing ϕ(u, v) - x and calculating the subdivided ϕ_u(u, v) and ϕ_v(u, v) from ϕ(u, v) - x is a bad idea. The operation is unstable, and this instability slowed down the implementations considerably. On the other hand, it is conceivable that this instability can become negligible in some cases, when the recursion level is much smaller than the precision of the computer arithmetic used.

4.3.2 Changing the multiplication algorithm to allow an early exit

The conventional way to do the multiplication of two tensor product polynomials is to initialize the result control points to zero, then loop through the control coefficients of one factor and, for each such coefficient, loop through the coefficients of the other factor, while accumulating the result. The multiplication in the dot products (ϕ(u, v) - x) · ϕ_u and (ϕ(u, v) - x) · ϕ_v was changed so that each coefficient of the result was calculated before starting on the next. The loops became more complicated, but the change allows the constant sign test to stop early if both signs are encountered.

42 38 CHAPTER 4. CLOSEST POINTS BY SUBDIVISION The implementation was further improved by first calculating the corner coefficients. These coefficients only depend on one control point from each factor, so they can be calculated very fast. Also, when checking for constant sign in F u and F v, the corner coefficients are most likely to differ. This change resulted in a considerable improvement in speed, but not enough to offset the speed lost by the changes in Section Introducing a box test and a plane test When we enter the recursive function, the variable D holds the best distance squared. Let {b ij } represent ϕ(u, v) x in the rectangle [a 1, b 1 ] [a 2, b 2 ], so ϕ([a 1, b 1 ], [a 2, b 2 ]) x = ϕ b ([0, 1], [0, 1]) is contained in the convex hull of the control points {b ij }. We can estimate the distance from x to the surface patch ϕ([a 1, b 1 ], [a 2, b 2 ]) and return if it is too big. This is done by calculating the distance from 0 to the smallest box containing {b ij }. If this distance is bigger than D, we can return before doing the multiplication. This is called the box test. While calculating the smallest box containing the control points {b ij }, we can also find the control point that is closest to 0. Let this point be b αβ, and calculate b αβ b ij for each control point b ij. If all of these dot products are bigger than the constant D b αβ, we can return before doing the multiplication. This is called the plane test, since it checks if all control points b ij are on the other side (than the origin) of the plane X b αβ = D b αβ. For the box test and the plane test to work as well as possible we also calculate distances at the middle point and the midpoints of the sides before subdividing. This way, when entering the recursive function, we have already tested the corner points. This increases the chance of the box test and the plane test helping us. The box test and the plane test resulted in a huge speed-up, making the algorithm faster than the basic method Using the second order derivatives For some rectangles [a 1, b 1 ] [a 2, b 2 ] the function F may be convex, and therefore the rectangle can contain at most one base-point. This property can be determined from the signs of the eigenvalues of the Hessian of F. If the product of the eigenvalues is positive in the entire rectangle and the Newton refinement converges to a point inside the rectangle, then this point is the only base-point in the rectangle.

43 4.3. IMPROVING THE BASIC METHOD 39 A method for calculating the control coefficients of the product of the eigenvalues was introduced, but the high degree made this too costly, and it only made the implementations slower The recursive algorithm explained This section explains the algorithm we got after applying the changes in Sections 4.3.1, and The algorithm follows the pattern specified in the beginning of Section 4.3. When we enter the recursive function, it is assumed that the corners of the square to be considered has been evaluated, and that the variables (u 0, v 0 ) and D have been updated accordingly. We keep track of the total number of calls to be able to abort in special cases. The recursive function works as follows: Input: The square [u 1, u 2 ] [v 1, v 2 ], control points b ij representing ϕ x and vectors b ij u and b ij v representing ϕ u and ϕ v in this square. Output: Void, the variables (u 0, v 0 ) and D will be updated if necessary. If the number of calls to the recursive function is bigger than a constant, in our case 16384, abort. If else, increase the variable holding the number of calls to the recursive function. Calculate the distance from the origin to the smallest box containing the control points b ij. If this is bigger than D, return. Also perform the plane test described in Section Calculate the control points a ij R representing 1 2 F u, one at a time, starting with the corner points a 0,0 = b 0,0 b 0,0 u, a 2d 1,0 = b d,0 b d 1,0 u, a 0,2e = b 0,e b 0,e u and a 2d,2e = b d,e b d 1,e u. If both signs are encountered, stop calculating coefficients and skip to the next bullet point. If all signs are the same, return. Do the same as above for F v. Let u = 1 2 (u 1 + u 2 ) and v = 1 2 (v 1 + v 2 ). Evaluate F (u 1, v ), F (u 2, v ), F (u, v 1 ), F (u, v 2 ) and F (u, v ) and update (u 0, v 0 ) and D accordingly. If the square is sufficiently small, return. Calculate representations of ϕ x, ϕ u and ϕ v on the four squares [u 1, u ] [v 1, v ], [u 1, u ] [v, v 2 ], [u, u 2 ] [v 1, v ] and [u, u 2 ] [v, v 2 ] using de Casteljau s algorithm on b ij, b ij u and b ij v. Then call the recursive function

44 40 CHAPTER 4. CLOSEST POINTS BY SUBDIVISION with these values. In most cases it pays to sort the order of these calls based on the values F (u 1, v ), F (u 2, v ), F (u, v 1 ) and F (u, v 2 ). To be precise, if F (u 1, v ) < F (u 2, v ), then call the the recursive function for the square [u 1, u ] [v 1, v ] before the square [u, u 2 ] [v 1, v ] and the square [u 1, u ] [v, v 2 ] before [u, u 2 ] [v, v 2 ]. Similarly, if F (u, v 1 ) < F (u, v 2 ), handle the square [u 1, u ] [v 1, v ] before [u 1, u ] [v, v 2 ] and [u, u 2 ] [v 1, v ] before [u, u 2 ] [v, v 2 ] The basic method with the box and plane tests After seeing the huge improvement from the tests in Section 4.3.3, it was natural to try to improve the basic method using the same tests. The resulting algorithm calculates representations of F u, F v and ϕ(u, v) x in the unit square. Then the recursive function does essentially the same as the algorithm in Section 4.3.5, except that no multiplication needs to be carried out. The result is a method that is a little slower than the algorithm in Section However, the difference is so small that a strong conclusion cannot be drawn - the result can be very different on different hardware, and, most importantly, on other test cases Doing a preconditioned constant sign test General subdivision algorithms can be sped up considerably by doing a preconditioned constant sign test [26]. The idea is to do a special linear transformation of the equations to be solved, and then to check if any of the resulting equations has control points of constant sign. The linear transformation is determined by the cofactor matrix of the Jacobian matrix evaluated in the midpoint of the square. Preconditioning requires the equations to be of the same degree, so we elevate the degree of F u and F v to (2d, 2e), and transform the system by using the cofactor matrix of the Hessian of F at the midpoint: ( ) ( ) ( ) G1 Fvv (u =, v ) F uv (u, v ) Fu F uv (u, v ) F uu (u, v ) G 2 If the control points of either G 1 or G 2 have constant sign (different from zero) we can conclude that the rectangle contains no critical point. For our datasets, doing preconditioning helped a lot when the box and plane tests where not present, cutting processing time in half. When the box and plane tests were used, the effect ranged from a slowdown of a few percent to a F v

45 4.3. IMPROVING THE BASIC METHOD 41 speedup of a few percent. It is highly likely that this will be different on other datasets Speed measurements The implementations were tested on many surfaces and on many points per surface. To be exact, for each bi-degree, 100 surfaces with control points placed randomly in the unit cube were tested, and for each surface 100 random points in the unit cube were selected. Table 4.1 shows the times for seven algorithms: the algorithm described in the algorithm from Section without the plane test from Section the algorithm from Section without the plane test and the box test from Section the algorithm from Section without the improved calculation of the product 1 2 F u(u, v) = (ϕ(u, v) x) ϕ u (u, v) from Section the basic algorithm the basic algorithm with a box test the basic algorithm with a box test and a plane test Each of these algorithms were tested with and without preconditioning, and the table shows the best time for each algorithm. Degree (2, 2) (3, 3) (4, 4) (3, 9) (7, 7) (20, 20) Algorithm from Section no plane test no plane or box test no multiplication optimization Basic algorithm with box test with box and plane test Table 4.1: Time in seconds spent to solve the closest point problem for 100 surfaces and 100 point per surface on a 2.80 GHz Pentium 4 CPU. A indicates that it did not pay to do a preconditioned constant sign test.

46 42 CHAPTER 4. CLOSEST POINTS BY SUBDIVISION The maximum recursion level was set to 32, but numbers should be representative. Experiments showed that the time was roughly proportional to the level of recursion, at least for reasonable values. The usual cautions apply: It is unlikely that these test cases give a very good indication of what is fastest for less random data sets on different hardware. Because of this, it is recommended that anyone who needs to calculate closest points fast do their own timings. Remark: I also experimented with different compiler optimization flags, and the optimal set of flags was not constant for different degrees. To be specific, it was the -funroll-loops options that helped in some cases, but made things worse in others. If this problem is to be solved in production code, the best compiler available should be used. The table shows times for the best set of compiler settings for each case. 4.4 Error analysis If infinite precision in the calculations is assumed, then we can develop a lower bound L of F ([0, 1], [0, 1]) in terms of the distance squared D returned by the algorithm, the bi-degree (d, e) of ϕ, the depth n of the recursion, and the diameter R = max i,j,k,l { cij c kl } of the control points. We also assume that the number of recursive calls is not constrained in any way except in terms of depth. This means that the actual constrained algorithm will give worse results in some rare cases, but it will be able to report what level was reached successfully, giving us a worse guaranteed accuracy. We say that the depth of recursion is n if any square bigger than 2 n 2 n that may contain the closest point is subdivided. We know that a square of size 2 n 2 n containing the closest point (u 0, v 0 ) will be considered by the algorithm. Let this square be denoted [u 1, u 2 ] [v 1, v 2 ] with u 1 u 0 u 2, v 1 v 0 v 2 and u 2 u 1 = v 2 v 1 2 n. The corners of this small square have been evaluated as candidates for the return value of the algorithm, so D F (u i, v j ) for i = 1, 2 and j = 1, 2. We now assume u 0 u 1 2 n 1 and v 0 v 1 2 n 1, so that (u 1, v 1 ) is the sample point closest to (u 0, v 0 ). The derivatives ϕ u and ϕ v have limited range: ϕ u (u, v) dr for all (u, v) [0, 1] 2 (4.1) ϕ v (u, v) er for all (u, v) [0, 1] 2 (4.2)

47 4.4. ERROR ANALYSIS 43 This limits the difference ϕ(u 0, v 0 ) ϕ(u 1, v 1 ): ϕ(u 0, v 0 ) ϕ(u 1, v 1 ) 2 n 1 (d + e)r (4.3) Thus we have a lower bound for the shortest distance, ϕ(u 0, v 0 ) x D 2 n 1 (d + e)r. This bound is not very impressive, but we can improve this bound by using the fact that F u (u 0, v 0 ) = 0 and that the second derivatives are bounded as follows: ϕ uu (u, v) 2d(d 1)R for all (u, v) [0, 1] 2 (4.4) ϕ uv (u, v) 2deR for all (u, v) [0, 1] 2 (4.5) ϕ vv (u, v) 2e(e 1)R for all (u, v) [0, 1] 2 (4.6) We can now make a distance preserving coordinate change such that ϕ(u 0, v 0 ) = (0, 0, 0), x = ( ϕ(u 0, v 0 ) x, 0, 0) and ϕ(u 1, v 1 ) =: (a, b, c). From equations (4.1) and (4.2) we get b 2 + c 2 (2 n 1 (d + e)r) 2 =: B. Furthermore, equations (4.4), (4.5) and (4.6) gives ϕ u (u, v) (1, 0, 0) 2 n d(e + d 1) ϕ v (u, v) (1, 0, 0) 2 n e(d + e 1) on the rectangle [u 1, u 0 ] [v 1, v 0 ]. This gives a 2 2n 1 (d + e)(d + e 1). We can refine this a little bit: Equations (4.4), (4.5) and (4.6) give ϕ u (u, v) (1, 0, 0) 2 n d((d 1) u 0 u + e v 0 v ) and ϕ v (u, v) (1, 0, 0) 2 n e(d u 0 u + (e 1) v 0 v ) Setting u 0 u = v 0 v = t and integrating the bound on the derivatives, we get a 2 n n (d + e)(d + e 1)t dt = 2 2n 2 (d + e)(d + e 1) =: A.

48 44 CHAPTER 4. CLOSEST POINTS BY SUBDIVISION From Pythagoras we get D ( ϕ(u 0, v 0 ) x + A) 2 + B. From this we get the lower bound L of the shortest distance L := D B A ϕ(u 0, v 0 ) x. The corresponding upper bound of the error D ϕ(u 0, v 0 ) x can be calculated in a stable way: D ϕ(u0, v 0 ) x D L = D D B + A = B D + D B + A This is better than the error bound in equation (4.3) in most cases, when D is much bigger than B. A few examples can illustrate this pretty well. If the bi-degree is (3, 3), the diameter of the control points is 1, the distance returned is D = 0.01, the depth of the recursion is 26, then the error is at most If the recursion level is increased to 32, then the error is at most Conclusion The closest point problem treated in this chapter is quite common in geometric applications, and often needs to be solved by a computer algorithm in the fastest possible way. Subdivision methods has the advantage that they can be made very fast in almost all cases, and the guaranteed accuracy of the algorithm is known. For special points, when the subdivision method takes too long, the application must decide the proper action. In some cases it is natural to fall back to a more accurate method. In other cases it is natural to simply discard the point and move on to the next. The result is a flexible set of algorithms that should be usable for most applications.

49 Chapter 5 Monoid hypersurfaces 1 Pål Hermunn Johansen, Magnus Løberg, Ragni Piene 5.1 Introduction A monoid hypersurface is an (affine or projective) irreducible algebraic hypersurface which has a singularity of multiplicity one less than the degree of the hypersurface. The presence of such a singular point forces the hypersurface to be rational: there is a rational parameterization given by (the inverse of) the linear projection of the hypersurface from the singular point. The existence of an explicit rational parameterization makes such hypersurfaces potentially interesting objects in computer aided design. Moreover, since the space of monoids of a given degree is much smaller than the space of all hypersurfaces of that degree, one can hope to use monoids efficiently in (approximate or exact) implicitization problems. These were the reasons for considering monoids in the paper [35]. In [28] monoid curves are used to approximate other curves that are close to a monoid curve, and in [29] the same is done for monoid surfaces. In both articles the error of such approximations are analyzed for each approximation, a bound on the distance from the monoid to the original curve or surface can be computed. 1 This chapter has been submitted as an article for the proceedings of the conference COM- PASS II, and has been accepted by the editors of the book. 45

50 46 CHAPTER 5. MONOID HYPERSURFACES In this article we shall study properties of monoid hypersurfaces and the classification of monoid surfaces with respect to their singularities. Section 5.2 explores properties of monoid hypersurfaces in arbritrary dimension and over an arbitrary base field. Section 5.3 contains results on monoid surfaces, both over arbritrary fields and over R. The last section deals with the classification of monoid surfaces of degree four. Real and complex quartic monoid surfaces were first studied by Rohn [32], who gave a fairly complete description of all possible cases. He also remarked [32, p. 56] that some of his results on quartic monoids hold for monoids of arbitrary degree; in particular, we believe he was aware of many of the results in Section 5.3. Takahashi, Watanabe, and Higuchi [38] classify complex quartic monoid surfaces, but do not refer to Rohn. (They cite Jessop [14]; Jessop, however, only treats quartic surfaces with double points and refers to Rohn for the monoid case.) Here we aim at giving a short description of the possible singularities that can occur on quartic monoids, with special emphasis on the real case. 5.2 Basic properties Let k be a field, let k denote its algebraic closure and P n := P n k the projective n-space over k. Furthermore we define the set of k-rational points P n (k) as the set of points that admit representatives (a 0 : : a n ) with each a i k. For any homogeneous polynomial F k[x 0,..., x n ] of degree d and point p = (p 0 : p 1 : : p n ) P n we can define the multiplicity of Z(F ) at p. We know that p r 0 for some r, so we can assume p 0 = 1 and write F = d i=0 x d i 0 f i (x 1 p 1 x 0, x 2 p 2 x 0,..., x n p n x 0 ) where f i is homogeneous of degree i. Then the multiplicity of Z(F ) at p is defined to be the smallest i such that f i 0. Let F k[x 0,..., x n ] be of degree d 3. We say that the hypersurface X = Z(F ) P n is a monoid hypersurface if X is irreducible and has a singular point of multiplicity d 1. In this article we shall only consider monoids X = Z(F ) where the singular point is k-rational. Modulo a projective transformation of P n over k we may and shall therefore assume that the singular point is the point O = (1 : 0 : : 0).

51 5.2. BASIC PROPERTIES 47 Hence, we shall from now on assume that X = Z(F ), and F = x 0 f d 1 + f d, where f i k[x 1,..., x n ] k[x 0,..., x n ] is homogeneous of degree i and f d 1 0. Since F is irreducible, f d is not identically 0, and f d 1 and f d have no common (non-constant) factors. The natural rational parameterization of the monoid X = Z(F ) is the map given by θ F : P n 1 P n θ F (a) = (f d (a) : f d 1 (a)a 1 :... : f d 1 (a)a n ), for a = (a 1 : : a n ) such that f d 1 (a) 0 or f d (a) 0. The set of lines through O form a P n 1. For every a = (a 1 : : a n ) P n 1, the line L a := {(s : ta 1 :... : ta n ) (s : t) P 1 } (5.1) intersects X = Z(F ) with multiplicity at least d 1 in O. If f d 1 (a) 0 or f d (a) 0, then the line L a also intersects X in the point θ F (a) = (f d (a) : f d 1 (a)a 1 :... : f d 1 (a)a n ). Hence the natural parameterization is the inverse of the projection of X from the point O. Note that θ F maps Z(f d 1 ) \ Z(f d ) to O. The points where the parameterization map is not defined are called base points, and these points are precisely the common zeros of f d 1 and f d. Each such point b corresponds to the line L b contained in the monoid hypersurface. Additionally, every line of type L b contained in the monoid hypersurface corresponds to a base point. Note that Z(f d 1 ) P n 1 is the projective tangent cone to X at O, and that Z(f d ) is the intersection of X with the hyperplane at infinity Z(x 0 ). Assume P X is another singular point on the monoid X. Then the line L through P and O has intersection multiplicity at least d = d + 1 with X. Hence, according to Bezout s theorem, L must be contained in X, so that this is only possible if dim X 2. By taking the partial derivatives of F we can characterize the singular points of X in terms of f d and f d 1 : Lemma 5.1. Let = ( x 1,..., x n ) be the gradient operator. (i) A point P = (p 0 : p 1 : : p n ) P n is singular on Z(F ) if and only if f d 1 (p 1,..., p n ) = 0 and p 0 f d 1 (p 1,..., p n ) + f d (p 1,..., p n ) = 0.

52 48 CHAPTER 5. MONOID HYPERSURFACES (ii) All singular points of Z(F ) are on lines L a where a is a base point. (iii) Both Z(f d 1 ) and Z(f d ) are singular in a point a P n 1 if and only if all points on L a are singular on X. (iv) If not all points on L a are singular, then at most one point other than O on L a is singular. Proof. (i) follows directly from taking the derivatives of F = x 0 f d 1 + f d, and (ii) follows from (i) and the fact that F (P ) = 0 for any singular point P. Furthermore, a point (s : ta 1 :... : ta n ) on L a is, by (i), singular if and only if s f d 1 (ta) + f d (ta) = t d 1 (s f d 1 (a) + t f d (a)) = 0. This holds for all (s : t) P 1 if and only if f d 1 (a) = f d (a) = 0. This proves (iii). If either f d 1 (a) or f d 1 (a) are nonzero, the equation above has at most one solution (s 0 : t 0 ) P 1 in addition to t = 0, and (iv) follows. Note that it is possible to construct monoids where F k[x 0,..., x n ], but where no points of multiplicity d 1 are k-rational. In that case there must be (at least) two such points, and the line connecting these will be of multiplicity d 2. Furthermore, the natural parameterization will typically not induce a parameterization of the k-rational points from P n 1 (k). 5.3 Monoid surfaces In the case of a monoid surface, the parameterization has a finite number of base points. From Lemma 5.1 (ii) we know that all singularities of the monoid other than O, are on lines L a corresponding to these points. In what follows we will develop the theory for singularities on monoid surfaces most of these results were probably known to Rohn [32, p. 56]. We start by giving a precise definition of what we shall mean by a monoid surface. Definition 5.2. For an integer d 3 and a field k of characteristic 0 the polynomials f d 1 k[x 1, x 2, x 3 ] d 1 and f d k[x 1, x 2, x 3 ] d define a normalized nondegenerate monoid surface Z(F ) P 3, where F = x 0 f d 1 +f d k[x 0, x 1, x 2, x 3 ] if the following hold: (i) f d 1, f d 0

53 5.3. MONOID SURFACES 49 (ii) gcd(f d 1, f d ) = 1 (iii) The curves Z(f d 1 ) P 2 and Z(f d ) P 2 have no common singular point. The curves Z(f d 1 ) P 2 and Z(f d ) P 2 are called respectively the tangent cone and the intersection with infinity. Unless otherwise stated, a surface that satisfies the conditions of Definition 5.2 shall be referred to simply as a monoid surface. Since we have finitely many base points b and each line L b contains at most one singular point in addition to O, monoid surfaces will have only finitely many singularities, so all singularities will be isolated. (Note that Rohn included surfaces with nonisolated singularities in his study [32].) We will show that the singularities other than O can be classified by local intersection numbers. Definition 5.3. Let f, g k[x 1, x 2, x 3 ] be nonzero and homogeneous. Assume p = (p 1 : p 2 : p 3 ) Z(f, g) P 2, and define the local intersection number k[x 1, x 2, x 3 ] mp I p (f, g) = lg, (f, g) where k is the algebraic closure of k, m p = (p 2 x 1 p 1 x 2, p 3 x 1 p 1 x 3, p 3 x 2 p 2 x 3 ) is the homogeneous ideal of p, and lg denotes the length of the local ring as a module over itself. Note that I p (f, g) 1 if and only if f(p) = g(p) = 0. When I p (f, g) = 1 we say that f and g intersect transversally at p. The terminology is justified by the following lemma: Lemma 5.4. Let f, g k[x 1, x 2, x 3 ] be nonzero and homogeneous and p Z(f, g). Then the following are equivalent: (i) I p (f, g) > 1 (ii) f is singular at p, g is singular at p, or f(p) and g(p) are nonzero and parallel. (iii) s f(p) + t g(p) = 0 for some (s, t) (0, 0) Proof. (ii) is equivalent to (iii) by a simple case study: f is singular at p if and only if (iii) holds for (s, t) = (1, 0), g is singular at p if and only if (iii) holds for (s, t) = (0, 1), and f(p) and g(p) are nonzero and parallel if and only if (iii) holds for some s, t 0.

54 50 CHAPTER 5. MONOID HYPERSURFACES We can assume that p = (0 : 0 : 1), so I p (f, g) = lg S where S = k[x 1, x 2, x 3 ] (x1,x 2). (f, g) Furthermore, let d = deg f, e = deg g and write f = d i=1 f i x d i 3 and g = e i=1 g i x e i 3 where f i, g i are homogeneous of degree i. If f is singular at p, then f 1 = 0. Choose l = ax 1 + bx 2 such that l is not a multiple of g 1. Then l will be a nonzero non-invertible element of S, so the length of S is greater than 1. We have f(p) = ( f 1 (p), 0) and g(p) = ( g 1 (p), 0). If they are parallel, choose l = ax 0 + bx 1 such that l is not a multiple of f 1 (or g 1 ), and argue as above. Finally assume that f and g intersect transversally at p. We may assume that f 1 = x 1 and g 1 = x 2. Then (f, g) = (x 1, x 2 ) as ideals in the local ring k[x 1, x 2, x 3 ] (x1,x 2). This means that S is isomorphic to the field k(x 3 ). The length of any field is 1, so I p (f, g) = lg S = 1. Now we can say which are the lines L b, with b Z(f d 1, f d ), that contain a singularity other than O: Lemma 5.5. Let f d 1 and f d be as in Definition 5.2. The line L b contains a singular point other than O if and only if Z(f d 1 ) is nonsingular at b and the intersection multiplicity I b (f d 1, f d ) > 1. Proof. Let b = (b 1 : b 2 : b 3 ) and assume that (b 0 : b 1 : b 2 : b 3 ) is a singular point of Z(F ). Then, by Lemma 5.1, f d 1 (b 1, b 2, b 3 ) = f d (b 1, b 2, b 3 ) = 0 and b 0 f d 1 (b 1, b 2, b 3 ) + f d (b 1, b 2, b 3 ) = 0, which implies I b (f d 1, f d ) > 1. Furthermore, if f d 1 is singular at b, then the gradient f d 1 (b 1, b 2, b 3 ) = 0, so f d, too, is singular at b, contrary to our assumptions. Now assume that Z(f d 1 ) is nonsingular at b = (b 1 : b 2 : b 3 ) and the intersection multiplicity I b (f d 1, f d ) > 1. The second assumption implies f d 1 (b 1, b 2, b 3 ) = f d (b 1, b 2, b 3 ) = 0 and s f d 1 (b 1, b 2, b 3 ) = t f d (b 1, b 2, b 3 ) for some (s, t) (0, 0). Since Z(f d 1 ) is nonsingular at b, we know that f d 1 (b 1, b 2, b 3 ) 0, so t 0. Now ( s/t : b 1 : b 2 : b 3 ) (1 : 0 : 0 : 0) is a singular point of Z(F ) on the line L b.

55 5.3. MONOID SURFACES 51 Recall that an A n singularity is a singularity with normal form x 2 1+x 2 2+x n+1 3, see [3, p. 184]. Proposition 5.6. Let f d 1 and f d be as in Definition 5.2, and assume P = (p 0 : p 1 : p 2 : p 3 ) (1 : 0 : 0 : 0) is a singular point of Z(F ) with I (p1:p 2:p 3)(f d 1, f d ) = m. Then P is an A m 1 singularity. Proof. We may assume that P = (0 : 0 : 0 : 1) and write the local equation g := F (x 0, x 1, x 2, 1) = x 0 f d 1 (x 1, x 2, 1) + f d (x 1, x 2, 1) = d g i (5.2) with g i k[x 0, x 1, x 2 ] homogeneous of degree i. Since Z(f d 1 ) is nonsingular at 0 := (0 : 0 : 1), we can assume that the linear term of f d 1 (x 1, x 2, 1) is equal to x 1. The quadratic term g 2 of g is then g 2 = x 0 x 1 + ax bx 1 x 2 + cx 2 2 for some a, b, c k. The Hessian matrix of g evaluated at P is H(g)(0, 0, 0) = H(g 2 )(0, 0, 0) = 1 2a b 0 b 2c which has corank 0 when c 0 and corank 1 when c = 0. By [3, p. 188], P is an A 1 singularity when c 0 and an A n singularity for some n when c = 0. The index n of the singularity is equal to the Milnor number i=2 µ = dim k k[x 0, x 1, x 2 ] (x0,x 1,x 2) J g k[x 0, x 1, x 2 ] (x0,x = 1,x 2) dim k ( ). g x 0, g x 1, g x 2 We need to show that µ = I 0 (f d 1, f d ) 1. From the definition of the intersection multiplicity, it is not hard to see that I 0 (f d 1, f d ) = dim k k[x 1, x 2 ] (x1,x 2) (f d 1 (x 1, x 2, 1), f d (x 1, x 2, 1)). The singularity at p is isolated, so the Milnor number is finite. Furthermore, since gcd(f d 1, f d ) = 1, the intersection multiplicity is finite. Therefore both dimensions can be calculated in the completion rings. For the rest of the proof we view f d 1 and f d as elements of the power series rings k[[x 1, x 2 ]] k[[x 0, x 1, x 2 ]], and all calculations are done in these rings.

56 52 CHAPTER 5. MONOID HYPERSURFACES Since Z(f d 1 ) is smooth at O, we can write f d 1 (x 1, x 2, 1) = (x 1 φ(x 2 )) u(x 1, x 2 ) for some power series φ(x 2 ) and invertible power series u(x 1, x 2 ). To simplify notation we write u = u(x 1, x 2 ) k[[x 1, x 2 ]]. The Jacobian ideal J g is generated by the three partial derivatives: g = (x 1 φ(x 2 )) u x 0 ( g = x 0 u + (x 1 φ(x 2 )) u x 1 x 1 g = x 0 x 2 By using the fact that x 1 φ(x 2 ) u x 1 and u x 2 : J g = ) + f d ( φ (x 2 )u + (x 1 φ(x 2 )) u x 2 x 1 (x 1, x 2 ) ) + f d x 2 (x 1, x 2 ) ( g x 0 ) we can write J g without the symbols ( ) x 1 φ(x 2 ), x 0 u + f d x 1 (x 1, x 2 ), x 0 uφ (x 2 ) + f d x 2 (x 1, x 2 ) To make the following calculations clear, define the polynomials h i by writing f d (x 1, x 2, 1) = d i=0 xi 1h i (x 2 ). Now ( J g = x 1 φ(x 2 ), x 0 u + d i=1 ixi 1 1 h i (x 2 ), x 0 uφ (x 2 ) + ) d i=0 xi 1h i (x 2), so where k[[x 0, x 1, x 2 ]] J g = k[[x 2 ]] (A(x 2 )) ( d ) ( A(x 2 ) = φ (x 2 ) i=1 iφ(x d ) 2) i 1 h i (x 2 ) + i=0 φ(x 2) i h i (x 2). For the intersection multiplicity we have k[[x 1, x 2 ]] ( ) = f d 1 (x 1, x 2, 1), f d (x 1, x 2, 1) k[[x 1, x 2 ]] ( x 1 φ(x 2 ), d i=0 xi 1 h i(x 2 ) ) = k[[x 2 ]] ( ) B(x 2 ) where B(x 2 ) = d i=0 φ(x 2) i h i (x 2 ). Observing that B (x 2 ) = A(x 2 ) gives the result µ = I 0 (f d 1, f d ) 1.

57 5.3. MONOID SURFACES 53 Corollary 5.7. A monoid surface of degree d can have at most 1 2d(d 1) singularities in addition to O. If this number of singularities is obtained, then all of them will be of type A 1. Proof. The sum of all local intersection numbers I a (f d 1, f d ) is given by Bézout s theorem: I a (f d 1, f d ) = d(d 1). a Z(f d 1,f d ) The line L a will contain a singularity other than O only if I a (f d 1, f d ) 2, giving a maximum of 1 2d(d 1) singularities in addition to O. Also, if this number is obtained, all local intersection numbers must be exactly 2, so all singularities other than O will be of type A 1. Both Proposition 5.6 and Corollary 5.7 were known to Rohn, who stated these results only in the case d = 4, but said they could be generalized to arbitrary d [32, p. 60]. For the rest of the section we will assume k = R. It turns out that we can find a real normal form for the singularities other than O. The complex singularities of type A n come in several real types, with normal forms x 2 1±x 2 2±x n+1 3. Varying the ± gives two types for n = 1 and n even, and three types for n 3 odd. The real type with normal form x 2 1 x x n+1 3 is called an A n singularity, or of type A, and is what we find on real monoids: Proposition 5.8. On a real monoid, all singularities other than O are of type A. Proof. Assume p = (0 : 0 : 1) is a singular point on Z(F ) and set g = F (x 0, x 1, x 2, 1) as in the proof of Proposition 5.6. First note that u 1 g = x 0 (x 1 φ(x 2 )) + f d (x 1, x 2 )u 1 is an equation for the singularity. We will now prove that u 1 g is right equivalent to ±(x 2 0 x x n 2 ), for some n, by constructing right equivalent functions u 1 g =: g (0) g (1) g (2) g (3) ±(x 2 0 x x n 2 ). Let g (1) (x 0, x 1, x 2 ) = g (0) (x 0, x 1 + φ(x 2 ), x 2 ) where ψ(x 1, x 2 ) R[[x 1, x 2 ]]. define = x 0 x 1 + f d (x 1 + φ(x 2 ), x 2 )u 1 (x 1 + φ(x 2 ), x 2 ) = x 0 x 1 + ψ(x 1, x 2 ) Write ψ(x 1, x 2 ) = x 1 ψ 1 (x 1, x 2 ) + ψ 2 (x 2 ) and g (2) (x 0, x 1, x 2 ) = g (1) (x 0 ψ 1 (x 1, x 2 ), x 1, x 2 ) = x 0 x 1 + ψ 2 (x 2 ).

58 54 CHAPTER 5. MONOID HYPERSURFACES The power series ψ 2 (x 2 ) can be written on the form ψ 2 (x 2 ) = sx n 2 (a 0 + a 1 x 2 + a 2 x ) where s = ±1 and a 0 > 0. We see that g (2) is right equivalent to g (3) = x 0 x 1 + sx n 2 since ( ) n g (2) (x 0, x 1, x 2 ) = g (3) x 0, x 1, x 2 a 0 + a 1 x 2 + a 2 x Finally we see that g (4) (x 0, x 1, x 2 ) := g (3) (sx 0 sx 1, x 0 + x 1, x 2 ) = s(x 2 0 x x n 2 ) proves that u 1 g is right equivalent to s(x 2 0 x x n 2 ) which is an equation for an A n 1 singularity with normal form x 2 0 x x n 2. Note that for d = 3, the singularity at O can be an A + 1 happens for example when f 2 = x x x 2 2. singularity. This For a real monoid, Corollary 5.7 implies that we can have at most 1 2d(d 1) real singularities in addition to O. We can show that the bound is sharp by a simple construction: Example. To construct a monoid with the maximal number of real singularities, it is sufficient to construct two affine real curves in the xy-plane defined by equations f d 1 and f d of degrees d 1 and d such that the curves intersect in d(d 1)/2 points with multiplicity 2. Let m {d 1, d} be odd and set f m = ε m i=1 ( x sin ( ) 2iπ + y cos m ( ) ) 2iπ + 1. m For ε > 0 sufficiently small there exist at least m+1 2 radii r > 0, one for each root of the univariate polynomial f m x=0, such that the circle x 2 + y 2 r 2 intersects f m in m points with multiplicity 2. Let f 2d 1 m be a product of such circles. Now the homogenizations of f d 1 and f d define a monoid surface with d(d 1) singularities. See Figure 5.1. Proposition 5.6 and Bezout s theorem imply that the maximal Milnor number of a singularity other than O is d(d 1) 1. The following example shows that this bound can be achieved on a real monoid: Example. The surface X P 3 defined by F = x 0 (x 1 x2 d 2 + x3 d 1 ) + x d 1 has exactly two singular points. The point (1 : 0 : 0 : 0) is a singularity of multiplicity

59 5.3. MONOID SURFACES 55 Figure 5.1: The curves f m for m = 3, 5 and corresponding circles 3 with Milnor number µ = (d 2 3d + 1)(d 2), while the point (0 : 0 : 1 : 0) is an A d(d 1) 1 singularity. A picture of this surfaces for d = 4 is shown in Figure 5.2. Figure 5.2: The surface defined by F = x 0 (x 1 x d x d 1 3 ) + x d 1 for d = 4.

60 56 CHAPTER 5. MONOID HYPERSURFACES 5.4 Quartic monoid surfaces Every cubic surface with isolated singularities is a monoid. Both smooth and singular cubic surfaces have been studied extensively, most notably in [33], where real cubic surfaces and their singularities were classifed, and more recently in [36], [4], and [16]. The site [17] contains additional pictures and references. In this section we shall consider the case d = 4. The classification of real and complex quartic monoid surfaces was started by Rohn [32]. (In addition to considering the singularities, Rohn studied the existence of lines not passing through the triple point, and that of other special curves on the monoid.) In [38], Takahashi, Watanabe, and Higuchi described the singularities of such complex surfaces. The monoid singularity of a quartic monoid is minimally elliptic [42], and minimally elliptic singularities have the same complex topological type if and only if their dual graphs are isomorphic [18]. In [18] all possible dual graphs for minimally elliptic singularities are listed, along with example equations. Using Arnold s notation for the singularities, we use and extend the approach of Takahashi, Watanabe, and Higuchi in [38]. Consider a quartic monoid surface, X = Z(F ), with F = x 0 f 3 + f 4. The tangent cone, Z(f 3 ), can be of one of nine (complex) types, each needing a separate analysis. For each type we fix f 3, but any other tangent cone of the same type will be projectively equivalent (over the complex numbers) to this fixed f 3. The nine different types are: 1. Nodal irreducible curve, f 3 = x 1 x 2 x 3 + x x Cuspidal curve, f 3 = x 3 1 x 2 2x Conic and a chord, f 3 = x 3 (x 1 x 2 + x 2 3) 4. Conic and a tangent line, f 3 = x 3 (x 1 x 3 + x 2 2). 5. Three general lines, f 3 = x 1 x 2 x Three lines meeting in a point, f 3 = x 3 2 x 2 x A double line and another line, f 3 = x 2 x A triple line f 3 = x A smooth curve, f 3 = x x x ax 0 x 1 x 3 where a 3 1

61 5.4. QUARTIC MONOID SURFACES 57 To each quartic monoid we can associate, in addition to the type, several integer invariants, all given as intersection numbers. From [38] we know that, for the types 1 3, 5, and 9, these invariants will determine the singularity type of O up to right equivalence. In the other cases the singularity series, as defined by Arnol d in [1] and [2], is determined by the type of f 3. We shall use, without proof, the results on the singularity type of O due to [38]; however, we shall use the notations of [1] and [2]. We complete the classification begun in [38] by supplying a complete list of the possible singularities occurring on a quartic monoid. In addition, we extend the results to the case of real monoids. Our results are summarized in the following theorem. Theorem 5.9. On a quartic monoid surface, singularities other than the monoid point can occur as given in Table 5.1. Moreover, all possibilities are realizable on real quartic monoids with a real monoid point, and with the other singularities being real and of type A. Proof. The invariants listed in the Invariants and constraints column are all nonnegative integers, and any set of integer values satisfying the equations represents one possible set of invariants, as described above. Then, for each set of invariants, (positive) intersection multiplicities, denoted m i, m i determine the singularities other than O. The column Other singularities give these and the equations they must satisfy. Here we use the notation A 0 for a line L a on Z(F ) where O is the only singular point. The analyses of the nine cases share many similarities, and we have chosen not to go into great detail when one aspect of a case differs little from the previous one. We end the section with a discussion on the possible real forms of the tangent cone and how this affects the classification of the real quartic monoids. In all cases, we shall write f 4 = a 1 x a 2 x 3 1x 2 + a 3 x 3 1x 3 + a 4 x 2 1x a 5 x 2 1x 2 x 3 + a 6 x 2 1x a 7 x 1 x a 8 x 1 x 2 2x 3 + a 9 x 1 x 2 x a 10 x 1 x a 11 x a 12 x 3 2x 3 + a 13 x 2 2x a 14 x 2 x a 15 x 4 3 and m i, will and we shall investigate how the coefficients a 1,..., a 15 are related to the geometry of the monoid.

62 58 CHAPTER 5. MONOID HYPERSURFACES Case Triple point Invariants and constraints Other singularities 1 T 3,3,4 A mi 1, P m i = 12 T 3,3,3+m m = 2,..., 12 A mi 1, P m i = 12 m 2 Q 10 A mi 1, P m i = 12 T 9+m m = 2, 3 A mi 1, P m i = 12 m 3 T 3,4+r0,4+r 1 r 0 = max(j 0, k 0 ), r 1 = max(j 1, k 1 ), A mi 1, P m i = 4 k 0 k 1, j 0 > 0 k 0 > 0, min(j 0, k 0 ) 1, A m i 1, P m i = 8 j 0 j 1 j 1 > 0 k 1 > 0, min(j 1, k 1 ) 1 4 S series j 0 8, k 0 4, min(j 0, k 0 ) 2, A mi 1, P m i = 4 k 0, j 0 > 0 k 0 > 0, j 1 > 0 k 0 > 1 A m i 1, P m i = 8 j 0 5 T 4+jk,4+j l,4+j m m 1 + l 1 4, k 2 + m 2 4, A mi 1, P m i = 4 m 1 l 1, k 3 + l 3 4, k 2 > 0 k 3 > 0, A m i 1, P m i = 4 k 2 m 2, l 1 > 0 l 3 > 0, m 1 > 0 m 2 > 0, A P m 1, m i i = 4 k 3 l 3 min(k 2, k 3 ) 1, min(l 1, l 3 ) 1, min(m 1, m 2 ) 1, j k = max(k 2, k 3 ), j l = max(l 1, l 3 ), j m = max(m 1, m 2 ) 6 U series j 1 > 0 j 2 > 0 j 3 > 0, A mi 1, P m i = 4 j 1, at most one of j 1, j 2, j 3 > 1, A m i 1, P m i = 4 j 2, j 1, j 2, j 3 4 A P m 1, m i i = 4 j 3 7 V series j 0 > 0 k 0 > 0, min(j 0, k 0 ) 1, A mi 1, P m i = 4 j 0, j 0 4, k V series None 9 P 8 = T 3,3,3 A mi 1, P m i = 12 Table 5.1: Possible configurations of singularities for each case

63 5.4. QUARTIC MONOID SURFACES 59 Case 1. The tangent cone is a nodal irreducible curve, and we can assume f 3 (x 1, x 2, x 3 ) = x 1 x 2 x 3 + x x 3 3. The nodal curve is singular at (1 : 0 : 0). If f 4 (1, 0, 0) 0, then O is a T 3,3,4 singularity [38]. We recall that (1 : 0 : 0) cannot be a singular point on Z(f 4 ) as this would imply a singular line on the monoid, so we assume that either (1 : 0 : 0) Z(f 4 ) or (1 : 0 : 0) is a smooth point on Z(f 4 ). Let m denote the intersection number I (1:0:0) (f 3, f 4 ). Since Z(f 3 ) is singular at (1 : 0 : 0) we have m 1. From [38] we know that O is a T 3,3,3+m singularity for m = 2,..., 12. Note that some of these complex singularities have two real forms, as illustrated in Figure 5.3. Figure 5.3: The monoids Z(x 3 + y 3 + 5xyz z 3 (x + y)) and Z(x 3 + y 3 + 5xyz z 3 (x y)) both have a T 3,3,5 singularity, but the singularities are not right equivalent over R. (The pictures are generated by the program [8].) Bézout s theorem and Proposition 5.6 limit the possible configurations of singularities on the monoid for each m. Let θ(s, t) = ( s 3 t 3, s 2 t, st 2 ). Then the tangent cone Z(f 3 ) is parameterized by θ as a map from P 1 to P 2. When we need to compute the intersection numbers between the rational curve Z(f 3 ) and the curve Z(f 4 ), we can do that by studying the roots of the polynomial

64 60 CHAPTER 5. MONOID HYPERSURFACES f 4 (θ). Expanding the polynomial gives f 4 (θ)(s, t) = a 1 s 12 a 2 s 11 t + ( a 3 + a 4 )s 10 t 2 + (4a 1 + a 5 a 7 )s 9 t 3 + ( 3a 2 + a 6 a 8 + a 11 )s 8 t 4 + ( 3a 3 + 2a 4 a 9 + a 12 )s 7 t 5 + (6a 1 + 2a 5 a 7 a 10 + a 13 )s 6 t 6 + ( 3a 2 + 2a 6 a 8 + a 14 )s 5 t 7 + ( 3a 3 + a 4 a 9 + a 15 )s 4 t 8 + (4a 1 + a 5 a 10 )s 3 t 9 + ( a 2 + a 6 )s 2 t 10 a 3 st 11 + a 1 t 12. This polynomial will have roots at (0 : 1) and (1 : 0) if and only if f 4 (1, 0, 0) = a 1 = 0. When a 1 = 0 we may (by symmetry) assume a 3 0, so that (0 : 1) is a simple root and (1 : 0) is a root of multiplicity m 1. Other roots of f 4 (θ) correspond to intersections of Z(f 3 ) and Z(f 4 ) away from (1 : 0 : 0). The multiplicity m i of each root is equal to the corresponding intersection multiplicity, giving rise to an A mi 1 singularity if m i > 0, as described by Proposition 5.6, or a line L a Z(F ) with O as the only singular point if m i = 1. The polynomial f 4 (θ) defines a linear map from the coefficient space k 15 of f 4 to the space of homogeneous polynomials of degree 12 in s and t. By elementary linear algebra, we see that the image of this map is the set of polynomials of the form b 0 s 12 + b 1 s 11 t + b 2 s 10 t b 12 t 12 where b 0 = b 12. The kernel of the map corresponds to the set of polynomials of the form lf 3 where l is a linear form. This means that f 4 (θ) 0 if and only if f 3 is a factor in f 4, making Z(F ) reducible and not a monoid. For every m = 0, 2, 3, 4,..., 12 we can select r parameter points p 1,..., p r P 1 \ {(0 : 1), (1 : 0)} and positive multiplicities m 1,..., m r with m m r = 12 m and try to describe the polynomials f 4 such that f 4 (θ) has a root of multiplicity m i at p i for each i = 1,..., r. Still assuming a 3 0 whenever a 1 = 0, any such choice of parameter points p 1,..., p r and multiplicities m 1,..., m r corresponds to a polynomial q = b 0 s 12 + b 1 s 11 t + + b 12 t 12 that is, up to a nonzero constant, uniquely determined. Now, q is equal to f 4 (θ) for some f 4 if and only if b 0 = b 12. If m 2, then q contains a factor st m 1, so b 0 = b 12 = 0, giving q = f 4 (θ) for some f 4. In fact, when m 2 any choice of p 1,..., p r and m 1,..., m r with m 1 + +m r = 12 m corresponds to a four dimensional space of equations f 4 that gives this set of roots and multiplicities in f 4 (θ). If f 4 is one such f 4, then any other is of the

65 5.4. QUARTIC MONOID SURFACES 61 form λf 4 + lf 3 for some constant λ 0 and linear form l. All of these give monoids that are projectively equivalent. When m = 0, we write p i = (α i : β i ) for i = 1,..., r. The condition b 0 = b 12 on the coefficients of q translates to α m1 1 αr mr = β m1 1 βr mr. (5.3) This means that any choice of parameter points (α 1 : β 1 ),..., (α r : β r ) and multiplicities m 1,..., m r with m m r = 12 that satisfy condition (5.3) corresponds to a four dimensional family λf 4 + lf 3, giving a unique monoid up to projective equivalence. For example, we can have an A 11 singularity only if f 4 (θ) is of the form (αs βt) 12. Condition (5.3) implies that this can only happen for 12 parameter points, all of the form (1 : ω), where ω 12 = 1. Each such parameter point (1 : ω) corresponds to a monoid uniquely determined up to projective equivalence. However, since there are six projective transformations of the plane that maps Z(f 3 ) onto itself, this correspondence is not one to one. If ω1 12 = ω2 12 = 1, then ω 1 and ω 2 will correspond to projectively equivalent monoids if and only if ω1 3 = ω2 3 or ω1ω = 1. This means that there are three different quartic monoids with one T 3,3,4 singularity and one A 11 singularity. One corresponds to those ω where ω 3 = 1, one to those ω where ω 3 = 1, and one to those ω where ω 6 = 1. The first two of these have real representatives, ω = ±1. It easy to see that for any set of multiplicities m m r = 12, we can find real points p 1,..., p r such that condition (5.3) is satisfied. This completely classifies the possible configurations of singularities when f 3 is an irreducible nodal curve. Case 2. The tangent cone is a cuspidal curve, and we can assume f 3 (x 1, x 2, x 3 ) = x 3 1 x 2 2x 3. The cuspidal curve is singular at (0 : 0 : 1) and can be parameterized by θ as a map from P 1 to P 2 where θ(s, t) = (s 2 t, s 3, t 3 ). The intersection numbers are determined by the degree 12 polynomial f 4 (θ). As in the previous case, f 4 (θ) 0 if and only if f 3 is a factor of f 4, and we will assume this is not the case. The multiplicity m of the factor s in f 4 (θ) determines the type of singularity at O. If m = 0 (no factor s), then O is a Q 10 singularity. If m = 2 or m = 3, then O is of type Q 9+m. If m > 3, then (0 : 0 : 1) is a singular point on Z(f 4 ), so the monoid has a singular line and is not considered in this article. Also, m = 1 is not possible, since f 4 (θ(s, t)) = f 4 (s 2 t, s 3, t 3 ) cannot contain st 11 as a factor. For each m = 0, 2, 3 we can analyze the possible configurations of other singularities on the monoid. Similarly to the previous case, any choice of parameter points p 1,..., p r P 1 \ {(0 : 1)} and positive multiplicities m 1,..., m r

66 62 CHAPTER 5. MONOID HYPERSURFACES with m i = 12 m corresponds, up to a nonzero constant, to a unique degree 12 polynomial q. When m = 2 or m = 3, for any choice of parameter values and associated multiplicities, we can find a four dimensional family f 4 = λf 4 + lf 3 with the prescribed roots in f 4 (θ). As before, the family gives projectively equivalent monoids. When m = 0, one condition must be satisfied for q to be of the form f 4 (θ), namely b 11 = 0, where b 11 is the coefficient of st 11 in q. For example, we can have an A 11 singularity only if q is of the form (αs βt) 12. The condition b 11 = 0 implies that either q = λs 12 or q = λt 12. The first case gives a surface with a singular line, while the other gives a monoid with an A 11 singularity (see Figure 5.2). The line from O to the A 11 singularity corresponds to the inflection point of Z(f 3 ). For any set of multiplicities m 1,..., m r with m m r = 12, it is not hard to see that there exist real points p 1,..., p r such that the condition b 11 = 0 is satisfied. It suffices to take p i = (α i : 1), with m i α i = 0 (the condition corresponding to b 11 = 0). This completely classifies the possible configurations of singularities when f 3 is a cuspidal curve. Case 3. The tangent cone is the product of a conic and a line that is not tangent to the conic, and we can assume f 3 = x 3 (x 1 x 2 + x 2 3). Then Z(f 3 ) is singular at (1 : 0 : 0) and (0 : 1 : 0), the intersections of the conic Z(x 1 x 2 + x 2 3) and the line Z(x 3 ). For each f 4 we can associate four integers: j 0 := I (1:0:0) (x 1 x 2 + x 2 3, f 4 ), k 0 := I (1:0:0) (x 3, f 4 ), j 1 := I (0:1:0) (x 1 x 2 + x 2 3, f 4 ), k 1 := I (0:1:0) (x 3, f 4 ). We see that k 0 > 0 f 4 (1 : 0 : 0) = 0 j 0 > 0, and that Z(f 4 ) is singular at (1 : 0 : 0) if and only if k 0 and j 0 both are bigger than one. These cases imply a singular line on the monoid, and are not considered in this article. The same holds for k 1, j 1 and the point (0 : 1 : 0). Define r i = max(j i, k i ) for i = 1, 2. Then, by [38], O will be a singularity of type T 3,4+r0,4+r 1 if r 0 r 1, or of type T 3,4+r1,4+r 0 if r 0 r 1. We can parameterize the line Z(x 3 ) by θ 1 where θ 1 (s, t) = (s, t, 0), and the conic Z(x 1 x 2 + x 2 3) by θ 2 where θ 2 (s, t) = (s 2, t 2, st). Similarly to the previous cases, roots of f 4 (θ 1 ) correspond to intersections between Z(f 4 ) and the line Z(x 3 ), while roots of f 4 (θ 2 ) correspond to intersections between Z(f 4 ) and the conic Z(x 1 x 3 + x 2 3). For any legal values of of j 0, j 1, k 0 and k 1, parameter points (α 1 : β 1 ),..., (α mr : β mr ) P 1 \ {(0 : 1), (1 : 0)},

67 5.4. QUARTIC MONOID SURFACES 63 with multiplicities m 1,..., m r such that m m r = 4 k 0 k 1, and parameter points (α 1 : β 1),..., (α m r : β m r ) P1 \ {(0 : 1), (1 : 0)}, with multiplicities m 1,..., m r such that m m r = 8 j 0 j 1, we can fix polynomials q 1 and q 2 such that q 1 is nonzero, of degree 4, and has factors s k1, t k0 and (β i s α i t) mi for i = 1,..., r, q 2 is nonzero, of degree 8, and has factors s j1, t j0 and (β i s α i t)m i for i = 1,..., r. Now q 1 and q 2 are determined up to multiplication by nonzero constants. Write q 1 = b 0 s b 4 t 4 and q 2 = c 0 s c 8 t 8. The classification of singularities on the monoid consists of describing the conditions on the parameter points and nonzero constants λ 1 and λ 2 for the pair (λ 1 q 1, λ 2 q 2 ) to be on the form (f 4 (θ 1 ), f 4 (θ 2 )) for some f 4. Similarly to the previous cases, f 4 (θ 1 ) 0 if and only if x 3 is a factor in f 4 and f 4 (θ 2 ) 0 if and only if x 1 x 2 +x 2 3 is a factor in f 4. Since f 3 = x 3 (x 1 x 2 +x 2 3), both cases will make the monoid reducible, so we only consider λ 1, λ 2 0. We use linear algebra to study the relationship between the coefficients a 1... a 15 of f 4 and the polynomials q 1 and q 2. We find (λ 1 q 1, λ 2 q 2 ) to be of the form (f 4 (θ 1 ), f 4 (θ 2 )) if and only if λ 1 b 0 = λ 2 c 0 and λ 1 b 4 = λ 2 c 8. Furthermore, the pair (λ 1 q 1, λ 2 q 2 ) will fix f 4 modulo f 3. Since f 4 and λf 4 correspond to projectively equivalent monoids for any λ 0, it is the ratio λ 1 /λ 2, and not λ 1 and λ 2, that is important. Recall that k 0 > 0 j 0 > 0 and k 1 > 0 j 1 > 0. If k 0 > 0 and k 1 > 0, then b 0 = c 0 = b 4 = c 8 = 0, so for any λ 1, λ 2 0 we have (λ 1 q 1, λ 2 q 2 ) = (f 4 (θ 1 ), f 4 (θ 2 )) for some f 4. Varying λ 1 /λ 2 will give a one-parameter family of monoids for each choice of multiplicities and parameter points. If k 0 = 0 and k 1 > 0, then b 0 = c 0 = 0. The condition λ 1 b 4 = λ 2 c 8 implies λ 1 /λ 2 = c 8 /b 4. This means that any choice of multiplicities and parameter points will give a unique monoid up to projective equivalence. The same goes for the case where k 0 > 0 and k 1 = 0. Finally, consider the case where k 0 = k 1 = 0. For (λ 1 q 1, λ 2 q 2 ) to be of the form (f 4 (θ 1 ), f 4 (θ 2 )) we must have λ 1 /λ 2 = c 8 /b 4 = c 0 /b 0. This translates into a condition on the parameter points, namely (β 1) m 1 (β r ) m r β m1 1 βr mr = (α 1) m 1 (α r ) m r. (5.4) α m1 1 αr mr

68 64 CHAPTER 5. MONOID HYPERSURFACES In other words, if condition (5.4) holds, we have a unique monoid up to projective equivalence. It is easy to see that for any choice of multiplicities, it is possible to find real parameter points such that condition (5.4) is satisfied. This completes the classification of possible singularities when the tangent cone is a conic plus a chordal line. Case 4. The tangent cone is the product of a conic and a line tangent to the conic, and we can assume f 3 = x 3 (x 1 x 3 + x 2 2). Now Z(f 3 ) is singular at (1 : 0 : 0). For each f 4 we can associate two integers j 0 := I (1:0:0) (x 1 x 3 + x 2 2, f 4 ) and k 0 := I (1:0:0) (x 3, f 4 ). We have j 0 > 0 k 0 > 0, j 0 > 1 k 0 > 1. Furthermore, j 0 and k 0 are both greater than 2 if and only if Z(f 4 ) is singular at (1 : 0 : 0), a case we have excluded. The singularity at O will be of the S series, from [1], [2]. We can parameterize the conic Z(x 1 x 3 + x 2 2) by θ 2 and the line Z(x 3 ) by θ 1 where θ 2 (s, t) = (s 2, st, t 2 ) and θ 1 (s, t) = (s, t, 0). As in the previous case, the monoid is reducible if and only if f 4 (θ 1 ) 0 or f 4 (θ 2 ) 0. Consider two nonzero polynomials q 1 = b 0 s 4 + b 1 s 3 t + b 2 s 2 t 2 + b 3 st 3 + b 4 t 4 q 2 = c 0 s 8 + c 1 s 7 t + + c 7 st 7 + c 8 t 8. Now (λ 1 q 1, λ 2 q 2 ) = (f 4 (θ 1 ), f 4 (θ 2 )) for some f 4 if and only if λ 1 b 0 = λ 2 c 0 and λ 1 b 1 = λ 2 c 1. As before, only the cases where λ 1, λ 2 0 are interesting. We see that (λ 1 q 1, λ 2 q 2 ) = (f 4 (θ 1 ), f 4 (θ 2 )) for some λ 1, λ 2 0 if and only if the following hold: b 0 = 0 c 0 = 0 and b 1 = 0 c 1 = 0 b 0 c 1 = b 1 c 0. The classification of other singularities (than O) is very similar to the previous case. Roots of f 4 (θ 1 ) and f 4 (θ 2 ) away from (1 : 0) correspond to intersections of Z(f 3 ) and Z(f 4 ) away from the singular point of Z(f 3 ), and when one such intersection is multiple, there is a corresponding singularity on the monoid. Now assume (λ 1 q 1, λ 2 q 2 ) = (f 4 (θ 1 ), f 4 (θ 2 )) for some λ 1, λ 2 0 and some f 4. If b 0 0 (equivalent to c 0 0) then j 0 = k 0 = 0 and λ 1 /λ 2 = c 0 /b 0. If b 0 = c 0 = 0 and b 1 0 (equivalent to c 1 0), then j 0 = k 0 = 1, and λ 1 /λ 2 = c 1 /b 1. If b 0 = b 1 = c 0 = c 1 = 0, then j 0, k 0 > 1 and any value of λ 1 /λ 2

69 5.4. QUARTIC MONOID SURFACES 65 will give (λ 1 q 1, λ 2 q 2 ) of the form (f 4 (θ 1 ), f 4 (θ 2 )) for some f 4. Thus we get a one-dimensional family of monoids for this choice of q 1 and q 2. Now consider the possible configurations of other singularities on the monoid. Assume that j 0 8 and k 0 4 are nonnegative integers such that j 0 > 0 k 0 > 0 and j 0 > 1 k 0 > 1. For any set of multiplicities m 1,..., m r with m m r = 4 k 0 and m 1,..., m r with m m r = 8 j 0, there exists a polynomial f 4 with real coefficients such that f 4 (θ 1 ) has real roots away from (1 : 0) with multiplicities m 1,..., m r, and f 4 (θ 2 ) has real roots away from (1 : 0) with multiplicities m 1,..., m r. Furthermore, for this f 4 we have k 0 = k 0 and j 0 = j 0. Proposition 5.6 will give the singularities that occur in addition to O. This completes the classification of the singularities on a quartic monoid (other than O) when the tangent cone is a conic plus a tangent. Case 5. The tangent cone is three general lines, and we assume f 3 = x 1 x 2 x 3. For each f 4 we associate six integers, k 2 := I (1,0,0) (f 4, x 2 ), l 1 := I (0,1,0) (f 4, x 1 ), m 1 := I (0,0,1) (f 4, x 1 ), k 3 := I (1,0,0) (f 4, x 3 ), l 3 := I (0,1,0) (f 4, x 3 ), m 2 := I (0,0,1) (f 4, x 2 ). Now k 2 > 0 k 3 > 0, l 1 > 0 l 3 > 0, and m 1 > 0 m 2 > 0. If both k 2 and k 3 are greater than 1, then the monoid has a singular line, a case we have excluded. The same goes for the pairs (l 1, l 3 ) and (m 1, m 2 ). When the monoid does not have a singular line, we define j k = max(k 2, k 3 ), j l = max(l 1, l 3 ) and j m = max(m 1, m 2 ). If j k j l j m, then [38] gives that O is a T 4+jk,4+j l,4+j m singularity. The three lines Z(x 1 ), Z(x 2 ) and Z(x 3 ) are parameterized by θ 1, θ 2 and θ 3 where θ 1 (s, t) = (0, s, t), θ 2 (s, t) = (s, 0, t) and θ 3 (s, t) = (s, t, 0). Roots of the polynomial f 4 (θ i ) away from (1 : 0) and (0 : 1) correspond to intersections between Z(f 4 ) and Z(x i ) away from the singular points of Z(f 3 ). As before, we are only interested in the cases where none of f 4 (θ i ) 0 for i = 1, 2, 3, as this would make the monoid reducible. For the study of other singularities on the monoid we consider nonzero polynomials q 1 = b 0 s 4 + b 1 s 3 t + b 2 s 2 t 2 + b 3 st 3 + b 4 t 4, q 2 = c 0 s 4 + c 1 s 3 t + c 2 s 2 t 2 + c 3 st 3 + c 4 t 4, q 3 = d 0 s 4 + d 1 s 3 t + d 2 s 2 t 2 + d 3 st 3 + d 4 t 4. Linear algebra shows that (λ 1 q 1, λ 2 q 2, λ 3 q 3 ) = (f 4 (θ 1 ), f 4 (θ 2 ), f 4 (θ 3 )) for some f 4 if and only if λ 1 b 0 = λ 3 d 4, λ 1 b 4 = λ 2 c 4, and λ 2 c 0 = λ 3 d 0. A simple analysis

A simple analysis shows the following: There exist λ_1, λ_2, λ_3 ≠ 0 such that (λ_1 q_1, λ_2 q_2, λ_3 q_3) = (f_4(θ_1), f_4(θ_2), f_4(θ_3)) for some f_4, and such that Z(f_4) and Z(f_3) have no common singular point, if and only if all of the following hold:

  - b_0 = 0 ⇔ d_4 = 0, and b_0 = d_4 = 0 ⇒ (b_1 ≠ 0 or d_3 ≠ 0),
  - b_4 = 0 ⇔ c_4 = 0, and b_4 = c_4 = 0 ⇒ (b_3 ≠ 0 or c_3 ≠ 0),
  - c_0 = 0 ⇔ d_0 = 0, and c_0 = d_0 = 0 ⇒ (c_1 ≠ 0 or d_1 ≠ 0),
  - b_0 c_4 d_0 = b_4 c_0 d_4.

Similarly to the previous cases we can classify the possible configurations of other singularities by varying the multiplicities of the roots of the polynomials q_1, q_2 and q_3. Only the multiplicities of the roots (0:1) and (1:0) affect the first three bullet points above. Then, for any set of multiplicities of the rest of the roots, we can find q_1, q_2 and q_3 such that the last bullet point is satisfied. This completes the classification when Z(f_3) is the product of three general lines.

Case 6. The tangent cone is three lines meeting in a point, and we can assume that f_3 = x_2^3 - x_2 x_3^2. We write f_3 = l_1 l_2 l_3, where l_1 = x_2, l_2 = x_2 - x_3 and l_3 = x_2 + x_3, representing the three lines going through the singular point (1:0:0). To each f_4 we associate three integers j_1, j_2 and j_3 defined as the intersection numbers j_i = I_{(1:0:0)}(f_4, l_i). We see that j_1 = 0 ⇔ j_2 = 0 ⇔ j_3 = 0, and that Z(f_4) is singular at (1:0:0) if and only if two of the integers j_1, j_2, j_3 are greater than one. (Then all of them will be greater than one.) The singularity will be of the U series [1], [2].

The three lines Z(l_1), Z(l_2) and Z(l_3) can be parameterized by θ_1, θ_2 and θ_3, where θ_1(s,t) = (s, 0, t), θ_2(s,t) = (s, t, t) and θ_3(s,t) = (s, -t, t). For the study of other singularities on the monoid we consider nonzero polynomials

  q_1 = b_0 s^4 + b_1 s^3 t + b_2 s^2 t^2 + b_3 s t^3 + b_4 t^4,
  q_2 = c_0 s^4 + c_1 s^3 t + c_2 s^2 t^2 + c_3 s t^3 + c_4 t^4,
  q_3 = d_0 s^4 + d_1 s^3 t + d_2 s^2 t^2 + d_3 s t^3 + d_4 t^4.

Linear algebra shows that (λ_1 q_1, λ_2 q_2, λ_3 q_3) = (f_4(θ_1), f_4(θ_2), f_4(θ_3)) for some f_4 if and only if λ_1 b_0 = λ_2 c_0 = λ_3 d_0 and 2λ_1 b_1 = λ_2 c_1 + λ_3 d_1.
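These two relations can also be verified symbolically. The sketch below is not from the thesis and assumes the reconstructed parameterization θ_3(s,t) = (s, -t, t) of Z(x_2 + x_3); it restricts a generic quartic to the three concurrent lines and checks the relations with λ_1 = λ_2 = λ_3 = 1, namely b_0 = c_0 = d_0 and 2 b_1 = c_1 + d_1.

    import itertools
    import sympy as sp

    x1, x2, x3, s, t = sp.symbols('x1 x2 x3 s t')

    # Generic quartic f4 with symbolic coefficients.
    f4 = sum(sp.Symbol(f'a{i}{j}{k}') * x1**i * x2**j * x3**k
             for i, j, k in itertools.product(range(5), repeat=3) if i + j + k == 4)

    q1 = sp.Poly(f4.subs({x1: s, x2: 0, x3: t}), s, t)    # Z(x2)
    q2 = sp.Poly(f4.subs({x1: s, x2: t, x3: t}), s, t)    # Z(x2 - x3)
    q3 = sp.Poly(f4.subs({x1: s, x2: -t, x3: t}), s, t)   # Z(x2 + x3), as reconstructed

    b0, b1 = q1.coeff_monomial(s**4), q1.coeff_monomial(s**3 * t)
    c0, c1 = q2.coeff_monomial(s**4), q2.coeff_monomial(s**3 * t)
    d0, d1 = q3.coeff_monomial(s**4), q3.coeff_monomial(s**3 * t)

    # All three restrictions take the same value at the common point (1:0:0),
    # and their first-order coefficients satisfy 2*b1 = c1 + d1.
    print(b0 - c0, c0 - d0, sp.expand(2*b1 - c1 - d1))   # expected: 0 0 0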

There exist λ_1, λ_2, λ_3 ≠ 0 such that (λ_1 q_1, λ_2 q_2, λ_3 q_3) = (f_4(θ_1), f_4(θ_2), f_4(θ_3)) for some f_4, and such that Z(f_4) and Z(f_3) have no common singular point, if and only if all of the following hold:

  - b_0 = 0 ⇔ c_0 = 0 ⇔ d_0 = 0,
  - if b_0 = c_0 = d_0 = 0, then at least two of b_1, c_1, and d_1 are different from zero,
  - 2 b_1 c_0 d_0 = b_0 c_1 d_0 + b_0 c_0 d_1.

As in all the previous cases we can classify the possible configurations of other singularities for all possible j_1, j_2, j_3. As before, the first bullet point only affects the multiplicity of the factor t in q_1, q_2 and q_3. For any set of multiplicities for the rest of the roots, we can find q_1, q_2, q_3 with real roots of the given multiplicities such that the last bullet point is satisfied. This completes the classification of the singularities (other than O) when Z(f_3) is three lines meeting in a point.

Case 7. The tangent cone is a double line plus a line, and we can assume f_3 = x_2 x_3^2. The tangent cone is singular along the line Z(x_3). The line Z(x_2) is parameterized by θ_1 and the line Z(x_3) is parameterized by θ_2, where θ_1(s,t) = (s, 0, t) and θ_2(s,t) = (s, t, 0). The monoid is reducible if and only if f_4(θ_1) or f_4(θ_2) is identically zero, so we assume that neither is identically zero. To each f_4 we associate two integers, j_0 := I_{(1:0:0)}(f_4, x_2) and k_0 := I_{(1:0:0)}(f_4, x_3). Furthermore, we write f_4(θ_2) as a product of linear factors

  f_4(θ_2) = λ s^{k_0} ∏_{i=1}^{r} (α_i s - t)^{m_i}.

Now the singularity at O will be of the V series and depends on j_0, k_0 and m_1, ..., m_r. Other singularities on the monoid correspond to intersections of Z(f_4) and the line Z(x_2) away from (1:0:0). Each such intersection corresponds to a root of the polynomial f_4(θ_1) different from (1:0).

Let j_0' ≤ 4 and k_0' ≤ 4 be integers such that j_0' > 0 ⇔ k_0' > 0. Then, for any homogeneous polynomials q_1, q_2 in s, t of degree 4 such that s is a factor of multiplicity j_0' in q_1 and of multiplicity k_0' in q_2, there is a polynomial f_4 and nonzero constants λ_1 and λ_2 such that k_0 = k_0', j_0 = j_0' and (λ_1 q_1, λ_2 q_2) = (f_4(θ_1), f_4(θ_2)). Furthermore, if q_1 and q_2 have real coefficients, then f_4 can be selected with real coefficients. This follows from an analysis similar to case 5 and completes the classification of singularities when the tangent cone is a product of a line and a double line.
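To make Case 7 concrete, the following SymPy sketch (not taken from the thesis) uses f_3 = x_2 x_3^2 together with the hand-picked quartic f_4 = x_1 x_3 (x_1 - x_3)^2 + x_1^3 x_2. Its restriction to the line Z(x_2) is s t (s - t)^2, with a double root at (s:t) = (1:1) away from (1:0), and the gradient computation confirms that the corresponding point (1:0:1) of the tangent cone gives rise to a singular point of the monoid away from O.

    import sympy as sp

    x0, x1, x2, x3, s, t = sp.symbols('x0 x1 x2 x3 s t')

    f3 = x2 * x3**2
    f4 = x1*x3*(x1 - x3)**2 + x1**3 * x2         # hand-picked example quartic

    # Restriction of f4 to the line Z(x2), via theta1(s,t) = (s, 0, t).
    q1 = sp.factor(f4.subs({x1: s, x2: 0, x3: t}))
    print(q1)                                    # s*t*(s - t)**2

    # The quartic monoid surface and its gradient.
    F = x0*f3 + f4
    grad = [sp.diff(F, v) for v in (x0, x1, x2, x3)]

    # The double root (1:1) corresponds to the point (1:0:1) on Z(f3) and to the
    # singular point (-1:1:0:1) of the monoid, away from O = (1:0:0:0).
    P = {x0: -1, x1: 1, x2: 0, x3: 1}
    print(F.subs(P), [g.subs(P) for g in grad])  # expected: 0 [0, 0, 0, 0]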

Case 8. The tangent cone is a triple line, and we assume that f_3 = x_3^3. The line Z(x_3) is parameterized by θ, where θ(s,t) = (s, t, 0). Assume that the polynomial f_4(θ) has r distinct roots with multiplicities m_1, ..., m_r. (As before, f_4(θ) ≡ 0 if and only if the monoid is reducible.) Then the type of the singularity at O will be of the V series [3, p. 267]. The integers m_1, ..., m_r are constant under right equivalence over C. Note that one can construct examples of monoids that are right equivalent over C, but not over R (see Figure 5.4).

Figure 5.4: The monoids Z(z^3 + xy^3 + x^3y) and Z(z^3 + xy^3 - x^3y) are right equivalent over C but not over R.

The tangent cone is singular everywhere, so there can be no other singularities on the monoid.

Case 9. The tangent cone is a smooth cubic curve, and we write f_3 = x_1^3 + x_2^3 + x_3^3 + 3a x_1 x_2 x_3, where a^3 ≠ -1. This is a one-parameter family of elliptic curves, so we cannot use the parameterization technique of the other cases. The singularity at O will be a P_8 singularity (cf. [3, p. 185]), and other singularities correspond to intersections between Z(f_3) and Z(f_4), as described by Proposition 5.6. To classify the possible configurations of singularities on a monoid with a nonsingular (projective) tangent cone, we need to answer the following question: For any positive integers m_1, ..., m_r such that ∑_{i=1}^{r} m_i = 12, does there, for some a ∈ R \ {-1}, exist a polynomial f_4 with real coefficients such that Z(f_3, f_4) = {p_1, ..., p_r} ⊂ P^2(R) and I_{p_i}(f_3, f_4) = m_i for i = 1, ..., r? Rohn
