A Trust Region Method for Constructing Triangle-mesh Approximations of Parametric Minimal Surfaces

Robert J. Renka
Department of Computer Science & Engineering, University of North Texas, Denton, TX 76203-1366
Email address: robert.renka@unt.edu (Robert J. Renka)
Preprint submitted to Elsevier, November 12, 2013

Abstract

Given a function $f_0$ defined on the unit square $\Omega$ with values in $\mathbb{R}^3$, we construct a piecewise linear function $f$ on a triangulation of $\Omega$ such that $f$ agrees with $f_0$ on the boundary nodes, and the image of $f$ has minimal surface area. The problem is formulated as that of minimizing a discretization of a least squares functional whose critical points are uniformly parameterized minimal surfaces. The nonlinear least squares problem is treated by a trust region method in which the trust region radius is defined by a stepwise-variable Sobolev metric. Test results demonstrate the effectiveness of the method.

Keywords: minimal surface; triangle mesh; trust region; variable metric method

1. Introduction

In [11] we introduced a trust region method in which the trust region radius is measured in the $H^1$ Sobolev metric. The method was shown to be equivalent to a method which blends a Newton-like iteration with a steepest descent iteration using the Sobolev gradient, and which we termed a Levenberg-Marquardt-Neuberger method. The underlying theory is developed in [3], and the method is applied to the Navier-Stokes equations in [12] and to the Ginzburg-Landau equations in [4]. Here we apply a variation of the method to the problem of approximating minimal surfaces. This research extends the work begun in [9]. Refer to [10] for a treatment of the analogous minimal curve length problem.

Other algorithms for constructing triangle-mesh approximations to minimal surfaces have been presented in [1], [7], [8], and, more recently, in [2] and [6]. By adding a volume constraint to the surface area functional, these methods treat the more general problem of approximating constant mean curvature surfaces, including both soap bubbles with nonzero mean curvature and soap films with zero mean curvature (minimal surfaces). They are thus more powerful than our method but require much more elaborate procedures to maintain stability.

More precisely, at least in the case of the Plateau problem with fixed boundary which we treat here, they alternate between updating the geometry as defined by the vertex positions and updating the connection topology by swapping edges. Our method of parameterizing the surface makes it unnecessary to swap edges. In addition to treating fixed boundaries, the method of [6] also treats free boundaries constrained to lie on a fixed surface or constrained by a fixed length, and nonmanifold boundary curves shared by more than two surface sheets. That method treats the problem of poor mesh quality by replacing the surface area functional by an extension of a centroidal Voronoi tessellation energy functional, where the extension is designed so that minimizing the functional is asymptotically equivalent to minimizing surface area. This results in very high-quality meshes which could be advantageous for estimating differential surface properties. The simpler, more efficient method presented here, however, is sufficient for numerical simulations.

We formulate the continuous problem in Section 2, describe the discretized problem in Section 3, discuss the solution method in Section 4, and present test results in Section 5.

2. Sobolev space formulation

Denote the unit square by $\Omega = [0,1]^2$, and define a parametric representation of regular surfaces by
$$S = \{g \in C^2(\Omega, \mathbb{R}^3) : g_1 \times g_2 \neq 0\},$$
where $g_1$ and $g_2$ denote first partial derivatives. Note that $S$ does not include the zero function and is therefore not a linear space. Rather, it is an infinite-dimensional manifold which will be equipped with a Riemannian metric in the form of an $f$-dependent Sobolev inner product in the tangent space at each point $f \in S$. The surface area functional is
$$\psi(f) = \int_\Omega \|f_1 \times f_2\|$$
for $f \in S$, where $\|\cdot\|$ denotes the Euclidean norm in $\mathbb{R}^3$. Now fix $f_0 \in S$ and define the linear space of compactly supported variations
$$S_0 = \{h \in C^2(\Omega, \mathbb{R}^3) : h|_{\partial\Omega} = 0\},$$
where $\partial\Omega$ denotes the boundary of $\Omega$. A minimal surface is obtained by minimizing $\psi$ over functions $f$ that agree with $f_0$ on the boundary of $\Omega$:
$$\psi'(f)h = \lim_{\alpha \to 0} \frac{1}{\alpha}\big[\psi(f + \alpha h) - \psi(f)\big] = 0$$
for all $h \in S_0$. More precisely, a minimal surface is the image of a critical point of $\psi$.

Consider the least squares functional
$$\phi(f) = \frac{1}{2}\int_\Omega \|f_1 \times f_2\|^2.$$
This is similar in appearance to, but not to be confused with, the Dirichlet integral of $f$. It was shown in [9] that critical points of $\phi$ are critical points of $\psi$ with $\|f_1 \times f_2\|$ constant, so that the critical points are uniformly parameterized, and critical points of $\psi$ are, with a change of parameters, critical points of $\phi$. In the case of minimizing curve length, the analogue of $\phi$ has critical points corresponding to constant-velocity trajectories with zero curvature. Our computational procedure involves a discretization of $\phi$ rather than of $\psi$. This has an enormous advantage in terms of avoiding degenerate triangles and the ill-conditioning associated with widely varying triangle areas.

In order to simplify the derivation of expressions for gradients and Hessians of $\phi$, define a nonlinear differential operator $A$ by
$$A(f) = f_1 \times f_2$$
for $f \in C^2(\Omega, \mathbb{R}^3)$, so that
$$\phi(f) = \frac{1}{2}\langle A(f), A(f)\rangle_{L^2},$$
where $\langle\cdot,\cdot\rangle_{L^2}$ denotes the standard inner product associated with $L^2(\Omega, \mathbb{R}^3)$. Note that, since $f$ has continuous mixed second partial derivative $f_{12} = f_{21}$, $A'(f)$ is self-adjoint on $S_0$ in the $L^2$ inner product:
$$\langle A'(f)h, g\rangle_{L^2} = \langle f_1 \times h_2 + h_1 \times f_2,\, g\rangle
= \langle g \times f_1,\, h_2\rangle + \langle h_1,\, f_2 \times g\rangle
= -\langle (g \times f_1)_2,\, h\rangle - \langle h,\, (f_2 \times g)_1\rangle$$
$$= \langle h,\, (f_1 \times g)_2 + (g \times f_2)_1\rangle
= \langle h,\, f_1 \times g_2 + g_1 \times f_2\rangle
= \langle h,\, A'(f)g\rangle_{L^2}$$
for all $g, h \in S_0$. While $A(f)$ is not an element of $S_0$, we can formally apply $A'(f)$ to $A(f)$ to obtain expressions for Fréchet derivatives as follows:
$$\phi'(f)h = \langle A'(f)h, A(f)\rangle_{L^2} = \langle h, [A'(f)]A(f)\rangle_{L^2},$$
and
$$\phi''(f)kh = \langle A'(f)h, A'(f)k\rangle_{L^2} + \langle A'(k)h, A(f)\rangle_{L^2}
= \langle h, [A'(f)]A'(f)k\rangle_{L^2} + \langle h, [A'(k)]A(f)\rangle_{L^2}.$$
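For reference, the Fréchet derivative of $A$ used in the computation above follows directly from bilinearity of the cross product:
$$A(f + \alpha h) = (f_1 + \alpha h_1) \times (f_2 + \alpha h_2) = f_1 \times f_2 + \alpha\,(f_1 \times h_2 + h_1 \times f_2) + \alpha^2\, h_1 \times h_2,$$
so that $A'(f)h = f_1 \times h_2 + h_1 \times f_2$.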

and H(f)k = [A (f)]a (f)k + [A (k)]a(f) = f 1 (f 1 k 2 + k 1 f 2 ) 2 + (f 1 k 2 + k 1 f 2 ) 1 f 2 +k 1 (f 1 f 2 ) 2 + (f 1 f 2 ) 1 k 2 = [f 2 k 1 f 2 + f 2 (f 1 k 2 )] 1 [f 1 k 2 f 1 + f 1 (f 2 k 1 )] 2 [k 2 (f 1 f 2 )] 1 + [k 1 (f 1 f 2 )] 2 for k S 0 so that H(f) = [A (f)] 2 + C f D 2 C f D 1, where D 1 and D 2 denote first partial derivative operators and C f is the skew-symmetric linear operator defined by C f (k) = (f 1 f 2 ) k. It was shown in [9] that g 0 (f) = 0 implies that f 1 f 2 is constant and f has zero mean curvature. The Gauss-Newton approximation to H(f) is G(f) = [A (f)] 2 so that G(f)k = [f 2 k 1 f 2 + f 2 (f 1 k 2 )] 1 [f 1 k 2 f 1 + f 1 (f 2 k 1 )] 2. Now for fixed f S define a tangent space at f by S 0 along with the f-dependent inner product k, h f = φ (f)kh = H(f)k, h L 2. As an alternative when H(f) is not positive definite we may define the metric by k, h f = G(f)k, h L 2. Then, in addition to being represented by the L 2 gradient g 0 in the L 2 inner product, the bounded linear functional φ (f) is represented by a unique gradient g f S 0 in the f inner product: φ (f)h = g f, h f for all h S 0 so that H(f)g f = g 0 (f) or G(f)g f = g 0 (f), depending on the choice of f inner product, and a gradient descent iteration with g f is a Newton or Gauss-Newton iteration. 3. Discretized problem In order to keep the notation as simple as possible, we will retain the symbols used in the previous section but with altered definitions. A triangulation of is a set of nondegenerate triangles T such that no two elements of T have intersecting interiors, the union of triangles coincides with, and no triangle side contains a vertex in its interior. A structured triangulation is obtained by uniformly partitioning into a k by k grid and partitioning each of the (k 1) 2 square cells into a pair of triangles using the diagonal with slope 1. Denote the set of k 2 vertices by V T, the (k 2) 2 interior vertices by V I, and the 4(k 1) boundary vertices by V 0. Let Q be the set of all triples τ = [a, b, c] such that a, b, and c enumerate the vertices of a member of T in cyclical counterclockwise 4

3. Discretized problem

In order to keep the notation as simple as possible, we will retain the symbols used in the previous section but with altered definitions. A triangulation of $\Omega$ is a set of nondegenerate triangles $T$ such that no two elements of $T$ have intersecting interiors, the union of the triangles coincides with $\Omega$, and no triangle side contains a vertex in its interior. A structured triangulation is obtained by uniformly partitioning $\Omega$ into a $k$ by $k$ grid of nodes and partitioning each of the $(k-1)^2$ square cells into a pair of triangles using the diagonal with slope 1. Denote the set of $k^2$ vertices by $V_T$, the $(k-2)^2$ interior vertices by $V_I$, and the $4(k-1)$ boundary vertices by $V_0$. Let $Q$ be the set of all triples $\tau = [a, b, c]$ such that $a$, $b$, and $c$ enumerate the vertices of a member of $T$ in cyclical counterclockwise order. Let $S$ be the set of all functions $f$ from $V_T$ to $\mathbb{R}^3$ such that $A_\tau(f) \neq 0$ for all $\tau \in Q$, where $A_\tau(f)$ is the normal to a surface triangle:
$$A_\tau(f) = (f_b - f_a) \times (f_c - f_a) \quad \text{for } \tau = [a, b, c].$$
Denote by $S_0$ the functions in $S$ with zeros on the boundary vertices. The surface area is
$$\psi(f) = \frac{1}{2}\sum_{\tau \in Q} \|A_\tau(f)\|$$
and the discretized least squares functional is
$$\phi(f) = \frac{1}{2}\sum_{\tau \in Q} \|A_\tau(f)\|^2.$$
The gradient $g_0$ in $S_0$ is defined by
$$\phi'(f)h = \sum_{\tau \in Q} \langle A_\tau(f),\, A_\tau'(f)h\rangle
= \sum_{\tau \in Q} \langle A_\tau(f),\, (f_b - f_a) \times (h_c - h_a) + (h_b - h_a) \times (f_c - f_a)\rangle$$
$$= \sum_{\tau \in Q} \langle A_\tau(f),\, (f_c - f_b) \times h_a + (f_a - f_c) \times h_b + (f_b - f_a) \times h_c\rangle
= \sum_{p \in V_T} \Big\langle h_p,\, \sum_{\tau = [p,b,c] \in T_p} [A_\tau(f) \times (f_c - f_b)]\Big\rangle$$
for all $h \in S_0$, where $T_p$ denotes the elements of $Q$ that include vertex $p$. The partial derivative of $\phi$ with respect to $f_p$ is thus given by
$$(g_0)_p = \sum_{\tau = [p,b,c] \in T_p} [A_\tau(f) \times (f_c - f_b)]$$
for $p \in V_I$.
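As a concrete illustration of these formulas, the following is a minimal NumPy sketch (ours, not the author's Matlab implementation) of the structured triangulation, the triangle normals $A_\tau(f)$, the functional $\phi$, and the per-vertex gradient accumulation; the array layout and helper names are assumptions.

```python
import numpy as np

def structured_triangles(k):
    """Vertex triples [a, b, c] for the structured triangulation of the k-by-k
    grid, splitting each square cell with a diagonal into two triangles."""
    idx = np.arange(k * k).reshape(k, k)               # vertex numbering by grid row
    a = idx[:-1, :-1].ravel(); b = idx[:-1, 1:].ravel()
    c = idx[1:, 1:].ravel();   d = idx[1:, :-1].ravel()
    return np.vstack([np.stack([a, b, c], axis=1),     # one triangle per cell half
                      np.stack([a, c, d], axis=1)])

def triangle_normals(f, tri):
    """A_tau(f) = (f_b - f_a) x (f_c - f_a); f has shape (k*k, 3)."""
    fa, fb, fc = f[tri[:, 0]], f[tri[:, 1]], f[tri[:, 2]]
    return np.cross(fb - fa, fc - fa)

def phi_and_gradient(f, tri, interior):
    """phi(f) = (1/2) sum ||A_tau||^2 and (g_0)_p = sum_{tau=[p,b,c]} A_tau x (f_c - f_b),
    with the gradient zeroed at boundary vertices (variations vanish there)."""
    A = triangle_normals(f, tri)
    phi = 0.5 * np.sum(A * A)
    g0 = np.zeros_like(f)
    for j in range(3):                                  # let each vertex of tau play the role of p
        p, b, c = tri[:, j], tri[:, (j + 1) % 3], tri[:, (j + 2) % 3]
        np.add.at(g0, p, np.cross(A, f[c] - f[b]))
    g0[~interior] = 0.0
    return phi, g0
```

Reversing the orientation of a triple flips the sign of both $A_\tau$ and the edge $f_c - f_b$, so $\phi$ and the accumulated gradient are unaffected by the choice of cyclic starting vertex.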

Also,
$$\phi''(f)gh = \sum_{\tau \in Q} \big[\langle A_\tau'(f)g,\, A_\tau'(f)h\rangle + \langle A_\tau(f),\, A_\tau'(g)h\rangle\big]
= \sum_{p \in V_T} \Big\langle h_p,\, \sum_{\tau = [p,b,c] \in T_p} [A_\tau'(f)g \times (f_c - f_b) + A_\tau(f) \times (g_c - g_b)]\Big\rangle$$
for $g, h \in S_0$, so that the Hessian $H$ is defined by
$$(H(f)g)_p = \sum_{\tau = [p,b,c] \in T_p} [A_\tau'(f)g \times (f_c - f_b) + A_\tau(f) \times (g_c - g_b)]$$
$$= \sum_{\tau = [p,b,c] \in T_p} \big\{[(f_b - f_p) \times (g_c - g_p) + (g_b - g_p) \times (f_c - f_p)] \times (f_c - f_b) + [(f_b - f_p) \times (f_c - f_p)] \times (g_c - g_b)\big\}$$
$$= \sum_{\tau = [p,b,c] \in T_p} \big\{[(f_c - f_b) \times g_p] \times (f_c - f_b) - (f_c - f_b) \times [g_b \times (f_c - f_p) - g_c \times (f_b - f_p)] + [(f_b - f_p) \times (f_c - f_p)] \times (g_c - g_b)\big\}$$
with $g_b = 0$ for $b \in V_0$. The Gauss-Newton approximation is obtained by omitting the last term:
$$(G(f)g)_p = \sum_{\tau = [p,b,c] \in T_p} [A_\tau'(f)g \times (f_c - f_b)]
= \sum_{\tau = [p,b,c] \in T_p} \big\{[(f_c - f_b) \times g_p] \times (f_c - f_b) - (f_c - f_b) \times [g_b \times (f_c - f_p) - g_c \times (f_b - f_p)]\big\}.$$
The order-$3(k-2)^2$ matrix $G(f)$ is positive definite for all $f$ by a minor modification of Theorem 3.1 in [9]. Since we use a conjugate gradient method to solve the linear systems involving $G$ and $H$, it is not necessary to compute and store the matrices. We use the above expressions to compute matrix-vector products. Note that
$$(G(f)f)_p = 2\sum_{\tau = [p,b,c] \in T_p} [A_\tau(f) \times (f_c - f_b)] = 2\,(g_0)_p,$$
so that $G(f)f = 2g_0$. We also have $H(f)f = 3g_0$. We use the former expression to compute $g_0$.
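The matrix-free Gauss-Newton product can be sketched in the same NumPy setting (again our illustration, reusing the helpers from the previous listing); the identity $G(f)f = 2g_0(f)$ gives a convenient consistency check.

```python
import numpy as np

def gauss_newton_product(f, g, tri, interior):
    """Accumulate (G(f)g)_p = sum_{tau=[p,b,c]} [A_tau'(f)g] x (f_c - f_b), where
    A_tau'(f)g = (g_b - g_p) x (f_c - f_p) + (f_b - f_p) x (g_c - g_p)."""
    out = np.zeros_like(f)
    for j in range(3):                                  # each vertex of tau as p in turn
        p, b, c = tri[:, j], tri[:, (j + 1) % 3], tri[:, (j + 2) % 3]
        dA = (np.cross(g[b] - g[p], f[c] - f[p]) +
              np.cross(f[b] - f[p], g[c] - g[p]))       # A_tau'(f)g, independent of the cyclic labeling
        np.add.at(out, p, np.cross(dA, f[c] - f[b]))
    out[~interior] = 0.0                                # restrict the result to interior vertices
    return out

# Consistency check from the text: applying the product formula to g = f gives
# interior components equal to 2 (g_0)_p, since A_tau'(f)f = 2 A_tau(f).
```

A product of this kind is all the (preconditioned) conjugate gradient iteration needs, so neither $G$ nor $H$ is ever assembled.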

4. Trust region method

We retain the notation defined in Section 3. Critical points of $\phi$ are computed by the iteration
$$f_{n+1} = f_n + \delta f, \quad n = 0, 1, \ldots,$$
where $\delta f \in S_0$ minimizes the quadratic model function
$$q(\delta f) = \phi(f_n) + \langle \delta f, g_0(f_n)\rangle_0 + \tfrac{1}{2}\langle \delta f, H_n \delta f\rangle_0,$$
subject to the trust region constraint
$$\langle \delta f, \delta f\rangle_C = \langle \delta f, C(f_n)\delta f\rangle_0 \le \Delta^2$$
for Hessian $H_n = H(f_n)$ or approximate Hessian $H_n = G(f_n)$, trust-region radius $\Delta$, and symmetric positive definite matrix $C$ which is used, along with the $S_0$ inner product $\langle\cdot,\cdot\rangle_0$, to define the trust-region metric $\langle\cdot,\cdot\rangle_C$.

The trust region problem has a unique global solution $\delta f$ if and only if $\delta f$ is feasible and there exists a nonnegative Lagrange multiplier $\lambda$ such that
$$[H_n + \lambda C]\delta f = -g_0(f_n)$$
and either $\lambda = 0$ or $\langle \delta f, \delta f\rangle_C = \Delta^2$ (the constraint is active). The iteration is thus a blend of a Newton or Gauss-Newton iteration, which provides a superlinear convergence rate, and a steepest descent iteration (using a gradient defined by the choice of $C$) which ensures progress toward a local minimum from an arbitrary initial estimate $f_0$. Note that, unlike the Levenberg-Marquardt method, the damping parameter $\lambda$ is defined implicitly by the trust region radius. The latter is adjusted using the heuristic method described in [5, p. 68]. The basic idea is to compute the ratio of the actual reduction in $\phi$ to the reduction predicted by the model, and to shrink or expand the trust region accordingly.

Since $q(\delta f)$ agrees with $\phi(f_n + \delta f)$ in the first two or three terms, a constrained minimizer of $q$ accurately approximates a minimizer of $\phi$ when $\langle \delta f, \delta f\rangle_C$ is small. The unconstrained minimizer of $q$ is the Newton or Gauss-Newton step
$$g_n = -H_n^{-1} g_0(f_n).$$
When $\Delta$ is small and the constraint is active, the quadratic term is very small, and the constrained minimizer of $q$ is approximated by the constrained minimizer in the steepest descent direction, referred to as the Cauchy point. The unconstrained minimizer in the steepest descent direction is obtained by minimizing $q(\alpha g)$ over $\alpha$ for the $C$-gradient $g = C^{-1}g_0$:
$$g_u = -\frac{\langle g_0, g\rangle_0}{\langle g, H_n g\rangle_0}\, g.$$
The constrained minimization problem for computing $\delta f$ is solved by a dogleg method in which the curved trajectory $\delta f(\Delta)$ is approximated by the polygonal line with a segment from 0 to $g_u$ followed by a segment from $g_u$ to $g_n$. The dogleg path intersects the trust-region boundary at most once. This follows from the fact that $\langle \delta f, \delta f\rangle_C$ increases and $q(\delta f)$ decreases along the path. The linear system defining $g_n$ is solved by a preconditioned conjugate gradient (PCG) method preconditioned by the diagonal of the Hessian $H(f_n)$ (which coincides with the diagonal of $G(f_n)$).

We implemented and tested two methods. The Euclidean trust region method is the standard method, using $H_n = G(f_n)$, corresponding to the Gauss-Newton search direction, and $C = I$ for identity matrix $I$, so that the trust region radius is defined by the Euclidean norm. The Sobolev trust region method, on the other hand, uses $H_n = H(f_n)$ and $C = G(f_n)$, corresponding to the Newton search direction and a trust region defined by a stepwise varying weighted Sobolev metric. Both methods were found to be quite effective, the optimal choice being problem-dependent, while other reasonable methods, including $C = \mathrm{diag}(H_n)$ and use of the standard $H^1$ Sobolev norm, were less efficient on our test problems.
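For illustration, here is a sketch of one dogleg step and of the standard radius-update heuristic of [5] in the same NumPy setting (our code, not a transcription of the author's; cnorm and cdot stand for the norm and inner product induced by $C$ and are assumed callables).

```python
import numpy as np

def dogleg_step(g_u, g_n, radius, cnorm, cdot):
    """Step along the path 0 -> g_u (Cauchy point) -> g_n (Newton/Gauss-Newton point),
    truncated where the C-norm of the step reaches the trust-region radius."""
    if cnorm(g_n) <= radius:                       # full Newton/Gauss-Newton step is feasible
        return g_n
    if cnorm(g_u) >= radius:                       # even the Cauchy point lies outside
        return (radius / cnorm(g_u)) * g_u
    d = g_n - g_u                                  # intersect the second segment with the boundary
    a, b, c = cdot(d, d), 2.0 * cdot(g_u, d), cdot(g_u, g_u) - radius ** 2
    t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return g_u + t * d

def update_radius(radius, rho, step_norm, max_radius):
    """rho = (actual reduction in phi) / (reduction predicted by the model q)."""
    if rho < 0.25:
        return 0.25 * radius                       # model was poor: shrink
    if rho > 0.75 and step_norm >= 0.999 * radius:
        return min(2.0 * radius, max_radius)       # good model, active constraint: expand
    return radius
```

In the standard scheme of [5] the step is also rejected (so that $f_{n+1} = f_n$) when the ratio rho falls below a small positive threshold.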

Note that Hessian values $H(f_n)$ are generally not positive definite unless $f_n$ is quite close to a local minimum of $\phi$, and this would appear to be problematic for an algorithm that uses a conjugate gradient method to solve linear systems. Surprisingly, however, this was not a problem for our dogleg method. The conjugate gradient iteration is simply terminated when a direction of negative curvature is encountered, and the iterate, although only a crude approximation to $-H_n^{-1}g_0(f_n)$, is used as the value of $g_n$ in the dogleg method.

5. Test results

We tested the methods with boundary curves taken from the following four test functions:

Catenoid: $f(x, y) = (r\cos\theta,\ r\sin\theta,\ y)$ for radius $r = \cosh(y - 0.5)$ and angle $\theta = 2\pi x$.

Helicoid: $f(x, y) = (x\cos(10y),\ x\sin(10y),\ 2y)$.

Enneper's surface: $f(x, y) = (u - u^3/3 + uv^2,\ v - v^3/3 + u^2 v,\ u^2 - v^2)$ for $u = (2x - 1)r/\sqrt{2}$ and $v = (2y - 1)r/\sqrt{2}$, where $r = 1.1$.

Henneberg's surface: $f(x, y) = (2\sinh(u)\cos(v) - (2/3)\sinh(3u)\cos(3v),\ 2\sinh(u)\sin(v) - (2/3)\sinh(3u)\sin(3v),\ 2\cosh(2u)\cos(2v))$ for $u = 0.5x - 0.35$ and $v = \pi(2y - 1)$.

The unit square was uniformly partitioned with a $k$ by $k$ grid for $k = 40, 50, \ldots, 80$, and initial approximations were computed as grid-point values of $f_0 = f + p$, where $p(x, y) = 100xy(1 - x)(1 - y)$, so that $f_0$ agreed with the test function $f$ on the boundary of $\Omega$. Initial estimates are thus taken from surfaces that are smooth but far from minimal.
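For concreteness, here is how one of the initial estimates could be assembled (our sketch; the text does not say how the scalar bump $p$ is applied to the three components, so adding it to each component is an assumption):

```python
import numpy as np

def catenoid(x, y):
    """Catenoid test function: f(x, y) = (r cos(theta), r sin(theta), y)."""
    r, theta = np.cosh(y - 0.5), 2.0 * np.pi * x
    return np.stack([r * np.cos(theta), r * np.sin(theta), y], axis=-1)

def initial_estimate(k):
    """Grid-point values of f_0 = f + p with p(x, y) = 100 x y (1 - x)(1 - y),
    which vanishes on the boundary, so f_0 matches f there."""
    x, y = np.meshgrid(np.linspace(0.0, 1.0, k), np.linspace(0.0, 1.0, k), indexing="ij")
    f = catenoid(x, y)
    p = 100.0 * x * y * (1.0 - x) * (1.0 - y)
    return (f + p[..., None]).reshape(-1, 3)            # shape (k*k, 3), far from minimal inside
```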

The computed surfaces are depicted in Figures 1-4 below. In each case the surface on the left was computed with high resolution and rendered with smooth shading, while the surface on the right was computed with low resolution and rendered as a wireframe mesh so that the parameterization is visible. (The quadrilaterals in the mesh are images of squares in the parameter space.) The apparent flaws in the wireframe images are not flaws in the surfaces, but rather reflect less than perfectly uniform parameterizations. They could be eliminated by a sequence of edge swaps based on the Delaunay circumcircle criterion, or perhaps just by improving the initial estimates.

Tables 1-4 display iteration counts and execution times for each of the five $k$ values and the two trust region methods described in Section 4. Labels NIT and NCG denote the number of outer (Newton or Gauss-Newton) iterations and the number of conjugate gradient iterations, respectively. Convergence of the outer iteration was defined by a bound of $5 \times 10^{-5}$ on the relative change in the energy functional $\phi$. This resulted in highly accurate approximations with root-mean-square Euclidean gradient components of at most $5 \times 10^{-4}$ and in most cases less than $10^{-6}$. Convergence of the conjugate gradient method was defined by a bound of $5 \times 10^{-6}$ on the relative residual. Execution times are in minutes on a MacBook Air with a 1.8 GHz Intel Core i7 processor running Matlab version 7.13 (R2011b).

             Euclidean                  Sobolev
  k     NIT     NCG    Time      NIT      NCG    Time
 40      28   10345    0.69       22     8519    0.59
 50      35   17685    1.59       28    14994    1.39
 60      42   29543    3.45       35    23356    2.82
 70      48   44772    6.78       47    39554    5.96
 80      61   73240   13.93       37    36551    7.06

Table 1: Iteration counts and execution times for Catenoid

             Euclidean                  Sobolev
  k     NIT     NCG    Time      NIT      NCG    Time
 40      19    6999    0.47       21     8215    0.57
 50      21   10452    0.99       32    20315    2.02
 60      22   14978    1.96       43    35670    4.82
 70      34   31321    5.44       69    76823   14.05
 80      44   50324   11.94       28    31924    7.68

Table 2: Iteration counts and execution times for Helicoid

Figure 1: Catenoid

As previously mentioned, the test results indicate no clear choice between the two trust region methods. The Sobolev method is more efficient for the catenoid, the Euclidean method performs better for Enneper's surface, and the preferred method depends on $k$ for the other two test functions. Both methods, however, are significantly more robust and efficient than the methods discussed in [9].

Figure 2: Helicoid

Figure 3: Enneper's Surface

             Euclidean                  Sobolev
  k     NIT     NCG    Time      NIT      NCG    Time
 40      30   11736    0.79       77    41731    2.86
 50      23   12007    1.16       90    64945    5.96
 60      28   17875    2.30       96    92990   10.96
 70      27   22167    3.82      128   160983   24.09
 80      52   54369   12.41      140   219190   42.04

Table 3: Iteration counts and execution times for Enneper's surface

Figure 4: Henneberg's Minimal Surface

             Euclidean                  Sobolev
  k     NIT     NCG    Time      NIT      NCG    Time
 40      14    3829    0.26       19     5682    0.41
 50      19    7334    0.69       18     7547    0.73
 60      26   13356    1.72       23    12484    1.62
 70      29   19111    3.21       31    22050    4.05
 80      38   31845    7.27       12     9296    2.20

Table 4: Iteration counts and execution times for Henneberg's surface

We have thus demonstrated that, as is typically the case, the trust region approach is more robust than a line search for globalizing a Newton iteration. We also eliminated the inefficiency and inelegance of the ad hoc procedure for deciding when to switch from a gradient descent iteration to a Newton iteration.

As a final illustration of the good mesh quality produced by our method, we created an initial surface similar to that of Figure 6 in [6]:
$$f_0(x, y) = \begin{cases} 2y & \text{if } y \le 0.5,\\ 2 - 2y & \text{if } y > 0.5 \end{cases}$$
for $(x, y)$ in the unit square. The initial surface and computed minimal surface on a 40 by 40 grid are depicted in Figure 5.

References

[1] K. A. Brakke, The surface evolver, Experiment. Math. 1 (1992), 141-165.

[2] W. Chen, Y. Cai, J. Zheng, Constructing triangular meshes of minimal area, Computer-Aided Design & Applications 5 (2008), 508-518.

[3] P. Kazemi, R. J. Renka, A Levenberg-Marquardt method based on Sobolev gradients, Nonlinear Anal.: Theory, Methods & Appl. 75 (2012), 6170-6179.

Figure 5: Initial surface and minimal surface with saddle-shaped boundary

[4] P. Kazemi, R. J. Renka, Minimization of the Ginzburg-Landau energy functional by a Sobolev gradient trust-region method, Appl. Math. Comput. 219 (2013), 5936-5942.

[5] J. Nocedal, S. J. Wright, Numerical Optimization, Springer-Verlag, New York, 1999.

[6] H. Pan, Y.-K. Choi, Y. Liu, W. Hu, Q. Du, K. Polthier, C. Zhang, W. Wang, Robust modeling of constant mean curvature surfaces, ACM Trans. Graph., SIGGRAPH 2012.

[7] U. Pinkall, K. Polthier, Computing discrete minimal surfaces and their conjugates, Experiment. Math. 2 (1993), 15-36.

[8] K. Polthier, W. Rossman, Discrete constant mean curvature surfaces and their index, J. Reine Angew. Math. 549 (2002), 47-77.

[9] R. J. Renka, J. W. Neuberger, Minimal surfaces and Sobolev gradients, SIAM J. Sci. Comput. 16 (1995), 1412-1427.

[10] R. J. Renka, in: J. W. Neuberger, Sobolev Gradients and Differential Equations, Springer-Verlag, Berlin, 2010, pp. 199-208.

[11] R. J. Renka, Nonlinear least squares and Sobolev gradients, Appl. Numer. Math. 65 (2013), 91-104.

[12] R. J. Renka, A Sobolev gradient method for treating the steady-state incompressible Navier-Stokes equations, Cent. Eur. J. Math. 11 (2013), 630-641.