
A level set method for solving free boundary problems associated with obstacles

K. Majava
University of Jyväskylä, Department of Mathematical Information Technology, P.O. Box 35 (Agora), FIN-40014 University of Jyväskylä, Finland

X.-C. Tai
University of Bergen, Department of Mathematics, Johannes Brunsgate 12, N-5009 Bergen, Norway

Abstract

A method related to the level set method is proposed for solving free boundary problems coming from contact with obstacles. Two different approaches are described and applied for solving a unilateral obstacle problem. The cost functionals coming from the approaches are nonsmooth. For solving the nonsmooth minimization problems, two methods are applied: first, a proximal bundle method, which is a method for solving general nonsmooth optimization problems; second, a gradient method is proposed for solving the regularized problems. Numerical experiments are included to verify the convergence of the methods and the quality of the results.

Key words: Level set methods, free boundary problems, obstacle problem

This research was partially supported by the Academy of Finland, NorFA (Nordisk Forskerutdanningsakademi), and the Research Council of Norway.
Email addresses: majkir@mit.jyu.fi (K. Majava), tai@mi.uib.no (X.-C. Tai).
URLs: http://www.mit.jyu.fi/majkir (K. Majava), http://www.mi.uib.no/%7etai (X.-C. Tai).
Preprint submitted to Elsevier Science, November 2003

1 Introduction

The level set method initiated by Osher and Sethian [1] has proven to be an efficient numerical device for capturing moving fronts, see [4]. There are many industrial problems where interfaces need to be identified, and these can be formulated as moving front problems. We mention, for example, image segmentation

problems [5], inverse problems [6], and optimal shape design problems [7,8]. In free boundary problems, one often needs to find the boundary of some domains. It is natural to use the level set method for this kind of application. In [3,9], the level set method was used for Stefan type free boundary problems. In this work, we shall propose an alternative approach to the level set idea of [2,3,9] and use it for tracing the free boundaries of obstacle contact type problems.

The contents of the paper are as follows. In Section 2, a model free boundary problem is described and the proposed level set approaches are introduced for solving the problem. In Section 3, the solution algorithms are described that are applied for realizing the proposed level set approaches. In Section 4, numerical experiments are presented to verify the convergence of the methods and the quality of the results. Finally, in Section 5, the conclusions are stated.

2 A modified level set method

Consider a model free boundary problem which comes from the minimization problem:

min_{v ∈ K} F(v),   (1)

with

F(v) = ∫_Ω ( (1/2)|∇v|^2 − f v ) dx,   K = { v | v ∈ H^1_0(Ω), v ≥ ψ }.   (2)

In the above, Ω ⊂ R^p, p = 1, 2, ψ is the obstacle function satisfying ψ ≤ 0 on ∂Ω, and f typically represents an external force for physical problems. The solution u of (1) is unique and can be formally written as the function satisfying

−Δu − f ≥ 0,   u − ψ ≥ 0,   (−Δu − f)(u − ψ) = 0.

To find the solution u, we need to find the contact region Ω^+ = {x | u(x) = ψ(x), x ∈ Ω}. Once we know Ω^+, the value of u in Ω\Ω^+ can be obtained by solving

−Δu = f in Ω\Ω^+,   u = 0 on ∂Ω,   u = ψ on ∂Ω^+.

In order to find u, we essentially just need to find Γ = ∂Ω^+. Inside Γ, u = ψ, and outside Γ, u is the solution of the Poisson equation. Based on the above observation, we see that it is essentially enough to find the curve Γ in order to solve the free boundary problem (1). Starting with an initial curve, we shall slowly evolve the curve to the true free boundary. We can use the level set method to represent the curve, i.e., we try to find a function ϕ(t, x) such that

Γ(t) = {x | ϕ(t, x) = 0}.

In the above, Γ(0) is the initial curve and Γ(t) converges to the true free boundary as t → ∞. One of the essential ingredients of the level set method is to find the velocity field V(t, x), which is then used to move the level set function ϕ(t, x) by solving

ϕ_t + V · ∇ϕ = 0,   ϕ(0, x) = ϕ_0(x) = ±distance(x, Γ(0)).

In this work, we propose an alternative approach, which seems to be simpler than the approach outlined above. Define the Heaviside function H(ϕ) as

H(ϕ) = 1 for ϕ > 0,   H(ϕ) = 0 for ϕ ≤ 0.

For any v ∈ K, there exists ϕ ∈ H^1(Ω) such that

v = ψ + ϕH(ϕ).   (3)

It is easy to see that

v = ψ, if ϕ ≤ 0 (i.e., in the contact region);   v = ψ + ϕ, if ϕ > 0 (i.e., outside the contact region).   (4)

Thus, the sign of the function ϕ carries the information about the contact region. The curve which separates the regions where ϕ is positive or negative gives the free boundary. In the traditional level set method, the function ϕ is only used to represent the curve Γ. In our approach, ϕ is not only used to represent the curve, but also to carry information about the solution u outside the contact region, i.e., ϕ is used to indicate that u = ψ inside the contact region, and its value outside the contact region shall be ϕ = u − ψ. We use iterative type methods to find the correct values of ϕ both inside and outside the contact region. Note that the value of ϕ inside the contact region is not unique.

Representation (3) is not the only formulation that can be used to express the value of a function v ∈ K as a function of ϕ ∈ H^1(Ω). In fact, for every v ∈ K there exists a function ϕ ∈ H^1(Ω) such that

v = ψ + (1/2)(ϕ + |ϕ|).   (5)

For the above representation of v, we see that (4) is also correct. From now on, we use the shortened notation

J_i(ϕ) := F(v_i(ϕ)), for i = 1, 2,   (6)

where

v_1(ϕ) = ψ + ϕH(ϕ) and v_2(ϕ) = ψ + (1/2)(ϕ + |ϕ|).   (7)
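For later reference, the two representations and the energy are easy to state in discrete form. The following is a minimal NumPy sketch (our own illustration; the paper's experiments were implemented in Fortran 77), assuming a uniform one-dimensional grid of interior nodal values with homogeneous Dirichlet boundary values; the names v1, v2, and energy_F are ours:

```python
import numpy as np

def v1(phi, psi):
    # Approach 1, eq. (3): v = psi + phi * H(phi), with the exact Heaviside function.
    return psi + phi * (phi > 0)

def v2(phi, psi):
    # Approach 2, eq. (5): v = psi + (phi + |phi|) / 2.
    return psi + 0.5 * (phi + np.abs(phi))

def energy_F(v, f, h):
    # Discrete version of F(v) = int( |v'|^2 / 2 - f * v ) dx on a 1D interval,
    # with v = 0 assumed at both boundary nodes.
    v_ext = np.concatenate(([0.0], v, [0.0]))
    dv = np.diff(v_ext) / h
    return 0.5 * h * np.sum(dv**2) - h * np.sum(f * v)

# Both maps produce feasible functions: v1(phi, psi) >= psi and v2(phi, psi) >= psi
# for any phi, so J_i(phi) = F(v_i(phi)) can be minimized without constraints.
```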

On the boundary ∂Ω, we have ϕ = −ψ for both of the approaches (i.e., for i = 1, 2). Thus, the function ϕ assumes a Dirichlet boundary condition. For simplicity, we assume from now on that ψ = 0 on ∂Ω, so that we have ϕ ∈ H^1_0(Ω) for both of the approaches. Consider now the following unconstrained minimization problem

min_{ϕ ∈ H^1_0(Ω)} J_i(ϕ), i = 1, 2.   (8)

Due to the use of the Heaviside function and the absolute value of functions in approaches 1 and 2, the cost functional J_i is not differentiable. However, we can prove that the problem has a solution:

Theorem 1 The nonsmooth minimization problem (8) has a minimizer for i = 1, 2. The minimizer may not be unique.

PROOF. For minimization problem (1), the cost functional is strictly convex and continuous in H^1_0(Ω) and K is closed in H^1_0(Ω), see [10, p. 9]. Thus, there exists a unique u ∈ K such that

F(u) ≤ F(v)   ∀v ∈ K.   (9)

Associated with this unique solution u, let us define

ϕ^*(x) = u(x) − ψ(x), if u(x) > ψ(x) (i.e., outside the contact region);   ϕ^*(x) = 0, if u(x) = ψ(x) (i.e., inside the contact region).   (10)

With ϕ^* given above, we have u = v_i(ϕ^*), i = 1, 2. For any ϕ ∈ H^1_0(Ω), we have v_i(ϕ) ∈ K. Thus, we get from (9) that

J_i(ϕ^*) ≤ J_i(ϕ)   ∀ϕ ∈ H^1_0(Ω), i = 1, 2.   (11)

This means that ϕ^* is a minimizer of (8). The value of ϕ^* inside the contact region can be any negative value, which means that the minimizer is nonunique.

3 Solution algorithms

Minimization problem (8) has a minimizer. However, the minimizer is nonunique and the cost functional J_i is nonconvex. In order to find a minimizer for it, we shall use two algorithms. The first one is the proximal bundle method, which is a popular method for finding local minimizers of nonsmooth and nonconvex minimization problems.

For the second algorithm, we turn the nonsmooth minimization problem into a differentiable problem by regularizing the nonsmooth functions used in the cost functionals, and then use a gradient method to find a minimizer of the smoothed problem.

3.1 Proximal bundle method (PB)

In what follows, we briefly introduce the proximal bundle method, which is a method for finding a local minimum of a general unconstrained nonlinear optimization problem

min_{x ∈ R^n} J(x),   (12)

where J: R^n → R. We assume that J in (12) is a locally Lipschitz continuous function. Thus, J may be nondifferentiable, but it has a subdifferential at each point x. The subdifferential ∂J (of Clarke [11]) of J at x ∈ R^n is defined by

∂J(x) = conv{ ξ ∈ R^n : ∃ x_i → x such that ∇J(x_i) exists and ∇J(x_i) → ξ }.

An element ξ ∈ ∂J(x) is called a subgradient. If J is differentiable at x, then ∂J(x) = {∇J(x)}. The basic idea in bundle methods is to approximate the whole subdifferential by collecting subgradients calculated in a neighbourhood of the considered point x. At each iterate, this approximation of the subdifferential is used to build up a local model of the original problem. The origin of the methods is the classical cutting plane method [12], where a piecewise linear approximation of the objective function is formed. In bundle methods, a stabilizing quadratic term is added to the polyhedral approximation in order to accumulate some second order information about the curvature of the objective function.

The PB method assumes that at each point x ∈ R^n, we can evaluate the function value J(x) and an arbitrary subgradient g(x) from the subdifferential ∂J(x). For the convergence of the method, in the nonconvex case, J is further assumed to be upper semismooth [13,14]. Some details of the PB method are given in the Appendix of this paper. For more details, see [13,15,16].

The tested proximal bundle algorithm is from the software package NSOLIB. The code utilizes the subgradient aggregation strategy of [15] to keep the storage requirements bounded, and the safeguarded quadratic interpolation algorithm of [13] to control the size of the search region. For solving the quadratic subproblem, the code employs the quadratic solver QPDF4, which is based on the dual active-set method described in [17].

3.2 Smoothed approximations and the gradient method

Because the Heaviside function is nondifferentiable at ϕ = 0, we replace it by a smooth approximation

H_ε(ϕ) = (1/π) arctan(ϕ/ε) + 1/2, for small positive ε.

The gradient of H_ε(ϕ) is denoted by δ_ε(ϕ) (the smoothed Dirac delta function):

δ_ε(ϕ) = ε / (π(ϕ^2 + ε^2)).

For approach 1, i.e., for i = 1 in (6)–(7), we then have

∂F/∂v = −Δv − f and ∂v/∂ϕ = H_ε(ϕ) + ϕ δ_ε(ϕ).

Correspondingly, we have

∂J_1/∂ϕ = (∂F/∂v)(∂v/∂ϕ) = (−Δv − f)(H_ε(ϕ) + ϕ δ_ε(ϕ)).   (13)

Since also |ϕ| is nondifferentiable at 0, we again use a smooth approximation

|ϕ| ≈ √(ϕ^2 + ε̂), for small positive ε̂,

for approach 2, i.e., for i = 2 in (6)–(7). Then,

∂v/∂ϕ = (1/2)( 1 + ϕ/√(ϕ^2 + ε̂) ),

and, altogether, we have

∂J_2/∂ϕ = (∂F/∂v)(∂v/∂ϕ) = (−Δv − f) · (1/2)( 1 + ϕ/√(ϕ^2 + ε̂) ).   (14)

Let J_i^ε denote the corresponding cost functionals with the smoothed functions. We shall use the following gradient method (GR) to find a function ϕ which approximates the minimizers of (8):

ϕ^{n+1} = ϕ^n − α (∂J_i^ε/∂ϕ)(ϕ^n), i = 1, 2.

The step size α is fixed and is obtained by a trial and error approach.
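As a sketch of how (13), (14), and the GR iteration look in code (our illustration; the discrete operator neg_laplacian, standing for the finite difference approximation of −Δ described in Section 4.1, is assumed to be supplied by the caller):

```python
import numpy as np

def H_eps(phi, eps):
    # Smoothed Heaviside: H_eps(phi) = arctan(phi / eps) / pi + 1/2.
    return np.arctan(phi / eps) / np.pi + 0.5

def delta_eps(phi, eps):
    # Smoothed Dirac delta, the derivative of H_eps.
    return eps / (np.pi * (phi**2 + eps**2))

def grad_J1(phi, psi, f, eps, neg_laplacian):
    # Eq. (13): dJ1/dphi = (-Delta v - f) * (H_eps(phi) + phi * delta_eps(phi)).
    v = psi + phi * H_eps(phi, eps)
    return (neg_laplacian(v) - f) * (H_eps(phi, eps) + phi * delta_eps(phi, eps))

def grad_J2(phi, psi, f, eps_hat, neg_laplacian):
    # Eq. (14): dJ2/dphi = (-Delta v - f) * (1 + phi / sqrt(phi^2 + eps_hat)) / 2.
    root = np.sqrt(phi**2 + eps_hat)
    v = psi + 0.5 * (phi + root)
    return (neg_laplacian(v) - f) * 0.5 * (1.0 + phi / root)

def gr_step(phi, alpha, grad):
    # One iteration of the gradient method: phi^{n+1} = phi^n - alpha * dJ/dphi.
    return phi - alpha * grad
```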

Fig. 1. The obstacle (left) and the analytical solution (right) for Example 2.

4 Numerical experiments

In this section, we present the results of the numerical experiments that were computed using the proposed algorithms. All experiments were performed on an HP9000/J5600 workstation (552 MHz PA8600 CPU) and the algorithms were implemented in Fortran 77.

4.1 General issues about the experiments

The following two example problems were considered. Both problems have f = 0.

Example 2 In this one-dimensional example, we chose Ω = [0, 1]. The obstacle and the analytical solution are shown in Figure 1.

Example 3 In this example, we chose Ω = [−2, 2] × [−2, 2] and

ψ(x, y) = √(1 − x^2 − y^2), for x^2 + y^2 ≤ 1;   ψ(x, y) = −1, elsewhere.   (15)

With the consistent Dirichlet boundary condition, problem (1) with ψ introduced in (15) has an analytical solution of the form

u(x, y) = √(1 − x^2 − y^2), for r ≤ r^*;   u(x, y) = −(r^*)^2 ln(r/R) / √(1 − (r^*)^2), for r^* ≤ r,

where r = √(x^2 + y^2), R = 2, and r^* = 0.6979651482..., which satisfies

(r^*)^2 (1 − ln(r^*/R)) = 1.
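The constant r^* can be recovered numerically from the scalar equation above; the following Newton iteration is our own check of the quoted value, not part of the original experiments:

```python
import numpy as np

def contact_radius(R=2.0, a=0.7, tol=1e-14):
    # Newton's method for g(a) = a^2 * (1 - log(a / R)) - 1 = 0;
    # the root is r* = 0.6979651482...
    for _ in range(50):
        g = a**2 * (1.0 - np.log(a / R)) - 1.0
        dg = 2.0 * a * (1.0 - np.log(a / R)) - a   # g'(a)
        step = g / dg
        a -= step
        if abs(step) < tol:
            break
    return a

def u_exact(x, y, R=2.0):
    # Analytical solution of Example 3, evaluated inside/outside the circle r = r*.
    a = contact_radius(R)
    r = np.maximum(np.hypot(x, y), 1e-30)          # avoid log(0) at the origin
    inside = np.sqrt(np.maximum(1.0 - r**2, 0.0))
    outside = -(a**2) * np.log(r / R) / np.sqrt(1.0 - a**2)
    return np.where(r <= a, inside, outside)
```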

Fig. 2. The obstacle (left) and the analytical solution (right) for Example 3.

The obstacle and the analytical solution are illustrated in Figure 2. The same example has been considered, e.g., in [18].

Discretization of the problems. In the one-dimensional case, Ω = [0, 1] is divided into n + 1 subintervals, and we denote h = 1/(n + 1). In the two-dimensional case, Ω = [−2, 2] × [−2, 2] is divided into n̄ + 1 subintervals in both the x-direction and the y-direction. We denote n = n̄^2 and h = 4/(n̄ + 1). Then, u, f, ϕ, and ψ denote vectors in R^n, whose components correspond to the values of the unknown functions at the equidistant discretization points in Ω. A finite difference approximation is used for −Δ with Dirichlet boundary conditions.

How to apply PB for solving (8). Even if it is possible to apply PB for solving problem (8) with ε, ε̂ = 0, our main focus in this paper is to solve the problems with ε, ε̂ > 0. In numerical simulations, the level set function ϕ is seldom exactly zero at a nodal point. Hence, special care needs to be taken to deal with the case where ϕ at a given node has a different sign than at one of the neighbouring nodes. Our numerical tests also show that the cost of PB for large problem sizes is much higher than the cost of the GR (gradient) method. Hence, PB is primarily used only as a tool to verify that the results obtained using GR are correct. However, we have also applied PB with ε̂ = 0 for approach 2.

In order to apply PB for realizing the level set approaches of Section 2, i.e., for solving problem (8), we need to be able to compute the cost function value J_i(ϕ) and a subgradient g(ϕ) ∈ ∂J_i(ϕ) at each point ϕ ∈ R^n. In the nonsmooth case (i.e., without the regularization using ε̂), we only need to supply one of the subgradients at each ϕ. For the smoothed cases, i.e., for ε, ε̂ > 0, the cost functional J_i(ϕ) is differentiable, so that instead of calculating a subgradient, we set g(ϕ) = (∂J_i/∂ϕ)(ϕ) according to (13) or (14).
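Both PB and GR thus only require a routine returning J_i(ϕ) and its (sub)gradient. A minimal sketch of the two-dimensional ingredients under the same assumptions as before (five-point stencil; for simplicity, homogeneous Dirichlet values, so the inhomogeneous boundary data of Example 3 would have to be folded in separately):

```python
import numpy as np

def neg_laplacian_2d(v, h):
    # Five-point finite difference approximation of -Delta v on a uniform grid;
    # values outside the array are taken to be zero (homogeneous Dirichlet data).
    vp = np.pad(v, 1)
    return (4.0 * vp[1:-1, 1:-1]
            - vp[:-2, 1:-1] - vp[2:, 1:-1]
            - vp[1:-1, :-2] - vp[1:-1, 2:]) / h**2

def cost_J(phi, psi, f, h, v_of_phi):
    # J_i(phi) = F(v_i(phi)); discretely F(v) = h^2 * ( v^T (-Delta_h v) / 2 - f^T v ).
    v = v_of_phi(phi, psi)
    return h**2 * (0.5 * np.sum(v * neg_laplacian_2d(v, h)) - np.sum(f * v))
```

Here v_of_phi is one of the maps v_1, v_2 from (7), e.g. the v1, v2 sketched in Section 2, and the gradient routines of Section 3.2 then play the role of the oracle g(ϕ) required by PB.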

Initial values and stopping criteria. In the experiments, the initial value ϕ^0 must be chosen and the boundary condition needs to be properly supplied. PB with ε̂ = 0 converged to a different solution when a different initial value was used. However, during the experiments, a surprising observation was made: despite the nonconvexity of the problems, when ε, ε̂ > 0 were chosen properly, both PB and GR converged to the same result, regardless of the initial value ϕ^0. For GR, we used a stopping criterion requiring the gradient (∂J_i^ε/∂ϕ)(ϕ^n) to be small in norm. For PB, the stopping criteria are more complicated and they are explained in the Appendix.

4.2 Results of the experiments

GR vs. PB. In the first experiments, we tested whether GR obtains the same results as PB for ε, ε̂ > 0. For this purpose, we considered three different problem sizes for both examples. PB was not able to solve larger problems than the largest of these due to memory problems. The values of the smoothing parameters ε, ε̂ were chosen to be the smallest ones (as powers of h) for which both algorithms converged to the same solution from both of the tested initial values. In the one-dimensional case, we chose ε = h and ε̂ = h^2, and in the two-dimensional case, ε = h^2 and ε̂ = h^4.

[Table 1: Results for Example 2 — columns: problem size n, approach, algorithm, number of iterations (it), CPU time in seconds, error e(u^*, ū), step size α, and cost F(u^*).]

[Table 2: Results for Example 3 — columns: problem size n̄ × n̄, approach, algorithm, number of iterations (it), CPU time in seconds, error e(u^*, ū), step size α, cost F(u^*), and error e(u^*, U).]

The results are given in Tables 1 and 2, where the first column gives the size of the problem. The next column states which of the level set approaches is considered. Then, the solution algorithm used is given. In the next two columns, it denotes the number of iterations needed and CPU the elapsed CPU time in seconds. In the sixth column, e(u^*, ū) denotes the average error between the analytical solution ū and the obtained result u^*:

e(u^*, ū) = (1/n) Σ_{i=1}^n |(u^* − ū)_i|.

In the seventh column, the value of the constant step size α is given for GR. We always chose the largest value of α for which the algorithms converged. In the next column, F(u^*) denotes the value of the cost functional for the obtained result.

From the tables, we conclude that with the chosen values of ε, ε̂, GR obtains the same results as PB, so that we can trust that GR is able to solve the problems. PB is faster than GR for small problems, but it gets slow when the size of the problem gets large.

Fig. 3. Difference (u^* − ū) between the computed solution u^* and the analytical solution ū for Example 2, for the two tested problem sizes (top: smaller, bottom: larger). Left: approach 1; right: approach 2.

Quality of the results with ε, ε̂ > 0. PB is not able to solve the larger problems. However, from the above experiments we know that GR is able to solve the considered problems for ε, ε̂ > 0. Hence, we solved Example 3 with GR also for n̄ = 63 and n̄ = 127, in order to be able to compare the obtained results with the finite element results of [18]. The results are presented in Table 2, where e(u^*, U) in the last column denotes the average error between the obtained result u^* and the finite element result U of [18].

In Figure 3, the differences between the computed solution u^* and the analytical solution ū are presented for Example 2. In Figure 4, the same differences are presented for Example 3 with n̄ = 63 and n̄ = 127. Finally, in Figure 5, the differences between the obtained result u^* and the finite element result U are presented for Example 3.

Both the tables and the difference plots clearly indicate that the results of approach 2 are more accurate than the results of approach 1. This is partly because for approach 2, a smaller value of the smoothing parameter ε̂ can be used.

Fig. 4. Difference between the obtained result u^* and the analytical solution ū for Example 3. Top: n̄ = 63; bottom: n̄ = 127. Left: approach 1; right: approach 2.

In fact, we observe from Table 2 that the result of approach 1 is approximately equally far away from the analytical solution and from the result of [18], whereas the result of approach 2 is clearly closer to the results of [18] than to the analytical solution.

The plots in Figure 3 clearly show a fact which is not so visible in Figure 4: the result of approach 1 is below the analytical solution at the contact region, whereas the result of approach 2 is slightly above the analytical solution at the contact region. This is due to the different types of smoothing used in the two formulations. The results U of [18] are closer to the analytical solution at the contact region than the results of our approaches with ε, ε̂ > 0. This is why u^* − U is plotted on the left and U − u^* on the right in Figure 5.

Convergence with respect to the smoothing parameters. In Tables 3 and 4, the error e(u^*, ū) is given for different values of ε, ε̂, and n for Example 3. Here, PB was used as the solution algorithm, and we only considered the values ε, ε̂ > 0 for which PB converged to the same solution from both of the tested initial values. For approach 1, only the values ε = h, h^2 were considered. The results are as can be expected: they are less accurate for the larger value of ε, see Table 3.

[Table 3: Error e(u^*, ū) with respect to ε for approach 1 — rows ε = h, h^2; columns: the three problem sizes.]

Fig. 5. Difference between the obtained result u^* and the finite element result U of [18]. Top: n̄ = 63; bottom: n̄ = 127. Left: approach 1 (u^* − U); right: approach 2 (U − u^*).

For approach 2, the values ε̂ = h^2, h^3, h^4, h^5, h^6, 0 were considered. The results in Table 4 show that the value of ε̂ does not have a significant effect on the average error between the obtained result and the analytical solution. Actually, the average error seems to grow slightly when ε̂ gets smaller. In Figure 6, the differences between the computed solution u^* and the analytical solution ū are plotted for different values of ε̂. It can be seen that for a large value of ε̂, the result is far away from the analytical one at the contact region but close to it outside the contact region. As ε̂ gets smaller, the result gets closer to the analytical one at the contact region but goes further away outside the contact region.

For ε, ε̂ > 0, the results of our new approaches are always below or above the analytical solution at the contact region, due to the smoothing used in the two formulations. However, for ε, ε̂ = 0, no error is introduced and hence, the results will be exact at the contact region. In Figure 6, a plot for ε̂ = 0 is also included.

[Table 4: Error e(u^*, ū) with respect to ε̂ for approach 2 — rows ε̂ = h^2, h^3, h^4, h^5, h^6, 0; columns: the three problem sizes.]

There is no visible difference between the result for ε̂ = 0 and the one with ε̂ = h^6, but a closer examination shows that the result with ε̂ = 0 really is exact at the contact region.

Other conclusions from the numerical experiments. Using both solution algorithms, it takes considerably more time to solve the problem of approach 2 than that of approach 1. For GR, this is due to the fact that a smaller value of α must be used for the solution of the problem of approach 2. Moreover, both algorithms become even slower when the value of the smoothing parameter ε, ε̂ is increased. This is a surprising observation, because usually, when this kind of smoothing is used, the opposite behaviour is observed. Finally, Figure 7 illustrates how the contact region develops during the GR iterations.

5 Conclusion

A level set related method was proposed for solving free boundary problems coming from contact with obstacles. Two different approaches were described and applied for solving a unilateral obstacle problem. The results obtained were promising: even though the considered problems are nonsmooth and nonconvex, we obtained the same results using two different solution algorithms. Also, the accuracy of the results was reasonable, taking into account that a small regularization error was introduced in the methods in order to make the problems differentiable. In this work, we have only tested the proposed methods for solving the obstacle problem (1). The idea is applicable also to other free boundary problems dealing with contact regions.

Fig. 6. Difference (u^* − ū) between the result of PB and the analytical solution for different values of ε̂. Top: left ε̂ = h^2, right ε̂ = h^3; middle: left ε̂ = h^4, right ε̂ = h^5; bottom: left ε̂ = h^6, right ε̂ = 0.

A Appendix

In this appendix, we give some details about the proximal bundle method. For that purpose, let us consider a general unconstrained nonlinear optimization problem

min_{x ∈ R^n} J(x),   (A.1)

where J: R^n → R is a locally Lipschitz continuous function. We assume that at each point x ∈ R^n, we can evaluate the function value J(x) and an arbitrary subgradient g(x) from the subdifferential ∂J(x). For the convergence of the method, in the nonconvex case, J is further assumed to be upper semismooth [13,14].

Fig. 7. The figures illustrate how the contact region develops during the iterations (six snapshots at increasing iteration counts, starting from it = 5).

In what follows, we describe how the search direction is obtained in the proximal bundle method and what kind of line search it uses. For more details, see [13,15,16].

Search direction: The aim is to produce a sequence {x^k} ⊂ R^n converging to some local minimum of problem (A.1). Suppose that at the k-th iteration of the algorithm, we have the current iterate x^k, some trial points y^j ∈ R^n (from past iterations), and subgradients g^j ∈ ∂J(y^j) for j ∈ G^k, where the index set G^k is a nonempty subset of {1, ..., k}. The trial points y^j are used to form a local approximation model of the function J in a neighbourhood of the considered point x^k. The idea behind the proximal bundle method is to approximate the objective function from below by a piecewise linear function. For this purpose, J is replaced with the so-called cutting-plane model

Ĵ^k_α(x) := max_{j ∈ G^k} { J(x^k) + (g^j)^T (x − x^k) − α^k_j }   (A.2)

with the linearization error

α^k_j := J(x^k) − J(y^j) − (g^j)^T (x^k − y^j), for all j ∈ G^k.   (A.3)

Now, if J is convex, then the cutting-plane model Ĵ^k_α gives an underestimate for J and the nonnegative linearization error α^k_j measures how well the model approximates the original problem.

In the nonconvex case, these facts are no longer valid: α^k_j may have a tiny (or even negative) value even if the trial point y^j lies far away from the current iterate x^k, making the corresponding subgradient g^j useless. For these reasons, the linearization error α^k_j is replaced with the so-called subgradient locality measure (cf. [15])

β^k_j := max{ |α^k_j|, γ (s^k_j)^2 },   (A.4)

where γ ≥ 0 is the distance measure parameter (γ = 0 if J is convex), and

s^k_j := ‖x^j − y^j‖ + Σ_{i=j}^{k−1} ‖x^{i+1} − x^i‖   (A.5)

is the distance measure estimating ‖x^k − y^j‖ without the need to store the trial points y^j. Then, obviously, β^k_j ≥ 0 for all j ∈ G^k and min_x Ĵ^k_β(x) ≤ J(x^k). In order to calculate the search direction d^k ∈ R^n, the original problem (A.1) is replaced with the cutting plane model

min_d Ĵ^k_β(x^k + d) + (δ^k/2) d^T d,   (A.6)

where Ĵ^k_β(x) denotes (A.2) with α^k_j replaced by β^k_j. In (A.6), the regularizing quadratic penalty term (δ^k/2) d^T d is included in order to guarantee the existence of the solution d^k and to keep the approximation local enough. The role of the weighting parameter δ^k > 0 is to improve the convergence rate and to accumulate some second order information about the curvature of J around x^k. One of the most important questions concerning the proximal bundle method is the choice of the weight δ^k. The simplest strategy would be to use a constant weight δ^k ≡ δ_fix, but this can lead to several difficulties [16]. Therefore, the weight is kept as a variable and updated when necessary. For updating δ^k, the safeguarded quadratic interpolation algorithm due to [13] is used.

Notice that problem (A.6) is still a nonsmooth optimization problem. However, due to its piecewise linear nature it can be rewritten (cf. [16] for details) as a (smooth) quadratic programming subproblem of finding the solution (d, z) = (d^k, z^k) ∈ R^{n+1} of

min_{d,z} z + (δ^k/2) d^T d   s.t.   −β^k_j + (g^j)^T d ≤ z, for all j ∈ G^k.   (A.7)
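To make the construction concrete, the bundle quantities (A.3)–(A.4) and the subproblem (A.7) can be sketched in a few lines of Python. This is our own illustration with assumed array conventions (rows of G are the stored subgradients g^j); the paper itself uses the specialized dual active-set solver QPDF4 [17], for which a generic solver is substituted here:

```python
import numpy as np
from scipy.optimize import minimize

def linearization_errors(x_k, J_k, Y, J_Y, G):
    # Eq. (A.3): alpha_j = J(x^k) - J(y^j) - (g^j)^T (x^k - y^j).
    return np.array([J_k - J_Y[j] - G[j] @ (x_k - Y[j]) for j in range(len(J_Y))])

def locality_measures(alpha, s, gamma):
    # Eq. (A.4): beta_j = max{ |alpha_j|, gamma * s_j^2 }.
    return np.maximum(np.abs(alpha), gamma * s**2)

def bundle_direction(G, beta, delta):
    # Eq. (A.7): min_{d,z} z + (delta/2) d^T d
    #            s.t.  -beta_j + (g^j)^T d <= z  for all j in the bundle.
    m, n = G.shape

    def objective(w):                       # w = (d_1, ..., d_n, z)
        d, z = w[:n], w[n]
        return z + 0.5 * delta * (d @ d)

    constraints = [{"type": "ineq",         # z + beta_j - (g^j)^T d >= 0
                    "fun": lambda w, j=j: w[n] + beta[j] - G[j] @ w[:n]}
                   for j in range(m)]
    res = minimize(objective, np.zeros(n + 1), constraints=constraints,
                   method="SLSQP")
    return res.x[:n], res.x[n]              # d^k and the predicted descent z^k
```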

Line search: Let us consider the problem of determining the step size in the direction d^k calculated above. Assume that m_L ∈ (0, 1/2), m_R ∈ (m_L, 1), and t̄ ∈ (0, 1] are fixed line search parameters. First, we search for the largest number t^k_L ∈ [0, 1] such that t^k_L ≥ t̄ and

J(x^k + t^k_L d^k) ≤ J(x^k) + m_L t^k_L z^k,   (A.8)

where z^k is the predicted amount of descent. If such a parameter exists, we take a long serious step x^{k+1} := x^k + t^k_L d^k and y^{k+1} := x^{k+1}. The long serious step yields a significant decrease in the value of the objective function. Thus, there is no need for detecting discontinuities in the gradient of J, so we set g^{k+1} ∈ ∂J(x^{k+1}). Otherwise, if (A.8) holds but 0 < t^k_L < t̄, then a short serious step x^{k+1} := x^k + t^k_L d^k and y^{k+1} := x^k + t^k_R d^k is taken. Finally, if t^k_L = 0, we take a null step x^{k+1} := x^k and y^{k+1} := x^k + t^k_R d^k, where t^k_R > t^k_L is such that

−β^{k+1}_{k+1} + (g^{k+1})^T d^k ≥ −m_R z^k.   (A.9)

In the short serious step and the null step, there exists a discontinuity in the gradient of J. Then, the requirement (A.9) ensures that x^k and y^{k+1} lie on opposite sides of this discontinuity, and the new subgradient g^{k+1} ∈ ∂J(y^{k+1}) will force a remarkable modification of the next search direction problem. In PB, the line search algorithm presented in [16] is used.

In practice, the iteration is terminated if z^k ≥ −tol_PB, where tol_PB > 0 is the final accuracy tolerance supplied by the user. However, there are also some other, more complicated, stopping criteria involved [19].

In the numerical experiments, we chose the following values for the line search parameters: m_L = 0.…, m_R = 0.5, and t̄ = 0.…; for the distance measure parameter, we chose γ = 0.…. As a stopping criterion in 1D, we used tol_PB = 10^{−5}. In 2D, different stopping criteria were used for different problem sizes to obtain the desired accuracy, namely, tol_PB = 10^{−5}, 10^{−6}, and 10^{−7} for the three problem sizes in increasing order.
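Purely as an illustration of the serious/null step logic above, here is a schematic backtracking sketch in Python (a simplified caricature, not the actual line search of [16]; the parameter values are placeholders):

```python
def pb_line_search(J, x, d, z, m_L=0.01, t_bar=0.01, max_halvings=20):
    # Try the sufficient-decrease test (A.8) for decreasing step sizes t
    # (z < 0 is the predicted descent from the direction-finding subproblem).
    t = 1.0
    for _ in range(max_halvings):
        if J(x + t * d) <= J(x) + m_L * t * z:
            kind = "long serious" if t >= t_bar else "short serious"
            return kind, x + t * d           # x^{k+1}; y^{k+1} chosen as in the text
        t *= 0.5
    # No acceptable step: null step. x^{k+1} = x^k, and the trial point
    # y^{k+1} = x^k + t_R * d must additionally satisfy (A.9) before its
    # subgradient enters the bundle.
    return "null", x
```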

Acknowledgements

The authors want to thank Dr. Marko Mäkelä at the University of Jyväskylä for the possibility to use the PB code in the numerical experiments.

References

[1] S. Osher, J. A. Sethian, Fronts propagating with curvature dependent speed: Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988) 12–49.

[2] S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Vol. 153 of Applied Mathematical Sciences, Springer-Verlag, New York, 2003.

[3] S. Osher, R. P. Fedkiw, Level set methods: an overview and some recent results, J. Comput. Phys. 169 (2) (2001) 463–502.

[4] J. A. Sethian, Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision, and Materials Science, 2nd Edition, Vol. 3 of Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, Cambridge, 1999.

[5] T. F. Chan, L. A. Vese, Image segmentation using level sets and the piecewise constant Mumford-Shah model, Tech. rep., CAM Report 00-14, UCLA, Math. Depart. (April 2000, revised December).

[6] T. F. Chan, X.-C. Tai, Level set and total variation regularization for elliptic inverse problems with discontinuous coefficients, J. Comput. Phys. (2003), to appear.

[7] S. Osher, F. Santosa, Level set methods for optimization problems involving geometry and constraints. I. Frequencies of a two-density inhomogeneous drum, J. Comput. Phys. 171 (1) (2001) 272–288.

[8] F. Santosa, A level-set approach for inverse problems involving obstacles, ESAIM Contrôle Optim. Calc. Var. 1 (1995/96) 17–33 (electronic).

[9] J. A. Sethian, J. Strain, Crystal growth and dendritic solidification, J. Comput. Phys. 98 (2) (1992) 231–253.

[10] R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer-Verlag, 1984.

[11] F. H. Clarke, Optimization and Nonsmooth Analysis, John Wiley & Sons, New York, 1983.

[12] J. E. Kelley, The cutting plane method for solving convex programs, Journal of the SIAM 8 (1960) 703–712.

[13] K. C. Kiwiel, Proximity control in bundle methods for convex nondifferentiable optimization, Math. Program. 46 (1990) 105–122.

[14] H. Schramm, J. Zowe, A version of the bundle idea for minimizing a nonsmooth function: conceptual idea, convergence analysis, numerical results, SIAM J. Optim. 2 (1) (1992) 121–152.

[15] K. C. Kiwiel, Methods of Descent for Nondifferentiable Optimization, Lecture Notes in Mathematics 1133, Springer-Verlag, Berlin-Heidelberg, 1985.

[16] M. M. Mäkelä, P. Neittaanmäki, Nonsmooth Optimization: Analysis and Algorithms with Applications to Optimal Control, World Scientific, Singapore, 1992.

[17] K. C. Kiwiel, A method for solving certain quadratic programming problems arising in nonsmooth optimization, IMA J. Numer. Anal. 6 (1986) 137–152.

[18] X.-C. Tai, Rate of convergence for some constraint decomposition methods for nonlinear variational inequalities, Numer. Math. 93 (4) (2003) 755–786.

[19] M. M. Mäkelä, T. Männikkö, Numerical solution of nonsmooth optimal control problems with an application to the continuous casting process, Adv. Math. Sci. Appl. 4 (2) (1994) 491–515.

[3] K. C. Kiwiel, Proximity control in bundle methods for convex nondierentiable optimization, Math. Program. 46 (99) 5. [4] H. Schramm, J. Zowe, A version of the bundle idea for minimizing a nonsmooth function: conceptual idea, convergence analysis, numerical results, SIAM J. Optim. () (99) 5. [5] K. C. Kiwiel, Methods of Descent for Nondierentiable Optimization, Lecture Notes in Mathematics 33, Springer-Verlag, Berlin-Heidelberg, 985. [6] M. M. Mäkelä, P. Neittaanmäki, Nonsmooth Optimization. Analysis and Algorithms with Applications to Optimal Control, World Scientic, Singapore, 99. [7] K. C. Kiwiel, A method for solving certain quadratic programming problems arising in nonsmooth optimization, IMA J. Numer. Anal. 6 (986) 375. [8] X.-C. Tai, Rate of convergence for some constraint decomposition methods for nonlinear variational inequalities, Numer. Math. 93 (4) (3) 755786. [9] M. M. Mäkelä, T. Männikkö, Numerical solution of nonsmooth optimal control problems with an application to the continuous casting process, Adv. Math. Sci. Appl. 4 () (994) 4955.