Chapter 1

Global minimization

The issues related to the behaviour of global minimization problems along a sequence of functionals $F_\varepsilon$ are by now well understood, and mainly rely on the concept of $\Gamma$-limit. In this chapter we review this notion, which will be the starting point of our analysis. We will mainly be interested in the properties of $\Gamma$-limits with respect to the convergence of minimization problems; further properties of $\Gamma$-limits will be recalled when necessary.

1.1 Upper and lower bounds

Here and afterwards $F_\varepsilon$ will be functionals defined on a separable metric (or metrizable) space $X$, if not further specified.

Definition 1.1.1 (lower bound) We say that $F$ is a lower bound for the family $(F_\varepsilon)$ if for all $u\in X$ we have

$$F(u)\le \liminf_\varepsilon F_\varepsilon(u_\varepsilon) \quad\text{for all } u_\varepsilon\to u, \tag{LB}$$

or, equivalently, $F(u)\le F_\varepsilon(u_\varepsilon)+o(1)$ for all $u_\varepsilon\to u$. The inequality (LB) is usually referred to as the liminf inequality.

If $F$ is a lower bound we obtain a lower bound also for minimum problems on compact sets.

Proposition 1.1.2 Let $F$ be a lower bound for $F_\varepsilon$ and $K$ be a compact subset of $X$. Then

$$\inf_K F\le \liminf_\varepsilon\, \inf_K F_\varepsilon. \tag{1.1}$$

Proof. Let $u_k\in K$ be such that $u_k\to u$ and

$$\lim_k F_{\varepsilon_k}(u_k)=\liminf_\varepsilon\, \inf_K F_\varepsilon.$$
We set

$$\tilde u_\varepsilon=\begin{cases} u_k & \text{if } \varepsilon=\varepsilon_k\\ u & \text{otherwise.}\end{cases}$$

Then by (LB) we have

$$\inf_K F\le F(u)\le \liminf_\varepsilon F_\varepsilon(\tilde u_\varepsilon)\le \lim_k F_{\varepsilon_k}(u_k)=\liminf_\varepsilon\, \inf_K F_\varepsilon, \tag{1.2}$$

as desired.

Remark 1.1.3 Note that the hypothesis that $K$ be compact cannot altogether be removed. A trivial example on the real line is

$$F_\varepsilon(x)=\begin{cases} -1 & \text{if } x=1/\varepsilon\\ 0 & \text{otherwise.}\end{cases}$$

Then $F=0$ is a lower bound according to Definition 1.1.1, but (1.1) fails if we take $\mathbb{R}$ in place of $K$.

Remark 1.1.4 The hypothesis that $K$ be compact can be substituted by the hypothesis that $K$ be closed and the sequence $(F_\varepsilon)$ be equi-coercive; i.e., that

$$\text{if } \sup_\varepsilon F_\varepsilon(u_\varepsilon)<+\infty \text{ then } (u_\varepsilon) \text{ is precompact,} \tag{1.3}$$

the proof being the same.

Definition 1.1.5 (upper bound) We say that $F$ is an upper bound for the family $(F_\varepsilon)$ if for all $u\in X$

$$\text{there exists } u_\varepsilon\to u \text{ such that } F(u)\ge \limsup_\varepsilon F_\varepsilon(u_\varepsilon), \tag{UB}$$

or, equivalently, $F(u)\ge F_\varepsilon(u_\varepsilon)+o(1)$. The inequality (UB) is usually referred to as the limsup inequality.

If $F$ is an upper bound for $F_\varepsilon$ we obtain an upper bound also for the corresponding minimum problems on open sets.

Proposition 1.1.6 Let $F$ be an upper bound for $F_\varepsilon$ and $A$ be an open subset of $X$. Then

$$\inf_A F\ge \limsup_\varepsilon\, \inf_A F_\varepsilon. \tag{1.4}$$

Proof. The proof is immediately derived from the definition after remarking that if $u\in A$ then we may suppose also that $u_\varepsilon\in A$, so that

$$F(u)\ge \limsup_\varepsilon F_\varepsilon(u_\varepsilon)\ge \limsup_\varepsilon\, \inf_A F_\varepsilon,$$

and (1.4) follows by the arbitrariness of $u\in A$.
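The failure of (1.1) on non-compact sets can be made concrete on the example of Remark 1.1.3 (reading the spike value as $-1$ at the point $1/\varepsilon$). A minimal numerical sketch, where the infima are computed exactly by locating the spike:

```python
# Numerical illustration of Remark 1.1.3:
# F_eps(x) = -1 at x = 1/eps and 0 elsewhere has F = 0 as a lower bound;
# inequality (1.1) holds on the compact set K = [0, 1] but fails on R.

def inf_on_interval(eps, a, b):
    # The infimum of F_eps on [a, b] is -1 exactly when the spike 1/eps lies there.
    return -1.0 if a <= 1.0 / eps <= b else 0.0

eps_values = [1.0 / n for n in range(2, 50)]

# On K = [0, 1]: the spike 1/eps = n escapes K, so inf_K F_eps = 0 = inf_K F.
infs_K = [inf_on_interval(eps, 0.0, 1.0) for eps in eps_values]
assert all(v == 0.0 for v in infs_K)          # (1.1) holds: 0 <= liminf = 0

# On R (approximated by a huge interval): inf F_eps = -1 for every eps, so
# liminf_eps inf F_eps = -1 < 0 = inf F, and (1.1) fails.
infs_R = [inf_on_interval(eps, -1e9, 1e9) for eps in eps_values]
assert all(v == -1.0 for v in infs_R)
print("inf on K:", infs_K[0], " inf on R:", infs_R[0])
```

The point escaping to infinity is exactly the loss of compactness that Remark 1.1.4 rules out via equi-coerciveness.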
Remark 1.1.7 Again, note that the hypothesis that $A$ be open cannot be removed. A trivial example on the real line is

$$F_\varepsilon(x)=\begin{cases} 1 & \text{if } x=0\\ 0 & \text{otherwise}\end{cases}$$

(independent of $\varepsilon$). Then $F=0$ is an upper bound according to Definition 1.1.5 (and also a lower bound!), but (1.4) fails taking $A=\{0\}$.

Note that in the remark above $0$ is an upper bound for $F_\varepsilon$ at $0$ even though $F_\varepsilon(0)=1$ for all $\varepsilon$, which trivially shows that an upper bound at a point can actually be (much) lower than any element of the family $F_\varepsilon$ at that point.

1.2 $\Gamma$-convergence

In this section we introduce the concept of $\Gamma$-limit.

Definition 1.2.1 ($\Gamma$-limit) We say that $F$ is the $\Gamma$-limit of the sequence $(F_\varepsilon)$ if it is both a lower and an upper bound according to Definitions 1.1.1 and 1.1.5. If (LB) and (UB) hold at a point $u$ then we say that $F$ is the $\Gamma$-limit at $u$, and we write

$$F(u)=\Gamma\text{-}\lim_\varepsilon F_\varepsilon(u).$$

Note that this notation does not imply that $u$ is in any of the domains of $F_\varepsilon$, even if $F(u)$ is finite.

Remark 1.2.2 (alternate upper-bound inequalities) If $F$ is a lower bound then requiring that (UB) holds is equivalent to any of the following:

$$\text{there exists } u_\varepsilon\to u \text{ such that } F(u)=\lim_\varepsilon F_\varepsilon(u_\varepsilon); \tag{RS}$$

$$\text{for all } \eta>0 \text{ there exists } u_\varepsilon\to u \text{ such that } F(u)+\eta\ge \limsup_\varepsilon F_\varepsilon(u_\varepsilon). \tag{AUB}$$

The latter is called the approximate limsup inequality, and is more handy in computations. A sequence satisfying (RS) is called a recovery sequence. The construction of a recovery sequence is linked to an ansatz on its form. The description of this ansatz gives an insight of the relevant features of the energies (oscillations, concentration, etc.) and is usually given on a subclass of $u$ for which it is easier to prove its validity, while for general $u$ one proceeds by a density argument.

Example 1.2.3 We analyze some simple examples on the real line.

1. From Remark 1.1.7 we see that the constant sequence

$$F_\varepsilon(x)=\begin{cases} 1 & \text{if } x=0\\ 0 & \text{otherwise}\end{cases}$$
$\Gamma$-converges to the constant $0$; in particular, this is a constant sequence not converging to itself.

2. The sequence

$$F_\varepsilon(x)=\begin{cases} 1 & \text{if } x=\varepsilon\\ 0 & \text{otherwise}\end{cases}$$

again $\Gamma$-converges to the constant $0$. This is clearly a lower and an upper bound at all $x\ne 0$. At $x=0$ any sequence $x_\varepsilon\ne\varepsilon$ is a recovery sequence.

3. The sequence

$$F_\varepsilon(x)=\begin{cases} -1 & \text{if } x=\varepsilon\\ 0 & \text{otherwise}\end{cases}$$

$\Gamma$-converges to

$$F(x)=\begin{cases} -1 & \text{if } x=0\\ 0 & \text{otherwise.}\end{cases}$$

Again, $F$ is clearly a lower and an upper bound at all $x\ne 0$. At $x=0$ the sequence $x_\varepsilon=\varepsilon$ is a recovery sequence.

4. Take the sum of the energies in Examples 2 and 3 above. This is identically $0$, and so is its limit, while the sum of the $\Gamma$-limits is the function $F$ in Example 3. The same function $F$ is obtained as the $\Gamma$-limit by taking $G_\varepsilon(x)=F_\varepsilon(x)+F_\varepsilon(-x)$ ($F_\varepsilon$ as in Example 3).

5. Let $F_\varepsilon(x)=\sin(x/\varepsilon)$. Then the $\Gamma$-limit is the constant $-1$. This is clearly a lower bound. A recovery sequence for a fixed $x$ is $x_\varepsilon=2\pi\varepsilon\lfloor x/(2\pi\varepsilon)\rfloor-\varepsilon\pi/2$ ($\lfloor t\rfloor$ is the integer part of $t$).

The following fundamental property of $\Gamma$-convergence derives directly from its definition.

Proposition 1.2.4 (stability under continuous perturbations) Let $F_\varepsilon$ $\Gamma$-converge to $F$ and let $G_\varepsilon$ converge continuously to $G$ (i.e., $G_\varepsilon(u_\varepsilon)\to G(u)$ if $u_\varepsilon\to u$); then $F_\varepsilon+G_\varepsilon$ $\Gamma$-converges to $F+G$.

Note that this proposition applies to $G_\varepsilon=G$ if $G$ is continuous, but is in general false for $G_\varepsilon=G$ even if $G$ is lower semicontinuous.

Example 1.2.5 The functions $\sin(x/\varepsilon)+x^2+1$ $\Gamma$-converge to $x^2$. In this case we may apply the proposition above with $F_\varepsilon(x)=\sin(x/\varepsilon)$ (see Example 1.2.3(5)) and $G_\varepsilon(x)=x^2+1$. Note for future reference that $F_\varepsilon$ has countably many local minimizers, which tend to be dense in the real line, while $F$ has only one global minimizer.

It may be useful to define the lower and upper $\Gamma$-limits, so that the existence of a $\Gamma$-limit can be viewed as their equality.

Definition 1.2.6 (lower and upper $\Gamma$-limits) We define

$$\Gamma\text{-}\liminf_\varepsilon F_\varepsilon(u)=\inf\{\liminf_\varepsilon F_\varepsilon(u_\varepsilon):\ u_\varepsilon\to u\} \tag{1.5}$$
1.2. -CONVERGENCE 13 - lim sup F (u) =inf{lim sup F (u ):u! u} (1.6) Remark 1.2.7 1. We immediately obtain that the -limit exists at a point u if and only if - lim inf F (u) = - lim sup F (u). 2. Comparing with the trivial sequence u = u we obtain - lim inf F (u) apple lim inf F (u) (and analogously for the - lim sup). More in general, note that the -limit depends on the topology on X. If we change topology, converging sequences change and the value of the -limit changes. A weaker topology will have more converging sequences and the value will decrease, a stronger topology will have less converging sequences and the value will increase. The pointwise limit above corresponding to the -limit with respect to the discrete topology. 3. From the formulas above it is immediate to check that a constant sequence F = F -converges to itself if and only if F is lower semicontinuous; i.e., (LB) holds with F = F. Indeed (LB) equivalent to the validity of (1.5), while F is always an upper bound. More in general a constant sequence F = F converges to the lower-semicontinuous envelope F of F defined by F (u) = max{g : G apple F, G is lower semicontinuous}; in particular the -limit is a lower-semicontinuous function. 4. It may be convenient to notice that the upper and lower limits are lower-semicontinuous functions and, with the notation just introduced, that - lim inf - lim sup F (u) = - lim inf F (u) (1.7) F (u) = - lim sup F (u); (1.8) that is, -limits are unchanged upon substitution of F with its lower-semicontinuous envelope. These properties are an important observation for the actual computation of the -limit, since in many cases lower-semicontinuous envelopes satisfy structural properties that make them easier to handle. As an example we may consider (homogeneous) integral functionals of the form F (u) = f(u) dx, defined on L 1 () equipped with the weak topology. 
Under some growth conditions the $\Gamma$-limits can be computed with respect to the weak topology on bounded sets of $L^1(\Omega)$, which is metrizable. In this case, the lower-semicontinuous envelope of $F$ is

$$\overline F(u)=\int_\Omega f^{**}(u)\,dx,$$
where $f^{**}$ is the convex and lower-semicontinuous envelope of $f$; i.e.,

$$f^{**}=\max\{g:\ g\le f,\ g \text{ is lower semicontinuous and convex}\}.$$

In particular, convexity is a necessary condition for $\Gamma$-limits of the integral form above.

1.3 Convergence of minimum problems

As we have already remarked, the $\Gamma$-convergence of $F_\varepsilon$ will not imply convergence of minimizers if minimizers (or "almost minimizers") do not converge. It is necessary then to assume a compactness (or "mild coerciveness") property as follows:

$$\text{there exists a precompact sequence } (u_\varepsilon) \text{ with } F_\varepsilon(u_\varepsilon)=\inf F_\varepsilon+o(1), \tag{1.9}$$

which is implied by the following stronger condition:

$$\text{there exists a compact set } K \text{ such that } \inf F_\varepsilon=\inf_K F_\varepsilon \text{ for all } \varepsilon>0. \tag{1.10}$$

This condition is implied by the equi-coerciveness hypothesis (1.3); i.e., it holds if for all $c$ there exists a compact set $K_c$ such that the sublevel sets $\{F_\varepsilon\le c\}$ are all contained in $K_c$. To check that (1.10) is stronger than (1.9), consider $F_\varepsilon(x)=\varepsilon e^x$ on the real line: any converging sequence satisfies (1.9), but (1.10) does not hold.

By arguing as for Propositions 1.1.2 and 1.1.6 we will deduce the convergence of minima. This result is further made precise in the following theorem.

Theorem 1.3.1 (Fundamental Theorem of $\Gamma$-convergence) Let $(F_\varepsilon)$ satisfy the compactness property (1.9) and $\Gamma$-converge to $F$. Then

(i) $F$ admits a minimum, and $\min F=\lim_\varepsilon \inf F_\varepsilon$;

(ii) if $(u_{\varepsilon_k})$ is a minimizing sequence for some subsequence $(F_{\varepsilon_k})$ (i.e., it is such that $F_{\varepsilon_k}(u_{\varepsilon_k})=\inf F_{\varepsilon_k}+o(1)$) which converges to some $u$, then its limit point is a minimizer of $F$.

Proof. By condition (1.9) we can argue as in the proof of Proposition 1.1.2 with $K=X$, and also apply Proposition 1.1.6 with $A=X$, to deduce that

$$\inf F\ge \limsup_\varepsilon\, \inf F_\varepsilon\ge \liminf_\varepsilon\, \inf F_\varepsilon\ge \inf F. \tag{1.11}$$

We then have that there exists the limit

$$\lim_\varepsilon\, \inf F_\varepsilon=\inf F.$$
Since from (1.9) there exists a minimizing sequence $(u_\varepsilon)$ from which we can extract a converging subsequence, it suffices to prove (ii). We can then follow the proof of Proposition 1.1.2 to deduce as in (1.2) that

$$\inf F\le F(u)\le \lim_k F_{\varepsilon_k}(u_{\varepsilon_k})=\lim_\varepsilon\, \inf F_\varepsilon=\inf F;$$

i.e., $F(u)=\inf F$, as desired.

Corollary 1.3.2 In the hypotheses of Theorem 1.3.1 the minimizers of $F$ are exactly the limits of converging minimizing sequences.

Proof. If $u$ is the limit of a converging minimizing sequence then it is a minimizer of $F$ by (ii) in Theorem 1.3.1. Conversely, if $u$ is a minimizer of $F$, then every recovery sequence $(u_\varepsilon)$ for $u$ is a minimizing sequence.

Remark 1.3.3 Trivially, it is not true that all minimizers of $F$ are limits of minimizers of $F_\varepsilon$, since this is not true even for (locally) uniformly converging sequences on the line. Take for example:

1) $F_\varepsilon(x)=\varepsilon x^2$ or $F_\varepsilon(x)=\varepsilon e^x$, and $F(x)=0$. All points minimize the limit, but only $x=0$ minimizes $F_\varepsilon$ in the first case, and we have no minimizer in the second case;

2) $F(x)=(x^2-1)^2$ and $F_\varepsilon(x)=F(x)+\varepsilon(x-1)^2$. $F$ is minimized by $1$ and $-1$, but the only minimizer of $F_\varepsilon$ is $1$. Note however that $-1$ is the limit of strong local minimizers of $F_\varepsilon$.

1.4 An example: homogenization

The theory of homogenization of integral functionals is a very wide subject in itself. We will refer to monographs on the subject for details if needed. In this context, we only want to highlight some facts that will be needed in the sequel and give a hint of the behaviour in the case of elliptic energies.

We consider $a:\mathbb{R}^n\to[\alpha,\beta]$, with $0<\alpha<\beta<+\infty$, $1$-periodic in the coordinate directions, and the integrals

$$F_\varepsilon(u)=\int_\Omega a\Big(\frac{x}{\varepsilon}\Big)|\nabla u|^2\,dx$$

defined in $H^1(\Omega)$, where $\Omega$ is a bounded open subset of $\mathbb{R}^n$. The computation of the $\Gamma$-limit of $F_\varepsilon$ is referred to as their homogenization, implying that a simpler homogeneous functional can be used to capture the relevant features of $F_\varepsilon$.
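Before computing the $\Gamma$-limit, the averaging effect of the oscillating coefficient (made precise by the Riemann-Lebesgue lemma below) can be observed numerically: integrals of $a(x/\varepsilon)$ against a fixed test function approach the mean of $a$. A minimal sketch, where the coefficient $a(y)=2+\sin(2\pi y)$ and the test function $x^2$ are assumed examples:

```python
# Numerical hint of homogenization: integrals of the oscillating coefficient
# a(x/eps) against a fixed test function approach the mean of a as eps -> 0.
# The coefficient a(y) = 2 + sin(2*pi*y) (mean 2) is an assumed example.
import math

def a(y):
    return 2.0 + math.sin(2.0 * math.pi * y)

def integral(f, lo, hi, n=200000):
    # simple midpoint quadrature
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

phi = lambda x: x * x                      # fixed test function on (0, 1)
target = 2.0 * integral(phi, 0.0, 1.0)     # mean(a) * \int phi = 2/3

for eps in [0.1, 0.01, 0.001]:
    val = integral(lambda x: a(x / eps) * phi(x), 0.0, 1.0)
    print(eps, val)                        # approaches target = 2/3
```

This is exactly the weak convergence $a(\cdot/\varepsilon)\rightharpoonup \overline a$ of Proposition 1.4.1 tested against one function.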
The $\Gamma$-limit can be computed with respect to the $L^1$ topology; the result can also be improved, e.g., in 1D it coincides with the limit in the $L^\infty$ topology. This means that the liminf inequality holds for $u_\varepsilon$
converging in the $L^1$ topology, while there exists a recovery sequence with $u_\varepsilon$ tending to $u$ in the $L^\infty$ sense.

An upper bound is given by the pointwise limit of $F_\varepsilon$, whose computation in this case can be obtained by the following non-trivial but well-known result.

Proposition 1.4.1 (Riemann-Lebesgue lemma) The functions $a_\varepsilon(x)=a(x/\varepsilon)$ converge weakly$^*$ in $L^\infty$ to their average

$$\overline a=\int_{(0,1)^n} a(y)\,dy. \tag{1.12}$$

For fixed $u$, the pointwise limit of $F_\varepsilon(u)$ is then simply $\overline a\int_\Omega|\nabla u|^2\,dx$, which gives an upper bound for the $\Gamma$-limit.

In a one-dimensional setting, the $\Gamma$-limit is completely described by the harmonic mean of $a$, and is given by

$$F_{\rm hom}(u)=\underline a\int_\Omega |u'|^2\,dx, \qquad \underline a=\Big(\int_0^1 \frac{1}{a(y)}\,dy\Big)^{-1}.$$

We briefly sketch a proof which gives the ansatz for recovery sequences. We check the liminf inequality by fixing $u_\varepsilon\to u$. Suppose for the sake of simplicity that $N=1/\varepsilon\in\mathbb{N}$, and write

$$F_\varepsilon(u_\varepsilon)=\sum_{i=1}^N \int_{(i-1)\varepsilon}^{i\varepsilon} a\Big(\frac{x}{\varepsilon}\Big)|u_\varepsilon'|^2\,dx$$
$$\ge \sum_{i=1}^N \varepsilon\,\min\Big\{\int_0^1 a(y)|v'|^2\,dy:\ v(1)-v(0)=\frac{u_\varepsilon(i\varepsilon)-u_\varepsilon((i-1)\varepsilon)}{\varepsilon}\Big\}$$
$$=\sum_{i=1}^N \varepsilon\,\underline a\,\Big|\frac{u_\varepsilon(i\varepsilon)-u_\varepsilon((i-1)\varepsilon)}{\varepsilon}\Big|^2$$

(the inequality is obtained by minimizing over all functions $w$ with $w((i-1)\varepsilon)=u_\varepsilon((i-1)\varepsilon)$ and $w(i\varepsilon)=u_\varepsilon(i\varepsilon)$; the minimum problem in the second line is obtained by scaling such $w$ and using the periodicity of $a$; the third line is a direct computation of the previous minimum). If we define $\tilde u_\varepsilon$ as the piecewise-affine interpolation of $u_\varepsilon$ on $\varepsilon\mathbb{Z}\cap\Omega$, then the estimate above shows that

$$F_\varepsilon(u_\varepsilon)\ge F_{\rm hom}(\tilde u_\varepsilon).$$

The functional on the right-hand side is independent of $\varepsilon$ and has a convex integrand; hence, it is lower semicontinuous with respect to the weak $H^1$-convergence. Since $\tilde u_\varepsilon\to u$ we then deduce that

$$\liminf_\varepsilon F_\varepsilon(u_\varepsilon)\ge \liminf_\varepsilon F_{\rm hom}(\tilde u_\varepsilon)\ge F_{\rm hom}(u);$$
i.e., the liminf inequality.

The ansatz for the upper bound is obtained by optimizing the lower bound: recovery sequences oscillate around the target function in an optimal way. If the target function is linear (or affine), $u(x)=zx$, then a recovery sequence is obtained by taking the $1$-periodic function $v$ minimizing

$$\min\Big\{\int_0^1 a(y)|v'+1|^2\,dy:\ v(0)=v(1)=0\Big\}=\underline a,$$

and setting

$$u_\varepsilon(x)=z\Big(x+\varepsilon v\Big(\frac{x}{\varepsilon}\Big)\Big).$$

This construction can be repeated up to a small error if $u$ is piecewise affine, and then carries over to arbitrary $u$ by density.

As a particular case we can fix $\theta\in[0,1]$ and consider the $1$-periodic $a$ given on $[0,1)$ by

$$a(y)=\begin{cases} \alpha & \text{if } 0\le y<\theta\\ \beta & \text{if } \theta\le y<1.\end{cases} \tag{1.13}$$

In this case we have

$$\underline a=\Big(\frac{\theta}{\alpha}+\frac{1-\theta}{\beta}\Big)^{-1}=\frac{\alpha\beta}{\theta\beta+(1-\theta)\alpha}. \tag{1.14}$$

Note that the same result is obtained only assuming that $|\{y\in(0,1):\ a(y)=\alpha\}|=\theta$ and $|\{y\in(0,1):\ a(y)=\beta\}|=1-\theta$. Thus in one dimension the limit depends only on the volume fraction of $\alpha$.

In the higher-dimensional case the limit can still be described by an elliptic integral, of the form

$$F_{\rm hom}(u)=\int_\Omega \langle A\nabla u,\nabla u\rangle\,dx,$$

where $A$ is a constant symmetric matrix with $\underline a I\le A\le \overline a I$ ($I$ the identity matrix), with strict inequalities unless $a$ is constant. If we take in two dimensions $a(y_1,y_2)=a(y_1)$ (a laminate in the first direction), then $A$ is the diagonal matrix $\mathrm{diag}(\underline a,\overline a)$. Of course, if $a(y_1,y_2)=a(y_2)$ then the two values are interchanged. If $a$ takes only the values $\alpha$ and $\beta$, in particular this shows that in the higher-dimensional case the result depends on the geometry of $\{y\in(0,1)^n:\ a(y)=\beta\}$ (often referred to as the microgeometry of the problem) and not only on the volume fraction.

In order to make minimum problems meaningful, we may consider the affine space $X=\varphi+H^1_0(\Omega)$ (i.e., we consider only functions with $u=\varphi$ on $\partial\Omega$).
It can be proved that this boundary condition is compatible with the $\Gamma$-limit; i.e., the $\Gamma$-limit is the restriction to $X$ of the previous one or, equivalently, recovery sequences for the first $\Gamma$-limit can be taken satisfying the same boundary data as their limit. As a consequence of Theorem 1.3.1 we then conclude that oscillating minimum problems for $F_\varepsilon$ with fixed boundary data are approximated by a simpler minimum problem with the same boundary
data. Note however that all the energies, both $F_\varepsilon$ and $F_{\rm hom}$, are strictly convex, which implies that they have no local non-global minimizers.

Example 1.4.2 We can add some continuously converging perturbation to obtain further convergence results. For example, we can add perturbations of the form

$$G_\varepsilon(u)=\int_\Omega g\Big(\frac{x}{\varepsilon},u\Big)\,dx.$$

On $g$ we make the following hypothesis: $g$ is a Borel function, $1$-periodic in the first variable and uniformly Lipschitz in the second one; i.e.,

$$|g(y,z)-g(y,z')|\le L|z-z'|.$$

We then have a perturbed homogenization result as follows.

Proposition 1.4.3 The functionals $F_\varepsilon+G_\varepsilon$ $\Gamma$-converge in the $L^1$ topology to the functional $F_{\rm hom}+G$, where

$$G(u)=\int_\Omega \overline g(u)\,dx, \qquad \overline g(z)=\int_{(0,1)^n} g(y,z)\,dy$$

is simply the average of $g(\cdot,z)$.

Proof. By Proposition 1.2.4 it suffices to show that $G_\varepsilon$ converges continuously with respect to the $L^1$-convergence. If $u_\varepsilon\to u$ in $L^1$ then

$$|G_\varepsilon(u_\varepsilon)-G(u)|\le \int_\Omega \Big|g\Big(\frac{x}{\varepsilon},u_\varepsilon\Big)-g\Big(\frac{x}{\varepsilon},u\Big)\Big|\,dx+|G_\varepsilon(u)-G(u)|$$
$$\le L\int_\Omega |u_\varepsilon-u|\,dx+|G_\varepsilon(u)-G(u)|.$$

It suffices then to show that $G_\varepsilon$ converges pointwise to $G$. If $u$ is piecewise constant then this follows immediately from the Riemann-Lebesgue lemma. Noting that also $|\overline g(z)-\overline g(z')|\le L|z-z'|$, we easily obtain the convergence for $u\in L^1(\Omega)$ by the density of piecewise-constant functions.

Note that with a slightly more technical proof we can improve the Lipschitz continuity condition to a local Lipschitz continuity of the form

$$|g(y,z)-g(y,z')|\le L(1+|z|+|z'|)|z-z'|.$$

In particular, in 1D we can apply the result to $g(y,z)=a(y)|z|^2$, and we have that

$$\int a\Big(\frac{x}{\varepsilon}\Big)(|u'|^2+u^2)\,dx$$
$\Gamma$-converges to

$$\int (\underline a\,|u'|^2+\overline a\,u^2)\,dx.$$

As a consequence of Theorem 1.3.1, under the coerciveness condition

$$\liminf_{z\to\pm\infty} g(\cdot,z)=+\infty$$

we obtain a convergence result as follows.

Proposition 1.4.4 The solutions to the minimum problems

$$\min\{F_\varepsilon(u)+G_\varepsilon(u):\ u\in H^1(\Omega)\}$$

converge (up to subsequences) to a constant function $\overline u$, whose constant value minimizes $\overline g$.

Proof. The proof follows immediately from Theorem 1.3.1, once we observe that, by the coerciveness and continuity of $\overline g$, a minimizer $\overline z$ of $\overline g$ exists, and the constant function $\overline u=\overline z$ minimizes both $F_{\rm hom}$ and $G$.

If $g$ is differentiable then, by computing the Euler-Lagrange equations of $F_\varepsilon+G_\varepsilon$, we conclude that we obtain solutions of

$$-\sum_i \frac{\partial}{\partial x_i}\Big(a\Big(\frac{x}{\varepsilon}\Big)\frac{\partial u}{\partial x_i}\Big)+\frac{\partial g}{\partial u}\Big(\frac{x}{\varepsilon},u\Big)=0 \tag{1.15}$$

with Neumann boundary conditions, converging to the constant $\overline u$.

1.5 Higher-order $\Gamma$-limits and a choice criterion

If the hypotheses of Theorem 1.3.1 are satisfied, then we have noticed that every minimum point of the limit $F$ corresponds to a minimizing sequence for $F_\varepsilon$. However, not all points may be limits of minimizers of $F_\varepsilon$, and it may be interesting to discriminate between limits of minimizing sequences with different speeds of convergence. To this end, we may look at scaled $\Gamma$-limits. If we suppose that, say, $u$ is the limit of a sequence $(u_\varepsilon)$ with

$$F_\varepsilon(u_\varepsilon)=\min F+O(\varepsilon^\alpha) \tag{1.16}$$

for some $\alpha>0$ (but, of course, the rate of convergence may also not be polynomial), then we may look at the $\Gamma$-limit of the scaled functionals

$$F_\varepsilon^\alpha(u)=\frac{F_\varepsilon(u)-\min F}{\varepsilon^\alpha}. \tag{1.17}$$
Suppose that $F_\varepsilon^\alpha$ $\Gamma$-converges to some $F^\alpha$ not taking the value $-\infty$. Then:

(i) the domain of $F^\alpha$ is contained in the set of minimizers of $F$ (but it may as well be empty);

(ii) $F^\alpha(u)\ne+\infty$ if and only if there exists a recovery sequence for $u$ satisfying (1.16).

Moreover, we can apply Theorem 1.3.1 to $F_\varepsilon^\alpha$ and obtain the following result, which gives a choice criterion among the minimizers of $F$.

Theorem 1.5.1 Let the hypotheses of Theorem 1.3.1 be satisfied, and let the functionals in (1.17) $\Gamma$-converge to some $F^\alpha$ not taking the value $-\infty$ and not identically $+\infty$. Then

(i) $\inf F_\varepsilon=\min F+\varepsilon^\alpha \min F^\alpha+o(\varepsilon^\alpha)$;

(ii) if $F_\varepsilon(u_\varepsilon)=\min F_\varepsilon+o(\varepsilon^\alpha)$ and $u_\varepsilon\to u$, then $u$ minimizes both $F$ and $F^\alpha$.

Proof. We can apply Theorem 1.3.1 to a (subsequence of a) converging minimizing sequence for $F_\varepsilon^\alpha$; i.e., a sequence satisfying hypothesis (ii). Its limit point $u$ satisfies

$$F^\alpha(u)=\min F^\alpha=\lim_\varepsilon \min F_\varepsilon^\alpha=\lim_\varepsilon \frac{\min F_\varepsilon-\min F}{\varepsilon^\alpha},$$

which proves (i). Since, as already remarked, $u$ is also a minimizer of $F$, we also have (ii).

Example 1.5.2 Simple examples on the real line:

(1) if $F_\varepsilon(x)=\varepsilon x^2$ then $F(x)=0$. We have $F^\alpha(x)=0$ if $0<\alpha<1$, $F^1(x)=x^2$ (if $\alpha=1$), and

$$F^\alpha(x)=\begin{cases} 0 & x=0\\ +\infty & x\ne 0\end{cases}$$

if $\alpha>1$;

(2) if $F_\varepsilon(x)=(x^2-1)^2+\varepsilon(x-1)^2$ then $F(x)=(x^2-1)^2$. We have

$$F^\alpha(x)=\begin{cases} 0 & x=\pm 1\\ +\infty & \text{otherwise}\end{cases} \quad\text{if } 0<\alpha<1,$$

$$F^1(x)=\begin{cases} 0 & x=1\\ 4 & x=-1\\ +\infty & x\ne\pm 1\end{cases} \quad\text{if } \alpha=1,$$

$$F^\alpha(x)=\begin{cases} 0 & x=1\\ +\infty & x\ne 1\end{cases} \quad\text{if } \alpha>1.$$
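The value $F^1(-1)=4$ in Example 1.5.2(2) can be observed numerically: the local minimum of the scaled functional $F_\varepsilon^1(x)=F_\varepsilon(x)/\varepsilon$ near $x=-1$ tends to $4$ as $\varepsilon\to 0$. A minimal sketch (the search interval and grid are assumptions of the experiment):

```python
# Numerical check of Example 1.5.2(2): for F_eps(x) = (x^2-1)^2 + eps*(x-1)^2
# (so that min F = 0), the scaled functional F_eps^1(x) = F_eps(x)/eps has a
# local minimum near x = -1 whose value tends to F^1(-1) = 4 as eps -> 0.

def F1(eps, x):
    return ((x * x - 1.0) ** 2 + eps * (x - 1.0) ** 2) / eps

def local_min(eps, lo=-1.5, hi=-0.5, n=200001):
    # brute-force minimization over a fine grid around x = -1
    h = (hi - lo) / (n - 1)
    return min(F1(eps, lo + k * h) for k in range(n))

for eps in [1e-2, 1e-3, 1e-4]:
    print(eps, local_min(eps))   # tends to 4 (a Taylor expansion gives 4 - eps + ...)
```

At $x=-1$ exactly one has $F_\varepsilon^1(-1)=4$ for every $\varepsilon$, while the minimum over the whole line is $0$ (attained at $x=1$), which is the selection mechanism of Theorem 1.5.1.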
Remark 1.5.3 It must be observed that the functionals $F_\varepsilon^\alpha$ in Theorem 1.5.1 are often equi-coercive with respect to a stronger topology than the original $F_\varepsilon$, so that we can improve the convergence in (ii).

Example 1.5.4 (gradient theory of phase transitions) Let

$$F_\varepsilon(u)=\int_\Omega (W(u)+\varepsilon^2|\nabla u|^2)\,dx \tag{1.18}$$

be defined in $L^1(\Omega)$ with domain in $H^1(\Omega)$. Here $W(u)=(u^2-1)^2$ (or a more general double-well potential; i.e., a non-negative function vanishing exactly at $\pm 1$). Then $(F_\varepsilon)$ is equi-coercive with respect to the weak $L^1$-convergence. Since this convergence is metrizable on bounded sets, we can consider $L^1(\Omega)$ equipped with this convergence. The $\Gamma$-limit is then simply

$$F^0(u)=\int_\Omega W^{**}(u)\,dx,$$

where $W^{**}$ is the convex envelope of $W$; i.e., $W^{**}(u)=((u^2-1)\vee 0)^2$. All functions with $\|u\|_\infty\le 1$ are minimizers of $F^0$.

We take $\alpha=1$ and consider

$$F_\varepsilon^1(u)=\int_\Omega \Big(\frac{W(u)}{\varepsilon}+\varepsilon|\nabla u|^2\Big)\,dx. \tag{1.19}$$

Then $(F_\varepsilon^1)$ is equi-coercive with respect to the strong $L^1$-convergence, and its $\Gamma$-limit is

$$F^1(u)=c_W\,\mathcal{H}^{n-1}(\partial\{u=1\}\cap\Omega) \quad\text{for } u\in BV(\Omega;\{\pm 1\}), \tag{1.20}$$

and $+\infty$ otherwise, where $c_W=8/3$ (in general $c_W=2\int_{-1}^1\sqrt{W(s)}\,ds$). This result states that recovery sequences $(u_\varepsilon)$ tend to sit in the bottom of the wells (i.e., $u_\varepsilon\approx\pm 1$) in order to make $W(u_\varepsilon)/\varepsilon$ finite; however, every phase transition costs a positive amount, which is optimized by balancing the effects of the two terms in the integral. Indeed, by optimizing the interface between the phases $\{u=1\}$ and $\{u=-1\}$ one obtains the optimal surface tension $c_W$.

In one dimension the ansatz on the recovery sequences around a jump point $x_0$ is that they are of the form

$$u_\varepsilon(x)=v\Big(\frac{x-x_0}{\varepsilon}\Big),$$

where $v$ minimizes

$$\min\Big\{\int_{-\infty}^{+\infty}(W(v)+|v'|^2)\,dt:\ v(\pm\infty)=\pm 1\Big\}=2\int_{-1}^1\sqrt{W(s)}\,ds.$$

In more than one dimension the ansatz becomes

$$u_\varepsilon(x)=v\Big(\frac{d(x,\{u=1\})}{\varepsilon}\Big),$$
where $d(\cdot,A)$ is the signed distance from the set $A$. This means that around the interface $\partial\{u=1\}$ the recovery sequences pass from $-1$ to $+1$ following the one-dimensional profile of $v$, essentially on an $O(\varepsilon)$-neighbourhood of $\partial\{u=1\}$.

Note that:

(i) we have an improved convergence of recovery sequences from weak to strong $L^1$-convergence;

(ii) the domain of $F^1$ is almost disjoint from that of the $F_\varepsilon^1$, the only two functions in common being the constants $\pm 1$;

(iii) in order to make the $\Gamma$-limit properly defined we have to use the space of functions of bounded variation or, equivalently, the family of sets of finite perimeter if we take as parameter the set $A=\{u=1\}$. In this context the set $\partial\{u=1\}$ is properly defined in a measure-theoretical way, as is its $(n-1)$-dimensional Hausdorff measure.

Example 1.5.5 (linearized fracture mechanics from interatomic potentials) We now give an example in which the scaling of the variable, and not only of the energy, is part of the problem. We consider a system of one-dimensional nearest-neighbour atomistic interactions through a Lennard-Jones type interaction. Note that by the one-dimensional nature of the problem we can parameterize the positions of the atoms as an increasing function of the parameter.

Let $\phi$ be a $C^2$ potential as in Figure 1.1, with domain $(0,+\infty)$ (we set $\phi(z)=+\infty$ for $z\le 0$), with minimum in $1$ with $\phi''(1)>0$, convex in $(0,z_0)$, concave in $(z_0,+\infty)$ and tending to $\phi(\infty)<+\infty$ at $+\infty$. A possible choice is the Lennard-Jones potential

$$\phi(z)=\frac{1}{z^{12}}-\frac{2}{z^6}.$$

Figure 1.1: a Lennard-Jones potential
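The stated properties of the Lennard-Jones choice can be verified directly: the minimum is at $z=1$ with value $\phi(1)=-1$, $\phi''(1)=156-84=72>0$, and $\phi(z)\to 0$ at $+\infty$, so the well has depth $1$. A quick numerical sanity check (grid and step sizes are assumptions):

```python
# Sanity checks for the Lennard-Jones potential phi(z) = z^-12 - 2 z^-6:
# minimum at z = 1 with phi(1) = -1, phi''(1) = 72 > 0, and phi -> 0 at infinity.

def phi(z):
    return z ** -12 - 2.0 * z ** -6

# phi(1) = 1 - 2 = -1 is the minimum value
assert phi(1.0) == -1.0

# a central second difference approximates phi''(1) = 156 - 84 = 72 > 0
h = 1e-4
second = (phi(1.0 + h) - 2.0 * phi(1.0) + phi(1.0 - h)) / h ** 2
assert abs(second - 72.0) < 1e-2

# the minimum over a fine grid of (0.8, 3) is attained at z ~ 1
zs = [0.8 + 2.2 * k / 100000 for k in range(100001)]
zmin = min(zs, key=phi)
assert abs(zmin - 1.0) < 1e-3

# phi tends to 0 (from below) at +infinity, so the well depth is 1
assert abs(phi(1e6)) < 1e-12
print("minimum at", zmin, "value", phi(zmin), "phi''(1) ~", second)
```

The finite well depth is what allows bonds to break at finite energetic cost, which is the mechanism behind the fracture term appearing in the $\Gamma$-limit below.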
We consider the energy

$$\Phi_N(v)=\sum_{i=1}^N \phi(v_i-v_{i-1})$$

with $N\in\mathbb{N}$, defined on $(v_i)$ with $v_i>v_{i-1}$. We introduce the small parameter $\varepsilon=1/N$ and identify the vector $(v_0,\dots,v_N)$ with a discrete function defined on $\varepsilon\mathbb{Z}\cap[0,1]$. A non-trivial $\Gamma$-limit will be obtained by scaling and rewriting the energy in terms of the scaled variable $u=\sqrt{\varepsilon}\,(v-\mathrm{id}/\varepsilon)$; i.e., $u_i=\sqrt{\varepsilon}\,(v_i-i)$. This scaling can be justified noting that (up to additive constants) $v_i=i$ (i.e., $v=\mathrm{id}/\varepsilon$) is the absolute minimum of the energy. The scaled energies that we consider are

$$F_\varepsilon(u)=\Phi_N\Big(\frac{\mathrm{id}}{\varepsilon}+\frac{u}{\sqrt{\varepsilon}}\Big)-\min\Phi_N=\sum_{i=1}^N J\Big(\frac{u_i-u_{i-1}}{\sqrt{\varepsilon}}\Big),$$

where

$$J(w)=\phi(1+w)-\min\phi=\phi(1+w)-\phi(1).$$

For convenience we extend $J$ to all of $\mathbb{R}$ by setting $J(w)=+\infty$ if $w\le -1$. Again, the vector $(u_0,\dots,u_N)$ is identified with a discrete function defined on $\varepsilon\mathbb{Z}\cap[0,1]$, or with its piecewise-affine interpolation. With this last identification, the $F_\varepsilon$ can be viewed as functionals on $L^1(0,1)$, and their $\Gamma$-limit is computed with respect to that topology. We denote $w_0=z_0-1$, so that $J$ is convex in $(-1,w_0)$ and concave in $(w_0,+\infty)$.

It must be noted that for all $w>0$ we have

$$\#\Big\{i:\ \frac{u_i-u_{i-1}}{\sqrt{\varepsilon}}>w\Big\}\le \frac{1}{J(w)}\,F_\varepsilon(u),$$

so that this number of indices is equi-bounded along sequences with equi-bounded energy. We may therefore suppose that the corresponding points $\varepsilon i$ converge to a finite set $S\subset[0,1]$. For fixed $w$, we have $J(z)\ge c\,z^2$ on $(-1,w]$ for some $c>0$; this gives, if $A$ is compactly contained in $(0,1)\setminus S$, that

$$F_\varepsilon(u)\ge c\sum_i \Big|\frac{u_i-u_{i-1}}{\sqrt{\varepsilon}}\Big|^2=c\sum_i \varepsilon\Big|\frac{u_i-u_{i-1}}{\varepsilon}\Big|^2\ge c\int_A |u'|^2\,dt$$

(the sums extended to the $i$ such that $(u_i-u_{i-1})/\sqrt{\varepsilon}\le w$). By the arbitrariness of $A$ in this estimate we then have that if $u_\varepsilon\to u$ and $F_\varepsilon(u_\varepsilon)\le C<+\infty$ then $u$ is piecewise-$H^1$; i.e., there exists a finite set $S\subset(0,1)$ such that $u\in H^1((0,1)\setminus S)$; we denote by $S(u)$ the minimal set such that $u\in H^1((0,1)\setminus S(u))$. The reasoning above also shows that

$$c\int_0^1 |u'|^2\,dt+J(w)\,\#(S(u))$$
is a lower bound for the $\Gamma$-limit of $F_\varepsilon$. The $\Gamma$-limit on piecewise-$H^1(0,1)$ functions can then be computed by optimizing the choice of $w$ and $c$, obtaining

$$F(u)=\frac{1}{2}J''(0)\int_0^1 |u'|^2\,dt+J(\infty)\,\#(S(u))$$

with the constraint that $u^+>u^-$ on $S(u)$.

Note that the parameterization of $v_i$ on $\varepsilon\mathbb{Z}$ would suggest to interpret $v_i-v_{i-1}$ as a difference quotient, and hence the change of variables $u_i=\varepsilon v_i$. This would give an energy of the form

$$\widetilde F_\varepsilon(u)=\sum_{i=1}^N J\Big(\frac{u_i-u_{i-1}}{\varepsilon}-1\Big);$$

it can be shown that $\widetilde F_\varepsilon$ $\Gamma$-converges to the energy with domain the set of piecewise-affine increasing $u$ with $u'=1$ a.e., and for such $u$

$$\widetilde F(u)=J(\infty)\,\#(S(u)).$$

This different choice of the parameterization hence only captures the fracture part of the energy.

1.6 References

For an introduction to $\Gamma$-convergence we refer to the elementary book

A. Braides. $\Gamma$-convergence for Beginners. Oxford University Press, 2002.

More examples, and an overview of the methods for the computation of $\Gamma$-limits, can be found in

A. Braides. A Handbook of $\Gamma$-convergence. In Handbook of Partial Differential Equations. Stationary Partial Differential Equations, Volume 3. Elsevier, 2006.