Juan Carlos De Los Reyes 1 and Sergio González 1


ESAIM: M2AN 43 (2009) 81-117. DOI: 10.1051/m2an/2008039. ESAIM: Mathematical Modelling and Numerical Analysis

PATH FOLLOWING METHODS FOR STEADY LAMINAR BINGHAM FLOW IN CYLINDRICAL PIPES

Juan Carlos De Los Reyes and Sergio González

Abstract. This paper is devoted to the numerical solution of stationary laminar Bingham fluids by path-following methods. By using duality theory, a system that characterizes the solution of the original problem is derived. Since this system is ill-posed, a family of regularized problems is obtained and the convergence of the regularized solutions to the original one is proved. For the update of the regularization parameter, a path-following method is investigated. Based on the differentiability properties of the path, a model of the value functional and a corresponding algorithm are constructed. For the solution of the system obtained in each path-following iteration, a semismooth Newton method is proposed. Numerical experiments are performed in order to investigate the behavior and efficiency of the method, and a comparison with a penalty-Newton-Uzawa-conjugate gradient method, proposed in [Dean et al., J. Non-Newtonian Fluid Mech. 142 (2007) 36-62], is carried out.

Mathematics Subject Classification. 47J20, 76A10, 65K10, 90C33, 90C46, 90C53.

Received June 5, 2007. Revised June 2nd, 2008. Published online October 16, 2008.

1. Introduction

Bingham models are used to analyze flows of materials for which the imposed stress must exceed a critical yield stress to initiate motion, i.e., they behave as rigid bodies when the stress is low but flow as viscous fluids at high stress. Examples of Bingham fluids include toothpaste, water suspensions of clay, and sewage sludge. For the mathematical analysis of Bingham fluid flow we refer to [7,9,22]. In [22] the authors consider a variational formulation of the model and study its qualitative properties. Existence and uniqueness of the solution and the structure of the flow are investigated.
In [7] the authors further analyze the resulting inequality of the second kind and prove, among other results, the Lipschitz stability of the solution with respect to the plasticity threshold. Further, in [4] and [9] the authors investigate the regularity of the solution for the cross-section and cavity models, respectively. Bingham fluid flow in cylindrical pipes has been numerically treated by different methodologies. In [3], Chapter V, the authors propose a global ε-type regularization of the model and prove the convergence of the regularized solutions towards the original one.

Keywords and phrases. Bingham fluids, variational inequalities of second kind, path-following methods, semismooth Newton methods.
Research partially supported by DAAD, EPN Quito and TU Berlin joint project: Ph.D. Programme in Applied Mathematics.
Research Group on Optimization, Departamento de Matemática, EPN Quito, Ecuador.
Article published by EDP Sciences. © EDP Sciences, SMAI 2008

Direct regularization of the primal problem by twice differentiable
functions has also been considered in [23] in combination with Newton methods. Although this type of regularization allows the direct use of second order methods, important discrepancies of the regularized problem with respect to properties of the original model arise (cf. [6], p. 39). An alternative to the direct regularization of the primal problem consists in the so-called multiplier approach. In [3], the authors analyze the existence of multipliers by using duality theory and propose an Uzawa-type algorithm for its numerical solution. Also by using duality theory, augmented Lagrangian methods are proposed in [8,9] and the unconditional convergence of the method is proven (see [9], Thm. 4.2). In the recent paper [6], the authors review existing numerical approaches and propose a penalty-Newton-Uzawa conjugate gradient method for the solution of the problem. This approach is compared numerically with our method in Section 5. In this paper, we consider a Tikhonov regularization of the dual problem, which by duality theory implies a local regularization of the original one. The proposed local regularization allows the application of semismooth Newton methods and leads directly to a decoupled system of equations to be solved in each semismooth Newton iteration. This constitutes an important difference with respect to other primal-dual second order approaches (see e.g. [6]), where an additional method has to be used in order to obtain a decoupled system, at the consequent computational cost. For the update of the regularization parameter a path-following method is proposed and analyzed. The differentiability of the path and of the path value functional are studied. A model function that preserves the main properties of the value functional is proposed and a corresponding algorithm developed. After discretization in space, each regularized problem is solved by using a semismooth Newton method.
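As a toy scalar illustration of the semismooth Newton idea invoked above (our own sketch, not the algorithm of the paper, which operates on the discretized PDE system), consider a piecewise-smooth residual involving a max-function: a Newton step that uses an element of the generalized derivative at the nonsmooth point still converges, and terminates in finitely many steps here.

```python
def semismooth_newton(F, G, x0, tol=1e-12, maxit=50):
    """Generic scalar semismooth Newton iteration.
    F: residual function; G: returns an element of the generalized
    (Newton) derivative of F at x (illustrative names)."""
    x = x0
    for _ in range(maxit):
        r = F(x)
        if abs(r) < tol:
            break
        x = x - r / G(x)
    return x

# Example: F(x) = x + max(0, x) - 1 is nonsmooth at x = 0; its root is 0.5.
F = lambda x: x + max(0.0, x) - 1.0
G = lambda x: 1.0 + (1.0 if x > 0 else 0.0)   # generalized derivative of max(0, x)
print(semismooth_newton(F, G, x0=-3.0))  # -> 0.5
```

Starting from x0 = -3 the iterate crosses the kink once and then lands exactly on the root, which mirrors the finite-step behavior semismooth Newton methods often exhibit on max-type systems.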
This type of method has been successfully applied to infinite dimensional complementarity problems like the Signorini or contact problem (see [4,2,24,25]), image restoration (see [6]), optimal control problems (see [5,7]) and, in general, to infinite dimensional optimization problems (see [6,7,9,26]). Path-following strategies together with semismooth Newton methods have been investigated in [4,5,25] for variational inequalities of the first kind and constrained optimal control problems. These cases involve unilateral pointwise constraints on the state variable, which are regularized by a Moreau-Yosida technique. Differently from [4,5], our problem involves a variational inequality of the second kind. As a result, and in contrast to unilateral pointwise constrained problems, pointwise constraints on the Euclidean norm of the velocity gradient have to be considered. This fact adds new difficulties to the path analysis. In particular, extra regularity estimates for the regularized solutions have to be obtained in order to get differentiability of the path. Let us mention that, although the method developed in this article is concerned with Bingham fluid flow, the results can be extended to other variational inequalities of the second kind as well. The paper is organized as follows. In Section 2 the original problem is stated and, using Fenchel's duality theory, a necessary condition is derived. Since the characterizing system for the original problem is ill-posed, a family of regularized problems is introduced and the convergence of the regularized solutions to the original one is proved. In Section 3, the path value functional is introduced and the differentiability of the path and the value functional is investigated. A model function which preserves the qualitative properties of the path value functional is constructed and an iterative algorithm is proposed.
In Section 4 a semismooth Newton method to solve the complementarity system for each regularized problem is stated. In Section 5, numerical experiments which show the main features of the proposed algorithm are presented.

2. Problem statement and regularization

Let Ω be a bounded domain in ℝ², with Lipschitz boundary Γ, and let f ∈ L²(Ω). We are concerned with the following variational inequality of the second kind: find y ∈ H₀¹(Ω) such that

a(y, v − y) + g j(v) − g j(y) ≥ (f, v − y)_{L²}, for all v ∈ H₀¹(Ω), (2.1)

where a(y, v) := μ ∫_Ω ⟨∇y(x), ∇v(x)⟩ dx, j(v) := ∫_Ω |∇v(x)| dx and (·,·)_{L²} stands for the scalar product in L²(Ω). The scalar product in ℝᴺ and the Euclidean norm are denoted by ⟨·,·⟩ and |·|, respectively. (·,·)_X stands
for the scalar product in a Hilbert space X, and ‖·‖_X for its associated norm. The duality pairing between a Banach space Y and its dual Y* is represented by ⟨·,·⟩_{Y*,Y}. Besides that, we will use the bold notation L²(Ω) := L²(Ω) × L²(Ω) for the vector-valued space.

Inequality (2.1) models the stationary flow of a Bingham fluid in a pipe of cross section Ω (see [7,3,22]). The variable y(x) stands for the velocity at x, f(x) for the linear decay of pressures, μ for the viscosity and g for the plasticity threshold of the fluid (yield stress). Problem (2.1) corresponds to the necessary condition of the following unconstrained minimization problem,

min_{y ∈ H₀¹(Ω)} J(y) := ½ a(y, y) + g j(y) − (f, y)_{L²}. (P)

Remark 2.1. It can be shown (cf. [2], Thm. 6.1) that there exists a unique solution y ∈ H₀¹(Ω) to problem (P). Moreover, if Ω has a sufficiently regular boundary, it follows that y ∈ H²(Ω) ∩ H₀¹(Ω), see [4], Theorem 1.

2.1. The Fenchel dual

In this section, we obtain the dual problem of (P) by using Fenchel's duality in infinite-dimensional spaces (see [8]). Let us start by defining the functionals F: H₀¹(Ω) → ℝ by F(y) := ½ a(y, y) − (f, y)_{L²} and G: L²(Ω) → ℝ by G(q) := g ∫_Ω |q(x)| dx. It can be easily verified that these two functionals are convex, continuous and proper. We also define the operator Λ ∈ 𝓛(H₀¹(Ω), L²(Ω)) by Λv := ∇v. Thanks to these definitions, we may rewrite problem (P) as

inf_{y ∈ H₀¹(Ω)} {F(y) + G(Λy)}. (2.2)

Following [8], pp. 60-61, the associated dual problem of (2.2) is given by

sup_{q ∈ L²(Ω)} {−F*(−Λ*q) − G*(q)}, (2.3)

where Λ* ∈ 𝓛(L²(Ω), H⁻¹(Ω)) is the adjoint operator of Λ, and F*: H⁻¹(Ω) → ℝ and G*: L²(Ω) → ℝ denote the convex conjugate functionals of F and G, respectively. We recall that given a Hilbert space H and a convex function φ: H → ℝ ∪ {−∞, +∞}, the convex conjugate functional φ*: H → ℝ ∪ {−∞, +∞} is defined by

φ*(v*) = sup_{v ∈ V} {⟨v*, v⟩ − φ(v)}.

Thus, we have that
F*(−Λ*q) = sup_{v ∈ H₀¹(Ω)} { −⟨Λ*q, v⟩_{H⁻¹(Ω), H₀¹(Ω)} − ½ a(v, v) + (f, v)_{L²} }, (2.4)

G*(q) = sup_{p ∈ L²(Ω)} { (q, p)_{L²(Ω)} − g ∫_Ω |p(x)| dx }. (2.5)

Note that in (2.3) we have already identified L²(Ω) with its dual. Now, let us calculate F*(−Λ*q). Let q ∈ L²(Ω) be given. From (2.4), we obtain that

F*(−Λ*q) = sup_{v ∈ H₀¹(Ω)} { −(q, Λv)_{L²(Ω)} − ½ a(v, v) + (f, v)_{L²} },
which implies, since { −(q, Λv)_{L²(Ω)} − ½ a(v, v) + (f, v)_{L²} } is a concave quadratic functional on H₀¹(Ω), that the supremum is attained at v(q) ∈ H₀¹(Ω) satisfying

a(v(q), z) + (q, Λz)_{L²(Ω)} − (f, z)_{L²} = 0, for all z ∈ H₀¹(Ω). (2.6)

Using (2.6) with z = v(q), we obtain that

F*(−Λ*q) = −(q, Λv(q))_{L²(Ω)} − ½ a(v(q), v(q)) + (f, v(q))_{L²} = ½ a(v(q), v(q)). (2.7)

Lemma 2.1. The expression

(q, p)_{L²(Ω)} − g ∫_Ω |p(x)| dx ≤ 0, for all p ∈ L²(Ω), (2.8)

is equivalent to

|q(x)| ≤ g a.e. in Ω. (2.9)

Proof. Let us start by showing that (2.8) implies (2.9). Assume that (2.9) does not hold, i.e., assume that S := {x ∈ Ω : g − |q(x)| < 0 a.e.} has positive measure. Choosing p ∈ L²(Ω) such that

p(x) := q(x) in S, p(x) := 0 in Ω \ S,

leads to

g ∫_Ω |p(x)| dx − ∫_Ω ⟨q(x), p(x)⟩ dx = ∫_S (g − |q(x)|) |q(x)| dx < 0,

which is a contradiction to (2.8). Conversely, due to the fact that |q(x)| ≤ g a.e. in Ω and thanks to the Cauchy-Schwarz inequality, we obtain, for an arbitrary p ∈ L²(Ω), that

g ∫_Ω |p(x)| dx − ∫_Ω ⟨q(x), p(x)⟩ dx ≥ ∫_Ω (g − |q(x)|) |p(x)| dx ≥ 0.

Lemma 2.1 immediately implies that

G*(q) = 0 if |q(x)| ≤ g a.e. in Ω, and G*(q) = +∞ otherwise. (2.10)

Thus, using (2.7) and (2.10) in (2.3) we obtain the dual problem

sup_{|q(x)| ≤ g} J*(q) := −½ a(v(q), v(q)), where v(q) satisfies
a(v(q), z) − (f, z)_{L²} + (q, Λz)_{L²(Ω)} = 0, for all z ∈ H₀¹(Ω). (P*)

Due to the fact that both F and G are convex and continuous, [8], Theorem 4.1, p. 59, and [8], Remark 4.2, p. 60, imply that no duality gap occurs, i.e.,

inf_{y ∈ H₀¹(Ω)} J(y) = sup { J*(q) : |q(x)| ≤ g a.e., a(v, z) + (q, ∇z)_{L²(Ω)} = (f, z)_{L²} for all z }, (2.11)

and that the dual problem (P*) has at least one solution q̄ ∈ L²(Ω).
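The primal energy in (P) can be evaluated directly once the velocity field is discretized. The following rough sketch (our own illustration, with hypothetical names; forward differences on a uniform grid over the unit square, not the discretization used later in the paper) evaluates J(y) = ½ a(y,y) + g j(y) − (f,y)_{L²}:

```python
import numpy as np

def bingham_energy(y, f, mu=1.0, g=1.0, h=1.0/64):
    """Approximate J(y) = (mu/2) * int |grad y|^2 + g * int |grad y|
    - int f*y on a uniform grid (y includes its zero boundary values).
    Forward differences approximate the gradient; quadrature is crude."""
    gx = (y[1:, :-1] - y[:-1, :-1]) / h   # d y / d x1
    gy = (y[:-1, 1:] - y[:-1, :-1]) / h   # d y / d x2
    grad_sq = gx**2 + gy**2
    a_term = 0.5 * mu * np.sum(grad_sq) * h**2       # (mu/2) a(y, y)
    j_term = g * np.sum(np.sqrt(grad_sq)) * h**2     # g j(y)
    f_term = np.sum(f * y) * h**2                    # (f, y)_{L^2}
    return a_term + j_term - f_term

# Usage: the zero velocity field has zero energy.
n = 65
y = np.zeros((n, n))
f = np.ones((n, n))
print(bingham_energy(y, f, h=1.0/(n-1)))  # -> 0.0
```

The yield-stress term g j(y) is the nonsmooth part that the duality and regularization machinery of this section is designed to handle.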
Next, we will characterize the solutions of the primal and dual problems. From Fenchel's duality theory (see [8], Eqs. (4.22)-(4.25), p. 61) the solutions y and q satisfy the following extremality conditions:

−Λ*q ∈ ∂F(y), (2.12)
q ∈ ∂G(Λy). (2.13)

Let us analyze (2.12). Since F is Gateaux differentiable in y, [8], Proposition 5.3, p. 23, implies that ∂F(y) = {F'(y)}. Thus, we have that (2.12) can be equivalently expressed as the following equation

a(y, v) − (f, v)_{L²} + (q, ∇v)_{L²(Ω)} = 0, for all v ∈ H₀¹(Ω). (2.14)

On the other hand, from (2.13) and the definition of the subdifferential it follows that

g ∫_Ω ( |∇y(x)| − |p(x)| ) dx ≤ (q, ∇y − p)_{L²(Ω)}, for all p ∈ L²(Ω).

Then, for p = 0, we obtain that g ∫_Ω |∇y(x)| dx ≤ (q, ∇y)_{L²(Ω)}, which implies, since |q(x)| ≤ g a.e. in Ω and by Lemma 2.1, that

g ∫_Ω |∇y(x)| dx = (q, ∇y)_{L²(Ω)}.

This last expression is equivalent to

either ∇y(x) = 0, or ∇y(x) ≠ 0 and q(x) = g ∇y(x)/|∇y(x)|. (2.15)

Lemma 2.2. Equations (2.9) and (2.15) can be equivalently expressed as the following equation

max(σg, |σq(x) + ∇y(x)|) q(x) = g (σq(x) + ∇y(x)), a.e. in Ω, for all σ > 0. (2.16)

Proof. We start by showing that (2.16) implies (2.9) and (2.15). From (2.16) it follows that

|q(x)| = g |σq(x) + ∇y(x)| / max(σg, |σq(x) + ∇y(x)|) ≤ g, a.e. in Ω,

which immediately implies (2.9). Let us split Ω into the two following disjoint sets:

{x ∈ Ω : σg ≥ |σq(x) + ∇y(x)|} and {x ∈ Ω : σg < |σq(x) + ∇y(x)|}. (2.17)

On the set {x ∈ Ω : σg ≥ |σq(x) + ∇y(x)|}, we have that g(σq(x) + ∇y(x)) − σg q(x) = 0, and thus ∇y(x) = 0. To see that ∇y(x) ≠ 0 on the set {x ∈ Ω : σg < |σq(x) + ∇y(x)|}, we assume the opposite and immediately obtain that g < |q(x)|, which contradicts the fact that |q(x)| ≤ g a.e. in Ω. Moreover, from (2.16), we have that

g (σq(x) + ∇y(x)) = |σq(x) + ∇y(x)| q(x), (2.18)

and it follows that

g ∇y(x) = ( |σq(x) + ∇y(x)| − σg ) q(x). (2.19)
Considering the norms in (2.18) and (2.19), we find that |σq(x) + ∇y(x)| − σg = |∇y(x)| and thus we are in the second case of (2.15). Reciprocally, assume that (2.9) holds and consider the two cases in (2.15). If ∇y(x) = 0, we obtain that

g (σq(x) + ∇y(x)) = σg q(x) = max(σg, |σq(x) + ∇y(x)|) q(x).

Similarly, if ∇y(x) ≠ 0 and q(x) = g ∇y(x)/|∇y(x)|, we have that

g (σq(x) + ∇y(x)) = g (∇y(x)/|∇y(x)|) (σg + |∇y(x)|),

which implies that

max(σg, |σq(x) + ∇y(x)|) q(x) = max(σg, σg + |∇y(x)|) g ∇y(x)/|∇y(x)| = g (∇y(x)/|∇y(x)|) (σg + |∇y(x)|).

Thus, the equivalence follows.

Summarizing, we may rewrite (2.12) and (2.13) as the following system

a(y, v) + (q, ∇v)_{L²(Ω)} = (f, v)_{L²}, for all v ∈ H₀¹(Ω),
max(σg, |σq(x) + ∇y(x)|) q − g (σq + ∇y) = 0, a.e. in Ω and for σ > 0. (S)

We define the active and inactive sets for (S) by A := {x ∈ Ω : |σq(x) + ∇y(x)| ≥ σg} and I := Ω \ A, respectively.

Remark 2.2. The solution to (S) is not unique (see [2], Rem. 6.3, and [3], Chap. 5).

2.2. Regularization

In order to avoid problems related to the non-uniqueness of the solution to system (S), we propose a Tikhonov-type regularization of (P*). With this regularization procedure, we do not only achieve uniqueness of the solution but also get a local regularization of the non-differentiable term in (P). This technique has also been used for TV-based inf-convolution-type image restoration [6]. For a parameter γ > 0 we consider the following regularized dual problem

sup_{|q(x)| ≤ g} J*_γ(q) := −½ a(v(q), v(q)) − (1/(2γ)) ‖q‖²_{L²(Ω)}, (P*_γ)

where v(q) satisfies

a(v(q), z) + (q, ∇z)_{L²(Ω)} − (f, z)_{L²} = 0, for all z ∈ H₀¹(Ω).

Therefore, the regularized problem is obtained from (P*) by subtracting the term (1/(2γ)) ‖q‖²_{L²} from the objective functional. Further, it is possible to show that this penalization corresponds to a regularization of the primal problem. Consider the continuously differentiable function ψ_γ : ℝ² → ℝ, defined by

ψ_γ(z) := g|z| − g²/(2γ) if γ|z| ≥ g, and ψ_γ(z) := (γ/2)|z|² if γ|z| < g. (2.20)
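The function ψ_γ in (2.20) is a Huber-type smoothing of the Euclidean norm: quadratic inside the ball γ|z| < g and affine with slope g outside, with value and gradient matching at the switch radius |z| = g/γ. Its gradient combines both branches into the single formula gγz/max(g, γ|z|), which is exactly the form of the regularized multiplier appearing later in (2.31). A small numerical check (our own illustration, default parameter values chosen arbitrarily):

```python
import numpy as np

def psi(z, g=1.0, gamma=10.0):
    """Local regularization (2.20) of |z|: quadratic near the origin
    (gamma*|z| < g), affine with slope g beyond."""
    nz = np.linalg.norm(z)
    return g * nz - g**2 / (2 * gamma) if gamma * nz >= g else 0.5 * gamma * nz**2

def grad_psi(z, g=1.0, gamma=10.0):
    """Gradient of psi: gamma*z on the quadratic branch, g*z/|z| on the
    affine branch; both combine into g*gamma*z / max(g, gamma*|z|)."""
    return g * gamma * z / max(g, gamma * np.linalg.norm(z))

g, gamma = 1.0, 10.0
# The two branches of psi agree at the switch radius |z| = g/gamma:
zb = np.array([g / gamma, 0.0])
print(abs(psi(zb) - 0.5 * gamma * (g / gamma)**2) < 1e-12)   # -> True
# |grad_psi| never exceeds g, matching the dual constraint |q| <= g:
print(np.linalg.norm(grad_psi(np.array([0.3, 0.4]))))        # approximately 1.0
```

As γ → ∞ the gap between ψ_γ(z) and g|z| on the affine branch is g²/(2γ), which is the pointwise counterpart of the O(1/γ) convergence established in Theorem 2.5 below.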
By using this function, which is a local regularization of the Euclidean norm, we obtain the following regularized version of (P):

min_{y ∈ H₀¹(Ω)} J_γ(y) := ½ a(y, y) + ∫_Ω ψ_γ(∇y) dx − (f, y)_{L²}. (P_γ)

Furthermore, we are able to state the following theorem.

Theorem 2.3. Problem (P*_γ) is the dual problem of (P_γ) and we have

J*_γ(q_γ) = J_γ(y_γ), (2.21)

where q_γ and y_γ denote the solutions to (P*_γ) and (P_γ), respectively.

Proof. In order to calculate the dual problem to (P_γ) we use the same argumentation as in Section 2.1 for the original problem (P). We only have to replace the functional G in (2.2) by

G_γ(q) := ∫_Ω ψ_γ(q) dx, (2.22)

with ψ_γ as in (2.20). Thus, for q ∈ L²(Ω), we have

G*_γ(q) = sup_{p ∈ L²(Ω)} { ∫_{{x : γ|p(x)| ≥ g a.e.}} [ ⟨q(x), p(x)⟩ − g|p(x)| + g²/(2γ) ] dx + ∫_{{x : γ|p(x)| < g a.e.}} [ ⟨q(x), p(x)⟩ − (γ/2)|p(x)|² ] dx }. (2.23)

From (2.23) it is possible to conclude, by proceeding as in the proof of Lemma 2.1, that G*_γ(q) = +∞ unless |q(x)| ≤ g. Suppose now that |q(x)| ≤ g. We define the functional Ψ : L²(Ω) → ℝ by

Ψ(p) := ∫_{{x : γ|p(x)| ≥ g a.e.}} [ ⟨q(x), p(x)⟩ − g|p(x)| + g²/(2γ) ] dx + ∫_{{x : γ|p(x)| < g a.e.}} [ ⟨q(x), p(x)⟩ − (γ/2)|p(x)|² ] dx.

By introducing, for any p ∈ L²(Ω), the function p̃ ∈ L²(Ω) defined by

p̃(x) := p(x) a.e. in {x : γ|p(x)| < g a.e.}, p̃(x) := (g/γ) p(x)/|p(x)| a.e. in {x : γ|p(x)| ≥ g a.e.},

it is easy to verify that Ψ(p) ≤ Ψ(p̃), which yields

sup_{p ∈ L²(Ω)} Ψ(p) = sup_{p ∈ L²(Ω), γ|p(x)| ≤ g a.e. in Ω} Ψ(p). (2.24)

Therefore, in order to calculate the supremum in (2.23), we only have to consider the quadratic term in (2.24). Since this expression is a concave quadratic functional, the maximizer is easily calculated as p = q/γ, which
implies that

G*_γ(q) = (1/(2γ)) ‖q‖²_{L²} if |q(x)| ≤ g, and G*_γ(q) = +∞ otherwise. (2.25)

Note that this regularization procedure turns the primal problem into the unconstrained minimization of a continuously differentiable functional, while the corresponding dual problem is still the constrained minimization of a quadratic functional.

Remark 2.3. Due to the regularization procedure, the objective functional of (P*_γ) results in an L²(Ω)-uniformly concave functional. Thus, (P*_γ) admits a unique solution q_γ ∈ L²(Ω) for each fixed γ > 0. Additionally, since a(·,·) is a coercive and bicontinuous form and due to the fact that J_γ is strictly convex and differentiable, [2], Theorem 1.6, implies that (P_γ) also has a unique solution.

Theorem 2.4. Let y_γ be the solution to (P_γ). Then y_γ ∈ H²(Ω) ∩ H₀¹(Ω) and there exists a constant K > 0, independent of γ, such that

‖y_γ‖_{H²} ≤ K (‖f‖_{L²} + C), (2.26)

for some C > 0.

Proof. Note that y_γ can be characterized as the solution of the following equation

f ∈ −μΔy_γ + ∂φ(y_γ) in Ω, y_γ = 0 on Γ, (2.27)

where ∂φ denotes the subdifferential of the convex and lower semicontinuous functional φ : L²(Ω) → ℝ ∪ {∞} defined by

φ(u) := ∫_Ω ψ_γ(∇u) dx if ψ_γ(∇u) ∈ L¹(Ω), and φ(u) := +∞ elsewhere.

Thus, [4], Lemma 1, implies the result.

Remark 2.4. Theorem 2.4 implies that ∇y_γ ∈ H¹(Ω) and, since n = 2, ∇y_γ ∈ L^q(Ω) for all q ∈ [1, ∞). Moreover, from (2.26) we conclude that ∇y_γ is uniformly bounded in L^q(Ω) for all q ∈ [1, ∞).

Next, we characterize the solutions to (P_γ) and (P*_γ) (y_γ and q_γ, respectively). From Fenchel's duality theory, these solutions satisfy the following system:

−Λ*q_γ ∈ ∂F(y_γ), (2.28)
q_γ ∈ ∂G_γ(∇y_γ). (2.29)

Note that both F and G_γ are differentiable in y_γ and ∇y_γ, respectively. Thus, ∂F(y_γ) and ∂G_γ(∇y_γ) consist only of the respective Gateaux derivatives. Since (2.28) is similar to equation (2.12), it is equivalent to

a(y_γ, v) + (q_γ, ∇v)_{L²(Ω)} − (f, v)_{L²} = 0, for all v ∈ H₀¹(Ω).
(2.30)

On the other hand, due to the differentiability of G_γ, equation (2.29) can be written as

(q_γ, p)_{L²(Ω)} = ∫_{Ω \ A_γ} γ ⟨∇y_γ, p⟩ dx + ∫_{A_γ} g ⟨∇y_γ/|∇y_γ|, p⟩ dx, for all p ∈ L²(Ω),

or equivalently as

q_γ(x) = γ ∇y_γ(x) a.e. in Ω \ A_γ, and q_γ(x) = g ∇y_γ(x)/|∇y_γ(x)| a.e. in A_γ, (2.31)
where A_γ := {x ∈ Ω : γ|∇y_γ(x)| ≥ g a.e.}. Consequently, the solutions (y_γ, q_γ) of the regularized problems (P_γ) and (P*_γ) satisfy the system

a(y_γ, v) + (q_γ, ∇v)_{L²(Ω)} − (f, v)_{L²} = 0, for all v ∈ H₀¹(Ω),
max(g, γ|∇y_γ(x)|) q_γ(x) − gγ ∇y_γ(x) = 0, a.e. in Ω, for all γ > 0. (S_γ)

Clearly |q_γ(x)| = g on A_γ and |q_γ(x)| < g on I_γ := Ω \ A_γ. We call the sets A_γ and I_γ the active and inactive sets for (S_γ), respectively. In the following theorem the convergence of the regularized solutions towards the original one is verified.

Theorem 2.5. The solutions y_γ of the regularized primal problems converge to the solution y of the original problem strongly in H₀¹(Ω) as γ → ∞. Moreover, the solutions q_γ of the regularized dual problems converge to a solution q of the original dual problem weakly in L²(Ω).

Proof. Let us start by recalling that (y, q) and (y_γ, q_γ) satisfy equations (2.14) and (2.30), respectively. Thus, by subtracting (2.30) from (2.14), we obtain that

μ ∫_Ω ⟨∇(y − y_γ), ∇v⟩ dx = − ∫_Ω ⟨q − q_γ, ∇v⟩ dx, for all v ∈ H₀¹(Ω). (2.32)

Further, choosing v := y − y_γ in (2.32), we have that

μ ∫_Ω |∇(y − y_γ)|² dx = − ∫_Ω ⟨q − q_γ, ∇(y − y_γ)⟩ dx. (2.33)

Next, we establish pointwise bounds for ⟨(q − q_γ)(x), ∇(y − y_γ)(x)⟩ on the following disjoint sets: A ∩ A_γ, A ∩ I_γ, A_γ ∩ I and I ∩ I_γ.

On A ∩ A_γ: here, we use the facts that |q(x)| = |q_γ(x)| = g, q(x) = g ∇y(x)/|∇y(x)| and q_γ(x) = g ∇y_γ(x)/|∇y_γ(x)|. Thus, we have the following pointwise estimate:

⟨(q − q_γ)(x), ∇(y − y_γ)(x)⟩ = g|∇y(x)| − ⟨q(x), ∇y_γ(x)⟩ − ⟨q_γ(x), ∇y(x)⟩ + g|∇y_γ(x)|
≥ g|∇y(x)| − g|∇y_γ(x)| − g|∇y(x)| + g|∇y_γ(x)| = 0. (2.34)

On A ∩ I_γ: here, we know that γ ∇y_γ(x) = q_γ(x), |q_γ(x)| < g, |q(x)| = g and q(x) = g ∇y(x)/|∇y(x)|. Hence, we get

⟨(q − q_γ)(x), ∇(y − y_γ)(x)⟩ = g|∇y(x)| − ⟨q_γ(x), ∇y(x)⟩ − (1/γ)⟨q(x), q_γ(x)⟩ + (1/γ)|q_γ(x)|²
≥ (1/γ)( |q_γ(x)|² − g|q_γ(x)| ) ≥ −(1/γ)( g² − |q_γ(x)|² ) > −g²/γ. (2.35)

On A_γ ∩ I: in this set it holds that ∇y(x) = 0 and q_γ(x) = g ∇y_γ(x)/|∇y_γ(x)|. Then, we have that

⟨(q − q_γ)(x), ∇(y − y_γ)(x)⟩ = g|∇y_γ(x)| − ⟨q(x), ∇y_γ(x)⟩ ≥ 0. (2.36)

On I ∩ I_γ: here, we have that ∇y(x) = 0, γ ∇y_γ(x) = q_γ(x), |q(x)| ≤ g and |q_γ(x)| < g. Thus,

⟨(q − q_γ)(x), ∇(y − y_γ)(x)⟩ = (1/γ)|q_γ(x)|² − (1/γ)⟨q(x), q_γ(x)⟩ ≥ −(1/γ)( g² − |q_γ(x)|² ) > −g²/γ. (2.37)
Since A ∩ A_γ, A ∩ I_γ, A_γ ∩ I and I ∩ I_γ provide a disjoint partitioning of Ω, (2.33) and the estimates (2.34), (2.35), (2.36) and (2.37) imply that

μ ∫_Ω |∇(y − y_γ)|² dx < ∫_Ω (g²/γ) dx. (2.38)

Thus, we conclude that y_γ → y strongly in H₀¹(Ω) as γ → ∞. On the other hand, since y_γ → y strongly in H₀¹(Ω), (2.32) implies that

q_γ ⇀ q weakly in grad(H₀¹(Ω)) ⊂ L²(Ω), (2.39)

where grad(H₀¹(Ω)) := {q ∈ L²(Ω) : ∃ v ∈ H₀¹(Ω) such that q = ∇v}.

3. Path-following method

In this section, we investigate the application of continuation strategies to properly control the increase of γ. Our main objective is to develop an automatic updating strategy for the regularization parameter which guarantees an efficient and fast approximation of the solution to problem (P). For that purpose, we investigate the properties of the path γ ↦ (y_γ, q_γ) ∈ H₀¹(Ω) × L²(Ω), with γ ∈ (0, ∞), and construct an appropriate model of the value functional, which will be used in an updating algorithm.

3.1. The primal-dual path

In this part we introduce the primal-dual path and discuss some of its properties. Specifically, Lipschitz continuity and differentiability of the path are obtained.

Definition 3.1. The family of solutions C = {(y_γ, q_γ) : γ ∈ [M, ∞)} to (S_γ), with M a positive constant, considered as a subset of H₀¹(Ω) × L²(Ω), is called the primal-dual path associated to (P_γ)-(P*_γ).

Lemma 3.1. The path C is bounded in H₀¹(Ω) × L²(Ω), i.e., there exists C > 0, independent of γ, such that ‖y_γ‖_{H₀¹} + ‖q_γ‖_{L²} ≤ C.

Proof. First, from the fact that |q_γ(x)| ≤ g for every γ > 0, we conclude that q_γ is uniformly bounded in L²(Ω). Furthermore, Theorem 2.4 implies that y_γ is uniformly bounded in H₀¹(Ω). Therefore, C is bounded in H₀¹(Ω) × L²(Ω).

Theorem 3.2. Let γ ∈ [M, ∞). The function γ ↦ y_γ is globally Lipschitz continuous in W^{1,p}(Ω), for 2 ≤ p < 2 + min(s − 2, ε), where s > 2 and ε depends on μ and Ω.

Proof. Let γ₁, γ₂ ∈ [M, ∞). We introduce the following notations: δ_y := y_{γ₂} − y_{γ₁}, θ_γ(x) := max(g, γ|∇y_γ(x)|) and δ_θ := θ_{γ₂} − θ_{γ₁}. It is easy to verify that the following expression holds:

|δ_θ(x)| ≤ |γ₂ ∇y_{γ₂}(x) − γ₁ ∇y_{γ₁}(x)|, a.e.
in Ω, which implies that

|δ_θ(x)| ≤ |γ₂ − γ₁| |∇y_{γ₂}(x)| + γ₁ |∇δ_y(x)|, a.e. in Ω, (3.1)

and, similarly,

|δ_θ(x)| ≤ |γ₂ − γ₁| |∇y_{γ₁}(x)| + γ₂ |∇δ_y(x)|, a.e. in Ω. (3.2)

Next, we separate the proof in two parts. First, we prove the Lipschitz continuity of γ ↦ y_γ in H₀¹(Ω), and then, by introducing an auxiliary problem, we obtain the Lipschitz continuity in W^{1,p}(Ω), for some p > 2.
In H₀¹(Ω): from (S_γ), we know that

a(δ_y, δ_y) = −g ( γ₂ ∇y_{γ₂}/θ_{γ₂} − γ₁ ∇y_{γ₁}/θ_{γ₁}, ∇δ_y )_{L²(Ω)}
= −g ( (γ₂ − γ₁) ∇y_{γ₂}/θ_{γ₂}, ∇δ_y )_{L²(Ω)} − gγ₁ ( ∇y_{γ₂}/θ_{γ₂} − ∇y_{γ₁}/θ_{γ₁}, ∇δ_y )_{L²(Ω)}, (3.3)

which, since γ₂ |∇y_{γ₂}(x)| ≤ θ_{γ₂}(x) and γ₂ ≥ M a.e. in Ω, implies the existence of a constant K > 0 such that

a(δ_y, δ_y) ≤ K |γ₂ − γ₁| ‖δ_y‖_{H₀¹} − gγ₁ ( ∇y_{γ₂}/θ_{γ₂} − ∇y_{γ₁}/θ_{γ₁}, ∇δ_y )_{L²(Ω)}. (3.4)

Next, let us analyze the second term on the right hand side of (3.4):

−gγ₁ ( ∇y_{γ₂}/θ_{γ₂} − ∇y_{γ₁}/θ_{γ₁}, ∇δ_y )_{L²(Ω)} = −gγ₁ ( (θ_{γ₁} ∇δ_y − δ_θ ∇y_{γ₁})/(θ_{γ₁} θ_{γ₂}), ∇δ_y )_{L²(Ω)}
= −gγ₁ ∫_Ω |∇δ_y(x)|²/θ_{γ₂} dx + gγ₁ ( δ_θ ∇y_{γ₁}/(θ_{γ₁} θ_{γ₂}), ∇δ_y )_{L²(Ω)}. (3.5)

Since γ₁ |∇y_{γ₁}(x)| ≤ θ_{γ₁}(x) a.e. in Ω, the Cauchy-Schwarz inequality implies that

gγ₁ ( δ_θ ∇y_{γ₁}/(θ_{γ₁} θ_{γ₂}), ∇δ_y )_{L²(Ω)} ≤ gγ₁ ∫_Ω |δ_θ(x)| |∇y_{γ₁}(x)| |∇δ_y(x)| / (θ_{γ₁} θ_{γ₂}) dx ≤ g ∫_Ω |δ_θ(x)| |∇δ_y(x)| / θ_{γ₂} dx.

Again, since γ₂ |∇y_{γ₂}(x)| ≤ θ_{γ₂}(x) a.e. in Ω, (3.1) implies that

gγ₁ ( δ_θ ∇y_{γ₁}/(θ_{γ₁} θ_{γ₂}), ∇δ_y )_{L²(Ω)} ≤ g |γ₂ − γ₁| ∫_Ω (|∇y_{γ₂}|/θ_{γ₂}) |∇δ_y| dx + gγ₁ ∫_Ω |∇δ_y|²/θ_{γ₂} dx
≤ (g/M) |γ₂ − γ₁| meas(Ω)^{1/2} ‖δ_y‖_{H₀¹} + gγ₁ ∫_Ω |∇δ_y|²/θ_{γ₂} dx. (3.6)

Finally, using (3.6) and (3.5) in (3.4), we have that

a(δ_y, δ_y) ≤ ( K + (g/M) meas(Ω)^{1/2} ) |γ₂ − γ₁| ‖δ_y‖_{H₀¹},

which, due to the coercivity of a(·,·), implies the existence of a constant L > 0 such that ‖y_{γ₂} − y_{γ₁}‖_{H₀¹} ≤ L |γ₂ − γ₁|.
12 92 J.C. DE LOS REYES AND S. GONZÁLEZ In W,p (): First, note that (3.2) implies the existence of ζ(x) [, ] such that δ θ (x) =ζ(x)[ y (x) + δy(x) ], a.e. in. (3.7) From (S ), we have, for all v H (), that ( ) ( ) ( ) δy δθ y y a(δ y,v)+g, v g, v = g(), v, (3.8) θ L 2 () θ θ L 2 () θ L 2 () which, together with (3.7), implies that ( ) ( δy ζ(x) δy a(δ y,v)+g, v g θ L 2 () θ θ ( y g() θ ) y, v L 2 () = ), v + g L 2 () ( ζ(x) y θ θ ) y, v, (3.9) L 2 () for all v H (). Defining f := g( ) y θ +g ζ(x) y θ θ y,equation(3.9) motivates the introduction of the following auxiliar problem: find w H () such that a(w, v)+(β(w), v) L 2 () = ( f, v )L, for all v 2 () H (), (3.) [ w where β(w) :=g θ ζ(x) δy, w θ θ δ y y ]. Clearly, δ y is also solution of (3.). Note that f(x) y (x) g θ (x) + g ζ(x) y (x) θ (x)θ (x) y (x), a.e. in, which, since y(x) θ M and y (x) θ, a.e. in, implies that f(x) 2 g a.e. in. M Therefore, f L s 2 g M meas()/s, for s. (3.) Next, let us define the matrix A(x) R 2 2 by A(x) := y (x) δ y (x) x x y (x) δ y (x) x 2 x y (x) x y (x) x 2 δ y (x) x 2 δ y (x) x 2, a.e. in. Then, we can rewrite β(w) as [ ] I ζ(x) β(w) =g θ θ θ δ y A(x) w, (3.2) where I stands for the 2 2identity matrix. Moreover, we can rewrite the auxiliar problem (3.) as α(x) w, v dx = f, v dx, (3.3)
13 PATH FOLLOWING METHODS FOR STEADY LAMINAR BINGHAM FLOW IN CYLINDRICAL PIPES 93 ζ(x) where α(x) :=(μ + g θ (x)) I + g θ A(x), a.e. in. Multiplying α(x) byξ (x)θ (x) δ y(x) R2 2 and taking the scalar product with ξ, we obtain that α(x)ξ,ξ = μ ξ 2 + g ζ(x) θ (x) ξ 2 g θ (x)θ (x) δ y (x) δ y(x),ξ y (x),ξ, a.e. in. Furthermore, since ζ(x) and y (x) θ (x) a.e. in, the CauchySchwarz inequality implies that g ζ(x) θ (x)θ (x) δ y (x) δ ξ 2 y(x),ξ y (x),ξ g, a.e. in, θ (x) which, due to the fact that g θ (x), a.e. in, implies that μ ξ 2 α(x)ξ,ξ (μ +2) ξ 2, a.e. in. (3.4) Thus, [3], Theorem 2., p. 64, implies the existence of a constant c p such that w W,p c p f L s, for s>2and2 p<2+min(s 2,ɛ), (3.5) where w is the unique solution of (3.) andɛ depends on μ, and. Therefore, since δ y is solution of (3.), estimates (3.) and(3.5) imply the existence of L > such that y y W,p L, for 2 p<2+min(s 2,ɛ). Remark 3.2. Since y is Lipschitz continuous in W,p (), for some p>2, thereexists a weakaccumulation point ẏ W,p () of (y y )as, which is a strong accumulation point in H (). For the subsequent analysis and the remaining sections of the paper, we will use the following assumption. Assumption 3.3. Let [M, ). Thereexistε,ε 2 > and r> such that for all ( r, + r). meas({x A I : y (x) y (x) <ε })=, meas({x A I : y (x) y (x) <ε 2 })=, Lemma 3.4. Let [M, ) be fixed, and let ( r, + r). It holds that lim meas(a I ) = lim meas(a I )=. Proof. Let us introduce the set A ε := {x A I : y y ε }. From assumption (3.3), we get that meas(a I ) meas(a ε ). (3.6) Due to Chebyshev s inequality we get that ε meas(a ε ) y (x) y (x) dx A I y (x) dx + y (x) y (x) dx,
which, by Lemma 3.1 and Theorem 3.2, implies that

ε₁ meas(A_ε) ≤ |γ − γ̄| (meas(Ω))^{1/2} ‖y_{γ̄}‖_{H₀¹} + γ (meas(Ω))^{1/2} ‖y_γ − y_{γ̄}‖_{H₀¹} ≤ K̃ |γ − γ̄|,

for some K̃ > 0. Therefore,

lim_{γ → γ̄} meas({x ∈ Ω : |γ̄ ∇y_{γ̄}(x) − γ ∇y_γ(x)| ≥ ε₁}) = 0,

and the result follows from (3.16). The other case is treated similarly.

As a consequence of Lemma 3.4 we also obtain that

lim_{γ → γ̄} meas(A_{γ̄} ∩ A_γ) = meas(A_{γ̄}) and lim_{γ → γ̄} meas(I_{γ̄} ∩ I_γ) = meas(I_{γ̄}), (3.17)

which, since A_{γ̄} = (A_{γ̄} ∩ A_γ) ∪ (A_{γ̄} \ A_γ) and I_{γ̄} = (I_{γ̄} ∩ I_γ) ∪ (I_{γ̄} \ I_γ), implies that

lim_{γ → γ̄} meas(A_{γ̄} \ A_γ) = lim_{γ → γ̄} meas(I_{γ̄} \ I_γ) = 0. (3.18)

Proposition 3.5. Let γ > M and ẏ⁺ be a weak accumulation point of (1/(γ̄ − γ))(y_{γ̄} − y_γ) in W^{1,p}(Ω), for some p > 2, as γ̄ ↓ γ. Then ẏ⁺ satisfies

a(ẏ⁺, v) + g ( [ ∇ẏ⁺/|∇y_γ| − ∇y_γ ⟨∇y_γ, ∇ẏ⁺⟩/|∇y_γ|³ ] χ_{A_γ}, ∇v )_{L²(Ω)} + ( (∇y_γ + γ ∇ẏ⁺) χ_{I_γ}, ∇v )_{L²(Ω)} = 0. (3.19)

Proof. See the Appendix.

Proceeding as in Proposition 3.5, we also obtain that

a(ẏ⁻, v) + g ( [ ∇ẏ⁻/|∇y_γ| − ∇y_γ ⟨∇y_γ, ∇ẏ⁻⟩/|∇y_γ|³ ] χ_{A_γ}, ∇v )_{L²(Ω)} + ( (∇y_γ + γ ∇ẏ⁻) χ_{I_γ}, ∇v )_{L²(Ω)} = 0, (3.20)

where ẏ⁻ stands for any weak accumulation point of (1/(γ̄ − γ))(y_{γ̄} − y_γ) in W^{1,p}(Ω), for some p > 2, as γ̄ ↑ γ. Therefore, we obtain the following result.

Theorem 3.6. The function γ ↦ y_γ ∈ H₀¹(Ω) is differentiable at all γ ∈ [M, +∞), and ẏ_γ satisfies

a(ẏ_γ, v) + g ( [ ∇ẏ_γ/|∇y_γ| − ∇y_γ ⟨∇y_γ, ∇ẏ_γ⟩/|∇y_γ|³ ] χ_{A_γ}, ∇v )_{L²(Ω)} + ( (∇y_γ + γ ∇ẏ_γ) χ_{I_γ}, ∇v )_{L²(Ω)} = 0. (3.21)

Proof. Let z denote the difference between two accumulation points of (1/(γ̄ − γ))(y_{γ̄} − y_γ) as γ̄ → γ. From (3.19) and (3.20) we obtain that

a(z, v) + g ( [ ∇z/|∇y_γ| − ∇y_γ ⟨∇y_γ, ∇z⟩/|∇y_γ|³ ] χ_{A_γ} + γ ∇z χ_{I_γ}, ∇v )_{L²(Ω)} = 0.
Choosing v = z in the last expression, we obtain that

μ ‖z‖²_{H₀¹} + γ ‖∇z‖²_{L²(I_γ)} + g ∫_{A_γ} ( |∇z|²/|∇y_γ| − ⟨∇y_γ, ∇z⟩²/|∇y_γ|³ ) dx = 0. (3.22)

Since, by the Cauchy-Schwarz inequality,

|∇z|²/|∇y_γ| − ⟨∇y_γ, ∇z⟩²/|∇y_γ|³ ≥ 0 a.e. in A_γ,

we get, from (3.22), that z = 0. Consequently, accumulation points are unique and by (3.19) and (3.20) they satisfy (3.21).

3.2. Path value functional

In this section we study the value functional associated to (P_γ). We prove that the functional is twice differentiable with nonpositive second derivative, which implies concavity of the functional.

Definition 3.3. The functional V(γ) := J_γ(y_γ), defined on [M, ∞), M > 0, is called the path value functional.

Let us start by analyzing the differentiability properties of V.

Proposition 3.7. Let γ ∈ [M, ∞). The value functional V is differentiable at γ, with

V'(γ) = (1/(2γ²)) ∫_{A_γ} g² dx + ½ ∫_{I_γ} |∇y_γ|² dx. (3.23)

Proof. Let r > 0 be sufficiently small and let γ̄ ∈ (γ − r, γ + r). From (2.30) and by choosing v = y_{γ̄} − y_γ, we find that

½ a(y_{γ̄} + y_γ, y_{γ̄} − y_γ) + ½ (q_{γ̄} + q_γ, ∇(y_{γ̄} − y_γ))_{L²(Ω)} − (f, y_{γ̄} − y_γ)_{L²} = 0. (3.24)

On the other hand, note that ½ a(y_{γ̄} + y_γ, y_{γ̄} − y_γ) = ½ a(y_{γ̄}, y_{γ̄}) − ½ a(y_γ, y_γ). Consequently, from (3.24), we obtain that

V(γ̄) − V(γ) = ½ a(y_{γ̄}, y_{γ̄}) − ½ a(y_γ, y_γ) − (f, y_{γ̄} − y_γ)_{L²} + ∫_Ω [ψ_{γ̄}(∇y_{γ̄}) − ψ_γ(∇y_γ)] dx
= ∫_Ω [ψ_{γ̄}(∇y_{γ̄}) − ψ_γ(∇y_γ)] dx − ½ (q_{γ̄} + q_γ, ∇(y_{γ̄} − y_γ))_{L²(Ω)},

where ψ_γ is defined by (2.20). Then, from (S_γ), we conclude that

V(γ̄) − V(γ) = ∫_Ω z dx, (3.25)

where z is defined by

z(x) := ψ_{γ̄}(∇y_{γ̄}(x)) − ψ_γ(∇y_γ(x)) − (g/2) ⟨ γ̄ ∇y_{γ̄}(x)/θ_{γ̄}(x) + γ ∇y_γ(x)/θ_γ(x), ∇(y_{γ̄} − y_γ)(x) ⟩,
16 96 J.C. DE LOS REYES AND S. GONZÁLEZ V () V () a.e. in. Next, we will analyze the limit lim. Using the disjoint partitioning of given by := A A, 2 := A I, 3 := A I,and 4 := I I,wegetthat V () V () = 4 j= j z j dx, where z j represents the value of z when restricted to each set j, j =,...,4. Now, we analyze each integral j z j dx separately. On : Here, we analyze the limit lim that θ (x) = y (x), θ (x) = y (x), ψ( y (x)) = g y (x) g2 2 Thus, we obtain the following pointwise a.e. estimate [ ] z (x) = g [ y (x) y (x) ]+ g2 2 g y (x) 2 y (x) + y (x) y (x), (y (x) y (x)) ( ) = g 2 [ y (x) y (x) ] y (x), y (x) y (x) y (x) z dx. We start by recalling that a.e. in,wehave and ψ( y (x)) = g y (x) g g2 2 [ ] and, therefore, z dx = g [ ][ y y y ], y dx + g2 dx. (3.26) 2 y y 2 Next, we estimate the two integrals in (3.26) separately. First, note that, since we are working in,wehave that y (x) g a.e. Therefore, we obtain the following pointwise estimate in : y (x), y (x) y (x) y (x) y (x) y (x) + y (x) y (x) y (x), y (x) y (x) y (x) y (x) y (x) y (x) + y (x) y (x), y (x) y (x) y (x) (3.27) 2 g y (x) y (x). Therefore, from CauchySchwarz inequality, Theorem 3.2 and (3.27), we have the following estimate g 2 [ y y ][ y ], y dx y y g y y 2 y, y y y dx y y 2 dx y y 2 H L 2. (3.28)
17 PATH FOLLOWING METHODS FOR STEADY LAMINAR BINGHAM FLOW IN CYLINDRICAL PIPES 97 Next, we analyze the second expression in the right hand side of (3.26). Since A =(A A ) (A \A ), we have that g 2 g2 g2 dx = dx dx. (3.29) 2 2 A 2 A \A Further, since, [M, ), we obtain g2 2 which, due to Lemma 3.4, implies that g 2 2 A \A dx A \A g2 2M 2 meas(a \A ), dx, as. (3.3) Thus, from (3.29), (3.3) and the Lebesgue s bounded convergence theorem, we conclude that g 2 2 dx g2 2 Therefore, from (3.26), (3.28) and(3.3), we conclude that dx, as. (3.3) A 2 lim z dx = 2 2 g 2 dx. (3.32) A On 4 : We study the limit of z 4 dx as. Let us recall that a.e. on 4, θ (x) =θ (x) =g, 4 ψ( y (x)) = 2 y (x) 2 and ψ( y (x)) = 2 y (x) 2. Thus, we obtain that z 4 (x) = [ 2 y (x) 2 2 y (x) 2 ] 2 y (x)+ y (x), y (x) y (x) = y (x), y (x), a.e., 2 which implies, since I =(I I ) (I \I ), that [ z 4 dx = ] y, y dx + y, y dx. (3.33) 4 2 I I \I Let us study the two integrals on the right hand side of (3.33) separately. Theorem 3.2 implies that y y strongly in H () as, and, therefore, y, y dx I y 2 dx. I (3.34) On the other hand, due to CauchySchwarz and Hölder inequalities, and since y (x) < g M and y (x) < g M a.e. in 4, we obtain that y, y dx I \I g2 M meas(i \I ),
which, due to Lemma 3.4, implies that
$$\int_{\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma}}\langle\nabla y_{\bar\gamma},\nabla y_\gamma\rangle\,\mathrm{d}x\to 0,\quad\text{as }\bar\gamma\to\gamma. \tag{3.35}$$
Finally, we obtain, from (3.33), (3.34) and (3.35), that
$$\lim_{\bar\gamma\to\gamma}\frac{1}{\bar\gamma-\gamma}\int_{\Omega_4}z_4\,\mathrm{d}x=\frac12\int_{\mathcal{I}_\gamma}|\nabla y_\gamma|^2\,\mathrm{d}x. \tag{3.36}$$

On $\Omega_2$ and $\Omega_3$: We study the behavior of $\int_{\Omega_2}z_2\,\mathrm{d}x$ and $\int_{\Omega_3}z_3\,\mathrm{d}x$ as $\bar\gamma\to\gamma$. Let us start with $\int_{\Omega_2}z_2\,\mathrm{d}x$. First, note that a.e. in $\Omega_2$ we have
$$\psi_\gamma(\nabla y_\gamma(x))=g|\nabla y_\gamma(x)|-\frac{g^2}{2\gamma}\quad\text{and}\quad\psi_{\bar\gamma}(\nabla y_{\bar\gamma}(x))=\frac{\bar\gamma}{2}|\nabla y_{\bar\gamma}(x)|^2.$$
Thus, we obtain that
$$z_2(x)=\frac{\bar\gamma}{2}|\nabla y_{\bar\gamma}(x)|^2-g|\nabla y_\gamma(x)|+\frac{g^2}{2\gamma}-g\Big\langle\frac{\nabla y_\gamma(x)}{|\nabla y_\gamma(x)|},\nabla y_{\bar\gamma}(x)-\nabla y_\gamma(x)\Big\rangle\quad\text{a.e. in }\Omega_2.$$
Moreover, since $\bar\gamma|\nabla y_{\bar\gamma}(x)|<g$ a.e. in $\Omega_2$, and due to the Cauchy-Schwarz inequality, we have the pointwise estimate
$$|z_2(x)|\leq 3g\,\Big||\nabla y_{\bar\gamma}(x)|-|\nabla y_\gamma(x)|\Big|+\frac{g^2(\bar\gamma-\gamma)}{2\gamma\bar\gamma}\quad\text{a.e. in }\Omega_2. \tag{3.37}$$
Then, we divide the analysis into two cases:
(i) On the subset of $\Omega_2$ where $\big||\nabla y_{\bar\gamma}(x)|-|\nabla y_\gamma(x)|\big|\geq\bar\gamma-\gamma$: due to Theorem 3.2 and (3.37), we obtain that
$$\int_{\Omega_2}|z_2|\,\mathrm{d}x\leq 3g\,\mathrm{meas}(\Omega_2)^{1/2}\,\|y_{\bar\gamma}-y_\gamma\|_{H_0^1}+\frac{g^2(\bar\gamma-\gamma)}{2M^2}\,\mathrm{meas}(\Omega_2)\leq 3gL(\bar\gamma-\gamma)\,\mathrm{meas}(\Omega_2)^{1/2}+\frac{g^2(\bar\gamma-\gamma)}{2M^2}\,\mathrm{meas}(\Omega_2). \tag{3.38}$$
(ii) On the subset of $\Omega_2$ where $\big||\nabla y_{\bar\gamma}(x)|-|\nabla y_\gamma(x)|\big|<\bar\gamma-\gamma$: we have that
$$\int_{\Omega_2}|z_2|\,\mathrm{d}x\leq 3g(\bar\gamma-\gamma)\,\mathrm{meas}(\Omega_2)+\frac{g^2(\bar\gamma-\gamma)}{2M^2}\,\mathrm{meas}(\Omega_2). \tag{3.39}$$
Consequently, (3.37), (3.38), (3.39) and Lemma 3.4 imply that
$$\lim_{\bar\gamma\to\gamma}\frac{1}{\bar\gamma-\gamma}\int_{\Omega_2}z_2\,\mathrm{d}x=0. \tag{3.40}$$
Analogously, we conclude that
$$\lim_{\bar\gamma\to\gamma}\frac{1}{\bar\gamma-\gamma}\int_{\Omega_3}z_3\,\mathrm{d}x=0. \tag{3.41}$$
Then, the result follows from (3.32), (3.36), (3.40) and (3.41).
Proposition 3.8. Let $\gamma\in[M,\infty)$. The function $V(\gamma)$ is twice differentiable at $\gamma$, with its second derivative given by
$$V''(\gamma)=-\frac{1}{\gamma^3}\int_{\mathcal{A}_\gamma}g^2\,\mathrm{d}x+\int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\,\mathrm{d}x, \tag{3.42}$$
where $\dot y_\gamma$ is defined in Proposition 3.6. Moreover, $V''(\gamma)\leq 0$, for all $\gamma\in[M,\infty)$.

Proof. Let us first prove (3.42). Since $|q_\gamma(x)|=g$ a.e. in $\mathcal{A}_\gamma$, and $q_\gamma(x)=\gamma\nabla y_\gamma(x)$ a.e. in $\mathcal{I}_\gamma$, we can write
$$V'(\gamma)=\frac{1}{2\gamma^2}\int_\Omega|q_\gamma|^2\,\mathrm{d}x. \tag{3.43}$$
From (3.43) we conclude that
$$V'(\bar\gamma)-V'(\gamma)=\frac{1}{2\bar\gamma^2}\int_\Omega|q_{\bar\gamma}|^2\,\mathrm{d}x-\frac{1}{2\gamma^2}\int_\Omega|q_\gamma|^2\,\mathrm{d}x.$$
We are concerned with the limit $\lim_{\bar\gamma\to\gamma}\frac{V'(\bar\gamma)-V'(\gamma)}{\bar\gamma-\gamma}$. We introduce the notation
$$I_j=\frac{1}{2\bar\gamma^2}\int_{\Omega_j}|q_{\bar\gamma}|^2\,\mathrm{d}x-\frac{1}{2\gamma^2}\int_{\Omega_j}|q_\gamma|^2\,\mathrm{d}x,\qquad j=1,\dots,4,$$
where the sets $\Omega_j$, $j=1,\dots,4$, are defined as in Proposition 3.7, and we analyze the integrals $I_j$ separately.

On $\Omega_1$ ($\lim I_1$): Let us start by recalling that a.e. on $\Omega_1$ we have $|q_\gamma(x)|=|q_{\bar\gamma}(x)|=g$. Thus, since $\mathcal{A}_\gamma=(\mathcal{A}_\gamma\cap\mathcal{A}_{\bar\gamma})\cup(\mathcal{A}_\gamma\setminus\mathcal{A}_{\bar\gamma})$, we get
$$\frac{I_1}{\bar\gamma-\gamma}=\frac{1}{\bar\gamma-\gamma}\int_{\Omega_1}g^2\Big(\frac{1}{2\bar\gamma^2}-\frac{1}{2\gamma^2}\Big)\mathrm{d}x=-\frac{g^2(\gamma+\bar\gamma)}{2\gamma^2\bar\gamma^2}\int_{\mathcal{A}_\gamma}\mathrm{d}x+\frac{g^2(\gamma+\bar\gamma)}{2\gamma^2\bar\gamma^2}\int_{\mathcal{A}_\gamma\setminus\mathcal{A}_{\bar\gamma}}\mathrm{d}x.$$
Since $\gamma,\bar\gamma\in[M,\infty)$, we get that
$$\frac{g^2(\gamma+\bar\gamma)}{2\gamma^2\bar\gamma^2}\int_{\mathcal{A}_\gamma\setminus\mathcal{A}_{\bar\gamma}}\mathrm{d}x\leq\frac{g^2(\gamma+\bar\gamma)}{2M^4}\,\mathrm{meas}(\mathcal{A}_\gamma\setminus\mathcal{A}_{\bar\gamma}),$$
which, due to Lemma 3.4, implies that
$$\frac{g^2(\gamma+\bar\gamma)}{2\gamma^2\bar\gamma^2}\int_{\mathcal{A}_\gamma\setminus\mathcal{A}_{\bar\gamma}}\mathrm{d}x\to 0,\quad\text{as }\bar\gamma\to\gamma. \tag{3.44}$$
Thus, (3.44), Lemma 3.4 and Lebesgue's bounded convergence theorem imply that
$$\lim_{\bar\gamma\to\gamma}\frac{I_1}{\bar\gamma-\gamma}=-\frac{1}{\gamma^3}\int_{\mathcal{A}_\gamma}g^2\,\mathrm{d}x. \tag{3.45}$$

On $\Omega_4$ ($\lim I_4$): First, note that $q_\gamma(x)=\gamma\nabla y_\gamma(x)$ and $q_{\bar\gamma}(x)=\bar\gamma\nabla y_{\bar\gamma}(x)$ a.e. on $\Omega_4$. Thus, since $\mathcal{I}_\gamma=(\mathcal{I}_\gamma\cap\mathcal{I}_{\bar\gamma})\cup(\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma})$, we obtain that
$$\frac{I_4}{\bar\gamma-\gamma}=\frac{1}{2(\bar\gamma-\gamma)}\int_{\mathcal{I}_\gamma}\big[|\nabla y_{\bar\gamma}|^2-|\nabla y_\gamma|^2\big]\mathrm{d}x-\frac{1}{2(\bar\gamma-\gamma)}\int_{\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma}}\big[|\nabla y_{\bar\gamma}|^2-|\nabla y_\gamma|^2\big]\mathrm{d}x. \tag{3.46}$$
Next, let us analyze the two integrals on the right-hand side of (3.46) separately.
(i) $\lim_{\bar\gamma\to\gamma}\frac{1}{2(\bar\gamma-\gamma)}\int_{\mathcal{I}_\gamma}\big[|\nabla y_{\bar\gamma}|^2-|\nabla y_\gamma|^2\big]\mathrm{d}x$: Note that
$$\frac{1}{\bar\gamma-\gamma}\int_{\mathcal{I}_\gamma}\big[|\nabla y_{\bar\gamma}|^2-|\nabla y_\gamma|^2\big]\mathrm{d}x=\underbrace{\int_{\mathcal{I}_\gamma}\Big[\Big\langle\nabla y_\gamma,\frac{\nabla y_{\bar\gamma}-\nabla y_\gamma}{\bar\gamma-\gamma}\Big\rangle+\Big\langle\nabla y_{\bar\gamma},\frac{\nabla y_{\bar\gamma}-\nabla y_\gamma}{\bar\gamma-\gamma}\Big\rangle\Big]\mathrm{d}x}_{:=J},$$
which implies that
$$\Big|J-2\int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\,\mathrm{d}x\Big|\leq\Big|\int_{\mathcal{I}_\gamma}\Big\langle\nabla y_\gamma,\frac{\nabla y_{\bar\gamma}-\nabla y_\gamma}{\bar\gamma-\gamma}-\nabla\dot y_\gamma\Big\rangle\mathrm{d}x\Big|+\Big|\int_{\mathcal{I}_\gamma}\Big[\Big\langle\nabla y_{\bar\gamma},\frac{\nabla y_{\bar\gamma}-\nabla y_\gamma}{\bar\gamma-\gamma}\Big\rangle-\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\Big]\mathrm{d}x\Big|.$$
Next, we separately analyze the two terms on the right-hand side of the inequality above. Since $\dot y_\gamma$ is an accumulation point of $\frac{y_{\bar\gamma}-y_\gamma}{\bar\gamma-\gamma}$ in $H_0^1(\Omega)$ as $\bar\gamma\to\gamma$ (see Rem. 3.2 and Thm. 3.6), we have that
$$\Big|\int_{\mathcal{I}_\gamma}\Big\langle\nabla y_\gamma,\frac{\nabla y_{\bar\gamma}-\nabla y_\gamma}{\bar\gamma-\gamma}-\nabla\dot y_\gamma\Big\rangle\mathrm{d}x\Big|\to 0,\quad\text{as }\bar\gamma\to\gamma. \tag{3.47}$$
On the other hand, we have that
$$\Big|\int_{\mathcal{I}_\gamma}\Big[\Big\langle\nabla y_{\bar\gamma},\frac{\nabla y_{\bar\gamma}-\nabla y_\gamma}{\bar\gamma-\gamma}\Big\rangle-\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\Big]\mathrm{d}x\Big|\leq\Big|\int_{\mathcal{I}_\gamma}\Big\langle\nabla y_\gamma,\frac{\nabla y_{\bar\gamma}-\nabla y_\gamma}{\bar\gamma-\gamma}-\nabla\dot y_\gamma\Big\rangle\mathrm{d}x\Big|+\Big|\int_{\mathcal{I}_\gamma}\Big\langle\nabla y_{\bar\gamma}-\nabla y_\gamma,\frac{\nabla y_{\bar\gamma}-\nabla y_\gamma}{\bar\gamma-\gamma}\Big\rangle\mathrm{d}x\Big|,$$
which, since $\dot y_\gamma$ is an accumulation point of $\frac{y_{\bar\gamma}-y_\gamma}{\bar\gamma-\gamma}$ in $H_0^1(\Omega)$ as $\bar\gamma\to\gamma$, implies that the first term on the right-hand side vanishes as $\bar\gamma\to\gamma$. (3.48)
Furthermore, Theorem 3.2 yields that
$$\int_{\mathcal{I}_\gamma}\Big\langle\nabla y_{\bar\gamma}-\nabla y_\gamma,\frac{\nabla y_{\bar\gamma}-\nabla y_\gamma}{\bar\gamma-\gamma}\Big\rangle\mathrm{d}x\leq\frac{\|y_{\bar\gamma}-y_\gamma\|_{H_0^1}^2}{\bar\gamma-\gamma}\leq L^2(\bar\gamma-\gamma). \tag{3.49}$$
Consequently, (3.47), (3.48) and (3.49) imply
$$\lim_{\bar\gamma\to\gamma}\frac{1}{2(\bar\gamma-\gamma)}\int_{\mathcal{I}_\gamma}\big[|\nabla y_{\bar\gamma}|^2-|\nabla y_\gamma|^2\big]\mathrm{d}x=\int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\,\mathrm{d}x. \tag{3.50}$$

(ii) $\lim_{\bar\gamma\to\gamma}\frac{1}{2(\bar\gamma-\gamma)}\int_{\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma}}\big[|\nabla y_{\bar\gamma}|^2-|\nabla y_\gamma|^2\big]\mathrm{d}x$: First note that
$$\int_{\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma}}\big||\nabla y_{\bar\gamma}|^2-|\nabla y_\gamma|^2\big|\,\mathrm{d}x\leq\int_{\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma}}|\nabla y_{\bar\gamma}|\,|\nabla y_{\bar\gamma}-\nabla y_\gamma|\,\mathrm{d}x+\int_{\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma}}|\nabla y_\gamma|\,|\nabla y_{\bar\gamma}-\nabla y_\gamma|\,\mathrm{d}x.$$
Therefore, from Theorem 3.2, Remark 2.4 and the Hölder inequality, we have that
$$\int_{\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma}}\big||\nabla y_{\bar\gamma}|^2-|\nabla y_\gamma|^2\big|\,\mathrm{d}x\leq\mathrm{meas}(\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma})^{1/4}\,\big(\|\nabla y_{\bar\gamma}\|_{L^4}+\|\nabla y_\gamma\|_{L^4}\big)\,\|y_{\bar\gamma}-y_\gamma\|_{H_0^1}\leq 2LK(\bar\gamma-\gamma)\,\mathrm{meas}(\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma})^{1/4},$$
and then, from Lemma 3.4, we conclude that
$$\lim_{\bar\gamma\to\gamma}\frac{1}{2(\bar\gamma-\gamma)}\int_{\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma}}\big[|\nabla y_{\bar\gamma}|^2-|\nabla y_\gamma|^2\big]\mathrm{d}x=0. \tag{3.51}$$
Finally, (3.46), (3.50) and (3.51) imply that
$$\lim_{\bar\gamma\to\gamma}\frac{I_4}{\bar\gamma-\gamma}=\int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\,\mathrm{d}x. \tag{3.52}$$

On $\Omega_2$ and $\Omega_3$: We analyze the limits $\lim_{\bar\gamma\to\gamma}\frac{I_2}{\bar\gamma-\gamma}$ and $\lim_{\bar\gamma\to\gamma}\frac{I_3}{\bar\gamma-\gamma}$. We start by analyzing $\lim_{\bar\gamma\to\gamma}\frac{I_2}{\bar\gamma-\gamma}$, and recall that in $\Omega_2$ we have $\gamma|\nabla y_\gamma(x)|\geq g$ a.e. Then, from Theorem 3.2, Remark 2.4 and the Hölder inequality, we conclude that
$$|I_2|\leq\frac12\int_{\Omega_2}\big||\nabla y_{\bar\gamma}|^2-|\nabla y_\gamma|^2\big|\,\mathrm{d}x\leq LK(\bar\gamma-\gamma)\,\mathrm{meas}(\Omega_2)^{1/4},$$
which, due to Lemma 3.4, implies that
$$\lim_{\bar\gamma\to\gamma}\frac{I_2}{\bar\gamma-\gamma}=0. \tag{3.53}$$
Analogously, we conclude that
$$\lim_{\bar\gamma\to\gamma}\frac{I_3}{\bar\gamma-\gamma}=0. \tag{3.54}$$
Finally, (3.45), (3.52), (3.53) and (3.54) imply (3.42).

Now, we prove that $V''(\gamma)\leq 0$. Using $v=\dot y_\gamma\chi_{\mathcal{I}_\gamma}$ in (3.21) and the definition of $a(\cdot,\cdot)$, we obtain that
$$(\mu+\gamma)\int_{\mathcal{I}_\gamma}|\nabla\dot y_\gamma|^2\,\mathrm{d}x=-\int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\,\mathrm{d}x.$$
Thus, we can easily conclude that
$$\int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\,\mathrm{d}x\leq 0,$$
which yields that
$$V''(\gamma)=-\frac{1}{\gamma^3}\int_{\mathcal{A}_\gamma}g^2\,\mathrm{d}x+\int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\,\mathrm{d}x\leq 0.$$
3.3. Model functions and path-following algorithm

In this section, following [4], we propose model functions which approximate the value functional $V(\gamma)$ and share some of its qualitative properties. These model functions will then be used for the development of the path-following algorithm.

From Theorem 3.2 and Propositions 3.7 and 3.8, it follows that $V(\gamma)$, $\gamma\in[M,\infty)$, is a monotonically increasing and concave function. We then propose the model functions
$$m(\gamma)=C_1-\frac{C_2}{\mu+\gamma}-\frac{G}{\gamma}, \tag{3.55}$$
with $C_1\in\mathbb{R}$, $C_2\geq 0$ and $G\geq 0$, which share the main qualitative properties of $V(\gamma)$, i.e., $\dot m(\gamma)\geq 0$ and $\ddot m(\gamma)\leq 0$.

To motivate the introduction of these model functions, let us take the test function $v=y_\gamma\chi_{\mathcal{I}_\gamma}$ in (3.21). We get that
$$a(\dot y_\gamma,y_\gamma\chi_{\mathcal{I}_\gamma})+\gamma\big(\nabla\dot y_\gamma\chi_{\mathcal{I}_\gamma},\nabla y_\gamma\chi_{\mathcal{I}_\gamma}\big)_{L^2(\Omega)}+\big(\nabla y_\gamma\chi_{\mathcal{I}_\gamma},\nabla y_\gamma\chi_{\mathcal{I}_\gamma}\big)_{L^2(\Omega)}=0. \tag{3.56}$$
From the definition of $a(\cdot,\cdot)$, we obtain that
$$(\mu+\gamma)\big(\nabla\dot y_\gamma,\nabla y_\gamma\chi_{\mathcal{I}_\gamma}\big)_{L^2(\Omega)}+\int_{\mathcal{I}_\gamma}|\nabla y_\gamma|^2\,\mathrm{d}x=0.$$
Consequently, by using Propositions 3.7 and 3.8, we obtain
$$(\mu+\gamma)\Big[V''(\gamma)+\frac{1}{\gamma^3}\int_{\mathcal{A}_\gamma}g^2\,\mathrm{d}x\Big]+2\Big[V'(\gamma)-\frac{1}{2\gamma^2}\int_{\mathcal{A}_\gamma}g^2\,\mathrm{d}x\Big]=0,$$
which implies that
$$(\mu+\gamma)V''(\gamma)+2V'(\gamma)+\frac{\mu}{\gamma^3}\int_{\mathcal{A}_\gamma}g^2\,\mathrm{d}x=0. \tag{3.57}$$
Note that $\int_{\mathcal{A}_\gamma}g^2\,\mathrm{d}x$ is a function of $\gamma$ which is uniformly bounded from above by $g^2\,\mathrm{meas}(\Omega)$. Replacing $V$ by $m$ and the $\gamma$-dependent term $\int_{\mathcal{A}_\gamma}g^2\,\mathrm{d}x$ by $2G$, we obtain the differential equation
$$(\mu+\gamma)\ddot m(\gamma)+2\dot m(\gamma)+\frac{2\mu G}{\gamma^3}=0, \tag{3.58}$$
whose solutions are the family of functions given by (3.55).

In order to determine $C_1$, $C_2$ and $G$, we fix a reference value $\gamma_r>0$, $\gamma_r\neq\gamma$, for which the value $V(\gamma_r)$ is known. Then, we use the following conditions:
$$m(\gamma)=V(\gamma),\qquad m(\gamma_r)=V(\gamma_r),\qquad \dot m(\gamma)=V'(\gamma).$$
Solving the resulting system of nonlinear equations
$$C_1-\frac{C_2}{\mu+\gamma}-\frac{G}{\gamma}=V(\gamma),\qquad C_1-\frac{C_2}{\mu+\gamma_r}-\frac{G}{\gamma_r}=V(\gamma_r),\qquad \frac{C_2}{(\mu+\gamma)^2}+\frac{G}{\gamma^2}=V'(\gamma),$$
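As a sanity check, the compatibility of the model family (3.55) with the differential equation (3.58) can be verified numerically. The sketch below is illustrative only: the function names and the sample coefficient values are hypothetical, and the derivatives of $m$ are approximated by central finite differences rather than computed analytically.

```python
def m(gamma, C1, C2, G, mu):
    """Model function (3.55): m(gamma) = C1 - C2/(mu + gamma) - G/gamma."""
    return C1 - C2 / (mu + gamma) - G / gamma

def ode_residual(gamma, C1, C2, G, mu, h=1e-4):
    """Residual of (3.58): (mu + gamma) m'' + 2 m' + 2 mu G / gamma^3,
    with m' and m'' approximated by central finite differences."""
    f = lambda t: m(t, C1, C2, G, mu)
    md = (f(gamma + h) - f(gamma - h)) / (2.0 * h)          # ~ m'(gamma)
    mdd = (f(gamma + h) - 2.0 * f(gamma) + f(gamma - h)) / h**2  # ~ m''(gamma)
    return (mu + gamma) * mdd + 2.0 * md + 2.0 * mu * G / gamma**3
```

For any admissible choice of $C_1$, $C_2\geq 0$, $G\geq 0$ the residual should vanish up to discretization error, and $m$ should be increasing, mirroring the monotonicity of $V$.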
we obtain that
$$G=\frac{\gamma^2\gamma_r}{\mu(\gamma_r-\gamma)}\Big[(\mu+\gamma)V'(\gamma)-(\mu+\gamma_r)\,\frac{V(\gamma_r)-V(\gamma)}{\gamma_r-\gamma}\Big]. \tag{3.59}$$
Consequently, the parameters $C_1$ and $C_2$ are given by
$$C_2=(\mu+\gamma)^2\Big(V'(\gamma)-\frac{G}{\gamma^2}\Big),\qquad C_1=V(\gamma)+\frac{C_2}{\mu+\gamma}+\frac{G}{\gamma}.$$
Once we have determined the values of the coefficients of the model, we are able to propose the updating strategy for $\gamma$. Let $\{\tau_k\}$ satisfy $\tau_k\in(0,1)$ for all $k\in\mathbb{N}$ and $\tau_k\to 0$ as $k\to\infty$, and assume that $V(\gamma_k)$ is available. Following [4], the idea is to have a superlinear rate of convergence for our algorithm, i.e., given $\gamma_k$, the update value $\gamma_{k+1}$ should ideally satisfy
$$|V^*-V(\gamma_{k+1})|\leq\tau_k\,|V^*-V(\gamma_k)|, \tag{3.60}$$
where $V^*:=\lim_{\gamma\to\infty}V(\gamma)$. Since $V^*$ and $V(\gamma_{k+1})$ are unknown, we approximate these values by $\lim_{\gamma\to\infty}m(\gamma)$ and $m(\gamma_{k+1})$, respectively. Hereafter, we use the notation $C_{1,k}$, $C_{2,k}$ and $G_k$ for the coefficients of the model function (3.55) related to each $\gamma_k$. Further, note that $\lim_{\gamma\to\infty}m(\gamma)=C_{1,k}$. Thus, (3.60) is replaced by
$$|C_{1,k}-m(\gamma_{k+1})|\leq\tau_k\,|C_{1,k}-m(\gamma_k)|. \tag{3.61}$$
Calling $\beta_k:=\tau_k\,(C_{1,k}-m(\gamma_k))$ and solving the equation $C_{1,k}-m(\gamma_{k+1})=\beta_k$, we obtain that
$$\gamma_{k+1}=\frac{D_k}{2}+\sqrt{\frac{D_k^2}{4}+\frac{\mu G_k}{\beta_k}}, \tag{3.62}$$
where $D_k=\frac{C_{2,k}+G_k}{\beta_k}-\mu$. Next, we state a path-following algorithm which uses the update strategy for $\gamma$ given by (3.62).

Algorithm PF.
1. Select $\gamma_r$ and compute $V(\gamma_r)$. Choose $\gamma_0>\max(M,\gamma_r)$ and set $k=0$.
2. Solve
$$\begin{cases} a(y_k,v)+(q_k,\nabla v)_{L^2(\Omega)}-(f,v)_{L^2(\Omega)}=0, & \text{for all } v\in H_0^1(\Omega),\\ \max\big(g,\gamma_k|\nabla y_k(x)|\big)\,q_k(x)=g\,\gamma_k\nabla y_k(x), & \text{a.e. in }\Omega. \end{cases} \tag{3.63}$$
3. Compute $V(\gamma_k)$, $V'(\gamma_k)$ and update $\gamma_k$ by using (3.62).
4. Stop, or set $k:=k+1$ and go to Step 2.

4. Semismooth Newton method

In this section we state an algorithm for the efficient solution of (3.63). Since no smoothing operation takes place in the complementarity function in (3.63), it is not possible to get Newton differentiability in infinite
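The coefficient fit and the update step of Algorithm PF can be sketched in a few lines. The following is one possible organization of (3.59)-(3.62), not the authors' code; the function names are hypothetical, and the inputs `V0`, `Vr`, `dV0` stand for $V(\gamma)$, $V(\gamma_r)$ and $V'(\gamma)$, which in practice come from the inner solver.

```python
import math

def model_coefficients(mu, g0, gr, V0, Vr, dV0):
    """Fit m(gamma) = C1 - C2/(mu+gamma) - G/gamma to the interpolation
    conditions m(g0) = V0, m(gr) = Vr, m'(g0) = dV0 (cf. (3.59))."""
    S = (Vr - V0) / (gr - g0)                      # secant slope of V
    G = g0**2 * gr * ((mu + g0) * dV0 - (mu + gr) * S) / (mu * (gr - g0))
    C2 = (mu + g0)**2 * (dV0 - G / g0**2)
    C1 = V0 + C2 / (mu + g0) + G / g0
    return C1, C2, G

def next_gamma(mu, C1, C2, G, gamma_k, tau_k):
    """Path-following update (3.62): solve C1 - m(gamma_{k+1}) = beta_k."""
    m_k = C1 - C2 / (mu + gamma_k) - G / gamma_k
    beta_k = tau_k * (C1 - m_k)
    D_k = (C2 + G) / beta_k - mu
    return D_k / 2.0 + math.sqrt(D_k**2 / 4.0 + mu * G / beta_k)
```

By construction, the returned $\gamma_{k+1}$ satisfies $C_1-m(\gamma_{k+1})=\beta_k$ exactly, which is the discrete analogue of the contraction requirement (3.61).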
dimensions (see [27], Sect. 3.3). Therefore, we consider a discretized version of system (3.63), and propose a semismooth Newton method to solve this problem. Specifically, we state a primal-dual scheme to solve system (3.63) and prove local superlinear convergence of the method. By involving the primal and the dual variables in the same algorithm, we compute the solutions to the discrete versions of $(\mathcal{P})$ and $(\mathcal{P}_\gamma)$ simultaneously. The algorithm proposed is a particular case of the Newton type algorithms developed in [16].

Let us introduce the definition of Newton differentiability.

Definition 4.1. Let $X$ and $Z$ be two Banach spaces. The function $F:X\to Z$ is called Newton differentiable if there exists a family of generalized derivatives $G:X\to\mathcal{L}(X,Z)$ such that
$$\lim_{h\to 0}\frac{1}{\|h\|_X}\,\big\|F(x+h)-F(x)-G(x+h)h\big\|_Z=0.$$

Throughout this section we denote discretized quantities by a superscript $h$. For a vector $v\in\mathbb{R}^n$ we denote by $D(v):=\mathrm{diag}(v)$ the $n\times n$ diagonal matrix with diagonal entries $v_i$. Besides that, we denote by $\odot$ the Hadamard product of vectors, i.e., $v\odot w:=(v_1w_1,\dots,v_nw_n)$.

We use a finite element approximation of system (3.63) and consider the spaces
$$V^h:=\big\{\eta\in C(\bar\Omega):\ \eta|_T\in\Pi_1,\ \forall T\in\mathcal{T}^h\big\},\qquad W^h:=\big\{q^h=(q_1^h,q_2^h)\in L^2(\Omega):\ q_1^h|_T,\,q_2^h|_T\in\Pi_0,\ \forall T\in\mathcal{T}^h\big\},$$
to approximate the velocity $y^h$ and the multiplier $q^h$, respectively. Here, $\Pi_k$ denotes the set of polynomials of degree less than or equal to $k$ and $\mathcal{T}^h$ denotes a regular triangulation of $\Omega$. Thus, the discrete analogue of (3.63) is given by
$$\begin{cases} A_\mu^h y+B^h q-f^h=0,\\ \max\big(ge^h,\xi(\gamma\nabla^h y)\big)\odot q-g\gamma\nabla^h y=0, \end{cases} \tag{4.1}$$
for $\gamma>0$, where $A_\mu^h\in\mathbb{R}^{n\times n}$ is the stiffness matrix, $e^h\in\mathbb{R}^{2m}$ is the vector of all ones and $B^h\in\mathbb{R}^{n\times 2m}$ is obtained in the usual way from the bilinear form $(\cdot,\cdot)_{L^2(\Omega)}$ and the basis functions of $V^h$ and $W^h$. Here, $y^h\in\mathbb{R}^n$ and $q^h\in\mathbb{R}^{2m}$ are the solution coefficients of the approximated regularized primal and dual solutions $y^h\in V^h$ and $q^h\in W^h$, respectively.
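The structure of the discrete system (4.1) can be made concrete by evaluating it as a plain residual. The sketch below is illustrative, with dense NumPy arrays standing in for the assembled finite element matrices; the function name and the toy data are assumptions, not part of the paper.

```python
import numpy as np

def F_residual(A, B, f, grad, y, q, g, gamma):
    """Residual of (4.3): first block A y + B q - f, second block
    max(g e, xi(gamma * grad y)) ⊙ q - g gamma grad y."""
    m = B.shape[1] // 2                 # number of triangles
    gy = grad @ y                       # discrete gradient, stacked (d/dx1; d/dx2)
    norms = np.hypot(gamma * gy[:m], gamma * gy[m:])   # xi(gamma grad y), per triangle
    mx = np.maximum(g, np.concatenate([norms, norms])) # max(g e, xi(...))
    return np.concatenate([A @ y + B @ q - f, mx * q - g * gamma * gy])
```

At a solution of (4.1) this residual vanishes; a semismooth Newton iteration drives it to zero.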
Further, we construct the right-hand side $f^h$ using the basis functions $\varphi_i\in V^h$, $i=1,\dots,n$ (see [ ], Sect. 6). The discrete version of the gradient is given by
$$\nabla^h:=\begin{pmatrix}\nabla_1^h\\ \nabla_2^h\end{pmatrix}\in\mathbb{R}^{2m\times n}, \tag{4.2}$$
where $(\nabla_1^h)_{ki}:=\frac{\partial\varphi_i(x)}{\partial x_1}\big|_{T_k}$ and $(\nabla_2^h)_{ki}:=\frac{\partial\varphi_i(x)}{\partial x_2}\big|_{T_k}$, for $i=1,\dots,n$ and $k=1,\dots,m$. Note that $\frac{\partial\varphi_i(x)}{\partial x_1}$ and $\frac{\partial\varphi_i(x)}{\partial x_2}$ are constant in each triangle $T_k$. Consequently, we obtain that $\nabla^h y$ is the coefficient vector of $\nabla y^h(x)$. Hereafter, the matrix $A_\mu^h$ is assumed to be symmetric and positive definite. The function $\xi:\mathbb{R}^{2m}\to\mathbb{R}^{2m}$ is defined by
$$(\xi(p))_i=(\xi(p))_{i+m}:=\big|(p_i,p_{i+m})\big|\quad\text{for }p\in\mathbb{R}^{2m},\ i=1,\dots,m.$$
System (4.1) can also be written as the following operator equation:
$$F(y^h,q^h):=\begin{bmatrix} A_\mu^h y^h+B^h q^h-f^h\\ \max\big(ge^h,\xi(\gamma\nabla^h y^h)\big)\odot q^h-g\gamma\nabla^h y^h \end{bmatrix}=0. \tag{4.3}$$
It is well known (see e.g. [7,27]) that the max-operator and the norm function $\xi$ involved in (4.3) are semismooth. Furthermore, this is also true for the composition of semismooth functions that arises in (4.3). A particular
element of the generalized Jacobian of $\max(0,\cdot):\mathbb{R}^N\to\mathbb{R}^N$ is the diagonal matrix $G_{\max}\in\mathbb{R}^{N\times N}$ defined by
$$(G_{\max}(v))_{ii}:=\begin{cases}1 & \text{if } v_i\geq 0,\\ 0 & \text{if } v_i<0,\end{cases}\qquad\text{for }1\leq i\leq N. \tag{4.4}$$
Consequently, given approximations $y_k^h$ and $q_k^h$, the Newton step for (4.3) at $(y_k^h,q_k^h)$ is given by:
$$\begin{bmatrix} A_\mu^h & B^h\\ -C_k^h\nabla^h & D(m_k^h)\end{bmatrix}\begin{bmatrix}\delta_y\\ \delta_q\end{bmatrix}=\begin{bmatrix} -A_\mu^h y_k^h-B^h q_k^h+f^h\\ -D(m_k^h)q_k^h+g\gamma\nabla^h y_k^h\end{bmatrix}, \tag{4.5}$$
where $m_k^h:=\max\big(ge^h,\xi(\gamma\nabla^h y_k^h)\big)\in\mathbb{R}^{2m}$, $C_k^h:=g\gamma I_{2m}-\gamma\,\chi_{\mathcal{A}_{k+1}}D(q_k^h)P^h(\nabla^h y_k^h)$, and $\chi_{\mathcal{A}_{k+1}}=D(t_k^h)\in\mathbb{R}^{2m\times 2m}$ with
$$(t_k^h)_i:=\begin{cases}1 & \text{if }\gamma\,\xi(\nabla^h y_k^h)_i\geq g,\\ 0 & \text{else}.\end{cases} \tag{4.6}$$
Further, $P^h\in\mathbb{R}^{2m\times 2m}$ denotes the generalized Jacobian of $\xi$, i.e., for $p\in\mathbb{R}^{2m}$ we have that
$$P^h(p):=\begin{pmatrix}\dfrac{\partial\xi_i}{\partial p_j} & \dfrac{\partial\xi_i}{\partial p_{j+m}}\\[4pt] \dfrac{\partial\xi_{i+m}}{\partial p_j} & \dfrac{\partial\xi_{i+m}}{\partial p_{j+m}}\end{pmatrix},$$
where the block diagonal matrices are defined by
$$\frac{\partial\xi_i}{\partial p_j}=\frac{\partial\xi_{i+m}}{\partial p_j}:=\begin{cases}\delta_{ij}\,\dfrac{p_i}{|(p_i,p_{i+m})|} & \text{if }(p_i,p_{i+m})\neq 0,\\ \delta_{ij}\,\varepsilon_1 & \text{if }(p_i,p_{i+m})=0,\end{cases}\qquad \frac{\partial\xi_i}{\partial p_{j+m}}=\frac{\partial\xi_{i+m}}{\partial p_{j+m}}:=\begin{cases}\delta_{ij}\,\dfrac{p_{i+m}}{|(p_i,p_{i+m})|} & \text{if }(p_i,p_{i+m})\neq 0,\\ \delta_{ij}\,\varepsilon_2 & \text{if }(p_i,p_{i+m})=0,\end{cases}$$
for $i,j=1,\dots,m$, with $\varepsilon_1$ and $\varepsilon_2$ real numbers such that $|(\varepsilon_1,\varepsilon_2)|\leq 1$. From the invertibility of $D(m_k^h)$ we obtain that
$$\delta_q=-q_k^h+D(m_k^h)^{-1}\big(g\gamma\nabla^h y_k^h+C_k^h\nabla^h\delta_y\big). \tag{4.7}$$
Thus, the remaining equation for $\delta_y$ can be written as
$$\Xi_{\gamma,k}\,\delta_y=\eta_{\gamma,k}, \tag{4.8}$$
where the matrix $\Xi_{\gamma,k}$ and the right-hand side $\eta_{\gamma,k}$ are given by
$$\Xi_{\gamma,k}:=A_\mu^h+B^h D(m_k^h)^{-1}C_k^h\nabla^h,\qquad \eta_{\gamma,k}:=-A_\mu^h y_k^h+f^h-g\gamma\,B^h D(m_k^h)^{-1}\nabla^h y_k^h.$$
It can be verified (cf. [16]) that the matrix $\Xi_{\gamma,k}$ is symmetric at the solution. Thanks to [16], Lemma 3.3, we know that the condition $\xi(q_k^h)_i\leq g$, for $i=1,\dots,m$, must hold to guarantee the positive definiteness of the matrix $C_k^h$. Moreover, we can assert that if the last condition is fulfilled, the matrix $\Xi_{\gamma,k}$ is positive definite, $\lambda_{\min}(\Xi_{\gamma,k})\geq\lambda_{\min}(A_\mu^h)>0$, and the sequence $\big\{\|\Xi_{\gamma,k}^{-1}\|\big\}_{k\in\mathbb{N}}$ is uniformly bounded.
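The pointwise objects of this section translate directly into vector operations. The following sketch (illustrative Python with hypothetical function names) implements the generalized derivative (4.4), the pairwise norm function $\xi$, and the rescaling of the dual variable that enforces the feasibility condition $\xi(q^h_k)_i \le g$ mentioned above:

```python
import numpy as np

def G_max(v):
    """Generalized derivative (4.4) of max(0, .): diagonal entries 1 where v_i >= 0."""
    return np.diag((np.asarray(v) >= 0).astype(float))

def xi(p):
    """(xi(p))_i = (xi(p))_{i+m} := |(p_i, p_{i+m})| (pointwise Euclidean norm)."""
    m = p.size // 2
    n = np.hypot(p[:m], p[m:])
    return np.concatenate([n, n])

def project_dual(q, g):
    """Rescale each pair (q_i, q_{i+m}) by g / max(g, xi(q)_i) so that
    xi(q)_i <= g holds componentwise."""
    m = q.size // 2
    scale = g / np.maximum(g, np.hypot(q[:m], q[m:]))
    return np.concatenate([q[:m] * scale, q[m:] * scale])
```

Pairs that are already feasible are left unchanged, while infeasible pairs are pulled back radially onto the ball of radius $g$.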
Due to these results, we know that if $\xi(q_k^h)_i\leq g$ holds for all $i=1,\dots,m$, the solution of (4.8) exists for all $k$ and it is a descent direction for the objective functional in $(\mathcal{P}_\gamma)$. However, this condition is unlikely to be fulfilled by all $i\leq m$ and $k\in\mathbb{N}$. To overcome this difficulty, Hintermüller and Stadler [16] constructed a globalized semismooth Newton algorithm by modifying the term involving $D(q_k^h)P^h(\nabla^h y_k^h)$ for indices $i$ for which $\xi(q_k^h)_i>g$. This is done by replacing $q_k^h$ by
$$\frac{g}{\max\big(g,\xi(q_k^h)_i\big)}\,\big((q_k^h)_i,(q_k^h)_{i+m}\big)$$
when assembling the system matrix $\Xi_{\gamma,k}$. Thus, we guarantee that $\xi(q_k^h)_i\leq g$ for $i=1,\dots,m$. Further, we obtain a modified system matrix, denoted by $\Xi_{\gamma,k}^+$, which replaces $\Xi_{\gamma,k}$ in (4.8). This new matrix is positive definite for all $\gamma$ and the sequence $\big\{\|(\Xi_{\gamma,k}^+)^{-1}\|\big\}_{k\in\mathbb{N}}$ is uniformly bounded.

Algorithm SSN.
1. Initialize $(y_0^h,q_0^h)\in\mathbb{R}^n\times\mathbb{R}^{2m}$ and set $k=0$.
2. Estimate the active sets, i.e., determine $\chi_{\mathcal{A}_{k+1}}\in\mathbb{R}^{2m\times 2m}$.
3. Compute $\Xi_{\gamma,k}^+$ if the dual variable is not feasible for all $i=1,\dots,m$; otherwise set $\Xi_{\gamma,k}^+=\Xi_{\gamma,k}$. Solve
$$\Xi_{\gamma,k}^+\,\delta_y=\eta_{\gamma,k}.$$
4. Compute $\delta_q$ from (4.7).
5. Update $y_{k+1}^h:=y_k^h+\delta_y$ and $q_{k+1}^h:=q_k^h+\delta_q$.
6. Stop, or set $k:=k+1$ and go to Step 2.

Following [16], Lemma 3.5, we know that $q_k^h\to q_\gamma^h$ and $y_k^h\to y_\gamma^h$ imply that $\Xi_{\gamma,k}^+$ converges to $\Xi_{\gamma,k}$ as $k\to\infty$. Thus, thanks to this result, we can state the following theorem.

Theorem 4.2. The iterates $(y_k^h,q_k^h)$ of Algorithm SSN converge superlinearly to $(y_\gamma^h,q_\gamma^h)$, provided that $(y_0^h,q_0^h)$ is sufficiently close to $(y_\gamma^h,q_\gamma^h)$.

Proof. We refer the reader to [16], Theorem 3.6, for the complete proof.

The projection procedure, which yields the matrix $\Xi_{\gamma,k}^+$, assures that in each iteration of Algorithm SSN, $\delta_y=(\Xi_{\gamma,k}^+)^{-1}\eta_{\gamma,k}$ constitutes a descent direction for the objective functional in $(\mathcal{P}_\gamma)$. Additionally, steps 3 and 4
of the algorithm involve a decoupled system of equations for $\delta_y$ and $\delta_q$, which is obtained directly, due to the regularization proposed and the structure of the method. Moreover, the computation of $\delta_q$ through (4.7) turns out to be computationally efficient, since only the inverse of a diagonal matrix is needed.

5. Numerical results

In this section we present numerical experiments which illustrate the main properties of the path-following and the semismooth Newton methods applied to the numerical solution of laminar Bingham fluids. The experiments have been carried out for a constant function $f$, representing the linear decay of pressure in the pipe. The parameter $\gamma$ is updated using the path-following strategy defined in Section 3.3. Unless we specify the contrary, we stop Algorithm PF as soon as $r_k^h:=(r_k^{1,h},r_k^{2,h},r_k^{3,h})$ is of the order of $10^{-7}$, where
$$r_k^{1,h}=\big\|y_k^h+(A_\mu^h)^{-1}(B^h q_k^h-f^h)\big\|_{H^1,h}\,/\,\|f^h\|_{L^2,h},$$
$$r_k^{2,h}=\big\|\max\big(ge^h,\xi(q_k^h+\gamma\nabla^h y_k^h)\big)\odot q_k^h-g\,(q_k^h+\gamma\nabla^h y_k^h)\big\|_{L^2,h},$$
$$r_k^{3,h}=\big\|\max\big(0,\xi(q_k^h)-g\big)\big\|_{L^2,h},$$
Figure 1. Example 1: flow of the Bingham fluid (left) and velocity profile along the diagonal, $y(x_1,x_1)$ (right).

Figure 2. Example 1: final inactive set $\mathcal{I}_\gamma$.

with $A_\mu^h$, $B^h$, $\nabla^h$ and $\xi$ defined as in (4.1). Here $\|\cdot\|_{H^1,h}$ and $\|\cdot\|_{L^2,h}$ denote the discrete versions of the $H^1$ and $L^2$ norms, respectively. $r_k^{1,h}$ and $r_k^{2,h}$ describe the improvement of the algorithm towards the solution of the discrete version of the optimality system $(\mathcal{S}_\gamma)$, while $r_k^{3,h}$ measures the feasibility of $q_k^h$. We use the mass matrix to calculate the integrals related to the space $V^h$, and a composite trapezoidal formula for the integrals associated to the space $W^h$. Additionally, we use the sequence $\{\tau_k\}$.

Example 1. In our first example, we focus on the behavior of Algorithm PF. We consider a square domain $\Omega$ and compute the flow of a Bingham fluid defined by $\mu=g$ and a constant $f$. We work with a uniform triangulation with $h=0.0046$, where $h$ is the radius of the inscribed circumferences of the triangles in the mesh. In this example, we use the initial values $\gamma_r$ and $\gamma_0$. The inner Algorithm SSN for $\gamma_0$ is initialized with the solution of the Poisson problem $A_\mu^h y^h=f^h$ together with $q^h=0$, and is finished as soon as the residual $\|\delta_y\|$ is lower than $\epsilon$, where $\epsilon$ denotes the machine accuracy.

The resulting velocity function is displayed in Figure 1 and the final inactive set in Figure 2. The value of the regularization parameter $\gamma_k$ grows by several orders of magnitude in three iterations, and we obtain a maximum velocity of $0.29$. The graphics illustrate the expected mechanical properties of the material, i.e., since the shear stress transmitted by a fluid layer decreases toward the center of the pipe, the Bingham fluid moves like a solid in that sector. Besides that, Figure 1 shows that there are no stagnant zones in the flow (see [22]).
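The complementarity and feasibility residuals used in the stopping test can be sketched as follows. This is an illustrative fragment, not the authors' implementation: plain Euclidean vector norms stand in for the mesh-dependent $L^2$ norms, and the function names are assumptions.

```python
import numpy as np

def xi(p):
    """(xi(p))_i = (xi(p))_{i+m} := |(p_i, p_{i+m})|."""
    m = p.size // 2
    n = np.hypot(p[:m], p[m:])
    return np.concatenate([n, n])

def stopping_residuals(q, grad_y, g, gamma):
    """Residuals in the spirit of r^{2,h} and r^{3,h} of Section 5:
    complementarity of the dual pair and feasibility of q."""
    w = q + gamma * grad_y
    r2 = np.linalg.norm(np.maximum(g, xi(w)) * q - g * w)   # complementarity
    r3 = np.linalg.norm(np.maximum(0.0, xi(q) - g))         # feasibility
    return r2, r3
```

Both quantities vanish at a solution of the regularized system; `r3` alone detects dual iterates that leave the ball of radius $g$.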
Table 1. $\gamma$-updates and convergence behavior of Algorithm PF: iteration count, $\gamma_k$, residuals $r_k^h$, increments $\|y_{k+1}^h-y_k^h\|_{H^1,h}$ and ratios $\nu_k^h$.

Table 2. Number of iterations of Algorithm SSN in each path-following iteration.

Table 3. Number of iterations of Algorithm SSN without any automatic updating strategy; for the largest value of $\gamma$ the method fails to converge.

In Table 1 we report the values of the regularization parameter $\gamma_k$ and the residuals $r_k^h$ and
$$\nu_k^h=\frac{\|y_{k+1}^h-y_k^h\|_{H^1,h}}{\|y_k^h-y_{k-1}^h\|_{H^1,h}}.$$
From the behavior of $r_k^h$, it is possible to observe a superlinear convergence rate of Algorithm PF, according to the strategy proposed in (3.60). Furthermore, the behavior of $\nu_k^h$ implies a superlinear convergence rate of $y_k$ towards the solution as $k$ increases. These data are depicted in Figure 3, where the two magnitudes are plotted in a logarithmic scale.

In Table 2, we show the number of inner iterations that Algorithm SSN needs to achieve convergence in each iteration of Algorithm PF, together with the total number of SSN iterations needed. It can be observed that the path-following strategy allows one to reach large values of $\gamma_k$ and, consequently, to obtain a better approximation of the solution of the problem. In contrast to these results, in Table 3 we show the number of iterations that Algorithm SSN needs to achieve convergence without any updating strategy. In this case, the algorithm not only needs more iterations for each value of $\gamma_k$, but also fails to converge for large values of it.

Finally, in Figure 4 we plot and compare the path value functional $V(\gamma)$ (solid line) and the model functions $m(\gamma_k)$ calculated from the values $C_{1,k}$, $C_{2,k}$ and $G_k$ given in each iteration of the algorithm. It can be observed that, as $k$ increases, $m(\gamma_k)$ becomes a better model for $V(\gamma)$.
However, even for small values of $\gamma_k$, the model functions stay close to the value functional.

Example 2. In this example, we compare the numerical behavior of Algorithm PF versus a penalty-Newton-Uzawa conjugate gradient method proposed by Dean et al. in [6]. We consider the flow of a Bingham fluid in the cross section of a cylindrical pipe, given by the disk defined by $\Omega:=\{x=(x_1,x_2)\in\mathbb{R}^2:\ x_1^2+x_2^2<R^2\}$, where $R>0$. It is well known (see [2], Ex. 2, p. 8) that in
Nonlinear Programming 3rd Edition Theoretical Solutions Manual Chapter 6 Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts 1 NOTE This manual contains
More informationSobolev Spaces. Chapter 10
Chapter 1 Sobolev Spaces We now define spaces H 1,p (R n ), known as Sobolev spaces. For u to belong to H 1,p (R n ), we require that u L p (R n ) and that u have weak derivatives of first order in L p
More informationSUPERCONVERGENCE PROPERTIES FOR OPTIMAL CONTROL PROBLEMS DISCRETIZED BY PIECEWISE LINEAR AND DISCONTINUOUS FUNCTIONS
SUPERCONVERGENCE PROPERTIES FOR OPTIMAL CONTROL PROBLEMS DISCRETIZED BY PIECEWISE LINEAR AND DISCONTINUOUS FUNCTIONS A. RÖSCH AND R. SIMON Abstract. An optimal control problem for an elliptic equation
More informationCHAPTER VIII HILBERT SPACES
CHAPTER VIII HILBERT SPACES DEFINITION Let X and Y be two complex vector spaces. A map T : X Y is called a conjugatelinear transformation if it is a reallinear transformation from X into Y, and if T (λx)
More informationHamburger Beiträge zur Angewandten Mathematik
Hamburger Beiträge zur Angewandten Mathematik Numerical analysis of a control and state constrained elliptic control problem with piecewise constant control approximations Klaus Deckelnick and Michael
More informationHow to Characterize the WorstCase Performance of Algorithms for Nonconvex Optimization
How to Characterize the WorstCase Performance of Algorithms for Nonconvex Optimization Frank E. Curtis Department of Industrial and Systems Engineering, Lehigh University Daniel P. Robinson Department
More informationExamples of Dual Spaces from Measure Theory
Chapter 9 Examples of Dual Spaces from Measure Theory We have seen that L (, A, µ) is a Banach space for any measure space (, A, µ). We will extend that concept in the following section to identify an
More informationHilbert spaces. 1. CauchySchwarzBunyakowsky inequality
(October 29, 2016) Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/fun/notes 201617/03 hsp.pdf] Hilbert spaces are
More informationReal Analysis Notes. Thomas Goller
Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................
More informationLecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University
Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University February 7, 2007 2 Contents 1 Metric Spaces 1 1.1 Basic definitions...........................
More informationA Posteriori Estimates for Cost Functionals of Optimal Control Problems
A Posteriori Estimates for Cost Functionals of Optimal Control Problems Alexandra Gaevskaya, Ronald H.W. Hoppe,2 and Sergey Repin 3 Institute of Mathematics, Universität Augsburg, D8659 Augsburg, Germany
More informationReproducing Kernel Hilbert Spaces Class 03, 15 February 2006 Andrea Caponnetto
Reproducing Kernel Hilbert Spaces 9.520 Class 03, 15 February 2006 Andrea Caponnetto About this class Goal To introduce a particularly useful family of hypothesis spaces called Reproducing Kernel Hilbert
More informationA Concise Course on Stochastic Partial Differential Equations
A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original
More informationHandout on Newton s Method for Systems
Handout on Newton s Method for Systems The following summarizes the main points of our class discussion of Newton s method for approximately solving a system of nonlinear equations F (x) = 0, F : IR n
More informationare Banach algebras. f(x)g(x) max Example 7.4. Similarly, A = L and A = l with the pointwise multiplication
7. Banach algebras Definition 7.1. A is called a Banach algebra (with unit) if: (1) A is a Banach space; (2) There is a multiplication A A A that has the following properties: (xy)z = x(yz), (x + y)z =
More informationLecture 6: Conic Optimization September 8
IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions
More information1 Overview. 2 A Characterization of Convex Functions. 2.1 Firstorder Taylor approximation. AM 221: Advanced Optimization Spring 2016
AM 221: Advanced Optimization Spring 2016 Prof. Yaron Singer Lecture 8 February 22nd 1 Overview In the previous lecture we saw characterizations of optimality in linear optimization, and we reviewed the
More information10. Unconstrained minimization
Convex Optimization Boyd & Vandenberghe 10. Unconstrained minimization terminology and assumptions gradient descent method steepest descent method Newton s method selfconcordant functions implementation
More informationNonlinear Analysis 71 (2009) Contents lists available at ScienceDirect. Nonlinear Analysis. journal homepage:
Nonlinear Analysis 71 2009 2744 2752 Contents lists available at ScienceDirect Nonlinear Analysis journal homepage: www.elsevier.com/locate/na A nonlinear inequality and applications N.S. Hoang A.G. Ramm
More informationAN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING
AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING XIAO WANG AND HONGCHAO ZHANG Abstract. In this paper, we propose an Augmented Lagrangian Affine Scaling (ALAS) algorithm for general
More informationA Précis of Functional Analysis for Engineers DRAFT NOT FOR DISTRIBUTION. JeanFrançois Hiller and KlausJürgen Bathe
A Précis of Functional Analysis for Engineers DRAFT NOT FOR DISTRIBUTION JeanFrançois Hiller and KlausJürgen Bathe August 29, 22 1 Introduction The purpose of this précis is to review some classical
More informationKarushKuhnTucker Conditions. Lecturer: Ryan Tibshirani Convex Optimization /36725
KarushKuhnTucker Conditions Lecturer: Ryan Tibshirani Convex Optimization 10725/36725 1 Given a minimization problem Last time: duality min x subject to f(x) h i (x) 0, i = 1,... m l j (x) = 0, j =
More informationNormed & Inner Product Vector Spaces
Normed & Inner Product Vector Spaces ECE 174 Introduction to Linear & Nonlinear Optimization Ken KreutzDelgado ECE Department, UC San Diego Ken KreutzDelgado (UC San Diego) ECE 174 Fall 2016 1 / 27 Normed
More informationOn Solving LargeScale Finite Minimax Problems. using Exponential Smoothing
On Solving LargeScale Finite Minimax Problems using Exponential Smoothing E. Y. Pee and J. O. Royset This paper focuses on finite minimax problems with many functions, and their solution by means of exponential
More information13 PDEs on spatially bounded domains: initial boundary value problems (IBVPs)
13 PDEs on spatially bounded domains: initial boundary value problems (IBVPs) A prototypical problem we will discuss in detail is the 1D diffusion equation u t = Du xx < x < l, t > finitelength rod u(x,
More informationIterative regularization of nonlinear illposed problems in Banach space
Iterative regularization of nonlinear illposed problems in Banach space Barbara Kaltenbacher, University of Klagenfurt joint work with Bernd Hofmann, Technical University of Chemnitz, Frank Schöpfer and
More informationChapter 3. Characterization of best approximations. 3.1 Characterization of best approximations in Hilbert spaces
Chapter 3 Characterization of best approximations In this chapter we study properties which characterite solutions of the approximation problem. There is a big difference in the treatment of this question
More informationStatistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation
Statistics 62: L p spaces, metrics on spaces of probabilites, and connections to estimation Moulinath Banerjee December 6, 2006 L p spaces and Hilbert spaces We first formally define L p spaces. Consider
More informationAccelerating Nesterov s Method for Strongly Convex Functions
Accelerating Nesterov s Method for Strongly Convex Functions Hao Chen Xiangrui Meng MATH301, 2011 Outline The Gap 1 The Gap 2 3 Outline The Gap 1 The Gap 2 3 Our talk begins with a tiny gap For any x 0
More informationParallel Cimminotype methods for illposed problems
Parallel Cimminotype methods for illposed problems Cao Van Chung Seminar of Centro de Modelización Matemática Escuela Politécnica Naciónal Quito ModeMat, EPN, Quito Ecuador cao.vanchung@epn.edu.ec, cvanchung@gmail.com
More informationConvexity in R n. The following lemma will be needed in a while. Lemma 1 Let x E, u R n. If τ I(x, u), τ 0, define. f(x + τu) f(x). τ.
Convexity in R n Let E be a convex subset of R n. A function f : E (, ] is convex iff f(tx + (1 t)y) (1 t)f(x) + tf(y) x, y E, t [0, 1]. A similar definition holds in any vector space. A topology is needed
More informationLecture Notes of the Autumn School Modelling and Optimization with Partial Differential Equations Hamburg, September 2630, 2005
Lecture Notes of the Autumn School Modelling and Optimization with Partial Differential Equations Hamburg, September 2630, 2005 Michael Hinze, René Pinnau, Michael Ulbrich, and Stefan Ulbrich supported
More informationThe Subdifferential of Convex Deviation Measures and Risk Functions
The Subdifferential of Convex Deviation Measures and Risk Functions Nicole Lorenz Gert Wanka In this paper we give subdifferential formulas of some convex deviation measures using their conjugate functions
More informationNewton s Method. Ryan Tibshirani Convex Optimization /36725
Newton s Method Ryan Tibshirani Convex Optimization 10725/36725 1 Last time: dual correspondences Given a function f : R n R, we define its conjugate f : R n R, Properties and examples: f (y) = max x
More informationE5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equalityconstrained minimization
E5295/5B5749 Convex optimization with engineering applications Lecture 8 Smooth convex unconstrained and equalityconstrained minimization A. Forsgren, KTH 1 Lecture 8 Convex optimization 2006/2007 Unconstrained
More informationLecture 2: Convex Sets and Functions
Lecture 2: Convex Sets and Functions HyangWon Lee Dept. of Internet & Multimedia Eng. Konkuk University Lecture 2 Network Optimization, Fall 2015 1 / 22 Optimization Problems Optimization problems are
More informationELLIPTIC EQUATIONS WITH MEASURE DATA IN ORLICZ SPACES
Electronic Journal of Differential Equations, Vol. 2008(2008), No. 76, pp. 1 10. ISSN: 10726691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu (login: ftp) ELLIPTIC
More informationProper Orthogonal Decomposition for Optimal Control Problems with Mixed ControlState Constraints
Proper Orthogonal Decomposition for Optimal Control Problems with Mixed ControlState Constraints Technische Universität Berlin Martin Gubisch, Stefan Volkwein University of Konstanz March, 3 Martin Gubisch,
More informationOPTIMALITY CONDITIONS FOR STATECONSTRAINED PDE CONTROL PROBLEMS WITH TIMEDEPENDENT CONTROLS
OPTIMALITY CONDITIONS FOR STATECONSTRAINED PDE CONTROL PROBLEMS WITH TIMEDEPENDENT CONTROLS J.C. DE LOS REYES P. MERINO J. REHBERG F. TRÖLTZSCH Abstract. The paper deals with optimal control problems
More informationFunctional Analysis Review
Outline 9.520: Statistical Learning Theory and Applications February 8, 2010 Outline 1 2 3 4 Vector Space Outline A vector space is a set V with binary operations +: V V V and : R V V such that for all
More informationRESEARCH ARTICLE. A strategy of finding an initial active set for inequality constrained quadratic programming problems
Optimization Methods and Software Vol. 00, No. 00, July 200, 8 RESEARCH ARTICLE A strategy of finding an initial active set for inequality constrained quadratic programming problems Jungho Lee Computer
More informationLecture: Convex Optimization Problems
1/36 Lecture: Convex Optimization Problems http://bicmr.pku.edu.cn/~wenzw/opt2015fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/36 optimization
More informationSPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS
SPRING 006 PRELIMINARY EXAMINATION SOLUTIONS 1A. Let G be the subgroup of the free abelian group Z 4 consisting of all integer vectors (x, y, z, w) such that x + 3y + 5z + 7w = 0. (a) Determine a linearly
More informationLecture 5. Theorems of Alternatives and SelfDual Embedding
IE 8534 1 Lecture 5. Theorems of Alternatives and SelfDual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c
More informationA fullnewton step infeasible interiorpoint algorithm for linear programming based on a kernel function
A fullnewton step infeasible interiorpoint algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interiorpoint algorithm with
More informationOn the convergence rate of a forwardbackward type primaldual splitting algorithm for convex optimization problems
On the convergence rate of a forwardbackward type primaldual splitting algorithm for convex optimization problems Radu Ioan Boţ Ernö Robert Csetnek August 5, 014 Abstract. In this paper we analyze the
More informationThe Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1
October 2003 The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 by Asuman E. Ozdaglar and Dimitri P. Bertsekas 2 Abstract We consider optimization problems with equality,
More informationA Perrontype theorem on the principal eigenvalue of nonsymmetric elliptic operators
A Perrontype theorem on the principal eigenvalue of nonsymmetric elliptic operators Lei Ni And I cherish more than anything else the Analogies, my most trustworthy masters. They know all the secrets of
More informationConvergence rates in l 1 regularization when the basis is not smooth enough
Convergence rates in l 1 regularization when the basis is not smooth enough Jens Flemming, Markus Hegland November 29, 2013 Abstract Sparsity promoting regularization is an important technique for signal
More informationEE 227A: Convex Optimization and Applications October 14, 2008
EE 227A: Convex Optimization and Applications October 14, 2008 Lecture 13: SDP Duality Lecturer: Laurent El Ghaoui Reading assignment: Chapter 5 of BV. 13.1 Direct approach 13.1.1 Primal problem Consider
More informationProblem 3. Give an example of a sequence of continuous functions on a compact domain converging pointwise but not uniformly to a continuous function
Problem 3. Give an example of a sequence of continuous functions on a compact domain converging pointwise but not uniformly to a continuous function Solution. If we does not need the pointwise limit of
More information