Juan Carlos De Los Reyes 1 and Sergio González 1


ESAIM: M2AN 43 (2009) 81–117
DOI: 10.1051/m2an/2008039

ESAIM: Mathematical Modelling and Numerical Analysis

PATH FOLLOWING METHODS FOR STEADY LAMINAR BINGHAM FLOW IN CYLINDRICAL PIPES

Juan Carlos De Los Reyes and Sergio González

Abstract. This paper is devoted to the numerical solution of stationary laminar Bingham fluids by path-following methods. By using duality theory, a system that characterizes the solution of the original problem is derived. Since this system is ill-posed, a family of regularized problems is obtained and the convergence of the regularized solutions to the original one is proved. For the update of the regularization parameter, a path-following method is investigated. Based on the differentiability properties of the path, a model of the value functional and a corresponding algorithm are constructed. For the solution of the systems obtained in each path-following iteration a semismooth Newton method is proposed. Numerical experiments are performed in order to investigate the behavior and efficiency of the method, and a comparison with a penalty-Newton-Uzawa-conjugate gradient method, proposed in [Dean et al., J. Non-Newtonian Fluid Mech. 142 (2007) 36–62], is carried out.

Mathematics Subject Classification. 47J20, 76A10, 65K10, 90C33, 90C46, 90C53.

Received June 5, 2007. Revised June 2nd, 2008. Published online October 6, 2008.

1. Introduction

Bingham models are used to analyze flows of materials for which the imposed stress must exceed a critical yield stress to initiate motion, i.e., they behave as rigid bodies when the stress is low but flow as viscous fluids at high stress. Examples of Bingham fluids include toothpaste, water suspensions of clay, and sewage sludge. For the mathematical analysis of Bingham fluid flow we refer to [7,9,22]. In [22] the authors consider a variational formulation of the model and study qualitative properties of it. Existence and uniqueness of the solution and the structure of the flow are investigated.
In [7] the authors further analyze the resulting inequality of the second kind and prove, among other results, the Lipschitz stability of the solution with respect to the plasticity threshold. Further, in [4] and [9] the authors investigate the regularity of the solution for the cross section and the cavity model, respectively.

Bingham fluid flow in cylindrical pipes has been numerically treated by different methodologies. In [3], Chapter V, the authors propose a global $\varepsilon$-type regularization of the model and prove the convergence of the regularized solutions towards the original one. Direct regularization of the primal problem by twice differentiable

Keywords and phrases. Bingham fluids, variational inequalities of second kind, path-following methods, semismooth Newton methods.

Research partially supported by DAAD, EPN Quito and TU Berlin joint project: Ph.D. Programme in Applied Mathematics.

Research Group on Optimization, Departamento de Matemática, EPN Quito, Ecuador. jcdelosreyes@math.epn.edu.ec; sgonzalez@math.epn.edu.ec

Article published by EDP Sciences. © EDP Sciences, SMAI 2008

functions has also been considered in [23] in combination with Newton methods. Although this type of regularization allows the direct use of second order methods, important discrepancies of the regularized problem with respect to properties of the original model arise (cf. [6], p. 39).

An alternative to the direct regularization of the primal problem consists in the so-called multiplier approach. In [3], the authors analyze the existence of multipliers by using duality theory and propose an Uzawa-type algorithm for its numerical solution. Also by using duality theory, augmented Lagrangian methods are proposed in [8,9] and the unconditional convergence of the method is proven (see [9], Thm. 4.2). In the recent paper [6], the authors make a review of existing numerical approaches and propose a penalty-Newton-Uzawa conjugate gradient method for the solution of the problem. This approach is compared numerically with our method in Section 5.

In this paper, we consider a Tikhonov regularization of the dual problem, which by duality theory implies a local regularization of the original one. The proposed local regularization allows the application of semismooth Newton methods and leads directly to a decoupled system of equations to be solved in each semismooth Newton iteration. This constitutes an important difference with respect to other primal-dual second order approaches (see e.g. [6]), where an additional method has to be used in order to obtain a decoupled system, at the consequent computational cost.

For the update of the regularization parameter a path-following method is proposed and analyzed. The differentiability of the path and of the path value functional are studied. A model function that preserves the main properties of the value functional is proposed and a corresponding algorithm developed. After discretization in space, each regularized problem is solved by using a semismooth Newton method.
These types of methods have been successfully applied to infinite dimensional complementarity problems like the Signorini or contact problem (see [4,2,24,25]), image restoration (see [6]), optimal control problems (see [5,7]), and, in general, to infinite dimensional optimization problems (see [6,7,9,26]). Path-following strategies together with semismooth Newton methods have been investigated in [4,5,25] for variational inequalities of the first kind and constrained optimal control problems. These cases involve unilateral pointwise constraints on the state variable, which are regularized by a Moreau-Yosida technique. Differently from [4,5], our problem involves a variational inequality of the second kind. As a result, and in contrast to unilateral pointwise constrained problems, pointwise constraints on the Euclidean norm of the velocity gradient have to be considered. This fact adds new difficulties to the path analysis. In particular, extra regularity estimates for the regularized solutions have to be obtained in order to get differentiability of the path. Let us mention that, although the method developed in this article is concerned with Bingham fluid flow, the results can be extended to other variational inequalities of the second kind as well.

The paper is organized as follows. In Section 2 the original problem is stated and, using Fenchel's duality theory, a necessary condition is derived. Since the characterizing system for the original problem is ill-posed, a family of regularized problems is introduced and the convergence of the regularized solutions to the original one is proved. In Section 3, the path value functional is introduced and the differentiability of the path and the value functional is investigated. A model function which preserves the qualitative properties of the path value functional is constructed and an iterative algorithm is proposed.
In Section 4 a semismooth Newton method to solve the complementarity system for each regularized problem is stated. In Section 5, numerical experiments which show the main features of the proposed algorithm are presented.

2. Problem statement and regularization

Let $\Omega$ be a bounded domain in $\mathbb{R}^2$, with Lipschitz boundary $\Gamma$, and let $f \in L^2(\Omega)$. We are concerned with the following variational inequality of the second kind: find $y \in H_0^1(\Omega)$ such that
\[ a(y, v-y) + g\,j(v) - g\,j(y) \ge (f, v-y)_2, \quad \text{for all } v \in H_0^1(\Omega), \tag{2.1} \]
where $a(y,v) := \mu \int_\Omega \langle \nabla y(x), \nabla v(x)\rangle\, dx$, $j(v) := \int_\Omega |\nabla v(x)|\, dx$ and $(\cdot,\cdot)_2$ stands for the scalar product in $L^2(\Omega)$. The scalar product in $\mathbb{R}^N$ and the Euclidean norm are denoted by $\langle\cdot,\cdot\rangle$ and $|\cdot|$, respectively. $(\cdot,\cdot)_X$ stands

for the scalar product in a Hilbert space $X$, and $\|\cdot\|_X$ for its associated norm. The duality pairing between a Banach space $Y$ and its dual $Y^*$ is represented by $\langle\cdot,\cdot\rangle_{Y^*,Y}$. Besides that, we will use the bold notation $\mathbf{L}^2(\Omega) := L^2(\Omega) \times L^2(\Omega)$.

Inequality (2.1) models the stationary flow of a Bingham fluid in a pipe of cross section $\Omega$ (see [7,3,22]). The variable $y(x)$ stands for the velocity at $x$, $f(x)$ for the linear decay of pressures, $\mu$ for the viscosity and $g$ for the plasticity threshold of the fluid (yield stress). Problem (2.1) corresponds to the necessary condition of the following unconstrained minimization problem:
\[ \min_{y \in H_0^1(\Omega)} J(y) := \frac{1}{2}\, a(y,y) + g\, j(y) - (f,y)_2. \tag{P} \]

Remark 2.1. It can be shown (cf. [2], Thm. 6.1) that there exists a unique solution $y \in H_0^1(\Omega)$ to problem (P). Moreover, if $\Omega$ has a sufficiently regular boundary, it follows that $y \in H^2(\Omega) \cap H_0^1(\Omega)$; see [4], Theorem 1.

2.1. The Fenchel dual

In this section, we obtain the dual problem of (P) by using Fenchel's duality in infinite-dimensional spaces (see [8]). Let us start by defining the functionals $\mathcal{F}: H_0^1(\Omega) \to \mathbb{R}$ by $\mathcal{F}(y) := \frac12 a(y,y) - (f,y)_2$, and $\mathcal{G}: \mathbf{L}^2(\Omega) \to \mathbb{R}$ by $\mathcal{G}(q) := g \int_\Omega |q(x)|\, dx$. It can be easily verified that these two functionals are convex, continuous and proper. We also define the operator $\Lambda \in \mathcal{L}(H_0^1(\Omega), \mathbf{L}^2(\Omega))$ by $\Lambda v := \nabla v$. Thanks to these definitions, we may rewrite problem (P) as
\[ \inf_{y \in H_0^1(\Omega)} \{ \mathcal{F}(y) + \mathcal{G}(\Lambda y) \}. \tag{2.2} \]
Following [8], pp. 60–61, the associated dual problem of (2.2) is given by
\[ \sup_{q \in \mathbf{L}^2(\Omega)} \{ -\mathcal{F}^*(-\Lambda^* q) - \mathcal{G}^*(q) \}, \tag{2.3} \]
where $\Lambda^* \in \mathcal{L}(\mathbf{L}^2(\Omega), H^{-1}(\Omega))$ is the adjoint operator of $\Lambda$, and $\mathcal{F}^*: H^{-1}(\Omega) \to \mathbb{R}$ and $\mathcal{G}^*: \mathbf{L}^2(\Omega) \to \mathbb{R}$ denote the convex conjugate functionals of $\mathcal{F}$ and $\mathcal{G}$, respectively. We recall that, given a Hilbert space $H$ and a convex function $\varphi: H \to \mathbb{R} \cup \{-\infty, +\infty\}$, the convex conjugate functional $\varphi^*: H \to \mathbb{R} \cup \{-\infty, +\infty\}$ is defined by
\[ \varphi^*(v^*) = \sup_{v \in H} \{ \langle v^*, v \rangle - \varphi(v) \}. \]
Thus, we have that
\[ \mathcal{F}^*(-\Lambda^* q) = \sup_{v \in H_0^1(\Omega)} \left\{ \langle -\Lambda^* q, v \rangle_{H^{-1}(\Omega), H_0^1(\Omega)} - \frac12 a(v,v) + (f,v)_2 \right\}, \tag{2.4} \]
\[ \mathcal{G}^*(q) = \sup_{p \in \mathbf{L}^2(\Omega)} \left\{ (q,p)_{\mathbf{L}^2(\Omega)} - g \int_\Omega |p(x)|\, dx \right\}. \tag{2.5} \]
Note that in (2.3) we have already identified $\mathbf{L}^2(\Omega)$ with its dual. Now, let us calculate $\mathcal{F}^*(-\Lambda^* q)$. Let $q \in \mathbf{L}^2(\Omega)$ be given. From (2.4), we obtain that
\[ \mathcal{F}^*(-\Lambda^* q) = \sup_{v \in H_0^1(\Omega)} \left\{ -(q, \Lambda v)_{\mathbf{L}^2(\Omega)} - \frac12 a(v,v) + (f,v)_2 \right\}, \]

which implies, since $\big\{ -(q,\Lambda v)_{\mathbf{L}^2(\Omega)} - \frac12 a(v,v) + (f,v)_2 \big\}$ is a concave quadratic functional in $H_0^1(\Omega)$, that the supremum is attained at $v(q) \in H_0^1(\Omega)$ satisfying
\[ a(v(q), z) + (q, \Lambda z)_{\mathbf{L}^2(\Omega)} - (f,z)_2 = 0, \quad \text{for all } z \in H_0^1(\Omega). \tag{2.6} \]
Using (2.6) with $z = v(q)$, we obtain that
\[ \mathcal{F}^*(-\Lambda^* q) = -(q, \Lambda v(q))_{\mathbf{L}^2(\Omega)} - \frac12 a(v(q), v(q)) + (f, v(q))_2 = \frac12 a(v(q), v(q)). \tag{2.7} \]

Lemma 2.1. The expression
\[ (q,p)_{\mathbf{L}^2(\Omega)} \le g \int_\Omega |p(x)|\, dx, \quad \text{for all } p \in \mathbf{L}^2(\Omega), \tag{2.8} \]
is equivalent to
\[ |q(x)| \le g \quad \text{a.e. in } \Omega. \tag{2.9} \]

Proof. Let us start by showing that (2.8) implies (2.9). Assume that (2.9) does not hold, i.e., assume that $S := \{x \in \Omega : g - |q(x)| < 0 \text{ a.e.}\}$ has positive measure. Choosing $\tilde p \in \mathbf{L}^2(\Omega)$ such that
\[ \tilde p(x) := \begin{cases} q(x) & \text{in } S \\ 0 & \text{in } \Omega \setminus S \end{cases} \]
leads to
\[ g \int_\Omega |\tilde p(x)|\, dx - \int_\Omega \langle q(x), \tilde p(x)\rangle\, dx = \int_S \big(g - |q(x)|\big)\, |q(x)|\, dx < 0, \]
which is a contradiction to (2.8). Conversely, due to the fact that $|q(x)| \le g$ a.e. in $\Omega$ and thanks to the Cauchy-Schwarz inequality, we obtain, for an arbitrary $p \in \mathbf{L}^2(\Omega)$, that
\[ g \int_\Omega |p(x)|\, dx - \int_\Omega \langle q(x), p(x)\rangle\, dx \ge \int_\Omega \big(g - |q(x)|\big)\, |p(x)|\, dx \ge 0. \qquad \square \]

Lemma 2.1 immediately implies that
\[ \mathcal{G}^*(q) = \begin{cases} 0 & \text{if } |q(x)| \le g \text{ a.e. in } \Omega \\ +\infty & \text{otherwise.} \end{cases} \tag{2.10} \]
Thus, using (2.7) and (2.10) in (2.3), we obtain the dual problem
\[ \sup_{\substack{q \in \mathbf{L}^2(\Omega) \\ |q(x)| \le g \text{ a.e.}}} J^*(q) := -\frac12 a(v(q), v(q)), \quad \text{where } v(q) \text{ satisfies } a(v(q),z) - (f,z)_2 + (q, \Lambda z)_{\mathbf{L}^2(\Omega)} = 0, \text{ for all } z \in H_0^1(\Omega). \tag{P*} \]
Due to the fact that both $\mathcal{F}$ and $\mathcal{G}$ are convex and continuous, [8], Theorem 4.1, p. 59, and [8], Remark 4.2, p. 60, imply that no duality gap occurs, i.e.,
\[ \inf_{y \in H_0^1(\Omega)} J(y) = \sup_{\substack{|q(x)| \le g \text{ a.e.} \\ a(v,z) + (q,\nabla z)_{\mathbf{L}^2(\Omega)} = (f,z)_2}} J^*(q), \tag{2.11} \]
and that the dual problem (P*) has at least one solution $\bar q \in \mathbf{L}^2(\Omega)$.
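The identity (2.7) can be checked in a finite-dimensional analogue. The following sketch (not from the paper; all matrices are arbitrary stand-ins for the discretized forms) verifies that, with $a(v,w) = v^\top A w$ for a symmetric positive definite matrix $A$, $\Lambda$ replaced by a matrix $D$ and $(f,v)_2 = f^\top v$, the supremum defining $\mathcal{F}^*(-\Lambda^* q)$ is attained at $v(q)$ solving the discrete counterpart of (2.6), and its value equals $\frac12 a(v(q),v(q))$:

```python
import numpy as np

# Discrete sketch of (2.6)-(2.7): A is an SPD stand-in for the bilinear form a(.,.),
# D a stand-in for Lambda, f for the data, q for an arbitrary dual variable.
rng = np.random.default_rng(1)
n, m = 40, 41
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite
D = rng.normal(size=(m, n))
f = rng.normal(size=n)
q = rng.normal(size=m)

# v(q) from the discrete version of (2.6): A v + D^T q - f = 0
v = np.linalg.solve(A, f - D.T @ q)
# value of the concave functional at its maximizer v(q)
sup_value = -q @ (D @ v) - 0.5 * v @ A @ v + f @ v
# discrete version of (2.7)
assert np.isclose(sup_value, 0.5 * v @ A @ v)
```

The assertion holds for any data, mirroring the fact that (2.7) follows from (2.6) by pure algebra.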

Next, we will characterize the solutions of the primal and dual problems. From Fenchel's duality theory (see [8], Eqs. (4.22)–(4.25), p. 61) the solutions $\bar y$ and $\bar q$ satisfy the following extremality conditions:
\[ -\Lambda^* \bar q \in \partial\mathcal{F}(\bar y), \tag{2.12} \]
\[ \bar q \in \partial\mathcal{G}(\nabla \bar y). \tag{2.13} \]
Let us analyze (2.12). Since $\mathcal{F}$ is Gâteaux differentiable in $\bar y$, [8], Proposition 5.3, p. 23, implies that $\partial\mathcal{F}(\bar y) = \{\mathcal{F}'(\bar y)\}$. Thus, we have that (2.12) can be equivalently expressed as the following equation:
\[ a(\bar y, v) - (f,v)_2 + (\bar q, \nabla v)_{\mathbf{L}^2(\Omega)} = 0, \quad \text{for all } v \in H_0^1(\Omega). \tag{2.14} \]
On the other hand, from (2.13) and the definition of the subdifferential it follows that
\[ g \int_\Omega \big( |\nabla\bar y(x)| - |p(x)| \big)\, dx \le (\bar q, \nabla\bar y - p)_{\mathbf{L}^2(\Omega)}, \quad \text{for all } p \in \mathbf{L}^2(\Omega). \]
Then, for $p = 0$, we obtain that $g \int_\Omega |\nabla\bar y(x)|\, dx \le (\bar q, \nabla\bar y)_{\mathbf{L}^2(\Omega)}$, which implies, since $|\bar q(x)| \le g$ a.e. in $\Omega$ and by Lemma 2.1, that
\[ g \int_\Omega |\nabla\bar y(x)|\, dx = (\bar q, \nabla\bar y)_{\mathbf{L}^2(\Omega)}. \]
This last expression is equivalent to
\[ \begin{cases} \nabla\bar y(x) = 0, \quad \text{or} \\ \nabla\bar y(x) \ne 0 \ \text{ and } \ \bar q(x) = g\, \dfrac{\nabla\bar y(x)}{|\nabla\bar y(x)|}\cdot \end{cases} \tag{2.15} \]

Lemma 2.2. Equations (2.9) and (2.15) can be equivalently expressed as the following equation:
\[ \max\big(\sigma g, |\sigma\bar q(x) + \nabla\bar y(x)|\big)\, \bar q(x) = g\, \big(\sigma\bar q(x) + \nabla\bar y(x)\big), \quad \text{a.e. in } \Omega, \text{ for all } \sigma > 0. \tag{2.16} \]

Proof. We start by showing that (2.16) implies (2.9) and (2.15). From (2.16) it follows that
\[ |\bar q(x)| = g\, \frac{|\sigma\bar q(x) + \nabla\bar y(x)|}{\max\big(\sigma g, |\sigma\bar q(x) + \nabla\bar y(x)|\big)} \le g, \quad \text{a.e. in } \Omega, \]
which immediately implies (2.9). Let us split $\Omega$ into the two following disjoint sets:
\[ \{x \in \Omega : \sigma g \ge |\sigma\bar q(x) + \nabla\bar y(x)|\} \quad \text{and} \quad \{x \in \Omega : \sigma g < |\sigma\bar q(x) + \nabla\bar y(x)|\}. \tag{2.17} \]
On the set $\{x : \sigma g \ge |\sigma\bar q(x) + \nabla\bar y(x)|\}$, we have that $g(\sigma\bar q(x) + \nabla\bar y(x)) - \sigma g\,\bar q(x) = 0$, and thus $\nabla\bar y(x) = 0$. To see that $\nabla\bar y(x) \ne 0$ on the set $\{x : \sigma g < |\sigma\bar q(x) + \nabla\bar y(x)|\}$, we assume the opposite and immediately obtain that $g < |\bar q(x)|$, which contradicts the fact that $|\bar q(x)| \le g$ a.e. in $\Omega$. Moreover, from (2.16), we have that
\[ g\, \big(\sigma\bar q(x) + \nabla\bar y(x)\big) = |\sigma\bar q(x) + \nabla\bar y(x)|\, \bar q(x), \tag{2.18} \]
and it follows that
\[ g\, \nabla\bar y(x) = \big( |\sigma\bar q(x) + \nabla\bar y(x)| - \sigma g \big)\, \bar q(x). \tag{2.19} \]
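The pointwise equivalence stated in Lemma 2.2 is easy to test numerically. The following sketch (illustrative only; the sample vectors are hypothetical) evaluates the residual of (2.16) for pairs $(\bar q(x), \nabla\bar y(x))$ that satisfy, respectively violate, conditions (2.9) and (2.15):

```python
import numpy as np

def max_eq_residual(q, grad_y, g, sigma):
    """Residual of (2.16): max(sigma*g, |sigma*q + grad_y|) * q - g * (sigma*q + grad_y)."""
    w = sigma * q + grad_y
    return np.maximum(sigma * g, np.linalg.norm(w)) * q - g * w

g, sigma = 2.0, 0.5
# First case of (2.15): grad_y = 0 (and |q| <= g, as required by (2.9)).
q1, gy1 = np.array([1.0, -0.5]), np.zeros(2)
# Second case of (2.15): grad_y != 0 and q = g * grad_y / |grad_y|.
gy2 = np.array([3.0, 4.0])
q2 = g * gy2 / np.linalg.norm(gy2)
for q, gy in [(q1, gy1), (q2, gy2)]:
    assert np.allclose(max_eq_residual(q, gy, g, sigma), 0.0)
# A pair violating (2.15) does not satisfy equation (2.16).
assert not np.allclose(
    max_eq_residual(np.array([2.0, 0.0]), np.array([0.0, 1.0]), g, sigma), 0.0)
```

Note that, as in the proof, the residual vanishes for every $\sigma > 0$ in both admissible cases.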

Considering the norms in (2.18) and (2.19), we find that $|\sigma\bar q(x) + \nabla\bar y(x)| - \sigma g = |\nabla\bar y(x)|$ and $|\bar q(x)| = g$, and thus we are in the second case of (2.15).

Reciprocally, assume that (2.9) holds and consider the two cases in (2.15). If $\nabla\bar y(x) = 0$, we obtain that
\[ g\,\big(\sigma\bar q(x) + \nabla\bar y(x)\big) = \sigma g\,\bar q(x) = \max\big(\sigma g, |\sigma\bar q(x) + \nabla\bar y(x)|\big)\, \bar q(x). \]
Similarly, if $\nabla\bar y(x) \ne 0$ and $\bar q(x) = g\frac{\nabla\bar y(x)}{|\nabla\bar y(x)|}$, we have that
\[ g\,\big(\sigma\bar q(x) + \nabla\bar y(x)\big) = g\, \frac{\nabla\bar y(x)}{|\nabla\bar y(x)|}\, \big(\sigma g + |\nabla\bar y(x)|\big), \]
which implies that
\[ \max\big(\sigma g, |\sigma\bar q(x) + \nabla\bar y(x)|\big)\, \bar q(x) = \max\big(\sigma g, \sigma g + |\nabla\bar y(x)|\big)\, g\,\frac{\nabla\bar y(x)}{|\nabla\bar y(x)|} = g\, \frac{\nabla\bar y(x)}{|\nabla\bar y(x)|}\, \big(\sigma g + |\nabla\bar y(x)|\big). \]
Thus, the equivalence follows. $\square$

Summarizing, we may rewrite (2.12) and (2.13) as the following system:
\[ \begin{cases} a(\bar y, v) + (\bar q, \nabla v)_{\mathbf{L}^2(\Omega)} = (f,v)_2, & \text{for all } v \in H_0^1(\Omega), \\ \max\big(\sigma g, |\sigma\bar q(x) + \nabla\bar y(x)|\big)\, \bar q - g\,(\sigma\bar q + \nabla\bar y) = 0, & \text{a.e. in } \Omega \text{ and for } \sigma > 0. \end{cases} \tag{S} \]
We define the active and inactive sets for (S) by $\mathcal{A} := \{x \in \Omega : |\sigma\bar q(x) + \nabla\bar y(x)| \ge \sigma g\}$ and $\mathcal{I} := \Omega\setminus\mathcal{A}$, respectively.

Remark 2.2. The solution to (S) is not unique (see [2], Rem. 6.3, and [3], Chap. 5).

2.2. Regularization

In order to avoid problems related to the non-uniqueness of the solution to system (S), we propose a Tikhonov-type regularization of (P*). With this regularization procedure, we do not only achieve uniqueness of the solution but also get a local regularization for the non-differentiable term in (P). This technique has also been used for TV-based inf-convolution-type image restoration [6]. For a parameter $\gamma > 0$ we consider the following regularized dual problem:
\[ \sup_{\substack{q \in \mathbf{L}^2(\Omega) \\ |q(x)| \le g \text{ a.e.}}} J^*_\gamma(q) := -\frac12 a(v(q), v(q)) - \frac{1}{2\gamma}\,\|q\|^2_{\mathbf{L}^2(\Omega)}, \quad \text{where } v(q) \text{ satisfies} \tag{P*$_\gamma$} \]
\[ a(v(q), z) + (q, \nabla z)_{\mathbf{L}^2(\Omega)} - (f,z)_2 = 0, \quad \text{for all } z \in H_0^1(\Omega). \]
Therefore, the regularized problem is obtained from (P*) by subtracting the term $\frac{1}{2\gamma}\|q\|^2_{\mathbf{L}^2(\Omega)}$ from the objective functional. Further, it is possible to show that this penalization corresponds to a regularization of the primal problem. Consider the continuously differentiable function $\psi_\gamma: \mathbb{R}^2 \to \mathbb{R}$, defined by
\[ \psi_\gamma(z) := \begin{cases} g\,|z| - \dfrac{g^2}{2\gamma} & \text{if } \gamma|z| \ge g, \\[2mm] \dfrac{\gamma}{2}\,|z|^2 & \text{if } \gamma|z| < g. \end{cases} \tag{2.20} \]

By using this function, which is a local regularization of the Euclidean norm, we obtain the following regularized version of (P):
\[ \min_{y \in H_0^1(\Omega)} J_\gamma(y) := \frac12\, a(y,y) + \int_\Omega \psi_\gamma(\nabla y)\, dx - (f,y)_2. \tag{P$_\gamma$} \]
Furthermore, we are able to state the following theorem.

Theorem 2.3. Problem (P*$_\gamma$) is the dual problem of (P$_\gamma$) and we have
\[ J^*_\gamma(q_\gamma) = J_\gamma(y_\gamma), \tag{2.21} \]
where $q_\gamma$ and $y_\gamma$ denote the solutions to (P*$_\gamma$) and (P$_\gamma$), respectively.

Proof. In order to calculate the dual problem to (P$_\gamma$) we use the same argumentation used in Section 2.1 for the original problem (P). We only have to replace the functional $\mathcal{G}$ in (2.2) by
\[ \mathcal{G}_\gamma(q) = \int_\Omega \psi_\gamma(q)\, dx, \tag{2.22} \]
with $\psi_\gamma$ as in (2.20). Thus, for $q \in \mathbf{L}^2(\Omega)$, we have
\[ \mathcal{G}^*_\gamma(q) = \sup_{p \in \mathbf{L}^2(\Omega)} \Bigg\{ \int_{\{x:\, \gamma|p(x)| \ge g \text{ a.e.}\}} \Big[ \langle q(x), p(x)\rangle - g\,|p(x)| + \frac{g^2}{2\gamma} \Big]\, dx + \int_{\{x:\, \gamma|p(x)| < g \text{ a.e.}\}} \Big[ \langle q(x), p(x)\rangle - \frac{\gamma}{2}\,|p(x)|^2 \Big]\, dx \Bigg\}. \tag{2.23} \]
From (2.23) it is possible to conclude, by proceeding as in the proof of Lemma 2.1, that $\mathcal{G}^*_\gamma(q) = \infty$ unless $|q(x)| \le g$. Suppose now that $|q(x)| \le g$. We define the functional $\Psi: \mathbf{L}^2(\Omega) \to \mathbb{R}$ by
\[ \Psi(p) := \int_{\{x:\, \gamma|p(x)| \ge g \text{ a.e.}\}} \Big[ \langle q(x), p(x)\rangle - g\,|p(x)| + \frac{g^2}{2\gamma} \Big]\, dx + \int_{\{x:\, \gamma|p(x)| < g \text{ a.e.}\}} \Big[ \langle q(x), p(x)\rangle - \frac{\gamma}{2}\,|p(x)|^2 \Big]\, dx. \]
By introducing, for any $p \in \mathbf{L}^2(\Omega)$, the function $\tilde p \in \mathbf{L}^2(\Omega)$ defined by
\[ \tilde p(x) := \begin{cases} p(x) & \text{a.e. in } \{x:\, \gamma|p(x)| < g \text{ a.e.}\} \\ 0 & \text{a.e. in } \{x:\, \gamma|p(x)| \ge g \text{ a.e.}\}, \end{cases} \]
it is easy to verify that $\Psi(p) \le \Psi(\tilde p)$, which yields
\[ \sup_{p \in \mathbf{L}^2(\Omega)} \Psi(p) = \sup_{\substack{p \in \mathbf{L}^2(\Omega) \\ \gamma|p(x)| \le g \text{ a.e. in } \Omega}} \Psi(p). \tag{2.24} \]
Therefore, in order to calculate the supremum in (2.23), we only have to consider the last term in (2.24). Since this expression is a concave quadratic functional, the maximizer is easily calculated as $p = \frac{q}{\gamma}$, which
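The function $\psi_\gamma$ of (2.20) can be sketched directly; its gradient is $\nabla\psi_\gamma(z) = g\gamma z / \max(g, \gamma|z|)$, which is exactly the pointwise multiplier expression that appears later in system (S$_\gamma$). The following check (illustrative, with hypothetical parameter values) confirms the gradient by finite differences and the matching of the two branches at the switching surface $\gamma|z| = g$:

```python
import numpy as np

def psi(z, g, gamma):
    """Local regularization psi_gamma of the Euclidean norm, eq. (2.20)."""
    nz = np.linalg.norm(z)
    return g * nz - g**2 / (2 * gamma) if gamma * nz >= g else 0.5 * gamma * nz**2

def grad_psi(z, g, gamma):
    """Gradient g*gamma*z / max(g, gamma*|z|), i.e. the regularized multiplier."""
    return g * gamma * z / np.maximum(g, gamma * np.linalg.norm(z))

g, gamma = 1.5, 10.0
z = np.array([0.3, -0.4])        # gamma*|z| = 5 >= g: first branch of (2.20)
h = 1e-6
# finite-difference check that grad_psi is the derivative of psi
fd = np.array([(psi(z + h * e, g, gamma) - psi(z - h * e, g, gamma)) / (2 * h)
               for e in np.eye(2)])
assert np.allclose(fd, grad_psi(z, g, gamma), atol=1e-5)
# the two branches of psi meet with equal value g^2/(2*gamma) at gamma*|z| = g
zc = np.array([g / gamma, 0.0])
assert abs((g * np.linalg.norm(zc) - g**2 / (2 * gamma))
           - 0.5 * gamma * np.linalg.norm(zc)**2) < 1e-12
```

On the second branch, $\nabla\psi_\gamma(z) = \gamma z$, consistent with the inactive-set expression in (2.31) below.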

implies that
\[ \mathcal{G}^*_\gamma(q) = \begin{cases} \dfrac{1}{2\gamma}\,\|q\|^2_{\mathbf{L}^2(\Omega)} & \text{if } |q(x)| \le g \\ \infty & \text{otherwise.} \end{cases} \tag{2.25} \qquad \square \]

Note that this regularization procedure turns the primal problem into the unconstrained minimization of a continuously differentiable functional, while the corresponding dual problem is still the constrained minimization of a quadratic functional.

Remark 2.3. Due to the regularization procedure, the objective functional of (P*$_\gamma$) results in a $\mathbf{L}^2(\Omega)$-uniformly concave functional. Thus, (P*$_\gamma$) admits a unique solution $q_\gamma \in \mathbf{L}^2(\Omega)$ for each fixed $\gamma > 0$. Additionally, since $a(\cdot,\cdot)$ is a coercive and bicontinuous form and due to the fact that $J_\gamma$ is strictly convex and differentiable, [2], Theorem 1.6, implies that (P$_\gamma$) has also a unique solution.

Theorem 2.4. Let $y_\gamma$ be the solution to (P$_\gamma$). Then $y_\gamma \in H^2(\Omega)\cap H_0^1(\Omega)$ and there exists a constant $K > 0$, independent of $\gamma$, such that
\[ \|y_\gamma\|_{H^2} \le K\,\big( \|f\|_{L^2} + C \big), \tag{2.26} \]
for some $C > 0$.

Proof. Note that $y_\gamma$ can be characterized as the solution of the following equation:
\[ \begin{cases} f \in -\mu\Delta y_\gamma + \partial\varphi(y_\gamma) & \text{in } \Omega \\ y_\gamma = 0 & \text{on } \Gamma, \end{cases} \tag{2.27} \]
where $\partial\varphi$ denotes the subdifferential of the convex and lower semicontinuous functional $\varphi: L^2(\Omega) \to \mathbb{R}\cup\{\infty\}$ defined by
\[ \varphi(u) = \begin{cases} \int_\Omega \psi_\gamma(\nabla u)\, dx & \text{if } \psi_\gamma(\nabla u) \in L^1(\Omega) \\ +\infty & \text{elsewhere.} \end{cases} \]
Thus, [4], Lemma 1, implies the result. $\square$

Remark 2.4. Theorem 2.4 implies that $\nabla y_\gamma \in \mathbf{H}^1(\Omega)$ and, since $n = 2$, $\nabla y_\gamma \in \mathbf{L}^q(\Omega)$, for all $q \in [1,\infty)$. Moreover, from (2.26) we conclude that $\nabla y_\gamma$ is uniformly bounded in $\mathbf{L}^q(\Omega)$ for all $q \in [1,\infty)$.

Next, we characterize the solutions to (P$_\gamma$) and (P*$_\gamma$) ($y_\gamma$ and $q_\gamma$, respectively). From Fenchel's duality theory, these solutions satisfy the following system:
\[ -\Lambda^* q_\gamma \in \partial\mathcal{F}(y_\gamma), \tag{2.28} \]
\[ q_\gamma \in \partial\mathcal{G}_\gamma(\nabla y_\gamma). \tag{2.29} \]
Note that both $\mathcal{F}$ and $\mathcal{G}_\gamma$ are differentiable in $y_\gamma$ and $\nabla y_\gamma$, respectively. Thus, $\partial\mathcal{F}(y_\gamma)$ and $\partial\mathcal{G}_\gamma(\nabla y_\gamma)$ consist only of the respective Gâteaux derivatives. Since (2.28) is similar to equation (2.12), it is equivalent to
\[ a(y_\gamma, v) + (q_\gamma, \nabla v)_{\mathbf{L}^2(\Omega)} - (f,v)_2 = 0, \quad \text{for all } v \in H_0^1(\Omega). \tag{2.30} \]
On the other hand, due to the differentiability of $\mathcal{G}_\gamma$, equation (2.29) can be written as
\[ (q_\gamma, p)_{\mathbf{L}^2(\Omega)} = \int_{\mathcal{A}_\gamma} g\, \frac{\langle \nabla y_\gamma, p\rangle}{|\nabla y_\gamma|}\, dx + \int_{\Omega\setminus\mathcal{A}_\gamma} \gamma\, \langle \nabla y_\gamma, p\rangle\, dx, \quad \text{for all } p \in \mathbf{L}^2(\Omega), \]
or equivalently as
\[ \begin{cases} q_\gamma(x) = \gamma\,\nabla y_\gamma(x) & \text{a.e. in } \Omega\setminus\mathcal{A}_\gamma, \\ q_\gamma(x) = g\, \dfrac{\nabla y_\gamma(x)}{|\nabla y_\gamma(x)|} & \text{a.e. in } \mathcal{A}_\gamma, \end{cases} \tag{2.31} \]

where $\mathcal{A}_\gamma = \{x \in \Omega : \gamma|\nabla y_\gamma(x)| \ge g, \text{ a.e.}\}$. Consequently, the solutions $(y_\gamma, q_\gamma)$ of the regularized problems (P$_\gamma$) and (P*$_\gamma$) satisfy the system
\[ \begin{cases} a(y_\gamma, v) + (q_\gamma, \nabla v)_{\mathbf{L}^2(\Omega)} - (f,v)_2 = 0, & \text{for all } v \in H_0^1(\Omega), \\ \max\big(g, \gamma|\nabla y_\gamma(x)|\big)\, q_\gamma(x) - g\gamma\,\nabla y_\gamma(x) = 0, & \text{a.e. in } \Omega, \end{cases} \tag{S$_\gamma$} \]
for all $\gamma > 0$. Clearly $|q_\gamma(x)| = g$ on $\mathcal{A}_\gamma$ and $|q_\gamma(x)| < g$ on $\mathcal{I}_\gamma := \Omega\setminus\mathcal{A}_\gamma$. We call the sets $\mathcal{A}_\gamma$ and $\mathcal{I}_\gamma$ the active and inactive sets for (S$_\gamma$), respectively. In the following theorem the convergence of the regularized solutions towards the original one is verified.

Theorem 2.5. The solutions $y_\gamma$ of the regularized primal problems converge to the solution $\bar y$ of the original problem strongly in $H_0^1(\Omega)$ as $\gamma\to\infty$. Moreover, the solutions $q_\gamma$ of the regularized dual problems converge to a solution $\bar q$ of the original dual problem weakly in $\mathbf{L}^2(\Omega)$.

Proof. Let us start by recalling that $(\bar y, \bar q)$ and $(y_\gamma, q_\gamma)$ satisfy equations (2.14) and (2.30), respectively. Thus, by subtracting (2.30) from (2.14), we obtain that
\[ \mu \int_\Omega \langle \nabla(\bar y - y_\gamma), \nabla v\rangle\, dx = -\int_\Omega \langle \bar q - q_\gamma, \nabla v\rangle\, dx, \quad \text{for all } v \in H_0^1(\Omega). \tag{2.32} \]
Further, choosing $v := \bar y - y_\gamma$ in (2.32), we have that
\[ \mu \int_\Omega |\nabla(\bar y - y_\gamma)|^2\, dx = -\int_\Omega \langle \bar q - q_\gamma, \nabla(\bar y - y_\gamma)\rangle\, dx. \tag{2.33} \]
Next, we establish pointwise bounds for $-\langle (\bar q - q_\gamma)(x), \nabla(\bar y - y_\gamma)(x)\rangle$ on the disjoint sets $\mathcal{A}\cap\mathcal{A}_\gamma$, $\mathcal{A}\cap\mathcal{I}_\gamma$, $\mathcal{A}_\gamma\cap\mathcal{I}$ and $\mathcal{I}\cap\mathcal{I}_\gamma$.

On $\mathcal{A}\cap\mathcal{A}_\gamma$: Here, we use the facts that $|\bar q(x)| = |q_\gamma(x)| = g$, $\bar q(x) = g\frac{\nabla\bar y(x)}{|\nabla\bar y(x)|}$ and $q_\gamma(x) = g\frac{\nabla y_\gamma(x)}{|\nabla y_\gamma(x)|}$. Thus, we have the following pointwise estimate:
\[ -\langle (\bar q - q_\gamma)(x), \nabla(\bar y - y_\gamma)(x)\rangle = -g\,|\nabla\bar y(x)| - g\,|\nabla y_\gamma(x)| + \langle \bar q(x), \nabla y_\gamma(x)\rangle + \langle q_\gamma(x), \nabla\bar y(x)\rangle \le -g\,|\nabla\bar y(x)| - g\,|\nabla y_\gamma(x)| + g\,|\nabla y_\gamma(x)| + g\,|\nabla\bar y(x)| = 0. \tag{2.34} \]

On $\mathcal{A}\cap\mathcal{I}_\gamma$: Here, we know that $\gamma\nabla y_\gamma(x) = q_\gamma(x)$, $|q_\gamma(x)| < g$, $|\bar q(x)| = g$ and $\bar q(x) = g\frac{\nabla\bar y(x)}{|\nabla\bar y(x)|}$. Hence, we get
\[ -\langle (\bar q - q_\gamma)(x), \nabla(\bar y - y_\gamma)(x)\rangle \le -g\,|\nabla\bar y(x)| + \frac1\gamma\,\langle \bar q(x), q_\gamma(x)\rangle + g\,|\nabla\bar y(x)| - \frac1\gamma\,|q_\gamma(x)|^2 \le \frac1\gamma\,\big( g^2 - |q_\gamma(x)|^2 \big) < \frac{g^2}{\gamma}\cdot \tag{2.35} \]

On $\mathcal{A}_\gamma\cap\mathcal{I}$: In this set it holds that $\nabla\bar y(x) = 0$ and $q_\gamma(x) = g\frac{\nabla y_\gamma(x)}{|\nabla y_\gamma(x)|}$. Then, we have that
\[ -\langle (\bar q - q_\gamma)(x), \nabla(\bar y - y_\gamma)(x)\rangle = -g\,|\nabla y_\gamma(x)| + \langle \bar q(x), \nabla y_\gamma(x)\rangle \le 0. \tag{2.36} \]

On $\mathcal{I}\cap\mathcal{I}_\gamma$: Here, we have that $\nabla\bar y(x) = 0$, $\gamma\nabla y_\gamma(x) = q_\gamma(x)$, $|\bar q(x)| \le g$ and $|q_\gamma(x)| < g$. Hence,
\[ -\langle (\bar q - q_\gamma)(x), \nabla(\bar y - y_\gamma)(x)\rangle = -\frac1\gamma\,|q_\gamma(x)|^2 + \frac1\gamma\,\langle \bar q(x), q_\gamma(x)\rangle \le \frac1\gamma\,\big( g^2 - |q_\gamma(x)|^2 \big) < \frac{g^2}{\gamma}\cdot \tag{2.37} \]

Since $\mathcal{A}\cap\mathcal{A}_\gamma$, $\mathcal{A}\cap\mathcal{I}_\gamma$, $\mathcal{A}_\gamma\cap\mathcal{I}$ and $\mathcal{I}\cap\mathcal{I}_\gamma$ provide a disjoint partitioning of $\Omega$, (2.33) and the estimates (2.34), (2.35), (2.36) and (2.37) imply that
\[ \mu \int_\Omega |\nabla(\bar y - y_\gamma)|^2\, dx < \frac1\gamma \int_\Omega g^2\, dx. \tag{2.38} \]
Thus, we conclude that $y_\gamma \to \bar y$ strongly in $H_0^1(\Omega)$ as $\gamma\to\infty$. On the other hand, since $y_\gamma \to \bar y$ strongly in $H_0^1(\Omega)$, (2.32) implies that
\[ q_\gamma \rightharpoonup \bar q, \quad \text{weakly in } \operatorname{grad}(H_0^1(\Omega)) \subset \mathbf{L}^2(\Omega), \tag{2.39} \]
where $\operatorname{grad}(H_0^1(\Omega)) := \{q \in \mathbf{L}^2(\Omega) : \exists\, v \in H_0^1(\Omega) \text{ such that } q = \nabla v\}$. $\square$

3. Path-following method

In this section, we investigate the application of continuation strategies to properly control the increase of $\gamma$. Our main objective is to develop an automatic updating strategy for the regularization parameter which guarantees an efficient and fast approximation of the solution to problem (P). For that purpose, we investigate the properties of the path $\gamma \mapsto (y_\gamma, q_\gamma) \in H_0^1(\Omega)\times\mathbf{L}^2(\Omega)$, with $\gamma \in (0,\infty)$, and construct an appropriate model of the value functional, which will be used in an updating algorithm.

3.1. The primal-dual path

In this part we introduce the primal-dual path and discuss some of its properties. Specifically, Lipschitz continuity and differentiability of the path are obtained.

Definition 3.1. The family of solutions $\mathcal{C} = \{(y_\gamma, q_\gamma) : \gamma \in [M,\infty)\}$ to (S$_\gamma$), with $M$ a positive constant, considered as a subset of $H_0^1(\Omega)\times\mathbf{L}^2(\Omega)$, is called the primal-dual path associated to (P$_\gamma$)-(P*$_\gamma$).

Lemma 3.1. The path $\mathcal{C}$ is bounded in $H_0^1(\Omega)\times\mathbf{L}^2(\Omega)$, i.e., there exists $C > 0$, independent of $\gamma$, such that $\|y_\gamma\|_{H_0^1} + \|q_\gamma\|_{\mathbf{L}^2} \le C$.

Proof. First, from the fact that $|q_\gamma(x)| \le g$, for every $\gamma > 0$, we conclude that $q_\gamma$ is uniformly bounded in $\mathbf{L}^2(\Omega)$. Furthermore, Theorem 2.4 implies that $y_\gamma$ is uniformly bounded in $H_0^1(\Omega)$. Therefore, $\mathcal{C}$ is bounded in $H_0^1(\Omega)\times\mathbf{L}^2(\Omega)$. $\square$

Theorem 3.2. Let $\gamma \in [M,\infty)$. The function $\gamma \mapsto y_\gamma$ is globally Lipschitz continuous in $W^{1,p}(\Omega)$, for $2 \le p < 2 + \min(s-2, \epsilon)$, where $s > 2$ and $\epsilon$ depends on $\mu$, $\gamma$ and $\Omega$.

Proof. Let $\gamma, \hat\gamma \in [M,\infty)$. We introduce the notations $\delta_y := y_{\hat\gamma} - y_\gamma$, $\theta_\gamma(x) := \max\big(g, \gamma|\nabla y_\gamma(x)|\big)$ and $\delta_\theta := \theta_{\hat\gamma} - \theta_\gamma$. It is easy to verify that the following expression holds:
\[ |\delta_\theta(x)| \le \big| \hat\gamma|\nabla y_{\hat\gamma}(x)| - \gamma|\nabla y_\gamma(x)| \big|, \quad \text{a.e. in } \Omega, \]
which implies that
\[ |\delta_\theta(x)| \le |\hat\gamma - \gamma|\,|\nabla y_{\hat\gamma}(x)| + \gamma\,|\nabla\delta_y(x)|, \quad \text{a.e. in } \Omega, \tag{3.1} \]
and, similarly,
\[ |\delta_\theta(x)| \le |\hat\gamma - \gamma|\,|\nabla y_\gamma(x)| + \hat\gamma\,|\nabla\delta_y(x)|, \quad \text{a.e. in } \Omega. \tag{3.2} \]
Next, we separate the proof in two parts. First, we prove the Lipschitz continuity of $\gamma\mapsto y_\gamma$ in $H_0^1(\Omega)$, and then, by introducing an auxiliary problem, we obtain the Lipschitz continuity in $W^{1,p}(\Omega)$, for some $p > 2$.

In $H_0^1(\Omega)$: From (S$_\gamma$), we know that
\[ a(\delta_y, \delta_y) = -g\left( \frac{\hat\gamma\,\nabla y_{\hat\gamma}}{\theta_{\hat\gamma}} - \frac{\gamma\,\nabla y_\gamma}{\theta_\gamma},\ \nabla\delta_y \right)_{\mathbf{L}^2(\Omega)} = -g(\hat\gamma-\gamma)\left( \frac{\nabla y_{\hat\gamma}}{\theta_{\hat\gamma}},\ \nabla\delta_y \right)_{\mathbf{L}^2(\Omega)} - g\gamma\left( \frac{\nabla y_{\hat\gamma}}{\theta_{\hat\gamma}} - \frac{\nabla y_\gamma}{\theta_\gamma},\ \nabla\delta_y \right)_{\mathbf{L}^2(\Omega)}, \tag{3.3} \]
which, since $\hat\gamma|\nabla y_{\hat\gamma}(x)| \le \theta_{\hat\gamma}(x)$ and thus $\frac{|\nabla y_{\hat\gamma}(x)|}{\theta_{\hat\gamma}(x)} \le \frac1{\hat\gamma} \le \frac1M$ a.e. in $\Omega$, implies the existence of a constant $K > 0$ such that
\[ a(\delta_y, \delta_y) \le K\,|\hat\gamma-\gamma|\,\|\delta_y\|_{H_0^1} - g\gamma\left( \frac{\nabla y_{\hat\gamma}}{\theta_{\hat\gamma}} - \frac{\nabla y_\gamma}{\theta_\gamma},\ \nabla\delta_y \right)_{\mathbf{L}^2(\Omega)}. \tag{3.4} \]
Next, let us analyze the second term on the right hand side of (3.4):
\[ -g\gamma\left( \frac{\nabla y_{\hat\gamma}}{\theta_{\hat\gamma}} - \frac{\nabla y_\gamma}{\theta_\gamma},\ \nabla\delta_y \right)_{\mathbf{L}^2(\Omega)} = -g\gamma\left( \frac{\theta_\gamma\,\nabla\delta_y - \delta_\theta\,\nabla y_\gamma}{\theta_{\hat\gamma}\theta_\gamma},\ \nabla\delta_y \right)_{\mathbf{L}^2(\Omega)} = -g\gamma \int_\Omega \frac{|\nabla\delta_y(x)|^2}{\theta_{\hat\gamma}}\, dx + g\gamma\left( \frac{\delta_\theta\,\nabla y_\gamma}{\theta_{\hat\gamma}\theta_\gamma},\ \nabla\delta_y \right)_{\mathbf{L}^2(\Omega)}. \tag{3.5} \]
Since $\gamma|\nabla y_\gamma(x)| \le \theta_\gamma(x)$ a.e. in $\Omega$, the Cauchy-Schwarz inequality implies that
\[ g\gamma\left( \frac{\delta_\theta\,\nabla y_\gamma}{\theta_{\hat\gamma}\theta_\gamma},\ \nabla\delta_y \right)_{\mathbf{L}^2(\Omega)} \le g\gamma \int_\Omega \frac{|\delta_\theta(x)|\,|\nabla y_\gamma(x)|\,|\nabla\delta_y(x)|}{\theta_{\hat\gamma}(x)\,\theta_\gamma(x)}\, dx \le g \int_\Omega \frac{|\delta_\theta(x)|\,|\nabla\delta_y(x)|}{\theta_{\hat\gamma}(x)}\, dx. \]
Again, since $\hat\gamma|\nabla y_{\hat\gamma}(x)| \le \theta_{\hat\gamma}(x)$ a.e. in $\Omega$, (3.1) implies that
\[ g \int_\Omega \frac{|\delta_\theta|\,|\nabla\delta_y|}{\theta_{\hat\gamma}}\, dx \le g\,|\hat\gamma-\gamma| \int_\Omega \frac{|\nabla y_{\hat\gamma}|}{\theta_{\hat\gamma}}\,|\nabla\delta_y|\, dx + g\gamma \int_\Omega \frac{|\nabla\delta_y|^2}{\theta_{\hat\gamma}}\, dx \le \frac gM\,\operatorname{meas}(\Omega)^{1/2}\,|\hat\gamma-\gamma|\,\|\delta_y\|_{H_0^1} + g\gamma \int_\Omega \frac{|\nabla\delta_y|^2}{\theta_{\hat\gamma}}\, dx. \tag{3.6} \]
Finally, using (3.6) and (3.5) in (3.4), we have that
\[ a(\delta_y, \delta_y) \le \Big( K + \frac gM\,\operatorname{meas}(\Omega)^{1/2} \Big)\,|\hat\gamma-\gamma|\,\|\delta_y\|_{H_0^1}, \]
which, due to the coercivity of $a(\cdot,\cdot)$, implies the existence of a constant $L > 0$ such that
\[ \|y_{\hat\gamma} - y_\gamma\|_{H_0^1} \le L\,|\hat\gamma-\gamma|. \]

In $W^{1,p}(\Omega)$: First, note that (3.2) implies the existence of $\zeta(x) \in [-1,1]$ such that
\[ \delta_\theta(x) = \zeta(x)\,\big[ |\hat\gamma-\gamma|\,|\nabla y_\gamma(x)| + \hat\gamma\,|\nabla\delta_y(x)| \big], \quad \text{a.e. in } \Omega. \tag{3.7} \]
From (S$_\gamma$), we have, for all $v \in H_0^1(\Omega)$, that
\[ a(\delta_y, v) + g\hat\gamma\left( \frac{\nabla\delta_y}{\theta_{\hat\gamma}},\ \nabla v \right)_{\mathbf{L}^2(\Omega)} - g\hat\gamma\left( \frac{\delta_\theta\,\nabla y_\gamma}{\theta_{\hat\gamma}\theta_\gamma},\ \nabla v \right)_{\mathbf{L}^2(\Omega)} = -g(\hat\gamma-\gamma)\left( \frac{\nabla y_\gamma}{\theta_\gamma},\ \nabla v \right)_{\mathbf{L}^2(\Omega)}, \tag{3.8} \]
which, together with (3.7), implies that
\[ a(\delta_y, v) + g\hat\gamma\left( \frac{\nabla\delta_y}{\theta_{\hat\gamma}},\ \nabla v \right)_{\mathbf{L}^2(\Omega)} - g\hat\gamma^2\left( \frac{\zeta(x)\,|\nabla\delta_y|}{\theta_{\hat\gamma}\theta_\gamma}\,\nabla y_\gamma,\ \nabla v \right)_{\mathbf{L}^2(\Omega)} = -g(\hat\gamma-\gamma)\left( \frac{\nabla y_\gamma}{\theta_\gamma},\ \nabla v \right)_{\mathbf{L}^2(\Omega)} + g\hat\gamma\,|\hat\gamma-\gamma|\left( \frac{\zeta(x)\,|\nabla y_\gamma|}{\theta_{\hat\gamma}\theta_\gamma}\,\nabla y_\gamma,\ \nabla v \right)_{\mathbf{L}^2(\Omega)}, \tag{3.9} \]
for all $v \in H_0^1(\Omega)$. Defining $\tilde f := -g(\hat\gamma-\gamma)\frac{\nabla y_\gamma}{\theta_\gamma} + g\hat\gamma\,|\hat\gamma-\gamma|\,\frac{\zeta(x)\,|\nabla y_\gamma|}{\theta_{\hat\gamma}\theta_\gamma}\,\nabla y_\gamma$, equation (3.9) motivates the introduction of the following auxiliary problem: find $w \in H_0^1(\Omega)$ such that
\[ a(w, v) + (\beta(w), \nabla v)_{\mathbf{L}^2(\Omega)} = \big( \tilde f, \nabla v \big)_{\mathbf{L}^2(\Omega)}, \quad \text{for all } v \in H_0^1(\Omega), \tag{3.10} \]
where $\beta(w) := g\hat\gamma\left[ \dfrac{\nabla w}{\theta_{\hat\gamma}} - \dfrac{\zeta(x)\,\hat\gamma}{\theta_{\hat\gamma}\theta_\gamma}\,\Big\langle \dfrac{\nabla\delta_y}{|\nabla\delta_y|},\ \nabla w \Big\rangle\, \nabla y_\gamma \right]$. Clearly, $\delta_y$ is also a solution of (3.10). Note that
\[ |\tilde f(x)| \le g\,|\hat\gamma-\gamma|\,\frac{|\nabla y_\gamma(x)|}{\theta_\gamma(x)} + g\,|\hat\gamma-\gamma|\,\frac{\hat\gamma\,|\nabla y_\gamma(x)|}{\theta_{\hat\gamma}(x)}\,\frac{|\nabla y_\gamma(x)|}{\theta_\gamma(x)}, \quad \text{a.e. in } \Omega, \]
which, since $\gamma|\nabla y_\gamma(x)| \le \theta_\gamma(x)$ and $\hat\gamma|\nabla y_\gamma(x)| \le \theta_{\hat\gamma}(x)$, a.e. in $\Omega$, implies that $|\tilde f(x)| \le \frac{2g}{M}\,|\hat\gamma-\gamma|$ a.e. in $\Omega$. Therefore,
\[ \|\tilde f\|_{\mathbf{L}^s} \le \frac{2g}{M}\,\operatorname{meas}(\Omega)^{1/s}\,|\hat\gamma-\gamma|, \quad \text{for } s \ge 1. \tag{3.11} \]
Next, let us define the matrix $A(x) \in \mathbb{R}^{2\times2}$ by
\[ A(x) := \begin{pmatrix} \partial_{x_1} y_\gamma(x)\,\partial_{x_1}\delta_y(x) & \partial_{x_1} y_\gamma(x)\,\partial_{x_2}\delta_y(x) \\[1mm] \partial_{x_2} y_\gamma(x)\,\partial_{x_1}\delta_y(x) & \partial_{x_2} y_\gamma(x)\,\partial_{x_2}\delta_y(x) \end{pmatrix} = \nabla y_\gamma(x)\,\nabla\delta_y(x)^\top, \quad \text{a.e. in } \Omega. \]
Then, we can rewrite $\beta(w)$ as
\[ \beta(w) = g\hat\gamma\left[ \frac{I}{\theta_{\hat\gamma}} - \frac{\zeta(x)\,\hat\gamma}{\theta_{\hat\gamma}\theta_\gamma\,|\nabla\delta_y|}\, A(x) \right] \nabla w, \tag{3.12} \]
where $I$ stands for the $2\times2$ identity matrix. Moreover, we can rewrite the auxiliary problem (3.10) as
\[ \int_\Omega \langle \alpha(x)\,\nabla w, \nabla v\rangle\, dx = \int_\Omega \langle \tilde f, \nabla v\rangle\, dx, \tag{3.13} \]

where $\alpha(x) := \Big( \mu + \dfrac{g\hat\gamma}{\theta_{\hat\gamma}(x)} \Big)\, I - \dfrac{g\hat\gamma^2\,\zeta(x)}{\theta_{\hat\gamma}(x)\theta_\gamma(x)\,|\nabla\delta_y(x)|}\, A(x)$, a.e. in $\Omega$. Multiplying $\alpha(x)$ by $\xi \in \mathbb{R}^2$ and taking the scalar product with $\xi$, we obtain that
\[ \langle \alpha(x)\xi, \xi\rangle = \mu\,|\xi|^2 + \frac{g\hat\gamma}{\theta_{\hat\gamma}(x)}\,|\xi|^2 - \frac{g\hat\gamma^2\,\zeta(x)}{\theta_{\hat\gamma}(x)\theta_\gamma(x)\,|\nabla\delta_y(x)|}\,\langle \nabla y_\gamma(x), \xi\rangle\,\langle \nabla\delta_y(x), \xi\rangle, \quad \text{a.e. in } \Omega. \]
Furthermore, since $|\zeta(x)| \le 1$ and $\gamma|\nabla y_\gamma(x)| \le \theta_\gamma(x)$ a.e. in $\Omega$, the Cauchy-Schwarz inequality implies that
\[ \frac{g\hat\gamma^2\,\zeta(x)}{\theta_{\hat\gamma}(x)\theta_\gamma(x)\,|\nabla\delta_y(x)|}\,\big| \langle \nabla y_\gamma(x), \xi\rangle\,\langle \nabla\delta_y(x), \xi\rangle \big| \le \frac{g\hat\gamma}{\theta_{\hat\gamma}(x)}\,|\xi|^2, \quad \text{a.e. in } \Omega, \]
which, due to the fact that $\frac{g\hat\gamma}{\theta_{\hat\gamma}(x)} \le \hat\gamma$ a.e. in $\Omega$ (recall that $\theta_{\hat\gamma} \ge g$), implies that
\[ \mu\,|\xi|^2 \le \langle \alpha(x)\xi, \xi\rangle \le (\mu + 2\hat\gamma)\,|\xi|^2, \quad \text{a.e. in } \Omega. \tag{3.14} \]
Thus, [3], Theorem 2.1, implies the existence of a constant $c_p$ such that
\[ \|w\|_{W^{1,p}} \le c_p\,\|\tilde f\|_{\mathbf{L}^s}, \quad \text{for } s > 2 \text{ and } 2 \le p < 2 + \min(s-2, \epsilon), \tag{3.15} \]
where $w$ is the unique solution of (3.10) and $\epsilon$ depends on $\mu$, $\gamma$ and $\Omega$. Therefore, since $\delta_y$ is a solution of (3.10), the estimates (3.11) and (3.15) imply the existence of $L_1 > 0$ such that
\[ \|y_{\hat\gamma} - y_\gamma\|_{W^{1,p}} \le L_1\,|\hat\gamma-\gamma|, \quad \text{for } 2 \le p < 2 + \min(s-2, \epsilon). \qquad \square \]

Remark 3.2. Since $\gamma \mapsto y_\gamma$ is Lipschitz continuous in $W^{1,p}(\Omega)$, for some $p > 2$, there exists a weak accumulation point $\dot y_\gamma \in W^{1,p}(\Omega)$ of $\frac{y_{\hat\gamma} - y_\gamma}{\hat\gamma - \gamma}$ as $\hat\gamma \to \gamma$, which is a strong accumulation point in $H_0^1(\Omega)$.

For the subsequent analysis and the remaining sections of the paper, we will use the following assumption.

Assumption 3.3. Let $\gamma \in [M,\infty)$. There exist $\varepsilon_1, \varepsilon_2 > 0$ and $r > 0$ such that
\[ \operatorname{meas}\big( \{x \in \mathcal{A}_{\hat\gamma}\cap\mathcal{I}_\gamma : \hat\gamma|\nabla y_{\hat\gamma}(x)| - \gamma|\nabla y_\gamma(x)| < \varepsilon_1 \} \big) = 0, \]
\[ \operatorname{meas}\big( \{x \in \mathcal{A}_\gamma\cap\mathcal{I}_{\hat\gamma} : \gamma|\nabla y_\gamma(x)| - \hat\gamma|\nabla y_{\hat\gamma}(x)| < \varepsilon_2 \} \big) = 0, \]
for all $\hat\gamma \in (\gamma - r, \gamma + r)$.

Lemma 3.4. Let $\gamma \in [M,\infty)$ be fixed, and let $\hat\gamma \in (\gamma-r, \gamma+r)$. It holds that
\[ \lim_{\hat\gamma\to\gamma} \operatorname{meas}(\mathcal{A}_{\hat\gamma}\cap\mathcal{I}_\gamma) = \lim_{\hat\gamma\to\gamma} \operatorname{meas}(\mathcal{A}_\gamma\cap\mathcal{I}_{\hat\gamma}) = 0. \]

Proof. Let us introduce the set $\mathcal{A}_{\varepsilon_1} := \{x \in \mathcal{A}_{\hat\gamma}\cap\mathcal{I}_\gamma : \hat\gamma|\nabla y_{\hat\gamma}(x)| - \gamma|\nabla y_\gamma(x)| \ge \varepsilon_1\}$. From Assumption 3.3, we get that
\[ \operatorname{meas}(\mathcal{A}_{\hat\gamma}\cap\mathcal{I}_\gamma) \le \operatorname{meas}(\mathcal{A}_{\varepsilon_1}). \tag{3.16} \]
Due to Chebyshev's inequality we get that
\[ \varepsilon_1\,\operatorname{meas}(\mathcal{A}_{\varepsilon_1}) \le \int_{\mathcal{A}_{\hat\gamma}\cap\mathcal{I}_\gamma} \big( \hat\gamma|\nabla y_{\hat\gamma}(x)| - \gamma|\nabla y_\gamma(x)| \big)\, dx \le |\hat\gamma-\gamma| \int_\Omega |\nabla y_{\hat\gamma}(x)|\, dx + \gamma \int_\Omega \big| \nabla y_{\hat\gamma}(x) - \nabla y_\gamma(x) \big|\, dx, \]

which, by Lemma 3.1 and Theorem 3.2, implies that
\[ \varepsilon_1\,\operatorname{meas}(\mathcal{A}_{\varepsilon_1}) \le \operatorname{meas}(\Omega)^{1/2}\,|\hat\gamma-\gamma|\,\|y_{\hat\gamma}\|_{H_0^1} + \gamma\,\operatorname{meas}(\Omega)^{1/2}\,\|y_{\hat\gamma} - y_\gamma\|_{H_0^1} \le K_1\,|\hat\gamma-\gamma|, \]
for some $K_1 > 0$. Therefore,
\[ \lim_{\hat\gamma\to\gamma} \operatorname{meas}\big( \{x : \hat\gamma|\nabla y_{\hat\gamma}(x)| - \gamma|\nabla y_\gamma(x)| \ge \varepsilon_1\} \big) = 0, \]
and the result follows from (3.16). The other case is treated similarly. $\square$

As a consequence of Lemma 3.4 we also obtain that
\[ \lim_{\hat\gamma\to\gamma} \operatorname{meas}(\mathcal{A}_{\hat\gamma}\cap\mathcal{A}_\gamma) = \operatorname{meas}(\mathcal{A}_\gamma) \quad\text{and}\quad \lim_{\hat\gamma\to\gamma} \operatorname{meas}(\mathcal{I}_{\hat\gamma}\cap\mathcal{I}_\gamma) = \operatorname{meas}(\mathcal{I}_\gamma), \tag{3.17} \]
which, since $\mathcal{A}_\gamma = (\mathcal{A}_{\hat\gamma}\cap\mathcal{A}_\gamma)\cup(\mathcal{A}_\gamma\setminus\mathcal{A}_{\hat\gamma})$ and $\mathcal{I}_\gamma = (\mathcal{I}_{\hat\gamma}\cap\mathcal{I}_\gamma)\cup(\mathcal{I}_\gamma\setminus\mathcal{I}_{\hat\gamma})$, implies that
\[ \lim_{\hat\gamma\to\gamma} \operatorname{meas}(\mathcal{A}_\gamma\setminus\mathcal{A}_{\hat\gamma}) = \lim_{\hat\gamma\to\gamma} \operatorname{meas}(\mathcal{I}_\gamma\setminus\mathcal{I}_{\hat\gamma}) = 0. \tag{3.18} \]

Proposition 3.5. Let $\gamma > M$ and $\dot y_\gamma^+$ be a weak accumulation point of $\frac{y_{\hat\gamma} - y_\gamma}{\hat\gamma - \gamma}$ in $W^{1,p}(\Omega)$, for some $p > 2$, as $\hat\gamma \downarrow \gamma$. Then $\dot y_\gamma^+$ satisfies
\[ a(\dot y_\gamma^+, v) + g\left( \Big( \frac{\nabla\dot y_\gamma^+}{|\nabla y_\gamma|} - \frac{\langle \nabla y_\gamma, \nabla\dot y_\gamma^+\rangle}{|\nabla y_\gamma|^3}\,\nabla y_\gamma \Big)\chi_{\mathcal{A}_\gamma},\ \nabla v \right)_{\mathbf{L}^2(\Omega)} + \Big( \big( \nabla y_\gamma + \gamma\,\nabla\dot y_\gamma^+ \big)\chi_{\mathcal{I}_\gamma},\ \nabla v \Big)_{\mathbf{L}^2(\Omega)} = 0. \tag{3.19} \]

Proof. See the Appendix. $\square$

Proceeding as in Proposition 3.5, we also obtain that
\[ a(\dot y_\gamma^-, v) + g\left( \Big( \frac{\nabla\dot y_\gamma^-}{|\nabla y_\gamma|} - \frac{\langle \nabla y_\gamma, \nabla\dot y_\gamma^-\rangle}{|\nabla y_\gamma|^3}\,\nabla y_\gamma \Big)\chi_{\mathcal{A}_\gamma},\ \nabla v \right)_{\mathbf{L}^2(\Omega)} + \Big( \big( \nabla y_\gamma + \gamma\,\nabla\dot y_\gamma^- \big)\chi_{\mathcal{I}_\gamma},\ \nabla v \Big)_{\mathbf{L}^2(\Omega)} = 0, \tag{3.20} \]
where $\dot y_\gamma^-$ stands for any weak accumulation point of $\frac{y_{\hat\gamma} - y_\gamma}{\hat\gamma - \gamma}$ in $W^{1,p}(\Omega)$, for some $p > 2$, as $\hat\gamma \uparrow \gamma$. Therefore, we obtain the following result.

Theorem 3.6. The function $\gamma \mapsto y_\gamma \in H_0^1(\Omega)$ is differentiable at all $\gamma \in [M, +\infty)$, and $\dot y_\gamma$ satisfies
\[ a(\dot y_\gamma, v) + g\left( \Big( \frac{\nabla\dot y_\gamma}{|\nabla y_\gamma|} - \frac{\langle \nabla y_\gamma, \nabla\dot y_\gamma\rangle}{|\nabla y_\gamma|^3}\,\nabla y_\gamma \Big)\chi_{\mathcal{A}_\gamma},\ \nabla v \right)_{\mathbf{L}^2(\Omega)} + \Big( \big( \nabla y_\gamma + \gamma\,\nabla\dot y_\gamma \big)\chi_{\mathcal{I}_\gamma},\ \nabla v \Big)_{\mathbf{L}^2(\Omega)} = 0. \tag{3.21} \]

Proof. Let $z$ denote the difference between two accumulation points of $\frac{y_{\hat\gamma} - y_\gamma}{\hat\gamma - \gamma}$ as $\hat\gamma \to \gamma$. From (3.19) and (3.20) we obtain that
\[ a(z, v) + g\left( \Big( \frac{\nabla z}{|\nabla y_\gamma|} - \frac{\langle \nabla y_\gamma, \nabla z\rangle}{|\nabla y_\gamma|^3}\,\nabla y_\gamma \Big)\chi_{\mathcal{A}_\gamma} + \gamma\,\nabla z\,\chi_{\mathcal{I}_\gamma},\ \nabla v \right)_{\mathbf{L}^2(\Omega)} = 0. \]

Choosing $v = z$ in the last expression, we obtain that
\[ \mu\,\|z\|^2_{H_0^1} + \gamma\,\|\nabla z\|^2_{\mathbf{L}^2(\mathcal{I}_\gamma)} + g\left( \frac{\nabla z}{|\nabla y_\gamma|} - \frac{\langle \nabla y_\gamma, \nabla z\rangle}{|\nabla y_\gamma|^3}\,\nabla y_\gamma,\ \nabla z \right)_{\mathbf{L}^2(\mathcal{A}_\gamma)} = 0. \tag{3.22} \]
Since
\[ \left( \frac{\nabla z}{|\nabla y_\gamma|} - \frac{\langle \nabla y_\gamma, \nabla z\rangle}{|\nabla y_\gamma|^3}\,\nabla y_\gamma,\ \nabla z \right)_{\mathbf{L}^2(\mathcal{A}_\gamma)} = \int_{\mathcal{A}_\gamma} \frac{1}{|\nabla y_\gamma|}\left( |\nabla z|^2 - \frac{\langle \nabla y_\gamma, \nabla z\rangle^2}{|\nabla y_\gamma|^2} \right) dx \ge 0, \]
we get, from (3.22), that $z = 0$. Consequently, accumulation points are unique and by (3.19) and (3.20) they satisfy (3.21). $\square$

3.2. Path value functional

In this section we study the value functional associated to (P$_\gamma$). We prove that the functional is twice differentiable with non-positive second derivative, which implies concavity of the functional.

Definition 3.3. The functional $V(\gamma) := J_\gamma(y_\gamma)$, defined on $[M,\infty)$, $M > 0$, is called the path value functional.

Let us start by analyzing the differentiability properties of $V$.

Proposition 3.7. Let $\gamma \in [M,\infty)$. The value functional $V$ is differentiable at $\gamma$, with
\[ \dot V(\gamma) = \frac{1}{2\gamma^2} \int_{\mathcal{A}_\gamma} g^2\, dx + \frac12 \int_{\mathcal{I}_\gamma} |\nabla y_\gamma|^2\, dx. \tag{3.23} \]

Proof. Let $r > 0$ be sufficiently small and let $\hat\gamma \in (\gamma-r, \gamma+r)$. From (2.30) and by choosing $v = y_{\hat\gamma} - y_\gamma$, we find that
\[ \frac12\, a(y_{\hat\gamma} + y_\gamma, y_{\hat\gamma} - y_\gamma) + \frac12\, \big( q_{\hat\gamma} + q_\gamma,\ \nabla(y_{\hat\gamma} - y_\gamma) \big)_{\mathbf{L}^2(\Omega)} - (f, y_{\hat\gamma} - y_\gamma)_2 = 0. \tag{3.24} \]
On the other hand, note that $\frac12 a(y_{\hat\gamma} + y_\gamma, y_{\hat\gamma} - y_\gamma) = \frac12 a(y_{\hat\gamma}, y_{\hat\gamma}) - \frac12 a(y_\gamma, y_\gamma)$. Consequently, from (3.24), we obtain that
\begin{align*} V(\hat\gamma) - V(\gamma) &= \frac12\, a(y_{\hat\gamma}, y_{\hat\gamma}) - \frac12\, a(y_\gamma, y_\gamma) - (f, y_{\hat\gamma} - y_\gamma)_2 + \int_\Omega \big[ \psi_{\hat\gamma}(\nabla y_{\hat\gamma}) - \psi_\gamma(\nabla y_\gamma) \big]\, dx \\ &= \int_\Omega \big[ \psi_{\hat\gamma}(\nabla y_{\hat\gamma}) - \psi_\gamma(\nabla y_\gamma) \big]\, dx - \frac12\, \big( q_{\hat\gamma} + q_\gamma,\ \nabla(y_{\hat\gamma} - y_\gamma) \big)_{\mathbf{L}^2(\Omega)}. \end{align*}
Then, from (S$_\gamma$), we conclude that
\[ V(\hat\gamma) - V(\gamma) = \int_\Omega z\, dx, \tag{3.25} \]
where $z$ is defined by
\[ z(x) := \psi_{\hat\gamma}(\nabla y_{\hat\gamma}(x)) - \psi_\gamma(\nabla y_\gamma(x)) - \frac g2\,\Big\langle \frac{\hat\gamma\,\nabla y_{\hat\gamma}(x)}{\theta_{\hat\gamma}(x)} + \frac{\gamma\,\nabla y_\gamma(x)}{\theta_\gamma(x)},\ \nabla(y_{\hat\gamma} - y_\gamma)(x) \Big\rangle, \]
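Formula (3.23) can be cross-checked against the equivalent multiplier representation $\dot V(\gamma) = \frac{1}{2\gamma^2}\int_\Omega |q_\gamma|^2\, dx$ used in the proof of Proposition 3.8: on $\mathcal{A}_\gamma$ one has $|q_\gamma| = g$, while on $\mathcal{I}_\gamma$, $q_\gamma = \gamma\nabla y_\gamma$. A small numerical sketch (with synthetic, hypothetical gradient samples in place of an actual Bingham solution, each sample treated as a unit-area cell) confirms that the two expressions coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
g, gamma = 1.0, 25.0
# synthetic velocity-gradient samples standing in for grad(y_gamma)
grad_y = rng.normal(scale=0.1, size=(1000, 2))
norms = np.linalg.norm(grad_y, axis=1)
active = gamma * norms >= g                                    # A_gamma
# pointwise multiplier from (2.31)/(S_gamma)
q = g * gamma * grad_y / np.maximum(g, gamma * norms)[:, None]

# V'(gamma) via the split formula (3.23) ...
v_split = (g**2 / (2 * gamma**2)) * active.sum() + 0.5 * (norms[~active]**2).sum()
# ... and via the single multiplier integral (1/(2 gamma^2)) int |q_gamma|^2
v_mult = (np.linalg.norm(q, axis=1)**2).sum() / (2 * gamma**2)
assert np.isclose(v_split, v_mult)
```

The agreement is exact up to rounding, since the two formulas are pointwise identical under (2.31).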

a.e. in $\Omega$. Next, we will analyze the limit $\lim_{\hat\gamma\to\gamma} \frac{V(\hat\gamma) - V(\gamma)}{\hat\gamma - \gamma}$. Using the disjoint partitioning of $\Omega$ given by $\Omega_1 := \mathcal{A}_{\hat\gamma}\cap\mathcal{A}_\gamma$, $\Omega_2 := \mathcal{A}_{\hat\gamma}\cap\mathcal{I}_\gamma$, $\Omega_3 := \mathcal{A}_\gamma\cap\mathcal{I}_{\hat\gamma}$ and $\Omega_4 := \mathcal{I}_{\hat\gamma}\cap\mathcal{I}_\gamma$, we get that
\[ V(\hat\gamma) - V(\gamma) = \sum_{j=1}^{4} \int_{\Omega_j} z_j\, dx, \]
where $z_j$ represents the value of $z$ when restricted to each set $\Omega_j$, $j = 1,\dots,4$. Now, we analyze each integral $\int_{\Omega_j} z_j\, dx$ separately.

On $\Omega_1$: Here, we analyze the limit $\lim_{\hat\gamma\to\gamma} \frac{1}{\hat\gamma-\gamma}\int_{\Omega_1} z_1\, dx$. We start by recalling that, a.e. in $\Omega_1$, we have $\theta_{\hat\gamma}(x) = \hat\gamma|\nabla y_{\hat\gamma}(x)|$, $\theta_\gamma(x) = \gamma|\nabla y_\gamma(x)|$,
\[ \psi_{\hat\gamma}(\nabla y_{\hat\gamma}(x)) = g\,|\nabla y_{\hat\gamma}(x)| - \frac{g^2}{2\hat\gamma} \quad\text{and}\quad \psi_\gamma(\nabla y_\gamma(x)) = g\,|\nabla y_\gamma(x)| - \frac{g^2}{2\gamma}\cdot \]
Thus, we obtain the following pointwise a.e. identity:
\[ z_1(x) = g\,\big[ |\nabla y_{\hat\gamma}(x)| - |\nabla y_\gamma(x)| \big] + \frac{g^2}{2}\Big( \frac1\gamma - \frac1{\hat\gamma} \Big) - \frac g2\,\Big\langle \frac{\nabla y_{\hat\gamma}(x)}{|\nabla y_{\hat\gamma}(x)|} + \frac{\nabla y_\gamma(x)}{|\nabla y_\gamma(x)|},\ \nabla(y_{\hat\gamma} - y_\gamma)(x) \Big\rangle, \]
and, therefore,
\[ \int_{\Omega_1} z_1\, dx = \frac g2 \int_{\Omega_1} \big[ |\nabla y_{\hat\gamma}| - |\nabla y_\gamma| \big]\Big[ 1 - \frac{\langle \nabla y_{\hat\gamma}, \nabla y_\gamma\rangle}{|\nabla y_{\hat\gamma}|\,|\nabla y_\gamma|} \Big]\, dx + \frac{g^2(\hat\gamma-\gamma)}{2\gamma\hat\gamma} \int_{\Omega_1} dx. \tag{3.26} \]
Next, we estimate the two integrals in (3.26) separately. First, note that, since we are working in $\Omega_1$, we have that $|\nabla y_{\hat\gamma}(x)| \ge \frac{g}{\hat\gamma}$ a.e. Therefore, we obtain the following pointwise estimate in $\Omega_1$:
\[ 1 - \frac{\langle \nabla y_{\hat\gamma}(x), \nabla y_\gamma(x)\rangle}{|\nabla y_{\hat\gamma}(x)|\,|\nabla y_\gamma(x)|} \le \Big| \frac{\nabla y_{\hat\gamma}(x)}{|\nabla y_{\hat\gamma}(x)|} - \frac{\nabla y_\gamma(x)}{|\nabla y_\gamma(x)|} \Big| \le \frac{2}{|\nabla y_{\hat\gamma}(x)|}\,\big| \nabla(y_{\hat\gamma} - y_\gamma)(x) \big| \le \frac{2\hat\gamma}{g}\,\big| \nabla(y_{\hat\gamma} - y_\gamma)(x) \big|. \tag{3.27} \]
Therefore, from the Cauchy-Schwarz inequality, Theorem 3.2 and (3.27), we have the following estimate:
\[ \frac g2 \int_{\Omega_1} \big[ |\nabla y_{\hat\gamma}| - |\nabla y_\gamma| \big]\Big[ 1 - \frac{\langle \nabla y_{\hat\gamma}, \nabla y_\gamma\rangle}{|\nabla y_{\hat\gamma}|\,|\nabla y_\gamma|} \Big]\, dx \le \hat\gamma \int_{\Omega_1} |\nabla(y_{\hat\gamma} - y_\gamma)|^2\, dx \le \hat\gamma\, \|y_{\hat\gamma} - y_\gamma\|^2_{H_0^1} \le \hat\gamma\, L^2\, |\hat\gamma - \gamma|^2. \tag{3.28} \]

Next, we analyze the second expression on the right hand side of (3.26). Since $\mathcal{A}_\gamma = (\mathcal{A}_{\hat\gamma}\cap\mathcal{A}_\gamma)\cup(\mathcal{A}_\gamma\setminus\mathcal{A}_{\hat\gamma})$, we have that
\[ \frac{g^2}{2\gamma\hat\gamma} \int_{\Omega_1} dx = \frac{g^2}{2\gamma\hat\gamma} \int_{\mathcal{A}_\gamma} dx - \frac{g^2}{2\gamma\hat\gamma} \int_{\mathcal{A}_\gamma\setminus\mathcal{A}_{\hat\gamma}} dx. \tag{3.29} \]
Further, since $\gamma, \hat\gamma \in [M,\infty)$, we obtain
\[ \frac{g^2}{2\gamma\hat\gamma} \int_{\mathcal{A}_\gamma\setminus\mathcal{A}_{\hat\gamma}} dx \le \frac{g^2}{2M^2}\,\operatorname{meas}(\mathcal{A}_\gamma\setminus\mathcal{A}_{\hat\gamma}), \]
which, due to Lemma 3.4, implies that
\[ \frac{g^2}{2\gamma\hat\gamma} \int_{\mathcal{A}_\gamma\setminus\mathcal{A}_{\hat\gamma}} dx \longrightarrow 0, \quad \text{as } \hat\gamma\to\gamma. \tag{3.30} \]
Thus, from (3.29), (3.30) and Lebesgue's bounded convergence theorem, we conclude that
\[ \frac{g^2}{2\gamma\hat\gamma} \int_{\Omega_1} dx \longrightarrow \frac{g^2}{2\gamma^2} \int_{\mathcal{A}_\gamma} dx, \quad \text{as } \hat\gamma\to\gamma. \tag{3.31} \]
Therefore, from (3.26), (3.28) and (3.31), we conclude that
\[ \lim_{\hat\gamma\to\gamma} \frac{1}{\hat\gamma-\gamma} \int_{\Omega_1} z_1\, dx = \frac{1}{2\gamma^2} \int_{\mathcal{A}_\gamma} g^2\, dx. \tag{3.32} \]

On $\Omega_4$: We study the limit of $\frac{1}{\hat\gamma-\gamma}\int_{\Omega_4} z_4\, dx$ as $\hat\gamma\to\gamma$. Let us recall that a.e. on $\Omega_4$, $\theta_{\hat\gamma}(x) = \theta_\gamma(x) = g$, $\psi_{\hat\gamma}(\nabla y_{\hat\gamma}(x)) = \frac{\hat\gamma}{2}|\nabla y_{\hat\gamma}(x)|^2$ and $\psi_\gamma(\nabla y_\gamma(x)) = \frac\gamma2|\nabla y_\gamma(x)|^2$. Thus, we obtain that
\[ z_4(x) = \frac{\hat\gamma}{2}\,|\nabla y_{\hat\gamma}(x)|^2 - \frac\gamma2\,|\nabla y_\gamma(x)|^2 - \frac12\,\big\langle \hat\gamma\,\nabla y_{\hat\gamma}(x) + \gamma\,\nabla y_\gamma(x),\ \nabla(y_{\hat\gamma} - y_\gamma)(x) \big\rangle = \frac{\hat\gamma-\gamma}{2}\,\big\langle \nabla y_{\hat\gamma}(x), \nabla y_\gamma(x) \big\rangle, \quad \text{a.e.,} \]
which implies, since $\mathcal{I}_\gamma = (\mathcal{I}_{\hat\gamma}\cap\mathcal{I}_\gamma)\cup(\mathcal{I}_\gamma\setminus\mathcal{I}_{\hat\gamma})$, that
\[ \frac{1}{\hat\gamma-\gamma} \int_{\Omega_4} z_4\, dx = \frac12 \left[ \int_{\mathcal{I}_\gamma} \langle \nabla y_{\hat\gamma}, \nabla y_\gamma\rangle\, dx - \int_{\mathcal{I}_\gamma\setminus\mathcal{I}_{\hat\gamma}} \langle \nabla y_{\hat\gamma}, \nabla y_\gamma\rangle\, dx \right]. \tag{3.33} \]
Let us study the two integrals on the right hand side of (3.33) separately. Theorem 3.2 implies that $y_{\hat\gamma} \to y_\gamma$ strongly in $H_0^1(\Omega)$ as $\hat\gamma\to\gamma$, and, therefore,
\[ \int_{\mathcal{I}_\gamma} \langle \nabla y_{\hat\gamma}, \nabla y_\gamma\rangle\, dx \longrightarrow \int_{\mathcal{I}_\gamma} |\nabla y_\gamma|^2\, dx. \tag{3.34} \]
On the other hand, due to the Cauchy-Schwarz and Hölder inequalities, and since $|\nabla y_{\hat\gamma}(x)| < \frac{g}{\hat\gamma} \le \frac gM$ and $|\nabla y_\gamma(x)| < \frac g\gamma \le \frac gM$ a.e. in $\Omega_4$, we obtain that
\[ \int_{\mathcal{I}_\gamma\setminus\mathcal{I}_{\hat\gamma}} \langle \nabla y_{\hat\gamma}, \nabla y_\gamma\rangle\, dx \le \frac{g^2}{M^2}\,\operatorname{meas}(\mathcal{I}_\gamma\setminus\mathcal{I}_{\hat\gamma}), \]

18 98 J.C. DE LOS REYES AND S. GONZÁLEZ which, due to Lemma 3.4, implythat I \I y, y dx, as. (3.35) Finally, we obtain, from (3.33), (3.34) and(3.35), that lim z 4 dx = y 2 dx. (3.36) 4 2 I On 2 and 3 : We study the behavior of 2 z 2 dx and 3 z 3 dx, as. Let us start with 2 z 2 dx. First, note that a.e. in 2,wehavethatθ (x) = y (x), θ (x) =g, ψ( y (x)) = 2 y (x) 2 and ψ( y (x)) = g y (x) g2 2. Thus, we obtain that z 2 (x) =g y (x) g2 2 2 y (x) 2 g y (x) 2 y (x) + 2 y (x), y (x) y (x), a.e. in 2.Moreover,sincein 2 we have that y (x) <g, a.e., and due to the Cauchy-Schwarz inequality, we have the following pointwise estimate: z 2 (x) g 2 y (x) g + g 2 y (x) y (x), y (x) + g2 2 g 2 y (x) g + y (x) 2 y (x) g + g2 2 (3.37) < g y (x) g + g2 2 a.e. in 2. Then, we divide the analysis in two cases: (i) y (x) g : Since in 2 we have that y (x) < g obtain that a.e., and due to Theorem 3.2 and (3.37), we { y g } z 2 dx g 2 y y dx + g2 2 meas( 2) g meas( 2 ) /2 y y H + g2 2 meas( 2) glmeas( 2 ) /2 + g2 2M meas( 2 2 ). (ii) y (x) g < : Since in 2 we have that y (x) g { y < g } z 2 dx g g 2 3 g2 2M meas( 2 2 ). a.e., we have that g dx + g2 2 meas( 2) (3.38) (3.39) Consequently, (3.37), (3.38), (3.39) and Lemma 3.4 imply that lim z 2 dx =. 2 (3.4) Analogously, we conclude that lim z 3 dx =. 3 (3.4) Then, the result follows from (3.32), (3.36), (3.4) and(3.4).

Proposition 3.8. Let $\gamma \in [M,\infty)$. The function $V(\gamma)$ is twice differentiable at $\gamma$, with its second derivative given by
$$\ddot V(\gamma) = -\frac{1}{\gamma^3}\int_{\mathcal{A}_\gamma} g^2\,dx + \int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma, \nabla\dot y_\gamma\rangle\,dx, \quad (3.42)$$
where $\dot y_\gamma$ is defined in Proposition 3.6. Moreover, $\ddot V(\gamma) \le 0$ for all $\gamma \in [M,\infty)$.

Proof. Let us first prove (3.42). Since $|q_\gamma(x)| = g$ a.e. in $\mathcal{A}_\gamma$ and $q_\gamma(x) = \gamma\,\nabla y_\gamma(x)$ a.e. in $\mathcal{I}_\gamma$, we can write
$$\dot V(\gamma) = \frac{1}{2\gamma^2}\int_\Omega |q_\gamma|^2\,dx. \quad (3.43)$$
From (3.43) we conclude that
$$\dot V(\gamma) - \dot V(\bar\gamma) = \frac{1}{2\gamma^2}\int_\Omega |q_\gamma|^2\,dx - \frac{1}{2\bar\gamma^2}\int_\Omega |q_{\bar\gamma}|^2\,dx.$$
We are concerned with the limit $\lim_{\bar\gamma\to\gamma}\frac{\dot V(\gamma)-\dot V(\bar\gamma)}{\gamma-\bar\gamma}$. We introduce the notation
$$I_j = \frac{1}{2\gamma^2}\int_{\Omega_j}|q_\gamma|^2\,dx - \frac{1}{2\bar\gamma^2}\int_{\Omega_j}|q_{\bar\gamma}|^2\,dx, \quad j=1,\dots,4,$$
where the sets $\Omega_j$, $j=1,\dots,4$, are defined as in Proposition 3.7, and we analyze the integrals $I_j$ separately.

On $\Omega_1$ ($\lim \frac{I_1}{\gamma-\bar\gamma}$): Let us start by recalling that, a.e. on $\Omega_1$, we have that $|q_\gamma(x)| = |q_{\bar\gamma}(x)| = g$. Thus, since $\mathcal{A}_\gamma = (\mathcal{A}_\gamma\cap\mathcal{A}_{\bar\gamma}) \cup (\mathcal{A}_\gamma\setminus\mathcal{A}_{\bar\gamma})$, we get
$$\frac{I_1}{\gamma-\bar\gamma} = \frac{1}{\gamma-\bar\gamma}\int_{\Omega_1} g^2\Big(\frac{1}{2\gamma^2}-\frac{1}{2\bar\gamma^2}\Big)dx = -\int_{\mathcal{A}_\gamma}\frac{g^2(\gamma+\bar\gamma)}{2\gamma^2\bar\gamma^2}\,dx + \int_{\mathcal{A}_\gamma\setminus\mathcal{A}_{\bar\gamma}}\frac{g^2(\gamma+\bar\gamma)}{2\gamma^2\bar\gamma^2}\,dx.$$
Since $\gamma, \bar\gamma \in [M,\infty)$, we get that
$$\int_{\mathcal{A}_\gamma\setminus\mathcal{A}_{\bar\gamma}}\frac{g^2(\gamma+\bar\gamma)}{2\gamma^2\bar\gamma^2}\,dx \le \frac{g^2(\gamma+\bar\gamma)}{2M^4}\,\mathrm{meas}(\mathcal{A}_\gamma\setminus\mathcal{A}_{\bar\gamma}),$$
which, due to Lemma 3.4, implies that
$$\int_{\mathcal{A}_\gamma\setminus\mathcal{A}_{\bar\gamma}}\frac{g^2(\gamma+\bar\gamma)}{2\gamma^2\bar\gamma^2}\,dx \to 0, \quad\text{as } \bar\gamma\to\gamma. \quad (3.44)$$
Thus, (3.44), Lemma 3.4 and Lebesgue's bounded convergence theorem imply that
$$\lim_{\bar\gamma\to\gamma}\frac{I_1}{\gamma-\bar\gamma} = -\frac{1}{\gamma^3}\int_{\mathcal{A}_\gamma} g^2\,dx. \quad (3.45)$$

On $\Omega_4$ ($\lim \frac{I_4}{\gamma-\bar\gamma}$): First, note that $q_\gamma(x) = \gamma\nabla y_\gamma(x)$ and $q_{\bar\gamma}(x) = \bar\gamma\nabla y_{\bar\gamma}(x)$ a.e. on $\Omega_4$. Thus, since $\mathcal{I}_\gamma = (\mathcal{I}_\gamma\cap\mathcal{I}_{\bar\gamma}) \cup (\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma})$, we obtain that
$$\frac{I_4}{\gamma-\bar\gamma} = \frac{1}{2(\gamma-\bar\gamma)}\int_{\mathcal{I}_\gamma}\big[|\nabla y_\gamma|^2 - |\nabla y_{\bar\gamma}|^2\big]dx - \frac{1}{2(\gamma-\bar\gamma)}\int_{\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma}}\big[|\nabla y_\gamma|^2 - |\nabla y_{\bar\gamma}|^2\big]dx. \quad (3.46)$$
Next, let us analyze the two integrals on the right hand side of (3.46) separately. For brevity, we write $D_{\bar\gamma} := \frac{\nabla y_\gamma - \nabla y_{\bar\gamma}}{\gamma-\bar\gamma}$.

(i) $\lim_{\bar\gamma\to\gamma}\frac{1}{2(\gamma-\bar\gamma)}\int_{\mathcal{I}_\gamma}\big[|\nabla y_\gamma|^2 - |\nabla y_{\bar\gamma}|^2\big]dx$: Note that
$$\frac{1}{\gamma-\bar\gamma}\int_{\mathcal{I}_\gamma}\big[|\nabla y_\gamma|^2 - |\nabla y_{\bar\gamma}|^2\big]dx = \underbrace{\int_{\mathcal{I}_\gamma}\big[\langle\nabla y_\gamma, D_{\bar\gamma}\rangle + \langle\nabla y_{\bar\gamma}, D_{\bar\gamma}\rangle\big]dx}_{:=J},$$
which implies that
$$\Big|J - 2\int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\,dx\Big| \le \Big|\int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma, D_{\bar\gamma}-\nabla\dot y_\gamma\rangle\,dx\Big| + \Big|\int_{\mathcal{I}_\gamma}\langle\nabla y_{\bar\gamma}, D_{\bar\gamma}\rangle\,dx - \int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\,dx\Big|.$$
Next, we separately analyze the two terms on the right hand side of the inequality above. Since $\dot y_\gamma$ is an accumulation point of $\frac{y_\gamma - y_{\bar\gamma}}{\gamma-\bar\gamma}$ in $H_0^1(\Omega)$ as $\bar\gamma\to\gamma$ (see Rem. 3.2 and Thm. 3.6), we have that
$$\int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma, D_{\bar\gamma}-\nabla\dot y_\gamma\rangle\,dx \to 0, \quad\text{as } \bar\gamma\to\gamma. \quad (3.47)$$
On the other hand, we have that
$$\Big|\int_{\mathcal{I}_\gamma}\langle\nabla y_{\bar\gamma}, D_{\bar\gamma}\rangle\,dx - \int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\,dx\Big| \le \Big|\int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma, D_{\bar\gamma}-\nabla\dot y_\gamma\rangle\,dx\Big| + \Big|\int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma - \nabla y_{\bar\gamma}, D_{\bar\gamma}\rangle\,dx\Big|, \quad (3.48)$$
whose first term tends to zero since $\dot y_\gamma$ is an accumulation point of $\frac{y_\gamma - y_{\bar\gamma}}{\gamma-\bar\gamma}$ in $H_0^1(\Omega)$ as $\bar\gamma\to\gamma$. Furthermore, Theorem 3.2 yields that
$$\Big|\int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma - \nabla y_{\bar\gamma}, D_{\bar\gamma}\rangle\,dx\Big| \le \frac{\|y_\gamma - y_{\bar\gamma}\|_{H_0^1}^2}{|\gamma-\bar\gamma|} \le L^2\,|\gamma-\bar\gamma|. \quad (3.49)$$
Consequently, (3.47), (3.48) and (3.49) imply
$$\lim_{\bar\gamma\to\gamma}\frac{1}{2(\gamma-\bar\gamma)}\int_{\mathcal{I}_\gamma}\big[|\nabla y_\gamma|^2 - |\nabla y_{\bar\gamma}|^2\big]dx = \int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\,dx. \quad (3.50)$$

(ii) $\lim_{\bar\gamma\to\gamma}\frac{1}{2(\gamma-\bar\gamma)}\int_{\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma}}\big[|\nabla y_\gamma|^2 - |\nabla y_{\bar\gamma}|^2\big]dx$: First note that
$$\int_{\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma}}\big||\nabla y_\gamma|^2 - |\nabla y_{\bar\gamma}|^2\big|\,dx \le \int_{\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma}}\big(|\nabla y_\gamma| + |\nabla y_{\bar\gamma}|\big)\,|\nabla y_\gamma - \nabla y_{\bar\gamma}|\,dx.$$
Therefore, from Theorem 3.2, Remark 2.4 and the Hölder inequality, we have that
$$\int_{\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma}}\big||\nabla y_\gamma|^2 - |\nabla y_{\bar\gamma}|^2\big|\,dx \le \mathrm{meas}(\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma})^{1/4}\,\big(\|\nabla y_\gamma\|_{L^4} + \|\nabla y_{\bar\gamma}\|_{L^4}\big)\,\|y_\gamma - y_{\bar\gamma}\|_{H_0^1} \le 2KL\,|\gamma-\bar\gamma|\,\mathrm{meas}(\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma})^{1/4},$$
and then, from Lemma 3.4, we conclude that
$$\lim_{\bar\gamma\to\gamma}\frac{1}{2(\gamma-\bar\gamma)}\int_{\mathcal{I}_\gamma\setminus\mathcal{I}_{\bar\gamma}}\big[|\nabla y_\gamma|^2 - |\nabla y_{\bar\gamma}|^2\big]dx = 0. \quad (3.51)$$
Finally, (3.46), (3.50) and (3.51) imply that
$$\lim_{\bar\gamma\to\gamma}\frac{I_4}{\gamma-\bar\gamma} = \int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\,dx. \quad (3.52)$$

On $\Omega_2$ and $\Omega_3$: We analyze the limits $\lim \frac{I_2}{\gamma-\bar\gamma}$ and $\lim \frac{I_3}{\gamma-\bar\gamma}$, starting with the former. Recall that in $\Omega_2$ we have $\bar\gamma\,|\nabla y_{\bar\gamma}(x)| < g \le \gamma\,|\nabla y_\gamma(x)|$ a.e., so that $\frac{1}{2\gamma^2}|q_\gamma|^2 = \frac{g^2}{2\gamma^2} \le \frac12|\nabla y_\gamma|^2$ and $\frac{1}{2\bar\gamma^2}|q_{\bar\gamma}|^2 = \frac12|\nabla y_{\bar\gamma}|^2 \le \frac{g^2}{2\bar\gamma^2}$. Then, from Theorem 3.2, Remark 2.4 and the Hölder inequality, we conclude that
$$\frac{|I_2|}{|\gamma-\bar\gamma|} \le \frac{1}{2|\gamma-\bar\gamma|}\int_{\Omega_2}\big||\nabla y_\gamma|^2 - |\nabla y_{\bar\gamma}|^2\big|\,dx + \frac{g^2(\gamma+\bar\gamma)}{2\gamma^2\bar\gamma^2}\,\mathrm{meas}(\Omega_2) \le KL\,\mathrm{meas}(\Omega_2)^{1/4} + \frac{g^2(\gamma+\bar\gamma)}{2M^4}\,\mathrm{meas}(\Omega_2),$$
which, due to Lemma 3.4, implies that
$$\lim_{\bar\gamma\to\gamma}\frac{I_2}{\gamma-\bar\gamma} = 0. \quad (3.53)$$
Analogously, we conclude that
$$\lim_{\bar\gamma\to\gamma}\frac{I_3}{\gamma-\bar\gamma} = 0. \quad (3.54)$$
Finally, (3.45), (3.52), (3.53) and (3.54) imply (3.42).

Now, we prove that $\ddot V(\gamma) \le 0$. Using $v = \dot y_\gamma\,\chi_{\mathcal{I}_\gamma}$ as test function in (3.2), we obtain that
$$(\mu+\gamma)\int_{\mathcal{I}_\gamma}|\nabla\dot y_\gamma|^2\,dx = -\int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\,dx.$$
Thus, we can easily conclude that
$$\int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\,dx \le 0,$$
which yields that
$$\ddot V(\gamma) = -\frac{1}{\gamma^3}\int_{\mathcal{A}_\gamma} g^2\,dx + \int_{\mathcal{I}_\gamma}\langle\nabla y_\gamma,\nabla\dot y_\gamma\rangle\,dx \le 0. \qquad\square$$

3.3. Model functions and path-following algorithm

In this section, following [4], we propose model functions which approximate the value functional $V(\gamma)$ and share some of its qualitative properties. These model functions will then be used for the development of the path-following algorithm.

From Theorem 3.2 and Propositions 3.7 and 3.8, it follows that $V(\gamma)$, $\gamma \in [M,\infty)$, is a monotonically increasing and concave function. We then propose the model functions
$$m(\gamma) = C_1 - \frac{C_2}{\mu+\gamma} - \frac{G}{\gamma}, \quad (3.55)$$
with $C_1 \in \mathbb{R}$, $C_2 \ge 0$ and $G \ge 0$, which share the main qualitative properties of $V(\gamma)$, i.e., $\dot m(\gamma) \ge 0$ and $\ddot m(\gamma) \le 0$.

To motivate the introduction of these model functions, let us take the test function $v = y_\gamma\,\chi_{\mathcal{I}_\gamma}$ in (3.2). We get that
$$a(\dot y_\gamma, y_\gamma\,\chi_{\mathcal{I}_\gamma}) + \big(\nabla y_\gamma\,\chi_{\mathcal{I}_\gamma},\ \nabla y_\gamma\,\chi_{\mathcal{I}_\gamma}\big)_{L^2(\Omega)} + \gamma\,\big(\nabla\dot y_\gamma\,\chi_{\mathcal{I}_\gamma},\ \nabla y_\gamma\,\chi_{\mathcal{I}_\gamma}\big)_{L^2(\Omega)} = 0. \quad (3.56)$$
From the definition of $a(\cdot,\cdot)$, we obtain that
$$(\mu+\gamma)\int_{\mathcal{I}_\gamma}\langle\nabla\dot y_\gamma, \nabla y_\gamma\rangle\,dx + \int_{\mathcal{I}_\gamma}|\nabla y_\gamma|^2\,dx = 0.$$
Consequently, by using Propositions 3.7 and 3.8, we obtain
$$(\mu+\gamma)\Big[\ddot V(\gamma) + \frac{1}{\gamma^3}\int_{\mathcal{A}_\gamma} g^2\,dx\Big] + 2\dot V(\gamma) - \frac{1}{\gamma^2}\int_{\mathcal{A}_\gamma} g^2\,dx = 0,$$
which implies that
$$(\mu+\gamma)\,\ddot V(\gamma) + 2\dot V(\gamma) + \frac{\mu}{\gamma^3}\int_{\mathcal{A}_\gamma} g^2\,dx = 0. \quad (3.57)$$
Note that $\int_{\mathcal{A}_\gamma} g^2\,dx$ is a function of $\gamma$ which is uniformly bounded from above by $g^2\,\mathrm{meas}(\Omega)$. Replacing $V$ by $m$ and the $\gamma$-dependent term $\int_{\mathcal{A}_\gamma} g^2\,dx$ by $2G$, we obtain the differential equation
$$(\mu+\gamma)\,\ddot m(\gamma) + 2\dot m(\gamma) + \frac{2\mu G}{\gamma^3} = 0, \quad (3.58)$$
whose solutions are the family of functions given by (3.55).

In order to determine $C_1$, $C_2$ and $G$, we fix a reference value $\gamma_r > 0$, $\gamma_r \ne \gamma$, for which the value $V(\gamma_r)$ is known. Then, we use the following conditions:
$$m(\gamma) = V(\gamma), \qquad m(\gamma_r) = V(\gamma_r), \qquad \dot m(\gamma) = \dot V(\gamma).$$
Solving the resulting system of equations
$$C_1 - \frac{C_2}{\mu+\gamma} - \frac{G}{\gamma} = V(\gamma), \qquad C_1 - \frac{C_2}{\mu+\gamma_r} - \frac{G}{\gamma_r} = V(\gamma_r), \qquad \frac{C_2}{(\mu+\gamma)^2} + \frac{G}{\gamma^2} = \dot V(\gamma),$$
we obtain that
$$G = \frac{\gamma^2\,\gamma_r\,\vartheta}{\mu\,(\gamma_r-\gamma)}, \quad (3.59)$$
where $\vartheta := (\mu+\gamma)\,\dot V(\gamma) - (\mu+\gamma_r)\,\frac{V(\gamma_r)-V(\gamma)}{\gamma_r-\gamma}$. Consequently, the parameters $C_2$ and $C_1$ are given by
$$C_2 = (\mu+\gamma)^2\Big(\dot V(\gamma) - \frac{G}{\gamma^2}\Big), \qquad C_1 = V(\gamma) + \frac{C_2}{\mu+\gamma} + \frac{G}{\gamma}.$$
Once we have determined the values of the coefficients of the model, we are able to propose the updating strategy for $\gamma$. Let $\{\tau_k\}$ satisfy $\tau_k \in (0,1)$ for all $k \in \mathbb{N}$ and $\tau_k \to 0$ as $k \to \infty$, and assume that $V(\gamma_k)$ is available. Following [4], the idea is to obtain a superlinear rate of convergence for our algorithm, i.e., given $\gamma_k$, the updated value $\gamma_{k+1}$ should ideally satisfy
$$V_\infty - V(\gamma_{k+1}) \le \tau_k\,\big(V_\infty - V(\gamma_k)\big), \quad (3.60)$$
where $V_\infty := \lim_{\gamma\to\infty} V(\gamma)$. Since $V_\infty$ and $V(\gamma_{k+1})$ are unknowns, we approximate these values by $\lim_{\gamma\to\infty} m(\gamma)$ and $m(\gamma_{k+1})$, respectively. Hereafter, we use the notation $C_{1,k}$, $C_{2,k}$ and $G_k$ for the coefficients of the model function (3.55) associated with $\gamma_k$. Further, note that $\lim_{\gamma\to\infty} m(\gamma) = C_{1,k}$. Thus, (3.60) is replaced by
$$C_{1,k} - m(\gamma_{k+1}) \le \tau_k\,\big(C_{1,k} - m(\gamma_k)\big). \quad (3.61)$$
Calling $\beta_k := \tau_k\,(C_{1,k} - m(\gamma_k))$ and solving the equation $C_{1,k} - m(\gamma_{k+1}) = \beta_k$, we obtain that
$$\gamma_{k+1} = \frac{D_k}{2} + \sqrt{\frac{D_k^2}{4} + \frac{\mu\,G_k}{\beta_k}}, \quad\text{where } D_k = \frac{C_{2,k}+G_k}{\beta_k} - \mu. \quad (3.62)$$
Next, we state a path-following algorithm which uses the update strategy for $\gamma$ given by (3.62).

Algorithm PF.
1. Select $\gamma_r$ and compute $V(\gamma_r)$. Choose $\gamma_0 > \max(M, \gamma_r)$ and set $k = 0$.
2. Solve
$$\begin{cases} a(y_k, v) + (q_k, \nabla v)_{L^2(\Omega)} - (f, v)_{L^2(\Omega)} = 0, & \text{for all } v \in H_0^1(\Omega),\\[1mm] \max\big(g,\ \gamma_k\,|\nabla y_k(x)|\big)\,q_k(x) = g\,\gamma_k\,\nabla y_k(x), & \text{a.e. in } \Omega. \end{cases} \quad (3.63)$$
3. Compute $V(\gamma_k)$, $\dot V(\gamma_k)$ and update $\gamma$ by using (3.62).
4. Stop, or set $k := k+1$ and go to step 2.

4. Semismooth Newton method

In this section we state an algorithm for the efficient solution of (3.63). Since no smoothing operation takes place in the complementarity function in (3.63), it is not possible to get Newton differentiability in infinite

dimensions (see [27], Sect. 3.3). Therefore, we consider a discretized version of system (3.63), and propose a semismooth Newton method to solve this problem. Specifically, we state a primal-dual scheme to solve system (3.63) and prove local superlinear convergence of the method. By involving the primal and the dual variables in the same algorithm, we compute the solutions to the discrete versions of $(\mathcal{P}_\gamma)$ and $(\mathcal{P}_\gamma^*)$ simultaneously. The algorithm proposed is a particular case of the Newton type algorithms developed in [6]. Let us introduce the definition of Newton differentiability.

Definition 4.1. Let $X$ and $Z$ be two Banach spaces. The function $F: X \to Z$ is called Newton differentiable if there exists a family of generalized derivatives $G: X \to \mathcal{L}(X,Z)$ such that
$$\lim_{h\to 0}\frac{1}{\|h\|_X}\,\big\|F(x+h) - F(x) - G(x+h)\,h\big\|_Z = 0.$$

Throughout this section we denote discretized quantities by the superscript $h$. For a vector $v \in \mathbb{R}^n$ we denote by $D(v) := \mathrm{diag}(v)$ the $n\times n$ diagonal matrix with diagonal entries $v_i$. Besides that, we denote by $\odot$ the Hadamard product of vectors, i.e., $v \odot w := (v_1 w_1, \dots, v_n w_n)^\top$. We use a finite element approximation of system (3.63) and consider the spaces
$$V^h := \big\{\eta \in C(\bar\Omega) : \eta|_T \in \Pi_1,\ T \in \mathcal{T}^h\big\}, \qquad W^h := \big\{q^h = (q_1^h, q_2^h) \in L^2(\Omega) : q_1^h|_T,\ q_2^h|_T \in \Pi_0,\ T \in \mathcal{T}^h\big\},$$
to approximate the velocity $y^h$ and the multiplier $q^h$, respectively. Here, $\Pi_k$ denotes the set of polynomials of degree less than or equal to $k$ and $\mathcal{T}^h$ denotes a regular triangulation of $\Omega$. Thus, the discrete analogue of (3.63) is given by
$$\begin{cases} A_\mu^h\,y + B^h\,q - f^h = 0,\\[1mm] \max\big(g\,e^h,\ \xi(\gamma\nabla^h y)\big) \odot q - g\,\gamma\,\nabla^h y = 0, \end{cases} \quad (4.1)$$
for $\gamma > 0$, where $A_\mu^h \in \mathbb{R}^{n\times n}$ is the stiffness matrix, $e^h \in \mathbb{R}^{2m}$ is the vector of all ones and $B^h \in \mathbb{R}^{n\times 2m}$ is obtained in the usual way from the bilinear form $(\cdot,\cdot)_{L^2(\Omega)}$ and the basis functions of $V^h$ and $W^h$. Here, $y^h \in \mathbb{R}^n$ and $q^h \in \mathbb{R}^{2m}$ are the coefficient vectors of the approximated regularized primal and dual solutions $y^h \in V^h$ and $q^h \in W^h$, respectively.
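As an illustration of the pointwise complementarity relation in (4.1), the following sketch checks numerically that the regularized dual variable $q = g\gamma\nabla y / \max(g, \gamma|\nabla y|)$, as used in step 2 of Algorithm PF, satisfies the second block of (4.1) exactly. The code is ours, not taken from the paper: the $m$ triangle-wise gradients and multipliers are stored as $(m, 2)$ arrays rather than stacked vectors of length $2m$.

```python
import numpy as np

def comp_residual(grad_y, q, g, gamma):
    # Second block of (4.1), evaluated triangle-wise:
    # max(g, gamma*|grad y|) * q - g*gamma*grad y.
    # grad_y and q are (m, 2) arrays, one row per triangle.
    norms = np.linalg.norm(gamma * grad_y, axis=1, keepdims=True)
    return np.maximum(g, norms) * q - g * gamma * grad_y

# The regularized dual q = g*gamma*grad_y / max(g, gamma*|grad_y|)
# solves the relation exactly and is pointwise feasible (|q_i| <= g):
rng = np.random.default_rng(0)
grad_y = rng.standard_normal((5, 2))
g, gamma = 1.0, 1e3
norms = np.linalg.norm(gamma * grad_y, axis=1, keepdims=True)
q = g * gamma * grad_y / np.maximum(g, norms)
```

On triangles where $\gamma|\nabla y| \ge g$ (active set), $q$ has norm exactly $g$; elsewhere $q = \gamma\nabla y$.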
Further, we construct the right hand side $f^h$ using the basis functions $\varphi_i \in V^h$, $i = 1,\dots,n$ (see [], Sect. 6). The discrete version of the gradient is given by
$$\nabla^h := \begin{pmatrix} \nabla_1^h \\ \nabla_2^h \end{pmatrix} \in \mathbb{R}^{2m\times n}, \quad (4.2)$$
where $(\nabla_1^h)_{ki} := \frac{\partial\varphi_i(x)}{\partial x_1}\big|_{T_k}$ and $(\nabla_2^h)_{ki} := \frac{\partial\varphi_i(x)}{\partial x_2}\big|_{T_k}$, for $i = 1,\dots,n$; $k = 1,\dots,m$. Note that $\frac{\partial\varphi_i(x)}{\partial x_1}$ and $\frac{\partial\varphi_i(x)}{\partial x_2}$ are constant on each triangle $T_k$. Consequently, we obtain that $\nabla^h y$ is the coefficient vector of $\nabla y^h(x)$. Hereafter, the matrix $A_\mu^h$ is assumed to be symmetric and positive definite. The function $\xi: \mathbb{R}^{2m} \to \mathbb{R}^{2m}$ is defined by
$$(\xi(p))_i = (\xi(p))_{i+m} := \big|(p_i, p_{i+m})\big| \quad\text{for } p \in \mathbb{R}^{2m},\ i = 1,\dots,m.$$
System (4.1) can also be written as the following operator equation:
$$F(y^h, q^h) := \begin{bmatrix} A_\mu^h\,y^h + B^h\,q^h - f^h \\[1mm] \max\big(g\,e^h,\ \xi(\gamma\nabla^h y^h)\big) \odot q^h - g\,\gamma\,\nabla^h y^h \end{bmatrix} = 0. \quad (4.3)$$
It is well known (see e.g. [7,27]) that the max-operator and the norm function $\xi$ involved in (4.3) are semismooth. Furthermore, this is also true for the composition of semismooth functions that arises in (4.3). A particular

element of the generalized Jacobian of $\max(0,\cdot): \mathbb{R}^N \to \mathbb{R}^N$ is the diagonal matrix $G_{\max} \in \mathbb{R}^{N\times N}$ defined by
$$(G_{\max}(v))_{ii} := \begin{cases} 1 & \text{if } v_i \ge 0,\\ 0 & \text{if } v_i < 0, \end{cases} \quad\text{for } 1 \le i \le N. \quad (4.4)$$
Consequently, given approximations $y_k^h$ and $q_k^h$, the Newton step for (4.3) at $(y_k^h, q_k^h)$ is given by:
$$\begin{bmatrix} A_\mu^h & B^h \\[1mm] \big(\chi_{\mathcal{A}_{k+1}}\,D(q_k^h)\,P^h(\gamma\nabla^h y_k^h) - g\,I_{2m}\big)\,\gamma\nabla^h & D(m_k^h) \end{bmatrix} \begin{bmatrix} \delta_y \\ \delta_q \end{bmatrix} = \begin{bmatrix} -A_\mu^h\,y_k^h - B^h\,q_k^h + f^h \\[1mm] -D(m_k^h)\,q_k^h + g\,\gamma\,\nabla^h y_k^h \end{bmatrix}, \quad (4.5)$$
where $m_k^h := \max\big(g\,e^h,\ \xi(\gamma\nabla^h y_k^h)\big) \in \mathbb{R}^{2m}$, and $\chi_{\mathcal{A}_{k+1}} = D(t_k^h) \in \mathbb{R}^{2m\times 2m}$ with
$$(t_k^h)_i := \begin{cases} 1 & \text{if } \xi(\gamma\nabla^h y_k^h)_i \ge g,\\ 0 & \text{else.} \end{cases} \quad (4.6)$$
Further, $P^h(p) \in \mathbb{R}^{2m\times 2m}$ denotes the generalized Jacobian of $\xi$, i.e., for $p \in \mathbb{R}^{2m}$ we have that
$$P^h(p) := \begin{pmatrix} \dfrac{\partial\xi_i}{\partial p_j} & \dfrac{\partial\xi_i}{\partial p_{j+m}} \\[2mm] \dfrac{\partial\xi_{i+m}}{\partial p_j} & \dfrac{\partial\xi_{i+m}}{\partial p_{j+m}} \end{pmatrix},$$
where the block diagonal matrices are defined by
$$\frac{\partial\xi_i}{\partial p_j} = \frac{\partial\xi_{i+m}}{\partial p_j} := \delta_{ij}\begin{cases} \dfrac{p_i}{|(p_i, p_{i+m})|} & \text{if } (p_i, p_{i+m}) \ne 0,\\[1mm] \varepsilon_1 & \text{if } (p_i, p_{i+m}) = 0, \end{cases} \qquad \frac{\partial\xi_i}{\partial p_{j+m}} = \frac{\partial\xi_{i+m}}{\partial p_{j+m}} := \delta_{ij}\begin{cases} \dfrac{p_{i+m}}{|(p_i, p_{i+m})|} & \text{if } (p_i, p_{i+m}) \ne 0,\\[1mm] \varepsilon_2 & \text{if } (p_i, p_{i+m}) = 0, \end{cases}$$
for $i, j = 1,\dots,m$, with $\varepsilon_1$ and $\varepsilon_2$ real numbers such that $|(\varepsilon_1, \varepsilon_2)| \le 1$. From the invertibility of $D(m_k^h)$ we obtain that
$$\delta_q = -q_k^h + D(m_k^h)^{-1}\big(g\,\gamma\,\nabla^h y_k^h + C_k^h\,\gamma\nabla^h\,\delta_y\big), \quad (4.7)$$
where $C_k^h := g\,I_{2m} - \chi_{\mathcal{A}_{k+1}}\,D(q_k^h)\,P^h(\gamma\nabla^h y_k^h)$. Thus, the remaining equation for $\delta_y$ can be written as
$$\Xi_{\gamma,k}\,\delta_y = \eta_{\gamma,k}, \quad (4.8)$$
where the matrix $\Xi_{\gamma,k}$ and the right hand side $\eta_{\gamma,k}$ are given by
$$\Xi_{\gamma,k} := A_\mu^h + \gamma\,B^h\,D(m_k^h)^{-1}\,C_k^h\,\nabla^h, \qquad \eta_{\gamma,k} := -A_\mu^h\,y_k^h + f^h - g\,\gamma\,B^h\,D(m_k^h)^{-1}\,\nabla^h y_k^h.$$
It can be verified (cf. [6], p. 8) that the matrix $\Xi_{\gamma,k}$ is symmetric at the solution. Thanks to [6], Lemma 3.3, we know that the condition $\xi(q_k^h)_i \le g$, for $i = 1,\dots,m$, must hold to guarantee the positive definiteness of the matrix $C_k^h$. Moreover, we can assert that if the last condition is fulfilled, the matrix $\Xi_{\gamma,k}$ is positive definite, $\lambda_{\min}(\Xi_{\gamma,k}) \ge \lambda_{\min}(A_\mu^h) > 0$, and the sequence $\{\|\Xi_{\gamma,k}^{-1}\|\}_{k\in\mathbb{N}}$ is uniformly bounded.
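Newton differentiability in the sense of Definition 4.1 and the generalized derivative (4.4) can be illustrated on a one-dimensional toy equation; the example and all names below are ours, not from the paper. We solve $\max(0,x) + 2x - 3 = 0$ (root $x^* = 1$) with a semismooth Newton iteration whose generalized derivative is $3$ for $x \ge 0$ and $2$ for $x < 0$:

```python
def ssn_scalar(F, G, x0, tol=1e-12, maxit=50):
    # Semismooth Newton iteration x_{k+1} = x_k - F(x_k)/G(x_k),
    # where G is a generalized derivative in the sense of Definition 4.1.
    x = x0
    for _ in range(maxit):
        if abs(F(x)) < tol:
            break
        x = x - F(x) / G(x)
    return x

# F is piecewise smooth; its kink at x = 0 makes it only semismooth.
F = lambda x: max(0.0, x) + 2.0 * x - 3.0
G = lambda x: 3.0 if x >= 0 else 2.0   # slope of max(0, .) chosen as in (4.4)
root = ssn_scalar(F, G, x0=-5.0)
```

Starting from $x_0 = -5$ the iteration identifies the correct smooth branch in one step and then terminates at the exact root; the matrix version of this local fast convergence is the content of Theorem 4.1.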

Due to these results, we know that if $\xi(q_k^h)_i \le g$ holds for all $i = 1,\dots,m$, the solution of (4.8) exists for all $k$ and it is a descent direction for the objective functional in $(\mathcal{P}_\gamma)$. However, this condition is unlikely to be fulfilled by all $1 \le i \le m$ and $k \in \mathbb{N}$. To overcome this difficulty, Hintermüller and Stadler [6] constructed a globalized semismooth Newton algorithm by modifying the term involving $D(q_k^h)\,P^h(\gamma\nabla^h y_k^h)$ for indices $i$ in which $\xi(q_k^h)_i > g$. This is done by replacing $q_k^h$ by
$$\frac{g}{\max\big(g,\ \xi(q_k^h)_i\big)}\,\big((q_k^h)_i,\ (q_k^h)_{i+m}\big)$$
when assembling the system matrix $\Xi_{\gamma,k}$. Thus, we guarantee that $\xi(q_k^h)_i \le g$ for $i = 1,\dots,m$. Further, we obtain a modified system matrix, denoted by $\Xi_{\gamma,k}^+$, which replaces $\Xi_{\gamma,k}$ in (4.8). This new matrix is positive definite for all $\gamma$ and the sequence $\{\|(\Xi_{\gamma,k}^+)^{-1}\|\}_{k\in\mathbb{N}}$ is uniformly bounded.

Algorithm SSN.
1. Initialize $(y_0^h, q_0^h) \in \mathbb{R}^n \times \mathbb{R}^{2m}$ and set $k = 0$.
2. Estimate the active sets, i.e., determine $\chi_{\mathcal{A}_{k+1}} \in \mathbb{R}^{2m\times 2m}$.
3. Compute $\Xi_{\gamma,k}^+$ if the dual variable is not feasible for all $i = 1,\dots,m$; otherwise set $\Xi_{\gamma,k}^+ = \Xi_{\gamma,k}$. Solve
$$\Xi_{\gamma,k}^+\,\delta_y = \eta_{\gamma,k}.$$
4. Compute $\delta_q$ from (4.7).
5. Update $y_{k+1}^h := y_k^h + \delta_y$ and $q_{k+1}^h := q_k^h + \delta_q$.
6. Stop, or set $k := k+1$ and go to step 2.

Following [6], Lemma 3.5, we know that $q_k^h \to q_\gamma^h$ and $y_k^h \to y_\gamma^h$ implies that $\Xi_{\gamma,k}^+$ converges to $\Xi_{\gamma,k}$ as $k \to \infty$. Thanks to this result we can state the following theorem.

Theorem 4.1. The iterates $(y_k^h, q_k^h)$ of Algorithm SSN converge superlinearly to $(y_\gamma^h, q_\gamma^h)$, provided that $(y_0^h, q_0^h)$ is sufficiently close to $(y_\gamma^h, q_\gamma^h)$.

Proof. We refer the reader to [6], Theorem 3.6, for the complete proof. $\square$

The projection procedure, which yields the matrix $\Xi_{\gamma,k}^+$, assures that in each iteration of Algorithm SSN, $\delta_y = (\Xi_{\gamma,k}^+)^{-1}\,\eta_{\gamma,k}$ constitutes a descent direction for the objective functional in $(\mathcal{P}_\gamma)$. Additionally, steps 3 and 4 of the algorithm involve a decoupled system of equations for $\delta_y$ and $\delta_q$, which is obtained directly, due to the proposed regularization and the structure of the method. Moreover, the computation of $\delta_q$ through (4.7) turns out to be computationally efficient, since only the inverse of a diagonal matrix is needed.

5. Numerical results

In this section we present numerical experiments which illustrate the main properties of the path-following and semismooth Newton methods applied to the numerical solution of laminar Bingham fluids. The experiments have been carried out for a constant function $f$, representing the linear decay of the pressure in the pipe. The parameter $\gamma$ is updated using the path-following strategy defined in Section 3.3. Unless we specify the contrary, we stop Algorithm PF as soon as $r_k^h := (r_k^{1,h}, r_k^{2,h}, r_k^{3,h})$ is of the order of $10^{-7}$, where
$$r_k^{1,h} = \big\|y_k^h + (A_\mu^h)^{-1}(B^h q_k^h - f^h)\big\|_{H^1,h} \,\big/\, \|f^h\|_{L^2,h},$$
$$r_k^{2,h} = \big\|\max\big(g\,e^h,\ \xi(q_k^h + \nabla^h y_k^h)\big) \odot q_k^h - g\,(q_k^h + \nabla^h y_k^h)\big\|_{L^2,h},$$
$$r_k^{3,h} = \big\|\max\big(0,\ \xi(q_k^h) - g\big)\big\|_{L^2,h},$$

with $A_\mu^h$, $B^h$, $\nabla^h$ and $\xi$ defined as in (4.1). Here, $\|\cdot\|_{H^1,h}$ and $\|\cdot\|_{L^2,h}$ denote the discrete versions of the norms in $H^1(\Omega)$ and $L^2(\Omega)$, respectively. $r_k^{1,h}$ and $r_k^{2,h}$ describe the improvement of the algorithm towards the solution of the discrete version of the optimality system $(\mathcal{S})$, while $r_k^{3,h}$ measures the feasibility of $q_k^h$. We use the mass matrix to calculate the integrals related to the space $V^h$ and a composite trapezoidal formula for the integrals associated with the space $W^h$. Additionally, we use the sequence $\tau_k = 0.1^k$.

Figure 1. Example 1: flow of a Bingham fluid defined by $g = 1$, $\mu = 1$ and $f = 10$ (left) and velocity profile along the diagonal, $y(x_1, x_1)$ (right).

Figure 2. Example 1: final inactive set $\mathcal{I}_\gamma$.

Example 1. In our first example, we focus on the behavior of Algorithm PF. We consider $\Omega := \,]0,1[\,\times\,]0,1[$ and compute the flow of a Bingham fluid defined by $\mu = g = 1$ and $f = 10$. We work with a uniform triangulation, with $h = 0.0046$ ($\sim 1/128$), where $h$ is the radius of the inscribed circumferences of the triangles in the mesh. In this example, we use the initial values $\gamma_r = 1$ and $\gamma_0 = 10$. The inner Algorithm SSN for $\gamma_0$ is initialized with the solution of the Poisson problem $A_\mu^h y^h = f^h$ together with $q^h = 0$, and is finished if the residual $\|\delta_y\|$ is lower than $\epsilon$, where $\epsilon$ denotes the machine accuracy.

The resulting velocity function is displayed in Figure 1 and the final inactive set in Figure 2. The value of the regularization parameter $\gamma_k$ reaches a factor of $10^{13}$ in three iterations and we obtain a maximum velocity of 0.29. The graphics illustrate the expected mechanical properties of the material, i.e., since the shear stress transmitted by a fluid layer decreases toward the center of the pipe, the Bingham fluid moves like a solid in that sector. Besides that, Figure 1 shows that there are no stagnant zones in the flow (see [22]).
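The dual feasibility measure $r^{3,h}$ used in the stopping criterion above, and the projection used to build $\Xi^+$ in Algorithm SSN, admit compact vectorized forms. The following sketch is ours (not the paper's code) and assumes the stacked layout $q = (q_1,\dots,q_m, q_{m+1},\dots,q_{2m})$ of (4.1), with plain Euclidean norms in place of the discrete $L^2$ norm:

```python
import numpy as np

def xi(q):
    # Discrete norm function of (4.3): pairwise norms of (q_i, q_{i+m}).
    m = q.size // 2
    return np.hypot(q[:m], q[m:])

def project_dual(q, g):
    # Scaling g / max(g, xi(q)_i) applied to each pair (q_i, q_{i+m});
    # this is the replacement used when assembling Xi^+ in Algorithm SSN.
    m = q.size // 2
    scale = g / np.maximum(g, xi(q))
    return np.concatenate([q[:m] * scale, q[m:] * scale])

def feasibility_residual(q, g):
    # Analogue of r^3: size of the constraint violation max(0, xi(q) - g).
    return np.linalg.norm(np.maximum(0.0, xi(q) - g))
```

After projection every pair satisfies $\xi(q)_i \le g$, so the feasibility residual vanishes, while already-feasible pairs are left unchanged.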

Table 1. $\gamma$-updates and convergence behavior of Algorithm PF for a Bingham fluid defined by $g = 1$, $\mu = 1$ and $f = 10$. Columns: iteration number, $\gamma_k$, $r_k^h$, $\|y_{k+1}^h - y_k^h\|_{H^1,h}$ and $\nu_k^h$. [Numerical entries not recoverable from the source.]

Table 2. Number of iterations of Algorithm SSN in each path-following iteration. $\gamma_k$: 1.8e+03, 1.82e+07, 1.9e+13; #it.: 7, 6, 1; total: $\Sigma = 14$.

Table 3. Number of iterations of Algorithm SSN without any automatic updating strategy. $\gamma$: 1.8e+03, 1.82e+07, 1.9e+13; #it. SSN: 13, 33, fails to converge.

In Table 1 we report the values of the regularization parameter $\gamma_k$, the residuals $r_k^h$ and the quotients
$$\nu_k^h = \frac{\|y_{k+1}^h - y_k^h\|_{H^1,h}}{\|y_k^h - y_{k-1}^h\|_{H^1,h}}\cdot$$
From the behavior of $r_k^h$, it is possible to observe a superlinear convergence rate of Algorithm PF, in accordance with the strategy proposed in (3.60). Furthermore, the behavior of $\nu_k^h$ implies a superlinear convergence rate of $y_k$ towards the solution as $k$ increases. These data are depicted in Figure 3, where the two magnitudes are plotted in a logarithmic scale.

In Table 2, we show the number of inner iterations that Algorithm SSN needs to achieve convergence in each iteration of Algorithm PF, and the total number of SSN iterations needed. It can be observed that the path-following strategy allows us to reach large values of $\gamma_k$ and, consequently, to obtain a better approximation of the solution of the problem. In contrast to these results, in Table 3 we show the number of iterations that Algorithm SSN needs to achieve convergence without any updating strategy. In this case, the algorithm does not only need more iterations for each value of $\gamma_k$, but also fails to converge for large values of it.

Finally, in Figure 4 we plot and compare the path value functional $V(\gamma)$ (solid line) and the model functions $m(\gamma_k)$ calculated from the values $C_{1,k}$, $C_{2,k}$ and $G_k$ given in each iteration of the algorithm. It can be observed that, as $k$ increases, $m(\gamma_k)$ becomes a better model for $V(\gamma)$.
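The coefficient computation and the update (3.62) behind these model functions can be sketched numerically as follows. This is our own illustration, not the paper's implementation: the $3\times 3$ interpolation system for $(C_1, C_2, G)$ is solved directly with a linear solver instead of evaluating the closed formula (3.59), and the test data are synthetic.

```python
import numpy as np

def fit_model(gamma, gamma_r, V, V_r, dV, mu):
    # Fit m(s) = C1 - C2/(mu + s) - G/s to the interpolation conditions
    # m(gamma) = V, m(gamma_r) = V_r, m'(gamma) = dV; the system is
    # linear in the unknowns (C1, C2, G).
    A = np.array([
        [1.0, -1.0 / (mu + gamma),       -1.0 / gamma],
        [1.0, -1.0 / (mu + gamma_r),     -1.0 / gamma_r],
        [0.0,  1.0 / (mu + gamma) ** 2,   1.0 / gamma ** 2],
    ])
    return np.linalg.solve(A, np.array([V, V_r, dV]))  # C1, C2, G

def next_gamma(C2, G, mu, beta):
    # Solve C1 - m(s) = beta, i.e. C2/(mu + s) + G/s = beta, for the
    # positive root: this is exactly the update (3.62).
    D = (C2 + G) / beta - mu
    return D / 2.0 + np.sqrt(D ** 2 / 4.0 + mu * G / beta)
```

With data generated from an exact model function, the fit recovers the coefficients and the update satisfies $C_2/(\mu+\gamma_{k+1}) + G/\gamma_{k+1} = \beta_k$ by construction; the smaller $\beta_k$ is, the larger the new $\gamma_{k+1}$.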
However, even for small values of $\gamma_k$, the model functions stay close to the value functional.

Example 2. In this example, we compare the numerical behavior of Algorithm PF versus a penalty-Newton-Uzawa-conjugate gradient method proposed by Dean et al. in [6]. We consider the flow of a Bingham fluid in the cross section of a cylindrical pipe, given by the disk defined by $\Omega := \{x = (x_1, x_2) \in \mathbb{R}^2 : x_1^2 + x_2^2 < R^2\}$, where $R > 0$. It is well known (see [2], Ex. 2, p. 8) that in


More information

Convex Analysis and Optimization Chapter 2 Solutions

Convex Analysis and Optimization Chapter 2 Solutions Convex Analysis and Optimization Chapter 2 Solutions Dimitri P. Bertsekas with Angelia Nedić and Asuman E. Ozdaglar Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

Lecture 15 Newton Method and Self-Concordance. October 23, 2008

Lecture 15 Newton Method and Self-Concordance. October 23, 2008 Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications

More information

FIXED POINT ITERATIONS

FIXED POINT ITERATIONS FIXED POINT ITERATIONS MARKUS GRASMAIR 1. Fixed Point Iteration for Non-linear Equations Our goal is the solution of an equation (1) F (x) = 0, where F : R n R n is a continuous vector valued mapping in

More information

Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL)

Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL) Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization Nick Gould (RAL) x IR n f(x) subject to c(x) = Part C course on continuoue optimization CONSTRAINED MINIMIZATION x

More information

A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions

A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions Angelia Nedić and Asuman Ozdaglar April 16, 2006 Abstract In this paper, we study a unifying framework

More information

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov,

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov, LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES Sergey Korotov, Institute of Mathematics Helsinki University of Technology, Finland Academy of Finland 1 Main Problem in Mathematical

More information

Linear and non-linear programming

Linear and non-linear programming Linear and non-linear programming Benjamin Recht March 11, 2005 The Gameplan Constrained Optimization Convexity Duality Applications/Taxonomy 1 Constrained Optimization minimize f(x) subject to g j (x)

More information

An optimal control problem for a parabolic PDE with control constraints

An optimal control problem for a parabolic PDE with control constraints An optimal control problem for a parabolic PDE with control constraints PhD Summer School on Reduced Basis Methods, Ulm Martin Gubisch University of Konstanz October 7 Martin Gubisch (University of Konstanz)

More information

Convex Functions and Optimization

Convex Functions and Optimization Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized

More information

A REVIEW OF OPTIMIZATION

A REVIEW OF OPTIMIZATION 1 OPTIMAL DESIGN OF STRUCTURES (MAP 562) G. ALLAIRE December 17th, 2014 Department of Applied Mathematics, Ecole Polytechnique CHAPTER III A REVIEW OF OPTIMIZATION 2 DEFINITIONS Let V be a Banach space,

More information

Lecture 8 Plus properties, merit functions and gap functions. September 28, 2008

Lecture 8 Plus properties, merit functions and gap functions. September 28, 2008 Lecture 8 Plus properties, merit functions and gap functions September 28, 2008 Outline Plus-properties and F-uniqueness Equation reformulations of VI/CPs Merit functions Gap merit functions FP-I book:

More information

An introduction to some aspects of functional analysis

An introduction to some aspects of functional analysis An introduction to some aspects of functional analysis Stephen Semmes Rice University Abstract These informal notes deal with some very basic objects in functional analysis, including norms and seminorms

More information

1. Bounded linear maps. A linear map T : E F of real Banach

1. Bounded linear maps. A linear map T : E F of real Banach DIFFERENTIABLE MAPS 1. Bounded linear maps. A linear map T : E F of real Banach spaces E, F is bounded if M > 0 so that for all v E: T v M v. If v r T v C for some positive constants r, C, then T is bounded:

More information

Affine covariant Semi-smooth Newton in function space

Affine covariant Semi-smooth Newton in function space Affine covariant Semi-smooth Newton in function space Anton Schiela March 14, 2018 These are lecture notes of my talks given for the Winter School Modern Methods in Nonsmooth Optimization that was held

More information

Optimization Theory. A Concise Introduction. Jiongmin Yong

Optimization Theory. A Concise Introduction. Jiongmin Yong October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

More information

Introduction to Nonlinear Stochastic Programming

Introduction to Nonlinear Stochastic Programming School of Mathematics T H E U N I V E R S I T Y O H F R G E D I N B U Introduction to Nonlinear Stochastic Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio SPS

More information

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL)

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL) Part 3: Trust-region methods for unconstrained optimization Nick Gould (RAL) minimize x IR n f(x) MSc course on nonlinear optimization UNCONSTRAINED MINIMIZATION minimize x IR n f(x) where the objective

More information

Key words. preconditioned conjugate gradient method, saddle point problems, optimal control of PDEs, control and state constraints, multigrid method

Key words. preconditioned conjugate gradient method, saddle point problems, optimal control of PDEs, control and state constraints, multigrid method PRECONDITIONED CONJUGATE GRADIENT METHOD FOR OPTIMAL CONTROL PROBLEMS WITH CONTROL AND STATE CONSTRAINTS ROLAND HERZOG AND EKKEHARD SACHS Abstract. Optimality systems and their linearizations arising in

More information

Convergence rate estimates for the gradient differential inclusion

Convergence rate estimates for the gradient differential inclusion Convergence rate estimates for the gradient differential inclusion Osman Güler November 23 Abstract Let f : H R { } be a proper, lower semi continuous, convex function in a Hilbert space H. The gradient

More information

L p Functions. Given a measure space (X, µ) and a real number p [1, ), recall that the L p -norm of a measurable function f : X R is defined by

L p Functions. Given a measure space (X, µ) and a real number p [1, ), recall that the L p -norm of a measurable function f : X R is defined by L p Functions Given a measure space (, µ) and a real number p [, ), recall that the L p -norm of a measurable function f : R is defined by f p = ( ) /p f p dµ Note that the L p -norm of a function f may

More information

Nonlinear Programming

Nonlinear Programming Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week

More information

1 Continuity Classes C m (Ω)

1 Continuity Classes C m (Ω) 0.1 Norms 0.1 Norms A norm on a linear space X is a function : X R with the properties: Positive Definite: x 0 x X (nonnegative) x = 0 x = 0 (strictly positive) λx = λ x x X, λ C(homogeneous) x + y x +

More information

An inexact subgradient algorithm for Equilibrium Problems

An inexact subgradient algorithm for Equilibrium Problems Volume 30, N. 1, pp. 91 107, 2011 Copyright 2011 SBMAC ISSN 0101-8205 www.scielo.br/cam An inexact subgradient algorithm for Equilibrium Problems PAULO SANTOS 1 and SUSANA SCHEIMBERG 2 1 DM, UFPI, Teresina,

More information

Best approximations in normed vector spaces

Best approximations in normed vector spaces Best approximations in normed vector spaces Mike de Vries 5699703 a thesis submitted to the Department of Mathematics at Utrecht University in partial fulfillment of the requirements for the degree of

More information

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term;

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term; Chapter 2 Gradient Methods The gradient method forms the foundation of all of the schemes studied in this book. We will provide several complementary perspectives on this algorithm that highlight the many

More information

Solving Dual Problems

Solving Dual Problems Lecture 20 Solving Dual Problems We consider a constrained problem where, in addition to the constraint set X, there are also inequality and linear equality constraints. Specifically the minimization problem

More information

10 Numerical methods for constrained problems

10 Numerical methods for constrained problems 10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside

More information

Convex Optimization and Modeling

Convex Optimization and Modeling Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Solving Distributed Optimal Control Problems for the Unsteady Burgers Equation in COMSOL Multiphysics

Solving Distributed Optimal Control Problems for the Unsteady Burgers Equation in COMSOL Multiphysics Excerpt from the Proceedings of the COMSOL Conference 2009 Milan Solving Distributed Optimal Control Problems for the Unsteady Burgers Equation in COMSOL Multiphysics Fikriye Yılmaz 1, Bülent Karasözen

More information

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers.

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers. Chapter 3 Duality in Banach Space Modern optimization theory largely centers around the interplay of a normed vector space and its corresponding dual. The notion of duality is important for the following

More information

Levenberg-Marquardt method in Banach spaces with general convex regularization terms

Levenberg-Marquardt method in Banach spaces with general convex regularization terms Levenberg-Marquardt method in Banach spaces with general convex regularization terms Qinian Jin Hongqi Yang Abstract We propose a Levenberg-Marquardt method with general uniformly convex regularization

More information

Numerical Methods for Large-Scale Nonlinear Systems

Numerical Methods for Large-Scale Nonlinear Systems Numerical Methods for Large-Scale Nonlinear Systems Handouts by Ronald H.W. Hoppe following the monograph P. Deuflhard Newton Methods for Nonlinear Problems Springer, Berlin-Heidelberg-New York, 2004 Num.

More information

Numerical algorithms for one and two target optimal controls

Numerical algorithms for one and two target optimal controls Numerical algorithms for one and two target optimal controls Sung-Sik Kwon Department of Mathematics and Computer Science, North Carolina Central University 80 Fayetteville St. Durham, NC 27707 Email:

More information

David Hilbert was old and partly deaf in the nineteen thirties. Yet being a diligent

David Hilbert was old and partly deaf in the nineteen thirties. Yet being a diligent Chapter 5 ddddd dddddd dddddddd ddddddd dddddddd ddddddd Hilbert Space The Euclidean norm is special among all norms defined in R n for being induced by the Euclidean inner product (the dot product). A

More information

arxiv: v1 [math.oc] 21 Apr 2016

arxiv: v1 [math.oc] 21 Apr 2016 Accelerated Douglas Rachford methods for the solution of convex-concave saddle-point problems Kristian Bredies Hongpeng Sun April, 06 arxiv:604.068v [math.oc] Apr 06 Abstract We study acceleration and

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

A Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization

A Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization A Second Full-Newton Step On Infeasible Interior-Point Algorithm for Linear Optimization H. Mansouri C. Roos August 1, 005 July 1, 005 Department of Electrical Engineering, Mathematics and Computer Science,

More information

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization

More information

On the p-laplacian and p-fluids

On the p-laplacian and p-fluids LMU Munich, Germany Lars Diening On the p-laplacian and p-fluids Lars Diening On the p-laplacian and p-fluids 1/50 p-laplacian Part I p-laplace and basic properties Lars Diening On the p-laplacian and

More information

Examination paper for TMA4180 Optimization I

Examination paper for TMA4180 Optimization I Department of Mathematical Sciences Examination paper for TMA4180 Optimization I Academic contact during examination: Phone: Examination date: 26th May 2016 Examination time (from to): 09:00 13:00 Permitted

More information

2 Two-Point Boundary Value Problems

2 Two-Point Boundary Value Problems 2 Two-Point Boundary Value Problems Another fundamental equation, in addition to the heat eq. and the wave eq., is Poisson s equation: n j=1 2 u x 2 j The unknown is the function u = u(x 1, x 2,..., x

More information

Real Analysis Problems

Real Analysis Problems Real Analysis Problems Cristian E. Gutiérrez September 14, 29 1 1 CONTINUITY 1 Continuity Problem 1.1 Let r n be the sequence of rational numbers and Prove that f(x) = 1. f is continuous on the irrationals.

More information

Algorithms for constrained local optimization

Algorithms for constrained local optimization Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained

More information

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

Applications of Linear Programming

Applications of Linear Programming Applications of Linear Programming lecturer: András London University of Szeged Institute of Informatics Department of Computational Optimization Lecture 9 Non-linear programming In case of LP, the goal

More information

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44 Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)

More information

Linear convergence of iterative soft-thresholding

Linear convergence of iterative soft-thresholding arxiv:0709.1598v3 [math.fa] 11 Dec 007 Linear convergence of iterative soft-thresholding Kristian Bredies and Dirk A. Lorenz ABSTRACT. In this article, the convergence of the often used iterative softthresholding

More information

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,

More information

Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems

Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems Naohiko Arima, Sunyoung Kim, Masakazu Kojima, and Kim-Chuan Toh Abstract. In Part I of

More information

Iterative Convex Optimization Algorithms; Part One: Using the Baillon Haddad Theorem

Iterative Convex Optimization Algorithms; Part One: Using the Baillon Haddad Theorem Iterative Convex Optimization Algorithms; Part One: Using the Baillon Haddad Theorem Charles Byrne (Charles Byrne@uml.edu) http://faculty.uml.edu/cbyrne/cbyrne.html Department of Mathematical Sciences

More information

I teach myself... Hilbert spaces

I teach myself... Hilbert spaces I teach myself... Hilbert spaces by F.J.Sayas, for MATH 806 November 4, 2015 This document will be growing with the semester. Every in red is for you to justify. Even if we start with the basic definition

More information

Lecture 4 Lebesgue spaces and inequalities

Lecture 4 Lebesgue spaces and inequalities Lecture 4: Lebesgue spaces and inequalities 1 of 10 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 4 Lebesgue spaces and inequalities Lebesgue spaces We have seen how

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

We describe the generalization of Hazan s algorithm for symmetric programming

We describe the generalization of Hazan s algorithm for symmetric programming ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,

More information

Date: July 5, Contents

Date: July 5, Contents 2 Lagrange Multipliers Date: July 5, 2001 Contents 2.1. Introduction to Lagrange Multipliers......... p. 2 2.2. Enhanced Fritz John Optimality Conditions...... p. 14 2.3. Informative Lagrange Multipliers...........

More information

Continuity of convex functions in normed spaces

Continuity of convex functions in normed spaces Continuity of convex functions in normed spaces In this chapter, we consider continuity properties of real-valued convex functions defined on open convex sets in normed spaces. Recall that every infinitedimensional

More information

g(x) = P (y) Proof. This is true for n = 0. Assume by the inductive hypothesis that g (n) (0) = 0 for some n. Compute g (n) (h) g (n) (0)

g(x) = P (y) Proof. This is true for n = 0. Assume by the inductive hypothesis that g (n) (0) = 0 for some n. Compute g (n) (h) g (n) (0) Mollifiers and Smooth Functions We say a function f from C is C (or simply smooth) if all its derivatives to every order exist at every point of. For f : C, we say f is C if all partial derivatives to

More information

Euler Equations: local existence

Euler Equations: local existence Euler Equations: local existence Mat 529, Lesson 2. 1 Active scalars formulation We start with a lemma. Lemma 1. Assume that w is a magnetization variable, i.e. t w + u w + ( u) w = 0. If u = Pw then u

More information

1. Gradient method. gradient method, first-order methods. quadratic bounds on convex functions. analysis of gradient method

1. Gradient method. gradient method, first-order methods. quadratic bounds on convex functions. analysis of gradient method L. Vandenberghe EE236C (Spring 2016) 1. Gradient method gradient method, first-order methods quadratic bounds on convex functions analysis of gradient method 1-1 Approximate course outline First-order

More information