On Second-order Properties of the Moreau-Yosida Regularization for Constrained Nonsmooth Convex Programs

Size: px
Start display at page:

Download "On Second-order Properties of the Moreau-Yosida Regularization for Constrained Nonsmooth Convex Programs"

Transcription

1 On Second-order Properties of the Moreau-Yosida Regularization for Constrained Nonsmooth Convex Programs Fanwen Meng,2 ; Gongyun Zhao Department of Mathematics National University of Singapore 2 Science Drive 2, Singapore 7543 Republic of Singapore 2 School of Mathematics University of Southampton Highfield, Southampton SO7 BJ Great Britain Abstract In this paper, we attempt to investigate a class of constrained nonsmooth convex optimization problems, that is, piecewise C 2 convex objectives with smooth convex inequality constraints. By using the Moreau-Yosida regularization, we convert these problems into unconstrained smooth convex programs. Then, we investigate the second-order properties of the Moreau-Yosida regularization η. By introducing the (GAIPCQ) qualification, we show that the gradient of the regularized function η is piecewise smooth, thereby, semismooth. Keywords. Moreau-Yosida Regularization, Piecewise C k Functions, Semismooth Functions. AMS Subject Classification (99): 90C25, 65K0, 52A4. Introduction In this paper, we consider the following constrained convex programming: min f 0 (x) s.t. f i (x) 0, i I = {, 2,..., m}, () where f 0, f i, i I, are finite, convex on R n and f 0 is possibly nonsmooth. For nonsmooth programs, many approaches have been presented so far and they are often restricted to the convex unconstrained case. In general, the various approaches Corresponding author.

2 are based on combinations of the following three methods: (i) subgradient methods (e.g., Ermoliev, Kryazhimskii, and Ruszczyńskii (997), Poljak (967)); (ii) bundle techniques (e.g., Kiwiel (990), Lemaréchal, Nemirovskii, and Nesterov (995), Lemaréchal and Strodiot (98)); (iii) Moreau-Yosida regularization (e.g., Fukushima and Qi (996), Lemaréchal and Sagastizábal (997a, 997b), Qi and Chen (997)). In this paper, we would like to investigate some properties of the Moreau-Yosida regularization. For any proper, closed convex function f : R n R {+ }, the Moreau (965) -Yosida (964) regularization of f, η : R n R, is defined by η(x) = min y R n{f(y) + 2 y x 2 M}, (2) where M is a symmetric positive definite n n matrix and z 2 M = zt Mz for any z R n. Let p(x) denote the unique solution of (2). The motivation of the study of Moreau-Yosida regularization is due to the fact that is equivalent to More precisely, the following statements are equivalent: (i) x minimizes f; (ii) p(x) = x; (iii) η(x) = 0; (iv) x minimizes η; (v) f(p(x)) = f(x); (vi) η(x) = f(x). min{f(x) x R n }, (3) min{η(x) x R n }. (4) It is easy to see that η is finite everywhere and convex. A nice property of the Moreau-Yosida regularization is that η is continuously differentiable and its gradient g(x) η(x) = M(x p(x)), (5) is Lipschitz continuous. For more details about Moreau-Yosida regularization, the readers may be referred to Hiriart-Urruty and Lemaréchal (993) and Rockafellar (970). In many situations, besides the first order property mentioned above, we are also interested in the second order property of the Moreau-Yosida regularization, e.g., when we wish to use Newton s method to minimize η(x). In most circumstances, due to the nondifferentiability of f, the second order derivative of η does not exist. Thus we need to investigate a weaker second order property, namely, the semismoothness of g(= η). Under the condition of semismoothness, Qi and Sun (993) derived the superlinear (quadratic) 2

3 convergence of a nonsmooth version of Newton s method for solving a nonsmooth equation. Also, by virtue of the semismoothness of g, Fukushima and Qi (996) have shown that the superlinear convergence can be guaranteed by using approximate solutions of the problem (2) to construct search directions for minimizing η. In addition, the concept of semismoothness plays a central role in the development of optimality conditions in nonsmooth analysis as well, e.g., Chaney (988, 989, 990). Recently, several authors have investigated the semismoothness of g for some class of f, for example, see Lemaréchal and Sagastizábal (994, 997), Meng and Hao (200), Mifflin, Qi and Sun (999), Qi (995), Rockafellar (985), Sun and Han (997), Zhao and Meng (2000). Qi (995) conjectured that g is semismooth under a regularity condition if f is the maximum of several twice continuously differentiable convex functions. Later, Sun and Han (997) gave a proof to this conjecture under a constant rank constraint qualification (CRCQ) for the case where M = (/λ)i and λ is a positive constant. In Meng and Hao (200), the authors investigated the unconstrained case of problem (), where f 0 is assumed to be piecewise C 2. By introducing the (SCRCQ) qualification, which is weaker than (CRCQ), they derived the same result for g as Sun and Han (997). In addition, Mifflin, Qi and Sun (999) investigated the same problem as in Meng and Hao (200). By introducing the (AIPCQ) qualification, which is weaker than (CRCQ) and (SCRCQ), they derived the same result for g as Sun and Han (997). Recently, Zhao and Meng (2000) studied the Lagrangian-dual function of certain problems. Specifically, there are two cases under consideration: (i) linear objectives with either affine inequality or strictly convex inequality constraints; (ii) strictly convex objectives with general convex inequality and affine equality constraints. After deriving the piecewise C 2 -smoothness of the Lagrangiandual function, they showed that the gradient of the Moreau-Yosida regularization of this function is piecewise smooth, thereby, semismooth, under the (AIPCQ) qualification. Notice that all papers mentioned above mainly study the Moreau-Yosida regularization of a continuous (possibly nondifferentiable) function on R n. And, the results derived are applicable to the unconstrained optimization problem. In this paper, we consider the constrained problem () instead. We should point out that the smooth problem min η 0 (x) s.t. f i (x) 0, i I, where η 0 is the Moreau-Yosida regularization of f 0, is not equivalent to (), because η 0 coincides with f 0 only on the set of their unconstrained minima, which may not be the solutions of the constrained minimization problems. Therefore, although the results derived in Mifflin, Qi, and Sun (999) can be easily extended to (6), but they are not applicable to (). Due to this observation, we consider the following generalized function: (6) f(x) = f 0 (x) + δ(x D), (7) where D := {x R n : f i (x) 0, i I} and δ(x D) is the usual indicator function on the 3

4 convex set D (see, Rockafellar (970)). Then, problem () can be written as the following unconstrained problem: min{f(x) x R n }. (8) Problem (8) then is equivalent to where min{η(x) x R n }, (9) η(x) = min y R n{f(y) + 2 y x 2 M}. (0) In this paper, we assume that for problem (), f 0 is piecewise C 2 and f i are twice continuously differentiable. We aim to investigate the second-order properties of the regularized function η in (0). The analysis presented in this paper is related to the results in Mifflin, Qi and Sun (999), however, there are several fundamental distinctions. Firstly, f in (7) is a generalized function (indeed, f is not even finite or continuous everywhere on R n.), whereas, in Mifflin, Qi and Sun (999), it was required to be finite everywhere, convex and continuous on the whole space. Actually, the latter condition is a fundamental assumption in many publications regarding to the Moreau-Yosida regularization, e.g., see Lemaréchal and Sagastizábal (997), Mifflin, Qi and Sun (999), Qi (995), Zhao and Meng (2000). Secondly, although the feasible region D is defined by some smooth inequalities, but the geometric figure of D is clearly nonsmooth, on which the indicator function is defined. In this paper, we attempt to cope with this two kinds of nonsmooth situations simultaneously. To this end, we introduce a new qualification, called (GAIPCQ). Under this condition, we show that the gradient of the Moreau-Yosida regularization of f is piecewise smooth. Thereby, the gradient of η is semismooth. The rest of this paper is organized as follows. In Section 2, some basic notions and properties are collected. Section 3 studies the piecewise smoothness and semismoothness of the gradient g. Section 4 concludes. 2. Definitions and Basic Properties In this section, we recall some concepts, such as semismoothness and piecewise C k - ness, which are basic elements of this paper. In addition, we introduce a new constraint qualification, (GAIPCQ), which will be used in next section. It is well known that η is a continuously differentiable convex function defined on R n not only for the specific type function f defined in (7), but even for general proper, closed, nondifferentiable convex functions as well. In order to use Newton s method or modified Newton s methods, it is important to study the Hessian of η, i.e., the Jacobian of g. It is also known that g is globally Lipschitz continuous with modulus M. However, g may not be differentiable. To extend the definition of Jacobian to certain classes 4

5 of nonsmooth functions, Qi and Sun (993) introduced the definition of semismoothness for vector-valued functions below: Definition 2.. Suppose F : R n R m is a locally Lipschitzian vector-valued function, we say that F is semismooth at x if lim V F (x+th ), h h, t 0 {V h }, () exists for any h R n, where F (y) denotes the generalized Jacobian of F at y defined by Clarke (983). The semismoothness of the gradient g is a key condition in the analysis of the superlinear convergence of an approximate Newton method proposed by Fukushima and Qi (993). In fact, semismoothness is a local property for nonsmooth functions. A direct verification of semismoothness is difficult. Fortunately, according to Qi and Sun (993), it is known that piecewise smooth functions are semismooth functions. Hence, one needs to study the piecewise smoothness of g for some class of convex problems. Now, we define the piecewise smoothness for vector-valued functions, see,. Pang and Ralph (996). Definition 2.2. A continuous function ψ : R n R m is said to be piecewise C k on a set W R n if there exist a finite number of functions ψ, ψ 2,..., ψ q such that each ψ j C k (U) for some open set U containing W, and ψ(x) {ψ (x),..., ψ q (x)} for any x W. We refer to {ψ j } j Iq as a representation of ψ on W, where I q := {, 2,..., q}. Remark 2.. If m =, then Definition 2.2 is referred to the notion of piecewise C k -ness for real-valued functions. In addition, the domain W in above definition is a general set in R n. In particular, when W is open, then all pieces ψ j above could be simply required to be C k on W = U. Actually, in the following context, we mainly pay attention to the case where ψ is piecewise C k on some open set in R n. In the following context, f 0 is always assumed to be convex. Let f 0 be piecewise C on W R n with a representation {h j } j Ĵ. For any j Ĵ, define D j := {x W : h j (x) = f 0 (x)}. We define the active index set at x by And, let J(x) = {j Ĵ : x D j}, x W. (2) K(x) = {j Ĵ : x cl(intd j)}, x W. (3) We then obtain the following result regarding to the subdifferential of f 0. 5

6 Proposition 2..(Pang and Ralph (996)) Let f 0 be a convex real-valued function on R n. Suppose that f 0 is piecewise C on an open set W R n with a representation {h j } j Ĵ. Then, for every x W, f 0 (x) = conv{ h j (x) : j K(x)}. (4) where convs denotes the convex hull of the set S. For the piecewise C function f 0 above, it is easy to see that K(x) J(x) for any x W. Hence, f 0 (x) = conv{ h j (x) : j K(x)} conv{ h j (x) : j J(x)}. (5) Moreover, if f 0 above is piecewise C 2 on W, by virtue of Lemma 2 in Mifflin, Qi and Sun (999), we then have 2 h j (x) 0, j K(x), x W, (6) where A 0 means the matrix A is positive semidefinite. Now, let s define the set of indexes of active constraints at any point x D by I(x) := {i I f i (x) = 0}. (7) In order to study the piecewise smoothness of g in (5), we introduce the following constraint qualification. Definition 2.3. Generalized Affine Independence Preserving Constraint Qualification(GAIPCQ) is said to hold for f 0 and f i, i I, at x, if for every subset J K J(x) I(x) for which there exists a sequence {x k } with {x k } x, J K J(x k ) I(x k ) and the vectors {( hj (x k ) ) ( fi (x, k ) 0 ) : j J, i K }, (8) being linearly independent, it follows that the vectors {( ) ( ) } hj (x) fi (x), : j J, i K, (9) 0 are linearly independent. Remark 2.2. For problem (8) when f 0 is piecewise C 2, in order to investigate the smoothness of g, obviously, we shall need the information about the constraints f i together with pieces h j of f 0. Thus, (GAIPCQ) above involves both pieces h j (i.e., f 0 ) and f i. On the other hand, if problem () is unconstrained and f 0 is piecewise C 2 on R n, then 6

7 f = f 0, which implies that f is piecewise C 2 on the whole R n as well. Evidently, in this case problem (8) becomes to problem (). This is the exact situation studied in Mifflin, Qi and Sun (999). In this case, the terms concerning I and I(x) in Definition 2.3 will vanish automatically, and (GAIPCQ) is reduced to the qualification (AIPCQ) introduced in Mifflin, Qi and Sun (999). 3. Smoothness of g for Piecewise C 2 Objectives In this section, we shall investigate the piecewise smoothness of the gradient of the Moreau-Yosida regularized function η under the (GAIPCQ) qualification. Then, according to Qi and Sun (993), we can derive the semismoothness of g(= η). In the rest of this paper, we make the following assumptions: Assumption 3.. = D R n. Assumption 3.2. f 0 is convex, piecewise C 2 on R n with a representation {h j } j Ĵ. Assumption 3.3. f i is convex, twice continuously differentiable on R n for all i I. For the generalized function f in (7), we can easily see that its effective domain is exactly the feasible region D. Then, for any x R n, due to the closeness of D, it follows that the proximal solution p(x) lies in D, hence, η(x) is finite. Therefore, the effective domain of η is the whole n-dimensional space. Now, for x R n, since p(x) minimizes f(y) + y 2 x 2 M on Rn, equivalently, p(x) minimizes f 0 (y) + y 2 x 2 M over D, then there exists a multiplier (λ(x), µ(x)) R J(p(x)) I(p(x)) such that g(x) = M(x p(x)) = j J(p(x)) λ j(x) h j (p(x)) + i I(p(x)) µ i(x) f i (p(x)), λ j (x) 0, j J(p(x)), j J(p(x)) λ j(x) =, (20) µ i (x) 0, i I(p(x)). For a nonnegative vector a R m, we shall let supp(a), called the support of a, be the subset of {,..., m} consisting of the indexes i for which a i > 0. We define the following sets: M(x) = {(λ(x), µ(x)) R J(p(x)) I(p(x)) (λ(x), µ(x)) satisfies (20)}, (2) 7

8 B(x) = {J K J(p(x)) I(p(x)) there exists (λ(x), µ(x)) M(x) such that and supp(λ(x)) J J(p(x)), supp(µ(x)) K I(p(x)), K, and the vectors {( hj (p(x)) ) ( fi (p(x)), 0 ) } : j J, i K are linearly independent.}, (22) ˆB(x) = {J K B(x) J K(p(x))}. (23) If I(p(x)) =, i.e., I(p(x)) = 0, then all terms with respect to I(p(x)), µ(x) and K in (20) (23) would vanish automatically. If I(p(x)), namely, the optimal solution p(x) lies on the boundary of D, as to the corresponding Lagrangian multipliers µ(x) in (20), we make the following non-degenerate assumption: Assumption 3.4. For any (λ(x), µ(x)) M(x), µ(x) 0. Lemma 3.. Let B(x) be defined in (22), then B(x) for any x R n. Proof. It is easy to verify that M(x) in (2) is a convex set. It s obvious that the result is true if I(p(x)) = 0. Next, we consider the case where I(p(x)). According to the convexity of M(x) and Assumption 3.4, we can easily see that there exists a vertex of M(x), say, α(x) = (α λ (x), α µ (x)), such that α µ (x) 0. Let J = supp(α λ (x)) and K = supp(α µ (x)), then we have J J(p(x)) and K I(p(x)). Without loss of generality, we assume J = {, 2,..., s}, K = {,..., t}. In what follows we need to show the vectors {( ) ( ) ( ) ( )} h (p(x)) hs (p(x)) f (p(x)) ft (p(x)),...,,,...,, (24) 0 0 are linearly independent. If this desired result is not true, then there exist scalars k, k 2,..., k s and l, l 2,... l t which are not all zero satisfying s ( ) hj (p(x)) t ( ) fi (p(x)) k j + l i = 0, (25) 0 namely, j= i= s j= k j h j (p(x)) + t i= l t f i (p(x)) = 0, k + k k s = 0. 8 (26)

9 Let β = (β λ, β µ ) with β λ = (k,..., k s, 0,..., 0) R J(p(x)), β µ = (l,..., l t, 0,..., 0) R I(p(x)), then for any sufficiently small ε > 0, e.g., taking ε = min{α λj (x)/ k j, α µi (x)/ l i : k j 0, l i 0, j J, i K.}, we have and α(x) ± εβ 0, (27) α k (v) ± εβ k = 0, k (J(p(x)) \ J) (I(p(x)) \ K). (28) Hence, it follows from (26) that j J(p(x)) (α j(x) ± εβ j ) = ± ε j J(p(x)) β j = ± ε j J β j = ± ε(k + + k s ) = ± 0 =. In addition, by (2) and (26), we have j J(p(x)) (α j(x) ± εβ j ) h j (p(x)) + i I(p(x)) (α i(x) ± εβ i ) f i (p(x)) = g(x) ± ε( s j= k j h j (p(x)) + t i= l i f i (p(x))) = g(x). Therefore, by (27) (30), it follows that α(x) ± εβ M(x). Furthermore, we have α(x) = (α(x) + εβ)/2 + (α(x) εβ)/2, which leads to a contradiction with that α(x) is the vertex of M(x). Hence, (24) are linearly independent, thereby, J K B(x). Thus, B(x). Now, we establish the following lemma, which is very important in analysis. Lemma 3.2. Let B(x) be defined in (22). Suppose that (GAIPCQ) is satisfied at p(x). Then, there exists an open neighborhood N (x) such that B(y) B(x) for any y N (x). Proof. In the following, we shall consider the two cases: I(p(x)) = 0 and I(p(x)), respectively. Case i I(p(x)) = 0. In this case, it is easy to see that p(x) lies in the relative interior of D. By Theorem XV 4..4 of Hiriart-Urruty and Lemaréchal (993), we have p(.) is continuous on R n. So, it follows that p(y) rid for y close to x, thereby, I(p(y)) =. Thus, for any y close to x, the terms regarding the index set K in B(y) are automatically vacuous. Hence, we only need to verify that the desired result concerning J is true. However, in this case, according to Remark 2.2, (GAIPCQ) is reduced to (AIPCQ). So, the desired result is valid with the help of Lemma 3 of Mifflin, Qi and Sun (999). Case ii I(p(x)). By the continuity of p(.) and (2), for any y close to x, we have J(p(y)) J(p(x)). (3) 9 (29) (30)

10 Since f i (p(x)) < 0 for any i I \ I(p(x)), then by virtue of the continuity of p(.) again, there exists an open neighborhood N (x) around x such that for any y N (x), f i (p(y)) < 0, i I \ I(p(x)). Hence, I \ I(p(x)) I \ I(p(y)). So, it follows that I(p(y)) I(p(x)) (32) for any y N (x). Now, suppose on the contrary that the proposed result is not true. Hence, noticing that J(p(x)) and I(p(x)) are finite index sets, then there exists a sequence {x k } tending to x such that for all k, there is an index set J K B(x k ) \ B(x). So, for each k, the vectors {( ) hj (p(x k )) ) ( fi (p(x, k )) 0 } : j J, i K are linearly independent and there exists (λ(x k ), µ(x k )) M(x k ) such that supp(λ(x k )) J J(p(x k )), supp(µ(x k )) K I(p(x k )), but J K / B(x). According to the (GAIPCQ), the vectors {( ) ( ) } hj (p(x)) fi (p(x)), : j J, i K (34) 0 must be linearly independent. In addition, by (3) and (32), it follows that J K J(p(x)) I(p(x)). Then, we get the following statement: There doesn t exist (λ(x), µ(x)) M(x) such that supp(λ(x)) J J(p(x)), and supp(µ(x)) K I(p(x)). (35) Due to the boundedness of λ J (x k ), there exists a convergent subsequence of {λ J (x k )} k=. For convenience in description, we still use this same notation. Let s assume {λ J (x k )} tends to λ J. Then, regarding to this convergent subsequence, it follows from (20) that g(x k ) j J (33) λ j (x k ) h j (p(x k )) = i K µ i (x k ) f i (p(x k )), (36) According to (33), we can easily know that f K (p(x k )) has full column rank. follows from (36) that So, it µ K (x k ) = [( f K (p(x k ))) T f K (p(x k ))] ( f K (p(x k ))) T [g(x k ) λ j (x k ) h j (p(x k ))]. j J (37) By the continuity of p(.) and f i (.), as k, we have ( f K (p(x k ))) T f K (p(x k )) ( f K (p(x))) T f K (p(x)). In addition, from (34) we have f K (p(x)) has full column rank. Thereby, the inverse of ( f K (p(x))) T f K (p(x)) does exist. Hence, as k, the right hand side of (37) must tend to [( f K (p(x))) T f K (p(x))] ( f K (p(x))) T [g(x) j J λ j h j (p(x))], (38) 0

11 which is denoted by µ K. So, we have g(x) λ j h j (p(x)) = µ i f i (p(x)), (39) j J i K Now, by setting λ i (x) = λ i (x) if i J, λ i (x) = 0 for i J(p(x)) \ J, and, µ i (x) = µ i (x) if i K, µ i (x) = 0 for i I(p(x))\K, we have (λ(x), µ(x)) M(x) satisfying supp(λ(x)) J, supp(µ(x)) K, which leads to a contradiction with (35). Therefore, by Case i and Case ii, we have shown the proposed conclusion. Corollary 3.. Let ˆB(x) be defined in (23). Suppose that (GAIPCQ) is satisfied at p(x). Then, there exists an open neighborhood ˆN (x) such that ˆB(y) ˆB(x) for any y ˆN (x). Proof. By virtue of Proposition 2., Lemma 3. and (5), it s clear that ˆB(x) for any x R n. Noticing the continuity of p(.), it follows that K(p(y)) K(p(x)) for y close to x. Hence, the result is valid by virtue of Lemma 3.2. Now, for x R n and any J K ˆB(x), choose some k J and define J = J \ {k}. We consider the following mapping: H J K (y, z, q, w) = (h j (y) h k (y)) j J (f i (y)) i K, ( M j J z j h j (y) + ( j J z j) h k (y) + ) i K q i f i (y) + y w (40) where w R n, (y, z, q) R n R J R K. Let (y 0, z 0, q 0, w 0 ) = (p(x), λ J(x), µ K (x), x), then we have H J K (y 0, z 0, q 0, w 0 ) = 0. For convenience, set h j (y) = h j (y) h k (y), j J. Then, the matrix of partial derivatives of H J K (y, z, q, w) with respect to (y, z, q) is h J(y) T 0 0 J (y,z,q) H J K (y, z, q, w) = f K (y) T 0 0, (4) B J K (y, z, q) M h J(y) M f K (y) where for i K, f i (y) is the i-th column of f K (y) and h j (y) is the j-th column of h J(y) for any j J, and B J K (y, z, q) = I + M z j 2 h j (y) + ( z j ) 2 h k (y) + q i 2 f i (y). (42) j J j J i K

12 When J =, then J, the variable z, and the terms concerning them in above formulae are vacuous. Similarly, all terms regarding to K above will vanish when I(p(x)) = 0. In this two cases, the mapping H in (40) and its corresponding Jacobian matrix would be described by simpler forms in some lower dimensional spaces, and the analysis becomes much easier. Now, we get the following result: Lemma 3.3. For any J K ˆB(x), there exists an open neighborhood S J K of w 0 (= x) and an open neighborhood Q J K of (y 0, z 0, q 0 )(= (p(x), λ J (x), µ K (x))) such that when w cls J K, the equations H J K (y, z, q, w) = 0 have a unique solution (y(w), z(w), q(w)) J K clq J K where z J K (w) or q J K (w) is vacuous if J = or K = 0, respectively. Furthermore, (y(w), z(w), q(w)) J K is continuously differentiable on S J K. Proof. When J = or I(p(x)) = 0, it is easy to see that the desired results are true. Next, we consider the case when J > and I(p(x)) = 0. It follows from (6) that 2 h j (x) 0 for any j J. Also, noticing q 0 0, 0 z 0 j, 0 j J z0 j for any j J, then there exists an open neighborhood of (y 0, z 0, q 0, w 0 ), say N J K (y 0, z 0, q 0, w 0 ), such that B J K (y, z, q) is positive definite on this neighborhood. On the other hand, by the definition of ˆB(x), we can easily know that the vectors { h j (y 0 ), f i (y 0 ) : j J, i K} (43) are linearly independent. So, by the continuity of h j and f i, j J, i K, there exists an open neighborhood of (y 0, z 0, q 0, w 0 ), for convenience say N J K (y 0, z 0, q 0, w 0 ) as above, such that for any (y, z, q, w) N J K (y 0, z 0, q 0, w 0 ) { h j (y), f i (y) : j J, i K} (44) are linearly independent. Then, it follows that the matrix ( h J(y), f K (y) ) has full column rank. Hence, we have J (y,z,q) H J K (y, z, q, w) is nonsingular on N J K (y 0, z 0, q 0, w 0 ). Noticing the fact that H J K (y 0, z 0, q 0, w 0 ) = 0 and by virtue of the Implicit Function Theorem, we derive the desired conclusion. Since J(p(x)) and I(p(x)) are both finite index sets, according to Lemma 3.3, we derive the following finitely many functions, g J K : S J K R n defined by g J K (w) = M(w y J K (w)), J K ˆB(x). (45) where S J K, y J K (w) are defined in Lemma 3.3. Obviously, g J K (w) is continuously differentiable on S J K. By using these functions and the (GAIPCQ) qualification, we derive the piecewise smoothness of g. 2

13 Proposition 3.. Suppose that (GAIPCQ) holds at p(x). Then, there exists an open neighborhood N (x) around x such that g is piecewise smooth on N (x). In particular, g is semismooth at x. Proof. From (20) and Corollary 3., for any w J K ˆB(w) ˆB(x) such that ˆN (x), it follows that there exists g(w) = M(w p(w)) = j J λ j(w) h j (p(w)) + i K µ i(w) f i (p(w)), λ j (w) 0, j J(p(w)), j J(p(w)) λ j(w) =, (46) Let µ i (w) 0, i I(p(w)). h J (p(w)) f K (p(w)) A(p(w)) =, (47) J 0 K where J, 0 K, denotes the J -dimensional row vector with all ones components, and the K -dimensional zero row vector, respectively. According to the definition of ˆB(x), it follows that A(p(w)) has full column rank. Then, from (46), we get ( ) λj (w) = ( A(p(w)) T A(p(w)) ) ( ) A(p(w)) T g(w), w µ K (w) ˆN (x). (48) Since p(.), h j (.) and f i (.) are continuous, so we have λ J (w) λ J (x) and µ K (w) µ K (x) as w x. Thus, we can choose N (x) { J K ˆB(x) S J K } ˆN (x) to be an open neighborhood of w 0 (= x) such that for any w N (x) and J K ˆB(w), (p(w), λ J (w), µ K (w)) clq J K, where λ J (w), µ K (w) is vacuous when J = or I(p(x)) = 0, respectively. For any w N (x), there exists J K ˆB(x) such that H J K (p(w), λ J (w), µ K (w), w) = 0, (49) and (p(w), λ J (w), µ K (w)) clq J K. So, by virtue of the Implicit Function Theorem, we have (p(w), λ J (w), µ K (w)) = (y J K (w), z J (w), q K (w)). Therefore, it follows that g(w) = M(w p(w)) = M(w y J K (w)) = g J K (w), (50) where g J K (w) is defined in (45). Thus, for any w N (x), there exists at least one continuously differentiable function g J K : S J K R n such that g(w) = g J K (w) with J K ˆB(x). Noticing that ˆB(x) is a finite set, we have g is piecewise smooth on N (x) by Definition 2.2. Moreover, g is semismooth at x according to Qi and Sun (993). 3

14 Remark 3.. In this section, we show that g is piecewise smooth around x, thereby, semismooth at x, for the problems with C 2 inequality constraints. Actually, if constraint functions f i in () are piecewise C 2 convex on R n, we can also analogously derive the same results without difficulty by following the idea of this paper. The description in this case should be much more complicated. However, it is not straightforward to extend these results to the general constraint problems with both equality and inequality constraints. In fact, the analysis would become quite difficult and complex in this situation since the methods used in this paper are perhaps not valid. One of the main obstacles is that at the KKT points the Lagrangian multipliers corresponding to the equality constraints are very hard to deal with. 4. Conclusion The Moreau-Yosida regularization is a powerful tool for smoothing nondifferentiable functions, which is widely used in nonsmooth optimization. A significant feature is the semismoothness of the gradient of the Moreau-Yosida regularization. In this paper, we investigate the semismoothness of the gradient g of the Moreau-Yosida regularization of f, which plays a key role in the superlinear convergence analysis of approximate Newton methods. The function f is closely related to a constrained program with a convex, piecewise C 2 objective f 0 and inequality constraints. By introducing a new qualification, we derive the piecewise smoothness of the gradient g. We also believe that the results established in this paper could enrich the theory in nonsmooth optimization greatly. Actually, there are still a lot of questions left open. For instance, for general convex problems with both equality and inequality constraints, we have not a confirmative result for the semismoothness of the gradient g so far. It is a great challenge to answer these questions. Acknowledgment The research of the first author was supported by Grant R of IHPC- CIM, National University of Singapore. References [] Chaney, R.W. Second-order Necessary Conditions in Semismooth Optimization. Math. Programming. 988, 40 (), [2] Chaney, R.W. Optimality Conditions for Piecewise C 2 Nonlinear Programming. J. Optim. Theory Anal. 989, 6 (2),

15 [3] Chaney, R.W. Piecewise C k Functions in Nonsmooth Analysis. Nonlinear Anal. Theory, Methods & Appl. 990, 5 (7), [4] Clarke, F. H. Optimization and Nonsmooth Analysis. John Wiley and Sons: New York, 983. [5] Ermoliev, Y. M.; Kryazhimskii, A. V.; Ruszczyńskii, A. Constraint Aggregation Principle in Convex Optimization. Math. Programming. 997, 76 (3), [6] Fukushima, M.; Qi, L. A Global and Superlinear Convergent Algorithm for Nonsmooth Convex Minimization. SIAM J. Optim. 996, 6 (4), [7] Hiriart-Urruty, J. B.; Lemaréchal, C. Convex Analysis and Minimization Algorithms. Springer Verlag: Berlin, 993. [8] Kiwiel, K. Proximity Control in Bundle Methods for Convex Nondifferentiable Minimization. Math. Programming. 990, 46 (), [9] Lemaréchal, C.; Nemirovskii, A.; Nesterov, Y. New Variants of Bundle Methods. Math. Programming. 995, 69 (), 47. [0] Lemaréchal, C.; Sagastizábal, C. An Approach to Variable Metric Methods. In Systems modelling and optimization, Lecture notes in control and information sciences 97; Henry, J., Yvon, J.-P., Eds., Springer-Verlag: Berlin, 994; [] Lemaréchal, C.; Sagastizábal, C. Variable Metric Bundle Methods: From Conceptual to Implementable Forms. Math. Programming. 997a, 76 (3), [2] Lemaréchal, C.; Sagastizábal, C. Practical Aspects of the Moreau-Yosida Regularization I: Theoretical Preliminaries. SIAM J. Optim. 997b, 7 (2), [3] Lemaréchal, C.; Strodiot, J. J.; Bihain, A. On a Bundle Method for Nonsmooth Optimization. In Nonlinear Programming 4; Mangasarian, O. L.; Meyer, R. R.; Robinson, S. M. Eds.; Academic Press: New York, 98; [4] Meng F. W.; Hao Y. The Property of Piecewise Smoothness of Moreau-Yosida Approximation for a Piecewise C 2 Convex Function. Adv. Math. (Chinese). 200l, 30 (4), [5] Mifflin, R.; Qi, L.; Sun, D. Properties of the Moreau-Yosida Regularization of a Piecewise C 2 Convex Function. Math. Programming. 999, 84 (2), [6] Moreau, J. J. Proximite et Dualite Dans un Espace Hilbertien. Bulletin de la Societe Mathematique de France (French). 965, 93,

16 [7] Pang, J. S.; Ralph, D. Piecewise Smoothness, Local Invertibility, and Parametric Analysis of Normal Maps. Math. Oper. Res. 996, 2 (2), [8] Poljak, B. T. A General Method of Solving Extremum Problems. Dokl. Akad. Nauk SSSR. 967, 74, [9] Qi, L. Second-order Analysis of the Moreau-Yosida Regularization of a Convex Function. Preprint, Revised version, School of Mathematics, the University of New South Wales, Sydney, Australia, 995. [20] Qi, L.; Chen, X. A Preconditioning Proximal Newton Method for Nondifferentiable Convex Optimization. Math. Programming. 997, 76 (3), [2] Qi, L.; Sun, J. A Nonsmooth Version of Newton s Method. Math. Programming. 993, 58 (3), [22] Rockafellar, R. T. Convex Analysis. Princeton, New Jersey, 970. [23] Rockafellar, R. T. Maximal Monotone Relations and the Second Derivatives on Nonsmooth Functions. Ann. Inst. H. Poincaré Anal. Non Linéaire. 985, 2 (3), [24] Sun, D.; Han, J. On a Conjecture in Moreau-Yosida Regularization of a Nonsmooth Convex Function. Chinese Sci. Bull. 997, 42 (7), [25] Yosida, K. Functional analysis. Springer Verlag: Berlin, 964. [26] Zhao, G.; Meng, F. On Piecewise Smoothness of a Lagrangian-dual Function and Its Moreau-Yosida Regularization. Research Report, DAMTP of University of Cambridge,

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE. Sangho Kum and Gue Myung Lee. 1. Introduction

ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE. Sangho Kum and Gue Myung Lee. 1. Introduction J. Korean Math. Soc. 38 (2001), No. 3, pp. 683 695 ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE Sangho Kum and Gue Myung Lee Abstract. In this paper we are concerned with theoretical properties

More information

A quasisecant method for minimizing nonsmooth functions

A quasisecant method for minimizing nonsmooth functions A quasisecant method for minimizing nonsmooth functions Adil M. Bagirov and Asef Nazari Ganjehlou Centre for Informatics and Applied Optimization, School of Information Technology and Mathematical Sciences,

More information

Optimality Conditions for Nonsmooth Convex Optimization

Optimality Conditions for Nonsmooth Convex Optimization Optimality Conditions for Nonsmooth Convex Optimization Sangkyun Lee Oct 22, 2014 Let us consider a convex function f : R n R, where R is the extended real field, R := R {, + }, which is proper (f never

More information

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS MATHEMATICS OF OPERATIONS RESEARCH Vol. 28, No. 4, November 2003, pp. 677 692 Printed in U.S.A. ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS ALEXANDER SHAPIRO We discuss in this paper a class of nonsmooth

More information

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

A Proximal Method for Identifying Active Manifolds

A Proximal Method for Identifying Active Manifolds A Proximal Method for Identifying Active Manifolds W.L. Hare April 18, 2006 Abstract The minimization of an objective function over a constraint set can often be simplified if the active manifold of the

More information

Journal of Convex Analysis (accepted for publication) A HYBRID PROJECTION PROXIMAL POINT ALGORITHM. M. V. Solodov and B. F.

Journal of Convex Analysis (accepted for publication) A HYBRID PROJECTION PROXIMAL POINT ALGORITHM. M. V. Solodov and B. F. Journal of Convex Analysis (accepted for publication) A HYBRID PROJECTION PROXIMAL POINT ALGORITHM M. V. Solodov and B. F. Svaiter January 27, 1997 (Revised August 24, 1998) ABSTRACT We propose a modification

More information

A Continuation Method for the Solution of Monotone Variational Inequality Problems

A Continuation Method for the Solution of Monotone Variational Inequality Problems A Continuation Method for the Solution of Monotone Variational Inequality Problems Christian Kanzow Institute of Applied Mathematics University of Hamburg Bundesstrasse 55 D 20146 Hamburg Germany e-mail:

More information

McMaster University. Advanced Optimization Laboratory. Title: A Proximal Method for Identifying Active Manifolds. Authors: Warren L.

McMaster University. Advanced Optimization Laboratory. Title: A Proximal Method for Identifying Active Manifolds. Authors: Warren L. McMaster University Advanced Optimization Laboratory Title: A Proximal Method for Identifying Active Manifolds Authors: Warren L. Hare AdvOl-Report No. 2006/07 April 2006, Hamilton, Ontario, Canada A Proximal

More information

A Novel Inexact Smoothing Method for Second-Order Cone Complementarity Problems

A Novel Inexact Smoothing Method for Second-Order Cone Complementarity Problems A Novel Inexact Smoothing Method for Second-Order Cone Complementarity Problems Xiaoni Chi Guilin University of Electronic Technology School of Math & Comput Science Guilin Guangxi 541004 CHINA chixiaoni@126.com

More information

WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS?

WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS? WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS? Francisco Facchinei a,1 and Christian Kanzow b a Università di Roma La Sapienza Dipartimento di Informatica e

More information

FIXED POINTS IN THE FAMILY OF CONVEX REPRESENTATIONS OF A MAXIMAL MONOTONE OPERATOR

FIXED POINTS IN THE FAMILY OF CONVEX REPRESENTATIONS OF A MAXIMAL MONOTONE OPERATOR PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 00, Number 0, Pages 000 000 S 0002-9939(XX)0000-0 FIXED POINTS IN THE FAMILY OF CONVEX REPRESENTATIONS OF A MAXIMAL MONOTONE OPERATOR B. F. SVAITER

More information

On constraint qualifications with generalized convexity and optimality conditions

On constraint qualifications with generalized convexity and optimality conditions On constraint qualifications with generalized convexity and optimality conditions Manh-Hung Nguyen, Do Van Luu To cite this version: Manh-Hung Nguyen, Do Van Luu. On constraint qualifications with generalized

More information

Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum Functions

Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum Functions 260 Journal of Advances in Applied Mathematics, Vol. 1, No. 4, October 2016 https://dx.doi.org/10.22606/jaam.2016.14006 Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum

More information

A SIMPLY CONSTRAINED OPTIMIZATION REFORMULATION OF KKT SYSTEMS ARISING FROM VARIATIONAL INEQUALITIES

A SIMPLY CONSTRAINED OPTIMIZATION REFORMULATION OF KKT SYSTEMS ARISING FROM VARIATIONAL INEQUALITIES A SIMPLY CONSTRAINED OPTIMIZATION REFORMULATION OF KKT SYSTEMS ARISING FROM VARIATIONAL INEQUALITIES Francisco Facchinei 1, Andreas Fischer 2, Christian Kanzow 3, and Ji-Ming Peng 4 1 Università di Roma

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

Local strong convexity and local Lipschitz continuity of the gradient of convex functions

Local strong convexity and local Lipschitz continuity of the gradient of convex functions Local strong convexity and local Lipschitz continuity of the gradient of convex functions R. Goebel and R.T. Rockafellar May 23, 2007 Abstract. Given a pair of convex conjugate functions f and f, we investigate

More information

Subdifferential representation of convex functions: refinements and applications

Subdifferential representation of convex functions: refinements and applications Subdifferential representation of convex functions: refinements and applications Joël Benoist & Aris Daniilidis Abstract Every lower semicontinuous convex function can be represented through its subdifferential

More information

Merit functions and error bounds for generalized variational inequalities

Merit functions and error bounds for generalized variational inequalities J. Math. Anal. Appl. 287 2003) 405 414 www.elsevier.com/locate/jmaa Merit functions and error bounds for generalized variational inequalities M.V. Solodov 1 Instituto de Matemática Pura e Aplicada, Estrada

More information

CHARACTERIZATIONS OF LIPSCHITZIAN STABILITY

CHARACTERIZATIONS OF LIPSCHITZIAN STABILITY CHARACTERIZATIONS OF LIPSCHITZIAN STABILITY IN NONLINEAR PROGRAMMING 1 A. L. DONTCHEV Mathematical Reviews, Ann Arbor, MI 48107 and R. T. ROCKAFELLAR Dept. of Math., Univ. of Washington, Seattle, WA 98195

More information

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Carlos Humes Jr. a, Benar F. Svaiter b, Paulo J. S. Silva a, a Dept. of Computer Science, University of São Paulo, Brazil Email: {humes,rsilva}@ime.usp.br

More information

Downloaded 09/27/13 to Redistribution subject to SIAM license or copyright; see

Downloaded 09/27/13 to Redistribution subject to SIAM license or copyright; see SIAM J. OPTIM. Vol. 23, No., pp. 256 267 c 203 Society for Industrial and Applied Mathematics TILT STABILITY, UNIFORM QUADRATIC GROWTH, AND STRONG METRIC REGULARITY OF THE SUBDIFFERENTIAL D. DRUSVYATSKIY

More information

FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1. Tim Hoheisel and Christian Kanzow

FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1. Tim Hoheisel and Christian Kanzow FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1 Tim Hoheisel and Christian Kanzow Dedicated to Jiří Outrata on the occasion of his 60th birthday Preprint

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

Summary Notes on Maximization

Summary Notes on Maximization Division of the Humanities and Social Sciences Summary Notes on Maximization KC Border Fall 2005 1 Classical Lagrange Multiplier Theorem 1 Definition A point x is a constrained local maximizer of f subject

More information

SOLVING A MINIMIZATION PROBLEM FOR A CLASS OF CONSTRAINED MAXIMUM EIGENVALUE FUNCTION

SOLVING A MINIMIZATION PROBLEM FOR A CLASS OF CONSTRAINED MAXIMUM EIGENVALUE FUNCTION International Journal of Pure and Applied Mathematics Volume 91 No. 3 2014, 291-303 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu doi: http://dx.doi.org/10.12732/ijpam.v91i3.2

More information

Brøndsted-Rockafellar property of subdifferentials of prox-bounded functions. Marc Lassonde Université des Antilles et de la Guyane

Brøndsted-Rockafellar property of subdifferentials of prox-bounded functions. Marc Lassonde Université des Antilles et de la Guyane Conference ADGO 2013 October 16, 2013 Brøndsted-Rockafellar property of subdifferentials of prox-bounded functions Marc Lassonde Université des Antilles et de la Guyane Playa Blanca, Tongoy, Chile SUBDIFFERENTIAL

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

Extended Monotropic Programming and Duality 1

Extended Monotropic Programming and Duality 1 March 2006 (Revised February 2010) Report LIDS - 2692 Extended Monotropic Programming and Duality 1 by Dimitri P. Bertsekas 2 Abstract We consider the problem minimize f i (x i ) subject to x S, where

More information

Lagrange Relaxation and Duality

Lagrange Relaxation and Duality Lagrange Relaxation and Duality As we have already known, constrained optimization problems are harder to solve than unconstrained problems. By relaxation we can solve a more difficult problem by a simpler

More information

The nonsmooth Newton method on Riemannian manifolds

The nonsmooth Newton method on Riemannian manifolds The nonsmooth Newton method on Riemannian manifolds C. Lageman, U. Helmke, J.H. Manton 1 Introduction Solving nonlinear equations in Euclidean space is a frequently occurring problem in optimization and

More information

1. Introduction The nonlinear complementarity problem (NCP) is to nd a point x 2 IR n such that hx; F (x)i = ; x 2 IR n + ; F (x) 2 IRn + ; where F is

1. Introduction The nonlinear complementarity problem (NCP) is to nd a point x 2 IR n such that hx; F (x)i = ; x 2 IR n + ; F (x) 2 IRn + ; where F is New NCP-Functions and Their Properties 3 by Christian Kanzow y, Nobuo Yamashita z and Masao Fukushima z y University of Hamburg, Institute of Applied Mathematics, Bundesstrasse 55, D-2146 Hamburg, Germany,

More information

Methods for a Class of Convex. Functions. Stephen M. Robinson WP April 1996

Methods for a Class of Convex. Functions. Stephen M. Robinson WP April 1996 Working Paper Linear Convergence of Epsilon-Subgradient Descent Methods for a Class of Convex Functions Stephen M. Robinson WP-96-041 April 1996 IIASA International Institute for Applied Systems Analysis

More information

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45 Division of the Humanities and Social Sciences Supergradients KC Border Fall 2001 1 The supergradient of a concave function There is a useful way to characterize the concavity of differentiable functions.

More information

20 J.-S. CHEN, C.-H. KO AND X.-R. WU. : R 2 R is given by. Recently, the generalized Fischer-Burmeister function ϕ p : R2 R, which includes

20 J.-S. CHEN, C.-H. KO AND X.-R. WU. : R 2 R is given by. Recently, the generalized Fischer-Burmeister function ϕ p : R2 R, which includes 016 0 J.-S. CHEN, C.-H. KO AND X.-R. WU whereas the natural residual function ϕ : R R is given by ϕ (a, b) = a (a b) + = min{a, b}. Recently, the generalized Fischer-Burmeister function ϕ p : R R, which

More information

A convergence result for an Outer Approximation Scheme

A convergence result for an Outer Approximation Scheme A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento

More information

BASICS OF CONVEX ANALYSIS

BASICS OF CONVEX ANALYSIS BASICS OF CONVEX ANALYSIS MARKUS GRASMAIR 1. Main Definitions We start with providing the central definitions of convex functions and convex sets. Definition 1. A function f : R n R + } is called convex,

More information

Spectral gradient projection method for solving nonlinear monotone equations

Spectral gradient projection method for solving nonlinear monotone equations Journal of Computational and Applied Mathematics 196 (2006) 478 484 www.elsevier.com/locate/cam Spectral gradient projection method for solving nonlinear monotone equations Li Zhang, Weijun Zhou Department

More information

Bulletin of the. Iranian Mathematical Society

Bulletin of the. Iranian Mathematical Society ISSN: 1017-060X (Print) ISSN: 1735-8515 (Online) Bulletin of the Iranian Mathematical Society Vol. 41 (2015), No. 5, pp. 1259 1269. Title: A uniform approximation method to solve absolute value equation

More information

Least Sparsity of p-norm based Optimization Problems with p > 1

Least Sparsity of p-norm based Optimization Problems with p > 1 Least Sparsity of p-norm based Optimization Problems with p > Jinglai Shen and Seyedahmad Mousavi Original version: July, 07; Revision: February, 08 Abstract Motivated by l p -optimization arising from

More information

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications Weijun Zhou 28 October 20 Abstract A hybrid HS and PRP type conjugate gradient method for smooth

More information

arxiv: v1 [math.oc] 13 Mar 2019

arxiv: v1 [math.oc] 13 Mar 2019 SECOND ORDER OPTIMALITY CONDITIONS FOR STRONG LOCAL MINIMIZERS VIA SUBGRADIENT GRAPHICAL DERIVATIVE Nguyen Huy Chieu, Le Van Hien, Tran T. A. Nghia, Ha Anh Tuan arxiv:903.05746v [math.oc] 3 Mar 209 Abstract

More information

Primal/Dual Decomposition Methods

Primal/Dual Decomposition Methods Primal/Dual Decomposition Methods Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Subgradients

More information

SECOND-ORDER CHARACTERIZATIONS OF CONVEX AND PSEUDOCONVEX FUNCTIONS

SECOND-ORDER CHARACTERIZATIONS OF CONVEX AND PSEUDOCONVEX FUNCTIONS Journal of Applied Analysis Vol. 9, No. 2 (2003), pp. 261 273 SECOND-ORDER CHARACTERIZATIONS OF CONVEX AND PSEUDOCONVEX FUNCTIONS I. GINCHEV and V. I. IVANOV Received June 16, 2002 and, in revised form,

More information

ON REGULARITY CONDITIONS FOR COMPLEMENTARITY PROBLEMS

ON REGULARITY CONDITIONS FOR COMPLEMENTARITY PROBLEMS ON REGULARITY CONDITIONS FOR COMPLEMENTARITY PROBLEMS A. F. Izmailov and A. S. Kurennoy December 011 ABSTRACT In the context of mixed complementarity problems various concepts of solution regularity are

More information

POLARS AND DUAL CONES

POLARS AND DUAL CONES POLARS AND DUAL CONES VERA ROSHCHINA Abstract. The goal of this note is to remind the basic definitions of convex sets and their polars. For more details see the classic references [1, 2] and [3] for polytopes.

More information

A Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials

A Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials A Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials G. Y. Li Communicated by Harold P. Benson Abstract The minimax theorem for a convex-concave bifunction is a fundamental theorem

More information

Constrained Optimization Theory

Constrained Optimization Theory Constrained Optimization Theory Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Constrained Optimization Theory IMA, August

More information

A Note on KKT Points of Homogeneous Programs 1

A Note on KKT Points of Homogeneous Programs 1 A Note on KKT Points of Homogeneous Programs 1 Y. B. Zhao 2 and D. Li 3 Abstract. Homogeneous programming is an important class of optimization problems. The purpose of this note is to give a truly equivalent

More information

DUALIZATION OF SUBGRADIENT CONDITIONS FOR OPTIMALITY

DUALIZATION OF SUBGRADIENT CONDITIONS FOR OPTIMALITY DUALIZATION OF SUBGRADIENT CONDITIONS FOR OPTIMALITY R. T. Rockafellar* Abstract. A basic relationship is derived between generalized subgradients of a given function, possibly nonsmooth and nonconvex,

More information

On well definedness of the Central Path

On well definedness of the Central Path On well definedness of the Central Path L.M.Graña Drummond B. F. Svaiter IMPA-Instituto de Matemática Pura e Aplicada Estrada Dona Castorina 110, Jardim Botânico, Rio de Janeiro-RJ CEP 22460-320 Brasil

More information

Algorithms for Nonsmooth Optimization

Algorithms for Nonsmooth Optimization Algorithms for Nonsmooth Optimization Frank E. Curtis, Lehigh University presented at Center for Optimization and Statistical Learning, Northwestern University 2 March 2018 Algorithms for Nonsmooth Optimization

More information

Implications of the Constant Rank Constraint Qualification

Implications of the Constant Rank Constraint Qualification Mathematical Programming manuscript No. (will be inserted by the editor) Implications of the Constant Rank Constraint Qualification Shu Lu Received: date / Accepted: date Abstract This paper investigates

More information

On the iterate convergence of descent methods for convex optimization

On the iterate convergence of descent methods for convex optimization On the iterate convergence of descent methods for convex optimization Clovis C. Gonzaga March 1, 2014 Abstract We study the iterate convergence of strong descent algorithms applied to convex functions.

More information

Downloaded 12/13/16 to Redistribution subject to SIAM license or copyright; see

Downloaded 12/13/16 to Redistribution subject to SIAM license or copyright; see SIAM J. OPTIM. Vol. 11, No. 4, pp. 962 973 c 2001 Society for Industrial and Applied Mathematics MONOTONICITY OF FIXED POINT AND NORMAL MAPPINGS ASSOCIATED WITH VARIATIONAL INEQUALITY AND ITS APPLICATION

More information

Weak sharp minima on Riemannian manifolds 1

Weak sharp minima on Riemannian manifolds 1 1 Chong Li Department of Mathematics Zhejiang University Hangzhou, 310027, P R China cli@zju.edu.cn April. 2010 Outline 1 2 Extensions of some results for optimization problems on Banach spaces 3 4 Some

More information

ON GENERALIZED-CONVEX CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION

ON GENERALIZED-CONVEX CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION ON GENERALIZED-CONVEX CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION CHRISTIAN GÜNTHER AND CHRISTIANE TAMMER Abstract. In this paper, we consider multi-objective optimization problems involving not necessarily

More information

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Chapter 4 GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Alberto Cambini Department of Statistics and Applied Mathematics University of Pisa, Via Cosmo Ridolfi 10 56124
