CONVERGENCE AND STABILITY OF A REGULARIZATION METHOD FOR MAXIMAL MONOTONE INCLUSIONS AND ITS APPLICATIONS TO CONVEX OPTIMIZATION


Variational Analysis and Appls. F. Giannessi and A. Maugeri, Eds. Kluwer Acad. Publ., Dordrecht, 2004

CONVERGENCE AND STABILITY OF A REGULARIZATION METHOD FOR MAXIMAL MONOTONE INCLUSIONS AND ITS APPLICATIONS TO CONVEX OPTIMIZATION

Ya. I. Alber (1), D. Butnariu (2) and G. Kassay (3)

(1) Faculty of Mathematics, Technion - Israel Institute of Technology, Haifa, Israel; (2) Department of Mathematics, University of Haifa, Haifa, Israel; (3) Faculty of Mathematics, University Babes-Bolyai, Cluj-Napoca, Romania

Abstract: In this paper we study the stability and convergence of a regularization method for solving inclusions $f \in Ax$, where $A$ is a maximal monotone point-to-set operator from a reflexive smooth Banach space $X$ with the Kadec-Klee property to its dual. We assume that the data $A$ and $f$ involved in the inclusion are given by approximations $A_k$ and $f_k$ converging to $A$ and $f$, respectively, in the sense of Mosco type topologies. We prove that the sequence $x_k = (A_k + \alpha_k J^{\mu})^{-1} f_k$ which results from the regularization process converges weakly and, under some conditions, converges strongly to the minimum norm solution of the inclusion $f \in Ax$, provided that the inclusion is consistent. These results lead to a regularization procedure for perturbed convex optimization problems whose objective functions and feasibility sets are given by approximations. In particular, we obtain a strongly convergent version of the generalized proximal point optimization algorithm which is applicable to problems whose feasibility sets are given by Mosco approximations.

Key words: Maximal monotone inclusion, Mosco convergence of sets, regularization method, convex optimization problem, generalized proximal point method for optimization.

Mathematics Subject Classification: Primary: 47J06, 47A52, 90C32; Secondary: 47H14, 90C48, 90C25.

1. INTRODUCTION

Let $X$ be a reflexive, strictly convex and smooth Banach space with the Kadec-Klee property (i.e., such that if a sequence $\{x_k\}$ in $X$ converges weakly to some $x \in X$, then $\{x_k\}$ converges strongly whenever $\lim_{k\to\infty} \|x_k\| = \|x\|$) and let $X^*$ be the dual of $X$. Given a maximal monotone mapping $A : X \to 2^{X^*}$ and an element $f \in X^*$, we consider the following problem:

Find $x \in X$ such that $f \in Ax$.  (1)

Problems like (1) are often ill-posed in the sense that they may not have solutions, may have infinitely many solutions and/or small data perturbations may lead to significant distortions of the solution sets. A regularization technique, whose basic idea can be traced back to Browder [16] and Cruceanu [23], consists of replacing the original problem (1) by the problem

Find $z_\alpha \in X$ such that $f \in (A + \alpha J^{\mu}) z_\alpha$,  (2)

where $\alpha$ is a positive real number and $J^{\mu} : X \to X^*$ is the duality mapping of gauge $\mu$ defined by the equations

$\langle J^{\mu} y, y \rangle = \|J^{\mu} y\|_* \|y\|$ and $\|J^{\mu} y\|_* = \mu(\|y\|)$,  (3)

while $\mu : [0,+\infty) \to [0,+\infty)$ is supposed to be continuous, strictly increasing, having $\mu(0) = 0$ and $\lim_{t\to\infty} \mu(t) = +\infty$. One does so for several reasons. First, since the mapping $A + \alpha J^{\mu}$ is surjective and $(A + \alpha J^{\mu})^{-1}$ is single valued (cf. [22, Proposition 3.10, p. 165]), the regularized problem (2) has a unique solution (even if the inclusion (1) has no solution at all). Second, it follows from [47, p. 129] and [23] that, if $\{\alpha_k\}$ is a sequence of positive real numbers and $\lim_{k\to\infty} \alpha_k = 0$, then by solving (2) for $\alpha = \alpha_k$ one finds vectors $z_{\alpha_k}$ converging to a solution of (1) provided that this inclusion is consistent. Third, the operator $(A + \alpha J^{\mu})^{-1}$ is continuous and, therefore, small perturbations of $f$ will not make the vector $z_\alpha$ be far from the theoretical solution of (2). In applications it frequently

happens that not only $f$ but also the operator $A$ involved in (1) can be approximated but not precisely computed. This naturally leads to the question whether the regularized inclusion (2) is stable, that is, whether by solving instead of the regularized inclusion (2) a sequence of regularized inclusions $f_k \in (A_k + \alpha_k J^{\mu}) x$, in which the $A_k : X \to 2^{X^*}$ are maximal monotone operators approximating $A$ and $f_k$ approximates $f$, the sequence of corresponding solutions

$x_k = (A_k + \alpha_k J^{\mu})^{-1} f_k$  (4)

still converges to a solution of (1) when $\lim_{k\to\infty} \alpha_k = 0$ and the original inclusion (1) is consistent. This question was previously considered by Lavrentev [36] who dealt with it in Hilbert spaces under the assumption that $A$ is linear and positive semidefinite, $\operatorname{Dom} A = X$ and $\mu(t) = t/2$. In Alber [1] the problem appears in a more general context but under the assumption that the operator $A$ is defined on the whole Banach space $X$. The main purpose of this paper is to show that if the approximations $A_k$ and $f_k$ satisfy some quite mild requirements, then the answer to the question posed above is affirmative, i.e., the sequence defined by (4) converges strongly to the minimal norm solution of (1) as $\alpha_k \to 0$, provided that (1) has at least one solution. Subsequently, we prove that the stability results we have obtained for the regularization method presented above apply to the resolution of convex optimization problems with perturbed data and, in particular, to produce a strongly convergent version of a proximal point method. The stability results proved in this work (see Section 2) do not make additional demands on the data of the original inclusion (1) besides the assumption that $A$ is maximal monotone. The conditions under which we prove those results only concern the quality of the approximations $A_k$ and $f_k$.
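As a concrete orientation (an illustrative sketch, not an example from the paper), the unperturbed regularized problem (2) can be computed in the simplest Hilbert space setting: $X = \mathbb{R}^n$, $\mu(t) = t$, so that $J^{\mu}$ is the identity; the singular positive semidefinite matrix $A$ and the vector $f$ below are hypothetical choices for which (1) has infinitely many solutions.

```python
import numpy as np

# Sketch of problem (2) with X = R^2, J^mu = identity (mu(t) = t), and A a
# singular PSD matrix, so the inclusion f in Ax has the solution set {1} x R.
A = np.array([[1.0, 0.0], [0.0, 0.0]])
f = np.array([1.0, 0.0])

def regularized_solution(alpha):
    # Unique solution of (A + alpha*I) z = f, cf. equation (2).
    return np.linalg.solve(A + alpha * np.eye(2), f)

for alpha in [1.0, 0.1, 0.01, 1e-4]:
    print(alpha, regularized_solution(alpha))
# As alpha -> 0, z_alpha approaches (1, 0), the minimum norm solution.
```

Here $z_\alpha = (1/(1+\alpha),\, 0)$ in closed form, which makes the convergence to the minimum norm solution visible directly.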
They ask that either the Mosco weak upper limit (as defined in [45]) or the weak-strong upper limit (introduced in Subsection 2.1 below) of the sequence of sets $\operatorname{Graph}(A_k)$ be a subset of $\operatorname{Graph}(A)$, the latter being a somewhat weaker requirement. Also, they ask for a kind of linkage of the approximative data in the form of the boundedness of the sequence

$\{\alpha_k^{-1} \operatorname{dist}(f_k, A_k v_k)\}$  (5)

for some bounded sequence $\{v_k\}$ in $X$. If approximants satisfying these conditions exist, then the inclusion (1) is necessarily consistent, the sequence defined by (4) is bounded and its weak accumulation points are solutions of it (see Theorem 2.2 and Corollary 2.3). The main stability results we prove for the proposed regularization scheme are Theorem 2.4 and its Corollary 2.5. They show that if solutions of (1) exist and each of them is the limit of a sequence $\{v_k\}$ such that the sequence (5) converges to zero, then the sequence $\{x_k\}$ given by (4) converges strongly to the minimal norm solution of (1).

When one has to solve optimization problems like that of finding a vector

$x \in \operatorname{argmin}\{F(x) : g_i(x) \le 0,\ i \in I\}$,  (6)

where the functions $F, g_i : X \to (-\infty,+\infty]$ are convex and lower semicontinuous, perturbations of data are inherent because of imprecise computations and measurements. Since problems like (6) may happen to be ill-posed, replacing the original data $F$ and $g_i$ by approximations $F_k$ and $g_{i,k}$ may lead to significant distortions of the solution set. In Section 3 we consider (6) and its perturbations in their subgradient inclusion form. We apply the stability results presented in Section 2 for finding out how good the approximative data $F_k$ and $g_{i,k}$ should be in order to ensure that the vectors resulting from the resolution of the regularized perturbed inclusions strongly approximate solutions of (6). Theorem 3.2 answers this question. It shows that for this to happen it is sufficient that the perturbed data satisfy the conditions (A) and (B) given in Subsection 3.1. Condition (A) asks for sufficiently uniform pointwise convergence of $F_k$ to $F$. Condition (B) guarantees weak-strong upper convergence of the feasibility sets of the perturbed problems to the feasibility set of the original problem.
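The stability question for the perturbed scheme (4) can also be illustrated numerically (a hypothetical instance, not one from the paper): perturb both $A$ and $f$ of the previous Hilbert space setting at rate $1/k^2$ while taking $\alpha_k = 1/k$, so that the linkage sequence (5), evaluated along $v_k = (1,0)$, converges to zero.

```python
import numpy as np

# Perturbed scheme (4) in R^2 with J^mu = identity: data errors of order 1/k^2
# and alpha_k = 1/k keep the sequence (5) bounded (in fact, it tends to 0).
A = np.array([[1.0, 0.0], [0.0, 0.0]])
f = np.array([1.0, 0.0])   # minimal norm solution of f in Ax is (1, 0)

def x_k(k):
    A_k = A + (1.0 / k**2) * np.eye(2)            # monotone perturbation of A
    f_k = f + (1.0 / k**2) * np.array([1.0, 1.0])  # perturbation of f
    alpha_k = 1.0 / k
    return np.linalg.solve(A_k + alpha_k * np.eye(2), f_k)

for k in [1, 10, 100, 10000]:
    print(k, x_k(k))
# x_k approaches (1, 0), the minimal norm solution, as k grows.
```

With $v_k = (1,0)$ one checks $\alpha_k^{-1}\operatorname{dist}(f_k, A_k v_k) = k \cdot k^{-2} = 1/k \to 0$, which is the sample condition (17) under which the strong convergence theorems below apply.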
Proposition 3.6 provides a tool for verifying the validity of condition (B) in the case of optimization problems with affine constraints as well as in the case of some problems of semidefinite programming.

In Section 4 we consider the question whether, or under which conditions, the generalized proximal point method for optimization which emerged from the works of Martinet [43], [44], Rockafellar [52] and Censor and Zenios [21] can be forced to converge strongly in infinite dimensional Banach spaces. The origin of this question can be traced back to Rockafellar's work [52]. The relevance of the question emerges from the role of the proximal

point method in the construction of augmented Lagrangian algorithms (see [53], [18, Chapter 3] and [30]): in this context a better behaved sequence obtained by regularization of the proximal point method may be of use in order to determine better approximations for a solution of the primal problem. It was shown by Butnariu and Iusem [17] that in smooth uniformly convex Banach spaces the generalized proximal point method converges subsequentially weakly, and sometimes weakly, to solutions of the optimization problem to which it is applied. However, it follows from the work of Güler [28] that the sequences generated by the proximal point method may fail to converge strongly. The generalized proximal point method essentially consists of solving a sequence of perturbed variants of the given convex optimization problem. We apply the results established in Section 3 in order to prove that by regularizing the perturbed problems via the scheme studied in this paper we obtain a sequence $\{(y_k, x_k)\}$ in $X \times X$ such that, when the optimization problem is consistent, $\{F(y_k)\}$ converges to the optimal value of $F$ and $\{x_k\}$ converges strongly to the minimum norm optimal solution of the original optimization problem.

The stability of the regularization scheme represented by (2) was studied before in various settings, but mostly as a way of regularizing variational inequalities involving maximal monotone operators (which, in view of Minty's Theorem, can also be seen as a way of regularizing inclusions involving maximal monotone operators).
Mosco [45], [46], Liskovets [39], [40], [41], Ryazantseva [54], Alber and Ryazantseva [6], Alber [2], Alber and Notik [5] have considered the scheme under additional assumptions (not made in our current work) concerning the data $A$ and $f$ (as, for instance, some kind of continuity or that the perturbed operators $A_k$ and $A$ should have the same domains). The stability results they have established usually require Hausdorff metric type convergence conditions for the graphs of the $A_k$. Also under Hausdorff metric type convergence conditions, but with no additional demands on the operator $A$ than its maximal monotonicity, strong convergence of the regularized sequence defined by (4) to the minimal norm solution of (1) was proven by Alber, Butnariu and Ryazantseva in [4]. Recently, weak convergence properties of this regularization scheme were proved by Alber [3] under metric and Mosco type convergence assumptions on the approximants. By contrast, we establish here strong convergence of the regularized sequence by exclusively using variants of Mosco type convergence for the approximants. The stability of regularization schemes applied to ill-posed problems is a multifaceted topic with multiple applications in various fields as one can see

from the monographs of Lions and Magenes [37], Dontchev and Zolezzi [24], Kaplan and Tichatschke [31], Engl, Hanke and Neubauer [27], Showalter [55], and Bonnans and Shapiro [14]. We prove here that the regularization scheme (4) has strong and stable convergence behavior under undemanding conditions and that it can be applied to a large class of convex optimization problems. An interesting topic for further research is to find out whether and under which conditions this regularization scheme works when applied to other problems like, for instance, differential equations [55], inverse problems [27], linearized abstract equations [14, Section ], etc., which, in many circumstances, can be represented as inclusions involving maximal monotone operators.

Convergence of the regularization scheme (4) may happen to be slow (as shown by an example given in [4]). Its rate of convergence seems to depend not only on the properties of $A$ and $f$ but also on the geometry of the Banach space $X$ in which the problem is set. It is an interesting open problem to evaluate the rate of convergence of the regularization scheme discussed in this work in a way similar to that in which such rates were evaluated for alternative regularization methods by Kaplan and Tichatschke [34], [33], [32] and [42]. Such an evaluation may help decide for which type of problems and in which settings application of the regularization scheme (4) is efficient.

The convergence and the reliability under errors of the generalized proximal point method in finite dimensional spaces was systematically studied along the last decade (see [25], [26] and see [29] for a survey on this topic). In infinite dimensional Hilbert spaces repeated attempts were recently made in order to discover how the problem data should be in order to ensure that the generalized proximal point method converges weakly or strongly under error perturbations (see [8], [9], [15], [30]).
Projected subgradient type regularization techniques meant to force strong convergence in Hilbert spaces of Rockafellar's classical proximal point algorithm were discovered by Bauschke and Combettes [12, Corollary 6.2] and Solodov and Svaiter [57]. The regularized generalized proximal point method we propose in Section 4 works in non-Hilbertian spaces too. It presents an interesting feature which can be easily observed from Theorem 4.2 and Corollary 4.3: if $X$ is uniformly convex, smooth and separable, then by applying the regularized generalized proximal point method (60) one can reduce the resolution of optimization problems in spaces of infinite dimension to solving a sequence of optimization problems in spaces of finite dimension whose solutions will necessarily converge strongly to the minimal norm optimum of the original problem.
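For orientation, the classical proximal point step has a closed form in the simplest one dimensional Hilbert space case; this is only a toy sketch of the unregularized method discussed above (the paper's generalized version replaces the quadratic term by Bregman-type distances in Banach spaces).

```python
# Classical proximal point iteration on the real line:
#   x_{k+1} = argmin_x  F(x) + (1/(2*lam)) * (x - x_k)**2
# For F(x) = |x| this step is soft-thresholding, in closed form.
def prox_abs(x, lam):
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

x = 5.0
for _ in range(10):
    x = prox_abs(x, lam=1.0)
print(x)  # the minimizer of |x|, reached after finitely many steps here
```

In infinite dimensions such iterates need not converge strongly (Güler [28]), which is precisely the motivation for the regularized variant proposed in Section 4.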

2. CONVERGENCE AND STABILITY ANALYSIS FOR MAXIMAL MONOTONE INCLUSIONS

2.1 We start our discussion about the stability of the regularization scheme (2) by recalling (see [45, Definition 1.1]) that a sequence $\{S_k\}$ of subsets of $X$ is called convergent (in Mosco sense) if

$w\text{-}\limsup_k S_k = s\text{-}\liminf_k S_k$,

where $s\text{-}\liminf_k S_k$ represents the collection of all $y \in X$ which are limits (in the strong convergence sense) of sequences $\{y_k\}$ with the property that $y_k \in S_k$ for all $k$, and $w\text{-}\limsup_k S_k$ denotes the collection of all $x \in X$ with the property that there exists a sequence $\{x_i\}$ in $X$ converging weakly to $x$ and a subsequence $\{S_{k_i}\}$ of $\{S_k\}$ such that $x_i \in S_{k_i}$ for all $i$. In this case, the set $S := s\text{-}\liminf_k S_k = w\text{-}\limsup_k S_k$ is called the limit of $\{S_k\}$ and is denoted $S = \operatorname{Lim}_k S_k$. By analogy with Mosco's $w\text{-}\limsup$ we introduce the following notion of limit for sequences of sets contained in $X \times X^*$. This induces a form of graphical convergence for point-to-set mappings from $X$ to $X^*$ which we use in the sequel. For a comprehensive discussion of other notions of convergence of sequences of sets see [13].

Definition. The weak-strong upper limit of a sequence $\{U_k\}$ of subsets of $X \times X^*$, denoted $ws\text{-}\limsup_k U_k$, is the collection of all pairs $(x, y) \in X \times X^*$ for which there exists a sequence $\{x_i\}$ contained in $X$ which converges weakly to $x$ and a sequence $\{y_i\}$ contained in $X^*$ which converges strongly to $y$ and such that, for some subsequence $\{U_{k_i}\}$ of $\{U_k\}$, we have $(x_i, y_i) \in U_{k_i}$ for all $i$.

It is easy to see that, if $A_k : X \to 2^{X^*}$, $k \in \mathbb{N}$, is a sequence of point-to-set mappings, then the weak-strong upper limit of the sequence $U_k = \operatorname{Graph}(A_k)$, $k \in \mathbb{N}$, is the set $U$ of all pairs $(x, y) \in X \times X^*$ with the

property that there exists a sequence $\{(x_i, y_i)\}$ in $X \times X^*$ such that $\{x_i\}$ converges weakly to $x$ in $X$, $\{y_i\}$ converges strongly to $y$ in $X^*$ and, for some subsequence $\{A_{k_i}\}$ of $\{A_k\}$, we have $y_i \in A_{k_i}(x_i)$, $i \in \mathbb{N}$. Therefore, in virtue of [11, Proposition ], the graphical upper limit $\limsup^{\#}_k A_k$ of the sequence $\{A_k\}$ considered in [11, Definition 7.1.1], the weak-strong upper limit $ws\text{-}\limsup_k \operatorname{Graph}(A_k)$ and the Mosco upper limit $w\text{-}\limsup_k \operatorname{Graph}(A_k)$ are related by

$\operatorname{Graph}\left(\limsup^{\#}_k A_k\right) \subseteq ws\text{-}\limsup_k \operatorname{Graph}(A_k) \subseteq w\text{-}\limsup_k \operatorname{Graph}(A_k)$.  (7)

As noted in the Introduction, a goal of this work is to establish convergence and stability of the regularization scheme (4) under undemanding convergence requirements for the approximative data $A_k$ and $f_k$. As far as we know, the most general result in this respect is that presented in [4, Section 2]. It guarantees convergence and stability of the regularization scheme (4) under the requirement that the maximal monotone operators $A_k$ approximate the maximal monotone operator $A$ in the sense that there exist three functions $a, g, \zeta : \mathbb{R}_+ \to \mathbb{R}_+$, where $\zeta$ is strictly increasing and continuous at zero, such that for any $(x, y) \in \operatorname{Graph}(A)$ and for any $k \in \mathbb{N}$, there exists a pair $(x_k, y_k) \in \operatorname{Graph}(A_k)$ with the property that

$\|x - x_k\| \le \zeta(k^{-1})\, a(\|x\|)$ and $\|y - y_k\|_* \le \zeta(k^{-1})\, g(\|y\|)$.  (8)

Clearly, if this requirement is satisfied, then

$\operatorname{Graph}(A) \subseteq \operatorname{Graph}\left(\liminf_k A_k\right)$,  (9)

where $\liminf_k A_k$ stands for the graphical lower limit of the sequence $\{A_k\}$ (see [11, p. 267]). Since the mappings $A$ and $A_k$ we work with are maximal monotone, Proposition from [11] applies and, due to (9), it implies that $A$ is exactly the graphical limit of the sequence $\{A_k\}$, that is,

$A = \lim_k A_k = \lim^{\#}_k A_k$.  (10)

In this section we show that convergence and stability of the regularization scheme (4) can be ensured under conditions that are much less demanding than the locally uniform graphical convergence (8). In fact, we prove convergence and stability of the scheme (4) by requiring (see (16) below) less than the graphical convergence (10). This allows us to apply the regularization scheme to a wide class of convex optimization problems, as shown in Sections 3 and 4.

All over this paper we denote by $\mu : [0,+\infty) \to [0,+\infty)$ a gauge function with the property that the following limit exists and we have

$\lim_{t\to\infty} \frac{\mu(t)}{t} > 0$.  (11)

The duality mapping of gauge $\mu$ is denoted $J^{\mu}$, as usual.

2.2 The next result shows that, under quite mild conditions concerning the mappings $A_k$ and the vectors $f_k$, the sequence $\{x_k\}$ generated in $X^{\mathbb{N}}$ according to (4) is well defined, bounded and that its weak accumulation points are necessarily solutions of (1).

Theorem. Let $\{\alpha_k\}$ be a bounded sequence of positive real numbers. Suppose that, for each $k \in \mathbb{N}$, the mapping $A_k : X \to 2^{X^*}$ is maximal monotone. Then the following statements are true:

(i) The sequence $\{x_k\}$ given by (4) is well defined;

(ii) If there exists a bounded sequence $\{v_k\}$ in $X$ such that the sequence (5) is bounded, then the sequence $\{x_k\}$ is bounded too and has weak accumulation points;

(iii) If, in addition to the requirements in (ii), we have that the sequence $\{\alpha_k\}$ converges to zero, the sequence $\{f_k\}$ converges weakly to $f$ in $X^*$ and

$w\text{-}\limsup_k \operatorname{Graph}(A_k) \subseteq \operatorname{Graph}(A)$,  (12)

then the problem (1) has at least one solution and any weak accumulation point of $\{x_k\}$ is a solution of it.

Proof. Since the mappings $A_k$ are maximal monotone, it follows that the mappings $A_k + \alpha_k J^{\mu}$ are surjective and $(A_k + \alpha_k J^{\mu})^{-1}$ are single valued. Hence, the sequence $\{x_k\}$ is well defined. In order to show that this sequence is bounded, observe that, for each $k \in \mathbb{N}$, there exists a function $h_k \in A_k x_k$ such that

$f_k = h_k + \alpha_k J^{\mu} x_k$.  (13)

The sets $A_k v_k$ are nonempty because, otherwise, the sequence (5) would be unbounded. Also, these sets are convex and closed. Hence, for each $k \in \mathbb{N}$ there exists $g_k \in A_k v_k$ such that

$\|g_k - f_k\|_* = \operatorname{dist}(f_k, A_k v_k)$.  (14)

Taking into account that $A_k$ is monotone, we deduce $\langle h_k - g_k, x_k - v_k \rangle \ge 0$. Hence,

$\langle g_k, x_k - v_k \rangle \le \langle h_k, x_k - v_k \rangle = \langle f_k - \alpha_k J^{\mu} x_k, x_k - v_k \rangle$
$= \langle f_k, x_k - v_k \rangle - \alpha_k \langle J^{\mu} x_k, x_k \rangle + \alpha_k \langle J^{\mu} x_k, v_k \rangle$
$= \langle f_k, x_k - v_k \rangle - \alpha_k \mu(\|x_k\|)\|x_k\| + \alpha_k \langle J^{\mu} x_k, v_k \rangle$
$\le \langle f_k, x_k - v_k \rangle - \alpha_k \mu(\|x_k\|)\|x_k\| + \alpha_k \mu(\|x_k\|)\|v_k\|$,

where the first equality follows from (13) and the third equality, as well as the last inequality, follows from (3). By consequence,

$\alpha_k \mu(\|x_k\|)\big(\|x_k\| - \|v_k\|\big) \le \langle f_k - g_k, x_k - v_k \rangle \le \|f_k - g_k\|_* \|x_k\| + \|f_k - g_k\|_* \|v_k\|$  (15)

for all $k \in \mathbb{N}$. Suppose, by contradiction, that for some subsequence $\{x_{k_i}\}$ of $\{x_k\}$ we have $\lim_{i\to\infty} \|x_{k_i}\| = +\infty$. From (15) we deduce that, for sufficiently large $i$,

$\mu(\|x_{k_i}\|)\left(1 - \frac{\|v_{k_i}\|}{\|x_{k_i}\|}\right) \le \frac{\|f_{k_i} - g_{k_i}\|_*}{\alpha_{k_i}}\left(1 + \frac{\|v_{k_i}\|}{\|x_{k_i}\|}\right)$,

where, according to (14) and the hypothesis, the sequence $\{\alpha_k^{-1}\|f_k - g_k\|_*\}$ is bounded. Taking on both sides of this inequality the upper limit as $i \to \infty$ and taking into account (11), (14) and the boundedness of $\{v_k\}$, one gets that the limit on the left hand side is $+\infty$ while that on the right hand side is finite, that is, a contradiction. This shows that $\{x_k\}$ is bounded and, since $X$ is reflexive, $\{x_k\}$ has weak accumulation points.

Now, assume that $\{\alpha_k\}$ converges to zero, $\{f_k\}$ converges weakly to $f$ in $X^*$ and (12) also holds. Observe that the sequence $\{\|g_k - f_k\|_*\}$ converges to zero because $M := \sup_k \alpha_k^{-1}\operatorname{dist}(f_k, A_k v_k)$ is finite and

$\|g_k - f_k\|_* \le \alpha_k M$ for all $k \in \mathbb{N}$.

Consequently, since $\{f_k\}$ converges weakly to $f$, we deduce that $\{g_k\}$ converges weakly to $f$ too. Let $\bar{v}$ be a weak accumulation point of the sequence $\{v_k\}$ and denote by $\{v_{k_i}\}$ a subsequence of $\{v_k\}$ converging weakly to $\bar{v}$. Since for any $i \in \mathbb{N}$ we have $(v_{k_i}, g_{k_i}) \in \operatorname{Graph}(A_{k_i})$, condition (12) implies that $(\bar{v}, f) \in \operatorname{Graph}(A)$, i.e., $\bar{v}$ is a solution of (1). Let $\bar{x}$ be a weak accumulation point of $\{x_k\}$ and let $\{x_{k_j}\}$ be a subsequence of $\{x_k\}$ which converges weakly to $\bar{x}$. Note that for any $z \in X$ we have

$\langle z, f - h_k \rangle = \langle z, f - f_k \rangle + \langle z, f_k - h_k \rangle = \langle z, f - f_k \rangle + \alpha_k \langle z, J^{\mu} x_k \rangle \le \langle z, f - f_k \rangle + \alpha_k \|z\| \|J^{\mu} x_k\|_*$,

where the last sum converges to zero as $k \to \infty$. This shows that the sequence $\{h_k\}$ converges weakly to $f$. Hence, the sequence $\{(x_{k_j}, h_{k_j})\}$ converges weakly to $(\bar{x}, f)$ in $X \times X^*$. Since we also have that $h_{k_j} \in A_{k_j} x_{k_j}$ for all $j \in \mathbb{N}$, condition (12) implies that $(\bar{x}, f) \in \operatorname{Graph}(A)$, that is, $\bar{x}$ is a solution of (1). ∎

2.3 Condition (12) involved in Theorem 2.2 is difficult to verify in applications as those discussed in Section 3 below. We show next that this condition can be relaxed at the expense of strengthening the convergence requirements for $\{f_k\}$. Note that in view of (7) condition (16) below is weaker than (12). Precisely, we have the following result:

Corollary. Let $\{\alpha_k\}$ be a sequence of positive real numbers converging to zero. Suppose that, for each $k \in \mathbb{N}$, the mapping $A_k : X \to 2^{X^*}$ is maximal monotone and that there exists a bounded sequence $\{v_k\}$ in $X$ such that the sequence (5) is bounded. Then the sequence $\{x_k\}$ given by (4) is well defined, bounded and has weak accumulation points. If, in addition, the sequence $\{f_k\}$ converges strongly to $f$ in $X^*$ and

$ws\text{-}\limsup_k \operatorname{Graph}(A_k) \subseteq \operatorname{Graph}(A)$,  (16)

then the problem (1) has solutions and any weak accumulation point of $\{x_k\}$ is a solution of it.

Proof. Well definedness and boundedness of the sequence $\{x_k\}$ result from Theorem 2.2. Exactly as in the proof of Theorem 2.2 we deduce that for each $k \in \mathbb{N}$ there exist $h_k \in A_k x_k$ and $g_k \in A_k v_k$ such that (13) and (14) hold. Observe that the sequence $\{g_k\}$ converges strongly to $f$ because of (14) and the boundedness of (5). It remains to show that, under the

assumptions that $\{f_k\}$ converges strongly to $f$ and (16) holds, any weak accumulation point of $\{x_k\}$ is a solution of (1). Let $\bar{v}$ be a weak accumulation point of $\{v_k\}$ (such a point exists because $\{v_k\}$ is bounded and $X$ is reflexive) and denote by $\{v_{k_i}\}$ a subsequence of $\{v_k\}$ converging weakly to $\bar{v}$. Since for all $i \in \mathbb{N}$ we have $(v_{k_i}, g_{k_i}) \in \operatorname{Graph}(A_{k_i})$, condition (16) implies that $(\bar{v}, f) \in \operatorname{Graph}(A)$, i.e., $\bar{v}$ is a solution of (1). Let $\bar{x}$ be a weak accumulation point of $\{x_k\}$ and let $\{x_{k_j}\}$ be a subsequence of $\{x_k\}$ which converges weakly to $\bar{x}$. Note that, according to (13), we have

$\|f - h_k\|_* \le \|f - f_k\|_* + \|f_k - h_k\|_* = \|f - f_k\|_* + \alpha_k \|J^{\mu} x_k\|_*$,

where the last sum converges to zero as $k \to \infty$, because $\{x_k\}$ is bounded (and, hence, so is $\{\|J^{\mu} x_k\|_*\}$) and the sequence $\{f_k\}$ converges to $f$ by hypothesis. Therefore, the sequence $\{h_k\}$ converges strongly to $f$. Since we also have that $h_k \in A_k x_k$ for all $k \in \mathbb{N}$, condition (16) implies that $(\bar{x}, f) \in \operatorname{Graph}(A)$, that is, $\bar{x}$ is a solution of (1). ∎

2.4 If problem (1) has only one solution (as happens, for instance, when $A$ is strictly monotone), then Theorem 2.2 guarantees weak convergence of the whole sequence $\{x_k\}$. However, in general, we do not know whether the whole sequence $\{x_k\}$ converges weakly. The next result shows that not only weak convergence, but also strong convergence of $\{x_k\}$ to a solution of (1) can be ensured provided that any element of $A^{-1}f$ (the solution set) is the limit of a sequence $\{v_k\}$ satisfying (17) below. In view of the remarks in Subsection 2.1, this result improves upon Theorem 2.2 in [4].

Theorem. Suppose that problem (1) has at least one solution and that the sequence of positive real numbers $\{\alpha_k\}$ converges to zero. If $A_k : X \to 2^{X^*}$, $k \in \mathbb{N}$, are maximal monotone operators with the property (12), if $\{f_k\}$ is a sequence converging weakly to $f$ in $X^*$ and if, for each $v \in A^{-1}f$, there exists a sequence $\{v_k\}$ in $X$ which converges strongly to $v$ and such that

$\lim_{k\to\infty} \frac{1}{\alpha_k}\operatorname{dist}(f_k, A_k v_k) = 0$,  (17)

then the sequence $\{x_k\}$ given by (4) is well defined and converges strongly to the minimal norm solution of problem (1).

Proof. The assumption that problem (1) has solutions implies, in our current setting, the existence of a bounded sequence $\{v_k\}$ as required by Theorem 2.2. Observe that, since (17) holds, the sequence $\{\alpha_k^{-1}\operatorname{dist}(f_k, A_k v_k)\}$ converges to zero and, therefore, it is bounded. Hence, one can apply Theorem 2.2 in order to deduce well definedness and boundedness of $\{x_k\}$ and the fact that any weak accumulation point of it is a solution of (1). Note that, since $A$ is maximal monotone, $A^{-1}$ is maximal monotone too and, therefore, the set $A^{-1}f$, which is exactly the presumed nonempty solution set of problem (1), is convex and closed. The space $X$ is reflexive and strictly convex and, therefore, the nonempty, convex and closed set $A^{-1}f$ contains a unique minimal norm element $x^*$ (the metric projection of 0 onto the set $A^{-1}f$). We show that the only weak accumulation point of $\{x_k\}$ is $x^*$. To this end, let $\{x_{k_i}\}$ be a subsequence of $\{x_k\}$ which converges weakly to some $\bar{x} \in X$. According to Theorem 2.2, $\bar{x}$ is necessarily contained in $A^{-1}f$. If $\bar{x} = 0$, then $\bar{x}$ is necessarily the minimal norm element of $A^{-1}f$, i.e., $\bar{x} = x^*$. Suppose that $\bar{x} \ne 0$. Let $v$ be any other solution of problem (1). By hypothesis, there exists a sequence $\{v_k\}$ converging strongly in $X$ to $v$ and such that, for some sequence $\{l_k\}$ with $l_k \in A_k v_k$ for each $k \in \mathbb{N}$, we have

$\lim_{k\to\infty} \frac{1}{\alpha_k}\|l_k - f_k\|_* = 0$.  (18)

Clearly,

$0 < \|\bar{x}\| \le \liminf_{i\to\infty} \|x_{k_i}\|$

and there exists a subsequence $\{x_{k_j}\}$ of $\{x_{k_i}\}$ such that

$\liminf_{i\to\infty} \|x_{k_i}\| = \lim_{j\to\infty} \|x_{k_j}\|$.  (19)

The subsequence $\{x_{k_j}\}$ is still weakly convergent to $\bar{x}$ and we have

$0 < \mu(\|\bar{x}\|) \le \mu\left(\lim_{j\to\infty}\|x_{k_j}\|\right) = \lim_{j\to\infty} \mu(\|x_{k_j}\|)$,  (20)

because $\mu$ is continuous and increasing (as being a gauge function). For each $k \in \mathbb{N}$, let $h_k \in A_k x_k$ be the function for which (13) is satisfied. These functions exist because $\{x_k\}$ is well defined. Due to the monotonicity of $A_k$, we have

$0 \le \langle h_k - l_k, x_k - v_k \rangle = \langle f_k - \alpha_k J^{\mu} x_k - l_k, x_k - v_k \rangle$
$= \langle f_k - l_k, x_k - v_k \rangle - \alpha_k \langle J^{\mu} x_k, x_k \rangle + \alpha_k \langle J^{\mu} x_k, v_k \rangle$
$\le \langle f_k - l_k, x_k - v_k \rangle - \alpha_k \mu(\|x_k\|)\|x_k\| + \alpha_k \mu(\|x_k\|)\|v_k\|$,

where the first equality results from (13) and the last inequality follows from (3). This implies

$\mu(\|x_k\|)\|x_k\| \le \frac{1}{\alpha_k}\langle f_k - l_k, x_k - v_k \rangle + \mu(\|x_k\|)\|v_k\|$,  (21)

where the first term of the right hand side converges to zero as $k \to \infty$ because of (18) and because of the boundedness of $\{v_k\}$ and $\{x_k\}$. Replacing $k$ by $k_j$ in this inequality, we deduce that for $j$ large enough

$\|x_{k_j}\| \le \frac{1}{\alpha_{k_j}\, \mu(\|x_{k_j}\|)}\langle f_{k_j} - l_{k_j}, x_{k_j} - v_{k_j} \rangle + \|v_{k_j}\|$.  (22)

Letting here $j \to \infty$ we get

$\lim_{j\to\infty}\|x_{k_j}\| \le \lim_{j\to\infty}\|v_{k_j}\| = \|v\|$,

because $\{v_k\}$ converges strongly to $v$ and because of (19). Since $v$ is an arbitrarily chosen solution of problem (1), it follows that $\bar{x} = x^*$. Hence, the sequence $\{x_k\}$ converges weakly to $x^*$.

It remains to show that $\{x_k\}$ converges strongly. To this end, observe that, since $\{x_k\}$ converges weakly to $x^*$ and since $X$ is a space with the Kadec-Klee property, it is sufficient to show that $\{\|x_k\|\}$ converges to $\|x^*\|$. In other words, it is sufficient to prove that all convergent subsequences of the bounded sequence $\{\|x_k\|\}$ converge to $\|x^*\|$. In order to prove that, let $\{\|x_{k_p}\|\}$ be a convergent subsequence of $\{\|x_k\|\}$. If $\{\|x_{k_p}\|\}$ converges to 0, then

$0 \le \|x^*\| \le \liminf_{p\to\infty}\|x_{k_p}\| = \lim_{p\to\infty}\|x_{k_p}\| = 0$,

that is, $\|x^*\| = \lim_{p\to\infty}\|x_{k_p}\| = 0$. Suppose now that $\lim_{p\to\infty}\|x_{k_p}\| = \beta > 0$. Then, there exists a positive integer $p_0$ such that, for all integers $p \ge p_0$, we have $\|x_{k_p}\| > 0$. According to (21) this implies that, for $p \ge p_0$, one has

$\|x_{k_p}\| \le \frac{1}{\alpha_{k_p}\, \mu(\|x_{k_p}\|)}\langle f_{k_p} - l_{k_p}, x_{k_p} - v_{k_p}\rangle + \|v_{k_p}\|$.

Letting $p \to \infty$ in this inequality we get

$\lim_{p\to\infty}\|x_{k_p}\| \le \lim_{p\to\infty}\|v_{k_p}\| = \|v\|$.

Since $v$ is an arbitrarily chosen solution of problem (1) we can take here $v = x^*$ and obtain $\|x^*\| = \lim_{p\to\infty}\|x_{k_p}\|$. This completes the proof. ∎

2.5 Similarly to Corollary 2.3 ensuring that the weak accumulation points of $\{x_k\}$ are solutions of (1), we can use Theorem 2.4 in order to prove strong convergence of $\{x_k\}$ to a solution of (1) when condition (12) is replaced by the weaker requirement (16) but strengthening the convergence requirements on $\{f_k\}$.

Corollary. Suppose that problem (1) has solutions and the sequence of positive real numbers $\{\alpha_k\}$ converges to zero. If $A_k : X \to 2^{X^*}$, $k \in \mathbb{N}$, are maximal monotone operators with the property (16), if $\{f_k\}$ is a sequence converging strongly to $f$ in $X^*$ and if, for each $v \in A^{-1}f$, there exists a sequence $\{v_k\}$ which converges strongly to $v$ in $X$ and satisfies (17), then the sequence $\{x_k\}$ given by (4) is well defined and converges strongly to the minimal norm solution of problem (1).

Proof. Well definedness and boundedness of $\{x_k\}$, as well as the fact that any weak accumulation point of it is a solution of (1), result from Corollary 2.3. In order to show that $\{x_k\}$ converges strongly to the minimal norm solution of the problem, one reproduces without modification the arguments made for the same purpose in the proof of Theorem 2.4. ∎

3. REGULARIZATION OF CONVEX OPTIMIZATION PROBLEMS

3.1 We have noted above that Theorem 2.4 and Corollary 2.5 can be of use in order to prove stability properties of the procedure (4) applied to optimization problems with perturbed data. Such properties are of interest in applications in which the data involved in the optimal solution finding

process are affected by computational and/or measurement errors. To make things precise, in what follows $F : X \to (-\infty,+\infty]$ is a lower semicontinuous convex function and $\Omega$ is a nonempty, closed convex subset of $\operatorname{Int}(\operatorname{Dom} F)$, the interior of the domain of $F$. We consider the following optimization problem under the assumption that it has at least one solution:

(P) Minimize $F(x)$ subject to $x \in \Omega$.  (23)

It is not difficult to verify that by solving the following inclusion

(P′) Find $x \in X$ such that $0 \in Ax$,

where $A : X \to 2^{X^*}$ is the operator defined by

$A = \partial F + N_\Omega$,  (24)

with $\partial F$ denoting the subdifferential of $F$ and $N_\Omega : X \to 2^{X^*}$ denoting the normal cone operator associated to $\Omega$, that is,

$N_\Omega(x) = \{h \in X^* : \langle h, z - x \rangle \le 0,\ \forall z \in \Omega\}$ if $x \in \Omega$, and $N_\Omega(x) = \emptyset$ otherwise,  (25)

one implicitly finds solutions of (P). The operators $\partial F$ and $N_\Omega$ are maximal monotone (cf. [51], by taking into account that $N_\Omega$ is the subgradient of the indicator function of the set $\Omega$). Consequently, the operator $A$ is maximal monotone too (cf. [50]). We presume that the function $F$ cannot be exactly determined and that, instead, we have a sequence of convex, lower semicontinuous functions $F_k : X \to (-\infty,+\infty]$, $k \in \mathbb{N}$, such that

$\operatorname{Dom} F \subseteq \operatorname{Dom} F_k$, $k \in \mathbb{N}$,  (26)

and which approximates $F$ in the following sense:

Condition (A). There exists a continuous function $c : [0,+\infty) \to [0,+\infty)$ and a sequence of positive real numbers $\{\delta_k\}$ such that $\lim_{k\to\infty} \delta_k = 0$ and

$|F_k(x) - F(x)| \le c(\|x\|)\,\delta_k$,  (27)

whenever $x \in \operatorname{Dom} F$ and $k \in \mathbb{N}$.
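A minimal numeric check of Condition (A) on hypothetical data: take $F(x) = x^2$ on $\mathbb{R}$ and the convex approximants $F_k(x) = x^2 + |x|/k$; then (27) holds with $c(t) = t$ and $\delta_k = 1/k$.

```python
# Hypothetical approximants satisfying condition (A): F(x) = x^2 and
# F_k(x) = x^2 + |x|/k are convex, and |F_k(x) - F(x)| = |x|/k, i.e.
# the bound (27) holds with c(t) = t and delta_k = 1/k.
F = lambda x: x * x

def F_k(x, k):
    return x * x + abs(x) / k

for k in [1, 10, 100]:
    for x in [-3.0, 0.0, 2.5]:
        assert abs(F_k(x, k) - F(x)) <= abs(x) * (1.0 / k) + 1e-12
print("condition (A) verified on sample points")
```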

In real world optimization problems it often happens that the set $\Omega$ is defined by a system of inequalities $g_i(x) \le 0$, $i \in I$, where the $g_i$ are convex and lower semicontinuous functions on $X$. The functions $g_i$ may also be hard to precisely evaluate and, then, determining the set $\Omega$ (or determining whether a vector belongs to it or not) is done by using some (still convex and lower semicontinuous) approximations $g_{i,k}$, $k \in \mathbb{N}$, instead. In other words, one replaces the set $\Omega$ by some nonempty closed convex approximations $\Omega_k$, $k \in \mathbb{N}$, of it. In what follows we assume that

$\Omega_k \subseteq \operatorname{Int}(\operatorname{Dom} F)$, $k \in \mathbb{N}$,  (28)

and that the closed convex sets $\Omega_k$ approximate the set $\Omega$ in the following sense:

Condition (B). The next two requirements are satisfied:

(i) For any $y \in \Omega$ there exists a sequence $\{y_k\}$ which converges strongly to $y$ in $X$ and such that $y_k \in \Omega_k$ for all $k \in \mathbb{N}$;

(ii) If $\{z_i\}$ is a sequence in $X$ which is weakly convergent and such that for some subsequence $\{\Omega_{k_i}\}$ of $\{\Omega_k\}$ we have $z_i \in \Omega_{k_i}$ for all $i \in \mathbb{N}$, then there exists a sequence $\{w_i\}$ contained in $\Omega$ with the property that $\lim_{i\to\infty} \|z_i - w_i\| = 0$.

Observe that the requirement (B(i)) is equivalent to the condition that $\Omega \subseteq s\text{-}\liminf_k \Omega_k$. The requirement (B(ii)) implies that $w\text{-}\limsup_k \Omega_k \subseteq \Omega$. Taken together, the requirements (B(i)) and (B(ii)) imply that $\Omega = \operatorname{Lim}_k \Omega_k$. It can be verified that the requirement (B(ii)) is satisfied whenever there exists a function $b : X \to [0,+\infty)$ which is bounded on bounded sets, and a sequence of positive real numbers $\{\gamma_k\}$ converging to zero such that for any $k \in \mathbb{N}$ and each $z \in \Omega_k$, we have that $\operatorname{dist}(z, \Omega) < b(z)\gamma_k$. The last condition was repeatedly used in the regularization of variational inequalities involving maximal monotone operators (see [8]). For each $k \in \mathbb{N}$, we associate to problem (23) the problem
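The sufficient distance bound for Condition (B(ii)) can be checked on a hypothetical one dimensional family: $\Omega = [0, 1]$ and $\Omega_k = [-\gamma_k, 1 + \gamma_k]$ with $\gamma_k = 1/k$, for which $\operatorname{dist}(z, \Omega) \le \gamma_k$ for every $z \in \Omega_k$, i.e., the bound holds with $b \equiv 1$.

```python
# Hypothetical feasibility sets: Omega = [0, 1], Omega_k = [-g, 1 + g] with
# g = gamma_k = 1/k. Every z in Omega_k satisfies dist(z, Omega) <= gamma_k,
# which is the sufficient condition for (B)(ii) with b identically 1.
def dist_to_unit_interval(z):
    # Distance from z to Omega = [0, 1].
    return max(-z, z - 1.0, 0.0)

for k in range(1, 100):
    gamma = 1.0 / k
    for z in [-gamma, 0.0, 0.5, 1.0, 1.0 + gamma]:  # sample points of Omega_k
        assert dist_to_unit_interval(z) <= gamma + 1e-12
print("distance bound for condition (B)(ii) verified on samples")
```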

(P_k)   Minimize F_k(x) subject to x ∈ Ω_k,

which can be solved by finding solutions of the inclusion

(P′_k)   Find x ∈ X such that 0 ∈ A_k x,

where the operator A_k : X → 2^{X*} is defined by

A_k := ∂F_k + N_{Ω_k}, (29)

and is also maximal monotone. The question is whether, under the conditions (A), (B), (11), and presuming that {α_k} converges to zero, the sequence {x_k} generated according to (4) for the operators A_k given by (29) and for f = f_k = 0, k ∈ ℕ, i.e., the sequence

x_k := (A_k + α_k J^μ)^{-1}(0), (30)

converges strongly to a solution of problem (P′) and, hence, to a solution of the original optimization problem (P). It should be noted that, since by Asplund's Theorem (see, for instance, [22]) we have J^μ = ∂(φ ∘ ‖·‖) with φ(t) := ∫₀ᵗ μ(τ) dτ, determining the vectors x_k defined by (30) amounts to solving the optimization problem

(Q_k)   Minimize F_k(x) + α_k φ(‖x‖) subject to x ∈ Ω_k. (31)

By contrast to problem (P_k), which may have infinitely many solutions, the problem (Q_k) always has a unique solution. Moreover, by choosing μ(t) = 2t and, thus, φ(t) = t², one ensures that the objective function of (Q_k) is strongly convex and, therefore, the problem (Q_k) may be better posed and easier to solve than (P_k).

3.2 We aim now towards giving an answer to the question asked in Subsection 3.1. To this end, when D is a nonempty closed convex subset of X and x ∈ X, we denote by Proj_D(x) the metric projection of x onto the set D (this exists and is unique by our hypothesis that the space X is

strictly convex and reflexive). The next result shows stability and convergence of the regularization technique when applied to convex optimization problems. For proving it, recall that the objective function F of the problem (P) is assumed to be lower semicontinuous and convex and that its domain Dom F has nonempty interior, since Ω ⊆ Int(Dom F) (see Subsection 3.1). Consequently, F is continuous on Int(Dom F); for each x ∈ Int(Dom F) we have ∂F(x) ≠ ∅ (cf. [48]) and the right hand sided derivative of F at x, i.e. the function F′(x, ·) : X → ℝ given by

F′(x, d) := lim_{t↓0} [ F(x + td) − F(x) ] / t,

is a well defined continuous sublinear functional on X.

Theorem. Suppose that conditions (A) and (B) are satisfied. If there exists a sequence {α_k} of positive real numbers converging to zero such that for each optimal solution v of (P) there exists a sequence {v^k} with the properties that v^k ∈ Ω_k for all k ∈ ℕ and

lim_{k→∞} ‖v^k − v‖ = 0 = lim_{k→∞} α_k^{-1} ‖ Proj_{∂F_k(v^k) + N_{Ω_k}(v^k)}(0) ‖, (32)

then the sequence {x_k} given by (30) converges strongly to the minimal norm solution of the optimization problem (P).

Proof. We show that Corollary 2.5 applies to the problems (P′) and (P′_k), that is, to the maximal monotone operators A and A_k defined by (24) and (29), respectively, and to the functions f = f_k = 0, k ∈ ℕ. First, we prove that condition (16) is satisfied. For this purpose, take (z, h) ∈ ws–lim sup_{k→∞} Graph(A_k). Then, there exists a sequence {z^k} converging weakly to z in X and a sequence {h^k} converging strongly to h in X* such that, for some subsequence {A_{i_k}} of {A_k}, we have (z^k, h^k) ∈ Graph(A_{i_k}) for all k ∈ ℕ. This means that z^k ∈ Ω_{i_k} and h^k ∈ ∂F_{i_k}(z^k) + N_{Ω_{i_k}}(z^k), k ∈ ℕ, or, equivalently,

z^k ∈ Ω_{i_k} and h^k = ξ^k + θ^k,

with ξ^k ∈ ∂F_{i_k}(z^k) and θ^k ∈ N_{Ω_{i_k}}(z^k) for all k ∈ ℕ. We have to show that

z ∈ Ω and h ∈ ∂F(z) + N_Ω(z). (33)

The sequence {z^k} is weakly convergent to z and z^k ∈ Ω_{i_k} for all k ∈ ℕ. Therefore, according to (B(ii)), there exists a sequence {w^k} ⊆ Ω such that lim_{k→∞} ‖z^k − w^k‖ = 0. Clearly, the sequence {w^k} converges weakly to z. Since the set Ω is closed and convex, and therefore weakly closed, we obtain that z ∈ Ω. In order to complete the proof of (33), let u ∈ Ω be fixed. According to (B(i)), there exists a sequence {u^k} which converges strongly to u and such that u^k ∈ Ω_k for any k ∈ ℕ. Since h^k − θ^k = ξ^k ∈ ∂F_{i_k}(z^k), we deduce

⟨h^k − θ^k, u^{i_k} − z^k⟩ ≤ F_{i_k}(u^{i_k}) − F_{i_k}(z^k)
  ≤ | F_{i_k}(u^{i_k}) − F(u^{i_k}) | + | F(z^k) − F_{i_k}(z^k) | + F(u^{i_k}) − F(z^k)
  ≤ ( c(‖u^{i_k}‖) + c(‖z^k‖) ) δ_{i_k} + F(u^{i_k}) − F(z^k),

where the last inequality results from (27). By consequence,

⟨h^k, u^{i_k} − z^k⟩ ≤ ( c(‖u^{i_k}‖) + c(‖z^k‖) ) δ_{i_k} + F(u^{i_k}) − F(z^k) + ⟨θ^k, u^{i_k} − z^k⟩,

where the last term on the right hand side of the inequality is nonpositive because θ^k ∈ N_{Ω_{i_k}}(z^k) and u^{i_k} ∈ Ω_{i_k} (see (25)). Thus, for any k ∈ ℕ, we obtain

⟨h^k, u^{i_k} − z^k⟩ ≤ ( c(‖u^{i_k}‖) + c(‖z^k‖) ) δ_{i_k} + F(u^{i_k}) − F(z^k). (34)

As noted above, the function F is continuous on Int(Dom F). Hence, the sequence {F(u^{i_k})} converges to F(u). Since F is also convex, it is weakly lower semicontinuous and, then, we have F(z) ≤ lim inf_{k→∞} F(z^k). Taking lim sup for k → ∞ on both sides of (34), and taking into account that the sequences {u^{i_k}} and {z^k} are bounded and that the function c is continuous (see condition (A)), we obtain that

⟨h, u − z⟩ ≤ F(u) − F(z). (35)

Since the latter holds for arbitrary u ∈ Ω, it implies that h ∈ ∂F_Ω(z), where F_Ω : X → (−∞, +∞] is the lower semicontinuous convex function defined by

F_Ω := F + ι_Ω,

with ι_Ω standing for the indicator function of the set Ω. As noted above, the function F is continuous on the interior of its domain and, thus, is continuous on Ω ⊆ Dom F_Ω. Hence, applying [48] and observing that ∂ι_Ω = N_Ω (see (25)), we deduce that, for any x ∈ X,

∂F_Ω(x) = ∂F(x) + ∂ι_Ω(x) = ∂F(x) + N_Ω(x).

Consequently,

h ∈ ∂F_Ω(z) = ∂F(z) + N_Ω(z),

and this completes the proof of (33). Now observe that, according to (32) and (29), for each solution v of (P) there exists a sequence {v^k} such that v^k ∈ Ω_k for all k ∈ ℕ and with the property that

lim_{k→∞} α_k^{-1} dist(0, A_k v^k) = lim_{k→∞} α_k^{-1} ‖ Proj_{∂F_k(v^k) + N_{Ω_k}(v^k)}(0) ‖ = 0,

that is, condition (17) is also satisfied. ∎

3.3 Recall (see Subsection 3.1) that we assume that the problem (P) has optimal solutions. By contrast, some or all of the problems (P_k) may not have optimal solutions. Theorem 3.2 guarantees existence and convergence of {x_k} to a solution of (P) with no consistency requirements on the problems (P_k). In our circumstances the functions F_k may not have global minimizers either. The following consequence of Theorem 3.2 may be of use for global minimization of F when some of the problems (P_k) have no optimal solutions.
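As an illustration of the behaviour guaranteed by Theorem 3.2 — the iterates (30) single out the minimum norm solution even when (P) has many solutions — consider the following finite-dimensional sketch. It is not part of the paper: it takes X = ℝ² (Hilbert), μ(t) = 2t so that (31) is Tikhonov regularization with the term α_k‖x‖², Ω_k = Ω a box, a perturbed linear objective F_k(x) = ⟨c_k, x⟩ with error level δ_k satisfying δ_k/α_k → 0, and solves each strongly convex problem (Q_k) by projected gradient descent (an illustrative solver choice):

```python
import numpy as np

def solve_Qk(c_k, alpha_k, lo=-1.0, hi=1.0, steps=2000):
    """Solve (Q_k): minimize <c_k, x> + alpha_k*||x||^2 over the box
    [lo, hi]^n by projected gradient descent; the objective is
    2*alpha_k-strongly convex, so the minimizer is unique."""
    x = np.zeros_like(c_k)
    lr = 1.0 / (2.0 * alpha_k + 1.0)          # safe constant step size
    for _ in range(steps):
        x = np.clip(x - lr * (c_k + 2.0 * alpha_k * x), lo, hi)
    return x

# F(x) = x_1 on the box [-1,1]^2: the solution set of (P) is the whole
# edge {(-1, t) : |t| <= 1}; its minimum norm element is (-1, 0).
c = np.array([1.0, 0.0])
for k in [1, 5, 20]:
    delta_k = 1.0 / k**2                       # data error level, -> 0
    alpha_k = 1.0 / k                          # regularization, delta_k/alpha_k -> 0
    c_k = c + delta_k * np.array([0.3, -0.7])  # perturbed objective F_k
    x_k = solve_Qk(c_k, alpha_k)
    print(k, x_k)                              # tends to (-1, 0)
```

The printed iterates approach (−1, 0), the minimum norm element of the solution edge of the unperturbed problem.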

Corollary. Suppose that conditions (A) and (B) hold. If there exists a sequence {α_k} of positive real numbers converging to zero such that for each optimal solution v of (P) there exists a sequence {v^k} with the properties that v^k ∈ Ω_k for all k ∈ ℕ and

lim_{k→∞} ‖v^k − v‖ = 0 = lim_{k→∞} α_k^{-1} ‖ Proj_{∂F_k(v^k)}(0) ‖, (36)

then the sequence {x_k} given by (30) converges strongly to the minimal norm solution of the optimization problem (P).

Proof. Note that 0 ∈ N_{Ω_k}(v^k) for all k ∈ ℕ and, therefore, when (36) holds, we have

dist(0, A_k v^k) = inf{ ‖g + ζ‖ : g ∈ ∂F_k(v^k) and ζ ∈ N_{Ω_k}(v^k) }
  ≤ inf{ ‖g‖ : g ∈ ∂F_k(v^k) } = ‖ Proj_{∂F_k(v^k)}(0) ‖.

This implies (32) because of (36). ∎

3.4 If X is a Hilbert space and the functions F_k are differentiable on the interior of Dom F, then the condition (36) can be relaxed by taking into account (see [52, Remark 3, p. 890]) that, in this case, we have

∇F_k(v^k) + Proj_{N_{Ω_k}(v^k)}(−∇F_k(v^k)) = −Proj_{T_{Ω_k}(v^k)}(−∇F_k(v^k)), (37)

where T_{Ω_k}(v^k) denotes the tangent cone of Ω_k at the point v^k, that is, the polar cone of N_{Ω_k}(v^k). Precisely, we have the following result, whose proof reproduces without modification the arguments in Theorem 3.2, with the only exception that for showing (17) one uses (37), (39) below, and the equalities

dist(0, A_k v^k) = dist(0, ∇F_k(v^k) + N_{Ω_k}(v^k)) = dist(−∇F_k(v^k), N_{Ω_k}(v^k))
  = ‖ ∇F_k(v^k) + Proj_{N_{Ω_k}(v^k)}(−∇F_k(v^k)) ‖, (38)

where the first equality holds because A_k v^k = ∇F_k(v^k) + N_{Ω_k}(v^k) and the last one follows from the definition of the metric projection.
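The identity (37) is the Moreau decomposition of a Hilbert space along the mutually polar cones N_Ω(v) and T_Ω(v). For a box in ℝⁿ both projections are componentwise clamps, so the decomposition can be checked directly; the sketch below is illustrative (the box, the point v and the vector g are choices made here, not data from the paper):

```python
import numpy as np

def proj_normal_cone_box(v, x, lo=0.0, hi=1.0, tol=1e-12):
    """Projection onto N_Omega(v) for the box Omega = [lo, hi]^n:
    componentwise the cone is [0, inf) where v_i = hi, (-inf, 0]
    where v_i = lo, and {0} at inactive coordinates."""
    p = np.zeros_like(x)
    at_hi = np.abs(v - hi) < tol
    at_lo = np.abs(v - lo) < tol
    p[at_hi] = np.maximum(x[at_hi], 0.0)
    p[at_lo] = np.minimum(x[at_lo], 0.0)
    return p

def proj_tangent_cone_box(v, x, lo=0.0, hi=1.0, tol=1e-12):
    """Projection onto the tangent cone T_Omega(v), the polar of N_Omega(v)."""
    p = x.copy()
    at_hi = np.abs(v - hi) < tol
    at_lo = np.abs(v - lo) < tol
    p[at_hi] = np.minimum(p[at_hi], 0.0)
    p[at_lo] = np.maximum(p[at_lo], 0.0)
    return p

v = np.array([1.0, 0.0, 0.5])    # boundary point of the box [0,1]^3
g = np.array([0.7, -0.2, 0.3])   # plays the role of -grad F_k(v)
pn = proj_normal_cone_box(v, g)
pt = proj_tangent_cone_box(v, g)
# Moreau decomposition: g = Proj_N(g) + Proj_T(g) with orthogonal parts,
# hence dist(g, N_Omega(v)) = ||Proj_T(g)||, which is what (38) uses.
print(pn, pt, float(np.dot(pn, pt)))
```

The two parts recompose g and are orthogonal, which is exactly why the quantity in (38) coincides with the tangential projection norm appearing in (39).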

Corollary. Suppose that X is a Hilbert space and that conditions (A) and (B) hold. If the functions F and F_k, k ∈ ℕ, are (Gâteaux) differentiable on Int(Dom F) and if there exists a sequence {α_k} of positive real numbers converging to zero such that for each optimal solution v of (P) there exists a sequence {v^k} with the properties that v^k ∈ Ω_k for all k ∈ ℕ and

lim_{k→∞} ‖v^k − v‖ = 0 = lim_{k→∞} α_k^{-1} ‖ Proj_{T_{Ω_k}(v^k)}(−∇F_k(v^k)) ‖, (39)

then the sequence {x_k} given by (30) converges strongly to the minimal norm solution of the optimization problem (P).

3.5 If Ω_k = Ω for all k ∈ ℕ, then condition (B) is, obviously, satisfied. In this case, if there exists a sequence {α_k} of positive real numbers converging to zero such that for each solution v of (P) we have

lim_{k→∞} α_k^{-1} ‖ ∇F_k(v) − ∇F(v) ‖ = 0, (40)

then (32) holds too. Indeed, if v is a solution of (P), then

⟨∇F(v), u − v⟩ = lim_{t↓0} [ F(v + t(u − v)) − F(v) ] / t ≥ 0,

for any u ∈ Ω, and this shows that −∇F(v) ∈ N_Ω(v). Therefore, taking v^k := v for all k ∈ ℕ, we have

‖ Proj_{∇F_k(v) + N_Ω(v)}(0) ‖ = dist(−∇F_k(v), N_Ω(v)) ≤ ‖ −∇F_k(v) − (−∇F(v)) ‖ = ‖ ∇F_k(v) − ∇F(v) ‖,

which together with (40) implies (32). Hence, we have the following result:

Corollary. Suppose that Ω_k = Ω for all k ∈ ℕ and condition (A) holds. Assume that the functions F and F_k, k ∈ ℕ, are (Gâteaux) differentiable on Int(Dom F). If there exists a sequence {α_k} of positive real numbers

converging to zero such that for each solution v of (P) condition (40) is satisfied, then the sequence {x_k} given by (30) converges strongly to the minimal norm solution of the optimization problem (P).

3.6 Theorem 3.2 shows that perturbed convex optimization problems can be regularized by the method (4). This naturally leads to the question of whether this regularization technique still works when the set Ω is defined by continuous affine constraints and one has to replace the affine constraints of (P) by approximations which are still continuous and affine. We are going to show that this is indeed the case when Ω satisfies a Robinson type regularity condition. In order to do that, let L(X, X) be the Banach space of all linear continuous operators L : X → X provided with the norm

‖L‖ := sup{ ‖Lx‖ : x ∈ B_X(0, 1) }, (41)

where B_X(u, r) stands for the closed ball of center u and radius r in X. Suppose that L, L_k ∈ L(X, X), l, l_k ∈ X, and let K ⊆ X be a nonempty closed convex cone. Suppose that the sets Ω and Ω_k are defined by

Ω := { x ∈ X : L(x) + l ∈ K } (42)

and

Ω_k := { x ∈ X : L_k(x) + l_k ∈ K }. (43)

The set Ω is called regular if the point-to-set mapping x ↦ L(x) + l − K is regular in the sense of Robinson [49, p. 132], that is, if

0 ∈ Int{ L(x) + l − y : x ∈ X, y ∈ K }. (44)

Taking K = {0} in the next result, one obtains an answer to the question posed above. In fact, the proposition we prove below is more general and can also be used in order to guarantee validity of condition (B) for some classes of problems of interest in semidefinite programming. Combined with Theorem 3.2, it implies that if the data involved in the constraints of the perturbed problems (P_k) are strong approximations of the data of (P), and if conditions (A) and (32) hold, then the regularization technique (4) can be applied in order to produce strong approximations of the minimal norm solution of (P).
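For equality constraints, i.e. K = {0} in (42), condition (44) asks that the set {L(x) + l : x ∈ X} contain 0 in its interior; since the range of L is a subspace, this happens exactly when L is surjective. In finite dimensions that is a rank condition, which the following sketch (illustrative, not from the paper) checks numerically:

```python
import numpy as np

def is_robinson_regular_eq(L):
    """For K = {0}, condition (44) reads 0 in Int{L x + l - y : x, y in K}
    = Int(range(L) + l); since range(L) is a subspace, this holds exactly
    when L is surjective, i.e. has full row rank (l is then irrelevant)."""
    L = np.asarray(L, dtype=float)
    return bool(np.linalg.matrix_rank(L) == L.shape[0])

reg_ok = is_robinson_regular_eq([[1.0, 0.0, 2.0],
                                 [0.0, 1.0, -1.0]])    # full row rank
reg_bad = is_robinson_regular_eq([[1.0, 2.0, 0.0],
                                  [2.0, 4.0, 0.0]])    # rank 1 of 2
print(reg_ok, reg_bad)   # True False
```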

Proposition. Suppose that L, L_k ∈ L(X, X) and l, l_k ∈ X for any k ∈ ℕ. Let K ⊆ X be a nonempty closed convex cone and consider the problems (P) and (P_k) with the feasibility sets Ω and Ω_k defined by (42) and (43), respectively. If the set Ω is regular, if the sequence {L_k} converges strongly to L in L(X, X), and if the sequence {l_k} converges strongly to l in X, then condition (B) is satisfied.

Proof. We first prove (B(ii)). For this purpose we apply Corollary 4 in [10, p. 133] to the set M = K − l and to the point-to-set mapping G : X → 2^X defined by G(x) = L(x) − M. This is possible because of the regularity of Ω (see (44)), which guarantees that 0 ∈ Int G(X). Hence, by observing that Ω = G^{-1}(0), we deduce that for any x ∈ Ω there exists a positive real number δ(x) such that for any z ∈ X we have

dist(z, Ω) = dist(z, G^{-1}(0)) ≤ [ (1 + ‖z − x‖) / δ(x) ] dist(L(z), M)
  = [ (1 + ‖z − x‖) / δ(x) ] dist(L(z) + l, K). (45)

Now, let x* ∈ Ω be fixed and let δ* := δ(x*). If z ∈ Ω_k, then L_k(z) + l_k ∈ K and, consequently,

dist(L(z) + l, K) ≤ ‖ L(z) + l − [ L_k(z) + l_k ] ‖ ≤ ‖L − L_k‖ ‖z‖ + ‖l − l_k‖,

because of (41). Taking into account (45), it follows that, for any z ∈ Ω_k, we have

dist(z, Ω) ≤ (1/δ*) ( 1 + ‖z − x*‖ ) ( ‖L − L_k‖ ‖z‖ + ‖l − l_k‖ )
  ≤ (1/δ*) ( 1 + ‖z − x*‖ ) ( ‖z‖ + 1 ) max{ ‖L − L_k‖, ‖l − l_k‖ }. (46)

Consider the function b : X → [0, +∞), bounded on bounded sets, defined by

b(u) := (1/δ*) ( ‖u‖ + 1 ) ( 1 + ‖u‖ + ‖x*‖ ).

Since 1 + ‖z − x*‖ ≤ 1 + ‖z‖ + ‖x*‖, by (46) we obtain that, for any z ∈ Ω_k,

‖ z − Proj_Ω(z) ‖ = dist(z, Ω) ≤ b(z) γ_k, (47)

where γ_k := max{ ‖L − L_k‖, ‖l − l_k‖ } converges to zero as k → ∞. Let {z^k} be a weakly convergent sequence in X such that, for some subsequence {Ω_{i_k}} of {Ω_k}, we have z^k ∈ Ω_{i_k} for all k ∈ ℕ. According to (47), the vectors w^k := Proj_Ω(z^k) have the property that ‖z^k − w^k‖ ≤ b(z^k) γ_{i_k}, where, since {z^k} is bounded, the sequence {b(z^k)} is bounded too. Hence, lim_{k→∞} ‖z^k − w^k‖ = 0 and (B(ii)) is satisfied.

Now we prove that (B(i)) is also satisfied. To this end, we consider the function g : X × T → X, where T := {0} ∪ { 1/k : k ∈ ℕ }, defined by

g(x, 0) = L(x) + l and g(x, 1/k) = L_k(x) + l_k, k ∈ ℕ,

and the point-to-set mapping Γ : X × T → 2^X defined by

Γ(x, t) = g(x, t) − K, for all x ∈ X and t ∈ T.

Since Ω is regular (see (44)), we have that 0 ∈ Int[ Im Γ(·, 0) ]. Clearly, Ω = Γ(·, 0)^{-1}(0). Let u ∈ Ω. By Theorem 1 in [49], there exists η > 0 such that

B_X(0, η) ⊆ g(B_X(u, 1), 0) − K. (48)

Since ‖L_k − L‖ and ‖l_k − l‖ converge to 0, there exists k₀ ∈ ℕ such that for any integer k ≥ k₀ we have

‖ g(x, 0) − g(x, 1/k) ‖ = ‖ L(x) + l − L_k(x) − l_k ‖ ≤ η/2, ∀x ∈ B_X(u, 1).

This implies that, whenever k ≥ k₀, we have

g(B_X(u, 1), 0) − K ⊆ g(B_X(u, 1), 1/k) − K + B_X(0, η/2). (49)

This and (48) show that the function g satisfies the assumptions in [49, Corollary 2] (with −K instead of K). Consequently, application of this result yields that, for any integer k ≥ k₀, the set { g(x, 1/k) − y : x ∈ B_X(u, 1), y ∈ K } contains the open ball B_X(0, η/2) and that, for any x ∈ X, we have

dist(x, Ω_k) ≤ (2/η) ( 1 + ‖x − u‖ ) dist(0, g(x, 1/k) − K). (50)

Note that, if x ∈ Ω, then g(x, 0) ∈ K and, therefore, we have

dist(0, g(x, 1/k) − K) = dist(g(x, 1/k), K) ≤ ‖ g(x, 1/k) − g(x, 0) ‖
  = ‖ L_k(x) + l_k − L(x) − l ‖ ≤ ‖L_k − L‖ ‖x‖ + ‖l_k − l‖
  ≤ ( ‖x‖ + 1 ) max{ ‖L_k − L‖, ‖l_k − l‖ }.

This and (50) imply

dist(x, Ω_k) ≤ (2/η) ( 1 + ‖x − u‖ ) ( ‖x‖ + 1 ) max{ ‖L_k − L‖, ‖l_k − l‖ }, (51)

for any x ∈ Ω. Define the function a : [0, +∞) → [0, +∞) by

a(t) := (2/η) ( 1 + ‖u‖ + t ) ( t + 1 ),

and the sequence of nonnegative real numbers

β_k := max{ ‖L_k − L‖, ‖l_k − l‖ }.

According to (51), and since 1 + ‖x − u‖ ≤ 1 + ‖u‖ + ‖x‖, we have

‖ x − Proj_{Ω_k}(x) ‖ = dist(x, Ω_k) ≤ a(‖x‖) β_k,

for all integers k ≥ k₀ and for all x ∈ Ω. Since, by hypothesis, lim_{k→∞} β_k = 0, condition (B(i)) holds. ∎

3.7 The implementation of the regularization procedure discussed in this work requires computing the vectors x_k defined by (30) with the operators A_k given by (29). This implicitly means solving problems like (31). In some circumstances, the regularization process allows one to reduce problems placed in infinite dimensional settings to finite dimensional problems, for which many efficient techniques of computing solutions are available. This is typically the case of the problem considered in the following example.

Let X = ℓ^p with p ∈ (1, +∞), q = p(p − 1)^{-1} and, then, X* = ℓ^q. Let a ∈ ℓ^q \ {0} and b^j ∈ ℓ^q₊ \ {0} for all j ∈ J, where J is a nonempty set of indices and ℓ^q₊ (respectively ℓ^p₊) stands for the subset of ℓ^q (respectively ℓ^p) consisting of vectors with nonnegative coordinates. For each j ∈ J, let β_j be a nonnegative real number. Consider (P) to be the following optimization problem in ℓ^p:

Minimize F(x) = ⟨a, x⟩ over the set (52)

Ω := { x ∈ ℓ^p₊ : ⟨b^j, x⟩ ≤ β_j, j ∈ J }. (53)

We assume that a = (a₁, ..., a_i, ...) has infinitely many coordinates a_i ≠ 0 and that the problem (P) has optimal solutions. Whenever u is an element of ℓ^p or of ℓ^q, we denote by u[k] the vector in the same space as u obtained by replacing by zero all coordinates u_i of u with i > k. With this notation, for

each k ∈ ℕ, let α_k := ‖ a − a[k] ‖^{1/2}, and observe that {α_k} is a sequence of positive real numbers which converges to zero as k → ∞. We associate to problem (52)–(53) the perturbed problems (P_k) given by

Minimize F_k(x) = ⟨a[k], x⟩ over Ω. (54)

Note that, for each k ∈ ℕ, the problem (54) has optimal solutions because its objective function is bounded from below on Ω by F* := inf{ F(x) : x ∈ Ω }. Problem (P) is ill posed and, therefore, even if one can find an optimal solution y^k for each of the essentially finite dimensional linear programming problems (P_k), the sequence {y^k} may not converge in ℓ^p or, at best, its weak accumulation points (if any) are optimal solutions of (P). We apply the regularization method (4) to the problems (P) and (P_k) with the function μ(t) = t^{p−1}. It is easy to see that, in this case, determining the vector x_k defined by (30) reduces to finding the unique optimal solution of the problem

Minimize ⟨a[k], x⟩ + (α_k / p) ‖x‖^p over Ω. (55)

Theorem 3.2 applies to the problems (P) and (P_k) and guarantees that the sequence {x_k} converges strongly to the minimal norm solution of (P). Indeed, observe that condition (A) is satisfied because, for any x ∈ ℓ^p, we have

| F_k(x) − F(x) | ≤ ‖ a − a[k] ‖ ‖x‖ = α_k² ‖x‖,

and condition (B) trivially holds. It remains to prove that (32) holds too, that is, that for any optimal solution v of (P) there exists a sequence {v^k} of vectors in Ω such that

lim_{k→∞} ‖v^k − v‖ = 0 = lim_{k→∞} α_k^{-1} ‖ Proj_{a[k] + N_Ω(v^k)}(0) ‖. (56)
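A finite-dimensional rendering of the example (52)–(55) can be computed directly. The sketch below is illustrative rather than part of the paper: it takes p = 2 (so μ(t) = t and (55) carries the regularizer (α_k/2)‖x‖²), a single constraint with b = (1, 1, ...) and β = 1, a summable stand-in for a with finitely many nonzero entries (hence a small floor is put under α_k, which in the paper is always positive), and solves (55) by projected gradient descent with an exact projection onto Ω:

```python
import numpy as np

def proj_omega(y, beta=1.0):
    """Euclidean projection onto Omega = {x >= 0 : sum(x) <= beta},
    i.e. the set (53) with the single constraint b = (1,...,1)."""
    x = np.maximum(y, 0.0)
    if x.sum() <= beta:
        return x
    # the cap is active: project onto the simplex {x >= 0, sum(x) = beta}
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, y.size + 1)
    rho = ks[u - (css - beta) / ks > 0][-1]
    tau = (css[rho - 1] - beta) / rho
    return np.maximum(y - tau, 0.0)

def solve_55(a_k, alpha_k, steps=3000):
    """Projected gradient for (55) with p = 2:
       minimize <a_k, x> + (alpha_k/2)*||x||^2 over Omega."""
    x = np.zeros_like(a_k)
    lr = 1.0 / (alpha_k + 1.0)
    for _ in range(steps):
        x = proj_omega(x - lr * (a_k + alpha_k * x))
    return x

n = 50                                      # finite stand-in for l^p
a = np.array([(-1.0) ** i / (i + 1.0) ** 2 for i in range(n)])
for k in [5, 20, 50]:
    a_k = np.where(np.arange(n) < k, a, 0.0)             # the truncation a[k]
    alpha_k = max(np.linalg.norm(a - a_k), 1e-8) ** 0.5  # floored, see above
    x_k = solve_55(a_k, alpha_k)
    print(k, round(float(x_k[1]), 4))
```

As k grows, x_k concentrates its unit mass on the most negative coefficient of a (here the entry −1/4 at 0-based index 1), the unique, hence minimum norm, solution of the limit problem.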


More information

ADMM for monotone operators: convergence analysis and rates

ADMM for monotone operators: convergence analysis and rates ADMM for monotone operators: convergence analysis and rates Radu Ioan Boţ Ernö Robert Csetne May 4, 07 Abstract. We propose in this paper a unifying scheme for several algorithms from the literature dedicated

More information

AW -Convergence and Well-Posedness of Non Convex Functions

AW -Convergence and Well-Posedness of Non Convex Functions Journal of Convex Analysis Volume 10 (2003), No. 2, 351 364 AW -Convergence Well-Posedness of Non Convex Functions Silvia Villa DIMA, Università di Genova, Via Dodecaneso 35, 16146 Genova, Italy villa@dima.unige.it

More information

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS MATHEMATICS OF OPERATIONS RESEARCH Vol. 28, No. 4, November 2003, pp. 677 692 Printed in U.S.A. ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS ALEXANDER SHAPIRO We discuss in this paper a class of nonsmooth

More information

Semi-infinite programming, duality, discretization and optimality conditions

Semi-infinite programming, duality, discretization and optimality conditions Semi-infinite programming, duality, discretization and optimality conditions Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0205,

More information

A SET OF LECTURE NOTES ON CONVEX OPTIMIZATION WITH SOME APPLICATIONS TO PROBABILITY THEORY INCOMPLETE DRAFT. MAY 06

A SET OF LECTURE NOTES ON CONVEX OPTIMIZATION WITH SOME APPLICATIONS TO PROBABILITY THEORY INCOMPLETE DRAFT. MAY 06 A SET OF LECTURE NOTES ON CONVEX OPTIMIZATION WITH SOME APPLICATIONS TO PROBABILITY THEORY INCOMPLETE DRAFT. MAY 06 CHRISTIAN LÉONARD Contents Preliminaries 1 1. Convexity without topology 1 2. Convexity

More information

Continuity of convex functions in normed spaces

Continuity of convex functions in normed spaces Continuity of convex functions in normed spaces In this chapter, we consider continuity properties of real-valued convex functions defined on open convex sets in normed spaces. Recall that every infinitedimensional

More information

On pseudomonotone variational inequalities

On pseudomonotone variational inequalities An. Şt. Univ. Ovidius Constanţa Vol. 14(1), 2006, 83 90 On pseudomonotone variational inequalities Silvia Fulina Abstract Abstract. There are mainly two definitions of pseudomonotone mappings. First, introduced

More information

On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q)

On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q) On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q) Andreas Löhne May 2, 2005 (last update: November 22, 2005) Abstract We investigate two types of semicontinuity for set-valued

More information

Convergence Theorems for Bregman Strongly Nonexpansive Mappings in Reflexive Banach Spaces

Convergence Theorems for Bregman Strongly Nonexpansive Mappings in Reflexive Banach Spaces Filomat 28:7 (2014), 1525 1536 DOI 10.2298/FIL1407525Z Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Convergence Theorems for

More information

Exercise Solutions to Functional Analysis

Exercise Solutions to Functional Analysis Exercise Solutions to Functional Analysis Note: References refer to M. Schechter, Principles of Functional Analysis Exersize that. Let φ,..., φ n be an orthonormal set in a Hilbert space H. Show n f n

More information

(convex combination!). Use convexity of f and multiply by the common denominator to get. Interchanging the role of x and y, we obtain that f is ( 2M ε

(convex combination!). Use convexity of f and multiply by the common denominator to get. Interchanging the role of x and y, we obtain that f is ( 2M ε 1. Continuity of convex functions in normed spaces In this chapter, we consider continuity properties of real-valued convex functions defined on open convex sets in normed spaces. Recall that every infinitedimensional

More information

Constraint qualifications for convex inequality systems with applications in constrained optimization

Constraint qualifications for convex inequality systems with applications in constrained optimization Constraint qualifications for convex inequality systems with applications in constrained optimization Chong Li, K. F. Ng and T. K. Pong Abstract. For an inequality system defined by an infinite family

More information

Chapter 2 Convex Analysis

Chapter 2 Convex Analysis Chapter 2 Convex Analysis The theory of nonsmooth analysis is based on convex analysis. Thus, we start this chapter by giving basic concepts and results of convexity (for further readings see also [202,

More information

Optimality Conditions for Nonsmooth Convex Optimization

Optimality Conditions for Nonsmooth Convex Optimization Optimality Conditions for Nonsmooth Convex Optimization Sangkyun Lee Oct 22, 2014 Let us consider a convex function f : R n R, where R is the extended real field, R := R {, + }, which is proper (f never

More information

Convex Functions. Pontus Giselsson

Convex Functions. Pontus Giselsson Convex Functions Pontus Giselsson 1 Today s lecture lower semicontinuity, closure, convex hull convexity preserving operations precomposition with affine mapping infimal convolution image function supremum

More information

Stationarity and Regularity of Infinite Collections of Sets. Applications to

Stationarity and Regularity of Infinite Collections of Sets. Applications to J Optim Theory Appl manuscript No. (will be inserted by the editor) Stationarity and Regularity of Infinite Collections of Sets. Applications to Infinitely Constrained Optimization Alexander Y. Kruger

More information

Analysis Finite and Infinite Sets The Real Numbers The Cantor Set

Analysis Finite and Infinite Sets The Real Numbers The Cantor Set Analysis Finite and Infinite Sets Definition. An initial segment is {n N n n 0 }. Definition. A finite set can be put into one-to-one correspondence with an initial segment. The empty set is also considered

More information

PARALLEL SUBGRADIENT METHOD FOR NONSMOOTH CONVEX OPTIMIZATION WITH A SIMPLE CONSTRAINT

PARALLEL SUBGRADIENT METHOD FOR NONSMOOTH CONVEX OPTIMIZATION WITH A SIMPLE CONSTRAINT Linear and Nonlinear Analysis Volume 1, Number 1, 2015, 1 PARALLEL SUBGRADIENT METHOD FOR NONSMOOTH CONVEX OPTIMIZATION WITH A SIMPLE CONSTRAINT KAZUHIRO HISHINUMA AND HIDEAKI IIDUKA Abstract. In this

More information

PARTIAL REGULARITY OF BRENIER SOLUTIONS OF THE MONGE-AMPÈRE EQUATION

PARTIAL REGULARITY OF BRENIER SOLUTIONS OF THE MONGE-AMPÈRE EQUATION PARTIAL REGULARITY OF BRENIER SOLUTIONS OF THE MONGE-AMPÈRE EQUATION ALESSIO FIGALLI AND YOUNG-HEON KIM Abstract. Given Ω, Λ R n two bounded open sets, and f and g two probability densities concentrated

More information

Convergence rate estimates for the gradient differential inclusion

Convergence rate estimates for the gradient differential inclusion Convergence rate estimates for the gradient differential inclusion Osman Güler November 23 Abstract Let f : H R { } be a proper, lower semi continuous, convex function in a Hilbert space H. The gradient

More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION

THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION HALUK ERGIN AND TODD SARVER Abstract. Suppose (i) X is a separable Banach space, (ii) C is a convex subset of X that is a Baire space (when endowed

More information

In applications, we encounter many constrained optimization problems. Examples Basis pursuit: exact sparse recovery problem

In applications, we encounter many constrained optimization problems. Examples Basis pursuit: exact sparse recovery problem 1 Conve Analsis Main references: Vandenberghe UCLA): EECS236C - Optimiation methods for large scale sstems, http://www.seas.ucla.edu/ vandenbe/ee236c.html Parikh and Bod, Proimal algorithms, slides and

More information

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods Renato D.C. Monteiro B. F. Svaiter May 10, 011 Revised: May 4, 01) Abstract This

More information

On duality gap in linear conic problems

On duality gap in linear conic problems On duality gap in linear conic problems C. Zălinescu Abstract In their paper Duality of linear conic problems A. Shapiro and A. Nemirovski considered two possible properties (A) and (B) for dual linear

More information

Problem 1: Compactness (12 points, 2 points each)

Problem 1: Compactness (12 points, 2 points each) Final exam Selected Solutions APPM 5440 Fall 2014 Applied Analysis Date: Tuesday, Dec. 15 2014, 10:30 AM to 1 PM You may assume all vector spaces are over the real field unless otherwise specified. Your

More information

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R

More information

Necessary conditions for convergence rates of regularizations of optimal control problems

Necessary conditions for convergence rates of regularizations of optimal control problems Necessary conditions for convergence rates of regularizations of optimal control problems Daniel Wachsmuth and Gerd Wachsmuth Johann Radon Institute for Computational and Applied Mathematics RICAM), Austrian

More information

Lecture 5. Theorems of Alternatives and Self-Dual Embedding

Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c

More information

This document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore.

This document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore. This document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore. Title Composition operators on Hilbert spaces of entire functions Author(s) Doan, Minh Luan; Khoi, Le Hai Citation

More information

A Brøndsted-Rockafellar Theorem for Diagonal Subdifferential Operators

A Brøndsted-Rockafellar Theorem for Diagonal Subdifferential Operators A Brøndsted-Rockafellar Theorem for Diagonal Subdifferential Operators Radu Ioan Boţ Ernö Robert Csetnek April 23, 2012 Dedicated to Jon Borwein on the occasion of his 60th birthday Abstract. In this note

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu

More information

An introduction to Mathematical Theory of Control

An introduction to Mathematical Theory of Control An introduction to Mathematical Theory of Control Vasile Staicu University of Aveiro UNICA, May 2018 Vasile Staicu (University of Aveiro) An introduction to Mathematical Theory of Control UNICA, May 2018

More information

The theory of monotone operators with applications

The theory of monotone operators with applications FACULTY OF MATHEMATICS AND COMPUTER SCIENCE BABEŞ-BOLYAI UNIVERSITY CLUJ-NAPOCA, ROMÂNIA László Szilárd Csaba The theory of monotone operators with applications Ph.D. Thesis Summary Scientific Advisor:

More information

Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces

Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces Applied Mathematical Sciences, Vol. 6, 212, no. 63, 319-3117 Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces Nguyen Buong Vietnamese

More information

g 2 (x) (1/3)M 1 = (1/3)(2/3)M.

g 2 (x) (1/3)M 1 = (1/3)(2/3)M. COMPACTNESS If C R n is closed and bounded, then by B-W it is sequentially compact: any sequence of points in C has a subsequence converging to a point in C Conversely, any sequentially compact C R n is

More information

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT Pacific Journal of Optimization Vol., No. 3, September 006) PRIMAL ERROR BOUNDS BASED ON THE AUGMENTED LAGRANGIAN AND LAGRANGIAN RELAXATION ALGORITHMS A. F. Izmailov and M. V. Solodov ABSTRACT For a given

More information

Duality (Continued) min f ( x), X R R. Recall, the general primal problem is. The Lagrangian is a function. defined by

Duality (Continued) min f ( x), X R R. Recall, the general primal problem is. The Lagrangian is a function. defined by Duality (Continued) Recall, the general primal problem is min f ( x), xx g( x) 0 n m where X R, f : X R, g : XR ( X). he Lagrangian is a function L: XR R m defined by L( xλ, ) f ( x) λ g( x) Duality (Continued)

More information

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping.

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. Minimization Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. 1 Minimization A Topological Result. Let S be a topological

More information

Aliprantis, Border: Infinite-dimensional Analysis A Hitchhiker s Guide

Aliprantis, Border: Infinite-dimensional Analysis A Hitchhiker s Guide aliprantis.tex May 10, 2011 Aliprantis, Border: Infinite-dimensional Analysis A Hitchhiker s Guide Notes from [AB2]. 1 Odds and Ends 2 Topology 2.1 Topological spaces Example. (2.2) A semimetric = triangle

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

Lecture 7: Semidefinite programming

Lecture 7: Semidefinite programming CS 766/QIC 820 Theory of Quantum Information (Fall 2011) Lecture 7: Semidefinite programming This lecture is on semidefinite programming, which is a powerful technique from both an analytic and computational

More information

Hölder Metric Subregularity with Applications to Proximal Point Method

Hölder Metric Subregularity with Applications to Proximal Point Method Hölder Metric Subregularity with Applications to Proximal Point Method GUOYIN LI and BORIS S. MORDUKHOVICH February, 01 Abstract This paper is mainly devoted to the study and applications of Hölder metric

More information

Keywords. 1. Introduction.

Keywords. 1. Introduction. Journal of Applied Mathematics and Computation (JAMC), 2018, 2(11), 504-512 http://www.hillpublisher.org/journal/jamc ISSN Online:2576-0645 ISSN Print:2576-0653 Statistical Hypo-Convergence in Sequences

More information

Optimality, identifiability, and sensitivity

Optimality, identifiability, and sensitivity Noname manuscript No. (will be inserted by the editor) Optimality, identifiability, and sensitivity D. Drusvyatskiy A. S. Lewis Received: date / Accepted: date Abstract Around a solution of an optimization

More information

A double projection method for solving variational inequalities without monotonicity

A double projection method for solving variational inequalities without monotonicity A double projection method for solving variational inequalities without monotonicity Minglu Ye Yiran He Accepted by Computational Optimization and Applications, DOI: 10.1007/s10589-014-9659-7,Apr 05, 2014

More information

Extreme points of compact convex sets

Extreme points of compact convex sets Extreme points of compact convex sets In this chapter, we are going to show that compact convex sets are determined by a proper subset, the set of its extreme points. Let us start with the main definition.

More information

Banach Spaces V: A Closer Look at the w- and the w -Topologies

Banach Spaces V: A Closer Look at the w- and the w -Topologies BS V c Gabriel Nagy Banach Spaces V: A Closer Look at the w- and the w -Topologies Notes from the Functional Analysis Course (Fall 07 - Spring 08) In this section we discuss two important, but highly non-trivial,

More information

arxiv: v1 [math.fa] 30 Jun 2014

arxiv: v1 [math.fa] 30 Jun 2014 Maximality of the sum of the subdifferential operator and a maximally monotone operator arxiv:1406.7664v1 [math.fa] 30 Jun 2014 Liangjin Yao June 29, 2014 Abstract The most important open problem in Monotone

More information

Lipschitz shadowing implies structural stability

Lipschitz shadowing implies structural stability Lipschitz shadowing implies structural stability Sergei Yu. Pilyugin Sergei B. Tihomirov Abstract We show that the Lipschitz shadowing property of a diffeomorphism is equivalent to structural stability.

More information

CHARACTERIZATION OF (QUASI)CONVEX SET-VALUED MAPS

CHARACTERIZATION OF (QUASI)CONVEX SET-VALUED MAPS CHARACTERIZATION OF (QUASI)CONVEX SET-VALUED MAPS Abstract. The aim of this paper is to characterize in terms of classical (quasi)convexity of extended real-valued functions the set-valued maps which are

More information

LECTURE 15: COMPLETENESS AND CONVEXITY

LECTURE 15: COMPLETENESS AND CONVEXITY LECTURE 15: COMPLETENESS AND CONVEXITY 1. The Hopf-Rinow Theorem Recall that a Riemannian manifold (M, g) is called geodesically complete if the maximal defining interval of any geodesic is R. On the other

More information

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

for all u C, where F : X X, X is a Banach space with its dual X and C X

for all u C, where F : X X, X is a Banach space with its dual X and C X ROMAI J., 6, 1(2010), 41 45 PROXIMAL POINT METHODS FOR VARIATIONAL INEQUALITIES INVOLVING REGULAR MAPPINGS Corina L. Chiriac Department of Mathematics, Bioterra University, Bucharest, Romania corinalchiriac@yahoo.com

More information