Applied Mathematics, Article ID 414031, 5 pages
http://dx.doi.org/10.1155/2014/414031

Research Article

On an Iterative Method for Finding a Zero to the Sum of Two Maximal Monotone Operators

Hongwei Jiao (1) and Fenghui Wang (2)

(1) School of Mathematical Science, Henan Institute of Science and Technology, Xinxiang 453003, China
(2) Department of Mathematics, Luoyang Normal University, Luoyang 471022, China

Correspondence should be addressed to Hongwei Jiao; hongweijiao@126.com and Fenghui Wang; wfenghui@163.com

Received 2 March 2014; Revised 18 August 2014; Accepted 19 August 2014; Published 21 August 2014

Academic Editor: Luigi Muglia

Copyright © 2014 H. Jiao and F. Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In this paper we consider the problem of finding a zero of the sum of two monotone operators. One method for solving such a problem is the forward-backward splitting method. We present some new conditions that guarantee the weak convergence of the forward-backward method. Applications of these results, including variational inequalities and gradient projection algorithms, are also considered.

1. Introduction

It is well known that monotone inclusion problems play an important role in the theory of nonlinear analysis. Such a problem consists of finding a zero of a maximal monotone operator. In some settings, however, such as convex programming and variational inequality problems, the operator needs to be decomposed as the sum of two monotone operators (see, e.g., [1-6]). One then needs to find $x \in H$ such that

$$0 \in Ax + Bx, \qquad (1)$$

where $A$ and $B$ are two monotone operators on a Hilbert space $H$. To solve such a problem, splitting methods, such as the Peaceman-Rachford algorithm [7] and the Douglas-Rachford algorithm [8], are usually considered.
We consider the special case in which $B: H \to 2^H$ is multivalued and $A: H \to H$ is single-valued. A classical way to solve problem (1) under this assumption is the forward-backward splitting (FBS) method (see [2, 9]). Starting with an arbitrary initial $x_0 \in H$, the FBS generates a sequence $(x_n)$ satisfying

$$x_{n+1} = (I + rB)^{-1}(I - rA)x_n, \qquad (2)$$

where $r$ is a properly chosen real number. The FBS converges weakly to a solution of problem (1) whenever such a solution exists. On the other hand, we observe that problem (1) is equivalent to the fixed point equation

$$(I + rB)^{-1}(I - rA)x = x \qquad (3)$$

for the single-valued operator $(I + rB)^{-1}(I - rA)$. Moreover, if $r$ is properly chosen, this operator is nonexpansive. Motivated by this observation, we use techniques from the fixed point theory of nonexpansive operators to study various monotone inclusion problems.

The rest of this paper is organized as follows. In Section 2, some useful lemmas are introduced. In Section 3, we consider the modified forward-backward splitting method and prove its weak convergence under some new conditions. In Section 4, some applications of our results to finding a solution of the variational inequality problem are included.

2. Preliminary and Notation

Throughout the paper, $I$ denotes the identity operator, $\mathrm{Fix}(S)$ the set of fixed points of an operator $S$, and $\nabla f$ the gradient of a functional $f: H \to \mathbb{R}$. The notation $\to$ denotes strong convergence and $\rightharpoonup$ weak convergence. Denote by $\omega_w(x_n)$ the set of cluster points of $(x_n)$ in the weak topology (i.e., the set $\{x : x_{n_j} \rightharpoonup x\}$, where $(x_{n_j})$ is a subsequence of $(x_n)$).
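To fix ideas before the preliminaries, here is a minimal numerical sketch of the FBS iteration (2). The concrete choices below are illustrative assumptions, not part of the paper: $Ax = x - b$ (which is $1$-ism), and $B$ the normal cone of the nonnegative orthant, whose resolvent $J_r = (I + rB)^{-1}$ is the coordinatewise projection $\max(\cdot\,, 0)$; the zero of $A + B$ is then $\max(b, 0)$.

```python
import numpy as np

def fbs(b, r=1.0, n_iter=100):
    """Forward-backward splitting x_{n+1} = J_r((I - rA) x_n).

    Toy problem (illustrative): A(x) = x - b is 1-ism (kappa = 1), and
    B is the normal cone of the nonnegative orthant, whose resolvent
    J_r is the coordinatewise projection onto that orthant. Any step
    size 0 < r < 2*kappa = 2 is admissible.
    """
    x = np.zeros_like(b)
    for _ in range(n_iter):
        y = x - r * (x - b)        # forward (explicit) step: (I - rA) x
        x = np.maximum(y, 0.0)     # backward (resolvent) step: J_r y
    return x

# The zero of A + B is the projection of b onto the nonnegative orthant.
print(fbs(np.array([2.0, -1.5, 0.5])))  # close to [2.0, 0.0, 0.5]
```

The same routine with any step size $0 < r < 2$ converges to the same point, in line with the step-size restriction $0 < r < 2\kappa$ discussed below.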
Let $C$ be a nonempty closed convex subset of $H$. Denote by $P_C$ the projection from $H$ onto $C$; namely, for $x \in H$, $P_C x$ is the unique point in $C$ with the property

$$\|x - P_C x\| = \min_{y \in C} \|x - y\|. \qquad (4)$$

It is well known that $P_C x$ is characterized by the inequality

$$\langle x - P_C x, z - P_C x\rangle \le 0, \quad \forall z \in C. \qquad (5)$$

A single-valued operator $S: H \to H$ is called nonexpansive if

$$\|Sx - Sy\| \le \|x - y\| \quad (\forall x, y \in H); \qquad (6)$$

firmly nonexpansive if

$$\langle Sx - Sy, x - y\rangle \ge \|Sx - Sy\|^2 \quad (\forall x, y \in H); \qquad (7)$$

and $\kappa$-averaged if there exist a constant $\kappa \in (0, 1)$ and a nonexpansive operator $R$ such that $S = (1 - \kappa)I + \kappa R$. Firmly nonexpansive operators are $(1/2)$-averaged.

Lemma 1 (see [10]). The following assertions hold.

(i) $S$ is $\kappa$-averaged for $\kappa \in (0, 1)$ if and only if

$$\|Sx - Sy\|^2 \le \|x - y\|^2 - \frac{1 - \kappa}{\kappa}\|(I - S)x - (I - S)y\|^2 \qquad (8)$$

for all $x, y \in \mathrm{dom}\, S$.

(ii) Assume that $S_i$ is $\kappa_i$-averaged for $\kappa_i \in (0, 1)$, $i = 1, 2$. Then $S_1 S_2$ is $\kappa$-averaged with $\kappa = \kappa_1 + \kappa_2 - \kappa_1\kappa_2$.

The following lemma is known as the demiclosedness principle for nonexpansive mappings.

Lemma 2. Let $C$ be a nonempty closed convex subset of $H$ and $S$ a nonexpansive operator with $\mathrm{Fix}(S) \ne \emptyset$. If $(x_n)$ is a sequence in $C$ such that $x_n \rightharpoonup x$ and $(I - S)x_n \to y$, then $(I - S)x = y$. In particular, if $y = 0$, then $x \in \mathrm{Fix}(S)$.

A multivalued operator $B: H \to 2^H$ is called monotone if

$$\langle u - v, x - y\rangle \ge 0 \quad (\forall u \in Bx,\ v \in By); \qquad (9)$$

$\kappa$-inverse strongly monotone ($\kappa$-ism) if there exists a constant $\kappa > 0$ such that

$$\langle u - v, x - y\rangle \ge \kappa\|u - v\|^2 \quad (\forall u \in Bx,\ v \in By); \qquad (10)$$

and maximal monotone if it is monotone and its graph $G(B) = \{(x, y) : y \in Bx\}$ is not properly contained in the graph of any other monotone operator.

In what follows, we shall assume that

(i) $A: H \to H$ is single-valued and $\kappa$-ism;
(ii) $B: H \to 2^H$ is multivalued and maximal monotone.

Hereafter, if no confusion occurs, we denote by

$$J_r := (I + rB)^{-1} \qquad (11)$$

the resolvent of $B$ for any given $r > 0$. It is known that $J_r$ is single-valued and firmly nonexpansive; moreover, $\mathrm{dom}(I + rB) = H$ (see [11]).

Lemma 3 (see [12]).
For $r > 0$, let $T_r = J_r(I - rA)$. Then:

(i) $\mathrm{Fix}(T_r) = (A + B)^{-1}(0)$;

(ii) $T_r$ is $\frac{2\kappa + r}{4\kappa}$-averaged;

(iii) for $0 < r \le s \le 2\kappa$ and $x \in H$, it follows that

$$\|x - T_r x\| \le 2\|x - T_s x\|, \qquad \|T_s x - T_r x\| \le \Big|1 - \frac{r}{s}\Big|\,\|x - T_s x\|. \qquad (12)$$

Definition 4. Assume that $(x_n)$ is a sequence in $H$ and that $(\varepsilon_n)$ is a nonnegative real sequence with $\sum_n \varepsilon_n < \infty$. Then $(x_n)$ is called quasi-Fejér monotone w.r.t. $C$ if

$$\|x_{n+1} - z\| \le \|x_n - z\| + \varepsilon_n \quad (\forall z \in C). \qquad (13)$$

Lemma 5 (see [13]). Let $C$ be a nonempty closed convex subset of $H$. If the sequence $(x_n)$ is quasi-Fejér monotone w.r.t. $C$, then the following hold:

(i) $x_n \rightharpoonup x \in C$ if and only if $\omega_w(x_n) \subseteq C$;
(ii) the sequence $(P_C x_n)$ converges strongly;
(iii) if $x_n \rightharpoonup x \in C$, then $x = \lim_n P_C x_n$.

3. Weak Convergence Theorem

In [10] Combettes considered a modified FBS: for any initial guess $x_0 \in H$, set

$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n J_{r_n}(x_n - r_n Ax_n) + e_n, \qquad (14)$$

where $(\alpha_n) \subset [0, 1]$, $(r_n) \subset [0, 2\kappa]$, and $(e_n) \subset H$ is the computation error. He proved the weak convergence of algorithm (14) provided that

(1) $\liminf_n \alpha_n > 0$;
(2) $0 < \liminf_n r_n \le \limsup_n r_n < 2\kappa$;
(3) $\sum_n \|e_n\| < \infty$.

We observe that (14) is in fact a Mann-type iteration. In what follows we shall prove the convergence of (14) under slightly weaker conditions.

Theorem 6. Suppose that the following conditions are satisfied:

(C1) $\liminf_n \alpha_n > 0$;
(C2) $0 < \liminf_n r_n$ and $r_n \le 2\kappa$;
(C4) $\liminf_n \big(\frac{4\kappa}{2\kappa + r_n} - \alpha_n\big) > 0$;
(C5) $\sum_n \|e_n\| < \infty$.
If in addition $\Omega := (A + B)^{-1}(0) \ne \emptyset$, then the sequence $(x_n)$ generated by (14) converges weakly to $\bar{x} := \lim_n P_\Omega x_n$.

Proof. We first show that $(x_n)$ is quasi-Fejér monotone. Let $T_n = J_{r_n}(I - r_n A)$. It then follows from Lemma 3 that $T_n$ is $\beta_n$-averaged with $\beta_n = (2\kappa + r_n)/(4\kappa)$ and $\Omega = \mathrm{Fix}(T_n)$. Letting $z \in \Omega$, we have

$$
\begin{aligned}
\|x_{n+1} - z\| &= \|(1 - \alpha_n)(x_n - z) + \alpha_n(T_n x_n - z) + e_n\| \\
&\le (1 - \alpha_n)\|x_n - z\| + \alpha_n\|x_n - z\| + \|e_n\| \\
&= \|x_n - z\| + \|e_n\|.
\end{aligned} \qquad (15)
$$

By condition (C5), we conclude that $(x_n)$ is quasi-Fejér monotone w.r.t. $\Omega$.

Next let us show $\omega_w(x_n) \subseteq \Omega$. To see this, choose $M > 0$ so that

$$2\sup_{n \ge 0}\|x_n - z\| \le M. \qquad (16)$$

Let $R_n = (1 - \alpha_n)I + \alpha_n T_n$. Obviously $R_n$ is $\alpha_n\beta_n$-averaged. According to Lemma 1, we deduce that

$$
\begin{aligned}
\|x_{n+1} - z\|^2 &= \|((1 - \alpha_n)I + \alpha_n T_n)x_n - z + e_n\|^2 \\
&\le \|R_n x_n - z\|^2 + 2\langle x_{n+1} - z, e_n\rangle \\
&\le \|R_n x_n - z\|^2 + M\|e_n\| \\
&\le \|x_n - z\|^2 - \frac{1 - \alpha_n\beta_n}{\alpha_n\beta_n}\|R_n x_n - x_n\|^2 + M\|e_n\| \\
&= \|x_n - z\|^2 - \alpha_n\Big(\frac{1}{\beta_n} - \alpha_n\Big)\|T_n x_n - x_n\|^2 + M\|e_n\|,
\end{aligned} \qquad (17)
$$

which in turn implies that

$$\sum_{i=0}^{n} \alpha_i\Big(\frac{1}{\beta_i} - \alpha_i\Big)\|T_i x_i - x_i\|^2 \le \|x_0 - z\|^2 + M\sum_{i=0}^{n}\|e_i\| \qquad (18)$$

for all $n \in \mathbb{N}$. Letting $n \to \infty$ yields

$$\sum_{n=0}^{\infty} \alpha_n\Big(\frac{1}{\beta_n} - \alpha_n\Big)\|T_n x_n - x_n\|^2 < \infty. \qquad (19)$$

Since $1/\beta_n = 4\kappa/(2\kappa + r_n)$, conditions (C1) and (C4) give

$$\liminf_n \alpha_n\Big(\frac{1}{\beta_n} - \alpha_n\Big) > 0, \qquad (20)$$

which yields $\|T_n x_n - x_n\| \to 0$. By condition (C2), we find $n_0 \in \mathbb{N}$ and $r \in (0, 2\kappa]$ so that $r_n \ge r$ for all $n \ge n_0$. Let $T_r = J_r(I - rA)$. It then follows from Lemma 3 that

$$\|T_r x_n - x_n\| \le 2\|T_n x_n - x_n\| \qquad (21)$$

for all $n \ge n_0$. Letting $n \to \infty$, we have $\|T_r x_n - x_n\| \to 0$. Take $x \in \omega_w(x_n)$ and a subsequence $(x_{n_k})$ of $(x_n)$ such that $x_{n_k} \rightharpoonup x$. Since $T_r$ is nonexpansive, applying Lemmas 2 and 3 yields $x \in \mathrm{Fix}(T_r) = \Omega$, and thus $\omega_w(x_n) \subseteq \Omega$. By Lemma 5 the proof is complete.

We can also present another set of conditions for the weak convergence of (14).

Theorem 7. Suppose that the following conditions are satisfied:

(C1) $\sum_n |r_{n+1} - r_n| < \infty$;
(C2) $0 < \lim_n r_n \le 2\kappa$;
(C4) $\sum_n \alpha_n\big(\frac{4\kappa}{2\kappa + r_n} - \alpha_n\big) = \infty$;
(C5) $\sum_n \|e_n\| < \infty$.

If in addition $\Omega := (A + B)^{-1}(0) \ne \emptyset$, then the sequence $(x_n)$ generated by (14) converges weakly to $\bar{x} := \lim_n P_\Omega x_n$.

Proof. Compared with the proof of Theorem 6, it suffices to show that $T_n x_n - x_n \to 0$ as $n \to \infty$.
Observe that the estimate

$$\sum_{n=0}^{\infty} \alpha_n\Big(\frac{4\kappa}{2\kappa + r_n} - \alpha_n\Big)\|T_n x_n - x_n\|^2 < \infty \qquad (22)$$

still holds. Let $a_n := \|x_n - T_n x_n\|$. Then condition (C4) implies $\liminf_n a_n = 0$. According to Lemma 3, we have

$$
\begin{aligned}
a_{n+1} &= \|(1 - \alpha_n)(x_n - T_n x_n) + (T_n x_n - T_{n+1} x_{n+1}) + e_n\| \\
&\le (1 - \alpha_n)\|x_n - T_n x_n\| + \|T_n x_n - T_{n+1} x_{n+1}\| + \|e_n\| \\
&\le (1 - \alpha_n)\|x_n - T_n x_n\| + \|T_{n+1} x_n - T_{n+1} x_{n+1}\| + \|T_n x_n - T_{n+1} x_n\| + \|e_n\| \\
&\le (1 - \alpha_n)\|x_n - T_n x_n\| + \|x_n - x_{n+1}\| + \|T_n x_n - T_{n+1} x_n\| + \|e_n\| \\
&\le (1 - \alpha_n)\|x_n - T_n x_n\| + \|\alpha_n(x_n - T_n x_n) - e_n\| + \Big|1 - \frac{r_{n+1}}{r_n}\Big|\,\|x_n - T_n x_n\| + \|e_n\| \\
&\le (1 - \alpha_n)a_n + \alpha_n a_n + \Big|1 - \frac{r_{n+1}}{r_n}\Big|\,a_n + 2\|e_n\| \\
&\le a_n + M(|r_n - r_{n+1}| + \|e_n\|),
\end{aligned} \qquad (23)
$$

where $M > 0$ is a properly chosen constant (note that $a_n/r_n$ is bounded, by condition (C2) and the quasi-Fejér monotonicity of $(x_n)$). Then, for any given $p \in \mathbb{N}$, we arrive at

$$a_{n+p} \le a_n + M\sum_{i=n}^{\infty}(|r_i - r_{i+1}| + \|e_i\|). \qquad (24)$$
Conditions (C1) and (C5) imply that $\lim_n a_n$ exists; since $\liminf_n a_n = 0$, it follows that $a_n \to 0$, that is, $x_n - T_n x_n \to 0$ as $n \to \infty$. Hence the proof is complete.

Applying Theorem 7, one can easily get the following.

Corollary 8. Suppose that the following conditions are satisfied:

(1) $0 < r < 2\kappa$;
(2) $0 \le \alpha_n \le \frac{4\kappa}{2\kappa + r}$;
(3) $\sum_n \alpha_n\big(\frac{4\kappa}{2\kappa + r} - \alpha_n\big) = \infty$;
(4) $\Omega := (A + B)^{-1}(0) \ne \emptyset$.

Then the sequence $(x_n)$ generated by

$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n J_r(x_n - rAx_n) \qquad (25)$$

converges weakly to some point in $\Omega$.

Remark 9. Corollary 8 shows that our conditions are slightly weaker than those of Combettes whenever the sequence $(r_n)$ approaches a constant.

4. Application

Let $C$ be a nonempty closed convex subset of $H$. The variational inequality problem (VIP) is formulated as the problem of finding a point $x \in C$ with the property

$$\langle Ax, z - x\rangle \ge 0, \quad \forall z \in C, \qquad (26)$$

where $A: H \to H$ is a nonlinear operator. We shall denote by $\Omega$ the solution set of VIP (26). One method for solving the VIP is the projection algorithm, which generates, starting with an arbitrary initial $x_0 \in H$, a sequence $(x_n)$ satisfying

$$x_{n+1} = P_C(x_n - rAx_n), \qquad (27)$$

where $r$ is a properly chosen real number. If, in addition, $A$ is $\kappa$-ism, then iteration (27) with $0 < r < 2\kappa$ converges weakly to a point in $\Omega$, whenever such a point exists.

Let $B$ be the normal cone operator for $C$; that is, $Bx := \{w \in H : \langle x - z, w\rangle \ge 0,\ \forall z \in C\}$. By [14, Theorem 3], VIP (26) is equivalent to finding a zero of the maximal monotone operator $A + B$. Recalling that $P_C = J_r$ for any $r > 0$, we can thus apply the previous results to get the following.

Corollary 10. Suppose that the following conditions are satisfied:

(C1) $\liminf_n \alpha_n > 0$;
(C2) $0 < \liminf_n r_n$ and $r_n \le 2\kappa$;
(C4) $\liminf_n \big(\frac{4\kappa}{2\kappa + r_n} - \alpha_n\big) > 0$.

If in addition $\Omega \ne \emptyset$, then the sequence $(x_n)$ generated by

$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C(x_n - r_n Ax_n) \qquad (28)$$

converges weakly to a point in $\Omega$.

Corollary 11. Suppose that the following conditions are satisfied:

(C1) $\sum_n |r_{n+1} - r_n| < \infty$;
(C2) $0 < \lim_n r_n \le 2\kappa$;
(C4) $\sum_n \alpha_n\big(\frac{4\kappa}{2\kappa + r_n} - \alpha_n\big) = \infty$.

If in addition $\Omega \ne \emptyset$, then the sequence $(x_n)$ generated by

$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C(x_n - r_n Ax_n) \qquad (29)$$

converges weakly to a point in $\Omega$.

Consider the optimization problem of finding a point $x$ with the property

$$x \in \arg\min_{y \in C} f(y), \qquad (30)$$

where $f: H \to \mathbb{R}$ is a convex and differentiable function.
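As a hedged numerical sketch of the projection iteration (27), consider the following toy instance; the operator, set, and data are illustrative assumptions, not taken from the paper. Let $Ax = x - b$, which is $1$-ism, and let $C$ be the closed unit ball, for which $P_C x = x/\max(1, \|x\|)$. The unique solution of VIP (26) is then $P_C b$.

```python
import numpy as np

def proj_unit_ball(x):
    """P_C for C the closed unit ball: rescale x if it lies outside."""
    return x / max(1.0, np.linalg.norm(x))

def vip_projection(b, r=1.0, n_iter=200):
    """Projection algorithm x_{n+1} = P_C(x_n - r*A(x_n)) with A(x) = x - b.

    A is 1-ism (kappa = 1), so any step size 0 < r < 2 is admissible.
    """
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = proj_unit_ball(x - r * (x - b))
    return x

# For b outside the ball, the VIP solution is the boundary point b/||b||.
print(vip_projection(np.array([3.0, 4.0])))  # close to [0.6, 0.8]
```

The same sketch covers the gradient projection algorithm of this section by taking $Ax = \nabla f(x)$ for the objective $f(x) = \|x - b\|^2/2$.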
The gradient projection algorithm (GPA) generates a sequence $(x_n)$ by the iterative procedure

$$x_{n+1} = P_C(x_n - r\nabla f(x_n)), \qquad (31)$$

where $x_0 \in H$ and $r$ is a positive parameter. If, in addition, $\nabla f$ is $(1/\kappa)$-Lipschitz continuous, that is, for any $x, y \in H$,

$$\|\nabla f(x) - \nabla f(y)\| \le \frac{1}{\kappa}\|x - y\|, \qquad (32)$$

then the GPA with $0 < r < 2\kappa$ converges weakly to a minimizer of $f$ over $C$, if such minimizers exist (see, e.g., [15, Corollary 4.1]). Denote by $\Omega$ the solution set of the variational inequality

$$\langle \nabla f(x), z - x\rangle \ge 0, \quad \forall z \in C. \qquad (33)$$

According to [16, Lemma 5.13], we have $\Omega = \arg\min_{x \in C} f(x)$. Further, if $\nabla f$ is $(1/\kappa)$-Lipschitz continuous, then it is also $\kappa$-ism (see [17, Corollary 10]). Thus, we can apply the previous results by letting $A = \nabla f$.

Corollary 12. Assume that $f: H \to \mathbb{R}$ is convex and differentiable with $(1/\kappa)$-Lipschitz-continuous gradient $\nabla f$ and that

(C1) $\liminf_n \alpha_n > 0$;
(C2) $0 < \liminf_n r_n$ and $r_n \le 2\kappa$;
(C4) $\liminf_n \big(\frac{4\kappa}{2\kappa + r_n} - \alpha_n\big) > 0$.

If in addition $\Omega \ne \emptyset$, then the sequence $(x_n)$ generated by

$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C(x_n - r_n \nabla f(x_n)) \qquad (34)$$

converges weakly to a minimizer of $f$ over $C$.

Corollary 13. Assume that $f: H \to \mathbb{R}$ is convex and differentiable with $(1/\kappa)$-Lipschitz-continuous gradient $\nabla f$ and that
(C1) $\sum_n |r_{n+1} - r_n| < \infty$;
(C2) $0 < \lim_n r_n \le 2\kappa$;
(C4) $\sum_n \alpha_n\big(\frac{4\kappa}{2\kappa + r_n} - \alpha_n\big) = \infty$.

If in addition $\Omega \ne \emptyset$, then the sequence $(x_n)$ generated by

$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C(x_n - r_n \nabla f(x_n)) \qquad (35)$$

converges weakly to a minimizer of $f$ over $C$.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to express their sincere thanks to the anonymous referees and editors for their careful review of the paper and their valuable comments, which have greatly improved the earlier version of this paper. This work is supported by the National Natural Science Foundation of China (Grant nos. 11301253 and 11271112), the Basic and Frontier Project of Henan (no. 122300410268), and the Science and Technology Key Project of the Education Department of Henan Province (14A110024).

References

[1] J. Eckstein and D. P. Bertsekas, "On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators," Mathematical Programming, vol. 55, no. 3, pp. 293-318, 1992.
[2] P. L. Lions and B. Mercier, "Splitting algorithms for the sum of two nonlinear operators," SIAM Journal on Numerical Analysis, vol. 16, no. 6, pp. 964-979, 1979.
[3] T. Pennanen, "A splitting method for composite mappings," Numerical Functional Analysis and Optimization, vol. 23, no. 7-8, pp. 875-890, 2002.
[4] J. E. Spingarn, "Applications of the method of partial inverses to convex programming: decomposition," Mathematical Programming, vol. 32, no. 2, pp. 199-223, 1985.
[5] P. Tseng, "Further applications of a splitting algorithm to decomposition in variational inequalities and convex programming," Mathematical Programming, vol. 48, pp. 249-263, 1990.
[6] P. Tseng, "Applications of a splitting algorithm to decomposition in convex programming and variational inequalities," SIAM Journal on Control and Optimization, vol. 29, no. 1, pp. 119-138, 1991.
[7] D. W. Peaceman and H. H. Rachford Jr., "The numerical solution of parabolic and elliptic differential equations," Journal of the Society for Industrial and Applied Mathematics, vol. 3, no. 1, pp. 28-41, 1955.
[8] J. Douglas and H. H. Rachford, "On the numerical solution of heat conduction problems in two or three space variables," Transactions of the American Mathematical Society, vol. 82, pp. 421-439, 1956.
[9] G. B. Passty, "Ergodic convergence to a zero of the sum of monotone operators," Journal of Mathematical Analysis and Applications, vol. 72, no. 2, pp. 383-390, 1979.
[10] P. L. Combettes, "Solving monotone inclusions via compositions of nonexpansive averaged operators," Optimization, vol. 53, no. 5-6, pp. 475-504, 2004.
[11] R. T. Rockafellar, "Monotone operators and the proximal point algorithm," SIAM Journal on Control and Optimization, vol. 14, no. 5, pp. 877-898, 1976.
[12] G. López, V. Martín-Márquez, F. Wang, and H.-K. Xu, "Forward-backward splitting methods for accretive operators in Banach spaces," Abstract and Applied Analysis, vol. 2012, Article ID 109236, 25 pages, 2012.
[13] P. L. Combettes, "Quasi-Fejérian analysis of some optimization algorithms," in Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, D. Butnariu, Y. Censor, and S. Reich, Eds., Elsevier, Amsterdam, The Netherlands, 2001.
[14] R. T. Rockafellar, "On the maximality of sums of nonlinear monotone operators," Transactions of the American Mathematical Society, vol. 149, pp. 75-88, 1970.
[15] C. Byrne, "A unified treatment of some iterative algorithms in signal processing and image reconstruction," Inverse Problems, vol. 20, no. 1, pp. 103-120, 2004.
[16] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1996.
[17] J. B. Baillon and G. Haddad, "Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones," Israel Journal of Mathematics, vol. 26, no. 2, pp. 137-150, 1977.