A General Iterative Method for Constrained Convex Minimization Problems in Hilbert Spaces


MING TIAN, Civil Aviation University of China, College of Science, Tianjin 300300, CHINA, tianming963@126.com
MINMIN LI, Civil Aviation University of China, College of Science, Tianjin 300300, CHINA, limi986@yeah.net

Abstract: It is well known that the gradient-projection algorithm plays an important role in solving constrained convex minimization problems. In this paper, based on Xu's method [Xu, H. K.: Averaged mappings and the gradient-projection algorithm, J. Optim. Theory Appl. 150, 360-378 (2012)], we use the idea of regularization to establish implicit and explicit iterative methods for finding the approximate minimizer of a constrained convex minimization problem, and we prove that the sequences generated by our methods converge strongly to a solution of the constrained convex minimization problem. Such a solution is also a solution of a variational inequality defined over the set of solutions of the constrained convex minimization problem. As an application, we apply our main result to solve the split feasibility problem in Hilbert spaces.

Key Words: Variational inequality; Regularization algorithm; Constrained convex minimization; Fixed point; Averaged mapping; Nonexpansive mappings.

1 Introduction

Strong convergence means convergence in norm. In a finite-dimensional space, strong convergence is equivalent to weak convergence; in an infinite-dimensional space, strong convergence implies weak convergence, but weak convergence is not necessarily strong. We therefore seek iterative algorithms that converge in norm to a solution of constrained convex minimization problems. Here is an example of a sequence that is weakly but not strongly convergent. Let
ℓ² = {x = {x_k}_{k=1}^∞ : Σ_{k=1}^∞ x_k² < ∞},
and let e_i = (0, ..., 0, 1, 0, ...) ∈ ℓ² be the sequence whose i-th entry is 1 and whose other entries are 0. For every v ∈ ℓ² we have ⟨e_i, v⟩ = v_i → 0 as i → ∞, which implies that e_i ⇀ 0. However, ‖e_i‖ = 1 for every i = 1, 2, 3, ..., so {e_i} is not strongly convergent to 0.

Consider the following constrained convex minimization problem:
min_{x∈C} f(x), (1)
where C is a nonempty closed convex subset of a real Hilbert space H and f : C → ℝ is a real-valued convex and continuously Fréchet differentiable function. Assume that the minimization problem (1) is consistent and let S denote its solution set.

It is well known that the gradient-projection algorithm is very useful for dealing with constrained convex minimization problems and has been studied extensively (see [1-8] and the references therein). It has recently been applied to solve split feasibility problems (see [9-14]), which find applications in image reconstruction and intensity-modulated radiation therapy (see [12, 13, 15, 16, 17]). The gradient-projection algorithm (GPA) generates a sequence {x_n} by the recursive formula
x_{n+1} := Proj_C(x_n − γ∇f(x_n)), n ≥ 0, (2)
or, more generally,
x_{n+1} := Proj_C(x_n − γ_n∇f(x_n)), n ≥ 0, (3)
where, in both (2) and (3), the initial guess x_0 is taken from C arbitrarily and the parameters γ or γ_n are positive real numbers satisfying appropriate conditions. The convergence of algorithms (2) and (3) depends on the behavior of the gradient ∇f; see Levitin and Polyak [1]. As a matter of fact, it is known [1] that if ∇f is Lipschitz continuous and strongly monotone, then the sequences {x_n} generated by (2) and (3) converge strongly to a minimizer of f over C. If the gradient of f fails to be strongly monotone, then {x_n} may converge only weakly when H is infinite-dimensional.
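For concreteness, the following minimal sketch runs the recursion (2) on a small instance of our own choosing; the quadratic objective, the box constraint C, and all dimensions are illustrative assumptions rather than data from the paper.

```python
import numpy as np

# Illustrative instance (our own choice, not the paper's):
# minimize f(x) = 0.5*||Ax - b||^2 over the box C = [0, 1]^3.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)

def grad_f(x):
    return A.T @ (A @ x - b)            # gradient of the smooth objective

def proj_C(x):
    return np.clip(x, 0.0, 1.0)         # metric projection onto the box C

L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of grad f
gamma = 1.0 / L                         # any fixed gamma in (0, 2/L)

x = np.zeros(3)
for n in range(1000):
    # GPA step (2): x_{n+1} = Proj_C(x_n - gamma * grad f(x_n))
    x = proj_C(x - gamma * grad_f(x))
print(x)
```

In ℝ³ the iterates of course converge in norm; the weak-versus-strong distinction addressed by this paper only becomes visible in infinite-dimensional spaces.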
Regularization, in particular the traditional Tikhonov regularization, is usually used to solve ill-posed optimization problems.

Next consider the regularized minimization problem
min_{x∈C} f_α(x) := f(x) + (α/2)‖x‖², (4)
where α > 0 is the regularization parameter, f is convex, and ∇f is (1/L)-inverse strongly monotone. Since the gradient ∇f_α = ∇f + αI is now α-strongly monotone and (L + α)-Lipschitzian, problem (4) has a unique solution, which is denoted by x_α ∈ C.

Assume that the minimization problem (1) is consistent. If we select the regularization parameter α and the parameter γ of the gradient-projection step appropriately, we obtain a single iterative algorithm that generates a sequence {x_n} in the following manner:
x_{n+1} = Proj_C(I − γ∇f_{α_n})x_n. (5)
It is proved that if the sequence {α_n} and the parameter γ satisfy appropriate conditions, the sequence {x_n} generated by (5) converges weakly to a minimizer of (1) [18].

One question then arises naturally: is it possible to obtain strong convergence of (5) by making suitable changes?

In 2001, Yamada [19] introduced the following hybrid iterative algorithm for solving a variational inequality:
x_{n+1} = Tx_n − μα_nF(Tx_n), n ≥ 0, (6)
where F is a κ-Lipschitzian and η-strongly monotone operator with κ > 0, η > 0. He proved that if {α_n} satisfies appropriate conditions, the sequence {x_n} generated by (6) converges strongly to the unique solution of the variational inequality
⟨Fx̃, x − x̃⟩ ≥ 0, x ∈ Fix(T). (7)

Recently, Xu [20] provided a modification of the GPA for which strong convergence is guaranteed. He considered the following hybrid gradient-projection algorithm:
x_{n+1} = θ_nh(x_n) + (1 − θ_n)Proj_C(x_n − λ_n∇f(x_n)). (8)
Assuming that the minimization problem (1) is consistent, he proved that if the sequences {θ_n} and {λ_n} satisfy appropriate conditions, the sequence {x_n} generated by (8) converges in norm to a minimizer x* of (1) which solves the variational inequality
x* ∈ S, ⟨(I − h)x*, x − x*⟩ ≥ 0, x ∈ S. (9)

In this article, motivated and inspired by the research work of [20], we combine the iterative method (6) with the iterative method (5) and consider the following hybrid algorithm built on the regularization approach:
y_n = (I − μθ_nF)Proj_C(I − γ∇f_{α_n})x_n,
x_{n+1} = Proj_C y_n, n ≥ 0. (10)
Assuming that the minimization problem (1) is consistent, we prove that if the parameter sequences {θ_n} and {α_n} satisfy appropriate conditions, then the sequence {x_n} generated by (10) converges in norm to a minimizer of (1) which solves the variational inequality
(VI) x̃ ∈ S, ⟨Fx̃, x − x̃⟩ ≥ 0, x ∈ S,
where S is the solution set of the minimization problem (1). Finally, we apply this algorithm to the split feasibility problem in Sect. 4 and give a numerical result in Sect. 5.
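A minimal sketch of the weakly convergent regularized scheme (5), again on an illustrative instance of our own (the data and the choice α_n = 1/(n+1) are assumptions for demonstration); note that γ is chosen below 2/(L + α_n) for every n.

```python
import numpy as np

# Same kind of illustrative instance as before (not the paper's data).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
proj_C = lambda x: np.clip(x, 0.0, 1.0)

L = np.linalg.norm(A, 2) ** 2
gamma = 1.0 / (L + 1.0)                 # gamma < 2/(L + alpha_n) for all n

x = np.zeros(3)
for n in range(2000):
    alpha_n = 1.0 / (n + 1)             # regularization parameter, alpha_n -> 0
    # grad f_{alpha_n}(x) = grad f(x) + alpha_n * x
    g = A.T @ (A @ x - b) + alpha_n * x
    x = proj_C(x - gamma * g)           # scheme (5)
print(x)
```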

2 Preliminaries

Throughout this paper, we assume that H is a real Hilbert space whose inner product and norm are denoted by ⟨·,·⟩ and ‖·‖, respectively, and that C is a nonempty closed convex subset of H. We write x_n ⇀ x to indicate that the sequence {x_n} converges weakly to x, and x_n → x to indicate strong convergence. ω_w(x_n) := {x : x_{n_j} ⇀ x for some subsequence} is the weak ω-limit set of the sequence {x_n}.

Definition 1. A mapping T : H → H is said to be
(a) nonexpansive if and only if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ H;
(b) firmly nonexpansive if and only if 2T − I is nonexpansive, or equivalently
⟨x − y, Tx − Ty⟩ ≥ ‖Tx − Ty‖², x, y ∈ H.
Alternatively, T is firmly nonexpansive if and only if T can be expressed as T = (1/2)(I + W), where W : H → H is nonexpansive. Projections are firmly nonexpansive;
(c) an averaged mapping if and only if it can be written as the average of the identity I and a nonexpansive mapping, that is,
T = (1 − ε)I + εW,
where ε is a number in (0, 1) and W : H → H is nonexpansive. More precisely, when this expression holds we say that T is ε-averaged. Thus firmly nonexpansive mappings (in particular, projections) are (1/2)-averaged maps.

Proposition 2 ([15, 22]). Let T : H → H be given. We have:
(i) if T is ν-ism, then for γ > 0, γT is (ν/γ)-ism;
(ii) T is averaged if and only if the complement I − T is ν-ism for some ν > 1/2; indeed, for ε ∈ (0, 1), T is ε-averaged if and only if I − T is (1/(2ε))-ism;
(iii) the composite of finitely many averaged mappings is averaged; that is, if each of the mappings {T_i}_{i=1}^N is averaged, then so is the composite T_1 ∘ ··· ∘ T_N (see [22]). In particular, an averaged mapping is a nonexpansive mapping.

Definition 3 (see [23] for a comprehensive theory of monotone operators). Let A : H → H.
(i) A is monotone if and only if ⟨x − y, Ax − Ay⟩ ≥ 0 for all x, y ∈ H.
(ii) Given a number ν > 0, A is said to be ν-inverse strongly monotone (ν-ism) if and only if
⟨x − y, Ax − Ay⟩ ≥ ν‖Ax − Ay‖², x, y ∈ H.
(iii) Given a number ζ > 0, A is said to be ζ-strongly monotone if and only if
⟨x − y, Ax − Ay⟩ ≥ ζ‖x − y‖², x, y ∈ H.
It is easily seen that if T is nonexpansive, then I − T is monotone. It is also easily seen that a projection is 1-ism. Inverse strongly monotone operators have been widely applied to solve practical problems in various fields, for instance, in traffic assignment problems (see [24, 25]).

Definition 4. Let the operators S, T, V : H → H be given.
(i) If T = (1 − α)S + αV for some α ∈ (0, 1) and if S is averaged and V is nonexpansive, then T is averaged.
(ii) T is firmly nonexpansive if and only if the complement I − T is firmly nonexpansive.
(iii) If T = (1 − α)S + αV for some α ∈ (0, 1), S is firmly nonexpansive and V is nonexpansive, then T is averaged.
(iv) T is nonexpansive if and only if the complement I − T is (1/2)-ism.

Lemma 5 ([26]). Assume that {a_n} is a sequence of non-negative real numbers such that
a_{n+1} ≤ (1 − γ_n)a_n + γ_nδ_n + β_n, n ≥ 0,
where {γ_n} and {β_n} are sequences in (0, 1) and {δ_n} is a sequence in ℝ such that
(i) Σ_n γ_n = ∞;
(ii) either lim sup_{n→∞} δ_n ≤ 0 or Σ_n |γ_nδ_n| < ∞;
(iii) Σ_n β_n < ∞.
Then lim_{n→∞} a_n = 0.

Lemma 6 ([27]). Let C be a closed convex subset of a Hilbert space H and let T : C → C be a nonexpansive mapping with Fix T ≠ ∅. If {x_n} is a sequence in C weakly converging to x and if {(I − T)x_n} converges strongly to y, then (I − T)x = y.

Lemma 7. Let C be a closed convex subset of a real Hilbert space H, and let x ∈ H and y ∈ C be given. Then y = Proj_C x if and only if there holds the inequality
⟨x − y, y − z⟩ ≥ 0, z ∈ C.
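The inequalities in Definition 1(b) and Definition 3(ii) are easy to sanity-check numerically. The sketch below (the box set and the sample distribution are our own choices) verifies on random pairs that a metric projection is firmly nonexpansive, i.e. 1-ism.

```python
import numpy as np

# Check <x - y, Px - Py> >= ||Px - Py||^2 for the projection P onto [0,1]^3.
rng = np.random.default_rng(1)
proj_C = lambda x: np.clip(x, 0.0, 1.0)

for _ in range(10000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    px, py = proj_C(x), proj_C(y)
    lhs = (x - y) @ (px - py)           # <x - y, Px - Py>
    rhs = (px - py) @ (px - py)         # ||Px - Py||^2
    assert lhs >= rhs - 1e-12           # firm nonexpansiveness (1-ism)
print("firm nonexpansiveness verified on random samples")
```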

3 Main results

Assume that the minimization problem (1) is consistent and let S denote its solution set. Assume that the gradient ∇f is (1/L)-inverse strongly monotone ((1/L)-ism; see [23]) with a constant L > 0. Throughout the rest of this paper, we always let H be a real Hilbert space and let C be a nonempty closed convex subset of H. Since S is a closed convex subset, the nearest point projection from H onto S is well defined.

Let F : C → H be a κ-Lipschitzian and η-strongly monotone operator with κ, η > 0. Let 0 < μ < 2η/κ² and τ = μ(η − μκ²/2). Assume that α_t is continuous with respect to t and that lim_{t→0} α_t/t = 0; in particular, there is a constant B > 0 such that α_t/t < B.

For t ∈ (0, 1), we consider a mapping X_t on C defined by
X_t(x) = Proj_C(I − tμF)V_{α_t}(x), x ∈ C, (11)
where V_{α_t} := Proj_C(I − γ∇f_{α_t}). It is obvious that V_{α_t} is a nonexpansive mapping, and it is easy to see that X_t is a contraction. Indeed, for t ∈ (0, 1),
‖(I − tμF)V_{α_t}(x) − (I − tμF)V_{α_t}(y)‖²
= ‖V_{α_t}(x) − V_{α_t}(y)‖² − 2tμ⟨V_{α_t}(x) − V_{α_t}(y), FV_{α_t}(x) − FV_{α_t}(y)⟩ + t²μ²‖FV_{α_t}(x) − FV_{α_t}(y)‖²
≤ (1 − 2tμη + t²μ²κ²)‖V_{α_t}(x) − V_{α_t}(y)‖²
= (1 − tμ(2η − tμκ²))‖V_{α_t}(x) − V_{α_t}(y)‖².
Since √(1 − s) ≤ 1 − s/2 and t ∈ (0, 1), it follows that
‖(I − tμF)V_{α_t}(x) − (I − tμF)V_{α_t}(y)‖ ≤ (1 − tμ(η − tμκ²/2))‖V_{α_t}(x) − V_{α_t}(y)‖
≤ (1 − tμ(η − μκ²/2))‖V_{α_t}(x) − V_{α_t}(y)‖
≤ (1 − tτ)‖x − y‖.
Consequently,
‖X_t(x) − X_t(y)‖ = ‖Proj_C(I − tμF)V_{α_t}(x) − Proj_C(I − tμF)V_{α_t}(y)‖ ≤ ‖(I − tμF)V_{α_t}(x) − (I − tμF)V_{α_t}(y)‖ ≤ (1 − tτ)‖x − y‖.
Hence X_t has a unique fixed point, denoted by x_t, which uniquely solves the fixed point equation
x_t = Proj_C(I − tμF)V_{α_t}(x_t). (12)

The next proposition summarizes the properties of {x_t}.

Proposition 8. Let x_t be defined by (12). Then
(i) {x_t} is bounded for t ∈ (0, 1/τ);
(ii) lim_{t→0} ‖x_t − Proj_C(I − γ∇f_{α_t})(x_t)‖ = 0;
(iii) x_t defines a continuous curve from (0, 1/τ) into H.

Proof: (i) Let p ∈ S, so that p = Proj_C(I − γ∇f)p. Note first that
‖Proj_C(I − γ∇f_{α_t})(x_t) − Proj_C(I − γ∇f)p‖
≤ ‖Proj_C(I − γ∇f_{α_t})(x_t) − Proj_C(I − γ∇f_{α_t})p‖ + ‖Proj_C(I − γ∇f_{α_t})p − Proj_C(I − γ∇f)p‖
≤ ‖x_t − p‖ + γα_t‖p‖.
Then
‖x_t − p‖ = ‖Proj_C(I − tμF)V_{α_t}(x_t) − p‖
≤ ‖(I − tμF)Proj_C(I − γ∇f_{α_t})(x_t) − (I − tμF)Proj_C(I − γ∇f)p − tμF Proj_C(I − γ∇f)p‖
≤ (1 − tτ)‖Proj_C(I − γ∇f_{α_t})(x_t) − Proj_C(I − γ∇f)p‖ + tμ‖F(p)‖
≤ (1 − tτ)(‖x_t − p‖ + γα_t‖p‖) + tμ‖F(p)‖
≤ (1 − tτ)‖x_t − p‖ + t(μ‖F(p)‖ + γ(α_t/t)‖p‖)
≤ (1 − tτ)‖x_t − p‖ + t(μ‖F(p)‖ + γB‖p‖).
It follows that
‖x_t − p‖ ≤ (μ‖F(p)‖ + γB‖p‖)/τ.
Hence {x_t} is bounded.

(ii) Since V_{α_t}(x_t) ∈ C, we have Proj_C V_{α_t}(x_t) = V_{α_t}(x_t), and therefore
‖x_t − Proj_C(I − γ∇f_{α_t})(x_t)‖ = ‖Proj_C(I − tμF)V_{α_t}(x_t) − Proj_C V_{α_t}(x_t)‖
≤ ‖(I − tμF)V_{α_t}(x_t) − V_{α_t}(x_t)‖ = tμ‖FV_{α_t}(x_t)‖ → 0 as t → 0,
because {x_t} is bounded and hence so is {FV_{α_t}(x_t)}.

(iii) For t, t_0 ∈ (0, 1/τ), we have
‖x_t − x_{t_0}‖ = ‖Proj_C(I − tμF)V_{α_t}(x_t) − Proj_C(I − t_0μF)V_{α_{t_0}}(x_{t_0})‖
≤ ‖(I − t_0μF)V_{α_{t_0}}(x_t) − (I − t_0μF)V_{α_{t_0}}(x_{t_0})‖ + ‖(I − tμF)V_{α_t}(x_t) − (I − t_0μF)V_{α_{t_0}}(x_t)‖
≤ (1 − t_0τ)‖x_t − x_{t_0}‖ + ‖V_{α_t}(x_t) − V_{α_{t_0}}(x_t)‖ + μ|t − t_0|‖FV_{α_{t_0}}(x_t)‖
≤ (1 − t_0τ)‖x_t − x_{t_0}‖ + γ|α_t − α_{t_0}|‖x_t‖ + μ|t − t_0|‖FV_{α_{t_0}}(x_t)‖.
Therefore
‖x_t − x_{t_0}‖ ≤ (μ‖FV_{α_{t_0}}(x_t)‖/(t_0τ))|t − t_0| + (γ‖x_t‖/(t_0τ))|α_t − α_{t_0}|,
so x_t → x_{t_0} as t → t_0. This means that x_t defines a continuous curve. □
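Since X_t is a (1 − tτ)-contraction, the implicit iterate x_t of (12) can be computed by Picard iteration. The sketch below does this on our illustrative instance, taking F = I (so κ = η = 1 and μ = 1 < 2η/κ²) and α_t = t², which satisfies α_t = o(t); all of these concrete choices are assumptions for demonstration, not the paper's data. The closing loop traces x_t for decreasing t, illustrating the convergence asserted in Theorem 9 below.

```python
import numpy as np

# Illustrative instance (our own choice, not the paper's data).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
proj_C = lambda x: np.clip(x, 0.0, 1.0)

L = np.linalg.norm(A, 2) ** 2
gamma = 1.0 / (L + 1.0)
mu = 1.0                                 # F = I: kappa = eta = 1, mu < 2
F = lambda x: x

def implicit_xt(t, alpha_t, iters=5000):
    """Picard iteration for the fixed point x_t = X_t(x_t) of (12)."""
    x = np.zeros(3)
    for _ in range(iters):
        # V_{alpha_t}(x) = Proj_C(x - gamma * grad f_{alpha_t}(x))
        v = proj_C(x - gamma * (A.T @ (A @ x - b) + alpha_t * x))
        # X_t(x) = Proj_C((I - t*mu*F) V_{alpha_t}(x))
        x = proj_C(v - t * mu * F(v))
    return x

for t in [0.1, 0.01, 0.001]:             # the path t -> x_t as t -> 0
    print(t, implicit_xt(t, alpha_t=t**2))
```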

Our main result below shows that {x_t} converges in norm to a minimizer of (1) which solves a variational inequality.

Theorem 9. Assume that the minimization problem (1) is consistent and let S denote its solution set. Assume that the gradient ∇f is (1/L)-ism. Let F : C → H be η-strongly monotone and κ-Lipschitzian. Fix a constant μ satisfying 0 < μ < 2η/κ² and a constant γ satisfying 0 < γ < 2/L. Assume also that t ∈ (0, 1) and α_t = o(t). Let {x_t} be defined by (12). Then x_t converges in norm, as t → 0, to a minimizer x̃ of (1) which solves the variational inequality
⟨Fx̃, x − x̃⟩ ≥ 0, x ∈ S. (13)
Equivalently, we have Proj_S(I − F)x̃ = x̃.

Proof: The uniqueness of a solution of the variational inequality (13) is easily seen. Let x̃ ∈ S denote the unique solution of (13). Let us prove that x_t → x̃ as t → 0.

Set
y_t := (I − tμF)V_{α_t}(x_t) and V := Proj_C(I − γ∇f).
Then x_t = Proj_C y_t. Since x̃ ∈ S, we have x̃ = Vx̃, and we can write
x_t − x̃ = (Proj_C y_t − y_t) + ((I − tμF)V_{α_t}(x_t) − (I − tμF)x̃) − tμF(x̃).
Since Proj_C is the metric projection from H onto C and x_t = Proj_C y_t, we have ⟨Proj_C y_t − y_t, Proj_C y_t − x̃⟩ ≤ 0. It follows that
‖x_t − x̃‖² = ⟨Proj_C y_t − y_t, Proj_C y_t − x̃⟩ + ⟨(I − tμF)V_{α_t}(x_t) − (I − tμF)x̃, x_t − x̃⟩ − tμ⟨F(x̃), x_t − x̃⟩
≤ ‖(I − tμF)V_{α_t}(x_t) − (I − tμF)Vx̃‖ ‖x_t − x̃‖ − tμ⟨F(x̃), x_t − x̃⟩
≤ (1 − tτ)‖V_{α_t}(x_t) − Vx̃‖ ‖x_t − x̃‖ − tμ⟨F(x̃), x_t − x̃⟩
≤ (1 − tτ)(‖V_{α_t}(x_t) − V_{α_t}x̃‖ + ‖V_{α_t}x̃ − Vx̃‖)‖x_t − x̃‖ − tμ⟨F(x̃), x_t − x̃⟩
≤ (1 − tτ)(‖x_t − x̃‖ + γα_t‖x̃‖)‖x_t − x̃‖ − tμ⟨F(x̃), x_t − x̃⟩.
This yields
‖x_t − x̃‖² ≤ (μ/τ)⟨F(x̃), x̃ − x_t⟩ + (α_t/t)(γ/τ)‖x̃‖ ‖x_t − x̃‖. (14)

Since {x_t} is bounded as t → 0, we see that if {t_n} is a sequence in (0, 1) such that t_n → 0 and x_{t_n} ⇀ x̄, then by (14), {x_{t_n}} converges strongly to x̄. We may further assume that α_{t_n} → 0. Notice that Proj_C(I − γ∇f) is nonexpansive. It turns out that
‖x_{t_n} − Proj_C(I − γ∇f)x_{t_n}‖
≤ ‖x_{t_n} − Proj_C(I − γ∇f_{α_{t_n}})x_{t_n}‖ + ‖Proj_C(I − γ∇f_{α_{t_n}})x_{t_n} − Proj_C(I − γ∇f)x_{t_n}‖
≤ ‖x_{t_n} − Proj_C(I − γ∇f_{α_{t_n}})x_{t_n}‖ + γα_{t_n}‖x_{t_n}‖.
From the boundedness of {x_t} and lim_{t→0} ‖Proj_C(I − γ∇f_{α_t})x_t − x_t‖ = 0 (Proposition 8(ii)), we conclude that
lim_{n→∞} ‖x_{t_n} − Proj_C(I − γ∇f)x_{t_n}‖ = 0.
Since x_{t_n} ⇀ x̄, Lemma 6 gives x̄ = Proj_C(I − γ∇f)x̄. This shows that x̄ ∈ S.

We next prove that x̄ is a solution of the variational inequality (13). Since x_t = (Proj_C y_t − y_t) + (I − tμF)V_{α_t}(x_t), we can derive that
F(x_t) = (1/(tμ))(Proj_C y_t − y_t) + (1/(tμ))((I − tμF)V_{α_t}(x_t) − (I − tμF)(x_t)).
Note that, for x ∈ S, ⟨Proj_C y_t − y_t, Proj_C y_t − x⟩ ≤ 0. Therefore, for x ∈ S,
⟨F(x_t), x_t − x⟩ = (1/(tμ))⟨Proj_C y_t − y_t, x_t − x⟩ + (1/(tμ))⟨(I − tμF)V_{α_t}(x_t) − (I − tμF)(x_t), x_t − x⟩
≤ −(1/(tμ))⟨(I − tμF)(x_t) − (I − tμF)V_{α_t}(x_t), x_t − x⟩
= −(1/(tμ))⟨(I − V_{α_t})(x_t), x_t − x⟩ + ⟨F(x_t) − FV_{α_t}(x_t), x_t − x⟩
= −(1/(tμ))⟨(I − V_{α_t})(x_t) − (I − V_{α_t})x, x_t − x⟩ − (1/(tμ))⟨(I − V_{α_t})x, x_t − x⟩ + ⟨F(x_t) − FV_{α_t}(x_t), x_t − x⟩.
Since V_{α_t} = Proj_C(I − γ∇f_{α_t}) is nonexpansive, I − V_{α_t} is monotone, i.e.
⟨(I − V_{α_t})(x_t) − (I − V_{α_t})x, x_t − x⟩ ≥ 0.
Moreover, for x ∈ S we have x = Vx, so ‖(I − V_{α_t})x‖ = ‖V_{α_t}x − Vx‖ ≤ γα_t‖x‖. Hence
⟨F(x_t), x_t − x⟩ ≤ (1/(tμ))γα_t‖x‖ ‖x_t − x‖ + ⟨F(x_t) − FV_{α_t}(x_t), x_t − x⟩.
Taking the limit through t = t_n → 0 (so that α_{t_n}/t_n → 0 and, by Proposition 8(ii), ‖x_{t_n} − V_{α_{t_n}}(x_{t_n})‖ → 0), we obtain
⟨F(x̄), x̄ − x⟩ ≤ 0, x ∈ S,
that is, x̄ is a solution of (13). Hence x̄ = x̃ by uniqueness. Therefore x_t → x̃ as t → 0.

The variational inequality (13) can be written as
⟨(I − F)x̃ − x̃, x − x̃⟩ ≤ 0, x ∈ S.
So, by Lemma 7, it is equivalent to the fixed point equation Proj_S(I − F)x̃ = x̃. □

Finally, we consider the following hybrid algorithm based on the regularization approach:
y_n = (I − μθ_nF)Proj_C(I − γ∇f_{α_n})x_n,
x_{n+1} = Proj_C y_n, n ≥ 0, (15)
where the initial guess x_0 is selected in C arbitrarily. Assume that the parameter γ satisfies 0 < γ < 2/L and, in addition, that the sequences {α_n} and {θ_n} ⊂ (0, 1) satisfy the following conditions:
(i) θ_n → 0;
(ii) lim_{n→∞} α_n/θ_n = 0;
(iii) Σ_n θ_n = ∞;
(iv) Σ_n |θ_{n+1} − θ_n| < ∞;
(v) Σ_n |α_{n+1} − α_n| < ∞.

Theorem 10. Assume that the minimization problem (1) is consistent and the gradient ∇f is (1/L)-ism. Let F : C → H be η-strongly monotone and κ-Lipschitzian. Fix a constant μ satisfying 0 < μ < 2η/κ² and a constant γ satisfying 0 < γ < 2/L. Let {x_n} be generated by algorithm (15) with the sequences {θ_n} and {α_n} satisfying the above conditions. Then the sequence {x_n} converges in norm to the point x̃ obtained in Theorem 9.
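Before turning to the proof, here is a concrete rendering of scheme (15) on our illustrative instance. The choices θ_n = 1/(n+1) and α_n = 1/(n+1)² satisfy conditions (i)-(v), and F = I is 1-Lipschitzian and 1-strongly monotone; the problem data are again assumptions of ours, not the paper's.

```python
import numpy as np

# Explicit scheme (15) on an illustrative instance (our own data).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
proj_C = lambda x: np.clip(x, 0.0, 1.0)

L = np.linalg.norm(A, 2) ** 2
gamma = 1.0 / (L + 1.0)                  # 0 < gamma < 2/L
mu = 1.0                                 # 0 < mu < 2*eta/kappa^2 = 2 for F = I

x = np.zeros(3)
for n in range(5000):
    theta_n = 1.0 / (n + 1)              # conditions (i), (iii), (iv)
    alpha_n = 1.0 / (n + 1) ** 2         # conditions (ii), (v)
    g = A.T @ (A @ x - b) + alpha_n * x  # grad f_{alpha_n}(x)
    y = (1.0 - mu * theta_n) * proj_C(x - gamma * g)   # y_n, since F = I
    x = proj_C(y)                        # x_{n+1}
print(x)
```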

Proof: (1) The sequence {x_n} is bounded. Set V_{α_n} := Proj_C(I − γ∇f_{α_n}). First, the same computation as the one preceding (12), with t replaced by θ_n, shows that
‖(I − μθ_nF)V_{α_n}x − (I − μθ_nF)V_{α_n}y‖ ≤ (1 − θ_nτ)‖x − y‖, x, y ∈ C.
By condition (ii) there is a constant B > 0 such that α_n/θ_n ≤ B for all n. Then, for x̃ ∈ S (so that x̃ = Proj_C(I − γ∇f)x̃), we have
‖x_{n+1} − x̃‖ = ‖Proj_C y_n − Proj_C x̃‖ ≤ ‖y_n − x̃‖
= ‖−θ_nμF(x̃) + (I − μθ_nF)Proj_C(I − γ∇f_{α_n})x_n − (I − μθ_nF)Proj_C(I − γ∇f)x̃‖
≤ θ_nμ‖F(x̃)‖ + (1 − θ_nτ)‖Proj_C(I − γ∇f_{α_n})x_n − Proj_C(I − γ∇f)x̃‖
≤ θ_nμ‖F(x̃)‖ + (1 − θ_nτ)(‖x_n − x̃‖ + γα_n‖x̃‖)
≤ (1 − θ_nτ)‖x_n − x̃‖ + θ_n(μ‖F(x̃)‖ + γ(α_n/θ_n)‖x̃‖)
≤ (1 − θ_nτ)‖x_n − x̃‖ + θ_n(μ‖F(x̃)‖ + γB‖x̃‖)
≤ max{‖x_n − x̃‖, (1/τ)(μ‖F(x̃)‖ + γB‖x̃‖)}.
By induction,
‖x_n − x̃‖ ≤ max{‖x_0 − x̃‖, (μ‖F(x̃)‖ + γB‖x̃‖)/τ}.
In particular, {x_n} is bounded.

(2) We prove that ‖x_{n+1} − x_n‖ → 0 as n → ∞. Let M be a constant such that
M > max{sup_n μ‖FV_{α_n}x_n‖, sup_n γ‖x_n‖}.
We compute
‖x_{n+1} − x_n‖ = ‖Proj_C y_n − Proj_C y_{n−1}‖ ≤ ‖y_n − y_{n−1}‖
≤ ‖(I − μθ_nF)V_{α_n}x_n − (I − μθ_nF)V_{α_n}x_{n−1}‖ + ‖(I − μθ_nF)V_{α_n}x_{n−1} − (I − μθ_{n−1}F)V_{α_{n−1}}x_{n−1}‖
≤ (1 − θ_nτ)‖x_n − x_{n−1}‖ + ‖V_{α_n}x_{n−1} − V_{α_{n−1}}x_{n−1}‖ + μ|θ_n − θ_{n−1}| ‖FV_{α_{n−1}}x_{n−1}‖
≤ (1 − θ_nτ)‖x_n − x_{n−1}‖ + M|θ_n − θ_{n−1}| + ‖V_{α_n}x_{n−1} − V_{α_{n−1}}x_{n−1}‖,
and
‖V_{α_n}x_{n−1} − V_{α_{n−1}}x_{n−1}‖ = ‖Proj_C(I − γ∇f_{α_n})x_{n−1} − Proj_C(I − γ∇f_{α_{n−1}})x_{n−1}‖
≤ ‖(I − γ∇f_{α_n})x_{n−1} − (I − γ∇f_{α_{n−1}})x_{n−1}‖
= γ‖∇f_{α_n}(x_{n−1}) − ∇f_{α_{n−1}}(x_{n−1})‖
= γ|α_n − α_{n−1}| ‖x_{n−1}‖ ≤ M|α_n − α_{n−1}|. (16)
Combining the last two estimates, we obtain
‖x_{n+1} − x_n‖ ≤ (1 − τθ_n)‖x_n − x_{n−1}‖ + M(|θ_n − θ_{n−1}| + |α_n − α_{n−1}|). (17)
By conditions (iii)-(v), Lemma 5 applied to (17) yields ‖x_{n+1} − x_n‖ → 0 as n → ∞.

(3) We prove that ω_w(x_n) ⊆ S. Let x̂ ∈ ω_w(x_n) and assume that x_{n_k} ⇀ x̂ for some subsequence {x_{n_k}} of {x_n}. We may further assume that α_{n_k} → 0. Set V := Proj_C(I − γ∇f). Notice that V is nonexpansive and Fix V = S. It turns out that
‖x_{n_k} − Vx_{n_k}‖ ≤ ‖x_{n_k} − x_{n_k+1}‖ + ‖x_{n_k+1} − V_{α_{n_k}}x_{n_k}‖ + ‖V_{α_{n_k}}x_{n_k} − Vx_{n_k}‖
= ‖x_{n_k} − x_{n_k+1}‖ + ‖Proj_C y_{n_k} − Proj_C(I − γ∇f_{α_{n_k}})x_{n_k}‖ + ‖Proj_C(I − γ∇f_{α_{n_k}})x_{n_k} − Proj_C(I − γ∇f)x_{n_k}‖
≤ ‖x_{n_k} − x_{n_k+1}‖ + θ_{n_k}μ‖FV_{α_{n_k}}x_{n_k}‖ + γα_{n_k}‖x_{n_k}‖
≤ ‖x_{n_k} − x_{n_k+1}‖ + M(θ_{n_k} + α_{n_k}) → 0 as k → ∞.
So Lemma 6 guarantees that ω_w(x_n) ⊆ Fix V = S.

(4) We prove that x_n → x̃ as n → ∞, where x̃ is the unique solution of the variational inequality (13). First we observe that there is some x̂ ∈ ω_w(x_n) ⊆ S such that
lim sup_{n→∞} ⟨−F(x̃), x_n − x̃⟩ = ⟨−F(x̃), x̂ − x̃⟩ ≤ 0, (18)
where the inequality follows from (13). Since θ_n → 0, we may assume θ_nτ < 1 for all n. Since x_{n+1} = Proj_C y_n, we have ‖x_{n+1} − x̃‖² ≤ ⟨y_n − x̃, x_{n+1} − x̃⟩, and a standard manipulation (using 2ab ≤ a² + b²) gives
‖x_{n+1} − x̃‖² ≤ ‖(I − μθ_nF)V_{α_n}(x_n) − (I − μθ_nF)Vx̃‖² − 2θ_nμ⟨F(x̃), x_{n+1} − x̃⟩
≤ (1 − θ_nτ)²(‖V_{α_n}(x_n) − V_{α_n}x̃‖ + ‖V_{α_n}x̃ − Vx̃‖)² − 2θ_nμ⟨F(x̃), x_{n+1} − x̃⟩
≤ (1 − θ_nτ)(‖x_n − x̃‖ + γα_n‖x̃‖)² − 2θ_nμ⟨F(x̃), x_{n+1} − x̃⟩
≤ (1 − θ_nτ)‖x_n − x̃‖² + 2γα_n‖x̃‖ ‖x_n − x̃‖ + γ²α_n²‖x̃‖² + 2θ_nμ⟨−F(x̃), x_{n+1} − x̃⟩
= (1 − θ̄_n)‖x_n − x̃‖² + θ̄_nχ_n, (19)
where θ̄_n = θ_nτ and
χ_n = (1/τ)(2γ(α_n/θ_n)‖x̃‖ ‖x_n − x̃‖ + γ²(α_n/θ_n)α_n‖x̃‖² + 2μ⟨−F(x̃), x_{n+1} − x̃⟩).
By conditions (i) and (ii), the boundedness of {x_n}, and (18), we have lim sup_{n→∞} χ_n ≤ 0. Applying Lemma 5 to (19), together with Σ_n θ̄_n = ∞ (condition (iii)), we get ‖x_n − x̃‖ → 0 as n → ∞. □

4 Application

In this section, we give an application of Theorem 10 to the split feasibility problem (say SFP, for short), which was introduced by Censor and Elfving [12]. Since its inception in 1994, the split feasibility problem has received much attention due to its applications in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy. The SFP can mathematically be formulated as the problem of finding a point x* with the property
x* ∈ C and Ax* ∈ Q, (20)
where C and Q are nonempty closed convex subsets of Hilbert spaces H₁ and H₂, respectively, and A : H₁ → H₂ is a bounded linear operator.

It is clear that x* is a solution to the split feasibility problem (20) if and only if x* ∈ C and Ax* − Proj_Q Ax* = 0. We define the proximity function f by
f(x) = (1/2)‖Ax − Proj_Q Ax‖²,
and consider the constrained convex minimization problem
min_{x∈C} f(x) = min_{x∈C} (1/2)‖Ax − Proj_Q Ax‖². (21)
Then x* solves the split feasibility problem (20) if and only if x* solves the minimization problem (21) with minimum value equal to 0.

Byrne [15] introduced the so-called CQ algorithm to solve the SFP:
x_{n+1} = Proj_C(I − γA*(I − Proj_Q)A)x_n, n ≥ 0, (22)
where 0 < γ < 2/‖A‖² and A* denotes the adjoint of A. He showed that the sequence {x_n} generated by (22) converges weakly to a solution of the SFP.
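A minimal sketch of the CQ iteration (22); the box C, the ball Q, and the operator A below are our own illustrative choices, not data from the paper.

```python
import numpy as np

# CQ algorithm (22) for the SFP: find x in C with Ax in Q (illustrative data).
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
q = rng.standard_normal(4)               # center of the ball Q
r = 1.0                                  # radius of Q

proj_C = lambda x: np.clip(x, 0.0, 1.0)
def proj_Q(y):                           # metric projection onto the ball Q
    d = np.linalg.norm(y - q)
    return y if d <= r else q + r * (y - q) / d

gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # 0 < gamma < 2/||A||^2

x = np.zeros(3)
for n in range(2000):
    Ax = A @ x
    x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))   # step (22)
# If the SFP is consistent the residual tends to 0; otherwise x approximates
# a minimizer of the proximity function (21) in this finite-dimensional run.
print(x, np.linalg.norm(A @ x - proj_Q(A @ x)))
```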

In order to obtain a sequence that converges strongly to a solution of the SFP, we propose the following algorithm:
y_n = (I − μθ_nF)Proj_C(I − γ(A*(I − Proj_Q)A + α_nI))x_n,
x_{n+1} = Proj_C y_n, n ≥ 0, (23)
where the initial guess x_0 ∈ C is arbitrary and F : C → H₁ is an η-strongly monotone and κ-Lipschitzian operator with constants κ > 0, η > 0 such that 0 < μ < 2η/κ². We can show that the sequence {x_n} generated by (23) converges strongly to a solution of the SFP (20) if the sequence {θ_n} ⊂ (0, 1) and the sequence {α_n} of parameters satisfy appropriate conditions. Applying Theorem 10, we obtain the following result.

Theorem 11. Assume that the split feasibility problem (20) is consistent. Let the sequence {x_n} be generated by (23), where the sequence {θ_n} ⊂ (0, 1) and the sequence {α_n} satisfy the conditions (i)-(v). Then the sequence {x_n} converges strongly to a solution of the split feasibility problem (20).

Proof: By the definition of the proximity function f, we have
∇f(x) = A*(I − Proj_Q)Ax.
We first show that ∇f is inverse strongly monotone. On the one hand,
‖∇f(x) − ∇f(y)‖² = ⟨(I − Proj_Q)Ax − (I − Proj_Q)Ay, AA*((I − Proj_Q)Ax − (I − Proj_Q)Ay)⟩
≤ ‖AA*‖ ‖(I − Proj_Q)Ax − (I − Proj_Q)Ay‖²
= ‖A‖² ‖(I − Proj_Q)Ax − (I − Proj_Q)Ay‖².
On the other hand, since Proj_Q is firmly nonexpansive, so is I − Proj_Q (Definition 4(ii)), and hence
‖(I − Proj_Q)Ax − (I − Proj_Q)Ay‖² ≤ ⟨Ax − Ay, (I − Proj_Q)Ax − (I − Proj_Q)Ay⟩ = ⟨x − y, ∇f(x) − ∇f(y)⟩.
Combining the two estimates, we obtain
⟨x − y, ∇f(x) − ∇f(y)⟩ ≥ (1/‖A‖²)‖∇f(x) − ∇f(y)‖²,
that is, ∇f is (1/‖A‖²)-ism.

Set f_{α_n}(x) = f(x) + (α_n/2)‖x‖². Consequently,
∇f_{α_n}(x) = ∇f(x) + α_nx = A*(I − Proj_Q)Ax + α_nx.
Then the iterative scheme (23) is equivalent to
y_n = (I − μθ_nF)Proj_C(I − γ∇f_{α_n})x_n,
x_{n+1} = Proj_C y_n, n ≥ 0, (24)
where the initial guess x_0 ∈ C and the parameters {θ_n} ⊂ (0, 1) and {α_n} satisfy the conditions (i)-(v). The conclusion now follows immediately from Theorem 10. □

5 Numerical Result

In this section, we consider the following simple example to illustrate the effectiveness, realization, and convergence of the algorithm of Theorem 11.

Example. Assume that H₁ = H₂ = ℝ³. Take F = I, with Lipschitz constant κ = 1 and strong monotonicity constant η = 1. Take the parameters θ_n = 1/(n + 1) and α_n = 1/(n + 1)² for every n ≥ 0, and fix μ = γ = 1/2. In the split feasibility problem (20), take C = H₁ and Q = {b} ⊂ H₂, where the 3 × 3 matrix A and the vector b ∈ ℝ³ are chosen so that the linear system Ax = b has the unique solution
x̄ = (2, 5, 3)ᵀ.
The SFP then amounts to finding a point x* with x* ∈ C and Ax* ∈ Q, that is, the solution of the system of linear equations Ax = b, so x* = x̄. By Theorem 11, the sequence {x_n} generated by
x_{n+1} = (I − (1/(2(n + 1)))I) Proj_C(x_n − (1/2)(AᵀAx_n − Aᵀb + (1/(n + 1)²)x_n))
satisfies x_n → x̄ = (2, 5, 3)ᵀ as n → ∞. Tables 1 and 2 report the behavior of the iterates for the starting point x₀ = (0, ·, ·)ᵀ.

Table 1: x₀ = (0, ·, ·)ᵀ
  n (iteration number)    Error(n)
  55                      9.85 × 10⁻³
  550                     9.899 × 10⁻⁴
  ·00                     9.9885 × 10⁻⁵

Table 2: x₀ = (0, ·, ·)ᵀ
  n (iteration number)    x_n (iterative point)
  55                      (1.9905, 4.990, 2.9905)ᵀ
  550                     (1.9989, 4.9990, 2.9985)ᵀ
  ·00                     (1.9999, 4.9998, 2.9998)ᵀ

From the computer-programming point of view, the algorithms in this paper are easy to implement.
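The entries of the paper's matrix A and vector b are only partially legible in this transcription, so the sketch below substitutes a hypothetical stand-in A (labeled as such) and sets b := A x̄ so that the linear system has the same solution x̄ = (2, 5, 3)ᵀ; the parameter choices θ_n = 1/(n+1), α_n = 1/(n+1)², μ = γ = 1/2, C = H₁, and Q = {b} follow the example.

```python
import numpy as np

# Scheme (24) in the setting of the example: H1 = H2 = R^3, C = H1, Q = {b},
# F = I. The matrix A below is a HYPOTHETICAL stand-in (the paper's entries
# are not fully legible); it is scaled so that ||A||^2 < 4, hence gamma = 1/2
# satisfies gamma < 2/L.
A = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.5, 1.0]])
x_bar = np.array([2.0, 5.0, 3.0])        # intended solution of Ax = b
b = A @ x_bar

mu = gamma = 0.5
x = np.zeros(3)
for n in range(50000):
    theta_n = 1.0 / (n + 1)
    alpha_n = 1.0 / (n + 1) ** 2
    # Proj_C = I and Proj_Q(Ax) = b here, so grad f_{alpha_n}(x) = A^T(Ax - b) + alpha_n*x
    g = A.T @ (A @ x - b) + alpha_n * x
    x = (1.0 - mu * theta_n) * (x - gamma * g)   # y_n; x_{n+1} = Proj_C y_n = y_n
print(x, np.linalg.norm(x - x_bar))
```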

6 Conclusion

Methods for solving constrained convex minimization problems have been studied extensively in Hilbert spaces. To the best of our knowledge, however, this paper is probably the first in the literature to use the idea of regularization to establish this kind of hybrid method for finding a minimizer of a constrained convex minimization problem and to prove the corresponding strong convergence theorems. Finally, we applied this algorithm to the split feasibility problem.

Acknowledgements: The authors thank the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported by the Fundamental Research Funds for the Central Universities (No. 3122013K004).

References:

[1] E. S. Levitin and B. T. Polyak, Constrained minimization methods, Zh. Vychisl. Mat. Mat. Fiz., 6, 1966, pp. 787-823.
[2] P. Kumam, A new hybrid iterative method for solution of equilibrium problems and fixed point problems for an inverse strongly monotone operator and a nonexpansive mapping, J. Appl. Math. Comput., 29, 2009, pp. 263-280.
[3] Y. Yao and H.-K. Xu, Iterative methods for finding minimum-norm fixed points of nonexpansive mappings with applications, Optimization, 60, 2011, pp. 645-658.
[4] S. He and W. Sun, New hybrid steepest descent algorithms for variational inequalities over the common fixed points set of infinite nonexpansive mappings, WSEAS Transactions on Mathematics, 11, 2012, pp. 83-92.
[5] M. Su and H.-K. Xu, Remarks on the gradient-projection algorithm, J. Nonlinear Anal. Optim., 1, 2010, pp. 35-43.
[6] R.-X. Ni, Strong convergence of a hybrid projection algorithm for approximation of a common element of three sets in Banach spaces, WSEAS Transactions on Mathematics, 12, 2013.
[7] P. Kumam, A hybrid approximation method for equilibrium and fixed point problems for a monotone mapping and a nonexpansive mapping, Nonlinear Anal.: Hybrid Syst., 2, 2008, pp. 1245-1255.
[8] P.-H. Calamai and J.-J. Moré, Projected gradient methods for linearly constrained problems, Math. Program., 39, 1987, pp. 93-116.
[9] F. Wang and H.-K. Xu, Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem, J. Inequal. Appl., 2010, Article ID 102085.
[10] J.-S. Jung, Strong convergence of composite iterative methods for equilibrium problems and fixed point problems, Appl. Math. Comput., 213, 2009, pp. 498-505.

[11] J.-S. Jung, Strong convergence of iterative methods for k-strictly pseudo-contractive mappings in Hilbert spaces, Appl. Math. Comput., 215, 2010, pp. 3746-3753.
[12] Y. Censor and T. Elfving, A multiprojection algorithm using Bregman projections in a product space, Numer. Algorithms, 8, 1994, pp. 221-239.
[13] H.-K. Xu, A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem, Inverse Probl., 22, 2006, pp. 2021-2034.
[14] H.-K. Xu, Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces, Inverse Probl., 26, 2010, 105018.
[15] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Probl., 20, 2004, pp. 103-120.
[16] S.-N. He and J. Guo, Algorithms for finding the minimum norm solution of hierarchical fixed point problems, WSEAS Transactions on Mathematics, 12, 2013.
[17] G. Lopez, V. Martin and H.-K. Xu, Perturbation techniques for nonexpansive mappings with applications, Nonlinear Anal. Real World Appl., 10, 2009, pp. 2369-2383.
[18] H.-K. Xu, Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces, Inverse Probl., 26, 2010, 105018.
[19] I. Yamada, The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings, in: D. Butnariu, Y. Censor, S. Reich (Eds.), Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, Elsevier, New York, 2001, pp. 473-504.
[20] H.-K. Xu, Averaged mappings and the gradient-projection algorithm, J. Optim. Theory Appl., 150, 2012, pp. 360-378.
[21] C. Martinez-Yanes and H.-K. Xu, Strong convergence of the CQ method for fixed-point iteration processes, Nonlinear Anal., 64, 2006, pp. 2400-2411.
[22] P.-L. Combettes, Solving monotone inclusions via compositions of nonexpansive averaged operators, Optimization, 53, 2004, pp. 475-504.
[23] H. Brezis, Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert, North-Holland, Amsterdam, 1973.
[24] D. P. Bertsekas and E. M. Gafni, Projection methods for variational inequalities with applications to the traffic assignment problem, Math. Program. Stud., 17, 1982, pp. 139-159.
[25] D. Han and H.-K. Lo, Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities, Eur. J. Oper. Res., 159, 2004, pp. 529-544.
[26] H.-K. Xu, Iterative algorithms for nonlinear operators, J. Lond. Math. Soc., 66, 2002, pp. 240-256.
[27] K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, Cambridge Studies in Advanced Mathematics, vol. 28, Cambridge Univ. Press, Cambridge, 1990.