A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES

Fenghui Wang
Department of Mathematics, Luoyang Normal University, Luoyang 470, P.R. China
E-mail: wfenghui@163.com

ABSTRACT. The split common fixed-point problem is an inverse problem that consists in finding an element of a fixed-point set whose image under a bounded linear operator belongs to another fixed-point set. In this paper, we propose a new algorithm that is completely different from the existing algorithms. Under standard assumptions, we establish a weak convergence theorem for the proposed algorithm. Applications of the proposed algorithm to special cases as well as to compressed sensing are also included.

2010 Mathematics Subject Classification: 47J5, 47J0, 49N45, 65J5.
Keywords: split common fixed-point problem, Fejér monotonicity, demicontractive operators.

1. INTRODUCTION

In recent years there has been growing interest in the study of the split common fixed-point problem (SCFP). This problem aims to find an element of a fixed-point set whose image under a bounded linear operator belongs to another fixed-point set [6]. More specifically, the SCFP consists in finding

x ∈ Fix(U)  such that  Ax ∈ Fix(T),   (1.1)

where A : H → K is a bounded linear operator, Fix(U) and Fix(T) stand for the fixed-point sets of U : H → H and T : K → K, respectively, and H and K are two Hilbert spaces. In particular, if T and U are projection operators, then the SCFP reduces to the well-known split feasibility problem (SFP) [5, 3], which consists in finding x with the property

x ∈ C  such that  Ax ∈ Q,   (1.2)

where C and Q are nonempty closed convex subsets of H and K, respectively. It is clear that in this special case U = P_C and T = P_Q, where P_C and P_Q are the orthogonal projections onto C and Q, respectively.
Problem (1.1) can be cast as solving a fixed-point equation:

x = U(I − τA*(I − T)A)x,   (1.3)

where τ is a positive real number and A* is the adjoint operator of A. Based on this equation, Censor and Segal [6] introduced the following algorithm for solving the SCFP:

x_{n+1} = U(x_n − τA*(I − T)Ax_n),   (1.4)

where τ is a properly chosen stepsize. Algorithm (1.4) was originally designed to solve problem (1.1) whenever the operators involved are directed operators. It is shown that if the stepsize τ is chosen in the interval (0, 2/‖A‖²), then the iterative sequence generated by (1.4) converges weakly to a solution of the SCFP whenever such a solution exists. Subsequently, this iterative scheme was extended to the case of quasi-nonexpansive operators [11], demicontractive operators [12], and finitely many directed operators [4, 7]. In [7], the constant stepsize in (1.4) was replaced by a variable stepsize that does not depend on the operator norm ‖A‖, since computing this norm is in general not an easy task in practice. In recent works [2, 13], modifications of (1.4) were presented that generate strongly (norm) convergent iterative sequences.

The aim of this paper is to introduce a new algorithm for solving the SCFP. Our idea for designing iterative algorithms is based on an observation: problem (1.1) amounts to solving, instead of (1.3), another fixed-point equation, namely (3.1). In light of this fixed-point equation, we can easily introduce a new algorithm for solving problem (1.1). By using Fejér monotonicity, we obtain weak convergence of the iterative sequence to a solution for demicontractive operators. As an application, we obtain a new algorithm for solving the SFP.

2. PRELIMINARY AND NOTATION

Throughout the paper, we denote by H and K two Hilbert spaces, by I the identity operator, by Fix(T) the set of fixed points of an operator T, and by ω_w{x_n} the set of weak cluster points of a sequence {x_n}. The notation → stands for strong convergence and ⇀ stands for weak convergence.
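To make iteration (1.4) concrete in the SFP setting, the following sketch runs it with U = P_C and T = P_Q on a small hypothetical instance; the sets C and Q, the matrix A, and the stepsize below are illustrative choices, not data from the paper.

```python
import numpy as np

def proj_box(v, lo, hi):
    """Orthogonal projection onto the box [lo, hi]^d."""
    return np.clip(v, lo, hi)

# Hypothetical problem data: C = [-2, 2]^2, Q = [0, 1]^2.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
tau = 1.0 / np.linalg.norm(A, 2) ** 2   # inside the interval (0, 2/||A||^2)

x = np.array([5.0, -3.0])               # arbitrary initial guess
for _ in range(5000):
    residual = A @ x - proj_box(A @ x, 0.0, 1.0)       # (I - T)Ax with T = P_Q
    x = proj_box(x - tau * A.T @ residual, -2.0, 2.0)  # apply U = P_C

# x ends up in C with Ax (numerically) in Q, i.e. near a solution of the SFP.
```

Since the last step applies P_C, every iterate lies in C exactly; only the constraint Ax ∈ Q is approached in the limit.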
Definition 2.1. An operator T : H → H is called nonexpansive if

‖Tx − Ty‖ ≤ ‖x − y‖, ∀x, y ∈ H;

it is called firmly nonexpansive if

‖Tx − Ty‖² ≤ ‖x − y‖² − ‖(I − T)x − (I − T)y‖², ∀x, y ∈ H.
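As a quick sanity check of the second inequality, one can verify numerically that an orthogonal projection (here onto the Euclidean unit ball, a hypothetical choice for illustration) is firmly nonexpansive at randomly sampled points:

```python
import numpy as np

def proj_ball(v):
    """Orthogonal projection onto the closed Euclidean unit ball."""
    n = np.linalg.norm(v)
    return v if n <= 1.0 else v / n

rng = np.random.default_rng(0)
worst = -np.inf
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    Px, Py = proj_ball(x), proj_ball(y)
    # ||Px - Py||^2 - ( ||x - y||^2 - ||(I-P)x - (I-P)y||^2 ) should be <= 0
    gap = (np.linalg.norm(Px - Py) ** 2
           - np.linalg.norm(x - y) ** 2
           + np.linalg.norm((x - Px) - (y - Py)) ** 2)
    worst = max(worst, gap)

# worst stays nonpositive up to rounding error
```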
Definition 2.2. Let T : H → H be an operator with Fix(T) ≠ ∅. Then I − T is said to be demiclosed at zero if, for any {x_n} in H, there holds the implication:

x_n ⇀ x and (I − T)x_n → 0  ⟹  x ∈ Fix(T).

It is well known that nonexpansive operators are demiclosed at zero.

Definition 2.3. Let T : H → H be an operator with Fix(T) ≠ ∅. Then

(i) T : H → H is called directed if

⟨z − Tx, x − Tx⟩ ≤ 0, ∀z ∈ Fix(T), ∀x ∈ H,

or equivalently

⟨x − z, x − Tx⟩ ≥ ‖x − Tx‖², ∀z ∈ Fix(T), ∀x ∈ H;   (2.1)

(ii) T : H → H is called quasi-nonexpansive if

‖Tx − z‖ ≤ ‖x − z‖, ∀z ∈ Fix(T), ∀x ∈ H;

(iii) T : H → H is called κ-demicontractive with κ < 1 if

‖Tx − z‖² ≤ ‖x − z‖² + κ‖(I − T)x‖², ∀z ∈ Fix(T), ∀x ∈ H,

or equivalently

⟨x − z, Tx − x⟩ ≤ ((κ − 1)/2)‖x − Tx‖², ∀z ∈ Fix(T), ∀x ∈ H.   (2.2)

The orthogonal projection P_C from H onto a nonempty closed convex subset C ⊆ H is defined by

P_C x := arg min_{y∈C} ‖x − y‖, x ∈ H.   (2.3)

It is well known that the projection P_C is characterized by

⟨x − P_C x, z − P_C x⟩ ≤ 0, ∀z ∈ C.   (2.4)

The concept of Fejér monotonicity plays an important role in the subsequent analysis of this paper. Recall that a sequence {x_n} in H is said to be Fejér monotone with respect to (w.r.t.) a nonempty closed convex subset S of H if

‖x_{n+1} − z‖ ≤ ‖x_n − z‖, ∀n ≥ 0, ∀z ∈ S.

Lemma 2.4. [1, 7] Let S be a nonempty closed convex subset of H. If the sequence {x_n} is Fejér monotone w.r.t. S, then the following hold:

(i) x_n ⇀ z ∈ S if and only if ω_w{x_n} ⊆ S;
(ii) the sequence {P_S x_n} converges strongly;

(iii) if x_n ⇀ z ∈ S, then z = lim_{n→∞} P_S x_n.

3. A NEW ALGORITHM AND ITS CONVERGENCE ANALYSIS

Let us first consider the SCFP (1.1) for demicontractive operators. More specifically, we make use of the following assumptions:

(A1) U is κ₁-demicontractive with κ₁ < 1;
(A2) T is κ₂-demicontractive with κ₂ < 1;
(A3) both I − U and I − T are demiclosed at zero;
(A4) problem (1.1) is consistent, i.e., its solution set, denoted by S, is nonempty.

In what follows, we treat problem (1.1) under (A1)-(A4). We first state a theorem which shows that the SCFP is equivalent to solving a fixed-point equation.

Theorem 3.1. Let condition (A4) be satisfied. Then, for any τ > 0, x is a solution to problem (1.1) if and only if it solves the fixed-point equation

x = x − τ[(x − Ux) + A*(I − T)Ax].   (3.1)

Proof. It is trivial to see that if x ∈ S, then it solves Eq. (3.1). To see the converse, let x be a solution to Eq. (3.1), that is,

(x − Ux) + A*(I − T)Ax = 0.

Now fix any z ∈ S. It then follows from (2.2) that

0 = ⟨(x − Ux) + A*(I − T)Ax, x − z⟩
  = ⟨x − Ux, x − z⟩ + ⟨A*(I − T)Ax, x − z⟩
  = ⟨x − Ux, x − z⟩ + ⟨(I − T)Ax, Ax − Az⟩
  ≥ ((1 − κ₁)/2)‖x − Ux‖² + ((1 − κ₂)/2)‖(I − T)Ax‖².

Hence x = Ux and Ax = T(Ax), and the proof is thus complete.

Theorem 3.1 thus enables us to propose a new iterative algorithm for solving problem (1.1).

Algorithm 1. Choose an initial guess x_0 ∈ H arbitrarily. Assume that the nth iterate x_n has been constructed; then calculate the (n+1)th iterate x_{n+1} via the formula

x_{n+1} = x_n − τ[(x_n − Ux_n) + A*(I − T)Ax_n],
where the stepsize τ is a properly chosen real number.

Theorem 3.2. Let conditions (A1)-(A4) be satisfied and let {x_n} be the sequence generated by Algorithm 1. If the stepsize τ satisfies

0 < τ < (1/2) min(1 − κ₁, (1 − κ₂)/‖A‖²),

then {x_n} converges weakly to a solution z ∈ S, where z = lim_{n→∞} P_S x_n.

Proof. We first show that the sequence {x_n} is Fejér monotone w.r.t. S. To this end, let u_n = (x_n − Ux_n) + A*(I − T)Ax_n. By inequality (2.2), we have

⟨x_n − Ux_n, x_n − z⟩ ≥ ((1 − κ₁)/2)‖Ux_n − x_n‖²

and

⟨A*(I − T)Ax_n, x_n − z⟩ = ⟨(I − T)Ax_n, Ax_n − Az⟩ ≥ ((1 − κ₂)/2)‖(I − T)Ax_n‖².

Combining these two inequalities, we have

⟨u_n, x_n − z⟩ ≥ ((1 − κ₁)/2)‖Ux_n − x_n‖² + ((1 − κ₂)/2)‖(I − T)Ax_n‖²
 = ((1 − κ₁)/2)‖x_n − Ux_n‖² + ((1 − κ₂)/(2‖A‖²))‖A‖²‖(I − T)Ax_n‖²
 ≥ ((1 − κ₁)/2)‖x_n − Ux_n‖² + ((1 − κ₂)/(2‖A‖²))‖A*(I − T)Ax_n‖²
 ≥ (1/2) min(1 − κ₁, (1 − κ₂)/‖A‖²)(‖x_n − Ux_n‖² + ‖A*(I − T)Ax_n‖²)
 ≥ (1/4) min(1 − κ₁, (1 − κ₂)/‖A‖²)(‖x_n − Ux_n‖ + ‖A*(I − T)Ax_n‖)²
 ≥ (1/4) min(1 − κ₁, (1 − κ₂)/‖A‖²)‖(x_n − Ux_n) + A*(I − T)Ax_n‖²
 = (1/4) min(1 − κ₁, (1 − κ₂)/‖A‖²)‖u_n‖²,

which implies

‖x_{n+1} − z‖² = ‖x_n − z‖² − 2τ⟨u_n, x_n − z⟩ + τ²‖u_n‖²
 ≤ ‖x_n − z‖² − (τ/2) min(1 − κ₁, (1 − κ₂)/‖A‖²)‖u_n‖² + τ²‖u_n‖²
 = ‖x_n − z‖² − τ[(1/2) min(1 − κ₁, (1 − κ₂)/‖A‖²) − τ]‖u_n‖².
Let us now define

δ := τ[(1/2) min(1 − κ₁, (1 − κ₂)/‖A‖²) − τ].

Thus we have

‖x_{n+1} − z‖² ≤ ‖x_n − z‖² − δ‖u_n‖².   (3.2)

Since δ is clearly a positive number, this implies that {x_n} is Fejér monotone.

We next show that every weak cluster point of the sequence {x_n} belongs to the solution set, i.e., ω_w{x_n} ⊆ S. As a Fejér monotone sequence, {x_n} is bounded, and so is the sequence {u_n}. Moreover, it follows from (3.2) that

δ‖u_n‖² ≤ ‖x_n − z‖² − ‖x_{n+1} − z‖².

By induction, we have for all n ≥ 0

Σ_{k=0}^{n} δ‖u_k‖² ≤ ‖x_0 − z‖²,

so that

Σ_{n=0}^{∞} ‖u_n‖² < ∞,

which in particular implies

lim_{n→∞} ‖u_n‖ = 0.   (3.3)

On the other hand, we deduce from (2.2) that

((1 − κ₁)/2)‖x_n − Ux_n‖² + ((1 − κ₂)/2)‖(I − T)Ax_n‖²
 ≤ ⟨x_n − Ux_n, x_n − z⟩ + ⟨(I − T)Ax_n, Ax_n − Az⟩
 = ⟨x_n − Ux_n, x_n − z⟩ + ⟨A*(I − T)Ax_n, x_n − z⟩
 = ⟨u_n, x_n − z⟩
 ≤ ‖u_n‖‖x_n − z‖.

Since {x_n} and {u_n} are both bounded, this together with (3.3) implies that

lim_{n→∞} ‖x_n − Ux_n‖ = lim_{n→∞} ‖(I − T)Ax_n‖ = 0.

By condition (A3), we conclude that every weak cluster point of the sequence {x_n} belongs to the solution set.

Finally, since {x_n} is Fejér monotone and every weak cluster point of {x_n} belongs to the solution set, we deduce from Lemma 2.4 that the sequence {x_n} converges weakly to a solution z of problem (1.1).
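For illustration, the following sketch runs Algorithm 1 on a small split feasibility instance, taking U = P_C and T = P_Q; projections are quasi-nonexpansive, hence 0-demicontractive, so Theorem 3.2 applies with κ₁ = κ₂ = 0. All problem data below is hypothetical.

```python
import numpy as np

def proj_box(v, lo, hi):
    """Orthogonal projection onto the box [lo, hi]^d."""
    return np.clip(v, lo, hi)

# Hypothetical data: C = [-2, 2]^2, Q = [0, 1]^2.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
normA2 = np.linalg.norm(A, 2) ** 2
# Theorem 3.2 with kappa_1 = kappa_2 = 0 requires
# 0 < tau < (1/2) min(1, 1/||A||^2); take a point safely inside.
tau = 0.4 * min(1.0, 1.0 / normA2)

x = np.array([5.0, -3.0])
for _ in range(20000):
    u = (x - proj_box(x, -2.0, 2.0)) + A.T @ (A @ x - proj_box(A @ x, 0.0, 1.0))
    x = x - tau * u   # x_{n+1} = x_n - tau * u_n

# u -> 0 along the iteration; in the limit x lies in C and Ax lies in Q.
```

Unlike iteration (1.4), the operators U and T enter only through the combined direction u_n, and no projection is applied after the update step.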
4. APPLICATIONS TO SPECIAL CASES

In this section we study applications of the previous results to various classes of nonlinear operators.

We first consider the case of quasi-nonexpansive operators. As every quasi-nonexpansive operator is clearly 0-demicontractive, we can state the following results.

Theorem 4.1. Assume that U and T in problem (1.1) are quasi-nonexpansive, and that both I − T and I − U are demiclosed at zero. Let {x_n} be the sequence generated by Algorithm 1. If the stepsize satisfies

0 < τ < 1/(2 max(1, ‖A‖²)),

then {x_n} converges weakly to a solution z of problem (1.1).

Since every nonexpansive operator is clearly quasi-nonexpansive and demiclosed at zero, we can easily deduce the following result.

Theorem 4.2. Assume that U and T in problem (1.1) are nonexpansive operators. Let {x_n} be the sequence generated by Algorithm 1. If the stepsize satisfies

0 < τ < 1/(2 max(1, ‖A‖²)),

then {x_n} converges weakly to a solution z of problem (1.1).

We next consider the case of directed operators. As every directed operator is clearly (−1)-demicontractive, we can state the following results.

Theorem 4.3. Assume that U and T in problem (1.1) are directed, and that both I − T and I − U are demiclosed at zero. Let {x_n} be the sequence generated by Algorithm 1. If the stepsize satisfies

0 < τ < 1/max(1, ‖A‖²),

then {x_n} converges weakly to a solution z of problem (1.1).

Since every firmly nonexpansive operator is clearly directed and nonexpansive, and hence demiclosed at zero, we can easily deduce the following result.

Theorem 4.4. Assume that U and T in problem (1.1) are firmly nonexpansive operators. Let {x_n} be the sequence generated by Algorithm 1. If the stepsize satisfies

0 < τ < 1/max(1, ‖A‖²),

then {x_n} converges weakly to a solution z of problem (1.1).
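The stepsize windows in Theorems 4.1-4.4 are just the bound of Theorem 3.2 evaluated at the relevant demicontractivity constants (κ = 0 for quasi-nonexpansive and nonexpansive operators, κ = −1 for directed and firmly nonexpansive ones). A small helper function (hypothetical, for illustration only) makes the arithmetic explicit:

```python
def stepsize_upper_bound(kappa1, kappa2, norm_A):
    """Upper stepsize bound (1/2) min(1 - k1, (1 - k2)/||A||^2) from Theorem 3.2."""
    return 0.5 * min(1.0 - kappa1, (1.0 - kappa2) / norm_A ** 2)

# Quasi-nonexpansive case (kappa = 0): bound equals 1 / (2 max(1, ||A||^2)).
b0 = stepsize_upper_bound(0.0, 0.0, 2.0)     # with ||A|| = 2 this gives 1/8
# Directed case (kappa = -1): bound equals 1 / max(1, ||A||^2), twice as large.
b1 = stepsize_upper_bound(-1.0, -1.0, 2.0)   # with ||A|| = 2 this gives 1/4
```

The doubling reflects that directed operators admit larger stepsizes than merely quasi-nonexpansive ones.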
We finally consider the case of projection operators, namely the split feasibility problem (1.2). In this situation, we have the following iterative algorithm for problem (1.2); moreover, in this special case the conditions imposed on the stepsize τ can be further relaxed.

Algorithm 2. Choose an initial guess x_0 ∈ H arbitrarily. Assume that the nth iterate x_n has been constructed; then calculate the (n+1)th iterate x_{n+1} via the formula

x_{n+1} = x_n − τ[(x_n − P_C x_n) + A*(I − P_Q)Ax_n],   (4.1)

where the stepsize τ is a positive real number.

Theorem 4.5. Assume that problem (1.2) is consistent. Let {x_n} be the sequence generated by Algorithm 2. If the stepsize satisfies

0 < τ < 2/(1 + ‖A‖²),

then {x_n} converges weakly to a solution z of problem (1.2).

Proof. It suffices to show that the sequence {x_n} is Fejér monotone w.r.t. S, since the rest of the proof is similar to that of Theorem 3.2. To this end, let us define

f(x) = (1/2)(‖x − P_C x‖² + ‖(I − P_Q)Ax‖²), x ∈ H.

It is clear that f is differentiable, and

∇f(x) = (x − P_C x) + A*(I − P_Q)Ax.

It then follows that

‖∇f(x) − ∇f(y)‖ = ‖(I − P_C)x − (I − P_C)y + A*[(I − P_Q)Ax − (I − P_Q)Ay]‖
 ≤ ‖(I − P_C)x − (I − P_C)y‖ + ‖A*‖‖(I − P_Q)Ax − (I − P_Q)Ay‖
 ≤ ‖x − y‖ + ‖A‖‖Ax − Ay‖
 ≤ ‖x − y‖ + ‖A‖²‖x − y‖
 = (1 + ‖A‖²)‖x − y‖,

where we have used the facts that both I − P_C and I − P_Q are nonexpansive and that ‖A*‖ = ‖A‖. This clearly implies that ∇f is Lipschitz continuous with constant 1 + ‖A‖². Hence, by the Baillon-Haddad theorem, we conclude that ∇f is 1/(1 + ‖A‖²)-inverse strongly monotone, that is,

⟨∇f(x) − ∇f(y), x − y⟩ ≥ (1/(1 + ‖A‖²))‖∇f(x) − ∇f(y)‖².   (4.2)
Let now u_n = (x_n − P_C x_n) + A*(I − P_Q)Ax_n, so that u_n = ∇f(x_n). In view of (4.2), and since ∇f(z) = 0 for z ∈ S, we have

⟨u_n, x_n − z⟩ = ⟨∇f(x_n) − ∇f(z), x_n − z⟩ ≥ (1/(1 + ‖A‖²))‖∇f(x_n) − ∇f(z)‖² = (1/(1 + ‖A‖²))‖u_n‖²,

from which it then follows that

‖x_{n+1} − z‖² = ‖x_n − z‖² − 2τ⟨u_n, x_n − z⟩ + τ²‖u_n‖²
 ≤ ‖x_n − z‖² − (2τ/(1 + ‖A‖²))‖u_n‖² + τ²‖u_n‖²
 = ‖x_n − z‖² − τ(2/(1 + ‖A‖²) − τ)‖u_n‖².

Let us now set

δ := τ(2/(1 + ‖A‖²) − τ).

Thus, the last inequality yields

‖x_{n+1} − z‖² ≤ ‖x_n − z‖² − δ‖u_n‖².

Since δ is clearly a positive number, this implies that {x_n} is Fejér monotone.

5. APPLICATIONS IN SIGNAL PROCESSING

In this section we consider applications of our algorithm to inverse problems arising in signal processing. Compressed sensing is a very active domain of research and applications, based on the fact that an N-sample signal x with exactly m nonzero components can be recovered from k measurements with m ≤ k < N, as long as the number of measurements is smaller than the number of signal samples and at the same time much larger than the sparsity level of x. In addition, the measurements are required to be incoherent, which means that the information contained in the signal is spread out in the measurement domain. Since k < N, the problem of recovering x from k measurements is ill posed, because we face an underdetermined system of linear equations. Using a sparsity prior, however, it turns out that reconstructing x from y is possible as long as the number of nonzero elements of x is small enough. More specifically, compressed sensing can be formulated as inverting the equation system

y = Ax + ε,   (5.1)
where x ∈ R^N is the signal to be recovered, y ∈ R^k is the vector of noisy observations or measurements, ε represents the noise, and A : R^N → R^k is a bounded linear observation operator, often ill-conditioned because it models a process with loss of information.

A powerful approach to problem (5.1) consists in seeking a solution x represented by a sparse expansion, that is, a series expansion with respect to an orthonormal basis that has only a small number of large coefficients. When attempting to find sparse solutions to linear inverse problems of type (5.1), a successful model is the unconstrained convex minimization problem

min_{x∈R^N} (1/2)‖y − Ax‖² + ν‖x‖₁,   (5.2)

where ν is a positive parameter and ‖·‖₁ is the ℓ₁ norm. Problem (5.2) consists in minimizing an objective function that combines a quadratic error term with a sparsity-inducing ℓ₁ regularization term, whose role is to force small components of x to zero. By means of convex analysis, one can show that, for a suitable choice of the nonnegative real number t, a solution to the constrained least-squares problem

min_{x∈R^N} (1/2)‖y − Ax‖²  subject to  ‖x‖₁ ≤ t,   (5.3)

is a minimizer of (5.2) (cf. [9]). Clearly, problem (5.3) is a particular case of (1.2) with C = {x ∈ R^N : ‖x‖₁ ≤ t} and Q = {y}, and thus it can be solved by the proposed algorithm. In this case P_Q w = y for every w (since Q = {y}), and P_C is the projection onto the closed ℓ₁-ball in R^N (see [8]).

Algorithm 3. Choose an initial guess x_0 ∈ H arbitrarily. Assume that the nth iterate x_n has been constructed; then calculate the (n+1)th iterate x_{n+1} via the formula

x_{n+1} = x_n − τ[(x_n − P_C x_n) + A*(Ax_n − y)],   (5.4)

where the stepsize τ is a positive real number.

Theorem 5.1. Assume that problem (5.3) is consistent. Let {x_n} be the sequence generated by Algorithm 3. If the stepsize satisfies

0 < τ < 2/(1 + ‖A‖²),

then {x_n} converges weakly to a solution z of problem (5.3).

REFERENCES

[1] H.H. Bauschke, J.M. Borwein, On projection algorithms for solving convex feasibility problems, SIAM Review, 38 (1996), 367-426.
[2] O.A.
Boikanyo, A strongly convergent algorithm for the split common fixed point problem, Appl. Math. Comput., 265 (2015), 844-853.
[3] C. Byrne, Iterative oblique projection onto convex sets and the split feasibility problem, Inverse Problems, 18 (2002), 441-453.
[4] A. Cegielski, General method for solving the split common fixed point problem, J. Optim. Theory Appl., 165 (2015), 385-404.
[5] Y. Censor, T. Elfving, A multiprojection algorithm using Bregman projections in a product space, Numer. Algorithms, 8 (1994), 221-239.
[6] Y. Censor, A. Segal, The split common fixed point problem for directed operators, J. Convex Anal., 16 (2009), 587-600.
[7] H. Cui, F. Wang, Iterative methods for the split common fixed point problem in Hilbert spaces, Fixed Point Theory Appl., 2014 (2014), 78.
[8] J.C. Duchi, S. Shalev-Shwartz, Y. Singer, et al., Efficient projections onto the l1-ball for learning in high dimensions, in: Proceedings of the International Conference on Machine Learning, 2008, 272-279.
[9] M.A. Figueiredo, R.D. Nowak, S.J. Wright, Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems, IEEE J. Sel. Top. Signal Process., 1 (2007), 586-598.
[10] G. López et al., Solving the split feasibility problem without prior knowledge of matrix norms, Inverse Problems, 28 (2012), 085004.
[11] A. Moudafi, A note on the split common fixed-point problem for quasi-nonexpansive operators, Nonlinear Anal., 74 (2011), 4083-4087.
[12] A. Moudafi, The split common fixed point problem for demicontractive mappings, Inverse Problems, 26 (2010), 055007.
[13] P. Kraikaew, S. Saejung, On split common fixed point problems, J. Math. Anal. Appl., 415 (2014), 513-524.
[14] F. Schöpfer, T. Schuster, A.K. Louis, An iterative regularization method for the solution of the split feasibility problem in Banach spaces, Inverse Problems, 24 (2008), 055008.
[15] K.K. Tan, H.K. Xu, Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process, J. Math. Anal. Appl., 178 (1993), 301-308.
[16] F. Wang, On the convergence of CQ algorithm with variable steps for the split equality problem, Numerical Algorithms, to appear.
[17] F. Wang, H.K. Xu, Cyclic algorithms for split feasibility problems in Hilbert spaces, Nonlinear Anal., 74 (2011), 4105-4111.
[18] H.K. Xu, Iterative algorithms for nonlinear operators, J. London Math. Soc., 66 (2002), 240-256.
[19] H.K. Xu, A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem, Inverse Problems, 22 (2006), 2021-2034.
[20] H.K. Xu, Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces, Inverse Problems, 26 (2010), 105018.