Sakurai and Iiduka Fixed Point Theory and Applications 2014, 2014:202

RESEARCH — Open Access

Acceleration of the Halpern algorithm to search for a fixed point of a nonexpansive mapping

Kaito Sakurai* and Hideaki Iiduka

*Correspondence: k_sakurai@cs.meiji.ac.jp. Department of Computer Science, Meiji University, 1-1-1 Higashimita, Tama-ku, Kawasaki-shi, Kanagawa 214-8571, Japan

Abstract
This paper presents an algorithm that accelerates the Halpern fixed point algorithm in a real Hilbert space. To this end, we first apply the Halpern algorithm to the smooth convex minimization problem, which is an example of a fixed point problem for a nonexpansive mapping, and indicate that the Halpern algorithm is based on the steepest descent method for solving the minimization problem. Next, we formulate a novel fixed point algorithm using the ideas of conjugate gradient methods, which can accelerate the steepest descent method. We show that, under certain assumptions, our algorithm strongly converges to a fixed point of a nonexpansive mapping. We numerically compare our algorithm with the Halpern algorithm and show that it dramatically reduces the running time and the number of iterations needed to find a fixed point.

MSC: 47H10; 65K05; 90C25

Keywords: conjugate gradient method; fixed point; Halpern algorithm; nonexpansive mapping; smooth convex optimization; steepest descent method

1 Introduction
Fixed point problems for nonexpansive mappings [1, Chapter 4], [2, Chapter 3], [3, Chapter 1], [4, Chapter 3] have been investigated in many practical applications, including convex feasibility problems [5], [1, Example 5.21], convex optimization problems [1, Corollary 17.5], problems of finding the zeros of monotone operators [1, Proposition 23.38], and monotone variational inequalities [1, Subchapter 25.5].
Fixed point problems can be solved by useful fixed point algorithms, such as the Krasnosel'skiĭ-Mann algorithm [1, Subchapter 5.2], [6, Subchapter 1.2], [7, 8], the Halpern algorithm [6, Subchapter 1.2], [9, 10], and the hybrid method [11]. Meanwhile, to make practical systems and networks (see, e.g., [12-15] and references therein) stable and reliable, the fixed point has to be found at a faster pace. That is, we need a new algorithm that approximates the fixed point faster than the conventional ones. In this paper, we focus on the Halpern algorithm and present an algorithm to accelerate the search for a fixed point of a nonexpansive mapping.

To achieve the main objective of this paper, we first apply the Halpern algorithm to the smooth convex minimization problem, which is an example of a fixed point problem for
a nonexpansive mapping, and indicate that the Halpern algorithm is based on the steepest descent method [16, Subchapter 3.3] for solving the minimization problem.

A number of iterative methods [16, Chapters 5-19] have been proposed to accelerate the steepest descent method. In particular, conjugate gradient methods [16, Chapter 5] have been widely used as efficient accelerated versions of gradient methods. Here, we focus on the conjugate gradient methods and devise an algorithm blending the conjugate gradient methods with the Halpern algorithm. Our main contribution is to propose a novel algorithm for finding a fixed point of a nonexpansive mapping, for which we use the ideas of accelerated conjugate gradient methods for optimization over the fixed point set [17, 18], and to prove that the algorithm converges to some fixed point in the sense of the strong topology of a real Hilbert space. To demonstrate the effectiveness and fast convergence of our algorithm, we numerically compare it with the Halpern algorithm. Numerical results show that it dramatically reduces the running time and the number of iterations needed to find a fixed point.

This paper is organized as follows. Section 2 gives the mathematical preliminaries. Section 3 devises the acceleration algorithm for solving fixed point problems and presents its convergence analysis. Section 4 applies the proposed and conventional algorithms to a concrete fixed point problem and provides numerical examples comparing them.

2 Mathematical preliminaries
Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$, and let $\mathbb{N}$ be the set of all nonnegative integers.

2.1 Fixed point problem
Suppose that $C \subset H$ is nonempty, closed, and convex. A mapping $T \colon C \to C$ is said to be nonexpansive [1, Definition 4.1(ii)], [2, (3.2)], [3, Subchapter 1.1], [4, Subchapter 3.1] if
$$\|T(x) - T(y)\| \le \|x - y\| \quad (x, y \in C).$$
The fixed point set of $T \colon C \to C$ is denoted by $\mathrm{Fix}(T) := \{x \in C : T(x) = x\}$. The metric projection onto $C$ [1, Subchapter 4.2, Chapter 28] is denoted by $P_C$. It is defined by $P_C(x) \in C$ and $\|x - P_C(x)\| = \inf_{y \in C} \|x - y\|$ ($x \in H$). $P_C$ is nonexpansive with $\mathrm{Fix}(P_C) = C$ [1, Proposition 4.8, (4.8)].

Proposition 2.1 Suppose that $C \subset H$ is nonempty, closed, and convex, $T \colon C \to C$ is nonexpansive, and $x \in H$. Then
(i) [1, Corollary 4.15], [2, Lemma 3.4], [3, Proposition 5.3], [4, Theorem 3.1.6] $\mathrm{Fix}(T)$ is closed and convex;
(ii) [1, Theorem 3.14] $\hat{x} = P_C(x)$ if and only if $\langle x - \hat{x}, y - \hat{x} \rangle \le 0$ ($y \in C$).

Proposition 2.1(i) guarantees that, if $\mathrm{Fix}(T) \neq \emptyset$, $P_{\mathrm{Fix}(T)}(x)$ exists for all $x \in H$. This paper discusses the following fixed point problem.
Problem 2.1 Suppose that $T \colon H \to H$ is nonexpansive with $\mathrm{Fix}(T) \neq \emptyset$. Then
$$\text{find } x^{\star} \in H \text{ such that } T(x^{\star}) = x^{\star}.$$

2.2 The Halpern algorithm and our algorithm
The Halpern algorithm generates the sequence $(x_n)_{n \in \mathbb{N}}$ [6, Subchapter 1.2], [9, 10] as follows: given $x_0 \in H$ and $(\alpha_n)_{n \in \mathbb{N}}$ satisfying $\lim_{n \to \infty} \alpha_n = 0$, $\sum_{n=0}^{\infty} \alpha_n = \infty$, and $\sum_{n=0}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$,
$$x_{n+1} := \alpha_n x_0 + (1 - \alpha_n) T(x_n) \quad (n \in \mathbb{N}). \quad (1)$$
Algorithm (1) strongly converges to $P_{\mathrm{Fix}(T)}(x_0)$ ($\in \mathrm{Fix}(T)$) [6, Theorem 6.17], [9, 10].

Here, we shall discuss Problem 2.1 when $\mathrm{Fix}(T)$ is the set of all minimizers of a convex, continuously Fréchet differentiable functional $f$ over $H$ and see that algorithm (1) is based on the steepest descent method [16, Subchapter 3.3] to minimize $f$ over $H$. Suppose that the gradient of $f$, denoted by $\nabla f$, is Lipschitz continuous with a constant $L > 0$, and define $T_f \colon H \to H$ by
$$T_f := I - \alpha \nabla f, \quad (2)$$
where $\alpha \in (0, 2/L]$ and $I \colon H \to H$ stands for the identity mapping. Accordingly, $T_f$ satisfies the nonexpansivity condition (see, e.g., [12, Proposition 2.3]) and
$$\mathrm{Fix}(T_f) = \operatorname*{argmin}_{x \in H} f(x) := \Bigl\{ x^{\star} \in H : f(x^{\star}) = \min_{x \in H} f(x) \Bigr\}.$$
Therefore, algorithm (1) with $T := T_f$ can be expressed as follows:
$$d_{n+1}^{f} := -\nabla f(x_n), \quad y_n := T_f(x_n) = x_n - \alpha \nabla f(x_n) = x_n + \alpha d_{n+1}^{f}, \quad x_{n+1} := \alpha_n x_0 + (1 - \alpha_n) y_n \quad (n \in \mathbb{N}). \quad (3)$$
This implies that algorithm (3) uses the steepest descent direction [16, Subchapter 3.3] $d_{n+1}^{f,\mathrm{SDD}} := -\nabla f(x_n)$ of $f$ at $x_n$, and hence algorithm (3) is based on the steepest descent method.

Meanwhile, conjugate gradient methods [16, Chapter 5] are popular acceleration methods for the steepest descent method. The conjugate gradient direction of $f$ at $x_n$ ($n \in \mathbb{N}$) is $d_{n+1}^{f,\mathrm{CGD}} := -\nabla f(x_n) + \beta_n d_n^{f,\mathrm{CGD}}$, where $d_0^{f,\mathrm{CGD}} := -\nabla f(x_0)$ and $(\beta_n)_{n \in \mathbb{N}} \subset [0, \infty)$, which, together with (2), implies that
$$d_{n+1}^{f,\mathrm{CGD}} = \frac{1}{\alpha}\bigl(T_f(x_n) - x_n\bigr) + \beta_n d_n^{f,\mathrm{CGD}}. \quad (4)$$
Therefore, by replacing $d_{n+1}^{f} := -\nabla f(x_n)$ in algorithm (3) with $d_{n+1}^{f,\mathrm{CGD}}$ defined by (4), we can formulate a novel algorithm for solving Problem 2.1.
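The steepest-descent view of algorithm (1) can be checked numerically. The following sketch is ours, not the paper's code: it applies the Halpern iteration (1) with $T := T_f = I - \alpha \nabla f$ to the toy objective $f(x) = \tfrac{1}{2}\|x - b\|^2$ (an assumption for illustration), whose gradient $x - b$ is Lipschitz with $L = 1$, so any $\alpha \in (0, 2/L]$ is admissible.

```python
import numpy as np

# Toy objective f(x) = 0.5 * ||x - b||^2, so grad f(x) = x - b and L = 1.
b = np.array([3.0, -1.0])
grad_f = lambda x: x - b

alpha = 0.5                              # step size in (0, 2/L]
T_f = lambda x: x - alpha * grad_f(x)    # nonexpansive; Fix(T_f) = argmin f = {b}

x0 = np.zeros(2)
x = x0.copy()
for n in range(5000):
    a_n = 1.0 / (n + 1)                  # satisfies (C1)-(C3)
    x = a_n * x0 + (1.0 - a_n) * T_f(x)  # Halpern iteration (1)

print(np.linalg.norm(x - b))             # distance to the unique fixed point b
```

Since $\mathrm{Fix}(T_f) = \{b\}$, the iterates approach $P_{\mathrm{Fix}(T_f)}(x_0) = b$, at a pace governed by the anchoring coefficients $\alpha_n$.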
Before presenting the algorithm, we provide the following lemmas, which are used to prove the main theorem.
Proposition 2.2 [6, Lemmas 1.2 and 1.3] Let $(a_n)_{n \in \mathbb{N}}, (c_n)_{n \in \mathbb{N}} \subset [0, \infty)$, $(b_n)_{n \in \mathbb{N}} \subset \mathbb{R}$, and $(\alpha_n)_{n \in \mathbb{N}} \subset (0, 1)$ be sequences with
$$a_{n+1} \le (1 - \alpha_n) a_n + \alpha_n b_n + c_n \quad (n \in \mathbb{N}).$$
Suppose that $\sum_{n=0}^{\infty} \alpha_n = \infty$, $\limsup_{n \to \infty} b_n \le 0$, and $\sum_{n=0}^{\infty} c_n < \infty$. Then $\lim_{n \to \infty} a_n = 0$.

Proposition 2.3 [19, Lemma 1] Suppose that $(x_n)_{n \in \mathbb{N}} \subset H$ weakly converges to $x \in H$ and $y \neq x$. Then $\liminf_{n \to \infty} \|x_n - x\| < \liminf_{n \to \infty} \|x_n - y\|$.

3 Acceleration of the Halpern algorithm
We present the following algorithm.

Algorithm 3.1
Step 0. Choose $\mu \in (0, 1]$, $\alpha > 0$, and $x_0 \in H$ arbitrarily, and set $(\alpha_n)_{n \in \mathbb{N}} \subset (0, 1)$ and $(\beta_n)_{n \in \mathbb{N}} \subset [0, \infty)$. Compute $d_0 := (T(x_0) - x_0)/\alpha$.
Step 1. Given $x_n, d_n \in H$, compute $d_{n+1} \in H$ by
$$d_{n+1} := \frac{1}{\alpha}\bigl(T(x_n) - x_n\bigr) + \beta_n d_n.$$
Compute $x_{n+1} \in H$ as follows:
$$y_n := x_n + \alpha d_{n+1}, \qquad x_{n+1} := \mu \alpha_n x_0 + (1 - \mu \alpha_n) y_n.$$
Put $n := n + 1$, and go to Step 1.

We can check that Algorithm 3.1 coincides with the Halpern algorithm (1) when $\beta_n := 0$ ($n \in \mathbb{N}$) and $\mu := 1$.

This section makes the following assumptions.

Assumption 3.1 The sequences $(\alpha_n)_{n \in \mathbb{N}}$ and $(\beta_n)_{n \in \mathbb{N}}$ satisfy^a
(C1) $\lim_{n \to \infty} \alpha_n = 0$,
(C2) $\sum_{n=0}^{\infty} \alpha_n = \infty$,
(C3) $\sum_{n=0}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$,
(C4) $\beta_n \le \alpha_n^2$ ($n \in \mathbb{N}$).
Moreover, $(x_n)_{n \in \mathbb{N}}$ in Algorithm 3.1 satisfies
(C5) $(T(x_n) - x_n)_{n \in \mathbb{N}}$ is bounded.

Let us do a convergence analysis of Algorithm 3.1.

Theorem 3.1 Under Assumption 3.1, the sequence $(x_n)_{n \in \mathbb{N}}$ generated by Algorithm 3.1 strongly converges to $P_{\mathrm{Fix}(T)}(x_0)$.

Algorithm 3.1 when $\beta_n := 0$ ($n \in \mathbb{N}$) is the Halpern algorithm defined by
$$x_{n+1} := \mu \alpha_n x_0 + (1 - \mu \alpha_n) T(x_n) \quad (n \in \mathbb{N}).$$
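Algorithm 3.1 is short enough to state in code. The sketch below is ours (the paper's experiments were written in Java); the nonexpansive mapping $T$, chosen here as a simple contraction with $\mathrm{Fix}(T) = \{b\}$, is a stand-in assumption.

```python
import numpy as np

# Stand-in nonexpansive mapping with Fix(T) = {b} (an illustrative assumption).
b = np.array([3.0, -1.0])
T = lambda x: 0.5 * (x + b)

# Step 0: choose mu in (0, 1], alpha > 0, x_0, and the sequences (alpha_n), (beta_n).
mu, alpha = 1.0, 1.0
x0 = np.zeros(2)
x = x0.copy()
d = (T(x0) - x0) / alpha                 # d_0 := (T(x_0) - x_0) / alpha

for n in range(5000):
    a_n = 1.0 / (n + 1)                  # (C1)-(C3)
    b_n = 1.0 / (n + 1) ** 2             # (C4): beta_n <= alpha_n^2
    # Step 1: conjugate-gradient-like direction, then the Halpern-type update.
    d = (T(x) - x) / alpha + b_n * d
    y = x + alpha * d
    x = mu * a_n * x0 + (1.0 - mu * a_n) * y

print(np.linalg.norm(x - b))             # should approach 0
```

Setting `b_n = 0.0` and `mu = 1.0` in this loop recovers the Halpern iteration (1), matching the remark above.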
Hence, the nonexpansivity of $T$ ensures that, for all $x \in \mathrm{Fix}(T)$ and for all $n \in \mathbb{N}$,
$$\|x_{n+1} - x\| = \|\mu \alpha_n (x_0 - x) + (1 - \mu \alpha_n)(T(x_n) - x)\| \le \mu \alpha_n \|x_0 - x\| + (1 - \mu \alpha_n) \|T(x_n) - x\| \le \mu \alpha_n \|x_0 - x\| + (1 - \mu \alpha_n) \|x_n - x\|. \quad (5)$$
Suppose that $n := 0$. From (5), we have $\|x_1 - x\| \le \mu \alpha_0 \|x_0 - x\| + (1 - \mu \alpha_0) \|x_0 - x\| = \|x_0 - x\|$. Assume that $\|x_m - x\| \le \|x_0 - x\|$ for some $m \in \mathbb{N}$. Then (5) implies that
$$\|x_{m+1} - x\| \le \mu \alpha_m \|x_0 - x\| + (1 - \mu \alpha_m) \|x_m - x\| \le \mu \alpha_m \|x_0 - x\| + (1 - \mu \alpha_m) \|x_0 - x\| = \|x_0 - x\|.$$
Hence, induction guarantees that $\|x_n - x\| \le \|x_0 - x\|$ ($n \in \mathbb{N}$). Therefore, we find that $(x_n)_{n \in \mathbb{N}}$ is bounded. Moreover, since the nonexpansivity of $T$ ensures that $(T(x_n))_{n \in \mathbb{N}}$ is also bounded, (C5) holds. Accordingly, Theorem 3.1 says that if $(\alpha_n)_{n \in \mathbb{N}}$ satisfies (C1)-(C3), Algorithm 3.1 when $\beta_n := 0$ ($n \in \mathbb{N}$) (i.e., the Halpern algorithm) strongly converges to $P_{\mathrm{Fix}(T)}(x_0)$. This means that Theorem 3.1 is a generalization of the convergence analysis of the Halpern algorithm.

3.1 Proof of Theorem 3.1
We first show the following lemma.

Lemma 3.1 Suppose that Assumption 3.1 holds. Then $(d_n)_{n \in \mathbb{N}}$, $(x_n)_{n \in \mathbb{N}}$, and $(y_n)_{n \in \mathbb{N}}$ are bounded.

Proof We have from (C1) and (C4) that $\lim_{n \to \infty} \beta_n = 0$. Accordingly, there exists $n_0 \in \mathbb{N}$ such that $\beta_n \le 1/2$ for all $n \ge n_0$. Define $M_1 := \max\{\|d_{n_0}\|, (2/\alpha) \sup_{n \in \mathbb{N}} \|T(x_n) - x_n\|\}$. Then (C5) implies that $M_1 < \infty$. Assume that $\|d_n\| \le M_1$ for some $n \ge n_0$. The triangle inequality ensures that
$$\|d_{n+1}\| = \Bigl\| \frac{1}{\alpha}\bigl(T(x_n) - x_n\bigr) + \beta_n d_n \Bigr\| \le \frac{1}{\alpha} \|T(x_n) - x_n\| + \beta_n \|d_n\| \le M_1,$$
which means that $\|d_n\| \le M_1$ for all $n \ge n_0$, i.e., $(d_n)_{n \in \mathbb{N}}$ is bounded. The definition of $y_n$ ($n \in \mathbb{N}$) implies that
$$y_n = x_n + \alpha \Bigl( \frac{1}{\alpha}\bigl(T(x_n) - x_n\bigr) + \beta_n d_n \Bigr) = T(x_n) + \alpha \beta_n d_n. \quad (6)$$
The nonexpansivity of $T$ and (6) imply that, for all $x \in \mathrm{Fix}(T)$ and for all $n \ge n_0$,
$$\|y_n - x\| = \|T(x_n) + \alpha \beta_n d_n - x\| \le \|T(x_n) - T(x)\| + \alpha \beta_n \|d_n\| \le \|x_n - x\| + \alpha M_1 \beta_n.$$
Therefore, we find that, for all $x \in \mathrm{Fix}(T)$ and for all $n \ge n_0$,
$$\|x_{n+1} - x\| = \|\mu \alpha_n (x_0 - x) + (1 - \mu \alpha_n)(y_n - x)\| \le \mu \alpha_n \|x_0 - x\| + (1 - \mu \alpha_n) \|y_n - x\| \le \mu \alpha_n \|x_0 - x\| + (1 - \mu \alpha_n)\bigl\{\|x_n - x\| + \alpha M_1 \beta_n\bigr\} \le (1 - \mu \alpha_n) \|x_n - x\| + \mu \alpha_n \|x_0 - x\| + \alpha M_1 \beta_n,$$
which, together with (C4) and $\alpha_n < 1$ ($n \in \mathbb{N}$), means that, for all $x \in \mathrm{Fix}(T)$ and for all $n \ge n_0$,
$$\|x_{n+1} - x\| \le (1 - \mu \alpha_n) \|x_n - x\| + \mu \alpha_n \Bigl( \|x_0 - x\| + \frac{\alpha M_1}{\mu} \Bigr).$$
Induction guarantees that, for all $x \in \mathrm{Fix}(T)$ and for all $n \ge n_0$,
$$\|x_n - x\| \le \|x_0 - x\| + \frac{\alpha M_1}{\mu}.$$
Therefore, $(x_n)_{n \in \mathbb{N}}$ is bounded. The definition of $y_n$ ($n \in \mathbb{N}$) and the boundedness of $(x_n)_{n \in \mathbb{N}}$ and $(d_n)_{n \in \mathbb{N}}$ imply that $(y_n)_{n \in \mathbb{N}}$ is also bounded. This completes the proof.

Lemma 3.2 Suppose that Assumption 3.1 holds. Then
(i) $\lim_{n \to \infty} \|x_{n+1} - x_n\| = 0$;
(ii) $\lim_{n \to \infty} \|x_n - T(x_n)\| = 0$;
(iii) $\limsup_{n \to \infty} \langle x_0 - x^{\star}, y_n - x^{\star} \rangle \le 0$, where $x^{\star} := P_{\mathrm{Fix}(T)}(x_0)$.

Proof (i) Equation (6), the triangle inequality, and the nonexpansivity of $T$ imply that, for all $n \in \mathbb{N}$,
$$\|y_{n+1} - y_n\| = \|T(x_{n+1}) - T(x_n) + \alpha(\beta_{n+1} d_{n+1} - \beta_n d_n)\| \le \|T(x_{n+1}) - T(x_n)\| + \alpha \|\beta_{n+1} d_{n+1} - \beta_n d_n\| \le \|x_{n+1} - x_n\| + \alpha \bigl(\beta_{n+1} \|d_{n+1}\| + \beta_n \|d_n\|\bigr),$$
which, together with $\|d_n\| \le M_1$ ($n \ge n_0$) and (C4), implies that, for all $n \ge n_0$,
$$\|y_{n+1} - y_n\| \le \|x_{n+1} - x_n\| + \alpha M_1 \bigl(\alpha_{n+1}^2 + \alpha_n^2\bigr). \quad (7)$$
On the other hand, from $\alpha_n \le |\alpha_{n+1} - \alpha_n| + \alpha_{n+1}$ and $\alpha_n < 1$ ($n \in \mathbb{N}$), we have that, for all $n \in \mathbb{N}$,
$$\alpha_{n+1}^2 + \alpha_n^2 \le \alpha_{n+1}^2 + \alpha_n \bigl(|\alpha_{n+1} - \alpha_n| + \alpha_{n+1}\bigr) \le (\alpha_{n+1} + \alpha_n)\alpha_{n+1} + |\alpha_{n+1} - \alpha_n|. \quad (8)$$
We also find that, for all $n \in \mathbb{N} \setminus \{0\}$,
$$\|x_{n+1} - x_n\| = \bigl\|\bigl(\mu \alpha_n x_0 + (1 - \mu \alpha_n) y_n\bigr) - \bigl(\mu \alpha_{n-1} x_0 + (1 - \mu \alpha_{n-1}) y_{n-1}\bigr)\bigr\| = \bigl\|\mu(\alpha_n - \alpha_{n-1}) x_0 + (1 - \mu \alpha_n)(y_n - y_{n-1}) + \mu(\alpha_{n-1} - \alpha_n) y_{n-1}\bigr\| \le \mu |\alpha_n - \alpha_{n-1}| \bigl(\|x_0\| + \|y_{n-1}\|\bigr) + (1 - \mu \alpha_n) \|y_n - y_{n-1}\| \le (1 - \mu \alpha_n) \|y_n - y_{n-1}\| + M_2 |\alpha_n - \alpha_{n-1}|,$$
where $M_2 := \sup_{n \in \mathbb{N}} \mu (\|x_0\| + \|y_n\|) < \infty$. Hence, (7) and (8) ensure that, for all $n \ge n_0$,
$$\|x_{n+1} - x_n\| \le (1 - \mu \alpha_n) \|x_n - x_{n-1}\| + \alpha M_1 \bigl\{(\alpha_n + \alpha_{n-1})\alpha_n + |\alpha_n - \alpha_{n-1}|\bigr\} + M_2 |\alpha_n - \alpha_{n-1}| = (1 - \mu \alpha_n) \|x_n - x_{n-1}\| + (\alpha M_1 + M_2) |\alpha_n - \alpha_{n-1}| + \frac{\alpha M_1 (\alpha_n + \alpha_{n-1})}{\mu}\,\mu \alpha_n.$$
Proposition 2.2, (C1), (C2), and (C3) lead us to
$$\lim_{n \to \infty} \|x_{n+1} - x_n\| = 0. \quad (9)$$
(ii) From $\|x_{n+1} - y_n\| = \mu \alpha_n \|x_0 - y_n\| \le M_2 \alpha_n$ ($n \in \mathbb{N}$), (C1) means that $\lim_{n \to \infty} \|x_{n+1} - y_n\| = 0$. Since the triangle inequality ensures that $\|y_n - x_n\| \le \|y_n - x_{n+1}\| + \|x_{n+1} - x_n\|$ ($n \in \mathbb{N}$), we find from (9) that
$$\lim_{n \to \infty} \|d_{n+1}\| = \frac{1}{\alpha} \lim_{n \to \infty} \|y_n - x_n\| = 0. \quad (10)$$
From the definition of $d_{n+1}$ ($n \in \mathbb{N}$), we have, for all $n \ge n_0$,
$$0 \le \frac{1}{\alpha} \|T(x_n) - x_n\| \le \|d_{n+1}\| + \beta_n \|d_n\| \le \|d_{n+1}\| + M_1 \beta_n.$$
Since equation (10) and $\lim_{n \to \infty} \beta_n = 0$ guarantee that the right-hand side of the above inequality converges to 0, we find that
$$\lim_{n \to \infty} \|T(x_n) - x_n\| = 0. \quad (11)$$
(iii) From the definition of the limit superior of $(\langle x_0 - x^{\star}, y_n - x^{\star} \rangle)_{n \in \mathbb{N}}$, there exists $(y_{n_k})_{k \in \mathbb{N}}$ ($\subset (y_n)_{n \in \mathbb{N}}$) such that
$$\limsup_{n \to \infty} \langle x_0 - x^{\star}, y_n - x^{\star} \rangle = \lim_{k \to \infty} \langle x_0 - x^{\star}, y_{n_k} - x^{\star} \rangle. \quad (12)$$
Moreover, since $(y_{n_k})_{k \in \mathbb{N}}$ is bounded, there exists $(y_{n_{k_i}})_{i \in \mathbb{N}}$ ($\subset (y_{n_k})_{k \in \mathbb{N}}$) which weakly converges to some point $\hat{y}$ ($\in H$). Equation (10) guarantees that $(x_{n_{k_i}})_{i \in \mathbb{N}}$ weakly converges to $\hat{y}$.
We shall show that $\hat{y} \in \mathrm{Fix}(T)$. Assume that $\hat{y} \notin \mathrm{Fix}(T)$, i.e., $\hat{y} \neq T(\hat{y})$. Proposition 2.3, (11), and the nonexpansivity of $T$ ensure that
$$\liminf_{i \to \infty} \|x_{n_{k_i}} - \hat{y}\| < \liminf_{i \to \infty} \|x_{n_{k_i}} - T(\hat{y})\| = \liminf_{i \to \infty} \|x_{n_{k_i}} - T(x_{n_{k_i}}) + T(x_{n_{k_i}}) - T(\hat{y})\| = \liminf_{i \to \infty} \|T(x_{n_{k_i}}) - T(\hat{y})\| \le \liminf_{i \to \infty} \|x_{n_{k_i}} - \hat{y}\|.$$
This is a contradiction. Hence, $\hat{y} \in \mathrm{Fix}(T)$. Accordingly, (12) and Proposition 2.1(ii) guarantee that
$$\limsup_{n \to \infty} \langle x_0 - x^{\star}, y_n - x^{\star} \rangle = \lim_{i \to \infty} \langle x_0 - x^{\star}, y_{n_{k_i}} - x^{\star} \rangle = \langle x_0 - x^{\star}, \hat{y} - x^{\star} \rangle \le 0.$$
This completes the proof.

Now, we are in a position to prove Theorem 3.1.

Proof of Theorem 3.1 The inequality $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle$ ($x, y \in H$), (6), and the nonexpansivity of $T$ imply that, for all $n \in \mathbb{N}$,
$$\|y_n - x^{\star}\|^2 = \|T(x_n) - x^{\star} + \alpha \beta_n d_n\|^2 \le \|T(x_n) - T(x^{\star})\|^2 + 2\alpha \beta_n \langle y_n - x^{\star}, d_n \rangle \le \|x_n - x^{\star}\|^2 + M_3 \alpha_n^2,$$
where $\beta_n \le \alpha_n^2$ ($n \in \mathbb{N}$) and $M_3 := \sup_{n \in \mathbb{N}} 2\alpha |\langle y_n - x^{\star}, d_n \rangle| < \infty$. We thus have that, for all $n \in \mathbb{N}$,
$$\|x_{n+1} - x^{\star}\|^2 = \|\mu \alpha_n (x_0 - x^{\star}) + (1 - \mu \alpha_n)(y_n - x^{\star})\|^2 = \mu^2 \alpha_n^2 \|x_0 - x^{\star}\|^2 + (1 - \mu \alpha_n)^2 \|y_n - x^{\star}\|^2 + 2\mu \alpha_n (1 - \mu \alpha_n) \langle x_0 - x^{\star}, y_n - x^{\star} \rangle \le \mu^2 \alpha_n^2 \|x_0 - x^{\star}\|^2 + (1 - \mu \alpha_n)^2 \bigl\{\|x_n - x^{\star}\|^2 + M_3 \alpha_n^2\bigr\} + 2\mu \alpha_n (1 - \mu \alpha_n) \langle x_0 - x^{\star}, y_n - x^{\star} \rangle \le (1 - \mu \alpha_n) \|x_n - x^{\star}\|^2 + \Bigl\{ \mu \alpha_n \|x_0 - x^{\star}\|^2 + \frac{M_3 \alpha_n}{\mu} + 2(1 - \mu \alpha_n) \langle x_0 - x^{\star}, y_n - x^{\star} \rangle \Bigr\} \mu \alpha_n.$$
Proposition 2.2, (C1), (C2), and Lemma 3.2(iii) lead one to deduce that
$$\lim_{n \to \infty} \|x_{n+1} - x^{\star}\|^2 = 0.$$
This guarantees that $(x_n)_{n \in \mathbb{N}}$ generated by Algorithm 3.1 strongly converges to $x^{\star} := P_{\mathrm{Fix}(T)}(x_0)$.

Suppose that $\mathrm{Fix}(T)$ is bounded. Then we can set a bounded, closed convex set $C$ ($\supset \mathrm{Fix}(T)$) such that $P_C$ can be computed within a finite number of arithmetic operations (e.g., $C$ is a closed ball with a large enough radius). Hence, we can compute
$$x_{n+1} := P_C\bigl(\mu \alpha_n x_0 + (1 - \mu \alpha_n) y_n\bigr) \quad (13)$$
instead of $x_{n+1}$ in Algorithm 3.1. From $(x_n)_{n \in \mathbb{N}} \subset C$, the boundedness of $C$ means that $(x_n)_{n \in \mathbb{N}}$ is bounded. The nonexpansivity of $T$ guarantees that $\|T(x_n) - T(x)\| \le \|x_n - x\|$ ($x \in \mathrm{Fix}(T)$), which means that $(T(x_n))_{n \in \mathbb{N}}$ is bounded. Therefore, (C5) holds. We can prove that Algorithm 3.1 with (13) strongly converges to a point in $\mathrm{Fix}(T)$ by referring to the proof of Theorem 3.1.

Let us consider the case where $\mathrm{Fix}(T)$ is unbounded. In this case, we cannot choose a bounded $C$ satisfying $\mathrm{Fix}(T) \subset C$. Although we can execute Algorithm 3.1, we need to verify the boundedness of $(T(x_n) - x_n)_{n \in \mathbb{N}}$. Instead, we can apply the Halpern algorithm (1) to this case without any problem. However, the Halpern algorithm would converge slowly because it is based on the steepest descent method (see Section 1). Hence, in this case, it would be desirable to execute not only the Halpern algorithm but also Algorithm 3.1.

4 Numerical examples and conclusion
Let us apply the Halpern algorithm (1) and Algorithm 3.1 to the following convex feasibility problem [5], [1, Example 5.21].

Problem 4.1 Given nonempty, closed convex sets $C_i \subset \mathbb{R}^N$ ($i = 0, 1, \ldots, m$),
$$\text{find } x^{\star} \in C := \bigcap_{i=0}^{m} C_i,$$
where one assumes that $C \neq \emptyset$.

Define a mapping $T \colon \mathbb{R}^N \to \mathbb{R}^N$ by
$$T := P_0 \Biggl( \frac{1}{m} \sum_{i=1}^{m} P_i \Biggr), \quad (14)$$
where $P_i := P_{C_i}$ ($i = 0, 1, \ldots, m$) stands for the metric projection onto $C_i$. Since $P_i$ ($i = 0, 1, \ldots, m$) is nonexpansive, $T$ defined by (14) is also nonexpansive. Moreover, we find that
$$\mathrm{Fix}(T) = \mathrm{Fix}(P_0) \cap \bigcap_{i=1}^{m} \mathrm{Fix}(P_i) = C_0 \cap \bigcap_{i=1}^{m} C_i = C.$$
Therefore, Problem 4.1 coincides with Problem 2.1 with $T$ defined by (14).
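The mapping (14) can be sketched directly. The following is our illustration, using closed balls for the $C_i$ so the projections have a closed form; the centers, radii, and plain Picard iteration below are assumptions for illustration, not the paper's data.

```python
import numpy as np

# Metric projection onto the closed ball with center c and radius r.
def ball_proj(c, r):
    def P(x):
        d = np.linalg.norm(x - c)
        return x if d <= r else c + (r / d) * (x - c)
    return P

centers = [np.zeros(3), np.array([0.5, 0.0, 0.0]), np.array([0.0, 0.5, 0.0])]
radii = [1.0, 1.0, 1.0]
P = [ball_proj(c, r) for c, r in zip(centers, radii)]   # P[0] plays the role of P_0

def T(x):
    # T := P_0( (1/m) * sum_{i=1}^m P_i ), cf. (14), with m = len(P) - 1.
    m = len(P) - 1
    return P[0](sum(P[i](x) for i in range(1, m + 1)) / m)

# Picard iteration of the nonexpansive T; its fixed point set is C_0 ∩ C_1 ∩ C_2.
x = np.array([5.0, 5.0, 5.0])
for _ in range(2000):
    x = T(x)

print([float(np.linalg.norm(x - c)) for c in centers])  # all should be <= 1
```

Here the three balls have a common interior point, so the iterates settle in the intersection, confirming $\mathrm{Fix}(T) = C$ numerically.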
The experiment used an Apple MacBook Air with a 1.30 GHz Intel(R) Core(TM) i5-4250U CPU and 4 GB DDR3 memory. The Halpern algorithm (1) and Algorithm 3.1 were written in Java. The operating system of the computer was Mac OS X version 10.8.5.
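As a rough illustration, the comparison can be re-created at small scale. The sketch below is ours, in Python rather than the paper's Java, with a smaller dimension and different random data than the experiment described below, so the iteration counts will differ from the reported ones; the ball constraints, $\mu = 10^{-5}$, $\alpha_n = 1/(n+1)$, and $\beta_n = 1/(n+1)^2$ follow the setup in Section 4, while the seed and dimension are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 20, 3
centers = [np.zeros(N)] + [rng.uniform(-1 / np.sqrt(N), 1 / np.sqrt(N), N)
                           for _ in range(m)]
radii = [1.0] * (m + 1)
x0 = rng.uniform(-16.0, 16.0, N)     # random initial point, shared by both runs

def proj(i, x):                      # closed-form projection onto the ball C_i
    d = np.linalg.norm(x - centers[i])
    return x if d <= radii[i] else centers[i] + (radii[i] / d) * (x - centers[i])

def T(x):                            # T := P_0((1/m) * sum_{i>=1} P_i), cf. (14)
    return proj(0, sum(proj(i, x) for i in range(1, m + 1)) / m)

def run(use_cg, mu=1e-5, alpha=1.0, tol=1e-6, max_iter=10**5):
    x, d = x0.copy(), (T(x0) - x0) / alpha
    for n in range(max_iter):
        if np.linalg.norm(T(x) - x) < tol:
            return n, x
        a_n, b_n = 1.0 / (n + 1), 1.0 / (n + 1) ** 2
        if use_cg:                   # Algorithm 3.1
            d = (T(x) - x) / alpha + b_n * d
            x = mu * a_n * x0 + (1.0 - mu * a_n) * (x + alpha * d)
        else:                        # Halpern algorithm (1) with alpha_n := mu/(n+1)
            x = mu * a_n * x0 + (1.0 - mu * a_n) * T(x)
    return max_iter, x

n_h, x_h = run(use_cg=False)
n_p, x_p = run(use_cg=True)
print("Halpern:", n_h, "iterations; Algorithm 3.1:", n_p, "iterations")
```

Both runs stop once the residual $\|T(x_n) - x_n\|$ falls below the tolerance, which is the stopping rule used in the experiment below.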
Figure 1 Behavior of $\|T(x_n) - x_n\|$ for the Halpern algorithm and Algorithm 3.1 (proposed). (The Halpern algorithm took 850 iterations to satisfy $\|T(x_n) - x_n\| < 10^{-6}$, whereas Algorithm 3.1 took only six.)

We set $\alpha := 1$, $\mu := 1/10^5$, $\alpha_n := 1/(n+1)$ ($n \in \mathbb{N}$), and $\beta_n := 1/(n+1)^2$ ($n \in \mathbb{N}$) in Algorithm 3.1 and compared Algorithm 3.1 with the Halpern algorithm (1) with $\alpha_n := \mu/(n+1)$ ($n \in \mathbb{N}$). In the experiment, we set $C_i$ ($i = 0, 1, \ldots, m$) as a closed ball with center $c_i \in \mathbb{R}^N$ and radius $r_i > 0$. Thus, $P_i$ ($i = 0, 1, \ldots, m$) can be computed with
$$P_i(x) := c_i + \frac{r_i}{\|x - c_i\|}(x - c_i) \quad \text{if } \|x - c_i\| > r_i, \qquad P_i(x) := x \quad \text{if } \|x - c_i\| \le r_i.$$
We set $N := 100$, $m := 3$, $r_i := 1$ ($i = 0, 1, 2, 3$), and $c_0 := 0$. The experiment used random vectors $c_i \in (-1/\sqrt{N}, 1/\sqrt{N})^N$ ($i = 1, 2, 3$) generated by the java.util.Random class so as to satisfy $C \neq \emptyset$. We also used the java.util.Random class to set a random initial point in $(-16, 16)^N$.

Figure 1 describes the behaviors of $\|T(x_n) - x_n\|$ for the Halpern algorithm (1) and Algorithm 3.1 (proposed). The x-axis and y-axis represent the elapsed time and the value of $\|T(x_n) - x_n\|$, respectively. The results show that, compared with the Halpern algorithm, Algorithm 3.1 dramatically reduces the time required to satisfy $\|T(x_n) - x_n\| < 10^{-6}$. We found that the Halpern algorithm took 850 iterations to satisfy $\|T(x_n) - x_n\| < 10^{-6}$, whereas Algorithm 3.1 took only six.

This paper presented an algorithm to accelerate the Halpern algorithm for finding a fixed point of a nonexpansive mapping on a real Hilbert space, together with its convergence analysis. The convergence analysis guarantees that the proposed algorithm strongly converges to a fixed point of a nonexpansive mapping under certain assumptions. We numerically compared the abilities of the proposed and Halpern algorithms in solving a concrete fixed point problem. The results showed that the proposed algorithm performs better than the Halpern algorithm.

Competing interests
The authors declare that they have no competing interests.
Authors' contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Acknowledgements
We are sincerely grateful to the associate editor Lai-Jiu Lin and the two anonymous reviewers for helping us improve the original manuscript.

Endnote
^a Examples of $(\alpha_n)_{n \in \mathbb{N}}$ and $(\beta_n)_{n \in \mathbb{N}}$ satisfying (C1)-(C4) are $\alpha_n := 1/(n+1)^a$ and $\beta_n := 1/(n+1)^{2a}$ ($n \in \mathbb{N}$), where $a \in (0, 1]$.

Received: 5 June 2014. Accepted: 3 September 2014. Published: 26 Sep 2014
References
1. Bauschke, HH, Combettes, PL: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, Berlin (2011)
2. Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (1990)
3. Goebel, K, Reich, S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)
4. Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
5. Bauschke, HH, Borwein, JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38(3), 367-426 (1996)
6. Berinde, V: Iterative Approximation of Fixed Points. Springer, Berlin (2007)
7. Krasnosel'skiĭ, MA: Two remarks on the method of successive approximations. Usp. Mat. Nauk 10, 123-127 (1955)
8. Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506-510 (1953)
9. Halpern, B: Fixed points of nonexpansive maps. Bull. Am. Math. Soc. 73, 957-961 (1967)
10. Wittmann, R: Approximation of fixed points of nonexpansive mappings. Arch. Math. 58(5), 486-491 (1992)
11. Nakajo, K, Takahashi, W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279, 372-379 (2003)
12. Iiduka, H: Iterative algorithm for solving triple-hierarchical constrained optimization problem. J. Optim. Theory Appl. 148, 580-592 (2011)
13. Iiduka, H: Fixed point optimization algorithm and its application to power control in CDMA data networks. Math. Program. 133, 227-242 (2012)
14. Iiduka, H: Iterative algorithm for triple-hierarchical constrained nonconvex optimization problem and its application to network bandwidth allocation. SIAM J. Optim. 22(3), 862-878 (2012)
15. Iiduka, H: Fixed point optimization algorithms for distributed optimization in networked systems. SIAM J. Optim. 23, 1-26 (2013)
16. Nocedal, J, Wright, SJ: Numerical Optimization, 2nd edn.
Springer Series in Operations Research and Financial Engineering. Springer, Berlin (2006)
17. Iiduka, H: Acceleration method for convex optimization over the fixed point set of a nonexpansive mapping. Math. Program. (2014). doi:10.1007/s10107-013-0741-1
18. Iiduka, H, Yamada, I: A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J. Optim. 19(4), 1881-1893 (2009)
19. Opial, Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591-597 (1967)

doi:10.1186/1687-1812-2014-202
Cite this article as: Sakurai and Iiduka: Acceleration of the Halpern algorithm to search for a fixed point of a nonexpansive mapping. Fixed Point Theory and Applications 2014, 2014:202