The magnitude of the minimal displacement vector for compositions and convex combinations of firmly nonexpansive mappings

arXiv:1712.00487v1 [math.OC] 1 Dec 2017

Heinz H. Bauschke* and Walaa M. Moursi†

December 1, 2017

* Mathematics, University of British Columbia, Kelowna, B.C. V1V 1V7, Canada. E-mail: heinz.bauschke@ubc.ca.
† Simons Institute for the Theory of Computing, UC Berkeley, Melvin Calvin Laboratory, #2190, Berkeley, CA 94720, USA, and Mansoura University, Faculty of Science, Mathematics Department, Mansoura 35516, Egypt. E-mail: walaa.moursi@gmail.com.

Abstract

Maximally monotone operators and firmly nonexpansive mappings play key roles in modern optimization and nonlinear analysis. Five years ago, it was shown that if finitely many firmly nonexpansive operators are all asymptotically regular (i.e., they have or almost have fixed points), then the same is true for compositions and convex combinations. In this paper, we derive bounds on the magnitude of the minimal displacement vectors of compositions and of convex combinations in terms of the displacement vectors of the underlying operators. Our results completely generalize earlier works. Moreover, we present various examples illustrating that our bounds are sharp.

2010 Mathematics Subject Classification: Primary 47H05, 47H09; Secondary 47H10, 90C25.

Keywords: Asymptotically regular, firmly nonexpansive mapping, maximally monotone operator, minimal displacement vector, nonexpansive mapping, resolvent.

1 Introduction and Standing Assumptions

Throughout this paper,

$X$ is a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$   (1)

and induced norm $\|\cdot\|$. Recall that $T\colon X\to X$ is firmly nonexpansive (see, e.g., [3], [14], and [15] for further information) if

$(\forall (x,y)\in X\times X)\quad \|Tx-Ty\|^2 \le \langle x-y,\,Tx-Ty\rangle,$

and that a set-valued operator $A\colon X\rightrightarrows X$ is maximally monotone if it is monotone, i.e.,

$\{(x,x^*),(y,y^*)\}\subseteq\operatorname{gra}A \;\Rightarrow\; \langle x-y,\,x^*-y^*\rangle\ge 0,$

and if the graph of $A$ cannot be properly enlarged without destroying monotonicity.¹ These notions are equivalent (see [18] and [12]) in the sense that if $A$ is maximally monotone, then its resolvent $J_A := (\operatorname{Id}+A)^{-1}$ is firmly nonexpansive, and if $T$ is firmly nonexpansive, then $T^{-1}-\operatorname{Id}$ is maximally monotone.²

In optimization, one main problem is to find zeros of (sums of) maximally monotone operators; these zeros may correspond to critical points or solutions of optimization problems. In terms of resolvents, the corresponding problem is that of finding fixed points. For background material in fixed point theory and monotone operator theory, we refer the reader to [3], [7], [8], [10], [14], [15], [21], [22], [24], [23], [25], [27], [28], and [26].

However, not every problem has a solution; equivalently, not every resolvent has a fixed point. To make this concrete, let us assume that $T\colon X\to X$ is firmly nonexpansive. The deviation of $T$ from possessing a fixed point is captured by the notion of the minimal (negative) displacement vector, which is well defined³ by

$v_T := P_{\overline{\operatorname{ran}}(\operatorname{Id}-T)}(0).$   (2)

If $T$ almost has a fixed point in the sense that $v_T = 0$, i.e., $0\in\overline{\operatorname{ran}}(\operatorname{Id}-T)$, then we say that $T$ is asymptotically regular.

¹ We shall write $\operatorname{dom}A = \{x\in X \mid Ax\neq\varnothing\}$ for the domain of $A$, $\operatorname{ran}A = A(X) = \bigcup_{x\in X}Ax$ for the range of $A$, and $\operatorname{gra}A = \{(x,u)\in X\times X \mid u\in Ax\}$ for the graph of $A$.
² Here and elsewhere, $\operatorname{Id}$ denotes the identity operator on $X$.
³ Given a nonempty closed convex subset $C$ of $X$, we denote its projection mapping or projector by $P_C$.
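
The following small Python sketch is an illustrative aside (it is not part of the paper's argument): it realizes a firmly nonexpansive mapping on $X = \mathbb R^2$ as a translated resolvent $x\mapsto (\operatorname{Id}+M)^{-1}x - a$ for a positive semidefinite matrix $M$ and a shift $a$ (both chosen arbitrarily here), checks the defining inequality numerically, and estimates the minimal displacement vector (2) via the successive differences $T^nx - T^{n+1}x$, whose convergence to $v_T$ for such mappings is of the type invoked later via [4, Lemma 2.6].

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices: A = M is a (singular) positive semidefinite matrix,
# so A is maximally monotone and J_A = (Id + M)^{-1} is firmly nonexpansive.
M = np.array([[1.0, 0.0],
              [0.0, 0.0]])
J = np.linalg.inv(np.eye(2) + M)          # resolvent of A = M
a = np.array([0.5, 2.0])                  # an arbitrary shift

def T(x):
    """A firmly nonexpansive mapping: a resolvent followed by a translation."""
    return J @ x - a

# 1) Check ||Tx - Ty||^2 <= <x - y, Tx - Ty> on random pairs.
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    assert np.linalg.norm(T(x) - T(y)) ** 2 <= np.dot(x - y, T(x) - T(y)) + 1e-12

# 2) Estimate v_T = P_{cl ran(Id - T)}(0) as the limit of T^n x - T^{n+1} x.
#    Here ran(Id - T) = a + ran M, so v_T = (0, 2).
x = rng.normal(size=2)
for _ in range(200):
    x = T(x)
print("estimated v_T:", x - T(x))
```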

From now on, we assume that

$I := \{1,2,\ldots,m\}$, where $m\in\{2,3,4,\ldots\}$,

and that we are given $m$ firmly nonexpansive operators $T_1,\ldots,T_m$; equivalently, $m$ resolvents of maximally monotone operators $A_1,\ldots,A_m$:

$(\forall i\in I)\quad T_i = J_{A_i} = (\operatorname{Id}+A_i)^{-1}$ is firmly nonexpansive,

and we abbreviate the corresponding minimal displacement vectors by

$(\forall i\in I)\quad v_i := v_{T_i} = P_{\overline{\operatorname{ran}}(\operatorname{Id}-T_i)}(0).$   (3)

A natural question is the following: What can be said about the minimal displacement vector of $T$ when $T$ is either a composition or a convex combination of $T_1,\ldots,T_m$?

Five years ago, the authors of [5] proved the following: If each $T_i$ is asymptotically regular, then so are the corresponding compositions and convex combinations. This can be expressed equivalently as

$(\forall i\in I)\;\; v_i = 0 \;\Rightarrow\; v_T = 0,$   (4)

where $T$ is either a composition or a convex combination of the family $(T_i)_{i\in I}$. It is noteworthy that these results have been studied recently by Kohlenbach [17] and [16] from the viewpoint of proof mining.

In this work, we obtain sharp bounds on the magnitude of the minimal displacement vector of $T$ that hold true without any assumption of asymptotic regularity of the given operators. The proofs rely on techniques that are new and that were introduced in [5] and [1] (where projectors were considered). The new results concerning compositions are presented in Section 2, while convex combinations are dealt with in Section 3. Finally, our notation is standard and follows [3], to which we also refer for standard facts not mentioned here.

2 Compositions

In this section, we explore compositions.

Proposition 2.1. $(\forall\varepsilon>0)(\exists x\in X)$ such that $\|x - T_mT_{m-1}\cdots T_1x\| \le \varepsilon + \sum_{k=1}^m\|v_k\|$.

Proof. The proof is broken up into several steps. Set

$(\forall i\in I)\quad \widetilde A_i := -v_i + A_i(\cdot - v_i)$   (5)

and observe that [3, Proposition 23.17(ii)&(iii)] yields

$(\forall i\in I)\quad \widetilde T_i := J_{\widetilde A_i} = v_i + J_{A_i} = v_i + T_i.$   (6)

We also work in

$\mathbf X := X^m = \{\mathbf x = (x_i)_{i\in I} \mid (\forall i\in I)\; x_i\in X\}$, with $\langle\mathbf x,\mathbf y\rangle = \sum_{i\in I}\langle x_i,y_i\rangle$,   (7)

where we embed the original operators via

$\mathbf T\colon X^m\to X^m\colon (x_i)_{i\in I}\mapsto (T_ix_i)_{i\in I}$ and $\mathbf A\colon X^m\rightrightarrows X^m\colon (x_i)_{i\in I}\mapsto (A_ix_i)_{i\in I}$.   (8)

Denoting the identity on $X^m$ by $\mathbf{Id}$, we observe that

$J_{\mathbf A} = (\mathbf{Id}+\mathbf A)^{-1} = T_1\times\cdots\times T_m = \mathbf T.$   (9)

Because $\operatorname{ran}(\mathbf{Id}-\mathbf T) = \operatorname{ran}(\operatorname{Id}-T_1)\times\cdots\times\operatorname{ran}(\operatorname{Id}-T_m)$ and hence $\overline{\operatorname{ran}}(\mathbf{Id}-\mathbf T) = \overline{\operatorname{ran}}(\operatorname{Id}-T_1)\times\cdots\times\overline{\operatorname{ran}}(\operatorname{Id}-T_m)$, we have (e.g., by using [3, Proposition 29.3])

$\mathbf v := (v_i)_{i\in I} = P_{\overline{\operatorname{ran}}(\mathbf{Id}-\mathbf T)}\mathbf 0.$   (10)

Finally, define the cyclic right-shift operator

$R\colon X^m\to X^m\colon (x_1,x_2,\ldots,x_m)\mapsto (x_m,x_1,\ldots,x_{m-1})$ and $M := \mathbf{Id}-R,$   (11)

and the diagonal subspace

$\boldsymbol\Delta := \{\mathbf x = (x)_{i\in I} \mid x\in X\},$   (12)

with orthogonal complement $\boldsymbol\Delta^\perp$.

CLAIM 1: $\mathbf v\in\overline{\operatorname{ran}}(\mathbf A(\cdot-\mathbf v)+M)$.

Indeed, (3) implies that $(\forall i\in I)$ $v_i\in\overline{\operatorname{ran}}(\operatorname{Id}-T_i) = \overline{\operatorname{ran}}(\operatorname{Id}-J_{A_i}) = \overline{\operatorname{ran}}\,J_{A_i^{-1}} = \overline{\operatorname{dom}}\,(\operatorname{Id}+A_i^{-1}) = \overline{\operatorname{dom}}\,A_i^{-1} = \overline{\operatorname{ran}}\,A_i = \overline{\operatorname{ran}}\,A_i(\cdot-v_i)$. Hence, $\mathbf v\in\overline{\operatorname{ran}}\,\mathbf A(\cdot-\mathbf v) = \overline{\operatorname{ran}}\,\mathbf A(\cdot-\mathbf v)+\mathbf 0 \subseteq \overline{\operatorname{ran}}\,\mathbf A(\cdot-\mathbf v)+\boldsymbol\Delta^\perp$. On the other hand, we learn from [5, Corollary 2.6] (applied to $\mathbf A(\cdot-\mathbf v)$) that $\overline{\operatorname{ran}}(\mathbf A(\cdot-\mathbf v)+M) = \overline{\operatorname{ran}\,\mathbf A(\cdot-\mathbf v)+\boldsymbol\Delta^\perp}$. Altogether, we obtain that $\mathbf v\in\overline{\operatorname{ran}}(\mathbf A(\cdot-\mathbf v)+M)$, and CLAIM 1 is verified.

CLAIM 2: $(\forall\varepsilon>0)(\exists(\mathbf b,\mathbf x)\in\mathbf X\times\mathbf X)$ $\|\mathbf b\|\le\varepsilon$ and $\mathbf x = \mathbf v+\mathbf T(\mathbf b+R\mathbf x)$.

Fix $\varepsilon>0$. In view of CLAIM 1, there exist $\mathbf x\in\mathbf X$ and $\mathbf b\in\mathbf X$ such that $\|\mathbf b\|\le\varepsilon$ and $\mathbf b\in-\mathbf v+\mathbf A(\mathbf x-\mathbf v)+M\mathbf x$. Hence, $\mathbf b+R\mathbf x = \mathbf b+\mathbf x-M\mathbf x \in \mathbf x-\mathbf v+\mathbf A(\mathbf x-\mathbf v) = (\mathbf{Id}+(-\mathbf v+\mathbf A(\cdot-\mathbf v)))\mathbf x$. Thus, $\mathbf x = J_{-\mathbf v+\mathbf A(\cdot-\mathbf v)}(\mathbf b+R\mathbf x) = \mathbf v+\mathbf T(\mathbf b+R\mathbf x)$, where the last identity follows from (6), (9) and (10).

CLAIM 3: $(\forall\varepsilon>0)(\exists(\mathbf c,\mathbf x)\in\mathbf X\times\mathbf X)$ $\|\mathbf c\|\le\varepsilon$ and $\mathbf x = \mathbf c+\mathbf v+\mathbf T(R\mathbf x)$.

Fix $\varepsilon>0$, let $\mathbf b$ and $\mathbf x$ be as in CLAIM 2, and set $\mathbf c := \mathbf x-\mathbf v-\mathbf T(R\mathbf x) = \mathbf T(\mathbf b+R\mathbf x)-\mathbf T(R\mathbf x)$. Then, since $\mathbf T$ is nonexpansive, $\|\mathbf c\| = \|\mathbf T(\mathbf b+R\mathbf x)-\mathbf T(R\mathbf x)\|\le\|\mathbf b\|\le\varepsilon$, and CLAIM 3 thus holds.

CONCLUSION: Let $\varepsilon>0$. By CLAIM 3 (applied to $\varepsilon/\sqrt m$), there exists $(\mathbf c,\mathbf x)\in\mathbf X\times\mathbf X$ such that $\|\mathbf c\|\le\varepsilon/\sqrt m$ and $\mathbf x = \mathbf c+\mathbf v+\mathbf T(R\mathbf x)$. Hence $\sum_{i\in I}\|c_i\|\le\sqrt m\,\|\mathbf c\|\le\varepsilon$ and $(\forall i\in I)$ $x_i = c_i+v_i+T_ix_{i-1}$, where $x_0 := x_m$. The triangle inequality and the nonexpansiveness of each $T_i$ thus yield

$\|T_mT_{m-1}\cdots T_1x_0 - x_0\| = \|T_mT_{m-1}\cdots T_1x_0 - x_m\|$
$= \|(T_mT_{m-1}\cdots T_2T_1x_0 - T_mT_{m-1}\cdots T_2x_1) + (T_mT_{m-1}\cdots T_3T_2x_1 - T_mT_{m-1}\cdots T_3x_2) + (T_mT_{m-1}\cdots T_4T_3x_2 - T_mT_{m-1}\cdots T_4x_3) + \cdots + (T_mT_{m-1}x_{m-2} - T_mx_{m-1}) + (T_mx_{m-1} - x_m)\|$
$\le \|T_mT_{m-1}\cdots T_2T_1x_0 - T_mT_{m-1}\cdots T_2x_1\| + \|T_mT_{m-1}\cdots T_3T_2x_1 - T_mT_{m-1}\cdots T_3x_2\| + \|T_mT_{m-1}\cdots T_4T_3x_2 - T_mT_{m-1}\cdots T_4x_3\| + \cdots + \|T_mT_{m-1}x_{m-2} - T_mx_{m-1}\| + \|T_mx_{m-1} - x_m\|$
$\le \|T_1x_0 - x_1\| + \|T_2x_1 - x_2\| + \|T_3x_2 - x_3\| + \cdots + \|T_{m-1}x_{m-2} - x_{m-1}\| + \|T_mx_{m-1} - x_m\|$
$= \|c_1+v_1\| + \|c_2+v_2\| + \cdots + \|c_m+v_m\|$
$\le \sum_{k=1}^m\|c_k\| + \sum_{k=1}^m\|v_k\| \le \varepsilon + \sum_{k=1}^m\|v_k\|,$   (13)

as claimed.
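
The product-space objects (7)-(12) used above are easy to experiment with; the sketch below (an illustration under the stated choices of $d$ and $m$, not part of the proof) builds the cyclic right-shift $R$ and $M = \mathbf{Id}-R$ on $\mathbf X = (\mathbb R^d)^m$ with numpy and confirms numerically that the diagonal $\boldsymbol\Delta$ lies in the kernel of $M$ while $M$ maps into $\boldsymbol\Delta^\perp$, the structural facts behind the use of [5, Corollary 2.6] in CLAIM 1.

```python
import numpy as np

d, m = 2, 3                          # X = R^d and the product space X^m
rng = np.random.default_rng(1)

def R(x):
    """Cyclic right-shift (x_1, ..., x_m) -> (x_m, x_1, ..., x_{m-1})."""
    return np.roll(x.reshape(m, d), 1, axis=0).reshape(-1)

def M(x):
    """M = Id - R."""
    return x - R(x)

# Diagonal vectors (x, x, ..., x) are annihilated by M ...
x = rng.normal(size=d)
assert np.allclose(M(np.tile(x, m)), 0.0)

# ... and M z is orthogonal to every diagonal vector, since its block-sum vanishes.
z = rng.normal(size=d * m)
assert np.allclose(M(z).reshape(m, d).sum(axis=0), 0.0)
print("Delta is contained in ker M and M maps into the orthogonal complement of Delta.")
```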

We are now ready for our first main result.

Theorem 2.2. $\|v_{T_mT_{m-1}\cdots T_2T_1}\| \le \|v_{T_1}\| + \cdots + \|v_{T_m}\|$.

Proof. By Proposition 2.1, we have $(\forall\varepsilon>0)$ $\|v_{T_mT_{m-1}\cdots T_2T_1}\| \le \varepsilon + \|v_{T_1}\| + \cdots + \|v_{T_m}\|$, and the result thus follows.

As an immediate consequence of Theorem 2.2, we obtain the first main result of [5]:

Corollary 2.3. [5, Corollary 3.2] Suppose that $v_1 = \cdots = v_m = 0$. Then $v_{T_mT_{m-1}\cdots T_2T_1} = 0$.

We now show that the bound on $\|v_{T_mT_{m-1}\cdots T_2T_1}\|$ given in Theorem 2.2 is sharp:

Example 2.4. Suppose that $X = \mathbb R$, $T_1\colon X\to X\colon x\mapsto x-a_1$, and $T_2\colon X\to X\colon x\mapsto x-a_2$, where $(a_1,a_2)\in\mathbb R\times\mathbb R$. Then $(v_{T_1},v_{T_2},v_{T_2T_1}) = (a_1,a_2,a_1+a_2)$ and $|a_1+a_2| = \|v_{T_2T_1}\| \le \|v_1\|+\|v_2\| = |a_1|+|a_2|$; moreover, the inequality is an equality if and only if $a_1a_2\ge 0$.

Proof. On the one hand, it is clear that $\operatorname{ran}(\operatorname{Id}-T_1) = \{a_1\}$ and likewise $\operatorname{ran}(\operatorname{Id}-T_2) = \{a_2\}$. Consequently, $(v_1,v_2) = (a_1,a_2)$. On the other hand, $T_2T_1\colon X\to X\colon x\mapsto x-a_1-a_2 = x-(a_1+a_2)$; therefore $\operatorname{ran}(\operatorname{Id}-T_2T_1) = \{a_1+a_2\}$. Hence, $v_{T_2T_1} = a_1+a_2$, $\|v_{T_2T_1}\| = |a_1+a_2|$ and $\|v_1\|+\|v_2\| = |a_1|+|a_2|$, and the conclusion follows.
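
As a numerical companion to Theorem 2.2 and Example 2.4 (an added illustration, not taken from the paper), the sketch below composes two translations of $\mathbb R^2$, recovers the minimal displacement vectors as limits of successive differences, and compares $\|v_{T_2T_1}\|$ with $\|v_1\|+\|v_2\|$ for aligned shifts (where the bound is attained) and non-aligned shifts (where it is strict).

```python
import numpy as np

def translation(a):
    """x -> x - a is firmly nonexpansive with minimal displacement vector a."""
    a = np.asarray(a, dtype=float)
    return lambda x: x - a

def displacement(T, n=200):
    """Estimate v_T as T^n x0 - T^{n+1} x0 (T is assumed averaged)."""
    x = np.zeros(2)
    for _ in range(n):
        x = T(x)
    return x - T(x)

for a1, a2 in [((1.0, 0.0), (2.0, 0.0)),   # aligned: equality in Theorem 2.2
               ((1.0, 0.0), (0.0, 1.0))]:  # not aligned: strict inequality
    T1, T2 = translation(a1), translation(a2)
    v1, v2 = displacement(T1), displacement(T2)
    v12 = displacement(lambda x: T2(T1(x)))
    print(np.linalg.norm(v12), "<=", np.linalg.norm(v1) + np.linalg.norm(v2))
```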

The remaining results in this section concern the effect of cyclically permuting the operators in the composition.

Proposition 2.5. $v_{T_mT_{m-1}\cdots T_2T_1} = v_{T_{m-1}T_{m-2}\cdots T_1T_m} = \cdots = v_{T_1T_m\cdots T_2}$.

Proof. We start by proving that if $S_1\colon X\to X$ and $S_2\colon X\to X$ are averaged⁴, then

$v_{S_2S_1} = v_{S_1S_2}.$   (14)

To this end, let $x\in X$ and note that $S_2S_1$ and $S_1S_2$ are $\alpha$-averaged, where $\alpha\in[0,1[$, by, e.g., [3, Remark 4.34(iii) and Proposition 4.44]. Using [19, Proposition 2.5(ii)] applied to $S_2S_1$ and $S_1S_2$ yields

$\|v_{S_2S_1} - v_{S_1S_2}\|^2 \;\leftarrow\; \|(S_2S_1)^nx - (S_2S_1)^{n+1}x - ((S_1S_2)^nS_1x - (S_1S_2)^{n+1}S_1x)\|^2$
$= \|(S_2S_1)^nx - (S_2S_1)^{n+1}x - (S_1(S_2S_1)^nx - S_1(S_2S_1)^{n+1}x)\|^2$
$= \|(\operatorname{Id}-S_1)(S_2S_1)^nx - (\operatorname{Id}-S_1)(S_2S_1)^{n+1}x\|^2$
$\le \tfrac{\alpha}{1-\alpha}\big(\|(S_2S_1)^nx - (S_2S_1)^{n+1}x\|^2 - \|S_1(S_2S_1)^nx - S_1(S_2S_1)^{n+1}x\|^2\big)$
$= \tfrac{\alpha}{1-\alpha}\big(\|(S_2S_1)^nx - (S_2S_1)^{n+1}x\|^2 - \|(S_1S_2)^nS_1x - (S_1S_2)^{n+1}S_1x\|^2\big)$
$\;\to\; \tfrac{\alpha}{1-\alpha}\big(\|v_{S_2S_1}\|^2 - \|v_{S_1S_2}\|^2\big) = 0,$   (15)

where the arrows indicate limits as $n\to\infty$ and the last identity follows from [4, Lemma 2.6]. Because $T_{m-1}T_{m-2}\cdots T_1$ is averaged by [3, Remark 4.34(iii) and Proposition 4.44], we can and do apply (14), with $(S_1,S_2)$ replaced by $(T_{m-1}T_{m-2}\cdots T_1, T_m)$, to deduce that $v_{T_mT_{m-1}\cdots T_2T_1} = v_{T_{m-1}T_{m-2}\cdots T_1T_m}$. The remaining identities follow similarly.

Proposition 2.6. We have

$v_{T_mT_{m-1}\cdots T_1}\in\operatorname{ran}(\operatorname{Id}-T_mT_{m-1}\cdots T_1)$   (16a)
$\Leftrightarrow\; v_{T_{m-1}\cdots T_1T_m}\in\operatorname{ran}(\operatorname{Id}-T_{m-1}\cdots T_1T_m)$   (16b)
$\Leftrightarrow\;\cdots\;\Leftrightarrow\; v_{T_1T_m\cdots T_2}\in\operatorname{ran}(\operatorname{Id}-T_1T_m\cdots T_2).$   (16c)

Proof. We prove the implication (16a)$\Rightarrow$(16b): Suppose that $(\exists y\in X)$ $v_{T_mT_{m-1}\cdots T_1} = y - T_mT_{m-1}\cdots T_1y$, i.e., $y\in\operatorname{Fix}(v_{T_m\cdots T_1}+T_m\cdots T_1)$. By [6, Proposition 2.5(iv)], we have $v_{T_m\cdots T_1} = (T_m\cdots T_1)y - (T_m\cdots T_1)^2y$. Using Proposition 2.5, we obtain

$\|v_{T_{m-1}\cdots T_1T_m}\| = \|v_{T_m\cdots T_2T_1}\| = \|(T_mT_{m-1}\cdots T_1)y - (T_mT_{m-1}\cdots T_1)^2y\| \le \|T_{m-1}\cdots T_1y - (T_{m-1}\cdots T_1T_m)T_{m-1}\cdots T_1y\| \le \|y - T_mT_{m-1}\cdots T_1y\| = \|v_{T_mT_{m-1}\cdots T_1}\| = \|v_{T_{m-1}\cdots T_1T_m}\|.$   (17)

Consequently, $\|v_{T_{m-1}\cdots T_1T_m}\| = \|T_{m-1}\cdots T_1y - (T_{m-1}\cdots T_1T_m)T_{m-1}\cdots T_1y\|$ and hence

$v_{T_{m-1}\cdots T_1T_m} = T_{m-1}\cdots T_1y - (T_{m-1}\cdots T_1T_m)T_{m-1}\cdots T_1y \in \operatorname{ran}(\operatorname{Id}-T_{m-1}\cdots T_1T_m).$   (18)

The opposite implication and the remaining $m-2$ equivalences are proved similarly.

⁴ Let $S\colon X\to X$. Then $S$ is $\alpha$-averaged if there exists $\alpha\in[0,1[$ such that $S = (1-\alpha)\operatorname{Id}+\alpha N$, where $N\colon X\to X$ is nonexpansive.
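
Proposition 2.5 can also be observed numerically. In the illustrative sketch below (not from the paper), $T_1$ is a translation and $T_2$ is the projection onto $\mathbb R\times\{0\}$; the compositions $T_2T_1$ and $T_1T_2$ are different mappings, yet the estimated minimal displacement vectors coincide, in line with (14).

```python
import numpy as np

a = np.array([1.0, 0.75])                 # an arbitrary shift (illustrative)
T1 = lambda x: x - a                      # translation (firmly nonexpansive)
T2 = lambda x: np.array([x[0], 0.0])      # projection onto R x {0}

def displacement(T, n=200):
    """Estimate v_T as T^n x0 - T^{n+1} x0 (T is assumed averaged)."""
    x = np.array([3.0, -2.0])
    for _ in range(n):
        x = T(x)
    return x - T(x)

v_21 = displacement(lambda x: T2(T1(x)))  # T2 o T1
v_12 = displacement(lambda x: T1(T2(x)))  # T1 o T2
print(v_21, v_12)                         # both approximately (1.0, 0.0)
```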

The following example, taken from De Pierro's [11, Section 3 on page 193], illustrates that the conclusion of Proposition 2.6 does not necessarily hold if the operators are permuted noncyclically.

Example 2.7. Suppose that $X = \mathbb R^2$, $m = 3$, $C_1 = \mathbb R\times\{0\}$, $C_2 = \mathbb R\times\{1\}$, $C_3 = \{(x,y)\in\mathbb R^2 \mid y\ge 1/x > 0\}$, and $(T_1,T_2,T_3) = (P_{C_1},P_{C_2},P_{C_3})$. Then $v_{T_3T_2T_1} = v_{T_3T_1T_2} = 0$, $v_{T_3T_2T_1}\in\operatorname{ran}(\operatorname{Id}-T_3T_2T_1)$ but $v_{T_3T_1T_2}\notin\operatorname{ran}(\operatorname{Id}-T_3T_1T_2)$.

Proof. Note that $T_2T_1 = P_{C_2}P_{C_1} = P_{C_2} = T_2$ and $T_1T_2 = P_{C_1}P_{C_2} = P_{C_1} = T_1$. Consequently, $(T_3T_2T_1, T_3T_1T_2) = (P_{C_3}P_{C_2}, P_{C_3}P_{C_1})$. The claim that $v_{T_3T_2T_1} = v_{T_3T_1T_2} = 0$ follows from [1, Theorem 3.1], or from Theorem 2.2 applied with $m = 3$. This and [2, Lemma 2.2(i)] imply that $\operatorname{Fix}T_3T_2T_1 = \operatorname{Fix}P_{C_3}P_{C_2} = C_3\cap C_2 \neq \varnothing$, whereas $\operatorname{Fix}T_3T_1T_2 = \operatorname{Fix}P_{C_3}P_{C_1} = C_3\cap C_1 = \varnothing$. Hence, $v_{T_3T_2T_1}\in\operatorname{ran}(\operatorname{Id}-T_3T_2T_1)$ but $v_{T_3T_1T_2}\notin\operatorname{ran}(\operatorname{Id}-T_3T_1T_2)$.

Figure 1: A GeoGebra [13] snapshot that illustrates the behaviour of the sequence $((P_3P_2P_1)^nx_0)_{n\in\mathbb N}$ in Proposition 2.6. The first few iterates of the sequences $(P_1(P_3P_2P_1)^nx_0)_{n\in\mathbb N}$ (blue points), $(P_2P_1(P_3P_2P_1)^nx_0)_{n\in\mathbb N}$ (green points), and $((P_3P_2P_1)^nx_0)_{n\in\mathbb N}$ (black points) are also depicted.

Figure 2: A GeoGebra [13] snapshot that illustrates the behaviour of the sequence $((P_3P_1P_2)^nx_0)_{n\in\mathbb N}$ in Proposition 2.6. The first few iterates of the sequences $(P_1(P_3P_1P_2)^nx_0)_{n\in\mathbb N}$ (green points), $(P_2P_1(P_3P_1P_2)^nx_0)_{n\in\mathbb N}$ (blue points), and $((P_3P_1P_2)^nx_0)_{n\in\mathbb N}$ (black points) are also depicted.
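
The behaviour depicted in Figures 1 and 2 can be reproduced with a short numerical experiment; the sketch below is an added illustration in which the projection onto $C_3$ is computed with a generic scipy one-dimensional minimization (a helper assumed here, not part of the paper). Iterating $T_3T_2T_1 = P_{C_3}P_{C_2}$ drives the displacement to essentially zero and the iterates settle near $C_2\cap C_3$, whereas iterating $T_3T_1T_2 = P_{C_3}P_{C_1}$ also drives the displacement towards zero, but only slowly, while the iterates drift off without converging, since $C_1\cap C_3 = \varnothing$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

P1 = lambda p: np.array([p[0], 0.0])   # projection onto C1 = R x {0}
P2 = lambda p: np.array([p[0], 1.0])   # projection onto C2 = R x {1}

def P3(p):
    """Numerical projection onto C3 = {(x, y) : y >= 1/x > 0}."""
    if p[0] > 0 and p[1] >= 1.0 / p[0]:
        return np.array(p, dtype=float)
    # otherwise the nearest point lies on the boundary y = 1/x, x > 0
    dist2 = lambda x: (x - p[0]) ** 2 + (1.0 / x - p[1]) ** 2
    x = minimize_scalar(dist2, bounds=(1e-6, 1e3), method="bounded").x
    return np.array([x, 1.0 / x])

for name, T in [("P3 P2 P1", lambda z: P3(P2(P1(z)))),
                ("P3 P1 P2", lambda z: P3(P1(P2(z))))]:
    z = np.array([0.0, 2.0])
    for _ in range(2000):
        z = T(z)
    print(name, "displacement:", np.linalg.norm(z - T(z)), "iterate:", z)
```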

3 Convex Combinations

We start with the following useful lemma.

Lemma 3.1. Suppose that $(\forall i\in I)$ $A_i$ is 3* monotone⁵ and $\operatorname{dom}A_i = X$. Let $(\alpha_i)_{i\in I}$ be a family of nonnegative real numbers. Then the following hold:

(i) $\sum_{i\in I}\alpha_iA_i$ is maximally monotone, 3* monotone, and $\operatorname{dom}\big(\sum_{i\in I}\alpha_iA_i\big) = X$.

(ii) $\overline{\operatorname{ran}}\big(\sum_{i\in I}\alpha_iA_i\big) = \overline{\sum_{i\in I}\alpha_i\operatorname{ran}A_i}$.

Proof. Note that $(\forall i\in I)$ $\alpha_iA_i$ is maximally monotone, 3* monotone, and $\operatorname{dom}\alpha_iA_i = X$.

(i): The proof proceeds by induction. For $n = 2$, the 3* monotonicity of $\alpha_1A_1+\alpha_2A_2$ follows from [3, Proposition 25.22(ii)], whereas the maximal monotonicity of $\alpha_1A_1+\alpha_2A_2$ follows from, e.g., [3, Proposition 25.5(i)]. Now suppose that for some $n\ge 2$ it holds that $\sum_{i=1}^n\alpha_iA_i$ is maximally monotone and 3* monotone. Then $\sum_{i=1}^{n+1}\alpha_iA_i = \sum_{i=1}^n\alpha_iA_i + \alpha_{n+1}A_{n+1}$, which is maximally monotone and 3* monotone, where the conclusion follows from applying the base case with $(\alpha_1,\alpha_2,A_1,A_2)$ replaced by $(1,\alpha_{n+1},\sum_{i=1}^n\alpha_iA_i,A_{n+1})$.

(ii): Combine (i) and [20, Corollary 6].

From this point onwards, let $(\lambda_i)_{i\in I}$ be in $]0,1]$ with $\sum_{i\in I}\lambda_i = 1$, and set

$T := \sum_{i\in I}\lambda_iT_i.$   (19)

We are now ready for our second main result.

Theorem 3.2. $\|v_T\| \le \sum_{i\in I}\lambda_i\|v_{T_i}\|$.

Proof. It follows from [3, Examples 20.7 and 25.20] that $(\forall i\in I)$ $\operatorname{Id}-T_i$ is maximally monotone, 3* monotone, and $\operatorname{dom}(\operatorname{Id}-T_i) = X$. This and Lemma 3.1(ii) (applied with $(\alpha_i,A_i)$ replaced by $(\lambda_i,\operatorname{Id}-T_i)$) imply that

$\overline{\operatorname{ran}}(\operatorname{Id}-T) = \overline{\operatorname{ran}}\Big(\sum_{i\in I}\lambda_i(\operatorname{Id}-T_i)\Big) = \overline{\sum_{i\in I}\lambda_i\operatorname{ran}(\operatorname{Id}-T_i)}.$   (20)

Now, on the one hand, it follows from the definition of $v_T$ that

$(\forall y\in\overline{\operatorname{ran}}(\operatorname{Id}-T))\quad \|v_T\|\le\|y\|.$   (21)

On the other hand, the definition of $v_i$ implies that $(\forall i\in I)$ $v_i\in\overline{\operatorname{ran}}(\operatorname{Id}-T_i)$. Hence, $\lambda_iv_i\in\lambda_i\overline{\operatorname{ran}}(\operatorname{Id}-T_i)$. Therefore, $\sum_{i\in I}\lambda_iv_i\in\sum_{i\in I}\lambda_i\overline{\operatorname{ran}}(\operatorname{Id}-T_i)\subseteq\overline{\sum_{i\in I}\lambda_i\operatorname{ran}(\operatorname{Id}-T_i)} = \overline{\operatorname{ran}}(\operatorname{Id}-T)$, where the last identity follows from (20). Now apply (21) with $y$ replaced by $\sum_{i\in I}\lambda_iv_i$.

⁵ We recall that a monotone operator $B\colon X\rightrightarrows X$ is 3* monotone (see [9]) (this is also known as rectangular) if $(\forall(x,y^*)\in\operatorname{dom}B\times\operatorname{ran}B)$ $\sup_{(z,z^*)\in\operatorname{gra}B}\langle x-z,\,z^*-y^*\rangle < +\infty$.
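
The bound of Theorem 3.2 can likewise be checked numerically. The sketch below (an added illustration) averages a translation with the firmly nonexpansive mapping $x\mapsto\tfrac12x - a_2$, in the spirit of Example 3.5 below: the combination is a Banach contraction, hence has a fixed point and $v_T = 0$, so the inequality is strict; with two translations by parallel vectors the bound is attained, as in Example 3.4 below.

```python
import numpy as np

def displacement(T, n=400):
    """Estimate v_T via successive differences (T is assumed averaged)."""
    x = np.zeros(2)
    for _ in range(n):
        x = T(x)
    return x - T(x)

lam1, lam2 = 0.4, 0.6
a1, a2 = np.array([2.0, 0.0]), np.array([0.0, 1.0])

T1 = lambda x: x - a1              # translation, v_1 = a1
T2 = lambda x: 0.5 * x - a2        # firmly nonexpansive with a fixed point, v_2 = 0
T = lambda x: lam1 * T1(x) + lam2 * T2(x)

bound = lam1 * np.linalg.norm(displacement(T1)) + lam2 * np.linalg.norm(displacement(T2))
print(np.linalg.norm(displacement(T)), "<=", bound)      # ~0 <= 0.8 (strict)

# Parallel translations: the bound is attained (cf. Example 3.4).
b1, b2 = np.array([2.0, 0.0]), np.array([1.0, 0.0])
S = lambda x: lam1 * (x - b1) + lam2 * (x - b2)
print(np.linalg.norm(displacement(S)), "==",
      lam1 * np.linalg.norm(b1) + lam2 * np.linalg.norm(b2))  # 1.4 == 1.4
```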

As an easy consequence of Theorem 3.2, we obtain the second main result of [5]:

Corollary 3.3. [5, Theorem 5.5] Suppose that $v_1 = \cdots = v_m = 0$. Then $v_T = 0$.

The bound we provided in Theorem 3.2 is sharp, as we illustrate now:

Example 3.4. Let $a\in X$ and suppose that $T\colon X\to X\colon x\mapsto x-a$. Then $v_T = a$ and therefore $\operatorname{Fix}T = \varnothing \Leftrightarrow a\neq 0$. Set $(\forall i\in I)$ $T_i = T$. Then $\sum_{i\in I}\lambda_iT_i = T$ and $(\forall i\in I)$ $v_i = v_T = a$. Consequently, $\|v_T\| = \|a\| = \sum_{i\in I}\lambda_i\|a\| = \sum_{i\in I}\lambda_i\|v_i\|$.

Example 3.4 suggests that the identity $v_T = \sum_{i\in I}\lambda_iv_i$ holds true; however, the following example provides a negative answer to this conjecture.

Example 3.5. Suppose that $m = 2$, that $T_1\colon X\to X\colon x\mapsto x-a_1$, and that $T_2\colon X\to X\colon x\mapsto \tfrac12x-a_2$, where $(a_1,a_2)\in(X\smallsetminus\{0\})\times X$. Then $\operatorname{ran}(\operatorname{Id}-T_1) = \{a_1\}$, $\operatorname{ran}(\operatorname{Id}-T_2) = X$, $\operatorname{ran}(\operatorname{Id}-T) = X$, and $0 = v_T \neq \lambda_1v_1+\lambda_2v_2 = \lambda_1a_1$.

Proof. On the one hand, one can easily verify that $(v_1,v_2) = (a_1,0)$; hence, $\lambda_1v_1+\lambda_2v_2 = \lambda_1a_1 \neq 0$. On the other hand, $T\colon X\to X\colon x\mapsto\tfrac{\lambda_1+1}{2}x - (\lambda_1a_1+\lambda_2a_2)$. Hence, $T$ is a Banach contraction, and therefore $\operatorname{Fix}T\neq\varnothing$. Consequently, $v_T = 0$.

Acknowledgments

The research of HHB was partially supported by a Discovery Grant of the Natural Sciences and Engineering Research Council of Canada. WMM was supported by a research fellowship from the Simons Institute for the Theory of Computing.

References

[1] H.H. Bauschke, The composition of finitely many projections onto closed convex sets in Hilbert space is asymptotically regular, Proceedings of the American Mathematical Society 131 (2003), 141-146.
[2] H.H. Bauschke and J.M. Borwein, Dykstra's alternating projection algorithm for two sets, Journal of Approximation Theory 79 (1994), 418-443.
[3] H.H. Bauschke and P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, Second Edition, Springer, 2017.
[4] H.H. Bauschke, W.L. Hare, and W.M. Moursi, Generalized solutions for the sum of two maximally monotone operators, SIAM Journal on Control and Optimization 52 (2014), 1034-1047.
[5] H.H. Bauschke, V. Martín-Márquez, S.M. Moffat, and X. Wang, Compositions and convex combinations of asymptotically regular firmly nonexpansive mappings are also asymptotically regular, Fixed Point Theory and Applications (2012), 2012:53.

[6] H.H. Bauschke and W.M. Moursi, The Douglas-Rachford algorithm for two (not necessarily intersecting) affine subspaces, SIAM Journal on Optimization 26 (2016), 968-985.
[7] J.M. Borwein and J.D. Vanderwerff, Convex Functions, Cambridge University Press, 2010.
[8] H. Brezis, Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert, North-Holland/Elsevier, 1973.
[9] H. Brezis and A. Haraux, Image d'une somme d'opérateurs monotones et applications, Israel Journal of Mathematics 23 (1976), 165-186.
[10] R.S. Burachik and A.N. Iusem, Set-Valued Mappings and Enlargements of Monotone Operators, Springer-Verlag, 2008.
[11] A. De Pierro, From parallel to sequential projection methods and vice versa in convex feasibility: results and conjectures, in: Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications (Haifa, 2000), 187-201, Studies in Computational Mathematics 8 (2001), North-Holland, Amsterdam.
[12] J. Eckstein and D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators, Mathematical Programming 55 (1992), 293-318.
[13] GeoGebra, http://www.geogebra.org.
[14] K. Goebel and W.A. Kirk, Topics in Metric Fixed Point Theory, Cambridge University Press, 1990.
[15] K. Goebel and S. Reich, Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings, Marcel Dekker, 1984.
[16] U. Kohlenbach, A polynomial rate of asymptotic regularity for compositions of projections in Hilbert space, to appear in Foundations of Computational Mathematics. Available at http://www.mathematik.tu-darmstadt.de/~kohlenbach/inconsistentfeasibility.pdf from the author's homepage http://www.mathematik.tu-darmstadt.de/~kohlenbach/.
[17] U. Kohlenbach, G. López-Acedo, and A. Nicolae, Quantitative asymptotic regularity results for the composition of two mappings, Optimization 66 (2017), 1291-1299.
[18] G.J. Minty, Monotone (nonlinear) operators in Hilbert spaces, Duke Mathematical Journal 29 (1962), 341-346.
[19] W.M. Moursi, The forward-backward algorithm and the normal problem, Journal of Optimization Theory and Applications (2017). doi:10.1007/s10957-017-1113-4
[20] T. Pennanen, On the range of monotone composite mappings, Journal of Nonlinear and Convex Analysis 2 (2001), 193-202.
[21] R.T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, 1970.
[22] R.T. Rockafellar and R.J-B. Wets, Variational Analysis, Springer-Verlag, corrected 3rd printing, 2009.
[23] S. Simons, From Hahn-Banach to Monotonicity, Springer-Verlag, 2008.
[24] S. Simons, Minimax and Monotonicity, Springer-Verlag, 1998.
[25] C. Zălinescu, Convex Analysis in General Vector Spaces, World Scientific Publishing, 2002.
[26] E. Zeidler, Nonlinear Functional Analysis and Its Applications I: Fixed Point Theorems, Springer-Verlag, 1993.
[27] E. Zeidler, Nonlinear Functional Analysis and Its Applications II/A: Linear Monotone Operators, Springer-Verlag, 1990.
[28] E. Zeidler, Nonlinear Functional Analysis and Its Applications II/B: Nonlinear Monotone Operators, Springer-Verlag, 1990.