Are adaptive Mann iterations really adaptive?


Mathematical Communications, Vol. 14, No. 2, pp. 399-412 (2009)

Are adaptive Mann iterations really adaptive?

Kamil S. Kazimierski, Department of Mathematics and Computer Science, University of Bremen, 28 334 Bremen, Germany

Received December 2008; accepted October 7, 2009

Corresponding author. Email address: kamilk@math.uni-bremen.de (K. S. Kazimierski)
http://www.mathos.hr/mc (c) 2009 Department of Mathematics, University of Osijek

Abstract. We show that the Adaptive Mann Iterations deserve to be named in such a way. Namely, we will show that they adapt to the properties of the operator, even if the information on these properties is not known to the Mann Iteration a-priori. We will also show on an example that the Adaptive Mann Iterations perform better numerically than the usual Mann Iterations.

AMS subject classifications: Primary 47H06, 47H10; Secondary 54H25

Key words: adaptive Mann iterations, strongly accretive mappings, strongly pseudocontractive mappings, p-smooth Banach spaces, Banach spaces smooth of power type

1. Introduction

Let $X$ be a real Banach space. For $p > 1$ the set-valued mapping $J_p : X \to 2^{X^*}$ given by
$$J_p(x) = \{\, x^* \in X^* : \langle x^*, x\rangle = \|x^*\|\,\|x\|,\ \|x^*\| = \|x\|^{p-1} \,\},$$
where $X^*$ denotes the dual space of $X$ and $\langle\cdot,\cdot\rangle$ denotes the duality pairing, is called the duality mapping of $X$. By $j_p$ we denote a single-valued selection of $J_p$. Notice that, as a consequence of the Hahn-Banach Theorem, $J_p(x)$ is nonempty for every $x$ in $X$. A Banach space $X$ is said to be $p$-smooth if it admits a weak polarization law, i.e. if there exists a positive constant $G_p$ such that for all $x, y$ in $X$ and all $j_p(x) \in J_p(x)$
$$\|x - y\|^p \le \|x\|^p - p\,\langle j_p(x), y\rangle + G_p\,\|y\|^p.$$
Notice that for $1 < p < \infty$ the sequence spaces $\ell^p$, Lebesgue spaces $L^p$ and Sobolev spaces $W^m_p$ are $\min\{2, p\}$-smooth [9, 15]. Due to the polarization identity every Hilbert space is 2-smooth.

A map $T : X \to X$ is called strongly pseudocontractive if for some $p > 1$ there exists a constant $k > 0$ such that for each $x$ and $y$ in $X$ there is a $j_p(x - y) \in J_p(x - y)$ satisfying
$$\langle Tx - Ty,\ j_p(x - y)\rangle \le (1 - k)\,\|x - y\|^p.$$
A map $T$ is called strongly accretive if $(I - T)$ is strongly pseudocontractive.
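For orientation, consider the Hilbert space case $p = 2$ as a quick worked instance of these definitions: the duality mapping is single valued with $j_2(x) = x$, the polarization identity
$$\|x - y\|^2 = \|x\|^2 - 2\,\langle x, y\rangle + \|y\|^2$$
shows that one may take $G_2 = 1$, and strong pseudocontractivity of $T$ reads $\langle Tx - Ty, x - y\rangle \le (1 - k)\,\|x - y\|^2$ for all $x, y \in X$.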

Strongly accretive operators are of great importance in physics, since it is well known that many significant problems in physics can be modeled as time-dependent nonlinear equations of the form
$$\frac{du}{dt} + S(t)u = 0,$$
where $S$ is a strongly accretive operator. The main focus of interest lies on the equilibrium points of such a system. Therefore many solvers of the equation $Su = 0$ have been introduced in recent years. Notice that $u$ solves the above equation if (and only if) it is a fixed point of the mapping $T := I - S$. It is well known (cf. e.g. [5, 3, 4, 7, 13, 14, 12]) that Mann iterations [11] are well suited methods for finding such a fixed point of the strongly pseudocontractive mapping $T$. We recall that the sequence $(x_n)$ is a Mann iteration if for some nonnegative sequence $(\alpha_n)$ the recursion
$$x_{n+1} = (1 - \alpha_n)\,x_n + \alpha_n T x_n \quad\text{with } x_0 \in C \tag{1}$$
is satisfied.

In [10] the author introduced a problem-adapted selection strategy for the update coefficients $(\alpha_n)$, if $T$ is a strongly pseudocontractive mapping defined on some $p$-smooth space $X$. In this paper we will introduce a new and improved version of the strategy mentioned before. We will also show that the strategy of this paper is adaptive in the sense that it adapts to any continuity of the operator. This result is the main difference and improvement to the Mann iteration proposed by Chidume in [5]. Convergence rate results for Mann iterations are very rare. To the author's best knowledge this is the first time that convergence rates for accretive operators subject to their continuity were proven and hence it is the first time that the adaptivity of the Mann iterations was shown.
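The recursion (1) is straightforward to implement once the mapping $T$ and a step-size rule are given. The following is a minimal Python sketch for illustration only; the names mann_iteration, T and alphas are not from the paper.

```python
import numpy as np

def mann_iteration(T, x0, alphas, num_iter):
    """Mann iteration (1): x_{n+1} = (1 - alpha_n) x_n + alpha_n T(x_n).

    T        : callable implementing the mapping T : X -> X
    x0       : starting point (NumPy array)
    alphas   : callable n -> alpha_n giving the nonnegative step sizes
    num_iter : number of iterations to perform
    """
    x = np.asarray(x0, dtype=float)
    iterates = [x]
    for n in range(num_iter):
        a = alphas(n)
        x = (1.0 - a) * x + a * T(x)   # convex combination of x_n and T x_n
        iterates.append(x)
    return iterates
```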

2. Construction of the adaptive Mann iterations

In this section we construct adaptive Mann iterations for strongly pseudocontractive and strongly accretive operators mapping $X$ to $X$. The construction carried out in this section is an improved version of the construction presented in [10]. Let $X$ be a $p$-smooth Banach space, $T : X \to X$ strongly pseudocontractive and suppose $T$ has a fixed point $x^*$. If $T$ has a fixed point, then by the strong pseudocontractivity this fixed point is unique (cf. e.g. [10]). Since $X$ is $p$-smooth and $T$ is strongly pseudocontractive, we have
$$
\begin{aligned}
\|x_{n+1} - x^*\|^p &\le \|x_n - x^*\|^p - \alpha\,p\,\langle x_n - T x_n,\ j_p(x_n - x^*)\rangle + \alpha^p G_p\,\|x_n - T x_n\|^p\\
&= \|x_n - x^*\|^p - \alpha\,p\,\langle x_n - x^* - (T x_n - T x^*),\ j_p(x_n - x^*)\rangle + \alpha^p G_p\,\|x_n - T x_n\|^p\\
&\le \|x_n - x^*\|^p - \alpha\,p\,\|x_n - x^*\|^p + \alpha\,p\,(1 - k)\,\|x_n - x^*\|^p + \alpha^p G_p\,\|x_n - T x_n\|^p.
\end{aligned}
$$
We arrive at the central inequality of our construction
$$\|x_{n+1} - x^*\|^p \le \|x_n - x^*\|^p - \alpha\,p\,k\,\|x_n - x^*\|^p + \alpha^p G_p\,\|x_n - T x_n\|^p. \tag{2}$$
We notice that the right-hand side in (2) is smaller than $\|x_n - x^*\|^p$ as long as
$$0 < \alpha < \alpha_+ := \left( \frac{p\,k\,\|x_n - x^*\|^p}{G_p\,\|x_n - T x_n\|^p} \right)^{\frac{1}{p-1}}. \tag{3}$$
By differentiation we can see that the right-hand side in (2) is minimal for
$$\alpha = \left( \frac{k\,\|x_n - x^*\|^p}{G_p\,\|x_n - T x_n\|^p} \right)^{\frac{1}{p-1}}. \tag{4}$$
By $\tau_p$ we denote the number defined by
$$\tau_p := \left( \frac{1}{p} \right)^{\frac{1}{p-1}}. \tag{5}$$
Then we see that $\alpha = \tau_p\,\alpha_+$.

We do not know the exact value of $\|x_n - x^*\|^p$. Assume that we know an upper bound $R_n$ on $\|x_n - x^*\|^p$, i.e. $\|x_n - x^*\|^p \le R_n$. Then we replace $\|x_n - x^*\|^p$ in (3) by $\beta_n R_n$. We shall impose additional conditions on $\beta_n$ later. For the time being assume that $\beta_n \in (0, 1)$. Notice that our assumption $\|x_n - x^*\|^p \le R_n$ is not a boundedness condition for the operator. In fact, it is still possible that the range of the operator is unbounded. Notice further that we are allowed to overestimate $\|x_n - x^*\|^p$ by an arbitrarily large $R_n$. This is very important in real world applications. Usually some information on the magnitude of $x^*$ is available (e.g. by the structure of the operator). Then it is easy to set $R_n = D(\|x_n\| + C)^p$, where $C$ is the magnitude estimation and $D$ is some big number.

We already noticed that $\alpha = \tau_p\,\alpha_+$. Our estimation of the exact distance $\|x_n - x^*\|^p$ of course also influences the optimal choice of $\tau_p$. Therefore we replace it by some number $t_n \in (0, 1)$.

The introduction of this second parameter $t_n$ is the main improvement over the construction proposed in [10]. We introduce now the auxiliary variables
$$h_n := (pk)^{\frac{p}{p-1}} \left( \frac{R_n}{G_p\,\|x_n - T x_n\|^p} \right)^{\frac{1}{p-1}} \quad\text{and}\quad q := \frac{p}{p-1}. \tag{6}$$
Then all admissible $\alpha$ can be written as $\alpha = \frac{1}{pk}\,h_n\,\beta_n^{q-1}\,t_n$ for some choice of $\beta_n$ and $t_n$, where $\alpha$ is admissible if the related $x_{n+1}$ admits $\|x_{n+1} - x^*\| \le \|x_n - x^*\|$. For any fixed choice of $\alpha = \frac{1}{pk}\,h_n\,\beta_n^{q-1}\,t_n$ with $\beta_n \in (0,1)$ and $t_n \in (0,1)$ we have
$$\alpha^p\,G_p\,\|x_n - T x_n\|^p = (pk)^{-p}\,h_n^{p-1}\,h_n\,\beta_n^{(q-1)p}\,t_n^p\,G_p\,\|x_n - T x_n\|^p = h_n\,\beta_n^q\,t_n^p\,R_n,$$
since $h_n^{p-1} = (pk)^p R_n/(G_p\,\|x_n - T x_n\|^p)$ and $(q-1)p = q$. By (2) we get for $x_n$ with $\|x_n - x^*\|^p \ge \beta_n R_n$ that
$$\|x_{n+1} - x^*\|^p \le \left( 1 - h_n\,t_n\,\beta_n^q + h_n\,\beta_n^q\,t_n^p \right) R_n. \tag{7}$$
On the other hand, if $\|x_n - x^*\|^p < \beta_n R_n$, we conclude again with (2) that
$$\|x_{n+1} - x^*\|^p \le \left( \beta_n + h_n\,\beta_n^q\,t_n^p \right) R_n. \tag{8}$$
The main idea of our construction is to choose $\beta_n$ and $t_n$ optimally, in the sense that the estimates in (7) and (8) are minimized. Thus such optimal $\beta_n$ and $t_n$ minimize
$$\max\{\, 1 - h_n\,t_n\,\beta_n^q + h_n\,\beta_n^q\,t_n^p,\ \beta_n + h_n\,\beta_n^q\,t_n^p \,\}$$
for fixed $h_n$. One can show (cf. Appendix) that the above maximum is minimal for $\beta_n^{opt}$ and $t_n^{opt}$, where $\beta_n^{opt}$ is the solution of the equation
$$\tau_p\,h_n\,\beta^{\frac{p+1}{p-1}} = 1 - \beta \tag{9}$$
and $t_n^{opt}$ is defined via
$$t_n^{opt} = \tau_p\,(\beta_n^{opt})^{q-1},$$
where $\tau_p$ is the same as in (5). Then the optimal value of $\alpha$, which we denote by $\alpha_n^{opt}$, is given by
$$\alpha_n^{opt} = \frac{1}{pk}\,h_n\,(\beta_n^{opt})^{q-1}\,t_n^{opt} = \frac{1}{pk}\,\tau_p\,h_n\,(\beta_n^{opt})^{2(q-1)} = \frac{1 - \beta_n^{opt}}{p\,k\,\beta_n^{opt}}.$$
One can check that
$$1 - h_n\,t_n^{opt}\,(\beta_n^{opt})^q + h_n\,(\beta_n^{opt})^q\,(t_n^{opt})^p = \beta_n^{opt} + h_n\,(\beta_n^{opt})^q\,(t_n^{opt})^p.$$
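For orientation, in the Hilbert space case $p = 2$ one has $q = 2$, $\tau_2 = \tfrac12$ and $h_n = 4k^2 R_n/(G_2\,\|x_n - T x_n\|^2)$, so that (9) and the optimal quantities reduce to
$$\frac{2\,k^2\,R_n}{G_2\,\|x_n - T x_n\|^2}\,\beta^3 = 1 - \beta,\qquad t_n^{opt} = \tfrac12\,\beta_n^{opt},\qquad \alpha_n^{opt} = \frac{1 - \beta_n^{opt}}{2\,k\,\beta_n^{opt}}.$$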

By (7) or (8) we therefore get for $\alpha_n^{opt}$
$$
\|x_{n+1} - x^*\|^p \le \left( \beta_n^{opt} + h_n\,(\beta_n^{opt})^q\,(t_n^{opt})^p \right) R_n
= \left( \beta_n^{opt} + \tfrac{1}{p}\,(1 - \beta_n^{opt})\,\beta_n^{opt} \right) R_n
= \left( \left(1 + \tfrac{1}{p}\right)\beta_n^{opt} - \tfrac{1}{p}\,(\beta_n^{opt})^2 \right) R_n.
$$
Thus the number $R_{n+1}$ defined by
$$R_{n+1} := \left( \left(1 + \tfrac{1}{p}\right)\beta_n^{opt} - \tfrac{1}{p}\,(\beta_n^{opt})^2 \right) R_n \tag{10}$$
fulfills $\|x_{n+1} - x^*\|^p \le R_{n+1} < R_n$, where the right-hand side is true due to the fact that $\beta_n^{opt}$, as defined by (9), can equal one only if $h_n$ is zero, i.e. only if $x_n$ is already a fixed point of $T$. We of course assume that the iteration stops then.

Algorithm 1 (Adaptive Mann iteration).

(S0) Choose an arbitrary $x_0 \in X$ with $\|x_0 - x^*\|^p \le R_0$. Set $n = 0$.

(S1) Stop if $T x_n = x_n$; else compute the unique positive solution $\beta_n$ of the equation
$$\left( \frac{p^{p-1}\,k^p\,R_n}{G_p\,\|x_n - T x_n\|^p} \right)^{\frac{1}{p-1}} \beta^{\frac{p+1}{p-1}} = 1 - \beta. \tag{11}$$

(S2) Set
$$\alpha_n = \frac{1 - \beta_n}{p\,k\,\beta_n}, \qquad R_{n+1} = \left( \left(1 + \tfrac{1}{p}\right)\beta_n - \tfrac{1}{p}\,\beta_n^2 \right) R_n.$$

(S3) Set $x_{n+1} = (1 - \alpha_n)\,x_n + \alpha_n T x_n$.

(S4) Let $n \to n + 1$ and go to step (S1).

The constants $G_p$ and $k$ are as introduced in Section 1. The formulas in (S1) and (S2) follow directly from (3), (6), (9) and (10).
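A minimal Python sketch of Algorithm 1 is given below, assuming that $p$, $k$, $G_p$ and $R_0$ are known; the helper names (adaptive_mann, solve_beta) and the bisection solver for equation (11) are illustrative choices, not prescribed by the paper.

```python
import numpy as np

def solve_beta(c, exponent, tol=1e-12):
    """Unique root in (0, 1) of  c * beta**exponent + beta - 1 = 0,
    i.e. equation (11) with c the coefficient of beta^((p+1)/(p-1))."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if c * mid**exponent + mid - 1.0 < 0.0:
            lo = mid            # the left-hand side is increasing in beta
        else:
            hi = mid
    return 0.5 * (lo + hi)

def adaptive_mann(T, x0, R0, p, k, Gp, num_iter):
    """Sketch of Algorithm 1 for a strongly pseudocontractive T with constant k
    on a p-smooth space with constant Gp; R0 is an upper bound for ||x0 - x*||^p."""
    x = np.asarray(x0, dtype=float)
    R = float(R0)
    iterates = [x]
    for _ in range(num_iter):
        residual = np.linalg.norm(x - T(x))
        if residual == 0.0:                           # (S1): x_n is already a fixed point
            break
        c = (p**(p - 1) * k**p * R / (Gp * residual**p))**(1.0 / (p - 1))
        beta = solve_beta(c, (p + 1.0) / (p - 1.0))   # (S1): solve equation (11)
        alpha = (1.0 - beta) / (p * k * beta)         # (S2)
        R = ((1.0 + 1.0 / p) * beta - beta**2 / p) * R
        x = (1.0 - alpha) * x + alpha * T(x)          # (S3)
        iterates.append(x)
    return iterates
```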

3. Convergence results

In this section we will state several theorems which justify the name of the adaptive Mann iterations. In fact, we will show that Algorithm 1 automatically adapts to the continuity of the underlying operator $T$. In what follows let $h_n$ and $q$ be defined as in (6).

Theorem 1. Let $X$ be a $p$-smooth Banach space and $T : X \to X$ a strongly pseudocontractive map (with fixed point $x^*$), mapping bounded sets onto bounded sets. Let $R_0$ be the initial guess on the distance $\|x_0 - x^*\|^p$. Then the sequence $(x_n)$ defined by the Mann iteration of Algorithm 1 converges strongly to the fixed point of $T$ (if $T$ has a fixed point). The rate of convergence is given by
$$\|x_n - x^*\| \le \frac{C}{n^{\frac{p-1}{p}}}$$
for some $C > 0$.

Proof. For the proof we use a nice trick also used in [1] and [8]. Consider
$$\frac{1}{R_{n+1}^{q-1}} - \frac{1}{R_n^{q-1}} = \frac{1 - \left[ \left(1 + \frac{1}{p}\right)\beta_n - \frac{1}{p}\beta_n^2 \right]^{q-1}}{\left[ \left(1 + \frac{1}{p}\right)\beta_n - \frac{1}{p}\beta_n^2 \right]^{q-1} R_n^{q-1}}.$$
Since $q > 1$ and the bracket lies in $(0, 1)$, we have (using $c^{1-q} - 1 \ge (q-1)(1-c)$ for $c \in (0, 1]$)
$$\frac{1}{R_{n+1}^{q-1}} - \frac{1}{R_n^{q-1}} \ge (q-1)\,\frac{1 - \left(1 + \frac{1}{p}\right)\beta_n + \frac{1}{p}\beta_n^2}{R_n^{q-1}} = (q-1)\,\frac{\left(1 - \frac{1}{p}\beta_n\right)\left(1 - \beta_n\right)}{R_n^{q-1}}.$$
Due to $0 < \beta_n < 1$ we get
$$\frac{1}{R_{n+1}^{q-1}} - \frac{1}{R_n^{q-1}} \ge \frac{(q-1)\,(1 - \beta_n)}{q\,R_n^{q-1}}.$$
By the Taylor expansion of the right-hand side in (9) or (11) (as a function of $\beta_n$), a simple geometric argument or by employing the implicit function theorem, we get that
$$1 - \beta_n \ge \frac{\tau_p\,h_n}{1 + \frac{p+1}{p-1}\,\tau_p\,h_n}.$$
Therefore
$$\frac{1}{R_{n+1}^{q-1}} - \frac{1}{R_n^{q-1}} \ge \frac{(q-1)\,\tau_p\,h_n}{q\left(1 + \frac{p+1}{p-1}\,\tau_p\,h_n\right) R_n^{q-1}}.$$
Notice that
$$\frac{x}{1 + x} \ge \frac{1}{2}\,\min\{1, x\}.$$
Hence
$$\frac{1}{R_{n+1}^{q-1}} - \frac{1}{R_n^{q-1}} \ge \frac{(q-1)(p-1)}{2\,q\,(p+1)}\,\frac{\min\{1,\ \tau_p\,h_n\}}{R_n^{q-1}}.$$

By construction we have $R_n^{q-1} \le R_0^{q-1}$. Therefore the numbers $R_n^{1-q}$ are uniformly bounded away from zero. Since $T$ maps bounded sets onto bounded sets, for $M$ defined by
$$M := \sup\{\, \|T x - T x^*\|^p : x \in X,\ \|x - x^*\|^p \le R_0 \,\}$$
we have that $M < \infty$. Hence
$$\|x_n - T x_n\|^p \le 2^{p-1}\left( \|x_n - x^*\|^p + \|T x^* - T x_n\|^p \right) \le 2^{p-1}(R_0 + M) < \infty.$$
Therefore, by the definition of $h_n$, the numbers $h_n R_n^{1-q}$ are also uniformly bounded away from zero. Thus
$$\frac{1}{R_{n+1}^{q-1}} - \frac{1}{R_n^{q-1}} \ge c$$
for some $c > 0$. We have then
$$\frac{1}{R_{n+1}^{q-1}} = \frac{1}{R_0^{q-1}} + \sum_{k=0}^{n}\left( \frac{1}{R_{k+1}^{q-1}} - \frac{1}{R_k^{q-1}} \right) \ge (n+1)\,c.$$
Finally, we arrive at
$$\|x_n - x^*\| \le R_n^{1/p} \le \frac{C}{n^{\frac{p-1}{p}}},$$
since $\frac{1}{(q-1)p} = \frac{p-1}{p}$.

Remark 1. Let us mention:

1. Notice that the proof of strong convergence (without any convergence rate) can be carried out in the same way as the proof of Theorem 3.2 in [10].

2. We also remark that the theorem is also true if $T$ is only locally strongly accretive.

3. The crucial point in the estimations above is that $M$ (as defined above) is finite. It would therefore suffice to demand that the range of the $R_0$-norm ball around the fixed point is bounded, instead of the slightly more restrictive assumption that the range of all bounded sets is bounded.

We can now prove that the adaptive Mann iteration of Algorithm 1 automatically adapts to the continuity of $T$.

Theorem 2. Assume that, additionally to the assumptions made in Theorem 1, the operator $T$ is (locally) Hölder continuous, i.e. for every bounded set there exist some $0 < \gamma < 1$ and $\kappa > 0$ such that
$$\|T x - T y\| \le \kappa\,\|x - y\|^{\gamma}$$
for all $x, y$ in that bounded set. Then the Mann iteration converges as
$$\|x_n - x^*\| \le \frac{C}{n^{\frac{p-1}{(1-\gamma)p}}}$$
for some $C > 0$.

Proof. Along the lines of the proof of Theorem 1 we get
$$\frac{1}{R_{n+1}^{(1-\gamma)(q-1)}} - \frac{1}{R_n^{(1-\gamma)(q-1)}} \ge \frac{c\,\tau_p\,h_n}{\left(1 + \frac{p+1}{p-1}\,\tau_p\,h_n\right) R_n^{(1-\gamma)(q-1)}}$$
for some constant $c > 0$. We notice that since the identity operator is Lipschitz, it is also locally Hölder continuous. Therefore the operator $I - T$ is also locally Hölder continuous. We get then
$$\|x_n - T x_n\|^p = \|(I - T)x_n - (I - T)x^*\|^p \le C\,\|x_n - x^*\|^{\gamma p} \le C\,R_n^{\gamma}$$
for some generic constant $C > 0$. Hence,
$$h_n \ge \frac{1}{C}\,R_n^{(1-\gamma)(q-1)}.$$
Therefore, as in the proof of Theorem 1, we conclude
$$\frac{1}{R_{n+1}^{(1-\gamma)(q-1)}} - \frac{1}{R_n^{(1-\gamma)(q-1)}} \ge \frac{1}{C}.$$
Finally, we arrive at
$$\|x_n - x^*\| \le R_n^{1/p} \le \frac{C}{n^{\frac{p-1}{(1-\gamma)p}}}.$$

Remark 2. Let us mention:

1. Notice that the Hölder exponent $\gamma$ does not have to be given to Algorithm 1. Despite the fact that the algorithm does not have the information about the Hölder continuity, it is automatically able to achieve a faster convergence rate than for merely bounded operators. We think that this behavior of Algorithm 1 justifies the adaptivity in the name.

2. We would also like to remark that the last theorem implies that Algorithm 1 may perform better than the algorithm proposed by Chidume in [5] for certain operators.

Theorem 3. Assume that, additionally to the assumptions made in Theorem 1, the operator $T$ is (locally) Lipschitz continuous, i.e. for every bounded set there exists some $\kappa > 0$ such that $\|T x - T y\| \le \kappa\,\|x - y\|$ for all $x, y$ in that bounded set. Then the Mann iteration converges (almost) linearly, i.e. there exists a constant $C > 0$ such that
$$\|x_n - x^*\| \le C\,\exp(-n/C).$$

Proof. For this very special case the proof is rather simple. Consider
$$R_{n+1} = \left( \left(1 + \tfrac{1}{p}\right)\beta_n - \tfrac{1}{p}\,\beta_n^2 \right) R_n.$$
We show that the $\beta_n$ are bounded away from one. From (9) we see that $\beta_n \to 1$ only if $h_n \to 0$. With $T$ also $I - T$ is (locally) Lipschitz. Consider therefore, with some generic constant $C > 0$,
$$h_n^{1-p} \le \frac{C}{R_n}\,\|x_n - T x_n\|^p = \frac{C}{R_n}\,\|(I - T)x_n - (I - T)x^*\|^p \le \frac{C}{R_n}\,R_n \le C.$$
Hence there exists some $\Gamma < 1$ such that $R_{n+1} \le \Gamma R_n$ for all $n$. Therefore the sequence $(R_n)$ converges geometrically. But $(R_n)$ also majorizes the sequence $(\|x_n - x^*\|^p)$, which proves the claim.

Remark 3. Again we remark that Algorithm 1 does not need the information about the Lipschitz continuity to converge (almost) linearly.

Notice that the results of Theorem 1 extend canonically to strongly accretive mappings. Therefore, as a consequence of Theorem 1 we can prove the following corollary:

Corollary 1. Let $X$ be a $p$-smooth Banach space, $f \in X$ and $S : X \to X$ strongly accretive. Suppose $S$ maps bounded sets onto bounded sets and suppose the equation $Sx = f$ has a solution. Then this solution is unique and the sequence $(x_n)$ defined by the Mann iteration of Algorithm 1 with $T : X \to X$ and $T(x) = f + x - Sx$ converges strongly to this unique solution $x^*$ with the rate
$$\|x_n - x^*\| \le \frac{C}{n^{\frac{p-1}{p}}}.$$
If in addition $S$ is also $\gamma$-Hölder continuous, then the rate improves to
$$\|x_n - x^*\| \le \frac{C}{n^{\frac{p-1}{(1-\gamma)p}}}.$$
If $S$ is Lipschitz continuous, then the $(x_n)$ converge linearly to the solution $x^*$.

Remark 4. If in addition to strong pseudocontractivity in Theorem 1 (resp. strong accretivity in Corollary 1) we assume that $T$ (resp. $S$) is Lipschitz continuous, then the existence of the fixed point of $T$ (resp. of the solution of $Sx = f$) follows from [2, 6]. We remark that our approach can also be extended to set-valued strong pseudocontractions and strongly accretive mappings without any difficulties.
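As a concrete reading of these rates, in the Hilbert space case $p = 2$ the three regimes of Theorems 1-3 (and of Corollary 1) give
$$\|x_n - x^*\| \le \frac{C}{\sqrt{n}},\qquad \|x_n - x^*\| \le \frac{C}{n^{\frac{1}{2(1-\gamma)}}},\qquad \|x_n - x^*\| \le C\,\exp(-n/C)$$
for merely bounded, $\gamma$-Hölder continuous and Lipschitz continuous operators, respectively.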

4. Numerical example

In this section we consider a simple example to visualize the strength of the adaptive Mann iterations. We consider the rotation
$$\mathrm{Rot}_{a,\varphi} = \begin{pmatrix} \cos(\varphi) & -a\,\sin(\varphi) \\ a\,\sin(\varphi) & \cos(\varphi) \end{pmatrix}, \qquad a \ge 1,$$
as a mapping from $X$ to $X$, where $X$ is the space $\mathbb{R}^2$ equipped with the Euclidean norm. Hence the space $X$ is a Hilbert space. One can check that $\mathrm{Rot}_{a,\varphi}$ is strongly accretive with $k = 1 - \cos(\varphi)$. On the other hand, $\mathrm{Rot}_{a,\varphi}$ is not a contraction for $a > 1$, since
$$\|\mathrm{Rot}_{a,\varphi}\,x\|^2 = \left( \cos(\varphi)^2 + a^2 \sin(\varphi)^2 \right) \|x\|^2.$$
Further, it is clear that the fixed point is $(0, 0)$. In all our simulations the starting point was $x_0 = (1, 0)^T$ and the value of $a$ was 10.

A classical assumption on the step-sizes $\alpha_n$ is given by
$$\sum_{n=0}^{\infty} \alpha_n^2 < \infty \qquad\text{and}\qquad \sum_{n=0}^{\infty} \alpha_n = \infty. \tag{12}$$
Hence a classical candidate for a step-size is $\alpha_n = 1/n$. However, if condition (12) is fulfilled for some $(\alpha_n)$, then it is also fulfilled for all $c\,(\alpha_n)$ with $0 < c < 1$.

[Figure 1: two panels, "Results for c = 1" (left) and "Results for c = 0.1" (right). Convergence results of the standard step-size rule $\alpha_n = c/n$ for the first 1000 iterations and variable values of $\varphi$ from 0.1 to 1.0 in 0.1 steps. The y-axis denotes the squared distance of the current iterate to the fixed point.]
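The standard part of this experiment can be sketched in a few lines of Python; the code below reuses the mann_iteration sketch from Section 1, and the names make_rotation, S and T are illustrative only.

```python
import numpy as np

def make_rotation(a, phi):
    """Rot_{a,phi} from above; not a contraction for a > 1."""
    return np.array([[np.cos(phi), -a * np.sin(phi)],
                     [a * np.sin(phi), np.cos(phi)]])

a, c = 10.0, 0.1                        # a = 10 as in the simulations; c scales the classical rule
x0 = np.array([1.0, 0.0])               # starting point x_0 = (1, 0)^T

for phi in np.arange(0.1, 1.01, 0.1):   # phi from 0.1 to 1.0 in 0.1 steps
    S = make_rotation(a, phi)
    T = lambda x, S=S: x - S @ x        # T = I - S; its unique fixed point is (0, 0)
    # standard Mann iteration with alpha_n = c/(n+1) (shifted by one to avoid division by zero)
    xs = mann_iteration(T, x0, lambda n: c / (n + 1), 1000)
    print(phi, np.linalg.norm(xs[-1])**2)   # squared distance to the fixed point
```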

In the first step of our analysis we have to clarify how the choice of a particular $c$ influences the behavior of the Mann iteration. A typical result is displayed in Figure 1. We see that for the choice $c = 1$ the iterates diverge from the fixed point in the first few iterations. They begin to converge only in the late iterations. We further see that the convergence is lethargic, in the sense that the lines representing the convergence are very flat. This behavior was to be expected. We further observe that for big values of $\varphi$ the iterates diverge more than for small values. And in general they converge rather slowly. This is remarkable, since the bigger $\varphi$, the more accretive the problem. Therefore we expect the iterates to converge faster with increasing value of $\varphi$. We conclude that for $c = 1$ the iterates behave counterintuitively. However, the expected behavior, that stronger accretiveness (bigger $k$) implies faster convergence, appears if we choose $c = 0.1$. Although we do not visualize it here, we remark that if the value of $c$ is chosen too small, the results again become counterintuitive. Altogether we see that the right choice of the constant $c$ is crucial in applications. However, the standard convergence proofs do not deliver any information on the appropriate choice of $c$.

This is completely different for the adaptive Mann iterations, as we can see in Figure 2. First we have chosen the initial estimate of the distance to be 1, which is the true distance of the start point $x_0 = (1, 0)^T$ to the fixed point $(0, 0)$. We see that for adaptive Mann iterations the full information on accretivity is used immediately by the iteration. The more accretive the mapping, the faster the convergence of the Mann iterations. On the other hand, the information about the exact distance to the fixed point is usually not available. However, usually some sensible estimate of the magnitude of the distance is available due to the constraints of the problem. What happens if we overestimate the value $R_0$ by several orders of magnitude? An example is again given in Figure 2. We have tested our results for $R_0 = 100$. Hence we overestimated the distance function by two orders of magnitude. But again the accretiveness is transported by means of convergence speed. The more accretive the mapping, the faster the convergence. Altogether we see that the results for the adaptive Mann iterations always agree with the intuition.

[Figure 2: two panels, "Results for R_0 = 1" (left) and "Results for R_0 = 100" (right). Convergence results of adaptive Mann iterations for the first 1000 iterations and variable values of $\varphi$ from 0.1 to 1.0 in 0.1 steps. The y-axis denotes the squared error of the current iterate to the fixed point.]
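A sketch of the adaptive runs of Figure 2, assuming the adaptive_mann sketch given after Algorithm 1 is in scope; for this Hilbert space example $p = 2$, $G_2 = 1$ and $k = 1 - \cos(\varphi)$ are used, and $\varphi = 0.5$ is merely a sample value from the tested range.

```python
import numpy as np

a, phi = 10.0, 0.5
S = np.array([[np.cos(phi), -a * np.sin(phi)],
              [a * np.sin(phi), np.cos(phi)]])
T = lambda x: x - S @ x                              # T = I - S as in Corollary 1 (f = 0)

for R0 in (1.0, 100.0):                              # exact guess vs. overestimate by two orders of magnitude
    xs = adaptive_mann(T, np.array([1.0, 0.0]), R0,
                       p=2, k=1.0 - np.cos(phi), Gp=1.0, num_iter=1000)
    print(R0, np.linalg.norm(xs[-1])**2)             # squared error after 1000 iterations
```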

At last we notice that the adaptive Mann iterations converge faster than the standard version. The slope of the distances in Figure 1 (representing the standard Mann iteration) is getting flatter and flatter with increasing number of iterations. This is not the case for the adaptive Mann iterations in Figure 2. This was to be expected after the results of Theorem 3. We summarize: in our numerical simulations adaptive Mann iterations converged faster than standard Mann iterations. And the results of the adaptive Mann iterations with respect to the dependency on the accretivity were more intuitive in comparison with the standard Mann iterations. Altogether we dare say that for the class of operators discussed in this paper (i.e. strongly accretive) the adaptive Mann iterations have better convergence properties than the standard Mann iterations.

5. Appendix

We show that for fixed $h_n$ the maximum
$$\max\{\, 1 - h_n\,t_n\,\beta_n^q + h_n\,\beta_n^q\,t_n^p,\ \beta_n + h_n\,\beta_n^q\,t_n^p \,\} \tag{13}$$
is minimal for $\beta_n^{opt}$ and $t_n^{opt}$, with $\beta_n^{opt}$ the solution of the equation
$$\tau_p\,h_n\,\beta^{\frac{p+1}{p-1}} = 1 - \beta \tag{14}$$
and $t_n^{opt}$ defined via
$$t_n^{opt} = \tau_p\,(\beta_n^{opt})^{q-1},$$
where $\tau_p$ is the same as in (5). We consider the function
$$(\beta_n, t_n) \mapsto \max\{\, 1 - h_n\,t_n\,\beta_n^q + h_n\,\beta_n^q\,t_n^p,\ \beta_n + h_n\,\beta_n^q\,t_n^p \,\}.$$
One can easily see that for small $\beta_n$ and small $t_n$ the maximum is dominated by the first term. For large $\beta_n$ and large $t_n$ the maximum is dominated by the second term. We use this observation in our analysis. For fixed $h_n$ and $\beta_n$ the function
$$t_n \mapsto 1 - h_n\,t_n\,\beta_n^q + h_n\,\beta_n^q\,t_n^p \tag{15}$$
is minimal for $t_n = \tau_p$. Next we notice that the function
$$\beta_n \mapsto 1 - h_n\,\tau_p\,\beta_n^q + h_n\,\beta_n^q\,\tau_p^p$$
is monotonically decreasing in $\beta_n$.

We also notice that for $t_n = \tau_p$ the second term of maximum (13), i.e. the function
$$\beta_n \mapsto \beta_n + h_n\,\beta_n^q\,\tau_p^p,$$
is monotonically increasing. Hence the first term of maximum (13) is bigger than the second as long as
$$1 - h_n\,\tau_p\,\beta_n^q + h_n\,\beta_n^q\,\tau_p^p \ge \beta_n + h_n\,\beta_n^q\,\tau_p^p,$$
or equivalently
$$\beta_n \le 1 - h_n\,\tau_p\,\beta_n^q.$$
We know that for $\beta_n \in (0, 1)$ this inequality is fulfilled in some subinterval $(0, \tilde{\beta}_n)$, where $\tilde{\beta}_n$ is the solution of
$$\tau_p\,h_n\,\beta^q = 1 - \beta. \tag{16}$$
Altogether we have proven that for $(\beta_n, t_n) \in (0, \tilde{\beta}_n) \times (0, 1)$ the maximum (13) is minimal for $(\beta_n, t_n) = (\tilde{\beta}_n, \tau_p)$.

Next we consider the region $(\tilde{\beta}_n, 1) \times (0, 1)$. We notice that for fixed $\beta_n > \tilde{\beta}_n$ the function
$$t_n \mapsto \beta_n + h_n\,\beta_n^q\,t_n^p \tag{17}$$
is monotonically increasing. On the other hand, for small values of $t_n$ the function in (17) is bigger than the function in (15). From the above considerations we know that for $t_n = \tau_p$ we have
$$1 - h_n\,t_n\,\beta_n^q + h_n\,\beta_n^q\,t_n^p < \beta_n + h_n\,\beta_n^q\,t_n^p.$$
And we know that in the interval $(0, \tau_p)$ the function in (15) is monotonically decreasing. Together, we have that for fixed $\beta_n > \tilde{\beta}_n$ maximum (13) is minimal if
$$1 - h_n\,t_n\,\beta_n^q + h_n\,\beta_n^q\,t_n^p = \beta_n + h_n\,\beta_n^q\,t_n^p,$$
or equivalently
$$\beta_n = 1 - h_n\,t_n\,\beta_n^q. \tag{18}$$
We now consider the second term in maximum (13) along the line of $(\beta_n, t_n)$ fulfilling (18). (Of course we could also consider the first term, since along line (18) both sides have the same value.) We have then
$$\beta_n + h_n\,\beta_n^q\,t_n^p = \beta_n + h_n\,\beta_n^q\left( \frac{1 - \beta_n}{h_n\,\beta_n^q} \right)^p.$$
The right-hand side of the last equation, as a function of $\beta_n$, is minimal for $\beta_n$ fulfilling
$$\tau_p\,h_n\,\beta^{\frac{p+1}{p-1}} = 1 - \beta. \tag{19}$$
We denote the solution of the above equation by $\beta_n^+$. Then the point $(\beta_n^+, t_n^+)$ with $t_n^+$ defined by
$$t_n^+ = \tau_p\,(\beta_n^+)^{q-1}$$
is located on the line described by (18). We notice that
$$q = \frac{p}{p-1} < \frac{p+1}{p-1}.$$

Therefore, comparing (16) and (19), we see that $\tilde{\beta}_n < \beta_n^+$. Hence, for $(\beta_n, t_n) \in (\tilde{\beta}_n, 1) \times (0, 1)$ maximum (13) is minimal for $(\beta_n, t_n) = (\beta_n^+, t_n^+)$. Since the point $(\tilde{\beta}_n, \tau_p)$ is also located on the line fulfilling (18), we get that $(\beta_n^+, t_n^+)$ is the minimizer of maximum (13) in the whole square $(0, 1) \times (0, 1)$. This shows the assertion.

References

[1] K. Bredies, D. A. Lorenz, Iterated hard shrinkage for minimization problems with sparsity constraints, SIAM Journal on Scientific Computing, to appear.
[2] F. Browder, Nonlinear mappings of nonexpansive and accretive type in Banach spaces, Bull. Amer. Math. Soc. 73(1967), 875-882.
[3] C. E. Chidume, An iterative process for nonlinear Lipschitzian strongly accretive mappings in L_p spaces, J. Math. Anal. Appl. 151(1990), 453-461.
[4] C. E. Chidume, Iterative solution of nonlinear equations with strongly accretive operators, J. Math. Anal. Appl. 192(1995), 502-518.
[5] C. Chidume, Steepest descent method for locally accretive mappings, J. Korean Math. Soc. 33(1996), 1-12.
[6] K. Deimling, Zeros of accretive operators, Manuscripta Math. 13(1974), 365-374.
[7] L. Deng, An iterative process for nonlinear Lipschitz and strongly accretive mappings in uniformly convex and uniformly smooth Banach spaces, Nonlinear Analysis 24(1995), 981-987.
[8] J. C. Dunn, Rates of convergence for conditional gradient algorithms near singular and nonsingular extremals, SIAM Journal on Control and Optimization 17(1979), 187-211.
[9] O. Hanner, On the uniform convexity of L^p and l^p, Ark. Mat. 3(1956), 239-244.
[10] K. S. Kazimierski, Adaptive Mann iterations for nonlinear accretive and pseudocontractive operator equations, Mathematical Communications 13(2008), 33-44.
[11] W. Mann, Mean value methods in iteration, Proc. Amer. Math. Soc. 4(1953), 506-510.
[12] M. A. Noor, Some developments in general variational inequalities, Appl. Math. Comput. 152(2007), 199-277.
[13] M. O. Osilike, Ishikawa and Mann iteration methods with errors for nonlinear equations of the accretive type, J. Math. Anal. Appl. 213(1997), 91-105.
[14] A. Rafiq, Mann iterative scheme for nonlinear equations, Mathematical Communications 12(2007), 25-31.
[15] Z.-B. Xu, G. Roach, Characteristic inequalities of uniformly convex and uniformly smooth Banach spaces, J. Math. Anal. Appl. 157(1991), 189-210.