Parallel Cimmino-type methods for ill-posed problems


1 Parallel Cimmino-type methods for ill-posed problems. Cao Van Chung. Seminar of the Centro de Modelización Matemática (ModeMat), Escuela Politécnica Nacional (EPN), Quito, Ecuador.


3 Outline
1 Introduction
2 Parallel regularization proximal methods
3 Parallel Newton-type methods
4 Parallel hybrid projection-iteration methods
5 Applications
6 References

4 Introduction
Problem: solve the system
A_i(x) = F_i(x) − f_i = 0, i = 1, …, N, (∗)
or the single equation
A(x) = Σ_{i=1}^N A_i(x) = Σ_{i=1}^N (F_i(x) − f_i) = 0. (∗∗)
Here F_i : X → Y; X is a Banach space and Y = X*, or X = Y = H, a Hilbert space; the f_i ∈ Y are given. Problems (∗) and (∗∗) are considered in the ill-posed case.

5 Ill-posed problems and regularization
Ill-posed problem: the problem F(x) = f is well-posed (in Hadamard's sense) iff:
(i) for every f ∈ Y there exists x_f with F(x_f) = f;
(ii) x_f is unique for each f;
(iii) x_f depends continuously on f.
Otherwise the problem is ill-posed.
Regularization techniques:
Tikhonov: regularized problem min_{x} { ‖F(x) − f‖²_Y + α‖x‖²_X }, α > 0.
Lavrentiev: for F : H → H, H a Hilbert space, regularized equation A(x) + αx = F(x) − f + αx = 0, α > 0, where F is monotone: ⟨F(x) − F(y), x − y⟩ ≥ 0 for all x, y ∈ Dom(F).
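As a minimal numerical illustration of the Lavrentiev idea (a sketch in R³ standing in for H; the singular matrix and data below are invented for this example), one can regularize a singular monotone linear problem Ax = f and watch x_α approach the minimum-norm solution as α → 0:

```python
import numpy as np

# Lavrentiev regularization A(x) + alpha*x = 0 for the linear monotone problem
# A x = f, with A symmetric positive semidefinite and singular (hence ill-posed).
A = np.diag([1.0, 2.0, 0.0])          # A = A^T >= 0, singular
f = np.array([1.0, 2.0, 0.0])         # consistent data: solution set is (1, 1, t)
x_dagger = np.array([1.0, 1.0, 0.0])  # minimum-norm solution

errors = []
for alpha in [1e-1, 1e-2, 1e-3, 1e-4]:
    # the regularized equation (A + alpha*I) x = f is uniquely solvable
    x_alpha = np.linalg.solve(A + alpha * np.eye(3), f)
    errors.append(np.linalg.norm(x_alpha - x_dagger))
# errors decrease monotonically toward 0 as alpha -> 0
```

The unconstrained third coordinate, which makes the unregularized problem non-unique, is forced to 0 by the αx term, which is exactly the minimum-norm selection mechanism used throughout the talk.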

6 Operator splitting-up techniques
Solve Ax = b, A ∈ R^{m×m}, b ∈ R^m given. For k ≥ 0, let a_i denote the i-th row of A.
S. Kaczmarz (1937):
x^{k+1} = x^k + ((b_[k] − ⟨a_[k], x^k⟩)/‖a_[k]‖²) a_[k], where [k] = (k mod m) + 1.
G. Cimmino (1938):
x_k^i = x_k + ((b_i − ⟨a_i, x_k⟩)/‖a_i‖²) a_i, i = 1, …, m;
x_{k+1} = (1/m) Σ_{i=1}^m x_k^i.
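The two schemes can be sketched concretely (an invented 2×2 consistent system; row norms assumed nonzero):

```python
import numpy as np

def kaczmarz(A, b, x0, sweeps=200):
    # Kaczmarz: project the current iterate onto one row hyperplane at a time,
    # cycling through the rows ([k] = (k mod m) + 1 in the 1-based notation).
    m = A.shape[0]
    x = x0.astype(float).copy()
    for k in range(sweeps * m):
        a = A[k % m]
        x = x + (b[k % m] - a @ x) / (a @ a) * a
    return x

def cimmino(A, b, x0, iters=500):
    # Cimmino: project onto all row hyperplanes simultaneously, then average:
    # x_k^i = x_k + (b_i - <a_i, x_k>)/||a_i||^2 * a_i;  x_{k+1} = mean_i x_k^i.
    x = x0.astype(float).copy()
    row_sq = np.einsum('ij,ij->i', A, A)     # squared row norms
    for _ in range(iters):
        resid = (b - A @ x) / row_sq
        x = x + (A * resid[:, None]).mean(axis=0)
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])                     # consistent system, solution (1, 1)
x_kacz = kaczmarz(A, b, np.zeros(2))
x_cimm = cimmino(A, b, np.zeros(2))
```

Kaczmarz is inherently sequential, while the Cimmino sub-steps are independent and can be computed in parallel, which is the structural point exploited by all the parallel methods below.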

7 Operator splitting up techniques Figure: Two splitting up techniques

8 Proposed methods
Parallel regularization proximal methods:
- Parallel Implicit Iterative Regularization (PIIR) method
- Parallel Explicit Iterative Regularization (PEIR) method
Parallel Newton-type methods:
- Parallel Regularization Newton-Kantorovich (PRNK) method
- Parallel Regularization Gauss-Newton (PRGN) method
Parallel hybrid projection-iteration methods:
- Parallel Proximal-Projection (PPXP) method
- Parallel Hybrid CQ-Projection (PCQP) method

9 Operator splitting up techniques


11 Parallel regularization proximal methods
Solve the system in a Hilbert space H:
A_i(x) = F_i(x) − f_i = 0, i = 1, …, N, (1.1)
where the A_i : H → H are inverse-strongly monotone operators.
Inverse-strongly monotone (ISM) operator: for some c > 0,
⟨A(x) − A(y), x − y⟩ ≥ c ‖A(x) − A(y)‖² for all x, y ∈ Dom(A) ⊆ H.
Examples:
- A : H → H linear, compact, A ≥ 0, A = A* ⟹ A is ISM.
- The metric projector P_C onto a closed convex set C ⊆ H, and A = I − P_C, are ISM.
- If A : H → H satisfies, for some L, c > 0 and all x, y ∈ Dom(A) ⊆ H, ‖A(x) − A(y)‖ ≤ L‖x − y‖ and ⟨A(x) − A(y), x − y⟩ ≥ c‖x − y‖² (Lipschitz continuity plus strong monotonicity), then A is ISM.
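The second example can be checked numerically. Below is a sketch (random samples in R³; the unit ball is our invented choice of C) verifying the ISM inequality with constant c = 1 for A = I − P_C, which holds because I − P_C is firmly nonexpansive:

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_unit_ball(x):
    # Metric projection P_C onto the closed unit ball in R^n.
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def A(x):
    return x - proj_unit_ball(x)   # A = I - P_C

# Sample the inequality <A(x)-A(y), x-y> >= 1 * ||A(x)-A(y)||^2 at random points.
worst_gap = np.inf
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    d = A(x) - A(y)
    worst_gap = min(worst_gap, np.dot(d, x - y) - np.dot(d, d))
# worst_gap stays >= 0 (up to roundoff): the ISM inequality holds with c = 1
```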

12 Idea and preliminaries
Regularization: let {α_k} ⊂ R_+, α_k → 0, and consider the Lavrentiev regularization
Σ_{i=1}^N A_i(x) + α_k x = 0. (1.2)
If S_A := { z ∈ H : Σ_{i=1}^N A_i(z) = 0 } ≠ ∅, then for each k ∈ N there is a unique x_k* with Σ_{i=1}^N A_i(x_k*) + α_k x_k* = 0, and
lim_{k→∞} x_k* = x† := argmin_{z ∈ S_A} ‖z‖.
Moreover ‖x_k*‖ ≤ ‖x†‖ and ‖x_k* − x_{k+1}*‖ ≤ ((α_k − α_{k+1})/α_k) ‖x†‖.
If the A_i are ISM, then ‖A_i(x_k*)‖² ≤ (2/c_i) α_k ‖x†‖², for i = 1, …, N.

13 Implicit Iterative Regularization (PIIR) method
Idea:
Proximal method: find x_{k+1} from A(x_{k+1}) + γ(x_{k+1} − x_k) = 0 (γ > 0, at step k ∈ N); then x_k → x* with A(x*) = 0 (1).
Split (1.2) into the sub-equations A_i(x_k^i) + (α_k/N) x_k^i = 0 and apply one step of the proximal method:
A_i(x_k^i) + (α_k/N) x_k^i + γ_k (x_k^i − x_k) = 0, i = 1, …, N.
PIIR method in the noise-free case:
A_i(x_k^i) + (α_k/N + γ_k) x_k^i = γ_k x_k, i = 1, 2, …, N; (1.3)
x_{k+1} = (1/N) Σ_{i=1}^N x_k^i, k = 0, 1, 2, …
(1) cf. works of Rockafellar R.T., Ryazantseva I.P., Xu H.-K.

14 Implicit Iterative Regularization (PIIR) method
Theorem 1.1: Assume S := { z ∈ H : A_i(z) = 0, i = 1, …, N } ≠ ∅, α_k → 0, γ_k → +∞ as k → ∞, and
γ_k(α_k − α_{k+1})/α_k² → 0; Σ_{k=0}^∞ α_k/γ_k = +∞.
Then PIIR converges: x_k → x† as k → +∞.
Theorem 1.2: If S = ∅, then under the same assumptions as in Theorem 1.1 together with α_k γ_k → +∞ as k → ∞, x_k converges to the minimum-norm least-squares solution, i.e. the minimizer of ‖x‖ among the solutions of
Σ_{i=1}^N ‖A_i(x)‖² = Σ_{i=1}^N ‖F_i(x) − f_i‖² → min_x.

15 Explicit Iterative Regularization (PEIR) method
Each problem in (1.3) leads to the fixed-point equation
x = T_k^i(x) := x_k − (1/γ_k) [ A_i(x) + (α_k/N) x ],
with ‖T_k^i(x) − T_k^i(y)‖ ≤ ((Nc + α_k)/(Nγ_k)) ‖x − y‖, so T_k^i is a contraction once γ_k is large enough (c being a Lipschitz constant of the A_i).
PEIR method in the noise-free case: for arbitrary m_k ≥ 1,
z_{k,i}^{l+1} := z_k − (1/γ_k) [ A_i(z_{k,i}^l) + (α_k/N) z_{k,i}^l ], l = 0, 1, …, m_k − 1; z_{k,i}^0 := z_k;
z_{k+1} := (1/N) Σ_{i=1}^N z_{k,i}^{m_k}, k = 0, 1, 2, …

16 Explicit Iterative Regularization (PEIR) method
With m_k = 1 and N = 1, setting β_k := 1/(Nγ_k), PEIR reduces to the simple iterative regularization method
x_{k+1} = x_k − β_k [ A(x_k) + α_k x_k ].
Theorem 1.3: Suppose {α_k}, {γ_k} are as in Theorem 1.2. Moreover, assume S ≠ ∅ and (Nc + α_k)/γ_k ≤ q < 1. Then PEIR converges: z_k → x† as k → +∞.
If S = ∅, then z_k converges to the minimizer of ‖x‖ among the solutions of
Σ_{i=1}^N ‖A_i(x)‖² = Σ_{i=1}^N ‖F_i(x) − f_i‖² → min_x.

17 Parallel regularization proximal methods: inexact data
Inexact data: f_{k,i}, F_{k,i} are noisy versions of f_i, F_i at step k:
‖F_{k,i}(x) − F_i(x)‖ ≤ h_k g(‖x‖) for all x ∈ H; ‖f_{k,i} − f_i‖ ≤ δ_k, with h_k, δ_k > 0, k = 1, 2, …, and g : R_+ → R_+ an increasing function. Set A_{k,i}(x) := F_{k,i}(x) − f_{k,i}.
PIIR method in the noisy case:
A_{k,i}(z_k^i) + (α_k/N + γ_k) z_k^i = γ_k z_k, i = 1, 2, …, N;
z_{k+1} = (1/N) Σ_{i=1}^N z_k^i, k = 0, 1, 2, …

18 Parallel regularization proximal methods: inexact data
Theorem 1.4: Let {α_k}, {γ_k} satisfy all the assumptions of Theorem 1.2. Moreover, assume that for all k ∈ N and some m_1 > 0:
γ_k(α_k − α_{k+1})/α_k³ ≤ m_1 γ_0/α_0²; γ_k α_k² ≤ γ_0 α_0²;
(h_k g(‖x†‖) + δ_k)/α_k → 0; (h_k g(‖x†‖) + δ_k)/γ_k ≤ (h_0 g(‖x†‖) + δ_0)/γ_0;
and (1 − 4m_1 + m_1²)α_0 > 4m_1 Nγ_0, α_0 ≤ Nγ_0, and ‖x†‖² ≤ l α_0², where l > 0 is an explicit constant depending on N, c, m_1, α_0, γ_0, h_0, δ_0, g and ‖x†‖.
Then, starting from z_0 = 0, ‖z_k − x_k*‖ ≤ α_k √l for all k, and hence z_k → x†.

19 Parallel regularization proximal methods: inexact data
Exact-operator case: h_k ≡ 0 (so F_{k,i} ≡ F_i) and ‖f_{δ,i} − f_i‖ ≤ δ, δ > 0. Write F(x) := Σ_{i=1}^N F_i(x), f := Σ_{i=1}^N f_i and f_δ := Σ_{i=1}^N f_{δ,i}.
Theorem 1.5 (a-posteriori rule): Let {α_k}, {γ_k} be as in Theorem 1.2, and suppose that for all k ∈ N and some m_1 > 0, η ∈ (0, 1]:
α_k/α_{k+1} ≤ 2; γ_k(α_k − α_{k+1})/α_k³ ≤ m_1 γ_0/α_0²; γ_k α_k² ≤ γ_0 α_0²; ‖F(0) − f_δ‖ > C_1 δ^η (C_1 > 1).
Moreover, (1 − 4m_1 + m_1²)α_0 > 4m_1 Nγ_0, α_0 ≤ Nγ_0, and ‖x†‖² ≤ l α_0².

20 Parallel regularization proximal methods: inexact data
Theorem 1.5 (a-posteriori rule), continued: here l > 0 is an explicit constant depending on N, c, m_1, C_1, α_0, γ_0 and ‖x†‖, and C := (C_1 + 1)/2 > 1. Then, starting from z_0 = 0, there is a stopping index k_δ ∈ N such that:
1. ‖F(z_{k_δ}) − f_δ‖ ≤ C_1 δ^η < ‖F(z_k) − f_δ‖ for all k ∈ N with 0 ≤ k < k_δ;
2. If k_δ stays bounded as δ → 0, then lim_{δ→0} z_{k_δ} = y ∈ S := {x ∈ H : F(x) = f}. Otherwise, if lim_{δ→0} k_δ = ∞ and η < 1, then lim_{δ→0} z_{k_δ} = x†.

21 Parallel regularization proximal methods applied to (∗∗)
Apply the methods to the equation in a Hilbert space H:
A(x) = Σ_{i=1}^N A_i(x) = Σ_{i=1}^N (F_i(x) − f_i) = 0, (∗∗)
with F_i : H → H continuous and monotone. Denote S_A := {z ∈ Dom(A) ⊆ H : A(z) = 0}.
Corollary 1.1: If S_A ≠ ∅, then under the assumptions of Theorem 1.2, x_k from (1.3) converges to x† := argmin_{z ∈ S_A} ‖z‖.
Otherwise, if S_A = ∅, x_k converges to the minimizer of ‖x‖ among the solutions of
‖A(x)‖² = ‖Σ_{i=1}^N (F_i(x) − f_i)‖² → min_x.


23 Regularized Newton-Kantorovich (PRNK) method
Problem setting: solve equation (∗∗),
A(x) = Σ_{i=1}^N A_i(x) = Σ_{i=1}^N (F_i(x) − f_i) = 0,
where the F_i : H → H (i = 1, …, N) are monotone and Fréchet differentiable in B[0, r]. Assume S_A := {z ∈ H : A(z) = 0} ≠ ∅.
Idea: apply the PIIR method to (∗∗), and for each k ≥ 0 apply one step of the Newton-Kantorovich method to the i-th sub-equation in (1.3):
[ F_i'(x_k) + (α_k/N + γ_k) I ] (x_k^i − x_k) = −[ A_i(x_k) + (α_k/N) x_k ].

24 Regularized Newton-Kantorovich (PRNK) method
PRNK method:
[ F_i'(z_k) + (α_k/N + γ_k) I ] h_k^i = −[ A_i(z_k) + (α_k/N) z_k ], i = 1, …, N;
z_{k+1} = z_k + (1/N) Σ_{i=1}^N h_k^i, k ≥ 0.
Theorem 2.1 (convergence of PRNK): Assume that α_k → 0, γ_k → +∞, γ_k α_k⁴ ≥ γ_0 α_0⁴, N² ≤ 4γ_0 α_0², and that the F_i are twice continuously Fréchet differentiable in B[0, r] with r = M√D, M > 2, where M and D are chosen so that
max{C_A², ‖x†‖²} ≤ D; D ≤ l α_0 γ_0 < (M − 1)² D;
C_A is an upper bound for ‖A_i(z_k) + (α_k/N) z_k‖; and there is φ > 0 with ‖F_i'(x) − F_i'(y)‖ ≤ φ‖x − y‖ for all x, y ∈ B[0, r].
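A toy numerical sketch of the PRNK update (everything here is invented for illustration: H = R, two monotone scalar operators F_1(x) = x³, F_2(x) = x with data f_1 = f_2 = 1, common solution x = 1, and simple parameter schedules):

```python
import numpy as np

# Each step: solve the linearized regularized sub-equation per block, then average:
#   [F_i'(z) + alpha/N + gamma] h_i = -[F_i(z) - f_i + (alpha/N) z];  z <- z + mean(h_i)
F = [lambda x: x**3, lambda x: x]            # monotone operators
dF = [lambda x: 3.0 * x**2, lambda x: 1.0]   # their derivatives
f = [1.0, 1.0]                               # data; common solution z = 1
N = 2

z = 0.0
for k in range(100000):
    alpha = (1.0 + k) ** -0.25               # alpha_k -> 0
    gamma = (1.0 + k) ** 0.5                 # gamma_k -> infinity
    h = [-(Fi(z) - fi + (alpha / N) * z) / (dFi(z) + alpha / N + gamma)
         for Fi, dFi, fi in zip(F, dF, f)]
    z = z + sum(h) / N
# z approaches the common solution 1, up to a small bias of order alpha
```

The h_i solves are independent across i, so in an actual parallel implementation each block's linear solve runs on its own processor and only the averaging step synchronizes.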

25 Regularized Newton-Kantorovich (PRNK) method
Theorem 2.1 (convergence of PRNK), continued: Moreover, assume there are constants c_1, c_2 > 0 such that
γ_k³ N (α_k − α_{k+1})² / α_k⁵ ≤ c_1 γ_0³/α_0³ and γ_k (γ_{k+1} − γ_k) ≤ c_2 γ_k²,
where l above is an explicit constant depending on N, c_1, c_2, α_0, γ_0 and φ. Then, starting from z_0 = 0, z_k → x† := argmin_{z ∈ S_A} ‖z‖ as k → +∞.
Remark: one admissible choice of parameters is α_k := α_0 (1 + k)^{−p}, 0 < p ≤ 1/8, and γ_k := γ_0 (1 + k)^{1/2}, where γ_0 := max{5, (12φ²D)²}, α_0 := 5Nγ_0 and M ≥ 3, with c_1 = 1/64 and c_2 = 1/2.

26 Regularized Newton-Kantorovich (PRNK) method
Convergence rates: let B := B[x†, r] and A_i ∈ C²(B) with ‖A_i''(x)‖ ≤ L_i for i = 1, …, N and x ∈ B.
- If the F_i are monotone (i = 1, …, N) and there exists u ∈ H with x† = Σ_{i=1}^N F_i'(x†) u and Σ_{i=1}^N L_i ‖u‖ < 2, then ‖x_k* − x†‖ = O(α_k).
- If each F_i is c_i^{−1}-ISM, F_i(x) = 0 (i = 1, …, N) is a consistent system, and there exist w_i ∈ H (i = 1, …, N) with x† = Σ_{i=1}^N F_i'(x†) w_i and Σ_{i=1}^N L_i ‖w_i‖ < 1, then ‖x_k* − x†‖ = O(α_k^{1/4}).
Under these conditions, z_k in PRNK converges with the corresponding rates: ‖z_k − x†‖ = O(α_k) or ‖z_k − x†‖ = O(α_k^{1/4}).

27 Regularized Gauss-Newton (PRGN) method
Problem setting: for i = 1, …, N, let X, Y_i be Hilbert spaces and F_i : X → Y_i Fréchet differentiable. Solve the system F_i(x) = y_i, i = 1, …, N.
Set Y = Y_1 × Y_2 × … × Y_N with ⟨·,·⟩_Y = Σ_{i=1}^N ⟨·,·⟩_{Y_i}; F : X → Y, F(x) = (F_1(x), …, F_N(x)); y^δ = (y_1^δ, …, y_N^δ) the noisy data with ‖y_i − y_i^δ‖ ≤ δ, δ > 0.
Consider the problem in the noisy case: F(x) = y^δ. Assume S := {z ∈ X : F(z) = y} ≠ ∅, where y := (y_1, …, y_N).

28 Regularized Gauss-Newton (PRGN) method
Idea: apply Tikhonov regularization (x_0 given, α_k > 0):
Σ_{i=1}^N ‖F_i(x) − y_i^δ‖²_{Y_i} + α_k ‖x − x_0‖² → min_{x ∈ X};
split the regularized problem into sub-problems with respect to i; apply a Gauss-Newton iteration to each sub-problem.
The classical sequential RGN method reads: choose x_0^δ ∈ X arbitrarily; for k = 0, 1, 2, …:
x_{k+1}^δ = x_k^δ − ( Σ_{i=1}^N F_i'(x_k^δ)* F_i'(x_k^δ) + α_k I )^{−1} ( Σ_{i=1}^N F_i'(x_k^δ)* (F_i(x_k^δ) − y_i^δ) + α_k (x_k^δ − x_0) ).

29 Regularized Gauss-Newton (PRGN) method
Parallel Regularized Gauss-Newton (PRGN) method: choose x_0^δ ∈ X arbitrarily; β_k := α_k/N. For k = 0, 1, 2, …:
h_k^i = −( F_i'(x_k^δ)* F_i'(x_k^δ) + β_k I )^{−1} ( F_i'(x_k^δ)* (F_i(x_k^δ) − y_i^δ) + β_k (x_k^δ − x_0) ), i = 1, …, N;
x_{k+1}^δ = x_k^δ + (1/N) Σ_{i=1}^N h_k^i.
Assumptions:
A1. For some ρ > 1, {α_k} is chosen such that α_k > 0, α_k → 0, and 1 ≤ α_k/α_{k+1} ≤ ρ.
A2. There is r > 0 such that the F_i (i = 1, …, N) are continuously differentiable in B_{2r}(x_0) and x† ∈ B_r(x_0) ∩ S.

30 Regularized Gauss-Newton (PRGN) method
Assumptions (continued):
A3. A source condition holds for some 0 < µ ≤ 1 and v_i ∈ X:
x† − x_0 = (F_i'(x†)* F_i'(x†))^µ v_i.
If 0 < µ ≤ 1/2, then for all x, z ∈ B_{2r}(x_0) and v ∈ X there exists h_i(x, z, v) ∈ X with
(F_i'(x) − F_i'(z)) v = F_i'(z) h_i(x, z, v), ‖h_i(x, z, v)‖ ≤ K_0 ‖x − z‖ ‖v‖.
If 1/2 < µ ≤ 1, then F_i' is Lipschitz continuous in B_{2r}(x_0).
A-priori stopping rule: choose K_δ ∈ N such that
η α_{K_δ}^{µ+1/2} ≤ N^{µ+1/2} δ < η α_k^{µ+1/2} for 0 ≤ k < K_δ, with η > 0 a fixed parameter.
Theorem 2.2: Let A1, A2, A3 hold and stop at iteration k := K_δ. If the ‖v_i‖ and η are sufficiently small and x_0^δ = x_0 is close enough to x†, then PRGN converges with the rate ‖x_{K_δ}^δ − x†‖ = O(α_{K_δ}^µ).


32 Parallel Proximal-Projection (PPXP) method
Problem: solve system (∗) in a Hilbert space H: A_i(x) = 0 (i = 1, …, N), where each A_i : H → H is maximal monotone.
Maximal monotone operator: F : H → H is maximal monotone iff F is monotone (⟨F(x) − F(y), x − y⟩ ≥ 0 for all x, y ∈ H) and there is no monotone G with Graph(F) ⊊ Graph(G), where Graph(F) := {(x, y) ∈ H × H : x ∈ Dom(F), y = F(x)}.
If F is maximal monotone and S_F := {z ∈ H : F(z) = 0} ≠ ∅, then S_F is convex and closed.

33 Parallel Proximal-Projection (PPXP) method
Proximal-projection method (2) for a single equation A(x) = 0 (N = 1): choose x_0 ∈ H arbitrarily, µ_k ∈ (0, µ̄), σ ∈ [0, 1). At step k ≥ 0:
Proximal: find y_k ∈ H with A(y_k) + µ_k (y_k − x_k) + e_k = 0, where ‖e_k‖ ≤ σ max{ ‖A(y_k)‖, µ_k ‖x_k − y_k‖ }.
If x_k = y_k or A(y_k) = 0, stop. Otherwise define
H_k = { z ∈ H : ⟨z − y_k, A(y_k)⟩ ≤ 0 }, W_k = { z ∈ H : ⟨z − x_k, x_0 − x_k⟩ ≤ 0 },
and Projection: x_{k+1} := P_{H_k ∩ W_k}(x_0). Set k := k + 1 and repeat. Here P_C denotes the metric projector onto a closed convex set C.
(2) M. V. Solodov, B. F. Svaiter (2000), "Forcing strong convergence of proximal point iterations in a Hilbert space", Math. Program. 87.

34 Figure: Illustration of Proximal-Projection method

35 Parallel Proximal-Projection (PPXP) method
Consider the system A_i(x) = 0, with A_i maximal monotone for i = 1, …, N.
PPXP method: choose x_0 ∈ H arbitrarily; µ_k^i ∈ (0, µ̄); σ ∈ [0, 1), i = 1, …, N. At step k ≥ 0:
Proximal: find y_k^i with A_i(y_k^i) + µ_k^i (y_k^i − x_k) + e_k^i = 0, where ‖e_k^i‖ ≤ σ max{ ‖A_i(y_k^i)‖, µ_k^i ‖x_k − y_k^i‖ } (i = 1, …, N).
Define H_k^i = { z ∈ H : ⟨z − y_k^i, A_i(y_k^i)⟩ ≤ 0 } and W_k = { z ∈ H : ⟨z − x_k, x_0 − x_k⟩ ≤ 0 }.
Block Cimmino: find an index j_k (1 ≤ j_k ≤ N) such that
‖x_k − P_{H_k^{j_k}}(x_k)‖ = max_{i=1,…,N} ‖x_k − P_{H_k^i}(x_k)‖.
Projection: compute x_{k+1} = P_{H_k^{j_k} ∩ W_k}(x_0). If x_{k+1} = x_k, stop; otherwise set k := k + 1 and repeat.

36 Parallel Proximal-Projection (PPXP) method
Theorem 3.1 (convergence of the PPXP method): Assume S := {x ∈ H : A_i(x) = 0, i = 1, …, N} ≠ ∅. If the PPXP method stops at an iteration k < ∞, then x_k ∈ S. Otherwise x_k → P_S(x_0) as k → ∞. If S = ∅, then ‖x_k‖ → +∞ as k → ∞.
Remark (computing x_{k+1} = P_{H_k^{j_k} ∩ W_k}(x_0)):
If P_{H_k^{j_k}}(x_0) ∈ W_k, then x_{k+1} = P_{H_k^{j_k}}(x_0).
Otherwise x_{k+1} = x_0 − λ_1 A_{j_k}(y_k^{j_k}) − λ_2 (x_0 − x_k), where λ_1, λ_2 solve
λ_1 ‖A_{j_k}(y_k^{j_k})‖² + λ_2 ⟨A_{j_k}(y_k^{j_k}), x_0 − x_k⟩ = ⟨A_{j_k}(y_k^{j_k}), x_0 − y_k^{j_k}⟩,
λ_1 ⟨A_{j_k}(y_k^{j_k}), x_0 − x_k⟩ + λ_2 ‖x_0 − x_k‖² = ‖x_0 − x_k‖².
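The 2×2 system of the remark can be checked numerically. Below is a small sketch in R³ (standing in for H) with invented points; the helper name is ours, not from any library:

```python
import numpy as np

def project_on_two_planes(x0, xk, y, a):
    # Solve the 2x2 system of the remark for lambda_1, lambda_2 and return
    # x_{k+1} = x0 - lambda_1 * a - lambda_2 * (x0 - xk), the projection of x0
    # onto the intersection of the bounding hyperplanes of H_k and W_k.
    # Here a stands for A_{j_k}(y_k^{j_k}).
    d = x0 - xk
    M = np.array([[a @ a, a @ d],
                  [a @ d, d @ d]])
    rhs = np.array([a @ (x0 - y), d @ d])
    lam1, lam2 = np.linalg.solve(M, rhs)
    return x0 - lam1 * a - lam2 * d

x0 = np.array([0.0, 0.0, 2.0])
xk = np.array([1.0, 0.0, 0.0])
y  = np.array([0.0, 1.0, 0.0])
a  = np.array([0.0, 1.0, 1.0])
x_new = project_on_two_planes(x0, xk, y, a)
# x_new satisfies <x_new - y, a> = 0 and <x_new - xk, x0 - xk> = 0,
# i.e. it lies on both bounding hyperplanes
```

So each PPXP projection step costs only two inner-product assemblies and one 2×2 solve, regardless of the dimension of the space.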

37 Parallel CQ-Projection (PCQP) method
Problem: solve the system A_i(x) = 0, where A_i : X → X*, i = 1, …, N.
Assumptions:
- X is a reflexive, uniformly convex, uniformly smooth real Banach space; X* is the dual space of X.
- Each A_i is maximal monotone (i = 1, …, N) with respect to the duality pairing ⟨f, x⟩ := f(x), x ∈ X, f ∈ X*.
- S := {z ∈ X : A_i(z) = 0, i = 1, …, N} ≠ ∅.

38 Parallel CQ-Projection (PCQP) method
Preliminaries:
Normalized duality mapping J : X → X*,
J(x) = { f ∈ X* : ⟨f, x⟩ = ‖x‖²_X = ‖f‖²_{X*} };
under the above assumptions on X, J is single-valued.
Generalized distance: the functional φ : X × X → R_+,
φ(x, y) = ‖x‖² − 2⟨J(y), x⟩ + ‖y‖², (x, y) ∈ X × X.
Generalized metric projection from X onto C ⊆ X, C ≠ ∅:
Π_C(x) := argmin_{z ∈ C} φ(z, x).
If C is a convex, closed, nonempty set, then Π_C(x) exists and is unique for every x ∈ X.
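In a Hilbert space the duality mapping J is the identity, so φ(x, y) = ‖x‖² − 2⟨y, x⟩ + ‖y‖² = ‖x − y‖² and Π_C reduces to the ordinary metric projection P_C. A quick numerical sanity check of this identity (sketch with random vectors in R⁴):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=4), rng.normal(size=4)

# In a Hilbert space J = I, so the Lyapunov functional phi collapses:
phi = np.dot(x, x) - 2.0 * np.dot(y, x) + np.dot(y, y)
dist_sq = np.dot(x - y, x - y)
# phi equals ||x - y||^2 up to roundoff
```

This is why the Banach-space algorithms PCQP 1 and PCQP 2 specialize, in Hilbert space, to the simpler norm-based sets used in Algorithm PCQP 3 below.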

39 Parallel CQ-Projection (PCQP) method
Preliminaries:
Let ∅ ≠ C ⊆ X be convex and closed, and T : C → C. A point p ∈ C is an asymptotic fixed point of T if there exists {x_n} ⊆ C with x_n ⇀ p (weakly) and ‖T(x_n) − x_n‖ → 0 as n → ∞. Denote by F̂(T) the set of all asymptotic fixed points of T.
Relatively nonexpansive mapping: T : C → C such that F(T) := {u ∈ C : u = T(u)} ≠ ∅, F̂(T) = F(T), and φ(p, T(x)) ≤ φ(p, x) for all p ∈ F(T), x ∈ C.
If ∅ ≠ C ⊆ X is convex and closed and T : C → C is relatively nonexpansive, then F(T) is closed and convex.

40 Parallel CQ-Projection (PCQP) method
Preliminaries: let A : X → X* be a maximal monotone operator with S_A := {z ∈ X : A(z) = 0} ≠ ∅. Then the resolvent
J_r^A : X → Dom(A), J_r^A := (J + rA)^{−1} J, r > 0,
is relatively nonexpansive and F(J_r^A) = S_A for all r > 0.
Thus solving A_i(x) = 0 (i = 1, …, N) amounts to finding a common fixed point of the family J_r^{A_i}, i = 1, …, N.

41 PCQP in Banach space
Problem setting: let C ⊆ X be nonempty, closed, convex and T_i : C → C (i = 1, …, N) relatively nonexpansive mappings. Find
x ∈ F := ∩_{i=1}^N F(T_i) = ∩_{i=1}^N { z ∈ C : T_i(z) = z } ≠ ∅.
CQ-method (3) for one mapping T (N = 1): x_0 ∈ C chosen arbitrarily; for k ≥ 0:
y_k := J^{−1}( α_k J(x_k) + (1 − α_k) J(T(x_k)) ),
C_k := { z ∈ C : φ(z, y_k) ≤ φ(z, x_k) },
Q_k := { z ∈ C : ⟨x_k − z, J(x_0) − J(x_k)⟩ ≥ 0 },
x_{k+1} := Π_{C_k ∩ Q_k}(x_0).
(3) S. Matsushita and W. Takahashi, "A strong convergence theorem for relatively nonexpansive mappings in a Banach space", J. Approx. Theory 134 (2005).

42 PCQP in Banach space
Algorithm PCQP 1: let x_0 ∈ C be arbitrarily chosen; {α_k} ⊂ [0, 1), α_k → 0 as k → ∞. For k ≥ 0:
Calculate y_k^i := J^{−1}( α_k J(x_k) + (1 − α_k) J(T_i(x_k)) ), i = 1, …, N.
Find i_k := argmax_{i=1,2,…,N} ‖y_k^i − x_k‖.
Define C_k := { v ∈ C : φ(v, y_k^{i_k}) ≤ φ(v, x_k) } and Q_k := { u ∈ C : ⟨J(x_0) − J(x_k), x_k − u⟩ ≥ 0 }.
Compute x_{k+1} := Π_{C_k ∩ Q_k}(x_0). If x_k = x_{k+1}, stop; otherwise set k := k + 1 and repeat.

43 PCQP in Banach space
Remark: if Algorithm PCQP 1 reaches step k ≥ 0, then F ⊆ C_k ∩ Q_k, and hence x_{k+1} is well defined.
Lemma 3.1: If Algorithm PCQP 1 terminates at a finite step k < +∞, then T_i(x_k) = x_k (i = 1, …, N).
Theorem 3.2 (convergence of Algorithm PCQP 1): Let {x_k} be the (infinite) sequence generated by Algorithm PCQP 1 and let the T_i be continuous for i = 1, …, N. Then x_k → x† := Π_F(x_0) as k → ∞.

44 PCQP in Banach space
Algorithm PCQP 2: let x_0 ∈ C be arbitrarily chosen; {α_k} ⊂ [0, 1), α_k → 0 as k → ∞. For k ≥ 0:
Calculate y_k^i := J^{−1}( α_k J(x_0) + (1 − α_k) J(T_i(x_k)) ), i = 1, …, N.
Find i_k := argmax_{i=1,2,…,N} ‖y_k^i − x_k‖.
Define C_k := { v ∈ C : φ(v, y_k^{i_k}) ≤ α_k φ(v, x_0) + (1 − α_k) φ(v, x_k) },
Q_k := { u ∈ C : ⟨x_k − u, J(x_0) − J(x_k)⟩ ≥ 0 }.
Compute x_{k+1} := Π_{C_k ∩ Q_k}(x_0). If x_k = x_{k+1}, stop; otherwise set k := k + 1 and repeat.
Theorem 3.3 (convergence of Algorithm PCQP 2): Let {x_k} be the (infinite) sequence generated by Algorithm PCQP 2 and let the T_i be continuous for i = 1, …, N. Then x_k → x† := Π_F(x_0) as k → ∞.

45 PCQP method in Hilbert space
Algorithm PCQP 3: let x_0 ∈ C be arbitrarily chosen; {α_k} ⊂ [0, 1), α_k → 0 as k → ∞. For k ≥ 0:
Find z_k := P_C(x_k).
Calculate y_k^i := α_k z_k + (1 − α_k) T_i(z_k), i = 1, 2, …, N.
Determine i_k := argmax_{i=1,2,…,N} ‖y_k^i − x_k‖.
Define C_k := { v ∈ H : ‖v − y_k^{i_k}‖ ≤ ‖v − x_k‖ }, Q_k := { u ∈ H : ⟨x_0 − x_k, x_k − u⟩ ≥ 0 }.
Compute x_{k+1} := P_{C_k ∩ Q_k}(x_0). If x_k = x_{k+1}, stop; otherwise set k := k + 1 and repeat.

46 PCQP method in Hilbert space
Theorem 3.4 (convergence of Algorithm PCQP 3): If Algorithm PCQP 3 terminates at a finite step k < +∞, then T_i(x_k) = x_k (i = 1, …, N). Otherwise, let {x_k} be the (infinite) sequence generated by Algorithm PCQP 3 and let the T_i be continuous for i = 1, …, N. Then x_k → x† := P_F(x_0) as k → ∞.

47 Figure: Illustration of CQ method in Hilbert space

48 Applications

49 Data reconstruction problem
Problem: reconstruct the weights x ∈ H (a Hilbert space) from its projections ⟨x, v_i⟩ = µ_i, i = 1, 2, …, N, where the views v_i are given.
Applying the PIIR method: set the hyperplanes H_i := { ξ ∈ H : ⟨ξ, v_i⟩ = µ_i } and A_i(x) := x − P_i(x), where P_i : H → H_i is the metric projector
P_i(x) = x − ((⟨x, v_i⟩ − µ_i)/‖v_i‖²) v_i.
The A_i are inverse-strongly monotone, i = 1, …, N. Denote S = { x ∈ H : ⟨x, v_i⟩ = µ_i, i = 1, …, N }.

50 Data reconstruction problem
Applying the PIIR method to the system A_i(x) := x − P_i(x) = 0, i = 1, …, N, the iteration (1.3) takes the closed form
x_k^i = (Nγ_k/(α_k + Nγ_k)) [ (1 − λ_k) x_k + λ_k P_i^{(k)}(x_k) ], i = 1, …, N;
x_{k+1} = (Nγ_k/(Nγ_k + α_k)) [ (1 − λ_k) x_k + (λ_k/N) Σ_{i=1}^N P_i^{(k)}(x_k) ],
where λ_k = N/(α_k + Nγ_k + N) and P_i^{(k)} denotes the metric projector onto a hyperplane parallel to H_i (rescaled according to α_k and γ_k).
Corollary 4.1: Let {α_k}, {γ_k} satisfy all the conditions of Theorem 1.1. Then x_k → x† = argmin_{z ∈ S} ‖z‖.
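A toy numerical sketch of this scheme (all data invented: H = R³, two views, simple parameter schedules). Since A_i is affine, each implicit sub-step A_i(x) + (α_k/N + γ_k)x = γ_k x_k can be solved exactly by first taking the inner product with v_i:

```python
import numpy as np

# Two views: hyperplanes <x, v_i> = mu_i; solution set is (1, 1, t),
# and the minimum-norm solution is (1, 1, 0).
v = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
mu = [1.0, 1.0]
N = len(v)

x = np.array([5.0, -3.0, 4.0])            # arbitrary start
for k in range(100000):
    alpha = (1.0 + k) ** -0.25            # alpha_k -> 0
    gamma = (1.0 + k) ** 0.5              # gamma_k -> infinity
    beta = gamma + alpha / N
    xs = []
    for vi, mi in zip(v, mu):
        w = vi / (vi @ vi)
        # A_i(z) = (<z, v_i> - mu_i) w; inner product with v_i gives <z, v_i>:
        t = (gamma * (x @ vi) + mi) / (1.0 + beta)
        xs.append((gamma * x - (t - mi) * w) / beta)   # exact sub-step solution
    x = sum(xs) / N                       # Cimmino-type averaging
# x is close to the minimum-norm solution (1, 1, 0); the third coordinate,
# unconstrained by the views, is driven to 0 by the regularization term
```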

51 Overdetermined linear systems
Problem: solve Bx = g, with B ∈ R^{m×n}, m ≥ n, and g ∈ R^m given.
Split the given data into N blocks:
B = (B_1; B_2; …; B_N), g = (g_1; g_2; …; g_N), B_i ∈ R^{m_i×n}, g_i ∈ R^{m_i},
where N ≥ 2, 1 ≤ m_i ≤ m − 1, and Σ_{i=1}^N m_i = m.

52 Overdetermined linear systems
Apply the PIIR and PEIR methods to the derived system
A_i(x) := B_i^T B_i x − B_i^T g_i = 0, i = 1, …, N.
The PIIR method becomes
[ B_i^T B_i + (γ_n + α_n/N) I ] x_n^i = γ_n x_n + B_i^T g_i, i = 1, …, N;
x_{n+1} = (1/N) Σ_{i=1}^N x_n^i, n = 0, 1, …
and the PEIR method (with m_k = 1)
x_{n+1} = (1 − α_n/(Nγ_n)) x_n − (1/(Nγ_n)) Σ_{i=1}^N B_i^T (B_i x_n − g_i).
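The explicit (PEIR) update x_{n+1} = (1 − α_n/(Nγ_n)) x_n − (1/(Nγ_n)) Σ_i B_i^T(B_i x_n − g_i) is straightforward to sketch numerically (the 4×2 matrix, its split into N = 2 blocks, and the schedules are invented for this example):

```python
import numpy as np

# Consistent overdetermined system split into two row blocks.
B = [np.array([[1.0, 0.0], [0.0, 1.0]]),
     np.array([[1.0, 1.0], [1.0, -1.0]])]
x_star = np.array([1.0, 2.0])
g = [Bi @ x_star for Bi in B]           # consistent right-hand sides
N = len(B)

x = np.zeros(2)
for n in range(50000):
    alpha = (1.0 + n) ** -0.25          # alpha_n -> 0
    gamma = 2.5 * (1.0 + n) ** 0.5      # gamma_n -> infinity; steps 1/(N*gamma_n) sum to infinity
    # the block gradients B_i^T (B_i x - g_i) are independent across blocks
    grad = sum(Bi.T @ (Bi @ x - gi) for Bi, gi in zip(B, g))
    x = (1.0 - alpha / (N * gamma)) * x - grad / (N * gamma)
# x approaches the exact solution (1, 2) up to a small regularization bias
```

Each block term B_i^T(B_i x_n − g_i) touches only that block's rows, so the per-step work can be distributed across N processors with a single reduction per iteration.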

53 Overdetermined linear systems
Noisy case: suppose B_δ, g_δ are given with ‖B_δ − B‖ ≤ δ and ‖g_δ − g‖ ≤ δ. Partitioning B_δ, g_δ into N blocks and applying PIIR, one gets
[ B_{δi}^T B_{δi} + (γ_n + α_n/N) I ] z_n^i = γ_n z_n + B_{δi}^T g_{δi}, i = 1, …, N;
z_{n+1} = (1/N) Σ_{i=1}^N z_n^i, n ≥ 0.
Corollary 4.2: Let {α_n}, {γ_n} satisfy all the conditions of Theorem 1.2. Then, with termination index n(δ) = [δ^{−µ}], µ ∈ (0, 1), the iterate z_{n(δ)} converges to the solution x† as the error level δ → 0. Moreover,
‖z_{n(δ)} − x†‖ ≤ (ω/γ̄) δ^{1−µ} + ‖x_{n(δ)} − x†‖,
where ω and γ̄ are positive constants.

54 Systems of Fredholm equations of the first kind
Problem: consider the system
(A_i x)(t) := ∫_a^b K_i(t, s) x(s) ds − f_i(t) = 0, i = 1, …, N,
where the K_i(t, s) are given continuous symmetric kernels and the f_i(t) given continuous functions. Moreover, assume
⟨H_i x, x⟩ = ∫_a^b ∫_a^b K_i(t, s) x(t) x(s) dt ds ≥ 0 for all x ∈ L²[a, b].
Then the operators A_i are also inverse-strongly monotone, and hence the system can be solved by the PIIR and PEIR methods.

55 Nonlinear integral equation
Problem: apply PRGN to the system arising from a parameter-detection problem (4):
A_i(x) = 0, A_i(x) := c_i exp( ∫_0^t w_i(s) x(s) ds ) − f_i, x ∈ X := L²(0, 1),
where c_i > 0, w_i ∈ C[0, 1] and f_i ∈ X (i = 1, …, N) are given constants and functions, respectively, with 0 < w ≤ w_i(t) ≤ w̄ for all i = 1, …, N and t ∈ [0, 1].
The A_i : X → X are monotone and twice continuously Fréchet differentiable.
(4) Tautenhahn U., "On the method of Lavrentiev regularization for nonlinear ill-posed problems", Inverse Problems 18 (2002).

56 Nonlinear integral equation
Problem: apply PRNK to the equation arising from Wiener-type filtering theory (5):
A(x) = Σ_{j=1}^N (F_j(x) − f_j) = 0,
F_j(x)(t) := ∫_0^1 e^{−β_j |t − s|} x(s) ds + [arctan(j x(t))]³, j = 1, 2; x ∈ X := L²(0, 1).
The F_j : X → X are monotone and continuously Fréchet differentiable.
(5) N. S. Hoang, A. G. Ramm, "Dynamical systems method for solving nonlinear equations with monotone operators", Math. Comp. 79 (2010).

57 N-dimensional steady-state Navier-Stokes equations (6)
Problem: let Ω ⊂ R^N be a convex, bounded, open set with Lipschitz boundary ∂Ω. Consider
−η∆u + Σ_{i=1}^N u_i D_i u + ∇p = f in Ω,
div u = 0 in Ω,
u = 0 on ∂Ω,
where u = (u_1, …, u_N), f = (f_1, …, f_N), with u_i, f_i functions; p a scalar function; D_i = ∂/∂x_i; and η > 0 a constant.
(6) Lu T., Neittaanmäki P., and Tai X.-C. (1992), "A parallel splitting-up method for partial differential equations and its application to Navier-Stokes equations", RAIRO Modél. Math. Anal. Numér., 26 (6).

58 N-dimensional steady-state Navier-Stokes equations
Notation:
L²(Ω) = L²(Ω) × … × L²(Ω), with ⟨u, v⟩_{L²(Ω)} = Σ_{i=1}^N ∫_Ω u_i v_i dx.
H(Ω) = the closure of C_0^∞(Ω) × … × C_0^∞(Ω), with (u, v)_{H(Ω)} = Σ_{i=1}^N ⟨D_i u, D_i v⟩_{L²}.
Set b(u, w, v) := Σ_{i=1}^N ∫_Ω u_i D_i w · v dx and write the Navier-Stokes equations in weak form:
η(u, v)_{H(Ω)} + b(u, u, v) = ⟨f, v⟩_{L²(Ω)}.
Iterative method: let y_0 ∈ H(Ω) be arbitrary, β > 0 fixed, ε_0 > 0 a tolerance. At step k ≥ 0, solve
∆p_k = div y_k in Ω, ∂p_k/∂n = 0 on ∂Ω.

59 N-dimensional steady-state Navier-Stokes equations
Iterative method (continued): set u_k := y_k − ∇p_k. If k > 1 and ‖u_k − u_{k−1}‖ < ε_0, stop. Otherwise, find y_{k+1} ∈ H(Ω) from
β⟨y_{k+1} − u_k, v⟩_{L²(Ω)} + η(y_{k+1}, v)_{H(Ω)} + b(y_{k+1}, u_k, v) = ⟨f, v⟩_{L²(Ω)} for all v ∈ H(Ω),
then set k := k + 1 and repeat.
At each step k ≥ 0, one step of PRNK can be applied to find y_{k+1}. If η² > C‖f‖ for some constant C > 0 (smallness of the data relative to the viscosity), then the iterative method converges.

60 Further information
For more details, please refer to the works of the group P. K. Anh, C. V. Chung and V. T. Dzung:
[1] "Parallel hybrid methods for a finite family of relatively nonexpansive mappings", Numerical Functional Analysis and Optimization (2013).
[2] "Parallel iteratively regularized Gauss-Newton method for systems of nonlinear ill-posed equations", International Journal of Computer Mathematics (2013).
[3] "Parallel regularized Newton method for nonlinear ill-posed equations", Numer. Algor. 58 (2011).
[4] "Parallel iterative regularization algorithms for large overdetermined linear systems", Int. J. Comput. Methods 7 (2010).
[5] "Parallel iterative regularization methods for solving systems of ill-posed equations", Appl. Math. Comput. 212 (2009).
and some papers in local journals.

61 THANKS FOR YOUR ATTENTION!


More information

On the split equality common fixed point problem for quasi-nonexpansive multi-valued mappings in Banach spaces

On the split equality common fixed point problem for quasi-nonexpansive multi-valued mappings in Banach spaces Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (06), 5536 5543 Research Article On the split equality common fixed point problem for quasi-nonexpansive multi-valued mappings in Banach spaces

More information

Some unified algorithms for finding minimum norm fixed point of nonexpansive semigroups in Hilbert spaces

Some unified algorithms for finding minimum norm fixed point of nonexpansive semigroups in Hilbert spaces An. Şt. Univ. Ovidius Constanţa Vol. 19(1), 211, 331 346 Some unified algorithms for finding minimum norm fixed point of nonexpansive semigroups in Hilbert spaces Yonghong Yao, Yeong-Cheng Liou Abstract

More information

Common fixed points of two generalized asymptotically quasi-nonexpansive mappings

Common fixed points of two generalized asymptotically quasi-nonexpansive mappings An. Ştiinţ. Univ. Al. I. Cuza Iaşi. Mat. (N.S.) Tomul LXIII, 2017, f. 2 Common fixed points of two generalized asymptotically quasi-nonexpansive mappings Safeer Hussain Khan Isa Yildirim Received: 5.VIII.2013

More information

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods Renato D.C. Monteiro B. F. Svaiter May 10, 011 Revised: May 4, 01) Abstract This

More information

CONVERGENCE THEOREMS FOR STRICTLY ASYMPTOTICALLY PSEUDOCONTRACTIVE MAPPINGS IN HILBERT SPACES. Gurucharan Singh Saluja

CONVERGENCE THEOREMS FOR STRICTLY ASYMPTOTICALLY PSEUDOCONTRACTIVE MAPPINGS IN HILBERT SPACES. Gurucharan Singh Saluja Opuscula Mathematica Vol 30 No 4 2010 http://dxdoiorg/107494/opmath2010304485 CONVERGENCE THEOREMS FOR STRICTLY ASYMPTOTICALLY PSEUDOCONTRACTIVE MAPPINGS IN HILBERT SPACES Gurucharan Singh Saluja Abstract

More information

M. Marques Alves Marina Geremia. November 30, 2017

M. Marques Alves Marina Geremia. November 30, 2017 Iteration complexity of an inexact Douglas-Rachford method and of a Douglas-Rachford-Tseng s F-B four-operator splitting method for solving monotone inclusions M. Marques Alves Marina Geremia November

More information

Strong Convergence of Two Iterative Algorithms for a Countable Family of Nonexpansive Mappings in Hilbert Spaces

Strong Convergence of Two Iterative Algorithms for a Countable Family of Nonexpansive Mappings in Hilbert Spaces International Mathematical Forum, 5, 2010, no. 44, 2165-2172 Strong Convergence of Two Iterative Algorithms for a Countable Family of Nonexpansive Mappings in Hilbert Spaces Jintana Joomwong Division of

More information

Iterative algorithms based on the hybrid steepest descent method for the split feasibility problem

Iterative algorithms based on the hybrid steepest descent method for the split feasibility problem Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (206), 424 4225 Research Article Iterative algorithms based on the hybrid steepest descent method for the split feasibility problem Jong Soo

More information

Extensions of the CQ Algorithm for the Split Feasibility and Split Equality Problems

Extensions of the CQ Algorithm for the Split Feasibility and Split Equality Problems Extensions of the CQ Algorithm for the Split Feasibility Split Equality Problems Charles L. Byrne Abdellatif Moudafi September 2, 2013 Abstract The convex feasibility problem (CFP) is to find a member

More information

A generalized forward-backward method for solving split equality quasi inclusion problems in Banach spaces

A generalized forward-backward method for solving split equality quasi inclusion problems in Banach spaces Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 4890 4900 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa A generalized forward-backward

More information

A New Modified Gradient-Projection Algorithm for Solution of Constrained Convex Minimization Problem in Hilbert Spaces

A New Modified Gradient-Projection Algorithm for Solution of Constrained Convex Minimization Problem in Hilbert Spaces A New Modified Gradient-Projection Algorithm for Solution of Constrained Convex Minimization Problem in Hilbert Spaces Cyril Dennis Enyi and Mukiawa Edwin Soh Abstract In this paper, we present a new iterative

More information

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping.

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. Minimization Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. 1 Minimization A Topological Result. Let S be a topological

More information

How large is the class of operator equations solvable by a DSM Newton-type method?

How large is the class of operator equations solvable by a DSM Newton-type method? This is the author s final, peer-reviewed manuscript as accepted for publication. The publisher-formatted version may be available through the publisher s web site or your institution s library. How large

More information

A double projection method for solving variational inequalities without monotonicity

A double projection method for solving variational inequalities without monotonicity A double projection method for solving variational inequalities without monotonicity Minglu Ye Yiran He Accepted by Computational Optimization and Applications, DOI: 10.1007/s10589-014-9659-7,Apr 05, 2014

More information

CONVERGENCE OF APPROXIMATING FIXED POINTS FOR MULTIVALUED NONSELF-MAPPINGS IN BANACH SPACES. Jong Soo Jung. 1. Introduction

CONVERGENCE OF APPROXIMATING FIXED POINTS FOR MULTIVALUED NONSELF-MAPPINGS IN BANACH SPACES. Jong Soo Jung. 1. Introduction Korean J. Math. 16 (2008), No. 2, pp. 215 231 CONVERGENCE OF APPROXIMATING FIXED POINTS FOR MULTIVALUED NONSELF-MAPPINGS IN BANACH SPACES Jong Soo Jung Abstract. Let E be a uniformly convex Banach space

More information

Math 273a: Optimization Subgradient Methods

Math 273a: Optimization Subgradient Methods Math 273a: Optimization Subgradient Methods Instructor: Wotao Yin Department of Mathematics, UCLA Fall 2015 online discussions on piazza.com Nonsmooth convex function Recall: For ˉx R n, f(ˉx) := {g R

More information

Academic Editor: Hari M. Srivastava Received: 29 September 2016; Accepted: 6 February 2017; Published: 11 February 2017

Academic Editor: Hari M. Srivastava Received: 29 September 2016; Accepted: 6 February 2017; Published: 11 February 2017 mathematics Article The Split Common Fixed Point Problem for a Family of Multivalued Quasinonexpansive Mappings and Totally Asymptotically Strictly Pseudocontractive Mappings in Banach Spaces Ali Abkar,

More information

1 Introduction and preliminaries

1 Introduction and preliminaries Proximal Methods for a Class of Relaxed Nonlinear Variational Inclusions Abdellatif Moudafi Université des Antilles et de la Guyane, Grimaag B.P. 7209, 97275 Schoelcher, Martinique abdellatif.moudafi@martinique.univ-ag.fr

More information

Graph Convergence for H(, )-co-accretive Mapping with over-relaxed Proximal Point Method for Solving a Generalized Variational Inclusion Problem

Graph Convergence for H(, )-co-accretive Mapping with over-relaxed Proximal Point Method for Solving a Generalized Variational Inclusion Problem Iranian Journal of Mathematical Sciences and Informatics Vol. 12, No. 1 (2017), pp 35-46 DOI: 10.7508/ijmsi.2017.01.004 Graph Convergence for H(, )-co-accretive Mapping with over-relaxed Proximal Point

More information

Viscosity approximation method for m-accretive mapping and variational inequality in Banach space

Viscosity approximation method for m-accretive mapping and variational inequality in Banach space An. Şt. Univ. Ovidius Constanţa Vol. 17(1), 2009, 91 104 Viscosity approximation method for m-accretive mapping and variational inequality in Banach space Zhenhua He 1, Deifei Zhang 1, Feng Gu 2 Abstract

More information

A Relaxed Explicit Extragradient-Like Method for Solving Generalized Mixed Equilibria, Variational Inequalities and Constrained Convex Minimization

A Relaxed Explicit Extragradient-Like Method for Solving Generalized Mixed Equilibria, Variational Inequalities and Constrained Convex Minimization , March 16-18, 2016, Hong Kong A Relaxed Explicit Extragradient-Like Method for Solving Generalized Mixed Equilibria, Variational Inequalities and Constrained Convex Minimization Yung-Yih Lur, Lu-Chuan

More information

Synchronal Algorithm For a Countable Family of Strict Pseudocontractions in q-uniformly Smooth Banach Spaces

Synchronal Algorithm For a Countable Family of Strict Pseudocontractions in q-uniformly Smooth Banach Spaces Int. Journal of Math. Analysis, Vol. 8, 2014, no. 15, 727-745 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ijma.2014.212287 Synchronal Algorithm For a Countable Family of Strict Pseudocontractions

More information

Fixed point theory for nonlinear mappings in Banach spaces and applications

Fixed point theory for nonlinear mappings in Banach spaces and applications Kangtunyakarn Fixed Point Theory and Applications 014, 014:108 http://www.fixedpointtheoryandapplications.com/content/014/1/108 R E S E A R C H Open Access Fixed point theory for nonlinear mappings in

More information

Research Article Some Krasnonsel skiĭ-mann Algorithms and the Multiple-Set Split Feasibility Problem

Research Article Some Krasnonsel skiĭ-mann Algorithms and the Multiple-Set Split Feasibility Problem Hindawi Publishing Corporation Fixed Point Theory and Applications Volume 2010, Article ID 513956, 12 pages doi:10.1155/2010/513956 Research Article Some Krasnonsel skiĭ-mann Algorithms and the Multiple-Set

More information

Convergence Theorems for Bregman Strongly Nonexpansive Mappings in Reflexive Banach Spaces

Convergence Theorems for Bregman Strongly Nonexpansive Mappings in Reflexive Banach Spaces Filomat 28:7 (2014), 1525 1536 DOI 10.2298/FIL1407525Z Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Convergence Theorems for

More information

ON THE CONVERGENCE OF MODIFIED NOOR ITERATION METHOD FOR NEARLY LIPSCHITZIAN MAPPINGS IN ARBITRARY REAL BANACH SPACES

ON THE CONVERGENCE OF MODIFIED NOOR ITERATION METHOD FOR NEARLY LIPSCHITZIAN MAPPINGS IN ARBITRARY REAL BANACH SPACES TJMM 6 (2014), No. 1, 45-51 ON THE CONVERGENCE OF MODIFIED NOOR ITERATION METHOD FOR NEARLY LIPSCHITZIAN MAPPINGS IN ARBITRARY REAL BANACH SPACES ADESANMI ALAO MOGBADEMU Abstract. In this present paper,

More information

A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator

A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator Ismael Rodrigo Bleyer Prof. Dr. Ronny Ramlau Johannes Kepler Universität - Linz Florianópolis - September, 2011.

More information

The Equivalence of the Convergence of Four Kinds of Iterations for a Finite Family of Uniformly Asymptotically ø-pseudocontractive Mappings

The Equivalence of the Convergence of Four Kinds of Iterations for a Finite Family of Uniformly Asymptotically ø-pseudocontractive Mappings ±39ff±1ffi ß Ω χ Vol.39, No.1 2010fl2fl ADVANCES IN MATHEMATICS Feb., 2010 The Equivalence of the Convergence of Four Kinds of Iterations for a Finite Family of Uniformly Asymptotically ø-pseudocontractive

More information

Zeqing Liu, Jeong Sheok Ume and Shin Min Kang

Zeqing Liu, Jeong Sheok Ume and Shin Min Kang Bull. Korean Math. Soc. 41 (2004), No. 2, pp. 241 256 GENERAL VARIATIONAL INCLUSIONS AND GENERAL RESOLVENT EQUATIONS Zeqing Liu, Jeong Sheok Ume and Shin Min Kang Abstract. In this paper, we introduce

More information

HAIYUN ZHOU, RAVI P. AGARWAL, YEOL JE CHO, AND YONG SOO KIM

HAIYUN ZHOU, RAVI P. AGARWAL, YEOL JE CHO, AND YONG SOO KIM Georgian Mathematical Journal Volume 9 (2002), Number 3, 591 600 NONEXPANSIVE MAPPINGS AND ITERATIVE METHODS IN UNIFORMLY CONVEX BANACH SPACES HAIYUN ZHOU, RAVI P. AGARWAL, YEOL JE CHO, AND YONG SOO KIM

More information

Convergence rate estimates for the gradient differential inclusion

Convergence rate estimates for the gradient differential inclusion Convergence rate estimates for the gradient differential inclusion Osman Güler November 23 Abstract Let f : H R { } be a proper, lower semi continuous, convex function in a Hilbert space H. The gradient

More information

GENERAL NONCONVEX SPLIT VARIATIONAL INEQUALITY PROBLEMS. Jong Kyu Kim, Salahuddin, and Won Hee Lim

GENERAL NONCONVEX SPLIT VARIATIONAL INEQUALITY PROBLEMS. Jong Kyu Kim, Salahuddin, and Won Hee Lim Korean J. Math. 25 (2017), No. 4, pp. 469 481 https://doi.org/10.11568/kjm.2017.25.4.469 GENERAL NONCONVEX SPLIT VARIATIONAL INEQUALITY PROBLEMS Jong Kyu Kim, Salahuddin, and Won Hee Lim Abstract. In this

More information

STRONG CONVERGENCE OF APPROXIMATION FIXED POINTS FOR NONEXPANSIVE NONSELF-MAPPING

STRONG CONVERGENCE OF APPROXIMATION FIXED POINTS FOR NONEXPANSIVE NONSELF-MAPPING STRONG CONVERGENCE OF APPROXIMATION FIXED POINTS FOR NONEXPANSIVE NONSELF-MAPPING RUDONG CHEN AND ZHICHUAN ZHU Received 17 May 2006; Accepted 22 June 2006 Let C be a closed convex subset of a uniformly

More information

Algorithms for Nonsmooth Optimization

Algorithms for Nonsmooth Optimization Algorithms for Nonsmooth Optimization Frank E. Curtis, Lehigh University presented at Center for Optimization and Statistical Learning, Northwestern University 2 March 2018 Algorithms for Nonsmooth Optimization

More information

Strong convergence to a common fixed point. of nonexpansive mappings semigroups

Strong convergence to a common fixed point. of nonexpansive mappings semigroups Theoretical Mathematics & Applications, vol.3, no., 23, 35-45 ISSN: 792-9687 (print), 792-979 (online) Scienpress Ltd, 23 Strong convergence to a common fixed point of nonexpansive mappings semigroups

More information

A Viscosity Method for Solving a General System of Finite Variational Inequalities for Finite Accretive Operators

A Viscosity Method for Solving a General System of Finite Variational Inequalities for Finite Accretive Operators A Viscosity Method for Solving a General System of Finite Variational Inequalities for Finite Accretive Operators Phayap Katchang, Somyot Plubtieng and Poom Kumam Member, IAENG Abstract In this paper,

More information

On the Convergence of Ishikawa Iterates to a Common Fixed Point for a Pair of Nonexpansive Mappings in Banach Spaces

On the Convergence of Ishikawa Iterates to a Common Fixed Point for a Pair of Nonexpansive Mappings in Banach Spaces Mathematica Moravica Vol. 14-1 (2010), 113 119 On the Convergence of Ishikawa Iterates to a Common Fixed Point for a Pair of Nonexpansive Mappings in Banach Spaces Amit Singh and R.C. Dimri Abstract. In

More information

Research Article Iterative Approximation of a Common Zero of a Countably Infinite Family of m-accretive Operators in Banach Spaces

Research Article Iterative Approximation of a Common Zero of a Countably Infinite Family of m-accretive Operators in Banach Spaces Hindawi Publishing Corporation Fixed Point Theory and Applications Volume 2008, Article ID 325792, 13 pages doi:10.1155/2008/325792 Research Article Iterative Approximation of a Common Zero of a Countably

More information

ITERATIVE APPROXIMATION OF SOLUTIONS OF GENERALIZED EQUATIONS OF HAMMERSTEIN TYPE

ITERATIVE APPROXIMATION OF SOLUTIONS OF GENERALIZED EQUATIONS OF HAMMERSTEIN TYPE Fixed Point Theory, 15(014), No., 47-440 http://www.math.ubbcluj.ro/ nodeacj/sfptcj.html ITERATIVE APPROXIMATION OF SOLUTIONS OF GENERALIZED EQUATIONS OF HAMMERSTEIN TYPE C.E. CHIDUME AND Y. SHEHU Mathematics

More information

APPROXIMATING SOLUTIONS FOR THE SYSTEM OF REFLEXIVE BANACH SPACE

APPROXIMATING SOLUTIONS FOR THE SYSTEM OF REFLEXIVE BANACH SPACE Bulletin of Mathematical Analysis and Applications ISSN: 1821-1291, URL: http://www.bmathaa.org Volume 2 Issue 3(2010), Pages 32-39. APPROXIMATING SOLUTIONS FOR THE SYSTEM OF φ-strongly ACCRETIVE OPERATOR

More information

On The Convergence Of Modified Noor Iteration For Nearly Lipschitzian Maps In Real Banach Spaces

On The Convergence Of Modified Noor Iteration For Nearly Lipschitzian Maps In Real Banach Spaces CJMS. 2(2)(2013), 95-104 Caspian Journal of Mathematical Sciences (CJMS) University of Mazandaran, Iran http://cjms.journals.umz.ac.ir ISSN: 1735-0611 On The Convergence Of Modified Noor Iteration For

More information

Algorithm for Zeros of Maximal Monotone Mappings in Classical Banach Spaces

Algorithm for Zeros of Maximal Monotone Mappings in Classical Banach Spaces International Journal of Mathematical Analysis Vol. 11, 2017, no. 12, 551-570 HIKARI Ltd, www.m-hikari.com https://doi.org/10.12988/ijma.2017.7112 Algorithm for Zeros of Maximal Monotone Mappings in Classical

More information

Research Article Algorithms for a System of General Variational Inequalities in Banach Spaces

Research Article Algorithms for a System of General Variational Inequalities in Banach Spaces Journal of Applied Mathematics Volume 2012, Article ID 580158, 18 pages doi:10.1155/2012/580158 Research Article Algorithms for a System of General Variational Inequalities in Banach Spaces Jin-Hua Zhu,

More information

Iterative common solutions of fixed point and variational inequality problems

Iterative common solutions of fixed point and variational inequality problems Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (2016), 1882 1890 Research Article Iterative common solutions of fixed point and variational inequality problems Yunpeng Zhang a, Qing Yuan b,

More information

INERTIAL ACCELERATED ALGORITHMS FOR SOLVING SPLIT FEASIBILITY PROBLEMS. Yazheng Dang. Jie Sun. Honglei Xu

INERTIAL ACCELERATED ALGORITHMS FOR SOLVING SPLIT FEASIBILITY PROBLEMS. Yazheng Dang. Jie Sun. Honglei Xu Manuscript submitted to AIMS Journals Volume X, Number 0X, XX 200X doi:10.3934/xx.xx.xx.xx pp. X XX INERTIAL ACCELERATED ALGORITHMS FOR SOLVING SPLIT FEASIBILITY PROBLEMS Yazheng Dang School of Management

More information

Monotone variational inequalities, generalized equilibrium problems and fixed point methods

Monotone variational inequalities, generalized equilibrium problems and fixed point methods Wang Fixed Point Theory and Applications 2014, 2014:236 R E S E A R C H Open Access Monotone variational inequalities, generalized equilibrium problems and fixed point methods Shenghua Wang * * Correspondence:

More information

A VISCOSITY APPROXIMATIVE METHOD TO CESÀRO MEANS FOR SOLVING A COMMON ELEMENT OF MIXED EQUILIBRIUM, VARIATIONAL INEQUALITIES AND FIXED POINT PROBLEMS

A VISCOSITY APPROXIMATIVE METHOD TO CESÀRO MEANS FOR SOLVING A COMMON ELEMENT OF MIXED EQUILIBRIUM, VARIATIONAL INEQUALITIES AND FIXED POINT PROBLEMS J. Appl. Math. & Informatics Vol. 29(2011), No. 1-2, pp. 227-245 Website: http://www.kcam.biz A VISCOSITY APPROXIMATIVE METHOD TO CESÀRO MEANS FOR SOLVING A COMMON ELEMENT OF MIXED EQUILIBRIUM, VARIATIONAL

More information

Extensions of Korpelevich s Extragradient Method for the Variational Inequality Problem in Euclidean Space

Extensions of Korpelevich s Extragradient Method for the Variational Inequality Problem in Euclidean Space Extensions of Korpelevich s Extragradient Method for the Variational Inequality Problem in Euclidean Space Yair Censor 1,AvivGibali 2 andsimeonreich 2 1 Department of Mathematics, University of Haifa,

More information

Research Article Self-Adaptive and Relaxed Self-Adaptive Projection Methods for Solving the Multiple-Set Split Feasibility Problem

Research Article Self-Adaptive and Relaxed Self-Adaptive Projection Methods for Solving the Multiple-Set Split Feasibility Problem Abstract and Applied Analysis Volume 01, Article ID 958040, 11 pages doi:10.1155/01/958040 Research Article Self-Adaptive and Relaxed Self-Adaptive Proection Methods for Solving the Multiple-Set Split

More information

On Total Convexity, Bregman Projections and Stability in Banach Spaces

On Total Convexity, Bregman Projections and Stability in Banach Spaces Journal of Convex Analysis Volume 11 (2004), No. 1, 1 16 On Total Convexity, Bregman Projections and Stability in Banach Spaces Elena Resmerita Department of Mathematics, University of Haifa, 31905 Haifa,

More information

Nonlinear Analysis 71 (2009) Contents lists available at ScienceDirect. Nonlinear Analysis. journal homepage:

Nonlinear Analysis 71 (2009) Contents lists available at ScienceDirect. Nonlinear Analysis. journal homepage: Nonlinear Analysis 71 2009 2744 2752 Contents lists available at ScienceDirect Nonlinear Analysis journal homepage: www.elsevier.com/locate/na A nonlinear inequality and applications N.S. Hoang A.G. Ramm

More information

The Journal of Nonlinear Science and Applications

The Journal of Nonlinear Science and Applications J. Nonlinear Sci. Appl. 2 (2009), no. 2, 78 91 The Journal of Nonlinear Science and Applications http://www.tjnsa.com STRONG CONVERGENCE THEOREMS FOR EQUILIBRIUM PROBLEMS AND FIXED POINT PROBLEMS OF STRICT

More information

Convergence theorems for a finite family. of nonspreading and nonexpansive. multivalued mappings and equilibrium. problems with application

Convergence theorems for a finite family. of nonspreading and nonexpansive. multivalued mappings and equilibrium. problems with application Theoretical Mathematics & Applications, vol.3, no.3, 2013, 49-61 ISSN: 1792-9687 (print), 1792-9709 (online) Scienpress Ltd, 2013 Convergence theorems for a finite family of nonspreading and nonexpansive

More information

Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1

Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1 Applied Mathematical Sciences, Vol. 5, 2011, no. 76, 3781-3788 Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1 Nguyen Buong and Nguyen Dinh Dung

More information

Regularization in Banach Space

Regularization in Banach Space Regularization in Banach Space Barbara Kaltenbacher, Alpen-Adria-Universität Klagenfurt joint work with Uno Hämarik, University of Tartu Bernd Hofmann, Technical University of Chemnitz Urve Kangro, University

More information

Lecture 13 Newton-type Methods A Newton Method for VIs. October 20, 2008

Lecture 13 Newton-type Methods A Newton Method for VIs. October 20, 2008 Lecture 13 Newton-type Methods A Newton Method for VIs October 20, 2008 Outline Quick recap of Newton methods for composite functions Josephy-Newton methods for VIs A special case: mixed complementarity

More information

Weak and strong convergence theorems of modified SP-iterations for generalized asymptotically quasi-nonexpansive mappings

Weak and strong convergence theorems of modified SP-iterations for generalized asymptotically quasi-nonexpansive mappings Mathematica Moravica Vol. 20:1 (2016), 125 144 Weak and strong convergence theorems of modified SP-iterations for generalized asymptotically quasi-nonexpansive mappings G.S. Saluja Abstract. The aim of

More information

Research Article The Solution by Iteration of a Composed K-Positive Definite Operator Equation in a Banach Space

Research Article The Solution by Iteration of a Composed K-Positive Definite Operator Equation in a Banach Space Hindawi Publishing Corporation International Journal of Mathematics and Mathematical Sciences Volume 2010, Article ID 376852, 7 pages doi:10.1155/2010/376852 Research Article The Solution by Iteration

More information

Research Article Strong Convergence Theorems for Zeros of Bounded Maximal Monotone Nonlinear Operators

Research Article Strong Convergence Theorems for Zeros of Bounded Maximal Monotone Nonlinear Operators Abstract and Applied Analysis Volume 2012, Article ID 681348, 19 pages doi:10.1155/2012/681348 Research Article Strong Convergence Theorems for Zeros of Bounded Maximal Monotone Nonlinear Operators C.

More information

general perturbations

general perturbations under general perturbations (B., Kassay, Pini, Conditioning for optimization problems under general perturbations, NA, 2012) Dipartimento di Matematica, Università degli Studi di Genova February 28, 2012

More information

Shih-sen Chang, Yeol Je Cho, and Haiyun Zhou

Shih-sen Chang, Yeol Je Cho, and Haiyun Zhou J. Korean Math. Soc. 38 (2001), No. 6, pp. 1245 1260 DEMI-CLOSED PRINCIPLE AND WEAK CONVERGENCE PROBLEMS FOR ASYMPTOTICALLY NONEXPANSIVE MAPPINGS Shih-sen Chang, Yeol Je Cho, and Haiyun Zhou Abstract.

More information

PROXIMAL POINT ALGORITHMS INVOLVING FIXED POINT OF NONSPREADING-TYPE MULTIVALUED MAPPINGS IN HILBERT SPACES

PROXIMAL POINT ALGORITHMS INVOLVING FIXED POINT OF NONSPREADING-TYPE MULTIVALUED MAPPINGS IN HILBERT SPACES PROXIMAL POINT ALGORITHMS INVOLVING FIXED POINT OF NONSPREADING-TYPE MULTIVALUED MAPPINGS IN HILBERT SPACES Shih-sen Chang 1, Ding Ping Wu 2, Lin Wang 3,, Gang Wang 3 1 Center for General Educatin, China

More information

Coordinate Update Algorithm Short Course Proximal Operators and Algorithms

Coordinate Update Algorithm Short Course Proximal Operators and Algorithms Coordinate Update Algorithm Short Course Proximal Operators and Algorithms Instructor: Wotao Yin (UCLA Math) Summer 2016 1 / 36 Why proximal? Newton s method: for C 2 -smooth, unconstrained problems allow

More information

Journal of Convex Analysis (accepted for publication) A HYBRID PROJECTION PROXIMAL POINT ALGORITHM. M. V. Solodov and B. F.

Journal of Convex Analysis (accepted for publication) A HYBRID PROJECTION PROXIMAL POINT ALGORITHM. M. V. Solodov and B. F. Journal of Convex Analysis (accepted for publication) A HYBRID PROJECTION PROXIMAL POINT ALGORITHM M. V. Solodov and B. F. Svaiter January 27, 1997 (Revised August 24, 1998) ABSTRACT We propose a modification

More information

A derivative-free nonmonotone line search and its application to the spectral residual method

A derivative-free nonmonotone line search and its application to the spectral residual method IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral

More information

A Lyusternik-Graves Theorem for the Proximal Point Method

A Lyusternik-Graves Theorem for the Proximal Point Method A Lyusternik-Graves Theorem for the Proximal Point Method Francisco J. Aragón Artacho 1 and Michaël Gaydu 2 Abstract We consider a generalized version of the proximal point algorithm for solving the perturbed

More information

Thai Journal of Mathematics Volume 14 (2016) Number 1 : ISSN

Thai Journal of Mathematics Volume 14 (2016) Number 1 : ISSN Thai Journal of Mathematics Volume 14 (2016) Number 1 : 53 67 http://thaijmath.in.cmu.ac.th ISSN 1686-0209 A New General Iterative Methods for Solving the Equilibrium Problems, Variational Inequality Problems

More information

Iterative Convex Optimization Algorithms; Part One: Using the Baillon Haddad Theorem

Iterative Convex Optimization Algorithms; Part One: Using the Baillon Haddad Theorem Iterative Convex Optimization Algorithms; Part One: Using the Baillon Haddad Theorem Charles Byrne (Charles Byrne@uml.edu) http://faculty.uml.edu/cbyrne/cbyrne.html Department of Mathematical Sciences

More information

Levenberg-Marquardt method in Banach spaces with general convex regularization terms

Levenberg-Marquardt method in Banach spaces with general convex regularization terms Levenberg-Marquardt method in Banach spaces with general convex regularization terms Qinian Jin Hongqi Yang Abstract We propose a Levenberg-Marquardt method with general uniformly convex regularization

More information

WEAK CONVERGENCE THEOREMS FOR EQUILIBRIUM PROBLEMS WITH NONLINEAR OPERATORS IN HILBERT SPACES

WEAK CONVERGENCE THEOREMS FOR EQUILIBRIUM PROBLEMS WITH NONLINEAR OPERATORS IN HILBERT SPACES Fixed Point Theory, 12(2011), No. 2, 309-320 http://www.math.ubbcluj.ro/ nodeacj/sfptcj.html WEAK CONVERGENCE THEOREMS FOR EQUILIBRIUM PROBLEMS WITH NONLINEAR OPERATORS IN HILBERT SPACES S. DHOMPONGSA,

More information

Numerical Methods for Large-Scale Nonlinear Systems

Numerical Methods for Large-Scale Nonlinear Systems Numerical Methods for Large-Scale Nonlinear Systems Handouts by Ronald H.W. Hoppe following the monograph P. Deuflhard Newton Methods for Nonlinear Problems Springer, Berlin-Heidelberg-New York, 2004 Num.

More information