Parallel Cimmino-type methods for ill-posed problems
Cao Van Chung
Seminar of Centro de Modelización Matemática, Escuela Politécnica Nacional, Quito (ModeMat, EPN, Quito, Ecuador)
cao.vanchung@epn.edu.ec, cvanchung@gmail.com
Outline 1 Introduction 2 Parallel regularization proximal methods 3 Parallel Newton-type methods 4 Parallel hybrid projection-iteration methods 5 Applications 6 References
Introduction
Problem. Solve the system
$A_i(x) = F_i(x) - f_i = 0, \quad i = 1,\dots,N,$ (*)
or the single equation
$A(x) = \sum_{i=1}^N A_i(x) = \sum_{i=1}^N \big(F_i(x) - f_i\big) = 0.$ (**)
Here $F_i : X \to Y$; $X$ is a Banach space and $Y = X^*$, or $X = Y = H$ is a Hilbert space; the $f_i \in Y$ are given. The problems (*) and (**) are considered in the ill-posed case.
Ill-posed problem & Regularization
Ill-posed problem. The problem $F(x) = f$ is well-posed (in Hadamard's sense) iff: (i) for every $f \in Y$ there exists $x_f$ with $F(x_f) = f$; (ii) $x_f$ is unique for each $f$; (iii) $x_f$ depends continuously on $f$. Otherwise the problem is ill-posed.
Regularization techniques.
Tikhonov: regularized problem $\min_x \big\{ \|F(x) - f\|_Y^2 + \alpha\|x\|_X^2 \big\}$, $\alpha > 0$.
Lavrentiev: $F : H \to H$, $H$ Hilbert. Regularized equation $A(x) + \alpha x = F(x) - f + \alpha x = 0$, $\alpha > 0$, where $F$ is monotone: $\langle F(x) - F(y), x - y\rangle \ge 0$ for all $x, y \in \mathrm{Dom}(F)$.
Operator splitting up techniques
Solve $Ax = b$, $A \in \mathbb{R}^{m\times m}$, $b \in \mathbb{R}^m$ given. For $k \ge 0$, with $a_i$ the $i$-th row of $A$:
S. Kaczmarz (1937): $x_{k+1} = x_k + \dfrac{b_{[k]} - \langle a_{[k]}, x_k\rangle}{\|a_{[k]}\|^2}\, a_{[k]}$, where $[k] = (k \bmod m) + 1$.
G. Cimmino (1938): $x_k^i = \dfrac{b_i - \langle a_i, x_k\rangle}{\|a_i\|^2}\, a_i$, $i = 1,\dots,m$; $\quad x_{k+1} = x_k + \dfrac{1}{m}\sum_{i=1}^m x_k^i$.
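A minimal numerical sketch of the two update rules on a small consistent square system (plain NumPy; the random data, iteration counts, and function names are illustrative assumptions, not part of the original slides):

```python
import numpy as np

def kaczmarz(A, b, x0, iters=2000):
    """Sequential Kaczmarz: project onto one row's hyperplane per step."""
    x = x0.astype(float).copy()
    m = A.shape[0]
    for k in range(iters):
        i = k % m                                   # 0-based version of [k] = (k mod m) + 1
        a = A[i]
        x += (b[i] - a @ x) / (a @ a) * a
    return x

def cimmino(A, b, x0, iters=2000):
    """Cimmino: compute all row-wise corrections in parallel, then average them."""
    x = x0.astype(float).copy()
    m = A.shape[0]
    for _ in range(iters):
        corrections = [(b[i] - A[i] @ x) / (A[i] @ A[i]) * A[i] for i in range(m)]
        x += sum(corrections) / m                   # x_{k+1} = x_k + (1/m) sum_i x_k^i
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    x_true = rng.standard_normal(4)
    b = A @ x_true                                  # consistent system
    x0 = np.zeros(4)
    print("Kaczmarz error:", np.linalg.norm(kaczmarz(A, b, x0) - x_true))
    print("Cimmino  error:", np.linalg.norm(cimmino(A, b, x0) - x_true))
```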
Operator splitting up techniques Figure: Two splitting up techniques
Proposed methods
Parallel regularization proximal methods: Implicit Iterative Regularization (PIIR) method; Explicit Iterative Regularization (PEIR) method.
Parallel Newton-type methods: Regularized Newton-Kantorovich (PRNK) method; Regularized Gauss-Newton (PRGN) method.
Parallel hybrid projection-iteration methods: Proximal-Projection (PPXP) method; Hybrid CQ-Projection (PCQP) method.
Parallel regularization proximal methods
Solve the system in a Hilbert space $H$:
$A_i(x) = F_i(x) - f_i = 0, \quad i = 1,\dots,N,$ (1.1)
where the $A_i : H \to H$ are inverse-strongly monotone operators.
Inverse-strongly monotone (ISM) operators: $\langle A(x) - A(y), x - y\rangle \ge c^{-1}\|A(x) - A(y)\|^2$ for all $x, y \in \mathrm{Dom}(A) \subseteq H$.
Examples:
$A : H \to H$ linear, compact, $A \ge 0$, $A = A^*$ $\Rightarrow$ $A$ is ISM.
$P_C$ the metric projector onto a convex set $C \subseteq H$ $\Rightarrow$ $P_C$ and $A = I - P_C$ are ISM.
$A : H \to H$ such that for some $L, c > 0$ and all $x, y \in \mathrm{Dom}(A) \subseteq H$: $\|A(x) - A(y)\| \le L\|x - y\|$ and $\langle A(x) - A(y), x - y\rangle \ge c\|x - y\|^2$ $\Rightarrow$ $A$ is ISM.
Idea and Preliminaries
Regularization. Let $\{\alpha_k\} \subset \mathbb{R}_+$, $\alpha_k \searrow 0$, and consider the Lavrentiev regularization
$\sum_{i=1}^N A_i(x) + \alpha_k x = 0.$ (1.2)
If $S_A := \{ z \in H : \sum_{i=1}^N A_i(z) = 0 \} \ne \emptyset$, then for every $k \in \mathbb{N}$ there exists a unique $x_k^*$ with $\sum_{i=1}^N A_i(x_k^*) + \alpha_k x_k^* = 0$, and $\lim_{k\to\infty} x_k^* = x^\dagger := \operatorname{argmin}_{z \in S_A}\|z\|$. Moreover $\|x_k^*\| \le \|x^\dagger\|$ and $\|x_k^* - x_{k+1}^*\| \le \dfrac{\alpha_k - \alpha_{k+1}}{\alpha_k}\|x^\dagger\|$.
If the $A_i$ are ISM, then $\|A_i(x_k^*)\| \le 2c_i\alpha_k\|x^\dagger\|$ for $i = 1,\dots,N$.
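A small sketch of (1.2) for linear monotone operators $A_i(x) = M_ix - f_i$ built from rank-one PSD matrices, so the summed operator is singular and the equation is genuinely regularized; the matrices, data and $\alpha_k$ values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 6, 3
Ms = []
for _ in range(N):
    L = rng.standard_normal((n, 1))
    Ms.append(L @ L.T)                 # rank-one PSD blocks -> monotone, singular sum
x_exact = rng.standard_normal(n)
fs = [M @ x_exact for M in Ms]         # consistent data: sum_i (M_i x - f_i) = 0 at x_exact

M_sum, f_sum = sum(Ms), sum(fs)
for k in range(6):
    alpha = 10.0 ** (-k)               # alpha_k -> 0
    x_k = np.linalg.solve(M_sum + alpha * np.eye(n), f_sum)   # sum_i A_i(x) + alpha x = 0
    print(f"alpha={alpha:.0e}  residual={np.linalg.norm(M_sum @ x_k - f_sum):.2e}"
          f"  |x_k|={np.linalg.norm(x_k):.4f}")
# As alpha -> 0 the residual vanishes and x_k stabilizes at the minimum-norm solution in S_A.
```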
Implicit Iterative Regularization (PIIR) method
Idea. Proximal method: find $x_{k+1}$ from $A(x_{k+1}) + \gamma(x_{k+1} - x_k) = 0$ ($\gamma > 0$, at step $k \in \mathbb{N}$); then $x_k \to x^*$ with $A(x^*) = 0$ (1). Split (1.2) into the sub-equations $A_i(x_k^i) + \frac{\alpha_k}{N}x_k^i = 0$ and apply one step of the proximal method:
$A_i(x_k^i) + \frac{\alpha_k}{N}x_k^i + \gamma_k(x_k^i - x_k) = 0, \quad i = 1,\dots,N.$
PIIR method in the noise-free case:
$A_i(x_k^i) + \Big(\frac{\alpha_k}{N} + \gamma_k\Big)x_k^i = \gamma_kx_k, \quad i = 1, 2, \dots, N,$
$x_{k+1} = \frac{1}{N}\sum_{i=1}^N x_k^i, \quad k = 0, 1, 2, \dots$ (1.3)
(1) cf. works of Rockafellar R.T., Ryazantseva I.P., Xu H.-K.
Implicit Iterative Regularization (PIIR) method
Theorem 1.1. Assume that $S := \{ z \in H : A_i(z) = 0,\ i = 1,\dots,N \} \ne \emptyset$, $\alpha_k \to 0$, $\gamma_k \to +\infty$ as $k \to \infty$, and
$\frac{\gamma_k(\alpha_k - \alpha_{k+1})}{\alpha_k^2} \to 0; \qquad \sum_{k=0}^{\infty}\frac{\alpha_k}{\gamma_k} = +\infty.$
Then PIIR converges: $x_k \to x^\dagger$ as $k \to +\infty$.
Theorem 1.2. If $S = \emptyset$, then under the same assumptions as in Theorem 1.1 together with $\alpha_k\gamma_k \to +\infty$ as $k \to \infty$, $x_k$ converges to the solution of the problem: minimize $\|x\|$ subject to $x$ minimizing $\sum_{i=1}^N\|A_i(x)\|^2 = \sum_{i=1}^N\|F_i(x) - f_i\|^2$.
Explicit Iterative Regularization method (PEIR)
Each sub-problem in (1.3) leads to the fixed-point equation
$x = T_k^i(x) := x_k - \frac{1}{\gamma_k}\Big[A_i(x) + \frac{\alpha_k}{N}x\Big],$
and $\|T_k^i(x) - T_k^i(y)\| \le \dfrac{Nc + \alpha_k}{N\gamma_k}\|x - y\|$, so $T_k^i$ is a contraction.
PEIR method in the noise-free case. For arbitrary $m_k \ge 1$:
$z_{k,i}^{l+1} := z_k - \frac{1}{\gamma_k}\Big[A_i(z_{k,i}^l) + \frac{\alpha_k}{N}z_{k,i}^l\Big], \quad l = 0, 1, \dots, m_k - 1; \qquad z_{k,i}^0 := z_k;$
$z_{k+1} := \frac{1}{N}\sum_{i=1}^N z_{k,i}^{m_k}, \quad k = 0, 1, 2, \dots$
Explicit Iterative Regularization method (PEIR)
With $m_k = 1$, $N = 1$ and $\beta_k := 1/(N\gamma_k)$, PEIR reduces to the simple iteratively regularized method
$x_{k+1} = x_k - \beta_k\big[A(x_k) + \alpha_kx_k\big].$
Theorem 1.3. Suppose $\{\alpha_k\}, \{\gamma_k\}$ are as in Theorem 1.2. Moreover, assume that $S \ne \emptyset$ and $(Nc + \alpha_k)/\gamma_k \le q < 1$. Then PEIR converges: $z_k \to x^\dagger$ as $k \to +\infty$. If $S = \emptyset$, then $z_k$ converges to the solution of the problem: minimize $\|x\|$ subject to $x$ minimizing $\sum_{i=1}^N\|A_i(x)\|^2 = \sum_{i=1}^N\|F_i(x) - f_i\|^2$.
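A minimal PEIR sketch ($m_k = 1$) for linear operators $A_i(x) = M_ix - f_i$; the data and the parameter schedules below are illustrative assumptions, not the calibrated choices of Theorem 1.3:

```python
import numpy as np

def peir(Ms, fs, iters=2000, alpha0=1.0, gamma0=None):
    """Explicit parallel iterative regularization: one explicit sweep per operator, then average."""
    N, n = len(Ms), Ms[0].shape[0]
    if gamma0 is None:
        # keep (N*c + alpha_k)/gamma_k comfortably below 1 so each T_k^i is a contraction
        gamma0 = 2.0 * max(np.linalg.norm(M, 2) for M in Ms)
    z = np.zeros(n)
    for k in range(iters):
        alpha = alpha0 / (1.0 + k) ** (1.0 / 3.0)   # alpha_k -> 0 (illustrative rate)
        gamma = gamma0 * (1.0 + k) ** 0.1           # gamma_k -> +infinity (illustrative rate)
        sweeps = []
        for M, f in zip(Ms, fs):
            Az = M @ z - f                          # A_i(z_k)
            sweeps.append(z - (Az + (alpha / N) * z) / gamma)
        z = sum(sweeps) / N                         # z_{k+1}
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n, N = 5, 2
    Ms = []
    for _ in range(N):
        L = rng.standard_normal((n, n))
        Ms.append(L @ L.T + np.eye(n))              # PSD + I -> inverse-strongly monotone
    x_true = rng.standard_normal(n)
    fs = [M @ x_true for M in Ms]
    z = peir(Ms, fs)
    print("error:", np.linalg.norm(z - x_true))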
PRPX methods in inexact data cases
Inexact data: $f_{k,i}$, $F_{k,i}$ are noisy data for $f_i$, $F_i$ at step $k$:
$\|F_{k,i}(x) - F_i(x)\| \le h_k\,g(\|x\|),\ x \in H; \qquad \|f_{k,i} - f_i\| \le \delta_k,$
with $h_k, \delta_k > 0$, $k = 1, 2, \dots$, and $g : \mathbb{R}_+ \to \mathbb{R}_+$ an increasing function. Set $A_{k,i}(x) := F_{k,i}(x) - f_{k,i}$.
PIIR method in the noisy case:
$A_{k,i}(z_k^i) + \Big(\frac{\alpha_k}{N} + \gamma_k\Big)z_k^i = \gamma_kz_k, \quad i = 1, 2, \dots, N,$
$z_{k+1} = \frac{1}{N}\sum_{i=1}^N z_k^i, \quad k = 0, 1, 2, \dots$
PRPX methods in inexact data cases
Theorem 1.4. Let $\{\alpha_k\}, \{\gamma_k\}$ satisfy all the assumptions of Theorem 1.2. Moreover, assume that for all $k \in \mathbb{N}$ and some $m_1 > 0$:
$\frac{\gamma_k(\alpha_k - \alpha_{k+1})}{\alpha_k^3} \le \frac{m_1\gamma_0}{\alpha_0^2}; \qquad \gamma_k\alpha_k^2 \ge \gamma_0\alpha_0^2; \qquad \frac{h_k\,g(\|x^\dagger\|) + \delta_k}{\alpha_k} \le \frac{h_0\,g(\|x^\dagger\|) + \delta_0}{\gamma_k},$
and $(1 - 4m_1 + m_1^2)\alpha_0 > 4m_1N\gamma_0$, $\alpha_0 \ge \gamma_0N$ and $\|x^\dagger\|^2 \le l\alpha_0^2$, with
$l := \frac{2(2N\gamma_0 + \alpha_0)}{\gamma_0\big[(1 - 4m_1 + m_1^2)\alpha_0 - 4m_1N\gamma_0\big]}\left\{\Big[\frac{2c}{\gamma_0\alpha_0} + \frac{1}{N^2\gamma_0} + \frac{m_1\gamma_0(N\gamma_0 + \alpha_0)}{\alpha_0^3}\Big]\|x^\dagger\|^2 + \Big(\frac{h_0\,g(\|x^\dagger\|) + \delta_0}{2\gamma_0\alpha_0}\Big)^2\right\}.$
Then, starting from $z_0 = 0$, for all $n$: $\|z_n - x_n^*\| \le \alpha_n\sqrt{l}$ (hence $z_n \to x^\dagger$).
PRPX methods in inexact data cases
Exact-operator case: $h_k \equiv 0$ (i.e. $F_{k,i}(x) \equiv F_i(x)$) and $\|f_{k,i} - f_i\| \le \delta$, $\delta > 0$; set $f^\delta := \sum_{i=1}^N f_{k,i}$.
Theorem 1.5 (a-posteriori rule). Let $\{\alpha_k\}, \{\gamma_k\}$ be as in Theorem 1.2 and, for all $k \in \mathbb{N}$ and some $m_1 > 0$, $\eta \in (0, 1]$:
$\frac{\alpha_k}{\alpha_{k+1}} \le 2; \qquad \frac{\gamma_k(\alpha_k - \alpha_{k+1})}{\alpha_k^3} \le \frac{m_1\gamma_0}{\alpha_0^2}; \qquad \gamma_k\alpha_k^2 \ge \gamma_0\alpha_0^2; \qquad \|F(0) - f^\delta\| > C_1\delta^\eta \ \ (C_1 > 1).$
Moreover, $(1 - 4m_1 + m_1^2)\alpha_0 > 4m_1N\gamma_0$, $\alpha_0 \ge \gamma_0N$, $\|x^\dagger\|^2 \le l\alpha_0^2$ and $c^2 \le l\|x^\dagger\|^2$.
PRPX methods in inexact data cases
Theorem 1.5 (a-posteriori rule), continued. Here
$l := \frac{4(2N\gamma_0 + \alpha_0)(C + 1)\|x^\dagger\|^2}{(C - 1)\gamma_0\big[(1 - 4m_1 + m_1^2)\alpha_0 - 4m_1N\gamma_0\big]}\left\{\frac{(N^2c + \alpha_0)(C + 1)}{N^2\gamma_0\alpha_0^2(C - 1)} + \frac{m_1\gamma_0(N\gamma_0 + \alpha_0)}{\alpha_0^3}\right\},$
where $C := (C_1 + 1)/2 > 1$. Then, starting from $z_0 = 0$, there exists an iteration number $k_\delta \in \mathbb{N}$ such that:
1. $\|F(z_{k_\delta}) - f^\delta\| \le C_1\delta^\eta < \|F(z_k) - f^\delta\|$ for all $k \in \mathbb{N}$ with $0 \le k < k_\delta$;
2. If $k_\delta$ stays bounded as $\delta \to 0$, then $\lim_{\delta\to 0}z_{k_\delta} = y \in S = \{x \in H : F(x) = f\}$. Otherwise, if $\lim_{\delta\to 0}k_\delta = \infty$ and $\eta < 1$, then $\lim_{\delta\to 0}z_{k_\delta} = x^\dagger$.
PRPX methods applied to (**)
Apply PRPX to solve the equation in a Hilbert space $H$:
$A(x) = \sum_{i=1}^N A_i(x) = \sum_{i=1}^N\big(F_i(x) - f_i\big) = 0,$ (**)
with $F_i : H \to H$ continuous and monotone. Denote $S_A := \{z \in \mathrm{Dom}(A) \subseteq H : A(z) = 0\}$.
Corollary 1.1. If $S_A \ne \emptyset$, then under the assumptions of Theorem 1.2 the sequence $x_k$ in (1.3) converges to $x^\dagger := \operatorname{argmin}_{z \in S_A}\|z\|$. Otherwise, if $S_A = \emptyset$, then $x_k$ converges to the solution of the problem: minimize $\|x\|$ subject to $x$ minimizing $\|A(x)\|^2 = \big\|\sum_{i=1}^N(F_i(x) - f_i)\big\|^2$.
Regularized Newton-Kantorovich Method (PRNK)
Problem setting. Solve the equation (**)
$A(x) = \sum_{i=1}^N A_i(x) = \sum_{i=1}^N\big(F_i(x) - f_i\big) = 0,$
with $F_i : H \to H$ ($i = 1,\dots,N$) monotone and Fréchet differentiable in $B[0, r]$. Assume $S_A := \{z \in H : A(z) = 0\} \ne \emptyset$.
Idea. Apply the PIIR method to (**). For each $k \ge 0$, apply one Newton-Kantorovich step to the sub-equation in (1.3) with respect to $i$:
$\Big[F_i'(x_k) + \Big(\frac{\alpha_k}{N} + \gamma_k\Big)I\Big](x_k^i - x_k) = -\Big[A_i(x_k) + \frac{\alpha_k}{N}x_k\Big].$
Regularized Newton-Kantorovich Method (PRNK)
PRNK method:
$\Big[F_i'(z_k) + \Big(\frac{\alpha_k}{N} + \gamma_k\Big)I\Big]h_k^i = -\Big[A_i(z_k) + \frac{\alpha_k}{N}z_k\Big], \quad i = 1,\dots,N,$
$z_{k+1} = z_k + \frac{1}{N}\sum_{i=1}^N h_k^i, \quad k \ge 0.$
Theorem 2.1 (convergence of PRNK). Assume that $\alpha_k \to 0$, $\gamma_k \to +\infty$; $\gamma_k\alpha_k^4 \ge \gamma_0\alpha_0^4$; $N^2 \le 4\gamma_0\alpha_0^2$; each $F_i$ is twice continuously Fréchet differentiable in $B[0, r]$ with $r = M\sqrt{D}$, $M > 2$, where $M, D$ are chosen so that $\max\{C_A^2, \|x^\dagger\|^2\} \le D$, $D \le l\alpha_0\gamma_0 < (M - 1)^2D$, $C_A \ge \big\|F_i(x_k) + \frac{\alpha_k}{N}x_k\big\|$; and there exists $\phi > 0$ with $\|F_i''(x) - F_i''(y)\| \le \phi\|x - y\|$ for all $x, y \in B[0, r]$.
Regularized Newton-Kantorovich Method (PRNK)
Theorem 2.1 (convergence of PRNK), continued. Moreover, there exist $c_1, c_2$ such that
$\frac{\gamma_k^3N(\alpha_k - \alpha_{k+1})^2}{\alpha_k^5} \le \frac{c_1\gamma_0^3}{\alpha_0^3}; \qquad \gamma_k(\gamma_{k+1} - \gamma_k) \le c_2\gamma_0^2,$
and
$l := \frac{2\gamma_0(1 - c_1)}{(2 + c_2)(2N\gamma_0 + \alpha_0)}\left\{\frac{2\gamma_0\alpha_0^2}{4Nc_1\gamma_0^3 + (4c_1 + 2\sqrt{c_1} + c_2)\alpha_0\gamma_0^2 + 2\alpha_0}\right\}^2\phi^{-2}.$
Then, starting from $z_0 = 0$, $z_k \to x^\dagger := \operatorname{argmin}_{z \in S_A}\|z\|$ as $k \to +\infty$.
Remark: e.g. for the parameters $\alpha_n := \alpha_0(1 + n)^{-p}$, $0 < p \le 1/8$, and $\gamma_n := \gamma_0(1 + n)^{1/2}$, where $\gamma_0 := \max\{5, (12\phi^2D)^2\}$, $\alpha_0 := 5N\gamma_0$ and $M \ge 3$, with $c_1 = 1/64$, $c_2 = 1/2$.
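A sketch of the PRNK sweep for a toy monotone system in $\mathbb{R}^n$ ($F_i(x) = M_ix + 0.1\tanh(x)$ componentwise, with $M_i$ positive definite); the data and the parameter schedules (deliberately faster $\alpha_k$ decay than the remark's $p \le 1/8$) are illustrative assumptions:

```python
import numpy as np

def prnk(Fs, Jacs, fs, iters=400, alpha0=1.0, gamma0=2.0):
    """PRNK: one regularized Newton-Kantorovich step per sub-equation, then average."""
    N, n = len(Fs), fs[0].shape[0]
    z = np.zeros(n)
    for k in range(iters):
        alpha = alpha0 / (1.0 + k) ** 0.5            # alpha_k -> 0 (illustrative rate)
        gamma = gamma0 * (1.0 + k) ** 0.25           # gamma_k -> +infinity
        hs = []
        for F, J, f in zip(Fs, Jacs, fs):
            lhs = J(z) + (alpha / N + gamma) * np.eye(n)   # F_i'(z_k) + (alpha_k/N + gamma_k) I
            rhs = -(F(z) - f + (alpha / N) * z)            # -[A_i(z_k) + (alpha_k/N) z_k]
            hs.append(np.linalg.solve(lhs, rhs))
        z = z + sum(hs) / N
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n, N = 4, 2
    Ms = []
    for _ in range(N):
        L = rng.standard_normal((n, n))
        Ms.append(L @ L.T + np.eye(n))               # positive definite -> monotone linear part
    x_true = rng.standard_normal(n)
    Fs   = [lambda x, M=M: M @ x + 0.1 * np.tanh(x) for M in Ms]
    Jacs = [lambda x, M=M: M + 0.1 * np.diag(1.0 / np.cosh(x) ** 2) for M in Ms]
    fs   = [F(x_true) for F in Fs]
    print("error:", np.linalg.norm(prnk(Fs, Jacs, fs) - x_true))
```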
Regularized Newton-Kantorovich Method (PRNK)
Convergence rate. Let $B := B[x^\dagger, r]$ and $A_i \in C^2(B)$ with $\|A_i''(x)\| \le L_i$, $i = 1,\dots,N$, $x \in B$.
If the $F_i$ are monotone ($i = 1,\dots,N$) and there exists $u \in H$ with $x^\dagger = \sum_{i=1}^N F_i'(x^\dagger)u$ and $\sum_{i=1}^N L_i\|u\| < 2$, then $\|x_k^* - x^\dagger\| = O(\alpha_k)$.
If $F_i$ is $c_i^{-1}$-ISM, $F_i(x) = 0$ ($i = 1,\dots,N$) is a consistent system and there exist $w_i \in H$ ($i = 1,\dots,N$) with $x^\dagger = \sum_{i=1}^N F_i'(x^\dagger)w_i$ and $\sum_{i=1}^N L_i\|w_i\| < 1$, then $\|x_k^* - x^\dagger\| = O(\alpha_k^{1/4})$.
Under these conditions, $z_k$ in PRNK converges with the corresponding rate: $\|z_k - x^\dagger\| = O(\alpha_k)$ or $\|z_k - x^\dagger\| = O(\alpha_k^{1/4})$.
Regularized Gauss-Newton Method (PRGN)
Problem setting. For $i = 1,\dots,N$: $X, Y_i$ are Hilbert spaces and $F_i : X \to Y_i$ is Fréchet differentiable. Solve the system $F_i(x) = y_i$, $i = 1,\dots,N$. Set $Y = Y_1 \times Y_2 \times \dots \times Y_N$ with $\langle\cdot,\cdot\rangle_Y = \sum_{i=1}^N\langle\cdot,\cdot\rangle_{Y_i}$; $F : X \to Y$, $F(x) = (F_1(x), \dots, F_N(x))$; $y^\delta = (y_1^\delta, \dots, y_N^\delta)$ is the noisy data with $\|y_i - y_i^\delta\| \le \delta > 0$. Consider the problem in the noisy case: $F(x) = y^\delta$. Assume $S := \{z \in X : F(z) - y = 0_Y\} \ne \emptyset$.
Regularized Gauss-Newton Method (PRGN)
Idea. Apply Tikhonov regularization ($x_0$ given; $\alpha_k > 0$):
$\sum_{i=1}^N\|F_i(x) - y_i^\delta\|_{Y_i}^2 + \alpha_k\|x - x_0\|^2 \to \min_{x \in X}.$
Split the regularized problem into sub-problems with respect to $i$ and apply a Gauss-Newton iteration to each sub-problem.
The known sequential RGN method: choose $x_0^\delta \in X$ arbitrary. For $k = 0, 1, 2, \dots$:
$x_{k+1}^\delta = x_k^\delta - \Big(\sum_{i=1}^N F_i'(x_k^\delta)^*F_i'(x_k^\delta) + \alpha_kI\Big)^{-1}\Big(\sum_{i=1}^N F_i'(x_k^\delta)^*\big(F_i(x_k^\delta) - y_i^\delta\big) + \alpha_k(x_k^\delta - x_0)\Big).$
Regularized Gauss-Newton Method (PRGN)
Parallel Regularized Gauss-Newton method (PRGN). Choose $x_0^\delta \in X$ arbitrary; $\beta_k := \alpha_k/N$. For $k = 0, 1, 2, \dots$:
$h_k^i = -\big(F_i'(x_k^\delta)^*F_i'(x_k^\delta) + \beta_kI\big)^{-1}\big(F_i'(x_k^\delta)^*\big(F_i(x_k^\delta) - y_i^\delta\big) + \beta_k(x_k^\delta - x_0)\big), \quad i = 1,\dots,N,$
$x_{k+1}^\delta = x_k^\delta + \frac{1}{N}\sum_{i=1}^N h_k^i.$
Assumptions
A1. For some $\rho > 1$, $\alpha_k$ is chosen such that $\alpha_k > 0$, $\alpha_k \to 0$, $1 \le \alpha_k/\alpha_{k+1} \le \rho$.
A2. There exists $r > 0$ such that the $F_i$ ($i = 1,\dots,N$) are continuously differentiable in $B_{2r}(x_0)$ and $x^\dagger \in B_r(x_0) \cap S$.
Regularized Gauss-Newton Method (PRGN)
Assumptions
A3. The source condition holds for some $0 < \mu \le 1$ and $v_i \in X$:
$x^\dagger - x_0 = \big(F_i'(x^\dagger)^*F_i'(x^\dagger)\big)^\mu v_i.$
If $0 < \mu \le \frac12$, then for all $x, z \in B_{2r}(x_0)$ and $v \in X$ there exists $h_i(x, z, v) \in X$ with
$\big(F_i'(x) - F_i'(z)\big)v = F_i'(z)h_i(x, z, v), \qquad \|h_i(x, z, v)\| \le K_0\|x - z\|\,\|v\|.$
If $\frac12 < \mu \le 1$, then $F_i'$ is Lipschitz continuous in $B_{2r}(x_0)$.
A-priori stopping rule: choose $K_\delta > 0$ such that
$\eta\,\alpha_{K_\delta}^{\mu + \frac12} \le \frac{\delta}{N} < \eta\,\alpha_k^{\mu + \frac12}, \quad 0 \le k < K_\delta,$
with $\eta > 0$ a fixed parameter.
Theorem 2.2. Let A1, A2, A3 hold and stop the iteration at $k := K_\delta$. If the $\|v_i\|$ and $\eta$ are sufficiently small and $x_0^\delta = x_0$ is close enough to $x^\dagger$, then PRGN converges with $\|x_{K_\delta}^\delta - x^\dagger\| = O(\alpha_{K_\delta}^\mu)$.
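A sketch of the PRGN sweep for a toy nonlinear least-squares system in $\mathbb{R}^n$; the forward maps, the geometric decay $\alpha_k = \alpha_0\rho^{-k}$ and the fixed iteration count (instead of the a-priori stopping rule) are illustrative assumptions rather than the calibrated choices of Theorem 2.2:

```python
import numpy as np

def prgn(Fs, Jacs, ys, x0, alpha0=1.0, rho=1.5, iters=30):
    """Parallel regularized Gauss-Newton: one regularized GN step per block, then average."""
    N = len(Fs)
    x = x0.copy()
    for k in range(iters):
        alpha = alpha0 / rho ** k                    # A1: alpha_k > 0, alpha_k -> 0, alpha_k/alpha_{k+1} = rho
        beta = alpha / N
        hs = []
        for F, J, y in zip(Fs, Jacs, ys):
            Jk = J(x)
            lhs = Jk.T @ Jk + beta * np.eye(x.size)
            rhs = Jk.T @ (F(x) - y) + beta * (x - x0)
            hs.append(-np.linalg.solve(lhs, rhs))
        x = x + sum(hs) / N
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n, N, delta = 3, 2, 1e-3
    As = [rng.standard_normal((n, n)) for _ in range(N)]
    Fs   = [lambda x, A=A: A @ x + 0.05 * x ** 3 for A in As]       # smooth nonlinear blocks
    Jacs = [lambda x, A=A: A + 0.15 * np.diag(x ** 2) for A in As]
    x_true = rng.standard_normal(n)
    ys = [F(x_true) + delta * rng.standard_normal(n) for F in Fs]   # noisy data y_i^delta
    x_rec = prgn(Fs, Jacs, ys, np.zeros(n))
    print("reconstruction error:", np.linalg.norm(x_rec - x_true))
```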
Parallel Proximal-Projection (PPXP) method
Problem. Solve the system (*) in a Hilbert space $H$: $A_i(x) = 0$ ($i = 1,\dots,N$), where $A_i : H \to H$ is maximal monotone for $i = 1,\dots,N$.
Maximal monotone operator. $F : H \to H$ is maximal monotone iff $F$ is monotone, $\langle F(x) - F(y), x - y\rangle \ge 0$ for all $x, y \in H$, and there is no monotone $G$ with $\mathrm{Graph}(F) \subsetneq \mathrm{Graph}(G)$, where $\mathrm{Graph}(F) := \{(x, y) \in H \times H : x \in \mathrm{Dom}(F),\ y = F(x)\}$.
If $F$ is maximal monotone and $S_F := \{z \in H : F(z) = 0\} \ne \emptyset$, then $S_F$ is convex and closed.
Parallel Proximal-Projection (PPXP) method
Proximal-Projection method (2). To solve a single equation $A(x) = 0$ ($N = 1$): choose $x_0 \in H$ arbitrary, $\mu_k \in (0, \bar\mu)$, $\sigma \in [0, 1)$. At step $k \ge 0$:
Proximal: find $y_k \in H$ with $A(y_k) + \mu_k(y_k - x_k) + e_k = 0$, where $\|e_k\| \le \sigma\max\{\|A(y_k)\|, \mu_k\|x_k - y_k\|\}$.
If $x_k = y_k$ or $A(y_k) = 0$, then stop. Else define
$H_k = \{z \in H : \langle z - y_k, A(y_k)\rangle \le 0\}, \qquad W_k = \{z \in H : \langle z - x_k, x_0 - x_k\rangle \le 0\}.$
Projection: $x_{k+1} := P_{H_k \cap W_k}(x_0)$. Set $k := k + 1$ and repeat. Here $P_C$ denotes the metric projector onto the closed convex set $C$.
(2) M. V. Solodov, B. F. Svaiter (2001), "Forcing strong convergence of proximal point iterations in Hilbert space", Math. Progr. 87, 189-202.
Figure: Illustration of Proximal-Projection method
Parallel Proximal-Projection (PPXP) method
Consider the system $A_i(x) = 0$, $A_i$ maximal monotone for $i = 1,\dots,N$.
PPXP method. Choose $x_0 \in H$ arbitrary; $\mu_k^i \in (0, \bar\mu)$; $\sigma \in [0, 1)$, $i = 1,\dots,N$. At $k \ge 0$:
Proximal: find $y_k^i$ such that $A_i(y_k^i) + \mu_k^i(y_k^i - x_k) + e_k^i = 0$, where $\|e_k^i\| \le \sigma\max\{\|A_i(y_k^i)\|, \mu_k^i\|x_k - y_k^i\|\}$ ($i = 1,\dots,N$).
Define $H_k^i = \{z \in H : \langle z - y_k^i, A_i(y_k^i)\rangle \le 0\}$ and $W_k = \{z \in H : \langle z - x_k, x_0 - x_k\rangle \le 0\}$.
Block Cimmino: find an index $j_k$ ($1 \le j_k \le N$) such that $\|x_k - P_{H_k^{j_k}}(x_k)\| = \max_{i=1,\dots,N}\{\|x_k - P_{H_k^i}(x_k)\|\}$.
Projection: compute $x_{k+1} = P_{H_k^{j_k} \cap W_k}(x_0)$. If $x_{k+1} = x_k$, then stop. Else set $k := k + 1$ and repeat.
Parallel Proximal-Projection (PPXP) method
Theorem 3.1 (convergence of the PPXP method). Assume that $S := \{x \in H : A_i(x) = 0,\ i = 1,\dots,N\} \ne \emptyset$. If the PPXP method stops at an iteration $k < \infty$, then $x_k \in S$; otherwise $x_k \to P_S(x_0)$ as $k \to \infty$. If $S = \emptyset$, then $\|x_k\| \to +\infty$ as $k \to \infty$.
Remark (computing $x_{k+1} = P_{H_k^{j_k} \cap W_k}(x_0)$):
If $P_{H_k^{j_k}}(x_0) \in W_k$, then $x_{k+1} = P_{H_k^{j_k}}(x_0)$.
Else $x_{k+1} = x_0 - \lambda_1A_{j_k}(y_k^{j_k}) - \lambda_2(x_0 - x_k)$, where $(\lambda_1, \lambda_2)$ solves
$\lambda_1\|A_{j_k}(y_k^{j_k})\|^2 + \lambda_2\langle A_{j_k}(y_k^{j_k}), x_0 - x_k\rangle = \langle A_{j_k}(y_k^{j_k}), x_0 - y_k^{j_k}\rangle,$
$\lambda_1\langle A_{j_k}(y_k^{j_k}), x_0 - x_k\rangle + \lambda_2\|x_0 - x_k\|^2 = \|x_0 - x_k\|^2.$
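A small NumPy helper (hypothetical names) for the remark above: it performs the case split, solving the 2x2 system for $\lambda_1, \lambda_2$ only when $P_{H_k^{j_k}}(x_0)$ does not already lie in $W_k$; the sign convention follows the reconstructed formulas and the two normals are assumed not to be parallel:

```python
import numpy as np

def project_H(x, y, a):
    """Projection of x onto the halfspace H = {z : <z - y, a> <= 0}."""
    viol = a @ (x - y)
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def ppxp_projection(x0, xk, y, a):
    """x_{k+1} = P_{H ∩ W}(x0), H = {z: <z-y, a> <= 0}, W = {z: <z-xk, x0-xk> <= 0},
    with a = A_{j_k}(y_k^{j_k}) and y = y_k^{j_k}."""
    pH = project_H(x0, y, a)
    d = x0 - xk
    if d @ (pH - xk) <= 1e-12:                  # P_H(x0) already belongs to W_k
        return pH
    # Both constraints active: solve the 2x2 system of the remark for (lambda1, lambda2).
    G = np.array([[a @ a, a @ d],
                  [a @ d, d @ d]])              # assumes a and d are not parallel
    rhs = np.array([a @ (x0 - y), d @ d])
    lam1, lam2 = np.linalg.solve(G, rhs)
    return x0 - lam1 * a - lam2 * d
```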
Parallel CQ-Projection method (PCQP)
Problem. Solve the system $A_i(x) = 0$, where $A_i : X \to X^*$, $i = 1,\dots,N$.
Assumptions: $X$ is a reflexive, uniformly convex, uniformly smooth real Banach space with dual space $X^*$; $A_i$ is maximal monotone ($i = 1,\dots,N$) with respect to the duality pairing $\langle f, x\rangle := f(x)$, $x \in X$, $f \in X^*$; $S := \{z \in X : A_i(z) = 0,\ i = 1,\dots,N\} \ne \emptyset$.
Parallel CQ-Projection method (PCQP)
Preliminaries. Normalized duality mapping $J : X \to X^*$:
$J(x) = \{f \in X^* : \langle f, x\rangle = \|x\|_X^2 = \|f\|_{X^*}^2\}$
(single-valued under the standing assumptions on $X$).
Generalized distance: the functional $\varphi : X \times X \to \mathbb{R}_+$,
$\varphi(x, y) = \|x\|^2 - 2\langle J(y), x\rangle + \|y\|^2, \quad (x, y) \in X \times X.$
Generalized metric projection from $X$ onto $C \subseteq X$, $C \ne \emptyset$: $\Pi_C(x) := \operatorname{argmin}_{z \in C}\varphi(z, x)$. If $C$ is a convex, closed, nonempty set, then $\Pi_C(x)$ exists and is unique for every $x \in X$.
Parallel CQ-Projection method (PCQP)
Preliminaries. Let $\emptyset \ne C \subseteq X$ be convex and closed and $T : C \to C$. A point $p \in C$ is an asymptotic fixed point of $T$ if there exists $\{x_n\} \subset C$ with $x_n \rightharpoonup p$ (weakly) and $\|T(x_n) - x_n\| \to 0$ as $n \to \infty$. Denote by $\hat F(T)$ the set of all asymptotic fixed points of $T$.
Relatively nonexpansive mapping: $T : C \to C$ such that $F(T) := \{u \in C : u = T(u)\} \ne \emptyset$, $\hat F(T) = F(T)$, and $\varphi(p, T(x)) \le \varphi(p, x)$ for all $p \in F(T)$, $x \in C$.
If $\emptyset \ne C \subseteq X$ is convex and closed and $T : C \to C$ is relatively nonexpansive, then $F(T)$ is closed and convex.
Parallel CQ-Projection method (PCQP)
Preliminaries. Let $A : X \to X^*$ be a maximal monotone operator with $S_A := \{z \in X : A(z) = 0\} \ne \emptyset$. Then the resolvent
$J_r^A : X \to \mathrm{Dom}(A), \qquad J_r^A := (J + rA)^{-1}J, \quad r > 0,$
is a relatively nonexpansive mapping and $F(J_r^A) = S_A$ for all $r > 0$.
Solving $A_i(x) = 0$ ($i = 1,\dots,N$) $\Longleftrightarrow$ finding a common fixed point of the family $J_r^{A_i}$, $i = 1,\dots,N$.
PCQP in Banach space
Problem setting. Let $C \subseteq X$ be nonempty, closed and convex; $T_i : C \to C$ ($i = 1,\dots,N$) relatively nonexpansive mappings. Find $x^* \in F := \bigcap_{i=1}^N F(T_i) = \bigcap_{i=1}^N\{z \in C : T_i(z) = z\} \ne \emptyset$.
CQ-method (3) for one mapping $T$ ($N = 1$): $x_0 \in C$ chosen arbitrarily; for $k \ge 0$,
$y_k := J^{-1}\big(\alpha_kJ(x_k) + (1 - \alpha_k)J(T(x_k))\big),$
$C_k := \{z \in C : \varphi(z, y_k) \le \varphi(z, x_k)\},$
$Q_k := \{z \in C : \langle x_k - z, J(x_0) - J(x_k)\rangle \ge 0\},$
$x_{k+1} := \Pi_{C_k \cap Q_k}(x_0).$
(3) S. Matsushita and W. Takahashi, "A strong convergence theorem for relatively nonexpansive mappings in a Banach space", J. Approx. Theory, 134 (2006), 257-266.
PCQP in Banach space
Algorithm PCQP 1. Let $x_0 \in C$ be arbitrarily chosen; $\{\alpha_k\} \subset [0, 1)$, $\alpha_k \to 0$ as $k \to \infty$. For $k \ge 0$:
Calculate $y_k^i := J^{-1}\big(\alpha_kJ(x_k) + (1 - \alpha_k)J(T_i(x_k))\big)$, $i = 1,\dots,N$.
Find $i_k := \operatorname{argmax}_{i=1,2,\dots,N}\{\|y_k^i - x_k\|\}$.
Define $C_k := \{v \in C : \varphi(v, y_k^{i_k}) \le \varphi(v, x_k)\}$ and $Q_k := \{u \in C : \langle J(x_0) - J(x_k), x_k - u\rangle \ge 0\}$.
Compute $x_{k+1} := \Pi_{C_k \cap Q_k}(x_0)$. If $x_k = x_{k+1}$, then stop. Else set $k := k + 1$ and repeat.
PCQP in Banach space
Remark. Whenever Algorithm PCQP 1 reaches a step $k \ge 0$, one has $F \subseteq C_k \cap Q_k$, so $x_{k+1}$ is well-defined.
Lemma 3.1. If Algorithm PCQP 1 finishes at a finite step $k < +\infty$, then $T_i(x_k) = x_k$ ($i = 1,\dots,N$).
Theorem 3.2 (convergence of Algorithm PCQP 1). Let $\{x_k\}$ be the (infinite) sequence generated by Algorithm PCQP 1 and let $T_i$ be continuous for $i = 1,\dots,N$. Then $x_k \to x^* := \Pi_F(x_0)$ as $k \to \infty$.
PCQP in Banach space
Algorithm PCQP 2. Let $x_0 \in C$ be arbitrarily chosen; $\{\alpha_k\} \subset [0, 1)$, $\alpha_k \to 0$ as $k \to \infty$. For $k \ge 0$:
Calculate $y_k^i := J^{-1}\big(\alpha_kJ(x_0) + (1 - \alpha_k)J(T_i(x_k))\big)$, $i = 1,\dots,N$.
Find $i_k := \operatorname{argmax}_{i=1,2,\dots,N}\{\|y_k^i - x_k\|\}$.
Define $C_k := \{v \in C : \varphi(v, y_k^{i_k}) \le \alpha_k\varphi(v, x_0) + (1 - \alpha_k)\varphi(v, x_k)\}$, $Q_k := \{u \in C : \langle x_k - u, J(x_0) - J(x_k)\rangle \ge 0\}$.
Compute $x_{k+1} := \Pi_{C_k \cap Q_k}(x_0)$. If $x_k = x_{k+1}$, then stop. Else set $k := k + 1$ and repeat.
Theorem 3.3 (convergence of Algorithm PCQP 2). Let $\{x_k\}$ be the (infinite) sequence generated by Algorithm PCQP 2 and let $T_i$ be continuous for $i = 1,\dots,N$. Then $x_k \to x^* := \Pi_F(x_0)$ as $k \to \infty$.
PCQP method in Hilbert space
Algorithm PCQP 3. Let $x_0 \in C$ be arbitrarily chosen; $\{\alpha_k\} \subset [0, 1)$, $\alpha_k \to 0$ as $k \to \infty$. For $k \ge 0$:
Find $z_k := P_C(x_k)$.
Calculate $y_k^i := \alpha_kz_k + (1 - \alpha_k)T_i(z_k)$, $i = 1, 2, \dots, N$.
Determine $i_k := \operatorname{argmax}_{i=1,2,\dots,N}\{\|y_k^i - x_k\|\}$.
Define $C_k := \{v \in H : \|v - y_k^{i_k}\| \le \|v - x_k\|\}$, $Q_k := \{u \in H : \langle x_0 - x_k, x_k - u\rangle \ge 0\}$.
Compute $x_{k+1} := P_{C_k \cap Q_k}(x_0)$. If $x_k = x_{k+1}$, then stop. Else set $k := k + 1$ and repeat.
PCQP method in Hilbert space
Theorem 3.4 (convergence of Algorithm PCQP 3). If Algorithm PCQP 3 finishes at a finite step $k < +\infty$, then $T_i(x_k) = x_k$ ($i = 1,\dots,N$). Otherwise, let $\{x_k\}$ be the (infinite) sequence generated by Algorithm PCQP 3 and let $T_i$ be continuous for $i = 1,\dots,N$. Then $x_k \to x^* := P_F(x_0)$ as $k \to \infty$.
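A finite-dimensional sketch of Algorithm PCQP 3 with $C = \mathbb{R}^n$ (so $z_k = x_k$) and the $T_i$ taken as metric projections onto hyperplanes; since $C_k$ and $Q_k$ are halfspaces, $P_{C_k \cap Q_k}(x_0)$ is computed by checking the few active-set candidates. All function names, data and parameters are illustrative assumptions:

```python
import numpy as np

def proj_polyhedron2(x, halfspaces, tol=1e-9):
    """Project x onto the intersection of up to two halfspaces {z: a.z <= b}
    by enumerating every active-set candidate (enough for two constraints)."""
    hs = [(a, b) for a, b in halfspaces if np.linalg.norm(a) > tol]   # drop trivial constraints
    def feasible(z):
        return all(a @ z <= b + 1e-6 for a, b in hs)
    cands = [x]
    for a, b in hs:                                       # one constraint active
        cands.append(x - (a @ x - b) / (a @ a) * a)
    if len(hs) == 2:                                      # both constraints active
        A = np.array([hs[0][0], hs[1][0]])
        bb = np.array([hs[0][1], hs[1][1]])
        lam = np.linalg.lstsq(A @ A.T, A @ x - bb, rcond=None)[0]
        cands.append(x - A.T @ lam)
    feas = [c for c in cands if feasible(c)]
    pool = feas if feas else cands                        # defensive fallback
    return min(pool, key=lambda c: np.linalg.norm(c - x))

def pcqp3(Ts, x0, iters=100):
    """Algorithm PCQP 3 with C = R^n (z_k = x_k) and alpha_k = 1/(k+2)."""
    x = x0.copy()
    for k in range(iters):
        alpha = 1.0 / (k + 2)
        ys = [alpha * x + (1 - alpha) * T(x) for T in Ts]
        y = max(ys, key=lambda yi: np.linalg.norm(yi - x))            # block i_k
        # C_k = {v: ||v-y|| <= ||v-x||} and Q_k = {u: <x0-x, x-u> >= 0}, both halfspaces.
        Ck = (x - y, (x @ x - y @ y) / 2.0)
        Qk = (x0 - x, (x0 - x) @ x)
        x = proj_polyhedron2(x0, [Ck, Qk])
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n, N = 4, 3
    V = rng.standard_normal((N, n))
    x_star = rng.standard_normal(n)
    cs = V @ x_star                                       # hyperplanes <z, v_i> = c_i share x_star
    Ts = [lambda z, v=v, c=c: z - (v @ z - c) / (v @ v) * v for v, c in zip(V, cs)]
    x_rec = pcqp3(Ts, rng.standard_normal(n))
    print("constraint residuals:", np.abs(V @ x_rec - cs))
```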
Figure: Illustration of CQ method in Hilbert space
Applications
Data reconstruction problem
Problem. Reconstruct the weights $x \in H$ ($H$ a Hilbert space) from its projections $\langle x, v_i\rangle = \mu_i$, $i = 1, 2, \dots, N$, where the views $v_i$ are given.
Apply the PIIR method. Set the hyperplanes $H_i := \{\xi \in H : \langle\xi, v_i\rangle = \mu_i\}$ and $A_i(x) := x - P_i(x)$, where $P_i : H \to H_i$ is the metric projector
$P_i(x) = x - \frac{\langle x, v_i\rangle - \mu_i}{\|v_i\|^2}\,v_i.$
Each $A_i$ is inverse-strongly monotone, $i = 1,\dots,N$. Denote $S = \{x \in H : \langle x, v_i\rangle = \mu_i,\ i = 1,\dots,N\}$.
Data reconstruction problem
Apply the PIIR method to solve the system $A_i(x) := x - P_i(x) = 0$, $i = 1,\dots,N$:
$x_k^i = \frac{N\gamma_k}{\alpha_k + N\gamma_k}\big\{(1 - \lambda_k)x_k + \lambda_kP_i(x_k)\big\}, \quad i = 1,\dots,N,$
$x_{k+1} = \frac{N\gamma_k}{N\gamma_k + \alpha_k}\Big\{(1 - \lambda_k)x_k + \frac{\lambda_k}{N}\sum_{i=1}^N P_i(x_k)\Big\},$
where $\lambda_k = \dfrac{N}{\alpha_k + N\gamma_k + N}$.
Corollary 4.1. Let $\{\alpha_k\}$, $\{\gamma_k\}$ satisfy all the conditions of Theorem 1.1. Then $x_k \to x^\dagger = \operatorname{argmin}_{z \in S}\|z\|$.
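A sketch of this PIIR reconstruction in $\mathbb{R}^n$, solving each sub-equation of (1.3) directly with a small linear solve rather than through the closed-form update above; the views, measurements and parameter schedules are illustrative assumptions:

```python
import numpy as np

def piir_views(V, mu, iters=1000, alpha0=1.0, gamma0=2.0):
    """PIIR for A_i(x) = x - P_i(x), with P_i the projector onto {xi: <xi, v_i> = mu_i}."""
    N, n = V.shape
    x = np.zeros(n)
    for k in range(iters):
        alpha = alpha0 / (1.0 + k) ** 0.75           # alpha_k -> 0 (illustrative rate)
        gamma = gamma0 * (1.0 + k) ** 0.2            # gamma_k -> +infinity
        xs = []
        for v, m in zip(V, mu):
            # sub-equation: [(alpha/N + gamma) I + v v^T / |v|^2] x^i = gamma x_k + (mu_i/|v|^2) v
            lhs = (alpha / N + gamma) * np.eye(n) + np.outer(v, v) / (v @ v)
            rhs = gamma * x + (m / (v @ v)) * v
            xs.append(np.linalg.solve(lhs, rhs))
        x = sum(xs) / N
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    n, N = 6, 3
    V = rng.standard_normal((N, n))                  # views v_i
    w = rng.standard_normal(n)
    mu = V @ w                                       # measured projections <x, v_i> = mu_i
    x_rec = piir_views(V, mu)
    print("constraint residuals:", np.abs(V @ x_rec - mu))
    # With fewer views than unknowns, x_rec approximates the minimum-norm solution.
    print("|x_rec| vs |least-norm|:", np.linalg.norm(x_rec),
          np.linalg.norm(np.linalg.lstsq(V, mu, rcond=None)[0]))
```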
Overdetermined Linear Systems
Problem. Solve the system $Bx = g$, where $B \in \mathbb{R}^{m\times n}$, $m \ge n$, and $g \in \mathbb{R}^m$ is given.
Split the given data into $N$ blocks:
$B = \begin{pmatrix} B_1 \\ B_2 \\ \vdots \\ B_N \end{pmatrix}, \qquad g = \begin{pmatrix} g_1 \\ g_2 \\ \vdots \\ g_N \end{pmatrix}, \qquad B_i \in \mathbb{R}^{m_i\times n},\ g_i \in \mathbb{R}^{m_i},$
where $N \ge 2$, $1 \le m_i \le m - 1$, and $\sum_{i=1}^N m_i = m$.
Overdetermined Linear Systems
Apply the PIIR and PEIR methods to solve the derived system $A_i(x) := B_i^TB_ix - B_i^Tg_i = 0$, $i = 1,\dots,N$. The PIIR method becomes
$\big[B_i^TB_i + (\gamma_n + \tfrac{\alpha_n}{N})I\big]x_n^i = \gamma_nx_n + B_i^Tg_i, \quad i = 1,\dots,N, \qquad x_{n+1} = \frac{1}{N}\sum_{i=1}^N x_n^i, \quad n = 0, 1, \dots,$
and the PEIR method becomes
$x_{n+1} = \Big(1 - \frac{\alpha_n}{N\gamma_n}\Big)x_n - \frac{1}{N\gamma_n}\sum_{i=1}^N B_i^TB_ix_n + \frac{1}{N\gamma_n}\sum_{i=1}^N B_i^Tg_i.$
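A sketch of the block PIIR iteration for an overdetermined system; the row partition, a well-conditioned test matrix and the parameter schedule are illustrative assumptions (for genuinely ill-conditioned or noisy data the $\alpha_n$ term supplies the regularization, cf. the noise case below):

```python
import numpy as np

def piir_blocks(Bs, gs, iters=500, alpha0=1.0, gamma0=1.0):
    """Parallel implicit iterative regularization on the normal-equation blocks."""
    N, n = len(Bs), Bs[0].shape[1]
    x = np.zeros(n)
    for k in range(iters):
        alpha = alpha0 / (1.0 + k) ** 0.75
        gamma = gamma0 * (1.0 + k) ** 0.2
        xs = [np.linalg.solve(B.T @ B + (gamma + alpha / N) * np.eye(n),
                              gamma * x + B.T @ g) for B, g in zip(Bs, gs)]
        x = sum(xs) / N
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    m, n, N = 60, 8, 3
    B = rng.standard_normal((m, n))
    x_true = rng.standard_normal(n)
    g = B @ x_true
    Bs, gs = np.array_split(B, N), np.array_split(g, N)   # row blocks B_1..B_N, g_1..g_N
    x_rec = piir_blocks(Bs, gs)
    print("residual:", np.linalg.norm(B @ x_rec - g))
    print("error   :", np.linalg.norm(x_rec - x_true))
```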
Overdetermined Linear Systems
Noise case: suppose $B_\delta, g_\delta$ are given with $\|B_\delta - B\| \le \delta$, $\|g_\delta - g\| \le \delta$. Partitioning $B_\delta, g_\delta$ into $N$ blocks and applying PIIR, one gets
$\big[B_{\delta i}^TB_{\delta i} + (\gamma_n + \tfrac{\alpha_n}{N})I\big]z_n^i = \gamma_nz_n + B_{\delta i}^Tg_{\delta i}, \quad i = 1,\dots,N, \qquad z_{n+1} = \frac{1}{N}\sum_{i=1}^N z_n^i, \quad n \ge 0.$
Corollary 4.2. Let $\alpha_n, \gamma_n$ satisfy all the conditions of Theorem 1.2. Then, with termination index $n(\delta) = [\delta^{-\mu}]$, $z_{n(\delta)}$ converges to the solution $x^\dagger$ as the error level $\delta \to 0$. Moreover,
$\|z_{n(\delta)} - x^\dagger\| \le \frac{\omega}{\bar\gamma}\delta^{1-\mu} + \|x_{n(\delta)} - x^\dagger\|,$
where $\omega$ and $\bar\gamma$ are positive constants and $\mu \in (0, 1)$.
System of first-kind Fredholm equations
Problem. Consider the system
$(A_ix)(t) := \int_a^b K_i(t, s)x(s)\,ds - f_i(t) = 0, \quad i = 1,\dots,N,$
where the $K_i(t, s)$ are given continuous symmetric kernels and the $f_i(t)$ are given continuous functions. Moreover,
$\langle H_ix, x\rangle = \int_a^b\!\!\int_a^b K_i(t, s)x(t)x(s)\,dt\,ds \ge 0, \quad \forall x \in L_2[a, b].$
Then the operators $A_i$ are also inverse-strongly monotone; hence the system can be solved by the PIIR and PEIR methods.
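A discretized sketch showing how such a system becomes a finite-dimensional monotone system treatable by the PIIR step; the midpoint quadrature, the Gaussian kernels (symmetric and positive semidefinite), the synthetic data and the parameter schedule are illustrative assumptions:

```python
import numpy as np

# Midpoint discretization of (A_i x)(t) = int_0^1 K_i(t,s) x(s) ds - f_i(t).
n, N = 80, 2
t = (np.arange(n) + 0.5) / n                   # midpoint nodes on [0, 1]
h = 1.0 / n
widths = [0.1, 0.3]                            # two Gaussian kernels (symmetric, PSD)
Ks = [np.exp(-((t[:, None] - t[None, :]) / w) ** 2) * h for w in widths]
x_true = np.sin(2 * np.pi * t)
fs = [K @ x_true for K in Ks]                  # exact right-hand sides f_i

# PIIR iteration on the discretized system A_i(x) = K_i x - f_i.
x = np.zeros(n)
for k in range(500):
    alpha = 1.0 / (1.0 + k) ** 0.75
    gamma = 1.0 * (1.0 + k) ** 0.25
    xs = [np.linalg.solve(K + (alpha / N + gamma) * np.eye(n), gamma * x + f)
          for K, f in zip(Ks, fs)]
    x = sum(xs) / N
# The residual error mainly reflects the regularization bias, which shrinks as alpha_k -> 0.
print("relative L2 error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```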
Nonlinear integral equation
Problem. Apply PRGN to solve the system arising from a "parameter detection" problem (4): $A_i(x) = 0$,
$A_i(x) := c_i\exp\Big(\int_0^t w_i(s)x(s)\,ds\Big) - f_i, \quad x \in X := L_2(0, 1),$
where $c_i > 0$, $w_i \in C[0, 1]$ and $f_i \in X$, $i = 1,\dots,N$, are given constants and functions, respectively, with $0 < \underline{w} \le w_i(t) \le \overline{w}$ for all $i = 1,\dots,N$ and $t \in [0, 1]$. Then $A_i : X \to X$ is monotone and twice continuously Fréchet differentiable.
(4) Tautenhahn U., "On the method of Lavrentiev regularization for nonlinear ill-posed problems", Inverse Problems, 2003, 18, pp. 191-207.
Nonlinear integral equation
Problem. Apply PRNK to solve the equation arising from Wiener-type filtering theory (5):
$A(x) = \sum_{j=1}^N\big(F_j(x) - f_j\big) = 0,$
$F_j(x) := \int_0^1 \beta_j|t - s|\,x(s)\,ds + \big[\arctan(jx(t))\big]^3, \quad j = 1, 2; \quad x \in X := L_2(0, 1),$
where $F_j : X \to X$ is monotone and continuously Fréchet differentiable.
(5) N. S. Hoang, Ramm A. G., "Dynamical systems method for solving nonlinear equations with monotone operators", Math. Comp., 2010, 79, pp. 239-258.
N-dimensional steady-state Navier-Stokes (6)
Problem. Let $\Omega \subset \mathbb{R}^N$ be a convex, bounded, open set with Lipschitz boundary $\partial\Omega$. Consider
$-\eta\Delta u + \sum_{i=1}^N u_iD_iu + \nabla p = f$ in $\Omega$, $\qquad \operatorname{div}u = 0$ in $\Omega$, $\qquad u = 0$ on $\partial\Omega$,
where $u = (u_1, \dots, u_N)$, $f = (f_1, \dots, f_N)$; the $u_i, f_i$ are functions; $p$ is a scalar function; $D_i = \partial/\partial x_i$; $\eta > 0$ is a constant.
(6) Lu T., Neittaanmäki P., and Tai X.-C. (1992), "A parallel splitting up method for partial differential equations and its application to Navier-Stokes equations", RAIRO Modél. Math. Anal. Numér., 26 (6), pp. 673-708.
N-dimensional steady-state Navier-Stokes
Notation. $L_2(\Omega) = L_2(\Omega) \times \dots \times L_2(\Omega)$ with $\langle u, v\rangle_{L_2(\Omega)} = \sum_{i=1}^N\int_\Omega u_iv_i\,dx$; $H(\Omega)$ is the closure of $C_0^\infty(\Omega) \times \dots \times C_0^\infty(\Omega)$ with $(u, v)_{H(\Omega)} = \sum_{i=1}^N\langle D_iu, D_iv\rangle_{L_2}$.
Set $b(u, w, v) := \sum_{i=1}^N\int_\Omega u_iD_iw\cdot v\,dx$ and write the Navier-Stokes equation in weak form:
$\eta(u, v)_{H(\Omega)} + b(u, u, v) = \langle f, v\rangle_{L_2(\Omega)} \quad \forall v \in H(\Omega).$
Iterative method. Let $y_0 \in H(\Omega)$ be arbitrary; $\beta > 0$ fixed; $\epsilon_0 > 0$ a tolerance. At $k \ge 0$, solve
$\Delta p_k = \operatorname{div}y_k$ in $\Omega$, $\qquad \frac{\partial p_k}{\partial n} = 0$ on $\partial\Omega$.
N-dimensional steady-state Navier-Stokes
Iterative method (continued). Set $u_k := y_k - \nabla p_k$. If $k > 1$ and $\|u_k - u_{k-1}\| < \epsilon_0$, then stop. Else find $y_{k+1} \in H(\Omega)$ from
$\beta\langle y_{k+1} - u_k, v\rangle_{L_2(\Omega)} + \eta(y_{k+1}, v)_{H(\Omega)} + b(y_{k+1}, u_k, v) = \langle f, v\rangle_{L_2(\Omega)} \quad \forall v \in H(\Omega),$
set $k := k + 1$ and repeat.
At each step $k \ge 0$, one step of PRNK can be applied to find $y_{k+1}$. If there exists $C > 0$ such that $\eta^2 > C\|f\|_{H(\Omega)}^2$, then the iterative method converges.
Further information
For more details, please refer to the works of the group P. K. Anh, C. V. Chung and V. T. Dzung:
[1] "Parallel hybrid methods for a finite family of relatively nonexpansive mappings", Numerical Functional Analysis and Optimization, (2013), DOI: 10.1080/01630563.2013.830127.
[2] "Parallel iteratively regularized Gauss-Newton method for systems of nonlinear ill-posed equations", International Journal of Computer Mathematics, (2013), DOI: 10.1080/00207160.2013.782399.
[3] "Parallel regularized Newton method for nonlinear ill-posed equations", Numer. Algor., 2011, Vol. 58, pp. 379-398.
[4] "Parallel iterative regularization algorithms for large overdetermined linear systems", Int. J. Comput. Methods, 2010, Vol. 7, pp. 525-537.
[5] "Parallel iterative regularization methods for solving systems of ill-posed equations", Appl. Math. Comput., 2009, Vol. 212, pp. 542-550.
and in some papers in local journals.
THANKS FOR YOUR ATTENTION!