An Iteratively Regularized Projection Method for Nonlinear Ill-posed Problems


Int. J. Contemp. Math. Sciences, Vol. 5, 2010, no. 52, 2547 - 2565

An Iteratively Regularized Projection Method for Nonlinear Ill-posed Problems

Santhosh George
Department of Mathematical and Computational Sciences
National Institute of Technology Karnataka, Surathkal, India

Atef Ibrahim Elmahdy
Department of Mathematical and Computational Sciences
National Institute of Technology Karnataka, Surathkal, India

Abstract

An iterative regularization method in the setting of a finite dimensional subspace $X_h$ of the real Hilbert space $X$ is considered for obtaining a stable approximate solution to nonlinear ill-posed operator equations $F(x) = y$, where $F : D(F) \subseteq X \to X$ is a nonlinear monotone operator on $X$. We assume that only noisy data $y^\delta$ with $\|y - y^\delta\| \le \delta$ are available. Under the assumption that the Fréchet derivative $F'$ of $F$ is Lipschitz continuous, a choice of the regularization parameter using an adaptive selection of the parameter and a stopping rule for the iteration index using a majorizing sequence are presented. We prove that, under a general source condition on $x_0 - \hat{x}$, the error $\|x^{h,\delta}_{n,\alpha} - \hat{x}\|$ between the regularized approximation $x^{h,\delta}_{n,\alpha}$ (with $x^{h,\delta}_{0,\alpha} := P_h x_0$, where $P_h$ is the orthogonal projection onto $X_h$) and the solution $\hat{x}$ is of optimal order. The results of computational experiments are provided, which show the reliability of our method.

Mathematics Subject Classification: 65J20, 65J15, 47J06, 47J35

Keywords: Nonlinear ill-posed operator, Monotone operator, Majorizing sequence, Regularized projection method

1 Introduction

Let $F : D(F) \subseteq X \to X$ be a nonlinear monotone operator defined on a real Hilbert space $X$ with inner product $\langle \cdot,\cdot\rangle$ and norm $\|\cdot\|$. Recall that $F$ is a monotone operator if
$$\langle F(x_2) - F(x_1), x_2 - x_1\rangle \ge 0, \qquad \forall x_1, x_2 \in D(F) \subseteq X.$$
We consider the problem of solving the nonlinear ill-posed operator equation
$$F(x) = y \tag{1}$$
approximately when the data $y$ is not known exactly. Further, we assume that $y^\delta \in X$ is the available noisy data with
$$\|y - y^\delta\| \le \delta \tag{2}$$
and that (1) has a solution $\hat{x}$. Equation (1) is ill-posed in the sense that the Fréchet derivative $F'(\cdot)$ is not boundedly invertible (see [9], page 26). Nonlinear ill-posed problems arise in a number of applications (see [4, 5, 9]).

Since (1) is ill-posed, one has to replace it by a nearby equation whose solution is less sensitive to perturbations in the right-hand side $y$. This replacement is known as regularization. A well-known method for regularizing (1) when $F$ is monotone is the method of Lavrentiev regularization (see [12]). In this method the approximation $x^\delta_\alpha$ is obtained by solving the singularly perturbed operator equation
$$F(x) + \alpha(x - x_0) = y^\delta. \tag{3}$$
In [2], George and Elmahdy considered the iterative regularization method
$$x^\delta_{n+1,\alpha} = x^\delta_{n,\alpha} - (F'(x_0) + \alpha I)^{-1}\bigl(F(x^\delta_{n,\alpha}) - y^\delta + \alpha(x^\delta_{n,\alpha} - x_0)\bigr), \tag{4}$$
where $x^\delta_{0,\alpha} := x_0$, and proved that $(x^\delta_{n,\alpha})$ converges to the unique solution $x^\delta_\alpha$ of (3) under the following assumptions.

Assumption 1.1 There exists $r_0 > 0$ such that $B_{r_0}(\hat{x}) \subseteq D(F)$ and $F$ is Fréchet differentiable at all $x \in B_{r_0}(\hat{x})$.

Assumption 1.2 There exists a continuous, strictly monotonically increasing function $\varphi : (0, a] \to (0, \infty)$ with $a \ge \|F'(\hat{x})\|$ satisfying $\lim_{\lambda \to 0} \varphi(\lambda) = 0$, and a vector $v \in X$ with $\|v\| \le 1$ such that $x_0 - \hat{x} = \varphi(F'(\hat{x}))v$ and
$$\sup_{\lambda \ge 0} \frac{\alpha\varphi(\lambda)}{\lambda + \alpha} \le c_\varphi \varphi(\alpha), \qquad \alpha \in (0, a].$$

Assumption 1.3 There exists a constant $k_0 > 0$ such that for every $x, u \in B_{r_0}(\hat{x})$ and $v \in X$, there exists an element $\Phi(x, u, v) \in X$ satisfying
$$[F'(x) - F'(u)]v = F'(u)\Phi(x, u, v), \qquad \|\Phi(x, u, v)\| \le k_0\|v\|\|x - u\|$$
for all $x, u \in B_{r_0}(\hat{x})$ and $v \in X$.

REMARK 1.4 It can be seen that the functions
$$\varphi(\lambda) = \lambda^\nu, \quad \lambda > 0,$$
for $0 < \nu \le 1$, and
$$\varphi(\lambda) = \begin{cases} \bigl(\ln\frac{1}{\lambda}\bigr)^{-p}, & 0 < \lambda \le e^{-(p+1)} \\ 0, & \text{otherwise} \end{cases}$$
for $p \ge 0$ satisfy the above assumption (see [10]).

The convergence analysis in [2], as well as in this paper, is based on majorizing sequences. Recall (see [1]) that a nonnegative sequence $(t_n)$ is said to be a majorizing sequence of a sequence $(x_n)$ in $X$ if
$$\|x_{n+1} - x_n\| \le t_{n+1} - t_n, \qquad \forall n \ge 0.$$
In applications, one looks for a sequence $(x^{h,\delta}_{n,\alpha})$ in a finite dimensional subspace $X_h$ of $X$ such that $x^{h,\delta}_{n,\alpha} \to x^\delta_\alpha$ as $h \to 0$ and $n \to \infty$.

After providing some preparatory results in Section 2, in Section 3 we consider an iteratively regularized projection method for obtaining a sequence $(x^{h,\delta}_{n,\alpha})$ in a finite dimensional subspace $X_h$ of $X$ and prove that $x^{h,\delta}_{n,\alpha}$ converges to $x^\delta_\alpha$. Also in Section 3 we obtain an estimate for $\|x^{h,\delta}_{n,\alpha} - x^\delta_\alpha\|$. Using an error estimate for $\|x^\delta_\alpha - \hat{x}\|$ (see [2, 12]), we obtain an error estimate for $\|x^{h,\delta}_{n,\alpha} - \hat{x}\|$ in Section 4. The error analysis for the order optimal result, using an adaptive selection of the parameter $\alpha$ and a stopping rule based on a majorizing sequence, is also given in Section 4. Implementation of the adaptive choice of the parameter and the choice of the stopping rule are given in Section 5. Examples and the results of computational experiments are given in Section 6. Finally, the paper ends with some concluding remarks in Section 7.

2 Preparatory Results

For proving the results in [2], as well as the results in this paper, we use the following lemma on majorization, which is a reformulation of a lemma in [1].

LEMMA 2.1 Let $(t_n)$ be a majorizing sequence for $(x_n)$ in $X$. If $\lim_{n\to\infty} t_n = t^*$, then $x^* = \lim_{n\to\infty} x_n$ exists and
$$\|x^* - x_n\| \le t^* - t_n, \qquad \forall n \ge 0. \tag{5}$$

Let $(\tilde t_n)$, $n \ge 0$, be defined iteratively by $\tilde t_0 = 0$, $\tilde t_1 = \eta$,
$$\tilde t_{n+1} = \tilde t_n + \frac{k_0\eta}{(1-r)}(\tilde t_n - \tilde t_{n-1}), \tag{6}$$
where $r \in [0, 1)$.

LEMMA 2.2 ([2], Lemma 2.2) Assume there exist nonnegative numbers $k_0$, $\eta$ and $r \in [0, 1)$ such that
$$\frac{k_0\eta}{(1-r)} \le r. \tag{7}$$
Then the sequence $(\tilde t_n)$ defined in (6) is increasing, bounded above by $\tilde t := \frac{\eta}{1-r}$, and converges to some $t^*$ such that $0 < t^* \le \frac{\eta}{1-r}$. Moreover, for $n \ge 0$,
$$0 \le \tilde t_{n+1} - \tilde t_n \le r(\tilde t_n - \tilde t_{n-1}) \le r^n\eta, \tag{8}$$
and
$$t^* - \tilde t_n \le \frac{r^n\eta}{1-r}. \tag{9}$$

The following lemma, based on Assumption 1.3, will be used in due course.

LEMMA 2.3 ([2], Lemma 2.3) For $u, v, x_0 \in B_{r_0}(\hat{x})$,
$$F(v) - F(u) - F'(x_0)(v - u) = F'(x_0)\int_0^1 \Phi(u + t(v - u), x_0, v - u)\,dt.$$

Hereafter we assume that $\|x_0 - \hat{x}\| \le \rho$ and
$$\frac{k_0}{2}\rho^2 + \rho + \frac{\delta}{\alpha} \le \eta \le \min\Bigl\{\frac{r(1-r)}{k_0},\, r_0(1-r)\Bigr\}. \tag{10}$$

THEOREM 2.4 ([2], Theorem 2.4) Suppose (7) holds. Let the assumptions in Lemma 2.2, with $\eta$ as in (10), and Assumption 1.3 be satisfied. Then the sequence $(x^\delta_{n,\alpha})$ defined in (4) is well defined and $x^\delta_{n,\alpha} \in B_{\tilde t}(x_0)$ for all $n \ge 0$. Further, $(x^\delta_{n,\alpha})$ is a Cauchy sequence in $B_{\tilde t}(x_0)$ and hence converges to $x^\delta_\alpha \in \overline{B_{\tilde t}(x_0)}$, and $F(x^\delta_\alpha) + \alpha(x^\delta_\alpha - x_0) = y^\delta$. Moreover, the following estimates hold for all $n \ge 0$:
$$\|x^\delta_{n+1,\alpha} - x^\delta_{n,\alpha}\| \le \tilde t_{n+1} - \tilde t_n, \tag{11}$$
and
$$\|x^\delta_{n,\alpha} - x^\delta_\alpha\| \le t^* - \tilde t_n \le \frac{r^n\eta}{(1-r)}. \tag{12}$$
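For a quick numerical illustration of Lemma 2.2, the following minimal Python sketch (our own illustration; the values of $k_0$, $\eta$ and $r$ are arbitrary choices satisfying (7), not values from the paper) generates the sequence (6) and confirms that it stays below the bound $\eta/(1-r)$.

```python
# Minimal sketch: the majorizing sequence (6) under condition (7), k0*eta/(1-r) <= r.
# The chosen values of k0, eta and r are illustrative only.
def majorizing_sequence(k0, eta, r, n_max=50):
    assert 0 <= r < 1 and k0 * eta / (1 - r) <= r, "condition (7) violated"
    t = [0.0, eta]                                   # t_0 = 0, t_1 = eta
    for n in range(1, n_max):
        t.append(t[n] + (k0 * eta / (1 - r)) * (t[n] - t[n - 1]))
    return t

t = majorizing_sequence(k0=1.0, eta=0.2, r=0.5)
print(t[-1], 0.2 / (1 - 0.5))                        # the limit t* stays below eta/(1-r) = 0.4
```

The increments $\tilde t_{n+1} - \tilde t_n$ form a geometric sequence with ratio $k_0\eta/(1-r) \le r < 1$, which is exactly why the sequence converges and why the estimates (8) and (9) hold.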

3 Iteratively Regularized Projection Method

Let $H$ be a bounded subset of positive reals such that zero is a limit point of $H$, and let $\{P_h\}_{h \in H}$ be a family of orthogonal projections from $X$ into itself. Let
$$\Gamma_h := \|(I - P_h)F'(x_0)\| \tag{13}$$
and
$$\gamma_h := \|F'(P_h x_0)(I - P_h)\|. \tag{14}$$
We assume that
$$b_h := \|(I - P_h)x_0\| \to 0 \tag{15}$$
as $h \to 0$. The above assumption is satisfied if $P_h \to I$ pointwise.

Let $(\tilde t_{n,h})$, $n \ge 0$, be defined iteratively by $\tilde t_{0,h} = 0$, $\tilde t_{1,h} = \eta_h$,
$$\tilde t_{n+1,h} = \tilde t_{n,h} + \Bigl(1 + \frac{\gamma_h}{\alpha}\Bigr)\frac{k_0\eta_h}{(1-r_h)}(\tilde t_{n,h} - \tilde t_{n-1,h}), \tag{16}$$
where $k_0$, $\alpha$ and $r_h \in [0, 1)$ are nonnegative numbers with $(1 + \frac{\gamma_h}{\alpha})\frac{k_0}{(1-r_h)}\eta_h \le r_h$.

We need the following lemma; its proof is analogous to the proof of Lemma 2.2 in [2], so we omit it.

LEMMA 3.1 Assume there exist nonnegative numbers $k_0$, $\alpha$ and $r_h \in [0, 1)$ such that
$$\Bigl(1 + \frac{\gamma_h}{\alpha}\Bigr)\frac{k_0}{(1-r_h)}\eta_h \le r_h. \tag{17}$$
Then the sequence $(\tilde t_{n,h})$ defined in (16) is increasing, bounded above by $\tilde t_h := \frac{\eta_h}{1-r_h}$, and converges to some $t^*_h$ such that $0 < t^*_h \le \frac{\eta_h}{1-r_h}$. Moreover, for $n \ge 0$,
$$0 \le \tilde t_{n+1,h} - \tilde t_{n,h} \le r_h(\tilde t_{n,h} - \tilde t_{n-1,h}) \le r_h^n\eta_h, \tag{18}$$
and
$$t^*_h - \tilde t_{n,h} \le \frac{r_h^n}{1-r_h}\eta_h. \tag{19}$$

Let
$$x^{h,\delta}_{n+1,\alpha} := x^{h,\delta}_{n,\alpha} - (P_h F'(P_h x_0) + \alpha I)^{-1}P_h\bigl(F(x^{h,\delta}_{n,\alpha}) - y^\delta + \alpha(x^{h,\delta}_{n,\alpha} - x_0)\bigr), \tag{20}$$
where $x^{h,\delta}_{0,\alpha} := P_h x_0$. Now we shall prove that the sequence $(\tilde t_{n,h})$ is a majorizing sequence of the sequence $(x^{h,\delta}_{n,\alpha})$. Let
$$\Bigl(1 + \frac{\gamma_h}{\alpha}\Bigr)\Bigl(\frac{k_0}{2}(b_h + \rho)^2 + b_h + \rho\Bigr) + \frac{\delta}{\alpha} \le \eta_h \le \min\Bigl\{\frac{r_h(1-r_h)}{k_0(1 + \gamma_h/\alpha)},\, r_0(1-r_h)\Bigr\}. \tag{21}$$
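Before turning to the convergence analysis, here is a minimal finite-dimensional sketch of the iteration (20); everything in it is an assumption of the sketch rather than part of the paper: $X$ is replaced by $\mathbb{R}^N$, `P` is a matrix representing the orthogonal projection $P_h$, `Fprime0` a matrix representing $F'(P_h x_0)$, `F` a callable for the discretized nonlinear operator, and `n_steps` stands in for the stopping index supplied by the majorizing sequence.

```python
import numpy as np

def projected_iteration(F, Fprime0, P, x0, y_delta, alpha, n_steps):
    """Sketch of (20): x_{n+1} = x_n - (P F'(P x0) + alpha I)^{-1} P (F(x_n) - y_delta + alpha (x_n - x0))."""
    N = len(x0)
    A = P @ Fprime0 + alpha * np.eye(N)   # fixed linear operator, assembled once
    x = P @ x0                            # x_{0,alpha} = P_h x_0
    for _ in range(n_steps):
        residual = F(x) - y_delta + alpha * (x - x0)
        x = x - np.linalg.solve(A, P @ residual)
    return x
```

Note that the linear operator is assembled only once and only the right-hand side changes from step to step; the iterates remain in the range of $P_h$, as required.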

THEOREM 3.2 Let the assumptions in Lemma 3.1, with $\eta_h$ as in (21), and Assumption 1.3 be satisfied. Then the sequence $(\tilde t_{n,h})$ defined in (16) is a majorizing sequence of the sequence $(x^{h,\delta}_{n,\alpha})$ defined in (20), and $x^{h,\delta}_{n,\alpha} \in B_{\tilde t_h}(P_h x_0)$ for all $n \ge 0$.

Proof. Let $G(x) = x - R_\alpha(P_h x_0)^{-1}[F(x) - y^\delta + \alpha(x - x_0)]$, where $R_\alpha(P_h x_0)^{-1} = (P_h F'(P_h x_0)P_h + \alpha P_h)^{-1}$. Then, since $R_\alpha(P_h x_0)^{-1} = R_\alpha(P_h x_0)^{-1}P_h = P_h R_\alpha(P_h x_0)^{-1}$, for $u, v \in B_{\tilde t_h}(P_h x_0)$,
\begin{align*}
G(u) - G(v) &= u - v - R_\alpha(P_h x_0)^{-1}[F(u) - y^\delta + \alpha(u - x_0)] + R_\alpha(P_h x_0)^{-1}[F(v) - y^\delta + \alpha(v - x_0)]\\
&= R_\alpha(P_h x_0)^{-1}[R_\alpha(P_h x_0)(u - v) - (F(u) - F(v))] + \alpha R_\alpha(P_h x_0)^{-1}(v - u)\\
&= R_\alpha(P_h x_0)^{-1}[F'(P_h x_0)P_h(u - v) - (F(u) - F(v)) + \alpha(u - v)] + \alpha R_\alpha(P_h x_0)^{-1}(v - u)\\
&= R_\alpha(P_h x_0)^{-1}[F'(P_h x_0)P_h(u - v) - (F(u) - F(v))].
\end{align*}
Now, since $G(x^{h,\delta}_{n,\alpha}) = x^{h,\delta}_{n+1,\alpha}$ and $P_h(x^{h,\delta}_{n-1,\alpha}) = x^{h,\delta}_{n-1,\alpha}$, we have
\begin{align*}
x^{h,\delta}_{n+1,\alpha} - x^{h,\delta}_{n,\alpha} &= G(x^{h,\delta}_{n,\alpha}) - G(x^{h,\delta}_{n-1,\alpha})\\
&= R_\alpha(P_h x_0)^{-1}[F'(P_h x_0)(x^{h,\delta}_{n,\alpha} - x^{h,\delta}_{n-1,\alpha}) - (F(x^{h,\delta}_{n,\alpha}) - F(x^{h,\delta}_{n-1,\alpha}))]\\
&= R_\alpha(P_h x_0)^{-1}F'(P_h x_0)\int_0^1 \Phi(x^{h,\delta}_{n,\alpha} + t(x^{h,\delta}_{n-1,\alpha} - x^{h,\delta}_{n,\alpha}),\, P_h x_0,\, x^{h,\delta}_{n-1,\alpha} - x^{h,\delta}_{n,\alpha})\,dt\\
&= R_\alpha(P_h x_0)^{-1}[F'(P_h x_0)P_h + F'(P_h x_0)(I - P_h)]\int_0^1 \Phi(x^{h,\delta}_{n,\alpha} + t(x^{h,\delta}_{n-1,\alpha} - x^{h,\delta}_{n,\alpha}),\, P_h x_0,\, x^{h,\delta}_{n-1,\alpha} - x^{h,\delta}_{n,\alpha})\,dt,
\end{align*}
where the last but one step follows from Lemma 2.3. So, by Assumption 1.3 and the relation
$$\|R_\alpha(P_h x_0)^{-1}[F'(P_h x_0)P_h + F'(P_h x_0)(I - P_h)]\| \le 1 + \frac{\gamma_h}{\alpha}, \tag{22}$$
we have
$$\|x^{h,\delta}_{n+1,\alpha} - x^{h,\delta}_{n,\alpha}\| \le \Bigl(1 + \frac{\gamma_h}{\alpha}\Bigr)k_0\int_0^1\|x^{h,\delta}_{n,\alpha} + t(x^{h,\delta}_{n-1,\alpha} - x^{h,\delta}_{n,\alpha}) - P_h x_0\|\,dt\;\|x^{h,\delta}_{n,\alpha} - x^{h,\delta}_{n-1,\alpha}\|. \tag{23}$$

Now we shall prove that the sequence $(\tilde t_{n,h})$ defined in (16) is a majorizing sequence of the sequence $(x^{h,\delta}_{n,\alpha})$ and that $x^{h,\delta}_{n,\alpha} \in B_{\tilde t_h}(P_h x_0)$ for all $n \ge 0$. Note that $F(\hat{x}) = y$, so
\begin{align*}
\|x^{h,\delta}_{1,\alpha} - P_h x_0\| &= \|(P_h F'(P_h x_0) + \alpha I)^{-1}P_h(F(P_h x_0) - y^\delta)\|\\
&= \|(P_h F'(P_h x_0) + \alpha I)^{-1}P_h(F(P_h x_0) - y + y - y^\delta)\|\\
&= \|(P_h F'(P_h x_0) + \alpha I)^{-1}P_h(F(P_h x_0) - F(\hat{x}) + y - y^\delta)\|\\
&= \|(P_h F'(P_h x_0) + \alpha I)^{-1}P_h(F(P_h x_0) - F(\hat{x}) - F'(P_h x_0)(P_h x_0 - \hat{x}) + F'(P_h x_0)(P_h x_0 - \hat{x}) + y - y^\delta)\|\\
&\le \|(P_h F'(P_h x_0) + \alpha I)^{-1}P_h(F(P_h x_0) - F(\hat{x}) - F'(P_h x_0)(P_h x_0 - \hat{x}))\|\\
&\quad + \|(P_h F'(P_h x_0) + \alpha I)^{-1}P_h F'(P_h x_0)(P_h x_0 - \hat{x})\| + \|(P_h F'(P_h x_0) + \alpha I)^{-1}P_h(y - y^\delta)\|\\
&\le \Bigl\|(P_h F'(P_h x_0) + \alpha I)^{-1}P_h F'(P_h x_0)\int_0^1 \Phi(\hat{x} + t(P_h x_0 - \hat{x}),\, P_h x_0,\, P_h x_0 - \hat{x})\,dt\Bigr\|\\
&\quad + \|(P_h F'(P_h x_0) + \alpha I)^{-1}P_h F'(P_h x_0)(P_h x_0 - \hat{x})\| + \frac{\delta}{\alpha}\\
&\le \Bigl\|(P_h F'(P_h x_0) + \alpha I)^{-1}P_h[F'(P_h x_0)P_h + F'(P_h x_0)(I - P_h)]\int_0^1 \Phi(\hat{x} + t(P_h x_0 - \hat{x}),\, P_h x_0,\, P_h x_0 - \hat{x})\,dt\Bigr\|\\
&\quad + \|(P_h F'(P_h x_0) + \alpha I)^{-1}P_h[F'(P_h x_0)P_h + F'(P_h x_0)(I - P_h)](P_h x_0 - \hat{x})\| + \frac{\delta}{\alpha}\\
&\le \Bigl(1 + \frac{\gamma_h}{\alpha}\Bigr)\Bigl(\frac{k_0}{2}\|P_h x_0 - \hat{x}\|^2 + \|P_h x_0 - \hat{x}\|\Bigr) + \frac{\delta}{\alpha}\\
&\le \Bigl(1 + \frac{\gamma_h}{\alpha}\Bigr)\Bigl(\frac{k_0}{2}(b_h + \rho)^2 + b_h + \rho\Bigr) + \frac{\delta}{\alpha} \le \eta_h.
\end{align*}
The last but one step follows from Assumption 1.3, (22) and the inequality $\|P_h x_0 - \hat{x}\| \le b_h + \rho$. So $\|x^{h,\delta}_{1,\alpha} - P_h x_0\| \le \tilde t_{1,h} - \tilde t_{0,h}$. Assume that
$$\|x^{h,\delta}_{i+1,\alpha} - x^{h,\delta}_{i,\alpha}\| \le \tilde t_{i+1,h} - \tilde t_{i,h}, \qquad i \le k, \tag{24}$$
for some $k$. Then
\begin{align*}
\|x^{h,\delta}_{k+1,\alpha} - P_h x_0\| &\le \|x^{h,\delta}_{k+1,\alpha} - x^{h,\delta}_{k,\alpha}\| + \|x^{h,\delta}_{k,\alpha} - x^{h,\delta}_{k-1,\alpha}\| + \cdots + \|x^{h,\delta}_{1,\alpha} - P_h x_0\|\\
&\le \tilde t_{k+1,h} - \tilde t_{k,h} + \tilde t_{k,h} - \tilde t_{k-1,h} + \cdots + \tilde t_{1,h} - \tilde t_{0,h} = \tilde t_{k+1,h} \le \tilde t_h.
\end{align*}

So $x^{h,\delta}_{i+1,\alpha} \in B_{\tilde t_h}(P_h x_0)$ for all $i \le k$, and hence $x^{h,\delta}_{k+1,\alpha} + t(x^{h,\delta}_{k,\alpha} - x^{h,\delta}_{k+1,\alpha}) \in B_{\tilde t_h}(P_h x_0)$. Therefore, by (23) and (24), we have
\begin{align*}
\|x^{h,\delta}_{k+2,\alpha} - x^{h,\delta}_{k+1,\alpha}\| &\le k_0\Bigl(1 + \frac{\gamma_h}{\alpha}\Bigr)\tilde t_h\|x^{h,\delta}_{k+1,\alpha} - x^{h,\delta}_{k,\alpha}\|\\
&\le \Bigl(1 + \frac{\gamma_h}{\alpha}\Bigr)\frac{k_0\eta_h}{(1-r_h)}(\tilde t_{k+1,h} - \tilde t_{k,h}) = \tilde t_{k+2,h} - \tilde t_{k+1,h}.
\end{align*}
Thus, by induction, $\|x^{h,\delta}_{n+1,\alpha} - x^{h,\delta}_{n,\alpha}\| \le \tilde t_{n+1,h} - \tilde t_{n,h}$ for all $n \ge 0$, and hence $(\tilde t_{n,h})$, $n \ge 0$, is a majorizing sequence of the sequence $(x^{h,\delta}_{n,\alpha})$. In particular, $\|x^{h,\delta}_{n,\alpha} - P_h x_0\| \le \tilde t_{n,h} \le \tilde t_h$, i.e., $x^{h,\delta}_{n,\alpha} \in B_{\tilde t_h}(P_h x_0)$ for all $n \ge 0$. Hence
$$\|x^{h,\delta}_{n,\alpha} - P_h x_0\| \le \tilde t_h \le \frac{\eta_h}{1-r_h}. \tag{25}$$
This completes the proof.

Let
$$\bar r := \max\{r, r_h\} \tag{26}$$
and
$$q := \frac{1}{2}[2\bar r + k_0 b_h]. \tag{27}$$
Note that for $0 < b_h < \frac{2(1-\bar r)}{k_0}$, $q < 1$.

THEOREM 3.3 Let $x^{h,\delta}_{n,\alpha}$ be as in (20) and $x^\delta_{n,\alpha}$ be as in (4). Let the assumptions in Theorem 2.4 and Theorem 3.2 hold. Then we have the following estimate:
$$\|x^{h,\delta}_{n,\alpha} - x^\delta_{n,\alpha}\| \le q^n b_h + \frac{(\Gamma_h + k_0\|F'(x_0)\|b_h)}{\alpha}\frac{q^n}{(q - r_h)}\eta_h.$$

Proof. Note that
\begin{align*}
x^{h,\delta}_{n,\alpha} - x^\delta_{n,\alpha} &= x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha} - (P_h F'(P_h x_0) + \alpha I)^{-1}P_h\bigl(F(x^{h,\delta}_{n-1,\alpha}) - y^\delta + \alpha(x^{h,\delta}_{n-1,\alpha} - x_0)\bigr)\\
&\quad + (F'(x_0) + \alpha I)^{-1}\bigl(F(x^\delta_{n-1,\alpha}) - y^\delta + \alpha(x^\delta_{n-1,\alpha} - x_0)\bigr)\\
&= x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha} - \bigl[(P_h F'(P_h x_0) + \alpha I)^{-1}P_h - (F'(x_0) + \alpha I)^{-1}\bigr]\bigl(F(x^{h,\delta}_{n-1,\alpha}) - y^\delta + \alpha(x^{h,\delta}_{n-1,\alpha} - x_0)\bigr)\\
&\quad - (F'(x_0) + \alpha I)^{-1}\bigl[F(x^{h,\delta}_{n-1,\alpha}) - F(x^\delta_{n-1,\alpha}) + \alpha(x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha})\bigr]\\
&= (F'(x_0) + \alpha I)^{-1}\bigl[F'(x_0)(x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha}) - (F(x^{h,\delta}_{n-1,\alpha}) - F(x^\delta_{n-1,\alpha}))\bigr]\\
&\quad - (F'(x_0) + \alpha I)^{-1}\bigl[F'(x_0)P_h - P_h F'(P_h x_0)P_h\bigr](P_h F'(P_h x_0) + \alpha I)^{-1}P_h\bigl(F(x^{h,\delta}_{n-1,\alpha}) - y^\delta + \alpha(x^{h,\delta}_{n-1,\alpha} - x_0)\bigr)\\
&=: \Gamma_1 - \Gamma_2, \tag{28}
\end{align*}
where
$$\Gamma_1 = (F'(x_0) + \alpha I)^{-1}\bigl[F'(x_0)(x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha}) - (F(x^{h,\delta}_{n-1,\alpha}) - F(x^\delta_{n-1,\alpha}))\bigr]$$
and, since by (20) $(P_h F'(P_h x_0) + \alpha I)^{-1}P_h\bigl(F(x^{h,\delta}_{n-1,\alpha}) - y^\delta + \alpha(x^{h,\delta}_{n-1,\alpha} - x_0)\bigr) = x^{h,\delta}_{n-1,\alpha} - x^{h,\delta}_{n,\alpha}$,
$$\Gamma_2 = (F'(x_0) + \alpha I)^{-1}\bigl[F'(x_0) - P_h F'(x_0) + P_h F'(x_0) - P_h F'(P_h x_0)\bigr](x^{h,\delta}_{n-1,\alpha} - x^{h,\delta}_{n,\alpha}).$$
Note that, by Lemma 2.3,
\begin{align*}
\|\Gamma_1\| &\le \|(F'(x_0) + \alpha I)^{-1}F'(x_0)\|\int_0^1\|\Phi(x^{h,\delta}_{n-1,\alpha} + t(x^\delta_{n-1,\alpha} - x^{h,\delta}_{n-1,\alpha}),\, x_0,\, x^\delta_{n-1,\alpha} - x^{h,\delta}_{n-1,\alpha})\|\,dt\\
&\le k_0\int_0^1\bigl[t\|x_0 - x^\delta_{n-1,\alpha}\| + (1-t)\|P_h x_0 - x^{h,\delta}_{n-1,\alpha}\| + (1-t)\|P_h x_0 - x_0\|\bigr]\,dt\;\|x^\delta_{n-1,\alpha} - x^{h,\delta}_{n-1,\alpha}\|\\
&\le \frac{k_0}{2}\Bigl[\frac{\eta}{1-r} + \frac{\eta_h}{1-r_h} + b_h\Bigr]\|x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha}\|\\
&\le \frac{1}{2}[r + r_h + k_0 b_h]\|x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha}\|\\
&\le \frac{1}{2}[2\bar r + k_0 b_h]\|x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha}\| = q\|x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha}\|, \tag{29}
\end{align*}
where we used (7), (17), (26) and (27). Further, by Assumption 1.3 and (13),
\begin{align*}
\|\Gamma_2\| &= \|(F'(x_0) + \alpha I)^{-1}\bigl[(I - P_h)F'(x_0) - P_h(F'(P_h x_0) - F'(x_0))\bigr](x^{h,\delta}_{n-1,\alpha} - x^{h,\delta}_{n,\alpha})\|\\
&\le \|(F'(x_0) + \alpha I)^{-1}(I - P_h)F'(x_0)(x^{h,\delta}_{n-1,\alpha} - x^{h,\delta}_{n,\alpha})\| + \|(F'(x_0) + \alpha I)^{-1}P_h F'(x_0)\Phi(P_h x_0, x_0, x^{h,\delta}_{n-1,\alpha} - x^{h,\delta}_{n,\alpha})\|\\
&\le \frac{(\Gamma_h + k_0\|F'(x_0)\|b_h)}{\alpha}\|x^{h,\delta}_{n-1,\alpha} - x^{h,\delta}_{n,\alpha}\|. \tag{30}
\end{align*}

Therefore, by (28), (29) and (30), we have
\begin{align*}
\|x^{h,\delta}_{n,\alpha} - x^\delta_{n,\alpha}\| &\le q\|x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha}\| + \frac{\Gamma_h + k_0\|F'(x_0)\|b_h}{\alpha}\|x^{h,\delta}_{n,\alpha} - x^{h,\delta}_{n-1,\alpha}\|\\
&\le q^n b_h + \frac{\Gamma_h + k_0\|F'(x_0)\|b_h}{\alpha}\eta_h\,(r_h^{n-1} + q r_h^{n-2} + \cdots + q^{n-1})\\
&\le q^n b_h + \frac{(\Gamma_h + k_0\|F'(x_0)\|b_h)}{\alpha}\frac{q^n}{(q - r_h)}\eta_h.
\end{align*}
This completes the proof.

4 Error Bounds Under Source Conditions

It is known (cf. [12], Proposition 3.1) that
$$\|x^\delta_\alpha - x_\alpha\| \le \frac{\delta}{\alpha} \tag{31}$$
and (cf. [2], Theorem 3.1) that
$$\|x_\alpha - \hat{x}\| \le (k_0 r_0 + 1)c_\varphi\varphi(\alpha), \tag{32}$$
where $x_\alpha$ is the unique solution of $F(x) + \alpha(x - x_0) = y$. Combining the estimates in Theorem 2.4, Theorem 3.3, (31) and (32), we obtain the following theorem.

THEOREM 4.1 Let $x^{h,\delta}_{n,\alpha}$ be as in (20) and let the assumptions in Theorem 2.4 and Theorem 3.3 be satisfied. Then we have
$$\|x^{h,\delta}_{n,\alpha} - \hat{x}\| \le q^n b_h + \frac{(\Gamma_h + k_0\|F'(x_0)\|b_h)}{\alpha}\frac{q^n}{(q - r_h)}\eta_h + \frac{r^n\eta}{1-r} + \frac{\delta}{\alpha} + (k_0 r_0 + 1)c_\varphi\varphi(\alpha). \tag{33}$$
Let
$$n_\delta := \min\{n : \max\{q^n, r^n\} \le \delta\} \tag{34}$$
and let
$$C := \max\Bigl\{b_h + \frac{(\Gamma_h + k_0\|F'(x_0)\|b_h)}{(q - r_h)}\eta_h + \frac{\eta}{1-r} + 1,\; (k_0 r_0 + 1)c_\varphi\Bigr\}. \tag{35}$$

THEOREM 4.2 Let $x^{h,\delta}_{n,\alpha}$ be as in (20) and let the assumptions in Theorem 2.4 and Theorem 3.3 be satisfied. Let $n_\delta$ be as in (34) and $C$ be as in (35). Then, for all $0 < \alpha \le 1$, we have
$$\|x^{h,\delta}_{n_\delta,\alpha} - \hat{x}\| \le C\Bigl(\varphi(\alpha) + \frac{\delta}{\alpha}\Bigr). \tag{36}$$
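Since $0 < q, r < 1$, the stopping index (34) can be computed in closed form as $n_\delta = \lceil \ln\delta/\ln(\max\{q, r\})\rceil$. A short helper (ours, purely for illustration; the sample values of $q$, $r$ and $\delta$ are arbitrary):

```python
import math

# Stopping index (34): n_delta = min{ n : max(q**n, r**n) <= delta }, for 0 < q, r < 1.
def stopping_index(q, r, delta):
    s = max(q, r)                                    # max(q**n, r**n) = s**n
    return max(0, math.ceil(math.log(delta) / math.log(s)))

print(stopping_index(q=0.6, r=0.5, delta=1e-3))      # 14, since 0.6**14 < 1e-3 <= 0.6**13
```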

4.1 A priori choice of the parameter

Note that the error $\varphi(\alpha) + \frac{\delta}{\alpha}$ in (36) is of optimal order if $\alpha_\delta := \alpha(\delta)$ satisfies $\alpha_\delta\varphi(\alpha_\delta) = \delta$. Now, using the function $\psi(\lambda) := \lambda\varphi^{-1}(\lambda)$, $0 < \lambda \le a$, we have $\delta = \alpha_\delta\varphi(\alpha_\delta) = \psi(\varphi(\alpha_\delta))$, so that $\alpha_\delta = \varphi^{-1}(\psi^{-1}(\delta))$. Hence by (36) we have the following.

THEOREM 4.3 Let $\psi(\lambda) := \lambda\varphi^{-1}(\lambda)$ for $0 < \lambda \le a$, and let the assumptions in Theorem 4.2 hold. For $\delta > 0$, let $\alpha := \alpha_\delta = \varphi^{-1}(\psi^{-1}(\delta))$ and let $n_\delta$ be as in (34). Then
$$\|x^{h,\delta}_{n_\delta,\alpha} - \hat{x}\| = O(\psi^{-1}(\delta)).$$

4.2 An adaptive choice of the parameter

In this subsection we present a parameter choice rule based on the adaptive method studied in [7, 11]. In practice, the regularization parameter $\alpha$ is selected from some finite set
$$D_M(\alpha) := \{\alpha_i = \mu^i\alpha_0,\; i = 0, 1, \dots, M\}, \tag{37}$$
where $\mu > 1$ and $M$ is such that $\alpha_M < 1 \le \alpha_{M+1}$. We choose $\alpha_0 := \delta$, because in general $\varphi(\lambda) = \lambda^\nu$, $0 < \nu \le 1$, and in this case the best possible error estimate is of order $O(\sqrt{\delta})$; from Theorem 4.3 it follows that such an accuracy cannot be guaranteed for $\alpha < \delta$. Let
$$n_M := \min\{n : \max\{q^n, r^n\} \le \delta\} \tag{38}$$
and let $x_i := x^{h,\delta}_{n_M,\alpha_i}$. The parameter choice strategy that we consider in this paper selects $\alpha = \alpha_i$ from $D_M(\alpha)$ and operates only with the corresponding $x_i$, $i = 0, 1, \dots, M$.

THEOREM 4.4 Assume that there exists $i \in \{0, 1, 2, \dots, M\}$ such that $\varphi(\alpha_i) \le \frac{\delta}{\alpha_i}$. Let the assumptions of Theorem 4.2 and Theorem 4.3 hold, and let
$$l := \max\Bigl\{i : \varphi(\alpha_i) \le \frac{\delta}{\alpha_i}\Bigr\} < M, \qquad k := \max\Bigl\{i : \|x_i - x_j\| \le 4C\frac{\delta}{\alpha_j},\; j = 0, 1, 2, \dots, i\Bigr\}. \tag{39}$$
Then $l \le k$ and $\|\hat{x} - x_k\| \le c\psi^{-1}(\delta)$, where $c = 6C\mu$.

Proof. To see that $l \le k$, it is enough to show that, for each $i \in \{1, 2, \dots, M\}$,
$$\varphi(\alpha_i) \le \frac{\delta}{\alpha_i} \implies \|x_i - x_j\| \le 4C\frac{\delta}{\alpha_j}, \qquad j = 0, 1, \dots, i.$$
For $j \le i$, by (36) we have
\begin{align*}
\|x_i - x_j\| &\le \|x_i - \hat{x}\| + \|\hat{x} - x_j\| \le C\Bigl(\varphi(\alpha_i) + \frac{\delta}{\alpha_i}\Bigr) + C\Bigl(\varphi(\alpha_j) + \frac{\delta}{\alpha_j}\Bigr)\\
&\le 2C\frac{\delta}{\alpha_i} + 2C\frac{\delta}{\alpha_j} \le 4C\frac{\delta}{\alpha_j}.
\end{align*}
Thus the relation $l \le k$ is proved. Next we observe that
$$\|\hat{x} - x_k\| \le \|\hat{x} - x_l\| + \|x_l - x_k\| \le C\Bigl(\varphi(\alpha_l) + \frac{\delta}{\alpha_l}\Bigr) + 4C\frac{\delta}{\alpha_l} \le 6C\frac{\delta}{\alpha_l}.$$
Now, since $\alpha_\delta \le \alpha_{l+1} \le \mu\alpha_l$, it follows that
$$\frac{\delta}{\alpha_l} \le \mu\frac{\delta}{\alpha_\delta} = \mu\varphi(\alpha_\delta) = \mu\psi^{-1}(\delta).$$
This completes the proof of the theorem.

5 Implementation of the Adaptive Choice Rule

In this section we provide an algorithm for determining a parameter fulfilling the balancing principle (39), together with a starting point for the iteration (20) approximating the unique solution $x^\delta_\alpha$ of (3). The choice of the starting point involves the following steps:

1. Choose $\alpha_0 = \delta$, $\mu > 1$ and $q < 1$.
2. Choose $x_0 \in D(F)$ such that $\|x_0 - \hat{x}\| \le \rho$ and
$$\Bigl(1 + \frac{\gamma_h}{\alpha_0}\Bigr)\Bigl(\frac{k_0}{2}(b_h + \rho)^2 + b_h + \rho\Bigr) + \frac{\delta}{\alpha_0} \le \eta_h \le \min\Bigl\{\frac{(1-r_h)r_h}{k_0(1 + \gamma_h/\alpha_0)},\, r_0(1-r_h)\Bigr\}.$$
3. Choose $n_M$ such that $n_M = \min\{n : \max\{q^n, r^n\} \le \delta\}$.

Finally, the adaptive algorithm associated with the choice of the parameter specified in Theorem 4.4 involves the following steps:

Algorithm
1. Set $i = 0$.
2. Solve for $x_i := x^{h,\delta}_{n_M,\alpha_i}$ using the iteration (20).
3. If $\|x_i - x_j\| > 4C\frac{\delta}{\alpha_j}$ for some $j \le i$, then take $k = i - 1$ and stop.
4. Set $i = i + 1$ and return to step 2.

6 Examples

In this section we consider some simple examples satisfying the assumptions made in the paper and present a few computed examples. We consider the operator $F : L^2[0,1] \to L^2[0,1]$ defined by (cf. [10], Example 6.1)
$$F(x)(s) = K^*K(x)(s) + f(s), \qquad x, f \in L^2[0,1],\; s \in [0,1], \tag{40}$$
where $K : L^2[0,1] \to L^2[0,1]$ is a compact linear operator such that the range of $K$, denoted by $R(K)$, is not closed in $L^2[0,1]$. Then the equation $F(x) = y$ is ill-posed, as $K$ is compact with non-closed range. The Fréchet derivative $F'(\cdot)$ of $F$ is given by
$$F'(x)z = K^*Kz, \qquad x, z \in L^2[0,1]. \tag{41}$$
So $F$ is monotone on $L^2[0,1]$. Further, for $x, y, z \in L^2[0,1]$,
$$[F'(x) - F'(y)]z = 0. \tag{42}$$
Hence Assumption 1.3 holds trivially. Again note that, since $\|\Phi(x, y, z)\| = 0 \le k_0\|z\|\|x - y\|$ for any $k_0 \ge 0$, we can choose $\eta_h$ large enough in step 2 above. Further, due to (41), the iterate $x^{h,\delta}_{m+1,\alpha}$ needs only one step to compute. This can be seen as follows:
$$x^{h,\delta}_{m+1,\alpha} = x^{h,\delta}_{m,\alpha} - (P_h F'(P_h x_0) + \alpha I)^{-1}P_h\bigl[F(x^{h,\delta}_{m,\alpha}) - y^\delta + \alpha(x^{h,\delta}_{m,\alpha} - x_0)\bigr],$$
i.e.,
\begin{align*}
(P_h F'(P_h x_0) + \alpha I)P_h x^{h,\delta}_{m+1,\alpha} &= (P_h F'(P_h x_0) + \alpha I)P_h x^{h,\delta}_{m,\alpha} - P_h\bigl[F(x^{h,\delta}_{m,\alpha}) - y^\delta + \alpha(x^{h,\delta}_{m,\alpha} - x_0)\bigr]\\
&= (P_h K^*K + \alpha I)P_h x^{h,\delta}_{m,\alpha} - P_h\bigl[K^*Kx^{h,\delta}_{m,\alpha} + f - y^\delta + \alpha(x^{h,\delta}_{m,\alpha} - x_0)\bigr]\\
&= P_h(y^\delta - f + \alpha x_0). \tag{43}
\end{align*}
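The adaptive algorithm above can be sketched in a few lines of Python. In this sketch, `solve_x(alpha)` is a user-supplied routine that returns $x_i = x^{h,\delta}_{n_M,\alpha_i}$ as a vector (for the operator (40) it reduces to the single linear solve (43), see also (44) below), `norm` is a stand-in for the $L^2$ norm, and $C$ and $\mu$ are assumed to be given; none of these names come from the paper.

```python
import numpy as np

def balancing_principle(solve_x, delta, C, mu=1.5, norm=np.linalg.norm):
    """Sketch of the adaptive algorithm: alpha_i = mu**i * delta; stop at the first i
    with ||x_i - x_j|| > 4*C*delta/alpha_j for some j <= i and return index k = i - 1."""
    alphas, xs = [], []
    i, alpha_i = 0, delta                           # alpha_0 = delta
    while alpha_i < 1.0 or i == 0:                  # the grid D_M ends once alpha reaches 1
        alphas.append(alpha_i)
        xs.append(solve_x(alpha_i))
        if any(norm(xs[i] - xs[j]) > 4 * C * delta / alphas[j] for j in range(i + 1)):
            return alphas[i - 1], xs[i - 1]         # balancing index k = i - 1
        i += 1
        alpha_i = (mu ** i) * delta
    return alphas[-1], xs[-1]                       # no rejection occurred: keep the last index
```

By Theorem 4.4, the returned pair $(\alpha_k, x_k)$ satisfies $\|\hat{x} - x_k\| \le 6C\mu\,\psi^{-1}(\delta)$.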

Now we give the details of implementing the algorithm of the above section. Let $(V_n)$ be a sequence of finite dimensional subspaces of $X$ and let $P_h$, $h = 1/n$, denote the orthogonal projection on $X$ with range $R(P_h) = V_n$. We assume that $\dim V_n = n + 1$ and $\|P_h x - x\| \to 0$ as $h \to 0$ for all $x \in X$. Let $\{v_1, v_2, \dots, v_{n+1}\}$ be a basis of $V_n$, $n = 1, 2, \dots$. Note that $x^{h,\delta}_{m+1,\alpha} \in V_n$. Thus $x^{h,\delta}_{m+1,\alpha}$ is of the form $\sum_{i=1}^{n+1}\lambda_i v_i$ for some scalars $\lambda_1, \lambda_2, \dots, \lambda_{n+1}$. It can be seen that $x^{h,\delta}_{m+1,\alpha}$ is a solution of (43) if and only if $\bar\lambda = (\lambda_1, \lambda_2, \dots, \lambda_{n+1})^T$ is the unique solution of
$$(M_n + \alpha B_n)\bar\lambda = \bar a, \tag{44}$$
where
$$M_n = \bigl(\langle Kv_i, Kv_j\rangle\bigr), \quad i, j = 1, 2, \dots, n+1,$$
$$B_n = \bigl(\langle v_i, v_j\rangle\bigr), \quad i, j = 1, 2, \dots, n+1,$$
and
$$\bar a = \bigl(\langle P_h(y^\delta + \alpha x_0 - f), v_i\rangle\bigr)^T, \quad i = 1, 2, \dots, n+1.$$
Note that (44) is uniquely solvable for $\alpha > 0$: $M_n$ is positive semi-definite and $B_n$ is positive definite (the Gram matrix of the linearly independent basis $\{v_i\}$), so $M_n + \alpha B_n$ is positive definite and hence invertible. A code sketch of this assembly is given after Remark 6.3 below.

6.1 Numerical Examples

In order to illustrate the method considered in the above section, we take $X = Y = L^2[0,1]$ and consider $K : L^2[0,1] \to L^2[0,1]$ as the Fredholm integral operator
$$K(x)(s) = \int_0^1 k(s,t)x(t)\,dt \tag{45}$$
with
$$k(t,s) = \begin{cases} 0, & t \le s \\ t - s, & t > s. \end{cases} \tag{46}$$
We apply the algorithm of Section 5 by choosing $V_n$ as the space of linear splines on a uniform grid of $n+1$ points in $[0,1]$. Specifically, for fixed $n$ we consider $t_i = \frac{i-1}{n}$, $i = 1, 2, \dots, n+1$, as the grid points. We take the basis functions $v_i$, $i = 1, 2, \dots, n+1$, of $V_n$ as follows:
$$v_1(t) = \begin{cases} \dfrac{t_2 - t}{t_2}, & 0 = t_1 \le t \le t_2 \\ 0, & t_2 \le t \le t_{n+1} = 1, \end{cases} \tag{47}$$
and, for $j = 2, 3, \dots, n$,
$$v_j(t) = \begin{cases} 0, & 0 = t_1 \le t \le t_{j-1}, \\ \dfrac{t - t_{j-1}}{t_j - t_{j-1}}, & t_{j-1} \le t \le t_j, \\ \dfrac{t_{j+1} - t}{t_{j+1} - t_j}, & t_j \le t \le t_{j+1}, \\ 0, & t_{j+1} \le t \le t_{n+1} = 1, \end{cases} \tag{48}$$

and
$$v_{n+1}(t) = \begin{cases} 0, & 0 \le t \le t_n \\ \dfrac{t - t_n}{t_{n+1} - t_n}, & t_n \le t \le t_{n+1}. \end{cases} \tag{49}$$
Let $P_h$ be the orthogonal projection onto $V_n$. We note that for $x \in C[0,1]$,
$$\|P_h x - x\|_2 = \mathrm{dist}(x, R(P_h)) \le \|\pi_n x - x\|_2 \le \|\pi_n x - x\|_\infty,$$
where $\pi_n$ is the (piecewise linear) interpolatory projection onto $V_n$. It is known [6] that $\|\pi_n x - x\|_\infty \to 0$ as $n \to \infty$. Therefore, using the fact that $C[0,1]$ is dense in $L^2[0,1]$, it follows that $\|P_h x - x\|_2 \to 0$ for all $x \in L^2[0,1]$.

The elements $Kv_i$, $i = 1, 2, \dots, n+1$, and the entries of the matrices $B_n$, $M_n$ and of $\bar a$ are computed explicitly. For the operator $K$ defined by (45) and (46),
$$\Gamma_h = \gamma_h = \|(I - P_h)F'(x_0)\| = \|(I - P_h)K^*K\| = O(n^{-2})$$
(see [3]).

EXAMPLE 6.1 In this example we take $y = \frac{1}{720}(26 + s^6 - 6s^5 + 15s^4 - 36s) + f(s)$, where $f(s) = s^2$ and $x_0 = 0$. Then the exact solution is $\hat{x} = \frac{1}{2}(s-1)^2$. Since $\hat{x} - x_0 = \hat{x} = K^*1 \in R(K^*) = R(F'(\hat{x})^{1/2})$, we have $\varphi(\lambda) = \lambda^{1/2}$, and hence $\psi^{-1}(\delta) = \varphi(\alpha_\delta) = \delta^{1/3}$ and $\|\hat{x} - x_k\| \le c\psi^{-1}(\delta)$ with $c = 6C\mu$. The results are given in Table 1, Table 2 and Figure 1. Here and below $e_k := \|x_k - \hat{x}\|$ and $y^\delta = y + \delta$.

Table 1 ($\delta$ = ; $\mu = 1.01$): values of $n$, $k$, $e_k$ and $e_k/\psi^{-1}(\delta)$.

Table 2 ($\delta$ = ; $\mu = 1.3$): values of $n$, $k$, $e_k$ and $e_k/\psi^{-1}(\delta)$.

EXAMPLE 6.2 In this example we take $y = \frac{1}{720}(s^6 + 15s^4 - 66s + 50) + f(s)$, where $f(s) = s^2$ and $x_0(s) = s$. Then the exact solution is $\hat{x} = \frac{1}{2}(s^2 + 1)$ and $\hat{x} - x_0 = \frac{1}{2}(s-1)^2 = K^*1 \in R(K^*) = R(F'(\hat{x})^{1/2})$, so $\varphi(\lambda) = \lambda^{1/2}$ and hence $\psi^{-1}(\delta) = \varphi(\alpha_\delta) = \delta^{1/3}$. According to the theory, $\|\hat{x} - x_k\| \le c\psi^{-1}(\delta)$ where $c = 6C\mu$. The results are given in Table 3, Table 4 and Figure 2.

Table 3 ($\delta$ = ; $\mu = 1.01$): values of $n$, $k$, $e_k$ and $e_k/\psi^{-1}(\delta)$.

REMARK 6.3 The last column of the tables shows that $e_k = O(\psi^{-1}(\delta))$. During the computations we observe that, due to round-off errors, $k$ and $e_k$ remain constant for large values of $n$.
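To make the discretization of this section concrete, the following minimal Python sketch (our own illustration, not the authors' MATLAB code) assembles $M_n$, $B_n$ and $\bar a$ for the kernel (46) with the hat basis (47)-(49) and solves (44) for a single value of $\alpha$. Trapezoidal quadrature replaces the closed-form integrals used in the paper, and the values of the noise level $\delta$ and of $\alpha$ are placeholder choices.

```python
import numpy as np

n = 32
nodes = np.linspace(0.0, 1.0, n + 1)                   # grid points t_1, ..., t_{n+1}
s = np.linspace(0.0, 1.0, 801)                         # quadrature grid on [0, 1]
wq = np.full(s.size, s[1] - s[0]); wq[[0, -1]] /= 2.0  # trapezoidal quadrature weights

def integral(g):                                       # int_0^1 g(s) ds
    return g @ wq

# hat functions (47)-(49) sampled on the quadrature grid
V = np.array([np.clip(1.0 - np.abs(s - nodes[i]) * n, 0.0, None) for i in range(n + 1)])

def K(x):
    """(Kx)(s) = int_0^s (s - t) x(t) dt, the operator (45) with kernel (46)."""
    return np.array([integral(np.where(s < si, (si - s) * x, 0.0)) for si in s])

def K_star(x):
    """(K*x)(s) = int_s^1 (t - s) x(t) dt, the adjoint of K."""
    return np.array([integral(np.where(s > si, (s - si) * x, 0.0)) for si in s])

KV = np.array([K(V[i]) for i in range(n + 1)])
B = np.array([[integral(V[i] * V[j]) for j in range(n + 1)] for i in range(n + 1)])
M = np.array([[integral(KV[i] * KV[j]) for j in range(n + 1)] for i in range(n + 1)])

f, x0 = s ** 2, np.zeros_like(s)                       # f(s) = s^2 and x_0 = 0, as in Example 6.1
x_true = 0.5 * (s - 1.0) ** 2                          # exact solution of Example 6.1
delta = 1e-3                                           # placeholder noise level
y_delta = K_star(K(x_true)) + f + delta                # noisy data y^delta = F(x_true) + delta

alpha = 1e-2                                           # placeholder regularization parameter
a_bar = np.array([integral((y_delta + alpha * x0 - f) * V[i]) for i in range(n + 1)])
lam = np.linalg.solve(M + alpha * B, a_bar)            # coefficient vector of (44)
x_alpha = lam @ V                                      # regularized approximation on the grid
print("L2 error:", np.sqrt(integral((x_alpha - x_true) ** 2)))
```

Wrapping the last four lines into a routine `solve_x(alpha)` and passing it to the balancing-principle sketch of Section 5 reproduces the structure of the experiments reported in Tables 1-4.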

Figure 1: The curve starting from 0.5 represents the actual solution $\hat{x}$ and the other curve represents $x_k$ of Example 6.1. The left figure shows the solution for $n = 1024$, $\delta$ = ; $\mu = 1.01$, and the right figure shows the solution for $n = 1024$, $\delta$ = ; $\mu = 1.3$.

Figure 2: The curve starting from 0.5 represents the actual solution $\hat{x}$ and the other curve represents $x_k$ of Example 6.2. The left figure shows the solution for $n = 1024$, $\delta$ = ; $\mu = 1.01$, and the right figure shows the solution for $n = 1024$, $\delta = 0.001$; $\mu = 1.3$.

Table 4 ($\delta = 0.001$; $\mu$ = ): values of $n$, $k$, $e_k$ and $e_k/\psi^{-1}(\delta)$.

7 Concluding Remarks

In this paper we have considered an iteratively regularized projection method for approximately solving the nonlinear ill-posed operator equation $F(x) = y$ when the available data is $y^\delta$ in place of the exact data $y$, with $\|y - y^\delta\| \le \delta$. It is assumed that $F$ is Fréchet differentiable in a neighborhood of some initial guess $x_0$ of the actual solution $\hat{x}$. The procedure involves finding the fixed point of the function
$$G_h(x) := x - (P_h F'(P_h x_0) + \alpha I)^{-1}P_h\bigl(F(x) - y^\delta + \alpha(x - x_0)\bigr)$$
in an iterative manner in a finite dimensional subspace $X_h$ of $X$. Here $x_0$ is an initial guess and $P_h$ is the orthogonal projection onto $X_h$. For choosing the regularization parameter $\alpha$ we made use of the adaptive method suggested by Pereverzev and Schock in [11], and the stopping rule is based on a majorizing sequence. The numerical experiments presented in the above section support our claim that if $\alpha$ is chosen according to the balancing principle (39), then $\|x_k - \hat{x}\| \le c\psi^{-1}(\delta)$.

Acknowledgements. The authors thank P. Jidhesh for providing MATLAB code for the computation. The first author thanks the National Institute of Technology Karnataka, India, for financial support under seed money grant No. RGO/O.M/SEED GRANT/106/2009. The work of Atef I. Elmahdy is supported by the Indo-Egypt Cultural Exchange Programme, under the research fellowship of ICCR, India; BNG/171/

References

[1] I. K. Argyros, Convergence and Applications of Newton-type Iterations, Springer, 2008.

[2] S. George and A. I. Elmahdy, An analysis of Lavrentiev regularization method for nonlinear ill-posed problems using a majorizing sequence, (2010), (communicated).

[3] C. W. Groetsch, J. T. King and D. Murio, Asymptotic analysis of a finite element method for Fredholm equations of the first kind, in: Treatment of Integral Equations by Numerical Methods, eds. C. T. H. Baker and G. F. Miller, Academic Press, London, (1982).

[4] H. W. Engl, Regularization methods for the stable solution of inverse problems, Surv. Math. Ind., 3 (1993).

[5] H. W. Engl, M. Hanke and A. Neubauer, Regularization of Inverse Problems, Kluwer, Dordrecht.

[6] B. V. Limaye, Spectral Perturbation and Approximation with Numerical Experiments, Proceedings of the Centre for Mathematical Analysis, Australian National University, Vol. 13, (1987).

[7] P. Mathé and S. V. Pereverzev, Geometry of linear ill-posed problems in variable Hilbert scales, Inverse Problems, 19(3), (2003).

[8] P. Mahale and M. T. Nair, Iterated Lavrentiev regularization for nonlinear ill-posed problems, ANZIAM Journal, 51 (2009).

[9] A. G. Ramm, Inverse Problems: Mathematical and Analytical Techniques with Applications to Engineering, Springer, (2005).

[10] M. T. Nair and P. Ravishankar, Regularized versions of continuous Newton's method and continuous modified Newton's method under general source conditions, Numer. Funct. Anal. Optim., 29(9-10), (2008).

[11] S. V. Pereverzev and E. Schock, On the adaptive selection of the parameter in regularization of ill-posed problems, SIAM J. Numer. Anal., 43(5), (2005).

[12] U. Tautenhahn, On the method of Lavrentiev regularization for nonlinear ill-posed problems, Inverse Problems, 18(1), (2002).
