THE DYNAMICAL SYSTEMS METHOD FOR SOLVING NONLINEAR EQUATIONS WITH MONOTONE OPERATORS


Asian-European Journal of Mathematics Vol. 3, No. 1 (2010) © World Scientific Publishing Company

THE DYNAMICAL SYSTEMS METHOD FOR SOLVING NONLINEAR EQUATIONS WITH MONOTONE OPERATORS

N. S. Hoang, Mathematics Department, Kansas State University, Manhattan, KS, USA, nguyenhs@math.ksu.edu
A. G. Ramm, Mathematics Department, Kansas State University, Manhattan, KS, USA, ramm@math.ksu.edu

Communicated by Jörg Koppitz
Received January 27, 2009
Revised July 14, 2009

A review of the authors' results is given. Several methods are discussed for solving nonlinear equations $F(u) = f$, where $F$ is a monotone operator in a Hilbert space and noisy data are given in place of the exact data. A discrepancy principle for solving the equation is formulated and justified. Various versions of the Dynamical Systems Method (DSM) for solving the equation are formulated: a regularized Newton-type method, a gradient-type method, and a simple iteration method. A priori and a posteriori choices of stopping rules for these methods are proposed and justified. Convergence of the solutions obtained by these methods to the minimal-norm solution to the equation $F(u) = f$ is proved. Iterative schemes with a posteriori choices of stopping rule, corresponding to the proposed versions of the DSM, are formulated, and their convergence to a solution of $F(u) = f$ is justified. New nonlinear differential inequalities are derived and applied to a study of the large-time behavior of solutions to evolution equations. Discrete versions of these inequalities are established.

Keywords: Ill-posed problems; nonlinear operator equations; monotone operators; nonlinear inequalities; dynamical systems method.

AMS Subject Classification: 47H05, 47J05, 47N20, 65J20, 65M30

1. Introduction

Consider the equation
$$F(u) = f, \qquad (1.1)$$

where $F$ is an operator in a Hilbert space $H$. Throughout this paper we assume that $F$ is a monotone continuous operator. Monotonicity is understood as follows:
$$\langle F(u) - F(v), u - v \rangle \ge 0, \quad \forall u, v \in H. \qquad (1.2)$$
We assume that equation (1.1) has a solution, possibly non-unique. Assume that $f$ is not known but $f_\delta$, the noisy data with $\|f_\delta - f\| \le \delta$, are known.

There are many practically important problems which are ill-posed in the sense of J. Hadamard. Problem (1.1) is well-posed in the sense of Hadamard if and only if $F$ is injective, surjective, and the inverse operator $F^{-1}$ is continuous. To solve the ill-posed problem (1.1) one has to use regularization methods rather than the classical Newton or Newton-Kantorovich methods. Regularization methods for the stable solution of linear ill-posed problems have been studied extensively (see [13], [15], [38] and references therein). Among regularization methods, variational regularization (VR) is one of the most frequently used. When $F = A$, where $A$ is a linear operator, the VR method consists of minimizing the functional
$$\|Au - f_\delta\|^2 + \alpha\|u\|^2 \to \min. \qquad (1.3)$$
The minimizer $u_{\delta,\alpha}$ of problem (1.3) can be found from the Euler equation
$$(A^*A + \alpha I)u_{\delta,\alpha} = A^*f_\delta.$$
In the VR method the choice of the regularization parameter $\alpha$ is important. Various choices of the regularization parameter have been proposed and justified. Among these, the discrepancy principle (DP) appears to be the most efficient in practice (see [13]). According to the DP one chooses $\alpha$ as the solution to the equation
$$\|Au_{\delta,\alpha} - f_\delta\| = C\delta, \quad 1 < C = \text{const}. \qquad (1.4)$$
When the operator $F$ is nonlinear the theory is less complete (see [2], [37]). In this case one may try to minimize the functional
$$\|F(u) - f_\delta\|^2 + \alpha\|u\|^2 \to \min \qquad (1.5)$$
as in the case of a linear operator $F$. The minimizer of problem (1.5) solves the Euler equation
$$F'(u_{\delta,\alpha})^*F(u_{\delta,\alpha}) + \alpha u_{\delta,\alpha} = F'(u_{\delta,\alpha})^*f_\delta. \qquad (1.6)$$
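In the linear scalar case the VR method and the discrepancy principle can be sketched in a few lines. The following toy example is illustrative only (the operator `A`, the constants, and the bisection bracket are assumptions, not values from the paper): it computes the minimizer of the VR functional from the Euler equation and then selects the regularization parameter by the DP, which is justified because the discrepancy is increasing in the parameter.

```python
import math

# Toy scalar sketch of variational regularization (1.3) with the discrepancy
# principle (1.4). The operator A, the data, C, and the bisection bracket are
# illustrative assumptions, not values from the paper.
A = 1e-3                    # "nearly singular" scalar operator Au = A*u
u_true = 2.0
f = A * u_true              # exact data
delta = 1e-6
f_delta = f + delta         # noisy data, |f_delta - f| = delta

def u_vr(alpha):
    """Minimizer of |Au - f_delta|^2 + alpha|u|^2: (A*A + alpha)u = A*f_delta."""
    return A * f_delta / (A * A + alpha)

def discrepancy(alpha):
    return abs(A * u_vr(alpha) - f_delta)

C = 1.5                     # DP: choose alpha with discrepancy = C*delta, C > 1
lo, hi = 1e-16, 1.0         # discrepancy(alpha) is increasing in alpha
for _ in range(200):        # bisection on a log scale
    mid = math.sqrt(lo * hi)
    if discrepancy(mid) < C * delta:
        lo = mid
    else:
        hi = mid
alpha_dp = math.sqrt(lo * hi)
print(alpha_dp, u_vr(alpha_dp))
```

Bisection on a log scale is natural here since the admissible parameter ranges over many orders of magnitude.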
However, there are several principal difficulties in nonlinear problems: there are no general results concerning the solvability of (1.6), and the notion of minimal-norm solution does not make sense, in general, when $F$ is nonlinear. Other methods for solving (1.1) with nonlinear $F$ have been studied; convergence proofs of these methods often rely on source-type assumptions, which are difficult to verify in practice and may not hold.

Equation (1.1) with a monotone operator $F$ is of interest and importance in many applications. Every solvable linear operator equation $Au = f$ can be reduced to an operator equation with the monotone operator $A^*A$. For equations with a

bounded operator $A$ this is a simple fact, and for unbounded, closed, densely defined linear operators $A$ it is proved in [28], [30], [31], [15]. Physical problems with dissipation of energy often can be reduced to solving equations with monotone operators (see, e.g., [35]). For simplicity we present the results for equations in Hilbert spaces, but some results can be generalized to operators in Banach spaces. When $F$ is monotone, the notion of minimal-norm solution makes sense (see, e.g., [15], p. 110).

In [36] Tautenhahn studied a discrepancy principle for solving equation (1.1). The discrepancy principle in [36] requires solving for $\alpha$ the equation
$$\|(F'(u_{\delta,\alpha}) + \alpha I)^{-1}(F(u_{\delta,\alpha}) - f_\delta)\| = C\delta, \quad 1 < C = \text{const}, \qquad (1.7)$$
where $u_{\delta,\alpha}$ solves the equation $F(u_{\delta,\alpha}) + \alpha u_{\delta,\alpha} = f_\delta$. For this discrepancy principle an optimal rate of convergence is obtained in [36]. However, the convergence of the method is justified under source-type assumptions and other restrictive assumptions. These assumptions often do not hold, and some of them cannot be verified, in general. In addition, equation (1.7) is difficult to solve numerically.

A continuous analog of the Newton method for solving well-posed operator equations was proposed in [3]. In [1], [4]–[33], and in the monograph [15], the Dynamical Systems Method for solving operator equations is studied systematically. The DSM consists of finding a nonlinear map $\Phi(t,u)$ such that the Cauchy problem
$$\dot u = \Phi(t,u), \quad u(0) = u_0, \qquad (1.8)$$
has a unique solution for all $t \ge 0$, there exists $\lim_{t\to\infty} u(t) := u(\infty)$, and $F(u(\infty)) = f$:
$$\exists!\, u(t)\ \forall t \ge 0; \quad \exists u(\infty); \quad F(u(\infty)) = f. \qquad (1.9)$$
Various choices of $\Phi$ were proposed in [15] for (1.9) to hold. Each such choice yields a version of the DSM.

In this paper, several methods developed by the authors for stably solving equation (1.1) with a monotone operator $F$ in a Hilbert space $H$ and noisy data $f_\delta$, given in place of the exact data $f$, are presented.
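As a minimal illustration of (1.8)–(1.9), consider the well-posed scalar case with the simplest choice $\Phi(t,u) = -(F(u) - f)$; the monotone operator $F(u) = u + u^3$ and all numbers below are hypothetical stand-ins, not an example from the paper:

```python
# Minimal well-posed illustration of the DSM (1.8)-(1.9) with the simplest
# choice Phi(t, u) = -(F(u) - f). F(u) = u + u^3 and all numbers are
# hypothetical stand-ins, not an example from the paper.
def F(u):
    return u + u ** 3      # monotone: F'(u) = 1 + 3u^2 >= 1

f = 10.0                   # the solution of u + u^3 = 10 is u = 2
u = 0.0                    # initial point u_0
dt = 0.01                  # forward-Euler step for du/dt = -(F(u) - f)
for _ in range(20000):
    u -= dt * (F(u) - f)
print(u)                   # u(t) approaches u(infinity) = 2, and F(u(infinity)) = f
```

Here the strong monotonicity of $F$ makes the trajectory contract toward the unique solution; the ill-posed case requires the regularized versions of $\Phi$ discussed below.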
A discrepancy principle (DP) for stably solving equation (1.1) is formulated and justified. In this DP the only assumptions on $F$ are continuity and monotonicity. Thus, our result is quite general and can be applied to a wide range of problems. Several versions of the Dynamical Systems Method (DSM) for solving equation (1.1) are formulated: a Newton-type method, a gradient-type method, and a simple iteration method. A priori and a posteriori choices of stopping rules for several versions of the DSM and for the corresponding iterative schemes are proposed and justified. Convergence of the solutions of these versions of the DSM to the minimal-norm

solution to the equation $F(u) = f$ is proved. Iterative schemes corresponding to the proposed versions of the DSM are formulated. Convergence of these iterative schemes to a solution of the equation $F(u) = f$ is established. When one uses these iterative schemes, one does not have to solve a nonlinear equation for the regularization parameter: the stopping time is chosen automatically in the course of the calculations. Implementation of these methods is illustrated in Sec. 6 by a numerical experiment. In Secs. 2 and 3 basic and auxiliary results are formulated, in Sec. 4 proofs are given, and in Sec. 5 ideas of applications of the basic nonlinear inequality (2.87) are outlined.

2. Basic Results

2.1. A discrepancy principle

Let us consider the equation
$$F(V_{\delta,a}) + aV_{\delta,a} - f_\delta = 0, \quad a > 0, \qquad (2.1)$$
where $a = \text{const}$. It is known (see, e.g., [15], p. 111) that equation (2.1) with a monotone continuous operator $F$ has a unique solution $V_{\delta,a}$ for any $f_\delta \in H$. Assume that equation (1.1) has a solution. It is known that the set of solutions $\mathcal{N} := \{u : F(u) = f\}$ is convex and closed if $F$ is monotone and continuous (see, e.g., [15], p. 110). A closed and convex set $\mathcal{N}$ in $H$ has a unique minimal-norm element. This minimal-norm solution to (1.1) is denoted by $y$.

Theorem 1. Let $\gamma \in (0,1]$ and $C > 0$ be constants such that $C\delta^\gamma > \delta$. Assume that $\|F(0) - f_\delta\| > C\delta^\gamma$. Let $y$ be the minimal-norm solution to equation (1.1). Then there exists a unique $a(\delta) > 0$ such that
$$\|F(V_{\delta,a(\delta)}) - f_\delta\| = C\delta^\gamma, \qquad (2.2)$$
where $V_{\delta,a(\delta)}$ solves (2.1) with $a = a(\delta)$. If $0 < \gamma < 1$, then
$$\lim_{\delta\to 0}\|V_{\delta,a(\delta)} - y\| = 0. \qquad (2.3)$$

Instead of using (2.1), one may use the equation
$$F(V_{\delta,a}) + a(V_{\delta,a} - \bar u) - f_\delta = 0, \quad a > 0, \qquad (2.4)$$
where $\bar u$ is an element of $H$. Denote $F_1(u) := F(u + \bar u)$. Then $F_1$ is monotone and continuous. Equation (2.4) can be written as
$$F_1(U_{\delta,a}) + aU_{\delta,a} - f_\delta = 0, \quad U_{\delta,a} := V_{\delta,a} - \bar u, \quad a > 0. \qquad (2.5)$$
Applying Theorem 1 with $F = F_1$ one gets the following result:

Corollary 2. Let $\gamma \in (0,1]$ and $C > 0$ be constants such that $C\delta^\gamma > \delta$.
Let $\bar u \in H$ and let $z$ be the solution to (1.1) with minimal distance to $\bar u$. Assume that $\|F(\bar u) - f_\delta\| > C\delta^\gamma$. Then there exists a unique $a(\delta) > 0$ such that
$$\|F(\tilde V_{\delta,a(\delta)}) - f_\delta\| = C\delta^\gamma, \qquad (2.6)$$

where $\tilde V_{\delta,a(\delta)}$ solves the equation
$$F(\tilde V_{\delta,a}) + a(\delta)(\tilde V_{\delta,a} - \bar u) - f_\delta = 0. \qquad (2.7)$$
If $\gamma \in (0,1)$, then this $a(\delta)$ satisfies
$$\lim_{\delta\to 0}\|\tilde V_{\delta,a(\delta)} - z\| = 0. \qquad (2.8)$$

The following result is useful for the implementation of our DP.

Theorem 3. Let $\delta$, $F$, $f_\delta$, and $y$ be as in Theorem 1 and $0 < \gamma < 1$. Assume that $v_\delta \in H$, $\theta > 0$ is a constant, $\alpha(\delta) > 0$, and the following inequalities are satisfied:
$$\|F(v_\delta) + \alpha(\delta)v_\delta - f_\delta\| \le \theta\delta, \quad \theta > 0, \qquad (2.9)$$
and
$$C_1\delta^\gamma \le \|F(v_\delta) - f_\delta\| \le C_2\delta^\gamma, \quad 0 < C_1 < C_2. \qquad (2.10)$$
Then one has:
$$\lim_{\delta\to 0}\|v_\delta - y\| = 0. \qquad (2.11)$$

Remark 1. Based on Theorem 3, an algorithm for solving nonlinear equations with monotone Lipschitz continuous operators is outlined in [11].

Remark 2. It is an open problem to choose $\gamma$ and $C_1, C_2$ optimal in some sense.

Remark 3. Theorem 1 and Theorem 3 do not hold, in general, for $\gamma = 1$. Indeed, let $Fu = \langle u, p\rangle p$, $\|p\| = 1$, $p \perp \mathcal{N}(F) := \{u \in H : Fu = 0\}$, $f = p$, $f_\delta = p + \delta q$, where $\langle p, q\rangle = 0$, $\|q\| = 1$, $Fq = 0$, so $\|f_\delta - f\| = \delta$. For a linear operator $F$ we write $Fu$ rather than $F(u)$. One has $Fy = p$, where $y = p$ is the minimal-norm solution to the equation $Fu = p$. The equation $FV_{\delta,a} + aV_{\delta,a} = p + \delta q$ has the unique solution $V_{\delta,a} = \delta q/a + p/(1+a)$. Equation (2.2) is $C\delta = \|\delta q + (ap)/(1+a)\|$. This equation yields $a = a(\delta) = c\delta/(1 - c\delta)$, where $c := (C^2 - 1)^{1/2}$, and we assume $c\delta < 1$. Thus, $\lim_{\delta\to 0} V_{\delta,a(\delta)} = p + c^{-1}q := v$, and $Fv = p$. Therefore $v = \lim_{\delta\to 0} V_{\delta,a(\delta)}$ is not $p$, i.e., is not the minimal-norm solution to the equation $Fu = p$. This argument is borrowed from [14], p. 29. If equation (1.1) has a unique solution, then one can prove convergence results (2.3) and (2.11) for $\gamma = 1$.

2.2. The dynamical systems method

Let $a(t) \searrow 0$ be a positive and strictly decreasing function. Let $V(t)$ solve the equation
$$F(V(t)) + a(t)V(t) - f_\delta = 0. \qquad (2.12)$$
Throughout the paper we assume that the equation $F(u) = f$ has a solution in $H$, possibly nonunique, and that $y$ is the minimal-norm solution to this equation. Let $f$ be unknown but $f_\delta$ be given, with $\|f_\delta - f\| \le \delta$.
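A scalar sketch of the discrepancy principle of Theorem 1 (the operator $F(u) = u^3$ and all constants are illustrative assumptions): the inner bisection solves the regularized equation for a fixed $a$, and the outer bisection finds $a(\delta)$ from the discrepancy equation, using the strict monotonicity of $\phi(a)$ established in Lemma 20 below.

```python
# Numerical sketch of Theorem 1's discrepancy principle for the scalar
# monotone operator F(u) = u^3; all concrete values are illustrative.
def F(u):
    return u ** 3

f = 8.0                      # exact data; minimal-norm solution y = 2
delta = 1e-3
f_delta = f + delta
C, gamma = 1.5, 0.9          # C*delta**gamma > delta, as required

def V(a):
    """Unique solution of F(V) + a*V = f_delta (eq. (2.1)), found by
    bisection: u -> F(u) + a*u is strictly increasing."""
    lo, hi = 0.0, max(1.0, abs(f_delta))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if F(mid) + a * mid < f_delta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def phi(a):
    """phi(a) = |F(V_{delta,a}) - f_delta|, strictly increasing in a."""
    return abs(F(V(a)) - f_delta)

target = C * delta ** gamma  # discrepancy level of eq. (2.2)
lo, hi = 1e-12, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if phi(mid) < target:
        lo = mid
    else:
        hi = mid
a_delta = 0.5 * (lo + hi)
print(a_delta, V(a_delta))   # V(a_delta) is close to y = 2 for small delta
```

The outer bisection is legitimate precisely because of Lemma 20: $\phi(a)$ is strictly increasing, so the discrepancy equation has exactly one root.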

2.2.1. The Newton-type DSM

In this section we assume that $F$ is a monotone operator, twice Fréchet differentiable, and
$$\|F^{(j)}(u)\| \le M_j(R, u_0), \quad u \in B(u_0,R), \quad 0 \le j \le 2. \qquad (2.13)$$
This assumption is satisfied in many applications. Denote
$$A := F'(u_\delta(t)), \quad A_a := A + aI, \qquad (2.14)$$
where $I$ is the identity operator. Let $u_\delta(t)$ solve the Cauchy problem
$$\dot u_\delta = -A_{a(t)}^{-1}[F(u_\delta) + a(t)u_\delta - f_\delta], \quad u_\delta(0) = u_0. \qquad (2.15)$$
We assume below that $\|F(u_0) - f_\delta\| > C_1\delta^\zeta$, where $C_1 > 1$ and $\zeta \in (0,1]$ are some constants. We also assume without loss of generality that $\delta \in (0,1)$. Assume that equation (1.1) has a solution, possibly nonunique, and that $y$ is the minimal-norm solution to equation (1.1). Recall that we are given the noisy data $f_\delta$, $\|f_\delta - f\| \le \delta$.

Lemma 4 ([8], Lemma 2.7). Suppose $M_1$, $c_0$, and $c_1$ are positive constants and $0 \ne y \in H$. Then there exist $\lambda > 0$ and a function $a(t) \in C^1[0,\infty)$, $0 < a(t) \searrow 0$, such that the following conditions hold:
$$\frac{M_1}{\lambda} \le \|y\|, \qquad (2.16)$$
$$\frac{c_0}{\lambda a(t)} + c_1\frac{|\dot a(t)|}{a(t)} \le \frac{a(t)}{2\lambda}\Big[1 - \frac{|\dot a(t)|}{a(t)}\Big], \qquad (2.17)$$
$$\frac{\delta}{a(t)} \le \frac{a(t)}{2\lambda}\Big[1 - \frac{|\dot a(t)|}{a(t)}\Big], \qquad (2.18)$$
$$\|F(0) - f_\delta\| \le \frac{a^2(0)}{\lambda}. \qquad (2.19)$$
In the proof of Lemma 2.7 in [8] we have demonstrated that conditions (2.17)–(2.19) are satisfied for $a(t) = \frac{d}{(c+t)^b}$, where $b \in (0,1]$, $c, d > 0$ are constants, $c > 6b$, and $d$ is sufficiently large.

Theorem 5. Assume $a(t) = \frac{d}{(c+t)^b}$, where $b \in (0,1]$, $c, d > 0$ are constants, $c > 6b$, and $d$ is sufficiently large so that conditions (2.17)–(2.19) hold. Assume that $F: H \to H$ is a twice Fréchet differentiable monotone operator, (2.13) holds, and $u_0$ is an element of $H$ satisfying the inequalities
$$\|u_0 - V_0\| \le \frac{\|F(0) - f_\delta\|}{a(0)}, \quad \|h(0)\| = \|F(u_0) + a(0)u_0 - f_\delta\| \le \frac{1}{4}a(0)\|V(0)\|. \qquad (2.20)$$
Then the solution $u_\delta(t)$ to problem (2.15) exists on an interval $[0, T_\delta]$, $\lim_{\delta\to 0} T_\delta = \infty$, and there exists a unique $t_\delta \in (0, T_\delta)$ such that $\lim_{\delta\to 0} t_\delta = \infty$ and
$$\|F(u_\delta(t_\delta)) - f_\delta\| = C_1\delta^\zeta, \quad \|F(u_\delta(t)) - f_\delta\| > C_1\delta^\zeta, \quad t \in [0, t_\delta), \qquad (2.21)$$

where $C_1 > 1$ and $0 < \zeta \le 1$. If $\zeta \in (0,1)$ and $t_\delta$ satisfies (2.21), then
$$\lim_{\delta\to 0}\|u_\delta(t_\delta) - y\| = 0. \qquad (2.22)$$

Remark 4. One can choose $u_0$ satisfying inequalities (2.20). Indeed, if $u_0$ is a sufficiently close approximation to $V(0)$, the solution to equation (2.12) at $t = 0$, then inequalities (2.20) are satisfied. Note that the second inequality in (2.20) is a sufficient condition for the inequality (see also (4.55))
$$e^{-\frac{t}{2}}\|h(0)\| \le \frac{1}{4}a(t)\|V(t)\|, \quad t \ge 0, \qquad (2.23)$$
to hold. In our proof inequality (2.23) (or inequality (4.55)) is used at $t = t_\delta$. The stopping time $t_\delta$ is often sufficiently large for the quantity $e^{\frac{t_\delta}{2}}a(t_\delta)$ to be large. Note that $\|V(t)\|$ is a strictly increasing function of $t \in (0,\infty)$ (see Lemma 20). In this case inequality (2.23) with $t = t_\delta$ is satisfied for a wide range of $u_0$. The condition $c > 6b$ is used in the proof of Lemma 27 (see below).

2.2.2. The dynamical systems gradient method

In this section we assume that $F$ is a monotone operator, twice Fréchet differentiable, and that estimates (2.13) hold. Denote $A := F'(u_\delta(t))$, $A_a := A + aI$, $a = a(t)$, where $I$ is the identity operator. Let $u_\delta(t)$ solve the Cauchy problem
$$\dot u_\delta = -A_{a(t)}^*[F(u_\delta) + a(t)u_\delta - f_\delta], \quad u_\delta(0) = u_0. \qquad (2.24)$$
Let us recall the following result:

Lemma 6 ([9], Lemma 11). Suppose $M_1$, $c_0$, and $c_1$ are positive constants and $0 \ne y \in H$. Then there exist $\lambda > 0$ and a function $a(t) \in C^1[0,\infty)$, $0 < a(t) \searrow 0$, such that
$$|\dot a(t)| \le \frac{a^3(t)}{4}, \qquad (2.25)$$
and the following conditions hold:
$$\frac{M_1}{\lambda} \le \|y\|, \qquad (2.26)$$
$$\delta \le \frac{a^2(t)}{2\lambda}\Big[a^2(t) - \frac{2|\dot a(t)|}{a(t)}\Big], \qquad (2.27)$$
$$\frac{c_0(M_1 + a(t))a^2(t)}{\lambda} + c_1\frac{|\dot a(t)|}{a(t)} \le \frac{a^2(t)}{2\lambda}\Big[a^2(t) - \frac{2|\dot a(t)|}{a(t)}\Big], \qquad (2.28)$$
$$\frac{\lambda}{a^2(0)}g(0) < 1. \qquad (2.29)$$

We have demonstrated in the proof of Lemma 11 in [9] that conditions (2.25)–(2.29) are satisfied with $a(t) = \frac{d}{(c+t)^b}$, where $b \in (0, \frac14]$, $c \ge 1$, and $d > 0$ are constants, and $d$ is sufficiently large.

Theorem 7. Let $a(t) = \frac{d}{(c+t)^b}$, where $b \in (0,\frac14]$, $c \ge 1$, and $d > 0$ are constants, and $d$ is sufficiently large so that conditions (2.25)–(2.29) hold. Assume that $F: H \to H$ is a twice Fréchet differentiable monotone operator, (2.13) holds, and $u_0$ is an element of $H$ satisfying the inequalities
$$\|F(u_0) - f_\delta\| > C_1\delta^\zeta, \quad \|h(0)\| = \|F(u_0) + a(0)u_0 - f_\delta\| \le \frac14 a(0)\|V(0)\|. \qquad (2.30)$$
Then the solution $u_\delta(t)$ to problem (2.24) exists on an interval $[0, T_\delta]$, $\lim_{\delta\to0} T_\delta = \infty$, and there exists $t_\delta \in (0, T_\delta)$, not necessarily unique, such that
$$\|F(u_\delta(t_\delta)) - f_\delta\| = C_1\delta^\zeta, \quad \lim_{\delta\to0} t_\delta = \infty, \qquad (2.31)$$
where $C_1 > 1$ and $0 < \zeta \le 1$ are constants. If $\zeta \in (0,1)$ and $t_\delta$ satisfies (2.31), then
$$\lim_{\delta\to0}\|u_\delta(t_\delta) - y\| = 0. \qquad (2.32)$$

Remark 5. One can easily choose $u_0$ satisfying inequality (2.30). Note that inequality (2.30) is a sufficient condition for the inequality (cf. (4.95))
$$e^{-\varphi(t)}\|h(0)\| \le \frac14 a(t)\|V(t)\|, \quad t \ge 0, \qquad (2.33)$$
to hold. In our proof inequality (2.33) (see also (4.95)) is used at $t = t_\delta$. The stopping time $t_\delta$ is often sufficiently large for the quantity $e^{\varphi(t_\delta)}a(t_\delta)$ to be large. In this case inequality (2.33) (cf. (4.95)) with $t = t_\delta$ is satisfied for a wide range of $u_0$. The parameter $\zeta$ is not fixed in (2.31). While we could fix it, for example, by setting $\zeta = 0.9$, it is an interesting open problem to propose a criterion for choosing $\zeta$ that is optimal in some sense.

2.2.3. The simple iteration DSM

In this section we assume that $F$ is a monotone operator, Fréchet differentiable, and
$$\sup_{u \in B(u_0,R)} \|F'(u)\| \le M_1(u_0, R). \qquad (2.34)$$
Let us consider a version of the DSM for solving equation (1.1):
$$\dot u_\delta = -(F(u_\delta) + a(t)u_\delta - f_\delta), \quad u_\delta(0) = u_0, \qquad (2.35)$$
where $F$ is a monotone operator.
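A forward-Euler discretization of (2.35) in the scalar case, stopped at the first time the discrepancy drops to $C_1\delta^\zeta$, can be sketched as follows. The operator $F(u) = u + u^3$, the schedule $a(t)$, the step size, and all constants are illustrative assumptions, not the paper's choices:

```python
# Forward-Euler sketch of the simple-iteration DSM (2.35) with the discrepancy
# stopping rule of this section. F(u) = u + u^3 is a scalar monotone stand-in;
# a(t), dt, delta, C1, zeta are illustrative assumptions, not the paper's values.
def F(u):
    return u + u ** 3

f = 10.0                      # exact data; the solution of F(u) = f is y = 2
delta = 1e-2
f_delta = f + delta           # noisy data, |f_delta - f| = delta
d, c, b = 1.0, 1.0, 0.5       # regularization schedule a(t) = d/(c+t)^b
C1, zeta = 1.5, 0.9

u, t, dt = 0.0, 0.0, 0.05
while abs(F(u) - f_delta) > C1 * delta ** zeta:
    a = d / (c + t) ** b
    u -= dt * (F(u) + a * u - f_delta)   # du/dt = -(F(u) + a(t)u - f_delta)
    t += dt
print(t, u)                   # at the stopping time, u is close to y = 2
```

Note that no derivative of $F$ and no operator inversion appear in the loop, which is exactly the advantage of this version discussed next.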
The advantage of this version compared with (2.15) is the absence of the inverse operator in the algorithm, which makes algorithm (2.35) less expensive than (2.15). On the other hand, algorithm (2.15) converges faster than (2.35) in many

cases. Algorithm (2.35) is also cheaper than the DSM gradient algorithm (2.24).

The advantage of method (2.35), a modified version of the simple iteration method, over the Gauss-Newton method and version (2.15) of the DSM is the following: neither inversion of matrices nor evaluation of $F'$ is needed in a discretized version of (2.35). Although the convergence rate of the DSM (2.35) may be slower than that of the DSM (2.15), the DSM (2.35) might be faster than the DSM (2.15) for large-scale systems due to its lower computational cost per step.

In this section we investigate a stopping rule based on a discrepancy principle (DP) for the DSM (2.35). The main result of this section is Theorem 9, in which a DP is formulated, the existence of a stopping time $t_\delta$ is proved, and the convergence of the DSM with the proposed DP is justified under some natural assumptions.

Lemma 8 ([10], Lemma 11). Suppose $M_1$ and $c_1$ are positive constants and $0 \ne y \in H$. Then there exist a number $\lambda > 0$ and a function $a(t) \in C^1[0,\infty)$, $0 < a(t) \searrow 0$, such that
$$|\dot a(t)| \le \frac{a^2(t)}{2}, \qquad (2.36)$$
and the following conditions hold:
$$\frac{M_1}{\lambda} \le \|y\|, \qquad (2.37)$$
$$\delta \le \frac{a(t)}{2\lambda}\Big[a(t) - \frac{|\dot a(t)|}{a(t)}\Big], \qquad (2.38)$$
$$c_1\frac{|\dot a(t)|}{a(t)} \le \frac{a(t)}{2\lambda}\Big[a(t) - \frac{|\dot a(t)|}{a(t)}\Big], \qquad (2.39)$$
$$\frac{\lambda}{a(0)}g(0) < 1. \qquad (2.40)$$
It is shown in the proof of Lemma 11 in [10] that conditions (2.36)–(2.40) hold for the function $a(t) = \frac{d}{(c+t)^b}$, where $b \in (0,\frac12]$, $c \ge 1$, and $d > 0$ are constants, and $d$ is sufficiently large.

Theorem 9. Let $a(t) = \frac{d}{(c+t)^b}$, where $b \in (0,\frac12]$, $c \ge 1$, and $d > 0$ are constants, and $d$ is sufficiently large so that conditions (2.36)–(2.40) hold. Assume that $F: H \to H$ is a Fréchet differentiable monotone operator, condition (2.34) holds, and $u_0$ is an element of $H$ satisfying the inequalities
$$\|F(u_0) - f_\delta\| > C_1\delta^\zeta, \quad \|h(0)\| = \|F(u_0) + a(0)u_0 - f_\delta\| \le \frac14 a(0)\|V(0)\|. \qquad (2.41)$$
Assume that the equation $F(u) = f$ has a solution $y \in B(u_0,R)$, possibly nonunique, and that $y$ is the minimal-norm solution to this equation.
Then the solution $u_\delta(t)$ to problem (2.35) exists on an interval $[0,T_\delta]$, $\lim_{\delta\to0}T_\delta = \infty$, and there exists $t_\delta \in (0,T_\delta)$, not necessarily unique, such that
$$\|F(u_\delta(t_\delta)) - f_\delta\| = C_1\delta^\zeta, \quad \lim_{\delta\to0} t_\delta = \infty, \qquad (2.42)$$

where $C_1 > 1$ and $0 < \zeta \le 1$ are constants. If $\zeta \in (0,1)$ and $t_\delta$ satisfies (2.42), then
$$\lim_{\delta\to0}\|u_\delta(t_\delta) - y\| = 0. \qquad (2.43)$$

Remark 6. One can easily choose $u_0$ satisfying inequality (2.41). Again, inequality (2.41) is a sufficient condition for (2.33) (cf. (4.131)) to hold. In our proof inequality (2.33) is used at $t = t_\delta$. The stopping time $t_\delta$ is often sufficiently large for the quantity $e^{\varphi(t_\delta)}a(t_\delta)$ to be large. In this case inequality (2.33) with $t = t_\delta$ is satisfied for a wide range of $u_0$.

2.3. Iterative schemes

Let $0 < a_n \searrow 0$ be a positive strictly decreasing sequence. Denote $V_n := V_{n,\delta}$, where $V_{n,\delta}$ solves the equation
$$F(V_{n,\delta}) + a_nV_{n,\delta} - f_\delta = 0. \qquad (2.44)$$
Note that if $a_n := a(t_n)$ then $V_{n,\delta} = V_\delta(t_n)$.

2.3.1. Iterative scheme of Newton-type

In this section we assume that $F$ is a monotone operator, twice Fréchet differentiable, and
$$\|F^{(j)}(u)\| \le M_j(R, u_0), \quad u \in B(u_0,R), \quad 0 \le j \le 2. \qquad (2.45)$$
Consider the iterative scheme
$$u_{n+1} = u_n - A_n^{-1}[F(u_n) + a_nu_n - f_\delta], \quad A_n := F'(u_n) + a_nI, \quad u_0 = u_0, \qquad (2.46)$$
where $u_0$ is chosen so that inequality (2.52) holds. Note that $F'(u_n) \ge 0$ since $F$ is monotone. Thus, $\|A_n^{-1}\| \le \frac{1}{a_n}$.

Lemma 10 ([7], Lemma 2.5). Suppose $M_1$, $c_0$, and $c_1$ are positive constants and $0 \ne y \in H$. Then there exist $\lambda > 0$ and a sequence $0 < (a_n)_{n=0}^\infty \searrow 0$ such that the following conditions hold:
$$a_n \le 2a_{n+1}, \qquad (2.47)$$
$$\|f_\delta - F(0)\| \le \frac{a_0^2}{\lambda}, \qquad (2.48)$$
$$\frac{M_1}{\lambda} \le \|y\|, \qquad (2.49)$$
$$\frac{a_n - a_{n+1}}{a_{n+1}^2} \le \frac{1}{2c_1}, \qquad (2.50)$$
$$\frac{c_0}{\lambda a_n} + c_1\frac{a_n - a_{n+1}}{a_{n+1}} \le \frac{a_{n+1}}{2\lambda}. \qquad (2.51)$$
It is shown in the proof of Lemma 2.5 in [7] that conditions (2.47)–(2.51) hold for the sequence $a_n = \frac{d}{(c+n)^b}$, where $c \ge 1$, $0 < b \le 1$, and $d$ is sufficiently large.
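In the scalar case the Newton-type scheme (2.46) with the discrepancy stopping rule (2.53) below reads as follows; $F(u) = u + u^3$ is a monotone stand-in, and $a_n$, $\delta$, $C_1$, $\gamma$ are illustrative assumptions, not constants from the paper:

```python
# Scalar sketch of the Newton-type iterative scheme (2.46) with stopping rule
# (2.53). F(u) = u + u^3 is a monotone stand-in; a_n, delta, C1, gamma are
# illustrative assumptions, not constants from the paper.
def F(u):
    return u + u ** 3

def Fp(u):                     # F'(u) = 1 + 3u^2 >= 1 > 0, so F is monotone
    return 1.0 + 3.0 * u ** 2

f = 10.0                       # exact data; minimal-norm solution y = 2
delta = 1e-4
f_delta = f + delta
d, c, b = 1.0, 1.0, 1.0        # a_n = d/(c+n)^b
C1, gamma = 1.5, 0.9

u, n = 0.0, 0
while abs(F(u) - f_delta) > C1 * delta ** gamma:
    a = d / (c + n) ** b
    # u_{n+1} = u_n - A_n^{-1}[F(u_n) + a_n u_n - f_delta], A_n = F'(u_n) + a_n I
    u = u - (F(u) + a * u - f_delta) / (Fp(u) + a)
    n += 1
print(n, u)
```

The stopping index is found automatically by monitoring the discrepancy; no equation for the regularization parameter is ever solved.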

Remark 7. In Lemmas 10–14 one can choose $a_0$ and $\lambda$ so that $\frac{a_0}{\lambda}$ is uniformly bounded as $\delta \to 0$, even if $M_1(R) \to \infty$ as $R \to \infty$ at an arbitrarily fast rate. Choices of $a_0$ and $\lambda$ satisfying this condition are discussed in [7], [9] and [10].

Let $a_n$ and $\lambda$ satisfy conditions (2.47)–(2.51). Assume that the equation $F(u) = f$ has a solution $y \in B(u_0,R)$, possibly nonunique, and that $y$ is the minimal-norm solution to this equation. Let $f$ be unknown but $f_\delta$ be given, with $\|f_\delta - f\| \le \delta$. We have the following result:

Theorem 11. Assume $a_n = \frac{d}{(c+n)^b}$, where $c \ge 1$, $0 < b \le 1$, and $d$ is sufficiently large so that conditions (2.47)–(2.51) hold. Let $u_n$ be defined by (2.46). Assume that $u_0$ is chosen so that
$$\|F(u_0) - f_\delta\| > C_1\delta^\gamma > \delta \quad \text{and} \quad g_0 := \|u_0 - V_0\| \le \frac{\|F(0) - f_\delta\|}{a_0}. \qquad (2.52)$$
Then there exists a unique $n_\delta$, depending on $C_1$ and $\gamma$ (see below), such that
$$\|F(u_{n_\delta}) - f_\delta\| \le C_1\delta^\gamma, \quad C_1\delta^\gamma < \|F(u_n) - f_\delta\|, \quad n < n_\delta, \qquad (2.53)$$
where $C_1 > 1$, $0 < \gamma \le 1$. Let $0 < (\delta_m)_{m=1}^\infty$ be a sequence such that $\delta_m \to 0$. If $N$ is a cluster point of the sequence $n_{\delta_m}$ satisfying (2.53), then
$$\lim_{m\to\infty} u_{n_{\delta_m}} = u^*, \qquad (2.54)$$
where $u^*$ is a solution to the equation $F(u) = f$. If
$$\lim_{m\to\infty} n_{\delta_m} = \infty, \qquad (2.55)$$
and $\gamma \in (0,1)$, then
$$\lim_{m\to\infty}\|u_{n_{\delta_m}} - y\| = 0. \qquad (2.56)$$
Note that by Remark 9, inequality (2.52) is satisfied with $u_0 = 0$.

2.3.2. An iterative scheme of gradient-type

In this section we assume that $F$ is a monotone operator, twice Fréchet differentiable, and that estimates (2.45) hold. Consider the iterative scheme
$$u_{n+1} = u_n - \alpha_nA_n^*[F(u_n) + a_nu_n - f_\delta], \quad A_n := F'(u_n) + a_nI, \quad u_0 = u_0, \qquad (2.57)$$
where $u_0$ is chosen so that inequality (2.65) holds, and $\{\alpha_n\}_{n=1}^\infty$ is a positive sequence such that
$$0 < \tilde\alpha \le \alpha_n \le \frac{2}{a_n^2 + (M_1 + a_n)^2}, \quad \|A_n\| \le M_1 + a_n. \qquad (2.58)$$
It follows from this condition that
$$\|1 - \alpha_nA_{a_n}^*A_{a_n}\| = \sup_{a_n^2 \le s \le (M_1+a_n)^2}|1 - \alpha_ns| \le 1 - \alpha_na_n^2. \qquad (2.59)$$

Note that $F'(u_n) \ge 0$ since $F$ is monotone.

Lemma 12 ([9], Lemma 12). Suppose $M_1$, $c_0$, $c_1$, and $\tilde\alpha$ are positive constants and $0 \ne y \in H$. Then there exist $\lambda > 0$ and a sequence $0 < (a_n)_{n=0}^\infty \searrow 0$ such that the following conditions hold:
$$\frac{a_n}{a_{n+1}} \le 2, \qquad (2.60)$$
$$\|f_\delta - F(0)\| \le \frac{a_0^3}{\lambda}, \qquad (2.61)$$
$$\frac{M_1}{\lambda} \le \|y\|, \qquad (2.62)$$
$$\frac{c_0(M_1 + a_0)}{\lambda} \le \frac12, \qquad (2.63)$$
$$\frac{\tilde\alpha a_n^4}{2\lambda} + c_1\frac{a_n - a_{n+1}}{a_{n+1}} \le \frac{a_{n+1}^2}{2\lambda}. \qquad (2.64)$$
It is shown in the proof of Lemma 12 in [9] that a sequence $(a_n)_{n=0}^\infty$ satisfying conditions (2.60)–(2.64) can be chosen of the form $a_n = \frac{d}{(c+n)^b}$, where $c \ge 1$, $0 < b \le \frac14$, and $d$ is sufficiently large.

Assume that the equation $F(u) = f$ has a solution in $B(u_0,R)$, possibly nonunique, and that $y$ is the minimal-norm solution to this equation. Let $f$ be unknown but $f_\delta$ be given, with $\|f_\delta - f\| \le \delta$. We prove the following result:

Theorem 13. Assume $a_n = \frac{d}{(c+n)^b}$, where $c \ge 1$, $0 < b \le \frac14$, and $d$ is sufficiently large so that conditions (2.60)–(2.64) hold. Let $u_n$ be defined by (2.57). Assume that $u_0$ is chosen so that $\|F(u_0) - f_\delta\| > C_1\delta^\zeta > \delta$ and
$$g_0 := \|u_0 - V_0\| \le \frac{\|F(0) - f_\delta\|}{a_0}. \qquad (2.65)$$
Then there exists a unique $n_\delta$ such that
$$\|F(u_{n_\delta}) - f_\delta\| \le C_1\delta^\zeta, \quad C_1\delta^\zeta < \|F(u_n) - f_\delta\|, \quad n < n_\delta, \qquad (2.66)$$
where $C_1 > 1$, $0 < \zeta \le 1$. Let $0 < (\delta_m)_{m=1}^\infty$ be a sequence such that $\delta_m \to 0$. If the sequence $\{n_m := n_{\delta_m}\}_{m=1}^\infty$ is bounded and $\{n_{m_j}\}_{j=1}^\infty$ is a convergent subsequence, then
$$\lim_{j\to\infty} u_{n_{m_j}} = \tilde u, \qquad (2.67)$$
where $\tilde u$ is a solution to the equation $F(u) = f$. If
$$\lim_{m\to\infty} n_m = \infty, \qquad (2.68)$$
and $\zeta \in (0,1)$, then
$$\lim_{m\to\infty}\|u_{n_m} - y\| = 0. \qquad (2.69)$$
It is pointed out in Remark 9 that inequality (2.65) is satisfied with $u_0 = 0$.
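A scalar sketch of the gradient-type scheme (2.57), with step sizes admissible for (2.58), follows; the bound $M_1$, the schedule $a_n$, and all constants are illustrative assumptions, not the paper's values:

```python
# Scalar sketch of the gradient-type scheme (2.57), with step sizes alpha_n
# admissible for (2.58). F(u) = u + u^3 is a monotone stand-in; M1, a_n, delta,
# C1, zeta are illustrative assumptions, not constants from the paper.
def F(u):
    return u + u ** 3

def Fp(u):
    return 1.0 + 3.0 * u ** 2

f = 10.0                       # exact data; minimal-norm solution y = 2
delta = 0.1
f_delta = f + delta
d, c, b = 1.0, 1.0, 0.25       # a_n = d/(c+n)^b with b in (0, 1/4]
C1, zeta = 1.5, 0.9
M1 = 20.0                      # assumed bound on |F'| on the relevant ball

u, n = 0.0, 0
while abs(F(u) - f_delta) > C1 * delta ** zeta:
    a = d / (c + n) ** b
    alpha = 2.0 / (a ** 2 + (M1 + a) ** 2)        # upper end allowed by (2.58)
    An = Fp(u) + a             # A_n = F'(u_n) + a_n I; scalar, so A_n* = A_n
    u = u - alpha * An * (F(u) + a * u - f_delta)
    n += 1
print(n, u)
```

Compared with the Newton-type sketch, no division by $A_n$ is performed, at the price of many more (cheap) iterations.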

2.3.3. A simple iteration method

In this section we assume that $F$ is a monotone operator, Fréchet differentiable. Consider the iterative scheme
$$u_{n+1} = u_n - \alpha_n[F(u_n) + a_nu_n - f_\delta], \quad u_0 = u_0, \qquad (2.70)$$
where $u_0$ is chosen so that inequality (2.77) holds, and $\{\alpha_n\}_{n=1}^\infty$ is a positive sequence such that
$$0 < \tilde\alpha \le \alpha_n \le \frac{2}{a_n + (M_1 + a_n)}, \quad M_1(u_0,R) = \sup_{u\in B(u_0,R)}\|F'(u)\|. \qquad (2.71)$$
It follows from this condition that
$$\|1 - \alpha_n(J_n + a_n)\| = \sup_{a_n \le s \le M_1 + a_n}|1 - \alpha_ns| \le 1 - \alpha_na_n. \qquad (2.72)$$
Here $J_n$ is an operator in $H$ such that $J_n = J_n^* \ge 0$ and $\|J_n\| \le M_1$, $u \in B(u_0,R)$. A specific choice of $J_n$ is made in formula (4.186) below.

Lemma 14 ([10], Lemma 12). Suppose $M_1$, $c_1$, and $\tilde\alpha$ are positive constants and $0 \ne y \in H$. Then there exist a number $\lambda > 0$ and a sequence $0 < (a_n)_{n=0}^\infty \searrow 0$ such that the following conditions hold:
$$\frac{a_n}{a_{n+1}} \le 2, \qquad (2.73)$$
$$\|f_\delta - F(0)\| \le \frac{a_0^2}{\lambda}, \qquad (2.74)$$
$$\frac{M_1}{\lambda} \le \|y\|, \qquad (2.75)$$
$$\frac{\tilde\alpha a_n^2}{2\lambda} + c_1\frac{a_n - a_{n+1}}{a_{n+1}} \le \frac{a_{n+1}}{2\lambda}. \qquad (2.76)$$
It is shown in the proof of Lemma 12 in [10] that conditions (2.73)–(2.76) hold for the sequence $a_n = \frac{d}{(c+n)^b}$, where $c \ge 1$, $0 < b \le \frac12$, and $d$ is sufficiently large.

Let $a_n$ and $\lambda$ satisfy conditions (2.73)–(2.76). Assume that the equation $F(u) = f$ has a solution $y \in B(u_0,R)$, possibly nonunique, and that $y$ is the minimal-norm solution to this equation. Let $f$ be unknown but $f_\delta$ be given, with $\|f_\delta - f\| \le \delta$. We prove the following result:

Theorem 15. Assume that $F$ is a Fréchet differentiable monotone operator and $F'$ is selfadjoint. Assume $a_n = \frac{d}{(c+n)^b}$, where $c \ge 1$, $0 < b \le \frac12$, and $d$ is sufficiently large so that conditions (2.73)–(2.76) hold. Let $u_n$ be defined by (2.70). Assume that $u_0$ is chosen so that $\|F(u_0) - f_\delta\| > C_1\delta^\zeta > \delta$ and
$$g_0 := \|u_0 - V_0\| \le \frac{\|F(0) - f_\delta\|}{a_0}. \qquad (2.77)$$
Then there exists a unique $n_\delta$ such that
$$\|F(u_{n_\delta}) - f_\delta\| \le C_1\delta^\zeta, \quad C_1\delta^\zeta < \|F(u_n) - f_\delta\|, \quad n < n_\delta, \qquad (2.78)$$

where $C_1 > 1$, $0 < \zeta \le 1$. Let $0 < (\delta_m)_{m=1}^\infty$ be a sequence such that $\delta_m \to 0$. If the sequence $\{n_m := n_{\delta_m}\}_{m=1}^\infty$ is bounded and $\{n_{m_j}\}_{j=1}^\infty$ is a convergent subsequence, then
$$\lim_{j\to\infty} u_{n_{m_j}} = \tilde u, \qquad (2.79)$$
where $\tilde u$ is a solution to the equation $F(u) = f$. If
$$\lim_{m\to\infty} n_m = \infty, \qquad (2.80)$$
and $\zeta \in (0,1)$, then
$$\lim_{m\to\infty}\|u_{n_m} - y\| = 0. \qquad (2.81)$$

Remark 8. If $H$ is a complex Hilbert space, then a bounded non-negative-definite operator $A = F'$, the Fréchet derivative of a monotone operator $F$, is selfadjoint: if $A$ is a bounded linear operator defined on all of $H$ and $\langle Au, u\rangle \ge 0$ for all $u \in H$, then $A$ is selfadjoint. This is not true, in general, if $H$ is a real Hilbert space. Example: $H = \mathbb{R}^2$, $A$ is the matrix
$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.$$
Then $A$ is not selfadjoint, but $\langle Au, u\rangle = u_1^2 + u_1u_2 + u_2^2 \ge 0$ for all $u_1, u_2 \in \mathbb{R}$.

Remark 9. In Theorems 11–15 we choose $u_0 \in H$ such that
$$g_0 := \|u_0 - V_0\| \le \frac{\|F(0) - f_\delta\|}{a_0}. \qquad (2.82)$$
It is easy to choose $u_0$ satisfying this condition. Indeed, if, for example, $u_0 = 0$, then by Lemma 20 in Sec. 3.2 (see below) one gets
$$g_0 = \|V_0\| = \frac{a_0\|V_0\|}{a_0} \le \frac{\|F(0) - f_\delta\|}{a_0}. \qquad (2.83)$$
If (2.82) and either (2.48) or (2.74) hold, then
$$g_0 \le \frac{a_0}{\lambda}. \qquad (2.84)$$
This inequality is used in the proof of Theorems 11 and 15. If (2.82) and (2.61) hold, then
$$g_0 \le \frac{a_0^2}{\lambda}. \qquad (2.85)$$
This inequality is used in the proof of Theorem 13.

2.4. Nonlinear inequalities

2.4.1. A nonlinear differential inequality

In [15] the following differential inequality
$$\dot g(t) \le -\gamma(t)g(t) + \alpha(t)g^2(t) + \beta(t), \quad t \ge \tau_0, \qquad (2.86)$$

was studied and applied to various evolution problems. In (2.86), $\alpha(t)$, $\beta(t)$, $\gamma(t)$, and $g(t)$ are continuous non-negative functions on $[\tau_0, \infty)$, where $\tau_0$ is a fixed number. In [15] an upper bound for $g(t)$ is obtained under some conditions on $\alpha$, $\beta$, $\gamma$. In [12] the following generalization of (2.86) is studied:
$$\dot g(t) \le -\gamma(t)g(t) + \alpha(t)g^p(t) + \beta(t), \quad t \ge \tau_0, \quad p > 1. \qquad (2.87)$$
We have the following result:

Theorem 16 ([12], Theorem 1). Let $\alpha(t)$, $\beta(t)$, and $\gamma(t)$ be continuous functions on $[\tau_0,\infty)$ and $\alpha(t) > 0$, $t \ge \tau_0$. Suppose there exists a function $\mu(t) > 0$, $\mu \in C^1[\tau_0,\infty)$, such that
$$\frac{\alpha(t)}{\mu^p(t)} + \beta(t) \le \frac{1}{\mu(t)}\Big[\gamma(t) - \frac{\dot\mu(t)}{\mu(t)}\Big]. \qquad (2.88)$$
Let $g(t) \ge 0$ be a solution to inequality (2.87) such that
$$\mu(\tau_0)g(\tau_0) < 1. \qquad (2.89)$$
Then $g(t)$ exists globally and the following estimate holds:
$$0 \le g(t) < \frac{1}{\mu(t)}, \quad \forall t \ge \tau_0. \qquad (2.90)$$
Consequently, if $\lim_{t\to\infty}\mu(t) = \infty$, then
$$\lim_{t\to\infty} g(t) = 0. \qquad (2.91)$$
Theorem 16 remains valid if the sign $<$ in (2.89) and (2.90) is replaced by the sign $\le$ (see Theorem 2 in [12]). When $p = 2$ we have the following corollary:

Corollary 17 ([15], p. 97). Suppose there exists a monotonically growing function $\mu(t)$, $\mu \in C^1[\tau_0,\infty)$, $\mu > 0$, $\lim_{t\to\infty}\mu(t) = \infty$, such that
$$0 \le \alpha(t) \le \frac{\mu(t)}{2}\Big[\gamma(t) - \frac{\dot\mu(t)}{\mu(t)}\Big], \qquad (2.92)$$
$$\beta(t) \le \frac{1}{2\mu(t)}\Big[\gamma(t) - \frac{\dot\mu(t)}{\mu(t)}\Big], \quad \dot u := \frac{du}{dt}, \qquad (2.93)$$
where $\alpha(t)$, $\beta(t)$, and $\gamma(t)$ are continuous non-negative functions on $[\tau_0,\infty)$, $\tau_0 \ge 0$. Let $g(t) \ge 0$ be a solution to inequality (2.87) such that
$$\mu(\tau_0)g(\tau_0) < 1. \qquad (2.94)$$
Then $g(t)$ exists globally and the following estimate holds:
$$0 \le g(t) < \frac{1}{\mu(t)}, \quad \forall t \ge \tau_0. \qquad (2.95)$$

Consequently, if $\lim_{t\to\infty}\mu(t) = \infty$, then $\lim_{t\to\infty} g(t) = 0$.

2.4.2. A discrete version of the nonlinear inequality

Theorem 18 ([12], Theorem 4). Let $\alpha_n$, $\gamma_n$, $\beta_n$, and $g_n$ be non-negative sequences of numbers such that the following inequality holds:
$$\frac{g_{n+1} - g_n}{h_n} \le -\gamma_ng_n + \alpha_ng_n^p + \beta_n, \quad h_n > 0, \quad 0 < h_n\gamma_n < 1, \qquad (2.96)$$
or, equivalently,
$$g_{n+1} \le g_n(1 - h_n\gamma_n) + \alpha_nh_ng_n^p + h_n\beta_n, \quad h_n > 0, \quad 0 < h_n\gamma_n < 1. \qquad (2.97)$$
If there is a monotonically growing sequence of positive numbers $(\mu_n)_{n=1}^\infty$ such that the following conditions hold:
$$\frac{\alpha_n}{\mu_n^p} + \beta_n \le \frac{1}{\mu_n}\Big(\gamma_n - \frac{\mu_{n+1} - \mu_n}{\mu_nh_n}\Big), \qquad (2.98)$$
$$g_0 \le \frac{1}{\mu_0}, \qquad (2.99)$$
then
$$0 \le g_n \le \frac{1}{\mu_n}, \quad \forall n \ge 0. \qquad (2.100)$$
Therefore, if $\lim_{n\to\infty}\mu_n = \infty$, then $\lim_{n\to\infty} g_n = 0$.

3. Auxiliary Results

3.1. Auxiliary results from the theory of monotone operators

Recall the following result (see, e.g., [15], p. 112):

Lemma 19. Assume that equation (1.1) is solvable, $y$ is its minimal-norm solution, assumption (1.2) holds, and $F$ is continuous. Then
$$\lim_{a\to0}\|V_a - y\| = 0, \qquad (3.1)$$
where $V_a := V_{0,a}$ solves equation (2.1) with $\delta = 0$.

3.2. Auxiliary results for the regularized equation (2.1)

Lemma 20 ([11], Lemma 2). Assume $\|F(0) - f_\delta\| > 0$. Let $a > 0$, and let $F$ be monotone. Denote $\psi(a) := \|V_{\delta,a}\|$ and $\phi(a) := a\psi(a) = \|F(V_{\delta,a}) - f_\delta\|$, where $V_{\delta,a}$ solves (2.1). Then $\psi(a)$ is decreasing and $\phi(a)$ is increasing (in the strict sense).
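Theorem 18 can be checked numerically on a concrete instance. The sequences below ($p = 2$, constant $\gamma_n$, $\mu_n = 10\cdot1.01^n$, $\beta_n = 0.2/\mu_n$) are chosen only so that conditions (2.96)–(2.99) hold; they come from no particular application:

```python
# Numerical check of Theorem 18 (p = 2) on one concrete instance. The sequences
# below are chosen only so that conditions (2.96)-(2.99) hold; they come from
# no particular application.
p = 2
h, gamma_n, alpha_n = 1.0, 0.5, 1.0             # h_n = 1, h_n*gamma_n = 0.5 < 1
N = 1000
mu = [10.0 * 1.01 ** n for n in range(N + 1)]   # mu_n -> infinity
beta = [0.2 / mu[n] for n in range(N)]

for n in range(N):                               # verify condition (2.98)
    lhs = alpha_n / mu[n] ** p + beta[n]
    rhs = (1.0 / mu[n]) * (gamma_n - (mu[n + 1] - mu[n]) / (mu[n] * h))
    assert lhs <= rhs

g = [1.0 / mu[0]]                                # g_0 = 1/mu_0, borderline of (2.99)
for n in range(N):                               # iterate (2.97) with equality,
    g.append(g[n] * (1.0 - h * gamma_n) + alpha_n * h * g[n] ** p + h * beta[n])

# Conclusion (2.100): g_n <= 1/mu_n for all n, hence g_n -> 0.
print(g[-1], 1.0 / mu[-1])
```

Iterating (2.97) with equality is the worst case the inequality allows, so the observed bound $g_n \le 1/\mu_n$ illustrates the sharpness of the theorem's mechanism.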

Lemma 21 ([11], Lemma 3). If $F$ is monotone and continuous, then $\|V_{\delta,a}\| = O(\frac{1}{a})$ as $a \to \infty$, and
$$\lim_{a\to\infty}\|F(V_{\delta,a}) - f_\delta\| = \|F(0) - f_\delta\|. \qquad (3.2)$$

Lemma 22 ([11], Lemma 4). Let $C > 0$ and $\gamma \in (0,1]$ be constants such that $C\delta^\gamma > \delta$. Suppose that $\|F(0) - f_\delta\| > C\delta^\gamma$. Then there exists a unique $a(\delta) > 0$ such that $\|F(V_{\delta,a(\delta)}) - f_\delta\| = C\delta^\gamma$.

Lemma 23. If $F$ is monotone and $a \ge 0$, then
$$\max\big(\|F(u) - F(v)\|,\, a\|u - v\|\big) \le \|F(u) - F(v) + a(u - v)\|, \quad \forall u, v \in H. \qquad (3.3)$$

Proof. Denote
$$w := F(u) - F(v) + a(u - v), \quad h := \|w\|. \qquad (3.4)$$
Since $\langle F(u) - F(v), u - v\rangle \ge 0$, one obtains from the two equations
$$\langle w, u - v\rangle = \langle F(u) - F(v) + a(u - v), u - v\rangle \qquad (3.5)$$
and
$$\langle w, F(u) - F(v)\rangle = \|F(u) - F(v)\|^2 + a\langle u - v, F(u) - F(v)\rangle \qquad (3.6)$$
the following inequalities:
$$a\|u - v\|^2 \le \langle w, u - v\rangle \le \|u - v\|h \qquad (3.7)$$
and
$$\|F(u) - F(v)\|^2 \le \langle w, F(u) - F(v)\rangle \le h\|F(u) - F(v)\|. \qquad (3.8)$$
Inequalities (3.7) and (3.8) imply
$$a\|u - v\| \le h, \quad \|F(u) - F(v)\| \le h. \qquad (3.9)$$
Lemma 23 is proved.

Lemma 24. Let $t_0 > 0$ satisfy
$$\frac{\delta}{a(t_0)} = \frac{1}{C-1}\|y\|, \quad C > 1. \qquad (3.10)$$
Then
$$\|F(V(t_0)) - f_\delta\| \le C\delta, \qquad (3.11)$$
and
$$\|\dot V\| \le \frac{|\dot a|}{a}\|y\|\Big(1 + \frac{1}{C-1}\Big), \quad t \le t_0. \qquad (3.12)$$

Proof. This $t_0$ exists and is unique since $a(t) > 0$ monotonically decays to $0$ as $t \to \infty$. Since $a(t) > 0$ monotonically decays, one has
$$\frac{\delta}{a(t)} \le \frac{1}{C-1}\|y\|, \quad 0 \le t \le t_0. \qquad (3.13)$$
By Lemma 22 there exists $t_1 > 0$ such that
$$\|F(V(t_1)) - f_\delta\| = C\delta, \quad F(V(t_1)) + a(t_1)V(t_1) - f_\delta = 0. \qquad (3.14)$$
We claim that $t_1 \in [0, t_0]$. Indeed, from (3.14) and (3.30) one gets
$$C\delta = a(t_1)\|V(t_1)\| \le a(t_1)\Big(\|y\| + \frac{\delta}{a(t_1)}\Big) = a(t_1)\|y\| + \delta, \quad C > 1,$$
so $\delta \le \frac{a(t_1)\|y\|}{C-1}$. Thus, $a(t_1) \ge \frac{(C-1)\delta}{\|y\|} = a(t_0)$. Since $a(t) \searrow 0$, one has $t_1 \le t_0$. It follows from the inequality $t_1 \le t_0$, Lemma 20, and the first equality in (3.14) that $\|F(V(t_0)) - f_\delta\| \le \|F(V(t_1)) - f_\delta\| = C\delta$.

Differentiating both sides of (2.12) with respect to $t$, one obtains
$$A_{a(t)}\dot V = -\dot aV.$$
This and the relations $A_a := F'(u) + aI$, $F'(u) := A \ge 0$, imply
$$\|\dot V\| \le |\dot a|\,\|A_{a(t)}^{-1}V\| \le \frac{|\dot a|}{a}\|V\| \le \frac{|\dot a|}{a}\Big(\|y\| + \frac{\delta}{a}\Big) \le \frac{|\dot a|}{a}\|y\|\Big(1 + \frac{1}{C-1}\Big), \quad t \le t_0. \qquad (3.15)$$
Lemma 24 is proved.

Lemma 25. Let $n_0 > 0$ satisfy the inequality
$$\frac{\delta}{a_{n_0+1}} > \frac{1}{C-1}\|y\| \ge \frac{\delta}{a_{n_0}}, \quad C > 1. \qquad (3.16)$$
Then
$$\|F(V_{n_0+1}) - f_\delta\| \le C\delta, \qquad (3.17)$$
$$\|V_n\| \le \|y\|\Big(1 + \frac{2}{C-1}\Big), \quad 0 \le n \le n_0 + 1, \qquad (3.18)$$

and
$$\|V_n - V_{n+1}\| \le \frac{a_n - a_{n+1}}{a_{n+1}}\|y\|\Big(1 + \frac{2}{C-1}\Big), \quad 0 \le n \le n_0. \qquad (3.19)$$

Proof. The number $n_0$ satisfying (3.16) exists and is unique since $a_n > 0$ monotonically decays to $0$ as $n \to \infty$. One has $\frac{a_n}{a_{n+1}} \le 2$, $n \ge 0$. This and inequality (3.16) imply
$$\frac{2\delta}{a_{n_0}} \ge \frac{\delta}{a_{n_0+1}} > \frac{1}{C-1}\|y\| \ge \frac{\delta}{a_{n_0}}, \quad C > 1. \qquad (3.20)$$
Thus,
$$\frac{2}{C-1}\|y\| > \frac{\delta}{a_n}, \quad n \le n_0 + 1. \qquad (3.21)$$
It follows from Lemma 22 that there exists $n_1 > 0$ such that
$$\|F(V_{n_1+1}) - f_\delta\| \le C\delta < \|F(V_{n_1}) - f_\delta\|, \qquad (3.22)$$
where $V_n$ solves the equation $F(V_n) + a_nV_n - f_\delta = 0$. We claim that $n_1 \in [0, n_0]$. Indeed, one has $\|F(V_{n_1}) - f_\delta\| = a_{n_1}\|V_{n_1}\|$ and $\|V_{n_1}\| \le \|y\| + \frac{\delta}{a_{n_1}}$ (cf. (3.30)), so
$$C\delta < a_{n_1}\|V_{n_1}\| \le a_{n_1}\Big(\|y\| + \frac{\delta}{a_{n_1}}\Big) = a_{n_1}\|y\| + \delta, \quad C > 1. \qquad (3.23)$$
Therefore,
$$\delta < \frac{a_{n_1}\|y\|}{C-1}. \qquad (3.24)$$
From (3.24) and (3.16) one gets
$$\frac{\delta}{a_{n_1}} < \frac{\|y\|}{C-1} < \frac{\delta}{a_{n_0+1}}. \qquad (3.25)$$
Since $a_n$ decreases monotonically, inequality (3.25) implies $n_1 \le n_0$. This, the first inequality in (3.22), and Lemma 20 imply
$$\|F(V_{n_0+1}) - f_\delta\| \le \|F(V_{n_1+1}) - f_\delta\| \le C\delta. \qquad (3.26)$$
One has
$$a_{n+1}\|V_n - V_{n+1}\|^2 \le \langle (a_{n+1} - a_n)V_n - F(V_n) + F(V_{n+1}), V_n - V_{n+1}\rangle \le \langle (a_{n+1} - a_n)V_n, V_n - V_{n+1}\rangle \le (a_n - a_{n+1})\|V_n\|\|V_n - V_{n+1}\|, \quad n \ge 0. \qquad (3.27)$$
By (3.30), $\|V_n\| \le \|y\| + \frac{\delta}{a_n}$, and by (3.21), $\frac{\delta}{a_n} \le \frac{2\|y\|}{C-1}$ for all $n \le n_0 + 1$. This implies (3.18). From (3.18) and (3.27) one obtains
$$\|V_n - V_{n+1}\| \le \frac{a_n - a_{n+1}}{a_{n+1}}\|V_n\| \le \frac{a_n - a_{n+1}}{a_{n+1}}\|y\|\Big(1 + \frac{2}{C-1}\Big), \quad n \le n_0 + 1. \qquad (3.28)$$

Lemma 25 is proved.

Lemma 26. Let $V_a := V_{\delta,a}\big|_{\delta=0}$, so $F(V_a) + aV_a - f = 0$. Let $y$ be the minimal-norm solution to equation (1.1). Then
\[
\|V_{\delta,a} - V_a\| \le \frac{\delta}{a}, \qquad \|V_a\| \le \|y\|, \qquad \forall a>0, \tag{3.29}
\]
and
\[
\|V_{\delta,a}\| \le \|V_a\| + \frac{\delta}{a} \le \|y\| + \frac{\delta}{a}, \qquad \forall a>0. \tag{3.30}
\]

Proof. From (2.1) one gets
\[
F(V_{\delta,a}) - F(V_a) + a(V_{\delta,a} - V_a) = f_\delta - f.
\]
Multiply this equality by $V_{\delta,a} - V_a$ and use (1.2) to obtain
\[
\delta\|V_{\delta,a} - V_a\| \ge \bigl\langle f_\delta - f,\, V_{\delta,a} - V_a\bigr\rangle = \bigl\langle F(V_{\delta,a}) - F(V_a) + a(V_{\delta,a} - V_a),\, V_{\delta,a} - V_a\bigr\rangle \ge a\|V_{\delta,a} - V_a\|^2.
\]
This implies the first inequality in (3.29). Let us derive a uniform, with respect to $a$, bound on $\|V_a\|$. From the equation
\[
F(V_a) + aV_a - F(y) = 0
\]
and the monotonicity of $F$ one gets
\[
0 = \bigl\langle F(V_a) + aV_a - F(y),\, V_a - y\bigr\rangle \ge a\bigl\langle V_a,\, V_a - y\bigr\rangle.
\]
This implies the desired bound:
\[
\|V_a\| \le \|y\|, \qquad \forall a>0. \tag{3.31}
\]
Similar arguments can be found in [15]. Inequalities (3.30) follow from (3.29), (3.31) and the triangle inequality. Lemma 26 is proved.

Lemma 27 ([8], Lemma 2.11). Let $a(t) = \dfrac{d}{(c+t)^b}$, where $d, c, b > 0$, $c \ge 6b$. One has
\[
e^{-t/2}\int_0^t e^{s/2}\,|\dot a(s)|\,\|V_\delta(s)\|\,ds \le \tfrac{1}{2}\,a(t)\|V_\delta(t)\|, \qquad \forall t\ge 0. \tag{3.32}
\]

Lemma 28 ([9], Lemma 9). Let $a(t) = \dfrac{d}{(c+t)^b}$, where $b\in(0,\tfrac14]$, $d^2 c^{1-2b} \ge 6b$. Define $\varphi(t) = \int_0^t \frac{a^2(s)}{2}\,ds$. Then one has
\[
e^{-\varphi(t)}\int_0^t e^{\varphi(s)}\,|\dot a(s)|\,\|V(s)\|\,ds \le \tfrac{1}{2}\,a(t)\|V(t)\|. \tag{3.33}
\]
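The stability bounds of Lemma 26 are easy to check numerically in one dimension. The sketch below is illustrative only: the monotone operator $F(u)=u^3+u$, the solution $y$ and the noise level $\delta$ are made-up test data, not taken from the paper.

```python
def F(u):
    # A monotone operator on R (illustrative choice, not from the paper):
    # F'(u) = 3u^2 + 1 > 0, so F is strictly monotone.
    return u**3 + u

def solve_regularized(a, rhs, lo=-10.0, hi=10.0):
    """Solve F(V) + a*V = rhs by bisection; u -> F(u) + a*u is strictly increasing."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if F(mid) + a * mid > rhs:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

y = 0.7                  # exact solution of F(u) = f (made-up test value)
delta = 1e-3             # noise level
f_delta = F(y) + delta   # noisy data with |f_delta - f| = delta

checks = []
for a in (1.0, 0.1, 0.01):
    V_a = solve_regularized(a, F(y))        # V_a  (the case delta = 0)
    V_da = solve_regularized(a, f_delta)    # V_{delta,a}
    checks.append(abs(V_da - V_a) <= delta / a + 1e-9   # first bound in (3.29)
                  and abs(V_a) <= abs(y) + 1e-9)        # second bound in (3.29)
```

Both bounds in (3.29) hold for each tested value of $a$; the first bound $\delta/a$ visibly deteriorates as $a\to 0$, which is why the choice of $a(\delta)$ matters.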

Lemma 29 ([10], Lemma 9). Let $a(t) = \dfrac{d}{(c+t)^b}$, where $b\in(0,\tfrac12]$, $d c^{1-b} \ge 6b$. Define $\varphi(t) = \int_0^t \frac{a(s)}{2}\,ds$. Then one has
\[
e^{-\varphi(t)}\int_0^t e^{\varphi(s)}\,|\dot a(s)|\,\|V(s)\|\,ds \le \tfrac{1}{2}\,a(t)\|V(t)\|. \tag{3.34}
\]

4. Proofs of the Basic Results

4.1. Proofs of the discrepancy principles

Proof of Theorem 1.

Proof. The existence and uniqueness of $a(\delta)$ follow from Lemma 22. Let us show that
\[
\lim_{\delta\to 0} a(\delta) = 0. \tag{4.1}
\]
The triangle inequality, the first inequality in (3.29), equality (2.2) and equality (2.1) imply
\[
a(\delta)\|V_{a(\delta)}\| \le a(\delta)\bigl(\|V_{\delta,a(\delta)} - V_{a(\delta)}\| + \|V_{\delta,a(\delta)}\|\bigr) \le \delta + a(\delta)\|V_{\delta,a(\delta)}\| = \delta + C\delta^\gamma, \tag{4.2}
\]
where $V_a$ solves (2.1) with $\delta = 0$. From inequality (4.2), one gets
\[
\lim_{\delta\to 0} a(\delta)\|V_{a(\delta)}\| = 0. \tag{4.3}
\]
It follows from Lemma 20 with $f_\delta = f$, i.e., $\delta = 0$, that the function $\phi_0(a) := a\|V_a\|$ is non-negative and strictly increasing on $(0,\infty)$. This and relation (4.3) imply:
\[
\lim_{\delta\to 0} a(\delta) = 0. \tag{4.4}
\]
From (2.2) and (3.30), one gets
\[
C\delta^\gamma = a(\delta)\|V_{\delta,a(\delta)}\| \le a(\delta)\|y\| + \delta. \tag{4.5}
\]
Thus, one gets:
\[
C\delta^\gamma - \delta \le a(\delta)\|y\|. \tag{4.6}
\]
If $\gamma < 1$ then $C - \delta^{1-\gamma} > 0$ for sufficiently small $\delta$. This implies:
\[
0 \le \lim_{\delta\to 0}\frac{\delta}{a(\delta)} \le \lim_{\delta\to 0}\frac{\delta^{1-\gamma}\|y\|}{C - \delta^{1-\gamma}} = 0. \tag{4.7}
\]
By the triangle inequality and the first inequality in (3.29), one has
\[
\|V_{\delta,a(\delta)} - y\| \le \|V_{a(\delta)} - y\| + \|V_{a(\delta)} - V_{\delta,a(\delta)}\| \le \|V_{a(\delta)} - y\| + \frac{\delta}{a(\delta)}. \tag{4.8}
\]
Relation (2.3) follows from (4.4), (4.7), (4.8) and Lemma 19.
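The a posteriori choice of Theorem 1, $a(\delta)\|V_{\delta,a(\delta)}\| = C\delta^\gamma$, can be implemented by bisection over $a$, since $\phi(a) = a\|V_{\delta,a}\|$ is strictly increasing (Lemma 20). A minimal one-dimensional sketch follows; the operator $F(u)=u^3+u$, the solution $y$ and the constants $C$, $\gamma$ are illustrative choices, not taken from the paper.

```python
def F(u):
    return u**3 + u   # monotone on R; illustrative choice, not from the paper

def solve_regularized(a, rhs, lo=-10.0, hi=10.0):
    """Solve F(V) + a*V = rhs by bisection (the map is strictly increasing)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if F(mid) + a * mid > rhs:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def discrepancy_a(f_delta, delta, C=1.5, gamma=0.9, a_lo=1e-8, a_hi=10.0):
    """Find a = a(delta) with a*|V_{delta,a}| = C*delta**gamma.
    phi(a) = a*|V_{delta,a}| is strictly increasing in a, so bisection applies."""
    target = C * delta**gamma
    for _ in range(200):
        a = (a_lo * a_hi) ** 0.5          # bisect on a logarithmic scale
        if a * abs(solve_regularized(a, f_delta)) > target:
            a_hi = a
        else:
            a_lo = a
    return (a_lo * a_hi) ** 0.5

y = 0.7                                   # exact solution (made-up test value)
errors = []
for delta in (1e-2, 1e-3, 1e-4):
    f_delta = F(y) + delta                # noisy data, |f_delta - f| = delta
    a = discrepancy_a(f_delta, delta)
    errors.append(abs(solve_regularized(a, f_delta) - y))
```

As $\delta$ decreases, the reconstruction error $|V_{\delta,a(\delta)} - y|$ decreases, which is the content of relation (2.3).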

Proof of Theorem 3.

Proof. By Lemma 23,
\[
a\|u - v\| \le \|F(u) - F(v) + au - av\|, \qquad \forall u, v\in H,\ \forall a>0. \tag{4.9}
\]
Using inequality (4.9) with $v = v_\delta$ and $u = V_{\delta,\alpha(\delta)}$, equation (1.4) with $a = \alpha(\delta)$, and inequality (2.9), one gets
\[
\alpha(\delta)\|v_\delta - V_{\delta,\alpha(\delta)}\| \le \|F(v_\delta) - F(V_{\delta,\alpha(\delta)}) + \alpha(\delta)v_\delta - \alpha(\delta)V_{\delta,\alpha(\delta)}\| = \|F(v_\delta) + \alpha(\delta)v_\delta - f_\delta\| \le \theta\delta. \tag{4.10}
\]
Therefore,
\[
\|v_\delta - V_{\delta,\alpha(\delta)}\| \le \frac{\theta\delta}{\alpha(\delta)}. \tag{4.11}
\]
Using the triangle inequality, (3.30) and (4.11), one gets:
\[
\alpha(\delta)\|v_\delta\| \le \alpha(\delta)\|V_{\delta,\alpha(\delta)}\| + \alpha(\delta)\|v_\delta - V_{\delta,\alpha(\delta)}\| \le \alpha(\delta)\|y\| + \delta + \theta\delta. \tag{4.12}
\]
From the triangle inequality and inequalities (2.9) and (2.10) one obtains:
\[
\alpha(\delta)\|v_\delta\| \ge \|F(v_\delta) - f_\delta\| - \|F(v_\delta) + \alpha(\delta)v_\delta - f_\delta\| \ge C_1\delta^\gamma - \theta\delta. \tag{4.13}
\]
Inequalities (4.12) and (4.13) imply
\[
C_1\delta^\gamma - \theta\delta \le \theta\delta + \alpha(\delta)\|y\| + \delta. \tag{4.14}
\]
This inequality and the fact that $C_1 - \delta^{1-\gamma} - 2\theta\delta^{1-\gamma} > 0$ for sufficiently small $\delta$ and $0<\gamma<1$ imply
\[
\frac{\delta}{\alpha(\delta)} \le \frac{\delta^{1-\gamma}\|y\|}{C_1 - \delta^{1-\gamma} - 2\theta\delta^{1-\gamma}}, \qquad 0<\delta\ll 1. \tag{4.15}
\]
Thus, one obtains
\[
\lim_{\delta\to 0}\frac{\delta}{\alpha(\delta)} = 0. \tag{4.16}
\]
From the triangle inequality and inequalities (2.9), (2.10) and (4.11), one gets
\[
\alpha(\delta)\|V_{\delta,\alpha(\delta)}\| \le \|F(v_\delta) - f_\delta\| + \|F(v_\delta) + \alpha(\delta)v_\delta - f_\delta\| + \alpha(\delta)\|v_\delta - V_{\delta,\alpha(\delta)}\| \le C_2\delta^\gamma + \theta\delta + \theta\delta.
\]
This inequality implies
\[
\lim_{\delta\to 0} \alpha(\delta)\|V_{\delta,\alpha(\delta)}\| = 0. \tag{4.17}
\]
The triangle inequality and inequality (3.29) imply
\[
\alpha\|V_\alpha\| \le \alpha\bigl(\|V_{\delta,\alpha} - V_\alpha\| + \|V_{\delta,\alpha}\|\bigr) \le \delta + \alpha\|V_{\delta,\alpha}\|. \tag{4.18}
\]

From (4.18) and (4.17), one gets
\[
\lim_{\delta\to 0} \alpha(\delta)\|V_{\alpha(\delta)}\| = 0. \tag{4.19}
\]
It follows from Lemma 20 with $f_\delta = f$, i.e., $\delta = 0$, that the function $\phi_0(a) := a\|V_a\|$ is non-negative and strictly increasing on $(0,\infty)$. This and relation (4.19) imply
\[
\lim_{\delta\to 0} \alpha(\delta) = 0. \tag{4.20}
\]
From the triangle inequality and inequalities (4.11) and (3.29) one obtains
\[
\|v_\delta - y\| \le \|v_\delta - V_{\delta,\alpha(\delta)}\| + \|V_{\delta,\alpha(\delta)} - V_{\alpha(\delta)}\| + \|V_{\alpha(\delta)} - y\| \le \frac{\theta\delta}{\alpha(\delta)} + \frac{\delta}{\alpha(\delta)} + \|V_{\alpha(\delta)} - y\|, \tag{4.21}
\]
where $V_{\alpha(\delta)}$ solves (2.1) with $a = \alpha(\delta)$ and $f_\delta = f$. The conclusion (2.11) follows from (4.16), (4.20), (4.21) and Lemma 19. Theorem 3 is proved.

4.2. Proofs of convergence of the dynamical systems method

Proof of Theorem 5.

Proof. Denote
\[
C_1 := 2C - 1. \tag{4.22}
\]
Let
\[
w := u_\delta - V_\delta, \qquad g(t) := \|w\|. \tag{4.23}
\]
From (4.23) and (2.15) one gets
\[
\dot w = -\dot V_\delta - A_{a(t)}^{-1}\bigl[F(u_\delta) - F(V_\delta) + a(t)w\bigr]. \tag{4.24}
\]
We use Taylor's formula and get:
\[
F(u_\delta) - F(V_\delta) + aw = A_a w + K, \qquad \|K\| \le \frac{M_2}{2}\|w\|^2, \tag{4.25}
\]
where $K := F(u_\delta) - F(V_\delta) - Aw$, $M_2$ is the constant from the estimate (2.13) and $A_a := A + aI$. Multiplying (4.24) by $w$ and using (4.25) one gets
\[
g\dot g \le -g^2 + \frac{M_2}{2}\,\|A_{a(t)}^{-1}\|\,g^3 + \|\dot V_\delta\|\,g. \tag{4.26}
\]
Since $0 < a(t)\searrow 0$, there exists $t_0>0$ such that
\[
\frac{\delta}{a(t_0)} = \frac{1}{C-1}\|y\|, \qquad C>1. \tag{4.27}
\]
This and Lemma 24 imply that inequalities (3.11) and (3.12) hold. Since $g\ge 0$, inequalities (4.26) and (3.12) imply, for all $t\in[0,t_0]$, that
\[
\dot g \le -g(t) + \frac{c_0}{a(t)}\,g^2 + \frac{|\dot a(t)|}{a(t)}\,c_1, \qquad c_0 = \frac{M_2}{2},\quad c_1 = \|y\|\Bigl(1 + \frac{1}{C-1}\Bigr). \tag{4.28}
\]

Inequality (4.28) is of the type (2.87) with
\[
\gamma(t) = 1, \qquad \alpha(t) = \frac{c_0}{a(t)}, \qquad \beta(t) = c_1\frac{|\dot a|}{a(t)}. \tag{4.29}
\]
Let us check assumptions (2.92)–(2.94). Take
\[
\mu(t) = \frac{\lambda}{a(t)}, \tag{4.30}
\]
where $\lambda = \mathrm{const} > 0$ satisfies conditions (2.16)–(2.19) in Lemma 4. It follows that inequalities (2.92)–(2.94) hold. Since $u_0$ satisfies the first inequality in (2.20), one gets $g(0) \le \frac{a(0)}{\lambda}$, by Remark 9. This, inequalities (2.92)–(2.94), and Corollary 17 yield
\[
g(t) < \frac{a(t)}{\lambda}, \qquad \forall t\le t_0, \qquad g(t) := \|u_\delta(t) - V_\delta(t)\|. \tag{4.31}
\]
Therefore,
\[
\|F(u_\delta(t)) - f_\delta\| \le \|F(u_\delta(t)) - F(V_\delta(t))\| + \|F(V_\delta(t)) - f_\delta\| \le M_1 g(t) + \|F(V_\delta(t)) - f_\delta\| \le \frac{M_1 a(t)}{\lambda} + \|F(V_\delta(t)) - f_\delta\|, \qquad \forall t\le t_0. \tag{4.32}
\]
From (3.11) one gets
\[
\|F(V_\delta(t_0)) - f_\delta\| \le C\delta. \tag{4.33}
\]
This, inequality (4.32), the inequality $\frac{M_1}{\lambda} \le \|y\|$ (see (2.16)), the relation (4.27), and the definition $C_1 = 2C-1$ (see (4.22)), imply
\[
\|F(u_\delta(t_0)) - f_\delta\| \le \frac{M_1 a(t_0)}{\lambda} + C\delta \le \frac{M_1 (C-1)\delta}{\lambda\|y\|} + C\delta \le (C-1)\delta + C\delta = C_1\delta. \tag{4.34}
\]
Thus, if
\[
\|F(u_\delta(0)) - f_\delta\| > C_1\delta^\zeta, \qquad 0 < \zeta \le 1, \tag{4.35}
\]
then, by the continuity of the function $t\mapsto\|F(u_\delta(t)) - f_\delta\|$ on $[0,\infty)$, there exists $t_\delta\in(0,t_0)$ such that
\[
\|F(u_\delta(t_\delta)) - f_\delta\| = C_1\delta^\zeta \tag{4.36}
\]
for any given $\zeta\in(0,1]$, and any fixed $C_1 > 1$. Let us prove (2.22). From (4.32) with $t = t_\delta$, and from (3.30), one gets
\[
C_1\delta^\zeta \le \frac{M_1 a(t_\delta)}{\lambda} + a(t_\delta)\|V_\delta(t_\delta)\| \le \frac{M_1 a(t_\delta)}{\lambda} + \|y\|a(t_\delta) + \delta. \tag{4.37}
\]

Thus, for sufficiently small $\delta$, one gets
\[
\tilde C\delta^\zeta \le a(t_\delta)\Bigl(\frac{M_1}{\lambda} + \|y\|\Bigr), \qquad \tilde C > 0, \tag{4.38}
\]
where $\tilde C < C_1$ is a constant. Therefore,
\[
\lim_{\delta\to 0}\frac{\delta}{a(t_\delta)} \le \lim_{\delta\to 0}\frac{\delta^{1-\zeta}}{\tilde C}\Bigl(\frac{M_1}{\lambda} + \|y\|\Bigr) = 0, \qquad 0<\zeta<1. \tag{4.39}
\]
We claim that
\[
\lim_{\delta\to 0} t_\delta = \infty. \tag{4.40}
\]
Let us prove (4.40). Using (2.15), one obtains:
\[
\frac{d}{dt}\bigl(F(u_\delta) + au_\delta - f_\delta\bigr) = A_a\dot u_\delta + \dot a u_\delta = -\bigl(F(u_\delta) + au_\delta - f_\delta\bigr) + \dot a u_\delta. \tag{4.41}
\]
This and (2.12) imply:
\[
\frac{d}{dt}\bigl[F(u_\delta) - F(V_\delta) + a(u_\delta - V_\delta)\bigr] = -\bigl[F(u_\delta) - F(V_\delta) + a(u_\delta - V_\delta)\bigr] + \dot a u_\delta. \tag{4.42}
\]
Denote
\[
v := v(t) := F(u_\delta(t)) - F(V_\delta(t)) + a(t)\bigl(u_\delta(t) - V_\delta(t)\bigr), \qquad h := h(t) := \|v\|. \tag{4.43}
\]
Multiplying (4.42) by $v$, one obtains
\[
h\dot h = -h^2 + \bigl\langle v,\, \dot a(u_\delta - V_\delta)\bigr\rangle + \dot a\bigl\langle v,\, V_\delta\bigr\rangle \le -h^2 + h|\dot a|\,\|u_\delta - V_\delta\| + |\dot a|\,h\|V_\delta\|, \qquad h\ge 0. \tag{4.44}
\]
Thus,
\[
\dot h \le -h + |\dot a|\,\|u_\delta - V_\delta\| + |\dot a|\,\|V_\delta\|. \tag{4.45}
\]
Note that from inequality (3.3) one has
\[
a\|u_\delta - V_\delta\| \le h, \qquad \|F(u_\delta) - F(V_\delta)\| \le h. \tag{4.46}
\]
Inequalities (4.45) and (4.46) imply
\[
\dot h \le -h\Bigl(1 - \frac{|\dot a|}{a}\Bigr) + |\dot a|\,\|V_\delta\|. \tag{4.47}
\]
Since $1 - \frac{|\dot a|}{a} \ge \frac12$ because $c \ge 2b$, it follows from inequality (4.47) that
\[
\dot h \le -\tfrac12 h + |\dot a|\,\|V_\delta\|. \tag{4.48}
\]
Inequality (4.48) implies:
\[
h(t) \le h(0)e^{-t/2} + e^{-t/2}\int_0^t e^{s/2}\,|\dot a|\,\|V_\delta\|\,ds. \tag{4.49}
\]

From (4.49) and the second inequality in (4.46), one gets
\[
\|F(u_\delta(t)) - F(V_\delta(t))\| \le h(0)e^{-t/2} + e^{-t/2}\int_0^t e^{s/2}\,|\dot a|\,\|V_\delta\|\,ds. \tag{4.50}
\]
This and the triangle inequality imply
\[
\|F(u_\delta(t)) - f_\delta\| \ge \|F(V_\delta(t)) - f_\delta\| - \|F(V_\delta(t)) - F(u_\delta(t))\| \ge a(t)\|V_\delta(t)\| - h(0)e^{-t/2} - e^{-t/2}\int_0^t e^{s/2}\,|\dot a|\,\|V_\delta\|\,ds. \tag{4.51}
\]
By Lemma 27 one gets
\[
e^{-t/2}\int_0^t e^{s/2}\,|\dot a|\,\|V_\delta(s)\|\,ds \le \tfrac12 a(t)\|V_\delta(t)\|. \tag{4.52}
\]
From the second inequality in (2.20), one gets
\[
h(0)e^{-t/2} \le \tfrac14 a(0)\|V_\delta(0)\|\,e^{-t/2}, \qquad t\ge 0. \tag{4.53}
\]
Since $a(t) = \frac{d}{(c+t)^b}$, $b\in(0,1]$, $c\ge 1$, $2b < c$, one gets
\[
e^{-t/2}a(0) \le a(t). \tag{4.54}
\]
Therefore,
\[
e^{-t/2}h(0) \le \tfrac14 a(t)\|V_\delta(0)\| \le \tfrac14 a(t)\|V_\delta(t)\|, \qquad t\ge 0, \tag{4.55}
\]
where we have used the inequality $\|V_\delta(t)\| \le \|V_\delta(t')\|$ for $t < t'$, established in Lemma 20. From (4.36) and (4.51), (4.52), (4.55), one gets
\[
C_1\delta^\zeta = \|F(u_\delta(t_\delta)) - f_\delta\| \ge \tfrac14 a(t_\delta)\|V_\delta(t_\delta)\|. \tag{4.56}
\]
It follows from the triangle inequality and the first inequality in (3.29) that
\[
a(t)\|V(t)\| \le a(t)\|V_\delta(t)\| + \delta.
\]
This and (4.56) imply
\[
0 \le \lim_{\delta\to 0} a(t_\delta)\|V(t_\delta)\| \le \lim_{\delta\to 0}\bigl(4C_1\delta^\zeta + \delta\bigr) = 0. \tag{4.57}
\]
Since $\|V(t)\|$ increases (see Lemma 20), the above formula implies $\lim_{\delta\to 0} a(t_\delta) = 0$. Since $0 < a(t)\searrow 0$, it follows that $\lim_{\delta\to 0} t_\delta = \infty$, i.e., (4.40) holds.

It is now easy to finish the proof of Theorem 5. From the triangle inequality and inequalities (4.31) and (3.29) one obtains
\[
\|u_\delta(t_\delta) - y\| \le \|u_\delta(t_\delta) - V_\delta(t_\delta)\| + \|V_\delta(t_\delta) - V(t_\delta)\| + \|V(t_\delta) - y\| \le \frac{a(t_\delta)}{\lambda} + \frac{\delta}{a(t_\delta)} + \|V(t_\delta) - y\|. \tag{4.58}
\]

Note that $V(t) := V_\delta(t)\big|_{\delta=0}$ and $V(t)$ solves (2.12) with $\delta = 0$; in particular, $V(t_\delta) = V_{0,a(t_\delta)}$ (see equation (2.12)). From (4.39), (4.40), inequality (4.58) and Lemma 19, one obtains (2.22). Theorem 5 is proved.

Remark 10. The trajectory $u_\delta(t)$ remains in the ball $B(u_0, R) := \{u : \|u - u_0\| < R\}$ for all $t\le t_\delta$, where $R$ does not depend on $\delta$ as $\delta\to 0$. Indeed, estimates (4.31), (3.30) and (3.13) imply:
\[
\|u_\delta(t) - u_0\| \le \|u_\delta(t) - V_\delta(t)\| + \|V_\delta(t)\| + \|u_0\| \le \frac{a(0)}{\lambda} + \frac{C\|y\|}{C-1} + \|u_0\| := R, \qquad t\le t_\delta. \tag{4.59}
\]
Here we have used the fact that $t_\delta < t_0$ (see Lemma 24). Since one can choose $a(t)$ and $\lambda$ so that $\frac{a(0)}{\lambda}$ is uniformly bounded as $\delta\to 0$ regardless of the growth of $M_1$ (see Remark 7), one concludes that $R$ can be chosen independent of $\delta$ and $M_1$.

Proof of Theorem 7.

Proof. Denote
\[
C_1 := 2C - 1. \tag{4.60}
\]
Let
\[
w := u_\delta - V_\delta, \qquad g(t) := \|w\|. \tag{4.61}
\]
From (4.61) and (2.24) one gets
\[
\dot w = -\dot V_\delta - A_a^*(t)\bigl[F(u_\delta) - F(V_\delta) + a(t)w\bigr]. \tag{4.62}
\]
We use Taylor's formula and get:
\[
F(u_\delta) - F(V_\delta) + aw = A_a w + K, \qquad \|K\| \le \frac{M_2}{2}\|w\|^2, \tag{4.63}
\]
where $K := F(u_\delta) - F(V_\delta) - Aw$, $M_2$ is the constant from the estimate (2.13) and $A_a := A + aI$. Multiplying (4.62) by $w$ and using (4.63) one gets
\[
g\dot g \le -a^2 g^2 + \frac{M_2(M_1 + a)}{2}\,g^3 + \|\dot V_\delta\|\,g, \qquad g := g(t) := \|w(t)\|, \tag{4.64}
\]
where the estimates $\langle A_a^* A_a w, w\rangle \ge a^2 g^2$ and $\|A_a^*\| \le M_1 + a$ were used. Note that the inequality $\langle A_a^* A_a w, w\rangle \ge a^2 g^2$ is true if $A \ge 0$. Since $F$ is monotone and differentiable (see (1.2)), one has $A := F'(u_\delta) \ge 0$.

Let $t_0 > 0$ be such that
\[
\frac{\delta}{a(t_0)} = \frac{1}{C-1}\|y\|, \qquad C > 1, \tag{4.65}
\]
as in (3.10). It follows from Lemma 24 that inequalities (3.11) and (3.12) hold.

Since $g\ge 0$, inequalities (4.64) and (3.12) imply, for all $t\in[0,t_0]$, that
\[
\dot g(t) \le -a^2(t)g(t) + c_0\bigl(M_1 + a(t)\bigr)g^2(t) + \frac{|\dot a(t)|}{a(t)}\,c_1, \qquad c_0 = \frac{M_2}{2},\quad c_1 = \|y\|\Bigl(1 + \frac{1}{C-1}\Bigr). \tag{4.66}
\]
Inequality (4.66) is of the type (2.87) with
\[
\gamma(t) = a^2(t), \qquad \alpha(t) = c_0\bigl(M_1 + a(t)\bigr), \qquad \beta(t) = c_1\frac{|\dot a(t)|}{a(t)}. \tag{4.67}
\]
Let us check assumptions (2.92)–(2.94). Take
\[
\mu(t) = \frac{\lambda}{a^2(t)}, \qquad \lambda = \mathrm{const}. \tag{4.68}
\]
By Lemma 6 there exist $\lambda$ and $a(t)$ such that conditions (2.25)–(2.29) hold. This implies that inequalities (2.92)–(2.94) hold. Thus, Corollary 17 yields
\[
g(t) < \frac{a^2(t)}{\lambda}, \qquad \forall t\le t_0. \tag{4.69}
\]
Note that inequality (4.69) holds for $t=0$ since (2.29) holds. Therefore,
\[
\|F(u_\delta(t)) - f_\delta\| \le \|F(u_\delta(t)) - F(V_\delta(t))\| + \|F(V_\delta(t)) - f_\delta\| \le M_1 g(t) + \|F(V_\delta(t)) - f_\delta\| \le \frac{M_1 a^2(t)}{\lambda} + \|F(V_\delta(t)) - f_\delta\|, \qquad \forall t\le t_0. \tag{4.70}
\]
It follows from Lemma 20 that $\|F(V_\delta(t)) - f_\delta\|$ is decreasing. Since $t_1 \le t_0$, one gets
\[
\|F(V_\delta(t_0)) - f_\delta\| \le \|F(V_\delta(t_1)) - f_\delta\| = C\delta. \tag{4.71}
\]
This, inequality (4.70), the inequality $\frac{M_1}{\lambda} \le \|y\|$ (see (2.26)), the relation (4.65), and the definition $C_1 = 2C-1$ (see (4.60)) imply
\[
\|F(u_\delta(t_0)) - f_\delta\| \le \frac{M_1 a^2(t_0)}{\lambda} + C\delta \le (C-1)\delta + C\delta = (2C-1)\delta = C_1\delta. \tag{4.72}
\]
We have used the inequality
\[
a^2(t_0) \le a(t_0) = \frac{(C-1)\delta}{\|y\|}, \tag{4.73}
\]
which is true if $\delta$ is sufficiently small, or, equivalently, if $t_0$ is sufficiently large. Thus, if
\[
\|F(u_\delta(0)) - f_\delta\| > C_1\delta^\zeta, \qquad 0 < \zeta \le 1, \tag{4.74}
\]
then there exists $t_\delta\in(0,t_0)$ such that
\[
\|F(u_\delta(t_\delta)) - f_\delta\| = C_1\delta^\zeta \tag{4.75}
\]
for any given $\zeta\in(0,1]$, and any fixed $C_1 > 1$.

Let us prove (2.32). If this is done, then Theorem 7 is proved. First, we prove that $\lim_{\delta\to 0}\frac{\delta}{a(t_\delta)} = 0$. From (4.70) with $t = t_\delta$, (2.12) and (3.30), one gets
\[
C_1\delta^\zeta \le \frac{M_1 a^2(t_\delta)}{\lambda} + a(t_\delta)\|V_\delta(t_\delta)\| \le \frac{M_1 a^2(t_\delta)}{\lambda} + \|y\|a(t_\delta) + \delta. \tag{4.76}
\]
Thus, for sufficiently small $\delta$, one gets
\[
\tilde C\delta^\zeta \le a(t_\delta)\Bigl(\frac{M_1 a(0)}{\lambda} + \|y\|\Bigr), \qquad \tilde C > 0, \tag{4.77}
\]
where $\tilde C < C_1$ is a constant. Therefore,
\[
\lim_{\delta\to 0}\frac{\delta}{a(t_\delta)} \le \lim_{\delta\to 0}\frac{\delta^{1-\zeta}}{\tilde C}\Bigl(\frac{M_1 a(0)}{\lambda} + \|y\|\Bigr) = 0, \qquad 0<\zeta<1. \tag{4.78}
\]
Secondly, we prove that
\[
\lim_{\delta\to 0} t_\delta = \infty. \tag{4.79}
\]
Using (2.24), one obtains:
\[
\frac{d}{dt}\bigl(F(u_\delta) + au_\delta - f_\delta\bigr) = A_a\dot u_\delta + \dot a u_\delta = -A_a A_a^*\bigl(F(u_\delta) + au_\delta - f_\delta\bigr) + \dot a u_\delta. \tag{4.80}
\]
This and (2.12) imply:
\[
\frac{d}{dt}\bigl[F(u_\delta) - F(V_\delta) + a(u_\delta - V_\delta)\bigr] = -A_a A_a^*\bigl[F(u_\delta) - F(V_\delta) + a(u_\delta - V_\delta)\bigr] + \dot a u_\delta. \tag{4.81}
\]
Denote
\[
v := F(u_\delta) - F(V_\delta) + a(u_\delta - V_\delta), \qquad h := h(t) := \|v(t)\|. \tag{4.82}
\]
Multiplying (4.81) by $v$ and using the monotonicity of $F$, one obtains
\[
h\dot h = -\bigl\langle A_a A_a^* v, v\bigr\rangle + \bigl\langle v,\, \dot a(u_\delta - V_\delta)\bigr\rangle + \dot a\bigl\langle v, V_\delta\bigr\rangle \le -h^2 a^2 + h|\dot a|\,\|u_\delta - V_\delta\| + |\dot a|\,h\|V_\delta\|, \qquad h\ge 0. \tag{4.83}
\]
Again, we have used the inequality $A_a A_a^* \ge a^2$, which holds for $A\ge 0$, i.e., for monotone operators $F$. Thus,
\[
\dot h \le -ha^2 + |\dot a|\,\|u_\delta - V_\delta\| + |\dot a|\,\|V_\delta\|. \tag{4.84}
\]
From inequality (3.3) we have
\[
a\|u_\delta - V_\delta\| \le h, \qquad \|F(u_\delta) - F(V_\delta)\| \le h. \tag{4.85}
\]
Inequalities (4.84) and (4.85) imply
\[
\dot h \le -h\Bigl(a^2 - \frac{|\dot a|}{a}\Bigr) + |\dot a|\,\|V_\delta\|. \tag{4.86}
\]

Since $a^2 - \frac{|\dot a|}{a} \ge \frac{3a^2}{4} > \frac{a^2}{2}$ by inequality (2.25), it follows from inequality (4.86) that
\[
\dot h \le -\frac{a^2}{2}h + |\dot a|\,\|V_\delta\|. \tag{4.87}
\]
Inequality (4.87) implies:
\[
h(t) \le h(0)e^{-\int_0^t \frac{a^2(s)}{2}\,ds} + e^{-\int_0^t \frac{a^2(s)}{2}\,ds}\int_0^t e^{\int_0^s \frac{a^2(\xi)}{2}\,d\xi}\,|\dot a(s)|\,\|V_\delta(s)\|\,ds. \tag{4.88}
\]
Denote
\[
\varphi(t) := \int_0^t \frac{a^2(s)}{2}\,ds. \tag{4.89}
\]
From (4.88) and (4.85), one gets
\[
\|F(u_\delta(t)) - F(V_\delta(t))\| \le h(0)e^{-\varphi(t)} + e^{-\varphi(t)}\int_0^t e^{\varphi(s)}\,|\dot a(s)|\,\|V_\delta(s)\|\,ds. \tag{4.90}
\]
This and the triangle inequality imply
\[
\|F(u_\delta(t)) - f_\delta\| \ge \|F(V_\delta(t)) - f_\delta\| - \|F(V_\delta(t)) - F(u_\delta(t))\| \ge a(t)\|V_\delta(t)\| - h(0)e^{-\varphi(t)} - e^{-\varphi(t)}\int_0^t e^{\varphi(s)}\,|\dot a|\,\|V_\delta\|\,ds. \tag{4.91}
\]
From Lemma 28 it follows that
\[
e^{-\varphi(t)}\int_0^t e^{\varphi(s)}\,|\dot a|\,\|V_\delta(s)\|\,ds \le \tfrac12 a(t)\|V_\delta(t)\|. \tag{4.92}
\]
From (2.30) one gets
\[
h(0)e^{-\varphi(t)} \le \tfrac14 a(0)\|V_\delta(0)\|\,e^{-\varphi(t)}, \qquad t\ge 0. \tag{4.93}
\]
If $c\ge 1$ and $2b \le d^2 c^{1-2b}$, then it follows that
\[
e^{-\varphi(t)}a(0) \le a(t). \tag{4.94}
\]
Indeed, the inequality $a(0) \le a(t)e^{\varphi(t)}$ is obviously true for $t=0$, and $\bigl(a(t)e^{\varphi(t)}\bigr)'_t \ge 0$, provided that $c\ge 1$ and $2b \le d^2 c^{1-2b}$. Inequalities (4.93) and (4.94) imply
\[
e^{-\varphi(t)}h(0) \le \tfrac14 a(t)\|V_\delta(0)\| \le \tfrac14 a(t)\|V_\delta(t)\|, \qquad t\ge 0, \tag{4.95}
\]
where we have used the inequality $\|V_\delta(t)\| \le \|V_\delta(t')\|$ for $t\le t'$, established in Lemma 20. From (4.75) and (4.91), (4.92), (4.95), one gets
\[
C_1\delta^\zeta = \|F(u_\delta(t_\delta)) - f_\delta\| \ge \tfrac14 a(t_\delta)\|V_\delta(t_\delta)\|. \tag{4.96}
\]
It follows from the triangle inequality and the first inequality in (3.29) that
\[
a(t)\|V(t)\| \le a(t)\|V_\delta(t)\| + \delta. \tag{4.97}
\]

From (4.97) and (4.96) one gets
\[
0 \le \lim_{\delta\to 0} a(t_\delta)\|V(t_\delta)\| \le \lim_{\delta\to 0}\bigl(4C_1\delta^\zeta + \delta\bigr) = 0. \tag{4.98}
\]
Since $\|V(t)\|$ is increasing, this implies $\lim_{\delta\to 0} a(t_\delta) = 0$. Since $0<a(t)\searrow 0$, it follows that (4.79) holds. From the triangle inequality and inequalities (4.69) and (3.29) one obtains
\[
\|u_\delta(t_\delta) - y\| \le \|u_\delta(t_\delta) - V_\delta(t_\delta)\| + \|V_\delta(t_\delta) - V(t_\delta)\| + \|V(t_\delta) - y\| \le \frac{a^2(t_\delta)}{\lambda} + \frac{\delta}{a(t_\delta)} + \|V(t_\delta) - y\|, \tag{4.99}
\]
where $V(t) := V_\delta(t)\big|_{\delta=0}$ and $V(t)$ solves (2.12) with $\delta=0$. From (4.78), (4.79), inequality (4.99) and Lemma 19, one obtains (2.32). Theorem 7 is proved.

By arguments similar to the ones in the proof of Theorem 5 or in Remark 10, one can show that the trajectory $u_\delta(t)$ remains in the ball $B(u_0,R) := \{u : \|u - u_0\| < R\}$ for all $t\le t_\delta$, where $R$ does not depend on $\delta$ as $\delta\to 0$.

Proof of Theorem 9.

Proof. Denote
\[
C_1 := 2C - 1. \tag{4.100}
\]
Let
\[
w := u_\delta - V_\delta, \qquad g := g(t) := \|w(t)\|. \tag{4.101}
\]
From (4.101) and (2.35) one gets
\[
\dot w = -\dot V_\delta - \bigl[F(u_\delta) - F(V_\delta) + a(t)w\bigr]. \tag{4.102}
\]
Multiplying (4.102) by $w$ and using (1.2) one gets
\[
g\dot g \le -ag^2 + \|\dot V_\delta\|\,g. \tag{4.103}
\]
Let $t_0>0$ be such that
\[
\frac{\delta}{a(t_0)} = \frac{\|y\|}{C-1}, \qquad C>1. \tag{4.104}
\]
This $t_0$ exists and is unique since $a(t)>0$ monotonically decays to $0$ as $t\to\infty$. It follows from inequality (4.104) and Lemma 24 that inequalities (3.11) and (3.12) hold. Since $g\ge 0$, inequalities (4.103) and (3.12) imply
\[
\dot g \le -a(t)g(t) + \frac{|\dot a(t)|}{a(t)}\,c_1, \qquad c_1 = \|y\|\Bigl(1 + \frac{1}{C-1}\Bigr). \tag{4.105}
\]

Inequality (4.105) is of the type (2.87) with
\[
\gamma(t) = a(t), \qquad \alpha(t) = 0, \qquad \beta(t) = c_1\frac{|\dot a(t)|}{a(t)}. \tag{4.106}
\]
Let us check assumptions (2.92)–(2.94). Take
\[
\mu(t) = \frac{\lambda}{a(t)}, \qquad \lambda = \mathrm{const}. \tag{4.107}
\]
By Lemma 8 there exist $\lambda$ and $a(t)$ such that conditions (2.37)–(2.40) hold. It follows that inequalities (2.92)–(2.94) hold. Thus, Corollary 17 yields
\[
g(t) < \frac{a(t)}{\lambda}, \qquad \forall t\le t_0. \tag{4.108}
\]
The triangle inequality and inequality (4.108) imply
\[
\|F(u_\delta(t)) - f_\delta\| \le \|F(u_\delta(t)) - F(V_\delta(t))\| + \|F(V_\delta(t)) - f_\delta\| \le M_1 g(t) + \|F(V_\delta(t)) - f_\delta\| \le \frac{M_1 a(t)}{\lambda} + \|F(V_\delta(t)) - f_\delta\|, \qquad t\le t_0. \tag{4.109}
\]
Inequality (3.11), inequality (4.109), the inequality $\frac{M_1}{\lambda} \le \|y\|$ (see (2.37)), the relation (4.104), and the definition $C_1 = 2C-1$ (see (4.100)) imply
\[
\|F(u_\delta(t_0)) - f_\delta\| \le \frac{M_1 a(t_0)}{\lambda} + C\delta \le (C-1)\delta + C\delta = C_1\delta. \tag{4.110}
\]
Thus, if
\[
\|F(u_\delta(0)) - f_\delta\| > C_1\delta^\zeta, \qquad 0<\zeta\le 1, \tag{4.111}
\]
then there exists $t_\delta\in(0,t_0)$ such that
\[
\|F(u_\delta(t_\delta)) - f_\delta\| = C_1\delta^\zeta \tag{4.112}
\]
for any given $\zeta\in(0,1]$, and any fixed $C_1>1$. Let us prove (2.43). If this is done, then Theorem 9 is proved. First, we prove that $\lim_{\delta\to 0}\frac{\delta}{a(t_\delta)} = 0$. From (4.109) with $t = t_\delta$, and from (3.30), one gets
\[
C_1\delta^\zeta \le \frac{M_1 a(t_\delta)}{\lambda} + a(t_\delta)\|V_\delta(t_\delta)\| \le \frac{M_1 a(t_\delta)}{\lambda} + \|y\|a(t_\delta) + \delta. \tag{4.113}
\]
Thus, for sufficiently small $\delta$, one gets
\[
\tilde C\delta^\zeta \le a(t_\delta)\Bigl(\frac{M_1}{\lambda} + \|y\|\Bigr), \qquad \tilde C > 0, \tag{4.114}
\]

where $\tilde C < C_1$ is a constant. Therefore,
\[
\lim_{\delta\to 0}\frac{\delta}{a(t_\delta)} \le \lim_{\delta\to 0}\frac{\delta^{1-\zeta}}{\tilde C}\Bigl(\frac{M_1}{\lambda} + \|y\|\Bigr) = 0, \qquad 0<\zeta<1. \tag{4.115}
\]
Secondly, we prove that
\[
\lim_{\delta\to 0} t_\delta = \infty. \tag{4.116}
\]
Using (2.35), one obtains:
\[
\frac{d}{dt}\bigl(F(u_\delta) + au_\delta - f_\delta\bigr) = A_a\dot u_\delta + \dot a u_\delta = -A_a\bigl(F(u_\delta) + au_\delta - f_\delta\bigr) + \dot a u_\delta, \tag{4.117}
\]
where $A_a := F'(u_\delta) + a$. This and (2.12) imply:
\[
\frac{d}{dt}\bigl[F(u_\delta) - F(V_\delta) + a(u_\delta - V_\delta)\bigr] = -A_a\bigl[F(u_\delta) - F(V_\delta) + a(u_\delta - V_\delta)\bigr] + \dot a u_\delta. \tag{4.118}
\]
Denote
\[
v := F(u_\delta) - F(V_\delta) + a(u_\delta - V_\delta), \qquad h = \|v\|. \tag{4.119}
\]
Multiplying (4.118) by $v$ and using the monotonicity of $F$, one obtains
\[
h\dot h = -\bigl\langle A_a v, v\bigr\rangle + \bigl\langle v,\, \dot a(u_\delta - V_\delta)\bigr\rangle + \dot a\bigl\langle v, V_\delta\bigr\rangle \le -h^2 a + h|\dot a|\,\|u_\delta - V_\delta\| + |\dot a|\,h\|V_\delta\|, \qquad h\ge 0. \tag{4.120}
\]
Again, we have used the inequality $\langle F'(u_\delta)v, v\rangle \ge 0$, which follows from the monotonicity of $F$. Thus,
\[
\dot h \le -ha + |\dot a|\,\|u_\delta - V_\delta\| + |\dot a|\,\|V_\delta\|. \tag{4.121}
\]
Inequalities (4.121) and (3.3) imply
\[
\dot h \le -h\Bigl(a - \frac{|\dot a|}{a}\Bigr) + |\dot a|\,\|V_\delta\|. \tag{4.122}
\]
Since $a - \frac{|\dot a|}{a} \ge \frac{a}{2}$ by inequality (2.36), it follows from inequality (4.122) that
\[
\dot h \le -\frac{a}{2}h + |\dot a|\,\|V_\delta\|. \tag{4.123}
\]
Inequality (4.123) implies:
\[
h(t) \le h(0)e^{-\int_0^t \frac{a(s)}{2}\,ds} + e^{-\int_0^t \frac{a(s)}{2}\,ds}\int_0^t e^{\int_0^s \frac{a(\xi)}{2}\,d\xi}\,|\dot a(s)|\,\|V_\delta(s)\|\,ds. \tag{4.124}
\]
Denote
\[
\varphi(t) := \int_0^t \frac{a(s)}{2}\,ds. \tag{4.125}
\]
From (4.124) and (3.3), one gets
\[
\|F(u_\delta(t)) - F(V_\delta(t))\| \le h(0)e^{-\varphi(t)} + e^{-\varphi(t)}\int_0^t e^{\varphi(s)}\,|\dot a(s)|\,\|V_\delta(s)\|\,ds. \tag{4.126}
\]

Therefore,
\[
\|F(u_\delta(t)) - f_\delta\| \ge \|F(V_\delta(t)) - f_\delta\| - \|F(V_\delta(t)) - F(u_\delta(t))\| \ge a(t)\|V_\delta(t)\| - h(0)e^{-\varphi(t)} - e^{-\varphi(t)}\int_0^t e^{\varphi(s)}\,|\dot a|\,\|V_\delta\|\,ds. \tag{4.127}
\]
From Lemma 29 it follows that
\[
e^{-\varphi(t)}\int_0^t e^{\varphi(s)}\,|\dot a|\,\|V_\delta(s)\|\,ds \le \tfrac12 a(t)\|V_\delta(t)\|. \tag{4.128}
\]
From (2.41) one gets
\[
h(0)e^{-\varphi(t)} \le \tfrac14 a(0)\|V_\delta(0)\|\,e^{-\varphi(t)}, \qquad t\ge 0. \tag{4.129}
\]
If $c\ge 1$ and $2b \le d$, then it follows that
\[
e^{-\varphi(t)}a(0) \le a(t). \tag{4.130}
\]
Indeed, the inequality $a(0) \le a(t)e^{\varphi(t)}$ is obviously true for $t=0$, and $\bigl(a(t)e^{\varphi(t)}\bigr)'_t \ge 0$, provided that $c\ge 1$ and $2b\le d$. Inequalities (4.129) and (4.130) imply
\[
e^{-\varphi(t)}h(0) \le \tfrac14 a(t)\|V_\delta(0)\| \le \tfrac14 a(t)\|V_\delta(t)\|, \qquad t\ge 0, \tag{4.131}
\]
where we have used the inequality $\|V_\delta(t)\| \le \|V_\delta(t')\|$ for $t\le t'$, established in Lemma 20. From (4.112) and (4.127), (4.128), (4.131), one gets
\[
C_1\delta^\zeta = \|F(u_\delta(t_\delta)) - f_\delta\| \ge \tfrac14 a(t_\delta)\|V_\delta(t_\delta)\|. \tag{4.132}
\]
From the triangle inequality and the first inequality in (3.29) one obtains
\[
a(t)\|V(t)\| \le a(t)\|V_\delta(t)\| + \delta. \tag{4.133}
\]
This and inequality (4.132) imply
\[
0 \le \lim_{\delta\to 0} a(t_\delta)\|V(t_\delta)\| \le \lim_{\delta\to 0}\bigl(4C_1\delta^\zeta + \delta\bigr) = 0. \tag{4.134}
\]
Since $\|V(t)\|$ is increasing, this implies $\lim_{\delta\to 0} a(t_\delta) = 0$. Since $0<a(t)\searrow 0$, it follows that (4.116) holds. From the triangle inequality and inequalities (4.108) and (3.29) one obtains:
\[
\|u_\delta(t_\delta) - y\| \le \|u_\delta(t_\delta) - V_\delta(t_\delta)\| + \|V_\delta(t_\delta) - V(t_\delta)\| + \|V(t_\delta) - y\| \le \frac{a(t_\delta)}{\lambda} + \frac{\delta}{a(t_\delta)} + \|V(t_\delta) - y\|, \tag{4.135}
\]
where $V(t) := V_\delta(t)\big|_{\delta=0}$ and $V(t)$ solves (2.12) with $\delta=0$. From (4.115), (4.116), inequality (4.135) and Lemma 19, one obtains (2.43). Theorem 9 is proved.

By arguments similar to the ones in the proof of Theorem 5 or in Remark 10, one can show that the trajectory $u_\delta(t)$ remains in the ball $B(u_0,R) := \{u : \|u-u_0\| < R\}$ for all $t\le t_\delta$, where $R$ does not depend on $\delta$ as $\delta\to 0$.
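Theorem 9 concerns the simple-iteration version of the DSM, $\dot u_\delta = -\bigl(F(u_\delta) + a(t)u_\delta - f_\delta\bigr)$, stopped at the time $t_\delta$ defined by the discrepancy-type rule (4.112). The following one-dimensional sketch integrates this ODE by forward Euler steps; the operator $F(u)=u^3+u$ and all constants ($d$, $c$, $b$, $C_1$, $\zeta$, the step size) are illustrative choices, not taken from the paper.

```python
def F(u):
    return u**3 + u   # monotone on R; illustrative choice, not from the paper

y, delta = 0.7, 1e-3           # exact solution and noise level (made-up values)
f_delta = F(y) + delta         # noisy data with |f_delta - f| = delta

d, c, b = 1.0, 1.0, 0.5        # a(t) = d/(c + t)**b, with b in (0, 1/2]
C1, zeta = 2.0, 0.5            # stopping constants (illustrative values)
stop = C1 * delta**zeta        # discrepancy-type threshold C1*delta**zeta

u, t, dt = 0.0, 0.0, 1e-2      # forward-Euler time stepping of the DSM ODE
while abs(F(u) - f_delta) > stop and t < 1e4:
    a = d / (c + t)**b
    u += dt * (-(F(u) + a * u - f_delta))   # du/dt = -(F(u) + a(t)u - f_delta)
    t += dt

stopped_in_time = t < 1e4      # the stopping time t_delta was reached
error = abs(u - y)
```

The iterate $u_\delta(t)$ tracks the regularized solution $V_\delta(t)$ while $a(t)$ decays slowly, and the run terminates at a finite stopping time with a small reconstruction error, illustrating the convergence statement of Theorem 9.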

arXiv:0901.4377v1 [math.NA] 28 Jan 2009

More information

Normed Vector Spaces and Double Duals

Normed Vector Spaces and Double Duals Normed Vector Spaces and Double Duals Mathematics 481/525 In this note we look at a number of infinite-dimensional R-vector spaces that arise in analysis, and we consider their dual and double dual spaces

More information

NOTES ON EXISTENCE AND UNIQUENESS THEOREMS FOR ODES

NOTES ON EXISTENCE AND UNIQUENESS THEOREMS FOR ODES NOTES ON EXISTENCE AND UNIQUENESS THEOREMS FOR ODES JONATHAN LUK These notes discuss theorems on the existence, uniqueness and extension of solutions for ODEs. None of these results are original. The proofs

More information

Lecture 9 Metric spaces. The contraction fixed point theorem. The implicit function theorem. The existence of solutions to differenti. equations.

Lecture 9 Metric spaces. The contraction fixed point theorem. The implicit function theorem. The existence of solutions to differenti. equations. Lecture 9 Metric spaces. The contraction fixed point theorem. The implicit function theorem. The existence of solutions to differential equations. 1 Metric spaces 2 Completeness and completion. 3 The contraction

More information

Hille-Yosida Theorem and some Applications

Hille-Yosida Theorem and some Applications Hille-Yosida Theorem and some Applications Apratim De Supervisor: Professor Gheorghe Moroșanu Submitted to: Department of Mathematics and its Applications Central European University Budapest, Hungary

More information

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov,

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov, LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES Sergey Korotov, Institute of Mathematics Helsinki University of Technology, Finland Academy of Finland 1 Main Problem in Mathematical

More information

An Iteratively Regularized Projection Method with Quadratic Convergence for Nonlinear Ill-posed Problems

An Iteratively Regularized Projection Method with Quadratic Convergence for Nonlinear Ill-posed Problems Int. Journal of Math. Analysis, Vol. 4, 1, no. 45, 11-8 An Iteratively Regularized Projection Method with Quadratic Convergence for Nonlinear Ill-posed Problems Santhosh George Department of Mathematical

More information

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term;

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term; Chapter 2 Gradient Methods The gradient method forms the foundation of all of the schemes studied in this book. We will provide several complementary perspectives on this algorithm that highlight the many

More information

Nonlinear stabilization via a linear observability

Nonlinear stabilization via a linear observability via a linear observability Kaïs Ammari Department of Mathematics University of Monastir Joint work with Fathia Alabau-Boussouira Collocated feedback stabilization Outline 1 Introduction and main result

More information

Ordinary Differential Equation Theory

Ordinary Differential Equation Theory Part I Ordinary Differential Equation Theory 1 Introductory Theory An n th order ODE for y = y(t) has the form Usually it can be written F (t, y, y,.., y (n) ) = y (n) = f(t, y, y,.., y (n 1) ) (Implicit

More information

Journal of Complexity. New general convergence theory for iterative processes and its applications to Newton Kantorovich type theorems

Journal of Complexity. New general convergence theory for iterative processes and its applications to Newton Kantorovich type theorems Journal of Complexity 26 (2010) 3 42 Contents lists available at ScienceDirect Journal of Complexity journal homepage: www.elsevier.com/locate/jco New general convergence theory for iterative processes

More information

Equations paraboliques: comportement qualitatif

Equations paraboliques: comportement qualitatif Université de Metz Master 2 Recherche de Mathématiques 2ème semestre Equations paraboliques: comportement qualitatif par Ralph Chill Laboratoire de Mathématiques et Applications de Metz Année 25/6 1 Contents

More information

Continuity. Chapter 4

Continuity. Chapter 4 Chapter 4 Continuity Throughout this chapter D is a nonempty subset of the real numbers. We recall the definition of a function. Definition 4.1. A function from D into R, denoted f : D R, is a subset of

More information

Finding discontinuities of piecewise-smooth functions

Finding discontinuities of piecewise-smooth functions Finding discontinuities of piecewise-smooth functions A.G. Ramm Mathematics Department, Kansas State University, Manhattan, KS 66506-2602, USA ramm@math.ksu.edu Abstract Formulas for stable differentiation

More information

MINIMAL GRAPHS PART I: EXISTENCE OF LIPSCHITZ WEAK SOLUTIONS TO THE DIRICHLET PROBLEM WITH C 2 BOUNDARY DATA

MINIMAL GRAPHS PART I: EXISTENCE OF LIPSCHITZ WEAK SOLUTIONS TO THE DIRICHLET PROBLEM WITH C 2 BOUNDARY DATA MINIMAL GRAPHS PART I: EXISTENCE OF LIPSCHITZ WEAK SOLUTIONS TO THE DIRICHLET PROBLEM WITH C 2 BOUNDARY DATA SPENCER HUGHES In these notes we prove that for any given smooth function on the boundary of

More information

Exercise Solutions to Functional Analysis

Exercise Solutions to Functional Analysis Exercise Solutions to Functional Analysis Note: References refer to M. Schechter, Principles of Functional Analysis Exersize that. Let φ,..., φ n be an orthonormal set in a Hilbert space H. Show n f n

More information

Analysis Finite and Infinite Sets The Real Numbers The Cantor Set

Analysis Finite and Infinite Sets The Real Numbers The Cantor Set Analysis Finite and Infinite Sets Definition. An initial segment is {n N n n 0 }. Definition. A finite set can be put into one-to-one correspondence with an initial segment. The empty set is also considered

More information

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation:

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: Oct. 1 The Dirichlet s P rinciple In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: 1. Dirichlet s Principle. u = in, u = g on. ( 1 ) If we multiply

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

Numerical Methods for Large-Scale Nonlinear Systems

Numerical Methods for Large-Scale Nonlinear Systems Numerical Methods for Large-Scale Nonlinear Systems Handouts by Ronald H.W. Hoppe following the monograph P. Deuflhard Newton Methods for Nonlinear Problems Springer, Berlin-Heidelberg-New York, 2004 Num.

More information

LECTURE 15: COMPLETENESS AND CONVEXITY

LECTURE 15: COMPLETENESS AND CONVEXITY LECTURE 15: COMPLETENESS AND CONVEXITY 1. The Hopf-Rinow Theorem Recall that a Riemannian manifold (M, g) is called geodesically complete if the maximal defining interval of any geodesic is R. On the other

More information

Second order forward-backward dynamical systems for monotone inclusion problems

Second order forward-backward dynamical systems for monotone inclusion problems Second order forward-backward dynamical systems for monotone inclusion problems Radu Ioan Boţ Ernö Robert Csetnek March 6, 25 Abstract. We begin by considering second order dynamical systems of the from

More information

Lecture Notes on PDEs

Lecture Notes on PDEs Lecture Notes on PDEs Alberto Bressan February 26, 2012 1 Elliptic equations Let IR n be a bounded open set Given measurable functions a ij, b i, c : IR, consider the linear, second order differential

More information

An Introduction to Variational Inequalities

An Introduction to Variational Inequalities An Introduction to Variational Inequalities Stefan Rosenberger Supervisor: Prof. DI. Dr. techn. Karl Kunisch Institut für Mathematik und wissenschaftliches Rechnen Universität Graz January 26, 2012 Stefan

More information

Generalized Local Regularization for Ill-Posed Problems

Generalized Local Regularization for Ill-Posed Problems Generalized Local Regularization for Ill-Posed Problems Patricia K. Lamm Department of Mathematics Michigan State University AIP29 July 22, 29 Based on joint work with Cara Brooks, Zhewei Dai, and Xiaoyue

More information

GALERKIN TIME STEPPING METHODS FOR NONLINEAR PARABOLIC EQUATIONS

GALERKIN TIME STEPPING METHODS FOR NONLINEAR PARABOLIC EQUATIONS GALERKIN TIME STEPPING METHODS FOR NONLINEAR PARABOLIC EQUATIONS GEORGIOS AKRIVIS AND CHARALAMBOS MAKRIDAKIS Abstract. We consider discontinuous as well as continuous Galerkin methods for the time discretization

More information

Generalized Forchheimer Equations for Porous Media. Part V.

Generalized Forchheimer Equations for Porous Media. Part V. Generalized Forchheimer Equations for Porous Media. Part V. Luan Hoang,, Akif Ibragimov, Thinh Kieu and Zeev Sobol Department of Mathematics and Statistics, Texas Tech niversity Mathematics Department,

More information

Inverse scattering problem from an impedance obstacle

Inverse scattering problem from an impedance obstacle Inverse Inverse scattering problem from an impedance obstacle Department of Mathematics, NCKU 5 th Workshop on Boundary Element Methods, Integral Equations and Related Topics in Taiwan NSYSU, October 4,

More information

Order Preserving Properties of Vehicle Dynamics with Respect to the Driver s Input

Order Preserving Properties of Vehicle Dynamics with Respect to the Driver s Input Order Preserving Properties of Vehicle Dynamics with Respect to the Driver s Input Mojtaba Forghani and Domitilla Del Vecchio Massachusetts Institute of Technology September 19, 214 1 Introduction In this

More information

GLOBAL ATTRACTIVITY IN A CLASS OF NONMONOTONE REACTION-DIFFUSION EQUATIONS WITH TIME DELAY

GLOBAL ATTRACTIVITY IN A CLASS OF NONMONOTONE REACTION-DIFFUSION EQUATIONS WITH TIME DELAY CANADIAN APPLIED MATHEMATICS QUARTERLY Volume 17, Number 1, Spring 2009 GLOBAL ATTRACTIVITY IN A CLASS OF NONMONOTONE REACTION-DIFFUSION EQUATIONS WITH TIME DELAY XIAO-QIANG ZHAO ABSTRACT. The global attractivity

More information

Chapter 2 Finite Element Spaces for Linear Saddle Point Problems

Chapter 2 Finite Element Spaces for Linear Saddle Point Problems Chapter 2 Finite Element Spaces for Linear Saddle Point Problems Remark 2.1. Motivation. This chapter deals with the first difficulty inherent to the incompressible Navier Stokes equations, see Remark

More information

CORE 50 YEARS OF DISCUSSION PAPERS. Globally Convergent Second-order Schemes for Minimizing Twicedifferentiable 2016/28

CORE 50 YEARS OF DISCUSSION PAPERS. Globally Convergent Second-order Schemes for Minimizing Twicedifferentiable 2016/28 26/28 Globally Convergent Second-order Schemes for Minimizing Twicedifferentiable Functions YURII NESTEROV AND GEOVANI NUNES GRAPIGLIA 5 YEARS OF CORE DISCUSSION PAPERS CORE Voie du Roman Pays 4, L Tel

More information

Some asymptotic properties of solutions for Burgers equation in L p (R)

Some asymptotic properties of solutions for Burgers equation in L p (R) ARMA manuscript No. (will be inserted by the editor) Some asymptotic properties of solutions for Burgers equation in L p (R) PAULO R. ZINGANO Abstract We discuss time asymptotic properties of solutions

More information

In essence, Dynamical Systems is a science which studies differential equations. A differential equation here is the equation

In essence, Dynamical Systems is a science which studies differential equations. A differential equation here is the equation Lecture I In essence, Dynamical Systems is a science which studies differential equations. A differential equation here is the equation ẋ(t) = f(x(t), t) where f is a given function, and x(t) is an unknown

More information

Seminorms and locally convex spaces

Seminorms and locally convex spaces (April 23, 2014) Seminorms and locally convex spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/fun/notes 2012-13/07b seminorms.pdf]

More information

at time t, in dimension d. The index i varies in a countable set I. We call configuration the family, denoted generically by Φ: U (x i (t) x j (t))

at time t, in dimension d. The index i varies in a countable set I. We call configuration the family, denoted generically by Φ: U (x i (t) x j (t)) Notations In this chapter we investigate infinite systems of interacting particles subject to Newtonian dynamics Each particle is characterized by its position an velocity x i t, v i t R d R d at time

More information

Nonlinear Control Lecture 5: Stability Analysis II

Nonlinear Control Lecture 5: Stability Analysis II Nonlinear Control Lecture 5: Stability Analysis II Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Fall 2010 Farzaneh Abdollahi Nonlinear Control Lecture 5 1/41

More information

ON CONTINUITY OF MEASURABLE COCYCLES

ON CONTINUITY OF MEASURABLE COCYCLES Journal of Applied Analysis Vol. 6, No. 2 (2000), pp. 295 302 ON CONTINUITY OF MEASURABLE COCYCLES G. GUZIK Received January 18, 2000 and, in revised form, July 27, 2000 Abstract. It is proved that every

More information

Euler Equations: local existence

Euler Equations: local existence Euler Equations: local existence Mat 529, Lesson 2. 1 Active scalars formulation We start with a lemma. Lemma 1. Assume that w is a magnetization variable, i.e. t w + u w + ( u) w = 0. If u = Pw then u

More information

Partial Differential Equations

Partial Differential Equations Part II Partial Differential Equations Year 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2015 Paper 4, Section II 29E Partial Differential Equations 72 (a) Show that the Cauchy problem for u(x,

More information

Complex Analysis Qualifying Exam Solutions

Complex Analysis Qualifying Exam Solutions Complex Analysis Qualifying Exam Solutions May, 04 Part.. Let log z be the principal branch of the logarithm defined on G = {z C z (, 0]}. Show that if t > 0, then the equation log z = t has exactly one

More information

A Simple Proof of the Fredholm Alternative and a Characterization of the Fredholm Operators

A Simple Proof of the Fredholm Alternative and a Characterization of the Fredholm Operators thus a n+1 = (2n + 1)a n /2(n + 1). We know that a 0 = π, and the remaining part follows by induction. Thus g(x, y) dx dy = 1 2 tanh 2n v cosh v dv Equations (4) and (5) give the desired result. Remarks.

More information

5 Measure theory II. (or. lim. Prove the proposition. 5. For fixed F A and φ M define the restriction of φ on F by writing.

5 Measure theory II. (or. lim. Prove the proposition. 5. For fixed F A and φ M define the restriction of φ on F by writing. 5 Measure theory II 1. Charges (signed measures). Let (Ω, A) be a σ -algebra. A map φ: A R is called a charge, (or signed measure or σ -additive set function) if φ = φ(a j ) (5.1) A j for any disjoint

More information

h(x) lim H(x) = lim Since h is nondecreasing then h(x) 0 for all x, and if h is discontinuous at a point x then H(x) > 0. Denote

h(x) lim H(x) = lim Since h is nondecreasing then h(x) 0 for all x, and if h is discontinuous at a point x then H(x) > 0. Denote Real Variables, Fall 4 Problem set 4 Solution suggestions Exercise. Let f be of bounded variation on [a, b]. Show that for each c (a, b), lim x c f(x) and lim x c f(x) exist. Prove that a monotone function

More information

EQUIVALENCE OF TOPOLOGIES AND BOREL FIELDS FOR COUNTABLY-HILBERT SPACES

EQUIVALENCE OF TOPOLOGIES AND BOREL FIELDS FOR COUNTABLY-HILBERT SPACES EQUIVALENCE OF TOPOLOGIES AND BOREL FIELDS FOR COUNTABLY-HILBERT SPACES JEREMY J. BECNEL Abstract. We examine the main topologies wea, strong, and inductive placed on the dual of a countably-normed space

More information

Robust error estimates for regularization and discretization of bang-bang control problems

Robust error estimates for regularization and discretization of bang-bang control problems Robust error estimates for regularization and discretization of bang-bang control problems Daniel Wachsmuth September 2, 205 Abstract We investigate the simultaneous regularization and discretization of

More information

X. Linearization and Newton s Method

X. Linearization and Newton s Method 163 X. Linearization and Newton s Method ** linearization ** X, Y nls s, f : G X Y. Given y Y, find z G s.t. fz = y. Since there is no assumption about f being linear, we might as well assume that y =.

More information

A brief introduction to ordinary differential equations

A brief introduction to ordinary differential equations Chapter 1 A brief introduction to ordinary differential equations 1.1 Introduction An ordinary differential equation (ode) is an equation that relates a function of one variable, y(t), with its derivative(s)

More information

INF-SUP CONDITION FOR OPERATOR EQUATIONS

INF-SUP CONDITION FOR OPERATOR EQUATIONS INF-SUP CONDITION FOR OPERATOR EQUATIONS LONG CHEN We study the well-posedness of the operator equation (1) T u = f. where T is a linear and bounded operator between two linear vector spaces. We give equivalent

More information

USING FUNCTIONAL ANALYSIS AND SOBOLEV SPACES TO SOLVE POISSON S EQUATION

USING FUNCTIONAL ANALYSIS AND SOBOLEV SPACES TO SOLVE POISSON S EQUATION USING FUNCTIONAL ANALYSIS AND SOBOLEV SPACES TO SOLVE POISSON S EQUATION YI WANG Abstract. We study Banach and Hilbert spaces with an eye towards defining weak solutions to elliptic PDE. Using Lax-Milgram

More information

The continuity method

The continuity method The continuity method The method of continuity is used in conjunction with a priori estimates to prove the existence of suitably regular solutions to elliptic partial differential equations. One crucial

More information

NOTES ON LINEAR ODES

NOTES ON LINEAR ODES NOTES ON LINEAR ODES JONATHAN LUK We can now use all the discussions we had on linear algebra to study linear ODEs Most of this material appears in the textbook in 21, 22, 23, 26 As always, this is a preliminary

More information

A TWO PARAMETERS AMBROSETTI PRODI PROBLEM*

A TWO PARAMETERS AMBROSETTI PRODI PROBLEM* PORTUGALIAE MATHEMATICA Vol. 53 Fasc. 3 1996 A TWO PARAMETERS AMBROSETTI PRODI PROBLEM* C. De Coster** and P. Habets 1 Introduction The study of the Ambrosetti Prodi problem has started with the paper

More information

Lecture 4 - The Gradient Method Objective: find an optimal solution of the problem

Lecture 4 - The Gradient Method Objective: find an optimal solution of the problem Lecture 4 - The Gradient Method Objective: find an optimal solution of the problem min{f (x) : x R n }. The iterative algorithms that we will consider are of the form x k+1 = x k + t k d k, k = 0, 1,...

More information

MATH 51H Section 4. October 16, Recall what it means for a function between metric spaces to be continuous:

MATH 51H Section 4. October 16, Recall what it means for a function between metric spaces to be continuous: MATH 51H Section 4 October 16, 2015 1 Continuity Recall what it means for a function between metric spaces to be continuous: Definition. Let (X, d X ), (Y, d Y ) be metric spaces. A function f : X Y is

More information

Layer structures for the solutions to the perturbed simple pendulum problems

Layer structures for the solutions to the perturbed simple pendulum problems Layer structures for the solutions to the perturbed simple pendulum problems Tetsutaro Shibata Applied Mathematics Research Group, Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima,

More information

Lecture 4 - The Gradient Method Objective: find an optimal solution of the problem

Lecture 4 - The Gradient Method Objective: find an optimal solution of the problem Lecture 4 - The Gradient Method Objective: find an optimal solution of the problem min{f (x) : x R n }. The iterative algorithms that we will consider are of the form x k+1 = x k + t k d k, k = 0, 1,...

More information

L p MAXIMAL REGULARITY FOR SECOND ORDER CAUCHY PROBLEMS IS INDEPENDENT OF p

L p MAXIMAL REGULARITY FOR SECOND ORDER CAUCHY PROBLEMS IS INDEPENDENT OF p L p MAXIMAL REGULARITY FOR SECOND ORDER CAUCHY PROBLEMS IS INDEPENDENT OF p RALPH CHILL AND SACHI SRIVASTAVA ABSTRACT. If the second order problem ü + B u + Au = f has L p maximal regularity for some p

More information