Dynamical systems method (DSM) for selfadjoint operators

A. G. Ramm
Mathematics Department, Kansas State University, Manhattan, KS 66506-2602, USA
ramm@math.ksu.edu
http://www.math.ksu.edu/~ramm

Abstract

Let $A$ be a selfadjoint linear operator in a Hilbert space $H$. The DSM (dynamical systems method) for solving the equation $Av = f$ consists of solving the Cauchy problem $\dot{u} = \Phi(t,u)$, $u(0) = u_0$, where $\Phi$ is a suitable operator, and proving that i) $u(t)$ exists for all $t > 0$, ii) the limit $u(\infty)$ exists, and iii) $A(u(\infty)) = f$. It is proved that if the equation $Av = f$ is solvable and $u$ solves the problem
$$\dot{u} = i(A + ia)u - if, \quad u(0) = u_0,$$
where $a > 0$ is a parameter and $u_0$ is arbitrary, then $\lim_{a\to 0}\lim_{t\to\infty} u(t,a) = y$, where $y$ is the unique minimal-norm solution of the equation $Av = f$. A stable solution of the equation $Av = f$ is constructed when the data are noisy, i.e., $f_\delta$ is given in place of $f$, $\|f_\delta - f\| \le \delta$. The case when $a = a(t) > 0$, $\int_0^\infty a(t)\,dt = \infty$, $a(t) \searrow 0$ as $t \to \infty$, is also considered. It is proved that in this case $\lim_{t\to\infty} u(t) = y$, and if $f_\delta$ is given in place of $f$, then $\lim_{\delta\to 0} u_\delta(t_\delta) = y$, where $t_\delta$ is properly chosen.

Math subject classification: 35R25, 35R30, 37B55, 47H20, 47J05, 49N45, 65M32, 65R30
Key words: dynamical systems method, operator equations, ill-posed problems

1 Introduction

Let $H$ be a Hilbert space and let $A$ be a linear, not necessarily bounded or injective, selfadjoint operator in $H$. Assume that the equation
$$Av = f \tag{1}$$
is solvable, possibly nonuniquely. By $y$ we denote the unique minimal-norm solution to (1). Let $N := \{v : Av = 0\}$ be the null-space of $A$. Then $y \perp N$. We do not assume that the range of $A$ is closed, so problem (1) is an ill-posed one. Let us assume that the data $f$
are not known, but noisy data $f_\delta$ are known, with $\|f_\delta - f\| \le \delta$. Given the data $\{f_\delta, \delta, A\}$, we want to construct a stable approximation $u_\delta$ to $y$, i.e., an element $u_\delta$ such that
$$\lim_{\delta\to 0}\|u_\delta - y\| = 0. \tag{2}$$

In this paper a new method is proposed for the stable solution of equation (1). We treat equations (1) with unbounded linear operators, which is also a novel point: in most of the existing studies the operator in (1) is assumed bounded, and often compact. The author hopes that this method can be implemented numerically in an efficient and economical way.

In the literature several methods are described for the stable solution of equation (1): variational regularization [2], [3], [5], iterative regularization [1], [10], the method of quasisolutions [3], and the dynamical systems method (DSM) (see [6], [7], [8], and the literature cited there). The DSM for solving equation (1) consists of solving the problem
$$\dot{u} = \Phi(t,u), \quad u(0) = u_0, \tag{3}$$
where $\dot{u} := du/dt$, and $\Phi(t,u)$ is an operator chosen so that problem (3) has a unique global solution which stabilizes at infinity to a solution of equation (1):
$$\text{i) } u(t) \text{ exists for all } t > 0, \qquad \text{ii) } u(\infty) \text{ exists}, \qquad \text{iii) } A(u(\infty)) = f. \tag{4}$$
The Cauchy problem (3) is a general dynamical system, and for this reason we call the above method for solving equation (1) the dynamical systems method (DSM). This method is justified for every solvable linear equation with a densely defined closed operator $A$, not necessarily selfadjoint, and for a very wide class of nonlinear operator equations ([6]).

The aim of this paper is to give a version of the DSM for equation (1) with a selfadjoint operator which possibly requires less computational work than variational regularization and than the DSM version from [6]. In both of these methods one has to compute the elements $A^*Au$, where $A^*$ is the adjoint operator. Even in a finite-dimensional space the computation of $A^*Au$, where $u$ is a vector, requires many more operations than the computation of $Au$.
The variational regularization method for selfadjoint $A = A^*$ requires the computation of $A^2 u$. This operation has the same operation count as the computation of $A^*Au$. The method described below requires the computation of $Au$ rather than $A^*Au$. In [4] the operator $A^*A$ is defined for unbounded, densely defined, closed $A$.

Our idea can be explained informally as follows. Let $B$ be a linear invertible operator for which the operator $e^{Bt}$ is well defined; for example, $B$ is a generator of a $C_0$ semigroup. Then one has
$$\int_0^t e^{Bs}\,ds = B^{-1}(e^{Bt} - I).$$
Let us assume that $\lim_{t\to\infty} e^{Bt} = 0$. This happens, for example, if $\mathrm{Re}\, B \le -cI$, where $c > 0$ is a constant and $I$ is the identity operator. Then $\lim_{t\to\infty} B^{-1}(e^{Bt} - I) = -B^{-1}$.
On the other hand, the operator $W(t) := \int_0^t e^{Bs}\,ds$ solves the following Cauchy problem:
$$\dot{W} = BW + I, \quad W(0) = 0.$$
Thus, under the above assumptions on $B$, one can calculate $B^{-1}$ by solving the Cauchy problem for $W$ and computing the limit $\lim_{t\to\infty} W(t) = -B^{-1}$. Solving Cauchy problems for ODEs is a well-developed branch of numerical analysis.

Let us describe our method for solving equation (1). Let $a > 0$ be a parameter. Consider the Cauchy problem
$$\dot{u} = i(A + ia)u - if, \quad u(0) = u_0, \tag{5}$$
where $u_0 \in H$ is arbitrary. The initial condition in (5) is understood as the strong limit: $\lim_{t\to 0^+}\|u(t) - u_0\| = 0$. The existence of this limit for an arbitrary $u_0 \in H$ follows from the spectral theorem if $A$ is selfadjoint and $a > 0$. One can also refer to the boundedness of the operator $e^{i(A+ia)t}$ provided that $a > 0$ and $A = A^*$.

Let us formulate the first result:

Theorem 1. Problem (5) has a unique solution for all $t \ge 0$, and
$$\lim_{a\to 0}\lim_{t\to\infty} u(t,a) = y. \tag{6}$$

If the noisy data $f_\delta$, $\|f_\delta - f\| \le \delta$, are given, then one replaces $f$ by $f_\delta$ in (5) and finds $a_\delta$ and $t_\delta = t(\delta)$ such that (2) holds with $u_\delta = u(t_\delta, a_\delta)$.

Thus, to solve equation (1) one solves the Cauchy problem (5) and calculates the solution $y$ by formula (6). To implement this method numerically, one has to choose a finite interval $[0, \tau]$ on which the solution to (5) is calculated, and choose $a = a(\tau)$ such that $\lim_{\tau\to\infty} u(\tau, a(\tau)) = y$. One can check (see the proof of Theorem 1 below) that this relation holds if $\lim_{\tau\to\infty} a(\tau) = 0$ and $\lim_{\tau\to\infty} \frac{e^{-\tau a(\tau)}}{a(\tau)} = 0$. For example, one may take $a(\tau) = \tau^{-\gamma}$, where $\gamma \in (0,1)$ is a constant. If $H$ is a real Hilbert space, then equation (5) is considered in a complex Hilbert space, namely in the complexification of $H$.

Only numerical experiments can show the numerical efficiency of the method for solving equation (1) based on Theorem 1. Proofs are given in Section 2. In Section 3 an alternative approach is given, and the result is formulated in Theorem 2 and proved.
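As an informal illustration (not part of the paper), the scheme of Theorem 1 can be tried in a finite-dimensional Hilbert space. The sketch below, in Python, evaluates the closed-form solution of the Cauchy problem (5) for a singular symmetric matrix $A$ and a consistent right-hand side $f = Ay$, and shows $u(t,a)$ approaching the minimal-norm solution $y$ as $a \to 0$ with $ta$ large. All concrete names (`A`, `y`, `u0`, `u_t`) are chosen for this demo only.

```python
import numpy as np

# Sketch: Theorem 1 in finite dimensions.  u(t) below is the closed-form
# solution of  u' = i(A + ia)u - if,  u(0) = u_0,  namely
#     u(t) = e^{it(A+ia)} u_0 + (A + ia)^{-1} (I - e^{it(A+ia)}) f.
A = np.diag([2.0, 1.0, 0.0])        # selfadjoint, singular: N(A) = span(e_3)
y = np.array([1.0, -2.0, 0.0])      # minimal-norm solution (orthogonal to N(A))
f = A @ y                           # consistent right-hand side
u0 = np.array([5.0, 5.0, 5.0])      # arbitrary initial approximation

def u_t(a, t):
    """Closed-form solution of (5) for diagonal A, computed componentwise."""
    lam = np.diag(A).astype(complex)
    e = np.exp(1j * t * (lam + 1j * a))      # |e^{it(lam+ia)}| = e^{-a t} -> 0
    return e * u0 + (1.0 - e) / (lam + 1j * a) * f

# As a -> 0 with t ~ 1/a (so that t*a stays large), u(t, a) approaches y:
for a in [1.0, 0.1, 0.01, 0.001]:
    err = np.linalg.norm(u_t(a, t=20.0 / a) - y)
    print(f"a = {a:7.3f}   ||u(t,a) - y|| = {err:.2e}")
```

In this run the error decreases with $a$ (roughly proportionally to $a$ for this simple spectrum), consistent with estimates (10)–(11); the component of $u_0$ in $N(A)$ is damped by the factor $e^{-at}$.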
In this alternative approach the parameter $a = a(t)$ is a positive function of time, monotonically decaying as $t \to \infty$ and satisfying some technical assumptions.

In [1] and [10] some convergent iterative methods are proposed for solving equation (1). In most of the results in [1] it is assumed that $A$ is a bounded linear operator. In [1], p. 65, there is a remark concerning unbounded, closed, densely defined operators,
and in [10] a method is proposed for solving equation (1) by an iterative procedure. This method is based on von Neumann's theorem on the selfadjointness of the operator $A^*A$, provided that $A$ is closed and densely defined. In [9] a regularization method is proposed for a class of unbounded operators, not necessarily linear, but some compactness assumptions are imposed on the stabilizing functional in [9]. The results presented in Theorem 1 and Theorem 2 of our paper give an approach to the stable solution of equation (1) which differs from the approaches in the cited literature.

2 Proofs

Proof of Theorem 1. The unique solution to (5) is
$$u(t) = e^{it(A+ia)}u_0 - i\int_0^t e^{i(t-s)(A+ia)}\,ds\, f. \tag{7}$$
One has
$$\|e^{it(A+ia)}u_0\| \le e^{-at}\|u_0\| \to 0 \quad \text{as } t \to \infty, \tag{8}$$
provided that $a > 0$. Also,
$$-i\int_0^t e^{i(t-s)(A+ia)}\,ds\, f = -(A+ia)^{-1}\big[e^{it(A+ia)} - I\big]f \to (A+ia)^{-1}f \quad \text{as } t \to \infty. \tag{9}$$
Moreover,
$$\|(A+ia)^{-1}f - y\|^2 = \|(A+ia)^{-1}Ay - y\|^2 = a^2\|(A+ia)^{-1}y\|^2 = a^2\int_{-\infty}^{\infty} \frac{d(E_s y, y)}{a^2 + s^2} := J, \tag{10}$$
where $E_s$ is the resolution of the identity corresponding to the selfadjoint operator $A$. One has
$$\lim_{a\to 0} J = \|P_N y\|^2 = 0, \tag{11}$$
where $P_N$ is the orthogonal projection onto $N$, the null-space of $A$, and $P_N y = 0$ because $y \perp N$.

Consider now the case of noisy data $f_\delta$. In this case $f$ in (5) and in (7) is replaced by $f_\delta$, estimate (8) holds, and (9) is replaced by
$$-i\int_0^t e^{i(t-s)(A+ia)}\,ds\, f_\delta = -(A+ia)^{-1}\big[e^{it(A+ia)} - I\big]f_\delta = (A+ia)^{-1}f + (A+ia)^{-1}(f_\delta - f) - (A+ia)^{-1}e^{it(A+ia)}f_\delta := J_1 + J_2 + J_3. \tag{12}$$
One has
$$\|(A+ia)^{-1}(f_\delta - f)\| \le \frac{\delta}{a}, \tag{13}$$
and
$$\|(A+ia)^{-1}e^{it(A+ia)}f_\delta\| \le \frac{\|f_\delta\|\, e^{-at}}{a}, \tag{14}$$
while
$$\lim_{a\to 0} J_1 = y, \tag{15}$$
as we have proved above. If one chooses $a = a_\delta$ and $t = t_\delta = t(\delta)$ such that
$$\lim_{\delta\to 0} a_\delta = 0, \quad \lim_{\delta\to 0} t(\delta) = \infty, \quad \lim_{\delta\to 0} t(\delta)a_\delta = \infty, \quad \lim_{\delta\to 0} \frac{\delta}{a_\delta} = 0, \tag{16}$$
then $\lim_{\delta\to 0} u(t(\delta), a_\delta) = y$. Theorem 1 is proved. $\Box$

Remark 1. In the above proof we could take $u_0 = 0$. We did not do this because we wanted to show that the DSM converges globally with respect to the initial approximation $u_0$, and also because the choice of $u_0$ in numerical calculations may be used to increase the rate of convergence: if one knows approximately the location of $y$, one can choose $u_0$ in a neighborhood of $y$. From the proof of Theorem 1 it follows that the error of the method for exact data is bounded from above by $\mathrm{const}\,\frac{e^{-ta}}{a} + o(1)$ as $a \to 0$; see formulas (9)-(11). Thus, if $a = a(\tau)$, then this error tends to zero as $\tau \to \infty$ provided that $t = \tau$, $\lim_{\tau\to\infty} a(\tau) = 0$, and $\lim_{\tau\to\infty} \frac{e^{-\tau a(\tau)}}{a(\tau)} = 0$.

3 Another approach

Consider the problem
$$\dot{u} = i[A + ia(t)]u - if, \quad u(0) = u_0, \tag{17}$$
where
$$a(t) > 0, \quad \int_0^\infty a(t)\,dt = \infty, \quad a(t) \searrow 0 \text{ as } t \to \infty, \quad \int_0^\infty \big(|\dot{a}(t)| + a^2(t)\big)\,dt < \infty. \tag{18}$$

Theorem 2. If (18) holds and $Ay = f$, $y \perp N$, then problem (17) has a unique solution for all $t \ge 0$, and
$$\lim_{t\to\infty}\|u(t) - y\| = 0. \tag{19}$$
If $f_\delta$ is given in place of $f$, we choose $t_\delta = t(\delta)$ so that $\lim_{\delta\to 0} t(\delta) = \infty$, $\lim_{\delta\to 0} a(t_\delta) = 0$, and $\lim_{\delta\to 0}\frac{\delta}{a(t_\delta)} = 0$. Then
$$\lim_{\delta\to 0}\|u_\delta - y\| = 0, \tag{20}$$
where $u_\delta := u_\delta(t(\delta))$ and $u_\delta(t)$ solves (17) with $f_\delta$ in place of $f$.

Proof of Theorem 2. The solution to (17) is
$$u(t) = e^{iAt - \int_0^t a(s)\,ds}\,u_0 - i\, e^{iAt - \int_0^t a(s)\,ds}\int_0^t e^{-iAs}A\, e^{\int_0^s a(p)\,dp}\,ds\; y := I_1 + I_2. \tag{21}$$
One has $\lim_{t\to\infty} I_1 = 0$ due to (18), and an integration by parts yields
$$I_2 = y - e^{iAt - \int_0^t a(s)\,ds}\,y - e^{iAt}\int_0^t e^{-isA}a(s)e^{-\int_s^t a(p)\,dp}\,ds\; y \to y \quad \text{as } t \to \infty, \tag{22}$$
because the norm of the second term obviously tends to zero, and the norm of the third term tends to zero by the dominated convergence theorem, if one takes into account that $y \perp N$ and that the function $a(s)e^{-\int_s^t a(p)\,dp}$ is positive, integrable uniformly with respect to $t \in (0,\infty)$, and tends pointwise to zero as $t \to \infty$ because of (18). One has
$$\sup_t \int_0^t a(s)e^{-\int_s^t a(p)\,dp}\,ds = \sup_t \Big(1 - e^{-\int_0^t a(p)\,dp}\Big) = 1.$$
For example, if $a(t) = \frac{1}{1+t}$, and $E_\lambda$ is the resolution of the identity corresponding to the selfadjoint operator $A$, then
$$\Big\|\int_0^t e^{-isA}y\, a(s)e^{-\int_s^t a(p)\,dp}\,ds\Big\|^2 = \int_{-\infty}^{\infty} d(E_\lambda y, y)\,\Big|\int_0^t e^{-is\lambda}(1+s)^{-1}e^{-\int_s^t (1+p)^{-1}\,dp}\,ds\Big|^2 = (1+t)^{-2}\int_{-\infty}^{\infty} d(E_\lambda y, y)\,\Big|\int_0^t e^{-is\lambda}\,ds\Big|^2 = O\Big(\frac{1}{(1+t)^2}\Big) \quad \text{as } t \to \infty,$$
provided that $\lambda \ne 0$. This last condition is satisfied because $y \perp N$.

More generally, let us assume (see the last assumption in (18)) that $|\dot{a}| + a^2 \in L^1(0,\infty)$. We claim that under this assumption the third term on the right-hand side of (22), which we denote by $J$, tends to zero. Let us give a detailed proof of this claim. Taking into account that $A$ is selfadjoint and using the spectral theorem, we get
$$\|J\|^2 = \int_{-\infty}^{\infty} J_1(\lambda)\,d(E_\lambda y, y),$$
where
$$J_1(\lambda) := \Big|\int_0^t e^{i\lambda(t-p)}a(p)e^{-\int_p^t a(q)\,dq}\,dp\Big|^2.$$
Let us prove that $\lim_{t\to\infty} J_1(\lambda) = 0$ for every $\lambda \ne 0$. If this is proved, then $\lim_{t\to\infty} \|J\| = 0$, because $(E_{0^+} - E_{0^-})y = 0$ since $y \perp N$. Here $E_{0^-} := \lim_{\lambda\to 0,\, \lambda<0} E_\lambda$, and, as usual, we assume that $E_{\lambda+0} = E_\lambda$. Integrating by parts, we get
$$\int_0^t e^{i\lambda(t-p)}a(p)e^{-\int_p^t a(q)\,dq}\,dp = -\frac{a(t)}{i\lambda} + \frac{e^{i\lambda t}a(0)e^{-\int_0^t a(p)\,dp}}{i\lambda} + J_2,$$
where
$$J_2 := \frac{1}{i\lambda}\int_0^t e^{i\lambda(t-p)}\big[\dot{a}(p) + a^2(p)\big]e^{-\int_p^t a(q)\,dq}\,dp.$$
From the assumption $|\dot{a}| + a^2 \in L^1(0,\infty)$ it follows, by the dominated convergence theorem, that $\lim_{t\to\infty} J_2 = 0$ for any $\lambda \ne 0$. Therefore, from (18) it follows that $\lim_{t\to\infty} \|J\| = 0$, as claimed.

If $f_\delta$ is given in place of $f$, and $u_\delta(t)$ solves (17) with $f_\delta$ in place of $f$, then
$$u_\delta(t) = e^{iAt - \int_0^t a(s)\,ds}\,u_0 - i\, e^{iAt - \int_0^t a(s)\,ds}\int_0^t e^{-iAs}e^{\int_0^s a(p)\,dp}\,ds\; f_\delta := I_1(\delta) + I_2(\delta),$$
and $\lim_{t\to\infty} I_1(\delta) = 0$, so if one sets $t = t(\delta)$ with $\lim_{\delta\to 0} t(\delta) = \infty$, then $\lim_{\delta\to 0} I_1(\delta) = 0$. One has $I_2(\delta) = I_2 + I_3$, where $I_2$ is defined in (21) and does not depend on $f_\delta$, while $I_3$ is similar to $I_2(\delta)$, with $f_\delta$ replaced by $f_\delta - f$. We have proved in (22) that $\lim_{t\to\infty} I_2 = y$. One gets the following estimate:
$$\|I_3\| \le \frac{\delta}{a(t)}\int_0^t a(s)e^{-\int_s^t a(p)\,dp}\,ds \le \frac{\delta}{a(t)}. \tag{23}$$
Let us set $t = t_\delta$, $t_\delta \to \infty$ as $\delta \to 0$, and assume that
$$\lim_{\delta\to 0} \frac{\delta}{a(t_\delta)} = 0. \tag{24}$$
If (24) holds, then the conclusion of Theorem 2 follows from the above estimates. Theorem 2 is proved. $\Box$

The above results can be used, for instance, for developing an efficient algorithm for solving ill-conditioned linear algebraic systems and for the stable solution of integral equations of the first kind.

Added in proofs: The monograph [11] contains a systematic development of the DSM.

References

[1] A. Bakushinskiĭ, A. Goncharskiĭ, Ill-posed problems: theory and applications, Kluwer, Dordrecht, 1994.

[2] C. Groetsch, The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind, Pitman, Boston, 1984.

[3] V. Ivanov, V. Vasin, V. Tanana, Theory of linear ill-posed problems, Nauka, Moscow, 1978. (In Russian)

[4] T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag, New York, 1984.

[5] V. Morozov, Methods of solving incorrectly posed problems, Springer-Verlag, New York, 1984.
[6] A. G. Ramm, Inverse Problems, Springer, New York, 2005.

[7] A. G. Ramm, Dynamical systems method for solving operator equations, Commun. Nonlinear Sci. Numer. Simul., 9(4), (2004), 383-402.

[8] A. G. Ramm, Dynamical systems method (DSM) and nonlinear problems, in: Spectral Theory and Nonlinear Analysis (J. Lopez-Gomez, ed.), World Scientific Publishers, Singapore, 2005, pp. 201-228.

[9] A. G. Ramm, Regularization of ill-posed problems with unbounded operators, J. Math. Anal. Appl., 271, (2002), 547-550.

[10] G. Vainikko, A. Veretennikov, Iterative procedures in ill-posed problems, Nauka, Moscow, 1986.

[11] A. G. Ramm, Dynamical Systems Method for Solving Operator Equations, Elsevier, Amsterdam, 2006.