Dynamical Systems Method for Solving Ill-conditioned Linear Algebraic Systems
Dynamical Systems Method for Solving Ill-conditioned Linear Algebraic Systems

Sapto W. Indratno
Department of Mathematics, Kansas State University, Manhattan, KS, USA

A. G. Ramm
Department of Mathematics, Kansas State University, Manhattan, KS, USA
ramm@math.ksu.edu

Abstract: A new method, the Dynamical Systems Method (DSM), justified recently, is applied to solving ill-conditioned linear algebraic systems (ICLAS). The DSM gives a new approach to solving a wide class of ill-posed problems. In this paper a new iterative scheme for solving ICLAS is proposed. This iterative scheme is based on the DSM solution. An a posteriori stopping rule for the proposed method is justified. This paper also gives an a posteriori stopping rule for a modified iterative scheme developed in A.G. Ramm, JMAA, 330 (2007), and proves convergence of the solution obtained by that iterative scheme.

MSC: 15A1; 47A5; 65F5; 65F

Keywords: Hilbert matrix, Fredholm integral equations of the first kind, iterative regularization, variational regularization, discrepancy principle, Dynamical Systems Method

Biographical notes: Professor Alexander G. Ramm is the author of more than 580 papers, 2 patents and 12 monographs, an editor of 3 books, and an associate editor of several mathematics and computational mathematics journals. He gave more than 135 addresses at various conferences and visited many universities in Europe, Africa, America, Asia, and Australia. He won the Khwarizmi Award in Mathematics, was Mercator Professor, Distinguished Visiting Professor supported by the Royal Academy of Engineering, invited plenary speaker at the Seventh PanAfrican Congress of Mathematicians, a London Mathematical Society speaker, distinguished HKSTAM speaker, CNRS research professor, Fulbright professor in Israel, and distinguished Foreign professor in Mexico and
Egypt. His research interests include inverse and ill-posed problems, scattering theory, wave propagation, mathematical physics, differential and integral equations, functional analysis, nonlinear analysis, theoretical numerical analysis, signal processing, applied mathematics and operator theory.

Sapto W. Indratno is currently a PhD student at Kansas State University under the supervision of Prof. Alexander G. Ramm. He is a coauthor of three accepted papers. His fields of interest are numerical analysis, optimization, stochastic processes, inverse and ill-posed problems, scattering theory, differential equations and applied mathematics.

1 Introduction

We consider a linear equation

Au = f, (1)

where A : R^m → R^m, and assume that equation (1) has a solution, possibly nonunique. According to Hadamard [9, p.9], problem (1) is called well-posed if the operator A is injective and surjective, and A^{-1} is continuous. Problem (1) is called ill-posed if it is not well-posed. Ill-conditioned linear algebraic systems arise as discretizations of ill-posed problems, such as Fredholm integral equations of the first kind,

∫_a^b k(x, t) u(t) dt = f(x), c ≤ x ≤ d, (2)

where k(x, t) is a smooth kernel. Therefore, it is of interest to develop a method for solving ill-conditioned linear algebraic systems stably. In this paper we give a method for solving linear algebraic systems (1) with an ill-conditioned matrix A. The matrix A is called ill-conditioned if κ(A) ≫ 1, where κ(A) := ‖A‖ ‖A^{-1}‖ is the condition number of A. If the null-space of A, N(A) := {u : Au = 0}, is nontrivial, then κ(A) = ∞. Let A = UΣV* be the singular value decomposition (SVD) of A, UU* = U*U = I, VV* = V*V = I, and Σ = diag(σ_1, σ_2, ..., σ_m), where σ_1 ≥ σ_2 ≥ ... ≥ σ_m ≥ 0 are the singular values of A. Applying this SVD to the matrix A in (1), one gets

f = Σ_i β_i u_i  and  y = Σ_{i: σ_i > 0} (β_i/σ_i) v_i, (3)

where β_i = ⟨u_i, f⟩. Here ⟨·,·⟩ denotes the inner product of two vectors.
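A small numerical experiment makes the role of the small σ_i in (3) concrete. The sketch below is illustrative only (the matrix size, noise level and truncation threshold are our choices, not the paper's): it compares the full sum (3) computed from noisy data with a truncated sum that drops the terms with small σ_i.

```python
import numpy as np

m = 8
# Hilbert matrix: H_ij = 1/(i + j + 1) with zero-based indices
idx = np.arange(m)
A = 1.0 / (idx[:, None] + idx[None, :] + 1.0)

x_true = np.ones(m)
f = A @ x_true

U, s, Vt = np.linalg.svd(A)
print("condition number:", s[0] / s[-1])

rng = np.random.default_rng(0)
f_noisy = f + 1e-6 * rng.standard_normal(m)     # noisy data f_delta

beta = U.T @ f_noisy
x_naive = Vt.T @ (beta / s)                     # all terms of (3): unstable
keep = s > 1e-4                                 # drop terms with small sigma_i
x_trunc = Vt.T[:, keep] @ (beta[keep] / s[keep])

print("naive error:    ", np.linalg.norm(x_naive - x_true))
print("truncated error:", np.linalg.norm(x_trunc - x_true))
```

Dividing the perturbed coefficients β_i by the tiny σ_i amplifies the noise by many orders of magnitude, while the truncated sum stays close to the exact solution; the DSM developed below achieves such stabilization without choosing a hard truncation level.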
The terms with small singular values σ_i in (3) cause instability of the solution, because the coefficients β_i are known with errors. This difficulty is essential when one deals with an ill-conditioned matrix A. Therefore a regularization is needed for solving the ill-conditioned linear algebraic system (1). There are many methods for solving (1) stably: variational regularization, quasisolutions, iterative regularization (see e.g. [2], [4], [6], [9]). The method proposed in this paper is based on the Dynamical Systems Method (DSM) developed in [9, p.76]. The DSM for
solving equation (1) with a, possibly, nonlinear operator A consists of solving the Cauchy problem

u̇(t) = Φ(t, u(t)), u(0) = u_0; u̇(t) := du/dt, (4)

where u_0 ∈ H is an arbitrary element of a Hilbert space H, and Φ is some nonlinearity, chosen so that the following three conditions hold: a) there exists a unique solution u(t) for all t ≥ 0, b) there exists u(∞), and c) Au(∞) = f. In this paper we choose Φ(t, u(t)) = −u(t) + (A*A + a(t)I)^{-1} A* f and consider the following Cauchy problem:

u̇_a(t) = −u_a(t) + [A*A + a(t)I_m]^{-1} A* f, u_a(0) = u_0, (5)

where

a(t) > 0, and a(t) ↘ 0 as t → ∞, (6)

A* is the adjoint of A and I_m is the m × m identity matrix. The initial element u_0 in (5) can be chosen arbitrarily in N(A)^⊥, where

N(A) := {u : Au = 0}. (7)

For example, one may take u_0 = 0 in (5), and then the unique solution to (5) with u(0) = 0 has the form

u(t) = ∫_0^t e^{−(t−s)} T_{a(s)}^{-1} A* f ds, (8)

where T := A*A, T_a := T + aI, and I is the identity operator. In the case of noisy data we replace the exact data f with the noisy data f_δ in (8), i.e.,

u_δ(t_δ) = ∫_0^{t_δ} e^{−(t_δ−s)} T_{a(s)}^{-1} A* f_δ ds, (9)

where t_δ is the stopping time, which will be discussed later. There are many ways to solve the Cauchy problem (5). For example, one may apply a family of Runge-Kutta methods. Numerically, the Runge-Kutta methods require an appropriate stepsize to get an accurate and stable solution. Usually the stepsizes have to be chosen sufficiently small to get such a solution. The number of steps will increase when t_δ, the stopping time, increases, see [2]. Therefore the computation time will increase significantly. Since lim_{δ→0} t_δ = ∞, as was proved in [9], the family of Runge-Kutta methods may be less efficient for solving the Cauchy problem (5) than the method proposed in this paper. We give a simple iterative scheme, based on the DSM, which produces a stable solution to equation (1).
The novel points in our paper are iterative schemes (12) and (13) (see below), which are constructed on the basis of formulas (8) and (9), and a modification of the iterative scheme given in [8]. Our stopping rules for iterative schemes (13) and (20) are given in (16) and (85), respectively (see below). In [9, p.76] the function a(t) is assumed to be a slowly decaying monotone function. In this
paper, instead of using a slowly decaying continuous function a(t), we use the following piecewise-constant function:

a(t) = Σ_{j=0}^{∞} α q^{j+1} χ_{(t_j, t_{j+1}]}(t), q ∈ (0, 1), t_j = j ln(1/q), (10)

where t_0 = 0, α > 0, and

χ_{(t_j, t_{j+1}]}(t) = { 1, t ∈ (t_j, t_{j+1}]; 0, otherwise. (11)

The parameter α in (10) is chosen so that assumption (17) (see below) holds. This assumption plays an important role in the proposed iterative scheme. Definition (10) allows one to write (8) in the form

u_{n+1} = q u_n + (1 − q) T_{αq^{n+1}}^{-1} A* f, u_0 = 0. (12)

A detailed derivation of the iterative scheme (12) is given in Section 2. When the data f are contaminated by some noise, we use f_δ in place of f in (8), and get the iterative scheme

u^δ_{n+1} = q u^δ_n + (1 − q) T_{αq^{n+1}}^{-1} A* f_δ, u^δ_0 = 0. (13)

We always assume that

‖f_δ − f‖ ≤ δ, (14)

where f_δ are the noisy data, which are known, while f is unknown, and δ is the level of noise. Here and throughout this paper the notation ‖z‖ denotes the l²-norm of the vector z ∈ R^m. In this paper a discrepancy-type principle (DP) is proposed to choose the stopping index of iteration (13). This DP is based on the discrepancy principle for the DSM developed in [11], where the stopping time t_δ is obtained by solving the following nonlinear equation:

∫_0^{t_δ} e^{−(t_δ−s)} a(s) ‖Q_{a(s)}^{-1} f_δ‖ ds = Cδ, C ∈ (1, 2], (15)

where Q_a := AA* + aI_m. It is a non-trivial task to obtain the stopping time t_δ satisfying (15). In this paper we propose a discrepancy-type principle, based on (15), which can be easily implemented numerically: iterative scheme (13) is stopped at the first integer n_δ satisfying the inequalities:

Σ_{j=0}^{n_δ−1} (q^{n_δ−j−1} − q^{n_δ−j}) α q^{j+1} ‖Q_{αq^{j+1}}^{-1} f_δ‖ ≤ Cδ^ε < Σ_{j=0}^{n−1} (q^{n−j−1} − q^{n−j}) α q^{j+1} ‖Q_{αq^{j+1}}^{-1} f_δ‖, 1 ≤ n < n_δ, (16)
and it is assumed that

(1 − q) α q ‖Q_{αq}^{-1} f_δ‖ > Cδ^ε, C > 1, ε ∈ (0, 1), α > 0. (17)

We prove in Section 2 that using discrepancy-type principle (16) one gets the convergence:

lim_{δ→0} ‖u^δ_{n_δ} − y‖ = 0, (18)

where u^δ_n is defined in (13). For other versions of discrepancy principles for the DSM we refer the reader to [6], [1].

In this paper we assume that A is bounded. If the operator A is unbounded, then f_δ may not belong to the domain of A*. In this case the expression A* f_δ is not defined. In [7], [8] and [1] solving (1) with unbounded operators is discussed. In these papers the unbounded operator A is assumed to be a linear, closed, densely defined operator in a Hilbert space. Under these assumptions one may use the operator A*(AA* + aI)^{-1} in place of T_a^{-1} A*. This operator is defined for any f_δ in the Hilbert space. In [8] an iterative scheme with a constant regularization parameter is given:

u^δ_{n+1} = a T_a^{-1} u^δ_n + T_a^{-1} A* f_δ, (19)

but the stopping rule, which produces a stable solution of equation (1) by this iterative scheme, has not been discussed in [8]. In this paper the constant regularization parameter a in iterative scheme (19) is replaced with the geometric sequence {α q^n}_{n=1}^∞, α > 0, q ∈ (0, 1), i.e.

u^δ_{n+1} = α q^n T_{αq^n}^{-1} u^δ_n + T_{αq^n}^{-1} A* f_δ. (20)

Stopping rule (85) (see below) is used for this iterative scheme. Without loss of generality we use α = 1 in (20). The convergence analysis of this iterative scheme is presented in Section 3. In Section 4 some numerical experiments are given to illustrate the efficiency of the proposed methods.

2 Derivation of the proposed method

In this section we give a detailed derivation of iterative schemes (12) and (13). Let us denote by y ∈ R^m the unique minimal-norm solution of equation (1). Throughout this paper we denote T_{a(t)} := A*A + a(t)I_m, where I_m is the identity operator in R^m, and a(t) is given in (10).

Lemma 2.1. Let g(x) be a continuous function on (0, ∞), and let c > 0 and q ∈ (0, 1) be constants. If

lim_{x→0+} g(x) = g(0) := g_0, (21)

then

lim_{n→∞} Σ_{j=1}^{n} (q^{n−j} − q^{n+1−j}) g(c q^{j+1}) = g_0. (22)
6 Proof. Let and Then ω j (n) := n j n+1 j, ω j (n) >, (3) l 1 F l (n) := ω j (n)g(c j ). (4) j=1 n F n+1 (n) g F l (n) + ω j (n)g(c j ) g. Take ɛ > arbitrary small. For sufficiently large l(ɛ) one can choose n(ɛ), such that F l(ɛ) (n) ɛ, n > n(ɛ), j=l because lim n n =. Fix l = l(ɛ) such that g(c j ) g ɛ This is possible because of (1). One has for j > l(ɛ). and n j=l(ɛ) ω j (n)g(c j ) g F l(ɛ) (n) ɛ, n > n(ɛ) ɛ n n ω j (n) g(c j ) g + ω j (n) 1 g j=l(ɛ) n j=l(ɛ) ω j (n) + n l(ɛ) g ɛ + g n l(ɛ) ɛ, if n is sufficiently large. Here we have used the relation n ω j (n) = 1 n+1 l. j=l Since ɛ > is arbitrarily small, Lemma.1 is proved. j=l(ɛ) Let us define u n := tn e (tn s) T 1 a (n) (s) A fds, t n = n ln(), (, 1). (5) 6
7 Note that u n = t tn e (tn s) T 1 a (n) (s) A fds + e (tn s) T 1 a (n) (s) A fds t t = e (tn t) e (t s) T 1 a(s) A fds + = e (tn t) u + tn Using definition (1), one gets tn t e (tn s) T 1 a (n) (s) A fds. t e (tn s) T 1 a (n) (s) A fds u n = e (tn t) u + [1 e (tn t) ]T 1 α na f = n u + (1 n 1 )T α na f. Therefore, (5) can be rewritten as iterative scheme (1). Lemma.. Let u n be defined in (1) and Ay = f. Then u n y n ( y + n j 1 n j) α j+1 T 1 α y, n 1, (6) j+1 and j= Proof. By definitions (5) and (1) we obtain tn ( u n = e (tn s) T 1 n a(s) A fds = u n y as n. (7) j= From (8) and the euation Ay = f, one gets: ( ) n n u n = j+1 j T 1 α A f j+1 j= ( n = j= j= n j+1 j n j+1 j ) T 1 α j+1 (T α j+1 α j+1 I m )y j= ) T 1 α j+1 A f. (8) ( = n j 1 n j) ( y n j 1 n j) α j+1 T 1 α y j+1 = y n ( y n j 1 n j) α j+1 T 1 α y. j+1 j= Thus, estimate (6) follows. To prove (7), we apply Lemma.1 with g(a) := y. Since y N (A), it follows from the spectral theorem that a T 1 a a lim a g (a) = lim a (a + s) d E sy, y = P N (A) y =, 7
where E_s is the resolution of the identity corresponding to A*A, and P is the orthogonal projector onto N(A). Thus, by Lemma 2.1, (27) follows.

Let us discuss iterative scheme (13). The following lemma gives an estimate of the difference of the solutions u^δ_n and u_n.

Lemma 2.3. Let u_n and u^δ_n be defined in (12) and (13), respectively. Then

‖u^δ_n − u_n‖ ≤ w_n/(1 − q^{3/2}), n ≥ 0, (29)

where w_n := (1 − q) δ/(2√(α q^n)).

Proof. Let H_n := ‖u^δ_n − u_n‖. Then from the definitions of u^δ_n and u_n we get the estimate

H_{n+1} ≤ q ‖u^δ_n − u_n‖ + (1 − q) ‖T_{αq^{n+1}}^{-1} A* (f_δ − f)‖ ≤ q H_n + w_{n+1}. (30)

Let us prove inequality (29) by induction. For n = 0 one has u_0 = u^δ_0 = 0, so (29) holds. For n = 1 one has ‖u^δ_1 − u_1‖ ≤ w_1 ≤ w_1/(1 − q^{3/2}), so (29) holds. If (29) holds for n ≤ k, then for n = k + 1 one has

H_{k+1} ≤ q H_k + w_{k+1} ≤ q w_k/(1 − q^{3/2}) + w_{k+1} = (q^{3/2}/(1 − q^{3/2}) + 1) w_{k+1} = w_{k+1}/(1 − q^{3/2}), (31)

where we used w_k = √q w_{k+1}. Hence (29) is proved for all n ≥ 0.

2.1 Stopping criterion

In this section we give a stopping rule for the iterative scheme (13). Let Q := AA*, Q_a := Q + aI_m, and

G_n := ∫_0^{t_n} e^{−(t_n−s)} a(s) ‖Q_{a(s)}^{-1} f_δ‖ ds = Σ_{j=0}^{n−1} (q^{n−j−1} − q^{n−j}) α q^{j+1} ‖Q_{αq^{j+1}}^{-1} f_δ‖, n ≥ 1, (32)

where t_n = n ln(1/q), q ∈ (0, 1) and α > 0. Then stopping rule (16) can be rewritten as

G_{n_δ} ≤ Cδ^ε < G_n, 1 ≤ n < n_δ, ε ∈ (0, 1), C > 1, G_1 > Cδ^ε. (33)
Note that

G_{n+1} = Σ_{j=0}^{n} (q^{n−j} − q^{n+1−j}) α q^{j+1} ‖Q_{αq^{j+1}}^{-1} f_δ‖
= q Σ_{j=0}^{n−1} (q^{n−j−1} − q^{n−j}) α q^{j+1} ‖Q_{αq^{j+1}}^{-1} f_δ‖ + (1 − q) α q^{n+1} ‖Q_{αq^{n+1}}^{-1} f_δ‖
= q G_n + (1 − q) α q^{n+1} ‖Q_{αq^{n+1}}^{-1} f_δ‖,

so

G_n = q G_{n−1} + (1 − q) α q^n ‖Q_{αq^n}^{-1} f_δ‖, n ≥ 1, G_0 = 0. (34)

Lemma 2.4. Let G_n be defined in (34). Then

G_n ≤ ((1 + √q)/2) √(α q^n) ‖y‖ + δ, n ≥ 1, q ∈ (0, 1). (35)

Proof. We use the identity

a Q_a^{-1} = I_m − A T_a^{-1} A*, a > 0, T := A*A, T_a := T + aI_m,

and the estimates ‖a Q_a^{-1}‖ ≤ 1, ‖f_δ − f‖ ≤ δ, and ‖A T_a^{-1}‖ ≤ 1/(2√a), where Q := AA*, Q_a := Q + aI_m. Together with the relations f = Ay and Q_a^{-1} A = A T_a^{-1}, these yield

G_n = q G_{n−1} + (1 − q) α q^n ‖Q_{αq^n}^{-1} f_δ‖
= q G_{n−1} + (1 − q) α q^n ‖Q_{αq^n}^{-1} (f_δ − f + f)‖
≤ q G_{n−1} + (1 − q) α q^n ‖Q_{αq^n}^{-1} (f_δ − f)‖ + (1 − q) α q^n ‖Q_{αq^n}^{-1} f‖
≤ q G_{n−1} + (1 − q) δ + (1 − q) α q^n ‖Q_{αq^n}^{-1} A y‖
= q G_{n−1} + (1 − q) δ + (1 − q) α q^n ‖A T_{αq^n}^{-1} y‖
≤ q G_{n−1} + (1 − q) δ + ((1 − q)/2) √(α q^n) ‖y‖. (36)
10 Therefore, G n δ (G δ) + α y, n 1, G =. (37) Let us prove relation (35) by induction. From relation (37) we get α G 1 δ δ + y δ α 1 y 1 α y. (38) Thus, for n = 1 relation (35) holds. Suppose that 1 G n δ 1 α n y, 1 n k. (39) Then by ineualities (37) and (39) we obtain G k+1 δ (G k δ) + α k y 1 1 α k y + α k y = 1 α k y = 1 α k α k+1 y α k α k+1 y. (4) Thus, relation (35) is proved. Lemma.5. Let G n be defined in (34), (, 1), and α > be chosen such that G 1 > Cδ ε, ε (, 1), C > 1. Then there exists a uniue integer n c such that G nc 1 < G nc and G nc > G nc+1, n c 1. (41) Moreover, Proof. From Lemma.4 we have G n G n+1 < G n, n n c. (4) 1 1 α n y + δ, n 1, (, 1). Since G 1 > Cδ ε and lim sup n G n δ < Cδ ε, it follows that there exists an integer n c 1 such that G nc 1 < G nc and G nc > G nc+1. Let us prove the monotonicity of G n, for n n c. We have G nc+1 G nc <. Using definition (34), we get G nc+1 G nc = G nc + (1 )α nc+1 Q 1 α f nc+1 δ G nc ( ) = (1 ) α nc+1 Q 1 α f nc+1 δ G nc <. (43) 1
11 This implies Note that α nc+1 α Q 1 nc+1 f δ G nc <. (44) G n+1 G n = (1 )(α n+1 Q 1 α n+1 f δ G n ). Therefore, to prove the monotonicity of G n for n n c, one needs to prove the ineuality α n+1 Q 1 α n+1 f δ G n <, n n c. This ineuality is a conseuence of the following lemma: Lemma.6. Let G n be defined in (34), and (44) holds. Then α n+1 Q 1 α n+1 f δ G n <, n n c. (45) Proof. Let us prove Lemma.6 by induction. Let and D n := α n+1 Q 1 α n+1 f δ G n h(a) := a Q 1 a f δ. The function h(a) is a monotonically growing function of a, a >. Indeed, by the spectral theorem, we get h(a 1 ) = a 1 Q 1 a 1 f δ = a 1 (a 1 + s) d F sf δ, f δ a (a + s) d F sf δ, f δ = a Q 1 a f δ = h(a ), (46) where F s is the resolution of the identity corresponding to Q := AA, because a 1 a (a 1+s) (a +s) if < a 1 < a and s. By the assumption we have D nc = α nc+1 Q 1 α f nc+1 δ G nc <. Thus, relation (45) holds for n = n c. For n = n c + 1 we get D nc+1 = α nc+ Q 1 α nc+ f δ (1 )α nc+1 Q 1 α nc+1 f δ G nc = h(α nc+ ) h(α nc+1 ) + h(α nc+1 ) G nc = h(α nc+ ) h(α nc+1 ) + ( h(α nc+1 ) G nc ) (47) = h(α nc+ ) h(α nc+1 ) + D nc = h(α nc+ ) h(α nc+1 ) + D nc <. Here we have used the monotonicity of the function h(a). Thus, relation (45) holds for n = n c + 1. Suppose D n <, n c n n c + k 1. 11
12 This, together with the monotonically growth of the function h(a) := a Q 1 f δ, yields D nc+k = α nc+k+1 Q 1 α f nc+k+1 δ G nc+k = h(α nc+k+1 ) (1 ) h(α nc+k ) G nc+k 1 = h(α nc+k+1 ) h(α nc+k ) + ( h(α nc+k ) G nc+k 1) = h(α nc+k+1 ) h(α nc+k ) + D nc+k 1 = h(α nc+k+1 ) h(α nc+k ) + D nc+k 1 <. (48) Thus, D n <, n 1. Lemma.6 is proved. Let us continue with the proof of Lemma.5. From relation (34) we have G n+1 G n = ( 1)G n + (1 )α n+1 Q 1 α f n+1 δ ( ) = (1 ) α n+1 Q 1 α f n+1 δ G n. Using assumption (44) and applying Lemma.6, one gets G n+1 G n <, n n c. Let us prove that the integer n c is uniue. Suppose there exists another integer n d such that G nd 1 < G nd and G nd > G nd +1. One may assume without loss of generality that n c < n d. Since G n > G n+1, n n c, and n c < n d, it follows that G nd 1 > G nd. This contradicts the assumption G nd 1 < G nd. Thus, the integer n c is uniue. Lemma.5 is proved. Lemma.7. Let G n be defined in (34). If α is chosen such that relations G 1 > Cδ ε, C > 1, ε (, 1), holds then there exists a uniue n δ satisfying ineuality (33). Proof. Let us show that there exists an integer n δ so that ineuality (33) holds. Applying Lemma.4, one gets lim sup G n δ. (49) n Since G 1 > Cδ ε and lim sup n G n δ < Cδ ε, it follows that there exists an index n δ satisfying stopping rule (33). The uniueness of the index n δ follows from the monotonicity of G n, see Lemma.5. Thus, Lemma.7 is proved. Lemma.8. Let Ay = f, y N (A), and n δ be chosen by rule (33). Then so lim δ n δ =, (, 1), (5) lim n δ =. (51) δ 1
13 Proof. From rule (33) and relation (34) we have Cδ ε + (1 )α n δ Q 1 α n δ f δ < G nδ 1 + (1 )α n δ Q 1 α n δ f δ = G nδ Cδ ε, (5) so (1 )α n δ Q 1 α n δ f δ (1 )Cδ ε. (53) Thus, α n δ Q 1 α n δ f δ < Cδ ε. (54) Note that if f then there exists a λ > such that F λ f, F λ f, f := ξ >, (55) where ξ is a constant which does not depend on δ, and F s is the resolution of the identity corresponding to the operator Q := AA. Let For a fixed number c 1 > we obtain h(δ, c 1 ) = c 1 Q c1 f δ = h(δ, α) := α Q 1 α f δ. c 1 (c 1 + s) d F sf δ, f δ λ c 1 (c 1 + s) d F sf δ, f δ c λ 1 (c 1 + λ ) d F s f δ, f δ = c 1 F λ f δ (c 1 + λ ), δ >. (56) Since F λ is a continuous operator, and f f δ < δ, it follows from (55) that Therefore, for the fixed number c 1 > we get lim F λ f δ, f δ = F λ f, f >. (57) δ h(δ, c 1 ) c > (58) for all sufficiently small δ >, where c is a constant which does not depend on δ. For example one may take c = ξ provided that (55) holds. Let us derive from estimate (54) and the relation (58) that n δ as δ. From (54) we have h(δ, α n δ ) (Cδ ε ). Therefore, lim h(δ, α n δ ) =. (59) δ Suppose lim δ nδ. Then there exists a subseuence δ j such that where c 1 is a constant. By (58) we get α n δ j c1 >, (6) h(δ j, α n δ j ) > c >, δ j as j. (61) This contradicts relation (59). Thus, lim δ n δ =. Lemma.8 is proved. 13
14 Lemma.9. Let n δ be chosen by rule (33). Then δ α n δ as δ. (6) Proof. Relation (35), together with stopping rule (33), implies Cδ ε 1 < G nδ 1 1 α n δ 1 y + δ. (63) Then This yields 1 α y n δ 1 (1 )δ ε, ε (, 1). (64) (C 1) lim δ δ α n δ = lim δ Lemma.9 is proved. δ α n δ 1 lim δ δ 1 ε (1 y =. (65) )(C 1) Theorem.1. Let y N (A), f δ f δ, f δ > Cδ ε, C > 1, ε (, 1). Suppose n δ is chosen by rule (33). Then where u δ n is given in (13). lim δ uδ n δ y =, (66) Proof. Using Lemma. and Lemma.3, we get the estimate u δ n δ y u δ n δ u nδ + u nδ y 1 (1 ) δ 3/ α n + u nδ y δ := I 1 + I, δ (67) where I 1 := (1 ) 1 3/ α and I n δ := u nδ y. Applying Lemma.9, one gets lim δ I 1 =. Since n δ as δ, it follows from Lemma. that lim δ I =. Thus, lim δ u δ n δ y =. Theorem.1 is proved. The algorithm based on the proposed method can be stated as follows: Step 1. Assume that (14) holds. Choose C (1, ) and ε (.9, 1). Fix (, 1), and choose α > so that (17) holds. Set n = 1, and u =. Step. Use iterative scheme (13) to calculate u n. Step 3. Calculate G n, where G n is defined in (34). Step 4. If G n Cδ ε then stop the iteration, set n δ = n, and take u δ n δ as the approximate solution. Otherwise set n = n + 1, and go to Step 1. 14
3 Iterative scheme

In [8] the following iterative scheme for the exact data f is given:

u_{n+1} = a T_a^{-1} u_n + T_a^{-1} A* f, u_1 ∈ N(A)^⊥, (68)

where a is a fixed positive constant. It is proved in [8] that iterative scheme (68) gives the relation

lim_{n→∞} ‖u_n − y‖ = 0, y ⊥ N(A).

In the case of noisy data the exact data f in (68) are replaced with the noisy data f_δ, i.e.

u^δ_{n+1} = a T_a^{-1} u^δ_n + T_a^{-1} A* f_δ, u_1 ∈ N(A)^⊥, (69)

where ‖f_δ − f‖ ≤ δ for sufficiently small δ > 0. It is proved in [8] that there exists an integer n_δ such that

lim_{δ→0} ‖u^δ_{n_δ} − y‖ = 0, (70)

where u^δ_n is the approximate solution corresponding to the noisy data. But a method for choosing the integer n_δ has not been discussed. In this section we modify iterative scheme (68) by replacing the constant parameter a in (68) with a geometric sequence {q^n}_{n=1}^∞, i.e.

u_{n+1} = q^n T_{q^n}^{-1} u_n + T_{q^n}^{-1} A* f, u_1 = 0, (71)

where q ∈ (0, 1). The initial approximation u_1 is chosen to be 0. In general one may choose an arbitrary initial approximation u_1 in the set N(A)^⊥. If the data are noisy, then the exact data f in (71) are replaced with the noisy data f_δ, and iterative scheme (69) is replaced with:

u^δ_{n+1} = q^n T_{q^n}^{-1} u^δ_n + T_{q^n}^{-1} A* f_δ, u^δ_1 = 0. (72)

We prove convergence of the solution obtained by iterative scheme (71) in Theorem 3.1 for arbitrary q ∈ (0, 1), i.e.

lim_{n→∞} ‖u_n − y‖ = 0, q ∈ (0, 1).

In the case of noisy data we use discrepancy-type principle (85) to obtain the integer n_δ such that

lim_{δ→0} ‖u^δ_{n_δ} − y‖ = 0. (73)

We prove relation (73), for arbitrary q ∈ (0, 1), in Theorem 3.6. Let us prove that the sequence u_n, defined by iterative scheme (71), converges to the minimal-norm solution y of equation (1).

Theorem 3.1. Consider iterative scheme (71). Let y ⊥ N(A). Then

lim_{n→∞} ‖u_n − y‖ = 0. (74)
16 Proof. Consider the identity y = ata 1 y + Ta 1 A f, Ay = f. (75) Let w n := u n y and B n := n T 1 n. Then w n+1 = B n w n, w 1 = y u 1 = y. One uses (75) and gets y u n = B B n... B 1 w 1 = B B n... B 1 y ( n ) = + s n + s... d E s y, y + s ( ) ( ) n ( ) = + s n... d E s y, y + s + s n ( + s) n d E sy, y, (76) where E s is the resolution of the identity corresponding to the operator T := A A. Here we have used the identity (75) and the monotonicity of the function φ(x) := x (x+s), s. From estimate (76) we derive relation (74). Indeed, write n b ( + s) n d E sy, y = n ( + s) n d E sy, y + b n ( + s) n d E sy, y, (77) where b is a sufficiently small number which will be chosen later. For any fixed b > one has < 1 if s b. Therefore, it follows that +s +b On the other hand one has b b n ( + s) n d E sy, y as n. (78) n ( + s) n d E sy, y b d E s y, y. (79) Since y N (A), one has lim b b d E sy, y =. Therefore, given an arbitrary number ɛ > one can choose b(ɛ) such that b(ɛ) n ( + s) n d E sy, y < ɛ. (8) Using this b(ɛ), one chooses sufficiently large n(ɛ) such that b(ɛ) n ( + s) n d E sy, y < ɛ, n > n(ɛ). (81) Since ɛ > is arbitrary, Theorem 3.1 is proved. 16
17 As we mentioned before if the exact data f are contaminated by some noise then iterative scheme (7) is used, where f δ f δ. Note that u δ n+1 u n+1 n T 1 n (uδ n u n ) + δ n uδ n u n + δ n. (8) To prove the convergence of the solution obtained by iterative scheme (7), we need the following lemmas: Lemma 3.. Let u n and u δ n be defined in (71) and (7), respectively. Then u δ n u n 1 δ, n 1. (83) n Proof. Let us prove relation (83) by induction. For n = 1 one has u δ 1 u 1 =. Thus, for n = 1 the relation holds. Suppose u δ l u l 1 δ, 1 l k. (84) l Then from (8) and (84) we have Thus, u δ k+1 u k+1 u δ k u k + Lemma 3. is proved. δ k δ (1 ). k+1 1 u δ n u n δ (1 ) n, n 1. δ + δ k k Let us formulate our stopping rule: the iteration in iterative scheme (7) is stopped at the first integer n δ satisfying AT 1 n δ A f δ f δ Cδ ε < AT 1 n A f δ f δ, 1 n < n δ, C > 1, ε (, 1), (85) and it is assumed that f δ > Cδ ε. Lemma 3.3. Let u δ n be defined in (7), and W n := AT 1 n A f δ f δ. Then Proof. Note that W n+1 W n, n 1. (86) W n = AA Q 1 n f δ f δ = (Q n n I m )Q 1 n f δ f δ = n Q 1 n f δ, (87) where Q := AA, and Q a := Q + ai m. Using the spectral theorem, one gets W n+1 = (n+1) ( n+1 + s) d F sf δ, f δ 17 n ( n + s) d F sf δ, f δ = W n, (88)
18 where F s is the resolution of the identity corresponding to the operator Q := AA. Here we have used the monotonicity of the function g(x) = x (x+s), s. Thus, W n+1 W n, n 1. (89) Lemma 3.3 is proved. Lemma 3.4. Let u δ n be defined in (7), and f δ > Cδ ε, ε (, 1), C > 1. Then there exists a uniue index n δ such that ineuality (85) holds. Proof. Let e n := AT 1 n A f δ f δ. Then where Q a := AA + ai. Therefore, e n = n Q 1 n f δ, (9) e n n Q 1 n (f δ f) + n Q 1 n f f δ f + n Q 1 n Ay δ + n y, (91) where the estimate Q 1 a A = ATa 1 1 a lim sup e n δ. n was used. Thus, This shows that the integer n δ, satisfying (85), exists. The uniueness of n δ follows from its definition. Lemma 3.5. Let u δ n be defined in (7). If n δ is chosen by rule (85) then Proof. From (91) we have lim δ AT 1 A f δ f δ δ + δ n δ =. (9) e 1, (93) where e 1 := u 1 y = y. It follows from stopping rule (85) and estimate (93) that Therefore, and so Cδ ε AT 1 n δ 1 A f δ f δ n δ 1 e 1 + δ. (94) (C 1) δ ε n δ 1 e 1, (95) 1 e 1, ε (, 1). (96) n δ 1 (C 1)δε 18
19 This implies Thus, δ n δ δ n δ = δ n δ 1 e 1 δ 1/ (C 1)δ ε = as δ. Lemma 3.5 is proved. e 1 1/ (C 1) δ1 ε. (97) The proof of convergence of the solution obtained by iterative scheme (7) is given in the following theorem: Theorem 3.6. Let u δ n be defined in (7), y N (A), f δ > Cδ ε, ε (, 1), C > 1, (, 1). If n δ is chosen by rule (85), then u δ n y as δ. (98) Proof. From Lemma 3. we get the following estimate: u δ n δ y u δ n δ u nδ + u nδ y 1 δ n + u nδ y := I 1 +I, (99) δ 1 δ where I 1 := and I n δ := u nδ y. By Lemma 3.5 one gets I 1 as δ. To prove lim δ I = one needs the relation lim δ n δ =. This relation is a conseuence of the following lemma: Lemma 3.7. If n δ is chosen by rule (85), then n δ as δ, (1) so Proof. Note that lim n δ =. (11) δ AT 1 a A f δ f δ = AA Q 1 f δ f δ = (AA + ai m ai m )Q 1 f δ f δ a = f δ aq 1 a f δ f δ = aq 1 a f δ, where a >, Q := AA and Q a := Q + ai. From stopping rule (85) we have AT 1 n δ A f δ f δ Cδ ε. Thus, 1 lim AT δ n δ A f δ f δ = lim n δ Q 1 n δ f δ =. (1) δ Using an argument given in the proof of Lemma.8, (see formulas (54)-(61) in which α = 1), one gets lim δ n δ =, so lim δ n δ =. Lemma 3.7 is proved. Lemma 3.7 and Theorem 3.1 imply I as δ. Thus, Theorem 3.6 is proved. a 19
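Scheme (72) with stopping rule (85) can be sketched as follows. The sketch evaluates the discrepancy ‖A T_a^{-1} A* f_δ − f_δ‖ as a ‖(AA* + aI)^{-1} f_δ‖, using the identity A T_a^{-1} A* = I − a Q_a^{-1}; the default parameter values and the test problem are illustrative choices, not the paper's.

```python
import numpy as np

def is2(A, f_delta, delta, q=0.25, C=1.1, eps=0.99, max_iter=500):
    """Sketch of iterative scheme (72) with stopping rule (85)."""
    m = A.shape[1]
    T = A.T @ A                        # T = A*A
    Q = A @ A.T                        # Q = AA*
    Astar_f = A.T @ f_delta
    u = np.zeros(m)                    # u_1 = 0
    tol = C * delta ** eps
    for n in range(1, max_iter + 1):
        a = q ** n
        # scheme (72): u_{n+1} = (T + aI)^{-1} (a u_n + A* f_delta)
        u = np.linalg.solve(T + a * np.eye(m), a * u + Astar_f)
        # ||A T_a^{-1} A* f_delta - f_delta|| = a ||(AA* + aI)^{-1} f_delta||
        discrepancy = a * np.linalg.norm(
            np.linalg.solve(Q + a * np.eye(Q.shape[0]), f_delta))
        if discrepancy <= tol:         # rule (85)
            return u, n
    return u, max_iter
```

Note the design difference from scheme (13): here each step solves with the current T_{q^n} applied to both the previous iterate and the data, whereas (13) applies T_a^{-1} only once per iteration.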
4 Numerical experiments

In all the experiments we measure the accuracy of the approximate solutions by the relative error

Rel.Err = ‖u^δ_{n_δ} − y‖/‖y‖,

where ‖·‖ is the Euclidean norm in R^n. The exact data are perturbed by noise so that ‖f_δ − f‖ ≤ δ, where f_δ = f + δ e/‖e‖, δ is the noise level, and e ∈ R^n is the noise taken from the Gaussian distribution with mean 0 and standard deviation 1. The MATLAB routine randn with seed 15 is used to generate the vector e. The iterative schemes (13) and (72) will be denoted by IS1 and IS2, respectively. In the iterative scheme IS1, for fixed q ∈ (0, 1), one needs to choose a sufficiently large α > 0 so that inequality (17) holds; for example, one may choose α ≥ 1. The numbers of iterations of IS1 and IS2 are denoted by Iter1 and Iter2, respectively. We compare the results obtained by the proposed methods with the results obtained by the variational regularization method (VR). In the VR method we use Newton's method for solving the equation for the regularization parameter. In [4] the nonlinear equation

‖A u_{VR}(a) − f_δ‖² = (Cδ)², C = 1.1, (103)

where u_{VR}(a) := T_a^{-1} A* f_δ, is solved by Newton's method. In this paper the initial value of the regularization parameter is taken to be a_0 = α q^{k_δ}, where k_δ is the first integer such that Newton's method for solving (103) converges. We stop the iteration of Newton's method at the first integer satisfying the inequality |‖A T_a^{-1} A* f_δ − f_δ‖² − (Cδ)²| ≤ 10^{-3} (Cδ)². The number of iterations needed to complete a convergent Newton's method is denoted by Iter_VR.

4.1 Ill-conditioned linear algebraic systems

Table 1: Condition number κ(A) = ‖A‖ ‖A^{-1}‖ of Hilbert matrices of several dimensions.
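The noise model f_δ = f + δ e/‖e‖ and the relative error are straightforward to reproduce. The helper below uses NumPy's generator in place of MATLAB's randn, so the seed value is an arbitrary stand-in for the one used in the paper.

```python
import numpy as np

def add_noise(f, delta, seed=0):
    """Perturb exact data as in the experiments: f_delta = f + delta*e/||e||,
    so that ||f_delta - f|| = delta exactly.  The seed is an arbitrary
    stand-in for the MATLAB randn seed used in the paper."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(len(f))
    return f + delta * e / np.linalg.norm(e)

def rel_err(u, y):
    """Relative error reported in all the tables."""
    return np.linalg.norm(u - y) / np.linalg.norm(y)
```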
Consider the following system:

H^{(m)} u = f, (104)

where

H^{(m)}_{ij} = 1/(i + j − 1), i, j = 1, 2, ..., m,

is the Hilbert matrix of dimension m. System (104) is an example of a severely ill-posed problem if m > 1, because the condition number of the Hilbert matrix increases exponentially as m grows, see Table 1. The minimal eigenvalue of the Hilbert matrix of dimension m can be obtained from the formula

λ_min(H^{(m)}) = 2^{15/4} π^{3/2} √m (√2 + 1)^{−(4m+4)} (1 + o(1)). (105)

This formula is proved in [3]. Since κ(H^{(m)}) = λ_max(H^{(m)})/λ_min(H^{(m)}), it follows from (105) that the condition number grows as O(e^{3.5255m}/√m). The following exact solution is used to test the proposed methods: y ∈ R^m, where y_k = 0.5k, k = 1, 2, ..., m. The Hilbert matrix of dimension m = is used in the experiments. This matrix has condition number of order 10^{33}, so it is a severely ill-conditioned matrix. In Table 2 one can see that the number of iterations of the iterative schemes IS1 and IS2 increases as the value of q increases. The relative errors start to increase at q = 0.15. Based on these observations, we suggest choosing the parameter q in the interval (0.15, 0.5).

Table 2: Hilbert matrix problem: the number of iterations and the relative errors with respect to the parameter q (α = 1, δ = 10^{-2}); columns: q, then Rel.Err and Iter1 for IS1, then Rel.Err and Iter2 for IS2.

In Table 3 the results of the experiments with various values of δ are presented. Here the parameter ε was 0.99. The geometric sequence {0.5^n}_{n=1}^∞ was used in the iterative schemes IS1 and IS2. The parameter C in (16) and (85) was 1.1. The parameter k_δ in the variational regularization method was 1. One can see that the relative errors of IS1 and IS2 are smaller than those of the VR method. The relative error decreases as the noise level decreases, as can be seen in the same table. This shows that the proposed method produces stable solutions.
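The exponential growth of the condition number predicted by (105) is easy to observe directly; the dimensions chosen below are illustrative.

```python
import numpy as np

def hilbert_matrix(m):
    """m x m Hilbert matrix, H_ij = 1/(i + j - 1) with one-based indices."""
    i = np.arange(1, m + 1)
    return 1.0 / (i[:, None] + i[None, :] - 1.0)

# kappa(H^(m)) grows roughly like e^{3.5255 m}/sqrt(m): already for
# moderate m the matrix is severely ill-conditioned.
for m in (4, 8, 12):
    print(m, np.linalg.cond(hilbert_matrix(m)))
```

Beyond m of roughly 15, the computed condition number saturates near the reciprocal of machine precision, which is why specialized test problems like this one are standard benchmarks for regularization methods.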
Table 3: ICLAS with Hilbert matrix: the relative errors and the number of iterations for various noise levels δ, for IS1 (Rel.Err, Iter1), IS2 (Rel.Err, Iter2) and VR (Rel.Err, Iter_VR).

4.2 Fredholm integral equations of the first kind (FIEFK)

Here we consider two Fredholm integral equations:

a) f(s) = ∫_{−3}^{3} k(t − s) u(t) dt, (106)

where

k(z) = { 1 + cos(πz/3), |z| < 3; 0, |z| ≥ 3, (107)

and

f(z) = { (6 − |z|)[1 + (1/2) cos(πz/3)] + (9/(2π)) sin(π|z|/3), |z| ≤ 6; 0, |z| > 6. (108)

b) f(s) = ∫_0^1 k(s, t) u(t) dt, s ∈ (0, 1), (109)

where

k(s, t) = { s(t − 1), s < t; t(s − 1), s ≥ t, (110)

and

f(s) = (s³ − s)/6. (111)

Problem a) is discussed in [5], where the solution to this problem is u(x) = k(x). The second problem is taken from [1], where the solution is u(x) = x. The Galerkin method is used to discretize the integrals (106) and (109). For the basis functions we use the following orthonormal box functions:

φ_i(s) := { √(m/c_1), s ∈ [s_{i−1}, s_i]; 0, otherwise, (112)

ψ_i(t) := { √(m/c_2), t ∈ [t_{i−1}, t_i]; 0, otherwise, (113)
where s_i = d_1 + i d_2/m, t_i = d_3 + i d_4/m, i = 0, 1, 2, ..., m. In problem a) the parameters c_1, c_2, d_1, d_2, d_3 and d_4 are set to 12, 6, −6, 12, −3 and 6, respectively. In the second problem we use d_1 = d_3 = 0 and c_1 = c_2 = d_2 = d_4 = 1. Here we approximate the solution u(t) by ũ = Σ_{j=1}^m c_j ψ_j(t). Therefore solving problem (106) is reduced to solving the linear algebraic system

A c = f, c, f ∈ R^m, (114)

where in problem a)

A_ij = ∫∫ k(t − s) φ_i(s) ψ_j(t) ds dt and f_i = ∫_{−6}^{6} f(s) φ_i(s) ds, i, j = 1, 2, ..., m,

and in problem b)

A_ij = ∫_0^1 ∫_0^1 k(s, t) φ_i(s) ψ_j(t) ds dt and f_i = ∫_0^1 f(s) φ_i(s) ds, i, j = 1, 2, ..., m.

Table 4: Problem a): the number of iterations and the relative errors with respect to the parameter q (α = 2, δ = 10^{-2}), for IS1 (Rel.Err, Iter1) and IS2 (Rel.Err, Iter2).

Table 5: Problem a): the relative errors and the number of iterations for various noise levels δ, for IS1, IS2 and VR.

The parameter m = 6 is used in problem a). In this case the condition number of the matrix A with m = 6 is , so it is an ill-conditioned matrix. Here the parameters C in IS1 and IS2 are 2 and 1.1, respectively. For problem b) the parameter m is . In this case the condition number of the
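For problem b) the double integrals A_ij can be approximated with a single midpoint quadrature node per cell. This simplified quadrature is an illustration, not the quadrature used in the paper: with c_1 = c_2 = 1 the box functions have height √m, so A_ij ≈ k(s_i, t_j)/m at the cell midpoints.

```python
import numpy as np

def galerkin_matrix_problem_b(m):
    """Midpoint (one node per cell) approximation of the Galerkin matrix for
    problem b): A_ij = m * (double integral of k over a cell) ~ k(s_i, t_j)/m."""
    mid = (np.arange(m) + 0.5) / m                       # cell midpoints in (0,1)
    S, T = np.meshgrid(mid, mid, indexing="ij")
    K = np.where(S < T, S * (T - 1.0), T * (S - 1.0))    # kernel (110)
    return K / m

m = 50
A = galerkin_matrix_problem_b(m)
mid = (np.arange(m) + 0.5) / m
f = (mid**3 - mid) / 6.0 / np.sqrt(m)    # f_i = integral of f*phi_i ~ f(mid)/sqrt(m)
c_true = mid / np.sqrt(m)                # coefficients of u(t) = t in the box basis
print("residual of exact solution:", np.linalg.norm(A @ c_true - f))
```

The small residual confirms that the discretized system (114) reproduces the exact solution u(t) = t of (109), while the computed matrix is already noticeably ill-conditioned even at this modest size.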
24 Table 6: Problem b): the number of iterations and the relative errors with respect to the parameter (α = 4, δ = 1 ). IS 1 IS REl.Err Iter 1 REl.Err Iter Table 7: Problem b): the relative errors and the number of iterations δ IS 1 IS VR REl.Err Iter 1 REl.Err Iter REl.Err Iter V R. 5% % % matrix A is The parameter C is 1.1 in the both iterative schemes IS 1 and IS. In Tables 4 and 6 we give the relation between the parameter and the number of iterations and the relative errors of the iterative schemes IS 1 and IS. The closer the parameter to 1, the larger number of iterations we get, and the closer the parameter to, the smaller the number of iterations we get. But the relative error starts to increase if the parameter is chosen too small. Based on the numerical results given in Tables 4 and 5, we suggest to choose the parameter in the interval (.15,.5). In the iterative schemes IS 1 and IS we use the geometric seuence {.5 } n=1 for problem a). The geometric series {4.5 } n=1 is used in problem b). In the variational regularization method we use α = and α = 4 as the initial regularization parameter of the Newton s method in problem a) and b), respectively. Since the Newton s method for solving (13) is locally convergent, in problem b) we need to choose a smaller regularization parameter α than for IS 1 and IS methods. Here k δ = 8 was used. The numerical results on Table 5 show that the solutions produced by the proposed iterative schemes are stable. In problem a) the relative errors of the iterative scheme IS, are smaller than these for the iterative scheme IS 1 and than these for the variational regularization, VR. In Table 7 the relative errors produced by the three methods for solving problem b) are presented. The relative error of IS 1 is smaller than the one for the other two methods. 4
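The discretization of problem b) and an iteration with a geometrically decaying regularization parameter can be sketched numerically. The following Python sketch is ours, not the authors' code: the values m = 20, δ = 5%, a_0 = 2, q = 0.25 and C = 1.05 are illustrative assumptions, and the Galerkin matrix is approximated by the midpoint rule rather than by exact integration.

```python
# Hedged sketch of problem b) and an IS2-style iteration. All concrete values
# (m, delta, a0, q, C) are illustrative assumptions, not the paper's settings.
import numpy as np

m = 20                        # number of box functions (assumed for illustration)
h = 1.0 / m
t = (np.arange(m) + 0.5) * h  # midpoints of the cells [t_{i-1}, t_i]

def kernel(s, t):
    """Green's-function kernel of problem b)."""
    return np.where(s < t, s * (t - 1.0), t * (s - 1.0))

# Midpoint-rule approximation of the Galerkin matrix; with orthonormal box
# functions this amounts (up to scaling) to mapping nodal values of u to
# nodal values of f:  (A u)_i ≈ ∫_0^1 k(s_i, t) u(t) dt.
S, T = np.meshgrid(t, t, indexing="ij")
A = kernel(S, T) * h

u_exact = t.copy()            # exact solution of problem b): u(t) = t
f = A @ u_exact               # consistent right-hand side

# Noisy data with relative noise level delta (5%, as in the experiments).
delta = 0.05
rng = np.random.default_rng(0)
e = rng.standard_normal(m)
f_delta = f + delta * np.linalg.norm(f) * e / np.linalg.norm(e)
delta_abs = delta * np.linalg.norm(f)   # absolute noise level ||f_delta - f||

# Iteration u_{n+1} = T_{a_n}^{-1}(a_n u_n + A^T f_delta), T_a = A^T A + a I,
# with the geometric sequence a_n = a0 * q**n, stopped by a discrepancy-type
# rule ||A u_n - f_delta|| <= C * delta_abs with C in (1, 2).
a0, q, C = 2.0, 0.25, 1.05
u = np.zeros(m)
for n in range(1, 60):
    a_n = a0 * q ** n
    u = np.linalg.solve(A.T @ A + a_n * np.eye(m), a_n * u + A.T @ f_delta)
    if np.linalg.norm(A @ u - f_delta) <= C * delta_abs:
        break

rel_err = np.linalg.norm(u - u_exact) / np.linalg.norm(u_exact)
print(n, rel_err)
```

Because a_n decreases geometrically, the residual is damped by a factor a_n/(σ_j^2 + a_n) in each singular direction at every step, so the discrepancy level is reached after a handful of iterations; taking q closer to 1 increases the iteration count, while taking q too small degrades the relative error, in line with the behavior reported in Tables 4 and 6.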
5 Conclusion

We have demonstrated that the proposed iterative schemes can be used for solving ill-conditioned linear algebraic systems stably. The advantage of the iterative scheme (13) compared with the iterative scheme (7) is the following: one applies the operator T_a^{-1} only once at each iteration. Note that the difficulty of using Newton's method lies in choosing the initial value of the regularization parameter, since Newton's method for solving equation (13) converges only locally. In solving (13) by Newton's method one often has to choose an initial regularization parameter a sufficiently close to the root of equation (13), as shown in problem b) in Section 4.2. In our iterative schemes the initial regularization parameter can be chosen in the interval [1, 4], i.e., larger than the initial regularization parameter used in the variational regularization method. In the iterative scheme IS1 we modified the discrepancy-type principle

∫_0^t e^{-(t-s)} a(s) ||Q_{a(s)}^{-1} f_δ|| ds = Cδ, C ∈ (1, 2),

given in [11], by using (1) to obtain the discrepancy-type principle (33), which can easily be implemented numerically. In Section 3 we used the geometric sequence {α_n}_{n=1}^∞ in place of the constant regularization parameter a in the iterative scheme u_{n+1} = a T_a^{-1} u_n + T_a^{-1} A* f_δ developed in [8]. This geometric sequence of regularization parameters allows one to use the a posteriori stopping rule given in (85). We proved that this stopping rule produces a stable approximation of the minimal-norm solution of equation (1). In all the experiments stopping rules (33) and (85) produced stable approximations to the minimal-norm solution of equation (1). It is of interest to develop a method for choosing the parameter q in the proposed methods which gives a sufficiently small relative error and a small number of iterations.

References

[1] Delves, L. M. and Mohamed, J. L., Computational Methods for Integral Equations, Cambridge University Press, 1985; p. 31.

[2] Hoang, N. S. and Ramm, A. G., Solving ill-conditioned linear algebraic systems by the dynamical systems method (DSM), Inverse Problems in Sci. and Engineering, 16, N5, (2008).

[3] Kalyabin, G. A., An asymptotic formula for the minimal eigenvalues of Hilbert type matrices, Functional Analysis and Its Applications, 35, N1, 67-70, (2001).

[4] Morozov, V., Methods of Solving Incorrectly Posed Problems, Springer-Verlag, New York, 1984.

[5] Phillips, D. L., A technique for the numerical solution of certain integral equations of the first kind, J. Assoc. Comput. Mach., 9, 84-97, (1962).

[6] Ramm, A. G., Inverse Problems, Springer, New York, 2005.

[7] Ramm, A. G., Ill-posed problems with unbounded operators, Journ. Math. Anal. Appl., 325, (2007).

[8] Ramm, A. G., Iterative solution of linear equations with unbounded operators, Journ. Math. Anal. Appl., 330, N2, (2007).

[9] Ramm, A. G., Dynamical Systems Method for Solving Operator Equations, Elsevier, Amsterdam, 2007.

[10] Ramm, A. G., On unbounded operators and applications, Appl. Math. Lett., 21, (2008).

[11] Ramm, A. G., Discrepancy principle for DSM, I, II, Comm. Nonlin. Sci. and Numer. Simulation, 10, N1, (2005), 95-101; 13, (2008).

[12] Ramm, A. G., A new discrepancy principle, Journ. Math. Anal. Appl., 310, (2005).
Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Dr. Michael S. Pilant February 12, 2016 1 Background: We begin with one of the most fundamental notions in R 2, distance. Letting (x 1,
More information