Dynamical Systems Gradient Method for Solving Ill-Conditioned Linear Algebraic Systems
Acta Appl Math (2010) 111. DOI: 10.1007/s

N.S. Hoang · A.G. Ramm

Received: 28 September 2008 / Accepted: 29 June 2009 / Published online: 14 July 2009
© Springer Science+Business Media B.V. 2009

Abstract A version of the Dynamical Systems Method (DSM) for solving ill-conditioned linear algebraic systems is studied in this paper. An a priori and an a posteriori stopping rule are justified. An algorithm for computing the solution using a spectral decomposition of the left-hand side matrix is proposed. Numerical results show that when a spectral decomposition of the left-hand side matrix is available, or is not computationally expensive to obtain, the new method can be considered an alternative to variational regularization.

Keywords Ill-conditioned linear algebraic systems · Dynamical Systems Method (DSM) · Variational regularization

Mathematics Subject Classification (2000) 65F10 · 65F22

1 Introduction

The Dynamical Systems Method (DSM) was systematically introduced and investigated in [19] as a general method for solving operator equations, linear and nonlinear, especially ill-posed operator equations (see also [15–21]). In several recent publications various versions of the DSM, proposed in [19], were shown to be as efficient and economical as variational regularization methods (see [4–10, 2]). This was demonstrated, for example, for the problems of solving ill-conditioned linear algebraic systems (cf. [2]) and of stable numerical differentiation of noisy data (see [3, 22, 23]).

The aim of this paper is to formulate a version of the DSM gradient method for solving ill-posed linear equations and to demonstrate the numerical efficiency of this method. There is a large literature on iterative regularization methods. These methods can be derived from

N.S. Hoang · A.G. Ramm (corresponding author), Mathematics Department, Kansas State University, Manhattan, KS 66506-2602, USA. Email: ramm@math.ksu.edu; nguyenhs@math.ksu.edu
a suitable version of the DSM by a discretization (see [19]). In the Gauss–Newton-type version of the DSM one has to invert some linear operator, which is an expensive procedure. The same is true for regularized Newton-type versions of the DSM and their iterative counterparts. In contrast, the DSM gradient method we study in this paper does not require inversion of operators.

We want to solve the equation
$$Au = f, \tag{1}$$
where $A$ is a linear bounded operator in a Hilbert space $H$. We assume that (1) has a solution, possibly nonunique, and denote by $y$ the unique minimal-norm solution to (1), $y \perp N := N(A) := \{u : Au = 0\}$, $Ay = f$. We assume that the range of $A$, $R(A)$, is not closed, so problem (1) is ill-posed. Let $f_\delta$, $\|f - f_\delta\| \le \delta$, be the noisy data. We want to construct a stable approximation of $y$, given $\{\delta, f_\delta, A\}$. There are many methods for doing this, see, e.g., [11–13, 19, 24], to mention a few books, where variational regularization, quasisolutions, quasiinversion, iterative regularization, and the DSM are studied.

The DSM version we study in this paper consists of solving the Cauchy problem
$$\dot u(t) = -A^*(Au(t) - f), \quad u(0) = u_0, \quad u_0 \perp N, \quad \dot u := \frac{du}{dt}, \tag{2}$$
where $A^*$ is the adjoint of the operator $A$, and proving the existence of the limit $\lim_{t\to\infty} u(t) = u(\infty)$ and the relation $u(\infty) = y$, i.e.,
$$\lim_{t\to\infty} \|u(t) - y\| = 0. \tag{3}$$
If the noisy data $f_\delta$ are given, then we solve the problem
$$\dot u_\delta(t) = -A^*(Au_\delta(t) - f_\delta), \quad u_\delta(0) = u_0, \tag{4}$$
and prove that, for a suitable stopping time $t_\delta$ and $u_\delta := u_\delta(t_\delta)$, one has
$$\lim_{\delta\to 0} \|u_\delta - y\| = 0. \tag{5}$$
In Sect. 2 these results are formulated precisely and recipes for choosing $t_\delta$ are proposed. The novel results in this paper include the proof of the discrepancy principle (Theorem 3), an efficient method for computing $u_\delta(t_\delta)$ (Sect. 3), and an a priori stopping rule (Theorem 2). Our presentation is essentially self-contained.
Our results show that the DSM provides a method for solving a wide range of ill-posed problems which is quite competitive with other methods currently in use. The DSM sometimes yields better accuracy and stability than variational regularization, and it is simple to implement computationally.

2 Results

Suppose $A : H \to H$ is a linear bounded operator in a Hilbert space $H$. Assume that the equation
$$Au = f \tag{6}$$
has a solution, not necessarily unique. Denote by $y$ the unique minimal-norm solution, i.e., $y \perp N := N(A)$. Consider the following Dynamical Systems Method (DSM)
$$\dot u = -A^*(Au - f), \quad u(0) = u_0, \tag{7}$$
where $u_0 \perp N$ is arbitrary. Denote $T := A^*A$, $Q := AA^*$. The unique solution to (7) is
$$u(t) = e^{-tT}u_0 + e^{-tT}\int_0^t e^{sT}\,ds\,A^*f.$$
Let us show that any ill-posed linear equation (6) with exact data can be solved by the DSM.

2.1 Exact Data

Theorem 1 Suppose $u_0 \perp N$. Then problem (7) has a unique solution defined on $[0,\infty)$, and $u(\infty) = y$, where $u(\infty) = \lim_{t\to\infty} u(t)$.

Proof Denote $w := u(t) - y$, $w_0 = w(0)$. Note that $w_0 \perp N$. One has
$$\dot w = -Tw, \quad T = A^*A. \tag{8}$$
The unique solution to (8) is $w = e^{-tT}w_0$. Thus,
$$\|w\|^2 = \int_0^{\|T\|} e^{-2t\lambda}\, d\langle E_\lambda w_0, w_0\rangle,$$
where $\langle u, v\rangle$ is the inner product in $H$, and $E_\lambda$ is the resolution of the identity of the selfadjoint operator $T$. Thus,
$$\|w(\infty)\|^2 = \lim_{t\to\infty}\int_0^{\|T\|} e^{-2t\lambda}\, d\langle E_\lambda w_0, w_0\rangle = \|P_N w_0\|^2 = 0,$$
where $P_N$ is the orthogonal projector onto $N$. Theorem 1 is proved.

2.2 Noisy Data $f_\delta$

Let us solve stably equation (6) assuming that $f$ is not known, but the noisy data $f_\delta$ are known, where $\|f_\delta - f\| \le \delta$. Consider the following DSM
$$\dot u_\delta = -A^*(Au_\delta - f_\delta), \quad u_\delta(0) = u_0.$$
Denote $w_\delta := u_\delta - y$, $T := A^*A$, $w_\delta(0) = w_0 := u_0 - y \perp N$. Let us prove the following result:

Theorem 2 If $\lim_{\delta\to 0} t_\delta = \infty$, $\lim_{\delta\to 0} t_\delta\,\delta = 0$, and $w_0 \perp N$, then
$$\lim_{\delta\to 0} \|w_\delta(t_\delta)\| = 0.$$
Proof One has
$$\dot w_\delta = -Tw_\delta + \eta_\delta, \quad \eta_\delta = A^*(f_\delta - f), \quad \|\eta_\delta\| \le \|A\|\delta. \tag{9}$$
The unique solution of (9) is
$$w_\delta(t) = e^{-tT}w_\delta(0) + \int_0^t e^{-(t-s)T}\eta_\delta\, ds. \tag{10}$$
One uses the spectral theorem and gets:
$$\int_0^t e^{-(t-s)T}\,ds\,\eta_\delta = \int_0^{\|T\|}\int_0^t e^{-(t-s)\lambda}\,ds\, dE_\lambda\eta_\delta = \int_0^{\|T\|}\frac{1 - e^{-t\lambda}}{\lambda}\, dE_\lambda\eta_\delta. \tag{11}$$
Note that
$$\frac{1 - e^{-t\lambda}}{\lambda} \le t, \quad \lambda > 0,\ t \ge 0, \tag{12}$$
since $1 - x \le e^{-x}$ for $x \ge 0$. From (11) and (12), one obtains
$$\Big\|\int_0^t e^{-(t-s)T}\,ds\,\eta_\delta\Big\|^2 = \int_0^{\|T\|}\Big(\frac{1 - e^{-t\lambda}}{\lambda}\Big)^2 d\langle E_\lambda\eta_\delta, \eta_\delta\rangle \le t^2\int_0^{\|T\|} d\langle E_\lambda\eta_\delta, \eta_\delta\rangle = t^2\|\eta_\delta\|^2. \tag{13}$$
Since $\|\eta_\delta\| \le \|A\|\delta$, from (10) and (13) one gets
$$\lim_{\delta\to 0}\|w_\delta(t_\delta)\| \le \lim_{\delta\to 0}\big(\|e^{-t_\delta T}w_\delta(0)\| + t_\delta\,\delta\,\|A\|\big) = 0.$$
Here we have used the relation $\lim_{\delta\to 0} e^{-t_\delta T}w_\delta(0) = P_N w_0 = 0$, where the last equality holds because $w_0 \perp N$. Theorem 2 is proved.

From Theorem 2 it follows that the relation $t_\delta = C\delta^{-\gamma}$, where $\gamma \in (0,1)$ and $C > 0$ are constants, can be used as an a priori stopping rule, i.e., for such $t_\delta$ one has
$$\lim_{\delta\to 0}\|u_\delta(t_\delta) - y\| = 0. \tag{14}$$
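For a matrix $A$ the flow (4) has the closed-form solution $u_\delta(t) = e^{-tT}u_0 + e^{-tT}\int_0^t e^{sT}ds\,A^*f_\delta$, which with $u_0 = 0$ and the SVD $A = USV^*$ reduces to a spectral filter. The following minimal Python sketch (the 8×8 Hilbert-matrix test problem, the noise direction, and the constants $C = 1$, $\gamma = 1/2$ are all hypothetical, not from the paper) evaluates this closed form, cross-checks it against explicit Euler integration of (4), and evaluates the a priori stopping time $t_\delta = C\delta^{-\gamma}$:

```python
import numpy as np

# Hypothetical ill-conditioned test problem: an 8x8 Hilbert matrix.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
y = np.ones(n)                                  # "exact" solution
f = A @ y
delta = 1e-3
e = np.array([(-1.0) ** i for i in range(n)])
f_delta = f + delta * e / np.linalg.norm(e)     # noisy data, ||f - f_delta|| = delta

# Closed-form DSM solution with u(0) = 0 via the SVD A = U S V^*:
# u_delta(t) = V diag((1 - exp(-t s^2)) / s) U^* f_delta.
U, s, Vt = np.linalg.svd(A)
def u_closed(t):
    filt = (1.0 - np.exp(-t * s**2)) / s
    return Vt.T @ (filt * (U.T @ f_delta))

# A priori stopping rule: t_delta = C * delta**(-gamma), gamma in (0, 1).
C, gamma = 1.0, 0.5
t_delta = C * delta ** (-gamma)

# Cross-check the closed form against explicit Euler for the gradient flow
# u' = -A^T (A u - f_delta), u(0) = 0 (a Landweber-type iteration).
h, t_end = 1e-3, 5.0
u = np.zeros(n)
for _ in range(int(t_end / h)):
    u -= h * (A.T @ (A @ u - f_delta))
err = np.linalg.norm(u - u_closed(t_end))
print("Euler vs closed form at t = 5:", err)
print("a priori t_delta:", t_delta)
```

The closed form evaluates $u_\delta(t)$ at any $t$ in one shot, while the explicit Euler route needs thousands of small steps; this is the computational point made in Sect. 3.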
2.3 Discrepancy Principle

Let us consider equation (6) with noisy data $f_\delta$, and a DSM of the form
$$\dot u_\delta = -A^*Au_\delta + A^*f_\delta, \quad u_\delta(0) = u_0, \tag{15}$$
for solving this equation. Equation (15) has been used in Sect. 2.2. Recall that $y$ denotes the minimal-norm solution of (6).

Theorem 3 Assume that $\|Au_0 - f_\delta\| > C\delta$. The solution $t_\delta$ to the equation
$$h(t) := \|Au_\delta(t) - f_\delta\| = C\delta, \quad 1 < C = \mathrm{const}, \tag{16}$$
does exist, is unique, and
$$\lim_{\delta\to 0}\|u_\delta(t_\delta) - y\| = 0. \tag{17}$$

Proof Denote
$$v_\delta(t) := Au_\delta(t) - f_\delta, \quad T := A^*A, \quad Q := AA^*,$$
and
$$w_\delta(t) := u_\delta(t) - y, \quad w_0 := u_0 - y.$$
One has
$$\frac{d}{dt}\|v_\delta(t)\|^2 = 2\,\mathrm{Re}\,\langle A\dot u_\delta(t), Au_\delta(t) - f_\delta\rangle = 2\,\mathrm{Re}\,\langle A[-A^*(Au_\delta(t) - f_\delta)], Au_\delta(t) - f_\delta\rangle = -2\|A^*v_\delta(t)\|^2 \le 0. \tag{18}$$
Thus, $\|v_\delta(t)\|$ is a nonincreasing function. Let us prove that (16) has a solution for $C > 1$. Recall the known commutation formulas:
$$e^{-sT}A^* = A^*e^{-sQ}, \qquad Ae^{-sT} = e^{-sQ}A.$$
Using these formulas and the representation
$$u_\delta(t) = e^{-tT}u_0 + \int_0^t e^{-(t-s)T}A^*f_\delta\,ds,$$
one gets:
$$v_\delta(t) = Au_\delta(t) - f_\delta = Ae^{-tT}u_0 + A\int_0^t e^{-(t-s)T}A^*f_\delta\,ds - f_\delta = e^{-tQ}Au_0 + e^{-tQ}\int_0^t e^{sQ}\,ds\,Qf_\delta - f_\delta = e^{-tQ}A(u_0 - y) + e^{-tQ}f + e^{-tQ}(e^{tQ} - I)f_\delta - f_\delta = e^{-tQ}Aw_0 + e^{-tQ}f - e^{-tQ}f_\delta. \tag{19}$$
Note that
$$\lim_{t\to\infty} e^{-tQ}Aw_0 = \lim_{t\to\infty} Ae^{-tT}w_0 = AP_Nw_0 = 0.$$
Here the continuity of $A$ and the following relations were used:
$$\lim_{t\to\infty} e^{-tT}w_0 = \lim_{t\to\infty}\int_0^{\|T\|} e^{-st}\,dE_sw_0 = P_Nw_0.$$
Therefore,
$$\lim_{t\to\infty}\|v_\delta(t)\| = \lim_{t\to\infty}\|e^{-tQ}(f - f_\delta)\| \le \|f - f_\delta\| \le \delta, \tag{20}$$
because $\|e^{-tQ}\| \le 1$. The function $h(t)$ is continuous on $[0,\infty)$, $h(0) = \|Au_0 - f_\delta\| > C\delta$, $h(\infty) \le \delta$. Thus, (16) must have a solution $t_\delta$.

Let us prove the uniqueness of $t_\delta$. Without loss of generality we can assume that there exists $t_1 > t_\delta$ such that $\|Au_\delta(t_1) - f_\delta\| = C\delta$. Since $\|v_\delta(t)\|$ is nonincreasing and $\|v_\delta(t_\delta)\| = \|v_\delta(t_1)\|$, one has $\|v_\delta(t)\| = \|v_\delta(t_\delta)\|$, $t \in [t_\delta, t_1]$. Thus,
$$\frac{d}{dt}\|v_\delta(t)\|^2 = 0, \quad t \in (t_\delta, t_1). \tag{21}$$
Using (18) and (21) one obtains
$$A^*v_\delta(t) = A^*(Au_\delta(t) - f_\delta) = 0, \quad t \in [t_\delta, t_1].$$
This and (15) imply
$$\dot u_\delta(t) = 0, \quad t \in (t_\delta, t_1). \tag{22}$$
One has
$$\dot u_\delta(t) = -Tu_\delta(t) + A^*f_\delta = -T\Big(e^{-tT}u_0 + \int_0^t e^{-(t-s)T}A^*f_\delta\,ds\Big) + A^*f_\delta = -Te^{-tT}u_0 - (I - e^{-tT})A^*f_\delta + A^*f_\delta = -e^{-tT}(Tu_0 - A^*f_\delta). \tag{23}$$
From (23) and (22), one gets
$$Tu_0 - A^*f_\delta = e^{tT}e^{-tT}(Tu_0 - A^*f_\delta) = 0.$$
Note that the operator $e^{-tT}$ is an isomorphism for any fixed $t$, since $T$ is selfadjoint and bounded. Since $Tu_0 - A^*f_\delta = 0$, by (23) one has $\dot u_\delta(t) = 0$, so $u_\delta(t) = u_\delta(0)$ for all $t \ge 0$. Consequently,
$$C\delta < \|Au_\delta(0) - f_\delta\| = \|Au_\delta(t_\delta) - f_\delta\| = C\delta.$$
This is a contradiction, which proves the uniqueness of $t_\delta$.

Let us prove (17). First, we have the following estimate:
$$\|Au(t_\delta) - f\| \le \|Au(t_\delta) - Au_\delta(t_\delta)\| + \|Au_\delta(t_\delta) - f_\delta\| + \|f_\delta - f\| \le \Big\|e^{-t_\delta Q}\int_0^{t_\delta} e^{sQ}Q\,ds\Big\|\,\|f_\delta - f\| + C\delta + \delta. \tag{24}$$
Let us use the inequality:
$$\Big\|e^{-t_\delta Q}\int_0^{t_\delta} e^{sQ}Q\,ds\Big\| = \|I - e^{-t_\delta Q}\| \le 2,$$
and conclude from (24) that
$$\lim_{\delta\to 0}\|Au(t_\delta) - f\| = 0. \tag{25}$$
Secondly, we claim that
$$\lim_{\delta\to 0} t_\delta = \infty. \tag{26}$$
Assume the contrary. Then there exist $t_0 > 0$ and a sequence $(t_{\delta_n})_{n=1}^\infty$, $t_{\delta_n} < t_0$, such that
$$\lim_{n\to\infty}\|Au(t_{\delta_n}) - f\| = 0. \tag{27}$$
Analogously to (18), one proves that
$$\frac{d}{dt}\|v\|^2 \le 0,$$
where $v(t) := Au(t) - f$. Thus, $\|v(t)\|$ is nonincreasing. This and (27) imply the relation $\|v(t_0)\| = \|Au(t_0) - f\| = 0$. Thus,
$$0 = v(t_0) = e^{-t_0Q}A(u_0 - y).$$
This implies $A(u_0 - y) = e^{t_0Q}e^{-t_0Q}A(u_0 - y) = 0$, so $u_0 - y \in N$. Since $u_0 - y \perp N$, it follows that $u_0 = y$. This is a contradiction, because
$$C\delta \le \|Au_0 - f_\delta\| = \|f - f_\delta\| \le \delta, \quad 1 < C.$$
Thus, $\lim_{\delta\to 0} t_\delta = \infty$.

Let us continue the proof of (17). Let $w_\delta(t) := u_\delta(t) - y$. We claim that $\|w_\delta(t)\|$ is nonincreasing on $[0, t_\delta]$. One has
$$\frac{d}{dt}\|w_\delta(t)\|^2 = 2\,\mathrm{Re}\,\langle\dot u_\delta(t), u_\delta(t) - y\rangle = -2\,\mathrm{Re}\,\langle A^*(Au_\delta(t) - f_\delta), u_\delta(t) - y\rangle = -2\,\mathrm{Re}\,\langle Au_\delta(t) - f_\delta, Au_\delta(t) - f_\delta + f_\delta - Ay\rangle \le -2\|Au_\delta(t) - f_\delta\|\big(\|Au_\delta(t) - f_\delta\| - \|f_\delta - f\|\big) \le 0.$$
Here we have used the inequalities:
$$\|Au_\delta(t) - f_\delta\| \ge C\delta > \delta \ge \|f_\delta - Ay\|, \quad t \in [0, t_\delta].$$
Let $\epsilon > 0$ be arbitrarily small. Since $\lim_{t\to\infty} u(t) = y$, there exists $t_0 > 0$, independent of $\delta$, such that
$$\|u(t_0) - y\| \le \frac{\epsilon}{2}. \tag{28}$$
Since $\lim_{\delta\to 0} t_\delta = \infty$ (see (26)), there exists $\delta_0 > 0$ such that $t_\delta > t_0$ for $\delta \in (0, \delta_0)$. Since $\|w_\delta(t)\|$ is nonincreasing on $[0, t_\delta]$, one has
$$\|w_\delta(t_\delta)\| \le \|w_\delta(t_0)\| \le \|u_\delta(t_0) - u(t_0)\| + \|u(t_0) - y\|, \quad \delta \in (0, \delta_0). \tag{29}$$
Note that
$$\|u_\delta(t_0) - u(t_0)\| = \Big\|e^{-t_0T}\int_0^{t_0} e^{sT}\,ds\,A^*(f_\delta - f)\Big\| \le \Big\|e^{-t_0T}\int_0^{t_0} e^{sT}\,ds\,A^*\Big\|\,\delta. \tag{30}$$
Since $e^{-t_0T}\int_0^{t_0} e^{sT}\,ds\,A^*$ is a bounded operator for any fixed $t_0$, one concludes from (30) that $\lim_{\delta\to 0}\|u_\delta(t_0) - u(t_0)\| = 0$. Hence, there exists $\delta_1 \in (0, \delta_0)$ such that
$$\|u_\delta(t_0) - u(t_0)\| \le \frac{\epsilon}{2}, \quad \delta \in (0, \delta_1). \tag{31}$$
From (28)–(31), one obtains
$$\|u_\delta(t_\delta) - y\| = \|w_\delta(t_\delta)\| \le \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon, \quad \delta \in (0, \delta_1).$$
This means that $\lim_{\delta\to 0} u_\delta(t_\delta) = y$. Theorem 3 is proved.

3 Computing $u_\delta(t_\delta)$

3.1 Systems with a Known Spectral Decomposition

One way to solve the Cauchy problem (15) is to use explicit Euler or Runge–Kutta methods with a constant or adaptive stepsize $h$. However, the stepsize $h$ for solving (15) by explicit numerical methods is often smaller than 1, and the stopping time $t_\delta = nh$ may be large. Therefore, the computation time, characterized by the number of iterations $n$, may be large for this approach. This fact is also reported in [2], where one of the most efficient numerical methods for solving ordinary differential equations (ODEs), DOPRI45 (see [1]), is used for solving a Cauchy problem in a DSM. The use of the explicit Euler method leads to a Landweber iteration, which is known for slow convergence. Thus, it may be computationally expensive to compute $u_\delta(t_\delta)$ by numerical methods for ODEs.

However, when $A$ in (15) is a matrix and a decomposition $A = USV^*$, where $U$ and $V$ are unitary matrices and $S$ is a diagonal matrix, is known, it is possible to compute $u_\delta(t_\delta)$ at a speed comparable to that of other methods, such as variational regularization (VR), as will be shown below.

We have
$$u_\delta(t) = e^{-tT}u_0 + e^{-tT}\int_0^t e^{sT}\,ds\,A^*f_\delta, \quad T := A^*A. \tag{32}$$
Suppose that a decomposition
$$A = USV^*, \tag{33}$$
where $U$ and $V$ are unitary matrices and $S$ is a diagonal matrix, is known. These matrices may contain complex entries. Thus, $T = A^*A = V\bar SSV^*$, and using the
formula $e^{-tV\bar SSV^*} = Ve^{-t\bar SS}V^*$, which is valid since $V$ is unitary and $\bar SS$ is diagonal, (32) can be rewritten as
$$u_\delta(t) = Ve^{-t\bar SS}V^*u_0 + V\int_0^t e^{(s-t)\bar SS}\,ds\,\bar SU^*f_\delta. \tag{34}$$
Here the overbar stands for complex conjugation. Choose $u_0 = 0$. Then
$$u_\delta(t) = V\int_0^t e^{(s-t)\bar SS}\,ds\,\bar Sh_\delta, \quad h_\delta := U^*f_\delta. \tag{35}$$
Let us assume that
$$\delta < \|f_\delta\|. \tag{36}$$
This is a natural assumption. Let us check that
$$A^*f_\delta \ne 0. \tag{37}$$
Indeed, if $A^*f_\delta = 0$, then one gets
$$\langle f_\delta, f\rangle = \langle f_\delta, Ay\rangle = \langle A^*f_\delta, y\rangle = 0. \tag{38}$$
This implies
$$\delta^2 \ge \|f - f_\delta\|^2 = \|f\|^2 + \|f_\delta\|^2 > \delta^2. \tag{39}$$
This contradiction implies (37).

We choose the stopping time $t_\delta$ by the following discrepancy principle:
$$\|Au_\delta(t_\delta) - f_\delta\| = \Big\|\int_0^{t_\delta} e^{(s-t_\delta)\bar SS}\,ds\,\bar SSh_\delta - h_\delta\Big\| = \|e^{-t_\delta\bar SS}h_\delta\| = C\delta,$$
where $1 < C$. Let us find $t_\delta$ from the equation
$$\phi(t) := \psi(t) - C\delta = 0, \quad \psi(t) := \|e^{-t\bar SS}h_\delta\|. \tag{40}$$
The existence and uniqueness of the solution $t_\delta$ to (40) follow from Theorem 3. We claim that (40) can be solved by Newton's iteration (48) for any initial value $t_0$ such that $\phi(t_0) > 0$. Let us prove this claim. It is sufficient to prove that $\phi(t)$ is a monotone, strictly convex function. This is proved below.

Without loss of generality, we can assume that $h_\delta$ (see (40)) is a vector with real components; the proof remains essentially the same for $h_\delta$ with complex components. First, we claim that
$$\bar SSh_\delta \ne 0 \quad\text{and}\quad \bar SSe^{-t\bar SS}h_\delta \ne 0, \tag{41}$$
so $\psi(t) > 0$. Indeed, since $e^{-t\bar SS}$ is an isomorphism and commutes with $\bar SS$, one concludes that $\bar SSe^{-t\bar SS}h_\delta = 0$ iff $\bar SSh_\delta = 0$. If $\bar SSh_\delta = 0$ then $\bar Sh_\delta = 0$, and, therefore,
$$0 = \bar Sh_\delta = \bar SU^*f_\delta = V^*V\bar SU^*f_\delta = V^*A^*f_\delta. \tag{42}$$
Since $V$ is a unitary matrix, it follows from (42) that $A^*f_\delta = 0$. This contradicts relation (37).

Let us now prove that $\phi$ decays monotonically and is strictly convex; then our claim will be proved. One has
$$\frac{d}{dt}\langle e^{-t\bar SS}h_\delta, e^{-t\bar SS}h_\delta\rangle = -2\langle e^{-t\bar SS}h_\delta, \bar SSe^{-t\bar SS}h_\delta\rangle.$$
Thus,
$$\dot\psi(t) = \frac{d}{dt}\|e^{-t\bar SS}h_\delta\| = \frac{\frac{d}{dt}\|e^{-t\bar SS}h_\delta\|^2}{2\|e^{-t\bar SS}h_\delta\|} = -\frac{\langle e^{-t\bar SS}h_\delta, \bar SSe^{-t\bar SS}h_\delta\rangle}{\|e^{-t\bar SS}h_\delta\|}. \tag{43}$$
Equation (43), relation (41), and the fact that
$$\langle e^{-t\bar SS}h_\delta, \bar SSe^{-t\bar SS}h_\delta\rangle = \|\sqrt{\bar SS}\,e^{-t\bar SS}h_\delta\|^2 > 0$$
imply
$$\dot\psi(t) < 0. \tag{44}$$
From (43) and the definition of $\psi$ in (40), one gets
$$\dot\psi(t)\,\psi(t) = -\langle e^{-t\bar SS}h_\delta, \bar SSe^{-t\bar SS}h_\delta\rangle. \tag{45}$$
Differentiating (45) with respect to $t$, one obtains
$$\ddot\psi(t)\,\psi(t) + \dot\psi^2(t) = \langle\bar SSe^{-t\bar SS}h_\delta, \bar SSe^{-t\bar SS}h_\delta\rangle + \langle e^{-t\bar SS}h_\delta, \bar SS\,\bar SSe^{-t\bar SS}h_\delta\rangle = 2\|\bar SSe^{-t\bar SS}h_\delta\|^2.$$
This equation and (43) imply
$$\ddot\psi(t)\,\psi(t) = 2\|\bar SSe^{-t\bar SS}h_\delta\|^2 - \frac{\langle e^{-t\bar SS}h_\delta, \bar SSe^{-t\bar SS}h_\delta\rangle^2}{\|e^{-t\bar SS}h_\delta\|^2} \ge \|\bar SSe^{-t\bar SS}h_\delta\|^2 > 0. \tag{46}$$
Here the inequality
$$\langle e^{-t\bar SS}h_\delta, \bar SSe^{-t\bar SS}h_\delta\rangle \le \|e^{-t\bar SS}h_\delta\|\,\|\bar SSe^{-t\bar SS}h_\delta\|$$
was used. Since $\psi > 0$, inequality (46) implies
$$\ddot\psi(t) > 0. \tag{47}$$
It follows from inequalities (44) and (47) that $\phi(t)$ is a strictly convex and decreasing function on $(0, \infty)$. Therefore, $t_\delta$ can be found by Newton's iterations:
$$t_{n+1} = t_n - \frac{\phi(t_n)}{\dot\phi(t_n)} = t_n + \frac{\big(\|e^{-t_n\bar SS}h_\delta\| - C\delta\big)\,\|e^{-t_n\bar SS}h_\delta\|}{\langle\bar SSe^{-t_n\bar SS}h_\delta, e^{-t_n\bar SS}h_\delta\rangle}, \quad n = 0, 1, \ldots, \tag{48}$$
for any initial guess $t_0$ for $t_\delta$ such that $\phi(t_0) > 0$. Once $t_\delta$ is found, the solution $u_\delta(t_\delta)$ is computed by (35).

Remark 1 In the decomposition (33) we do not assume that $U$, $V$, and $S$ are matrices with real entries. The singular value decomposition (SVD) is a particular case of this decomposition.
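The whole procedure, Newton's iterations (48) for $t_\delta$ followed by one evaluation of (35), can be sketched in a few lines of Python. The test matrix below (singular values $2^0, 2^{-1}, \ldots, 2^{-7}$ built from fixed orthogonal factors), the noise vector, and the bracketing safeguard are hypothetical illustrations, not the paper's data:

```python
import numpy as np

# Hypothetical test problem: a matrix with singular values 2^0 ... 2^-7.
n = 8
Q1, _ = np.linalg.qr(np.cos(np.outer(np.arange(1, n + 1), np.arange(1, n + 1))))
Q2, _ = np.linalg.qr(np.sin(np.outer(np.arange(1, n + 1), np.arange(1, n + 1)) + 1.0))
A = Q1 @ np.diag(2.0 ** -np.arange(n)) @ Q2.T
y = np.ones(n)
f = A @ y
delta = 1e-3
e = np.array([(-1.0) ** i for i in range(n)]); e /= np.linalg.norm(e)
f_delta = f + delta * e                    # noisy data, ||f - f_delta|| = delta

# Spectral data: A = U S V^*, eigenvalues of T are lam = s^2, h_delta = U^* f_delta.
U, s, Vt = np.linalg.svd(A)
lam, h = s ** 2, U.T @ f_delta
C = 1.01

psi = lambda t: np.linalg.norm(np.exp(-t * lam) * h)   # discrepancy (40)

# Initial guess t0 = 1/||h_delta|| (Sect. 3.2); shrink as a crude safeguard
# so that phi(t0) = psi(t0) - C*delta > 0.
t = 1.0 / np.linalg.norm(h)
while psi(t) - C * delta <= 0:
    t /= 3.0
# Newton's iterations (48): phi is decreasing and strictly convex, so the
# iterates increase monotonically to t_delta.
for _ in range(200):
    g = np.exp(-t * lam) * h
    phi = np.linalg.norm(g) - C * delta
    if phi <= 1e-9 * delta:
        break
    t += phi * np.linalg.norm(g) / np.dot(lam * g, g)

# Solution (35) with u0 = 0: u_delta(t_delta) = V diag((1 - e^{-t s^2})/s) h_delta.
u = Vt.T @ (((1.0 - np.exp(-t * lam)) / s) * h)
print("t_delta:", t)
print("residual:", np.linalg.norm(A @ u - f_delta), "  C*delta:", C * delta)
print("relative error:", np.linalg.norm(u - y) / np.linalg.norm(y))
```

Note that once $t_\delta$ is found, $u_\delta(t_\delta)$ costs only one filtered back-substitution; no ODE time-stepping is needed.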
It is computationally expensive to compute the SVD of a matrix in general. However, there are many problems in which the decomposition (33) can be computed fast using the fast Fourier transform (FFT). Examples include image restoration problems with block circulant matrices (see [14]) and deconvolution problems (see Sect. 4.2).

3.2 On the Choice of $t_0$

Let us discuss a strategy for choosing the initial value $t_0$ in Newton's iterations for finding $t_\delta$. We choose $t_0$ satisfying the condition
$$0 < \phi(t_0) = \|e^{-t_0\bar SS}h_\delta\| - C\delta \le \delta \tag{49}$$
by the following strategy:
1. Choose $t_0 := \dfrac{1}{\|h_\delta\|}$ as an initial guess for $t_\delta$.
2. Compute $\phi(t_0)$. If $t_0$ satisfies (49), we are done. Otherwise, go to step 3.
3. If $\phi(t_0) < 0$ and the inequality $\phi(t_0) > \delta$ has not occurred in any iteration, replace $t_0$ by $t_0/10$ and go back to step 2. If $\phi(t_0) < 0$ and the inequality $\phi(t_0) > \delta$ has occurred in some iteration, replace $t_0$ by $t_0/3$ and go back to step 2. If $\phi(t_0) > \delta$, go to step 4.
4. If $\phi(t_0) > \delta$ and the inequality $\phi(t_0) < 0$ has not occurred in any iteration, replace $t_0$ by $3t_0$ and go back to step 2. If the inequality $\phi(t_0) < 0$ has occurred in some earlier iteration, stop and use $t_0$ as the initial guess in Newton's iterations for finding $t_\delta$.

4 Numerical Experiments

In this section, results of some numerical experiments with ill-conditioned linear algebraic systems are reported. In all the experiments, DSMG denotes the version of the DSM described in this paper, VR denotes variational regularization implemented using the discrepancy principle, and DSM-[2] denotes the method developed in [2].

4.1 A Linear Algebraic System for the Computation of Second Derivatives

Let us describe some numerical experiments with linear algebraic systems arising in the computation of the second derivative of a noisy function. The problem is reduced to an integral equation of the first kind.
A linear algebraic system is obtained by a discretization of the integral equation whose kernel $K$ is Green's function
$$K(s,t) = \begin{cases} s(t-1), & s < t,\\ t(s-1), & s \ge t.\end{cases}$$
Here $s, t \in [0,1]$. Using $A_N$ from [2], we do some numerical experiments solving for $u_N$ the linear algebraic system $A_Nu_N = b_{N,\delta}$. In the experiments, the exact right-hand side is computed by the formula $b_N = A_Nu_N$ for a given $u_N$. In this test, $u_N$ is computed by
$$u_N := \big(u(t_{N,1}), u(t_{N,2}), \ldots, u(t_{N,N})\big)^T, \quad t_{N,i} := \frac{i}{N},\ i = 1, \ldots, N,$$
where $u(t)$ is a given function. We use $N = 10, 20, \ldots, 100$ and $b_{N,\delta} = b_N + e_N$, where $e_N$ is a random vector whose coordinates are independent, normally distributed with mean
Fig. 1 Plots of differences between the exact solution and the solutions obtained by the DSMG, the VR, and the DSM-[2]

Table 1 Numerical results for computing second derivatives with $\delta_{rel} = 0.01$: for each $N$ from 10 to 100, the number of iterations $n_{iter}$ (DSMG) or of linear solves $n_{linsol}$ (DSM-[2], VR) and the relative error $\|u_\delta - y\|_2/\|y\|_2$ of each method are reported.

0 and variance 1, and scaled so that $\|e_N\| = \delta_{rel}\|b_N\|$. This linear algebraic system is mildly ill-posed: the condition number of $A_{100}$ is large.

In Fig. 1, the differences between the exact solution and the solutions obtained by the DSMG, the VR, and the DSM-[2] are plotted. In these experiments, we used $N = 100$ and $u(t) = \sin(\pi t)$, with $\delta_{rel} = 0.05$ and $\delta_{rel} = 0.01$. Figure 1 shows that the results obtained by the VR and the DSM-[2] are very close to each other. The results obtained by the DSMG are much better than those by the DSM-[2] and by the VR.

Table 1 presents numerical results when $N$ varies from 10 to 100, $u(t) = \sin(2\pi t)$, and $t \in [0,1]$. In this experiment the DSMG yields more accurate solutions than the DSM-[2] and the VR, although it takes more iterations than the DSM-[2] and the VR to get a solution. In this experiment the DSMG is implemented using the SVD of $A$ obtained by the function svd in Matlab. As already mentioned, the SVD is a special case of the spectral decomposition (33). It is expensive to compute the SVD in general. However, there are practically important problems where the spectral decomposition (33) can be computed fast (see
Sect. 4.2 below). These problems include deconvolution problems treated with the fast Fourier transform (FFT).

The conclusion from this experiment is: the DSMG may yield results with much better accuracy than the VR and the DSM-[2]. Numerical experiments for various $u(t)$ show that the DSMG competes favorably with the VR and the DSM-[2].

4.2 An Application to Image Restoration

The image degradation process can be modeled by the following equation:
$$g_\delta = g + w, \quad g = h * f, \quad \|w\| \le \delta, \tag{50}$$
where $h$ represents a convolution kernel that models the blurring which many imaging systems introduce. For example, camera defocus, motion blur, and imperfections of the lenses can all be modeled by choosing a suitable $h$. The functions $g_\delta$, $f$, and $w$ are the observed image, the original signal, and the noise, respectively. The noise $w$ can be due to the electronics used (thermal and shot noise), the recording medium (film grain), or the imaging process (photon noise).

In practice $g$, $h$, and $f$ in (50) are often given as functions of a discrete argument, and (50) can be written in this case as
$$g_{\delta,i} = g_i + w_i = \sum_{j=-\infty}^{\infty} f_jh_{i-j} + w_i, \quad i \in \mathbb{Z}. \tag{51}$$
Note that one (or both) of the signals $f_j$ and $h_j$ has compact support (finite length). Suppose that the signal $f$ is periodic with period $N$, i.e., $f_{i+N} = f_i$, and $h_j = 0$ for $j < 0$ and $j \ge N$. Assume that $f$ is represented by a sequence $f_0, \ldots, f_{N-1}$ and $h$ is represented by $h_0, \ldots, h_{N-1}$. Then the convolution $h * f$ is a periodic signal $g$ with period $N$, and the elements of $g$ are defined as
$$g_i = \sum_{j=0}^{N-1} h_jf_{(i-j)\bmod N}, \quad i = 0, 1, \ldots, N-1. \tag{52}$$
Here $(i-j)\bmod N$ is $i - j$ modulo $N$. The discrete Fourier transform (DFT) of $g$ is defined as the sequence
$$\hat g_k := \sum_{j=0}^{N-1} g_je^{-i2\pi jk/N}, \quad k = 0, 1, \ldots, N-1.$$
Denote $\hat g = (\hat g_0, \ldots, \hat g_{N-1})^T$. Then (52) implies
$$\hat g = \hat f \odot \hat h := (\hat f_0\hat h_0, \hat f_1\hat h_1, \ldots, \hat f_{N-1}\hat h_{N-1})^T. \tag{53}$$
Let $\mathrm{diag}(a)$ denote a diagonal matrix whose diagonal is $(a_0, \ldots, a_{N-1})$ and whose other entries are zeros.
Then (53) can be rewritten as
$$\hat g = A\hat f, \quad A := \mathrm{diag}(\hat h). \tag{54}$$
Since $A$ is of the form (33) with $U = V = I$ and $S = \mathrm{diag}(\hat h)$, one can use the DSMG method to solve (54) stably for $\hat f$.
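Because $U = V = I$ here, formula (35) becomes a pointwise Fourier filter $\hat f(t)_k = \big(1 - e^{-t|\hat h_k|^2}\big)\hat g_{\delta,k}/\hat h_k$, computable with two FFTs. The following one-dimensional Python sketch is a hypothetical illustration (the signal, the three-tap periodic blur with $\hat h_k = 0.6 + 0.4\cos(2\pi k/N) \ge 0.2$, and the deterministic "noise" are not the paper's data); for simplicity $t_\delta$ is found by bisection, which is legitimate since $\psi$ is continuous and decreasing, though Newton's method (48) would also work:

```python
import numpy as np

# Hypothetical 1-D periodic deconvolution problem.
N = 64
i = np.arange(N)
f_true = 1.0 + np.sin(2 * np.pi * i / N)              # original periodic signal
h = np.zeros(N); h[0], h[1], h[-1] = 0.6, 0.2, 0.2    # blur kernel, h_hat >= 0.2
g = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(f_true)))  # circular blur
noise = 0.01 * np.linalg.norm(g) * ((-1.0) ** i) / np.sqrt(N)  # deterministic "noise"
g_delta = g + noise

h_hat = np.fft.fft(h)
lam = np.abs(h_hat) ** 2                  # eigenvalues of T = A^* A
g_hat = np.fft.fft(g_delta)               # data in the Fourier domain
delta = np.linalg.norm(np.fft.fft(noise))  # noise level in the Fourier domain
C = 1.01

psi = lambda t: np.linalg.norm(np.exp(-t * lam) * g_hat)  # discrepancy (40)

# Bracket and bisect psi(t) = C * delta (psi is continuous and decreasing).
lo, hi = 0.0, 1.0
while psi(hi) > C * delta:
    hi *= 2.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if psi(mid) > C * delta else (lo, mid)
t_delta = 0.5 * (lo + hi)

# DSMG solution (35) with u0 = 0, U = V = I, S = diag(h_hat).
f_hat = (1.0 - np.exp(-t_delta * lam)) / h_hat * g_hat
f_rec = np.real(np.fft.ifft(f_hat))
print("relative error:", np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true))
```

The noise level is measured in the Fourier domain because the unnormalized DFT scales norms by $\sqrt{N}$; for a two-dimensional image the same filter applies with `fft2`/`ifft2`.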
Fig. 2 Original and blurred noisy images

Fig. 3 Regularized images when the noise level is 1%

The image restoration test problem we use is taken from [14]. This test problem was developed at the US Air Force Phillips Laboratory, Lasers and Imaging Directorate, Kirtland Air Force Base, New Mexico. The original and blurred images are shown in Fig. 2. These data have been widely used in the literature for testing image restoration algorithms.

Figure 3 plots the regularized images obtained by the VR and the DSMG when $\delta_{rel} = 0.01$. Again, given an input value for $\delta_{rel}$, the observed blurred noisy image is computed by
$$g_\delta = g + \delta_{rel}\,\frac{\|g\|}{\|err\|}\,err,$$
where $err$ is a vector with random entries, normally distributed with mean 0 and variance 1. In this experiment, it took 5 and 8 iterations for the DSMG and the VR, respectively, to yield numerical results. From Fig. 3 one concludes that the DSMG is comparable to the VR in terms of accuracy. The computation time in this experiment is about the same for the VR and the DSMG.

Figure 4 plots the regularized images obtained by the VR and the DSMG when $\delta_{rel} = 0.05$. It took 4 and 7 iterations for the DSMG and the VR, respectively, to yield numerical results. Figure 4 shows that the images obtained by the DSMG and the VR are about the same.

The conclusions from this experiment are: the DSMG yields results with the same accuracy as the VR, and requires fewer iterations than the VR. The restored images by the DSM-[2] are about the same as those by the VR.

Remark 2 Equation (50) can be reduced to (53) whenever one of the two functions $f$ and $h$ has compact support and the other is periodic.
Fig. 4 Regularized images when the noise level is 5%

5 Concluding Remarks

A version of the Dynamical Systems Method for solving ill-conditioned linear algebraic systems is studied in this paper. An a priori and an a posteriori stopping rule are formulated and justified. An algorithm for computing the solution in the case when a spectral decomposition of the matrix $A$ is available is presented. Numerical results show that the DSMG, i.e., the DSM version developed in this paper, yields results comparable to those obtained by the VR and by the DSM-[2] developed in [2], and the DSMG method may yield much more accurate results than the VR method.

It is demonstrated in [14] that the rate of convergence of the Landweber method can be increased by using preconditioning techniques. The rate of convergence of the DSM version presented in this paper might be improved by a similar technique. The advantage of our method over the steepest descent in [14] is the following: the stopping time $t_\delta$ can be found from a discrepancy principle by Newton's iterations for a wide range of initial guesses $t_0$; once $t_\delta$ is found, one can compute the solution without any iterations. Also, our method requires fewer iterations than the steepest descent in [14], which is an accelerated version of the Landweber method.

References

1. Hairer, E., Nørsett, S.P., Wanner, G.: Solving Ordinary Differential Equations. I. Nonstiff Problems. Springer, Berlin (1987)
2. Hoang, N.S., Ramm, A.G.: Solving ill-conditioned linear algebraic systems by the dynamical systems method (DSM). Inverse Probl. Sci. Eng. 16(N5) (2008)
3. Hoang, N.S., Ramm, A.G.: On stable numerical differentiation. Aust. J. Math. Anal. Appl. 5(N1), 1–7 (2008). Article 5
4. Hoang, N.S., Ramm, A.G.: An iterative scheme for solving nonlinear equations with monotone operators. BIT Numer. Math. 48(N4) (2008)
5.
Hoang, N.S., Ramm, A.G.: Dynamical systems method for solving linear finite-rank operator equations. Ann. Pol. Math. 95(N1), 77–93 (2009)
6. Hoang, N.S., Ramm, A.G.: Dynamical Systems Gradient method for solving nonlinear equations with monotone operators. Acta Appl. Math. 106, 473–499 (2009)
7. Hoang, N.S., Ramm, A.G.: A new version of the Dynamical Systems Method (DSM) for solving nonlinear equations with monotone operators. Differ. Equ. Appl. 1(N1), 1–25 (2009)
8. Hoang, N.S., Ramm, A.G.: A discrepancy principle for equations with monotone continuous operators. Nonlinear Anal., Theory Methods Appl. 70, 2744–2752 (2009)
9. Hoang, N.S., Ramm, A.G.: Dynamical systems method for solving nonlinear equations with monotone operators. Math. Comput. 79(269) (2010)
10. Hoang, N.S., Ramm, A.G.: The Dynamical Systems Method for solving nonlinear equations with monotone operators. Asian Eur. Math. J. (2009, to appear)
11. Ivanov, V., Tanana, V., Vasin, V.: Theory of Ill-Posed Problems. VSP, Utrecht (2002)
12. Lattès, R., Lions, J.L.: Méthode de Quasi-Réversibilité et Applications. Dunod, Paris (1967)
13. Morozov, V.A.: Methods for Solving Incorrectly Posed Problems. Springer, New York (1984)
14. Nagy, J.G., Palmer, K.M.: Steepest descent, CG, and iterative regularization of ill-posed problems. BIT Numer. Math. 43 (2003)
15. Ramm, A.G.: Dynamical systems method for solving operator equations. Commun. Nonlinear Sci. Numer. Simul. 9(N2) (2004)
16. Ramm, A.G.: Dynamical Systems Method (DSM) and nonlinear problems. In: López-Gómez, J. (ed.) Spectral Theory and Nonlinear Analysis. World Scientific, Singapore (2005)
17. Ramm, A.G.: Dynamical systems method (DSM) for unbounded operators. Proc. Am. Math. Soc. 134(N4) (2006)
18. Ramm, A.G.: Dynamical systems method for nonlinear equations in Banach spaces. Commun. Nonlinear Sci. Numer. Simul. 11(N3) (2006)
19. Ramm, A.G.: Dynamical Systems Method for Solving Operator Equations. Elsevier, Amsterdam (2007)
20. Ramm, A.G.: Dynamical systems method for solving linear ill-posed problems. Ann. Pol. Math. 95(N3) (2009)
21. Ramm, A.G., Airapetyan, R.: Dynamical systems and discrete methods for solving nonlinear ill-posed problems. In: Anastassiou, G. (ed.) Applied Mathematics Reviews, vol. 1. World Scientific, Singapore (2000)
22. Ramm, A.G., Smirnova, A.B.: On stable numerical differentiation. Math. Comput. 70 (2001)
23. Ramm, A.G., Smirnova, A.B.: Stable numerical differentiation: when is it possible? J. Korean SIAM 7(N1) (2003)
24. Vainikko, G., Veretennikov, A.: Iterative Processes in Ill-Posed Problems. Nauka, Moscow (1996)
NORMALITY OF ADJOINTABLE MODULE MAPS arxiv:1011.1582v2 [math.oa] 21 Nov 2010 K. SHARIFI Abstract. Normality of bounded and unbounded adjointable operators are discussed. Suppose T is an adjointable operator
More informationEigenvalues and Eigenvectors
/88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix
More informationAlgebra C Numerical Linear Algebra Sample Exam Problems
Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric
More informationRegularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1
Applied Mathematical Sciences, Vol. 5, 2011, no. 76, 3781-3788 Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1 Nguyen Buong and Nguyen Dinh Dung
More informationSome notes on a second-order random boundary value problem
ISSN 1392-5113 Nonlinear Analysis: Modelling and Control, 217, Vol. 22, No. 6, 88 82 https://doi.org/1.15388/na.217.6.6 Some notes on a second-order random boundary value problem Fairouz Tchier a, Calogero
More informationGEOPHYSICAL INVERSE THEORY AND REGULARIZATION PROBLEMS
Methods in Geochemistry and Geophysics, 36 GEOPHYSICAL INVERSE THEORY AND REGULARIZATION PROBLEMS Michael S. ZHDANOV University of Utah Salt Lake City UTAH, U.S.A. 2OO2 ELSEVIER Amsterdam - Boston - London
More informationPreconditioning. Noisy, Ill-Conditioned Linear Systems
Preconditioning Noisy, Ill-Conditioned Linear Systems James G. Nagy Emory University Atlanta, GA Outline 1. The Basic Problem 2. Regularization / Iterative Methods 3. Preconditioning 4. Example: Image
More informationNumerical Methods for Large-Scale Nonlinear Systems
Numerical Methods for Large-Scale Nonlinear Systems Handouts by Ronald H.W. Hoppe following the monograph P. Deuflhard Newton Methods for Nonlinear Problems Springer, Berlin-Heidelberg-New York, 2004 Num.
More informationTHE FORM SUM AND THE FRIEDRICHS EXTENSION OF SCHRÖDINGER-TYPE OPERATORS ON RIEMANNIAN MANIFOLDS
THE FORM SUM AND THE FRIEDRICHS EXTENSION OF SCHRÖDINGER-TYPE OPERATORS ON RIEMANNIAN MANIFOLDS OGNJEN MILATOVIC Abstract. We consider H V = M +V, where (M, g) is a Riemannian manifold (not necessarily
More informationPreconditioning. Noisy, Ill-Conditioned Linear Systems
Preconditioning Noisy, Ill-Conditioned Linear Systems James G. Nagy Emory University Atlanta, GA Outline 1. The Basic Problem 2. Regularization / Iterative Methods 3. Preconditioning 4. Example: Image
More informationA Simple Proof of the Fredholm Alternative and a Characterization of the Fredholm Operators
thus a n+1 = (2n + 1)a n /2(n + 1). We know that a 0 = π, and the remaining part follows by induction. Thus g(x, y) dx dy = 1 2 tanh 2n v cosh v dv Equations (4) and (5) give the desired result. Remarks.
More informationCS137 Introduction to Scientific Computing Winter Quarter 2004 Solutions to Homework #3
CS137 Introduction to Scientific Computing Winter Quarter 2004 Solutions to Homework #3 Felix Kwok February 27, 2004 Written Problems 1. (Heath E3.10) Let B be an n n matrix, and assume that B is both
More informationEncyclopedia of Mathematics, Supplemental Vol. 3, Kluwer Acad. Publishers, Dordrecht,
Encyclopedia of Mathematics, Supplemental Vol. 3, Kluwer Acad. Publishers, Dordrecht, 2001, 328-329 1 Reproducing kernel Consider an abstract set E and a linear set F of functions f : E C. Assume that
More informationIterative Solution of a Matrix Riccati Equation Arising in Stochastic Control
Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Chun-Hua Guo Dedicated to Peter Lancaster on the occasion of his 70th birthday We consider iterative methods for finding the
More informationNORMS ON SPACE OF MATRICES
NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system
More informationOn the simplest expression of the perturbed Moore Penrose metric generalized inverse
Annals of the University of Bucharest (mathematical series) 4 (LXII) (2013), 433 446 On the simplest expression of the perturbed Moore Penrose metric generalized inverse Jianbing Cao and Yifeng Xue Communicated
More informationM athematical I nequalities & A pplications
M athematical I nequalities & A pplications With Compliments of the Author Zagreb, Croatia Volume 4, Number 4, October 20 N. S. Hoang and A. G. Ramm Nonlinear differential inequality MIA-4-82 967 976 MATHEMATICAL
More informationIterative Methods for Smooth Objective Functions
Optimization Iterative Methods for Smooth Objective Functions Quadratic Objective Functions Stationary Iterative Methods (first/second order) Steepest Descent Method Landweber/Projected Landweber Methods
More informationConvergence rate estimates for the gradient differential inclusion
Convergence rate estimates for the gradient differential inclusion Osman Güler November 23 Abstract Let f : H R { } be a proper, lower semi continuous, convex function in a Hilbert space H. The gradient
More informationDynamical Systems Method for Solving Operator Equations
Dynamical Systems Method for Solving Operator Equations Alexander G. Ramm Department of Mathematics Kansas State University Manhattan, KS 6652 email: ramm@math.ksu.edu URL: http://www.math.ksu.edu/ ramm
More informationON MATRIX VALUED SQUARE INTEGRABLE POSITIVE DEFINITE FUNCTIONS
1 2 3 ON MATRIX VALUED SQUARE INTERABLE POSITIVE DEFINITE FUNCTIONS HONYU HE Abstract. In this paper, we study matrix valued positive definite functions on a unimodular group. We generalize two important
More informationSTRONG CONVERGENCE THEOREMS BY A HYBRID STEEPEST DESCENT METHOD FOR COUNTABLE NONEXPANSIVE MAPPINGS IN HILBERT SPACES
Scientiae Mathematicae Japonicae Online, e-2008, 557 570 557 STRONG CONVERGENCE THEOREMS BY A HYBRID STEEPEST DESCENT METHOD FOR COUNTABLE NONEXPANSIVE MAPPINGS IN HILBERT SPACES SHIGERU IEMOTO AND WATARU
More informationPEER-REVIEWED PUBLICATIONS
PEER-REVIEWED PUBLICATIONS (In most cases, authors are listed alphabetically to indicate equivalent contributions) BOOKS 1. A.B.Bakushinsky, M.Yu.Kokurin, A.B.Smirnova, Iterative Methods for Ill-Posed
More informationTwo-parameter regularization method for determining the heat source
Global Journal of Pure and Applied Mathematics. ISSN 0973-1768 Volume 13, Number 8 (017), pp. 3937-3950 Research India Publications http://www.ripublication.com Two-parameter regularization method for
More informationA NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang
A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES Fenghui Wang Department of Mathematics, Luoyang Normal University, Luoyang 470, P.R. China E-mail: wfenghui@63.com ABSTRACT.
More informationTikhonov Regularization of Large Symmetric Problems
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2000; 00:1 11 [Version: 2000/03/22 v1.0] Tihonov Regularization of Large Symmetric Problems D. Calvetti 1, L. Reichel 2 and A. Shuibi
More informationWhere is matrix multiplication locally open?
Linear Algebra and its Applications 517 (2017) 167 176 Contents lists available at ScienceDirect Linear Algebra and its Applications www.elsevier.com/locate/laa Where is matrix multiplication locally open?
More informationMeans of unitaries, conjugations, and the Friedrichs operator
J. Math. Anal. Appl. 335 (2007) 941 947 www.elsevier.com/locate/jmaa Means of unitaries, conjugations, and the Friedrichs operator Stephan Ramon Garcia Department of Mathematics, Pomona College, Claremont,
More informationA model function method in total least squares
www.oeaw.ac.at A model function method in total least squares S. Lu, S. Pereverzyev, U. Tautenhahn RICAM-Report 2008-18 www.ricam.oeaw.ac.at A MODEL FUNCTION METHOD IN TOTAL LEAST SQUARES SHUAI LU, SERGEI
More informationDiscrete ill posed problems
Discrete ill posed problems Gérard MEURANT October, 2008 1 Introduction to ill posed problems 2 Tikhonov regularization 3 The L curve criterion 4 Generalized cross validation 5 Comparisons of methods Introduction
More informationMath 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.
Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,
More informationA LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE. 1.
A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE THOMAS CHEN AND NATAŠA PAVLOVIĆ Abstract. We prove a Beale-Kato-Majda criterion
More informationFUNCTION BASES FOR TOPOLOGICAL VECTOR SPACES. Yılmaz Yılmaz
Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 33, 2009, 335 353 FUNCTION BASES FOR TOPOLOGICAL VECTOR SPACES Yılmaz Yılmaz Abstract. Our main interest in this
More informationMinimal periods of semilinear evolution equations with Lipschitz nonlinearity
Minimal periods of semilinear evolution equations with Lipschitz nonlinearity James C. Robinson a Alejandro Vidal-López b a Mathematics Institute, University of Warwick, Coventry, CV4 7AL, U.K. b Departamento
More informationWELL-POSEDNESS FOR HYPERBOLIC PROBLEMS (0.2)
WELL-POSEDNESS FOR HYPERBOLIC PROBLEMS We will use the familiar Hilbert spaces H = L 2 (Ω) and V = H 1 (Ω). We consider the Cauchy problem (.1) c u = ( 2 t c )u = f L 2 ((, T ) Ω) on [, T ] Ω u() = u H
More informationarxiv: v1 [math.na] 1 Sep 2018
On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing
More informationProperties of the Scattering Transform on the Real Line
Journal of Mathematical Analysis and Applications 58, 3 43 (001 doi:10.1006/jmaa.000.7375, available online at http://www.idealibrary.com on Properties of the Scattering Transform on the Real Line Michael
More informationA nonlinear singular perturbation problem
A nonlinear singular perturbation problem arxiv:math-ph/0405001v1 3 May 004 Let A.G. Ramm Mathematics epartment, Kansas State University, Manhattan, KS 66506-60, USA ramm@math.ksu.edu Abstract F(u ε )+ε(u
More informationInverse problem and optimization
Inverse problem and optimization Laurent Condat, Nelly Pustelnik CNRS, Gipsa-lab CNRS, Laboratoire de Physique de l ENS de Lyon Decembre, 15th 2016 Inverse problem and optimization 2/36 Plan 1. Examples
More informationThe best generalised inverse of the linear operator in normed linear space
Linear Algebra and its Applications 420 (2007) 9 19 www.elsevier.com/locate/laa The best generalised inverse of the linear operator in normed linear space Ping Liu, Yu-wen Wang School of Mathematics and
More informationNormed & Inner Product Vector Spaces
Normed & Inner Product Vector Spaces ECE 174 Introduction to Linear & Nonlinear Optimization Ken Kreutz-Delgado ECE Department, UC San Diego Ken Kreutz-Delgado (UC San Diego) ECE 174 Fall 2016 1 / 27 Normed
More informationAn Iterative Procedure for Solving the Riccati Equation A 2 R RA 1 = A 3 + RA 4 R. M.THAMBAN NAIR (I.I.T. Madras)
An Iterative Procedure for Solving the Riccati Equation A 2 R RA 1 = A 3 + RA 4 R M.THAMBAN NAIR (I.I.T. Madras) Abstract Let X 1 and X 2 be complex Banach spaces, and let A 1 BL(X 1 ), A 2 BL(X 2 ), A
More informationMath Ordinary Differential Equations
Math 411 - Ordinary Differential Equations Review Notes - 1 1 - Basic Theory A first order ordinary differential equation has the form x = f(t, x) (11) Here x = dx/dt Given an initial data x(t 0 ) = x
More informationMath 113 Final Exam: Solutions
Math 113 Final Exam: Solutions Thursday, June 11, 2013, 3.30-6.30pm. 1. (25 points total) Let P 2 (R) denote the real vector space of polynomials of degree 2. Consider the following inner product on P
More informationELLIPTIC RECONSTRUCTION AND A POSTERIORI ERROR ESTIMATES FOR PARABOLIC PROBLEMS
ELLIPTIC RECONSTRUCTION AND A POSTERIORI ERROR ESTIMATES FOR PARABOLIC PROBLEMS CHARALAMBOS MAKRIDAKIS AND RICARDO H. NOCHETTO Abstract. It is known that the energy technique for a posteriori error analysis
More informationAN EXTENSION OF YAMAMOTO S THEOREM ON THE EIGENVALUES AND SINGULAR VALUES OF A MATRIX
Unspecified Journal Volume 00, Number 0, Pages 000 000 S????-????(XX)0000-0 AN EXTENSION OF YAMAMOTO S THEOREM ON THE EIGENVALUES AND SINGULAR VALUES OF A MATRIX TIN-YAU TAM AND HUAJUN HUANG Abstract.
More informationBOUNDARY VALUE PROBLEMS IN KREĬN SPACES. Branko Ćurgus Western Washington University, USA
GLASNIK MATEMATIČKI Vol. 35(55(2000, 45 58 BOUNDARY VALUE PROBLEMS IN KREĬN SPACES Branko Ćurgus Western Washington University, USA Dedicated to the memory of Branko Najman. Abstract. Three abstract boundary
More informationFinite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product
Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )
More informationAPPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.
APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product
More informationFunctional Analysis Review
Outline 9.520: Statistical Learning Theory and Applications February 8, 2010 Outline 1 2 3 4 Vector Space Outline A vector space is a set V with binary operations +: V V V and : R V V such that for all
More informationA Family of Preconditioned Iteratively Regularized Methods For Nonlinear Minimization
A Family of Preconditioned Iteratively Regularized Methods For Nonlinear Minimization Alexandra Smirnova Rosemary A Renaut March 27, 2008 Abstract The preconditioned iteratively regularized Gauss-Newton
More informationFinding discontinuities of piecewise-smooth functions
Finding discontinuities of piecewise-smooth functions A.G. Ramm Mathematics Department, Kansas State University, Manhattan, KS 66506-2602, USA ramm@math.ksu.edu Abstract Formulas for stable differentiation
More informationQuantum Computing Lecture 2. Review of Linear Algebra
Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces
More informationLinear Inverse Problems
Linear Inverse Problems Ajinkya Kadu Utrecht University, The Netherlands February 26, 2018 Outline Introduction Least-squares Reconstruction Methods Examples Summary Introduction 2 What are inverse problems?
More informationGlobal Solutions for a Nonlinear Wave Equation with the p-laplacian Operator
Global Solutions for a Nonlinear Wave Equation with the p-laplacian Operator Hongjun Gao Institute of Applied Physics and Computational Mathematics 188 Beijing, China To Fu Ma Departamento de Matemática
More informationMP463 QUANTUM MECHANICS
MP463 QUANTUM MECHANICS Introduction Quantum theory of angular momentum Quantum theory of a particle in a central potential - Hydrogen atom - Three-dimensional isotropic harmonic oscillator (a model of
More informationALUTHGE ITERATION IN SEMISIMPLE LIE GROUP. 1. Introduction Given 0 < λ < 1, the λ-aluthge transform of X C n n [4]:
Unspecified Journal Volume 00, Number 0, Pages 000 000 S????-????(XX)0000-0 ALUTHGE ITERATION IN SEMISIMPLE LIE GROUP HUAJUN HUANG AND TIN-YAU TAM Abstract. We extend, in the context of connected noncompact
More informationHigher rank numerical ranges of rectangular matrix polynomials
Journal of Linear and Topological Algebra Vol. 03, No. 03, 2014, 173-184 Higher rank numerical ranges of rectangular matrix polynomials Gh. Aghamollaei a, M. Zahraei b a Department of Mathematics, Shahid
More information(VII.E) The Singular Value Decomposition (SVD)
(VII.E) The Singular Value Decomposition (SVD) In this section we describe a generalization of the Spectral Theorem to non-normal operators, and even to transformations between different vector spaces.
More informationShort note on compact operators - Monday 24 th March, Sylvester Eriksson-Bique
Short note on compact operators - Monday 24 th March, 2014 Sylvester Eriksson-Bique 1 Introduction In this note I will give a short outline about the structure theory of compact operators. I restrict attention
More informationNumerische Mathematik
Numer. Math. 1999 83: 139 159 Numerische Mathematik c Springer-Verlag 1999 On an a posteriori parameter choice strategy for Tikhonov regularization of nonlinear ill-posed problems Jin Qi-nian 1, Hou Zong-yi
More informationInverse Ill Posed Problems in Image Processing
Inverse Ill Posed Problems in Image Processing Image Deblurring I. Hnětynková 1,M.Plešinger 2,Z.Strakoš 3 hnetynko@karlin.mff.cuni.cz, martin.plesinger@tul.cz, strakos@cs.cas.cz 1,3 Faculty of Mathematics
More informationAn Iteratively Regularized Projection Method for Nonlinear Ill-posed Problems
Int. J. Contemp. Math. Sciences, Vol. 5, 2010, no. 52, 2547-2565 An Iteratively Regularized Projection Method for Nonlinear Ill-posed Problems Santhosh George Department of Mathematical and Computational
More informationFredholm Theory. April 25, 2018
Fredholm Theory April 25, 208 Roughly speaking, Fredholm theory consists of the study of operators of the form I + A where A is compact. From this point on, we will also refer to I + A as Fredholm operators.
More informationThroughout these notes we assume V, W are finite dimensional inner product spaces over C.
Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal
More informationORACLE INEQUALITY FOR A STATISTICAL RAUS GFRERER TYPE RULE
ORACLE INEQUALITY FOR A STATISTICAL RAUS GFRERER TYPE RULE QINIAN JIN AND PETER MATHÉ Abstract. We consider statistical linear inverse problems in Hilbert spaces. Approximate solutions are sought within
More informationSemi-implicit Krylov Deferred Correction Methods for Ordinary Differential Equations
Semi-implicit Krylov Deferred Correction Methods for Ordinary Differential Equations Sunyoung Bu University of North Carolina Department of Mathematics CB # 325, Chapel Hill USA agatha@email.unc.edu Jingfang
More informationArnoldi-Tikhonov regularization methods
Arnoldi-Tikhonov regularization methods Bryan Lewis a, Lothar Reichel b,,1 a Rocketcalc LLC, 100 W. Crain Ave., Kent, OH 44240, USA. b Department of Mathematical Sciences, Kent State University, Kent,
More informationKrylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration
Krylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration D. Calvetti a, B. Lewis b and L. Reichel c a Department of Mathematics, Case Western Reserve University,
More informationDue Giorni di Algebra Lineare Numerica (2GALN) Febbraio 2016, Como. Iterative regularization in variable exponent Lebesgue spaces
Due Giorni di Algebra Lineare Numerica (2GALN) 16 17 Febbraio 2016, Como Iterative regularization in variable exponent Lebesgue spaces Claudio Estatico 1 Joint work with: Brigida Bonino 1, Fabio Di Benedetto
More informationMoore Penrose inverses and commuting elements of C -algebras
Moore Penrose inverses and commuting elements of C -algebras Julio Benítez Abstract Let a be an element of a C -algebra A satisfying aa = a a, where a is the Moore Penrose inverse of a and let b A. We
More informationSYMMETRIC PROJECTION METHODS FOR DIFFERENTIAL EQUATIONS ON MANIFOLDS
BIT 0006-3835/00/4004-0726 $15.00 2000, Vol. 40, No. 4, pp. 726 734 c Swets & Zeitlinger SYMMETRIC PROJECTION METHODS FOR DIFFERENTIAL EQUATIONS ON MANIFOLDS E. HAIRER Section de mathématiques, Université
More informationRobust error estimates for regularization and discretization of bang-bang control problems
Robust error estimates for regularization and discretization of bang-bang control problems Daniel Wachsmuth September 2, 205 Abstract We investigate the simultaneous regularization and discretization of
More informationECE 275A Homework #3 Solutions
ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =
More informationParameter Identification in Partial Differential Equations
Parameter Identification in Partial Differential Equations Differentiation of data Not strictly a parameter identification problem, but good motivation. Appears often as a subproblem. Given noisy observation
More informationThe Kadison-Singer and Paulsen Problems in Finite Frame Theory
Chapter 1 The Kadison-Singer and Paulsen Problems in Finite Frame Theory Peter G. Casazza Abstract We now know that some of the basic open problems in frame theory are equivalent to fundamental open problems
More informationSPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT
SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT Abstract. These are the letcure notes prepared for the workshop on Functional Analysis and Operator Algebras to be held at NIT-Karnataka,
More informationOn Semigroups Of Linear Operators
On Semigroups Of Linear Operators Elona Fetahu Submitted to Central European University Department of Mathematics and its Applications In partial fulfillment of the requirements for the degree of Master
More information