Applied Mathematical Sciences, Vol. 5, 2011, no. 76, 3781 - 3788

Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings (1)

Nguyen Buong and Nguyen Dinh Dung
Vietnamese Academy of Science and Technology, Institute of Information Technology
18, Hoang Quoc Viet, Cau Giay, Ha Noi, Vietnam
nbuong@ioit.ac.vn

Abstract. The purpose of this paper is to give a regularization method for solving a system of ill-posed equations involving linear bounded mappings in real Hilbert spaces, together with an example of finding a common solution of two systems of linear algebraic equations with singular matrices.

Mathematics Subject Classification: 47H17

Keywords: Hilbert spaces, Tikhonov regularization, singular matrix

1. INTRODUCTION

Let X and Y_j be Hilbert spaces, the scalar product and the norm of X being denoted by <.,.>_X and ||.||_X, respectively. Let A_j, j = 1, ..., N, be N linear bounded mappings from X into Y_j. Consider the following problem: find an element x in X such that

    A_j x = f_j,   j = 1, ..., N,                                    (1.1)

where each f_j is given a priori in Y_j. Set

    S_j = {x in X : A_j x = f_j},   j = 1, ..., N,   and   S = S_1 ∩ ... ∩ S_N.

Here we assume that S is nonempty. From the properties of the A_j it is easy to see that each set S_j is a closed convex set in X; therefore S is also a closed convex subset of X.

(1) This work was supported by the Vietnamese National Foundation of Science and Technology Development.
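As a toy illustration of problem (1.1) and the sets S_j, consider two hypothetical singular 2 x 2 systems (the matrices below are illustrative only and do not appear in the paper): each individual solution set S_j is a whole line in R^2, yet the intersection S is a single point, recovered here by solving both systems jointly in the least-squares sense.

```python
import numpy as np

# Two hypothetical singular systems A_j x = f_j sharing the solution (1, 1).
A1 = np.array([[1.0, 2.0],
               [2.0, 4.0]])          # rank 1: S_1 is the line x_1 + 2 x_2 = 3
A2 = np.array([[1.0, 0.0],
               [1.0, 0.0]])          # rank 1: S_2 is the line x_1 = 1
f1 = A1 @ np.array([1.0, 1.0])
f2 = A2 @ np.array([1.0, 1.0])

# Solving both systems jointly: stack the equations row-wise.  Each block is
# singular, but the stacked matrix has full column rank, so the common
# solution S = S_1 ∩ S_2 = {(1, 1)} is recovered.
A = np.vstack([A1, A2])
f = np.concatenate([f1, f2])
x, *_ = np.linalg.lstsq(A, f, rcond=None)
```

Here `np.linalg.lstsq` returns the least-squares solution of the stacked system, which coincides with the unique common solution because the data are consistent.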
We are especially interested in the situation where the data f_j are not known exactly, i.e., we have only approximations f_j^δ of the data f_j satisfying

    ||f_j - f_j^δ||_{Y_j} ≤ δ_j,   δ_j → 0,   j = 1, ..., N.          (1.2)

Under the above conditions on the A_j, each j-th equation in (1.1) is ill-posed. By this we mean that its solution set S_j does not depend continuously on the data f_j. Therefore, to find a solution of each j-th equation in (1.1) one has to use stable methods. One such method is the variational variant of Tikhonov's regularization, which consists in minimizing the functional

    ||A_j x - f_j^δ||_{Y_j}^2 + α ||x - x*||_X^2,                     (1.3)

where x* is some element in X \ S_j, and in choosing a value of the regularization parameter α = α(δ_1, ..., δ_N) > 0. It is proved in [1] that each j-th minimization problem in (1.3) has a unique solution x_j^{αδ}, and if δ_j^2/α → 0 as α → 0, then {x_j^{αδ}} converges to a solution x_j satisfying

    ||x_j - x*||_X = min_{x in S_j} ||x - x*||_X,   j = 1, ..., N.

Our problem is the following: find x_α^δ such that x_α^δ → x̄ as δ_j, α → 0; find a relation α = α(δ_1, ..., δ_N) such that x_{α(δ_1, ..., δ_N)}^δ → x̄ as δ_j → 0; and, finally, estimate the value ||x_{α(δ_1, ..., δ_N)}^δ - x̄||_X, where x̄ is the x*-minimal norm element of S (the x*-MNS).

Burger and Kaltenbacher [2] used the Newton-Kaczmarz method cyclically for regularizing each separate equation in (1.1) under a source condition on each mapping A_j. The Steepest-Descent-Kaczmarz method was used cyclically by Haltmeier, Kowar, Leitão and Scherzer [3] for regularizing each separate equation in (1.1) under a local tangential cone condition on each mapping A_j. Note that the system of equations (1.1) can be written in the form

    A x = f,                                                          (1.4)

where A : X → Y := Y_1 × ... × Y_N is defined by A x := (A_1 x, ..., A_N x), and f := (f_1, ..., f_N). Very recently, Haltmeier, Leitão and Scherzer [4] considered the Landweber-Kaczmarz method for solving (1.4) under a local tangential cone condition, again on each mapping A_j. Equation (1.4) can be seen as a special case of (1.1) with N = 1.
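Numerically, the single-equation Tikhonov scheme (1.3) amounts to solving regularized normal equations. Below is a minimal sketch with a hypothetical rank-one matrix A (not the example of Section 3); it also illustrates that the regularized solutions approach the x*-minimal-norm solution rather than an arbitrary solution.

```python
import numpy as np

# Hypothetical singular matrix: A x = f has a whole line of solutions,
# so the equation is ill-posed and is stabilized as in (1.3).
A = np.array([[2.0, 1.0],
              [2.0, 1.0]])
f = A @ np.array([1.0, 1.0])       # consistent data built from the solution (1, 1)
x_star = np.zeros(2)               # the a priori element x* of (1.3)

def tikhonov(A, f_delta, alpha, x_star):
    """Minimizer of ||A x - f_delta||^2 + alpha ||x - x*||^2 (normal equations)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n),
                           A.T @ f_delta + alpha * x_star)

# As alpha -> 0 the minimizers approach the x*-minimal-norm solution of
# A x = f, which here is (1.2, 0.6), not the particular solution (1, 1).
x_alpha = tikhonov(A, f, 1e-6, x_star)
```

With exact data the limit is the minimal-norm point of the solution line 2x_1 + x_2 = 3, consistent with the characterization of x_j quoted above.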
However, one potential advantage of (1.1) over (1.4) is that it may better reflect the structure of the underlying information (f_1, ..., f_N) leading to the coupled system than a plain concatenation into one single data element f could. In [5], for finding a common zero of a finite family of potential hemicontinuous monotone mappings from a reflexive Banach space E into E*, the dual space of E, the first author proposed a regularization method based on solving the
following regularized equation:

    sum_{j=0}^{N} α^{μ_j} A_j(x) + α U(x) = θ,
    μ_0 = 0 < μ_j < μ_{j+1} < 1,   j = 1, 2, ..., N - 1,              (1.5)

where U is the normalized duality mapping of E, and estimated the convergence rate of the regularized solution under a smoothness condition on A_1 only, namely

    A_1'(x̄)* z = U(x̄ - x_0)   for some element z in E*.

In this paper, to solve (1.1), we consider a new regularization method based on the following unconstrained optimization problem:

    min_{x in X}  sum_{j=1}^{N} ||A_j x - f_j^δ||_{Y_j}^2 + α ||x - x*||_X^2.       (1.6)

We shall establish the convergence rate of the regularized solutions of (1.6) under a source condition on A_1 only in the next section. In Section 3 we give an example showing that problem (1.4) may be well-posed although each equation in (1.1) is ill-posed. So the cyclic regularization of each separate equation in (1.1), as in [2] and [3], is dispensable; moreover, it does not exploit the well-posedness of the given system of equations, while our method (1.6) still makes use of this property.

Above and below, the symbols ⇀ and → denote weak convergence and convergence in norm, respectively, and a ~ b means that a = O(b) and b = O(a).

2. MAIN RESULTS

Under the assumptions on the A_j it is easy to show that problem (1.6) admits a unique solution. We shall first address two questions. Is problem (1.6) stable, in the sense of continuous dependence of its solution on the data f_j^δ? Secondly, do the solutions of (1.6) converge to a solution of (1.1) as α, δ_j → 0? In [6], stability has been proved for the case N = 1. For the convenience of the reader, we provide the whole argument for arbitrary N ≥ 1.

Theorem 2.1. Let α > 0, let f_j^{δ_k} → f_j^δ with δ_k → 0 as k → ∞, and let x_k be the minimizer of (1.6) with f_j^δ replaced by f_j^{δ_k}. Then the sequence {x_k} converges to the minimizer of (1.6).

Proof. Obviously, x_α^δ is a solution of (1.6) if and only if it is a solution of the equation

    B x + α(x - x*) = f̃^δ,                                           (2.1)

where B = sum_{j=1}^{N} A_j^* A_j and f̃^δ = sum_{j=1}^{N} A_j^* f_j^δ,
where A_j^* denotes the adjoint mapping of A_j. Obviously, if f_j^{δ_k} → f_j^δ, then

    f̃^{δ_k} := sum_{j=1}^{N} A_j^* f_j^{δ_k} → sum_{j=1}^{N} A_j^* f_j^δ = f̃^δ   as k → ∞.

Denote by x_α^{δ_k} the solution of the equation

    B x + α(x - x*) = f̃^{δ_k}.                                       (2.2)

From (2.1), (2.2) and the monotonicity of the mapping B it follows that

    ||x_α^{δ_k} - x_α^δ||_X ≤ ||f̃^{δ_k} - f̃^δ||_X / α   for each α > 0.

Together with f̃^{δ_k} → f̃^δ, this implies that x_α^{δ_k} → x_α^δ as k → ∞. The theorem is proved.

Further, without loss of generality, assume that δ_j = δ with δ → 0.

Theorem 2.2. Let α(δ) be such that α(δ) → 0 and δ/α(δ) → 0 as δ → 0. Then the sequence {x_α^δ}, where δ → 0, α = α(δ) and x_α^δ is the solution of (1.6), converges to the x*-MNS x̄ of (1.1).

Proof. Clearly, for each y in S we have

    B y = f̃,   f̃ = sum_{j=1}^{N} A_j^* f_j.                          (2.3)

So, from (2.1) and (2.3), we obtain

    <B x_α^δ - B y, x_α^δ - y> + α <x_α^δ - x*, x_α^δ - y> = <f̃^δ - f̃, x_α^δ - y>.

This together with the nonnegativity of B implies that

    ||x_α^δ - y||_X ≤ ||x* - y||_X + (δ/α) sum_{j=1}^{N} ||A_j^*||,   y in S.       (2.4)

Since the A_j are bounded linear mappings and δ/α(δ) → 0, the sequence {x_{α(δ)}^δ} is bounded. Hence there exists a subsequence {x_k := x_{α(δ_k)}^{δ_k}} of {x_{α(δ)}^δ} converging weakly to some element x̃ in X as k → ∞. Now we shall prove that x̃ belongs to S. Indeed, since x_k minimizes (1.6), we obtain the inequalities

    ||A_l x_k - f_l^{δ_k}||_{Y_l}^2 ≤ sum_{j=1}^{N} ||A_j x_k - f_j^{δ_k}||_{Y_j}^2
        ≤ sum_{j=1}^{N} ||A_j y - f_j^{δ_k}||_{Y_j}^2 + α(δ_k) ||y - x*||_X^2
        ≤ N δ_k^2 + α(δ_k) ||y - x*||_X^2,
for any y in S and l = 1, ..., N. Since each functional ||A_l x - f_l^{δ_k}||^2 is weakly lower semicontinuous in x [7], x_k ⇀ x̃ and δ_k, α(δ_k) → 0 as k → ∞, we obtain from the last inequality that A_l x̃ = f_l, l = 1, ..., N. So x̃ belongs to S. Now, from (2.4), δ/α(δ) → 0 as δ → 0, the weak lower semicontinuity of the norm and the fact that any closed convex subset of X has exactly one x*-MNS, it follows that x_{α(δ)}^δ → x̄ as δ → 0. This completes the proof.

Theorem 2.3. Assume that there exists ω in Y_1 such that x̄ - x* = A_1^* ω. Then, for the choice α ~ δ^p with 0 < p < 1, we obtain ||x_{α(δ)}^δ - x̄||_X = O(δ^{p/2}).

Proof. Using (1.6) with x = x̄ and δ_j = δ for all j = 1, ..., N, we obtain

    sum_{j=1}^{N} ||A_j x_{α(δ)}^δ - f_j^δ||_{Y_j}^2 + α(δ) ||x_{α(δ)}^δ - x*||_X^2
        ≤ sum_{j=1}^{N} ||A_j x̄ - f_j^δ||_{Y_j}^2 + α(δ) ||x̄ - x*||_X^2.

So we have that

    sum_{j=1}^{N} ||A_j x_{α(δ)}^δ - f_j^δ||_{Y_j}^2 + α(δ) ||x_{α(δ)}^δ - x̄||_X^2
        ≤ N δ^2 + α(δ) ( ||x̄ - x*||_X^2 - ||x_{α(δ)}^δ - x*||_X^2 + ||x_{α(δ)}^δ - x̄||_X^2 ).

Since

    ||x̄ - x*||_X^2 - ||x_{α(δ)}^δ - x*||_X^2 + ||x_{α(δ)}^δ - x̄||_X^2 = 2 <x̄ - x*, x̄ - x_{α(δ)}^δ>,

we obtain, using x̄ - x* = A_1^* ω, that

    ||A_1 x_{α(δ)}^δ - f_1^δ||_{Y_1}^2 + α(δ) ||x_{α(δ)}^δ - x̄||_X^2
        ≤ N δ^2 + 2 α(δ) <ω, A_1(x̄ - x_{α(δ)}^δ)>
        = N δ^2 + 2 α(δ) <ω, f_1 - f_1^δ + f_1^δ - A_1 x_{α(δ)}^δ>
        ≤ N δ^2 + 2 α(δ) ||ω||_{Y_1} (δ + ||A_1 x_{α(δ)}^δ - f_1^δ||_{Y_1}).

Therefore,

    ||A_1 x_{α(δ)}^δ - f_1^δ||_{Y_1}^2 + α(δ) ||x_{α(δ)}^δ - x̄||_X^2
        ≤ N δ^2 + 2 α(δ) ||ω||_{Y_1} δ + 2 α(δ) ||ω||_{Y_1} ||A_1 x_{α(δ)}^δ - f_1^δ||_{Y_1}.    (2.6)

This together with the implication

    (a, b, c ≥ 0 and a^2 ≤ a b + c^2)  ⇒  a ≤ b + c,                  (2.5)

which follows by completing the square (a^2 ≤ a b + c^2 gives (a - b/2)^2 ≤ b^2/4 + c^2 ≤ (b/2 + c)^2),
implies, with a = ||A_1 x_{α(δ)}^δ - f_1^δ||_{Y_1}, b = 2 ||ω||_{Y_1} α(δ) and c = [N δ^2 + 2 ||ω||_{Y_1} α(δ) δ]^{1/2}, that

    ||A_1 x_{α(δ)}^δ - f_1^δ||_{Y_1} ≤ [N δ^2 + 2 ||ω||_{Y_1} α(δ) δ]^{1/2} + 2 ||ω||_{Y_1} α(δ),

and hence

    ||x_{α(δ)}^δ - x̄||_X = O(δ^{p/2})

if α(δ) ~ δ^p, 0 < p < 1.

3. NUMERICAL EXAMPLES

For illustration, we consider the problem of finding a common solution of two systems of linear algebraic equations

    A_j x = f_j,   j = 1, 2,                                          (3.1)

where

    A_1 = (...),   A_2 = (2 1 0; ...),   f_1 = (2; 3; ...),   f_2 = (...).

It is easy to verify that system (3.1) possesses a unique common solution x̄ = (1; 1; 1). Since det A_1 = det A_2 = 0, each equation in (3.1) is ill-posed. Based on the results of Section 2, the common solution of (3.1) can be found by solving the optimization problem

    min_{x in R^3}  ||A_1 x - f_1||^2 + ||A_2 x - f_2||^2 + α ||x - x*||^2,          (3.2)

where ||x||^2 = x_1^2 + x_2^2 + x_3^2 for every x = (x_1; x_2; x_3) in R^3. It is not difficult to verify that (3.2) is equivalent to the equation

    B x + α(x - x*) = f̃,                                             (3.3)

where

    B = A_1^T A_1 + A_2^T A_2 = (...),   f̃ = A_1^T f_1 + A_2^T f_2 = (26; 20; ...),

and x* is any vector in R^3. Since det B = 996, equation (3.3) can be considered as a regularization of the well-posed equation B x = f̃, and we can use the Jacobi or Gauss-Seidel iteration method to find the unique solution x^α of (3.3). Table 1 shows the computed approximation x^α = (x_1^α; x_2^α; x_3^α) at the 15th iteration with the starting point x^(0) = (2; 2; 2).

Table 1. The 15th approximation solution with starting point x^(0) = (2; 2; 2).
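The experiment behind Table 1 can be reproduced in spirit as follows. The two singular matrices below are hypothetical stand-ins sharing the common solution (1; 1; 1) (they are not the matrices used in the paper); the rest follows the text: form the regularized system (3.3) and run 15 Jacobi iterations from the starting point (2; 2; 2).

```python
import numpy as np

# Hypothetical singular matrices with the common solution x_bar = (1, 1, 1).
A1 = np.array([[1., 0., 0.],
               [0., 1., 0.],
               [0., 0., 0.]])          # det A1 = 0
A2 = np.array([[0., 1., 1.],
               [0., 0., 0.],
               [1., 0., 0.]])          # det A2 = 0
x_bar = np.ones(3)
f1, f2 = A1 @ x_bar, A2 @ x_bar

# Regularized system (3.3): (B + alpha I) x = f_tilde + alpha x_star.
alpha = 1e-3
x_star = np.zeros(3)
B = A1.T @ A1 + A2.T @ A2
M = B + alpha * np.eye(3)
b = A1.T @ f1 + A2.T @ f2 + alpha * x_star

# Jacobi iteration for M x = b: split M = D + R, x^{k+1} = D^{-1}(b - R x^k).
D = np.diag(M)                         # diagonal entries as a vector
R = M - np.diag(D)
x = np.full(3, 2.0)                    # starting point (2, 2, 2)
for _ in range(15):
    x = (b - R @ x) / D
```

For these stand-in matrices B is invertible, so after 15 iterations x is already close to the common solution, mirroring the behavior reported in Table 1.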
Columns of Table 1: α, x_1^α, x_2^α, x_3^α, ||x̄ - x^α||.

Now we give another example, with det B = 0. We consider the case where

    A_1 = (...),   A_2 = (...),   f_1 = (...),   f_2 = (...).

Then

    B = (...),   f̃ = (...).

Since the rank of the matrix B is 3, it is not difficult to verify that the set of common solutions of (3.1) is the line passing through the two points x = (1; 1; 1; 1) and x = (1; 3; 6; 2). So the solution of minimal norm is the vector x̄ = (1; 7/15; -1/3; 11/15) = (1; 0.4667; -0.3333; 0.7333). Table 2 shows our calculation results with δ = 10^{-n}. To solve the regularized equation (3.3) with f̃ replaced by f̃^δ, we use the iterative regularization [8]

    x^{(k+1)} = x^{(k)} - α_k (B x^{(k)} + ε_k (x^{(k)} - x^{(0)}) - f̃^δ),   x^{(0)} in R^4 any vector,

with the stopping rule of [9]:

    ||B x^{(K)} - f̃^δ||^2 ≤ τδ < ||B x^{(k)} - f̃^δ||^2,   τ > 1,

for all k = 1, ..., K - 1 and every fixed δ. In our example we have L = ||B||. We chose ε_0 = 0.1 and ε_{k+1} = ε_k/(1 + ε_k^3). Then the sequence {ε_k} satisfies the conditions ε_k > 0, ε_k → 0 and (ε_k - ε_{k+1})/(ε_k^3 ε_{k+1}) = 1, and λ = (ε_0 + L^2)/2. By taking α_k = c ε_k with c = 1/(2λ) we have (1 - cλ) c ε_0^2 < 1. Clearly, condition (2.6) in [8] is equivalent to

    τ ≥ ( ||x̄ - x^{(0)}|| [(1 + ε_0^3)(1 + ε_0^2) + 2 ε_0^2] / ((1 - cλ - 2 ε_0/c) ε_0) )^2 + 1.

Therefore, with x^{(0)} = (0; 0; 0; 0), we obtain ||x̄ - x^{(0)}|| = ||x̄|| = 1.3663, and hence the corresponding lower bound for τ; in the computations below we take τ = 100.
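The iterative regularization scheme with the a posteriori stopping rule can be sketched as follows. The rank-deficient matrix B and the data are hypothetical stand-ins (the paper's 4 x 4 example is not reproduced here), while the constants follow the recipe in the text: ε_0 = 0.1, ε_{k+1} = ε_k/(1 + ε_k^3), α_k = c ε_k with c = 1/(2λ), λ = (ε_0 + L^2)/2, L = ||B||.

```python
import numpy as np

def iterative_reg(B, f_delta, x0, delta, tau=100.0, eps0=0.1, max_iter=100000):
    """x^{k+1} = x^k - alpha_k (B x^k + eps_k (x^k - x^0) - f_delta),
    stopped at the first K with ||B x^K - f_delta||^2 <= tau * delta."""
    L = np.linalg.norm(B, 2)          # L = ||B|| (spectral norm)
    lam = (eps0 + L**2) / 2.0
    c = 1.0 / (2.0 * lam)             # alpha_k = c * eps_k
    eps = eps0
    x = x0.copy()
    for _ in range(max_iter):
        if np.linalg.norm(B @ x - f_delta)**2 <= tau * delta:
            break                     # stopping rule satisfied
        x = x - c * eps * (B @ x + eps * (x - x0) - f_delta)
        eps = eps / (1.0 + eps**3)    # eps_{k+1} = eps_k / (1 + eps_k^3)
    return x

# Hypothetical rank-one B (so B x = f_tilde is ill-posed), with exact data
# built from the solution (1, 1); the minimal-norm solution is (0.6, 1.2).
B = np.array([[5., 10.],
              [10., 20.]])
f = B @ np.array([1.0, 1.0])
x_K = iterative_reg(B, f, np.zeros(2), delta=0.01)
```

Starting from x^(0) = 0, the iterate is driven toward the minimal-norm solution, and the discrepancy-type rule terminates the iteration once the residual falls below τδ.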
Table 2. Approximation solutions with starting point x^{(0)} = (0; 0; 0; 0), τ = 100 and δ = 10^{-n}, n = 1, 2, ... Columns: n, K, x_1^{(K)}, x_2^{(K)}, x_3^{(K)}, x_4^{(K)}, ||B x^{(K)} - f̃^δ||^2, τδ.

References

[1] A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems, Wiley, New York, 1977.

[2] M. Burger and B. Kaltenbacher, Regularizing Newton-Kaczmarz methods for nonlinear ill-posed problems, SIAM J. Numer. Anal., 44 (2006).

[3] M. Haltmeier, R. Kowar, A. Leitão and O. Scherzer, Kaczmarz methods for regularizing nonlinear ill-posed equations I: Convergence analysis, Inverse Problems and Imaging, 1 (2) (2007); II: Applications, 1 (3) (2007).

[4] A. De Cezaro, M. Haltmeier, A. Leitão and O. Scherzer, On steepest-descent-Kaczmarz methods for regularizing systems of nonlinear ill-posed equations, Applied Mathematics and Computation, 202 (2) (2008).

[5] Ng. Buong, Regularization for unconstrained vector optimization of convex functionals in Banach spaces, Zh. Vychisl. Mat. i Mat. Fiziki, 46 (3) (2006).

[6] H. W. Engl, K. Kunisch and A. Neubauer, Convergence rates for Tikhonov regularisation of non-linear ill-posed problems, Inverse Problems, 5 (1989), 523-540.

[7] M. M. Vainberg, Variational Method and Method of Monotone Mappings, Nauka, Moscow, 1972 (in Russian).

[8] A. B. Bakushinsky and A. Goncharsky, Ill-Posed Problems: Theory and Applications, Kluwer Academic, Dordrecht, 1994.

[9] A. B. Bakushinsky and A. Smirnova, A posteriori stopping rule for regularized fixed point iterations, Nonlinear Anal., 64 (2006).

Received: June, 2011
Electronic Journal of Differential Equations, Vol. 2008(2008), No. 98, pp. 1 10. ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu (login: ftp) EXISTENCE
More informationON SINGULAR PERTURBATION OF THE STOKES PROBLEM
NUMERICAL ANALYSIS AND MATHEMATICAL MODELLING BANACH CENTER PUBLICATIONS, VOLUME 9 INSTITUTE OF MATHEMATICS POLISH ACADEMY OF SCIENCES WARSZAWA 994 ON SINGULAR PERTURBATION OF THE STOKES PROBLEM G. M.
More informationA Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator
A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator Ismael Rodrigo Bleyer Prof. Dr. Ronny Ramlau Johannes Kepler Universität - Linz Cambridge - July 28, 211. Doctoral
More informationSolving Linear Systems
Solving Linear Systems Iterative Solutions Methods Philippe B. Laval KSU Fall 207 Philippe B. Laval (KSU) Linear Systems Fall 207 / 2 Introduction We continue looking how to solve linear systems of the
More informationModified Landweber iteration in Banach spaces convergence and convergence rates
Modified Landweber iteration in Banach spaces convergence and convergence rates Torsten Hein, Kamil S. Kazimierski August 4, 009 Abstract Abstract. We introduce and discuss an iterative method of relaxed
More informationINSTITUTE OF MATHEMATICS THE CZECH ACADEMY OF SCIENCES. Note on the fast decay property of steady Navier-Stokes flows in the whole space
INSTITUTE OF MATHEMATICS THE CZECH ACADEMY OF SCIENCES Note on the fast decay property of stea Navier-Stokes flows in the whole space Tomoyuki Nakatsuka Preprint No. 15-017 PRAHA 017 Note on the fast
More informationViscosity approximation method for m-accretive mapping and variational inequality in Banach space
An. Şt. Univ. Ovidius Constanţa Vol. 17(1), 2009, 91 104 Viscosity approximation method for m-accretive mapping and variational inequality in Banach space Zhenhua He 1, Deifei Zhang 1, Feng Gu 2 Abstract
More informationConditional stability versus ill-posedness for operator equations with monotone operators in Hilbert space
Conditional stability versus ill-posedness for operator equations with monotone operators in Hilbert space Radu Ioan Boț and Bernd Hofmann September 16, 2016 Abstract In the literature on singular perturbation
More informationLECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE
LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization
More informationA NONLINEAR DIFFERENTIAL EQUATION INVOLVING REFLECTION OF THE ARGUMENT
ARCHIVUM MATHEMATICUM (BRNO) Tomus 40 (2004), 63 68 A NONLINEAR DIFFERENTIAL EQUATION INVOLVING REFLECTION OF THE ARGUMENT T. F. MA, E. S. MIRANDA AND M. B. DE SOUZA CORTES Abstract. We study the nonlinear
More informationCOURSE Iterative methods for solving linear systems
COURSE 0 4.3. Iterative methods for solving linear systems Because of round-off errors, direct methods become less efficient than iterative methods for large systems (>00 000 variables). An iterative scheme
More informationVisco-penalization of the sum of two monotone operators
Visco-penalization of the sum of two monotone operators Patrick L. Combettes a and Sever A. Hirstoaga b a Laboratoire Jacques-Louis Lions, Faculté de Mathématiques, Université Pierre et Marie Curie Paris
More informationSearch Directions for Unconstrained Optimization
8 CHAPTER 8 Search Directions for Unconstrained Optimization In this chapter we study the choice of search directions used in our basic updating scheme x +1 = x + t d. for solving P min f(x). x R n All
More informationCS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares
CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search
More informationSteepest descent approximations in Banach space 1
General Mathematics Vol. 16, No. 3 (2008), 133 143 Steepest descent approximations in Banach space 1 Arif Rafiq, Ana Maria Acu, Mugur Acu Abstract Let E be a real Banach space and let A : E E be a Lipschitzian
More informationSparse Recovery in Inverse Problems
Radon Series Comp. Appl. Math XX, 1 63 de Gruyter 20YY Sparse Recovery in Inverse Problems Ronny Ramlau and Gerd Teschke Abstract. Within this chapter we present recent results on sparse recovery algorithms
More informationAn improved convergence theorem for the Newton method under relaxed continuity assumptions
An improved convergence theorem for the Newton method under relaxed continuity assumptions Andrei Dubin ITEP, 117218, BCheremushinsaya 25, Moscow, Russia Abstract In the framewor of the majorization technique,
More informationMultipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations
Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations Oleg Burdakov a,, Ahmad Kamandi b a Department of Mathematics, Linköping University,
More informationOn an iterative algorithm for variational inequalities in. Banach space
MATHEMATICAL COMMUNICATIONS 95 Math. Commun. 16(2011), 95 104. On an iterative algorithm for variational inequalities in Banach spaces Yonghong Yao 1, Muhammad Aslam Noor 2,, Khalida Inayat Noor 3 and
More informationA CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE
Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received
More informationFunctional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...
Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................
More information2 Statement of the problem and assumptions
Mathematical Notes, 25, vol. 78, no. 4, pp. 466 48. Existence Theorem for Optimal Control Problems on an Infinite Time Interval A.V. Dmitruk and N.V. Kuz kina We consider an optimal control problem on
More informationLECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel
LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count
More informationA semismooth Newton method for L 1 data fitting with automatic choice of regularization parameters and noise calibration
A semismooth Newton method for L data fitting with automatic choice of regularization parameters and noise calibration Christian Clason Bangti Jin Karl Kunisch April 26, 200 This paper considers the numerical
More informationfor all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true
3 ohn Nirenberg inequality, Part I A function ϕ L () belongs to the space BMO() if sup ϕ(s) ϕ I I I < for all subintervals I If the same is true for the dyadic subintervals I D only, we will write ϕ BMO
More informationIterative Domain Decomposition Methods for Singularly Perturbed Nonlinear Convection-Diffusion Equations
Iterative Domain Decomposition Methods for Singularly Perturbed Nonlinear Convection-Diffusion Equations P.A. Farrell 1, P.W. Hemker 2, G.I. Shishkin 3 and L.P. Shishkina 3 1 Department of Computer Science,
More informationAlexander Ostrowski
Ostrowski p. 1/3 Alexander Ostrowski 1893 1986 Walter Gautschi wxg@cs.purdue.edu Purdue University Ostrowski p. 2/3 Collected Mathematical Papers Volume 1 Determinants Linear Algebra Algebraic Equations
More information