1 GNCS 2018 - Convegno Biennale del Gruppo Nazionale per il Calcolo Scientifico, Montecatini Terme, February 2018
Regolarizzazione adattiva in spazi di Lebesgue a esponente variabile (Adaptive regularization in variable exponent Lebesgue spaces)
Claudio Estatico, Dipartimento di Matematica, Università di Genova, Italy
Progetto di ricerca GNCS 2017: Metodi numerici non lineari per problemi inversi e applicazioni

2 Progetto di ricerca GNCS 2017: Metodi numerici non lineari per problemi inversi e applicazioni
CNR Bari: Teresa Laudadio
Univ. Bologna: Germana Landi, Damiana Lazzaro, Elena Loli Piccolomini, Fabiana Zama
Univ. Cagliari: Luisa Fermo, Giuseppe Rodriguez; Anna Concas, Patricia Diaz De Alba
Univ. Genova: Claudio Estatico, Alberto Sorrentino
Univ. Insubria: Marco Donatelli; Davide Bianchi, Alessandro Buccini
Univ. L'Aquila: Pietro Dell'Acqua
Univ. Modena e Reggio Emilia: Marco Prato; Carla Bertocchi
Univ. Padova: Elena Morotti
Univ. Sapienza, Roma: Silvia Noschese
RWTH Aachen University, Germany: Caterina Fenu

3 Outline
I - Inverse problems, variational approach, and regularization by iterative methods in (Hilbert and) Banach spaces.
II - A sketch of variable exponent Lebesgue spaces.
III - Application of variable exponent Lebesgue spaces in iterative regularization methods.
IV - Numerical results in image deblurring.

4 Inverse Problem
From the knowledge of some observed data g (i.e., the effect), find an approximation of some model parameters f (i.e., the cause).
Given the (noisy) data $g \in G$, find (an approximation of) the unknown $f \in F$ such that
$$ Af = g $$
where $A : F \to G$ is a known linear operator, and $F, G$ are two functional (here Hilbert or Banach) spaces.
[Figure panels: true image, blurred (noisy) image, restored image.]
Inverse problems are usually ill-posed, so they need regularization techniques.

5 Solution of inverse problems by minimization
Variational approaches are very useful to solve the functional equation $Af = g$. These methods minimize the Tikhonov-type variational functional $\Phi_\alpha$
$$ \Phi_\alpha(f) = \|Af - g\|_G^p + \alpha R(f), $$
where $1 < p < +\infty$, $R : F \to [0, +\infty)$ is a (convex and proper) functional, and $\alpha > 0$ is the regularization parameter.
The data-fitting term $\|Af - g\|_G^p$ is called the residual (usually in mathematics) or the cost function (usually in engineering).
The penalty term $R(f)$ is often $\|f\|_F^q$, $\|\nabla f\|_F^q$, or $\|Lf\|_F^q$, for $q \geq 1$ (such as the Hölder conjugate of $p$) and a differential operator $L$ which measures the non-regularity of $f$.
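A minimal numpy sketch of this functional in the discrete setting (my own illustration, not code from the talk): the operator A, data g, and parameter alpha are placeholders, and the quadratic penalty R(f) = ||f||_2^2 is just one of the choices listed above.

```python
import numpy as np

def tikhonov_functional(A, f, g, alpha, p=2.0):
    """Discrete Tikhonov-type functional Phi_alpha(f) = ||A f - g||_p^p + alpha * R(f),
    with the simple quadratic penalty R(f) = ||f||_2^2 (one of the choices above)."""
    residual = np.linalg.norm(A @ f - g, ord=p) ** p   # data-fitting term
    penalty = np.linalg.norm(f, ord=2) ** 2            # penalty term R(f)
    return residual + alpha * penalty

# Tiny usage example with synthetic data (placeholders, not from the talk)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
f = rng.standard_normal(10)
g = A @ f + 0.01 * rng.standard_normal(20)
print(tikhonov_functional(A, f, g, alpha=1e-2, p=1.5))
```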

6 Numerical linear algebra lives in Hilbert spaces
A Hilbert space is a complete vector space endowed with a scalar product that allows (lengths and) angles to be defined and computed. In any Hilbert space, what happens in our Euclidean world (that is, in our part of the universe...) always holds:
Pythagorean theorem: if $u \perp v$, then $\|u + v\|^2 = \|u\|^2 + \|v\|^2$
Parallelogram identity: $\|u + v\|^2 + \|u - v\|^2 = 2\,(\|u\|^2 + \|v\|^2)$
Gradient of (the square of) the norm: $\nabla \tfrac{1}{2}\|u\|^2 = u$
The scalar product also allows for a natural definition of:
orthogonal projection (best approximation) of $u$ on $v$: $P(u) = \langle u, v \rangle\, v / \|v\|^2$
SVD decomposition (or eigenvalue/eigenvector decomposition) of linear operators (for separable Hilbert spaces, as any finite-dimensional Hilbert space...)

7 Regularization: the recent framework in Banach spaces
Several regularization methods for ill-posed functional equations by variational approaches were first formulated as minimization problems in Hilbert spaces (i.e., the classical approach). Later they have been extended to the Banach space setting (i.e., the more recent approach). Examples: $L^1$ for sparse recovery, or $L^p$, $1 < p < 2$, for edge restoration.
Hilbert spaces: $L^2(\mathbb{R}^n)$, $H^s = W^{s,2}$
  Benefits: easier computation (spectral theory, eigencomponents)
  Drawbacks: over-smoothness (bad localization of edges)
Banach spaces: $L^p(\mathbb{R}^n)$, $W^{s,p}$ ($p \geq 1$); $BV$
  Benefits: better restoration of the discontinuities; sparse solutions
  Drawbacks: theoretically tricky (convex analysis required)

8 Minimization of the residual by gradient-type iterative methods
For the simple residual functional $\Phi(f) = \|Af - g\|_G^p$, the basic minimization approach is the gradient-type iteration, which reads as
$$ f_{k+1} = f_k - \tau_k\, \psi(f_k, g) $$
where $\psi(f_k, g) \in \partial\big(\|Af - g\|_G^p\big)$, i.e., $\psi(f_k, g)$ is an approximation of the (sub-)gradient of the residual functional $\Phi$ at the point $f_k$, and $\tau_k > 0$ is the step length.
For the least squares functional $\Phi(f) = \tfrac{1}{2}\|Af - g\|_2^2$ in the $L^2$ Hilbert space, the chain rule (the Jacobian of $f \mapsto Af - g$ is $A$) gives
$$ \nabla\Phi(f) = A^*\Big[\big(\nabla\tfrac{1}{2}\|u\|^2\big)\big|_{u = Af - g}\Big] = A^*(Af - g), $$
and we obtain the simplest iterative method, i.e., the Landweber method,
$$ f_{k+1} = f_k - \tau\, A^*(Af_k - g) $$
where $\tau \in (0, 2/\|A\|_2^2)$ is a fixed step length.
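A minimal numpy sketch of the classical Landweber iteration in the discrete (Hilbert, $\ell^2$) setting; the matrix A, data g, and iteration count are placeholders of my own, not quantities from the talk.

```python
import numpy as np

def landweber(A, g, n_iter=200, f0=None):
    """Classical Landweber iteration f_{k+1} = f_k - tau * A^T (A f_k - g),
    with a fixed step tau in (0, 2/||A||_2^2)."""
    f = np.zeros(A.shape[1]) if f0 is None else f0.copy()
    tau = 1.0 / np.linalg.norm(A, 2) ** 2        # safe fixed step length
    for _ in range(n_iter):
        f -= tau * A.T @ (A @ f - g)             # gradient step on 0.5*||Af - g||^2
    return f

# Tiny usage example with synthetic data (placeholders, not from the talk)
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 15))
f_true = rng.standard_normal(15)
g = A @ f_true + 0.01 * rng.standard_normal(30)
print(np.linalg.norm(landweber(A, g) - f_true))  # reconstruction error on this toy problem
```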

9 From Hilbert to Banach spaces
$$ f_{k+1} = f_k - \tau\, A^*(Af_k - g) $$
Formally, $A^*$ is the dual operator of $A$, that is, the operator $A^* : G^* \to F^*$ such that $g^*(Af) = (A^* g^*)(f)$ for all $f \in F$ and $g^* \in G^*$, where $F^*$ and $G^*$ are the dual spaces of $F$ and $G$.
If $F$ and $G$ are Hilbert spaces, then $F^*$ is isometrically isomorphic to $F$ and $G^*$ is isometrically isomorphic to $G$ (by virtue of the Riesz theorem), and the operator $A^* : G^* \to F^*$ can be identified with $A^* : G \to F$. However, in general, Banach spaces are not isometrically isomorphic to their duals. This way, the Landweber iteration above is well defined in Hilbert spaces (...only!).
The key point: to generalize from Hilbert to Banach spaces we have to consider the so-called duality maps.

10 Minimization in Banach spaces by duality maps
A duality map is a function which allows us to associate (in a special way) an element of a Banach space $B$ with an element (or a subset of elements) of its dual $B^*$, as follows:
Theorem (Asplund [1968]). Let $B$ be a Banach space and $p > 1$. The duality map $J_B : B \to 2^{B^*}$ is the sub-differential of the convex functional $\varphi_p$ defined as $\varphi_p(b) = \tfrac{1}{p}\|b\|_B^p$:
$$ J_B = \partial \varphi_p = \partial\Big(\tfrac{1}{p}\|\cdot\|_B^p\Big). $$
By the chain rule, the (sub-)differential of the residual $\Phi(f) = \tfrac{1}{p}\|Af - g\|_G^p$, by means of the duality map $J_G : G \to 2^{G^*}$, is the following:
$$ \partial\Phi(f) = \partial\Big(\tfrac{1}{p}\|Af - g\|_G^p\Big) = A^* J_G(Af - g). $$

11 Landweber iterative method in Hilbert spaces
$A : F \to G$, $A^* : G \to F$, $\Phi(f) = \tfrac{1}{2}\|Af - g\|_G^2$,
$$ f_{k+1} = f_k - \tau\, A^*(Af_k - g). $$
Landweber iterative method in Banach spaces
$A : F \to G$, $A^* : G^* \to F^*$, $\Phi(f) = \tfrac{1}{p}\|Af - g\|_G^p$,
$$ f_{k+1} = J_{F^*}\Big( J_F(f_k) - \tau_k\, A^* J_G(Af_k - g) \Big). $$
Some remarks. In the Banach space $L^p$, with $1 < p < +\infty$, we have $J_{L^p}(f) = |f|^{p-1}\,\mathrm{sgn}(f)$.
$J_{L^p}$ is a non-linear, single-valued, diagonal operator, which costs $O(n)$ operations and does not increase the global numerical complexity $O(n \log n)$ of shift-invariant image restoration problems solved by FFT.
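A numpy sketch (my own illustration) of the pointwise duality map $J_{\ell^p}(x) = |x|^{p-1}\mathrm{sgn}(x)$ and of one Banach Landweber step in the discrete sequence space, under the simplifying assumptions that solution and data space share the same exponent p and that $J_{\ell^{p'}}$, with $1/p + 1/p' = 1$, is used as the map back from the dual; the matrix, data, and step length are placeholders.

```python
import numpy as np

def duality_map_lp(x, p):
    """Duality map of l^p (gauge p): J(x) = |x|^(p-1) * sign(x), a diagonal O(n) operator."""
    return np.abs(x) ** (p - 1) * np.sign(x)

def banach_landweber_step(A, f, g, p, tau):
    """One step f_{k+1} = J_{F*}( J_F(f_k) - tau * A^T J_G(A f_k - g) ), with F = G = l^p."""
    p_dual = p / (p - 1.0)                           # Hoelder conjugate exponent
    f_star = duality_map_lp(f, p) - tau * A.T @ duality_map_lp(A @ f - g, p)
    return duality_map_lp(f_star, p_dual)            # J_{l^{p'}} inverts J_{l^p} pointwise

# Tiny usage example with synthetic data (placeholders, not from the talk)
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 15))
A /= np.linalg.norm(A, 2)                            # normalize so that ||A||_2 = 1
g = A @ rng.standard_normal(15)
f = np.zeros(15)
for _ in range(100):
    f = banach_landweber_step(A, f, g, p=1.5, tau=0.1)
print(np.linalg.norm(A @ f - g))                     # residual norm after the iterations
```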

12 Landweber iterative method in Hilbert and in Banach spaces (same schemes as on the previous slide):
Hilbert: $f_{k+1} = f_k - \tau\, A^*(Af_k - g)$, for $\Phi(f) = \tfrac{1}{2}\|Af - g\|_G^2$.
Banach: $f_{k+1} = J_{F^*}\big( J_F(f_k) - \tau_k\, A^* J_G(Af_k - g) \big)$, for $\Phi(f) = \tfrac{1}{p}\|Af - g\|_G^p$.
Some remarks. Numerical linear algebra and related image restoration tools usually developed in Hilbert spaces are still useful in Banach spaces (preconditioning, trigonometric matrix algebras for boundary conditions, super-resolution...). Everything is buried inside the discretized setting.

13 Duality maps are the basic tool for generalizing classical iterative methods for linear systems to Banach spaces: Landweber method, CG, Mann iterations, Gauss-Newton, gradient-type iterations (for nonlinear problems).
Basic hypotheses for the convergence (and the regularization behavior): uniformly smooth and (uniformly) convex Banach spaces.
[Schöpfer, Louis, Hein, Scherzer, Schuster, Kazimierski, Kaltenbacher, Q. Jin, Tautenhahn, Neubauer, Hofmann, Daubechies, De Mol, Fornasier, Tomba, ...]
To reduce over-smoothness, these methods have been implemented in the context of $L^p$ Lebesgue spaces with $1 < p \leq 2$.
$p \approx 1$: low regularization; good recovery of edges and discontinuities; improved sparsity.
$p \approx 2$: high regularization; higher stability; over-smoothness.

14 A numerical evidence in the $L^p$ Lebesgue space, $1 < p \leq 2$
2D image restoration with the Landweber method (200 iterations).
[Figure panels: true image; PSF; blurred and noisy image; Hilbert restoration (p = 2); Banach restoration (p = 1.5); Banach restoration (p = 1.2).]

15 A new framework: variable exponent Lebesgue spaces $L^{p(\cdot)}$
In image restoration, different regions of the image often require different amounts of regularization. Setting different levels of regularization is useful because background, low-intensity, and high-intensity values require different filtering levels (see Nagy, Pauca, Plemmons, Torgersen, J Opt Soc Am A, 1997).
The idea: the ill-posed functional equation $Af = g$ is solved in $L^{p(\cdot)}$ Banach spaces, namely the variable exponent Lebesgue spaces, a special case of the so-called Musielak-Orlicz functional spaces (first proposed in two seminal papers in 1931 and 1959, but intensively studied only in the last 10 years).
In a variable exponent Lebesgue space, to measure a function $f$, instead of a constant exponent $p$ all over the domain, we have a pointwise variable exponent (i.e., a distribution) $1 \leq p(\cdot) \leq +\infty$. This way, different regularization levels on different regions of the image to restore can be assigned automatically and adaptively.

16 A sketch of variable exponent Lebesgue spaces $L^{p(\cdot)}$
Classical $L^p(\Omega)$:
  $1 \leq p \leq \infty$, with $p$ constant;
  $\|f\|_p = \big(\int_\Omega |f(x)|^p\, dx\big)^{1/p}$, $\|f\|_\infty = \operatorname{ess\,sup} |f(x)|$;
  $f \in L^p(\Omega) \iff \int_\Omega |f(x)|^p\, dx < \infty$.
Variable exponent $L^{p(x)}(\Omega)$:
  $p(x) : \Omega \to [1, \infty]$ is a measurable function;
  $\|f\|_{p(\cdot)} = \big(\int_\Omega |f(x)|^{p(x)}\, dx\big)^{1/???}$ ... which radical?
  $f \in L^{p(\cdot)}(\Omega) \iff$ ???
In the following, $\Omega_\infty = \{x \in \Omega : p(x) = \infty\}$ has zero measure.

17 Example. $\Omega = [-5, 5]$,
$$ p(x) = \begin{cases} 2 & \text{if } -5 \leq x \leq 0 \\ 3 & \text{if } 0 < x \leq 5 \end{cases} $$

18 Example (continued). $\Omega = [-5, 5]$,
$$ p(x) = \begin{cases} 2 & \text{if } -5 \leq x \leq 0 \\ 3 & \text{if } 0 < x \leq 5 \end{cases} $$
$$ f(x) = \frac{1}{|x - 1|^{1/3}} \notin L^{p(\cdot)}([-5, 5]) $$

19 Example (continued). $\Omega = [-5, 5]$, $p(x)$ as above.
$$ f(x) = \frac{1}{|x + 1|^{1/3}} \in L^{p(\cdot)}([-5, 5]), \qquad f(x) = \frac{1}{|x - 1|^{1/3}} \notin L^{p(\cdot)}([-5, 5]) $$
(The singularity at $x = -1$ lies where $p(x) = 2$, and $|f|^2 = |x+1|^{-2/3}$ is integrable; the singularity at $x = 1$ lies where $p(x) = 3$, and $|f|^3 = |x-1|^{-1}$ is not.) A numerical check is sketched below.
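A small numpy sketch (my own check, not part of the talk) that verifies the two memberships by approximating the modular $\rho(f) = \int_{-5}^{5} |f(x)|^{p(x)}\,dx$ on progressively finer grids; the divergent case shows a modular that keeps growing as the grid is refined near the singularity.

```python
import numpy as np

def modular_on_grid(f, p, a=-5.0, b=5.0, n=100_001):
    """Midpoint-rule approximation of rho(f) = int_a^b |f(x)|^{p(x)} dx."""
    x = np.linspace(a, b, n)
    x = 0.5 * (x[:-1] + x[1:])                  # midpoints avoid hitting the singularity exactly
    h = (b - a) / (n - 1)
    return np.sum(np.abs(f(x)) ** p(x)) * h

p = lambda x: np.where(x <= 0.0, 2.0, 3.0)      # exponent: 2 on [-5, 0], 3 on (0, 5]
f_in  = lambda x: 1.0 / np.abs(x + 1.0) ** (1.0 / 3.0)   # singularity where p = 2
f_out = lambda x: 1.0 / np.abs(x - 1.0) ** (1.0 / 3.0)   # singularity where p = 3

for n in (10_001, 100_001, 1_000_001):
    print(n, modular_on_grid(f_in, p, n=n), modular_on_grid(f_out, p, n=n))
# The values for f_in stabilize (finite modular: f_in is in L^{p(.)}),
# while those for f_out keep increasing with n (divergent integral: f_out is not).
```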

20 The norm of variable exponent Lebesgue spaces
In the conventional case $L^p$, the norm is $\|f\|_{L^p} = \big(\int_\Omega |f(x)|^p\, dx\big)^{1/p}$.
In $L^{p(\cdot)}$ Lebesgue spaces, the definition and computation of the norm is not straightforward, since we do not have a constant value for computing the (mandatory) radical:
$$ \|f\|_{L^{p(\cdot)}} = \Big(\int_\Omega |f(x)|^{p(x)}\, dx\Big)^{1/???}. $$
The solution: first compute the modular (for $1 \leq p(\cdot) < +\infty$)
$$ \rho(f) = \int_\Omega |f(x)|^{p(x)}\, dx, $$
and then obtain the (so-called Luxemburg [1955]) norm by solving a 1D minimization problem
$$ \|f\|_{L^{p(\cdot)}} = \inf\Big\{ \lambda > 0 : \rho\Big(\frac{f}{\lambda}\Big) \leq 1 \Big\}. $$

21 The elements of a variable exponent Lebesgue space
$$ \rho(f) = \int_\Omega |f(x)|^{p(x)}\, dx, \qquad \|f\|_{L^{p(\cdot)}} = \inf\Big\{ \lambda > 0 : \rho\Big(\frac{f}{\lambda}\Big) \leq 1 \Big\}, $$
$$ L^{p(\cdot)}(\Omega) = \Big\{ f : \Omega \to \mathbb{R} \;\big|\; \|f\|_{L^{p(\cdot)}} < \infty \Big\}. $$
The Lebesgue space $L^{p(\cdot)}(\Omega)$ is a Banach space.
In the case of a constant exponent $p(x) \equiv p$, this norm is exactly the classical one $\|f\|_p$; indeed
$$ \rho\Big(\frac{f}{\lambda}\Big) = \int_\Omega \Big(\frac{|f(x)|}{\lambda}\Big)^p dx = \frac{1}{\lambda^p}\int_\Omega |f(x)|^p\, dx = \frac{1}{\lambda^p}\, \|f\|_p^p $$
and
$$ \inf\Big\{ \lambda > 0 : \frac{1}{\lambda^p}\|f\|_p^p \leq 1 \Big\} = \|f\|_p. $$
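A minimal numpy sketch (my own illustration, not code from the talk) of the Luxemburg norm in the discrete case $\ell^{p(\cdot)}$: since the modular $\rho(x/\lambda) = \sum_i |x_i/\lambda|^{p_i}$ is nonincreasing in $\lambda$, the infimum can be computed by bisection.

```python
import numpy as np

def modular(x, p):
    """Discrete modular rho(x) = sum_i |x_i|^{p_i}."""
    return np.sum(np.abs(x) ** p)

def luxemburg_norm(x, p, iters=80):
    """||x||_{p(.)} = inf{ lam > 0 : modular(x / lam) <= 1 }, computed by bisection."""
    if not np.any(x):
        return 0.0
    hi = 1.0
    while modular(x / hi, p) > 1.0:            # grow hi until the modular constraint holds
        hi *= 2.0
    lo = 0.0
    for _ in range(iters):                     # bisection: modular(x/lam) is nonincreasing in lam
        mid = 0.5 * (lo + hi)
        if modular(x / mid, p) > 1.0:
            lo = mid
        else:
            hi = mid
    return hi

# Sanity check: with a constant exponent the Luxemburg norm reduces to the usual p-norm
rng = np.random.default_rng(3)
x = rng.standard_normal(1000)
p_const = np.full_like(x, 1.5)
print(luxemburg_norm(x, p_const), np.linalg.norm(x, 1.5))
```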

22 Modular vs. norm
In the classical $L^p$, norm and modular are finite at the same time, apart from a $p$-th root:
$$ \|f\|_p < \infty \iff \int_\Omega |f(x)|^p\, dx < \infty. $$
In $L^{p(\cdot)}$, norm and modular are really different: $\|f\|_{p(\cdot)} < \infty$ is not equivalent to $\rho(f) < \infty$.
Indeed, the following holds:
$$ \|f\|_{p(\cdot)} < \infty \iff \text{there exists } \lambda > 0 \text{ s.t. } \rho\Big(\frac{f}{\lambda}\Big) < \infty $$
(and notice that $\lambda$ can be chosen large enough...).

23 A simple example of a strange behavior
$$ \Omega = [1, +\infty), \qquad f(x) \equiv 1, \qquad p(x) = x. $$
Then $\rho(f) = \int_1^{+\infty} 1^x\, dx = +\infty$, BUT $\|f\|_{p(\cdot)} < \infty$.
Indeed
$$ \rho\Big(\frac{f}{\lambda}\Big) = \int_1^{+\infty} \Big(\frac{1}{\lambda}\Big)^x dx = \frac{1}{\lambda \log\lambda} < \infty, \quad \text{for any } \lambda > 1. $$

24 The vector case: the Lebesgue spaces of sequences $\ell^{p(\cdot)} = \ell^{(p_n)_n}$
[Figure: the unit circle of $x = (x_1, x_2)$ in $\mathbb{R}^2$ for the norm $\|x\|_p$ with constant exponent and for $\|x\|_{p(\cdot)}$ with variable exponents $p = (p_1, p_2)$.]
Constant exponents: inclusion of the spaces if $(p_1, p_2) \leq (q_1, q_2)$ (as in the classical case). Variable exponents: no inclusion in general.

25 Properties of variable exponent Lebesgue spaces $L^{p(\cdot)}$
Let $p_- = \operatorname{ess\,inf}_\Omega p(x)$ and $p_+ = \operatorname{ess\,sup}_\Omega p(x)$.
If $p_+ = \infty$, then $L^{p(\cdot)}(\Omega)$ is a bad (although very interesting) Banach space, with poor geometric properties (i.e., not useful for our regularization schemes).
If $1 < p_- \leq p_+ < \infty$, then $L^{p(\cdot)}(\Omega)$ is a good Banach space, since many properties of the classical Lebesgue spaces $L^p$ still hold. This is the natural framework for our iterative methods in Banach spaces, because:
$L^{p(\cdot)}$ is uniformly smooth, uniformly convex, and reflexive;
its dual space is well defined, $\big(L^{p(\cdot)}\big)^* \cong L^{q(\cdot)}$, where $\frac{1}{p(x)} + \frac{1}{q(x)} = 1$;
we can define its duality map.

26 The duality map of the variable exponent Lebesgue space
By extending the duality maps, we can define in $L^{p(\cdot)}$ all the iterative methods developed in $L^p$ (Landweber, steepest descent, CG, Mann iterations).
For any constant $1 < r < +\infty$, we recall that the duality map, that is, the (sub-)differential of the functional $\frac{1}{r}\|f\|_{L^p}^r$, in the classical Banach space $L^p$ with constant $1 < p < +\infty$, is defined as follows:
$$ \big(J_{L^p}(f)\big)(x) = |f(x)|^{p-1}\, \mathrm{sgn}(f(x))\; \|f\|_p^{\,r-p}. $$
By generalizing a result of P. Matei [2012], the corresponding duality map in the variable exponent Lebesgue space is defined as follows:
$$ \big(J_{L^{p(\cdot)}}(f)\big)(x) = \Bigg( \int_\Omega p(y)\, \frac{|f(y)|^{p(y)}}{\|f\|_{p(\cdot)}^{\,p(y)}}\, dy \Bigg)^{-1} p(x)\, |f(x)|^{p(x)-1}\, \mathrm{sgn}(f(x))\; \|f\|_{p(\cdot)}^{\,r-p(x)}, $$
where all products and ratios are understood pointwise.
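A discrete numpy sketch (my own illustration, under the assumption that the formula above carries over verbatim to $\ell^{p(\cdot)}$ with sums in place of integrals); the Luxemburg-norm helper repeats the bisection sketch given earlier, and the constant-exponent sanity check compares against the classical formula.

```python
import numpy as np

def modular(x, p):
    return np.sum(np.abs(x) ** p)

def luxemburg_norm(x, p, iters=80):
    if not np.any(x):
        return 0.0
    hi = 1.0
    while modular(x / hi, p) > 1.0:
        hi *= 2.0
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if modular(x / mid, p) > 1.0 else (lo, mid)
    return hi

def duality_map_var(x, p, r=2.0):
    """Discrete analogue of J_{L^{p(.)}}: (sub-)gradient of (1/r) * ||x||_{p(.)}^r."""
    nrm = luxemburg_norm(x, p)
    if nrm == 0.0:
        return np.zeros_like(x)
    denom = np.sum(p * np.abs(x) ** p / nrm ** p)      # normalizing integral (here a sum)
    return p * np.abs(x) ** (p - 1) * np.sign(x) * nrm ** (r - p) / denom

# Sanity check: with a constant exponent it reduces to |x|^{p-1} sgn(x) ||x||_p^{r-p}
rng = np.random.default_rng(4)
x = rng.standard_normal(50)
p_const = np.full_like(x, 1.5)
ref = np.abs(x) ** 0.5 * np.sign(x) * np.linalg.norm(x, 1.5) ** (2.0 - 1.5)
print(np.max(np.abs(duality_map_var(x, p_const, r=2.0) - ref)))
```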

27 The adaptive algorithm in variable exponent Lebesgue spaces
Numerical evidence shows that, in $L^p$ image deblurring:
dealing with small exponents $1 \leq p \ll 2$ improves sparsity and allows a better restoration of the edges of the images and of the zero background;
dealing with $p \approx 2$ (even $p > 2$) allows a better restoration of the regions of pixels with the highest intensities.
The idea: use a version of the (re-)blurred data $A^*g$, rescaled into $[1, 2]$, as the exponent distribution $p(\cdot)$ of the variable exponent Lebesgue space $L^{p(\cdot)}$ in which the solution is computed.
Example (linear interpolation): $p(\cdot) = 1 + [A^*g(\cdot) - \min(A^*g)]/[\max(A^*g) - \min(A^*g)]$.
The Landweber (i.e., fixed point) iterative scheme in this $L^{p(\cdot)}$ Banach space can be turned into an adaptive iterative algorithm by recomputing, after each fixed number of iterations, the exponent distribution $p_k(\cdot)$ by means of the $k$-th restored image $f_k$ (instead of the first re-blurred data $A^*g$), that is
$$ p_k(\cdot) = 1 + [f_k(\cdot) - \min(f_k)]/[\max(f_k) - \min(f_k)]. $$
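A short numpy sketch (my own, with placeholder bounds pmin/pmax generalizing the interval [1, 2] above) of the exponent-distribution update by linear interpolation:

```python
import numpy as np

def exponent_map(u, pmin=1.0, pmax=2.0, eps=1e-12):
    """Rescale an image/signal u linearly into [pmin, pmax] to obtain the
    pointwise exponent distribution p(.) used by the adaptive scheme."""
    u = np.asarray(u, dtype=float)
    span = u.max() - u.min()
    if span < eps:                       # flat input: fall back to a constant exponent
        return np.full_like(u, 0.5 * (pmin + pmax))
    return pmin + (pmax - pmin) * (u - u.min()) / span

# p(.) from the re-blurred data at start-up, then from the current iterate every m iterations:
# p = exponent_map(A.T @ g)       # initialization (A, g as in the deblurring problem)
# p = exponent_map(f_k)           # adaptive update from the k-th restored image
```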

28 The conjugate gradient method in $L^{p(\cdot)}$ for image restoration
Let $p(\cdot) = 1 + [A^*g(\cdot) - \min(A^*g)]/[\max(A^*g) - \min(A^*g)]$ and
$$ \rho_0 = A^* J^r_{L^{p(\cdot)}}(Af_0 - g). $$
For $k = 0, 1, 2, \dots$
$$ \alpha_k = \arg\min_\alpha\; \tfrac{1}{r}\, \|A(f_k + \alpha\rho_k) - g\|^r_{L^{p(\cdot)}} $$
$$ \tilde f_{k+1} = f_k + \alpha_k \rho_k, \qquad f_{k+1} = J^s_{(L^{p(\cdot)})^*}(\tilde f_{k+1}) $$
$$ \beta_{k+1} = \gamma\, \frac{\|Af_{k+1} - g\|^r_{L^{p(\cdot)}}}{\|Af_k - g\|^r_{L^{p(\cdot)}}}, \quad \text{with } \gamma < 1/2 $$
$$ \rho_{k+1} = A^* J^r_{L^{p(\cdot)}}(Af_{k+1} - g) + \beta_{k+1} \rho_k $$
and recompute $p(\cdot) = 1 + [f_k(\cdot) - \min(f_k)]/[\max(f_k) - \min(f_k)]$ every $m$ iterations by using the last iterate $f_k$.
[Proof of convergence for $L^p$ (constant exponent) Lebesgue spaces in 2017.]
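The line search for $\alpha_k$ has no closed form in the Luxemburg norm. A small numpy/scipy sketch (my own illustration of that single step, with placeholder names and data, not the talk's implementation), reusing the bisection-based norm from the earlier sketch:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def modular(x, p):
    return np.sum(np.abs(x) ** p)

def luxemburg_norm(x, p, iters=80):
    if not np.any(x):
        return 0.0
    hi = 1.0
    while modular(x / hi, p) > 1.0:
        hi *= 2.0
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if modular(x / mid, p) > 1.0 else (lo, mid)
    return hi

def line_search_alpha(A, f, rho, g, p, r=1.5):
    """alpha_k = argmin_alpha (1/r) * || A (f + alpha*rho) - g ||_{p(.)}^r  (scalar problem)."""
    phi = lambda a: luxemburg_norm(A @ (f + a * rho) - g, p) ** r / r
    return minimize_scalar(phi).x            # Brent's method on the scalar function

# Tiny usage example with synthetic data (placeholders, not from the talk)
rng = np.random.default_rng(5)
A = rng.standard_normal((40, 20))
g = A @ rng.standard_normal(20)
f = np.zeros(20)
rho = A.T @ (g - A @ f)                      # a descent direction, here the plain gradient
p = np.full(40, 1.6)                         # exponent distribution on the data space
print(line_search_alpha(A, f, rho, g, p))
```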

29 Numerical results

30 [Figure panels: true image; point spread function; blurred image (noise = 4.7%).]
Special thanks to Brigida Bonino (Univ. Genova).

31 [Figure panels: true image; blurred image (noise = 4.7%); restorations with p = 2 (0.2692), p = 1.3 (0.2681), variable p(·) (0.2473), and adaptive/irregular p(·) (0.2307).]

32 [Figure panels: true image; point spread function; blurred image (noise = 4.7%).]

33 A remote sensing application: spatial resolution enhancement of microwave radiometer data
It is an under-determined problem (with a specially structured matrix), since we want to reconstruct the high-frequency components reduced by the low-pass filter of the radiometer.
Joint work with: Matteo Alparone, Flavia Lenti, Maurizio Migliaccio, Nando Nunziata (Dip. di Ingegneria, Univ. Napoli Parthenope).

34 [Figure: RECT and DOUBLE RECT test signals, reconstructed with p = 2, with a fixed p, and with variable p. Parameters: pmin = 1.2; pmax = 2; lambda = 0.3; maxiter = 2000; ethr = 0.12; fkind = 1.]

35 [Figure: resolution enhancement of a test signal, comparing the reconstruction with constant p = 1.2 (xconst TB) and with variable p (xvar TB) against the original signal.]

36 Conclusions
Basic tools for regularization methods in Banach spaces have been reviewed, and the variable exponent Lebesgue spaces have been introduced.
Iterative methods in variable exponent Lebesgue spaces can give adaptive regularization.
The framework is new (previously, variable exponents were only used in the penalty term, as the weighted total variation $\int |\nabla f|^{p(\cdot)}\, dx$ in [T. Chan et al. 1997], without any link to the Banach space setting).
TO DO: theoretical convergence is under analysis.
TO DO: the application of $L^{p(\cdot)}$ duality maps to the acceleration of Richardson-Lucy (R-L) and ISRA has given very good numerical results; it should be analyzed further.

37 Thank you for your attention

38 References
[1] F. Schöpfer, A.K. Louis, T. Schuster, Nonlinear iterative methods for linear ill-posed problems in Banach spaces, Inverse Problems, vol. 22, 2006.
[2] O. Scherzer, M. Grasmair, H. Grossauer, M. Haltmeier, F. Lenzen, Variational Methods in Imaging, Springer, 2009.
[3] L. Diening, P. Harjulehto, P. Hästö, M. Ruzicka, Lebesgue and Sobolev Spaces with Variable Exponents, Lecture Notes in Mathematics, vol. 2017, Springer, 2011.
[4] T. Schuster, B. Kaltenbacher, B. Hofmann, K.S. Kazimierski, Regularization Methods in Banach Spaces, Radon Series on Computational and Applied Mathematics, vol. 10, De Gruyter, 2012.
[5] P. Brianzi, F. Di Benedetto, C. Estatico, Preconditioned iterative regularization in Banach spaces, Comput. Optim. Appl., vol. 54, 2013.
[6] P. Dell'Acqua, C. Estatico, Acceleration of multiplicative iterative algorithms for image deblurring by duality maps in Banach spaces, Appl. Numer. Math., vol. 99, 2016.
[7] C. Estatico, S. Gratton, F. Lenti, D. Titley-Peloquin, A conjugate gradient like method for p-norm minimization in functional spaces, Numer. Math., vol. 137, 2017.
