Generalized Local Regularization for Ill-Posed Problems


1 Generalized Local Regularization for Ill-Posed Problems
Patricia K. Lamm, Department of Mathematics, Michigan State University
AIP 2009, July 22, 2009
Based on joint work with Cara Brooks, Zhewei Dai, and Xiaoyue Luo

2 Basic problem
Problem: Solve Au = f for ū ∈ dom(A) ⊆ X, where the ideal data f ∈ R(A) ⊆ X is only approximately given. We assume:
* The operator A may be linear or nonlinear;
* X is a Hilbert or reflexive Banach space;
* Existence/uniqueness of the solution ū for f ∈ R(A), but ū does not depend continuously on f;
* The data is only approximately given by f^δ ∈ X, with ‖f − f^δ‖_X ≤ δ.
(Precise conditions on A to be given later.)

3 Examples of (related) regularization methods
Equation for 0th-order Tikhonov regularization: Let α > 0 and find u_α^δ ∈ dom(A) ⊆ X minimizing
    ‖Au − f^δ‖_X^2 + α‖u‖_X^2.
If A is bounded linear, the regularized solution u_α^δ satisfies
    A*Au + αu = A*f^δ.  (1)
Features (for the generic linear case):
Pros:
* Solution of (1) easily handled for f^δ ∈ X
* All-purpose regularizer
* Easy to implement
Cons:
* Stability of (A*A)^(−1) vs. A^(−1)
* A nonlinear ⇒ modify or discard (1)
* Ignores special structure of A
* Can be oversmoothing
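In the discrete setting, equation (1) is just a shifted normal-equations solve. A minimal numerical sketch, assuming NumPy and a hypothetical test problem (the kernel k(t,s) = 1, the solution sin(πt), the noise level, and α are all illustrative choices, not from the talk):

```python
import numpy as np

# Midpoint-rule discretization of Au(t) = ∫_0^t u(s) ds  (kernel k(t,s) = 1)
n = 100
h = 1.0 / n
t = (np.arange(n) + 0.5) * h
A = h * np.tril(np.ones((n, n)))        # lower-triangular quadrature matrix

u_true = np.sin(np.pi * t)              # hypothetical exact solution
rng = np.random.default_rng(0)
f_delta = A @ u_true + 1e-3 * rng.standard_normal(n)   # noisy data f^δ

# 0th-order Tikhonov: solve (A^T A + αI) u = A^T f^δ, cf. equation (1)
alpha = 1e-4
u_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ f_delta)
```

Note how forming A^T A destroys the lower-triangular (Volterra) structure of A, which is exactly the "ignores special structure" drawback listed above.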

4 Examples of (related) regularization methods
Equation for Lavrent'ev (or simplified) regularization: In contrast to the Tikhonov regularization equation for linear A,
    A*Au + αu = A*f^δ,
the method of Lavrent'ev finds u_α^δ ∈ dom(A) satisfying
    Au + αu = f^δ.
Features:
Pros:
* Underlying operator is still A, so preserves the structure of the equation
* Method is the same for both linear and nonlinear problems
* Easy to implement
Cons:
* Solvability guaranteed only for certain A
* Numerical results not always satisfactory
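For comparison, Lavrent'ev regularization on the same hypothetical discretized problem is a single solve with A itself; since the integration operator is accretive, the method applies. A sketch (illustrative choices as before, NumPy assumed):

```python
import numpy as np

# Same midpoint discretization of Au(t) = ∫_0^t u(s) ds (kernel k(t,s) = 1);
# this A is accretive, so Lavrent'ev regularization is applicable.
n = 100
h = 1.0 / n
t = (np.arange(n) + 0.5) * h
A = h * np.tril(np.ones((n, n)))

u_true = np.sin(np.pi * t)              # hypothetical exact solution
rng = np.random.default_rng(1)
f_delta = A @ u_true + 1e-4 * rng.standard_normal(n)

# Lavrent'ev: solve (A + αI) u = f^δ; no adjoint A* appears, and the
# lower-triangular (causal) structure of A is preserved.
alpha = 0.02
u_alpha = np.linalg.solve(A + alpha * np.eye(n), f_delta)
```

The preserved triangular structure means this system could be solved by forward substitution, which anticipates the sequential solves used later for local regularization.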

5 Diagonal quantities
Comparisons: Both the Tikhonov (linear case) and Lavrent'ev methods stabilize using the "diagonal" quantity αu. But local regularization also involves a "diagonal" quantity:
Tikhonov and Lavrent'ev regularization:
* αu is added to the underlying operator, which makes a small shift of the singular values of the operator away from zero.
Local regularization:
* We first subtract a "diagonal" quantity from the operator... and use this to construct a diagonal a_α u to be added back in.
⇒ a_α u more closely linked to A ⇒ can preserve structure
* The subtraction effectively splits the underlying operator. (As we'll see later, the splitting should be related to the sensitivity of the data to local changes in the solution.)

6 Local regularization
The equation for local regularization: For each α ∈ (0, ᾱ] and X_α ⊆ X, let u_α^δ ∈ dom(A_α) ⊆ X_α satisfy
    A_α u + a_α u = f_α^δ
for a given (nonlinear) operator A_α : dom(A_α) ⊆ X_α → X_α, a parameter (function/operator) a_α(·), and f_α^δ ∈ X_α defined via f_α^δ = T_α f^δ, for a family {T_α} of bounded linear operators,
    T_α : X → X_α,  ‖T_α f‖_{X_α} ≤ ‖f‖_X, uniformly in f ∈ X.
[E.g., T_α = I, or T_α = a (normalized) mollification operator, or a number of other choices.]

7 Compare Tikhonov, Lavrent'ev, and local regularization
Comments about the local regularization equation, A_α u + a_α u = T_α f^δ:
Tikhonov and Lavrent'ev are special cases, with parameter-independent A_α and X_α = X for each:

    Method        A_α    a_α          T_α
    Tikhonov      A*A    α (scalar)   A*
    Lavrent'ev    A      α (scalar)   I

In local regularization, T_α plays the role of consolidating a local piece of the function f^δ:
* Tikhonov: T_α = A* (not local at all)
* Lavrent'ev: T_α = I (may be too local)

8 Operators/parameters for local regularization
Selection of A_α, a_α, and T_α for the local regularization method:
Motivation: Look at what's needed to show stability and convergence to ū of u_α^δ, the solution of the (generic) equation
    A_α u + a_α u = T_α f^δ.  (2)
To simplify the motivation:
* Take the operators A, A_α bounded linear, α > 0.
* Take X_α = X, α > 0.
Will make comparisons with:
* stability/convergence results for Tikhonov/Lavrent'ev (since these methods also take the form of equation (2)).

9 Stability of the three methods
(1) Stability for methods based on A_α u + a_α u = T_α f^δ:
Want: the solution u_α^δ unique + depending continuously on the data f^δ
⇒ need (A_α + a_α I) invertible with bounded inverse.
This is easy for Tikhonov regularization: A_α = A*A is automatically nonnegative and self-adjoint.
Not automatic for Lavrent'ev or local regularization: need added conditions on A_α, e.g., assume A_α nonnegative self-adjoint.
Note the difference:
* For Lavrent'ev ⇒ the assumption is on A_α = A;
* For local reg. ⇒ the assumption is on A_α (which we construct).

10 Convergence for the three methods
So assume we already have stability, or that
    ‖(A_α + a_α I)^(−1)‖ = O(1/c(α)),
where c(α) → 0 as α → 0; here ‖·‖ denotes the operator norm.
(2) Convergence for methods based on A_α u + a_α u = T_α f^δ:
Must show: For each f ∈ R(A) there is a choice of α = α(δ, f^δ) for which α(δ, f^δ) → 0 as δ → 0, and for which u^δ_{α(δ,f^δ)} → ū as δ → 0, for all f^δ ∈ X satisfying ‖f − f^δ‖_X ≤ δ.

11 Convergence for the three methods, cont'd.
Verification of convergence of u_α^δ to ū, where
    A_α u_α^δ + a_α u_α^δ = T_α f^δ,  (3)
with A_α, a_α, T_α given: First note ū ∈ X uniquely satisfies Aū = f,
⇒ T_α Aū = T_α f,
⇒ A_α ū + a_α ū = T_α f + (A_α ū + a_α ū − T_α Aū).  (4)
So subtract: equation (3) − (4) to obtain
    (A_α + a_α I)(u_α^δ − ū) = T_α d^δ − (A_α ū + a_α ū − T_α Aū),
where d^δ ≡ (f^δ − f) ∈ X satisfies ‖d^δ‖_X ≤ δ.

12 Convergence for the three methods, cont'd.
We then have
    (u_α^δ − ū) = (A_α + a_α I)^(−1) T_α d^δ + (A_α + a_α I)^(−1) [(T_α Aū − A_α ū) − a_α ū],
where
    ‖(A_α + a_α I)^(−1) T_α d^δ‖_X = O(δ/c(α)).
Goal: select α = α(δ, f^δ) so that α(δ, f^δ) → 0 as δ → 0, and so that both terms above in (u_α^δ − ū) go to zero as δ → 0. That is,
* The term due to data error goes to zero, i.e., δ/c(α(δ, f^δ)) → 0 as δ → 0.
* The term due to method error goes to zero: (A_α + a_α I)^(−1)(D_α ū − a_α ū) → 0 as δ → 0.
Here we have defined D_α ≡ T_α A − A_α.

13 Convergence for the three methods, cont'd.
Note: There is generally little difficulty in making an a priori choice of α = α(δ, f^δ) so that α(δ, f^δ) → 0 and δ/c(α(δ, f^δ)) → 0 as δ → 0.
The problem then is to show that, for a selection of α-values with α → 0, the operator (A_α + a_α I) is continuously invertible, and
    (A_α + a_α I)^(−1)(D_α ū − a_α ū) → 0 as α → 0.
It is well known that these last statements are true (for linear A) in the case of:
* Tikhonov regularization, for A bounded;
* Lavrent'ev regularization, for a narrower class of operators A (e.g., A nonnegative self-adjoint).

14 Local regularization: choice of A_α and a_α
For local regularization, we'll use the desired convergence condition,
    (A_α + a_α I)^(−1)(D_α ū − a_α ū) → 0 as α → 0,  (5)
to motivate the selection of A_α and a_α. (Recall D_α ≡ T_α A − A_α.)
That is, ū satisfies Aū = f ⇒ T_α Aū = T_α f, where the operator T_α A may be split as follows:
    T_α A = A_α + (T_α A − A_α) ⇒ T_α A = A_α + D_α.
Using (5), this splitting of T_α A needs to be such that (roughly) D_α ū ≈ a_α ū for α → 0, for some choice of a_α; i.e., D_α is "diagonal" when applied to ū.

15 Local regularization: choice of A_α and a_α
So in local regularization:
* We first select T_α (selection criteria to be discussed later).
* We then select A_α and a_α (and define D_α ≡ T_α A − A_α) so that for α > 0 we have:
1. (A_α + a_α I) continuously invertible, and
2. the operator (D_α − a_α I) satisfies
    ‖(A_α + a_α I)^(−1)(D_α − a_α I) ū‖_X → 0 as α → 0.
Roughly, we take the modified equation T_α Au = T_α f and split its leading operator via T_α A = A_α + D_α, where
    A_α = the bigger, nearly continuously invertible part of T_α A,
    D_α = the smaller, diagonal-like part of T_α A.

16 Statement of convergence
The above statement of convergence, namely,
    ‖(A_α + a_α I)^(−1)(D_α − a_α I) ū‖_X → 0 as α → 0,  (6)
is obtained for Tikhonov & Lavrent'ev without splitting T_α A:
* Tikhonov: T_α A = A_α = A*A, D_α = 0, a_α = α.
* Lavrent'ev: T_α A = A_α = A, D_α = 0, a_α = α.
In these cases the convergence as α → 0 in (6) takes the form (respectively) of
    ‖(A*A + αI)^(−1)(−αū)‖_X → 0,  ‖(A + αI)^(−1)(−αū)‖_X → 0.
In contrast: With local regularization we can obtain a stronger statement of convergence (than (6)) for many problems.

17 Local regularization: choice of A_α and a_α
Stronger statement of convergence: A stronger result than
    ‖(A_α + a_α I)^(−1)(D_α ū − a_α ū)‖_X → 0 as α → 0
is obtained if the method satisfies
    ‖D_α ū − a_α ū‖_X = o(c(α)) as α → 0  (7)
(recall that ‖(A_α + a_α I)^(−1)‖ = O(1/c(α))).
The stronger result (7) cannot hold for Tikhonov or Lavrent'ev (in general). For these methods the usual case is
    ‖(A_α + αI)^(−1)‖ = O(1/α) as α → 0, i.e., c(α) = α, but
    ‖D_α ū − a_α ū‖_X = ‖αū‖_X = O(α) ⇒ ‖D_α ū − a_α ū‖_X ≠ o(c(α)) as α → 0.

18 Local regularization: the selection of T_α
How to select T_α in the method of local regularization? We will select T_α in order to:
* Facilitate a regularization method which is localized in an appropriate sense.
* Work toward the goal of finding a method which is easier/faster than classical regularization methods.
* Effectively utilize information about the sensitivity of the data f to (local) changes in u (for f, u satisfying Au = f).
We will demonstrate the selection of T_α, as well as of A_α and a_α, through some examples.

19 Examples of local regularization
Three motivating examples... from where local regularization theory is the most complete, i.e., for Volterra operators.
1. Linear problem on X = L^p(0,1), 1 < p:
    ∫_0^t k(t,s)u(s) ds = f(t), a.e. t ∈ [0,1].
2. Nonlinear Hammerstein problem on X = C[0,1]:
    ∫_0^t k(t,s)g(s, u(s)) ds = f(t), a.e. t ∈ [0,1],
under (standard) conditions on g.
3. Nonlinear autoconvolution problem, X = L^2(0,1), C[0,1]:
    ∫_0^t u(t − s)u(s) ds = f(t), a.e. t ∈ [0,1].

20 Linear Volterra example
First consider the linear Volterra problem
    Au(t) = ∫_0^t k(t,s)u(s) ds = f(t), a.e. t ∈ [0,1],
in the special case of X = L^2(0,1), and for k ∈ L^2([0,1] × [0,1]).
To determine an appropriate choice for T_α, we first look at sensitivities of the data f = Au to local parts of u:
A is bounded linear on X ⇒ the Fréchet differential of A at u ∈ X with increment Δu ∈ X is given by DA(u; Δu) = A Δu, i.e.,
    DA(u; Δu)(t) = ∫_0^t k(t,s) Δu(s) ds, a.e. t ∈ [0,1].
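The sensitivity computation DA(u; Δu) = A Δu behind the plots on the next slides can be sketched numerically; here the impulse location 0.4, the width 0.1, and the kernel k ≡ 1 are illustrative choices (NumPy assumed):

```python
import numpy as np

# Sensitivity of the data f = Au to a local change in u:
# DA(u; Δu)(t) = ∫_0^t k(t,s) Δu(s) ds, here with kernel k(t,s) = 1.
n = 1000
h = 1.0 / n
t = (np.arange(n) + 0.5) * h

t0, width = 0.4, 0.1                                 # hypothetical impulse
du = ((t >= t0) & (t < t0 + width)).astype(float)    # unit impulse Δu

# Midpoint rule for the Fréchet differential DA(u; Δu) = A Δu
response = h * np.cumsum(du)
```

The response is identically zero before t0 (causality of the Volterra operator) and then grows; varying the kernel, as in the following slides, changes how quickly the response rises after t0.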

21 Sensitivities of data for linear Volterra problem
Graphs of: (1) Δu = a unit impulse of width 0.1 (1st & 3rd rows of graphs above), and (2) the response DA(u; Δu) (2nd & 4th rows), immediately below each Δu. (Here the kernel for A is k(t,s) = 1.)

22 Sensitivities of data for linear Volterra problem
Combined graphs of DA(u; Δu) for each unit impulse Δu of width 0.1. (As before, the kernel for A is k(t,s) = 1.)

23 Sensitivities of data for linear Volterra problem
Combined graphs of DA(u; Δu) for each unit impulse Δu of width 0.1. (Here the kernel for A is k(t,s) = t − s.)

24 Sensitivities of data for linear Volterra problem
Combined graphs of DA(u; Δu) for each unit impulse Δu of width 0.1. (Here the kernel for A is k(t,s) = (t − s)^2.)

25 Sensitivities of data for linear Volterra problem
Combined graphs of DA(u; Δu) for each unit impulse Δu of width 0.1. (Here the kernel for A is k(t,s) = (t − s)^3.)

26 Selection of T_α for linear Volterra problem
⇒ for the linear Volterra problem, the solution u at t most influences the data f on the interval [t, t + α], for some α > 0 small. The length α of the interval is the regularization parameter.
(Alternatively, use two parameters, one for the sensitivity region and another for regularization. E.g., see [PKL & Dai, 2005].)
One possible choice of T_α:
    T_α f(t) = (1/α) ∫_0^α f(t + ρ) dρ, a.e. t ∈ [0, 1 − α]
(an averaging of f on [t, t + α]). More generally,
    T_α f(t) = ∫_0^α f(t + ρ) dη_α(ρ) / ∫_0^α dη_α(ρ), a.e. t ∈ [0, 1 − α],
for a suitable measure η_α on [0, α].
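The simple averaging choice of T_α is a forward moving mean on a grid; a minimal sketch, with grid size, noise level, and α as illustrative choices (NumPy assumed):

```python
import numpy as np

def T_alpha(f_vals, alpha, h):
    """Discrete T_α f(t) = (1/α) ∫_0^α f(t+ρ) dρ on a uniform grid of width h.

    Returns values on the shortened interval [0, 1-α] only, since the
    average at t needs data up to t + α.
    """
    m = int(round(alpha / h))          # grid points per averaging window
    n = len(f_vals)
    return np.array([f_vals[i:i + m].mean() for i in range(n - m)])

# Illustrative use: the forward average smooths high-frequency noise in f^δ
n = 500
h = 1.0 / n
t = (np.arange(n) + 0.5) * h
rng = np.random.default_rng(2)
f_noisy = np.sin(2 * np.pi * t) + 1e-2 * rng.standard_normal(n)
Tf = T_alpha(f_noisy, alpha=0.05, h=h)
```

Note that T_α reproduces constants exactly and shortens the domain by α, which is why the regularized equation below lives on [0, 1 − α].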

27 Selection of A_α, a_α for linear Volterra problem
For example, we will apply the simple averaging T_α to the original linear Volterra equation. Assume α ∈ (0, ᾱ], for ᾱ > 0 small:
* Start with the original equation Au = f, or
    ∫_0^t k(t,s)u(s) ds = f(t), a.e. t ∈ [0,1].
* Next compute T_α Au = T_α f: for a.e. t ∈ [0, 1 − α],
    (1/α) ∫_0^α ∫_0^{t+ρ} k(t + ρ, s)u(s) ds dρ = (1/α) ∫_0^α f(t + ρ) dρ.
* Now split T_α A = A_α + D_α, where D_α is "nearly diagonal":
    T_α Au(t) = (1/α) ∫_0^α ∫_0^t k(t + ρ, s)u(s) ds dρ + (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s)u(s) ds dρ.

28 Selection of A_α, a_α for linear Volterra problem
So the splitting of T_α A is thus given by T_α A = A_α + D_α, where, for a.e. t ∈ [0, 1 − α],
    A_α u(t) = ∫_0^t k_α(t, s)u(s) ds,  with k_α(t, s) ≡ (1/α) ∫_0^α k(t + ρ, s) dρ,
and
    D_α u(t) = (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s)u(s) ds dρ.
⇒ A_α is of the same form as A.
Finally: select a_α = a_α(t) so that D_α ū ≈ a_α ū for α small.

29 Selection of a_α for linear Volterra problem
Need a_α = a_α(t) so that, for a.e. t ∈ [0, 1 − α],
    D_α u(t) = (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s)u(s) ds dρ ≈ a_α(t)u(t)
when u = ū.
If we approximate u on the local interval [t, t + α] by the constant u(t), then we have (formally)
    D_α u(t) ≈ ( (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s) ds dρ ) u(t) = a_α(t)u(t),
for
    a_α(t) = (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s) ds dρ, a.e. t ∈ [0, 1 − α].

30 Local regularization for the linear Volterra problem
To summarize: The particular construction we have chosen leads to a local regularization equation for the linear Volterra problem given, for noisy data f^δ, by
    A_α u + a_α u = f_α^δ on X_α = L^2(0, 1 − α),
where, for a.e. t ∈ [0, 1 − α],
    f_α^δ(t) = T_α f^δ(t) = (1/α) ∫_0^α f^δ(t + ρ) dρ,
    A_α u(t) = ∫_0^t k_α(t, s)u(s) ds = ∫_0^t ( (1/α) ∫_0^α k(t + ρ, s) dρ ) u(s) ds,
    a_α(t) = (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s) ds dρ.
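In the special case k(t,s) ≡ 1, these formulas collapse: k_α ≡ 1, so A_α is simply A restricted to [0, 1 − α], and a_α(t) = (1/α)∫_0^α ρ dρ = α/2. A numerical sketch of the resulting method, with all discretization choices (grid, α, test solution, noise) illustrative (NumPy assumed):

```python
import numpy as np

# Local regularization for ∫_0^t u(s) ds = f(t) (kernel k ≡ 1), where
# k_α ≡ 1 gives A_α = A and a_α(t) = (1/α)∫_0^α ρ dρ = α/2.
n = 200
h = 1.0 / n
t = (np.arange(n) + 0.5) * h
A = h * np.tril(np.ones((n, n)))        # midpoint quadrature for A

u_true = t + np.sin(2 * np.pi * t)      # hypothetical exact solution
rng = np.random.default_rng(3)
f_delta = A @ u_true + 1e-3 * rng.standard_normal(n)

alpha = 0.05
m = int(round(alpha / h))               # points per averaging window
na = n - m                              # equation lives on [0, 1 - α]

# f_α^δ = T_α f^δ: forward average of the data
f_alpha = np.array([f_delta[i:i + m].mean() for i in range(na)])

# (A_α + a_α I) u = f_α^δ: lower triangular plus positive diagonal
u_loc = np.linalg.solve(A[:na, :na] + (alpha / 2) * np.eye(na), f_alpha)
```

The system matrix is lower triangular with diagonal h + α/2 > 0, so the same solve could be done sequentially by forward substitution; this is one source of the method's speed.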

31 Local regularization for the linear Volterra problem
Result: For a large class of kernels k and for α sufficiently small:
1. The quantity
    a_α(t) = (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s) ds dρ
satisfies
    0 < c_1 c(α) ≤ a_α(t) ≤ c_2 c(α), a.e. t ∈ [0, 1 − α],
for some c(α) > 0 with the property that c(α) → 0 as α → 0; and,
2. (A_α + a_α I) is invertible with
    ‖(A_α + a_α I)^(−1)‖ = O(1/c(α)) as α → 0.

32 Stronger convergence result
Question: So why does the local regularization method satisfy the stronger convergence result? Recall that in this case we need to verify
    ‖D_α ū − a_α ū‖_{X_α} = o(c(α)) as α → 0.
In contrast to Tikhonov and Lavrent'ev, where X_α = X and
    ‖D_α ū − αū‖_{X_α} = ‖αū‖_{X_α} ≠ o(α),
the method of local regularization gives
    D_α ū(t) − a_α(t)ū(t) = (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s)ū(s) ds dρ − ( (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s) ds dρ ) ū(t).

33 Stronger convergence result, cont'd
So for local regularization we have
    D_α ū(t) − a_α(t)ū(t) = (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s)(ū(s) − ū(t)) ds dρ, a.e. t ∈ [0, 1 − α),
and thus, as α → 0,
    ‖D_α ū − a_α ū‖_{X_α} = O(‖a_α(·)‖_{L^∞}) ‖S_α ū − ū‖_{X_α} = O(c(α)) ‖S_α ū − ū‖_{X_α},
where
    S_α ū(t) = (1/α) ∫_0^α ū(t + s) ds.
Note that S_α ū(t) is one of the local averages appearing in the Hardy-Littlewood maximal function for ū at t.

34 Stronger convergence result, cont'd
So we have
    ‖D_α ū − a_α ū‖_{X_α} = O(c(α)) ‖S_α ū − ū‖_{X_α},
and it follows from the Lebesgue differentiation theorem that
    ‖S_α ū − ū‖_{X_α} → 0 as α → 0,
for any ū ∈ X. Thus
    ‖D_α ū − a_α ū‖_{X_α} = o(c(α)) as α → 0,
and the stronger convergence result is obtained. For smoother ū, it is clear from this same estimate that we can obtain a rate of convergence.

35 General results for linear Volterra problems
More general results for the linear Volterra problem ([PKL] for earlier work; [Brooks thesis, 2007] and [Brooks & PKL, 2009] for L^p results and a discrepancy principle):
* f ∈ R(A) ⊆ X = L^p[0,1], f^δ ∈ L^p[0,1], 1 < p.
* For ν-smoothing problems, with a large class of measures (depending on ν) used to define suitable operators T_α.
* Convergence as δ → 0 of u_α^δ to ū in X_α = L^p[0, 1 − α] for a priori choices of α = α(δ, f^δ).
* Optimal rates of convergence for the ν-smoothing problem.
* A discrepancy principle of the form
    d(α)/α^m = τδ,
where
    d(α) ≡ ‖A_α u_α^δ − T_α f^δ‖_{X_α}.

36 Nonlinear Volterra problems
Nonlinear Hammerstein problem: Start with the usual local regularization equation
    A_α u + a_α u = T_α f^δ,
except now A_α and the term a_α u are both nonlinear in u.
How to handle the nonlinearity? Use the local regularization interval to facilitate linearization of the nonlinear problem [Luo thesis, 2007; Brooks, PKL, Luo, 2009]; i.e., for t > 0, let τ = min{t, α}, and linearize a_α = a_α(t, u(t)) about the prior u-value u(t − τ).
Numerical implementation of the linearized problem:
    u = (T_α f^δ − A_α u)/a_α(·), t > 0
⇒ solve a linear problem for u(t) sequentially, since A_α u and (the linearized) a_α only involve values of u prior to t.
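The sequential structure is easiest to see in the linear model problem with k ≡ 1, where the discretized local regularization equation reduces to one scalar equation per time step. A sketch of that marching idea (an illustration of the sequential principle, not the authors' Hammerstein code; NumPy assumed):

```python
import numpy as np

# Sequential (forward-substitution) solve of the discretized local
# regularization equation (A_α + a_α I) u = T_α f^δ for kernel k ≡ 1,
# where A_α u(t_i) ≈ h·Σ_{j<=i} u_j and a_α = α/2.
def solve_sequentially(f_alpha, h, alpha):
    u = np.zeros(len(f_alpha))
    running_integral = 0.0          # h·Σ_{j<i} u_j: values of u prior to t_i
    for i in range(len(f_alpha)):
        # one scalar linear equation per time step:
        # running_integral + (h + α/2)·u_i = (T_α f^δ)_i
        u[i] = (f_alpha[i] - running_integral) / (h + alpha / 2)
        running_integral += h * u[i]
    return u
```

Each step costs O(1) here because k ≡ 1 lets the history accumulate in a running integral; for a general kernel the history term is a sum over prior values, but the solve remains sequential and non-iterative, which is the source of the speed advantage over iterative nonlinear Tikhonov.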

37 Nonlinear Volterra problems, cont'd
Nonlinear autoconvolution problem: For the autoconvolution problem (as with Hammerstein), the usual local regularization equation
    A_α u + a_α u = T_α f^δ  (8)
involves nonlinearities in both A_α and the term a_α u.
First: Solve (8) for u(t), t ∈ [0, α] (α > 0 small):
* The nonlinear equation (8) is not too difficult to solve on [0, α];
* We need only a reasonable approximation to u on [0, α] for convergence.
Then: Solve (8) for u(t), t ∈ (α, 1] using linear methods:
* The (approximate) solution u on (α, 1] can be found sequentially from (8) using linear techniques.
Thus, linearization of (8) is not required in practice for the autoconvolution problem (but is used as a tool in the convergence theory).
[Dai thesis, 2006; Dai & PKL, 2008; Brooks, Dai, & PKL, 2009]

38 Nonlinear Volterra problems, cont'd
Numerical results for nonlinear Volterra problems: In general,
* Qualitatively equivalent to the Tikhonov solution, except not oversmoothed; considerably better than the Lavrent'ev solution.
* Many times faster than a nonlinear Tikhonov (iterative) approach in practice. (See above references.)
Other non-Volterra work:
* 2-D image deblurring: Older work: [Cui thesis], [Cui, Lamm, Scofield, 2007]. Newer results (to appear) are based on the generalized local regularization approach ⇒ less computationally intensive.
* General n-dimensional problems governed by a nonlinear monotone operator (to appear; from the generalized theory).

39 Sample numerical results

40 Sample numerical results, cont'd


More information

Irregular Solutions of an Ill-Posed Problem

Irregular Solutions of an Ill-Posed Problem Irregular Solutions of an Ill-Posed Problem Peter Linz University of California at Davis Richard L.C. Wang Clearsight Systems Inc. Abstract: Tikhonov regularization is a popular and effective method for

More information

Multiplying matrices by diagonal matrices is faster than usual matrix multiplication.

Multiplying matrices by diagonal matrices is faster than usual matrix multiplication. 7-6 Multiplying matrices by diagonal matrices is faster than usual matrix multiplication. The following equations generalize to matrices of any size. Multiplying a matrix from the left by a diagonal matrix

More information

On the solvability of an inverse fractional abstract Cauchy problem

On the solvability of an inverse fractional abstract Cauchy problem On the solvability of an inverse fractional abstract Cauchy problem Mahmoud M. El-borai m ml elborai @ yahoo.com Faculty of Science, Alexandria University, Alexandria, Egypt. Abstract This note is devolved

More information

Numerische Mathematik

Numerische Mathematik Numer. Math. 1999 83: 139 159 Numerische Mathematik c Springer-Verlag 1999 On an a posteriori parameter choice strategy for Tikhonov regularization of nonlinear ill-posed problems Jin Qi-nian 1, Hou Zong-yi

More information

Nonlinear Analysis 71 (2009) Contents lists available at ScienceDirect. Nonlinear Analysis. journal homepage:

Nonlinear Analysis 71 (2009) Contents lists available at ScienceDirect. Nonlinear Analysis. journal homepage: Nonlinear Analysis 71 2009 2744 2752 Contents lists available at ScienceDirect Nonlinear Analysis journal homepage: www.elsevier.com/locate/na A nonlinear inequality and applications N.S. Hoang A.G. Ramm

More information

6. Approximation and fitting

6. Approximation and fitting 6. Approximation and fitting Convex Optimization Boyd & Vandenberghe norm approximation least-norm problems regularized approximation robust approximation 6 Norm approximation minimize Ax b (A R m n with

More information

A spacetime DPG method for acoustic waves

A spacetime DPG method for acoustic waves A spacetime DPG method for acoustic waves Rainer Schneckenleitner JKU Linz January 30, 2018 ( JKU Linz ) Spacetime DPG method January 30, 2018 1 / 41 Overview 1 The transient wave problem 2 The broken

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

Parallel Cimmino-type methods for ill-posed problems

Parallel Cimmino-type methods for ill-posed problems Parallel Cimmino-type methods for ill-posed problems Cao Van Chung Seminar of Centro de Modelización Matemática Escuela Politécnica Naciónal Quito ModeMat, EPN, Quito Ecuador cao.vanchung@epn.edu.ec, cvanchung@gmail.com

More information

AN EXTENSION OF THE LAX-MILGRAM THEOREM AND ITS APPLICATION TO FRACTIONAL DIFFERENTIAL EQUATIONS

AN EXTENSION OF THE LAX-MILGRAM THEOREM AND ITS APPLICATION TO FRACTIONAL DIFFERENTIAL EQUATIONS Electronic Journal of Differential Equations, Vol. 215 (215), No. 95, pp. 1 9. ISSN: 172-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu AN EXTENSION OF THE

More information

Ann. Polon. Math., 95, N1,(2009),

Ann. Polon. Math., 95, N1,(2009), Ann. Polon. Math., 95, N1,(29), 77-93. Email: nguyenhs@math.ksu.edu Corresponding author. Email: ramm@math.ksu.edu 1 Dynamical systems method for solving linear finite-rank operator equations N. S. Hoang

More information

A numerical algorithm for solving 3D inverse scattering problem with non-over-determined data

A numerical algorithm for solving 3D inverse scattering problem with non-over-determined data A numerical algorithm for solving 3D inverse scattering problem with non-over-determined data Alexander Ramm, Cong Van Department of Mathematics, Kansas State University, Manhattan, KS 66506, USA ramm@math.ksu.edu;

More information

Convergence rates of the continuous regularized Gauss Newton method

Convergence rates of the continuous regularized Gauss Newton method J. Inv. Ill-Posed Problems, Vol. 1, No. 3, pp. 261 28 (22) c VSP 22 Convergence rates of the continuous regularized Gauss Newton method B. KALTENBACHER, A. NEUBAUER, and A. G. RAMM Abstract In this paper

More information

Numerical Solution of a Coefficient Identification Problem in the Poisson equation

Numerical Solution of a Coefficient Identification Problem in the Poisson equation Swiss Federal Institute of Technology Zurich Seminar for Applied Mathematics Department of Mathematics Bachelor Thesis Spring semester 2014 Thomas Haener Numerical Solution of a Coefficient Identification

More information

Existence and iteration of monotone positive solutions for third-order nonlocal BVPs involving integral conditions

Existence and iteration of monotone positive solutions for third-order nonlocal BVPs involving integral conditions Electronic Journal of Qualitative Theory of Differential Equations 212, No. 18, 1-9; http://www.math.u-szeged.hu/ejqtde/ Existence and iteration of monotone positive solutions for third-order nonlocal

More information

Conditional stability versus ill-posedness for operator equations with monotone operators in Hilbert space

Conditional stability versus ill-posedness for operator equations with monotone operators in Hilbert space Conditional stability versus ill-posedness for operator equations with monotone operators in Hilbert space Radu Ioan Boț and Bernd Hofmann September 16, 2016 Abstract In the literature on singular perturbation

More information

Two-parameter regularization method for determining the heat source

Two-parameter regularization method for determining the heat source Global Journal of Pure and Applied Mathematics. ISSN 0973-1768 Volume 13, Number 8 (017), pp. 3937-3950 Research India Publications http://www.ripublication.com Two-parameter regularization method for

More information

Regularization Inertial Proximal Point Algorithm for Convex Feasibility Problems in Banach Spaces

Regularization Inertial Proximal Point Algorithm for Convex Feasibility Problems in Banach Spaces Int. Journal of Math. Analysis, Vol. 3, 2009, no. 12, 549-561 Regularization Inertial Proximal Point Algorithm for Convex Feasibility Problems in Banach Spaces Nguyen Buong Vietnamse Academy of Science

More information

Due Giorni di Algebra Lineare Numerica (2GALN) Febbraio 2016, Como. Iterative regularization in variable exponent Lebesgue spaces

Due Giorni di Algebra Lineare Numerica (2GALN) Febbraio 2016, Como. Iterative regularization in variable exponent Lebesgue spaces Due Giorni di Algebra Lineare Numerica (2GALN) 16 17 Febbraio 2016, Como Iterative regularization in variable exponent Lebesgue spaces Claudio Estatico 1 Joint work with: Brigida Bonino 1, Fabio Di Benedetto

More information

Accelerated Newton-Landweber Iterations for Regularizing Nonlinear Inverse Problems

Accelerated Newton-Landweber Iterations for Regularizing Nonlinear Inverse Problems www.oeaw.ac.at Accelerated Newton-Landweber Iterations for Regularizing Nonlinear Inverse Problems H. Egger RICAM-Report 2005-01 www.ricam.oeaw.ac.at Accelerated Newton-Landweber Iterations for Regularizing

More information

Sparse Recovery in Inverse Problems

Sparse Recovery in Inverse Problems Radon Series Comp. Appl. Math XX, 1 63 de Gruyter 20YY Sparse Recovery in Inverse Problems Ronny Ramlau and Gerd Teschke Abstract. Within this chapter we present recent results on sparse recovery algorithms

More information

Maximum principle for the fractional diusion equations and its applications

Maximum principle for the fractional diusion equations and its applications Maximum principle for the fractional diusion equations and its applications Yuri Luchko Department of Mathematics, Physics, and Chemistry Beuth Technical University of Applied Sciences Berlin Berlin, Germany

More information

This article was published in an Elsevier journal. The attached copy is furnished to the author for non-commercial research and education use, including for instruction at the author s institution, sharing

More information

Inverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 2007 Technische Universiteit Eindh ove n University of Technology

Inverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 2007 Technische Universiteit Eindh ove n University of Technology Inverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 27 Introduction Fredholm first kind integral equation of convolution type in one space dimension: g(x) = 1 k(x x )f(x

More information

The oblique derivative problem for general elliptic systems in Lipschitz domains

The oblique derivative problem for general elliptic systems in Lipschitz domains M. MITREA The oblique derivative problem for general elliptic systems in Lipschitz domains Let M be a smooth, oriented, connected, compact, boundaryless manifold of real dimension m, and let T M and T

More information

Dynamical systems method (DSM) for selfadjoint operators

Dynamical systems method (DSM) for selfadjoint operators Dynamical systems method (DSM) for selfadjoint operators A.G. Ramm Mathematics Department, Kansas State University, Manhattan, KS 6656-262, USA ramm@math.ksu.edu http://www.math.ksu.edu/ ramm Abstract

More information

A Spectral Characterization of Closed Range Operators 1

A Spectral Characterization of Closed Range Operators 1 A Spectral Characterization of Closed Range Operators 1 M.THAMBAN NAIR (IIT Madras) 1 Closed Range Operators Operator equations of the form T x = y, where T : X Y is a linear operator between normed linear

More information

Slide05 Haykin Chapter 5: Radial-Basis Function Networks

Slide05 Haykin Chapter 5: Radial-Basis Function Networks Slide5 Haykin Chapter 5: Radial-Basis Function Networks CPSC 636-6 Instructor: Yoonsuck Choe Spring Learning in MLP Supervised learning in multilayer perceptrons: Recursive technique of stochastic approximation,

More information

Sparse Approximation and Variable Selection

Sparse Approximation and Variable Selection Sparse Approximation and Variable Selection Lorenzo Rosasco 9.520 Class 07 February 26, 2007 About this class Goal To introduce the problem of variable selection, discuss its connection to sparse approximation

More information

PDEs in Image Processing, Tutorials

PDEs in Image Processing, Tutorials PDEs in Image Processing, Tutorials Markus Grasmair Vienna, Winter Term 2010 2011 Direct Methods Let X be a topological space and R: X R {+ } some functional. following definitions: The mapping R is lower

More information

Optimal control problems with PDE constraints

Optimal control problems with PDE constraints Optimal control problems with PDE constraints Maya Neytcheva CIM, October 2017 General framework Unconstrained optimization problems min f (q) q x R n (real vector) and f : R n R is a smooth function.

More information

The Chi-squared Distribution of the Regularized Least Squares Functional for Regularization Parameter Estimation

The Chi-squared Distribution of the Regularized Least Squares Functional for Regularization Parameter Estimation The Chi-squared Distribution of the Regularized Least Squares Functional for Regularization Parameter Estimation Rosemary Renaut DEPARTMENT OF MATHEMATICS AND STATISTICS Prague 2008 MATHEMATICS AND STATISTICS

More information

A Hybrid LSQR Regularization Parameter Estimation Algorithm for Large Scale Problems

A Hybrid LSQR Regularization Parameter Estimation Algorithm for Large Scale Problems A Hybrid LSQR Regularization Parameter Estimation Algorithm for Large Scale Problems Rosemary Renaut Joint work with Jodi Mead and Iveta Hnetynkova SIAM Annual Meeting July 10, 2009 National Science Foundation:

More information

Existence, stability and instability for Einstein-scalar field Lichnerowicz equations by Emmanuel Hebey

Existence, stability and instability for Einstein-scalar field Lichnerowicz equations by Emmanuel Hebey Existence, stability and instability for Einstein-scalar field Lichnerowicz equations by Emmanuel Hebey Joint works with Olivier Druet and with Frank Pacard and Dan Pollack Two hours lectures IAS, October

More information

Derivatives. Differentiability problems in Banach spaces. Existence of derivatives. Sharpness of Lebesgue s result

Derivatives. Differentiability problems in Banach spaces. Existence of derivatives. Sharpness of Lebesgue s result Differentiability problems in Banach spaces David Preiss 1 Expanded notes of a talk based on a nearly finished research monograph Fréchet differentiability of Lipschitz functions and porous sets in Banach

More information

We denote the space of distributions on Ω by D ( Ω) 2.

We denote the space of distributions on Ω by D ( Ω) 2. Sep. 1 0, 008 Distributions Distributions are generalized functions. Some familiarity with the theory of distributions helps understanding of various function spaces which play important roles in the study

More information

A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator

A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator Ismael Rodrigo Bleyer Prof. Dr. Ronny Ramlau Johannes Kepler Universität - Linz Florianópolis - September, 2011.

More information

arxiv: v1 [math.na] 21 Aug 2014 Barbara Kaltenbacher

arxiv: v1 [math.na] 21 Aug 2014 Barbara Kaltenbacher ENHANCED CHOICE OF THE PARAMETERS IN AN ITERATIVELY REGULARIZED NEWTON- LANDWEBER ITERATION IN BANACH SPACE arxiv:48.526v [math.na] 2 Aug 24 Barbara Kaltenbacher Alpen-Adria-Universität Klagenfurt Universitätstrasse

More information

Regularity and approximations of generalized equations; applications in optimal control

Regularity and approximations of generalized equations; applications in optimal control SWM ORCOS Operations Research and Control Systems Regularity and approximations of generalized equations; applications in optimal control Vladimir M. Veliov (Based on joint works with A. Dontchev, M. Krastanov,

More information

Zdzislaw Brzeźniak. Department of Mathematics University of York. joint works with Mauro Mariani (Roma 1) and Gaurav Dhariwal (York)

Zdzislaw Brzeźniak. Department of Mathematics University of York. joint works with Mauro Mariani (Roma 1) and Gaurav Dhariwal (York) Navier-Stokes equations with constrained L 2 energy of the solution Zdzislaw Brzeźniak Department of Mathematics University of York joint works with Mauro Mariani (Roma 1) and Gaurav Dhariwal (York) LMS

More information

Kasetsart University Workshop. Multigrid methods: An introduction

Kasetsart University Workshop. Multigrid methods: An introduction Kasetsart University Workshop Multigrid methods: An introduction Dr. Anand Pardhanani Mathematics Department Earlham College Richmond, Indiana USA pardhan@earlham.edu A copy of these slides is available

More information

Functionalanalytic tools and nonlinear equations

Functionalanalytic tools and nonlinear equations Functionalanalytic tools and nonlinear equations Johann Baumeister Goethe University, Frankfurt, Germany Rio de Janeiro / October 2017 Outline Fréchet differentiability of the (PtS) mapping. Nonlinear

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

Numerical Approximation of Phase Field Models

Numerical Approximation of Phase Field Models Numerical Approximation of Phase Field Models Lecture 2: Allen Cahn and Cahn Hilliard Equations with Smooth Potentials Robert Nürnberg Department of Mathematics Imperial College London TUM Summer School

More information

A characterization of the two weight norm inequality for the Hilbert transform

A characterization of the two weight norm inequality for the Hilbert transform A characterization of the two weight norm inequality for the Hilbert transform Ignacio Uriarte-Tuero (joint works with Michael T. Lacey, Eric T. Sawyer, and Chun-Yen Shen) September 10, 2011 A p condition

More information

Semi-discrete equations in Banach spaces: The approximate inverse approach

Semi-discrete equations in Banach spaces: The approximate inverse approach Semi-discrete equations in Banach spaces: The approximate inverse approach Andreas Rieder Thomas Schuster Saarbrücken Frank Schöpfer Oldenburg FAKULTÄT FÜR MATHEMATIK INSTITUT FÜR ANGEWANDTE UND NUMERISCHE

More information

A fast nonstationary preconditioning strategy for ill-posed problems, with application to image deblurring

A fast nonstationary preconditioning strategy for ill-posed problems, with application to image deblurring A fast nonstationary preconditioning strategy for ill-posed problems, with application to image deblurring Marco Donatelli Dept. of Science and High Tecnology U. Insubria (Italy) Joint work with M. Hanke

More information

Measure, Integration & Real Analysis

Measure, Integration & Real Analysis v Measure, Integration & Real Analysis preliminary edition 10 August 2018 Sheldon Axler Dedicated to Paul Halmos, Don Sarason, and Allen Shields, the three mathematicians who most helped me become a mathematician.

More information

Research Article Solvability of a Class of Integral Inclusions

Research Article Solvability of a Class of Integral Inclusions Abstract and Applied Analysis Volume 212, Article ID 21327, 12 pages doi:1.1155/212/21327 Research Article Solvability of a Class of Integral Inclusions Ying Chen and Shihuang Hong Institute of Applied

More information

An Integral-type Constraint Qualification for Optimal Control Problems with State Constraints

An Integral-type Constraint Qualification for Optimal Control Problems with State Constraints An Integral-type Constraint Qualification for Optimal Control Problems with State Constraints S. Lopes, F. A. C. C. Fontes and M. d. R. de Pinho Officina Mathematica report, April 4, 27 Abstract Standard

More information

S chauder Theory. x 2. = log( x 1 + x 2 ) + 1 ( x 1 + x 2 ) 2. ( 5) x 1 + x 2 x 1 + x 2. 2 = 2 x 1. x 1 x 2. 1 x 1.

S chauder Theory. x 2. = log( x 1 + x 2 ) + 1 ( x 1 + x 2 ) 2. ( 5) x 1 + x 2 x 1 + x 2. 2 = 2 x 1. x 1 x 2. 1 x 1. Sep. 1 9 Intuitively, the solution u to the Poisson equation S chauder Theory u = f 1 should have better regularity than the right hand side f. In particular one expects u to be twice more differentiable

More information

CS168: The Modern Algorithmic Toolbox Lecture #8: How PCA Works

CS168: The Modern Algorithmic Toolbox Lecture #8: How PCA Works CS68: The Modern Algorithmic Toolbox Lecture #8: How PCA Works Tim Roughgarden & Gregory Valiant April 20, 206 Introduction Last lecture introduced the idea of principal components analysis (PCA). The

More information

Optimal control as a regularization method for. an ill-posed problem.

Optimal control as a regularization method for. an ill-posed problem. Optimal control as a regularization method for ill-posed problems Stefan Kindermann, Carmeliza Navasca Department of Mathematics University of California Los Angeles, CA 995-1555 USA {kinder,navasca}@math.ucla.edu

More information

The Dirichlet-to-Neumann operator

The Dirichlet-to-Neumann operator Lecture 8 The Dirichlet-to-Neumann operator The Dirichlet-to-Neumann operator plays an important role in the theory of inverse problems. In fact, from measurements of electrical currents at the surface

More information

Quantum field theory and the Ricci flow

Quantum field theory and the Ricci flow Quantum field theory and the Ricci flow Daniel Friedan Department of Physics & Astronomy Rutgers the State University of New Jersey, USA Natural Science Institute, University of Iceland Mathematics Colloquium

More information

Upper and lower solution method for fourth-order four-point boundary value problems

Upper and lower solution method for fourth-order four-point boundary value problems Journal of Computational and Applied Mathematics 196 (26) 387 393 www.elsevier.com/locate/cam Upper and lower solution method for fourth-order four-point boundary value problems Qin Zhang a, Shihua Chen

More information