Unbiased Risk Estimation as Parameter Choice Rule for Filter-based Regularization Methods


1 Unbiased Risk Estimation as Parameter Choice Rule for Filter-based Regularization Methods
Frank Werner¹
Statistical Inverse Problems in Biophysics Group, Max Planck Institute for Biophysical Chemistry, Göttingen,
and Felix Bernstein Institute for Mathematical Statistics in the Biosciences, University of Göttingen
Chemnitz Symposium on Inverse Problems 2017 (on Tour in Rio)
¹ Joint work with Housen Li

2 Outline
1 Introduction
2 A posteriori parameter choice methods
3 Error analysis
4 Simulations
5 Conclusion

3 Introduction

4 Introduction: Statistical inverse problems
Setting: $X, Y$ Hilbert spaces, $T : X \to Y$ bounded, linear
Task: Recover the unknown $f \in X$ from noisy measurements
$$Y = Tf + \sigma\xi$$
Noise: $\xi$ is a standard Gaussian white noise process, $\sigma > 0$ the noise level.
The model has to be understood in a weak sense:
$$Y_g := \langle Tf, g\rangle_Y + \sigma\,\langle \xi, g\rangle \qquad \text{for all } g \in Y,$$
with $\langle \xi, g\rangle \sim \mathcal N\big(0, \|g\|_Y^2\big)$ and $\mathbb E\big[\langle \xi, g_1\rangle \langle \xi, g_2\rangle\big] = \langle g_1, g_2\rangle_Y$.
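To see the model concretely, here is a minimal sketch (ours, not from the talk): testing against the first $n$ singular functions reduces $Y = Tf + \sigma\xi$ to the sequence-space model $Y_k = \sigma_k f_k + \sigma\xi_k$ with i.i.d. standard Gaussian $\xi_k$. The singular values and coefficients below are illustrative assumptions.

```python
import numpy as np

# Sketch: discretized sequence-space version of Y = Tf + sigma*xi,
# obtained by testing against the first n singular functions of T.
rng = np.random.default_rng(0)
n = 300
k = np.arange(1, n + 1)
s = k**-1.0            # assumed singular values sigma_k ~ 1/k (illustrative)
f = k**-1.5            # assumed true coefficients f_k (illustrative)
sigma = 1e-2           # noise level
Y = s * f + sigma * rng.standard_normal(n)   # one noisy observation Y_k
```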

5 Introduction: Statistical inverse problems
Assumptions:
$T$ is injective and Hilbert-Schmidt ($\sum_k \sigma_k^2 < \infty$, $\sigma_k$ the singular values)
$\sigma$ is known exactly
As the problem is ill-posed, regularization is needed. Consider filter-based regularization schemes
$$\hat f_\alpha := q_\alpha(T^*T)\,T^*Y, \qquad \alpha > 0.$$
Aim: An a posteriori choice of $\alpha$ such that the rate of convergence (as $\sigma \to 0$) is order optimal (no loss of log-factors).
Note: Heuristic parameter choice rules might work here as well, as the Bakushinskĭı veto does not hold in our setting (Becker 11).
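In the singular basis, $q_\alpha(T^*T)\,T^*$ acts componentwise as $q_\alpha(\sigma_k^2)\,\sigma_k$. A minimal sketch of such a filter estimator, using the Tikhonov filter $q_\alpha(\lambda) = 1/(\alpha + \lambda)$ as one concrete choice (helper names are ours):

```python
import numpy as np

def tikhonov_filter(lam, alpha):
    # Tikhonov filter q_alpha(lambda) = 1 / (alpha + lambda)
    return 1.0 / (alpha + lam)

def filter_estimator(Y, s, alpha, q=tikhonov_filter):
    # f_hat_alpha = q_alpha(T*T) T* Y acts componentwise in the singular
    # basis: f_hat_k = q_alpha(s_k^2) * s_k * Y_k
    return q(s**2, alpha) * s * Y
```

Other filters (spectral cut-off with $q_\alpha(\lambda) = 1/\lambda$ for $\lambda \ge \alpha$ and $0$ otherwise, Landweber, etc.) can be swapped in the same way.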

6 A posteriori parameter choice methods

7 A posteriori parameter choice methods: The discrepancy principle
For deterministic data: $\alpha_{DP} = \max\big\{\alpha > 0 \;\big|\; \|T\hat f_\alpha - Y\|_Y \le \tau\sigma\big\}$.
But here $Y \notin Y$! Either pre-smoothing ($Y \mapsto Z := T^*Y \in X$) or discretization: $Y \in \mathbb R^n$, $\xi \sim \mathcal N_n(0, I_n)$, and choose
$$\alpha_{DP} = \max\left\{\alpha > 0 \;\middle|\; \|T\hat f_\alpha - Y\|_2 \le \tau\sigma\sqrt n\right\}.$$
Pros:
Easy to implement
Works for all $q_\alpha$
Order-optimal convergence rates
Cons:
How to choose $\tau \ge 1$?
Only meaningful after discretization
Early saturation
Davies & Anderssen 86, Lukas 95, Blanchard, Hoffmann & Reiß 16
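A sketch of the discretized discrepancy principle under the same sequence-space conventions as above, scanning a candidate grid from large to small $\alpha$ (Tikhonov filter assumed for concreteness):

```python
import numpy as np

def discrepancy_principle(Y, s, alphas, sigma, tau=1.5):
    # Largest alpha on the grid with ||T f_hat_alpha - Y||_2 <= tau*sigma*sqrt(n).
    n = len(Y)
    for alpha in sorted(alphas, reverse=True):
        f_hat = s * Y / (alpha + s**2)           # Tikhonov filter estimator
        if np.linalg.norm(s * f_hat - Y) <= tau * sigma * np.sqrt(n):
            return alpha
    return min(alphas)                           # fall back to the smallest alpha
```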

8 A posteriori parameter choice methods: The quasi-optimality criterion
Neubauer 08 (with $r_\alpha(\lambda) = 1 - \lambda q_\alpha(\lambda)$):
$$\alpha_{QO} = \operatorname*{argmin}_{\alpha > 0} \|r_\alpha(T^*T)\,\hat f_\alpha\|_X$$
But for spectral cut-off, $r_\alpha(T^*T)\,\hat f_\alpha = 0$ for all $\alpha > 0$.
Alternative formulation for Tikhonov regularization if candidates $\alpha_1 < \ldots < \alpha_m$ are given:
$$n_{QO} = \operatorname*{argmin}_{1 \le n \le m-1} \|\hat f_{\alpha_n} - \hat f_{\alpha_{n+1}}\|_X, \qquad \alpha_{QO} := \alpha_{n_{QO}}.$$
Pros:
Easy to implement, very fast
No knowledge of $\sigma$ necessary
Order-optimal convergence rates in mildly ill-posed situations
Cons:
Only for special $q_\alpha$
Additional assumptions on the noise and/or $f$ necessary
Performance unclear in severely ill-posed situations
Bauer & Kindermann 08, Bauer & Reiß 08, Bauer & Kindermann 09
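The grid formulation translates directly into code; a sketch (ours), again with the Tikhonov filter:

```python
import numpy as np

def quasi_optimality(Y, s, alphas):
    # alpha_QO minimizes ||f_hat_{alpha_n} - f_hat_{alpha_{n+1}}||_X over a
    # grid alpha_1 < ... < alpha_m; note that sigma is not needed.
    alphas = np.sort(alphas)
    f_hats = [s * Y / (a + s**2) for a in alphas]
    diffs = [np.linalg.norm(f_hats[i + 1] - f_hats[i])
             for i in range(len(alphas) - 1)]
    return alphas[int(np.argmin(diffs))]
```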

9 A posteriori parameter choice methods: The Lepskĭı-type balancing principle
For given $\alpha$, the standard deviation of $\hat f_\alpha$ can be bounded by
$$\operatorname{std}(\alpha) := \sigma\sqrt{\operatorname{Tr}\big(q_\alpha(T^*T)^2\,T^*T\big)}.$$
If candidates $\alpha_1 < \ldots < \alpha_m$ are given:
$$n_{LEP} = \max\left\{ j \;\middle|\; \|\hat f_{\alpha_j} - \hat f_{\alpha_k}\|_X \le 4\kappa\operatorname{std}(\alpha_k) \text{ for all } 1 \le k \le j \right\} \qquad \text{and} \qquad \alpha_{LEP} = \alpha_{n_{LEP}}.$$
Pros:
Works for all $q_\alpha$
Robust in practice
Convergence rates (mildly / severely ill-posed)
Cons:
Computationally expensive
$\kappa \ge 1$ depends on the decay of the $\sigma_k$
Loss of a log factor compared to the order-optimal rate
Bauer & Pereverzev 05, Mathé 06, Mathé & Pereverzev 06
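A sketch of the balancing principle (ours); $\operatorname{std}(\alpha)$ is computed from the singular values, and the pairwise comparisons over the grid are what makes the rule comparatively expensive:

```python
import numpy as np

def lepskii(Y, s, alphas, sigma, kappa=1.0):
    # n_LEP = max{ j : ||f_hat_j - f_hat_k|| <= 4*kappa*std(alpha_k), k <= j },
    # with std(alpha) = sigma * sqrt(sum_k q_alpha(s_k^2)^2 * s_k^2).
    alphas = np.sort(alphas)
    f_hats = [s * Y / (a + s**2) for a in alphas]                  # Tikhonov
    stds = [sigma * np.sqrt(np.sum((s / (a + s**2)) ** 2)) for a in alphas]
    n_lep = 0
    for j in range(len(alphas)):
        if all(np.linalg.norm(f_hats[j] - f_hats[k]) <= 4 * kappa * stds[k]
               for k in range(j + 1)):
            n_lep = j
    return alphas[n_lep]
```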

10 A posteriori parameter choice methods: Unbiased risk estimation
Dating back to ideas of Mallows 73 and Stein 81: let
$$\hat r(\alpha, Y) := \|T\hat f_\alpha\|_Y^2 - 2\langle T\hat f_\alpha, Y\rangle + 2\sigma^2\operatorname{Tr}\big(T^*T\,q_\alpha(T^*T)\big)$$
and choose
$$\alpha_{URE} = \operatorname*{argmin}_{\alpha > 0} \hat r(\alpha, Y).$$
Note that $\mathbb E[\hat r(\alpha, Y)] = \mathbb E\big[\|T\hat f_\alpha - Tf\|_Y^2\big] - c$ with $c$ independent of $\alpha$ (unbiased risk estimation).
For spectral cut-off and in mildly ill-posed situations, this gives order-optimal rates (Chernousova & Golubev 14). Besides this, only optimality in the image space is known (Li 87, Lukas 93, Kneip 94). Distributional behavior of $\alpha_{URE}$: Lucka et al 17.
In general: Pros? Cons? Convergence rates? Order optimality?
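A sketch of the risk estimator and its minimizer over a grid (ours); both $T\hat f_\alpha$ and the trace term are diagonal in the singular basis:

```python
import numpy as np

def ure_rule(Y, s, alphas, sigma):
    # Minimize r_hat(alpha,Y) = ||T f_hat||^2 - 2<T f_hat, Y>
    #                           + 2*sigma^2*Tr(T*T q_alpha(T*T))
    best_alpha, best_val = None, np.inf
    for alpha in np.sort(alphas):
        q = 1.0 / (alpha + s**2)        # Tikhonov filter, for concreteness
        Tf_hat = s**2 * q * Y           # T f_hat_alpha in the singular basis
        val = (np.sum(Tf_hat**2) - 2.0 * np.dot(Tf_hat, Y)
               + 2.0 * sigma**2 * np.sum(s**2 * q))
        if val < best_val:
            best_alpha, best_val = alpha, val
    return best_alpha
```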

11 Error analysis

12 Error analysis: A priori parameter choice
Assumptions:
Filter: $\alpha\,|q_\alpha(\lambda)| \le C_q$ and $\lambda\,|q_\alpha(\lambda)| \le C_q$.
Source condition: $W_\phi := \big\{ f \in X \;\big|\; f = \phi(T^*T)\,w,\ \|w\|_X \le C \big\}$.
Note: for any $f \in X$ there exists $\phi$ such that $f \in W_\phi$.
Qualification condition: The function $\phi$ is a qualification of $q_\alpha$ if
$$\sup_{\lambda \in [0, \|T^*T\|]} \phi(\lambda)\,\big|1 - \lambda q_\alpha(\lambda)\big| \le C_\phi\,\phi(\alpha).$$

13 Error analysis: A priori parameter choice
Assumptions:
Let $\Sigma(x) := \#\{k \mid \sigma_k^2 \ge x\}$ be the counting function of the singular values of $T$.
Approximation by a smooth surrogate: There exist $S \in C^2$, $\alpha_1 \in (0, \|T^*T\|]$ and $C_S \in (0, 2)$ such that
(1) $\lim_{\alpha \to 0} S(\alpha)/\Sigma(\alpha) = 1$ (approximation)
(2) $S' < 0$ (decreasing)
(3) $\lim_{\alpha \to \infty} S(\alpha) = \lim_{\alpha \to \infty} S'(\alpha) = 0$ (behavior above $\sigma_1^2$)
(4) $\lim_{\alpha \to 0} \alpha S(\alpha) = 0$ (Hilbert-Schmidt)
(5) $\alpha S'(\alpha)$ is integrable on $(0, \alpha_1]$
(6) $-\alpha S'(\alpha) \le C_S\,S(\alpha)$ on $(0, \alpha_1]$
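As a worked example (ours, under the reconstruction of condition (6) above): in the mildly ill-posed case $\sigma_k^2 \asymp k^{-a}$ the counting function behaves like $x^{-1/a}$, and the natural surrogate satisfies the conditions with $C_S = 1/a$:

```latex
% For sigma_k^2 = k^{-a} (a > 1): Sigma(x) = #{k : k^{-a} >= x} ~ x^{-1/a},
% so take S(alpha) = alpha^{-1/a} near 0 (smoothly cut off above sigma_1^2).
S(\alpha) = \alpha^{-1/a}, \qquad
-\alpha S'(\alpha) = \tfrac{1}{a}\,\alpha^{-1/a} = \tfrac{1}{a}\,S(\alpha),
\qquad \tfrac{1}{a} < 2,
\qquad \alpha S(\alpha) = \alpha^{1 - 1/a} \to 0 \ (\alpha \to 0).
```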

14 Error analysis: A priori parameter choice
A priori convergence rates (Bissantz, Hohage, Munk, Ruymgaart 07):
Let $\alpha_*$ be chosen such that $\alpha_*\,\phi(\alpha_*)^2 = \sigma^2 S(\alpha_*)$.
(i) If $\phi$ is a qualification of $q_\alpha$, then
$$\sup_{f \in W_\phi} \mathbb E\Big[\|\hat f_{\alpha_*} - f\|_X^2\Big] \lesssim \phi(\alpha_*)^2 = \frac{\sigma^2 S(\alpha_*)}{\alpha_*} \qquad \text{as } \sigma \to 0.$$
(ii) If $\lambda \mapsto \sqrt\lambda\,\phi(\lambda)$ is a qualification of the filter $q_\alpha$, then
$$\sup_{f \in W_\phi} \mathbb E\Big[\|T\hat f_{\alpha_*} - Tf\|_Y^2\Big] \lesssim \alpha_*\,\phi(\alpha_*)^2 = \sigma^2 S(\alpha_*) \qquad \text{as } \sigma \to 0.$$

15 Error analysis: A priori parameter choice
Mildly ill-posed situation, example: Assume $\sigma_k^2 \asymp k^{-a}$ and $W_b := \big\{ f \in X : \sum_{k=1}^\infty k^b f_k^2 \le 1 \big\}$ with $a > 1$, $b > 0$.
Bissantz, Hohage, Munk, Ruymgaart 07: Let $\alpha_* \asymp (\sigma^2)^{a/(a+b+1)}$.
If $\phi(\lambda) = \lambda^{b/2a}$ is a qualification of $q_\alpha$, then
$$\sup_{f \in W_b} \mathbb E\Big[\|\hat f_{\alpha_*} - f\|_X^2\Big] \lesssim (\sigma^2)^{\frac{b}{a+b+1}}.$$
If $\phi(\lambda) = \lambda^{b/2a + 1/2}$ is a qualification of $q_\alpha$, then
$$\sup_{f \in W_b} \mathbb E\Big[\|T\hat f_{\alpha_*} - Tf\|_Y^2\Big] \lesssim (\sigma^2)^{\frac{a+b}{a+b+1}}.$$
These rates are order optimal over $W_b$.
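For completeness, a short check (ours) of the balance equation behind these rates, with $S(\alpha) \asymp \alpha^{-1/a}$ and $\phi(\lambda) = \lambda^{b/2a}$:

```latex
\alpha_*\,\phi(\alpha_*)^2 = \sigma^2 S(\alpha_*)
\;\Longleftrightarrow\;
\alpha_*^{1 + b/a} = \sigma^2\,\alpha_*^{-1/a}
\;\Longleftrightarrow\;
\alpha_* = (\sigma^2)^{\frac{a}{a+b+1}},
\qquad
\phi(\alpha_*)^2 = \alpha_*^{b/a} = (\sigma^2)^{\frac{b}{a+b+1}}.
```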

16 Error analysis: Unbiased risk estimation as parameter choice
Unbiased risk estimation vs. the oracle: Recall that
$$\hat r(\alpha, Y) := \|T\hat f_\alpha\|_Y^2 - 2\langle T\hat f_\alpha, Y\rangle + 2\sigma^2\operatorname{Tr}\big(T^*T\,q_\alpha(T^*T)\big)$$
is (up to a constant independent of $\alpha$) an unbiased estimator for
$$r(\alpha, f) := \mathbb E\Big[\|T\hat f_\alpha - Tf\|_Y^2\Big].$$
In the following, we will compare
$$\alpha_{URE} = \operatorname*{argmin}_{\alpha > 0} \hat r(\alpha, Y) \qquad \text{and} \qquad \alpha_o = \operatorname*{argmin}_{\alpha > 0} r(\alpha, f).$$

17 Error analysis: Unbiased risk estimation as parameter choice
Additional assumptions:
(a) $\alpha \mapsto \{q_\alpha(\sigma_k^2)\}_{k=1}^\infty$ is strictly monotone and continuous as a mapping $\mathbb R \to \ell^2$.
(b) As $\alpha \to 0$, $\alpha\,q_\alpha(\alpha) \to c_q > 0$.
(c) For $\alpha > 0$, the function $\lambda \mapsto \lambda q_\alpha(\lambda)$ is non-decreasing.
These are satisfied by Tikhonov, spectral cut-off, Landweber, iterated Tikhonov and Showalter regularization, under proper parametrization. E.g. Tikhonov with the re-parametrization $\alpha \mapsto \sqrt\alpha$ (i.e. $q_\alpha(\lambda) = 1/(\sqrt\alpha + \lambda)$) violates (b).
(d) $\psi(\lambda) := \lambda\,\phi^{-1}(\sqrt\lambda)$ is convex.
(e) There exists a constant $C_q' > c_q^{-2}$ such that $\int \Psi(C_q'\,x)\,\exp(-x/C)\,dx < \infty$ with $\Psi(x) := x^2/\big(S^{-1}(x)\big)^2$, for some explicitly known $C > 0$.
(d) can always be satisfied by weakening $\phi$; (e) restricts the decay of the singular values.

18 Error analysis: Unbiased risk estimation as parameter choice
Oracle inequality (Li & W. 16): There are positive constants $C_i$, $i = 1, \ldots, 6$, such that for all $f \in W_\phi$ it holds
$$\mathbb E\Big[\|\hat f_{\alpha_{URE}} - f\|_X^2\Big] \le C_1\,\psi^{-1}\big(C_2\,r(\alpha_o, f) + C_3\,\sigma^2\big) + C_4\,\frac{r(\alpha_o, f) + \sigma\sqrt{r(\alpha_o, f)} + C_5\,\sigma^2}{S^{-1}\big(r(\alpha_o, f)/(C_6\,\sigma^2)\big)}$$
as $\sigma \to 0$.
This gives a comparison of the strong risk under $\alpha_{URE}$ with the weak risk under the oracle $\alpha_o$.

19 Error analysis: Unbiased risk estimation as parameter choice
Convergence rates (Li & W. 16): If also $\lambda \mapsto \sqrt\lambda\,\phi(\lambda)$ is a qualification of the filter $q_\alpha$, then for $\alpha_*$ with $\alpha_*\,\phi(\alpha_*)^2 = \sigma^2 S(\alpha_*)$ there are $C_1, C_2, C_3 > 0$ such that
$$\sup_{f \in W_\phi} \mathbb E\Big[\|\hat f_{\alpha_{URE}} - f\|_X^2\Big] \le C_1\,\frac{\sigma^2 S(\alpha_*)}{\alpha_*} + C_2\,\frac{\sigma^2 S(\alpha_*)}{S^{-1}\big(C_3\,S(\alpha_*)\big)}$$
as $\sigma \to 0$.
If there is $C_4 > 0$ such that $S(C_4\,x) \ge C_3\,S(x)$, then this equals the a priori rate
$$\sup_{f \in W_\phi} \mathbb E\Big[\|\hat f_{\alpha_{URE}} - f\|_X^2\Big] \lesssim \phi(\alpha_*)^2 = \frac{\sigma^2 S(\alpha_*)}{\alpha_*}.$$

20 Error analysis: Unbiased risk estimation as parameter choice
Order optimality in mildly ill-posed situations: Assume $\sigma_k^2 \asymp k^{-a}$ and $W_b := \big\{ f \in X : \sum_{k=1}^\infty k^b f_k^2 \le 1 \big\}$ with $a > 1$, $b > 0$.
Oracle inequality: For all $f \in W_b$:
$$\mathbb E\Big[\|\hat f_{\alpha_{URE}} - f\|_X^2\Big] \lesssim r(\alpha_o, f)^{\frac{b}{a+b}} + \sigma^{-2a}\,r(\alpha_o, f)^{1+a} + \sigma^{1-2a}\,r(\alpha_o, f)^{\frac{1+2a}{2}}.$$
Convergence rate: Thus, if $\lambda \mapsto \lambda^{b/2a + 1/2}$ is a qualification of $q_\alpha$, then
$$\sup_{f \in W_b} \mathbb E\Big[\|\hat f_{\alpha_{URE}} - f\|_X^2\Big] \lesssim \sigma^{\frac{2b}{a+b+1}},$$
which is order-optimal.

21 Error analysis: Unbiased risk estimation as parameter choice
Unbiased risk estimation, pros and cons:
$$\alpha_{URE} = \operatorname*{argmin}_{\alpha > 0} \Big[\|T\hat f_\alpha\|_Y^2 - 2\langle T\hat f_\alpha, Y\rangle + 2\sigma^2\operatorname{Tr}\big(T^*T\,q_\alpha(T^*T)\big)\Big]$$
Pros:
Works for many $q_\alpha$
Order-optimal convergence rates in mildly ill-posed situations
No loss of log factor
No tuning parameter
Cons:
Computationally expensive
Early saturation
Performance in severely ill-posed situations unclear
H. Li and F. Werner (2017). Empirical risk minimization as parameter choice rule for general linear regularization methods. Submitted, arXiv:

22 Simulations

23 Simulations: Rates of convergence
A mildly ill-posed situation, antiderivative: Let $T : L^2([0,1]) \to L^2([0,1])$ be given by
$$(Tf)(x) = \int_0^1 \min\{x(1-y),\, y(1-x)\}\,f(y)\,dy.$$
As $(Tf)'' = -f$, the singular values $\sigma_k$ of $T$ satisfy $\sigma_k \asymp k^{-2}$.
We choose
$$f(x) = \begin{cases} x & \text{if } 0 \le x \le \tfrac12, \\ 1 - x & \text{if } \tfrac12 \le x \le 1, \end{cases}$$
with Fourier coefficients $f_k = \frac{(-1)^{k-1}}{4\pi^3 k^2}$, so the optimal rate is $O\big(\sigma^{3/4 - \varepsilon}\big)$ for any $\varepsilon > 0$.
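A quick numerical sanity check (ours): discretizing the kernel on a uniform grid and comparing the singular values of the resulting matrix with the analytic values $(\pi k)^{-2}$ of this operator:

```python
import numpy as np

# Discretize (Tf)(x) = int_0^1 min{x(1-y), y(1-x)} f(y) dy by the midpoint
# rule and compare singular values with the analytic sigma_k = (pi*k)^(-2).
m = 500
x = (np.arange(m) + 0.5) / m
K = np.minimum(np.outer(x, 1 - x), np.outer(1 - x, x))  # kernel matrix k(x_i, x_j)
sv = np.linalg.svd(K / m, compute_uv=False)             # 1/m = quadrature weight
k = np.arange(1, 11)
print(sv[:10] * (np.pi * k) ** 2)                       # ratios, close to 1
```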

24 Simulations: Rates of convergence
A mildly ill-posed situation, Tikhonov regularization:
[Figure: Empirical MISE and empirical variance of $\|\hat f - f\|_2^2$ over $10^4$ repetitions, as functions of $\sigma$, for $\alpha_o$, $\alpha_{DP}$, $\alpha_{QO}$, $\alpha_{LEP}$ and $\alpha_{URE}$.]

25 Simulations: Rates of convergence
A severely ill-posed situation, satellite gradiometry: Let $R > 1$ and $S \subset \mathbb R^2$ the unit sphere. Given $g = \frac{\partial^2 u}{\partial r^2}$ on $RS$, find $f$ in
$$\Delta u = 0 \text{ in } \mathbb R^d \setminus B, \qquad u = f \text{ on } S, \qquad u(x) = O\big(\|x\|_2^{-1/2}\big) \text{ as } \|x\|_2 \to \infty.$$
The corresponding $T : L^2(S, \mu) \to L^2(RS, \mu)$ has singular values $\sigma_k = \frac{k(k+1)}{R^{k+2}}$.
We choose $f(x) = \frac{\pi}{2} - |x|$, $x \in [-\pi, \pi]$.
The optimal rate of convergence is $O\big((-\log\sigma)^{-3 + \varepsilon}\big)$ for any $\varepsilon > 0$.

26 Simulations: Rates of convergence
A severely ill-posed situation, Tikhonov regularization:
[Figure: Empirical MISE and empirical variance of $\|\hat f - f\|_2^2$ over $10^4$ repetitions, as functions of $(-\log\sigma)$, for $\alpha_o$, $\alpha_{DP}$, $\alpha_{QO}$, $\alpha_{LEP}$ and $\alpha_{URE}$.]

27 Simulations: Rates of convergence
A severely ill-posed situation, backwards heat equation: Let $\bar t > 0$. Given $g = u(\cdot, \bar t)$, find $f$ in
$$\frac{\partial u}{\partial t}(x, t) = \frac{\partial^2 u}{\partial x^2}(x, t) \text{ in } (-\pi, \pi] \times (0, \bar t],$$
$$u(x, 0) = f(x) \text{ on } [-\pi, \pi], \qquad u(-\pi, t) = u(\pi, t) \text{ for } t \in (0, \bar t].$$
The corresponding $T : L^2([-\pi, \pi]) \to L^2([-\pi, \pi])$ has singular values $\sigma_k = \exp(-k^2\,\bar t)$.
We choose $f(x) = \frac{\pi}{2} - |x|$, $x \in [-\pi, \pi]$.
The optimal rate of convergence is $O\big((-\log\sigma)^{-3/2 + \varepsilon}\big)$ for any $\varepsilon > 0$.

28 Simulations: Rates of convergence
A severely ill-posed situation, Tikhonov regularization:
[Figure: Empirical MISE and empirical variance of $\|\hat f - f\|_2^2$ over $10^4$ repetitions, as functions of $(-\log\sigma)$, for $\alpha_o$, $\alpha_{DP}$, $\alpha_{QO}$, $\alpha_{LEP}$ and $\alpha_{URE}$.]

29 Simulations: Efficiency simulations
Measure the efficiency of a parameter choice rule $\alpha$ by the fraction
$$R := \frac{\mathbb E\big[\|\hat f_{\alpha_o} - f\|_X^2\big]}{\mathbb E\big[\|\hat f_\alpha - f\|_X^2\big]}.$$
Numerical approximations of these as functions of $\sigma$, with different parameters $a, \nu > 0$, in the following setting:
$\sigma_k = \exp(-ak)$
$f_k = \pm k^{-\nu}\,\big(1 + \mathcal N(0, 0.1^2)\big)$
$Y_k = \sigma_k f_k + \mathcal N(0, \sigma^2)$, $k = 1, \ldots, 300$, $10^4$ repetitions
Tikhonov regularization
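A reduced sketch of this experiment (ours; far fewer repetitions, and the oracle simplified to the minimizer of the realized strong error on the grid):

```python
import numpy as np

# Sequence-space setting of the talk: sigma_k = exp(-a*k), Tikhonov filter.
rng = np.random.default_rng(1)
a, nu, sigma, n, reps = 0.3, 1.0, 1e-3, 300, 200     # reps reduced from 10^4
k = np.arange(1, n + 1)
s = np.exp(-a * k)
alphas = np.logspace(-12, 0, 80)

err_oracle, err_ure = [], []
for _ in range(reps):
    f = np.sign(rng.standard_normal(n)) * k**-nu * (1 + 0.1 * rng.standard_normal(n))
    Y = s * f + sigma * rng.standard_normal(n)
    errs, ures = [], []
    for alpha in alphas:
        q = 1.0 / (alpha + s**2)
        f_hat = q * s * Y
        Tf_hat = s * f_hat
        errs.append(np.sum((f_hat - f) ** 2))        # realized strong error
        ures.append(np.sum(Tf_hat**2) - 2 * np.dot(Tf_hat, Y)
                    + 2 * sigma**2 * np.sum(s**2 * q))
    err_oracle.append(min(errs))
    err_ure.append(errs[int(np.argmin(ures))])

print("R_URE ~", np.mean(err_oracle) / np.mean(err_ure))   # efficiency estimate
```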

30 Simulations: Efficiency simulations, results
[Figure: $R_{QO}$, $R_{DP}$, $R_{LEP}$, and $R_{URE}$ as functions of $\sigma$; panels $a = 0.2$, $\nu = 1$ and $a = 0.3$.]

31 Simulations: Efficiency simulations, results
[Figure: $R_{QO}$, $R_{DP}$, $R_{LEP}$, and $R_{URE}$ as functions of $\sigma$; panels $a = 0.4$, $\nu = 1$ and $a = 0.6$.]

32 Simulations: Efficiency simulations, results
[Figure: $R_{QO}$, $R_{DP}$, $R_{LEP}$, and $R_{URE}$ as functions of $\sigma$; both panels with $a = 0.3$.]

33 Conclusion

34 Conclusion: Presented results
Analysis of a parameter choice based on unbiased risk estimation:
oracle inequality
convergence rates
order optimality in mildly ill-posed situations
Numerical comparison:
in this specific setting, quasi-optimality outperforms all other methods
unbiased risk estimation has higher variance (by design)
the simulations suggest order optimality of quasi-optimality also in severely ill-posed situations; this remains unclear for unbiased risk estimation
Thank you for your attention!
