Statistical Inverse Problems and Instrumental Variables


1 Statistical Inverse Problems and Instrumental Variables

Thorsten Hohage
Institut für Numerische und Angewandte Mathematik, University of Göttingen

Workshop on Inverse and Partial Information Problems: Methodology and Applications, RICAM, Linz

2 Collaborators

Frank Bauer (Linz)
Laurent Cavalier (Marseille)
Jean-Pierre Florens (Toulouse)
Jan Johannes (Heidelberg)
Enno Mammen (Mannheim)
Axel Munk (Göttingen)

3 outline

1 A Newton method for nonlinear statistical inverse problems
2 Oracle inequalities
3 Nonparametric instrumental variables and perturbed operators

4 statistical inverse problem

problem: Let X, Y be separable Hilbert spaces and F : D(F) ⊂ X → Y a Fréchet differentiable, one-to-one operator. Estimate a† given indirect observations in the form of a random process

Y = F(a†) + σξ + δζ.

F⁻¹ is not continuous!

ξ: normalized stochastic noise in a Hilbert space, satisfying Eξ = 0 and ‖Cov ξ‖ ≤ 1
σ ≥ 0: stochastic noise level
ζ ∈ Y: normalized deterministic noise, ‖ζ‖ = 1
δ ≥ 0: deterministic noise level

5 the algorithm

The Newton equation

F'[â_k](â_{k+1} − â_k) = Y − F(â_k),  k = 1, 2, ...

is regularized in each step by Tikhonov regularization with initial guess a_0 and regularization parameters α_k = α_0 q^k, q ∈ (0, 1):

â_{k+1} := argmin_{a ∈ X} ‖F'[â_k](a − â_k) + F(â_k) − Y‖²_Y + α_{k+1} ‖a − a_0‖²_X
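The regularized Newton step above can be sketched in a few lines of Python. The two-dimensional operator F, the starting point and the parameter values below are purely illustrative stand-ins (a finite-dimensional toy, not the talk's examples); each step solves the Tikhonov normal equations for the linearized problem.

```python
import math

# Toy stand-in for the abstract operator F: R^2 -> R^2 (illustrative only).
def F(a):
    return [a[0] + a[1], a[0] * a[1]]

def jacobian(a):  # Frechet derivative F'[a]
    return [[1.0, 1.0], [a[1], a[0]]]

def solve2x2(A, b):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (-A[1][0] * b[0] + A[0][0] * b[1]) / det]

def irgnm(Y, a0, alpha0=1.0, q=0.6, steps=25):
    """Iteratively regularized Gauss-Newton method:
    a_{k+1} = argmin ||F'[a_k](a - a_k) + F(a_k) - Y||^2 + alpha_{k+1}||a - a0||^2."""
    a = a0[:]
    for k in range(steps):
        alpha = alpha0 * q ** (k + 1)
        J = jacobian(a)
        Fa = F(a)
        r = [Y[i] - Fa[i] for i in range(2)]
        # normal equations: (J^T J + alpha I) h = J^T r + alpha (a0 - a), then a <- a + h
        JtJ = [[sum(J[i][p] * J[i][j] for i in range(2)) for j in range(2)] for p in range(2)]
        A = [[JtJ[p][j] + (alpha if p == j else 0.0) for j in range(2)] for p in range(2)]
        b = [sum(J[i][p] * r[i] for i in range(2)) + alpha * (a0[p] - a[p]) for p in range(2)]
        h = solve2x2(A, b)
        a = [a[p] + h[p] for p in range(2)]
    return a

a_true = [2.0, 1.0]
a_hat = irgnm(F(a_true), [1.5, 0.5])   # exact data (sigma = delta = 0)
err = math.hypot(a_hat[0] - a_true[0], a_hat[1] - a_true[1])
```

With exact data and α_k → 0 the iterates approach a†; the remaining error is the vanishing regularization bias toward a_0.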

6 What is this for linear problems?

If F = T is linear, the iteration formula simplifies to

â_{k+1} := argmin_{a ∈ X} ‖Ta − Y‖²_Y + α_{k+1} ‖a − a_0‖²_X.

The iteration steps decouple in the sense that none of the previous iterates appears in the formula for â_{k+1}. Bias and variance must be balanced by a proper choice of the stopping index.

7 What if â_k ∉ D(F) for some k?

Since typically D(F) ⊊ X and the stochastic noise σξ can be arbitrarily large, in each Newton step there is a positive probability that â_k ∉ D(F).

Emergency stop: If this happens, we stop the Newton iteration and return a_0 as estimator of a†. We will have to show that the probability that such an emergency stop is necessary tends to 0 rapidly as the stochastic noise level σ tends to 0.

8 Can we improve on the qualification of Tikhonov regularization?

Replace Tikhonov regularization by iterated Tikhonov regularization:

â^(0)_{k+1} := a_0
â^(j)_{k+1} := argmin_{a ∈ X} { ‖F'[â_k](a − â_k) + F(â_k) − Y‖²_Y + α_{k+1} ‖a − â^(j−1)_{k+1}‖²_X },  j = 1, ..., m
â_{k+1} := â^(m)_{k+1}

closed formula:

â_{k+1} := a_0 + g_{α_{k+1}}( F'[â_k]* F'[â_k] ) F'[â_k]* ( Y − F(â_k) + F'[â_k](â_k − a_0) )

r_α(λ) := (α/(α+λ))^m,  g_α(λ) := (1 − r_α(λ))/λ
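The equivalence between the m-fold iteration and the filter function g_α can be checked spectrally: along one singular direction with singular value s (so λ = s² is the eigenvalue of T*T), and with a_0 and the linearization offset dropped for clarity, the iterated minimization reduces to a scalar recursion. A minimal sketch (all numerical values are arbitrary test inputs):

```python
# Along a singular direction: data component y, singular value s, lambda = s^2.
# m-fold iterated Tikhonov starting from 0 should equal g_alpha(lambda) * s * y
# with g_alpha(lambda) = (1 - (alpha/(alpha+lambda))^m) / lambda.
def iterated_tikhonov_component(s, y, alpha, m):
    lam, x = s * s, 0.0
    for _ in range(m):
        # x_j = argmin (s*x - y)^2 + alpha*(x - x_prev)^2
        x = (s * y + alpha * x) / (lam + alpha)
    return x

def g_alpha(lam, alpha, m):
    r = (alpha / (alpha + lam)) ** m   # residual filter r_alpha
    return (1.0 - r) / lam

s, y, alpha, m = 0.3, 1.7, 0.05, 4
direct = iterated_tikhonov_component(s, y, alpha, m)
via_filter = g_alpha(s * s, alpha, m) * s * y
gap = abs(direct - via_filter)
```

The two values agree to machine precision, confirming the closed formula for the linear, zero-initial-guess case.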

9 deterministic convergence analysis: references

B. Kaltenbacher, A. Neubauer, O. Scherzer. Iterative Regularization Methods for Nonlinear Ill-Posed Problems. Radon Series on Computational and Applied Mathematics, de Gruyter, Berlin, 2008.
A. B. Bakushinsky and M. Y. Kokurin. Iterative Methods for Approximate Solution of Inverse Problems. Springer, Dordrecht.
A. B. Bakushinsky. The problem of the convergence of the iteratively regularized Gauss-Newton method. Comput. Maths. Math. Phys., 32.

The following results are from:

F. Bauer, T. Hohage and A. Munk. Iteratively Regularized Gauss-Newton Method for Nonlinear Inverse Problems with Random Noise. Preprint, under revision for SIAM J. Numer. Anal.

10 error decomposition

Let T := F'[a†] and T_k := F'[â_k]. The error E_k := â_k − a† in the kth Newton step can be decomposed into an approximation error

E^app_{k+1} := r_{α_{k+1}}(T*T) E_0,

a propagated data noise error

E^noi_{k+1} := g_{α_{k+1}}(T_k* T_k) T_k* (δζ + σξ),

and a nonlinearity error

E^nl_{k+1} := g_{α_{k+1}}(T_k* T_k) T_k* ( F(a†) − F(â_k) + T_k E_k ) + ( r_{α_{k+1}}(T_k* T_k) − r_{α_{k+1}}(T*T) ) E_0,

i.e. E_{k+1} = E^app_{k+1} + E^noi_{k+1} + E^nl_{k+1}.

11 crucial lemma

Lemma. Under certain assumptions discussed below there exists γ_nl > 0 such that

‖E^nl_k‖ ≤ γ_nl ( ‖E^app_k‖ + ‖E^noi_k‖ ),  k = 1, ..., K_max.

12 assumptions of the lemma

source condition: There exists a sufficiently small source w ∈ Y such that a_0 − a† = T* w.
α_0 sufficiently large such that ‖E_0‖ ≤ q^{−m} ‖E^app_1‖.
Lipschitz condition: For all a_1, a_2 ∈ D(F), ‖F'[a_1] − F'[a_2]‖ ≤ L ‖a_1 − a_2‖.
choice of K_max: K_max := max { k ∈ N : ‖E^noi_k‖ ≤ C_stop √α_k }.

13 on the proof of the lemma

The proof uses a straightforward induction argument in k. The following properties of iterated Tikhonov regularization are used:

There exists γ_app > 0 such that for all k

‖E^app_{k+1}‖ ≤ ‖E^app_k‖ ≤ γ_app ‖E^app_{k+1}‖.

This rules out methods with infinite qualification such as Landweber iteration!

The propagated data noise is an ordered process in the sense that ‖E^noi_k‖ ≤ ‖E^noi_{k+1}‖ for all k.

14 optimal deterministic rates

Corollary. For deterministic errors (σ = 0) define the optimal stopping index by

K := min { K_max, K* },  K* := argmin_{k ∈ N} ( ‖E^app_k‖ + δ/√α_k ).

Then there exist constants C, δ_0 > 0 such that

‖â_K − a†‖ ≤ C inf_{k ∈ N} ( ‖E^app_k‖ + δ/√α_k )  for all δ ∈ (0, δ_0].

In particular, under the Hölder source condition a_0 − a† = (T*T)^µ w with µ ∈ [1/2, m] we obtain

‖â_K − a†‖ = O( ‖w‖^{1/(2µ+1)} δ^{2µ/(2µ+1)} ).

15 propagated data noise error

We make the following assumptions on the variance term V(a, α) := ‖g_α(F'[a]* F'[a]) F'[a]* ξ‖²:

There exists a known function ϕ_noi such that (E V(a, α))^{1/2} ≤ ϕ_noi(α) for all α ∈ (0, α_0] and a ∈ D(F).
There are constants 1 < γ_noi ≤ γ̄_noi < ∞ such that γ_noi ≤ ϕ_noi(α_{k+1})/ϕ_noi(α_k) ≤ γ̄_noi for all k ∈ N_0.
(exponential inequality) There exist λ_1, λ_2 > 0 such that for all a ∈ D(F), α ∈ (0, α_0] and τ ≥ 1:

P { V(a, α) ≥ τ E V(a, α) } ≤ λ_1 e^{−λ_2 τ}.

16 optimal rates for known smoothness

Theorem. Assume that {a : ‖a − a_0‖ ≤ 2R} ⊂ D(F) and define the optimal stopping index

K* := argmin_{k ∈ N} ( ‖E^app_k‖ + δ/√α_k + σ ϕ_noi(α_k) ).

If ‖â_k − a_0‖ ≤ 2R for k = 1, ..., K*, set K := K*, otherwise K := 0. Then there exist constants C > 1 and δ_0, σ_0 > 0 such that

( E ‖â_K − a†‖² )^{1/2} ≤ C min_{k ∈ N} ( ‖E^app_k‖ + δ/√α_k + σ ϕ_noi(α_k) )

for all δ ∈ (0, δ_0] and σ ∈ (0, σ_0].

In short: the Newton method achieves the same rate as iterated Tikhonov regularization applied to the linearized problem.

17 outline

1 A Newton method for nonlinear statistical inverse problems
2 Oracle inequalities
3 Nonparametric instrumental variables and perturbed operators

18 oracle parameter choice rules

Consider an inverse problem Y = F(a†) + σξ + δζ and a family {R_α : Y → X} of regularized inverses of F.

An oracle parameter choice rule α_or for the method {R_α} and the solution a† is defined by

sup_{‖ζ‖≤1} E ‖R_{α_or}(Y) − a†‖² = inf_α sup_{‖ζ‖≤1} E ‖R_α(Y) − a†‖².

An oracle inequality for some given parameter choice rule α* = α*(Y, σ, δ) is an estimate of the form

sup_{‖ζ‖≤1} E ‖R_{α*}(Y) − a†‖² ≤ χ(σ, δ) sup_{‖ζ‖≤1} E ‖R_{α_or}(Y) − a†‖².

In the optimal case χ(σ, δ) → 1 as σ, δ → 0.

E. Candès. Modern statistical estimation via oracle inequalities. Acta Numerica, 15, 2006.

19 typical convergence results in deterministic regularization theory

In deterministic theory, convergence results for parameter choice rules typically contain a comparison with all other reconstruction methods R : Y → X. In this case one cannot consider only one a† ∈ X, since otherwise the optimal method would be R(Y) ≡ a†. Hence, estimates must be uniform over a smoothness class S ⊂ X, which is typically defined by a source condition. E.g.

sup_{a ∈ S} sup_{‖ζ‖≤1} ‖R_α(F(a) + δζ) − a‖ ≤ C inf_R sup_{a ∈ S} sup_{‖ζ‖≤1} ‖R(F(a) + δζ) − a‖.

20 oracle inequalities are more precise

Proposition. Let R_α := (αI + T*T)^{−1} T* (Tikhonov regularization) and A = {(T*T)^µ w : ‖w‖ ≤ ρ} with ρ > 0 and µ ∈ (0, 1]. Then for all a† ∈ A

sup_{δ>0} [ inf_R sup_{a ∈ A} sup_{‖ζ‖≤1} ‖R(Ta + δζ) − a‖ ] / [ inf_{α>0} sup_{‖ζ‖≤1} ‖R_α(Ta† + δζ) − a†‖ ] = ∞.

In other words: for every element a† in the smoothness class A there exists an error level δ > 0 for which the classical deterministic error bounds are suboptimal by an arbitrarily large factor! This is a deterministic analog of superefficiency.

T. T. Cai and M. G. Low. Nonparametric estimation over shrinking neighborhoods: superefficiency and adaptation. Ann. Stat., 33.

21 balancing principle for nonlinear inverse problems

Let â_0, â_1, ..., â_{K_max} be estimators of a† such that

‖â_k − a†‖ ≤ Φ_noi(k) + Φ_app(k) + Φ_nl(k),  k ≤ K_max.

Φ_app is unknown and non-increasing.
Φ_noi is known and non-decreasing.
Φ_nl is unknown and satisfies for some γ_nl > 0

Φ_nl(k) ≤ γ_nl ( Φ_noi(k) + Φ_app(k) ),  k = 0, ..., K_max.

22 oracle inequality

Lepskiĭ balancing principle:

k_bal := min { k ≤ K_max : ‖â_k − â_m‖ ≤ 4(1 + γ_nl) Φ_noi(m) for m = k + 1, ..., K_max }

Theorem (Bauer, Hohage, Munk). Assume that Φ_noi(k + 1) ≤ γ_noi Φ_noi(k) for some constant γ_noi < ∞. Then

‖â_{k_bal} − a†‖ ≤ 6(1 + γ_nl) γ_noi min_{k=1,...,K_max} ( Φ_app(k) + Φ_noi(k) ).

This extends a result for the linear case γ_nl = 0 by

P. Mathé and S. Pereverzev. Regularization of some linear ill-posed problems with discretized random noisy data. Math. Comp., 75.

See also:

O. V. Lepskiĭ. On a problem of adaptive estimation in Gaussian white noise. Theory Probab. Appl., 35.
P. Mathé. The Lepskiĭ principle revisited. Inverse Problems, 22:L11–L15, 2006.
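The balancing rule above is easy to implement once the known noise bound Φ_noi is available. A minimal sketch with scalar estimates standing in for the norms ‖â_k − â_m‖ (the numbers are a hand-made illustration, not data from the talk):

```python
def lepskii(estimates, phi_noi, gamma_nl=0.0):
    """Lepskii balancing principle: smallest k with
    |a_k - a_m| <= 4 (1 + gamma_nl) phi_noi[m] for all m > k.
    Lists are indexed k = 1..K_max via positions 0..K_max-1."""
    K = len(estimates)
    for k in range(K):
        if all(abs(estimates[k] - estimates[m]) <= 4 * (1 + gamma_nl) * phi_noi[m]
               for m in range(k + 1, K)):
            return k + 1  # 1-based index k_bal
    return K

# toy run: the noise bound grows while later estimates drift away,
# so the earliest estimates fail the comparison test
a_hat = [10.0, 10.3, 10.5, 13.4]
phi   = [0.1, 0.2, 0.4, 0.8]
k_bal = lepskii(a_hat, phi)
```

Here k = 1 fails against m = 4 (|10.0 − 13.4| = 3.4 > 3.2) while k = 2 passes all comparisons, so the rule selects k_bal = 2.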

23 deterministic errors, unknown smoothness

We return now to the Newton method for nonlinear inverse problems.

Corollary. Let Y = F(a†) + δζ. Then

‖â_{k_bal} − a†‖ ≤ 6(1 + γ_nl) γ_noi inf_{k ∈ N} ( Φ_app(k) + δ/√α_k ).

24 stochastic noise, unknown smoothness

Corollary. Let Y = F(a†) + σξ + δζ. Furthermore, let k_bal be chosen by the Lepskiĭ balancing principle if â_k ∈ B_{2R}(a_0) for k = 1, ..., K_max, and k_bal := 0 else. Then there exists a constant C > 0 such that for σ, δ small enough

( E ‖â_{k_bal} − a†‖² )^{1/2} ≤ C min_{k ∈ N} ( ‖E^app_k‖ + δ/√α_k + (ln σ^{−1}) σ ϕ_noi(α_k) ).

25 Can the logarithmic factor be avoided?

In general, no! Counter-example:

A. Tsybakov. On the best rate of adaptive estimation in some inverse problems. C. R. Acad. Sci. Paris, 300.

However, for linear compact operators with polynomially decaying singular values, yes!

L. Cavalier, G. K. Golubev, D. Picard, and A. B. Tsybakov. Oracle inequalities for inverse problems. Ann. Stat., 30.
L. Cavalier and A. Tsybakov. Sharp adaptation for inverse problems with random noise. Probab. Theory Relat. Fields, 123.
L. Cavalier and G. K. Golubev. Risk hull method for inverse problems. Ann. Stat., to appear.

26 Unbiased Risk Estimation

Let Y = Ta† + ε and â_α := T* R_α Y a linear estimator of a† depending on α > 0. To estimate the risk

R(α, a†) := E ‖â_α − a†‖²

assume an independent copy ε̃ of the noise ε is available and consider

U(Y, α, ε̃) := ‖T* R_α Y‖² − 2 ⟨R_α(Y + ε̃), Y − ε̃⟩.

Then U(Y, α, ε̃) is an unbiased estimator of the risk up to an additive constant: writing y := Ta†,

E U(Y, α, ε̃) = E ‖T* R_α Y‖² − 2 E ⟨R_α(y + ε + ε̃), y + ε − ε̃⟩
= E ‖â_α‖² − 2 ⟨R_α y, y⟩ − 2 E ⟨R_α ε, ε⟩ + 2 E ⟨R_α ε̃, ε̃⟩
= E ‖â_α‖² − 2 ⟨T* R_α y, a†⟩
= E ‖â_α‖² − 2 E ⟨T* R_α Y, a†⟩
= R(α, a†) − ‖a†‖².
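The unbiasedness claim can be checked by Monte Carlo in a diagonal toy model. The reading of the estimator as â_α = T* R_α Y with R_α = (TT* + αI)^{−1} (Tikhonov in its second form) is an assumption of this sketch, as are all numerical values; the sample mean of U should match the analytic risk minus the constant ‖a†‖².

```python
import random

random.seed(0)

# Diagonal toy model: T = diag(t), Y = T a + eps, eps_i ~ N(0, sigma^2).
t = [1.0, 0.5, 0.25]
a = [1.0, 1.0, 1.0]
sigma, alpha = 0.1, 0.1
lam = [ti * ti for ti in t]
R = [1.0 / (l + alpha) for l in lam]        # diagonal of R_alpha = (TT* + alpha)^(-1)
y = [t[i] * a[i] for i in range(3)]          # exact data T a

def U(Y, eps2):
    """U(Y, alpha, eps~) = ||T* R_alpha Y||^2 - 2 <R_alpha(Y + eps~), Y - eps~>."""
    aa = sum((t[i] * R[i] * Y[i]) ** 2 for i in range(3))
    bb = sum(R[i] * (Y[i] + eps2[i]) * (Y[i] - eps2[i]) for i in range(3))
    return aa - 2.0 * bb

N = 20000
u_bar = 0.0
for _ in range(N):
    eps  = [sigma * random.gauss(0, 1) for _ in range(3)]   # data noise
    eps2 = [sigma * random.gauss(0, 1) for _ in range(3)]   # independent copy
    u_bar += U([y[i] + eps[i] for i in range(3)], eps2)
u_bar /= N

# analytic risk E||a_alpha - a||^2 = bias^2 + variance, component-wise
risk = sum((alpha / (lam[i] + alpha)) ** 2 * a[i] ** 2
           + sigma ** 2 * lam[i] * R[i] ** 2 for i in range(3))
gap = abs(u_bar - (risk - sum(ai * ai for ai in a)))
```

With 20000 draws the Monte Carlo average of U agrees with R(α, a†) − ‖a†‖² to well within the sampling error, illustrating why U can be minimized over α in place of the unknown risk.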

27 A condition for bounding the variance of U

To bound the variance of U the following condition is used in the analysis:

tr( g_α(T*T)² ) ≤ C tr( (T*T)² g_α(T*T)⁴ )

This condition is satisfied for the truncated singular value decomposition, but violated for Tikhonov regularization, Landweber iteration and ν-methods.

28 A modified iterated Tikhonov regularization

For given m = 2, 3, ... compute an estimator by

â^(0)_α := −m · argmin_{a ∈ X} ( ‖Ta − Y‖² + α ‖a‖² )

and

â^(l)_α := argmin_{a ∈ X} ( ‖Ta − Y‖² + α ‖a − â^(l−1)_α‖² ),  l = 1, ..., m.

Then for exact data Y = Ta†,

a† − â^(m)_α = r_α(T*T) a†   with   r_α(λ) := (α/(α+λ))^m (α + (m+1)λ)/(α+λ).

The method satisfies the usual assumptions and has qualification m − 1. Moreover, it satisfies the condition on the previous slide if the singular values of T decay polynomially.
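The residual filter can be verified spectrally. Note the starting value −m times the Tikhonov estimator is how the garbled slide is reconstructed here and should be treated as an assumption; under that reading the scalar recursion along one spectral direction reproduces the stated r_α exactly:

```python
# One spectral direction of T*T: eigenvalue lam, exact data component b = lam * a.
# Start at -m times the Tikhonov value (assumed "modified" step), iterate m times,
# and compare the residual a - x_m with r_alpha(lam) * a for
# r_alpha(lam) = (alpha/(alpha+lam))^m * (alpha + (m+1)*lam)/(alpha+lam).
def modified_iterated_tikhonov(b, lam, alpha, m):
    x = -m * b / (lam + alpha)             # modified starting value (assumption)
    for _ in range(m):
        x = (b + alpha * x) / (lam + alpha)  # standard iterated Tikhonov update
    return x

lam, alpha, m, a = 0.7, 0.2, 3, 1.3
b = lam * a                                 # exact data component
x = modified_iterated_tikhonov(b, lam, alpha, m)
r_pred = (alpha / (alpha + lam)) ** m * (alpha + (m + 1) * lam) / (alpha + lam)
gap = abs((a - x) - r_pred * a)
```

A short geometric-series computation confirms the same identity symbolically, so the check holds for any λ, α > 0.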

29 outline

1 A Newton method for nonlinear statistical inverse problems
2 Oracle inequalities
3 Nonparametric instrumental variables and perturbed operators

30 introduction

regression problem: Estimate a function a given n independent observations (X_i, Z_i), i = 1, ..., n of random variables X, Z satisfying

Z = a(X) + ε

where ε is an unobservable nuisance variable satisfying E(ε | X) = 0.

Often the assumption E(ε | X) = 0 is violated. We will show that by solving an ill-posed inverse problem one can still estimate a if there exists another observable quantity W (an instrumental variable) which is sufficiently correlated with X and satisfies E(ε | W) = 0.

31 Estimating hourly wages as a function of the education level

Z_i: hourly wage of individual i
X_i: level of education of individual i
unknown: a(x) := E(Z | X = x)

Here it seems unlikely that the education level X and the nuisance variable ε = Z − a(X) are uncorrelated, since there are other variables such as intelligence and stamina which influence both X and Z. However, we may choose W e.g. as the distance of the individual's home from the nearest college and reasonably assume that E(ε | W) = 0.

P. Hall and J. L. Horowitz. Nonparametric methods for inference in the presence of instrumental variables. Ann. Stat., 33, 2005.
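The endogeneity problem in this example can be made concrete with a small simulation. All model choices below (Gaussian variables, a linear confounding structure, the coefficients) are illustrative assumptions: an unobserved confounder U ("ability") drives both the regressor X and the error ε, while the instrument W is independent of ε.

```python
import random

random.seed(1)

N = 50000
xs, ws, es = [], [], []
for _ in range(N):
    u = random.gauss(0, 1)                 # unobserved ability (confounder)
    w = random.gauss(0, 1)                 # instrument, e.g. distance to college
    x = w + u + 0.5 * random.gauss(0, 1)   # education, confounded by u
    eps = u + 0.5 * random.gauss(0, 1)     # nuisance term, also driven by u
    xs.append(x); ws.append(w); es.append(eps)

def cov(p, q):
    mp, mq = sum(p) / N, sum(q) / N
    return sum((p[i] - mp) * (q[i] - mq) for i in range(N)) / N

cov_eX = cov(es, xs)   # endogeneity: Cov(eps, X) = Var(U) = 1, far from 0
cov_eW = cov(es, ws)   # instrument validity: Cov(eps, W) = 0
```

The sample covariances show Cov(ε, X) ≈ 1 (so naive regression of Z on X is biased) while Cov(ε, W) ≈ 0, which is exactly the situation the instrumental variable approach exploits.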

32 a linear first kind integral equation

From the observed data (X_i, Z_i, W_i) we can estimate the joint density f(x, z, w). Since E(ε | W) = 0, we have

∫ f_{X|W}(x | w) a(x) dx = E(Z | W = w)  for all w.

Setting k(w, x) := f_{X|W}(x | w) and u(w) := E(Z | W = w) we obtain the linear integral equation

∫ k(w, x) a(x) dx = u(w).

Note that both the right hand side and the kernel are noisy since they have to be estimated from the data.
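A discretized sketch of solving such a first-kind equation by Tikhonov regularization. Everything here is an illustrative assumption rather than the talk's setup: the kernel is taken as the Gaussian conditional density of X | W = w ~ N(w, s²), the unknown function and the grid are arbitrary, and the "noise" on the right hand side is a synthetic deterministic perturbation.

```python
import math

n, lo, hi, s = 40, -3.0, 3.0, 0.5
h = (hi - lo) / (n - 1)
grid = [lo + i * h for i in range(n)]

def kern(w, x):  # assumed kernel: density of X | W = w ~ N(w, s^2)
    return math.exp(-(x - w) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

K = [[kern(w, x) * h for x in grid] for w in grid]   # quadrature discretization
a_true = [math.exp(-x * x) for x in grid]
u = [sum(K[i][j] * a_true[j] for j in range(n)) for i in range(n)]
u = [ui + 1e-3 * math.sin(50.0 * i) for i, ui in enumerate(u)]  # perturbed data

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    A = [row[:] for row in A]; b = b[:]
    for p in range(n):
        q = max(range(p, n), key=lambda r: abs(A[r][p]))
        A[p], A[q], b[p], b[q] = A[q], A[p], b[q], b[p]
        for r in range(p + 1, n):
            f = A[r][p] / A[p][p]
            for c in range(p, n):
                A[r][c] -= f * A[p][c]
            b[r] -= f * b[p]
    x = [0.0] * n
    for p in range(n - 1, -1, -1):
        x[p] = (b[p] - sum(A[p][c] * x[c] for c in range(p + 1, n))) / A[p][p]
    return x

alpha = 1e-3
# Tikhonov normal equations: (K^T K + alpha I) a = K^T u
KtK = [[sum(K[r][i] * K[r][j] for r in range(n)) for j in range(n)] for i in range(n)]
A = [[KtK[i][j] + (alpha if i == j else 0.0) for j in range(n)] for i in range(n)]
Ktu = [sum(K[r][i] * u[r] for r in range(n)) for i in range(n)]
a_hat = solve(A, Ktu)

rel_err = math.sqrt(sum((a_hat[i] - a_true[i]) ** 2 for i in range(n))
                    / sum(at * at for at in a_true))
```

Without the α-term the discretized system is severely ill-conditioned and the perturbation would be amplified enormously; with regularization the smooth unknown is recovered with a small relative error.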

33 a nonlinear integral equation

Often the assumption E(ε | W) = 0 can be replaced by the stronger independence assumption: ε and W independent, Eε = 0. With Z = a(X) + ε, this independence assumption is equivalent to

∫ f(ε + a(x), x, w) dx = f_W(w) ∫ f_{Z,X}(ε + a(x), x) dx  for all ε, w,

where f_W and f_{Z,X} denote the marginal densities w.r.t. W and (Z, X), respectively. This is a nonlinear integral equation in a with a noisy kernel, which can be solved by regularized Newton methods.

joint work with J. P. Florens, J. Johannes and E. Mammen

34 related work

Also leading to a nonlinear integral equation:

J. L. Horowitz and S. Lee. Nonparametric instrumental variables estimation of a quantile regression model. Econometrica, 75.

The proof of convergence is modelled after the following paper:

N. Bissantz, T. Hohage, A. Munk. Consistency and rates of convergence of nonlinear Tikhonov regularization with random noise. Inverse Problems, 20.

It uses a Hölder source condition, which seems unnatural in this context since the estimated kernels of the integral operators are smooth.

The Stein hull. Clément Marteau* Institut de Mathématiques, Université de Toulouse, INSA - 135, Avenue de Rangueil, F Toulouse Cedex 4, France Journal of Nonparametric Statistics Vol. 22, No. 6, August 2010, 685 702 The Stein hull Clément Marteau* Institut de Mathématiques, Université de Toulouse, INSA - 135, Avenue de Rangueil, F-31 077 Toulouse

More information

Aggregation of Spectral Density Estimators

Aggregation of Spectral Density Estimators Aggregation of Spectral Density Estimators Christopher Chang Department of Mathematics, University of California, San Diego, La Jolla, CA 92093-0112, USA; chrchang@alumni.caltech.edu Dimitris Politis Department

More information

The Levenberg-Marquardt Iteration for Numerical Inversion of the Power Density Operator

The Levenberg-Marquardt Iteration for Numerical Inversion of the Power Density Operator The Levenberg-Marquardt Iteration for Numerical Inversion of the Power Density Operator G. Bal (gb2030@columbia.edu) 1 W. Naetar (wolf.naetar@univie.ac.at) 2 O. Scherzer (otmar.scherzer@univie.ac.at) 2,3

More information

On probabilities of large and moderate deviations for L-statistics: a survey of some recent developments

On probabilities of large and moderate deviations for L-statistics: a survey of some recent developments UDC 519.2 On probabilities of large and moderate deviations for L-statistics: a survey of some recent developments N. V. Gribkova Department of Probability Theory and Mathematical Statistics, St.-Petersburg

More information

Regularization and Inverse Problems

Regularization and Inverse Problems Regularization and Inverse Problems Caroline Sieger Host Institution: Universität Bremen Home Institution: Clemson University August 5, 2009 Caroline Sieger (Bremen and Clemson) Regularization and Inverse

More information

This article was published in an Elsevier journal. The attached copy is furnished to the author for non-commercial research and education use, including for instruction at the author s institution, sharing

More information

Initial Temperature Reconstruction for a Nonlinear Heat Equation: Application to Radiative Heat Transfer.

Initial Temperature Reconstruction for a Nonlinear Heat Equation: Application to Radiative Heat Transfer. Proceedings of the 5th International Conference on Inverse Problems in Engineering: Theory and Practice, Cambridge, UK, 11-15th July 2005 Initial Temperature Reconstruction for a Nonlinear Heat Equation:

More information

On the stochastic nonlinear Schrödinger equation

On the stochastic nonlinear Schrödinger equation On the stochastic nonlinear Schrödinger equation Annie Millet collaboration with Z. Brzezniak SAMM, Paris 1 and PMA Workshop Women in Applied Mathematics, Heraklion - May 3 211 Outline 1 The NL Shrödinger

More information

Gaussian Processes. Le Song. Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012

Gaussian Processes. Le Song. Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012 Gaussian Processes Le Song Machine Learning II: Advanced Topics CSE 8803ML, Spring 01 Pictorial view of embedding distribution Transform the entire distribution to expected features Feature space Feature

More information

Modified Landweber iteration in Banach spaces convergence and convergence rates

Modified Landweber iteration in Banach spaces convergence and convergence rates Modified Landweber iteration in Banach spaces convergence and convergence rates Torsten Hein, Kamil S. Kazimierski August 4, 009 Abstract Abstract. We introduce and discuss an iterative method of relaxed

More information

Finding one root of a polynomial system

Finding one root of a polynomial system Finding one root of a polynomial system Smale s 17th problem Pierre Lairez Inria Saclay FoCM 2017 Foundations of computational mathematics 15 july 2017, Barcelona Solving polynomial systems A well studied

More information

Additive Isotonic Regression

Additive Isotonic Regression Additive Isotonic Regression Enno Mammen and Kyusang Yu 11. July 2006 INTRODUCTION: We have i.i.d. random vectors (Y 1, X 1 ),..., (Y n, X n ) with X i = (X1 i,..., X d i ) and we consider the additive

More information

Estimation of a quadratic regression functional using the sinc kernel

Estimation of a quadratic regression functional using the sinc kernel Estimation of a quadratic regression functional using the sinc kernel Nicolai Bissantz Hajo Holzmann Institute for Mathematical Stochastics, Georg-August-University Göttingen, Maschmühlenweg 8 10, D-37073

More information

Regularization Parameter Estimation for Least Squares: A Newton method using the χ 2 -distribution

Regularization Parameter Estimation for Least Squares: A Newton method using the χ 2 -distribution Regularization Parameter Estimation for Least Squares: A Newton method using the χ 2 -distribution Rosemary Renaut, Jodi Mead Arizona State and Boise State September 2007 Renaut and Mead (ASU/Boise) Scalar

More information

Asymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals

Asymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals Acta Applicandae Mathematicae 78: 145 154, 2003. 2003 Kluwer Academic Publishers. Printed in the Netherlands. 145 Asymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals M.

More information

Marlis Hochbruck 1, Michael Hönig 1 and Alexander Ostermann 2

Marlis Hochbruck 1, Michael Hönig 1 and Alexander Ostermann 2 ESAIM: M2AN 43 (29) 79 72 DOI:.5/m2an/292 ESAIM: Mathematical Modelling and Numerical Analysis www.esaim-m2an.org REGULARIZATION OF NONLINEAR ILL-POSED PROBLEMS BY EXPONENTIAL INTEGRATORS Marlis Hochbruck,

More information

Instrumental Variables Estimation and Other Inverse Problems in Econometrics. February Jean-Pierre Florens (TSE)

Instrumental Variables Estimation and Other Inverse Problems in Econometrics. February Jean-Pierre Florens (TSE) Instrumental Variables Estimation and Other Inverse Problems in Econometrics February 2011 Jean-Pierre Florens (TSE) 2 I - Introduction Econometric model: Relation between Y, Z and U Y, Z observable random

More information

Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators

Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators Stephan W Anzengruber 1 and Ronny Ramlau 1,2 1 Johann Radon Institute for Computational and Applied Mathematics,

More information

How large is the class of operator equations solvable by a DSM Newton-type method?

How large is the class of operator equations solvable by a DSM Newton-type method? This is the author s final, peer-reviewed manuscript as accepted for publication. The publisher-formatted version may be available through the publisher s web site or your institution s library. How large

More information

Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces

Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces Applied Mathematical Sciences, Vol. 6, 212, no. 63, 319-3117 Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces Nguyen Buong Vietnamese

More information

sparse and low-rank tensor recovery Cubic-Sketching

sparse and low-rank tensor recovery Cubic-Sketching Sparse and Low-Ran Tensor Recovery via Cubic-Setching Guang Cheng Department of Statistics Purdue University www.science.purdue.edu/bigdata CCAM@Purdue Math Oct. 27, 2017 Joint wor with Botao Hao and Anru

More information

Parallel Cimmino-type methods for ill-posed problems

Parallel Cimmino-type methods for ill-posed problems Parallel Cimmino-type methods for ill-posed problems Cao Van Chung Seminar of Centro de Modelización Matemática Escuela Politécnica Naciónal Quito ModeMat, EPN, Quito Ecuador cao.vanchung@epn.edu.ec, cvanchung@gmail.com

More information

Properties for systems with weak invariant manifolds

Properties for systems with weak invariant manifolds Statistical properties for systems with weak invariant manifolds Faculdade de Ciências da Universidade do Porto Joint work with José F. Alves Workshop rare & extreme Gibbs-Markov-Young structure Let M

More information

Some lecture notes for Math 6050E: PDEs, Fall 2016

Some lecture notes for Math 6050E: PDEs, Fall 2016 Some lecture notes for Math 65E: PDEs, Fall 216 Tianling Jin December 1, 216 1 Variational methods We discuss an example of the use of variational methods in obtaining existence of solutions. Theorem 1.1.

More information

Quantile Processes for Semi and Nonparametric Regression

Quantile Processes for Semi and Nonparametric Regression Quantile Processes for Semi and Nonparametric Regression Shih-Kang Chao Department of Statistics Purdue University IMS-APRM 2016 A joint work with Stanislav Volgushev and Guang Cheng Quantile Response

More information

On Regularization Algorithms in Learning Theory

On Regularization Algorithms in Learning Theory On Regularization Algorithms in Learning Theory Frank Bauer a, Sergei Pereverzev b, Lorenzo Rosasco c,1 a Institute for Mathematical Stochastics, University of Göttingen, Department of Mathematics, Maschmühlenweg

More information

Different types of phase transitions for a simple model of alignment of oriented particles

Different types of phase transitions for a simple model of alignment of oriented particles Different types of phase transitions for a simple model of alignment of oriented particles Amic Frouvelle Université Paris Dauphine Joint work with Jian-Guo Liu (Duke University, USA) and Pierre Degond

More information

SHADOWING AND INVERSE SHADOWING IN SET-VALUED DYNAMICAL SYSTEMS. HYPERBOLIC CASE. Sergei Yu. Pilyugin Janosch Rieger. 1.

SHADOWING AND INVERSE SHADOWING IN SET-VALUED DYNAMICAL SYSTEMS. HYPERBOLIC CASE. Sergei Yu. Pilyugin Janosch Rieger. 1. Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 32, 2008, 151 164 SHADOWING AND INVERSE SHADOWING IN SET-VALUED DYNAMICAL SYSTEMS. HYPERBOLIC CASE Sergei Yu. Pilyugin

More information

Goodness-of-fit tests for the cure rate in a mixture cure model

Goodness-of-fit tests for the cure rate in a mixture cure model Biometrika (217), 13, 1, pp. 1 7 Printed in Great Britain Advance Access publication on 31 July 216 Goodness-of-fit tests for the cure rate in a mixture cure model BY U.U. MÜLLER Department of Statistics,

More information

Adaptive wavelet decompositions of stochastic processes and some applications

Adaptive wavelet decompositions of stochastic processes and some applications Adaptive wavelet decompositions of stochastic processes and some applications Vladas Pipiras University of North Carolina at Chapel Hill SCAM meeting, June 1, 2012 (joint work with G. Didier, P. Abry)

More information

STAT 200C: High-dimensional Statistics

STAT 200C: High-dimensional Statistics STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 59 Classical case: n d. Asymptotic assumption: d is fixed and n. Basic tools: LLN and CLT. High-dimensional setting: n d, e.g. n/d

More information

New Algorithms for Parallel MRI

New Algorithms for Parallel MRI New Algorithms for Parallel MRI S. Anzengruber 1, F. Bauer 2, A. Leitão 3 and R. Ramlau 1 1 RICAM, Austrian Academy of Sciences, Altenbergerstraße 69, 4040 Linz, Austria 2 Fuzzy Logic Laboratorium Linz-Hagenberg,

More information

Can we do statistical inference in a non-asymptotic way? 1

Can we do statistical inference in a non-asymptotic way? 1 Can we do statistical inference in a non-asymptotic way? 1 Guang Cheng 2 Statistics@Purdue www.science.purdue.edu/bigdata/ ONR Review Meeting@Duke Oct 11, 2017 1 Acknowledge NSF, ONR and Simons Foundation.

More information

Statistical inference on Lévy processes

Statistical inference on Lévy processes Alberto Coca Cabrero University of Cambridge - CCA Supervisors: Dr. Richard Nickl and Professor L.C.G.Rogers Funded by Fundación Mutua Madrileña and EPSRC MASDOC/CCA student workshop 2013 26th March Outline

More information

Statistical Measures of Uncertainty in Inverse Problems

Statistical Measures of Uncertainty in Inverse Problems Statistical Measures of Uncertainty in Inverse Problems Workshop on Uncertainty in Inverse Problems Institute for Mathematics and Its Applications Minneapolis, MN 19-26 April 2002 P.B. Stark Department

More information

An iterative hard thresholding estimator for low rank matrix recovery

An iterative hard thresholding estimator for low rank matrix recovery An iterative hard thresholding estimator for low rank matrix recovery Alexandra Carpentier - based on a joint work with Arlene K.Y. Kim Statistical Laboratory, Department of Pure Mathematics and Mathematical

More information

THE LENT PARTICLE FORMULA

THE LENT PARTICLE FORMULA THE LENT PARTICLE FORMULA Nicolas BOULEAU, Laurent DENIS, Paris. Workshop on Stochastic Analysis and Finance, Hong-Kong, June-July 2009 This is part of a joint work with Laurent Denis, concerning the approach

More information

Piecewise Smooth Solutions to the Burgers-Hilbert Equation

Piecewise Smooth Solutions to the Burgers-Hilbert Equation Piecewise Smooth Solutions to the Burgers-Hilbert Equation Alberto Bressan and Tianyou Zhang Department of Mathematics, Penn State University, University Park, Pa 68, USA e-mails: bressan@mathpsuedu, zhang

More information

Conditional moment representations for dependent random variables

Conditional moment representations for dependent random variables Conditional moment representations for dependent random variables W lodzimierz Bryc Department of Mathematics University of Cincinnati Cincinnati, OH 45 22-0025 bryc@ucbeh.san.uc.edu November 9, 995 Abstract

More information

Levenberg-Marquardt method in Banach spaces with general convex regularization terms

Levenberg-Marquardt method in Banach spaces with general convex regularization terms Levenberg-Marquardt method in Banach spaces with general convex regularization terms Qinian Jin Hongqi Yang Abstract We propose a Levenberg-Marquardt method with general uniformly convex regularization

More information

Quantile methods. Class Notes Manuel Arellano December 1, Let F (r) =Pr(Y r). Forτ (0, 1), theτth population quantile of Y is defined to be

Quantile methods. Class Notes Manuel Arellano December 1, Let F (r) =Pr(Y r). Forτ (0, 1), theτth population quantile of Y is defined to be Quantile methods Class Notes Manuel Arellano December 1, 2009 1 Unconditional quantiles Let F (r) =Pr(Y r). Forτ (0, 1), theτth population quantile of Y is defined to be Q τ (Y ) q τ F 1 (τ) =inf{r : F

More information

Nonparametric regression with martingale increment errors

Nonparametric regression with martingale increment errors S. Gaïffas (LSTA - Paris 6) joint work with S. Delattre (LPMA - Paris 7) work in progress Motivations Some facts: Theoretical study of statistical algorithms requires stationary and ergodicity. Concentration

More information

Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1

Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1 Applied Mathematical Sciences, Vol. 5, 2011, no. 76, 3781-3788 Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1 Nguyen Buong and Nguyen Dinh Dung

More information

Nonparametric Inference In Functional Data

Nonparametric Inference In Functional Data Nonparametric Inference In Functional Data Zuofeng Shang Purdue University Joint work with Guang Cheng from Purdue Univ. An Example Consider the functional linear model: Y = α + where 1 0 X(t)β(t)dt +

More information

An Empirical Study of the ɛ-algorithm for Accelerating Numerical Sequences

An Empirical Study of the ɛ-algorithm for Accelerating Numerical Sequences Applied Mathematical Sciences, Vol 6, 2012, no 24, 1181-1190 An Empirical Study of the ɛ-algorithm for Accelerating Numerical Sequences Oana Bumbariu North University of Baia Mare Department of Mathematics

More information