Regularization and Inverse Problems


1 Regularization and Inverse Problems
Caroline Sieger
Host Institution: Universität Bremen; Home Institution: Clemson University
August 5, 2009

2 Outline
1. Preliminaries: The Inverse Problem; The Moore-Penrose Generalized Inverse; Eigensystems and Singular Systems
2. Regularization: Classical Tikhonov Regularization; Tikhonov Regularization with Sparsity Constraints

3-5 Preliminaries
Notation:
- $T : X \to Y$ is a bounded linear operator
- $K : X \to Y$ is a compact linear operator
- $X$ and $Y$ are Hilbert spaces
Definition. A problem is ill-posed if one or more of the following holds:
- a solution does not exist
- the solution is not unique
- the solution does not depend continuously on the data

6-9 The Inverse Problem
Find $x \in X$ such that $Tx = y$.
There may be some problems with this, namely:
- $T^{-1}$ may not exist
- $NS(T) \neq \{0\}$ ($\Rightarrow$ non-unique solutions)
Thus, we seek an approximate solution.
Types of Solutions
- $x \in X$ is a least-squares solution to $Tx = y$ if $\|Tx - y\| = \inf\{\|Tz - y\| : z \in X\}$
- $x \in X$ is a best-approximate solution to $Tx = y$ if $x$ is a least-squares solution and $\|x\| = \inf\{\|z\| : z \text{ is a least-squares solution of } Tx = y\}$
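As a concrete finite-dimensional illustration (my own sketch, not from the slides), the following Python snippet uses a rank-deficient matrix as $T$: `numpy.linalg.pinv` returns the best-approximate (minimum-norm least-squares) solution, and adding any null-space element of $T$ produces another least-squares solution of larger norm.

```python
import numpy as np

# Rank-deficient "operator": third column = first + second, so NS(T) != {0}
T = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
y = np.array([1.0, 2.0, 2.0])            # need not lie in range(T)

# Best-approximate solution: the least-squares solution of minimal norm
x_dagger = np.linalg.pinv(T) @ y

# Adding a null-space vector gives another least-squares solution with larger norm
n = np.array([1.0, 1.0, -1.0])           # T @ n = 0
x_other = x_dagger + n

print(np.linalg.norm(T @ x_dagger - y), np.linalg.norm(T @ x_other - y))  # equal residuals
print(np.linalg.norm(x_dagger), np.linalg.norm(x_other))                  # x_dagger has the smaller norm
```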

10-14 The Moore-Penrose Generalized Inverse
Definition: Moore-Penrose Generalized Inverse
$T^\dagger$ is the unique linear extension of $\tilde{T}^{-1}$ to $\mathrm{dom}(T^\dagger) := \mathrm{range}(T) \oplus \mathrm{range}(T)^\perp$ with $NS(T^\dagger) = \mathrm{range}(T)^\perp$, where $\tilde{T} := T|_{NS(T)^\perp} : NS(T)^\perp \to \mathrm{range}(T)$.
If $P$ and $Q$ are the orthogonal projectors onto $NS(T)$ and $\overline{\mathrm{range}(T)}$, then
1.) $T T^\dagger T = T$
2.) $T^\dagger T T^\dagger = T^\dagger$
3.) $T^\dagger T = I - P$
4.) $T T^\dagger = Q|_{\mathrm{dom}(T^\dagger)}$
$T^\dagger$ is bounded if and only if $\mathrm{range}(T)$ is closed.
If $y \in \mathrm{dom}(T^\dagger)$, then there exists a unique best-approximate solution, $x^\dagger := T^\dagger y$.
Definition: Gaussian Normal Equation
For $y \in \mathrm{dom}(T^\dagger)$, $x \in X$ is a least-squares solution $\iff T^* T x = T^* y$.
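In finite dimensions the four identities and the normal equation can be checked directly; a quick NumPy sketch (my own illustration, not part of the slides), where the restriction $Q|_{\mathrm{dom}(T^\dagger)}$ is simply $Q$:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # a 5x4 matrix of rank at most 3
Tdag = np.linalg.pinv(T)                                        # Moore-Penrose generalized inverse

# Identities 1) and 2)
print(np.allclose(T @ Tdag @ T, T), np.allclose(Tdag @ T @ Tdag, Tdag))

# Identities 3) and 4): I - Tdag T and T Tdag are orthogonal projectors (onto NS(T) and range(T))
P = np.eye(4) - Tdag @ T
Q = T @ Tdag
print(np.allclose(P, P.T), np.allclose(P @ P, P))
print(np.allclose(Q, Q.T), np.allclose(Q @ Q, Q))

# Normal equation: x = Tdag y satisfies T^T T x = T^T y
y = rng.standard_normal(5)
x = Tdag @ y
print(np.allclose(T.T @ T @ x, T.T @ y))
```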

15-17 Eigensystems and Singular Systems
Definition. A selfadjoint $K$ has the eigensystem $(\lambda_n; v_n)$, where the $\lambda_n$ are the non-zero eigenvalues and the $v_n$ are corresponding eigenvectors. We may diagonalize $K$ by
$Kx = \sum_{n=1}^{\infty} \lambda_n \langle x, v_n \rangle v_n$ for all $x \in X$.
Definition. A non-selfadjoint $K$ has the singular system $(\sigma_n; v_n, u_n)$, where
- $K^*$ is the adjoint of $K$
- $\{\sigma_n^2\}_{n \in \mathbb{N}}$ are the non-zero eigenvalues of $K^* K$ (and $K K^*$)
- $\{v_n\}_{n \in \mathbb{N}}$ are eigenvectors of $K^* K$
- $\{u_n\}_{n \in \mathbb{N}}$ are eigenvectors of $K K^*$, defined by $u_n := \frac{K v_n}{\|K v_n\|}$

18-20 Eigensystems and Singular Systems
Properties of a singular system:
- $K v_n = \sigma_n u_n$
- $K^* u_n = \sigma_n v_n$
- $Kx = \sum_{n=1}^{\infty} \sigma_n \langle x, v_n \rangle u_n$
- $K^* y = \sum_{n=1}^{\infty} \sigma_n \langle y, u_n \rangle v_n$
Further, $K$ has finite-dimensional range if and only if $K$ has finitely many singular values, in which case the infinite series involving singular values degenerate to finite sums.
Theorem (Engl, Hanke, Neubauer). For a compact linear operator $K$ with singular system $(\sigma_n; v_n, u_n)$ and $y \in Y$ we have:
1. $y \in \mathrm{dom}(K^\dagger) \iff \sum_{n=1}^{\infty} \frac{|\langle y, u_n \rangle|^2}{\sigma_n^2} < \infty$ (Picard criterion for the existence of a best-approximate solution)
2. For $y \in \mathrm{dom}(K^\dagger)$, $K^\dagger y = \sum_{n=1}^{\infty} \frac{\langle y, u_n \rangle}{\sigma_n} v_n$
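In the matrix case the singular system is just the SVD, and both the Picard sum and the spectral formula for $K^\dagger y$ can be evaluated directly; a small sketch (my own, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
K = rng.standard_normal((6, 4))
y = rng.standard_normal(6)

# Singular system: K v_n = sigma_n u_n and K^T u_n = sigma_n v_n
U, sigma, Vt = np.linalg.svd(K, full_matrices=False)

# Picard sum: in infinite dimensions this series must be finite for y in dom(K^dagger)
print("Picard sum:", np.sum((U.T @ y) ** 2 / sigma ** 2))

# Best-approximate solution via K^dagger y = sum_n <y, u_n> / sigma_n * v_n
x_dagger = Vt.T @ ((U.T @ y) / sigma)
print(np.allclose(x_dagger, np.linalg.pinv(K) @ y))
```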

21-23 Regularization
Recall: we seek to approximate $x^\dagger := T^\dagger y$.
Normally, we only have an approximation of $y$, i.e. $y^\delta$ such that $\|y^\delta - y\| \le \delta$. In the ill-posed case, $T^\dagger y^\delta$ is not a good approximation of $x^\dagger$ because of the unboundedness of $T^\dagger$.
We seek an approximation $x_\alpha^\delta$ of $x^\dagger$ such that
1. $x_\alpha^\delta$ depends continuously on the noisy data $y^\delta$ (this allows stable computation of $x_\alpha^\delta$), and
2. as the noise level $\delta \to 0$ and for an appropriate choice of $\alpha$, $x_\alpha^\delta \to x^\dagger$.
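The unboundedness of $T^\dagger$ shows up numerically as severe noise amplification once $T$ has small singular values; a sketch using a classic ill-conditioned test matrix (a Hilbert matrix, my own choice of example):

```python
import numpy as np

n = 12
# Hilbert matrix H_ij = 1/(i + j - 1): its singular values decay extremely fast
T = 1.0 / (np.arange(1, n + 1)[:, None] + np.arange(n))
x_true = np.ones(n)
y = T @ x_true

rng = np.random.default_rng(2)
delta = 1e-6
y_delta = y + delta * rng.standard_normal(n)      # noisy data with ||y_delta - y|| ~ delta

# Applying the (pseudo)inverse directly amplifies the tiny data error enormously
x_naive = np.linalg.pinv(T) @ y_delta
print("data error    :", np.linalg.norm(y_delta - y))
print("solution error:", np.linalg.norm(x_naive - x_true))
```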

24-27 Regularization
Definition. Let $\alpha_0 \in (0, \infty]$ and, for every $\alpha \in (0, \alpha_0]$, let $R_\alpha : Y \to X$ be a continuous operator. The family $\{R_\alpha\}$ is a regularization if, for every $y \in \mathrm{dom}(T^\dagger)$, there exists a parameter choice rule $\alpha = \alpha(\delta, y^\delta)$ such that
$\lim_{\delta \to 0} \sup \{ \|R_{\alpha(\delta, y^\delta)} y^\delta - T^\dagger y\| : y^\delta \in Y, \|y^\delta - y\| \le \delta \} = 0$  (1)
and $\alpha : \mathbb{R}^+ \times Y \to (0, \alpha_0)$ is such that
$\lim_{\delta \to 0} \sup \{ \alpha(\delta, y^\delta) : y^\delta \in Y, \|y^\delta - y\| \le \delta \} = 0$  (2)
For a specific $y \in \mathrm{dom}(T^\dagger)$, a pair $(R_\alpha, \alpha)$ is a regularization method if (1) and (2) hold.

28-30 Regularization
If the parameter choice rule does not depend on $y^\delta$, we say it is a-priori and we write $\alpha = \alpha(\delta)$. Otherwise, it is an a-posteriori parameter choice rule.
Proposition (Engl, Hanke, Neubauer). $(R_\alpha, \alpha)$ is convergent (for linear $R_\alpha$) if and only if $\lim_{\delta \to 0} \alpha(\delta) = 0$ and $\lim_{\delta \to 0} \delta \, \|R_{\alpha(\delta)}\| = 0$.
$X_{\mu,\rho} := \{ x \in X : x = (T^* T)^\mu \omega, \ \|\omega\| \le \rho \}$, $\mu > 0$
Proposition (Engl, Hanke, Neubauer). If $\mathrm{range}(T)$ is not closed, the worst-case error of a regularization algorithm cannot converge to zero faster than $\delta^{\frac{2\mu}{2\mu+1}} \rho^{\frac{1}{2\mu+1}}$ as $\delta \to 0$ for $x^\dagger \in X_{\mu,\rho}$.
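For classical Tikhonov regularization (introduced below), $R_\alpha = (T^*T + \alpha I)^{-1} T^*$, and in the matrix case $\|R_\alpha\| = \max_n \sigma_n/(\sigma_n^2 + \alpha)$, which is bounded by $1/(2\sqrt{\alpha})$, so the condition $\delta \, \|R_{\alpha(\delta)}\| \to 0$ holds whenever $\delta^2/\alpha(\delta) \to 0$. A quick numerical check of this norm bound (my own sketch, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.standard_normal((8, 5))
sigma = np.linalg.svd(T, compute_uv=False)

for alpha in [1e-1, 1e-2, 1e-3]:
    # Tikhonov regularization operator R_alpha = (T^T T + alpha I)^{-1} T^T
    R_alpha = np.linalg.solve(T.T @ T + alpha * np.eye(5), T.T)
    norm_R = np.linalg.norm(R_alpha, 2)                      # spectral norm
    print(alpha, norm_R, np.max(sigma / (sigma**2 + alpha)), 1 / (2 * np.sqrt(alpha)))
```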

31-34 Regularization
$\Delta(\delta, M, R_\alpha) := \sup \{ \|R_\alpha y^\delta - x\| : x \in M, \ y^\delta \in Y, \ \|Tx - y^\delta\| \le \delta \}$ for some $M \subset X$
Definition. We say $(R_\alpha, \alpha)$ is optimal in $X_{\mu,\rho}$ if $\Delta(\delta, X_{\mu,\rho}, R_\alpha) = \delta^{\frac{2\mu}{2\mu+1}} \rho^{\frac{1}{2\mu+1}}$ holds for all $\delta > 0$.
$(R_\alpha, \alpha)$ is of optimal order in $X_{\mu,\rho}$ if there exists $C \ge 1$ such that $\Delta(\delta, X_{\mu,\rho}, R_\alpha) \le C \, \delta^{\frac{2\mu}{2\mu+1}} \rho^{\frac{1}{2\mu+1}}$ for all $\delta > 0$.
Theorem (Engl, Hanke, Neubauer). Let $\tau > \tau_0 \ge 1$. If $(R_\alpha, \alpha_\tau)$ is of optimal order in $X_{\mu,\rho}$ for some $\mu > 0$ and $\rho > 0$, then all $(R_\alpha, \alpha_\tau)$ with $\tau > \tau_0$ are convergent for $y \in \mathrm{range}(T)$ and of optimal order in $X_{\nu,\rho}$ with $0 < \nu \le \mu$ and $\rho > 0$.

35-36 Classical Tikhonov Regularization
The Tikhonov Functional
$\Phi(x) := \|Tx - y^\delta\|^2 + \alpha \|x\|^2$
Theorem (Engl, Hanke, Neubauer). $x_\alpha^\delta := (T^* T + \alpha I)^{-1} T^* y^\delta$ is the unique minimizer of $\Phi(x)$.
Theorem (Engl, Hanke, Neubauer). For $x_\alpha^\delta := (T^* T + \alpha I)^{-1} T^* y^\delta$, $y \in \mathrm{range}(T)$, $\|y - y^\delta\| \le \delta$: if $\alpha = \alpha(\delta)$ is such that $\lim_{\delta \to 0} \alpha(\delta) = 0$ and $\lim_{\delta \to 0} \frac{\delta^2}{\alpha(\delta)} = 0$, then $\lim_{\delta \to 0} x_{\alpha(\delta)}^\delta = T^\dagger y$.

37 Classical Tikhonov Regularization
Proposition (Engl, Hanke, Neubauer). As long as $\mu \le 1$, Tikhonov regularization with the a-priori choice rule $\alpha \sim \left(\frac{\delta}{\rho}\right)^{\frac{2}{2\mu+1}}$ is of optimal order in $X_{\mu,\rho}$; the best possible convergence rate is attained for $\mu = 1$:
$\alpha \sim \left(\frac{\delta}{\rho}\right)^{\frac{2}{3}}$ and $\|x_\alpha^\delta - x^\dagger\| = O\left(\delta^{\frac{2}{3}}\right)$
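A minimal numerical sketch of classical Tikhonov regularization with the a-priori rule $\alpha \sim \delta^{2/3}$ (my own toy problem, reusing the ill-conditioned Hilbert matrix from the earlier sketch; the constant in the rule is chosen ad hoc):

```python
import numpy as np

n = 12
T = 1.0 / (np.arange(1, n + 1)[:, None] + np.arange(n))   # ill-conditioned Hilbert matrix
x_true = np.ones(n)
y = T @ x_true

rng = np.random.default_rng(4)
delta = 1e-6
y_delta = y + delta * rng.standard_normal(n)

alpha = delta ** (2 / 3)                                   # a-priori rule alpha ~ delta^(2/3)

# Tikhonov minimizer x_alpha^delta = (T^T T + alpha I)^{-1} T^T y_delta
x_tik = np.linalg.solve(T.T @ T + alpha * np.eye(n), T.T @ y_delta)

print("Tikhonov error     :", np.linalg.norm(x_tik - x_true))
print("unregularized error:", np.linalg.norm(np.linalg.pinv(T) @ y_delta - x_true))
```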

38-41 Tikhonov Regularization with Sparsity Constraints
- In some applications, we require a sparse solution.
- So we use an $\ell^p$ norm of the coefficients of $x$ with respect to an orthonormal basis $\{\varphi_i\}_{i \in \mathbb{N}}$, with $1 \le p \le 2$.
- Decreasing $p$ from 2 to 1, we increase the penalty on small coefficients and decrease the penalty on large coefficients, i.e. we encourage representations with fewer, but larger, coefficients with respect to the orthonormal basis $\{\varphi_i\}_{i \in \mathbb{N}}$.
- Generally, we restrict ourselves to $p \ge 1$, since the functional is no longer convex for $p < 1$.
The Tikhonov Functional with $\ell^1$ Penalty
$\Psi(x) := \|Tx - y^\delta\|^2 + \alpha \sum_i |\langle \varphi_i, x \rangle|$
Denote by $x_\alpha^\delta$ the minimizer of $\Psi(x)$.
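How this minimizer is computed is deferred to the final slide, but for context here is a rough sketch of one standard approach, iterative soft-thresholding (ISTA), under the simplifying assumption that $\{\varphi_i\}$ is the canonical basis, so the coefficients of $x$ are just its entries; the constants and the toy data are my own:

```python
import numpy as np

def soft_threshold(z, t):
    """Componentwise soft-thresholding, the proximal map of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(T, y_delta, alpha, n_iter=500):
    """Approximately minimize ||T x - y_delta||^2 + alpha * ||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(T, 2) ** 2       # with this step the update matches the standard ISTA scheme
    x = np.zeros(T.shape[1])
    for _ in range(n_iter):
        grad = T.T @ (T @ x - y_delta)           # half of the gradient of the data-fit term
        x = soft_threshold(x - step * grad, 0.5 * alpha * step)
    return x

# Toy example: a sparse x_true observed through a random T with small noise
rng = np.random.default_rng(5)
T = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y_delta = T @ x_true + 1e-2 * rng.standard_normal(30)

x_hat = ista(T, y_delta, alpha=0.1)
print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-3)[0])
```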

42-44 Tikhonov Regularization with Sparsity Constraints
Definition. For $x^\dagger \in X$ and a regularization term $R_\alpha$, $x^\dagger$ is an $R_\alpha$-minimizing solution if $T x^\dagger = y$ and $R_\alpha(x^\dagger) = \min \{ R_\alpha(x) : Tx = y \}$.
Recall, $R_\alpha(x) := \alpha \sum_i |\langle \varphi_i, x \rangle|$.
Proposition (Grasmair, Haltmeier, Scherzer). Assume that $T$ is injective (or that the finite basis injectivity property holds); then there exist a unique minimizer $x_\alpha^\delta$ of $\Psi(x)$ and a unique $R_\alpha$-minimizing solution $x^\dagger$. For $y \in \mathrm{range}(T)$ and $\|y - y^\delta\| \le \delta$: if $\alpha$ satisfies $\lim_{\delta \to 0} \alpha(\delta) = 0$ and $\lim_{\delta \to 0} \frac{\delta}{\alpha(\delta)} = 0$, then $\lim_{\delta \to 0} x_{\alpha(\delta)}^\delta = x^\dagger$.

45-47 Tikhonov Regularization with Sparsity Constraints
Theorem (Grasmair, Haltmeier, Scherzer). Assume the source condition $\partial R_\alpha(x^\dagger) \cap \mathrm{range}(T^*) \neq \emptyset$ holds, that the finite basis injectivity property holds, and that $Tx = y$ has an $R_\alpha$-minimizing solution that is sparse with respect to $\{\varphi_i\}_{i \in \mathbb{N}}$. Then for the parameter choice strategy $\alpha \sim \delta$, we have
$\|x_\alpha^\delta - x^\dagger\| = O(\delta)$
In fact, Grasmair, Haltmeier, Scherzer also prove that for $1 \le p \le 2$, if we know in advance that the solution is sparse in the basis $\{\varphi_i\}_{i \in \mathbb{N}}$, then we obtain $\|x_\alpha^\delta - x^\dagger\| = O(\delta^{1/p})$.
We have not yet answered how to find this minimizer $x_\alpha^\delta$; Jack will discuss this in his presentation.
