Regularization and Inverse Problems

Caroline Sieger
Host Institution: Universität Bremen
Home Institution: Clemson University
August 5, 2009

Outline

1. Preliminaries
   - The Inverse Problem
   - The Moore-Penrose Generalized Inverse
   - Eigensystems and Singular Systems
2. Regularization
   - Classical Tikhonov Regularization
   - Tikhonov Regularization with Sparsity Constraints

Preliminaries

Notation:
- $T : X \to Y$ is a bounded linear operator
- $K : X \to Y$ is a compact linear operator
- $X$ and $Y$ are Hilbert spaces

Definition. A problem is ill-posed if one or more of the following holds:
- a solution does not exist
- the solution is not unique
- the solution does not depend continuously on the data

The Inverse Problem

Find $x \in X$ such that $Tx = y$.

There may be some problems with this:
- $T^{-1}$ may not exist
- $NS(T) \neq \{0\}$ ($\Rightarrow$ non-unique solutions)

Thus, we seek an approximate solution.

Types of Solutions
- $x \in X$ is a least-squares solution of $Tx = y$ if $\|Tx - y\| = \inf\{\|Tz - y\| : z \in X\}$.
- $x \in X$ is a best-approximate solution of $Tx = y$ if $x$ is a least-squares solution and $\|x\| = \inf\{\|z\| : z \text{ is a least-squares solution of } Tx = y\}$.

The Moore-Penrose Generalized Inverse

Definition (Moore-Penrose generalized inverse). $T^\dagger$ is the unique linear extension of $\tilde{T}^{-1}$ to $\mathrm{dom}(T^\dagger) := \mathrm{range}(T) \oplus \mathrm{range}(T)^\perp$ with $NS(T^\dagger) = \mathrm{range}(T)^\perp$, where $\tilde{T} := T|_{NS(T)^\perp} : NS(T)^\perp \to \mathrm{range}(T)$.

If $P$ and $Q$ are the orthogonal projectors onto $NS(T)$ and $\overline{\mathrm{range}(T)}$, then
1. $T T^\dagger T = T$
2. $T^\dagger T T^\dagger = T^\dagger$
3. $T^\dagger T = I - P$
4. $T T^\dagger = Q|_{\mathrm{dom}(T^\dagger)}$

$T^\dagger$ is bounded if and only if $\mathrm{range}(T)$ is closed.

If $y \in \mathrm{dom}(T^\dagger)$, then there exists a unique best-approximate solution $x^\dagger := T^\dagger y$.

Definition (Gaussian normal equation). For $y \in \mathrm{dom}(T^\dagger)$, $x \in X$ is a least-squares solution of $Tx = y$ if and only if $T^*Tx = T^*y$.
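To make these properties concrete, here is a minimal finite-dimensional sketch (an illustration added to this transcription, not part of the original slides); the matrix $T$ and data $y$ are arbitrary demonstration choices, and NumPy's pinv computes $T^\dagger$ via the SVD.

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 0.0]])   # rank 1: NS(T) != {0}, and range(T) is a proper subspace
y = np.array([1.0, 0.0, 1.0])  # not in range(T): no classical solution exists

T_pinv = np.linalg.pinv(T)   # Moore-Penrose generalized inverse T†
x_dag = T_pinv @ y           # best-approximate solution x† = T† y

# x† is the minimal-norm least-squares solution; lstsq returns the same vector.
x_ls, *_ = np.linalg.lstsq(T, y, rcond=None)
assert np.allclose(x_dag, x_ls)

# The Moore-Penrose identities from the slide:
assert np.allclose(T @ T_pinv @ T, T)            # T T† T = T
assert np.allclose(T_pinv @ T @ T_pinv, T_pinv)  # T† T T† = T†
# T† T = I - P and T T† = Q are orthogonal projectors, hence symmetric:
assert np.allclose(T_pinv @ T, (T_pinv @ T).T)
assert np.allclose(T @ T_pinv, (T @ T_pinv).T)
```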

Eigensystems and Singular Systems

Definition. A selfadjoint $K$ has an eigensystem $(\lambda_n; v_n)$, where the $\lambda_n$ are the non-zero eigenvalues and the $v_n$ are corresponding eigenvectors. We may diagonalize $K$ by $Kx = \sum_{n=1}^\infty \lambda_n \langle x, v_n \rangle v_n$ for all $x \in X$.

Definition. A non-selfadjoint $K$ has a singular system $(\sigma_n; v_n, u_n)$, where $K^*$ is the adjoint of $K$ and
- $\{\sigma_n^2\}_{n \in \mathbb{N}}$ are the non-zero eigenvalues of $K^*K$ (and of $KK^*$)
- $\{v_n\}_{n \in \mathbb{N}}$ are eigenvectors of $K^*K$
- $\{u_n\}_{n \in \mathbb{N}}$ are eigenvectors of $KK^*$, defined by $u_n := Kv_n / \|Kv_n\|$

Properties of a singular system:
- $Kv_n = \sigma_n u_n$
- $K^*u_n = \sigma_n v_n$
- $Kx = \sum_{n=1}^\infty \sigma_n \langle x, v_n \rangle u_n$
- $K^*y = \sum_{n=1}^\infty \sigma_n \langle y, u_n \rangle v_n$

Further, $K$ has finite-dimensional range if and only if $K$ has finitely many singular values, in which case the infinite series involving singular values degenerate to finite sums.

Theorem (Engl, Hanke, Neubauer). For a compact linear operator $K$ with singular system $(\sigma_n; v_n, u_n)$ and $y \in Y$:
1. $y \in \mathrm{dom}(K^\dagger) \iff \sum_{n=1}^\infty \frac{|\langle y, u_n \rangle|^2}{\sigma_n^2} < \infty$ (Picard criterion for the existence of a best-approximate solution)
2. For $y \in \mathrm{dom}(K^\dagger)$: $K^\dagger y = \sum_{n=1}^\infty \frac{\langle y, u_n \rangle}{\sigma_n} v_n$
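As a finite-dimensional illustration (again an added sketch, with an arbitrary random matrix standing in for $K$): NumPy's SVD returns exactly the triples $(\sigma_n; v_n, u_n)$, and the expansion in part 2 can be summed directly.

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((5, 4))  # stand-in for a compact operator

# numpy: K = U @ diag(s) @ Vt, so columns of U are u_n and rows of Vt are v_n
U, s, Vt = np.linalg.svd(K, full_matrices=False)
n = 0
assert np.allclose(K @ Vt[n], s[n] * U[:, n])    # K v_n = sigma_n u_n
assert np.allclose(K.T @ U[:, n], s[n] * Vt[n])  # K* u_n = sigma_n v_n

# K† y via the singular-value expansion: sum_n <y, u_n>/sigma_n * v_n
y = rng.standard_normal(5)
x_dag = sum((U[:, i] @ y) / s[i] * Vt[i] for i in range(len(s)))
assert np.allclose(x_dag, np.linalg.pinv(K) @ y)
```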

Regularization

Recall: we seek to approximate $x^\dagger := T^\dagger y$.

Normally we only have an approximation of $y$, i.e. $y^\delta$ such that $\|y^\delta - y\| \le \delta$. In the ill-posed case, $T^\dagger y^\delta$ is not a good approximation of $x^\dagger$ because of the unboundedness of $T^\dagger$.

We seek an approximation $x_\alpha^\delta$ of $x^\dagger$ such that
1. $x_\alpha^\delta$ depends continuously on the noisy data $y^\delta$ (this allows stable computation of $x_\alpha^\delta$), and
2. as the noise level $\delta \to 0$ and for an appropriate choice of $\alpha$, $x_\alpha^\delta \to x^\dagger$.
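The instability is easy to see numerically. The following sketch (an illustrative synthetic discretization, assuming singular values $\sigma_n = 2^{-n}$ to mimic severe ill-posedness) shows the noise component $\langle y^\delta - y, u_n \rangle$ being amplified by the factor $1/\sigma_n$ in the expansion for $T^\dagger y^\delta$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Synthetic operator T = U diag(s) V^T with sigma_k = 2^{-k}
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
T = U @ (s[:, None] * V.T)

x_true = V[:, 0]                 # exact solution, exact data y = T x_true
delta = 1e-8
noise = rng.standard_normal(n)
y_delta = T @ x_true + delta * noise / np.linalg.norm(noise)

# T is invertible here, so T† y^delta = T^{-1} y^delta; the noise in the
# k-th singular component is multiplied by 1/sigma_k, up to 2^{49}.
x_naive = np.linalg.solve(T, y_delta)
print(np.linalg.norm(x_naive - x_true))  # enormous, despite delta = 1e-8
```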

Definition. Let $\alpha_0 \in (0, \infty]$ and, for every $\alpha \in (0, \alpha_0]$, let $R_\alpha : Y \to X$ be a continuous operator. The family $\{R_\alpha\}$ is a regularization if, for every $y \in \mathrm{dom}(T^\dagger)$, there exists a parameter choice rule $\alpha = \alpha(\delta, y^\delta)$ such that

  $\lim_{\delta \to 0} \sup\{ \|R_{\alpha(\delta, y^\delta)} y^\delta - T^\dagger y\| : y^\delta \in Y, \|y^\delta - y\| \le \delta \} = 0$   (1)

with $\alpha : \mathbb{R}^+ \times Y \to (0, \alpha_0)$ such that

  $\lim_{\delta \to 0} \sup\{ \alpha(\delta, y^\delta) : y^\delta \in Y, \|y^\delta - y\| \le \delta \} = 0$.   (2)

For a specific $y \in \mathrm{dom}(T^\dagger)$, a pair $(R_\alpha, \alpha)$ is a regularization method if (1) and (2) hold.
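A classical concrete instance of such a family $\{R_\alpha\}$ is truncated SVD; the sketch below is added for illustration only (the talk itself develops Tikhonov regularization, introduced shortly), and the sufficiency claim in the docstring is the standard one for spectral cut-off methods.

```python
import numpy as np

def tsvd(T, y_delta, alpha):
    """R_alpha y^delta: keep only singular components with sigma_n >= alpha.

    Each R_alpha is bounded with ||R_alpha|| <= 1/alpha; an a-priori rule
    with alpha(delta) -> 0 and delta/alpha(delta) -> 0 then satisfies the
    convergence requirement (1).
    """
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    keep = s >= alpha
    return Vt[keep].T @ ((U[:, keep].T @ y_delta) / s[keep])
```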

If the parameter choice rule does not depend on $y^\delta$, we say it is a-priori and write $\alpha = \alpha(\delta)$. Otherwise, it is an a-posteriori parameter choice rule.

Proposition (Engl, Hanke, Neubauer). For an a-priori parameter choice rule and linear $R_\alpha$, $(R_\alpha, \alpha)$ is convergent if and only if $\lim_{\delta \to 0} \alpha(\delta) = 0$ and $\lim_{\delta \to 0} \delta \|R_{\alpha(\delta)}\| = 0$.

Define the source sets
$X_{\mu,\rho} := \{ x \in X : x = (T^*T)^\mu \omega, \|\omega\| \le \rho \}$, $\mu > 0$.

Proposition (Engl, Hanke, Neubauer). If $\mathrm{range}(T)$ is non-closed, then for $x^\dagger \in X_{\mu,\rho}$ no regularization algorithm can converge to zero faster than $\delta^{\frac{2\mu}{2\mu+1}} \rho^{\frac{1}{2\mu+1}}$ as $\delta \to 0$.

For some $M \subseteq X$, define the worst-case error

  $\Delta(\delta, M, R_\alpha) := \sup\{ \|R_\alpha y^\delta - x\| : x \in M, y^\delta \in Y, \|Tx - y^\delta\| \le \delta \}$.

Definition. We say $(R_\alpha, \alpha)$ is optimal in $X_{\mu,\rho}$ if $\Delta(\delta, X_{\mu,\rho}, R_\alpha) = \delta^{\frac{2\mu}{2\mu+1}} \rho^{\frac{1}{2\mu+1}}$ holds for all $\delta > 0$. $(R_\alpha, \alpha)$ is of optimal order in $X_{\mu,\rho}$ if there exists $C \ge 1$ such that $\Delta(\delta, X_{\mu,\rho}, R_\alpha) \le C \delta^{\frac{2\mu}{2\mu+1}} \rho^{\frac{1}{2\mu+1}}$ for all $\delta > 0$.

Theorem (Engl, Hanke, Neubauer). Let $\tau > \tau_0 \ge 1$. If $(R_\alpha, \alpha_\tau)$ is of optimal order in $X_{\mu,\rho}$ for some $\mu > 0$ and $\rho > 0$, then all $(R_\alpha, \alpha_\tau)$ with $\tau > \tau_0$ are convergent for $y \in \mathrm{range}(T)$ and of optimal order in $X_{\nu,\rho}$ with $0 < \nu \le \mu$ and $\rho > 0$.

Classical Tikhonov Regularization

The Tikhonov Functional: $\Phi(x) := \|Tx - y^\delta\|^2 + \alpha \|x\|^2$.

Theorem (Engl, Hanke, Neubauer). $x_\alpha^\delta := (T^*T + \alpha I)^{-1} T^* y^\delta$ is the unique minimizer of $\Phi(x)$.

Theorem (Engl, Hanke, Neubauer). Let $x_\alpha^\delta := (T^*T + \alpha I)^{-1} T^* y^\delta$, $y \in \mathrm{range}(T)$, and $\|y - y^\delta\| \le \delta$. If $\alpha = \alpha(\delta)$ is chosen such that $\lim_{\delta \to 0} \alpha(\delta) = 0$ and $\lim_{\delta \to 0} \frac{\delta^2}{\alpha(\delta)} = 0$, then $\lim_{\delta \to 0} x_{\alpha(\delta)}^\delta = T^\dagger y$.
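A minimal sketch of this minimizer formula in action, reusing the illustrative ill-posed discretization from above; the value $\alpha = 10^{-6}$ is an ad-hoc choice for demonstration, not a principled parameter choice rule.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
T = U @ (s[:, None] * V.T)
x_true = V[:, 0]
delta = 1e-8
noise = rng.standard_normal(n)
y_delta = T @ x_true + delta * noise / np.linalg.norm(noise)

# x_alpha^delta = (T*T + alpha I)^{-1} T* y^delta
alpha = 1e-6
x_tik = np.linalg.solve(T.T @ T + alpha * np.eye(n), T.T @ y_delta)
print(np.linalg.norm(x_tik - x_true))  # small, unlike the naive T^{-1} y^delta
```

The spectral view of this computation: Tikhonov replaces the amplification factors $1/\sigma_n$ in the expansion for $T^\dagger y^\delta$ by the bounded filter factors $\sigma_n/(\sigma_n^2 + \alpha)$.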

Proposition (Engl, Hanke, Neubauer). As long as $\mu \le 1$, Tikhonov regularization with the a-priori choice rule $\alpha \sim \left(\frac{\delta}{\rho}\right)^{\frac{2}{2\mu+1}}$ is of optimal order in $X_{\mu,\rho}$. The best possible convergence rate is attained at $\mu = 1$: $\alpha \sim \left(\frac{\delta}{\rho}\right)^{\frac{2}{3}}$ and $\|x_\alpha^\delta - x^\dagger\| = O(\delta^{2/3})$.

Tikhonov Regularization with Sparsity Constraints

In some applications we require a sparse solution, so we use an $\ell^p$ norm of the coefficients of $x$ with respect to an orthonormal basis $\{\varphi_i\}_{i \in \mathbb{N}}$, with $1 \le p \le 2$.

Decreasing $p$ from 2 to 1 increases the penalty on small coefficients and decreases the penalty on large coefficients; i.e., we encourage representations with large but fewer coefficients with respect to the orthonormal basis $\{\varphi_i\}_{i \in \mathbb{N}}$.

Generally, we restrict ourselves to $p \ge 1$, since the functional is no longer convex for $p < 1$.

The Tikhonov Functional with $\ell^1$ Penalty: $\Psi(x) := \|Tx - y^\delta\|^2 + \alpha \sum_i |\langle \varphi_i, x \rangle|$.
Denote by $x_\alpha^\delta$ the minimizer of $\Psi(x)$.
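One added aside (not from the slides) on why the $\ell^1$ penalty produces sparsity: in the special case $T = I$ with $\varphi_i$ the standard basis, $\Psi$ decouples across coefficients, and coefficient-wise calculus gives the exact minimizer as soft-thresholding at $\alpha/2$. The general case requires an iterative scheme, which is deferred to the following presentation.

```python
import numpy as np

def l1_minimizer_identity(y_delta, alpha):
    """Exact minimizer of ||x - y^delta||^2 + alpha * sum_i |x_i| (Psi with T = I).

    Coefficient-wise: x_i = sign(y_i) * max(|y_i| - alpha/2, 0).
    Small coefficients are set exactly to zero, so the solution is sparse.
    """
    return np.sign(y_delta) * np.maximum(np.abs(y_delta) - alpha / 2, 0.0)

# Zeros out the two small coefficients, shrinks the large ones by alpha/2:
print(l1_minimizer_identity(np.array([3.0, 0.1, -0.2, 1.5]), alpha=1.0))
```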

Definition. For $x^\dagger \in X$ and the penalty functional $R_\alpha$, $x^\dagger$ is an $R_\alpha$-minimizing solution if $Tx^\dagger = y$ and $R_\alpha(x^\dagger) = \min\{ R_\alpha(x) : Tx = y \}$.

Recall, $R_\alpha(x) := \alpha \sum_i |\langle \varphi_i, x \rangle|$.

Proposition (Grasmair, Haltmeier, Scherzer). Assume that $T$ is injective (or that the finite basis injectivity property holds). Then there exist a unique minimizer $x_\alpha^\delta$ of $\Psi(x)$ and a unique $R_\alpha$-minimizing solution $x^\dagger$. For $y \in \mathrm{range}(T)$ and $\|y - y^\delta\| \le \delta$, if $\alpha$ satisfies $\lim_{\delta \to 0} \alpha(\delta) = 0$ and $\lim_{\delta \to 0} \frac{\delta}{\alpha(\delta)} = 0$, then $\lim_{\delta \to 0} x_{\alpha(\delta)}^\delta = x^\dagger$.

Theorem (Grasmair, Haltmeier, Scherzer). Assume the source condition $\partial R_\alpha(x^\dagger) \cap \mathrm{range}(T^*) \neq \emptyset$, that the finite basis injectivity property holds, and that $Tx = y$ has an $R_\alpha$-minimizing solution that is sparse with respect to $\{\varphi_i\}_{i \in \mathbb{N}}$. Then for the parameter choice strategy $\alpha \sim \delta$ we have $\|x_\alpha^\delta - x^\dagger\| = O(\delta)$.

In fact, Grasmair, Haltmeier, and Scherzer also prove that for $1 \le p \le 2$, if we know in advance that the solution is sparse in the basis $\{\varphi_i\}_{i \in \mathbb{N}}$, then we obtain $\|x_\alpha^\delta - x^\dagger\| = O(\delta^{1/p})$.

We have not yet answered how to find the minimizer $x_\alpha^\delta$; Jack will discuss this in his presentation.