Iterative Methods for Ill-Posed Problems

Based on joint work with Serena Morigi, Fiorella Sgallari, and Andriy Shyshkov. Salt Lake City, May 2007.

Outline:
- Inverse and ill-posed problems
- Tikhonov regularization
- Regularization by truncated iteration
- Multigrid methods
- Application to image restoration

Inverse problems

Inverse problems arise when one seeks to determine the cause of an observed effect.
- Helioseismology: determine the structure of the sun from measurements made on earth or in space.
- Medical imaging, e.g., electrocardiographic imaging, computerized tomography.
- Image restoration: determine the unavailable exact image from an available contaminated version.
Inverse problems often are ill-posed.

Ill-posed problems

A problem is said to be ill-posed if it has at least one of the following properties:
- the problem does not have a solution,
- the problem does not have a unique solution,
- the solution does not depend continuously on the data.

Linear discrete ill-posed problems Ax = b arise from the discretization of linear ill-posed problems (Fredholm integral equations of the first kind) or arise naturally in discrete form (image restoration). The matrix A is of ill-determined rank, possibly singular, and the system may be inconsistent. The right-hand side b represents available data that generally is contaminated by an error.

Available contaminated, possibly inconsistent, linear system:
Ax = b.    (1)
Unavailable associated consistent linear system with error-free right-hand side:
Ax = ˆb.    (2)
Let ˆx denote the desired solution of (2), e.g., the minimal-norm solution.
Task: Determine an approximate solution of (1) that is a good approximation of ˆx.

Define the error (noise) e = b − ˆb and let δ = ‖e‖. How much damage can a little noise really do? How much noise requires the use of special techniques?

Example: Fredholm integral equation of the 1st kind

∫_0^π exp(−st) x(t) dt = 2 sinh(s)/s,   0 ≤ s ≤ π/2.

Determine the solution x(t) = sin(t). Discretize the integral by a Galerkin method using piecewise constant functions. Code baart from Regularization Tools.

This gives a linear system of equations Ax = ˆb with A ∈ R^{200×200}, ˆb ∈ R^{200}; A is numerically singular. Let the noise vector e have normally distributed entries with mean zero, scaled so that δ = ‖e‖ = 10^{-3} ‖ˆb‖, and set b = ˆb + e, i.e., 0.1% relative noise.
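As a rough illustration of how such contaminated data can be generated, here is a minimal Python/NumPy sketch; the Galerkin matrix A and error-free right-hand side b_hat are assumed to come from a discretization such as baart and are not reproduced here.

```python
import numpy as np

def add_noise(b_hat, rel_noise=1e-3, seed=0):
    """Return b = b_hat + e, where e has normally distributed entries with
    mean zero, scaled so that delta = ||e|| = rel_noise * ||b_hat||."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(b_hat.shape)
    e *= rel_noise * np.linalg.norm(b_hat) / np.linalg.norm(e)
    return b_hat + e, np.linalg.norm(e)

# usage, with b_hat given by the discretization:
# b, delta = add_noise(b_hat, rel_noise=1e-3)
```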

[Figure: right-hand side, with no added noise and with added noise/signal = 1e-3.]

[Figure: exact solution.]

[Figure: solution of Ax = b for noise level 10^{-3}; the computed entries oscillate with magnitude on the order of 10^{13}.]

Tikhonov regularization

Solve the Tikhonov minimization problem

min_x { ‖Ax − b‖² + µ ‖Lx‖² },

where A ∈ R^{m×n}; L ∈ R^{p×n}, p ≤ n, is the regularization operator (common choices: L = I or a finite difference operator); and µ > 0 is the regularization parameter to be determined.

Normal equations associated with the Tikhonov minimization problem:

(A^T A + µ L^T L) x = A^T b.

They have a unique solution iff N(A) ∩ N(L) = {0}.

It is important to determine a suitable value of µ. For L = I the Tikhonov solution is

x_µ := (A^T A + µ I)^{-1} A^T b,

with lim_{µ→0} x_µ = A^† b and lim_{µ→∞} x_µ = 0.
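For problems small enough to form the normal equations explicitly, the Tikhonov solution can be computed directly; a minimal sketch (L defaults to the identity; A, b, and mu are assumed given):

```python
import numpy as np

def tikhonov(A, b, mu, L=None):
    """Solve the regularized normal equations (A^T A + mu L^T L) x = A^T b."""
    if L is None:
        L = np.eye(A.shape[1])          # L = I by default
    return np.linalg.solve(A.T @ A + mu * (L.T @ L), A.T @ b)
```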

Regularization by truncated iteration

CGNR: CG applied to A^T A x = A^T b. Define the Krylov subspace

K_k(A^T A, A^T b) = span{ A^T b, (A^T A) A^T b, ..., (A^T A)^{k-2} A^T b, (A^T A)^{k-1} A^T b }.

Then x_k ∈ K_k(A^T A, A^T b) and

‖A x_k − b‖ = min_{x ∈ K_k(A^T A, A^T b)} ‖A x − b‖.

Therefore the discrepancies d_j = b − A x_j satisfy ‖b‖ ≥ ‖d_1‖ ≥ ... ≥ ‖d_k‖.
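A minimal sketch of the CGNR (CGLS) iteration in Python/NumPy, yielding the iterates x_k together with their residual norms ‖b − A x_k‖ (a dense matrix A is assumed for simplicity):

```python
import numpy as np

def cgnr(A, b, max_iter=100):
    """CGNR/CGLS for min ||Ax - b||; yields (x_k, ||b - A x_k||), k = 1, 2, ..."""
    x = np.zeros(A.shape[1])
    r = b.astype(float)                 # residual b - A x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(max_iter):
        q = A @ p
        qq = q @ q
        if qq == 0.0:                   # least-squares solution reached
            break
        alpha = gamma / qq
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
        yield x, np.linalg.norm(r)
```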

Stopping criterion: the discrepancy principle

Let α > 1 be fixed and ‖e‖ = ‖ˆb − b‖ = δ. The iterate x_k satisfies the discrepancy principle if ‖A x_k − b‖ ≤ α δ.

Stopping rule: terminate the iterations as soon as the iterate x_k satisfies

‖A x_k − b‖ ≤ α δ < ‖A x_{k−1} − b‖.

Denote the termination index by k_δ.
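Combining the CGNR sketch above with the discrepancy principle gives a simple stopping rule (alpha and the noise level delta are assumed known; this is a sketch, not the implementation used in the talk):

```python
import numpy as np

def cgnr_discrepancy(A, b, delta, alpha=1.1, max_iter=100):
    """Run CGNR and stop as soon as ||A x_k - b|| <= alpha * delta."""
    x, k = np.zeros(A.shape[1]), 0
    for k, (x, res) in enumerate(cgnr(A, b, max_iter), start=1):
        if res <= alpha * delta:
            break                        # discrepancy principle satisfied
    return x, k                          # k approximates the index k_delta
```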

An iterative method is a regularization method if

lim_{δ→0} sup_{‖e‖≤δ} ‖x_{k_δ} − ˆx‖ = 0.

CGNR is a regularization method; see Nemirovskii and Hanke.

A multilevel method

Consider

∫_Ω k(s, t) x(s) ds = b(t),   t ∈ Ω.

Let S_1 ⊂ S_2 ⊂ ... ⊂ S_k ⊂ L²(Ω) be nested subspaces,
R_i : L²(Ω) → S_i a restriction operator,
b_i = R_i b,  b_i^δ = R_i b^δ,  A_i = R_i A R_i,
P_i : S_{i−1} → S_i a prolongation operator.

Cascadic multilevel method:
- Solve the integral equation in S_1, map the solution to S_2,
- solve the integral equation in S_2 for a correction, map to S_3,
- solve the integral equation in S_3 for a correction, map to S_4, ...
Assume that ‖b − b^δ‖ = δ and ‖b_i − b_i^δ‖ ≤ cδ, c > 1, i = 1, 2, ...

Apply CGNR or MR-II in S_1. This yields iterates x_{1,j}, j = 1, 2, .... Terminate the iterations as soon as ‖A_1 x_{1,j} − b_1^δ‖ ≤ cδ. Proceed similarly on the higher levels.

Theorem: The multilevel method outlined is a regularization method.
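A schematic Python sketch of the cascadic loop, under the simplifying assumptions that the level operators A_i and contaminated data b_i^δ are given as lists (coarsest level first), that prolong maps a level-i vector to level i+1 (e.g., by linear interpolation), and that cgnr_discrepancy from the previous sketch is reused with threshold cδ on every level:

```python
def cascadic_multilevel(As, bs, delta, c, prolong, max_iter=100):
    """Cascadic multilevel sketch: solve on the coarsest level, then compute
    a correction on each finer level; every inner CGNR solve is terminated
    once the level residual drops below c * delta."""
    x, _ = cgnr_discrepancy(As[0], bs[0], delta, alpha=c, max_iter=max_iter)
    for A, b in zip(As[1:], bs[1:]):
        x = prolong(x)                   # map the iterate to the next finer level
        r = b - A @ x                    # residual on the finer level
        dx, _ = cgnr_discrepancy(A, r, delta, alpha=c, max_iter=max_iter)
        x = x + dx                       # add the correction
    return x
```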

Example: Baart integral equation discretized by the trapezoidal rule on a mesh with 1025 points.

One-grid CGNR:
δ/‖b‖        m(δ)    ‖x^δ_{8,m(δ)} − ˆx‖ / ‖ˆx‖
1·10^{-1}    2       0.3412
1·10^{-2}    3       0.1662
1·10^{-3}    3       0.1657
1·10^{-4}    4       0.1143

Multilevel CGNR:
δ/‖b‖        m_i(δ)                      ‖x^δ_{8,m_8(δ)} − ˆx‖ / ‖ˆx‖
1·10^{-1}    2, 1, 1, 1, 1, 1, 1, 1      0.2686
1·10^{-2}    2, 2, 1, 3, 1, 1, 1, 1      0.1110
1·10^{-3}    3, 3, 2, 1, 1, 1, 1, 1      0.1065
1·10^{-4}    4, 3, 3, 3, 2, 1, 1, 1      0.0669

[Figure: noise level 10^{-3}: CGNR (red curve), ML-CGNR (green curve), exact solution (blue curve).]

Example: Phillips integral equation discretized by the trapezoidal rule on a mesh with 1025 points.

One-grid CGNR:
δ/‖b‖        m(δ)    ‖x^δ_{8,m(δ)} − ˆx‖ / ‖ˆx‖
1·10^{-1}    3       0.0934
1·10^{-2}    4       0.0248
1·10^{-3}    4       0.0243
1·10^{-4}    11      0.0064

Multilevel CGNR:
δ/‖b‖        m_i(δ)                      ‖x^δ_{8,m_8(δ)} − ˆx‖ / ‖ˆx‖
1·10^{-1}    2, 1, 1, 1, 1, 1, 1, 1      0.0842
1·10^{-2}    5, 5, 3, 2, 1, 1, 1, 1      0.0343
1·10^{-3}    8, 6, 6, 4, 3, 1, 1, 1      0.0243
1·10^{-4}    9, 13, 9, 9, 5, 4, 3, 2     0.0076

Application to image restoration

Nonlinear Prolongation Operators

Return to Tikhonov regularization:

min_u ∫_Ω { (h ∗ u − f^δ)² + µ R(u) } dx.

Associated Euler-Lagrange (gradient flow) equation:

u_t = −h^∗ ∗ (h ∗ u − f^δ) + µ D(u),   u_0 = f^δ.

Examples:
R(u) = |∇u|²  ⇒  D(u) = ∇²u,
R(u) = |∇u|   ⇒  D(u) = div(∇u / |∇u|)   (TV operator).

We consider a weighted total variation operator,

D_1(u) = ∇ · ( ∇u / |∇u|_ɛ^q ),   |∇u|_ɛ^q = ( |∇u|² + ɛ² )^{q/2},

with 1 ≤ q ≤ 2, and a Perona-Malik operator,

D_2(u) = ∇ · ( g(|∇u|²) ∇u ),   g(s²) = 1 / (1 + s²/ρ²),

with g the diffusivity function.
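For illustration, a minimal sketch of one explicit time step of the Perona-Malik flow u_t = D_2(u) on a two-dimensional image (finite differences on a unit grid with zero-flux boundaries; the step size dt and contrast parameter rho are illustrative assumptions, not values from the talk):

```python
import numpy as np

def perona_malik_step(u, dt=0.1, rho=10.0):
    """One explicit Euler step of u_t = div( g(|grad u|^2) grad u ),
    with g(s^2) = 1 / (1 + s^2 / rho^2) and Neumann (zero-flux) boundaries."""
    # forward differences; appending the last row/column gives zero flux there
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    g = 1.0 / (1.0 + (ux**2 + uy**2) / rho**2)   # Perona-Malik diffusivity
    fx, fy = g * ux, g * uy                      # diffusive fluxes
    # divergence by backward differences (zero flux across the left/top edges)
    fx_prev = np.zeros_like(fx); fx_prev[:, 1:] = fx[:, :-1]
    fy_prev = np.zeros_like(fy); fy_prev[1:, :] = fy[:-1, :]
    return u + dt * ((fx - fx_prev) + (fy - fy_prev))
```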

The prolongation operators are determined by integration of u_t = D_j(u) for j = 1 or j = 2.

Example: Restoration of a 512 × 512-pixel image contaminated by 10% noise and Gaussian blur.

Original blur- and noise-free image

Blurred and noisy image

Restoration using multigrid with piecewise linear prolongation

Restoration using multigrid with D_1 prolongation (TV-norm)

Restoration using multigrid with D_2 prolongation (Perona-Malik)

Example: Contaminate the image by Gaussian blur and 0.5% noise. Restore with a 1-level or 3-level multigrid method. Then apply the edge detector from GIMP.

1-level CGNR

3-level multigrid with piecewise linear prolongation

1-level CGNR with D_2 applied after convergence

3-level multigrid with D_2 applied after convergence

Example: Original 256 × 256 image

Image contaminated by Gaussian blur and 0.5% noise

Restored image using prolongation D_2

More iterations...

Restored image using the nonlinear PDE model based on D_2