Tikhonov Regularization in Image Reconstruction with Kaczmarz Extended Algorithm


Paper supported by the PNCDI INFOSOC Grant 131/2004

Andrei Băutu (1), Elena Băutu (2), Constantin Popa (2)
(1) Mircea cel Bătrân Naval Academy, Constantza, Romania
(2) Ovidius University, Constantza, Romania

ASIM 2005 Conference, September 12-15, 2005, Erlangen, Germany

The LS formulation

- Consistent case: $Ax = b$, $A \in \mathbb{R}^{m \times n}$, $b \in R(A) \subset \mathbb{R}^m$.
- ART (Kaczmarz): for every $x^0 \in \mathbb{R}^n$, $\lim_{k \to \infty} x^k = x(x^0) \in S(A; b)$.
- Real-world applications: $b \mapsto \bar{b} = b + \delta b \notin R(A)$, so we solve $\|Ax - \bar{b}\| = \min!$, with solution set $LSS(A; \bar{b})$ and minimal-norm solution $x_{LS}$.
- ART: the limit $\lim_{k \to \infty} x^k = x(x^0)$ still exists, but (Popa/Zdunek, 2004) for every $x^0 \in \mathbb{R}^n$, $d\big(x(x^0), LSS(A; \bar{b})\big) = c > 0$.
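
For concreteness, here is a minimal NumPy sketch of one classical Kaczmarz (ART) sweep, the building block the slide refers to; the dense-array representation and the relaxation parameter `omega` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def kaczmarz_sweep(A, b, x, omega=1.0):
    """One full sweep of the classical Kaczmarz (ART) iteration:
    successively project x onto the hyperplane of each row equation."""
    for i in range(A.shape[0]):
        a_i = A[i]
        # relaxed orthogonal projection onto {x : <a_i, x> = b_i}
        x = x + omega * (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x
```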

The KE algorithm

KE: $x^0 \in \mathbb{R}^n$, $y^0 = \bar{b}$; for $k = 0, 1, 2, \dots$
$$y^{k+1} = \Phi(\alpha; y^k), \quad b^{k+1} = \bar{b} - y^{k+1}, \quad x^{k+1} = F(\omega; b^{k+1}; x^k)$$
For every $x^0 \in \mathbb{R}^n$ and $\alpha, \omega \in (0, 2)$: $\lim_{k \to \infty} x^k = x(x^0) \in LSS(A; \bar{b})$ (Popa, 1998).

R1: If $\bar{b} = b + \delta b$ with $\delta b = \delta b_A + \delta b_A^{\perp} \in R(A) \oplus N(A^t)$, then $\delta b_A^{\perp}$ is completely eliminated during the KE iterations. The remaining problem: $\delta b_A$.
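
The slide leaves $\Phi$ and $F$ abstract; in the extended Kaczmarz literature $\Phi$ is a column-wise sweep (driving $y$ toward the $N(A^t)$ component of $\bar{b}$) and $F$ is a row-wise Kaczmarz sweep. A sketch under that reading, reusing the dense-array conventions above:

```python
def ke_step(A, b_bar, x, y, alpha=1.0, omega=1.0):
    """One KE iteration under the column/row-sweep reading of Phi and F."""
    # Phi(alpha; y): column-wise sweep; y tends to the N(A^t) part of b_bar
    for j in range(A.shape[1]):
        c_j = A[:, j]
        y = y - alpha * (c_j @ y) / (c_j @ c_j) * c_j
    b_k = b_bar - y          # b^{k+1} = b_bar - y^{k+1}
    # F(omega; b_k; x): row-wise Kaczmarz sweep against the corrected rhs
    for i in range(A.shape[0]):
        a_i = A[i]
        x = x + omega * (b_k[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x, y
```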

Perturbations

For $Ax = b$: $x_{LS} = A^+ b$. For $\|Ax - \bar{b}\| = \min!$:
$$\bar{x}_{LS} = A^+ \bar{b} = A^+\big(b + \delta b_A + \delta b_A^{\perp}\big) = A^+\big(b + \delta b_A\big), \quad \text{thus} \quad \bar{x}_{LS} - x_{LS} = A^+ \delta b_A.$$
The small singular values of $A$ (ill-conditioning) can make $\|A^+ \delta b_A\|$ large, even though $\|\delta b_A\|$ is small.
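
A small synthetic demonstration of this amplification (the matrix and its singular spectrum are fabricated purely for illustration): a perturbation of size $10^{-6}$ aligned with a singular vector whose singular value is $10^{-8}$ is blown up by a factor of $100$:

```python
import numpy as np

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((50, 50)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
sigma = np.logspace(0, -8, 20)            # singular values decaying to 1e-8
A = U[:, :20] * sigma @ V.T               # ill-conditioned 50 x 20 matrix

delta_bA = 1e-6 * U[:, 19]                # tiny perturbation inside R(A)
err = np.linalg.pinv(A) @ delta_bA        # = A^+ delta_bA
print(np.linalg.norm(delta_bA), np.linalg.norm(err))   # ~1e-6 vs ~1e2
```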

Tikhonov regularization

$$\|Ax - \bar{b}\|^2 + \gamma^2 \langle Rx, x \rangle = \min! \qquad (*)$$
with $R$ an $n \times n$ symmetric positive semidefinite matrix. For an appropriate construction of $R$ and suitable values of $\gamma$ we get
$$\|x_{LS}(\gamma) - x_{LS}\| = O\big(\|\delta b_A\|\big).$$
Problems:
1. How to construct $R$?
2. How to solve $(*)$ with KE?
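
Before the KE-based solvers, a direct reference solution of $(*)$ is useful for checking the iterative results. This is the standard regularized normal-equations identity, not a method from the slides:

```python
import numpy as np

def tikhonov_solution(A, b_bar, R, gamma):
    """Reference solution of ||Ax - b||^2 + gamma^2 <Rx, x> = min!,
    i.e. the regularized normal equations (A^t A + gamma^2 R) x = A^t b."""
    return np.linalg.solve(A.T @ A + gamma**2 * R, A.T @ b_bar)
```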

Construction of the regularization matrix R

$$(R)_{ij} = \begin{cases} -w_h, & \text{if } j \in H_i \\ -w_v, & \text{if } j \in V_i \\ -w_d, & \text{if } j \in D_i \\ 0, & \text{otherwise} \end{cases} \qquad (R)_{ii} = -\sum_{j \neq i} (R)_{ij} + \epsilon$$

where $H_i$, $V_i$, $D_i$ are the sets of horizontally, vertically, and diagonally neighbouring pixels of pixel $P_i$.

Note:
1. $\epsilon > 0 \Rightarrow R$ is an SPD matrix (RKE-1 algorithm);
2. $\epsilon = 0 \Rightarrow R$ is symmetric and positive semidefinite (RKE-2 algorithm).
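
A sketch of this construction for a small pixel grid; the sign convention (negative neighbour couplings, every row summing to $\epsilon$) is my reading of the slide, chosen so that $\epsilon > 0$ indeed gives an SPD matrix, and the weight values are placeholders:

```python
import numpy as np

def build_R(rows, cols, w_h=1.0, w_v=1.0, w_d=0.5, eps=1e-3):
    """Regularization matrix for a rows x cols pixel grid: entry -w for each
    horizontal/vertical/diagonal neighbour pair, diagonal chosen so that
    every row sums to eps (eps > 0 -> SPD, eps = 0 -> PSD)."""
    n = rows * cols
    R = np.zeros((n, n))
    idx = lambda r, c: r * cols + c
    for r in range(rows):
        for c in range(cols):
            i = idx(r, c)
            # visit each neighbour pair once: right, down, down-right, down-left
            for dr, dc, w in ((0, 1, w_h), (1, 0, w_v), (1, 1, w_d), (1, -1, w_d)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    j = idx(rr, cc)
                    R[i, j] = R[j, i] = -w
    R[np.arange(n), np.arange(n)] = -R.sum(axis=1) + eps
    return R
```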

First regularized KE version (RKE-1)

$R$ SPD $\Rightarrow R = L L^t$ (Cholesky factorization), so $(*)$ becomes
$$\|\hat{A} x - \hat{b}\| = \min!, \qquad \hat{A} = \begin{bmatrix} A \\ \gamma L^t \end{bmatrix}, \quad \hat{b} = \begin{bmatrix} \bar{b} \\ 0 \end{bmatrix}. \qquad (*1)$$
RKE-1 = just apply KE to $(*1)$.

R2: For every $x^0 \in \mathbb{R}^n$, $\alpha, \omega \in (0, 2)$, $\gamma \in \mathbb{R}$: $\lim_{k \to \infty} x^k = x(x^0; \gamma)$; for $x^0 = 0$, $x(0; \gamma) = x_{LS}(\gamma)$.
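
RKE-1 therefore needs no new solver, only the augmented system; a sketch, again assuming dense NumPy arrays, with the hypothetical `ke_step` from above as the iteration engine:

```python
import numpy as np

def rke1_setup(A, b_bar, R, gamma):
    """Build the augmented system (*1): factor R = L L^t by Cholesky
    (requires eps > 0 so that R is SPD), then stack A over gamma * L^t
    and pad the right-hand side with zeros."""
    L = np.linalg.cholesky(R)
    A_hat = np.vstack([A, gamma * L.T])
    b_hat = np.concatenate([b_bar, np.zeros(A.shape[1])])
    return A_hat, b_hat  # now run plain KE on (A_hat, b_hat)
```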

Second regularized KE version (RKE-2)

$R$ symmetric and positive semidefinite. Instead of $(*)$, consider the weighted problem
$$\|Ax - \bar{b}\|_W^2 + \gamma^2 \langle Rx, x \rangle = \min!, \qquad W = \mathrm{diag}\left(\frac{1}{\|a^1\|^2}, \dots, \frac{1}{\|a^m\|^2}\right) \qquad (**)$$
RKE-2: $x^0 \in \mathbb{R}^n$, $y^0 = \bar{b}$; for $k = 0, 1, 2, \dots$
$$y^{k+1} = \Phi(\alpha; y^k), \quad b^{k+1} = \bar{b} - y^{k+1}, \quad x^{k+1} = F(\omega; b^{k+1}; x^k) - \gamma^2 R x^k$$
Note: unfortunately, there is no convergence proof (yet).
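
Under the same reading of $\Phi$ and $F$ as before, one RKE-2 iteration is the KE step plus the correction term; a sketch reusing the hypothetical `ke_step` above (note that the correction uses the old iterate $x^k$):

```python
def rke2_step(A, b_bar, x, y, R, gamma, alpha=1.0, omega=1.0):
    """One RKE-2 iteration: the KE step, with the x-update corrected by
    the regularization term -gamma^2 * R @ x (evaluated at the old x)."""
    x_new, y_new = ke_step(A, b_bar, x, y, alpha=alpha, omega=omega)
    return x_new - gamma**2 * (R @ x), y_new
```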

Numerical experiments

Parameters of the reconstruction procedure (the $\epsilon$ values are assigned to match the SPD/PSD requirements of the construction above):

Parameter | KE  | RKE-1  | RKE-2
alpha     | 0.5 | 0.5    | 0.5
omega     | 0.8 | 0.8    | 0.8
gamma     | -   | 5e-2   | 1e-2
epsilon   | -   | 1e-3   | 0.0

Initialization:
1. zero initialization: $x^0_i = 0$;
2. Herman initialization: $x^0_i = \dfrac{\sum_{i=1}^m \bar{b}_i}{\sum_{i=1}^m \sum_{j=1}^n (A)_{ij}}$.

Numerical experiments (2)

Reconstruction error: $e_{x_{ex}}(x) = \|x - x_{ex}\|$.

Algorithm | Initialization | Test 1 | Test 2
KE        | Zero           | 0.7959 | 11.5644
KE        | Herman         | 0.7949 | 11.5763
RKE-1     | Zero           | 0.7187 | 11.3980
RKE-1     | Herman         | 0.7187 | 11.3980
RKE-2     | Zero           | 0.1125 | 8.3250
RKE-2     | Herman         | 0.1127 | 8.3507

Test 1 reconstruction

Test 2 reconstruction

Test 1 reconstruction errors [error plots: zero initialization $x^0_i = 0$ vs. Herman initialization]

Test 2 reconstruction errors [error plots: zero initialization $x^0_i = 0$ vs. Herman initialization]

Thank you!