Solution methods for linear discrete ill-posed problems for color image restoration


BIT manuscript No. (will be inserted by the editor)

Solution methods for linear discrete ill-posed problems for color image restoration

A. H. Bentbib · M. El Guide · K. Jbilou · E. Onunwor · L. Reichel

Received: date / Accepted: date

Abstract This work discusses four algorithms for the solution of linear discrete ill-posed problems with several right-hand side vectors. These algorithms can be applied, for instance, to multi-channel image restoration when the image degradation model is described by a linear system of equations with multiple right-hand sides that are contaminated by errors. Two of the algorithms are block generalizations of the standard Golub–Kahan bidiagonalization method with the block size equal to the number of channels. One algorithm uses standard Golub–Kahan bidiagonalization without restarts for all right-hand sides. These schemes are compared to standard Golub–Kahan bidiagonalization applied to each right-hand side independently. Tikhonov regularization is used to avoid severe error propagation. Numerical examples illustrate the performance of these algorithms. Applications include the restoration of color images.

A. H. Bentbib
Faculté des Sciences et Techniques-Guéliz, Laboratoire de Mathématiques Appliquées et Informatique, Morocco. E-mail: a.bentbib@uca.ac.ma

M. El Guide
Faculté des Sciences et Techniques-Guéliz, Laboratoire de Mathématiques Appliquées et Informatique, Morocco, and Université Mohammed VI Polytechnique, FAB-LAB, UM6P, Benguerir, Morocco. E-mail: mohamed.elguide@um6p.ma, mohamed.elguide@edu.uca.ac.ma

K. Jbilou
Université du Littoral Côte d'Opale, L.M.P.A, ULCO, 50 rue F. Buisson, BP 699, F-62228 Calais-Cedex, France. E-mail: jbilou@univ-littoral.fr

E. Onunwor
Department of Mathematical Sciences, Kent State University, Kent, OH 44242, USA, and Department of Mathematics, Stark State College, 6200 Frank Ave. NW, North Canton, OH 44720, USA. E-mail: eonunwor@starkstate.edu

L. Reichel
Department of Mathematical Sciences, Kent State University, Kent, OH 44242, USA. E-mail: reichel@math.kent.edu

Keywords Golub–Kahan bidiagonalization · block Golub–Kahan bidiagonalization · global Golub–Kahan bidiagonalization · Tikhonov regularization · ill-posed problem · multiple right-hand sides · color image restoration.

1 Introduction

This paper discusses the use of iterative methods based on standard or block Golub–Kahan-type bidiagonalization, combined with Tikhonov regularization, for the restoration of a multi-channel image from an available blur- and noise-contaminated version. Applications include the restoration of color images whose RGB (red, green, and blue) representation uses three channels; see [9,17]. The methods described also can be applied to the solution of Fredholm integral equations of the first kind in two or more space dimensions and to the restoration of hyper-spectral images. The latter kind of images generalize color images in that they allow more than three "colors"; see, e.g., [21].

For definiteness, we focus in this section on the restoration of k-channel images that have been contaminated by blur and noise, and formulate this restoration task as a linear system of equations with k right-hand side vectors, where each spectral band corresponds to one channel. To simplify our notation, we assume the image to be represented by an array of n × n pixels in each one of the k channels, where 1 ≤ k ≪ n². Let b^(i) ∈ R^{n²} represent the available blur- and noise-contaminated image in channel i, let e^(i) ∈ R^{n²} describe the noise in this channel, and let x̂^(i) ∈ R^{n²} denote the desired unknown blur- and noise-free image in channel i. The corresponding quantities for all k channels, b, x̂, e ∈ R^{n²k}, are obtained by stacking the vectors b^(i), x̂^(i), e^(i) of each channel. For instance, b = [(b^(1))^T, ..., (b^(k))^T]^T. Here and throughout this paper, the superscript T denotes transposition. The degradation model is of the form

b = H x̂ + e    (1.1)

with blurring matrix

H = A_k ⊗ A ∈ R^{n²k × n²k}.
Here ⊗ denotes the Kronecker product, the matrix A ∈ R^{n²×n²} represents within-channel blurring, which is assumed to be the same in all channels, and the small matrix A_k ∈ R^{k×k} models cross-channel blurring.

Sometimes it is convenient to gather the images for the different channels in block vectors. Introduce the block vectors

B = [b^(1), ..., b^(k)] ∈ R^{n²×k},  X̂ = [x̂^(1), ..., x̂^(k)] ∈ R^{n²×k},  E = [e^(1), ..., e^(k)] ∈ R^{n²×k}.

Using properties of the Kronecker product, the model (1.1) can be expressed as

B = A(X̂) + E,    (1.2)

where the linear operator A is defined by

A : R^{n²×k} → R^{n²×k},  A(X) := A X A_k^T.    (1.3)
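The Kronecker identity behind (1.1)–(1.3) is easy to check numerically. The sketch below (sizes and data are illustrative only, not from the paper's experiments) verifies that applying H = A_k ⊗ A to the stacked channel vector agrees with the operator form A(X) = A X A_k^T:

```python
import numpy as np

# Check of the identity (A_k ⊗ A) vec(X) = vec(A X A_k^T), where vec
# stacks the channel vectors x^(i) (the columns of X), as in (1.1)-(1.3).
rng = np.random.default_rng(0)
n2, k = 16, 3                       # n^2 pixels per channel, k channels
A  = rng.standard_normal((n2, n2))  # within-channel blur (same in all channels)
Ak = rng.standard_normal((k, k))    # cross-channel blur
X  = rng.standard_normal((n2, k))   # block vector of channel images

H = np.kron(Ak, A)                  # n^2 k x n^2 k blurring matrix in (1.1)
lhs = H @ X.flatten(order="F")      # stacked form, b = H x
rhs = (A @ X @ Ak.T).flatten(order="F")  # operator form, A(X) = A X A_k^T

assert np.allclose(lhs, rhs)
```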

Its transpose is given by A^T(X) := A^T X A_k. The model (1.2) is said to have cross-channel blurring when A_k ≠ I_k; when A_k = I_k, there is no cross-channel blurring. In the latter situation, the blurring is said to be within-channel only, and the deblurring problem decouples into k independent deblurring problems. The degradation model (1.1) then can be expressed in the form

B = A X̂ + E.    (1.4)

For notational simplicity, we denote in the following both the matrix A in (1.4) and the linear operator A in (1.2) by A, and we write A(X) as AX.

The singular values of a blurring matrix or operator A typically cluster at the origin, i.e., A has many singular values of different orders of magnitude close to zero. It follows that the solution (if it exists) of the linear system of equations

AX = B    (1.5)

is very sensitive to the error E in B. Linear systems of equations with a matrix of this kind are commonly referred to as linear discrete ill-posed problems.

Let B̂ denote the (unknown) noise-free block vector associated with B. The system of equations AX = B̂ is assumed to be consistent, and X̂ stands for the solution of minimal Frobenius norm of this system. The Frobenius norm of a matrix M is defined by ‖M‖_F = (trace(M^T M))^{1/2}. We would like to determine an accurate approximation of X̂ given B and A. This generally is a difficult computational task due to the error E in B and the presence of tiny positive singular values of A.

Tikhonov regularization reduces the sensitivity of the solution of (1.5) to the error E in B by replacing (1.5) by a penalized least-squares problem of the form

min_{X ∈ R^{n²×k}} { ‖AX − B‖_F² + µ^{-1} ‖X‖_F² },    (1.6)

where µ > 0 is the regularization parameter. The normal equations associated with this minimization problem are given by

(A^T A + µ^{-1} I) X = A^T B.    (1.7)

They have the unique solution

X_µ = (A^T A + µ^{-1} I)^{-1} A^T B    (1.8)

for any µ > 0.
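For a small problem, the solution (1.8) can be formed directly. The sketch below (illustrative sizes and random data) also checks that it coincides with the minimizer of the stacked least-squares formulation of (1.6):

```python
import numpy as np

# Tikhonov solution (1.8): X_mu = (A^T A + mu^{-1} I)^{-1} A^T B,
# on a small random problem (sizes and data illustrative only).
rng = np.random.default_rng(1)
m, n, k = 20, 15, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, k))
mu = 2.0

X_mu = np.linalg.solve(A.T @ A + np.eye(n) / mu, A.T @ B)

# X_mu also minimizes ||A X - B||_F^2 + mu^{-1} ||X||_F^2, cf. (1.6):
# compare with the equivalent stacked least-squares formulation.
A_aug = np.vstack([A, np.eye(n) / np.sqrt(mu)])
B_aug = np.vstack([B, np.zeros((n, k))])
X_ls, *_ = np.linalg.lstsq(A_aug, B_aug, rcond=None)
assert np.allclose(X_mu, X_ls)
```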
The size of µ determines how sensitive X_µ is to the error in B and how close X_µ is to the desired solution X̂. We will comment on the use of µ^{-1} in (1.6) instead of µ below.

The computation of an accurate approximation X_µ of X̂ requires that a suitable value of the regularization parameter µ be used. Several methods for determining such a µ-value have been suggested in the literature. These include so-called heuristic methods that do not require knowledge of the size of the error E in B, such as the L-curve criterion, generalized cross validation, and the quasi-optimality criterion; see, e.g., [2,8,11,15,19,20,24] for discussions and illustrations. We will use the discrepancy principle, discussed, e.g., in [6], to determine µ in the computed examples

reported in Section 5. The discrepancy principle requires that a bound ε > 0 for ‖E‖_F be available and prescribes that µ > 0 be chosen so that the solution (1.8) of (1.6) satisfies

‖B − AX_µ‖_F = ηε,    (1.9)

where η > 1 is a user-specified constant independent of ε. A zero-finder can be applied to determine a µ-value such that the associated Tikhonov solution (1.8) satisfies (1.9). When the matrix A is of small to moderate size, the left-hand side of (1.9) easily can be evaluated by using the singular value decomposition (SVD) of A. However, computation of the SVD is impractical when the matrix A is large. We will discuss how an approximate solution of (1.6) can be computed by first evaluating a partial block Golub–Kahan bidiagonalization (BGKB) of A and then solving (1.6) in a subspace so defined. Alternatively, we may reduce A to a small bidiagonal matrix with the aid of global Golub–Kahan bidiagonalization (GGKB), which also is a block method, and then apply the connection between GGKB and Gauss-type quadrature rules to determine upper and lower bounds for the left-hand side of (1.9). This allows the computation of a suitable value of µ in a simple manner. This approach has previously been applied in [1]; the GGKB method was first described in [26].

The BGKB and GGKB block methods are compared to the application of Golub–Kahan bidiagonalization (with block size one) in two ways. One approach applies Golub–Kahan bidiagonalization with initial vector b^(1) and generates a solution subspace that is large enough to solve all systems of equations

Ax^(i) = b^(i),  i = 1, ..., k,    (1.10)

with Tikhonov regularization. The other approach is to simply solve each one of the k systems of equations (1.10) independently with Golub–Kahan bidiagonalization and Tikhonov regularization, i.e., by using the algorithm described in [3] k times.

This paper is organized as follows. Section 2 describes the BGKB method and discusses its application to the solution of (1.6).
The determination of a regularization parameter such that the computed solution satisfies the discrepancy principle is also described. Section 3 reviews the use of the GGKB method to reduce A. The connection between this reduction and Gauss-type quadrature rules is exploited to compute bounds for the left-hand side of (1.9). The solution of (1.6) by applying Golub–Kahan bidiagonalization (with block size one) determined by A and the initial vector b^(1) is discussed in Section 4. Sufficiently many bidiagonalization steps are carried out so that all systems (1.10) can be solved with solution subspaces determined by A and b^(1). We also consider the solution of the k systems (1.10) independently by Golub–Kahan bidiagonalization and Tikhonov regularization as described in [3]. Section 5 presents a few numerical examples. Concluding remarks can be found in Section 6.

2 Solution by partial block Golub–Kahan bidiagonalization

Introduce for µ > 0 the function

φ(µ) = ‖B − AX_µ‖_F².    (2.1)

Substituting (1.8) into (2.1) and using the identity

I − A (A^T A + µ^{-1} I)^{-1} A^T = (µ A A^T + I)^{-1}    (2.2)

shows that (2.1) can be written as

φ(µ) = tr(B^T f_µ(A A^T) B)    (2.3)

with

f_µ(t) = (µt + 1)^{-2}.

The determination of a value of the regularization parameter µ > 0 such that (1.9) holds generally requires the function φ to be evaluated for several µ-values. Each evaluation of φ is very expensive for large-scale problems. We therefore approximate the expression B^T f_µ(A A^T) B by a simpler one, which we determine with a few steps of block Golub–Kahan bidiagonalization as follows.

Introduce the QR factorization B = P_1 R_1, where P_1 ∈ R^{n²×k} has orthonormal columns and R_1 ∈ R^{k×k} is upper triangular. Then ℓ steps of the BGKB method applied to A with initial block vector P_1 give the decompositions

A Q_ℓ^(k) = P_{ℓ+1}^(k) C̄_ℓ^(k),  A^T P_ℓ^(k) = Q_ℓ^(k) C_ℓ^(k)T,    (2.4)

where the matrices P_ℓ^(k) = [P_1, ..., P_ℓ] ∈ R^{n²×ℓk}, P_{ℓ+1}^(k) = [P_1, ..., P_{ℓ+1}] ∈ R^{n²×(ℓ+1)k}, and Q_ℓ^(k) = [Q_1, ..., Q_ℓ] ∈ R^{n²×ℓk} have orthonormal columns, and

C̄_ℓ^(k) :=
  [ L_1                   ]
  [ R_2  L_2              ]
  [       .    .          ]
  [         .    .        ]
  [           R_ℓ  L_ℓ    ]
  [                R_{ℓ+1} ]  ∈ R^{k(ℓ+1) × kℓ}

is lower block bidiagonal with lower triangular diagonal blocks L_j ∈ R^{k×k} and upper triangular blocks R_j ∈ R^{k×k}. Moreover, C_ℓ^(k) is the leading kℓ × kℓ submatrix of C̄_ℓ^(k). In case A denotes the operator A defined by (1.3), the expressions A Q_ℓ^(k) and A^T P_ℓ^(k) in the left-hand sides of (2.4) should be replaced by [A(Q_1), ..., A(Q_ℓ)] and [A^T(P_1), ..., A^T(P_ℓ)], respectively.

When the block size is k = 1, the decompositions (2.4) simplify to the decompositions computed by the algorithm bidiag1 of Paige and Saunders [23]. In particular, the decompositions (2.4) differ from the ones described by Golub et al. [13], who compute an upper block bidiagonal matrix. In our discussion, we will assume that ℓ is small enough so that the triangular matrices L_j, j = 1, ..., ℓ, and R_j, j = 2, ..., ℓ+1, are nonsingular.
It follows from (2.4) that the range of the matrix P_ℓ^(k) is the block Krylov subspace

K_ℓ(A A^T, B) = range[P_1, A A^T P_1, (A A^T)² P_1, ..., (A A^T)^{ℓ-1} P_1].

Similarly, the range of the matrix Q_ℓ^(k) is the block Krylov subspace

K_ℓ(A^T A, A^T B) = range[A^T P_1, A^T A A^T P_1, (A^T A)² A^T P_1, ..., (A^T A)^{ℓ-1} A^T P_1].
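A minimal sketch of the lower block bidiagonalization (2.4) may clarify the recursions. The function below is our own illustration (names and sizes are not from the paper, and reorthogonalization is omitted); it builds C̄_ℓ^(k) from successive QR factorizations and checks both equalities in (2.4):

```python
import numpy as np

def bgkb(A, B, ell):
    """Sketch of lower block Golub-Kahan bidiagonalization (2.4),
    block size k = B.shape[1]:  A Q = P_{ell+1} Cbar,  A^T P = Q C^T."""
    P1, R1 = np.linalg.qr(B)
    P, Q, L, R = [P1], [], [], [None]       # R[j] couples block j-1 -> j
    for j in range(ell):
        W = A.T @ P[j] - (Q[j - 1] @ R[j].T if j > 0 else 0)
        Qj, LjT = np.linalg.qr(W)           # LjT upper => L_j lower triangular
        Q.append(Qj); L.append(LjT.T)
        W = A @ Qj - P[j] @ L[j]
        Pj1, Rj1 = np.linalg.qr(W)          # Rj1 upper triangular
        P.append(Pj1); R.append(Rj1)
    k = B.shape[1]                          # assemble lower block bidiagonal Cbar
    Cbar = np.zeros((k * (ell + 1), k * ell))
    for j in range(ell):
        Cbar[j*k:(j+1)*k, j*k:(j+1)*k] = L[j]
        Cbar[(j+1)*k:(j+2)*k, j*k:(j+1)*k] = R[j + 1]
    return np.hstack(P), np.hstack(Q), Cbar, R1

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 30))
B = rng.standard_normal((30, 2))
ell, k = 4, 2
Pfull, Qfull, Cbar, R1 = bgkb(A, B, ell)
assert np.allclose(A @ Qfull, Pfull @ Cbar)              # (2.4), left equality
C = Cbar[:k * ell, :]                                    # leading submatrix
assert np.allclose(A.T @ Pfull[:, :k * ell], Qfull @ C.T)  # (2.4), right equality
```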

Multiplying the rightmost equation in (2.4) by A from the left yields

A A^T P_ℓ^(k) = P_{ℓ+1}^(k) C̄_ℓ^(k) C_ℓ^(k)T.

Therefore,

P_ℓ^(k)T A A^T P_ℓ^(k) = C_ℓ^(k) C_ℓ^(k)T.

This suggests that f_µ(A A^T) may be approximated by evaluating f_µ(C_ℓ^(k) C_ℓ^(k)T), which is much easier to compute than f_µ(A A^T) when A is large. Let E_1 denote the block vector of appropriate dimensions with blocks of size k × k, with the first block equal to I_k and all other blocks equal to 0. It follows from results by Golub and Meurant [14] on the symmetric block Lanczos algorithm that the expression

G_ℓ f_µ = R_1^T E_1^T f_µ(C_ℓ^(k) C_ℓ^(k)T) E_1 R_1    (2.5)

can be interpreted as an ℓ-block Gauss quadrature rule for the approximation of B^T f_µ(A A^T) B, i.e.,

G_ℓ f = B^T f(A A^T) B    for all f ∈ P_{2ℓ-1},

where P_{2ℓ-1} denotes the set of all polynomials of degree at most 2ℓ − 1; see also [7] for related discussions. We therefore approximate (2.3) by

φ_ℓ(µ) = tr(G_ℓ f_µ)    (2.6)

and let the regularization parameter be the solution µ_ℓ of

φ_ℓ(µ) = η² ε².    (2.7)

The following result shows that φ_ℓ(µ) is decreasing and convex. This makes it convenient to compute the solution µ_ℓ of (2.7) by Newton's method; see below.

Proposition 2.1 The functions φ(µ) and φ_ℓ(µ), defined by (2.3) and (2.6) for µ > 0, respectively, satisfy

φ′(µ) < 0,  φ″(µ) > 0,  φ_ℓ′(µ) < 0,  φ_ℓ″(µ) > 0.

Proof The derivative of φ(µ) is given by

φ′(µ) = −2 tr(B^T (µ A A^T + I)^{-3} A A^T B).

It follows from (µ A A^T + I)^{-1} A = A (µ A^T A + I)^{-1} that

φ′(µ) = −2 tr(B^T A (µ A^T A + I)^{-3} A^T B).

Substituting the spectral factorization A^T A = S Λ S^T, S^T S = I, into the above expression and letting W = [w_1, ..., w_k] = S^T A^T B yields

φ′(µ) = −2 tr(W^T (µΛ + I)^{-3} W) = −2 Σ_{j=1}^k w_j^T (µΛ + I)^{-3} w_j < 0.

Thus, φ(µ) is a decreasing function of µ. Turning to the second derivative, we have

φ″(µ) = 6 tr(B^T A A^T (µ A A^T + I)^{-4} A A^T B),

and can proceed similarly as above to show that φ″(µ) > 0. The derivative of φ_ℓ(µ) is given by

φ_ℓ′(µ) = −2 tr(R_1^T E_1^T C_ℓ^(k) (µ C_ℓ^(k)T C_ℓ^(k) + I)^{-3} C_ℓ^(k)T E_1 R_1),    (2.8)

where we again use the identity (µ C_ℓ^(k) C_ℓ^(k)T + I)^{-1} C_ℓ^(k) = C_ℓ^(k) (µ C_ℓ^(k)T C_ℓ^(k) + I)^{-1}. The stated properties of φ_ℓ′(µ) and φ_ℓ″(µ) can be shown by substituting the spectral factorization of C_ℓ^(k)T C_ℓ^(k) into (2.8).

Since φ_ℓ(µ) is decreasing and convex, Newton's method converges monotonically and quadratically to the solution µ_ℓ of (2.7) for any initial approximate solution µ_init < µ_ℓ. This makes it easy to implement the Newton method. For instance, we may use µ_init = 0 when φ_ℓ and its derivative are suitably defined at µ = 0; see [3] for a detailed discussion of the case when the block size is one. We note that the function µ → φ_ℓ(1/µ), which corresponds to the regularization term µ ‖X‖_F² in (1.6), is not guaranteed to be convex. Therefore, Newton's method has to be safeguarded when applied to the solution of φ_ℓ(1/µ) = η²ε². This is the reason for considering Tikhonov regularization of the form (1.6).

Proposition 2.2 Let P_{N(M)} denote the orthogonal projector onto the null space N(M) of the matrix M. Then

φ(0) = tr(B^T B),  φ_ℓ(0) = tr(B^T B),
lim_{µ→∞} φ(µ) = tr(B^T P_{N(A A^T)} B),
lim_{µ→∞} φ_ℓ(µ) = tr(R_1^T E_1^T P_{N(C_ℓ^(k) C_ℓ^(k)T)} E_1 R_1).

Proof The value at zero and the limit of φ follow from (2.3). The expression (2.5) and the definition of the upper triangular matrix R_1 in the QR factorization B = P_1 R_1 yield

φ_ℓ(0) = tr(R_1^T E_1^T f_0(C_ℓ^(k) C_ℓ^(k)T) E_1 R_1) = tr(R_1^T R_1) = tr(B^T B).

The result for φ_ℓ(µ) as µ → ∞ follows similarly as for φ.

Let the regularization parameter µ_ℓ be computed by Newton's method.
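A Newton iteration of the kind used here can be sketched on a small problem where φ itself is cheap to evaluate through the eigendecomposition of A A^T (the data and the target value standing in for η²ε² are illustrative only; monotone convergence from µ_init = 0 follows from φ being decreasing and convex):

```python
import numpy as np

# Newton's method for a discrepancy equation phi(mu) = target, with
# phi(mu)  =  tr(B^T (mu A A^T + I)^{-2} B)              (cf. (2.3))
# phi'(mu) = -2 tr(B^T (mu A A^T + I)^{-3} A A^T B).
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 25))
B = rng.standard_normal((20, 3))

lam, W = np.linalg.eigh(A @ A.T)      # A A^T = W diag(lam) W^T
G = W.T @ B                           # rotated data
w2 = (G ** 2).sum(axis=1)             # per-eigenvalue weights

phi  = lambda mu: np.sum(w2 / (mu * lam + 1.0) ** 2)
dphi = lambda mu: -2.0 * np.sum(w2 * lam / (mu * lam + 1.0) ** 3)

target = 0.25 * phi(0.0)              # illustrative stand-in for eta^2 eps^2
mu = 0.0                              # start below the root: iterates increase
for _ in range(100):
    step = (phi(mu) - target) / dphi(mu)
    mu -= step
    if abs(step) < 1e-12 * max(mu, 1.0):
        break

assert mu > 0.0
assert abs(phi(mu) - target) < 1e-6 * target
```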
We then determine the corresponding approximate solution by projecting the normal equations (1.7) with µ = µ_ℓ onto a smaller space determined by the decompositions (2.4). We seek an approximate solution of the form

X_{ℓ,µ_ℓ} = Q_ℓ^(k) Y_{ℓ,µ_ℓ},  Y_{ℓ,µ_ℓ} ∈ R^{kℓ×k},    (2.9)

by solving the normal equations (1.7) with µ = µ_ℓ by a Galerkin method,

(Q_ℓ^(k))^T (A^T A + µ_ℓ^{-1} I) Q_ℓ^(k) Y_{ℓ,µ_ℓ} = (Q_ℓ^(k))^T A^T B,    (2.10)

which simplifies to

(C̄_ℓ^(k)T C̄_ℓ^(k) + µ_ℓ^{-1} I) Y_{ℓ,µ_ℓ} = C̄_ℓ^(k)T E_1 R_1.    (2.11)

We compute the solution Y_{ℓ,µ_ℓ} by solving the least-squares problem for which (2.11) are the normal equations,

min_{Y ∈ R^{kℓ×k}} ‖ [ C̄_ℓ^(k) ; µ_ℓ^{-1/2} I ] Y − [ E_1 R_1 ; 0 ] ‖_F².    (2.12)

Our reason for computing the solution of (2.12) instead of (2.11) is that solving the least-squares problem is less sensitive to errors for small values of µ_ℓ > 0.

Proposition 2.3 Let µ_ℓ solve (2.7), and let Y_{ℓ,µ_ℓ} solve (2.10). Then the associated approximate solution X_{ℓ,µ_ℓ} = Q_ℓ^(k) Y_{ℓ,µ_ℓ} of (1.6) satisfies

‖A X_{ℓ,µ_ℓ} − B‖_F² = tr(R_1^T E_1^T f_{µ_ℓ}(C̄_ℓ^(k) C̄_ℓ^(k)T) E_1 R_1).    (2.13)

Proof Using the expression for X_{ℓ,µ_ℓ} and applying (2.4) shows that

A X_{ℓ,µ_ℓ} − B = A Q_ℓ^(k) Y_{ℓ,µ_ℓ} − B = P_{ℓ+1}^(k) C̄_ℓ^(k) Y_{ℓ,µ_ℓ} − P_1 R_1 = P_{ℓ+1}^(k) (C̄_ℓ^(k) Y_{ℓ,µ_ℓ} − E_1 R_1),

where we recall that B = P_1 R_1. It follows from (2.11) that

P_{ℓ+1}^(k) (C̄_ℓ^(k) Y_{ℓ,µ_ℓ} − E_1 R_1) = P_{ℓ+1}^(k) [ C̄_ℓ^(k) (C̄_ℓ^(k)T C̄_ℓ^(k) + µ_ℓ^{-1} I)^{-1} C̄_ℓ^(k)T − I ] E_1 R_1.

The identity (2.2) with A replaced by C̄_ℓ^(k) now yields (2.13).

Algorithm 1 The BGKB–Tikhonov method.
Input: A, B, k, ε, η ≥ 1.
1. Compute the QR factorization B = P_1 R_1.
2. For ℓ = 1, 2, ... until ‖A X_{ℓ,µ_ℓ} − B‖_F ≤ ηε:
   (a) Determine Q_ℓ^(k) and P_{ℓ+1}^(k) and the block bidiagonal matrix C̄_ℓ^(k) by BGKB.
   (b) Update the value µ_ℓ by solving (2.7) with Newton's method.
3. Determine Y_{ℓ,µ_ℓ} by solving (2.12) and then X_{ℓ,µ_ℓ} by (2.9).
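The equivalence of (2.11) and (2.12) can be checked on random data of matching dimensions (sizes illustrative only):

```python
import numpy as np

# Solving the stacked least-squares problem (2.12) is algebraically
# equivalent to the normal equations (2.11): M^T M Y = M^T rhs gives
# (Cbar^T Cbar + mu^{-1} I) Y = Cbar^T E1R1.
rng = np.random.default_rng(6)
k, ell, mu = 3, 4, 0.5
Cbar = rng.standard_normal((k * (ell + 1), k * ell))
E1R1 = np.vstack([rng.standard_normal((k, k)),      # E_1 R_1: first block k x k,
                  np.zeros((k * ell, k))])          # remaining blocks zero

M = np.vstack([Cbar, np.eye(k * ell) / np.sqrt(mu)])
rhs = np.vstack([E1R1, np.zeros((k * ell, k))])
Y_ls, *_ = np.linalg.lstsq(M, rhs, rcond=None)      # solve (2.12)

Y_ne = np.linalg.solve(Cbar.T @ Cbar + np.eye(k * ell) / mu,
                       Cbar.T @ E1R1)               # solve (2.11)
assert np.allclose(Y_ls, Y_ne)
```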

3 The GGKB method and Gauss-type quadrature

We discuss the application of the GGKB method to the computation of an approximate solution of (1.6) and review how the method can be used to compute inexpensive upper and lower bounds for the discrepancy (1.9). These bounds help us determine the regularization parameter. This approach of solving (1.6) and determining bounds for the discrepancy has recently been described in [1], where further details can be found.

Introduce the inner product ⟨F, G⟩ = tr(F^T G) for F, G ∈ R^{n²×k}. We have ‖F‖_F = ⟨F, F⟩^{1/2}. Application of ℓ steps of the GGKB method to A with initial block vector B determines the lower bidiagonal matrix

C̄_ℓ =
  [ ρ_1                 ]
  [ σ_2  ρ_2            ]
  [       .    .        ]
  [         .    .      ]
  [          σ_ℓ  ρ_ℓ   ]
  [              σ_{ℓ+1} ]  ∈ R^{(ℓ+1)×ℓ}

as well as the matrices

U_{ℓ+1}^(k) = [U_1, U_2, ..., U_{ℓ+1}] ∈ R^{n²×(ℓ+1)k},  V_ℓ^(k) = [V_1, V_2, ..., V_ℓ] ∈ R^{n²×ℓk}

with orthonormal block columns U_i, V_j ∈ R^{n²×k}, where U_1 = s_1 B and s_1 > 0 is a scaling factor. Thus,

⟨U_i, U_j⟩ = ⟨V_i, V_j⟩ = { 1, i = j;  0, i ≠ j }.

We assume that ℓ is small enough so that all nontrivial entries of the matrix C̄_ℓ are positive. This is the generic situation. Denote the leading ℓ × ℓ submatrix of C̄_ℓ by C_ℓ. The matrices determined satisfy

A [V_1, V_2, ..., V_ℓ] = U_{ℓ+1}^(k) (C̄_ℓ ⊗ I_k),    (3.1)
A^T [U_1, U_2, ..., U_ℓ] = V_ℓ^(k) (C_ℓ^T ⊗ I_k).    (3.2)

When A is the operator A defined by (1.3), one should replace A [V_1, V_2, ..., V_ℓ] and A^T [U_1, U_2, ..., U_ℓ] on the left-hand sides of (3.1) and (3.2) by the expressions [A(V_1), A(V_2), ..., A(V_ℓ)] and [A^T(U_1), A^T(U_2), ..., A^T(U_ℓ)], respectively. The functions (of µ)

G_ℓ f_µ = ‖B‖_F² e_1^T (µ C_ℓ C_ℓ^T + I_ℓ)^{-2} e_1,
R_{ℓ+1} f_µ = ‖B‖_F² e_1^T (µ C̄_ℓ C̄_ℓ^T + I_{ℓ+1})^{-2} e_1

can be interpreted as Gauss-type quadrature rules for the approximation of the expression φ(µ) defined by (2.3). The remainder formulas for these quadrature rules yield the bounds

G_ℓ f_µ ≤ φ(µ) ≤ R_{ℓ+1} f_µ;    (3.3)

see [1] for details. We determine a suitable value of µ and an associated approximate solution of (1.6) as follows. For ℓ ≥ 2, we seek to solve the nonlinear equation

G_ℓ f_µ = ε²    (3.4)

for µ > 0 by Newton's method. If the solution µ_ℓ of (3.4) satisfies

R_{ℓ+1} f_{µ_ℓ} ≤ η² ε²,    (3.5)

then it follows from (3.3) that there is a solution X_{µ_ℓ} of (1.6) such that

ε ≤ ‖B − A X_{µ_ℓ}‖_F ≤ ηε.

If either (3.4) does not have a solution or (3.5) does not hold, then we increase ℓ. Generally, it suffices to choose ℓ quite small.

Assume that (3.4) and (3.5) hold for µ = µ_ℓ. We then compute the approximate solution

X_{µ_ℓ,ℓ} = V_ℓ^(k) (y_{µ_ℓ} ⊗ I_k)    (3.6)

of (1.6), where y_{µ_ℓ} solves

(C̄_ℓ^T C̄_ℓ + µ_ℓ^{-1} I_ℓ) y = d_1 C̄_ℓ^T e_1,  d_1 = ‖B‖_F.    (3.7)

The vector y_{µ_ℓ} is computed by solving a least-squares problem for which (3.7) are the associated normal equations. The following result shows an important property of the approximate solution (3.6). We include a proof for completeness.

Proposition 3.1 Let µ_ℓ solve (3.4) and let y_{µ_ℓ} solve (3.7). Then the associated approximate solution (3.6) of (1.6) satisfies

‖A X_{µ_ℓ,ℓ} − B‖_F² = R_{ℓ+1} f_{µ_ℓ}.

Proof The representation (3.6) and (3.1) show that

A X_{µ_ℓ,ℓ} = U_{ℓ+1}^(k) (C̄_ℓ ⊗ I_k)(y_{µ_ℓ} ⊗ I_k) = U_{ℓ+1}^(k) (C̄_ℓ y_{µ_ℓ} ⊗ I_k).

Using the above expression gives

‖A X_{µ_ℓ,ℓ} − B‖_F² = ‖U_{ℓ+1}^(k) (d_1 e_1 ⊗ I_k) − U_{ℓ+1}^(k) (C̄_ℓ y_{µ_ℓ} ⊗ I_k)‖_F² = ‖d_1 e_1 − C̄_ℓ y_{µ_ℓ}‖₂²,

where we recall that d_1 = ‖B‖_F. We now express y_{µ_ℓ} with the aid of (3.7), and apply the identity (2.2) with A replaced by C̄_ℓ, to obtain

‖A X_{µ_ℓ,ℓ} − B‖_F² = d_1² ‖e_1 − C̄_ℓ (C̄_ℓ^T C̄_ℓ + µ_ℓ^{-1} I_ℓ)^{-1} C̄_ℓ^T e_1‖₂² = d_1² e_1^T (µ_ℓ C̄_ℓ C̄_ℓ^T + I_{ℓ+1})^{-2} e_1 = R_{ℓ+1} f_{µ_ℓ}.
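The GGKB recursion underlying (3.1)–(3.2) can be sketched as follows (the function name, sizes, and data are our own illustration; reorthogonalization is omitted):

```python
import numpy as np

def ggkb(A, B, ell):
    """Sketch of global Golub-Kahan bidiagonalization (3.1)-(3.2): the
    scalar coefficients rho, sigma come from the Frobenius inner product."""
    U = [B / np.linalg.norm(B)]          # U_1 = B / ||B||_F
    V, rho, sigma = [], [], [0.0]        # sigma[0] is an unused placeholder
    for j in range(ell):
        W = A.T @ U[j] - (sigma[j] * V[j - 1] if j > 0 else 0.0)
        rho.append(np.linalg.norm(W)); V.append(W / rho[j])
        W = A @ V[j] - rho[j] * U[j]
        sigma.append(np.linalg.norm(W)); U.append(W / sigma[j + 1])
    Cbar = np.zeros((ell + 1, ell))      # lower bidiagonal (ell+1) x ell
    for j in range(ell):
        Cbar[j, j] = rho[j]; Cbar[j + 1, j] = sigma[j + 1]
    return U, V, Cbar

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 40))
B = rng.standard_normal((40, 3))
ell, k = 5, 3
U, V, Cbar = ggkb(A, B, ell)
# (3.1): A [V_1,...,V_ell] = U_{ell+1}^(k) (Cbar ⊗ I_k)
assert np.allclose(A @ np.hstack(V), np.hstack(U) @ np.kron(Cbar, np.eye(k)))
# (3.2): A^T [U_1,...,U_ell] = V_ell^(k) (C_ell^T ⊗ I_k)
C = Cbar[:ell, :]
assert np.allclose(A.T @ np.hstack(U[:ell]), np.hstack(V) @ np.kron(C.T, np.eye(k)))
```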

The following algorithm outlines the main steps for computing µ_ℓ and X_{µ_ℓ,ℓ} that satisfy (1.9).

Algorithm 2 The GGKB–Tikhonov method.
Input: A, B, k, ε, η ≥ 1.
1. Let U_1 := B / ‖B‖_F.
2. For ℓ = 1, 2, ... until ‖A X_{µ_ℓ,ℓ} − B‖_F ≤ ηε:
   (a) Determine U_{ℓ+1}^(k) and V_ℓ^(k), and the bidiagonal matrices C_ℓ and C̄_ℓ, with the GGKB algorithm.
   (b) Determine µ_ℓ that satisfies (3.4) with Newton's method.
3. Determine y_{µ_ℓ} by solving a least-squares problem for which (3.7) are the associated normal equations and then compute X_{µ_ℓ,ℓ} by (3.6).

4 Golub–Kahan bidiagonalization for problems with multiple right-hand sides

We may consider (1.5) as k linear discrete ill-posed problems that have the same matrix A and different right-hand side vectors b^(1), ..., b^(k); cf. (1.10). The solution of linear systems of equations with multiple right-hand sides that might not be known simultaneously, and a matrix that stems from the discretization of a well-posed problem, has received considerable attention in the literature; see, e.g., [4,5,18,22,25] and references therein. However, the solution of linear discrete ill-posed problems with multiple right-hand sides that might not be available simultaneously has not. The method described in this section is based on the analysis and numerical experience reported in [12], where it is shown that it often suffices to apply only a few steps of (standard) Golub–Kahan bidiagonalization (GKB) to a matrix A of a linear discrete ill-posed problem to gain valuable information about the subspaces spanned by the right and left singular vectors of A associated with the dominant singular values.

Consider the first system of (1.10),

A x^(1) = b^(1),    (4.1)

where the right-hand side is the sum of an unknown error-free vector b̂^(1) and an error vector e^(1). Thus, b^(1) = b̂^(1) + e^(1). A bound ‖e^(1)‖ ≤ ε^(1) is assumed to be known. Let x̂^(1) denote the first column of the matrix X̂ in (1.4).
We seek to compute an approximation of x̂^(1) by using (standard) partial Golub–Kahan bidiagonalization (GKB) of A with initial vector b^(1). To explain some properties of the bidiagonalization computed, we introduce the SVD of A,

A = W Σ Z^T,    (4.2)

where W, Z ∈ R^{n²×n²} are orthogonal matrices and

Σ = diag[σ_1, σ_2, ..., σ_{n²}] ∈ R^{n²×n²},  σ_1 ≥ σ_2 ≥ ... ≥ σ_r > σ_{r+1} = ... = σ_{n²} = 0.

Here r is the rank of A. Let 1 ≤ s ≤ r and let Z_s and W_s consist of the first s columns of Z and W, respectively. Moreover, Σ_s denotes the leading s × s principal submatrix of Σ. This gives the best rank-s approximation A_s = W_s Σ_s Z_s^T of A in the spectral and Frobenius norms. The computation of the full SVD (4.2) is too expensive for large-scale problems without a particular structure to be practical. The computation of a partial GKB is much cheaper. Application of ℓ steps of GKB yields the decompositions

A V_ℓ = U_{ℓ+1} C̄_ℓ,  A^T U_ℓ = V_ℓ C_ℓ^T,    (4.3)

where the matrices V_ℓ = [v_1, v_2, ..., v_ℓ] ∈ R^{n²×ℓ} and U_{ℓ+1} = [u_1, u_2, ..., u_{ℓ+1}] ∈ R^{n²×(ℓ+1)} have orthonormal columns, and U_ℓ consists of the first ℓ columns of U_{ℓ+1}. Further, C̄_ℓ ∈ R^{(ℓ+1)×ℓ} is lower bidiagonal and C_ℓ is the leading ℓ × ℓ submatrix of C̄_ℓ. We apply reorthogonalization of the columns of U_{ℓ+1} and V_ℓ to secure their numerical orthogonality.

It is shown in [12] that for sufficiently many steps ℓ, the spaces range(U_{ℓ+1}) and range(V_ℓ) contain to high accuracy the subspaces range(W_s) and range(Z_s), respectively, for s ≥ 1 fixed and not too large. Computed examples in [12] indicate that it often suffices to choose ℓ ≈ 3s. Moreover, the columns of the noise-free right-hand side matrix B̂, generally, can be approximated quite well by only the first few columns of the matrix W in the SVD (4.2) of A. This follows from the discrete Picard condition [15]. These columns, in turn, typically can be approximated fairly accurately by the first few columns of the matrix U_{ℓ+1} in the partial Golub–Kahan bidiagonalization (4.3). It is therefore unlikely that many steps of this bidiagonalization process have to be carried out in order to be able to compute useful approximations of the columns of the desired solution matrix X̂.

Consider the Tikhonov regularization problem

min_{x ∈ range(V_ℓ)} { ‖Ax − b^(1)‖₂² + µ ‖x‖₂² } = min_{y ∈ R^ℓ} { ‖C̄_ℓ y − U_{ℓ+1}^T b^(1)‖₂² + µ ‖y‖₂² },    (4.4)

where x = V_ℓ y.
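The reuse of one Golub–Kahan basis for several right-hand sides can be sketched as follows; the fixed µ, the sizes, and the data are illustrative only, and the discrepancy-principle selection of µ and ℓ described below is omitted:

```python
import numpy as np

def gkb(A, b, ell):
    """Standard Golub-Kahan bidiagonalization (4.3), block size one."""
    U = [b / np.linalg.norm(b)]
    V, rho, sigma = [], [], [0.0]
    for j in range(ell):
        w = A.T @ U[j] - (sigma[j] * V[j - 1] if j > 0 else 0.0)
        rho.append(np.linalg.norm(w)); V.append(w / rho[j])
        w = A @ V[j] - rho[j] * U[j]
        sigma.append(np.linalg.norm(w)); U.append(w / sigma[j + 1])
    Cbar = np.zeros((ell + 1, ell))
    for j in range(ell):
        Cbar[j, j] = rho[j]; Cbar[j + 1, j] = sigma[j + 1]
    return np.column_stack(U), np.column_stack(V), Cbar

rng = np.random.default_rng(5)
A = rng.standard_normal((50, 50))
b1, b2 = rng.standard_normal(50), rng.standard_normal(50)
ell, mu = 8, 1.0
U, V, Cbar = gkb(A, b1, ell)             # bidiagonalize once, from b^(1)

for b in (b1, b2):                       # same basis, different right-hand sides
    d = U.T @ b                          # U_{ell+1}^T b^(i); = ||b1|| e_1 for b1
    M = np.vstack([Cbar, np.sqrt(mu) * np.eye(ell)])
    y = np.linalg.lstsq(M, np.concatenate([d, np.zeros(ell)]), rcond=None)[0]
    x = V @ y                            # approximate solution in range(V_ell)
    # projected residual of (4.4) equals the residual within range(U_{ell+1})
    r_proj = np.linalg.norm(Cbar @ y - d)
    r_true = np.linalg.norm(A @ x - U @ (U.T @ b))
    assert np.isclose(r_proj, r_true)
```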
We determine the regularization parameter µ > 0 so that the computed solution y_µ satisfies the discrepancy principle

‖C̄_ℓ y_µ − U_{ℓ+1}^T b^(1)‖₂ = η ε^(1).   (4.5)

If no such µ-value exists, then we increase ℓ by one and try to solve (4.5) with ℓ replaced by ℓ+1 in (4.4) and (4.5). The small least-squares problem on the right-hand side of (4.4) is solved as described in [3]. We remark that the vector U_{ℓ+1}^T b^(1) can be simplified to e_1 ‖b^(1)‖₂. The solution y_µ of (4.4) determines the approximate solution x_µ^(1) = V_ℓ y_µ of (4.1). We turn to the problem

A x^(2) = b^(2)   (4.6)

and compute an approximate solution by solving (4.4) with the vector b^(1) replaced by b^(2). The vector U_{ℓ+1}^T b^(2) has to be explicitly computed. Therefore it is important that the columns of the matrix U_{ℓ+1} are numerically orthonormal. If no µ > 0 can be determined so that (4.5) can be satisfied with b^(1) replaced by b^(2), then we carry out one more step of Golub–Kahan bidiagonalization (4.3); otherwise, we compute the solution y_µ of (4.4) with the available decomposition. Let µ be such that the discrepancy principle holds. Then we obtain the approximate solution x_µ^(2) = V_ℓ y_µ of (4.6). We proceed in the same manner to solve A x^(i) = b^(i) for i = 3, 4, ..., k.

Algorithm 3 The GKB–Tikhonov method.
Input: A, k, b^(1), b^(2), ..., b^(k), ε^(1), ε^(2), ..., ε^(k), η ≥ 1.
1. Let u_1 := b^(1)/‖b^(1)‖₂.
2. Compute the partial GKB (4.3): A V_ℓ = U_{ℓ+1} C̄_ℓ, A^T U_ℓ = V_ℓ C_ℓ^T.
3. For i = 1, 2, ..., k:
   (a) Compute the solution y_µ of min_{y ∈ R^ℓ} { ‖C̄_ℓ y − U_{ℓ+1}^T b^(i)‖₂² + µ ‖y‖₂² }.
   (b) If ‖C̄_ℓ y_µ − U_{ℓ+1}^T b^(i)‖₂ > η ε^(i):
       i. ℓ := ℓ + 1.
       ii. Return to step (a).
   (c) Compute x_µ^(i) = V_ℓ y_µ.

We will compare this algorithm and Algorithms 1 and 2 to the following trivial method, which is based on solving each one of the linear discrete ill-posed problems (1.10) independently with the aid of (standard) Golub–Kahan bidiagonalization. Thus, we apply Algorithm 2 with block size one to each one of the k linear discrete ill-posed problems (1.10) independently. We refer to this scheme as Algorithm 4. We expect it to require the most matrix-vector product evaluations of the methods in our comparison, because we compute a new partial standard Golub–Kahan bidiagonalization for each one of the vectors b^(j), j = 1, ..., k. Moreover, this method does not benefit from the fact that, on many modern computers, the evaluation of matrix-block-vector products with a large matrix A does not require much more time than the evaluation of a matrix-vector product with a single vector for small block sizes; see, e.g., [10] for discussions of this and related issues.

Algorithm 4 Trivial method.
Input: A, k, b^(1), b^(2), ..., b^(k), ε^(1), ε^(2), ..., ε^(k), η ≥ 1.
1. For i = 1, 2, ..., k:
   (a) Let u_1 := b^(i)/‖b^(i)‖₂.
   (b) Compute the Golub–Kahan bidiagonalization A V_ℓ = U_{ℓ+1} C̄_ℓ, A^T U_ℓ = V_ℓ C_ℓ^T.
   (c) Compute the solution y_µ of min_{y ∈ R^ℓ} { ‖C̄_ℓ y − U_{ℓ+1}^T b^(i)‖₂² + µ ‖y‖₂² }.
   (d) If ‖C̄_ℓ y_µ − U_{ℓ+1}^T b^(i)‖₂ > η ε^(i):
       i. ℓ := ℓ + 1.
       ii. Return to step (b).
   (e) Compute x_µ^(i) := V_ℓ y_µ.

5 Numerical results

This section provides some numerical results that illustrate the performance of Algorithms 1–4 when applied to the solution of linear discrete ill-posed problems with the same matrix and different right-hand sides. The first example applies these algorithms to the solution of a linear discrete ill-posed problem with several right-hand sides defined by matrices that stem from Regularization Tools by Hansen [16]; the remaining examples discuss applications to image restoration. We consider the restoration of RGB images that have been contaminated by within-channel and cross-channel blur and noise, as well as the restoration of a sequence of images from a video. All computations were carried out in the MATLAB environment on an Intel(R) Core(TM) i5-4590 CPU computer with 8 GB of RAM. The computations were done with approximately 15 decimal digits of relative accuracy.

5.1 Example 1

We would like to solve linear discrete ill-posed problems (1.10) with the matrix A ∈ R^{70²×70²} determined by the function phillips in Regularization Tools [16]. The matrix is a discretization of a Fredholm integral equation of the first kind that describes a convolution on the interval −6 ≤ t ≤ 6. The function phillips also determines the error-free data vector b̄^(1) ∈ R^{70²} and the associated error-free solution x̄^(1) ∈ R^{70²}. The other error-free data vectors b̄^(i) ∈ R^{70²}, i = 2, ..., k, are obtained by setting x̄^(i) = x̄^(i−1) + y/2 for i = 2, ..., k, where y is a vector obtained by discretizing a function of the form (1/2) cos(3t) + 1/4 at equidistant points on the interval −6 ≤ t ≤ 6. The error-free right-hand sides are obtained from b̄^(i) = A x̄^(i) for i = 2, ..., k. A noise vector e^(i) ∈ R^{70²} with normally distributed random entries with zero mean is added to each data vector b̄^(i) to obtain the error-contaminated data vectors b^(i), i = 1, ..., k, in (1.10). The error vectors e^(i) are scaled to correspond to a specified noise level.
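The scaling of the error vectors to a prescribed noise level can be sketched as follows; normalizing the Gaussian vector so that ‖e^(i)‖₂ = δ ‖b̄^(i)‖₂ holds exactly is an assumption of this sketch, and the function name is illustrative.

```python
import numpy as np

def add_noise(b_bar, delta, rng):
    """Return (b, e): a noisy data vector b = b_bar + e, where e is a
    Gaussian vector rescaled so that ||e||_2 = delta * ||b_bar||_2 exactly
    (a common convention, assumed here)."""
    e = rng.standard_normal(b_bar.shape)
    e *= delta * np.linalg.norm(b_bar) / np.linalg.norm(e)
    return b_bar + e, e
```

With this convention, δ is exactly the relative noise level ‖e‖₂/‖b̄‖₂ used to drive the discrepancy principle.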
This is simulated by e^(i) := δ ‖b̄^(i)‖₂ ẽ^(i), where δ is the noise level and the vectors ẽ^(i) ∈ R^{70²} have normally distributed random entries with mean zero and variance one. When the data vectors b^(i), i = 1, ..., k, are available sequentially, the linear discrete ill-posed problems (1.10) can be solved one by one by Algorithm 3 or 4. If the data vectors are available simultaneously, then Algorithms 1 and 2 also can be used to solve (1.10). The latter algorithms require that the noise level for each discrete ill-posed problem (1.10) be about the same. This is a reasonable assumption for many applications. Table 5.1 compares the number of matrix-vector product evaluations and the CPU time required by Algorithms 1–4 for k = 10 and noise-contaminated data vectors b^(i) corresponding to the noise levels δ = 10^-2 and δ = 10^-3. In all examples of this section, the regularization parameter is determined with the aid of the discrepancy principle with η = 1.1; cf. (1.9). The displayed relative error in the computed solutions

is the maximum of the errors for the k problems (1.10). The number of matrix-vector products (MVP) shown is the number of matrix-vector product evaluations with A and A^T with a single vector. Thus, each iteration step of Algorithms 1 and 2 adds 2k matrix-vector product evaluations to the count. The number of matrix-vector product evaluations does not give an accurate idea of the computing time required. We therefore also present timings for the algorithms.

Noise level | Method      | MVP | Relative error | CPU time (sec)
10^-3       | Algorithm 1 | 100 | 1.46·10^-2     | 0.30
            | Algorithm 2 | 200 | 1.31·10^-2     | 0.43
            | Algorithm 3 |  16 | 2.28·10^-2     | 0.31
            | Algorithm 4 | 162 | 1.43·10^-2     | 2.08
10^-2       | Algorithm 1 |  80 | 2.54·10^-2     | 0.24
            | Algorithm 2 | 120 | 2.61·10^-2     | 0.30
            | Algorithm 3 |  10 | 2.52·10^-2     | 0.19
            | Algorithm 4 | 140 | 2.60·10^-2     | 1.32

Table 5.1 Results for the phillips test problem.

Table 5.1 shows Algorithm 3 to require the fewest matrix-vector product evaluations and to give approximate solutions of comparable or higher quality than the other algorithms. Algorithms 2 and 4 require about the same number of matrix-vector product evaluations, but the former demands less CPU time because it implements a block method.

5.2 Example 2

This example illustrates the performance of Algorithms 1–4 when applied to the restoration of a 3-channel RGB color image that has been contaminated by blur and noise. The corrupted image is stored in a block vector B with three columns (one for each channel). The desired (and assumed unavailable) image is stored in the block vector X̄ with three columns. The blur-contaminated, but noise-free, image associated with X̄ is stored in the block vector B̄. The block vector E represents the noise in B, i.e., B := B̄ + E. We define the noise level

ν = ‖E‖_F / ‖B̄‖_F.

To determine the effectiveness of our solution methods, we evaluate the relative error

Relative error = ‖X̄ − X_µ‖_F / ‖X̄‖_F,

where X_µ denotes the computed restoration.
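The degradation model used in the image restoration examples, within-channel blur A acting on each vectorized channel stored as a column of X̄ and cross-channel mixing A₃ acting across the columns, can be sketched as follows (in this example A₃ is the identity; a later example uses a nontrivial A₃). The function names are assumptions for illustration.

```python
import numpy as np

def degrade(A, X_bar, A3, E):
    """Multichannel degradation model: the columns of X_bar are vectorized
    channels, A blurs within each channel, A3 mixes across channels, and
    E is additive noise:  B = A @ X_bar @ A3.T + E."""
    return A @ X_bar @ A3.T + E

def relative_error(X_bar, X_mu):
    """Relative restoration error in the Frobenius norm
    (np.linalg.norm defaults to the Frobenius norm for matrices)."""
    return np.linalg.norm(X_bar - X_mu) / np.linalg.norm(X_bar)
```

Within-channel-only blurring corresponds to A3 = np.eye(3), in which case each channel is degraded independently by the same matrix A.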

We consider within-channel blurring. Hence, the matrix A₃ in (1.3) is the 3×3 identity matrix, and the matrix A in (1.3), which describes the blurring within each channel, models Gaussian blur and is determined by the Gaussian PSF

h_σ(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)).

Thus, A is a symmetric block Toeplitz matrix with Toeplitz blocks. It is generated with the MATLAB function blur from [16]. This function has two parameters: the half-bandwidth r of the Toeplitz blocks and the variance σ of the Gaussian PSF. We let σ = 4 and r = 6. The original (unknown) RGB image X̄ ∈ R^{256×256×3} is the papav256 image from MATLAB. It is shown on the left-hand side of Figure 5.1. The associated blurred and noisy image B = A X̄ + E is shown on the right-hand side of the figure. The noise level is ν = 10^-3. Given the contaminated image B, we would like to recover an approximation of the original image X̄. Table 5.2 compares the number of matrix-vector product evaluations, the computing time, and the relative errors of the computed restorations. The restoration obtained with Algorithm 1 for the noise level ν = 10^-3 is shown on the left-hand side of Figure 5.2. The discrepancy principle is satisfied when ℓ = 82 steps of the BGKB method have been carried out. This corresponds to 3·2·82 = 492 matrix-vector product evaluations.

Noise level | Method      | MVP | Relative error | CPU time (sec)
10^-3       | Algorithm 1 | 492 | 6.93·10^-2     | 3.86
            | Algorithm 2 | 558 | 6.85·10^-2     | 3.95
            | Algorithm 3 | 112 | 2.64·10^-1     | 1.66
            | Algorithm 4 | 632 | 1.29·10^-1     | 6.55
10^-2       | Algorithm 1 | 144 | 9.50·10^-2     | 1.13
            | Algorithm 2 | 156 | 9.44·10^-2     | 1.12
            | Algorithm 3 |  20 | 2.91·10^-1     | 0.32
            | Algorithm 4 | 112 | 1.58·10^-1     | 1.10

Table 5.2 Results for Example 2.

The restoration determined by Algorithm 2 is shown on the right-hand side of Figure 5.2. The GGKB method requires ℓ = 93 steps to satisfy the discrepancy principle. Algorithm 3 is the fastest, but yields restorations of lower quality than the other algorithms for this example.
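A Gaussian blur matrix of this form can be sketched with NumPy as follows. The sketch mimics, but is not guaranteed to reproduce exactly, the `blur` function of Regularization Tools [16]: a banded symmetric Toeplitz factor T is sampled from the 1D Gaussian PSF, and the 2D blur matrix is the Kronecker product A = T ⊗ T, which is symmetric block Toeplitz with Toeplitz blocks.

```python
import numpy as np

def blur_matrix(n, r, sigma):
    """Gaussian blur matrix A = kron(T, T) of order n^2, where T is a
    banded symmetric Toeplitz matrix with half-bandwidth r whose first
    column samples exp(-k^2 / (2 sigma^2)), k = 0, ..., r."""
    col = np.zeros(n)
    k = np.arange(r + 1)
    col[:r + 1] = np.exp(-(k ** 2) / (2.0 * sigma ** 2))
    # T[i, j] = col[|i - j|], scaled by the PSF normalization factor
    idx = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    T = col[idx] / (sigma * np.sqrt(2.0 * np.pi))
    return np.kron(T, T)
```

Since each 1D factor carries the normalization 1/(σ√(2π)), the Kronecker product inherits the 2D PSF constant 1/(2πσ²) on its diagonal, matching h_σ(0, 0).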
Fig. 5.1 Example 2: Original image (left); blurred and noisy image (right).

Fig. 5.2 Example 2: Restored image by Algorithm 1 (left); restored image by Algorithm 2 (right).

5.3 Example 3

The previous example illustrates the restoration of an image that has been contaminated by noise and within-channel blur, but not by cross-channel blur. This example shows the restoration of an image that has been contaminated by noise, within-channel blur, and cross-channel blur. We use the same within-channel blur as in Example 2. The cross-channel blur is defined by the cross-channel blur matrix

A₃ = [ 0.70  0.20  0.10
       0.25  0.50  0.25
       0.15  0.10  0.75 ]

from [17]. The blurred and noisy image is represented by B = A X̄ A₃^T + E, where the noise level is ν = 10^-3. It is shown on the left-hand side of Figure 5.3. We restore this image with Algorithms 1–4. The quality of the restored images obtained with Algorithms 1, 2, and 4 is about the same, while the restoration determined by Algorithm 3 is of poor quality. The best attainable restoration is shown on the right-hand side of Figure 5.3. Table 5.3 compares the algorithms for this example.

Fig. 5.3 Example 3: Cross-channel blurred and noisy image (left); restored image by Algorithm 2 (right).

Noise level | Method      | MVP | Relative error | CPU time (sec)
10^-3       | Algorithm 1 | 354 | 7.56·10^-2     | 2.74
            | Algorithm 2 | 702 | 6.97·10^-2     | 4.99
            | Algorithm 3 | 112 | 2.64·10^-1     | 1.63
            | Algorithm 4 | 556 | 1.35·10^-1     | 5.77

Table 5.3 Results for Example 3.

5.4 Example 4

We evaluate the effectiveness of Algorithms 1–4 when applied to the restoration of a video defined by a sequence of black and white images. Video restoration is the problem of restoring a sequence of k images (frames). Each frame is represented by a matrix of n×n pixels. In the present example, we are interested in restoring 6 consecutive frames of a contaminated video. We consider the xylophone video from MATLAB. The video clip is in MP4 format with each frame having 240×240 pixels. The (unknown) blur- and noise-free frames are stored in the block vector X̄ ∈ R^{240²×6}. These frames are blurred by a blurring matrix A of the same kind and with the same parameters as in Example 2. Figure 5.4 shows the exact (original) frame and the contaminated version, which is to be restored. Blurred and noisy frames are generated by B = A X̄ + E, where the matrix E represents white Gaussian noise of level ν = 10^-3 or ν = 10^-2. Table 5.4 displays the performance of the algorithms. Algorithms 1 and 2 are seen to yield fairly accurate restorations, with the latter

algorithm requiring the least CPU time. Figure 5.5 shows restorations of frame 3 obtained with two of the algorithms for the noise level ν = 10^-3.

Fig. 5.4 Frame no. 3: Original frame (left); blurred and noisy frame (right).

Noise level | Method      | MVP  | Relative error | CPU time (sec)
10^-3       | Algorithm 1 | 1152 | 5.76·10^-2     | 8.72
            | Algorithm 2 | 1188 | 5.66·10^-2     | 6.23
            | Algorithm 3 |  130 | 1.19·10^-1     | 1.69
            | Algorithm 4 | 1190 | 5.67·10^-2     | 10.79
10^-2       | Algorithm 1 |  264 | 9.48·10^-2     | 1.65
            | Algorithm 2 |  228 | 9.53·10^-2     | 1.21
            | Algorithm 3 |   34 | 1.40·10^-1     | 0.44
            | Algorithm 4 |  250 | 9.48·10^-2     | 2.22

Table 5.4 Results for Example 4.

6 Conclusion

This paper discusses four approaches to the solution of linear discrete ill-posed problems with multiple right-hand sides. Algorithm 4 is clearly the least attractive of the algorithms considered. Algorithm 3 is the fastest for all examples, but determines approximate solutions of worse quality than Algorithms 1 and 2 for all image restoration examples. The accuracy achieved by Algorithm 3 depends on how well-suited the Krylov subspace generated by the matrix A and the first right-hand side b^(1) is to represent the desired solutions associated with the other right-hand sides b^(2), ..., b^(k). Algorithms 1 and 2 are attractive compromises between high accuracy and speed. Their relative speed depends on the computer architecture.

Fig. 5.5 Frame no. 3: Restored frame by Algorithm 1 (left); restored frame by Algorithm 2 (right).

Acknowledgments

The authors would like to thank Lars Eldén and the referee for comments. Research by L.R. is supported in part by NSF grants DMS-1729509 and DMS-1720259.

References

1. A. H. BENTBIB, M. EL GUIDE, K. JBILOU, AND L. REICHEL, Global Golub–Kahan bidiagonalization applied to large discrete ill-posed problems, J. Comput. Appl. Math., 322 (2017) 46–56.
2. D. CALVETTI, P. C. HANSEN, AND L. REICHEL, L-curve curvature bounds via Lanczos bidiagonalization, Electron. Trans. Numer. Anal., 14 (2002) 134–149.
3. D. CALVETTI AND L. REICHEL, Tikhonov regularization of large linear problems, BIT, 43 (2003) 263–283.
4. T. F. CHAN AND W. L. WAN, Analysis of projection methods for solving linear systems with multiple right-hand sides, SIAM J. Sci. Comput., 18 (1997) 1698–1721.
5. A. EL GUENNOUNI, K. JBILOU, AND H. SADOK, A block version of BiCGSTAB for linear systems with multiple right-hand sides, Electron. Trans. Numer. Anal., 16 (2003) 129–142.
6. H. W. ENGL, M. HANKE, AND A. NEUBAUER, Regularization of Inverse Problems, Kluwer, Dordrecht, 1996.
7. C. FENU, D. MARTIN, L. REICHEL, AND G. RODRIGUEZ, Block Gauss and anti-Gauss quadrature with application to networks, SIAM J. Matrix Anal. Appl., 34 (2013) 1655–1684.
8. C. FENU, L. REICHEL, AND G. RODRIGUEZ, GCV for Tikhonov regularization via global Golub–Kahan decomposition, Numer. Linear Algebra Appl., 23 (2016) 467–484.
9. N. P. GALATSANOS, A. K. KATSAGGELOS, R. T. CHIN, AND A. D. HILLARY, Least squares restoration of multichannel images, IEEE Trans. Signal Proc., 39 (1991) 2222–2236.
10. K. GALLIVAN, M. HEATH, E. NG, B. PEYTON, R. PLEMMONS, J. ORTEGA, C. ROMINE, A. SAMEH, AND R. VOIGT, Parallel Algorithms for Matrix Computations, SIAM, Philadelphia, 1990.
11. S. GAZZOLA, P. NOVATI, AND M. R. RUSSO, On Krylov projection methods and Tikhonov regularization, Electron. Trans. Numer. Anal., 44 (2015) 83–123.
12. S. GAZZOLA, E. ONUNWOR, L. REICHEL, AND G. RODRIGUEZ, On the Lanczos and Golub–Kahan reduction methods applied to discrete ill-posed problems, Numer. Linear Algebra Appl., 23 (2016) 187–204.
13. G. H. GOLUB, F. T. LUK, AND M. L. OVERTON, A block Lanczos method for computing the singular values and corresponding singular vectors of a matrix, ACM Trans. Math. Software, 7 (1981) 149–169.

14. G. H. GOLUB AND G. MEURANT, Matrices, Moments and Quadrature with Applications, Princeton University Press, Princeton, 2010.
15. P. C. HANSEN, Rank-Deficient and Discrete Ill-Posed Problems, SIAM, Philadelphia, 1998.
16. P. C. HANSEN, Regularization Tools version 4.0 for MATLAB 7.3, Numer. Algorithms, 46 (2007) 189–194.
17. P. C. HANSEN, J. NAGY, AND D. P. O'LEARY, Deblurring Images: Matrices, Spectra, and Filtering, SIAM, Philadelphia, 2006.
18. K. JBILOU, H. SADOK, AND A. TINZEFTE, Oblique projection methods for linear systems with multiple right-hand sides, Electron. Trans. Numer. Anal., 20 (2005) 119–138.
19. S. KINDERMANN, Convergence analysis of minimization-based noise level-free parameter choice rules for linear ill-posed problems, Electron. Trans. Numer. Anal., 38 (2011) 233–257.
20. S. KINDERMANN, Discretization independent convergence rates for noise level-free parameter choice rules for the regularization of ill-conditioned problems, Electron. Trans. Numer. Anal., 40 (2013) 58–81.
21. F. LI, M. K. NG, AND R. J. PLEMMONS, Coupled segmentation and denoising/deblurring for hyperspectral material identification, Numer. Linear Algebra Appl., 19 (2012) 15–17.
22. J. MENG, P.-Y. ZHU, AND H.-B. LI, A block GCROT(m,k) method for linear systems with multiple right-hand sides, J. Comput. Appl. Math., 255 (2014) 544–554.
23. C. C. PAIGE AND M. A. SAUNDERS, LSQR: An algorithm for sparse linear equations and sparse least squares, ACM Trans. Math. Software, 8 (1982) 43–71.
24. L. REICHEL AND G. RODRIGUEZ, Old and new parameter choice rules for discrete ill-posed problems, Numer. Algorithms, 63 (2013) 65–87.
25. Y. SAAD, On the Lanczos method for solving symmetric linear systems with several right-hand sides, Math. Comp., 48 (1987) 651–662.
26. F. TOUTOUNIAN AND S. KARIMI, Global least squares method (Gl-LSQR) for solving general linear systems with several right-hand sides, Appl. Math. Comput., 178 (2006) 452–460.