SIAM J. SCI. COMPUT., Vol. 37, No. 2, pp. B332–B359. © 2015 Society for Industrial and Applied Mathematics.

A FRAMEWORK FOR REGULARIZATION VIA OPERATOR APPROXIMATION

JULIANNE M. CHUNG, MISHA E. KILMER, AND DIANNE P. O'LEARY

Abstract. Regularization approaches based on spectral filtering can be highly effective in solving ill-posed inverse problems. These methods, however, require computing the singular value decomposition (SVD) and choosing appropriate regularization parameters. These tasks can be prohibitively expensive for large-scale problems. In this paper, we present a framework that uses operator approximations to efficiently obtain good regularization parameters without an SVD of the original operator. Instead, we approximate the original operator with a nearby structured or separable one whose SVD is easily computable. Highly effective methods can then be used to efficiently compute good regularization parameters for the nearby problem. Then, we solve the original problem iteratively using the regularization determined for the approximate problem. A variety of regularization approaches can be incorporated into this framework, but we focus here on the recently developed windowed regularization, a generalization of Tikhonov regularization in which different regularization parameters are used in different regions of the spectrum. We derive bounds on the perturbation to the computed solution and residual resulting from using the regularization determined for the approximate operator. We demonstrate the effectiveness of our method in computations using operator approximations such as sums of Kronecker products, block circulant with circulant blocks matrices, and Krylov subspace approximations.

Key words. singular value decomposition, Tikhonov regularization, inverse problem, ill-posed, deconvolution, Kronecker approximation, spectral filter, Golub–Kahan, parameter selection, hybrid iterative methods

AMS subject classifications. 65F20, 65F22, 65F30

Submitted to the journal's Computational Methods in Science and Engineering section November 14, 2013; accepted for publication (in revised form) January 14, 2015; published electronically April 21, 2015. Department of Mathematics, Virginia Tech, Blacksburg, VA (jmchung@vt.edu). Department of Mathematics, Tufts University, Medford, MA (misha.kilmer@tufts.edu). Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, MD (oleary@cs.umd.edu).

1. Introduction. Large-scale inverse problems arise in many applications such as astronomy, biomedical imaging, surveillance, and nondestructive evaluation; see [14, 16, 40, 41] and references therein. We consider the following linear model:

(1.1) $b = Ax + n$,

where $A \in \mathbb{R}^{m \times n}$, $m \ge n$, denotes the forward operator, $b \in \mathbb{R}^m$ represents the observed data, $n \in \mathbb{R}^m$ is additive noise, and $x \in \mathbb{R}^n$ is the desired solution. Given $A$ and $b$, the goal of the inverse problem is to reconstruct $x$. In this paper, we will use image deblurring as an example, but other inverse problems fit within the same framework. In image deblurring, $x$ represents the true image, $b$ represents the observed blurred image, and $A$ contains knowledge about the blurring operator.

Most inverse problems are ill-posed, meaning small perturbations in the observation may lead to large changes in the solution. To mitigate this difficulty, regularization is often used, adding constraints that suppress the amplification of noise during the inversion process. Choosing an appropriate regularization method and a

good regularization parameter to balance fidelity to the model with satisfaction of the constraints is key to solving any inverse problem. Indeed, the quality of the reconstruction relies heavily on a proper choice of the regularization parameter. For problems where the singular value decomposition (SVD) of $A$ is available, various methods such as generalized cross-validation (GCV), the discrepancy principle, and the residual periodogram have been proposed for choosing a regularization parameter [39, 22, 29, 50]. However, for large-scale problems where the SVD is not available, selecting a good regularization parameter remains a challenging task.

Moreover, some regularization methods rely on the SVD for computing the solution $x$. An example is the PP-TSVD algorithm given in [31], in which edge information is recovered under the assumption that the SVD is available. A newer approach that relies on computation of the SVD, presented in [15], uses regularization windows defined in terms of the singular values and regularizes differently in each window, leading to considerable improvement over standard Tikhonov regularization in the quality of image restoration.

As a motivating example, suppose that we are presented with a blurred image for which the blur is a perturbation of a spatially invariant blur. If we replace the blurring matrix by the spatially invariant one, the SVD is quite inexpensive, and we have available to us a full complement of regularization methods and tools for determining a good regularization parameter (e.g., discrepancy principle, GCV, etc.). Simplifying in this way is attractive, but the resulting reconstructed image is unlikely to be acceptable because of the error introduced into the blurring matrix. On the other hand, we could hope that the regularization parameter determined (very economically) for the simplified problem might also be useful for our original problem. Knowing this parameter allows us to solve the regularized (well-conditioned) problem inexpensively with the correct blurring matrix, using iterative methods, without access to the SVD. In this work, we investigate the usefulness of this idea. We note that the philosophy is quite different from previous methods in the literature that exploit the relationship between the operators by either replacing the blur by its approximation, which can lose important features, or by using the approximation as a preconditioner, which can have the effect of mixing signal and noise spaces and then unacceptably magnifying the noise.

In this paper, we propose a framework for regularization that uses operator approximations to determine the regularization for large-scale problems in which the SVD of $A$ is not available. The framework consists of three steps:

1. Find a related but simpler operator $\hat{A} \approx A$.
2. Choose a regularization method and find suitable regularization operators/parameters for the approximate problem,

   (1.2) $b = \hat{A} x + n$.

3. Find the regularized solution of the original problem (1.1), using the same regularization method and operators/parameters determined in step 2.

One of the key advantages of our proposed framework is that we apply the regularization to the original problem (1.1). The operator approximation is only used to determine the regularization and regularization parameter in a computationally efficient way.
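Schematically, in MATLAB-style pseudocode, the three steps read as follows. This is a minimal outline only; every function name here is a hypothetical placeholder for the concrete choices discussed in sections 2 and 3, not a routine from the paper or from RestoreTools:

    Ahat = approximate_operator(A);          % step 1: nearby structured/separable operator
    [Uh, Sh, Vh] = svd(full(Ahat));          % cheap by construction of Ahat
    lambda = choose_parameter(Sh, Uh' * b);  % step 2: e.g., GCV applied to the approximate problem
    x = solve_regularized(A, b, lambda);     % step 3: iterative solve with the ORIGINAL operator A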
Although a solution to the approximate problem (1.2) may provide an estimate of the desired solution for the original problem (1.1), previous researchers have observed that the approximate problem can yield poor reconstructions [35, 12]. Instead, we propose to only use the operator approximation in step 2 of our framework,

so that sophisticated regularization and regularization parameter selection methods can be utilized for problems for which it is too expensive to apply the methods directly. Once the regularization parameters are chosen, step 3 becomes a well-studied problem and the subject of intense previous research. For example, for Tikhonov regularization, step 3 requires solving a large linear least squares problem that is not very ill-conditioned, for which iterative Krylov methods [47, 48, 20] (among other choices) are appropriate. For variational regularization such as total variation, the resulting nonlinear optimization problem can be solved using standard techniques; see, for example, [10, 21, 46, 55]. Therefore we focus our attention on steps 1 and 2.

Other researchers have considered using operator approximations to make difficult, large-scale problems more tractable. For example, structured operator approximations have been used to construct preconditioners that accelerate iterative methods [26, 8, 19]. In addition, Kronecker product approximations have been used to construct preconditioners and estimate regularization parameters for Tikhonov regularization with GCV in [35, 12], and fast trigonometric transform matrices such as the DCT have been proposed for use as approximate SVD bases [13]. Subspace approximations such as Krylov methods have been used to efficiently compute the L-curve and discrepancy principle [6, 5, 4]. Hybrid methods that combine an iterative approach with direct regularization also take advantage of operator approximations to compute regularized solutions [45, 37].

In this paper, we propose a new framework for solving inverse problems, where operator approximations are used to determine regularization, but a solution to the original problem is provided. We consider various examples of this framework, leading to new regularization techniques for large-scale problems. One of the main contributions of this work is to extend newly developed windowed regularization to problems where computing the SVD is not feasible. We develop a novel approach for automatic window selection and provide connections with hybrid methods. Another significant contribution of this work is that we use the proposed framework to derive error bounds for predicting errors and residuals for the original problem, thereby providing theoretical justification for the new framework.

To establish notation, let $A = U \Sigma V^T$ be the SVD of $A$, where the $m \times n$ ($m \ge n$) diagonal matrix $\Sigma$ contains the singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n > 0$, and the columns of the orthogonal matrices $U$ and $V$ contain the left and right singular vectors $u_i$, $i = 1, 2, \ldots, m$, and $v_i$, $i = 1, 2, \ldots, n$, respectively. Similarly, let $\hat{A} = \hat{U} \hat{\Sigma} \hat{V}^T$ be the SVD of $\hat{A}$, where $\hat{\sigma}_1 \ge \hat{\sigma}_2 \ge \cdots \ge \hat{\sigma}_n > 0$ are the singular values and $\hat{u}_i$ and $\hat{v}_i$ are the corresponding singular vectors.

The paper is organized as follows. Section 2 describes some operator approximations $\hat{A}$ that can be used in the proposed framework for regularization. We can obtain simplified operators by, for example, eliminating spatial variation in the forward model, applying simpler boundary conditions, or projecting into a lower dimensional subspace. Once a suitable operator approximation is obtained, various regularization methods and regularization parameter selection methods can be used in step 2 of the proposed framework. Some of these methods are described in section 3.
In particular, we derive an extension of the windowed approach from [15] and develop a new approach for selecting windows by recursive partitioning of signal and noise subspaces. An error analysis for the new framework is provided in section 4. Section 5 contains numerical results, and conclusions can be found in section 6.

2. Simplifying the operator. Many operator approximations $\hat{A}$ can be used in the proposed framework for regularization. In this section, we describe three approaches.

Specifically, we investigate a Kronecker product approximation to the blur function, a structured matrix approximation that imposes different boundary conditions, and a Krylov subspace approximation that projects the operator into a lower dimensional subspace. Each of these approximations provides a simpler matrix whose SVD can be computed, so that various regularization methods can be used and good regularization parameters can be computed efficiently.

2.1. Kronecker product approximation. In image deblurring or image deconvolution, the blurring process can be described using a point spread function (PSF). We assume that the PSF is known.¹ For spatially invariant blur, meaning that the same blur function produces every pixel in the image, the blur matrix is highly structured [32, Chap. 4]. In particular, if the pixel image of the PSF is a rank-1 matrix, i.e., if the horizontal and vertical components of the blur function can be separated, then we say that the blur matrix $A$ is separable. We can then express $A$ as a Kronecker product $C \otimes D$, and the SVD of $A$ can be calculated in terms of the SVDs of the much smaller matrices $C$ and $D$. If the spatially invariant PSF is not separable, then $A$ can be approximated by a sum of Kronecker products [53, 35, 42].

If the PSF is well approximated by a rank-one matrix, then we can set $\hat{A}$ equal to the one-term Kronecker product approximation to $A$. Then $\hat{A} \approx A$, where $\hat{A} = C \otimes D$. Let $U_C \Sigma_C V_C^T$ and $U_D \Sigma_D V_D^T$ be the SVDs of $C$ and $D$, respectively. Then the SVD of $\hat{A}$ is

(2.1) $\hat{A} = (U_C \Sigma_C V_C^T) \otimes (U_D \Sigma_D V_D^T)$
(2.2) $\quad\; = (U_C \otimes U_D)(\Sigma_C \otimes \Sigma_D)(V_C \otimes V_D)^T$
(2.3) $\quad\; = \hat{U} \hat{\Sigma} \hat{V}^T$,

where a permutation has been employed in the last two equalities so that the diagonal entries in $\hat{\Sigma}$ are sorted from largest to smallest. Thus, since the cost of deriving $C$ and $D$ reduces to the cost of finding a rank-1 approximation to the (weighted) PSF matrix [42], for an $N \times N$ image the total cost of finding the SVD of $\hat{A}$ is only $O(N^3)$.

In many cases, the (weighted) PSF is significantly better approximated by taking the rank (say, $\rho$) to be small but greater than one. In this case, the approximation $\hat{A} = (U_{C_1} \otimes U_{D_1}) \hat{\Sigma} (V_{C_1} \otimes V_{D_1})^T$ has been proposed (see, for example, [36] and references therein), with $\hat{\Sigma}$ the diagonal part of the matrix

$(U_{C_1} \otimes U_{D_1})^T \left( \sum_{i=1}^{\rho} C_i \otimes D_i \right) (V_{C_1} \otimes V_{D_1}),$

where the summation is the optimal Kronecker approximation (with certain structure constraints) to $A$ obtained from the low-rank approximation to the weighted PSF matrix. Again, the cost of finding the SVD of $\hat{A}$ is $O(N^3)$ for an $N \times N$ image.

¹In many real imaging systems, an image of a point source of light in the center of the region of interest provides the PSF estimate.
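The following MATLAB sketch illustrates how (2.1)–(2.3) can be exploited in practice. It is a minimal illustration, assuming the small factors C and D are already in hand (e.g., from a rank-1 factorization of the PSF array); all variable names here are ours, not the paper's:

    % SVD of the one-term Kronecker approximation Ahat = kron(C, D),
    % computed from the SVDs of the small factors only.
    [UC, SC, VC] = svd(C);
    [UD, SD, VD] = svd(D);
    sigma = kron(diag(SC), diag(SD));     % singular values of kron(C, D)
    [sigma, p] = sort(sigma, 'descend');  % permutation making Sigma-hat decreasing
    % The singular vectors are the correspondingly permuted columns of
    % kron(UC, UD) and kron(VC, VD); they need never be formed explicitly.
    % For b = vec(Bimg), a spectral coefficient uhat_i' * b is computed as
    %     uD(:, iD)' * Bimg * uC(:, iC)
    % since kron(uC, uD)' * vec(Bimg) = vec(uD' * Bimg * uC).

The sort permutation p gives the index map between the singular-value index $i$ and the factor indices $(i_C, i_D)$, so filtered solutions can be applied entirely through the small factors.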

2.2. Approximations from different boundary conditions. It is well known that certain boundary conditions impose certain structure on a spatially invariant blur matrix [32, Chap. 4]. For example, if we assume zero boundary conditions, then the blur matrix is block Toeplitz with Toeplitz blocks (BTTB). If we assume periodic boundary conditions, then the blur matrix is block circulant with circulant blocks (BCCB). If we assume reflexive or antireflexive boundary conditions and the PSF is doubly symmetric, then the blur matrix is a sum of highly structured matrices [51, 18]. We can exploit these structures in determining approximate operators $\hat{A}$.

If our operator $A$ is BTTB and results from zero boundary conditions, then the optimal BCCB approximation with respect to the Frobenius norm is readily available [9, 11] and can be diagonalized by the normalized discrete unitary Fourier transform matrix $F$. Then $\hat{A} = F^H \Lambda F$, where $\Lambda$ is a diagonal matrix containing the eigenvalues of $\hat{A}$. For reflexive boundary conditions, if the PSF can be well approximated by a doubly symmetric PSF, then the resulting $\hat{A}$ can be diagonalized using an orthogonal discrete cosine transform matrix. Similarly, if the boundary conditions are antireflexive, then $\hat{A}$ can be diagonalized using an orthogonal discrete sine transform matrix [1].

2.3. Krylov subspace approximations. For problems where computing the SVD of $A$ is not feasible, iterative methods can be used to project the original problem into a lower dimensional subspace. The projected operator can provide a low dimensional approximation to the original matrix [37], and this approximation can be used to select regularization parameters for the original problem. Consider using the Krylov subspace

$\mathcal{K}_k(A^T A, A^T b) = \mathrm{span}\{A^T b,\ (A^T A) A^T b,\ \ldots,\ (A^T A)^{k-1} A^T b\}$

generated by the matrix $A^T A$ and the vector $A^T b$ [24]. The subspace has dimension at most $k$ and can be generated using Golub–Kahan bidiagonalization² [23]. After $k$ iterations of Golub–Kahan bidiagonalization, we have the relationship

(2.4) $A Y_k = Z_{k+1} B_k$,

where $Y_k \in \mathbb{R}^{n \times k}$ and $Z_{k+1} = [z_1, \ldots, z_{k+1}] \in \mathbb{R}^{m \times (k+1)}$ contain orthonormal columns, and $B_k \in \mathbb{R}^{(k+1) \times k}$ is a lower bidiagonal matrix. Let $B_k = P S Q^T$ be the SVD of $B_k$, where $P \in \mathbb{R}^{(k+1) \times (k+1)}$ and $Q \in \mathbb{R}^{k \times k}$ are orthogonal and $S$ is $(k+1) \times k$ diagonal. It then follows from (2.4) that

(2.5) $A (Y_k Q) = (Z_{k+1} P) S$, with $\hat{V}_k \equiv Y_k Q$ and $\hat{U}_{k+1} \equiv Z_{k+1} P$.

Consistent with our proposed regularization framework, in this context we would advocate using the Krylov subspace approximation

(2.6) $A \approx \hat{A} = \hat{U}_{k+1} S \hat{V}_k^T = \hat{U} \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix} \hat{V}^T$,

where $\hat{U}_{k+1}$ and $\hat{V}_k$ (likewise their extensions $\hat{U}$, $\hat{V}$) contain orthonormal columns.

²We assume no termination of the iteration, and therefore the dimension of $\mathcal{K}_{k+1}(A^T A, A^T b)$ is $k+1$.
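A minimal MATLAB sketch of the bidiagonalization behind (2.4)–(2.5) follows, without the reorthogonalization a careful implementation would add. The function name and the handles Afun/Atfun are our placeholders, not routines from the paper:

    function [Z, B, Y] = gk_bidiag(Afun, Atfun, b, k)
    % k steps of Golub-Kahan bidiagonalization: A*Y = Z*B, where Z (m x (k+1))
    % and Y (n x k) have orthonormal columns (in exact arithmetic) and B
    % ((k+1) x k) is lower bidiagonal.  Afun(x) = A*x and Atfun(y) = A'*y.
    Z = b / norm(b);
    B = zeros(k+1, k);
    Y = [];
    for j = 1:k
        r = Atfun(Z(:, j));
        if j > 1
            r = r - B(j, j-1) * Y(:, j-1);   % three-term recurrence
        end
        B(j, j) = norm(r);
        Y(:, j) = r / B(j, j);
        p = Afun(Y(:, j)) - B(j, j) * Z(:, j);
        B(j+1, j) = norm(p);
        Z(:, j+1) = p / B(j+1, j);
    end
    end

The factors in (2.5)–(2.6) then come from the SVD of the small matrix: [P, S, Q] = svd(B) gives $\hat{V}_k$ = Y*Q and $\hat{U}_{k+1}$ = Z*P at negligible cost.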

3. Selecting regularization and regularization parameters. Using a suitable operator approximation $\hat{A}$, various regularization approaches can be incorporated in the proposed framework, and standard regularization parameter selection methods can be used. In this section, we describe some common variational regularization methods that can be used, and we extend windowed regularization so that it can be used in this framework. We provide novel methods for selecting the windows and draw connections with hybrid iterative methods.

3.1. Variational methods. Variational methods have been popular and effective for the regularization of ill-posed inverse problems. Rather than solving the least-squares problem

(3.1) $\min_x \|Ax - b\|_2^2$,

the basic idea is to solve the regularized problem

(3.2) $\min_x \left\{ \|Ax - b\|_2^2 + \lambda R(x) \right\}$,

where $\lambda > 0$ is a regularization parameter³ and $R(x)$ is a regularization term. Determining a useful value of $\lambda$ can be quite computationally expensive, so we focus here on how approximate operators can be used to reduce this cost.

³Parameter $\lambda$ is often referred to as $\lambda^2$ in the literature.

One of the most well-known variational methods is Tikhonov regularization [25], where $R(x) = \|Lx\|_2^2$. Typical choices for the regularization operator $L$ include the identity matrix or discretizations of derivative operators. For $L = I$, the solution of the Tikhonov problem can be expressed in terms of a filtered SVD expansion,

(3.3) $x_{\mathrm{tik}}(\lambda) \equiv A_\lambda b = \sum_{i=1}^n \phi_i \frac{u_i^T b}{\sigma_i} v_i$,

where $\phi_i = \frac{\sigma_i^2}{\sigma_i^2 + \lambda}$ are filter factors. In general, if $A$ and $L$ can be diagonalized using the same singular vectors, then it is possible to write the solution of the Tikhonov problem in terms of a filtered SVD expansion [32]. For example, in image deblurring, if we assume periodic boundary conditions and take $L$ to be an approximation of the partial derivatives of the solution, the discrete Fourier transform matrix $F$ diagonalizes both $A$ and $L$ [32]. If the singular vectors of $A$ and $L$ are different, then a generalized SVD [27] must be used. This motivates the idea of determining the regularization parameter $\lambda$ for a closely related problem determined by changing $A$ or $L$ (or both) so that their singular vectors are the same.
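For concreteness, the filtered expansion (3.3) takes only a few lines of MATLAB once an SVD is available. This is a minimal sketch with our own variable names, using the SVD of $\hat{A}$ as the framework suggests and assuming lambda has already been chosen:

    % Tikhonov (L = I) filtered SVD solution (3.3), built from the SVD of Ahat.
    [Uh, Sh, Vh] = svd(Ahat, 'econ');
    s   = diag(Sh);
    c   = Uh' * b;                      % spectral coefficients uhat_i' * b
    phi = s.^2 ./ (s.^2 + lambda);      % Tikhonov filter factors
    x   = Vh * (phi .* (c ./ s));       % x = sum_i phi_i (uhat_i'b / sigma_i) vhat_i

Combined with the GCV sketch given in the next pages, this supplies step 2 of the framework in its simplest form.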

Another popular approach is total variation (TV) regularization, where $R(x)$ corresponds to a discrete approximation of the total variation of the solution [49, 34, 54]. Computing trial solutions for different values of $\lambda$ can be very expensive. Strong, Aujol, and Chan [52, sect. 3], for example, discuss using bisection to determine a regularization parameter that removes features below a given scale. Whether bisection, gradient descent, or Newton-like methods are used to generate trial values of $\lambda$, using a good operator approximation $\hat{A}$ that allows fast matrix-vector products would speed the computation at each iteration.

Selecting a regularization parameter for (3.2) can be a delicate and cumbersome task, and various general methods have been proposed in the literature. Some methods, such as the discrepancy principle, require an estimate of the noise level. Other methods are efficient only when an SVD of $A$ is available. Generalized cross-validation (GCV) [22], for example, seeks the parameter $\lambda$ that minimizes

(3.4) $\mathrm{GCV}(\lambda) = \frac{n \|(A A_\lambda - I) b\|_2^2}{[\mathrm{trace}(I - A A_\lambda)]^2}$,

where $A_\lambda$ is the operator that maps $b$ to the computed $x$. For standard Tikhonov, where $L = I$, the GCV function can be written in terms of the SVD,

(3.5) $\mathrm{GCV}(\lambda) = \frac{n \left( \sum_{i=1}^n \left( \frac{\lambda}{\sigma_i^2 + \lambda} \right)^2 c_i^2 + \sum_{i=n+1}^m c_i^2 \right)}{\left( (m - n) + \sum_{i=1}^n \frac{\lambda}{\sigma_i^2 + \lambda} \right)^2}$,

where $c_i = u_i^T b$. A similar Tikhonov GCV function can be derived for the case where $A$ and $L$ are diagonalized in the same basis [32]. Using related ideas, Liao, Li, and Ng [38, sect. 2A] note that the computation of a TV regularization parameter satisfying the GCV criterion can be greatly simplified if the blurring matrix is diagonalized by a fast transform. (This is true, for example, for symmetric, spatially invariant blurs.)

For problems where the SVD of $A$ is not available, sophisticated parameter choice methods such as GCV are not computationally feasible. Thus, in the proposed framework, we work with a matrix approximation $\hat{A}$ whose SVD is obtainable and select a regularization parameter for the approximate problem,

(3.6) $\min_x \left\{ \|\hat{A} x - b\|_2^2 + \lambda R(x) \right\}$.

We propose to use the regularization parameter obtained from (3.6) in (3.2) and then employ a suitable solver on (3.2) for this fixed $\lambda$. In this way, we can use, for example, GCV for the choice of the regularization parameter for wider classes of operators for Tikhonov, TV, and other regularization methods. There are a host of other methods for solving (3.2) and computing $\lambda$ in the context of Tikhonov, TV, and other regularization methods. In general, our framework can be computationally useful whenever we can devise an approximate operator $\hat{A}$ which makes assessing a candidate value of $\lambda$ much easier than for the original matrix $A$.
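A minimal MATLAB sketch of minimizing (3.5) using the SVD of $\hat{A}$: here s holds the $n$ singular values, c = $\hat{U}^T b$ (length $n$), b is the data vector, and m is the row dimension; the tail sum $\sum_{i=n+1}^m c_i^2$ is obtained as $\|b\|^2 - \|c\|^2$. The search interval passed to fminbnd is our own choice, and all names are illustrative:

    gcvfun = @(lam) numel(s) * ( sum(((lam ./ (s.^2 + lam)).^2) .* c.^2) ...
                                 + max(norm(b)^2 - norm(c)^2, 0) ) ...
                    / ( (m - numel(s)) + sum(lam ./ (s.^2 + lam)) )^2;
    lambda = fminbnd(gcvfun, 1e-10, s(1)^2);   % minimize GCV over a plausible range

3.2. Windowed regularization. As noted in [15], superior reconstructions can sometimes be obtained by windowed regularization, where windows in the spectral domain break the problem into subproblems, each with a different regularization parameter. One of the drawbacks of the windowed approach is that it requires the SVD of $A$ for defining the windows as well as for choosing regularization parameters. In this section we extend the windowed approach so that windowed solutions can be computed for more general forward operators. We present results for the case of nonoverlapping (Shannon) windows, but we remark that the ideas can be extended to overlapping windows. Although the windowed approach was originally derived in the frequency domain of the operator, giving expressions for $V^T x$, we express the results in the coordinate system for $x$, giving insight into how approximate operators can be used.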

Given the SVD of $A$, for $j = 1, \ldots, p$, we define the $j$th Shannon window vector $w^{(j)} \in \mathbb{R}^{n \times 1}$ to have entries

(3.7) $w_i^{(j)} = \begin{cases} 1 & \text{for } \tau^{(j-1)} < \sigma_i \le \tau^{(j)}, \\ 0 & \text{otherwise}, \end{cases}$

where $\tau^{(0)} \le \cdots \le \tau^{(p)}$, $\tau^{(0)} < \sigma_n$, and $\tau^{(p)} \ge \sigma_1$. Let $W^{(j)} = \mathrm{diag}(w^{(j)})$, $j = 1, \ldots, p$. The windowed reconstruction from [15, eq. (3.3)] can be written as

(3.8) $x_{\mathrm{win}} = \sum_{j=1}^p V (\Sigma^T \Sigma + \lambda^{(j)} I)^{-1} W^{(j)} \Sigma^T U^T b$,

where $\lambda^{(j)}$ is the regularization parameter for the $j$th window. Let $D$ be a diagonal matrix with entries $d_i = \sqrt{\sum_j \lambda^{(j)} w_i^{(j)}}$, $i = 1, \ldots, n$. Then the windowed solution (3.8) satisfies

(3.9) $(A^T A + V D D V^T) x = A^T b$,

which is the solution vector for the minimization problem

(3.10) $\min_x \|Ax - b\|_2^2 + \|D V^T x\|_2^2$.

Notice that the SVD of $A$ is needed to compute $D V^T$. Since this SVD can be expensive, we propose using the SVD of $\hat{A}$ instead, replacing (3.10) by

(3.11) $\min_x \|Ax - b\|_2^2 + \|\hat{D} \hat{V}^T x\|_2^2$,

where $\hat{D}$ is a diagonal matrix with entries $\hat{d}_i = \sqrt{\sum_j \hat{\lambda}^{(j)} \hat{w}_i^{(j)}}$. We note that in the special case that the Ritz approximation from section 2.3 is used for $p$ windows, there are actually implicitly $p+1$ windows, with the last window corresponding to the columns of $\hat{V}$ that are not formed through the iterative process and the corresponding value of $\hat{\lambda}^{(p+1)}$ being equal to infinity. This is relevant for the discussion on hybrid methods.

All of the methods described in [15] for defining the Shannon windows and choosing regularization parameters in a windowed framework can be used in our framework. For selecting regularization parameters for the windowed approach, we suggest using one of the methods described in [15]. For automatically choosing the windows, we now propose a new approach, based on a recursive partitioning of the signal and noise subspaces.

We illustrate our ideas using a Picard plot corresponding to the inverse heat problem described in [28], with additive Gaussian noise. The Picard plot shown in Figure 1 illustrates the typical behavior of ill-posed inverse problems, where the spectral coefficients $|u_i^T b|$ (blue stars) initially decay to 0 faster than the singular values $\sigma_i$ (red line).⁴ In general, the smaller singular values (large index $i$) correspond to the noise subspace and the larger singular values (small index $i$) correspond to the signal subspace. However, for many problems, there is a transition region where some mixing of signal and noise occurs, and windowed regularization treats each of these regions separately.

⁴For this example, we take $\hat{u}_i = u_i$.
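A minimal MATLAB sketch of assembling $\hat{D}$ and solving (3.11) follows. The paper solves (3.11) with CGLS; MATLAB's lsqr applied to the stacked least-squares system min ||[A; Dhat*Vhat'] x - [b; 0]|| is a mathematically equivalent stand-in. Here shat holds the singular values of $\hat{A}$, lam(j) the per-window parameters, and tau the thresholds (tau(1) = $\tau^{(0)}$, ..., tau(p+1) = $\tau^{(p)}$); all names are ours, and in practice products with Vhat would be applied through the structure of $\hat{A}$ (e.g., Kronecker factors) rather than explicitly:

    dhat = zeros(size(shat));
    for j = 1:numel(lam)
        w = (shat > tau(j)) & (shat <= tau(j+1));   % Shannon window (3.7)
        dhat(w) = sqrt(lam(j));
    end
    rhs = [b; zeros(size(Vhat, 2), 1)];
    xw  = lsqr(@(x, flag) stacked(x, flag, A, Vhat, dhat), rhs, 1e-6, 200);

    function y = stacked(x, flag, A, Vhat, dhat)
    % Products with [A; Dhat*Vhat'] and its transpose, as lsqr requires.
    if strcmp(flag, 'notransp')
        y = [A * x; dhat .* (Vhat' * x)];
    else
        m = size(A, 1);
        y = A' * x(1:m) + Vhat * (dhat .* x(m+1:end));
    end
    end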

Fig. 1. Picard plot and three windows for the inverse heat problem. The red solid line corresponds to the singular values, and the blue stars indicate the spectral coefficients $|u_i^T b|$. The region between the magenta dashed line with circles and the black dotted line with squares is the transition window. To the left is the signal window, and to the right the noise window.

For illustration, assume that we want to define three windows $w^{(1)}$, $w^{(2)}$, and $w^{(3)}$ corresponding to the noise, transition, and signal subspaces, respectively. The first step is to isolate the noise subspace by determining where the coefficients of the data appear most similar to the noise coefficients. That is, we would like to determine a truncation index $k_1$ for which $|\hat{u}_i^T b| \approx |\hat{u}_i^T n|$ for $i = k_1, \ldots, m$. There are a variety of ways to find $k_1$. We propose two approaches.

1. We propose to use the truncation parameter determined by GCV for the truncated SVD solution, $x_{\mathrm{TSVD}}(k) = \sum_{i=1}^k \frac{\hat{u}_i^T b}{\hat{\sigma}_i} \hat{v}_i$. That is, we let $k_1$ be the value of $k$ that minimizes the GCV function

   (3.12) $\mathrm{GCV}_n(k) = \frac{n}{(n-k)^2} \sum_{i=k+1}^n (\hat{u}_i^T b)^2$.

2. Another approach to compute $k_1$ is to use an estimate $\nu$ of the noise variance to identify spectral coefficients $\hat{u}_i^T b$ that seem to be noise. We can use a standard statistical test to identify the smallest value of $k$ for which the sequence $\{|\hat{u}_i^T b|\}$, $i = k, \ldots, m$, is a plausible sample from the noise distribution.

In order to distinguish the signal and transition subspaces, we propose to use the GCV criterion to partition the remaining singular values, $\sigma_1$ to $\sigma_{k_1 - 1}$. In particular, let $k_2$ be the value of $k$ that minimizes the TSVD GCV function $\mathrm{GCV}_{k_1 - 1}(k)$. Then let $\tau^{(1)} = \sigma_{k_1}$ and $\tau^{(2)} = \sigma_{k_2}$, and define the windows using (3.7). For our example, $k_1 = 112$ and $k_2 = 77$, as illustrated by the vertical lines in Figure 1. We can recursively partition these two windows until the desired number of windows is reached or until GCV fails to discriminate subspaces (i.e., we encounter a window with no singular values).
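A minimal MATLAB sketch of the TSVD-GCV splitting rule (3.12); the function name is ours and c = $\hat{U}^T b$ is assumed given:

    function k = tsvd_gcv(c)
    % Truncation index minimizing GCV_n(k) = n * sum_{i>k} c_i^2 / (n-k)^2.
    n    = numel(c);
    tail = flipud(cumsum(flipud(c(:).^2)));   % tail(j) = sum_{i>=j} c_i^2
    g    = zeros(n-1, 1);
    for k = 1:n-1
        g(k) = n * tail(k+1) / (n - k)^2;
    end
    [~, k] = min(g);
    end

    % Recursive use, as in the text: k1 splits off the noise window, and the
    % same rule applied to the leading coefficients splits signal from transition:
    %   k1 = tsvd_gcv(c);   k2 = tsvd_gcv(c(1:k1-1));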

3.3. Hybrid iterative methods. Iterative methods are often used for solving large-scale ill-posed inverse problems, where early termination of the iteration imposes regularization [30]. For example, one might use $k$ steps of a Krylov subspace method to determine the best solution to (3.1) within a $k$-dimensional Krylov subspace. If a method based on running Golub–Kahan bidiagonalization (e.g., LSQR) is used, at step $k$ the iterate is produced (implicitly) by solving a so-called projected least squares problem involving the bidiagonal matrix in (2.4). It has been noted (cf. [45, 3, 2, 37, 17, 33]) that for ill-posed inverse problems, this projected problem can become ill-conditioned for large values of $k$.

One approach is to use a regularization method such as Tikhonov to solve the projected problem. Some methods for selecting regularization parameters for the projected problem have been studied in [37, 17]. A second approach is to use a Krylov approximation to obtain an estimate of the regularization parameter and then solve the regularized problem (3.2) using an iterative approach [4, 7]. This method falls under our framework, since the approximate problem was used to determine the regularization parameter for the original problem. Iterative methods that implement short-term recurrences can then be used to efficiently compute the regularized solution, but determining a good initial $k$ can be difficult. Comparisons between these two approaches are provided for Tikhonov regularization in [37]. In this paper, we consider hybrid methods that use regularization methods such as windowed regularization.

First, we see how hybrid methods relate to our framework. The framework first requires the choice of an approximate operator $\hat{A}$. When using a Krylov method, it is natural to take this to be the rank-$k$ approximation obtained after $k$ steps. We can apply the Krylov method to $A$ or to a closely related matrix for which multiplication is easier. Second, we need to choose a regularization method and find suitable parameters. We can, for example, use windowed regularization on the $k$th approximation $\hat{A}$. Transforming back to $n$-dimensional space results in a rank-$k$ regularization operator $\hat{D} \hat{V}^T$. Directions orthogonal to the rows of $\hat{V}^T$ effectively have a regularization parameter of infinity, and this can be interpreted as defining one additional window. Finally, our framework requires a solution of the regularized form of the original problem, and this involves solving (3.11). This can be done using a Krylov method. The subspace generated by the iteration will be different from that generated by $A$ alone when more than one window is used.

Alternatively, we can apply $k$ steps of a Krylov method to our original problem (no regularization), apply windowed regularization on the $k$-dimensional subspace, and use the result as our approximate solution. In terms of our framework, we stop after step 2, thus saving the cost of a second Krylov iteration. We call this a hybrid windowed approach. We expect the difference between the result of this approach and the result of the 3-step framework to be quite small when the number of Krylov iterations is sufficiently large (in the absence of severe loss of orthogonality). Numerical results show this to be the case. A minimal sketch of this hybrid windowed step is shown below.
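The sketch assumes the gk_bidiag sketch of section 2.3 has produced Z, B, Y with A*Y = Z*B, and that thresholds tau and per-window parameters lam are available (e.g., from the recursive GCV rule of section 3.2); all names are ours, not the paper's:

    % Windowed Tikhonov on the projected problem  min || B*y - beta*e1 ||
    % (beta = norm(b)), using the SVD of the small (k+1) x k matrix B.
    [P, S, Q] = svd(B);
    k  = size(B, 2);
    s  = diag(S);  s = s(1:k);
    e1 = zeros(size(B, 1), 1);  e1(1) = norm(b);
    c  = P' * e1;  c = c(1:k);              % projected spectral coefficients
    d2 = zeros(k, 1);
    for j = 1:numel(lam)                    % assign lambda per window
        w = (s > tau(j)) & (s <= tau(j+1));
        d2(w) = lam(j);
    end
    y  = Q * ((s .* c) ./ (s.^2 + d2));     % filtered projected solution
    xk = Y * y;                             % lift back to R^n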
4. Error analysis. In this section we derive bounds on the change in the computed solution caused by using the approximate windows defined in section 3.2 and on the error in the computed solution.

We are particularly concerned with error bounds for components of the solution in the signal and transition subspaces, the components corresponding to the $k$ largest

singular values. We use subscript $S$ to denote submatrices associated with these subspaces, and subscript $N$ to denote submatrices associated with the $(n-k)$-dimensional noise subspace. The column dimension of these matrices will be made clear by context.

Define the orthogonal matrix

$\tilde{I} \equiv \hat{V}^T V = \begin{bmatrix} \hat{I}_S & 0 \\ 0 & \hat{I}_N \end{bmatrix} + \begin{bmatrix} 0 & E_S \\ E_N & 0 \end{bmatrix} \equiv \hat{I} + E.$

We assume that $\|E\| < \epsilon$ for a small number $\epsilon$, which means that $\hat{V}$ results in low mixing between the signal and transition subspace, spanned by rows of $V_S^T$, and its complement, but $\hat{I}$ is not necessarily close to the identity matrix. (Much tighter bounds can be proved if $\hat{I}$ is close to the identity.) We also assume that we have normalized so that $\sigma_1 = 1$.

The notation and bounds in the following lemma will be useful to us.

Lemma 4.1. Given nonnegative diagonal matrices $\Psi$ and $\Phi$, define $K$ and the block diagonal matrix $\tilde{K}$ by

$K = \Phi + \tilde{I}^T \Psi \tilde{I}, \qquad \tilde{K} = \Phi + \hat{I}^T \Psi \hat{I}.$

Denote the leading $k \times k$ block of $\tilde{K}$ by $\tilde{K}_S$. Define $\Delta_a$ and $\Delta_b$ by

$\Delta_a = K - \tilde{K}, \qquad K^{-1} = \tilde{K}^{-1} (I + \Delta_b).$

Then

$\|\Delta_a\| \le (2\epsilon + \epsilon^2)\, \psi_{\max} \equiv \delta_a, \qquad \|\Delta_b\| \le \frac{\delta_a}{\delta_c - \delta_a} \equiv \delta_b,$

$\frac{1}{\|\tilde{K}_S^{-1}\|} \ge \phi_{S,\min} + \psi_{S,\min} - \epsilon^2 \psi_{S,\max}, \qquad \frac{1}{\|\tilde{K}_N^{-1}\|} \ge \phi_{N,\min} + \psi_{N,\min} - \epsilon^2 \psi_{N,\max},$

$\lambda_{\min}(\tilde{K}) \ge \min\left(\phi_{S,\min} + \psi_{S,\min} - \epsilon^2 \psi_{S,\max},\ \phi_{N,\min} + \psi_{N,\min} - \epsilon^2 \psi_{N,\max}\right) \equiv \delta_c,$

where $\epsilon = \|\tilde{I} - \hat{I}\|$,

$\psi_{S,\min} = \min_{j=1,\ldots,k} \psi_j$, $\phi_{S,\min} = \min_{j=1,\ldots,k} \phi_j$, $\psi_{N,\min} = \min_{j=k+1,\ldots,n} \psi_j$, $\phi_{N,\min} = \min_{j=k+1,\ldots,n} \phi_j$, $\psi_{\min} = \min_{j=1,\ldots,n} \psi_j$, $\phi_{\min} = \min_{j=1,\ldots,n} \phi_j$,

and analogous definitions hold for $\psi_{S,\max}$, $\phi_{S,\max}$, $\psi_{N,\max}$, $\phi_{N,\max}$, $\psi_{\max}$, and $\phi_{\max}$. These bounds are valid whenever $\epsilon$ is small enough that the expressions are positive.

Proof. The first bound is verified by direct computation, using the formula

$\Delta_a = \tilde{I}^T \Psi E + E^T \Psi \tilde{I} - E^T \Psi E.$

Next, to get the bound on $\|\tilde{K}_S^{-1}\|$, consider the similarity transformation

(4.1) $\hat{I}_S \tilde{K}_S \hat{I}_S^{-1} = \hat{I}_S \Phi_S \hat{I}_S^{-1} + \hat{I}_S \hat{I}_S^T \Psi_S = \hat{I}_S \Phi_S \hat{I}_S^{-1} + (I - E_S E_S^T) \Psi_S.$

Then the smallest eigenvalue of $\tilde{K}_S$ is bounded below by the sum of the smallest eigenvalues of $\hat{I}_S \Phi_S \hat{I}_S^{-1}$ and

$(I - E_S E_S^T) \Psi_S = \Psi_S^{-1/2} \left( \Psi_S - \Psi_S^{1/2} E_S E_S^T \Psi_S^{1/2} \right) \Psi_S^{1/2}.$

Using Weyl's theorem, we obtain

(4.2) $\lambda_{\min}\left(\Psi_S - \Psi_S^{1/2} E_S E_S^T \Psi_S^{1/2}\right) \ge \lambda_{\min}(\Psi_S) - \|\Psi_S^{1/2} E_S E_S^T \Psi_S^{1/2}\| \ge \psi_{S,\min} - \epsilon^2 \psi_{S,\max}.$

Thus, $\lambda_{\min}((I - E_S E_S^T)\Psi_S) \ge \psi_{S,\min} - \epsilon^2 \psi_{S,\max}$, giving the desired bound. The bound for $\|\tilde{K}_N^{-1}\|$ is derived analogously.

For the second bound, note that $K^{-1} = (\tilde{K} + \Delta_a)^{-1} = \tilde{K}^{-1} (I + \Delta_a \tilde{K}^{-1})^{-1}$, so $\Delta_b = (I + \Delta_a \tilde{K}^{-1})^{-1} - I$, and we want to bound the largest eigenvalue of this matrix. The smallest eigenvalue of $I + \Delta_a \tilde{K}^{-1}$ is bounded below by

$\lambda_{\min}(I + \Delta_a \tilde{K}^{-1}) \ge 1 - \|\Delta_a \tilde{K}^{-1}\| \ge 1 - \frac{\delta_a}{\lambda_{\min}(\tilde{K})} = \frac{\lambda_{\min}(\tilde{K}) - \delta_a}{\lambda_{\min}(\tilde{K})}.$

Therefore, the largest eigenvalue of $\Delta_b$ is bounded by

$\lambda_{\max}(\Delta_b) \le \frac{\lambda_{\min}(\tilde{K})}{\lambda_{\min}(\tilde{K}) - \delta_a} - 1 = \frac{\delta_a}{\lambda_{\min}(\tilde{K}) - \delta_a}.$

Notice that since $\tilde{K} = \Phi + \hat{I}^T \Psi \hat{I}$ is a block diagonal matrix,

$\lambda_{\min}(\tilde{K}) = \min\left(\lambda_{\min}(\Phi_S + \hat{I}_S^T \Psi_S \hat{I}_S),\ \lambda_{\min}(\Phi_N + \hat{I}_N^T \Psi_N \hat{I}_N)\right).$

We have already bounded the terms corresponding to the signal and transition subspace, and a similar argument for the noise subspace, using the fact that $\hat{I}_N \hat{I}_N^T = I - E_N E_N^T$, results in the bound on $\lambda_{\min}(\tilde{K})$ and our bound on $\|\Delta_b\|$.

4.1. Bounding the change in the solution. Let $D$ be a diagonal matrix with entries $d_i = \sqrt{\sum_j \lambda^{(j)} w_i^{(j)}}$, $i = 1, \ldots, n$. Then the windowed solution $x_o$ satisfies

(4.3) $(A^T A + V D D V^T) x_o = A^T b$ or $(\Sigma^T \Sigma + D^2) V^T x_o = \Sigma^T c$

and is the solution vector for the minimization problem

(4.4) $\min_x \|Ax - b\|^2 + \|D V^T x\|^2.$

Our method uses $\hat{U}$, $\hat{\Sigma}$, and $\hat{V}$ to select the windows $\hat{W}^{(j)}$ and regularization parameters $\hat{\lambda}^{(j)}$ and then solves (3.11), where $\hat{D}$ is a diagonal matrix with entries $\hat{d}_i = \sqrt{\sum_j \hat{\lambda}^{(j)} \hat{w}_i^{(j)}}$. The normal equations for this problem are

(4.5) $(A^T A + \hat{V} \hat{D} \hat{D} \hat{V}^T) x_w = A^T b$ or $(\Sigma^T \Sigma + \tilde{I}^T \hat{D}^2 \tilde{I}) V^T x_w = \Sigma^T c$

with solution $x_w$. Equating this with (4.3) gives

(4.6) $(\Sigma^T \Sigma + D^2) V^T x_o = (\Sigma^T \Sigma + \tilde{I}^T \hat{D}^2 \tilde{I}) V^T x_w.$

Solving (4.6) for $V^T x_w$ yields

$V^T x_w = (\Sigma^T \Sigma + \tilde{I}^T \hat{D}^2 \tilde{I})^{-1} (\Sigma^T \Sigma + D^2) V^T x_o,$

so

(4.7) $V^T (x_o - x_w) = \left(I - (\Sigma^T \Sigma + \tilde{I}^T \hat{D}^2 \tilde{I})^{-1} (\Sigma^T \Sigma + D^2)\right) V^T x_o$
(4.8) $\qquad\qquad\quad\;\; = (\Sigma^T \Sigma + \tilde{I}^T \hat{D}^2 \tilde{I})^{-1} (\tilde{I}^T \hat{D}^2 \tilde{I} - D^2) V^T x_o.$

Similarly, solving (4.6) for $V^T x_o$ yields

$V^T x_o = (\Sigma^T \Sigma + D^2)^{-1} (\Sigma^T \Sigma + \tilde{I}^T \hat{D}^2 \tilde{I}) V^T x_w,$

so

$V^T (x_o - x_w) = \left((\Sigma^T \Sigma + D^2)^{-1} (\Sigma^T \Sigma + \tilde{I}^T \hat{D}^2 \tilde{I}) - I\right) V^T x_w$
(4.9) $\qquad\qquad\quad\;\; = (\Sigma^T \Sigma + D^2)^{-1} (\tilde{I}^T \hat{D}^2 \tilde{I} - D^2) V^T x_w.$

First, let us consider standard Tikhonov regularization, which is a special case of the windowed approach with one window, where $\hat{D}^2 = \hat{\lambda} I$ and $D^2 = \lambda I$, so $\tilde{I}^T \hat{D}^2 \tilde{I} = \hat{\lambda} I$.

Theorem 4.2. For Tikhonov regularization, the $j$th component of the difference between the two regularized solutions is

$v_j^T (x_o - x_w) = \frac{\hat{\lambda} - \lambda}{\sigma_j^2 + \hat{\lambda}}\, v_j^T x_o.$

Proof. This follows directly from (4.8) and is a well-known expression for the change in the solution as the Tikhonov parameter changes; see, for example, [44].
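Theorem 4.2 is straightforward to confirm numerically. The following MATLAB check on a small random problem is entirely our own illustration (names, sizes, and parameter values arbitrary):

    % Verify  v_j'*(x_o - x_w) = (lamhat - lam)/(sigma_j^2 + lamhat) * v_j'*x_o.
    rng(0);
    A = randn(40, 25);  b = randn(40, 1);
    [U, S, V] = svd(A, 'econ');  s = diag(S);  c = U' * b;
    lam = 0.1;  lamhat = 0.3;
    xo = V * ((s .* c) ./ (s.^2 + lam));      % Tikhonov solution with lam
    xw = V * ((s .* c) ./ (s.^2 + lamhat));   % same, with lamhat from Ahat
    lhs = V' * (xo - xw);
    rhs = ((lamhat - lam) ./ (s.^2 + lamhat)) .* (V' * xo);
    disp(norm(lhs - rhs))                     % close to machine precision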

The result for the windowed case is similar but a bit more complicated. The following theorem presents three bounds. The first two are bounds relative to $\|x_w\|$, and the last is relative to $\|x_o\|$.

Theorem 4.3. The $j$th component of the difference between the true windowed solution $x_o$ from (4.3) and the approximate windowed solution $x_w$ from (4.5) is bounded as

$|v_j^T (x_o - x_w)| \le \frac{\|\tilde{I}^T \hat{D}^2 \tilde{I} - D^2\|\, \|x_w\|}{\sigma_j^2 + d_j^2}$

or

$\|V_S^T (x_o - x_w)\| \le \frac{\|\tilde{I}^T \hat{D}^2 \tilde{I} - D^2\|\, \|x_w\|}{\sigma_k^2 + d_{S,\min}^2},$

where $d_{S,\min} = \min_{j=1,\ldots,k} d_j$. Alternatively,

(4.10) $\|V_S^T (x_o - x_w)\| \le \frac{1 + \delta_1}{\sigma_k^2 + \hat{d}_{S,\min}^2 - \epsilon^2 \hat{d}_{S,\max}^2}\, \|\tilde{I}^T \hat{D}^2 \tilde{I} - D^2\|\, \|x_o\|,$

where

$\delta_1 = \frac{(2\epsilon + \epsilon^2)\, \hat{d}_{\max}^2}{\delta_2 - (2\epsilon + \epsilon^2)\, \hat{d}_{\max}^2}, \qquad \delta_2 \equiv \min\left(\sigma_k^2 + \hat{d}_{S,\min}^2 - \epsilon^2 \hat{d}_{S,\max}^2,\ \sigma_n^2 + \hat{d}_{N,\min}^2 - \epsilon^2 \hat{d}_{N,\max}^2\right),$

$\hat{d}_{S,\min} = \min_{j=1,\ldots,k} \hat{d}_j, \qquad \hat{d}_{N,\min} = \min_{j=k+1,\ldots,n} \hat{d}_j, \qquad \hat{d}_{\min} = \min_{j=1,\ldots,n} \hat{d}_j,$

and analogous definitions hold for $\hat{d}_{S,\max}$, $\hat{d}_{N,\max}$, and $\hat{d}_{\max}$.

Proof. The first bounds follow directly from (4.9). The last is derived from (4.8): in Lemma 4.1, let $\Phi = \Sigma^T \Sigma$ and $\Psi = \hat{D}^2$. Then

$V^T (x_o - x_w) = \tilde{K}^{-1} (I + \Delta_b)(\tilde{I}^T \hat{D}^2 \tilde{I} - D^2) V^T x_o$

and

$V_S^T (x_o - x_w) = [\tilde{K}_S^{-1},\ 0]\, (I + \Delta_b)(\tilde{I}^T \hat{D}^2 \tilde{I} - D^2) V^T x_o.$

The result follows by taking the norm of each factor.

For $\epsilon$ small, the bound in (4.10) is small if $\hat{D}^2 - D^2$ has a norm that is small relative to $\sigma_k^2 + \hat{d}_{S,\min}^2$. We expect $\hat{d}_{S,\min}^2$ to be small, since it corresponds to the regularization parameter for the signal and transition subspace, where little regularization is needed, so the denominator term is dominated by $\sigma_k^2$.

4.2. Bounding the error in the solution. Next, we derive bounds for the error between the computed solution using the windowed approach, $x_w$, and the true solution, $x_{\mathrm{true}}$. Define

$c = U^T b, \qquad \eta = U^T n, \qquad b_{\mathrm{true}} = b - n, \qquad c_{\mathrm{true}} = U^T b_{\mathrm{true}} = c - \eta,$

$\tilde{I}^T \hat{D}^2 \tilde{I} = \begin{bmatrix} L_S & F \\ F^T & L_N \end{bmatrix}, \qquad H = \Sigma^T \Sigma + \tilde{I}^T \hat{D}^2 \tilde{I}, \qquad \hat{H} = \Sigma^T \Sigma + \begin{bmatrix} L_S & 0 \\ 0 & L_N \end{bmatrix}.$

Solving (4.5) for $V^T x_w$ yields

(4.11) $V^T x_w = H^{-1} \Sigma^T (c_{\mathrm{true}} + \eta).$

The true solution satisfies $V^T x_{\mathrm{true}} = \Sigma^\dagger c_{\mathrm{true}}$, where $\Sigma^\dagger = (\Sigma^T \Sigma)^{-1} \Sigma^T$. The difference can be written as

(4.12) $V^T (x_w - x_{\mathrm{true}}) = H^{-1} \Sigma^T c_{\mathrm{true}} - (\Sigma^T \Sigma)^{-1} \Sigma^T c_{\mathrm{true}} + H^{-1} \Sigma^T \eta$
$\quad = (H^{-1} - (\Sigma^T \Sigma)^{-1}) \Sigma^T c_{\mathrm{true}} + H^{-1} \Sigma^T \eta$
$\quad = -H^{-1} \tilde{I}^T \hat{D}^2 \tilde{I} (\Sigma^T \Sigma)^{-1} \Sigma^T c_{\mathrm{true}} + H^{-1} \Sigma^T \eta$
$\quad = -H^{-1} (\tilde{I}^T \hat{D}^2 \tilde{I}\, V^T x_{\mathrm{true}} - \Sigma^T \eta).$

We obtain the bound in the following theorem.

Theorem 4.4. For windowed Tikhonov regularization, the error in the subspace spanned by the rows of $V_S^T$, denoting the true signal and transition subspace, is bounded by

$\|V_S^T (x_w - x_{\mathrm{true}})\| \le \frac{1 + \delta_1}{\sigma_k^2 + \hat{d}_{S,\min}^2 - \epsilon^2 \hat{d}_{S,\max}^2} \left( \hat{d}_{\max}^2 \|x_{\mathrm{true}}\| + \|n\| \right),$

where $\delta_1$, $\hat{d}_{S,\min}$, $\hat{d}_{S,\max}$ are defined in Theorem 4.3, and $n$ is the noise in the data $b$.

Proof. The result follows in the same way as the proof of (4.10), using Lemma 4.1.

It is worth mentioning that the error bound in Theorem 4.4 is small if $\hat{d}_{\max}^2$ is small relative to $\sigma_k^2$, for $\epsilon$ and $\hat{d}_{S,\min}^2$ small.

Next, we derive a related bound but use $\|V_S^T x_{\mathrm{true}}\|$ on the right-hand side, instead of $\|x_{\mathrm{true}}\|$. We will need the bounds

$\|F\| = \|E_N^T \hat{D}_N^2 \hat{I}_N + \hat{I}_S^T \hat{D}_S^2 E_S\| \le \epsilon \left( \hat{d}_{N,\max}^2 + \hat{d}_{S,\max}^2 \right) \equiv \delta_f,$

obtained by taking bounds on each matrix in the expression, and

$\|L_S\| = \|\hat{I}_S^T \hat{D}_S^2 \hat{I}_S + E_N^T \hat{D}_N^2 E_N\| \le \hat{d}_{S,\max}^2 (1 + \epsilon^2) + \hat{d}_{N,\max}^2 \epsilon^2,$

where the first term is obtained using similarity transformation (4.1) and Weyl's theorem to get a bound on the largest eigenvalue of $\hat{I}_S^T \hat{D}_S^2 \hat{I}_S$, and the second term is obtained by bounding the norm of each factor.

We can separate out the noise subspace by multiplying both sides of (4.12) by $\hat{H}^{-1} H$ to obtain

(4.13) $\begin{bmatrix} I & \hat{H}_S^{-1} F \\ \hat{H}_N^{-1} F^T & I \end{bmatrix} \begin{bmatrix} V_S^T (x_w - x_{\mathrm{true}}) \\ V_N^T (x_w - x_{\mathrm{true}}) \end{bmatrix} = -\hat{H}^{-1} (\tilde{I}^T \hat{D}^2 \tilde{I}\, V^T x_{\mathrm{true}} - \Sigma^T \eta).$

Next, observe that

$\hat{H}^{-1} \tilde{I}^T \hat{D}^2 \tilde{I} = \begin{bmatrix} \hat{H}_S^{-1} L_S & \hat{H}_S^{-1} F \\ \hat{H}_N^{-1} F^T & \hat{H}_N^{-1} L_N \end{bmatrix}.$

Then, by equating the first row block of both sides of (4.13), we get

$V_S^T (x_w - x_{\mathrm{true}}) + \hat{H}_S^{-1} F V_N^T (x_w - x_{\mathrm{true}}) = -\hat{H}_S^{-1} L_S V_S^T x_{\mathrm{true}} - \hat{H}_S^{-1} F V_N^T x_{\mathrm{true}} + \hat{H}_S^{-1} \Sigma_S \eta_S.$

By rearranging and combining terms, we get

(4.14) $V_S^T (x_w - x_{\mathrm{true}}) = -\hat{H}_S^{-1} L_S V_S^T x_{\mathrm{true}} - \hat{H}_S^{-1} F V_N^T x_w + \hat{H}_S^{-1} \Sigma_S \eta_S.$

By Lemma 4.1, we obtain

$\|\hat{H}_S^{-1}\| \le \frac{1}{\sigma_k^2 + \hat{d}_{S,\min}^2 - \epsilon^2 \hat{d}_{S,\max}^2}.$

If there is no mixing between the signal and noise subspaces (i.e., $F = 0$), the middle term on the right of (4.14) disappears. However, if there is mixing, we can obtain a bound on $\|V_N^T x_w\|$ in terms of the data by multiplying both sides of (4.11) by $[0,\ I]$ to obtain

(4.15) $V_N^T x_w = [0,\ \hat{H}_N^{-1}](I + \Delta_b) \Sigma^T c = \hat{H}_N^{-1} \Sigma_N^T c_N + [0,\ \hat{H}_N^{-1}] \Delta_b \Sigma^T c,$

and thus

(4.16) $\|V_N^T x_w\| \le \frac{\sigma_{k+1} \|c_N\| + \delta_1 \|c\|}{\sigma_n^2 + \hat{d}_{N,\min}^2 - \epsilon^2 \hat{d}_{N,\max}^2}.$

Combining these results, we get a bound for (4.14), which we summarize in the following theorem.

Theorem 4.5. For windowed regularization, the error in the subspace spanned by the rows of $V_S^T$, denoting the true signal and transition subspace, is bounded by

$\|V_S^T (x_w - x_{\mathrm{true}})\| \le \left( \left(\hat{d}_{S,\max}^2 (1 + \epsilon^2) + \hat{d}_{N,\max}^2 \epsilon^2\right) \|V_S^T x_{\mathrm{true}}\| + \frac{\delta_f \left(\sigma_{k+1} \|c_N\| + \delta_1 \|c\|\right)}{\sigma_n^2 + \hat{d}_{N,\min}^2 - \epsilon^2 \hat{d}_{N,\max}^2} + \|\eta_S\| \right) \Big/ \left( \sigma_k^2 + \hat{d}_{S,\min}^2 - \epsilon^2 \hat{d}_{S,\max}^2 \right),$

where $\delta_1$, $\hat{d}_{N,\min}$, and $\hat{d}_{N,\max}$ are defined in Theorem 4.3.

For $\epsilon$ small, the coefficient in the first term is small if $\hat{d}_{S,\max}^2$ is small relative to $\sigma_k^2 + \hat{d}_{S,\min}^2$, which we can expect to be the case since $\hat{d}_{S,\max}^2$ and $\hat{d}_{S,\min}^2$ correspond to regularization parameters for the signal and transition subspace. The second term is small for $\delta_b$ and $\delta_f$ sufficiently small, and we can assume that $\|c_N\| \approx \|\eta_N\|$ by the discrete Picard condition. In contrast to the bound in Theorem 4.4, the last term involves only the norm of the noise components in the signal and transition subspace.

For Tikhonov regularization, the situation is much simpler since $H = \Sigma^T \Sigma + \hat{\lambda} I$ is diagonal. Equation (4.12) becomes

$V^T (x_w - x_{\mathrm{true}}) = -(\Sigma^T \Sigma + \hat{\lambda} I)^{-1} (\hat{\lambda} V^T x_{\mathrm{true}} - \Sigma^T \eta).$

The componentwise version of this formula agrees with results in [44].

Theorem 4.6. For Tikhonov regularization, the absolute value of the difference between the computed solution $x_w$ and the true solution $x_{\mathrm{true}}$ in the $j$th column of $V$ is

$|v_j^T (x_w - x_{\mathrm{true}})| = \left| \frac{\hat{\lambda}\, v_j^T x_{\mathrm{true}}}{\sigma_j^2 + \hat{\lambda}} - \frac{\sigma_j \eta_j}{\sigma_j^2 + \hat{\lambda}} \right|,$

where $\eta_j$ is the $j$th component of $\eta$.

We can also obtain error bounds in the components defined by rows of $\hat{V}^T$. Begin from (4.5) with

$(V \Sigma^T \Sigma V^T + \hat{V} \hat{D}^2 \hat{V}^T) x_w = V \Sigma^T c.$

Thus, we can write

(4.17) $\hat{V}^T x_w = (\hat{V}^T V \Sigma^T \Sigma V^T \hat{V} + \hat{D}^2)^{-1} \hat{V}^T V \Sigma^T c = (\tilde{I} \Sigma^T \Sigma \tilde{I}^T + \hat{D}^2)^{-1} \tilde{I} \Sigma^T (c_{\mathrm{true}} + \eta).$

Now compare (4.17) to

$\hat{V}^T x_{\mathrm{true}} = \tilde{I} \Sigma^\dagger c_{\mathrm{true}}.$

The difference can be written as

$\hat{V}^T (x_w - x_{\mathrm{true}}) = (\tilde{I} \Sigma^T \Sigma \tilde{I}^T + \hat{D}^2)^{-1} \tilde{I} \Sigma^T (c_{\mathrm{true}} + \eta) - \tilde{I} \Sigma^\dagger c_{\mathrm{true}}$
(4.18) $\quad = (\tilde{I} \Sigma^T \Sigma \tilde{I}^T + \hat{D}^2)^{-1} \left( \tilde{I} \Sigma^T c_{\mathrm{true}} + \tilde{I} \Sigma^T \eta - (\tilde{I} \Sigma^T \Sigma \tilde{I}^T + \hat{D}^2) \tilde{I} \Sigma^\dagger c_{\mathrm{true}} \right)$
$\quad = (\tilde{I} \Sigma^T \Sigma \tilde{I}^T + \hat{D}^2)^{-1} \left( -\hat{D}^2 \tilde{I} \Sigma^\dagger c_{\mathrm{true}} + \tilde{I} \Sigma^T \eta \right).$

Theorem 4.7. For windowed Tikhonov regularization, the error in the subspace spanned by the rows of $\hat{V}_S^T$, denoting the approximate signal and transition subspace, is bounded by

$\|\hat{V}_S^T (x_w - x_{\mathrm{true}})\| \le \frac{1 + \delta_3}{\hat{d}_{S,\min}^2 + \sigma_k^2 - \epsilon^2} \left( \hat{d}_{\max}^2 \|x_{\mathrm{true}}\| + \|n\| \right),$

where

$\delta_3 = \frac{2\epsilon + \epsilon^2}{\delta_4 - 2\epsilon - \epsilon^2}, \qquad \delta_4 \equiv \min\left(\sigma_k^2 + \hat{d}_{S,\min}^2 - \epsilon^2,\ \sigma_n^2 + \hat{d}_{N,\min}^2 - \epsilon^2\right),$

and $\hat{d}_{S,\min}$ and $\hat{d}_{N,\min}$ are defined in Theorem 4.3.

Proof. The result follows from taking norms in (4.18) and using Lemma 4.1 with $\Phi = \hat{D}^2$ and $\Psi = \Sigma^T \Sigma$. (Recall the normalization $\sigma_1 = 1$, so $\psi_{\max} = 1$.)

Similar to Theorem 4.4, the error bound is small here if $\hat{d}_{\max}^2$ is small relative to $\sigma_k^2$, for $\epsilon$ and $\hat{d}_{S,\min}^2$ small. We remark that if we use the solution to the approximate problem as an approximation to $x_{\mathrm{true}}$, we expect that the error may be significantly larger, since instead of relying on the low-mixing assumption, we would need $(\hat{V}, \hat{\Sigma}, \hat{U})$ to be close to $(V, \Sigma, U)$.

5. Numerical results. All experiments were performed in MATLAB 2013a, using codes available in the RestoreTools software package [43]. Numerical experiments focus on the use of windowed regularization for the three different operator approximation schemes: Kronecker product approximation, BCCB approximation, and Krylov subspace approximation. Since in each experiment the true image was known, the quality of the restored images is measured using three different metrics: relative error (a smaller number is better), signal-to-noise ratio (SNR) (a larger number is better), and mean structural similarity (MSSIM) [56] (a larger number is better).

5.1. Results using Kronecker approximations. In these experiments, we use a Kronecker product approximation, as described in section 2.1, for both the window selection scheme and the regularization parameter selection scheme (here, GCV). That is, we generate $\hat{V}$ and $\hat{W}^{(j)}$, compute regularization parameters $\hat{\lambda}^{(j)}$ using $\hat{A}$, and then solve (3.11) using CGLS. We experimented with several nonseparable blurring operators, but for brevity we present only a few results here. In

particular, we present results from operators that are not exactly diagonalizable by fast trigonometric transforms, since the SVD routine in RestoreTools is smart enough to detect diagonalizability. In all the experiments in this section, we use two windows and use gcvforsvd from RestoreTools to get $k_1$, the size of the first window. We refer to this approach as 2-window-$\hat{A}$. Having more windows or different window selection approaches and regularization parameter selection techniques could affect the quality of the results. However, the examples below illustrate that often two Shannon windows are sufficient to provide significant improvement over standard Tikhonov in a practical setting.

We compare 2-window-$\hat{A}$ restorations to Tikhonov restorations (i.e., a single window) for two scenarios. The first uses GCV on the approximate problem to obtain a single regularization parameter and then uses that parameter to solve the original Tikhonov problem. This approach is practical, and we refer to it as Tikhonov-$\hat{A}$. We also report results for an unrealistic best-case scenario, called Tikhonov-opt, where we use the parameter that minimizes the relative error (after 200 iterations) among 20 log-evenly spaced choices starting at $10^{-3}$. Since measuring relative error requires knowledge of the true image, this parameter cannot be computed in practice.

Experiment 1. We generated a blurred image of a plane using a combined boxcar blur plus nonsymmetric Gaussian blur point spread function that was rotated 5 degrees counterclockwise.⁵ Gaussian white noise was added to the image. Two sets of results are presented here for blurred-signal-to-noise ratios of 10dB and 25dB. The true and blurred image (corresponding to 10dB), along with the point spread function, are shown in Figure 2. We assume reflexive boundary conditions for the image. Using codes from RestoreTools, we construct a Kronecker product approximation to the operator using 3 terms in the expansion. Results are given in Table 1. For the 10dB case, reconstructions can be found in Figure 3. It is worth noting that 2-window-$\hat{A}$ is able to produce results similar to or better than Tikhonov-opt, which cannot be attained in practice.

Fig. 2. Experiment 1: Plane example with 10dB SNR. Panels: (a) True Image, (b) Blurred Noisy Image, (c) Point Spread Function. The point spread function is a combination of a boxcar blur with a slightly rotated nonsymmetric Gaussian blur.

⁵A boxcar blur means that each pixel in the blurred image is equal to the average of its neighboring pixels. For example, the PSF for a 3 × 3 boxcar blur is the 3 × 3 matrix with every entry equal to 1/9. To compute the blurring operator using RestoreTools, pass an image of the point spread function and the boundary conditions to generate a psfMatrix object [43].

To further illustrate the benefits of our framework, we consider the Picard plot using $\hat{A}$ that is displayed in Figure 4. The window corresponding to the larger singular values consisted of 636 terms, and the corresponding regularization parameter $\lambda^{(2)}$ for this window was calculated using GCV. For the window corresponding to the smaller singular values,

GCV determined $\lambda^{(1)}$. The optimal Tikhonov parameter is 0.23, and GCV on the approximate problem was used to select the single Tikhonov parameter for Tikhonov-$\hat{A}$.

Table 1. Experiment 1: Results for 3-term Kronecker product approximation for the plane example. 2-window-$\hat{A}$ and Tikhonov-$\hat{A}$ use GCV to obtain regularization parameters, whereas Tikhonov-opt (which requires knowing the true image) approximately minimizes the relative error. Larger values for SNR and MSSIM and smaller values of relative error are desirable. (Columns: Rel Err, SNR, and MSSIM for each of 2-window-$\hat{A}$, Tikhonov-$\hat{A}$, and Tikhonov-opt; rows: BSNR = 25 dB and 10 dB.)

Fig. 3. Experiment 1: Results for 3-term Kronecker product approximation for the plane example of Figure 2, 10dB noise. Panels: (a) 2-window-$\hat{A}$, (b) Tikhonov-$\hat{A}$, (c) Tikhonov-opt.

Fig. 4. Experiment 1: Picard plot for the plane example of Figure 2. The red dashed line corresponds to the singular values of $\hat{A}$. The absolute values of the spectral coefficients, relative to the SVD of $\hat{A}$, are shown in blue. The noise level is indicated by the horizontal line. The window threshold is marked by the vertical line.

Experiment 2. Next, we illustrate our approach on the larger Elaine image, where the blur function was a nonsymmetric out-of-focus blur (the $N \times N$ image of the PSF was computed using PSF = (X.^2 + 2*Y.^2) <= 9^2, where X, Y are arrays of indices created using [X, Y] = meshgrid(x) with x = -fix(N/2) : ceil(N/2) - 1) and reflexive

boundary conditions were used. A Kronecker product approximation was constructed using five terms in the expansion, and results for two noise levels are presented in Table 2. The true and blurred Elaine image, along with the 2-window-$\hat{A}$ reconstruction, are shown in Figure 5. For this problem, 2-window-$\hat{A}$ was able to produce lower relative errors and larger SNR and MSSIM values than the optimal Tikhonov reconstruction.

Table 2. Experiment 2: Results for 3-term Kronecker product approximation for Elaine. (Columns: Rel Err, SNR, and MSSIM for each of 2-window-$\hat{A}$, Tikhonov-$\hat{A}$, and Tikhonov-opt; rows: two BSNR levels, the highest 25 dB.)

Fig. 5. Experiment 2: Results for 3-term Kronecker product approximation for Elaine, 25dB noise. The true and blurred images are shown in (a) and (b), respectively. The windowed reconstruction using $\hat{A}$ to select the two windows and corresponding two regularization parameters is found in (c).

Experiment 3. We also considered the grain test problem [42], whose image and PSF are available in RestoreTools [43] (see top left, Figure 6). The operator is not separable, and the default Kronecker approximation chosen is a 5-term approximation. As in the preceding examples, we present, in Table 3, the results for 2-window-$\hat{A}$, Tikhonov-$\hat{A}$, and Tikhonov-opt for two noise levels. Our 2-window reconstructions are superior to Tikhonov-$\hat{A}$ at both noise levels and are nearly optimal as compared against Tikhonov-opt.

Table 3. Experiment 3: Results for 5-term Kronecker product approximation for the grain test problem. The 2-window approach performs comparably with Tikhonov-opt. (Columns: Rel Err, SNR, and MSSIM for each of 2-window-$\hat{A}$, Tikhonov-$\hat{A}$, and Tikhonov-opt; rows: two BSNR levels, the highest 25 dB.)

Fig. 6. Experiment 3: Results for 5-term Kronecker product approximation for the grain test problem, 25dB noise. The true and noisy blurred images are shown in (a) and (b), respectively. The windowed reconstruction using $\hat{A}$ to select the two windows and corresponding two regularization parameters is found in (c).

5.2. Results using BCCB approximation. We have seen that Kronecker product approximations to $A$ can be useful, but other approximations can also be

used. Here, we consider BCCB matrix approximations and provide two experiments.

Experiment 4. In this example, we use the cameraman image shown in Figure 7(a) and an 11 × 9 boxcar blur with reflexive boundary conditions. We ignore separability of the PSF and instead seek to use a BCCB approximation of the blurring matrix to define windows and compute regularization parameters for the windowed approach. We use three windows in this example, with the window thresholds chosen via recursive GCV. Similar to the previous section, we consider 3-window-$\hat{A}$ and Tikhonov-$\hat{A}$, where all parameters were chosen using GCV on the approximate problem, and we compare to an optimal Tikhonov reconstruction that minimizes the 2-norm of the reconstruction error, Tikhonov-opt.

Fig. 7. Experiment 4: Results for BCCB approximation for the cameraman image, 40dB noise. The true and blurred images are shown in (a) and (b), respectively. The windowed reconstruction using $\hat{A}$ to select windows and regularization parameters is found in (c), and Tikhonov reconstructions using GCV and the optimal regularization parameter are found in (d) and (e), respectively.

Results are presented in Table 4 for various noise levels, and reconstructed images corresponding to 40dB are presented in Figure 7. For the 40dB example, GCV computed a separate regularization parameter for each of the three windows (e.g., $\lambda^{(3)} = 1.97 \times 10^{-3}$), GCV on the approximate problem also selected the single regularization parameter for Tikhonov-$\hat{A}$, and the optimal parameter was determined for Tikhonov-opt. For all noise levels, 3-window-$\hat{A}$ did not perform as well as Tikhonov-opt, but the reconstructions were still better than Tikhonov-$\hat{A}$. GCV seemed to have a difficult time determining the window spacings

and parameters, especially for large noise levels (corresponding to small BSNR). This may be due to the fact that BCCB approximations do not necessarily approximate all parts of the spectrum equally well, with the approximations deteriorating over the noise subspace.

Table 4. Experiment 4: Results for BCCB approximation for the cameraman image. (Columns: Rel Err, SNR, and MSSIM for each of 3-window-$\hat{A}$, Tikhonov-$\hat{A}$, and Tikhonov-opt; rows: three BSNR levels, the highest 30 dB.)

Experiment 5. In this example, we consider general-form Tikhonov regularization, where the regularization matrix is not equal to the identity matrix ($L \ne I$ in section 3.1). Consider periodic boundary conditions, and let $L$ represent a finite difference approximation of the partial derivatives of the image. That is, we can define

(5.1) $L = \begin{bmatrix} I_n \otimes D_{2,m} \\ D_{2,n} \otimes I_m \end{bmatrix}$,

where $D_{2,m}$ denotes a one-dimensional first-difference matrix of order $m$. Then the matrix-vector product $Lx$ approximates the partial derivatives of the image (ignoring a constant).

For matrices $A$ that can be diagonalized using the discrete Fourier transform (DFT) (e.g., spatially invariant blurring matrices, assuming periodic boundary conditions), the general-form Tikhonov solution, $(A^T A + \lambda L^T L)^{-1} A^T b$, can be written as a filtered solution in the Fourier transform basis [32], and standard regularization parameter selection methods such as GCV can be efficiently implemented. We are interested in the case when $A$ cannot be diagonalized by the DFT.

In this experiment, we use the problem set-up from Experiment 4 and consider the solution of

(5.2) $\min_x \|Ax - b\|_2^2 + \lambda \|Lx\|_2^2$,

where $L$ is defined in (5.1). Note that computing a basis in which both matrices $A$ and $L$ are diagonalized requires the GSVD and could be expensive. Following our framework, we first approximate $A$ with its BCCB approximation $\hat{A}$. Since $\hat{A}$ and $L$ can both be diagonalized using the DFT, GCV can be used to efficiently compute a regularization parameter. Using the computed GCV parameter, Figure 8(a) is the reconstruction obtained by solving the approximate problem, and Figure 8(b) is the reconstruction obtained by solving the original problem. (This corresponds to general Tikhonov in our framework.) As evident in the first reconstruction, solving the approximate problem may not provide useful results, so it is necessary to work with the original problem. Second, the use of the approximate problem for selecting the regularization parameter for general-form Tikhonov in our framework seems to provide a remarkably good reconstruction. For general Tikhonov in our framework, the SNR of the reconstruction was 18.04.
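A minimal MATLAB sketch of this parameter selection, under our assumptions that $\hat{A}$ is BCCB with eigenvalues obtained from the centered PSF array and that $L$ uses periodic first differences, so that $L^T L$ is also diagonalized by the 2-D DFT. All variable names (psf, center, Bimg) are ours, and constant factors that do not affect the GCV minimizer are dropped:

    % Eigenvalues of the BCCB approximation and of L'*L in the 2-D DFT basis.
    lamA = fft2(circshift(psf, 1 - center));    % center = [row, col] of PSF peak
    ex = zeros(size(Bimg));  ex(1, 1) = -1;  ex(1, 2) = 1;  % horizontal difference stencil
    ey = zeros(size(Bimg));  ey(1, 1) = -1;  ey(2, 1) = 1;  % vertical difference stencil
    lamL2 = abs(fft2(ex)).^2 + abs(fft2(ey)).^2;            % eigenvalues of L'*L
    a2    = abs(lamA).^2;
    bhat2 = abs(fft2(Bimg)).^2;                 % squared Fourier coefficients of the data
    % General-form Tikhonov GCV in the Fourier basis:
    gcv = @(lam) sum((lam * lamL2(:) ./ (a2(:) + lam * lamL2(:))).^2 .* bhat2(:)) ...
                 / sum(lam * lamL2(:) ./ (a2(:) + lam * lamL2(:)))^2;
    lambda = fminbnd(gcv, 1e-8, 1e2);           % search interval chosen by us

The resulting lambda is then used in an iterative solver for (5.2) with the original $A$, as in step 3 of the framework.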

Fig. 8. Experiment 5: Results for BCCB approximation with $L$ an approximation to the partial derivatives of the image, using GCV to select regularization parameters. Panel (a) is the solution to the approximate problem, while (b) is the solution of the original problem using the regularization parameter from the approximate problem. This shows that the approximate problem provides effective regularization parameters but poor reconstructions.

5.3. Krylov subspace approximations and hybrid methods. As described in section 2.3, the Krylov subspace approximation (2.6) for given $k$ can be used to estimate regularization parameters and windows for the original problem.

Experiment 6. Here we consider deblurring the Elaine image in Figure 5(a), where the spatially invariant blurring operator was determined by a noisy boxcar PSF (i.e., $\mathrm{PSF} = \mathrm{PSF} + \zeta$, where $\zeta$ is white noise scaled so that $\|\zeta(:)\|_2 / \|\mathrm{PSF}(:)\|_2 = 0.01$). Additional noise was added to the blurred image; results are presented for BSNR = 10dB and BSNR = 5dB. We defined $\hat{A}$ to be the approximation corresponding to $k = 30$ Krylov iterations on $(A, b)$.

We tested three algorithms. Tikhonov-$\hat{A}$ uses 30 LSQR iterations to approximate the solution to (3.2), where the regularization parameter was estimated using $\hat{A}$. The latter two were variants of our framework, as discussed in section 3.3. 3-window-$\hat{A}$ finds an approximate solution to (3.11) using 30 LSQR iterations; for this variant, $\hat{A}$ was used to compute regularization parameters and select three windows, with linear, equally spaced thresholds. The third algorithm, 3-window-hybrid, is a hybrid windowed approach (our alternate method from section 3.3). It applies windowing to the projected problem at each iteration and uses GCV to select regularization parameters.

A plot of the relative errors for 3-window-$\hat{A}$ and 3-window-hybrid can be found in Figure 9, along with the relative errors for standard LSQR on the unregularized problem and relative errors for Tikhonov-$\hat{A}$. For ill-posed problems, it is known that early iterations of LSQR produce good solutions, but reconstruction errors for later iterations grow. Hybrid methods can be used to stabilize this behavior [17, 37]. For early iterations, the 3-window-hybrid approach can have difficulty choosing the right regularization parameters (hence the small jump in the relative error plot). However, benefits of 3-window-hybrid, compared to 3-window-$\hat{A}$, include not having to specify in advance how many iterations to use in the operator approximation and being able to estimate the windows, regularization parameters, and stopping criteria along the way. For further fair comparison among the methods, we show on the 3-window-$\hat{A}$ plot the value of the relative error corresponding to using a stopping criterion based on the GCV function (red star). The iterative process would have been stopped after nine iterations. Results are presented in Table 5.

5.4. Robustness of the proposed framework. In our final experiment, we investigate robustness of the proposed framework for cases where a spatially variant blur is approximated by a spatially invariant one.
Robustness of the proposed framework. In our final experiment, we investigate the robustness of the proposed framework for cases where a spatially variant blur is approximated by a spatially invariant one.