Condition numbers and perturbation analysis for the Tikhonov regularization of discrete ill-posed problems


NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS
Numer. Linear Algebra Appl. (2010). Published online in Wiley InterScience (www.interscience.wiley.com).

Condition numbers and perturbation analysis for the Tikhonov regularization of discrete ill-posed problems

Delin Chu (1), Lijing Lin (2), Roger C. E. Tan (1) and Yimin Wei (3,4)

(1) Department of Mathematics, National University of Singapore, Science Drive 2, Singapore
(2) Institute of Mathematical Science, Fudan University, Shanghai, People's Republic of China
(3) School of Mathematical Sciences, Fudan University, Shanghai, People's Republic of China
(4) Shanghai Key Laboratory of Contemporary Applied Mathematics, Fudan University, Shanghai, People's Republic of China

SUMMARY

One of the most successful methods for solving the least-squares problem min_x \|Ax - b\| with a highly ill-conditioned or rank-deficient coefficient matrix A is the method of Tikhonov regularization. In this paper, we derive the normwise, mixed and componentwise condition numbers and componentwise perturbation bounds for the Tikhonov regularization. Our results are sharper than the known results. Some numerical examples are given to illustrate our results. Copyright 2010 John Wiley & Sons, Ltd.

Received 9 December 2008; Revised 8 December 2009; Accepted 13 December 2009

KEY WORDS: linear least squares; condition number; Tikhonov regularization; perturbation

Correspondence to: Yimin Wei, School of Mathematical Sciences, Fudan University, Shanghai, People's Republic of China. E-mail: yimin.wei@gmail.com, ymwei_cn@yahoo.com, ymwei@fudan.edu.cn. Current address of L. Lin: School of Mathematics, The University of Manchester, Manchester M13 9PL, U.K.
Contract/grant sponsors: NUS Research Grant; National Natural Science Foundation of China; Shanghai Education Committee (Dawn Project); Shanghai Science and Technology Committee; Doctoral Program of the Ministry of Education.

1. INTRODUCTION

The well-known Tikhonov regularization technique [1] in numerical analysis has been investigated extensively; some recent developments can be found in [2-12]. The key idea in Tikhonov regularization is to incorporate a priori assumptions about the size and smoothness of the desired solution, in the form of the smoothing functional \omega(f) in the continuous case and the (semi)norm \|Lx\|

for the discrete case. For discrete ill-posed problems in mechanical systems and signal processing, Tikhonov regularization in general form leads to the regularized minimization problem

    min_x { \|Ax - b\|^2 + \lambda^2 \|Lx\|^2 },    A \in R^{m\times n},  L \in R^{p\times n},        (1)

where the regularization parameter \lambda controls the weight given to the minimization of \|Lx\| relative to the minimization of the residual \|Ax - b\|. The matrix L is typically the identity matrix I_n or a discrete approximation of some derivative operator. As usual, for the regularization problem (1) it is always assumed that

    rank(L) = p <= n <= m    and    rank([A; L]) = n

(cf. [13, Section 5]). Thus, problem (1) has a unique solution for any \lambda > 0. The regularization problem (1) can also be written as

    min_x \| [A; \lambda L] x - [b; 0] \|.        (2)

Since the normal equation of (2) is

    (A^T A + \lambda^2 L^T L) x = A^T b,        (3)

we obtain the explicit expression for the regularized solution immediately,

    x_\lambda = (A^T A + \lambda^2 L^T L)^{-1} A^T b.

Clearly, the perturbed counterpart of problem (1) and of its normal equation (3) are, respectively,

    min_x { \|(A + \Delta A)x - (b + \Delta b)\|^2 + \lambda^2 \|Lx\|^2 },  with solution denoted x_\lambda + \Delta x,        (4)

and

    ((A + \Delta A)^T (A + \Delta A) + \lambda^2 L^T L)(x_\lambda + \Delta x) = (A + \Delta A)^T (b + \Delta b),        (5)

where \Delta A and \Delta b are perturbations of A and b, respectively, and rank([A + \Delta A; L]) = n.

For the Tikhonov regularization, Malyshev [14] studied the condition numbers

    \kappa_A^F = lim_{\epsilon -> 0}  sup_{\|\Delta A\|_F <= \epsilon \|A\|_F}  \|\Delta x\| / (\epsilon \|x_\lambda\|),
    \kappa_b   = lim_{\epsilon -> 0}  sup_{\|\Delta b\| <= \epsilon \|b\|}      \|\Delta x\| / (\epsilon \|x_\lambda\|),

and proved explicit formulae for both, which we refer to as (6) and (7). For L = I_n, the formula for \kappa_b reads

    \kappa_b = \|(A^T A + \lambda^2 I)^{-1} A^T\| \|b\| / \|x_\lambda\|,        (7)

while the formula (6) for \kappa_A^F is expressed in terms of \|A\|, (A^T A + \lambda^2 I)^{-1}, the residual r = b - A x_\lambda and the orthogonal projector I - r r^T / \|r\|^2; see [14] for the precise statement.
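The Tikhonov solution x_\lambda can be computed either from the stacked least-squares form (2) or from the normal equations (3). The following minimal MATLAB sketch (not from the paper; it uses a small random problem with L = I_n and an arbitrary \lambda) checks that the two routes agree and evaluates Malyshev's condition number \kappa_b from (7).

% Minimal sketch: Tikhonov solution via (2) and (3), and kappa_b from (7).
% The data below are random stand-ins, not one of the paper's test problems.
rng(1);
m = 20; n = 10; lambda = 0.1;
A = randn(m, n); b = randn(m, 1); L = eye(n);          % standard form, L = I_n

x_stacked = [A; lambda*L] \ [b; zeros(n, 1)];          % formulation (2)
K = A'*A + lambda^2*(L'*L);                            % normal-equation matrix
x_normal  = K \ (A'*b);                                % formulation (3)
fprintf('difference between the two solutions: %.2e\n', norm(x_stacked - x_normal));

kappa_b = norm(K \ A') * norm(b) / norm(x_normal);     % (7), with L = I
fprintf('kappa_b = %.2e\n', kappa_b);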

However, the relative normwise condition numbers (6) and (7) do not take into account the structure of A and b with respect to scaling and/or sparsity. When A and/or b are badly scaled or contain many zeros, the size of a perturbation measured in a norm cannot reflect the relative size of the perturbation on the small (or zero) entries. Motivated by this observation, a different approach known as componentwise analysis [15, 16] arises in perturbation theory. In componentwise analysis, two different condition numbers are considered: the first one measures the errors in the output using norms and the input perturbations componentwise, and the second one measures both the errors in the output and the perturbations in the input componentwise. The resulting condition numbers are called mixed and componentwise by Gohberg and Koltracht [17]; we use this terminology throughout the paper.

In [18] Skeel performed a mixed perturbation analysis for nonsingular linear systems of equations and a mixed error analysis for Gaussian elimination. Rohn [19] developed componentwise condition numbers for matrix inversion and nonsingular linear systems. Gohberg and Koltracht [17] presented explicit expressions for both mixed and componentwise condition numbers, for differentiable functions as well as for linear systems. Componentwise perturbation analysis and the mixed and componentwise condition numbers for linear least squares problems can be found in [15, 20-24]. To the best of our knowledge, no mixed and componentwise perturbation analysis is available for the Tikhonov regularization; our paper fills this gap. We derive the normwise, mixed and componentwise condition numbers and componentwise perturbation bounds for the Tikhonov regularization. Our normwise error bound is sharper than the known results.

The present paper is organized as follows. In Section 2 we derive computable formulae for the normwise, mixed and componentwise condition numbers of the Tikhonov regularization. Condition numbers for the residual r = b - A x_\lambda are also considered, and we give sharp bounds for unrestricted (i.e. not necessarily small) perturbations. We study the augmented system for the Tikhonov regularization in Section 3, which extends the results in [20, 23]. Finally, in Section 4 we compare our results with the known results on numerical examples.

Throughout this paper we use the following notation. For A \in R^{m\times n} and B \in R^{p\times q}, the Kronecker product A \otimes B \in R^{mp\times nq} is defined by

    A \otimes B = [ a_{11}B  a_{12}B  ...  a_{1n}B ;  a_{21}B  a_{22}B  ...  a_{2n}B ;  ... ;  a_{m1}B  a_{m2}B  ...  a_{mn}B ].

For any matrix A = [a_1  a_2  ...  a_n] \in R^{m\times n}, vec(A) is defined by vec(A) = [a_1^T  a_2^T  ...  a_n^T]^T. For any matrix A = (a_{ij}) \in R^{m\times n}, |A| denotes its entry-wise absolute value, |A| = (|a_{ij}|) \in R^{m\times n}. A^\dagger \in R^{n\times m} is the Moore-Penrose inverse of A [25, 26]; \|A\| denotes the spectral norm of A; \|A\|_F denotes the Frobenius norm of A, \|A\|_F = \sqrt{trace(A^T A)}; \|A\|_\infty is the infinity norm of A; [A  b] is the augmented matrix of A and b; \rho(A) denotes the spectral radius of A;

and \Pi = \sum_{i=1}^{m} \sum_{j=1}^{n} E_{ij}(m\times n) \otimes E_{ji}(n\times m), where E_{ij}(m\times n) = e_i^{(m)} (e_j^{(n)})^T \in R^{m\times n} denotes the (i, j)th elementary matrix, and e_i^{(m)} and e_j^{(n)} are the ith and jth columns of the identity matrices I_m \in R^{m\times m} and I_n \in R^{n\times n}, respectively. The matrix \Pi is called the vec-permutation matrix. With the notation above, the following identities hold:

    |A \otimes B| = |A| \otimes |B|,    vec(AXB) = (B^T \otimes A) vec(X),    \Pi vec(A) = vec(A^T),    (y^T \otimes Y)\Pi = Y \otimes y^T.

2. NORMWISE, MIXED AND COMPONENTWISE CONDITION NUMBERS FOR THE TIKHONOV REGULARIZATION

In this section we first derive normwise perturbation bounds and condition numbers for the solution of the Tikhonov regularization, and then proceed to the mixed and componentwise perturbation analysis. In many applications the error in b can be larger than the error in A; moreover, A and b need not be of the same magnitude. To obtain a more general and flexible analysis, which allows different scalings of A and b, we use the weighted Frobenius norm of the data A and b defined by [27]

    \|[AT, \beta b]\|_F,

where T is a positive definite diagonal matrix and \beta > 0.

Definition 1
Let x_\lambda and x_\lambda + \Delta x be the unique solutions of problems (1) and (4), respectively. Then the normwise, mixed and componentwise condition numbers of the Tikhonov regularization are defined by

    cond_{Reg}^F := lim_{\epsilon -> 0}  sup_{\|[(\Delta A)T, \beta(\Delta b)]\|_F <= \epsilon \|[AT, \beta b]\|_F}  \|\Delta x\| / (\epsilon \|x_\lambda\|),

    m_{Reg} := lim_{\epsilon -> 0}  sup_{|\Delta A| <= \epsilon |A|,  |\Delta b| <= \epsilon |b|}  \|\Delta x\|_\infty / (\epsilon \|x_\lambda\|_\infty),

    c_{Reg} := lim_{\epsilon -> 0}  sup_{|\Delta A| <= \epsilon |A|,  |\Delta b| <= \epsilon |b|}  (1/\epsilon) \| \Delta x / x_\lambda \|_\infty,

where b/a denotes the entry-wise division, b/a := (b_i / a_i) (b./a in MATLAB notation), and \xi/0 is interpreted as zero if \xi = 0 and as \infty otherwise.
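The Kronecker-product and vec identities collected above carry most of the derivations in this section. The following small MATLAB check (random dimensions chosen here purely for illustration, not taken from the paper) verifies the identity vec(AXB) = (B^T \otimes A) vec(X) and the action of the vec-permutation matrix \Pi.

% Numeric check of the vec/Kronecker identities used in Section 2.
rng(2);
m = 3; n = 4; p = 2;
A = randn(m, n); X = randn(n, p); B = randn(p, 5);

lhs = reshape(A*X*B, [], 1);             % vec(A X B)
rhs = kron(B', A) * reshape(X, [], 1);   % (B' kron A) vec(X)
fprintf('vec identity error: %.2e\n', norm(lhs - rhs));

% Build the vec-permutation matrix Pi and check Pi*vec(A) = vec(A').
Pi = zeros(m*n);
for i = 1:m
    for j = 1:n
        Eij = zeros(m, n); Eij(i, j) = 1;        % elementary matrix E_ij(m x n)
        Pi  = Pi + kron(Eij, Eij');              % E_ij(m x n) kron E_ji(n x m)
    end
end
fprintf('vec-permutation error: %.2e\n', norm(Pi*A(:) - reshape(A', [], 1)));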

In the above definition we study the general case in which both the coefficient matrix A and the right-hand side b are perturbed; this extends (6) and (7) of Malyshev [14], who considered perturbations of A or of b separately.

Theorem 2
Let D_T = T \otimes I and let D_{x_\lambda}^\dagger be the Moore-Penrose inverse of D_{x_\lambda} = diag(x_\lambda). Denote

    r = b - A x_\lambda,    M = (A^T A + \lambda^2 L^T L)^{-1} A^T,
    H = (A^T A + \lambda^2 L^T L)^{-1} \otimes r^T - x_\lambda^T \otimes ((A^T A + \lambda^2 L^T L)^{-1} A^T).

(i) Assume that \|[(\Delta A)T, \beta(\Delta b)]\|_F <= \epsilon \|[AT, \beta b]\|_F. If \epsilon is small enough, then

    \|\Delta x\| / \|x_\lambda\|  \lesssim  \|[H D_T^{-1}, \beta^{-1} M]\| \|[(\Delta A)T, \beta(\Delta b)]\|_F / \|x_\lambda\|,        (8)

where the symbol \lesssim means that the expression on the left-hand side is less than or approximately equal to the right-hand side (to first order in \epsilon).

(ii)

    cond_{Reg}^F = \|[H D_T^{-1}, \beta^{-1} M]\| \|[AT, \beta b]\|_F / \|x_\lambda\|,        (9)
    m_{Reg} = \| |H| vec(|A|) + |M| |b| \|_\infty / \|x_\lambda\|_\infty,        (10)
    c_{Reg} = \| |D_{x_\lambda}^\dagger H| vec(|A|) + |D_{x_\lambda}^\dagger M| |b| \|_\infty.        (11)
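All three quantities in Theorem 2 are directly computable once H and M have been formed. The sketch below (a MATLAB illustration on a small random standard-form problem with T = I and \beta = 1; the data and \lambda are arbitrary stand-ins) evaluates (9)-(11) explicitly.

% Sketch of the condition-number formulae (9)-(11) of Theorem 2.
% Assumptions: random data, L = I, weights T = I and beta = 1 (so D_T = I).
rng(3);
m = 15; n = 8; lambda = 0.1;
A = randn(m, n); b = randn(m, 1); L = eye(n);

K    = A'*A + lambda^2*(L'*L);
xlam = K \ (A'*b);                                 % Tikhonov solution
r    = b - A*xlam;                                 % residual
M    = K \ A';                                     % M = (A'A + lambda^2 L'L)^{-1} A'
H    = kron(K\eye(n), r') - kron(xlam', M);        % H as in Theorem 2

condF = norm([H, M]) * norm([A, b], 'fro') / norm(xlam);        % (9)
v     = abs(H)*abs(A(:)) + abs(M)*abs(b);
mReg  = norm(v, inf) / norm(xlam, inf);                         % (10)
cReg  = norm(v ./ abs(xlam), inf);                              % (11); assumes xlam has no zero entry
fprintf('cond_Reg^F = %.3e, m_Reg = %.3e, c_Reg = %.3e\n', condF, mReg, cReg);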

Proof
From (3) and (5) we obtain

    \Delta x = (A^T A + \lambda^2 L^T L)^{-1} ( A^T \Delta b + (\Delta A)^T (b - A x_\lambda) - A^T \Delta A x_\lambda ) + O(\epsilon^2)        (12)

provided that \|[(\Delta A)T, \beta(\Delta b)]\|_F = O(\epsilon). In the following we show that (8) and (9) hold. Omitting the term O(\epsilon^2) in (12), we have

    \Delta x = (A^T A + \lambda^2 L^T L)^{-1} ( A^T \Delta b + (\Delta A)^T (b - A x_\lambda) - A^T \Delta A x_\lambda ),        (13)

which leads to

    \Delta x = [ (A^T A + \lambda^2 L^T L)^{-1} \otimes r^T - x_\lambda^T \otimes ((A^T A + \lambda^2 L^T L)^{-1} A^T),  (A^T A + \lambda^2 L^T L)^{-1} A^T ] [ vec(\Delta A); \Delta b ]
             = [ H D_T^{-1}, \beta^{-1} M ] [ D_T vec(\Delta A); \beta (\Delta b) ].        (14)

Thus, a simple calculation yields

    \|\Delta x\| / \|x_\lambda\|  <=  \|[H D_T^{-1}, \beta^{-1} M]\| \| [ D_T vec(\Delta A); \beta(\Delta b) ] \| / \|x_\lambda\|
                                   =  \|[H D_T^{-1}, \beta^{-1} M]\| \|[(\Delta A)T, \beta(\Delta b)]\|_F / \|x_\lambda\|
                                   <=  \epsilon \|[H D_T^{-1}, \beta^{-1} M]\| \|[AT, \beta b]\|_F / \|x_\lambda\|.

Hence Part (i) is true. Furthermore, the upper bound above is attainable by the properties of the 2-norm. Consequently, (9) holds.

Next we prove (10). According to |\Delta A| <= \epsilon |A|, if a_{ij} = 0 then \Delta a_{ij} = 0, i.e. the zero elements of A are not perturbed. Therefore,

    vec(\Delta A) = D_A D_A^\dagger vec(\Delta A),        (15)

where D_A = diag(vec(A)) and D_A^\dagger is the Moore-Penrose inverse of D_A. We can rewrite (14) as

    \Delta x = [ H D_A, M D_b ] [ D_A^\dagger vec(\Delta A); D_b^\dagger (\Delta b) ],        (16)

where D_b = diag(b). Taking the infinity norm and using the assumption in the definition of m_{Reg}, we have

    \|\Delta x\|_\infty <= \|[H D_A, M D_b]\|_\infty \| [ D_A^\dagger vec(\Delta A); D_b^\dagger (\Delta b) ] \|_\infty        (17)
                        <= \epsilon \|[H D_A, M D_b]\|_\infty.        (18)

The equality is attainable: by the definition of \|\cdot\|_\infty there exists y with \|y\|_\infty = 1 such that

    \|[H D_A, M D_b]\|_\infty = max_{\|y\|_\infty = 1} \|[H D_A, M D_b] y\|_\infty = \|[H D_A, M D_b] y\|_\infty.

Taking [ vec(\Delta A); \Delta b ] = \epsilon diag(D_A, D_b) y makes (18) an equality. This results in

    m_{Reg} = \|[H D_A, M D_b]\|_\infty / \|x_\lambda\|_\infty.        (19)

It is easy to verify that

    \|[H D_A, M D_b]\|_\infty = \| |[H D_A, M D_b]| e \|_\infty = \| |H D_A| e + |M D_b| e \|_\infty = \| |H| vec(|A|) + |M| |b| \|_\infty,

where e = [1, 1, ..., 1]^T, so (10) follows. Finally, we prove (11). From the definition of c_{Reg}, if (x_\lambda)_i = 0 but \Delta x_i \ne 0, then c_{Reg} = \infty. Otherwise, reformulate (14) as

    D_{x_\lambda}^\dagger \Delta x = [ D_{x_\lambda}^\dagger H D_A, D_{x_\lambda}^\dagger M D_b ] [ D_A^\dagger vec(\Delta A); D_b^\dagger (\Delta b) ].        (20)

Then we obtain

    c_{Reg} = \|[D_{x_\lambda}^\dagger H D_A, D_{x_\lambda}^\dagger M D_b]\|_\infty
            = \| |D_{x_\lambda}^\dagger H D_A| e + |D_{x_\lambda}^\dagger M D_b| e \|_\infty
            = \| |D_{x_\lambda}^\dagger H| vec(|A|) + |D_{x_\lambda}^\dagger M| |b| \|_\infty.

This completes the proof.

Condition numbers bound the worst-case sensitivity of the solution of a problem to small perturbations in the input data [16, 28]. If \epsilon is the size of the perturbation, then a term O(\epsilon^2) is neglected and the bound only holds for \epsilon small enough; in this sense, condition numbers yield first-order bounds for these sensitivities. The following result exhibits unrestricted perturbation bounds for the Tikhonov regularization.

Theorem 3
Let \Delta A \in R^{m\times n} and \Delta b \in R^m be perturbations of A and b, respectively. Let x_\lambda and y_\lambda := x_\lambda + \Delta x be the solutions of the normal equations (3) and (5), respectively. If for some non-negative E \in R^{m\times n} and f \in R^m we have |\Delta A| <= \epsilon E and |\Delta b| <= \epsilon f, then

    \|y_\lambda - x_\lambda\|_\infty / \|x_\lambda\|_\infty  <=  \epsilon \| |Q| vec(E) + |S| f \|_\infty / \|x_\lambda\|_\infty,        (21)

where

    Q = (A^T A + \lambda^2 L^T L)^{-1} \otimes s^T - y_\lambda^T \otimes ((A^T A + \lambda^2 L^T L)^{-1} A^T),
    S = (A^T A + \lambda^2 L^T L)^{-1} A^T,
    s = b + \Delta b - (A + \Delta A) y_\lambda.
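Unlike (8)-(11), the bound (21) does not neglect any higher-order terms, so it can be checked directly against the exact perturbed solution. A MATLAB sketch of such a check follows (random data, L = I and an arbitrary choice of E, f and \epsilon; none of this is taken from the paper's experiments).

% Sketch of the unrestricted perturbation bound (21) of Theorem 3.
rng(4);
m = 15; n = 8; lambda = 0.1; eps0 = 1e-6;
A = randn(m, n); b = randn(m, 1); L = eye(n);
E = abs(randn(m, n)); f = abs(randn(m, 1));       % componentwise envelopes
dA = eps0 * E .* (2*rand(m, n) - 1);              % |dA| <= eps0*E
db = eps0 * f .* (2*rand(m, 1) - 1);              % |db| <= eps0*f

K    = A'*A + lambda^2*(L'*L);
xlam = K \ (A'*b);                                              % unperturbed solution
ylam = ((A+dA)'*(A+dA) + lambda^2*(L'*L)) \ ((A+dA)'*(b+db));   % perturbed solution

s = (b + db) - (A + dA)*ylam;                     % perturbed residual
S = K \ A';
Q = kron(K\eye(n), s') - kron(ylam', S);

bound = eps0 * norm(abs(Q)*E(:) + abs(S)*f, inf) / norm(xlam, inf);
err   = norm(ylam - xlam, inf) / norm(xlam, inf);
fprintf('actual error %.3e, bound (21) %.3e\n', err, bound);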

Proof
Substituting (3) into (5), we have

    (A^T A + \lambda^2 L^T L)(x_\lambda - y_\lambda) = -(\Delta A)^T ((b + \Delta b) - (A + \Delta A) y_\lambda) - A^T (\Delta b - \Delta A y_\lambda)
                                                     = -(\Delta A)^T s - A^T (\Delta b) + A^T (\Delta A) y_\lambda.

Hence,

    y_\lambda - x_\lambda = (A^T A + \lambda^2 L^T L)^{-1} ( -A^T \Delta A y_\lambda + (\Delta A)^T s + A^T \Delta b )
                          = [ (A^T A + \lambda^2 L^T L)^{-1} \otimes s^T - y_\lambda^T \otimes ((A^T A + \lambda^2 L^T L)^{-1} A^T),  (A^T A + \lambda^2 L^T L)^{-1} A^T ] [ vec(\Delta A); \Delta b ].

Let D_E = diag(vec(E)) and D_f = diag(f). We have

    y_\lambda - x_\lambda = [Q, S] diag(D_E, D_f) [ D_E^\dagger vec(\Delta A); D_f^\dagger (\Delta b) ],        (22)

which gives

    \|y_\lambda - x_\lambda\|_\infty <= \epsilon \| [Q, S] diag(D_E, D_f) \|_\infty = \epsilon \| |Q| vec(E) + |S| f \|_\infty.

Thus, inequality (21) follows. By the same argument as in the proof of Theorem 2, the error bound (21) is attainable; in this sense, inequality (21) is optimal.

In what follows we assume that b \notin R(A), where R(A) denotes the range of A; that is, the residual vector r = b - A x_\lambda \ne 0, where x_\lambda is the solution of problem (1). Next we introduce the mixed and componentwise condition numbers for the residual vector r.

Definition 4
The mixed and componentwise condition numbers for the residual vector are, respectively, given by

    m_{res} := lim_{\epsilon -> 0}  sup_{|\Delta A| <= \epsilon |A|,  |\Delta b| <= \epsilon |b|}  \|\Delta r\|_\infty / (\epsilon \|r\|_\infty),
    c_{res} := lim_{\epsilon -> 0}  sup_{|\Delta A| <= \epsilon |A|,  |\Delta b| <= \epsilon |b|}  (1/\epsilon) \| \Delta r / r \|_\infty.

The following theorem presents explicit expressions for the mixed and componentwise condition numbers m_{res} and c_{res}.

Theorem 5

    m_{res} = \| |U| vec(|A|) + |V| |b| \|_\infty / \|r\|_\infty,        (23)
    c_{res} = \| |D_r^\dagger U| vec(|A|) + |D_r^\dagger V| |b| \|_\infty,        (24)

where r = b - A x_\lambda,

    U = -x_\lambda^T \otimes (I - A(A^T A + \lambda^2 L^T L)^{-1} A^T) - (A(A^T A + \lambda^2 L^T L)^{-1}) \otimes r^T,
    V = I - A(A^T A + \lambda^2 L^T L)^{-1} A^T,

and D_r^\dagger is the Moore-Penrose inverse of D_r = diag(r).

Proof
Let s = b + \Delta b - (A + \Delta A) y_\lambda, where y_\lambda is the solution of problem (5). Omitting the higher-order terms, the perturbation of the residual vector r satisfies

    \Delta r := s - r        (25)
             = (b + \Delta b) - (A + \Delta A) y_\lambda - (b - A x_\lambda)
             = -(I - A(A^T A + \lambda^2 L^T L)^{-1} A^T)(\Delta A) x_\lambda - A(A^T A + \lambda^2 L^T L)^{-1} (\Delta A)^T r + (I - A(A^T A + \lambda^2 L^T L)^{-1} A^T) \Delta b
             = [ U, V ] [ vec(\Delta A); \Delta b ]
             = [ U D_A, V D_b ] [ D_A^\dagger vec(\Delta A); D_b^\dagger (\Delta b) ].        (26)

Consequently, we obtain

    \|\Delta r\|_\infty <= \|[U D_A, V D_b]\|_\infty \| [ D_A^\dagger vec(\Delta A); D_b^\dagger (\Delta b) ] \|_\infty <= \epsilon \|[U D_A, V D_b]\|_\infty.        (27)

Again, as shown in the proof of Theorem 2, the error bound (27) is attainable. Therefore,

    m_{res} = \|[U D_A, V D_b]\|_\infty / \|r\|_\infty = \| |U| vec(|A|) + |V| |b| \|_\infty / \|r\|_\infty.

As for the componentwise condition number c_{res}, if r_i = 0 but \Delta r_i \ne 0, then c_{res} = \infty. Otherwise,

    c_{res} = \|[D_r^\dagger U D_A, D_r^\dagger V D_b]\|_\infty = \| |D_r^\dagger U| vec(|A|) + |D_r^\dagger V| |b| \|_\infty.
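As with Theorem 2, the expressions (23) and (24) are computable once U and V have been formed. A small MATLAB sketch (random standard-form data with L = I; purely illustrative) is given below.

% Sketch of the residual condition numbers (23)-(24) of Theorem 5.
rng(5);
m = 15; n = 8; lambda = 0.1;
A = randn(m, n); b = randn(m, 1); L = eye(n);

K    = A'*A + lambda^2*(L'*L);
xlam = K \ (A'*b);
r    = b - A*xlam;
V    = eye(m) - A*(K\A');                          % V = I - A (A'A + lambda^2 L'L)^{-1} A'
U    = -kron(xlam', V) - kron(A*(K\eye(n)), r');   % U as in Theorem 5

w     = abs(U)*abs(A(:)) + abs(V)*abs(b);
m_res = norm(w, inf) / norm(r, inf);               % (23)
c_res = norm(w ./ abs(r), inf);                    % (24); assumes r has no zero entry
fprintf('m_res = %.3e, c_res = %.3e\n', m_res, c_res);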

3. PERTURBATION ANALYSIS WITH THE AUGMENTED SYSTEM FOR THE TIKHONOV REGULARIZATION

In this section we derive perturbation results corresponding to componentwise errors in A and b for the Tikhonov regularization via its augmented system. A similar analysis for the linear least squares problem was given by Arioli et al. [23] and Björck [20].

Recall that for the linear system Ax = b with A nonsingular, the basic identity for the perturbation analysis given by Bauer [29] and Skeel [18] is

    \Delta x = (I + A^{-1}\Delta A)^{-1} A^{-1} (\Delta b - \Delta A x),        (28)

which implies

    |\Delta x| <= |(I + A^{-1}\Delta A)^{-1}| |A^{-1}| ( |\Delta b| + |\Delta A| |x| ).

Hence we have

    |\Delta x| <= (I - |A^{-1}\Delta A|)^{-1} |A^{-1}| ( |\Delta b| + |\Delta A| |x| ) = (I + O(|A^{-1}\Delta A|)) |A^{-1}| ( |\Delta b| + |\Delta A| |x| ),        (29)

provided that the spectral radius \rho(|A^{-1}\Delta A|) < 1.

In the following we extend the Bauer-Skeel analysis to the Tikhonov regularization (2). Let

    G(\alpha) = [ \alpha I  0  A ;  0  \alpha I  \lambda L ;  A^T  \lambda L^T  0 ],
    \Delta G  = [ 0  0  \Delta A ;  0  0  0 ;  (\Delta A)^T  0  0 ],

where \alpha \ne 0 is a scaling parameter. It is easy to see that

    G^{-1}(\alpha) = [ \alpha^{-1}(I - A(A^T A + \lambda^2 L^T L)^{-1} A^T)    -\alpha^{-1}\lambda A(A^T A + \lambda^2 L^T L)^{-1} L^T    A(A^T A + \lambda^2 L^T L)^{-1} ;
                       -\alpha^{-1}\lambda L(A^T A + \lambda^2 L^T L)^{-1} A^T    \alpha^{-1}(I - \lambda^2 L(A^T A + \lambda^2 L^T L)^{-1} L^T)    \lambda L(A^T A + \lambda^2 L^T L)^{-1} ;
                       (A^T A + \lambda^2 L^T L)^{-1} A^T    \lambda (A^T A + \lambda^2 L^T L)^{-1} L^T    -\alpha (A^T A + \lambda^2 L^T L)^{-1} ].

It is well known that the augmented systems of the regularized least squares problem (1) and of its perturbed counterpart have the forms

    G(\alpha) [ \alpha^{-1} r; \alpha^{-1} t; x_\lambda ] = [ b; 0; 0 ],
    [ G(\alpha) + \Delta G ] [ \alpha^{-1}(r + \Delta r); \alpha^{-1}(t + \Delta t); x_\lambda + \Delta x ] = [ b + \Delta b; 0; 0 ],        (30)

where x_\lambda is the unique solution of problem (1), r = b - A x_\lambda and t = -\lambda L x_\lambda. As a direct application of (29), the following holds:

    [ \alpha^{-1}|\Delta r|; \alpha^{-1}|\Delta t|; |\Delta x| ] <= (I + O(\| |G^{-1}(\alpha)| |\Delta G| \|)) |G^{-1}(\alpha)| [ |\Delta b| + |\Delta A| |x_\lambda|; 0; \alpha^{-1} |(\Delta A)^T| |r| ],

provided that the spectral radius

    \rho(|G^{-1}(\alpha)| |\Delta G|)
      = \rho( [ |A(A^T A + \lambda^2 L^T L)^{-1}| |(\Delta A)^T|   0   \alpha^{-1} |I - A(A^T A + \lambda^2 L^T L)^{-1} A^T| |\Delta A| ;
                \lambda |L(A^T A + \lambda^2 L^T L)^{-1}| |(\Delta A)^T|   0   \alpha^{-1}\lambda |L(A^T A + \lambda^2 L^T L)^{-1} A^T| |\Delta A| ;
                \alpha |(A^T A + \lambda^2 L^T L)^{-1}| |(\Delta A)^T|   0   |(A^T A + \lambda^2 L^T L)^{-1} A^T| |\Delta A| ] )
      = \rho( [ |A(A^T A + \lambda^2 L^T L)^{-1}| |(\Delta A)^T|   \alpha^{-1} |I - A(A^T A + \lambda^2 L^T L)^{-1} A^T| |\Delta A| ;
                \alpha |(A^T A + \lambda^2 L^T L)^{-1}| |(\Delta A)^T|   |(A^T A + \lambda^2 L^T L)^{-1} A^T| |\Delta A| ] ) < 1.

Let the perturbations \Delta A and \Delta b satisfy the componentwise bounds |\Delta A| <= \epsilon |A| and |\Delta b| <= \epsilon |b|. A simple calculation then yields

    [ |\Delta r|; |\Delta t|; |\Delta x| ]
      <= \epsilon [ |I - A(A^T A + \lambda^2 L^T L)^{-1} A^T|   |A(A^T A + \lambda^2 L^T L)^{-1}| ;
                    \lambda |L(A^T A + \lambda^2 L^T L)^{-1} A^T|   \lambda |L(A^T A + \lambda^2 L^T L)^{-1}| ;
                    |(A^T A + \lambda^2 L^T L)^{-1} A^T|   |(A^T A + \lambda^2 L^T L)^{-1}| ]
                  [ |b| + |A| |x_\lambda| ; |A^T| |r| ]  +  O(\epsilon^2),        (31)

if

    \epsilon < \rho^{-1}( [ |A(A^T A + \lambda^2 L^T L)^{-1}| |A^T|   \alpha^{-1} |I - A(A^T A + \lambda^2 L^T L)^{-1} A^T| |A| ;
                            \alpha |(A^T A + \lambda^2 L^T L)^{-1}| |A^T|   |(A^T A + \lambda^2 L^T L)^{-1} A^T| |A| ] ).

Obviously, the parameter \alpha does not appear in (31). Hence the analysis leads to the following result.

Theorem 6
Let x_\lambda and x_\lambda + \Delta x be the unique solutions of problem (1) and of its perturbed counterpart (4), respectively. Denote r = b - A x_\lambda and r + \Delta r = (b + \Delta b) - (A + \Delta A)(x_\lambda + \Delta x). For any \epsilon > 0 satisfying

    \epsilon < sup_{\alpha \ne 0, \alpha \in R} \rho^{-1}( [ |A(A^T A + \lambda^2 L^T L)^{-1}| |A^T|   \alpha^{-1} |I - A(A^T A + \lambda^2 L^T L)^{-1} A^T| |A| ;
                                                             \alpha |(A^T A + \lambda^2 L^T L)^{-1}| |A^T|   |(A^T A + \lambda^2 L^T L)^{-1} A^T| |A| ] ),        (32)

and any \Delta A and \Delta b with |\Delta A| <= \epsilon |A| and |\Delta b| <= \epsilon |b|, we obtain

    \|\Delta r\|_\infty <= \epsilon ( \| |I - A(A^T A + \lambda^2 L^T L)^{-1} A^T| (|b| + |A| |x_\lambda|) \|_\infty + \| |A(A^T A + \lambda^2 L^T L)^{-1}| |A^T| |r| \|_\infty ) + O(\epsilon^2),
    \|\Delta x\|_\infty <= \epsilon ( \| |(A^T A + \lambda^2 L^T L)^{-1} A^T| (|b| + |A| |x_\lambda|) \|_\infty + \| |(A^T A + \lambda^2 L^T L)^{-1}| |A^T| |r| \|_\infty ) + O(\epsilon^2).
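The two bounds of Theorem 6 only require matrix-vector products with entry-wise absolute values and can be evaluated directly. The following MATLAB sketch (random standard-form data, an arbitrary \epsilon and componentwise relative perturbations generated at random; none of this reproduces the paper's experiments) compares them with the actual errors.

% Sketch of the componentwise first-order bounds of Theorem 6.
% Assumptions: random data, L = I, |dA| <= eps0*|A|, |db| <= eps0*|b|.
rng(6);
m = 15; n = 8; lambda = 0.1; eps0 = 1e-8;
A = randn(m, n); b = randn(m, 1); L = eye(n);

K    = A'*A + lambda^2*(L'*L);
xlam = K \ (A'*b);  r = b - A*xlam;

dA = eps0 * abs(A) .* (2*rand(m, n) - 1);
db = eps0 * abs(b) .* (2*rand(m, 1) - 1);
y  = ((A+dA)'*(A+dA) + lambda^2*(L'*L)) \ ((A+dA)'*(b+db));
s  = (b+db) - (A+dA)*y;

g = abs(b) + abs(A)*abs(xlam);                       % |b| + |A||x_lambda|
h = abs(A')*abs(r);                                  % |A^T||r|
bound_x = eps0 * (norm(abs(K\A')*g, inf) + norm(abs(K\eye(n))*h, inf));
bound_r = eps0 * (norm(abs(eye(m) - A*(K\A'))*g, inf) + norm(abs(A*(K\eye(n)))*h, inf));

fprintf('||dx||_inf = %.2e, bound %.2e\n', norm(y - xlam, inf), bound_x);
fprintf('||dr||_inf = %.2e, bound %.2e\n', norm(s - r, inf), bound_r);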

4. NUMERICAL EXAMPLES

In this section we present some numerical examples to illustrate the perturbation bounds from the previous sections. All the computations are carried out in MATLAB 7 with the REGULARIZATION TOOLS package [30]. The machine precision is about 2.2e-16.

(1) First we review some well-known results on perturbation bounds for the Tikhonov regularization. Hansen [31-33] illuminated the tradeoff between adding too much regularization (so that the solution becomes too smooth) and too little regularization (so that the solution is too sensitive to the perturbations). The main tool for this analysis is the generalized SVD [31, 34] of the matrix pair (A, L),

    A = U [ \Sigma  0 ; 0  I ] X^{-1},    L = V [ M  0 ] X^{-1},        (33)

where U \in R^{m\times n} and V \in R^{p\times p} have orthonormal columns, X \in R^{n\times n} is nonsingular, and \Sigma, M \in R^{p\times p} are diagonal matrices with non-negative diagonal elements \sigma_i and \mu_i satisfying \sigma_i^2 + \mu_i^2 = 1 (i = 1, 2, ..., p). Hansen's result is quoted in the following theorem.

Theorem 7 ([31, Theorem 4.1]; [32])
Let x_\lambda and \tilde{x}_\lambda be, respectively, the solutions to

    min_x { \|Ax - b\|^2 + \lambda^2 \|Lx\|^2 }    and    min_x { \|\tilde{A}x - \tilde{b}\|^2 + \lambda^2 \|Lx\|^2 }        (34)

computed with the same L and \lambda, and denote

    \Delta A = \tilde{A} - A,   \Delta b = \tilde{b} - b,   b_\lambda = A x_\lambda,   r_\lambda = b - b_\lambda,   \epsilon = \|\Delta A\| / \|A\|,   \kappa_\lambda = \|A\| \|X\| / (2\lambda),

where X is the matrix in (33). If 0 < \lambda <= 1, then

    \|\tilde{x}_\lambda - x_\lambda\| / \|x_\lambda\|  <=  (\kappa_\lambda / (1 - \epsilon \kappa_\lambda)) ( (1 + cond(X)) \epsilon + \|\Delta b\| / \|b_\lambda\| + \epsilon \kappa_\lambda \|r_\lambda\| / \|b_\lambda\| ).        (35)

Furthermore, if \Delta A = 0 and \lambda < 1/\sqrt{2}, then we obtain the tighter bound

    \|\tilde{x}_\lambda - x_\lambda\| / \|x_\lambda\|  <=  \kappa_\lambda (1 - \lambda^2)^{-1/2} \|\Delta b\| / \|b_\lambda\|.        (36)

In particular, if L is square (p = n) and invertible, and if we define \kappa_\lambda = \|A\| \|L^{-1}\| / (2\lambda), then (35) and (36) can be sharpened to

    \|\tilde{x}_\lambda - x_\lambda\| / \|x_\lambda\|  <=  (\kappa_\lambda / (1 - \epsilon \kappa_\lambda)) ( (1 + cond(L)) \epsilon + \|\Delta b\| / \|b_\lambda\| + \epsilon \kappa_\lambda \|r_\lambda\| / \|b_\lambda\| )        (37)

and

    \|\tilde{x}_\lambda - x_\lambda\| / \|x_\lambda\|  <=  \kappa_\lambda (1 - \lambda^2)^{-1/2} \|\Delta b\| / \|b_\lambda\|.        (38)

If L = I, then (37) and (38) hold with \kappa_\lambda = \|A\| / (2\lambda) and cond(L) = 1.

Malyshev [14] derived an explicit expression for \Delta x similar to (13) and obtained the following straightforward perturbation bound for the Tikhonov solution x_\lambda:

    \|\tilde{x}_\lambda - x_\lambda\| / \|x_\lambda\|  \lesssim  \|(A^T A + \lambda^2 I)^{-1} A^T\| (\|b\| / \|x_\lambda\|) (\|\Delta b\| / \|b\|)
        + \|A\| ( \|(A^T A + \lambda^2 I)^{-1} A^T\| + \|(A^T A + \lambda^2 I)^{-1}\| \|r_\lambda\| / \|x_\lambda\| ) (\|\Delta A\| / \|A\|).        (39)

From our discussion in Section 2 we also have our new linear perturbation bound (8).

We complete this section with a collection of numerical examples illustrating the results described in the previous sections. The test problems come from discretizations of Fredholm integral equations of the first kind [35, 36], and they lead to discrete ill-posed problems and matrices with ill-determined numerical rank. All the test problems are included in [30].

The first test problem is a mildly ill-posed problem which is a classical example of an ill-posed problem and is included in [30] as the function deriv2. It is a discretization of a first-kind Fredholm integral equation whose kernel K is the Green's function for the second derivative. In this example we first generate a discrete problem Ax = b_exact by deriv2 with dimensions m = n = 64 and then apply standard-form Tikhonov regularization to it. To see how sensitive the regularized solution is to perturbations of the input data, we take the original ill-posed problem to be noise-free. The singular values of this problem decay fairly slowly, and the matrix is only mildly ill-conditioned. In Figure 1 we show how the Tikhonov regularized solutions x_\lambda approximate the exact solution x_exact for deriv2 and the four values of the regularization parameter \lambda listed in Table I.

Let \epsilon = 1e-9. We assume that the perturbations of the input data A and b satisfy \|\Delta A\|_F <= \epsilon \|A\|_F and \|\Delta b\| <= \epsilon \|b\|. Table I shows the resulting perturbation analysis. We compare the previous perturbation bounds with our new bound (8), where T = I and \beta = 1 are assumed. We also provide the relative normwise, mixed and componentwise condition numbers for the different regularization parameters \lambda, which show how the sensitivity of the regularized solution to perturbations changes with \lambda. We can see from this example that our new bound (8) is better than the previous ones and close to the real error. Theorem 7 indicates that the quantity 1/\lambda plays the role of a condition number for the Tikhonov regularized solution: the larger the \lambda, the less sensitive the regularized solution is to perturbations. From our exact computation of the relative normwise condition number for the Tikhonov regularization it can now be observed directly how the condition numbers change with \lambda.
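For a quick numerical comparison of Malyshev's bound (39) with the new bound (8), the MATLAB sketch below can be used. It runs on a small random standard-form problem with T = I and \beta = 1 (the data, \lambda and \epsilon are arbitrary stand-ins; with the REGULARIZATION TOOLS package installed, the matrix and right-hand side could instead be generated by its deriv2 function).

% Sketch: actual error vs. Malyshev's bound (39) and the new bound (8).
% Assumptions: random data, L = I, T = I, beta = 1.
rng(7);
m = 30; n = 20; lambda = 1e-2; eps0 = 1e-9;
A = randn(m, n); b = randn(m, 1);

K    = A'*A + lambda^2*eye(n);
xlam = K \ (A'*b);  r = b - A*xlam;  M = K \ A';
H    = kron(K\eye(n), r') - kron(xlam', M);

dA = randn(m, n); dA = eps0 * norm(A, 'fro') * dA / norm(dA, 'fro');   % ||dA||_F = eps0*||A||_F
db = randn(m, 1); db = eps0 * norm(b) * db / norm(db);                 % ||db||   = eps0*||b||
y  = ((A+dA)'*(A+dA) + lambda^2*eye(n)) \ ((A+dA)'*(b+db));

err        = norm(y - xlam) / norm(xlam);
bound_new  = norm([H, M]) * norm([dA, db], 'fro') / norm(xlam);                          % (8)
bound_maly = norm(M) * norm(b)/norm(xlam) * norm(db)/norm(b) ...
           + norm(A) * (norm(M) + norm(K\eye(n))*norm(r)/norm(xlam)) * norm(dA)/norm(A); % (39)
fprintf('error %.2e, new bound (8) %.2e, Malyshev bound (39) %.2e\n', err, bound_new, bound_maly);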

Figure 1. Test problem deriv2 with standard-form Tikhonov regularization: comparison of the exact solution (dashed line) of the problem Ax = b with the regularized solution (dotted line) for four values of \lambda.

Table I. Perturbation results (deriv2 [30], \epsilon = 1e-9). Columns: \lambda; the bound of Theorem 7 (Hansen); the bound (39) (Malyshev); the new bound (8); the actual error \|\Delta x_\lambda\| / \|x_\lambda\|; cond_{Reg}^F (9); m_{Reg} (10); c_{Reg} (11).
1.e 1.19e 9 1.9e 9 1.e 9 1.1e 9.5e+.95e+ 3.11e+ 6.e 1.93e 1 8.3e e e 11.58e+ 3.37e+ 3.97e+ 1.7e e e e 1 5.9e 1 7.6e+1 5.6e+1.31e+ 1.7e e 9.55e 9.57e 9.38e e+ 4.48e+.43e+3

To illustrate the importance of the matrix L for the regularized solution, we apply Tikhonov regularization in general form to the test problem deriv2, with L = L_1 \in R^{(n-1)\times n}, the bidiagonal matrix with rows [1, -1], approximating the first-derivative operator. The dimensions are still m = n = 64, and we compute the Tikhonov regularized solution for the four values of the regularization parameter \lambda listed in Table II; a construction of L_1 (and of the second-derivative operator L_2 used later) is sketched below.
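The derivative operators used in the general-form experiments are simple banded matrices. A standard finite-difference construction in MATLAB is sketched here (the sign convention is immaterial, since only \|Lx\| enters the regularization).

% Discrete first- and second-derivative operators L1 and L2 (sketch).
n  = 64;
e  = ones(n, 1);
L1 = spdiags([e, -e], [0, 1], n-1, n);          % (n-1) x n, rows [1 -1]
L2 = spdiags([e, -2*e, e], [0, 1, 2], n-2, n);  % (n-2) x n, rows [1 -2 1]
disp(full(L1(1:3, 1:5))); disp(full(L2(1:3, 1:5)));   % leading blocks, for inspection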

Table II. Perturbation results (deriv2 [30], L = L_1). Columns as in Table I.
1.e e 7 6.8e e e 9 1.9e+1 7.3e+.86e+1 6.e 3.3e e e e 9 1.8e e+ 4.31e+1 6.e 3 1.5e e 9 1.3e e 9 7.7e e+1.41e+ 1.7e 3.64e e e 8 1.1e e+ 9.83e e+

As shown in Table II, our new bound is slightly worse than Malyshev's in this case, but is still close to the real error. We can also see that the condition number cond_{Reg}^F is more sensitive to the matrix L. We should mention that one of the regularization parameters in Table I and one in Table II were chosen by the L-curve criterion [37]. For the Tikhonov regularization, where the regularization parameter is continuous, the L-curve is a continuous curve obtained as a parametric plot of the discrete smoothing (semi)norm \|L x_\lambda\| versus the corresponding residual norm \|A x_\lambda - b\|, with \lambda as the parameter. The L-shaped corner of the L-curve appears for a regularization parameter close to the optimal parameter that balances the regularization errors and the perturbation errors in x_\lambda; this is the basis of the L-curve criterion for choosing the regularization parameter. Besides the L-curve criterion, a variety of parameter-choice strategies have been proposed, such as the discrepancy principle [38], generalized cross-validation (GCV) [39] and the quasi-optimality criterion [38].

In the next example we use our theoretical results to make an experimental comparison of these four parameter-choice methods for the Tikhonov regularization. We consider a first-kind Fredholm integral equation that is a one-dimensional model problem in image reconstruction from [40]. This test problem is included in [30] as the function shaw. We first generate a discrete ill-posed problem Ax = b_exact and then add white noise to the right-hand side, i.e. a perturbation e whose elements are normally distributed with zero mean and standard deviation chosen such that the noise-to-signal ratio is \|e\| / \|b\| = 1e-4, thus producing a more realistic problem. The singular values of shaw decay rapidly until they hit the machine precision times \|A\|, and the condition number of this matrix is therefore approximately the reciprocal of the machine precision. In Figure 2 we show the exact solution of test problem shaw with dimensions m = n = 64 and the standard-form Tikhonov regularized solutions for the values of the regularization parameter found by the four parameter-choice methods. Now we set \epsilon = 1e-5; Table III shows the corresponding perturbation analysis results. It is known that, within a certain range of regularization parameters, there is usually no particular choice that stands out as natural compared with the other choices [41, Section 7.1]. Similarly, as we can see in Table III, we could not identify a choice that makes the regularization less sensitive.
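A basic version of the L-curve described above is easy to produce without any toolbox: sweep \lambda over a grid, solve the stacked least-squares problem (2) for each value, and plot \|L x_\lambda\| against \|A x_\lambda - b\| on a log-log scale. The MATLAB sketch below does this for a small random problem (the data, the grid and the noise level are arbitrary stand-ins, and this is not the Regularization Tools implementation of the criterion).

% Minimal L-curve sketch: seminorm versus residual norm over a lambda grid.
rng(8);
m = 40; n = 30;
A = randn(m, n); b = A*randn(n, 1) + 1e-3*randn(m, 1);      % noisy consistent data
L = spdiags([ones(n,1), -ones(n,1)], [0, 1], n-1, n);       % first-derivative operator

lambdas = logspace(-6, 1, 50);
res = zeros(size(lambdas)); semi = zeros(size(lambdas));
for k = 1:numel(lambdas)
    x = [A; lambdas(k)*L] \ [b; zeros(n-1, 1)];             % Tikhonov solution for this lambda
    res(k)  = norm(A*x - b);
    semi(k) = norm(L*x);
end
loglog(res, semi, '-o'); xlabel('||Ax_\lambda - b||'); ylabel('||Lx_\lambda||');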

Figure 2. Test problem shaw with standard-form Tikhonov regularization: comparison of the exact solution (solid line) of the problem Ax = b with the regularized solution (dotted line) for the four parameter-choice methods.

Table III. Perturbation results (shaw [30], \epsilon = 1e-5). Columns as in Table I; the rows are labelled by the parameter-choice method.
Dis. pr. 3.9e e 5.1e e e 3.81e e+ 5.54e+4 L-curve 1.5e 4 7.7e 1.17e e e e e+4.45e+6 GCV 9.18e e+ 5.31e 1 3.7e e 1.39e e+4 5.8e+5 Quasi-opt 3.65e e 4.64e 3 5.7e e 3.59e e+ 5.1e+4

Again, we apply the general-form Tikhonov regularization to the test problem shaw, now with L = L_2 \in R^{(n-2)\times n}, the banded matrix with rows [1, -2, 1] approximating the second-derivative operator. We still use the four corresponding general-form parameter-choice methods and summarize the results in Table IV.

(2) From Theorem 2 we can deduce the first-order componentwise perturbation bounds for the Tikhonov regularized solution,

    \|\Delta x\|_\infty / \|x_\lambda\|_\infty  <=  \epsilon m_{Reg} + O(\epsilon^2),        (40)
    \| \Delta x / x_\lambda \|_\infty  <=  \epsilon c_{Reg} + O(\epsilon^2).        (41)

Table IV. Perturbation results (shaw [30], L = L_2). Columns as in Table III.
Dis. pr. 1.6e 9.5e+ 1.14e 1.3e 1.6e 6.3e+3 1.7e+3 1.7e+5 L-curve 3.34e e+1 3.1e.64e.7e 1.35e+4 3.7e+3 3.e+5 GCV 5.8e e+ 3.65e 1 3.e 1.83e 1 1.9e+5.43e e+5 GCV 4.6e+1.13e 3 4.1e e e 5.81e+1 7.5e+ 1.45e+1

Figure 3. Test problem wing with general-form Tikhonov regularization: comparison of the exact solution (dashed line) of the problem Ax = b with the regularized solution of the unperturbed problem (dotted line) for four values of \lambda.

Now we use a test problem with a discontinuous solution to illustrate the componentwise perturbation analysis. This test problem appears as problem VI.1 in [42] and is included in [30] as the function wing. We generate a noise-free test problem with dimensions m = n = 64 and again use L = L_1 to approximate the first-derivative operator. In this example the exact solution of the underlying integral equation has two discontinuities, at the abscissas 1/3 and 2/3, and the exact solution vector of Ax = b has two large gaps, between elements 21 and 22 and between elements 43 and 44, which can be seen in Figure 3 (dashed lines). We also show in this figure the Tikhonov solutions (dotted lines) for four values of \lambda.
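The componentwise experiment reported in Table V can be reproduced in outline as follows: draw uniformly distributed relative perturbations of A and b, solve the perturbed problem, and compare the measured errors with \epsilon m_{Reg} and \epsilon c_{Reg} from (40) and (41). The MATLAB sketch below uses small random data with L = L_1 instead of the wing problem, so it only illustrates the mechanics, not the values of Table V.

% Sketch of the componentwise checks behind (40)-(41).
% Assumptions: random data, L = L_1, eps0 = 1e-8, x_lambda has no zero entry.
rng(9);
m = 20; n = 12; lambda = 1e-2; eps0 = 1e-8;
A = randn(m, n); b = randn(m, 1);
L = spdiags([ones(n,1), -ones(n,1)], [0, 1], n-1, n);

K    = A'*A + lambda^2*full(L'*L);
xlam = K \ (A'*b);  r = b - A*xlam;  M = K \ A';
H    = kron(K\eye(n), r') - kron(xlam', M);
v    = abs(H)*abs(A(:)) + abs(M)*abs(b);
mReg = norm(v, inf) / norm(xlam, inf);                % (10)
cReg = norm(v ./ abs(xlam), inf);                     % (11)

S = 2*rand(m, n) - 1;  f = 2*rand(m, 1) - 1;          % uniform on (-1,1)
dA = eps0 * S .* A;    db = eps0 * f .* b;            % |dA| <= eps0*|A|, |db| <= eps0*|b|
y  = ((A+dA)'*(A+dA) + lambda^2*full(L'*L)) \ ((A+dA)'*(b+db));

fprintf('mixed error %.2e vs eps*m_Reg = %.2e\n', norm(y-xlam, inf)/norm(xlam, inf), eps0*mReg);
fprintf('componentwise error %.2e vs eps*c_Reg = %.2e\n', norm((y-xlam)./xlam, inf), eps0*cReg);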

Table V. Perturbation results ([42, problem VI.1], L = L_1). Columns: the actual mixed error \|\Delta x_\lambda\|_\infty / \|x_\lambda\|_\infty; the bound \epsilon m_{Reg}(A, b); the actual componentwise error \|\Delta x / x_\lambda\|_\infty; the bound \epsilon c_{Reg}(A, b).
e 9 7.1e e e e e e 8.47e e 9.e 7 1.3e e e e 6 7.5e e 4

Let \epsilon = 1e-8 and let (S, f) be random, with each entry uniformly distributed on the interval (-1, 1). Then let \Delta A_{ij} = \epsilon S_{ij} A_{ij} and \Delta b_i = \epsilon f_i b_i, so that |\Delta A| <= \epsilon |A| and |\Delta b| <= \epsilon |b|. We then obtain Table V. It is evident that both the mixed and the componentwise perturbation bounds are good approximations to the real relative errors.

ACKNOWLEDGEMENTS

The authors are grateful to Prof. O. Axelsson, Dr Maya Neytcheva and two reviewers for their many valuable comments and helpful suggestions. D. Chu and Roger C. E. Tan are partially supported by an NUS Research Grant. Y. Wei is supported by the National Natural Science Foundation of China, the Shanghai Education Committee (Dawn Project), the Shanghai Science and Technology Committee and the Doctoral Program of the Ministry of Education.

REFERENCES

1. Tikhonov AN. Solution of incorrectly formulated problems and the regularization method. Soviet Mathematics Doklady 1963; 4. English translation of Doklady Akademii Nauk SSSR 151 (1963).
2. Engl HW. Regularization methods for the stable solution of inverse problems. Surveys on Mathematics for Industry 1993; 3.
3. Engl HW, Hanke M, Neubauer A. Regularization of Inverse Problems. Kluwer: Dordrecht, The Netherlands, 1996.
4. Groetsch CW. The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind. Research Notes in Mathematics, vol. 105. Pitman: Boston, 1984.
5. Groetsch CW. Inverse Problems in the Mathematical Sciences. Vieweg: Wiesbaden, Germany, 1993.
6. Hanke M, Hansen PC. Regularization methods for large-scale problems. Surveys on Mathematics for Industry 1993; 3.
7. Calvetti D, Reichel L. Tikhonov regularization of large linear problems. BIT 2003; 43.
8. Calvetti D, Reichel L, Shuibi A. Tikhonov regularization of large symmetric problems. Numerical Linear Algebra with Applications 2005; 12.
9. Eldén L. Algorithms for the regularization of ill-conditioned least squares problems. BIT 1977; 17.
10. Golub G, von Matt U. Tikhonov regularization for large scale problems. In Workshop on Scientific Computing, Golub GH, Lui SH, Luk F, Plemmons R (eds). Springer: New York, 1997.
11. Gulliksson ME, Wedin PÅ. Perturbation theory for generalized and constrained linear least squares. Numerical Linear Algebra with Applications 2000; 7.
12. Gulliksson ME, Wedin PÅ, Wei Y. Perturbation identities for regularized Tikhonov inverses and weighted pseudoinverses. BIT 2000; 40.
13. Björck Å. Numerical Methods for Least Squares Problems. SIAM: Philadelphia, 1996.
14. Malyshev AN. A unified theory of conditioning for linear least squares and Tikhonov regularization solutions. SIAM Journal on Matrix Analysis and Applications 2003; 24.

15. Higham NJ. A survey of componentwise perturbation theory in numerical linear algebra. Proceedings of Symposia in Applied Mathematics 1994; 48.
16. Higham NJ. Accuracy and Stability of Numerical Algorithms (2nd edn). SIAM: Philadelphia, PA, 2002.
17. Gohberg I, Koltracht I. Mixed, componentwise, and structured condition numbers. SIAM Journal on Matrix Analysis and Applications 1993; 14.
18. Skeel RD. Scaling for numerical stability in Gaussian elimination. Journal of the Association for Computing Machinery 1979; 26.
19. Rohn J. New condition numbers for matrices and linear systems. Computing 1989; 41.
20. Björck Å. Componentwise perturbation analysis and error bounds for linear least squares solutions. BIT 1991; 31.
21. Cucker F, Diao H, Wei Y. On mixed and componentwise condition numbers for Moore-Penrose inverse and linear least squares problems. Mathematics of Computation 2007; 76(258).
22. Demmel J, Hida Y, Li X, Riedy E. Extra-precise iterative refinement for overdetermined least squares problems. ACM Transactions on Mathematical Software 2009; 35(4).
23. Arioli M, Duff IS, de Rijk PPM. On the augmented system approach to sparse least squares problems. Numerische Mathematik 1989; 55.
24. Baboulin M, Dongarra J, Gratton S, Langou J. Computing the conditioning of the components of a linear least-squares solution. Numerical Linear Algebra with Applications 2009; 16.
25. Ben-Israel A, Greville TNE. Generalized Inverses: Theory and Applications (2nd edn). Springer: New York, 2003.
26. Wang G, Wei Y, Qiao S. Generalized Inverses: Theory and Computations. Science Press: Beijing, 2004.
27. Gratton S. On the condition number of linear least squares problems in a weighted Frobenius norm. BIT 1996; 36.
28. Rice JR. A theory of condition. SIAM Journal on Numerical Analysis 1966; 3.
29. Bauer FL. Genauigkeitsfragen bei der Lösung linearer Gleichungssysteme. Zeitschrift für Angewandte Mathematik und Mechanik 1966; 46.
30. Hansen PC. Regularization Tools version 4.0 for MATLAB 7.3. Numerical Algorithms 2007; 46.
31. Hansen PC. Regularization, GSVD and truncated GSVD. BIT 1989; 29.
32. Hansen PC. Perturbation bounds for discrete Tikhonov regularization. Inverse Problems 1989; 5.
33. Hansen PC. Truncated SVD solutions to discrete ill-posed problems with ill-determined numerical rank. SIAM Journal on Scientific and Statistical Computing 1990; 11.
34. Van Loan CF. Generalizing the singular value decomposition. SIAM Journal on Numerical Analysis 1976; 13.
35. Hanke M. Conjugate Gradient Type Methods for Ill-posed Problems. Longman Scientific and Technical: Essex, 1995.
36. Phillips DL. A technique for the numerical solution of certain integral equations of the first kind. Journal of the Association for Computing Machinery 1962; 9.
37. Hansen PC. Analysis of discrete ill-posed problems by means of the L-curve. SIAM Review 1992; 34.
38. Morozov VA. Methods for Solving Incorrectly Posed Problems. Springer: New York, 1984.
39. Wahba G. Spline Models for Observational Data. CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 59. SIAM: Philadelphia, 1990.
40. Shaw CB. Improvements of the resolution of an instrument by numerical solution of an integral equation. Journal of Mathematical Analysis and Applications 1972; 37.
41. Hansen PC. Rank-deficient and Discrete Ill-posed Problems. SIAM: Philadelphia, 1998.
42. Wing GM, Zahrt JD. A Primer on Integral Equations of the First Kind. SIAM: Philadelphia, 1991.


More information

A NEW L-CURVE FOR ILL-POSED PROBLEMS. Dedicated to Claude Brezinski.

A NEW L-CURVE FOR ILL-POSED PROBLEMS. Dedicated to Claude Brezinski. A NEW L-CURVE FOR ILL-POSED PROBLEMS LOTHAR REICHEL AND HASSANE SADOK Dedicated to Claude Brezinski. Abstract. The truncated singular value decomposition is a popular method for the solution of linear

More information

Introduction to Iterative Solvers of Linear Systems

Introduction to Iterative Solvers of Linear Systems Introduction to Iterative Solvers of Linear Systems SFB Training Event January 2012 Prof. Dr. Andreas Frommer Typeset by Lukas Krämer, Simon-Wolfgang Mages and Rudolf Rödl 1 Classes of Matrices and their

More information

THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR

THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR 1. Definition Existence Theorem 1. Assume that A R m n. Then there exist orthogonal matrices U R m m V R n n, values σ 1 σ 2... σ p 0 with p = min{m, n},

More information

Iterative techniques in matrix algebra

Iterative techniques in matrix algebra Iterative techniques in matrix algebra Tsung-Ming Huang Department of Mathematics National Taiwan Normal University, Taiwan September 12, 2015 Outline 1 Norms of vectors and matrices 2 Eigenvalues and

More information

SIMPLIFIED GSVD COMPUTATIONS FOR THE SOLUTION OF LINEAR DISCRETE ILL-POSED PROBLEMS (1.1) x R Rm n, x R n, b R m, m n,

SIMPLIFIED GSVD COMPUTATIONS FOR THE SOLUTION OF LINEAR DISCRETE ILL-POSED PROBLEMS (1.1) x R Rm n, x R n, b R m, m n, SIMPLIFIED GSVD COMPUTATIONS FOR THE SOLUTION OF LINEAR DISCRETE ILL-POSED PROBLEMS L. DYKES AND L. REICHEL Abstract. The generalized singular value decomposition (GSVD) often is used to solve Tikhonov

More information

14.2 QR Factorization with Column Pivoting

14.2 QR Factorization with Column Pivoting page 531 Chapter 14 Special Topics Background Material Needed Vector and Matrix Norms (Section 25) Rounding Errors in Basic Floating Point Operations (Section 33 37) Forward Elimination and Back Substitution

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

REORTHOGONALIZATION FOR GOLUB KAHAN LANCZOS BIDIAGONAL REDUCTION: PART II SINGULAR VECTORS

REORTHOGONALIZATION FOR GOLUB KAHAN LANCZOS BIDIAGONAL REDUCTION: PART II SINGULAR VECTORS REORTHOGONALIZATION FOR GOLUB KAHAN LANCZOS BIDIAGONAL REDUCTION: PART II SINGULAR VECTORS JESSE L. BARLOW Department of Computer Science and Engineering, The Pennsylvania State University, University

More information

AN ALGORITHM FOR DETERMINATION OF AGE-SPECIFIC FERTILITY RATE FROM INITIAL AGE STRUCTURE AND TOTAL POPULATION

AN ALGORITHM FOR DETERMINATION OF AGE-SPECIFIC FERTILITY RATE FROM INITIAL AGE STRUCTURE AND TOTAL POPULATION J Syst Sci Complex (212) 25: 833 844 AN ALGORITHM FOR DETERMINATION OF AGE-SPECIFIC FERTILITY RATE FROM INITIAL AGE STRUCTURE AND TOTAL POPULATION Zhixue ZHAO Baozhu GUO DOI: 117/s11424-12-139-8 Received:

More information

UNIVERSITY OF CAPE COAST REGULARIZATION OF ILL-CONDITIONED LINEAR SYSTEMS JOSEPH ACQUAH

UNIVERSITY OF CAPE COAST REGULARIZATION OF ILL-CONDITIONED LINEAR SYSTEMS JOSEPH ACQUAH UNIVERSITY OF CAPE COAST REGULARIZATION OF ILL-CONDITIONED LINEAR SYSTEMS JOSEPH ACQUAH 2009 UNIVERSITY OF CAPE COAST REGULARIZATION OF ILL-CONDITIONED LINEAR SYSTEMS BY JOSEPH ACQUAH THESIS SUBMITTED

More information

Weaker assumptions for convergence of extended block Kaczmarz and Jacobi projection algorithms

Weaker assumptions for convergence of extended block Kaczmarz and Jacobi projection algorithms DOI: 10.1515/auom-2017-0004 An. Şt. Univ. Ovidius Constanţa Vol. 25(1),2017, 49 60 Weaker assumptions for convergence of extended block Kaczmarz and Jacobi projection algorithms Doina Carp, Ioana Pomparău,

More information

arxiv: v2 [math.na] 8 Feb 2018

arxiv: v2 [math.na] 8 Feb 2018 MODIFIED TRUNCATED RANDOMIZED SINGULAR VALUE DECOMPOSITION (MTRSVD) ALGORITHMS FOR LARGE SCALE DISCRETE ILL-POSED PROBLEMS WITH GENERAL-FORM REGULARIZATION ZHONGXIAO JIA AND YANFEI YANG arxiv:1708.01722v2

More information

Irregular Solutions of an Ill-Posed Problem

Irregular Solutions of an Ill-Posed Problem Irregular Solutions of an Ill-Posed Problem Peter Linz University of California at Davis Richard L.C. Wang Clearsight Systems Inc. Abstract: Tikhonov regularization is a popular and effective method for

More information

Computational Methods. Systems of Linear Equations

Computational Methods. Systems of Linear Equations Computational Methods Systems of Linear Equations Manfred Huber 2010 1 Systems of Equations Often a system model contains multiple variables (parameters) and contains multiple equations Multiple equations

More information

Discrete Ill Posed and Rank Deficient Problems. Alistair Boyle, Feb 2009, SYS5906: Directed Studies Inverse Problems 1

Discrete Ill Posed and Rank Deficient Problems. Alistair Boyle, Feb 2009, SYS5906: Directed Studies Inverse Problems 1 Discrete Ill Posed and Rank Deficient Problems Alistair Boyle, Feb 2009, SYS5906: Directed Studies Inverse Problems 1 Definitions Overview Inversion, SVD, Picard Condition, Rank Deficient, Ill-Posed Classical

More information

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B Chapter 8 - S&B Algebraic operations Matrix: The size of a matrix is indicated by the number of its rows and the number of its columns. A matrix with k rows and n columns is called a k n matrix. The number

More information

Linear Algebraic Equations

Linear Algebraic Equations Linear Algebraic Equations 1 Fundamentals Consider the set of linear algebraic equations n a ij x i b i represented by Ax b j with [A b ] [A b] and (1a) r(a) rank of A (1b) Then Axb has a solution iff

More information

SIGNAL AND IMAGE RESTORATION: SOLVING

SIGNAL AND IMAGE RESTORATION: SOLVING 1 / 55 SIGNAL AND IMAGE RESTORATION: SOLVING ILL-POSED INVERSE PROBLEMS - ESTIMATING PARAMETERS Rosemary Renaut http://math.asu.edu/ rosie CORNELL MAY 10, 2013 2 / 55 Outline Background Parameter Estimation

More information

Regularization Parameter Estimation for Least Squares: A Newton method using the χ 2 -distribution

Regularization Parameter Estimation for Least Squares: A Newton method using the χ 2 -distribution Regularization Parameter Estimation for Least Squares: A Newton method using the χ 2 -distribution Rosemary Renaut, Jodi Mead Arizona State and Boise State September 2007 Renaut and Mead (ASU/Boise) Scalar

More information

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725 Numerical Linear Algebra Primer Ryan Tibshirani Convex Optimization 10-725/36-725 Last time: proximal gradient descent Consider the problem min g(x) + h(x) with g, h convex, g differentiable, and h simple

More information

Two-parameter regularization method for determining the heat source

Two-parameter regularization method for determining the heat source Global Journal of Pure and Applied Mathematics. ISSN 0973-1768 Volume 13, Number 8 (017), pp. 3937-3950 Research India Publications http://www.ripublication.com Two-parameter regularization method for

More information

Condition number and matrices

Condition number and matrices Condition number and matrices Felipe Bottega Diniz arxiv:70304547v [mathgm] 3 Mar 207 March 6, 207 Abstract It is well known the concept of the condition number κ(a) A A, where A is a n n real or complex

More information

Parameter estimation: A new approach to weighting a priori information

Parameter estimation: A new approach to weighting a priori information c de Gruyter 27 J. Inv. Ill-Posed Problems 5 (27), 2 DOI.55 / JIP.27.xxx Parameter estimation: A new approach to weighting a priori information J.L. Mead Communicated by Abstract. We propose a new approach

More information

Interval solutions for interval algebraic equations

Interval solutions for interval algebraic equations Mathematics and Computers in Simulation 66 (2004) 207 217 Interval solutions for interval algebraic equations B.T. Polyak, S.A. Nazin Institute of Control Sciences, Russian Academy of Sciences, 65 Profsoyuznaya

More information

Comparison of A-Posteriori Parameter Choice Rules for Linear Discrete Ill-Posed Problems

Comparison of A-Posteriori Parameter Choice Rules for Linear Discrete Ill-Posed Problems Comparison of A-Posteriori Parameter Choice Rules for Linear Discrete Ill-Posed Problems Alessandro Buccini a, Yonggi Park a, Lothar Reichel a a Department of Mathematical Sciences, Kent State University,

More information

Preconditioning. Noisy, Ill-Conditioned Linear Systems

Preconditioning. Noisy, Ill-Conditioned Linear Systems Preconditioning Noisy, Ill-Conditioned Linear Systems James G. Nagy Emory University Atlanta, GA Outline 1. The Basic Problem 2. Regularization / Iterative Methods 3. Preconditioning 4. Example: Image

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis. Volume 33, pp. 63-83, 2009. Copyright 2009,. ISSN 1068-9613. ETNA SIMPLE SQUARE SMOOTHING REGULARIZATION OPERATORS LOTHAR REICHEL AND QIANG YE Dedicated to

More information

Dedicated to Adhemar Bultheel on the occasion of his 60th birthday.

Dedicated to Adhemar Bultheel on the occasion of his 60th birthday. SUBSPACE-RESTRICTED SINGULAR VALUE DECOMPOSITIONS FOR LINEAR DISCRETE ILL-POSED PROBLEMS MICHIEL E. HOCHSTENBACH AND LOTHAR REICHEL Dedicated to Adhemar Bultheel on the occasion of his 60th birthday. Abstract.

More information