Preconditioning regularized least squares problems arising from high-resolution image reconstruction from low-resolution frames


Linear Algebra and its Applications 391 (2004) 149-168
www.elsevier.com/locate/laa

Preconditioning regularized least squares problems arising from high-resolution image reconstruction from low-resolution frames

Fu-Rong Lin (a,b,1), Wai-Ki Ching (b,2), Michael K. Ng (b,3)

a Department of Mathematics, Shantou University, Shantou, Guangdong 515063, PR China
b Department of Mathematics, The University of Hong Kong, Pokfulam Road, Hong Kong, PR China

Received April 2003; accepted 22 January 2004
Submitted by S. Van Huffel

Abstract

In this paper, we study the problem of reconstructing a high-resolution image from multiple undersampled, shifted, degraded frames with subpixel displacement errors from multisensors. Preconditioned conjugate gradient methods with cosine transform based preconditioners and incomplete factorization based preconditioners are applied to solve this image reconstruction problem. Numerical examples are given to demonstrate the efficiency of these preconditioners. We find that cosine transform based preconditioners are effective when the number of shifted low-resolution frames is large, but are less effective when the number is small. However, incomplete factorization based preconditioners work quite well independent of the number of shifted low-resolution frames.
© 2004 Elsevier Inc. All rights reserved.

Keywords: High-resolution; Image reconstruction; Regularization; Cosine transform preconditioner; Incomplete Cholesky factorization preconditioner

Corresponding author. E-mail addresses: frlin@stu.edu.cn (F.-R. Lin), mng@maths.hku.hk (M.K. Ng).
1 Supported in part by the Guangdong Provincial Natural Science Foundation of China.
2 Research supported in part by RGC Grant No. HKU 726/02P and HKU CRCG Grants.
3 Research supported in part by Hong Kong Research Grants Council Grant Nos. HKU 730/02P and 7046/03P, and HKU CRCG Grants.

1. Introduction

An image acquisition system composed of an array of sensors, where each sensor has a subarray of sensing elements of suitable size, has recently become popular for increasing the spatial resolution with high signal-to-noise ratio beyond the performance bound of technologies that constrain the manufacture of imaging devices. The attainment of superresolution from a sequence of degraded undersampled images can be viewed as the reconstruction of the high-resolution image from a finite set of its projections on a sampling lattice. This can then be formulated as a constrained optimization problem whose solution is obtained by minimizing a cost function [3,8,9,19,21,22].

The image acquisition scheme is important in the modeling of the degradation process. The need for model accuracy is undeniable in the attainment of superresolution, along with the design of an algorithm whose robust implementation will produce the desired quality in the presence of model parameter uncertainty. To keep the presentation focused and of reasonable size, data acquisition with multisensors instead of, say, a video camera is considered. Multiple undersampled images of a scene are often obtained by using multiple identical image sensors which are shifted relative to each other by subpixel displacements [1,6,10]. The resulting high-resolution image reconstruction problem, using a set of low-resolution images captured by the image sensors, is interesting because it is closely related to the design of high-definition television (HDTV) and very high-definition (VHD) image sensors. CCD image sensor arrays, where each sensor consists of a rectangular subarray of sensing elements, produce discrete images whose sampling rate and resolution are determined by the physical size of the sensing elements. If multiple CCD image sensor arrays are shifted relative to each other by subpixel values, the reconstruction of high-resolution images can be modeled as in [1]. Let g_i, i = 1, ..., m, be the low-resolution frames and z be the high-resolution image. We have

    H_i z = g_i + η_i,    i = 1, ..., m,    (1)

where η_i is the noise of g_i and H_i, i = 1, ..., m, are structured matrices which will be specified in Section 2. The high-resolution image reconstruction problem is to find z; it can be modeled as a minimization problem with regularization:

    min_z  Σ_{i=1}^m ||H_i z − g_i||_2^2 + α ||L z||_2^2,    (2)

where L is the discretization of the first order differential operator. Here ||L z||_2^2 is a functional which measures the regularity (the difference between pixel values) of z, and the regularization parameter α is used to control the degree of regularity of the solution. This regularization functional has been used in [1,6] for the reconstruction of high-resolution images.

The minimization problem (2) is equivalent to the linear system

    ( Σ_{i=1}^m H_i^t H_i + α L^t L ) z = Σ_{i=1}^m H_i^t g_i.

Ng et al. [16] used cosine transform based preconditioners to precondition the above linear system. When the number of shifted low-resolution images is equal to four (i.e., m = 4) in the 2-by-2 sensor setting and these four shifted low-resolution images are shifted relative to each other by the half-pixel value, they showed that the conjugate gradient method, when applied to solving the cosine preconditioned system, converges superlinearly. We note that under the noiseless condition, the four shifted low-resolution images are sufficient to reconstruct the high-resolution image perfectly. In [17], Ng and Sze further modified cosine transform based preconditioners to handle some special cases where the number of shifted low-resolution images is equal to two. Numerical results showed that the performance of these cosine transform based preconditioners is quite good for some special cases. However, the cosine transform based preconditioners do not work well in general. On the other hand, in the literature, there are no theoretical or experimental results for cosine transform preconditioners when the number of shifted low-resolution images is large. We note that the quality of the reconstructed image is better when there are more shifted low-resolution images (see the numerical results in Section 4).

There are two aims of this paper. The first aim is to extend cosine transform based preconditioners to the high-resolution image reconstruction problem when the number of shifted low-resolution images is large. The other is to propose and develop incomplete Cholesky factorization based preconditioners for the high-resolution image reconstruction problem. Incomplete Cholesky factorization based preconditioners are commonly employed to precondition linear systems arising from partial differential equations. In this paper, we consider this type of preconditioner for the blurring type problem. Our experimental results show that incomplete Cholesky factorization based preconditioners are quite efficient independent of the number of shifted low-resolution images. The cosine transform based preconditioners are effective when the number of shifted low-resolution images is large, but they do not work well when the number of shifted low-resolution images is small.

The outline of the paper is as follows. In Section 2, we briefly give a mathematical formulation of the problem. In Section 3, we study cosine transform based preconditioners and incomplete Cholesky factorization based preconditioners. Finally, numerical results and concluding remarks are given in Section 4.

2. The high-resolution image reconstruction model

In this section, we give a brief introduction to the mathematical model for high-resolution image reconstruction; see Bose and Boo [1] for details.

Suppose that we have m sensors, each sensor has N_1 × N_2 sensing elements (pixels), and the size of each sensing element is T_1 × T_2. We then have m images of resolution N_1 × N_2 (low-resolution images). Our aim is to reconstruct an image of resolution M_1 × M_2 (high-resolution image), where M_1 = L_1 N_1 and M_2 = L_2 N_2. In order to have enough information to resolve the high-resolution image, there are subpixel displacements between the sensors. More precisely, there exist integers u_i ∈ [0, L_1 − 1], v_i ∈ [0, L_2 − 1], and real numbers ε_i^x, ε_i^y ∈ (−1/2, 1/2), such that the horizontal and vertical displacements of the ith sensor are given by

    d_i^x = T_1 (u_i + ε_i^x) / L_1    and    d_i^y = T_2 (v_i + ε_i^y) / L_2.

Here (u_i, v_i) is the sensor position of the ith sensor, and ε_i^x and ε_i^y denote respectively the normalized horizontal and vertical displacement errors. We note that the parameters ε_i^x and ε_i^y can be obtained by manufacturers during camera calibration. Methods for estimating these displacement errors are discussed in [5,15].

Let f be the original scene. The observed low-resolution image g_i is modeled by

    g_i[n_1, n_2] = (1/(T_1 T_2)) ∫_{T_2(n_2 − 1/2) + d_i^y}^{T_2(n_2 + 1/2) + d_i^y} ∫_{T_1(n_1 − 1/2) + d_i^x}^{T_1(n_1 + 1/2) + d_i^x} f(x, y) dx dy + η_i[n_1, n_2]    (3)

for n_1 = 1, ..., N_1 and n_2 = 1, ..., N_2. Here η_i is the noise corresponding to the ith sensor. In Fig. 1, we show the generation of a low-resolution image pixel value from the high-resolution image pixel values.

[Fig. 1. The generation of the low-resolution image pixel for a 2-by-2 sensor array with the exact half-pixel displacement; the high-resolution pixels contribute with weights proportional to 1/4, 1/2, 1/4, 1/2, 1, 1/2, 1/4, 1/2, 1/4.]
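For the 2-by-2 sensor array with exact half-pixel displacement of Fig. 1, the averaging in (3) reduces to a weighted average of a 3 × 3 block of high-resolution pixels with the stencil (1/4)[1/4, 1/2, 1/4; 1/2, 1, 1/2; 1/4, 1/2, 1/4]. The sketch below is only an illustration of this discrete model and is not code from the paper; it assumes NumPy and SciPy are available and simulates one noisy low-resolution frame from a given high-resolution image.

```python
import numpy as np
from scipy.ndimage import convolve

def simulate_lr_frame(z, L1=2, L2=2, noise_std=0.0, seed=0):
    """Simulate one low-resolution frame from a high-resolution image z.

    Assumes a 2-by-2 sensor array with exact half-pixel displacement, so each
    low-resolution pixel averages high-resolution pixels with the stencil
    (1/4)*[[1/4,1/2,1/4],[1/2,1,1/2],[1/4,1/2,1/4]] (cf. Fig. 1).
    """
    z = np.asarray(z, dtype=float)
    stencil = 0.25 * np.array([[0.25, 0.5, 0.25],
                               [0.5,  1.0, 0.5],
                               [0.25, 0.5, 0.25]])
    # Blur with reflective handling of the boundary (Neumann-like assumption).
    blurred = convolve(z, stencil, mode='reflect')
    # Downsample by the factors L1 and L2 to obtain the N1-by-N2 frame; the
    # starting offsets of the slices would depend on the sensor position (u_i, v_i).
    g = blurred[::L1, ::L2]
    rng = np.random.default_rng(seed)
    return g + noise_std * rng.standard_normal(g.shape)

# Example: a 32 x 32 high-resolution image gives a 16 x 16 low-resolution frame.
z = np.random.default_rng(1).random((32, 32))
g = simulate_lr_frame(z, noise_std=0.01)
print(g.shape)  # (16, 16)
```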

Similarly, the high-resolution image z is modeled by

    z[n_1, n_2] = (L_1 L_2 / (T_1 T_2)) ∫_{(n_2 − 1/2) T_2/L_2}^{(n_2 + 1/2) T_2/L_2} ∫_{(n_1 − 1/2) T_1/L_1}^{(n_1 + 1/2) T_1/L_1} f(x, y) dx dy    (4)

for n_1 = 1, ..., M_1 and n_2 = 1, ..., M_2. Let g_i, η_i, and z also denote the corresponding vectors obtained by a column-by-column ordering of g_i, η_i, and z, respectively. Then we have

    g_i = H_i z + η_i,

where H_i is the blurring matrix corresponding to the ith sensor [1]. The stencil for an L_1-by-L_2 sensor array is given by

    (1/(L_1 L_2)) [1/2 + ε^x, 1, ..., 1, 1/2 − ε^x]^t [1/2 + ε^y, 1, ..., 1, 1/2 − ε^y].

For instance, the stencil for a 2-by-2 sensor array is given by

    (1/4) [ (1/2 + ε^x)(1/2 + ε^y)   (1/2 + ε^x)   (1/2 + ε^x)(1/2 − ε^y)
            (1/2 + ε^y)              1             (1/2 − ε^y)
            (1/2 − ε^x)(1/2 + ε^y)   (1/2 − ε^x)   (1/2 − ε^x)(1/2 − ε^y) ],

see also Fig. 1. Now we can state the reconstruction problem as follows: find z minimizing

    Σ_{i=1}^m ||g_i − H_i z||_2^2.    (5)

Since the minimization problem (5) is ill-conditioned or even singular in general, and there is noise in the low-resolution images, the classical Tikhonov regularization is used. More precisely, we solve the problem

    min_{z ∈ R^{M_1 M_2}}  Σ_{i=1}^m ||g_i − H_i z||_2^2 + α ||L z||_2^2,    (6)

where the regularization parameter α is a small positive number controlling the degree of regularity of the solution, and L is the discretization of the first order differential operator, i.e.,

    L^t L = I_{M_2} ⊗ T_{M_1} + T_{M_2} ⊗ I_{M_1},

where T_n denotes the n × n tridiagonal matrix tridiag(−1, 2, −1).
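The matrix L^t L has a Kronecker (discrete Laplacian) structure, so it is cheap to form and to apply. The following sketch is an illustration only, assuming scipy.sparse; it builds L^t L as a sparse matrix for given M_1 and M_2 under the column-by-column ordering used above.

```python
import numpy as np
import scipy.sparse as sp

def laplacian_regularizer(M1, M2):
    """Build L^t L = I_{M2} (x) T_{M1} + T_{M2} (x) I_{M1}, with
    T_n = tridiag(-1, 2, -1), as a sparse matrix.  A column-by-column
    ordering of the M1-by-M2 image is assumed."""
    def T(n):
        return sp.diags([-np.ones(n - 1), 2 * np.ones(n), -np.ones(n - 1)],
                        offsets=[-1, 0, 1], format='csr')
    I1 = sp.identity(M1, format='csr')
    I2 = sp.identity(M2, format='csr')
    return sp.kron(I2, T(M1)) + sp.kron(T(M2), I1)

LtL = laplacian_regularizer(32, 32)
print(LtL.shape, LtL.nnz)   # (1024, 1024), about five nonzeros per row
```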

Let the positions of all the sensors be denoted by

    S = [(u_1, v_1), (u_2, v_2), ..., (u_m, v_m)].

Let #(u, v) denote the number of sensors with position (u, v) in S, for integers u ∈ [0, L_1 − 1] and v ∈ [0, L_2 − 1]. Define

    M(S) = max_{0 ≤ u ≤ L_1 − 1, 0 ≤ v ≤ L_2 − 1} #(u, v)    and    m(S) = min_{0 ≤ u ≤ L_1 − 1, 0 ≤ v ≤ L_2 − 1} #(u, v).    (7)

We note that for the image reconstruction problem in [1,16], M(S) = m(S) = 1, and for the image reconstruction problem in [17], M(S) = 1 and m(S) = 0. The proposed model (6) can also handle the cases where M(S) ≥ 2.

2.1. Image boundary

Because of the blurring process (cf. (3)), the boundary values of g_i are also affected by the values of f outside the scene. Thus, in solving for z from (6), we need some assumptions on the values of f outside the scene. Bose and Boo [1] imposed the zero boundary condition outside the scene, i.e., they assumed a dark background outside the scene in the image reconstruction. Ringing effects will occur at the boundary of the reconstructed images if f is in fact not zero close to the boundary. The problem is more severe if the image is reconstructed from a large sensor array, since the number of pixel values of the image affected by the sensor array increases; see [16].

Let d_{u,l} be the l × 1 vector whose entries are all zero except that its (u + 1)th entry is equal to 1 (for instance, d_{1,4} = (0, 1, 0, 0)^t). Under the zero boundary condition, the blurring matrix corresponding to the ith sensor can be written as

    H_i = H_i^y ⊗ H_i^x,    (8)

where H_i^x = (I_{M_1/L_1} ⊗ d_{u_i,L_1}^t) H^x(ε_i^x) and H_i^y = (I_{M_2/L_2} ⊗ d_{v_i,L_2}^t) H^y(ε_i^y). Here H^x(ε_i^x) is the M_1 × M_1 banded Toeplitz matrix, with bandwidth L_1 + 1, whose rows contain the one-dimensional stencil

    (1/L_1) ( h_i^{x+}, 1, ..., 1, h_i^{x−} ),    with h_i^{x±} = 1/2 ± ε_i^x,

and the M_2 × M_2 banded blurring matrix H^y(ε_i^y) is defined similarly.

We recall that a matrix T = [t_{ij}]_{i,j=1}^n is called a Toeplitz matrix if t_{ij} = t_{i−j} for i, j = 1, ..., n. In many applications, Toeplitz matrices are generated by a function

    p(θ) = Σ_{k=−∞}^{∞} t_k e^{ikθ},

which is called a generating function. For the above Toeplitz matrix, the generating function is given by

    p(θ) = (1/L_1) ( Σ_{k=−(L_1−1)}^{L_1−1} e^{ikθ} + h_i^{x+} e^{iL_1θ} + h_i^{x−} e^{−iL_1θ} ).    (9)

Ng et al. [16] have considered using the Neumann boundary condition on the image. It assumes that the scene immediately outside the boundary is a reflection of the original scene at the boundary. Numerical results have shown that the Neumann boundary condition gives better reconstructed high-resolution images than the zero or periodic boundary conditions. Under the Neumann boundary condition, the blurring matrices are given by

    H_i = [ (I_{M_2/L_2} ⊗ d_{v_i,L_2}^t) H^y(ε_i^y) ] ⊗ [ (I_{M_1/L_1} ⊗ d_{u_i,L_1}^t) H^x(ε_i^x) ],

which is similar to (8). Here H^x(ε_i^x) and H^y(ε_i^y) are M_1 × M_1 and M_2 × M_2 Toeplitz-plus-Hankel matrices, respectively: H^x(ε_i^x) is the sum of the banded Toeplitz matrix above and a Hankel matrix, nonzero only near its upper-left and lower-right corners, which folds the stencil weights falling outside the image back onto the boundary pixels (10); H^y(ε_i^y) is obtained analogously. We recall that a matrix H = [h_{ij}]_{i,j=1}^n is called a Hankel matrix if h_{ij} = h_{i+j} for i, j = 1, ..., n. For the sake of simplicity, we define

    H_i^x = (I_{M_1/L_1} ⊗ d_{u_i,L_1}^t) H^x(ε_i^x)    and    H_i^y = (I_{M_2/L_2} ⊗ d_{v_i,L_2}^t) H^y(ε_i^y).

Let E = [(ε_1^x, ε_1^y), ..., (ε_m^x, ε_m^y)], and

    A(S, E, α) = Σ_{i=1}^m [ (H_i^y)^t H_i^y ] ⊗ [ (H_i^x)^t H_i^x ] + α L^t L    (11)

and

    g = Σ_{i=1}^m ( H_i^y ⊗ H_i^x )^t g_i.

We see that when the Neumann boundary condition is applied, the optimization problem (6) is equivalent to

    A(S, E, α) z = g.    (12)

In the next section, we consider solving (12) by preconditioned conjugate gradient (PCG) methods.

We remark that Ng and Sze [17] have considered the image reconstruction problem where M(S) = 1 and m(S) = 0. For comparison, we briefly introduce the mathematical model proposed in [17]. Let D_{u,l} be the l × l diagonal matrix with all zero diagonal entries except that the (u + 1)th diagonal entry is equal to 1, and let

    D_{u,v} = (I_{M_2/L_2} ⊗ D_{v,L_2}) ⊗ (I_{M_1/L_1} ⊗ D_{u,L_1}).

Then the blurring matrix corresponding to the ith sensor under the Neumann boundary condition is given by

    H(ε_i^x, ε_i^y) = D_{u_i,v_i} ( H^y(ε_i^y) ⊗ H^x(ε_i^x) ).

We note that the idea is to intersperse the low-resolution image g_i to form an M_1 × M_2 image g̃_i: assign g_i[n_1, n_2] to g̃_i[L_1(n_1 − 1) + u_i + 1, L_2(n_2 − 1) + v_i + 1] and zero to the other positions of g̃_i. Thus g̃_i = H(ε_i^x, ε_i^y) z + η_i. The blurring matrix for the whole set of sensors is made up of the blurring matrices from each sensor:

    H(S, E) = Σ_{i=1}^m H(ε_i^x, ε_i^y).    (13)

With the Tikhonov regularization, the problem becomes

    ( H(S, E)^t H(S, E) + α L^t L ) z = H(S, E)^t g̃,    (14)

where g̃ = Σ_{i=1}^m g̃_i is called the observed image. It is not difficult to check that the systems (12) and (14) are equivalent if M(S) = 1. Thus, our model is an extension of the model in [17]. For the cases where M(S) = 1, the observed image is given by

    g̃[L_1(n_1 − 1) + u_i + 1, L_2(n_2 − 1) + v_i + 1] = g_i[n_1, n_2]    for i = 1, ..., m,
    g̃[L_1(n_1 − 1) + u + 1, L_2(n_2 − 1) + v + 1] = 0    for #(u, v) = 0.    (15)

If M(S) ≥ 2, that is, there exist integers u, v such that #(u, v) ≥ 2, then g̃[L_1(n_1 − 1) + u + 1, L_2(n_2 − 1) + v + 1] is set to the average of the values of the [n_1, n_2]th pixel of the low-resolution images with sensor position (u, v).
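As a concrete illustration of the structure of (11) and (12), the sketch below assembles A(S, E, α) under the Neumann boundary condition for a 2-by-2 sensor array. It is not the authors' code; it assumes NumPy/SciPy, the one-dimensional kernel (1/2)(1/2 + ε, 1, 1/2 − ε) with reflective boundary handling, the helper laplacian_regularizer from the earlier sketch, and made-up displacement errors.

```python
import numpy as np
import scipy.sparse as sp

def blur1d_neumann(M, eps):
    """1D blurring matrix of size M x M for a 2-by-2 sensor array: each row
    holds the kernel (1/2)*(1/2+eps, 1, 1/2-eps); weights falling outside the
    image are reflected back onto the boundary (Neumann boundary condition)."""
    hp, hm = 0.5 + eps, 0.5 - eps
    H = sp.diags([0.5 * hp * np.ones(M - 1), 0.5 * np.ones(M), 0.5 * hm * np.ones(M - 1)],
                 offsets=[-1, 0, 1], format='lil')
    H[0, 0] += 0.5 * hp          # reflect the weight cut off at the top boundary
    H[M - 1, M - 1] += 0.5 * hm  # reflect the weight cut off at the bottom boundary
    return H.tocsr()

def downsample(M, u, L=2):
    """Row selection (I_{M/L} (x) d_{u,L}^t): keep rows u, u+L, u+2L, ..."""
    return sp.identity(M, format='csr')[u::L, :]

def assemble_A(S, E, alpha, M1, M2):
    """A(S,E,alpha) = sum_i [(H_i^y)^t H_i^y] (x) [(H_i^x)^t H_i^x] + alpha L^t L
    for a 2-by-2 sensor array (L1 = L2 = 2)."""
    A = alpha * laplacian_regularizer(M1, M2)   # helper from the earlier sketch
    for (u, v), (ex, ey) in zip(S, E):
        Hx = downsample(M1, u) @ blur1d_neumann(M1, ex)
        Hy = downsample(M2, v) @ blur1d_neumann(M2, ey)
        A = A + sp.kron(Hy.T @ Hy, Hx.T @ Hx)
    return A.tocsr()

S = [(0, 0), (0, 1), (1, 0), (1, 1)]            # all four positions of the 2-by-2 array
E = [(0.05, -0.02), (0.1, 0.0), (-0.03, 0.08), (0.02, 0.02)]   # illustrative errors
A = assemble_A(S, E, alpha=1e-3, M1=32, M2=32)
print(A.shape)                                   # (1024, 1024)
```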

Fig. 2(a) shows the method of forming a 4 × 4 image g̃ with a 2 × 2 sensor array, each sensor having 2 × 2 sensing elements, i.e., L_1 = 2, L_2 = 2, M_1 = M_2 = 4, N_1 = N_2 = 2, and T_1 = T_2 = 2. This is the case for high-resolution image reconstruction with S = [(0, 0), (0, 1), (1, 0), (1, 1)]. Fig. 2(b) shows a 4 × 2 image g̃ with 2 sensors, each having 2 × 2 sensing elements. The sensor positions are (0, 0) and (1, 0) respectively, i.e., S = [(0, 0), (1, 0)]. This case corresponds to a sensor taking the same scene as the reference sensor but slightly displaced in the horizontal direction. In Fig. 2(c), we consider the case where S = [(0, 0), (0, 1)]. This case corresponds to a sensor which is slightly displaced in the vertical direction with respect to the sensor at position (0, 0). In Fig. 2(d), we consider the case of two sensors where the sensor at position (1, 1) is slightly displaced in the diagonal direction with respect to the sensor at position (0, 0). In this case, we have S = [(0, 0), (1, 1)].

3. The construction of preconditioners

In this section, we discuss the construction of preconditioners for the linear system (12). We consider cosine transform based preconditioners and incomplete Cholesky factorization based preconditioners.

3.1. Cosine transform based preconditioners

Let C_n be the n × n discrete cosine transform matrix, i.e., the (i, j)th entry of C_n is given by

    sqrt( (2 − δ_{i1}) / n ) cos( (i − 1)(2j − 1)π / (2n) ),    1 ≤ i, j ≤ n,

where δ_{ij} is the Kronecker delta. Note that the matrix-vector product C_n x can be computed in O(n log n) operations by using the fast cosine transform; see Sorensen and Burrus [20, p. 557]. For an n × n matrix B, the cosine transform preconditioner c(B) of B is defined to be the matrix C_n^t Λ C_n that minimizes ||C_n^t Λ C_n − B||_F over all diagonal matrices Λ, where ||·||_F is the Frobenius norm [2]. Clearly, the cost of computing c(B)^{-1} y for any vector y is O(n log n) operations. For banded matrices, like the matrices defined in (11) and (13), the cost of constructing c(·) is O(n) [2], where n = M_1 M_2.

In this paper, we propose using c(A(S, E, α)) as the preconditioner for (12). We note that when M(S) = 1 the preconditioner can be written as c(H(S, E)^t H(S, E)) + α L^t L. Obviously, this preconditioner is different from the one proposed in [17],

    c(H(S, E))^t c(H(S, E)) + α L^t L.

The following theorems imply that our new preconditioner can be more efficient than the preconditioner proposed in [17].

[Fig. 2. Observed images: (a) high-resolution image reconstruction; (b) high-resolution image reconstruction (horizontal displacement of the sensor); (c) high-resolution image reconstruction (vertical displacement of the sensor); (d) high-resolution image reconstruction (diagonal displacement of the sensor).]
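Before stating these results, here is a small illustration (not from the paper) of how c(B) can be formed and applied. It assumes SciPy's orthonormal DCT-II; for simplicity the eigenvalues of c(B) are computed by the dense formula diag(C_n B C_n^t), whereas for the banded matrices in (11) this diagonal can be obtained in O(n) operations as noted above.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_matrix(n):
    """Orthonormal DCT-II matrix C_n, whose (i, j) entry is
    sqrt((2 - delta_{i1})/n) * cos((i-1)(2j-1)pi/(2n))."""
    return dct(np.eye(n), type=2, norm='ortho', axis=0)

def cosine_preconditioner_eigs(B):
    """Eigenvalues of c(B): the diagonal of C_n B C_n^t, which minimizes
    ||C_n^t diag(.) C_n - B||_F (dense O(n^2) version, for illustration)."""
    C = dct_matrix(B.shape[0])
    return np.diag(C @ B @ C.T)

def apply_cB_inverse(lam, y):
    """Solve c(B) x = y with two fast cosine transforms:
    x = C_n^t diag(lam)^{-1} C_n y."""
    return idct(dct(y, type=2, norm='ortho') / lam, type=2, norm='ortho')

# Tiny check: for a matrix already diagonalized by the DCT, c(B) = B.
n = 8
C = dct_matrix(n)
B = C.T @ np.diag(np.arange(1, n + 1.0)) @ C
lam = cosine_preconditioner_eigs(B)
x = apply_cB_inverse(lam, np.ones(n))
print(np.allclose(B @ x, np.ones(n)))   # True
```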

Theorem 1. Let H_ε be an n × n Toeplitz-plus-Hankel blurring matrix of the form (10), with scaling factor 1/L and with h^± = 1/2 ± ε. Then we have

    H_ε^t H_ε = A_1 + A_2,

where A_1 is the Toeplitz-plus-Hankel matrix given by the sum of the symmetric Toeplitz matrix with first column w = (w_1, ..., w_n)^t and the Hankel matrix with first column (w_2, w_3, ..., w_n, 0)^t and last column (0, w_n, ..., w_3, w_2)^t,    (16)

and where A_2 is a rank-two matrix whose nonzero entries lie only in its leading (L − 1) × (L − 1) and trailing (L − 1) × (L − 1) principal submatrices. Here w_i, i = 1, ..., n, are the entries of

    w = ( 2L − 5/2 + 2ε², 2L − 3, 2L − 4, ..., 1, 1/4 − ε², 0, ..., 0 )^t / L²,

and 1_m and O_m denote the m × m matrices with all entries equal to 1 and 0, respectively.

Proof. We first note that H_ε^t H_ε can be written as the sum of a banded Toeplitz matrix and a sparse matrix,

    H_ε^t H_ε = (T + E) / L².

Let l = L − 1 and note that h^+ + h^− = 1. The Toeplitz matrix T is generated by (cf. (9))

    ( Σ_{k=−(l−1)}^{l−1} e^{ikθ} + h^+ e^{ilθ} + h^− e^{−ilθ} ) ( Σ_{k=−(l−1)}^{l−1} e^{ikθ} + h^− e^{ilθ} + h^+ e^{−ilθ} )

    = ( Σ_{k=−(l−1)}^{l−1} e^{ikθ} )² + ( e^{ilθ} + e^{−ilθ} ) Σ_{k=−(l−1)}^{l−1} e^{ikθ} + ( h^+ e^{ilθ} + h^− e^{−ilθ} )( h^− e^{ilθ} + h^+ e^{−ilθ} )

    = Σ_{k=−2(l−1)}^{2(l−1)} (2l − 1 − |k|) e^{ikθ} + Σ_{k=1}^{2l−1} ( e^{ikθ} + e^{−ikθ} ) + ( (h^+)² + (h^−)² ) + h^+ h^− ( e^{2ilθ} + e^{−2ilθ} )

    = 2l − 1/2 + 2ε² + 2 Σ_{k=1}^{2l−1} (2l − k) cos(kθ) + 2 (1/4 − ε²) cos(2lθ).

In other words, T is the symmetric Toeplitz matrix whose first column is L² w. The matrix E is a sparse matrix whose only nonzero entries lie in the upper-left block E(1 : 2L − 2, 1 : 2L − 2) and the lower-right block E(n − 2L + 3 : n, n − 2L + 3 : n). It can be verified that each of these blocks is the sum of a Hankel part built from the entries w_2, ..., w_{2L−2}, which together with T yields the Toeplitz-plus-Hankel matrix A_1, and a remainder of rank one supported on an (L − 1) × (L − 1) corner block, which yields A_2. Thus, the result of the theorem follows.

Using the results and algorithms of [7], one can check that the matrix A_1 can be diagonalized by the cosine transform matrix, i.e., c(A_1) = A_1, and that the cosine transform preconditioner of A_2 is the zero matrix, i.e., c(A_2) = O_n. Hence

    H_ε^t H_ε − c(H_ε^t H_ε) = A_2

is a rank-two matrix, i.e., the spectrum of H_ε^t H_ε − c(H_ε^t H_ε) is clustered around 0. Furthermore, when ε is small, the matrix A_2 is also of small norm. On the other hand, it is easy to check that c(H_ε) = H_0, and it follows that the spectrum of

    H_ε^t H_ε − c(H_ε)^t c(H_ε) = H_ε^t H_ε − H_0^t H_0

is not clustered around 0 if ε is not small enough. Therefore, as a preconditioner for H_ε^t H_ε, c(H_ε^t H_ε) is better than c(H_ε)^t c(H_ε).

Based on the above discussion, we can easily prove the following theorem for the two-dimensional case (the case of block Toeplitz-plus-Hankel matrices with Toeplitz-plus-Hankel blocks), which states that when M(S) = m(S) = 1 and all subpixel displacement errors are the same, c(A(S, E, α)) − A(S, E, α) is of low rank with respect to the matrix size of A(S, E, α).

Theorem 2. Let H_ε and H_δ be two Toeplitz-plus-Hankel matrices of order M_1 and M_2, respectively (cf. (16)). We have

    ( H_ε^t H_ε ) ⊗ ( H_δ^t H_δ ) − c( H_ε^t H_ε ) ⊗ c( H_δ^t H_δ ) = A_{2M_1+2M_2+4},

where A_{2M_1+2M_2+4} is an (M_1 M_2) × (M_1 M_2) matrix of rank at most 2M_1 + 2M_2 + 4. Furthermore, if both ε and δ are small enough, then A_{2M_1+2M_2+4} is also of small norm.

According to Theorem 2, we expect the proposed cosine transform preconditioner to work well for the block Toeplitz-plus-Hankel with Toeplitz-plus-Hankel block systems arising from the high-resolution image reconstruction problem when M(S) = m(S) = 1.

3.2. Incomplete factorization based preconditioners

Besides the cosine transform based preconditioner, we study incomplete factorization based preconditioners for the high-resolution image reconstruction problem. Given a symmetric matrix A and a symmetric sparsity pattern S, an incomplete Cholesky factor of A is a lower triangular matrix Q such that

    A = Q Q^t + V,    q_{ij} = 0 if (i, j) ∉ S,    v_{ij} = 0 if (i, j) ∈ S.

Meijerink and van der Vorst [14] considered two choices of S: the standard setting in which S is the sparsity pattern of A, and a setting that allows more fill. Many variations are possible. In [14], it is proved that if A is an M-matrix, then the incomplete Cholesky factorization exists for any predetermined sparsity pattern S. Manteuffel [13] extended this result to H-matrices with positive diagonal elements. We note that an n × n matrix A is called an M-matrix if A is invertible and all entries of the inverse of A are non-negative. A matrix A is called an H-matrix if the associated matrix M(A), defined by

    [M(A)]_{ij} = |[A]_{ij}| if i = j,    [M(A)]_{ij} = −|[A]_{ij}| if i ≠ j,

is an M-matrix.

We are interested in the incomplete factorization with S being the sparsity pattern of A. This factorization fails if a negative diagonal element is encountered. One can increase any non-positive pivot to a positive threshold during the factorization process.

However, this may result in a very poor preconditioner; see, for instance, the example in Section 3 of [11]. It is important to modify the diagonal elements before a negative pivot is encountered. There are several modifications of the incomplete Cholesky factorization of the form A + E, where E is a diagonal matrix; see, for instance, [4,18]. In this paper, we apply the shifted incomplete Cholesky factorization of Manteuffel [12,13] to the scaled matrix

    Â = D^{−1/2} A D^{−1/2},    D = diag([A]_{11}, ..., [A]_{nn}).

The idea is to apply the incomplete Cholesky factorization to the matrix Â + βI, where I is the identity matrix and β ≥ 0. It is obvious that for sufficiently large β, Â + βI is an H-matrix and therefore the incomplete factorization exists. However, a large β results in a poor preconditioner. The minimal admissible value of β is an interesting issue for further research. In summary, we have the following algorithm:

Shifted incomplete Cholesky factorization of Manteuffel
  Choose β_S > 0. Compute Â = D^{−1/2} A D^{−1/2}, where D = diag([A]_{11}, ..., [A]_{nn}). Set β_0 = 0.
  For k = 0, 1, 2, ...
    Compute the incomplete Cholesky factorization of Â_k = Â + β_k I. If successful, set β = β_k and exit.
    Set β_{k+1} = max(2β_k, β_S).
  End

The main features of the shifted incomplete Cholesky factorization are that the memory requirement is predictable (limited memory) and that the computational cost is not more than (1 + log_2(max(2β, β_S)/β_S)) times the cost of one incomplete Cholesky factorization with the sparsity pattern of A. Since the coefficient matrix A(S, E, α) (cf. (11)) is a block band matrix with band blocks, where the bandwidths are not more than 4L_1 − 3 and 4L_2 − 3 respectively, the cost of one such factorization is O(L_1² L_2² M_1 M_2). Numerical results in Section 4 show that the shifted incomplete Cholesky factorization performs well (see Table 3) and that the value of 1 + log_2(max(2β, β_S)/β_S) is small for the high-resolution image reconstruction problem (see Table 4).
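The shift loop above is easy to prototype. The sketch below is an illustration only and is not the authors' implementation: it uses a simple zero-fill incomplete Cholesky restricted to the sparsity pattern of A as a stand-in, applies Manteuffel's doubling strategy for the shift, and passes the resulting preconditioner to SciPy's conjugate gradient solver. The matrix A is assumed to be the sparse symmetric positive definite matrix A(S, E, α), e.g., as assembled in the sketch of Section 2; the default value of beta_s is an arbitrary choice for illustration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator, spsolve_triangular

def ichol0(A):
    """Zero-fill incomplete Cholesky: returns a lower-triangular factor Q with
    the sparsity pattern of tril(A) such that A is approximately Q Q^t.
    Raises ValueError if a non-positive pivot is encountered (breakdown)."""
    C = sp.tril(A).tocoo()
    n = A.shape[0]
    val = {(int(i), int(j)): float(v) for i, j, v in zip(C.row, C.col, C.data)}
    cols = [[] for _ in range(n)]            # strictly-lower pattern, by column
    for (i, j) in val:
        if i > j:
            cols[j].append(i)
    for k in range(n):
        piv = val[(k, k)]
        if piv <= 0.0:
            raise ValueError("non-positive pivot")
        piv = np.sqrt(piv)
        val[(k, k)] = piv
        for i in cols[k]:
            val[(i, k)] /= piv
        for j in cols[k]:                    # update restricted to the pattern
            ljk = val[(j, k)]
            for i in cols[k]:
                if i >= j and (i, j) in val:
                    val[(i, j)] -= ljk * val[(i, k)]
    items = list(val.items())
    rows = [i for (i, _), _ in items]
    cls = [j for (_, j), _ in items]
    data = [v for _, v in items]
    return sp.csr_matrix((data, (rows, cls)), shape=(n, n))

def shifted_ichol(A, beta_s=0.01, max_tries=30):
    """Manteuffel's shift loop on the scaled matrix A_hat = D^{-1/2} A D^{-1/2}:
    try beta = 0, then beta_s, 2*beta_s, ... until ichol0 succeeds."""
    d = 1.0 / np.sqrt(A.diagonal())
    Dinv = sp.diags(d)
    A_hat = (Dinv @ A @ Dinv).tocsr()
    n = A.shape[0]
    beta = 0.0
    for _ in range(max_tries):
        try:
            return ichol0(A_hat + beta * sp.identity(n, format='csr')), Dinv, beta
        except ValueError:
            beta = max(2.0 * beta, beta_s)
    raise RuntimeError("shifted incomplete Cholesky did not succeed")

# A: the coefficient matrix A(S, E, alpha), e.g. from the assembly sketch above.
Q, Dinv, beta = shifted_ichol(A)
b = np.ones(A.shape[0])                      # stand-in right-hand side for (12)

def apply_preconditioner(r):
    # (D^{1/2} Q Q^t D^{1/2})^{-1} r via diagonal scaling and two triangular solves.
    y = spsolve_triangular(Q, Dinv @ r, lower=True)
    return Dinv @ spsolve_triangular(Q.T.tocsr(), y, lower=False)

M = LinearOperator(A.shape, matvec=apply_preconditioner)
z, info = cg(A, b, M=M)                      # PCG; cf. the 1e-6 criterion of Section 4
```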

3.3. Comparison of the preconditioners

In this subsection, we compare the condition numbers and the spectra of the preconditioned matrices for the cosine transform preconditioner and for the incomplete Cholesky factorization preconditioner. We have randomly tested a number of different displacement errors (ε_i^x and ε_i^y are chosen randomly between 0 and 0.1); the results are similar. In Table 1, we show the condition numbers for the following situations of the 2 × 2 sensor array:

(i) S_1 = [(0, 0), (0, 1)],
(ii) S_2 = [(0, 0), (1, 1)],
(iii) S_3 = [(0, 0), (0, 1), (1, 1)],
(iv) S_4 = [(0, 0), (0, 1), (1, 0), (1, 1)],
(v) S_5 = [(0, 0), (0, 1), (1, 0), (1, 1), (0, 0)],
(vi) S_6 = [(0, 0), (0, 1), (1, 0), (1, 1), (0, 0), (0, 1)],
(vii) S_7 = [(0, 0), (0, 1), (1, 0), (1, 1), (0, 0), (1, 1)],
(viii) S_8 = [(0, 0), (0, 1), (1, 0), (1, 1), (0, 0), (0, 1), (1, 1)],
(ix) S_9 = [(0, 0), (0, 1), (1, 0), (1, 1), (0, 0), (0, 1), (1, 0), (1, 1)].

For each situation, we show one numerical result. In our tests, we set M_1 = M_2 = 32. In Table 1, κ_1, κ_2, and κ_3 denote the condition numbers of the coefficient matrix A(S, E, α), of c(A(S, E, α))^{−1} A(S, E, α), and of (Q Q^t)^{−1} A(S, E, α), respectively, where Q is the factor of the shifted incomplete Cholesky factorization of A(S, E, α). In the shifted incomplete Cholesky factorization, we set β_S = 0.01. We observe that the preconditioned matrices with incomplete Cholesky factorization preconditioners are quite well-conditioned for all situations. As for the cosine transform preconditioners, the preconditioned matrices are very well-conditioned for the cases S_i with m(S_i) ≥ 1, i.e., S_i for i = 4, ..., 9, while they are quite ill-conditioned for the cases S_i with m(S_i) = 0, i.e., S_i for i = 1, 2, 3. We show in Fig. 3 the spectra of the preconditioned matrices. It is easy to see that the spectra of the preconditioned matrices are not clustered around one. Therefore, the improvement in the convergence of the PCG method lies in the improvement of the condition numbers of the preconditioned matrices.

[Table 1. Condition numbers κ_1, κ_2, κ_3 for the different S with M_1 = M_2 = 32, at two fixed values of the regularization parameter α.]

[Fig. 3. Spectra of the preconditioned matrices with cosine transform preconditioners (left) and incomplete Cholesky factorization preconditioners (right).]

4. Numerical results and concluding remarks

In this section, we compare the performance of cosine transform based preconditioners and incomplete Cholesky factorization based preconditioners. Nine situations of the 2-by-2 sensor array are considered: S_i, i = 1, ..., 9 (cf. Section 3.3). In the tests, the parameters ε_i^x and ε_i^y are chosen randomly between 0 and 0.1. Gaussian white noise with signal-to-noise ratios of 50 and 30 dB is added to the low-resolution images. The optimal regularization parameter α is chosen such that it minimizes the relative error of the reconstructed image z_r(α) with respect to the original image z, i.e., it minimizes

    ||z − z_r(α)||_2 / ||z||_2.    (17)

In the conjugate gradient methods, we use the zero vector as the initial guess and the stopping criterion is

    ||r^{(j)}||_2 / ||r^{(0)}||_2 < 10^{−6},

where r^{(j)} is the normal equations residual after j iterations. The data in Tables 2-4 are averages over 20 randomly generated problems.

The original image is shown in Fig. 4(a). One of the low-resolution images is shown in Fig. 4(b) (50 dB case). The observed noisy images and the reconstructed images for S_i, i = 1, ..., 9, are also shown in Fig. 4 (50 dB case). We can see that all reconstructed images are much better than the observed images. Table 2 shows the optimal regularization parameters and the corresponding relative errors for the nine situations. We can clearly see that the relative error becomes smaller when the number of low-resolution images increases. Furthermore, the optimal regularization parameter is proportional to the number of sensor positions covered by low-resolution images.
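The optimal α in Table 2 can be found by a simple search over the criterion (17). The following sketch is an illustration only; the grid of candidate values and the helper names are assumptions, not taken from the paper. It evaluates (17) on a grid of α values using a sparse direct solve of the normal equations of (6).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def optimal_alpha(AtA, Htg, LtL, z_true, alphas):
    """Return the alpha on the given grid minimizing the relative error
    ||z - z_r(alpha)||_2 / ||z||_2 of (17), where z_r(alpha) solves
    (AtA + alpha * LtL) z = Htg, the normal equations of (6)."""
    best_alpha, best_err = None, np.inf
    for a in alphas:
        z_r = spsolve((AtA + a * LtL).tocsc(), Htg)
        err = np.linalg.norm(z_true - z_r) / np.linalg.norm(z_true)
        if err < best_err:
            best_alpha, best_err = a, err
    return best_alpha, best_err

# Hypothetical usage: AtA = sum_i H_i^t H_i and Htg = sum_i H_i^t g_i would be
# built from the blurring matrices of Section 2, and z_true is the original image.
# alphas = np.logspace(-5, -1, 20)
# alpha_opt, err = optimal_alpha(AtA, Htg, LtL, z_true, alphas)
```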

Table 3 shows the performance of the cosine transform based preconditioners and of the incomplete Cholesky factorization based preconditioners proposed in Section 3.

[Table 2. Regularization parameters and relative errors: the optimal α and the corresponding relative error for S_1, ..., S_9 at SNR = 50 dB and SNR = 30 dB.]

[Table 3. Number of iterations required for convergence for S_1, ..., S_9: columns N, C, and IC at SNR = 50 dB and SNR = 30 dB.]

[Table 4. Values of 1 + log_2(max(2β, β_S)/β_S) for S_1, ..., S_9 at a fixed regularization parameter α.]

In Table 3, the symbols N, C, and IC denote the PCG method without preconditioner, with cosine transform based preconditioners, and with incomplete Cholesky factorization based preconditioners, respectively. We see from Table 3 that the incomplete Cholesky factorization based preconditioner is quite efficient for all sensor arrays. The cosine transform preconditioner is more efficient than the incomplete Cholesky factorization preconditioner when m(S_i) = 1, while it is not efficient when m(S_i) = 0 (S_1, S_2, and S_3). This observation is consistent with the numerical results in Table 1. We remark that in Table 1 the value of the regularization parameter is fixed at 10^{−3} or 5 × 10^{−3}, while in Table 3 the optimal regularization parameter

based on (17) is used for each S_i. We note that for S_4-S_9 the regularization parameter α is larger than that for S_1-S_3.

[Fig. 4. Images: (a) original; (b) low-resolution; (c) observation for S_1; (d) restoration for S_1; (e) observation for S_2; (f) restoration for S_2; (g) observation for S_3; (h) restoration for S_3; (i) observation for S_4; (j) restoration for S_4; (k) observation for S_5; (l) restoration for S_5; (m) observation for S_6; (n) restoration for S_6; (o) observation for S_7; (p) restoration for S_7; (q) observation for S_8; (r) restoration for S_8; (s) observation for S_9; (t) restoration for S_9.]

Finally, in Table 4 we show the values of 1 + log_2(max(2β, β_S)/β_S) for different sensor arrays at a fixed regularization parameter α. We see from the

table that the values of 1 + log_2(max(2β, β_S)/β_S) are small for all the problems we tested. It follows that the total cost of the shifted incomplete Cholesky factorization, which is given by (1 + log_2(max(2β, β_S)/β_S)) · O(L_1² L_2² M_1 M_2), is well bounded by O(L_1² L_2² M_1 M_2).

In this paper we have studied the problem of reconstructing a high-resolution image from multiple undersampled, shifted, degraded frames with subpixel displacement errors. We applied the PCG method with cosine transform based preconditioners and incomplete factorization based preconditioners to solve this reconstruction problem. Numerical results show that cosine transform based preconditioners are effective when m(S) ≥ 1 (the number of shifted low-resolution frames is large), but are less effective when m(S) = 0 (the number of shifted low-resolution frames is small). However, incomplete factorization based preconditioners work quite well independent of the number of shifted low-resolution frames.

References

[1] N.K. Bose, K.J. Boo, High-resolution image reconstruction with multisensors, Internat. J. Imag. Syst. Technol. 9 (1998).
[2] R.H. Chan, M.K. Ng, Conjugate gradient methods for Toeplitz systems, SIAM Rev. 38 (1996).
[3] R. Chan, T. Chan, M. Ng, W. Tang, C. Wong, Preconditioned iterative methods for high-resolution image reconstruction with multisensors, in: F. Luk (Ed.), Proceedings of the SPIE Symposium on Advanced Signal Processing: Algorithms, Architectures, and Implementations, San Diego, CA, July 1998.
[4] A. Forsgren, P.E. Gill, W. Murray, Computing modified Newton directions using a partial Cholesky factorization, SIAM J. Sci. Comput. 16 (1995).
[5] H. Fu, J. Barlow, A regularized total least squares algorithm for high resolution image reconstruction, Linear Algebra Appl., this issue.
[6] G. Jacquemod, C. Odet, R. Goutte, Image resolution enhancement using subpixel camera displacement, Signal Process. 26 (1992).
[7] T. Kailath, V. Olshevsky, Displacement structure approach to discrete trigonometric transform based preconditioners of G. Strang and T. Chan type, Calcolo 33 (1996).
[8] E. Kaltenbacher, R.C. Hardie, High resolution infrared image reconstruction using multiple, low resolution, aliased frames, in: Proceedings of the IEEE 1996 National Aerospace and Electronics Conference (NAECON), vol. 2, 1996.
[9] S.P. Kim, N.K. Bose, H.M. Valenzuela, Recursive reconstruction of high resolution image from noisy undersampled multiframes, IEEE Trans. Acoust. Speech Signal Process. 38 (6) (1990).
[10] T. Komatsu, K. Aizawa, T. Igarashi, T. Saito, Signal-processing based method for acquiring very high resolution images with multiple cameras and its theoretical analysis, IEE Proc. 140 (3, Part I) (1993) 19-25.
[11] C.J. Lin, J.J. Moré, Incomplete Cholesky factorization with limited memory, SIAM J. Sci. Comput. 21 (1999).
[12] T.A. Manteuffel, Shifted incomplete Cholesky factorization, in: Sparse Matrix Proceedings 1978, SIAM, Philadelphia, 1979.
[13] T.A. Manteuffel, An incomplete factorization technique for positive definite linear systems, Math. Comput. 34 (1980).

[14] J.A. Meijerink, H.A. van der Vorst, An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix, Math. Comput. 31 (1977).
[15] M. Ng, N.K. Bose, J. Koo, Constrained total least squares computations for high resolution image reconstruction with multisensors, Internat. J. Imag. Syst. Technol. 12 (2002).
[16] M. Ng, R. Chan, T. Chan, A. Yip, Cosine transform preconditioners for high resolution image reconstruction, Linear Algebra Appl. 316 (2000).
[17] M. Ng, K.N. Sze, Preconditioned iterative methods for super-resolution image reconstruction with multisensors, in: F. Luk (Ed.), Proceedings of the SPIE Symposium on Advanced Signal Processing: Algorithms, Architectures and Implementations, San Diego, CA, July 2000.
[18] R.B. Schnabel, E. Eskow, A new modified Cholesky factorization, SIAM J. Sci. Statist. Comput. 11 (1990).
[19] R.R. Schultz, R.L. Stevenson, Extraction of high-resolution frames from video sequences, IEEE Trans. Image Process. 5 (6) (1996).
[20] H. Sorensen, C. Burrus, Fast DFT and convolution algorithms, in: S. Mitra, J. Kaiser (Eds.), Handbook of Signal Processing, Wiley, New York, 1993.
[21] A.M. Tekalp, M.K. Ozkan, M.I. Sezan, High-resolution image reconstruction from lower-resolution image sequences and space-varying image restoration, in: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. III, San Francisco, CA, March 1992.
[22] R.Y. Tsai, T.S. Huang, Multiframe image restoration and registration, Adv. Comput. Vis. Image Process. 1 (1984).


Co-prime Arrays with Reduced Sensors (CARS) for Direction-of-Arrival Estimation

Co-prime Arrays with Reduced Sensors (CARS) for Direction-of-Arrival Estimation Co-prime Arrays with Reduced Sensors (CARS) for Direction-of-Arrival Estimation Mingyang Chen 1,LuGan and Wenwu Wang 1 1 Department of Electrical and Electronic Engineering, University of Surrey, U.K.

More information

On group inverse of singular Toeplitz matrices

On group inverse of singular Toeplitz matrices Linear Algebra and its Applications 399 (2005) 109 123 wwwelseviercom/locate/laa On group inverse of singular Toeplitz matrices Yimin Wei a,, Huaian Diao b,1 a Department of Mathematics, Fudan Universit,

More information

Fast Angular Synchronization for Phase Retrieval via Incomplete Information

Fast Angular Synchronization for Phase Retrieval via Incomplete Information Fast Angular Synchronization for Phase Retrieval via Incomplete Information Aditya Viswanathan a and Mark Iwen b a Department of Mathematics, Michigan State University; b Department of Mathematics & Department

More information

Preconditioning of elliptic problems by approximation in the transform domain

Preconditioning of elliptic problems by approximation in the transform domain TR-CS-97-2 Preconditioning of elliptic problems by approximation in the transform domain Michael K. Ng July 997 Joint Computer Science Technical Report Series Department of Computer Science Faculty of

More information

Computer Vision & Digital Image Processing

Computer Vision & Digital Image Processing Computer Vision & Digital Image Processing Image Restoration and Reconstruction I Dr. D. J. Jackson Lecture 11-1 Image restoration Restoration is an objective process that attempts to recover an image

More information

On factor width and symmetric H -matrices

On factor width and symmetric H -matrices Linear Algebra and its Applications 405 (2005) 239 248 www.elsevier.com/locate/laa On factor width and symmetric H -matrices Erik G. Boman a,,1, Doron Chen b, Ojas Parekh c, Sivan Toledo b,2 a Department

More information

Introduction. Chapter One

Introduction. Chapter One Chapter One Introduction The aim of this book is to describe and explain the beautiful mathematical relationships between matrices, moments, orthogonal polynomials, quadrature rules and the Lanczos and

More information

Lecture # 20 The Preconditioned Conjugate Gradient Method

Lecture # 20 The Preconditioned Conjugate Gradient Method Lecture # 20 The Preconditioned Conjugate Gradient Method We wish to solve Ax = b (1) A R n n is symmetric and positive definite (SPD). We then of n are being VERY LARGE, say, n = 10 6 or n = 10 7. Usually,

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b AM 205: lecture 7 Last time: LU factorization Today s lecture: Cholesky factorization, timing, QR factorization Reminder: assignment 1 due at 5 PM on Friday September 22 LU Factorization LU factorization

More information

Linear Algebra and its Applications

Linear Algebra and its Applications Linear Algebra and its Applications 433 (2010) 1101 1109 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Minimal condition number

More information

ADMM algorithm for demosaicking deblurring denoising

ADMM algorithm for demosaicking deblurring denoising ADMM algorithm for demosaicking deblurring denoising DANIELE GRAZIANI MORPHEME CNRS/UNS I3s 2000 route des Lucioles BP 121 93 06903 SOPHIA ANTIPOLIS CEDEX, FRANCE e.mail:graziani@i3s.unice.fr LAURE BLANC-FÉRAUD

More information

Tight Frame Based Method for High-Resolution Image Reconstruction

Tight Frame Based Method for High-Resolution Image Reconstruction Tight Frame Based Method for High-Resolution Image Reconstruction Jian-Feng Cai Raymond Chan Lixin Shen Zuowei Shen September 9, 00 Abstract We give a comprehensive discussion on high-resolution image

More information

RECURSIVE CONSTRUCTION OF (J, L) QC LDPC CODES WITH GIRTH 6. Communicated by Dianhua Wu. 1. Introduction

RECURSIVE CONSTRUCTION OF (J, L) QC LDPC CODES WITH GIRTH 6. Communicated by Dianhua Wu. 1. Introduction Transactions on Combinatorics ISSN (print: 2251-8657, ISSN (on-line: 2251-8665 Vol 5 No 2 (2016, pp 11-22 c 2016 University of Isfahan wwwcombinatoricsir wwwuiacir RECURSIVE CONSTRUCTION OF (J, L QC LDPC

More information

Super-Resolution. Dr. Yossi Rubner. Many slides from Miki Elad - Technion

Super-Resolution. Dr. Yossi Rubner. Many slides from Miki Elad - Technion Super-Resolution Dr. Yossi Rubner yossi@rubner.co.il Many slides from Mii Elad - Technion 5/5/2007 53 images, ratio :4 Example - Video 40 images ratio :4 Example Surveillance Example Enhance Mosaics Super-Resolution

More information

On deflation and singular symmetric positive semi-definite matrices

On deflation and singular symmetric positive semi-definite matrices Journal of Computational and Applied Mathematics 206 (2007) 603 614 www.elsevier.com/locate/cam On deflation and singular symmetric positive semi-definite matrices J.M. Tang, C. Vuik Faculty of Electrical

More information

Direct solution methods for sparse matrices. p. 1/49

Direct solution methods for sparse matrices. p. 1/49 Direct solution methods for sparse matrices p. 1/49 p. 2/49 Direct solution methods for sparse matrices Solve Ax = b, where A(n n). (1) Factorize A = LU, L lower-triangular, U upper-triangular. (2) Solve

More information

ANONSINGULAR tridiagonal linear system of the form

ANONSINGULAR tridiagonal linear system of the form Generalized Diagonal Pivoting Methods for Tridiagonal Systems without Interchanges Jennifer B. Erway, Roummel F. Marcia, and Joseph A. Tyson Abstract It has been shown that a nonsingular symmetric tridiagonal

More information

Using Hankel structured low-rank approximation for sparse signal recovery

Using Hankel structured low-rank approximation for sparse signal recovery Using Hankel structured low-rank approximation for sparse signal recovery Ivan Markovsky 1 and Pier Luigi Dragotti 2 Department ELEC Vrije Universiteit Brussel (VUB) Pleinlaan 2, Building K, B-1050 Brussels,

More information

Tikhonov Regularization for Weighted Total Least Squares Problems

Tikhonov Regularization for Weighted Total Least Squares Problems Tikhonov Regularization for Weighted Total Least Squares Problems Yimin Wei Naimin Zhang Michael K. Ng Wei Xu Abstract In this paper, we study and analyze the regularized weighted total least squares (RWTLS)

More information

Real-Valued Khatri-Rao Subspace Approaches on the ULA and a New Nested Array

Real-Valued Khatri-Rao Subspace Approaches on the ULA and a New Nested Array Real-Valued Khatri-Rao Subspace Approaches on the ULA and a New Nested Array Huiping Duan, Tiantian Tuo, Jun Fang and Bing Zeng arxiv:1511.06828v1 [cs.it] 21 Nov 2015 Abstract In underdetermined direction-of-arrival

More information

An iterative multigrid regularization method for Toeplitz discrete ill-posed problems

An iterative multigrid regularization method for Toeplitz discrete ill-posed problems NUMERICAL MATHEMATICS: Theory, Methods and Applications Numer. Math. Theor. Meth. Appl., Vol. xx, No. x, pp. 1-18 (200x) An iterative multigrid regularization method for Toeplitz discrete ill-posed problems

More information

8 The SVD Applied to Signal and Image Deblurring

8 The SVD Applied to Signal and Image Deblurring 8 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

Preconditioned Parallel Block Jacobi SVD Algorithm

Preconditioned Parallel Block Jacobi SVD Algorithm Parallel Numerics 5, 15-24 M. Vajteršic, R. Trobec, P. Zinterhof, A. Uhl (Eds.) Chapter 2: Matrix Algebra ISBN 961-633-67-8 Preconditioned Parallel Block Jacobi SVD Algorithm Gabriel Okša 1, Marián Vajteršic

More information

Robust Sparse Recovery via Non-Convex Optimization

Robust Sparse Recovery via Non-Convex Optimization Robust Sparse Recovery via Non-Convex Optimization Laming Chen and Yuantao Gu Department of Electronic Engineering, Tsinghua University Homepage: http://gu.ee.tsinghua.edu.cn/ Email: gyt@tsinghua.edu.cn

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

A randomized block sampling approach to the canonical polyadic decomposition of large-scale tensors

A randomized block sampling approach to the canonical polyadic decomposition of large-scale tensors A randomized block sampling approach to the canonical polyadic decomposition of large-scale tensors Nico Vervliet Joint work with Lieven De Lathauwer SIAM AN17, July 13, 2017 2 Classification of hazardous

More information

Computers and Mathematics with Applications. Convergence analysis of the preconditioned Gauss Seidel method for H-matrices

Computers and Mathematics with Applications. Convergence analysis of the preconditioned Gauss Seidel method for H-matrices Computers Mathematics with Applications 56 (2008) 2048 2053 Contents lists available at ScienceDirect Computers Mathematics with Applications journal homepage: wwwelseviercom/locate/camwa Convergence analysis

More information

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation Zheng-jian Bai Abstract In this paper, we first consider the inverse

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

Notes on PCG for Sparse Linear Systems

Notes on PCG for Sparse Linear Systems Notes on PCG for Sparse Linear Systems Luca Bergamaschi Department of Civil Environmental and Architectural Engineering University of Padova e-mail luca.bergamaschi@unipd.it webpage www.dmsa.unipd.it/

More information

Structured Low-Density Parity-Check Codes: Algebraic Constructions

Structured Low-Density Parity-Check Codes: Algebraic Constructions Structured Low-Density Parity-Check Codes: Algebraic Constructions Shu Lin Department of Electrical and Computer Engineering University of California, Davis Davis, California 95616 Email:shulin@ece.ucdavis.edu

More information

General Properties for Determining Power Loss and Efficiency of Passive Multi-Port Microwave Networks

General Properties for Determining Power Loss and Efficiency of Passive Multi-Port Microwave Networks University of Massachusetts Amherst From the SelectedWorks of Ramakrishna Janaswamy 015 General Properties for Determining Power Loss and Efficiency of Passive Multi-Port Microwave Networks Ramakrishna

More information

An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB =C

An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB =C Journal of Computational and Applied Mathematics 1 008) 31 44 www.elsevier.com/locate/cam An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation

More information