DEBLURRING AND SPARSE UNMIXING OF HYPERSPECTRAL IMAGES USING MULTIPLE POINT SPREAD FUNCTIONS

SEBASTIAN BERISHA, JAMES G. NAGY, AND ROBERT J. PLEMMONS

Abstract. This paper is concerned with deblurring and spectral analysis of ground-based astronomical images of space objects. A numerical approach is provided for deblurring and sparse unmixing of ground-based hyperspectral images (HSI) of objects taken through atmospheric turbulence. Hyperspectral imaging systems capture a 3D datacube (tensor) containing 2D spatial information and 1D spectral information at each spatial location. Pixel intensities vary with wavelength bands, providing a spectral trace of intensity values and generating a spatial map of spectral variation (the spectral signatures of materials). The deblurring and spectral unmixing problem is quite challenging, since the point spread function (PSF) depends on the imaging system as well as on the seeing conditions, and is wavelength varying. We show how to efficiently construct an optimal Kronecker product-based preconditioner, and provide numerical methods for estimating the multiple PSFs, using spectral data from an isolated (guide) star, for jointly deblurring and sparse unmixing the HSI datasets in order to spectrally analyze the imaged objects. The methods are illustrated with numerical experiments on a commonly used test example, a simulated HSI of the Hubble Space Telescope satellite.

Key words. image deblurring, hyperspectral imaging, preconditioning, least squares, ADMM, Kronecker product

AMS Subject Classifications: 65F2, 65F3

1. Introduction. Information about the material composition of an object is contained most unequivocally in the spectral profiles of the brightness at the different surface pixels of the object. Thus, by acquiring the surface brightness distribution in narrow spectral channels, as in hyperspectral image (HSI) data cubes, and by performing spectral unmixing on such data cubes, one can infer material identities as functions of position on the object
surface []. Spectral unmixing involves the computation of the fractional contribution of elementary spectra, called endmembers. By assuming that the measured spectrum of each mixed pixel in an HSI is a linear combination of the spectral signatures of endmembers, the underlying image model can be formulated as a linear mixture of endmembers with nonnegative and sparse coefficients,

    G = X M + N,

where M ∈ R^{N_m × N_w} represents a spectral library containing N_m spectral signatures of endmembers with N_w spectral bands or wavelengths, G ∈ R^{N_p × N_w} is the observed data matrix (each row contains the observed spectrum of a given pixel, and we use N_p to denote the number of pixels in each image), and X ∈ R^{N_p × N_m} contains the fractional abundances of the endmembers (each column contains the fractional abundances of a given endmember). Here, we assume that X is a sparse matrix, and N ∈ R^{N_p × N_w} is a matrix representing errors or noise affecting the measurements at each spectral

Department of Mathematics and Computer Science, Emory University (sberish@emory.edu).
Department of Mathematics and Computer Science, Emory University (nagy@mathcs.emory.edu). Research supported in part by the AFOSR under grant FA, and by the US National Science Foundation under grant no. DMS-5627.
Department of Computer Science, Wake Forest University, Winston-Salem, NC, USA (plemmons@wfu.edu). His work was supported by grant no. FA from the US Air Force Office of Scientific Research and by contract no. HM582--C- from the US National Geospatial-Intelligence Agency.

S. BERISHA, J. NAGY, AND R. PLEMMONS

band or wavelength; see, e.g., [1, 14]. If we assume that the data at each wavelength have been degraded by a blurring operator, H, then the problem takes the form

    G = H X M + N.

Given a spectral library of the endmembers, M, and assuming that we have computed a priori the parameters defining the blurring operator H, the goal becomes to compute the nonnegative and sparse matrix of fractional abundances, X.

A major challenge for deblurring hyperspectral images is that of estimating the overall blurring operator H, taking into account the fact that the point spread function (PSF) of the blurring operator can vary over the images in the HSI datacube. That is, the blurring can be wavelength dependent, and can depend on the imaging system diffraction blur as well as on the effects of atmospheric turbulence blur on the arriving wavefront; see, e.g., [5, 9, 10, 12]. We point especially to the recent development of the hyperspectral Multi Unit Spectroscopic Explorer (MUSE) system installed on the Very Large Telescope (VLT) being deployed by the European Southern Observatory (ESO) at the Paranal Observatory in Chile. The MUSE system will collect up to 4,000 bands, and research is ongoing to develop methods for estimating the wavelength dependent PSFs for deblurring the resulting HSI datacubes for ground-based astrophysical observations; see, e.g., [9, 12]. In particular, Soulez et al. [10] consider the restoration of hyperspectral astronomical data with spectrally varying blur, but assume that the spectrally varying PSF has already been provided by other means, and defer the PSF estimation to a later time. In this paper we consider the estimation of a Moffat function parameterization of the PSF as a function of wavelength, and derive a numerical scheme for deblurring and unmixing the HSI datacube using the estimated PSFs. Moffat function parameterizations capture the diffraction blur of the telescope system as well as the blur resulting from imaging through atmospheric turbulence, and
have been used quite effectively by astronomers for estimating PSFs; see, for example, the survey by Soulez et al. [10].

This paper is outlined as follows. In Section 2 we review a numerical approach for deblurring and sparse unmixing of HSI datacubes in the special case of a homogeneous PSF across wavelengths, based on work by Zhao et al. [14]. Section 3 concerns estimating the wavelength dependent PSFs to model the blurring effects of imaging through the atmosphere, and the application of a numerical scheme for this multiple PSF case using a preconditioned alternating direction method of multipliers. In Section 4 we show how to efficiently construct an optimal Kronecker product-based preconditioner to accelerate convergence. In order to illustrate the use of our method, some numerical experiments on a commonly used test example, a simulated hyperspectral image of the Hubble Space Telescope satellite, are reported in Section 5, and some concluding comments are provided in Section 6.

2. Numerical Scheme for the Single PSF Case. In this section we describe and expand upon the numerical scheme used in [14] for solving the hyperspectral image deblurring and unmixing problem by using the Alternating Direction Method of Multipliers (ADMM) in the single PSF case. Here, it is assumed that the blurring operator H is defined by a single PSF, and that each column of XM is blurred by the same H. The authors in [14] presented a total variation (TV) regularization method for solving the deblurring and sparse hyperspectral unmixing problem, which takes the form

    min_X (1/2) ||H X M − G||_F^2 + µ_1 ||X||_1 + µ_2 TV(X),

HYPERSPECTRAL IMAGING WITH MULTIPLE PSF

where H ∈ R^{N_p × N_p} is a blurring matrix constructed from a single Gaussian function, assuming periodic boundary conditions, and µ_1, µ_2 are two regularization parameters used to control the importance of the sparsity and total variation terms. Numerical schemes for both isotropic and anisotropic total variation are presented in [14]. For isotropic TV, the above problem can be rewritten as

    min (1/2) ||H X M − G||_F^2 + µ_1 ||V||_1 + µ_2 Σ_{i=1}^{N_p} Σ_{j=1}^{N_m} ||W_ij||_2   (2.1)

subject to

    D_h X = W^(1),   D_v X = W^(2),   V = X,   V ∈ K = { V ∈ R^{N_p × N_m} : V ≥ 0 },

where

    W_ij = [ W^(1)_ij, W^(2)_ij ] ∈ R^2,   W^(1)_ij = D_{i,h} x_j,   W^(2)_ij = D_{i,v} x_j,   1 ≤ i ≤ N_p,  1 ≤ j ≤ N_m,

and D_{i,h}, D_{i,v} denote the ith rows of D_h and D_v. Here, the matrices D_h and D_v represent the first order difference matrices in the horizontal and vertical directions, respectively. The authors in [14] solve the above problem using an alternating direction method. The problem in (2.1) can be decoupled by letting

    f_1(X) = (1/2) ||H X M − G||_F^2   and   f_2(Z) = χ_K(V) + µ_2 Σ_{i=1}^{N_p} Σ_{j=1}^{N_m} ||W_ij||_2 + µ_1 ||V||_1,

where

    Z = [ W^(1);  W^(2);  V ],   χ_K(V) = 0 if V ∈ K, and +∞ otherwise.

The constraints are expressed as

    B X + C Z = [ D_h;  D_v;  I_{N_p × N_p} ] X − I_{3N_p × 3N_p} [ W^(1);  W^(2);  V ] = 0_{3N_p × N_m},

where we use I and 0 to denote, respectively, an identity matrix and a matrix of all zeros (subscripts on these matrices define their dimensions, and may be omitted later in the paper if the dimensions are clear from the context). Furthermore, by attaching the Lagrange multiplier Λ to the linear constraints, the augmented Lagrangian function of (2.1) is written as

    L(X, Z, Λ) = f_1(X) + f_2(Z) + <Λ, B X + C Z> + (β/2) ||B X + C Z||_F^2,

where

    Λ = [ Λ^(1);  Λ^(2);  Λ^(3) ] ∈ R^{3N_p × N_m},

β > 0 is the penalty parameter for violating the linear constraints, and <·, ·> denotes the sum of the entries of the Hadamard product. With this formulation, the hyperspectral unmixing and deblurring problem is solved using an alternating direction method consisting of three subproblems at each iteration k:

    Step 1:  X^{k+1} ← arg min_X L(X, Z^k, Λ^k)
    Step 2:  Z^{k+1} ← arg min_Z L(X^{k+1}, Z, Λ^k)
    Step 3:  Λ^{k+1} ← Λ^k + β (B X^{k+1} + C Z^{k+1})

The X-subproblem, or Step 1, consists of solving

    X^{k+1} ← arg min_X { (1/2) ||H X M − G||_F^2 + <Λ, B X + C Z> + (β/2) ||B X + C Z||_F^2 }.   (2.2)

The solution of this subproblem satisfies the classical Sylvester matrix equation

    H^T H X M M^T + β B^T B X = H^T G M^T − β B^T C Z^k − B^T Λ^k.   (2.3)

Similar alternating minimization schemes have been used for solving the hyperspectral unmixing problem in [3] and [6]. However, a key step of the alternating minimization scheme presented in [14] consists in transforming the matrix equation (2.3) into a linear system that has a closed-form solution. In particular, the authors in [14] reformulate the Sylvester matrix equation (2.3) as

    ( M M^T ⊗ H^T H + β I ⊗ B^T B ) x = ĝ,

where

    x = vec(X) = vec([ x_1  ...  x_{N_m} ]) = [ x_1;  ...;  x_{N_m} ],   x_i = the ith column of X,

and, similarly, ĝ = vec( H^T G M^T − β B^T C Z^k − B^T Λ^k ). Let M = U Σ V^T be the singular value decomposition of M, and let H = F^* Γ F and B^T B = F^* Ψ^2 F be the Fourier decompositions of H and B^T B, respectively; here we assume spatially invariant blur with periodic boundary conditions. The above linear system then takes the form

    (U ⊗ F^*) ( Σ Σ^T ⊗ Γ^2 + β I ⊗ Ψ^2 ) (U^T ⊗ F) x = ĝ,

and thus a direct solution is given by

    x = (U ⊗ F^*) ( Σ Σ^T ⊗ Γ^2 + β I ⊗ Ψ^2 )^{-1} (U^T ⊗ F) ĝ.

For a detailed description of the solution of Steps 2 and 3 of the alternating minimization approach, see [14].
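To make the closed-form solve concrete, here is a small numpy sketch (it is not the authors' code; the array shapes, names, the use of numpy's FFT conventions, and the choice of first-order periodic forward differences for D_h and D_v are illustrative assumptions). It forms the Fourier symbols of H and of B^T B = D_h^T D_h + D_v^T D_v + I, and applies the direct formula above, one diagonal solve per singular value of M.

```python
import numpy as np

def btb_eigenvalues(n):
    """Eigenvalues Psi^2 of B^T B = D_h^T D_h + D_v^T D_v + I under the 2-D
    DFT, assuming first-order forward differences with periodic boundaries."""
    d = np.abs(np.exp(2j * np.pi * np.arange(n) / n) - 1.0) ** 2
    return d[:, None] + d[None, :] + 1.0

def solve_x_subproblem(Ghat, M, psf, beta):
    """Direct solve of (M M^T kron H^T H + beta I kron B^T B) x = ghat.

    Ghat : (n, n, Nm) right-hand side, one image per endmember.
    M    : (Nm, Nw) spectral library.
    psf  : (n, n) PSF, centered at the array midpoint (periodic blur).
    """
    n, Nm = psf.shape[0], M.shape[0]
    GammaSq = np.abs(np.fft.fft2(np.fft.ifftshift(psf))) ** 2   # Gamma^2
    PsiSq = btb_eigenvalues(n)                                  # Psi^2
    U, s, _ = np.linalg.svd(M, full_matrices=True)              # M = U S V^T
    s2 = np.zeros(Nm)
    s2[:s.size] = s ** 2                                        # diag(S S^T)
    T = np.fft.fft2(Ghat, axes=(0, 1)) @ U                      # (U^T kron F) ghat
    for j in range(Nm):                                         # diagonal solve
        T[:, :, j] /= s2[j] * GammaSq + beta * PsiSq
    return np.real(np.fft.ifft2(T @ U.T, axes=(0, 1)))          # (U kron F*)
```

The multiplication by U across the endmember dimension and the 2-D FFT across the pixel dimensions together realize the transformation (U^T ⊗ F), so the middle factor reduces to independent scalar divisions.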

3. Numerical Scheme for the Multiple PSF Case. In this section we provide the problem formulation for the general case, where each column of the matrix XM may be blurred by a different blurring operator. In particular, the deblurring and hyperspectral unmixing problem with multiple PSFs takes the form

    diag(H_1, H_2, ..., H_{N_w}) [ X M e_1;  X M e_2;  ...;  X M e_{N_w} ] = [ g_1;  g_2;  ...;  g_{N_w} ],   (3.1)

where e_i ∈ R^{N_w} is the ith unit vector, g_i is the ith column of the observed matrix G, and the blurring matrices H_i are defined by different PSFs that vary with wavelength. For example, in astronomical imaging, a blurring operator H for a particular wavelength, λ, might be accurately modeled with a circular Moffat function,

    PSF(α_0, α_1, α_2, λ)(i, j) = ((α_2 − 1) / (π (α_0 + α_1 λ)^2)) (1 + (i^2 + j^2) / (α_0 + α_1 λ)^2)^(−α_2).   (3.2)

Moffat functions are widely used to parameterize PSFs in astronomical imaging. The parameters α_0, α_1, and α_2 are the Moffat function shape parameters for the associated PSF, which are to be estimated from the data; see, e.g., [9]. Notice that problem (3.1) can be rewritten as

    [ H_1 X M e_1;  H_2 X M e_2;  ...;  H_{N_w} X M e_{N_w} ] = [ g_1;  g_2;  ...;  g_{N_w} ].

By utilizing Kronecker product properties, the above equation can be reformulated as

    [ (e_1^T M^T ⊗ H_1) x;  (e_2^T M^T ⊗ H_2) x;  ...;  (e_{N_w}^T M^T ⊗ H_{N_w}) x ] = [ g_1;  g_2;  ...;  g_{N_w} ],

where x = vec(X). Thus, since M e_j = m_j, the jth column of M, the multiple PSF hyperspectral image deblurring and unmixing problem can be formulated as

    H x = g,   where   H = [ m_1^T ⊗ H_1;  m_2^T ⊗ H_2;  ...;  m_{N_w}^T ⊗ H_{N_w} ],   x = vec(X),   g = [ g_1;  g_2;  ...;  g_{N_w} ].

Hence, the X-subproblem (2.2) for the multiple PSF formulation takes the form

    X^{k+1} ← arg min_X { (1/2) ||H x − g||_2^2 + <Λ, B X + C Z> + (β/2) ||B X + C Z||_F^2 }.

That is, using Kronecker product properties and applying the vectorization operator, vec(·), X^{k+1} is a solution of

    min_x { (1/2) ||H x − g||_2^2 + vec(Λ)^T ((I ⊗ B) x + vec(C Z)) + (β/2) ||(I ⊗ B) x + vec(C Z)||_2^2 },

where x = vec(X). Now, if we set the gradient of the augmented Lagrangian for the X-subproblem to 0, then we obtain

    H^T H x + β (I ⊗ B^T B) x = H^T g − β (I ⊗ B^T) vec(C Z^k) − vec(B^T Λ^k).

The above equation can be rewritten as

    ( H^T H + β I ⊗ B^T B ) x = ĝ,   (3.3)

where ĝ = H^T g − β (I ⊗ B^T) vec(C Z^k) − vec(B^T Λ^k). Notice that

    H^T H = [ m_1 ⊗ H_1^T   m_2 ⊗ H_2^T   ...   m_{N_w} ⊗ H_{N_w}^T ] [ m_1^T ⊗ H_1;  m_2^T ⊗ H_2;  ...;  m_{N_w}^T ⊗ H_{N_w} ]
          = m_1 m_1^T ⊗ H_1^T H_1 + m_2 m_2^T ⊗ H_2^T H_2 + ... + m_{N_w} m_{N_w}^T ⊗ H_{N_w}^T H_{N_w}.

Using the decompositions H_i^T H_i = F^* Γ_i^2 F and B^T B = F^* Ψ^2 F, equation (3.3) takes the form

    ( m_1 m_1^T ⊗ F^* Γ_1^2 F + m_2 m_2^T ⊗ F^* Γ_2^2 F + ... + m_{N_w} m_{N_w}^T ⊗ F^* Γ_{N_w}^2 F + β I ⊗ F^* Ψ^2 F ) x = ĝ.

Thus, the X-subproblem to be solved for the multiple PSF case is

    (I ⊗ F^*) ( m_1 m_1^T ⊗ Γ_1^2 + m_2 m_2^T ⊗ Γ_2^2 + ... + m_{N_w} m_{N_w}^T ⊗ Γ_{N_w}^2 + β I ⊗ Ψ^2 ) (I ⊗ F) x = ĝ.

Notice that the middle part of the coefficient matrix in the above linear system is not diagonal, as it is in the single PSF case, and thus there is no explicit solution of the X-subproblem for multiple PSFs. However, the coefficient matrix for multiple PSFs is symmetric positive definite (spd), and thus we use the conjugate gradient method to solve the above subproblem. Construction of a preconditioner is described next.

4. Conjugate Gradient Preconditioner. The X-subproblem involves the coefficient matrix

    (I ⊗ F^*) ( m_1 m_1^T ⊗ Γ_1^2 + m_2 m_2^T ⊗ Γ_2^2 + ... + m_{N_w} m_{N_w}^T ⊗ Γ_{N_w}^2 + β I ⊗ Ψ^2 ) (I ⊗ F).

Our goal is to approximate m_1 m_1^T ⊗ Γ_1^2 + ... + m_{N_w} m_{N_w}^T ⊗ Γ_{N_w}^2 by a single Kronecker product A ⊗ D, where D is a diagonal matrix and A is spd. If we can find such an approximation, then we can compute A = U Σ U^T, and

    (I ⊗ F^*) ( m_1 m_1^T ⊗ Γ_1^2 + ... + m_{N_w} m_{N_w}^T ⊗ Γ_{N_w}^2 + β I ⊗ Ψ^2 ) (I ⊗ F)
      ≈ (I ⊗ F^*) ( A ⊗ D + β I ⊗ Ψ^2 ) (I ⊗ F)
      = (I ⊗ F^*) ( U Σ U^T ⊗ D + β I ⊗ Ψ^2 ) (I ⊗ F)
      = (U ⊗ F^*) ( Σ ⊗ D + β I ⊗ Ψ^2 ) (U^T ⊗ F).

4.1. Kronecker Product Approximation. One very simple approximation can be obtained by replacing each Γ_j by a single matrix, such as the average of all of the diagonal matrices, Γ_avg. In this case we use A = M M^T and D = Γ_avg^2. However, it is difficult to determine the quality of such a simple approach. Therefore, we seek a different, and possibly optimal, approximation. Consider

    m_1 m_1^T ⊗ Γ_1^2 + ... + m_{N_w} m_{N_w}^T ⊗ Γ_{N_w}^2 = C C^T,   where   C = [ m_1 ⊗ Γ_1,  m_2 ⊗ Γ_2,  ...,  m_{N_w} ⊗ Γ_{N_w} ];

that is, C is the block matrix whose (i, j) block is m_ij Γ_j:

    C = [ m_11 Γ_1        m_12 Γ_2        ...  m_{1 N_w} Γ_{N_w}
          m_21 Γ_1        m_22 Γ_2        ...  m_{2 N_w} Γ_{N_w}
          ...
          m_{N_m 1} Γ_1   m_{N_m 2} Γ_2   ...  m_{N_m N_w} Γ_{N_w} ].

Notice that if we can approximate C by C ≈ M̂ ⊗ Γ̂, where Γ̂ is diagonal, then C C^T ≈ M̂ M̂^T ⊗ Γ̂^2. For example, one such simple approximation, as discussed above, is M̂ = M, Γ̂ = Γ_avg. Instead of using this approach, we show how to find M̂ and Γ̂ that minimize

    || C − M̂ ⊗ Γ̂ ||_F.

Using ideas from Van Loan and Pitsianis [11], we can find the approximation C ≈ M̂ ⊗ Γ̂ by transforming C to "tilde space"; the optimal Kronecker product approximation of a matrix is obtained by using the optimal rank-1 approximation of the transformed matrix. In our case, the tilde transformation of C is given by

    C̃ = [ vec(m_11 Γ_1)^T;  vec(m_21 Γ_1)^T;  ...;  vec(m_{N_m 1} Γ_1)^T;  vec(m_12 Γ_2)^T;  vec(m_22 Γ_2)^T;  ...;  vec(m_{N_m 2} Γ_2)^T;  ...;  vec(m_{N_m N_w} Γ_{N_w})^T ]
       = [ m_11 vec(Γ_1)^T;  m_21 vec(Γ_1)^T;  ...;  m_{N_m 1} vec(Γ_1)^T;  m_12 vec(Γ_2)^T;  ...;  m_{N_m N_w} vec(Γ_{N_w})^T ]

    = blkdiag(m_1, m_2, ..., m_{N_w}) [ vec(Γ_1)^T;  vec(Γ_2)^T;  ...;  vec(Γ_{N_w})^T ],

where m_ij is the ith entry of the vector m_j, for i = 1, ..., N_m and j = 1, ..., N_w. To find the optimal Kronecker product approximation of C, first observe (see, e.g., [11]) that

    || C − M̂ ⊗ Γ̂ ||_F = || C̃ − vec(M̂) vec(Γ̂)^T ||_F,

and thus the optimal Kronecker product approximation problem is equivalent to an optimal rank-1 approximation of C̃. By the Eckart-Young theorem (see, e.g., [2]), the best rank-1 approximation is obtained from the largest singular value and corresponding singular vectors of C̃; that is, C̃ ≈ σ̃_1 ũ_1 ṽ_1^T. The matrices M̂ and Γ̂ are then constructed from σ̃_1, ũ_1, and ṽ_1; specifically,

    vec(M̂) = sqrt(σ̃_1) ũ_1,   vec(Γ̂) = sqrt(σ̃_1) ṽ_1.

4.2. Computing the largest singular triplet of C̃. The matrix C̃ can be quite large, N_m N_w × N_p^2, so it is important to exploit its structure in order to efficiently compute the largest singular triplet; simply exploiting sparsity is not enough. To describe how we do this, we first define the matrices

    M̄ = blkdiag(m_1, m_2, ..., m_{N_w}) ∈ R^{N_m N_w × N_w}   and   T̄ = [ vec(Γ_1)^T;  vec(Γ_2)^T;  ...;  vec(Γ_{N_w})^T ] ∈ R^{N_w × N_p^2},

so that C̃ = M̄ T̄. Now notice that, since each Γ_j is an N_p × N_p diagonal matrix, at most N_p columns of T̄ are nonzero. Thus, there is a permutation matrix P such that

    T̄ P = [ T̄_1   0 ],   that is,   T̄ = [ T̄_1   0 ] P^T,

where 0 is an N_w × (N_p^2 − N_p) matrix of all zeros, and

    T̄_1 = [ γ_1^T;  γ_2^T;  ...;  γ_{N_w}^T ]

is an N_w × N_p matrix, in which γ_j is the vector containing the diagonal elements of Γ_j. Next, observe that the structure of M̄ allows us to efficiently compute a thin QR decomposition,

    M̄ = Q R,

where R = diag(||m_1||_2, ||m_2||_2, ..., ||m_{N_w}||_2). Thus, we can now rewrite C̃ as

    C̃ = M̄ T̄ = Q R [ T̄_1   0 ] P^T = Q [ R T̄_1   0 ] P^T.

Notice that R T̄_1 is an N_w × N_p matrix, which is relatively small compared to the N_m N_w × N_p^2 matrix C̃. In addition, because Q has orthonormal columns and P is a permutation matrix, to compute the largest singular triplet of C̃ we need only compute the largest singular triplet of R T̄_1. That is, if we denote the largest singular triplet of R T̄_1 by (u_1, σ_1, v_1), and the largest singular triplet of C̃ by (ũ_1, σ̃_1, ṽ_1), then

    σ̃_1 = σ_1,   ũ_1 = Q u_1,   and   ṽ_1 = P [ v_1;  0 ],

where 0 is an (N_p^2 − N_p) vector of all zeros. It is also important to notice that the zero structure of ṽ_1 implies that if vec(Γ̂) = sqrt(σ_1) ṽ_1, then Γ̂ is a diagonal matrix, precisely what we need for our preconditioner.

4.3. Approximation Quality of the Preconditioner. In this subsection we consider the approximation quality of the preconditioner. It is difficult to give a precise analytical result on the quality of the approximation, but it is possible to get a rough bound on ||C − M̂ ⊗ Γ̂||_F^2. The size of this norm depends on how the PSFs vary with wavelength. Specifically, if M̂ ⊗ Γ̂ is the minimizer of the Frobenius norm error, then

    || C − M̂ ⊗ Γ̂ ||_F^2  ≤  || C − M ⊗ Γ_avg ||_F^2  =  Σ_{i=1}^{N_m} Σ_{j=1}^{N_w} m_ij^2 ||E_j||_F^2  ≤  N_m Σ_{j=1}^{N_w} ||E_j||_F^2,

where E_j = Γ_j − Γ_avg, and the last inequality holds because, without loss of generality, we can assume |m_ij| ≤ 1 (see Fig. 5.3). Thus, if Γ_1 ≈ Γ_2 ≈ ... ≈ Γ_{N_w} (that is, the PSFs are approximately equal), then we obtain a small approximation error with our preconditioner. Because of the high nonlinearity of the PSFs, we have not been able to refine this bound any further. However, it is known that all of the wavelength-dependent exact PSFs for hyperspectral imaging through atmospheric turbulence are scaled versions of a base PSF associated with the optical path difference function [8] for the arriving light wavefront phase. Also, the blurring effects of the PSFs become much weaker at longer wavelengths, and there
the scaling factor begins to approach one, so that the PSFs become almost identical; see, e.g., [5]. The wavelength at which this begins to occur is, of course, problem dependent, and depends upon the level of turbulence. We note that, when evaluating the quality of preconditioners, it is often useful to look experimentally at the singular values of the original matrix and of the preconditioned system. However, this can only be done for very small problems. Figure 4.1

shows the singular values for the preconditioned and non-preconditioned systems. We used PSFs of small, fixed size, and varied the number of wavelengths from 9 to 99 with a fixed step size in nm. Note that the singular values of the preconditioned system tend to cluster more towards 1 compared to those of the non-preconditioned system. We also noticed that the singular values of the preconditioned system show a tendency to move away from 0. The behavior of the singular values is similar as the number of wavelengths varies. Even though there is not a tight clustering of the singular values around 1, as one would expect from an extremely effective preconditioner, our numerical results show that our Kronecker product-based preconditioner significantly reduces the number of iterations necessary for the convergence of the conjugate gradient method.

Fig. 4.1. Singular values for the non-preconditioned (left) and preconditioned (right) systems with varying numbers of wavelengths.

5. Numerical results. In this section we test the proposed numerical scheme for the deblurring and unmixing model using single and multiple PSFs. Simulated hyperspectral data are used to evaluate the proposed method, and we compare the multiple PSF approach with the single PSF approach. The quality of the estimated fractional abundances of endmembers is evaluated using the relative error

    || X_true − X ||_2 / || X_true ||_2,

where X_true is the matrix of true fractional abundances of endmembers and X contains the fractional abundances computed by the proposed method. In all experiments, we used the circular Moffat functions defined in equation (3.2) to model the PSFs, with α_0 = 2.42, α_1 = , and α_2 = 2.66. We consider a simulated hyperspectral image of the Hubble Space Telescope, which was also used for testing in [14]; similar data was also used in [7] and [13]. The signatures cover a band of spectra from 400nm to 2500nm. We use 99 evenly distributed sampling points, leading to a hyperspectral datacube with 99 spectral bands. Six materials typical of
those associated with satellites are used: Hubble aluminum, Hubble glue, Hubble honeycomb top, Hubble honeycomb side, solar cell, and rubber edge. The synthetic map of the satellite image is shown in Figure 5.1. The hyperspectral datacube is blurred by multiple circular Moffat point spread functions (see Figure 5.2); that is, each column of G is blurred by a different circular Moffat function corresponding to a particular wavelength. Note that as the wavelength, λ, increases, there is less blurring present in those columns compared to the blurring in the columns

observed at shorter wavelengths, as expected; see, e.g., [4]. Gaussian white noise at the level of 30dB is also added to the datacube. In the experiments for the numerical scheme with multiple PSFs, we use all of the PSFs for the reconstruction of the fractional abundances. In the single PSF scheme, we use an average of all the PSFs for reconstruction. That is, the columns of G are blurred with different PSFs in both cases, and in the single PSF numerical scheme we use one PSF, representing the average of all the PSFs, for reconstruction. The plot of relative errors is shown in Figure 5.4. One can observe that the use of multiple PSFs provides lower relative reconstruction errors than those obtained in the single PSF case. It is a known fact that the blurring varies with wavelength; Figure 5.4 shows that by taking this fact into account we can achieve much lower relative errors in the reconstruction of the fractional abundances. The following values are used for the parameters in the alternating minimization scheme: β = 2, µ_1 = , and µ_2 = 5 × 10^-4. The convergence of the alternating direction method is theoretically guaranteed as long as the penalty parameter β is positive; see, e.g., [14]. We note that the conjugate gradient method required 1,000 iterations to solve each X-subproblem (for multiple PSFs) at the same accuracy level as the single PSF method. By using the preconditioner (presented in Section 4) we were able to reduce the number of iterations required for convergence to 20. In Figure 5.5 we show the relative residual norms for the first 95 iterations of preconditioned conjugate gradient (PCG) and conjugate gradient (CG) without preconditioning. It is clear from this figure that the relative residual norms for PCG decrease very quickly until they reach the default tolerance level of 10^-6, whereas for CG the relative residual norms decrease very slowly. Figures 5.6 and 5.7 show the reconstructed columns of X for both the single PSF and multiple
PSF methods. The reconstructed spectral signatures for the various materials are shown in Figure 5.8. This figure shows clearly that the reconstructed spectra using multiple PSFs are much better approximations to the true spectra than those computed using a single PSF. As with the results shown in Figure 5.4, the single PSF is obtained by averaging all of the PSFs. Figure 5.9 is essentially the same as Figure 5.8, except that we also include the spectral signatures of the observed (blurred) data.

Fig. 5.1. Synthetic map representation of the hyperspectral satellite image. False colors are used to denote the different materials, which are defined in Table 5.1.
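The wavelength-dependent forward model used in these experiments (each band of the datacube blurred by its own spatially invariant PSF under periodic boundary conditions) can be sketched as a per-band circular convolution; the routine below is an illustrative numpy sketch with hypothetical names and shapes, not the authors' code.

```python
import numpy as np

def blur_cube(cube, psfs):
    """Apply a different spatially invariant PSF to each spectral band.

    cube : (n, n, Nw) clean datacube (the columns of X M, reshaped to images).
    psfs : (n, n, Nw) one PSF per band, centered at the array midpoint.
    Uses circular (periodic) convolution via the 2-D FFT, band by band.
    """
    out = np.empty_like(cube, dtype=float)
    for j in range(cube.shape[2]):
        Gam = np.fft.fft2(np.fft.ifftshift(psfs[:, :, j]))
        out[:, :, j] = np.real(np.fft.ifft2(Gam * np.fft.fft2(cube[:, :, j])))
    return out
```

Noise at a prescribed SNR would then be added to the blurred cube to produce the observed data G.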

Table 5.1
Materials, corresponding colors (see Fig. 5.1), fractional abundances of constituent endmembers, and percent fractional abundances of the materials used for the Hubble satellite simulation.

  Material  Color       Constituent Endmembers (Fractional Abundance)  Percent
  1         light gray  Em 1 (1.0)                                     2
  2         green       Em 2 (0.7), Em 9 (0.3)                         8
  3         red         Em 3 (1.0)                                     4
  4         dark gray   Em 4 (0.6), Em 10 (0.4)                        9
  5         brown       Em 5 (1.0)                                     7
  6         gold        Em 6 (0.4), Em 11 (0.3), Em 12 (0.3)           32
  7         blue        Em 7 (1.0)                                     3
  8         white       Em 8 (1.0)                                     6

Fig. 5.2. First column: the true columns of G observed at different wavelengths (from top to bottom: 400nm, 700nm, 843nm, 2500nm). Second column: the circular Moffat PSFs used to blur the columns of G at the different wavelengths. Third column: the corresponding blurred and noisy columns of G, blurred with the different Moffat PSFs corresponding to the different wavelengths.
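The circular Moffat PSFs of equation (3.2), such as those shown in Figure 5.2, can be generated with a short routine. This is a hedged sketch (numpy; the grid size, the value of α_1, and the discrete renormalization are illustrative assumptions, not the authors' code).

```python
import numpy as np

def moffat_psf(n, alpha0, alpha1, alpha2, lam):
    """Circular Moffat PSF on an n-by-n grid at wavelength lam.

    The width alpha0 + alpha1*lam grows linearly with wavelength, and
    alpha2 is the Moffat shape (power) parameter; the discrete PSF is
    renormalized so that its entries sum to one.
    """
    r = np.arange(n) - n // 2
    ii, jj = np.meshgrid(r, r, indexing="ij")
    a = alpha0 + alpha1 * lam
    psf = (alpha2 - 1.0) / (np.pi * a ** 2) \
        * (1.0 + (ii ** 2 + jj ** 2) / a ** 2) ** (-alpha2)
    return psf / psf.sum()
```

For example, moffat_psf(33, 2.42, 0.001, 2.66, 700.0) has its peak at the grid center, and increasing lam (with positive alpha1) broadens the profile; the specific alpha1 = 0.001 and grid size here are placeholders only.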

Fig. 5.3. Spectral signatures of the eight materials assigned to the simulated Hubble Telescope model.

Fig. 5.4. Relative errors for the computed fractional abundances using a single PSF and multiple PSFs.

Fig. 5.5. Relative residual norms for the first 95 iterations of PCG and CG.

Fig. 5.6. Fractional abundances for materials 1 to 4 (the first 4 materials in Table 5.1). First column: true fractional abundances; second column: the estimated fractional abundances using the single PSF approach; third column: the estimated fractional abundances using the multiple PSF approach.
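The qualitative behavior in Figure 5.5, a rapid PCG residual decrease versus a slow CG decrease, is easy to reproduce on a toy spd system. The sketch below is a textbook preconditioned conjugate gradient loop in numpy, not the authors' implementation; the test matrix, its spectrum, and the (ideal) preconditioner in the usage example are illustrative assumptions only.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-6, maxiter=500):
    """Preconditioned conjugate gradients for an spd matrix A.

    M_inv is a callable applying the inverse preconditioner to a vector.
    Returns the approximate solution and the number of iterations used.
    """
    x = np.zeros_like(b)
    r = b.copy()                      # r = b - A @ x with x = 0
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter
```

With an ill-conditioned spd matrix, plain CG (M_inv the identity map) stalls, while a good preconditioner converges in a handful of iterations, mirroring the behavior in the figure.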

Fig. 5.7. Fractional abundances for materials 5 to 8 (the last 4 materials in Table 5.1). First column: true fractional abundances; second column: the estimated fractional abundances using the single PSF approach; third column: the estimated fractional abundances using the multiple PSF approach.

Fig. 5.8. The true material spectral signatures (blue, +), the computed spectral signatures using the multiple PSF numerical approach (red, o), and the computed spectral signatures using the single PSF approach (magenta, :). Note that the y-axis has not been scaled, in order to show more clearly the differences between the spectral signatures in the three cases.

Fig. 5.9. The true material spectral signatures (blue, +), the computed spectral signatures using the multiple PSF numerical approach (red, o), the computed spectral signatures using the single PSF approach (magenta, :), and the original observed blurred and noisy material spectral signatures (black). Note that the y-axis has not been scaled, in order to show more clearly the differences between the spectral signatures in the four cases.

6. Conclusions. We have presented a numerical approach, based on the ADMM method, for deblurring and sparse unmixing of ground-based hyperspectral images of objects taken through the atmosphere at multiple wavelengths with narrow spectral channels. Because the PSFs, which define the blurring operations, depend on the imaging system as well as on the seeing conditions, and are wavelength dependent, the reconstruction process is computationally intensive. In particular, we found it important to use a preconditioned conjugate gradient method to solve the large-scale linear system needed at each ADMM iteration. We showed how to efficiently construct an optimal Kronecker product-based preconditioner, and provided numerical experiments to illustrate the effectiveness of our approach. In particular, we illustrated that much better accuracy can be obtained by using the multiple, wavelength dependent PSFs, and we showed that our preconditioner is quite effective in significantly reducing the number of conjugate gradient iterations.

REFERENCES

[1] M. T. Eismann, Hyperspectral Remote Sensing, SPIE Press, 2012.
[2] G. H. Golub and C. F. Van Loan, Matrix Computations, 4th ed., Johns Hopkins University Press, Baltimore, 2013.
[3] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, Total variation spatial regularization for sparse hyperspectral unmixing, IEEE Transactions on Geoscience and Remote Sensing, 50 (2012).
[4] S. M. Jefferies and M. Hart, Deconvolution from wavefront sensing using the frozen flow hypothesis, Optics Express, 19 (2011).
[5] A. J. Lambert and G. Nichols, Wavelength diversity in restoration from atmospheric turbulence effected surveillance imagery, in Frontiers in Optics 2009 / Laser Science XXV, OSA Technical Digest (CD), Fall 2009.
[6] C. Li, T. Sun, K. F. Kelly, and Y. Zhang, A compressive sensing and unmixing scheme for hyperspectral data processing, IEEE Transactions on Image Processing, 21 (2012).
[7] F. Li, M. K. Ng, and R. J. Plemmons, Coupled segmentation and denoising/deblurring models for hyperspectral material identification, Numerical Linear Algebra with Applications, 19 (2012).
[8] M. C. Roggemann and B. M. Welsh, Imaging Through Turbulence, CRC Press, 1996.
[9] D. Serre, E. Villeneuve, H. Carfantan, L. Jolissaint, V. Mazet, S. Bourguignon, and A. Jarno, Modeling the spatial PSF at the VLT focal plane for MUSE WFM data analysis purpose, in SPIE Astronomical Telescopes and Instrumentation: Observational Frontiers of Astronomy for the New Decade, International Society for Optics and Photonics, 2010.
[10] F. Soulez, E. Thiebaut, and L. Denis, Restoration of hyperspectral astronomical data with spectrally varying blur, European Astronomical Society Publications Series, 59 (2013).
[11] C. F. Van Loan and N. P. Pitsianis, Approximation with Kronecker products, in Linear Algebra for Large Scale and Real Time Applications, M. S. Moonen and G. H. Golub, eds., Kluwer Publications, 1993.
[12] E. Villeneuve, H. Carfantan, and D. Serre, PSF estimation of hyperspectral data acquisition system for ground-based astrophysical observations, in Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), 2011 3rd Workshop on, IEEE, 2011.
[13] Q. Zhang, H. Wang, R. Plemmons, and V. Pauca, Tensor methods for hyperspectral data analysis: a space object material identification study, Journal of the Optical Society of America A, 25 (2008).
[14] X.-L. Zhao, F. Wang, T.-Z. Huang, M. K. Ng, and R. J. Plemmons, Deblurring and sparse unmixing for hyperspectral images, IEEE Transactions on Geoscience and Remote Sensing, 51 (2013).


More information

EE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6)

EE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6) EE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6) Gordon Wetzstein gordon.wetzstein@stanford.edu This document serves as a supplement to the material discussed in

More information

Blind image restoration as a convex optimization problem

Blind image restoration as a convex optimization problem Int. J. Simul. Multidisci.Des. Optim. 4, 33-38 (2010) c ASMDO 2010 DOI: 10.1051/ijsmdo/ 2010005 Available online at: http://www.ijsmdo.org Blind image restoration as a convex optimization problem A. Bouhamidi

More information

Some Applications of Nonnegative Tensor Factorizations (NTF) to Mining Hyperspectral & Related Tensor Data. Bob Plemmons Wake Forest

Some Applications of Nonnegative Tensor Factorizations (NTF) to Mining Hyperspectral & Related Tensor Data. Bob Plemmons Wake Forest Some Applications of Nonnegative Tensor Factorizations (NTF) to Mining Hyperspectral & Related Tensor Data Bob Plemmons Wake Forest 1 Some Comments and Applications of NTF Decomposition methods involve

More information

AIR FORCE RESEARCH LABORATORY Directed Energy Directorate 3550 Aberdeen Ave SE AIR FORCE MATERIEL COMMAND KIRTLAND AIR FORCE BASE, NM

AIR FORCE RESEARCH LABORATORY Directed Energy Directorate 3550 Aberdeen Ave SE AIR FORCE MATERIEL COMMAND KIRTLAND AIR FORCE BASE, NM AFRL-DE-PS-JA-2007-1004 AFRL-DE-PS-JA-2007-1004 Noise Reduction in support-constrained multi-frame blind-deconvolution restorations as a function of the number of data frames and the support constraint

More information

Semidefinite Programming Based Preconditioning for More Robust Near-Separable Nonnegative Matrix Factorization

Semidefinite Programming Based Preconditioning for More Robust Near-Separable Nonnegative Matrix Factorization Semidefinite Programming Based Preconditioning for More Robust Near-Separable Nonnegative Matrix Factorization Nicolas Gillis nicolas.gillis@umons.ac.be https://sites.google.com/site/nicolasgillis/ Department

More information

Mathematics and Computer Science

Mathematics and Computer Science Technical Report TR-2004-012 Kronecker Product Approximation for Three-Dimensional Imaging Applications by MIsha Kilmer, James Nagy Mathematics and Computer Science EMORY UNIVERSITY Kronecker Product Approximation

More information

What is Image Deblurring?

What is Image Deblurring? What is Image Deblurring? When we use a camera, we want the recorded image to be a faithful representation of the scene that we see but every image is more or less blurry, depending on the circumstances.

More information

The Global Krylov subspace methods and Tikhonov regularization for image restoration

The Global Krylov subspace methods and Tikhonov regularization for image restoration The Global Krylov subspace methods and Tikhonov regularization for image restoration Abderrahman BOUHAMIDI (joint work with Khalide Jbilou) Université du Littoral Côte d Opale LMPA, CALAIS-FRANCE bouhamidi@lmpa.univ-littoral.fr

More information

Regularization methods for large-scale, ill-posed, linear, discrete, inverse problems

Regularization methods for large-scale, ill-posed, linear, discrete, inverse problems Regularization methods for large-scale, ill-posed, linear, discrete, inverse problems Silvia Gazzola Dipartimento di Matematica - Università di Padova January 10, 2012 Seminario ex-studenti 2 Silvia Gazzola

More information

Interaction on spectral data with: Kira Abercromby (NASA-Houston) Related Papers at:

Interaction on spectral data with: Kira Abercromby (NASA-Houston) Related Papers at: Low-Rank Nonnegative Factorizations for Spectral Imaging Applications Bob Plemmons Wake Forest University Collaborators: Christos Boutsidis (U. Patras), Misha Kilmer (Tufts), Peter Zhang, Paul Pauca, (WFU)

More information

2 Regularized Image Reconstruction for Compressive Imaging and Beyond

2 Regularized Image Reconstruction for Compressive Imaging and Beyond EE 367 / CS 448I Computational Imaging and Display Notes: Compressive Imaging and Regularized Image Reconstruction (lecture ) Gordon Wetzstein gordon.wetzstein@stanford.edu This document serves as a supplement

More information

6 The SVD Applied to Signal and Image Deblurring

6 The SVD Applied to Signal and Image Deblurring 6 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

Mathematical Beer Goggles or The Mathematics of Image Processing

Mathematical Beer Goggles or The Mathematics of Image Processing How Mathematical Beer Goggles or The Mathematics of Image Processing Department of Mathematical Sciences University of Bath Postgraduate Seminar Series University of Bath 12th February 2008 1 How 2 How

More information

arxiv: v1 [math.na] 1 Sep 2018

arxiv: v1 [math.na] 1 Sep 2018 On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing

More information

8 The SVD Applied to Signal and Image Deblurring

8 The SVD Applied to Signal and Image Deblurring 8 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

8 The SVD Applied to Signal and Image Deblurring

8 The SVD Applied to Signal and Image Deblurring 8 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

Krylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration

Krylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration Krylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration D. Calvetti a, B. Lewis b and L. Reichel c a Department of Mathematics, Case Western Reserve University,

More information

Introduction to Compressed Sensing

Introduction to Compressed Sensing Introduction to Compressed Sensing Alejandro Parada, Gonzalo Arce University of Delaware August 25, 2016 Motivation: Classical Sampling 1 Motivation: Classical Sampling Issues Some applications Radar Spectral

More information

arxiv: v1 [math.na] 3 Jan 2019

arxiv: v1 [math.na] 3 Jan 2019 STRUCTURED FISTA FOR IMAGE RESTORATION ZIXUAN CHEN, JAMES G. NAGY, YUANZHE XI, AND BO YU arxiv:9.93v [math.na] 3 Jan 29 Abstract. In this paper, we propose an efficient numerical scheme for solving some

More information

On nonstationary preconditioned iterative regularization methods for image deblurring

On nonstationary preconditioned iterative regularization methods for image deblurring On nonstationary preconditioned iterative regularization methods for image deblurring Alessandro Buccini joint work with Prof. Marco Donatelli University of Insubria Department of Science and High Technology

More information

Iterative Krylov Subspace Methods for Sparse Reconstruction

Iterative Krylov Subspace Methods for Sparse Reconstruction Iterative Krylov Subspace Methods for Sparse Reconstruction James Nagy Mathematics and Computer Science Emory University Atlanta, GA USA Joint work with Silvia Gazzola University of Padova, Italy Outline

More information

ONP-MF: An Orthogonal Nonnegative Matrix Factorization Algorithm with Application to Clustering

ONP-MF: An Orthogonal Nonnegative Matrix Factorization Algorithm with Application to Clustering ONP-MF: An Orthogonal Nonnegative Matrix Factorization Algorithm with Application to Clustering Filippo Pompili 1, Nicolas Gillis 2, P.-A. Absil 2,andFrançois Glineur 2,3 1- University of Perugia, Department

More information

A fast randomized algorithm for overdetermined linear least-squares regression

A fast randomized algorithm for overdetermined linear least-squares regression A fast randomized algorithm for overdetermined linear least-squares regression Vladimir Rokhlin and Mark Tygert Technical Report YALEU/DCS/TR-1403 April 28, 2008 Abstract We introduce a randomized algorithm

More information

Blind Image Deconvolution Using The Sylvester Matrix

Blind Image Deconvolution Using The Sylvester Matrix Blind Image Deconvolution Using The Sylvester Matrix by Nora Abdulla Alkhaldi A thesis submitted to the Department of Computer Science in conformity with the requirements for the degree of PhD Sheffield

More information

Application to Hyperspectral Imaging

Application to Hyperspectral Imaging Compressed Sensing of Low Complexity High Dimensional Data Application to Hyperspectral Imaging Kévin Degraux PhD Student, ICTEAM institute Université catholique de Louvain, Belgium 6 November, 2013 Hyperspectral

More information

Solving Constrained Rayleigh Quotient Optimization Problem by Projected QEP Method

Solving Constrained Rayleigh Quotient Optimization Problem by Projected QEP Method Solving Constrained Rayleigh Quotient Optimization Problem by Projected QEP Method Yunshen Zhou Advisor: Prof. Zhaojun Bai University of California, Davis yszhou@math.ucdavis.edu June 15, 2017 Yunshen

More information

1. Abstract. 2. Introduction/Problem Statement

1. Abstract. 2. Introduction/Problem Statement Advances in polarimetric deconvolution Capt. Kurtis G. Engelson Air Force Institute of Technology, Student Dr. Stephen C. Cain Air Force Institute of Technology, Professor 1. Abstract One of the realities

More information

Adaptive optics and atmospheric tomography: An inverse problem in telescope imaging

Adaptive optics and atmospheric tomography: An inverse problem in telescope imaging Adaptive optics and atmospheric tomography: An inverse problem in telescope imaging Jonatan Lehtonen Bayesian Inversion guest lecture, 29.01.2018 1 / 19 Atmospheric turbulence Atmospheric turbulence =

More information

BlockMatrixComputations and the Singular Value Decomposition. ATaleofTwoIdeas

BlockMatrixComputations and the Singular Value Decomposition. ATaleofTwoIdeas BlockMatrixComputations and the Singular Value Decomposition ATaleofTwoIdeas Charles F. Van Loan Department of Computer Science Cornell University Supported in part by the NSF contract CCR-9901988. Block

More information

One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017

One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017 One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017 1 One Picture and a Thousand Words Using Matrix Approximations Dianne P. O Leary

More information

Iterative Methods for Smooth Objective Functions

Iterative Methods for Smooth Objective Functions Optimization Iterative Methods for Smooth Objective Functions Quadratic Objective Functions Stationary Iterative Methods (first/second order) Steepest Descent Method Landweber/Projected Landweber Methods

More information

Tensor Methods for Hyperspectral Data Analysis: A Space Object Material Identification Study

Tensor Methods for Hyperspectral Data Analysis: A Space Object Material Identification Study Tensor Methods for Hyperspectral Data Analysis: A Space Object Material Identification Study Qiang Zhang, 1, Han Wang, 2 Robert J. Plemmons 2,3 and V. Paul Pauca 3 1 Department of Biostatistical Sciences,

More information

Computational Methods. Eigenvalues and Singular Values

Computational Methods. Eigenvalues and Singular Values Computational Methods Eigenvalues and Singular Values Manfred Huber 2010 1 Eigenvalues and Singular Values Eigenvalues and singular values describe important aspects of transformations and of data relations

More information

Adaptive Corrected Procedure for TVL1 Image Deblurring under Impulsive Noise

Adaptive Corrected Procedure for TVL1 Image Deblurring under Impulsive Noise Adaptive Corrected Procedure for TVL1 Image Deblurring under Impulsive Noise Minru Bai(x T) College of Mathematics and Econometrics Hunan University Joint work with Xiongjun Zhang, Qianqian Shao June 30,

More information

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725 Numerical Linear Algebra Primer Ryan Tibshirani Convex Optimization 10-725/36-725 Last time: proximal gradient descent Consider the problem min g(x) + h(x) with g, h convex, g differentiable, and h simple

More information

CP DECOMPOSITION AND ITS APPLICATION IN NOISE REDUCTION AND MULTIPLE SOURCES IDENTIFICATION

CP DECOMPOSITION AND ITS APPLICATION IN NOISE REDUCTION AND MULTIPLE SOURCES IDENTIFICATION International Conference on Computer Science and Intelligent Communication (CSIC ) CP DECOMPOSITION AND ITS APPLICATION IN NOISE REDUCTION AND MULTIPLE SOURCES IDENTIFICATION Xuefeng LIU, Yuping FENG,

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

TRACKING SOLUTIONS OF TIME VARYING LINEAR INVERSE PROBLEMS

TRACKING SOLUTIONS OF TIME VARYING LINEAR INVERSE PROBLEMS TRACKING SOLUTIONS OF TIME VARYING LINEAR INVERSE PROBLEMS Martin Kleinsteuber and Simon Hawe Department of Electrical Engineering and Information Technology, Technische Universität München, München, Arcistraße

More information

Reweighted Laplace Prior Based Hyperspectral Compressive Sensing for Unknown Sparsity: Supplementary Material

Reweighted Laplace Prior Based Hyperspectral Compressive Sensing for Unknown Sparsity: Supplementary Material Reweighted Laplace Prior Based Hyperspectral Compressive Sensing for Unknown Sparsity: Supplementary Material Lei Zhang, Wei Wei, Yanning Zhang, Chunna Tian, Fei Li School of Computer Science, Northwestern

More information

Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy

Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy Caroline Chaux Joint work with X. Vu, N. Thirion-Moreau and S. Maire (LSIS, Toulon) Aix-Marseille

More information

Astronomy. Optics and Telescopes

Astronomy. Optics and Telescopes Astronomy A. Dayle Hancock adhancock@wm.edu Small 239 Office hours: MTWR 10-11am Optics and Telescopes - Refraction, lenses and refracting telescopes - Mirrors and reflecting telescopes - Diffraction limit,

More information

Krylov Subspace Methods to Calculate PageRank

Krylov Subspace Methods to Calculate PageRank Krylov Subspace Methods to Calculate PageRank B. Vadala-Roth REU Final Presentation August 1st, 2013 How does Google Rank Web Pages? The Web The Graph (A) Ranks of Web pages v = v 1... Dominant Eigenvector

More information

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations Jin Yun Yuan Plamen Y. Yalamov Abstract A method is presented to make a given matrix strictly diagonally dominant

More information

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 9. Alternating Direction Method of Multipliers

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 9. Alternating Direction Method of Multipliers Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 9 Alternating Direction Method of Multipliers Shiqian Ma, MAT-258A: Numerical Optimization 2 Separable convex optimization a special case is min f(x)

More information

LINEARIZED BREGMAN ITERATIONS FOR FRAME-BASED IMAGE DEBLURRING

LINEARIZED BREGMAN ITERATIONS FOR FRAME-BASED IMAGE DEBLURRING LINEARIZED BREGMAN ITERATIONS FOR FRAME-BASED IMAGE DEBLURRING JIAN-FENG CAI, STANLEY OSHER, AND ZUOWEI SHEN Abstract. Real images usually have sparse approximations under some tight frame systems derived

More information

RANDOM PROJECTION AND SVD METHODS IN HYPERSPECTRAL IMAGING JIANI ZHANG. A Thesis Submitted to the Graduate Faculty of

RANDOM PROJECTION AND SVD METHODS IN HYPERSPECTRAL IMAGING JIANI ZHANG. A Thesis Submitted to the Graduate Faculty of RANDOM PROJECTION AND SVD METHODS IN HYPERSPECTRAL IMAGING BY JIANI ZHANG A Thesis Submitted to the Graduate Faculty of WAKE FOREST UNIVERSITY GRADUATE SCHOOL OF ARTS AND SCIENCES in Partial Fulfillment

More information

Optimization for Compressed Sensing

Optimization for Compressed Sensing Optimization for Compressed Sensing Robert J. Vanderbei 2014 March 21 Dept. of Industrial & Systems Engineering University of Florida http://www.princeton.edu/ rvdb Lasso Regression The problem is to solve

More information

A convex model for non-negative matrix factorization and dimensionality reduction on physical space

A convex model for non-negative matrix factorization and dimensionality reduction on physical space A convex model for non-negative matrix factorization and dimensionality reduction on physical space Ernie Esser Joint work with Michael Möller, Stan Osher, Guillermo Sapiro and Jack Xin University of California

More information

A fast algorithm of two-level banded Toeplitz systems of linear equations with application to image restoration

A fast algorithm of two-level banded Toeplitz systems of linear equations with application to image restoration NTMSCI 5, No. 2, 277-283 (2017) 277 New Trends in Mathematical Sciences http://dx.doi.org/ A fast algorithm of two-level banded Toeplitz systems of linear equations with application to image restoration

More information

Linear Inverse Problems

Linear Inverse Problems Linear Inverse Problems Ajinkya Kadu Utrecht University, The Netherlands February 26, 2018 Outline Introduction Least-squares Reconstruction Methods Examples Summary Introduction 2 What are inverse problems?

More information

Numerical Methods. Rafał Zdunek Underdetermined problems (2h.) Applications) (FOCUSS, M-FOCUSS,

Numerical Methods. Rafał Zdunek Underdetermined problems (2h.) Applications) (FOCUSS, M-FOCUSS, Numerical Methods Rafał Zdunek Underdetermined problems (h.) (FOCUSS, M-FOCUSS, M Applications) Introduction Solutions to underdetermined linear systems, Morphological constraints, FOCUSS algorithm, M-FOCUSS

More information

Application of deconvolution to images from the EGRET gamma-ray telescope

Application of deconvolution to images from the EGRET gamma-ray telescope Application of deconvolution to images from the EGRET gamma-ray telescope Symeon Charalabides, Andy Shearer, Ray Butler (National University of Ireland, Galway, Ireland) ABSTRACT The EGRET gamma-ray telescope

More information

Fast Nonnegative Matrix Factorization with Rank-one ADMM

Fast Nonnegative Matrix Factorization with Rank-one ADMM Fast Nonnegative Matrix Factorization with Rank-one Dongjin Song, David A. Meyer, Martin Renqiang Min, Department of ECE, UCSD, La Jolla, CA, 9093-0409 dosong@ucsd.edu Department of Mathematics, UCSD,

More information

Enhanced Compressive Sensing and More

Enhanced Compressive Sensing and More Enhanced Compressive Sensing and More Yin Zhang Department of Computational and Applied Mathematics Rice University, Houston, Texas, U.S.A. Nonlinear Approximation Techniques Using L1 Texas A & M University

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME MS&E 38 (CME 338 Large-Scale Numerical Optimization Course description Instructor: Michael Saunders Spring 28 Notes : Review The course teaches

More information

ADMM algorithm for demosaicking deblurring denoising

ADMM algorithm for demosaicking deblurring denoising ADMM algorithm for demosaicking deblurring denoising DANIELE GRAZIANI MORPHEME CNRS/UNS I3s 2000 route des Lucioles BP 121 93 06903 SOPHIA ANTIPOLIS CEDEX, FRANCE e.mail:graziani@i3s.unice.fr LAURE BLANC-FÉRAUD

More information

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization Numerical Linear Algebra Primer Ryan Tibshirani Convex Optimization 10-725 Consider Last time: proximal Newton method min x g(x) + h(x) where g, h convex, g twice differentiable, and h simple. Proximal

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

Inverse problem and optimization

Inverse problem and optimization Inverse problem and optimization Laurent Condat, Nelly Pustelnik CNRS, Gipsa-lab CNRS, Laboratoire de Physique de l ENS de Lyon Decembre, 15th 2016 Inverse problem and optimization 2/36 Plan 1. Examples

More information

1 Computing with constraints

1 Computing with constraints Notes for 2017-04-26 1 Computing with constraints Recall that our basic problem is minimize φ(x) s.t. x Ω where the feasible set Ω is defined by equality and inequality conditions Ω = {x R n : c i (x)

More information

Self-Calibration and Biconvex Compressive Sensing

Self-Calibration and Biconvex Compressive Sensing Self-Calibration and Biconvex Compressive Sensing Shuyang Ling Department of Mathematics, UC Davis July 12, 2017 Shuyang Ling (UC Davis) SIAM Annual Meeting, 2017, Pittsburgh July 12, 2017 1 / 22 Acknowledgements

More information

CS598 Machine Learning in Computational Biology (Lecture 5: Matrix - part 2) Professor Jian Peng Teaching Assistant: Rongda Zhu

CS598 Machine Learning in Computational Biology (Lecture 5: Matrix - part 2) Professor Jian Peng Teaching Assistant: Rongda Zhu CS598 Machine Learning in Computational Biology (Lecture 5: Matrix - part 2) Professor Jian Peng Teaching Assistant: Rongda Zhu Feature engineering is hard 1. Extract informative features from domain knowledge

More information

Ill Posed Inverse Problems in Image Processing

Ill Posed Inverse Problems in Image Processing Ill Posed Inverse Problems in Image Processing Introduction, Structured matrices, Spectral filtering, Regularization, Noise revealing I. Hnětynková 1,M.Plešinger 2,Z.Strakoš 3 hnetynko@karlin.mff.cuni.cz,

More information

CSC 576: Variants of Sparse Learning

CSC 576: Variants of Sparse Learning CSC 576: Variants of Sparse Learning Ji Liu Department of Computer Science, University of Rochester October 27, 205 Introduction Our previous note basically suggests using l norm to enforce sparsity in

More information

Integer Least Squares: Sphere Decoding and the LLL Algorithm

Integer Least Squares: Sphere Decoding and the LLL Algorithm Integer Least Squares: Sphere Decoding and the LLL Algorithm Sanzheng Qiao Department of Computing and Software McMaster University 28 Main St. West Hamilton Ontario L8S 4L7 Canada. ABSTRACT This paper

More information

Simultaneous Multi-frame MAP Super-Resolution Video Enhancement using Spatio-temporal Priors

Simultaneous Multi-frame MAP Super-Resolution Video Enhancement using Spatio-temporal Priors Simultaneous Multi-frame MAP Super-Resolution Video Enhancement using Spatio-temporal Priors Sean Borman and Robert L. Stevenson Department of Electrical Engineering, University of Notre Dame Notre Dame,

More information

When Dictionary Learning Meets Classification

When Dictionary Learning Meets Classification When Dictionary Learning Meets Classification Bufford, Teresa 1 Chen, Yuxin 2 Horning, Mitchell 3 Shee, Liberty 1 Mentor: Professor Yohann Tendero 1 UCLA 2 Dalhousie University 3 Harvey Mudd College August

More information

CANONICAL POLYADIC DECOMPOSITION OF HYPERSPECTRAL PATCH TENSORS

CANONICAL POLYADIC DECOMPOSITION OF HYPERSPECTRAL PATCH TENSORS th European Signal Processing Conference (EUSIPCO) CANONICAL POLYADIC DECOMPOSITION OF HYPERSPECTRAL PATCH TENSORS M.A. Veganzones a, J.E. Cohen a, R. Cabral Farias a, K. Usevich a, L. Drumetz b, J. Chanussot

More information

Scientific Computing

Scientific Computing Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting

More information

Compressed Sensing and Robust Recovery of Low Rank Matrices

Compressed Sensing and Robust Recovery of Low Rank Matrices Compressed Sensing and Robust Recovery of Low Rank Matrices M. Fazel, E. Candès, B. Recht, P. Parrilo Electrical Engineering, University of Washington Applied and Computational Mathematics Dept., Caltech

More information

Index. for generalized eigenvalue problem, butterfly form, 211

Index. for generalized eigenvalue problem, butterfly form, 211 Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,

More information

A FREQUENCY DEPENDENT PRECONDITIONED WAVELET METHOD FOR ATMOSPHERIC TOMOGRAPHY

A FREQUENCY DEPENDENT PRECONDITIONED WAVELET METHOD FOR ATMOSPHERIC TOMOGRAPHY Florence, Italy. May 2013 ISBN: 978-88-908876-0-4 DOI: 10.12839/AO4ELT3.13433 A FREQUENCY DEPENDENT PRECONDITIONED WAVELET METHOD FOR ATMOSPHERIC TOMOGRAPHY Mykhaylo Yudytskiy 1a, Tapio Helin 2b, and Ronny

More information

Principal Component Analysis

Principal Component Analysis Machine Learning Michaelmas 2017 James Worrell Principal Component Analysis 1 Introduction 1.1 Goals of PCA Principal components analysis (PCA) is a dimensionality reduction technique that can be used

More information

Inverse Ill Posed Problems in Image Processing

Inverse Ill Posed Problems in Image Processing Inverse Ill Posed Problems in Image Processing Image Deblurring I. Hnětynková 1,M.Plešinger 2,Z.Strakoš 3 hnetynko@karlin.mff.cuni.cz, martin.plesinger@tul.cz, strakos@cs.cas.cz 1,3 Faculty of Mathematics

More information

be a Householder matrix. Then prove the followings H = I 2 uut Hu = (I 2 uu u T u )u = u 2 uut u

be a Householder matrix. Then prove the followings H = I 2 uut Hu = (I 2 uu u T u )u = u 2 uut u MATH 434/534 Theoretical Assignment 7 Solution Chapter 7 (71) Let H = I 2uuT Hu = u (ii) Hv = v if = 0 be a Householder matrix Then prove the followings H = I 2 uut Hu = (I 2 uu )u = u 2 uut u = u 2u =

More information

Sparsity Matters. Robert J. Vanderbei September 20. IDA: Center for Communications Research Princeton NJ.
