ANALYSIS OF p-norm REGULARIZED SUBPROBLEM MINIMIZATION FOR SPARSE PHOTON-LIMITED IMAGE RECOVERY
Aramayis Orkusyan*, Lasith Adhikari†, Joanna Valenzuela*, and Roummel F. Marcia†

*Department of Mathematics, California State University, Fresno, Fresno, CA 93740, USA
†Applied Mathematics, University of California, Merced, Merced, CA 95343, USA

ABSTRACT

Critical to accurate reconstruction of sparse signals from low-dimensional low-photon count observations is the solution of nonlinear optimization problems that promote sparse solutions. In this paper, we explore recovering high-resolution sparse signals from low-resolution measurements corrupted by Poisson noise using a gradient-based optimization approach with non-convex regularization. In particular, we analyze zero-finding methods for solving the p-norm regularized minimization subproblems arising from a sequential quadratic approach. Numerical results from fluorescence molecular tomography are presented.

Index Terms: Photon-limited imaging, nonconvex optimization, sparse reconstruction, l_p-norm, fluorescence molecular tomography.

1. INTRODUCTION

Photon-limited data observations generally follow a Poisson distribution with a certain mean detector photon intensity, i.e., y ~ Poisson(A f*), where y ∈ Z_+^m is a vector of observed photon counts, f* ∈ R_+^n is the vector of true signal intensity, and A ∈ R_+^{m×n} is the system matrix that linearly projects the true signal to the detector photon intensity [1]. The Poisson reconstruction problem has the following constrained optimization form:

    minimize_{f ∈ R^n}  Φ(f) ≡ F(f) + τ pen(f)    subject to f ≥ 0,    (1)

where F(f) is the negative Poisson log-likelihood function

    F(f) = 1^T A f − Σ_{i=1}^m y_i log(e_i^T A f + β),

where 1 is the m-vector of ones, e_i is the i-th column of the m × m identity matrix, β > 0 (typically β ≪ 1), pen: R^n → R is a sparsity-promoting penalty functional, and τ > 0 is a regularization parameter. (This work was supported by NSF Grants CMMI and DMS.) Various convex penalty techniques have previously been used as regularization terms in (1).
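As an illustration of the objective just defined, the following sketch (ours, not part of the paper) evaluates the negative Poisson log-likelihood F(f) and its gradient ∇F(f) = A^T(1 − y/(Af + β)), checking the gradient against central finite differences; the sizes and random data are purely illustrative.

```python
import numpy as np

# Illustrative sizes and data (not from the paper).
rng = np.random.default_rng(0)
m, n, beta = 20, 10, 1e-5
A = rng.random((m, n))      # nonnegative system matrix A
f = rng.random(n)           # a candidate signal f >= 0
y = rng.poisson(A @ f)      # simulated photon counts

def F(f):
    """Negative Poisson log-likelihood (up to a constant in y)."""
    Af = A @ f
    return Af.sum() - (y * np.log(Af + beta)).sum()

def gradF(f):
    """Gradient of F: A^T (1 - y / (A f + beta))."""
    return A.T @ (1.0 - y / (A @ f + beta))

# Central finite-difference check of the analytic gradient.
g, h = gradF(f), 1e-6
for j in range(n):
    e = np.zeros(n); e[j] = h
    fd = (F(f + e) - F(f - e)) / (2.0 * h)
    assert abs(fd - g[j]) < 1e-4
```

The quantity s^k used in the subproblems below is exactly one scaled gradient step on this F.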
For example, when the solution is sparse in the canonical basis, an l_1 norm is simple to implement [2, 3]. A total variation seminorm with a split Bregman approach can also be used [4, 5]. Other related methods include [6, 7, 8, 9]. In this work, we consider the non-convex penalty function pen(f) = ‖f‖_p^p = Σ_{i=1}^n |f_i|^p (0 ≤ p < 1) in (1) as a bridge between the convex l_1 norm and the l_0 counting seminorm [10, 11, 12]. The solution to this nonconvex problem can be found by minimizing a sequence of quadratic models to the function F(f), approximated by a second-order Taylor series expansion in which the Hessian is replaced by a scaled identity matrix α_k I with α_k > 0 [13, 14, 15]. Simplifying the second-order approximation yields a sequence of subproblems of the form

    f^{k+1} = argmin_{f ∈ R^n}  (1/2)‖f − s^k‖_2^2 + (τ/α_k)‖f‖_p^p    subject to f ≥ 0,    (2)

where s^k = f^k − (1/α_k)∇F(f^k). Note that subproblem (2) can be separated into scalar minimization problems of the form

    x*_s = argmin_{x ∈ R}  Ω_s(x) ≡ (1/2)(x − s)^2 + λ x^p    subject to x ≥ 0,    (3)

where x and s denote elements of the vectors f and s^k, respectively, and λ = τ/α_k [16]. Given a regularization parameter λ > 0 and the p-norm for Ω_s(x) in (3), there exists a threshold value γ_p(λ) (that explicitly depends on p and λ) such that if s ≤ γ_p(λ), the global minimum of (3) is x*_s = 0; otherwise, the global minimum will be a non-zero value (see Fig. 1). When s = γ_p(λ), there exists x_γ such that

    Ω_γ(x_γ) = Ω_γ(0)    and    Ω'_γ(x_γ) = 0.    (4)

By solving (4) simultaneously, we can explicitly find the threshold value γ_p(λ) for given p and λ values. Specifically, γ_p(λ) is given by

    γ_p(λ) = (2λ(1−p))^{1/(2−p)} + λp (2λ(1−p))^{(p−1)/(2−p)}
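The threshold behavior described above can be checked numerically. The following sketch (our illustration, with arbitrary λ and p) computes γ_p(λ) from the closed-form expression and verifies by brute force that the global minimizer of Ω_s jumps from 0 to a positive value as s crosses the threshold.

```python
import numpy as np

def gamma_p(lam, p):
    """Threshold gamma_p(lambda): below it, the minimizer of
    Omega_s(x) = 0.5*(x - s)^2 + lam*x^p over x >= 0 is x = 0."""
    xg = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p))   # x_gamma
    return xg + lam * p * xg ** (p - 1.0)

def omega(x, s, lam, p):
    return 0.5 * (x - s) ** 2 + lam * x ** p

lam, p = 0.1, 0.5
g = gamma_p(lam, p)
x = np.linspace(1e-12, 5.0, 200001)   # brute-force grid over x > 0

# Just below the threshold: the global minimum is at x = 0.
s = g - 1e-3
assert omega(0.0, s, lam, p) < omega(x, s, lam, p).min()

# Just above the threshold: some x > 0 beats x = 0.
s = g + 1e-3
assert omega(x, s, lam, p).min() < omega(0.0, s, lam, p)
```

At s = γ_p(λ) exactly, the two minima tie, which is precisely condition (4).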
(see [17] for details). For any s > γ_p(λ), the unique minimum x*_s of Ω_s(x) is greater than 0 and is obtained by setting Ω'_s to 0:

    Ω'_s(x*_s) = x*_s − s + λp (x*_s)^{p−1} = 0.    (5)

We now describe zero-finding algorithms to compute the root x*_s. To our knowledge, this work is the first careful analysis of these minimization techniques for solving the non-convex quadratic subproblem given by (3).

Fig. 1. The plot of the scalar quadratic function Ω_s(x), with p = 0.5 and fixed λ. (a) When s is less than the specific threshold value γ_p(λ), then x*_s = 0 is the unique global minimum. (b) When s = γ_p(λ), there are global minima at x = 0 and x_γ. If s > γ_p(λ), then the global minimum is uniquely at some x*_s > 0.

2. ZERO-FINDING METHODS

2.1. Fixed-Point Iteration Method

A point x* is said to be a fixed point of a function G(x) if G(x*) = x*. Setting Ω'_s(x) equal to zero, we have x = s − λp x^{p−1}. The fixed-point iteration method is an iterative method for finding fixed points of a function. In particular, it defines a sequence of points {x_n} given by x_{n+1} = G(x_n). Previous methods for finding the root of Ω'_s(x) use the fixed-point iteration (see, e.g., [16, 17]):

    x_{n+1} = g(x_n) = s − λp x_n^{p−1}.    (6)

2.2. Newton's Method

Note that there are various ways of defining fixed-point iterations. One particular fixed-point formulation is Newton's method, which is given by the iterations x_{n+1} = G(x_n) = x_n − Ω'_s(x_n)/Ω''_s(x_n). In our case, the iterations for Newton's method are given by

    x_{n+1} = x_n − (x_n − s + λp x_n^{p−1}) / (1 + λp(p−1) x_n^{p−2})
            = (s + λp(p−2) x_n^{p−1}) / (1 + λp(p−1) x_n^{p−2}).

In order to simplify the computation of this iteration and avoid computing two different roots x_n^{p−1} and x_n^{p−2}, we multiply the numerator and denominator by x_n^{2−p}:

    x_{n+1} = (s x_n^{2−p} + λp(p−2) x_n) / (x_n^{2−p} + λp(p−1)).    (7)

The performance of fixed-point iteration and Newton's method very much depends on the choice of the initial point, which we discuss next.

2.3. Initialization

When s = γ_p(λ), the solution x_γ such that Ω'_γ(x_γ) = x_γ − γ_p(λ) + λp (x_γ)^{p−1} = 0 is given explicitly by x_γ = (2λ(1−p))^{1/(2−p)}.
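A minimal implementation of the two iterations (6) and (7), assuming the derivations above (function names and parameter values are ours):

```python
def fixed_point_root(s, lam, p, x0, iters=200):
    """Fixed-point iteration (6): x <- s - lam*p*x^(p-1)."""
    x = x0
    for _ in range(iters):
        x = s - lam * p * x ** (p - 1.0)
    return x

def newton_root(s, lam, p, x0, iters=50):
    """Newton's method in the division-free form (7):
    x <- (s*x^(2-p) + lam*p*(p-2)*x) / (x^(2-p) + lam*p*(p-1))."""
    x = x0
    for _ in range(iters):
        t = x ** (2.0 - p)
        x = (s * t + lam * p * (p - 2.0) * x) / (t + lam * p * (p - 1.0))
    return x

lam, p = 0.1, 0.5
s = 2.0   # well above the threshold gamma_p(lambda)
xf = fixed_point_root(s, lam, p, x0=s)
xn = newton_root(s, lam, p, x0=s)

# Both roots should satisfy the stationarity condition (5).
for x in (xf, xn):
    assert abs(x - s + lam * p * x ** (p - 1.0)) < 1e-8
assert abs(xf - xn) < 1e-8
```

Both iterations here start at x_0 = s, the initialization motivated in the next subsection.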
For s = γ_p(λ) + ε with some ε > 0, we now analyze how to estimate x*_s to initialize the zero-finding methods described previously.

First-order Taylor series approximation. To define the initial point, we can linearize Ω'_s(x) around x_γ and find the zero of the linearization. More specifically,

    Ω'_s(x_γ + δ) ≈ Ω'_s(x_γ) + δ Ω''_s(x_γ)
                  = x_γ − (γ_p(λ) + ε) + λp (x_γ)^{p−1} + δ (1 + λp(p−1)(x_γ)^{p−2})
                  = −ε + δ (1 + λp(p−1)(x_γ)^{p−2}).

Setting this equal to zero and solving for δ suggests the use of the initialization x_0 = x_γ + δ, where

    δ = ε / (1 + λp(p−1)(x_γ)^{p−2}).

Second-order Taylor series approximation. Similarly, we can use a second-order Taylor approximation to Ω'_s around x_γ:

    Ω'_s(x_γ + δ) ≈ Ω'_s(x_γ) + δ Ω''_s(x_γ) + (δ^2/2) Ω'''_s(x_γ),

which yields the following approximation: x_0 = x_γ + δ, where

    δ = (−b + √(b^2 − 4ac)) / (2a),

with a = (1/2) λp(p−1)(p−2)(x_γ)^{p−3}, b = 1 + λp(p−1)(x_γ)^{p−2}, and c = −ε.

The linearization and second-order Taylor approximation, however, diverge quickly from the true solution as s becomes large (see Fig. 2). We now discuss bounds on x*_s that allow us to make more effective initial approximations to x*_s. We first prove a lemma, which will be useful in showing bounds on x*_s as well as other results.
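As a numerical sanity check of these two initializations (our sketch; the values of ε, λ, and p are arbitrary), the following compares them against a high-accuracy root obtained by bisection:

```python
import math

lam, p = 0.1, 0.5
xg = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p))   # x_gamma
gamma = xg + lam * p * xg ** (p - 1.0)              # gamma_p(lambda)
eps = 1e-4
s = gamma + eps

# High-accuracy root of Omega'_s(x) = x - s + lam*p*x^(p-1) by bisection;
# the root lies in (x_gamma, s) for s slightly above the threshold.
def dOmega(x):
    return x - s + lam * p * x ** (p - 1.0)
lo, hi = xg, s
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if dOmega(mid) < 0 else (lo, mid)
x_true = 0.5 * (lo + hi)

# First-order initialization: x0 = x_gamma + eps / Omega''(x_gamma).
b = 1.0 + lam * p * (p - 1.0) * xg ** (p - 2.0)     # Omega''(x_gamma)
x_lin = xg + eps / b

# Second-order initialization via the quadratic formula.
a = 0.5 * lam * p * (p - 1.0) * (p - 2.0) * xg ** (p - 3.0)
c = -eps
x_quad = xg + (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

# The quadratic initialization is closer to the true root.
assert abs(x_quad - x_true) < abs(x_lin - x_true) < 1e-3
```

For small ε both initializations land very close to the root; the divergence noted in the text appears only as s grows far beyond γ_p(λ).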
Fig. 2. Approximations to Ω'_γ(x) centered at x_γ. As x increases, both the linear and quadratic Taylor approximations diverge from Ω'_γ(x). In contrast, the approximation l(x) = x − s, which consists of the first two terms of Ω'_γ(x), is more accurate for large values of x.

Lemma 1. Let λ > 0 and 0 ≤ p < 1. Then for s ≥ γ_p(λ),

    λp(1−p)(x*_s)^{p−2} ≤ p/2.

Proof. Recall that for s = γ_p(λ), there exists an x_γ > 0 such that (4) holds. From Ω_γ(x_γ) = Ω_γ(0), we can obtain

    (1/2) x_γ + λ (x_γ)^{p−1} = γ_p(λ),    (8)

and from Ω'_γ(x_γ) = 0, we have

    x_γ + λp (x_γ)^{p−1} = γ_p(λ).    (9)

Setting (8) equal to (9) and with some algebraic manipulation, we have λp(1−p)(x_γ)^{p−2} = p/2. For s > γ_p(λ), the unique minimizer satisfies x*_s > x_γ. Thus,

    λp(1−p)(x*_s)^{p−2} < λp(1−p)(x_γ)^{p−2} = p/2,

which completes the proof.

This result allows us to prove the following theorem, which bounds the minimizer x*_s of Ω_s(x):

Theorem 1. For λ > 0 and 0 ≤ p ≤ 2/3, the minimizer x*_s of Ω_s is bounded by s/2 ≤ x*_s ≤ s. If p ≤ 1/2, then the minimizer is further bounded by (2/3)s ≤ x*_s ≤ s.

Proof. Recall that the minimizer of Ω_s solves Ω'_s(x*_s) = 0. Solving for x*_s, we have

    x*_s = s / (1 + λp (x*_s)^{p−2}).    (10)

Rewriting the main result of Lemma 1, λp(x*_s)^{p−2} ≤ p/(2(1−p)). Observe that if p ≤ 2/3, then λp(x*_s)^{p−2} ≤ 1, so that 1 + λp(x*_s)^{p−2} ≤ 2; if p ≤ 1/2, then λp(x*_s)^{p−2} ≤ 1/2, so that 1 + λp(x*_s)^{p−2} ≤ 3/2. Using these bounds in (10) yields the desired results.

Note that Theorem 1 implies that as s increases, so does x*_s. Moreover, as s → ∞, (x*_s)^{p−1} → 0, and therefore, by (5), x*_s → s. Thus, a sensible initial estimate for x*_s is s.

Fixed-point initialized Newton's method. We can improve the initial guess from s by finding a point between x*_s and s. The mean value theorem guarantees the existence of ξ ∈ (x*_s, s) such that

    Ω''_s(ξ) = (Ω'_s(s) − Ω'_s(x*_s)) / (s − x*_s).

Rearranging, we find that

    x*_s = s − (Ω'_s(s) − Ω'_s(x*_s)) / Ω''_s(ξ) = s − λp s^{p−1} / (1 − λp(1−p) ξ^{p−2}).

By Lemma 1, 0 < λp(1−p) ξ^{p−2} < p/2 < 1, and thus, s − λp s^{p−1} ∈ (x*_s, s). We note that this is precisely the first fixed-point iteration initialized at x_0 = s.

2.4. Convergence

Guarantee of convergence. Let e_n = x_n − x*_s and e_{n+1} = x_{n+1} − x*_s represent the errors at the n-th and (n+1)-th iterations, respectively.
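The bounds in Theorem 1 and the fixed-point-initialized estimate s − λp s^{p−1} can be verified numerically; the following sketch (ours, with arbitrary parameter values) uses bisection on Ω'_s to compute x*_s:

```python
def root(s, lam, p):
    """Minimizer x*_s of Omega_s for s > gamma_p(lambda), via bisection
    on Omega'_s(x) = x - s + lam*p*x^(p-1) over (x_c, s]."""
    xc = (lam * p * (1.0 - p)) ** (1.0 / (2.0 - p))  # critical point of Omega'_s
    lo, hi = xc, s
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid - s + lam * p * mid ** (p - 1.0) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = 0.1
for p in (0.3, 0.5):                 # two cases with p <= 1/2 <= 2/3
    xg = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p))
    gamma = xg + lam * p * xg ** (p - 1.0)
    for s in (gamma + 0.01, 1.0, 10.0):
        xs = root(s, lam, p)
        assert s / 2.0 <= xs <= s            # Theorem 1, general bound
        if p <= 0.5:
            assert 2.0 * s / 3.0 <= xs       # Theorem 1, p <= 1/2 bound
        est = s - lam * p * s ** (p - 1.0)   # one fixed-point step from s
        assert xs <= est < s                 # lies between x*_s and s
```

The last assertion confirms that one fixed-point step from x_0 = s already lands between the root and s, which is why it makes a good initializer for Newton's method.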
For fixed-point iteration, we have

    e_{n+1} = x_{n+1} − x*_s = G(x_n) − x*_s = G(x*_s + e_n) − x*_s
            = G(x*_s) + e_n G'(x*_s) + (1/2) e_n^2 G''(ξ) − x*_s
            = x*_s + e_n G'(x*_s) + (1/2) e_n^2 G''(ξ) − x*_s
            = e_n G'(x*_s) + (1/2) e_n^2 G''(ξ).

For small e_n, e_{n+1} ≈ e_n G'(x*_s). In our context, G(x) = s − λp x^{p−1} and G'(x) = λp(1−p) x^{p−2}. By Lemma 1, G'(x*_s) < 1. Therefore, the error is decreasing, and the fixed-point iteration method is guaranteed to converge.

To show that Newton's method is guaranteed to converge, let x_c be a critical point of Ω'_s(x), i.e., Ω''_s(x_c) = 0. In particular, x_c = (λp(1−p))^{1/(2−p)}, and for any x > x_c,

    Ω''_s(x) = 1 + λp(p−1) x^{p−2} > 0,

i.e., Ω'_s(x) is increasing on the interval (x_c, ∞). Then,

    Ω'''_s(x) = λp(p−1)(p−2) x^{p−3} > 0 for all x ∈ (0, ∞),

which implies that Ω'_s(x) is convex. Finally, we note that x_c < (2λ(1−p))^{1/(2−p)} = x_γ, i.e., Ω'_s(x) has a root in (x_c, ∞). Therefore, Ω'_s(x) is increasing, convex, and has a zero in (x_c, ∞), and Newton's method is guaranteed to converge from any starting point in the interval (x_c, ∞) (see the theorem on p. 86 in [18]).

Rate of convergence. Let ε be some set tolerance such that, at the n-th iteration, if the error e_n = |x_n − x*_s| ≤ ε, then we
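The linear versus quadratic error contraction can be observed directly by taking one step of each method from the same starting point (our sketch; constants and tolerances are illustrative):

```python
lam, p, s = 0.1, 0.5, 2.0

def dOmega(x):
    return x - s + lam * p * x ** (p - 1.0)

# High-accuracy reference root via bisection on (1, s).
lo, hi = 1.0, s
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if dOmega(mid) < 0 else (lo, mid)
xstar = 0.5 * (lo + hi)

# One step of each method from x0 = s.
x = s
e0 = abs(x - xstar)
x_fp = s - lam * p * x ** (p - 1.0)                      # iteration (6)
t = x ** (2.0 - p)
x_nt = (s * t + lam * p * (p - 2.0) * x) / (t + lam * p * (p - 1.0))  # (7)

# Linear rate constant C1 = G'(x*) from the analysis above.
C1 = lam * p * (1.0 - p) * xstar ** (p - 2.0)
assert abs(x_fp - xstar) <= C1 * e0 * 1.1    # roughly linear contraction
assert abs(x_nt - xstar) <= e0 ** 2 * 10.0   # roughly quadratic contraction
```

One fixed-point step shrinks the error by the factor C1, while one Newton step shrinks it to the order of the squared initial error, matching the rate analysis that follows.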
will consider the algorithm to have converged to the root. For fixed-point iteration, we have convergence when

    ε ≥ |e_n| = C_1 |e_{n−1}| = C_1^n |e_0|,    (11)

where C_1 = G'(x*_s) = λp(1−p)(x*_s)^{p−2}. Solving for n, the number of iterations required to converge, we have

    n_FixedPoint ≥ (ln ε − ln |e_0|) / ln C_1.    (12)

For Newton's method, we have convergence when

    ε ≥ |e_n| = C_2 |e_{n−1}|^2 = C_2^{2^n − 1} |e_0|^{2^n},    (13)

where

    C_2 = λp(1−p)(2−p)(x*_s)^{p−3} / (2 (1 − λp(1−p)(x*_s)^{p−2})).

Solving for n in (13) yields

    n_Newton ≥ (1/ln 2) ln( (ln C_2 + ln ε) / (ln C_2 + ln |e_0|) ).    (14)

Fig. 3 shows the theoretical number of iterations for fixed-point iteration and Newton's method to converge. Note that when s is near γ_p(λ), fixed-point iteration takes many more iterations than Newton's method. However, for large s, fixed-point iteration only requires four iterations. Although this is still twice as many iterations as Newton's method, the number of floating-point operations per fixed-point iteration is much smaller than that for Newton's method (compare (6) and (7)). Since s can take on any real value, we expect the average performance of fixed-point iteration and Newton's method to be comparable, which we see in the next section.

Fig. 3. Theoretical number of iterations required to converge as a function of s. Here p = 0.5, ε = 10^{−8}, e_0 = s, and s ≥ γ_p(λ).

3. NUMERICAL EXPERIMENTS

We simulated a 3D cubic phantom with two embedded fluorescence capillary rod targets (see, e.g., [19, 20]). For the finite element mesh, there are a total of 8,690 nodes inside the 3D cube, while only 36 nodes are located inside the two rods. The fluorophore concentration of the nodes is set to 7 inside the two rods and 0 outside. We chose a total of 20 excitation source positions and 1,057 detector positions on the top surface of the cube, which gives us 20 × 1,057 = 21,140 measurements. About one-tenth of all the measurements were used. We assumed that the excitation wavelength is 650 nm and the emission wavelength is 700 nm in the construction of the system matrix A. The tissue optical properties were µ_a = 0.01 mm^{−1} and µ_s = 1.4 mm^{−1} at both 650 nm and 700 nm. For this experiment, the simulated measurement vector y is corrupted by Poisson noise with a signal-to-noise ratio (SNR) of 3 dB (about 57% noise). In our method, we used p = 0.74 and A^T y as the initial guess. Fig. 4 shows the true signal f* and our reconstruction.

    Method                   Time (sec)   Iterations
    Fixed-point iteration    0.89         ,8,974
    Newton's method          0.8          476,585

Table 1. Average time and iterations over repeated trials for fixed-point iteration and Newton's method to reconstruct the fluorescence molecular tomography data.

Fig. 4. (a) Horizontal slices of the simulated fluorescence capillary rod targets. (b) Reconstruction using p-norm regularized subproblem minimization.

4. CONCLUDING REMARKS

In this paper, we analyzed methods for solving the p-norm regularized subproblems arising from minimizing the Poisson log-likelihood for reconstructing sparse signals from photon-limited measurements. These nonconvex subproblems do not have closed-form solutions, and as such, they require numerical approaches for computing the minimizers. While Newton's method in theory should converge to the solution faster than fixed-point iteration, the number of floating-point operations needed to perform each iteration offsets the computational advantage of using derivative information.

Acknowledgments. We thank Dr. Dianwen Zhu and Prof. Changqing Li for providing the numerical codes for generating the fluorescence molecular tomography data.
5. REFERENCES

[1] D. L. Snyder and M. I. Miller, Random Point Processes in Space and Time, Springer-Verlag, New York, NY, 1991.

[2] D. Lingenfelter, J. Fessler, and Z. He, Sparsity regularization for image reconstruction with Poisson data, in Proc. SPIE Computational Imaging VII, 2009, vol. 7246.

[3] Z. T. Harmany, R. F. Marcia, and R. M. Willett, Sparse Poisson intensity reconstruction algorithms, in Proceedings of the IEEE Statistical Signal Processing Workshop, Cardiff, Wales, UK, September 2009.

[4] M. Figueiredo and J. Bioucas-Dias, Deconvolution of Poissonian images using variable splitting and augmented Lagrangian optimization, in Proc. IEEE Workshop on Statistical Signal Processing, 2009.

[5] S. Setzer, G. Steidl, and T. Teuber, Deblurring Poissonian images by split Bregman techniques, J. Visual Commun. Image Represent., 2009, in press.

[6] J. A. Fessler and A. O. Hero, Penalized maximum-likelihood image reconstruction using space-alternating generalized EM algorithms, IEEE Trans. Image Processing, vol. 4, no. 10, pp. 1417–1429, 1995.

[7] S. Ahn and J. Fessler, Globally convergent image reconstruction for emission tomography using relaxed ordered subsets algorithms, IEEE Trans. Med. Imaging, vol. 22, no. 5, pp. 613–626, May 2003.

[8] J. A. Fessler and H. Erdoğan, A paraboloidal surrogates algorithm for convergent penalized-likelihood emission image reconstruction, in Proc. IEEE Nuc. Sci. Symp. Med. Im. Conf., 1998.

[9] M. Raginsky, R. M. Willett, Z. T. Harmany, and R. F. Marcia, Compressed sensing performance bounds under Poisson noise, IEEE Trans. Signal Process., vol. 58, no. 8, pp. 3990–4002, 2010.

[10] R. Chartrand and W. Yin, Iteratively reweighted algorithms for compressive sensing, in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, 2008, pp. 3869–3872.

[11] R. Chartrand, Exact reconstruction of sparse signals via nonconvex minimization, IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707–710, Oct. 2007.

[12] R. Chartrand and V. Staneva, Restricted isometry properties and nonconvex compressive sensing, Inverse Problems, vol. 24, no. 3, p. 035020, 2008.

[13] J. Barzilai and J. M. Borwein, Two-point step size gradient methods, IMA J. Numer. Anal., vol. 8, no. 1, pp. 141–148, 1988.

[14] S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, Sparse reconstruction by separable approximation, IEEE Trans. Signal Processing, vol. 57, no. 7, pp. 2479–2493, 2009.

[15] Z. T. Harmany, R. F. Marcia, and R. M. Willett, This is SPIRAL-TAP: Sparse Poisson intensity reconstruction algorithms - theory and practice, IEEE Trans. Image Processing, vol. 21, no. 3, pp. 1084–1096, 2012.

[16] L. Adhikari and R. F. Marcia, Nonconvex relaxation for Poisson intensity reconstruction, in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, Brisbane, Australia, April 2015.

[17] W. Zuo, D. Meng, L. Zhang, X. Feng, and D. Zhang, A generalized iterated shrinkage algorithm for nonconvex sparse coding, in 2013 IEEE International Conference on Computer Vision (ICCV), Dec. 2013, pp. 217–224.

[18] D. R. Kincaid and E. W. Cheney, Numerical Analysis: Mathematics of Scientific Computing, American Mathematical Society, 3rd edition, 2002.

[19] C. Li, G. S. Mitchell, J. Dutta, S. Ahn, R. M. Leahy, and S. R. Cherry, A three-dimensional multispectral fluorescence optical tomography imaging system for small animals based on a conical mirror design, Opt. Express, vol. 17, no. 9, Apr. 2009.

[20] F. Fedele, J. P. Laible, and M. J. Eppstein, Coupled complex adjoint sensitivities for frequency-domain fluorescence tomography: theory and vectorized implementation, Journal of Computational Physics, vol. 187, no. 2, 2003.
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER 2011 7255 On the Performance of Sparse Recovery Via `p-minimization (0 p 1) Meng Wang, Student Member, IEEE, Weiyu Xu, and Ao Tang, Senior
More informationLarge-Scale L1-Related Minimization in Compressive Sensing and Beyond
Large-Scale L1-Related Minimization in Compressive Sensing and Beyond Yin Zhang Department of Computational and Applied Mathematics Rice University, Houston, Texas, U.S.A. Arizona State University March
More informationNon-negative Quadratic Programming Total Variation Regularization for Poisson Vector-Valued Image Restoration
University of New Mexico UNM Digital Repository Electrical & Computer Engineering Technical Reports Engineering Publications 5-10-2011 Non-negative Quadratic Programming Total Variation Regularization
More information1 Computing with constraints
Notes for 2017-04-26 1 Computing with constraints Recall that our basic problem is minimize φ(x) s.t. x Ω where the feasible set Ω is defined by equality and inequality conditions Ω = {x R n : c i (x)
More informationYURI LEVIN AND ADI BEN-ISRAEL
Pp. 1447-1457 in Progress in Analysis, Vol. Heinrich G W Begehr. Robert P Gilbert and Man Wah Wong, Editors, World Scientiic, Singapore, 003, ISBN 981-38-967-9 AN INVERSE-FREE DIRECTIONAL NEWTON METHOD
More informationRelating axial motion of optical elements to focal shift
Relating aial motion o optical elements to ocal shit Katie Schwertz and J. H. Burge College o Optical Sciences, University o Arizona, Tucson AZ 857, USA katie.schwertz@gmail.com ABSTRACT In this paper,
More informationDefinition: Let f(x) be a function of one variable with continuous derivatives of all orders at a the point x 0, then the series.
2.4 Local properties o unctions o several variables In this section we will learn how to address three kinds o problems which are o great importance in the ield o applied mathematics: how to obtain the
More informationFast Angular Synchronization for Phase Retrieval via Incomplete Information
Fast Angular Synchronization for Phase Retrieval via Incomplete Information Aditya Viswanathan a and Mark Iwen b a Department of Mathematics, Michigan State University; b Department of Mathematics & Department
More information! " k x 2k$1 # $ k x 2k. " # p $ 1! px! p " p 1 # !"#$%&'"()'*"+$",&-('./&-/. !"#$%&'()"*#%+!'",' -./#")'.,&'+.0#.1)2,'!%)2%! !"#$%&'"%(")*$+&#,*$,#
"#$%&'()"*#%+'",' -./#")'.,&'+.0#.1)2,' %)2% "#$%&'"()'*"+$",&-('./&-/. Taylor Series o a unction at x a is " # a k " # " x a# k k0 k It is a Power Series centered at a. Maclaurin Series o a unction is
More informationPhysics 2B Chapter 17 Notes - First Law of Thermo Spring 2018
Internal Energy o a Gas Work Done by a Gas Special Processes The First Law o Thermodynamics p Diagrams The First Law o Thermodynamics is all about the energy o a gas: how much energy does the gas possess,
More informationSTAT 801: Mathematical Statistics. Hypothesis Testing
STAT 801: Mathematical Statistics Hypothesis Testing Hypothesis testing: a statistical problem where you must choose, on the basis o data X, between two alternatives. We ormalize this as the problem o
More informationAccelerated Dual Gradient-Based Methods for Total Variation Image Denoising/Deblurring Problems (and other Inverse Problems)
Accelerated Dual Gradient-Based Methods for Total Variation Image Denoising/Deblurring Problems (and other Inverse Problems) Donghwan Kim and Jeffrey A. Fessler EECS Department, University of Michigan
More informationOn the Girth of (3,L) Quasi-Cyclic LDPC Codes based on Complete Protographs
On the Girth o (3,L) Quasi-Cyclic LDPC Codes based on Complete Protographs Sudarsan V S Ranganathan, Dariush Divsalar and Richard D Wesel Department o Electrical Engineering, University o Caliornia, Los
More informationObjectives. By the time the student is finished with this section of the workbook, he/she should be able
FUNCTIONS Quadratic Functions......8 Absolute Value Functions.....48 Translations o Functions..57 Radical Functions...61 Eponential Functions...7 Logarithmic Functions......8 Cubic Functions......91 Piece-Wise
More informationRelaxed linearized algorithms for faster X-ray CT image reconstruction
Relaxed linearized algorithms for faster X-ray CT image reconstruction Hung Nien and Jeffrey A. Fessler University of Michigan, Ann Arbor The 13th Fully 3D Meeting June 2, 2015 1/20 Statistical image reconstruction
More informationCourse Notes for EE227C (Spring 2018): Convex Optimization and Approximation
Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu
More informationGF(4) Based Synthesis of Quaternary Reversible/Quantum Logic Circuits
GF(4) ased Synthesis o Quaternary Reversible/Quantum Logic Circuits MOZAMMEL H. A. KHAN AND MAREK A. PERKOWSKI Department o Computer Science and Engineering, East West University 4 Mohakhali, Dhaka, ANGLADESH,
More informationOptimization transfer approach to joint registration / reconstruction for motion-compensated image reconstruction
Optimization transfer approach to joint registration / reconstruction for motion-compensated image reconstruction Jeffrey A. Fessler EECS Dept. University of Michigan ISBI Apr. 5, 00 0 Introduction Image
More informationWEAK AND STRONG CONVERGENCE OF AN ITERATIVE METHOD FOR NONEXPANSIVE MAPPINGS IN HILBERT SPACES
Applicable Analysis and Discrete Mathematics available online at http://pemath.et.b.ac.yu Appl. Anal. Discrete Math. 2 (2008), 197 204. doi:10.2298/aadm0802197m WEAK AND STRONG CONVERGENCE OF AN ITERATIVE
More information1 Newton s Method. Suppose we want to solve: x R. At x = x, f (x) can be approximated by:
Newton s Method Suppose we want to solve: (P:) min f (x) At x = x, f (x) can be approximated by: n x R. f (x) h(x) := f ( x)+ f ( x) T (x x)+ (x x) t H ( x)(x x), 2 which is the quadratic Taylor expansion
More informationProducts and Convolutions of Gaussian Probability Density Functions
Tina Memo No. 003-003 Internal Report Products and Convolutions o Gaussian Probability Density Functions P.A. Bromiley Last updated / 9 / 03 Imaging Science and Biomedical Engineering Division, Medical
More informationA NONCONVEX ADMM ALGORITHM FOR GROUP SPARSITY WITH SPARSE GROUPS. Rick Chartrand and Brendt Wohlberg
A NONCONVEX ADMM ALGORITHM FOR GROUP SPARSITY WITH SPARSE GROUPS Rick Chartrand and Brendt Wohlberg Los Alamos National Laboratory Los Alamos, NM 87545, USA ABSTRACT We present an efficient algorithm for
More informationIntroduction to Compressed Sensing
Introduction to Compressed Sensing Alejandro Parada, Gonzalo Arce University of Delaware August 25, 2016 Motivation: Classical Sampling 1 Motivation: Classical Sampling Issues Some applications Radar Spectral
More informationTwo-step self-tuning phase-shifting interferometry
Two-step sel-tuning phase-shiting intererometry J. Vargas, 1,* J. Antonio Quiroga, T. Belenguer, 1 M. Servín, 3 J. C. Estrada 3 1 Laboratorio de Instrumentación Espacial, Instituto Nacional de Técnica
More informationRoot Finding and Optimization
Root Finding and Optimization Ramses van Zon SciNet, University o Toronto Scientiic Computing Lecture 11 February 11, 2014 Root Finding It is not uncommon in scientiic computing to want solve an equation
More informationStrain and Stress Measurements with a Two-Dimensional Detector
Copyright ISSN (C) 97-, JCPDS-International Advances in X-ray Centre Analysis, or Volume Diraction 4 Data 999 5 Strain and Stress Measurements with a Two-Dimensional Detector Baoping Bob He and Kingsley
More informationComplexity Analysis of Interior Point Algorithms for Non-Lipschitz and Nonconvex Minimization
Mathematical Programming manuscript No. (will be inserted by the editor) Complexity Analysis of Interior Point Algorithms for Non-Lipschitz and Nonconvex Minimization Wei Bian Xiaojun Chen Yinyu Ye July
More informationStrong Lyapunov Functions for Systems Satisfying the Conditions of La Salle
06 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 49, NO. 6, JUNE 004 Strong Lyapunov Functions or Systems Satisying the Conditions o La Salle Frédéric Mazenc and Dragan Ne sić Abstract We present a construction
More informationConstrained Optimization Theory
Constrained Optimization Theory Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Constrained Optimization Theory IMA, August
More informationCHAPTER 8 ANALYSIS OF AVERAGE SQUARED DIFFERENCE SURFACES
CAPTER 8 ANALYSS O AVERAGE SQUARED DERENCE SURACES n Chapters 5, 6, and 7, the Spectral it algorithm was used to estimate both scatterer size and total attenuation rom the backscattered waveorms by minimizing
More informationA Theoretical Framework for the Regularization of Poisson Likelihood Estimation Problems
c de Gruyter 2007 J. Inv. Ill-Posed Problems 15 (2007), 12 8 DOI 10.1515 / JIP.2007.002 A Theoretical Framework for the Regularization of Poisson Likelihood Estimation Problems Johnathan M. Bardsley Communicated
More informationLab on Taylor Polynomials. This Lab is accompanied by an Answer Sheet that you are to complete and turn in to your instructor.
Lab on Taylor Polynomials This Lab is accompanied by an Answer Sheet that you are to complete and turn in to your instructor. In this Lab we will approimate complicated unctions by simple unctions. The
More information5.6 Penalty method and augmented Lagrangian method
5.6 Penalty method and augmented Lagrangian method Consider a generic NLP problem min f (x) s.t. c i (x) 0 i I c i (x) = 0 i E (1) x R n where f and the c i s are of class C 1 or C 2, and I and E are the
More informationHigher-Order Methods
Higher-Order Methods Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. PCMI, July 2016 Stephen Wright (UW-Madison) Higher-Order Methods PCMI, July 2016 1 / 25 Smooth
More informationRoberto s Notes on Differential Calculus Chapter 8: Graphical analysis Section 1. Extreme points
Roberto s Notes on Dierential Calculus Chapter 8: Graphical analysis Section 1 Extreme points What you need to know already: How to solve basic algebraic and trigonometric equations. All basic techniques
More informationEUSIPCO
EUSIPCO 013 1569746769 SUBSET PURSUIT FOR ANALYSIS DICTIONARY LEARNING Ye Zhang 1,, Haolong Wang 1, Tenglong Yu 1, Wenwu Wang 1 Department of Electronic and Information Engineering, Nanchang University,
More informationInterior-Point Methods for Linear Optimization
Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function
More informationFisher Consistency of Multicategory Support Vector Machines
Fisher Consistency o Multicategory Support Vector Machines Yueng Liu Department o Statistics and Operations Research Carolina Center or Genome Sciences University o North Carolina Chapel Hill, NC 7599-360
More informationMax Margin-Classifier
Max Margin-Classifier Oliver Schulte - CMPT 726 Bishop PRML Ch. 7 Outline Maximum Margin Criterion Math Maximizing the Margin Non-Separable Data Kernels and Non-linear Mappings Where does the maximization
More informationEnhanced Compressive Sensing and More
Enhanced Compressive Sensing and More Yin Zhang Department of Computational and Applied Mathematics Rice University, Houston, Texas, U.S.A. Nonlinear Approximation Techniques Using L1 Texas A & M University
More information