Noise Analysis of Regularized EM for SPECT Reconstruction
Wenli Wang and Gene Gindi
Departments of Electrical Engineering and Radiology
SUNY at Stony Brook, Stony Brook, NY

Abstract

The ability to theoretically model the propagation of photon noise through tomographic reconstruction algorithms is crucial in evaluating reconstructed image quality as a function of the parameters of the algorithm. Here, we present theoretical expressions for the propagation of Poisson noise through tomographic SPECT reconstructions using regularized EM algorithms with independent Gamma and multivariate Gaussian priors. Our analysis extends the work in [1], in which judicious linearizations were used to enable the propagation of a mean image and covariance matrix from one iteration to the next for the (unregularized) EM algorithm. To validate our theoretical analyses, we use the methodology of [2] to compare the results of theoretical calculations to Monte Carlo simulations. We also demonstrate an application of the theory to the calculation of an optimal smoothing parameter for a regularized reconstruction. The smoothing parameter is optimal in the context of a quantitation task, defined as the minimization of the expected mean-square error of an estimated number of counts in a hot lesion region. Our results thus demonstrate how the theory can be applied to a problem of potential practical use.

I. INTRODUCTION

Reconstruction algorithms are often justified in terms of simple image quality metrics such as rms error, but a more meaningful approach advocated in recent years is to base the justification on task performance metrics. In this approach, reconstructions are obtained for an ensemble of representative objects and noise realizations. A task is defined (e.g. lesion quantitation) and the task is performed by a mathematical observer that derives some test statistic (e.g. counts in a region) for each of the reconstructions.
An algorithm is successful insofar as it yields good average performance according to some criterion (e.g. low bias and variance in the quantitation estimate). While the above approach lends itself readily to Monte Carlo (MC) methods, it would be more usefully employed in a theoretical framework that enables one to predict task performance statistics as a function of object and noise statistics. Such an approach, advocated in [3], is readily applied to linear reconstruction algorithms, but becomes more difficult to apply to the nonlinear algorithms of much interest in SPECT. The difficulty here lies in the theoretical modeling of noise propagation through the nonlinear stages of these typically iterative algorithms. For the ML-EM algorithm, a solution to this problem was reported in [1]. Our own interest lies in Bayesian algorithms, where much anecdotal evidence touts the apparent efficacy of including prior information in the reconstruction. In this work, we report two advances. (1) We show how to extend the noise propagation formulae of [1] to the case of MAP-EM. In particular, we consider the cases of independent Gamma and multivariate Gaussian priors. (The latter category includes familiar smoothing priors.) For each case, we show how first- and second-order noise statistics are propagated through the MAP-EM algorithm. (2) We then show how such formulae may be used to solve a vexing problem associated with Bayesian approaches, namely the determination of λ, the strength of a smoothing prior. Here, λ is determined via a task performance metric involving region-of-interest (ROI) quantitation.

(Appeared in the 1996 IEEE Nuclear Science Symposium Conference Record, Anaheim, California.)

II. REGULARIZED EM ALGORITHMS

In SPECT, the forward projection process can be described by

G = Hf + N     (1)

where the N-dimensional vector f denotes the unknown object and the M-dimensional random vector G denotes the projection data.
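As a concrete illustration of the forward model (1), the following sketch generates Poisson projection data for a toy system; the dimensions and values are hypothetical, not those of our experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 12, 8                          # hypothetical data-bin / pixel counts
H = rng.uniform(0.01, 0.2, (M, N))    # system matrix: detection probabilities
f = rng.uniform(50.0, 200.0, N)       # deterministic object (mean emissions)
g_bar = H @ f                         # noise-free projections, E[G] = Hf

G = rng.poisson(g_bar).astype(float)  # one realization of the projection data
noise = G - g_bar                     # N = G - Hf: zero mean, var(N_m) = (Hf)_m

# Over many realizations, the sample mean and variance of G both approach Hf,
# the defining property of object-dependent Poisson noise.
T = 20000
samples = rng.poisson(g_bar, size=(T, M))
```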
(Note in (1), we use a single subscript to lexicographically order the 2-D quantities f and G.) H is the M × N system matrix; its element H_{mn} is the probability that a photon emitted from object pixel n is detected at data bin m. In SPECT, it includes the (approximately linear) effects of attenuation, scatter and detector response. The M-dimensional random vector N is the object-dependent Poisson noise in the projection data. To summarize notation: upper-case bold quantities denote random vectors, lower-case bold quantities denote deterministic vectors, and calligraphic letters denote matrices, with corresponding upper-case letters denoting matrix elements. We will also use the convenient Hadamard notation [1][4], in which ab and a/b denote vectors whose nth components are a_n b_n and a_n / b_n, respectively, where a_n and b_n are the nth components of a and b. Likewise, \log a and \exp a are vectors comprising components \log a_n and \exp a_n. Dot and matrix products are denoted a^T b and \mathcal{A} a, with T indicating a transpose. With this notation, the familiar ML-EM algorithm becomes [1]

\hat{F}^{k+1} = \frac{\hat{F}^k}{s} \, H^T \left[ \frac{G}{H \hat{F}^k} \right]

where \hat{F}^k is the object estimate at iteration k and s is the
sensitivity vector defined as s = H^T 1, where 1 is an M-vector with all elements equal to one. Note that \hat{F}^k is a random vector since it depends on N. We may now list the two MAP-EM algorithms to be analyzed. The first MAP-EM algorithm, for an independent Gamma prior [5][4], is

\hat{F}^{k+1} = \frac{\hat{F}^k \, H^T \left[ \frac{G}{H \hat{F}^k} \right] + q}{s + c}     (2)

where c and q are N-vectors with nth components c_n = \alpha_n / \beta_n and q_n = \alpha_n - 1, respectively. The quantities \beta_n and \beta_n^2 / \alpha_n are the mean and variance of the Gamma prior

\Pr(f_n) = \frac{(\alpha_n / \beta_n)^{\alpha_n}}{\Gamma(\alpha_n)} \, f_n^{\alpha_n - 1} \exp(-\alpha_n f_n / \beta_n)

for the nth object pixel. The Gamma prior is thus not a smoothing prior; it steers each pixel estimate toward a predetermined value \beta_n, so a mean image is required. The second algorithm is the One-Step-Late (OSL) procedure of Green [6]. It is not a true MAP-EM algorithm, but if it converges, it converges to the MAP solution [7]. The regularized EM algorithm for a multivariate Gaussian prior, using the OSL strategy [6][4], can be shown [4] to be

\hat{F}^{k+1} = \frac{\hat{F}^k}{s + K^{-1}(\hat{F}^k - m)} \, H^T \left[ \frac{G}{H \hat{F}^k} \right]     (3)

where m and K are the mean and covariance matrix of the multivariate Gaussian prior. We also derived [4] two specializations of the multivariate Gaussian corresponding to two smoothing priors: the membrane prior and the thin-plate prior. Both may be written as Gibbs priors with associated energy functions. The energy function U(f) for the membrane prior is defined as

U(f) = \sum_n \left[ f_e(n)^2 + f_s(n)^2 + \tfrac{1}{2} f_{ne}(n)^2 + \tfrac{1}{2} f_{se}(n)^2 \right]     (4)

Here, f_e(n), f_s(n), f_{ne}(n) and f_{se}(n) are the first partial derivatives along the horizontal ("east"), vertical ("south"), northeast and southeast directions at the nth pixel, respectively. Each first derivative is approximated as the center pixel minus its appropriate neighbor, so an eight-nearest-neighbor neighborhood is involved. The membrane energy (4) is a special case of a multivariate Gaussian prior with a zero mean vector and a specific covariance matrix. The N × N inverse covariance matrix K_M^{-1}
for the membrane prior can be shown [4] to be a symmetric, positive semi-definite sparse matrix with most of its elements zero, except for 9 elements in each row or column. The OSL update for the thin-plate prior is very similar to that of the membrane prior, but is not analyzed here; interested readers can refer to [4].

III. THEORETICAL NOISE ANALYSIS

Here we establish our noise propagation formulae using the Gamma prior; the derivation for the other priors follows along similar lines. The derivation, necessarily skeletal, follows that in [1] but extends the case from ML to MAP. Note that the right-hand side of equation (2) is a multiplicative updating formula, so we take the logarithm of (2) and obtain an additive updating equation:

Y^{k+1} = Y^k + \log \left( H^T \left[ \frac{G}{H \hat{F}^k} \right] + \frac{q}{\hat{F}^k} \right) - \log(s + c)     (5)

where Y^k \equiv \log \hat{F}^k. We decompose each of the random vectors G, Y^k and \hat{F}^k in (5) into its mean plus a (zero-mean) noise term:

G = Hf + N     (6)
Y^k = \log a^k + N_y^k     (7)
\hat{F}^k = a^k \exp N_y^k \simeq a^k (1 + N_y^k) = a^k + N_{\hat{F}}^k     (8)

where Hf, \log a^k and a^k are the means (conditioned on f) of the random vectors G, Y^k and \hat{F}^k, respectively, and N, N_y^k and N_{\hat{F}}^k are the noises in those random vectors. Recall that we consider only photon noise and assume the object f is given. Thus \hat{F}^k is a random vector by virtue of the noise, as in equation (8), and not by virtue of a prior object density. In (8), N_{\hat{F}}^k = a^k N_y^k, and we have used the first of two approximations: the noise in the reconstructed object is much less than the signal in the reconstructed object, i.e. N_{\hat{F}}^k \ll a^k, which will be approximately true for useful images.
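Equation (5) is just equation (2) rewritten in the log domain, which is easy to verify numerically. The sketch below uses toy dimensions and arbitrary Gamma-prior vectors q and c; all values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 12, 8                       # hypothetical toy dimensions
H = rng.uniform(0.1, 1.0, (M, N))  # system matrix
f = rng.uniform(1.0, 5.0, N)       # true object
G = rng.poisson(H @ f).astype(float)
s = H.T @ np.ones(M)               # sensitivity vector s = H^T 1
c = np.full(N, 0.5)                # Gamma-prior vectors (arbitrary demo values)
q = np.full(N, 2.0)

F = np.ones(N)                     # current estimate F^k
# Multiplicative MAP-EM update, eq. (2):
F_mult = (F * (H.T @ (G / (H @ F))) + q) / (s + c)
# Additive log-domain update, eq. (5), then exponentiate:
Y = np.log(F) + np.log(H.T @ (G / (H @ F)) + q / F) - np.log(s + c)
F_log = np.exp(Y)
print(np.allclose(F_mult, F_log))  # True: the two forms agree
```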
Inserting (6), (7) and (8) into (5), using small-signal approximations that ignore terms quadratic or higher in any of the three quantities N, N_y^k and N_{\hat{F}}^k, and equating random terms to random terms and non-random terms to non-random terms, we obtain the following results (for details see [4]):

The conditional mean of the reconstructed object, E[\hat{F}^k | f] \simeq a^k, can be obtained simply by running the MAP-EM algorithm with the noise-free projection Hf. That is,

a^{k+1} = \frac{a^k \, H^T \left[ \frac{Hf}{H a^k} \right] + q}{s + c}     (9)

Such a result (i.e., mean image equals noise-free reconstruction) would hold for any linear algorithm, but is not obvious for MAP-EM.

The noise in the reconstructed object, N_{\hat{F}}^k, can be obtained by a linear operation on N. That is, N_{\hat{F}}^k = a^k (U^k N), where U^k is an N × M matrix satisfying the recursion relation

U^{k+1} = B^k + [C^k - A^k] U^k, \quad U^0 = 0     (10)

with A^k approximately a projection-backprojection operator (H^T H), B^k approximately a backprojection operator (H^T), and C^k a diagonal matrix. The component
forms for the N × N matrix A^k, the N × M matrix B^k and the N × N matrix C^k are as follows:

A_{ij}^k = \frac{a_j^k}{s_i + q_i / a_i^k} \sum_{m=1}^{M} \frac{H_{mi} H_{mj}}{\sum_{n=1}^{N} H_{mn} a_n^k}     (11)

B_{ij}^k = \frac{H_{ji}}{\left( s_i + q_i / a_i^k \right) \left( \sum_{n=1}^{N} H_{jn} a_n^k \right)}     (12)

C_{ii}^k = \frac{s_i}{s_i + q_i / a_i^k}     (13)

In the derivation of the matrices A^k, B^k and C^k, we have used the second approximation: the projection of the current estimate, H a^k, will fairly closely resemble the noise-free projection Hf of the object after the first few iterations wipe out biases due to the initial estimate a^0; that is, Hf / H a^k \simeq 1. We could drop this approximation, but it would lead to more complicated forms for A^k, B^k and C^k.

The same strategy applied to equation (3) with a membrane prior leads to the equivalents of equations (9) and (10):

a^{k+1} = \frac{a^k}{s + K_M^{-1} a^k} \, H^T \left[ \frac{Hf}{H a^k} \right]     (14)

U^{k+1} = B^k + [I - A^k] U^k, \quad U^0 = 0     (15)

where I is the N × N identity matrix. The component forms of A^k and B^k for the membrane prior are

A_{ij}^k = \frac{a_j^k}{s_i + \left[ K_M^{-1} a^k \right]_i} \sum_{m=1}^{M} \frac{H_{mi} H_{mj}}{\sum_{n=1}^{N} H_{mn} a_n^k} + \frac{\left[ K_M^{-1} \right]_{ij} a_j^k}{s_i + \left[ K_M^{-1} a^k \right]_i}     (16)

B_{ij}^k = \frac{H_{ji}}{\left( s_i + \left[ K_M^{-1} a^k \right]_i \right) \left( \sum_{n=1}^{N} H_{jn} a_n^k \right)}     (17)

For any of our priors, we can write a general expression for the covariance matrix of the reconstructed object given f, denoted K_{\hat{F}|f}^k. This turns out to be

K_{\hat{F}|f}^k \equiv E_{\hat{F}}\left[ (\hat{F}^k - a^k)(\hat{F}^k - a^k)^T \,|\, f \right] \simeq \mathrm{diag}(a^k) \, U^k \, \mathrm{diag}(Hf) \, [U^k]^T \, \mathrm{diag}(a^k)     (18)

where diag(a^k) is a diagonal matrix with nth diagonal element a_n^k. We may express K_{\hat{F}|f}^k in terms of its ij element as

[K_{\hat{F}|f}^k]_{ij} = a_i^k a_j^k \sum_m [U^k]_{im} [U^k]_{jm} [Hf]_m

An important special case of this equation is i = j, which gives the variances of the components of the random vector \hat{F}^k (i.e., the "variance image"). To actually use the theoretical noise analysis: we first initialize a^0, compute and save the sequence of noise-free reconstructions a^k for k = 1, ..., K, then use recursion (10) or (15) to compute the desired U^K at iteration K, and plug into (18) to get the covariance matrix K_{\hat{F}|f}^K.
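The recipe just described (run (9) for the noise-free sequence, propagate U^k via (10)-(13), then form (18)) can be sketched and spot-checked against a Monte Carlo estimate on a toy system. All dimensions, values and Gamma parameters below are hypothetical demo choices, not those of our experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 12, 8                        # hypothetical toy dimensions
H = rng.uniform(0.1, 1.0, (M, N))   # system matrix
f = rng.uniform(200.0, 600.0, N)    # true object (high counts)
g_bar = H @ f                       # noise-free projections Hf
s = H.T @ np.ones(M)                # sensitivity vector
alpha = 4.0                         # Gamma prior with mean set to the truth
c = alpha / f                       # c_n = alpha_n / beta_n (demo: beta = f)
q = np.full(N, alpha - 1.0)         # q_n = alpha_n - 1
K_ITER = 15

# Theory: jointly propagate the noise-free reconstruction a^k (eq. 9)
# and the noise-transfer matrix U^k (eqs. 10-13).
a = np.full(N, f.mean())            # initial estimate at the right scale
U = np.zeros((N, M))                # U^0 = 0
for _ in range(K_ITER):
    Ha = H @ a
    denom = s + q / a                                     # s_i + q_i / a_i^k
    A = ((H.T / Ha) @ H) * (a[None, :] / denom[:, None])  # eq. (11)
    B = (H.T / Ha) / denom[:, None]                       # eq. (12)
    C = np.diag(s / denom)                                # eq. (13)
    U = B + (C - A) @ U                                   # eq. (10)
    a = (a * (H.T @ (g_bar / Ha)) + q) / (s + c)          # eq. (9)

# Covariance of the reconstruction given f, eq. (18), and the variance image.
K_cov = np.diag(a) @ U @ np.diag(g_bar) @ U.T @ np.diag(a)
var_theory = np.diag(K_cov)

# Monte Carlo spot-check (the validation methodology of section V).
T = 1500
recon = np.empty((T, N))
for t in range(T):
    g = rng.poisson(g_bar).astype(float)
    fh = np.full(N, f.mean())
    for _ in range(K_ITER):
        fh = (fh * (H.T @ (g / (H @ fh))) + q) / (s + c)  # MAP-EM, eq. (2)
    recon[t] = fh
var_mc = recon.var(axis=0)
```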
The conditional mean of \hat{F}^K given f is simply a^K. We thus end up with the first- and second-order reconstruction statistics at iteration K. In [4], we also derive the general lognormal joint density function for \hat{F}^k. In section V, we illustrate validation results for these formulae using MC trials.

IV. HYPERPARAMETER ESTIMATION USING TASK PERFORMANCE

We may use the theory to estimate the smoothing hyperparameter λ of the membrane prior based on a task performance criterion. The quantitation task we choose is estimation of the total counts in an ROI. Define the ROI by a binary (0,1) N-vector w, with w_i = 1 if pixel i ∈ ROI. An estimate of the true number of counts θ is then given by

\hat{\theta}_{ROI} = w^T \hat{F}     (19)

Note that \hat{\theta}_{ROI} is a random variable since it depends on \hat{F}, as well as on the parameter λ. Also note θ = w^T f. The bias and variance of this ROI estimator are calculated in [3], with the bias given by

b_{ROI} \equiv E[\hat{\theta}_{ROI} | f] - \theta = w^T b     (20)

where b is the bias vector defined as b = E[\hat{F} | f] - f, and the variance given by

\mathrm{var}(\hat{\theta}_{ROI}) \equiv E\left[ (\hat{\theta}_{ROI} - E[\hat{\theta}_{ROI}])^2 \,|\, f \right] = \mathrm{tr}[K_{\hat{F}|f} W]     (21)

where W is the N × N matrix defined as W = w w^T. Note that \mathrm{var}(\hat{\theta}_{ROI}) is not simply the sum of the pointwise variances of the ROI pixels; it also includes contributions from the covariances of all pixel pairs in the ROI. A good figure of merit that takes both bias and variance into account is the expected mean-square error (EMSE) [3], given by

\mathrm{EMSE}(\hat{\theta}_{ROI}) \equiv E[(\hat{\theta}_{ROI} - \theta)^2 | f] = b_{ROI}^2 + \mathrm{var}(\hat{\theta}_{ROI})     (22)

Our optimal λ will be the one which minimizes the EMSE of \hat{\theta}_{ROI}.

V. SIMULATION RESULTS

Following the MC methodology in [2], we validated the theoretical noise analysis formulae for the cases of no prior (i.e. ML-EM) and the Gamma prior. The phantom (3 3) was a uniform disk with radius 3 pixels. We used two projection count levels, 8, and 5,, to represent the low and high signal-to-noise ratios.
For the Gamma prior, the mean was set to the disk for both count levels, and the standard deviation was set to 6.8% and 5.8% of the mean for the low and high count levels, respectively. The sample size was 8, which implies a relative error of about .6% [2]. Results (not included here) showed excellent agreement of theory and MC according to the criteria discussed below. For the membrane prior, we used a phantom, shown in Figure 2A, that included a positive-contrast hot lesion. We found that the OSL convergence ranges for the lesion phantom
with the low and high total projection counts were λ ∈ [0.1, 1.5] and λ ∈ [0.1, 1.8], respectively. Within these ranges, we found that the subranges [0.1, 0.9] (low counts) and [0.1, 0.9] (high counts) captured a nice bias/variance tradeoff in \hat{\theta}_{ROI}. Our validations were thus based on 4 experiments: (1) low projection counts, λ = 0.1; (2) low projection counts, λ = 0.9; (3) high projection counts, λ = 0.1; (4) high projection counts, λ = 0.9. We again checked MC-theory agreement with 8 noise realizations. For each experiment, the results [8] for the mean, variance and covariance are very consistent between MC simulation and theoretical analysis at iterations 1, 3, 5 and 10. Figures 1A and 1B show the excellent agreement of profiles of the mean and variance images for MC and theory. Figure 1C illustrates profiles of the covariance images, which display the covariation of a given pixel relative to a reference pixel at the center of the lesion, for MC and theory. These figures are for experiment (3) at 10 iterations. Figures 2B-2E show a set of variance images (from left to right) obtained from the theoretical analysis for experiment (3) at iterations 1, 3, 5 and 10, respectively. As the iteration number increases, the effect of the lesion on the variance image gradually disappears, and the variance image finally looks like that of a uniform phantom with some symmetric structure. One explanation is that, since OSL-MAP-EM is a smoothed version of ML-EM, as the iterations go on the neighborhood interactions increase and finally smooth out the larger fluctuations associated with the lesion in the variance image. The symmetric fine structure, apparently due to the sensitivity vector, is not as yet easily explained. Our task was to estimate the total counts in a 3 × 3 lesion template. (We used a template smaller than the lesion to avoid the edge artifacts in the bias image.)
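Given a bias vector and covariance matrix from the noise analysis, the figures of merit (20)-(22) and the EMSE-based choice of λ reduce to a few lines. The numbers below, including the λ curves, are hypothetical stand-ins, not values from our experiments:

```python
import numpy as np

N = 6
w = np.array([0., 1., 1., 1., 0., 0.])          # binary ROI template w
f = np.array([4., 9., 10., 9., 4., 4.])          # true object
a_K = np.array([4.2, 8.7, 9.5, 8.9, 4.3, 4.1])   # theoretical mean E[F^K | f]
rng = np.random.default_rng(2)
S = rng.normal(size=(N, N))
K_cov = 0.05 * (S @ S.T)                         # stand-in for K_{F|f} (PSD)

theta = w @ f                     # true ROI counts, theta = w^T f
b = a_K - f                       # bias vector b = E[F|f] - f
b_roi = w @ b                     # eq. (20)
W = np.outer(w, w)                # W = w w^T
var_roi = np.trace(K_cov @ W)     # eq. (21): includes all ROI covariances
emse = b_roi**2 + var_roi         # eq. (22)

# Selecting the smoothing parameter: evaluate EMSE on a grid of lambda
# values and take the minimizer (synthetic illustrative curves).
lams = np.linspace(0.1, 0.9, 9)
bias_l = 0.05 - 0.5 * lams        # bias drifts from small positive to negative
var_l = 0.3 / (1.0 + 5.0 * lams)  # variance decreases with smoothing
emse_l = bias_l**2 + var_l
lam_opt = lams[np.argmin(emse_l)]
```

Note that tr[K W] with W = w w^T equals w^T K w, so the variance term indeed sums every pairwise covariance within the ROI, not just the pointwise variances.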
The bias, variance and EMSE of the ROI estimator (equations (20), (21) and (22)) were calculated for the low projection counts using λ = 0.1, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9, and for the high projection counts using the same set of λ values. The OSL-MAP-EM algorithm was stopped at iteration 10 in all these cases. Optimization to find the best λ for quantitation was done simply by inspection of the EMSE-λ curve. Note that these optimal λs are not the same as those that minimize reconstruction rms error, though calculation shows them to be in the same range in this case. Figures 3A and 3B show the bias-λ, bias²-λ, variance-λ and EMSE-λ curves of the ROI estimator for the low and high projection counts. As seen, the bias (solid line with plus signs) begins with a small positive value at small λ, decreases with increasing λ, and finally becomes a large negative value at large λ. Thus the square of the bias (dashed line) is concave-shaped. The negative bias at high λ is easily understood: as the high positive-contrast lesion is smoothed more strongly, the high-valued pixels are spread into the surrounding background and the values of the pixels in the ROI are lowered. The variance (dash-dotted line) decreases, as expected, with increasing λ. The EMSE (solid line with circles), which is the sum of the squared bias and the variance, is a convex-shaped curve with a minimum. The optimal λs for the low and high projection counts were read off as 0.3 and 0.5, respectively.

VI. CONCLUSION

We developed theoretical noise analyses for MAP-EM algorithms and demonstrated one application to task performance. The results here technically apply only to the particular phantom and to the two noise levels considered. To generalize this analysis, one would have to consider a relevant ensemble of objects that adequately captures the range of objects likely to be encountered in the clinic. We note that the theoretical method still requires a significant amount of computation, albeit far less than MC methods.
However, once we have computed the covariance K_{\hat{F}|f}^k, which is the main burden, this same covariance may be reused in support of a variety of task performance analyses.

VII. ACKNOWLEDGMENTS

We wish to thank Soo-Jin Lee, Ing-Tsung Hsiao and Paul J. Hong for technical help, and the ML-EM folks from the Arizona group, Harrison H. Barrett, Donald W. Wilson and Craig K. Abbey, for useful discussions. This work was supported by a Student Fellowship from the Education and Research Foundation of the Society of Nuclear Medicine, and by grant NS3879 from NIH-NINDS.

VIII. REFERENCES

[1] H. H. Barrett, D. W. Wilson, and B. M. W. Tsui, Noise Properties of the EM Algorithm: I. Theory, Phys. Med. Biol., 39, 1994.
[2] D. W. Wilson, B. M. W. Tsui, and H. H. Barrett, Noise Properties of the EM Algorithm: II. Monte Carlo Simulations, Phys. Med. Biol., 39, 1994.
[3] H. H. Barrett, Objective Assessment of Image Quality: Effects of Quantum Noise and Object Variability, Journal of the Optical Society of America A, 7(7), July 1990.
[4] W. Wang and G. Gindi, Noise Analysis of Regularized EM Algorithms for SPECT, Technical Report MIPL-96-1, Depts. of Radiology and Electrical Engineering, State University of New York at Stony Brook, June 1996.
[5] K. Lange, M. Bahn, and R. Little, A Theoretical Study of Some Maximum Likelihood Algorithms for Emission and Transmission Tomography, IEEE Trans. on Med. Imaging, MI-6, June 1987.
[6] P. J. Green, Bayesian Reconstructions from Emission Tomography Data Using a Modified EM Algorithm, IEEE Trans. on Medical Imaging, 9(1), Mar. 1990.
[7] P. J. Green, On Use of the EM Algorithm for Penalized Likelihood Estimation, J. Royal Statist. Soc. B, 52(3), 1990.
[8] W. Wang and G. Gindi, Noise Analysis of Regularized EM Algorithms for SPECT: Validation and Task Performance Application to Quantitation, Technical Report MIPL-96-3, Depts. of Radiology and Electrical Engineering, State University of New York at Stony Brook, Oct. 1996.
Fig. 1. Horizontal profile comparisons of theoretical and MC results. (A) Mean images, profiles through the lesion center. (B) Variance images, profiles through the image center. (C) Covariance images for the center pixel of the lesion, profiles through the lesion center. Lesion phantom, high projection counts, reconstructed using OSL-MAP-EM with membrane prior, λ = 0.1, 10 iterations. Solid line: MC; *: theory.

Fig. 2. Lesion phantom (A) and variance images (B to E) obtained from the theoretical analysis at 1, 3, 5 and 10 iterations; high projection counts, reconstructed using OSL-MAP-EM with membrane prior, λ = 0.1.

Fig. 3. Bias-λ, bias²-λ, variance-λ and EMSE-λ curves of the 3 × 3 ROI estimator using the theoretical analysis; lesion phantom, OSL-MAP-EM with membrane prior, 10 iterations. (A) Low projection counts, (B) high projection counts. Bias: solid line with plus signs; bias²: dashed line; variance: dash-dotted line; EMSE: solid line with circles.
More informationMultistage Anslysis on Solar Spectral Analyses with Uncertainties in Atomic Physical Models
Multistage Anslysis on Solar Spectral Analyses with Uncertainties in Atomic Physical Models Xixi Yu Imperial College London xixi.yu16@imperial.ac.uk 23 October 2018 Xixi Yu (Imperial College London) Multistage
More informationVariance Images for Penalized-Likelihood Image Reconstruction
IEEE TRANSACTIONS ON MEDICAL IMAGING, VERSION September 29, 2 1 Variance Images for Penalized-Likelihood Image Reconstruction Jeffrey A. Fessler 424 EECS Bldg., University of Michigan, Ann Arbor, MI 4819-2122
More informationFoundations of Image Science
Foundations of Image Science Harrison H. Barrett Kyle J. Myers 2004 by John Wiley & Sons,, Hoboken, 0-471-15300-1 1 VECTORS AND OPERATORS 1 1.1 LINEAR VECTOR SPACES 2 1.1.1 Vector addition and scalar multiplication
More informationVariational Bayesian Inference Techniques
Advanced Signal Processing 2, SE Variational Bayesian Inference Techniques Johann Steiner 1 Outline Introduction Sparse Signal Reconstruction Sparsity Priors Benefits of Sparse Bayesian Inference Variational
More informationBayesian Regression Linear and Logistic Regression
When we want more than point estimates Bayesian Regression Linear and Logistic Regression Nicole Beckage Ordinary Least Squares Regression and Lasso Regression return only point estimates But what if we
More informationDetection ASTR ASTR509 Jasper Wall Fall term. William Sealey Gosset
ASTR509-14 Detection William Sealey Gosset 1876-1937 Best known for his Student s t-test, devised for handling small samples for quality control in brewing. To many in the statistical world "Student" was
More informationRegression. Oscar García
Regression Oscar García Regression methods are fundamental in Forest Mensuration For a more concise and general presentation, we shall first review some matrix concepts 1 Matrices An order n m matrix is
More informationRich Tomography. Bill Lionheart, School of Mathematics, University of Manchester and DTU Compute. July 2014
Rich Tomography Bill Lionheart, School of Mathematics, University of Manchester and DTU Compute July 2014 What do we mean by Rich Tomography? Conventional tomography reconstructs one scalar image from
More information8 The SVD Applied to Signal and Image Deblurring
8 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 3 Linear
More informationError Reporting Recommendations: A Report of the Standards and Criteria Committee
Error Reporting Recommendations: A Report of the Standards and Criteria Committee Adopted by the IXS Standards and Criteria Committee July 26, 2000 1. Introduction The development of the field of x-ray
More informationHierarchical Nearest-Neighbor Gaussian Process Models for Large Geo-statistical Datasets
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geo-statistical Datasets Abhirup Datta 1 Sudipto Banerjee 1 Andrew O. Finley 2 Alan E. Gelfand 3 1 University of Minnesota, Minneapolis,
More informationModern Methods of Data Analysis - WS 07/08
Modern Methods of Data Analysis Lecture VII (26.11.07) Contents: Maximum Likelihood (II) Exercise: Quality of Estimators Assume hight of students is Gaussian distributed. You measure the size of N students.
More informationProbabilistic Graphical Models
Probabilistic Graphical Models Brown University CSCI 2950-P, Spring 2013 Prof. Erik Sudderth Lecture 12: Gaussian Belief Propagation, State Space Models and Kalman Filters Guest Kalman Filter Lecture by
More informationMONTE CARLO SIMULATION OF VHTR PARTICLE FUEL WITH CHORD LENGTH SAMPLING
Joint International Topical Meeting on Mathematics & Computation and Supercomputing in Nuclear Applications (M&C + SNA 2007) Monterey, California, April 5-9, 2007, on CD-ROM, American Nuclear Society,
More informationLecture 3: Pattern Classification
EE E6820: Speech & Audio Processing & Recognition Lecture 3: Pattern Classification 1 2 3 4 5 The problem of classification Linear and nonlinear classifiers Probabilistic classification Gaussians, mixtures
More informationKarl-Rudolf Koch Introduction to Bayesian Statistics Second Edition
Karl-Rudolf Koch Introduction to Bayesian Statistics Second Edition Karl-Rudolf Koch Introduction to Bayesian Statistics Second, updated and enlarged Edition With 17 Figures Professor Dr.-Ing., Dr.-Ing.
More informationAutomated Segmentation of Low Light Level Imagery using Poisson MAP- MRF Labelling
Automated Segmentation of Low Light Level Imagery using Poisson MAP- MRF Labelling Abstract An automated unsupervised technique, based upon a Bayesian framework, for the segmentation of low light level
More informationCovariance Matrix Simplification For Efficient Uncertainty Management
PASEO MaxEnt 2007 Covariance Matrix Simplification For Efficient Uncertainty Management André Jalobeanu, Jorge A. Gutiérrez PASEO Research Group LSIIT (CNRS/ Univ. Strasbourg) - Illkirch, France *part
More informationImage Quality and Adaptive Imaging
Image Quality and Adaptive Imaging Matthew A. Kupinski Associate Professor College of Optical Sciences University of Arizona Tucson, Arizona November 7, 2012 Introduction Imaging equation The need for
More informationMODULE -4 BAYEIAN LEARNING
MODULE -4 BAYEIAN LEARNING CONTENT Introduction Bayes theorem Bayes theorem and concept learning Maximum likelihood and Least Squared Error Hypothesis Maximum likelihood Hypotheses for predicting probabilities
More informationECE521 lecture 4: 19 January Optimization, MLE, regularization
ECE521 lecture 4: 19 January 2017 Optimization, MLE, regularization First four lectures Lectures 1 and 2: Intro to ML Probability review Types of loss functions and algorithms Lecture 3: KNN Convexity
More informationDimension Reduction Techniques. Presented by Jie (Jerry) Yu
Dimension Reduction Techniques Presented by Jie (Jerry) Yu Outline Problem Modeling Review of PCA and MDS Isomap Local Linear Embedding (LLE) Charting Background Advances in data collection and storage
More informationDETECTING PROCESS STATE CHANGES BY NONLINEAR BLIND SOURCE SEPARATION. Alexandre Iline, Harri Valpola and Erkki Oja
DETECTING PROCESS STATE CHANGES BY NONLINEAR BLIND SOURCE SEPARATION Alexandre Iline, Harri Valpola and Erkki Oja Laboratory of Computer and Information Science Helsinki University of Technology P.O.Box
More informationMachine Learning, Fall 2009: Midterm
10-601 Machine Learning, Fall 009: Midterm Monday, November nd hours 1. Personal info: Name: Andrew account: E-mail address:. You are permitted two pages of notes and a calculator. Please turn off all
More informationCS534 Machine Learning - Spring Final Exam
CS534 Machine Learning - Spring 2013 Final Exam Name: You have 110 minutes. There are 6 questions (8 pages including cover page). If you get stuck on one question, move on to others and come back to the
More informationComputer Vision Group Prof. Daniel Cremers. 4. Gaussian Processes - Regression
Group Prof. Daniel Cremers 4. Gaussian Processes - Regression Definition (Rep.) Definition: A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution.
More informationAnalytical Approach to Regularization Design for Isotropic Spatial Resolution
Analytical Approach to Regularization Design for Isotropic Spatial Resolution Jeffrey A Fessler, Senior Member, IEEE Abstract In emission tomography, conventional quadratic regularization methods lead
More informationBayesian Inference: Principles and Practice 3. Sparse Bayesian Models and the Relevance Vector Machine
Bayesian Inference: Principles and Practice 3. Sparse Bayesian Models and the Relevance Vector Machine Mike Tipping Gaussian prior Marginal prior: single α Independent α Cambridge, UK Lecture 3: Overview
More informationECE295, Data Assimila0on and Inverse Problems, Spring 2015
ECE295, Data Assimila0on and Inverse Problems, Spring 2015 1 April, Intro; Linear discrete Inverse problems (Aster Ch 1 and 2) Slides 8 April, SVD (Aster ch 2 and 3) Slides 15 April, RegularizaFon (ch
More information6 The SVD Applied to Signal and Image Deblurring
6 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an
More informationNew developments in JET gamma emission tomography
New developments in JET gamma emission tomography T. CRACIUNESCU, A. MURARI, V. KIPTILY, I. LUPELLI, A. FERNANDES, S. SHARAPOV, I. TISEANU, V. ZOITA and JET Contributors Acknowledgements T. Craciunescu
More informationBACKGROUND NOTES FYS 4550/FYS EXPERIMENTAL HIGH ENERGY PHYSICS AUTUMN 2016 PROBABILITY A. STRANDLIE NTNU AT GJØVIK AND UNIVERSITY OF OSLO
ACKGROUND NOTES FYS 4550/FYS9550 - EXERIMENTAL HIGH ENERGY HYSICS AUTUMN 2016 ROAILITY A. STRANDLIE NTNU AT GJØVIK AND UNIVERSITY OF OSLO efore embarking on the concept of probability, we will first define
More informationThe Monte Carlo method what and how?
A top down approach in measurement uncertainty estimation the Monte Carlo simulation By Yeoh Guan Huah GLP Consulting, Singapore (http://consultglp.com) Introduction The Joint Committee for Guides in Metrology
More informationDiscrete Simulation of Power Law Noise
Discrete Simulation of Power Law Noise Neil Ashby 1,2 1 University of Colorado, Boulder, CO 80309-0390 USA 2 National Institute of Standards and Technology, Boulder, CO 80305 USA ashby@boulder.nist.gov
More informationULTRASONIC INSPECTION, MATERIAL NOISE AND. Mehmet Bilgen and James H. Center for NDE Iowa State University Ames, IA 50011
ULTRASONIC INSPECTION, MATERIAL NOISE AND SURFACE ROUGHNESS Mehmet Bilgen and James H. Center for NDE Iowa State University Ames, IA 511 Rose Peter B. Nagy Department of Welding Engineering Ohio State
More informationPoS(ICRC2017)765. Towards a 3D analysis in Cherenkov γ-ray astronomy
Towards a 3D analysis in Cherenkov γ-ray astronomy Jouvin L. 1, Deil C. 2, Donath A. 2, Kerszberg D. 3, Khelifi B. 1, Lemière A. 1, Terrier R. 1,. E-mail: lea.jouvin@apc.in2p3.fr 1 APC (UMR 7164, CNRS,
More informationPattern Recognition and Machine Learning
Christopher M. Bishop Pattern Recognition and Machine Learning ÖSpri inger Contents Preface Mathematical notation Contents vii xi xiii 1 Introduction 1 1.1 Example: Polynomial Curve Fitting 4 1.2 Probability
More information8 The SVD Applied to Signal and Image Deblurring
8 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an
More informationStatistics of Non-Poisson Point Processes in Several Dimensions
Statistics of Non-Poisson Point Processes in Several Dimensions Kenneth A. Brakke Department of Mathematical Sciences Susquehanna University Selinsgrove, Pennsylvania 17870 brakke@susqu.edu originally
More informationDetectors in Nuclear Physics: Monte Carlo Methods. Dr. Andrea Mairani. Lectures I-II
Detectors in Nuclear Physics: Monte Carlo Methods Dr. Andrea Mairani Lectures I-II INTRODUCTION Sampling from a probability distribution Sampling from a probability distribution X λ Sampling from a probability
More informationSTA414/2104 Statistical Methods for Machine Learning II
STA414/2104 Statistical Methods for Machine Learning II Murat A. Erdogdu & David Duvenaud Department of Computer Science Department of Statistical Sciences Lecture 3 Slide credits: Russ Salakhutdinov Announcements
More informationNONLINEAR CLASSIFICATION AND REGRESSION. J. Elder CSE 4404/5327 Introduction to Machine Learning and Pattern Recognition
NONLINEAR CLASSIFICATION AND REGRESSION Nonlinear Classification and Regression: Outline 2 Multi-Layer Perceptrons The Back-Propagation Learning Algorithm Generalized Linear Models Radial Basis Function
More informationLinear & nonlinear classifiers
Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1396 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1396 1 / 44 Table
More informationRecursive Estimation
Recursive Estimation Raffaello D Andrea Spring 08 Problem Set 3: Extracting Estimates from Probability Distributions Last updated: April 9, 08 Notes: Notation: Unless otherwise noted, x, y, and z denote
More information( ).666 Information Extraction from Speech and Text
(520 600).666 Information Extraction from Speech and Text HMM Parameters Estimation for Gaussian Output Densities April 27, 205. Generalization of the Results of Section 9.4. It is suggested in Section
More informationPart 2: Multivariate fmri analysis using a sparsifying spatio-temporal prior
Chalmers Machine Learning Summer School Approximate message passing and biomedicine Part 2: Multivariate fmri analysis using a sparsifying spatio-temporal prior Tom Heskes joint work with Marcel van Gerven
More informationLecture 32. Lidar Error and Sensitivity Analysis
Lecture 3. Lidar Error and Sensitivity Analysis Introduction Accuracy in lidar measurements Precision in lidar measurements Error analysis for Na Doppler lidar Sensitivity analysis Summary 1 Errors vs.
More informationAnalysis of observer performance in unknown-location tasks for tomographic image reconstruction
A. Yendiki and J. A. Fessler Vol. 24, No. 12/December 2007/J. Opt. Soc. Am. A B99 Analysis of observer performance in unknown-location tasks for tomographic image reconstruction Anastasia Yendiki 1, *
More informationData assimilation with and without a model
Data assimilation with and without a model Tim Sauer George Mason University Parameter estimation and UQ U. Pittsburgh Mar. 5, 2017 Partially supported by NSF Most of this work is due to: Tyrus Berry,
More informationMark your answers ON THE EXAM ITSELF. If you are not sure of your answer you may wish to provide a brief explanation.
CS 189 Spring 2015 Introduction to Machine Learning Midterm You have 80 minutes for the exam. The exam is closed book, closed notes except your one-page crib sheet. No calculators or electronic items.
More informationMaximal Entropy for Reconstruction of Back Projection Images
Maximal Entropy for Reconstruction of Back Projection Images Tryphon Georgiou Department of Electrical and Computer Engineering University of Minnesota Minneapolis, MN 5545 Peter J Olver Department of
More informationRegularization in Neural Networks
Regularization in Neural Networks Sargur Srihari 1 Topics in Neural Network Regularization What is regularization? Methods 1. Determining optimal number of hidden units 2. Use of regularizer in error function
More information5.1 2D example 59 Figure 5.1: Parabolic velocity field in a straight two-dimensional pipe. Figure 5.2: Concentration on the input boundary of the pipe. The vertical axis corresponds to r 2 -coordinate,
More information