Expectation Maximization Deconvolution Algorithm
Miaomiao Zhang
March 30, 2011

Abstract

In this paper, we use a general mathematical and experimental methodology to analyze image deconvolution. The main procedure is to take an example image, convolve it with a known Gaussian point spread function, and then develop algorithms to recover the image. We observe the deconvolution process while adding Gaussian and Poisson noise at different signal-to-noise ratios. In addition, we describe the effect of the width of the Gaussian used to blur the image. The core algorithms in this paper are the iterative EM algorithm and the non-iterative least squares deconvolution algorithm.

1 Introduction

In this paper, we discuss the Expectation Maximization (EM) algorithm, which was proposed by Dempster, Laird and Rubin. It can deal with problems that involve incomplete data or mixture estimation. Generally, in statistics, it aims to find maximum likelihood or maximum a posteriori (MAP) estimates of parameters in a model that depends on unobserved data. The EM algorithm iterates two steps: the E step estimates the expectation, and the M step maximizes the likelihood. We introduce this in detail in the next section. In addition, we also experiment with a non-iterative method that takes advantage of the Fourier transform; it is considerably faster than the EM algorithm.
2 Methods

The practical problem we solve here is the linear imaging system, which can be expressed as:

    D(y) = \int P(y \mid x)\, \lambda(x)\, dx    (1)

where λ(x) is the object space, which stands for the intensity of an image; D(y) is the detector space, which is the observed data; and P(y|x) is the point spread function, a probability that depends on the distance between y and x. It describes how much of the original information from λ(x) arrives at the detector space D(y). If P(y|x) is shift invariant, we can treat it as P(y − x). Thus we get:

    D(y) = \int P(y - x)\, \lambda(x)\, dx    (2)

From the equation above, it is obvious that equation (2) is an image convolution. However, there is always noise in D(y), generated by the imaging system, which makes the detected data uncertain. In this case we cannot simply solve the linear system introduced above; instead we must maximize the likelihood of D(y) according to the distribution of the noise. Next, we discuss two important noise distributions: the Gaussian distribution and the Poisson distribution.

2.1 Gaussian Case

Generally, the Gaussian distribution is:

    p(x) = \frac{1}{\sqrt{2\pi}\,\delta} \exp\left( -\frac{(x - \mu)^2}{2\delta^2} \right)    (3)

where µ is the mean and δ² is the variance of the distribution. In the Gaussian case, we assume that:

    \mu(D(y)) = \int P(y - x)\, \lambda(x)\, dx,
    \qquad
    P(D(y) \mid \mu(D(y))) = \frac{1}{\sqrt{2\pi}\,\delta} \exp\left( -\frac{(D(y) - \mu(D(y)))^2}{2\delta^2} \right)    (4)

where D(y) is Gaussian with mean µ(D(y)) and variance δ², and P(D(y)|µ(D(y))) is the joint probability of D(y). It is easy to see that maximizing the likelihood is equivalent to minimizing (D(y) − µ(D(y)))².
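The experiments described in the abstract simulate this forward model directly: blur a test image with a known Gaussian PSF and corrupt it with Gaussian or Poisson noise at a chosen SNR. A minimal sketch, assuming NumPy/SciPy; the noise level and the photon-count scaling used to set the Poisson SNR are placeholder choices of ours, not values from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def simulate_observation(lam, sigma_psf, noise="gaussian", noise_std=0.01):
    """Forward model of equation (2): blur lam with a Gaussian PSF, add noise.

    sigma_psf is the width of the blurring Gaussian; noise_std is a
    placeholder noise level (the paper varies the SNR but lists no values).
    """
    blurred = gaussian_filter(lam.astype(float), sigma=sigma_psf)
    if noise == "gaussian":
        return blurred + rng.normal(0.0, noise_std, blurred.shape)
    if noise == "poisson":
        # Rescale to photon counts so the Poisson noise has roughly the
        # desired strength, then rescale back; this rule is our assumption.
        photons = 1.0 / noise_std**2
        return rng.poisson(np.clip(blurred, 0.0, None) * photons) / photons
    return blurred
```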
2.2 Poisson Case

The Poisson distribution can be written as:

    P(k \mid \mu) = \frac{\mu^k e^{-\mu}}{k!}    (5)

where µ is the expected number of occurrences and k is the observed integer count. When µ is large enough, the Poisson distribution approaches the Gaussian model. By the same inference as in the Gaussian model, the joint probability for the Poisson distribution is:

    \mu(D(y)) = \int P(y - x)\, \lambda(x)\, dx,
    \qquad
    P(D(y) \mid \mu(D(y))) = \frac{\mu(D(y))^{D(y)}\, e^{-\mu(D(y))}}{D(y)!}    (6)

In order to reduce the complexity of the computation, we take the log of the likelihood; we can then obtain the deconvolved λ(x) by maximizing:

    \max_{\lambda(x) \geq 0} \sum_y \left[ D(y) \log \int P(y - x)\, \lambda(x)\, dx - \int P(y - x)\, \lambda(x)\, dx - \log(D(y)!) \right]    (7)

3 Implementation

3.1 Implementation of Gaussian Case

In this experiment, by taking advantage of Parseval's equality and the Fourier transform, we obtain:

    \int (D(y) - (\lambda * P)(y))^2\, dy = \int | \mathcal{F}(D - \lambda * P) |^2\, d\omega    (8)
                                          = \int | D(\omega) - \lambda(\omega) P(\omega) |^2\, d\omega    (9)

where \mathcal{F} is the Fourier transform operator and * denotes image convolution. Minimizing this, we get:

    \lambda(\omega) = \frac{D(\omega)}{P(\omega)}    (10)

Then we transform the output back from Fourier space to image space:

    \lambda(x) = \mathcal{F}^{-1}(\lambda(\omega))    (11)

Note: for equation (11), it is necessary to pad P(y − x) to the same size as the input image before taking the Fourier transform; one trick is to pad all the other elements with zeros.
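A minimal sketch of this non-iterative inverse filter, equations (8)–(11), assuming NumPy. The threshold that zeroes out frequencies where |P(ω)| is tiny is our addition (the paper divides directly); it is motivated by the observation in section 4.1 that λ(ω) blows up where P(ω) is near zero:

```python
import numpy as np

def inverse_filter(D, psf, eps=1e-3):
    """Least squares deconvolution in Fourier space, equations (8)-(11).

    D   : observed (blurred, noisy) image.
    psf : point spread function, no larger than D.
    eps : stability threshold on |P(omega)| -- our addition.
    """
    # Pad the PSF with zeros to the image size, as the note after
    # equation (11) suggests, and center it to avoid a circular shift.
    P = np.zeros_like(D, dtype=float)
    P[:psf.shape[0], :psf.shape[1]] = psf
    P = np.roll(P, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

    D_w = np.fft.fft2(D)   # D(omega)
    P_w = np.fft.fft2(P)   # P(omega)
    with np.errstate(divide="ignore", invalid="ignore"):
        lam_w = np.where(np.abs(P_w) > eps, D_w / P_w, 0.0)  # equation (10)
    return np.real(np.fft.ifft2(lam_w))                      # equation (11)
```

Because a Gaussian PSF has |P(ω)| decaying rapidly at high frequencies, more of the spectrum falls below any such threshold as σ grows, which matches the degradation reported in section 4.1.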
3.2 Implementation of Poisson Case

By solving equation (7) above, we get the iteration formula for λ(x) at each pixel x_i of the image:

    \lambda^{k+1}(x_i) = \lambda^k(x_i) \sum_j \frac{D(y_j)\, P(y_j \mid x_i)}{\sum_{i'} \lambda^k(x_{i'})\, P(y_j \mid x_{i'})}    (12)

where y_j ranges over the pixels of the detector space.
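Equation (12) is the multiplicative EM update (the Richardson–Lucy iteration for Poisson data). For a shift-invariant PSF, both sums become convolutions with the PSF and its mirror image, so a minimal sketch, assuming SciPy, is:

```python
import numpy as np
from scipy.signal import fftconvolve

def em_deconvolve(D, psf, n_iter=50):
    """EM iteration of equation (12); 50 iterations as in Figure 2.

    The small floor on the denominator is our addition, to avoid
    division by zero in flat regions.
    """
    lam = np.full_like(D, D.mean(), dtype=float)   # flat positive start
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        denom = fftconvolve(lam, psf, mode="same")            # sum_i' lam^k P
        ratio = D / np.maximum(denom, 1e-12)
        lam *= fftconvolve(ratio, psf_mirror, mode="same")    # lam^k -> lam^(k+1)
    return lam
```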
3.3 Regularization Term

In this paper, we use the gradient descent method to suppress the noise. For Poisson noise, after adding the regularization term, the formulation becomes:

    \max_{I} \sum_i \left[ Z(x_i) \log I(x_i) - I(x_i) \right] - \frac{1}{\tau} \sum_i \| \nabla I(x_i) \|^2    (13)

where Z is the observed data and I is the mean of the Poisson distribution.
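A minimal sketch of one gradient-ascent step on equation (13), assuming NumPy; the step size eta, the value of τ, and the finite-difference discretization of ∇I are our own choices, since the paper does not specify them:

```python
import numpy as np

def ascent_step(I, Z, tau=10.0, eta=1e-3):
    """One gradient-ascent step on the regularized objective (13).

    tau and eta are placeholder values, not taken from the paper.
    """
    I = np.maximum(I, 1e-12)              # keep log I well defined
    grad_loglik = Z / I - 1.0             # d/dI of sum(Z log I - I)
    gy, gx = np.gradient(I)               # discrete image gradient
    lap = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)  # div(grad I)
    grad_penalty = -2.0 * lap / tau       # d/dI of (1/tau) sum ||grad I||^2
    I_new = I + eta * (grad_loglik - grad_penalty)
    return np.maximum(I_new, 0.0)         # enforce nonnegativity as in (7)
```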
4 Results and Discussion

In this section, we show the results of the methods discussed above and discuss why they look the way they do: how the algorithms affect the same input image, and how the outcome changes, for better or worse, as the parameters vary. The steps of our experiments are:

Step 1: Blur the original image with Gaussians of different widths.
Step 2: Apply the implementation algorithms above to recover the image.

Figure 1: Deconvolution results with the Gaussian model for different widths σ. Top: original image; middle: left with σ = 0.5, right with σ = 1; bottom: left with σ = 2, right with σ = 3.

Figure 2: Deconvolution results with the Poisson model for different widths σ. Top row: original image, image blurred with σ = 1, deconvolution result; middle row: original image, image blurred with σ = 5, deconvolution result; bottom row: original image, image blurred with σ = 8, deconvolution result. All tested images use 50 iterations.
4.1 Discussion

From the results above, the noise in Figure 1 appears at different levels. Clearly, as the value of σ increases, the deconvolution results get worse. This is because where P(ω) is near zero, λ(ω) = D(ω)/P(ω) becomes very large, so the result degrades. In addition, adding the regularization term makes no big difference in this case, although the convergence is better with it. In Figure 2, we test the algorithm on Poisson noise; comparing the results, it is more stable than the Gaussian model. When we add the regularization term, it again does not seem to help much.

5 Conclusion

In summary, across the whole experiment we find that the Poisson model is much more stable than the Gaussian model, whether we add noise or increase the width σ. Moreover, the regularization term does not always help: its benefit depends on what information we use, and otherwise the result can get even worse.

6 References

[1] M. Bertero and P. Boccacci. Image Deconvolution. Springer Netherlands.
[2] Sean Borman. The expectation maximization algorithm: A short tutorial.
[3] Sarang Joshi. Class notes, Mathematics of Imaging.