Inverse Problems: From Regularization to Bayesian Inference


Inverse Problems: From Regularization to Bayesian Inference
An Overview on Prior Modeling and Bayesian Computation
Application to Computed Tomography

Ali Mohammad-Djafari
Groupe Problèmes Inverses, Laboratoire des Signaux et Systèmes
UMR 8506 CNRS - SUPELEC - Univ Paris Sud 11
Supélec, Plateau de Moulon, Gif-sur-Yvette, FRANCE. djafari@lss.supelec.fr

Applied Math Colloquium, UCLA, USA, July 17, 2009. Organizer: Stanley Osher

Content
Image reconstruction in Computed Tomography: an ill-posed inverse problem
Two main steps in the Bayesian approach: prior modeling and Bayesian computation
Prior models for images:
  Separable: Gaussian, GG, ...
  Gauss-Markov, general one-layer Markovian models
  Hierarchical Markovian models with hidden variables (contours and regions): Gauss-Markov-Potts
Bayesian computation:
  MCMC
  Variational and Mean Field approximations (VBA, MFA)
Application: Computed Tomography in NDT
Conclusions and Work in Progress
Questions and Discussion

Computed Tomography: making an image of the interior of a body
f(x, y): a section of a real 3D body f(x, y, z)
g_φ(r): a line of the observed radiograph g_φ(r, z)
Forward model: line integrals or Radon Transform
  g_φ(r) = ∫_{L_{r,φ}} f(x, y) dl + ε_φ(r)
         = ∫∫ f(x, y) δ(r − x cos φ − y sin φ) dx dy + ε_φ(r)
Inverse problem: image reconstruction
Given the forward model H (Radon Transform) and a set of data g_{φ_i}(r), i = 1, ..., M, find f(x, y)
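
To make the forward model concrete, here is a minimal numerical sketch of the parallel-beam projection g_φ(r): for each angle the image is rotated and summed along one axis, a crude quadrature of the line integrals. The rotation-based discretization, the function names and the noise level are illustrative assumptions, not taken from the slides.

```python
import numpy as np
from scipy.ndimage import rotate

def radon_parallel(f, angles_deg):
    """Approximate parallel-beam projections g_phi(r) of a 2D image f(x, y).

    For each angle the image is rotated so the projection direction is vertical,
    then summed along the rows (a crude quadrature of the line integrals).
    """
    sinogram = np.zeros((f.shape[1], len(angles_deg)))
    for k, phi in enumerate(angles_deg):
        rotated = rotate(f, phi, reshape=False, order=1)
        sinogram[:, k] = rotated.sum(axis=0)      # sum along lines -> g_phi(r)
    return sinogram

# Example: project a simple phantom over 180 angles and add noise eps_phi(r)
f = np.zeros((64, 64)); f[20:44, 20:44] = 1.0
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
g = radon_parallel(f, angles) + 0.1 * np.random.randn(64, 180)
```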

2D and 3D Computed Tomography
3D: g_φ(r_1, r_2) = ∫_{L_{r_1,r_2,φ}} f(x, y, z) dl
2D: g_φ(r) = ∫_{L_{r,φ}} f(x, y) dl
Forward problem: f(x, y) or f(x, y, z) → g_φ(r) or g_φ(r_1, r_2)
Inverse problem: g_φ(r) or g_φ(r_1, r_2) → f(x, y) or f(x, y, z)

X-ray Tomography
  g(r, φ) = −ln(I/I_0) = ∫_{L_{r,φ}} f(x, y) dl
  g(r, φ) = ∫∫_D f(x, y) δ(r − x cos φ − y sin φ) dx dy
f(x, y) —RT→ g(r, φ) (sinogram p(r, φ));   IRT: g(r, φ) → f(x, y)?

Analytical Inversion methods
Radon:
  g(r, φ) = ∫_L f(x, y) dl = ∫∫_D f(x, y) δ(r − x cos φ − y sin φ) dx dy
  f(x, y) = (−1/2π²) ∫_0^π ∫_{−∞}^{+∞} [∂g(r, φ)/∂r] / (r − x cos φ − y sin φ) dr dφ

Filtered Backprojection method
  f(x, y) = (−1/2π²) ∫_0^π ∫_{−∞}^{+∞} [∂g(r, φ)/∂r] / (r − x cos φ − y sin φ) dr dφ
Derivation D:        ḡ(r, φ) = ∂g(r, φ)/∂r
Hilbert Transform H: g_1(r, φ) = (1/π) ∫ ḡ(r', φ) / (r − r') dr'
Backprojection B:    f(x, y) = (1/2π) ∫_0^π g_1(r = x cos φ + y sin φ, φ) dφ
  f(x, y) = B H D g(r, φ) = B F_1^{−1} |Ω| F_1 g(r, φ)
Backprojection of filtered projections:
  g(r, φ) → FT F_1 → Filter |Ω| → IFT F_1^{−1} → g_1(r, φ) → Backprojection B → f(x, y)
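
A minimal NumPy/SciPy sketch of the backprojection-of-filtered-projections pipeline (FT → |Ω| ramp filter → IFT → backprojection B). The rotation-based backprojection, which pairs with the forward sketch given earlier, and the normalization constant are simplifying assumptions, not the exact implementation behind the slide.

```python
import numpy as np
from scipy.ndimage import rotate

def fbp(sinogram, angles_deg):
    """Filtered backprojection: ramp-filter each projection, then backproject."""
    n_r, n_phi = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_r))                      # |Omega| filter in Fourier domain
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=0) * ramp[:, None], axis=0))
    recon = np.zeros((n_r, n_r))
    for k, phi in enumerate(angles_deg):                    # backprojection B
        smeared = np.tile(filtered[:, k], (n_r, 1))         # spread g_1(r, phi) along its rays
        recon += rotate(smeared, -phi, reshape=False, order=1)
    return recon * np.pi / (2.0 * len(angles_deg))
```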

Limitations: limited angle or noisy data
[Figure: original image vs. reconstructions from 64, 16 and 8 projections over [0, π/2]]
Limited angle or noisy data
Accounting for detector size
Other measurement geometries: fan beam, ...

CT as a linear inverse problem
Fan-beam X-ray Tomography: source positions, detector positions
  g(s_i) = ∫_{L_i} f(r) dl + ε(s_i)
Discretization: g = Hf + ε
g, f and H are of huge dimension
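
For very small images the discretized operator can be assembled explicitly, which makes the algebraic model g = Hf + ε tangible; the pixel-driven construction below is only an illustration (at realistic sizes H is never stored, only applied on the fly).

```python
import numpy as np
from scipy.ndimage import rotate

def build_system_matrix(n, angles_deg):
    """Assemble the discretized CT operator H column by column (tiny images only).

    Column j of H is the projection of a unit pixel placed at position j,
    so that the discrete forward model reads g = H f + eps.
    """
    H = np.zeros((n * len(angles_deg), n * n))
    for j in range(n * n):
        pixel = np.zeros(n * n); pixel[j] = 1.0
        img = pixel.reshape(n, n)
        proj = [rotate(img, phi, reshape=False, order=1).sum(axis=0) for phi in angles_deg]
        H[:, j] = np.concatenate(proj)
    return H

n = 16
angles = np.linspace(0.0, 180.0, 12, endpoint=False)
H = build_system_matrix(n, angles)
f_true = np.zeros((n, n)); f_true[5:11, 5:11] = 1.0
g = H @ f_true.ravel() + 0.05 * np.random.randn(H.shape[0])   # g = Hf + eps
```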

Inversion: deterministic methods (data matching)
Observation model: g_i = h_i(f) + ε_i, i = 1, ..., M   ⇔   g = H(f) + ε
Mismatch between the data and the output of the model: Δ(g, H(f))
  f̂ = arg min_f {Δ(g, H(f))}
Examples:
  LS:  Δ(g, H(f)) = ||g − H(f)||² = Σ_i |g_i − h_i(f)|²
  L_p: Δ(g, H(f)) = ||g − H(f)||^p = Σ_i |g_i − h_i(f)|^p, 1 < p < 2
  KL:  Δ(g, H(f)) = Σ_i g_i ln [g_i / h_i(f)]
In general, data matching alone does not give satisfactory results for inverse problems.

Inversion: Regularization theory
Inverse problems = ill-posed problems → need for prior information
Functional space (Tikhonov): g = H(f) + ε,   J(f) = ||g − H(f)||²₂ + λ ||Df||²₂
Finite dimensional space (Phillips & Twomey): g = H(f) + ε
  Minimum norm LS (MNLS):      J(f) = ||g − H(f)||² + λ ||f||²
  Classical regularization:    J(f) = ||g − H(f)||² + λ ||Df||²
  More general regularization: J(f) = Q(g − H(f)) + λ Ω(Df)   or   J(f) = Δ₁(g, H(f)) + λ Δ₂(f, f₀)
Limitations:
  Errors are implicitly assumed white and Gaussian
  Limited prior information on the solution
  Lack of tools for determining the hyperparameters
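
The classical regularization criterion has a closed-form minimizer when H and D are linear, f̂ = (HᵗH + λDᵗD)⁻¹Hᵗg; here is a small dense-matrix sketch (the toy operator, the finite-difference D and the value of λ are arbitrary illustrations).

```python
import numpy as np

def tikhonov(H, g, lam, D):
    """Minimize J(f) = ||g - H f||^2 + lam * ||D f||^2 via its normal equations."""
    return np.linalg.solve(H.T @ H + lam * D.T @ D, H.T @ g)

# Toy 1D problem with a first-order finite-difference regularization operator D
rng = np.random.default_rng(0)
n = 50
H = rng.standard_normal((30, n))                  # toy forward operator
f_true = np.cumsum(0.1 * rng.standard_normal(n))  # smooth-ish signal
g = H @ f_true + 0.05 * rng.standard_normal(30)
D = np.diff(np.eye(n), axis=0)                    # discrete derivative
f_hat = tikhonov(H, g, lam=1.0, D=D)
```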

Bayesian estimation approach
M: g = Hf + ε
Observation model M + hypothesis on the noise ε: p(g | f; M) = p_ε(g − Hf)
A priori information: p(f | M)
Bayes: p(f | g; M) = p(g | f; M) p(f | M) / p(g | M)
Link with regularization — Maximum A Posteriori (MAP):
  f̂ = arg max_f {p(f | g)} = arg max_f {p(g | f) p(f)} = arg min_f {−ln p(g | f) − ln p(f)}
  with Q(g, Hf) = −ln p(g | f) and λ Ω(f) = −ln p(f)
But Bayesian inference is not limited to MAP.

Case of linear models and Gaussian priors
g = Hf + ε
Hypothesis on the noise: ε ~ N(0, σ_ε² I)   →   p(g | f) ∝ exp{ −(1/2σ_ε²) ||g − Hf||² }
Hypothesis on f: f ~ N(0, σ_f² (DᵗD)⁻¹)     →   p(f) ∝ exp{ −(1/2σ_f²) ||Df||² }
A posteriori: p(f | g) ∝ exp{ −(1/2σ_ε²) ||g − Hf||² − (1/2σ_f²) ||Df||² }
MAP: f̂ = arg max_f {p(f | g)} = arg min_f {J(f)} with J(f) = ||g − Hf||² + λ ||Df||², λ = σ_ε²/σ_f²
Advantage: characterization of the solution
  f | g ~ N(f̂, P̂)  with  f̂ = P̂ Hᵗ g,  P̂ = (Hᵗ H + λ Dᵗ D)⁻¹
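
In this linear-Gaussian case the whole posterior is available in closed form; the sketch below returns both the MAP estimate f̂ and the matrix P̂ of the slide, whose diagonal quantifies per-component uncertainty (only practical when the matrices fit in memory).

```python
import numpy as np

def gaussian_posterior(H, g, D, sigma_eps, sigma_f):
    """Posterior f | g ~ N(f_hat, P_hat) for the linear model with Gaussian priors.

    lam = sigma_eps^2 / sigma_f^2, P_hat = (H^T H + lam D^T D)^{-1}, f_hat = P_hat H^T g
    (following the slide's normalization; a sigma_eps^2 scaling of P_hat is sometimes
    kept explicit instead).
    """
    lam = sigma_eps**2 / sigma_f**2
    P_hat = np.linalg.inv(H.T @ H + lam * D.T @ D)
    f_hat = P_hat @ H.T @ g
    return f_hat, P_hat
```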

MAP estimation with other priors:
  f̂ = arg min_f {J(f)}  with  J(f) = ||g − Hf||² + λ Ω(f)
Separable priors:
  Gaussian:             p(f_j) ∝ exp{−α |f_j|²}             →  Ω(f) = α Σ_j |f_j|²
  Gamma:                p(f_j) ∝ f_j^α exp{−β f_j}           →  Ω(f) = Σ_j [−α ln f_j + β f_j]
  Beta:                 p(f_j) ∝ f_j^α (1 − f_j)^β           →  Ω(f) = −Σ_j [α ln f_j + β ln(1 − f_j)]
  Generalized Gaussian: p(f_j) ∝ exp{−α |f_j|^p}, 1 < p < 2  →  Ω(f) = α Σ_j |f_j|^p
Markovian models:
  p(f_j | f) ∝ exp{−α Σ_{i ∈ N_j} φ(f_j, f_i)}               →  Ω(f) = α Σ_j Σ_{i ∈ N_j} φ(f_j, f_i)

MAP estimation with Markovian priors
  f̂ = arg min_f {J(f)}  with  J(f) = ||g − Hf||² + λ Ω(f),   Ω(f) = Σ_j φ(f_j − f_{j−1})
with φ(t):
  Convex functions:     |t|^α,  √(1 + t²) − 1,  log(cosh(t)),  Huber: { t² if |t| ≤ T;  2T|t| − T² if |t| > T }
  Non-convex functions: log(1 + t²),  t²/(1 + t²),  arctan(t²),  truncated quadratic: { t² if |t| ≤ T;  T² if |t| > T }
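
A sketch of MAP estimation with one of the convex potentials listed above (the Huber function), minimized with a generic quasi-Newton solver; the toy data, λ and threshold T are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def huber(t, T=1.0):
    """Convex Huber potential: t^2 for |t| <= T, 2*T*|t| - T^2 otherwise."""
    return np.where(np.abs(t) <= T, t**2, 2.0 * T * np.abs(t) - T**2)

def J(f, H, g, lam, T=1.0):
    """MAP criterion J(f) = ||g - H f||^2 + lam * sum_j phi(f_j - f_{j-1})."""
    return np.sum((g - H @ f)**2) + lam * np.sum(huber(np.diff(f), T))

# Toy 1D problem: piecewise-constant signal observed through a random operator
rng = np.random.default_rng(0)
n = 40
H = rng.standard_normal((30, n))
f_true = np.repeat([0.0, 1.0, 0.3], [15, 15, 10])
g = H @ f_true + 0.05 * rng.standard_normal(30)
f_map = minimize(J, x0=np.zeros(n), args=(H, g, 0.5), method="L-BFGS-B").x
```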

MAP estimation with different prior models
Model 1 (Markovian analysis prior on f):
  g = Hf + ε,  ε ~ N(0, σ_ε² I)
  f = Cf + z,  z ~ N(0, σ_f² I)   →   Df = z with D = (I − C)
  p(g | f) = N(Hf, σ_ε² I),  p(f) = N(0, σ_f² (DᵗD)⁻¹)
  p(f | g) = N(f̂, P̂_f),  f̂ = arg min_f {J(f)},  J(f) = ||g − Hf||² + λ ||Df||²
  f̂ = P̂_f Hᵗ g,  P̂_f = (Hᵗ H + λ Dᵗ D)⁻¹
Model 2 (synthesis prior, f = Wz):
  g = Hf + ε,  ε ~ N(0, σ_ε² I)
  f = Wz,  z ~ N(0, σ_z² I)   →   g = HWz + ε
  p(g | z) = N(HWz, σ_ε² I),  p(z) = N(0, σ_z² I)
  p(z | g) = N(ẑ, P̂_z),  ẑ = arg min_z {J(z)},  J(z) = ||g − HWz||² + λ ||z||²
  ẑ = P̂_z Wᵗ Hᵗ g,  P̂_z = (Wᵗ Hᵗ H W + λ I)⁻¹
  f̂ = W ẑ,  P̂_f = W P̂_z Wᵗ

MAP estimation with different prior models
{ g = Hf + ε,  f ~ N(0, σ_f² (DᵗD)⁻¹) }  ⇔  { g = Hf + ε,  f = Cf + z with z ~ N(0, σ_f² I),  Df = z with D = (I − C) }
  f | g ~ N(f̂, P̂_f)  with  f̂ = P̂_f Hᵗ g,  P̂_f = (Hᵗ H + λ Dᵗ D)⁻¹
  J(f) = −ln p(f | g) = ||g − Hf||² + λ ||Df||²
{ g = Hf + ε,  f ~ N(0, σ_f² W Wᵗ) }  ⇔  { g = Hf + ε,  f = Wz with z ~ N(0, σ_f² I) }
  z | g ~ N(ẑ, P̂_z)  with  ẑ = P̂_z Wᵗ Hᵗ g,  P̂_z = (Wᵗ Hᵗ H W + λ I)⁻¹
  J(z) = −ln p(z | g) = ||g − HWz||² + λ ||z||²
  f̂ = W ẑ,  z: decomposition coefficients

MAP estimation and Compressed Sensing
{ g = Hf + ε,  f = Wz },  W a code book (dictionary) matrix, z the coefficients
Gaussian:
  p(z) = N(0, σ_z² I) ∝ exp{ −(1/2σ_z²) Σ_j |z_j|² }
  J(z) = −ln p(z | g) = ||g − HWz||² + λ Σ_j |z_j|²
Generalized Gaussian (sparsity, β = 1):
  p(z) ∝ exp{ −λ Σ_j |z_j|^β }
  J(z) = −ln p(z | g) = ||g − HWz||² + λ Σ_j |z_j|^β
ẑ = arg min_z {J(z)},  f̂ = W ẑ
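
For the β = 1 (sparsity) case the criterion J(z) is convex but non-smooth; a standard way to minimize it is the iterative soft-thresholding algorithm (ISTA), sketched here with A = HW (W taken as the identity in the example, purely for illustration).

```python
import numpy as np

def ista(A, g, lam, n_iter=200):
    """ISTA for J(z) = ||g - A z||^2 + lam * sum_j |z_j|  (the beta = 1 case).

    Each iteration is a gradient step on the quadratic term followed by
    soft-thresholding, the proximal operator of the l1 penalty.
    """
    L = 2.0 * np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    z = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = z - 2.0 * A.T @ (A @ z - g) / L
        z = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)   # soft threshold
    return z

# Sparse coefficients recovered from few noisy linear measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 200))
z_true = np.zeros(200); z_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
g = A @ z_true + 0.01 * rng.standard_normal(50)
z_hat = ista(A, g, lam=0.1)
```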

Main advantages of the Bayesian approach
MAP = Regularization; Posterior mean? Marginal MAP?
More information in the posterior law than only its mode or its mean
Meaning and tools for estimating hyperparameters
Meaning and tools for model selection
More specific and specialized priors, particularly through the hidden variables
More computational tools:
  Expectation-Maximization for computing the maximum likelihood parameters
  MCMC for posterior exploration
  Variational Bayes for analytical computation of the posterior marginals
  ...

Full Bayesian approach
M: g = Hf + ε
Forward & errors model: p(g | f, θ₁; M)
Prior model: p(f | θ₂; M)
Hyperparameters: θ = (θ₁, θ₂), with prior p(θ | M)
Bayes: p(f, θ | g; M) = p(g | f, θ; M) p(f | θ; M) p(θ | M) / p(g | M)
Joint MAP: (f̂, θ̂) = arg max_{(f,θ)} {p(f, θ | g; M)}
Marginalization: p(f | g; M) = ∫ p(f, θ | g; M) dθ,   p(θ | g; M) = ∫ p(f, θ | g; M) df
Posterior means: f̂ = ∫∫ f p(f, θ | g; M) df dθ,   θ̂ = ∫∫ θ p(f, θ | g; M) df dθ
Evidence of the model: p(g | M) = ∫∫ p(g | f, θ; M) p(f | θ; M) p(θ | M) df dθ
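
As a concrete (and heavily simplified) illustration of the Joint MAP option, here is an alternating-maximization sketch for the linear-Gaussian model with unknown noise and prior variances θ = (v_ε, v_f); the inverse-Gamma hyperpriors and the update order are my assumptions, not specified on this slide.

```python
import numpy as np

def joint_map(H, g, D, n_iter=20, a0=1e-3, b0=1e-3):
    """Alternating Joint MAP sketch: maximize p(f, theta | g) over f and theta in turn.

    f-step: Tikhonov solve with lam = v_eps / v_f.
    theta-step: mode of the inverse-Gamma conditional posteriors (hyperprior IG(a0, b0)).
    """
    m, n = H.shape
    v_eps, v_f = 1.0, 1.0
    f = np.zeros(n)
    for _ in range(n_iter):
        lam = v_eps / v_f
        f = np.linalg.solve(H.T @ H + lam * D.T @ D, H.T @ g)              # argmax over f
        v_eps = (b0 + 0.5 * np.sum((g - H @ f) ** 2)) / (a0 + m / 2 + 1)   # argmax over v_eps
        v_f = (b0 + 0.5 * np.sum((D @ f) ** 2)) / (a0 + n / 2 + 1)         # argmax over v_f
    return f, v_eps, v_f
```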

Two main steps in the Bayesian approach
Prior modeling:
  Separable: Gaussian, Generalized Gaussian, Gamma, mixture of Gaussians, mixture of Gammas, ...
  Markovian: Gauss-Markov, GGM, ...
  Separable or Markovian with hidden variables (contours, region labels)
Choice of the estimator and computational aspects:
  MAP, Posterior mean, Marginal MAP
  MAP needs optimization algorithms
  Posterior mean needs integration methods
  Marginal MAP needs integration and optimization
  Approximations: Gaussian approximation (Laplace), numerical exploration (MCMC), Variational Bayes (separable approximation)

Which images am I looking for?

Which image am I looking for?
  Gaussian:             p(f_j | f_{j−1}) ∝ exp{−α |f_j − f_{j−1}|²}
  Generalized Gaussian: p(f_j | f_{j−1}) ∝ exp{−α |f_j − f_{j−1}|^p}
  Piecewise Gaussian:   p(f_j | q_j, f_{j−1}) = N((1 − q_j) f_{j−1}, σ_f²)
  Mixture of GM:        p(f_j | z_j = k) = N(m_k, σ_k²)

Gauss-Markov-Potts prior models for images
In NDT applications of CT, the objects are, in general, composed of a finite number of materials, and the voxels corresponding to each material are grouped in compact regions.
How to model this prior information?
  f(r), z(r) ∈ {1, ..., K}
  p(f(r) | z(r) = k, m_k, v_k) = N(m_k, v_k)
  p(f(r)) = Σ_k P(z(r) = k) N(m_k, v_k)   (Mixture of Gaussians)
  p(z(r) | z(r'), r' ∈ V(r)) ∝ exp{ γ Σ_{r' ∈ V(r)} δ(z(r) − z(r')) }   (Potts model)
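
To see what this prior encodes, one can simulate from it: first draw a piecewise-homogeneous label field z(r) from the Potts model by Gibbs sampling, then draw the intensities f(r) | z(r) = k from N(m_k, v_k). The 4-neighbour lattice, the number of sweeps and the parameter values below are illustrative assumptions.

```python
import numpy as np

def sample_potts(shape, K, gamma, n_sweeps=50, rng=None):
    """Gibbs sampling of a Potts label field z(r) on a 4-neighbour lattice.

    p(z(r) | neighbours) is proportional to exp(gamma * number of neighbours
    sharing the same label), as in the Potts prior above.
    """
    rng = np.random.default_rng() if rng is None else rng
    Hh, Ww = shape
    z = rng.integers(0, K, size=shape)
    for _ in range(n_sweeps):
        for i in range(Hh):
            for j in range(Ww):
                counts = np.zeros(K)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < Hh and 0 <= jj < Ww:
                        counts[z[ii, jj]] += 1
                p = np.exp(gamma * counts); p /= p.sum()
                z[i, j] = rng.choice(K, p=p)
    return z

# Label image, then intensities f(r) | z(r) = k ~ N(m_k, v_k)
rng = np.random.default_rng(2)
m, v = np.array([0.0, 1.0, 2.0]), np.array([0.01, 0.01, 0.01])
z = sample_potts((32, 32), K=3, gamma=1.2, rng=rng)
f = rng.normal(m[z], np.sqrt(v[z]))
```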

Four different cases
To each pixel of the image are associated two variables, f(r) and z(r):
  1. f | z Gaussian iid, z iid: Mixture of Gaussians
  2. f | z Gauss-Markov, z iid: Mixture of Gauss-Markov
  3. f | z Gaussian iid, z Potts-Markov: Mixture of Independent Gaussians (MIG with Hidden Potts)
  4. f | z Markov, z Potts-Markov: Mixture of Gauss-Markov (MGM with hidden Potts)

Four different cases
  Case 1: Mixture of Gaussians
  Case 2: Mixture of Gauss-Markov
  Case 3: MIG with Hidden Potts
  Case 4: MGM with hidden Potts

Four different cases
[Figure: graphical dependency structures for the four cases, showing the links between f(r), f(r'), z(r), z(r') and the binary contour variable q(r, r') ∈ {0, 1}]

Case 1: f | z Gaussian iid, z iid
Independent Mixture of Independent Gaussians (IMIG):
  p(f(r) | z(r) = k) = N(m_k, v_k), ∀r ∈ R
  p(f(r)) = Σ_{k=1}^K α_k N(m_k, v_k), with Σ_k α_k = 1
  p(z) = Π_r p(z(r) = k) = Π_r α_k = Π_k α_k^{n_k}
Noting R_k = {r : z(r) = k} and R = ∪_k R_k, we have:
  m_z(r) = m_k, v_z(r) = v_k, α_z(r) = α_k, ∀r ∈ R_k
  p(f | z) = Π_{r ∈ R} N(m_z(r), v_z(r))
  p(z) = Π_r α_z(r) = Π_k α_k^{Σ_{r ∈ R} δ(z(r) − k)} = Π_k α_k^{n_k}

Case 2: f | z Gauss-Markov, z iid
Independent Mixture of Gauss-Markov (IMGM):
  p(f(r) | z(r), z(r'), f(r'), r' ∈ V(r)) = N(µ_z(r), v_z(r)), ∀r ∈ R
  µ_z(r) = (1/|V(r)|) Σ_{r' ∈ V(r)} µ_z(r')
  µ_z(r') = δ(z(r') − z(r)) f(r') + (1 − δ(z(r') − z(r))) m_z(r') = (1 − c(r')) f(r') + c(r') m_z(r')
  p(f | z) ∝ Π_r N(µ_z(r), v_z(r)) ∝ Π_k α_k N(m_k 1_k, Σ_k)
  p(z) = Π_r α_z(r) = Π_k α_k^{n_k}
with 1_k = 1 for r ∈ R_k and Σ_k a covariance matrix (n_k × n_k).

Case 3: f | z Gauss iid, z Potts
Gauss iid as in Case 1:
  p(f | z) = Π_{r ∈ R} N(m_z(r), v_z(r)) = Π_k Π_{r ∈ R_k} N(m_k, v_k)
Potts-Markov:
  p(z(r) | z(r'), r' ∈ V(r)) ∝ exp{ γ Σ_{r' ∈ V(r)} δ(z(r) − z(r')) }
  p(z) ∝ exp{ γ Σ_{r ∈ R} Σ_{r' ∈ V(r)} δ(z(r) − z(r')) }

Case 4: f | z Gauss-Markov, z Potts
Gauss-Markov as in Case 2:
  p(f(r) | z(r), z(r'), f(r'), r' ∈ V(r)) = N(µ_z(r), v_z(r)), ∀r ∈ R
  µ_z(r) = (1/|V(r)|) Σ_{r' ∈ V(r)} µ_z(r')
  µ_z(r') = δ(z(r') − z(r)) f(r') + (1 − δ(z(r') − z(r))) m_z(r')
  p(f | z) ∝ Π_r N(µ_z(r), v_z(r)) ∝ Π_k α_k N(m_k 1_k, Σ_k)
Potts-Markov as in Case 3:
  p(z) ∝ exp{ γ Σ_{r ∈ R} Σ_{r' ∈ V(r)} δ(z(r) − z(r')) }

Summary of the two proposed models
  f | z Gaussian iid, z Potts-Markov (MIG with Hidden Potts)
  f | z Markov, z Potts-Markov (MGM with hidden Potts)

Bayesian Computation
  p(f, z, θ | g) ∝ p(g | f, z, v_ε) p(f | z, m, v) p(z | γ, α) p(θ)
  θ = {v_ε, (α_k, m_k, v_k), k = 1, ..., K},  p(θ): conjugate priors
Direct computation and use of p(f, z, θ | g; M) is too complex
Possible approximations:
  Gauss-Laplace (Gaussian approximation)
  Exploration (sampling) using MCMC methods
  Separable approximation (Variational techniques)
Main idea in Variational Bayesian methods:
  Approximate p(f, z, θ | g; M) by q(f, z, θ) = q₁(f) q₂(z) q₃(θ)
  Choice of approximation criterion: KL(q : p)
  Choice of appropriate families of probability laws for q₁(f), q₂(z) and q₃(θ)

MCMC based algorithm
General scheme: p(f, z, θ | g) ∝ p(g | f, z, θ) p(f | z, θ) p(z) p(θ)
Iterate:  f̂ ~ p(f | ẑ, θ̂, g)  →  ẑ ~ p(z | f̂, θ̂, g)  →  θ̂ ~ p(θ | f̂, ẑ, g)
  Sample f from p(f | ẑ, θ̂, g) ∝ p(g | f, θ̂) p(f | ẑ, θ̂): needs optimization of a quadratic criterion.
  Sample z from p(z | f̂, θ̂, g) ∝ p(g | f̂, ẑ, θ̂) p(z): needs sampling of a Potts Markov field.
  Sample θ from p(θ | f̂, ẑ, g) ∝ p(g | f̂, σ_ε² I) p(f̂ | ẑ, (m_k, v_k)) p(θ): conjugate priors → analytical expressions.
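
The loop structure of this scheme is simple even though each conditional draw is model-specific; the skeleton below leaves the three samplers as user-supplied callables (their names and the burn-in handling are mine, not from the slides) and returns the posterior-mean estimate of f from the retained samples.

```python
import numpy as np

def gibbs_sampler(g, f0, z0, theta0, sample_f, sample_z, sample_theta,
                  n_iter=500, burn_in=200):
    """Three-block Gibbs sampler for p(f, z, theta | g).

    sample_f, sample_z and sample_theta draw from the conditional posteriors
    p(f | z, theta, g), p(z | f, theta, g) and p(theta | f, z, g).
    """
    f, z, theta = f0, z0, theta0
    f_samples = []
    for it in range(n_iter):
        f = sample_f(g, z, theta)
        z = sample_z(g, f, theta)
        theta = sample_theta(g, f, z)
        if it >= burn_in:
            f_samples.append(f)
    return np.mean(f_samples, axis=0), z, theta    # posterior-mean estimate of f
```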

Application of CT in NDT
Reconstruction from only 2 projections:
  g₁(x) = ∫ f(x, y) dy,   g₂(y) = ∫ f(x, y) dx
Given the marginals g₁(x) and g₂(y), find the joint distribution f(x, y).
Infinite number of solutions: f(x, y) = g₁(x) g₂(y) Ω(x, y)
Ω(x, y) is a copula: ∫ Ω(x, y) dx = 1 and ∫ Ω(x, y) dy = 1
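
A tiny numerical illustration of the copula remark: with the trivial choice Ω ≡ 1, the product of the two (normalized) marginals already reproduces both projections, and any other admissible Ω generates another solution; the Gaussian-shaped marginals below are made-up test data.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100)
g1 = np.exp(-((x - 0.3) ** 2) / 0.01); g1 /= g1.sum()   # marginal g1(x)
g2 = np.exp(-((x - 0.7) ** 2) / 0.02); g2 /= g2.sum()   # marginal g2(y)

# One member of the infinite solution family: f(x, y) = g1(x) g2(y) Omega(x, y)
Omega = np.ones((100, 100))          # independence choice, Omega = 1
f = np.outer(g1, g2) * Omega

# Both projections of f match the given marginals
assert np.allclose(f.sum(axis=1), g1) and np.allclose(f.sum(axis=0), g2)
```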

Application in CT
  g | f: g = Hf + ε,  g | f ~ N(Hf, σ_ε² I)                — forward model
  f | z: iid Gaussian or Gauss-Markov                       — Gauss-Markov-Potts prior model
  z:     iid or Potts                                       — Gauss-Markov-Potts prior model
  q:     q(r) ∈ {0, 1},  q = 1 − δ(z(r) − z(r'))            — auxiliary binary (contour) variable
Unsupervised Bayesian estimation: p(f, z, θ | g) ∝ p(g | f, z, θ) p(f | z, θ) p(θ)

Results: 2D case
[Figure: Original, Backprojection, Filtered BP, LS, Gauss-Markov+pos., GM+Line process, GM+Label process reconstructions]

Some results in the 3D case
[Figure: M. Defrise phantom — Feldkamp reconstruction vs. proposed method]

Some results in the 3D case
[Figure: Feldkamp reconstruction vs. proposed method]

Some results in the 3D case
[Figure: experimental setup, a photograph of a metallic sponge, and the reconstruction by the proposed method]

Application: liquid evaporation in a metallic sponge
[Figure: reconstructions at Time 0, Time 1 and Time 2]

Conclusions
The Bayesian estimation approach gives more tools than deterministic regularization to handle inverse problems
Gauss-Markov-Potts models are useful priors for images, incorporating regions and contours
Bayesian computation needs either multi-dimensional optimization or integration methods
Different optimization and integration tools and approximations exist (Laplace, MCMC, Variational Bayes)
Work in Progress and Perspectives:
  Efficient implementation in 2D and 3D cases using GPU
  Application to other linear and non-linear inverse problems: X-ray CT, Ultrasound and Microwave imaging, PET, SPECT, ...
