A Weighted Multivariate Gaussian Markov Model For Brain Lesion Segmentation
1 A Weighted Multivariate Gaussian Markov Model for Brain Lesion Segmentation. Senan Doyle, Florence Forbes, Michel Dojat. July 5, 2010.
2 Table of contents
- Introduction & Background
- A Weighted Multi-Sequence Markov Model
- Variational Bayesian EM Solution
- Lesion Segmentation Procedure
- Results
3 Outline: current section is Introduction & Background.
4-6 Introduction & Background: Goal
- Delineation & quantification of lesions
- Establish patient prognosis
- Chart development over time
- Automatically!
7-13 Introduction & Background: MRI Imaging
- 3D volume of voxels
- Different MRI sequences (T1, T2-w, FLAIR, diffusion-weighted, ...)
- Tissue classes: cerebrospinal fluid (CSF), white matter (WM), grey matter (GM)
14-19 Introduction & Background: Challenges
- Noise
- Intensity inhomogeneity
- Partial volume effects
- Other artefacts
- Poor contrast between tissues
- Poor representation of lesion
20-26 Introduction & Background: Inliers Versus Outliers
Lesions are usually modelled as outliers:
- widely varying & inhomogeneous appearance
- small in size
Our approach:
- adopt a model so that lesion voxels become inliers
- introduce a weight field
- Bayesian estimation of the weights
27 Outline: current section is A Weighted Multi-Sequence Markov Model.
28 A Weighted Multi-Sequence Markov Model: Multi-Sequence. [Example images: PD, T1, and T2 sequences]
29-30 A Weighted Multi-Sequence Markov Model: Multi-Sequence. Figure: feature space, lesion in red.
31-35 A Weighted Multi-Sequence Markov Model: Additional Information
- Spatial dependency
- Prior probabilistic tissue atlas
36-42 A Weighted Multi-Sequence Markov Model: Definitions
- Regular 3D grid of voxels $i = 1, \dots, N$, $i \in V$
- Observations $y_i = \{y_{i1}, \dots, y_{iM}\}$, one intensity per sequence
- Labelling $z = \{z_1, \dots, z_N\}$, $z_i \in \{1, \dots, K\}$, $z \in \mathcal{Z}$
- Nonnegative weights $\omega = \{\omega_i, i \in V\}$, $\omega \in \mathcal{W}$, with $\omega_i = \{\omega_{i1}, \dots, \omega_{iM}\}$
- Joint distribution $p(y, z, \omega; \psi)$, $\psi = \{\beta, \phi\}$, with
  $H(y, z, \omega; \psi) = H_Z(z; \beta) + H_W(\omega) + \sum_{i \in V} \log g(y_i \mid z_i, \omega_i; \phi)$
  where $H_Z$ is the missing-data term, $H_W$ the parameter-prior term, and the sum the data term.
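To fix ideas, a minimal Python sketch of the objects the model manipulates; the names and toy sizes are assumptions for illustration, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, K = 1000, 3, 3            # voxels, MR sequences, classes (toy sizes)
y = rng.normal(size=(N, M))     # observations: one intensity per voxel and sequence
q_Z = np.full((N, K), 1.0 / K)  # q_Z[i, k] approximates p(z_i = k | y)
omega = np.ones((N, M))         # nonnegative weights, one per voxel and sequence
mu = rng.normal(size=(K, M))    # class means per sequence
s = np.ones((K, M))             # class variances per sequence (diagonal model)
```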
43-45 A Weighted Multi-Sequence Markov Model: Data term
M-dimensional Gaussians with diagonal covariance:
$g(y_i \mid z_i, \omega_i; \phi) = \prod_{m=1}^{M} \mathcal{G}\big(y_{im};\, \mu_{z_i m},\, s_{z_i m} / \omega_{im}\big)$
Related to logarithmic pooling:
$\prod_{m=1}^{M} \mathcal{G}\big(y_{im};\, \mu_{z_i m},\, s_{z_i m}\big)^{\omega_{im}}$
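A sketch of the data term for a single voxel (shapes as in the earlier sketch), together with the logarithmic-pooling form; the toy check at the end illustrates that the two log-densities differ only by a constant that does not depend on $y_i$:

```python
import numpy as np
from scipy.stats import norm

def log_g(y_i, k, omega_i, mu, s):
    """log g(y_i | z_i = k, omega_i): product over sequences of
    Gaussians whose variances are rescaled as s_km / omega_im."""
    return norm.logpdf(y_i, loc=mu[k], scale=np.sqrt(s[k] / omega_i)).sum()

def log_pooling(y_i, k, omega_i, mu, s):
    """Unnormalized logarithmic pooling: sum_m omega_im * log G(y_im; mu_km, s_km)."""
    return (omega_i * norm.logpdf(y_i, loc=mu[k], scale=np.sqrt(s[k]))).sum()

# Toy check: the difference is the same for any y_i (a normalizing constant).
mu = np.array([[0.0, 1.0, -1.0]])
s = np.array([[1.0, 2.0, 0.5]])
omega_i = np.array([1.5, 0.7, 2.0])
for y_i in (np.zeros(3), np.ones(3), np.full(3, -2.0)):
    print(log_g(y_i, 0, omega_i, mu, s) - log_pooling(y_i, 0, omega_i, mu, s))
```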
46-48 A Weighted Multi-Sequence Markov Model: Missing Data term
Potts model with external field $\xi$ and interaction parameter $\eta$:
$H_Z(z; \beta) = \sum_{i \in V} \Big( \xi_{i z_i} + \eta \sum_{j \in \mathcal{N}(i)} \langle z_i, z_j \rangle \Big)$
where $\beta = \{\xi, \eta\}$, with $\xi = \{ {}^t(\xi_{i1} \dots \xi_{iK}),\, i \in V \}$ a set of real-valued $K$-dimensional vectors and $\eta$ a real positive value. With labels encoded as indicator vectors, $\langle z_i, z_j \rangle = 1$ iff $z_i = z_j$.
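A sketch of the Potts energy on a regular 3D grid with a 6-neighbourhood (the neighbourhood structure is an assumption; the slides leave $\mathcal{N}(i)$ unspecified). With indicator-vector labels, the interaction term just counts equal-label neighbour pairs:

```python
import numpy as np

def potts_energy(z, xi, eta):
    """H_Z(z; beta) for a 3D label grid z of shape (X, Y, Z) and an
    external field xi of shape (X, Y, Z, K); eta > 0 rewards equal
    labels on neighbouring voxels (6-neighbourhood)."""
    ext = np.take_along_axis(xi, z[..., None], axis=-1).sum()
    pairs = sum(int((np.diff(z, axis=a) == 0).sum()) for a in range(z.ndim))
    return ext + eta * pairs

z = np.random.default_rng(1).integers(0, 3, size=(4, 4, 4))
xi = np.zeros((4, 4, 4, 3))
print(potts_energy(z, xi, eta=1.0))
```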
49-52 A Weighted Multi-Sequence Markov Model: Parameter Prior term
Weights are independent of $\psi$ and across sequences; define the conjugate prior
$p(\omega) = \prod_{m=1}^{M} \prod_{i \in V} p(\omega_{im})$,
where $p(\omega_{im})$ is a Gamma distribution with hyperparameters $\alpha_{im}$ (shape) and $\gamma_{im}$ (inverse scale), so that
$H_W(\omega) = \sum_{m=1}^{M} \sum_{i \in V} \big( (\alpha_{im} - 1) \log \omega_{im} - \gamma_{im}\, \omega_{im} \big).$
Modes are fixed to expert weights $\{\omega^{\mathrm{exp}}_{im},\, m = 1 \dots M,\, i \in V\}$ by setting $\alpha_{im} = \gamma_{im}\, \omega^{\mathrm{exp}}_{im} + 1$.
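The mode-anchoring rule follows from the Gamma density: with shape $\alpha$ and rate $\gamma$, the mode is $(\alpha - 1)/\gamma$ when $\alpha > 1$, so $\alpha_{im} = \gamma_{im}\,\omega^{\mathrm{exp}}_{im} + 1$ places the prior mode exactly at $\omega^{\mathrm{exp}}_{im}$. A quick numerical check with assumed toy values:

```python
from scipy.stats import gamma

omega_exp, gamma_im = 1.0, 10.0        # toy expert weight and inverse scale
alpha_im = gamma_im * omega_exp + 1.0  # mode-fixing rule from the slide

# Closed-form mode of Gamma(shape=alpha, rate=gamma): (alpha - 1) / gamma
print((alpha_im - 1.0) / gamma_im)     # 1.0 == omega_exp

# Sanity check against the density itself (scipy uses scale = 1 / rate)
prior = gamma(a=alpha_im, scale=1.0 / gamma_im)
print(prior.pdf(omega_exp) >= prior.pdf(0.9), prior.pdf(omega_exp) >= prior.pdf(1.1))
```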
53 Outline: current section is Variational Bayesian EM Solution.
54-58 Variational Bayesian EM Solution: Alternating Maximization
- Weights are viewed as missing variables.
- Let $\mathcal{D}$ be the set of all probability distributions on $\mathcal{Z} \times \mathcal{W}$.
- Alternating maximization of a function $F$ such that, for any $q \in \mathcal{D}$,
  $F(q, \psi) = E_q[\log p(y, Z, W; \psi)] - E_q[\log q(Z, W)]$
- E-step: $q^{(r)} = \arg\max_{q \in \mathcal{D}} F(q, \psi^{(r)})$ (1)
- M-step: $\psi^{(r+1)} = \arg\max_{\psi \in \Psi} F(q^{(r)}, \psi)$
59-60 Variational Bayesian EM Solution: Alternating Maximization
The E-step is solved over a restricted class of probability distributions that factorize as $q(z, \omega) = q_Z(z)\, q_W(\omega)$, with $q_Z \in \mathcal{D}_Z$ and $q_W \in \mathcal{D}_W$:
- E-Z-step: $q_Z^{(r)} = \arg\max_{q_Z \in \mathcal{D}_Z} F(q_Z\, q_W^{(r-1)}; \psi^{(r)})$
- E-W-step: $q_W^{(r)} = \arg\max_{q_W \in \mathcal{D}_W} F(q_Z^{(r)}\, q_W; \psi^{(r)})$
61-62 Variational Bayesian EM Solution: Alternating Maximization
Use K-L divergence properties to avoid taking derivatives:
- E-Z: $q_Z^{(r)} \propto \exp\big( E_{q_W^{(r-1)}}[\log p(z \mid y, W; \psi^{(r)})] \big)$ (2)
- E-W: $q_W^{(r)} \propto \exp\big( E_{q_Z^{(r)}}[\log p(\omega \mid y, Z; \psi^{(r)})] \big)$ (3)
- M: $\psi^{(r+1)} = \arg\max_{\psi \in \Psi} E_{q_Z^{(r)} q_W^{(r)}}[\log p(y, Z, W; \psi)]$ (4)
63-65 Variational Bayesian EM Solution: E-Z
Note that $p(z \mid y, \omega; \psi)$ is Markovian with energy
$H(z \mid y, \omega; \psi) \propto H_Z(z; \beta) + \sum_{i \in V} \log g(y_i \mid z_i, \omega_i; \phi)$, which is linear in $\omega$. (5)
Therefore (2) becomes
E-Z: $q_Z^{(r)} \propto p\big(z \mid y, E_{q_W^{(r-1)}}[W]; \psi^{(r)}\big)$ (6)
A mean-field-like algorithm provides an MRF approximation, and
$q_Z^{(r)}(z) \propto \prod_{i \in V} \Big( \prod_{m=1}^{M} \mathcal{G}\big(y_{im};\, \mu^{(r)}_{z_i m},\, s^{(r)}_{z_i m} / \bar\omega^{(r-1)}_{im}\big) \Big)\, p\big(z_i \mid \tilde z^{(r)}_{\mathcal{N}(i)}; \beta^{(r)}\big)$ (7)
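A sketch of one fixed-point sweep in the spirit of (7): each voxel's posterior combines the $\omega$-rescaled Gaussian likelihood with support from its 6 grid neighbours. Substituting the current soft assignments for the neighbour labels $\tilde z$ is one common mean-field variant, and an assumption here (the slides do not spell out which variant is used); the periodic boundary via np.roll is for brevity only:

```python
import numpy as np
from scipy.stats import norm

def e_z_sweep(y, q_Z, omega_bar, mu, s, xi, eta):
    """One mean-field-style sweep for eq. (7).
    y: (X,Y,Z,M) intensities, q_Z: (X,Y,Z,K) current soft labels,
    omega_bar: (X,Y,Z,M) mean weights, mu/s: (K,M), xi: (X,Y,Z,K)."""
    K = q_Z.shape[-1]
    # Gaussian log-likelihood per class, variances rescaled by the weights
    loglik = np.stack(
        [norm.logpdf(y, loc=mu[k], scale=np.sqrt(s[k] / omega_bar)).sum(-1)
         for k in range(K)], axis=-1)
    # neighbour support: sum of current q_Z over the 6 adjacent voxels
    nbr = np.zeros_like(q_Z)
    for a in range(3):
        nbr += np.roll(q_Z, 1, axis=a) + np.roll(q_Z, -1, axis=a)
    logits = loglik + xi + eta * nbr
    logits -= logits.max(-1, keepdims=True)  # numerical stability
    q = np.exp(logits)
    return q / q.sum(-1, keepdims=True)
```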
66-67 Variational Bayesian EM Solution: E-W
Similarly, $p(\omega \mid y, z; \psi)$ is Markovian with energy
$H(\omega \mid y, z; \psi) \propto H_W(\omega) + \sum_{i \in V} \log g(y_i \mid z_i, \omega_i; \phi)$ (8)
$E_{q_W^{(r)}}[W_{im}]$, denoted by $\bar\omega^{(r)}_{im}$, becomes
$\bar\omega^{(r)}_{im} = \dfrac{\alpha_{im}}{\gamma_{im} + \frac{1}{2} \sum_{k=1}^{K} \delta\big(y_{im};\, \mu^{(r)}_{km},\, s^{(r)}_{km}\big)\, q^{(r)}_{Z_i}(k)}$ (9)
where $\delta(y; \mu, s) = (y - \mu)^2 / s$ is the squared Mahalanobis distance.
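Update (9) vectorizes directly over voxels and sequences; a sketch with the flat shapes of the definitions sketch (the numerator $\alpha_{im}$ follows the slide as transcribed):

```python
import numpy as np

def e_w_step(y, q_Z, mu, s, alpha, gamma_):
    """Posterior mean weights omega_bar from eq. (9).
    y: (N,M), q_Z: (N,K), mu/s: (K,M), alpha/gamma_: (N,M)."""
    # delta[i,k,m] = (y_im - mu_km)^2 / s_km, squared Mahalanobis distance
    delta = (y[:, None, :] - mu[None, :, :]) ** 2 / s[None, :, :]
    # expectation under q_Z: sum_k q_Z[i,k] * delta[i,k,m]
    e_delta = np.einsum("ik,ikm->im", q_Z, delta)
    return alpha / (gamma_ + 0.5 * e_delta)
```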
68-69 Variational Bayesian EM Solution: M
$\beta$ is solved using a mean-field approximation; $\phi$ is updated using
$\mu^{(r+1)}_{km} = \dfrac{\sum_{i=1}^{N} q^{(r)}_{Z_i}(k)\, \bar\omega^{(r)}_{im}\, y_{im}}{\sum_{i=1}^{N} q^{(r)}_{Z_i}(k)\, \bar\omega^{(r)}_{im}}, \qquad s^{(r+1)}_{km} = \dfrac{\sum_{i=1}^{N} q^{(r)}_{Z_i}(k)\, \bar\omega^{(r)}_{im}\, \big(y_{im} - \mu^{(r+1)}_{km}\big)^2}{\sum_{i=1}^{N} q^{(r)}_{Z_i}(k)}$
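The $\phi$ update is a pair of weighted empirical moments; a sketch with the same flat shapes (note that, as on the slide, the variance denominator omits the weight factor):

```python
import numpy as np

def m_step_phi(y, q_Z, omega_bar):
    """Updates of mu_km and s_km. y: (N,M), q_Z: (N,K), omega_bar: (N,M)."""
    w = q_Z[:, :, None] * omega_bar[:, None, :]    # (N,K,M) combined weights
    mu = (w * y[:, None, :]).sum(0) / w.sum(0)     # weighted means, (K,M)
    resid2 = (y[:, None, :] - mu[None]) ** 2       # squared residuals, (N,K,M)
    s = (w * resid2).sum(0) / q_Z.sum(0)[:, None]  # denominator: sum_i q_Z only
    return mu, s
```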
70 Outline: current section is Lesion Segmentation Procedure.
71-75 Lesion Segmentation Procedure: Expert Weighting
- A region $L$ defines candidate lesions; $L$ is weighted with $\omega_L > 1$ (and $\omega_{\setminus L} = 1$).
- Setting: $K = 3$, $\omega^{\mathrm{exp}}_{im} = 1$ and $\gamma_{im} = 1$.
- Threshold the most informative weight map (chi-square percentile).
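The slides do not detail the chi-square thresholding, so the following sketch is only one plausible reading (every choice in it is an assumption): invert (9) to recover the expected squared Mahalanobis distance implied by each weight in the most informative map, then flag voxels whose distance exceeds a chi-square percentile. With the first-phase setting $\gamma_{im} = 1$, $\omega^{\mathrm{exp}}_{im} = 1$, the mode-fixing rule gives $\alpha_{im} = 2$, hence the defaults:

```python
import numpy as np
from scipy.stats import chi2

def threshold_weight_map(omega_map, alpha_=2.0, gamma_=1.0, p=0.95, df=1):
    """Hypothetical chi-square percentile thresholding of a weight map.
    Inverting eq. (9): E[delta] = 2 * (alpha / omega - gamma); flag
    voxels whose implied distance exceeds the chi-square(df) quantile."""
    e_delta = 2.0 * (alpha_ / omega_map - gamma_)
    return e_delta > chi2.ppf(p, df)
```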
76-79 Lesion Segmentation Procedure: Inlier Setting
- $K = 4$, initialized with the $K = 3$ result (plus the thresholded lesions).
- $\gamma_{im} = \gamma_L$ for $i \in L$ ($\gamma_L = 10$); $\gamma_{im} = \gamma_{\setminus L}$ for $i \notin L$ ($\gamma_{\setminus L} = 1000$).
- $\omega_L$ depends on the number of voxels in $L$.
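A small sketch of assembling the per-voxel hyperparameters for the $K = 4$ run, under the reading above (the in/out assignment of $\gamma_L = 10$ and $\gamma_{\setminus L} = 1000$ is a reconstruction of a garbled line: a small $\gamma$ inside $L$ leaves the lesion weights free to adapt, while a large $\gamma$ outside pins the weights near their mode of 1):

```python
import numpy as np

def inlier_hyperparams(L_mask, M, omega_L, gamma_L=10.0, gamma_out=1000.0):
    """Per-voxel Gamma hyperparameters for the inlier run.
    L_mask: (N,) boolean mask of thresholded candidate lesions;
    omega_L: expert weight for lesion voxels (depends on |L|)."""
    in_L = np.repeat(L_mask[:, None], M, axis=1)  # (N, M)
    gamma_ = np.where(in_L, gamma_L, gamma_out)
    omega_exp = np.where(in_L, omega_L, 1.0)
    alpha = gamma_ * omega_exp + 1.0              # mode-fixing rule, slide 51
    return alpha, gamma_
```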
80-81 Lesion Segmentation Procedure: Inlier Setting. [Illustrative figures]
82 Outline: current section is Results.
83 Results: Simulated Data, 0% IIH. DSC per noise level; values in parentheses give the difference from the best competing method.

Method       3%        5%        7%        9%
Mild lesions (0.02% of the voxels)
AWEM         68 (+1)   49 (-21)  36 (+2)   12 (+8)
[1] [3] [2]  52        NA        NA        NA
Moderate lesions (0.18% of the voxels)
AWEM         86 (+7)   80 (-1)   73 (+14)  64 (+27)
[1] [3] [2]  63        NA        NA        NA
Severe lesions (0.52% of the voxels)
AWEM         92 (+7)   86 (-2)   78 (+6)   68 (+27)
[1] [3] [2]  82        NA        NA        NA
84 Results: Simulated Data, 40% IIH. DSC per noise level; values in parentheses give the difference from the best competing method.

Method   3%        5%        7%        9%
Mild lesions (0.02% of the voxels)
AWEM     0 (-75)   0 (-65)   0 (-20)   0 (-30)
[1] [3]
Moderate lesions (0.18% of the voxels)
AWEM     52 (-24)  51 (-25)  52 (-15)  10 (-38)
[1] [3]
Severe lesions (0.52% of the voxels)
AWEM     87 (+1)   84 (+1)   77 (+3)   66 (+8)
[1] [3]
85 Results: Real Data (Rennes MS). DSC per patient; LL: lesion load.

Patient     LL    EMS      AWEM
Patient 1                  (+20)
Patient 2                  (+2)
Patient 3                  (-2)
Patient 4                  (+7)
Patient 5                  (-2)
Average           55 +/-   +/- 16
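The DSC figures in these tables and in the captions below are Dice similarity coefficients between the segmented lesion mask and the ground truth; for reference, a minimal implementation:

```python
import numpy as np

def dice(seg, gt):
    """Dice similarity coefficient, 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    denom = seg.sum() + gt.sum()
    return 2.0 * np.logical_and(seg, gt).sum() / denom if denom else 1.0
```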
86 Results: Figure: real MS data, patient 3. (a) FLAIR image; (b) lesions identified with our approach (DSC 45%); (c) ground truth.
87-88 Results. [Illustrative figures]
89 Results: Figure: real stroke data. (a) DW image; (b) lesions identified with our approach (DSC 63%); (c) ground truth.
90 Discussion & Future Work
- Extension to full covariance matrices: temporal multi-sequence data, e.g. patient follow-up
- Exploration of the Markov prior
- Other expert weighting schemes, possibly lesion-specific
- Extension to handle intensity inhomogeneities
- Evaluation in a semi-supervised context
91 References
[1] D. Garcia-Lorenzo, L. Lecoeur, D.L. Arnold, D.L. Collins, and C. Barillot. Multiple Sclerosis lesion segmentation using an automatic multimodal graph cuts. In MICCAI.
[2] F. Rousseau, F. Blanc, J. de Seze, L. Rumbac, and J.P. Armspach. An a contrario approach for outliers segmentation: application to multiple sclerosis in MRI. In IEEE ISBI, pages 9-12.
[3] K. Van Leemput, F. Maes, D. Vandermeulen, A. Colchester, and P. Suetens. Automated segmentation of multiple sclerosis lesions by model outlier detection. IEEE Trans. Med. Imaging, 20(8), 2001.