
A Weighted Multivariate Gaussian Markov Model For Brain Lesion Segmentation
Senan Doyle, Florence Forbes, Michel Dojat
July 5, 2010

Table of contents
- Introduction & Background
- A Weighted Multi-Sequence Markov Model
- Variational Bayesian EM Solution
- Lesion Segmentation Procedure
- Results

Introduction & Background

Goal
- Delineation & quantification of lesions
- Establish patient prognosis
- Chart development over time
- Automatically!

MRI Imaging
- 3D volume of voxels
- Different MRI sequences (T1, T2-w, FLAIR, diffusion-weighted, ...)
- Tissue classes: cerebrospinal fluid (CSF), white matter (WM), grey matter (GM)

Challenges
- Noise
- Intensity inhomogeneity
- Partial volume effects
- Other artefacts
- Poor contrast between tissues
- Poor representation of lesions

Inliers Versus Outliers
Lesions are usually modelled as outliers:
- widely varying & inhomogeneous appearance
- small in size

Our approach:
- adopt a model in which lesion voxels become inliers
- introduce a weight field
- estimate the weights in a Bayesian framework

A Weighted Multi-Sequence Markov Model

Multi-Sequence
- Example sequences: PD, T1, T2
- Figure: feature space, lesion shown in red

Additional Information
- Spatial dependency prior
- Probabilistic tissue atlas

Definitions
- Regular 3D grid of voxels $i = 1, \dots, N$, $i \in V$
- Observations $y_i = \{y_{i1}, \dots, y_{iM}\}$ over $M$ sequences
- Labelling $z = \{z_1, \dots, z_N\}$, $z_i \in \{1, \dots, K\}$, $z \in \mathcal{Z}$
- Nonnegative weights $\omega = \{\omega_i, i \in V\}$, $\omega \in \mathcal{W}$, with $\omega_i = \{\omega_{i1}, \dots, \omega_{iM}\}$
- Joint distribution $p(y, z, \omega; \psi)$ with $\psi = \{\beta, \phi\}$ and energy

$$H(y, z, \omega; \psi) = \underbrace{H_Z(z; \beta)}_{\text{missing data term}} + \underbrace{H_W(\omega)}_{\text{parameter prior term}} + \underbrace{\sum_{i \in V} \log g(y_i \mid z_i, \omega_i; \phi)}_{\text{data term}}$$

Data term
$M$-dimensional Gaussians with diagonal covariance, each variance rescaled by the corresponding weight:

$$g(y_i \mid z_i, \omega_i; \phi) = \prod_{m=1}^{M} \mathcal{G}\left(y_{im};\, \mu_{z_i m},\, \frac{s_{z_i m}}{\omega_{im}}\right)$$

Related to logarithmic pooling:

$$g(y_i \mid z_i, \omega_i; \phi) \propto \prod_{m=1}^{M} \mathcal{G}(y_{im};\, \mu_{z_i m},\, s_{z_i m})^{\omega_{im}}$$
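For concreteness, a minimal numpy sketch of this weighted data term; the function name and array shapes are our own:

```python
import numpy as np

def log_g(y_i, mu_k, s_k, w_i):
    """Log of the weighted data term g(y_i | z_i = k, w_i) for one voxel.

    y_i, w_i : (M,) arrays of intensities and weights over the M sequences
    mu_k, s_k : (M,) class-k means and variances (diagonal covariance)
    The variance of sequence m is rescaled to s_k[m] / w_i[m], which equals
    the unweighted Gaussian raised to the power w_i[m] up to a normalizing
    constant (logarithmic pooling).
    """
    var = s_k / w_i
    return np.sum(-0.5 * np.log(2 * np.pi * var) - 0.5 * (y_i - mu_k) ** 2 / var)
```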

Missing Data term
Potts model with external field $\xi$ and interaction parameter $\eta$:

$$H_Z(z; \beta) = \sum_{i \in V} \Big( \xi_{i z_i} + \eta \sum_{j \in \mathcal{N}(i)} \langle z_i, z_j \rangle \Big)$$

where $\beta = \{\xi, \eta\}$, with $\xi = \{{}^t(\xi_{i1}, \dots, \xi_{iK}),\, i \in V\}$ a set of real-valued $K$-dimensional vectors and $\eta$ a positive real value (viewing labels as $K$-dimensional indicator vectors, $\langle z_i, z_j \rangle = 1$ exactly when $z_i = z_j$).
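A direct transcription of this energy, as a sketch; here each neighbor pair is enumerated once, a convention that only rescales $\eta$ by a factor of two relative to summing over $j \in \mathcal{N}(i)$ for every $i$:

```python
def potts_energy(z, xi, eta, neighbor_pairs):
    """Potts energy H_Z(z; beta) with external field xi and interaction eta.

    z : (N,) integer labels in {0, ..., K-1}
    xi : (N, K) external-field values xi[i, k]
    neighbor_pairs : iterable of (i, j) grid-neighbor pairs, each counted once
    """
    field = sum(xi[i, z[i]] for i in range(len(z)))
    interaction = eta * sum(z[i] == z[j] for (i, j) in neighbor_pairs)
    return field + interaction
```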

Parameter Prior term
Weights are independent of $\psi$ and across sequences. Define the conjugate prior

$$p(\omega) = \prod_{m=1}^{M} \prod_{i \in V} p(\omega_{im})$$

where $p(\omega_{im})$ is a Gamma distribution with hyperparameters $\alpha_{im}$ (shape) and $\gamma_{im}$ (inverse scale), so that

$$H_W(\omega) = \sum_{m=1}^{M} \sum_{i \in V} \big( (\alpha_{im} - 1) \log \omega_{im} - \gamma_{im}\, \omega_{im} \big)$$

Modes are fixed to expert weights $\{\omega^{\exp}_{im},\, m = 1, \dots, M,\, i \in V\}$ by setting $\alpha_{im} = \gamma_{im}\, \omega^{\exp}_{im} + 1$.
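A small sketch of this mode-pinning construction using scipy's Gamma distribution (parameterized by shape and scale = 1/rate); the function name is ours:

```python
from scipy.stats import gamma

def expert_prior(w_exp, gam):
    """Gamma prior on one weight, with its mode pinned at the expert value.

    With shape alpha = gam * w_exp + 1 and rate gam, the mode
    (alpha - 1) / gam equals w_exp; a larger gam gives a tighter prior
    around the expert weight.
    """
    alpha = gam * w_exp + 1.0
    return gamma(a=alpha, scale=1.0 / gam)
```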

Variational Bayesian EM Solution

Alternating Maximization
Weights are viewed as missing variables. Let $\mathcal{D}$ be the set of all probability distributions on $\mathcal{Z} \times \mathcal{W}$. EM alternates maximization of a function $F$ such that, for any $q \in \mathcal{D}$,

$$F(q, \psi) = E_q[\log p(y, Z, W; \psi)] - E_q[\log q(Z, W)]$$

E-step: $q^{(r)} = \arg\max_{q \in \mathcal{D}} F(q, \psi^{(r)})$ &nbsp; (1)
M-step: $\psi^{(r+1)} = \arg\max_{\psi \in \Psi} F(q^{(r)}, \psi)$

The E-step is solved over a restricted class $\tilde{\mathcal{D}}$ of probability distributions that factorize as $q(z, \omega) = q_Z(z)\, q_W(\omega)$ with $q_Z \in \mathcal{D}_Z$ and $q_W \in \mathcal{D}_W$:

E-Z-step: $q_Z^{(r)} = \arg\max_{q_Z \in \mathcal{D}_Z} F(q_W^{(r-1)}\, q_Z;\, \psi^{(r)})$
E-W-step: $q_W^{(r)} = \arg\max_{q_W \in \mathcal{D}_W} F(q_W\, q_Z^{(r)};\, \psi^{(r)})$

Kullback-Leibler divergence properties are used to avoid taking derivatives:

E-Z: $q_Z^{(r)} \propto \exp\left(E_{q_W^{(r-1)}}\big[\log p(z \mid y, W; \psi^{(r)})\big]\right)$ &nbsp; (2)
E-W: $q_W^{(r)} \propto \exp\left(E_{q_Z^{(r)}}\big[\log p(\omega \mid y, Z; \psi^{(r)})\big]\right)$ &nbsp; (3)
M: $\psi^{(r+1)} = \arg\max_{\psi \in \Psi} E_{q_Z^{(r)} q_W^{(r)}}\big[\log p(y, Z, W; \psi)\big]$ &nbsp; (4)
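Why the KL trick gives (2)-(3), as a one-step derivation under the factorization $q = q_Z q_W$: the free energy differs from the log evidence by a KL divergence, so each E-sub-step is a KL minimization with a closed-form optimum.

```latex
F(q,\psi) = \log p(y;\psi) - \mathrm{KL}\!\big(q_Z\, q_W \,\big\|\, p(z,\omega \mid y;\psi)\big),
\qquad\text{so, with } q_W^{(r-1)} \text{ fixed,}
\\
\arg\max_{q_Z}\, F\big(q_W^{(r-1)} q_Z,\, \psi^{(r)}\big)
\;\propto\; \exp\!\Big(E_{q_W^{(r-1)}}\big[\log p(z \mid y, W; \psi^{(r)})\big]\Big),
```

and symmetrically for $q_W$, since the term $E_{q_W}[\log p(\omega \mid y)]$ dropped from the exponent is constant in $z$.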

E-Z
Note that $p(z \mid y, \omega; \psi)$ is Markovian with energy

$$H(z \mid y, \omega; \psi) \propto H_Z(z; \beta) + \sum_{i \in V} \log g(y_i \mid z_i, \omega_i; \phi), \quad \text{which is linear in } \omega. \quad (5)$$

Therefore (2) becomes

$$\text{E-Z:} \quad q_Z^{(r)} \propto p\left(z \mid y,\, E_{q_W^{(r-1)}}[W];\, \psi^{(r)}\right) \quad (6)$$

A mean-field-like algorithm provides an MRF approximation, and

$$q_Z^{(r)}(z) \propto \prod_{i \in V} \left[\, \prod_{m=1}^{M} \mathcal{G}\left(y_{im};\, \mu^{(r)}_{z_i m},\, \frac{s^{(r)}_{z_i m}}{\bar{\omega}^{(r-1)}_{im}}\right) \right] p\left(z_i \mid \tilde{z}^{(r)}_{\mathcal{N}(i)};\, \beta^{(r)}\right) \quad (7)$$
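As an illustration of the E-Z computation, a standard mean-field fixed-point iteration for the Potts posterior could look like the sketch below; the paper's mean-field-like algorithm may differ in details such as the neighbor configuration $\tilde{z}$ used, and all names here are our own:

```python
import numpy as np

def mean_field_q_z(log_g, xi, eta, neighbors, n_sweeps=10):
    """Mean-field fixed-point iteration approximating q_Z, cf. eq. (7).

    log_g : (N, K) per-voxel class log-likelihoods with current weights
    xi : (N, K) external field; eta : Potts interaction parameter
    neighbors : list where neighbors[i] is an index array of N(i)
    Returns (N, K) approximate posterior label probabilities.
    """
    N, K = log_g.shape
    q = np.full((N, K), 1.0 / K)
    for _ in range(n_sweeps):
        for i in range(N):
            a = log_g[i] + xi[i] + eta * q[neighbors[i]].sum(axis=0)
            a -= a.max()                      # numerical stability
            q[i] = np.exp(a) / np.exp(a).sum()
    return q
```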

E-W
Similarly, $p(\omega \mid y, z; \psi)$ is Markovian with energy

$$H(\omega \mid y, z; \psi) \propto H_W(\omega) + \sum_{i \in V} \log g(y_i \mid z_i, \omega_i; \phi) \quad (8)$$

The expectation $E_{q_W^{(r)}}[W_{im}]$, denoted $\bar{\omega}^{(r)}_{im}$, becomes

$$\bar{\omega}^{(r)}_{im} = \frac{\alpha_{im} + \frac{1}{2}}{\gamma_{im} + \frac{1}{2} \sum_{k=1}^{K} \delta\left(y_{im};\, \mu^{(r)}_{km},\, s^{(r)}_{km}\right) q^{(r)}_{Z_i}(k)} \quad (9)$$

where $\delta(y; \mu, s) = (y - \mu)^2 / s$ is the squared Mahalanobis distance.
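Update (9) vectorizes directly; a minimal numpy sketch, assuming the array shapes given in the comments:

```python
import numpy as np

def update_weight_means(y, q_z, mu, s, alpha, gam):
    """Posterior means of the weights, eq. (9).

    y : (N, M) intensities; q_z : (N, K) posterior label probabilities
    mu, s : (K, M) class means and variances
    alpha, gam : (N, M) Gamma shape and inverse-scale hyperparameters
    Returns the (N, M) expected weights.
    """
    # delta[i, k, m] = (y[i, m] - mu[k, m])**2 / s[k, m]
    delta = (y[:, None, :] - mu[None, :, :]) ** 2 / s[None, :, :]
    expected_delta = np.einsum('ik,ikm->im', q_z, delta)
    return (alpha + 0.5) / (gam + 0.5 * expected_delta)
```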

M
$\beta$ is solved using a mean-field approximation; $\phi$ is updated using

$$\mu^{(r+1)}_{km} = \frac{\sum_{i=1}^{N} q^{(r)}_{Z_i}(k)\, \bar{\omega}^{(r)}_{im}\, y_{im}}{\sum_{i=1}^{N} q^{(r)}_{Z_i}(k)\, \bar{\omega}^{(r)}_{im}}, \qquad s^{(r+1)}_{km} = \frac{\sum_{i=1}^{N} q^{(r)}_{Z_i}(k)\, \bar{\omega}^{(r)}_{im}\, \left(y_{im} - \mu^{(r+1)}_{km}\right)^2}{\sum_{i=1}^{N} q^{(r)}_{Z_i}(k)}$$
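The M-step for $\phi$ in the same vectorized style; note the variance denominator uses $\sum_i q^{(r)}_{Z_i}(k)$ only, matching the update above:

```python
import numpy as np

def update_gaussian_params(y, q_z, w_bar):
    """M-step updates for the class means and variances.

    y : (N, M); q_z : (N, K); w_bar : (N, M) expected weights.
    Means are weighted by q_z * w_bar in numerator and denominator;
    the variance denominator is the sum of q_z alone.
    """
    qw = q_z[:, :, None] * w_bar[:, None, :]            # (N, K, M)
    mu = np.einsum('ikm,im->km', qw, y) / qw.sum(axis=0)
    resid2 = (y[:, None, :] - mu[None, :, :]) ** 2      # (N, K, M)
    s = np.einsum('ikm,ikm->km', qw, resid2) / q_z.sum(axis=0)[:, None]
    return mu, s
```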

Lesion Segmentation Procedure

Expert Weighting
- A region $L$ defines candidate lesions; $L$ is weighted with $\omega_L > 1$ (and $\omega_{\setminus L} = 1$ elsewhere)
- Setting: $K = 3$, $\omega^{\exp}_{im} = 1$ and $\gamma_{im} = 1$
- Threshold the most informative weight map (chi-square percentile); see the sketch below
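A hypothetical sketch of the thresholding step, under the assumption that the per-voxel statistic behind the weight map is a single squared standardized residual, i.e. approximately chi-square with 1 degree of freedom; the function name, the percentile, and the choice of 1 degree of freedom are all our assumptions rather than the paper's exact recipe:

```python
from scipy.stats import chi2

def threshold_weight_map(delta_map, p=0.95, df=1):
    """Keep voxels whose statistic exceeds the p-th chi-square percentile.

    delta_map : array of per-voxel (y - mu)**2 / s values from the most
    informative sequence (assumed here). Returns a boolean candidate mask.
    """
    return delta_map > chi2.ppf(p, df=df)
```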

Inlier Setting
- $K = 4$, initialized with the $K = 3$ result (plus thresholded lesions)
- $\gamma_{im} = \gamma_L$ for $i \in L$ ($\gamma_L = 10$)
- $\gamma_{im} = \gamma_{\setminus L}$ for $i \in \setminus L$ ($\gamma_{\setminus L} = 1000$)
- $\omega_L$ depends on the number of voxels in $L$


Results

Simulated Data, 0% intensity inhomogeneity (IIH). Scores (DSC) by noise level; parentheses give AWEM's difference to the best competing method.

Mild lesions (0.02% of the voxels)
Method   3%        5%        7%        9%
AWEM     68 (+1)   49 (-21)  36 (+2)   12 (+8)
[1]      67        70        34        0
[3]      56        33        13        4
[2]      52        NA        NA        NA

Moderate lesions (0.18% of the voxels)
Method   3%        5%        7%        9%
AWEM     86 (+7)   80 (-1)   73 (+14)  64 (+27)
[1]      72        81        59        29
[3]      79        69        52        37
[2]      63        NA        NA        NA

Severe lesions (0.52% of the voxels)
Method   3%        5%        7%        9%
AWEM     92 (+7)   86 (-2)   78 (+6)   68 (+27)
[1]      79        88        72        41
[3]      85        72        56        41
[2]      82        NA        NA        NA

Simulated Data, 40% IIH. Same layout as above.

Mild lesions (0.02% of the voxels)
Method   3%        5%        7%        9%
AWEM     0 (-75)   0 (-65)   0 (-20)   0 (-30)
[1]      75        65        20        30
[3]      58        27        13        6

Moderate lesions (0.18% of the voxels)
Method   3%        5%        7%        9%
AWEM     52 (-24)  51 (-25)  52 (-15)  10 (-38)
[1]      75        76        67        48
[3]      76        64        47        31

Severe lesions (0.52% of the voxels)
Method   3%        5%        7%        9%
AWEM     87 (+1)   84 (+1)   77 (+3)   66 (+8)
[1]      75        83        74        58
[3]      86        74        62        45

Real Data (Rennes MS). LL is the lesion load; parentheses give AWEM's difference to EMS.

Patient     LL      EMS         AWEM
Patient 1   0.42    62          82 (+20)
Patient 2   1.71    54          56 (+2)
Patient 3   0.29    47          45 (-2)
Patient 4   1.59    65          72 (+7)
Patient 5   0.31    47          45 (-2)
Average             55 +/- 8    60 +/- 16

Figure: Real MS data, patient 3. (a) FLAIR image; (b) lesions identified with our approach (DSC 45%); (c) ground truth.


Figure: Real stroke data. (a) DW image; (b) lesions identified with our approach (DSC 63%); (c) ground truth.

Discussion & Future Work
- Extension to full covariance matrices: temporal multi-sequence data, e.g. patient follow-up
- Exploration of the Markov prior
- Other expert weighting schemes, possibly lesion-specific
- Extension to handle intensity inhomogeneities
- Evaluation in a semi-supervised context

References
[1] D. Garcia-Lorenzo, L. Lecoeur, D. L. Arnold, D. L. Collins, and C. Barillot. Multiple Sclerosis lesion segmentation using an automatic multimodal graph cuts. In MICCAI, pages 584-591, 2009.
[2] F. Rousseau, F. Blanc, J. de Seze, L. Rumbac, and J.-P. Armspach. An a contrario approach for outliers segmentation: application to multiple sclerosis in MRI. In IEEE ISBI, pages 9-12, 2008.
[3] K. Van Leemput, F. Maes, D. Vandermeulen, A. Colchester, and P. Suetens. Automated segmentation of multiple sclerosis lesions by model outlier detection. IEEE Transactions on Medical Imaging, 20(8):677-688, 2001.