Anatomical Regularization on Statistical Manifolds for the Classification of Patients with Alzheimer's Disease
Rémi Cuingnet 1,2, Joan Alexis Glaunès 1,3, Marie Chupin 1, Habib Benali 2, and Olivier Colliot 1, for the Alzheimer's Disease Neuroimaging Initiative

1 Université Pierre et Marie Curie-Paris 6, CNRS UMR 7225, Inserm UMR S 975, Centre de Recherche de l'Institut Cerveau-Moelle (CRICM), Paris, France
2 Inserm, UMR S 678, LIF, Paris, France
3 MAP5, Université Paris 5 - René Descartes, Paris, France

Abstract. This paper introduces a continuous framework to spatially regularize support vector machines (SVM) for brain image analysis based on the Fisher metric. We show that, by considering the images as elements of a statistical manifold, one can define a metric that integrates various types of information. Based on this metric, replacing the standard SVM regularization with a Laplace-Beltrami regularization operator allows the classifier to integrate various types of constraints based on spatial and anatomical information. The proposed framework is applied to the classification of magnetic resonance (MR) images based on gray matter concentration maps from 137 patients with Alzheimer's disease and 162 elderly controls. The results demonstrate that the proposed classifier generates less noisy, and consequently more interpretable, feature maps with no loss of classification performance.

1 Introduction

Brain image analyses have widely relied on univariate voxel-wise analyses, such as voxel-based morphometry (VBM) for structural MRI [1]. In such analyses, brain images are first spatially registered to a common stereotaxic space, and mass univariate statistical tests are then performed at each voxel to detect significant group differences. However, the sensitivity of these approaches is limited when the differences are spatially complex and involve a combination of different voxels or brain structures [2].
Recently, there has been a growing interest in support vector machine (SVM) methods [3, 4] to overcome the limits of these univariate analyses. These approaches allow capturing complex multivariate relationships in the data and have

1 Data used in the preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.ucla.edu). A complete listing of ADNI investigators can be found at: to apply/adni Authorship List.pdf
been successfully applied to the individual classification of a variety of neurological and psychiatric conditions such as Alzheimer's disease [5-9], fronto-temporal dementia [5], schizophrenia [10] and Parkinsonian syndromes [11]. Moreover, the output of the SVM can also be analyzed to localize spatial patterns of discrimination, for example by drawing the coefficients of the optimal margin hyperplane (OMH) which, in the case of a linear SVM, live in the same space as the MRI data [6, 7]. However, voxel-based comparisons are subject to registration errors and inter-individual variability. Therefore, one of the problems with directly analyzing the OMH coefficients is that the corresponding maps are scattered and lack spatial coherence. This makes it difficult to give a meaningful interpretation of the maps, for example to localize the brain regions altered by a given pathology. This is because the regularization term of the standard linear SVM is not a spatial regularization. To overcome this limitation, Cuingnet et al. [12] proposed to directly enforce spatial consistency in the SVM by using the Laplacian of a regularization graph. They proposed a regularization graph which takes into consideration both spatial information (the location) and anatomical information (the tissue types). They combined spatial and anatomical information by modifying the local topology induced by the spatial information according to given anatomical priors (tissue types). Since the images are discrete, they used a discrete framework, graphs, to model local behaviors. Nevertheless, as the brain is intrinsically a continuous object, it seems more natural to describe local behaviors from the continuous viewpoint. This paper extends this spatial regularization framework to the continuous case.
In particular, we show that by considering images as elements of a statistical manifold equipped with the Fisher metric, various types of prior information, such as tissue information, atlas information and spatial proximity, can be taken into account. We then apply the proposed framework to the classification of MR images based on gray matter concentration maps and cortical thickness measures from patients with Alzheimer's disease and elderly controls. The results demonstrate that the proposed approach yields spatially and anatomically coherent discrimination patterns. It generates more interpretable feature maps with an increase in, or at least no loss of, classification performance.

2 Spatially Regularized SVM on Riemannian Manifolds

2.1 Background

In this contribution, we consider the case of brain images which are spatially normalized to a common stereotaxic space, as in many group studies or classification methods [6, 7, 9, 10, 13]. These images can be any characteristics extracted from the MRI, such as tissue concentration maps (as in VBM). Let $(x_s)_{s \in [1,N]}$ be the images of N subjects and $(y_s)_{s \in [1,N]} \in \{\pm 1\}^N$ their group labels (e.g. diagnosis). For each subject s, $x_s$ can be considered as a square-integrable real-valued function defined on a compact subset, V, of $\mathbb{R}^3$ or, more generally, on a compact
subset of a 3D Riemannian manifold. Let V be the domain of the 3D images. SVMs search for the hyperplane for which the margin between groups is maximal. The standard linear SVM solves the following optimization problem [3, 4]:

$$\left(w^{\mathrm{opt}}, b^{\mathrm{opt}}\right) = \arg\min_{w \in L^2(V),\, b \in \mathbb{R}} \frac{1}{N} \sum_{s=1}^{N} \ell_{\mathrm{hinge}}\!\left(y_s\left[\langle w, x_s\rangle_{L^2} + b\right]\right) + \lambda \|w\|_{L^2}^2 \qquad (1)$$

where $\lambda \in \mathbb{R}^+$ is the regularization parameter and $\ell_{\mathrm{hinge}}$ the hinge loss function defined as $\ell_{\mathrm{hinge}}: u \in \mathbb{R} \mapsto (1-u)_+$. With a linear SVM, the feature space is the same as the input space. Thus, when the input features are images, the weight map $w^{\mathrm{opt}}$ is also an image. This map qualitatively informs us about the role of the different brain regions in the classifier [9]. Since two neighboring regions should play a similar role in the classifier, $w^{\mathrm{opt}}$ should be smooth with respect to the topology of V. However, this is not guaranteed with the standard linear SVM because its regularization term is not a spatial regularization.

2.2 Regularization Operator

By considering the SVM from the regularization viewpoint [4], one can constrain $w^{\mathrm{opt}}$ to be smooth with respect to the topology of V. This is done through the definition of a regularization operator, P, defined as a linear map from a space $U \subset L^2(V)$ into $L^2(V)$. When P is bijective and symmetric,

$$\min_{u \in U,\, b \in \mathbb{R}} \frac{1}{N} \sum_{s=1}^{N} \ell_{\mathrm{hinge}}\!\left(y_s\left[\langle u, x_s\rangle_{L^2} + b\right]\right) + \lambda \|P u\|_{L^2}^2 \qquad (2)$$

is equivalent to a linear SVM on the data $(P^{-1} x_s)_s$. Similarly, it can be seen as an SVM minimization problem on the raw data with kernel K defined by $K(x_1, x_2) = \langle P^{-1} x_1, P^{-1} x_2 \rangle_{L^2}$. One then has to define the regularization operator P that yields the suitable regularization for the problem.

2.3 Spatial Regularization on a Compact Riemannian Manifold

Spatial regularization requires a notion of proximity between elements of V. In this paper, V is considered as a 3-dimensional compact Riemannian manifold with boundary, (M, g). The metric g then models the notion of proximity. On such spaces, the heat kernel exists [14, 15].
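The equivalence between the regularized problem (2) and a linear SVM on the preprocessed data can be illustrated numerically. The NumPy sketch below is a toy illustration only: a small diagonal operator stands in for the paper's regularization operator P, and "images" are random vectors. It builds the induced kernel matrix $K(x_1, x_2) = \langle P^{-1} x_1, P^{-1} x_2 \rangle$ and checks that it is a valid (symmetric, positive semi-definite) Gram matrix:

```python
import numpy as np

# Toy illustration (not the paper's 3D operator): a diagonal operator P
# standing in for the regularization operator, acting on flattened "images".
rng = np.random.default_rng(0)
N, d = 6, 20                                 # subjects, voxels
X = rng.normal(size=(N, d))                  # one image per row

penalties = 1.0 + np.arange(d) / d           # P amplifies later ("high-frequency") coords
P_inv = np.diag(1.0 / penalties)             # P^{-1} (P is symmetric and bijective)

# SVM with regularizer ||P u||^2  <=>  linear SVM on the data P^{-1} x_s,
# i.e. a kernel SVM on the raw data with K(x1, x2) = <P^{-1} x1, P^{-1} x2>.
Z = X @ P_inv                                # preprocessed data P^{-1} x_s
K = Z @ Z.T                                  # induced Gram matrix

# K is a valid SVM kernel matrix: symmetric and positive semi-definite.
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > -1e-10
```

Any off-the-shelf SVM accepting a precomputed kernel could then be trained on K directly, which is how the smoothed-data and kernel views of the same problem coincide.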
Therefore, the Laplacian regularization presented in [12] can be extended to compact Riemannian manifolds. Let $\Delta_g$ denote the Laplace-Beltrami operator.⁴ Let $(e_n)_{n \in \mathbb{N}}$ be an orthonormal basis of $L^2(V)$ of eigenvectors of $\Delta_g$ (with homogeneous Dirichlet boundary conditions) [14, 16] and $(\mu_n)_{n \in \mathbb{N}}$ the corresponding eigenvalues. We define:

$$U_\beta = \left\{ u = \sum_{n \in \mathbb{N}} u_n e_n \;\middle|\; (u_n)_{n \in \mathbb{N}} \in \ell^2 \text{ and } \left(e^{\frac{1}{2}\beta\mu_n} u_n\right)_{n \in \mathbb{N}} \in \ell^2 \right\}$$

⁴ Note that, with the convention used in this paper, in Euclidean space $\Delta_g = -\Delta$, where $\Delta$ is the Laplacian operator.
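A discrete analogue of this eigenbasis construction can be sketched in a few lines of NumPy. On a 1D interval with homogeneous Dirichlet boundary conditions, the eigenvectors of the (sign-flipped, so non-negative) discrete Laplacian play the role of the $e_n$, and damping the coefficients $u_n$ by $e^{-\beta\mu_n/2}$ suppresses exactly the high-frequency components that $U_\beta$ penalizes. The grid size and β below are arbitrary choices for illustration:

```python
import numpy as np

n, beta = 64, 0.01
h = 1.0 / (n + 1)

# Discrete Laplace-Beltrami analogue on [0, 1] with homogeneous Dirichlet
# boundary conditions (with the paper's sign convention, eigenvalues >= 0).
Delta = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2

mu, E = np.linalg.eigh(Delta)         # eigenvalues mu_n, eigenvectors e_n (columns)

def damp(u):
    """Multiply the n-th eigencomponent u_n by exp(-beta * mu_n / 2)."""
    return E @ (np.exp(-0.5 * beta * mu) * (E.T @ u))

x = np.linspace(h, 1.0 - h, n)
u = np.sin(np.pi * x) + 0.5 * np.random.default_rng(1).normal(size=n)
u_smooth = damp(u)

# High-frequency content, measured by the Dirichlet energy <u, Delta u>,
# strictly decreases under the damping.
energy = lambda v: float(v @ Delta @ v)
assert energy(u_smooth) < energy(u)
```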
where $\ell^2$ denotes the set of square-summable sequences. We chose the regularization operator $P_\beta : U_\beta \to L^2(V)$ defined as:

$$P_\beta : u = \sum_{n \in \mathbb{N}} u_n e_n \;\mapsto\; e^{\frac{1}{2}\beta\Delta_g} u = \sum_{n \in \mathbb{N}} e^{\frac{1}{2}\beta\mu_n} u_n e_n \qquad (3)$$

This penalizes the high-frequency components with respect to the topology of V.

3 Spatial Proximity

When the proximity is encoded by a Euclidean distance, this is equivalent to preprocessing the data with a Gaussian smoothing kernel with standard deviation $\sigma = \sqrt{\beta}$. However, such a metric does not take anatomical information into account. In this section, the goal is to define a metric that takes into account various types of prior information, such as tissue, atlas and location information. We first show that this can be done by considering the images as elements of a statistical manifold and using the Fisher metric. We then give some details about the computation of the Gram matrix.

3.1 Fisher Metric

The images are registered to a common space. Therefore, when considering some location $v \in \mathbb{R}^3$, the true location is known only up to the registration errors. Such spatial information can be modeled by a probability density function: $x \in \mathbb{R}^3 \mapsto p_{\mathrm{loc}}(x \mid v)$. A simple example would be $p_{\mathrm{loc}}(\cdot \mid v) \sim \mathcal{N}(v, \sigma_{\mathrm{loc}}^2)$. It can be seen as a confidence index about the spatial localization at voxel v. We further assume that we are given an anatomical or functional atlas A composed of R regions: $\{A_r\}_{r=1}^{R}$. Therefore, at each point $v \in V$, we have a probability distribution $p_{\mathrm{atlas}}(\cdot \mid v)$ over the atlas regions which informs us about the atlas region at v. As a result, at each point $v \in \mathbb{R}^3$, we have some information about the spatial location and some anatomical information through the atlas. Such information can be modeled by a probability density function $p(\cdot \mid v)$ on $A \times \mathbb{R}^3$. Therefore, we consider the parametric family of probability distributions:

$$M = \left\{ p(\cdot \mid v) \text{ on } A \times \mathbb{R}^3 \right\}_{v \in V}$$

In the following, we further assume that $p_{\mathrm{loc}}$ and $p_{\mathrm{atlas}}$ are independent. Thus, p verifies: $p((A_r, x) \mid v) = p_{\mathrm{atlas}}(A_r \mid v)\, p_{\mathrm{loc}}(x \mid v)$ for all $(A_r, x) \in A \times \mathbb{R}^3$.
We also assume that p is sufficiently smooth in $v \in V$ and that the Fisher information matrix is definite at each $v \in V$. Then the parametric family of probability distributions M can be considered as a differential manifold [17]. A natural way to encode proximity on M is to use the Fisher metric, since this metric is invariant under reparametrization of the manifold. M endowed with the Fisher metric is a compact Riemannian manifold [17]. The metric tensor g is then given, for all $v \in V$ and $1 \le i, j \le 3$, by:

$$g_{ij}(v) = \mathbb{E}_v\!\left[ \frac{\partial \log p(\cdot \mid v)}{\partial v_i} \, \frac{\partial \log p(\cdot \mid v)}{\partial v_j} \right]$$

When $p_{\mathrm{loc}}(\cdot \mid v) \sim \mathcal{N}(v, \sigma_{\mathrm{loc}}^2 I_3)$, we have:

$$g_{ij}(v) = g_{ij}^{\mathrm{atlas}}(v) + \frac{\delta_{ij}}{\sigma_{\mathrm{loc}}^2}$$
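The spatial term $\delta_{ij}/\sigma_{\mathrm{loc}}^2$ of this metric can be checked by Monte Carlo. The sketch below is a hedged 1D check that ignores the atlas term: the Fisher information of a Gaussian location family $\mathcal{N}(v, \sigma_{\mathrm{loc}}^2)$ is $\mathbb{E}_v[(\partial_v \log p)^2] = 1/\sigma_{\mathrm{loc}}^2$.

```python
import numpy as np

# Monte Carlo check that, for p_loc(. | v) = N(v, sigma_loc^2), the Fisher
# information in the location parameter v equals 1 / sigma_loc^2, i.e. the
# spatial part delta_ij / sigma_loc^2 of the metric tensor (1D case).
rng = np.random.default_rng(0)
sigma_loc, v = 5.0, 0.0

x = rng.normal(loc=v, scale=sigma_loc, size=2_000_000)
score = (x - v) / sigma_loc**2          # d/dv log p_loc(x | v)
g_loc = np.mean(score**2)               # E_v[score^2]

# For sigma_loc = 5 this should be close to 1 / 25 = 0.04.
assert abs(g_loc - 1.0 / sigma_loc**2) < 1e-3
```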
3.2 Computing the Gram Matrix

The computation of the kernel matrix requires the computation of $e^{-\beta\Delta_g} x_s$ for all subjects of the training set. The eigendecomposition of the Laplace-Beltrami operator is intractable given the number of voxels in a brain image. Hence $e^{-\beta\Delta_g} x_s$ is computed as the solution at time $t = \beta$ of the heat equation with homogeneous Dirichlet boundary conditions, of unknown u:

$$\frac{\partial u}{\partial t} + \Delta_g u = 0; \qquad u(t=0) = x_s \qquad (4)$$

To solve equation (4), one can use a variational approach [18]. We used rectangular finite elements $\{\varphi^{(i)}\}$ in space and an explicit finite-difference scheme for the time discretization. Let $\delta_x$ and $\delta_t$ denote the space step and the time step respectively. Let U(t) denote the coordinates of u(t), and $U^n$ the coordinates of $u(t = n\delta_t)$. This leads to:

$$M \frac{dU}{dt}(t) + K U(t) = 0; \qquad U(t=0) = U^0 \qquad (5)$$

with

$$K_{i,j} = \int_M \langle \nabla_M \varphi^{(i)}, \nabla_M \varphi^{(j)} \rangle \, d\mu_M \quad \text{and} \quad M_{i,j} = \int_M \varphi^{(i)} \varphi^{(j)} \, d\mu_M \qquad (6)$$

where K is the stiffness matrix and M is the mass matrix. With the explicit finite-difference scheme for the time discretization, $U^{n+1}$ is given by: $M U^{n+1} = (M - \delta_t K)\, U^n$. The space step $\delta_x$ is fixed by the MRI spatial resolution. The time step $\delta_t$ is then chosen so as to respect the Courant-Friedrichs-Lewy (CFL) condition: $\delta_t \le 2 (\max_i \lambda_i)^{-1}$, where the $\lambda_i$ are the eigenvalues of the generalized eigenproblem $K U = \lambda M U$. Therefore, the computational complexity is $O(N \beta (\max_i \lambda_i)\, d)$. To compute the optimal time step $\delta_t$, we estimated the largest eigenvalue with the power iteration method. In our experiments, for $\sigma_{\mathrm{loc}} = 5$, $\lambda_{\max} \approx 15.4$.

3.3 Setting the Diffusion Parameter β

Our method requires the tuning of two parameters, $\sigma_{\mathrm{loc}}$ and β. The parameter $\sigma_{\mathrm{loc}}$ was chosen a priori. As evaluating the spectrum of the Laplacian operator is intractable given the image sizes, β was chosen to be equivalent to the diffusion parameter of Gaussian smoothing, $\beta = \sigma^2$, where σ is the standard deviation of the Gaussian smoothing kernel.
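The solver pipeline of Section 3.2 can be sketched in 1D with finite differences (an illustrative stand-in, not the paper's finite-element implementation; with finite differences the mass matrix M is the identity): estimate $\lambda_{\max}$ by power iteration, pick $\delta_t$ from the CFL bound, and integrate the heat equation explicitly up to $t = \beta$:

```python
import numpy as np

n, dx, beta = 100, 1.0, 4.0
rng = np.random.default_rng(2)

# Stiffness matrix K: discrete Laplacian with Dirichlet boundaries.
K = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / dx**2

# Power iteration for the largest eigenvalue of K U = lambda U
# (the generalized problem K U = lambda M U reduces to this since M = I here).
v = rng.normal(size=n)
for _ in range(200):
    v = K @ v
    v /= np.linalg.norm(v)
lam_max = float(v @ K @ v)

dt = 1.0 / lam_max                 # safely within the CFL bound dt <= 2 / lam_max
steps = int(np.ceil(beta / dt))

x0 = rng.normal(size=n)            # noisy "image" x_s
u = x0.copy()
for _ in range(steps):
    u = u - dt * (K @ u)           # explicit scheme: U^{n+1} = (I - dt K) U^n

# The solution at t ~ beta is a smoothed version of x_s: the scheme is
# stable and the high-frequency (Dirichlet) energy has decreased.
assert np.all(np.isfinite(u))
assert float(u @ K @ u) < float(x0 @ K @ x0)
```

Violating the CFL bound (e.g. dt = 3 / lam_max) would make the highest-frequency mode grow at each step instead of decaying, which is why the paper estimates the largest generalized eigenvalue before time-stepping.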
To be comparable with the Euclidean case, we first normalized g by:

$$\left( \frac{1}{|V|} \int_V \frac{1}{3} \operatorname{tr}\!\left( g^{-\frac{1}{2}}(u) \right) du \right)^2$$

4 Experiments and Results

In this section, the proposed framework is applied to the analysis of MR images using gray matter concentration maps from patients with Alzheimer's disease and elderly controls.
4.1 Materials

Subjects and MRI acquisition. Data used in the preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The Principal Investigator of this initiative is Michael W. Weiner, MD, VA Medical Center and University of California, San Francisco. ADNI is the result of efforts of many co-investigators from academic institutions and private corporations. For up-to-date information, see the ADNI website. We used the same study population as in [9]. As a result, 299 subjects were selected: 162 cognitively normal elderly controls (76 males, 86 females; age ± SD [range] = 76.3 ± 5.4 [60-90] years; mini-mental score (MMS) = 29.2 ± 1.0 [25-30]) and 137 patients with AD (67 males, 70 females; age = 76.0 ± 7.3 [55-91] years; MMS = 23.2 ± 2.0 [18-27]). The T1-weighted MR images described in [19] were used in this study.

Feature extraction. All images were segmented into gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) using the SPM5 unified segmentation routine [20] and spatially normalized using the DARTEL diffeomorphic registration algorithm [21] with the default parameters. The features are the modulated GM probability maps in the MNI space.

4.2 Classification Experiments

We tested the spatial regularization with both the Euclidean metric and the Fisher metric. In the following, they are referred to as Regul-Euclidean and Regul-Fisher respectively. The atlas information used was only the tissue types (GM, WM and CSF templates). To assess the impact of the regularization, we also performed the classification experiments with no regularization: Direct.

Optimal coefficient maps. The optimal SVM weights $w^{\mathrm{opt}}$ for different values of β are shown in Figure 1. When no spatial regularization is carried out (a), the $w^{\mathrm{opt}}$ maps are noisy and scattered. With Euclidean spatial regularization (b-c), they become smoother and more spatially consistent.
However, the Euclidean regularization mixes tissues and does not respect the topology of the cortex. With the Fisher metric (d-e), the obtained map is much more consistent with the brain anatomy. Compared to the Euclidean regularization, it better respects the topology of the cortex (Fig. 2). The main regions in which atrophy increases the likelihood of being classified as AD (regions in red) are: the medial temporal lobe, the inferior and middle temporal gyri, the posterior cingulate and the posterior middle frontal gyri.

Classification performance. In order to obtain unbiased estimates of the performance, the set of participants was randomly split into two groups of the same size: a training set and a testing set. On the training set, a grid search with leave-one-out cross-validation was used to estimate the optimal
values of the hyperparameters: the cost parameter C ($\lambda = \frac{1}{2NC}$) of the linear C-SVM ($10^{-5}, \ldots, 10^{3}$), the FWHM (0, 2, ..., 8 mm) and $\sigma_{\mathrm{loc}}$ (5, 10 mm). The performance of the resulting classifiers was then evaluated on the testing set. Classification accuracy was slightly improved by spatially regularizing the SVM with the Fisher metric: Direct: 89%, Regul-Euclidean: 89%, Regul-Fisher: 91%, COMPARE [10]: 86%, STAND-Score [7]: 81%.

Fig. 1. Normalized $w^{\mathrm{opt}}$ coefficients for: (a) Direct; (b-c) Regul-Euclidean with FWHM = 4 mm and 8 mm respectively; (d-e) Regul-Fisher with FWHM = 4 mm and 8 mm respectively ($\sigma_{\mathrm{loc}}$ = 10). In all experiments, C = 1.

Fig. 2. Gray matter probability map of a control subject ((a) original map) preprocessed with: (b) a 4 mm FWHM Gaussian kernel; (c) an 8 mm FWHM Gaussian kernel; (d)-(e) $e^{-\frac{\beta}{2}\Delta_g}$ with β corresponding to a 4 mm and an 8 mm FWHM respectively.

5 Conclusion

In conclusion, this paper presented a continuous framework to spatially regularize SVMs for brain image analysis based on the Fisher metric. By considering the images as elements of a statistical manifold, one can define a metric that integrates various types of information. Based on this metric, replacing the standard SVM regularization with a Laplace-Beltrami regularization operator allows the classifier to integrate various types of constraints based on spatial and anatomical information. The proposed approach makes the results more consistent with the anatomy, making their interpretation more meaningful. Finally, it should be noted that the proposed approach is not specific to structural MRI and can be applied to other pathologies and other types of data (e.g. functional or diffusion-weighted MRI).
Acknowledgements

This work was supported by ANR (project HM-TC, number ANR-09-EMER-006). Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904).

References

1. Ashburner, J., Friston, K.J.: Voxel-based morphometry: the methods. NeuroImage 11(6) (2000)
2. Davatzikos, C.: Why voxel-based morphometric analysis should be used with great caution when characterizing group differences. NeuroImage 23(1) (2004)
3. Vapnik, V.N.: The Nature of Statistical Learning Theory. Springer-Verlag (1995)
4. Schölkopf, B., Smola, A.J.: Learning with Kernels. MIT Press (2001)
5. Davatzikos, C., et al.: Individual patient diagnosis of AD and FTD via high-dimensional pattern classification of MRI. NeuroImage 41(4) (2008)
6. Klöppel, S., et al.: Automatic classification of MR scans in Alzheimer's disease. Brain 131(3) (2008)
7. Vemuri, P., et al.: Alzheimer's disease diagnosis in individual subjects using structural MR images: Validation studies. NeuroImage 39(3) (2008)
8. Gerardin, É., et al.: Multidimensional classification of hippocampal shape features discriminates Alzheimer's disease and mild cognitive impairment from normal aging. NeuroImage 47(4) (2009)
9. Cuingnet, R., et al.: Automatic classification of patients with Alzheimer's disease from structural MRI: A comparison of ten methods using the ADNI database. NeuroImage 56(2) (2011)
10. Fan, Y., et al.: COMPARE: classification of morphological patterns using adaptive regional elements. IEEE Transactions on Medical Imaging 26(1) (2007)
11. Duchesne, S., et al.: Automated computer differential classification in Parkinsonian syndromes via pattern analysis on MRI. Academic Radiology 16(1) (2009)
12. Cuingnet, R., et al.: Spatially regularized SVM for the detection of brain areas associated with stroke outcome. In: MICCAI. Volume 6361 of LNCS.
(2010)
13. Querbes, O., et al.: Early diagnosis of Alzheimer's disease using cortical thickness: impact of cognitive reserve. Brain 132(8) (2009)
14. Jost, J.: Riemannian Geometry and Geometric Analysis. Springer-Verlag (2008)
15. Lafferty, J., Lebanon, G.: Diffusion kernels on statistical manifolds. JMLR 6 (2005)
16. Hebey, E.: Sobolev Spaces on Riemannian Manifolds. Springer-Verlag (1996)
17. Amari, S.I., et al.: Differential Geometry in Statistical Inference. Volume 10. Institute of Mathematical Statistics (1987)
18. Druet, O., Hebey, E., Robert, F.: Blow-up Theory for Elliptic PDEs in Riemannian Geometry. Princeton University Press (2004)
19. Jack, C.R., et al.: The Alzheimer's Disease Neuroimaging Initiative (ADNI): MRI methods. Journal of Magnetic Resonance Imaging 27(4) (2008)
20. Ashburner, J., Friston, K.J.: Unified segmentation. NeuroImage 26(3) (2005)
21. Ashburner, J.: A fast diffeomorphic image registration algorithm. NeuroImage 38(1) (2007)
More informationResearch Article Thalamus Segmentation from Diffusion Tensor Magnetic Resonance Imaging
Biomedical Imaging Volume 2007, Article ID 90216, 5 pages doi:10.1155/2007/90216 Research Article Thalamus Segmentation from Diffusion Tensor Magnetic Resonance Imaging Ye Duan, Xiaoling Li, and Yongjian
More informationFast Geodesic Regression for Population-Based Image Analysis
Fast Geodesic Regression for Population-Based Image Analysis Yi Hong 1, Polina Golland 2, and Miaomiao Zhang 2 1 Computer Science Department, University of Georgia 2 Computer Science and Artificial Intelligence
More informationADFINDER : MR IMAGE DIAGNOSIS TOOL TO AUTOMATICALLY DETECT ALZHEIMER S DISEASE
ADFINDER : MR IMAGE DIAGNOSIS TOOL TO AUTOMATICALLY DETECT ALZHEIMER S DISEASE A DISSERTATION SUBMITTED TO THE UNIVERSITY OF MANCHESTER FOR THE DEGREE OF MASTER OF SCIENCE IN THE FACULTY OF SCIENCE AND
More information17th Annual Meeting of the Organization for Human Brain Mapping. Multivariate cortical shape modeling based on sparse representation
17th Annual Meeting of the Organization for Human Brain Mapping Multivariate cortical shape modeling based on sparse representation Abstract No: 2207 Authors: Seongho Seo 1, Moo K. Chung 1,2, Kim M. Dalton
More informationConnection of Local Linear Embedding, ISOMAP, and Kernel Principal Component Analysis
Connection of Local Linear Embedding, ISOMAP, and Kernel Principal Component Analysis Alvina Goh Vision Reading Group 13 October 2005 Connection of Local Linear Embedding, ISOMAP, and Kernel Principal
More informationIntroduction to Machine Learning
10-701 Introduction to Machine Learning PCA Slides based on 18-661 Fall 2018 PCA Raw data can be Complex, High-dimensional To understand a phenomenon we measure various related quantities If we knew what
More informationMULTISCALE MODULARITY IN BRAIN SYSTEMS
MULTISCALE MODULARITY IN BRAIN SYSTEMS Danielle S. Bassett University of California Santa Barbara Department of Physics The Brain: A Multiscale System Spatial Hierarchy: Temporal-Spatial Hierarchy: http://www.idac.tohoku.ac.jp/en/frontiers/column_070327/figi-i.gif
More informationc 4, < y 2, 1 0, otherwise,
Fundamentals of Big Data Analytics Univ.-Prof. Dr. rer. nat. Rudolf Mathar Problem. Probability theory: The outcome of an experiment is described by three events A, B and C. The probabilities Pr(A) =,
More informationFirst Technical Course, European Centre for Soft Computing, Mieres, Spain. 4th July 2011
First Technical Course, European Centre for Soft Computing, Mieres, Spain. 4th July 2011 Linear Given probabilities p(a), p(b), and the joint probability p(a, B), we can write the conditional probabilities
More informationGraphs in Machine Learning
Graphs in Machine Learning Michal Valko Inria Lille - Nord Europe, France TA: Pierre Perrault Partially based on material by: Mikhail Belkin, Jerry Zhu, Olivier Chapelle, Branislav Kveton October 30, 2017
More informationSUPPORT VECTOR REGRESSION WITH A GENERALIZED QUADRATIC LOSS
SUPPORT VECTOR REGRESSION WITH A GENERALIZED QUADRATIC LOSS Filippo Portera and Alessandro Sperduti Dipartimento di Matematica Pura ed Applicata Universit a di Padova, Padova, Italy {portera,sperduti}@math.unipd.it
More informationHHS Public Access Author manuscript Brain Inform Health (2015). Author manuscript; available in PMC 2016 January 01.
GN-SCCA: GraphNet based Sparse Canonical Correlation Analysis for Brain Imaging Genetics Lei Du 1, Jingwen Yan 1, Sungeun Kim 1, Shannon L. Risacher 1, Heng Huang 2, Mark Inlow 3, Jason H. Moore 4, Andrew
More informationIntroduction to Machine Learning Midterm, Tues April 8
Introduction to Machine Learning 10-701 Midterm, Tues April 8 [1 point] Name: Andrew ID: Instructions: You are allowed a (two-sided) sheet of notes. Exam ends at 2:45pm Take a deep breath and don t spend
More informationCluster Kernels for Semi-Supervised Learning
Cluster Kernels for Semi-Supervised Learning Olivier Chapelle, Jason Weston, Bernhard Scholkopf Max Planck Institute for Biological Cybernetics, 72076 Tiibingen, Germany {first. last} @tuebingen.mpg.de
More informationRegistration of anatomical images using geodesic paths of diffeomorphisms parameterized with stationary vector fields
Registration of anatomical images using geodesic paths of diffeomorphisms parameterized with stationary vector fields Monica Hernandez, Matias N. Bossa, and Salvador Olmos Communication Technologies Group
More informationChemometrics: Classification of spectra
Chemometrics: Classification of spectra Vladimir Bochko Jarmo Alander University of Vaasa November 1, 2010 Vladimir Bochko Chemometrics: Classification 1/36 Contents Terminology Introduction Big picture
More informationStatistical Pattern Recognition
Statistical Pattern Recognition Feature Extraction Hamid R. Rabiee Jafar Muhammadi, Alireza Ghasemi, Payam Siyari Spring 2014 http://ce.sharif.edu/courses/92-93/2/ce725-2/ Agenda Dimensionality Reduction
More informationHow to learn from very few examples?
How to learn from very few examples? Dengyong Zhou Department of Empirical Inference Max Planck Institute for Biological Cybernetics Spemannstr. 38, 72076 Tuebingen, Germany Outline Introduction Part A
More informationW vs. QCD Jet Tagging at the Large Hadron Collider
W vs. QCD Jet Tagging at the Large Hadron Collider Bryan Anenberg: anenberg@stanford.edu; CS229 December 13, 2013 Problem Statement High energy collisions of protons at the Large Hadron Collider (LHC)
More informationFocus was on solving matrix inversion problems Now we look at other properties of matrices Useful when A represents a transformations.
Previously Focus was on solving matrix inversion problems Now we look at other properties of matrices Useful when A represents a transformations y = Ax Or A simply represents data Notion of eigenvectors,
More informationDictionary Learning on Riemannian Manifolds
Dictionary Learning on Riemannian Manifolds Yuchen Xie Baba C. Vemuri Jeffrey Ho Department of CISE, University of Florida, Gainesville FL, 32611, USA {yxie,vemuri,jho}@cise.ufl.edu Abstract. Existing
More informationHYPERGRAPH BASED SEMI-SUPERVISED LEARNING ALGORITHMS APPLIED TO SPEECH RECOGNITION PROBLEM: A NOVEL APPROACH
HYPERGRAPH BASED SEMI-SUPERVISED LEARNING ALGORITHMS APPLIED TO SPEECH RECOGNITION PROBLEM: A NOVEL APPROACH Hoang Trang 1, Tran Hoang Loc 1 1 Ho Chi Minh City University of Technology-VNU HCM, Ho Chi
More informationKernel Methods in Medical Imaging
This is page 1 Printer: Opaque this Kernel Methods in Medical Imaging G. Charpiat, M. Hofmann, B. Schölkopf ABSTRACT We introduce machine learning techniques, more specifically kernel methods, and show
More informationSelf-Tuning Semantic Image Segmentation
Self-Tuning Semantic Image Segmentation Sergey Milyaev 1,2, Olga Barinova 2 1 Voronezh State University sergey.milyaev@gmail.com 2 Lomonosov Moscow State University obarinova@graphics.cs.msu.su Abstract.
More informationDiffusion Tensor Imaging I: The basics. Jennifer Campbell
Diffusion Tensor Imaging I: The basics Jennifer Campbell Diffusion Tensor Imaging I: The basics Jennifer Campbell Diffusion Imaging MRI: many different sources of contrast T1W T2W PDW Perfusion BOLD DW
More informationTECHNICAL REPORT NO January 1, Tensor-Based Surface Morphometry
DEPARTMENT OF STATISTICS University of Wisconsin 1210 West Dayton St. Madison, WI 53706 TECHNICAL REPORT NO. 1049 January 1, 2002 Tensor-Based Surface Morphometry Moo K. Chung 1 Department of Statistics,
More informationSUPPORT VECTOR MACHINE
SUPPORT VECTOR MACHINE Mainly based on https://nlp.stanford.edu/ir-book/pdf/15svm.pdf 1 Overview SVM is a huge topic Integration of MMDS, IIR, and Andrew Moore s slides here Our foci: Geometric intuition
More informationShape Anisotropy: Tensor Distance to Anisotropy Measure
Shape Anisotropy: Tensor Distance to Anisotropy Measure Yonas T. Weldeselassie, Saba El-Hilo and M. Stella Atkins Medical Image Analysis Lab, School of Computing Science, Simon Fraser University ABSTRACT
More informationMark your answers ON THE EXAM ITSELF. If you are not sure of your answer you may wish to provide a brief explanation.
CS 189 Spring 2015 Introduction to Machine Learning Midterm You have 80 minutes for the exam. The exam is closed book, closed notes except your one-page crib sheet. No calculators or electronic items.
More informationRECENT technological achievements and globalization
1 Classification of Big Data with Application to Imaging Genetics Magnus O. Ulfarsson, Member, IEEE, Frosti Palsson, Student Member, IEEE, Jakob Sigurdsson Student Member, IEEE, Johannes R. Sveinsson,
More informationMotivating the Covariance Matrix
Motivating the Covariance Matrix Raúl Rojas Computer Science Department Freie Universität Berlin January 2009 Abstract This note reviews some interesting properties of the covariance matrix and its role
More informationMultiple Similarities Based Kernel Subspace Learning for Image Classification
Multiple Similarities Based Kernel Subspace Learning for Image Classification Wang Yan, Qingshan Liu, Hanqing Lu, and Songde Ma National Laboratory of Pattern Recognition, Institute of Automation, Chinese
More informationHierarchical Dirichlet Processes with Random Effects
Hierarchical Dirichlet Processes with Random Effects Seyoung Kim Department of Computer Science University of California, Irvine Irvine, CA 92697-34 sykim@ics.uci.edu Padhraic Smyth Department of Computer
More informationIncorporating Invariances in Nonlinear Support Vector Machines
Incorporating Invariances in Nonlinear Support Vector Machines Olivier Chapelle olivier.chapelle@lip6.fr LIP6, Paris, France Biowulf Technologies Bernhard Scholkopf bernhard.schoelkopf@tuebingen.mpg.de
More informationCOMS 4721: Machine Learning for Data Science Lecture 19, 4/6/2017
COMS 4721: Machine Learning for Data Science Lecture 19, 4/6/2017 Prof. John Paisley Department of Electrical Engineering & Data Science Institute Columbia University PRINCIPAL COMPONENT ANALYSIS DIMENSIONALITY
More informationBeyond the Point Cloud: From Transductive to Semi-Supervised Learning
Beyond the Point Cloud: From Transductive to Semi-Supervised Learning Vikas Sindhwani, Partha Niyogi, Mikhail Belkin Andrew B. Goldberg goldberg@cs.wisc.edu Department of Computer Sciences University of
More informationSparse Scale-Space Decomposition of Volume Changes in Deformations Fields
Sparse Scale-Space Decomposition of Volume Changes in Deformations Fields Lorenzi Marco 1, Bjoern H Menze 1,2, Marc Niethammer 3, Nicholas Ayache 1, and Xavier Pennec 1 for the Alzheimer's Disease Neuroimaging
More informationManifold Regularization
9.520: Statistical Learning Theory and Applications arch 3rd, 200 anifold Regularization Lecturer: Lorenzo Rosasco Scribe: Hooyoung Chung Introduction In this lecture we introduce a class of learning algorithms,
More informationKernel expansions with unlabeled examples
Kernel expansions with unlabeled examples Martin Szummer MIT AI Lab & CBCL Cambridge, MA szummer@ai.mit.edu Tommi Jaakkola MIT AI Lab Cambridge, MA tommi@ai.mit.edu Abstract Modern classification applications
More informationA short introduction to supervised learning, with applications to cancer pathway analysis Dr. Christina Leslie
A short introduction to supervised learning, with applications to cancer pathway analysis Dr. Christina Leslie Computational Biology Program Memorial Sloan-Kettering Cancer Center http://cbio.mskcc.org/leslielab
More informationVariational Inference for Image Segmentation
Variational Inference for Image Segmentation Claudia Blaiotta 1, M. Jorge Cardoso 2, and John Ashburner 1 1 Wellcome Trust Centre for Neuroimaging, University College London, London, UK 2 Centre for Medical
More informationStatistical Learning. Dong Liu. Dept. EEIS, USTC
Statistical Learning Dong Liu Dept. EEIS, USTC Chapter 6. Unsupervised and Semi-Supervised Learning 1. Unsupervised learning 2. k-means 3. Gaussian mixture model 4. Other approaches to clustering 5. Principle
More informationSupervised Learning Part I
Supervised Learning Part I http://www.lps.ens.fr/~nadal/cours/mva Jean-Pierre Nadal CNRS & EHESS Laboratoire de Physique Statistique (LPS, UMR 8550 CNRS - ENS UPMC Univ. Paris Diderot) Ecole Normale Supérieure
More informationDynamic Causal Modelling for fmri
Dynamic Causal Modelling for fmri André Marreiros Friday 22 nd Oct. 2 SPM fmri course Wellcome Trust Centre for Neuroimaging London Overview Brain connectivity: types & definitions Anatomical connectivity
More informationHigher Order Cartesian Tensor Representation of Orientation Distribution Functions (ODFs)
Higher Order Cartesian Tensor Representation of Orientation Distribution Functions (ODFs) Yonas T. Weldeselassie (Ph.D. Candidate) Medical Image Computing and Analysis Lab, CS, SFU DT-MR Imaging Introduction
More informationFinal Overview. Introduction to ML. Marek Petrik 4/25/2017
Final Overview Introduction to ML Marek Petrik 4/25/2017 This Course: Introduction to Machine Learning Build a foundation for practice and research in ML Basic machine learning concepts: max likelihood,
More informationL11: Pattern recognition principles
L11: Pattern recognition principles Bayesian decision theory Statistical classifiers Dimensionality reduction Clustering This lecture is partly based on [Huang, Acero and Hon, 2001, ch. 4] Introduction
More informationRician Noise Removal in Diffusion Tensor MRI
Rician Noise Removal in Diffusion Tensor MRI Saurav Basu, Thomas Fletcher, and Ross Whitaker University of Utah, School of Computing, Salt Lake City, UT 84112, USA Abstract. Rician noise introduces a bias
More informationEEG/MEG Inverse Solution Driven by fmri
EEG/MEG Inverse Solution Driven by fmri Yaroslav Halchenko CS @ NJIT 1 Functional Brain Imaging EEG ElectroEncephaloGram MEG MagnetoEncephaloGram fmri Functional Magnetic Resonance Imaging others 2 Functional
More informationGaussian Processes (10/16/13)
STA561: Probabilistic machine learning Gaussian Processes (10/16/13) Lecturer: Barbara Engelhardt Scribes: Changwei Hu, Di Jin, Mengdi Wang 1 Introduction In supervised learning, we observe some inputs
More informationParcellation of the Thalamus Using Diffusion Tensor Images and a Multi-object Geometric Deformable Model
Parcellation of the Thalamus Using Diffusion Tensor Images and a Multi-object Geometric Deformable Model Chuyang Ye a, John A. Bogovic a, Sarah H. Ying b, and Jerry L. Prince a a Department of Electrical
More informationTemplate estimation form unlabeled point set data and surfaces for Computational Anatomy
Template estimation form unlabeled point set data and surfaces for Computational Anatomy Joan Glaunès 1 and Sarang Joshi 1 Center for Imaging Science, Johns Hopkins University, joan@cis.jhu.edu SCI, University
More informationHuman Brain Networks. Aivoaakkoset BECS-C3001"
Human Brain Networks Aivoaakkoset BECS-C3001" Enrico Glerean (MSc), Brain & Mind Lab, BECS, Aalto University" www.glerean.com @eglerean becs.aalto.fi/bml enrico.glerean@aalto.fi" Why?" 1. WHY BRAIN NETWORKS?"
More informationMACHINE LEARNING. Methods for feature extraction and reduction of dimensionality: Probabilistic PCA and kernel PCA
1 MACHINE LEARNING Methods for feature extraction and reduction of dimensionality: Probabilistic PCA and kernel PCA 2 Practicals Next Week Next Week, Practical Session on Computer Takes Place in Room GR
More information