An Implementation of Dynamic Causal Modelling
Christian Himpe
24.01.2011
Overview

Contents:
1. Intro
2. Model
3. Parameter Estimation
4. Examples
Motivation

How are brain regions coupled?
How does the connectivity change in an experimental context?
DCM Cycle

Cycle:
1. Form Hypothesis
2. Data Acquisition (via fMRI, EEG, MEG)
3. Data Preprocessing
4. (Select appropriate Model)
5. Tune model parameters to fit Data

An Example:
1. Hypothesis: Connectivity strengthens for selected brain regions under fear
2. Data Acquisition: Condition mice, trigger stimulus, record data
3. Preprocessing: Low-pass filtering
4. Model Selection: EEG (invasive) model
5. Evaluate with DCM software
Model

Model Principles:
- Dynamic
- Deterministic
- Multiple Inputs + Outputs
- Two-Component Model:
  - Dynamic Submodel (Coupling)
  - Forward Submodel (Signal)
- Submodels differ by data acquisition method (fMRI, EEG/MEG)
Dynamic Submodel (fMRI) I

General Description:
$\dot{z} = F(z, u, \theta)$

Bilinear Approximation:
$\dot{z} \approx \frac{\partial F}{\partial z} z + \frac{\partial F}{\partial u} u + \sum_k u_k \frac{\partial^2 F}{\partial z\,\partial u_k} z$

Reparametrization:
$\dot{z} \approx A z + C u + \sum_k u_k B^k z$
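The reparametrized form translates directly into code. A minimal sketch, assuming Eigen for the matrix types; the slides do not prescribe a linear algebra library, so all names here are illustrative:

```cpp
// Sketch: evaluate the bilinear right-hand side
//   zdot = A*z + C*u + sum_k u(k) * B[k] * z
// (Eigen assumed; the original implementation is custom C++.)
#include <Eigen/Dense>
#include <vector>

Eigen::VectorXd bilinear_rhs(const Eigen::MatrixXd& A,              // intrinsic coupling
                             const std::vector<Eigen::MatrixXd>& B, // modulation, one per input
                             const Eigen::MatrixXd& C,              // direct input weights
                             const Eigen::VectorXd& z,              // neuronal states
                             const Eigen::VectorXd& u) {            // external inputs
    Eigen::VectorXd zdot = A * z + C * u;
    for (std::size_t k = 0; k < B.size(); ++k)
        zdot += u(static_cast<Eigen::Index>(k)) * (B[k] * z); // u_k modulates coupling
    return zdot;
}
```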
Dynamic Submodel (fMRI) II
Dynamic Submodel (EEG) I

Construction:
- Tripartitioning
- Coupling Ruleset (Forward, Backward, Lateral)

Impulse Response Kernel: $p(t) = \begin{cases} \frac{H_e}{\tau_e}\, t \exp(-t/\tau_e) & (t \ge 0) \\ 0 & (t < 0) \end{cases}$

Sigmoid Function: $S(x) = \frac{2 e_0}{1 + \exp(-r x)} - e_0$

Convolution: $p(t) * u(t) = x$, i.e.
$\ddot{x} = \frac{H_e}{\tau_e} u(t) - \frac{2}{\tau_e} \dot{x}(t) - \frac{1}{\tau_e^2} x(t)$
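The convolution $p(t) * u(t) = x$ is equivalent to the second-order ODE above, so it can be integrated as two first-order states. A minimal sketch; the default values of $e_0$ and $r$ are typical Jansen-Rit constants, assumed here rather than taken from the slides:

```cpp
// Sketch: the kernel convolution x = p(t) * u(t) as a pair of first-order
// ODEs, plus the sigmoid firing-rate function S(x).
#include <cmath>

struct AlphaKernelState { double x = 0.0, xdot = 0.0; };

// Derivatives for  xddot = (He/tau_e)*u - (2/tau_e)*xdot - x/tau_e^2
void kernel_derivatives(const AlphaKernelState& s, double u,
                        double He, double tau_e,
                        double& dx, double& dxdot) {
    dx    = s.xdot;
    dxdot = (He / tau_e) * u - (2.0 / tau_e) * s.xdot - s.x / (tau_e * tau_e);
}

// S(x) = 2*e0 / (1 + exp(-r*x)) - e0, mapping membrane potential to firing rate
// (e0 = 2.5, r = 0.56 are assumed Jansen-Rit defaults)
double sigmoid(double x, double e0 = 2.5, double r = 0.56) {
    return 2.0 * e0 / (1.0 + std::exp(-r * x)) - e0;
}
```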
Dynamic Submodel (EEG) II

Neuronal State Equation:
$x_0 = x_5 - x_6$
$\dot{x}_1 = x_4, \quad \dot{x}_4 = \frac{H_e}{\tau_e}\big((C_F + C_L + \gamma_1 I) S(x_0) + C_U u\big) - \frac{2}{\tau_e} x_4 - \frac{1}{\tau_e^2} x_1$
$\dot{x}_7 = x_8, \quad \dot{x}_8 = \frac{H_e}{\tau_e}\big((C_B + C_L + \gamma_3 I) S(x_0)\big) - \frac{2}{\tau_e} x_8 - \frac{1}{\tau_e^2} x_7$
$\dot{x}_2 = x_5, \quad \dot{x}_5 = \frac{H_e}{\tau_e}\big((C_B + C_L) S(x_0) + \gamma_2 S(x_1)\big) - \frac{2}{\tau_e} x_5 - \frac{1}{\tau_e^2} x_2$
$\dot{x}_3 = x_6, \quad \dot{x}_6 = \frac{H_i}{\tau_i} \gamma_4 S(x_7) - \frac{2}{\tau_i} x_6 - \frac{1}{\tau_i^2} x_3$
Forward Submodel (fMRI)

Hemodynamic Equation:
$\dot{s}_i = z_i - \kappa s_i - \gamma (f_i - 1)$
$\dot{f}_i = s_i$
$\dot{v}_i = \frac{1}{\tau} (f_i - v_i^{1/\alpha})$
$\dot{q}_i = \frac{1}{\tau} \left( \frac{f_i}{\rho} \big(1 - (1 - \rho)^{1/f_i}\big) - v_i^{1/\alpha - 1} q_i \right)$
$y_i = V_0 \big( \gamma_1 \rho (1 - q_i) + \gamma_2 (1 - \tfrac{q_i}{v_i}) + (\gamma_3 \rho - 0.02)(1 - v_i) \big)$
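A minimal sketch of these hemodynamic state equations for a single region; the default parameter values are common priors from the DCM literature, assumed here rather than taken from the slides:

```cpp
// Sketch: balloon-model state derivatives for one region.
// States: s (vasodilatory signal), f (blood flow), v (volume), q (deoxyhemoglobin).
#include <cmath>

struct HemoState { double s, f, v, q; };

HemoState hemodynamic_rhs(const HemoState& x, double z,   // z: neuronal state input
                          double kappa = 0.65, double gamma = 0.41,
                          double tau = 0.98, double alpha = 0.32,
                          double rho = 0.34) {             // assumed typical values
    HemoState d;
    d.s = z - kappa * x.s - gamma * (x.f - 1.0);
    d.f = x.s;
    d.v = (x.f - std::pow(x.v, 1.0 / alpha)) / tau;
    d.q = ((x.f / rho) * (1.0 - std::pow(1.0 - rho, 1.0 / x.f))
           - std::pow(x.v, 1.0 / alpha - 1.0) * x.q) / tau;
    return d;
}
```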
Forward Submodel (EEG)

EEG Forward Model:
- non-invasive: $y = L K x_0$
- invasive: $y = K x_0$
Development

fMRI:
- Balloon Model (Buxton, 1998)
- Hemodynamic Model (Friston, 2000)
- Bayesian Estimation of Dynamical Systems (Friston, 2001)
- Dynamic Causal Modelling for fMRI (Friston, 2003)

EEG:
- Jansen Model (Jansen + Rit, 1995)
- Neural Mass Model (David + Friston, 2003)
- Modelling Event-Related Responses (David + Friston, 2005)
- Dynamic Causal Modelling for EEG/MEG (David + Friston, 2006)
Parameter Overview

fMRI Parameters:
- Coupling
- Scale
- Hemodynamic

EEG Parameters:
- Coupling
- Gain
- Synaptic
- Input Contribution

Both:
- Drift
Estimation

Preconditions:
- Data Model: $y = h(z, u, \theta) + X\beta + \epsilon$
- Gaussian assumption for the parameter distribution
- Prior knowledge
- Bayes' Rule: $P(\theta \mid y) \propto P(y \mid \theta) \cdot P(\theta)$
- Unknown covariance $\Rightarrow$ parametrization of the covariance
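As a worked special case of Bayes' rule under these preconditions: if $h$ is linearized in $\theta$ and both noise and prior are Gaussian, the posterior is Gaussian in closed form (a standard result, stated here for intuition; the E-Step below recovers exactly this shape via an augmented system):

```latex
% Linear-Gaussian special case (standard result, not from the slides):
% y = J\theta + \epsilon,\ \epsilon \sim N(0, C_\epsilon),\ \theta \sim N(\eta_\theta, C_\theta)
\begin{align}
C_{\theta|y}    &= \left(J^T C_\epsilon^{-1} J + C_\theta^{-1}\right)^{-1} \\
\eta_{\theta|y} &= \eta_\theta + C_{\theta|y}\, J^T C_\epsilon^{-1} \left(y - J \eta_\theta\right)
\end{align}
```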
EM-Algorithm

Posterior Estimation:
- Mean estimation: least-squares estimator
- Covariance estimation: scoring algorithm
- Two-step procedure (EM-Algorithm)
E-Step

E-Step:
1. Residuals: $\bar{y} = \begin{pmatrix} y - h(\eta^i_{\theta|y}) \\ \eta_\theta - \eta^i_{\theta|y} \end{pmatrix}$
2. Parameter Jacobian (Design Matrix): $\bar{J} = \begin{pmatrix} J & X \\ 1 & 0 \end{pmatrix}$
3. Covariance Weights: $\bar{C}_\epsilon = \begin{pmatrix} \sum_i \lambda_i V \otimes \hat{Q}_i & 0 \\ 0 & C_\theta \end{pmatrix}$
4. Posterior Covariance: $C_{\theta|y} = (\bar{J}^T \bar{C}_\epsilon^{-1} \bar{J})^{-1}$
5. Posterior Mean: $\eta^{n+1}_{\theta|y} = \eta^n_{\theta|y} + C_{\theta|y} (\bar{J}^T \bar{C}_\epsilon^{-1} \bar{y})$
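Steps 4 and 5 in code, as a minimal dense sketch assuming Eigen; the actual implementation works with sparse matrices, and all names here are illustrative:

```cpp
// Sketch: one E-step update (Gauss-Newton step on the augmented system).
#include <Eigen/Dense>

void e_step(const Eigen::MatrixXd& Jbar,   // augmented Jacobian [J X; 1 0]
            const Eigen::VectorXd& ybar,   // stacked residuals
            const Eigen::MatrixXd& Ceps,   // augmented covariance weights
            Eigen::MatrixXd& C_post,       // out: posterior covariance
            Eigen::VectorXd& eta)          // in/out: posterior mean
{
    // Ceps is symmetric positive definite, so solve via Cholesky (LLT)
    Eigen::LLT<Eigen::MatrixXd> llt(Ceps);
    Eigen::MatrixXd CiJ = llt.solve(Jbar);        // Ceps^{-1} * Jbar
    C_post = (Jbar.transpose() * CiJ).inverse();  // (J' Ceps^{-1} J)^{-1}
    eta += C_post * (CiJ.transpose() * ybar);     // mean update
}
```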
M-Step

M-Step:
1. Residual Forming Matrix: $P = \bar{C}_\epsilon^{-1} - \bar{C}_\epsilon^{-1} \bar{J} C_{\theta|y} \bar{J}^T \bar{C}_\epsilon^{-1}$
2. 1st NFE/LL Derivative: $g_i = -\frac{1}{2} \mathrm{tr}(P Q_i) + \frac{1}{2} \bar{y}^T P^T Q_i P \bar{y}$
3. Expected 2nd NFE/LL Derivative: $H_{ij} = \frac{1}{2} \mathrm{tr}(P Q_i P Q_j)$
4. Hyperparameter Update: $\lambda^{n+1} = \lambda^n + H^{-1} g$
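The corresponding naive M-Step, again as a dense Eigen sketch with illustrative names; the improvements on the following slides avoid forming the products $PQ_iPQ_j$ explicitly:

```cpp
// Sketch: one M-step (Fisher scoring on the covariance hyperparameters).
#include <Eigen/Dense>
#include <vector>

void m_step(const Eigen::MatrixXd& P,               // residual forming matrix
            const Eigen::VectorXd& ybar,            // stacked residuals
            const std::vector<Eigen::MatrixXd>& Q,  // covariance basis Q_i
            Eigen::VectorXd& lambda)                // in/out: hyperparameters
{
    const int m = static_cast<int>(Q.size());
    Eigen::VectorXd g(m);
    Eigen::MatrixXd H(m, m);
    const Eigen::VectorXd Py = P * ybar;            // P is symmetric, so P' y = P y
    for (int i = 0; i < m; ++i) {
        // g_i = -1/2 tr(P Q_i) + 1/2 ybar' P' Q_i P ybar
        g(i) = -0.5 * (P * Q[i]).trace() + 0.5 * Py.dot(Q[i] * Py);
        for (int j = 0; j <= i; ++j) {
            // H_ij = 1/2 tr(P Q_i P Q_j), symmetric in (i, j)
            H(i, j) = H(j, i) = 0.5 * (P * Q[i] * P * Q[j]).trace();
        }
    }
    lambda += H.ldlt().solve(g); // Fisher scoring update
}
```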
Problems and Improvements

Problems and Improvements:
- Large Matrices: Sparse Matrices
- Matrix Multiplications: Parallelization
- Parameter Jacobian: Parallelization
- Inversion/Solving: Cholesky Decomposition
- Traces of (Complicated) Matrix Products: Recycling E-Step
- NFE/LL Derivatives: Parallelization

Some Linear Algebra:
$(AB)^T = B^T A^T$
$\mathrm{tr}(ABC) = \mathrm{tr}(BCA) = \mathrm{tr}(CAB)$
$\mathrm{tr}(AB) = \sum_{ij} A_{ij} B_{ji}$
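The last identity is the key saving: $\mathrm{tr}(AB)$ can be accumulated from the factors without ever forming the product, reducing an $O(n^3)$ multiplication to an $O(n^2)$ sum. A minimal sketch, assuming Eigen:

```cpp
// Sketch: tr(A*B) via tr(AB) = sum_ij A_ij * B_ji, avoiding the full product.
#include <Eigen/Dense>

double trace_of_product(const Eigen::MatrixXd& A, const Eigen::MatrixXd& B) {
    // Elementwise product of A with B' summed up; no n^3 matrix multiply
    return A.cwiseProduct(B.transpose()).sum();
}
```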
Improved E-Step

Improved E-Step (the same formulas, reorganized so that factors can be reused):
$\bar{y}$, $\bar{J}$, $\bar{C}_\epsilon$ computed as before
$J_A = \bar{C}_\epsilon^{-1} \bar{J}$, $\quad J_B = J_A^T$
$C_{\theta|y} = (\bar{J}^T \bar{C}_\epsilon^{-1} \bar{J})^{-1} = (J_B \bar{J})^{-1}$
$D = C_{\theta|y} J_B$
$\Delta\eta_{\theta|y} = D \bar{y}$
$\eta^{n+1}_{\theta|y} = \eta^n_{\theta|y} + C_{\theta|y}(\bar{J}^T \bar{C}_\epsilon^{-1} \bar{y}) = \eta^n_{\theta|y} + \Delta\eta_{\theta|y}$
Improved M-Step

Improved M-Step (reuses $J_A$ and $D$ from the E-Step):
$Q^A_i = Q_i \bar{C}_\epsilon^{-1}$, $\quad Q^B_i = Q^A_i \bar{J}$, $\quad Q^C_i = D Q^B_i$
$p_y = \bar{C}_\epsilon^{-1} \bar{y} - J_A \Delta\eta_{\theta|y} \;(= P\bar{y})$
$g_i = p_y^T Q_i p_y - \mathrm{tr}(Q^A_i) + \mathrm{tr}(Q^C_i)$
$H_{ij} = \mathrm{tr}(Q^A_i Q^A_j) - \mathrm{tr}(Q^A_i Q^B_j D) - \mathrm{tr}(Q^A_j Q^B_i D) + \mathrm{tr}(Q^C_i Q^C_j)$
$\lambda^{n+1} = \lambda^n + H^{-1} g$
(The factor $\frac{1}{2}$ is dropped from both $g$ and $H$, so the update $H^{-1} g$ is unchanged.)
Implementation

Implementation Notes:
- C++
- Sparse Matrices
- OpenMP
- Runge-Kutta-Fehlberg (see the sketch below)
- Modular Submodel Classes
- 6500 Lines of Code
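For the Runge-Kutta-Fehlberg item above, a minimal sketch of one embedded RKF 4(5) step with standard step-size control; the coefficients are the classic Fehlberg tableau, but the interface is an assumption, not the original submodel class design:

```cpp
// Sketch: one adaptive Runge-Kutta-Fehlberg 4(5) step for y' = f(t, y).
#include <Eigen/Dense>
#include <functional>
#include <cmath>

using Vec = Eigen::VectorXd;
using Rhs = std::function<Vec(double, const Vec&)>;

// Advances y from t by h on success and suggests a new step size h.
bool rkf45_step(const Rhs& f, double& t, Vec& y, double& h, double tol) {
    const Vec k1 = h * f(t, y);
    const Vec k2 = h * f(t + h / 4.0,       y + k1 / 4.0);
    const Vec k3 = h * f(t + 3.0 * h / 8.0, y + 3.0 / 32.0 * k1 + 9.0 / 32.0 * k2);
    const Vec k4 = h * f(t + 12.0 * h / 13.0,
                         y + 1932.0 / 2197.0 * k1 - 7200.0 / 2197.0 * k2
                           + 7296.0 / 2197.0 * k3);
    const Vec k5 = h * f(t + h,
                         y + 439.0 / 216.0 * k1 - 8.0 * k2
                           + 3680.0 / 513.0 * k3 - 845.0 / 4104.0 * k4);
    const Vec k6 = h * f(t + h / 2.0,
                         y - 8.0 / 27.0 * k1 + 2.0 * k2 - 3544.0 / 2565.0 * k3
                           + 1859.0 / 4104.0 * k4 - 11.0 / 40.0 * k5);
    // 4th- and 5th-order solutions; their difference estimates the local error
    const Vec y4 = y + 25.0 / 216.0 * k1 + 1408.0 / 2565.0 * k3
                     + 2197.0 / 4104.0 * k4 - k5 / 5.0;
    const Vec y5 = y + 16.0 / 135.0 * k1 + 6656.0 / 12825.0 * k3
                     + 28561.0 / 56430.0 * k4 - 9.0 / 50.0 * k5 + 2.0 / 55.0 * k6;
    const double err = (y5 - y4).norm();
    const bool accept = err <= tol;
    if (accept) { t += h; y = y5; }
    // Standard controller for an order-4 error estimate (guard avoids /0)
    h *= std::min(4.0, std::max(0.1, 0.84 * std::pow(tol / (err + 1e-300), 0.25)));
    return accept;
}
```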
Artificial EEG

[Figure slides: artificial EEG example results]
Real EEG

[Figure slides: real EEG example results]
Challenges

- Input Distribution
- Stimulus Adaptation
- Drift Order
- Intrinsic Connections
exit(0); Thank You