Bayesian Inference with Oscillator Models: A Possible Role of Neural Rhythms


1 Bayesian Inference with Oscillator Models: A Possible Role of Neural Rhythms. Qualcomm/Brain Corporation/INC Lecture Series on Computational Neuroscience, University of California, San Diego, March 5, 2012. Prashant G. Mehta, Department of Mechanical Science and Engineering and the Coordinated Science Laboratory, University of Illinois at Urbana-Champaign. Research supported by NSF and AFOSR.

2 Application: Gait Cycle, a Biological Rhythm


9 Application: Ankle-Foot Orthoses. Estimation of the gait cycle using sensor measurements. Ankle-foot orthoses (AFOs) are used for lower-limb neuromuscular impairments; they provide dorsiflexor (toe-lift) and plantarflexor (toe-push) torque assistance. AFO system components: power supply, solenoid valves (which control the flow of compressed CO2 to the actuator), actuator, and sensors (heel, toe, and ankle joint). Acknowledgement: Professor Liz Hsiao-Wecksler for sharing the AFO device picture and sensor data.


11 Application: Gait Cycle signal model. Stance phase / swing phase. Model (noisy oscillator): $\dot\theta(t) = \underbrace{\omega_0}_{\text{natural frequency}} + \text{noise}$

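The noisy-oscillator model $\dot\theta(t) = \omega_0 + \text{noise}$ can be simulated with a simple Euler-Maruyama scheme. A minimal sketch (the function name and parameter values are illustrative, not from the talk):

```python
import numpy as np

def simulate_gait_oscillator(omega0=2*np.pi, sigma=0.5, dt=0.01, T=10.0, seed=0):
    """Euler-Maruyama simulation of the noisy oscillator
    d(theta) = omega0 dt + sigma dB, with the phase wrapped to [0, 2*pi)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    theta = np.zeros(n)
    for k in range(1, n):
        theta[k] = theta[k-1] + omega0*dt + sigma*np.sqrt(dt)*rng.standard_normal()
    return np.mod(theta, 2*np.pi)
```

Each sweep of the phase through $[0, 2\pi)$ corresponds to one gait cycle.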

19 Application: Problem, estimate the gait cycle $\theta(t)$. Sensor (observation) model: $y(t) = h(\theta(t)) + \text{noise}$. Problem: what is $\theta(t)$, given noisy observations?


24 Application: Solution via Particle Filter, an algorithm to approximate the posterior distribution using a large number of oscillators.

Posterior distribution: $P(\varphi_1 < \theta(t) < \varphi_2 \mid \text{sensor readings}) \approx$ fraction of the $\theta^i(t)$ in the interval $(\varphi_1, \varphi_2)$.

Circuit: $\dot\theta^i(t) = \underbrace{\omega^i}_{\text{natural freq. of the } i\text{th oscillator}} + \text{noise}^i + \underbrace{u^i}_{\text{mean-field control}}, \qquad i = 1, \dots, N$

Feedback particle filter: design the control law $u^i(t)$.

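The posterior approximation on the slide, the fraction of oscillator phases lying in an interval, is a one-liner. A minimal sketch (the function name is ours):

```python
import numpy as np

def phase_interval_probability(thetas, phi1, phi2):
    """Posterior estimate P(phi1 < theta < phi2 | sensors) as the
    fraction of oscillator phases lying in the interval."""
    thetas = np.mod(thetas, 2*np.pi)
    return np.mean((thetas > phi1) & (thetas < phi2))
```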

30 Application: Simulation Results. Solution for the gait cycle estimation problem. [Movie not included in this transcription.]


33 Part I Oscillators

34 Oscillators in Biology: Oscillator Models in Neuroscience. Literature: dynamical systems & neuroscience.

37 Oscillators in Biology: Normal Form Reduction, derivation of the oscillator model.

$C \frac{dV}{dt} = -g_T\, m_\infty^2(V)\, h\, (V - E_T) - g_h\, r\, (V - E_h) - \cdots$
$\frac{dh}{dt} = \frac{h_\infty(V) - h}{\tau_h(V)}, \qquad \frac{dr}{dt} = \frac{r_\infty(V) - r}{\tau_r(V)}$

Normal form reduction: $\dot\theta_i = \omega_i + u_i\, \Phi(\theta_i)$

[5] J. Guckenheimer, J. Math. Biol., 1975; [1] J. Moehlis et al., Neural Computation.

38 Oscillators in Biology: Collective Dynamics of a Large Number of Oscillators. Synchrony, neural rhythms.

42 Oscillators in Biology: Synchronization. Kuramoto coupled oscillator model:

$\frac{d}{dt}\theta_i(t) = \omega_i + \frac{\kappa}{N} \sum_{j=1}^{N} \sin\big(\theta_j(t) - \theta_i(t)\big) + \sigma\, \xi_i(t), \qquad i = 1, \dots, N$

$\omega_i$ is taken from a distribution $g(\omega)$ over $[1-\gamma, 1+\gamma]$; $\gamma$ measures the heterogeneity of the population; $\kappa$ measures the strength of coupling.

[Phase diagram in the $(\gamma, \kappa)$ plane: synchrony (phase locking) for strong coupling, incoherence for $\kappa < \kappa_c(\gamma)$.]

[9] Y. Kuramoto, 1975; [14] Strogatz et al., J. Stat. Phys.
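The Kuramoto dynamics and the synchrony order parameter $R$ can be sketched as follows (Euler-Maruyama discretization; function names and parameter values are illustrative, not from the talk):

```python
import numpy as np

def kuramoto_step(theta, omega, kappa, sigma, dt, rng):
    """One Euler-Maruyama step of the noisy Kuramoto model."""
    N = len(theta)
    # pairwise coupling: (kappa/N) * sum_j sin(theta_j - theta_i)
    coupling = (kappa / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(N)
    return theta + (omega + coupling) * dt + noise

def order_parameter(theta):
    """R in [0, 1]: R near 1 means synchrony, R near 0 means incoherence."""
    return np.abs(np.mean(np.exp(1j * theta)))
```

Running the step with coupling well above the critical value drives $R$ toward 1, reproducing the synchrony regime of the phase diagram.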

43 Oscillators in Biology: Movies of the incoherence and synchrony solutions. [Movies not included in this transcription.]

44 Oscillators in Biology: Functional Role of Neural Rhythms. Is synchronization useful? Does it have a functional role? Books/review papers: Buzsaki; Destexhe; Ermentrout; Izhikevich; Kopell; Traub and Whittington (2009); Llinas and Ribary (2001); Pareti and Palma (2004); Sejnowski and Paulsen (2006); Singer (1993)... Computations: computing with intrinsic network states: Destexhe and Contreras (2006); Izhikevich (2006); Zhang and Ballard (2001). Synaptic plasticity: neurons that fire together wire together. And several other hypotheses: communication and information flow (Laughlin and Sejnowski); binding by synchrony (Singer); memory formation (Jutras and Fries); probabilistic decision making (Wang); stimulus competition and attention selection (Kopell); sleep/wakefulness/disease (Steriade).

45 Part II Bayesian Inference

46 Bayesian inference in Neuroscience: Prediction, the brain as a reality emulator. "[Prediction] is the primary function of the neocortex, and the foundation of intelligence. If we want to understand how your brain works, and how to build intelligent machines, we must understand the nature of these predictions and how the cortex makes them." "The capacity to predict the outcome of future events, critical to successful movement, is, most likely, the ultimate and most common of all brain functions."


50 Bayesian inference in Neuroscience 18 Bayesian Inference in Neuroscience Edited volumes

54 Bayesian inference in Neuroscience: Mathematics of prediction, Bayes' rule.

Signal (hidden): $X \sim P(X)$ (prior, known)
Observation: $Y$ (known)
Observation model: $P(Y \mid X)$ (known)
Problem: what is $X$?

Solution, Bayes' rule: $\underbrace{P(X \mid Y)}_{\text{Posterior}} \propto P(Y \mid X)\, \underbrace{P(X)}_{\text{Prior}}$

Challenge: implementing Bayes' rule in dynamic, nonlinear, non-Gaussian settings!

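On a discrete state space, the Bayes update above is a single normalized multiplication. A minimal sketch (the function name is ours):

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Bayes' rule on a discrete state space:
    posterior(x) is proportional to likelihood(y | x) * prior(x),
    normalized to sum to one."""
    unnormalized = likelihood * prior
    return unnormalized / unnormalized.sum()
```

For example, with a uniform prior over two hypotheses and likelihoods 0.9 and 0.1, the posterior concentrates on the first hypothesis.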

57 Part III Nonlinear Filtering

61 Nonlinear Filtering: Mathematical Problem.

Signal model: $\dot X_t = a(X_t) + \dot B_t, \quad X_0 \sim p_0(\cdot)$
Observation model: $Y_t = h(X_t) + \dot W_t$
Problem: what is $X_t$, given the observations up to time $t$, denoted $Y_0^t$?
Answer in terms of the posterior: $P(X_t \mid Y_0^t) =: p^*(x,t)$.

The posterior is an information state:
$P(X_t \in A \mid Y_0^t) = \int_A p^*(x,t)\, dx, \qquad E(X_t \mid Y_0^t) = \int_{\mathbb{R}} x\, p^*(x,t)\, dx$


71 Nonlinear Filtering: Kalman Filter, the solution in linear Gaussian settings.

$\dot X_t = \alpha X_t + \dot B_t \quad (1)$
$Y_t = \gamma X_t + \dot W_t \quad (2)$

Kalman filter: $p^* = N(\hat X_t, \Sigma_t)$, with
$\dot{\hat X}_t = \alpha \hat X_t + \underbrace{K\,(Y_t - \gamma \hat X_t)}_{\text{Update}}$

Observation: $Y_t = \gamma X_t + \dot W_t$
Prediction: $\hat Y_t = \gamma \hat X_t$
Innovation error: $I_t = Y_t - \hat Y_t = Y_t - \gamma \hat X_t$
Control: $U_t = K I_t$
Gain: $K$ is the Kalman gain

[8] R. E. Kalman, Trans. ASME, Ser. D: J. Basic Eng.
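The Kalman filter for the scalar model (1)-(2) can be sketched in Euler-discretized (Kalman-Bucy) form; the discretization, function name, and noise parameters below are our assumptions, not from the talk:

```python
import numpy as np

def kalman_bucy(ys, alpha, gamma, sigma_b, sigma_w, x0, P0, dt):
    """Euler discretization of the scalar Kalman-Bucy filter:
    dXhat = alpha*Xhat dt + K (dY - gamma*Xhat dt),  K = P*gamma/sigma_w^2,
    with the Riccati equation dP/dt = 2*alpha*P + sigma_b^2 - K^2 * sigma_w^2.
    Each element of ys is the observation increment over one step dt."""
    xhat, P = x0, P0
    estimates = []
    for dy in ys:
        K = P * gamma / sigma_w**2                        # Kalman gain
        xhat += alpha * xhat * dt + K * (dy - gamma * xhat * dt)
        P += (2 * alpha * P + sigma_b**2 - K**2 * sigma_w**2) * dt
        estimates.append(xhat)
    return np.array(estimates)
```

The innovation term `dy - gamma*xhat*dt` is exactly the prediction-error feedback structure shown on the slide.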

72 Nonlinear Filtering: Applications in Engineering. Filtering is a mature field with many applications. Filtering is important to: air moving target indicator (AMTI) systems; space situational awareness; remote sensing and surveillance (air traffic management, weather surveillance, geophysical surveys); autonomous navigation & robotics (simultaneous localization and map building, SLAM).


77 Nonlinear Filtering: Filtering in the Brain? Bayesian models of sensory signal processing. Theory: Lee and Mumford, hierarchical Bayesian inference framework (2003); Rao; Rao and Ballard; Rao and Sejnowski, predictive coding framework (2002); Dayan, Hinton, Neal and Zemel, the Helmholtz machine (1995); Lewicki and Sejnowski, Bayesian unsupervised learning (1995); Ma, Beck, Latham and Pouget, probabilistic population codes (2006). And others: see Doya, Ishii, Pouget and Rao, Bayesian Brain, MIT Press (2007); Rao, Olshausen & Lewicki, Probabilistic Models of the Brain, MIT Press (2002).


79 Nonlinear Filtering: Filtering in the Brain? Bayesian models of sensory signal processing. Experiments (see reviews): Gold & Shadlen, The neural basis of decision making, Ann. Rev. of Neurosci. (2007); R. T. Knight, Neural networks debunk phrenology, Science (2007). Such theories naturally feed into computer vision and, more generally, into how to make computers intelligent.



82 Nonlinear Filtering: Bayesian Inference in Neuroscience. Lee and Mumford's hierarchical Bayesian inference framework: a cascade of stages, Bayes' rule, Bayes' rule, Bayes' rule, ..., with each stage implemented by a particle filter. Similar ideas also appear in: 1. Dayan, Hinton, Neal and Zemel, the Helmholtz machine (1995); 2. Lewicki and Sejnowski, Bayesian unsupervised learning (1995); 3. Rao and Ballard; Rao and Sejnowski, predictive coding framework (1999; 2002).

87 Nonlinear Filtering: What is a Particle Filter? An algorithm to solve the nonlinear filtering problem. Approximate the posterior in terms of particles:

$p^*(x,t) \approx \frac{1}{N} \sum_{i=1}^{N} \delta_{X_t^i}(x)$

Algorithm outline:
1. Initialization at time 0: $X_0^i \sim p_0(\cdot)$
2. At each discrete time step: importance sampling (Bayes update step); resampling (for variance reduction).

It is unclear how to implement this with neurons.

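The algorithm outline (importance sampling plus resampling) can be sketched as a bootstrap particle filter. The time discretization, discrete-observation model, and Gaussian likelihood below are our assumptions for illustration:

```python
import numpy as np

def bootstrap_particle_filter(ys, a, h, sigma_b, sigma_w, x0_samples, dt, rng):
    """Bootstrap particle filter for dX = a(X) dt + sigma_b dB with
    discrete observations y_k = h(X_k) + Gaussian noise:
    propagate, weight by the likelihood, resample."""
    particles = x0_samples.copy()
    N = len(particles)
    means = []
    for y in ys:
        # propagate (importance sampling from the signal dynamics)
        particles = particles + a(particles)*dt \
            + sigma_b*np.sqrt(dt)*rng.standard_normal(N)
        # weight by the Gaussian observation likelihood
        w = np.exp(-0.5*((y - h(particles))/sigma_w)**2)
        w /= w.sum()
        # resample (multinomial) for variance reduction
        particles = rng.choice(particles, size=N, p=w)
        means.append(particles.mean())
    return np.array(means)
```

The resampling step is what has no obvious neural analogue, which motivates the feedback particle filter in the next part.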

89 Part IV Feedback Particle Filter

90 Feedback Particle Filter: Control-Oriented Formulation of the Particle Filter. Use feedback control to implement Bayes' rule.

Signal & observations:
$\dot X_t = a(X_t) + \sigma_B \dot B_t \quad (1)$
$Y_t = h(X_t) + \sigma_W \dot W_t \quad (2)$

Controlled system ($N$ particles):
$\dot X_t^i = a(X_t^i) + \sigma_B \dot B_t^i + \underbrace{U_t^i}_{\text{mean-field control}}, \qquad i = 1, \dots, N \quad (3)$

$\{\dot B_t^i\}_{i=1}^N$ are independent standard white noises.

Objective: choose the control $U_t^i$, as a function of the history $\{Y_s, X_s^i : 0 \le s \le t\}$, such that the two posteriors coincide:
$P\{X_t \in A \mid \mathcal{Z}_t\} = \int_{x \in A} p^*(x,t)\, dx = \int_{x \in A} p(x,t)\, dx = P\{X_t^i \in A \mid \mathcal{Z}_t\}$

Huang, Caines and Malhame, IEEE TAC (2007); Lasry and Lions (2007).


93 Feedback Particle Filter: Filtering in nonlinear, non-Gaussian settings.

Signal model: $\dot X_t = a(X_t) + \dot B_t, \quad X_0 \sim p_0(\cdot)$
Observation model: $Y_t = h(X_t) + \dot W_t$

FPF: $\dot X_t^i = a(X_t^i) + \dot B_t^i + \underbrace{K(X_t^i)\, I_t^i}_{\text{Update}}$

Innovations: $I_t^i := Y_t - \frac{1}{2}\big(h(X_t^i) + \hat h\big)$, with conditional mean $\hat h = \langle p, h \rangle$.


99 Feedback Particle Filter: Update Step. How does the feedback particle filter implement Bayes' rule? Comparison with the linear Kalman filter:

Observation:  FPF: $Y_t = h(X_t) + \dot W_t$  |  Kalman: $Y_t = \gamma X_t + \dot W_t$
Prediction:   FPF: $\hat Y_t^i = \frac{1}{2}\big(h(X_t^i) + \hat h\big)$, with $\hat h = \frac{1}{N}\sum_{i=1}^{N} h(X_t^i)$  |  Kalman: $\hat Y_t = \gamma \hat X_t$
Innov. error: FPF: $I_t^i = Y_t - \hat Y_t^i = Y_t - \frac{1}{2}\big(h(X_t^i) + \hat h\big)$  |  Kalman: $I_t = Y_t - \hat Y_t = Y_t - \gamma \hat X_t$
Control:      FPF: $U_t^i = K(X_t^i)\, I_t^i$  |  Kalman: $U_t = K I_t$
Gain:         FPF: $K$ is the solution of a linear BVP  |  Kalman: $K$ is the Kalman gain

100 Feedback Particle Filter: Filtering in nonlinear, non-Gaussian settings (block diagram). FPF: $\dot X_t^i = a(X_t^i) + \dot B_t^i + \underbrace{K(X_t^i)\, I_t^i}_{\text{Update}}$, with innovations $I_t^i := Y_t - \frac{1}{2}\big(h(X_t^i) + \hat h\big)$ and conditional mean $\hat h = \langle p, h \rangle$. T. Yang, P. G. Mehta and S. P. Meyn, A Control-oriented Approach for Particle Filtering, ACC 2011, CDC.

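The FPF update can be sketched numerically. The exact gain $K$ solves a linear boundary-value problem; the sketch below substitutes a constant-gain approximation (the particle covariance of $X$ and $h(X)$, taking the observation-noise intensity to be one), which is our simplification, not the talk's:

```python
import numpy as np

def fpf_step(particles, y, a, h, sigma_b, dt, rng):
    """One Euler step of the feedback particle filter with a
    constant-gain approximation of the gain function K.
    The exact K solves a linear boundary-value (Poisson) problem;
    here K is the particle covariance of X and h(X), assuming unit
    observation-noise intensity."""
    N = len(particles)
    hx = h(particles)
    h_hat = hx.mean()                                   # conditional mean of h
    K = np.mean((particles - particles.mean()) * (hx - h_hat))  # constant gain
    innov = y - 0.5 * (hx + h_hat)                      # per-particle innovation
    return particles + a(particles)*dt \
        + sigma_b*np.sqrt(dt)*rng.standard_normal(N) \
        + K * innov * dt
```

Note there is no weighting or resampling step: every particle carries equal weight, and the observation enters only through the feedback control term.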

102 Feedback Particle Filter: Robustness, variance reduction. Mean-square error: $\frac{1}{T}\int_0^T \Big(\frac{\Sigma_t^{(N)} - \Sigma_t}{\Sigma_t}\Big)^2 dt$. [Figure: MSE vs. $N$ (number of particles) for the bootstrap particle filter (BPF) and the feedback particle filter (FPF).]

103 Part V Application: Filtering with Rhythms

105 Filtering of Biological Rhythms with Brain Rhythms. Connection to Lee and Mumford's hierarchical Bayesian inference framework: noisy input feeds a cascade of particle filters, closed with the prior. [Diagram: "Mumford's box" with neurons, via normal form reduction, becomes a box with oscillators; rhythmic movement produces noisy measurements, and the oscillator box combines them with the prior to produce the estimate.]


108 Application: Signal & Observation Models. Ankle-foot orthoses (AFOs), for lower-limb neuromuscular impairments: provide dorsiflexor (toe-lift) and plantarflexor (toe-push) torque assistance. Solenoid valves control the flow of compressed CO2 to the actuator; sensors at the heel, toe, and ankle joint. AFO system components: power supply, valves, actuator, sensors. [Figure: regression model vs. experimental data for heel force, toe force, and ankle angle over the percent gait cycle; cycles of sensor data.] Acknowledgement: Professor Liz Hsiao-Wecksler for sharing the AFO device picture and sensor data.

109 Application Signal & Observation models Ankle-foot orthoses (AFOs) : For lower-limb neuromuscular impairments. Provides dorsiflexor (toe lift) and plantarflexor (toe push) torque assistance Heel Force Regression Model Experimental Data Solenoid valves: control the flow of CO2 to the actuator Actuator Compressed CO2 Toe Force Sensors: heel, toe, and ankle joint AFO system components: Power supply, Valves, Actuator, Sensors. Ankle Angle Percent Gait Cycle Cycles of sensor data. Acknowledgement: Professor Liz Hsiao-Wecksler for sharing the AFO device picture and sensor data. 36

110 Filtering with rhythms Feedback particle filter. The gait cycle is a single sequence of functions of one limb; an oscillator model is used to estimate the gait state. Oscillator model of gait: θ̇_t = ω + Ḃ_t (mod 2π). Observations: h(θ) is a pulse function. [Sketch: pulse supported on a narrow window of the interval [−π, π].] 37
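The oscillator model and pulse observation above can be simulated directly. A minimal Euler-Maruyama sketch (the noise levels, pulse width, and function names are illustrative assumptions, not values from the talk):

```python
import numpy as np

def simulate_gait(omega=2 * np.pi, sigma_b=0.3, sigma_w=0.1,
                  dt=1e-3, steps=2000, seed=0):
    """Noisy phase oscillator  theta_dot = omega + B_dot (mod 2*pi),
    observed through a pulse-shaped h(theta) plus measurement noise."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(steps)
    y = np.zeros(steps)
    for k in range(1, steps):
        # Euler-Maruyama step for the phase, wrapped to [0, 2*pi)
        theta[k] = (theta[k - 1] + omega * dt
                    + sigma_b * np.sqrt(dt) * rng.standard_normal()) % (2 * np.pi)
        # pulse function: 1 on a narrow window of the cycle (e.g. heel strike), else 0
        wrapped = ((theta[k] + np.pi) % (2 * np.pi)) - np.pi  # map to [-pi, pi)
        h = 1.0 if abs(wrapped) < 0.3 else 0.0
        y[k] = h + sigma_w * rng.standard_normal()  # noisy pulse observation
    return theta, y

theta, y = simulate_gait()
```

Each sensor channel (heel, toe, ankle) would get its own pulse-shaped observation function of the common phase θ.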

112 38 Filtering for Oscillators. Signal & observations:
θ̇_t = ω + Ḃ_t (mod 2π)
Y_t = h(θ_t) + Ẇ_t
[Sketch: pulse function h on [−π, π].]
Particle evolution:
θ̇_t^i = ω^i + Ḃ_t^i + K(θ_t^i) [Y_t − (1/2)(h(θ_t^i) + ĥ)] (mod 2π), i = 1, ..., N,
where ω^i is sampled from a distribution. [Block diagram: feedback particle filter, with the innovation error fed back through the gain.]
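In discrete time, the particle update above can be sketched as below. The exact gain K(θ) in the feedback particle filter solves a Poisson equation; the constant gain `K` used here is a crude hand-picked stand-in for it, and the observation function and frequency prior are illustrative assumptions:

```python
import numpy as np

def fpf_step(theta, omega, y_obs, h, dt, sigma_b=0.3, K=0.5, rng=None):
    """One Euler step of the oscillator feedback particle filter:
    theta^i += omega^i dt + dB^i + K * [y - (h(theta^i) + h_hat)/2] dt  (mod 2*pi).
    K is a constant-gain stand-in for the Poisson-equation gain K(theta)."""
    rng = rng if rng is not None else np.random.default_rng()
    h_i = h(theta)                               # h evaluated at each particle
    h_hat = h_i.mean()                           # population estimate of h
    innovation = y_obs - 0.5 * (h_i + h_hat)     # per-particle error signal
    theta = (theta + omega * dt
             + sigma_b * np.sqrt(dt) * rng.standard_normal(theta.shape)
             + K * innovation * dt)
    return theta % (2 * np.pi)

# usage: N particles, each with its own frequency omega^i drawn from a prior
rng = np.random.default_rng(1)
N = 200
theta = rng.uniform(0.0, 2 * np.pi, N)
omega = rng.normal(2 * np.pi, 0.2, N)            # sampled frequency distribution
h = lambda th: np.cos(th)                        # illustrative observation function
theta = fpf_step(theta, omega, 0.7, h, 1e-2, rng=rng)
```

Note there is no importance weighting or resampling: as in the slides, the measurement enters every particle through the feedback term, which is the structural difference from the bootstrap filter.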

115 39 Simulation Results Solution of the Estimation of Gait Cycle Problem [Click to play the movie]

116 Thank you! Website: Collaborators Adam Tilton Tao Yang Huibing Yin Liz Hsiao-Wecksler Sean Meyn Synchronization of Coupled Oscillators is a Game, ACC 2010, IEEE TAC 2012 Learning in Mean-field Oscillator Game, CDC 2010 A Control-oriented Approach for Particle Filtering, ACC 2011, CDC 2011 Filtering with Rhythms: Application to Estimation of Gait Cycle, ACC 2012


118 Bibliography
Eric Brown, Jeff Moehlis, and Philip Holmes. On the phase reduction and response dynamics of neural oscillator populations. Neural Computation, 16(4).
A. Doucet, N. de Freitas, and N. Gordon. Sequential Monte-Carlo Methods in Practice. Springer-Verlag.
R. Ericson and A. Pakes. Markov-perfect industry dynamics: A framework for empirical work. The Review of Economic Studies, 62(1):53-82.
N. J. Gordon, D. J. Salmond, and A. F. M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings F Radar and Signal Processing, 140(2).
J. Guckenheimer. Isochrons and phaseless sets. J. Math. Biol., 1.
M. Huang, P. E. Caines, and R. P. Malhame. Large-population cost-coupled LQG problems with nonuniform agents: Individual-mass behavior and decentralized ε-Nash equilibria. IEEE Transactions on Automatic Control, 52(9).
R. E. Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1):35-45.
Y. Kuramoto. International Symposium on Mathematical Problems in Theoretical Physics, volume 39 of Lecture Notes in Physics. Springer-Verlag, 1975.

119 Bibliography
H. J. Kushner. On the differential equations satisfied by conditional probability densities of Markov processes. SIAM J. Control, 2.
J.-M. Lasry and P.-L. Lions. Mean field games. Japanese Journal of Mathematics, 2(2).
R. L. Stratonovich. Conditional Markov processes. SIAM Theory Probab. Appl., 5.
S. H. Strogatz and R. E. Mirollo. Stability of incoherence in a population of coupled oscillators. Journal of Statistical Physics, 63.
H. Yin, P. G. Mehta, S. P. Meyn, and U. V. Shanbhag. Synchronization of coupled oscillators is a game. In Proc. of 2010 American Control Conference, Baltimore, MD.
H. Yin, P. G. Mehta, S. P. Meyn, and U. V. Shanbhag. Synchronization of coupled oscillators is a game. IEEE Trans. Automat. Control.
G. Y. Weintraub, L. Benkard, and B. Van Roy. Oblivious equilibrium: A mean field approximation for large-scale dynamic games. In Advances in Neural Information Processing Systems, volume 18. MIT Press.
G. Y. Weintraub, L. Benkard, and B. Van Roy. Markov perfect industry dynamics with many firms. Econometrica, 76(6), 2008.


More information

Miscellaneous. Regarding reading materials. Again, ask questions (if you have) and ask them earlier

Miscellaneous. Regarding reading materials. Again, ask questions (if you have) and ask them earlier Miscellaneous Regarding reading materials Reading materials will be provided as needed If no assigned reading, it means I think the material from class is sufficient Should be enough for you to do your

More information

Tutorial on Approximate Bayesian Computation

Tutorial on Approximate Bayesian Computation Tutorial on Approximate Bayesian Computation Michael Gutmann https://sites.google.com/site/michaelgutmann University of Helsinki Aalto University Helsinki Institute for Information Technology 16 May 2016

More information

Sequential Monte Carlo in the machine learning toolbox

Sequential Monte Carlo in the machine learning toolbox Sequential Monte Carlo in the machine learning toolbox Working with the trend of blending Thomas Schön Uppsala University Sweden. Symposium on Advances in Approximate Bayesian Inference (AABI) Montréal,

More information

AN EFFICIENT TWO-STAGE SAMPLING METHOD IN PARTICLE FILTER. Qi Cheng and Pascal Bondon. CNRS UMR 8506, Université Paris XI, France.

AN EFFICIENT TWO-STAGE SAMPLING METHOD IN PARTICLE FILTER. Qi Cheng and Pascal Bondon. CNRS UMR 8506, Université Paris XI, France. AN EFFICIENT TWO-STAGE SAMPLING METHOD IN PARTICLE FILTER Qi Cheng and Pascal Bondon CNRS UMR 8506, Université Paris XI, France. August 27, 2011 Abstract We present a modified bootstrap filter to draw

More information

State-Space Methods for Inferring Spike Trains from Calcium Imaging

State-Space Methods for Inferring Spike Trains from Calcium Imaging State-Space Methods for Inferring Spike Trains from Calcium Imaging Joshua Vogelstein Johns Hopkins April 23, 2009 Joshua Vogelstein (Johns Hopkins) State-Space Calcium Imaging April 23, 2009 1 / 78 Outline

More information

Nonlinear reverse-correlation with synthesized naturalistic noise

Nonlinear reverse-correlation with synthesized naturalistic noise Cognitive Science Online, Vol1, pp1 7, 2003 http://cogsci-onlineucsdedu Nonlinear reverse-correlation with synthesized naturalistic noise Hsin-Hao Yu Department of Cognitive Science University of California

More information

RAO-BLACKWELLISED PARTICLE FILTERS: EXAMPLES OF APPLICATIONS

RAO-BLACKWELLISED PARTICLE FILTERS: EXAMPLES OF APPLICATIONS RAO-BLACKWELLISED PARTICLE FILTERS: EXAMPLES OF APPLICATIONS Frédéric Mustière e-mail: mustiere@site.uottawa.ca Miodrag Bolić e-mail: mbolic@site.uottawa.ca Martin Bouchard e-mail: bouchard@site.uottawa.ca

More information

Monitoring and Diagnosis of Hybrid Systems Using Particle Filtering Methods

Monitoring and Diagnosis of Hybrid Systems Using Particle Filtering Methods Monitoring and Diagnosis of Hybrid Systems Using Particle Filtering Methods Xenofon Koutsoukos, James Kurien, and Feng Zhao Palo Alto Research Center 3333 Coyote Hill Road Palo Alto, CA 94304, USA koutsouk,jkurien,zhao@parc.com

More information

Prediction of ESTSP Competition Time Series by Unscented Kalman Filter and RTS Smoother

Prediction of ESTSP Competition Time Series by Unscented Kalman Filter and RTS Smoother Prediction of ESTSP Competition Time Series by Unscented Kalman Filter and RTS Smoother Simo Särkkä, Aki Vehtari and Jouko Lampinen Helsinki University of Technology Department of Electrical and Communications

More information

Neuroscience applications: isochrons and isostables. Alexandre Mauroy (joint work with I. Mezic)

Neuroscience applications: isochrons and isostables. Alexandre Mauroy (joint work with I. Mezic) Neuroscience applications: isochrons and isostables Alexandre Mauroy (joint work with I. Mezic) Outline Isochrons and phase reduction of neurons Koopman operator and isochrons Isostables of excitable systems

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Jeff Clune Assistant Professor Evolving Artificial Intelligence Laboratory Announcements Be making progress on your projects! Three Types of Learning Unsupervised Supervised Reinforcement

More information

Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density

Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Simo Särkkä E-mail: simo.sarkka@hut.fi Aki Vehtari E-mail: aki.vehtari@hut.fi Jouko Lampinen E-mail: jouko.lampinen@hut.fi Abstract

More information

Time Series Analysis

Time Series Analysis Time Series Analysis hm@imm.dtu.dk Informatics and Mathematical Modelling Technical University of Denmark DK-2800 Kgs. Lyngby 1 Outline of the lecture State space models, 1st part: Model: Sec. 10.1 The

More information

On a Data Assimilation Method coupling Kalman Filtering, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model

On a Data Assimilation Method coupling Kalman Filtering, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model On a Data Assimilation Method coupling, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model 2016 SIAM Conference on Uncertainty Quantification Basile Marchand 1, Ludovic

More information

Lecture 4: Feed Forward Neural Networks

Lecture 4: Feed Forward Neural Networks Lecture 4: Feed Forward Neural Networks Dr. Roman V Belavkin Middlesex University BIS4435 Biological neurons and the brain A Model of A Single Neuron Neurons as data-driven models Neural Networks Training

More information

Nonlinear Estimation Techniques for Impact Point Prediction of Ballistic Targets

Nonlinear Estimation Techniques for Impact Point Prediction of Ballistic Targets Nonlinear Estimation Techniques for Impact Point Prediction of Ballistic Targets J. Clayton Kerce a, George C. Brown a, and David F. Hardiman b a Georgia Tech Research Institute, Georgia Institute of Technology,

More information