Elsevier Editorial System(tm) for Electric Power Systems Research Manuscript Draft


Manuscript Number: EPSR-D

Title: State space modeling of RL electrical circuit and estimation of parameters via Maximum likelihood and Bayesian approaches

Article Type: Research Paper

Keywords: RL electrical circuit; State space model; Optimal filtering; Parameter estimation.

Corresponding Author: Dr Rahman Farnoosh

Corresponding Author's Institution: Iran University of Science & Technology

First Author: Rahman Farnoosh

Order of Authors: Rahman Farnoosh; Arezoo Hajrajabi

Abstract: This paper establishes a state space model for the RL electrical circuit. The mathematical model of the RL circuit is a linear stochastic differential equation, and the current in the circuit is corrupted by measurement noise. Because of this white noise, the current is hidden and a state space model is obtained for the RL circuit. Optimal filtering is used to estimate the current from the noisy observations. All unknown parameters of the model are estimated both by the Maximum likelihood approach, using the Expectation-Maximization algorithm, and from a Bayesian Monte Carlo perspective, using the Metropolis-Hastings algorithm. Numerical simulation examples, implemented in the R software, illustrate the efficiency of the proposed approaches. The results show an excellent estimation of the parameters with both approaches.

Cover Letter

Dear Professor,

Enclosed please find a copy of my paper entitled "State space modeling of RL electrical circuit and estimation of parameters via Maximum likelihood and Bayesian approaches" for possible review and publication in your esteemed journal, Electric Power Systems Research. This paper has not been submitted to any other journal.

Best Regards,
Sincerely Yours,
Dr. Rahman Farnoosh
Associate Professor of Statistics
Address: School of Mathematics, Iran University of Science & Technology, Narmak, Tehran, Iran
Tel: , Fax:

Highlights

1) Considering the SDE of the RL electrical circuit as the dynamic model of a state space system.
2) Estimation of the current as the state of the system.
3) Estimation of all unknown parameters of the model via both ML and Bayesian approaches, using the EM and Metropolis-Hastings algorithms.
4) Numerical simulation examples, implemented in the R software, that show the efficiency of the proposed approaches.

State space modeling of RL electrical circuit and estimation of parameters via Maximum likelihood and Bayesian approaches

R. Farnoosh, A. Hajrajabi
School of Mathematics, Iran University of Science and Technology, Narmak, Tehran 16846, Iran.
rfarnoosh@iust.ac.ir, hajirajabi@iust.ac.ir

Abstract. This paper establishes a state space model for the RL electrical circuit. The mathematical model of the RL circuit is a linear stochastic differential equation, and the current in the circuit is corrupted by measurement noise. Because of this white noise, the current is hidden and a state space model is obtained for the RL circuit. Optimal filtering is used to estimate the current from the noisy observations. All unknown parameters of the model are estimated both by the Maximum likelihood approach, using the Expectation-Maximization algorithm, and from a Bayesian Monte Carlo perspective, using the Metropolis-Hastings algorithm. Numerical simulation examples, implemented in the R software, illustrate the efficiency of the proposed approaches. The results show an excellent estimation of the parameters with both approaches.

Keywords: RL electrical circuit; State space model; Optimal filtering; Parameter estimation.

1. Introduction

A large number of physical phenomena can be modeled by stochastic differential equations (SDEs), which can be observed at discrete instants of time and considered as the dynamic model of a state space system. Applications of these models have been broadly developed in statistics, control engineering and signal processing [1-3]. State estimation of time-varying systems is a main purpose when only noisy observations are available. Therefore the Kalman filter and its modifications, such as the extended Kalman filter, the unscented Kalman filter, the particle filter and the digital filter, have been applied for state estimation in state space systems [4-9]. Optimal smoothing, which is closely related to optimal filtering, is studied in [10, 11]. Since exact knowledge of the parameters is rarely available in applied situations, the unknown parameters can be included in the state space model. Because the state estimates are sensitive to the values of these parameters, estimating appropriate values for them from the available data is a main concern. Ideally, all unknown parameters in these models can be estimated via both Maximum likelihood (ML) and Bayesian approaches. ML estimation is a common parameter estimation method in linear and non-linear Gaussian state space models. The Expectation-Maximization (EM) algorithm is a popular procedure for maximizing the likelihood function of the model and is applicable to the estimation of parameters in linear Gaussian state space models [12, 13]. Other works [13, 14] have addressed ML parameter estimation via the EM algorithm for nonlinear dynamical systems.

Gradient-based search procedures for the maximization step are studied in [15-17]. Bayesian methods provide an alternative approach for assessing parameters in a large class of state space models when sufficiently informative priors regarding the unknown parameters are available. This approach, described in [18, 19], treats parameter estimation by concatenating the state with the unknown parameters and introducing an artificial dynamic on the parameters. Markov chain Monte Carlo (MCMC) methods for drawing samples from the posterior distribution of the parameters and approximating mathematical expectations have been extensively used in Bayesian inference. Poyiadjis et al [20] utilized particle filter methods for parameter estimation in a nonlinear non-Gaussian state space model within a Bayesian framework using Sequential Monte Carlo methods. Recently, a stochastic perspective for RL electrical circuits using different noise terms has been presented by Farnoosh et al [21]. Because, in empirical situations, observing the current in this circuit without measurement noise is an unrealistic assumption, in the present work we consider a specific form of a state space model, using the SDE of the RL electrical circuit as the dynamic model. This model falls in the class of state space models once the current in the stochastic RL electrical circuit is corrupted by measurement noise and only observed through the noisy observations. We are interested in recovering the current corrupted by measurement noise via optimal filtering. To the best of our knowledge, the problem of considering the SDE of the RL electrical circuit as the dynamic model of a state space system has not been studied before.

Furthermore, to the best of our knowledge of the research literature, estimation of the current as the state of the system, together with estimation of all unknown parameters of the model (the variance of the observed current in the measurement model, the resistance, and the mean and variance of the prior distribution of the current at time step 0) via both ML and Bayesian approaches, using the EM and Metropolis-Hastings algorithms, is investigated for the first time in the present study. The rest of the paper unfolds as follows. In Section 2, the SDE of the RL electrical circuit is considered as the dynamic model of a state space system. Section 3 covers filtering and smoothing for estimation of the hidden current in the model. In Section 4, based on this model, the EM algorithm for ML estimation is derived. In Section 5, an MCMC sampling algorithm is used to simulate from the posterior distribution and perform Bayesian inference. In Section 6, numerical experiments are conducted to verify the accuracy of the proposed methods. Some conclusions are given in Section 7.

2. State Space Modeling for the Stochastic RL Electrical Circuit

A resistance-inductor (RL) circuit is an electrical circuit composed of a resistance and an inductor driven by a voltage or current source. By adding randomness to the potential source, the stochastic differential equation describing the behavior of the RL circuit is given by Farnoosh et al [21],

dI(t) = \left( \frac{1}{L} V(t) - \frac{R}{L} I(t) \right) dt + \frac{\epsilon}{L} dB(t),    (1)

where B(t) is a one-dimensional Brownian motion, and V(t) and I(t) denote the potential source and the current at time t, respectively. When the current is hidden and only observed through noisy observations, it is reasonable to look at the model as a continuous-discrete linear state space model,

dI(t) = \left( \frac{1}{L} V(t) - \frac{R}{L} I(t) \right) dt + \frac{\epsilon}{L} dB(t),    (2)

y_k = I(t_k) + r_k,    k = 0, \ldots, n.    (3)

The models (2) and (3) are the dynamic and the measurement model, respectively. The dynamic model is an Ito-type stochastic differential equation which, after an Euler-Maruyama discretization with step \Delta t, leads to the following state space model at the discrete times t_k,

I_k = I_{k-1} + \frac{1}{L}\left( V - R I_{k-1} \right) \Delta t + q_{k-1},
y_k = I_k + r_k,    k = 0, \ldots, n,    (4)

where q_{k-1} \sim N(0, Q), with Q = (\epsilon/L)^2 \Delta t, is the process noise, y_k is the observation, I_k is the state of the model, and r_k \sim N(0, \sigma^2) is the measurement noise. The time step k runs from 0 to n, and at time step 0 there is no measurement, only the prior distribution I_0 \sim N(m_0^0 = m_0, \, p_0^0 = \Sigma_0).
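To make the discrete-time model (4) concrete, the short R sketch below (R being the software used for the simulations in Section 6) generates a state trajectory and noisy observations; all parameter values here are illustrative placeholders, not the settings of the data sets in Table 1.

  # Illustrative simulation of the discretized model (4); values are placeholders.
  set.seed(1)
  n      <- 50                        # number of time steps
  dt     <- 0.01                      # step size Delta t
  V <- 12; L <- 1; R <- 5             # source voltage, inductance, resistance
  eps    <- 0.5                       # diffusion coefficient epsilon in (1)
  sigma2 <- 0.04                      # measurement noise variance sigma^2
  Q      <- (eps / L)^2 * dt          # process noise variance Q
  m0 <- 0; Sigma0 <- 0.1              # prior mean and variance of I_0

  I    <- numeric(n + 1)
  I[1] <- rnorm(1, m0, sqrt(Sigma0))  # I_0 ~ N(m0, Sigma0)
  for (k in 2:(n + 1)) {
    # dynamic model: I_k = (V/L) dt + (1 - (R/L) dt) I_{k-1} + q_{k-1}
    I[k] <- (V / L) * dt + (1 - (R / L) * dt) * I[k - 1] + rnorm(1, 0, sqrt(Q))
  }
  y <- I[-1] + rnorm(n, 0, sqrt(sigma2))  # measurement model: y_k = I_k + r_k

The vectors I and y then play the roles of the hidden states and the noisy observations used in the remainder of the paper.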

3. Filtering and Smoothing

Computing the marginal distribution of the state I_k at time step k given the history of the measurements up to time step k, p(I_k | y_{1:k}), is the main purpose of optimal filtering. The discrete-time Kalman smoother can be used to compute in closed form the smoothing solution p(I_k | y_{1:n}), which is conditional on the whole measurement data y_{1:n}. The solution of the discrete-time filtering problem can be computed by the following prediction and update steps [1, 5].

Proposition 1. The optimal filtering equations for the linear filtering model (4) can be obtained in closed form.
(i) The prediction step density is Gaussian with parameters as follows:

p(I_k | y_{1:k-1}) = N(I_k | m_k^{k-1}, p_k^{k-1}),    (5)

m_k^{k-1} = \frac{V}{L}\Delta t + \left(1 - \frac{R}{L}\Delta t\right) m_{k-1}^{k-1},
\quad p_k^{k-1} = \left(1 - \frac{R}{L}\Delta t\right)^2 p_{k-1}^{k-1} + Q.    (6)

(ii) The update step densities are Gaussian with parameters as follows:

p(I_k | y_{1:k}) = N(I_k | m_k^k, p_k^k),    (7)

p(y_k | y_{1:k-1}) = N(y_k | \hat{y}_k^{k-1}, F_k^{k-1}),    (8)

\hat{y}_k^{k-1} = m_k^{k-1},
\quad F_k^{k-1} = p_k^{k-1} + \sigma^2,
\quad m_k^k = m_k^{k-1} + \frac{p_k^{k-1}}{F_k^{k-1}} (y_k - \hat{y}_k^{k-1}),
\quad p_k^k = p_k^{k-1} - \frac{(p_k^{k-1})^2}{F_k^{k-1}}.    (9)

Proof (i): The joint density of I_k and I_{k-1} given y_{1:k-1} is

p(I_{k-1}, I_k | y_{1:k-1}) = p(I_k | I_{k-1}) p(I_{k-1} | y_{1:k-1})    (10)
= N\left(I_k \,\middle|\, \frac{V}{L}\Delta t + \left(1 - \frac{R}{L}\Delta t\right) I_{k-1}, \, Q\right) N(I_{k-1} | m_{k-1}^{k-1}, p_{k-1}^{k-1})    (11)
= N\left( \begin{pmatrix} I_{k-1} \\ I_k \end{pmatrix} \,\middle|\, m', \, P' \right),    (12)

where

m' = \begin{pmatrix} m_{k-1}^{k-1} \\ \frac{V}{L}\Delta t + \left(1 - \frac{R}{L}\Delta t\right) m_{k-1}^{k-1} \end{pmatrix},
\quad
P' = \begin{pmatrix} p_{k-1}^{k-1} & \left(1 - \frac{R}{L}\Delta t\right) p_{k-1}^{k-1} \\ \left(1 - \frac{R}{L}\Delta t\right) p_{k-1}^{k-1} & \left(1 - \frac{R}{L}\Delta t\right)^2 p_{k-1}^{k-1} + Q \end{pmatrix}.

Hence, the marginal distribution of I_k can be represented as follows:

p(I_k | y_{1:k-1}) = N(I_k | m_k^{k-1}, p_k^{k-1}),    (13)

where m_k^{k-1} = \frac{V}{L}\Delta t + \left(1 - \frac{R}{L}\Delta t\right) m_{k-1}^{k-1} and p_k^{k-1} = \left(1 - \frac{R}{L}\Delta t\right)^2 p_{k-1}^{k-1} + Q.

Proof (ii): The joint distribution of y_k and I_k is

p(I_k, y_k | y_{1:k-1}) = p(y_k | I_k) p(I_k | y_{1:k-1})    (14)
= N(y_k | I_k, \sigma^2) N(I_k | m_k^{k-1}, p_k^{k-1})    (15)
= N\left( \begin{pmatrix} I_k \\ y_k \end{pmatrix} \,\middle|\, m'', \, P'' \right),    (16)

where

m'' = \begin{pmatrix} m_k^{k-1} \\ m_k^{k-1} \end{pmatrix},
\quad
P'' = \begin{pmatrix} p_k^{k-1} & p_k^{k-1} \\ p_k^{k-1} & p_k^{k-1} + \sigma^2 \end{pmatrix}.

Therefore, the conditional distribution of I_k and the marginal distribution of y_k can be written as

p(I_k | y_{1:k-1}, y_k) = p(I_k | y_{1:k}) = N(I_k | m_k^k, p_k^k),    (17)

p(y_k | y_{1:k-1}) = N(y_k | \hat{y}_k^{k-1}, F_k^{k-1}),    (18)

where

\hat{y}_k^{k-1} = m_k^{k-1},
\quad F_k^{k-1} = p_k^{k-1} + \sigma^2,
\quad m_k^k = m_k^{k-1} + \frac{p_k^{k-1}}{F_k^{k-1}} (y_k - \hat{y}_k^{k-1}),
\quad p_k^k = p_k^{k-1} - \frac{(p_k^{k-1})^2}{F_k^{k-1}}.    (19)
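As an illustration, the prediction and update recursions of Proposition 1 can be coded in a few lines of R; the sketch below assumes the quantities y, V, L, R, dt (\Delta t), Q, sigma2, m0 and Sigma0 defined as in the simulation sketch of Section 2, and the function name is an illustrative choice.

  # Kalman filter for model (4): predicted and filtered means and variances.
  kalman_filter <- function(y, V, L, R, dt, Q, sigma2, m0, Sigma0) {
    n  <- length(y)
    A  <- 1 - (R / L) * dt              # state transition coefficient
    b  <- (V / L) * dt                  # deterministic input term
    mp <- pp <- mf <- pf <- numeric(n)  # m_k^{k-1}, p_k^{k-1}, m_k^k, p_k^k
    m  <- m0; p <- Sigma0               # prior at time step 0
    for (k in 1:n) {
      mp[k] <- b + A * m                # prediction step, equation (6)
      pp[k] <- A^2 * p + Q
      Fk    <- pp[k] + sigma2           # innovation variance F_k^{k-1}
      mf[k] <- mp[k] + pp[k] / Fk * (y[k] - mp[k])   # update step, equation (9)
      pf[k] <- pp[k] - pp[k]^2 / Fk
      m <- mf[k]; p <- pf[k]
    }
    list(mp = mp, pp = pp, mf = mf, pf = pf)
  }

For example, kf <- kalman_filter(y, V, L, R, dt, Q, sigma2, m0, Sigma0) returns the predicted and filtered moments used below.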

Proposition 2. The recursive equations for the discrete-time Kalman smoother are given by

p(I_k | y_{1:n}) = N(I_k | m_k^n, p_k^n),    (20)

with parameters as follows:

c_k = \left(1 - \frac{R}{L}\Delta t\right) p_k^k \left(p_{k+1}^k\right)^{-1},
m_k^n = m_k^k + c_k \left(m_{k+1}^n - m_{k+1}^k\right),    (21)
p_k^n = p_k^k + c_k \left(p_{k+1}^n - p_{k+1}^k\right) c_k,    k = n-1, \ldots, 0,

where m_k^k and p_k^k are the mean and variance computed by the Kalman filter. The recursion is started from the last time step n.

Proof: The joint distribution of I_k and I_{k+1} given y_{1:k} is

p(I_k, I_{k+1} | y_{1:k}) = p(I_{k+1} | I_k) p(I_k | y_{1:k})    (22)
= N\left(I_{k+1} \,\middle|\, \frac{V}{L}\Delta t + \left(1 - \frac{R}{L}\Delta t\right) I_k, \, Q\right) N(I_k | m_k^k, p_k^k)    (23)
= N\left( \begin{pmatrix} I_k \\ I_{k+1} \end{pmatrix} \,\middle|\, \tilde{m}, \, \tilde{P} \right),    (24)

where

\tilde{m} = \begin{pmatrix} m_k^k \\ \frac{V}{L}\Delta t + \left(1 - \frac{R}{L}\Delta t\right) m_k^k \end{pmatrix},
\quad
\tilde{P} = \begin{pmatrix} p_k^k & \left(1 - \frac{R}{L}\Delta t\right) p_k^k \\ \left(1 - \frac{R}{L}\Delta t\right) p_k^k & \left(1 - \frac{R}{L}\Delta t\right)^2 p_k^k + Q \end{pmatrix}.

Due to the Markov property of the states, the conditional distribution of I_k given I_{k+1} and y_{1:k} is

p(I_k | I_{k+1}, y_{1:n}) = p(I_k | I_{k+1}, y_{1:k})    (25)
= N(I_k | \bar{m}_k, \bar{p}_k),    (26)

where

m_{k+1}^k = \frac{V}{L}\Delta t + \left(1 - \frac{R}{L}\Delta t\right) m_k^k,
\quad p_{k+1}^k = \left(1 - \frac{R}{L}\Delta t\right)^2 p_k^k + Q,
\quad c_k = \left(1 - \frac{R}{L}\Delta t\right) p_k^k \left(p_{k+1}^k\right)^{-1},    (27)
\bar{m}_k = m_k^k + c_k \left(I_{k+1} - m_{k+1}^k\right),
\quad \bar{p}_k = p_k^k - c_k \, p_{k+1}^k \, c_k.

Hence, the joint distribution of I_k and I_{k+1} given all the data can be written as

p(I_k, I_{k+1} | y_{1:n}) = p(I_k | I_{k+1}, y_{1:k}) \, p(I_{k+1} | y_{1:n})    (28)
= N(I_k | \bar{m}_k, \bar{p}_k) \, N(I_{k+1} | m_{k+1}^n, p_{k+1}^n)    (29)
= N\left( \begin{pmatrix} I_{k+1} \\ I_k \end{pmatrix} \,\middle|\, \bar{m}, \, \bar{P} \right),    (30)

where

\bar{m} = \begin{pmatrix} m_{k+1}^n \\ m_k^k + c_k (m_{k+1}^n - m_{k+1}^k) \end{pmatrix},
\quad
\bar{P} = \begin{pmatrix} p_{k+1}^n & p_{k+1}^n c_k \\ c_k p_{k+1}^n & c_k p_{k+1}^n c_k + \bar{p}_k \end{pmatrix}.

Therefore, the marginal distribution of I_k can be derived as

p(I_k | y_{1:n}) = N(I_k | m_k^n, p_k^n),    (31)

where

m_k^n = m_k^k + c_k \left(m_{k+1}^n - m_{k+1}^k\right),
\quad p_k^n = p_k^k + c_k \left(p_{k+1}^n - p_{k+1}^k\right) c_k,    \quad k = n-1, \ldots, 0.    (32)
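A corresponding R sketch of the backward recursion of Proposition 2 is given below; kf denotes the output of the kalman_filter sketch above, and the function also smooths back to time step 0, which will be needed in Section 4. The function name and interface are illustrative.

  # Fixed-interval (RTS) smoother for model (4), Proposition 2.
  kalman_smoother <- function(kf, R, L, dt, m0, Sigma0) {
    n  <- length(kf$mf)
    A  <- 1 - (R / L) * dt
    ms <- kf$mf; ps <- kf$pf            # m_k^n, p_k^n; at k = n equal to the filter
    cs <- numeric(n)                    # smoother gains c_k, k = 1, ..., n-1
    for (k in (n - 1):1) {
      cs[k] <- A * kf$pf[k] / kf$pp[k + 1]              # c_k = A p_k^k / p_{k+1}^k
      ms[k] <- kf$mf[k] + cs[k] * (ms[k + 1] - kf$mp[k + 1])
      ps[k] <- kf$pf[k] + cs[k]^2 * (ps[k + 1] - kf$pp[k + 1])
    }
    c0  <- A * Sigma0 / kf$pp[1]        # smooth back to time step 0 (the prior)
    m0n <- m0 + c0 * (ms[1] - kf$mp[1])
    p0n <- Sigma0 + c0^2 * (ps[1] - kf$pp[1])
    list(ms = ms, ps = ps, cs = cs, c0 = c0, m0n = m0n, p0n = p0n)
  }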

For the state space model specified in equation (4), the lag-one covariance smoother p_{k,k-1}^n = E[(I_k - m_k^n)(I_{k-1} - m_{k-1}^n)] is obtained from the quantities computed in Propositions 1 and 2, with initial condition (see [12])

p_{n,n-1}^n = (1 - h_n)\left(1 - \frac{R}{L}\Delta t\right) p_{n-1}^{n-1},    (33)

where

h_n = \frac{p_n^{n-1}}{p_n^{n-1} + \sigma^2}.    (34)

Also, for k = n, n-1, \ldots, 2,

p_{k-1,k-2}^n = p_{k-1}^{k-1} c_{k-2} + c_{k-1}\left(p_{k,k-1}^n - \left(1 - \frac{R}{L}\Delta t\right) p_{k-1}^{k-1}\right) c_{k-2},
\quad \text{where } c_k = \left(1 - \frac{R}{L}\Delta t\right) p_k^k \left(p_{k+1}^k\right)^{-1}.    (35)

Furthermore, based on the properties of the Gaussian distribution, the 95% confidence intervals for the prediction, the filter and the smoother are m_k^{k-1} \pm 1.96\sqrt{p_k^{k-1}}, m_k^k \pm 1.96\sqrt{p_k^k} and m_k^n \pm 1.96\sqrt{p_k^n}, respectively.
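The lag-one covariance smoother (33)-(35), which will be needed in the E-step of the next section, can be sketched in R as follows; kf and ks are the outputs of the filter and smoother sketches given earlier, and the indexing convention pcc[k] = p^n_{k,k-1} is an implementation choice of this sketch.

  # Lag-one covariance smoother, equations (33)-(35); pcc[k] stores p^n_{k,k-1}.
  lag_one_smoother <- function(kf, ks, R, L, dt, sigma2) {
    n   <- length(kf$mf)
    A   <- 1 - (R / L) * dt
    pcc <- numeric(n)
    hn  <- kf$pp[n] / (kf$pp[n] + sigma2)        # Kalman gain at the last step, (34)
    pcc[n] <- (1 - hn) * A * kf$pf[n - 1]        # initial condition (33)
    for (k in n:2) {
      c1 <- ks$cs[k - 1]                         # c_{k-1}
      c2 <- if (k >= 3) ks$cs[k - 2] else ks$c0  # c_{k-2} (gain c_0 at the boundary)
      # recursion (35): p^n_{k-1,k-2} from p^n_{k,k-1}
      pcc[k - 1] <- kf$pf[k - 1] * c2 + c1 * (pcc[k] - A * kf$pf[k - 1]) * c2
    }
    pcc
  }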

4. Estimation of parameters via the maximum likelihood approach

The problem of parameter estimation has attracted a lot of interest for state space models over the past few years, and many approaches have been proposed to solve it. One of these approaches is ML estimation, and the EM algorithm can be considered as a numerical optimization procedure for calculating the ML estimates of the parameters. If the model in equation (4) is considered as a parametric function of \Theta = (m_0, \Sigma_0, R, \sigma^2), then these parameters can be estimated via the EM algorithm [22]. The basic idea is that if the states I = (I_0, I_1, \ldots, I_n) were observed in addition to the observations y = (y_1, \ldots, y_n), then \{I, y\} would constitute the complete data, with complete likelihood function

L_\Theta^c(I, y) = f_{m_0, \Sigma_0}(I_0) \prod_{k=1}^{n} f_R(I_k | I_{k-1}) \prod_{k=1}^{n} f_{\sigma^2}(y_k | I_k).    (36)

The log-likelihood of the complete data, under the Gaussian assumption and ignoring constants, can be written as

-2 \ln L^c(\Theta; I, y) = \ln \Sigma_0 + \frac{1}{\Sigma_0}(I_0 - m_0)^2 + n \ln Q + \frac{1}{Q} \sum_{k=1}^{n} \left( I_k - \frac{V}{L}\Delta t - \left(1 - \frac{R}{L}\Delta t\right) I_{k-1} \right)^2 + n \ln \sigma^2 + \frac{1}{\sigma^2} \sum_{k=1}^{n} (y_k - I_k)^2.    (37)

The EM algorithm is an iterative method for finding the ML estimate of \Theta based on the incomplete data y. The two steps of this algorithm (E- and M-steps) can be written as follows.

E-step: The conditional expectation of the complete-data log-likelihood given the observed data and the current parameters is

Q(\Theta | \Theta^{(t-1)}) = E\{ -2 \ln L^c(\Theta; I, y) \mid y, \Theta^{(t-1)} \}    (38)
= \ln \Sigma_0 + \frac{1}{\Sigma_0}\left( p_0^n + (m_0^n - m_0)^2 \right) + n \ln Q + \frac{1}{Q}\left\{ \sum_{k=1}^{n} s_k' + \left(1 - \frac{R}{L}\Delta t\right)^2 \sum_{k=1}^{n} s_k - 2\left(1 - \frac{R}{L}\Delta t\right) \sum_{k=1}^{n} s_k'' \right\} + n \ln \sigma^2 + \frac{1}{\sigma^2} \sum_{k=1}^{n} \left( y_k^2 - 2 y_k m_k^n + (m_k^n)^2 + p_k^n \right),    (39)

where s_k = p_{k-1}^n + (m_{k-1}^n)^2, s_k' = p_k^n + (m_k^n - (V/L)\Delta t)^2 and s_k'' = p_{k,k-1}^n + (m_k^n - (V/L)\Delta t) m_{k-1}^n. By using Proposition 2 and the lag-one covariance smoother, the desired conditional expectations can be obtained from the smoothers:

E\left\{ \frac{1}{\Sigma_0}(I_0 - m_0)^2 \mid y, \Theta^{(t-1)} \right\} = \frac{1}{\Sigma_0}\left( p_0^n + (m_0^n - m_0)^2 \right),    (40)

E\left\{ \frac{1}{Q} \sum_{k=1}^{n} \left( I_k - \frac{V}{L}\Delta t - \left(1 - \frac{R}{L}\Delta t\right) I_{k-1} \right)^2 \mid y, \Theta^{(t-1)} \right\} = \frac{1}{Q} \sum_{k=1}^{n} E\left\{ \left(I_k - \frac{V}{L}\Delta t\right)^2 + \left(1 - \frac{R}{L}\Delta t\right)^2 I_{k-1}^2 - 2\left(I_k - \frac{V}{L}\Delta t\right)\left(1 - \frac{R}{L}\Delta t\right) I_{k-1} \mid y, \Theta^{(t-1)} \right\},    (41)

E\left\{ \left(I_k - \frac{V}{L}\Delta t\right)^2 \mid y, \Theta^{(t-1)} \right\} = p_k^n + \left(m_k^n - \frac{V}{L}\Delta t\right)^2,    (42)

E\left\{ \left(1 - \frac{R}{L}\Delta t\right)^2 I_{k-1}^2 \mid y, \Theta^{(t-1)} \right\} = \left(1 - \frac{R}{L}\Delta t\right)^2 \left( p_{k-1}^n + (m_{k-1}^n)^2 \right),    (43)

E\left\{ \left(I_k - \frac{V}{L}\Delta t\right)\left(1 - \frac{R}{L}\Delta t\right) I_{k-1} \mid y, \Theta^{(t-1)} \right\} = \left(1 - \frac{R}{L}\Delta t\right)\left( p_{k,k-1}^n + \left(m_k^n - \frac{V}{L}\Delta t\right) m_{k-1}^n \right),    (44)

E\left\{ \sum_{k=1}^{n} \frac{1}{\sigma^2}(y_k - I_k)^2 \mid y, \Theta^{(t-1)} \right\} = \frac{1}{\sigma^2} \sum_{k=1}^{n} E\left\{ y_k^2 + I_k^2 - 2 y_k I_k \mid y, \Theta^{(t-1)} \right\},    (45)

E\left\{ y_k^2 + I_k^2 - 2 y_k I_k \mid y, \Theta^{(t-1)} \right\} = y_k^2 - 2 y_k m_k^n + (m_k^n)^2 + p_k^n.    (46)

In equation (39) the smoothers are calculated under the current value of the parameters, \Theta^{(t-1)}; for simplicity, this has not been displayed explicitly.

M-step: Minimizing equation (38) with respect to the parameters, at iteration t, constitutes the maximization step, which yields the updated estimates

\Theta^{(t)} = \arg\min_\Theta Q(\Theta | \Theta^{(t-1)}),    (47)

\frac{\partial Q}{\partial m_0} = 0 \;\Rightarrow\; m_0^{(t)} = m_0^n,    (48)

\frac{\partial Q}{\partial \Sigma_0} = 0 \;\Rightarrow\; \Sigma_0^{(t)} = p_0^n,    (49)

\frac{\partial Q}{\partial \sigma^2} = 0 \;\Rightarrow\; \sigma^{2(t)} = \frac{1}{n} \sum_{k=1}^{n} \left( y_k^2 - 2 y_k m_k^n + (m_k^n)^2 + p_k^n \right),    (50)

\frac{\partial Q}{\partial R} = 0 \;\Rightarrow\; R^{(t)} = \frac{L}{\Delta t}\left( 1 - \frac{\sum_{k=1}^{n} s_k''}{\sum_{k=1}^{n} s_k} \right).    (51)

Convergence is reached when the estimates or the likelihood stabilize. The general scheme of the algorithm can be written as below, with an R sketch of a single iteration given after the list.

(i) Initialize the procedure by selecting starting values for the parameters, \Theta^{(0)} = \{m_0, \Sigma_0, R, \sigma^2\}.
(ii) For t = 1, 2, \ldots:
(iii) Perform the E-step: use Propositions 1 and 2 and the lag-one covariance smoother to obtain the smoothed values m_k^n, p_k^n and p_{k,k-1}^n for k = 1, \ldots, n, using the parameters \Theta^{(t-1)}.
(iv) Perform the M-step: update the estimates m_0, \Sigma_0, R, \sigma^2 using equations (48)-(51) to obtain \Theta^{(t)}.
(v) Repeat steps (iii)-(iv) until convergence.
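A minimal R sketch of one pass through steps (iii)-(iv), built on the kalman_filter, kalman_smoother and lag_one_smoother sketches given earlier, might read as follows; the list theta collects the current values of (m_0, \Sigma_0, R, \sigma^2), V, L, dt and Q are treated as known, and the function name is illustrative.

  # One EM iteration for Theta = (m0, Sigma0, R, sigma2).
  em_step <- function(y, theta, V, L, dt, Q) {
    n   <- length(y)
    # E-step: smoothed moments under the current parameter values Theta^(t-1)
    kf  <- kalman_filter(y, V, L, theta$R, dt, Q, theta$sigma2, theta$m0, theta$Sigma0)
    ks  <- kalman_smoother(kf, theta$R, L, dt, theta$m0, theta$Sigma0)
    pcc <- lag_one_smoother(kf, ks, theta$R, L, dt, theta$sigma2)
    msn <- ks$ms; psn <- ks$ps               # m_k^n, p_k^n
    msn_prev <- c(ks$m0n, msn[-n])           # m_{k-1}^n
    psn_prev <- c(ks$p0n, psn[-n])           # p_{k-1}^n
    b  <- (V / L) * dt
    s  <- psn_prev + msn_prev^2              # s_k
    s3 <- pcc + (msn - b) * msn_prev         # s_k''
    # M-step: closed-form updates (48)-(51)
    list(m0     = ks$m0n,
         Sigma0 = ks$p0n,
         R      = (L / dt) * (1 - sum(s3) / sum(s)),
         sigma2 = mean(y^2 - 2 * y * msn + msn^2 + psn))
  }

Iterating em_step until the estimates (or the likelihood) stabilize yields the ML estimates.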

5. Estimation of parameters via the Bayesian approach

Another approach to the estimation of the unknown parameters \Theta = (m_0, \Sigma_0, R, \sigma^2) is the Bayesian paradigm, which does not assume that \Theta has a fixed value; instead it treats \Theta as a random variable with some probability distribution. This method incorporates knowledge about a particular parameter through a prior distribution that reflects our degree of belief in different values of these quantities. Since the density of the observation at time step k given the observations up to time step k-1 is normal with mean \hat{y}_k^{k-1} = m_k^{k-1} and variance F_k^{k-1} = p_k^{k-1} + \sigma^2, as given by equation (8), the likelihood function of the observations is

L(\Theta | y_{1:n}) = \prod_{k=1}^{n} f(y_k | y_{1:k-1})    (52)
= \prod_{k=1}^{n} \frac{1}{\sqrt{2\pi (p_k^{k-1} + \sigma^2)}} \exp\left[ -\frac{(y_k - m_k^{k-1})^2}{2(p_k^{k-1} + \sigma^2)} \right].    (53)

Without loss of generality, the parameters are assumed to be independent a priori. Under normal priors N(\delta_j, \lambda_j), j = 1, 2 (\delta_j \in \mathbb{R} and \lambda_j > 0 are known hyper-parameters), on R and m_0, and Gamma priors G(\tau_j, \upsilon_j), j = 1, 2, on \sigma^2 and \Sigma_0, the prior distribution of \Theta can be written as

\pi(\Theta) = \pi(m_0)\,\pi(\Sigma_0)\,\pi(R)\,\pi(\sigma^2).    (54)

From the Bayesian point of view, the posterior density of the parameters is proportional to the product of the prior density and the likelihood function, Posterior \propto Likelihood \times Prior:

\pi(\Theta | y_{1:n}) \propto L(\Theta | y_{1:n})\,\pi(\Theta)    (55)
\propto \left( \prod_{k=1}^{n} \frac{1}{\sqrt{2\pi (p_k^{k-1} + \sigma^2)}} \right) (\sigma^2)^{\tau_1 - 1} \exp\left[ -\frac{\sigma^2}{\upsilon_1} \right] \Sigma_0^{\tau_2 - 1} \exp\left[ -\frac{\Sigma_0}{\upsilon_2} \right] \exp\left[ -\sum_{k=1}^{n} \frac{(y_k - m_k^{k-1})^2}{2(p_k^{k-1} + \sigma^2)} \right] \exp\left[ -\frac{(R - \delta_1)^2}{2\lambda_1} \right] \exp\left[ -\frac{(m_0 - \delta_2)^2}{2\lambda_2} \right].    (56)

The Bayesian point estimate is the posterior expectation, under the presumption that the loss function is quadratic. The Bayesian estimator of \Theta can thus be written as

\hat{\Theta} = \int \Theta \, \pi(\Theta | y_{1:n}) \, d\Theta.    (57)

In this case the posterior distribution does not have a closed form; hence, for inference, an MCMC algorithm facilitates sampling from the posterior distribution. Simulation from the posterior distribution of the parameters is done using Metropolis-Hastings sampling. A schematic overview of the proposed MCMC algorithm is given by Cappe et al [23].

The general scheme of the sampling is as below; an R sketch of the sampler is given below the list.

(i) Initialization: choose \Theta^{(0)} = (m_0^{(0)}, \Sigma_0^{(0)}, R^{(0)}, \sigma^{2(0)}).
(ii) For t = 1, 2, \ldots:
Generate \tilde{\sigma}^2 from LN(\log(\sigma^{2(t-1)}), \zeta^2),
Generate \tilde{\Sigma}_0 from LN(\log(\Sigma_0^{(t-1)}), \zeta^2),
Compute

r = \frac{\pi(\tilde{\Theta} | y_{1:n}) \, \tilde{\sigma}^2 \, \tilde{\Sigma}_0}{\pi(\Theta^{(t-1)} | y_{1:n}) \, \sigma^{2(t-1)} \, \Sigma_0^{(t-1)}},    (58)

Generate u \sim U[0,1]; if u < r, set \Theta^{(t)} = \tilde{\Theta}; otherwise \Theta^{(t)} = \Theta^{(t-1)}.

A normal (random walk) proposal distribution is used for R and m_0.
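A hedged R sketch of the sampler is given below. The function log_posterior evaluates the logarithm of (56) via the prediction error decomposition (52)-(53) and the kalman_filter sketch of Section 3; the hyper-parameter values delta, lambda, tau, ups, the proposal scale zeta and the random-walk standard deviations in step are illustrative choices, not values used in the paper.

  # Metropolis-Hastings sampler for Theta = (m0, Sigma0, R, sigma2).
  log_posterior <- function(theta, y, V, L, dt, Q,
                            delta = c(5, 0), lambda = c(1, 1),   # priors on R and m_0
                            tau = c(2, 2), ups = c(0.1, 0.1)) {  # priors on sigma^2, Sigma_0
    kf <- kalman_filter(y, V, L, theta$R, dt, Q, theta$sigma2, theta$m0, theta$Sigma0)
    sum(dnorm(y, kf$mp, sqrt(kf$pp + theta$sigma2), log = TRUE)) +   # likelihood (53)
      dnorm(theta$R,  delta[1], sqrt(lambda[1]), log = TRUE) +
      dnorm(theta$m0, delta[2], sqrt(lambda[2]), log = TRUE) +
      dgamma(theta$sigma2, shape = tau[1], scale = ups[1], log = TRUE) +
      dgamma(theta$Sigma0, shape = tau[2], scale = ups[2], log = TRUE)
  }

  # theta0 is a list(m0 = , Sigma0 = , R = , sigma2 = ) of starting values.
  mh_sampler <- function(y, theta0, V, L, dt, Q, n_iter = 7000,
                         zeta = 0.1, step = c(R = 0.1, m0 = 0.1)) {
    draws <- matrix(NA, n_iter, 4,
                    dimnames = list(NULL, c("m0", "Sigma0", "R", "sigma2")))
    theta <- theta0
    lp    <- log_posterior(theta, y, V, L, dt, Q)
    for (t in 1:n_iter) {
      prop <- list(m0     = rnorm(1, theta$m0, step["m0"]),         # normal walk on m_0
                   Sigma0 = exp(rnorm(1, log(theta$Sigma0), zeta)), # log-normal move
                   R      = rnorm(1, theta$R, step["R"]),           # normal walk on R
                   sigma2 = exp(rnorm(1, log(theta$sigma2), zeta))) # log-normal move
      lp_prop <- log_posterior(prop, y, V, L, dt, Q)
      # log of the acceptance ratio (58), including the log-normal Jacobian terms
      log_r <- lp_prop - lp +
        log(prop$sigma2 / theta$sigma2) + log(prop$Sigma0 / theta$Sigma0)
      if (log(runif(1)) < log_r) { theta <- prop; lp <- lp_prop }
      draws[t, ] <- c(theta$m0, theta$Sigma0, theta$R, theta$sigma2)
    }
    draws
  }

Posterior means of the retained draws, after discarding a burn-in, then serve as the Bayesian point estimates of equation (57).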

6. Numerical Simulation

Four data sets with different values of m_0, \Sigma_0, R, \sigma^2, V and L, used for the simulation study, are presented in Table 1. The purpose is to estimate the current as the state of the model using optimal filtering, based on the discrete state space form of the model. Fifty values of the corrupted states, the states, the predictions, the filters and the smoothers for the estimation of the current are obtained using observed data generated from data set 3; only 10 of them are illustrated in Table 2. In Figure 1, the predictions, the filters and the smoothers are shown as lines and their confidence intervals as dashed lines. The simulation results in Table 2 and Figure 1 show that the prediction of the current is more uncertain than the corresponding filtered value, which in turn is more uncertain than the corresponding smoothed value. Consequently, the confidence interval of the smoother is shorter than that of the filter, and that of the filter is shorter than that of the prediction. Estimation of the four unknown parameters (R, m_0, \Sigma_0 and \sigma^2) is investigated based on ML and Bayesian inference. These methods have been tested using the two algorithms, EM and Metropolis-Hastings, on the four simulated data sets. The estimation results of the two methods are shown in Table 3. The estimated values are very close to the exact values of the parameters, and the results show that the smaller the step size, the more precise the estimates. As can be seen, thanks to the prior knowledge of the parameters, the mean square error (MSE) of the Bayesian estimator is smaller than that of the ML estimator obtained via the EM algorithm. In Figures 2 and 3, the Metropolis-Hastings algorithm appears to have converged after 7000 iterations, and the posterior means of the parameters, taken as the Bayesian estimates, are obtained from the last 1000 iterations.

7. Conclusion

In this paper the SDE of the RL electrical circuit is considered as the dynamic model of a state space system, and the hidden current is estimated via optimal filtering. The results show that the smoothers, the filters and the predictions have, in that order, the shortest confidence intervals. Four parameters of the model are estimated via Bayesian and ML approaches. Because the Bayesian paradigm uses the prior distribution to incorporate knowledge about the parameters, this approach is better than the ML approach in the estimation process, based on the MSE of the estimators.

References

[1] S. Särkkä, On unscented Kalman filtering for state estimation of continuous-time nonlinear systems, IEEE Trans. Autom. Control. 52 (2007).
[2] M.S. Grewal, L.R. Weill, A.P. Andrews, Global Positioning Systems, Inertial Navigation and Integration, New York: Wiley Interscience.
[3] C.J. Long, P.L. Purdon, S. Temereanca, N.U. Desai, M.S. Hamalainen, E.N. Brown, State-space solutions to the dynamic magnetoencephalography inverse problem using high performance computing, Ann. Appl. Statist. 5 (2011).
[4] S.J. Julier, J.K. Uhlmann, Unscented filtering and nonlinear estimation, Proc. IEEE. 92 (2004).
[5] Y. Bar-Shalom, X.R. Li, T. Kirubarajan, Estimation with Applications to Tracking and Navigation, New York: Wiley Interscience.
[6] S. Maskell, N. Gordon, M. Rollason, D. Salmond, Efficient multitarget tracking using particle filters, Image Vision Comput. 21 (2003).
[7] B. Ristic, S. Arulampalam, N. Gordon, Beyond the Kalman Filter: Particle Filters for Tracking Applications, Boston: Artech House.
[8] M. Huang, L. Wenyuan, Y. Wei, Estimating parameters of synchronous generators using square-root unscented Kalman filter, Electr. Pow. Syst. Res. 80 (2010).
[9] W.M. Al-Hasawi, K.M. El-Naggar, New digital filter for unbalance distorted current and voltage estimation in power systems, Electr. Pow. Syst. Res. 78 (2008).
[10] R.C.K. Lee, Optimal Estimation, Identification and Control, Cambridge, Massachusetts: M.I.T. Press, 1964.

[11] C. Leondes, J.B. Peller, E.B. Stear, Nonlinear smoothing theory, IEEE Trans. Syst. Science Cyber. 6 (1970).
[12] R.H. Shumway, D.S. Stoffer, Time Series Analysis and Its Applications: With R Examples, New York: Springer.
[13] S. Roweis, Z. Ghahramani, A unifying review of linear Gaussian models, Neural Computation. 11 (1999).
[14] J.F.G. De Freitas, M. Niranjan, A.H. Gee, Dynamic learning with the EM algorithm for neural networks, VLSI Signal Processing. 26 (2000).
[15] A. Hagenblad, L. Ljung, A. Wills, Maximum likelihood identification of Wiener models, Automatica. 44 (2008).
[16] J. Olsson, O. Cappe, R. Douc, E. Moulines, Sequential Monte Carlo smoothing with application to parameter estimation in nonlinear state space models, Bernoulli. 14 (2008).
[17] A. Wills, T. Schön, B. Ninness, Parameter estimation for discrete-time nonlinear systems using EM, Proc. 17th IFAC World Congress, Seoul, Korea.
[18] G. Kitagawa, Self-organizing time series model, J. Amer. Statist. Assoc. 93 (1998).
[19] X. Yang, K. Xing, K. Shi, Q. Pan, Joint state and parameter estimation in particle filtering and stochastic optimization, Control Theory Appl. 62 (2008).
[20] G. Poyiadjis, A. Doucet, S.S. Singh, Particle approximations of the score and observed information matrix in state space models with application to parameter estimation, Biometrika. 98 (2011).
[21] R. Farnoosh, P. Nabati, R. Rezaeyan, M. Ebrahimi, A stochastic perspective for RL electrical circuits using different noise terms, COMPEL. 30 (2011).
[22] A.P. Dempster, N.M. Laird, D.B. Rubin, Maximum likelihood from incomplete data via the EM algorithm, J. Roy. Statist. Soc. Ser. B. 39 (1977).
[23] O. Cappe, C. Robert, T. Ryden, Reversible jump MCMC converging to birth-and-death MCMC and more general continuous time samplers, J. Roy. Statist. Soc. Ser. B. 65 (2003).

Captions

Table 1: Simulation settings for the four data sets.
Table 2: Predictions, filters and smoothers.
Table 3: The numerical values of the estimators \hat{m}_0, \hat{\Sigma}_0, \hat{R} and \hat{\sigma}^2 for different \Delta t; the values in parentheses are mean square errors.
Figure 1: The simulated values of I_k, for k = 1, \ldots, 50, in data set 3 are shown as points. In the top, middle and bottom panels, the predictions m_k^{k-1}, the filters m_k^k and the smoothers m_k^n with their confidence intervals are shown as a solid line and dashed lines, respectively.
Figure 2: Evolution of the Metropolis-Hastings chains over 7000 iterations for the unknown parameters in data set 1 with \Delta t = .
Figure 3: Evolution of the Metropolis-Hastings chains over 7000 iterations for the unknown parameters in data set 3 with \Delta t = 0.01.

Figure 1. [Three panels, current vs. time: Prediction, Filter, Smoother.]

Figure 2. [Trace plots of the unknown parameters vs. iterations.]

Figure 3. [Trace plots of the unknown parameters vs. iterations.]

Table 1. [Columns: data set, m_0, \Sigma_0, R, \sigma^2, V, L.]

Table 2. [Columns: t_k, y_k, I_k, m_k^{k-1}, p_k^{k-1}, m_k^k, p_k^k, m_k^n, p_k^n.]

Table 3. [Bayesian and ML estimates \hat{m}_0, \hat{\Sigma}_0, \hat{R}, \hat{\sigma}^2, with mean square errors in parentheses, for the four data sets and two step sizes \Delta t.]

Responses to Technical Check Results

Electric Power Systems Research
Ref. EPSR-D
Title: State space modeling of RL electrical circuit and estimation of parameters via Maximum likelihood and Bayesian approaches

Dear Editor,

Thank you for your useful comments and suggestions on the language and structure of our manuscript. We have modified the manuscript accordingly, and the detailed corrections are listed below, point by point:

1) Type the whole manuscript with double line spacing including tables. Please note that the length of the manuscript including captions, figures, and tables should not exceed 22 double line-spaced pages.

We have now used double spacing throughout the manuscript.

2) References should be prepared according to the Guide for Authors. For journal articles, please adhere to the following order: initials followed by surname, article title, abbreviated journal name, volume number, year in parentheses and page span.

We have checked all the references and formatted them strictly according to the Guide for Authors.

3) Please do not embed tables or figures within the text of your manuscript file. Collected figure captions should be provided on a separate page.

We have placed the tables and figures after the references in the manuscript file; furthermore, the collected figure captions have been provided on a separate page.

4) Figures 2 and 3 provided in the manuscript are not cited in the text.

Figures 2 and 3 are now cited in sequence in the main text.

5) The tel/fax numbers (with country and area code) of the corresponding author should be provided on the first page of the manuscript.

The telephone number of the corresponding author has been provided on the first page of the manuscript.

The manuscript has been resubmitted to your journal. We look forward to your positive response.

Best Regards,
Sincerely Yours
