
Elsevier Editorial System(tm) for Electric Power Systems Research Manuscript Draft

Manuscript Number: EPSR-D-12-00187

Title: State space modeling of RL electrical circuit and estimation of parameters via Maximum likelihood and Bayesian approaches

Article Type: Research Paper

Keywords: RL electrical circuit; State space model; Optimal filtering; Parameter estimation.

Corresponding Author: Dr. Rahman Farnoosh
Corresponding Author's Institution:
First Author: Rahman Farnoosh
Order of Authors: Rahman Farnoosh; Arezoo Hajrajabi

Abstract: This paper establishes a state space model for the RL electrical circuit. The mathematical model of the RL circuit is a linear stochastic differential equation, and the current in the circuit is corrupted by measurement noise. Because of the white noise the current is hidden, and a state space model is obtained for the RL circuit. Optimal filtering is used to estimate the current from the noisy observations. All unknown parameters of the model are estimated both by the maximum likelihood approach, using the Expectation-Maximization algorithm, and from a Bayesian Monte Carlo perspective, using the Metropolis-Hastings algorithm. Numerical simulation examples, implemented in the R programming environment, illustrate the efficiency of the proposed approaches and show an excellent estimation of the parameters.

Cover Letter

Dear Professor,

Enclosed please find a copy of my paper entitled "State space modeling of RL electrical circuit and estimation of parameters via Maximum likelihood and Bayesian approaches" for possible review and publication in your esteemed journal, Electric Power Systems Research. This paper has not been submitted to any other journal.

Best Regards,
Sincerely Yours,
Dr. Rahman Farnoosh
Associate Professor of Statistics
Address: School of Mathematics, Iran University of Science & Technology, Narmak, Tehran 16846, Iran
Tel: +98 21 73225427, Fax: +98 21 77240302

Highlights

1) Considering the SDE of the RL electrical circuit as the dynamic model of a state space system.
2) Estimation of the current as the state of the system.
3) Estimation of all unknown parameters of the model via both ML and Bayesian approaches, using the EM and Metropolis-Hastings algorithms.
4) Numerical simulation examples, implemented in the R programming environment, demonstrating the efficiency of the proposed approaches.

State space modeling of RL electrical circuit and estimation of parameters via Maximum likelihood and Bayesian approaches

R. Farnoosh, A. Hajrajabi
School of Mathematics, Iran University of Science and Technology, Narmak, Tehran 16846, Iran.
E-mail: rfarnoosh@iust.ac.ir, hajirajabi@iust.ac.ir. Tel: +98 21 73225427.

Abstract. This paper establishes a state space model for the RL electrical circuit. The mathematical model of the RL circuit is a linear stochastic differential equation, and the current in the circuit is corrupted by measurement noise. Because of the white noise the current is hidden, and a state space model is obtained for the RL circuit. Optimal filtering is used to estimate the current from the noisy observations. All unknown parameters of the model are estimated both by the maximum likelihood approach, using the Expectation-Maximization algorithm, and from a Bayesian Monte Carlo perspective, using the Metropolis-Hastings algorithm. Numerical simulation examples, implemented in the R programming environment, illustrate the efficiency of the proposed approaches and show an excellent estimation of the parameters.

Keywords: RL electrical circuit; State space model; Optimal filtering; Parameter estimation.

1. Introduction

A large number of physical phenomena can be modeled by stochastic differential equations (SDEs), observed at discrete instants of time and treated as the dynamic model of a state space system. Applications of these models have been broadly developed in statistics, control engineering and signal processing [1-3]. State estimation of time-varying systems is a central goal when only noisy observations are available; therefore the Kalman filter and its modifications, such as the extended Kalman filter, the unscented Kalman filter, the particle filter and the digital filter, have been applied to state estimation in state space systems [4-9]. Optimal smoothing, which is closely related to optimal filtering, is studied in [10, 11]. Since exact knowledge of the parameters is not a tenable assumption in applied settings, unknown parameters can be included in the state space model. Because the state estimates are sensitive to these parameters, estimating appropriate values for them from the available data is a main concern. Ideally, all unknown parameters in these models can be estimated via both maximum likelihood (ML) and Bayesian approaches. ML estimation is a common parameter estimation method in linear and non-linear Gaussian state space models. The Expectation-Maximization (EM) algorithm is a popular procedure for maximizing the likelihood function of the model and is applicable to parameter estimation in linear Gaussian state space models [12, 13]. Other works [13, 14] address ML estimation of parameters via the EM algorithm for nonlinear dynamical systems.

The gradient-based search procedure for the maximization step is studied in [15-17]. The Bayesian method provides an alternative approach to assessing parameters in a large class of state space models when sufficiently informative priors on the unknown parameters are available. This approach, described in [18, 19], treats parameter estimation by concatenating the state with the unknown parameters and introducing an artificial dynamic on the parameters. Markov chain Monte Carlo (MCMC) methods for drawing samples from the posterior distribution of the parameters and for approximating mathematical expectations have been used extensively in Bayesian inference. Poyiadjis et al. [20] applied particle methods to parameter estimation in a nonlinear non-Gaussian state space model within a Bayesian framework based on sequential Monte Carlo. Recently, a stochastic perspective on the RL electrical circuit using different noise terms was presented by Farnoosh et al. [21]. Because observing the current in this circuit free of measurement noise is an unrealistic assumption in empirical situations, the present work considers a specific form of state space model that uses the SDE of the RL electrical circuit as the dynamic model: the model falls into the class of state space models once the current in the stochastic RL circuit is corrupted by measurement noise and is only observed through noisy observations. We are interested in recovering the noise-corrupted current via optimal filtering. To the best of our knowledge, the problem of considering the SDE of the RL electrical circuit as the dynamic model of a state space system has not been studied before.

Furthermore, we believe that estimation of the current as the state of the system, together with estimation of all unknown parameters of the model (the variance of the observed current in the measurement model, the resistance, and the mean and variance of the prior distribution of the current at time step 0) via both ML and Bayesian approaches using the EM and Metropolis-Hastings algorithms, is investigated here for the first time.

The rest of the paper unfolds as follows. In Section 2, the SDE of the RL electrical circuit is considered as the dynamic model of a state space system. Section 3 covers filtering and smoothing for estimation of the hidden current. In Section 4, the EM algorithm for ML estimation is derived for this model. In Section 5, an MCMC sampling algorithm is used to simulate from the posterior distribution and perform Bayesian inference. In Section 6, numerical experiments verify the accuracy of the proposed methods. Conclusions are given in Section 7.

2. State Space Modeling of the Stochastic RL Electrical Circuit

A resistor-inductor (RL) circuit is an electrical circuit composed of a resistor and an inductor driven by a voltage or current source. Adding randomness to the potential source, the stochastic differential equation describing the behavior of the RL circuit is given by Farnoosh et al. [21] as

dI(t) = ( (1/L) V(t) − (R/L) I(t) ) dt + (ε/L) dB(t),   (1)

where B(t) is a one-dimensional Brownian motion, and V(t) and I(t) denote the potential source and the current at time t, respectively. When the current is hidden and only observed through noisy observations, it is natural to view the model as a continuous-discrete linear state space model,

dI(t) = ( (1/L) V(t) − (R/L) I(t) ) dt + (ε/L) dB(t),   (2)
y_k = I(t_k) + r_k,   k = 0, ..., n.   (3)

Equations (2) and (3) are the dynamic model and the measurement model, respectively. The dynamic model can be interpreted as an Ito-type stochastic differential equation, leading to the following state space model at the discrete times t_k:

I_k = I_{k-1} + (1/L)(V − R I_{k-1}) Δt + q_{k-1},
y_k = I_k + r_k,   k = 0, ..., n,   (4)

where q_{k-1} ~ N(0, Q_{k-1} = (ε/L)² Δt) is the process noise, y_k the observation, I_k the state of the model, and r_k ~ N(0, σ²) the measurement noise. The time step k runs from 0 to n; at time step 0 there is no measurement, only the prior distribution I_0 ~ N(m_0^0 = m_0, p_0^0 = Σ_0).
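To make the discretization concrete, the following R sketch simulates the state space model (4). It is a minimal illustration, not code from the paper: the parameter values follow data set 3 of Table 1, and the step size Δt = 0.1 is an assumption.

```r
## Minimal sketch (not from the paper): simulate the discretized RL model (4).
## Parameter values follow data set 3 of Table 1; dt is an assumption.
set.seed(1)
n <- 50; dt <- 0.1
V <- 15; L <- 4; R <- 15; eps <- 2; sigma2 <- 2
m0 <- 3; Sigma0 <- 0.5
Q <- (eps / L)^2 * dt                        # process-noise variance Q_{k-1}
I <- numeric(n + 1)
I[1] <- rnorm(1, m0, sqrt(Sigma0))           # prior I_0 ~ N(m0, Sigma0)
y <- numeric(n)
for (k in 1:n) {
  I[k + 1] <- I[k] + (V - R * I[k]) / L * dt + rnorm(1, 0, sqrt(Q))  # dynamic model
  y[k] <- I[k + 1] + rnorm(1, 0, sqrt(sigma2))                       # measurement model
}
```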

3. Filtering and Smoothing

Computing the marginal distribution of the state I_k at time step k given the history of the measurements up to time step k, p(I_k | y_{1:k}), is the main purpose of optimal filtering. The discrete-time Kalman smoother can be used to compute the closed-form smoothing solution p(I_k | y_{1:n}), which is conditional on the whole measurement record y_{1:n}. The solution to the discrete-time filtering problem can be computed by the following prediction and update steps [1, 5].

Proposition 1. The optimal filtering equations for the linear filtering model (4) can be obtained in closed form.

(i) The prediction step density is Gaussian with parameters

p(I_k | y_{1:k-1}) = N(I_k | m_k^{k-1}, p_k^{k-1}),   (5)

m_k^{k-1} = (V/L) Δt + (1 − (R/L) Δt) m_{k-1}^{k-1},
p_k^{k-1} = (1 − (R/L) Δt)² p_{k-1}^{k-1} + Q.   (6)

(ii) The update step densities are Gaussian with parameters

p(I_k | y_{1:k}) = N(I_k | m_k^k, p_k^k),   (7)
p(y_k | y_{1:k-1}) = N(y_k | ŷ_k^{k-1}, F_k^{k-1}),   (8)

ŷ_k^{k-1} = m_k^{k-1},
F_k^{k-1} = p_k^{k-1} + σ²,
m_k^k = m_k^{k-1} + p_k^{k-1} (F_k^{k-1})^{-1} (y_k − ŷ_k^{k-1}),   (9)
p_k^k = p_k^{k-1} − (p_k^{k-1})² (F_k^{k-1})^{-1}.

Proof (i): The joint density of I_{k-1} and I_k given y_{1:k-1} is

p(I_{k-1}, I_k | y_{1:k-1}) = p(I_k | I_{k-1}) p(I_{k-1} | y_{1:k-1})   (10)
= N(I_k | (V/L) Δt + (1 − (R/L) Δt) I_{k-1}, Q) N(I_{k-1} | m_{k-1}^{k-1}, p_{k-1}^{k-1})   (11)
= N( (I_{k-1}, I_k)ᵀ | m′, p′ ),   (12)

where (matrix rows are separated by semicolons)

m′ = ( m_{k-1}^{k-1} ; (V/L) Δt + (1 − (R/L) Δt) m_{k-1}^{k-1} ),
p′ = ( p_{k-1}^{k-1} , (1 − (R/L) Δt) p_{k-1}^{k-1} ; (1 − (R/L) Δt) p_{k-1}^{k-1} , (1 − (R/L) Δt)² p_{k-1}^{k-1} + Q ).

Hence the marginal distribution of I_k can be represented as

p(I_k | y_{1:k-1}) = N(I_k | m_k^{k-1}, p_k^{k-1}),   (13)

where m_k^{k-1} = (V/L) Δt + (1 − (R/L) Δt) m_{k-1}^{k-1} and p_k^{k-1} = (1 − (R/L) Δt)² p_{k-1}^{k-1} + Q.

Proof (ii): The joint distribution of y_k and I_k is

p(I_k, y_k | y_{1:k-1}) = p(y_k | I_k) p(I_k | y_{1:k-1})   (14)
= N(y_k | I_k, σ²) N(I_k | m_k^{k-1}, p_k^{k-1})   (15)
= N( (I_k, y_k)ᵀ | m″, p″ ),   (16)

where m″ = ( m_k^{k-1} ; m_k^{k-1} ) and p″ = ( p_k^{k-1} , p_k^{k-1} ; p_k^{k-1} , p_k^{k-1} + σ² ).

Therefore the conditional distribution of I_k and the marginal distribution of y_k can be written as

p(I_k | y_{1:k-1}, y_k) = p(I_k | y_{1:k}) = N(I_k | m_k^k, p_k^k),   (17)
p(y_k | y_{1:k-1}) = N(y_k | ŷ_k^{k-1}, F_k^{k-1}),   (18)

with parameters

ŷ_k^{k-1} = m_k^{k-1},
F_k^{k-1} = p_k^{k-1} + σ²,
m_k^k = m_k^{k-1} + p_k^{k-1} (F_k^{k-1})^{-1} (y_k − ŷ_k^{k-1}),   (19)
p_k^k = p_k^{k-1} − (p_k^{k-1})² (F_k^{k-1})^{-1}.
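Since the model is scalar, the recursions (5)-(9) of Proposition 1 amount to a few arithmetic operations per time step. The following R fragment is an illustrative implementation, not the authors' code, reusing the quantities defined in the simulation sketch above.

```r
## Minimal sketch: Kalman filter of Proposition 1, equations (5)-(9).
## Assumes y, n, dt, V, L, R, Q, sigma2, m0, Sigma0 from the simulation sketch.
A <- 1 - (R / L) * dt; b <- (V / L) * dt    # transition: I_k = b + A * I_{k-1} + q
mf <- numeric(n); pf <- numeric(n)          # filtered means m_k^k and variances p_k^k
m <- m0; p <- Sigma0
for (k in 1:n) {
  mp <- b + A * m                           # prediction mean m_k^{k-1}, eq. (6)
  pp <- A^2 * p + Q                         # prediction variance p_k^{k-1}, eq. (6)
  Fk <- pp + sigma2                         # innovation variance F_k^{k-1}
  m  <- mp + pp / Fk * (y[k] - mp)          # update mean m_k^k, eq. (9)
  p  <- pp - pp^2 / Fk                      # update variance p_k^k, eq. (9)
  mf[k] <- m; pf[k] <- p
}
## 95% confidence band of the filter: mf +/- 1.96 * sqrt(pf)
```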

Proposition 2. The recursive equations of the discrete-time Kalman smoother are

p(I_k | y_{1:n}) = N(I_k | m_k^n, p_k^n),   (20)

with parameters

c_k = (1 − (R/L) Δt) p_k^k (p_{k+1}^k)^{-1},
m_k^n = m_k^k + c_k (m_{k+1}^n − m_{k+1}^k),   (21)
p_k^n = p_k^k + c_k (p_{k+1}^n − p_{k+1}^k) c_k,   k = n−1, ..., 0,

where m_k^k and p_k^k are the mean and variance computed by the Kalman filter. The recursion is started from the last time step n.

Proof: The joint distribution of I_k and I_{k+1} given y_{1:k} is

p(I_k, I_{k+1} | y_{1:k}) = p(I_{k+1} | I_k) p(I_k | y_{1:k})   (22)
= N(I_{k+1} | (V/L) Δt + (1 − (R/L) Δt) I_k, Q) N(I_k | m_k^k, p_k^k)   (23)
= N( (I_k, I_{k+1})ᵀ | m‴, p‴ ),   (24)

where

m‴ = ( m_k^k ; (V/L) Δt + (1 − (R/L) Δt) m_k^k ),
p‴ = ( p_k^k , (1 − (R/L) Δt) p_k^k ; (1 − (R/L) Δt) p_k^k , (1 − (R/L) Δt)² p_k^k + Q ).

By the Markov property of the states, the conditional distribution of I_k given I_{k+1} and y_{1:k} is

p(I_k | I_{k+1}, y_{1:n}) = p(I_k | I_{k+1}, y_{1:k})   (25)
= N(I_k | m̃_k, p̃_k),   (26)

where

m_{k+1}^k = (V/L) Δt + (1 − (R/L) Δt) m_k^k,
p_{k+1}^k = (1 − (R/L) Δt)² p_k^k + Q,
c_k = (1 − (R/L) Δt) p_k^k (p_{k+1}^k)^{-1},   (27)
m̃_k = m_k^k + c_k (I_{k+1} − m_{k+1}^k),
p̃_k = p_k^k − c_k p_{k+1}^k c_k.

Hence the joint distribution of I_k and I_{k+1} given all the data can be written as

p(I_k, I_{k+1} | y_{1:n}) = p(I_k | I_{k+1}) p(I_{k+1} | y_{1:n})   (28)
= N(I_k | m̃_k, p̃_k) N(I_{k+1} | m_{k+1}^n, p_{k+1}^n)   (29)
= N( (I_{k+1}, I_k)ᵀ | m*, p* ),   (30)

where

m* = ( m_{k+1}^n ; m_k^k + c_k (m_{k+1}^n − m_{k+1}^k) ),
p* = ( p_{k+1}^n , p_{k+1}^n c_k ; c_k p_{k+1}^n , c_k p_{k+1}^n c_k + p̃_k ).

Therefore the marginal distribution of I_k can be derived as

p(I_k | y_{1:n}) = N(I_k | m_k^n, p_k^n),   (31)

where

m_k^n = m_k^k + c_k (m_{k+1}^n − m_{k+1}^k),
p_k^n = p_k^k + c_k (p_{k+1}^n − p_{k+1}^k) c_k,   k = n−1, ..., 0.   (32)

For the state space model specified in equation (4), the lag-one covariance smoother is obtained from the quantities computed in Propositions 1 and 2, with initial condition (see [12])

p_{n,n-1}^n = (1 − h_n)(1 − (R/L) Δt) p_{n-1}^{n-1},   (33)

where

h_n = p_n^{n-1} / ( p_n^{n-1} + σ² ).   (34)

Also, for t = n, n−1, ..., 2,

p_{t-1,t-2}^n = E[(I_{t-1} − m_{t-1}^n)(I_{t-2} − m_{t-2}^n)] = p_{t-1}^{t-1} c_{t-2} + c_{t-1} ( p_{t,t-1}^n − (1 − (R/L) Δt) p_{t-1}^{t-1} ) c_{t-2},   (35)

where c_t = (1 − (R/L) Δt) p_t^t (p_{t+1}^t)^{-1}.

Furthermore, by the properties of the Gaussian distribution, the confidence intervals for the prediction, the filter and the smoother are m_k^{k-1} ± 1.96 √(p_k^{k-1}), m_k^k ± 1.96 √(p_k^k) and m_k^n ± 1.96 √(p_k^n), respectively.
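The smoother recursion (21) is a single backward pass over the filtering output. The R fragment below sketches it under the same assumptions as the previous sketches; ms and ps then give the smoothed means and variances, whose 1.96-standard-deviation bands are the smoother confidence intervals above.

```r
## Minimal sketch: Kalman (RTS) smoother of Proposition 2, equations (20)-(21).
## Reuses mf, pf, A, b, Q and n from the filter sketch above.
ms <- mf; ps <- pf                      # smoothed moments m_k^n, p_k^n
for (k in (n - 1):1) {
  mp <- b + A * mf[k]                   # m_{k+1}^k
  pp <- A^2 * pf[k] + Q                 # p_{k+1}^k
  ck <- A * pf[k] / pp                  # smoother gain c_k, eq. (21)
  ms[k] <- mf[k] + ck * (ms[k + 1] - mp)
  ps[k] <- pf[k] + ck^2 * (ps[k + 1] - pp)
}
```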

4. Estimation of Parameters via the Maximum Likelihood Approach

The problem of parameter estimation in state space models has attracted considerable interest in recent years, and many approaches have been proposed. One of them is ML estimation, with the EM algorithm serving as the numerical optimization used to compute the ML estimates. If the model in equation (4) is considered as a parametric function of Θ = (m_0, Σ_0, R, σ²), then these parameters can be estimated via the EM algorithm [22]. The basic idea is that if the states I = (I_0, I_1, ..., I_n) were observed in addition to the observations y = (y_1, ..., y_n), then {I, y} could be treated as the complete data, with complete-data likelihood

L_Θ^c(I, y) = f_{m_0,Σ_0}(I_0) ∏_{k=1}^n f_R(I_k | I_{k-1}) ∏_{k=1}^n f_{σ²}(y_k | I_k).   (36)

The log-likelihood of the complete data under the Gaussian assumption, ignoring constants, can be written as

−2 ln L^c(Θ | I, y) = ln Σ_0 + (1/Σ_0)(I_0 − m_0)² + n ln Q + (1/Q) ∑_{k=1}^n ( I_k − (V/L) Δt − (1 − (R/L) Δt) I_{k-1} )² + n ln σ² + (1/σ²) ∑_{k=1}^n (y_k − I_k)².   (37)

The EM algorithm is an iterative method for finding the ML estimate of Θ based on the incomplete data y. Its two steps (E and M) are as follows.

E-step: The conditional expectation of −2 times the complete-data log-likelihood, given the observed data and the current parameters, is

Q(Θ | Θ^(t-1)) = E{ −2 ln L^c(Θ | I, y) | y, Θ^(t-1) }   (38)
= ln Σ_0 + (1/Σ_0)( p_0^n + (m_0^n − m_0)² ) + n ln Q + (1/Q){ ∑_{k=1}^n s″_k + (1 − (R/L) Δt)² ∑_{k=1}^n s_k − 2 (1 − (R/L) Δt) ∑_{k=1}^n s′_k } + n ln σ² + (1/σ²) ∑_{k=1}^n ( y_k² − 2 y_k m_k^n + (m_k^n)² + p_k^n ),   (39)

where s_k = p_{k-1}^n + (m_{k-1}^n)², s′_k = p_{k,k-1}^n + (m_k^n − (V/L) Δt) m_{k-1}^n and s″_k = p_k^n + (m_k^n − (V/L) Δt)². Using Proposition 2, the required conditional expectations can be expressed in terms of the smoothers:

E{ (1/Σ_0)(I_0 − m_0)² | y, Θ^(t-1) } = (1/Σ_0)( p_0^n + (m_0^n − m_0)² ),   (40)

E{ (1/Q) ∑_{k=1}^n ( I_k − (V/L) Δt − (1 − (R/L) Δt) I_{k-1} )² | y, Θ^(t-1) }
= (1/Q) ∑_{k=1}^n E{ (I_k − (V/L) Δt)² + ((1 − (R/L) Δt) I_{k-1})² − 2 (I_k − (V/L) Δt)((1 − (R/L) Δt) I_{k-1}) | y, Θ^(t-1) },   (41)

E{ (I_k − (V/L) Δt)² | y, Θ^(t-1) } = p_k^n + (m_k^n − (V/L) Δt)²,   (42)

E{ ((1 − (R/L) Δt) I_{k-1})² | y, Θ^(t-1) } = (1 − (R/L) Δt)² ( p_{k-1}^n + (m_{k-1}^n)² ),   (43)

E{ (I_k − (V/L) Δt)((1 − (R/L) Δt) I_{k-1}) | y, Θ^(t-1) } = (1 − (R/L) Δt)( p_{k,k-1}^n + (m_k^n − (V/L) Δt) m_{k-1}^n ),   (44)

E{ (1/σ²) ∑_{k=1}^n (y_k − I_k)² | y, Θ^(t-1) } = (1/σ²) ∑_{k=1}^n E{ y_k² + I_k² − 2 y_k I_k | y, Θ^(t-1) },   (45)

E{ y_k² + I_k² − 2 y_k I_k | y, Θ^(t-1) } = y_k² − 2 y_k m_k^n + (m_k^n)² + p_k^n.   (46)

In equation (39) the smoothers are computed under the current parameter values Θ^(t-1); for simplicity this is not displayed explicitly.

M-step: Minimizing equation (38) with respect to the parameters at iteration t constitutes the maximization step, which yields the updated estimates

Θ^(t) = arg min_Θ Q(Θ | Θ^(t-1)),   (47)

∂Q/∂m_0 = 0 ⟹ m_0^(t) = m_0^n,   (48)
∂Q/∂Σ_0 = 0 ⟹ Σ_0^(t) = p_0^n,   (49)
∂Q/∂σ² = 0 ⟹ σ²^(t) = (1/n) ∑_{k=1}^n ( y_k² − 2 y_k m_k^n + (m_k^n)² + p_k^n ),   (50)
∂Q/∂R = 0 ⟹ R^(t) = (L/Δt)( 1 − ∑_{k=1}^n s′_k / ∑_{k=1}^n s_k ).   (51)

Convergence is reached when the estimates or the likelihood stabilize. The general scheme of the algorithm is as follows:

(i) Initialize the procedure by selecting starting values for the parameters, Θ^(0) = {m_0, Σ_0, R, σ²}.
(ii) Step t, for t = 1, ...:
(iii) Perform the E-step: use Propositions 1 and 2 to obtain the smoothed values m_k^n, p_k^n and p_{k,k-1}^n for k = 1, ..., n, using the parameters Θ^(t-1).
(iv) Perform the M-step: update the estimates m_0, Σ_0, R, σ² using equations (48)-(51) to obtain Θ^(t).
(v) Repeat steps (iii)-(iv) until convergence (a minimal sketch of one iteration is given below).
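For illustration, one EM iteration of equations (48)-(51) can be written as follows. This sketch assumes, in addition to the quantities above, the smoothed time-0 moments ms0 and ps0 and a vector pcov of lag-one covariances p_{k,k-1}^n computed via equations (33)-(35); these names are illustrative, not from the paper.

```r
## Minimal sketch of one EM iteration, equations (48)-(51).
## Assumes: y, n, dt, V, L, smoothed moments ms, ps (k = 1..n),
## ms0, ps0 (time step 0) and lag-one covariances pcov[k] = p_{k,k-1}^n.
m_prev <- c(ms0, ms[-n]); p_prev <- c(ps0, ps[-n])  # moments of I_{k-1}
s  <- p_prev + m_prev^2                             # s_k
sp <- pcov + (ms - (V / L) * dt) * m_prev           # s'_k
m0_new     <- ms0                                   # eq. (48)
Sigma0_new <- ps0                                   # eq. (49)
sigma2_new <- mean(y^2 - 2 * y * ms + ms^2 + ps)    # eq. (50)
R_new      <- (L / dt) * (1 - sum(sp) / sum(s))     # eq. (51)
```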

5. Estimation of Parameters via the Bayesian Approach

Another approach to estimating the unknown parameters Θ = (m_0, Σ_0, R, σ²) is the Bayesian paradigm, which does not assume that Θ has a fixed value; instead, Θ is treated as a random variable with some probability distribution. This method incorporates knowledge about a particular parameter through a prior distribution that reflects our degree of belief in the different values of these quantities. Since the density of the observation at time k given the past is normal with mean ŷ_k^{k-1} = m_k^{k-1} and variance F_k^{k-1} = p_k^{k-1} + σ², as given by equation (8), the likelihood function of the observations is

L(Θ | y_{1:n}) = ∏_{k=1}^n f(y_k | y_{1:k-1})   (52)
= ∏_{k=1}^n ( 2π (p_k^{k-1} + σ²) )^{-1/2} exp[ −(y_k − m_k^{k-1})² / ( 2 (p_k^{k-1} + σ²) ) ].   (53)

Without loss of generality the parameters are assumed independent a priori. Under normal priors N(δ_j, λ_j), j = 1, 2 (δ_j ∈ ℝ and λ_j > 0 are known hyper-parameters) on R and m_0, and Gamma priors G(τ_j, υ_j), j = 1, 2 on σ² and Σ_0, the prior distribution of Θ can be written as

π(Θ) = π(m_0) π(Σ_0) π(R) π(σ²).   (54)

From the Bayesian point of view the posterior density of the parameters is proportional to the product of the prior density and the likelihood function (Posterior ∝ Likelihood × Prior):

π(Θ | y_{1:n}) ∝ L(Θ | y_{1:n}) π(Θ)   (55)
∝ ( ∏_{k=1}^n ( 2π (p_k^{k-1} + σ²) )^{-1/2} exp[ −(y_k − m_k^{k-1})² / ( 2 (p_k^{k-1} + σ²) ) ] ) (σ²)^{τ_1 − 1} exp[ −σ²/υ_1 ] Σ_0^{τ_2 − 1} exp[ −Σ_0/υ_2 ] exp[ −(R − δ_1)²/(2λ_1) ] exp[ −(m_0 − δ_2)²/(2λ_2) ].   (56)

The Bayesian point estimate, under a quadratic loss function, is the posterior mean,

Θ̂ = ∫ Θ π(Θ | y_{1:n}) dΘ.   (57)

In this case the posterior distribution does not have a closed form; hence, for inference, MCMC facilitates sampling from the posterior distribution. Simulation from the posterior distribution of the parameters is done using Metropolis-Hastings sampling. A schematic overview of the proposed MCMC algorithm is given by Cappe et al. [23].

The general scheme of the sampler is as follows.

(i) Initialization: choose Θ^(0) = (m_0^(0), Σ_0^(0), R^(0), σ²^(0)).
(ii) Step t, for t = 1, ...:
Generate σ̃² from LN( log σ²^(t-1), ζ² ),
Generate Σ̃_0 from LN( log Σ_0^(t-1), ζ² ),
Generate R̃ and m̃_0 from normal proposal distributions centered at R^(t-1) and m_0^(t-1),
Compute the acceptance ratio

r = π(Θ̃ | y_{1:n}) σ̃² Σ̃_0 / ( π(Θ^(t-1) | y_{1:n}) σ²^(t-1) Σ_0^(t-1) ),   (58)

Generate u ~ U[0, 1]; if u ≤ r then Θ^(t) = Θ̃, else Θ^(t) = Θ^(t-1).

The factors σ̃² Σ̃_0 and σ²^(t-1) Σ_0^(t-1) in (58) are the Hastings corrections for the log-normal proposals. An illustrative implementation of this sampler is sketched below.
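The sketch below shows one way to implement this sampler in R. The prior hyper-parameters, the proposal scale ζ, and the starting values are illustrative assumptions (the paper does not list them); the likelihood is evaluated by running the Kalman filter of Proposition 1 inside the log-posterior.

```r
## Minimal sketch of the Metropolis-Hastings sampler of Section 5.
## Prior hyper-parameters (delta, lambda, tau, upsilon), zeta and the starting
## values are illustrative assumptions, not values from the paper.
log_post <- function(m0, Sigma0, R, sigma2, y, dt, V, L, eps,
                     delta = c(12, 1), lambda = c(1, 1),
                     tau = c(2, 2), upsilon = c(1, 1)) {
  if (Sigma0 <= 0 || sigma2 <= 0) return(-Inf)
  A <- 1 - (R / L) * dt; b <- (V / L) * dt; Q <- (eps / L)^2 * dt
  m <- m0; p <- Sigma0; ll <- 0
  for (k in seq_along(y)) {              # Kalman-filter likelihood, eqs. (52)-(53)
    mp <- b + A * m; pp <- A^2 * p + Q
    Fk <- pp + sigma2
    ll <- ll + dnorm(y[k], mp, sqrt(Fk), log = TRUE)
    m <- mp + pp / Fk * (y[k] - mp); p <- pp - pp^2 / Fk
  }
  ll + dnorm(R, delta[1], sqrt(lambda[1]), log = TRUE) +   # priors, eq. (54)
    dnorm(m0, delta[2], sqrt(lambda[2]), log = TRUE) +
    dgamma(sigma2, shape = tau[1], scale = upsilon[1], log = TRUE) +
    dgamma(Sigma0, shape = tau[2], scale = upsilon[2], log = TRUE)
}

n_iter <- 7000; zeta <- 0.1
th <- matrix(NA, n_iter + 1, 4,
             dimnames = list(NULL, c("m0", "Sigma0", "R", "sigma2")))
th[1, ] <- c(1, 1, 12, 1)                # Theta^(0), an assumption
lp <- log_post(th[1, 1], th[1, 2], th[1, 3], th[1, 4], y, dt, V, L, eps)
for (t in 1:n_iter) {
  prop <- c(rnorm(1, th[t, 1], zeta),                  # normal proposal: m0
            exp(rnorm(1, log(th[t, 2]), zeta)),        # log-normal proposal: Sigma0
            rnorm(1, th[t, 3], zeta),                  # normal proposal: R
            exp(rnorm(1, log(th[t, 4]), zeta)))        # log-normal proposal: sigma2
  lp_prop <- log_post(prop[1], prop[2], prop[3], prop[4], y, dt, V, L, eps)
  ## log acceptance ratio, eq. (58), with Hastings terms for log-normal proposals
  log_r <- lp_prop - lp + log(prop[2] / th[t, 2]) + log(prop[4] / th[t, 4])
  if (log(runif(1)) <= log_r) { th[t + 1, ] <- prop; lp <- lp_prop }
  else th[t + 1, ] <- th[t, ]
}
## Posterior means from the last 1000 iterations, as in Section 6:
colMeans(th[(n_iter - 999):n_iter, ])
```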

6. Numerical Simulation

Four data sets with different values of m_0, Σ_0, R, σ², V and L, used for simulation purposes, are presented in Table 1. The goal is to estimate the current as the state of the model using optimal filtering based on the discrete state space form of the model. Fifty values of the corrupted states, the states, the predictions, the filters and the smoothers are obtained from observations generated with data set 3; ten of them are shown in Table 2. In Figure 1, the predictions, the filters and the smoothers are shown as lines and their confidence intervals as dashed lines. The simulation results in Table 2 and Figure 1 show that the prediction of the current is more uncertain than the corresponding filtered value, which in turn is more uncertain than the corresponding smoothed value. Consequently, the confidence interval of the smoother is shorter than that of the filter, and the filter's is shorter than the prediction's. Estimation of the four unknown parameters (R, m_0, Σ_0 and σ²) is investigated by ML and Bayesian inference. The two methods have been tested, via the EM and Metropolis-Hastings algorithms, on the four simulated data sets. The estimation results of the two methods are shown in Table 3. The estimates are very close to the exact parameter values, and the smaller the step size, the more precise the estimates. As can be seen, thanks to the prior knowledge of the parameters, the mean square error (MSE) of the Bayesian estimator is smaller than that of the ML estimator obtained via the EM algorithm. In Figures 2 and 3, the Metropolis-Hastings algorithm appears to converge after 7000 iterations, and the posterior means of the parameters, used as the Bayesian estimates, are obtained from the last 1000 recorded iterations.

7. Conclusion

In this paper the SDE of the RL electrical circuit is considered as the dynamic model of a state space system, and the hidden current is estimated via optimal filtering. The results show that the smoothers, the filters and the predictions have, in that order, the shortest confidence intervals. Four parameters of the model are estimated via Bayesian and ML approaches. Because the Bayesian paradigm uses the prior distribution to

incorporate knowledge about the parameters, it outperforms the ML approach in terms of the MSE of the estimators.

References

[1] S. Sarkka, On unscented Kalman filtering for state estimation of continuous-time nonlinear systems, IEEE Trans. Autom. Control 52 (2007) 1631-1641.
[2] M.S. Grewal, L.R. Weill, A.P. Andrews, Global Positioning Systems, Inertial Navigation and Integration, New York: Wiley Interscience, 2001.
[3] C.J. Long, P.L. Purdon, S. Temereanca, N.U. Desai, M.S. Hamalainen, E.N. Brown, State-space solutions to the dynamic magnetoencephalography inverse problem using high performance computing, Ann. Appl. Statist. 5 (2011) 1207-1228.
[4] S.J. Julier, J.K. Uhlmann, Unscented filtering and nonlinear estimation, Proc. IEEE 92 (2004) 401-422.
[5] Y. Bar-Shalom, X.R. Li, T. Kirubarajan, Estimation with Applications to Tracking and Navigation, New York: Wiley Interscience, 2001.
[6] S. Maskell, N. Gordon, M. Rollason, D. Salmond, Efficient multitarget tracking using particle filters, Image Vision Comput. 21 (2003) 931-939.
[7] B. Ristic, S. Arulampalam, N. Gordon, Beyond the Kalman Filter: Particle Filters for Tracking Applications, Boston: Artech House, 2004.
[8] M. Huang, L. Wenyuan, Y. Wei, Estimating parameters of synchronous generators using square-root unscented Kalman filter, Electr. Pow. Syst. Res. 80 (2010) 1137-1144.
[9] W.M. Al-Hasawi, K.M. El-Naggar, New digital filter for unbalance distorted current and voltage estimation in power systems, Electr. Pow. Syst. Res. 78 (2008) 1290-1301.
[10] R.C.K. Lee, Optimal Estimation, Identification and Control, Cambridge, Massachusetts: M.I.T. Press, 1964.

[11] C. Leondes, J.B. Peller, E.B. Stear, Nonlinear smoothing theory, IEEE Trans. Syst. Science Cyber. 6 (1970) 63-71.
[12] R.H. Shumway, D.S. Stoffer, Time Series Analysis and Its Applications: With R Examples, New York: Springer, 2006.
[13] S. Roweis, Z. Ghahramani, A unifying review of linear Gaussian models, Neural Computation 11 (1999) 305-345.
[14] J.F.G. De Freitas, M. Niranjan, A.H. Gee, Dynamic learning with the EM algorithm for neural networks, VLSI Signal Processing 26 (2000) 119-131.
[15] A. Hagenblad, L. Ljung, A. Wills, Maximum likelihood identification of Wiener models, Automatica 44 (2008) 2697-2705.
[16] J. Olsson, O. Cappe, R. Douc, E. Moulines, Sequential Monte Carlo smoothing with application to parameter estimation in nonlinear state space models, Bernoulli 14 (2008) 155-179.
[17] A. Wills, T. Schon, B. Ninness, Parameter estimation for discrete-time nonlinear systems using EM, Proc. 17th IFAC World Congress, Seoul, Korea, 2008.
[18] G. Kitagawa, Self-organizing time series model, J. Amer. Statist. Assoc. 93 (1998) 1203-1215.
[19] X. Yang, K. Xing, K. Shi, Q. Pan, Joint state and parameter estimation in particle filtering and stochastic optimization, Control Theory Appl. 62 (2008) 215-220.
[20] G. Poyiadjis, A. Doucet, S.S. Singh, Particle approximations of the score and observed information matrix in state space models with application to parameter estimation, Biometrika 98 (2011) 65-80.
[21] R. Farnoosh, P. Nabati, R. Rezaeyan, M. Ebrahimi, A stochastic perspective for RL electrical circuits using different noise terms, COMPEL 30 (2011) 812-822.
[22] A.P. Dempster, N.M. Laird, D.B. Rubin, Maximum likelihood from incomplete data via the EM algorithm, J. Roy. Statist. Soc. Ser. B 39 (1977) 1-38.
[23] O. Cappe, C. Robert, T. Ryden, Reversible jump MCMC converging to birth-and-death MCMC and more general continuous time samplers, J. Roy. Statist. Soc. Ser. B 65 (2003) 679-700.

8. Captions

Table 1: The four simulated data sets.
Table 2: Predictions, filters and smoothers.
Table 3: Numerical values of the estimators μ̂_0, Σ̂_0, R̂ and σ̂² for different Δt; the values in parentheses are mean square errors.
Figure 1: The simulated values of I_k, k = 1, ..., 50, in data set 3 are shown as points. The top, middle and bottom panels show the predictions m_k^{k-1}, the filters m_k^k and the smoothers m_k^n with their confidence intervals, as solid and dashed lines respectively.
Figure 2: Evolution of the Metropolis-Hastings chains over 7000 iterations for the unknown parameters in data set 1, with Δt = 0.01.
Figure 3: Evolution of the Metropolis-Hastings chains over 7000 iterations for the unknown parameters in data set 3, with Δt = 0.01.

Figure 1. [Three panels, Prediction, Filter and Smoother: current versus time over k = 0, ..., 50; estimates shown as solid lines with dashed confidence bands.]

Figure 2. [Trace plots of the Metropolis-Hastings chains for the unknown parameters (including m_0 and R) over 7000 iterations.]

Figure 3. [Trace plots of the Metropolis-Hastings chains for the unknown parameters over 7000 iterations.]

Table 1.

Data set   m_0   Σ_0    R    σ²    V    L
1          1     1      12   1     5    2
2          1     1      20   2     15   4
3          3     0.5    15   2     15   4
4          2     0.03   11   0.8   18   4

Table 2.

k    y_k      I_k      m_k^{k-1}   p_k^{k-1}   m_k^k   p_k^k   m_k^n   p_k^n
0    -        -1.27    -           -           1       1       -0.08   0.78
1    -3.30    -1.89    0.93        2.77        -2.18   0.73    -2.5    0.61
2    -4.09    -2.93    -1.87       2.56        -3.47   0.71    -3.12   0.59
3    -2.20    -1.67    -3.00       2.55        -1.38   0.71    -1.5    0.59
4    -0.74    -2.47    -1.16       2.55        -2.08   0.71    -2.03   0.59
5    -2.44    -1.48    -1.78       2.55        -1.14   0.71    -1.58   0.59
6    -0.88    -1.81    -0.95       2.55        -2.83   0.71    -2.74   0.59
7    -3.56    -0.48    -2.44       2.55        -1.82   0.71    -2.09   0.59
8    -1.58    -3.36    -1.55       2.55        -2.60   0.71    -2.61   0.59
9    -3.01    -2.47    -2.24       2.55        -2.62   0.71    -2.29   0.59
10   -2.77    -1.72    -2.25       2.55        -0.73   0.71    -0.93   0.59

Table 3.

Bayesian estimator (mean square error in parentheses)

Δt      Data set   μ̂_0               Σ̂_0               R̂                  σ̂²
0.1     1          0.8766 (0.0152)   0.9268 (0.0059)   11.8751 (0.0166)   0.8849 (0.0132)
0.1     2          0.9007 (0.0107)   0.9224 (0.0062)   19.8730 (0.0163)   1.9702 (0.0016)
0.1     3          3.0160 (0.0005)   0.3039 (0.0438)   14.9121 (0.0080)   1.9033 (0.0094)
0.1     4          2.0305 (0.0010)   0.0726 (0.0027)   10.8882 (0.0130)   0.5654 (0.0550)
0.01    1          0.8885 (0.0124)   0.9554 (0.0021)   11.9070 (0.0087)   0.9655 (0.0018)
0.01    2          0.9164 (0.0070)   0.8797 (0.0145)   19.8989 (0.0102)   1.9372 (0.0040)
0.01    3          2.9951 (0.0003)   0.3481 (0.0231)   14.8891 (0.0127)   1.8797 (0.0144)
0.01    4          1.9870 (0.0001)   0.0229 (0.0001)   10.9134 (0.0075)   0.5990 (0.0404)

ML estimator (mean square error in parentheses)

Δt      Data set   μ̂_0               Σ̂_0               R̂                  σ̂²
0.1     1          0.7511 (0.0619)   0.3246 (0.4560)   12.1605 (0.0257)   0.7247 (0.0757)
0.1     2          0.7995 (0.0401)   0.3887 (0.3736)   19.8746 (0.0157)   1.9199 (0.0064)
0.1     3          2.7887 (0.0446)   0.1033 (0.1573)   14.9122 (0.0076)   1.7669 (0.0543)
0.1     4          2.2357 (0.0555)   0.0143 (0.0002)   10.9129 (0.0075)   0.7772 (0.0005)
0.01    1          0.9245 (0.0056)   0.439 (0.4039)    11.8819 (0.0139)   0.9510 (0.0023)
0.01    2          0.8543 (0.0212)   0.0550 (0.8929)   9.9295 (0.0047)    2.1232 (0.0151)
0.01    3          3.0269 (0.0007)   0.0254 (0.2252)   14.9284 (0.0056)   2.2149 (0.0462)
0.01    4          2.0924 (0.0085)   0.0041 (0.0006)   10.9206 (0.0062)   0.8123 (0.0001)

Responses to Technical Check Results
Electric Power Systems Research
Ref. EPSR-D-12-00187
Title: State space modeling of RL electrical circuit and estimation of parameters via Maximum likelihood and Bayesian approaches

Dear Editor,

Thank you for your useful comments and suggestions on the language and structure of our manuscript. We have modified the manuscript accordingly; the corrections are listed point by point below:

1) Type the whole manuscript with double line spacing, including tables. Please note that the length of the manuscript including captions, figures, and tables should not exceed 22 double line-spaced pages.
We now use double spacing throughout the manuscript.

2) References should be prepared according to the Guide for Authors. For journal articles, please adhere to the following order: initials followed by surname, article title, abbreviated journal name, volume number, year in parentheses and page span.
We have checked all the references and formatted them strictly according to the Guide for Authors.

3) Please do not embed tables or figures within the text of your manuscript file. Collected figure captions should be provided on a separate page.
We have placed the tables and figures after the references in the manuscript file; furthermore, the collected figure captions are provided on a separate page.

4) Figures 2 and 3 provided in the manuscript are not cited in the text.
Figures 2 and 3 are now cited in sequence in the main text.

5) The tel/fax numbers (with country and area code) of the corresponding author should be provided on the first page of the manuscript.
The telephone number of the corresponding author has been provided on the first page of the manuscript.

The manuscript has been resubmitted to your journal. We look forward to your positive response.

Best Regards,
Sincerely Yours