Analysis of Emerson's MMI Estimation Algorithm
Technical Report PC

Analysis of Emerson's MMI Estimation Algorithm

João P. Hespanha    Dale E. Seborg
University of California, Santa Barbara

August 8, 2003
Abstract

In this report we analyze Emerson's Multiple Model Interpolation (MMI) algorithm for parameter estimation and compare it with standard least-squares estimators. In certain cases these two algorithms provide the same estimate. We start by performing the analysis for the single-gain case and then extend the results to more general estimation problems.

1  Single-gain estimation

We consider the discrete-time, single-input, single-output process model

    y(k) = K_p u(k - \tau) + n(k),    k \in \{1, 2, 3, \dots\}    (1)

where y denotes the output, u the control input, n measurement noise, \tau a fixed known delay, and K_p the process steady-state gain. This model includes the effect of measurement noise, which was ignored in previous reports. We assume that we have available a set of data

    \{ y(k), u(k - \tau) : k = 1, 2, \dots, L \}    (2)

collected over a time-window of length L. By stacking the inputs and outputs as vectors of length L,

    Y := [\,y(1)\ y(2)\ \cdots\ y(L)\,]',    U := [\,u(1 - \tau)\ u(2 - \tau)\ \cdots\ u(L - \tau)\,]',    (3)

we can write the process model as

    Y = K_p U + N,    (4)

where

    N := [\,n(1)\ n(2)\ \cdots\ n(L)\,]'    (5)

is a vector of measurement noise. The value of N is not available to estimate K_p.

The data-set is processed multiple times by examining its fit with respect to a bank of M models that varies from iteration to iteration. The bank of models used in the ith iteration is given by

    Y_m^j(i) = K_m^j(i)\, U,    j \in \{1, 2, \dots, M\},    (6)

where Y_m^j(i) denotes the estimate of Y based on the jth model during the ith iteration. The corresponding prediction error is given by

    E_m^j(i) := Y_m^j(i) - Y,    j \in \{1, 2, \dots, M\}.    (7)

The sum-of-squares error SSE^j(i) for the jth model during the ith iteration is given by

    SSE^j(i) := \|E_m^j(i)\|^2 = (Y_m^j(i) - Y)'(Y_m^j(i) - Y),    (8)

and we define the corresponding performance index J^j(i) by

    J^j(i) := (SSE^j(i))^{-1}.    (9)

Based on these definitions, we construct a multiple model interpolation (MMI) estimator by

    \hat{K}_p(i) := \frac{\sum_{j=1}^{M} K_m^j(i)\, J^j(i)}{\sum_{j=1}^{M} J^j(i)}.    (10)
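As a concrete illustration of the estimator (6)-(10), the following sketch computes the MMI estimate for a fixed bank of candidate gains. This is a minimal numerical sketch, not code from the report; the function and variable names (mmi_estimate, candidate_gains, ...) and the simulated data are illustrative assumptions.

```python
import numpy as np

def mmi_estimate(Y, U, candidate_gains):
    """MMI estimate (10): candidate gains weighted by J^j = 1/SSE^j (eqs. 8-9)."""
    sse = np.array([np.sum((K * U - Y) ** 2) for K in candidate_gains])
    J = 1.0 / sse                                   # performance indexes (9)
    return float(np.dot(candidate_gains, J) / np.sum(J))   # weighted average (10)

# Illustrative noisy data generated from a true gain K_p = 2.5 (model (4)).
rng = np.random.default_rng(0)
U = rng.standard_normal(100)
N = 0.1 * rng.standard_normal(100)
Y = 2.5 * U + N
K_hat = mmi_estimate(Y, U, np.array([1.0, 2.0, 3.0, 4.0]))
```

Because the weights J^j are positive, the estimate is always a convex combination of the candidate gains; candidates with small prediction error dominate the average.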
1.1  Moving multiple-models interpolation

The moving multiple-models interpolation (MMMI) estimation algorithm is defined as follows:

1. Set i = 0.
2. Compute the MMI estimate \hat{K}_p(i) based on the family of models defined by the candidate gains K_m^j(i).
3. Compute a new family of models by computing a new set of candidate gains K_m^j(i+1) centered at \hat{K}_p(i):

       K_m^j(i+1) = \hat{K}_p(i) + \left(j - \frac{M+1}{2}\right)\Delta K.    (11)

4. Increment i and go to 2 until there is no significant change in \hat{K}_p(i).

We are assuming here that the number of models M is odd and that there is a constant spacing \Delta K among the model gains. According to the MMMI algorithm we obtain

    \hat{K}_p(i+1) = \frac{\sum_{j=1}^{M} K_m^j(i+1)\, J^j(i+1)}{\sum_{j=1}^{M} J^j(i+1)}    (12)
                   = \frac{\sum_{j=1}^{M} \bigl(\hat{K}_p(i) + (j - \frac{M+1}{2})\Delta K\bigr)\, J^j(i+1)}{\sum_{j=1}^{M} J^j(i+1)}    (13)
                   = \hat{K}_p(i) + \Delta K\, \frac{\sum_{j=1}^{M} (j - \frac{M+1}{2})\, J^j(i+1)}{\sum_{j=1}^{M} J^j(i+1)},    (14)

where the performance indexes J^j(i+1) are given by (9). Using (4), (6), and (8) we can write the corresponding sum-of-squares error for the jth model during the (i+1)th iteration as

    SSE^j(i+1) = (K_m^j(i+1) U - K_p U - N)'(K_m^j(i+1) U - K_p U - N)    (15)
               = (K_m^j(i+1) - K_p)^2 \|U\|^2 - 2\, (K_m^j(i+1) - K_p)\, U'N + \|N\|^2    (16)
               = \bigl(\tilde{K}_p(i) + (j - \tfrac{M+1}{2})\Delta K\bigr)^2 \|U\|^2 - 2\, \bigl(\tilde{K}_p(i) + (j - \tfrac{M+1}{2})\Delta K\bigr)\, U'N + \|N\|^2,    (17)

where \tilde{K}_p(i) := \hat{K}_p(i) - K_p denotes the estimation error at the ith iteration.

1.2  Equilibrium

Assume now that the MMMI algorithm converges to some value \hat{K}_p^\infty. Since \hat{K}_p^\infty must be a fixed point of (14), we conclude that at equilibrium

    \Delta K\, \frac{\sum_{j=1}^{M} (j - \frac{M+1}{2})\, J_\infty^j}{\sum_{j=1}^{M} J_\infty^j} = 0 \quad\Leftrightarrow\quad \sum_{j=1}^{M} \bigl(j - \tfrac{M+1}{2}\bigr)\, J_\infty^j = 0,    (18)

where J_\infty^j denotes the asymptotic value of J^j(i) as i \to \infty. For the 3 moving models case (M = 3) we simply have

    \sum_{j=1}^{3} (j - 2)\, J_\infty^j = 0 \quad\Leftrightarrow\quad J_\infty^3 - J_\infty^1 = 0,    (19)
which, because of (9), is further equivalent to

    SSE_\infty^3 = SSE_\infty^1,    (20)

where SSE_\infty^j denotes the asymptotic value of SSE^j(i) as i \to \infty. Using (17) we conclude that (20) is equivalent to

    (\tilde{K}_p^\infty + \Delta K)^2 \|U\|^2 - 2\, (\tilde{K}_p^\infty + \Delta K)\, U'N + \|N\|^2 = (\tilde{K}_p^\infty - \Delta K)^2 \|U\|^2 - 2\, (\tilde{K}_p^\infty - \Delta K)\, U'N + \|N\|^2,    (21)

where \tilde{K}_p^\infty denotes the asymptotic value of the estimation error \tilde{K}_p(i) as i \to \infty. Equation (21) can further be simplified to

    \tilde{K}_p^\infty \|U\|^2 - U'N = 0,    (22)

from which we conclude that

    \tilde{K}_p^\infty = \frac{U'N}{\|U\|^2},    (23)

as long as the input signal U is not identically zero. The equilibrium parameter estimate is therefore given by

    \hat{K}_p^\infty = K_p + \frac{U'N}{\|U\|^2}.    (24)

It turns out that this is precisely the least-squares estimate for the original data in (2). To verify this, note that the sum-of-squares error for an arbitrary gain K is given by

    SSE(K) := (KU - Y)'(KU - Y) = (KU - K_p U - N)'(KU - K_p U - N).    (25)

This is minimized by finding the value of K for which

    \frac{\partial\, SSE(K)}{\partial K} = 0 \quad\Leftrightarrow\quad U'(KU - K_p U - N) = 0,    (26)

giving the following least-squares estimate

    K = K_p + \frac{U'N}{\|U\|^2}.    (27)

Comparing (24) with (27), we conclude the following:

Lemma 1 (Equilibrium). With 3 moving models (M = 3) and an input signal that is not identically zero, the unique equilibrium point of the MMMI algorithm is the least-squares estimate of the gain parameter.

1.3  Convergence

For 3 moving models, (14) can be written as

    \hat{K}_p(i+1) = \hat{K}_p(i) + \Delta K\, \frac{J^3(i+1) - J^1(i+1)}{\sum_{j=1}^{3} J^j(i+1)}    (28)
                   = \hat{K}_p(i) + \Delta K\, \frac{SSE^3(i+1)^{-1} - SSE^1(i+1)^{-1}}{\sum_{j=1}^{3} J^j(i+1)}    (29)
                   = \hat{K}_p(i) + \Delta K\, \gamma(i)\, \bigl(SSE^1(i+1) - SSE^3(i+1)\bigr),    (30)
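Lemma 1 can be checked numerically: iterating the MMMI update with M = 3 moving models drives the estimate toward the least-squares gain U'Y/\|U\|^2. This is an illustrative sketch, not code from the report; the simulated data, the spacing, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.standard_normal(200)
Y = 3.0 * U + 0.2 * rng.standard_normal(200)       # model (4) with K_p = 3

dK = 0.5                                           # spacing Delta K
K_hat = 0.0                                        # initial center of the bank
for _ in range(300):                               # MMMI iterations
    gains = K_hat + dK * np.array([-1.0, 0.0, 1.0])    # candidate gains (11), M = 3
    J = np.array([1.0 / np.sum((K * U - Y) ** 2) for K in gains])
    K_hat = float(np.dot(gains, J) / np.sum(J))    # MMI estimate (10)

K_ls = float(np.dot(U, Y) / np.dot(U, U))          # least-squares estimate (27)
```

The two quantities agree to high accuracy, consistent with the equilibrium analysis; the convergence analysis in Section 1.3 shows why the iterates are drawn there exponentially fast.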
where

    \gamma(i) := \frac{1}{SSE^1(i+1)\, SSE^3(i+1)\, \sum_{j=1}^{3} J^j(i+1)}.    (31)

Subtracting K_p from both sides of (30), we obtain the following recursion for the estimation error:

    \tilde{K}_p(i+1) = \tilde{K}_p(i) + \Delta K\, \gamma(i)\, \bigl(SSE^1(i+1) - SSE^3(i+1)\bigr).    (32)

On the other hand, from (17) we conclude that

    SSE^1(i+1) - SSE^3(i+1) = -4\, \Delta K\, \bigl(\|U\|^2\, \tilde{K}_p(i) - U'N\bigr).    (33)

Defining v(i) := \|U\|^2\, \tilde{K}_p(i) - U'N, we conclude from (32) and (33) that

    v(i+1) = \|U\|^2\, \tilde{K}_p(i+1) - U'N    (34)
           = \|U\|^2\, \tilde{K}_p(i) - 4\, \|U\|^2\, \Delta K^2\, \gamma(i)\, v(i) - U'N    (35)
           = \bigl(1 - 4\, \|U\|^2\, \Delta K^2\, \gamma(i)\bigr)\, v(i).    (36)

This shows that v(i) is monotone, bounded between v(0) and 0, and therefore convergent. Assuming that the input U is not identically zero, this means that \tilde{K}_p is also bounded, as are all remaining signals, including the SSE^j and the J^j. We thus conclude that \gamma(i) does not converge to zero and therefore 1 - 4\|U\|^2 \Delta K^2 \gamma(i) is bounded away from 1. From this it follows that v(i) actually converges to zero and therefore

    \hat{K}_p(i) \to K_p + \frac{U'N}{\|U\|^2}.    (37)

The convergence rate is exponential. The following can be stated:

Lemma 2 (Convergence). With 3 moving models (M = 3) and an input signal that is not identically zero, the MMMI algorithm converges exponentially fast to the least-squares estimate of the gain parameter.

2  General case

We consider a general SISO ARX model for the process, which can be written as

    y(k) = \varphi(k)'\, c(\theta_p) + n(k),    k \in \{1, 2, 3, \dots\},    (38)

where \varphi(k) denotes the regression vector defined by

    \varphi(k) := [\,y(k-1)\ y(k-2)\ \cdots\ y(k-n_y)\ \ u(k-1)\ \cdots\ u(k-n_u)\,]' \in \mathbb{R}^{n_y + n_u},    (39)

and c(\theta_p) a column vector of coefficients that depends on some unknown parameter vector \theta_p that belongs to a parameter set P \subset \mathbb{R}^n. We assume that we have available a set of data

    \{ y(k), \varphi(k) : k = 1, 2, \dots, L \}    (40)

collected over a time-window of length L. By stacking the outputs and regression vectors as

    Y := [\,y(1)\ y(2)\ \cdots\ y(L)\,]' \in \mathbb{R}^L,    \Phi := [\,\varphi(1)\ \varphi(2)\ \cdots\ \varphi(L)\,]' \in \mathbb{R}^{L \times (n_y + n_u)},    (41)
we can write the process model as

    Y = \Phi\, c(\theta_p) + N,    (42)

where

    N := [\,n(1)\ n(2)\ \cdots\ n(L)\,]'    (43)

is a vector of measurement noise. The value of N is not available to estimate \theta_p.

The data-set is processed multiple times by examining its fit with respect to a finite bank of models that varies from iteration to iteration. We denote by

    M(i) := \{\theta_m^1(i), \theta_m^2(i), \dots, \theta_m^M(i)\} \subset P    (44)

the values of the parameters for the bank of models used in the ith iteration. The estimate Y_m^j(i) of Y based on the jth model during the ith iteration is defined by

    Y_m^j(i) = \Phi\, c(\theta_m^j(i)),    j \in \{1, 2, \dots, M\},    (45)

and the corresponding prediction error is given by

    E_m^j(i) := Y_m^j(i) - Y,    j \in \{1, 2, \dots, M\}.    (46)

The multiple model interpolation (MMI) estimator is now given by

    \hat{\theta}_p(i) := \frac{\sum_{j=1}^{M} J^j(i)\, \theta_m^j(i)}{\sum_{j=1}^{M} J^j(i)},    (47)

where J^j(i) denotes the performance index for the jth model during the ith iteration, defined by

    J^j(i) := (SSE^j(i))^{-1},    (48)

and SSE^j(i) denotes the sum-of-squares error given by

    SSE^j(i) := \|E_m^j(i)\|^2 = (Y_m^j(i) - Y)'(Y_m^j(i) - Y).    (49)

Example 1. For a one-step delay system with unknown gain \theta_p \in [1, 10], we have

    \varphi(k) := u(k-1),    c(\theta_p) := \theta_p,    P := [1, 10],    (50)

leading to a model similar to the one considered in Section 1 with \tau = 1:

    y(k) = \theta_p\, u(k-1) + n(k),    \theta_p \in [1, 10].    (51)

Example 2. For a system with unknown gain \theta_1 \in [1, 10] and unknown delay \theta_2 \in \{1, 2, 3\}, we would have

    \varphi(k) := [\,u(k-1)\ u(k-2)\ u(k-3)\,]',
    c(\theta_1, \theta_2) := \begin{cases} [\,\theta_1\ 0\ 0\,]' & \theta_2 = 1 \\ [\,0\ \theta_1\ 0\,]' & \theta_2 = 2 \\ [\,0\ 0\ \theta_1\,]' & \theta_2 = 3 \end{cases},
    P := [1, 10] \times \{1, 2, 3\},    (52)

leading to

    y(k) = n(k) + \begin{cases} \theta_1\, u(k-1) & \theta_2 = 1 \\ \theta_1\, u(k-2) & \theta_2 = 2 \\ \theta_1\, u(k-3) & \theta_2 = 3 \end{cases},    \theta_1 \in [1, 10].    (53)
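The general estimator (44)-(49) can be sketched for the gain-plus-delay model of Example 2. This is an illustrative sketch, not code from the report: the helper c, the candidate bank, and the simulated data are all assumptions. Note that (47) averages the candidate parameter vectors, so the interpolated "delay" coordinate need not be an integer.

```python
import numpy as np

def c(theta1, theta2):
    """Coefficient vector c(theta_1, theta_2) of eq. (52): gain theta1 at delay theta2."""
    coeff = np.zeros(3)
    coeff[theta2 - 1] = theta1
    return coeff

rng = np.random.default_rng(2)
u = rng.standard_normal(103)
# Regressor matrix Phi of eq. (41): rows phi(k) = [u(k-1) u(k-2) u(k-3)].
Phi = np.column_stack([u[2:102], u[1:101], u[0:100]])
Y = Phi @ c(4.0, 2) + 0.1 * rng.standard_normal(100)   # true gain 4, true delay 2

# Bank of models (44): all combinations of a few candidate gains and delays.
bank = [(t1, t2) for t1 in (2.0, 4.0, 6.0) for t2 in (1, 2, 3)]
J = np.array([1.0 / np.sum((Phi @ c(t1, t2) - Y) ** 2) for (t1, t2) in bank])
theta_hat = np.array(bank).T @ J / np.sum(J)           # MMI estimate (47)
```

Because the correct model's SSE is close to the pure noise energy while every mismatched model pays a large penalty, the weight of the correct candidate dominates and the interpolated estimate lands near (4, 2).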
2.1  Moving multiple-models interpolation

The moving multiple-models interpolation (MMMI) estimation algorithm is defined as follows:

1. Set i = 0 (iteration index).
2. Set l = 1 (parameter index).
3. Compute the MMI estimate \hat{\theta}_p(i) based on the family of models defined by the candidate parameters \theta_m^j(i).
4. Compute a new family of models by computing a new set of candidate parameters \theta_m^j(i+1) centered at \hat{\theta}_p(i) along the lth parameter direction:

       \theta_m^j(i+1) = \hat{\theta}_p(i) + \left(j - \frac{M+1}{2}\right)\Delta_l\, e_l,    (54)

   where e_l denotes the lth element of the canonical basis of \mathbb{R}^n.
5. Increment i and l (modulo n) and go to 3 until there is no significant change in \hat{\theta}_p(i).

We are assuming here that the number of models M is odd and that there is a constant spacing \Delta_l among the model values for the lth parameter. According to the MMMI algorithm we obtain

    \hat{\theta}_p(i+1) = \frac{\sum_{j=1}^{M} \theta_m^j(i+1)\, J^j(i+1)}{\sum_{j=1}^{M} J^j(i+1)}    (55)
                       = \frac{\sum_{j=1}^{M} \bigl(\hat{\theta}_p(i) + (j - \frac{M+1}{2})\Delta_l\, e_l\bigr)\, J^j(i+1)}{\sum_{j=1}^{M} J^j(i+1)}    (56)
                       = \hat{\theta}_p(i) + \Delta_l\, \frac{\sum_{j=1}^{M} (j - \frac{M+1}{2})\, J^j(i+1)}{\sum_{j=1}^{M} J^j(i+1)}\, e_l,    (57)

where the performance indexes J^j(i+1) are given by (48). Using (42), (45), and (49) we can write the corresponding sum-of-squares error for the jth model during the (i+1)th iteration as

    SSE^j(i+1) = \bigl(\Phi\, c(\theta_m^j(i+1)) - \Phi\, c(\theta_p) - N\bigr)'\bigl(\Phi\, c(\theta_m^j(i+1)) - \Phi\, c(\theta_p) - N\bigr)    (58)
               = \tilde{c}_m^j(i+1)'\, \Phi'\Phi\, \tilde{c}_m^j(i+1) - 2\, \tilde{c}_m^j(i+1)'\, \Phi' N + \|N\|^2,    (59)

where

    \tilde{c}_m^j(i+1) := c(\theta_m^j(i+1)) - c(\theta_p).    (60)

When the parametrization c(\cdot) is affine (which implicitly assumes that P is the whole \mathbb{R}^n), we actually have

    \tilde{c}_m^j(i+1) = J\bigl(\theta_m^j(i+1) - \theta_p\bigr) = J\, \tilde{\theta}_p(i) + \left(j - \frac{M+1}{2}\right)\Delta_l\, J e_l,    (61)

where \tilde{\theta}_p(i) := \hat{\theta}_p(i) - \theta_p denotes the estimation error at the ith iteration and J the (constant) Jacobian matrix of the map c(\cdot), i.e., c(\theta) = J\theta + c_0.
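The steps above can be sketched for an affine parametrization c(\theta) = J\theta + c_0 with two parameters and M = 3 models per direction. Everything here is an illustrative assumption, not from the report: the matrix J_c plays the role of the Jacobian J, and the data, spacings, and iteration count are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
Phi = rng.standard_normal((50, 3))                  # regressor matrix (41)
J_c = np.array([[1.0, 0.0], [0.5, 1.0], [0.0, 1.0]])   # constant Jacobian of c(.)
c0 = np.zeros(3)                                    # affine offset c_0
theta_p = np.array([1.5, -0.7])                     # true parameter
Y = Phi @ (J_c @ theta_p + c0) + 0.05 * rng.standard_normal(50)   # model (42)

theta_hat = np.zeros(2)
spacing = (0.3, 0.3)                                # Delta_l for each parameter
l = 0                                               # parameter index (0-based here)
for _ in range(600):
    e_l = np.eye(2)[l]
    # Candidate parameters (54): M = 3 models along the direction e_l.
    cands = [theta_hat + s * spacing[l] * e_l for s in (-1.0, 0.0, 1.0)]
    J = np.array([1.0 / np.sum((Phi @ (J_c @ th + c0) - Y) ** 2) for th in cands])
    theta_hat = sum(Jj * th for Jj, th in zip(J, cands)) / np.sum(J)   # (55)-(57)
    l = (l + 1) % 2                                 # cycle the parameter index (step 5)

# Least-squares estimate for comparison: minimizes ||Phi (J_c theta + c0) - Y||.
theta_ls, *_ = np.linalg.lstsq(Phi @ J_c, Y - Phi @ c0, rcond=None)
```

Each pass updates the estimate only along one coordinate direction, so the iteration behaves like a coordinate-wise relaxation toward the least-squares solution, consistent with the equilibrium analysis of Section 2.2.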
2.2  Equilibrium

Assume now that the MMMI algorithm converges to some value \hat{\theta}_p^\infty. Since \hat{\theta}_p^\infty must be a fixed point of (57) for every l, we conclude that at equilibrium

    \Delta_l\, \frac{\sum_{j=1}^{M} (j - \frac{M+1}{2})\, J_{\infty,l}^j}{\sum_{j=1}^{M} J_{\infty,l}^j}\, e_l = 0 \quad\Leftrightarrow\quad \sum_{j=1}^{M} \bigl(j - \tfrac{M+1}{2}\bigr)\, J_{\infty,l}^j = 0, \quad \forall l,    (62)

where J_{\infty,l}^j denotes the asymptotic value of J^j(i) for the parameter index l as i \to \infty. For the 3 moving models case (M = 3) we simply have

    \sum_{j=1}^{3} (j - 2)\, J_{\infty,l}^j = 0 \quad\Leftrightarrow\quad J_{\infty,l}^3 - J_{\infty,l}^1 = 0, \quad \forall l,    (63)

which, because of (48), is further equivalent to

    SSE_{\infty,l}^3 = SSE_{\infty,l}^1, \quad \forall l,    (64)

where SSE_{\infty,l}^j denotes the asymptotic value of SSE^j(i) for the parameter index l as i \to \infty. Using (59) and (61) we conclude that, for the affine c(\cdot) case, (64) is equivalent to

    (\tilde{\theta}_p^\infty + \Delta_l e_l)'\, J'\Phi'\Phi J\, (\tilde{\theta}_p^\infty + \Delta_l e_l) - 2\, (\tilde{\theta}_p^\infty + \Delta_l e_l)'\, J'\Phi' N + \|N\|^2
    = (\tilde{\theta}_p^\infty - \Delta_l e_l)'\, J'\Phi'\Phi J\, (\tilde{\theta}_p^\infty - \Delta_l e_l) - 2\, (\tilde{\theta}_p^\infty - \Delta_l e_l)'\, J'\Phi' N + \|N\|^2, \quad \forall l,    (65)

where \tilde{\theta}_p^\infty denotes the asymptotic value of the parameter estimation error \tilde{\theta}_p(i) as i \to \infty. Equation (65) can further be simplified to

    e_l'\bigl(J'\Phi'\Phi J\, \tilde{\theta}_p^\infty - J'\Phi' N\bigr) = 0, \quad \forall l \quad\Leftrightarrow\quad J'\Phi'\Phi J\, \tilde{\theta}_p^\infty - J'\Phi' N = 0,    (66)

from which we conclude that

    \tilde{\theta}_p^\infty = (J'\Phi'\Phi J)^{-1} J'\Phi' N,    (67)

as long as J'\Phi'\Phi J is nonsingular. The equilibrium parameter estimate is therefore given by

    \hat{\theta}_p^\infty = \theta_p + (J'\Phi'\Phi J)^{-1} J'\Phi' N.    (68)

It turns out that this is precisely the least-squares estimate. To verify this, note that the sum-of-squares error for an arbitrary value \theta of the parameter is given by

    SSE(\theta) = \bigl(\Phi J(\theta - \theta_p) - N\bigr)'\bigl(\Phi J(\theta - \theta_p) - N\bigr).    (69)

This is minimized by finding the value of \theta for which

    \frac{\partial\, SSE(\theta)}{\partial \theta} = 0 \quad\Leftrightarrow\quad J'\Phi'\bigl(\Phi J(\theta - \theta_p) - N\bigr) = 0,    (70)

giving the following least-squares estimate

    \theta = \theta_p + (J'\Phi'\Phi J)^{-1} J'\Phi' N.    (71)

Comparing (68) with (71), we conclude the following:

Lemma 3 (Equilibrium). With 3 moving models (M = 3), c(\cdot) affine, and J'\Phi'\Phi J nonsingular, the unique equilibrium point of the MMMI algorithm is the least-squares estimate of the parameter \theta_p.
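The simplification of (65) to (66) uses the symmetry of J'\Phi'\Phi J: subtracting the right-hand side of (65) from the left, the quadratic terms in \tilde{\theta}_p^\infty and the \|N\|^2 terms cancel. Written out in the report's notation:

```latex
\begin{align*}
0 &= (\tilde\theta_p^\infty + \Delta_l e_l)' J'\Phi'\Phi J\, (\tilde\theta_p^\infty + \Delta_l e_l)
   - (\tilde\theta_p^\infty - \Delta_l e_l)' J'\Phi'\Phi J\, (\tilde\theta_p^\infty - \Delta_l e_l) \\
  &\qquad - 2\, (\tilde\theta_p^\infty + \Delta_l e_l)' J'\Phi' N
          + 2\, (\tilde\theta_p^\infty - \Delta_l e_l)' J'\Phi' N \\
  &= 4\Delta_l\, e_l' J'\Phi'\Phi J\, \tilde\theta_p^\infty - 4\Delta_l\, e_l' J'\Phi' N
   = 4\Delta_l\, e_l' \bigl( J'\Phi'\Phi J\, \tilde\theta_p^\infty - J'\Phi' N \bigr),
\end{align*}
```

and dividing by 4\Delta_l \neq 0 yields the first equality in (66); since this holds for every canonical basis vector e_l, the full vector equation in (66) follows.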
More informationMachine Learning: Logistic Regression. Lecture 04
Machine Learning: Logistic Regression Razvan C. Bunescu School of Electrical Engineering and Computer Science bunescu@ohio.edu Supervised Learning Task = learn an (unkon function t : X T that maps input
More informationCanonical lossless state-space systems: staircase forms and the Schur algorithm
Canonical lossless state-space systems: staircase forms and the Schur algorithm Ralf L.M. Peeters Bernard Hanzon Martine Olivi Dept. Mathematics School of Mathematical Sciences Projet APICS Universiteit
More informationThe formal relationship between analytic and bootstrap approaches to parametric inference
The formal relationship between analytic and bootstrap approaches to parametric inference T.J. DiCiccio Cornell University, Ithaca, NY 14853, U.S.A. T.A. Kuffner Washington University in St. Louis, St.
More informationLocalization of Radioactive Sources Zhifei Zhang
Localization of Radioactive Sources Zhifei Zhang 4/13/2016 1 Outline Background and motivation Our goal and scenario Preliminary knowledge Related work Our approach and results 4/13/2016 2 Background and
More informationConcentration of Measures by Bounded Size Bias Couplings
Concentration of Measures by Bounded Size Bias Couplings Subhankar Ghosh, Larry Goldstein University of Southern California [arxiv:0906.3886] January 10 th, 2013 Concentration of Measure Distributional
More informationTopic 3: Neural Networks
CS 4850/6850: Introduction to Machine Learning Fall 2018 Topic 3: Neural Networks Instructor: Daniel L. Pimentel-Alarcón c Copyright 2018 3.1 Introduction Neural networks are arguably the main reason why
More informationSymbolic Dynamics of Digital Signal Processing Systems
Symbolic Dynamics of Digital Signal Processing Systems Dr. Bingo Wing-Kuen Ling School of Engineering, University of Lincoln. Brayford Pool, Lincoln, Lincolnshire, LN6 7TS, United Kingdom. Email: wling@lincoln.ac.uk
More informationEuler s Method (BC Only)
Euler s Method (BC Only) Euler s Method is used to generate numerical approximations for solutions to differential equations that are not separable by methods tested on the AP Exam. It is necessary to
More informationFactor Analysis and Kalman Filtering (11/2/04)
CS281A/Stat241A: Statistical Learning Theory Factor Analysis and Kalman Filtering (11/2/04) Lecturer: Michael I. Jordan Scribes: Byung-Gon Chun and Sunghoon Kim 1 Factor Analysis Factor analysis is used
More informationChapter III. Stability of Linear Systems
1 Chapter III Stability of Linear Systems 1. Stability and state transition matrix 2. Time-varying (non-autonomous) systems 3. Time-invariant systems 1 STABILITY AND STATE TRANSITION MATRIX 2 In this chapter,
More informationLecture 9: Time-Domain Analysis of Discrete-Time Systems Dr.-Ing. Sudchai Boonto
Lecture 9: Time-Domain Analysis of Discrete-Time Systems Dr-Ing Sudchai Boonto Department of Control System and Instrumentation Engineering King Mongkut s Unniversity of Technology Thonburi Thailand Outline
More informationAn Akaike Criterion based on Kullback Symmetric Divergence in the Presence of Incomplete-Data
An Akaike Criterion based on Kullback Symmetric Divergence Bezza Hafidi a and Abdallah Mkhadri a a University Cadi-Ayyad, Faculty of sciences Semlalia, Department of Mathematics, PB.2390 Marrakech, Moroco
More informationHST.582J / 6.555J / J Biomedical Signal and Image Processing Spring 2007
MIT OpenCourseare http://ocw.mit.edu HST.58J / 6.555J / 16.56J Biomedical Signal and Image Processing Spring 7 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
More informationENEE 621 SPRING 2016 DETECTION AND ESTIMATION THEORY THE PARAMETER ESTIMATION PROBLEM
c 2007-2016 by Armand M. Makowski 1 ENEE 621 SPRING 2016 DETECTION AND ESTIMATION THEORY THE PARAMETER ESTIMATION PROBLEM 1 The basic setting Throughout, p, q and k are positive integers. The setup With
More informationStat 512 Homework key 2
Stat 51 Homework key October 4, 015 REGULAR PROBLEMS 1 Suppose continuous random variable X belongs to the family of all distributions having a linear probability density function (pdf) over the interval
More informationEstimating prediction error in mixed models
Estimating prediction error in mixed models benjamin saefken, thomas kneib georg-august university goettingen sonja greven ludwig-maximilians-university munich 1 / 12 GLMM - Generalized linear mixed models
More informationAnalysis of Middle Censored Data with Exponential Lifetime Distributions
Analysis of Middle Censored Data with Exponential Lifetime Distributions Srikanth K. Iyer S. Rao Jammalamadaka Debasis Kundu Abstract Recently Jammalamadaka and Mangalam (2003) introduced a general censoring
More informationHow Much Wood Could a Woodchuck Chuck? By: Lindsey Harrison
How Much Wood Could a Woodchuck Chuck? By: Lindsey Harrison Objective: We will first explore fitting a given data set with various functions in search of the line of best fit. To determine which fit is
More informationEECE Adaptive Control
EECE 574 - Adaptive Control Recursive Identification in Closed-Loop and Adaptive Control Guy Dumont Department of Electrical and Computer Engineering University of British Columbia January 2010 Guy Dumont
More informationThe Fibonacci sequence modulo π, chaos and some rational recursive equations
J. Math. Anal. Appl. 310 (2005) 506 517 www.elsevier.com/locate/jmaa The Fibonacci sequence modulo π chaos and some rational recursive equations Mohamed Ben H. Rhouma Department of Mathematics and Statistics
More informationCovariance function estimation in Gaussian process regression
Covariance function estimation in Gaussian process regression François Bachoc Department of Statistics and Operations Research, University of Vienna WU Research Seminar - May 2015 François Bachoc Gaussian
More informationCS 577 Introduction to Algorithms: Strassen s Algorithm and the Master Theorem
CS 577 Introduction to Algorithms: Jin-Yi Cai University of Wisconsin Madison In the last class, we described InsertionSort and showed that its worst-case running time is Θ(n 2 ). Check Figure 2.2 for
More informationLecture 4 Logistic Regression
Lecture 4 Logistic Regression Dr.Ammar Mohammed Normal Equation Hypothesis hθ(x)=θ0 x0+ θ x+ θ2 x2 +... + θd xd Normal Equation is a method to find the values of θ operations x0 x x2.. xd y x x2... xd
More information