Online monitoring of MPC disturbance models using closed-loop data

Online monitoring of MPC disturbance models using closed-loop data
Brian J. Odelson and James B. Rawlings
Department of Chemical Engineering, University of Wisconsin-Madison
Online Optimization Based Identification and Estimation, June 5, 2003

Outline

- Introduction and Motivation
- Estimating Covariances From Closed-loop Data
- Literature Review
- Approach
- Disturbance Models
- Case Study
- Summary

MPC Monitoring Overview

[Block diagram: the regulator (performance objectives Q, R; model A, B, C, D; constraints u_min,max, y_min,max) sends u_k to the plant; the plant output y_k feeds the estimator, which returns x̂_k and d̂_k; the target calculation converts targets y_t, u_t into x_s(k), u_s(k) for the regulator. System ID and model validation maintain the model (A, B, C, D); a covariance estimator maintains the disturbance model tuning (Q_w, R_v).]

Motivation

Why use data to compute noise covariances and the filter gain?

- Regulator penalties can come from business goals, but estimator tuning is a major source of uncertainty
- Industrial practitioners currently set covariances arbitrarily
- Operators would have fewer tuning parameters to set
- Covariances of disturbances are measurable quantities
- Better state estimates lead to better control

Effects of Incorrect Covariances

x_{k+1} = A x_k + B u_k + G w_k,   w_k ~ (0, Q_w)
y_k = C x_k + v_k,   v_k ~ (0, R_v)

- R_v increases (Q_w decreases): sensor quality deteriorates → excessive control action → operator increases the control penalty → slow tracking response
- R_v decreases (Q_w increases): more reliable sensor → slow tracking response → operator response: probably none → suboptimal tracking response

Would operators notice a decrease in noise, allowing tighter control? More accurate covariances lead to better state estimates (a simulation sketch follows below).
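The effects listed above can be reproduced with a short simulation. The following is a minimal sketch, not taken from the talk: the second-order system, the noise levels, and names such as predictor_gain are illustrative assumptions. It designs a steady-state predictor with an assumed R_v and measures the state-estimate error against a plant with the true covariances; both over- and under-trusting the sensor degrade the estimates.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative second-order system; all values here are placeholders.
# (The input term B u_k is omitted for simplicity.)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
G = np.eye(2)
Qw_true = 0.01 * np.eye(2)            # true process-noise covariance
Rv_true = np.array([[0.25]])          # true measurement-noise covariance
rng = np.random.default_rng(0)

def predictor_gain(Rv_assumed):
    """Steady-state predictor gain designed with an assumed R_v."""
    P = solve_discrete_are(A.T, C.T, G @ Qw_true @ G.T, Rv_assumed)
    return A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + Rv_assumed)

def mean_squared_estimate_error(L, T=20000):
    """Run x^_{k+1|k} = A x^_{k|k-1} + L (y_k - C x^_{k|k-1}) against the true plant."""
    x, xhat, sse = np.zeros(2), np.zeros(2), 0.0
    for _ in range(T):
        y = C @ x + rng.multivariate_normal(np.zeros(1), Rv_true)
        xhat = A @ xhat + L @ (y - C @ xhat)
        x = A @ x + G @ rng.multivariate_normal(np.zeros(2), Qw_true)
        sse += np.sum((x - xhat) ** 2)
    return sse / T

for scale in (1.0, 0.01, 100.0):      # correct, over-trusting, under-trusting the sensor
    L = predictor_gain(scale * Rv_true)
    print(f"assumed Rv = {scale:>6} x true: MSE = {mean_squared_estimate_error(L):.4f}")
```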

Literature - Adaptive Filtering

Method                 | Strengths              | Weaknesses                      | Citations
Bayesian               | Multi-model approaches | Long computation times          | [1]
Maximum Likelihood     | Multi-model approaches | Not guaranteed to converge      | [2]
Covariance Matching    | Easy to implement      | Gives only approximate solution | [3,4]
Correlation Techniques | Will recover Q_w, R_v  | Open-loop systems only          | [5,6]

[1] D. L. Alspach. A parallel filtering algorithm for linear systems with unknown time-varying noise statistics. IEEE Trans. Auto. Cont., 19(5):552-556, 1974.
[2] R. L. Kashyap. Maximum likelihood identification of stochastic linear systems. IEEE Trans. Auto. Cont., 15(1):25-34, 1970.
[3] K. A. Myers and B. D. Tapley. Adaptive sequential estimation with unknown noise statistics. IEEE Trans. Auto. Cont., 21:520-523, 1976.
[4] B. Friedland. Estimating noise variances by using multiple observers. IEEE Trans. Aero. Elec. Sys., 18(4):442-448, 1982.
[5] R. K. Mehra. On the identification of variances and adaptive Kalman filtering. IEEE Trans. Auto. Cont., 15(2):175-184, 1970.
[6] B. Carew and P. R. Bélanger. Identification of optimum filter steady-state gain for systems with unknown noise covariances. IEEE Trans. Auto. Cont., 18(6):582-587, 1973.

Autocovariance Function

x_{k+1} = A x_k + G w_k,   w_k ~ N(0, Q_w)
y_k = C x_k + v_k,   v_k ~ N(0, R_v)

Model (A, B, C, G) known. Correlations between outputs: w_k propagates through the states, enabling the distinction between Q_w and R_v.

y_k     = C x_k + v_k
y_{k+1} = C A x_k + C w_k + v_{k+1}
y_{k+2} = C A^2 x_k + C A w_k + C w_{k+1} + v_{k+2}
y_{k+3} = C A^3 x_k + C A^2 w_k + C A w_{k+1} + C w_{k+2} + v_{k+3}
y_{k+4} = C A^4 x_k + C A^3 w_k + C A^2 w_{k+1} + C A w_{k+2} + C w_{k+3} + v_{k+4}
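Because w_k enters every later output through C A^j while v_k enters only the current output, the lagged output autocovariances carry enough information to separate Q_w from R_v. The following sketch evaluates the lag-wise autocovariances implied by the expansion above, assuming a stable A and stationary operation; the function name is mine, not the authors'.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def output_autocovariances(A, C, G, Qw, Rv, N=5):
    """Theoretical E[y_{k+j} y_k^T] for x_{k+1} = A x_k + G w_k, y_k = C x_k + v_k."""
    Sigma = solve_discrete_lyapunov(A, G @ Qw @ G.T)      # stationary state covariance
    covs = [C @ Sigma @ C.T + Rv]                         # lag 0: R_v appears only here
    for j in range(1, N):
        covs.append(C @ np.linalg.matrix_power(A, j) @ Sigma @ C.T)   # lag j >= 1
    return covs
```

Matching these model-implied autocovariances to sample autocovariances computed from data is what leads to the least-squares problem on the next slide.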

Building a Least Squares Problem

Given some arbitrary (stable) estimator L_i, compute the correlations of the innovations process Y_k = y_k - C x̂_{k|k-1}:

E [ Y_k Y_k^T      Y_k Y_{k+1}^T      Y_k Y_{k+2}^T
    Y_{k+1} Y_k^T  Y_{k+1} Y_{k+1}^T  Y_{k+1} Y_{k+2}^T
    Y_{k+2} Y_k^T  Y_{k+2} Y_{k+1}^T  Y_{k+2} Y_{k+2}^T ]

Express a weighted least-squares problem in a vector of unknowns (Q_w)_s, (R_v)_s (the stacked covariance elements):

min_{Q_w, R_v} Φ = ( 𝒜 [ (Q_w)_s ; (R_v)_s ] - b )^T W ( 𝒜 [ (Q_w)_s ; (R_v)_s ] - b ),   with 𝒜, b = f(A, B, G, C, L_i) and W a weighting matrix

The underlying objective is to minimize the estimate error, which can be done in several ways:

min E[ (x_k - x̂_{k|k-1})(x_k - x̂_{k|k-1})^T ] = P_{k|k-1}
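One way to assemble this least-squares problem numerically is sketched below, assuming a steady-state predictor x̂_{k+1|k} = A x̂_{k|k-1} + B u_k + L (y_k - C x̂_{k|k-1}), zero-mean innovation data, and an identity weighting W; the helper names (build_als_system, als_estimate) and the lag count N are my own choices, not the authors' implementation.

```python
import numpy as np

def sample_autocovariances(Y, N):
    """Sample innovation autocovariances C_j ~ (1/(T-j)) sum_k Y_{k+j} Y_k^T (zero-mean Y)."""
    T = Y.shape[0]
    return [sum(np.outer(Y[k + j], Y[k]) for k in range(T - j)) / (T - j)
            for j in range(N)]

def build_als_system(A, C, G, L, Y, N=5):
    """Stack the linear map [vec(Qw); vec(Rv)] -> innovation autocovariances."""
    n, p = A.shape[0], C.shape[0]
    Abar = A - L @ C                               # filter error dynamics A - L C
    # vec(P) = (I - Abar (x) Abar)^{-1} [ (G (x) G) vec(Qw) + (L (x) L) vec(Rv) ]
    inv_lyap = np.linalg.inv(np.eye(n * n) - np.kron(Abar, Abar))
    P_from_Q, P_from_R = inv_lyap @ np.kron(G, G), inv_lyap @ np.kron(L, L)
    Cj = sample_autocovariances(Y, N)
    rows, b = [], []
    for j in range(N):
        # C_0 = C P C' + Rv ;  C_j = C Abar^j P C' - C Abar^{j-1} L Rv  (j >= 1)
        CAj = C @ np.linalg.matrix_power(Abar, j)
        extra = (np.eye(p * p) if j == 0 else
                 -np.kron(np.eye(p), C @ np.linalg.matrix_power(Abar, j - 1) @ L))
        rows.append(np.hstack([np.kron(C, CAj) @ P_from_Q,
                               np.kron(C, CAj) @ P_from_R + extra]))
        b.append(Cj[j].flatten(order="F"))         # column-stacked (vec) ordering
    return np.vstack(rows), np.concatenate(b)

def als_estimate(A, C, G, L, Y, N=5):
    """Least-squares estimates of Qw and Rv from innovation data Y (T x p)."""
    g, p = G.shape[1], C.shape[0]
    Amat, b = build_als_system(A, C, G, L, Y, N)
    theta, *_ = np.linalg.lstsq(Amat, b, rcond=None)
    Qw = theta[:g * g].reshape(g, g, order="F")
    Rv = theta[g * g:].reshape(p, p, order="F")
    return 0.5 * (Qw + Qw.T), 0.5 * (Rv + Rv.T)    # symmetrize
```

Symmetry is restored here by averaging each estimate with its transpose; diagonality or positive semidefiniteness can instead be imposed as constraints on the same least-squares problem.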

Estimator Design Procedure

[Decision flowchart, ordered by increasing computational complexity: are Q_w, R_v unique from the data? If yes, compute Q_w, R_v directly. If not, is G Q_w (together with R_v) unique? If yes, optimize for G, Q_w; if not, optimize directly for the filter gain L.]

Different Ways to Process the Data

- Prediction error, y_k - C x̂_{k|k-1}
- Reconstruction error, y_k - C x̂_{k|k}
- k-step-ahead predictors
- Multiple observers with different L
- Same least-squares framework (see the stacking sketch below)

[Plot: estimated elements (G Q_w)_ij with 95% confidence intervals, compared for two observer gains L_1 and L_2.]
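For "multiple observers with different L, same least-squares framework", the per-observer systems can simply be stacked before solving. A small sketch, assuming the build_als_system helper from the earlier sketch is in scope (again an illustrative name, not the authors' code):

```python
import numpy as np

def stacked_als_system(A, C, G, gains, innovation_sets, N=5):
    """Stack the least-squares systems for several filter gains L_i.

    gains           : list of predictor gains [L_1, L_2, ...]
    innovation_sets : matching list of innovation sequences (each T_i x p)
    """
    systems = [build_als_system(A, C, G, L, Y, N)   # defined in the sketch above
               for L, Y in zip(gains, innovation_sets)]
    Amat = np.vstack([S[0] for S in systems])
    b = np.concatenate([S[1] for S in systems])
    return Amat, b
```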

What's New and Different: What's Different?

- Better conditioning than available techniques
- Covariance symmetry implicitly enforced by the structure of the least-squares problem
- Additional constraints (e.g., diagonality, positive definiteness) are easily enforced
- Necessary and sufficient conditions for uniqueness of the covariances

What's New and Different: What's New?

- Optimization approach for computing G, Q_w
- Optimization approach for computing L
- Unified approach allowing different ways to process the data
- Treating integrated white noise disturbances
- Demonstrating the impact of adaptive filtering on regulator performance

Closed-loop Example

- 2 inputs, 2 outputs, 4 states
- Covariances unknown
- Active input constraints

G(z) = [ z/(2z - 1)       0.5z/(2z - 1)
         z/(2.5z - 1.5)   1.5z/(2.5z - 1.5) ]

[Contour plot: level set x^T P x = c in the (x_1, x_2) plane.]

Example

Movie: www.che.wisc.edu/jbr-group/projects/odelson-movie.mpg

Cooperation, Not Competition With Process Identification

[Diagram: closed-loop signals r_k, d_k, u_k, y_k from the process and state estimator feed three levels of model maintenance, ordered by frequency of use: covariance estimation (filter gains L_x, L_d), disturbance modeling (A_d, B_d, C_d), and full identification (A, B, C, D).]

Integrating Disturbance Models

Why use a disturbance model?

- Offset-free control
- Model mismatch
- Nonlinearities

x_{k+1} = A x_k + B u_k + B_d d_k + G w_k
d_{k+1} = d_k + ξ_k,   ξ_k ~ N(0, Q_ξ)
y_k = C x_k + C_d d_k + v_k

(a sketch of the augmented-model construction follows below)

1. Fixed disturbance model: update Q_w, R_v with Q_ξ fixed
2. Variable disturbance model: update Q_w, R_v, and Q_ξ

We do not expect to see an integrated white noise disturbance in the plant; its covariance grows without bound, E[d_k d_k^T] = k Q_ξ. Instead, the integrating term stands in for slow-drift disturbances and plant/model mismatch.
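A minimal sketch of the augmented model used to estimate x and d together, plus the usual offset-free rank check; the function names are placeholders, and the check assumes (A, C) is itself detectable.

```python
import numpy as np

def augment_with_integrating_disturbance(A, B, C, G, Bd, Cd):
    """Build the augmented model for z = [x; d] with d_{k+1} = d_k + xi_k."""
    n, m = B.shape
    nd = Bd.shape[1]
    Aaug = np.block([[A, Bd],
                     [np.zeros((nd, n)), np.eye(nd)]])
    Baug = np.vstack([B, np.zeros((nd, m))])
    Caug = np.hstack([C, Cd])
    Gaug = np.block([[G, np.zeros((n, nd))],
                     [np.zeros((nd, G.shape[1])), np.eye(nd)]])   # driven by w and xi
    return Aaug, Baug, Caug, Gaug

def offset_free_rank_ok(A, C, Bd, Cd):
    """Rank condition for detectability of the augmented pair (given (A, C) detectable)."""
    n, nd = A.shape[0], Bd.shape[1]
    M = np.block([[A - np.eye(n), Bd],
                  [C, Cd]])
    return np.linalg.matrix_rank(M) == n + nd
```

The covariance-estimation machinery above can then be run on the augmented model, with Q_ξ either held fixed (option 1) or estimated along with Q_w and R_v (option 2).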

Case Study: Ill-Conditioned Distillation Column (Zafiriou and Morari, 1988)

- Structurally ill-conditioned LV distillation column
- Sensitive to input uncertainty
- Model mismatch in the unfavorable direction
- System destabilizes with 16.8% uncertainty in the input

G_plant(s) = G(s) [ 1 + δ    0
                    0        1 - δ ]

Case Study (continued)

With model mismatch, it is possible to destabilize the system with a poor choice of estimator gain.

- 15% input uncertainty (δ = 0.15)
- Setting the estimator covariances to the true plant values, Q̂_w = Q_w, Q̂_ξ = Q_ξ, R̂_v = R_v, destabilizes the system!

A careful industrial approach might be covariance matching: given an arbitrary stable filter gain, process the data and compute the covariances from the residuals

R_v = cov[ y_k - C x̂_{k|k-1} - d̂_{k|k-1} ] - C P_{k|k-1} C^T
G Q_w G^T = cov[ x̂_{k+1|k} - A x̂_{k|k-1} - B u_k ]
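A sketch of the covariance-matching computation written out above, assuming logged arrays from a run with a fixed stable filter; for the disturbance-augmented model the matrices and predictions would be the augmented ones. Array and function names are illustrative.

```python
import numpy as np

def covariance_matching(Y, Yhat, Xpred, U, A, B, C, P_pred):
    """Estimate R_v and G Q_w G^T from filter residuals (covariance matching).

    Y[k]     : measurement y_k                                   (T x p)
    Yhat[k]  : predicted output, e.g. C x^_{k|k-1} + d^_{k|k-1}  (T x p)
    Xpred[k] : one-step state prediction x^_{k|k-1}              (T x n)
    U[k]     : input u_k                                         (T x m)
    P_pred   : steady-state prediction error covariance P_{k|k-1}
    """
    innovations = Y - Yhat
    Rv_hat = np.atleast_2d(np.cov(innovations.T, bias=True)) - C @ P_pred @ C.T
    # state-equation residual: x^_{k+1|k} - A x^_{k|k-1} - B u_k
    resid = Xpred[1:] - Xpred[:-1] @ A.T - U[:-1] @ B.T
    GQG_hat = np.cov(resid.T, bias=True)
    return Rv_hat, GQG_hat
```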

Regulator Payoff

[Figure: MPC timeline showing past inputs u_k and outputs y_k around the present time k and the predicted future used in the value of the control objective.]

Φ = (1/N) Σ_{j=k}^{k+N-1} ( ||y_j - r_j||²_Q + ||u_j - u_{j-1}||²_S )    (a computation sketch from data follows below)

What does a reduction in the average regulator cost mean?

1. Better tracking (translates to more pounds, quality, etc.)
2. Less control action (translates to a reduction in consumables, utilities, etc.)
3. Reduction in steady-state variance
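The average regulator cost Φ can be monitored directly from logged closed-loop data; a small sketch, with the array layout and names being my own assumptions:

```python
import numpy as np

def average_regulator_cost(Y, Rsp, U, Q, S):
    """Phi = (1/N) * sum_j ( ||y_j - r_j||_Q^2 + ||u_j - u_{j-1}||_S^2 ).

    Y, Rsp : outputs and setpoints over the window (N x p)
    U      : inputs over the window (N x m); the first input move is dropped
    """
    N = Y.shape[0]
    e = Y - Rsp
    du = np.diff(U, axis=0)
    cost = np.einsum('ij,jk,ik->', e, Q, e) + np.einsum('ij,jk,ik->', du, S, du)
    return cost / N
```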

Ill-Conditioned Distillation Column: Pure Output Disturbance Model (A Common Industrial Choice)

[Plot: closed-loop response of y_1 versus time (0-1000 min) against the setpoint, comparing autocovariance least squares (Φ_1 = 6.771), covariance matching (Φ_2 = 17.635), and the optimal filter (Φ_3 = 6.544).]

Ill-Conditioned Distillation Column: Pure Input Disturbance Model

[Plot: closed-loop response of y_1 versus time (0-1000 min) against the setpoint, comparing autocovariance least squares (Φ_1 = 7.499), covariance matching (Φ_2 = 8.233), and the optimal filter (Φ_3 = 6.457).]

Summary

- We can recover the true noise covariances under limiting conditions
- We can recover the noise shaping matrix or the optimal filter gain under less limiting conditions
- The method can be used in the closed loop, with mild restrictions
- The method can be used to estimate the covariances of the integrated disturbance model
- These methods can remove a major source of uncertainty from MPC implementation. No additional capital expenditures are required!

Future Work

- Further quantify the regulator benefits of these methods
- Validate the methods with laboratory data
- Demonstrate the approach with data from Eastman Chemical
- Extend to nonlinear process models