Online Monitoring of MPC Disturbance Models Using Closed-Loop Data

Brian J. Odelson and James B. Rawlings
Department of Chemical Engineering, University of Wisconsin-Madison

Online Optimization Based Identification and Estimation
June 5, 2003
Outline

- Introduction and Motivation
- Estimating Covariances from Closed-Loop Data (literature review, approach)
- Disturbance Models
- Case Study
- Summary

ACC 2003
MPC Monitoring Overview

[Block diagram: the regulator drives the plant toward targets (x_s(k), u_s(k)) from the target calculation; the estimator returns state and disturbance estimates (x̂_k, d̂_k) from plant measurements y_k. Offline, system ID / model validation supplies the model (A, B, C, D), and the covariance estimator supplies the disturbance model tuning (Q_w, R_v). Design inputs: performance objectives (Q, R), model (A, B, C, D), and constraints (u_min/max, y_min/max).]
Motivation

Why use data to compute noise covariances and the filter gain?

- Regulator penalties can come from business goals, but estimator tuning remains a major source of uncertainty
- Industrial practitioners currently set covariances arbitrarily
- Operators would have fewer tuning parameters to set
- Covariances of disturbances are measurable quantities
- Better state estimates lead to better control
Effects of Incorrect Covariances

x_{k+1} = A x_k + B u_k + G w_k,   w_k ~ (0, Q_w)
y_k     = C x_k + v_k,             v_k ~ (0, R_v)

Case              Interpretation          Effect                     Operator Response         Net Effect
R_v up (Q_w down) Sensor quality          Excessive control action   Increase control penalty  Slow tracking response
                  deteriorates
R_v down (Q_w up) More reliable sensor    Slow tracking response     Probably none             Suboptimal tracking response

Would operators notice a decrease in noise, allowing tighter control? More accurate covariances lead to better state estimates.
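The effect summarized in the table can be illustrated numerically. The sketch below is a hypothetical scalar system (all numbers are illustrative, not from the slides): a filter designed with a mistuned R_v has a larger actual steady-state estimate-error variance than the correctly tuned filter.

```python
# Hypothetical scalar system x_{k+1} = a x_k + w_k, y_k = c x_k + v_k.
# Illustrative numbers only, not taken from the slides.

def riccati_P(a, c, Qw, Rv, iters=1000):
    """Steady-state predictor error variance via Riccati iteration."""
    P = 1.0
    for _ in range(iters):
        P = a * P * a + Qw - (a * P * c) ** 2 / (c * P * c + Rv)
    return P

def actual_error_variance(a, c, Qw_design, Rv_design, Qw_true, Rv_true):
    """Error variance when the gain is designed for (Qw_design, Rv_design)
    but the data are generated with (Qw_true, Rv_true)."""
    Pd = riccati_P(a, c, Qw_design, Rv_design)
    L = a * Pd * c / (c * Pd * c + Rv_design)  # fixed (possibly mistuned) gain
    P = 1.0
    for _ in range(1000):  # Lyapunov iteration for the fixed-gain filter
        P = (a - L * c) ** 2 * P + Qw_true + L * L * Rv_true
    return P

a, c, Qw, Rv = 0.9, 1.0, 0.1, 0.5
P_opt = actual_error_variance(a, c, Qw, Rv, Qw, Rv)
P_bad = actual_error_variance(a, c, Qw, 10 * Rv, Qw, Rv)  # Rv overestimated
print(P_opt, P_bad)  # the mistuned filter has a larger error variance
```

By Kalman-filter optimality, any gain other than the optimal one gives a strictly larger error variance, which is what the comparison shows.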
Literature: Adaptive Filtering

Method                  Strengths                 Weaknesses                        Citations
Bayesian                Multi-model approaches    Long computation times            [1]
Maximum likelihood      Multi-model approaches    Not guaranteed to converge        [2]
Covariance matching     Easy to implement         Gives only approximate solution   [3,4]
Correlation techniques  Will recover Q_w, R_v     Open-loop systems only            [5,6]

[1] D. L. Alspach. A parallel filtering algorithm for linear systems with unknown time-varying noise statistics. IEEE Trans. Auto. Cont., 19(5):552-556, 1974.
[2] R. L. Kashyap. Maximum likelihood identification of stochastic linear systems. IEEE Trans. Auto. Cont., 15(1):25-34, 1970.
[3] K. A. Myers and B. D. Tapley. Adaptive sequential estimation with unknown noise statistics. IEEE Trans. Auto. Cont., 21:520-523, 1976.
[4] B. Friedland. Estimating noise variances by using multiple observers. IEEE Trans. Aero. Elec. Sys., 18(4):442-448, 1982.
[5] R. K. Mehra. On the identification of variances and adaptive Kalman filtering. IEEE Trans. Auto. Cont., 15(2):175-184, 1970.
[6] B. Carew and P. R. Bélanger. Identification of optimum filter steady-state gain for systems with unknown noise covariances. IEEE Trans. Auto. Cont., 18(6):582-587, 1973.
Autocovariance Function

x_{k+1} = A x_k + G w_k,   w_k ~ N(0, Q_w)
y_k     = C x_k + v_k,     v_k ~ N(0, R_v)

Model (A, B, C, G) known. Correlations between outputs: w_k propagates through the states, enabling the distinction between Q_w and R_v:

y_k     = C x_k + v_k
y_{k+1} = C A x_k + C w_k + v_{k+1}
y_{k+2} = C A^2 x_k + C A w_k + C w_{k+1} + v_{k+2}
y_{k+3} = C A^3 x_k + C A^2 w_k + C A w_{k+1} + C w_{k+2} + v_{k+3}
y_{k+4} = C A^4 x_k + C A^3 w_k + C A^2 w_{k+1} + C A w_{k+2} + C w_{k+3} + v_{k+4}
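The role of the output correlations can be checked empirically. A minimal sketch on a scalar instance of the model above (illustrative numbers): the lag-0 autocovariance contains R_v, while lags 1 and beyond do not, which is what lets the two covariances be separated.

```python
import random

# Scalar instance of the model: x_{k+1} = a x_k + w_k, y_k = c x_k + v_k.
# Illustrative numbers, not from the slides.
def simulate(a, c, Qw, Rv, N, seed=0):
    rng = random.Random(seed)
    x, ys = 0.0, []
    for _ in range(N):
        ys.append(c * x + rng.gauss(0.0, Rv ** 0.5))
        x = a * x + rng.gauss(0.0, Qw ** 0.5)
    return ys

def autocov(ys, lag):
    """Sample autocovariance of ys at the given lag."""
    N = len(ys)
    m = sum(ys) / N
    return sum((ys[k] - m) * (ys[k + lag] - m) for k in range(N - lag)) / (N - lag)

ys = simulate(a=0.9, c=1.0, Qw=0.1, Rv=0.2, N=200000)
# Theory for this model: C(0) = c^2 Px + Rv and C(j) = c^2 a^j Px for j >= 1,
# with stationary state variance Px = Qw / (1 - a^2).
Px = 0.1 / (1 - 0.81)
print(autocov(ys, 0), Px + 0.2)   # lag 0 includes Rv
print(autocov(ys, 1), 0.9 * Px)   # lag 1 does not
```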
Building a Least-Squares Problem

Given some arbitrary (stable) estimator gain L_i, compute the correlation of the innovations process Y_k = y_k - C x̂_{k|k-1}:

E [ Y_k Y_k'        Y_k Y_{k+1}'        Y_k Y_{k+2}'
    Y_{k+1} Y_k'    Y_{k+1} Y_{k+1}'    Y_{k+1} Y_{k+2}'
    Y_{k+2} Y_k'    Y_{k+2} Y_{k+1}'    Y_{k+2} Y_{k+2}' ]

Express a weighted least-squares problem in a vector of unknowns (Q_w, R_v):

min_{Q_w, R_v}  Φ = ( A_s [ (Q_w)_s ; (R_v)_s ] - b )' W ( A_s [ (Q_w)_s ; (R_v)_s ] - b ),
where A_s and b are functions of (A, B, G, C, L_i) and (·)_s denotes the stacked matrix.

The objective is to minimize the estimate error, which can be done in several ways:

min E[ (x_k - x̂_{k|k-1})(x_k - x̂_{k|k-1})' ] = P_{k|k-1}
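A minimal scalar instance of this least-squares construction (illustrative only, not the paper's general matrix form): with a fixed suboptimal gain L, the innovation autocovariances at lags 0 and 1 are linear in (Q_w, R_v), so the two covariances can be solved for from data.

```python
import random

# Scalar system x_{k+1} = a x_k + w_k, y_k = c x_k + v_k, filtered with an
# arbitrary stable gain L (predictor form). All numbers are illustrative.
a, c, L = 0.9, 1.0, 0.5
Qw_true, Rv_true = 0.1, 0.2
abar = a - L * c  # stable: |abar| < 1

rng = random.Random(1)
x, xhat, Y = 0.0, 0.0, []
for _ in range(500000):
    y = c * x + rng.gauss(0.0, Rv_true ** 0.5)
    Y.append(y - c * xhat)                   # innovation
    xhat = a * xhat + L * (y - c * xhat)     # fixed-gain predictor update
    x = a * x + rng.gauss(0.0, Qw_true ** 0.5)

N = len(Y)
C0 = sum(yk * yk for yk in Y) / N
C1 = sum(Y[k] * Y[k + 1] for k in range(N - 1)) / (N - 1)

# Error variance Pe = (Qw + L^2 Rv) / (1 - abar^2), so
# C(0) = c^2 Pe + Rv and C(1) = c^2 abar Pe - c L Rv are linear in (Qw, Rv).
al = 1.0 / (1.0 - abar ** 2)
A = [[c * c * al,        c * c * al * L * L + 1.0],
     [c * c * abar * al, c * c * abar * al * L * L - c * L]]
b = [C0, C1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Qw_est = (b[0] * A[1][1] - A[0][1] * b[1]) / det
Rv_est = (A[0][0] * b[1] - b[0] * A[1][0]) / det
print(Qw_est, Rv_est)  # close to the true (0.1, 0.2)
```

In the matrix case more lags than unknowns are available, which is where the weighted least-squares formulation (rather than an exact solve) comes in.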
Estimator Design Procedure

[Decision flowchart: if (Q_w, R_v) are uniquely determined by the data, compute Q_w and R_v. Otherwise, if G Q_w G' is unique, optimize for G Q_w G'. Otherwise, optimize directly for the filter gain L. Computational complexity increases down this sequence.]
Different Ways to Process the Data

- Prediction error: y_k - C x̂_{k|k-1}
- Reconstruction error: y_k - C x̂_{k|k}
- k-step-ahead predictors
- Multiple observers with different gains L
- Same least-squares framework throughout

[Figure: estimated elements of G Q_w for two observer gains L_1 and L_2, with 95% confidence intervals.]
What's New and Different

What's different?

- Better conditioning than available techniques
- Covariance symmetry implicitly enforced by the structure of the least-squares problem
- Additional constraints (e.g., diagonality, positive definiteness) are easily enforced
- Necessary and sufficient conditions for uniqueness of the covariances
What's New and Different

What's new?

- An optimization approach for computing G Q_w
- An optimization approach for computing L
- A unified approach allowing different ways to process the data
- Treatment of integrated white noise disturbances
- Demonstration of the impact of adaptive filtering on regulator performance
Closed-Loop Example

- 2 inputs, 2 outputs, 4 states
- Covariances unknown
- Active input constraints

G(z) = [ z/(2z - 1)       0.5z/(2z - 1)
         z/(2.5z - 1.5)   1.5z/(2.5z - 1.5) ]

[Figure: level sets x' P x = c of the covariance estimate in the (x_1, x_2) plane.]
Example

www.che.wisc.edu/jbr-group/projects/odelson-movie.mpg
Cooperation, Not Competition with Process Identification

[Diagram: covariance estimation (yielding L_x, L_d), disturbance modeling (A_d, B_d, C_d), and full identification (A, B, C, D) arranged along a frequency-of-use axis, all driven by the closed-loop signals r_k, d_k, u_k, y_k from the process and state estimator.]
Integrating Disturbance Models

Why use a disturbance model?
- Offset-free control
- Model mismatch
- Nonlinearities

x_{k+1} = A x_k + B u_k + B_d d_k + G w_k
d_{k+1} = d_k + ξ_k,   ξ_k ~ N(0, Q_ξ)
y_k     = C x_k + C_d d_k + v_k

Two options:
1. Fixed disturbance model: update Q_w, R_v with Q_ξ fixed
2. Variable disturbance model: update Q_w, R_v, and Q_ξ

We do not expect to see a truly integrating white noise disturbance in the plant: the variance of the random walk grows without bound, E[d_k d_k'] = k Q_ξ. Rather, Q_ξ represents slow-drift disturbances and plant/model mismatch.
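The augmented model used for offset-free estimation can be assembled directly from the equations above. A minimal sketch using plain lists-of-lists (the function and variable names are illustrative):

```python
# Augment the model with an integrating (random-walk) disturbance:
#   [x; d]_{k+1} = [[A, Bd], [0, I]] [x; d]_k + [[B], [0]] u_k
#   y_k          = [C, Cd] [x; d]_k
# Matrices are lists of lists; names are illustrative, not from the paper.
def augment(A, B, Bd, C, Cd):
    n, nd = len(A), len(Bd[0])
    Aaug = [A[i] + Bd[i] for i in range(n)] + \
           [[0.0] * n + [1.0 if i == j else 0.0 for j in range(nd)]
            for i in range(nd)]
    Baug = [row[:] for row in B] + [[0.0] * len(B[0]) for _ in range(nd)]
    Caug = [C[i] + Cd[i] for i in range(len(C))]
    return Aaug, Baug, Caug

# Tiny example: one state, one input disturbance, one output.
A = [[0.9]]; B = [[1.0]]; Bd = [[1.0]]; C = [[1.0]]; Cd = [[0.0]]
Aaug, Baug, Caug = augment(A, B, Bd, C, Cd)
print(Aaug)  # [[0.9, 1.0], [0.0, 1.0]]
```

The estimator is then designed for the augmented state [x; d], with Q_ξ entering the augmented process-noise covariance.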
Case Study: Ill-Conditioned Distillation Column

- Zafiriou and Morari (1988)
- Structurally ill-conditioned LV distillation column
- Sensitive to input uncertainty
- Model mismatch in the unfavorable direction: the system destabilizes with 16.8% uncertainty in the input

G_plant(s) = G(s) [ 1 + δ    0
                    0        1 - δ ]
Case Study

With model mismatch, it is possible to destabilize the system with a poor choice of estimator gain. At 15% input uncertainty (δ = 0.15), even choosing the true covariances (Q̂_w = Q_w, Q̂_ξ = Q_ξ, R̂_v = R_v) destabilizes the system!

A careful industrial approach might be covariance matching: given an arbitrary stable filter gain, process the data and compute the covariances from the residuals:

R̂_v = cov( y_k - C x̂_{k|k-1} - C_d d̂_{k|k-1} ) - C P_{k|k-1} C'
G Q̂_w G' = cov( x̂_{k+1|k} - A x̂_{k|k-1} - B u_k )
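The covariance-matching heuristic above can be sketched for a scalar system (illustrative numbers; here the fixed-gain filter's actual error variance P is computed analytically, which is why the estimate comes out close to the truth in this idealized setting):

```python
import random

# Scalar illustration of covariance matching: run a fixed-gain filter,
# then back the measurement-noise covariance out of the residuals.
# Numbers are illustrative, not from the case study.
a, c, L = 0.9, 1.0, 0.5
Qw, Rv = 0.1, 0.2
abar = a - L * c
Pe = (Qw + L * L * Rv) / (1 - abar * abar)  # steady-state error variance

rng = random.Random(7)
x, xhat, ss = 0.0, 0.0, 0.0
N = 300000
for _ in range(N):
    y = c * x + rng.gauss(0.0, Rv ** 0.5)
    innov = y - c * xhat
    ss += innov * innov
    xhat = a * xhat + L * innov
    x = a * x + rng.gauss(0.0, Qw ** 0.5)

Rv_match = ss / N - c * Pe * c  # cov(innovation) - C P C'
print(Rv_match)  # approaches the true Rv = 0.2
```

In practice P comes from the filter's own (possibly inconsistent) Riccati equation, which is why covariance matching is only approximate, as noted in the literature table.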
Regulator Payoff

Φ = (1/N) Σ_{j=k}^{k+N-1} ( ||y_j - r_j||²_Q + ||u_j - u_{j-1}||²_S )

[Figure: timeline of past, present, and future values of the control objective around time k.]

What does a reduction in the average regulator cost mean?
1. Better tracking (translates to more pounds, quality, etc.)
2. Less control action (translates to a reduction in consumables, utilities, etc.)
3. Reduction in steady-state variance
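A direct way to monitor this payoff is to evaluate Φ on logged closed-loop data. A scalar-signal sketch (the weights and data below are made up for illustration):

```python
# Average regulator cost from closed-loop data:
#   Phi = (1/N) * sum_j ( Q*(y_j - r_j)^2 + S*(u_j - u_{j-1})^2 )
# Scalar signals for illustration; a matrix-weighted version is analogous.
def regulator_cost(ys, rs, us, Q=1.0, S=0.1, u_prev=0.0):
    N = len(ys)
    phi = 0.0
    for y, r, u in zip(ys, rs, us):
        phi += Q * (y - r) ** 2 + S * (u - u_prev) ** 2
        u_prev = u
    return phi / N

# Hypothetical logged setpoint step response.
ys = [0.0, 0.5, 0.9, 1.0]
rs = [1.0, 1.0, 1.0, 1.0]
us = [0.8, 1.0, 1.0, 1.0]
print(regulator_cost(ys, rs, us))  # 0.332
```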
Ill-Conditioned Distillation Column: Pure Output Disturbance Model (a common industrial choice)

[Figure: closed-loop response of y_1 versus time (0-1000 min) for three estimator tunings against the setpoint. Average regulator costs: Φ_1 = 6.771 (autocovariance least squares), Φ_2 = 17.635 (covariance matching), Φ_3 = 6.544 (optimal).]
Ill-Conditioned Distillation Column: Pure Input Disturbance Model

[Figure: closed-loop response of y_1 versus time (0-1000 min) for three estimator tunings against the setpoint. Average regulator costs: Φ_1 = 7.499 (autocovariance least squares), Φ_2 = 8.233 (covariance matching), Φ_3 = 6.457 (optimal).]
Summary

- We can recover the true noise covariances under limiting conditions
- We can recover the noise-shaping matrix or the optimal filter gain under less limiting conditions
- The method can be used in closed loop, with mild restrictions
- The method can be used to estimate the covariances of the integrated disturbance model
- These methods can remove a major source of uncertainty from MPC implementation, with no additional capital expenditure required
Future Work

- Further quantify the regulator benefits of these methods
- Validate the methods with laboratory data
- Demonstrate the approach with data from Eastman Chemical
- Extend to nonlinear process models