Design and Implementation of Predictive Controller with KHM Clustering Algorithm


Journal of Computational Information Systems 9(14) (2013) 5619–5626
Available at http://www.jofcis.com

Design and Implementation of Predictive Controller with KHM Clustering Algorithm

Jing ZENG 1, Jun WANG 2

1 College of Information Engineering, Shenyang University of Chemical Technology, Shenyang 110142, China
2 College of Computer Science and Technology, Shenyang University of Chemical Technology, Shenyang 110142, China

Abstract

A data-driven KHM multi-model predictive control method is presented for a class of nonlinear systems whose structure is unknown and for which only a large amount of historical data is available. A database search strategy based on K-Harmonic Means (KHM) is given, in which the hard clustering is softened. The most similar cluster is selected, and the neighborhood and the model parameters are determined online as the operating point of the system changes. This shortens the search time and improves the precision of the data used to obtain the optimal local model. A multiple-model generalized predictive control strategy based on KHM is then developed. Simulation tests demonstrate the validity of the approach.

Keywords: Clustering Algorithm; Fitting; Local Polynomial Regression; Predictive Control

1 Introduction

Modeling and identification of nonlinear systems is complicated. When a large number of observations are available, determining the model structure and solving the associated optimization problem become even harder, and an accurate global model is difficult to obtain for an actual industrial process [1]. Many researchers therefore build models online from the current sampled data in a data-driven fashion [2-4], and it is generally believed that local polynomial fitting can achieve good control performance. Cluster analysis is one of the basic techniques of data mining, statistical analysis, machine learning and many other fields. Among clustering methods, the k-means (KM) algorithm has become one of the most widely used because of its rapid convergence [5].
However, the KM algorithm is very sensitive to initial values: different initializations can lead to different clustering results. The K-Harmonic Means (KHM) clustering algorithm makes up for this defect [6-9]; it is insensitive to the initial values and has good clustering performance. Generalized predictive control is an advanced control algorithm that can overcome system lag

Corresponding author. Email address: zengjing0066@sohu.com (Jing ZENG).
1553-9105 / Copyright 2013 Binary Information Press
DOI: 10.12733/jcis6421    July 15, 2013

and deal with open-loop unstable, non-minimum-phase systems [10, 11]. This paper presents a K-Harmonic Means (KHM) local polynomial generalized predictive control algorithm. KHM clustering is used to analyze the historical database, find the class nearest to the current operating point, obtain the neighborhood of the input vector within that class, and compute the corresponding weights and model parameters; finally, the predictive controller is designed by combining this local model with the generalized predictive control algorithm. A simulation case study is given, and the results show that the dynamic response of the closed-loop system is improved significantly.

2 Modeling Process

Assume the input-output dataset can be expressed as {(Y_i, X_i)}_{i=1}^N and that it characterizes all basic operating conditions of the nonlinear process. The input-output relationship of the process can be expressed as

    Y_i = f(X_i) + ε_i,  i = 1, ..., N    (1)

where f(·) is the nonlinear mapping between input and output data and ε_i is a noise variable. By Taylor's formula, near an input vector x the local behavior of f(·) can be expressed by a p-th order polynomial,

    f(x, ξ) = ξ_0 + ξ_1 (X_i - x) + ... + ξ_p (X_i - x)^p    (2)

Once similar data samples have been retrieved, local polynomial fitting is essentially a weighted optimization problem whose purpose is to minimize the mismatch between model and data,

    ξ̂ = arg min_ξ Σ_{i∈Ψ_k(x)} l(Y_i - Σ_{j=0}^{p} ξ_j (X_i - x)^j) w_i(x)    (3)

where w_i(x) denotes the weight and Ψ_k(x) denotes the neighborhood of the input vector at the given operating point; it contains k sampling points.
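As an illustration, with the quadratic loss l(ε) = ε^2 the weighted fitting problem (3) reduces to weighted least squares. A minimal sketch follows; the Gaussian kernel and bandwidth h are assumptions, since the paper leaves the weight function w_i(x) unspecified:

```python
import numpy as np

def local_poly_fit(X, Y, x, h=1.0, p=1):
    """Weighted local polynomial fit around a query point x, cf. Eq. (3).

    X, Y : 1-D arrays holding the neighborhood samples in Psi_k(x)
    h    : bandwidth of a Gaussian weighting kernel (an assumption; the
           paper does not specify w_i(x))
    p    : polynomial order
    Returns the coefficients xi_0..xi_p; xi_0 is the local prediction of f(x).
    """
    d = X - x                                   # centered inputs (X_i - x)
    w = np.exp(-0.5 * (d / h) ** 2)             # kernel weights w_i(x)
    V = np.vander(d, N=p + 1, increasing=True)  # columns (X_i - x)^j, j = 0..p
    sw = np.sqrt(w)
    # weighted least squares: min_xi sum_i w_i (Y_i - sum_j xi_j d_i^j)^2
    xi, *_ = np.linalg.lstsq(V * sw[:, None], Y * sw, rcond=None)
    return xi
```

For data generated by an exactly linear map the fit is exact: with Y = 2 + 3X and query point x = 0, the routine returns ξ ≈ (2, 3).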
3 K-harmonic Mean Database Searching

The modeling process first needs an input-output historical database that covers all operating conditions; for industrial field data, noise filtering, smoothing and elimination of redundant information should also be considered. Suppose the dataset containing n data points is {x_1, x_2, ..., x_n}. For cluster analysis, let k ∈ {2, ..., n-1} be the number of clusters, V_i the i-th cluster, and let the k × n partition matrix [u_ij] satisfy

    u_ij = 1 if x_j ∈ V_i,  u_ij = 0 if x_j ∉ V_i

with Σ_{i=1}^k u_ij = 1 and Σ_{j=1}^n u_ij > 0 (i = 1, ..., k; j = 1, ..., n). Each cluster center is v_i ∈ R, and V = {v_1, ..., v_k} is the set of cluster centers. The distance between data point x_j and cluster center v_i is the distance measure d_ij, which defines the dissimilarity matrix

    D = [d_11 ... d_1n; ... ; d_k1 ... d_kn]

D can be calculated from X and V using an appropriate norm; with the Euclidean norm,

    d_ij = sqrt((x_j - v_i)(x_j - v_i)^T)

KHM uses the harmonic mean of the distances between a data point and the cluster centers instead of the minimum distance,

    k / Σ_{i=1}^k (1/d_ij^2)   instead of   min{d_ij^2 | i = 1, ..., k}

The objective function is

    J_KHM(V, X) = Σ_{j=1}^n [ k / Σ_{i=1}^k (1/d_ij^2) ]    (4)

Setting ∂J/∂V = 0 gives the center update of formula (5); iterating from the initial values makes (4) decrease continuously until it stabilizes.

    v_i = [ Σ_{j=1}^n x_j / (d_ij^4 (Σ_{l=1}^k 1/d_lj^2)^2) ] / [ Σ_{j=1}^n 1 / (d_ij^4 (Σ_{l=1}^k 1/d_lj^2)^2) ]    (5)

Relative to KM, KHM replaces the minimum distance between data points and cluster centers by the harmonic mean distance, and introduces conditional probabilities from cluster centers to data points and dynamic weights at every iteration. The algorithm procedure is as follows.

Step 1: k samples are selected randomly from the historical input-output database {(Y_i, X_i)}_{i=1}^N as the initial cluster centers v_1, v_2, ..., v_k.

Step 2: According to formula (4), the data {(Y_i, X_i)}_{i=1}^N are divided into the k clusters by the minimum harmonic-mean principle between data and cluster centers.

Step 3: The new cluster centers v_1*, v_2*, ..., v_k* are computed by formula (5).
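A minimal numerical sketch of this iteration, evaluating the objective (4) and applying the center update (5) until the centers stop moving (the convergence test of Step 4); the initial centers are taken as input, as Step 1 draws them at random from the database. This is an illustration, not the authors' implementation:

```python
import numpy as np

def khm(X, V0, iters=100, tol=1e-8):
    """K-Harmonic Means: objective (4) with center update (5).

    X  : (n, d) data matrix
    V0 : (k, d) initial cluster centers (Step 1: random samples from X)
    Returns the cluster centers V and the final objective J_KHM.
    """
    V = np.array(V0, dtype=float)
    k = V.shape[0]
    J = np.inf
    for _ in range(iters):
        # d_ij = ||x_j - v_i||, shape (k, n); guard against division by zero
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)
        D = np.maximum(D, 1e-12)
        s = (1.0 / D**2).sum(axis=0)         # sum_l 1/d_lj^2 for each point j
        J = (k / s).sum()                    # objective (4)
        W = 1.0 / (D**4 * s[None, :]**2)     # per-point weights from update (5)
        V_new = (W @ X) / W.sum(axis=1, keepdims=True)
        if np.linalg.norm(V_new - V) < tol:  # centers stopped moving (Step 4)
            V = V_new
            break
        V = V_new
    return V, J
```

On two well-separated clusters the centers converge to the cluster means even from a poor initialization, which is the insensitivity to initial values exploited in this paper.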

Step 4: If v_l* = v_l (l = 1, 2, ..., k), go to Step 5; otherwise return to Step 2.

Step 5: Find the class nearest to the current operating point, determine the neighborhood of the input vector x within that class, compute the weights w_i(x), and compute the model parameters according to formula (3).

4 Controller Design

Consider the following NARX (nonlinear auto-regressive with exogenous inputs) system,

    y(k) = f(φ(k)) + e(k),  k = 1, ..., M    (6)

where f(·) is an unknown nonlinear map, e(·) is a random error variable (for example with mean zero and variance σ_k^2), and φ(t) is the regression vector,

    φ(t) = [y(t-1) ... y(t-n_a)  u(t-n_k) ... u(t-n_b-n_k)]^T

Here n_a and n_b are the numbers of past outputs and inputs, and n_k is the model time delay. Taking the quadratic loss l(ε) = ε^2 in formula (3) and a locally linear model structure,

    f(φ(k), ζ) = ζ_0 + ζ_1^T (φ(k) - φ(t))    (7)

If ζ̂_0 and ζ̂_1 denote the parameter estimates, the estimate of y(t) can be expressed as

    ŷ(t) = f(φ(t), ζ̂) = ζ̂_0    (8)

From formula (7), a linear input-output form is available in the local neighborhood of φ(t),

    A(q^{-1}) y(t) = B(q^{-1}) u(t-1) + α    (9)

    A(q^{-1}) = 1 + a_1 q^{-1} + ... + a_{n_a} q^{-n_a}
    B(q^{-1}) = b_0 + b_1 q^{-1} + ... + b_{n_b} q^{-n_b}

A(q^{-1}) and B(q^{-1}) are polynomials in the backward shift operator q^{-1}, obtained from ζ̂_1, and

    α = ζ̂_0 - ζ̂_1^T φ(t)    (10)

is a bias term. A multi-step prediction strategy is introduced here. To find the optimal output prediction j steps ahead, the Diophantine equation is introduced,

    1 = E_j(q^{-1}) Δ A(q^{-1}) + q^{-j} F_j(q^{-1})    (11)

    E_j(q^{-1}) = e_{j,0} + e_{j,1} q^{-1} + ... + e_{j,j-1} q^{-j+1}
    F_j(q^{-1}) = f_{j,0} + f_{j,1} q^{-1} + ... + f_{j,n_a} q^{-n_a}

where Δ = 1 - q^{-1} is the difference operator. From formulas (9) and (11),

    y(t+j) = Ḡ_j(q^{-1}) Δu(t+j-1) + F_j(q^{-1}) y(t) + E_j(q^{-1}) ξ(t+j)
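The polynomials E_j and F_j in (11) are obtained by the standard GPC recursion; a hedged sketch (textbook recursion, not code from the paper), with coefficient vectors stored in ascending powers of q^{-1}:

```python
import numpy as np

def diophantine(a, n2):
    """Solve 1 = E_j(q^-1) * Atil(q^-1) + q^-j * F_j(q^-1), j = 1..n2,
    with Atil = Delta * A, cf. Eq. (11).

    a : coefficients [1, a_1, ..., a_na] of A(q^-1).
    Returns lists E, F; E[j-1] and F[j-1] hold the coefficients of E_j, F_j.
    """
    atil = np.convolve(a, [1.0, -1.0])        # Atil = Delta*A, degree na+1
    e = np.array([1.0])                        # E_1 = 1
    f = -atil[1:].copy()                       # F_1 = q*(1 - Atil)
    E, F = [e.copy()], [f.copy()]
    for _ in range(2, n2 + 1):
        r = f[0]                               # e_{j,j-1} = f_{j-1,0}
        e = np.append(e, r)                    # E_j = E_{j-1} + r*q^-(j-1)
        # F_j: shift F_{j-1} left and subtract r times the tail of Atil
        f = np.append(f[1:], 0.0) - r * atil[1:]
        E.append(e.copy())
        F.append(f.copy())
    return E, F
```

A quick sanity check is that E_j(q^{-1}) Δ A(q^{-1}) + q^{-j} F_j(q^{-1}) multiplies out to exactly 1 for every j.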

where

    Ḡ_j(q^{-1}) = E_j(q^{-1}) B(q^{-1}) = B(q^{-1})(1 - q^{-j} F_j(q^{-1})) / (Δ A(q^{-1})) = ḡ_{j,0} + ḡ_{j,1} q^{-1} + ...

The optimal prediction at t+j can be expressed as

    ŷ(t+j) = Ḡ_j(q^{-1}) Δu(t+j-1) + F_j(q^{-1}) y(t)    (12)

A second Diophantine equation is introduced to decompose Ḡ_j(q^{-1}) Δu(t+j-1) into a known (past) input part and an unknown (future) input part,

    E_j(q^{-1}) B(q^{-1}) = G_j(q^{-1}) + q^{-j} H_j(q^{-1})    (13)

    G_j(q^{-1}) = g_0 + g_1 q^{-1} + ... + g_{j-1} q^{-j+1}
    H_j(q^{-1}) = h_{j,0} + h_{j,1} q^{-1} + ... + h_{j,n_b-1} q^{-n_b+1}

Then formula (12) can be written in vector form

    Ŷ = G ΔU + f    (14)

    Ŷ = [ŷ(t+1), ŷ(t+2), ŷ(t+3), ..., ŷ(t+N_2)]^T
    ΔU = [Δu(t), Δu(t+1), Δu(t+2), ..., Δu(t+N_u-1)]^T
    f = [f(t+1), f(t+2), ..., f(t+N_2)]^T = H Δu(t-1) + [F_1(q^{-1}), F_2(q^{-1}), ..., F_{N_2}(q^{-1})]^T y(t)

with the rows of H given by H_j(q^{-1}) = q^j (Ḡ_j(q^{-1}) - G_j(q^{-1})), i.e.

    H = [ q(Ḡ_1(q^{-1}) - g_0);
          q^2(Ḡ_2(q^{-1}) - g_1 q^{-1} - g_0);
          ...;
          q^{N_2}(Ḡ_{N_2}(q^{-1}) - g_{N_2-1} q^{-N_2+1} - ... - g_0) ]

Then the output of the controlled object can be predicted from the known input and output information and the future input values. From formula (14), the performance index can be expressed as

    J = E{ (G ΔU + f - ω)^T Q_y (G ΔU + f - ω) + ΔU^T Q_u ΔU }    (15)

where the reference trajectory is

    ω = [ω(t+1), ω(t+2), ..., ω(t+N_2)]^T
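The lower-triangular matrix G in (14) collects the coefficients g_0, g_1, ... of (13), which equal the unit-step response of q^{-1} B(q^{-1})/A(q^{-1}). A hedged sketch that builds G by simulating the step response of the local model (an equivalent route to carrying out the polynomial division in (13)):

```python
import numpy as np

def dynamic_matrix(a, b, n2, nu):
    """Dynamic matrix G of Eq. (14): G[i, j] = g_{i-j}, zero above the diagonal.

    a = [1, a_1, ..., a_na], b = [b_0, ..., b_nb]; n2 = N_2, nu = N_u.
    g_0, g_1, ... is the response of A(q^-1) y(t) = B(q^-1) u(t-1)
    to a unit step in u applied at t = 0.
    """
    y = np.zeros(n2 + 1)
    for t in range(1, n2 + 1):
        acc = 0.0
        for i in range(1, len(a)):             # -a_1 y(t-1) - ... - a_na y(t-na)
            if t - i >= 0:
                acc -= a[i] * y[t - i]
        for i in range(len(b)):                # + b_i * u(t-1-i), u = 1 for t >= 0
            if t - 1 - i >= 0:
                acc += b[i]
        y[t] = acc
    g = y[1:]                                  # g_0 .. g_{N2-1}
    G = np.zeros((n2, nu))
    for i in range(n2):
        for j in range(min(i + 1, nu)):
            G[i, j] = g[i - j]
    return G
```

For the first-order model A = 1 - 0.5 q^{-1}, B = 1 this gives g = (1, 1.5, 1.75, ...), the familiar geometric step response.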

Therefore, the future input increments that minimize the performance index in the unconstrained case are

    ΔU = (G^T Q_y G + Q_u I)^{-1} G^T Q_y (ω - f)    (16)

By the receding-horizon principle, only the first element is applied as the real-time optimal control increment,

    Δu(t) = d^T (ω - f)    (17)

where d^T is the first row of the matrix (G^T Q_y G + Q_u I)^{-1} G^T Q_y,

    d^T = (1 0 ... 0)(G^T Q_y G + Q_u I)^{-1} G^T Q_y

Since the first element of ΔU in formula (16) is Δu(t), the real-time input u(t) can be expressed as

    u(t) = u(t-1) + d^T (ω - f)    (18)

In an industrial controlled process there are often input constraints

    u_min ≤ u(t+k) ≤ u_max    (19)
    Δu_min ≤ Δu(t+k) ≤ Δu_max    (20)

Because the control variables can be written in vector form

    U = T ΔU + Ū    (21)

    T = [1 0 ... 0; 1 1 ... 0; ...; 1 1 ... 1],  Ū = [u(t-1), u(t-1), ..., u(t-1)]^T

the constraints can be expressed as

    C ΔU ≤ c    (22)

    C = [I; -I; T; -T],  c = [Δu_max; -Δu_min; u_max - Ū; -u_min + Ū]

The constrained control problem is then the quadratic program

    min_{ΔU} ΔU^T (G^T Q_y G + Q_u I) ΔU - 2(ω - f)^T Q_y G ΔU
    s.t.  C ΔU ≤ c
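With G and the free response f available, the unconstrained law (16)-(18) is a single linear solve. A minimal sketch with scalar weights Q_y and Q_u, as used in the simulation section:

```python
import numpy as np

def gpc_move(G, f, w, qy=1.0, qu=0.001):
    """Unconstrained GPC law, Eqs. (16)-(18).

    G : (N2, Nu) dynamic matrix; f : free response; w : reference trajectory.
    Returns the full increment vector Delta_U and the first move Delta_u(t).
    """
    nu = G.shape[1]
    M = qy * (G.T @ G) + qu * np.eye(nu)          # G^T Qy G + Qu I (scalar weights)
    dU = np.linalg.solve(M, qy * G.T @ (w - f))   # Eq. (16)
    return dU, dU[0]                              # receding horizon: apply dU[0] only
```

The applied input is then u(t) = u(t-1) + Δu(t) as in (18); under the constraints (19)-(20) the same objective would instead be handed to a QP solver with C ΔU ≤ c.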

5 Simulation Study

Consider the following strongly nonlinear system

    ẏ(t)(1 + y(t)^2) = u(t)

The gain of this system decreases as the output amplitude increases. Because the open-loop system is unstable, 2500 input-output data points are generated in this paper using a closed-loop proportional controller. The sampling time is T_s = 0.1. The controller is designed from these data by the method of this paper, and the simulation results are compared with the conventional generalized predictive control method. The reference is a step signal: at t = 1, the setpoint jumps from 0 to 1. In the simulations, the prediction horizon is N = 8, the control horizon is N_u = 6, the output weight is Q_y = 1, and the control-increment weight is Q_u = 0.001.

The output of the conventional generalized predictive control algorithm is shown in Fig. 1, and the result of the method in this paper in Fig. 2; in panel (a) the solid line is the system output and the dotted line the reference, and panel (b) shows the control input signal. The figures show that, compared with the routine predictive control algorithm, the controller of our method obviously decreases the output overshoot, shortens the system response time, and avoids high-frequency jumps of the input signal. The superiority of the proposed algorithm is clearly seen.

Fig. 1: Step response curve of the conventional predictive controller. (a) output; (b) control input
Fig. 2: Step response curve of our algorithm. (a) output; (b) control input

6 Conclusion

A K-harmonic means clustering algorithm that is insensitive to the selection of initial values is introduced in this paper; it uses the harmonic mean distance between data points and cluster centers instead of the minimum distance, and introduces conditional probabilities from cluster centers to data points and dynamic weights at every iteration. The class nearest to the current operating point is selected by tracking real-time condition changes; the neighborhood of the operating point and the model parameters are computed within that class, and the appropriate control increment is calculated. The simulation results show the effectiveness of the method.

Acknowledgement

This work is supported by the Scientific Research Project of Liaoning Province, China (L2011064, L2012141).

References

[1] N. N. Nandola and S. Bhartiya. Hybrid system identification using a structural approach and its model based control: An experimental validation. Nonlinear Analysis: Hybrid Systems, 3(2): 87-100, 2009.
[2] Yunan S., Maozai T. Adaptive local linear quantile regression. Acta Mathematicae Applicata Sinica, 27(3): 509-516, 2011.
[3] Sequenz H., Schreiber A., Isermann R. Identification of nonlinear static processes with local polynomial regression and subset selection. IFAC Symposium on System Identification, 138-143, Saint-Malo, France, July 2009.
[4] Hang Yue, Jones E. G., Revesz P. Local polynomial regression models for average traffic speed estimation and forecasting in linear constraint databases. 17th International Symposium on Temporal Representation and Reasoning, 155-161, 2010.
[5] Kumar A., Sinha R. and Bhattacherjee V. Modeling using K-means clustering algorithm. 2012 1st International Conference on Recent Advances in Information Technology, 554-558, 2012.
[6] B. Zhang, M. Hsu, and U. Dayal. K-harmonic means - a data clustering algorithm. Technical Report HPL-1999-124, HP Laboratories Palo Alto, 1999.
[7] A. Unler and Z. Gungor. Applying k-harmonic means clustering to the part-machine classification problem. Expert Systems with Applications, 36(2): 1179-1194, 2009.
[8] Jing Zeng, Hui Jin. Local polynomial regression with K-harmonic means and subset selection. Journal of Computational Information Systems, 8(22): 9225-9232, 2012.
[9] K. Thangavel and N. Karthikeyani Visalakshi. Ensemble based distributed k-harmonic means clustering. International Journal of Recent Trends in Engineering, 2(1): 125-129, November 2009.
[10] Wei Wang. Theory and Application of Generalized Predictive Control. Science Press, 1998.
[11] N. N. Nandola and D. E. Rivera. A novel model predictive control formulation for hybrid systems with application to adaptive behavioral interventions. American Control Conference (ACC), 6286-6292, Baltimore, Maryland, USA, June 2010.