RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK

TRNKA PAVEL AND HAVLENA VLADIMÍR
Dept. of Control Engineering, Czech Technical University, Technická 2, 166 27 Praha, Czech Republic
e-mail: {trnkap,havlena}@control.felk.cvut.cz

Abstract: Subspace identification methods (4SID) are relatively new in the field of linear system identification. In recent years they have proved efficient for industrial applications thanks to their good properties: the same complexity of identification for single-input/output and multiple-input/output systems, direct state space model identification, numerical robustness (QR and SVD factorizations) and implicit model order reduction. The algorithms are well developed for off-line identification; on-line recursive identification, however, is still rather an open topic. The problem lies in the recursification of the SVD, which is impossible to do exactly, so several approximations are used instead. We use a different approach, exploiting the fact that 4SID methods minimize an implicit optimality criterion, namely the mean square error of the multi-step predictions of the model. This criterion allows recursification in the least squares framework and the incorporation of prior knowledge. We also address the problem of non-causality, which was recently pointed out in 4SID methods.

Keywords: System Identification, Subspace Identification, Least Squares, Recursive Algorithm, Multi-step Predictor

1 INTRODUCTION

Modern control methods, like model predictive control, have proved very effective in industrial applications. Their efficiency, however, is often limited by the quality of the controlled system model, which is hard to obtain, especially for large systems with multiple inputs and multiple outputs (MIMO). Recent advances in subspace identification methods (4SID) (Overschee and Moor [1996]) showed that they can be successful in the identification of such models from real-world data. The methods are moreover numerically robust, identify a stochastic state space model directly and do not require extensive model parameterization, unlike for example the identification of a MIMO ARMAX structure.

The 4SID methods were developed for off-line identification; for industrial applications, however, it is necessary to have on-line recursive algorithms for the identification of models whose parameters can vary in time. This is still an open problem. Several approaches have been suggested (Mercère et al. [2005]; Kameyama and Ohsumi [2005]), but the problem is far from being solved. The 4SID methods are based on geometrical projections between the spaces spanned by the rows of certain Hankel-structured matrices built from measured input/output data, and on the exploration of these spaces by the singular value decomposition (SVD). The methods suggested so far for recursification are rather complicated, with a complex theoretical background, because they are usually based on tricky recursive approximations of the SVD.

We propose a new recursive 4SID method in the well-known least squares framework. The 4SID methods can be shown to give a model that is an optimal multi-step predictor (Trnka and Havlena [2005]), in the sense of minimizing the sum of prediction errors on the measured input/output data over a certain prediction horizon. The oblique projection used in 4SID identification can be proved to arise from this multi-step optimization. This fact can be exploited for a recursive 4SID algorithm, allowing us to use recursive least squares with some type of forgetting, and it even allows us to incorporate prior information into the otherwise black-box approach of 4SID methods. The prior information can be, for example, a funnel limiting the model step response.

The paper is organized as follows: first the notation and the model used in subspace methods are established, next the standard subspace identification algorithm is shown, followed by its reformulation into the least squares framework, and finally the recursive 4SID algorithm is proposed. The paper is closed with simulation results.

2 STATE SPACE MODEL

In this paper a state space model of a stochastic system in the innovation form (Ljung [1998]) is considered

  x_{k+1} = A x_k + B u_k + K e_k,  (1)
  y_k = C x_k + D u_k + e_k,  (2)

where u_k ∈ R^m is the m-dimensional input, x_k ∈ R^n is the n-dimensional state, y_k ∈ R^l is the l-dimensional output, K is the steady-state Kalman gain and e_k ∈ R^l is an unknown innovation with covariance matrix E[e_k e_k^T] = R_e. This model has a close relation to the widely used stochastic state space model

  x_{k+1} = A x_k + B u_k + v_k,  (3)
  y_k = C x_k + D u_k + w_k,  (4)

where v_k ∈ R^n and w_k ∈ R^l are the process and the measurement noise with zero mean and covariance matrices E[v_k v_k^T] = Q, E[w_k w_k^T] = R and E[v_k w_k^T] = S. The process noise represents the disturbances entering the system and the measurement noise represents the uncertainty in the system observations. The stochastic model (3)-(4) has a good physical interpretation; the innovation model (1)-(2), however, is more suitable for subspace methods, because it has a noise model with fewer degrees of freedom, which can be more appropriately identified from the given input/output data. Both models can be shown to be equivalent from the input/output point of view up to second-order statistics (means and covariances).

3 USED NOTATION

Assume a set of input/output data samples u_k, y_k is available for k ∈ ⟨0, 1, ..., 2i + j − 2⟩. These data can be arranged into Hankel matrices with i block rows and j columns

  U_p = [ u_0      u_1      ...  u_{j-1}
          u_1      u_2      ...  u_j
          ...
          u_{i-1}  u_i      ...  u_{i+j-2} ],

  U_f = [ u_i      u_{i+1}  ...  u_{i+j-1}
          u_{i+1}  u_{i+2}  ...  u_{i+j}
          ...
          u_{2i-1} u_{2i}   ...  u_{2i+j-2} ],

where U_p is the matrix of past inputs and U_f is the matrix of future inputs. Although most of the data samples appear in both matrices, the past/future notation is appropriate, because corresponding columns of U_p and U_f are subsequent without any common data samples and therefore have the meaning of the past and the future. The value of the coefficient i is usually selected slightly larger than the upper bound of the expected system order, and the coefficient j is approximately equal to the number of measured data samples at disposal (j >> i). From the ratio of i and j it is obvious that the Hankel matrices U_p and U_f have structures with long rows. The output measurements and the noises can be arranged, similarly to the inputs, into Hankel matrices Y_p, Y_f, E_p and E_f. The system state sequence can be arranged in the matrices

  X_p = ( x_0  x_1  ...  x_{j-1} ),   X_f = ( x_i  x_{i+1}  ...  x_{i+j-1} ).
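The block-Hankel arrangement above can be sketched in a few lines of NumPy. This is our own illustration (the helper name block_hankel is not from the paper), assuming the N samples are stored row-wise in an (N, m) array:

```python
import numpy as np

def block_hankel(u, first, i, j):
    """Stack j columns of i consecutive samples, starting at sample `first`.

    u is an (N, m) array of samples u_0 ... u_{N-1}; the result has i*m rows
    and j columns, column c holding u_{first+c}, ..., u_{first+i-1+c}.
    """
    m = u.shape[1]
    H = np.zeros((i * m, j))
    for c in range(j):
        H[:, c] = u[first + c:first + c + i, :].reshape(-1)
    return H

# Example: scalar input, i = 2 block rows, j = 3 columns.
u = np.arange(7.0).reshape(-1, 1)      # u_0 ... u_6
U_p = block_hankel(u, 0, 2, 3)         # past inputs
U_f = block_hankel(u, 2, 2, 3)         # future inputs
# U_p = [[0, 1, 2],     U_f = [[2, 3, 4],
#        [1, 2, 3]]            [3, 4, 5]]
```

Note that corresponding columns of U_p and U_f indeed contain consecutive, non-overlapping windows of the signal, which is what justifies the past/future reading.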

Recursively substituting equation (1) into (2), a state space model in the matrix input/output form is obtained

  Y_p = Γ_i X_p + H_i U_p + H_i^s E_p,  (5)
  Y_f = Γ_i X_f + H_i U_f + H_i^s E_f,  (6)

where Γ_i ∈ R^{il×n} is an extended observability matrix and H_i ∈ R^{il×im} and H_i^s ∈ R^{il×il} are lower triangular Toeplitz matrices containing the impulse responses from the deterministic input u_k and the stochastic input e_k, respectively. This matrix form is a starting point for subspace identification. The structure of the parameter-related matrices is as follows

  Γ_i = ( C^T  (CA)^T  ...  (CA^{i-1})^T )^T,

  H_i = [ D           0           ...  0
          CB          D           ...  0
          ...
          CA^{i-2}B   CA^{i-3}B   ...  D ],

  H_i^s = [ I           0           ...  0
            CK          I           ...  0
            ...
            CA^{i-2}K   CA^{i-3}K   ...  I ].

4 SUBSPACE IDENTIFICATION METHODS

Subspace identification methods are relatively new in the field of system identification. They are used for the identification of the linear time-invariant state space model (1)-(2) directly from input/output data. They are generally entitled Subspace Identification Methods or, more accurately, 4SID methods (Subspace State Space System IDentification). 4SID methods are an alternative to the widely used prediction error methods, such as the least squares identification of an ARX model or the Gauss-Newton iterative identification of an ARMAX model.

4.1 Standard 4SID Algorithm

This section presents a brief version of the unified subspace identification algorithm, showing its basic steps. First the measured data samples u_k and y_k are arranged into Hankel matrices (U_p, U_f, Y_p and Y_f). The next step is fundamental in the algorithm: the computation of the oblique projection (Harville [1997]). The row space of the future outputs matrix Y_f is projected on the row space of the past data matrix W_p = ( Y_p ; U_p ) along the row space of the future inputs U_f

  O_i = Y_f /_{U_f} W_p.  (7)

Having obtained the matrix O_i, the rest of the algorithm is straightforward and uses the fact that O_i can be written as a matrix product O_i = Γ_i X̂_f, where the matrix Γ_i has full column rank and the matrix X̂_f has full row rank. Exploiting this fact and using the singular value decomposition of the weighted matrix O_i (the weighting allows for tuning the algorithm)

  W_1 O_i W_2 = U Σ V^T,

the order n of the system can be determined by inspecting the singular values in Σ, and the matrices U, Σ and V^T are partitioned accordingly to obtain U_1 = U(:, 1:n), Σ_1 = Σ(1:n, 1:n) and V_1^T = V(1:n, :)^T (Matlab-like notation), which is used to compute Γ_i and X̂_f as

  Γ_i = W_1^{-1} U_1 Σ_1^{1/2},   X̂_f = Γ_i^† O_i,

where (·)^† denotes the Moore-Penrose matrix pseudo-inverse (Harville [1997]). From the knowledge of the estimated state sequence X̂_f and the measured input/output data, the state space model parameters A, B, C and D can be computed by least squares or total least squares from

  ( X̂_{i+1} )   ( A  B ) ( X̂_i )
  (  Y_i    ) = ( C  D ) ( U_i ) + ε,

where Y_i is the first block row of the Hankel matrix Y_f (similarly U_i). Finally the stochastic properties can be estimated from the residuals

  Ŝ = Σ_22,   K̂ = Σ_12 Σ_22^{-1},   where  ( Σ_11  Σ_12 ; Σ_21  Σ_22 ) = cov(ε).

This algorithm is only a basic version of subspace identification. More sophisticated variations and extensions can be found in Overschee and Moor [1996], Overschee and Moor [1995]. They differ mostly in the way of obtaining the model parameters from the matrix O_i.

4.2 4SID in the Least Squares Framework

This section shows how to derive the oblique projection (7), used in the unified 4SID algorithm, in the least squares framework. Firstly the definition of the oblique projection will be recalled. Assume the row spaces of general matrices A ∈ R^{p×j}, B ∈ R^{q×j} and C ∈ R^{r×j}. The oblique projection of the row space of A along the row space of B on the row space of C is defined as

  A /_B C = A ( C^T  B^T ) [ ( ( C ; B ) ( C^T  B^T ) )^† ]_{first r columns} C.  (8)

The important observation is that equation (6), omitting the noise term, can be interpreted as the equation of a multi-step optimal predictor, based on the known system states X_f and the inputs U_f

  Ŷ_f = Γ_i X_f + H_i U_f.  (9)

However, the states X_f are unknown in the process of identification. As shown in Appendix A, the states can be estimated from the limited available input/output data set as a linear combination of the past data W_p. The best output estimate is then

  Ỹ_f = L_w W_p + H_i U_f,  (10)

which is based only on the input/output data. Considering now the problem of optimal multi-step predictions, the parameter matrices L_w and H_i should be selected so that (10) optimally predicts the measured outputs. The quality of the prediction is measured by a quadratic norm (Frobenius norm) of the prediction errors

  min_{L_w, H_i} || Y_f − Ỹ_f ||^2 = min_{L_w, H_i} || Y_f − ( L_w  H_i ) ( W_p ; U_f ) ||^2.  (11)

Minimizing (11) means finding the best linear predictor in the sense of least squares. The optimal values of L_w and H_i can be found from the matrix pseudo-inverse

  ( L_w  H_i ) = Y_f ( W_p ; U_f )^†.
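The minimization (11) is an ordinary linear least squares problem in the stacked parameter matrix (L_w H_i). A minimal numerical sketch of this step on synthetic data (our own illustration, not from the paper; np.linalg.lstsq replaces the explicit pseudo-inverse for numerical robustness):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: W_p stacks past outputs and inputs, U_f holds future inputs.
Wp = rng.standard_normal((6, 50))                # past data W_p = (Y_p ; U_p)
Uf = rng.standard_normal((3, 50))                # future inputs U_f
Lw_true = rng.standard_normal((4, 6))            # "true" predictor parameters
Hi_true = np.tril(rng.standard_normal((4, 3)))   # lower triangular => causal
Yf = Lw_true @ Wp + Hi_true @ Uf + 0.01 * rng.standard_normal((4, 50))

# (L_w H_i) = Y_f D^+ with D = (W_p ; U_f), solved via lstsq on D^T.
D = np.vstack([Wp, Uf])
Theta = np.linalg.lstsq(D.T, Yf.T, rcond=None)[0].T
Lw, Hi = Theta[:, :Wp.shape[0]], Theta[:, Wp.shape[0]:]

# L_w W_p is the oblique projection Y_f /_{U_f} W_p, i.e. the matrix O_i.
Oi = Lw @ Wp
```

With enough columns j, Lw and Hi recover the generating parameters up to the noise level; Oi is then passed to the SVD step of Section 4.1.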

Denoting D = ( W_p ; U_f ), the pseudo-inversion can be written as

  ( L_w  H_i ) = Y_f D^T (D D^T)^{-1},  (12)

and multiplying both sides of the equation by the matrix D from the right yields

  ( L_w  H_i ) D = Y_f D^T (D D^T)^{-1} D = Y_f Π_D = Ŷ_f,  (13)

where Π_D = D^T (D D^T)^{-1} D denotes the orthogonal projection onto the row space of D. This expression represents the best linear prediction of Y_f based on the available data

  Ŷ_f = Y_f / ( W_p ; U_f ).

From Ŷ_f only the part coming from the term L_w W_p is needed, because it is equal to Γ_i X̂_f, which is the matrix O_i necessary for the further identification using the SVD (Section 4.1). To get it separately from (13), it is sufficient to use the right-hand side of (12), take only its first 2i block columns and multiply them by the matrix W_p alone

  L_w W_p = Y_f D^T [ (D D^T)^{-1} ]_{first 2i block columns} W_p.

Comparing this expression with (8), it is obvious that it equals the oblique projection

  L_w W_p = Y_f /_{U_f} W_p,

which can be rewritten using (19) as

  O_i ( = Γ_i X̂_f ) = Y_f /_{U_f} W_p,

yielding the fundamental equation (7) of the subspace identification algorithm.

5 4SID RECURSIFICATION

The objective function (11) and its interpretation as a multi-step prediction optimization allow for a straightforward recursification of the 4SID algorithm, without the approximation or circumvention of the SVD that is typical for several recursive 4SID algorithms. Recursive least squares can be used to find the optimal L_w and H_i. With each new measurement, the Hankel-structured data matrices Y_f, W_p and U_f grow by one column. The important facts allowing recursification are that L_w and H_i do not grow in time, that they do not depend on the selection of the state basis, and that they are sufficient to estimate the state space model parameters.

4SID methods use O_i = L_w W_p for the model parameter estimation, suggesting that L_w should also be used for this estimation in recursive 4SID. However, the meaning of L_w is not very convenient and H_i should be used instead. The impulse response h_k for k ∈ ⟨0, i−1⟩ can be read directly from the last block row of H_i: D can be read directly from h_0, and A, B, C are obtained using the classical realization theory of Ho and Kalman (1966), by constructing a matrix of impulse responses

  H = [ h_1  h_2      h_3  ...  h_p
        h_2  h_3      h_4  ...
        h_3  h_4      h_5  ...
        ...
        h_p  h_{p+1}  ...       h_{i-1} ],

[Figure 1: Prior information as a funnel on the step response, bounding the amplitude between a lower and an upper envelope over the interval t_min to t_max.]

where p = (i+1)/2 for odd i. H can be factorized by the SVD as H = Γ_p Δ_p, where Δ_p = ( B  AB  ...  A^{p-1}B ) is the extended controllability matrix. For a minimal realization the matrices Γ_p and Δ_p have full rank, and hence H has rank n, equal to the system order. The algorithm of Ho and Kalman is based on the above observations. The matrices B and C can be read directly from the first m columns of Δ_p and the first l rows of Γ_p, respectively. The remaining matrix A is computed using the shift-invariance structure of Γ_p: Γ_p⁻ A = Γ_p⁺, where Γ_p⁻ is Γ_p without the last block row and Γ_p⁺ is Γ_p without the first block row.

Algorithm Modifications:

Forgetting: To allow on-line identification of time-varying processes, the recursive algorithm can be modified to incorporate some well-known type of forgetting, from simple exponential forgetting to more advanced directionally constrained forgetting.

Prior Information: A very convenient way to describe prior information is by a funnel limiting the step or impulse response (Figure 1). This approach can be implemented through the initial conditions on the mean value and covariance of H_i.

Ensuring Causality: In Qin et al. [2004, 2005], the 4SID methods based on the unified 4SID algorithm were shown to have a hidden non-causality. The estimated parameters L_w and H_i of the multi-step predictor are not constrained; to ensure causality, however, the matrix H_i must have a lower triangular structure. From the structure of the multi-step predictor it is obvious that failing to ensure this property leads to non-causal predictions using future inputs (predicting ŷ_m from u_n for n > m). This fact is one of the reasons why 4SID methods generally do not work in closed loop. The solution proposed in Qin et al. [2004] uses a parsimonious parametrization. Our recursive 4SID algorithm can easily ensure causality by prior information fixing the upper triangle of H_i to zero.
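The Ho-Kalman realization step described above can be sketched as follows; this is our own minimal implementation (the function name ho_kalman is ours), following the SVD factorization H = Γ_p Δ_p and the shift-invariance solve for A:

```python
import numpy as np

def ho_kalman(h, n, p, q, m=1, l=1):
    """Realize (A, B, C) of order n from Markov parameters h_1, h_2, ...

    h[k] is the (l, m) impulse response sample h_{k+1}; the block Hankel
    matrix H gets p block rows and q block columns (needs p+q-1 samples).
    """
    H = np.block([[h[r + s] for s in range(q)] for r in range(p)])
    U, S, Vt = np.linalg.svd(H)
    sq = np.sqrt(S[:n])
    Gamma = U[:, :n] * sq               # extended observability matrix
    Delta = sq[:, None] * Vt[:n, :]     # extended controllability matrix
    C = Gamma[:l, :]                    # first l rows of Gamma
    B = Delta[:, :m]                    # first m columns of Delta
    # Shift invariance: Gamma without its first block row equals
    # (Gamma without its last block row) @ A.
    A = np.linalg.pinv(Gamma[:-l, :]) @ Gamma[l:, :]
    return A, B, C

# Check on a known first-order system: h_k = C A^{k-1} B with A=0.5, B=1, C=2.
h = [np.array([[2.0 * 0.5 ** k]]) for k in range(5)]   # h_1 ... h_5
A, B, C = ho_kalman(h, n=1, p=3, q=3)
```

Here A is recovered as 0.5 up to numerical error, and the products C·B and C·A·B reproduce h_1 and h_2; B and C individually are determined only up to the similarity transform implicit in the chosen state basis.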

6 SIMULATION RESULTS

In this section the convergence of the recursive algorithm is shown on a simple example. The experimental input/output sequences are generated by the 4th-order state space model

  x_{k+1} = [  0.603  0.603   0      0
              −0.603  0.603   0      0
               0      0      −0.603 −0.603
               0      0       0.603 −0.603 ] x_k + [ 0.924 ; 2.755 ; 4.317 ; 2.644 ] u_k + [ 0.021 ; 0.11 ; 0.071 ; 0.103 ] e_k,

  y_k = ( 0.575  1.075  0.523  0.183 ) x_k + 0.714 u_k + e_k,

where the deterministic input is a pseudo-random binary signal and the stochastic input is white noise e_k ~ N(0, 0.01). The identification started with non-informative prior information. The positions of the identified system poles in each iteration, compared to the real poles and to the poles obtained by the off-line N4SID and ARMAX methods, are shown in Figure 2.

[Figure 2: (a) Convergence of the recursive 4SID algorithm; (b) detail of the convergence of one of the four system poles compared to the off-line identification methods (an increasing number of iterations is marked by a darker color). Legend: Real System, ARMAX, Off-line N4SID.]

CONCLUSION

The interpretation of 4SID methods in the least squares framework showed one possibility of recursifying the algorithm, which had seemed problematic for on-line use. Several improvements are possible, mainly eliminating the separability in the optimality criterion (11), which causes problems in the system parameter separation step.

ACKNOWLEDGEMENTS

This work was financially supported by the Grant Agency of the Czech Republic under grants No. 102/05/0271 and 102/05/2075 and by the project Talent under grant No. 102/03/H116.

APPENDIX A: System states as a linear combination of data history

Recursive substitution of equation (1) into (2) and formulation in matrix form yields the equations fundamental in 4SID methods

  Y_p = Γ_i X_p + H_i U_p + H_i^s E_p,  (14)
  Y_f = Γ_i X_f + H_i U_f + H_i^s E_f,  (15)
  X_f = A^i X_p + Δ_i U_p + Δ_i^s E_p,  (16)

where Δ_i and Δ_i^s are reversed controllability matrices for the deterministic and stochastic subsystems with the following structure

  Δ_i = ( A^{i-1}B  A^{i-2}B  ...  B ),   Δ_i^s = ( A^{i-1}K  A^{i-2}K  ...  K ).

From equation (14), X_p can be expressed as

  X_p = Γ_i^† Y_p − Γ_i^† H_i U_p − Γ_i^† H_i^s E_p = ( Γ_i^†  −Γ_i^† H_i  −Γ_i^† H_i^s ) ( Y_p ; U_p ; E_p ).

Using this expression in equation (16), the future states X_f can be obtained as

  X_f = A^i Γ_i^† Y_p − A^i Γ_i^† H_i U_p − A^i Γ_i^† H_i^s E_p + Δ_i U_p + Δ_i^s E_p
      = ( A^i Γ_i^†   Δ_i − A^i Γ_i^† H_i   Δ_i^s − A^i Γ_i^† H_i^s ) ( Y_p ; U_p ; E_p ).  (17)

These two last equations clearly indicate that both the past states X_p and the future states X_f can be obtained as a linear combination of the past data Y_p, U_p and E_p. The last equation can be substituted for X_f in (15), and assuming i, j → ∞,

  Y_f = Γ_i ( A^i Γ_i^†   Δ_i − A^i Γ_i^† H_i ) ( Y_p ; U_p ) + H_i U_f + H_i^s E_f = L_w W_p + H_i U_f + H_i^s E_f.  (18)

The future outputs Y_f can thus be expressed, besides equation (6), where they are a linear combination of the system states X_f and the inputs, by a linear combination (determined by the matrices L_w, H_i and H_i^s) of the past data W_p, the known sequence of future inputs U_f and the sequence of future innovations E_f. Replacing the unknown future innovation sequence E_f with its mean, the equation of the linear predictor from the input/output data is obtained

  Ŷ_f = L_w W_p + H_i U_f.

Moreover, from the comparison of (15) and (18) it is obvious that the matrix O_i (the output response from the system states X_f) can be obtained from the past data W_p

  Γ_i X_f = L_w W_p.

Finally, considering only a limited input/output data set (assuming now limited W_p and U_f), the future states X_f in equation (17) are only an estimate X̂_f of the future states, yielding

  Ŷ_f = L_w W_p + H_i U_f,

where the following terms are equivalent

  L_w W_p = Γ_i X̂_f = O_i.  (19)

References

HARVILLE, D. A. (1997): Matrix Algebra From a Statistician's Perspective. Springer-Verlag. ISBN 038794978X.

KAMEYAMA, K.; OHSUMI, A. (2005): Recursive subspace prediction of linear time-varying stochastic systems. In Proceedings of the 16th IFAC World Congress. Kidlington, Oxford, GB, Elsevier. ISBN 0-08-144130-1.

LJUNG, L. (1998): System Identification: Theory for the User (2nd Edition). Prentice Hall PTR.

MERCÈRE, G.; LECŒUCHE, S.; VASSEUR, C. (2005): Sequential correlation based propagator algorithm for recursive subspace identification. In Proceedings of the 16th IFAC World Congress. Kidlington, Oxford, GB, Elsevier. ISBN 0-08-144130-1.

OVERSCHEE, P. V.; MOOR, B. D. (1995): A unifying theorem for three subspace system identification algorithms. Automatica, Special Issue on Trends in System Identification, 31, 12, 1853-1864.

OVERSCHEE, P. V.; MOOR, B. D. (1996): Subspace Identification for Linear Systems: Theory-Implementation-Applications. Kluwer Academic Publishers.

QIN, S. J.; LIN, W.; LJUNG, L. (2004): A novel subspace identification approach with parsimonious parametrization. Technical report, The University of Texas at Austin and Linköping University, Sweden.

QIN, S. J.; LIN, W.; LJUNG, L. (2005): A novel subspace identification approach with enforced causal models. Automatica, 41, 12, 2043-2053.

TRNKA, P.; HAVLENA, V. (2005): Subspace identification as multi-step predictions optimization. In Proceedings of the Fifth IASTED International Conference on Modelling, Simulation and Optimization, 223-228. Anaheim: ACTA Press. ISBN 0-88986-524-8.