A software package for system identification in the behavioral setting

1 / 20  A software package for system identification in the behavioral setting
Ivan Markovsky, Vrije Universiteit Brussel

2 / 20  Outline
- Introduction: system identification in the behavioral setting
- Solution approach: structured low-rank approximation
- Examples

3 / 20  Work plan
1. define a problem
2. develop methods/algorithms to solve the problem
3. implement the algorithm in software
An identification problem involves:
- data collection: a set of time series (trajectories)
- choice of model class: bounded-complexity LTI models
- choice of identification criterion: weighted 2-norm

4 / 20  Representation-free problem formulation
- model B: a set of trajectories w : Z → R^q
- representation: equations whose solution set equals B; parameters specify the representation
- in the behavioral setting, problem definitions involve the model, not its representations
- solution methods do involve representations; they are, however, implementation details of the methods

5 / 20  LTI model representations
kernel representation
    B = { w | R_0 w + R_1 σw + ... + R_l σ^l w = 0 }    (KER)
where σ is the shift operator, (σw)(t) = w(t + 1)
input/state/output representation
    B = { w = Π(u, y) | there exists x, such that σx = Ax + Bu, y = Cx + Du }    (I/S/O)
Π is a permutation matrix, defining the input/output partitioning
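As a concrete illustration (not part of the slides), for a single-input single-output behavior with lag 2, choosing kernel parameters R_i = [b_i, -a_i] turns (KER) into the difference equation a(σ)y = b(σ)u, which MATLAB can convert to an (I/S/O) representation. The coefficient values below are arbitrary examples.

% SISO sketch: kernel representation as a difference equation and an
% equivalent (I/S/O) representation (arbitrary, hypothetical coefficients).
a = [1 -0.5 0.06];        % a(z) = z^2 - 0.5 z + 0.06  (coefficients of y)
b = [0  1.0 0.20];        % b(z) = z + 0.2             (coefficients of u)
sys_tf  = tf(b, a, -1);   % a(sigma) y = b(sigma) u, unspecified sample time
sys_iso = ss(sys_tf);     % matrices A, B, C, D of an (I/S/O) representation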

6 / 20  Model class L_{m,l}
- the smallest l for which (KER) exists is the lag of B
- the smallest n = dim(x) for which (I/S/O) exists is the order of B
- the number of inputs m is invariant of the representation
- (m, l) and (m, n) are measures of the model's complexity
- L_{m,l}: LTI models with m inputs and lag l

7 / 20  Approximation criterion
orthogonal distance between the data and the model
    M(w, B) := min_{ŵ ∈ B} ||w − ŵ||_2
M(w, B) shows how much B fails to explain w and is called the misfit (lack of fit) between w and B
system identification problem
    minimize over B ∈ L_{m,l}  M(w, B)    (SYSID)

8 / 20  Generalizations
multiple time series w = {w^1, ..., w^N}
    M(w, B) := min_{ŵ^1, ..., ŵ^N ∈ B}  Σ_{i=1}^N ||w^i − ŵ^i||_2^2
fixed initial conditions w_ini
    M(w, B) := min_{(w_ini, ŵ) ∈ B}  ||w − ŵ||_2
fixed variables I ⊆ {1, ..., q}
    M(w, B) := min_{ŵ ∈ B, ŵ_I = w_I}  ||w − ŵ||_2
missing data: w^i_j(t) = NaN means that w^i_j(t) is missing

9 / 20  Mosaic-Hankel matrix
1 × N block matrix
    H_{l+1}(w) := [ H_{l+1}(w^1)  ...  H_{l+1}(w^N) ]
with block-Hankel blocks
    H_{l+1}(w^i) := [ w^i(1)    w^i(2)    ...  w^i(T_i − l)
                      w^i(2)    w^i(3)    ...  w^i(T_i − l + 1)
                      ...
                      w^i(l+1)  w^i(l+2)  ...  w^i(T_i)        ]
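A minimal MATLAB sketch (not from the slides) of forming this matrix from trajectories stored in a cell array; the helper name mosaic_hankel is hypothetical and is not the package's internal routine.

% Mosaic-Hankel matrix of depth L = l+1 from trajectories wm{1}, ..., wm{N},
% each stored as a T_i x q matrix (samples in rows). Save as mosaic_hankel.m.
function H = mosaic_hankel(wm, L)
  H = [];
  for i = 1:numel(wm)
    [Ti, q] = size(wm{i});
    Hi = zeros(L * q, Ti - L + 1);
    for j = 1:(Ti - L + 1)
      win = wm{i}(j:j+L-1, :)';   % q x L window: samples w(j), ..., w(j+L-1)
      Hi(:, j) = win(:);          % column j = [w(j); w(j+1); ...; w(j+L-1)]
    end
    H = [H, Hi];                  % concatenate the block-Hankel blocks
  end
end

For a single trajectory, mosaic_hankel({w}, ell + 1) is an ordinary block-Hankel matrix.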

10 / 20  Mosaic-Hankel low-rank approximation
    ( w^i(1), ..., w^i(T_i) ) ∈ B ∈ L_{m,l}, for i = 1, ..., N   ⟺   rank( H_{l+1}(w) ) ≤ lq + m
(SYSID) is a mosaic-Hankel low-rank approximation problem:
    minimize over B ∈ L_{m,l}  M(w, B)
    ⟺
    minimize over ŵ  (w − ŵ)^T diag(v) (w − ŵ)  =: ||w − ŵ||_v^2
    subject to  rank( H_{l+1}(ŵ) ) ≤ lq + m
where v_i = 0 if w_i = NaN, v_i = ∞ if w_i is exact, and v_i = 1 otherwise
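A quick numerical check of the rank condition (my own sketch, using the hypothetical mosaic_hankel helper above and the variables ell, q, m, w0, w from the simulation setup of slides 14-15): the exact data w0 satisfies the rank bound, while the noisy data w generically does not.

% Rank condition on exact vs. noisy data (run after the simulation setup below).
H0 = mosaic_hankel({w0}, ell + 1);
H  = mosaic_hankel({w},  ell + 1);
disp([rank(H0), rank(H), ell*q + m])   % expect rank(H0) <= ell*q + m, rank(H) full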

11 / 20  Solution method
kernel representation of the rank constraint
    rank( H_{l+1}(ŵ) ) ≤ r   ⟺   R H_{l+1}(ŵ) = 0 for some R with R R^T = I_{(l+1)q − r}
variable projection: elimination of the variable ŵ
    M(R) := min over ŵ  ||w − ŵ||_v  subject to  R H_{l+1}(ŵ) = 0
is a least-norm problem with an analytic solution
    M(R) = vec^T(w) Γ^{−1}(R) vec(w),
where Γ is a positive definite banded Toeplitz matrix
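The following is a naive sketch (my own, for unit weights only) of evaluating M(R): the constraint R H_{l+1}(ŵ) = 0 is linear in the stacked trajectory, so the least-norm problem has a closed-form solution. The function name misfit_vp is hypothetical; the SLRA package evaluates the same quantity far more efficiently by exploiting the banded structure of Γ(R).

% Naive variable-projection misfit for unit weights (illustration only).
% R is ((l+1)*q - r) x (l+1)*q, w is a T x q trajectory. Save as misfit_vp.m.
function M = misfit_vp(R, w, l)
  [T, q] = size(w);
  p = size(R, 1);
  G = zeros(p * (T - l), T * q);       % G * vec(w) stacks the columns of R * H_{l+1}(w)
  for j = 1:(T - l)
    cols = (j-1)*q + 1 : (j+l)*q;      % entries of w(j), ..., w(j+l)
    G((j-1)*p+1 : j*p, cols) = R;
  end
  wv = reshape(w', [], 1);             % vec(w): samples w(1), ..., w(T) stacked
  e  = G * wv;                         % equation error of the model defined by R
  M  = e' * ((G * G') \ e);            % least-norm value; G*G' plays the role of Gamma(R)
end

For a kernel parameter Rh (e.g., info.rh returned by ident), misfit_vp(Rh, w, ell) gives the corresponding misfit value; exact agreement with info.m depends on the weighting and normalization conventions of the package.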

12 / 20  SLRA software package
the identification problem is then
    minimize over R  M(R)  subject to  R R^T = I
- a nonconvex optimization problem on a manifold
- efficient evaluation of M(R), exploiting the structure
- a software implementation is available

13 / 20  Usage and implementation
    [sysh, info, wh] = ident(w, m, ell, opt)
- sysh: (I/S/O) representation of the identified model
- opt.sys0: (I/S/O) representation of an initial approximation
- opt.wini: initial conditions
- opt.exct: exact variables
- info.rh: parameter R of (KER)
- info.m: misfit
    [M, wh, xini] = misfit(w, sysh, opt)
implemented as a literate program:
- the code lives in research papers
- the papers document the code
- the code reveals the full implementation details

14 / 20  The simulation setup
simulation parameters
    ell = 2; m = 1; p = 1; T = 30; s = 0.02;
define constants
    q = m + p; n = ell * p;
the "true" system and "true" data
    sys0 = drss(n, p, m); u0 = rand(T, 1); y0 = lsim(sys0, u0);

15 / 20  The simulation setup
add noise
    w0 = [u0 y0]; w = w0 + s * randn(T, q);
generate noisy data for the examples
    clear all
    ell = 2; m = 1; p = 1; T = 30; s = 0.02;
    q = m + p; n = ell * p;
    sys0 = drss(n, p, m); u0 = rand(T, 1); y0 = lsim(sys0, u0);
    w0 = [u0 y0]; w = w0 + s * randn(T, q);

16 / 20  Invariance to variable permutation
identify a model with permuted variables
    io = fliplr(1:q); [sysh, info] = ident(w(:, io), m, ell); disp(info.m)
identify a model with the original variable order
    io = 1:q; [sysh, info] = ident(w(:, io), m, ell); disp(info.m)

17 / 20  Zero initial conditions
identify the system with the option opt.wini = 0
    opt.wini = 0; [sysh0, info0, wh] = ident(w, m, ell, opt); disp(info0.m)
verify that (0, ..., 0, ŵ) ∈ B
    whext = [zeros(ell, q); wh]; disp(misfit(whext, sysh0))
(the displayed value is numerically zero, on the order of 1e-32)
compare the misfit with free initial conditions and with zero initial conditions

18 / 20  Identification from multiple trajectories
split w into two parts and apply ident to them
    wm{1} = w(1:10, :); wm{2} = w(11:T, :); [sysh, infom, wmh] = ident(wm, m, ell); disp(infom.m)
compare the misfit when fitting one trajectory and when fitting the same trajectory split into two parts

19 / 20  System identification with missing data
- w^1 is the noisy trajectory of the system
- w^2 = (δ, NaN): exact input, missing output, and zero initial conditions
- ŵ^2 = (δ, ĥ) is an estimate of the system's impulse response (a data-driven simulation problem)
- ŷ^2 is compared with the true impulse response and with the impulse response of the model identified from w^1

20 / 20  System identification with missing data
    wm{1} = w; wm{2} = [1 NaN; 0 NaN; 0 NaN; 0 NaN];
    opt.wini = {[] 0}; opt.exct = {[] 1};
    [~, info, wh] = ident(wm, m, ell, opt);
    disp(wh{2}(:, 2)')
    disp(impulse(sys0, 3)')
