ELEC system identification workshop. Subspace methods


1 / 33 ELEC system identification workshop: Subspace methods. Ivan Markovsky

2 / 33 Plan
1. Behavioral approach
2. Subspace methods
3. Optimization methods

3 / 33 Outline
- Exact modeling
- Algorithms

4 / 33 Plan
- Exact modeling
- Algorithms

5 / 33 The goal is to obtain a model B from data D

data D → identification → model B ∈ M

- (R^q)^N — data space: functions from N to R^q
- D — data: a set of finite vector-valued time series, D = { w_d^1, ..., w_d^N }, with w_d^i = ( w_d^i(1), ..., w_d^i(T_i) )
- B — model: a subset of the data space
- M — model class: a set of models

6 / 33 Work plan
1. define a modeling problem (what is D → B?)
2. find an algorithm that solves the problem
3. implement the algorithm (how to compute B?)

7 / 33 State the aim without hidden assumptions
- all user choices should enter in the problem formulation
- hyper-parameters should not appear in the solutions
- the resulting methods should be automatic

8 / 33 User choices reflect prior knowledge; they determine the model class and the fitting criterion

the "true model" assumption: D = D̄ + D̃, where D̄ (true data) ⊂ B̄ ∈ M (true model) and D̃ is noise

assuming, in addition, that D̃ is a stochastic process with a given noise distribution, the maximum likelihood principle yields the fitting criterion; alternatively, we can specify M and the fitting criterion as a deterministic approximation

9 / 33 Examples of user choices for M and the fitting criterion

Model class:
- linear / nonlinear
- static / dynamic
- time-invariant / time-varying

Fitting criterion:
- exact / approximate
- deterministic / stochastic

10 / 33 Why exact identification?
- from simple to complex: exact → approximate → stochastic → stochastic approximate
- exact identification is an ingredient of the other problems
- exact methods lead to effective approximation heuristics

11 / 33 Exact identification in L
given data D, find B ∈ L such that D ⊂ B
- a solution always exists but is nonunique

Exact identification in L_{m,l}
given (m, l) and data D, find B ∈ L_{m,l} such that D ⊂ B
- a solution may not exist

12 / 33 Most powerful unfalsified model B_mpum(D)
given data D, find the smallest (m, l) such that B ∈ L^q_{m,l} and D ⊂ B

Why complexity minimization?
- it makes the solution unique
- Occam's razor: "simpler = better"

13 / 33 Identifiability question
recover the data generating system B̄ from exact data D ⊂ B̄ ∈ L^q

Under what conditions does B_mpum(D) = B̄ hold? the answer is given by the "fundamental lemma"

14 / 33 Hankel matrix
consider the case D = w_d (a single trajectory); the main tool is

H_L(w) := \begin{bmatrix}
w(1) & w(2) & w(3) & \cdots & w(T-L+1) \\
w(2) & w(3) & w(4) & \cdots & w(T-L+2) \\
w(3) & w(4) & w(5) & \cdots & w(T-L+3) \\
\vdots & \vdots & \vdots & & \vdots \\
w(L) & w(L+1) & w(L+2) & \cdots & w(T)
\end{bmatrix}

if w_d ∈ B ∈ L^q, then image(H_L(w_d)) ⊆ B|_L
extra conditions on w_d and B are needed for image(H_L(w_d)) = B|_L
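As an illustration, a minimal numpy sketch of the block Hankel construction; the function name block_hankel and the T-by-q storage convention (one sample per row) are my own, not from the slides:

```python
import numpy as np

def block_hankel(w, L):
    """Block Hankel matrix H_L(w) with L block rows.

    w : T-by-q array, one sample w(t) per row.
    Returns the (L*q)-by-(T-L+1) matrix whose i-th block row
    holds the samples w(i+1), ..., w(i+T-L+1).
    """
    T, q = w.shape
    cols = T - L + 1
    H = np.empty((L * q, cols))
    for i in range(L):
        # block row i: samples w(i+1), ..., w(i+cols), one per column
        H[i * q:(i + 1) * q, :] = w[i:i + cols, :].T
    return H
```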

15 / 33 Persistency of excitation (PE)
u is PE of order L if H_L(u) has full row rank

system theoretic interpretation: u ∈ (R^m)^T is PE of order L ⟺ there is no B ∈ L_{m-1,L} such that u ∈ B

Lemma (fundamental lemma)
1. B ∈ L^q_{m,l} controllable, and
2. w_d = (u_d, y_d) ∈ B with u_d PE of order L + pl
⟹ image(H_L(w_d)) = B|_L
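The PE condition can be tested directly as the rank condition it states; a sketch reusing block_hankel from above, assuming exact (noise-free) data — with noisy data the rank test needs a threshold on the singular values:

```python
def is_persistently_exciting(u, L):
    """u is PE of order L iff H_L(u) has full row rank."""
    H = block_hankel(u, L)
    return np.linalg.matrix_rank(H) == H.shape[0]
```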

16 / 33 Plan
- Exact modeling
- Algorithms

17 / 33 The main idea is that a desired trajectory w can be constructed directly from the data w_d

any w ∈ B|_L can be obtained from w_d ∈ B:

w = H_L(w_d) g,  for some g

g ↔ input and initial conditions, cf. the image representation
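In code, constructing or checking a trajectory is then one least-squares solve with H_L(w_d); a small sketch under the same conventions as above (a residual of ≈ 0 certifies that w lies in image(H_L(w_d))):

```python
def trajectory_coefficients(wd, w, L):
    """Solve w = H_L(wd) g in the least-squares sense.

    wd : data trajectory (T-by-q), w : desired trajectory (L-by-q).
    Returns g and the residual norm (~0 iff w is in image(H_L(wd))).
    """
    H = block_hankel(wd, L)
    w_vec = w.reshape(-1)                 # stack w(1), ..., w(L) into one vector
    g = np.linalg.lstsq(H, w_vec, rcond=None)[0]
    return g, np.linalg.norm(H @ g - w_vec)
```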

18 / 33 Algorithms
- w_d → kernel parameter R
- w_d → impulse response H
- w_d → state-space parameters (A, B, C, D):
  - w_d → R → (A, B, C, D) or w_d → H → (A, B, C, D)
  - w_d → observability matrix → (A, B, C, D)
  - w_d → state sequence → (A, B, C, D)

19 / 33 w_d → R
under the assumptions of the lemma, image(H_{l+1}(w_d)) = B|_{l+1}, so leftker(H_{l+1}(w_d)) defines a kernel representation of B:

[ R_0  R_1  ⋯  R_l ] H_{l+1}(w_d) = 0,   R_i ∈ R^{g×q}

kernel representation: B = ker( R(σ) ), with R(z) = Σ_{i=0}^{l} R_i z^i

recursive computation is possible (exploiting the Hankel structure)
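Numerically, a basis of the left kernel can be read off from the SVD of H_{l+1}(w_d); a sketch, where the tolerance tol is my own choice, needed because in floating point the kernel is only approximate:

```python
def kernel_parameter(wd, ell, tol=1e-9):
    """Rows R = [R_0 R_1 ... R_l] spanning leftker(H_{l+1}(wd))."""
    H = block_hankel(wd, ell + 1)
    U, s, Vt = np.linalg.svd(H)           # full SVD: U is square
    rank = int(np.sum(s > tol * s[0]))
    return U[:, rank:].T                  # R @ H ~ 0; each row is one kernel vector
```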

20 / 33 w_d → H
the impulse response is a (matrix-valued) trajectory: with zero initial conditions (l leading zero samples) and an impulse input,

W = ( 0, ..., 0, \begin{bmatrix} I_m \\ H(0) \end{bmatrix}, \begin{bmatrix} 0 \\ H(1) \end{bmatrix}, ..., \begin{bmatrix} 0 \\ H(t-1) \end{bmatrix} )    (1)

by the lemma, W = H_{l+t}(w_d) G for some G

define H_{l+t}(u_d) =: \begin{bmatrix} U_p \\ U_f \end{bmatrix} and H_{l+t}(y_d) =: \begin{bmatrix} Y_p \\ Y_f \end{bmatrix}; then

\begin{bmatrix} U_p \\ Y_p \\ U_f \end{bmatrix} G = \begin{bmatrix} 0 \\ 0 \\ E \end{bmatrix}, with E := \begin{bmatrix} I_m \\ 0 \end{bmatrix},   and   Y_f G = H    (2)

21 / 33 Block algorithm
input: u_d, y_d, l, and t
1. solve (2) and let G_p be a particular solution
2. compute H = Y_f G_p
output: the first t samples of the impulse response H

Exercise: implement and test the algorithm (see the sketch below)
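For the exercise, a minimal sketch of the block algorithm, reusing block_hankel; the variable names and the least-squares solve for a particular solution G_p are my choices, and exact data with sufficiently exciting u_d is assumed:

```python
def impulse_response(ud, yd, ell, t):
    """First t samples of the impulse response from data (ud, yd), via (2).

    ud : T-by-m inputs, yd : T-by-p outputs, ell : lag of the system.
    """
    m, p = ud.shape[1], yd.shape[1]
    Hu, Hy = block_hankel(ud, ell + t), block_hankel(yd, ell + t)
    Up, Uf = Hu[:ell * m, :], Hu[ell * m:, :]      # past / future inputs
    Yp, Yf = Hy[:ell * p, :], Hy[ell * p:, :]      # past / future outputs
    A = np.vstack([Up, Yp, Uf])
    # right-hand side of (2): zero past (initial conditions), impulse input
    rhs = np.zeros((A.shape[0], m))
    rhs[ell * m + ell * p:ell * m + ell * p + m, :] = np.eye(m)
    Gp = np.linalg.lstsq(A, rhs, rcond=None)[0]    # a particular solution of (2)
    H = Yf @ Gp                                    # stacked H(0), ..., H(t-1)
    return H.reshape(t, p, m)
```

One way to test it is to simulate data from a known state-space model and compare the returned H with that model's impulse response.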

22 / 33 Refinements
- solve (2) efficiently, exploiting the Hankel structure
- do the computations iteratively, for pieces of H
- choose t automatically, for a sufficient decay of H

Exercise: try these improvements and the application to noisy data

23 / 33 w_d → (A, B, C, D)
- w_d → H(0 : 2l) or R(ξ) → realization → (A, B, C, D)
- w_d → observability matrix O_{l+1}(A, C) → (A, B, C, D):
  O_{l+1}(A, C) → (A, C)    (3)
  (u_d, y_d, A, C) → (B, D, x_ini)
- w_d → state sequence x_d → (A, B, C, D):

\begin{bmatrix} \sigma x_d \\ y_d \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x_d \\ u_d \end{bmatrix}    (4)

24 / 33 O_{l+1}(A, C) → (A, B, C, D)
C is the first block entry of O_{l+1}(A, C)

A is given by the shift equation

( σ* O_{l+1}(A, C) ) A = σ O_{l+1}(A, C)

(σ / σ* removes the first / last block entry)

once C and A are known, the system of equations

y_d(t) = C A^{t-1} x_d(1) + Σ_{τ=1}^{t-1} C A^{t-1-τ} B u_d(τ) + D u_d(t)

is linear in B, D, and x_d(1)
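The shift equation translates into one least-squares solve; a sketch, assuming O is given as a numerical matrix with block rows of height p (the function name is mine):

```python
def ac_from_observability(O, p):
    """(A, C) from an extended observability matrix with block rows of height p."""
    C = O[:p, :]                          # first block entry
    # shift equation: (O without last block row) A = (O without first block row)
    A = np.linalg.lstsq(O[:-p, :], O[p:, :], rcond=None)[0]
    return A, C
```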

25 / 33 w_d → observability matrix
the columns of O_t(A, C) are n linearly independent free responses of B

under the conditions of the lemma,

\begin{bmatrix} H_t(u_d) \\ H_t(y_d) \end{bmatrix} G = \begin{bmatrix} 0 \\ Y_0 \end{bmatrix}    (zero inputs / free responses)

linearly independent free responses ⟺ G of maximal rank

rank revealing factorization: Y_0 = O_t(A, C) [ x_ini,1  ⋯  x_ini,j ] =: O_t(A, C) X_ini

26 / 33 w_d → state sequence
sequential free responses ⟺ Y_0 block-Hankel; then X_ini is a state sequence of B

computation of sequential free responses:

\begin{bmatrix} U_p \\ Y_p \\ U_f \end{bmatrix} G = \begin{bmatrix} U_p \\ Y_p \\ 0 \end{bmatrix}    (sequential initial conditions / zero inputs)    (5)

Y_f G = Y_0

rank revealing factorization: Y_0 = O_t(A, C) [ x_d(1)  ⋯  x_d(n + m + 1) ]

27 / 33 Refinements
- solve (5) efficiently, exploiting the Hankel structure
- iteratively compute pieces of Y_0; the iterative algorithm
  - requires smaller persistency of excitation of u_d
  - can be more efficient (solves a few smaller systems of equations rather than one big one)

28 / 33 MOESP-type algorithms
project the rows of H_n(y_d) onto the orthogonal complement of rowspan(H_n(u)):

Y_0 := H_n(y_d) Π⊥_u,  where  Π⊥_u := I − H_n^T(u) ( H_n(u) H_n^T(u) )^{-1} H_n(u)

observe that Π⊥_u has maximal rank and

\begin{bmatrix} H_n(u) \\ H_n(y_d) \end{bmatrix} Π⊥_u = \begin{bmatrix} 0 \\ Y_0 \end{bmatrix}

⟹ the orthogonal projection computes free responses Y_0
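A direct numpy rendering of this projection (using a pseudoinverse for safety when H_n(u) H_n^T(u) is ill-conditioned; the function name is mine):

```python
def moesp_free_responses(ud, yd, n):
    """Free responses Y_0 = H_n(yd) @ Pi, MOESP-style orthogonal projection."""
    Hu, Hy = block_hankel(ud, n), block_hankel(yd, n)
    # Pi projects onto the orthogonal complement of rowspan(H_n(ud))
    Pi = np.eye(Hu.shape[1]) - Hu.T @ np.linalg.pinv(Hu @ Hu.T) @ Hu
    return Hy @ Pi
```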

29 / 33 Comments
- H_n(y_d) Π⊥_u gives T − n + 1 free responses (n such responses suffice for exact identification)
- a geometric operation has a system theoretic meaning
- the condition for rank(Y_0) = n given in the literature,

rank \begin{bmatrix} X_ini \\ H_n(u) \end{bmatrix} = n + nm,

  is not verifiable from the data (u_d, y_d)

30 / 33 N4SID-type algorithms
split the data into "past" and "future":

H_{2n}(u_d) =: \begin{bmatrix} U_p \\ U_f \end{bmatrix},  H_{2n}(y_d) =: \begin{bmatrix} Y_p \\ Y_f \end{bmatrix},  and define  W_p := \begin{bmatrix} U_p \\ Y_p \end{bmatrix}

oblique projection of the rows of Y_f along rowspan(U_f) onto rowspan(W_p):

Y_0 := Y_f /_{U_f} W_p := Y_f \begin{bmatrix} W_p^T & U_f^T \end{bmatrix} \begin{bmatrix} W_p W_p^T & W_p U_f^T \\ U_f W_p^T & U_f U_f^T \end{bmatrix}^{+} \begin{bmatrix} W_p \\ 0 \end{bmatrix} =: Y_f Π_obl

31 / 33 N4SID-type algorithms
observe that

\begin{bmatrix} W_p \\ U_f \end{bmatrix} Π_obl = \begin{bmatrix} W_p \\ 0 \end{bmatrix}

(Π_obl gives the least-norm, least-squares solution)

⟹ Y_0 = Y_f Π_obl: the oblique projection computes sequential free responses
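A compact sketch of the oblique projection: solve the least-norm least-squares problem Y_f ≈ [ L_W  L_U ] [ W_p ; U_f ] and keep only the W_p part, which is algebraically the same as the pseudoinverse formula on the previous slide (the function name is mine):

```python
def oblique_projection(Yf, Uf, Wp):
    """Y_0 = Yf /_{Uf} Wp: rows of Yf along rowspan(Uf) onto rowspan(Wp)."""
    WU = np.vstack([Wp, Uf])
    L = Yf @ WU.T @ np.linalg.pinv(WU @ WU.T)   # [L_W  L_U], least-norm solution
    return L[:, :Wp.shape[0]] @ Wp              # keep the Wp part, drop the Uf part
```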

32 / 33 Comments
- Y_0 := Y_f /_{U_f} W_p gives T − 2n + 1 sequential free responses (n + m + 1 such responses suffice for exact identification)
- a geometric operation has a system theoretic meaning
- the conditions for rank(Y_0) = n given in the literature,
  1. u_d persistently exciting of order 2n, and
  2. rowspan(X_ini) ∩ rowspan(U_f) = {0},
  are not verifiable from the data (u_d, y_d)

33 / 33 Summary
- transitions among representations — system theory
- exact identification aims at B_mpum(w_d)
- H_t(w_d) plays a key role in both analysis and computation
- under controllability and u_d persistently exciting, image(H_t(w_d)) = B|_t
- subspace methods construct special responses from the data
