Hybrid System Identification: An SDP Approach

49th IEEE Conference on Decision and Control, December 15-17, 2010, Hilton Atlanta Hotel, Atlanta, GA, USA

C. Feng, C. M. Lagoa, N. Ozay and M. Sznaier

(This work was supported by the National Science Foundation under grants CMMI-0838906, ECCS-0731224, ECCS-0501166, IIS-0713003 and ECCS-0901433, and by AFOSR grant FA9550-09-1-0253. C. Feng and C. M. Lagoa are with the Department of Electrical Engineering, The Pennsylvania State University, University Park, PA 16802, USA (e-mail: feng@psu.edu, lagoa@engr.psu.edu). N. Ozay and M. Sznaier are with the ECE Department, Northeastern University, Boston, MA 02115, USA (e-mail: ozayn@neu.edu, sznaier@neu.edu).)

Abstract— The problem of identifying discrete-time affine hybrid systems with noisy measurements is addressed in this paper. Given a finite number of input/output measurements and a bound on the measurement noise, the objective is to identify a switching sequence and a set of affine models that are compatible with the a priori information, while minimizing the number of affine models. While this problem has been successfully addressed in the literature when the input/output data is noise-free or corrupted by process noise, results for the case of measurement noise are limited; e.g., a randomized algorithm has been proposed in a previous paper [3]. In this paper, we develop a deterministic approach. Namely, by recasting the identification problem as polynomial optimization, we develop deterministic algorithms in which the inherent sparse structure is exploited. A finite-dimensional semi-definite problem is then given which is equivalent to the identification problem. Moreover, to address computational complexity issues, an equivalent rank minimization problem subject to deterministic LMI constraints is provided, as efficient convex relaxations for rank minimization are available in the literature. Numerical examples are provided, illustrating the effectiveness of the algorithms.

I. INTRODUCTION

In recent years, considerable effort has been put into the problem of identification of hybrid systems. In general, a hybrid system is a system whose behavior is determined by switching dynamics. These systems arise in many different contexts, for example, circuit networks, biological systems, and systems in which logic devices interact with continuous processes. In addition, they can be used to approximate nonlinear dynamics. Thus, due to the potential application to a vast set of practical problems, the problem of identifying input/output hybrid models has attracted considerable attention, and several approaches have been developed.

For the identification problem of piecewise affine (PWA) systems, there are many results available in the literature; one may refer to the thorough review [12] for a summary of recent developments. In the case where measurements are noise-free, an algebraic procedure, known as Generalized Principal Component Analysis (GPCA), has been proposed in [8], [15] to efficiently solve the problem. The problem can also be formulated as a mixed linear integer optimization problem [14] or in terms of linear complementarity inequalities [1], leading to generically NP-hard problems. More recently, a greedy algorithm has been proposed to identify the system while minimizing the number of switches [10]. For robust identification of PWA systems subject to process noise, an efficient moment-based convex approach using convex relaxations of rank minimization has been proposed in [9]. A similar approach was also pursued in [11] to solve a different problem: segmenting a collection of noisy measurements into subspaces. However, to the best of our knowledge, for the case of measurement noise the only result available is the approach proposed in [3], where a randomized algorithm is provided based on sparse polynomial optimization.

In this paper, we continue the line of research started in [3]. First, we provide an equivalent polynomial optimization problem, which inherently has a sparse structure and satisfies the so-called running intersection property. This sparse structure can be used to significantly reduce computational complexity; e.g., see [5], [6], [16]. The reasoning behind the proposed approach bears a strong connection to the hybrid decoupling used in GPCA. However, the approach in this paper preserves all system parameters as part of the optimization variables, while GPCA eliminates the structure of the parameters (this is elaborated in Section IV). Moreover, it is shown that the sparse polynomial optimization problem can be solved via a fixed-size semi-definite program (SDP). Furthermore, for larger problems, an equivalent rank minimization problem is provided. In formulating this rank minimization problem, inspired by the results in [9], we use similar tricks to isolate the system parameters from the unknown noise terms and, hence, eliminate them from the decision variables of the formulated rank minimization problem. A major advantage of the approach in this paper is that the matrix in the objective function is symmetric and of much smaller size than the ones used in [9]. One may note that this feature notably reduces the computational burden in solving convex relaxations of rank minimization problems.

The remainder of the paper is organized as follows. Section II defines the notation used and presents some background results related to sparse polynomial optimization. In Section III, we formally define the identification problem of PWA systems in the presence of measurement noise. In Section IV, we reformulate the identification problem as a polynomial optimization problem and show that it has an intrinsically sparse structure; a fixed-size SDP is provided which is proven to be equivalent to the identification problem. In Section V, an equivalent rank minimization problem with fixed-size LMI constraints is given and a drop-rank algorithm is then provided based on a convex relaxation of rank minimization. Illustrative numerical examples are provided in Section VI. Section VII concludes the paper with some final remarks and directions for further research.

II. PRELIMINARIES

A. Notation

$x^i$: abbreviation for $x_1^{i_1} \cdots x_d^{i_d}$, where $d$ is the dimension of the vector $x$.
$E_\mu[p(x)]$: the mean value of $p(x)$ with respect to the probability measure $\mu$ on the random variable $x$.
$\{m_i\}_0^N$: the moment sequence, where $m_i = E_\mu[x^i]$ for some probability measure $\mu$ and $0 \le i_1 + \cdots + i_d \le N$.
$M_N(\mathbf{m})$: the moment matrix in $\mathbb{R}^{\binom{N+d}{N} \times \binom{N+d}{N}}$ constructed from $\{m_i\}_0^{2N}$.
$M \succeq 0$: the matrix $M$ is positive semi-definite.
$\|x\|_p$: the $\ell_p$ norm of the vector $x$, $p = 2$ or $\infty$.

B. General Polynomial Optimization

Consider the following general constrained polynomial optimization problem:

$$p_K^* := \min_{x \in K} p_0(x) \tag{P1}$$

where $K \subset \mathbb{R}^d$ is a compact semi-algebraic set with nonempty interior defined as $K = \{x : p_i(x) \ge 0,\ i = 1, \ldots, L\}$, and the $p_i(x)$ are polynomials with total degree $d_i$. This problem is usually not convex and, hence, hard to solve in general. Yet, consider a related problem in the space of probability measures:

$$p_K^{**} := \min_{\mu \in \mathcal{P}(K)} \int p_0(x)\,\mu(dx) := \min_{\mu \in \mathcal{P}(K)} E_\mu[p_0(x)] \tag{P2}$$

where $\mathcal{P}(K)$ is the space of finite Borel probability measures on $K$. Although (P2) is an infinite-dimensional problem, it is, in contrast to (P1), convex. The following result, taken from [5], establishes the equivalence between the two problems.

Theorem 1: Problems (P1) and (P2) are equivalent; that is, $p_K^* = p_K^{**}$. If $x^*$ is a global minimizer of (P1), then the Dirac distribution $\mu^* = \delta_{x^*}$, with support on the point $x^*$, is a global minimizer of (P2). For every optimal solution $\mu^*$ of (P2), $p_0(x) = p_K^*$, $\mu^*$-almost everywhere.

One direct consequence of this theorem is that it is possible to develop a convergent sequence of LMI-based convex relaxations of problem (P1), in which the optimization variables are $m_i = E_\mu[x^i]$, the moments of the unknown distribution $\mu$. If, in addition, $p_0(x) - p_K^*$ has a Sum-of-Squares (SOS) representation on $K$, i.e.,

$$p_0(x) - p_K^* = t_0^2(x) + \sum_{i=1}^{L} p_i(x)\, t_i^2(x) \tag{1}$$

for some polynomial $t_0(x)$ of degree at most $N$ and some polynomials $t_i(x)$ of degree at most $N - \lceil d_i/2 \rceil$, then it is possible to construct an equivalent LMI-based convex optimization problem for (P2). To this effect, let

$$p_N^* = \min_{\mathbf{m}} \sum_{\alpha} p_{0,\alpha}\, m_\alpha \quad \text{s.t.}\quad M_N(\mathbf{m}) \succeq 0,\quad M_{N_i}(p_i\,\mathbf{m}) \succeq 0,\ i = 1, \ldots, L, \tag{2}$$

where $p_{0,\alpha}$ is the coefficient of $x^\alpha$ in $p_0(x)$, $N_i$ is the smallest integer no less than $N - d_i/2$, and $M_N(\mathbf{m})$ is the so-called moment matrix and $M_{N_i}(p_i\,\mathbf{m})$ the so-called localizing matrix, both constructed from the truncated moment sequence $\{m_i\}_0^{2N}$.

For illustration and clarity of exposition, consider the case where $x \in \mathbb{R}^2$. The moment matrix $M_N(\mathbf{m})$ is defined as

$$M_N(\mathbf{m}) = \begin{bmatrix} M_{0,0}(\mathbf{m}) & M_{0,1}(\mathbf{m}) & \cdots & M_{0,N}(\mathbf{m}) \\ M_{1,0}(\mathbf{m}) & M_{1,1}(\mathbf{m}) & \cdots & M_{1,N}(\mathbf{m}) \\ \vdots & \vdots & & \vdots \\ M_{N,0}(\mathbf{m}) & M_{N,1}(\mathbf{m}) & \cdots & M_{N,N}(\mathbf{m}) \end{bmatrix}, \qquad M_{j,k}(\mathbf{m}) = \begin{bmatrix} m_{j+k,0} & m_{j+k-1,1} & \cdots & m_{j,k} \\ m_{j+k-1,1} & m_{j+k-2,2} & \cdots & m_{j-1,k+1} \\ \vdots & \vdots & & \vdots \\ m_{k,j} & m_{k-1,j+1} & \cdots & m_{0,j+k} \end{bmatrix}.$$

Note that the size of $M_N(\mathbf{m})$ is $\binom{d+N}{N} \times \binom{d+N}{N}$. The localizing matrix $M_{N_i}(p_i\,\mathbf{m})$ is defined entrywise as

$$M_{N_i}(p_i\,\mathbf{m})(i,j) = \sum_{\alpha} p_{i,\alpha}\, m_{\beta(i,j)+\alpha},$$

where $p_{i,\alpha}$ is the coefficient of $x^\alpha$ in $p_i(x)$, $\mathbf{m}(i,j)$ is the entry $(i,j)$ of $M_N(\mathbf{m})$, and $\beta(i,j)$ is the subscript of that entry. To illustrate, consider $g_1 = a - x_1 x_2$; then

$$M_1(g_1\,\mathbf{m}) = \begin{bmatrix} a - m_{11} & a\,m_{10} - m_{21} & a\,m_{01} - m_{12} \\ a\,m_{10} - m_{21} & a\,m_{20} - m_{31} & a\,m_{11} - m_{22} \\ a\,m_{01} - m_{12} & a\,m_{11} - m_{22} & a\,m_{02} - m_{13} \end{bmatrix}.$$

Finally, according to Theorem 4.2 in [5], we have:

Theorem 2 (General Polynomial Optimization with SOS): In problem (P1), if $p_0(x) - p_K^*$ has the representation (1), then

$$p_N^* = p_K^*. \tag{3}$$
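To make the moment machinery above concrete, consider a minimal Python sketch of the order-$N = 2$ relaxation (2) for the toy problem $\min\{x^4 - 3x^2 : 1 - x^2 \ge 0\}$, whose minimum over $K = [-1, 1]$ is $-2$; cvxpy is assumed as the SDP front end, and the univariate moment and localizing matrices are written out by hand.

```python
import cvxpy as cp

# Moments m_i = E_mu[x^i], i = 0, ..., 4, with m_0 = 1 fixed.
m = cp.Variable(5)

# Order-2 moment matrix M_2(m) and localizing matrix M_1((1 - x^2) m).
M2 = cp.bmat([[m[0], m[1], m[2]],
              [m[1], m[2], m[3]],
              [m[2], m[3], m[4]]])
L1 = cp.bmat([[m[0] - m[2], m[1] - m[3]],
              [m[1] - m[3], m[2] - m[4]]])

# Objective E_mu[x^4 - 3x^2], linear in the moments, as in (2).
prob = cp.Problem(cp.Minimize(m[4] - 3 * m[2]),
                  [m[0] == 1, M2 >> 0, L1 >> 0])
prob.solve()
print(prob.value)   # approximately -2, attained at x = +/- 1
```

The relaxation is exact here because $x^4 - 3x^2 + 2 = (x^2 - 1)^2 + (1 - x^2)$ is precisely a representation of the form (1).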
C. Sparse Polynomial Optimization

The previous section describes how to build a finite SDP that solves a polynomial optimization problem, given that an SOS representation is known a priori to exist. However, in terms of complexity, this SDP may become computationally intractable if the problem size is large, i.e., if the dimension of $x$ is large and/or the degree $N$ is large. Note that the dimension of the moment matrix $M_N(\mathbf{m})$ is $\binom{d+N}{d}$, which grows polynomially in $N$ or $d$, but still very fast, as pointed out in [5], [7], [13]. On the other hand, many polynomial optimization problems encountered in practice have a sparse structure that can be exploited to decrease computational complexity; that is, each polynomial $p_i$ contains only a small fraction of the overall variables (e.g., see [16]).

If the set of indices of the variables appearing in each $p_i$ satisfies the so-called running intersection property, the size of the LMIs can be significantly reduced, namely to at most $\binom{N+\xi}{N}$, where $\xi$ is the largest number of variables appearing in each polynomial; e.g., see [6]. We now state the definition of this property.

Definition 1 (Running Intersection Property): Let $I_k$, $k = 1, \ldots, d$, be subsets of the variables $X = \{x_1, \ldots, x_d\}$ satisfying $\bigcup_{k=1}^{d} I_k = X$. If (i) each constraint polynomial $p_i(x)$ uses only variables in $I_k$ for some $k$, and (ii) the objective polynomial can be written as $p_0 = p_{0,1} + \cdots + p_{0,l}$, where each $p_{0,i}$ uses only variables in $I_k$ for some $k$, then the running intersection property is satisfied in problem (P1) if the collection $\{I_1, \ldots, I_d\}$ obeys the following condition:

$$I_{k+1} \cap \Big(\bigcup_{j=1}^{k} I_j\Big) \subseteq I_s \quad \text{for some } s \le k, \tag{4}$$

for every $k = 1, \ldots, d-1$.

Similarly to the result stated in Theorem 2, for a sparse polynomial optimization problem that satisfies the running intersection property and has a sparse SOS representation on $K$, one can construct a finite SDP as well. For simplicity, we denote by $M_N(\mathbf{m}, I_k)$ the moment matrix for the reduced set of variables $I_k$, and by $M_{N_i}(p_j\,\mathbf{m}, I_k)$ the localizing matrix with the reduced variables in $I_k$. In the spirit of the results in [5], [6], we have the following theorem.

Theorem 3 (Sparse Polynomial Optimization with SOS): Assume that (P1) satisfies the running intersection property, let

$$p_N^* = \min_{\mathbf{m}} \sum_{\alpha} p_{0,\alpha}\, m_\alpha \quad \text{s.t.}\quad M_N(\mathbf{m}, I_k) \succeq 0,\ k = 1, \ldots, d, \qquad M_{N_i}(p_i\,\mathbf{m}, I_{k(i)}) \succeq 0,\ i = 1, \ldots, L, \tag{5}$$

where $p_i(x)$ contains only variables in $I_{k(i)}$, and assume that $p_0(x)$ has a sparse SOS representation on $K$, i.e.,

$$p_0(x) - p_K^* = \sum_{k=1}^{d} \Big( t_{k,0}^2(x) + \sum_{i=1}^{L} p_i(x)\, t_{k,i}^2(x) \Big). \tag{6}$$

Then,

$$p_N^* = p_K^*. \tag{7}$$

III. SET MEMBERSHIP IDENTIFICATION

In this section, we define the hybrid system identification problem. We consider the problem of identifying single-input-single-output switched linear systems of the form

$$y_k = \sum_{i=1}^{n} a_i(\sigma_k)\,(y_{k-i} - e_{k-i}) + \sum_{i=1}^{m} b_i(\sigma_k)\, u_{k-i} + e_k, \tag{8}$$

where $u$, $y$ and $e$ denote input, output and noise, respectively. The magnitude of the noise is bounded by $\bar e > 0$. Moreover, $a_i$ and $b_i$ are the parameters of the system, and $\sigma_k \in \{1, 2, \ldots, s\}$ denotes which sub-system is active at time $k$.

Without any additional restrictions, the problem admits infinitely many solutions; for example, one can assign a trivial model to each measurement. Thus, one needs to add additional constraints or objectives to make the problem meaningful. In this paper, we aim at minimizing the number of sub-systems and/or minimizing the order of the linear models. For simplicity, let us assume that $s$, the number of switched sub-systems, is known and that $(n, m)$, the order of the linear models, is also known. This assumption does not imply loss of generality, since one can always increase $s$ and/or $(n, m)$ one by one until a meaningful solution is found. Then, the problem of interest can be formally stated as follows.

Problem 1: Given input and corrupted output measurements $u$, $y$ over the interval $[1, L]$, a bound $\bar e$ on the $\ell_\infty$ norm of the measurement noise $e$ (i.e., $|e_k| \le \bar e$ for $k \in [1, L]$), the number of sub-models $s$ and the order of the sub-models ($n$ and $m$), find a hybrid affine model of the form (8) that is consistent with all the a priori information and the measurement data, or conclude that none exists.
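To fix ideas, a minimal data-generation sketch of the form (8) is given below, assuming numpy and using hypothetical parameter values and switching pattern; it produces exactly the kind of data that Problem 1 takes as input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative switched model of the form (8) with n = m = 1:
#   y_k = a1(sigma_k) * (y_{k-1} - e_{k-1}) + b1(sigma_k) * u_{k-1} + e_k,
# with |e_k| <= e_bar.  All numbers below are hypothetical.
a1 = {1: 0.9, 2: -0.5}
b1 = {1: 1.0, 2: 1.0}
e_bar, L = 0.1, 40

u = rng.uniform(-1.0, 1.0, size=L + 1)              # input sequence
e = rng.uniform(-e_bar, e_bar, size=L + 1)           # bounded measurement noise
sigma = np.where(np.arange(L + 1) < L // 2, 1, 2)    # one switch halfway

y = np.zeros(L + 1)
for k in range(1, L + 1):
    j = int(sigma[k])
    y[k] = a1[j] * (y[k - 1] - e[k - 1]) + b1[j] * u[k - 1] + e[k]

# (u, y, e_bar, s, n, m) is the a priori information of Problem 1;
# the noise e and the switching sequence sigma are unknown to the identifier.
```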
Compared to the switched autoregressive exogenous (SARX) linear models considered in [1], [9], the model (8) assumes that one has measurement noise and does not contain unmodeled system dynamics (the formulation can easily be modified to include unmodeled dynamics). In [3], the case with measurement noise is considered and a polynomial optimization problem is formulated for finding a compatible hybrid model, if any, using an algebraic procedure known as GPCA. In the next section, we use a related but different algebraic procedure, with two main advantages. First, the parameters of the linear models are explicitly included in the optimization variables; i.e., once the polynomial optimization problem is solved, the system parameters are known immediately. Hence, there is no need to use parameter recovery algorithms as in GPCA-related approaches. Second, it is shown that the polynomial optimization problem can be solved via a fixed-size SDP.

IV. ALGEBRAIC REFORMULATION

In this section, based on the so-called hybrid decoupling constraint introduced in [15], one can see that equation (8) is equivalent to the polynomial equation

$$p_{0,k}(e, a, b) = \prod_{j=1}^{s} \Big[ y_k - \sum_{i=1}^{n} a_i(j)\,(y_{k-i} - e_{k-i}) - \sum_{i=1}^{m} b_i(j)\, u_{k-i} - e_k \Big] = 0, \tag{9}$$

which holds for all $k$. Then, the identification problem is equivalent to finding some admissible noise $e$ and parameters $a_i(j)$ and $b_i(j)$ such that $p_{0,k}(e, a, b) = 0$ for $k = 1, \ldots, L$.
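As a sanity check on the decoupling identity (9), a minimal sketch evaluating $p_{0,k}$ in the first-order case follows; the helper `p0k` and the numbers in the check are hypothetical.

```python
import numpy as np

def p0k(k, y, u, e, a, b):
    """Hybrid decoupling polynomial (9) for n = m = 1: the product over
    sub-models j of the equation error of sub-model j at time k."""
    residuals = [
        y[k] - a_j * (y[k - 1] - e[k - 1]) - b_j * u[k - 1] - e[k]
        for a_j, b_j in zip(a, b)
    ]
    return float(np.prod(residuals))

# Noise-free sanity check: y_1 is generated by the second sub-model,
# so the second factor of (9), and hence the product, is zero.
a, b = [0.9, -0.5], [1.0, 1.0]
y, u, e = [1.0, 0.0], [2.0, 0.0], [0.0, 0.0]
y[1] = a[1] * y[0] + b[1] * u[0]
assert abs(p0k(1, y, u, e, a, b)) < 1e-12
```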

To address this issue, we consider the following polynomial optimization problem.

Problem 2: Given the number of sub-systems $s$ and the order of the linear models $(n, m)$, find

$$\min_{e,a,b}\ p_0(e, a, b) \quad \text{s.t.}\quad \|e\|_\infty \le \bar e, \tag{10}$$

where

$$p_0(e, a, b) = \sum_{k=1}^{L} p_{0,k}(e, a, b)^2.$$

The equivalence between the above problem and Problem 1 is established in the following theorem.

Theorem 4: Given the number of sub-systems $s$ and the order of the sub-models $(n, m)$, if there exists at least one compatible hybrid affine model for Problem 1, then there exist noise $e$ and parameters $(a, b)$ such that the minimum of Problem 2 is zero. The converse is also true.

Proof: If there is a compatible hybrid model then, given the true values of the parameters and noise, one has $p_{0,k}(e, a, b) = 0$ for all $k$; hence, $p_0 = 0$. Since $p_0$ is an SOS, $p_0 \ge 0$, and hence the minimum of (10) is zero. Conversely, if $p_0 = 0$ for some $e, a, b$, then $p_{0,k}(e, a, b) = 0$ for all $k$. Since $p_{0,k}$ is a product of $s$ polynomials, one of them is equal to zero; denote the index of such a polynomial by $\sigma(k)$. Hence, a system with $a, b$ as its parameters is a compatible model, and $\sigma(k)$ is the index of the active sub-model at time $k$.

Remark 1: As mentioned before, there is a connection between the formulation above and GPCA-based approaches. There are, however, substantial differences between the two. In GPCA, an algebraic procedure is used to construct the so-called Veronese matrix; the related problem is to find the null space of that matrix and to extract the system parameters from the null vector, e.g., see [15]. In our procedure, on the other hand, the system parameters are part of the optimization variables. Hence, a necessary and sufficient condition is derived in Theorem 4, and once the optimization problem is solved, the parameters are determined immediately.

Although Problem 1 and Problem 2 are equivalent, at first glance it seems that the reformulation is at least as difficult to solve, since (10) contains the variables $a$, $b$ and $e$, where the dimension of $e$ equals the number of measurements. However, if one carefully examines the structure of the polynomial $p_0$, it can be seen that it is sparse. In fact, $p_0$ is a sum of squares of the $p_{0,k}$ and, for each $k$, $p_{0,k}$ is a polynomial in the variables $a$, $b$ and $e_{k-n}, \ldots, e_k$. Hence, problem (10) inherently satisfies the running intersection property of Definition 1, with

$$I_k = \{a, b, e_k, \ldots, e_{k+n}\}, \quad k = 1, 2, \ldots, L. \tag{11}$$
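The chain structure of the sets (11) can be verified mechanically, as in the minimal check below (the string labels standing in for the parameter and noise variables are hypothetical).

```python
# Variable groups (11): I_k = {a, b, e_k, ..., e_{k+n}}, k = 1, ..., L.
# "a" and "b" stand for the full parameter vectors, shared by every group.
n, L = 2, 10
I = [frozenset({"a", "b"} | {f"e{k + i}" for i in range(n + 1)})
     for k in range(1, L + 1)]

for k in range(1, L):                         # condition (4) of Definition 1
    overlap = I[k] & frozenset().union(*I[:k])
    assert any(overlap <= I[s] for s in range(k)), "RIP violated"
print("running intersection property holds")
```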
Moreover, $p_0$ is an SOS already, implying that, if the minimum of (10) is zero, $p_0 - p_0^*$ can be represented in the form (6); i.e., one just needs to take $t_{k,0} = p_{0,k}$ and $t_{k,i} = 0$. Consequently, Problem 2 can be solved via a fixed and finite-size SDP. This is summarized as follows.

Theorem 5: Given the number of sub-systems $s$ and the order of the sub-models $(n, m)$, consider the following optimization problem:

$$p_s^* = \min_{\mathbf{m}} \sum_{\alpha} p_{0,\alpha}\, m_\alpha \quad \text{s.t.}\quad M_N(\mathbf{m}, I_k) \succeq 0,\ k = 1, \ldots, L, \qquad M_{N_i}(p_i\,\mathbf{m}, I_{\beta(i)}) \succeq 0,\ i = 1, \ldots, L+n, \tag{12}$$

where $I_{\beta(i)} = \{a, b, e_{\beta(i)}, \ldots, e_{\beta(i)+n}\}$ is the partial variable set for $p_i$, with $\beta(i) = i$ for $i = 1, \ldots, L$ and $\beta(i) = L$ for $i = L+1, \ldots, L+n$; $N = 2s$ and $N_i = 2s - 1$ for $i = 1, \ldots, L+n$; and $p_i = \bar e^2 - e_i^2$, $i = 1, \ldots, L+n$. Then, if there exists at least one compatible hybrid affine model for Problem 1, $p_s^* = 0$ is the optimum of (12). Conversely, if $p_s^* = 0$, $\operatorname{rank} M_N(\mathbf{m}^*, I_k) = \operatorname{rank} M_{N_i}(\mathbf{m}^*, I_k)$ for all $k$, and $\operatorname{rank} M_N(\mathbf{m}^*, I_k \cap I_j) = 1$ for all pairs $(j, k)$ with $I_k \cap I_j \ne \emptyset$, then there exists at least one compatible model.

Proof: This is a direct consequence of Theorem 3, given that the running intersection property holds for the collection of variable sets $I_k$ defined in (11), and that $p_0$ has a representation of the form (6) obtained by taking $t_{k,0} = p_{0,k}$ and $t_{k,i} = 0$.

Remark 2: The rank condition $\operatorname{rank} M_N(\mathbf{m}^*, I_k) = \operatorname{rank} M_{N_i}(\mathbf{m}^*, I_k)$ and $\operatorname{rank} M_N(\mathbf{m}^*, I_k \cap I_j) = 1$ is a sufficient condition guaranteeing that the optimum of the SDP relaxation equals that of the corresponding polynomial optimization problem; see, e.g., [4], [6]. With this rank condition satisfied, an algorithm is given in [4] which can always extract an optimal moment sequence corresponding to a probability measure with point support.

Remark 3: In the case where both input noise and output noise are considered, the results above still apply. One just needs to add more variables for the input noise terms in the polynomial optimization problem, which then has a structure similar to the one shown above.

As illustrated above, by taking the sparsity into account, the optimization problem can be solved if its size is relatively small. One important observation is that the complexity is proportional to the number of measurements: if one fixes the structure of the hybrid system, the maximum size of the LMIs in the SDP remains the same and the number of LMIs grows proportionally. Hence, if the hybrid system is relatively simple, the identification problem can be solved by solving (12) directly. However, if the hybrid system becomes more complex, say, the number of sub-systems or the order of the sub-models increases, the problem becomes numerically difficult to solve. This comes from the fact that the size of $I_k$ is equal to $n_\nu = n + 1 + s(n + m)$, and the maximum size of the moment matrices is $\binom{n_\nu + 2s}{2s}$.
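The growth of the maximal moment-matrix size $\binom{n_\nu + 2s}{2s}$ is easy to tabulate, as in the minimal sketch below; the helper name is hypothetical.

```python
from math import comb

def max_moment_matrix_size(s, n, m):
    """Maximal moment-matrix size in the SDP (12): binom(n_nu + 2s, 2s)
    with n_nu = n + 1 + s*(n + m) variables per group I_k."""
    n_nu = n + 1 + s * (n + m)
    return comb(n_nu + 2 * s, 2 * s)

for s, n, m in [(2, 1, 1), (3, 1, 0), (3, 2, 1)]:
    print(f"s={s}, n={n}, m={m}: {max_moment_matrix_size(s, n, m)}")
# s=2, n=1, m=1: 210;  s=3, n=1, m=0: 462;  s=3, n=2, m=1: 18564 --
# the motivation for the rank-minimization reformulation of the next section.
```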

To overcome this numerical difficulty, motivated by GPCA-related algorithms, we formulate an equivalent rank minimization problem, using quadratic functions of the rows of the Veronese matrix to eliminate the system parameters $a$, $b$ from the optimization problem. This is described in the next section.

V. AN EQUIVALENT RANK MINIMIZATION PROBLEM

It can be seen that the main computational difficulty comes from the fact that all system parameters are part of the decision variables in the polynomial optimization. Thus, if the structure of the hybrid system is too complex, one may need large computational power to solve (12). In this section, we show that, by manipulating the rows of the noisy Veronese matrix, a rank minimization problem with finitely many LMIs can be formulated which is equivalent to the original identification problem. Though rank minimization is in general NP-hard, efficient relaxations are available, especially for symmetric positive semi-definite matrices, which is the case encountered here.

We now review the GPCA approach, which provides the motivation for the results presented. That is, one can write (9) in matrix form, in which the noise terms $e_k$ and the parameters $a$, $b$ are isolated. Note that the information about the parameters is contained in the null vector of the Veronese matrix. A similar idea has been used in [3] in the proposed randomized algorithm, where the computational cost is reduced by fixing one part of the variables in each iteration.

First, let us recall the construction of the Veronese matrix. Consider the polynomial equations (9) from time $k = 1$ to $L$; by collecting all measurements, one can build the noisy Veronese matrix $V_s$ and the polynomial equations

$$V_s(r, e)\, x = \begin{bmatrix} v_s(r_1, e) \\ \vdots \\ v_s(r_L, e) \end{bmatrix} x = 0, \tag{13}$$

where $x$ is a vector whose elements are polynomial functions of the system parameters. Then, the identification problem is equivalent to finding admissible noise $e$ such that $V_s(r, e)$ has a non-trivial null space, i.e., finding $e$ and a non-zero vector $x$ such that (13) holds. Once such a null space is found, it can be used to recover the parameters of the sub-systems; e.g., see [15]. This is summarized in the following proposition.

Proposition 1 (Theorems 1 and 2 in [15]): Given the number of sub-systems $s$ and the order of the sub-models $(n, m)$, if there exists at least one compatible hybrid affine model for Problem 1, then there exists some admissible noise $e$ such that $V_s(r, e)$ is rank deficient; and the system parameters $a$, $b$ can be extracted from its null space.

A simple example illustrates how to construct such a noisy Veronese matrix.

Example 1: For $s = 2$ and order $(n, m) = (1, 1)$, equation (9) can be written as

$$\big(y_k - e_k - a_1(y_{k-1} - e_{k-1}) - b_1 u_k\big)\big(y_k - e_k - a_2(y_{k-1} - e_{k-1}) - b_2 u_k\big) = 0.$$

Hence, the $k$-th row of the noisy Veronese matrix $V_s$ can be written as

$$v_k(r, e) = \begin{bmatrix} (y_k - e_k)^2 \\ (y_{k-1} - e_{k-1})(y_k - e_k) \\ u_k (y_k - e_k) \\ u_k (y_{k-1} - e_{k-1}) \\ (y_{k-1} - e_{k-1})^2 \\ u_k^2 \end{bmatrix}^T.$$

Remark 4: The size of the Veronese matrix is $L \times \binom{n+m+s}{s}$. For each row of the matrix, the highest total degree of the polynomials (in $e$) is $s$, the number of sub-systems.
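For the setting of Example 1, a minimal sketch follows that assembles the rows $v_k(r, e)$ and tests rank deficiency of $V_s(r, e)$ via the smallest eigenvalue of the Gram matrix $\sum_k v_k(r, e)^T v_k(r, e)$, the quadratic form that reappears in the proof of Theorem 6 below; the data-generating parameters are hypothetical.

```python
import numpy as np

def veronese_row(yk, ykm1, uk, ek, ekm1):
    """Row v_k(r, e) for s = 2, (n, m) = (1, 1): all degree-2 monomials in
    (y_k - e_k, y_{k-1} - e_{k-1}, u_k), ordered as in Example 1."""
    z0, z1, z2 = yk - ek, ykm1 - ekm1, uk
    return np.array([z0**2, z1 * z0, z2 * z0, z2 * z1, z1**2, z2**2])

def gram(y, u, e):
    """G(e) = sum_k v_k(r, e)^T v_k(r, e); V_s(r, e) is rank deficient
    exactly when the smallest eigenvalue of G(e) is zero."""
    rows = np.array([veronese_row(y[k], y[k - 1], u[k], e[k], e[k - 1])
                     for k in range(1, len(y))])
    return rows.T @ rows

# Noise-free data switching between two first-order sub-models:
rng = np.random.default_rng(1)
a, b = [0.9, -0.5], [1.0, 0.3]
u, y = rng.uniform(-1, 1, 30), np.zeros(30)
for k in range(1, 30):
    j = 0 if k < 15 else 1
    y[k] = a[j] * y[k - 1] + b[j] * u[k]

print(np.linalg.eigvalsh(gram(y, u, np.zeros(30)))[0])   # ~0: rank deficient
```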
Given the noisy Veronese matrix constructed above, let us define the matrix

$$Q(\mathbf{m}) = \sum_{i=1}^{L} E_\mu\big\{ v_s(r_i, e)^T v_s(r_i, e) \big\}, \tag{14}$$

where $\mathbf{m}$ is the truncated moment sequence corresponding to the probability measure $\mu$. Hence, $Q(\mathbf{m})$ is a symmetric matrix, linear in $\mathbf{m}$. We are now ready to propose the equivalent rank minimization problem.

Problem 3: Given the number of sub-systems $s$ and the order of the sub-models $(n, m)$, find a rank-deficient matrix $Q(\mathbf{m})$ defined in (14) subject to

$$M_N(\mathbf{m}, I_k) \succeq 0,\ k = 1, \ldots, L, \qquad M_{N_i}(p_i\,\mathbf{m}, I_{\beta(i)}) \succeq 0,\ i = 1, \ldots, L+n,$$

where $I_{\beta(i)} = \{e_{\beta(i)}, \ldots, e_{\beta(i)+n}\}$ is the partial variable set for $p_i$, with $\beta(i) = i$ for $i = 1, \ldots, L$ and $\beta(i) = L$ for $i = L+1, \ldots, L+n$; $N = s$ and $N_i = s - 1$ for $i = 1, \ldots, L+n$; and $p_i = \bar e^2 - e_i^2$, $i = 1, \ldots, L+n$.

We now observe that this problem is indeed equivalent to the problem of finding an admissible noise sequence that results in a rank-deficient Veronese matrix.

Theorem 6: If there exists an admissible noise sequence $e$ such that the Veronese matrix $V_s(r, e)$ is rank deficient, then Problem 3 has a feasible solution. Conversely, if Problem 3 has a feasible solution, and if $\operatorname{rank} M_N(\mathbf{m}, I_k) = \operatorname{rank} M_{N_i}(\mathbf{m}, I_k)$ for all $k$ and $\operatorname{rank} M_N(\mathbf{m}, I_k \cap I_j) = 1$ for all pairs $(j, k)$ with $I_k \cap I_j \ne \emptyset$, then $V_s(r, e)$ is rank deficient for some admissible noise $e$.

Proof: First note that there exist some unit vector $x$ and admissible noise $e$ such that $V_s(r, e)\, x = 0$ if and only if

$$\sum_{i=1}^{L} x^T v_s(r_i, e)^T v_s(r_i, e)\, x = 0. \tag{15}$$

Since $v_s(r_i, e)^T v_s(r_i, e)$ is positive semi-definite, this is equivalent to finding a unit vector $x^*$ such that the minimum of the following problem is zero:

$$\min_{e \in K}\ (x^*)^T \Big[ \sum_{i=1}^{L} v_s(r_i, e)^T v_s(r_i, e) \Big] x^*, \tag{16}$$

where $K$ is defined as $K = \{ e : p_k = \bar e^2 - e_k^2 \ge 0,\ k = 1, \ldots, L+n \}$. Note that the objective function in (16) is an SOS in terms of $e$ for any fixed $x^*$; hence, it can be represented in the form (6). Moreover, the running intersection property is satisfied for (16).

Therefore, by Theorem 3, it follows that the minimum of the following problem is zero:

$$\min_{\mathbf{m}}\ (x^*)^T Q(\mathbf{m})\, x^* \quad \text{s.t.}\quad M_N(\mathbf{m}, I_k) \succeq 0,\ k = 1, \ldots, L, \qquad M_{N_i}(p_i\,\mathbf{m}, I_{k(i)}) \succeq 0,\ i = 1, \ldots, L+n, \tag{17}$$

where $N = s$, $N_i = s - 1$ for all $i$, and $Q$ is defined in (14). Since $v_s(r_i, e)^T v_s(r_i, e)$ is positive semi-definite, $Q(\mathbf{m})$ is positive semi-definite as well (note that $Q(\mathbf{m})$ is PSD because $\mathbf{m}$ satisfies the LMI constraints in (17); this comes directly from the fact that the finite moment condition is the dual formulation of the related SOS problem). Hence, $(x^*)^T Q(\mathbf{m})\, x^* = 0$ for some unit vector $x^*$ if and only if the matrix $Q(\mathbf{m})$ is rank deficient. Conversely, if the rank condition holds, one can always extract a moment sequence corresponding to a probability measure with point support such that $Q(\mathbf{m}^*)$ is rank deficient. Hence, there is an admissible noise $e$ such that $\sum_{i=1}^{L} v_s(r_i, e)^T v_s(r_i, e)$ is rank deficient. This implies that the Veronese matrix is rank deficient for the same noise $e$, which concludes the proof.

Although rank minimization is NP-hard, efficient convex relaxations are available. In particular, good approximate solutions can be obtained by using a log-det heuristic that relaxes rank minimization into a sequence of convex problems; e.g., see [2], [9]. Furthermore, as stated in Theorem 6, it suffices to find a rank-deficient solution. Thus, we use a modification of the log-det heuristic that aims at dropping the rank by one, as illustrated in the following.

Algorithm 1 (Drop Rank)
  Set X <- Q(m), X_0 <- I, k <- 0
  repeat
    Solve X_{k+1} <- argmin Tr[(X_k + δI)^{-1} X]  s.t.  m in M,
      where M is the feasible set of Problem 3
    Decompose the symmetric matrix X_k = T^{-1} D T
    Set δ <- min diag(D)
    Set k <- k + 1
  until a convergence criterion is reached
  return X_k

Remark 5: Once a rank-deficient matrix $Q(\mathbf{m}^*)$ is found, together with a moment sequence $\mathbf{m}^*$, a null vector can easily be determined by computing its eigenvectors and eigenvalues. Moreover, by Theorem 6, there exists some admissible noise $e$ such that the Veronese matrix $V_s(r, e)$ is rank deficient. One can extract such a noise value $e$ from the moment sequence using the algorithm introduced in [4]. Once the noise values are estimated, the problem can be converted to the noise-free case by plugging the noise estimates into the Veronese matrix, and the system parameters can then be computed using the procedure introduced in [8].
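The reweighted trace iteration of Algorithm 1 can be prototyped in a few lines, as in the sketch below, where a toy PSD-completion set stands in for the moment constraints M of Problem 3 (on the identification problem itself, $X$ would be $Q(\mathbf{m})$ subject to the LMIs of Problem 3); cvxpy is an assumed dependency.

```python
import cvxpy as cp
import numpy as np

# Toy stand-in for the feasible set M of Problem 3: PSD matrices with the
# off-diagonal entries below fixed (a rank-deficient point exists here).
n = 3
X = cp.Variable((n, n), symmetric=True)
feasible = [X >> 0, X[0, 1] == 1, X[1, 2] == 1, X[0, 2] == 1]

Xk, delta = np.eye(n), 1.0                       # X_0 = I
for _ in range(5):
    W = np.linalg.inv(Xk + delta * np.eye(n))    # (X_k + delta*I)^{-1}
    cp.Problem(cp.Minimize(cp.trace(W @ X)), feasible).solve()
    Xk = X.value
    eigvals = np.linalg.eigvalsh(Xk)             # eigenvalues = diag(D)
    delta = max(eigvals.min(), 1e-9)             # delta = min diag(D)
    if eigvals.min() <= 1e-7 * eigvals.max():    # rank dropped: done
        break

print("eigenvalues of returned X:", np.round(np.linalg.eigvalsh(Xk), 6))
```

On this toy set the very first reweighted step already lands on a rank-one point; on Problem 3 the loop is iterated until a convergence criterion is reached, as in Algorithm 1.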
VI. NUMERICAL RESULTS

In this section, we present several numerical examples illustrating the proposed algorithms. In the first example, a simple hybrid system with a limited number of measurements is considered; the identification problem is solved by solving the SDP (12) directly, according to Theorem 5. In the second example, a more complex hybrid model is considered; it is shown that the rank minimization relaxation of Algorithm 1 identifies the system parameters efficiently. A few comments are also given regarding how the noise bound may affect the identification results.

A. Example I: via Sparse Polynomial Optimization

In this example, we consider a hybrid linear switching system with $s = 3$ and $(n, m) = (1, 0)$, where the sub-models are

$y_k = 0.9\,(y_{k-1} - e_{k-1}) + u_{k-1} + e_k$  (Sub-model 1)
$y_k = 0.5\,(y_{k-1} - e_{k-1}) + u_{k-1} + e_k$  (Sub-model 2)
$y_k = -0.7\,(y_{k-1} - e_{k-1}) + u_{k-1} + e_k$  (Sub-model 3)

and the hybrid system is modeled as in (8), i.e., $y_k = a_1(\sigma_k)(y_{k-1} - e_{k-1}) + u_{k-1} + e_k$, where $\sigma_k \in \{1, 2, 3\}$ depends on which sub-model is active at time $k$. In the simulation, we set $\sigma(k) = 1$ for $k \in [1, 5]$, $\sigma(k) = 2$ for $k \in [6, 10]$ and $k \in [16, 20]$, and $\sigma(k) = 3$ for $k \in [11, 15]$. The experimental data was obtained with a unit step input and with uniformly randomly generated noise bounded by $\bar e$. First we set the noise bound $\bar e = 0.1$ and then increase it to $\bar e = 0.3$. The identification problems are solved based on Theorem 5, via the SDP relaxation built from the equivalent sparse polynomial optimization problem. The parameter values used in the simulation and their estimates for the different noise bounds are shown in Table I.

TABLE I. ESTIMATED AND TRUE VALUES OF PARAMETERS

                         True      ē = 0.1    ē = 0.3
  Sub-model 1   a_1      0.9000    0.9352     0.5825
  Sub-model 2   a_1      0.5000    0.4867     0.5825
  Sub-model 3   a_1     -0.7000   -0.6841    -0.6652

Moreover, the active sub-system at time $k$ can be determined once the sub-models are identified. The results are shown in Fig. 1.

[Fig. 1. Identified and true active sub-systems vs. time; curves for e = 0, |e| < 0.1 and |e| < 0.3 (y-axis: index of sub-system; x-axis: time, k = 0, ..., 20).]

Remark 6: As one can see from Table I and Fig. 1, if the noise bound is large, there are compatible hybrid systems with a smaller number of sub-systems. However, one should note that this is not a failure of our algorithm: the hybrid system obtained is still compatible with all the measurements and the a priori information. This should not be surprising since, when the measurements are limited, the difference between model dynamics can be covered by the noise, especially when the noise is large. Moreover, when the noise is large, one may obtain a compatible hybrid system with linear sub-models of lower order.

B. Example II: via Rank Minimization

In this example, we consider a more complex hybrid system to show the computational efficiency of using Theorem 6 and Algorithm 1. It is assumed that there are three sub-systems with order $(n, m) = (2, 1)$; that is,

$$y_k = a_{1,i}(y_{k-1} - e_{k-1}) + a_{2,i}(y_{k-2} - e_{k-2}) + b_{1,i} u_{k-1} + e_k,$$

where $i \in \{1, 2, 3\}$ depends on which sub-system is active at time $k$. We take 120 measurements with 11 switches among these three sub-systems. The simulation is run for two different noise levels, i.e., $\bar e = 0.1$ and $\bar e = 0.3$. The input signal $u$ is uniformly randomly generated between $-1$ and $1$. In Table II, we present the values of the system parameters identified by the algorithm for the noise levels considered.

TABLE II. ESTIMATED AND TRUE VALUES OF PARAMETERS

                         True      |e| < 0.1   |e| < 0.3
  Sub-model 1   a_1      0.9000    0.9183      0.9586
                a_2      0.1800    0.1736      0.2379
                b_1      0.2000    0.2310      0.2863
  Sub-model 2   a_1      0.5000    0.5155      0.5494
                a_2      0.0600    0.0564      0.1014
                b_1      1.0000    1.1310      1.0802
  Sub-model 3   a_1     -1.0000   -0.9716     -0.9618
                a_2      0.3000    0.3144      0.3287
                b_1      0.6000    0.6369      0.6991

Moreover, the active sub-system at time $k$ can be determined once the sub-models are identified. The results are shown in Fig. 2.

[Fig. 2. Identified and true active sub-systems vs. time; curves for e = 0, |e| < 0.1 and |e| < 0.3 (y-axis: index of sub-system; x-axis: time, k = 0, ..., 120).]

VII. CONCLUDING REMARKS

This paper addresses the identification problem of discrete-time affine hybrid systems with input/output data corrupted by measurement noise. The proposed approach first formulates the identification problem as a polynomial optimization problem. It is shown that the optimization problem obtained inherently has a sparse structure, which can be used to significantly reduce the size of the SDPs and, hence, the computational cost. Moreover, since the objective function has an SOS representation, the relaxation is equivalent to the original problem (the size of the relaxation is determined exclusively by the number of sub-models). Furthermore, to address the computational difficulties arising when the hybrid system is complex, a drop-rank approach is given to solve the original identification problem. Though rank minimization is in general NP-hard, efficient convex relaxations are available in the literature. Numerical examples are provided to illustrate the effectiveness of the proposed algorithms.

It is shown in the paper that, by isolating the system parameters from the decision variables of the optimization problem, the computational cost can be substantially reduced. However, the structural information about the system parameters is then lost. Hence, ongoing work is aimed at developing approaches that separate the search for admissible noise from the search for system parameters while preserving this structural information.

REFERENCES

[1] A. Bemporad, A. Garulli, S. Paoletti, and A. Vicino. A bounded-error approach to piecewise affine system identification. IEEE Trans. Automat. Contr., 50(10):1567-1580, 2005.
[2] M. Fazel, H. Hindi, and S. Boyd. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. In Proc. Amer. Contr. Conf., 2003.
[3] C. Feng, C. M. Lagoa, and M. Sznaier. Hybrid system identification via sparse polynomial optimization. In Proc. Amer. Contr. Conf., 2010.
[4] D. Henrion and J. B. Lasserre. Detecting global optimality and extracting solutions in GloptiPoly. Technical report, LAAS-CNRS, 2005.
[5] J. B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM J. Optim., 11(3):796-817, 2001.
[6] J. B. Lasserre. Convergent SDP-relaxations in polynomial optimization with sparsity. SIAM J. Optim., 17(3):822-843, 2006.
[7] M. Laurent. Sums of squares, moment matrices and optimization over polynomials. Emerging Applications of Algebraic Geometry, 149:157-270, 2009.
[8] Y. Ma and R. Vidal. Identification of deterministic switched ARX systems via identification of algebraic varieties. In Hybrid Systems: Computation and Control, pages 449-465, 2005.
[9] N. Ozay, C. Lagoa, and M. Sznaier. Robust identification of switched affine systems via moments-based convex optimization. In Proc. IEEE Conf. Dec. Contr., 2009.
[10] N. Ozay, M. Sznaier, C. Lagoa, and O. Camps. A sparsification approach to set membership identification of a class of affine hybrid systems. In Proc. IEEE Conf. Dec. Contr., pages 123-130, 2008.
[11] N. Ozay, M. Sznaier, C. M. Lagoa, and O. Camps. GPCA with denoising: a moments-based convex approach. Submitted to IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[12] S. Paoletti, A. Juloski, G. Ferrari-Trecate, and R. Vidal. Identification of hybrid systems: a tutorial. European Journal of Control, 13(2):123-130, 2007.
[13] P. A. Parrilo. Semidefinite programming relaxations for semialgebraic problems. Math. Program., 96(2, Ser. B):293-320, 2003.
[14] J. Roll, A. Bemporad, and L. Ljung. Identification of piecewise affine systems via mixed-integer programming. Automatica, 40(1):37-50, 2004.
[15] R. Vidal, S. Soatto, Y. Ma, and S. Sastry. An algebraic geometric approach to the identification of a class of linear hybrid systems. In Proc. IEEE Conf. Dec. Contr., pages 167-172, 2003.
[16] H. Waki, S. Kim, M. Kojima, and M. Muramatsu. Sums of squares and semidefinite program relaxations for polynomial optimization problems with structured sparsity. SIAM J. Optim., 17(1):218-242, 2006.