Interconnected Systems of Fliess Operators

W. Steven Gray and Yaqin Li
Department of Electrical and Computer Engineering
Old Dominion University, Norfolk, Virginia 23529, USA

Abstract: Given two analytic nonlinear input-output systems represented as Fliess operators, four system interconnections are considered in a unified setting: the parallel connection, product connection, cascade connection and feedback connection. In each case, the corresponding generating series is produced, when one exists, and conditions for convergence of the corresponding Fliess operator are given. In the process, an existing notion of a composition product for formal power series is generalized to the multivariable setting, and its set of known properties is expanded. In addition, the notion of a feedback product for formal power series is introduced and characterized.

1 Introduction

Let $I = \{0, 1, \ldots, m\}$ denote an alphabet and $I^*$ the set of all words over $I$. A formal power series in $I$ is any mapping of the form $I^* \to \mathbb{R}^l$, and the set of all such mappings will be denoted by $\mathbb{R}^l\langle\langle I \rangle\rangle$. For each $c \in \mathbb{R}^l\langle\langle I \rangle\rangle$, one can formally associate a corresponding $m$-input, $l$-output operator $F_c$ in the following manner. Let $p \ge 1$ and $a < b$ be given. For a measurable function $u : [a, b] \to \mathbb{R}^m$, define $\|u\|_p = \max\{\|u_i\|_p : 1 \le i \le m\}$, where $\|u_i\|_p$ is the usual $L_p$-norm for a measurable real-valued function $u_i$ defined on $[a, b]$. Let $L^m_p[a, b]$ denote the set of all measurable functions defined on $[a, b]$ having a finite $\|\cdot\|_p$ norm. With $t_0, T \in \mathbb{R}$ fixed and $T > 0$, define inductively for each $\eta \in I^*$ the mapping $E_\eta : L^m_1[t_0, t_0 + T] \to C[t_0, t_0 + T]$ by $E_\emptyset = 1$ and
\[
E_{i_k i_{k-1} \cdots i_1}[u](t, t_0) = \int_{t_0}^{t} u_{i_k}(\tau)\, E_{i_{k-1} \cdots i_1}[u](\tau, t_0)\, d\tau,
\]
where $u_0(t) \equiv 1$. The input-output operator corresponding to $c$ is then
\[
F_c[u](t) = \sum_{\eta \in I^*} (c, \eta)\, E_\eta[u](t, t_0),
\]
which is referred to as a Fliess operator. In the classical literature where these operators first appeared [4, 5, 6, 8, 9, 10], it is normally assumed that there exist real numbers $K > 0$ and $M \ge 1$ such that
\[
|(c, \eta)| \le K M^{|\eta|}\, |\eta|!, \quad \forall \eta \in I^*, \tag{1.1}
\]
where $|z| = \max\{|z_1|, |z_2|, \ldots, |z_l|\}$ when $z \in \mathbb{R}^l$, and $|\eta|$ denotes the number of symbols in $\eta$. These growth conditions on the coefficients of $c$ ensure that there exist positive real numbers $R$ and $T_0$ such that for all piecewise continuous $u$ with $\|u\| \le R$ and $T \le T_0$, the series defining $F_c$ converges uniformly and absolutely on $[t_0, t_0 + T]$. Under such conditions the power series $c$ is said to be locally convergent.
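For readers who wish to experiment with these definitions numerically, the following is a minimal sketch (not part of the paper) of how a truncated Fliess operator can be evaluated directly from the iterated-integral definition above. The single-input case $m = 1$ is assumed, words are represented as tuples over $\{0, 1\}$, the integrals are computed with cumulative trapezoidal quadrature, and the function names and truncation length are choices of the sketch, not notation from the paper.

```python
# Sketch: evaluate y(t) = sum_{|eta| <= N} (c, eta) E_eta[u](t, t0) for m = 1,
# so the alphabet is I = {0, 1} and u_0(t) == 1 by convention.
import itertools
import numpy as np

def iterated_integral(word, u, t):
    """E_eta[u] on the grid t for eta = (i_k, ..., i_1), built up from the right
    via E_{i w}[u](t) = int_{t0}^t u_i(tau) E_w[u](tau) dtau (trapezoid rule)."""
    E = np.ones_like(t)                              # E_emptyword = 1
    for i in reversed(word):                         # innermost letter i_1 first
        ui = np.ones_like(t) if i == 0 else u
        integrand = ui * E
        dt = np.diff(t)
        E = np.concatenate(([0.0],
            np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dt)))
    return E

def fliess_truncated(c, u, t, N):
    """Sum (c, eta) E_eta[u](t) over all words of length <= N;
    c is a dict mapping words (tuples over {0, 1}) to real coefficients."""
    y = np.zeros_like(t)
    for n in range(N + 1):
        for word in itertools.product((0, 1), repeat=n):
            coeff = c.get(word, 0.0)
            if coeff != 0.0:
                y += coeff * iterated_integral(word, u, t)
    return y

# Sanity check: with (c, x1^k) = 1 for all k, the series sums to exp(int_0^t u),
# because E_{x1^k}[u](t) = (int_0^t u)^k / k!.
t = np.linspace(0.0, 0.5, 501)
u = 0.8 * np.cos(3.0 * t)
c = {tuple([1] * k): 1.0 for k in range(9)}
y = fliess_truncated(c, u, t, N=8)
iu = np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(t))))
print(np.max(np.abs(y - np.exp(iu))))                # small quadrature/truncation error
```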

More recently, it was shown in [7] that the growth condition (1.1) also implies that $F_c$ constitutes a well-defined operator from $B^m_p(R)[t_0, t_0 + T]$ into $B^l_q(S)[t_0, t_0 + T]$ for sufficiently small $R, S, T > 0$, where the numbers $p, q \in [1, \infty]$ are conjugate exponents, i.e., $1/p + 1/q = 1$ with $(1, \infty)$ being a conjugate pair by convention.

[Figure 1: Elementary system interconnections. (a) Parallel connection. (b) Product connection. (c) Cascade connection. (d) Feedback connection.]

In many applications input-output systems are interconnected in a variety of ways. Given two Fliess operators $F_c$ and $F_d$, where $c, d \in \mathbb{R}^l\langle\langle I \rangle\rangle$ are locally convergent, Figure 1 shows four elementary interconnections. In the case of the cascade and feedback connections it is assumed that $l = m$. The general goal of this paper is to describe in a unified manner the generating series for each of the four interconnections shown in Figure 1 and conditions under which it is locally convergent. The parallel connection is the trivial case, and the product connection was analyzed in [11]. They are included for completeness, and some of the analysis is applicable to the other two interconnections. It was shown in [3] for the SISO case (i.e., $l = m = 1$) that there always exists a series $c \circ d$ such that $y = F_c[F_d[u]] = F_{c \circ d}[u]$, but a multivariable analysis of this composition product is apparently not available in the literature, nor are any results about local convergence. So in Section 2 the composition product is first investigated independent of the interconnection problem. It is defined in the multivariable setting and various fundamental properties are presented. Then a condition is introduced under which the composition product preserves both rationality and local convergence. Finally, in preparation for the feedback analysis, it is next shown that the composition product produces a contractive mapping on the set of all formal power series using the familiar ultrametric. In Section 3, the three nonrecursive connections, namely the parallel, product and cascade connections, are analyzed primarily by applying the results of Section 2.

In Section 4 the feedback connection is considered. Such a system is said to be well-posed if the support of $c$ and $d$ each contain at least one word having a nonzero symbol. Otherwise, there is no real recursive structure, and a degenerate case results. If, for example, $F_c$ is a linear operator then formally the solution to the feedback equation
\[
y = F_c[u + F_d[y]] \tag{1.2}
\]
is
\[
y = F_c[u] + F_c \circ F_d \circ F_c[u] + \cdots \tag{1.3}
\]
It is not immediately clear that this series converges in any manner, and in particular, converges to another Fliess operator, say $F_{c@d}$ for some $c@d \in \mathbb{R}^m\langle\langle I \rangle\rangle$. When $F_c$ is nonlinear, the problem is further complicated by the fact that a simple series representation (1.3) is not possible. Thus, for the case where the system inputs are being generated by an exosystem which is itself a Fliess operator, a sufficient condition is given under which a unique solution to the feedback equation exists. Then the closed-loop system is characterized in terms of a new Fliess operator when a certain series factorization property is available. This leads to an implicit characterization of the feedback product, $c@d$, for formal power series.

2 The Composition Product

The composition product of two series over an alphabet $X = \{x_0, x_1\}$ is defined recursively in terms of the shuffle product, denoted here by $\sqcup\!\sqcup$ [2, 3]. For any $\eta \in X^*$ and $d \in \mathbb{R}\langle\langle X \rangle\rangle$, let
\[
\eta \circ d :=
\begin{cases}
\eta & : |\eta|_{x_1} = 0 \\
x_0^{k+1}\,[\,d \sqcup\!\sqcup (\eta' \circ d)\,] & : \eta = x_0^k x_1 \eta',\ k \ge 0,
\end{cases} \tag{2.4}
\]
where $|\eta|_{x_1}$ denotes the number of symbols in $\eta$ equal to $x_1$. For $c, d \in \mathbb{R}\langle\langle X \rangle\rangle$ the definition is extended to
\[
c \circ d = \sum_{\eta \in X^*} (c, \eta)\, \eta \circ d. \tag{2.5}
\]
For SISO systems it is easily verified that $F_c \circ F_d = F_{c \circ d}$. For the multivariable case it is necessary to consider power series of the form $d : X^* \to \mathbb{R}^m$, where $X = \{x_0, x_1, \ldots, x_m\}$ is an arbitrary alphabet with $m + 1$ letters. In that case, the defining equation (2.4) becomes
\[
\eta \circ d =
\begin{cases}
\eta & : |\eta|_{x_i} = 0,\ i \ne 0 \\
x_0^{k+1}\,[\,d_i \sqcup\!\sqcup (\eta' \circ d)\,] & : \eta = x_0^k x_i \eta',\ k \ge 0,\ i \ne 0,
\end{cases}
\]
where $d_i : \xi \mapsto (d, \xi)_i$, the $i$-th component of $(d, \xi)$. Observe that in general for
\[
\eta = x_0^{n_k} x_{i_k} x_0^{n_{k-1}} x_{i_{k-1}} \cdots x_0^{n_1} x_{i_1} x_0^{n_0}, \tag{2.6}
\]
where $i_j \ne 0$ for $j = 1, \ldots, k$, it follows that
\[
\eta \circ d = x_0^{n_k+1}\bigl[\,d_{i_k} \sqcup\!\sqcup\, x_0^{n_{k-1}+1}\bigl[\,d_{i_{k-1}} \sqcup\!\sqcup \cdots x_0^{n_1+1}\bigl[\,d_{i_1} \sqcup\!\sqcup\, x_0^{n_0}\bigr] \cdots \bigr]\bigr].
\]
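The recursion (2.4)-(2.5) is straightforward to implement for polynomial (finitely supported) series. The sketch below is an illustration only, for the SISO alphabet $X = \{x_0, x_1\}$: words are tuples over $\{0, 1\}$ read left to right, a series is a dict mapping words to real coefficients, and all identifiers are assumptions of the sketch rather than notation from the paper.

```python
# Sketch: shuffle product and the SISO composition product of (2.4)-(2.5)
# for finitely supported series over X = {x0, x1}, with x0 -> 0 and x1 -> 1.
from collections import defaultdict

def shuffle_words(a, b):
    """Return {word: multiplicity} for the shuffle product of two words."""
    if not a: return {b: 1}
    if not b: return {a: 1}
    out = defaultdict(int)
    for w, m in shuffle_words(a[1:], b).items():
        out[(a[0],) + w] += m
    for w, m in shuffle_words(a, b[1:]).items():
        out[(b[0],) + w] += m
    return dict(out)

def shuffle(c, d):
    """Shuffle product of two polynomial series (dicts word -> coefficient)."""
    out = defaultdict(float)
    for eta, ce in c.items():
        for xi, de in d.items():
            for w, m in shuffle_words(eta, xi).items():
                out[w] += ce * de * m
    return dict(out)

def word_compose(eta, d):
    """eta o d per (2.4): eta itself if it contains no x1, otherwise
    x0^{k+1}[ d shuffle (eta' o d) ] for eta = x0^k x1 eta'."""
    if 1 not in eta:
        return {eta: 1.0}
    k = eta.index(1)                          # eta = x0^k x1 eta'
    inner = shuffle(d, word_compose(eta[k + 1:], d))
    return {(0,) * (k + 1) + w: v for w, v in inner.items()}

def compose(c, d):
    """c o d = sum_eta (c, eta) (eta o d), equation (2.5)."""
    out = defaultdict(float)
    for eta, ce in c.items():
        for w, v in word_compose(eta, d).items():
            out[w] += ce * v
    return dict(out)

# For instance, x1 o d = x0[d shuffle 1], i.e. a left shift of d by x0.
d = {(1,): 2.0, (0, 1): -1.0}
print(compose({(1,): 1.0}, d))                # {(0, 1): 2.0, (0, 0, 1): -1.0}
print(compose({(1, 1): 1.0}, d))              # x0[d shuffle x0[d]]
```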

The following theorem guarantees that the composition product is well defined by ensuring that the series (2.5) is summable.

Theorem 2.1 Given a fixed $d \in \mathbb{R}^m\langle\langle X \rangle\rangle$, the family of series $\{\eta \circ d : \eta \in X^*\}$ is locally finite, and therefore summable.

Proof: Given an arbitrary $\eta \in X^*$ expressed in the form (2.6), it follows directly that
\[
\mathrm{ord}(\eta \circ d) = n_0 + k + \sum_{j=1}^{k} n_j + \sum_{j=1}^{k} \mathrm{ord}(d_{i_j}) = |\eta| + \sum_{j=1}^{|\eta| - |\eta|_{x_0}} \mathrm{ord}(d_{i_j}), \tag{2.7}
\]
where the order of $c$ is defined as
\[
\mathrm{ord}(c) =
\begin{cases}
\inf\{|\eta| : \eta \in \mathrm{supp}(c)\} & : c \ne 0 \\
\infty & : c = 0,
\end{cases}
\]
and $\mathrm{supp}(c) := \{\eta \in X^* : (c, \eta) \ne 0\}$ denotes the support of $c$. Hence, for any $\xi \in X^*$,
\[
I_d(\xi) := \{\eta \in X^* : (\eta \circ d, \xi) \ne 0\} \subseteq \{\eta \in X^* : \mathrm{ord}(\eta \circ d) \le |\xi|\}
= \Bigl\{\eta \in X^* : |\eta| + \sum_{j=1}^{|\eta| - |\eta|_{x_0}} \mathrm{ord}(d_{i_j}) \le |\xi|\Bigr\}.
\]
Clearly this latter set is finite, and thus $I_d(\xi)$ is finite for all $\xi \in X^*$. This fact implies summability [1].

From the definition, it is easily verified that for any series $c, d, e \in \mathbb{R}^m\langle\langle X \rangle\rangle$,
\[
(c + d) \circ e = c \circ e + d \circ e,
\]
but in general $c \circ (d + e) \ne c \circ d + c \circ e$. A special exception is the class of linear series. A series $c \in \mathbb{R}^l\langle\langle X \rangle\rangle$ is called linear if
\[
\mathrm{supp}(c) \subseteq \{\eta \in X^* : \eta = x_0^{n_1} x_i x_0^{n_0},\ i \in \{1, 2, \ldots, m\},\ n_1, n_0 \ge 0\}.
\]
Since the shuffle product distributes over addition, given any $\eta = x_0^{n_1} x_i x_0^{n_0}$:
\[
\eta \circ (d + e) = x_0^{n_1+1}\bigl[(d + e)_i \sqcup\!\sqcup\, x_0^{n_0}\bigr]
= x_0^{n_1+1}\bigl(d_i \sqcup\!\sqcup\, x_0^{n_0}\bigr) + x_0^{n_1+1}\bigl(e_i \sqcup\!\sqcup\, x_0^{n_0}\bigr)
= \eta \circ d + \eta \circ e.
\]
Therefore,
\[
c \circ (d + e) = \sum_{\eta \in X^*} (c, \eta)\, \eta \circ (d + e)
= \sum_{\eta \in X^*} \bigl[(c, \eta)\, \eta \circ d + (c, \eta)\, \eta \circ e\bigr]
= c \circ d + c \circ e.
\]

A linear series should not be confused with a rational series. The series
\[
c = \sum_{k=0}^{\infty} k!\, x_0^k x_1
\]
is linear but not rational, while the bilinear series
\[
c = \sum_{k=0}^{\infty} \sum_{i_1, \ldots, i_k = 0}^{m} (C N_{i_k} \cdots N_{i_1} z_0)\, x_{i_k} \cdots x_{i_1}
\]
with $N_i \in \mathbb{R}^{n \times n}$ and $C^T, z_0 \in \mathbb{R}^{n \times 1}$ is rational but not linear.

The set $\mathbb{R}\langle\langle X \rangle\rangle$ forms a metric space under the ultrametric
\[
\mathrm{dist} : \mathbb{R}\langle\langle X \rangle\rangle \times \mathbb{R}\langle\langle X \rangle\rangle \to \mathbb{R}^+ \cup \{0\} : (c, d) \mapsto \sigma^{\mathrm{ord}(c - d)},
\]
where $\sigma \in (0, 1)$ is arbitrary. The following theorem states that the composition product on $\mathbb{R}\langle\langle X \rangle\rangle \times \mathbb{R}\langle\langle X \rangle\rangle$ is at least continuous in its first argument. The result extends naturally to vector-valued series in a componentwise fashion.

Theorem 2.2 Let $\{c_i\}_{i \ge 1}$ be a sequence in $\mathbb{R}\langle\langle X \rangle\rangle$ with $\lim_{i \to \infty} c_i = c$. Then $\lim_{i \to \infty} (c_i \circ d) = c \circ d$.

Proof: Define the sequence $n_i = \mathrm{ord}(c_i - c)$ for $i \ge 1$. Since $c$ is the limit of the sequence $\{c_i\}_{i \ge 1}$, $\{n_i\}_{i \ge 1}$ must have a monotone increasing subsequence $\{n_{i_j}\}$. Now observe that $\mathrm{dist}(c_i \circ d, c \circ d) = \sigma^{\mathrm{ord}((c_i - c) \circ d)}$ and
\[
\mathrm{ord}((c_{i_j} - c) \circ d)
= \mathrm{ord}\Bigl(\sum_{\eta \in \mathrm{supp}(c_{i_j} - c)} (c_{i_j} - c, \eta)\, \eta \circ d\Bigr)
\ge \inf_{\eta \in \mathrm{supp}(c_{i_j} - c)} \mathrm{ord}(\eta \circ d)
= \inf_{\eta \in \mathrm{supp}(c_{i_j} - c)} \bigl[|\eta| + (|\eta| - |\eta|_{x_0})\,\mathrm{ord}(d)\bigr]
\ge n_{i_j}.
\]
Thus, $\mathrm{dist}(c_{i_j} \circ d, c \circ d) \le \sigma^{n_{i_j}}$ for all $j \ge 1$, and the theorem is proven.

It is shown in [3] by counterexample that the composition product is not a rational operation. That is, the composition of two rational series does not in general produce a rational series. But in [2], it is shown in the SISO case that special classes of rational series produce rational series when the composition product is applied. The following definition is the multivariable extension of this essential property, and the corresponding rationality proof is not significantly different.

Definition 2.1 A series $c \in \mathbb{R}\langle\langle X \rangle\rangle$ is limited relative to $x_i$ if there exists an integer $N_i \ge 0$ such that
\[
\sup_{\eta \in \mathrm{supp}(c)} |\eta|_{x_i} \le N_i.
\]

If $c$ is limited relative to $x_i$ for every $i = 1, 2, \ldots, m$ then $c$ is called input-limited. In such cases, let $N_c := \sum_i N_i$. A series $c \in \mathbb{R}^l\langle\langle X \rangle\rangle$ is input-limited if each component series $c_j$ is input-limited for $j = 1, 2, \ldots, l$. In this case, $N_c := \max_j N_{c_j}$.

It is shown next that the composition product preserves local convergence if its first argument is input-limited.

Theorem 2.3 Suppose $c, d \in \mathbb{R}^m\langle\langle X \rangle\rangle$ are locally convergent series with growth constants $K_c, M_c$ and $K_d, M_d$, respectively. If $c$ is input-limited then $c \circ d$ is locally convergent with
\[
|(c \circ d, \nu)| < K_c K_d^{N_c} (N_c + 1)\, \bigl(M(N_c + 1)\bigr)^{|\nu|}\, |\nu|!, \quad \forall \nu \in X^*, \tag{2.8}
\]
where $M = \max(M_c, M_d)$ and $N_c \ge 1$.

The proof of this result requires the following lemma.

Lemma 2.1 [11] Suppose $c, d \in \mathbb{R}\langle\langle X \rangle\rangle$ are locally convergent series with growth constants $K_c, M_c$ and $K_d, M_d$, respectively. Then $c \sqcup\!\sqcup\, d$ is locally convergent with
\[
|(c \sqcup\!\sqcup\, d, \nu)| \le K_c K_d M^{|\nu|} (|\nu| + 1)!, \quad \forall \nu \in X^*, \tag{2.9}
\]
where $M = \max(M_c, M_d)$.

Proof of Theorem 2.3: The proof has two main parts. Only the SISO case is considered here for brevity. First it is shown that for any $\eta \in X^*$ in the form of equation (2.6) (which is no restriction):
\[
|(\eta \circ d, \nu)| \le K_d^{k}\, \frac{M^{|\nu| - |\eta|}}{n_0!\,(n_1+1)! \cdots (n_k+1)!}\, |\nu|!, \quad \forall \nu \in X^*. \tag{2.10}
\]
For any set of integers $n_j \ge 0$, $j \ge 0$, define the sequence of words
\[
\eta_{j+1} = x_0^{n_{j+1}} x_1 \eta_j, \qquad \eta_0 = x_0^{n_0}. \tag{2.11}
\]
The upper bound (2.10) is proven for any given $\eta$ by verifying inductively that it holds for every word $\eta_j$, $j \ge 0$. Observe that for $j = 0$:
\[
(\eta_0 \circ d, \nu) = (\eta_0, \nu) =
\begin{cases}
1 & : \nu = \eta_0 \\
0 & : \nu \ne \eta_0
\end{cases}
\]
and
\[
\frac{M^{|\nu| - |\eta_0|}}{n_0!}\, |\nu|! =
\begin{cases}
1 & : \nu = \eta_0 \\
M^{|\nu| - |\eta_0|}\, \dfrac{|\nu|!}{n_0!} & : \nu \ne \eta_0.
\end{cases}
\]
Hence, the inequality is satisfied in the trivial case. Now suppose the result is true up to some given integer $j - 1 \ge 0$. From the definition of the composition product,
\[
(\eta_j \circ d, \nu) = \bigl(x_0^{n_j+1}\bigl[\,d \sqcup\!\sqcup (\eta_{j-1} \circ d)\,\bigr], \nu\bigr).
\]

Applying Lemma 2.1, together with the induction hypothesis as a growth bound for $\eta_{j-1} \circ d$, gives
\[
|(d \sqcup\!\sqcup (\eta_{j-1} \circ d), \nu)| \le K_d \Bigl(K_d^{j-1}\, \frac{M^{-|\eta_{j-1}|}}{n_0!(n_1+1)! \cdots (n_{j-1}+1)!}\Bigr) M^{|\nu|} (|\nu| + 1)!.
\]
Therefore,
\[
|(\eta_j \circ d, \nu)| =
\begin{cases}
|(d \sqcup\!\sqcup (\eta_{j-1} \circ d), \nu')| & : \nu = x_0^{n_j+1} \nu' \\
0 & : \text{otherwise}
\end{cases}
\le K_d^{j}\, \frac{M^{-|\eta_{j-1}|}}{n_0!(n_1+1)! \cdots (n_{j-1}+1)!}\, M^{|\nu| - (n_j+1)}\, (|\nu| - n_j)!.
\]
Now using the fact that for any integer $n \ge 1$, $\binom{n}{k} \ge k + 1$ when $k = 0, 1, \ldots, n-1$, it follows that
\[
(|\nu| - n_j)! \le \frac{|\nu|!}{(n_j + 1)!}, \quad 0 \le n_j \le |\nu| - 1.
\]
Thus,
\[
|(\eta_j \circ d, \nu)| \le K_d^{j}\, \frac{M^{|\nu| - |\eta_j|}}{n_0!(n_1+1)! \cdots (n_j+1)!}\, |\nu|!
\]
since $n_j < |\nu|$ is necessary for $(\eta_j \circ d, \nu)$ to be nonzero. Therefore, the inequality holds for all $j \ge 0$, or equivalently, for any $\eta \in X^*$.

Now if $c$ is input-limited, the theorem is proven in the second main step as follows (grouping the words $\eta \in \mathrm{supp}(c) \cap I_d(\nu)$ by their length $i = |\eta| \le |\nu|$ and their number of $x_1$ letters $k \le \min(i, N_c)$, and using $M_c \le M$):
\[
|(c \circ d, \nu)| = \Bigl|\sum_{\eta \in \mathrm{supp}(c) \cap I_d(\nu)} (c, \eta)(\eta \circ d, \nu)\Bigr|
\le \sum_{i=0}^{|\nu|} \sum_{k=0}^{\min(i, N_c)} \sum_{\substack{\eta \in \mathrm{supp}(c) \\ n_0 + n_1 + \cdots + n_k + k = i}} \bigl[K_c M_c^{|\eta|} |\eta|!\bigr] \Bigl[K_d^{k}\, \frac{M^{-|\eta|}}{n_0!(n_1+1)! \cdots (n_k+1)!}\Bigr] M^{|\nu|} |\nu|!
\]
\[
\le K_c K_d^{N_c} M^{|\nu|} |\nu|! \sum_{i=0}^{|\nu|} \sum_{k=0}^{N_c} \sum_{\substack{\eta \in \mathrm{supp}(c) \\ n_0 + \cdots + n_k + k = i}} \frac{|\eta|!}{n_0!(n_1+1)! \cdots (n_k+1)!}
\le K_c K_d^{N_c} M^{|\nu|} |\nu|! \sum_{i=0}^{|\nu|} \sum_{k=0}^{N_c} (k+1)^i
\]
\[
\le K_c K_d^{N_c}\, \frac{N_c + 2}{2}\, M^{|\nu|} |\nu|! \sum_{i=0}^{|\nu|} (N_c + 1)^i
\le K_c K_d^{N_c}\, \frac{N_c + 2}{2 N_c}\, M^{|\nu|} |\nu|!\, (N_c + 1)^{|\nu| + 1}
\le K_c K_d^{N_c} (N_c + 1)\, \bigl(M(N_c + 1)\bigr)^{|\nu|}\, |\nu|!.
\]

In the last step it is assumed that $N_c > 1$. The inequality for the power sum $S_i(n) := \sum_{k=1}^{n} k^i \le n^i \bigl[\frac{n+1}{2}\bigr]$, $n, i > 0$, has also been used in the development. To produce the result when $N_c = 1$, simply note that
\[
\sum_{i=0}^{|\nu|} \sum_{k=0}^{1} \sum_{\substack{\eta \in \mathrm{supp}(c) \\ n_0 + n_1 + \cdots + n_k + k = i}} \binom{i}{n_0, n_1 + 1, \ldots, n_k + 1} \le \sum_{i=0}^{|\nu|} \bigl[1 + (2^i - 1)\bigr] \le 2^{|\nu| + 1}.
\]

When $c$ and $d$ are both linear series, the growth condition (2.8) is known to be conservative. Representing the composition product as a convolution sum and using the fact that $\sum_{k=0}^{n} \binom{n}{k}^{-1} < 3$ for any $n \ge 0$, it can be shown that a tighter bound is
\[
|(c \circ d, \nu)| < K_c K_d M^{|\nu|} |\nu|!, \quad \forall \nu \in X^*. \tag{2.12}
\]
It is conjectured that in the general case perhaps the tighter bound
\[
|(c \circ d, \nu)| < K_c K_d^{N_c} M^{|\nu|} |\nu|!, \quad \forall \nu \in X^* \tag{2.13}
\]
applies.

The metric space $(\mathbb{R}\langle\langle X \rangle\rangle, \mathrm{dist})$ is known to be complete [1]. Given a fixed $c \in \mathbb{R}\langle\langle X \rangle\rangle$, consider the mapping $\mathbb{R}\langle\langle X \rangle\rangle \to \mathbb{R}\langle\langle X \rangle\rangle : d \mapsto c \circ d$. The section is concluded by showing that this mapping is always a contraction on $\mathbb{R}\langle\langle X \rangle\rangle$, i.e.,
\[
\mathrm{dist}(c \circ d, c \circ e) < \mathrm{dist}(d, e), \quad \forall d, e \in \mathbb{R}\langle\langle X \rangle\rangle.
\]
The focus is on the SISO case, where any $c \in \mathbb{R}\langle\langle X \rangle\rangle$ can be written unambiguously in the form $c = c_0 + c_1 + \cdots$, where $c_k \in \mathbb{R}\langle\langle X \rangle\rangle$ has the property that $\eta \in \mathrm{supp}(c_k)$ only if $|\eta|_{x_1} = k$. Some of the series $c_k$ may be the zero series. When $c_0 = 0$, $c$ is referred to as being homogeneous. When $c_k = 0$ for $k = 0, 1, \ldots, l-1$ and $c_l \ne 0$ then $c$ is called homogeneous of order $l$. In this setting consider the following theorem.

Theorem 2.4 For any $c_k \in \mathbb{R}\langle\langle X \rangle\rangle$ with $X = \{x_0, x_1\}$,
\[
\mathrm{dist}(c_k \circ d, c_k \circ e) \le \sigma^{k}\, \mathrm{dist}(d, e), \quad \forall d, e \in \mathbb{R}\langle\langle X \rangle\rangle.
\]

Proof: The proof is by induction for the nontrivial case where $c_k \ne 0$. First suppose $k = 0$. From the definition of the composition product it follows directly that $\eta \circ d = \eta$ for all $\eta \in \mathrm{supp}(c_0)$. Therefore,
\[
c_0 \circ d = \sum_{\eta \in \mathrm{supp}(c_0)} (c_0, \eta)\, \eta \circ d = \sum_{\eta \in \mathrm{supp}(c_0)} (c_0, \eta)\, \eta = c_0,
\]
and
\[
\mathrm{dist}(c_0 \circ d, c_0 \circ e) = \mathrm{dist}(c_0, c_0) = 0 \le \sigma^{0}\, \mathrm{dist}(d, e).
\]

Now fix any $k \ge 0$ and assume the claim is true for all $c_0, c_1, \ldots, c_k$. In particular, this implies that
\[
\mathrm{ord}(c_k \circ d - c_k \circ e) \ge k + \mathrm{ord}(d - e). \tag{2.14}
\]
For any $j \ge 0$, words in $\mathrm{supp}(c_j)$ have the form $\eta_j$ as defined in equation (2.11). Observe then that
\[
c_{k+1} \circ d - c_{k+1} \circ e
= \sum_{\eta_{k+1} \in X^*} (c_{k+1}, \eta_{k+1})\, \eta_{k+1} \circ d - \sum_{\eta_{k+1} \in X^*} (c_{k+1}, \eta_{k+1})\, \eta_{k+1} \circ e
\]
\[
= \sum_{\eta_{k+1} = x_0^{n_{k+1}} x_1 \eta_k \in X^*} (c_{k+1}, \eta_{k+1}) \Bigl[ x_0^{n_{k+1}+1}\bigl[\,d \sqcup\!\sqcup (\eta_k \circ d)\,\bigr] - x_0^{n_{k+1}+1}\bigl[\,e \sqcup\!\sqcup (\eta_k \circ e)\,\bigr] \Bigr]
\]
\[
= \sum_{\eta_{k+1}} (c_{k+1}, \eta_{k+1}) \Bigl[ \bigl( x_0^{n_{k+1}+1}\bigl[\,d \sqcup\!\sqcup (\eta_k \circ d)\,\bigr] - x_0^{n_{k+1}+1}\bigl[\,d \sqcup\!\sqcup (\eta_k \circ e)\,\bigr] \bigr)
+ \bigl( x_0^{n_{k+1}+1}\bigl[\,d \sqcup\!\sqcup (\eta_k \circ e)\,\bigr] - x_0^{n_{k+1}+1}\bigl[\,e \sqcup\!\sqcup (\eta_k \circ e)\,\bigr] \bigr) \Bigr]
\]
\[
= \sum_{\eta_{k+1}} (c_{k+1}, \eta_{k+1}) \Bigl[ x_0^{n_{k+1}+1}\bigl[\,d \sqcup\!\sqcup (\eta_k \circ d - \eta_k \circ e)\,\bigr] + x_0^{n_{k+1}+1}\bigl[\,(d - e) \sqcup\!\sqcup (\eta_k \circ e)\,\bigr] \Bigr],
\]
using the fact that the shuffle product distributes over addition. Next, applying the identity (2.7) and the inequality (2.14) with $c_k = \eta_k$, it follows that
\[
\mathrm{ord}(c_{k+1} \circ d - c_{k+1} \circ e)
\ge \min\Bigl\{ \inf_{\eta_{k+1} \in \mathrm{supp}(c_{k+1})} \bigl( n_{k+1} + 1 + \mathrm{ord}(d) + k + \mathrm{ord}(d - e) \bigr),\
\inf_{\eta_{k+1} \in \mathrm{supp}(c_{k+1})} \bigl( n_{k+1} + 1 + \mathrm{ord}(d - e) + |\eta_k| + k\,\mathrm{ord}(e) \bigr) \Bigr\}
\ge k + 1 + \mathrm{ord}(d - e),
\]
and thus $\mathrm{dist}(c_{k+1} \circ d, c_{k+1} \circ e) \le \sigma^{k+1}\, \mathrm{dist}(d, e)$. Hence, $\mathrm{dist}(c_k \circ d, c_k \circ e) \le \sigma^{k}\, \mathrm{dist}(d, e)$ holds for any $k \ge 0$.

Applying the above theorem leads to the following result.

Theorem 2.5 If $c \in \mathbb{R}\langle\langle X \rangle\rangle$ with $X = \{x_0, x_1\}$ then for any series $c_0$,
\[
\mathrm{dist}((c_0 + c) \circ d, (c_0 + c) \circ e) = \mathrm{dist}(c \circ d, c \circ e), \quad \forall d, e \in \mathbb{R}\langle\langle X \rangle\rangle. \tag{2.15}
\]
If $c$ is homogeneous of order $l \ge 1$ then
\[
\mathrm{dist}(c \circ d, c \circ e) \le \sigma^{l}\, \mathrm{dist}(d, e), \quad \forall d, e \in \mathbb{R}\langle\langle X \rangle\rangle. \tag{2.16}
\]

Proof: The equality is proven first. Since the metric $\mathrm{dist}$ is shift-invariant:
\[
\mathrm{dist}((c_0 + c) \circ d, (c_0 + c) \circ e)
= \mathrm{dist}\bigl(c_0 \circ d + c \circ d,\ c_0 \circ e + c \circ e\bigr)
= \mathrm{dist}\bigl(c_0 + c \circ d,\ c_0 + c \circ e\bigr)
= \mathrm{dist}(c \circ d, c \circ e).
\]

The inequality is proven next by first selecting any fixed $l \ge 1$ and showing inductively that it holds for any partial sum $\sum_{i=l}^{l+k} c_i$ where $k \ge 0$. When $k = 0$, Theorem 2.4 implies that $\mathrm{dist}(c_l \circ d, c_l \circ e) \le \sigma^{l}\, \mathrm{dist}(d, e)$. If the result is true for partial sums up to any fixed $k$, then using the ultrametric property
\[
\mathrm{dist}(d, e) \le \max\{\mathrm{dist}(d, f), \mathrm{dist}(f, e)\}, \quad \forall d, e, f \in \mathbb{R}\langle\langle X \rangle\rangle,
\]
it follows that
\[
\mathrm{dist}\Bigl(\Bigl(\sum_{i=l}^{l+k+1} c_i\Bigr) \circ d,\ \Bigl(\sum_{i=l}^{l+k+1} c_i\Bigr) \circ e\Bigr)
= \mathrm{dist}\Bigl(\Bigl(\sum_{i=l}^{l+k} c_i\Bigr) \circ d + c_{l+k+1} \circ d,\ \Bigl(\sum_{i=l}^{l+k} c_i\Bigr) \circ e + c_{l+k+1} \circ e\Bigr)
\]
\[
\le \max\Bigl\{ \mathrm{dist}\Bigl(\Bigl(\sum_{i=l}^{l+k} c_i\Bigr) \circ d + c_{l+k+1} \circ d,\ \Bigl(\sum_{i=l}^{l+k} c_i\Bigr) \circ d + c_{l+k+1} \circ e\Bigr),\
\mathrm{dist}\Bigl(\Bigl(\sum_{i=l}^{l+k} c_i\Bigr) \circ d + c_{l+k+1} \circ e,\ \Bigl(\sum_{i=l}^{l+k} c_i\Bigr) \circ e + c_{l+k+1} \circ e\Bigr) \Bigr\}
\]
\[
= \max\Bigl\{ \mathrm{dist}\bigl(c_{l+k+1} \circ d,\ c_{l+k+1} \circ e\bigr),\
\mathrm{dist}\Bigl(\Bigl(\sum_{i=l}^{l+k} c_i\Bigr) \circ d,\ \Bigl(\sum_{i=l}^{l+k} c_i\Bigr) \circ e\Bigr) \Bigr\}
\le \max\bigl\{ \sigma^{l+k+1}\, \mathrm{dist}(d, e),\ \sigma^{l}\, \mathrm{dist}(d, e) \bigr\}
\le \sigma^{l}\, \mathrm{dist}(d, e).
\]
Hence, the result holds for all $k \ge 0$. Finally the theorem is proven by noting that $c = \lim_{k \to \infty} \sum_{i=l}^{l+k} c_i$ and using the continuity of the composition product proven in Theorem 2.2 and the continuity of the metric $\mathrm{dist}$.

The final result of this section is given below.

Theorem 2.6 For any $c \in \mathbb{R}\langle\langle X \rangle\rangle$ with $X = \{x_0, x_1\}$, the mapping $d \mapsto c \circ d$ is a contraction on $\mathbb{R}\langle\langle X \rangle\rangle$.

Proof: Choose any series $d, e \in \mathbb{R}\langle\langle X \rangle\rangle$. If $c$ is homogeneous of order $l \ge 1$ then the result follows directly from equation (2.16). Otherwise, observe that via equation (2.15):
\[
\mathrm{dist}(c \circ d, c \circ e)
= \mathrm{dist}\Bigl(\Bigl(\sum_{i=1}^{\infty} c_i\Bigr) \circ d,\ \Bigl(\sum_{i=1}^{\infty} c_i\Bigr) \circ e\Bigr)
\le \sigma\, \mathrm{dist}(d, e) < \mathrm{dist}(d, e).
\]
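The ultrametric above is simple enough to exercise directly on finitely supported series. The short sketch below (an illustration only; the dict representation, the choice $\sigma = 0.5$ and all names are assumptions of the sketch) computes $\mathrm{ord}$ and $\mathrm{dist}$ and spot checks the strong triangle inequality invoked in the proof of Theorem 2.5.

```python
# Sketch: ord(c), dist(c, d) = sigma^{ord(c - d)}, and a spot check of
# dist(d, e) <= max(dist(d, f), dist(f, e)) on random finitely supported series.
import itertools
import random

SIGMA = 0.5

def order(c):
    """ord(c) = inf{|eta| : eta in supp(c)}; infinity for the zero series."""
    support = [len(w) for w, v in c.items() if v != 0.0]
    return min(support) if support else float("inf")

def subtract(c, d):
    out = dict(c)
    for w, v in d.items():
        out[w] = out.get(w, 0.0) - v
    return out

def dist(c, d):
    return SIGMA ** order(subtract(c, d))     # sigma ** inf evaluates to 0.0

def random_series(max_len=4):
    return {w: random.uniform(-1.0, 1.0)
            for n in range(max_len + 1)
            for w in itertools.product((0, 1), repeat=n)
            if random.random() < 0.3}

random.seed(0)
d, e, f = (random_series() for _ in range(3))
assert dist(d, e) <= max(dist(d, f), dist(f, e)) + 1e-12
print(dist(d, e), dist(d, f), dist(f, e))
```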

3 The Nonrecursive Connections

In this section the generating series are produced for the three nonrecursive interconnections, and their local convergence is characterized.

Theorem 3.1 Suppose $c, d \in \mathbb{R}^l\langle\langle I \rangle\rangle$ are locally convergent power series. Then each nonrecursive interconnected input-output system shown in Figure 1 (a)-(c) has a Fliess operator representation generated by a locally convergent series as indicated:

1. $F_c + F_d = F_{c+d}$
2. $F_c\, F_d = F_{c \sqcup\!\sqcup d}$
3. $F_c \circ F_d = F_{c \circ d}$, where $l = m$, and $c$ is input-limited.

Proof: 1. Observe that
\[
F_c[u](t) + F_d[u](t) = \sum_{\eta \in I^*} \bigl[(c, \eta) + (d, \eta)\bigr] E_\eta[u](t, t_0) = F_{c+d}[u](t).
\]
Since $c$ and $d$ are locally convergent, define $M = \max\{M_c, M_d\}$. Then for any $\eta \in I^*$ it follows that
\[
|(c + d, \eta)| = |(c, \eta) + (d, \eta)| \le (K_c + K_d) M^{|\eta|} |\eta|!,
\]
so $c + d$ is locally convergent.

2. In light of the componentwise definition of the shuffle product, it can be assumed here without loss of generality that $l = 1$. Thus,
\[
F_c[u](t)\, F_d[u](t)
= \Bigl(\sum_{\eta \in I^*} (c, \eta) E_\eta[u](t, t_0)\Bigr)\Bigl(\sum_{\xi \in I^*} (d, \xi) E_\xi[u](t, t_0)\Bigr)
= \sum_{\eta, \xi \in I^*} (c, \eta)(d, \xi)\, E_\eta[u](t, t_0)\, E_\xi[u](t, t_0)
= \sum_{\eta, \xi \in I^*} (c, \eta)(d, \xi)\, E_{\eta \sqcup\!\sqcup \xi}[u](t, t_0)
= F_{c \sqcup\!\sqcup d}[u](t).
\]
Applying Lemma 2.1 and the fact that $2^n \ge n + 1$, $n \ge 0$, gives
\[
|(c \sqcup\!\sqcup d, \eta)| \le K_c K_d (2M)^{|\eta|} |\eta|!.
\]
Thus, $c \sqcup\!\sqcup d$ is locally convergent.

3. For any monomial $\eta \in I^*$ and $d \in \mathbb{R}^m\langle\langle I \rangle\rangle$ the corresponding Fliess operators are
\[
F_\eta[u](t) = E_\eta[u](t, t_0), \qquad F_d[u](t) = \sum_{\xi \in I^*} (d, \xi) E_\xi[u](t, t_0).
\]

Therefore,
\[
(F_\eta \circ F_d)[u](t) = E_\eta[F_d[u]](t, t_0).
\]
If $|\eta|_{x_i} = 0$ for every $i \ne 0$, then
\[
(F_\eta \circ F_d)[u](t) = E_\eta[u](t, t_0) = F_\eta[u](t) = F_{\eta \circ d}[u](t).
\]
If, on the other hand, $\eta = \underbrace{0 \cdots 0}_{k\ \text{times}}\, i\, \eta'$ with $i \ne 0$, then
\[
(F_\eta \circ F_d)[u](t) = E_{\underbrace{0 \cdots 0}_{k}\, i \eta'}[F_d[u]](t, t_0)
= \underbrace{\int_{t_0}^{t} \cdots \int_{t_0}^{\tau_2}}_{k+1\ \text{times}} F_{d_i}[u](\tau_1)\, E_{\eta'}[F_d[u]](\tau_1, t_0)\, d\tau_1 \cdots d\tau_{k+1}
\]
\[
= \underbrace{\int_{t_0}^{t} \cdots \int_{t_0}^{\tau_2}}_{k+1\ \text{times}} F_{d_i \sqcup\!\sqcup (\eta' \circ d)}[u](\tau_1, t_0)\, d\tau_1 \cdots d\tau_{k+1}
= F_{\underbrace{0 \cdots 0}_{k+1}\,[\,d_i \sqcup\!\sqcup (\eta' \circ d)\,]}[u](t)
= F_{\eta \circ d}[u](t),
\]
where the second equality follows from part 2 together with induction on the length of $\eta$. Thus,
\[
(F_c \circ F_d)[u](t) = \sum_{\eta \in I^*} (c, \eta) E_\eta[F_d[u]](t, t_0) = \sum_{\eta \in I^*} (c, \eta) F_{\eta \circ d}[u](t)
= \sum_{\eta \in I^*} (c, \eta) \sum_{\nu \in I^*} (\eta \circ d, \nu) E_\nu[u](t, t_0)
\]
\[
= \sum_{\nu \in I^*} \Bigl(\sum_{\eta \in I^*} (c, \eta)(\eta \circ d, \nu)\Bigr) E_\nu[u](t, t_0)
= \sum_{\nu \in I^*} (c \circ d, \nu) E_\nu[u](t, t_0)
= F_{c \circ d}[u](t).
\]
Local convergence of $c \circ d$ under the stated conditions was proven in Theorem 2.3.

It should be noted that $c$ being input-limited is only a sufficient condition for the composition product to produce a locally convergent series. If in Theorem 2.3 it is instead assumed that both $c$ and $d$ have finite Lie rank, then the mappings $F_c$ and $F_d$ each have a finite dimensional analytic state space realization, and therefore so does the mapping $F_c \circ F_d$. The classical literature then provides that the generating series $c \circ d$ must be locally convergent [10]. An example of this situation is given below. This suggests the possibility that just as the composition of two analytic functions is again analytic, the composition of two Fliess operators may always produce another well-defined Fliess operator. But showing directly that $c \circ d$ is always locally convergent is presently an open problem.

Example 3.1 Consider the state space system
\[
\dot{z} = z^2 u, \quad z(0) = 1, \qquad y = z.
\]
It is easily verified that $y = F_c[u]$, where the only nonzero coefficients are
\[
(c, \underbrace{1 \cdots 1}_{k\ \text{times}}) = k!, \quad k \ge 0.
\]
So $c$ is not input-limited. But the mapping $F_c \circ F_c$ clearly has the analytic state space realization
\[
\dot{z}_1 = z_1^2 u, \quad z_1(0) = 1, \qquad \dot{z}_2 = z_2^2 z_1, \quad z_2(0) = 1, \qquad y = z_2,
\]
and therefore the generating series $c \circ c$ must be locally convergent.
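Example 3.1 is easy to check numerically. The sketch below (not from the paper; input signal, step sizes and names are arbitrary choices) integrates $\dot z = z^2 u$ with forward Euler and compares it against the truncated Fliess series $\sum_k k!\, E_{x_1^k}[u](t) = \sum_k \bigl(\int_0^t u\bigr)^k$, which equals $1/(1 - \int_0^t u)$ whenever $\bigl|\int_0^t u\bigr| < 1$.

```python
# Sketch: the ODE zdot = z^2 u, z(0) = 1 versus the truncated Fliess series of
# Example 3.1, whose coefficients are (c, x1^k) = k!.
import numpy as np

t = np.linspace(0.0, 0.5, 2001)
u = np.sin(2.0 * t)                                   # test input with |int u| < 1

# Forward-Euler integration of zdot = z^2 u.
z = np.empty_like(t)
z[0] = 1.0
for n in range(len(t) - 1):
    z[n + 1] = z[n] + (t[n + 1] - t[n]) * z[n] ** 2 * u[n]

# Truncated Fliess series: E_{x1^k}[u](t) = (int_0^t u)^k / k!, so with the
# coefficients k! the series is the geometric sum of int_0^t u.
iu = np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(t))))
y = sum(iu ** k for k in range(25))

print(np.max(np.abs(z - y)))                          # small discretization error
print(np.max(np.abs(y - 1.0 / (1.0 - iu))))           # tiny truncation error
```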

4 The Feedback Connection

The general goal of this section is to determine when there exists a $y$ which satisfies the feedback equation (1.2), and in particular, when there exists a generating series $e$ so that $y = F_e[u]$ over some appropriate input set. In the latter case, the feedback equation becomes equivalent to
\[
F_e[u] = F_c[u + F_{d \circ e}[u]],
\]
and the feedback product is defined by $c@d = e$. To make the analysis simpler, it is assumed throughout that the inputs $u$ are supplied by an exosystem which is itself a Fliess operator as shown in Figure 2. That is, $u = F_b[v]$ for some locally convergent series $b$.

[Figure 2: A feedback configuration with a Fliess operator exosystem providing the inputs.]

In this setting, a sufficient condition is given under which a unique solution $y$ of the feedback equation (1.2) is known to exist, and $y$ is characterized as the output of a new Fliess operator when a certain series factorization property is available. This leads to an implicit characterization of the feedback product $c@d$.

Theorem 4.1 Let $b, c, d \in \mathbb{R}\langle\langle I \rangle\rangle$ with $I = \{0, 1\}$. Then:

1. The mapping $S : \mathbb{R}\langle\langle I \rangle\rangle \to \mathbb{R}\langle\langle I \rangle\rangle : \tilde{e}_i \mapsto \tilde{e}_{i+1} = c \circ (b + d \circ \tilde{e}_i)$ has a unique fixed point $\tilde{e}$.

2. If $b, c, d$ and $\tilde{e}$ are locally convergent then the feedback equation (1.2) has the unique solution $y = F_{\tilde{e}}[v]$ for any admissible $v$.

3. If $\tilde{e} = e \circ b$ for some locally convergent series $e$ then $c@d = e$.

Proof: 1. The mapping $S$ is a contraction since via Theorem 2.6:
\[
\mathrm{dist}(S(\tilde{e}_i), S(\tilde{e}_j)) < \mathrm{dist}(b + d \circ \tilde{e}_i,\ b + d \circ \tilde{e}_j) = \mathrm{dist}(d \circ \tilde{e}_i,\ d \circ \tilde{e}_j) < \mathrm{dist}(\tilde{e}_i, \tilde{e}_j).
\]
Therefore, the mapping $S$ has a unique fixed point $\tilde{e}$, that is, $\tilde{e} = c \circ (b + d \circ \tilde{e})$.

2. From the stated assumptions concerning $b$, $c$, $d$ and $\tilde{e}$ it follows that
\[
F_{\tilde{e}}[v] = F_{c \circ (b + d \circ \tilde{e})}[v] = F_c\bigl[F_b[v] + F_d[F_{\tilde{e}}[v]]\bigr]
\]
for any admissible $v$. Therefore equation (1.2) has the unique solution $y = F_{\tilde{e}}[v]$.

3. Since $e$ is locally convergent,
\[
y = F_{\tilde{e}}[v] = F_{e \circ b}[v] = F_e[F_b[v]] = F_e[u],
\]
and thus $c@d = e$.

This result suggests several open problems. Are there conditions on $b$, $c$, and $d$ alone which will ensure that $\tilde{e}$ above is locally convergent? When does there exist a factorization of the form $\tilde{e} = e \circ b$, where $e$ is locally convergent? Can the theorem be generalized to the case where the inputs are simply from an $L_p$ space and not filtered through a Fliess operator a priori? Some insight into these questions is provided by the following examples.

Example 4.1 Suppose $c, d \in \mathbb{R}\langle\langle I \rangle\rangle$ are locally convergent. If $c$ is also a linear series, one can formally write using equation (1.3)
\[
c@d = c + \sum_{k=1}^{\infty} (c \circ d)^{\circ k} \circ c, \tag{4.17}
\]
where $(c \circ d)^{\circ k}$ denotes $k$ copies of $c \circ d$ composed $k - 1$ times. In light of Theorem 2.2, $c@d$ is well defined as long as the family of series $\{(c \circ d)^{\circ k} \circ c : k \ge 1\}$ is summable, and it is easily verified that
\[
\bigl((c \circ d)^{\circ k} \circ c, \nu\bigr) = 0, \quad k > |\nu|.
\]
So for this special case an application of Theorem 4.1 can be avoided for concluding that $c@d$ is well defined. Now from Theorem 2.3, $c \circ d$ is locally convergent. If the (conjectured) growth condition (2.13) holds then it follows immediately that
\[
|((c \circ d)^{\circ k}, \nu)| \le K_{c \circ d}^{k} M^{|\nu|} |\nu|!, \quad \forall \nu \in I^*,
\]
and thus,
\[
|(c@d, \nu)| \le K_c M_c^{|\nu|} |\nu|! + K_c \sum_{k=1}^{|\nu|} K_{c \circ d}^{k} M^{|\nu|} |\nu|! \le K_c \sum_{k=0}^{|\nu|} K_{c \circ d}^{k} M^{|\nu|} |\nu|!, \quad \forall \nu \in I^*.
\]
If, for example, $K_{c \circ d} > 1$ then
\[
|(c@d, \nu)| \le K_c\, \frac{K_{c \circ d}}{K_{c \circ d} - 1}\, (K_{c \circ d} M)^{|\nu|} |\nu|!, \quad \forall \nu \in I^*.
\]
Thus, for a linear series $c$, the closed-loop system can be described by the Fliess operator $F_{c@d}$ with $c@d$ given by (4.17), provided $c \circ d$ satisfies the growth condition (2.13). If, in addition, $d$ is linear then $c \circ d$ always satisfies the bound (2.13) (cf. (2.12)), and in fact $K_{c \circ d} = K_c K_d$.

Example 4.2 Consider a generalized series $\delta$ with the defining property that $\delta$ is the identity element for the composition product, i.e., $c \circ \delta = \delta \circ c = c$ for any $c \in \mathbb{R}\langle\langle X \rangle\rangle$. Then (mapping the symbols $x_i \mapsto i$) $F_\delta[u] = u$ for any $u$, and a unity feedback system has the generating series $c@\delta$. Setting $b = 0$ in Figure 2 (or effectively setting $u \equiv 0$), a self-excited feedback loop is described by $F_{\tilde{e}}[v]$, where $\tilde{e} = c \circ \tilde{e}$ and $\tilde{e} = e \circ 0$. In this case $\tilde{e} = \lim_{k \to \infty} c^{\circ k}$ and $e = c@\delta = \tilde{e}$. From Theorem 4.1, $c@\delta$ is well defined in general. Analogous to the situation with the composition product, if $c$ has finite Lie rank then $c@\delta$ will always be locally convergent. For example, when $c = 1 + x_1$ it is easily verified that $c@\delta = \sum_{k \ge 0} x_0^k$ so that $F_{c@\delta}[0](t) = e^t$ for $t \ge 0$. When $c = 1 + 2x_1 + 2x_1^2$ it follows that $c@\delta = \sum_{k \ge 0} (k+1)!\, x_0^k$ and $F_{c@\delta}[0](t) = 1/(1-t)^2$ for $0 \le t < 1$.
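The self-excited loop of Example 4.2 can also be reproduced by iterating the contraction of Theorem 4.1 directly on truncated series, $\tilde{e}_{i+1} = c \circ \tilde{e}_i$ with $\tilde{e}_0 = 0$. The sketch below is an illustration under representation choices of my own (SISO words as tuples over $\{0,1\}$, series as dicts, truncation at MAXLEN, which is exact for the retained coefficients since composition only lengthens words); it recovers the coefficients $(k+1)!$ of $c@\delta$ for $c = 1 + 2x_1 + 2x_1^2$, and replacing $c$ with $1 + x_1$ recovers $\sum_k x_0^k$, i.e., $F_{c@\delta}[0](t) = e^t$.

```python
# Sketch: approximate c@delta for the self-excited loop (b = 0, d = delta) by
# iterating e <- c o e on series truncated at word length MAXLEN.
from collections import defaultdict
from functools import lru_cache
from math import factorial

MAXLEN = 8

@lru_cache(maxsize=None)
def shuffle_words(a, b):
    if not a: return {b: 1}
    if not b: return {a: 1}
    out = defaultdict(int)
    for w, m in shuffle_words(a[1:], b).items():
        out[(a[0],) + w] += m
    for w, m in shuffle_words(a, b[1:]).items():
        out[(b[0],) + w] += m
    return dict(out)

def shuffle(c, d):
    out = defaultdict(float)
    for eta, ce in c.items():
        for xi, de in d.items():
            if len(eta) + len(xi) <= MAXLEN:       # truncation (exact below MAXLEN)
                for w, m in shuffle_words(eta, xi).items():
                    out[w] += ce * de * m
    return dict(out)

def word_compose(eta, d):
    """eta o d per (2.4): eta if it has no x1, else x0^{k+1}[d shuffle (eta' o d)]."""
    if 1 not in eta:
        return {eta: 1.0}
    k = eta.index(1)
    inner = shuffle(d, word_compose(eta[k + 1:], d))
    return {(0,) * (k + 1) + w: v for w, v in inner.items() if len(w) + k + 1 <= MAXLEN}

def compose(c, d):
    out = defaultdict(float)
    for eta, ce in c.items():
        for w, v in word_compose(eta, d).items():
            out[w] += ce * v
    return dict(out)

# c = 1 + 2 x1 + 2 x1^2; the paper states c@delta = sum_k (k+1)! x0^k.
c = {(): 1.0, (1,): 2.0, (1, 1): 2.0}
e = {}                                             # e_0 = 0
for _ in range(MAXLEN + 2):                        # ord(e_i - c@delta) grows each step
    e = compose(c, e)

print([round(e.get((0,) * k, 0.0)) for k in range(MAXLEN + 1)])
print([factorial(k + 1) for k in range(MAXLEN + 1)])   # should match the line above
```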

Acknowledgement: The authors wish to thank Yuan Wang for her insightful comments concerning this paper.

References

[1] J. Berstel and C. Reutenauer, Les Séries Rationnelles et Leurs Langages, Springer-Verlag, Paris, 1984.

[2] A. Ferfera, Combinatoire du Monoïde Libre Appliquée à la Composition et aux Variations de Certaines Fonctionnelles Issues de la Théorie des Systèmes, Thèse de 3ème Cycle, Université de Bordeaux I, 1979.

[3] A. Ferfera, Combinatoire du monoïde libre et composition de certains systèmes non linéaires, Analyse des Systèmes, Soc. Math. France, 75-76 (1980) 87-93.

[4] M. Fliess, Fonctionnelles causales non linéaires et indéterminées non commutatives, Bull. Soc. Math. France 109 (1981) 3-40.

[5] M. Fliess, Réalisation locale des systèmes non linéaires, algèbres de Lie filtrées transitives et séries génératrices non commutatives, Invent. Math. 71 (1983) 521-537.

[6] M. Fliess, M. Lamnabhi, and F. Lamnabhi-Lagarrigue, An algebraic approach to nonlinear functional expansions, IEEE Trans. Circuits Syst. 30 (1983) 554-570.

[7] W. S. Gray and Y. Wang, Fliess operators on L_p spaces: convergence and continuity, Systems & Control Letters 46 (2002) 67-74.

[8] B. Jakubczyk, Existence and uniqueness of realizations of nonlinear systems, SIAM J. Contr. Optimiz. 18 (1980) 445-471.

[9] B. Jakubczyk, Local realization of nonlinear causal operators, SIAM J. Contr. Optimiz. 24 (1986) 230-242.

[10] H. Sussmann, Existence and uniqueness of minimal realizations of nonlinear systems, Math. Systems Theory 12 (1977) 263-284.

[11] Y. Wang, Algebraic Differential Equations and Nonlinear Control Systems, Doctoral Dissertation, Rutgers University, 1990.