Port Hamiltonian Systems


University of Bologna
Dept. of Electronics, Computer Science and Systems

Port Hamiltonian Systems
A unified approach for modeling and control of finite and infinite dimensional physical systems

Ph.D. Thesis
Alessandro Macchelli

Coordinator: Prof. Alberto Tonielli
Tutor: Prof. Claudio Melchiorri

This Ph.D. thesis was developed under the supervision of Prof. Claudio Melchiorri. This work has been carried out in the context of the European sponsored project GeoPlex, reference code IST. Further information at

This thesis was written in LaTeX on a RedHat Linux system with XEmacs. All the pictures are hand-made by the author with Xfig.

Copyright © 2002 by Alessandro Macchelli. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording or any information storage and retrieval system, without permission in writing from the author.

to my father... I hope he was right to be proud of me

"A good paper should contain at least one serious error, in order to add some magic to it."
J. Willems (reported by A. J. van der Schaft)


Preface

The funniest thing about a preface is that it is, at the same time, the first part of a book people read and the last one the author writes. Now that I'm writing these last few lines, I'm trying to find a good reason that can justify why I'm doing so and why I spent my last three years working on Hamiltonian systems and the other related topics reported in this thesis. And this is neither a good starting point for the interested reader nor the right conclusion after two hundred pages of hard work. My momentary confusion stems from the strange way in which I decided or, probably more correctly, my supervisor, Prof. Claudio Melchiorri, suggested that I study this new and promising research field on the modeling and control of nonlinear dynamical systems based on the Hamiltonian formulation. I had just graduated, and I went to speak about these interesting things, of which I was unaware, with Prof. Bernard Maschke, who was in Bologna for a seminar during that period. I still have the short notes and the list of references he wrote for me in my room, with all my books and papers. After that short conversation, my studies began, and this book is the result of these three years of activity.

In order to prepare the reader for what awaits him in the next pages, a short summary of the contents of this thesis is presented. In Chapter 1, the port Hamiltonian class of dynamical systems is introduced. The starting point is energy and the assumption that a system can be represented by a proper interconnection of a well-defined set of atomic elements, each of them characterized by a particular energetic behavior. From a mathematical point of view, this network can be described by means of a Dirac structure, and the system configuration can be easily given in terms of energy variables whose time evolution depends on the variation of the internal energy. Furthermore, it is shown that the interaction between dynamical systems in port Hamiltonian form is simply a power exchange. Once the port Hamiltonian representation of a dynamical system is deduced, it is possible to approach the control problem. In Chapter 2, the classical theory on passive systems and passive control is presented, together with some new results on the regulation of port Hamiltonian systems. Moreover, it is shown that it is possible to merge passivity-based control techniques with variable structure ones in order to achieve further robustness properties. In Chapter 3, the port Hamiltonian formulation is generalized in order to cope with distributed parameter systems. Classical infinite dimensional models are presented in this new formulation; among them, Maxwell's equations and the Timoshenko beam. Then, in Chapter 4, the control problem of distributed port Hamiltonian systems is approached. By generalizing the energy-based control techniques developed for the finite dimensional case, the regulation problem of distributed parameter systems in Hamiltonian form is discussed. New results for the stabilization of the Timoshenko beam and of a simple class of mixed finite and infinite dimensional port Hamiltonian systems are presented. Moreover, in Chapter 5, some

aspects of scattering theory within the framework of port Hamiltonian systems are discussed. Starting from a novel formulation in finite dimensions, the scattering theory is generalized in order to deal with infinite dimensions. Well-established results valid for the finite dimensional case are generalized in order to study the power propagation and exchange phenomena for distributed parameter systems. Finally, in Appendix B some results concerning the real-time control of robots with real-time Linux-based operating systems are presented.

Now that I'm concluding the work, I start to think of all the people I would like to thank for their support, suggestions and friendship: my mother, who always took care of me during all this time, and my girlfriend, for the love she gave me during the last two years. I would like to thank my supervisor, Prof. Claudio Melchiorri, for the opportunities he gave me, Prof. Arjan van der Schaft, for the wonderful research period I had at his department, and Prof. Stefano Stramigioli for his friendship and for the almost infinite number of suggestions and remarks about my work. Moreover, I would like to thank Cristian Secchi (it is always a pleasure to work with him), Daniele Arduini (who introduced me to the world of Linux), Marcello Montanari and all the students I had the pleasure to coordinate during their final project period (in particular, Marco Guidetti and Raffaella Carloni). Finally, I would like to thank all the guys who work at L.A.R. for their friendship and the nice moments we had together.

Bologna, 19th of March, 2003
Alessandro

Notation and Symbols

V              vector space
V*             dual space of V
V^⊥            orthogonal complement of V
⟨·, ·⟩         duality product
⟨⟨·, ·⟩⟩       +pairing operator
M_+            representation of the +pairing operation in matrix form
(·, ·)         scalar product
‖·‖            norm
z^{i_1,...,i_q}_{j_1,...,j_p}   tensor of type (q, p)
ε              Levi-Civita tensor
δ              Kronecker delta
D              n-dimensional manifold
∂D             boundary of the manifold D
T_q D          tangent space of D at q
T D            tangent bundle of D
T*_q D         co-tangent space of D at q
T* D           co-tangent bundle
Λ^k(D)         space of k-forms on D
Ω^k(D)         space of differential k-forms on D
∗              Hodge star operator
∗_Z            Hodge star operator based on the metric Z = [z_ij]
∧              exterior product of forms
d              exterior derivative of forms
J = {j_1,..., j_k}   multi-index
#J             order (number of elements) of the multi-index J
D              Dirac structure
C              Casimir function (functional)
S^+, S^−       scattering subspaces
s^+, s^−       scattering variables


Contents

Preface  v
Notation and Symbols  vii

1 Power, ports and interconnections
  Introduction
    System and interconnection. Classical definitions
    Physical modeling
    Energy domains
  Power and power-conserving interconnection
    Basic definitions and properties
    Dirac structure representation
    Interconnection of Dirac structures
  Basics on bond graph
    Physical modeling and bond graphs
    Energy storage elements
    Energy dissipation elements
    Ideal transformations and gyrations
    Ideal sources
    Network structure
    DC motor example
    Finite element of Timoshenko beam example
    Concluding remarks on bond graphs
  Port Hamiltonian systems
    Implicit port Hamiltonian systems
    Port Hamiltonian systems

2 Control of port Hamiltonian systems
  Passive systems and passivity
    Introduction
    Preliminary definitions and results
    Basic considerations on the stabilization of passive systems
    Passive systems and port Hamiltonian systems
  Control by interconnection
    Introduction
    General formulation of energy shaping
    Stabilization by energy balancing
    Control through invariants
    Control via state-modulated source
    IDA-PBC control technique
  Energy-based variable-structure control
    Introduction
    Dynamics of a phd under constraints
    Energy-based approach to sliding-mode
    Variable structure approach for passive control of robots

3 Distributed port Hamiltonian systems
  Introduction
  Stokes-Dirac structures
  Distributed port Hamiltonian system
  Classical examples
    Transmission line
    Maxwell's equations
    Vibrating string
  dph model of the Timoshenko beam
    Background. The classical formulation
    Timoshenko beam in dph form
    Introducing the distributed port

4 Control of distributed port Hamiltonian systems
  Introduction
  Stability for infinite dimensional systems
    Arnold's first stability theorem approach
    La Salle's theorem approach
  Control by damping injection
    Basic results
    Control of the Timoshenko beam by damping injection
  Control by interconnection and energy shaping
    Introduction
    m-ph systems. A simple example
    Casimir functionals
    Control of m-ph systems by energy shaping
    An example. The single transmission line case
  Control by energy shaping of the Timoshenko beam
    Model of the plant
    Casimir functionals for the closed-loop system
    Control by energy shaping of the Timoshenko beam

5 Scattering with applications
  Introduction and a motivating example
  Scattering in the finite dimensional case
    Definitions and basic results
    Scattering mapping and scattering matrix
  Scattering and telemanipulation
    Introduction
    Dealing with time delays in the communication channel
    Telemanipulation and phd systems
  Scattering for distributed systems
    Basic definitions and results
    Scattering mapping
    Interconnection of distributed systems. Scattering matrix
    Scattering mapping and operator in coordinates
    Example. Maxwell's equations: border considerations

A Mathematical background  159
  A.1 Tensors on linear spaces
  A.2 Manifolds, tangent spaces and tangent bundles
  A.3 Tensor fields and tensor bundles
  A.4 Exterior algebra
  A.5 Hodge star operator
  A.6 Differential forms, exterior derivative and Stokes's theorem

B Control of robots with Real-Time Linux  171
  B.1 Introduction
  B.2 Real time Linux. A quick overview
    B.2.1 Short introduction on real-time systems
    B.2.2 RTAI-Linux
  B.3 An experimental setup for robotics
    B.3.1 General overview
    B.3.2 Real-time control of the robot
  B.4 Working with the vision system and the A.S.I. Gripper
    B.4.1 Distance evaluation
    B.4.2 Evaluation of the optimal grasping configuration

Curriculum Vitae  187


List of Figures

1.1  Representation of a dynamical system and an example of network
Mass-spring system and its network structure
Network structure of a physical system
Interconnection of Dirac structures
Power bond connecting two physical systems A and B
Energy storage element
Energy storage C and I elements
Energy dissipation element
Ideal transformer element
Ideal gyrator element
Ideal flow and effort sources
Junction elements
DC drive
Bond graph representation of the DC drive of Fig. 1.13
Finite element of Timoshenko beam
Bond graph of the Timoshenko beam finite element
Implicit port Hamiltonian systems
Series and parallel RLC circuits
Interconnection of physical systems. Σ is the plant, while Σ_C is the controller
Control as state-modulated source: Σ_C is an infinite power source
Magnetic levitation system
Behavior of the proposed control scheme in the case of dim x = 2 and m =
Energy and force of non saturated (continuous) and saturated (dashed) spring
Behavior of the variable structure PD + g(q) controller
A planar 2-dof manipulator
Simulation results with PD + g(q)
Simulation results with variable structure PD + g(q)
Simulation results with partial knowledge of mass parameters: errors
Simulation results with partial knowledge of mass parameters and saturation
Experimental results: perfect gravity compensation
Experimental results: no gravity compensation
Detailed overview of steady state performances (no compensation)
Infinite dimensional port Hamiltonian system with dissipation
Control of flexible structure by damping injection
3.3  Infinitesimal element (length equal to δz) of the transmission line
Bond graph representation of the Timoshenko beam
Control by damping injection of a flexible beam
dof robot with flexible links
An example of m-ph system
Flexible link with mass in x = L
Bond graph representation of the closed-loop system
Scattering in the infinite dimensional case. An overview
Structure of a telemanipulation system
A 2-port element. Power variables vs. scattering variables
Stabilization by dissipation of the channel
Power exchange in terms of scattering variables
Scattering interconnection of phd systems
The operators π^+ and π^−
Interconnection of systems A and B over a subset D of their boundary
Scattering decomposition and Maxwell's equations
B.1  Comau SMART3 S robot and A.S.I. Gripper
B.2  Selection of the finger's target points
B.3  The experimental setup: a general overview
B.4  The real-time module: organization and user-space communication channels
B.5  Screen-shot of the vision software
B.6  Calculation of the best grasping configuration

List of Tables

1.1  Flow and effort in different energy domains
1.2  Generalized states
Parameters of the considered manipulator


Chapter 1

Power, ports and interconnections

The interconnection or, better, the interaction between physical systems can be described in terms of power exchange through power ports. Furthermore, the network structure behind a set of interconnected systems can be mathematically modeled in terms of bond graphs and/or Dirac structures. Power, ports and interconnections are the starting points for the definition of the port Hamiltonian class of systems, a powerful mathematical framework for modeling and controlling physical systems.

1.1 Introduction

System and interconnection. Classical definitions

"What is a system?" is the first question control theory students learn to answer. Roughly speaking, a system is a mathematical entity that describes a particular dynamical relation between a set of input and a set of output signals, see Fig. 1.1(a). The input signals modify the system configuration, summarized by its state variables, while the outputs are a function of both the current configuration and the inputs. It is important to note that there is no particular relation between inputs and outputs, and that input, output and state variables are not required to have a particular physical meaning. To be more precise, in order to define a (dynamical) system it is necessary to specify: i) a time set (domain) T, ii) an input manifold U, iii) a set U_f of admissible input functions u : T → U, iv) a state manifold X, v) an output manifold Y. Then, a quite general definition of a continuous-time non-linear dynamical system can be the following:

Figure 1.1: Representation of a dynamical system and an example of network. (a) Representation of a dynamical system. (b) Network of dynamical systems.

Definition 1.1 (dynamical system). A continuous-time dynamical system Σ is given by the sets T ⊆ R, U, U_f, X and Y, by a state transition function f : X × U × T → TX, such that

ẋ = f(x(t), u(t), t)

has a unique solution for every initial state x_0 ∈ X and admissible input function u : T → U, u ∈ U_f, and by an output function g : X × U × T → Y, such that

y(t) = g(x(t), u(t), t)

In a compact notation, it makes sense to write Σ := (T, U, U_f, X, Y, f, g).

Once the definition of system is given, it is immediate to define when the members of a set of dynamical systems are interconnected, see Fig. 1.1(b).

Definition 1.2 (interconnection). Consider a set of n dynamical systems Σ_i, i = 1,..., n; these systems are interconnected if and only if, for every Σ_i, it is possible to find at least one system Σ_j such that:

i) the input and output manifolds of both systems can be partitioned as U^{i,j} = U^{i,j}_int × U^{i,j}_ext and Y^{i,j} = Y^{i,j}_int × Y^{i,j}_ext;

ii) if u^{i,j} ∈ U^{i,j}_int and y^{i,j} ∈ Y^{i,j}_int, then u^i = y^j and/or u^j = y^i.

From Def. 1.1, a dynamical system can be interpreted as an object that elaborates an information flow received as input and provides another information flow as output, depending on the input and on some initial conditions. The intuitive idea of interconnection of systems is formalized in Def. 1.2: two systems are interconnected if the output information flow of one of them becomes the input of the other. Following the definition, only an exchange of information is created: it is not explicitly shown whether there is mutual influence or any other sort of interaction between the systems.

Physical modeling

Natural phenomena (systems) can clearly be described by means of Def. 1.1 and 1.2, which are quite general and powerful, but physical systems intrinsically possess much more structure.

Consider the simple mass-spring model of Fig. 1.2(a), in which a mass m is interconnected with a spring of stiffness K. If x represents the spring deformation (state variable) and p is the mass momentum (state variable), then the two elements can be modeled by means of the following dynamical systems:

Σ_m : { ṗ = F,  y = p/m (= v) }        Σ_s : { ẋ = v,  y = Kx (= F) }        (1.1)

in which Σ_m is the mass model and Σ_s the spring model. In (1.1), F represents the force applied to the mass by the spring, and it is the output of the Σ_s subsystem, while v is the mass speed, which is the output of the Σ_m subsystem.

Figure 1.2: Mass-spring system and its network structure. (a) Mass-spring system. (b) Network representation of the mass-spring system.

As represented in Fig. 1.2(b), the interconnection is in feedback: the mass integrates the force F in order to determine its speed, while the speed is integrated by the spring so that its deformation can be computed; the force F depends on the deformation x. This is a general behavior when dealing with the physics of systems: in each energetic domain, the only way in which subsystems can be interconnected is in feedback. In other words, there is mutual influence between interacting systems, and the nature of this interaction can be revealed by analyzing what kind of information is exchanged. In the case of the mass-spring system, the subsystems interact by exchanging force-velocity (F, v) information. What happens is that one subsystem (the mass) imposes the velocity v, while the second one (the spring) imposes the force F. It is well known that the quantity

P = F v        (1.2)

is power. Consequently, the interaction of the mass with the spring simply results in an exchange of power between the two subsystems. Generally speaking, physical system interconnections are well-defined if the power flows are specified. Moreover, input and output are related in the sense that their product has to give power and, within this context, they cannot be considered and treated independently.

From Def. 1.1, a dynamical system is specified once a state transition function and an output function are introduced. Then, dynamics and an interface with the environment are the main components of a system. When dealing with physical systems, the nature of this interaction is simply an exchange of power that modifies the internal energy of the systems. This is clear if the

mass and spring models (1.1) are considered: it is well known, in fact, that the mass (kinetic) energy E_m and the spring (potential elastic) energy E_s are given by:

E_m(p) = (1/2) p²/m        E_s(x) = (1/2) K x²        (1.3)

Note that what in classical system theory are called state variables can, in this framework, be called energy variables. Then, the effect of the input and output signals is to modify the system's total energy: input and output have to be considered together, since they specify the time variation of the internal energy. In fact, from (1.1) and (1.3), we have that

dE_m/dt = (p/m) ṗ = v F = P        dE_s/dt = (K x) ẋ = F v = P        (1.4)

where the power P is defined in (1.2). The pair of relations (1.4) expresses the well-known physical property that, in the mass-spring system of Fig. 1.2, a continuous conversion between kinetic and potential elastic energy takes place: this is the source of the oscillatory behavior of the whole system. Input and output can be considered together, in the sense that they set up a sort of system interface with the environment: this interface is called a power port. Then, physical systems can interact if these ports are connected: in this way, an exchange of power is possible. As can be noted from the models (1.1), independently of the energetic domain to which a given system belongs, the interaction with a generic environment involves an exchange of two types of information: extending the terminology of the mass-spring system, these signals can be called speed and force information. In the bond graph formalism, as discussed in much more detail in Sec. 1.3, the speed information is called flow, while the force information is called effort. Depending on the nature of the particular physical system under study, flows and efforts can be input or output, and an example is given, again, by the mass-spring system. The spring subsystem Σ_s receives as input a flow (v) and produces an effort (F) as output. The flow acts on the state (energy) variable, the spring deformation x, modifying the stored energy and, consequently, the force the spring generates. This force is the effort provided as output. On the other hand, it is possible to say that the mass subsystem Σ_m behaves in a dual way with respect to Σ_s: the input is an effort, while the output is a flow. What happens is that the force acting on the mass modifies the system momentum p (energy variable), consequently changing the kinetic energy and, then, the mass speed. From an energetic or, better, from a port behavior point of view, mass and spring are complementary objects. As will be pointed out in Sec. 1.3, the dynamical behavior of generic and complex physical systems can be modeled by properly interconnecting simple elements characterized by mass-like or spring-like input/output relations.

But this is not enough. Taking the mass-spring system again as a paradigm of a generic physical system, it is easy to note that the total energy E_m + E_s of the system remains constant. In fact, from (1.4) and recalling that, at the interconnection, the power entering one element is the power leaving the other (the two port powers have opposite signs), we have that

d/dt (E_m + E_s) = dE_m/dt + dE_s/dt = 0        (1.5)
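To make the feedback interconnection (1.1) and the energy bookkeeping of (1.3)-(1.5) concrete, the following minimal numerical sketch (not part of the original text) simulates the two port-connected subsystems. It assumes the usual sign convention in which the spring applies the restoring force F = -Kx to the mass; the parameter values are arbitrary.

```python
import numpy as np

# Mass and spring of (1.1) as two one-port subsystems exchanging the power
# variables (F, v).  Assumed convention: the spring pushes against its own
# deformation, F = -K x.  Semi-implicit Euler keeps the total energy bounded.
m, K = 1.0, 4.0
dt, steps = 1e-4, 200_000

p, x = 0.0, 0.1                        # initial momentum and spring deformation
E0 = 0.5 * p**2 / m + 0.5 * K * x**2   # initial total energy E_m + E_s

for _ in range(steps):
    F = -K * x                         # effort produced by the spring port
    p += dt * F                        # mass dynamics:   p_dot = F
    v = p / m                          # flow produced by the mass port
    x += dt * v                        # spring dynamics: x_dot = v

E = 0.5 * p**2 / m + 0.5 * K * x**2
print(f"relative drift of E_m + E_s: {abs(E - E0) / E0:.2e}")   # close to zero
```

The printed drift stays small because the discrete update mimics the power balance (1.4): at every step the energy gained by one element is the energy lost by the other.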

Intuitively speaking, it is also necessary to model the energy dissipation phenomena that can occur within every physical system. In the case of the mass-spring system, energy dissipation can be due to friction or to a damper interconnected in parallel with the spring. Moreover, the whole system can itself interact with the environment: consider, for example, the action of an external force acting on the mass. We start to understand that, behind any physical system, a sort of network structure can be revealed. Generalized masses, springs and dampers are interconnected with each other and can exchange power: by properly choosing this network of objects, it is possible to model almost any physical system within an energy-based framework. This way of modeling systems is called Physical Modeling. The scheme of Fig. 1.3 summarizes this idea.

Figure 1.3: Network structure of a physical system.

Energy domains

The unified approach to modeling complex systems introduced in the previous section reveals, and takes inspiration from, a strong and fascinating property of Nature, that is, the similarity and almost complete equivalence among the different energetic domains. A typical example is again the mass-spring system: it is well known, in fact, that the same mathematical model can describe the behavior of an electric circuit made of a capacitor and an inductor connected in parallel. The same equation that models the time evolution of the mass speed holds for the time evolution of the current flowing through the inductor. The oscillatory behavior of both systems is a direct consequence of the fact that two different energetic sub-domains are interconnected or, in other but equivalent words, are interacting. This concept becomes clear once the definition of what a physical domain represents is given. From an intuitive point of view, given a complex system, it is possible to discriminate the different energy domains by considering the different kinds of energy that each part of the whole system can store. When considering the kinetic energy of a mass moving in a plane, we are reasoning in the translational mechanical domain, while, if the potential energy stored within a capacitor is considered, we are implicitly assuming the electrical domain. The most important energetic domains are: mechanical, electromagnetic, hydraulic,

thermal. Moreover, each of them, with the only exception of the thermal one, can be further split into two sub-domains:

mechanical: mechanical potential and mechanical kinetic;
electromagnetic: electrical and magnetic;
hydraulic: hydraulic potential and hydraulic kinetic.

The thermal domain is the only one with no dual sub-domains, due to the possibility of irreversible transformation of energy. Even if the macroscopic phenomena that can take place in each of these energy domains are really different from each other, from an energy-description point of view they present a common behavior. Every dynamical system belonging to a specific energy domain can be described as a network of atomic elements, each of them representing a specific energetic property, e.g. energy storage, dissipation or transformation (see again Fig. 1.3). Each element has its own power port (a flow-effort pair) through which it can interact with the environment and/or other elements. The power flow will be a proper function of the two port signals. In Sec. 1.2 a mathematical tool describing the network structure behind each physical system will be introduced, while in Sec. 1.3 a graphical language for physical modeling will be discussed.

1.2 Power and power-conserving interconnection

Basic definitions and properties

The interconnection of physical systems is power exchange. In order to mathematically model these phenomena, it is necessary to give a definition of power and to introduce a proper set of tools that will be useful to treat and describe the network structure behind every physical system. Consider an n-dimensional linear space F. It is well known that a linear function on a vector space F is a map e : F → R satisfying

e(f_1 + f_2) = e(f_1) + e(f_2)        e(c f) = c e(f)

with f, f_1, f_2 ∈ F and c ∈ R. Then, we give the following:

Definition 1.3 (dual space). Consider a linear space F. Its dual space is the set F* of all linear functions e : F → R.

It is easy to prove that the dual of a linear space of dimension n is again an n-dimensional linear space. Once the dual of a linear space is defined, it is possible to give the mathematical definition of power.

Definition 1.4 (power). Consider a linear space F and denote its dual by E := F*. The product space F × E is called the space of power variables, with power defined by

P = ⟨e, f⟩        (1.6)

with (f, e) ∈ F × E, where ⟨e, f⟩ is the duality product, that is, the value of the linear function e ∈ E acting on f ∈ F.

Example 1.1. If F is the space of currents, then its dual E ≅ F* is the space of voltages and ⟨e, f⟩ is the electrical power. In the same way, if F is the space of generalized velocities, then E is the space of generalized forces and ⟨e, f⟩ is the mechanical power.

Definition 1.5 (+pairing operator). Consider the space of power variables F × E. The following symmetric bilinear form is defined:

⟨⟨(f_1, e_1), (f_2, e_2)⟩⟩ := ⟨e_1, f_2⟩ + ⟨e_2, f_1⟩        (1.7)

with (f_i, e_i) ∈ F × E, i = 1, 2; ⟨⟨·, ·⟩⟩ is called the +pairing operator.

Note 1.1. If a basis is assumed on F and the dual basis on E, and if flows and efforts are represented as n-dimensional column vectors, then it is possible to write

⟨e, f⟩ = e^T f        ⟨⟨(f_1, e_1), (f_2, e_2)⟩⟩ = e_1^T f_2 + e_2^T f_1

Moreover, the +pairing operator (1.7) admits the following matrix representation:

⟨⟨(f_1, e_1), (f_2, e_2)⟩⟩ = [f_2^T  e_2^T] M_+ [f_1; e_1]        with M_+ := [0  I_n; I_n  0]

Consider a linear subspace S ⊂ F × E of dimension m; its orthogonal complement with respect to the +pairing operator is given by the set

S^⊥ = { (f, e) ∈ F × E : ⟨⟨(f, e), (f̃, ẽ)⟩⟩ = 0 for all (f̃, ẽ) ∈ S }        (1.8)

which is again a linear subspace of F × E, with dimension 2n − m, since (1.7) is a non-degenerate form. Based on the +pairing operator (1.7), it is possible to give the fundamental definition of a Dirac structure, the basic mathematical tool used to describe the interconnection structure between physical systems.

Definition 1.6 (Dirac structure). Consider the space of power variables F × E and the symmetric bilinear form (1.7). A (constant) Dirac structure on F is a linear subspace D ⊂ F × E such that D = D^⊥.

Note 1.2. The dimension of a Dirac structure D on an n-dimensional linear space F is equal to n. In fact, from (1.8) and the definition of Dirac structure, we have that

F × E = D ⊕ M_+ D

with the linear map M_+ non-singular. Then,

2n = dim(F × E) = dim(D) + dim(M_+ D) = 2 dim(D)

and, consequently, dim(D) = n. Moreover, suppose that (f, e) ∈ D; from (1.7), we have that

0 = ⟨⟨(f, e), (f, e)⟩⟩ = 2 ⟨e, f⟩

Then, it can be deduced that, for every (f, e) ∈ D,

⟨e, f⟩ = 0        (1.9)

or, equivalently, that every Dirac structure D on F defines a power-conserving relation between the power variables (f, e) ∈ F × E.
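As a concrete illustration of Definition 1.6 and Note 1.2 (not part of the original text), the following sketch checks numerically that the graph of a skew-symmetric map is a Dirac structure, using the matrix form M_+ of the +pairing from Note 1.1; the matrix J and its size are arbitrary.

```python
import numpy as np

# Check that D = {(f, e) : f = J e}, with J skew-symmetric, is a Dirac structure:
# the +pairing vanishes identically on D and dim D = n, so D = D^perp (Note 1.2).
n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
J = A - A.T                                    # skew-symmetric: J^T = -J

D = np.vstack([J, np.eye(n)])                  # columns span D: (f, e) = (J e, e)

M_plus = np.block([[np.zeros((n, n)), np.eye(n)],   # matrix of the +pairing
                   [np.eye(n),        np.zeros((n, n))]])

print(np.allclose(D.T @ M_plus @ D, 0))        # <<(f1,e1),(f2,e2)>> = 0 on D
print(np.linalg.matrix_rank(D) == n)           # dim D = n
```

The two printed checks correspond exactly to the sufficient conditions of Proposition 1.1 stated next.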

Proposition 1.1. Suppose that F is an n-dimensional linear space, with dual space E ≅ F*, and that D ⊂ F × E is an n-dimensional linear subspace such that ⟨e, f⟩ = 0 for all (f, e) ∈ D. Then, D is a Dirac structure on F.

Proof. Consider (f_1, e_1), (f_2, e_2) ∈ D; clearly, also (f_1 + f_2, e_1 + e_2) ∈ D and, from the hypothesis, we have that

0 = ⟨e_1 + e_2, f_1 + f_2⟩ = ⟨e_1, f_2⟩ + ⟨e_2, f_1⟩ + ⟨e_1, f_1⟩ + ⟨e_2, f_2⟩ = ⟨⟨(f_1, e_1), (f_2, e_2)⟩⟩

Then, D ⊂ D^⊥ and, since dim(D^⊥) = 2n − dim(D) = n = dim(D), D = D^⊥.

Note 1.3. The fact that dim(D) = dim(F) is related to an interesting property of physical systems. Consider, for example, the interconnection of electrical networks: it is well known that it is not possible to impose both currents and voltages. By generalization, a physical interconnection cannot determine both the flow and the effort.

Dirac structure representation

Once a basis for the space of flows is specified and its dual is assumed on the space of efforts, it is possible to give several matrix representations of a Dirac structure. Suppose that D ⊂ F × E is a constant Dirac structure on F, with dim(F) = n. Then, the following propositions present some of the possible matrix representations of D.

Proposition 1.2 (kernel and image representation). A Dirac structure D ⊂ F × E on F, with dim(F) = n, can be given as

D = { (f, e) ∈ F × E : F f + E e = 0 }        (1.10)

where F and E are n × n matrices such that

(i) E F^T + F E^T = 0        (1.11a)
(ii) rank [F  E] = n        (1.11b)

Equivalently,

D = { (f, e) ∈ F × E : f = E^T λ, e = F^T λ, λ ∈ R^n }        (1.12)

Proof. Since D is a Dirac structure, from Def. 1.6 and Note 1.2, it is known that it is a linear subspace of F × E of dimension n. Then, it can be represented as the image of a full-rank matrix obtained by stacking E^T over F^T, that is

D = Im [E^T; F^T]        with rank [E^T; F^T] = rank [F  E] = n

25 1.2 Power and power-conserving interconnection 9 Consider λ i R n, then define f i = E T λ i and e i = F T λ i, with i = 1, 2. From Def. 1.6, it is necessary that (f i, e i ) D, i = 1, 2, (f 1, e 1 ), (f 2, e 2 ) = 0 or equivalently, that λ 1 T F E T λ 2 + λ 2 T F E T λ 1 = λ 1 T [ F E T + EF T] λ 2 = 0 for all possible λ 1, λ 2 R n, and this can be true if and only if (1.11a) holds. The kernel representation (1.10) can be deduced since, from (1.11a) and (1.12), given λ R n 0 = [ F E T + EF T] λ = F f + Ee for all possible (f, e) D. Proposition 1.3 (constrained input-output representation). A Dirac structure D F E on F, with dim (F) = n, can be given as D = { (f, e) F E f = Je + Gλ, G T e = 0 } (1.13) where J is an n n skew-symmetric matrix and G is matrix of proper dimensions. Moreover, Im G = {f F (f, 0) D} and Ker J = {e E (0, e) D}. Proof. It is enough to prove that (1.13) defines a generic Dirac structure. First of all, note that D D since, if (f, e) D, then (f, e), (f, e) = 2 e, f = e T (Je + Gλ) = 0 In order to prove that D D and then D = D, consider (f, e) D, that is, for every ( f = Jẽ + G λ, ẽ) D, (f, e), ( f, ẽ) = 0. Then, ( ) 0 = e T Jẽ + G λ + ẽ T f = ẽ T (f Je) + e T G λ If λ = 0, then necessarily (f Je) Im G; consequently, it is possible to find λ such that f Je = Gλ, that is f = Je + Gλ. Moreover, if ẽ = 0, then e T G λ = 0 for every λ. It can be deduced that Ge T = 0. Since we proved that, if (f, e) D, then f = Je + Gλ and Ge T = 0, it is verified that (f, e) D and then that D D. The fact that Im G = {f F (f, 0) D} is immediate from (1.13); moreover, if (0, e) D, then (0, e), ( f, ẽ) = 0 for every ( f ) = ẽ + G λ, ẽ) D), that is e (Jẽ T + G λ = e T Jẽ = 0. This is true if and only if e Ker J. Proposition 1.4 (hybrid input-output representation). Consider a Dirac structure D given in kernel representation (1.10). Suppose that rank F = m n, then select m independent columns of F and group them into a matrix F 1. Write (possibly after permutations) ] F = [F 1. F 2 and correspondingly E = ] [ f1 [E 1. E 2, f = f 2 ], e = [ e1 e 2 ]

26 10 Power, ports and interconnections ] Then, the matrix [F 1. E 2 is invertible and D = {[ f1 f 2 ] F, [ e1 e 2 ] E ] 1 ] with J = [F 1. E 2 [F 2. E 1 skew-symmetric. [ f1 e 2 ] = J [ e1 f 2 ]} (1.14) Proposition 1.5 (canonical coordinates representation). Consider a Dirac structure D F E on F; there exist linear coordinates (q, p, r, s) for F and corresponding dual coordinates on E such that (f, e) = (f q, f p, f r, f s, e q, e p e r, e s ) D if and only if { fq = e p, f p = e q (1.15) f r = 0, e s = 0 Consider a Dirac structure D on F. Then it is possible to define the following subspaces of F and E: G 0 := {f F (f, 0) D} G 1 := {f F e E s.t. (f, e) D} P 0 := {e E (0, e) D} P 1 := {e E f F s.t. (f, e) D} (1.16) The subspace G 1 is the set of all admissible flows, while P 1 is the set of admissible efforts. It is possible to prove that: P 0 = G orth 1 := {e E e, f = 0, f G 1 } P 1 = G orth 0 := {e E e, f = 0, f G 0 } and, if the kernel-image representation ( ) for D is adopted, that G 1 = Im E T P 1 = Im F T Interconnection of Dirac structures In this section, the compositionally properties of Dirac structures are discussed. From an intuitive point of view, it seems clear that the composition of power-conserving interconnection results into another power-conserving interconnection. In terms of Dirac structure, it will be pointed out that the power-conserving interconnection by means of partially shared variables of Dirac structures will lead to another Dirac structure. First of all, consider two Dirac structures D 12 and D 23 on the space of flows given by F 1 F 2 and F 2 F 1 respectively. The space F 2 is the space of shared flow variables and its dual E 2 the space of shared effort variables. This means that the interconnection between the Dirac structures D 12 and D 23, indicated by D 12 D 23, is the result of a power exchange through the port variables belonging to F 2 E 2. The interconnection between physical systems is power-conserving if the incoming power in a system is the outgoing power from the second one. Two different solutions are presented by the following definitions:

Definition 1.7 (common flow connection). Consider the Dirac structures D_12 on F_1 × F_2 and D_23 on F_2 × F_3. If (f_1, f_2^12) ∈ F_1 × F_2 and (f_2^23, f_3) ∈ F_2 × F_3, and the same decomposition holds on the dual spaces E_1 × E_2 and E_2 × E_3, then D_12 and D_23 are interconnected with common flow if

f_2^12 = f_2^23        e_2^12 = −e_2^23        (1.17)

Definition 1.8 (common effort connection). Under the same conditions of the previous definition, D_12 and D_23 are interconnected with common effort if

f_2^12 = −f_2^23        e_2^12 = e_2^23        (1.18)

It is easy to prove that the interconnections described by (1.17) and by (1.18) are power conserving. The following theorem summarizes the properties of interconnected Dirac structures.

Theorem 1.6 (Dirac structure interconnection). Consider the Dirac structures D_12 and D_23 introduced above. If the interconnection is given by (1.17) or by (1.18), then D_12 ∘ D_23 is a Dirac structure.

Figure 1.4: Interconnection of Dirac structures.

At this point, it is easy to deal with the composition of several Dirac structures. Suppose that D_i, i = 1,..., k, are Dirac structures on F_i, interconnected by a Dirac structure D_I on F_1 × ... × F_k × F, with F a linear space of flow port variables, see Fig. 1.4. By subsequent applications of Thm. 1.6, it can be deduced that the resulting interconnection structure D_1 ∘ ... ∘ D_k ∘ D_I is a Dirac structure.
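The composition result of Thm. 1.6 can be checked numerically. The sketch below (not part of the original text) composes two Dirac structures given as graphs of skew-symmetric maps through the common effort connection (1.18) and verifies that the result is again a Dirac structure; the sizes, the random matrices and the use of scipy are illustrative choices.

```python
import numpy as np
from scipy.linalg import null_space

# Two Dirac structures as graphs of skew-symmetric maps:
#   D12 = {(f1, f2, e1, e2) : (f1, f2) = J12 (e1, e2)} on F1 x F2,
#   D23 = {(f2, f3, e2, e3) : (f2, f3) = J23 (e2, e3)} on F2 x F3,
# composed through the shared port via (1.18): f2^12 = -f2^23, e2^12 = e2^23.
n1, n2, n3 = 2, 2, 2
rng = np.random.default_rng(3)

def skew(n):
    A = rng.standard_normal((n, n))
    return A - A.T

J12, J23 = skew(n1 + n2), skew(n2 + n3)

# Parametrize D12 by e = (e1, e2) and D23 by e' = (e2', e3); the interconnection
# constraints f2 + f2' = 0 and e2 - e2' = 0 are linear in (e, e').
C = np.block([
    [J12[n1:, :], J23[:n2, :]],
    [np.hstack([np.zeros((n2, n1)), np.eye(n2)]),
     np.hstack([-np.eye(n2), np.zeros((n2, n3))])],
])
N = null_space(C)                                  # admissible (e, e') pairs

# Composed port vectors (f1, f3, e1, e3) for each basis element of the null space.
e, ep = N[:n1 + n2, :], N[n1 + n2:, :]
f1, f3 = (J12 @ e)[:n1, :], (J23 @ ep)[n2:, :]
e1, e3 = e[:n1, :], ep[n2:, :]
B = np.vstack([f1, f3, e1, e3])

M_plus = np.block([[np.zeros((n1 + n3, n1 + n3)), np.eye(n1 + n3)],
                   [np.eye(n1 + n3), np.zeros((n1 + n3, n1 + n3))]])
print(np.allclose(B.T @ M_plus @ B, 0))            # +pairing vanishes on D12 o D23
print(np.linalg.matrix_rank(B) == n1 + n3)         # dimension n1 + n3: Dirac structure
```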

1.3 Basics on bond graph

Physical modeling and bond graphs

The modeling language of bond graphs was formalized by Paynter in 1961 (Paynter, 1961), and most of the ideas presented in this section take inspiration from (Stramigioli, 1999; Stramigioli, 2001). The basic idea behind bond graphs is that any physical system can be modeled by properly interconnecting a set of simple elements, each of them characterized by a particular energetic behavior. Energy can be stored, dissipated and converted from one physical domain to another: each of these physical phenomena is represented by a simple atomic element. A common characteristic of these elements is the presence of a port with a pair of external variables belonging to dual spaces (flows and efforts), whose duality product gives the power exchanged with the environment. The behavior of a complex physical system is the result of a certain network of these simple elements. Note that there are no a priori assumptions on the energetic domain under study: every energetic domain can be modeled by using this approach. Flows and efforts are abstractions, and they allow a unified power description across different domains. In Tab. 1.1, the correspondences between flow and effort in some common domains are given.

Table 1.1: Flow and effort in different energy domains.

energy domain        flow              effort
mech. translational  velocity v        force F
mech. rotational     ang. velocity ω   torque τ
electromagnetic      current i         voltage V
hydraulic            flow rate Q       pressure p
thermal              entropy flow Ė    temperature T

The bond graph is a graphical language for modeling physical systems. The basic idea is to interconnect each subsystem with an energetic bond (or simply bond), which represents the power exchange between subsystems. An example is given in Fig. 1.5, which shows the interconnection of two systems A and B. The edge is the power bond, while f and e represent the flow and the effort at the power ports.

Figure 1.5: Power bond connecting two physical systems A and B.

It is important to understand that:

- each bond represents a power interconnection, that is, both an effort e and a (dual) flow f (e.g. current and voltage, or force and velocity) are present;

- the half arrow on the bond does not provide the positive direction of either the effort or the flow, but only the positive direction of the power flow. Considering the simple bond graph of Fig. 1.5, if the power P = e f is greater than zero, a power flow from A to B is present; if P < 0, then the power is flowing from B to A;

- if the causal stroke is present on the bond, as in Fig. 1.5, then the positive direction of the effort e is specified. In this case, the positive direction of the flow f is the opposite one.

By using bonds and junctions, it is possible to model every network structure behind each physical system.

This network interconnects a set of atomic elements, each characterized by a specific energetic property (e.g. storage, dissipation, conversion and so on). This is a generalization of what happens in the case of electric circuits, in which an electric network connects capacitors, inductors, resistances, transformers and sources. A short description of these atomic elements is given in the following subsections, together with a couple of examples.

Energy storage elements

A storage element is an atomic element with the property of storing energy; typical examples are masses, springs, capacitors or inductors. Even if, from a macroscopic point of view, these elements are really different from one another, their energetic description is the same. As presented in Fig. 1.6, when represented in integral form, every energy storage element is characterized by: an input signal u(t); an output signal y(t); a state variable x(t); a scalar energy function E(x). Following Def. 1.1, its mathematical model is given by

ẋ = u(t)        y(t) = g(x) = ∂E/∂x        (1.19)

Figure 1.6: Energy storage element (x(t) = x(0) + ∫_0^t u(τ) dτ, y(t) = ∂E/∂x (x)).

It has already been pointed out that, in physical modeling, the input/output signals of a system can only be flows and efforts. Then, energy storing elements can be classified on the basis of the kind of input signal they receive; clearly, an analogous distinction can be made by considering the nature of the output signal. In other words, only the following two classes of energy storing elements can exist:

C elements: they receive a flow as input and provide an effort as output;
I elements: they receive an effort as input and provide a flow as output.

This is intimately related to the fact, discussed in the section on energy domains, that in every energy domain it is possible to locate two dual energy sub-domains, each of them able to store a particular kind of energy: kinetic energy for masses and potential energy for springs, if the mechanical domain is taken as an example.

Figure 1.7: Energy storage C and I elements. (a) C element: state q, output effort γ_c(q). (b) I element: state p, output flow γ_i(p).

Since the (duality) product of u and y gives the power flow, the relation between input, output and energy function is the following power balance equation:

Ė = dE/dt = (∂E/∂x) ẋ = u y = P_s        (1.20)

The variation of the internal energy equals the power P_s supplied through the port. Note that an incoming direction of the power flow is implicitly assumed: this means that the direction of the bond connecting the energy storing element to the remaining part of the network always points toward the element itself. The bond graph representation of both the C and I elements is given in Fig. 1.7, together with their state space representations. Consider, for example, the C element. Its state variable q is the integral of the (input) flow and the (output) effort is given by a proper function γ_c(q) = ∂E_c/∂q of the state, closely related to the energy function E_c. The state variable q is called the generalized displacement, while the function E_c is the generalized potential energy. The reason why these names are adopted can be easily understood if we recall that a mechanical example of a C element is a spring. From a physical modeling point of view, a spring receives a velocity (that is, a flow) as input and, by integration, it computes the deformation (a displacement), of which the force is a function. The electrical equivalent of the spring is the capacitor. Consider, for example, a linear capacitor with capacitance C. From physics, it is well known that the stored electrical energy is a function of the electrical charge Q and is given by:

E_s(Q) = (1/2) Q²/C

Moreover, the charge Q is the integral of the current i, a flow (see Tab. 1.1), that is

Q(t) = Q(0) + ∫_0^t i(τ) dτ

The voltage is a function of the stored charge and is given by

V(t) = Q(t)/C = ∂E_s/∂Q (Q)

Non-linear capacitors can be modeled by introducing a non-quadratic energy function E_s. Similar considerations hold for the I element. In this case, the state variable is the generalized momentum p, while the function E_m is called the generalized kinetic energy. Examples of I elements are masses and inductors. In Tab. 1.2, a list of the names of the state variables in several energy sub-domains is presented; in this table, ∫e indicates the integral of an effort, while ∫f the integral of a flow.
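The capacitor example can be checked numerically. The following sketch (not part of the original text) drives a linear C element with an arbitrary current profile and verifies the power balance (1.20); the parameter values are illustrative.

```python
import numpy as np

# A linear capacitor as a C element: state Q (generalized displacement),
# energy E(Q) = Q^2 / (2C), input the current i (flow) and output the voltage
# V = dE/dQ = Q/C (effort).  The power balance (1.20), dE/dt = V * i, is
# checked numerically for an arbitrary current input.
C, dt, T = 1e-3, 1e-5, 0.02
t = np.arange(0.0, T, dt)
i = 0.5 * np.sin(2 * np.pi * 50 * t)       # arbitrary current profile (flow input)

Q = np.cumsum(i) * dt                      # Q(t) = integral of the flow
E = 0.5 * Q**2 / C                         # stored energy
V = Q / C                                  # effort output of the C element

lhs = np.gradient(E, dt)                   # dE/dt
rhs = V * i                                # supplied power P_s = V * i
print(np.max(np.abs(lhs - rhs)))           # small: the power balance (1.20) holds
```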

31 1.3 Basics on bond graph 15 domain gen. momentum e gen. displacement f mech. translational momentum p displacement x mech. rotational angular mom. m angular displ. θ electromagnetic flux linkage φ charge Q hydraulic pressure mom. P p volume V thermic entropy E PSfrag replacements Table 1.2: Generalized states. R : r Figure 1.8: Energy dissipation element. Non-linear capacitors can be modeled by introducing a non-quadratic energy function E s. Similar considerations hold for the I-element. In this case, the state variable is the generalized momentum p, while the function E m is called generalized kinetic energy. Examples of I-elements are masses and inductors. In Tab. 1.2, a list of names of state variables for several energy subdomains is presented; in this table, e indicates the integral of an effort, while f the integral of a flow Energy dissipation elements An energy dissipation element models the irreversible phenomena of the conversion of (mechanical, electrical, etc.) energy to the thermal one. An energy dissipator is characterized by a statical relation between effort and flow like the following ones: e = Z(f) (impedance form) or f = Y (e) (admittance form) (1.21) for which the following inequalities have to hold: Z(f)f 0 or ey (e) 0 (1.22) Since the (dual) product of effort and flow gives the power flowing through the bond, from (1.21) and (1.22) we have that P = ef 0 So, it can be deduced that, in order to have a dissipative behavior, the direction of the bond connecting a dissipative element with a generic environment has to be always directed toward the element. Examples of dissipative elements are resistances (electromagnetic domain) and dampers (mechanical domain). Dissipative elements are represented by the symbol R; the bond graph is reported in Fig Ideal transformations and gyrations Energy storing elements (C, I) and dissipative elements (R) are characterized by the presence of only one (scalar) power port. In network and communication theory, it is well-known that it

32 PSfrag replacements 16 Power, ports and interconnections e in e out e in e out TF MTF f in.. f out f in f n out n Figure 1.9: Ideal transformer element. is possible to speak about multi-port elements, that is about elements that can be connected to the environment by means of several ports. In this section, two 2-port elements characterized by a static behavior are introduced. Both elements main feature is that the incoming power flow P in equals the outgoing one P out. If (f in, e in ) are the power variables of the input port and (f out, e o ut) the power variables of the output port, then the power balance equation is given by: P in = e in f in = e out f out = P out (1.23) The orientation of the power bonds is obviously given as in Fig. 1.9 and Fig An ideal transformer is a 2-port element characterized by a linear relation between input and output flows. If n > 0 is a real number, we have that Clearly, (1.23) can be satisfied if or, equivalently, if f out = n f in e in = n e out e out = 1 n e in This element can be used to model gear-boxes or electrical transformers; in general, it models ideal energy transformations in which a flow is mapped into another flow and, correspondingly, an effort to another effort. It is represented by the symbol TF (transf ormer) or by MTF (modulated transf ormer) and the bond graph description is given Fig Differently from the transformer, the gyrator linearly relates the effort of the output port with the flow of the input port, that is e out = n f in Then, the power conserving property (1.23) can be satisfied if or, equivalently, if e in = n f out f out = 1 n e in The typical example for a gyrator is the DC motor where electrical power flows in and mechanical power flows out. The motor constant K is the information that relates the input current i and the output torque τ = Ki. Due to the power conservation properties of this element, the remaining flow and effort are related by u = Kω, where u is the e.m.f. of the motor and ω the angular speed. The bond graph representation of a gyrator is given in Fig Note 1.4. Transformer and gyrator define two distinct Dirac structures D T F and D GY on F := (f in, f out ).

33 PSfrag replacements 1.3 Basics on bond graph 17 e in e out e in e out GY MGY f in.. f out f in f n out n PSfrag replacements Figure 1.10: Ideal gyrator element. PSfrag replacements f : S f e in f in e : S e e in f in Figure 1.11: Ideal flow and effort sources Ideal sources A source is an element that is able to generate energy. Two kind of element are considered: the ideal flow (S f ) and the ideal effort source (S e ), whose bond graph representation is given in Fig Note that the power bond are characterized by an outgoing direction since P s = ef is the supplied power. These element can supply a specified effort or flow independently of the value assumed by the corresponding dual variable. Typical examples are the ideal voltage and ideal current sources Network structure Once the basic 1-port and 2-port element have been introduced, it is necessary to present the way in which the members of this set of atomic components can be interconnected each other. The resulting power-conserving network is able to model, together with the static and dynamical equations of these simple systems, the dynamic of a complex physical system. In circuit theory, the topology of each electric network can be described by means of the Kirchoff s laws. In this contest, the power conserving interconnection of physical system is based on a generalization of these laws to deal with other energetic domains than the electromagnetic one. These generalization brings to the introduction of two basic elements: the 1-junction, also called the flow junction, and the 0-junction, also called the effort junction. These elements are presented in Fig Each of them is characterized by a set of n incoming power bonds with power variables (f in,1, e in,1 ),..., (f in,n, e in,n ), and by a set of m outgoing power bonds with power variables (f out,1, e out,1 ),..., (f out,m, e out,m ). Since the interconnection structure has to be power conserving, the total incoming power is equal to the outgoing one, that is: n e in,i f in,i = i m e out,i f out,i i n e in,i f in,i i m e out,i f out,i = 0 (1.24) i The bond graph representation of the 1-junction is given in Fig. 1.12(a). This junction is characterized by the property that all connected bonds are constrained to assume the same flow value. This is the reason why it is also called flow junction. Then, the equation describing a 1-junction are f in,1 = = f in,n = f out,1 = = f out,m (1.25a)

as regards the flows, while

Σ_{i=1..n} e_in,i = Σ_{i=1..m} e_out,i        (1.25b)

as regards the efforts, as can be deduced from the power-conserving relation (1.24). This junction generalizes the Kirchhoff law expressing the fact that all the elements connected in series share the same current (i.e. the same flow), while the sum of the voltages (i.e. the sum of the efforts) is equal to zero.

Figure 1.12: Junction elements. (a) 1-junction. (b) 0-junction.

The 0-junction is presented in Fig. 1.12(b). This junction is characterized by the property that all connected bonds are constrained to assume the same effort value. This is the reason why it is also called the effort junction. Then, the describing equations are

e_in,1 = ... = e_in,n = e_out,1 = ... = e_out,m        (1.26a)

as regards the efforts, while

Σ_{i=1..n} f_in,i = Σ_{i=1..m} f_out,i        (1.26b)

as regards the flows, as can be deduced from the power-conserving relation (1.24). This junction generalizes the Kirchhoff law expressing the fact that all the elements connected in parallel share the same potential (i.e. the same effort), while the sum of the currents at a node must be equal to zero.

Note 1.5. The 1-junction and 0-junction equations (1.25) and (1.26) define two Dirac structures D_1 and D_0 on the space of the port variables (f_in,1,..., f_in,n, f_out,1,..., f_out,m).
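As a quick numerical sanity check of Note 1.5 and of the power-conserving relation (1.24), the sketch below (not part of the original text) generates arbitrary port values compatible with the 0-junction relations (1.26) and verifies that the total power balances.

```python
import numpy as np

# Check that the 0-junction relations (1.26) imply the power balance (1.24)
# for arbitrary admissible port values.
rng = np.random.default_rng(2)
n, m = 3, 2                                  # number of incoming / outgoing bonds

e = rng.standard_normal()                    # (1.26a): common effort on all bonds
f_in = rng.standard_normal(n)                # arbitrary incoming flows
f_out = rng.standard_normal(m)
f_out[-1] = f_in.sum() - f_out[:-1].sum()    # (1.26b): the flow sums balance

P_in = np.sum(e * f_in)                      # total incoming power
P_out = np.sum(e * f_out)                    # total outgoing power
print(np.isclose(P_in, P_out))               # True: the junction conserves power (1.24)
```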

DC motor example

The schematic representation of a DC drive is given in Fig. 1.13. First of all, note the presence of two interacting energy domains, the electromagnetic and the mechanical one. In the model, it is possible to recognize the following atomic elements:

- storage elements: the inductor L, with state variable φ, and the rotary inertia I, with state variable p;
- dissipative elements: the resistance R and the damper b that models the viscous friction on the load;
- sources: a voltage (effort) source u;
- gyrator: a gyrator K.

Figure 1.13: DC drive.

Then, based on the considerations of the previous sections, it is possible to write the mathematical model of each component, relating the flow/effort port variables. As regards the storage elements, we have:

inertia:    ṗ = τ_I        ω = ∂E_I/∂p = ∂/∂p (p²/(2I)) = p/I
inductor:   φ̇ = u_L        i = ∂E_L/∂φ = ∂/∂φ (φ²/(2L)) = φ/L

with τ_I the torque applied to the load. For the dissipative elements:

resistance:  u_R = R i        damper:  τ_b = b ω

with τ_b the torque due to friction. Finally, the gyrator equations are given by:

τ = K i        u_m = K ω

with τ the torque generated by the DC drive. The network structure is revealed once the interconnection equations are specified. It is easy to see that

u − u_R − u_L − u_m = 0   (electrical circuit)
τ − τ_I − τ_b = 0   (mechanical structure)

which correspond to a couple of 1-junctions. The bond graph representation then immediately follows, and it is reported in Fig. 1.14.

Figure 1.14: Bond graph representation of the DC drive of Fig. 1.13.
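To illustrate how the element equations and the two 1-junctions combine into a state space model, here is a minimal simulation sketch (not part of the original text); the numerical parameter values are arbitrary.

```python
import numpy as np

# DC drive obtained by combining the element equations with the two 1-junction
# constraints  u - u_R - u_L - u_m = 0  and  tau - tau_I - tau_b = 0.
# States: flux linkage phi and angular momentum p.
R, L, K, I, b = 1.0, 0.01, 0.1, 0.005, 0.02
u = 12.0                                   # constant supply voltage (effort source)
dt, steps = 1e-5, 100_000

phi, p = 0.0, 0.0
for _ in range(steps):
    i, w = phi / L, p / I                  # current and angular speed
    phi += dt * (u - R * i - K * w)        # electrical 1-junction: phi_dot = u_L
    p += dt * (K * i - b * w)              # mechanical 1-junction: p_dot = tau_I

i, w = phi / L, p / I
print(u * i, R * i**2 + b * w**2)          # approximately equal at steady state
```

At steady state the power supplied by the source, u·i, matches the power dissipated in R and b, which is exactly the power balance one expects from the port-based model.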

Finite element of Timoshenko beam example

Flexible beams are generally modeled according to the classical Euler-Bernoulli theory: this formulation provides a good description of the dynamical behavior of the system if the beam's cross-sectional dimensions are small in comparison with its length. In this case, the effects of the rotary inertia of the beam are not considered. A more accurate beam model is provided by the Timoshenko theory, according to which the rotary inertia and also the deformation due to shear are considered. The resulting Timoshenko model of the beam is generally more accurate in predicting the beam's response than the Euler-Bernoulli one but, on the other hand, it is more difficult to use for control purposes because of its higher order. The infinite dimensional model of the Timoshenko beam will be discussed in depth in Sec. 3.5 and in Sec. 4.5, while, in this section, only a finite element of the beam is studied and its bond graph representation deduced.

As reported in Fig. 1.15, denote by x the position along the unstressed beam, by w(x) the deflection of the beam from the equilibrium configuration and by φ(x) the rotation of the beam's cross section due to bending. Denote by ρ the mass per unit length, by I_ρ the mass moment of inertia of the cross section, by E the Young's modulus and by I the moment of inertia of the cross section. Moreover, G is the modulus of elasticity in shear, A is the cross-sectional area and k is a numerical factor depending on the shape of the cross section. Consider a finite element at position x and with length equal to Δx.

Figure 1.15: Finite element of Timoshenko beam.

The forces/couples (efforts) exchanged with the environment are given by

F(x) = e_t^L        F(x + Δx) = e_t^R        T(x) = e_r^L        T(x + Δx) = e_r^R        (1.27)

where F is the shear force and T the bending torque. The corresponding flow variables are the translational and rotational speeds at the x and x + Δx sides of the finite element, that is

ẇ(x) = f_t^L        ẇ(x + Δx) = f_t^R        φ̇(x) = f_r^L        φ̇(x + Δx) = f_r^R        (1.28)

If p_t(x, t) and p_r(x, t) represent the generalized translational and rotational momenta, then the following equilibrium relations hold:

F(x + Δx) − F(x) = ṗ_t(x, t)        T(x + Δx) − T(x) + F(x + Δx) Δx = ṗ_r(x, t)        (1.29)

where

p_t(x, t) = ρA Δx ẇ(x, t)        p_r(x, t) = I_ρ Δx φ̇(x, t)        (1.30)

Denote by ε_t(x, t) the shear displacement and by ε_r(x, t) the bending deformation. The Timoshenko beam model assumes that, for small deformations, the shear displacement and the bending deformation are given by:

ε_t(x, t) = w(x + Δx) − w(x) − Δx φ(x)        ε_r(x, t) = φ(x + Δx) − φ(x)        (1.31)

and that the corresponding shear force and bending torque are given by

F(x + Δx) = (kGA/Δx) ε_t(x)        T(x + Δx) = (EI/Δx) ε_r(x)        (1.32)

Relations (1.29) and (1.30) are the state space models of a couple of I elements, while (1.31) and (1.32) are the state space models of two C elements. Then, the finite element of the beam is given by the interconnection of two inertias and two springs; each pair models, respectively, the translational and the rotational component of the motion. Note that in (1.29) and in (1.31) a coupling between translational and rotational motion is present. This coupling can be modeled by means of a TF element (with modulating input Δx). From (1.29), (1.30), (1.31) and (1.32), together with the definitions (1.27) and (1.28), it can be deduced that the state space model of the finite element of the beam is given by the following relations:

trans. motion:   ṗ_t = e_t^R − e_t^L        f_t^L = p_t / (ρA Δx)        ε̇_t = f_t^R − f_t^L − Δx f_r^L        e_t^R = (kGA/Δx) ε_t

rot. motion:     ṗ_r = e_r^R − e_r^L + Δx e_t^R        f_r^L = p_r / (I_ρ Δx)        ε̇_r = f_r^R − f_r^L        e_r^R = (EI/Δx) ε_r

(1.33)

The bond graph model easily follows from (1.33), and it is reported in Fig. 1.16.

Figure 1.16: Bond graph of the Timoshenko beam finite element.
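As a concrete companion to (1.33), the following sketch (not part of the original text) implements the state derivative of a single finite element; the boundary efforts on the left and the boundary flows on the right are treated as external inputs, and all numerical values are illustrative only.

```python
import numpy as np

# State space model (1.33) of one Timoshenko beam finite element, with state
# (p_t, p_r, eps_t, eps_r) and external inputs e_t^L, e_r^L, f_t^R, f_r^R.
rho, A, I_rho, E, I, k, G, dx = 7800.0, 1e-4, 1e-5, 2.1e11, 1e-8, 5 / 6, 8e10, 0.01

def element_dynamics(state, e_t_L, e_r_L, f_t_R, f_r_R):
    p_t, p_r, eps_t, eps_r = state
    f_t_L = p_t / (rho * A * dx)          # translational speed at the left side
    f_r_L = p_r / (I_rho * dx)            # rotational speed at the left side
    e_t_R = k * G * A / dx * eps_t        # shear force at the right side
    e_r_R = E * I / dx * eps_r            # bending torque at the right side
    return np.array([
        e_t_R - e_t_L,                    # p_t_dot
        e_r_R - e_r_L + dx * e_t_R,       # p_r_dot
        f_t_R - f_t_L - dx * f_r_L,       # eps_t_dot
        f_r_R - f_r_L,                    # eps_r_dot
    ])

# Example call: element initially at rest, with a unit shear force on its left side.
print(element_dynamics(np.zeros(4), e_t_L=1.0, e_r_L=0.0, f_t_R=0.0, f_r_R=0.0))
```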

Figure 1.16: Bond graph of the Timoshenko beam finite element.

Concluding remarks on bond graphs

Bond graphs are a powerful tool for modeling physical systems. By interconnecting atomic elements characterized by their own energetic behavior, it is possible to obtain a graphical representation of the system under study that shows the way in which power is exchanged among the parts of the system and between the system and its environment. In order to deduce the bond graph representation of a generic system, it is necessary to understand the network structure behind it, that is the way in which this set of atomic elements interact with each other. The network structure is therefore fundamental, and it is given by a composition of TF-elements, GY-elements, 1-junctions and 0-junctions. In Sec. 1.2, a mathematical tool describing power conserving interconnection structures has been introduced. The question that arises, then, is whether it is possible to find a relation between the bond graph network structure and the Dirac structure. The answer is that every bond graph network admits a Dirac structure representation. This property can be intuitively justified by considering Thm. 1.6 since, as reported in Note 1.4 and Note 1.5, junctions, transformers and gyrators can be described by Dirac structures. Then, the interconnection of Dirac structures results in another Dirac structure and the relation with bond graphs is proved. Moreover, the inverse implication also holds.

Some limitations of the bond graph formalism arise when the problem of developing control applications is approached, mostly because the equations of the model are not explicitly shown. Based on the same idea of network and of interconnection of atomic components, it is possible to introduce a class of dynamical systems in which the energy properties are explicitly shown in the mathematical description. By simply analyzing the model equations, it is possible to identify the interconnection structure, the power ports, the stored energy function and so on: these systems are called port Hamiltonian systems and will be introduced in the next section.

1.4 Port Hamiltonian systems

Implicit port Hamiltonian systems

By generalization of the basic idea behind the bond graph formalism, it makes sense to model physical systems by properly interconnecting a set of multi-dimensional atomic elements, each of them characterized by a particular energetic property. The final network is the result of a proper combination of multi-dimensional 1-junctions, 0-junctions, transformers and gyrators or, in an equivalent but much more powerful way, it can be described by means of a Dirac structure, (Maschke and van der Schaft, 1992; Dalsmo and van der Schaft, 1999; van der Schaft, 2000). As represented in Fig. 1.17, a physical system is the result of the power-conserving interconnection of storage elements (C, I), characterized by port variables $(f_S, e_S) \in \mathcal{F}_S \times \mathcal{E}_S$, and of dissipative elements (R), with power port variables $(f_R, e_R) \in \mathcal{F}_R \times \mathcal{E}_R$. Moreover, the system can interact with the environment or can be connected with power sources: the port variables, in this case, are given by $(f_P, e_P) \in \mathcal{F}_P \times \mathcal{E}_P$. The sets $\mathcal{F}_S$, $\mathcal{F}_R$ and $\mathcal{F}_P$ are linear spaces of dimension, respectively, $n$, $n_R$ and $n_P$. Globally, the space of flows is given by
$$\mathcal{F} := \mathcal{F}_S \times \mathcal{F}_R \times \mathcal{F}_P$$

Figure 1.17: Implicit port Hamiltonian system.

while the space of efforts is given by
$$\mathcal{E} := \mathcal{F}^\ast = \mathcal{F}_S^\ast \times \mathcal{F}_R^\ast \times \mathcal{F}_P^\ast = \mathcal{E}_S \times \mathcal{E}_R \times \mathcal{E}_P$$
The network structure is a Dirac structure $\mathcal{D}$ on $\mathcal{F}$ that, in kernel representation, can be given as
$$\mathcal{D} = \left\{(f_S, f_R, f_P, e_S, e_R, e_P) \in \mathcal{F} \times \mathcal{E} \;\middle|\; F_S f_S + E_S e_S + F_R f_R + E_R e_R + F_P f_P + E_P e_P = 0\right\} \qquad (1.34)$$
where the matrices $F_S$, $F_R$, $F_P$, $E_S$, $E_R$ and $E_P$ are such that
$$\text{(i)}\quad E_S F_S^T + F_S E_S^T + E_R F_R^T + F_R E_R^T + E_P F_P^T + F_P E_P^T = 0 \qquad (1.35a)$$
$$\text{(ii)}\quad \operatorname{rank}\left[F_S \,\vdots\, F_R \,\vdots\, F_P \,\vdots\, E_S \,\vdots\, E_R \,\vdots\, E_P\right] = \dim \mathcal{F} \qquad (1.35b)$$
with
$$\dim \mathcal{F} = n + n_R + n_P$$
The energy storing elements are characterized by an $n$-dimensional space $\mathcal{X}$ of energy (state) variables, of which $x_1, \dots, x_n$ are local coordinates, and by an energy function $H : \mathcal{X} \to \mathbb{R}$. The Dirac structure introduced by (1.34) and (1.35) is constant over $\mathcal{F}$: to be more precise and general, $\mathcal{D}$ can be modulated by the state variable $x \in \mathcal{X}$, in the sense that the matrices in (1.34) and (1.35) can depend smoothly on $x$. In order to keep the notation as simple as possible, this dependence is omitted.
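As a quick numerical sanity check of conditions (1.35), the numpy sketch below builds a set of kernel-representation matrices with the block structure that will be used later in (1.45) and verifies (1.35a) and (1.35b). The specific $J$, $G_R$ and $G$ values are made up for illustration.

```python
import numpy as np

n, n_R, n_P = 2, 1, 1
J   = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric interconnection
G_R = np.array([[1.0], [0.0]])              # resistive port map
G   = np.array([[0.0], [1.0]])              # external port map

Z = np.zeros
F_S = np.vstack([np.eye(n),       Z((n_R, n)),   Z((n_P, n))])
F_R = np.vstack([G_R,             Z((n_R, n_R)), Z((n_P, n_R))])
F_P = np.vstack([G,               Z((n_R, n_P)), Z((n_P, n_P))])
E_S = np.vstack([J,              -G_R.T,        -G.T])
E_R = np.vstack([Z((n, n_R)),     np.eye(n_R),   Z((n_P, n_R))])
E_P = np.vstack([Z((n, n_P)),     Z((n_R, n_P)), np.eye(n_P)])

F = np.hstack([F_S, F_R, F_P])
E = np.hstack([E_S, E_R, E_P])

assert np.allclose(E @ F.T + F @ E.T, 0)                           # condition (1.35a)
assert np.linalg.matrix_rank(np.hstack([F, E])) == n + n_R + n_P   # condition (1.35b)
```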

The behavior of a generalized storage element can be easily deduced from (1.19): the flow variables are given by
$$\dot x(t) = \frac{dx}{dt}, \qquad t \in \mathbb{R}$$
while the effort variables by
$$\frac{\partial H}{\partial x}(x(t))$$
so that
$$\left\langle \frac{\partial H}{\partial x}, \dot x \right\rangle = \frac{\partial^T H}{\partial x}\,\dot x = \frac{dH}{dt}$$
is the increase of stored energy. Flows and efforts of the energy storing elements can be related to the corresponding variables $(f_S, e_S)$ of the Dirac structure (1.34) by setting:
$$f_S = -\dot x \qquad e_S = \frac{\partial H}{\partial x} \qquad (1.36)$$
If restricted to the linear case, the generalized dissipative elements introduced previously impose the following relation on the variables $(f_R, e_R)$ of the Dirac structure $\mathcal{D}$:
$$f_R = -Y_R e_R \qquad (1.37)$$
with $Y_R = Y_R^T \geq 0$. The minus sign is necessary in order to have consistency in the power flow. By substituting (1.36) and (1.37) in (1.34), the representation of an implicit port Hamiltonian system with dissipation can be deduced (van der Schaft, 2000), and the following definition makes sense.

Definition 1.9 (implicit port Hamiltonian system). Denote by $\mathcal{X}$ an $n$-dimensional space of energy variables, by $H : \mathcal{X} \to \mathbb{R}$ an energy function (Hamiltonian), by $\mathcal{D}$ a generic Dirac structure, for which (1.34) is its kernel representation, and by $Y_R = Y_R^T \geq 0$ a matrix taking into account dissipative phenomena. Then
$$-F_S\,\dot x(t) + E_S\,\frac{\partial H}{\partial x} - F_R Y_R e_R + E_R e_R + F_P f_P + E_P e_P = 0 \qquad (1.38)$$
with the matrices $F_S$, $F_R$, $F_P$, $E_S$, $E_R$ and $E_P$ satisfying (1.35), is an implicit port Hamiltonian system with dissipation, defined with respect to the Dirac structure $\mathcal{D}$ and the Hamiltonian $H$.

From the power conserving property of a Dirac structure, the following proposition can be easily deduced.

Proposition 1.7 (energy balance equation). Every implicit port Hamiltonian system (1.38) satisfies the following energy balance equation:
$$\frac{dH}{dt} = -e_R^T(t)\,Y_R\,e_R(t) + e_P^T(t)\,f_P(t) \qquad (1.39)$$

Proof. If $(f, e) = (f_S, f_R, f_P, e_S, e_R, e_P) \in \mathcal{D} \subset \mathcal{F} \times \mathcal{E}$, then (1.9) holds, that is
$$\langle e, f \rangle = e_S^T f_S + e_R^T f_R + e_P^T f_P = 0$$
Consequently, from (1.36) and (1.37), we have that
$$-\frac{\partial^T H}{\partial x}\,\dot x - e_R^T Y_R e_R + e_P^T f_P = -\frac{dH}{dt} - e_R^T Y_R e_R + e_P^T f_P = 0$$
and then (1.39).

Note 1.6. Relation (1.39) expresses a fundamental property of physical systems, i.e. the conservation of energy. The variation of internal (stored) energy equals the power provided by the environment, that is $\langle e_P, f_P \rangle = e_P^T f_P$, plus the dissipated power (or, equivalently, the power converted to the thermal domain), that is $\langle e_R, f_R \rangle = e_R^T f_R = -e_R^T Y_R e_R \leq 0$.

The fact that the interconnection structure is power conserving, that is that the port variables of the energy storing elements, of the dissipative elements and of the environment belong to a certain Dirac structure $\mathcal{D}$, imposes a set of constraints on the admissible flows and efforts. Given $(f_S, f_R, f_P, e_S, e_R, e_P) \in \mathcal{D}$, from (1.16) it immediately follows that $(e_S, e_R, e_P) \in P_1(x)$ or, equivalently, that
$$e_S \in \operatorname{Im} F_S^T(x), \qquad e_R \in \operatorname{Im} F_R^T(x), \qquad e_P \in \operatorname{Im} F_P^T(x) \qquad (1.40)$$
in which the dependence of the Dirac structure on $x$ is explicitly shown. Then, the following constraint on the state variable is introduced by the Dirac structure:
$$\frac{\partial H}{\partial x} \in \operatorname{Im} F_S^T(x) \qquad (1.41)$$
Analogous properties have to hold for the flow variables. In particular, it is necessary that $(f_S, f_R, f_P) \in G_1(x)$ or, equivalently, that
$$f_S \in \operatorname{Im} E_S^T(x), \qquad f_R \in \operatorname{Im} E_R^T(x), \qquad f_P \in \operatorname{Im} E_P^T(x) \qquad (1.42)$$
Then, the Dirac structure $\mathcal{D}$ introduces the following constraint on the time evolution of the state variables:
$$\dot x(t) \in \operatorname{Im} E_S^T(x(t)), \qquad t \in \mathbb{R} \qquad (1.43)$$
Consider a generic function $C : \mathcal{X} \to \mathbb{R}$ and evaluate it along the trajectory $x(t)$ of a generic port Hamiltonian system of which (1.38) is the implicit formulation. Clearly,
$$\frac{dC}{dt} = \frac{\partial^T C}{\partial x}\,\dot x$$
So, we give the following (van der Schaft, 2000):

Definition 1.10 (strong Casimir function). A (strong) Casimir function $C : \mathcal{X} \to \mathbb{R}$ is a scalar function defined on the space of energy variables such that its time derivative is equal to 0 for every $\dot x(t) \in \operatorname{Im} E_S^T$.

Note 1.7. Note that $C : \mathcal{X} \to \mathbb{R}$ is a Casimir function if and only if it is a solution of the following PDE:
$$\frac{\partial C}{\partial x} \in \operatorname{Ker} E_S(x) \qquad (1.44)$$
Moreover, if $C$ is a Casimir function, then it is invariant for every port behavior and for every dissipative relation (1.37): it is invariant in a strong sense.

Given a Hamiltonian $H$, a Dirac structure $\mathcal{D}$ and, possibly, some relations summarizing the dissipation phenomena, it is possible to give an implicit formulation (1.38) of the corresponding physical system, which relates the variation of internal energy to the time evolution of the energy variables, to the dissipative effects that can be present in the system and to the values

assumed by the port variables that model the interaction of the system with the environment. Relation (1.38) is quite general and suitable for nice interpretations, but it is not handy for system analysis or for the development of (complex) control strategies. The main reason is that the time evolution of the state variables is not explicitly given, that is, the dynamical system is not given in the form of Def. 1.1. It is possible to show that an explicit model can be deduced if the algebraic constraints (1.43) on the time evolution of the state variables can be removed from (1.38), and that this is possible if certain non-degeneracy conditions on this set of constraints are satisfied. Further details can be found in (Dalsmo and van der Schaft, 1999). In the next section, an explicit formulation for port Hamiltonian systems is deduced for a special case of the Dirac structure (1.35). The resulting model will be the starting point when dealing with control problems (see Chapter 2) and, then, it will be generalized to the infinite dimensional case (see Chapter 3 and Chapter 4).

Port Hamiltonian systems

An explicit formulation of port Hamiltonian systems can be easily deduced from (1.38) if a particular case of the Dirac structure $\mathcal{D}$ in (1.34) is considered. This is the starting point for obtaining a mathematical model of a dynamical system that recalls the one of Def. 1.1. Assume that (van der Schaft, 2000):
$$F_S = \begin{bmatrix} I_n \\ 0 \\ 0 \end{bmatrix}, \quad
F_R = \begin{bmatrix} G_R(x) \\ 0 \\ 0 \end{bmatrix}, \quad
F_P = \begin{bmatrix} G(x) \\ 0 \\ 0 \end{bmatrix}, \quad
E_S = \begin{bmatrix} J(x) \\ -G_R^T(x) \\ -G^T(x) \end{bmatrix}, \quad
E_R = \begin{bmatrix} 0 \\ I_{n_R} \\ 0 \end{bmatrix}, \quad
E_P = \begin{bmatrix} 0 \\ 0 \\ I_{n_P} \end{bmatrix} \qquad (1.45)$$
with $J(x) = -J^T(x)$. Then, by substituting (1.36) and (1.37) in (1.38) under the hypothesis (1.45), it can be obtained that
$$-\dot x(t) + J(x)\frac{\partial H}{\partial x} - G_R(x)\,Y_R\,e_R + G(x)\,f_P = 0 \qquad (1.46a)$$
$$-G_R^T(x)\frac{\partial H}{\partial x} + e_R = 0 \qquad (1.46b)$$
$$-G^T(x)\frac{\partial H}{\partial x} + e_P = 0 \qquad (1.46c)$$
By substitution of (1.46b) into (1.46a), it can be obtained that
$$\dot x(t) = J(x)\frac{\partial H}{\partial x} - \underbrace{G_R(x)\,Y_R\,G_R^T(x)}_{R(x)}\frac{\partial H}{\partial x} + G(x)\,f_P \qquad (1.47)$$
where the matrix $R(x) := G_R(x) Y_R G_R^T(x)$ is symmetric and positive semi-definite. Then, (1.47), together with (1.46c), defines a dynamical system in the form of Def. 1.1, where the input and output signals are $f_P$ and $e_P$ respectively.

Definition 1.11 (port Hamiltonian system). Denote by $\mathcal{X}$ an $n$-dimensional space of state (energy) variables and by $H : \mathcal{X} \to \mathbb{R}$ a scalar energy function (Hamiltonian). Denote by $U$

an $m$-dimensional (linear) space of input variables and by its dual $Y \equiv U^\ast$ the space of output variables. Then,
$$\begin{aligned}
\dot x(t) &= \left[J(x) - R(x)\right]\frac{\partial H}{\partial x} + G(x)u(t) \\
y(t) &= G^T(x)\frac{\partial H}{\partial x}
\end{aligned} \qquad (1.48)$$
with $J(x) = -J^T(x)$, $R(x) = R^T(x) \geq 0$ and $G(x)$ matrices of proper dimensions, is a port Hamiltonian system with dissipation. The $n \times n$ matrices $J$ and $R$ are called the interconnection and damping matrix respectively.

Note 1.8. Given a dynamical system in port Hamiltonian form (1.48), the variation of internal energy equals the dissipated power plus the power provided to the system by the environment, that is:
$$\frac{dH}{dt} = \frac{\partial^T H}{\partial x}\dot x = -\frac{\partial^T H}{\partial x} R(x) \frac{\partial H}{\partial x} + y^T u$$
which is a particular case of (1.39). This relation expresses a fundamental property of port Hamiltonian systems, their passivity. Roughly speaking, the internal energy of the unforced system ($u = 0$) is non-increasing along system trajectories or, if the port variables are closed on a dissipative element, that is if relations (1.21) and (1.22) are imposed between $u$ and $y$, then the energy function is always a decreasing function. If the definition of Lyapunov stability is recalled, together with the sufficient condition for the stability of an equilibrium point, then it can be deduced that the Hamiltonian is a good candidate for being a Lyapunov function. These considerations are the subject of Chapter 2.

Example 1.2 (DC motor). Consider the DC motor example discussed earlier in this chapter. Assume that $\mathcal{X} \equiv \mathbb{R}^2$ and $x := (p, \phi)$. The energy function is given by
$$H(x) = H(p, \phi) := \frac{1}{2}\frac{p^2}{I} + \frac{1}{2}\frac{\phi^2}{L}$$
and the whole dynamical model by
$$\begin{bmatrix} \dot p \\ \dot\phi \end{bmatrix} =
\left( \underbrace{\begin{bmatrix} 0 & K \\ -K & 0 \end{bmatrix}}_{J} - \underbrace{\begin{bmatrix} b & 0 \\ 0 & R \end{bmatrix}}_{R} \right)
\begin{bmatrix} \partial H/\partial p \\ \partial H/\partial\phi \end{bmatrix} +
\underbrace{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}_{G} u$$
where the input $u$ is the voltage imposed by the voltage source. The dual output is given by
$$y = \frac{\partial H}{\partial \phi} = i$$
that is the current flowing through the inductor (and, clearly, through the voltage source).
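Since (1.48) is the form used throughout the rest of the thesis, a generic simulation sketch may help fix ideas. The Python code below integrates an explicit port Hamiltonian system and is instantiated on the DC motor of Example 1.2; all numerical parameter values are illustrative assumptions, not data from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative DC-motor parameters (assumed values)
I_m, L, K, b, R_a = 0.01, 0.5, 0.05, 0.001, 1.0

J = np.array([[0.0,  K],
              [-K, 0.0]])          # interconnection matrix (skew-symmetric)
R = np.diag([b, R_a])              # damping matrix
G = np.array([[0.0], [1.0]])       # input matrix

H     = lambda x: 0.5 * (x[0]**2 / I_m + x[1]**2 / L)
gradH = lambda x: np.array([x[0] / I_m, x[1] / L])

def phs(t, x, u):
    """Explicit port Hamiltonian dynamics (1.48)."""
    return (J - R) @ gradH(x) + (G @ [u(t)]).ravel()

u   = lambda t: 10.0                                # constant supply voltage
sol = solve_ivp(phs, (0.0, 2.0), [0.0, 0.0], args=(u,), max_step=1e-3)
y   = G.T @ np.apply_along_axis(gradH, 0, sol.y)    # output: armature current
```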

Example 1.3 (finite element of the Timoshenko beam). Consider the finite element of the Timoshenko beam discussed earlier in this chapter. Assume that $\mathcal{X} \equiv \mathbb{R}^4$ and $x := (p_t, p_r, \epsilon_t, \epsilon_r)$. The energy function is given by
$$H(x) = H(p_t, p_r, \epsilon_t, \epsilon_r) := \frac{1}{2}\left(\frac{p_t^2}{\rho A\,\Delta x} + \frac{p_r^2}{I_\rho\,\Delta x}\right) + \frac{1}{2}\left(\frac{kGA}{\Delta x}\,\epsilon_t^2 + \frac{EI}{\Delta x}\,\epsilon_r^2\right)$$
and the whole model by
$$\begin{bmatrix} \dot p_t \\ \dot p_r \\ \dot\epsilon_t \\ \dot\epsilon_r \end{bmatrix} =
\underbrace{\begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & \Delta x & 1 \\ -1 & -\Delta x & 0 & 0 \\ 0 & -1 & 0 & 0 \end{bmatrix}}_{J}
\begin{bmatrix} \partial H/\partial p_t \\ \partial H/\partial p_r \\ \partial H/\partial\epsilon_t \\ \partial H/\partial\epsilon_r \end{bmatrix} +
\underbrace{\begin{bmatrix} -e_t^L \\ -e_r^L \\ f_t^R \\ f_r^R \end{bmatrix}}_{u}$$
where the minus sign for $e_t^L$ and $e_r^L$ depends on the orientation chosen for the left (L) bonds (see Fig. 1.16). The dual outputs are, clearly, given by
$$y = \left[f_t^L,\; f_r^L,\; e_t^R,\; e_r^R\right]^T = \left[\frac{p_t}{\rho A\,\Delta x},\; \frac{p_r}{I_\rho\,\Delta x},\; \frac{kGA}{\Delta x}\epsilon_t,\; \frac{EI}{\Delta x}\epsilon_r\right]^T$$

Example 1.4 (n-dof mechanical system). The configuration of an n-dof mechanical system can be represented by a set $q = (q_1, \dots, q_n) \in \mathcal{Q}$ of generalized coordinates, with $\mathcal{Q}$ the configuration manifold. Denote by $M(q)$ the symmetric and positive definite inertia matrix. Then, the generalized momenta are given by $p = M(q)\dot q \in T_q^\ast\mathcal{Q}$ and the state variable is $x := (q, p)$. The total energy (Hamiltonian) is given by the sum of a kinetic energy $K(q,p)$ and a potential energy $V(q)$, that is
$$H(x) = H(q, p) := \frac{1}{2} p^T M^{-1}(q)\,p + V(q)$$
while the mathematical model of the system is given by the well-known Hamiltonian equations, of which the port Hamiltonian formalism is a generalization:
$$\begin{bmatrix} \dot q \\ \dot p \end{bmatrix} =
\left( \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix} - \begin{bmatrix} 0 & 0 \\ 0 & D(q,p) \end{bmatrix} \right)
\begin{bmatrix} \partial H/\partial q \\ \partial H/\partial p \end{bmatrix} +
\begin{bmatrix} 0 \\ B(q) \end{bmatrix} u$$
where $D(q,p) = D^T(q,p) \geq 0$, $u \in \mathbb{R}^m$ and $B(q)$ is an $n \times m$ matrix. The mechanical system is fully-actuated if and only if $n = m$ and $\operatorname{rank} B(q) = n$. Within the port Hamiltonian formalism, the dual output is given by:
$$y = B^T(q)\frac{\partial H}{\partial p}$$

As introduced in Def. 1.10, a strong Casimir function is a scalar function $C : \mathcal{X} \to \mathbb{R}$ such that its time derivative is equal to 0 independently of the evolution of the state variables, that is, for every $\dot x \in \operatorname{Im} E_S^T(x(t))$, which amounts to $\partial C/\partial x \in \operatorname{Ker} E_S(x)$. Given a port Hamiltonian system in the form (1.48), with $E_S$ as in (1.45), it can be easily deduced that
$$\frac{\partial C}{\partial x} \in \operatorname{Ker} E_S(x) \quad\Longleftrightarrow\quad \frac{\partial^T C}{\partial x} J(x) = 0, \quad \frac{\partial^T C}{\partial x} R(x) = 0, \quad \frac{\partial^T C}{\partial x} G(x) = 0 \qquad (1.49)$$
In other words, if these last PDEs are satisfied, then $C$ is a Casimir function for the system (1.48) in a strong sense. Note that $\dot C = 0$ independently of $H$ and $u$. In other words, the level sets
$$L_C^c := \{x \in \mathcal{X} \mid C(x) = c\} \qquad (1.50)$$
are invariant for every $c \in \mathbb{R}$. The state variable $x$ evolves on a given set $L_C^c$ under the action of the input $u$, with $c$ fixed by the initial conditions. A less restrictive definition of Casimir function can be given if the invariance of the level sets (1.50) is required only when the input signal is equal to zero. Then, we give the following (van der Schaft, 2000):

Definition 1.12 (Casimir function). Consider a port Hamiltonian system with dissipation (1.48) and a scalar function $C : \mathcal{X} \to \mathbb{R}$. Then, $C$ is a Casimir function for (1.48) if, for every $x \in \mathcal{X}$,
$$\frac{\partial^T C}{\partial x}\left[J(x) - R(x)\right] = 0 \qquad (1.51)$$

Example 1.5 (rotational motion of a rigid body). Consider a rigid body spinning around its center of mass in the absence of gravity. The state variable is the angular momentum $p = (p_x, p_y, p_z)$ and the Hamiltonian is given by the kinetic energy
$$H(p) = \frac{1}{2}\left(\frac{p_x^2}{I_x} + \frac{p_y^2}{I_y} + \frac{p_z^2}{I_z}\right)$$
with $I_x$, $I_y$ and $I_z$ the principal moments of inertia. The equations of motion are given by Euler's equations
$$\begin{bmatrix} \dot p_x \\ \dot p_y \\ \dot p_z \end{bmatrix} =
\underbrace{\begin{bmatrix} 0 & -p_z & p_y \\ p_z & 0 & -p_x \\ -p_y & p_x & 0 \end{bmatrix}}_{J(p)}
\begin{bmatrix} \partial H/\partial p_x \\ \partial H/\partial p_y \\ \partial H/\partial p_z \end{bmatrix} + u(t)$$
with $u$ the external torques. A function $C$ is a Casimir function for this system if it is a solution of the following system of PDEs:
$$-p_z\frac{\partial C}{\partial p_y} + p_y\frac{\partial C}{\partial p_z} = 0 \qquad
p_z\frac{\partial C}{\partial p_x} - p_x\frac{\partial C}{\partial p_z} = 0 \qquad
-p_y\frac{\partial C}{\partial p_x} + p_x\frac{\partial C}{\partial p_y} = 0$$
It is possible to prove that $C(p) = \frac{1}{2}\left(p_x^2 + p_y^2 + p_z^2\right)$, representing (half the square of) the total angular momentum, is a Casimir function for the system. This means that, without forcing action (i.e. $u = 0$), the magnitude of the total angular momentum of the rigid body is conserved.

Note 1.9. Clearly, if $C$ is a Casimir function in the strong sense, then it is a Casimir function in the sense of Def. 1.12. Moreover, a stronger version can be given by requiring that
$$\frac{\partial^T C}{\partial x} J(x) = 0 \quad\text{and}\quad \frac{\partial^T C}{\partial x} R(x) = 0, \qquad \forall x \in \mathcal{X}$$

Note 1.10. Consider an unforced port Hamiltonian system without dissipation, that is a system (1.48) with $u = 0$ and $R(x) = 0$. If $C$ is a Casimir function in the sense of Def. 1.12, then the sets $L_C^c$ introduced in (1.50) are invariant. Moreover, the restricted dynamics of (1.48) on these invariants can be given as
$$\dot x_C = J_C(x_C)\frac{\partial H_C}{\partial x_C}$$
where $x_C$, $H_C$ and $J_C$ are the restrictions of $x$, $H$ and $J$ to $L_C^c$.
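Example 1.5 is easy to reproduce numerically. The sketch below integrates the unforced Euler equations and checks that both the Hamiltonian and the Casimir $C(p) = \tfrac{1}{2}\|p\|^2$ stay constant along the motion; the inertia values and initial condition are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Principal moments of inertia (illustrative values)
Ix, Iy, Iz = 1.0, 2.0, 3.0

def euler(t, p):
    """Unforced Euler equations in port Hamiltonian form (Example 1.5)."""
    J = np.array([[0.0, -p[2],  p[1]],
                  [p[2],  0.0, -p[0]],
                  [-p[1], p[0],  0.0]])
    gradH = p / np.array([Ix, Iy, Iz])
    return J @ gradH

p0  = np.array([1.0, 0.5, -0.2])
sol = solve_ivp(euler, (0.0, 20.0), p0, max_step=1e-3)

H = 0.5 * np.sum(sol.y**2 / np.array([[Ix], [Iy], [Iz]]), axis=0)
C = 0.5 * np.sum(sol.y**2, axis=0)   # Casimir: half the squared angular momentum

# Both H and C should stay (numerically) constant along the unforced motion.
print(np.ptp(H), np.ptp(C))
```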


Chapter 2

Control of port Hamiltonian systems

In this chapter, some well-established control strategies for the regulation of port Hamiltonian systems are introduced and discussed. Basically, all these control techniques aim to develop a passive controller that shapes the total energy function of the plant in order to obtain a closed-loop energy with a minimum at the desired equilibrium configuration. Stability can then be proved by means of energetic considerations and, since the controller and, consequently, the closed-loop system are passive, it can be assured even in the presence of model uncertainties. This property can be strengthened, in order to obtain further robustness also in terms of performance, by adopting a controller that is passive but is, furthermore, provided with a variable structure.

2.1 Passive systems and passivity

Introduction

Any physical system, with no forcing action, assumes a configuration in which its total energy function attains a (possibly local) minimum; this configuration is asymptotically stable if dissipative effects are present. A typical example is represented by mechanical systems: from physics it is well known that, if the potential energy has a global minimum and dissipative phenomena are present (e.g. friction and dampers), this minimum is globally asymptotically stable. Since the minimum of the potential energy very rarely coincides with the desired configuration, it is natural to implement control actions that shape the system energy in order to introduce a (local or global) minimum in correspondence with the desired configuration. This control technique is called energy shaping. The convergence rate to the new minimum can be increased by adding artificial damping (i.e. dissipation) by means of the control law: this procedure is called damping injection.

The idea of developing control algorithms based on energy considerations takes inspiration from (Takegaki and Arimoto, 1981), in which an early application to the control of robotic manipulators is presented. Later, this approach was successfully extended in order to cope with

a large class of systems, see e.g. (Ortega et al., 1998) for Euler-Lagrange systems. In this way, the controller can be seen as a dynamical system that has to be interconnected to the plant in order to obtain a desired closed-loop energy function. It is easy to see how this approach can be fruitfully specialized to port Hamiltonian systems, for which the energetic properties are explicitly revealed by the mathematical description of the system itself. In this way, the action of the controller on the plant can be interpreted by considering how the Hamiltonian (energy) function, together with the interconnection and damping matrices, is modified.

As already introduced in Note 1.8, from the control point of view the most important property of port Hamiltonian systems with dissipation (phd) is their passivity. The notion of passivity of input/output dynamical systems originates from the phenomenon of energy dissipation across resistances: it is well known, for example, that electrical circuits containing only positive resistors are passive systems. Intuition suggests that passive systems are stable, since no energy regeneration effect is present. Based on the generalization of this simple consideration, several control techniques have been developed for the regulation of this class of dynamical systems, see (Byrnes et al., 1991) for a complete overview.

Roughly speaking, passive systems satisfy (by definition) a generalization of the energy balance equation (1.39), already introduced for phd systems. An immediate consequence is that, under some structural hypotheses, basic results regarding the stability of an equilibrium configuration can be achieved by means of simple algebraic output feedback laws, for which the stability proof can be seen as an adaptation of La Salle's invariance principle.

The relation between passive and port Hamiltonian systems is, in some sense, quite direct, since every phd system is passive. An immediate consequence is that control techniques already developed for the stabilization of passive systems can be easily extended and specialized in order to deal with phd systems. But this is only a (good) starting point for the development of control strategies that can be applied in order to solve the regulation (and, eventually, the tracking) problem for port Hamiltonian systems. Improvements are possible since the phd formulation of a physical system provides a deep insight into the structural properties of the system itself. If the controller is developed in order to properly modify these inner characteristics of the plant, then more complex and powerful control schemes can be implemented, whose behavior is amenable to (nice) physical interpretations.

In order to introduce and understand some of the most important control techniques developed for the stabilization of phd systems, it is necessary to present the basic definitions and classical results on passivity and passive systems in general. Then, the main results about the control of port Hamiltonian systems will be presented and discussed.

Preliminary definitions and results

Consider a generic nonlinear system affine in the input, described by the following set of equations:
$$\begin{cases} \dot x = f(x) + g(x)u \\ y = h(x) \end{cases} \qquad (2.1)$$
where $x \in \mathcal{X} \subseteq \mathbb{R}^n$ is the state variable, and $u \in U \subseteq \mathbb{R}^m$ and $y \in Y \subseteq \mathbb{R}^m$ are the input and output variables. Denote by $U_f$ the set of all admissible input functions, that is the set of all piecewise continuous $u : \mathbb{R} \to U$.
Moreover, suppose that $f$, $g$ and $h$ are smooth mappings and that $f$ admits at least one equilibrium point $\bar x$. Without loss of generality, assume that $\bar x = 0$, so that

$f(0) = 0$, and that $h(0) = 0$. Finally, denote by $\Phi(t, x_0, u)$ the state evolution $x(t)$ of (2.1) when the initial state is $x(0) = x_0$ and the input function is $u$; clearly, $y(t) = h(\Phi(t, x_0, u))$ is the corresponding output.

Definition 2.1 (supply rate). A function $w : U \times Y \to \mathbb{R}$ is a supply rate for the system (2.1) if and only if for every $u \in U_f$ and every $x_0 \in \mathcal{X}$ we have that
$$\int_0^t w\left(u(\tau), y(\tau)\right) d\tau < +\infty, \qquad t \geq 0$$
where $y(t) = h(\Phi(t, x_0, u))$.

Definition 2.2 (dissipative system). Consider a system (2.1) and denote by $w$ a supply rate. Then, (2.1) is dissipative if and only if it is possible to find a $C^0$ non-negative function $V : \mathcal{X} \to \mathbb{R}$, called storage function, such that for all $x_0 \in \mathcal{X}$, $u \in U_f$ and $t \geq 0$:
$$V(x) - V(x_0) \leq \int_0^t w(\tau)\,d\tau \qquad (2.2)$$
where $x = \Phi(t, x_0, u)$. The previous relation is called the dissipation inequality.

Note 2.1. It is important to point out that the property of a dynamical system of being dissipative depends on the particular supply rate under consideration. Within this framework, dissipativity is no longer a structural characteristic of the system. Note that (2.2) can be considered as a generalization of (1.39).

As pointed out in Note 2.1, the definition of dissipative system does not require a particular expression for the storage function. If $U$ and $Y$ are dual spaces, it is possible to assume
$$w(t) := \langle y(t), u(t)\rangle = y^T(t)\,u(t)$$
under the further hypothesis that an inner product is defined. This particular choice of $w$ leads to the following definition.

Definition 2.3 (passive system). The system (2.1) is passive if and only if it is dissipative with supply rate $w = \langle y, u\rangle$ and the storage function satisfies $V(0) = 0$.

Note 2.2. An equivalent characterization of passive systems can be given as follows: the system (2.1) is passive if it is possible to find a $C^0$ non-negative function $V : \mathcal{X} \to \mathbb{R}$ such that $V(0) = 0$ and
$$V(x) - V(x_0) \leq \int_0^t \langle y(\tau), u(\tau)\rangle\,d\tau \qquad (2.3)$$
for every $x_0 \in \mathcal{X}$, $u \in U_f$ and $t \geq 0$. In this way, it is easy to deduce that the storage function $V$ is non-increasing along the trajectories of the unforced system ($u = 0$), that is
$$V(x(t)) \leq V(x_0), \qquad \forall x_0 \in \mathcal{X},\; t \geq 0$$
Then, passive systems with a positive definite storage function are stable in the sense of Lyapunov. Moreover, the storage function is non-increasing along the system trajectories that are compatible with the condition $y = 0$, that is along the set of trajectories that define the zero-dynamics of (2.1). Then, a passive system characterized by a positive definite storage function has a stable zero-dynamics.
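The dissipation inequality (2.3) is easy to check numerically along a simulated trajectory. The sketch below does this for the DC-motor-like phd system of Example 1.2, taking the Hamiltonian as storage function; the parameter values and the test input are assumptions made purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# DC-motor-like passive system (Example 1.2); illustrative parameters
I_m, L, K, b, R_a = 0.01, 0.5, 0.05, 0.001, 1.0
J = np.array([[0.0, K], [-K, 0.0]])
R = np.diag([b, R_a])
G = np.array([[0.0], [1.0]])
gradH = lambda x: np.array([x[0] / I_m, x[1] / L])
V     = lambda x: 0.5 * (x[0]**2 / I_m + x[1]**2 / L)   # storage = Hamiltonian

u = lambda t: 5.0 * np.sin(2.0 * t)                     # arbitrary test input

def f(t, x):
    return (J - R) @ gradH(x) + G.ravel() * u(t)

t = np.linspace(0.0, 5.0, 2001)
sol = solve_ivp(f, (t[0], t[-1]), [0.0, 0.0], t_eval=t, max_step=1e-3)

y = (G.T @ np.apply_along_axis(gradH, 0, sol.y)).ravel()         # passive output
power    = y * u(t)
supplied = np.concatenate([[0.0],
                           np.cumsum(0.5 * (power[1:] + power[:-1]) * np.diff(t))])
stored   = np.apply_along_axis(V, 0, sol.y) - V(sol.y[:, 0])

# Dissipation inequality (2.3): stored energy increase never exceeds supplied energy.
assert np.all(stored <= supplied + 1e-9)
```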

By considering the two possible limiting situations for the dissipation inequality (2.3), it is possible to introduce the following sub-classes of passive systems.

Definition 2.4 (lossless system). The system (2.1) is lossless if and only if it is passive with storage function $V$ and
$$V(x) - V(x_0) = \int_0^t \langle y(\tau), u(\tau)\rangle\,d\tau$$
for every $x_0 \in \mathcal{X}$, $u \in U_f$ and $t \geq 0$.

Definition 2.5 (strictly passive system). The system (2.1) is strictly passive if and only if it is passive with storage function $V$ and it is possible to find a positive definite function $D : \mathcal{X} \to \mathbb{R}$ such that
$$V(x) - V(x_0) = \int_0^t \langle y(\tau), u(\tau)\rangle\,d\tau - \int_0^t D(x(\tau))\,d\tau$$
for every $x_0 \in \mathcal{X}$, $u \in U_f$ and $t \geq 0$.

A classical result providing a characterization of the passivity properties of the nonlinear system affine in the input (2.1) is the well-known Kalman-Yacubovitch-Popov (KYP) lemma. This lemma introduces a couple of necessary and sufficient conditions for a nonlinear system to be passive. Before stating the lemma, the following property of passive systems is introduced.

Definition 2.6 (KYP property). The system (2.1) has the KYP property if and only if it is possible to find a non-negative $C^1$ function $V : \mathcal{X} \to \mathbb{R}$ such that $V(0) = 0$ and
$$L_f V(x) \leq 0 \qquad (2.4a)$$
$$L_g V(x) = h^T(x) \qquad (2.4b)$$
for every $x \in \mathcal{X}$, (Byrnes et al., 1991).

Then, the following result (an equivalent formulation of the KYP lemma) can be proved.

Proposition 2.1. The system (2.1) has the KYP property if and only if it is passive. In other words, if the system (2.1) has the KYP property, then it is passive with storage function $V$; if the system (2.1) is passive, then its storage function $V$ satisfies conditions (2.4).

Proof. If the system (2.1) has the KYP property, then along its trajectories we have that
$$\frac{dV(x(t))}{dt} = L_f V(x(t)) + L_g V(x(t))\,u(t) \leq y^T u = \langle y, u\rangle \qquad (2.5)$$
a relation that can be integrated in order to obtain the dissipation inequality (2.3). On the other hand, if (2.1) is passive with $C^1$ storage function $V$, the time derivative of the dissipation inequality (2.3) leads to (2.5), which implies conditions (2.4).
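Conditions (2.4) can also be verified symbolically. The sympy sketch below checks the KYP property for the DC motor of Example 1.2 with $V = H$; the system data are the same illustrative assumptions used earlier, and the computation simply confirms $L_f V \leq 0$ and $L_g V = h^T$.

```python
import sympy as sp

p, phi = sp.symbols('p phi', real=True)
I_m, L, K, b, R_a = sp.symbols('I_m L K b R_a', positive=True)

x = sp.Matrix([p, phi])
H = p**2 / (2 * I_m) + phi**2 / (2 * L)          # candidate storage function
gradH = sp.Matrix([sp.diff(H, v) for v in x])

J = sp.Matrix([[0, K], [-K, 0]])
R = sp.diag(b, R_a)
G = sp.Matrix([0, 1])

f = (J - R) * gradH                               # drift vector field
h = G.T * gradH                                   # output map

LfV = (gradH.T * f)[0, 0]                         # Lie derivative of V along f
LgV = (gradH.T * G)[0, 0]                         # Lie derivative of V along g

print(sp.simplify(LfV))            # -> -b*p**2/I_m**2 - R_a*phi**2/L**2 <= 0   (2.4a)
print(sp.simplify(LgV - h[0, 0]))  # -> 0, i.e. L_g V = h^T                     (2.4b)
```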

Note 2.3. The KYP property for nonlinear systems affine in the input can be specialized in order to deal with lossless and strictly passive systems. In particular, (2.1) is lossless if and only if it is possible to find a non-negative function $V : \mathcal{X} \to \mathbb{R}$ such that
$$L_f V(x) = 0 \quad\text{and}\quad L_g V(x) = h^T(x)$$
for every $x \in \mathcal{X}$. Under the same hypotheses, (2.1) is strictly passive if
$$L_f V(x) < 0 \quad\text{and}\quad L_g V(x) = h^T(x)$$
for every $x \in \mathcal{X}$. Clearly, in a strictly passive system with positive definite $C^1$ storage function, the equilibrium point $x = 0$ is asymptotically stable.

Basic considerations on the stabilization of passive systems

In this section, some classical results for the asymptotic stabilization of passive systems by means of algebraic output feedback are presented. These ideas will be extended and specialized in the remaining part of this chapter in order to solve the regulation problem for port Hamiltonian systems. As discussed in (Hill and Moylan, 1976), the asymptotic stabilization of a passive system deeply depends on its observability properties: these results have been extended and generalized in (Byrnes et al., 1991). First of all, it is necessary to give a definition of observability and detectability.

Definition 2.7 (observability). The system (2.1) is locally zero-state observable if there exists a neighborhood $U \subseteq \mathcal{X}$ of $0$ such that, for every $x \in U$,
$$h(\Phi(t, x, 0)) = 0 \;\text{ for all } t \geq 0 \quad\Longrightarrow\quad x(t) = \Phi(t, x, 0) = 0$$
If $U \equiv \mathcal{X}$, the system is zero-state observable.

A less restrictive requirement can be expressed in terms of the following definition.

Definition 2.8 (detectability). The system (2.1) is locally zero-state detectable if there exists a neighborhood $U \subseteq \mathcal{X}$ of $0$ such that, for every $x \in U$,
$$h(\Phi(t, x, 0)) = 0 \;\text{ for all } t \geq 0 \quad\Longrightarrow\quad \lim_{t\to\infty}\Phi(t, x, 0) = 0$$
If $U \equiv \mathcal{X}$, the system is zero-state detectable.

A basic stabilization property of passive systems is summarized by the following proposition, whose proof is strictly related to La Salle's invariance principle. As will become clearer later, the control strategy introduced here is nothing more than damping injection, a typical control methodology also for port Hamiltonian systems.

Proposition 2.2. Suppose that (2.1) is a passive system with positive definite storage function $V$ and that it is locally zero-state detectable. Consider, then, a smooth function $\phi : Y \to U$ such that $\phi(0) = 0$ and $y^T\phi(y) > 0$ if $y \neq 0$. The control law
$$u = -\phi(y) \qquad (2.6)$$
asymptotically stabilizes the equilibrium $x = 0$. If (2.1) is zero-state detectable and $V$ is proper, the control law (2.6) globally asymptotically stabilizes the equilibrium $x = 0$, (Byrnes et al., 1991).

Proof. Since (2.1) is passive, from (2.3) and (2.6) we have that
$$V(x(t)) - V(x(0)) \leq -\int_0^t y^T(\tau)\phi(y(\tau))\,d\tau \leq 0$$
and clearly $V$ is non-increasing along the closed-loop system trajectories. Consider $a > 0$ sufficiently small; since $V$ is positive definite, the set $V^{-1}([0, a])$ is compact and the equilibrium $x = 0$ is stable in the sense of Lyapunov. Its asymptotic stability can be proved as follows. Consider an initial condition $x_0$ sufficiently close to $x = 0$, denote by $x_0(t)$ the corresponding trajectory of the closed-loop system and by $\gamma_0$ its $\omega$-limit set (nonempty, compact and invariant). Since $\lim_{t\to\infty} V(x(t)) = a_0 \geq 0$, then $V(\bar x) = a_0$ for every $\bar x \in \gamma_0$. Denote by $\bar x$ a point of $\gamma_0$ and by $\bar x(t)$ the corresponding trajectory. Since $\bar x(t) \in \gamma_0$, then $V(\bar x(t)) = a_0$ and consequently
$$0 = V(\bar x(t)) - V(\bar x) \leq -\int_0^t y^T(\tau)\phi(y(\tau))\,d\tau \leq 0$$
which implies that $y(t) = 0$ for every $t \geq 0$. By detectability, $\lim_{t\to\infty}\bar x(t) = 0$ and, consequently, $a_0 = 0$. Then, $\lim_{t\to\infty} V(x_0(t)) = 0$, that is $\lim_{t\to\infty} x_0(t) = 0$ and $x = 0$ is locally asymptotically stable. The global asymptotic stability of $x = 0$ easily follows from the further hypothesis of $V$ being proper.

Note 2.4. The previous proposition shows that any zero-state detectable passive system with positive definite storage function can be asymptotically stabilized by means of a pure-gain output feedback. It is possible to show that an analogous result holds without explicitly assuming the zero-state detectability of the nonlinear system. In particular, it can be proved that, if the system (2.1) is passive with positive definite storage function $V$, then the control law (2.6) makes $x = 0$ a (locally) asymptotically stable point if, given a neighborhood $B_0$ of $x = 0$, the largest invariant set contained in $\{x \in \mathcal{X} \cap B_0 \mid y(x) = 0\}$ equals $\{0\}$.

Note 2.5. Consider a passive system (2.1) with positive definite storage function $V$. Then, the configuration $x = 0$ can be asymptotically stabilized by the control law $u = -y$, which is a particular case of (2.6). Since, from the KYP property, $y = h(x) = [L_g V(x)]^T$, we deduce that the state feedback law $u = -[L_g V(x)]^T$ stabilizes the system in $x = 0$, (Jurdjevic and Quinn, 1978).

Passive systems and port Hamiltonian systems

Consider a generic port Hamiltonian system with dissipation (phd system):
$$\begin{aligned}
\dot x &= \left[J(x) - R(x)\right]\frac{\partial H}{\partial x} + G(x)u \\
y &= G^T(x)\frac{\partial H}{\partial x}
\end{aligned} \qquad (2.7)$$
where $x \in \mathcal{X} \subseteq \mathbb{R}^n$, $u \in U \subseteq \mathbb{R}^m$, $y \in Y \equiv U^\ast$ and $H : \mathcal{X} \to \mathbb{R}$ is the Hamiltonian (energy) function, bounded from below. Moreover, assume that $J(x) = -J^T(x)$ and that $R(x) = R^T(x) \geq 0$ for every $x \in \mathcal{X}$.
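Proposition 2.2 can be illustrated numerically on a lossless toy phd system, where the unforced trajectories do not converge and damping injection is what stabilizes the minimum of the energy. The sketch below uses an undamped LC circuit as the toy example; circuit values and the feedback gain are arbitrary illustrative choices, not data from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lossless LC circuit as a toy phd system (illustrative values)
C, L = 1e-3, 1e-2
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
G = np.array([0.0, 1.0])
gradH = lambda x: np.array([x[0] / C, x[1] / L])
H     = lambda x: 0.5 * (x[0]**2 / C + x[1]**2 / L)

def closed_loop(t, x, k):
    y = G @ gradH(x)                 # passive output
    u = -k * y                       # damping injection (2.6) with phi(y) = k*y
    return J @ gradH(x) + G * u

x0 = [1e-3, 0.0]
undamped = solve_ivp(closed_loop, (0, 0.5), x0, args=(0.0,), max_step=1e-5)
damped   = solve_ivp(closed_loop, (0, 0.5), x0, args=(5.0,), max_step=1e-5)

# With k = 0 the energy stays constant; with k > 0 it decays towards zero,
# so the minimum of H (the origin) is asymptotically stabilized.
print(H(undamped.y[:, -1]), H(damped.y[:, -1]))
```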

The relation between system (2.7) and passive systems can be summarized by means of the following proposition.

Proposition 2.3. Every port Hamiltonian system is passive, and the Hamiltonian function is a storage function.

Proof. The proof is quite trivial. The energy balance equation for system (2.7) is given by
$$\frac{dH}{dt} = -\frac{\partial^T H}{\partial x} R(x) \frac{\partial H}{\partial x} + y^T u \leq y^T u$$
then, by integration, it can be deduced that
$$H(x(t)) - H(x(0)) \leq \int_0^t y^T(\tau)u(\tau)\,d\tau \qquad (2.8)$$
which is the same as (2.3). Clearly, the storage function is, in this case, the Hamiltonian $H$.

Equivalently, it is possible to check the passivity of the class of phd systems by verifying that the KYP property, i.e. condition (2.4), is satisfied.

Note 2.6. If the Hamiltonian $H$ is bounded from below, from (2.8) we have that:
$$-\int_0^t y^T(\tau)u(\tau)\,d\tau \leq H(x(0)) < \infty \qquad (2.9)$$
Then, it can be deduced that the total amount of energy that can be extracted from a port Hamiltonian system (the same result holds for generic passive systems) is bounded.

Note 2.7. Consider the port Hamiltonian system (2.7) and the output feedback law (2.6): the energy balance equation becomes
$$\frac{dH}{dt} \leq -y^T\phi(y) < 0 \quad\text{if } y \neq 0$$
Denote by $x^\ast$ the minimum of the Hamiltonian: clearly, $y(x^\ast) = 0$; then, if the port Hamiltonian system is (locally) zero-state detectable, the configuration $x^\ast$ is (locally) asymptotically stable. Suppose that $x^\ast$ is the desired configuration. Then, asymptotic stabilization can be achieved by means of the simple output feedback law (2.6). From a physical point of view, the controller behaves as a dissipative element: in this way, the total energy decreases and the system reaches the configuration $x^\ast$ corresponding to the minimum of the energy. The controller grows in complexity if $x^\ast$ is not the configuration in which the closed-loop system has to be stabilized. In this case, the controller can be developed in order to modify the closed-loop energy so that a (new) minimum in the desired configuration is introduced. This procedure is called energy shaping. The new minimum will then be asymptotically stable if dissipation is added; this further procedure is called damping injection.

2.2 Control by interconnection

Introduction

The problem of developing a control scheme whose behavior can be interpreted in terms of its effects on the energy of the closed-loop system can basically be approached in two different ways.

In the first case, the desired closed-loop energy function is fixed a priori and the state feedback law is calculated in order to achieve this objective. Note the similarity with the Lyapunov approach. The main problems are related to the fact that, usually, the desired energy function is chosen quadratic in the error, and is thus not amenable to any physical interpretation. Moreover, in this case the energy shaping procedure becomes the result of a control by inverse dynamics, so that some invertibility hypotheses on the open-loop system dynamics are required. More details can be found in (Ortega et al., 1998).

A different and newer approach is based on the idea that the energy function of the closed-loop system is the result of a proper choice of the interconnection and damping structure of the controlled system. In this way, the energy function is a consequence of a desired internal structure for the closed-loop system, and thus it is an energy in the physical sense. This approach has been introduced in (Ortega et al., 1999; Ortega et al., 2000) and it will be discussed in the following sections.

General formulation of energy shaping

Consider the port Hamiltonian system (2.7) and rewrite the energy balance equation (2.8) as follows:
$$H(x(t)) - H(x(0)) = \int_0^t y^T(\tau)u(\tau)\,d\tau - d(t) \qquad (2.10)$$
where $d$ is a non-negative function taking into account all the dissipation effects. As pointed out in Note 2.7, the asymptotic stabilization around $x^\ast \in \mathcal{X}$ is an easy task if $x^\ast$ corresponds to a minimum of the energy function $H$. If this is not the case, it is necessary to shape the energy function by means of the controller in order to introduce a minimum in $x^\ast$. A quite general formulation of this control technique can be given as follows: find a proper state feedback law
$$u = \beta(x) + v \qquad (2.11)$$
with $v$ an external signal, such that the resulting closed-loop dynamics satisfies the following new energy balance relation:
$$H_d(x(t)) - H_d(x(0)) = \int_0^t y'^T(\tau)v(\tau)\,d\tau - d_d(t) \qquad (2.12)$$
In (2.12), $H_d : \mathcal{X} \to \mathbb{R}$ is a desired energy function with a minimum in $x^\ast$, $y'$ is the new passive output (which can still be equal to $y$) and $d_d$ is a non-negative function that is introduced in order to increase the convergence rate of the closed-loop system. In conclusion, it should now be clear that in the development of controllers based on energy considerations two main steps can be identified:

(a) definition/deduction of a suitable $H_d$ with a minimum in the desired configuration (energy shaping). This procedure can be generalized in order to modify also the interconnection structure (i.e. the matrix $J$ in (2.7)) of the system, in order to add some (virtual) coupling between non-interacting parts of the system.

(b) modification of the dissipative effects (damping injection) in order to increase the performance. If the system (2.7) is considered, this can be done by properly changing the matrix $R$.

Stabilization by energy balancing

Within the framework of control by energy shaping, the action of the controller on the plant has to be interpreted by comparing the open-loop and the resulting closed-loop energy functions. Clearly, the way in which the energy properties of a given system can be modified depends on the choice of the control action (2.11). Suppose that it is possible to express the energy supplied by the controller as a function of the state of the plant: if we can find a function $\beta : \mathcal{X} \to U$ such that
$$-\int_0^t y^T(\tau)\,\beta(x(\tau))\,d\tau = H_a(x(t)) - H_a(x(0)) \qquad (2.13)$$
for some function $H_a : \mathcal{X} \to \mathbb{R}$, then the controller (2.11) assures that the resulting closed-loop system is passive, with energy function given by
$$H_d(x) = H(x) + H_a(x) \qquad (2.14)$$
In fact, combining (2.10), (2.13) and (2.14), it can be obtained that
$$H(x(t)) - H(x(0)) = \int_0^t y^T(\tau)\left[\beta(x(\tau)) + v(\tau)\right] d\tau - d(t)$$
and then
$$H_d(x(t)) - H_d(x(0)) = \int_0^t y^T(\tau)v(\tau)\,d\tau - d(t)$$
which is the energy balance relation with respect to the desired energy function $H_d$. The regulation problem is solved if it is possible to choose $H_a$ such that $H_d$ has a minimum at the desired equilibrium $x^\ast$. Since $H_a$ is, up to the sign, the energy supplied by the controller, from (2.14) it can be deduced that the resulting closed-loop energy function is given by the difference between the stored and the supplied energy: this is the reason why this control technique is usually referred to as energy-balancing passivity-based control (PBC), (Ortega et al., 2001). This control technique solves the regulation problem for mechanical systems and for some particular electrical networks.

The key point behind this approach is the solution of (2.13) in terms of $\beta$ for some $H_a$; this equation can be equivalently expressed by means of the following PDE:
$$-y^T(x(t))\,\beta(x(t)) = \frac{\partial^T H_a}{\partial x}\,\dot x(t) \qquad (2.15)$$
in which $x(t)$ is the state evolution of the closed-loop system. Since this methodology can be fruitfully applied to a general passive system in the form (2.1) in which $H$ is the storage function, (2.15) can be written as
$$\frac{\partial^T H_a}{\partial x}\left[f(x) + g(x)\beta(x)\right] = -h^T(x)\beta(x) \qquad (2.16)$$
where $f$, $g$, $h$ and $H$ have to satisfy the KYP conditions (2.4). Moreover, in order to stabilize the equilibrium configuration $x^\ast$ for the closed-loop system it is necessary that $H_d = H + H_a$ has a minimum in $x^\ast$.

Since $x^\ast$ is a (desired) equilibrium configuration for the closed-loop system, it is necessary that
$$f(x^\ast) + g(x^\ast)\beta(x^\ast) = 0$$
and, from (2.16), that
$$h^T(x^\ast)\beta(x^\ast) = 0 \quad\Longleftrightarrow\quad y^T(x^\ast)\beta(x^\ast) = 0 \qquad (2.17)$$
which expresses the fact that, at the equilibrium point, the power extracted by the controller has to be equal to zero. In general, this property has to be satisfied for all the configurations $\bar x \in \mathcal{X}$ that are solutions of $f(\bar x) + g(\bar x)\beta(\bar x) = 0$. An immediate consequence of (2.17) is that a necessary condition for the solvability of (2.15), or (2.16), is that the energy dissipated by the system at the equilibrium has to be bounded, and hence that the system can be stabilized by extracting a finite amount of energy from the controller. This is the reason why this control technique can be successfully applied to the regulation of mechanical systems, for which the final configuration always corresponds to a velocity (and hence a supplied power) equal to zero, but not to (all) electrical networks: if at the equilibrium there is a current flowing through (at least) one resistor, then the controller has to provide an infinite amount of energy in order to maintain the desired configuration.

Example 2.1 (n-dof mechanical system). Consider an n-dof fully actuated mechanical system, whose phd model is given in Example 1.4 (assume, for simplicity, $B(q) = I_n$). In order to asymptotically stabilize this system in $q^\ast$, a good candidate for the closed-loop energy function is
$$H_d(q, p) = \frac{1}{2} p^T M^{-1}(q)\,p + \frac{1}{2}(q - q^\ast)^T K_P (q - q^\ast)$$
with $K_P = K_P^T > 0$, since it is characterized by a global minimum in $(q^\ast, 0)$. From (2.14), we deduce that
$$H_a(q, p) = H_a(q) = -V(q) + \frac{1}{2}(q - q^\ast)^T K_P (q - q^\ast)$$
and, then, $\beta(q, p) = \beta(q)$ is a solution of (2.13) if it is given by
$$\beta(q) = \frac{\partial V}{\partial q} - K_P (q - q^\ast)$$
In order to increase the rate of convergence to $q^\ast$, it is possible to add some damping: this can be done by imposing
$$v = -K_D y = -K_D M^{-1}(q)\,p = -K_D \dot q$$
with $K_D = K_D^T > 0$. Globally, the state feedback law (2.11) is then given by
$$u = \frac{\partial V}{\partial q} - K_P (q - q^\ast) - K_D \dot q$$
which is the well-known PD plus gravity compensation controller, (Arimoto and Miyazaki, 1984; Lewis et al., 1993). Note that, at the equilibrium, the power supplied by the controller to the mechanical system is equal to zero.
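The following minimal Python sketch implements the PD plus gravity compensation law of Example 2.1 on a deliberately simplified 2-dof mechanical system (constant diagonal inertia and a pendulum-like gravity potential). The model and all numerical values are illustrative assumptions, not a robot model from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simplified 2-dof mechanical system (illustrative stand-in, not from the thesis)
M     = np.diag([1.0, 0.5])                       # constant inertia matrix
D     = np.diag([0.1, 0.1])                       # natural joint damping
g, l  = 9.81, 0.4
dVdq  = lambda q: g * l * np.sin(q)               # gradient of the gravity potential

q_star = np.array([0.8, -0.3])                    # desired configuration
K_P    = np.diag([20.0, 20.0])
K_D    = np.diag([5.0, 5.0])

def closed_loop(t, x):
    q, p  = x[:2], x[2:]
    qdot  = np.linalg.solve(M, p)
    # energy-balancing PBC: gravity compensation + proportional term + damping injection
    u = dVdq(q) - K_P @ (q - q_star) - K_D @ qdot
    return np.concatenate([qdot, -dVdq(q) - D @ qdot + u])

sol = solve_ivp(closed_loop, (0.0, 10.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
print(sol.y[:2, -1])   # should approach q_star
```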

Figure 2.1: Series and parallel RLC circuits: (a) series RLC, (b) parallel RLC.

Example 2.2 (series RLC circuit). Consider the series RLC circuit of Fig. 2.1(a), whose phd model is given by
$$\begin{aligned}
\dot q &= \frac{1}{L}\phi = \frac{\partial H}{\partial\phi} \\
\dot\phi &= -\frac{1}{C}q - \frac{R}{L}\phi + u = -\frac{\partial H}{\partial q} - R\frac{\partial H}{\partial\phi} + u \\
y &= \frac{1}{L}\phi = \frac{\partial H}{\partial\phi}
\end{aligned} \qquad (2.18)$$
in which $x = [q, \phi]^T$ is the state variable, with $q$ the charge stored in the capacitor and $\phi$ the flux in the inductor, and
$$H(q, \phi) = \frac{1}{2}\frac{q^2}{C} + \frac{1}{2}\frac{\phi^2}{L} \qquad (2.19)$$
is the energy (Hamiltonian) function. The system (2.18) clearly satisfies condition (2.10) with $d$ given by
$$d(t) = \int_0^t R\left[\frac{\phi(\tau)}{L}\right]^2 d\tau$$
which is the total energy dissipated by the resistor. The possible equilibrium configurations in which (2.18) can be stabilized are given by $x^\ast = [q^\ast, 0]^T$. Since the current at the equilibrium is equal to zero, it is clear that the power supplied by the voltage source is also zero in steady state. In the same way as in Example 2.1, it is necessary to find the functions $\beta$ and $H_a$ that solve the PDE (2.15), which, in this case, takes the form
$$\frac{\phi}{L}\frac{\partial H_a}{\partial q} - \left[\frac{q}{C} + R\frac{\phi}{L} - \beta(q, \phi)\right]\frac{\partial H_a}{\partial\phi} = -\frac{\phi}{L}\beta(q, \phi)$$
Since (2.19) is quadratic in $\phi$ and at the equilibrium $\phi^\ast = 0$, it is necessary to shape only the contribution to the total energy due to $q$. Then, it is convenient to suppose that $H_a$ depends only on $q$: in this way, the PDE (2.15) becomes
$$\beta(q) = -\frac{\partial H_a}{\partial q}$$
which defines the feedback law once the desired energy function $H_d$ is specified (and, consequently, also $H_a$). Given the open-loop energy function (2.19), a possible choice for $H_d$ is
$$H_d(q, \phi) = \frac{1}{2}\left(\frac{1}{C} + \frac{1}{C_a}\right)(q - q^\ast)^2 + \frac{1}{2}\frac{\phi^2}{L}$$
with $C_a$ a free design parameter chosen so that $\frac{1}{C} + \frac{1}{C_a} > 0$, in order to have a minimum in $x^\ast$. With this choice, the control action becomes
$$u = \beta(q) = -\frac{q}{C_a} + \left(\frac{1}{C} + \frac{1}{C_a}\right)q^\ast$$
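The sketch below simulates the series RLC of Example 2.2 under the energy-balancing feedback $u = \beta(q)$ just derived and checks convergence to the desired charge. The circuit values, the design parameter $C_a$ and the target $q^\ast$ are all illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Series RLC of Example 2.2 with energy-balancing feedback (illustrative values)
R, L, C = 1.0, 0.1, 1e-2
C_a     = 5e-3                      # design parameter, 1/C + 1/C_a > 0
q_star  = 0.02                      # desired capacitor charge

def closed_loop(t, x):
    q, phi = x
    u = -q / C_a + (1.0 / C + 1.0 / C_a) * q_star     # u = beta(q)
    return [phi / L, -q / C - R * phi / L + u]

H_d = lambda q, phi: 0.5 * (1/C + 1/C_a) * (q - q_star)**2 + 0.5 * phi**2 / L

sol = solve_ivp(closed_loop, (0.0, 1.0), [0.0, 0.0], max_step=1e-4)
print(sol.y[:, -1])                 # approaches (q_star, 0)
print(H_d(*sol.y[:, -1]))           # shaped energy approaches its minimum (zero)
```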

Example 2.3 (parallel RLC circuit). Consider the parallel RLC circuit of Fig. 2.1(b), whose phd model is given by
$$\begin{aligned}
\dot q &= -\frac{1}{RC}q + \frac{1}{L}\phi = -\frac{1}{R}\frac{\partial H}{\partial q} + \frac{\partial H}{\partial\phi} \\
\dot\phi &= -\frac{1}{C}q + u = -\frac{\partial H}{\partial q} + u \\
y &= \frac{1}{L}\phi = \frac{\partial H}{\partial\phi}
\end{aligned}$$
where the Hamiltonian function is still given by (2.19). Due to the change in the dissipation structure, the equilibrium points are now given by $x^\ast = \left[Cu^\ast,\; (L/R)u^\ast\right]^T$ for any given $u^\ast$. Note that the power supplied at the equilibrium is not equal to zero (unless $u^\ast = 0$). By means of the control technique presented in this section, it is therefore not possible to stabilize the system. In the following sections and in Sec. 2.3 this problem will be solved with different approaches.

Control through invariants

Example 2.3 about the parallel RLC circuit has shown how dissipation can play an important role in the regulation of passive systems. By approaching this problem on the basis of simple energy balancing considerations (as in the energy-balancing PBC technique), it is not possible to stabilize the system in a configuration whose maintenance requires an infinite amount of energy. In this section, a control scheme that is able to overcome some of these limitations and to provide a characterization of the admissible dissipation is introduced. In particular, a condition on the structure of the dissipation term for which the stabilization is possible is obtained.

Suppose that the plant is given by (2.7) and that the controller can also be represented in port Hamiltonian form as follows:
$$\begin{aligned}
\dot\xi &= \left[J_C(\xi) - R_C(\xi)\right]\frac{\partial H_C}{\partial\xi} + G_C(\xi)\,u_C \\
y_C &= G_C^T(\xi)\frac{\partial H_C}{\partial\xi}
\end{aligned} \qquad (2.20)$$
where $\xi \in \mathcal{X}_C$ is the state variable, with $\dim\mathcal{X}_C = n_C$, $u_C \in U_C$ and $y_C \in Y_C \equiv U_C^\ast$ are the power conjugated port variables, $H_C : \mathcal{X}_C \to \mathbb{R}$ is the energy function, and $J_C(\xi) = -J_C^T(\xi)$ and $R_C(\xi) = R_C^T(\xi) \geq 0$, $\forall\xi \in \mathcal{X}_C$, are the interconnection and damping matrices. The basic idea is to interconnect the systems (2.7) and (2.20) in a power conserving manner and to shape the closed-loop energy by properly defining the Hamiltonian $H_C$ of the controller, in order to introduce a (possibly global) minimum in the desired equilibrium configuration.

Figure 2.2: Interconnection of physical systems: $\Sigma$ is the plant, while $\Sigma_C$ is the controller.

Suppose that $Y \subseteq U_C$ and, consequently, that $Y_C \subseteq U$. Then, (2.7) and (2.20) are interconnected in a power conserving way if (see Def. 1.7 and Def. 1.8):
$$\begin{cases} u = -y_C \\ u_C = y \end{cases}$$
or, if a couple of external signals $e \in U$ and $e_C \in U_C$ is included, by
$$\begin{cases} u = -y_C + e \\ u_C = y + e_C \end{cases} \qquad (2.21)$$
as presented in Fig. 2.2. It is easy to prove that the resulting feedback system is still port Hamiltonian, with state space $\mathcal{X} \times \mathcal{X}_C$ and energy function given by $H + H_C$. In fact, the closed-loop dynamics is given by:
$$\begin{aligned}
\begin{bmatrix} \dot x \\ \dot\xi \end{bmatrix} &=
\left(
\begin{bmatrix} J(x) & -G(x)G_C^T(\xi) \\ G_C(\xi)G^T(x) & J_C(\xi) \end{bmatrix} -
\begin{bmatrix} R(x) & 0 \\ 0 & R_C(\xi) \end{bmatrix}
\right)
\begin{bmatrix} \partial_x H(x) \\ \partial_\xi H_C(\xi) \end{bmatrix} +
\begin{bmatrix} G(x) & 0 \\ 0 & G_C(\xi) \end{bmatrix}
\begin{bmatrix} e \\ e_C \end{bmatrix} \\
\begin{bmatrix} y \\ y_C \end{bmatrix} &=
\begin{bmatrix} G^T(x) & 0 \\ 0 & G_C^T(\xi) \end{bmatrix}
\begin{bmatrix} \partial_x H(x) \\ \partial_\xi H_C(\xi) \end{bmatrix}
\end{aligned} \qquad (2.22)$$
which is clearly in port Hamiltonian form. It is important to note that, so far, there is no relation between the state of the controller and the state of the system to be controlled. Then, it is not clear how the controller energy, which is freely assignable, has to be chosen in order to solve the regulation problem. A possible solution is to constrain the state of the extended system (2.22) to a certain subspace of $\mathcal{X} \times \mathcal{X}_C$, for example given by:
$$\Omega_c := \{(x, \xi) \in \mathcal{X} \times \mathcal{X}_C \mid \xi = S(x) + c\} \qquad (2.23)$$
where $c \in \mathbb{R}^{n_C}$ and $S : \mathcal{X} \to \mathcal{X}_C$ is a function still to be determined. In other words, we are looking for a set of Casimir functions $C_i : \mathcal{X} \times \mathcal{X}_C \to \mathbb{R}$, $i = 1, \dots, n_C$, for the closed-loop system (2.22) such that
$$C_i(x, \xi) := S_i(x) - \xi_i \qquad (2.24)$$

where $[S_1(x), \dots, S_{n_C}(x)]^T = S(x)$. Clearly, the subspace (2.23) is given by
$$\Omega_c = \bigcap_{i=1}^{n_C} L_{C_i}^{c_i}$$
where the $L_{C_i}^{c_i}$ are defined in (1.50). Due to the nature of a Casimir function, it can be deduced that, by means of (2.24), it is possible to introduce an intrinsic nonlinear state feedback law that will be used in order to shape the energy function of the controller. Note that, under these hypotheses, this energy function depends on the state variables of system (2.7). This control methodology is called the invariant function method and it is discussed in depth in (Marsden and Ratiu, 1994; Dalsmo and van der Schaft, 1999).

From Def. 1.12, the set of $n_C$ functions (2.24) are Casimirs for (2.22) if and only if they are solutions of the following system of PDEs:
$$\left[\frac{\partial^T S}{\partial x} \;\vdots\; -I_{n_C}\right]
\begin{bmatrix} J(x) - R(x) & -G(x)G_C^T(\xi) \\ G_C(\xi)G^T(x) & J_C(\xi) - R_C(\xi) \end{bmatrix} = 0$$
or, equivalently, of:
$$\frac{\partial^T S}{\partial x}\left[J(x) - R(x)\right] = G_C(\xi)\,G^T(x) \qquad (2.25a)$$
$$\frac{\partial^T S}{\partial x}\,G(x)G_C^T(\xi) = -J_C(\xi) + R_C(\xi) \qquad (2.25b)$$
Consequently, we have that
$$\frac{\partial^T S}{\partial x}\left[J(x) - R(x)\right]\frac{\partial S}{\partial x} = J_C(\xi) + R_C(\xi) \qquad (2.26)$$
Since $J_1 + R_1 = J_2 + R_2$, with $J_i$ skew-symmetric and $R_i$ symmetric, $i = 1, 2$, implies $J_1 = J_2$ and $R_1 = R_2$, from (2.26) we have that:
$$\frac{\partial^T S}{\partial x}\,J(x)\,\frac{\partial S}{\partial x} = J_C(\xi) \qquad (2.27a)$$
$$-\frac{\partial^T S}{\partial x}\,R(x)\,\frac{\partial S}{\partial x} = R_C(\xi) \qquad (2.27b)$$
Clearly, under the hypotheses $R(x) = R^T(x) \geq 0$ and $R_C(\xi) = R_C^T(\xi) \geq 0$, (2.27b) can be equivalently written as
$$R(x)\frac{\partial S}{\partial x} = 0 \qquad R_C(\xi) = 0$$
and, consequently, (2.25a) becomes:
$$\frac{\partial^T S}{\partial x}\,J(x) = G_C(\xi)\,G^T(x)$$
In conclusion, the following proposition has been proved.

Proposition 2.4. The functions $C_i$, $i = 1, \dots, n_C$, defined in (2.24) are Casimir functions for the system (2.22) if and only if the following conditions are satisfied:
$$\frac{\partial^T S}{\partial x}\,J(x)\,\frac{\partial S}{\partial x} = J_C(\xi) \qquad (2.28a)$$
$$R(x)\,\frac{\partial S}{\partial x} = 0 \qquad (2.28b)$$
$$R_C(\xi) = 0 \qquad (2.28c)$$
$$\frac{\partial^T S}{\partial x}\,J(x) = G_C(\xi)\,G^T(x) \qquad (2.28d)$$

Suppose that (2.28) are satisfied. Then, the state variables of the controller are robustly related to the state variables of the system to be stabilized, that is, if $e = 0$ and $e_C = 0$, we have that
$$\xi_i = S_i(x) + c_i, \qquad i = 1, \dots, n_C \qquad (2.29)$$
with $c_i \in \mathbb{R}$ depending on the initial conditions. Moreover, the closed-loop dynamics (2.22) evolves on the foliation induced by the level sets (1.50), which, in this case, take the form
$$L_{C_i}^{c_i} = \{(x, \xi) \in \mathcal{X} \times \mathcal{X}_C \mid \xi_i = S_i(x) + c_i\} \qquad (2.30)$$
Note that $\xi$ can be expressed as a function of the $x$ coordinate. If conditions (2.28b-d) are taken into account, the reduced dynamics of (2.22) on these level sets is given by
$$\dot x = \left[J(x) - R(x)\right]\frac{\partial H}{\partial x} - G(x)G_C^T(\xi)\frac{\partial H_C}{\partial\xi}
= \left[J(x) - R(x)\right]\left(\frac{\partial H}{\partial x} + \frac{\partial S}{\partial x}\frac{\partial H_C}{\partial\xi}\right) \qquad (2.31)$$
From (2.29), we have that $H_C(\xi) \equiv H_C(S(x) + c)$: the controller energy function finally depends on $x$ through the nonlinear feedback action $S(\cdot)$. If
$$H_d(x) := H(x) + H_C(S(x) + c) \qquad (2.32)$$
then (2.31) can be written as
$$\dot x = \left[J(x) - R(x)\right]\left(\frac{\partial H}{\partial x} + \frac{\partial S}{\partial x}\frac{\partial H_C}{\partial S}\right)
= \left[J(x) - R(x)\right]\frac{\partial H_d}{\partial x} \qquad (2.33)$$
In conclusion, the following proposition has been proved.

Proposition 2.5. Consider the closed-loop port Hamiltonian system (2.22), with $e = 0$ and $e_C = 0$, and suppose that the function $S(x) = [S_1(x), \dots, S_{n_C}(x)]^T$ satisfies conditions (2.28). Then, the reduced dynamics on the level sets (2.30) is given by (2.33), where the closed-loop energy function $H_d$ is given by (2.32).
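For design purposes it is convenient to check conditions (2.28) numerically for a candidate $S$. The helper below does this at a single point; the toy plant (two states, damping acting only on the second coordinate) and the one-dimensional integrator controller are made-up illustrative data, chosen so that $S(q, p) = q$ satisfies all four conditions.

```python
import numpy as np

def check_casimir_conditions(J, R, G, dSdx, J_C, G_C, R_C, tol=1e-10):
    """Numerically check conditions (2.28a)-(2.28d) at one point.
    dSdx is the n x n_C Jacobian dS/dx evaluated at that point."""
    ok_a = np.allclose(dSdx.T @ J @ dSdx, J_C, atol=tol)      # (2.28a)
    ok_b = np.allclose(R @ dSdx, 0.0, atol=tol)                # (2.28b)
    ok_c = np.allclose(R_C, 0.0, atol=tol)                     # (2.28c)
    ok_d = np.allclose(dSdx.T @ J, G_C @ G.T, atol=tol)        # (2.28d)
    return ok_a and ok_b and ok_c and ok_d

# Toy plant: x = (q, p), damping acting only on p, input entering through p
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
R = np.diag([0.0, 0.2])
G = np.array([[0.0], [1.0]])

# Candidate Casimir C(x, xi) = S(x) - xi with S(q, p) = q and a 1-dim controller
dSdx = np.array([[1.0], [0.0]])
J_C  = np.zeros((1, 1))
G_C  = np.eye(1)
R_C  = np.zeros((1, 1))

print(check_casimir_conditions(J, R, G, dSdx, J_C, G_C, R_C))   # True
```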

Note 2.8. From (2.32), it is possible to deduce that, under the hypothesis $e = 0$ and $e_C = 0$, for the reduced closed-loop dynamics (2.33) we have that
$$\frac{dH_d}{dt} = \frac{dH}{dt} + \frac{dH_C}{dt} = \frac{dH}{dt} + y_C^T u_C$$
Then, from (2.21), it can be deduced that:
$$\frac{dH_d}{dt} = \frac{dH}{dt} - y^T u$$
or, equivalently, that:
$$H_d(x(t)) = H(x(t)) - \int_0^t y^T(\tau)u(\tau)\,d\tau + \kappa$$
with $\kappa \in \mathbb{R}$ a constant. In other words, the closed-loop energy function is given by the initial energy function $H$ minus the energy supplied by the controller.

Note 2.9. Condition (2.28b) imposes some constraints on the class of port Hamiltonian systems to which the proposed control technique can be applied. As already pointed out in the previous section, dissipative effects can limit the applicability of an energy-based control scheme: systems requiring a non-zero power flow in order to be kept in the desired configuration cannot be treated within that framework. The control methodology described in this section tries to overcome this limitation. Moreover, it is able to characterize the admissible dissipation (Ortega et al., 2001) for energy-balancing PBC in terms of the coordinates along which the energy can be shaped. In fact, from (2.28b) we deduce that:
$$R(x)\,\frac{\partial}{\partial x} H_C(S(x)) = 0 \qquad (2.34)$$
for any controller energy function $H_C$. This relation means that the controller energy $H_C$ cannot depend on the coordinates where natural damping is present or, in other words, that the closed-loop energy function can be shaped only along the directions in which no dissipation effect takes place.

In Prop. 2.5, the reduced closed-loop dynamics has been calculated under the hypothesis that no forcing action is present on the closed-loop system, i.e. that $e = 0$ and $e_C = 0$. By means of the following proposition, a simple solution in order to remove at least the hypothesis $e = 0$ is discussed.

Proposition 2.6. Consider the closed-loop port Hamiltonian system (2.22). Suppose that the function $S(x) = [S_1(x), \dots, S_{n_C}(x)]^T$ satisfies conditions (2.28), that $J_C(\xi) = 0$ and that $\operatorname{rank} G_C(\xi)$ is maximal (i.e. $G_C(\xi)$ is injective). Then, if $e_C = 0$, the closed-loop dynamics on the level sets (2.30) is given by
$$\begin{aligned}
\dot x &= \left[J(x) - R(x)\right]\frac{\partial H_d}{\partial x} + G(x)e \\
y &= G^T(x)\frac{\partial H_d}{\partial x}
\end{aligned}$$
where the energy function $H_d$ is given as in (2.32).

Proof. The proof consists in verifying that, under these hypotheses, the set of functions (2.24) are strong Casimir functions for (2.22), with $e_C = 0$, in the sense of Def. 1.10. This property can be equivalently formulated by requiring that
$$\left[\frac{\partial^T C}{\partial x} \;\vdots\; \frac{\partial^T C}{\partial\xi}\right]\begin{bmatrix} G(x) \\ 0 \end{bmatrix}
= \left[\frac{\partial^T S}{\partial x} \;\vdots\; -I_{n_C}\right]\begin{bmatrix} G(x) \\ 0 \end{bmatrix} = 0$$
This can be easily verified as follows: since $J_C(\xi) = 0$, from (2.25b) we have that
$$G_C(\xi)\,G^T(x)\,\frac{\partial S}{\partial x} = 0$$
and, then, $G^T(x)\frac{\partial S}{\partial x} = 0$, which follows from the maximal rank condition on $G_C(\xi)$ and completes the proof.

Example 2.4 (parallel RLC circuit). Consider the parallel RLC circuit of Fig. 2.1(b), already discussed in Example 2.3. The possible equilibrium configurations are $x^\ast = [q^\ast, \phi^\ast]^T = [Cu^\ast, (L/R)u^\ast]^T$ for any given $u^\ast$. In order to stabilize this configuration, assume that the controller (2.20) is a first order dynamical system for which $J_C(\xi) = R_C(\xi) = 0$ and $G_C(\xi) = 1$. With this choice, conditions (2.28b) and (2.28c) are immediately satisfied, while (2.28a) and (2.28d) hold if the function $S$ satisfies
$$\frac{\partial S}{\partial q} = 1 \quad\text{and}\quad \frac{\partial S}{\partial\phi} = 0$$
Then, the Casimir function is given by $C(q, \phi, \xi) = q - \xi + \kappa$, with $\kappa \in \mathbb{R}$ depending on the initial conditions. Consequently, $\xi = q + \kappa$ for the closed-loop system. It is easy to prove that, by choosing
$$H_C(\xi) = \frac{1}{2}\frac{\xi^2}{C_a} - \left(\frac{1}{C} + \frac{1}{C_a}\right)\xi q^\ast$$
the desired configuration $x^\ast$ is asymptotically stabilized.

Example 2.5 (n-dof mechanical system). As already presented in Example 2.1, an n-dof mechanical system can be easily stabilized by means of (simple) energy-based control techniques, since only a finite amount of energy is required in order to reach the desired configuration. Clearly, stability can also be achieved by means of the control scheme described in this section, which turns out to be a generalization of the classical PBC technique. Consider the phd model of an n-dof fully-actuated mechanical system given in Example 1.4 (assume, for simplicity, $B(q) = I_n$) and recall that the Hamiltonian is given by
$$H(q, p) = \frac{1}{2}p^T M^{-1}(q)\,p + V(q)$$
while the inputs $u$ are the joint torques and the outputs $y$ are the joint velocities $\dot q$. As regards the controller (2.20), assume that $\dim\mathcal{X}_C = n$ and $R_C(\xi) = 0$. Then, the functions (2.24) are Casimirs for the closed-loop system if $S(q, p) = [S_1(q, p), \dots, S_n(q, p)]^T$ satisfies conditions (2.28), which can be written as
$$\frac{\partial^T S}{\partial q}\frac{\partial S}{\partial p} - \frac{\partial^T S}{\partial p}\frac{\partial S}{\partial q} = J_C(\xi) \qquad (2.35a)$$
$$\frac{\partial^T S}{\partial p}\,D(q, p)\,\frac{\partial S}{\partial p} = 0 \qquad (2.35b)$$
$$\frac{\partial^T S}{\partial q} = G_C(\xi) \qquad (2.35c)$$
From (2.35b) and Note 2.9, it is clear why the total energy of a mechanical system can be shaped independently of the structure of the dissipation term. In fact, the amount

of dissipated energy depends on the joint momenta $p$, while regulation requires the energy function to be shaped in the $q$ direction. If $J_C(\xi) = 0$ and $G_C(\xi) = I_n$, then conditions (2.35) can be satisfied with
$$\frac{\partial S}{\partial p} = 0 \quad\text{and}\quad \frac{\partial S}{\partial q} = I_n$$
so that $C_i(q_i, \xi_i) = q_i - \xi_i + \kappa_i$, $\kappa_i \in \mathbb{R}$, $i = 1, \dots, n$, are Casimir functions for the closed-loop system. From Prop. 2.6, the resulting dynamics is then given by
$$\begin{aligned}
\begin{bmatrix} \dot q \\ \dot p \end{bmatrix} &=
\left( \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix} - \begin{bmatrix} 0 & 0 \\ 0 & D(q, p) \end{bmatrix} \right)
\begin{bmatrix} \partial H_d/\partial q \\ \partial H_d/\partial p \end{bmatrix} +
\begin{bmatrix} 0 \\ I_n \end{bmatrix} e \\
y &= \frac{\partial H_d}{\partial p}
\end{aligned}$$
with $e$ an external input and $H_d$ the desired energy function, equal to
$$H_d(q, p) = \frac{1}{2}p^T M^{-1}(q)\,p + V(q) + H_C(q_1, \dots, q_n)$$
Note that, in this case, the energy can be shaped only in its potential contribution, and that the convergence rate can be increased by adding artificial damping, that is by imposing $e = -K_D y$, with $K_D = K_D^T \geq 0$. Moreover, the PD plus gravity compensation controller can be easily obtained if
$$H_C(q) = -V(q) + \frac{1}{2}(q - q^\ast)^T K_P (q - q^\ast)$$
with $K_P = K_P^T > 0$.

Control via state-modulated source

The control by energy balancing discussed above revealed some intrinsic limitations when the problem of stabilizing a system in a configuration that requires a non-zero power flow is approached. In order to overcome this limitation, the control through invariants approach has been introduced, providing a characterization of the admissible dissipation for which energy-balancing PBC stabilization is possible. But this problem can also be approached in a different way. Since these systems cannot be stabilized by extracting a finite amount of energy from the (passive) controller $\Sigma_C$, it is possible to assume that the latter is an infinite source of energy, with (phd) model given by
$$\dot\xi = u_C \qquad y_C = \frac{\partial H_C}{\partial\xi} \qquad (2.36)$$
with
$$H_C(\xi) = \xi$$
the energy function. Since $H_C$ is not bounded from below, it can be deduced that system (2.36) is not passive. Moreover, in order to overcome the constraints that the classical feedback interconnection scheme (2.21) introduces, and of which conditions (2.28) are a consequence, it

65 2.2 Control by interconnection 49 PSfrag replacements Σ C 0 SGY Σ β( ) x Figure 2.3: Control as state-modulated source: Σ C is an infinite power source. is possible to interconnect the phd system (2.7) with (2.36) taking into account the system configuration (i.e. the state variable x), as described in Fig The result is a state modulate interconnection of the form: [ u(t) u C (t) ] [ = 0 β(x) β(x) 0 ] [ y(t) y C (t) ] which is clearly power conserving, (van der Schaft, 2000; Ortega et al., 2000). The resulting feedback system is given by the following phd system, with Hamiltonian H + H C : [ ] [ ] [ ] ẋ J(x) R(x) G(x)β(x) x H = ξ β T (x)g T (x) 0 ξ H C Note that, from (2.36), the x dynamics is given by ẋ = [J(x) R(x)] H x + G(x)β(x) that is system (2.7) with the static state feedback law u = β(x). Then, stability can be achieved by properly choosing the function β. Suppose that H d = H + H a is the desired closed-loop energy function: if it is possible to find β and H a such that the following PDE holds [J(x) R(x)] H a x then, for the closed-loop system the resulting dynamics is given by ẋ = [J(x) R(x)] H d x = G(x)β(x) (2.37) Energy shaping has been achieved without generating Casimir functions and without introducing hypothesis on the admissible dissipative effects in the plant. In other words, the applicability of this approach depends only on the solvability of (2.37). Consequently, it can be (in principle) applied also to deal with systems with infinite dissipation at the equilibrium. In Sec. 2.3, it is shown how these considerations can be extended and generalized. Example 2.6 (parallel RLC circuit). The applicability of the proposed control scheme to the stabilization of systems requiring an infinite amount of energy from the controller can be shown by considering again the parallel RLC circuit of Example 2.3. The problem reduces to find a solution of (2.37), which in this case becomes 1 H a R q + H a φ H a q = 0 = β(q, φ)
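A quick symbolic check of this first equation can be useful before moving on. The sketch below is not part of the thesis: it assumes the first equation reads (1/R) ∂H_a/∂q = ∂H_a/∂φ (the PDE (2.37) fixes the signs only up to an overall convention) and only requires SymPy. It verifies that the family H_a(q, φ) = Φ(Rq + φ) discussed next satisfies that equation identically.

import sympy as sp

# Symbolic sanity check (a sketch, not part of the thesis). The assumed form of the
# first equation is (1/R) dH_a/dq = dH_a/dphi; flip the signs if a different
# convention is used in (2.37).
R, q, phi = sp.symbols('R q phi', positive=True)
Phi = sp.Function('Phi')            # arbitrary smooth function, as in the text

H_a = Phi(R*q + phi)                # candidate solution H_a(q, phi)
residual = sp.diff(H_a, phi) - sp.diff(H_a, q) / R
print(sp.simplify(residual))        # prints 0: the first equation holds identically

# The second equation then returns the state feedback beta(q, phi) as a partial
# derivative of H_a (again up to the sign convention of (2.37)).
beta = sp.diff(H_a, q)
print(beta)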

66 50 Control of port Hamiltonian systems From the first equation, we have that H a (q, φ) = Φ(Rq + φ), with Φ : R R an arbitrary smooth function, while the second equation provides the control law. In order to have H d with a minimum in the desired equilibrium configuration x = [ q, φ ] T = [ Cu, L/Ru ] T, the function Φ can be chosen as follows Φ(Rq + φ) = K P 2 [(Rq + φ) (Rq + φ )] 2 Ru (Rq + φ) so that H d is quadratic in the increments: H d (x) = (x T x T) Clearly, x is a global minimum if 1 C + R2 K P RK P 1 RK P L K P 1 K P > (L + R 2 C) Moreover, the feedback law is linear and given by u = β(x) = K P [R (q q ) + φ φ ] 2.3 IDA PBC control technique (x x ) + κ In Sec , it has been shown that the PBC of a phd system can be successfully carried out if it is possible to find proper solutions of the PDE (2.37). In this case, it is possible to shape the energy function in order to introduce a (possibly) global minimum in the desired configuration. Note that the interconnection and damping matrices of the system remain unchanged. In this section, a generalization of this approach is presented. In particular, it is shown how to develop a state feedback action u = β(x) such that, together with the energy function, also the interconnection and damping structure of the system could be modified. In this way, it is possible to introduce (virtual) coupling between non-interacting part of the initial system and, on the other side, to incorporate further informations on the system in order to simplify the solution of the PDE, which is usually a not easy task. Consider the phd system (2.7). We aim to develop a state feedback action u = β(x) such that the resulting closed-loop dynamics is given by ẋ = [J d (x) R d (x)] H d x (2.38) where J d (x) = J d T (x) and R d (x) = R d T (x) 0 are the new interconnection and damping matrices. This is the reason why this approach is called interconnection and damping assignment (IDA) PBC, (van der Schaft, 2000; Ortega et al., 2000; Ortega et al., 2001). Then, the solution is given by the following PDE, of which (2.37) is a particular case: where [J(x) + J a (x) R(x) R a (x)] H a x = [J a(x) R a (x)] H x J a (x) := J d (x) J(x) R a (x) := R d (x) R(x) + G(x)β(x) (2.39) The following proposition describes how to chose the solutions of (2.39) in order to stabilize a configuration x, (Ortega et al., 2000).

67 2.3 IDA PBC control technique 51 Proposition 2.7. Consider the phd system (2.7) and desired equilibrium configuration x X. Assume that it is possible to find two functions β : X U and K : R n R m and two matrices J a (x) and R a (x) such that and that J(x) + J a (x) = [J(x) + J a (x)] T R(x) + R a (x) = [R(x) + R a (x)] T 0 [J(x) + J a (x) R(x) R a (x)] K(x) = [J a (x) R a (x)] H x Moreover, suppose that the following conditions hold: (i) integrability: K(x) is the gradient of a scalar function, that is (ii) equilibrium assignment: K(x) at x verifies K x (x) = T K x (x) K(x ) = H x (x ) (iii) Lyapunov stability: the Jacobian of K(x) at x satisfies the bound K x (x ) > 2 H x 2 (x ) + G(x)β(x) (2.40) Under these hypothesis, system (2.7) with the feedback law u = β(x) will result in the phd system (2.38), where H d (x) = H(x) + H a (x) and H a (x) = K(x) x Furthermore, x will be a (locally) stable equilibrium of the closed-loop. Asymptotic stability will be achieved if the largest invariant set under the closed-loop dynamics contained in { T H d x X B x R d(x) H } d x = 0 equals {x }. Proof. The proof is straightforward and can be found in (Ortega et al., 2000). Regarding the asymptotic stability of x, see also Note 2.4. Note It could be of some interest to relate the energy-balance PBC control discussed in Sec. 2.2 with the IDA-PBC control technique. Since H d (x) = H(x) + H a (x), from (2.38) we have that dh d dt = dh dt + dh a dt = T H d x R d(x) H d x = y T u T H x R(x) H x + dh a dt

68 52 Control of port Hamiltonian systems u PSfrag replacements g φ q m Figure 2.4: Magnetic levitation system. and then dh a dt [ = y T u 2 H x + H ] T a R(x) H a x x T H d x R d(x) H d x Consequently, if R a (x) = 0, and the natural damping R(x) satisfies the condition R(x) H a x (x) = 0 the the new PBC is an energy balancing PBC. Note that this condition is equivalent to (2.34). Note In general, it is not necessary to solve the PDE (2.39), or equivalently the (2.40), in terms of H a (x) or K(x). In fact, if J d (x) R d (x) is invertible, it is possible to express K(x) as a function of β(x) and then, by substitution in the integrability conditions of Prop. 2.7, a PDE in terms of the unknown feedback law β can be obtained. Example 2.7 (magnetic levitation system, (Ortega et al., 2000; Ortega et al., 2001)). Consider the system of Fig. 2.4, consisting of an iron ball in a vertical magnetic field created by an electromagnet. If φ is the flux, assume that φ = L(q)i, where i is the current in the inductance and L(q) is the value of inductance, function of the difference q between the actual and nominal position of the center of the ball. The force F generated by the electromagnet is given by F = 1 L 2 q i2 with L(q) = k 1 q, < q < 1 where k > 0 a constant depending on the number of coil turns. Moreover, denote by m the mass of the ball and by R the coil resistance. Then, it is possible to prove (Ortega et al., 2001) that

69 2.3 IDA PBC control technique 53 the phd model of the system is given by φ q ṗ = where the Hamiltonian is equal to } {{ } J R } {{ } R φ H q H p H H(φ, q, p) = 1 2k (1 q)φ m p2 + mgq }{{} G Denote by x = [ φ, q, p ] T the state variable. Once a desired position q is fixed, the equilibrium we want to stabilize is x := [ 2kmg, q, 0 ] T. This is not possible with the natural interconnection matrix of the system J. In fact, the PDE (2.40) implies that RK 1 (x) = β(x) [J R] K(x) = Gβ(x) K 2 (x) = 0 K 3 (x) = 0 and, consequently, that H a can only depend on q. Thus, the resulting Lyapunov function will be of the form H d (φ, q, p) = 1 2k (1 q)φ m p2 + mgq + H a (q) with Hessian given by 2 H d x 2 = 1 q k + 2 H a q 2 φ k 0 φ k m which is sign indefinite for every possible choice of H a. Then, even if the equilibrium assignment in x is possible, its asymptotic stability cannot be assured. The problem is the lack of coupling between electrical an mechanical subsystems since the interconnection matrix J only couples positions with velocities. In order to overcome this limitation, a coupling between φ and p is introduced by defining the following desired interconnection structure: 0 0 α J d = α 1 0 where α is a parameter to be assigned. If R a (x) = 0, (2.40) becomes RK 1 (x) = α m p + β(x) K 3 (x) = 0 αk 1 (x) K 2 (x) = α (1 q)φ k The first equation defines the feedback law, while the last one can be solve leading to H a (φ, q, p) = 1 6kα p k φ2 (q 1) + Φ (q + 1α ) φ

70 54 Control of port Hamiltonian systems where Φ is an arbitrary smooth function that has to be chosen in order to satisfy the equilibrium assignment in x and the Lyapunov stability for the closed-loop Hamiltonian H d (φ, q, p) = 1 6kα p m p2 + mgq + Φ (q + 1α ) φ If x := x x and α, b > 0, then a possible choice can be: Φ (q + 1α ) [ ( φ := mg q + 1 ) α φ + b ( q + 1 ) ] 2 2 α φ In conclusion, the control law u = R k (1 q)φ K P ( ) 1 α φ + q α m p R ( ) 1 α 2k φ2 mg asymptotically stabilizes the equilibrium configuration x for all K P, α > 0, with K P a new constant. The last term in the feedback law u contains a quadratic nonlinearity that could saturate the control action. In order to remove this contribution, in (Ortega et al., 2001) it is proposed to remove the damping from the electrical subsystem an to add it to the position coordinate. So, the following added damping matrix is suggested: R 0 0 R a = 0 R α where R α is some positive number. Applying again technique of Prop. 2.7, it is possible to show that the stabilization is possible by means of the following simplified control action: u = R ( ) 1 k (1 q)φ K P α φ ( α ) + q m + K P R α p 2.4 Energy-based variable-structure control Introduction In this section, it is shown how it is possible to merge energy based and variable structure control techniques for the passive control of phd systems. The key result is the definition of a novel control scheme that is able to conserve the phd structure of the system when constrained on a sub-manifold of the state space. The idea is to modify both the interconnection and damping structures of the system and to add a proper dynamical extension in such a way that the constraint can be related to some dynamical invariants of the resulting closed-loop system. Since part of the structure of this dynamical extension can be arbitrarily chosen, it is also possible to drive the state of the system on the constraint. The starting point is understanding how the internal structure of the system changes if some constraints on the state variable are introduced. Preliminary results in this sense can be found in (Nijmeijer and van der Schaft, 1991), where no dissipation effect is taken into account and the constraints are related to the system dynamics, while in (van der Schaft, 2000) the reduced dynamics of a mechanical system expressed in Hamiltonian form and subject to holonomic

71 2.4 Energy-based variable-structure control 55 constraints is studied. Furthermore, in (Sira-Ramirez, 1999) it is shown that the behavior of the system with respect to the constraints strictly depends on the energetic structure of the system itself. In particular, the fact that the system spontaneously moves toward the constraints can be interpreted as the results of dissipation effects and, on the other hand, the fact that the system diverges can be related to the presence of regenerative effects. What is not clear is when, given a port Hamiltonian system and a set of constraints defined on the state variables, the resulting constrained dynamics could be represented in port Hamiltonian form. Some results in this direction can be found in (Macchelli et al., 2002a), where a passive controller guaranteeing that the closed-loop dynamics satisfying the constraints is still representable in phd formalism is introduced. This is achieved by relating the constraints to some dynamical invariants (Casimir functions) of the closed-loop system. The proposed control scheme can be the starting point for the development of several control applications since, by properly specifying a set of parameters, the closed-loop system can be driven on the constraints in a passive way. Two possible applications are discussed in Sec and in Sec , both of them being the result of a merge of energy based and variable structure control techniques. In the first one, the constraints are interpreted as sliding surfaces and an energy-based sliding mode (Utkin, 1978) controller is presented, while the second one shows how it is possible to give further robustness to a classical control methodology for the regulation of port Hamiltonian system presented in Sec. 2.2 and in Sec These methodologies generally assure stability of the closed-loop system even in presence of model uncertainties, but without assuring that the desired configuration could be reached. The robustness also in terms of performances is achieved by introducing a variable structure controller, intrinsically robust with respect to model uncertainties that properly shapes the total energy of the system. In this case, simulation results with a 2-dof manipulator, (Macchelli et al., 2003), and experimental result with a 6-dof industrial manipulator are reported and discussed in order to validate the proposed approach Dynamics of a phd under constraints Consider the port Hamiltonian system with dissipation (2.7) and suppose that dim X = n and dim U dim U = m n. Denote by S i : X R, i = 1,..., m a set of functions defined on the state manifold and for each S i, define the following state sub-manifolds: S i,0 := {x X S i (x) = 0} S i,+ := {x X S i (x) > 0} S i, := {x X S i (x) < 0} Moreover, define If rank S(x) := [ S 1 (x) S m (x) ] T [ ] [ S S1 = rank x x S ] m = m x x X, then the state sub-manifold S(x) = 0, that is S 0 := m i=1 S i,0

72 56 Control of port Hamiltonian systems is (n m)-dimensional. If not differently specified, it will be assumed that the transversality condition (Sira-Ramirez, 1988) ( T ) S rank x G = m (2.41) holds x X. From a sliding mode point of view, this means that S 0 has locally relative degree one in X. Once the constraints are defined, we can introduce a state-feedback law that properly modifies the interconnection and damping matrices of system (2.7), and by interconnecting the resulting system with another phd system in a power-conserving manner, it is possible to obtain a new system such that, if constrained on S 0, it can still be described in the phd formalism. Consider the controller (2.20), with H C the arbitrary energy function, and the power conserving feedback interconnection (2.21). As discussed in Sec. 2.2, the resulting system is still phd and, if some conditions on the interconnection and damping matrices of (2.1) and (2.20) are satisfied, it is possible to relate the state variables of the controller to the state variables of the plant by using Casimir functions. In particular, we want that ξ i S i (x), i = 1,..., m is a set of Casimir function for the closed-loop system. Since there are m independent constraints, it is chosen dim X C = m. First of all, it is necessary to properly modify the interconnection and damping matrices of the system (2.7) by means of a state feedback action u = β(x). In particular, as suggested in Prop. 2.7 of Sec. 2.3 for the IDA-PBC design technique, assume that there exist matrices J a (x) = J T a (x), R a (x) = R T a (x) 0 and two function H a : X R and β : X U, such that the PDE (2.39) holds. By now, define H d (x) := H(x) + H a (x), J d (x) := J(x) + J a (x) and R d (x) := R(x) + R a (x); then the matrices J a and R a together with the parameters of the controller (2.20) will be chosen in such a way that the following conditions hold (compare with conditions (2.28)): T S x (x)j d(x) S x (x) = J C(ξ) (2.42a) R d (x) S (x) x = 0 (2.42b) R C (ξ) = 0 (2.42c) T S x (x)j d(x) = G C (ξ)g T (x) (2.42d) If in (2.7) we impose u = β(x) + u, from Prop 2.7 and (2.39), we obtain the following phd system with interconnection and damping matrices satisfying conditions (2.42): ẋ = [J d (x) R d (x)] H d (x) + G(x)u x y = G T (x) H (2.43) d x (x) Given the systems (2.20) and (2.43) and the power-conserving feedback interconnection (2.21), from Prop. 2.4 we have that each ξ i S i (x), i = 1,..., m is a Casimir function for the closed-loop system. In particular, under the hypothesis e c = 0 and e = 0, the reduced dynamics on the foliation induced by the Casimir functions will be given by ẋ = [J d R d ] x (H + H a) + [J d R d ] x H C(S 1,..., S m ) (2.44)

73 2.4 Energy-based variable-structure control 57 where the energy function H C : X C R of the controller (2.20) can be arbitrary. In particular, it can be chosen so that the sub-manifold S 0 can be reached. The system (2.44) naturally evolves in such a way that ξ i S i (x) = cons., i = 1,..., m, and from (2.42b) the resulting dynamics is given by: ẋ = [J d R d ] x (H + H S H C a) + J d (2.45) x S where [ H C S = HC S 1 H C S m With a proper choice of H C, the state of the initial system (2.7) can be steered on S 0. First of all, consider the 1-form (vector function) v : R m R m and rewrite (2.45) as: ] T Since ẋ = [J d R d ] x (H + H a) + J d S x v (2.46) V (x) := 1 2 ST (x)s(x) can be considered as a Lyapunov function that measures the state distance from the sub-manifold S 0, it is sufficient to prove that V (x) < 0 on the trajectory of the closed-loop system (2.45) for a proper choice of v. Clearly, V (x) = S T (x) T S x ẋ then, from (2.44), (2.42b) and (2.42d), we have [ V = S T T S x J d x (H + H a) S ] [ x v = S T G C G T x (H + H a) G T S ] x v Since the transversality condition (2.41) holds and under the further hypothesis that G C is nonsingular, the condition V (x) < 0 can be mapped in a set of inequalities involving the vector function v defined on the controller state space X C. If the 1-form v satisfying the previous set of inequalities is closed, or, equivalently, if it is a gradient of a scalar function, that is T v S = v S and the set {s = S(x) x X } R m is a contractile manifold, then an energy function H C for the controller (2.20) can be found such that H C S = v(s) guaranteeing the reaching of the sub-manifold S 0. In Fig. 2.5, a visual description of the behavior of this control scheme is presented. The state of the closed-loop system (x, ξ) X X C, evolves on the sub-manifold defined by the Casimir function ξ S(x) = cons.. A proper choice of the energy function H C of the controller will bring the state on S 0 X X C, that is the state x of the plant will be constrained on S(x) = 0. Moreover, it is possible to prove that the dynamics of the system (2.45) constrained on S 0 is described by a phd system. This fact can be seen as a generalization of the results presented in

74 58 Control of port Hamiltonian systems (x(t), ξ(t)) PSfrag replacements ξ X ξ S(x) = cons. x(t) Figure 2.5: Behavior of the proposed control scheme in the case of dimx = 2 and m = 1. S 0 (Nijmeijer and van der Schaft, 1991), where no dissipation term is present in the Hamiltonian model of the plant. It is important to notice that the (orthogonality) condition (2.42b) imposes some constraints on the structure of the dissipation term in (2.7), in relation to the sub-manifold S 0. In particular, the damping matrix R(x) is compatible with the sub-manifold S 0 if and only if it is possible to find a linear, symmetric and positive semi-definite operator R a (x) such that R d (x) := R(x) + R a (x) satisfies condition (2.42b). As in the energy balancing PBC discussed in the previous sections, it is the structure of the dissipation term that can limit the applicability of the proposed energy-based control approaches. The constrained dynamics of (2.45), or equivalently of (2.46), on S 0 can be seen as a zerodynamics on S(x) = 0. The equivalent control input v equiv that constrains the system (2.46) on S 0 can be calculated by imposing T S x ẋ = 0 and by substituting the x dynamics given by (2.46). Then, S(x) = 0 if or, from (2.42d), iff T S x J d [ x (H + H a) S x v equiv ] = 0 [ G C G T x (H + H a) S ] x v equiv = 0 Supposing G C non singular, the equivalent control is given by ( v equiv = G T S ) 1 G T x x (H + H a) (2.47) Finally, from (2.47) and (2.46), the constrained dynamics expression can be deduced: ẋ = [J d R d ] ( x (H + H S a) J d G T S ) 1 G T x x x (H + H a) It is easy to prove that the matrix ( S J d G T S ) 1 G T x x

75 2.4 Energy-based variable-structure control 59 is skew-symmetric. From (2.42a) and (2.42d) we have that J d S x = GG C T, G C T = ( G T S ) T J C x and consequently ( S J d G T S ) 1 ( G T = G G T S ) T ( J C G T S ) 1 G T := x x x x J with J skew symmetric since J C = J C T. As a consequence, if we define J d,0 (x) := J d (x) + J(x) and R d,0 (x) := R d (x), the constrained dynamics can be written in the following phd system form ẋ = [J d,0 (x) R d,0 (x)] x [H(x) + H a(x)] (2.48) Note The zero-dynamics (2.48) is given in terms of the evolution of an n-dimensional state. Since the system is constrained on the n m dimensional sub-manifold S 0, the achieved dynamics can be described by a system of order n m. In other words, a proper state coordinate transformation that explicitly show the n m order constrained dynamics can be defined. If [ ] ( ) S(x) Φ z = Φ(x) :=, with rank = n T (x) x is a well-defined coordinate transformation, the system dynamics (2.48) can be expressed in the new coordinates as ż = [ Jd,0 (z) R d,0 (z) ] [ H(z) + Ha (z) ] z where Since J d,0 = T Φ x J Φ d,0 x, it follows that with, in particular, J (1,1) d,0 = T S J (1,2) d,0 = J d,0 = Rd,0 = T Φ x R Φ d,0 x, H(x) = H[Φ(x)], H a (x) = H a [Φ(x)] J (1,1) d,0 (1,2) T J d,0 x J S d x J C = 0 [ T S x J d J C R (1,1) d,0 = T S x R S d x = 0 R (1,2) d,0 = T S x R d T x = 0 ( G T S x [ Φ S x = x, T ] x J (1,2) d,0 J (2,2) d,0 ) 1 G T ] Rd,0 = R (1,1) d,0 R (1,2) T d,0 R (1,2) d,0 R (2,2) d,0 [ T x = G C G T G C G T S ( G T S ) ] 1 G T T x x x = 0

76 60 Control of port Hamiltonian systems as can be deduced from (2.42). It follows that [ J d,0 = 0 m m 0 m (n m) ] 0 (n m) m R d,0 = [ 0 m m 0 m (n m) 0 (n m) m ] where indicates a quantity (in general) different from 0. These interconnection and damping matrices clearly define a dynamics of order n m. Now, suppose that m = n, that is the sub-manifold S 0 reduces to a point. We have seen that ) ] 1 G T where J d,0 = J d [ I n S ( x G T S x ( S G T S ) 1 G T = I n x x since, in this case, the matrix at left side is is idempotent and non-singular. Consequently, J d,0 = 0. Moreover, from (2.42) and since the constraints defined by the functions S 1,..., S m are independent, also R d,0 (x) = 0. In conclusion, the constrained dynamics becomes ẋ = 0, that is coherent with the fact that the control action tries to keep the state in a specific point Energy-based approach to sliding-mode With the dynamical extension (2.20), the system (2.7) is characterized by a set of Casimir functions, strictly related to the constraints S 1 (x),..., S m (x). By now, consider the S i, i = 1,..., m as sliding surfaces. Therefore, by properly choosing the energy function H C (S 1,..., S m ) of the controller (2.20) it is possible to constrain the state of the system (2.7) on S 0 in a passive way. The same behavior of classical sliding-mode controllers (and the same robustness properties) can be achieved if the function H C is characterized by a variable structure. Since this function is also an energy function, it must be at least of C 0 class. Moreover, the energy function of the controller H C has to be chosen in such a way that the state of the plant (2.7) reaches S 0, after a (finite) reaching phase, and that a sliding regime is possible on it. This means that the sliding surface has to become attractive for the closed-loop system. In the case of several sliding surfaces, the problem is more complex, also without introducing energy constraints into the controller design. This is due, in particular, to the couplings that are, in general, present in the dynamics of the system (Sira-Ramirez, 1988). If only one sliding surface is given, it is easy to find explicit solutions for control laws that assure the completion of the reaching phase and the stability of the sliding mode, also within the framework presented in this section. If m 1 independent sliding surfaces are given, the problem can be solved if an explicit solution of a set of 2m inequalities involving H C / S i, i = 1,..., m, can be found. Suppose that x X, i, j = 1,..., m so that the decoupling matrix L gj S i (x) = 0, if i j, and L gi S i (x) 0 (2.49) T S x G(x) = diag [L g 1 S 1 (x),..., L gm S m (x)]

77 2.4 Energy-based variable-structure control 61 is non singular and diagonal. Condition (2.49) requires a decoupling between input signals and sliding surfaces, that is each input signal can drive the state variable only on one sliding surface. It is important to point out that it is not strictly necessary to require, by means of condition (2.49), that the decoupling matrix is diagonal. In fact, since the transversality condition (2.41) has to hold, this matrix is non singular and then, by a feedback transformation u = γ(x)u, it is always possible to make it diagonal. Alternatively, if the input vector fields g i, i = 1,..., m, are commuting, then it is possible to choose a different set of constraints S j, j = 1,..., m, defining the same sliding surface S 0 in such a way that, again, the decoupling matrix becomes diagonal. Since we want to design a controller that is able to bring the state of the system on the intersection of m independent sliding surfaces, it is important to determine what is the behavior of the controlled system in relation to each surface S i (x) = 0. Given the closed-loop dynamics (2.45) and under the previous hypotheses, we have that [ LẋS i = T S i x ẋ = T S i J d x x (H + H T a) G G H ] C C S (2.50) The reaching phase can be completed and the sliding mode is possible on the sliding surface if the controller energy function H c is chosen such that { LẋS i (x) < 0, x S i,+ i = 1,..., m LẋS i (x) > 0, x S i, Let us consider a generic real-valued function H c : ζ R m R and, under the further hypothesis that G c is non-singular, introduce the following coordinate change: ζ = G 1 c ξ with ζ = [ζ 1,..., ζ m ] T. Finally, define the controller energy function as follows: H C [S 1 (x),..., S m (x)] := H C[G T C S(x)] where H c is still to be specified. Then, (2.50) becomes Moreover, assume that with LẋS i = T S i x J d x (H + H H C a) L gi S i (2.51) ζ i H C(ζ 1,..., ζ m ) := H C,i(ζ i ) := m H C,i(ζ i ) (2.52) i=1 { H,+ C,i (ζ i), if ζ i 0 H, C,i (ζ i), if ζ i < 0 and such that H,+ C,i (0) H, C,i (0), i = 1,..., m, in order to assure that the energy function of the controller is continuous.
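Before specializing the inequalities, it may help to see one concrete (and deliberately simple) choice of the branches H'±_{C,i}: taking each branch linear in ζ_i gives a continuous controller energy whose gradient is a pure switching action. The sketch below is only illustrative and not from the thesis (in particular, the gain names c_i are assumptions).

import numpy as np

def H_C_prime(zeta, c):
    """C^0 controller energy built from one +/- branch pair per surface, as in (2.52):
    H'_{C,i}(zeta_i) = c_i*zeta_i for zeta_i >= 0 and -c_i*zeta_i for zeta_i < 0."""
    zeta, c = np.asarray(zeta, float), np.asarray(c, float)
    return float(np.sum(c * np.abs(zeta)))

def grad_H_C_prime(zeta, c):
    """Gradient dH'_C/dzeta_i = c_i*sign(zeta_i): the switching term entering (2.51)."""
    zeta, c = np.asarray(zeta, float), np.asarray(c, float)
    return c * np.sign(zeta)

# If the gains c_i are chosen large enough (with the sign matched to L_{g_i}S_i),
# this choice can satisfy the reaching inequalities (2.53) on both sides of S_{i,0}.

Of course this is only one admissible shape: any pair of branches that is continuous at zero and satisfies (2.53) would do.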

78 62 Control of port Hamiltonian systems From (2.51) and (2.52), it follows that the sliding mode is possible on S(x) = 0 if, i = 1,..., m: H C,i L gi S i > T S i ζ i x J d x (H + H a), x S i,+ (2.53a) H C,i L gi S i < T S i ζ i x J d x (H + H a), x S i, (2.53b) If the set (2.53) of 2m inequalities in the 2m unknown functions H,+ C,1 ζ 1,..., H,+ C,m ζ m, H, C,1 ζ 1,..., H, C,m ζ m can be satisfied, then the system will reach the sub-manifold S 0 and then evolve on it according to the dynamics defined by (2.48). The dynamical extension (2.20) that tries to constrain the state of the plant over each sliding surface S i,0 behaves as a set of m nonlinear dynamical systems, each with equilibrium configuration (i.e. minimum of the corresponding energy function) on the corresponding submanifold Variable structure approach for passive control of robots Consider an n-dof fully-actuated mechanical system with generalized coordinates q Q R n. As discussed in Example 1.4, denote by p = M(q) q T Q R n the generalized momenta, with M(q) the inertia matrix. The phd representation of this system can be obtained by assuming in (2.7) dim(x ) = 2n and m = n, then defining x := [ q, p ] T, H(q, p) := 1 2 pt M 1 (q)p + V (q), where V (q) is the potential energy, and, finally [ J = 0 I n I n 0 ] [ 0 0, R = 0 D(q, p) ] [, G = 0 B(q) where D(q, p) = D T (q, p) 0 takes into account the dissipation effects. Moreover, assume rank G = n, i.e. B is non singular, since the mechanical system is fully actuated. These considerations lead to the following model [ ] [ ] [ ] [ ] q 0 I = n q H 0 + u ṗ I n D p H B (2.54) y = B T p H Suppose that q d Q is a desired configuration in the joint space, then define the constraints as S(q, p) := q q d (2.55) Clearly, stabilizing the system (2.54) in q d means driving the system state on the state submanifold S(q, p) = 0. Following the same procedure of Sec , assume in (2.39) [ ] 0 0 J a = 0 R a = H a (q, p) = V (q) (2.56) 0 K D ]

79 2.4 Energy-based variable-structure control 63 with K D = K D T 0 such that D + K D > 0, and in (2.42) J C (ξ) = 0 R C (ξ) = 0 G C (ξ) = B T (2.57) so that ξ i q i + q d,i, i = 1,..., m are Casimir functions for the closed-loop system resulting from the interconnection of (2.54) with the controller given by (2.20). It is possible to prove that the resulting feedback law that stabilizes the mechanical system is: [ V u = B 1 q H C q K D H p ] (2.58) The state feedback law (2.58) shapes the total energy by compensating the effect of the potential V (q) and by introducing a new potential H C (q). If H C is characterized by a minimum in the desired configuration q d, by introducing some dissipative effect with the controller, then this new minimum can be reached. Recall that the closed-loop energy function is clearly given by H cl (q, p) = H(q, p) + H a (q) + H C (q q d ) = H d (q, p) + H C (q q d ) (2.59) The most critical point in the implementation of this control technique is that a perfect compensation of the original potential contribution is necessary in order to have a steady state regulation error equal to zero. This compensation requires a perfect knowledge of all the robot s parameters; if this is not the case, the mechanical system stops, but not in the desired configuration. If in (2.20) it is assumed H C (q) = 1 2 (q q d) T K P (q q d ) (2.60) with K P = K T P > 0, then, in the case of perfect compensation of the gravity term, that is H a given as in (2.56), the feedback law becomes [ ] V u(q, p) = B 1 q K P (q q d ) K D q which is again the PD + gravity compensation (PD + g(q)) controller, see Examples 2.1 and 2.5. Moreover, the closed-loop energy function (2.59) becomes H cl (q, p) = 1 2 pt M 1 (q)p (q q d) T K P (q q d ) The PD + g(q) controller can be interpreted as a set on n linear springs acting in the joint space with center of stiffness in q d. In order to extend this controller to take into account the saturation of each actuator, some non-linearity in the energy function of the springs has to be introduced. From Sec , we known that a spring is an element storing potential energy and its behavior is described by (1.19) (see also Fig. 1.6). The input u is the deformation rate of the extreme of the spring, x is the state associated to the spring and E(x) is a lower bounded function representing the stored energy. The output y is the force applied by the spring. The simplest springs are the linear ones, i.e. springs whose energy function is quadratic: E(x) = 1 2 xt Kx (2.61)

80 64 Control of port Hamiltonian systems (a) Energy (b) Force Figure 2.6: Energy and force of non saturated (continuous) and saturated (dashed) spring. where K = K T > 0 represents the stiffness. The force applied by the springs turns out to be: f = E x = Kx In the case of mechanical systems (e.g. robots), each component of the force is applied to the plant by means of an actuator. Intuitively speaking, if the amount of stored energy increases too much, then the force generated by the springs, that is the force that the actuators have to apply, can be greater than the physical limits of the actuators themselves. If the robot is controlled by means of the PD + g(q) controller, this situation can happen if the initial error is sufficiently high. For simplicity, assume K = diag(k 1,..., k n ), that is the spring energy in (2.61), can be written as n E(x) = E i (x i ) = 1 n k i x 2 i 2 i=1 Then, suppose that each actuator is limited, i.e. f i,m f i f i,m, i = 1,..., n. Consider x M = (x 1,M,..., x n,m ) and x m = (x 1,m,..., x n,m ) such that f i,m = k i x i,m and f i,m = k i x i,m. The saturation of each actuator can be taken into account if the following energy function is introduced: E s (x) = E 1,s + + E n,s where i=1 [ f i,m xi 1 2 x i,m], if xi < x i,m 1 E i,s (x i ) = 2 k ix 2 i, if x i,m x i x i,m (2.62) [ f i,m xi 1 2 x ] i,m, if xi > x i,m Note that the passivity properties of the spring are preserved since the proposed energy function is C 1 and bounded from below. The energy function of a 1 dimensional spring and the relative force in function of the state are represented in Fig. 2.6, both for the non-saturated and saturated case. The saturation of each actuator can be taken into account in the (passive) control of a robot if in (2.20) it is assumed n H C (q) = E i,s (q i q i,d ) (2.63) i=1

81 2.4 Energy-based variable-structure control 65 where E i,s is defined as in (2.62), k i > 0 can be freely assigned, and f i,m, f i,m depend on the characteristics of the i-th actuator. Note the difference from (2.60). Since H C is characterized by a (global) minimum in q d, the control action (2.58) still assures the (global) stability of this configuration. The control scheme proposed in Sec with the assumptions (2.56), (2.57) and H C given by (2.63) can stabilize the system (2.54) even in presence of uncertainties and taking into account a (possible) saturation of the joint actuators. Since the transversality condition (2.41) cannot be verified with the assumption (2.55) on the constraint manifold, the stability of the closed-loop systems has to proved using a different approach. It is well known that dh cl dt = T H cl p (D + K D) H cl p < 0 (2.64) for H cl p = q 0. Since H cl is bounded from below, it is correct to assume q(t) = 0 for some t t; moreover, the possible configurations in which the robot stops are clearly given by the solutions of the following equation: H cl (q, p) q = 0 (2.65) p=0 or, in the case of perfect compensation of the potential V (q), by: H C q (q) = 0 So, it is clear why, if H C is characterized by a global minimum in q = q d, e.g. as in (2.60) or in (2.63), the robot reaches the desired configuration q d. The key point is that a perfect compensation of the original potential energy of the robot has to be implemented. If this is not the case, then some regulation errors will be present. Suppose that H a (q) = ˆV (q) in the PDE (2.39), or equivalently in (2.56), where ˆV (q) is an estimate of the potential term in H(q, p). Then, (2.58) becomes and H cl is now given by [ u = B 1 ˆV q H C q K D ] H p H cl (q, p) = 1 2 pt M 1 (q)p + H C (q) V (q) (2.66) where V (q) = ˆV (q) V (q). Since (2.64) holds, the final configurations the robot can assume are still solutions of (2.65), or, equivalently, of H C q = V q (2.67) Even if H C is characterized by a (global) minimum in q d, it is not sure that this configuration can be reached. In order to make the control law (2.66) robust also in terms of performances with respect to unknown parameters, H C, which is freely assignable, can be chosen with a variable structure. For example, assume H C (q) = 1 n k i [q i q i,d + sign(q i q i,d ) q i ] 2 (2.68) 2 i=1

82 66 Control of port Hamiltonian systems where k i > 0 and q i > 0, with i = 1,..., n. It is possible to prove that, if V q M < (2.69) and if q i, i = 1,..., n, are properly chosen, then the control law (2.66) with H C given by (2.68), can drive the system in q = q d. The proof is immediate in the case that a perfect compensation of the potential V (q) is possible, that is if V (q) = 0: in fact, in this situation, H cl is characterized by a global minimum in (q d, 0). Suppose that V (q) 0 and, in particular, that (2.69) holds. Furthermore, consider a generic initial condition (q 0, p 0 ) and define σ := [σ 1,..., σ n ], where { 1 if qi,0 q σ i = i,d 0 1 if q i,0 q i,d < 0 Clearly, σ only depends on the initial conditions. Assume that the control input u is given by (2.66), but with H C given by: H C (q) = 1 2 n k i (q i q i,d + σ i q i ) 2 (2.70) i=1 If, with a proper choice of q, this continuous control input can drive the robot in a final configuration q such that { q q i,d < 0 if q i,0 q i,d > 0 (σ i = 1) q (2.71) q i,d > 0 if q i,0 q i,d < 0 (σ i = 1) then an instant t such that q( t) q d = 0 has to exists. Consequently, the variable structure controller resulting from (2.66) and (2.68) makes the configuration q = q d globally attractive and, clearly, globally stable. The possible final configurations q are solution of (2.67), that is q i q i,d + σ i q i = 1 k i V q (q ) Since the values q i, i = 1,..., n have to be chosen according to (2.71), it can be deduced that q i > M k i, with i = 1,..., n (2.72) With this choice, the configuration q = q d is globally attractive and stable. In Fig. 2.7, the behavior of the proposed controller is presented: the initial error q i,0 q i,d is greater than 0, but q i is chosen in such a way that all the possible steady state configurations q satisfy q i q i,d < 0 if H C is given by (2.70). If the variable structure of the controller deriving from (2.68) is adopted, then the system is constrained in q d. The actuator saturation can be taken into account by introducing the saturated springs of which E i,s is the energy function, as reported in (2.62). Suppose that H C (q) = n E i,s [q i q i,d + sign(q i q i,d ) q i ] (2.73) i=1
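To make the saturated, variable-structure potential concrete, the following sketch (not the thesis code) implements the spring energy (2.62) and the choice (2.73); symmetric actuator limits f_{i,m} = f_{i,M} are assumed for brevity.

import numpy as np

def E_sat(x, k, f_max):
    """Saturated spring energy as in (2.62): quadratic while |k*x| <= f_max, linear beyond."""
    x_M = f_max / k                               # deformation at which the force saturates
    return np.where(np.abs(x) <= x_M,
                    0.5 * k * x**2,
                    f_max * (np.abs(x) - 0.5 * x_M))

def f_sat(x, k, f_max):
    """Force f = dE_sat/dx: a linear spring clipped at the actuator limit."""
    return np.clip(k * x, -f_max, f_max)

def H_C(q, q_d, q_bar, k, f_max):
    """Variable-structure, saturated controller potential as in (2.73)."""
    e = q - q_d + q_bar * np.sign(q - q_d)        # sign-shifted joint errors
    return np.sum(E_sat(e, k, f_max))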

Figure 2.7: Behavior of the variable structure PD + g(q) controller.
Figure 2.8: A planar 2-dof manipulator.
Then, the final configurations the robot can assume if u is given by (2.66) are the solutions of (2.67). If max(f i,m , f i,M ) > M, i = 1,..., n, with M the bound on the gradient of the potential mismatch introduced in (2.69), then in the steady-state configuration none of the actuators is in saturation. A consequence is that, if the offsets q̄ i , i = 1,..., n, are chosen according to (2.72), then the controller is able to regulate the robot to q d , which will be an asymptotically stable configuration. In conclusion, even in the presence of modeling uncertainties, the variable structure passive controller (2.66), with H C given by (2.68) or, if the saturation of the actuators is taken into account, by (2.73), is able to drive the system to the desired configuration.
Simulation results
In order to test the variable-structure PD + g(q) controller introduced in this section against the classical PD + gravity compensation regulator, some simulations have been carried out on a 2-dof planar manipulator, shown in Fig. 2.8. The main parameters of the manipulator are reported in Tab. 2.1. Note that the manipulator is subject to the gravity force, acting in the negative y direction. As a reference case, a simulation with the PD + g(q) compensation controller is reported in Fig. 2.9. A fixed set point q = (1.75, 0.1) has been assigned as desired goal for the tip of the manipulator, corresponding to joint positions q 1 = , q 2 = rad. In this case, the dynamic parameters are supposed to be perfectly known. As expected, the errors nicely tend to zero.

Table 2.1: Parameters of the considered manipulator — link lengths L 1 = L 2 = 1 m; centers of mass L g1 = L g2 = 0.5 m; link masses M 1 = M 2 = 20 kg; link inertias I 1 = I 2 = 5 kg m^2; viscous friction D 1 = D 2 = 0 N m s; gravity acceleration g = 9.81 m/s^2.
In this case, the control parameters are K p = diag(6000, 6000) and K d = diag(1100, 1100). The final errors are e x = , e y = (m), corresponding to e q1 = , e q2 = (deg). Results obtained with the proposed controller are reported in Fig. 2.10, with the same control parameters as in the previous case for the PD part, i.e. K p = diag(6000, 6000) and K d = diag(1100, 1100), while q̄ = diag(0.1, 0.1). Also in this case, the desired configuration is reached without errors. Note the behavior of the torques: after a transient, when the errors are null, a switching behavior takes place in order to constrain the state in the desired configuration (corresponding to the minimum of H C ). If the robot parameters are not perfectly known, the PD + gravity compensation scheme is not able to reach the desired configuration. This case is shown in Fig. 2.11(a), where, as a limit case, it is assumed that the parameters m 1 and m 2 are not known at all (i.e. the values m 1 = m 2 = 0 are assumed). As expected, the robot reaches a different final configuration and the final errors are not null: e x = , e y = (m) and e q1 = , e q2 = (deg). The corresponding simulation with the proposed controller is shown in Fig. 2.11(b): also in this case the desired configuration is reached without errors; the final errors are e x = e-006, e y = (m) and e q1 = , e q2 = (deg). Finally, the case of saturation has been considered: a saturation value of 800 Nm has been assumed for the actuators. Results obtained with the proposed controller (and no knowledge of the parameters m 1 and m 2 ) are reported in Fig. 2.12. Errors in this case are e x = e-006, e y = e-005 (m) and e q1 = , e q2 = (deg).
Experimental results
The variable structure control methodology discussed in this section has been implemented on the Comau SMART3 S robot. This is a standard industrial 6-degrees-of-freedom anthropomorphic manipulator with a non-spherical wrist. Each joint is actuated by a DC-brushless motor, and its angular position is measured by a resolver. The robot is controlled by means of a standard PC running a real-time variant of the popular desktop operating system Linux. In this way, the standard controller, the C3G 9000, is bypassed and provides only the low-level interface with the DC drives. More details are given in Appendix B. The experimental activity has been carried out with the aim of showing the performance of the proposed control scheme both in tracking and in regulation. The robot has been controlled in order to follow a pre-calculated trajectory in joint space: steady-state performance is evaluated once the set-point becomes constant. In Fig. 2.13, the classical PD + g(q) controller is compared with the variable structure controller in the case of perfect compensation (or, better, in the case of the most accurate gravity compensation possible).

Figure 2.9: Simulation results with PD + g(q) — (a) Cartesian motion (desired and real), (b) Cartesian and joint errors, (c) torques, (d) joint velocities.

Figure 2.10: Simulation results with variable structure PD + g(q) — (a) Cartesian and joint errors, (b) torques, (c) joint velocities, (d) energy (kinetic energy and energy stored in the springs).

Figure 2.11: Simulation results with partial knowledge of the mass parameters: errors — (a) PD + g(q), (b) VS PD + g(q).
Figure 2.12: Simulation results with partial knowledge of the mass parameters and saturation — (a) torques, (b) errors.

Figure 2.13: Experimental results with perfect gravity compensation — (a) PD + g(q) controller, (b) VS PD + g(q) controller; each panel shows setpoint, error and current of joint 5.
The performance of joint 5 is shown. Note that the classical controller is able to drive the steady-state error to zero (see Fig. 2.13(a)), while the variable structure controller (see Fig. 2.13(b)) is characterized by an oscillatory behavior around the zero steady-state error configuration. This is due to the fact that the maximum switching frequency is 1 kHz: at this rate, chattering is still present. In Fig. 2.14, the performance of the controller is presented in the case in which no gravity compensation is implemented. In this case, the PD + g(q) controller becomes a simple PD regulator acting on each joint. With this control scheme, the robot stops, but usually not in the desired configuration; in particular, the final configuration is given by (one of) the solutions of (2.67). As can be noticed in Fig. 2.14(a), the steady-state error is different from zero. This is not the case for the variable structure controller presented in this section: if no gravity compensation is implemented, it reduces to a PD controller plus a switching action proportional to the sign of the position error. Also in this case, the steady-state behavior is characterized by the presence of chattering, but the oscillations are around the zero-error configuration. A detailed view of the steady-state performance is given in Fig. 2.15. Note that the position error during the transient is different from zero: this makes sense, since the proposed controller is designed to guarantee zero error only with a constant set-point. The same consideration holds for the PD + g(q) controller, also in the case that perfect compensation is implemented.

Figure 2.14: Experimental results with no gravity compensation — (a) PD + g(q) controller, (b) VS PD + g(q) controller; setpoint, error and current of joint 5.
Figure 2.15: Detailed overview of the steady-state performance (no compensation): error of joint 5.
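The qualitative difference just observed — a pure PD regulator that stops away from the set-point when gravity is not compensated, versus the switching term that recovers zero steady-state error at the price of chattering — can be reproduced with a few lines of simulation. The 1-dof sketch below is only illustrative: it is not the thesis code, and all numerical values are assumptions loosely inspired by one link of Table 2.1.

import numpy as np

m, lg, J, g = 20.0, 0.5, 10.0, 9.81     # link mass, c.o.m. distance, joint inertia, gravity
kp, kd, q_bar = 6000.0, 1100.0, 0.1     # K_P, K_D and the offset of (2.68); note kp*q_bar > m*g*lg
q_d = 1.0                               # desired joint angle [rad]

def regulate(switching, T=10.0, dt=1e-3):
    q, w = 0.0, 0.0                     # joint position and velocity
    for _ in range(int(T / dt)):
        e = q - q_d + (q_bar * np.sign(q - q_d) if switching else 0.0)
        u = -kp * e - kd * w            # PD action, with NO gravity compensation
        w += (u - m * g * lg * np.cos(q)) / J * dt   # the true gravity torque acts on the plant
        q += w * dt
    return q - q_d

print("PD only     : steady-state error =", regulate(False))
print("switching PD: steady-state error =", regulate(True))

With these (assumed) numbers the condition k_P q̄ > m g L_g corresponding to (2.72) holds, so the plain PD settles a few milliradians away from q_d, while the switching version chatters in a much tighter band around zero.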


91 Chapter 3 Distributed port Hamiltonian systems In this chapter, the port Hamiltonian formulation of a class of distributed parameter system is presented and discussed. As in the finite dimensional case, the key point is the identification of a suitable mathematical tool that is able to describe the interconnection structure behind the physical model of the system. In particular, within the distributed parameter framework, it is necessary to understand how it is possible to relate the internal exchange of power between interacting energy domains with the power flow coming from the environment. This relation is described by means of an infinite dimensional Dirac structure associated with the exterior derivative and based on the Stokes theorem. In this way, it is possible to give an infinite dimensional port Hamiltonian representation of classical distributed parameter systems, as the transmission line, the vibrating string, Maxwell s equations or the Timoshenko beam. 3.1 Introduction The port Hamiltonian model of lumped parameter system takes inspiration from circuit analysis: the behavior of a physical system is the result of a certain network of atomic multi-port elements, each of them characterized by a particular energy property. The key point is the identification of the interconnection structure, mathematically described by a Dirac structure, generalization of the well-known Kirchoff laws. In this way, the variation of system total energy is related to the power exchanged with the environment and the dynamics is the result of internal power flows among different parts of the whole system. It has been shown that this approach can be fruitfully applied for modeling a wide class of physical (mechanical, electrical, hydraulic and chemical) systems and several control techniques, based on energy considerations, have been developed in order to solve the regulation problem. In some sense, it seems to be natural to extend the finite dimensional Hamiltonian formulation in order to deal with distributed parameter systems. Many results on integrability, existence

92 76 Distributed port Hamiltonian systems of solutions or stability and several applications have been proposed in the last decades: see, for example, (Swaters, 2000) for an application to fluid dynamics and (Olver, 1993) for a nice introduction and historical remarks. On the other hand, it is interesting to note that some problems regarding the treatment of boundary conditions are still open. In fact, most of the research activity has been focused on the study of infinite dimensional system characterized by an infinite spatial domain, for which the state variables tend to zero when the spatial variable tends to infinity (with respect to some norm), or on the analysis of infinite dimensional systems with zero boundary conditions (on the finite spatial domain). These are autonomous systems: no interaction, i.e. power exchange, with the environment is taken into account. From the modeling point of view, this is a strong limitation since it is not possible to study the effect of non-zero boundary conditions on the dynamics of the system. It is clear that it should be possible to study the effects of voltages and currents applied at both ends of a transmission line or of forces and velocities at both ends of a flexible beam. In other words, it is necessary to develop a generic mathematical representation of infinite dimensional system that is able to deal with non-zero power flow through the boundary. Only in this way it can make sense to speak about control application for infinite dimensional systems in Hamiltonian form. The controller, in fact, can act on the system only by properly modifying the boundary variables or, equivalently, by exchanging power with the (infinite dimensional) system. From a mathematical point of view, it is not immediate how a non-zero energy flow through the boundary can be incorporated in the classical distributed Hamiltonian framework. It is possible to overcome this limitation by introducing the notion of Dirac structure for distributed parameter systems. In this framework, the Dirac structure is defined on a certain space of differential forms on the spatial domain of the system and its boundary and the relation between variation of internal energy and power flow through the boundary relies on the Stokes theorem, (Maschke and van der Schaft, 2000). This is the reason why these structures are called Stokes Dirac structures. An immediate consequence is that the proposed (port) Hamiltonian formulation of distributed parameter systems (dph systems) becomes a generalization, to the infinite dimensional case, of the ideas discussed in the previous chapters regarding lumped parameters systems. This unified approach immediately leads to the extension to the infinite dimensional case of the energy-based control strategies developed for finite dimensional system and discussed in Chapter 2. Early results in this sense can be found in (Rodriguez et al., 2001; Macchelli and Melchiorri, 2003a; Macchelli and Melchiorri, 2003b) and will be deeply discussed in Chapter 4. In conclusion, this chapter is organized as follows. In Sec. 3.2, the Stokes Dirac structures are introduced while, in Sec. 3.3, the dph formulation of a wide class of infinite dimensional systems is presented. Then, in Sec. 3.4, some classical example of infinite dimensional systems that can be described with the proposed approach are presented. The Timoshenko model of the beam cannot be treated within the approach of Sec. 
3.3, but, following the same approach, its dph formulation is deduced in Sec. 3.5. 3.2 Stokes Dirac structures From an intuitive point of view, every infinite dimensional system is characterized by a spatial domain D, with boundary ∂D, and by the presence of (at least) two interacting energy domains,

93 3.2 Stokes Dirac structures 77 PSfrag replacements D D (f d, e d ) R E M C, I (f b, e b ) Figure 3.1: Infinite dimensional port Hamiltonian system with dissipation. as in Fig 3.1. In the case of Maxwell s equations, these domains are the electrical (E) and the magnetic (M) ones. The internal energy can be stored (C, I), that is continuously exchanged among these different domains, or dissipated (R) because of the presence of dissipative phenomena. Moreover, the system can exchange energy (power) with the environment through the boundary D or a distributed port, as in the case of Maxwell s equations where the current density directly affects the charge distribution within the domain D. The starting point in the definition of port Hamiltonian system is the identification of a suitable space of energy variables that, in the case of distributed parameter systems, is closely connected to the geometry of the spatial variables. Denote by D the spatial domain, an n- dimensional Riemannian manifold, and by D its (n 1)-dimensional boundary. Assume that the space of power variables is given by F E, where F is the linear space defined as F := Ω p (D) Ω q (D) Ω p (D) Ω q (D) Ω n p ( D) Ω p (D) Ω q (D) (3.1) with p and q positive integers satisfying p + q = n + 1 (3.2) The space F is the space of flows, that is the space of the rate of change of the energy variables. The reason of the choice (3.1) for F will be clear later. Its dual space E F, that is the space of efforts (co-energy variables), can be identified by the linear space E := Ω n p (D) Ω n q (D) Ω n p (D) Ω n q (D) Ω n q ( D) Ω n p (D) Ω n q (D) (3.3) In fact, given the (linear) space of k-forms Ω k (D), its dual space (Ω k (D)) can be naturally identified with Ω n k (D), while the dual of Ω k ( D) by Ω n k q ( D). This result is a consequence of the following proposition, (Maschke and van der Schaft, 2000). Proposition 3.1. Denote by D an n-dimensional Riemannian manifold. Then, (Ω k (D)) can be identified with Ω n k (D), replacing the duality product between Ω k (D) and (Ω k (D)) by β, α := α β (3.4) with α Ω k (D) and β Ω n k (D). D

94 78 Distributed port Hamiltonian systems Proof. Since D is a Riemannian manifold, from Prop. A.9 and A.10, we have that (α 1 α 2 ) := α 1 α 2, α 1, α 2 Ω k (D) D defines an inner product on Ω k (D). Then, by the Riesz representation theorem, for any β (Ω k (D)) it is possible to find a γ Ω k (D) such that β, α = (α γ), α Ω k (D) Denoting γ Ω n k (D) by β, the result is proved. If in (3.2), it is assumed that p = q = k, then 2k = n + 1 and necessarily n is odd. This is the symmetric case: the transmission line and Maxwell s equations can be treated under this hypothesis, while examples taken from fluid dynamics (Maschke and van der Schaft, 2001) and elasticity require the general case. Using the identification provided by Prop. 3.1, the bilinear form (+pairing operator), on F E can be immediately defined (see also Def. 1.5). If ω i = (f i E,s, f i M,s, f i E,r,f i M,r, f i b, f i E,d, f i M,d, e i E,s, e i M,s, e i E,r, e i M,r, e i b, ei E,d, ei M,d ) F E, i = 1, 2 then ω 1, ω 2 := ( f 1 E,s e 2 E,s + fm,s 1 e 2 M,s + fe,s 2 e 1 E,s + fm,s 2 e 1 M,s) D + ( f 1 E,r e 2 E,r + fm,r 1 e 2 M,r + fe,r 2 e 1 E,r + fm,r 2 e 1 M,r) D + ( f 1 b e 2 b + f b 2 b) e1 D ( + f 1 E,d e 2 E,d + f M,d 1 e2 M,d + f E,d 2 e1 E,d + f M,d 2 M,d) e1 D (3.5) The notation used in (3.5) and in the remaining part of this section can be explained as follows. Taking Maxwell s equations and electromagnetism as a paradigm for the generic class of distributed parameter systems that will be introduced later, the subscripts E and M stand for electrical and magnetic and correspond to the physical domains in interaction inside the spatial domain D. Moreover, the subscript s stands for stored, since it represents the energy continuously exchanged between the two physical domains, while the subscript r stands for resistance, since it represents the amount of electric or magnetic energy that is dissipated. Finally, b stands for boundary and d for distributed: the corresponding power variables are related to the energy exchanged by the systems through the boundary D or the distributed power port, i.e. the spatial domain D. As in the finite dimensional case, the port Hamiltonian representation of a given physical system can be deduced once the power variables belonging to F E are related so that the property of energy conservation is satisfied. In other words, it is necessary to define a proper Dirac structure. In the distributed parameter case, we speak about Stokes Dirac structures, given by a generalization of Def. 1.6.

Definition 3.1 (Stokes Dirac structure). Consider the space of power variables F × E, with F and E defined as in (3.1) and (3.3), and the symmetric bilinear form (3.5). A (constant) Stokes Dirac structure on F is a linear subspace D ⊂ F × E such that D = D^⊥, where the orthogonal complement is taken with respect to the +pairing operator ⟨⟨·,·⟩⟩.

Note 3.1. If
$$\omega = (f_{E,s}, f_{M,s}, f_{E,r}, f_{M,r}, f_b, f_{E,d}, f_{M,d}, e_{E,s}, e_{M,s}, e_{E,r}, e_{M,r}, e_b, e_{E,d}, e_{M,d}) \in \mathcal{D}$$
then, from Def. 3.1 and the general properties of a Dirac structure (see in particular Note 1.2), it is immediate that ⟨⟨ω, ω⟩⟩ = 0 and consequently that
$$\big\langle (e_{E,s}, e_{M,s}, e_{E,r}, e_{M,r}, e_b, e_{E,d}, e_{M,d}),\, (f_{E,s}, f_{M,s}, f_{E,r}, f_{M,r}, f_b, f_{E,d}, f_{M,d}) \big\rangle = 0$$
This relation expresses power conservation in the infinite dimensional case. From (3.5), it can be deduced that
$$\underbrace{\int_D (f_{E,s}\wedge e_{E,s} + f_{M,s}\wedge e_{M,s})}_{\text{internally stored power}} + \underbrace{\int_D (f_{E,r}\wedge e_{E,r} + f_{M,r}\wedge e_{M,r})}_{\text{internally dissipated power}} + \underbrace{\int_{\partial D} f_b\wedge e_b}_{\text{incoming power (boundary)}} + \underbrace{\int_D (f_{E,d}\wedge e_{E,d} + f_{M,d}\wedge e_{M,d})}_{\text{incoming power (distributed)}} = 0 \tag{3.6}$$
which expresses the balance between the incoming power (through the boundary or the distributed port) and the internal power that can be dissipated or stored (or, better, continuously transformed from one energy domain to the other).

In conclusion, given the space of power variables F × E, with F and E given by (3.1) and (3.3) respectively, assume that
$$\begin{aligned}
(f^i_{E,s}, f^i_{M,s}) &\in \Omega^p(D)\times\Omega^q(D) & (e^i_{E,s}, e^i_{M,s}) &\in \Omega^{n-p}(D)\times\Omega^{n-q}(D) \cong (\Omega^p(D))^*\times(\Omega^q(D))^*\\
(f^i_{E,r}, f^i_{M,r}) &\in \Omega^p(D)\times\Omega^q(D) & (e^i_{E,r}, e^i_{M,r}) &\in \Omega^{n-p}(D)\times\Omega^{n-q}(D) \cong (\Omega^p(D))^*\times(\Omega^q(D))^*\\
f^i_b &\in \Omega^{n-p}(\partial D) & e^i_b &\in \Omega^{n-q}(\partial D) \cong (\Omega^{n-p}(\partial D))^*\\
(f^i_{E,d}, f^i_{M,d}) &\in \Omega^p(D)\times\Omega^q(D) & (e^i_{E,d}, e^i_{M,d}) &\in \Omega^{n-p}(D)\times\Omega^{n-q}(D) \cong (\Omega^p(D))^*\times(\Omega^q(D))^*
\end{aligned} \tag{3.7}$$
Then, it is possible to define a (constant) Dirac structure over F × E by means of the following proposition.

96 80 Distributed port Hamiltonian systems Proposition 3.2. Consider F and E given in (3.1) and (3.3), with p and q satisfying (3.2), and the bilinear form, given in (3.5). Define the following linear subspace D if F E: D := {(f E,s, f M,s, f E,r, f M,r, f b, f E,d, f M,d, e E,s, e M,s, e E,r, e M,r, e b, e E,d, e M,d ) F E [ ] [ fe,s 0 ( 1) = r ] [ ] [ ] [ ] d ee,s fe,r fe,d G f M,s d 0 e r G M,s f d, M,r f M,d [ ] [ ] [ ] [ ] ee,r = G ee,s ee,d r, = G ee,s e M,r e M,s e d, M,d e M,s [ ] [ ] [ ] fb 0 ( 1) r+1 ee,s = D } 1 0 e M,s D e b (3.8) where r = pq n and D denotes the restriction on the boundary D. Furthermore, G r, G d : Ω p (D) Ω q (D) Ω p (D) Ω q (D) are linear mappings, with G r and G d their duals, satisfying [ ] [ ] [ ] [ ] ee,s fer, G e r = G ee,s fe,r r, M,s f M,r e M,s f M,r [ ] [ ] [ ] [ ] (3.9) ee,s fed, G e d = G ee,s fe,d M,s f d, M,d e M,s Then, D = D, that is D is a Dirac structure. Proof. The proof can be divided into two steps: in the first one, we prove that D D, while in the second one that D D. For simplicity, it is assumed that G r = G d = 0 and, then, ω F E if ω = (f E, f M, f b, e E, e M, e b ). (i) D D. Consider ω 1 D and any ω 2 D: then, it is shown that ω 1, ω 2 = 0. From (3.5) and (3.8), since ω 1 D, we have that ω 1, ω 2 [ = ( 1) r de 1 M e 2 E + ( 1) r de 2 M e 1 E + de 1 E e 2 M + de 2 E e 1 M] D + ( 1) r+1 D f M,d [ e 1 M D e 2 E D +e 2 M D e 1 E D ] From Prop. A.2, we have that de E e M = ( 1) q(p 1) e M de E, and then ω 1, ω 2 [ ] =( 1) r de 1 M e 2 E + ( 1) q(p 1) r e 1 M de 2 E D [ ] + ( 1) r de 2 M e 1 E + ( 1) q(p 1) r e 2 M de 1 E D + ( 1) r+1 [ e 1 M D e 2 E D +e 2 M D e 1 ] E D D Since, from (3.2), ( 1) q(p 1) r = ( 1) pq q pq+n = ( 1) p 1 and, from Prop. A.13, d(e M e E ) = de M e E + ( 1) p 1 e M de E, we have that ω 1, ω 2 =( 1) r [ d(e 1 M e 2 E) + d(e 2 M e 1 E) ] D + ( 1) r+1 D [ e 1 M D e 2 E D +e 2 M D e 1 E D ]

which is zero from the Stokes theorem, see Thm. A.14.

(ii) D^⊥ ⊂ D. Consider ω¹ ∈ D^⊥ and any ω² ∈ D: from ⟨⟨ω¹, ω²⟩⟩ = 0, we have to deduce that ω¹ ∈ D, that is, that it can be expressed as in (3.8). Since ω² ∈ D, from (3.5) and (3.8) we have that
$$0 = \int_D \big[f^1_E\wedge e^2_E + (-1)^r de^2_M\wedge e^1_E + f^1_M\wedge e^2_M + de^2_E\wedge e^1_M\big] + \int_{\partial D} \big[f^1_b\wedge e^2_E|_{\partial D} + (-1)^{r+1}\, e^2_M|_{\partial D}\wedge e^1_b\big]$$
From Prop. A.13 and equation (3.2), we have that
$$d(e_M\wedge e_E) = de_M\wedge e_E + (-1)^{p-1}\, e_M\wedge de_E$$
and then
$$(-1)^r\, d(e_M\wedge e_E) = (-1)^r\, de_M\wedge e_E + de_E\wedge e_M$$
so that
$$\begin{aligned}
0 = &\int_D \big[f^1_E\wedge e^2_E + (-1)^r d(e^2_M\wedge e^1_E) - de^1_E\wedge e^2_M\big]\\
+ &\int_D \big[f^1_M\wedge e^2_M + (-1)^r d(e^1_M\wedge e^2_E) + (-1)^{r+1} de^1_M\wedge e^2_E\big]\\
+ &\int_{\partial D} \big[f^1_b\wedge e^2_E|_{\partial D} + (-1)^{r+1}\, e^2_M|_{\partial D}\wedge e^1_b\big]
\end{aligned}$$
From the Stokes theorem (Thm. A.14), we can write that
$$0 = \int_D \big[\big(f^1_E - (-1)^r de^1_M\big)\wedge e^2_E + \big(f^1_M - de^1_E\big)\wedge e^2_M\big] + \int_{\partial D} \big[\big(f^1_b - (-1)^{r+1} e^1_M|_{\partial D}\big)\wedge e^2_E|_{\partial D} + (-1)^{r+1}\, e^2_M|_{\partial D}\wedge\big(e^1_b - e^1_E|_{\partial D}\big)\big]$$
which has to be zero for every ω² ∈ D. Consequently, it is necessary that
$$f^1_E = (-1)^r de^1_M \qquad f^1_M = de^1_E \qquad f^1_b = (-1)^{r+1}\, e^1_M|_{\partial D} \qquad e^1_b = e^1_E|_{\partial D}$$
which corresponds to (3.8) and implies that ω¹ ∈ D.

Note 3.2. The couple of relations (3.9) can be deduced by requiring that ⟨e, f⟩ = 0 for every (f, e) ∈ D: this is a necessary condition for D to be a Dirac structure.

Note 3.3. The compositional properties of the Dirac structure defined in Prop. 3.2 follow immediately. Indeed, consider two spatial domains D₁ and D₂ with boundaries ∂D₁ and ∂D₂ such that
$$\partial D_1 = \Gamma\cup\Gamma_1, \;\; \Gamma\cap\Gamma_1 = \emptyset \qquad\qquad \partial D_2 = \Gamma\cup\Gamma_2, \;\; \Gamma\cap\Gamma_2 = \emptyset \tag{3.10}$$
(that is, ∂D₁ and ∂D₂ have a subset Γ of boundary in common). Then, the Dirac structures D₁ and D₂ on D₁ and D₂ can be composed in order to obtain the Dirac structure D₁₂ on D₁₂ = D₁ ∪ D₂, with boundary Γ₁ ∪ Γ₂, if we equate f_b¹ on Γ with −f_b² on Γ, and e_b¹ on Γ with e_b² on Γ (common effort interconnection; see also Def. 1.7 and 1.8). The meaning is that the power flowing into D₁ via Γ should be equal to the power flowing out of D₂ via Γ.
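The only analytic ingredient of the proof is the Stokes theorem, which converts the interior exchange terms into the boundary terms of the Stokes Dirac structure. For n = 1 (so p = q = 1 and the efforts are functions), the identity ∫_D d(e_M ∧ e_E) = ∫_{∂D} e_M e_E reduces to the fundamental theorem of calculus. The short Python sketch below is not part of the thesis: it only checks this identity numerically for two arbitrarily chosen smooth efforts on D = [0, L]; all names and numerical values are illustrative.

```python
import numpy as np

# Numerical check of the integration-by-parts step used in the proof:
# for n = 1 the identity  int_D d(e_M ^ e_E) = int_{dD} e_M e_E  is just
# the fundamental theorem of calculus.  The efforts below are arbitrary
# smooth test functions on D = [0, L].
L = 2.0
z = np.linspace(0.0, L, 4001)
dz = z[1] - z[0]

e_E = np.exp(-z) * np.sin(3 * z)        # "electric" effort (0-form)
e_M = 1.0 + 0.5 * np.cos(2 * z)         # "magnetic" effort (0-form)

prod = e_M * e_E
deriv = np.gradient(prod, z, edge_order=2)             # coefficient of d(e_M e_E)
interior = np.sum(deriv[1:] + deriv[:-1]) * dz / 2     # trapezoidal quadrature
boundary = prod[-1] - prod[0]                          # evaluation on dD = {0, L}

print(f"int_D  d(e_M e_E) = {interior:+.8f}")
print(f"int_dD e_M e_E    = {boundary:+.8f}")          # agree up to O(dz^2)
```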

3.3 Distributed port Hamiltonian system

The definition of distributed port Hamiltonian systems easily follows from the proposed Dirac structure (3.8), by following a procedure that generalizes the approach adopted in the finite dimensional case (Chapter 1). Denote by D the spatial domain, an n-dimensional Riemannian manifold with boundary ∂D, and consider the Dirac structure D of Prop. 3.2. Denote by H the Hamiltonian density (i.e. the energy per volume element), that is the following function
$$\mathcal{H} : \Omega^p(D)\times\Omega^q(D)\times D \to \Omega^n(D) \tag{3.11}$$
Then, the total energy inside the domain D is given by
$$H := \int_D \mathcal{H} \tag{3.12}$$
Consider α_E, Δα_E ∈ Ω^p(D) and α_M, Δα_M ∈ Ω^q(D); then, if H is differentiable,
$$H(\alpha_E + \Delta\alpha_E, \alpha_M + \Delta\alpha_M) = H(\alpha_E, \alpha_M) + \int_D \big[\Delta\alpha_E\wedge\delta_E H + \Delta\alpha_M\wedge\delta_M H\big] + \text{higher order terms} \tag{3.13}$$
for some uniquely defined differential forms
$$\delta_E H \in (\Omega^p(D))^* \cong \Omega^{n-p}(D), \qquad \delta_M H \in (\Omega^q(D))^* \cong \Omega^{n-q}(D) \tag{3.14}$$
which correspond to the variational derivatives of the functional (3.12), (Dubrovin et al., 1992). Consequently, (δ_E H, δ_M H) ∈ Ω^{n−p}(D) × Ω^{n−q}(D) can be treated as the gradient of H evaluated at (α_E, α_M) ∈ Ω^p(D) × Ω^q(D). If (α_E, α_M)(t) ∈ Ω^p(D) × Ω^q(D), with t ∈ R, then
$$\frac{d\mathcal{H}}{dt}(\alpha_E(t), \alpha_M(t)) = \Big\langle \operatorname{grad}H, \Big(\frac{\partial\alpha_E}{\partial t}, \frac{\partial\alpha_M}{\partial t}\Big)\Big\rangle = \frac{\partial\alpha_E}{\partial t}\wedge\delta_E H + \frac{\partial\alpha_M}{\partial t}\wedge\delta_M H \tag{3.15}$$
or, for the total energy,
$$\frac{dH}{dt} = \int_D \Big(\frac{\partial\alpha_E}{\partial t}\wedge\delta_E H + \frac{\partial\alpha_M}{\partial t}\wedge\delta_M H\Big) \tag{3.16}$$
The differential forms ∂α_E/∂t and ∂α_M/∂t represent the generalized velocities of the energy storage elements in D, while δ_E H and δ_M H represent the generalized forces. So, the equations that express the connection with the Dirac structure D (3.8) are
$$f_{E,s} = -\frac{\partial\alpha_E}{\partial t}, \quad e_{E,s} = \delta_E H, \qquad f_{M,s} = -\frac{\partial\alpha_M}{\partial t}, \quad e_{M,s} = \delta_M H \tag{3.17}$$
where the minus sign is necessary in order to have a consistent energy flow description, as in (1.36). Moreover, the dissipation effects can be taken into account by properly terminating the ports

connected to the dissipation elements. Then, consider the maps R_E : Ω^{n−p}(D) → Ω^p(D) and R_M : Ω^{n−q}(D) → Ω^q(D) such that
$$\int_D R_E(e_{E,r})\wedge e_{E,r} \ge 0, \qquad \int_D R_M(e_{M,r})\wedge e_{M,r} \ge 0 \tag{3.18}$$
for every e_{E,r} ∈ Ω^{n−p}(D) and e_{M,r} ∈ Ω^{n−q}(D). Dissipation can be added by imposing that
$$f_{E,r} = -R_E(e_{E,r}) \qquad f_{M,r} = -R_M(e_{M,r}) \tag{3.19}$$
These considerations lead to the following definition.

Definition 3.2 (distributed port Hamiltonian system). The distributed parameter port Hamiltonian system with dissipation, with spatial domain D, state space Ω^p(D) × Ω^q(D), Dirac structure (3.8) and Hamiltonian (energy) density (3.11), is given by
$$\begin{bmatrix}\partial_t\alpha_E\\ \partial_t\alpha_M\end{bmatrix} = \begin{bmatrix}0 & (-1)^r d\\ d & 0\end{bmatrix}\begin{bmatrix}\delta_E H\\ \delta_M H\end{bmatrix} - G_r\begin{bmatrix}R_E & 0\\ 0 & R_M\end{bmatrix}G_r^*\begin{bmatrix}\delta_E H\\ \delta_M H\end{bmatrix} + G_d\begin{bmatrix}f_{E,d}\\ f_{M,d}\end{bmatrix}$$
$$\begin{bmatrix}e_{E,d}\\ e_{M,d}\end{bmatrix} = G_d^*\begin{bmatrix}\delta_E H\\ \delta_M H\end{bmatrix} \qquad \begin{bmatrix}f_b\\ e_b\end{bmatrix} = \begin{bmatrix}0 & (-1)^{r+1}\\ 1 & 0\end{bmatrix}\begin{bmatrix}\delta_E H\\ \delta_M H\end{bmatrix}\bigg|_{\partial D} \tag{3.20}$$
where r = pq − n, (f_b, e_b) ∈ Ω^{n−p}(∂D) × Ω^{n−q}(∂D) are the boundary variables, and (f_{E,d}, e_{E,d}) ∈ Ω^p(D) × Ω^{n−p}(D) and (f_{M,d}, e_{M,d}) ∈ Ω^q(D) × Ω^{n−q}(D) are the distributed port variables. Furthermore, G_r and G_d are linear mappings, with G_r* and G_d* their duals, that satisfy condition (3.9), and R_E : Ω^{n−p}(D) → Ω^p(D) and R_M : Ω^{n−q}(D) → Ω^q(D) are maps that satisfy condition (3.18).

Note 3.4. From (3.6) and (3.16), we deduce that the distributed port Hamiltonian system (3.20) satisfies the following energy balance equation
$$\begin{aligned}\frac{dH}{dt} &= -\int_D \big[R_E(\delta_E H)\wedge\delta_E H + R_M(\delta_M H)\wedge\delta_M H\big] + \int_D \big[f_{E,d}\wedge e_{E,d} + f_{M,d}\wedge e_{M,d}\big] + \int_{\partial D} f_b\wedge e_b\\ &\le \int_D \big[f_{E,d}\wedge e_{E,d} + f_{M,d}\wedge e_{M,d}\big] + \int_{\partial D} f_b\wedge e_b\end{aligned} \tag{3.21}$$
which expresses that, along its trajectories, the increase of internal stored energy is less than or, at most, equal to the power incoming through the boundary ∂D and the distributed port on D.

Note 3.5. The couple of relations (3.19) suggests a simple approach for the control of distributed parameter systems, based on the generalization of the damping injection methodology proposed in Sec. 2.2 and 2.3 for finite dimensional port Hamiltonian systems. Basically, it is necessary to close the boundary or the distributed port of the system (3.20) on a dissipative element or, in other words, to implement a controller that behaves as a dissipative element, imposing a relation similar to (3.19) on the port variables. See Fig. 3.2 for a possible application to a flexible beam. Intuitively speaking, if the total energy (3.12) is characterized by a (global) minimum in the desired configuration, then (global) stability can be achieved. It is important to point out that, in the infinite dimensional case, the procedure is, in some sense, a bit more involved, since it is necessary to take into account boundary conditions and some intrinsic difficulties in the stability proof, as will be discussed in depth in Chapter 4.

Figure 3.2: Control of flexible structure by damping injection.

Figure 3.3: Infinitesimal element (of length δz) of the transmission line, with series inductance L(z)δz and resistance R(z)δz, shunt capacitance C(z)δz and conductance G(z)δz.

3.4 Classical examples

Transmission line

In this example, the propagation of an electromagnetic wave within a transmission line is presented. Suppose that the spatial domain is given by the line segment D = [0, L] and that the transmission line is characterized by a distributed capacitance C(z), a distributed inductance L(z), a distributed resistance R(z) and a distributed conductance G(z), with z ∈ D (see Fig. 3.3). The energy variables are the charge per unit length q(z, t) and the magnetic flux per unit length φ(z, t). The electromagnetic energy density is given by
$$\mathcal{H}(q,\phi,z) := \frac{1}{2}\frac{q^2}{C} + \frac{1}{2}\frac{\phi^2}{L} \tag{3.22}$$
The co-energy variables are the voltage V(z) and the current I(z), where
$$V(z) = \frac{\partial\mathcal{H}}{\partial q} = \frac{q}{C} \qquad I(z) = \frac{\partial\mathcal{H}}{\partial\phi} = \frac{\phi}{L}$$
It is possible to prove that the transmission line equations are given by
$$\frac{\partial q}{\partial t} = -\Big(G(z)V(z) + \frac{\partial I(z)}{\partial z}\Big) \qquad \frac{\partial\phi}{\partial t} = -\Big(R(z)I(z) + \frac{\partial V(z)}{\partial z}\Big) \tag{3.23}$$
The transmission line equations can be written in distributed port Hamiltonian form as follows. The energy variables (the electric charge and the magnetic flux) are the 1-forms:
$$\alpha_E(t) = q(z,t)\,dz \in \Omega^1([0,L]) \qquad \alpha_M(t) = \phi(z,t)\,dz \in \Omega^1([0,L])$$

and the co-energy variables (voltage and current) are the following 0-forms (functions): V(z) ∈ Ω⁰([0, L]) and I(z) ∈ Ω⁰([0, L]). Energy and co-energy variables are related by means of the Hodge star operator as follows:
$$V = \frac{1}{C}\star\alpha_E \qquad I = \frac{1}{L}\star\alpha_M$$
with the Hodge star operator defined with respect to the Euclidean metric on D. The energy density becomes the following 1-form:
$$\mathcal{H}(\alpha_E, \alpha_M) = \frac{1}{2}\big(\alpha_E\wedge V + \alpha_M\wedge I\big)$$
and the Hamiltonian H the following functional:
$$H(\alpha_E, \alpha_M) = \int_D \mathcal{H}(\alpha_E, \alpha_M)$$
Consequently, V = δ_E H and I = δ_M H. The (internal) dissipation effects, due to the distributed resistance R(z) and conductance G(z), can be taken into account by terminating the dissipative ports as follows:
$$f_{E,r} = -G(z)\star e_{E,r} \qquad f_{M,r} = -R(z)\star e_{M,r}$$
Consequently, since it is possible to assume G_r in (3.20) as the identity map and G_d equal to zero, the distributed port Hamiltonian formulation of (3.23) becomes
$$\begin{cases}\partial_t\alpha_E = -d\,\delta_M H - G\star\delta_E H\\ \partial_t\alpha_M = -d\,\delta_E H - R\star\delta_M H\end{cases} \tag{3.24}$$
or, in matrix notation,
$$\begin{bmatrix}\partial_t\alpha_E\\ \partial_t\alpha_M\end{bmatrix} = \left(\begin{bmatrix}0 & -d\\ -d & 0\end{bmatrix} - \begin{bmatrix}G\star & 0\\ 0 & R\star\end{bmatrix}\right)\begin{bmatrix}\delta_E H\\ \delta_M H\end{bmatrix}$$
Since ∂D = ∂[0, L] = {0, L}, the power balance (3.21) can be written as
$$\frac{dH}{dt} \le V(0)I(0) - V(L)I(L)$$
where f_b = I|_{∂D} and e_b = V|_{∂D}. This relation shows that the variation of the energy stored within the transmission line is not larger than the incoming electromagnetic power at the boundary.

Maxwell's equations

In this section, the classical formulation of Maxwell's equations in terms of differential forms (Ingarden and Jamiolkowsky, 1985), rewritten within the framework of dph systems, is presented, (Maschke and van der Schaft, 2000). Denote by D ⊂ R³ a 3-dimensional manifold with boundary ∂D representing the spatial domain, and consider the electromagnetic field in D. The energy variables are the electric field induction D and the magnetic field induction B, both 2-forms, given by (z ∈ D):
$$\mathcal{D} = \tfrac{1}{2}D_{ij}(t,z)\,dz^i\wedge dz^j \qquad \mathcal{B} = \tfrac{1}{2}B_{ij}(t,z)\,dz^i\wedge dz^j$$

The co-energy variables are the electric field intensity E and the magnetic field intensity H, both 1-forms, given by:
$$E = E_i(t,z)\,dz^i \qquad H = H_i(t,z)\,dz^i$$
The relation between energy and co-energy variables is expressed by the following constitutive equations of the medium (or material equations):
$$\star\mathcal{D} = \epsilon E \qquad \star\mathcal{B} = \mu H$$
where the scalar functions ε(t, z) and μ(t, z) are the electric permittivity and the magnetic permeability. The Hamiltonian (total energy) can be defined as:
$$H = \frac{1}{2}\int_D \big[E\wedge\mathcal{D} + H\wedge\mathcal{B}\big]$$
Under the hypothesis that there is no current flow in the medium, Maxwell's equations can be written as
$$\begin{cases}\partial_t\mathcal{D} = dH\\ \partial_t\mathcal{B} = -dE\end{cases}$$
or, also taking into account boundary conditions, they can be written in the following dph form, as can be easily deduced from (3.20) since n = 3, p = 2 and q = 2:
$$\begin{bmatrix}\partial_t\mathcal{D}\\ \partial_t\mathcal{B}\end{bmatrix} = \begin{bmatrix}0 & d\\ -d & 0\end{bmatrix}\begin{bmatrix}\delta_{\mathcal{D}}H\\ \delta_{\mathcal{B}}H\end{bmatrix} \qquad \begin{bmatrix}f_b\\ e_b\end{bmatrix} = \begin{bmatrix}\delta_{\mathcal{B}}H|_{\partial D}\\ \delta_{\mathcal{D}}H|_{\partial D}\end{bmatrix} \tag{3.25}$$
In this case, the energy balance equation (3.21) becomes
$$\frac{dH}{dt} = \int_{\partial D} f_b\wedge e_b = \int_{\partial D} H\wedge E = -\int_{\partial D} E\wedge H$$
with E ∧ H ∈ Ω²(∂D) corresponding to the Poynting vector. In the case of a non-zero current density J ∈ Ω²(D), it is necessary to modify the first matrix equation of (3.25) as follows:
$$\begin{bmatrix}\partial_t\mathcal{D}\\ \partial_t\mathcal{B}\end{bmatrix} = \begin{bmatrix}0 & d\\ -d & 0\end{bmatrix}\begin{bmatrix}\delta_{\mathcal{D}}H\\ \delta_{\mathcal{B}}H\end{bmatrix} - \begin{bmatrix}I\\ 0\end{bmatrix}J$$
This model can be obtained from (3.20) by means of the distributed port, defining f_{E,d} = J and f_{M,d} = 0, thus leading to the following further couple of equations:
$$e_{E,d} = \delta_{\mathcal{D}}H = E \qquad e_{M,d} = \delta_{\mathcal{B}}H = H$$
In this case, the energy balance equation becomes
$$\frac{dH}{dt} = -\int_{\partial D} E\wedge H - \int_D E\wedge J$$
which is known as Poynting's theorem. In order to incorporate energy dissipation, we write J = J_r + J̄ and we impose Ohm's law ⋆J_r = σE, with σ(t, z) the specific conductivity of the medium.
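To make the boundary power balance of these examples concrete, the following Python sketch (not part of the thesis; the staggered-grid discretization, the function names and all numerical values are illustrative choices) integrates the lossless transmission line of Sec. 3.4.1, injects a current at z = 0, leaves z = L open, and compares the change of the discrete Hamiltonian with the time integral of the boundary power V(0)I(0) − V(L)I(L).

```python
import numpy as np

# Semi-discretized lossless transmission line on D = [0, 1] (Sec. 3.4.1):
# voltages V_j at N cell centres, currents I_k at the N-1 interior
# interfaces; the two boundary currents are the boundary flows f_b.
N, Lz = 200, 1.0
dz = Lz / N
C, Lind = 1.0, 1.0                       # distributed capacitance / inductance

q = np.exp(-200 * (np.linspace(dz/2, Lz - dz/2, N) - 0.3) ** 2)   # charge/length
phi = np.zeros(N - 1)                    # flux/length at interior interfaces

def I_left(t):                           # boundary current injected at z = 0
    return 0.2 * np.sin(20 * t)

def I_right(t):                          # open circuit at z = L
    return 0.0

def rhs(t, q, phi):
    V = q / C
    I = np.concatenate(([I_left(t)], phi / Lind, [I_right(t)]))
    dq = -(I[1:] - I[:-1]) / dz          # dq/dt   = -dI/dz
    dphi = -(V[1:] - V[:-1]) / dz        # dphi/dt = -dV/dz (interior only)
    return dq, dphi

def energy(q, phi):
    return dz * (0.5 * q**2 / C).sum() + dz * (0.5 * phi**2 / Lind).sum()

def boundary_power(t, q):
    V = q / C
    return V[0] * I_left(t) - V[-1] * I_right(t)    # V(0)I(0) - V(L)I(L)

dt, steps, t = 2e-4, 5000, 0.0
H0, supplied = energy(q, phi), 0.0
for _ in range(steps):
    supplied += 0.5 * dt * boundary_power(t, q)
    k1q, k1p = rhs(t, q, phi)
    k2q, k2p = rhs(t + dt/2, q + dt/2*k1q, phi + dt/2*k1p)
    k3q, k3p = rhs(t + dt/2, q + dt/2*k2q, phi + dt/2*k2p)
    k4q, k4p = rhs(t + dt, q + dt*k3q, phi + dt*k3p)
    q = q + dt/6 * (k1q + 2*k2q + 2*k3q + k4q)
    phi = phi + dt/6 * (k1p + 2*k2p + 2*k3p + k4p)
    t += dt
    supplied += 0.5 * dt * boundary_power(t, q)

print("H(T) - H(0)               =", energy(q, phi) - H0)
print("integrated boundary power =", supplied)
```

With this particular staggered discretization, the semi-discrete power balance dH/dt = V(0)I(0) − V(L)I(L) holds exactly, so the two printed numbers differ only by the (small) time-integration error.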

Vibrating string

Consider an elastic string subject to traction forces at its ends. Denote by D = [0, L] ⊂ R the spatial domain and by u(t, z), z ∈ D, the displacement of the string. The potential elastic energy is a function of the strain ε, given by the following 1-form:
$$\epsilon = \epsilon(t,z)\,dz = \frac{\partial u}{\partial z}\,dz$$
whose associated co-energy variable is the stress σ, the following 0-form:
$$\sigma = T\star\epsilon = T\,\frac{\partial u}{\partial z}$$
The potential energy results in the quadratic functional
$$U(\epsilon) = \frac{1}{2}\int_D \sigma\wedge\epsilon = \frac{1}{2}\int_D T\star\epsilon\wedge\epsilon = \frac{1}{2}\int_D T\Big(\frac{\partial u}{\partial z}\Big)^2 dz$$
and, clearly, σ = δ_ε U. The kinetic energy K is a function of the kinetic momentum, defined as the 1-form
$$p = p(t,z)\,dz = \mu\,\frac{\partial u}{\partial t}\,dz$$
and it is given by the quadratic functional
$$K(p) = \frac{1}{2}\int_D \frac{1}{\mu}\star p\wedge p$$
The associated co-energy variable is the velocity, given by the 0-form
$$v = \frac{1}{\mu}\star p = \delta_p K = \frac{\partial u}{\partial t}$$
Since the energy (Hamiltonian) is given by H(p, ε) = K(p) + U(ε), the dph formulation of the vibrating string can be deduced from (3.20) by assuming n = 1, p = q = 1, thus leading to the equations
$$\begin{bmatrix}\partial_t p\\ \partial_t\epsilon\end{bmatrix} = \begin{bmatrix}0 & d\\ d & 0\end{bmatrix}\begin{bmatrix}\delta_p H\\ \delta_\epsilon H\end{bmatrix} \qquad \begin{bmatrix}f_b\\ e_b\end{bmatrix} = \begin{bmatrix}\delta_\epsilon H|_{\partial D}\\ \delta_p H|_{\partial D}\end{bmatrix}$$
or, in a simpler form, to
$$\frac{\partial p}{\partial t} = \frac{\partial\sigma}{\partial z} = \frac{\partial}{\partial z}(T\epsilon) \qquad \frac{\partial\epsilon}{\partial t} = \frac{\partial v}{\partial z} = \frac{\partial}{\partial z}\Big(\frac{1}{\mu}p\Big) \qquad e_b = v|_{\{0,L\}}, \;\; f_b = \sigma|_{\{0,L\}}$$

3.5 dph model of the Timoshenko beam

Background. The classical formulation

According to the Timoshenko theory, the motion of a beam can be described by the following system of PDEs:
$$\begin{cases}\rho\,\dfrac{\partial^2 w}{\partial t^2} - K\,\dfrac{\partial^2 w}{\partial x^2} + K\,\dfrac{\partial\phi}{\partial x} = 0\\[2mm] I_\rho\,\dfrac{\partial^2\phi}{\partial t^2} - EI\,\dfrac{\partial^2\phi}{\partial x^2} + K\Big(\phi - \dfrac{\partial w}{\partial x}\Big) = 0\end{cases} \tag{3.26}$$

104 88 Distributed port Hamiltonian systems where t is the time and x is the spatial coordinate along the beam in its equilibrium position. Then, w(x, t) is the deflection of the beam from the equilibrium configuration, denoted by w 0, and φ(x, t) is the rotation of the beam s cross section due to bending. Assume that the motion takes place in the wx-plane and that x [0, L]. The coefficients ρ, I ρ, E and I, assumed to be constant, are the mass per unit length, the mass moment of inertia of the cross section, Young s modulus and the moment of inertia of the cross section, respectively. The coefficient K is equal to kga, where G is the modulus of elasticity in shear, A is the cross sectional area and k is a constant depending on the shape of the cross section. The mechanical energy is given by the following relation H(t) := 1 L ( ) w 2 ( ) φ 2 ρ + I ρ dx + 1 L ( K φ w ) 2 ( ) φ 2 + EI dx 2 0 t t 2 0 x x }{{}}{{} kinetic energy potential elastic energy (3.27) that points out that, as every infinite dimensional system, the beam is characterized by a spatial domain D := [0, L], with border D = {0, L}, and by the presence of two interactive energy domains, the kinetic and the potential elastic Timoshenko beam in dph form As discussed in Sec. 3.2, the starting point in the definition of dph system is the identification of a suitable space of power (or energy) variables, strictly related to the geometry of the system, and the definition of a Stokes Dirac structure on this space of power variables, in order to describe the internal and external interconnection of the system. The potential elastic energy in (3.27) is a function of the shear and of the bending, respectively given by the following 1-forms: ɛ t (t, x) = [ ] w (t, x) φ(t, x) dx x ɛ r (t, x) = φ (t, x)dx (3.28) x The associated co-energy variables are the shear force and the bending momentum, given by the following 0-forms (functions): σ t (t, x) = K [ ] w (t, x) φ(t, x) x σ r (t, x) = EI φ (t, x) x where the Hodge star operator is defined assuming the Euclidean metric. Besides, the kinetic energy is function of the translational and rotational momenta, i.e. of the following 1-form: p t (t, x) = ρ w (t, x)dx t p r(t, x) = I ρ φ t (t, x)dx (3.29) and the associated co-energy variables are the translational and rotational velocity, given by the following 0-forms (functions): v t (t, x) = w (t, x) t v r(t, x) = φ (t, x) t

105 3.5 dph model of the Timoshenko beam 89 Clearly, we have that p t, p r, ɛ t, ɛ r Ω 1 (D) and w, φ Ω 0 (D). Moreover, it is possible to re-write (3.28) and (3.29) as p t = ρ w t, p r = I ρ φ t, ɛ t = dw φ, ɛ r = dφ and the total energy (3.27) becomes the following (quadratic) functional: H(p t, p r, ɛ t, ɛ r ) = H(p t, p r, ɛ t, ɛ r ) = D ( 1 2 ρ p t p t + 1 ) (3.30) p r p r + Kɛ t ɛ t + EIɛ r ɛ r I ρ D with H : Ω 1 (D) Ω 1 (D) Ω 1 (D) the energy density. Consider a time function (p t, p r, ɛ t, ɛ r ) (t) Ω 1 (D) Ω 1 (D), with t R and evaluate the energy H along this trajectory. At any time t, the variation of internal energy, that is the power exchanged with the environment, is given by ( dh pt = dt D t δ p t H + p r t δ p r H + ɛ t t δ ɛ t H + ɛ ) r t δ ɛ r H [ ( ) pt 1 = t ρ p t + p ( ) r 1 t p r + ɛ t I ρ t (K ɛ t) + ɛ ] (3.31) r t (EI ɛ r) D The differential forms pt t, pr t, ɛt t and ɛr t are the time derivatives of the energy variables p t, p r, ɛ t, ɛ r and represent the generalized velocities (flows), while δ pt H, δ pr H, δ ɛt H, δ ɛr H are the variational derivative of the total energy (3.30). They are related to the rate of change of the stored energy and represent the generalized forces (efforts). With some simple manipulations, it is possible to write the Timoshenko model of the beam (3.26) as a function of the generalized velocities and forces. It can be obtained that: p t t p r t ɛ t t ɛ r t = d (K ɛ t ) = dδ ɛt H = d (EI ɛ r ) + (K ɛ t ) = dδ ɛr H + δ ɛt H ( ) ( ) 1 1 = d ρ p t p r = dδ pt H δ pr H ( ) I ρ 1 = d p r = dδ pr H I ρ (3.32) where fb t = 1 ρ p t = δ pt H D D e t b = K ɛ t D = δ ɛt H D fb r = 1 p r = δ pr H I D ρ D (3.33) e r b = EI ɛ r D = δ ɛr H D are the boundary conditions on translational/rotational velocities of the extremities of the beam (f t b, f r b ) Ω0 ( D) Ω 0 ( D) and on the applied forces and the torques (e t b, er b ) Ω0 ( D) Ω 0 ( D).

106 90 Distributed port Hamiltonian systems It is possible to prove that (3.32, 3.33) is dph formulation of the Timoshenko beam. The underlying Dirac structure can be revealed once the space of power variables is defined. The space of flows is given by F := Ω 1 (D) Ω 1 (D) Ω 1 (D) Ω 1 (D) Ω 0 ( D) Ω 0 ( D) }{{}}{{} generalized. velocities border flow (3.34) while, from Prop. 3.1, its dual, the space of efforts E, by E := Ω 0 (D) Ω 0 (D) Ω 0 (D) Ω 0 (D) Ω 0 ( D) Ω 0 ( D) }{{}}{{} generalized forces border effort (3.35) Thus, the duality product (3.4) and the +pairing operator (3.5) can be easily specialized in order to deal with the space of power variables F E defined by (3.34) and (3.35). Suppose that ( fpt, f pr, f ɛt, f ɛr, f t b, f r b, e p t, e pr, e ɛt, e ɛr, e t b, er b), (f i p t,..., f r,i b with i = 1, 2, then ), ei p t,..., e r,i b F E ( ept, e pr, e ɛt, e ɛr, e t b, ) ( er b, fpt, f pr, f ɛt, f ɛr, fb t, f b r ) := := (f pt e pt + f pr e pr + f ɛt e ɛt + f ɛr e ɛr ) + D ( ) ( ) fp 1 t,..., f r,1 b, e 1 p t,..., e r,1 b,, fp 2 t,..., f r,2 b, e 2 p t,..., e r,2 b := ( := f 1 pt e 2 p t + fp 2 t e 1 p t + fp 1 r e 2 p r + fp 2 r e 1 ) p r + D ( + f 1 ɛt e 2 ɛ t + fɛ 2 t e 1 ɛ t + fɛ 1 r e 2 ɛ r + fɛ 2 r e 1 ) ɛ r + D ( ) + f t,1 b e t,2 b + f t,2 b e t,1 b + f r,1 b e r,2 b + f r,2 b e r,1 b D D ( f t b e t b + f r b er b) (3.36) With the following proposition, the main result of this section is presented. Proposition 3.3 (the Timoshenko beam Dirac structure). Consider the space of power variables F E with F and E defined in (3.34) and (3.35) and the bilinear form (+pairing operator), given by (3.36). Define the following linear subspace D of F E: D = { ( f pt, f pr, f ɛt, f ɛr, fb t, f b r, e p t, e pr, e ɛt, e ɛr, e t b, b) er F E f pt 0 0 d 0 e pt f pr f ɛt = 0 0 d e pr d 0 0 e ɛt, = f ɛr 0 d 0 0 e ɛr f t b f r b e t b e r b e pt D e pr D e ɛt D e ɛr D } (3.37) where D denotes the restriction on the border of the (spatial) domain D. Then D = D, that is D is a Dirac structure.

107 3.5 dph model of the Timoshenko beam 91 Proof. Following the same procedure of Prop. 3.2, the proof can be divided in two steps. In the first one, it is verified that D D, while, in the second one, that D D. Suppose that ( ) ω i = fp i t, fp i r, fɛ i t, fɛ i r, f t,i b, f r,i b, ei p t, e i p r, e i ɛ t, e i ɛ r, e t,i b, er,i b F E with i = 1, 2; clearly, D D if ω 1, ω 2 D, it happens that ω 1, ω 2 = 0. From (3.36) and from the definition (3.37) of the Dirac structure D, we have ω 1, ω 2 [ = de 1 ɛt e 2 p t de 2 ɛ t e 1 p t + ( e 1 ɛ t de 1 ɛ r ) e 2 p r + ( e 2 ɛ t de 2 ɛ r ) e 1 ] p r + D [ + ( de 1 pt + e 1 p r ) e 2 ɛ t + ( de 2 p t + e 2 p r ) e 1 ɛ t de 1 p r e 2 ɛ r de 2 p r e 1 ] ɛ r + D [ ] + f t,1 b e t,2 b + f t,2 b e t,1 b + f r,1 b e r,2 b + f r,2 b e r,1 b D [ = (de 2 pt e 1 ɛ t + e 2 p t de 1 ɛ t ) + (de 1 p t e 2 ɛ t + e 1 p t de 2 ɛ t ) ] D [ (de 2 pr e 1 ɛ r + e 2 p r de 1 ɛ r ) + (de 1 p r e 2 ɛ r + e 1 p r de 2 ɛ r ) ] D [ ] + f t,1 b e t,2 b + f t,2 b e t,1 b + f r,1 b e r,2 b + f r,2 b e r,1 b D Since d(e 1 e 2 ) = de 1 e 2 + e 1 de 2, and, for the Stokes theorem (see Thm A.14), if α Ω 0 (D), then D dα = D α, we deduce that ω1, ω 2 = 0 and D D. In order to prove that D D, consider ω 2 D. From the definition of Dirac structure we have that ω 1 D, ω 1, ω 2 = 0. Since ω 1 D, from (3.37) we have: 0 = ω 1, ω 2 [ = de 1 ɛt e 2 p t + fp 2 t e 1 p t + ( e 1 ɛ t de 1 ɛ r ) e 2 p r + fp 2 r e 1 ] p r + D [ + ( de 1 pt + e 1 p r ) e 2 ɛ t + fɛ 2 t e 1 ɛ t de 1 p r e 2 ɛ r + fɛ 2 r e 1 ] ɛ r + D [ ] + e 1 p t D e t,2 b + f t,2 b e 1 ɛ t D +e 1 p r D e r,2 b + f r,2 b e 1 ɛ r D D From the Stokes Theorem and the properties of the exterior derivative, is is possible to obtain that [ 0 = e 1 ɛt (de 2 p t e 2 p r + fɛ 2 t ) + e 1 p t (fp 2 t + de 2 ɛ t ) ] + D [ + e 1 ɛr (de 2 p r + fɛ 2 r ) + e 1 p r (fp 2 r + e 2 ɛ t + de 2 ɛ r ) ] + D [ ] + e 1 ɛ t D ( e 2 p t D +f t,2 b ) + e 1 ɛ r D ( e 2 p r D +f r,2 b ) + D [ ] + e 1 p t D ( e 2 ɛ t D +e t,2 b ) + e1 p r D ( e 2 ɛ r D +e r,2 b ) D for every ω 1 D. We deduce that the previous relation holds if and only if ω 2 D. So, D D and this completes the proof.

Consider the total energy (3.30) as the Hamiltonian of the system, i.e. a (quadratic) functional of the energy variables p_t, p_r, ε_t and ε_r, bounded from below. The rate of change of these energy variables (generalized velocities) can be connected to the Dirac structure (3.37) by setting
$$f_{p_t} = -\frac{\partial p_t}{\partial t}, \quad f_{\epsilon_t} = -\frac{\partial\epsilon_t}{\partial t}, \quad f_{p_r} = -\frac{\partial p_r}{\partial t}, \quad f_{\epsilon_r} = -\frac{\partial\epsilon_r}{\partial t} \tag{3.38}$$
where the minus sign is necessary in order to have a consistent energy flow description. Moreover, the rate of change of the Hamiltonian with respect to the energy variables, that is its variational derivatives, can be related to the Dirac structure by setting
$$e_{p_t} = \delta_{p_t}H, \quad e_{\epsilon_t} = \delta_{\epsilon_t}H, \quad e_{p_r} = \delta_{p_r}H, \quad e_{\epsilon_r} = \delta_{\epsilon_r}H \tag{3.39}$$
From (3.38) and (3.39), it is possible to obtain the distributed Hamiltonian formulation with boundary energy flow of the Timoshenko beam. We give the following:

Definition 3.3 (dph model of the Timoshenko beam). The dph model of the Timoshenko beam with Dirac structure D (3.37) and Hamiltonian H (3.30) is given by
$$\frac{\partial}{\partial t}\begin{bmatrix}p_t\\ p_r\\ \epsilon_t\\ \epsilon_r\end{bmatrix} = \begin{bmatrix}0 & 0 & d & 0\\ 0 & 0 & 1 & d\\ d & -1 & 0 & 0\\ 0 & d & 0 & 0\end{bmatrix}\begin{bmatrix}\delta_{p_t}H\\ \delta_{p_r}H\\ \delta_{\epsilon_t}H\\ \delta_{\epsilon_r}H\end{bmatrix} \qquad \begin{bmatrix}f_b^t\\ f_b^r\\ e_b^t\\ e_b^r\end{bmatrix} = \begin{bmatrix}\delta_{p_t}H|_{\partial D}\\ \delta_{p_r}H|_{\partial D}\\ \delta_{\epsilon_t}H|_{\partial D}\\ \delta_{\epsilon_r}H|_{\partial D}\end{bmatrix} \tag{3.40}$$
Note that (3.40) coincides, obviously, with (3.32, 3.33).

Since the elements of every Dirac structure satisfy the power conserving property, we have that, given (f_{p_t}, ..., f_{ε_r}, e_{p_t}, ..., e_{ε_r}, f_b^t, ..., e_b^r) ∈ D,
$$\int_D \big(f_{p_t}\wedge e_{p_t} + f_{p_r}\wedge e_{p_r} + f_{\epsilon_t}\wedge e_{\epsilon_t} + f_{\epsilon_r}\wedge e_{\epsilon_r}\big) + \int_{\partial D}\big(f_b^t\wedge e_b^t + f_b^r\wedge e_b^r\big) = 0$$
and, consequently, from (3.31), (3.38) and (3.39), the following proposition can be proved.

Proposition 3.4 (energy balance). Consider the dph model of the Timoshenko beam (3.40). Then
$$\frac{dH}{dt}(t) = \int_{\partial D}\big(e_b^t f_b^t + e_b^r f_b^r\big) = \big[e_b^t(t,L)f_b^t(t,L) + e_b^r(t,L)f_b^r(t,L)\big] - \big[e_b^t(t,0)f_b^t(t,0) + e_b^r(t,0)f_b^r(t,0)\big] \tag{3.41}$$
or, in other words, the increase of the kinetic/potential energy of the beam is equal to the power supplied through the border.

Introducing the distributed port

Power exchange through the boundaries is not the only way by means of which the system can interact with the environment. Distributed control is a well-known control technique that can be fruitfully applied to flexible structures. The actuators are connected along the flexible structure and can act on the system applying forces/couples that are functions of the

109 3.5 dph model of the Timoshenko beam 93 configuration of the beam. The final result is that vibrations can be damped in a more efficient way than acting only on the border of the beam. In order to introduce a distributed port, the space of power variables F E defined in (3.34, 3.35) and the Dirac structure D defined in (3.37) have to be modified. The space of power variables becomes F d E d, where F d := F Ω 1 (D) Ω 1 (D) }{{} distrib. flow E d := E Ω 1 (D) Ω 1 (D) }{{} distrib. effort (3.42) The modified Dirac structure that incorporates the distributed port is given by the following: Proposition 3.5. Consider the space of power variables F d E d defined in (3.42) and the bilinear form (+pairing operator), given by (3.36). Define the following linear subspace D d of F d E d : D d = { ( f pt, f pr, f ɛt, f ɛr, fb t, f b r, f d t, f d r, e p t, e pr, e ɛt, e ɛr, e t b, er b, et d, d) er Fd E d f pt 0 0 d 0 e pt 1 0 [ ] f pr f ɛt = 0 0 d e pr d 0 0 e ɛt 0 1 f t d 0 0 f r, d f ɛr 0 d 0 0 e ɛr 0 0 [ ] [ ] e t d = , = } e r d e pt e pr e ɛt e ɛr f t b f r b e t b e r b e pt D e pr D e ɛt D e ɛr D (3.43) where D denotes the restriction on the border of the (spatial) domain D. Then D d = D d, that is D d is a Dirac structure. Proof. The proof is very similar to the one given for Prop The dph formulation of the Timoshenko beam with boundary and distributed energy flow can be obtained simply combining the Dirac structure D d (3.43) with (3.38) and (3.39). The resulting model is given in the following: Definition 3.4. The dph model of the Timoshenko beam with Dirac structure D d (3.43) and Hamiltonian H (3.30) is given by [ e t d t p t t p r t ɛ t t ɛ r e r d ] = = 0 0 d d d d 0 0 [ ] δ pt H δ pr H δ ɛt H δ ɛr H δ pt H δ pr H δ ɛt H δ ɛr H, + f t b f r b e t b e r b = [ f t d f r d ], δ pt H D δ pr H D δ ɛt H D δ ɛr H D (3.44)

Figure 3.4: Bond graph representation of the Timoshenko beam, with the boundary ports (f_b^{t,r}|_{x=0}, e_b^{t,r}|_{x=0}) and (f_b^{t,r}|_{x=L}, e_b^{t,r}|_{x=L}) and the distributed port (f_d, e_d).

The energy balance equation (3.41) becomes
$$\frac{dH}{dt} = \int_{\partial D}\big(f_b^t e_b^t + f_b^r e_b^r\big) + \int_D\big(f_d^t\wedge e_d^t + f_d^r\wedge e_d^r\big) \tag{3.45}$$
which expresses the fact that the variation of internal stored energy equals the power supplied to the system through the border and the distributed port. From a bond graph point of view, the Timoshenko beam can be described as in Fig. 3.4, where the power flows through the border, (f_b^{t,r}|_{x=0}, e_b^{t,r}|_{x=0}) and (f_b^{t,r}|_{x=L}, e_b^{t,r}|_{x=L}), and through the distributed port, (f_d, e_d), are shown along with their causality.
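As a numerical illustration of the energy balance (3.41)/(3.45), the sketch below (not part of the thesis; the staggered-grid discretization, the material data and the tip force are arbitrary illustrative choices) integrates a clamped-free Timoshenko beam driven by a force applied at x = L through its boundary port, and checks that the change of the discrete Hamiltonian equals the energy supplied through that port.

```python
import numpy as np

# Clamped-free Timoshenko beam (Sec. 3.5), energy-consistent staggered grid:
# velocities at the N+1 nodes, shear/bending strains at the N cells.
# All material data below are arbitrary illustrative values.
Lb, N = 1.0, 80
h = Lb / N
rho, Irho, K, EI = 1.0, 0.01, 5.0, 2.0

v_t = np.zeros(N + 1)            # translational velocity (node 0 clamped)
v_r = np.zeros(N + 1)            # rotational velocity
eps_t = np.zeros(N)              # shear strain   (cells)
eps_r = np.zeros(N)              # bending strain (cells)

def tip_force(t):                # boundary effort applied at x = L (smooth pulse)
    return 2.0 * np.exp(-((t - 0.1) / 0.05) ** 2)

def rhs(t, v_t, v_r, eps_t, eps_r):
    s_t, s_r = K * eps_t, EI * eps_r                    # shear force / bending moment
    dv_t, dv_r = np.zeros_like(v_t), np.zeros_like(v_r)
    # interior nodes 1..N-1
    dv_t[1:-1] = (s_t[1:] - s_t[:-1]) / (rho * h)
    dv_r[1:-1] = (s_r[1:] - s_r[:-1]) / (Irho * h) + (s_t[1:] + s_t[:-1]) / (2 * Irho)
    # free end x = L (half a cell of mass), driven by the boundary force
    dv_t[-1] = (tip_force(t) - s_t[-1]) / (rho * h / 2)
    dv_r[-1] = -s_r[-1] / (Irho * h / 2) + s_t[-1] / Irho
    # node 0 stays clamped: dv_t[0] = dv_r[0] = 0
    deps_t = (v_t[1:] - v_t[:-1]) / h - (v_r[1:] + v_r[:-1]) / 2
    deps_r = (v_r[1:] - v_r[:-1]) / h
    return dv_t, dv_r, deps_t, deps_r

def energy(v_t, v_r, eps_t, eps_r):
    w = np.ones(N + 1); w[0] = w[-1] = 0.5              # trapezoidal node weights
    kin = h * np.sum(w * (0.5 * rho * v_t**2 + 0.5 * Irho * v_r**2))
    pot = h * np.sum(0.5 * K * eps_t**2 + 0.5 * EI * eps_r**2)
    return kin + pot

def boundary_power(t, v_t):
    return tip_force(t) * v_t[-1]                       # e_b f_b at x = L (no tip torque)

dt, steps, t = 1e-4, 5000, 0.0
state = (v_t, v_r, eps_t, eps_r)
H0, supplied = energy(*state), 0.0
for _ in range(steps):
    supplied += 0.5 * dt * boundary_power(t, state[0])
    k1 = rhs(t, *state)
    k2 = rhs(t + dt/2, *(s + dt/2 * k for s, k in zip(state, k1)))
    k3 = rhs(t + dt/2, *(s + dt/2 * k for s, k in zip(state, k2)))
    k4 = rhs(t + dt,   *(s + dt   * k for s, k in zip(state, k3)))
    state = tuple(s + dt/6 * (a + 2*b + 2*c + d)
                  for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    t += dt
    supplied += 0.5 * dt * boundary_power(t, state[0])

print("H(T) - H(0)              =", energy(*state) - H0)
print("energy supplied at x = L =", supplied)
```

The two printed quantities agree to several digits: the semi-discrete scheme reproduces the balance of Prop. 3.4 exactly, and the residual mismatch comes only from the time discretization.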

Chapter 4

Control of distributed port Hamiltonian systems

In this chapter, the stabilization problem for distributed port Hamiltonian systems is discussed. Since the dph formulation of infinite dimensional systems is a relatively recent result in the literature, the corresponding control problem is still to be developed in detail. Basically, the idea is to extend the control methodologies already applied to finite dimensional systems, discussed in Chapter 2, in order to deal with the distributed parameter case. In Sec. 4.2, the definition of stability in the infinite dimensional framework is given, together with a couple of well-established stability theorems; then, in Sec. 4.3 and Sec. 4.4, the extension to the infinite dimensional case of the control by damping injection and of the control by interconnection and energy shaping methodologies is discussed. Furthermore, in Sec. 4.5, the stabilization of a complex mechanical system built around a flexible beam modeled according to the Timoshenko theory is presented and discussed.

4.1 Introduction

In the last years, stimulated by the applications arising from space exploration, automated manufacturing and other areas of technological development, the control of distributed parameter systems has been an active field of research for the control systems community. The problem is quite complex, since the systems to be controlled are described by a set of partial differential equations, the study of which is not an easy task.

Semi-group theory provides a large number of results on the analysis of systems of PDEs and, in particular, on the exponential stability of feedback laws for beam, wave and thermoelastic equations. In (Olver, 1993), the classical results on semi-group theory are presented and discussed, while in (Luo et al., 1999) some new results on the stability and feedback stabilization of infinite dimensional systems are reported. In particular, second order partial differential equations are discussed, such as the Euler-Bernoulli beam equation, which arises from the control of

several mechanical structures, such as flexible robot arms and large space structures.

A novel framework for modeling and controlling distributed parameter systems is represented by infinite dimensional port Hamiltonian systems, introduced in (Maschke and van der Schaft, 2000) and widely discussed in Chapter 3 from the modeling point of view. In this chapter, some results regarding control applications are presented. In some sense, it is more correct to speak about preliminary results in the control of distributed port Hamiltonian systems, since a general theory, like the one discussed in Chapter 2 for finite dimensional port Hamiltonian systems, is far from being developed.

When dealing with infinite dimensional systems, the main problem concerns the intrinsic difficulties related to the proof of stability of an equilibrium point, as will be pointed out in Sec. 4.2. It is important to underscore that this limitation does not depend on the particular approach adopted to study the problem. Even if a distributed parameter system is described within the port Hamiltonian framework, the stability proof of a certain control scheme will always be a difficult task, so the dph approach does not simplify the control task. Then, we can ask ourselves: from the control point of view, what is the advantage of a port Hamiltonian description of a distributed parameter system? It is the author's opinion that there are two main advantages in adopting the distributed port Hamiltonian framework:

the development of control schemes for infinite dimensional systems is usually based on energy considerations or, equivalently, the stability proof often relies on the properties of an energy-like functional, a generalization of the Lyapunov function to the distributed parameter case. Some examples, related to the stabilization of flexible beams, are in (Kim and Renardy, 1987; Taylor, 1997). The Hamiltonian description of a distributed parameter system is given in terms of the time evolution of energy variables depending on the variation of the total energy of the system. In this way, the energy of the system, which is generally a good Lyapunov function, appears explicitly in the mathematical model of the system itself and, consequently, both the design of the control law and the proof of its stability can be deduced and presented in a more intuitive (in some sense physical) and elegant way.

the port Hamiltonian formulation of distributed parameter systems originates from the idea that a system is the result of a network of atomic elements, each of them characterized by a particular energetic behavior, as in the finite dimensional case. So, the mathematical models originate from the same set of assumptions. This fact is important and allows us to go further: in particular, it is of great interest to understand whether the control schemes developed for finite dimensional port Hamiltonian systems could also be generalized in order to deal with distributed parameter ones. For example, suppose that the total energy (Hamiltonian) of the system is characterized by a minimum at the desired equilibrium configuration. This happens, for example, in the case of flexible beams, for which the zero-energy configuration corresponds to the undeformed beam. In this situation, the controller can be developed in order to behave as a dissipative element to be connected to the system at the boundary or along the distributed port.
The amount of dissipated power can be increased in order to reach quickly the configuration with minimum energy, (Macchelli and Melchiorri, 2003b). As in the finite dimensional case, it can happen that the minimum of the energy does not correspond to a desired configuration. Then, it is necessary to shape the energy function so that a new minimum is introduced. In other words, it is interesting to investigate if the control by interconnection and energy

shaping discussed in Sec. 2.2 can be generalized to the infinite dimensional case. More details in (Rodriguez et al., 2001; Macchelli and Melchiorri, 2003a; Macchelli and Melchiorri, 2003b).

This chapter is organized as follows. In Sec. 4.2, a short overview of the stability problem for distributed parameter systems is given, together with some simple but useful stability theorems. Then, in Sec. 4.3, the control by damping injection is generalized to the infinite dimensional case and an application to the boundary and distributed control of the Timoshenko beam is presented. In Sec. 4.4, a simple generalization of the control by interconnection and energy shaping to the infinite dimensional framework is discussed. In particular, the control scheme is developed in order to cope with a simple mixed finite and infinite dimensional port Hamiltonian system (m-ph system). Then, an application to the dynamical control of a Timoshenko beam is discussed in Sec. 4.5.

4.2 Stability for infinite dimensional systems

Arnold's first stability theorem approach

The idea behind the stability of distributed parameter systems remains the same as in the finite dimensional case: in order to have (local) asymptotic stability, the equilibrium solution should be a (local) strict extremum of a proper Lyapunov functional, that is the Hamiltonian in the case of distributed port Hamiltonian systems. In finite dimensions, the positive definiteness of the second differential of the closed-loop Hamiltonian function calculated at the equilibrium configuration is sufficient to show that the steady state solution corresponds to a strict extremum of the Hamiltonian, thus implying the asymptotic stability of the configuration itself. On the other hand, as pointed out in (Swaters, 2000), in infinite dimensions the same condition on the second variation of the Hamiltonian evaluated at the equilibrium is not, in general, sufficient to guarantee asymptotic stability. This is due to the fact that, when dealing with distributed parameter systems, it is necessary to specify the norm associated with the stability argument, because stability with respect to one norm does not necessarily imply stability with respect to another norm. This is a consequence of the fact that, unlike in finite dimensional vector spaces, not all norms are equivalent in infinite dimensions. In particular, in infinite dimensions, not every sequence on the unit ball admits a convergent subsequence with limit on the unit ball, that is, the unit ball of an infinite dimensional vector space is not compact.

Denote by X the configuration space of a distributed parameter system and by H : X → R the corresponding Hamiltonian. Furthermore, denote by ‖·‖ a norm on X. The definition of stability for infinite dimensional systems can be given as follows:

Definition 4.1 (Lyapunov stability for distributed param. systems). Denote by χ* ∈ X an equilibrium configuration for a distributed parameter system. Then, χ* is said to be stable in the sense of Lyapunov with respect to the norm ‖·‖ if, for every ε > 0 there exists δ_ε > 0 such that
$$\|\chi(0) - \chi^*\| < \delta_\epsilon \;\Longrightarrow\; \|\chi(t) - \chi^*\| < \epsilon$$
for all t > 0, where χ(0) ∈ X is the initial configuration of the system.

In 1965, Arnold proved a set of stability theorems. These theorems are known as Arnold's first and second stability theorems for linear and nonlinear infinite dimensional systems. For a

complete overview, refer to (Swaters, 2000), where these results are presented within the framework of Hamiltonian fluid dynamics. In this chapter, the stability result used in some of the proofs is known as Arnold's first nonlinear stability theorem. Instead of reporting the statement of the theorem, it is more useful to report the underlying mathematical procedure of its proof. This procedure, illustrated in (Swaters, 2000; Rodriguez et al., 2001), can be treated as a general method for verifying the stability of an equilibrium configuration of a generic nonlinear infinite dimensional system. The procedure showing the stability of a configuration χ* can be summarized by the following steps:

i) denote by H a candidate Lyapunov function which, in the case of dph systems, is the Hamiltonian function;

ii) show that the equilibrium point χ* satisfies the first order necessary condition for an extremum of the candidate Lyapunov function, that is, verify that
$$\delta H(\chi^*) = 0 \tag{4.1}$$
Furthermore, it is necessary to verify that, at the equilibrium point, the interconnection constraints (boundary conditions) are compatible with the first order condition (4.1);

iii) introduce the nonlinear functional
$$\mathcal{N}(\Delta\chi) := H(\chi^* + \Delta\chi) - H(\chi^*) \tag{4.2}$$
which is proportional to the second variation of H evaluated at χ*: by (4.1), its Taylor expansion about χ* contains no first order term;

iv) verify whether the functional (4.2) satisfies the following convexity condition with respect to a suitable norm ‖·‖ on X, in order to assure its positive definiteness:
$$\gamma_1\,\|\Delta\chi\| \le \mathcal{N}(\Delta\chi) \le \gamma_2\,\|\Delta\chi\|^\alpha \tag{4.3}$$
with α, γ₁, γ₂ > 0.

A couple of applications of this result will be presented in Sec. 4.4, in the case of a simple transmission line, and in Sec. 4.5, in order to prove the stabilization property of a dynamical controller for the Timoshenko beam.

La Salle's theorem approach

La Salle's theorem is a well-known result for the stability analysis of finite dimensional nonlinear systems. If in a domain about the equilibrium point we can find a Lyapunov function V(x) whose derivative along the trajectories of the system is negative semidefinite, and if we can establish that no trajectory can stay identically at points where $\dot V(x) = 0$ except at the equilibrium, then this configuration is asymptotically stable, (Khalil, 1996). This idea is also referred to as La Salle's invariance principle.

This result can be generalized in order to cope with distributed parameter systems. First of all, consider a distributed parameter system, e.g. the system (3.20) if the port Hamiltonian formalism is adopted, and denote by X the configuration space. Then, it is possible to define an operator Φ(t) : X → X such that (α_E, α_M)(t) = Φ(t)(α_E, α_M)(0) for each t ≥ 0. It is possible to prove that Φ(t) is a family of bounded and continuous operators which is called a C₀-semi-group on X, (Olver, 1993). The operator Φ gives the solutions of the set of PDEs (3.20) once initial and boundary conditions are specified. For every χ ∈ X, denote by
$$\gamma(\chi) := \bigcup_{t\ge 0}\Phi(t)\chi \tag{4.4}$$
the set of all the orbits of (3.20) through χ, and by
$$\omega(\chi) := \Big\{\bar\chi\in X \;\Big|\; \bar\chi = \lim_{n\to\infty}\Phi(t_n)\chi, \text{ with } t_n\to\infty \text{ as } n\to\infty\Big\}$$
the (possibly empty) ω-limit set of χ. It is possible to prove that ω(χ) is always positively invariant, i.e. Φ(t)ω(χ) ⊂ ω(χ), and closed. Moreover, from classical topological dynamics we take the following result, (Luo et al., 1999).

Theorem 4.1. If χ ∈ X and γ(χ) is precompact¹, then ω(χ) is nonempty, compact and connected. Moreover,
$$\lim_{t\to\infty} d(\Phi(t)\chi, \omega(\chi)) = 0$$
where, given χ̄ ∈ X and Ω ⊂ X, d(χ̄, Ω) denotes the distance from χ̄ to Ω, that is
$$d(\bar\chi, \Omega) = \inf_{\omega\in\Omega}\|\bar\chi - \omega\|$$

This theorem characterizes the asymptotic behavior of distributed parameter systems once the ω-limit set is calculated. Based on this result, it is possible to state La Salle's theorem.

Theorem 4.2 (La Salle's theorem). Denote by H a continuous Lyapunov function for the system (3.20), that is for Φ(t), and by B the largest invariant subset of
$$\big\{\chi\in X \;\big|\; \dot H(\chi) = 0\big\}$$
that is, Φ(t)B = B for all t ≥ 0. If χ ∈ X and γ(χ) is precompact, then
$$\lim_{t\to\infty} d(\Phi(t)\chi, B) = 0$$

An immediate consequence is expressed by the following corollary (compare with Prop. 2.2 and Note 2.4).

Corollary 4.3. Consider a distributed parameter system and denote by χ* an equilibrium point and by H a candidate Lyapunov function (the Hamiltonian in the case of dph systems). If the largest invariant subset of
$$\big\{\chi\in X \;\big|\; \dot H(\chi) = 0\big\}$$
equals {χ*}, then χ* is asymptotically stable.

¹ See (Curtain and Zwart, 1995; Luo et al., 1999).
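The mechanism exploited in Corollary 4.3 is the same invariance argument used for finite dimensional systems, and it is easiest to visualize there. The sketch below (not from the thesis; an elementary finite dimensional example with purely illustrative values) integrates a mass-spring-damper whose energy rate vanishes on the whole set {v = 0}; since the only invariant subset of that set is the origin, the energy nevertheless decays to zero.

```python
import numpy as np

# Invariance-principle illustration on a mass-spring-damper: H is the total
# energy, dH/dt = -c*v**2 <= 0 vanishes on the whole set {v = 0}, but the
# largest invariant subset of that set is the origin, which is therefore
# asymptotically stable.  All numerical values are illustrative.
m, k, c = 1.0, 4.0, 0.5

def f(x):
    q, v = x
    return np.array([v, -(k * q + c * v) / m])

H = lambda x: 0.5 * m * x[1]**2 + 0.5 * k * x[0]**2

x, dt = np.array([1.0, 0.0]), 1e-3
for step in range(1, 20001):
    k1 = f(x); k2 = f(x + dt/2*k1); k3 = f(x + dt/2*k2); k4 = f(x + dt*k3)
    x = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    if step % 5000 == 0:
        print(f"t = {step*dt:4.1f}   H = {H(x):.6f}   x = {x}")
# H decreases towards zero even though dH/dt vanishes every time v = 0:
# the trajectory cannot remain in {v = 0} away from the equilibrium.
```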

116 100 Control of distributed port Hamiltonian systems 4.3 Control by damping injection Basic results Consider the distributed port Hamiltonian system (3.20) for which (f E,d, f M,d, e E,d, e M,d ) and (f b, e b ) are the power conjugated port variables, along the distributed port D and on the boundary D of the spatial domain respectively. Denote by χ the state of the dph system and by χ a desired equilibrium configuration. As in finite dimensions, if the energy function (Hamiltonian) H of the system is characterized by a minimum in χ, then it is possible to drive the system in the desired configuration by interconnecting a controller, that behaves as a dissipative element, to the plant. If the controller is interconnected on the boundary of the spatial domain, we can speak about boundary control of the distributed parameter system (more precisely, about damping injection through the boundary). If the controller is interconnected along the distributed port, we can speak about distributed control of the infinite dimensional system (distributed damping injection). Consider the maps R E,d : Ω n p (D) D Ω p (D) and R M,d : Ω n q (D) D Ω q (D) and suppose that it is possible to find D D such that R E,d (, x) = R M,d (, x) = 0 if x D \ D. Furthermore, suppose that R E,d (e E,d ) e E,d 0 and R M,d (e M,d ) e M,d 0 D for every e E,d Ω p (D) and e M,d Ω q (D). In the same way, consider the map R b : Ω n q ( D) D Ω n p ( D) and suppose that it is possible to find D D such that R b (, x) = 0 if x D \ D and that R b (e b ) e b 0 D Distributed dissipation can be added if the controller can impose the following relations on effort and flows on D: D f E,d = R E,d (e E,d ) on D E and f M,d = R M,d (e M,d ) on D M (4.5) while boundary damping injection is introduced if f b = R b (e b ) on D (4.6) Suppose that the power flow through D \ D and D \ D is equal to zero or, equivalently, that f E,d = f M,d = 0 on D \ D and e b, f b = 0 on D From the energy balance equation (3.21), we have that dh dt [R E,d (e E,d ) e E,d + R M,d (e M,d ) e M,d ] D D R b (e b ) e b 0

Consequently, the energy function is non-increasing along system trajectories and it reaches a steady state configuration when
$$\begin{bmatrix}R_{E,d} & 0\\ 0 & R_{M,d}\end{bmatrix}G_d^*\begin{bmatrix}\delta_E H\\ \delta_M H\end{bmatrix} = 0 \qquad\text{and}\qquad R_b(e_b) = 0 \tag{4.7}$$
on the subsets of D and of ∂D where the distributed and the boundary dissipation act, respectively. Denote by B the set of configurations χ compatible with the relations (4.7). Consequently, based on Corollary 4.3, it is possible to state the following proposition.

Proposition 4.4. Consider the dph system (3.20) and the control schemes (4.5) and (4.6). If the largest invariant subset of
$$\big\{\chi \;\big|\; \dot H(\chi) = 0\big\}\cap B$$
equals {χ*}, then the configuration χ* is asymptotically stable.

Proof. The proof follows immediately from the La Salle invariance principle. Note the similarities with the result discussed in Note 2.4 for generic finite dimensional passive systems and with the statement of the corresponding proposition in Chapter 2.

Control of the Timoshenko beam by damping injection

In this section, some considerations about control by damping injection applied to the Timoshenko beam are presented. In order to be as general as possible, consider the dph formulation of the Timoshenko beam with distributed port (3.44). The energy functional (3.30) assumes its minimum in the zero configuration, i.e. when
$$p_t = 0, \quad p_r = 0, \quad \epsilon_t = 0 \;\text{ and }\; \epsilon_r = 0 \tag{4.8}$$
or, equivalently, when
$$w(t,x) = \alpha x + d \qquad \phi(t,x) = \alpha \tag{4.9}$$
where the constants α and d are determined by the boundary conditions on w and φ. In (4.9), α represents the rotation angle of the beam around the point x = 0, while d is the vertical displacement in x = 0. If some dissipation effect is introduced by means of a controller, it is possible to drive the state of the beam to the configuration where the (open loop) energy functional (3.30) assumes its minimum. As discussed in the previous section, the controller can interact with the system through the border and/or the distributed port, and the energy dissipation can be introduced by terminating these ports with a dissipative element, i.e. by a generalized impedance. Clearly, it is the control algorithm that, in some sense, simulates the desired impedance. In Fig. 4.1, the interconnection of the Timoshenko beam (see Fig. 3.4 for its bond graph representation) with a distributed and a boundary controller (in x = L) is presented.

In order to simplify some stability proofs that will be presented in the remaining part of this section, it is important to characterize the behavior of the Timoshenko beam equations when the energy function becomes constant and when the boundary conditions are equal to zero. We give the following important remark, (Curtain and Zwart, 1995; Luo et al., 1999).

Figure 4.1: Control by damping injection of a flexible beam.

Remark 4.1. Consider the dph model of the Timoshenko beam (3.40). The only invariant solution compatible with Ḣ = 0 and with the boundary conditions
$$\begin{cases}f_b^t(0) = f_b^r(0) = 0\\ e_b^t(L) = e_b^r(L) = 0\end{cases} \qquad\text{or}\qquad \begin{cases}f_b^t(L) = f_b^r(L) = 0\\ e_b^t(0) = e_b^r(0) = 0\end{cases}$$
is the zero solution (4.8).

Note 4.1. More precisely, Remark 4.1 should be extended in order to contain also information about the observability of the Timoshenko beam model, as discussed in (Curtain and Zwart, 1995). These conditions can be interpreted as a generalization of Def. 2.8 to the infinite dimensional case, and the control by damping injection for dph systems as an extension of the corresponding finite dimensional result.

Boundary control

Suppose that a finite dimensional controller can be interconnected to the beam in x = L and that the beam can interact with the environment in x = 0. Moreover, suppose that no interaction can take place through the distributed port. The last hypothesis means that, in (3.44), it can be assumed that
$$f_d^t(t,x) = 0 \qquad f_d^r(t,x) = 0$$
The controller is designed in order to act as if a dissipative element were connected to the power port of the beam in x = L. The causality of the power port in x = L is represented in Fig. 3.4: some dissipation effect can be introduced if it is possible to impose the following relation between flow and effort in x = L:
$$f_b^t(t,L) = -b^t(t)\,e_b^t(t,L) \qquad f_b^r(t,L) = -b^r(t)\,e_b^r(t,L)$$
with b^t, b^r > 0 smooth functions of time t, that is
$$\frac{1}{\rho}\star p_t\Big|_{x=L} = -b^t\,(K\star\epsilon_t)\Big|_{x=L} \qquad \frac{1}{I_\rho}\star p_r\Big|_{x=L} = -b^r\,(EI\star\epsilon_r)\Big|_{x=L} \tag{4.10}$$

In this way, the energy balance equation (3.45) becomes
$$\begin{aligned}\frac{dH}{dt}(t) &= -\big[b^t(t)(e_b^t)^2 + b^r(t)(e_b^r)^2\big]_{x=L} - \big[e_b^t f_b^t + e_b^r f_b^r\big]_{x=0}\\ &= -b^t(t)\big[K\star\epsilon_t|_{x=L}\big]^2 - b^r(t)\big[EI\star\epsilon_r|_{x=L}\big]^2 - \big[e_b^t(t,0)f_b^t(t,0) + e_b^r(t,0)f_b^r(t,0)\big]\end{aligned} \tag{4.11}$$
If, for example, the boundary conditions in x = 0 are
$$w(t,0) = 0 \qquad \phi(t,0) = 0 \tag{4.12a}$$
and, consequently,
$$f_b^t(t,0) = f_b^r(t,0) = 0 \tag{4.12b}$$
then (4.11) becomes
$$\frac{dH}{dt}(t) = -b^t(t)\big[K\star\epsilon_t|_{x=L}\big]^2 - b^r(t)\big[EI\star\epsilon_r|_{x=L}\big]^2 \le 0$$
So, it is possible to state the following proposition.

Proposition 4.5. Consider the dph model of the Timoshenko beam (3.40) and suppose that the boundary conditions in x = 0 are given by (4.12) and that the controller (4.10) is interconnected to the beam in x = L. Then, the final configuration is (4.9), with α = 0 and d = 0, that is
$$w(t,x) = 0 \qquad \phi(t,x) = 0$$
Proof. The proof is immediate from Remark 4.1 and Prop. 4.4. Furthermore, it is necessary that α = 0 and d = 0 in (4.9), in order to be compatible with the boundary conditions (4.12a).

Note 4.2. These results were already presented in (Kim and Renardy, 1987) using a different approach. The proposed control law was written in the following form:
$$\frac{\partial w}{\partial t}(t,L) = -b^t(t)\,K\Big[\frac{\partial w}{\partial x}(t,L) - \phi(t,L)\Big] \qquad \frac{\partial\phi}{\partial t}(t,L) = -b^r(t)\,EI\,\frac{\partial\phi}{\partial x}(t,L)$$
which is clearly equivalent to (4.10). The main advantage of approaching the problem within the framework of dph systems is that both the way the control law is deduced and the proof of its stability can be presented in a more intuitive (in some sense physical) and elegant way. The same considerations hold for the distributed control of the beam by damping injection presented in the next section: also in this case, the same results were already presented in (Dong-Hua and De-Xing, 1999), but with a different approach.
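To see the damping injection mechanism of Prop. 4.5 at work numerically without the bookkeeping of the full beam model, the following Python sketch (not from the thesis) applies the same kind of boundary termination to the simpler vibrating string of Sec. 3.4.3: the string is clamped at z = 0 and, at z = L, the boundary velocity is imposed proportional to minus the boundary stress, in the spirit of (4.10). The discretization, the parameters and the gain b are arbitrary illustrative choices; the printed energy decreases monotonically.

```python
import numpy as np

# Boundary damping injection on the vibrating string: clamped at z = 0,
# and at z = L the velocity is imposed as v(L) = -b*sigma(L), so the
# semi-discrete energy satisfies dH/dt = -b*sigma(L)**2 <= 0.
N, Lz, mu, T, b = 200, 1.0, 1.0, 4.0, 0.5
h = Lz / N

v = np.zeros(N + 1)                                    # velocities at the nodes
v[1:-1] = np.exp(-100 * (np.linspace(h, Lz - h, N - 1) - 0.5) ** 2)  # initial bump
eps = np.zeros(N)                                      # strain on the cells

def rhs(v, eps):
    sigma = T * eps
    dv = np.zeros_like(v)
    dv[1:-1] = (sigma[1:] - sigma[:-1]) / (mu * h)     # interior momentum balance
    # node 0 stays clamped; node N is not a state: the damping injection
    # law v(L) = -b*sigma(L) is substituted directly below.
    vb = np.concatenate(([0.0], v[1:-1], [-b * sigma[-1]]))
    deps = (vb[1:] - vb[:-1]) / h                      # strain rate
    return dv, deps

def energy(v, eps):
    return h * np.sum(0.5 * mu * v[1:-1] ** 2) + h * np.sum(0.5 * T * eps ** 2)

dt = 1e-4
for step in range(1, 30001):
    k1 = rhs(v, eps)
    k2 = rhs(v + dt/2*k1[0], eps + dt/2*k1[1])
    k3 = rhs(v + dt/2*k2[0], eps + dt/2*k2[1])
    k4 = rhs(v + dt*k3[0], eps + dt*k3[1])
    v = v + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    eps = eps + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    if step % 10000 == 0:
        print(f"t = {step*dt:.1f}   H = {energy(v, eps):.6f}")
# The energy decreases monotonically towards zero, in agreement with the
# damping injection argument of Prop. 4.5.
```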

Distributed control

Following the same ideas presented in the previous section, it is possible to extend the control by damping injection to the case in which the interaction between system and controller takes place through a distributed port. In this case, the (distributed) power port has to be terminated by a desired impedance implemented by a distributed controller. In other words, in this section it is shown how to stabilize the Timoshenko beam with a locally distributed control based on an extension to the infinite dimensional case of the damping injection control technique.

Assume that b_d^t(t, x) and b_d^r(t, x) are smooth functions on D and suppose that it is possible to find D̄ ⊆ D and d₀ > 0 such that b_d^t(·, x), b_d^r(·, x) ≥ d₀ > 0 if x ∈ D̄ ⊆ D. By taking into account the causality of the distributed port illustrated in Fig. 3.4, the dissipation effects can be introduced through the distributed port if the controller can impose the following relation between flows and efforts on D:
$$f_d^t = -b_d^t\star e_d^t \qquad f_d^r = -b_d^r\star e_d^r \tag{4.13a}$$
This relation can be equivalently written as
$$f_d^t = -\frac{b_d^t}{\rho}\,p_t \qquad f_d^r = -\frac{b_d^r}{I_\rho}\,p_r \tag{4.13b}$$
and, clearly, the closed-loop system is described by the following set of PDEs
$$\begin{cases}\rho\,\dfrac{\partial^2 w}{\partial t^2} - K\Big(\dfrac{\partial^2 w}{\partial x^2} - \dfrac{\partial\phi}{\partial x}\Big) + b_d^t\,\dfrac{\partial w}{\partial t} = 0\\[2mm] I_\rho\,\dfrac{\partial^2\phi}{\partial t^2} - EI\,\dfrac{\partial^2\phi}{\partial x^2} + K\Big(\phi - \dfrac{\partial w}{\partial x}\Big) + b_d^r\,\dfrac{\partial\phi}{\partial t} = 0\end{cases}$$
in which the boundary conditions have still to be specified. Moreover, the energy balance (3.45) becomes
$$\frac{dH}{dt} = \int_{\partial D}\big(e_b^t f_b^t + e_b^r f_b^r\big) + \int_D\big(e_d^t f_d^t + e_d^r f_d^r\big) = \int_{\partial D}\big(e_b^t f_b^t + e_b^r f_b^r\big) - \int_D\big(b_d^t\, e_d^t\star e_d^t + b_d^r\, e_d^r\star e_d^r\big) \tag{4.14}$$
Assume, for simplicity, that the beam is clamped in x = 0, that is
$$w(t,0) = 0 \;\text{ and }\; \phi(t,0) = 0 \tag{4.15a}$$
and that there is no force/torque acting at x = L. Moreover, the boundary conditions, i.e. the values assumed by the power variables on the (power) ports on ∂D, are given by
$$\begin{cases}f_b^t(t,0) = 0\\ f_b^r(t,0) = 0\end{cases} \qquad \begin{cases}e_b^t(t,L) = 0\\ e_b^r(t,L) = 0\end{cases} \tag{4.15b}$$
From (4.14), the energy balance relation (3.45) becomes
$$\frac{dH}{dt} = -\int_D\big(b_d^t\, e_d^t\star e_d^t + b_d^r\, e_d^r\star e_d^r\big) = -\int_D\bigg[b_d^t\Big(\frac{\partial w}{\partial t}\Big)^2 + b_d^r\Big(\frac{\partial\phi}{\partial t}\Big)^2\bigg]dx \le 0 \tag{4.16}$$
So, it is possible to state the following proposition.

Proposition 4.6. Consider the dph model of the Timoshenko beam with distributed port (3.44) and suppose that the boundary conditions are given by (4.15). Then, the distributed control action (4.13) asymptotically stabilizes the system in
$$w(t,x) = 0 \;\text{ and }\; \phi(t,x) = 0$$
Proof. From (4.16), we have that Ḣ = 0 if ε_t = ε_r = 0 and p_t = p_r = 0 on D̄. Consequently, from Prop. 4.5 and from the boundary conditions (4.15), we deduce that ε_t = ε_r = 0 and p_t = p_r = 0 also on D \ D̄. The only configuration compatible with this energy configuration and the boundary conditions (4.15a) is clearly w(t, x) = 0 and φ(t, x) = 0.

Note 4.3. It is important to point out that, from a mathematical point of view, the most difficult point in the analysis of the stability of the proposed control schemes is the proof of Remark 4.1, which characterizes the invariant solutions of the Timoshenko beam equations for zero boundary conditions. As regards La Salle's theorem, which is the second mathematical tool widely used in the proposed stability proofs, the key point is the study of the set γ(χ) in (4.4). More details on these problems and on the rigorous way to solve them can be found, as usual, in (Curtain and Zwart, 1995; Luo et al., 1999).

4.4 Control by interconnection and energy shaping

Introduction

The control by damping injection can be fruitfully applied when the open-loop energy function is characterized by a minimum at the desired final configuration. Then, by interconnecting a controller that behaves as a generalized impedance, it is possible to increase the amount of dissipated energy and thus reach the minimum of energy, so that the desired equilibrium configuration is asymptotically stabilized. Problems arise when the equilibrium is chosen in a non-minimum energy configuration. As discussed in Sec. 2.2 and in Sec. 2.3 for finite dimensional port Hamiltonian systems, it is necessary to develop a controller that properly shapes the energy of the system, thus providing a closed-loop system characterized by a Hamiltonian function with a minimum in the desired equilibrium point. The idea is the generalization of this well-established control methodology to the distributed parameter case.

In this section, the stabilization problem for mixed finite and infinite dimensional port Hamiltonian systems (m-ph systems) is discussed. An m-ph system is a dynamical system resulting from the power conserving interconnection of finite dimensional and distributed parameter port Hamiltonian systems. This is a quite common situation. Consider, for example, the simple stabilization of a generic infinite dimensional system: the control law can be applied to the plant only by means of finite dimensional actuators. We deduce that, whatever situation is considered, the resulting closed-loop system is given by the interconnection of two main subsystems, the finite dimensional and the infinite dimensional ones. In particular, we suppose that the finite dimensional controller can act on the finite dimensional plant through a distributed parameter system. The proposed framework can be seen as a generalization of (Rodriguez et al., 2001). An example is given in Fig. 4.2, in which a 2-dof flexible robot is presented. In this case, the stabilization problem can be stated as follows: the

122 106 Control of distributed port Hamiltonian systems environment PSfrag replacements link 2 ω 2, τ 2 ω 1, τ 1 link 1 joint 2 joint 1 Figure 4.2: 2-dof robot with flexible links. controller acting on joint-1 has to impose the control law τ 1 = τ 1 (ω 1 ) in order to properly stabilize the position of joint-2 in the plane. Note that the regulator (finite dimensional) can act on the system to be controller only through link-1, which is an infinite dimensional systems. In the same way, the position of the end-effector can be stabilized by properly defining the control action τ 2 = τ 2 (ω 2 ) at joint-2 which modifies the position of the gripper thanks to the flexible link 2. Generally speaking, the open-loop energy function can be shaped by acting on the energy function of the controller. As in the finite dimensional case, the key point is to robustly relate the state variable of the controller by means of Casimir functions (in infinite dimensions, it is more correct to speak about Casimir functional) to the state variable of the m-ph system to be stabilized. In this way, the regulator energy function, which is freely assignable, becomes a function of the configuration of the plant and, then, it can be easily shaped in order to solve the regulation problem. This section is organized as follows. In Sec , the particular m-ph system under study is presented, while in Sec necessary and sufficient conditions for the existence of Casimir functions for the closed-loop systems are deduced. Then, in Sec , under some further hypothesis on the distributed parameter subsystem, the control by interconnection methodology is generalized to deal with m-ph systems. Finally, a simple example is discussed in Sec m-ph systems. A simple example Consider two finite dimensional port Hamiltonian systems A and B, whose state space representation are given by: ẋ a = [J a (x a ) R a (x a )] H a x a + G a (x a )u a y a = G T a (x a ) H a x a (4.17a)

123 4.4 Control by interconnection and energy shaping 107 and ẋ b = [J b (x b ) R b (x b )] H b x b + G b (x b )u b y b = G T b (x b) H b x b (4.17b) Denote by X a and X b the state space of system (4.17a) and (4.17b) respectively, with dim X a = n a and dim X b = n b, while H a : X a R and H b : X b R are the Hamiltonian functions, bounded from below. Moreover, suppose that dim U a = dim U b = m. If systems A and B are interconnected in power conserving way, that is if { ub = y a y b = u a (4.18) then, the resulting dynamics is given by the following autonomous port Hamiltonian systems, with state space X a X b and Hamiltonian H a + H b : [ ] {[ ẋa J = a (x a ) G a (x a )G T b (x ] [ ]} [ ] b) Ra (x G T a ) 0 xa H a (4.19) a (x a )G b (x b ) J b (x b ) 0 R b (x b ) xb H b ẋ b From Def. 1.12, a scalar function C : X a X b R is a Casimir function for system (4.19) if and only if the following couple of relations is satisfied: T C [J a (x a ) R a (x a )] + T C G a (x a )G T b x a x (x b) b = 0 (4.20a) T C [J b (x b ) R b (x b )] T C G T b x b x (x b)g a (x a ) a = 0 (4.20b) These conditions are direct consequence of the interconnection law (4.18). Following the same idea presented in (Rodriguez et al., 2001) for a simpler case, a possible generalization of this interconnection law could be the following: suppose to interconnect systems (4.17a) and (4.17b) by means of m transmission lines, modeled as distributed parameters port Hamiltonian systems. As discussed in Sec , the dph model of the i-th transmission line is given by [ ] ([ ] [ ]) [ ] t α E,i 0 d Gi 0 δe,i H = +,i t α M,i d 0 0 R i δ M,i H,i [ ] [ ] [ ] (4.21) fb,i 0 1 δe,i H =,i D e b,i 1 0 δ M,i H,i D where (α E,i, α M,i ) X,i are the state variables, with X,i := Ω 1 (D) Ω 1 (D), (f b,i, e b,i ) Ω 0 ( D) Ω 0 ( D) are the power conjugated boundary variables and H i the total energy, which can be expressed, in the simplest case, by means of the following quadratic functional: H,i (α E,i, α M,i ) = 1 [ 1 α E,i α E,i + 1 ] α M,i α M,i 2 C i L i D Clearly, the set of m transmission lines can be treated as a single dph system, with state space X := X,1 X,m (4.22)

124 108 Control of distributed port Hamiltonian systems and total Hamiltonian H (α E,1,..., α E,m, α M,1,..., α M,m ) = In fact, if m H,i (α E,i, α M.i ) i=1 α E := [ α E,1 ] T α E,m α M := [ α M,1 ] T α M,m δ E H := [ δ E,1 H,1 ] T δ E,m H,m δ M H := [ δ M,1 H,1 ] T δ M,m H,m f b := [ f b,1 ] T f b,m e b := [ e b,1 ] T e b,m then the set of m transmission lines (4.21) can be briefly written as [ ] ([ ] [ ]) [ t α E 0 d G 0 δe H = + t α M d 0 0 R δ M H [ ] [ ] [ ] fb 0 1 δe H = D e b 1 0 δ M H D ] where R := diag(r 1,..., R m ) and G := diag(g 1,..., G m ). Suppose to interconnect systems (4.17a), (4.17b) and the m transmission lines (4.21) in a power conserving way. An admissible interconnection law could be following one: u a = e b,1 (0). e b,m (0), y a = f b,1 (0). f b,m (0), u b = f b,1 (L). f b,m (L), y b = e b,1 (L). e b,m (L) (4.23) Note that, if L = 0, then (4.23) is the same as (4.18). The resulting system is a mixed finite and infinite dimensional port Hamiltonian system (m-ph system), with configuration space total Hamiltonian X cl := X a X b X (4.24) H cl (x a, x b, α E,1,... α M,m ) := H a (x a ) + H b (x b ) + H (α E,1,..., α M,m ) (4.25) and whose dynamics is described by means of the following set of ODEs and PDEs: ẋ a J a (x a ) R a (x a ) 0 G a (x a ) 0 0 ẋ b α E = 0 J b (x b ) R b (x b ) 0 G b (x b ) L 0 0 G d α M 0 0 d R [ G T a (x a ) xa H a G T b (x b) xb H b ] = [ 0 0 L 0 ] [ δe H δ M H ] xa H a xb H b δ E H δ M H (4.26)
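Before turning to the power balance of (4.26), the purely finite dimensional interconnection (4.18)–(4.20), i.e. the limit case in which the transmission lines disappear, lends itself to a quick numerical sanity check. The sketch below uses small illustrative matrices (assumptions of this example, not data from the thesis): it builds the closed-loop structure matrices appearing in (4.19) and verifies, for the candidate $C(x_a, x_b) = x_a + q$, the Casimir conditions written as they follow from that closed-loop structure.

```python
import numpy as np

# Illustrative systems: A is a one-dimensional "controller" (x_a scalar),
# B is a mass-spring system with dissipation acting on the momentum only.
Ja, Ra, Ga = np.zeros((1, 1)), np.zeros((1, 1)), np.array([[1.0]])
Jb = np.array([[0.0, 1.0], [-1.0, 0.0]])
Rb = np.diag([0.0, 0.2])
Gb = np.array([[0.0], [1.0]])

# closed-loop structure matrices of (4.19) under the interconnection (4.18)
J_cl = np.block([[Ja, -Ga @ Gb.T], [Gb @ Ga.T, Jb]])
R_cl = np.block([[Ra, np.zeros((1, 2))], [np.zeros((2, 1)), Rb]])
assert np.allclose(J_cl, -J_cl.T)            # the interconnection is power conserving

# candidate Casimir C(x_a, x_b) = x_a + q  (q = first component of x_b)
dC_xa, dC_xb = np.array([1.0]), np.array([1.0, 0.0])
grad = np.concatenate([dC_xa, dC_xb])

# a Casimir annihilates the closed-loop structure matrix ...
print(np.allclose(grad @ (J_cl - R_cl), 0.0))            # -> True
# ... which splits into the two conditions of type (4.20)
c20a = dC_xa @ (Ja - Ra) + dC_xb @ Gb @ Ga.T
c20b = dC_xb @ (Jb - Rb) - dC_xa @ Ga @ Gb.T
print(np.allclose(c20a, 0.0), np.allclose(c20b, 0.0))    # -> True True
```

Note that the candidate depends only on the position of system B, so it survives the dissipation acting on the momentum: this is exactly the kind of restriction that reappears, in functional form, in the conditions derived below for the m-pH case.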

125 4.4 Control by interconnection and energy shaping 109 It is easy to verify that (4.26) satisfies the following power balance relation: ( dh cl T ) H cl H cl R a + T H cl H cl R b = dt x a x a x b x b ( T ) H a H a = R a + T H b H b R b 0 x a x a x b x b Casimir functionals The applicability of the control by interconnection and energy shaping relies on the possibility of relating the controller state variables to the state variables of the plant by means of Casimir functions. Equivalently, we can say that the controller structure is chosen in order to constrain the closed-loop trajectory to evolve on a particular sub-manifold of the whole state space. The key point is, then, to find necessary and sufficient conditions on the existence of Casimir functions for a given dynamical system. In finite dimensions, these conditions are simply expressed by Def In order to cope with m-ph systems, this definition has to be generalized. Since a Casimir function is a structural invariant, that is a scalar function defined on the state space of a dynamical system which is constant along its trajectories independently from the Hamiltonian function, a possible generalization can be given by means of the following definition. Definition 4.2 (Casimir functionals). Consider a function C : X cl R, where X cl is given by (4.24). Then, C is a Casimir functional for the system (4.26) if and only if dc dt = 0, where H cl has the structure given in (4.25). for every H cl : X cl R Remark. A function C : X X R, with X a finite dimensional vector space, satisfying Def. 4.2 is called functionals in order to point out that, for every x X, C(x, ) is a map from X, given by the Cartesian product of m Ω k (D) spaces, to R. Since dc dt = T C ẋ a + T C ẋ b + x a x b m i=1 D [ αe,i t δ E,i C + α M,i t ] δ M,i C from (4.17), (4.21) and (4.23) we have that dc C dt = T [J a R a ] H δ E,1 H 1 0 a + T C G a x a x a x. a δ E,m H m 0 + T C [J b R b ] H δ M,1 H 1 L b + T C G b x b x b x. b δ M,m H m L m + (dδ M,1 H i + G i δ E,i H) δ E,i C + i=1 m i=1 D D (dδ E,1 H i + R i δ M,i H) δ M,i C (4.27)

126 110 Control of distributed port Hamiltonian systems From Prop. A.2 and Prop. A.11, if α, β Ω 0 (D) and κ R, then d(α β) = dα β + α dβ and (κ α) β = α (κ β). Consequently, the integral term in (4.27) becomes: m i=1 D [δ E,i H (dδ M,i C G i δ E,i C) + δ M,i H (dδ E,i C R i δ M,i C)] m i=1 D [d (δ M,i H δ E,i C) + d (δ E,i H δ M,i C)] From the Stokes theorem (see Thm. A.14), we have that d(δh δc) = δh D δc C D Then, from (4.27) and (4.28), we can write that { dc T } dt = C [J a R a ] + [δ E,1 C 0 δ E,m C 0 ] G T Ha a x a x a { T } C + [J b R b ] [δ M,1 C L δ M,m C L ] G T Hb b x b x b m + [δ E,i H (dδ M,i C G i δ E,i C) + δ M,i H (dδ E,i C R i δ M,i C)] i=1 D { T } δ E,1 H 1 0 C + G a + [δ M,1 C 0 δ M,m C 0 ] x. a δ E,m H m 0 { T } δ M,1 H 1 L C + G b + [δ E,1 C L δ E,m C L ] x. b δ M,m H m L D (4.28) which, according to Def. 4.2, has to be equal to zero for every Hamiltonian function H a, H b and H. It can be deduced that the following set of conditions has to hold: T C [J a R a ] + [δ E,1 C 0 x a δ E,m C 0 ] G T a = 0 (4.29a) T C [J b R b ] [δ M,1 C L x b δ M,m C L ] G T b = 0 (4.29b) T C G a + [δ M,1 C 0 x a δ M,m C 0 ] = 0 (4.29c) T C G b + [δ E,1 C L x b δ E,m C L ] = 0 (4.29d) dδ M,i C G i δ E,i C = 0 (4.29e) dδ E,i C R i δ M,i C = 0 (4.29f) where (4.29e) and (4.29f) have to hold for every i = 1,..., m. These conditions are a generalization of the classical definition of Casimir function reported in (van der Schaft, 2000). In conclusion, the following proposition has been proved.

127 4.4 Control by interconnection and energy shaping 111 Proposition 4.7. Consider the mixed finite and infinite dimensional port Hamiltonian system (4.26), for which X cl is the configuration space, defined in (4.22), and H cl is the Hamiltonian, defined in (4.25). Then, a functional C : X cl R is a Casimir functional if and only if conditions (4.29) are satisfied. Note 4.4. Suppose that the m transmission lines are lossless, that is R i = G i = 0 for every i = 1,..., m. Then, from (4.29e) and (4.29f), we deduce that C is a Casimir functional if and only if { dδe,i C = 0 i = 1,..., m (4.30) dδ M,i C = 0 or, equivalently, if δ E,i C and δ M,i C are constant on D as function of z D. Then, from (4.30), we have that δ E,i C = δ E,i C 0 = δ E,i C L and δ M,i C = δ M,i C 0 = δ M,i C L i = 1,..., m (4.31) Then, from (4.31) and by combining (4.29a) with (4.29d) and (4.29b) with (4.29c), we deduce that C is Casimir functional if satisfies relation (4.30) and (4.20). Note that (4.20) are the necessary and sufficient conditions for the existence of Casimir functions in the finite dimensional case, when the interconnection law is purely algebraic and given by (4.18): they continue to hold under the hypothesis that the interconnecting infinite dimensional system is lossless Control of m-ph systems by energy shaping Consider the system (4.17b) and denote by x b a desired equilibrium point. As discussed in Sec , in finite dimensions the stabilization of (4.17b) in x b by means of (4.17a), that plays the role of the controller, can be solved by interconnecting both the systems according to (4.18) and looking for Casimir functions of the resulting closed-loop system in the form C i (x a, x b ) = x a,i S i (x b ), i = 1,..., n a. Clearly, they have to satisfy conditions (4.20) or, equivalently, (2.28). In this way, since C i = 0, we have that x a = S(x b ) + κ for every energy function H a and H b. This relation defines, then, a structural state feedback law. Furthermore, H a, which is freely assignable, can be expressed as a function of x b : the problem of shaping the closed-loop energy in order to introduce a minimum in x b can be solve by properly choosing H a. Finally, if dissipation is added, then this new minimum is reached. The stabilization of the m-ph system (4.26) can be stated as follows. Denote by (χ, x b ) X X b a desired equilibrium configuration for the m-ph system, where χ is a configuration of the infinite dimensional system that is compatible with the desired equilibrium point x b of the finite dimensional sub-system. In order to stabilize the configuration (χ, x b ), it is necessary to chose the finite dimensional controller (4.17a) so that the open-loop energy function m H,i (α E,i, α M,i ) + H b (x b ) = H (α E,1,..., α M,m ) + H b (x b ) i=1 could be shaped by acting on H a As in the finite dimensional case, there is no a priori relation between the state of the controller and the state of the system to be controlled and it is not clear how the controller energy, which is freely assignable, has to be chosen in order to solve the regulation problem. By generalization of the finite dimensional approach, a possible solution

128 112 Control of distributed port Hamiltonian systems can be to robustly relate the controller state variable with the plant state variable by means of a set of Casimir functions. Consider the m-ph system (4.26): if C i (x a,i, x b, α E,1,..., α M,m ) = x a,i S i (x b ) S i (α E,1,..., α M,m ), i = 1,..., n a (4.32) are a set of Casimir functionals, then, independently from the energy functions H a, H b and H, we have that: x a,i = S i (x b ) + S i (α E,1,..., α E,m, α M,1,..., α M,m ) + κ i, i = 1,..., n a The constants κ i depends only on the initial conditions and can be set to zero if the initial state is known. In this way, the controller state variable is expressed as function of the state variable of the system (4.17b) and of the configuration of the m transmission lines (4.21). Consequently, the closed-loop energy function (4.25) becomes: m H cl (x a, x b,α E,1,..., α E,m, α M,1,..., α M,m ) = H b (x b ) + H i (α E,i, α M,i ) + H a (S 1 (x b ) + S 1 (α E,1,..., α M,m ),..., S na (x b ) + S na (α E,1,..., α M,m )) i=1 where H a can be freely chosen in order to introduce a minimum in (χ, x b ). The n a functionals (4.32) are Casimir functionals for (4.26) if and only if conditions (4.29) are satisfied. The couple of relations (4.29e) and (4.29f) can be equivalently written as { dδm,i S j G i δ E,i S j = 0 dδ E,i S j R i δ M,i S j = 0 i = 1,..., m; j = 1,..., n a which is a system of partial differential equations that has to be solved for every S j, j = 1,..., n a. As in the finite dimensional case (see Sec. 2.2), dissipation introduces strong constraints on the applicability of passivity-based control techniques or, equivalently, on the admissible Casimir functions for the closed-loop system. Clearly, these limitations are present also when dealing with m-ph systems. In order to simplify the problem, it assumed that the infinite dimensional subsystem is lossless, as already discussed in Note 4.4. In this case, condition (4.30) becomes { dδe,i S j = 0 dδ M,i S j = 0 i = 1,..., m, j = 1,..., n a which expresses the fact that δ E,i S j and δ M,i S j are constant along D. Then, we have that, for every i = 1,..., m and j = 1,..., n a δ E,i S j = δ E,i S j 0 = δ E,i S j L and δ M,i S j = δ M,i S j 0 = δ M,i S j L, (4.33)

129 4.4 Control by interconnection and energy shaping 113 From (4.32) and (4.33), conditions (4.29a d) can be written as: δ E,1 S 1 δ E,m S 1 [J a R a ] =..... G T a (4.34a) T S x b [J b R b ] = G a = δ E,1 S na δ E,m S na δ M,1 S 1 δ M,m S δ M,1 S na δ M,m S na δ M,1 S 1 δ M,m S G T b (4.34b) (4.34c) δ M,1 S na δ M,m S na T δ E,1 S 1 δ E,m S 1 S G b =. x.... b δ E,1 S na δ E,m S na (4.34d) By substitution of (4.34c) in (4.34a), and of (4.34d) in (4.34b), after a post-multiplication by S x b, we deduce that J a + R a = T S [J b R b ] S (4.35) x b x b Since J a and J b are skew-symmetric and R a and R b are symmetric and positive definite, we deduce that, necessarily, J a = T S S J b (4.36a) x b x b R a = 0 (4.36b) Furthermore, from (4.35) and (4.36b), we deduce that and from (4.34b), (4.34c) and (4.36c) that R b S x b = 0 (4.36c) T S x b J b = G a G T b In conclusion, the following proposition has been proved. (4.36d) Proposition 4.8. Consider the m-ph system (4.26) and suppose that the m transmission lines are lossless, that is R i = G i = 0, i = 1,..., m. Then, the n a functionals (4.32) are Casimir functionals for this system if the conditions (4.33), (4.34c), (4.30d) and (4.36) are satisfied. Note 4.5. Note that conditions (4.36), involving the finite dimensional subsystem of (4.26), are the same of (2.28), which are required in the finite dimensional energy Casimir method, see Sec Furthermore, as it will be discussed later, Prop. 4.8 generalizes the already cited results presented in (Rodriguez et al., 2001), which can be easily treated as a particular case. Note that, in the proposed approach there are no a priori assumptions on the interconnection and damping matrices of the controller (4.17a).
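To make the role of the freely assignable controller energy concrete in the simplest setting, the following sketch works out the finite dimensional reduction (no transmission line) for a mass-spring plant: the Casimir ties the controller state to the plant position, and $H_a$ is chosen so that the shaped closed-loop energy has its minimum at a desired position $q^*$. The particular choices of $S$ and $H_a$, and all numerical values, are assumptions made for illustration, not prescriptions from the thesis.

```python
import numpy as np

# Plant: H_b(q, p) = p^2 / (2 m) + k q^2 / 2
m, k = 1.0, 2.0
q_star = 0.5                         # desired equilibrium position

# Casimir of the form (4.32) in the L = 0 limit: x_a = S(q) = -q, which is
# compatible with plant dissipation acting on the momentum only.
S = lambda q: -q

# freely assignable controller energy, chosen to shape the closed loop
Kc = 10.0
H_a = lambda xa: 0.5 * Kc * (xa + q_star) ** 2 + k * q_star * xa

# closed-loop energy restricted to the invariant manifold x_a = S(q)
H_cl = lambda q, p: p**2 / (2 * m) + 0.5 * k * q**2 + H_a(S(q))

# numerical gradient and curvature at the desired equilibrium (q*, 0)
eps = 1e-5
gq = (H_cl(q_star + eps, 0) - H_cl(q_star - eps, 0)) / (2 * eps)
gp = (H_cl(q_star, eps) - H_cl(q_star, -eps)) / (2 * eps)
hqq = (H_cl(q_star + eps, 0) - 2 * H_cl(q_star, 0) + H_cl(q_star - eps, 0)) / eps**2
print(f"gradient at (q*, 0): ({gq:.2e}, {gp:.2e})")   # ~ (0, 0)
print(f"d^2 H_cl / dq^2 at q*: {hqq:.3f}")            # ~ k + Kc > 0 -> minimum
```

If dissipation is then added (for instance through the controller), this minimum is asymptotically reached, which is exactly the mechanism generalized to m-pH systems in this section.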

[Figure 4.3: An example of m-pH system.]

In Sec. 4.2, the intrinsic difficulties related to the proof of stability in the infinite dimensional case have been presented. In the case of the m-pH system (4.26), it is necessary to properly choose the energy function $H_a$ of the controller: if the conditions of Prop. 4.8 are satisfied, then relation (4.32) holds, thus providing a full state feedback law for the closed-loop system (4.26). At this point, stability can be verified by following the methods proposed earlier in this chapter; two applications of the first procedure are reported in the remainder of this chapter.

An example. The single transmission line case

Consider the series RLC circuit introduced in Example 2.2, whose port Hamiltonian model is given by
\[
\begin{cases}
\begin{bmatrix} \dot x_1 \\ \dot x_2 \end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ -1 & -R \end{bmatrix}
\begin{bmatrix} \partial H_b / \partial x_1 \\ \partial H_b / \partial x_2 \end{bmatrix}
+ \begin{bmatrix} 0 \\ 1 \end{bmatrix} u_b \\[2mm]
y_b = \dfrac{\partial H_b}{\partial x_2}
\end{cases}
\tag{4.37}
\]
in which $x_b = [x_1\;\; x_2]^T$ is the state variable, with $x_1$ the charge stored in the capacitor and $x_2$ the flux in the inductor, and
\[
H_b(x_1, x_2) = \frac{1}{2}\frac{x_1^2}{C} + \frac{1}{2}\frac{x_2^2}{L}
\]
the total energy. As reported in Fig. 4.3, suppose to interconnect system (4.37) to a lossless transmission line (of length equal to $l$) in $z = l$; moreover, suppose to interconnect the controller (4.17a) to the transmission line in $z = 0$. If $(f_0, e_0)$ and $(f_l, e_l)$ are the power conjugated port variables at $z = 0$ and $z = l$ respectively, then, after a port dualization at both sides of the line in order to give physical consistency to efforts and flows, the interconnection law (4.23) can be written as
\[
\begin{cases} y_a = e_0 \\ u_a = f_0 \end{cases}
\qquad
\begin{cases} y_b = f_l \\ u_b = e_l \end{cases}
\]
Furthermore, denote by $\chi$ the state of the resulting m-pH system, by $C_\ell$ the distributed capacitance and by $L_\ell$ the distributed inductance of the line (the subscript $\ell$ distinguishes the line parameters from those of the lumped circuit). Consequently, the total closed-loop energy function becomes
\[
H_{cl}(\chi) = \frac{1}{2}\frac{x_1^2}{C} + \frac{1}{2}\frac{x_2^2}{L} + H_a(x_a)
+ \frac{1}{2}\int_0^l \left( \frac{\alpha_E^2}{C_\ell} + \frac{\alpha_M^2}{L_\ell} \right) dz
\tag{4.38}
\]
In order to apply the control by interconnection methodology, it is necessary to find the Casimir functions of the form (4.32) for the closed-loop dynamics. Then, the conditions of Prop. 4.8 have to be satisfied. As in the finite dimensional case, we obtain that
\[
\frac{\partial S}{\partial x_1} = 1 \quad\text{and}\quad \frac{\partial S}{\partial x_2} = 0
\]
which implies that $S(x_b) = x_1$. Moreover, as regards the functional $\bar S(\alpha_E, \alpha_M)$, we obtain that
\[
\delta_M \bar S(\alpha_E, \alpha_M) = 0 \quad\text{and}\quad \delta_E \bar S(\alpha_E, \alpha_M) = 1
\]
which implies that
\[
\bar S(\alpha_E, \alpha_M) = \bar S(\alpha_E) = \int_0^l \alpha_E\, dz
\]
Then, a Casimir function for the closed-loop system is given by
\[
C(x_a, x_b, \alpha_E, \alpha_M) = x_a - x_1 - \int_0^l \alpha_E\, dz
\]
and the closed-loop energy function (4.38) can be written as
\[
H_{cl} = \frac{1}{2}\frac{x_1^2}{C} + \frac{1}{2}\frac{x_2^2}{L}
+ \frac{1}{2}\int_0^l \left( \frac{\alpha_E^2}{C_\ell} + \frac{\alpha_M^2}{L_\ell} \right) dz
+ H_a\!\left( x_1 + \int_0^l \alpha_E\, dz \right)
\]
In the remaining part of this section, it will be shown that, by selecting
\[
H_a(x_a) = \frac{1}{2}\frac{\tilde x_a^2}{C_a} + k\, x_a
\]
with $C_a > 0$, $k \in \mathbb{R}$ to be specified and $\tilde x_a = x_a - x_a^*$, it is possible to shape the closed-loop energy in such a way that it has a minimum at the equilibrium point
\[
\chi^* = \left[\, x_1^* \;\; x_2^* \;\; \alpha_E^* \;\; \alpha_M^* \,\right]^T
= \left[\, \bar x_1 \;\; 0 \;\; \frac{C_\ell}{C}\bar x_1 \;\; 0 \,\right]^T
\]
which is stable. The stability proof relies on the approach based on Arnold's first stability theorem described earlier in this chapter. Condition (4.1) can be verified by selecting $k = -\bar x_1 / C$, since
\[
\frac{\partial H_{cl}}{\partial x_1}(\chi^*) = \frac{\bar x_1}{C} + k = 0
\qquad
\delta_E H_{cl}(\chi^*) = \frac{\alpha_E^*}{C_\ell} + k = 0
\]
The nonlinear functional (4.2) can be easily calculated: it can be obtained that
\[
N(\tilde\chi) = \frac{1}{2}\frac{\tilde x_1^2}{C} + \frac{1}{2}\frac{\tilde x_2^2}{L}
+ \frac{1}{2}\int_0^l \left( \frac{\tilde\alpha_E^2}{C_\ell} + \frac{\tilde\alpha_M^2}{L_\ell} \right) dz
+ \frac{1}{2 C_a}\left( \tilde x_1 + \int_0^l \tilde\alpha_E\, dz \right)^2
\]
The stability proof is completed once condition (4.3) is verified. The following norm is assumed:
\[
\| \tilde\chi \| := \left( \tilde x_1^2 + \tilde x_2^2 + \int_0^l \tilde\alpha_E^2\, dz + \int_0^l \tilde\alpha_M^2\, dz \right)^{1/2}
\]
for which the constant $\gamma_1$ can be easily estimated as
\[
\gamma_1 = \frac{1}{2}\min\left\{ \frac{1}{C}, \frac{1}{L}, \frac{1}{C_\ell}, \frac{1}{L_\ell} \right\}
\]
Moreover, note that
\[
\tilde x_1 \int_0^l \tilde\alpha_E\, dz \leq \frac{1}{2}\tilde x_1^2 + \frac{1}{2}\left( \int_0^l \tilde\alpha_E\, dz \right)^2
\qquad
\left( \int_0^l \tilde\alpha_E\, dz \right)^2 \leq l \int_0^l \tilde\alpha_E^2\, dz
\]
Consequently, it is possible to choose
\[
\gamma_2 = \frac{1}{2}\max\left\{ \frac{1}{C} + \frac{2}{C_a},\; \frac{1}{L},\; \frac{1}{C_\ell} + \frac{2l}{C_a},\; \frac{1}{L_\ell} \right\}
\tag{4.39}
\]
and $\alpha = 2$ in order to complete the proof.
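The convexity argument above admits a direct numerical cross-check. In the sketch below the line is crudely discretized, the controller state is eliminated through the Casimir $x_a = x_1 + \int_0^l \alpha_E\, dz$, and random perturbations around $\chi^*$ are sampled: all parameter values and the discretization are illustrative assumptions, not data from the thesis.

```python
import numpy as np

# lumped circuit and (per unit length) line parameters -- illustrative values
C, L = 1e-3, 2e-3
Cl, Ll = 0.5e-3, 1e-3
length, n = 1.0, 100
dz = length / n
Ca, x1_star = 2e-3, 0.7
k = -x1_star / C                                     # choice made in the text
xa_star = x1_star + (Cl / C) * x1_star * length      # equilibrium controller state

def H_cl(x1, x2, aE, aM):
    xa = x1 + aE.sum() * dz                          # Casimir constraint
    Ha = 0.5 / Ca * (xa - xa_star) ** 2 + k * xa
    Hline = 0.5 * dz * np.sum(aE**2 / Cl + aM**2 / Ll)
    return x1**2 / (2 * C) + x2**2 / (2 * L) + Hline + Ha

chi_star = (x1_star, 0.0, (Cl / C) * x1_star * np.ones(n), np.zeros(n))
H_star = H_cl(*chi_star)

rng = np.random.default_rng(0)
worst = np.inf
for _ in range(2000):
    d = 1e-2 * rng.normal(size=2 + 2 * n)
    H_pert = H_cl(chi_star[0] + d[0], chi_star[1] + d[1],
                  chi_star[2] + d[2:2 + n], chi_star[3] + d[2 + n:])
    worst = min(worst, H_pert - H_star)
print("smallest observed H_cl(chi* + delta) - H_cl(chi*):", worst)   # >= 0
```

Every sampled perturbation increases the shaped energy, consistently with the fact that the quadratic functional $N(\tilde\chi)$ is positive definite in the assumed norm.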

132 116 Control of distributed port Hamiltonian systems PSfrag replacements q 2 m, J f c τ c q 1 0 L Figure 4.4: Flexible link with mass in x = L. for which the constant γ 1 can be easily estimated in γ 1 = 1 2 min { 1 C, 1 L, 1 C, 1 L } Moreover, note that x 1 α E dz 1 2 x ( l ) 2 α E dz 2 0 ( l 2 l α E dz) l αedz 2 l Consequently, it is possible to chose 0 0 γ 2 = 1 2 max { 1 C C a, 0 1 L, l, C C a 1 L } (4.39) and α = 2 in order to complete the proof. 4.5 Control by energy shaping of the Timoshenko beam Model of the plant Consider the mechanical system of Fig. 4.4, in which a flexible beam, modeled according to the Timoshenko theory and whose dph model is given by (3.40), is connected to a rigid body with mass m and inertia momentum J in x = L and to a controller in x = 0. The controller acts on the system with a force f c and a torque τ c. Since the Timoshenko model of the beam is valid only for small deformations, it is possible to assume that the motion of the mass is the combination of a rotational and of a translational motion along x = L. The port Hamiltonian model of the mass is given by: [ q ṗ ] = e = p H ([ 0 I I 0 ] [ D ]) [ q H p H ] [ 0 + I ] f (4.40) where q = [q 1, q 2 ] T Q are the generalized coordinates, with q 1 the distance from the equilibrium configuration and q 2 the rotation angle, p T Q are the generalized momenta, f, e R 2

133 PSfrag replacements 4.5 Control by energy shaping of the Timoshenko beam 117 H c SGY H SGY 1 H Figure 4.5: Bond graph representation of the closed-loop system. are the port variables and H(p, q) := 1 2 pt M 1 p qt Kq = 1 ( ) p m + p ( k1 q1 2 + k 2 q 2 J 2 2) (4.41) is the total energy (Hamiltonian) function, with k 1, k 2 > 0. It is assumed that a translational and rotational spring is acting on the mass, with center of stiffness in (0, 0) Q. As regard the controller, we assume that it can be modeled by means of the following finite dimensional port Hamiltonian systems [ ] ([ ] [ ]) [ ] [ ] qc 0 I 0 0 qc H = c 0 + f ṗ c I 0 0 D c pc H c G c c (4.42) e c = G T c pc H c where q c Q c are the generalized coordinates, with dim(q c ) = 2, p c T Q c are the generalized momenta and f c, e c R 2 are the power conjugated port variables. Moreover, H c (q c, p c ) is the Hamiltonian and it will be specified in the remaining part of this section in order to drive the whole system in a desired equilibrium configuration. The port causality of both the mass and the controller is assumed to be with flows as input and efforts as outputs. As pointed out in (Stramigioli, 2001), it is possible to interconnect two port Hamiltonian system only if a port dualization is applied on one of the system. In this way, a system can have an effort as input and a flow as output. Since the port causality and orientation of the beam is given in Fig. 3.4, the bond graph representation of the closed loop system made of the Timoshenko beam, the mass in x = L and the finite dimensional port Hamiltonian controller acting in x = 0 is given in Fig Then, the interconnections constraints between the port variables of the subsystems are given by the following power-preserving relations: { [ f t b (L) ff r(l) ] T { [ = e f t [ e t b (L) e r b (L) ] b (0) ff r(0) ] T T [ = f e t b (0) e r b (0) ] T = e c (4.43) = f c From (3.40), (4.40), (4.42) and (4.43), it is possible to obtain the mixed finite and infinite dimensional port Hamiltonian representation of the closed-loop system. The total energy H cl is defined in the extended space X cl := Q T Q Q c T Q }{{} c Ω 1 (D) Ω 1 (D) Ω 1 (D) Ω 1 (D) (4.44) }{{} X X and it is given by the sum of the energy functions of the subsystems, that is H cl := H + H c + H (4.45)

134 118 Control of distributed port Hamiltonian systems Moreover, it is easy to verify that the energy rate is equal to dh cl dt ( T H cl = p D H cl p ) + T H cl H cl D c p c p c where D c and H c have to be designed in order to drive the system in the desired equilibrium position, which is still to be specified. Following the same procedure of Sec. 4.4, the idea is to shape the total energy H cl by properly choosing the controller Hamiltonian H c in order to have a new minimum of energy in the desired configuration that can be reached if some dissipative effect is introduced. The first step is find the Casimir functionals of the closed-loop system Casimir functionals for the closed-loop system Denote by C : X X R a scalar function defined on the extended state space (4.44); according to Def. 4.2, it is Casimir function for the m-ph of Fig. 3.4, if and only if dc dt = 0, H cl : X X R Hamiltonian of the system Clearly, dc C dt = T q q + T C p ṗ + T C q c + T C ṗ c q c p ( c pt + t δ p t C + p r t δ p r C + ɛ t t δ ɛ t C + ɛ ) r t δ ɛ r C D and, from (3.40), (4.40), (4.42) and the interconnection constraints (4.43), we obtain { dc C H dt = T q p + T C H p + T C H c + T C q c p c p c + [dδ ɛt H δ pt C + (dδ ɛr H + δ ɛt H) δ pr C] D + [(dδ pt H δ pr H) δ ɛr C + dδ pr H δ ɛt C] D } δ ɛr H x=l ] T q D H p + [δ ɛ t H x=l { H c H c D c + G c [δ ɛt H x=0 q c p c } δ ɛr H x=0 ] T (4.46) Since dδh δc = d(δh δc) δh dδc and δh δc = δh δc (see Prop. A.2 and Prop. A.11), the integral term in (4.46) is equal to D [d (δ ɛt H δ pt C) + d (δ ɛr H δ pr C) + d (δ pt H δ ɛt C) + d (δ pr H δ ɛr C)] [δ pt H dδ ɛt C + δ pr H (dδ ɛr C + δ ɛt C)] D [δ ɛt H (dδ pt C δ pr C) + δ ɛr H dδ pr C] D (4.47)

135 4.5 Control by energy shaping of the Timoshenko beam 119 where, from Stokes theorem (see Thm. A.14), the first term can be written as [δ ɛt H D δ pt C D + +δ pr H D δ ɛr C D ] = D [ ] [ ] T = δpt C D δ pr C D δɛt H D δ ɛr H D (4.48) D [ ] [ ] T + δɛt C D δ ɛr C D δpt H D δ pr H D D From (4.40, 4.42) and the interconnection constraints (4.43), we have that [ ] [ ] δpt H x=l = e = H δpt H x=0 and = e c = G T c δ pr H x=l p δ pr H x=0 Then, combining (4.46) with (4.47) and (4.48), we obtain that dc dt = T C p H q T C H c p c q c q + T C { T C p D + [ } ] H δ ɛt C x=l δ ɛr C x=l p { T C + T C D c + [ } ] δ ɛt C x=0 δ ɛr C x=0 G T Hc q c p c c p c { T C + p + [ } ] [ δ pt C x=l δ pr C x=l δɛth x=l δ ɛrh x=l { T C + [ } ] [ δ pt C x=0 δ pr C x=0 δɛth x=0 δ ɛrh x=0 p c [δ pt H dδ ɛt C + δ pr H (dδ ɛr C + δ ɛt C)] D [δ ɛt H (dδ pt C δ pr C) + δ ɛr H dδ pr C] D that has to be equal to zero for every Hamiltonian H, H c and H. This is true if and only if H c p c ] T ] T dδ ɛt C = 0 dδ pt C δ pr C = 0 dδ ɛr C + δ ɛt C = 0 dδ pr C = 0 [ ] [ ] C p = 0 C δpt C x=l δpt C x=0 = 0 = 0 = 0 p c δ pr C x=l δ pr C x=0 [ ] [ ] C q = δɛt C x=l C δɛt C x=0 = G c δ ɛr C x=l q c δ ɛr C x=0 (4.49) In other words, the following proposition has been proved, (Macchelli and Melchiorri, 2003a; Macchelli and Melchiorri, 2003b). Proposition 4.9. Consider the mixed finite and infinite dimensional port Hamiltonian system of Fig. 4.5, that is the result of the power conserving interconnection (4.43) of the subsystems

136 120 Control of distributed port Hamiltonian systems (3.40), (4.40) and (4.42). If X X is the extended state space of the system, introduced in (4.44), then a functional C : X X R is a Casimir for the closed-loop system if and only if relations (4.49) hold. Since the necessary and sufficient conditions for the existence of Casimir functions have been deduced, the control problem can be approached Control by energy shaping of the Timoshenko beam In order to control the flexible beam with the finite dimensional controller (4.42), the first step is to find Casimir functionals for the closed-loop system that can relate the state variables of the controller q to the state variables that describe the configuration of the flexible beam and of the mass connected to its extremity. In particular, we are looking for some functionals C i, i = 1, 2, such that C i (q, p, q c, p c, p t, p r, ɛ t, ɛ r ) := q c,i C i (q, p, p c, p t, p r, ɛ t, ɛ r ), with i = 1, 2 are Casimir functionals for the closed loop system, i.e. satisfying the conditions of Prop First of all, from (4.49), it is immediate to note that every Casimir functional cannot depend on p and p c. Moreover, since it is necessary that dδ ɛt C i = 0 and dδ pr C i = 0, we deduce that δ ɛt C i and δ pr C i have to be constant as function on x on D and their value will be determined by the boundary conditions on C i. Since, from (4.42), δ pr C i D = 0, we deduce that δ pr C i = 0 on D. But dδ pt C i = δ pr C i = 0, then, from the boundary conditions, we deduce that also δ pt C i = 0 on D. As a consequence, all the admissible Casimir functionals are also independent from p t and p r. In other words, we are interested in finding Casimir functionals in the following form: C i (q, q c, ɛ t, ɛ r ) := q c,i C i (q, ɛ t, ɛ r ), with i = 1, 2 Assuming G c = I, we have that [ C 1 1 = q c 0 ] [ δɛt C = 1 x=0 δ ɛr C 1 x=0 ] (4.50) and, consequently, δ ɛt C 1 = 1 on D. From (4.49), we have that dδ ɛr C 1 = δ ɛt C 1 = 1 = dx; then, δ ɛr C 1 = x + c 1, where c 1 is determined by the boundary conditions. Since, from (4.50), δ ɛr C 1 x=0 = 0, then c 1 = 0; moreover, we deduce that δ ɛr C x=l = L, relation that introduces a new boundary condition in x = L. A consequence is that C 1 q = [ δɛt C 1 x=l δ ɛr C 1 x=l ] = [ 1 L The first conclusion is that C 1 (q, q c, ɛ t, ɛ r ) = q c,1 (Lq 2 q 1 ) D ] (xɛ r ɛ t ) (4.51) is a Casimir for the closed loop system. Following the same procedure, it is possible to calculate C 2. From (4.49), we have that [ ] [ ] C 2 0 δɛt C = = 2 x=0 q c 1 δ ɛr C 2 x=0

137 4.5 Control by energy shaping of the Timoshenko beam 121 and then δ ɛt C 2 = 0 on D; moreover, dδ ɛr C 2 = 0 and, consequently, δ ɛr C 2 = 1 on D since (4.45) holds. Again from (4.49), we deduce that So we can state that [ C 2 q = δɛt C 2 x=l δ ɛr C 2 x=l ] = [ 0 1 ] C 2 (q, q c, ɛ t, ɛ r ) = q c,2 + q 2 + ɛ r (4.52) D is another Casimir functionals for the closed loop system. In conclusion, the following proposition has been proved, (Macchelli and Melchiorri, 2003a; Macchelli and Melchiorri, 2003b). Proposition Consider the mixed finite and infinite dimensional port Hamiltonian system of Fig. 4.5, that is the result of the power conserving interconnection (4.43) of the subsystems (3.40), (4.40) and (4.42). Then (4.51) and (4.52) are Casimir functionals for this system. Note 4.6. Since C i, i = 1, 2, are Casimir functionals, they are invariant for the system of Fig Then, for every energy function H c of the controller, we have that q c,1 = (Lq 2 q 1 ) + (xɛ r ɛ t ) + C 1 q c,2 = q 2 ɛ r + C 2 (4.53) D where C 1 and C 2 depend on the initial conditions. If the initial configuration of the system is known, then it is possible to assume these constants equal to zero. Since H c is an arbitrary function of q c, it is possible to shape the total energy function of the closed-loop system in order to have a minimum of energy in a desired configuration: if some dissipation effect is present, the new equilibrium configuration will be reached. Suppose that (q, 0), with q = [q1 q 2 ]T, is the desired equilibrium configuration of the mass (4.40). Then, the corresponding equilibrium configuration of the beam can be calculated as the solution of (3.40) with p t t = p r t = ɛ t t = ɛ r t = 0 on D and with boundary conditions (in x = L) given by [ f t b (L) f r b (L) ] = H p (q, 0) = 0, [ e t b (L) e r b (L) ] D = H [ k1 q q (q, 0) = 1 k 2 q2 ] (4.54) From (3.40), we have that the equilibrium configuration has to satisfy the following system of PDEs { dδɛt H = 0 δ ɛt H + dδ ɛr H = 0 whose solution, compatible with the boundary conditions (4.54), is equal to ɛ t (x, t) = k 1 K q 1 ɛ r (x, t) = k 1q1 EI (L x) + k 2q2 EI (4.55)

138 122 Control of distributed port Hamiltonian systems Furthermore, at the equilibrium, it is easy to compute that p t = p t = 0 and that p r = p r = 0. From (4.53) and (4.55), define L qc,1 = q c,1 (q1, q2, ɛ t, ɛ r ) = Lq2 q1 + (xɛ r ɛ t ) dx 0 L [ = Lq2 q1 k1 q1 + 0 EI (L x)x + k 2q2 EI x k ] 1 K q 1 dx ( k1 L 3 = EI 6 k ) ( 1 K L 1 q1 k2 L 2 ) + EI 2 + L q2 L qc,2 = q c,2 (q2, ɛ r ) = q2 ɛ r dx 0 L [ = q2 k1 q1 0 EI (L x) + k 2q2 EI = k 1 L 2 ( ) k2 EI 2 q 1 EI L + 1 q2 Note that, at the equilibrium, p c = p c = 0. The energy function H c of the controller (4.42) will be developed in order to regulate the closed-loop system in the configuration ] dx χ = (q, p, p c, p t, p r, ɛ t, ɛ r ) In the remaining part of this section it will proved that, by choosing the controller energy as H c (p c, q c ) = 1 2 pt c M 1 c p c K c,1(q c,1 q c,1) K c,2(q c,2 q c,2) 2 + Ψ 1 (q c,1 ) + Ψ(q c,2 ) (4.56) with M c = Mc T > 0, K c,1, K c,1 > 0 and Ψ 1, Ψ 2 functions still to be specified, the configuration χ is stable. The stability proof follows the Arnold first stability theorem discussed in Sec Denote by χ the state variable of the closed-loop system. From (3.30), (4.41) and (4.56), the total energy function is given by H cl (χ) = 1 ( ) p m + p ( k1 q1 2 + k 2 q 2 J 2 2) + 1 ( 1 2 ρ p t p t + 1 ) p r p r + Kɛ t ɛ t + EIɛ r ɛ r I ρ D pt c M 1 c p c K c,1(q c,1 q c,1) K c,2(q c,2 q c,2) 2 + Ψ 1 (q c,1 ) + Ψ(q c,2 ) The first step in the stability proof is to find under which conditions, that is for what particular choice of the functions Ψ 1 and Ψ 2, relation (4.1) is satisfied. We have that p H cl p H q H cl q (H + H c ) H cl (χ) = δ pt H cl δ pr H cl = δ pt H δ pr H δ ɛt H cl δ ɛt (H + H c ) δ ɛr H cl δ ɛr (H + H c )

139 4.5 Control by energy shaping of the Timoshenko beam 123 Clearly, Furthermore, H p (χ ) = 0, H c p c (χ ) = 0, δ pt H(χ ) = 0 and δ pr H(χ ) = 0 H cl ( = k 1 q 1 K c,1 qc,1 q ) Ψ 1 c,1 q 1 q c,1 H cl ( = k 2 q 2 + K c,1 qc,1 q ) ( c,1 L Kc,2 qc,2 q Ψ 1 q c,2) + L Ψ 2 2 q c,1 q c,2 and ( δ ɛt H cl = K ɛ t K c,1 qc,1 qc,1 ) Ψ 1 q c,1 ( δ ɛr H cl = EI ɛ r + K c,1 qc,1 qc,1) ( x Kc,2 qc,2 qc,2 ) Ψ 1 + x Ψ 2 q c,1 q c,2 then H cl (χ ) = 0 if { Ψ1 (q c,1 ) = k 1 q 1 q c,1 + ψ c,1 Ψ 2 (q c,2 ) = (k 2 q 2 + k 1q 1 L) q c,2 + ψ c,2 with ψ c,1 and ψ c,2 arbitrary constants. Once the equilibrium is assigned in χ, it is necessary to verify the convexity condition (4.3) in χ on the nonlinear functional N defined as in (4.2). With some simple but boring calculation, it can be obtained that χ = 1 2 pt M 1 p pt c Mc 1 p c k 1 q k 2 q L ( ρ p t p t + 1 ) p r p r + K ɛ t ɛ t + EI ɛ r ɛ r I ρ + 1 [ L ] 2 2 K c,1 L q 2 q 1 + (x ɛ r ɛ t ) + 1 [ L ] 2 2 K c,2 q 2 + ɛ r 0 The convexity condition (4.3) requires a norm in order to be verified. As already discussed in Sec , a possible choice can be 0 χ 2 = 1 2 pt p pt c p c q q L In (4.3), a possible choice for γ 1 can be min ( p t p t + p r p r + ɛ t ɛ t + ɛ r ɛ r ) { M 1, M 1 c, k 1, k 2, } 1 ρ, 1, K, EI I ρ Moreover, if γ 2 = 1 2 max { M 1, M 1 c, k 1, k 2, } 1 ρ, 1, K, EI, K c,1, K c,2 I ρ

140 124 Control of distributed port Hamiltonian systems we have that N ( χ) γ 2 χ + γ 2 [L q 2 q 1 + L 0 ] 2 [ (x ɛ r ɛ t ) + γ 2 q 2 + L 0 ɛ r ] 2 If the inequalities (4.39) are considered again, it is possible to satisfy (4.3) by choosing α = 2 and γ 2 = γ 2 max { 2, 2L 2 + 2L + 1 } which completes the stability proof. In other words, the following proposition has been proved. Proposition Consider the mixed finite and infinite dimensional port Hamiltonian system of Fig. 4.5, that is the result of the power conserving interconnection (4.43) of the subsystems (3.40), (4.40) and (4.41). If in (4.42) it is assumed that G c = I and H c is chosen according to (4.56), then the configuration χ is stable in the sense of Lyapunov, i.e. in the sense of Def. 4.1.
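Before moving to the next chapter, the construction above can be summarized operationally: given the beam and spring parameters and a desired mass configuration $(q_1^*, q_2^*)$, the equilibrium strain profile (4.55) and the controller set-points obtained from the Casimir relations (4.53) can be evaluated explicitly. The short sketch below does exactly this; the parameter values are illustrative assumptions, and the sign conventions are those of (4.53)–(4.55) as reconstructed here, so the formulas should be read as a hedged restatement rather than as the thesis' own numerical procedure.

```python
import numpy as np

K, EI = 50.0, 10.0             # shear and bending stiffness of the beam
k1, k2 = 5.0, 3.0              # stiffness of the springs acting on the mass
Lb = 1.0                       # beam length
q1_star, q2_star = 0.02, 0.01  # desired mass displacement / rotation

def trap(y, x):
    """Composite trapezoidal rule (kept explicit to avoid version issues)."""
    return float(0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(0.0, Lb, 401)
eps_t = (k1 * q1_star / K) * np.ones_like(x)                 # shear strain, (4.55)
eps_r = k1 * q1_star / EI * (Lb - x) + k2 * q2_star / EI     # curvature,   (4.55)

# controller set-points from the Casimir relations (4.53) at the equilibrium
qc1_star = Lb * q2_star - q1_star + trap(x * eps_r - eps_t, x)
qc2_star = -q2_star - trap(eps_r, x)

# closed-form expressions obtained by carrying out the integrals
qc1_closed = (k1 * Lb**3 / (6 * EI) - k1 * Lb / K - 1) * q1_star \
             + (k2 * Lb**2 / (2 * EI) + Lb) * q2_star
qc2_closed = -k1 * Lb**2 / (2 * EI) * q1_star - (k2 * Lb / EI + 1) * q2_star

print(np.isclose(qc1_star, qc1_closed), np.isclose(qc2_star, qc2_closed))
print(f"q_c,1* = {qc1_star:.6f}   q_c,2* = {qc2_star:.6f}")
```

These set-points are precisely the quantities entering the controller energy (4.56), so the sketch shows how the energy-shaping regulator is parameterized once the desired configuration of the mass is fixed.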

Chapter 5

Scattering with applications

The scattering decomposition is well known in network and communication theory. This decomposition acts on the space of energy variables and shows that, in the power propagation, it is possible to identify two distinct power flows, the incoming and the outgoing one. In the case of dpH systems, the space of energy variables is not given by a finite dimensional vector space but, in Sec. 5.4, it will be pointed out that it is still possible to implement a scattering decomposition, which is the generalization of the finite dimensional one discussed in Sec. 5.2. The finite dimensional scattering theory can be fruitfully used in telemanipulation control systems, in which the use of scattering variables allows the passivity of the whole system to be preserved even in the presence of variable time delays. This important application is discussed in Sec. 5.3. Furthermore, it can be of some interest, for modeling and controlling infinite dimensional systems, to have a scattering description of the dynamics of a distributed system. But this is left to future research activities.

5.1 Introduction and a motivating example

Scattering is a well known phenomenon in physics and in network and communication theory: when a wave propagating in a material medium encounters discontinuities, its properties (direction, frequency or polarization) are changed, in strict relation with the intrinsic characteristics of the material through which the wave is propagating, (Lax and Phillips, 1967). Consequently, it seems natural that scattering theory is concerned with the effect obstacles or inhomogeneities have on an incident wave. There are two types of problems in this area:
- the direct problem: this problem is concerned with determining the scattered field from the knowledge of the incident field and the scattering obstacle;
- the inverse problem: this problem is, basically, the determination of the shape and/or physical properties of the scatterer from the measurement of the scattered field for a number of incident fields.

As will be clear later, the results presented in this chapter can be interpreted as being developed within the direct problem framework. Roughly speaking, the word scattering is related to the propagation of waves and to the change of their properties when obstacles are met. Consider a transmission line given by the interconnection of two parts, each of them characterized by different properties (i.e. different impedance): a traveling wave encounters an obstacle when it reaches the interconnection point. The consequence is well-known: the wave splits into reflected and transmitted components. Trying to generalize to generic power propagation phenomena: at the interconnection point of physical systems, an amount of traveling power is transmitted, while the remaining part is reflected. Consequently, the total traveling power becomes the sum of two distinct terms, each of them moving in opposite directions.

Consider, for example, a transmission line, whose model has already been discussed. Denote by $C(z) = C$ the distributed capacitance and by $L(z) = L$ the distributed inductance. If $V(z,t)$ and $I(z,t)$ are voltages and currents, it is well known that
\[
C \frac{\partial V}{\partial t} = -\frac{\partial I}{\partial z}
\qquad
L \frac{\partial I}{\partial t} = -\frac{\partial V}{\partial z}
\]
and, clearly, that
\[
\frac{\partial^2 I}{\partial t^2} = \frac{1}{LC}\frac{\partial^2 I}{\partial z^2}
\qquad
\frac{\partial^2 V}{\partial t^2} = \frac{1}{LC}\frac{\partial^2 V}{\partial z^2}
\]
which are both 1-dimensional wave equations. If $v = 1/\sqrt{LC}$ denotes the wave speed, we have that
\[
I(z,t) = F_1(z + vt) + F_2(z - vt)
\]
is a generic solution of the first PDE, where $F_1$ and $F_2$ are two waveforms traveling in opposite directions: in particular, $F_1$ in the negative $z$ direction and $F_2$ in the positive one. As regards the time evolution of the voltage along the transmission line, it is easy to compute that
\[
V(z,t) = \sqrt{\frac{L}{C}}\left[ F_2(z - vt) - F_1(z + vt) \right]
\]
where $Z := \sqrt{L/C}$ is the impedance of the line. Note that $Z$ can be interpreted as a map from the space of flows (currents) to the space of efforts (voltages). If $N := \sqrt{Z}$, consider the following functions:
\[
S^+(z,t) = \frac{1}{\sqrt 2}\, N^{-1} \left[ V(z,t) + Z\, I(z,t) \right]
\qquad
S^-(z,t) = \frac{1}{\sqrt 2}\, N^{-1} \left[ V(z,t) - Z\, I(z,t) \right]
\tag{5.1}
\]
It is easy to compute that
\[
S^+(z,t) = \sqrt{2Z}\, F_2(z - vt) \qquad S^-(z,t) = -\sqrt{2Z}\, F_1(z + vt)
\]
showing that each of $S^+$ and $S^-$ is a function of one specific wavefront only. Moreover, denote by $P(z,t) := V(z,t)\, I(z,t)$ the traveling power. Then $P(z,t) = Z\left[ F_2^2 - F_1^2 \right]$ or, in terms of $S^+$ and $S^-$,
\[
P(z,t) = \frac{1}{2}\left| S^+(z,t) \right|^2 - \frac{1}{2}\left| S^-(z,t) \right|^2
\tag{5.2}
\]
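The identity (5.2) is purely algebraic and can be verified numerically in one line; the sketch below does so for arbitrary voltage/current samples (the line parameters and the random samples are illustrative assumptions).

```python
import numpy as np

# Numerical check of the scattering decomposition (5.1)-(5.2)
Lline, Cline = 0.5e-6, 200e-12          # distributed inductance / capacitance
Z = np.sqrt(Lline / Cline)              # line impedance
N = np.sqrt(Z)

rng = np.random.default_rng(0)
V = rng.normal(size=1000)               # voltage samples V(z, t)
I = rng.normal(size=1000)               # current samples I(z, t)

s_plus  = (V + Z * I) / (np.sqrt(2.0) * N)   # (5.1)
s_minus = (V - Z * I) / (np.sqrt(2.0) * N)

P_direct     = V * I                                   # instantaneous power
P_scattering = 0.5 * s_plus**2 - 0.5 * s_minus**2      # (5.2)
print(np.allclose(P_direct, P_scattering))             # -> True
```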

[Figure 5.1: Scattering in the infinite dimensional case. An overview.]

Relation (5.2) shows that the change of variables (5.1) is able to reveal two distinct power flows propagating inside the transmission line. Moreover, each new variable is related to the power traveling in a specific direction: $S^+$ and $S^-$ are called scattering variables, while (5.1) is the scattering map. The result expressed by (5.2) is known as the scattering power decomposition. At this point, we can ask whether it is possible to extend these classical results on wave propagation to generic power propagation phenomena (e.g. to fluid dynamics or flexible structures). The key point seems to be the generalization of the concept of impedance to domains other than the electromagnetic one and, if possible, to the infinite dimensional case.

At this point, the relation between scattering, or, better, scattering theory, and port Hamiltonian systems and control in general may still not be clear. In order to understand that this connection is possible, it is necessary to recall some basic concepts introduced in the previous chapters. It has already been pointed out that the interaction between physical systems is, basically, an exchange of power, the amount of which can be calculated by means of intrinsic operations on the so-called power variables, defined on the ports of the systems themselves. Clearly, it is on these ports that the interaction takes place. As in wave propagation it is possible to operate on the state variables in order to obtain a couple of new variables related to the transmitted and reflected power, it is possible to show that, once an orientation is fixed for each power port of the system, the total power flow can be split into two distinct ones by means of a generalization of (5.1).

In (Stramigioli, 2001; Stramigioli et al., 2002), a geometric approach to scattering is presented in the case of finite dimensional systems, where the space of energy variables is a finite dimensional vector space. This result is presented in Sec. 5.2, while in Sec. 5.3 an important application of finite dimensional scattering theory to the control of telemanipulation systems is discussed. Within the framework of infinite dimensional port Hamiltonian systems, the space of power variables is an infinite dimensional space. In Sec. 5.4, it will be pointed out that it is still possible to implement a scattering decomposition, the natural generalization of the finite dimensional one. Basically, the boundary $\partial D$ of the spatial domain $D$, in which the physical phenomena to be modeled take place, is treated as a power port through which the interaction with the environment is possible. The key point is the generalization of the concept of impedance: under the hypothesis that $\partial D$ is a Riemannian manifold, this is always possible, and the extension of the scattering map (5.1) and of the scattering power decomposition theorem (5.2) follows. Fig. 5.1 gives an idea of what scattering is in the infinite dimensional case.

144 128 Scattering with applications In conclusion, it is important to underscore what the possible application of this extension to the infinite dimensional case of the scattering theory can be. First of all, what we obtained is a unified description of power propagation in different domains by means of a powerful mathematical tool that can be used to study the interconnection of finite and infinite dimensional systems with input and output variables in duality. As regard control application, let us consider the control by damping injection methodology applied to the stabilization of distributed system, through the boundary or the distributed port. The scattering decomposition can can suggest a way to choose the right impedance in order to transfer power to a load, without power reflection phenomena. Moreover, this is a framework allowing to treat infinite dimensional systems as transmission lines: then, energy-based approaches to the control of telemanipulation systems can be extended to treat a wide class of problems (e.g. the control of robots with flexible links). But these are only ideas for future work: unfortunately, they are not considered in this work. 5.2 Scattering in the finite dimensional case Definitions and basic results As discussed in Sec , denote by F an n-dimensional vector space, the space of flows, and by E F its dual, the space of efforts, that is the space of linear operator from F to R. The duality product between f F and e E, that is the value of e(f) R provides the power associated to the couple of power variables (f, e) F E. Equivalently (see Def. 1.4), power P is defined as P := e, f where, is the dual pairing (i.e. dual product). Based on the dual product, it is possible to define a canonical bi-linear operator on F E, the +pairing (see Def. 1.5), which is given by (f 1, e 1 ), (f 2, e 2 ) := e 2, f 1 + e 1, f 2 where (f i, e i ) F E, i = 1, 2. Once a basis is assumed on F, and the corresponding dual on E, if flows and efforts are represented as n-dimensional column vectors, then it is possible to write e, f = e T f (f 1, e 1 ), (f 2, e 2 ) = e 1 T f 2 + e 2 T f 1 (see Note 1.1). In order to define the scattering subspaces in a coordinate free way, a metric Z = [z ij ] on F is needed, (Stramigioli et al., 2000; Stramigioli et al., 2002). We can also induce a metric on E starting from Z, that is given by Z 1 = [z ij ]. So, it is possible to define a metric on F E. We give the following Definition 5.1. Let Z = [z ij ] be a metric on F. Then, define the norm of (f, e) F E as: (f, e) 2 Z := z ijf i f j + z ij e i e j Note 5.1. It is clear from the previous definition that the norm on F E depends on the metric Z on F. This metric is generally a function of the physical properties of the system, for example of the impedance in the case of electric circuits. In matrix representation and using coordinates, we can say that, given (f, e) F E, (f, e) 2 Z = [ f T e T ] [ Z 0 0 Z 1 ] [ ] f e

145 5.2 Scattering in the finite dimensional case 129 Using the concepts of dual product and +pairing and after fixing a proper metric on the space of the flow, it is possible to give the following definition, (Macchelli et al., 2002b). Definition 5.2 (scattering subspaces). Suppose that F is a vector space, with E = F its dual, and Z is a metric on F that induces a metric Z on F E. The Z-scattering subspaces S +, S F E are defined as S + := {(f, e) F E (f, e), (f, e) = (f, e) 2 Z } (5.3a) S := {(f, e) F E (f, e), (f, e) = (f, e) 2 Z } (5.3b) Note 5.2. The previous definition requires that the elements belonging to S + (resp. S ) are related to the eigenvalue +1 (resp. 1) of the bi-linear operator, with respect to the metric Z on F E. In (van der Schaft, 2000; Stramigioli et al., 2000; Stramigioli et al., 2002), the scattering subspaces are directly defined using the concept of eigenvalues of the +pairing, once a metric is considered on the spaces of power variables. A metric is necessary since the +pairing is a quadratic form and we cannot directly consider the eigenvalues of the representing matrix; so we use a metric to higher one of its indexes to obtain a linear operator for which it is possible to consider the eigenvalues. Here, a different definition is introduced: in this way, it is easier to generalize in order to deal with infinite dimensional systems. Proposition 5.1. The following proposition are equivalent: (1) S + = {(f, e) F E (f, e), (f, e) = (f, e) 2 Z } (2) S + = {(f, e) F E e j = z ij f i f j = z ij e i } Proof. We start to prove that (1) (2). From (5.3a) in Def. 5.2, we have that 2 e i f i = e i f i + e j f j = z ij f i f j + z ij e i e j or, equivalently, that ( zij f j e i ) f i + ( z ij e i f j) e j = 0 for all f F and e E. Since [z ij ] is non-singular, the previous relation is satisfied if e i = z ij f j or, equivalently, if f j = z ij e i. The inverse implication (2) (1) is immediate. If we suppose that e j = z ij f i, or, equivalently, that f j = z ij e i, we obtain (f, e) 2 Z = z ijf i f j + z ij e i e j = e j f j + f j e j = (f, e), (f, e) which completes the proof. Note 5.3. The same result holds for S. We can say that S = {(f, e) F E e j = z ij f i f j = z ij e i } Moreover, using vector notation, it is possible to say that S + = {(f, e) F E e = Zf f = Z 1 e} S = {(f, e) F E e = Zf f = Z 1 e}
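These characterizations are easy to check numerically. The following sketch (with a randomly generated metric, purely for illustration) verifies that pairs with $e = Zf$ satisfy the defining relation of $S^+$ in Def. 5.2, that pairs with $e = -Zf$ satisfy that of $S^-$, and that the two subspaces are orthogonal in the inner product induced by $Z$, anticipating the propositions that follow.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
M = rng.normal(size=(n, n))
Z = M @ M.T + n * np.eye(n)          # symmetric, positive definite metric
Zi = np.linalg.inv(Z)

def plus_pairing(f1, e1, f2, e2):    # <<(f1, e1), (f2, e2)>>
    return e2 @ f1 + e1 @ f2

def sq_norm(f, e):                   # ||(f, e)||_Z^2
    return f @ Z @ f + e @ Zi @ e

f_p = rng.normal(size=n); e_p = Z @ f_p        # candidate element of S+
f_m = rng.normal(size=n); e_m = -Z @ f_m       # candidate element of S-

print(np.isclose(plus_pairing(f_p, e_p, f_p, e_p),  sq_norm(f_p, e_p)))   # True
print(np.isclose(plus_pairing(f_m, e_m, f_m, e_m), -sq_norm(f_m, e_m)))   # True

# orthogonality of S+ and S- in the Z-induced inner product
inner = f_p @ Z @ f_m + e_p @ Zi @ e_m
print(np.isclose(inner, 0.0))                                             # True
```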

146 130 Scattering with applications Proposition 5.2. These conditions hold (1) S + S = {(0, 0)} (2) dim S + = dim S = dim F = dim E = n (3) given s + = (f +, e + ) S + and s = (f, e ) S, then s + s with respect to the scalar product induced on F E by the norm Z = [z ij ] on F. Proof. Conditions (1) and (2) are an immediate consequence of Prop. 5.1 and Note 5.3. To prove condition (3), it is necessary to define the inner product ( ) on F E related with the metric of Def Given (f i, e i ) F E, i = 1, 2, we have that Now, from Prop. 5.1 and Note 5.3, we have that ( s + s ) = ( (f +, e + ) (f, e ) ) ((f 1, e 1 ) (f 2, e 2 )) := z ij f i 1f j 2 + zij e 1,i e 2,j (5.4) = z ij f +,i f,j + z ij e + i e j = e+ j f,j e + i f,i = 0 and, clearly, s + s with respect to the given metric. Note 5.4. From the previous proposition, we deduce that F E = S + S. The space of power variables is given by the sum of two vector subspaces, that are orthogonal with respect to each other and to a given metric. Proposition 5.3. The +pairing operator restricted to S + or S gives the inner product ( ) defined by (5.4) in Prop In other words, if (f i, e i ) S +, i = 1, 2, then and, if (f i, e i ) S, i = 1, 2, then Proof. If (f i, e i ) S +,we have that (f 1, e 1 ), (f 2, e 2 ) = ((f 1, e 1 ) (f 2, e 2 )) (f 1, e 1 ), (f 2, e 2 ) = ((f 1, e 1 ) (f 2, e 2 )) (f 1, e 1 ), (f 2, e 2 ) = e 1,i f i 2 + e 2,i f i 1 = z ij f j 1 f i 2 + z ij e 1,j e 2,i = ((f 1, e 1 ) (f 2, e 2 )) since Prop. 5.1 holds. In the same way, if (f i, e i ) S, i = 1, 2, we have that (f 1, e 1 ), (f 2, e 2 ) = e 1,i f i 2 + e 2,i f i 1 = z ij f j 1 f i 2 z ij e 1,j e 2,i = ((f 1, e 1 ) (f 2, e 2 )) Proposition 5.4. If s + = (f +, e + ) S + and s = (f, e ) S, then s +, s = 0

147 5.2 Scattering in the finite dimensional case 131 Proof. As in the proof of Prop. 5.3, we have that s +, s = (f +, e + ), (f, e ) = e i f +,i + e + i f,i = z ij f,j f +,i + z ij f +,j f,i = 0 From Prop. 5.2, it is clear that (f, e) F E,! s + = (f +, e + ) S + and! s = (f, e ) S, such that (f, e) = (f + + f, e + + e ) = s + + s, with s + and s orthogonal in the sense of the same proposition. Immediate consequence is the following Theorem 5.5 (scattering power decomposition). Given (f, e) F E and any metric Z = [z ij ] on F, the following relation holds: e, f = 1 2 s + 2 Z 1 s 2 (5.5) 2 Z where s + = (f +, e + ) S +, s = (f, e ) S, (f, e) = s + + s and the metric 2 Z restricted on S + and S. is Proof. Since e, f = 1 2 (f, e), (f, e) from Prop. 5.3 and Prop. 5.4, we have that e, f = 1 2 (s+ + s ), (s + + s ) = 1 2 s+, s s, s + s +, s = 1 2 s + 2 Z 1 s 2 2 Z which completes the proof. Note 5.5. Relation (5.5) shows that the power flow can be written as the sum of a positive and a negative power that only depend on the scattering variables. The scattering variable s + could be related to an incoming power flow and s to an outgoing power flow. This decomposition is always possible if a certain metric operator is defined; changing the metric will result in a change of the scattering decomposition Scattering mapping and scattering matrix If we decide to represent the elements of the space of effort and flow as column vectors, it is possible to obtain a coordinate mapping between the space of power variables and the space of scattering variables. First of all, define a base for F as the columns of a matrix B and the dual base, that is the base of E, as the columns of a matrix B such that B T B = B T B = I n. The corresponding base matrix for F E can be defined as the columns of a matrix B, where [ ] B 0n B := 0 n B

148 132 Scattering with applications So, it is possible to see that [ S + = Im B Z 1 I n ] [ N S = Im B In 2 Z ] N 1 2 (5.6) where N is the symmetric square root of Z, that is Z = NN. The terms N/ 2 and N 1 / 2 are used for normalization, so the columns of the matrices whose image is considered are orthonormal in the induced norm Z. From now, it is assumed that F = E = R n and B = I 2n. Now, we can state the following Proposition 5.6 (scattering mapping). If the bases of the scattering subspaces S + and S are chosen as in (5.6), the mapping relating the coordinates (f, e) of flows and effort to the coordinates s + and s of the corresponding scattering variables is given by f = N 1 2 (s + s ) e = N 2 (s + + s ) (5.7a) or, inverting the relations s + = N 1 2 (e + Zf) s = N 1 2 (e Zf) (5.7b) Proof. Given s + S + and s S, it is possible to find two vectors s +, s R n (coordinates in the scattering subspaces), such that, according to (5.6), we have s + = [ Z 1 I n ] [ N s + and s In = 2 Z Since (f, e) = s + + s, in coordinates we can write that [ f e ] = [ Z 1 I n ] [ N s + In + 2 Z ] ] N 1 2 s N 1 2 s = 1 2 [ N 1 (s + s ) N(s + + s ) ] that is the first of (5.7). The inverse map follows by simple calculations. Note 5.6. The expression found for the mappings (5.7) is a direct consequence of the choice (5.6) for the bases of S + and S. From Prop. 5.1 and 5.2, we deduce that it is possible to identify the scattering subspaces with the space of flows F or with the space of efforts E. We need only to choose different bases for S + and S. We can say to have a flow-representation of the scattering subspaces if it is assumed that S + = Im [ In Z ] 2 2 S = Im [ In Z ] 2 2

149 5.2 Scattering in the finite dimensional case 133 and an effort-representation if [ Z 1 [ Z 1 S + = Im I n ] 2 2 S = Im I n ] 2 2 Note that these bases are only orthogonal, and not orthonormal as (5.6), with respect to the inner product ( ). For example, assuming a flow-representation for both the scattering subspaces, the scattering mappings (5.7) become f = e = 2 2 (s+ s ) 2 2 Z(s+ + s ) and s + = s = 2 2 (Z 1 e + f) 2 2 (Z 1 e f) An analogous result holds if the effort-representation of the scattering subspaces is adopted. Consider two systems (A and B) with the same state of power variables and operate the scattering decomposition for both. Clearly, proper metrics Z A and Z B have to be chosen for the two spaces of power variables and, in general, the scattering decomposition will not be the same for each system. The following proposition shows the relation between the scattering variables when the two system are connected in power-conserving interconnection, (Stramigioli, 2001). Proposition 5.7 (scattering matrix). Consider two systems A and B with power variables (f A, e A ) and (f B, e B ) belonging to the same space F E and operate a scattering decomposition after having chosen two proper metric Z A and Z B in relation with the physical properties of the two systems. Suppose to interconnect A and B in power conserving interconnection, i.e. with common effort. If s ± A S± A and s± B S± B are the scattering variables, the following relation holds: [ ] [ ] s + A s A s + = S B s (5.8a) B where S = 1 2 [ (ZA + Z B ) 1 (Z A Z B ) 2 (Z A + Z B ) 1 N B N A 2 (Z A + Z B ) 1 N A N B (Z A + Z B ) 1 (Z A Z B ) ] (5.8b) is orthogonal, and N A and N B are the symmetric square roots of Z A and Z B, that is Z A = N A N A and Z B = N B N B. Proof. Interconnecting with common effort means that e A = e B and f A = f B. Using the mapping (5.7a) of Prop. 5.6, it is possible to write a relation between the scattering variables of system A and B. In fact, we have that { NA (s + A + s A ) = N B(s + B + s B ) N 1 A (s+ A s 1 A ) = NB (s+ B s B ) and the result is proved solving the linear system for s + A and s+ B. The fact that S is an orthogonal matrix, that is S T S = I 2n, is a consequence of (5.8a) and of the power conserving interconnection

between the two systems, which can be re-written, in terms of scattering variables, as
\[
\frac{1}{2}\left( \| s_A^+ \|^2_{Z_A} - \| s_A^- \|^2_{Z_A} \right)
= -\frac{1}{2}\left( \| s_B^+ \|^2_{Z_B} - \| s_B^- \|^2_{Z_B} \right)
\]

Note 5.7. An analogous result could be obtained with the common flow interconnection, i.e. $f_A = f_B$ and $e_A = -e_B$.

Note 5.8. If $Z_A = Z_B$, then
\[
S = \begin{bmatrix} 0_n & I_n \\ I_n & 0_n \end{bmatrix}
\]
since $N_A = N_B$, and clearly
\[
s_A^+ = s_B^- \qquad s_B^+ = s_A^-
\]
The outgoing power from A becomes the incoming power in B and, symmetrically, the outgoing power from B is the incoming power in A. There is no power reflection at the interconnection point or, in other words, on the borders of the two systems (both the orthogonality of $S$ and this matched case are checked numerically in the sketch reported at the end of the following introduction). A power reflection phenomenon is present if $Z_A \neq Z_B$, that is if the physical properties of the two systems are different. In circuit theory, we have this kind of behavior, for example, when we interconnect two transmission lines with different impedances. Since the impedance of the line is related to the metric on the space of flows and efforts (voltages and currents), the result of Prop. 5.7 has a direct physical interpretation. This is a consequence of the fact that we require the norm of the power variables to be the power exchanged by the system during the interaction through its port. So the metric is the line impedance.

5.3 Scattering and telemanipulation

Introduction

Teleoperation (or telemanipulation) is one of the first applications of robotics. Telemanipulation schemes are bilateral systems made of a local, primary side, called master, and a remote one, called slave. The human operator interacts with the master manipulator in two ways: by driving the remote side in the completion of the task and, then, by receiving a proper force feedback from the slave based on what is actually happening at the remote side. Master and slave manipulators are governed by two local controllers and exchange their respective information with the opposite side through the communication channel. The channel can be characterized by its bandwidth and by the delay introduced in the information exchange: these properties depend on the kind of transmission implemented. As proposed in Fig. 5.2, the teleoperation system can be separated into a cascade of simple elements. The communication channel can be considered independently from the remaining part of the system and from the particular operating configuration. This choice allows one to concentrate on the problems introduced by the delay in the communication channel. It is well-known, in fact, that even a small delay in the control loop can lead to instability in some particular operative configurations, even if the closed-loop system, without the delay, is asymptotically stable.
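Before analyzing the delayed channel, the claims of Prop. 5.7 and Note 5.8 can be checked numerically. Rather than coding the closed form (5.8b), the sketch below simply solves the common-effort interconnection constraints written with the mapping (5.7a); the two impedances are generated at random, purely for illustration.

```python
import numpy as np

def sym_sqrt(Z):
    """Symmetric square root N of a symmetric positive definite Z = N N."""
    w, V = np.linalg.eigh(Z)
    return V @ np.diag(np.sqrt(w)) @ V.T

def scattering_matrix(ZA, ZB):
    """Map (s-_A, s-_B) -> (s+_A, s+_B) implied by the common-effort,
    power-conserving interconnection e_A = e_B, f_A = -f_B of Prop. 5.7."""
    NA, NB = sym_sqrt(ZA), sym_sqrt(ZB)
    iNA, iNB = np.linalg.inv(NA), np.linalg.inv(NB)
    A = np.block([[NA, -NB], [iNA, iNB]])     # coefficients of (s+_A, s+_B)
    B = np.block([[-NA, NB], [iNA, iNB]])     # coefficients of (s-_A, s-_B)
    return np.linalg.solve(A, B)

n = 2
rng = np.random.default_rng(1)
M = rng.normal(size=(n, n))
ZA = M @ M.T + n * np.eye(n)                  # two illustrative impedances
ZB = np.diag([1.0, 3.0])

S = scattering_matrix(ZA, ZB)
print("S orthogonal:", np.allclose(S.T @ S, np.eye(2 * n)))

S_matched = scattering_matrix(ZA, ZA)         # Z_A = Z_B, Note 5.8
swap = np.block([[np.zeros((n, n)), np.eye(n)],
                 [np.eye(n), np.zeros((n, n))]])
print("no reflection when Z_A = Z_B:", np.allclose(S_matched, swap))
```

The orthogonality of the numerically obtained map is exactly the power conservation property used in the proof of Prop. 5.7, while the matched case reproduces the pure "swap" of Note 5.8.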

Figure 5.2: Structure of a telemanipulation system (operator, master, communication channel, slave, environment).

In this section, it is shown how scattering theory can be fruitfully applied in order to develop telemanipulation systems characterized by nice stability properties, independently of the delays introduced by the communication channel. In particular, if master and slave are controlled by means of passive control schemes, the resulting closed-loop system is still passive, even in presence of unpredictable time delays. The basic idea is to properly code the data exchanged between master and slave in such a way that the communication channel remains passive even in presence of (large) delays and in every operative configuration.

In the next subsection, it is shown that the transmission channel can be the source of regenerative effects, due to time delays, that can bring the telemanipulation scheme to instability. Moreover, a first simple solution to this problem is discussed. The idea is to add damping at master and slave side: in this way, stability can be achieved, but performance is very poor. Scattering theory is the solution, (Anderson and Spong, 1989; Niemeyer and Slotine, 1991). If the exchange of information between master and slave is coded in terms of scattering variables, then the communication channel is passive independently of the delays, which could be a priori unknown. Consequently, overall stability can be achieved by controlling master and slave by means of passive control schemes. An application of these considerations to the interconnection of (finite dimensional) port Hamiltonian systems through a transmission line with time delay is discussed at the end of this section, (Stramigioli et al., 2000; Stramigioli, 2001; Stramigioli et al., 2002). In particular, the performance of a telemanipulation system with communication based on an exchange of scattering information is analyzed within the framework of port Hamiltonian systems.

Dealing with time delays in the communication channel

The communication channel can be treated as a 2-port element. Denote by M the port at master side and by S the port at slave side. The interaction between master and slave is simply an exchange of power between these two subsystems. The power interconnection of physical systems can be expressed in terms of a relation between either power variables or scattering variables. In the first case, see Fig. 5.3(a), the power flow $P$ is given by
$$ P = \langle e_M, f_M \rangle - \langle e_S, f_S \rangle = e_M^T f_M - e_S^T f_S \qquad (5.9a) $$
where the minus sign is due to the convention adopted for the power flows at master and slave side. If the interconnection is expressed by means of a relation involving scattering variables, see Fig. 5.3(b), the power flow $P$ can be equivalently expressed as
$$ P = -\left( \frac{1}{2}\| s^+_M \|^2 - \frac{1}{2}\| s^-_M \|^2 \right) - \left( \frac{1}{2}\| s^+_S \|^2 - \frac{1}{2}\| s^-_S \|^2 \right) \qquad (5.9b) $$

Figure 5.3: A 2-port element: (a) power variables, (b) scattering variables.

where the minus signs are due to the fact that, at each port, $s^+$ is associated with the outgoing power flow and $s^-$ with the incoming one. In both cases, the communication channel is passive (see Def. 2.2 and Def. 2.3) if and only if it is possible to find a lower-bounded energy storage function $E$ and a non-negative power dissipation function $P_d$ such that
$$ P = P_d + \frac{dE}{dt} \qquad (5.10) $$
The communication element connects master and slave, closing the overall control loop by transmitting data to and from both sides. If a time delay is introduced, the resulting closed-loop system can become unstable. It is important to point out that the type of transmitted data can be arbitrarily chosen. Furthermore, this particular choice deeply affects the overall stability properties of the system. In (Anderson and Spong, 1989; Niemeyer and Slotine, 1991), the problem of finding the right set of data to be transmitted in order to achieve a passive behavior of the communication channel independently of the time delay is discussed. It is shown that, if the data sent over the communication channel are the scattering variables, then stability is assured.

Since we desire to develop a teleoperation system that is able to provide force feedback to the user at master side, the standard communication mechanism between master and slave can be the following: the local velocity is transmitted to the remote site, thus becoming a velocity command, while, at the same time, the remote force is transmitted back to the local site to provide the desired force command. If $T$ denotes the time delay, this communication procedure is described by
$$ \begin{cases} f_S(t) = f_M(t-T) \\ e_M(t) = e_S(t-T) \end{cases} \qquad (5.11) $$
Assume, for simplicity, that flows and efforts are scalar; the results presented in the remaining part of this section are also valid in the multidimensional case. Moreover, if $z$ denotes a gain relating flows and efforts, which can be interpreted as the impedance of the line, the power flow

Figure 5.4: Stabilization by dissipation of the channel (dissipative elements R: z and R: 1/z at the two ports).

(5.9) can be written as
$$ \begin{aligned} P(t) &= e_M(t) f_M(t) - e_S(t) f_S(t) \\ &= \frac{1}{2z} e_M^2(t) + \frac{z}{2} f_M^2(t) - \frac{1}{2z} (e_M - z f_M)^2(t) + \frac{1}{2z} e_S^2(t) + \frac{z}{2} f_S^2(t) - \frac{1}{2z} (e_S + z f_S)^2(t) \\ &= \frac{1}{z} e_M^2(t) - \frac{1}{2z} (e_M - z f_M)^2(t) + z f_S^2(t) - \frac{1}{2z} (e_S + z f_S)^2(t) + \frac{d}{dt} \int_{t-T}^{t} \left[ \frac{z}{2} f_M^2(\tau) + \frac{1}{2z} e_S^2(\tau) \right] d\tau \end{aligned} $$
If
$$ E = \int_{t-T}^{t} \left[ \frac{z}{2} f_M^2(\tau) + \frac{1}{2z} e_S^2(\tau) \right] d\tau \qquad (5.12) $$
is the storage function and
$$ P_d = \frac{1}{z} e_M^2(t) - \frac{1}{2z} (e_M - z f_M)^2(t) + z f_S^2(t) - \frac{1}{2z} (e_S + z f_S)^2(t) $$
the power dissipation function, then the power balancing relation (5.10) has been obtained. But, for the communication to be passive, the function $P_d$ has to be non-negative at all times. However, specific choices of the port variables can originate negative dissipation, that is, regenerative effects that could drive the overall system unstable. This consideration does not mean that the closed-loop system is unstable: it simply states that there always exists a controller/manipulator setup (which may even be passive) such that the overall system is unstable, (Niemeyer and Slotine, 1991).

A possible solution is to place dissipation elements at the communication ports, as described in Fig. 5.4. In this case, the effort at master side and the flow at slave side are respectively given by
$$ e_M'(t) = e_M(t) + z f_M(t) \qquad \text{and} \qquad f_S'(t) = f_S(t) - \frac{1}{z} e_S(t) $$
while the data sent over the channel are still given by (5.11). The power flow is, then, given by
$$ P(t) = e_M'(t) f_M(t) - e_S(t) f_S'(t) $$

Figure 5.5: Power exchange in terms of scattering variables.

which can be written as in (5.10) if the storage function is chosen as in (5.12) and the dissipation function as
$$ P_d = \frac{1}{2z} \big( e_M' \big)^2 + \frac{z}{2} \big( f_S' \big)^2 $$
The total dissipation is non-negative: the dissipation elements remove all the energy that could otherwise be regenerated by the channel. The modified communication scheme is passive, independently of the time delay $T$. Note, however, that dissipation is present as long as either $f_S'$ or $e_M'$ is different from zero. Then, a continuous power flow is necessary in order to sustain a constant motion and, more importantly, to sustain a constant force reflection. This can be a problem since, in steady state, the operator cannot feel or apply a constant force to the remote environment. Moreover, the dissipation effect changes the velocity command at slave side. Consequently, if forces are reflected, the remote position drifts away from the master reference. Only in the case $e_S = 0$ can the slave track the master velocity. In conclusion, the proposed transmission methodology gives nice results in terms of stability, which can be achieved independently of time delays, but provides poor performance.

Scattering theory solves this performance problem. Based on this theory, it is possible to set up a communication channel that imitates the behavior of a lossless transmission line. In particular, a passive communication can be achieved by directly transmitting the wave variables $s^+$ and $s^-$ instead of the power variables $f$ and $e$. Consider the interconnection scheme of Fig. 5.5. Note that, differently from Fig. 5.3(a) and Fig. 5.4, master and slave are assumed to have the same (input) causality: with this choice, the telemanipulation scheme of Fig. 5.5 is completely symmetric. Both the master and the slave power variables are coded in the corresponding scattering ones (and vice-versa) according to the scattering maps (5.7). Moreover, since the transmitted data are the scattering variables, the communication procedure can be described by the following relations:
$$ \begin{cases} s^+_S(t) = s^-_M(t-T) \\ s^+_M(t) = s^-_S(t-T) \end{cases} \qquad (5.13) $$
Consequently, the power flow (5.9b) can be written as
$$ \begin{aligned} P(t) &= \frac{1}{2} \| s^-_M(t) \|^2 - \frac{1}{2} \| s^+_M(t) \|^2 + \frac{1}{2} \| s^-_S(t) \|^2 - \frac{1}{2} \| s^+_S(t) \|^2 \\ &= \frac{1}{2} \| s^-_M(t) \|^2 + \frac{1}{2} \| s^-_S(t) \|^2 - \frac{1}{2} \| s^-_M(t-T) \|^2 - \frac{1}{2} \| s^-_S(t-T) \|^2 \\ &= \frac{d}{dt} \int_{t-T}^{t} \left[ \frac{1}{2} \| s^-_M(\tau) \|^2 + \frac{1}{2} \| s^-_S(\tau) \|^2 \right] d\tau \end{aligned} $$
The communication channel, then, behaves as a lossless system with a positive storage function which is simply given by the integral of the incoming power over the duration of the transmission.
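To make the comparison between the two transmission policies concrete, the following discrete-time sketch is added here for illustration (it is not part of the original text; it assumes NumPy, and the signals, impedance and delay are arbitrary choices). It evaluates the dissipation function $P_d$ of the direct transmission (5.11), which can become negative, and then checks that with the wave transmission (5.13) the energy stored in the channel is non-negative and satisfies the balance $P = dE/dt$ exactly.

```python
import numpy as np

# Channel and simulation parameters (illustrative values only).
z, w = 1.0, 1.0                 # line impedance and signal frequency
T = np.pi / (2 * w)             # delay chosen so that the raw scheme regenerates energy
dt = 1e-3
t = np.arange(0.0, 20.0, dt)
d = int(round(T / dt))          # delay in samples

def delayed(x, d):
    """x(t - T) with zero initial history."""
    y = np.zeros_like(x)
    y[d:] = x[:-d]
    return y

# --- Scheme 1: transmit power variables directly, eq. (5.11) ---------------
f_M = -np.sin(w * t)            # velocity injected by the master side
e_S = np.cos(w * t)             # force injected by the slave side
e_M = delayed(e_S, d)           # e_M(t) = e_S(t - T)
f_S = delayed(f_M, d)           # f_S(t) = f_M(t - T)

P_d = (e_M**2 / z - (e_M - z * f_M)**2 / (2 * z)
       + z * f_S**2 - (e_S + z * f_S)**2 / (2 * z))
print("min P_d (raw transmission):", P_d.min())   # negative: regenerative effect

# --- Scheme 2: transmit scattering (wave) variables, eq. (5.13) ------------
s_M_out = np.cos(w * t)         # s^-_M, wave sent into the channel by the master side
s_S_out = np.sin(2 * w * t)     # s^-_S, wave sent into the channel by the slave side
s_M_in = delayed(s_S_out, d)    # s^+_M(t) = s^-_S(t - T)
s_S_in = delayed(s_M_out, d)    # s^+_S(t) = s^-_M(t - T)

# Power entering the channel and energy stored "in flight" over the last T seconds.
P = 0.5 * (s_M_out**2 - s_M_in**2 + s_S_out**2 - s_S_in**2)
E = np.array([0.5 * dt * ((s_M_out[max(k - d, 0):k]**2).sum()
                          + (s_S_out[max(k - d, 0):k]**2).sum())
              for k in range(len(t))])
assert (E >= 0).all()                              # lossless channel: storage never negative
assert np.allclose(np.diff(E), P[:-1] * dt)        # exact discrete balance P = dE/dt
```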

Figure 5.6: Scattering interconnection of phd systems.

Note the independence of this passivity property from the amount of delay introduced. Since, from (5.7b), we have that
$$ s^+ = \frac{1}{\sqrt{2z}}\,(e + z f) \qquad s^- = \frac{1}{\sqrt{2z}}\,(e - z f) $$
at both master and slave side (this is a consequence of the fact that the same causality has been assumed), in terms of power variables the transmission equations (5.13) can be easily written as follows, (Anderson and Spong, 1989; Niemeyer and Slotine, 1991):
$$ \begin{aligned} e_M(t) &= e_S(t-T) - z\,[f_M(t) + f_S(t-T)] \\ f_S(t) &= -f_M(t-T) - \frac{1}{z}\,[e_S(t) - e_M(t-T)] \end{aligned} $$
Note that, when the time delay $T$ becomes zero, transmitting scattering variables is equivalent to transmitting power variables. Thus, this procedure allows one to robustify systems against time delays, (Niemeyer and Slotine, 1991). Since the communication mechanism has been developed as a paradigm of the physical phenomenon of wave propagation, and since waves are reflected at junctions and terminations, that is, where the impedance of the line changes, the same behavior may occur at both the local and the remote site. To avoid these reflections, which in the context of teleoperation cause an oscillatory behavior of the closed-loop system, the impedance of the wave transmission has to be modified in order to match the impedance of the other systems (see Note 5.8). More details can be found in (Niemeyer and Slotine, 1991); moreover, further details on port Hamiltonian systems and scattering can be found in (Stramigioli, 2001; Stramigioli et al., 2002). These results are briefly commented upon in the following subsection.

Telemanipulation and phd systems

Consider again the interconnection scheme of Fig. 5.5 and suppose to interconnect a couple of port Hamiltonian systems at master and slave side. Since the scheme is symmetric, Fig. 5.6 summarizes the interconnection at both sides. Consider a generic port Hamiltonian system
$$ \begin{cases} \dot{x} = [J(x) - R(x)]\, \dfrac{\partial H}{\partial x} + G(x)\, f \\ e = G^T(x)\, \dfrac{\partial H}{\partial x} \end{cases} \qquad (5.14) $$
As pointed out at the end of the previous subsection, the main problem related to the use of scattering variables in telemanipulation is that waves are reflected at junctions and terminations, that is, where the

impedance of the line changes. In order to avoid this class of phenomena, it is necessary to match the impedance when interconnecting (5.14) to the transmission line. As pointed out in (Stramigioli et al., 2002), a theoretical condition for the impedance matching of a system connected to a transmission line is that the system seen at the scattering side of the transformation of Fig. 5.6, having $s^+$ as input and $s^-$ as output, has to be of relative degree 1, that is, the system should have no direct feed-through. From (5.7b) and (5.14), the phd system with inputs/outputs given by the scattering variables can be written as
$$ \begin{cases} \dot{x} = \big[ J(x) - R(x) - G(x) N^{-1} N^{-1} G^T(x) \big]\, \dfrac{\partial H}{\partial x} + \sqrt{2}\, G(x) N^{-1} s^+ \\ s^- = \sqrt{2}\, N^{-1} G^T(x)\, \dfrac{\partial H}{\partial x} - s^+ \end{cases} $$
The input $s^+$ has a direct feed-through to the output $s^-$, thus implying that the incoming power is immediately sent back, independently of the state of the system. In order to solve this problem, it is necessary to enlarge the class of port Hamiltonian systems by considering the phd systems of the following form:
$$ \begin{cases} \dot{x} = [J(x) - R(x)]\, \dfrac{\partial H}{\partial x} + G(x)\, f \\ e = G^T(x)\, \dfrac{\partial H}{\partial x} + B(x)\, f \end{cases} \qquad (5.15) $$
where $B(x) = B^T(x) \geq 0$ is a new dissipation matrix. It is easy to prove that, in this case, the output scattering variable $s^-$ is given by
$$ s^- = \Big[ \frac{\sqrt{2}}{2} B(x) N^{-1} + \frac{\sqrt{2}}{2} N \Big]^{-1} G^T(x)\, \frac{\partial H}{\partial x} + F s^+ $$
with
$$ F = \big[ B(x) N^{-1} + N \big]^{-1} \big[ B(x) N^{-1} - N \big] $$
Then, we have impedance matching when $F = 0$, that is, when
$$ B = N N = Z $$
In conclusion, a system of the form (5.15) for which $B = Z$, with $Z$ the impedance of the line to which it is connected, guarantees the matching condition and, consequently, no power reflection phenomena at the interconnection point, (Stramigioli et al., 2002).

5.4 Scattering for distributed systems

Basic definitions and results

Consider the distributed port Hamiltonian system (3.20), in which $D$ is a Riemannian manifold denoting the spatial domain where the physical phenomena to be modeled take place. Assume that $\dim D = n+1$: then, $\dim \partial D = n$. Define $N := \partial D$, which turns out to be an $n$-dimensional Riemannian manifold. If the power flow through the distributed port is equal to zero, the energy balancing equation (3.21) becomes
$$ \frac{dH}{dt} = \int_{N} f_b \wedge e_b $$

157 5.4 Scattering for distributed systems 141 in which the integral terms represents the power supplied to the system by the environment. The boundary of the spatial domain can be treated as the power port through which an exchange of power between system and its environment takes place: it can be of interest to extend the finite dimensional scattering theory discussed in Sec. 5.2 in order to locate the incoming and the outgoing power flow. The first step is the generalization of scattering subspaces to the distributed parameter case. As discussed in Def. 5.2, in order to define the scattering subspaces in a geometric way, it is necessary to provide the space of power variables with a symmetric bilinear operator (the +pairing) and a metric. Since our interest is focused only on the border power variables, from (3.1) and (3.3), we have that the space of flows is given by F = Ω k (N ), while the space of efforts by E = (Ω k (N )) = Ω n k (N ), where the definition of duality on the space of differential form results from Prop. 3.1 and the duality product is given by: β, α := α β with α F and β E. Furthermore, the +pairing operator is given by (see also Def. 1.5): (α 1, β 1 ), (α 2, β 2 ) := β 2, α 1 + β 1, α 2 = (α 2 β 1 + α 1 β 2 ) (5.16) where (α i, β i ) F E, i = 1, 2. The most critical point is the identification of a suitable norm on the space of power variables. Since N is a Riemannian manifold, it is well defined a symmetric positive definite 2-contra-variant tensor g T2 0 (N ), the Riemannian metric. Based on this tensor, it is possible to define the Hodge star operator (see Prop. A.10), which is a map : F E satisfying the properties of Prop. A.11. The Hodge star operator is related to the bilinear, symmetric, non-degenerate and positive definite tensor g (k) (, ) ( ) induced on Ω k (N ) F by g (see Prop. A.9). Then, the following proposition can be deduced. N N Proposition 5.8. Suppose that (α i, β i ), (α, β) F E, i = 1, 2. Then ((α 1, β 1 ) (α 2, β 2 )) := α 1 α 2 + β 1 β 2 is an inner product on F E, and (α, β) 2 := is a norm on F E. N N N α α + β β N (5.17a) (5.17b) Proof. The proposition is an immediate consequence of Prop. A.9, Prop. A.10 and the properties of an inner product. As discussed in Sec. 5.2, the scattering decomposition is possible when a +pairing operator, or a dual product operation, and a metric are defined over the space of power variables. Then, we can state the following important definition, (Macchelli et al., 2002b). Definition 5.3 (scattering subspaces in the infinite dimensional case). Suppose that N is an n-dimensional Riemannian manifold and assume that F = Ω k (N ) is the space of flows

158 142 Scattering with applications and that E = Ω n k (N ) is the space of efforts. The scattering subspaces S +, S F E are defined as S + := {(f, e) F E (f, e), (f, e) = (f, e) 2} S := {(f, e) F E (f, e), (f, e) = (f, e) 2} Note 5.9. This is the same definition as in the finite dimensional case (see Def. 5.2). From (5.17b) and (5.16), we can alternatively write that S + := S := We deduce the following { (f, e) F E 2 { (f, e) F E 2 N N f e = f f + N f e = f f N } e e } e e N N (5.18a) (5.18b) Proposition 5.9. The scattering subspaces of Def. 5.3 can be written as S + = S = { } (f, e) F E f = ( 1) k(n k) e e = f { } (f, e) F E e = f f = ( 1) k(n k) e (5.19a) (5.19b) Proof. We start proving that (5.18a) (5.19a). Since for all Ñ N, from Prop. A.2, we deduce that 2 f e = f f + e e (5.20) Ñ Ñ Ñ 2 f e = f f + e e = f f + ( 1) k(n k) e e or, equivalently, that ( ) f ( f e) + ( 1) k(n k) e f e = 0 for all f F and e E. This can be true if e = f or, equivalently, if f = ( 1) k(n k) e, as indicated in (5.19a). Furthermore, it is immediate to verify that (5.19a) (5.18a). It is only needed to observe that (5.20) is satisfied if it is assumed that e = f, or equivalently, that f = ( 1) k(n k) e. Applying the same procedure, it is possible to prove that (5.18b) (5.19b). Proposition These conditions hold (1) S + S = {(0, 0)} (2) F, E, S + and S are isomorphic spaces (3) if s + = (f +, e + ) S + and s = (f, e ) S, then (s + s ) = 0. This means that s + s with respect to the scalar product on F E defined by (5.17a) in Prop. 5.8.

159 5.4 Scattering for distributed systems 143 Proof. (1) is a direct consequence of Def. 5.3 while (2) is immediate from Prop. A.10 and Note 5.9. As regard (3), since the first property in Prop. A.11 holds, we have that ( s + s ) = ( (f +, e + ) (f, e ) ) = f + f + e + e From Prop. 5.9, we have that f = e and ( 1) k(n k) e + = f + : consequently, ( s + s ) = f + f + e + e = f + f + e e + N N N N = f + e + ( 1) k(n k) e f + = 0 N N N N which completes the proof. Note Immediate consequence of the previous proposition is that F E = S + S, with S + and S orthogonal in the sense of the inner product (5.17a). Proposition The +pairing restricted on S + and S gives the inner product ( ). Proof. If s + i = (f i +, e+ i ) S+, i = 1, 2, then s + 1, s+ 2 = (f 1 + e+ 2 + f 2 + e+ 1 ) N = f 1 + f ( 1)k(n k) e + 2 e+ 1 N N = ( (f 1 +, e+ 1 ) (f 2 +, e+ 2 )) = ( s + 1 ) s+ 2 Moreover, if s i = (fi, e i ) S, i = 1, 2, then s 1, s 2 = (f1 e 2 + f 2 e 1 ) N = f1 f 2 ( 1)k(n k) e 2 e 1 N N = ( (f1, e 1 ) (f 2, e 2 )) = ( s 1 ) s 2 Proposition If s + = (f +, e + ) S + and s = (f, e ) S, then s +, s = 0. Proof. Since e + = f + and e = f, we have that s +, s = (f + e + f e + ) = N N f + f + f f + = 0 N Before stating the infinite dimensional version of Theorem 5.5, it is important to note that, given (f, e) F E, are unequivocally determined s + = (f +, e + ) S + and s = (f, e ) S such that (f, e) = (f + + f, e + + e ) = s + + s. This can be easily deduced from Prop and Note 5.10.

160 144 Scattering with applications Theorem 5.13 (scattering power decomposition, the infinite dimensional case). Given (f, e) F E, the following relation holds: e, f = 1 2 s s 2 2 where s + = (f +, e + ) S +, s = (f, e ) S, (f, e) = s + + s and the metric 2 is the one defined by (5.17b) and restricted on S + and S. Proof. Since Prop and 5.12 hold, we have that 2 e, f = (f, e), (f, e) = (f +, e + ), (f +, e + ) + (f, e ), (f, e ) = (f +, e + ) 2 (f, e ) 2 = s + 2 s 2 and finally e, f = 1 2 s s Scattering mapping As stated in Prop. 5.6 for the finite dimensional case, the scattering mapping is a linear and invertible map from the space of power variables F E to itself expressed as the direct sum of the scattering subspaces, that is to S + S = F E. In other words, given (f, e) F E, the scattering mapping provides the corresponding scattering variables s + S + and s S such that (f, e) = s + + s (5.21) or, as regard the inverse map, given a couple of scattering variables s + S + and s S, it makes possible to find (f, e) F E such that (5.21) holds. In the remaining part of this subsection, the generalization to the infinite dimensional case will be presented. First of all, it is necessary to introduce the following projection operators π + and π. Definition 5.4. Suppose that s + = (f +, e + ) S +. Then π + : S + F is defined as π + (s + ) := f + In the same way, if s = (f, e ) S, then π : S F is defined as π (s ) := f In Fig. 5.7, the basic idea behind the operators π + and π previously introduced is presented. Note that, in this figure, the space of power variables, and consequently, the space of scattering variables is supposed to be finite dimensional. Furthermore, s = (s +, s ) S + S is the expression in scattering variables of (f, e) = (f + + f, e + + e ) F E. Note From Prop. 5.9, it is immediate to verify that, if s + = (f +, e + ) S +, then e + = π + (s + ), and that, if s = (f, e ) S, then e = π (s ). Moreover, the maps π +

161 5.4 Scattering for distributed systems 145 E PSfrag replacements S s = (s +, s ) s = (f, e ) S + s + = (f +, e + ) f = π (s ) f + = π + (s + ) F Figure 5.7: The operators π + and π. and π are invertible, with inverse maps given by (π + ) 1 : F S + and (π ) 1 : F S, where, if f F, ( π + ) 1 (f) = (f, f) S + ( π ) 1 (f) = (f, f) S Clearly, s +, S +, and f F, we have that ( (π s +, +, = ) ) 1 ( π +, s +, ) ( f = π +, ( π +, ) ) 1 (f) The scattering mapping can be expressed by means of the operators π + and π. Consider the scattering variables s + S + and s S. Then, the corresponding power variables (f, e) F E, such that (5.21) holds, are given by { f = π + (s + ) + π (s ) e = π + (s + ) π (s (5.22a) ) or, in a matrix notation, by [ f e ] [ = π + π + π π ] [ s + s ] By inversion of (5.22a), it is possible to calculate the scattering variables s + and s, corresponding to the power variables (f, e). It is immediate to obtain that s + = 1 ( π + ) [ 1 f + ( 1) e] k(n k) 2 s = 1 ( π ) [ (5.22b) 1 f ( 1) e] k(n k) 2 or, in a matrix notation, that [ ] s + s = 1 2 [ (π + ) 1 ( 1) k(n k) (π + ) 1 (π ) 1 ( 1) k(n k) (π ) 1 ] [ f e ]

162 146 Scattering with applications PSfrag replacements D a D b A B D b D a D Figure 5.8: Interconnection of systems A and B over a subset D of their boundary. Relations (5.22) are the inverse and direct scattering maps in the infinite dimensional case, that is the generalizations of (5.7) for the infinite dimensional systems. In conclusion, we proved that Proposition 5.14 (scattering mapping, the infinite dimensional case). Consider (f, e) F E and suppose that s + S + and s S are the corresponding scattering variables such that (5.21) holds. Then (5.22a) and (5.22b) are the inverse and direct scattering maps in the infinite dimensional case Interconnection of distributed systems. Scattering matrix Consider two infinite dimensional systems A and B with spatial domain given by two (n + 1)- dimensional Riemannian manifolds D a and D b. Then, suppose that their borders D a and D b are n-dimensional Riemannian manifolds. On D a (resp. on D b ), assume that the space of flows is given by F a := Ω k ( D a ) (resp. by F b := Ω k ( D b )), and its dual, the space of efforts, by E a := Ω n k ( D a ) (resp. by E b := Ω n k ( D b )). Since a Riemannian metric a g ij (resp. b g ij ) is defined over the border of each spatial domain D a and D b, it is possible to define the Hodge star operator a (resp. b ) and the scattering subspaces S + a and S a (resp. S + b and S b) on D a (resp. on D b ) in a coordinate-free way. If D = D a D b, with dim D = n, we can say that D a and D b are interconnected. Moreover, D can be considered as the distributed port on which the interaction between the two physical systems takes place. This situation is represented in Fig As in the finite dimensional case, it is important to specify when the interconnection of two (or even more) infinite dimensional systems is power conserving. So, we give the following Definition 5.5 (Power conserving interconnection). Consider two physical systems A and B with spatial domain given by two (n + 1)-dimensional Riemannian manifolds D a and D b and borders D a and D b that are n-dimensional Riemannian manifolds. Then, define the space of flows and efforts over there as F a,b := Ω k ( D a,b ) and E a,b := Ω n k ( D a,b ) and suppose that D = D a D b is an n-dimensional Riemannian manifold. We say that the interconnection between A and B on D is power conserving if ( ) a f a r + b f b r = 0 D i ( a,b f, a,b e) F a,b E a,b and D i D.

163 5.4 Scattering for distributed systems 147 Note From the previous definition, we can say that the interconnection is power conserving if and only if, ( a,b f, a,b e) F a,b E a,b, it happens that a f a e + b f b e = 0 on D. Two possible solutions for this equation are { a f = b f a e = b and e { a f = b f a e = b e that are the well-known common-effort and common-flow interconnections of the finite dimensional case. Assume, for example, a common-effort interconnection between system A and B. Since D belongs to both the borders of A and B, we can say that, on this manifold, two Riemannian metrics a g ij and b g ij are defined. These metrics are strictly related to the physical properties of the two systems. Moreover, on D it will be possible to define two Hodge star operators a and b and, consequently, two distinct scattering decompositions. In other words, the space of power variables of A and B, restricted to D, can be described by using scattering variables related to two different scattering decomposition, clearly defined by means of two (different) Hodge star operators. Given ( a f, a e) F a E a and ( b f, b e) F b E b, in terms of scattering variables, it is possible to write that ( a f, a e) = a s + + a s ( b f, b e) = b s + + b s with a,b s + S + a,b and a,b s S a,b. Clearly, Prop holds for both the scattering subspaces defined on D a and D b. Moreover, it is possible to write two couple of equations (5.22a), one for each system. Assuming a common-effort interconnection on D, the relation { a f = b f a e = b e can be written, in terms of scattering variables, as { a π + ( a s + ) + a π ( a s ) = b π + ( b s + ) b π ( b s ) a a π + ( a s + ) a a π ( a s ) = b b π + ( b s + ) b b π ( b s ) (5.23) By simple manipulations, from (5.23) it is possible to obtain the following couple of relations: { ( a + b ) a π + ( a s + ) = ( a b ) a π ( a s ) 2 b b π ( b s ) (5.24) ( a + b ) b π + ( b s + ) = 2 a a π ( a s ) + ( b a ) b π ( b s ) Using a matrix notation, (5.24) can be written as [ ] [ ( a + b ) a π + 0 a ] s + b s + 0 ( a + b ) b π + [ ( a b ) a π 2 b b π = 2 a a π ( b a ) b π ] [ a s that is very close to (5.8a) in the finite dimensional case. Before stating an analogous result for the infinite dimensional systems, the following lemma is needed. b s ]

164 148 Scattering with applications Lemma Consider an n-dimensional Riemannian manifold N equipped with the positive Riemannian metrics a g ij and b g ij. If a and b are the corresponding Hodge operators, then, given ω Ω k (N ), a ω + b ω = 0 (5.25) if and only if ω = 0. Proof. Relation (5.25) is obviously satisfied if ω = 0. The inverse implication can be proved as follows. If (5.25) holds, we have that a ω = b ω. So, from Prop. A.10 and Prop. 5.8, we deduce that ω 2 a = a ω ω = b ω ω = ω 2 b N N that is satisfied only if ω = 0. From Lemma 5.15, we deduce that the linear operator ( a + b )( ) is invertible, with inverse map that can be indicated by ( a + b ) 1 ( ). Consequently, relation (5.24) can be written as [ a s + ] [ a s b s + = S ] b s (5.26) where S is the following operator [ ] ( a π + ) 1 ( a + b ) 1 ( a b ) a π 2( a π + ) 1 ( a + b ) 1 b b π S = 2(π + ) 1 ( a + b ) 1 a a π (π + ) 1 ( a + b ) 1 ( b a ) b π (5.27) These relations are the infinite dimensional version of (5.7). In other words, the following proposition has been proved. Proposition 5.16 (scattering operator, the infinite dimensional case). Consider two infinite dimensional systems A and B with spatial domains D a and D b and suppose that their borders D a and D b are n-dimensional Riemannian manifolds. Assume on the borders the following spaces of power variables F a E a and F b E b. If a g ij and b g ij are the Riemannian metrics on each border, it is possible to define two distinct scattering decompositions F a E a = S + a S a and F b E b = S + b S b. Moreover, suppose that the two physical systems are interconnected over D = D a D b in a power conserving way (i.e. with common effort). Then (5.26) holds. Moreover, the linear operator (5.27), can be interpreted as the generalization of the scattering matrix (5.8b) to the infinite dimensional case. Note Suppose that a g ij = b g ij = g ij. Then (5.26) becomes a s + ( (π + = 1 π b s ) b s + ( (π + = 1 ( π a s ) since a = b = a π + = b π + = π + a π = b π = π

165 5.4 Scattering for distributed systems 149 This means that, if the two systems have the same physical properties, then the outgoing power from A becomes the incoming power in B and, symmetrically, the outgoing power from B becomes the incoming power in A. So, no power reflection is present at the interconnection port D, phenomena that is present if the physical properties of the two systems are different, that is if a g ij b g ij Scattering mapping and operator in coordinates In the infinite dimensional case, the space of flows is defined as the space of k-forms on the n-dimensional Riemannian manifold N, that is F = Ω k (N ). From Prop. A.3, we known that if x 1,..., x n is a set of local coordinates on N, each flow f F can be written as a linear combination of simple k-forms dx i 1 dx i k, with i 1,..., i k {1,..., n} and basis for F = Ω k (N ) is given by the set of all dx I := dx i 1 dx i k with I = {i 1,..., i k } a multi-index of order #I = k and 1 i 1 < < i k n, introduced in Note A.6. So, we can state that, given f F, a set of functions f I : N R such that f = I f Idx I is unequivocally determined. In the same way, a base for E = Ω n k (N ) is given by the set of all dx J = dx j 1 dx j n k with J = {j 1,..., j n k } a multi-index of order #J = n k and 1 j 1 < < j n k n, and, for every e E, a set of e J : N R such that e = J e Jdx J is unequivocally determined. As in the finite dimensional case, it is possible to define a base for each scattering subspace S + and S, with the difference that, in the infinite dimensional framework, the bases can be only orthogonal, and not orthonormal, in the sense of ( ). It is convenient to extend the ideas of Note 5.6 to the infinite dimensional case. So, it is possible to state the following Proposition The sets { s +,I := } 2 2 (dxi, dx I ) S + (5.28a) and { s,i := } 2 2 ( dxi, dx I ) S (5.28b) with I = {i 1,..., i k }, 1 i 1 < < i k n, give an orthogonal base of S + and S in the sense of ( ) on F E. This is what we call flow-representation of the scattering subspaces. Moreover, also the sets { s +,J := } 2 2 (( 1)k(n k) dx J, dx J ) S +

166 150 Scattering with applications and { s,j := } 2 2 ( ( 1)k(n k) dx J, dx J ) S with J = {j 1,..., j n k }, 1 j 1 < < j n k n, give an orthogonal base of S + and S in the sense of ( ), but we call this the effort-representation of the scattering subspaces. Proof. Only the proof for the orthogonality of the bases in the flow-representation (5.28) is given. We start with the set defined in (5.28a). Consider s +,I 1, s +,I 2 S +. From the properties of the wedge product and of the Hodge star operator (see Prop. A.2 and Prop. A.11), we have that ( s +,I 1 s +,I ) ( 2 2 ( = dx I 1, dx I ) 1 2 ( dx I 2, dx I )) = 1 dx I 1 dx I dx I 1 dx I 2 2 N 2 N = 1 dx I 1 dx I 2 + ( 1) k(n k) 1 dx I 1 dx I 2 2 N 2 N = 1 dx I 1 dx I dx I 1 dx I 2 = dx I 1 dx I N which is 0 if and only if I 1 I 2. The same procedure can be applied to prove that the set of s,i, defined in (5.28b), are an orthogonal base for S. If s,i 1, s,i 2 S, then N N ( s,i 1 s,i ) ( 2 2 ( = dx I 1, dx I ) 1 2 ( dx I 2, dx I )) = 1 ( dx I 1 ) ( dx I 2 ) + 1 dx I 1 dx I 2 2 N 2 N = dx I 1 dx I 2 which is 0 if and only if I 1 I 2. N Consider two multi-index I and J, with I = {i 1,..., i k }, 1 i 1 < < i k n, and J = {j 1,..., j n k }, 1 j 1 < < j n k n. From Note A.8, it is possible to prove that dx I = dx i 1 dx i k = g ε l1...l k j 1...j n k g i 1l 1... g i kl k dx j 1 dx j n k where the sum is extended to all l 1,..., l k, with 1 l 1 < < l k n, and to all j 1,..., j n k, with 1 j 1 < < j n k n, and that dx J = dx j 1 dx i n k = g ε l1...l n k i 1...i k g i 1l 1... g i n kl n k dx i 1 dx i k where the sum is extended to all l 1,..., l n k, with 1 l 1 < < l n k n, and to all i 1,..., i k, with 1 i 1 < < i k n. In a compact notation, we can say that dx I = g ε L, J g I,L dx J dx J = g ε L, Ĩ gj,l dxĩ

167 5.4 Scattering for distributed systems 151 Consider (f, e) F E. We know that a set of functions f I, e J : N R such that f = I f Idx I and e = J e Jdx J are unequivocally determined. If it is assumed a flowrepresentation for the scattering subspaces, with s +,I = (dxi, dx I ) s,i = 2 ( dxi, dx I ) as in Prop. 5.17, then, from Prop. 5.10, we deduce that also a set of functions s + I, s I : N R such that (f, e) = s + I s+,i + s I s,i (5.29) I I is unequivocally determined. An immediate consequence is the following proposition, that gives the expressions in coordinates of the direct and inverse scattering maps (5.22a) and (5.22b). Note the analogy with the expressions of the scattering map in Prop. 5.6 for the finite dimensional case. Proposition Suppose that a base for S + and S is chosen according to the flow-representation (5.28) in Prop and consider (f, e) F E, with f = I f Idx I and e = J e Jdx J in coordinates. If s + S + and s S are the corresponding scattering variables, expressed as s + = I s+ I s+,i and s = I s I s,i in coordinates, then or, equivalently, s + I = s I = f I = e J = for every multi-index I = {i 1,..., i k }, 1 i 1 < < i k n. Proof. From (5.29) we have that 2 ( s + I 2 ) s I 2 ( ) (5.30a) g εl,j 2 gĩ,l s + + Ĩ s Ĩ 2 (( 1) k(n k) ) g ε L,I g J,L e 2 J + f I 2 (( 1) k(n k) ) (5.30b) g ε L,I g J,L e J f I 2 f = f I dx I = s + I = 2 2 ( s + I ) s I dx I ( ) 2 2 dxi + s I ( ) 2 2 dxi and e = e J dx J = s + J ( ) 2 2 dxi + s I 2 ( = g εl,j 2 gĩ,l s + + Ĩ s Ĩ ) dx J ( ) 2 2 dxi

168 152 Scattering with applications that gives (5.30a). The inverse relations (5.30b) can be obtained as follows. First of all, from the second of (5.30a) we have ( 1) k(n k) g ε M,I g J,M 2 ( ) e J = 2 ( 1)k(n k) g ε M,I ε L, J g J,M gĩ,l s + Ĩ s Ĩ Since ( 1) k(n k) ε M,I ε L, J g J,M gĩ,l = 1 g δĩi (5.31) with δ the Kronecker delta, we obtain 2 ( s + I 2 + ) s I = ( 1) k(n k) g ε M,I g J,M e J that, if summed with the first relation in (5.30a) gives the first in (5.30b) and, if subtracted, gives the second in (5.30b) f I + ( 1) k(n k) g ε M,I g J,M e J = 2 s + I f I ( 1) k(n k) g ε M,I g J,M e J = 2 s I Note An analogous result can be obtained using an effort-representation of the scattering subspaces. The final mappings have different expressions only because the bases used are not the same. Note It could be useful to express the mappings (5.30) in matrix notation. First of all, given r natural and such that 1 r n, define ( ) n n r := r and consider a generic 1-1 mapping σ r : {1,..., n r } {M = {m 1,..., m r } 1 m 1 < < m r n} Clearly n k = n n k. Now, it is possible to define the following (square) n n k n k matrix g εl,σn k (1) g σ k(1),l g εl,σn k (1) g σ k(n k ),L G :=..... g εl,σn k (n n k ) g σ k(1),l g εl,σn k (n n k ) g σ k(n k ),L From (5.31), we deduce that G is non singular, with inverse given by the following (square) n k n n k matrix g εl,σk (1) g σ n k(1),l g εl,σk (1) g σ n k(n n k ),L G 1 = ( 1) k(n k)..... g εl,σk (n k ) g σ n k(1),l g εl,σk (n k ) g σ n k(n n k ),L

169 5.4 Scattering for distributed systems 153 since the Kronecker δ can be represented by the identity of proper order in matrix notation. Moreover, if f := f σk (1). f σk (n k ) and e := e σn k (1). e σn k (n n k ) for the flow and effort variables, and if s + := s + σ k (1). and s := s + σ k (1). s + σ k (n k ) s σ k (n k ) for the scattering variables, then relations (5.30) can be written as f = e = 2 ( s + s ) G ( s + + s ) (5.32a) and s + = s = 2 ( G 1 e + f ) 2 2 ( G 1 e f ) (5.32b) 2 The matrix G can be interpreted as the numerical representation of the impedance of the infinite dimensional system. As underlined before, it is necessary to fix a base for each scattering subspace and the expression of G will be in strict relation with the chosen base. In this case, the flow-representation of the scattering subspaces is adopted: an analogous result can be obtained using the effort-representation or, eventually, another base. Once a coordinate-depending expression for the scattering mapping (5.30) is presented in Prop or, in a more compact form, in (5.32), it is possible to deduce an analogous expression of the scattering operator S introduced in Sec Suppose that ( a f, a e) F a E a and that ( b f, b e) F b E b. Clearly, it is possible to find two unique sets of functions a f I, a e J : D a R and b f I, b e J : D b R such that ( a f, a e) = ( a f I dx I, a e J dx J ) and ( b f, b e) = ( b f I dx I, b e J dx J ) It is well known that, in terms of scattering variables, it is possible to write that { ( a f, a e) = a s + + a s ( b f, b e) = b s + + b s with a,b s + S + a,b and a,b s S a,b.
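The maps (5.32) have exactly the structure of the finite dimensional scattering maps, with the matrix $G$ playing the role of the impedance. As a purely illustrative addition (not part of the original text), the following sketch assumes NumPy, replaces the metric-dependent matrix $G$ by an arbitrary symmetric positive definite stand-in, and checks numerically that (5.32a) and (5.32b) are inverse to each other.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4                                   # number of multi-indices, i.e. n_k

# Illustrative stand-in for the matrix G built from the boundary metric.
A = rng.standard_normal((m, m))
G = A @ A.T + m * np.eye(m)             # symmetric, positive definite

f = rng.standard_normal(m)              # coefficients f_I of the flow
e = rng.standard_normal(m)              # coefficients e_J of the effort

# Direct map (5.32b): scattering coefficients from power variables.
s_plus  = np.sqrt(0.5) * (np.linalg.solve(G, e) + f)
s_minus = np.sqrt(0.5) * (np.linalg.solve(G, e) - f)

# Inverse map (5.32a): power variables from scattering coefficients.
f_back = np.sqrt(0.5) * (s_plus - s_minus)
e_back = np.sqrt(0.5) * G @ (s_plus + s_minus)

assert np.allclose(f_back, f) and np.allclose(e_back, e)
```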

170 154 Scattering with applications Once a flow-representation for both a,b S + and a,b S is assumed, the set of a,b s +,I and a,b s,i gives a base for a,b S + and a,b S, as in Prop So, we have that a s + = a s + I a s +,I a s = a s I a s,i b s + = b s + I b s +,I b s = b s I b s,i with a s + I, a s I : D a R and b s + I, b s I : D b R Clearly, Prop holds for both the scattering subspaces defined on D a and D b and it is possible to write two couple of relations (5.30a), one for each system. Assuming a common-effort interconnection on D, the relations { a f I = b f I a e J = b e J can be written, in terms of scattering variables, as ( ) a s + I a s I = b s + I b s I ( ) a g ε a L,J gĩ,l a s + + a s = b ( ) g ε b Ĩ Ĩ L,J gĩ,l b s + + (5.33) b s Ĩ Ĩ Consider a multi-index I = {i 1,..., i n k } such that 1 i 1,..., i n k n. With some algebraic manipulations, we can find that ( ) a s + I + a s I = ( 1)k(n k) a g b g ε M,I ε L, J a g J,M b gĩ,l b s + + b s (5.34a) Ĩ Ĩ since ( 1) k(n k) a g ε M,I ε L, J a g J,M agĩ,l = δĩi, and in the same way, that b s + I + b s I ( = ( 1)k(n k) b g a g ε M,I ε L, J b g J,M a gĩ,l a s + + a s Ĩ Ĩ ) (5.34b) since ( 1) k(n k) b g ε M,I ε L, J b g J,M bgĩ,l = δĩi. Combining the first relation in (5.33) with both (5.34a) and (5.34b), we obtain the following couple of relations: [ 2 δĩi a s = ( 1) k(n k) ] a g b g ε Ĩ M,I ε L, J a g J,M b gĩ,l + b δĩi s +Ĩ [ + ( 1) k(n k) ] a g b g ε M,I ε L, J a g J,M bgĩ,l δĩi b s Ĩ [ 2 δĩi b s = ( 1) k(n k) ] (5.35) b g a g ε Ĩ M,I ε L, J b g J,M a gĩ,l + a δĩi s +Ĩ [ + ( 1) k(n k) ] b g a g ε M,I ε L, J b g J,M agĩ,l δĩi a s Ĩ Using the matrix notation introduced in Note 5.15, (5.35) can be written as [ Ink + G 1 b G a 0 nk 0 nk I nk + G 1 a G b ] [ a s + b s + ] [ Ink G 1 = b G a 2I nk 2I nk I nk G 1 a G b ] [ a s b s ]

171 5.4 Scattering for distributed systems 155 that is the formulation of (5.24) in coordinates. Note that it is very similar to (5.8a) for the finite dimensional case. In Lemma 5.15, it has been proved that under some hypothesis (here omitted), given ω Ω k (N ), a ω + b ω = 0 if and only if ω = 0. In matrix notation, this lemma states that (G a + G b ) ˆω = 0 if and only if ˆω = 0, where ˆω are the coordinates, or better, ˆω is the numerical representation of ω in a fixed base. We deduce that the matrix G a + G b has full rank, and clearly also I + G 1 b G a, because G b is nonsingular. Since also the matrix [ Ink + G 1 ] b G a 0 nk 0 nk I nk + G 1 a G b is non singular, relation (5.35) can be written as [ a s + ] [ a s b s + = S ] b s where S = [ Ink + G 1 b G a 0 nk 0 nk I nk + G 1 a G b ] 1 [ Ink G 1 b G a 2I nk 2I nk I nk G 1 a is the matrix representation, in the chosen base, of the scattering operator of Prop Note A similar result still holds if we suppose a common flow interconnection over D or if a different base for the scattering subspaces is fixed (e.g. if the effort-representation is adopted). Note Suppose that a g ij = b g ij = g ij. Then (5.26) becomes { a s + I = b s I G b ] b s + I = a s I since [ 0nk I nk S = as in the finite dimensional case (see Note 5.8). I nk 0 nk ] Example. Maxwell s equations: border considerations Consider some connected and closed domain D of the three-dimensional oriented Euclidean space E 3, or, equivalently, suppose that D is a three-dimensional close and connected Riemannian manifold equipped with the Euclidean metric δ ij, being δ the Kronecker delta. Moreover, assume that in D some electromagnetic phenomena, described by the Maxwell s equations, are taking place. Following the formulation of Maxwell s equations in terms of differential forms discussed in Sec , the co-energy variables are the 1-forms electric field intensity E and the magnetic field intensity H, with E, H Ω 1 (D). Moreover, the energy variables are the 2-forms electric field induction D and the magnetic field induction B, with D, B Ω 2 (D).

If the medium is linear and non-bianisotropic (Warnick and Arnold, 1996), its macroscopic electric and magnetic properties can be described by the following (0, 2) symmetric and positive definite tensors, the electric permittivity $\epsilon_{ij}$ and the magnetic permeability $\mu_{ij}$. It is correct to assume that
$$ \epsilon_{ij} = \epsilon^k_i\, g_{kj} = \epsilon^k_i\, \delta_{kj} \qquad \mu_{ij} = \mu^k_i\, g_{kj} = \mu^k_i\, \delta_{kj} \qquad (5.36) $$
where $g_{ij} = \delta_{ij}$ is the (spatial) Euclidean metric on $D$. We have that $\epsilon^j_i$ and $\mu^j_i$ are linear operators, symmetric with respect to the metric defined on $D$. Moreover, $\epsilon_{ij}$ and $\mu_{ij}$ can be treated as positive definite metrics on $D$ and, consequently, used to define two distinct Hodge star operators $\star_\epsilon$ and $\star_\mu$. These Hodge operators are not needed to define the scattering decomposition on the border $\partial D$ of the physical domain, but only to write in a slightly different way the constitutive relations of the medium and the electromagnetic energy density. In particular, the constitutive relations, that is, the relations between energy and co-energy variables, are given by
$$ D = \star_\epsilon E \qquad \text{and} \qquad B = \star_\mu H $$
and the electromagnetic energy density, a 3-form, by
$$ \mathcal{H}_{em} = \frac{1}{2}\,(E \wedge D + H \wedge B) = \frac{1}{2}\,(E \wedge \star_\epsilon E + H \wedge \star_\mu H) $$
If $H_{em} = \int_D \mathcal{H}_{em}$ is the total electromagnetic energy, the following energy balance relation has already been proved:
$$ \frac{d H_{em}}{dt} = -\int_{\partial D} E \wedge H \qquad (5.37) $$
which expresses the fact that the time derivative of the total electromagnetic energy in $D$ is equal to the flow of electromagnetic power radiating through the boundary $\partial D$. Locally, the power through the boundary is given by the 2-form $S = E \wedge H \in \Omega^2(\partial D)$, the Poynting vector. Clearly, (5.37) provides the expression of the duality product for Maxwell's equations on the boundary of the spatial domain. The power conjugated variables are the electric field $E$ restricted to the boundary, that is $E|_{\partial D}$, and the magnetic field $H$ restricted to the boundary, that is $H|_{\partial D}$. The flow variable is $E|_{\partial D}$ and the effort variable is $H|_{\partial D}$.

In order to define a scattering decomposition over the boundary of the spatial domain, a dual product and a metric, that is, a 1-1 mapping between flow and effort variables, are needed (see Def. 5.3). Since (5.37) provides the definition of the duality product, the next step is to find a proper metric on the space of power variables. Given the Euclidean metric $\delta_{ij}$ and the electric permittivity and magnetic permeability tensors $\epsilon_{ij}$ and $\mu_{ij}$ on $D$, it is possible to consider their restrictions $g_{ij}$, $\epsilon_{ij}$ and $\mu_{ij}$ on $\partial D$, (Dubrovin et al., 1992). All these tensors are symmetric and positive definite, so it is possible to assume that, on $\partial D$,
$$ \epsilon_{ij} = \epsilon^k_i\, g_{kj} \qquad \mu_{ij} = \mu^k_i\, g_{kj} $$
where $\epsilon^j_i$ and $\mu^j_i$ are linear operators on $\partial D$ that are also symmetric and positive definite with respect to the induced metric $g_{ij}$ on $\partial D$. They can be interpreted as the restrictions of the

173 PSfrag replacements 5.4 Scattering for distributed systems 157 ɛ ɛ i j ɛ i j H em δ ij g i j z i j z D µ µ i j µ i j D Figure 5.9: Scattering decomposition and Maxwell s equations. linear operators ɛ i j and µi j, introduced in (5.36), on D. If ɛ i j = ( ɛ ) i ( ) k k ɛ j µ i j = ( µ) i k ( µ) k j with ( ɛ) i j and ( ) i µ symmetric and positive definite linear operators, with respect to the j metric on D, it is possible to define the following metric on D: where g i j z i j := ( ɛ ) k is the restriction of the Euclidean metric on D and i ( µ 1) i ( ) l µ 1 g k l j (5.38) is the inverse (linear) mapping of ( ) i µ, which is, obviously, symmetric and positive definite j respect to g i j. If z is the Hodge star operator defined using z i j as metric on D, then it defines a 1-1 mapping between the space of flows and efforts on D. In fact, it is possible to prove that, given the restriction on the border of D of an electric field E, then H = z E is the corresponding restriction on D of a magnetic field. In other words, z is the metric required in the definition of the scattering subspaces S + and S on the boundary of the physical domain D (see Def 5.3). The assumption (5.38) for the metric on D can be intuitively justified as follows: since the electromagnetic energy can be written as 1 2 (ɛe2 + µh 2 ) = 1 2 µ(ɛ/µe2 + H 2 ) then, in some sense, we have that ɛ/µe is equivalent to a magnetic field H. At this point, all the results described in Sec. 5.4 can be obtained by simple calculations. In conclusion, the example of Maxwell s equations points out that the physical properties of an infinite dimensional systems can be summarized by a spatial metric and two linear operators, symmetric and positive definite with respect to the spatial metric (e.g. δ ij, ɛ i j and µi j ). Inside the domain, these tensors are combined in such a way that two Hodge operators, ɛ and µ, can be defined. By means of these operators, the constitutive relations and the energy density can j

be written. On the border of the domain, a proper combination of the restrictions of the same tensors gives a Riemannian metric $z_{ij}$ and, consequently, another Hodge operator $\star_z$. Then, $\star_z$ is a metric on the space of power variables or, equivalently, a mapping between flows and efforts. In other words, it is the impedance in the infinite dimensional case. These different combinations are summarized in Fig. 5.9.
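As an illustrative aside (not in the original text), the boundary impedance can be made concrete in the simplest situation: for an isotropic, homogeneous medium, the operator of (5.38) reduces, under the reading suggested by the intuition given in the text, to the scalar $\sqrt{\epsilon/\mu}$ times the induced metric. The short sketch below, which assumes NumPy and the standard SI values of the vacuum constants, evaluates this scalar for empty space and checks the plane-wave intuition quoted above, namely that $\sqrt{\epsilon/\mu}\,E$ carries the same energy density as the corresponding magnetic field.

```python
import numpy as np

# Vacuum permittivity and permeability (SI units).
eps0 = 8.8541878128e-12      # F/m
mu0  = 4e-7 * np.pi          # H/m

# For an isotropic medium the boundary metric reduces to the scalar admittance
# sqrt(eps/mu); its reciprocal is the usual wave impedance of the medium.
y0 = np.sqrt(eps0 / mu0)
print("wave impedance 1/sqrt(eps/mu) =", 1.0 / y0, "ohm")   # about 376.73 ohm

# For a plane wave H = sqrt(eps/mu) * E, so the electric and magnetic energy
# densities coincide, which is the intuition behind the choice of z.
E = 1.0                      # V/m, arbitrary amplitude
H = y0 * E
assert np.isclose(0.5 * eps0 * E**2, 0.5 * mu0 * H**2)
```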

Appendix A

Mathematical background

This appendix gives an essential background on tensors and differential forms. Tensors are a generalization of functions and vector fields defined over a manifold, while differential forms are tensors with special symmetry properties. Such objects arise in many applications in physics, engineering and mathematics. This is due to the fact that the curl, div and grad operations and the theorems of Green, Gauss and Stokes can all be expressed in terms of differential forms and of a differential operator on differential forms, the exterior derivative $d$. More details on the discussed topics can be found in (Marsden and Ratiu, 2001), from which most of this appendix takes inspiration.

A.1 Tensors on linear spaces

Before studying tensor fields on manifolds, we study them on vector spaces. This subject is an extension of linear algebra and is generally referred to as multi-linear algebra.

Definition A.1 (multi-linear mapping). Suppose that $E_1, \ldots, E_k$ and $F$ are linear spaces. A map $A : E_1 \times \cdots \times E_k \to F$ is called $k$-multi-linear if $A(e_1, \ldots, e_k)$ is linear in each argument separately.

Definition A.2. The space of all continuous $k$-multi-linear maps from $E_1 \times \cdots \times E_k$ to $F$ is denoted by $L(E_1, \ldots, E_k; F)$. If $E_i = E$, $i = 1, \ldots, k$, then this space is denoted by $L^k(E; F)$.

Consider the vector space $E$ and denote by $E^*$ its dual space. Then, we give the following fundamental definition.

Definition A.3 (tensor). For a vector space $E$ we put
$$ T^r_s(E) := L^{r+s}(\underbrace{E^*, \ldots, E^*}_{r\text{-times}}, \underbrace{E, \ldots, E}_{s\text{-times}}; \mathbb{R}) $$

The elements of $T^r_s(E)$ are called tensors on $E$, contravariant of order $r$ and covariant of order $s$ or, simply, of type $(r, s)$. Given $t_1 \in T^{r_1}_{s_1}(E)$ and $t_2 \in T^{r_2}_{s_2}(E)$, the tensor product of $t_1$ and $t_2$ is the tensor $t_1 \otimes t_2 \in T^{r_1+r_2}_{s_1+s_2}(E)$ defined by
$$ (t_1 \otimes t_2)(\beta^1, \ldots, \beta^{r_1}, \gamma^1, \ldots, \gamma^{r_2}, f_1, \ldots, f_{s_1}, g_1, \ldots, g_{s_2}) = t_1(\beta^1, \ldots, \beta^{r_1}, f_1, \ldots, f_{s_1})\; t_2(\gamma^1, \ldots, \gamma^{r_2}, g_1, \ldots, g_{s_2}) $$
where $\beta^j, \gamma^j \in E^*$ and $f_j, g_j \in E$.

The tensor product is associative, bilinear and continuous, but it is not commutative. We also have the special cases
$$ T^1_0(E) = E \qquad T^0_1(E) = E^* \qquad T^0_2(E) = L(E; E^*) \qquad T^1_1(E) = L(E; E) $$

Proposition A.1. Denote by $E$ an $n$-dimensional vector space. If $\{e_1, \ldots, e_n\}$ is a basis of $E$ and $\{e^1, \ldots, e^n\}$ a basis of its dual $E^*$, then
$$ \{ e_{i_1} \otimes \cdots \otimes e_{i_r} \otimes e^{j_1} \otimes \cdots \otimes e^{j_s} \mid i_1, \ldots, i_r, j_1, \ldots, j_s = 1, \ldots, n \} $$
is a basis of $T^r_s(E)$ and thus $\dim T^r_s(E) = n^{r+s}$. Then, given $t \in T^r_s(E)$, it is possible to write
$$ t = t(e^{i_1}, \ldots, e^{i_r}, e_{j_1}, \ldots, e_{j_s})\; e_{i_1} \otimes \cdots \otimes e_{i_r} \otimes e^{j_1} \otimes \cdots \otimes e^{j_s} $$
where the coefficients
$$ t^{i_1, \ldots, i_r}_{j_1, \ldots, j_s} := t(e^{i_1}, \ldots, e^{i_r}, e_{j_1}, \ldots, e_{j_s}) $$
are the components of $t$ relative to the basis $\{e_1, \ldots, e_n\}$.

Example A.1. If $t$ is a $(0,2)$-tensor on $E$, then it has components $t_{ij} = t(e_i, e_j)$, that is, an $n \times n$ matrix. This is the usual way of associating a bilinear form with a matrix. Moreover, it makes sense to say that $t$ is symmetric when $t(e_i, e_j) = t(e_j, e_i)$: this is equivalent to saying that the matrix $[t_{ij}]$ is symmetric. An inner product $(\cdot \mid \cdot)$ on $E$ is, then, a symmetric $(0,2)$-tensor $g$ with components $g_{ij} = (e_i \mid e_j)$ and such that $[g_{ij}]$ is symmetric and positive definite. The components of the inverse matrix are written $g^{ij}$.

Example A.2. Higher order tensors arise in elasticity and Riemannian geometry. In elasticity, the stress tensor is a symmetric 2-tensor and the elasticity tensor is a fourth order tensor. In Riemannian geometry, the metric tensor is a symmetric 2-tensor and the curvature tensor is a fourth order tensor.

Definition A.4 (Kronecker delta). The Kronecker delta is the tensor $\delta \in T^1_1(E)$ defined by $\delta(\alpha, e) = \langle \alpha, e \rangle$, and it corresponds to the identity $I \in L(E; E)$ under the canonical isomorphism $T^1_1(E) \cong L(E; E)$. Relative to any basis, the components of $\delta$ are the Kronecker symbols $\delta^i_j$, that is, $\delta = \delta^i_j\, e_i \otimes e^j$.
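For readers who prefer a computational picture, the following sketch (an addition, not from the original text; it assumes NumPy and uses arbitrary components) spells out Example A.1 and the tensor product in coordinates: a (0,2)-tensor is fixed by its $n \times n$ component matrix, and the tensor product of two covectors simply multiplies components.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# A (0,2)-tensor t on E ~ R^n is fixed by its components t_ij = t(e_i, e_j).
t = rng.standard_normal((n, n))

def t_eval(u, v):
    """Bilinear map associated with the component matrix t (Example A.1)."""
    return u @ t @ v

u, v = rng.standard_normal(n), rng.standard_normal(n)
assert np.isclose(t_eval(u, v), np.einsum('ij,i,j->', t, u, v))

# Tensor product of two (0,1)-tensors (covectors) alpha, beta: a (0,2)-tensor with
# components (alpha (x) beta)_ij = alpha_i beta_j and (alpha (x) beta)(u, v) = alpha(u) beta(v).
alpha, beta = rng.standard_normal(n), rng.standard_normal(n)
ab = np.tensordot(alpha, beta, axes=0)           # outer product, shape (n, n)
assert np.isclose(ab[1, 2], alpha[1] * beta[2])
assert np.isclose(u @ ab @ v, (alpha @ u) * (beta @ v))

# An inner product is a symmetric positive definite (0,2)-tensor g (Example A.1).
A = rng.standard_normal((n, n))
g = A @ A.T + n * np.eye(n)
assert np.allclose(g, g.T) and np.all(np.linalg.eigvalsh(g) > 0)
```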

A.2 Manifolds, tangent spaces and tangent bundles

A manifold can be intuitively defined as a set which is locally diffeomorphic to $\mathbb{R}^n$ around each of its points. A typical example is a smooth surface in space: locally, it is possible to define a one-to-one mapping between a neighborhood of the surface points and a neighborhood of $\mathbb{R}^2$. The basic concepts needed in order to mathematically define a manifold $M$ are charts and atlases.

Definition A.5 (chart). Given a set $M$, a local chart on $M$ is a bijection $\phi : U \subset M \to P \subset \mathbb{R}^n$. This chart is denoted by $(U, \phi)$.

Definition A.6 (atlas). An atlas on a set $M$ is a family of charts $(U_i, \phi_i)$, $i \in I$, with $I$ an indexing set, such that:
(i) $\forall i \in I$, $\phi_i$ is a homeomorphism (i.e. a bijective continuous mapping with continuous inverse);
(ii) $M = \bigcup_{i \in I} U_i$;
(iii) for every $i, j \in I$, consider the charts $(U_i, \phi_i)$ and $(U_j, \phi_j)$: if $U_c = U_i \cap U_j$ is not empty, then the function $(\phi_i \circ \phi_j^{-1}) : \phi_j(U_c) \subset \mathbb{R}^n \to \phi_i(U_c) \subset \mathbb{R}^n$ should be smooth.

Definition A.7 (manifold dimension). For a manifold $M$, the dimension around a point $p \in U \subset M$ is the dimension of the real space which is the co-domain of the chart $(U, \phi)$.

In order to define the tangent and co-tangent spaces of a manifold $M$ at a given point, it is necessary to introduce functions on the manifold. Denote by $C^\infty(M)$ the set of infinitely differentiable functions $f : M \to \mathbb{R}$ and by $C^\infty(p)$ the set of smooth functions defined on an open neighborhood of $p \in M$. The tangent space to $M$ at $p$ can be defined as the space of the tangent vectors to all the curves passing through $p$. Clearly, the tangent space is a vector space, since the manifold is locally diffeomorphic to $\mathbb{R}^n$. Equivalently, the tangent space can be defined as follows.

Definition A.8 (tangent space). The tangent space $T_p M$ to $M$ at $p \in M$ is the linear space of mappings $F_p : C^\infty(p) \to \mathbb{R}$ such that
(i) $F_p(\alpha f + \beta g) = \alpha F_p(f) + \beta F_p(g)$
(ii) $F_p(fg) = F_p(f)\, g(p) + f(p)\, F_p(g)$
for every $f, g \in C^\infty(p)$ and $\alpha, \beta \in \mathbb{R}$.

Note A.1. Consider, around a point $p \in M$, a chart $(U, \phi)$ with local coordinates $(x^1, \ldots, x^n)$; then, for every $F_p \in T_p M$, it is possible to write
$$ F_p = F^1 \frac{\partial}{\partial x^1} + \cdots + F^n \frac{\partial}{\partial x^n} $$
where $(F^1, \ldots, F^n) \in \mathbb{R}^n$ is the local representation of $F_p$ in the coordinates $(x^1, \ldots, x^n)$. Moreover, the set of all $\{ \partial / \partial x^i \}$, $i = 1, \ldots, n$, gives a basis for $T_p M$.
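Definition A.8 characterizes tangent vectors as derivations. A tiny symbolic check (illustrative only, assuming SymPy; the chart, point and components are arbitrary choices, not from the text) verifies that a directional derivative written in local coordinates as in Note A.1 indeed satisfies the linearity and Leibniz properties.

```python
import sympy as sp

# Hypothetical chart with coordinates (x, y), point p and components of F_p.
x, y = sp.symbols('x y')
p = {x: 1, y: 2}                       # the point p in local coordinates
v = (3, -1)                            # components (F^1, F^2) in the basis d/dx^i

def F_p(h):
    """Directional derivative F_p(h) = F^1 dh/dx + F^2 dh/dy evaluated at p."""
    return (v[0] * sp.diff(h, x) + v[1] * sp.diff(h, y)).subs(p)

f = x**2 * sp.sin(y)
g = sp.exp(x) + y

# Leibniz rule: F_p(f g) = F_p(f) g(p) + f(p) F_p(g)
assert sp.simplify(F_p(f * g) - (F_p(f) * g.subs(p) + f.subs(p) * F_p(g))) == 0

# Linearity: F_p(a f + b g) = a F_p(f) + b F_p(g)
a, b = 2, -5
assert sp.simplify(F_p(a * f + b * g) - (a * F_p(f) + b * F_p(g))) == 0
```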

178 162 Mathematical background Definition A.9 (tangent bundle). The tangent bundle of a manifold M is defined as: T M := T p M p M Note A.2. The dimension of a tangent space to a manifold M of dimension n is equal to n since at every point p M the manifold is locally diffeomorphic to R n. On the other hand, an element of the tangent bundle has dimension 2n, since it is necessary to specify n coordinates for the point p and n coordinates for the element of the tangent space T p M. Definition A.10 (cotangent space). The cotangent space T p M of M at p M is the space of linear operators from T p M to R, that is the dual space of T p M. Note A.3. The dual basis to { / x i } is denoted by {dx i }. Thus, relative to a choice of local coordinates we get the basic formula where f : M R is a smooth function. df = f x i dxi Definition A.11 (cotangent bundle). The cotangent bundle of a manifold M is defined as: T M := Tp M p M A.3 Tensor fields and tensor bundles In Sec. A.1, the definition of tensor on linear space has been given and, in Sec. A.2 it has been shown that, given an n-dimensional manifold M, it is always possible to identify two duals spaces, the tangent and cotangent spaces, at each point p of the manifold. If the tangent space is considered as the vector space for tensors, it is possible to arrive to the concept of tensor field. Definition A.12 (tensor field). A tensor field r-contra-variant and s-covariant or, simply, of type (r, s) is defined as a map assigning to each point p of a manifold M a tensor Ts r (T p M) and it is denoted by Ts r M p. Definition A.13 (tensor bundle). The r-contra-variant and s-covariant tensor bundle is defined as follows: Ts r M := Ts r M p p M Note A.4 (coordinate representation of tensor fields). If M is an n-dimensional manifold, from Note A.1 and Note A.3, we have that { / x i } is a basis for T p M and {dx i } is a basis for T p M, both corresponding to the local coordinates (x 1,..., x n ). Then dx i ( / x j ) = x i / x j = δ i j. If t T r s M p, then t = t i 1,...,i r j 1,...,j s x i 1 x ir dxj 1 dx js where t i 1,...,i r j 1,...,j s are the components of t corresponding to choice (x 1,..., x n ) of (local) coordinates on M. It can be of interest to understand the behavior of these components in relation to a

(local) change of coordinates on $M$. If $X^i$, $i = 1, \ldots, n$, is a different (local) coordinate system on $M$, then the following coordinate transformation formula for the components of $t$ holds:
$$ T^{k_1, \ldots, k_r}_{l_1, \ldots, l_s} = \frac{\partial X^{k_1}}{\partial x^{i_1}} \cdots \frac{\partial X^{k_r}}{\partial x^{i_r}} \frac{\partial x^{j_1}}{\partial X^{l_1}} \cdots \frac{\partial x^{j_s}}{\partial X^{l_s}}\; t^{i_1, \ldots, i_r}_{j_1, \ldots, j_s} $$
This relation could be taken as an alternative definition of tensor. Note that, in this case, the tensor is introduced in terms of its behavior under coordinate changes.

A.4 Exterior algebra

In this section, some short notes about the exterior algebra of a vector space are given. The extension to manifolds can be easily deduced following the same ideas used for the generalization of tensors to manifolds.

Definition A.14 (permutation group). The permutation group on $k$ elements, denoted by $S_k$, consists of all bijections $\sigma : \{1, \ldots, k\} \to \{1, \ldots, k\}$, usually given in the form of a table
$$ \begin{pmatrix} 1 & \cdots & k \\ \sigma(1) & \cdots & \sigma(k) \end{pmatrix} $$
together with the structure of a group under the composition of maps.

Definition A.15 (transposition). A transposition $\sigma \in S_k$ is a permutation that swaps two elements of $\{1, \ldots, k\}$. Every permutation $\sigma \in S_k$ can be expressed as a composition of transpositions: it is possible to prove that the expression of $\sigma$ as a product of transpositions is not unique, but the number of required transpositions is either always even or always odd. Thus, it makes sense to speak about even or odd permutations.

Definition A.16 (permutation sign). The sign of a permutation $\sigma \in S_k$ is the following function:
$$ \operatorname{sign} \sigma := \begin{cases} 1 & \text{if } \sigma \text{ is even} \\ -1 & \text{if } \sigma \text{ is odd} \end{cases} $$

Definition A.17 (exterior $k$-form). A $k$-multi-linear continuous mapping $t \in T^0_k(E)$ is called skew-symmetric when
$$ t(e_1, \ldots, e_k) = (\operatorname{sign} \sigma)\, t(e_{\sigma(1)}, \ldots, e_{\sigma(k)}) $$
for all $e_1, \ldots, e_k \in E$ and $\sigma \in S_k$. This property can be equivalently expressed by requiring that $t(e_1, \ldots, e_k)$ changes sign when any two of $e_1, \ldots, e_k$ are swapped. We call $t$ an exterior $k$-form. The space of exterior $k$-forms on a vector space $E$ is denoted by $\Lambda^k(E)$.

Definition A.18 (Levi-Civita tensor). Denote by $E$ an $n$-dimensional vector space. The Levi-Civita tensor $\varepsilon \in T^0_n(E)$ on $E$ is the following $n$-form:
$$ \varepsilon_{i_1, \ldots, i_n} = \operatorname{sign} \begin{pmatrix} 1 & \cdots & n \\ i_1 & \cdots & i_n \end{pmatrix} $$
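The parity of a permutation and the Levi-Civita symbol of Def. A.16 and Def. A.18 are easy to compute by counting inversions. The following sketch is an illustrative addition (not part of the thesis); it assumes only the Python standard library and adopts the usual convention that the symbol vanishes when an index is repeated.

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation of distinct values, computed by counting inversions."""
    inversions = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def levi_civita(*idx):
    """Levi-Civita symbol: sign of the index permutation, 0 if an index repeats."""
    if len(set(idx)) != len(idx):
        return 0
    return sign(idx)

# The n-form of Def. A.18 in dimension 3: the usual epsilon_{ijk}.
assert levi_civita(0, 1, 2) == 1
assert levi_civita(1, 0, 2) == -1      # one transposition: odd permutation
assert levi_civita(0, 0, 2) == 0       # repeated index

# Every permutation is either even or odd: the sign is multiplicative under composition.
for p in permutations(range(3)):
    for q in permutations(range(3)):
        composed = tuple(p[q[i]] for i in range(3))
        assert sign(composed) == sign(p) * sign(q)
```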

Definition A.19 ((k, l)-shuffle). A (k, l)-shuffle is a permutation σ of {1, 2, …, k + l} such that

σ(1) < ⋯ < σ(k) and σ(k + 1) < ⋯ < σ(k + l)

Definition A.20 (wedge product). If α ∈ Λ^k(E) and β ∈ Λ^l(E), define their wedge product α ∧ β ∈ Λ^{k+l}(E) by

(α ∧ β)(e_1, …, e_{k+l}) = Σ_σ (sign σ) α(e_{σ(1)}, …, e_{σ(k)}) β(e_{σ(k+1)}, …, e_{σ(k+l)})

where the sum is over all (k, l)-shuffles σ.

Example A.3. If α is a 2-form and β is a 1-form, then

(α ∧ β)(e_1, e_2, e_3) = α(e_1, e_2)β(e_3) − α(e_1, e_3)β(e_2) + α(e_2, e_3)β(e_1)

since the only (2, 1)-shuffles in S_3 are (1, 2, 3), (1, 3, 2) and (2, 3, 1), of which only the second one has sign equal to −1.

Proposition A.2. Consider α ∈ Λ^k(E), β ∈ Λ^l(E) and γ ∈ Λ^m(E). Then, the wedge product satisfies the following properties:

(i) ∧ is bilinear;
(ii) α ∧ β = (−1)^{kl} β ∧ α;
(iii) α ∧ (β ∧ γ) = (α ∧ β) ∧ γ.

Note A.5. To express the wedge product in coordinate notation, suppose that dim E = n and that {e_1, …, e_n} is a basis. The components of α ∈ Λ^k(E) are the real numbers

α_{i_1,…,i_k} = α(e_{i_1}, …, e_{i_k}),  1 ≤ i_1 < ⋯ < i_k ≤ n

antisymmetric in the indices i_1, …, i_k. For example, β ∈ Λ^2(E) yields β_{ij}, a skew-symmetric n × n matrix. If α ∈ Λ^k(E) and β ∈ Λ^l(E), then

(α ∧ β)_{i_1,…,i_{k+l}} = Σ_σ (sign σ) α_{i_{σ(1)},…,i_{σ(k)}} β_{i_{σ(k+1)},…,i_{σ(k+l)}}

where the sum is over all (k, l)-shuffles σ.

Definition A.21 (Grassmann algebra). The direct sum of the spaces Λ^k(E), k = 0, 1, 2, …, together with its structure of real vector space and with the multiplication induced by ∧, is called the exterior algebra of E, or the Grassmann algebra of E. It is denoted by Λ(E).

Proposition A.3. Suppose that dim E = n. Then, for k > n, Λ^k(E) = {0}, while for 0 < k ≤ n, Λ^k(E) has dimension n!/((n − k)! k!). The exterior algebra over E has dimension 2^n. If {e_1, …, e_n} is an (ordered) basis of E and {e^1, …, e^n} its dual basis, a basis of Λ^k(E) is

{e^{i_1} ∧ ⋯ ∧ e^{i_k} | 1 ≤ i_1 < i_2 < ⋯ < i_k ≤ n}
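As a simple illustration of Proposition A.3, if dim E = 3 then dim Λ^0(E) = dim Λ^3(E) = 1 and dim Λ^1(E) = dim Λ^2(E) = 3, so that dim Λ(E) = 1 + 3 + 3 + 1 = 2^3; a basis of Λ^2(E) is given by {e^1 ∧ e^2, e^1 ∧ e^3, e^2 ∧ e^3}.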

Note A.6. Given α ∈ Λ^k(E), if {e_1, …, e_n} is an ordered basis for E and {e^1, …, e^n} is the corresponding dual basis for E*, then

α = α(e_{i_1}, …, e_{i_k}) e^{i_1} ∧ ⋯ ∧ e^{i_k} = α_{i_1,…,i_k} e^{i_1} ∧ ⋯ ∧ e^{i_k}

with 1 ≤ i_1 < ⋯ < i_k ≤ n. If I := {i_1, …, i_k} is a multi-index of order #I = k such that 1 ≤ i_1 < ⋯ < i_k ≤ n, then

{e^I := e^{i_1} ∧ ⋯ ∧ e^{i_k} | I multi-index}

is a basis for Λ^k(E) and α = α_I e^I.

Corollary A.4. If dim E = n, then dim Λ^n(E) = 1. If {α^1, …, α^n} is a basis for E*, then α^1 ∧ ⋯ ∧ α^n spans Λ^n(E).

Corollary A.5. Let α^1, …, α^n ∈ E*. Then, α^1, …, α^n are linearly dependent if and only if α^1 ∧ ⋯ ∧ α^n = 0.

Corollary A.6. Let θ ∈ Λ^1(E) and α ∈ Λ^k(E). Then, θ ∧ α = 0 if and only if there exists β ∈ Λ^{k−1}(E) such that α = θ ∧ β.

A.5 Hodge star operator

In this section, the Hodge star operator on forms is introduced. The starting point is the definition of volume element and orientation of a vector space E.

Definition A.22. The non-zero elements of the one-dimensional space Λ^n(E) are called volume elements. If ω_1 and ω_2 are volume elements, we say that ω_1 and ω_2 are equivalent if and only if there is c > 0 such that ω_1 = c ω_2. An equivalence class [ω] of volume elements on E is called an orientation on E. An oriented vector space (E, [ω]) is a vector space E together with an orientation [ω] on E; [−ω] is called the reverse orientation. A basis {e_1, …, e_n} of the oriented vector space (E, [ω]) is called positively (resp., negatively) oriented if ω(e_1, …, e_n) > 0 (resp., < 0).

Note A.7. The definition of positively (resp., negatively) oriented basis is clearly independent from the particular volume element ω representing the equivalence class [ω]. Furthermore, it is easy to note that a vector space E has exactly two orientations: one given by selecting an arbitrary dual basis {e^1, …, e^n} and taking [e^1 ∧ ⋯ ∧ e^n], the other given by its reverse orientation [−e^1 ∧ ⋯ ∧ e^n].

It is important to focus our attention on vector spaces carrying a bilinear symmetric non-degenerate covariant 2-tensor, for which it is possible to construct canonical volume elements. Differently from (Marsden and Ratiu, 2001), only the special case in which the vector space is provided with a bilinear symmetric positive definite covariant 2-tensor (i.e. with a Riemannian metric) is discussed. First of all, recall the following proposition from linear algebra.

Proposition A.7. Denote by E an n-dimensional vector space and by g = ⟨·, ·⟩ ∈ T^0_2(E) a symmetric positive definite tensor. This means that the map e ∈ E ↦ g(e, ·) ∈ E* has rank equal to n. Then, there is an ordered basis {e_1, …, e_n} of E, with dual basis {e^1, …, e^n}, such that

g = Σ_{i=1}^{n} e^i ⊗ e^i

This basis {e_1, …, e_n} is called a g-orthonormal basis.

The canonical volume element of a vector space with a Riemannian metric is, then, given by the following proposition.

Proposition A.8. Denote by E an n-dimensional vector space and by g ∈ T^0_2(E) a symmetric positive definite tensor. If [ω] is an orientation of E, there exists a unique volume element μ = μ(g) ∈ [ω], called the g-volume, such that μ(e_1, …, e_n) = 1 for all positively oriented g-orthonormal bases {e_1, …, e_n} of E. In fact, if {e^1, …, e^n} is the dual basis, then μ = e^1 ∧ ⋯ ∧ e^n. More generally, if {f_1, …, f_n} is a positively oriented basis with dual basis {f^1, …, f^n}, then

μ = det[g(f_i, f_j)]^{1/2} f^1 ∧ ⋯ ∧ f^n

It is possible to induce a symmetric positive definite 2-tensor on Λ^k(E), k = 1, …, n, from a symmetric positive definite tensor g ∈ T^0_2(E), with dim E = n. If α, β ∈ Λ^k(E), with α = α_{i_1,…,i_k} e^{i_1} ∧ ⋯ ∧ e^{i_k} and β = β_{i_1,…,i_k} e^{i_1} ∧ ⋯ ∧ e^{i_k} (sum over 1 ≤ i_1 < ⋯ < i_k ≤ n), denote by

β^{i_1,…,i_k} = g^{i_1 j_1} ⋯ g^{i_k j_k} β_{j_1,…,j_k}

(sum over all j_1, …, j_k) the components of the associated contravariant k-tensor, where [g^{ij}] is the inverse matrix of [g_{ij}] = [g(e_i, e_j)]. Then, define

g^{(k)}(α, β) := Σ_{1 ≤ i_1 < ⋯ < i_k ≤ n} α_{i_1,…,i_k} β^{i_1,…,i_k}

or, if there is no danger of confusion, write ⟨α, β⟩ = g^{(k)}(α, β). This definition does not depend on the basis. Moreover, g^{(k)}(·, ·) is bilinear, symmetric, non-degenerate and positive definite. Thus, the following proposition holds.

Proposition A.9. A non-degenerate symmetric positive definite covariant 2-tensor g on the vector space E induces a similar tensor on Λ^k(E) for all k = 1, …, n. Moreover, if {e_1, …, e_n} is a g-orthonormal basis of E, then the basis {e^{i_1} ∧ ⋯ ∧ e^{i_k} | 1 ≤ i_1 < ⋯ < i_k ≤ n} is orthonormal with respect to g^{(k)}(·, ·) and

⟨e^{i_1} ∧ ⋯ ∧ e^{i_k}, e^{i_1} ∧ ⋯ ∧ e^{i_k}⟩ = 1

The Hodge star operator is, then, introduced by means of the following proposition.

Proposition A.10. Denote by E an n-dimensional vector space, by g ∈ T^0_2(E) a symmetric positive definite tensor and by μ the corresponding volume element of E. Then, there is a unique isomorphism (the Hodge star operator) ∗: Λ^k(E) → Λ^{n−k}(E) satisfying

α ∧ ∗β = ⟨α, β⟩ μ

for α, β ∈ Λ^k(E). Furthermore, if {e_1, …, e_n} is a positively oriented g-orthonormal basis of E and {e^1, …, e^n} is its dual basis, then

∗(e^{σ(1)} ∧ ⋯ ∧ e^{σ(k)}) = (sign σ)(e^{σ(k+1)} ∧ ⋯ ∧ e^{σ(n)})

where σ(1) < ⋯ < σ(k) and σ(k + 1) < ⋯ < σ(n).

Proposition A.11. Denote by E an n-dimensional vector space, by g ∈ T^0_2(E) a symmetric positive definite tensor and by μ the corresponding volume element of E. The Hodge star operator satisfies the following properties, for all α, β ∈ Λ^k(E):

α ∧ ∗β = β ∧ ∗α = ⟨α, β⟩ μ
∗1 = μ,  ∗μ = 1
∗∗α = (−1)^{k(n−k)} α
⟨∗α, ∗β⟩ = ⟨α, β⟩

Note A.8. It is necessary to know how to compute ∗ in an arbitrary oriented basis. In particular, it is possible to prove that

∗(e^{i_1} ∧ ⋯ ∧ e^{i_k}) = det[g_{ij}]^{1/2} g^{i_1 j_1} ⋯ g^{i_k j_k} sign ( 1 ⋯ n ; j_1 ⋯ j_n ) e^{j_{k+1}} ∧ ⋯ ∧ e^{j_n}
  = det[g_{ij}]^{1/2} g^{i_1 j_1} ⋯ g^{i_k j_k} ε_{j_1,…,j_n} e^{j_{k+1}} ∧ ⋯ ∧ e^{j_n}

(sum over j_{k+1} < ⋯ < j_n), where {j_1, …, j_k} is a set of indices complementary to {j_{k+1}, …, j_n}. Consequently, if α = α_{i_1,…,i_k} e^{i_1} ∧ ⋯ ∧ e^{i_k} ∈ Λ^k(E), we obtain that

(∗α)_{j_{k+1},…,j_n} = det[g_{ij}]^{1/2} Σ_{j_1 < ⋯ < j_k} ε_{j_1,…,j_n} g^{i_1 j_1} ⋯ g^{i_k j_k} α_{i_1,…,i_k}

for j_{k+1} < ⋯ < j_n.

A.6 Differential forms, exterior derivative and Stokes' theorem

Differential forms are an extension of exterior forms to the tangent bundle TM of a manifold M. If M is an n-dimensional manifold, from Note A.2 we know that also its tangent space (which is a vector space) is n-dimensional, so the extension of exterior forms to the tangent space of a manifold makes sense. The elements of this special class of skew-symmetric (0, k)-tensor fields are called differential forms and are denoted by Ω^k(M). Each differential form α ∈ Ω^k(M) is defined at every m ∈ M and, clearly, α(m) ∈ Λ^k(T_mM). Consequently, it can be easily deduced how to extend the wedge product to deal with differential forms. This result is discussed in the following proposition.

Proposition A.12. If α ∈ Ω^k(M) and β ∈ Ω^l(M), k, l = 0, 1, …, define α ∧ β: M → Λ^{k+l}(TM) by

(α ∧ β)(m) = α(m) ∧ β(m)

Then, α ∧ β ∈ Ω^{k+l}(M) and satisfies the properties of Prop. A.2.

Definition A.23. Let Ω(M) denote the direct sum of the spaces Ω^k(M), k = 0, 1, …, together with its structure of vector space and with the multiplication ∧ extended component-wise to Ω(M). We call Ω(M) the algebra of exterior differential forms on M. The elements of Ω^k(M) are called k-forms.

Note A.9. As discussed in Note A.4, given an n-dimensional manifold M, a tensor field t ∈ T^r_s(M) has the local expression

t(u) = t^{i_1,…,i_r}_{j_1,…,j_s}(u) ∂/∂x^{i_1} ⊗ ⋯ ⊗ ∂/∂x^{i_r} ⊗ dx^{j_1} ⊗ ⋯ ⊗ dx^{j_s}

where u ∈ U, (U, φ) is a local chart of M and

t^{i_1,…,i_r}_{j_1,…,j_s}(u) = t(dx^{i_1}, …, dx^{i_r}, ∂/∂x^{j_1}, …, ∂/∂x^{j_s})(u)

Then, the local expression for ω ∈ Ω^k(M) is given by

ω(u) = ω_{i_1,…,i_k}(u) dx^{i_1} ∧ ⋯ ∧ dx^{i_k},  i_1 < ⋯ < i_k

where

ω_{i_1,…,i_k}(u) = ω(∂/∂x^{i_1}, …, ∂/∂x^{i_k})(u)

The exterior derivative is the generalization of the differential operator of functions to a map d: Ω^k(M) → Ω^{k+1}(M) defined for every k. This operator turns out to have powerful algebraic properties: in particular, d is related to the basic operators div, grad and curl on R^3. This operator is introduced by means of the following proposition.

Proposition A.13. Denote by M an n-dimensional manifold. There is a unique family of mappings d^k(U): Ω^k(U) → Ω^{k+1}(U), k = 0, 1, …, n and U open in M, all denoted by d and called the exterior derivative on M, such that

(i) d is a ∧-antiderivation, that is, d is R-linear and, for α ∈ Ω^k(M) and β ∈ Ω^l(M), d(α ∧ β) = dα ∧ β + (−1)^k α ∧ dβ;
(ii) if f: U → R is a function, then df is its differential;
(iii) d² = d ∘ d = 0;
(iv) d is natural with respect to restrictions, that is, if U ⊂ V ⊂ M are open and α ∈ Ω^k(V), then d(α|_U) = (dα)|_U

Condition (iv) means that d is a local operator.

Note A.10. If α ∈ Ω^k(M), locally α = α_{i_1,…,i_k} dx^{i_1} ∧ ⋯ ∧ dx^{i_k}. Then,

dα = (∂α_{i_1,…,i_k}/∂x^i) dx^i ∧ dx^{i_1} ∧ ⋯ ∧ dx^{i_k}

(sum over i_1 < ⋯ < i_k), which gives the coordinate expression of d.

The integral of an n-form on an n-dimensional manifold is defined by piecing together integrals over sets in R^n using a partition of unity subordinate to an atlas. The change of variables theorem guarantees that the integral is well defined, independently of the choice of atlas and partition of unity. The basic theorems of integral calculus are the change of variables theorem and Stokes' theorem. The second one, in particular, is a powerful tool widely used in this thesis when dealing with infinite dimensional systems. Stokes' theorem states that, if α is an (n − 1)-form on an orientable n-dimensional manifold M, then the integral of dα over M equals the integral of α over ∂M, the boundary of M. The classical theorems of Gauss, Green, and Stokes are special cases of this result. In order to state Stokes' theorem formally, it would be necessary to discuss manifolds with boundary and their orientations, but this matter is beyond the aims of this appendix. The interested reader, as usual, can refer to (Marsden and Ratiu, 2001). Within this framework, we think that the intuitive idea of orientation of manifolds is enough.

Theorem A.14 (Stokes). Denote by M an oriented smooth n-dimensional manifold with boundary ∂M and by α ∈ Ω^{n−1}(M) a differential form. Then,

∫_M dα = ∫_{∂M} α

If ∂M = ∅, the boundary integral in the previous equation is set equal to zero.
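As a simple illustration, if M is a bounded region of R^2 with boundary curve ∂M and α = P dx + Q dy ∈ Ω^1(M), then dα = (∂Q/∂x − ∂P/∂y) dx ∧ dy and Theorem A.14 reduces to the classical Green theorem:

∮_{∂M} (P dx + Q dy) = ∫_M (∂Q/∂x − ∂P/∂y) dx ∧ dy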


Appendix B

Control of robots with Real-Time Linux

It is well known that flexibility, short development times and low economical efforts are probably the most important features needed by any industrial application and, of course, by a robotic and control system oriented experimental set-up. The growth in performance of low-cost PC-based architectures and the diffusion of freely distributed software packages brought the possibility of developing such kinds of systems taking into account all these needs. In this appendix, the real-time control of an industrial manipulator, the Comau SMART 3-S, developed under a real-time variant of the Linux operating system, RTAI-Linux, is discussed. The basic idea is to build a flexible and highly configurable robotic set-up to quickly test new control algorithms and to easily use the robot in different manipulation tasks.

B.1 Introduction

Rapid prototyping is a well recognized need in the design and development phases of advanced systems for automation. In particular, this concept is also important for the design and experimentation of control algorithms and new mechanical prototypes in advanced robotics. Moreover, flexibility, short development times and contained economical costs are other important features considered in developing new industrial applications and, of course, new robotic and control system experimental set-ups. Recently, the growth in performance of low-cost PC-based architectures and the diffusion of freely distributed software packages has given the possibility of developing rapid prototyping tools taking into account all these needs. In a sense, the advent of real-time variants of the popular desktop operating system Linux could be considered a sort of milestone for the development process of real-time applications. As a matter of fact, this kind of systems provides noticeable performance that, together with the availability of the source code, powerful development tools and, generally speaking, detailed documentation, could be the starting point for setting up new standards for advanced development environments. These operating systems are distributed under the GNU Public License, so

they are freely available and configurable to meet the desired requirements. According to this model, the RT-Linux (Barabanov, 1997; Yodaiken, 1999; RTLinux web site, 2003) and RTAI-Linux (Bianchi et al., 1999; Mantegazza, 2003; RTAI-Linux web site, 2003) projects took place, both with the aim of giving Linux the possibility to implement hard real-time applications.

At the Laboratory of Automation and Robotics (L.A.R.) of the University of Bologna, we started to work with Linux-based real-time operating systems within the ViDet Project; see (Macchelli et al., 2000) for more details on this research project. Basically, we developed a haptic device based on a wire-actuated robot which was able to provide force feedback sensations to the user. The real-time architecture was able to manage visual information from a stereo-vision subsystem, to build a 3D virtual reconstruction based on the visual information and to control the robotic device in order to simulate the contact with the virtual world. Stimulated by the good results, we decided to develop a more complex and flexible experimental setup for robotic applications, still based on real-time Linux or, more precisely, on RTAI-Linux. The idea was to develop a highly customizable and modular application for the control of an industrial manipulator, the Comau SMART 3-S robot, that could allow us to quickly test new control algorithms and to easily use the robot in different manipulation tasks.

This appendix is organized as follows. In Sec. B.2 a short overview on real-time Linux and, in particular, on the real-time variant adopted, RTAI-Linux, is presented. Then, in Sec. B.3, the experimental setup is introduced: both hardware and software architectures are described. Finally, in Sec. B.4, an advanced application with a vision subsystem and a 3-dof gripper is presented.

B.2 Real-time Linux: a quick overview

B.2.1 Short introduction to real-time systems

The first thing to clarify is what a real-time system is. A first definition can be: "A real-time operating system is able to execute all of its tasks without violating specified timing constraints", while a second one, trying to explain the meaning of real-time, can be: "Times at which tasks will execute can be predicted deterministically on the basis of knowledge about the system's hardware and software". That means: if the hardware can do the job, the RT-OS software will do the job deterministically (Bruyninckx, 2003).

A real-time system must be fast and predictable. With fast, we mean that it has to be characterized by low latency, that is, it has to be able to respond to external, asynchronous events in a short time. Clearly, the lower the latency, the better the system will respond to events which require immediate attention. Moreover, with predictable we mean that it is able to determine a task's completion time with certainty. A typical example of a real-time system can be a computer controlling system that manages and coordinates the activities of a controlled system. The controlled system can be interpreted as the environment with which the computer interacts. The interaction is bidirectional, e.g. through various sensors (environment to computer) and actuators (computer to environment), and it is characterized by timing correctness constraints. In a real-time system, time-critical and non-time-critical activities coexist: both are called tasks, and a task with a timeliness requirement is called a real-time task.
Typically, real-time tasks have the following types of requirements and/or constraints:

- timing constraints: the most common are either periodic or aperiodic. An aperiodic task has a deadline by which it must finish or start, or it may have a constraint on both start and

finish times. A periodic task has to be repeated once per period. Most sensory processing is periodic, while aperiodic requirements can arise from dynamic events;

- resource requirements: a real-time task may require access to certain resources such as I/O devices, data structures, files and databases;

- communication requirements: tasks should be allowed to communicate with messages;

- concurrency constraints: tasks should be allowed concurrent access to common resources, provided the consistency of the resource is not violated.

More details on real-time systems can be found in (Bruyninckx, 2003).

B.2.2 RTAI-Linux

RTAI means Real Time Application Interface. More precisely, it is not a real-time operating system, such as VxWorks or QNX, since it is based on the Linux kernel, providing the ability to make it fully pre-emptable. Linux is a standard time-sharing operating system providing good average performance and highly sophisticated services. Like other operating systems, it offers to the applications at least the following services: a hardware management layer dealing with event polling or processor/peripheral interrupts; scheduler classes dealing with process activation, priorities and time slices; communication means among applications.

Linux suffers from a lack of real-time support. To obtain a timing correctness behavior, it is necessary to make some changes in the kernel sources, i.e. in the interrupt handling and scheduling policies. In this way, it is possible to have a real-time platform, with low latency and high predictability, within a full non-real-time Linux environment (access to TCP/IP, graphical display and windowing systems, file and database systems, etc.). RTAI offers the same services of the Linux kernel core, adding the features of an industrial real-time operating system. It consists basically of an interrupt dispatcher: RTAI mainly traps the peripheral interrupts and, if necessary, re-routes them to Linux. It is not an intrusive modification of the kernel; it uses the concept of HAL (hardware abstraction layer) to get information from Linux and to trap some fundamental functions. This HAL introduces only few dependencies on the Linux kernel. This leads to a simple adaptation of the Linux kernel, an easy RTAI port from version to version of Linux and an easier use of other operating systems instead of RTAI. RTAI considers Linux as a background task running when no real-time activity occurs. Further information on Linux-based real-time systems can be found in (RTAI-Linux web site, 2003; RTLinux web site, 2003; Bruyninckx, 2003); the project web sites are also a good starting point for an Internet search.

B.3 An experimental setup for robotics

B.3.1 General overview

The main component of the experimental set-up is a Comau SMART 3-S robot, see Fig. B.1(a). This is a standard industrial 6-degrees-of-freedom anthropomorphic manipulator with a

non-spherical wrist. Each joint is actuated by a DC-brushless motor, and its angular position is measured by a resolver. The robot is equipped with the standard controller C3G-9000. Basically, this controller is composed of three parts: a user interface, a control unit and a driver unit. The control unit consists of a Motorola VME bus rack with two boards. The first board (SCC) is equipped with a DSP and carries out all the control tasks (trajectory planning, direct and inverse kinematics, etc.), while the second board (RBC) is responsible for the man-machine interface and for interpreting PDL2 user programs. A shared memory area is available on this board: this is a memory area accessible from each board connected to the VME bus.

Figure B.1: Comau SMART 3-S robot and A.S.I. Gripper.

In the experimental set-up available at L.A.R., the C3G-9000 controller is open, that is, the VME bus is connected with an ISA-PC bus via a pair of Bit3 boards, one inside the controller and the other inside a PC, running under RTAI-Linux, that implements the real-time control algorithms. The two boards are connected with a high-speed cable. Data exchange between PC and controller is possible via the shared memory area on the RBC board inside the controller, and synchronization can be achieved by an interrupt signal generated by the controller itself. In this configuration, the position and velocity loops managed by the C3G-9000 are opened, and all the safety protections are disabled. As a matter of fact, the controller is only used as an interface between the resolvers and drives on the robot and the PC. Therefore, in each sampling period the real-time control system running on the PC must acquire the data from the encoders, compute the new control input for the actuators and send their values to the C3G-9000.

On the robot wrist, besides a standard force/torque sensor, a vision system and the A.S.I. Gripper (Biagiotti et al., 2000) are installed, see Fig. B.1(b). The vision system is used to provide visual information about the environment and, in particular, about objects within the workspace

of the robot. This information is needed to track an object moving in the robot workspace, to move the robot to a desired position in order to grasp the object with the gripper and to automatically calculate the optimal grasping configuration. The vision system consists of a monocular camera, connected to a frame-grabber board that can be installed either on the same PC that implements the robot control algorithms or on a dedicated workstation connected to the PC that controls the manipulator via a client/server application over TCP/IP. By properly moving the robot arm and, at the same time, acquiring images of the object from different points of view, the vision algorithm can give a good estimate of the distance of the object from the robot wrist. This information is essential to correctly move the robot in order to grasp the object with the gripper. Moreover, by means of the vision system, the shape of the object is recognized in order to calculate the best grasping points (target points). The object is caught at these points and the contact forces are properly controlled through the force/torque sensors on the gripper fingers.

Figure B.2: Selection of the fingers' target points.

Generally, the target points are selected on the basis of a kinematic analysis of a first order model: the resulting points do not depend on the shape of the object and on the geometric characteristics of the gripper. In our case, the target points are calculated by means of a kinematic analysis of a second order model that takes into account also the shape of the object and of the gripper's fingers: in this manner, the resulting grasping configuration is more stable. An example is presented in Fig. B.2. According to a first order analysis, the configurations presented in (a) and (b) are equivalent, but they differ if a second order analysis is carried out. Clearly, configuration (b) is more stable. Generally speaking, it is possible to show that two grasping configurations, which are equivalent if evaluated according to considerations based on first order models, can differ if a second order model is used.

The A.S.I. Gripper has three degrees of freedom and is particularly suited for no-gravity manipulation tasks in space applications, since it can interact with free-floating and irregularly shaped objects. Its control algorithms are executed on a custom DSP board (based on the TMS320C32 chip) that is installed, at the moment, in a third PC. For this board, a loader and a DSP-monitor have been developed under Linux, together with some simple drivers for the DSP board. Fig. B.3 provides a general overview of the setup.

Figure B.3: The experimental setup: a general overview.

B.3.2 Real-time control of the robot

The software developed for the control of the Comau SMART 3-S is divided into two distinct modules: a real-time module, which executes the control algorithms and communicates directly with the plant (robot), and a set of non-real-time applications providing a user interface for the real-time module. Clearly, some communication mechanisms exist to provide an information exchange between the two parts. Moreover, the real-time module is periodically activated by an external interrupt signal generated by the C3G-9000 controller.

The real-time module

The starting point in the development of the real-time control module for the SMART 3-S robot was to create a modular and flexible structure, in order to allow quick testing of new control algorithms and fast implementation of new robot applications. The code is divided into three main sub-modules, providing: communication with the SMART 3-S robot; security tests; implementation of the robot control algorithms.

The communication module is used for reading and writing data on the shared memory area in the C3G-9000 controller. In particular, this module reads the six joint positions and writes the six current set-points for the DC drives. Moreover, it implements the drive on/off function. This is the only module that gets access to the shared area on the RBC board in the controller. In presence of an externally generated interrupt (in this case, the synchronization interrupt is originated by the robot controller), RTAI does not automatically save the FPU context before a task switch, so it is this module that has to provide this function.
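To fix ideas, the structure of one control cycle performed by these three sub-modules (the security and control sub-modules are described below) can be sketched as follows. This is only a simplified, hypothetical sketch and not the code actually running on the setup: every helper function is a placeholder standing for the shared-memory access on the RBC board or for the control and security algorithms.

/* Hypothetical sketch of one control cycle (not the project code).        */
#define N_JOINTS 6

/* communication sub-module: access to the C3G-9000 shared memory          */
static void read_joint_positions(double q[N_JOINTS])          { (void)q; }
static void write_current_setpoints(const double u[N_JOINTS]) { (void)u; }

/* control sub-module: regulation law and trajectory generation            */
static void control_law(const double q[N_JOINTS], double u[N_JOINTS]) { (void)q; (void)u; }

/* security sub-module: range/speed/current checks; returns 0 if the cycle
 * may proceed, possibly after saturating the current set-points           */
static int security_checks(const double q[N_JOINTS], double u[N_JOINTS]) { (void)q; (void)u; return 0; }

/* Executed once at every synchronization interrupt raised by the C3G-9000 */
void control_cycle(void)
{
    double q[N_JOINTS];   /* joint positions read from the resolvers */
    double u[N_JOINTS];   /* current set-points for the DC drives    */

    read_joint_positions(q);
    control_law(q, u);
    if (security_checks(q, u) != 0)
        return;           /* robot stopped, drives switched off      */
    write_current_setpoints(u);
}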

The security module implements software range delimiters (saturations). Two kinds of range delimiters have been implemented: absolute and relative. The relative saturation is really useful when we want to check, without any risk, the stability of a control algorithm around a certain configuration. Moreover, the security module limits the joint speeds and checks whether some of the joints are blocked or whether the current set-points are too high. In particular, the last two tests are needed to prevent drive damage. If one of the first three tests is not passed, then the robot is stopped; as regards the last one, we only execute a software saturation to the maximum allowed current value.

The control module implements the control algorithms and, in particular, it is responsible for trajectory planning, both in joint and Cartesian space, and for robot regulation. The sub-module that implements the regulation algorithms may change according to the control scheme under development, e.g. decentralized control, multi-variable centralized control, and so on. As far as trajectory planning is concerned, since in most of the applications (e.g. with the vision system) the desired trajectory is not known before the execution of a task, we have implemented a trajectory generator using a non-linear filter with constraints on maximum speed and acceleration (Zanasi and Morselli, 2001).

All these real-time functions are compiled in a kernel module and dynamically linked to the real-time kernel of the operating system. Since the user needs to interact with the robot, some communication channels between kernel space and user space are needed. Each module can exchange data with the user-space applications through its own channels. Since in all the situations the amount of data that we send from/to the kernel module is not large, we decided to implement all the communication channels using FIFOs (a minimal sketch of one of these channels is reported below). This solution provides robustness and a built-in coordination/synchronization mechanism between sender and receiver. From the user space it is possible to send the drive on/off command to the communication module of the real-time process, and it is possible to change at run-time some parameters in the security module (this is a function that must be disabled when testing new control schemes). Concerning the control module, it can receive commands from user-space applications and send back to them information about internal variables of the robot (e.g. joint positions and drive currents). In this manner, it is possible to move the robot with a keyboard, a mouse or a joystick, and to interface it with other applications for state-tracing or for movements under vision system control. Fig. B.4 gives a general overview of the real-time module and of the communication channels with user-space applications.

The user-space applications

The user-space software can be divided into two main categories: user-to-robot and application-to-robot interface applications; robot-state-monitoring applications.

With user-to-robot we mean any application that helps the user to move the robot. The Virtual Teach-Pendant and the Mouse-Interface belong to this category: with them the user can easily move the robot in joint and Cartesian space. In the latter case, the reference frame can be chosen either on the base of the robot or on the end-effector. Moreover, an interface with a joystick is provided.
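As an illustration of the FIFO channels mentioned above, the following is a minimal, hypothetical sketch of a kernel-to-user channel publishing the robot state. It assumes the rtf_create/rtf_put/rtf_destroy calls of the RTAI fifo API (as found in the RTAI headers of that period); the FIFO number, the data layout and the function names are illustrative and do not correspond to the actual project code. A user-space application would then simply open /dev/rtf0 and read fixed-size records with ordinary POSIX calls.

/* Hypothetical kernel-side sketch of one FIFO channel (not the project code). */
#include <linux/module.h>
#include <rtai_fifos.h>

#define STATE_FIFO 0                 /* seen from user space as /dev/rtf0 */
#define N_JOINTS   6

struct robot_state {
    double q[N_JOINTS];              /* joint positions                   */
    double u[N_JOINTS];              /* current set-points                */
};

static struct robot_state state;

/* Called by the communication sub-module at every synchronization
 * interrupt: push the latest robot state towards user space.             */
void publish_state(void)
{
    rtf_put(STATE_FIFO, &state, sizeof(state));   /* non-blocking write   */
}

int init_module(void)
{
    /* room for a few records; user-space readers block on /dev/rtf0      */
    return rtf_create(STATE_FIFO, 8 * sizeof(struct robot_state));
}

void cleanup_module(void)
{
    rtf_destroy(STATE_FIFO);
}

MODULE_LICENSE("GPL");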
With application-to-robot interface we mean any application that provides a basic interface for a stand-alone software package that needs to communicate with the real-time module and, clearly, with

the robot itself. For example, by means of this interface the vision system can send commands to the Comau SMART 3-S in order to follow a user-selected object in the workspace. When robot and vision systems are connected over the Internet, a simple client/server application has been developed: the client application runs on the RTAI-Linux system and the server on the Linux PC that manages the vision tasks.

Figure B.4: The real-time module: organization and user-space communication channels.

In control applications, it is important to be able to check the state of the plant and the behavior of the controller: every application needs a proper monitoring software that should be easily configurable and with a user-friendly interface. Following these criteria, we have equipped our system with a specific monitoring application: RTMon. Using the FIFO channels between user and kernel space, RTMon can display and continuously plot the state of the internal variables of the robot (angular joint positions, workspace position, set-points, drive currents, etc.) and save the desired data in a Matlab-compliant format.

B.4 Working with the vision system and the A.S.I. Gripper

The vision system consists of a monocular video-camera mounted on the robot's wrist. The related software runs as a user-space application and communicates with the real-time (kernel) module by means of a set of commands, in order to make the robot execute specific tasks/movements. In Fig. B.5, a screen-shot of the vision software with its main window, Robotic Vision, is presented. Moreover, the TP window is the Virtual Teach-Pendant mentioned above, while Pos:Giunti is another tool that allows the user to move the robot acting on each joint separately. The scene framed by the video-camera is reproduced in the COMAU's eye window.

A typical task for this system is to automatically grasp a user-selected object within the robot's workspace by using the gripper mounted on the robot's wrist. Moreover, the grasp has to be optimal and stable. This procedure consists of two main steps:

- the evaluation of the object distance from the robot wrist;

- the determination of the best grasping configuration.

After that, the robot is automatically moved in order to position the gripper such that the object can be caught in the most suitable way. In the following subsections, a detailed description of these two main steps is presented.

Figure B.5: Screen-shot of the vision software.

B.4.1 Distance evaluation

The first step is to move the robot in order to frame, with the video-camera, the object that has to be caught by the gripper. Once the object is selected by the operator, its geometric gravity center (GGC) is calculated and, by sending proper pitch and yaw commands to the real-time module that controls the robot, the video-camera optical axis is aligned with the GGC. Since the vision system is not stereo, an estimate of the actual distance between object and end-effector is calculated by moving the robot along the camera-object direction and taking (at least) two pictures of the object itself. The first picture is taken in the actual position of the robot, the second one after a negative approach movement is executed. This is necessary since the distance from the object is unknown. From a comparison of the same object framed from two different but aligned points of view, a first estimate of the unknown distance is calculated.

Analyzing some experimental results, it has been determined that the best performance is achieved if the distance between video-camera and object is about 35 cm. For this reason, performance can be improved by taking (if necessary) more than two pictures of the same object. In particular, if the first estimate provides a distance between 32.5 and 37.5 cm, then this estimate is assumed to be correct. If this is not the case, the robot executes an approach movement in order to bring the camera to an (estimated) distance of 35 cm from the object, and the previous procedure is repeated. The result is assumed to be correct if it belongs to the range indicated above.

Once the position of the object with respect to the video-camera and, clearly, to the gripper is determined, a first approach movement is executed in order to position the video-camera at

20 cm from the object. At this point, a procedure that analyzes the object and calculates the best contact points in order to catch the object with the gripper is started. More details are given in Sec. B.4.2. For now, assume that it is known how the object has to be grasped. Then, by means of normal and slide commands, the center of the gripper is aligned with the GGC of the object. Finally, the gripper is rotated and its fingers can grasp the object at the desired contact points (Guidetti, 2002).

Figure B.6: Calculation of the best grasping configuration: (a) object to be grasped; (b) finger configuration and contact points.

B.4.2 Evaluation of the optimal grasping configuration

The only information the system owns about the object to be grasped is provided by the video-camera. A typical situation is represented in Fig. B.6(a). The first step is to extract the contour of the object by means of the well-known Canny algorithm and to calculate its GGC. Since the Canny algorithm provides only a bitmap describing the object contour, it is necessary to order these points in order to obtain a discrete parametrization of the border of the object. This procedure is called edge tracking: more details about the one we implemented can be found in (Carloni, 2002). Given the border parametrization, it is possible to extract all the geometric information that is useful for the determination of the best grasping configuration, that is, the set of tangent and normal vectors and the curvature at a given point. The vector t_i tangent to the border at the point P_i is assumed to lie along the line passing through P_i and P_{i+1}, while the normal vector n_i and the curvature r_i are determined by calculating the circumference passing through P_{i-1}, P_i and P_{i+1} (a minimal sketch of this computation is reported at the end of this section).

Starting from this set of geometric parameters that describes the border of the object, the algorithm is able to provide the three contact points for which an index describing the performance of the corresponding grasp is maximum. Clearly, the resulting contact points have to be compatible with the mechanical characteristics of the A.S.I. Gripper. In particular, since the contacts with the grasped object can occur along three intersecting lines equally spaced by 120 degrees, the resulting configuration has to respect this mechanical constraint. In Fig. B.6(b), the best grasping configuration calculated for the object of Fig. B.6(a), using only the

information provided by the video-camera, is presented. The contact points are compatible with the mechanical architecture of the gripper. The mathematical details can be found in (Rimon and Burdick, 1998a; Rimon and Burdick, 1998b).

In conclusion, the typical task of grasping an object selected by the user within the robot's workspace by means of the gripper mounted on its wrist can be divided into five main steps.

i) The user moves the robot using the keyboard, mouse or joystick until the vision system frames the object to be grasped;

ii) The vision system automatically moves the robot in order to align the end-effector (i.e. the gripper) with the object. The vision algorithms can deal even with slowly moving objects, compatibly with the computational load required to manage the visual information;

iii) Since an estimate of the actual distance between object and end-effector is needed and the vision system is not stereo, the robot is moved along the camera-object direction in order to take two pictures of the object and then calculate the unknown distance;

iv) Using the distance information, the robot is moved in order to reach a given distance from the object;

v) Finally, the right position for the grasp of the object is reached, based on second order stability considerations.

The robot movements in steps ii) and iv) are managed by the vision system by sending roll-pitch-yaw and normal-slide-approach commands to the manipulator in order to reach the object aligned with the camera. These operations are executed safely, and further modifications of the control system, as well as the development of new tasks, can be easily accomplished with this control architecture.
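As anticipated in Sec. B.4.2, the curvature at a contour point can be estimated from the circle through three consecutive points P_{i-1}, P_i, P_{i+1}, whose radius is R = abc/(4K), with a, b, c the side lengths of the triangle they form and K its area. The following is a minimal, hypothetical sketch of this computation; names and data types are illustrative and do not correspond to the actual vision code.

/* Hypothetical sketch of the curvature estimate (not the project code).    */
#include <math.h>

struct point { double x, y; };

static double dist(struct point p, struct point q)
{
    return hypot(q.x - p.x, q.y - p.y);
}

/* Returns the curvature 1/R at the central point, or 0.0 when the three
 * points are collinear (locally flat portion of the border).               */
double contour_curvature(struct point prev, struct point curr, struct point next)
{
    double a = dist(prev, curr);
    double b = dist(curr, next);
    double c = dist(next, prev);

    /* Twice the signed triangle area, from the cross product of two sides. */
    double cross = (curr.x - prev.x) * (next.y - prev.y)
                 - (curr.y - prev.y) * (next.x - prev.x);
    double area = 0.5 * fabs(cross);

    if (area == 0.0)
        return 0.0;                  /* collinear points: infinite radius   */

    return (4.0 * area) / (a * b * c);   /* 1/R = 4K / (abc)                */
}

The normal vector n_i then points from P_i towards the center of the same circle.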


Bibliography

Anderson, J. A. and M. W. Spong (1989). Bilateral control of teleoperators with time delay. IEEE Trans. on Automatic Contr.

Arimoto, S. and F. Miyazaki (1984). Stability and robustness of PID feedback control for robot manipulators of sensory capability. In: Robotics Res.: The First Int. Symp. (M. Brady and R. Paul, Eds.). MIT Press.

Barabanov, M. (1997). A Linux-based real-time operating system. Master's thesis. New Mexico Institute of Mining and Technology, Socorro, New Mexico.

Biagiotti, L., C. Melchiorri and G. Vassura (2000). Control of a robotic gripper for grasping objects in no-gravity conditions. ICRA'01, IEEE Int. Conf. on Robotics and Automation, Seoul, Korea.

Bianchi, E., L. Dozio, G. L. Ghiringhelli and P. Mantegazza (1999). Complex control systems, applications of DIAPM-RTAI at DIAPM. In: Proc. Realtime Linux Workshop, Vienna.

Bruyninckx, H. (2003). Real-Time and Embedded Guide. K. U. Leuven, Mechanical Engineering. Freely available online.

Byrnes, C. I., A. Isidori and J. C. Willems (1991). Passivity, feedback equivalence, and the global stabilization of minimum phase nonlinear systems. IEEE Transactions on Automatic Control 36(11).

Carloni, R. (2002). Manipolazione robotica: strategie di presa ottima per oggetti planari. Master's thesis. Università degli Studi di Bologna, Facoltà di Ingegneria, DEIS. In Italian.

Curtain, R. F. and H. J. Zwart (1995). An introduction to infinite dimensional linear systems theory. Springer Verlag, New York.

Dalsmo, M. and A. J. van der Schaft (1999). On representation and integrability of mathematical structures in energy-conserving physical systems. SIAM J. Control and Optimization (37).

Dong-Hua, S. and F. De-Xing (1999). Exponential stabilization of the Timoshenko beam with locally distributed feedback. In: Proc. 14th IFAC World Congress, Beijing, P. R. China.

Dubrovin, B. A., A. T. Fomenko and S. P. Novikov (1992). Modern geometry. Methods and applications. Springer Verlag.

Guidetti, M. (2002). Integrazione di visione artificiale e controllo real-time per un robot industriale. Master's thesis. Università degli Studi di Bologna, Facoltà di Ingegneria, DEIS. In Italian.

Hill, D. and P. Moylan (1976). The stability of nonlinear dissipative systems. IEEE Trans. Automat. Control 21.

Ingarden, R. S. and A. Jamiolkowsky (1985). Classical Electrodynamics. PWN-Polish Sc. Publ., Warszawa, Elsevier.

Jurdjevic, V. and P. J. Quinn (1978). Controllability and stability. J. Diff. Equations 28.

Khalil, H. K. (1996). Nonlinear systems. Prentice Hall, Upper Saddle River, NJ.

Kim, J. U. and Y. Renardy (1987). Boundary control of the Timoshenko beam. SIAM J. Contr. and Opt.

Lax, P. D. and R. S. Phillips (1967). Scattering Theory. Pure and Applied Mathematics. Academic Press, New York.

Lewis, A., C. T. Abdallah and D. M. Dawson (1993). Control of Robot Manipulators. Macmillan Publ. Co.

Luo, Z. H., B. Z. Guo and O. Morgul (1999). Stability and stabilization of infinite dimensional systems with applications. Springer Verlag, London.

Macchelli, A. and C. Melchiorri (2003a). Control by interconnection of the Timoshenko beam. In: Proc. 2nd IFAC Workshop on Lagrangian and Hamiltonian Methods for Nonlinear Control.

Macchelli, A. and C. Melchiorri (2003b). Distributed port Hamiltonian formulation of the Timoshenko beam: modeling and control. In: Proc. 4th MATHMOD, Vienna.

Macchelli, A., C. Melchiorri and D. Arduini (2000). Real-time Linux control of a haptic interface for visually impaired persons. In: Proc. IFAC Symposium on Robot Control, SYROCO'00.

Macchelli, A., C. Melchiorri, C. Secchi and C. Fantuzzi (2003). A variable structure approach to energy shaping. Submitted to the 2003 European Control Conference, ECC'03, September, University of Cambridge, UK.

Macchelli, A., S. Stramigioli, A. J. van der Schaft and C. Melchiorri (2002a). Considerations on the zero-dynamics of port Hamiltonian systems and application to passive implementation of sliding mode control. In: Proc. 15th IFAC World Congress on Automatic Control.

Macchelli, A., S. Stramigioli, A. J. van der Schaft and C. Melchiorri (2002b). Scattering for infinite dimensional port Hamiltonian systems. In: Proc. IEEE 2002 Conference on Decision and Control.

Mantegazza, P. (2003). Dissecting DIAPM RTHAL-RTAI. Technical report. Dipartimento di Ingegneria Aerospaziale, Politecnico di Milano. Available online.

Marsden, J. E. and T. S. Ratiu (1994). Introduction to mechanics and symmetry. Springer, New York.

Marsden, J. E. and T. S. Ratiu (2001). Geometry of nonlinear systems. Freely available online.

Maschke, B. and A. J. van der Schaft (2000). Port controlled Hamiltonian representation of distributed parameter systems. In: Workshop on Modeling and Control of Lagrangian and Hamiltonian Systems.

Maschke, B. and A. J. van der Schaft (2001). Fluid dynamical systems as Hamiltonian boundary control systems. In: Proc. of the 40th IEEE Conference on Decision and Control. Vol. 5.

Maschke, B. and A. J. van der Schaft (1992). Port controlled Hamiltonian systems: modeling origins and system theoretic properties. In: Proceedings of the Third Conference on Nonlinear Control Systems (NOLCOS).

RTAI-Linux web site (2003).

RTLinux web site (2003).

Niemeyer, G. and J. E. Slotine (1991). Stable adaptive teleoperation. IEEE Journal of Oceanic Eng. 16(1).

Nijmeijer, H. and A. J. van der Schaft (1991). Nonlinear Dynamical Control Systems. Springer-Verlag, London, UK.

Olver, P. J. (1993). Applications of Lie groups to differential equations. Springer Verlag.

Ortega, R., A. J. van der Schaft, B. Maschke and G. Escobar (1999). Energy shaping of port controlled Hamiltonian systems by interconnection. In: Proc. IEEE Conf. Dec. and Control.

Ortega, R., A. J. van der Schaft, B. Maschke and G. Escobar (2000). Interconnection and damping assignment passivity-based control of port-controlled Hamiltonian systems. Automatica 38.

Ortega, R., A. J. van der Schaft, I. Mareels and B. Maschke (2001). Putting energy back in control. IEEE Control Systems Magazine.

Ortega, R., A. Loria, P. J. Nicklasson and H. Sira-Ramirez (1998). Passivity-based control of Euler-Lagrange systems. Springer Verlag, Berlin, Germany.

Paynter, H. M. (1961). Analysis and design of engineering systems. The M.I.T. Press, Cambridge, Massachusetts.

Rimon, E. and J. W. Burdick (1998a). Mobility of bodies in contact. I. A 2nd-order mobility index for multiple-finger grasps. IEEE Transactions on Robotics and Automation 14(5).

Rimon, E. and J. W. Burdick (1998b). Mobility of bodies in contact. II. How forces are generated by curvature effects. IEEE Transactions on Robotics and Automation 14(5).

Rodriguez, H., A. J. van der Schaft and R. Ortega (2001). On stabilization of nonlinear distributed parameter port-controlled Hamiltonian systems via energy shaping. In: Proc. of the 40th IEEE Conference on Decision and Control. Vol. 1.

Sira-Ramirez, H. (1988). Differential geometric methods in variable-structure control. Int. J. Control 48(4).

Sira-Ramirez, H. (1999). A general canonical form for sliding-mode control of non-linear systems. In: Proc. ECC'99, Karlsruhe, Germany.

Stramigioli, S. (1999). Modern control of physical systems. Personal notes on modeling through bond graphs.

Stramigioli, S. (2001). Modeling and IPC Control of Interactive Mechanical Systems: a coordinate-free approach. Springer, London.

Stramigioli, S., A. J. van der Schaft, B. Maschke and C. Melchiorri (2002). Geometric scattering in robotic telemanipulation. IEEE Transactions on Robotics and Automation 18.

Stramigioli, S., A. J. van der Schaft, B. Maschke, S. Andreotti and C. Melchiorri (2000). Geometric scattering in telemanipulation of generalized port controlled Hamiltonian systems. In: Proc. 39th IEEE Conference on Decision and Control, Sydney.

Swaters, G. E. (2000). Introduction to Hamiltonian fluid dynamics and stability theory. Chapman & Hall / CRC.

Takegaki, M. and S. Arimoto (1981). A new feedback method for dynamic control of manipulators. ASME J. Dyn. Syst. Meas. Cont. 102.

Taylor, S. W. (1997). Boundary control of the Timoshenko beam with variable physical characteristics. Technical report. University of Auckland, Department of Mathematics.

Utkin, V. I. (1978). Sliding regimes in the theory of variable structure systems. MIR, Moscow, Russia.

van der Schaft, A. J. (2000). L2-Gain and Passivity Techniques in Nonlinear Control. Communication and Control Engineering. Springer Verlag.

Warnick, K. F. and D. V. Arnold (1996). Green forms for anisotropic, inhomogeneous media. Journal of Electromagnetic Waves and Applications.

Yodaiken, V. (1999). The RTLinux Manifesto. Department of Computer Science, New Mexico Institute of Technology, Socorro, New Mexico.

Zanasi, R. and R. Morselli (2001). Second order smooth trajectory generator with nonlinear constraints. In: Proc. of European Control Conf. ECC'01, Oporto, Portugal.

Curriculum Vitae

Alessandro Macchelli was born in Bologna, Italy, on the 1st of January. Before attending the University, he followed the Scientific Lyceum "Augusto Righi" in Bologna, gaining the diploma. Then, he studied Computer Science Engineering at the University of Bologna, where he graduated cum laude in 2000 with a robotic project called the ViDet Project. In May 2000, he began his PhD (XV cycle) and, consequently, his research activity within D.E.I.S., the Department of Electronics, Computer Science and Systems of the University of Bologna, under the supervision of Prof. Claudio Melchiorri. From July to December 2001 he was a visiting scholar at the Department of Applied Mathematics (TW) of the University of Twente (NL), under the supervision of Prof. Arjan van der Schaft and of Prof. Stefano Stramigioli of the Drebbel Institute. His visit was sponsored by the European Project NACO2. His research activity has been focused on finite and infinite dimensional port Hamiltonian systems and, in particular, on the control problem. Furthermore, he has been active in the field of real-time systems, in particular in the development of control applications for industrial automation based on real-time Linux. He is an active member of the European Project GeoPlex and a second-level partner of the European Project Orocos.


More information

The Geometry Underlying Port-Hamiltonian Systems

The Geometry Underlying Port-Hamiltonian Systems The Geometry Underlying Port-Hamiltonian Systems Pre-LHMNC School, UTFSM Valparaiso, April 30 - May 1, 2018 Arjan van der Schaft Jan C. Willems Center for Systems and Control Johann Bernoulli Institute

More information

Representation of a general composition of Dirac structures

Representation of a general composition of Dirac structures Representation of a general composition of Dirac structures Carles Batlle, Imma Massana and Ester Simó Abstract We provide explicit representations for the Dirac structure obtained from an arbitrary number

More information

Some of the different forms of a signal, obtained by transformations, are shown in the figure. jwt e z. jwt z e

Some of the different forms of a signal, obtained by transformations, are shown in the figure. jwt e z. jwt z e Transform methods Some of the different forms of a signal, obtained by transformations, are shown in the figure. X(s) X(t) L - L F - F jw s s jw X(jw) X*(t) F - F X*(jw) jwt e z jwt z e X(nT) Z - Z X(z)

More information

Contents. Dynamics and control of mechanical systems. Focus on

Contents. Dynamics and control of mechanical systems. Focus on Dynamics and control of mechanical systems Date Day 1 (01/08) Day 2 (03/08) Day 3 (05/08) Day 4 (07/08) Day 5 (09/08) Day 6 (11/08) Content Review of the basics of mechanics. Kinematics of rigid bodies

More information

Modeling and Experimentation: Compound Pendulum

Modeling and Experimentation: Compound Pendulum Modeling and Experimentation: Compound Pendulum Prof. R.G. Longoria Department of Mechanical Engineering The University of Texas at Austin Fall 2014 Overview This lab focuses on developing a mathematical

More information

Introduction to Control (034040) lecture no. 2

Introduction to Control (034040) lecture no. 2 Introduction to Control (034040) lecture no. 2 Leonid Mirkin Faculty of Mechanical Engineering Technion IIT Setup: Abstract control problem to begin with y P(s) u where P is a plant u is a control signal

More information

ECEN 420 LINEAR CONTROL SYSTEMS. Lecture 6 Mathematical Representation of Physical Systems II 1/67

ECEN 420 LINEAR CONTROL SYSTEMS. Lecture 6 Mathematical Representation of Physical Systems II 1/67 1/67 ECEN 420 LINEAR CONTROL SYSTEMS Lecture 6 Mathematical Representation of Physical Systems II State Variable Models for Dynamic Systems u 1 u 2 u ṙ. Internal Variables x 1, x 2 x n y 1 y 2. y m Figure

More information

Passive control. Carles Batlle. II EURON/GEOPLEX Summer School on Modeling and Control of Complex Dynamical Systems Bertinoro, Italy, July

Passive control. Carles Batlle. II EURON/GEOPLEX Summer School on Modeling and Control of Complex Dynamical Systems Bertinoro, Italy, July Passive control theory II Carles Batlle II EURON/GEOPLEX Summer School on Modeling and Control of Complex Dynamical Systems Bertinoro, Italy, July 18-22 2005 Contents of this lecture Interconnection and

More information

Introduction to Controls

Introduction to Controls EE 474 Review Exam 1 Name Answer each of the questions. Show your work. Note were essay-type answers are requested. Answer with complete sentences. Incomplete sentences will count heavily against the grade.

More information

Port-Hamiltonian Based Modelling.

Port-Hamiltonian Based Modelling. Modelling and Control of Flexible Link Multi-Body Systems: Port-Hamiltonian Based Modelling. Denis MATIGNON & Flávio Luiz CARDOSO-RIBEIRO denis.matignon@isae.fr and flavioluiz@gmail.com July 8th, 2017

More information

Interconnection of port-hamiltonian systems and composition of Dirac structures

Interconnection of port-hamiltonian systems and composition of Dirac structures Automatica 43 (7) 1 5 www.elsevier.com/locate/automatica Interconnection of port-hamiltonian systems and composition of Dirac structures J. Cervera a,1, A.J. van der Schaft b,c,,, A. Baños a3 a Departemento

More information

Kai Sun. University of Michigan, Ann Arbor. Collaborators: Krishna Kumar and Eduardo Fradkin (UIUC)

Kai Sun. University of Michigan, Ann Arbor. Collaborators: Krishna Kumar and Eduardo Fradkin (UIUC) Kai Sun University of Michigan, Ann Arbor Collaborators: Krishna Kumar and Eduardo Fradkin (UIUC) Outline How to construct a discretized Chern-Simons gauge theory A necessary and sufficient condition for

More information

where C f = A ρ g fluid capacitor But when squeezed, h (and hence P) may vary with time even though V does not. Seems to imply C f = C f (t)

where C f = A ρ g fluid capacitor But when squeezed, h (and hence P) may vary with time even though V does not. Seems to imply C f = C f (t) ENERGY-STORING COUPLING BETWEEN DOMAINS MULTI-PORT ENERGY STORAGE ELEMENTS Context: examine limitations of some basic model elements. EXAMPLE: open fluid container with deformable walls P = ρ g h h = A

More information

1000 Solved Problems in Classical Physics

1000 Solved Problems in Classical Physics 1000 Solved Problems in Classical Physics Ahmad A. Kamal 1000 Solved Problems in Classical Physics An Exercise Book 123 Dr. Ahmad A. Kamal Silversprings Lane 425 75094 Murphy Texas USA anwarakamal@yahoo.com

More information

Dr Ian R. Manchester Dr Ian R. Manchester AMME 3500 : Review

Dr Ian R. Manchester Dr Ian R. Manchester AMME 3500 : Review Week Date Content Notes 1 6 Mar Introduction 2 13 Mar Frequency Domain Modelling 3 20 Mar Transient Performance and the s-plane 4 27 Mar Block Diagrams Assign 1 Due 5 3 Apr Feedback System Characteristics

More information

Follow links Class Use and other Permissions. For more information, send to:

Follow links Class Use and other Permissions. For more information, send  to: COPYRIGHT NOTICE: Stephen L. Campbell & Richard Haberman: Introduction to Differential Equations with Dynamical Systems is published by Princeton University Press and copyrighted, 2008, by Princeton University

More information

Port-Hamiltonian Systems: from Geometric Network Modeling to Control

Port-Hamiltonian Systems: from Geometric Network Modeling to Control Port-Hamiltonian Systems: from Geometric Network Modeling to Control, EECI, April, 2009 1 Port-Hamiltonian Systems: from Geometric Network Modeling to Control Arjan van der Schaft, University of Groningen

More information

Inverse differential kinematics Statics and force transformations

Inverse differential kinematics Statics and force transformations Robotics 1 Inverse differential kinematics Statics and force transformations Prof Alessandro De Luca Robotics 1 1 Inversion of differential kinematics! find the joint velocity vector that realizes a desired

More information

BACKGROUND IN SYMPLECTIC GEOMETRY

BACKGROUND IN SYMPLECTIC GEOMETRY BACKGROUND IN SYMPLECTIC GEOMETRY NILAY KUMAR Today I want to introduce some of the symplectic structure underlying classical mechanics. The key idea is actually quite old and in its various formulations

More information

Robotics & Automation. Lecture 25. Dynamics of Constrained Systems, Dynamic Control. John T. Wen. April 26, 2007

Robotics & Automation. Lecture 25. Dynamics of Constrained Systems, Dynamic Control. John T. Wen. April 26, 2007 Robotics & Automation Lecture 25 Dynamics of Constrained Systems, Dynamic Control John T. Wen April 26, 2007 Last Time Order N Forward Dynamics (3-sweep algorithm) Factorization perspective: causal-anticausal

More information

AP PHYSICS 2 FRAMEWORKS

AP PHYSICS 2 FRAMEWORKS 1 AP PHYSICS 2 FRAMEWORKS Big Ideas Essential Knowledge Science Practices Enduring Knowledge Learning Objectives ELECTRIC FORCE, FIELD AND POTENTIAL Static Electricity; Electric Charge and its Conservation

More information

Balancing of Lossless and Passive Systems

Balancing of Lossless and Passive Systems Balancing of Lossless and Passive Systems Arjan van der Schaft Abstract Different balancing techniques are applied to lossless nonlinear systems, with open-loop balancing applied to their scattering representation.

More information

Index. Index. More information. in this web service Cambridge University Press

Index. Index. More information.  in this web service Cambridge University Press A-type elements, 4 7, 18, 31, 168, 198, 202, 219, 220, 222, 225 A-type variables. See Across variable ac current, 172, 251 ac induction motor, 251 Acceleration rotational, 30 translational, 16 Accumulator,

More information

Physics 102 Spring 2006: Final Exam Multiple-Choice Questions

Physics 102 Spring 2006: Final Exam Multiple-Choice Questions Last Name: First Name: Physics 102 Spring 2006: Final Exam Multiple-Choice Questions For questions 1 and 2, refer to the graph below, depicting the potential on the x-axis as a function of x V x 60 40

More information

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009 Preface The title of the book sounds a bit mysterious. Why should anyone read this

More information

Exercise 5 - Hydraulic Turbines and Electromagnetic Systems

Exercise 5 - Hydraulic Turbines and Electromagnetic Systems Exercise 5 - Hydraulic Turbines and Electromagnetic Systems 5.1 Hydraulic Turbines Whole courses are dedicated to the analysis of gas turbines. For the aim of modeling hydraulic systems, we analyze here

More information

REUNotes08-CircuitBasics May 28, 2008

REUNotes08-CircuitBasics May 28, 2008 Chapter One Circuits (... introduction here... ) 1.1 CIRCUIT BASICS Objects may possess a property known as electric charge. By convention, an electron has one negative charge ( 1) and a proton has one

More information

AP Physics 1. Course Overview

AP Physics 1. Course Overview Radnor High School Course Syllabus AP Physics 1 Credits: Grade Weighting: Yes Prerequisites: Co-requisites: Length: Format: 1.0 Credit, weighted Honors chemistry or Advanced Chemistry Honors Pre-calculus

More information

Compositional modelling of distributed-parameter systems

Compositional modelling of distributed-parameter systems Compositional modelling of distributed-parameter systems B.M. Maschke A.J. van der Schaft 1 Introduction The Hamiltonian formulation of distributed-parameter systems has been a challenging reserach area

More information

arxiv: v2 [math.oc] 6 Sep 2012

arxiv: v2 [math.oc] 6 Sep 2012 Port-Hamiltonian systems on graphs arxiv:1107.2006v2 [math.oc] 6 Sep 2012 A.J. van der Schaft and B.M. Maschke August 25, 2012 Abstract In this paper we present a unifying geometric and compositional framework

More information

A geometric Birkhoffian formalism for nonlinear RLC networks

A geometric Birkhoffian formalism for nonlinear RLC networks Journal of Geometry and Physics 56 (2006) 2545 2572 www.elsevier.com/locate/jgp A geometric Birkhoffian formalism for nonlinear RLC networks Delia Ionescu Institute of Mathematics, Romanian Academy of

More information

Chapter 1 Introduction to System Dynamics

Chapter 1 Introduction to System Dynamics Chapter 1 Introduction to System Dynamics SAMANTHA RAMIREZ Introduction 1 What is System Dynamics? The synthesis of mathematical models to represent dynamic responses of physical systems for the purpose

More information

AA242B: MECHANICAL VIBRATIONS

AA242B: MECHANICAL VIBRATIONS AA242B: MECHANICAL VIBRATIONS 1 / 50 AA242B: MECHANICAL VIBRATIONS Undamped Vibrations of n-dof Systems These slides are based on the recommended textbook: M. Géradin and D. Rixen, Mechanical Vibrations:

More information

Notes for course EE1.1 Circuit Analysis TOPIC 10 2-PORT CIRCUITS

Notes for course EE1.1 Circuit Analysis TOPIC 10 2-PORT CIRCUITS Objectives: Introduction Notes for course EE1.1 Circuit Analysis 4-5 Re-examination of 1-port sub-circuits Admittance parameters for -port circuits TOPIC 1 -PORT CIRCUITS Gain and port impedance from -port

More information

EML5311 Lyapunov Stability & Robust Control Design

EML5311 Lyapunov Stability & Robust Control Design EML5311 Lyapunov Stability & Robust Control Design 1 Lyapunov Stability criterion In Robust control design of nonlinear uncertain systems, stability theory plays an important role in engineering systems.

More information

A Primer on Three Vectors

A Primer on Three Vectors Michael Dine Department of Physics University of California, Santa Cruz September 2010 What makes E&M hard, more than anything else, is the problem that the electric and magnetic fields are vectors, and

More information

Modeling and Simulation Revision IV D R. T A R E K A. T U T U N J I P H I L A D E L P H I A U N I V E R S I T Y, J O R D A N

Modeling and Simulation Revision IV D R. T A R E K A. T U T U N J I P H I L A D E L P H I A U N I V E R S I T Y, J O R D A N Modeling and Simulation Revision IV D R. T A R E K A. T U T U N J I P H I L A D E L P H I A U N I V E R S I T Y, J O R D A N 2 0 1 7 Modeling Modeling is the process of representing the behavior of a real

More information

arxiv: v3 [math.oc] 1 Sep 2018

arxiv: v3 [math.oc] 1 Sep 2018 arxiv:177.148v3 [math.oc] 1 Sep 218 The converse of the passivity and small-gain theorems for input-output maps Sei Zhen Khong, Arjan van der Schaft Version: June 25, 218; accepted for publication in Automatica

More information

With Modern Physics For Scientists and Engineers

With Modern Physics For Scientists and Engineers With Modern Physics For Scientists and Engineers Third Edition Richard Wolfson Middlebury College Jay M. Pasachoff Williams College ^ADDISON-WESLEY An imprint of Addison Wesley Longman, Inc. Reading, Massachusetts

More information

AN APPLICATION OF LINEAR ALGEBRA TO NETWORKS

AN APPLICATION OF LINEAR ALGEBRA TO NETWORKS AN APPLICATION OF LINEAR ALGEBRA TO NETWORKS K. N. RAGHAVAN 1. Statement of the problem Imagine that between two nodes there is a network of electrical connections, as for example in the following picture

More information

Screw Theory and its Applications in Robotics

Screw Theory and its Applications in Robotics Screw Theory and its Applications in Robotics Marco Carricato Group of Robotics, Automation and Biomechanics University of Bologna Italy IFAC 2017 World Congress, Toulouse, France Table of Contents 1.

More information

Laboratory manual : RC circuit

Laboratory manual : RC circuit THE UNIVESITY OF HONG KONG Department of Physics PHYS55 Introductory electricity and magnetism Laboratory manual 55-: C circuit In this laboratory session, CO is used to investigate various properties

More information

CS-184: Computer Graphics

CS-184: Computer Graphics CS-184: Computer Graphics Lecture #25: Rigid Body Simulations Tobias Pfaff 537 Soda (Visual Computing Lab) tpfaff@berkeley.edu Reminder Final project presentations next week! Game Physics Types of Materials

More information

Lecture 5: Hodge theorem

Lecture 5: Hodge theorem Lecture 5: Hodge theorem Jonathan Evans 4th October 2010 Jonathan Evans () Lecture 5: Hodge theorem 4th October 2010 1 / 15 Jonathan Evans () Lecture 5: Hodge theorem 4th October 2010 2 / 15 The aim of

More information

NONLINEAR AND ADAPTIVE (INTELLIGENT) SYSTEMS MODELING, DESIGN, & CONTROL A Building Block Approach

NONLINEAR AND ADAPTIVE (INTELLIGENT) SYSTEMS MODELING, DESIGN, & CONTROL A Building Block Approach NONLINEAR AND ADAPTIVE (INTELLIGENT) SYSTEMS MODELING, DESIGN, & CONTROL A Building Block Approach P.A. (Rama) Ramamoorthy Electrical & Computer Engineering and Comp. Science Dept., M.L. 30, University

More information

Formation Control Over Delayed Communication Networks

Formation Control Over Delayed Communication Networks 28 IEEE International Conference on Robotics and Automation Pasadena, CA, USA, May 19-23, 28 Formation Control Over Delayed Communication Networks Cristian Secchi and Cesare Fantuzzi DISMI, University

More information

Robotics. Dynamics. Marc Toussaint U Stuttgart

Robotics. Dynamics. Marc Toussaint U Stuttgart Robotics Dynamics 1D point mass, damping & oscillation, PID, dynamics of mechanical systems, Euler-Lagrange equation, Newton-Euler recursion, general robot dynamics, joint space control, reference trajectory

More information

Linear Algebra and Robot Modeling

Linear Algebra and Robot Modeling Linear Algebra and Robot Modeling Nathan Ratliff Abstract Linear algebra is fundamental to robot modeling, control, and optimization. This document reviews some of the basic kinematic equations and uses

More information

QUESTION BANK SUBJECT: NETWORK ANALYSIS (10ES34)

QUESTION BANK SUBJECT: NETWORK ANALYSIS (10ES34) QUESTION BANK SUBJECT: NETWORK ANALYSIS (10ES34) NOTE: FOR NUMERICAL PROBLEMS FOR ALL UNITS EXCEPT UNIT 5 REFER THE E-BOOK ENGINEERING CIRCUIT ANALYSIS, 7 th EDITION HAYT AND KIMMERLY. PAGE NUMBERS OF

More information

Contents. PART I METHODS AND CONCEPTS 2. Transfer Function Approach Frequency Domain Representations... 42

Contents. PART I METHODS AND CONCEPTS 2. Transfer Function Approach Frequency Domain Representations... 42 Contents Preface.............................................. xiii 1. Introduction......................................... 1 1.1 Continuous and Discrete Control Systems................. 4 1.2 Open-Loop

More information

1. Consider the 1-DOF system described by the equation of motion, 4ẍ+20ẋ+25x = f.

1. Consider the 1-DOF system described by the equation of motion, 4ẍ+20ẋ+25x = f. Introduction to Robotics (CS3A) Homework #6 Solution (Winter 7/8). Consider the -DOF system described by the equation of motion, ẍ+ẋ+5x = f. (a) Find the natural frequency ω n and the natural damping ratio

More information

Dynamics and control of mechanical systems

Dynamics and control of mechanical systems Dynamics and control of mechanical systems Date Day 1 (03/05) - 05/05 Day 2 (07/05) Day 3 (09/05) Day 4 (11/05) Day 5 (14/05) Day 6 (16/05) Content Review of the basics of mechanics. Kinematics of rigid

More information

Comprehensive Introduction to Linear Algebra

Comprehensive Introduction to Linear Algebra Comprehensive Introduction to Linear Algebra WEB VERSION Joel G Broida S Gill Williamson N = a 11 a 12 a 1n a 21 a 22 a 2n C = a 11 a 12 a 1n a 21 a 22 a 2n a m1 a m2 a mn a m1 a m2 a mn Comprehensive

More information

Physics 110. Electricity and Magnetism. Professor Dine. Spring, Handout: Vectors and Tensors: Everything You Need to Know

Physics 110. Electricity and Magnetism. Professor Dine. Spring, Handout: Vectors and Tensors: Everything You Need to Know Physics 110. Electricity and Magnetism. Professor Dine Spring, 2008. Handout: Vectors and Tensors: Everything You Need to Know What makes E&M hard, more than anything else, is the problem that the electric

More information

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ).

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ). Connectedness 1 Motivation Connectedness is the sort of topological property that students love. Its definition is intuitive and easy to understand, and it is a powerful tool in proofs of well-known results.

More information

Metric spaces and metrizability

Metric spaces and metrizability 1 Motivation Metric spaces and metrizability By this point in the course, this section should not need much in the way of motivation. From the very beginning, we have talked about R n usual and how relatively

More information

Case Study: The Pelican Prototype Robot

Case Study: The Pelican Prototype Robot 5 Case Study: The Pelican Prototype Robot The purpose of this chapter is twofold: first, to present in detail the model of the experimental robot arm of the Robotics lab. from the CICESE Research Center,

More information

Linear Hamiltonian systems

Linear Hamiltonian systems Linear Hamiltonian systems P. Rapisarda H.L. Trentelman Abstract We study linear Hamiltonian systems using bilinear and quadratic differential forms. Such a representation-free approach allows to use the

More information

Section 2.2 : Electromechanical. analogies PHILIPE HERZOG AND GUILLAUME PENELET

Section 2.2 : Electromechanical. analogies PHILIPE HERZOG AND GUILLAUME PENELET Section 2.2 : Electromechanical analogies PHILIPE HERZOG AND GUILLAUME PENELET Paternité - Pas d'utilisation Commerciale - Partage des Conditions Initiales à l'identique : http://creativecommons.org/licenses/by-nc-sa/2.0/fr/

More information

COSSERAT THEORIES: SHELLS, RODS AND POINTS

COSSERAT THEORIES: SHELLS, RODS AND POINTS COSSERAT THEORIES: SHELLS, RODS AND POINTS SOLID MECHANICS AND ITS APPLICATIONS Volume 79 Series Editor: G.M.L. GLADWELL Department of Civil Engineering University of Waterloo Waterloo, Ontario, Canada

More information

The written qualifying (preliminary) examination covers the entire major field body of knowledge

The written qualifying (preliminary) examination covers the entire major field body of knowledge Dynamics The field of Dynamics embraces the study of forces and induced motions of rigid and deformable material systems within the limitations of classical (Newtonian) mechanics. The field is intended

More information

Power System Analysis Prof. A. K. Sinha Department of Electrical Engineering Indian Institute of Technology, Kharagpur

Power System Analysis Prof. A. K. Sinha Department of Electrical Engineering Indian Institute of Technology, Kharagpur Power System Analysis Prof. A. K. Sinha Department of Electrical Engineering Indian Institute of Technology, Kharagpur Lecture - 9 Transmission Line Steady State Operation Welcome to lesson 9, in Power

More information

Time-Dependent Statistical Mechanics 5. The classical atomic fluid, classical mechanics, and classical equilibrium statistical mechanics

Time-Dependent Statistical Mechanics 5. The classical atomic fluid, classical mechanics, and classical equilibrium statistical mechanics Time-Dependent Statistical Mechanics 5. The classical atomic fluid, classical mechanics, and classical equilibrium statistical mechanics c Hans C. Andersen October 1, 2009 While we know that in principle

More information

Inductance, RL and RLC Circuits

Inductance, RL and RLC Circuits Inductance, RL and RLC Circuits Inductance Temporarily storage of energy by the magnetic field When the switch is closed, the current does not immediately reach its maximum value. Faraday s law of electromagnetic

More information

Rotational motion of a rigid body spinning around a rotational axis ˆn;

Rotational motion of a rigid body spinning around a rotational axis ˆn; Physics 106a, Caltech 15 November, 2018 Lecture 14: Rotations The motion of solid bodies So far, we have been studying the motion of point particles, which are essentially just translational. Bodies with

More information

A HYBRID SYSTEM APPROACH TO IMPEDANCE AND ADMITTANCE CONTROL. Frank Mathis

A HYBRID SYSTEM APPROACH TO IMPEDANCE AND ADMITTANCE CONTROL. Frank Mathis A HYBRID SYSTEM APPROACH TO IMPEDANCE AND ADMITTANCE CONTROL By Frank Mathis A THESIS Submitted to Michigan State University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE

More information

Antennas Prof. Girish Kumar Department of Electrical Engineering Indian Institute of Technology, Bombay. Module 02 Lecture 08 Dipole Antennas-I

Antennas Prof. Girish Kumar Department of Electrical Engineering Indian Institute of Technology, Bombay. Module 02 Lecture 08 Dipole Antennas-I Antennas Prof. Girish Kumar Department of Electrical Engineering Indian Institute of Technology, Bombay Module 02 Lecture 08 Dipole Antennas-I Hello, and welcome to today s lecture. Now in the last lecture

More information

Energy Storage Elements: Capacitors and Inductors

Energy Storage Elements: Capacitors and Inductors CHAPTER 6 Energy Storage Elements: Capacitors and Inductors To this point in our study of electronic circuits, time has not been important. The analysis and designs we have performed so far have been static,

More information

Dynamics of Ocean Structures Prof. Dr. Srinivasan Chandrasekaran Department of Ocean Engineering Indian Institute of Technology, Madras

Dynamics of Ocean Structures Prof. Dr. Srinivasan Chandrasekaran Department of Ocean Engineering Indian Institute of Technology, Madras Dynamics of Ocean Structures Prof. Dr. Srinivasan Chandrasekaran Department of Ocean Engineering Indian Institute of Technology, Madras Module - 01 Lecture - 09 Characteristics of Single Degree - of -

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

AP Physics C Mechanics Objectives

AP Physics C Mechanics Objectives AP Physics C Mechanics Objectives I. KINEMATICS A. Motion in One Dimension 1. The relationships among position, velocity and acceleration a. Given a graph of position vs. time, identify or sketch a graph

More information

Electrical Transport in Nanoscale Systems

Electrical Transport in Nanoscale Systems Electrical Transport in Nanoscale Systems Description This book provides an in-depth description of transport phenomena relevant to systems of nanoscale dimensions. The different viewpoints and theoretical

More information