Commande sous contraintes pour des systèmes dynamiques incertains : une approche basée sur l'interpolation


Commande sous contraintes pour des systèmes dynamiques incertains : une approche basée sur l'interpolation
Hoai Nam Nguyen

To cite this version:
Hoai Nam Nguyen. Commande sous contraintes pour des systèmes dynamiques incertains : une approche basée sur l'interpolation. Autre. Supélec, 22. Français. <NNT : 22SUPL4>. <tel >

HAL Id: tel
Submitted on Feb 23

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

L'archive ouverte pluridisciplinaire HAL est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.

N° d'ordre : 22-4-TH

THÈSE DE DOCTORAT

DOMAINE : STIC
SPECIALITE : AUTOMATIQUE

Ecole Doctorale « Sciences et Technologies de l'Information, des Télécommunications et des Systèmes »

Présentée par : Hoai-Nam NGUYEN

Sujet : « Commande sous contraintes pour des systèmes dynamiques incertains : une approche basée sur l'interpolation »

Soutenue le //22 devant les membres du jury :

M. Mazen ALAMIR, Gipsa-lab, Grenoble - Rapporteur
M. Tor Arne JOHANSEN, NTNU, Trondheim, Norway - Rapporteur
M. Jamal DAAFOUZ, ENSEM - CRAN, Nancy - Examinateur
M. Didier DUMUR, SUPELEC - Examinateur
M. Silviu-Iulian NICULESCU, LSS - Examinateur
M. Sorin OLARU, SUPELEC - Encadrant
M. Per-Olof GUTMAN, Technion, Israel - Invité


Even the best control system cannot make a Ferrari out of a Volkswagen.
Skogestad and Postlethwaite


Preface

A fundamental problem in automatic control is the control of uncertain plants in the presence of input and state or output constraints. An elegant and theoretically most satisfying framework is represented by optimal control policies, which, however, rarely give an analytical feedback solution and often build on numerical solutions (approximations). Therefore, in practice, the problem has seen many ad-hoc solutions, such as override control and anti-windup, as well as modern techniques developed during the last decades, usually based on state space models. One popular example is Model Predictive Control (MPC), where an optimal control problem is solved at each sampling instant, and the element of the control vector meant for the nearest sampling interval is applied. In spite of the increased computational power of control computers, MPC is at present mainly suitable for low-order, nominally linear systems. The robust version of MPC is conservative and computationally complicated, while the explicit version of MPC that gives an affine state feedback solution involves a very complicated division of the state space into polyhedral cells.

In this thesis a novel and computationally cheap solution is presented for linear, time-varying or uncertain, discrete-time systems with polytopically bounded control and state (or output) vectors, and with bounded disturbances. The approach is based on the interpolation between a stabilizing, outer low gain controller that respects the control and state constraints, and an inner, high gain controller, designed by any method, that has a robustly positively invariant set within the constraints. A simple Lyapunov function is used for the proof of closed loop stability.

In contrast to MPC, the new interpolation based controller does not necessarily employ an optimization criterion inspired by performance. In its explicit form, the cell partitioning is simpler than that of its MPC counterpart. For the implicit version, the on-line computational demand can be restricted to the solution of one linear program or quadratic program. Several simulation examples are given, including uncertain linear systems with output feedback and disturbances. Some examples are compared with MPC. The control of a laboratory ball-and-plate system is also demonstrated. It is believed that

the new controller might see widespread use in industry, including the automotive industry, also for the control of fast, high-order systems with constraints.

SUPELEC, Gif-sur-Yvette, October 3, 2012

Acknowledgements

This thesis is the result of my interaction with a large number of people, who on a personal and scientific level helped me during these last three years.

I would like first to thank my supervisor, Prof. Sorin Olaru, for his enthusiastic support, for creating and maintaining a creative environment for research and studies, and for making this thesis possible. To him and his close collaborator, Prof. Morten Hovd, I am grateful for their help, which allowed me to take the first steps toward becoming a scientist.

Big thanks and a big hug go to Per-Olof Gutman, who has been part of almost all the research in the thesis, has worked as my supervisor, and co-authored most of the papers. Thanks to him, I believe that I learned to think the right way about some problems in control, which I would not have been able to approach appropriately otherwise.

I would like to thank Prof. Didier Dumur, who was my supervisor for the first two years of my PhD. I must thank him for enabling my research and for offering helpful suggestions for the thesis.

In the Automatic Control Department of Supelec, I would like to thank especially Prof. Patrick Boucher and Ms. Josiane Dartron for their support. I would also like to thank all the people I have met in the department, who made my stay here interesting and enjoyable. Without giving an exhaustive list, some of them are Hieu, Florin, Bogdan, Raluca, Christina, Ionela, Nikola, Alaa.

Finally, I am grateful to my wife Linh and my son Tung, who trusted and supported me in several ways and made this thesis possible.


Extended summary of the thesis

Introduction

A fundamental problem in automatic control is the control of uncertain systems subject to constraints on the input, state or output variables. In theory, this problem can be solved by means of optimal control. However, (minimum-time) optimal control is by principle not a state or output feedback law: it only provides an optimal trajectory, most often through a numerical solution. Consequently, in practice the problem can be approached by many methods, such as override control and anti-windup. Another solution, which has become popular over the last decades, is predictive control. With this method, an optimal control problem is solved at each sampling instant, and the component of the control vector intended for the current sampling interval is applied. Despite the increased power of real-time computing architectures, predictive control is at present mainly appropriate when the order is low and well known, and often for linear systems. The robust version of predictive control is conservative and complicated to implement, while the explicit version of predictive control, which provides a piecewise affine solution, involves a very complicated partition of the state space into polyhedral cells.

In this thesis, an elegant solution with a low computational cost is presented for linear, time-varying or uncertain systems. The developments focus on discrete-time dynamics with polyhedral constraints on the input and state (or output) vectors, and with bounded disturbances. The solution is based on the interpolation between a controller for the outer region, which respects the input and state constraints, and a more aggressive controller for the inner region, designed by any classical method and having a robustly positively invariant set inside the constraints. A simple Lyapunov function is used to prove closed-loop stability.

In contrast to predictive control, the new interpolated controller is not necessarily based on an optimization criterion. In its explicit form, the partition of the state space is simpler than that of predictive control. For the implicit version, the on-line computational demand can be limited to the solution of one or two linear programs. Several simulation examples are given, including uncertain linear systems with output feedback and disturbances. Some examples are given for comparison with predictive control. An application of this type of control has been tested experimentally on a system positioning a ball on a plate. We believe that the new controller can be widely used in industry, including the automotive industry, as well as for the control of high-order systems with constraints.

Constraints

Constraints are present in all real-world dynamical systems. They make the synthesis of control laws more complex, not only in theory, but also in practical applications. From the conceptual point of view, constraints can be of different natures. Basically, there are two types of constraints, imposed by physical limitations and/or by the desired performance.

Physical constraints are due to the physical limitations of the mechanical, electrical, biological, actuator, etc., parts of the system. The main concern here is the stability of the system in the presence of constraints on the input and output variables or on the state. The input and output variables must remain inside the constraints in order to avoid over-exploitation or damage. Moreover, constraint violation may lead to performance degradation, oscillations or even instability.

Performance constraints are introduced during the synthesis in order to satisfy performance requirements, for example the rise time, the peak time, fault tolerance, equipment longevity and environmental concerns.

Constraints on the input

A class of constraints commonly imposed throughout this thesis are the input constraints, considered in order to avoid saturation.

Constraints on the Euclidean norm of the input:

    ||u(k)||_2 <= u_max                                            (1.1)

where u(k) ∈ R^m is the control input of the system.

Polyhedral constraints on the input:

    u(k) ∈ U,  U = {u ∈ R^m : F_u u <= g_u}                        (1.2)

where the matrix F_u and the vector g_u are assumed to be constant, with g_u > 0 so that the origin is contained in the interior of U.

Constraints on the output

Another class of constraints present in this manuscript are the output constraints.

Constraints on the Euclidean norm of the output:

    ||y(k)||_2 <= y_max                                            (1.3)

where y(k) ∈ R^p is the output of the system.

Polyhedral constraints on the output:

    y(k) ∈ Y,  Y = {y ∈ R^p : F_y y <= g_y}                        (1.4)

where the matrix F_y and the vector g_y are assumed to be constant, with g_y > 0 so that the origin is contained in the interior of Y.

Constraints on the state

A last class of constraints present in this manuscript are the state constraints.

Constraints on the Euclidean norm of the state:

    ||x(k)||_2 <= x_max                                            (1.5)

where x(k) ∈ R^n is the state of the system.

Polyhedral constraints on the state:

    x(k) ∈ X,  X = {x ∈ R^n : F_x x <= g_x}                        (1.6)

where the matrix F_x and the vector g_x are assumed to be constant, with g_x > 0 so that the origin is contained in the interior of X.
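As an aside on notation, a polyhedral constraint set such as (1.2) or (1.6) is fully described by the pair (F, g), and membership is a componentwise inequality test. The fragment below is a minimal Python/NumPy sketch (not code from the thesis; the box bounds are placeholder values) illustrating this representation.

    import numpy as np

    # Polyhedral input constraint set U = {u : F_u u <= g_u}.
    # Placeholder example: the box |u_1| <= 2, |u_2| <= 1.
    F_u = np.array([[ 1.0,  0.0],
                    [-1.0,  0.0],
                    [ 0.0,  1.0],
                    [ 0.0, -1.0]])
    g_u = np.array([2.0, 2.0, 1.0, 1.0])

    def in_polytope(F, g, v, tol=1e-9):
        # True if F v <= g holds componentwise, i.e. v belongs to the polytope.
        return bool(np.all(F @ v <= g + tol))

    u = np.array([1.5, -0.3])
    print(in_polytope(F_u, g_u, u))   # True: u lies inside U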

Uncertainties

The constrained control problem can become even more difficult in the presence of model uncertainties, which are unavoidable in practice [43], [2]. Model uncertainties appear in specific situations, for example when a linear model is obtained as an approximation of a nonlinear system around an operating point. Even if the process is fairly well represented by a linear model, the model parameters may be time-varying or may change because of a change of operating point. In these cases, the cause and the structure of the model uncertainties are rather well known. Nevertheless, even if the real process is linear, there is always some uncertainty associated, for example, with physical parameters, which are never known exactly. Moreover, real processes are usually affected by disturbances, and it is necessary to take them into account in the design of the control law.

It is generally accepted that the main reason for feedback is to reduce the effects of uncertainty, which may appear in different forms such as parametric uncertainties, additive disturbances or inadequacies of the models used for the feedback design. Model uncertainty and robustness have been a central theme in the development of control laws in automatic control [9].

A historical perspective

A simple way to stabilize a constrained system is to carry out the control design without taking the constraints into account. Then, an adaptation of the control law is considered with respect to input saturation. Such an approach is called anti-windup [79], [52], [5], [58], more recently treated in [42]. Over the last decades, research on constrained control has developed to the point that the constraints can be taken into account during the synthesis phase. By its very principle, predictive control reveals all its importance in the handling of constraints [32], [], [28], [29], [47], [96], [48].

In the predictive control approach, a sequence of predicted control values over a finite prediction horizon is computed in order to optimize the performance of the closed-loop system, expressed in terms of a cost function [3]. The controller uses an internal mathematical model which, given the current measurements, predicts the future behavior of the real system as a function of the changes of the control inputs. Once the optimal sequence of control inputs has been computed, only its first element is actually applied to the system, and the optimization is repeated at the next instant with the new state measurement [5], [], [96].
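To make the receding-horizon mechanism described above concrete, the following sketch (an illustrative Python/CVXPY fragment, not taken from the thesis; the double-integrator model, weights and bounds are arbitrary placeholders) solves one finite-horizon quadratic problem and applies only the first element of the optimal input sequence.

    import numpy as np
    import cvxpy as cp

    # Placeholder double-integrator model and weights (not from the thesis).
    A = np.array([[1.0, 1.0], [0.0, 1.0]])
    B = np.array([[0.5], [1.0]])
    Q, R = np.eye(2), 0.1 * np.eye(1)
    N = 10                        # prediction horizon
    u_max, x_max = 1.0, 5.0

    def mpc_step(x0):
        x = cp.Variable((2, N + 1))
        u = cp.Variable((1, N))
        cost, cons = 0, [x[:, 0] == x0]
        for k in range(N):
            cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
            cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                     cp.norm(u[:, k], "inf") <= u_max,
                     cp.norm(x[:, k + 1], "inf") <= x_max]
        cp.Problem(cp.Minimize(cost), cons).solve()
        return u.value[:, 0]      # only the first move is applied

    print(mpc_step(np.array([3.0, 0.0])))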

In classical predictive control, the input action at each instant is obtained by solving on line an open-loop optimal control problem [26], [33]. With a linear model, polyhedral constraints and a quadratic cost, the optimization problem is a quadratic program. Solving the quadratic program can be computationally expensive, especially when the prediction horizon is large, which has traditionally limited predictive control to applications with relatively long sampling periods [4].

In the last decade, attempts have been made to use predictive control for fast processes. In [22], [2], [2], [54] it was shown that constrained predictive control is equivalent to a multi-parametric optimization problem, in which the state plays the role of a vector of parameters. The solution is a piecewise affine function of the state over a polyhedral partition of the state space, and the computational effort of predictive control is moved off line. However, explicit predictive control also has drawbacks. Obtaining the explicit optimal solution requires solving an off-line parametric optimization problem, which is in general NP-hard. Although the problem is tractable and practically solvable for several interesting control applications, the off-line computational effort grows exponentially with the size of the problem [84], [85], [83], [64], [65]. This is the case for long prediction horizons, large numbers of constraints and high-dimensional systems. In [57], the authors show that on-line computation is preferable for high-dimensional systems, where a significant reduction of the computational complexity can be obtained by exploiting the particular structure of the optimization problem, starting from a solution obtained at the previous time step. The same reference mentions that for models of more than five dimensions the explicit solution might not be practical. It is worth mentioning that approximate explicit solutions have been studied to go beyond this ad-hoc limitation [9], [62], [4].

Note that, as their name indicates, classical implicit and explicit predictive controllers are based on mathematical models which invariably present a mismatch with respect to the physical systems. Robust predictive control is designed to cover both model uncertainty and disturbances. However, robust MPC exhibits a large conservatism and/or a large computational complexity [78], [87].

The use of interpolation in constrained control, which makes it possible to avoid very complex synthesis procedures, is well known in the literature. There is a long tradition of developments on these topics, closely related to predictive control; see for example [], [32], [33], [3]. Indeed, interpolation between input sequences, state trajectories, controller gains and/or associated invariant sets can be constructed.

Problem formulation

Consider the regulation problem for a linear time-varying or uncertain system

    x(k+1) = A(k)x(k) + B(k)u(k) + D(k)w(k)                        (1.7)

where x(k) ∈ R^n, u(k) ∈ R^m and w(k) ∈ R^d are, respectively, the state vector, the input vector and the disturbance vector. The system matrices A(k) ∈ R^(n×n), B(k) ∈ R^(n×m) and D(k) ∈ R^(n×d) satisfy

    A(k) = Σ_{i=1}^{q} α_i(k) A_i,   B(k) = Σ_{i=1}^{q} α_i(k) B_i,   D(k) = Σ_{i=1}^{q} α_i(k) D_i,
    Σ_{i=1}^{q} α_i(k) = 1,   α_i(k) >= 0                          (1.8)

where the matrices A_i, B_i and D_i are given. The system is subject to constraints on the state, the input and the disturbance

    x(k) ∈ X,  X = {x ∈ R^n : F_x x <= g_x}
    u(k) ∈ U,  U = {u ∈ R^m : F_u u <= g_u}                        (1.9)
    w(k) ∈ W,  W = {w ∈ R^d : F_w w <= g_w}

where the matrices F_x, F_u, F_w and the vectors g_x, g_u and g_w are assumed to be constant, with g_x > 0, g_u > 0 and g_w > 0 such that the origin is contained in the interior of X, U and W. The inequalities are understood componentwise.

Invariant sets

With Lyapunov theory introduced in the framework of systems governed by ordinary differential equations, the notion of invariant set has been used in many problems concerning the analysis and control of dynamical systems. An important motivation for introducing invariant sets came from the need to analyze the influence of uncertainties on the system. Two types of systems will be examined in this section, namely uncertain discrete-time linear systems (1.7) and autonomous systems

    x(k+1) = A_c(k)x(k) + D(k)w(k)                                 (1.10)

Definition 1.1 (Robustly positively invariant set). The set Ω ⊆ X is robustly positively invariant for system (1.10) if and only if x(k+1) = A_c(k)x(k) + D(k)w(k) ∈ Ω for all x(k) ∈ Ω and all w(k) ∈ W.
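Robust positive invariance of a polyhedral set with respect to a polytopic system can be certified by checking the vertices of the set against the extreme realizations of the dynamics. The sketch below (an illustrative Python/NumPy fragment with placeholder data, restricted to the disturbance-free case w(k) = 0) applies this sufficient vertex test; handling a nonzero D(k)w(k) would additionally require shrinking the target set by the disturbance contribution.

    import numpy as np
    from itertools import product

    # Omega = {x : F x <= g}, with vertex list V (placeholder unit square).
    F = np.vstack([np.eye(2), -np.eye(2)])
    g = np.ones(4)
    V = np.array([[ 1,  1], [ 1, -1], [-1,  1], [-1, -1]], dtype=float)

    # Extreme realizations of the closed-loop dynamics A_c(k) (placeholders).
    A_list = [np.array([[0.6, 0.2], [0.0, 0.5]]),
              np.array([[0.5, 0.0], [0.3, 0.4]])]

    def is_robust_invariant(A_list, V, F, g, tol=1e-9):
        # Sufficient vertex test: A_i v must stay in Omega for every vertex v
        # of Omega and every extreme matrix A_i (disturbance-free case).
        return all(np.all(F @ (Ai @ v) <= g + tol)
                   for Ai, v in product(A_list, V))

    print(is_robust_invariant(A_list, V, F, g))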

Consequently, if the state vector of system (1.10) reaches a robustly positively invariant set, it will remain in that set despite the disturbance w(k). The term positively refers to the fact that only the forward-time evolutions of system (1.10) are considered. This attribute will be omitted in the sections to come for the sake of brevity.

Given a bounded set X ⊂ R^n, the maximal robustly invariant set Ω_max ⊆ X is a robustly invariant set that contains all the robustly invariant sets contained in X.

Definition 1.2 (Robustly contractive set). For a given scalar λ with 0 <= λ <= 1, the set Ω ⊆ X is robustly λ-contractive for system (1.10) if and only if x(k+1) = A_c(k)x(k) + D(k)w(k) ∈ λΩ for all x(k) ∈ Ω and all w(k) ∈ W.

Definition 1.3 (Robustly controlled invariant set). The set C ⊆ X is robustly controlled invariant for system (1.7) if for all x(k) ∈ C there exists a control value u(k) ∈ U such that x(k+1) = A(k)x(k) + B(k)u(k) + D(k)w(k) ∈ C for all w(k) ∈ W.

Given a bounded set X ⊂ R^n, the maximal robustly controlled invariant set C_max ⊆ X is a robustly controlled invariant set that contains all the robustly controlled invariant sets contained in X.

Definition 1.4 (Robustly controlled contractive set). For a given scalar λ with 0 <= λ < 1, the set C ⊆ X is robustly controlled contractive for system (1.7) if for all x(k) ∈ C there exists a control value u(k) ∈ U such that x(k+1) = A(k)x(k) + B(k)u(k) + D(k)w(k) ∈ λC for all w(k) ∈ W.

Obviously, if the contraction factor is λ = 1, the concepts of robust invariance and robust controlled invariance are recovered.

For systems (1.7) and (1.10), two types of convex sets are widely used to characterize the domain of attraction. The first class is that of ellipsoidal sets, or ellipsoids. Ellipsoidal sets are the most commonly used in robust stability analysis and in controller synthesis for constrained systems. Their popularity is due to their computational efficiency through the use of LMI formulations, and to the fact that their complexity is fixed with respect to the dimension of the state space [29], [37]. This approach, however, may lead to conservative results. The second class is that of polyhedral sets. With linear constraints on the state and control variables, polyhedral invariant sets

are preferred to ellipsoidal invariant sets, since they offer a better approximation of the domain of attraction [35], [55], [2]. Their main drawback is that the complexity of the representation is not fixed by the dimension of the space.

[Fig. 1.1: Ellipsoidal invariant set.]
[Fig. 1.2: Polyhedral invariant set.]

Vertex control

The solution is an extension of vertex control, developed by Gutman and Cwikel (1986) and extended by Blanchini (1992) to the case of uncertain systems. The necessary and sufficient conditions for stabilization of system (1.7), (1.9) to the origin are that, at each vertex of the invariant set, there exists

a feasible control that brings the state into the interior of the invariant set, for all cases of uncertainty or time variation occurring in the system matrices.

[Fig. 1.3: Vertex control. (a) Feasible invariant set (nominal case); (b) vertex control as a convex combination of vertex controls; (c) Lyapunov function level curves; (d) feasible invariant set (robust case).]

The main advantage of the vertex control scheme is the size of the domain of attraction, that is, the set C_N. Clearly, the controlled invariant set C_N, the feasible domain for the control, is as large as that of any other constrained controller. However, a weakness of vertex control is that the maximal control action is applied only on the boundary of the feasible set, the magnitude of the control action decreasing as the state approaches the origin. One solution to this problem is to switch to a more aggressive local controller, which must be complemented, for example, by a hysteresis-type mechanism in order to avoid chattering. A weakness of switching control laws is that the control signal may change abruptly.

Interpolation-based control via linear programming

Implicit solution

A solution to the weaknesses mentioned above is the improved vertex control law, which interpolates between the outer control

signal and a more aggressive inner controller. Suppose that a stabilizing inner control law has been designed, whose maximal robustly invariant set Ω_max is a subset of C_N. Recall that the maximal admissible set Ω_max is the maximal polyhedral set for which the chosen control law yields an admissible control signal u_o such that x(k) remains in Ω_max.

Let x(k) ∈ C_N be decomposed as

    x(k) = c(k) x_v(k) + (1 - c(k)) x_o(k)                         (1.11)

where x_v(k) ∈ C_N, x_o ∈ Ω_max and 0 <= c <= 1. Consider the following control law

    u(k) = c(k) u_v(k) + (1 - c(k)) u_o(k)                         (1.12)

where u_v(k) is obtained by applying the vertex control law, and u_o(k) = K x_o(k) is the locally optimal control law, which is feasible in Ω_max ⊆ C_N.

[Fig. 1.4: Interpolated control.]

Any state x(k) can thus be expressed as a convex combination of x_v(k) ∈ C_N and x_o(k) ∈ Ω_max. At each time instant, consider the following nonlinear optimization problem

    c* = min_{c, x_v, x_o} c                                       (1.13)

subject to

    x_v ∈ C_N,   x_o ∈ Ω_max,   c x_v + (1 - c) x_o = x,   0 <= c <= 1.

It is shown that the nonlinear optimization problem (1.13) can be converted into a linear programming problem. It will be proved that the control law (1.11), (1.12), (1.13) is feasible and asymptotically stabilizes the

closed-loop system, with robustness guarantees. The minimum of c(k) is the best choice from the control point of view, since it yields the control action that is as close as possible to the optimal control action. It is further shown that the interpolation coefficient c* is a Lyapunov function.

[Fig. 1.5: A simulation of interpolation-based control and of predictive control. (a) State trajectories; (b) input trajectories.]

Explicit solution

For the optimization problem (1.13), the following properties can be exploited:

For any x ∈ Ω_max, the optimal interpolation problem has the trivial solution c* = 0, and hence x_o = x in (1.13).

[Fig. 1.6: Graphical illustration.]

For any x ∈ C_N \ Ω_max, the optimal solution of problem (1.13) is attained if and only if x is written as a convex combination of x_v* and x_o*, where x_v* ∈ Fr(C_N) and x_o* ∈ Fr(Ω_max).
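Numerically, the nonlinear interpolation problem (1.13) becomes a linear program after the change of variables r = c x_v, so that x - r = (1 - c) x_o. The following sketch (an illustrative Python/CVXPY fragment with the sets given in half-space form; it is a reading of the scheme above, not code from the thesis) solves this LP and assembles the interpolated control (1.12).

    import numpy as np
    import cvxpy as cp

    def interpolate_lp(x, F_N, g_N, F_o, g_o, vertex_ctrl, K):
        # Implicit interpolation: minimize c s.t. x = c*x_v + (1-c)*x_o,
        # x_v in C_N = {F_N x <= g_N}, x_o in Omega_max = {F_o x <= g_o}.
        # The change of variables r = c*x_v makes every constraint linear.
        n = x.shape[0]
        r = cp.Variable(n)                       # r = c * x_v
        c = cp.Variable(nonneg=True)
        cons = [F_N @ r <= c * g_N,              # x_v in C_N  (scaled)
                F_o @ (x - r) <= (1 - c) * g_o,  # x_o in Omega_max (scaled)
                c <= 1]
        cp.Problem(cp.Minimize(c), cons).solve()
        c_star = float(c.value)
        if c_star < 1e-9:                        # x already inside Omega_max
            return K @ x
        x_v = r.value / c_star
        x_o = (x - r.value) / (1.0 - c_star) if c_star < 1 - 1e-9 else x_v
        u_v = vertex_ctrl(x_v)                   # vertex control at x_v (user-supplied)
        u_o = K @ x_o                            # local (high-gain) control
        return c_star * u_v + (1.0 - c_star) * u_o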

Let x ∈ C_N \ Ω_max with a particular convex combination x = c x_v + (1 - c) x_o, where x_v ∈ C_N and x_o ∈ Ω_max. If x_o is strictly inside Ω_max, one can define x_o' as the intersection between the boundary of Ω_max and the segment connecting x and x_o. Using convexity arguments, one has x = c' x_v + (1 - c') x_o' with c' < c. Generally speaking, for any x ∈ C_N \ Ω_max the optimal interpolation (1.13) leads to a solution {x_v*, x_o*} with x_o* ∈ Fr(Ω_max). On the other hand, if x_v is strictly inside C_N, one can define x_v' as the intersection between the boundary of C_N and the ray connecting x and x_v. One obtains x = c'' x_v' + (1 - c'') x_o with c'' < c, which leads to the conclusion that the optimal solution {x_v*, x_o*} satisfies x_v* ∈ Fr(C_N).

From the preceding remarks, we conclude that for any x ∈ C_N \ Ω_max the interpolation coefficient c attains its minimum in (1.13) if and only if x is written as a convex combination of two points, one belonging to the boundary of Ω_max and the other lying on the boundary of C_N. It is further shown that, for x ∈ C_N \ Ω_max, the smallest value of c is attained when the region C_N \ Ω_max is decomposed into polytopes with their vertices lying either on the boundary of Ω_max or on the boundary of C_N. These polytopes can be decomposed into simplices, each formed by r vertices of C_N and n - r + 1 vertices of Ω_max, with 1 <= r <= n. We will also prove that the interpolation-based control yields an explicit solution with piecewise affine control laws over C_N \ Ω_max partitioned into simplices (a solution similar to, but simpler than, that of explicit predictive control). Inside the set Ω_max, the interpolated control turns out to be the unconstrained optimal control u(k) = K x(k).

Interpolation-based control via quadratic programming

Interpolation between linear controllers

The interpolation-based control (1.11), (1.12), (1.13) reduces to the use of linear programming, which is extremely simple. However, the main problem regarding the implementation of the algorithm (1.11),

(1.12), (1.13) is the non-uniqueness of the solution. Multiple optima are not desirable, since they may lead to fast switching between the different optimal control actions when the LP problem (1.13) is solved on line.

[Fig. 1.7: State-space partition of the interpolation-based control and of predictive control. (a) Interpolation-based control, before merging; (b) interpolation-based control, after merging; (c) predictive control, before merging; (d) predictive control, after merging.]

Traditionally, predictive control has been formulated using a quadratic criterion []. Hence, for interpolation-based control, it is worthwhile to turn to the use of quadratic programming. Before introducing a QP formulation, note that the idea of using QP formulations for interpolated control is not new. In [], [32], Lyapunov theory is used to compute an upper bound of the infinite-horizon cost function

    J = Σ_{k=0}^{∞} { x(k)^T Q x(k) + u(k)^T R u(k) }              (1.14)

where Q and R are the state and input weighting matrices. At each time instant, the algorithm in [] uses an on-line decomposition of the current state, each component lying in a separate invariant set, after which the corresponding controller is applied to each component separately in order to compute the control action. The polytopes used as candidate sets are invariant. Consequently, the on-line optimization problem can be formulated as a QP problem. However, the results of [], [32] do not make it possible to impose a priority among the interpolated control laws.

In this manuscript, we provide a contribution to this research direction by taking into account, in the interpolation, the fact that one of the controllers will have the highest

priority, while the other gains will play the role of degrees of freedom so as to enlarge the domain of attraction. This alternative approach can provide a suitable framework for the design of constrained control laws that builds on the unconstrained optimal control (usually of high gain); the interpolation factor is subsequently tuned to cope with the constraints and limitations (by interpolation with adequate low-gain controllers).

It is assumed that, using results established in control theory, one obtains a set of asymptotically stabilizing unconstrained controllers u(k) = K_i x(k), i = 1, 2, ..., r, such that, for given state and input weighting matrices, the following problem

    (A_j + B_j K_i)^T P_i (A_j + B_j K_i) - P_i <= -Q_i - K_i^T R_i K_i,   j = 1, 2, ..., s

is feasible with respect to the variable P_i ∈ R^(n×n). Denote by Ω_i ⊆ X a maximal invariant set for each controller K_i, and by Ω the convex hull of the Ω_i. From the convexity of X, it follows that Ω ⊆ X. The high-gain controller in this enumeration will play the role of the priority candidate, while the other, low-gain controllers will be used in the interpolation scheme to enlarge the domain of attraction.

Any state x(k) ∈ Ω can be decomposed as

    x(k) = λ_1 x_1 + λ_2 x_2 + ... + λ_r x_r                       (1.15)

where x_i ∈ Ω_i for all i = 1, 2, ..., r and

    Σ_{i=1}^{r} λ_i = 1,   λ_i >= 0.

Consider the following control law

    u(k) = λ_1 K_1 x_1 + λ_2 K_2 x_2 + ... + λ_r K_r x_r           (1.16)

where u_i(k) = K_i x_i is the control law associated with the construction of the invariant set Ω_i. At each time instant, consider the following optimization problem

    min_{x_i, λ_i}  Σ_{i=2}^{r} ( x_i^T P_i x_i + λ_i^2 )          (1.17)

subject to the constraints

    x_i ∈ Ω_i,   i = 1, 2, ..., r
    Σ_{i=1}^{r} λ_i x_i = x
    Σ_{i=1}^{r} λ_i = 1,   λ_i >= 0

We emphasize that the objective function is built on the indices {2, ..., r}, which correspond to the controllers of lower priority. It will be shown that the nonlinear optimization problem (1.17) can be converted into a quadratic optimization problem. It will further be shown that the interpolation-based control (1.15), (1.16), (1.17) guarantees recursive feasibility and robust asymptotic stability of the closed-loop system.

[Fig. 1.8: Invariant sets and state trajectories.]
[Fig. 1.9: Interpolation-based control via quadratic programming, compared with the approach in [32]. (a) State trajectories; (b) input trajectories.]

It is clear that when x ∈ Ω_1, the optimization problem (1.17) admits a trivial solution: x_i = 0 and λ_i = 0 for all i = 2, 3, ..., r, hence x_1 = x and λ_1 = 1. From another perspective, for any x ∈ Ω_1 the interpolated control turns out to be the unconstrained optimal control.
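One possible convex implementation of the interpolation (1.15)-(1.17) is sketched below (an illustrative Python/CVXPY fragment, not code from the thesis): with the scaled variables r_i = λ_i x_i, the memberships x_i ∈ Ω_i = {F_i x <= g_i} become linear constraints, and the cost is written directly in the scaled variables as a convex surrogate of (1.17); the exact reformulation used in the thesis may differ in detail.

    import numpy as np
    import cvxpy as cp

    def interpolate_qp(x, F_list, g_list, P_list, K_list):
        # Interpolation among r controllers K_i with invariant sets
        # Omega_i = {F_i x <= g_i}. Scaled variables R[i] = lambda_i * x_i keep
        # the problem convex; the cost penalizes the low-priority components.
        r_ctrl = len(K_list)
        n = x.shape[0]
        R = [cp.Variable(n) for _ in range(r_ctrl)]   # R[i] = lambda_i * x_i
        lam = cp.Variable(r_ctrl, nonneg=True)
        cons = [sum(R) == x, cp.sum(lam) == 1]
        cons += [F_list[i] @ R[i] <= lam[i] * g_list[i] for i in range(r_ctrl)]
        cost = sum(cp.quad_form(R[i], P_list[i]) + cp.square(lam[i])
                   for i in range(1, r_ctrl))         # indices 2..r only
        cp.Problem(cp.Minimize(cost), cons).solve()
        # u = sum_i lambda_i K_i x_i = sum_i K_i R[i]
        return sum(K_list[i] @ R[i].value for i in range(r_ctrl))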

Interpolation between saturated controllers

In order to use the full potential of the actuators and to satisfy the input constraints without having to handle an unnecessarily complex, optimization-based controller, a saturation function at the input will be considered. The saturation guarantees that the constraints on the system input are satisfied. In our design, we exploit the fact that a saturated linear control law can be expressed as a convex combination of a set of linear laws, following Hu et al. [59]. Thus, the control laws available in the convex hull, rather than the optimal control law, will handle the constraints on the input signals.

[Fig. 1.10: Invariant sets and state trajectories.]
[Fig. 1.11: Interpolation between saturated controllers, compared with u = sat(K x). (a) State trajectories; (b) input trajectories.]
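The convex-hull property of saturated feedback invoked above can be checked numerically in the single-input case: whenever an auxiliary gain H satisfies |Hx| <= u_max, the value sat(Kx) lies between Kx and Hx, and is therefore a convex combination of the two. The fragment below (an illustrative Python sketch with placeholder gains, not code from the thesis) verifies this on random samples.

    import numpy as np

    u_max = 1.0
    K = np.array([2.0, 1.5])        # high-gain feedback (placeholder)
    H = np.array([0.3, 0.2])        # auxiliary low-gain feedback (placeholder)

    sat = lambda v: np.clip(v, -u_max, u_max)

    rng = np.random.default_rng(0)
    ok = True
    for _ in range(10000):
        x = rng.uniform(-1.0, 1.0, size=2)
        if abs(H @ x) <= u_max:                  # region where the property applies
            lo, hi = sorted((K @ x, H @ x))
            ok &= lo - 1e-12 <= sat(K @ x) <= hi + 1e-12
    print(ok)   # True: sat(Kx) is a convex combination of Kx and Hx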

Interpolation-based control via LMI

For high-dimensional systems, the methods based on polyhedral sets might no longer be applicable, since the number of vertices or half-spaces may lead to exponential complexity. In these cases, ellipsoids appear to be an appropriate class of candidate sets for the interpolation. In this manuscript, the convex hull of a family of ellipsoids is used to estimate the stability domain of a constrained control system. This is motivated by problems related to estimating the domain of attraction so as to enlarge it. To briefly describe the class of problems, suppose that a set of invariant ellipsoids and an associated set of saturated control laws are available. Our goal is to know whether the convex hull of all these ellipsoids is controlled invariant, and how to construct a control law over this region.

It is assumed that the constraints on the state X and on the input U are symmetric. It is also assumed that a set of controllers K_i ∈ R^(m×n), i = 1, 2, ..., r, is available such that the invariant ellipsoidal sets

    E(P_i) = {x ∈ R^n : x^T P_i x <= 1}                            (1.18)

are non-empty for i = 1, 2, ..., r. Recall that for any x(k) ∈ E(P_i), it follows that sat(K_i x) ∈ U and x(k+1) = A x(k) + B sat(K_i x(k)) ∈ X. In the sequel, denote by Ω_E ⊆ R^n the convex hull of the E(P_i) for all i. It follows that Ω_E ⊆ X, since E(P_i) ⊆ X.

Any state x(k) ∈ Ω_E can be decomposed as

    x(k) = Σ_{i=1}^{r} λ_i x_i(k)                                  (1.19)

with x_i(k) ∈ E(P_i), where the λ_i are the interpolation coefficients, which satisfy

    Σ_{i=1}^{r} λ_i = 1,   λ_i >= 0.

Consider the following control law

    u(k) = Σ_{i=1}^{r} λ_i sat(K_i x_i(k))                         (1.20)

where sat(K_i x_i(k)) is the saturated control law, which is feasible in E(P_i). The first, high-gain controller will be used to guarantee performance and will be considered as the priority one, while the remaining available (low-gain) controllers will be used to enlarge the domain of attraction. For the given current state x, consider the following objective function

    min_{x_i, λ_i}  Σ_{i=2}^{r} λ_i                                (1.21)

subject to

    x_i^T P_i x_i <= 1,   i = 1, 2, ..., r
    Σ_{i=1}^{r} λ_i x_i = x
    Σ_{i=1}^{r} λ_i = 1
    λ_i >= 0,   i = 1, 2, ..., r

It will be shown that the nonlinear optimization problem (1.21) can be reformulated as an LMI optimization problem. It will further be shown that interpolation-based control using a solution of the optimization problem (1.21) guarantees recursive feasibility and robust asymptotic stability of the closed-loop system. Clearly, for any x ∈ E(P_1), the optimization problem (1.21) admits the trivial solution x_i = 0 and λ_i = 0 for i = 2, 3, ..., r, for which x_1 = x and λ_1 = 1. In other words, the interpolated control turns out to be the high-gain control u(k) = sat(K_1 x).

[Fig. 1.12: Invariant sets and state trajectories.]
[Fig. 1.13: Interpolation-based control via LMI. (a) State trajectories and Lyapunov function; (b) input trajectories.]

Output feedback control

Up to now, control problems in the state space have been considered. In practice, however, direct information on, or measurement of, the complete state

of the dynamical system may not be available. In that case, an observer could possibly be used in order to estimate the state. A major drawback is the observer error, which has to be included in the uncertainty. Moreover, when the constraints become active, nonlinearity dominates the structure of the control system, and the separation principle cannot be expected to remain valid. Furthermore, there is no guarantee that the closed-loop trajectories satisfy the constraints. We will come back to the problem of state reconstruction through the current measurement and the storage of appropriate previous measurements. Even though this model may be non-minimal from the dimensional point of view, it is directly measurable and will provide a suitable model for the control design, with guarantees of constraint satisfaction. Finally, it will be shown how the principles of interpolation-based control can lead to an output feedback control.

Problem formulation

Consider the discrete-time problem of regulation to the origin for a linear time-varying or uncertain system described by the input-output relation

    y(k+1) + E_1 y(k) + E_2 y(k-1) + ... + E_s y(k-s+1)
        = N_1 u(k) + N_2 u(k-1) + ... + N_r u(k-r+1) + w(k)        (1.22)

where y(k) ∈ R^p, u(k) ∈ R^m and w(k) ∈ R^p are, respectively, the output, the input and the disturbance vector. The matrices E_i for i = 1, ..., s and N_i for i = 1, ..., r have appropriate dimensions. For simplicity, it is assumed that s = r. The matrices E_i and N_i for i = 1, 2, ..., s satisfy

    Γ = [E_1 E_2 ... E_s  N_1 N_2 ... N_s] = Σ_{i=1}^{q} α_i(k) Γ_i    (1.23)

where α_i(k) >= 0 and Σ_{i=1}^{q} α_i(k) = 1, and

    Γ_i = [E_1^i E_2^i ... E_s^i  N_1^i N_2^i ... N_s^i]

are the extreme realizations of a polytopic model. The system is subject to constraints on the output and the input

    y(k) ∈ Y,  Y = {y ∈ R^p : F_y y <= g_y}
    u(k) ∈ U,  U = {u ∈ R^m : F_u u <= g_u}                        (1.24)

where Y and U are convex and compact sets. It is assumed that the disturbance w(k) is unknown, additive, and lies in the polytope W, that is w(k) ∈ W, where W = {w ∈ R^p : F_w w <= g_w} is a C-set.

Nominal case

We consider the case where the matrices E_j and N_j for j = 1, 2, ..., s are known and fixed. The case where E_j and N_j for j = 1, 2, ..., s are unknown or time-varying will be treated in the next section. A state-space representation will be constructed according to the principles of [53]. All the steps of the construction are detailed so that the presentation of the results is self-contained. The state of the system is chosen as a vector of dimension p·s with the following components

    x(k) = [x_1(k)^T  x_2(k)^T  ...  x_s(k)^T]^T                   (1.25)

where

    x_1(k) = y(k)
    x_2(k) = -E_s x_1(k-1) + N_s u(k-1)
    x_3(k) = -E_{s-1} x_1(k-1) + x_2(k-1) + N_{s-1} u(k-1)
    x_4(k) = -E_{s-2} x_1(k-1) + x_3(k-1) + N_{s-2} u(k-1)         (1.26)
    ...
    x_s(k) = -E_2 x_1(k-1) + x_{s-1}(k-1) + N_2 u(k-1)

It will be shown that the state-space model is then defined in the compact linear difference equation form

    x(k+1) = A x(k) + B u(k) + D w(k)
    y(k)   = C x(k)                                                (1.27)

where

    A = [ -E_1       0  0  ...  0  I
          -E_s       0  0  ...  0  0
          -E_{s-1}   I  0  ...  0  0
          -E_{s-2}   0  I  ...  0  0
           ...
          -E_2       0  0  ...  I  0 ],

    B = [ N_1; N_s; N_{s-1}; N_{s-2}; ...; N_2 ],
    D = [ I; 0; ...; 0 ],
    C = [ I  0  ...  0 ].

Denote

    z(k) = [y(k)^T ... y(k-s+1)^T  u(k-1)^T ... u(k-s+1)^T]^T      (1.28)

It will further be shown that the state vector x(k) is related to the vector z(k) by

    x(k) = T z(k)                                                  (1.29)

where T = [T_1  T_2] with

    T_1 = [ I    0        0      ...   0
            0   -E_s      0      ...   0
            0   -E_{s-1} -E_s    ...   0
            ...
            0   -E_2     -E_3    ...  -E_s ],

    T_2 = [ 0        0    ...  0
            N_s      0    ...  0
            N_{s-1}  N_s  ...  0
            ...
            N_2      N_3  ...  N_s ].

At any instant k, the state vector is available provided that the current measurement and the storage of the previous measurements are ensured.

[Fig. 1.14: Invariant sets and state trajectories.]
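The map x(k) = T z(k) amounts to assembling the state blocks (1.26) from a stored window of outputs and inputs. The sketch below (an illustrative Python fragment assuming the nominal matrices E_i, N_i are known; it expands the recursion (1.26) in closed form and is not code from the thesis) builds the state directly from the measurement history.

    import numpy as np

    def build_state(E, N, y_hist, u_hist):
        # E = [E_1, ..., E_s], N = [N_1, ..., N_s] (known nominal matrices).
        # y_hist = [y(k), y(k-1), ..., y(k-s+1)], u_hist = [u(k-1), ..., u(k-s+1)].
        # Returns x(k) = [x_1(k); ...; x_s(k)] as in (1.25)-(1.26).
        s = len(E)
        blocks = [np.asarray(y_hist[0])]             # x_1(k) = y(k)
        for j in range(2, s + 1):
            xj = np.zeros_like(blocks[0])
            for m in range(1, j):                    # closed-form expansion of (1.26)
                idx = s - j + m                      # 0-based index of E_{s-j+1+m}
                xj += -E[idx] @ y_hist[m] + N[idx] @ u_hist[m - 1]
            blocks.append(xj)
        return np.concatenate(blocks)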

[Fig. 1.15: Output trajectories. A simulation of output-feedback interpolation-based control compared with a Kalman filter approach.]

Robust case

A weakness of the above approach is that the state measurement is available if and only if the system parameters are known. For an uncertain or time-varying system, this is not the case. In this section, we propose another method to construct the state variables, which does not use information on the system parameters. Using the measured input, the measured output and their stored past values, the state of the system is chosen as

    x(k) = [y(k)^T ... y(k-s+1)^T  u(k-1)^T ... u(k-s+1)^T]^T      (1.30)

The state-space model is then defined as

    x(k+1) = A x(k) + B u(k) + D w(k)
    y(k)   = C x(k)                                                (1.31)

where

    A = [ -E_1 -E_2 ... -E_s   N_2 ... N_{s-1}  N_s
           I    0   ...  0      0  ...   0       0
                ...
           0    ...  I   0      0  ...   0       0
           0    0   ...  0      0  ...   0       0
           0    0   ...  0      I  ...   0       0
                                   ...
           0    0   ...  0      0  ...   I       0 ],

    B = [ N_1; 0; ...; 0; I; 0; ...; 0 ],
    D = [ I; 0; ...; 0 ],
    C = [ I  0  ...  0 ].

Although the representation obtained is non-minimal, it has the merit of transforming the output feedback control problem for uncertain systems into

a state feedback problem, where the matrices A and B lie in a polytope without any additional uncertainty, and any state feedback control designed for this representation in the form u = Kx can be translated into a dynamic output feedback controller.
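For the robust case, the non-minimal realization (1.30)-(1.31) is pure bookkeeping of stored measurements. The following sketch (an illustrative Python fragment; the block layout is read from (1.31), and any deviation from the thesis' exact ordering is an assumption of this sketch) assembles the matrices A, B, C and D from the coefficient matrices E_i, N_i.

    import numpy as np

    def stacked_realization(E, N):
        # E = [E_1,...,E_s] (p x p blocks), N = [N_1,...,N_s] (p x m blocks).
        # Returns (A, B, C, D) of the non-minimal model with state
        # x(k) = [y(k);...;y(k-s+1); u(k-1);...;u(k-s+1)].
        s = len(E); p = E[0].shape[0]; m = N[0].shape[1]
        n = s * p + (s - 1) * m
        A = np.zeros((n, n)); B = np.zeros((n, m))
        # first block row: y(k+1) = -sum_i E_i y(k-i+1) + N_1 u(k) + sum_{i>=2} N_i u(k-i+1)
        for i in range(s):
            A[:p, i * p:(i + 1) * p] = -E[i]
        for i in range(1, s):
            A[:p, s * p + (i - 1) * m: s * p + i * m] = N[i]
        B[:p, :] = N[0]
        # shift registers for the stored past outputs and past inputs
        A[p:s * p, :(s - 1) * p] = np.eye((s - 1) * p)
        B[s * p:s * p + m, :] = np.eye(m)
        A[s * p + m:, s * p:s * p + (s - 2) * m] = np.eye(max((s - 2) * m, 0))
        C = np.zeros((p, n)); C[:, :p] = np.eye(p)
        D = np.zeros((n, p)); D[:p, :] = np.eye(p)
        return A, B, C, D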

Contents

1 Introduction
  Constrained uncertain systems
  Organization of the thesis
  List of Publications related to the PhD

Part I Background

2 Set Theoretic Methods in Control
  Set terminology
  Convex sets
    Basic definitions
    Ellipsoidal set
    Polyhedral set
  Set invariance theory
    Basic definitions
    Problem formulation
    Ellipsoidal invariant sets
    Polyhedral invariant sets
  On the domains of attraction
    Problem formulation
    Saturation nonlinearity modeling - A linear differential inclusion approach
    The ellipsoidal set approach
    The polyhedral set approach

3 Optimal and Constrained Control - An Overview
  Dynamic programming
  Pontryagin's maximum principle
  Model predictive control
    Implicit model predictive control
    Recursive feasibility and stability

    Explicit model predictive control - Parameterized vertices
  Vertex control

Part II Interpolation based control

4 Interpolation Based Control - Nominal State Feedback Case
  Problem formulation
  Interpolation based on linear programming - Implicit solution
  Interpolation based on linear programming - Explicit solution
    Geometrical interpretation
    Analysis in R^2
    Explicit solution of the interpolation-based control scheme
  Interpolation based on linear programming - Qualitative analysis
  Performance improvement for the interpolation based control
  Interpolation based on quadratic programming
  An improved interpolation based control method in the presence of actuator saturation
  Convex hull of ellipsoids

5 Interpolation Based Control - Robust State Feedback Case
  Problem formulation
  Interpolation based on linear programming
  Interpolation based on quadratic programming for uncertain systems
  An improved interpolation based control method in the presence of actuator saturation
    Interpolation via quadratic programming - Algorithm 1
    Input to state stability
    Cost function determination
    Interpolation via quadratic programming
    Interpolation via quadratic programming - Algorithm 2
  Convex hull of invariant ellipsoids for uncertain systems
    Interpolation based on LMI
    Geometrical properties of the solution

6 Interpolation Based Control - Output Feedback Case
  Problem formulation
  Output feedback - Nominal case
  Output feedback - Robust case
  Some remarks on local controllers
    Problem formulation
    Robustness analysis
    Robust optimal design

Part III Applications

7 Ball and plate system
  System description
  System identification
    The identification procedure
    Identification of the ball and plate system
  Controller design
    State space realization
    Interpolation based control
  Experimental results

8 Non-isothermal continuous stirred tank reactor
  Continuous stirred tank reactor model
  Controller design

Part IV Conclusions and Future directions

9 Conclusions and Future directions
  Conclusions
    Domain of attraction
    Interpolation based control
    LMI synthesis condition
  Future directions
    Interpolation based control for non-linear systems
    Obstacle avoidance

References


Notation

The conventions and the notations used in the thesis are classical for the control literature. A short description is provided in the following.

Sets
R - set of real numbers
R+ - set of nonnegative real numbers
R^n - set of real vectors with n elements
R^(n x m) - set of real matrices with n rows and m columns

Algebraic operators
A^T - transpose of matrix A
A^(-1) - inverse of matrix A
A > 0 (A >= 0) - positive (semi)definite matrix A
A < 0 (A <= 0) - negative (semi)definite matrix A

Set operators
P_1 ∩ P_2 - set intersection
P_1 ⊕ P_2 - Minkowski sum
P_1 ⊖ P_2 - Pontryagin difference
P_1 ⊆ P_2 - P_1 is a subset of P_2
P_1 ⊂ P_2 - P_1 is a strict subset of P_2
P_1 ⊇ P_2 - P_1 is a superset of P_2
P_1 ⊃ P_2 - P_1 is a strict superset of P_2
Fr(P) - the frontier (boundary) of P
Int(P) - the interior of P
Proj_x(P) - the orthogonal projection of the set P onto the x space

Others
I - identity matrix
1 - matrix of ones of appropriate dimension
0 - matrix of zeros of appropriate dimension

Acronyms
LMI - Linear Matrix Inequality
LP - Linear Programming
QP - Quadratic Programming
LQR - Linear Quadratic Regulator
LTI - Linear Time Invariant
LPV - Linear Parameter Varying
PWA - PieceWise Affine


Chapter 1
Introduction

1.1 Constrained uncertain systems

Constraints are encountered in practically all real-world control problems. The presence of constraints leads to high complexity control problems, not only in control theory, but also in practical applications. From the conceptual point of view, constraints can have different natures. Basically, there are two types of constraints, imposed by physical limitations and/or performance desiderata.

Physical constraints are due to the physical limitations of the mechanical, electrical, biological, etc., parts of the controlled system. The main concern here is stability in the presence of input and output or state constraints. The input and output variables must remain inside the constraints to avoid over-exploitation or damage. In addition, constraint violation may lead to degraded performance, oscillations or even instability.

Performance constraints are introduced by the designer for guaranteeing performance requirements, for example transient time, transient overshoot, fault tolerance, equipment longevity and environmental requirements.

The constrained control problem can become even more challenging in the presence of model uncertainties, which are unavoidable in practice [42], [2]. Model uncertainties appear, e.g., when a linear model is obtained as an approximation of a nonlinear system around an operating point. Even if the underlying process is quite accurately represented by a linear model, the parameters of the model could be time-varying or could change due to a change in the operating points. In these cases, the cause and structure of the model uncertainties are rather well known. Nevertheless, even when the real process is linear, there is always some uncertainty associated, for example, with physical parameters, which are never known exactly. Moreover, real processes are usually affected by disturbances and it is required to consider them in control design.

It is generally accepted that a key reason for using feedback is to diminish the effects of uncertainty (which is actually asking a lot), which may appear in different forms as parametric uncertainties

or as additive disturbances or as other inadequacies in the models used to design the feedback law. Model uncertainty and robustness have been a central theme in the development of the field of automatic control [9].

A straightforward way to stabilize a constrained system is to perform the control design disregarding the constraints. Then an adaptation of the control law is considered with respect to input saturation; such an approach is called anti-windup [79], [5], [49], [57]. Over the last decades, the research on constrained control topics has developed to the degree that constraints can be taken into account during the synthesis phase. By its principle, the model predictive control (MPC) approach shows its importance in dealing with constraints [32], [], [28], [29], [47], [96], [48].

In the MPC approach, a sequence of predicted optimal control values over a finite prediction horizon is computed to optimize the performance of the controlled system, expressed in terms of a cost function [3]. The MPC approach uses an internal mathematical model which, given the current measurements, predicts the future behavior of the real system with respect to changes in the control inputs. Once the sequence of optimal control inputs has been calculated, only the first element of this sequence is actually applied to the system and the entire optimization is repeated at the next time instant with the new state measurement [5], [], [96].

In classical MPC, the control action at each time instant is obtained by solving an on-line open-loop finite-horizon optimal control problem [26], [33]. With a linear model, polyhedral constraints, and a quadratic cost, the resulting optimization problem is a quadratic program. Solving the quadratic program can be computationally costly, especially when the prediction horizon is large, and this has traditionally limited MPC to applications with relatively low complexity/sampling interval ratios [4].

In the last decade, attempts have been made to use predictive control for fast processes. In [22], [2], [2], [53] it was shown that constrained linear MPC is equivalent to a multi-parametric optimization problem, where the state plays the role of a vector of parameters. The solution is a piecewise affine function of the state over a polyhedral partition of the state space, and the computational effort of the MPC is moved off-line.

However, explicit MPC implementation approaches also have disadvantages. Obtaining the explicit optimal MPC solution requires solving an off-line parametric optimization problem, which is generally an NP-hard problem. Although the problem is tractable and practically solvable for several interesting control applications, the off-line computational effort grows exponentially fast as the problem size increases [84], [85], [83], [64], [65]. This is the case for long prediction horizons, large numbers of constraints and high dimensional systems. In [56], the authors show that the on-line computation is preferable for high dimensional systems, where a significant reduction of the computational complexity can be achieved by exploiting the particular structure of the optimization problem as well as by early stopping and warm-starting from a solution obtained at the previous time step. The same reference mentions that for models of more than five dimensions the explicit solution might be impractical. It is worth mentioning that approximate explicit solutions have been investigated to go beyond this ad-hoc limitation [9], [62], [4].

Note that, as their name says, most traditional implicit and explicit MPC approaches are based on mathematical models which invariably present a mismatch with respect to the physical systems. Robust MPC is meant to address both model uncertainty and disturbances. However, robust MPC presents considerable conservativeness and/or on-line computational burden [78], [23], [87].

The use of interpolation in constrained control in order to avoid very complex control design procedures is well known in the literature. There is a long line of developments on these topics, generally closely related to MPC, see for example [], [32], [33], [3], where interpolation between input sequences, state trajectories, different feedback gains and/or associated invariant sets can be found. Vertex control can also be considered an interpolation-based control approach, based on the explicit control values assumed to be available for the extreme points of a certain region in the state space [54], [22]. A weakness of vertex control is that the full control range is exploited only on the border of the feasible positive invariant set in the state space, and hence the time to regulate the plant to the origin is much longer than, e.g., by time-optimal control. A way to overcome this shortcoming is to switch to another, more aggressive, local controller, e.g. a state feedback controller u_o = Kx, when the state reaches the maximal feasible set of the local controller. The disadvantage of such a switching-based solution is that the control action becomes non-smooth [3].

For LTI systems the vertex control Lyapunov level curves are polyhedra parallel with the border of the vertex controller feasible set, and as such we will, without loss of generality, base the new design method on the existence of a polyhedral contractive set for a local control law. This set will be related to the description of the maximal controlled invariant set. Then we point to the existence of a smooth convex interpolation between the vertex control action u_v and the local control action u_o for the current state x, in the form u(x) = c(x)u_v(x) + (1 - c(x))u_o(x) with 0 <= c(x) <= 1, whereby c(x) is minimized in order to be as close as possible to the local optimal controller. It is shown that with this objective function, there exists a Lyapunov function for the system controlled by the interpolated controller u, and hence stability is proven. It is shown that from a computational point of view the minimization of the interpolating coefficient c can be done by linear programming. It is further shown that the minimization can be done off-line, yielding a polyhedral partition of the feasible region, with an affine control law for each polyhedron, while guaranteeing the global continuity of the state feedback. Thus, our controller can be compared from the structural point of view with explicit MPC, where the feasible set in the state space is also partitioned into polyhedra, each with its own affine state feedback control law.

The interpolation based on an LP (linear programming) problem between the global vertex controller and the local more aggressive controller is the first aim of the thesis. Then, as in the traditional MPC approach, which is formulated using a quadratic criterion [], we will show how an interpolation based control problem for linear systems can be set up as a quadratic program. All the interpolation schemes via LP or QP computations are based on the use of polyhedral sets. For

high dimensional systems, the polyhedral based control methods might be impractical, since the number of vertices or half-spaces may lead to an exponential complexity. In these cases, ellipsoids seem to be a suitable class of sets for the interpolation. It will be shown that the convex hull of a set of invariant ellipsoids is controlled invariant. A continuous feedback control law is constructed based on the solution of an LMI problem at each time instant. For all interpolation optimization based schemes, a proof of recursive feasibility and robust asymptotic stability will be provided.

1.2 Organization of the thesis

The thesis (except the present chapter) is partitioned into four parts and appendices.

Part I contains two chapters introducing the theoretical foundations for the rest of the thesis. In Chapter 2, basic set theory elements are discussed with the accent on (controlled) invariant sets. The advantages as well as disadvantages of different families of sets and their use in control will be considered, which is instrumental for the presentation of the main results of the thesis. Chapter 3 reviews the main approaches to optimal and constrained control with emphasis on vertex control, which is one of the main ingredients of an interpolation based control scenario.

Part II consists of three chapters and provides a novel and computationally attractive solution to a constrained control problem. This part presents several original contributions on constrained control algorithms for discrete-time linear systems. Chapter 4 is concerned only with nominal state-input constrained systems, where there are no disturbances and no model mismatch. In this chapter a series of generic interpolation based control schemes via linear programming, quadratic programming or linear matrix inequalities is introduced. Further, in Chapter 5, we extend the interpolation technique to discrete-time linear uncertain or time-varying systems subject to bounded disturbances. To complete the presentation, in Chapter 6, the output feedback case is considered. This last feature is very important, since state feedback is rarely used in (constrained control) practice. For all algorithms proposed in this part, proofs of recursive feasibility and asymptotic stability are given.

Part III contains two chapters applying the theoretical results discussed in Part II to one practical application proposed in the literature and one benchmark. In Chapter 7 the interpolation based control via linear programming is used for stabilizing a ball and plate laboratory system. Then in Chapter 8, the explicit interpolation based control approach is implemented on a non-isothermal continuous stirred tank reactor.

Part IV consists of two sections which complete the thesis with conclusions and future directions.

1.3 List of Publications related to the PhD

We provide here the complete list of publications submitted/accepted to various conferences and journals.

Published journal papers

Hoai-Nam Nguyen, Sorin Olaru, Hybrid modeling and constrained control of juggling system, International Journal of Systems Science, 2. [4]

Hoai-Nam Nguyen, Sorin Olaru, Morten Hovd, A patchy approximation of explicit model predictive control, International Journal of Control, 22. [3]

Submitted journal papers

Hoai-Nam Nguyen, Per-Olof Gutman, Sorin Olaru, Morten Hovd, Improved vertex control for discrete-time linear time-invariant systems with state and control constraints, submitted for publication (second review round), Automatica.

Book chapters

Hoai-Nam Nguyen, Sorin Olaru, Per-Olof Gutman, Morten Hovd, Robust output feedback interpolation based control for constrained linear systems, 22, Informatics in Control, Automation and Robotics - Revised and Selected Papers from the International Conference on Informatics, published by Springer-Verlag. [9]

Published conference papers

Hoai-Nam Nguyen, Sorin Olaru, Per-Olof Gutman, Morten Hovd, Interpolation based control for constrained linear time-varying or uncertain systems in the presence of disturbances, in 4th IFAC Nonlinear Model Predictive Control Conference, August 23-27, 22, NH Conference Center Leeuwenhorst, Noordwijkerhout, NL. [2]

Hoai-Nam Nguyen, Sorin Olaru, Per-Olof Gutman, Morten Hovd, Improved vertex control for a ball and plate system, in 7th IFAC Symposium on Robust Control Design, June 2-22, 22, Aalborg, Denmark. []

Hoai-Nam Nguyen, Sorin Olaru, Per-Olof Gutman, Morten Hovd, An improved interpolation based control method in the presence of actuator saturation, at Itzhack Y. Bar-Itzhack Memorial Symposium on Estimation, Navigation, and Spacecraft Control, October 47, 22, Dan Panorama Hotel, Haifa, Israel.

Hoai-Nam Nguyen, Sorin Olaru, Per-Olof Gutman, Morten Hovd, Constrained interpolation-based control for polytopic uncertain systems, in 5th IEEE Conference on Decision and Control and European Control Conference, December 2-5, 2, US. []

Hoai-Nam Nguyen, Per-Olof Gutman, Sorin Olaru, Morten Hovd, An interpolation approach for robust constrained output feedback, in 5th IEEE Conference on Decision and Control and European Control Conference, December 2-5, 2, US. [7]

Hoai-Nam Nguyen, Sorin Olaru, Florin Stoican, On maximal robustly positively invariant sets, in ICINCO 2: 8th International Conference on Informatics in Control, Automation and Robotics. [5]

Hoai-Nam Nguyen, Per-Olof Gutman, Sorin Olaru, Morten Hovd, Explicit constraint control based on interpolation techniques for time varying and uncertain linear discrete time systems, in 8th IFAC World Congress, Italy, 2. [6]

Hoai-Nam Nguyen, Per-Olof Gutman, Sorin Olaru, Morten Hovd, Federic Colledani, Improved vertex control for uncertain linear discrete time systems with control and state constraints, in ACC2, San Francisco, California, USA, June 29 - July, 2. [8]

Hoai-Nam Nguyen, Sorin Olaru, Morten Hovd, Patchy approximate explicit model predictive control, in Proceedings of the International Conference on Control, Automation and Systems (ICCAS), Gyeonggi-do (Korea), 2. [3]

Hoai-Nam Nguyen, Sorin Olaru, Hybrid modeling and optimal control of juggling systems, in Proceedings of the International Conference on Control, Automation and Systems, Sinaia (Romania), 2. [4]

Part I Background


Chapter 2 Set Theoretic Methods in Control

The first aim of this chapter is to briefly review some of the set families used in control and to comment on the strengths and weaknesses of each of them. The tools of choice throughout the manuscript will be ellipsoidal and polyhedral sets, due to their combination of numerical applicability and flexibility in the representation of generic convex sets. After the geometrical nomenclature, the concepts of robustly invariant and robust controlled invariant sets are introduced. Some algorithms are proposed for computing such sets. The chapter ends with an original contribution on estimating the domain of attraction for time-varying and uncertain discrete-time systems with a saturated input.

2.1 Set terminology

For completeness, some standard definitions of set terminology will first be introduced in this section. For a detailed reference, the reader is referred to the book [77].

Definition 2.1. (Closed set) A set S is closed if it contains its own boundary. In other words, any point outside S has a neighborhood disjoint from S.

Definition 2.2. (Closure of a set) The closure of a set S is the intersection of all closed sets containing S.

Definition 2.3. (Bounded set) A set S ⊂ R^n is bounded if it is contained in some ball B_R = {x ∈ R^n : ||x||_2 ≤ ε} of finite radius ε > 0.

Definition 2.4. (Compact set) A set S ⊂ R^n is compact if it is closed and bounded.

Definition 2.5. (Support function) The support function of a set S ⊂ R^n, evaluated at z ∈ R^n, is defined as

φ_S(z) = sup_{x ∈ S} z^T x
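Since most sets used later in the thesis are polytopes, Definition 2.5 can be evaluated numerically by a linear program: for S = {x : Fx ≤ g}, φ_S(z) is the optimal value of maximizing z^T x subject to Fx ≤ g. The following minimal sketch (not from the thesis; the box F, g and the direction z are illustrative placeholders) uses scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

def support_function(F, g, z):
    """Evaluate phi_S(z) = sup_{x in S} z^T x for S = {x : F x <= g}.

    linprog minimizes, so we minimize -z^T x and flip the sign of the optimum.
    """
    res = linprog(c=-z, A_ub=F, b_ub=g,
                  bounds=[(None, None)] * F.shape[1], method="highs")
    if not res.success:
        raise ValueError("LP failed (set empty, or unbounded in direction z)")
    return -res.fun

# Illustrative data: the unit box in R^2 and the direction z = (1, 2)
F = np.vstack([np.eye(2), -np.eye(2)])
g = np.ones(4)
z = np.array([1.0, 2.0])
print(support_function(F, g, z))   # expected value: 1*1 + 2*1 = 3
```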

49 34 2 Set Theoretic Methods in Control 2.2 Convex sets 2.2. Basic definitions The fact that convexity is a more important property than linearity has been recognized in several domains, the optimization theory being maybe the best example [28], [3]. We provide in this section a series of definitions which will be useful in the sequel. Definition2.6.(Convexset) A sets R n isconvex if for allx S andx 2 S, it holds that αx +( α)x 2 S, α [, ] The point x=αx +( α)x 2 where α is called aconvexcombination of the pair (x,x 2 ). The set of all such points is the segment connectingx andx 2. In other words a setsis said to be convex if the line segment between any two points inslies ins. The concept of convex set is closely related to the definition of a convex functions. Definition2.7.(Convexfunction) A function f :S R withs R n isconvex if and only if the setsis convex and f(αx +( α)x 2 ) αf(x )+( α)f(x 2 ) for allx S,x 2 S and for all α [, ]. Definition2.8.(C-set) A sets R n is ac set if it is a convex and compact set, containing the origin in its interior. Definition2.9.(Convexhull) Theconvexhull of a sets R n is the smallest convex set containings. It is well known [58] that for any finite sets = {s,s 2,...,s r } withr N, the convex hull of the setsis given as where r α i = and α i. i= Convex Hull(S)={s R n :s= r i= α i s i : s i S} One can understand the link straightforwardly by the fact that the epigraph of a convex function is a convex set.
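As a quick numerical illustration of the convex hull definition above, the hull of a finite set of points can be computed with standard tools. The sketch below (illustrative data, not taken from the thesis) uses scipy.spatial.ConvexHull and recovers both the extreme points and the facet inequalities of the hull:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Illustrative 2-D point cloud: four corners of a square plus an interior point.
S = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [0.5, 0.5]])   # interior point, not a vertex of the hull

hull = ConvexHull(S)
print("hull vertices:\n", S[hull.vertices])        # the four corners only
print("facet inequalities A x + b <= 0:\n", hull.equations)
```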

50 2.2 Convex sets Ellipsoidal set Ellipsoidal sets or ellipsoids are one of the famous classes of convex sets. Ellipsoids represent a large category used in the study of dynamical systems and the control fields due to their simple numerical representation [29], [8]. Next we provide a formal definition for ellipsoidal sets and a few properties. Definition2..(Ellipsoidalset) An ellipsoidal sete(p,x ) R n with centerx and shape matrixpis a set of the form E(P,x )={x R n :(x x ) T P (x x ) } (2.) wherep R n n is a positive definite matrix. If the ellipsoid is centered in the origin then it is possible to write E(P)={x R n :x T P x } (2.2) DefineQ=P 2 as the Cholesky factor of matrixp, which satisfiesq T Q=QQ T = P. With the matrixq, it is possible to show an alternative dual representation for an ellipsoidal set D(Q,x )={x R n :x=x +Qz} wherez R n such thatz T z. Ellipsoidal sets are probably the most commonly used in the control field since they are associated with powerful tools such as the Lyapunov equation 2 or Linear Matrix Inequalities (LMI) [37], [29]. When using ellipsoidal sets, almost all the optimization problems present in the classical control methods can be reduced to the optimization of a linear function under LMI constraints. This optimization problem is convex and is by now a powerful design tool in many control applications. A linear matrix inequality is a condition of the type [37], [29] F(x) wherex R n is a vector variable and the matrixf(x) is affine inx, that is F(x)=F + with symmetric matricesf i R m m. LMIs can either be understood as feasibility conditions or constraints for optimization problems. Optimization of a linear function over LMI constraints is called semidefinite programming, which is considered as an extension of linear programming. Nowadays, a major benefit in using LMIs is that for solving an LMI problem, n i= F i x i 2 A quadratic Lyapunov function can be associated to stable linear dynamics. Consequently, the ellipsoids are natural representations of the quadratic Lyapunov function level sets.

51 36 2 Set Theoretic Methods in Control several polynomial time algorithms were developed and implemented in free available software packages, such as LMI Lab [43], YALMIP [94], CVX [49], SEDUMI [47] etc. The Schur complement formula is a very useful tool for manipulating matrix inequalities. The Schur complement states that the nonlinear conditions of the special forms { P(x) P(x) Q(x) T R(x) (2.3) Q(x) or { R(x) R(x) Q(x)P(x) Q(x) T can be equivalently written in the LMI form [ P(x)Q(x) T Q(x) R(x) (2.4) ] (2.5) The Schur complement allows one to convert certain nonlinear matrix inequalities into a higher dimensional LMI. For example, it is well known [8] that the support function of the ellipsoide(p,x ), evaluated at the vector f is φ E(P,x )(z)=f T x + f T Pf (2.6) then it is obvious that the ellipsoide(p) in (2.2) is a subset of the polyhedral set 3 with f R n if and only if P(f,)={x R n : f T x } f T Pf or by using the Schur complement this condition can be rewritten as [29], [57] [ f T ] P (2.7) Pf P Obviously an ellipsoidal sete(p,x ) R n is uniquely defined by its matrixpand by its centerx. Since matrixpis symmetric, the complexity of the representation (2.) is n(n+) +n= n(n+3) 2 2 The main drawback of ellipsoids is however that having a fixed and symmetrical structure they may be too conservative and this conservativeness is increased by the related operations. It is well known [8] 4 that The convex hull of of a set of ellipsoids, in general, is not an ellipsoid. 3 A rigorous definition of polyhedral sets will be given in the section. 4 The reader is referred to [8] for the definitions of operations with ellipsoids.

52 2.2 Convex sets 37 The sum of two ellipsoids is not, in general, an ellipsoid. The difference of two ellipsoids is not, in general, an ellipsoid. The intersection of two ellipsoids is not, in general, an ellipsoid Polyhedral set Polyhedral sets provide a useful geometrical representation for the linear constraints that appear in diverse fields such as control and optimization. In a convex setting, they provide a good compromise between complexity and flexibility. Due to their linear and convex nature, the basic set operations are relatively easy to implement [82], [54]. Principally, this is related to their dual (half-spaces/vertices) representation [], [3] which allows choosing which formulation is best suited for a particular problem. This section is started by recalling some theoretical concepts. Definition2..(Hyperplane) A hyperplaneh R n is a set of the form where f R n is a column vector andg R is a scalar. H ={x R n :f T x=g} (2.8) Definition2.2.(Half-space) A closed half-spaceh R n is a set of the form where f R n is a column vector andg R is a scalar. H ={x R n :f T x g} (2.9) Definition2.3.(Polyhedralset) A convex polyhedral setp(f,g) is a set of the form P(F,g)={x R n :F i x g i, i=,2,...,n } (2.) wheref i R n denotes thei th row of the matrixf R n n andg i is thei th component of the column vectorg R n. The inequalities here are element-wise. A polyhedral set contains the origin if and only ifg, and includes the origin in its interior if and only ifg>. Definition2.4.(Polytope) A polytope is abounded polyhedral set. Definition2.5.(Dimensionofpolytope) A polytopep R n is of dimensiond n, if there exists ad dimension ball with radius ε > contained inpand there exists no (d+ ) dimension ball with radius ε > contained inp. A polytope is full dimensional if and only ifd =n. Definition 2.6.(Redundant half-space) For a given polytope P(F, g), a polyhedral setp(f,g) is defined by removing thei th half-spacef i from matrixf and

53 38 2 Set Theoretic Methods in Control the corresponding componentg i from vectorg. The facet(f i,g i ) isredundant if and only if g i <g i (2.) where g i = max{f i x} x subject to:fx g Definition2.7.(Face,facet,vertex,edge) A(n ) dimensional facef i a of polytopep(f,g) R n is defined as a set of the form F i a ={x P:F i x=g i } (2.2) and can be interpreted as the intersection between the polytope and a non-redundant supporting hyperplane F i a =P {x R n :F i x=g i } (2.3) The non-empty intersection of two faces of dimension (n ) leads to the description of (n 2) dimensional face. The faces of the polytopepwith dimension, and(n ) are calledvertices,edges andfacets, respectively. One of the fundamental properties of polytopes is that it can be presented in half-space representation as in Definition 2.3 or in vertex representation as follows { } P(V)= x R n :x= r i= α i v i, α i, wherev i R n denotes thei column of matrixv R n r. r i= α i = f T x g v f 7 T x g7 v 7 v 2 x 2 f 6 T x g6 f 2 T x g2 x 2 v 3 v 6 f 5 T x g5 f 3 T x g3 v 4 f 4 T x g4 v 5 (a) Half-space representation x (b) Vertex representation x Fig.2. Exemplification fo the equivalent of half-space and vertex representations of polytopes. This dual (half-spaces/vertices) representation has very practical consequences in methodological and numerical applications. Due to this duality we are allowed to use either representation in the solving of a particular problem. Note that the transformation from one representation to another may be time-consuming with several

54 2.2 Convex sets 39 well-known algorithms: Fourier-Motzkin elimination [37], CDD [4], Equality Set Projection [63]. and Recall that the expressionx= r α i v i with a given set of vectors {v,v 2,...,v r } i= r i= α i =, α i is calledtheconvexhull of a set of vectors{v,v 2,...,v r } and will be denoted as x=conv{v,v 2,...,v r } Definition2.8.(Simplex) A simplexs R n is ann dimensional polytope, which is the convex hull ofn+ vertices. For example, a 2D simplex is a triangle, a 3D simplex is a tetrahedron, and a 4D simplex is a pentachoron. Definition 2.9.(Redundant vertex) For a given polytope P(V), a polyhedral set P(V) is defined by removing thei th vertexv i from the matrixv. The vertexv i is redundant if and only if p i < (2.4) where p i = min p { T p} subject to:vp=v i Definition 2.2.(Minimal representation) A half-space or vertex representation of polytopepisminimal if and only if the removal of any facet or any vertex would changep, i.e. there are no redundant facets or redundant vertices. Clearly, a minimal representation of a polytope can be achieved by removing from the half-space (vertex) representation all the redundant facets (vertices). Definition 2.2.(Normalized representation) A polytope P(F,g)={x R n :F i x g i,i=,2,...,n } is in a normalized representation if it has the following property F i F T i = A normalized full dimensional polytope has a unique minimal representation. This fact is very meaningful in practice, since normalized full dimensional polytopes in minimal representation allow us to avoid any ambiguity when comparing them. Next, some basic operations on polytopes will be briefly reviewed. Note that although the focus lies on polytopes, most of the operations described here are directly or with minor changes applicable to polyhedral sets. Additional details on polytope computation can be found in [58], [52], [42].
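The redundant half-space definition above, together with its LP test, gives a direct way of computing a minimal half-space representation: each row F_i is tested by maximizing F_i x subject to the remaining inequalities and is dropped whenever the optimum does not exceed g_i. A possible sketch (illustrative only; the example polytope is a placeholder) based on scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

def remove_redundant_rows(F, g, tol=1e-9):
    """Return a sub-selection of rows of (F, g) without redundant half-spaces."""
    keep = []
    for i in range(F.shape[0]):
        others = [j for j in range(F.shape[0]) if j != i]
        # maximize F_i x subject to the remaining inequalities
        res = linprog(c=-F[i], A_ub=F[others], b_ub=g[others],
                      bounds=[(None, None)] * F.shape[1], method="highs")
        gi_star = -res.fun if res.success else np.inf
        # the i-th half-space is redundant when gi_star <= g_i
        if gi_star > g[i] + tol:
            keep.append(i)
    return F[keep], g[keep]

# Illustrative example: unit box plus the clearly redundant constraint x1 <= 5
F = np.vstack([np.eye(2), -np.eye(2), np.array([[1.0, 0.0]])])
g = np.array([1.0, 1.0, 1.0, 1.0, 5.0])
F_min, g_min = remove_redundant_rows(F, g)
print(F_min, g_min)   # the last row has been dropped
```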

55 4 2 Set Theoretic Methods in Control Definition2.22.(Intersection) The intersection of two polytopesp R n,p 2 R n is a polytope P P 2 ={x R n :x P,x P 2 } Definition2.23.(Minkowskisum) The Minkowski sum of two polytopesp R n, P 2 R n is a polytope P P 2 ={x +x 2 :x P,x 2 P 2 } It is well known [58] that ifp andp 2 are presented in vertex representation, i.e. then the Minkowski can be computed as P = Conv{v,v 2,...v p }, P 2 = Conv{v 2,v 22,...v 2q } P P 2 = Conv{v i +v 2j }, i=,2,...,p, j=,2,...,q Definition 2.24.(Pontryagin difference) The Pontryagin difference of two polytopesp R n,p 2 R n is a polytope P P 2 ={x P :x +x 2 P, x 2 P 2 } P P 2 P P 2 P P 2 P P 2 x 2 x 2 x x (a) Minkowski sump P 2 (b) Pontryagin differencep P 2. Fig.2.2 Minkowski sum and Pontryagin difference of polytopes. Note that the Pontryagin difference is not the complement of the Minkowski sum. For two polytopesp andp 2, it holds only that(p P 2 ) P 2 P. Definition2.25.(Projection) Given a polytopep R n +n 2 the orthogonal projection onto thex spacer n is defined as Proj x (P)={x R n : x 2 R n 2 such that[x T x T 2] T P} It is well known that the Minkowski sum operation on polytopes in their halfplane representation is complexity-wise equivalent to a projection [58]. Current projection methods for polytopes that can operate in general dimensions can be

56 2.2 Convex sets 4 grouped into four classes: Fourier elimination [69], block elimination [2], vertex based approaches and wrapping-based techniques [63]. P x 2 Proj x (P) x Fig.2.3 Projection of a 2-dimensional polytopeponto a linex. It is straightforward to see that the complexity of the representation of polytopes is not a function of the space dimension only, but it may be arbitrarily big. For the half-space (or alternatively vertex) representation, the complexity of the polytopes is a linear function of the number of rows of the matrixf (the number of columns of the matrixv ). As far as the complexity issue concerns, it is worth to be mentioned that none of these representations can be regarded as more convenient. Apparently, one can define an arbitrary polytope with relatively few vertices, however this may nevertheless have a surprisingly large number of facets. This happens, for example when some vertices contribute to many facets. And equally, one can define an arbitrary polytope with relatively few facets, however this may have relatively many more vertices. This happens, for example when some facets have many vertices [42]. The main advantage of the polytopes is their flexibility. It is well known [3] that any convex body can be approximated arbitrarily close by a polytope. Particularly, for a given bounded, convex and closed setsand for a given ε with <ε <, then there exists a polytopepsuch that ( ε)s P S for an inner ε approximation of the setsand S P (+ε)s for an outer ε approximation of the sets.
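As recalled in Definition 2.23, when both operands are available in vertex representation the Minkowski sum is simply the convex hull of all pairwise sums of vertices. A small illustrative sketch (the two boxes are placeholders; scipy.spatial.ConvexHull is used only to discard sums that are not extreme points):

```python
import numpy as np
from itertools import product
from scipy.spatial import ConvexHull

def minkowski_sum_vrep(V1, V2):
    """Vertices of P1 (+) P2 from the vertex lists of P1 and P2 (rows of V1, V2)."""
    sums = np.array([v1 + v2 for v1, v2 in product(V1, V2)])
    hull = ConvexHull(sums)            # prune points that are not extreme
    return sums[hull.vertices]

# Illustrative data: the unit box and the same box scaled by 0.5
V1 = np.array([[1, 1], [1, -1], [-1, -1], [-1, 1]], dtype=float)
V2 = 0.5 * V1
print(minkowski_sum_vrep(V1, V2))      # vertices of a box with half-width 1.5
```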

57 42 2 Set Theoretic Methods in Control 2.3 Set invariance theory 2.3. Basic definitions Set invariance is a fundamental concept in analysis and controller design for constrained systems, since the constraint satisfaction can be guaranteed for all times if and only if the initial states are contained in an invariant set. Two types of systems will be considered in this section, namely, discrete-time uncertain nonlinear systems and systems with additional external control inputs x(k+ )=f(x(k),w(k)) (2.5) x(k+ )=f(x(k),u(k),w(k)) (2.6) wherex(k) R n,u(k) R m andw(k) R d are respectively the system state, the control input and the unknown disturbance. It is assumed that f(,,) = and f(,)=. The state vector x(k), the control vector u(k) and the disturbance w(k) are subject to constraints x(k) X u(k) U k (2.7) w(k) W where the setsx R n,u R m andw R d are assumed to be closed and bounded. It is also assumed that the setsx,u andw contain the origin their respective interior. Definition2.26.Robustpositivelyinvariantset [23], [7] The set Ω X is robust positively invariant for the system (2.5) if and only if for allx(k) Ω and for allw(k) W. f(x(k),w(k)) Ω Hence if the state vector of system (2.5) reaches a robust positively invariant set, it will remain inside the set in spite of disturbancew(k). The termpositively refers to the fact that only forward evolutions of the system (2.5) are considered and will be omitted in future sections for brevity. Given a bounded setx R n, the maximal robustly invariant set Ω max X is a robustly invariant set, that contains all the robustly invariant sets contained inx. Definition2.27.Robustcontractiveset [23] For a given scalar number λ with λ, the set Ω X is robust λ contractive for the system (2.5) if and only if for allx(k) Ω and for allw(k) W. f(x(k),w(k)) λω

58 2.3 Set invariance theory 43 Definition2.28.Robustcontrolledinvariantset [23], [7] The setc X is robust controlled invariant for the system (2.6) if for allx(k) C, there exists a control value u(k) U such that for allw(k) W. x(k+ )=f(x(k),u(k),w(k)) C Given a bounded setx R n, the maximal robust controlled invariant setc max X is a robust controlled invariant set and contains all the robust controlled invariant sets contained inx. Definition Robust controlled contractive set [23] For a given scalar number λ with λ < the setc X is robust controlled contractive for the system (2.6) if for allx(k) C, there exists a control valueu(k) U such that for allw(k) W. x(k+ )=f(x(k),u(k),w(k)) λc Obviously, in Definition 2.27 and Definition 2.29 if the contraction factor λ = we will, respectively retrieve the robust invariance and robust controlled invariance Problem formulation From this point on, we will consider the problem of computing an invariant set for the following discrete time linear time-varying or uncertain system x(k+ )=A(k)x(k)+B(k)u(k)+D(k)w(k) (2.8) wherex(k) R n,u(k) R m,w(k) R d are, respectively the state, input and disturbance vectors. The matricesa(k) R n n,b(k) R n m,d(k) R n d satisfy A(k)= q q i= i= α i (k)a i, B(k)= q α i (k)b i, D(k)= q α i (k)d i i= i= α i (k)=, α i (k) (2.9) where the matricesa i,b i andd i are the extreme realizations ofa(k),b(k) andd(k). Remark 2.. Note that the numbers of the extreme realizations of A(k), B(k) and D(k) can be different as

59 44 2 Set Theoretic Methods in Control A(k)= q α i (k)a i, B(k)= q 2 β i (k)b i, D(k)= q 3 γ i (k)d i i= i= i= q i= q 2 i= q 3 i= α i (k)=,α i (k), i=,2,...,q β i (k)=,β i (k), i=,2,...,q 2 γ i (k)=,γ i (k), i=,2,...,q 3 (2.2) In this case the form of (2.2) can be translated into the form of (2.9) as follows. For simplicity we consider here the case whend(k)=, k, but the extension to the case whend(k) is straightforward. x(k+ ) = q α i (k)a i x(k)+ q 2 β j (k)b j u(k) i= j= = q α i (k)a i x(k)+ q α i (k) q 2 β j (k)b j u(k) i= { i= j= } = q α i (k) A i x(k)+ q 2 β j (k)b j u(k) i= { j= = q α i (k) q 2 β j (k)a i x(k)+ q 2 β j (k)b j u(k) i= j= j= β j (k) { A i x(k)+b j u(k) } = q i= = q α i (k) q 2 q 2 i=j= j= α i (k)β j (k) { A i x(k)+b j u(k) } Consider the polytopep c, the vertices of which are given by taking all possible combinations of{a i,b j } withi=,2,...,q and j=,2,...,q 2. Since q q 2 i=j= α i (k)β j (k)= q i= α i (k) q 2 j= β j (k)= it is clear that{a(k),b(k)} can be expressed as a convex combination of the vertices ofp c. The state, the control and the disturbance are subject to the following polytopic constraints x(k) X,X ={x R n :F x x g x } u(k) U,U ={u R m :F u u g u } (2.2) w(k) W,W ={w R d :F w w g w } where the matricesf x,f u,f w and the vectorsg x,g u,g w are assumed to be constant withg x >,g u >,g w > such that the origin is contained in the interior ofx,u andw. Recall that the inequalities are element-wise. }
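The construction in Remark 2.1 amounts to forming the polytope P_c whose vertices are all pairs (A_i, B_j). Numerically this is a plain Cartesian product of the two vertex lists, as in the following sketch (the vertex matrices are illustrative placeholders, not the extreme realizations of any system in the thesis):

```python
import numpy as np
from itertools import product

# Illustrative extreme realizations of A(k) and B(k)
A_vertices = [np.array([[1.0, 0.1], [0.0, 1.0]]),
              np.array([[1.0, 0.2], [0.0, 1.0]])]
B_vertices = [np.array([[0.0], [1.0]]),
              np.array([[0.0], [1.5]])]

# Vertices of the common polytope P_c: all combinations (A_i, B_j)
Pc_vertices = [(A, B) for A, B in product(A_vertices, B_vertices)]
print(len(Pc_vertices))   # q1 * q2 = 4 vertex pairs
```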

60 2.3 Set invariance theory Ellipsoidal invariant sets Ellipsoidal sets are the most commonly used for robust stability analysis and controller synthesis of constrained systems. Their popularity is due to computational efficiency via the use of LMI formulations and the complexity is fixed with respect to the dimension of the state space [29], [37]. This approach, however may lead to conservative results. For simplicity, in this subsection, the case of vanishing disturbances is considered. In other words, the system under consideration is x(k+ )=A(k)x(k)+B(k)u(k) (2.22) Due to the symmetric properties of ellipsoids, it is clear that the ellipsoidal invariant sets are less conservative in the case when the constraints on the state and control vector are symmetric, i.e. { x(k) X,X={x : Fi x }, i=,2,...,n (2.23) u(k) U,U ={u : u i u imax }, i=,2,...,m whereu imax is thei component of vectoru max R m. Let us consider now the problem of checking robust controlled invariance. The ellipsoide(p)={x R n :x T P x } is controlled invariant if and only if for all x E(P) there exists an inputu=φ(x) U such that (A i x+b i Φ(x)) T P (A i x+b i Φ(x)) (2.24) for alli=,2,...,q, whereqis the cardinal of the set of vertices inp c. It is well known [27] that for the time-varying or uncertain linear discrete-time system (2.22), it is sufficient to check condition (2.24) only for allxon the boundary ofe(p), i.e. for allxsuch thatx T P x =. Therefore condition (2.24) can be transformed into (A i x+b i Φ(x)) T P (A i x+b i Φ(x)) x T P x, i=,2,...,q (2.25) One possible choice foru=φ(x) is a linear state feedback controlleru=kx. By denotinga ci =A i +B i K withi=,2,...,q, condition (2.25) is equivalent to or x T A T cip A ci x x T P x, i=,2,...,q A T cip A ci P, i=,2,...,q By using the Schur complement, this condition can be rewritten as [ P A T ] ci, i=,2,...,q A ci P

61 46 2 Set Theoretic Methods in Control The condition provided here is not linear inp. By using the Schur complement again, one gets P A ci PA T ci, i=,2,...,q or [ ] P Aci P PA T, i=,2,...,q ci P By substitutinga ci =A i +B i K withi=,2,...,q, one obtains [ ] P A i P+B i KP PA T i +PK T B T, i=,2,...,q i P Though this condition is nonlinear (in fact bilinear sincepandk are the unknowns). Still it can be re-parameterized into a linear condition by settingy =KP. The above condition is equivalent to [ ] P A i P+B i Y PA T i +Y T B T, i=,2,...,q (2.26) i P Condition (2.26) is necessary and sufficient for ellipsoid E(P) with linear state feedback u = Kx to be robustly invariant. Concerning the constraint satisfaction (2.23), based on equation (2.7) it is obvious that The state constraints are satisfied in closed loop if and only ife(p) is a subset of X, hence [ ] Fi P PFi T, i=,2,...,n P (2.27) The input constraints are satisfied in closed loop if and only ife(p) is a subset of a polyhedral setx u where X u ={x R n : K i x u imax } fori=,2,...,m andk i is thei row of the matrixk R m n, hence [ ] u 2 imax K i P PKi T, P By noticing thatk i P=Y i withy i is thei row of the matrixy R m n, one gets [ ] u 2 imax Y i Yi T (2.28) P Define a row vectort i R m as follows T i =[... }{{} i th position... ] It is clear thaty i =T i Y. Therefore equation (2.28) can be transformed into

62 2.3 Set invariance theory 47 [ ] u 2 imax T i Y Y T Ti T (2.29) P With all the ellipsoids satisfying invariance condition (2.26) and constraint satisfaction (2.27), (2.28), we would like to choose among them the largest ellipsoid. In the literature, the size of ellipsoid E(P) is usually measured by the determinant or the trace of matrixp, see for example [5]. Here the trace of matrixpis chosen due to its linearity. The trace of a square matrix is defined to be the sum of the elements on the main diagonal of the matrix. Maximization of the trace of matrices corresponds to the search for the maximal sum of eigenvalues of matrices. With the trace of matrix as the objective function, the problem of choosing the largest robustly invariant ellipsoid can be formulated as subject to Invariance condition (2.26) Constraints satisfaction (2.27), (2.29) J = max{trace(p)} (2.3) P,Y It is clear that the solutionp,y of problem (2.3) may lead to the controllerk = YP such that the closed loop system with matricesa ci =A i +B i K,i=,2,...,q is at the stability margin. In other words, the ellipsoide(p) thus obtained might not be contractive (although being invariant). Indeed, the system trajectories might not converge to the origin. In order to ensurex(k) ask, it is required that for allxon the boundary ofe(p), i.e. for allxsuch thatx T P x=, to have (A i x+b i Φ(x)) T P (A i x+b i Φ(x))< i=,2,...,q With the same argument as above, one can conclude that the ellipsoide(p) with the linear controller u = Kx is robust contractive if the following set of LMI conditions is satisfied [ ] P A i P+B i Y PA T i +Y T B T i=,2,...,q (2.3) i P Polyhedral invariant sets The problem of invariance description using polyhedral sets is addressed in this section. With linear constraints on state and control variables, polyhedral invariant sets are preferred to the ellipsoidal invariant sets, since they offer a better approximation of the domain of attraction [35], [55], [2]. To begin, let us consider the case, when the control input is in the form of state feedbacku(k)=kx(k). Then the closed loop system of (2.8) is in the form x(k+ )=A c (k)x(k)+d(k)w(k) (2.32)

63 48 2 Set Theoretic Methods in Control where A c (k)=a(k)+b(k)k = Conv{A ci } witha ci =A i +B i K,i=,2,...,q. The state constraints of the closed loop system are in the form where x X c, X c ={x R n :F c x g c } (2.33) [ ] Fx F c =, g F u K c = The following definition plays an important role in computing robustly invariant sets for system (2.32) with constraints (2.33). Definition 2.3.(Pre-image set) For the system (2.32), the one step admissible preimageset of the setx c is a setx c X c such that for allx X c, it holds that A ci x+d i w X c [ gx g u ] for allw W and for alli=,2,...,q. The pre-image setpre(x c ) can be defined by [26], [22] { } Xc = x X c :F c A ci x g c max {F cd i w} w W (2.34) for alli=,2,...,q. Example2.. Consider the following uncertain system x(k+ )=A(k)x(k)+Bu(k)+Dw(k) where with α(k) and A(k)=α(k)A [ ] +( α(k))a [ ] 2 B=, D= A = [ ]., A 2 = [ ].6 The constraints on the state, on the input and on the disturbance (2.2) have the particular realization given by the matrices

64 2.3 Set invariance theory 49 3 F x =, g x = 3 3 [ ] [ 3] 2 F u =, g u = 2.2 F w =, g w = or equivalently 3 x i 3,i=,2 and 2 u 2 and.2 w i.2,i=,2. The robust stabilizing feedback controller is chosen as K =[ ] With this feedback controller the closed loop matrices are [ ] [ ]...6. A c =, A c2 = The state constraint setx c is where X c = { x R 2 :F c x g c } F c =.., g c = By solving the LP problem (2.), it follows that the half-spaces [ ]x 3 and[ ]x 3 are redundant. After eliminating these redundant half-spaces, the state constraint setx c is presented in minimal normalized half-space representation as } X c = {x R 2 : F c x ĝ c where. 3. F c = , ĝ c = Based on equation (2.34), the one step admissible pre-image setx c of the setx c is defined as

65 5 2 Set Theoretic Methods in Control F ĝ c c Xc = x R2 : F c A x ĝ c max { F c w} w W (2.35) F c A 2 ĝ c max { F c w} w W After removing redundant inequalities, the setxc is represented in minimal normalized half-space representation as Xc = x R 2 : x The setsx,x c andx c are depicted in Figure X c x 2 X c X x Fig.2.4 One step pre-image set for example 2.. It is clear that the set Ω X c is robustly invariant if it equals to its one step admissible pre-image set, that is for allx Ω and for allw W, it holds that A i x+d i w Ω for alli=,2,...,q. Based on this observation, the following algorithm can be used for computing a robustly invariant set for system (2.32) with respect to constraints (2.33)

66 2.3 Set invariance theory 5 Procedure 2.: Robustly invariant set computation[45],[7] Input: The matricesa c,a c2,...,a cq,d,d 2,...,D q andx c ={x R n :F c x g c } and the set W. Output: The robustly invariant set Ω.. Seti=,F =F c,g =g c andx ={x R n :F x g }. 2. SetX i =X. 3. Eliminate redundant inequalities of the following polytope g F F A c g max {F D w} w W P= x R n : F A c2 g x max {F D 2 w} w W.. F A cq g max {F D q w} w W 4. SetX =P and update consequently the matricesf andg. 5. IfX =X i then stop and set Ω =X. Else continue. 6. Seti=i+ and go to step 2. The natural question for procedure 2. is that if there exists a finite indexisuch thatx =X i, or equivalently if procedure 2. terminates after a finite number of iterations. In the absence of disturbances, the following theorem holds [24]. Theorem 2..[24] Assume that the system(2.32) is robustly asymptotically stable. Thenthereexistsafiniteindexi=i max,suchthatx =X i inprocedure 2.. Remark 2.2. In the presence of disturbances, a necessary and sufficient condition for the existence of a finite indexiis that a minimal robustly invariant set 5 [76], [27], [6] is a subset ofx c. We will come back to this problem later in Chapter 6, when we deal with a peak to peak controller. Apparently the sensitive part of procedure 2. is step 5. Checking the equality of two polytopesx andx i is computationally demanding, i.e. one has to checkx X i andx i X. Note that if at the stepiof procedure 2. the set Ω is invariant then the following set of inequalities 5 The set Ω is minimal robustly invariant if it is a robustly invariant set and is a subset of any robustly invariant set.

67 52 2 Set Theoretic Methods in Control g F A max {F D w} c w W F A c2 g. x max {F D 2 w} w W. F A cq g max {F D q w} w W is redundant with respect to the set Ω Ω ={x R n :F x g } Hence the procedure 2. can be modified for computing a robustly invariant set as follows Procedure 2.2: Robustly invariant set computation Input: The matricesa c,a c2,...,a cq,d,d 2,...,D q, the setx c ={x R n :F c x g c } and the setw. Output: The robustly invariant set Ω.. Seti=,F =F c,g =g c andx ={x R n :F x g }. 2. Consider the following polytope g F F A c g max {F D w} w W P= x R n : F A c2 g x max {F D 2 w} w W.. F A cq g max {F D q w} w W and iteratively check the redundancy of the subsets starting from the following set of inequalities {x R n :F A cj x g max w W {F D j w}} with j=,2,...,q. 3. If all of the inequalities are redundant with respect tox, then stop and set Ω =X. Else continue. 4. SetX =P 5. Seti=i+ and go to step 2. It is well known [45], [76], [27] that the set Ω resulting from procedure 2. or procedure 2.2, turns out to be the maximal robustly invariant set for for system (2.32) with respect to constraints (2.2), that is Ω = Ω max.
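A compact sketch in the spirit of Procedure 2.2 is given below (it is not the implementation used in the thesis). It assumes that the disturbance set W is given by its vertices, that the closed-loop vertex matrices A_ci are paired one-to-one with the corresponding D_i, and it reuses the LP-based redundancy test of Section 2.2: the iteration stops exactly when every candidate inequality F A_ci x ≤ g − max_w F D_i w is already implied by the current set.

```python
import numpy as np
from scipy.optimize import linprog

def is_redundant(F, g, f_new, g_new):
    """True if the half-space {f_new^T x <= g_new} is implied by {F x <= g}."""
    res = linprog(c=-f_new, A_ub=F, b_ub=g,
                  bounds=[(None, None)] * F.shape[1], method="highs")
    return res.success and (-res.fun <= g_new + 1e-9)

def max_robust_invariant_set(A_cl, D, W_vertices, F, g, max_iter=100):
    """Iterate the pre-image construction until no new constraint is needed.

    A_cl, D    : lists of closed-loop vertex matrices A_ci and matched D_i
    W_vertices : rows are the vertices of the disturbance set W
    F, g       : half-space description of the constraint set X_c
    """
    F, g = F.copy(), g.copy()
    for _ in range(max_iter):
        new_rows, new_rhs = [], []
        for Ac, Di in zip(A_cl, D):
            cand_F = F @ Ac
            # worst-case disturbance term, evaluated at the vertices of W
            w_max = np.max(F @ Di @ W_vertices.T, axis=1)
            cand_g = g - w_max
            for f_i, g_i in zip(cand_F, cand_g):
                if not is_redundant(F, g, f_i, g_i):
                    new_rows.append(f_i)
                    new_rhs.append(g_i)
        if not new_rows:      # every candidate is redundant: invariant set found
            return F, g
        F = np.vstack([F] + [np.atleast_2d(r) for r in new_rows])
        g = np.concatenate([g, np.array(new_rhs)])
    raise RuntimeError("no convergence within max_iter iterations")
```

The returned half-space description is in general not minimal; a final redundancy-removal pass, as in Section 2.2, can be applied to obtain a minimal representation.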

68 2.3 Set invariance theory 53 Example 2.2. Consider the uncertain system in example 2. with the same constraints on the state, on the input and on the disturbance. Applying procedure 2.2, the maximal robustly invariant set is obtained after 5 iterations as Ω max = x R 2 : x The setsx,x c and Ω max are depicted in Figure X c x 2 Ω max 2 X x Fig.2.5 Maximal robustly invariant set Ω max for example 2.2. Definition 2.3.(One step robust controlled set) Given the polytopic system (2.8), the one step robust controlled set of the setc = {x R n :F x g } is given by all states that can be steered in one step in toc when a suitable control action is applied. The one step robust controlled set denoted asc can be shown to be [26], [22] { } C = x R n : u U :F (A i x+b i u) g max {F D i w} (2.36) w W for allw W and for alli=,2,...,q Remark2.3. If the setc is robustly invariant, thenc C. HenceC is a robust controlled invariant set. Recall that the set Ω max is a maximal robustly invariant set with respect to a predefined control lawu(k)=kx(k). DefineC N as the set of all states, that can be

69 54 2 Set Theoretic Methods in Control steered to Ω max in no more thann steps along an admissible trajectory, i.e. a trajectory satisfying control, state and disturbance constraints. This set can be generated recursively by the following procedure: Procedure 2.3: Robust N-step controlled invariant set computation Input: The matricesa,a 2,...,A q,d,d 2,...,D q and the setsx,u,w and the maximal robustly invariant set Ω max Output: The N-step robust controlled invariant setc N. Seti = andc = Ω max and let the matricesf,g be the half space representation of the setc, i.e.c ={x R n :F x g } 2. Compute the expanded setp i R n+m g F i (A x+b u) i max {F id w} w W P i = (x,u) R n+m F i (A 2 x+b 2 u) g :. i max {F id 2 w} w W. F i (A q x+b q u) g i max {F id q w} w W 3. Compute the projection ofp i on R n Pi n ={x R n : u U such that(x,u) P i } 4. Set C i+ =Pi n X and let the matricesf i+,g i+ be the half space representation of the setc i+, i.e. C i+ ={x R n :F i+ x g i+ } 5. IfC i+ =C i, then stop and setc N =C i. Else continue. 6. Ifi=N, then stop else continue. 7. Seti=i+ and go to step 2. Since Ω max is a robustly invariant set, it follows that for eachi,c i C i and thereforec i is a robust controlled invariant set and a sequence of nested polytopes. Note that the complexity of the setc N does not have an analytic dependence on N and may increase without bound, thus placing a practical limitation on the choice ofn. Example 2.3. Consider the uncertain system in example 2.. The constraints on the state, on the input and on the disturbance are the same. Using procedure 2.3, one obtains the robust controlled invariant setsc N as shown in Figure 2.6 withn= andn= 7. The setc is a set of all states that can be steered

70 2.4 On the domains of attraction 55 in one step in Ω max when a suitable control action is applied. The setc 7 is a set of all states that can be steered in seven steps in Ω max when a suitable control action is applied. Note thatc 7 =C 8, thereforec 7 is the maximal robust controlled invariant set. 3 2 C x 2 C 7 C = Ω max 2 X x Fig.2.6 Robust controlled invariant set for example 2.3. The setc 7 is presented in minimal normalized half-space representation as C 7 = x R 2 : x On the domains of attraction This section presents an original contribution on estimating the domain of attraction for uncertain and time-varying linear discrete-times systems in closed-loop with a saturated linear feedback controller and state constraints. Ellipsoidal and polyhedral sets will be used for characterizing the domain of attraction. The use of ellipsoidal sets associated with its simple characterization as a solution of an LMI problem,

71 56 2 Set Theoretic Methods in Control while the use of polyhedral sets offers a better approximation of the domain of attraction Problem formulation Consider the following time-varying or uncertain linear discrete-time system where x(k+ )=A(k)x(k)+B(k)u(k) (2.37) A(k)= q q i= i= α i (k)a i, B(k)= q α i (k)b i i= α i (k)=, α i (k) (2.38) with given matricesa i R n n andb i R n m,i=,2,...,q. Both the state vector x(k) and the control vector u(k) are subject to the constraints { x(k) X,X={x R n :F i x g i }, i=,2,...,n u(k) U,U ={u R m (2.39) :u il u i u iu }, i=,2,...,m wheref i R n is thei th row of the matrixf x R n n,g i is thei th component of the vectorg x R n,u il andu iu are respectively thei th component of the vectors u l andu u, which are the lower and upper bounds of inputu. It is assumed that the matrixf x and the vectorsu l R m,u u R m are constant withu l < andu u > such that the origin is contained in the interior ofx andu. Assume that using established results in control theory, one can find a feedback controllerk R m n such that u(k) = Kx(k) (2.4) robustly quadratically stabilizes system (2.37). We would like to estimate the domain of attraction of the origin for the closed loop system x(k + ) = A(k)x(k) + B(k)sat(Kx(k)) (2.4) where the state vector and the control vector are subject to the constraints (2.39) Saturation nonlinearity modeling- A linear differential inclusion approach In this section, a linear differential inclusion approach used for modeling the saturation function is briefly reviewed. This modeling framework was first proposed by

72 2.4 On the domains of attraction 57 Hu et al. in [57], [59], [6]. Then its generalization was developed by Alamo et al. [6], [7]. The main idea of the differential inclusion approach is to use an auxiliary vector variablev R m, and to compose the output of the saturation function as a convex combination of the actual control signalsuandv. sat(u) u u u u l Fig.2.7 The saturation function The saturation function is defined as follows u il, if u i u il sat(u i )= u, if u il u i u iu (2.42) u iu, if u iu u i fori =,2,...,m andu il andu iu are respectively, the upper bound and the lower bound ofu i. To underline the details of the approach, let us first consider the case whenm=. In this caseuandvwill be scalars. It is clear that for any arbitrarilyu, there existv and β such that sat(u)=βu+( β)v (2.43) where β and or equivalently u l v u u (2.44) sat(u) Conv{u, v} (2.45) Figure 2.8 illustrates this fact. Analogously, form=2 andvsuch that { ul v u u u 2l v 2 u 2u (2.46) the saturation function can be expressed as

73 58 2 Set Theoretic Methods in Control Case : u u l sat(u) = u l u u v l u u Case 2: u l u u u sat(u) = u u l u v u u Case 3: u u u sat(u) = u u u l v u u u Fig.2.8 Linear differential inclusion approach. [ ] [ ] [ ] [ ] u u v v sat(u)=β + β u 2 + β 2 v 3 + β 2 u 4 2 v 2 (2.47) where or equivalently 4 i= sat(u) Conv β i =, β i (2.48) {[ u u 2 ], [ u v 2 ] [ ] v,, u 2 [ v v 2 ]} (2.49) Denote nowd m as the set ofm m diagonal matrices whose diagonal elements are either or. For example, ifm=2 then {[ ] [ ] [ ] [ ]} D 2 =,,, There are 2 m elements ind m. Denote each element ofd m ase i,i=,2,...,2 m and defineei =I E i. For example, if [ ] E = then [ ] E = [ ] = [ ]

74 2.4 On the domains of attraction 59 Clearly ife i D m, thene i is also ind m. The generalization of the results (2.45) (2.49) is reported by the following lemma [57], [59], [6] Lemma2..[59]Considertwovectorsu R m andv R m suchthatu il v i u iu foralli=,2,...,m,thenitholdsthat sat(u) Conv{E i u+e i v},i=,2,...,2 m (2.5) Consequently, there exist β i withi=,2,...,2 m and β i and 2 m β i = i= such that sat(u) = 2 m i= β i (E i u+e i v) The ellipsoidal set approach The aim of this subsection is twofold. First, we provide an invariance condition of ellipsoidal sets for discrete-time linear time-varying or uncertain systems with a saturated input and state constraints [56]. This invariance condition is an extended version of the previously published results in [59] for the robust case. Secondly, we propose a method for computing a nonlinear controller u(k) = sat(kx(k)), which makes a given ellipsoid invariant. For simplicity, consider the case of bounds equal tou max, namely u l =u u =u max and let us assume that the polyhedral constraint setx is symmetric withg i =, for alli=,2,...,n. Clearly, this assumption is nonrestrictive as long as F i x g i F i g i x for allg i >. For a matrixh R m n, definex c as an intersection between the state constraint setx and the polyhedral setf(h,u max )={x : Hx u max }, i.e. X c = x Rn : F x H x u max H u max We are now ready to state the main result of this subsection Theorem2.2.IfthereexistasymmetricmatrixP R n n andamatrixh R m n such that

75 6 2 Set Theoretic Methods in Control [ P {A i +B i (E j K+E j H)}P ] P{A i +B i (E j K+E, (2.5) j H)}T P for i=,2,...,q, j =,...,2 m ande(p) X c,thentheellipsoide(p)isarobustly invariant set for the system(2.4) with constraints(2.39). Proof. Assume that there exist a matrixpand a matrixh such that condition (2.5) is satisfied. Based on Lemma 2. and by choosingv=hx, one has sat(kx) = for allxsuch that Hx u max. Subsequently { where x(k+ ) = q i= α i (k) = q α i (k) i= = q α i (k) 2m i= j= 2 m i=j= = q From the fact that A c (k)= 2 m β j (E j Kx+E j Hx) j= 2 m A i +B i β j (E j K+E j }x(k) H) j= { 2 m 2 m β j A i +B i β j (E j K+E j }x(k) H) j= j= β j { A i +B i (E j K+E j H) }x(k) α i (k)β j { A i +B i (E j K+E j H) }x(k)=a c (k)x(k) q i= q 2 m i=j= 2 m j= α i (k)β j { A i +B i (E j K+E j H) } α i (k)β j = q i= α i (k) { 2 m β j }= j= it is clear thata c (k) belongs to the polytopep c, the vertices of which are given by taking all possible combinations ofa i +B i (E j K+E jh) wherei =,2,...,q and j=,2,...,2 m. The ellipsoide(p) = {x R n :x T P x } is invariant, if and only if for all x R n such thatx T P x it holds that x T A c (k) T P A c (k)x (2.52) With the same argument as in Section 2.3.3, it is clear that condition (2.52) can be transformed to [ ] P A c (k)p PA c (k) T (2.53) P

76 2.4 On the domains of attraction 6 The left-hand side of equation (2.52) can be treated as a function ofkand reaches the minimum on one of the vertices ofa c (k), so the set of LMI conditions to be satisfied for invariance is the following [ P {A i +B i (E j K+E j H)}P ] P{A i +B i (E j K+E, j H)}T P for alli=,2,...,q and for all j=,2,...,2 m. Note that conditions (2.5) involve the multiplication between two unknown parametersh andp. By denotingy =HP, the LMI condition (2.5) can be rewritten as [ P (A i P+B i E j KP+B i E j Y) (PA T i +PK T E j B T i +Y T E j BT i ) P ], (2.54) for i =,2,...,q, j =,2,...,2 m. Thus the unknown matricespandy enter linearly in the conditions (2.54). Again, as in Section 2.3.3, in general one would like to have the largest invariant ellipsoid for system (2.37) under the feedback u(k) = sat(kx(k)) with respect to constraints (2.39). This can be achieved by solving the following LMI problem subject to Invariance condition [ J = max{trace(p)} (2.55) P,Y P (A i P+B i E j KP+B i E j Y) (PA T i +PK T E j B T i +Y T E j BT i ) P for alli=,2,...,q and for all j=,2,...,2 m Constraint satisfaction On state [ ] Fi P PFi T, i=,2,...,n P On input [ ] u 2 imax Y i Yi T, i=,2,...,m P wherey i is thei th row of the matrixy. Example2.4. Consider the following linear uncertain discrete-time system with x(k+ )=A(k)x(k)+B(k)u(k) ], (2.56)

77 62 2 Set Theoretic Methods in Control and A = A(k)=α(k)A +( α(k))a 2 B(k)=α(k)B +( α(k))b 2 [ ].,A 2 = [ ].2,B = [ ],B 2 = [ ].5 At each sampling time α(k) [, ] is an uniformly distributed pseudo-random number. The constraints are x, x 2, u The robustly stabilizing feedback matrix gain is chosen as K =[ ] By solving the optimization problem (2.55), the matricespandy are obtained [ ] P=, Y =[ ] Hence H =YP =[ ] Based on the LMI problem (2.3), an invariant ellipsoide(p ) is obtained under the linear feedbacku(k)=kx(k) with [ ] P = Figure 2.9 presents the invariant sets with different control laws. The set E(P) is obtained with the saturated controlleru(k) =sat(kx(k)) while the sete(p ) is obtained with the linear controlleru(k)=kx(k). Figure 2. shows different state trajectories of the closed loop system with the controller u(k) = sat(kx(k)) for different realizations of α(k) and different initial conditions. In the first part of this subsection, Theorem 2.2 was exploited in the following manner: if the ellipsoide(p) is robustly invariant for the system x(k+ )=A(k)x(k)+B(k)sat(Kx(k)) then there exists a stabilizing linear controlleru(k)=hx(k), such that the ellipsoid E(P) is robustly invariant with respect to the closed-loop system x(k+ )=A(k)x(k)+B(k)Hx(k) The matrix gainh R m n is obtained by solving the optimization problem (2.55). Theorem 2.2 now will be exploited in a different manner. We would like to design a saturated feedback gain u(k) = sat(kx(k)) that makes a given invariant ellipsoid

78 2.4 On the domains of attraction Kx = 2 Hx = x 2 2 Hx = E(P) E(P ) Kx = x Fig. 2.9 Invariant sets with different control laws for example 2.4. The set E(P) is obtained with the saturated controlleru(k)=sat(kx(k)) while the sete(p ) is obtained with the linear controller u(k)=kx(k) x x Fig.2. State trajectories of the closed loop system for example 2.4. E(P) contractive with a maximal contraction factor. This invariant ellipsoid E(P) can be inherited for example together with a linear feedback gainu(k)=hx(k) from the optimization of some convex objective functionj(p) 6, for exampletrace(p). In the second stage, based on the gainh and the ellipsoide(p), a saturated controller u(k) =sat(kx(k)) which aims to maximize some contraction factor g is computed. It is worth noticing that the invariance condition (2.3) corresponds to the one in condition (2.54) withe j = ande j =I E j =I. Following the proof of Theorem 2.2, it is clear that for the following system 6 Practically, the design of the invariant ellipsoide(p) and the controlleru(k)=hx(k) can be done by solving the LMI problem (2.3).

79 64 2 Set Theoretic Methods in Control x(k+ )=A(k)x(k)+B(k)sat(Kx(k)) the ellipsoide(p) is contractive with the contraction factor g if {A i +B i (E j K+E j H) } TP { A i +B i (E j K+E j H) } P gp for alli=,2,...,q and for allj=,2,...,2 m such thate j. By using the Schur complement, this problem can be converted into an LMI optimization as J = max{g} (2.57) g,k subject to [ ( g)p (A i +B i (E j K+E j H))T (A i +B i (E j K+E j H)) P ] for alli =,2,...,p and j =,2,...,2 m withe j. Recall that here the only unknown parameters are the matrixk R m n and the scalarg, the matricespand H being given in the first stage. Remark 2.4. The proposed two-stage control design presented here benefits from global uniqueness properties of the solution. This is due to the one-way dependence of the two (prioritized) objectives: the trace maximization precedes the associated contraction factor. Example 2.5. Consider the uncertain system in example 2.4 with the same constraints on the state vector and on the input vector. In the first stage by solving the optimization problem (2.3), one obtains the matricespandy P= [ ], Y =[ ] HenceH =YP =[ ]. In the second stage, by solving the optimization problem (2.57), one obtains the feedback gaink K =[ ] Figure 2. shows the invariant ellipsoid E(P). This figure also shows the state trajectories of the closed loop system under the saturated feedback u(k) = sat(kx(k)) for different initial conditions and different realizations of α(k). For the initial conditionx()=[ 4 ] T Figure 2.2(a) presents the state trajectory of the closed loop system with the saturated controller u(k) = sat(kx(k)) and with the linear controlleru(k)=hx(k). It can be observed that the time to regulate the plant to the origin by using the linear controller is longer than the time to regulate the plant to the origin by using the saturated controller. The explanation for this is that when using the controlleru(k) =Hx(k), the control action is saturated only at some points of the boundary of the ellipsoid E(P), while using the controller

80 α u 2.4 On the domains of attraction Kx = x Kx = x Fig.2. Invariant ellipsoid and state trajectories of the closed loop system for example 2.5. u(k) =sat(kx(k)), the control action is saturated not only on the boundary of the sete(p), the saturation being active also inside the sete(p). This effect can be observed in Figure 2.2(b). The same figure presents the realization of α(k). x u(k) = sat(kx(k)) u(k) = Hx(k) Time (Sampling) u(k) = Hx(k) u(k) = sat(kx(k)) Time (Sampling) x 2 5 u(k) = Hx(k) u(k) = sat(kx(k)) Time (Sampling) (a) State trajectory Time (Sampling) (b) Input trajectory and α realization Fig.2.2 State and input trajectory of the closed loop system as a function of time for example 2.5. The solid blue lines are obtained by using the saturated feedback gain u(k) = sat(kx(k)), and the dashed red lines are obtained by using the linear feedback controlleru(k)=hx(k) in the figures forx,x 2 andu The polyhedral set approach In this section, the problem of estimating the domain of attraction is addressed by using polyhedral sets. For a given linear state feedback controller u(k) = Kx(k), it is clear that the largest polyhedral invariant set is the maximal robustly invariant set Ω max. The set Ω max can be readily found using procedure 2. or procedure 2.2. From this point on, it is assumed that the set Ω max is known.

81 66 2 Set Theoretic Methods in Control Our aim in this subsection is to find the the largest polyhedral invariant set characterizing an estimation of the domain of attraction for system (2.37) under the saturated controller u(k) = sat(kx(k)). To this aim, recall that from Lemma (2.), the saturation function can be expressed as sat(kx) = 2 m i= β i (E i Kx+E i v), 2 m β i =, β i (2.58) i= withu l v u u ande i is an element ofd 7 m andei =I E i. With equation (2.59) the closed loop system can be rewritten as { } x(k+ ) = q 2 m α i (k) A i x(k)+b i β j (E j Kx(k)+E j v) i= j= or = q α i (k) i= { } 2 m 2 m β j A i x(k)+b i β j (E j Kx(k)+E j v) j= j= = q { } α i (k) 2m β j A i x(k)+b i (E j Kx(k)+E j v) i= j= x(k+ )= 2 m q } j α i (k) {(A i +B i E j K)x(k)+B i E j=β j v i= (2.59) The variablesv R m can be considered as an external controlled input for system (2.59). Hence, the problem of finding the largest polyhedral invariant set Ω s for system (2.4) boils down to the problem of computing the largest controlled invariant set for system (2.59). System (2.59) can be considered as an uncertain system with respect to the parameters α i and β j. Hence the following procedure can be used to obtain the largest polyhedral invariant set Ω s for system (2.59) based on the results in Section The set ofm m diagonal matrices whose diagonal elements are either or

82 2.4 On the domains of attraction 67 Procedure 2.4: Invariant set computation Input: The matricesa,...,a q,b,...,b q, the gaink and the setsx,u and the invariant set Ω max Output: An invariant approximation of the invariant set Ω s for the closed loop system (2.4).. Seti = andc = Ω max and let the matricesf,g be the half space representation of the setc, i.e.c ={x R n :F x g } 2. Compute the expanded setp ij R n+m for all j=,2,...,2 m F i {(A +B E j K)x+B E j v} g i F i {(A 2 +B 2 E j K)x+B 2 E j v} P ij = (x,v) R n+m : 3. Compute the projection ofp ij on R n 4. Set. F i {(A q +B q E j K)x+B q E j v} P n ij ={x R n : v U such that(x,v) P ij }, j=,2,...,2 m C i+ =X 2 m j= P n ij g i. g i and let the matricesf i+,g i+ be the half space representation of the setc i+, i.e. C i+ ={x R n :F i+ x g i+ } 5. IfC i+ =C i, then stop and set Ω s =C i. Else continue. 6. Seti=i+ and go to step 2. It is clear thatc i C i, since the set Ω max is robustly invariant. HenceC i is a robustly invariant set. The set sequence {C,C,...,} converges to Ω s, which is the largest polyhedral invariant set. Remark2.5. Each one of the polytopesc i represents an invariant inner approximation of the domain of attraction for the system (2.37) under the saturated controller u(k) =sat(kx(k)). That means the procedure 2.4 can be stopped at any time before converging to the true largest invariant set Ω s and obtain a robustly invariant approximation of the set Ω s. It is worth noticing that the matrixh R m n resulting from optimization problem (2.55) can also be employed for computing the polyhedral invariant set Ω H s with respect to the saturated controlleru(k)=sat(kx(k)). Clearly the set Ω H s is a subset of Ω s, since the vectorvis now in the restricted formv(k)=hx(k), but this can be

83 68 2 Set Theoretic Methods in Control an important instrument design tool. In this case, from the equation (2.59) one gets x(k+ )= Define the setx H as follows where 2 m q } j α i (k) {(A i +B i E j K+B i E j=β j H)x(k) i= (2.6) X H ={x R n :F H x g H } (2.6) F H = F x H, g H = g x u u H u l With the setx H, the following procedure can be used for computing the polyhedral invariant set Ω H s. Procedure 2.5: Invariant set computation Input: The matricesa,a 2,...,A q and the setx H and the invariant set Ω max Output: The invariant set Ω H s. Seti = andc = Ω max and let the matricesf,g be the half space representation of the setc, i.e.c ={x R n :F x g } 2. Compute the setp ij R n+m 3. Set P ij = x R n : F i (A +B E j K+B E j H)x F i (A 2 +B 2 E j K+B 2 E j H)x. F i (A q +B q E j K++B q E j H)x C i+ =X H 2 m j= P n ij g i g i. g i and let the matricesf i+,g i+ be the half space representation of the setc i+, i.e. C i+ ={x R n :F i+ x g i+ } 4. IfC i+ =C i, then stop and set Ω s =C i. Else continue. 5. Seti=i+ and go to step 2. Since the matrix 2m j= q β j α i (k) i= {(A i +B i E j K+B i E j H) } has a sub-unitary joint spectral radius, procedure 2.5 terminates in finite time [27]. In other words, there exists a finite indexi=i max such thatc imax =C imax +.
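Both procedures above (like the LMI conditions of the previous subsections) enumerate the 2^m diagonal matrices E_j of D_m together with their complements E_j^- = I − E_j. One possible way to generate these pairs (an illustrative helper, not code from the thesis):

```python
import numpy as np
from itertools import product

def diagonal_01_matrices(m):
    """All 2^m diagonal matrices E with 0/1 diagonal entries, paired with I - E."""
    pairs = []
    for bits in product([0.0, 1.0], repeat=m):
        E = np.diag(bits)
        pairs.append((E, np.eye(m) - E))
    return pairs

for E, E_minus in diagonal_01_matrices(2):
    print(E.diagonal(), E_minus.diagonal())
```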

84 2.4 On the domains of attraction 69 Example 2.6. Consider again the example 2.4. The constraint on the state vector and on the input vector are the same. The controller isk =[ ]. By using procedure 2.4 one obtains the robust polyhedral invariant set Ω s as depicted in Figure 2.3. Procedure 2.4 terminated with i = 2. Figure 2.3 also shows the robust polyhedral invariant set Ωs H obtained with the auxiliary matrix H whereh= [ ] and the robust polyhedral invariant set Ω max obtained with the controlleru(k)=kx Ω s Ω s H x Ω max x Fig. 2.3 Robustly invariant sets with different control laws and different methods for example 2.6. The polyhedral set Ω s is obtained with respect to the controlleru(k)=sat(kx(k)). The polyhedral set Ω H s is obtained with respect to the controlleru(k)=sat(kx(k)) using an auxiliary matrixh. The polyhedral set Ω max is obtained with the controlleru(k)=kx. The set Ωs H and Ω s are presented in minimal normalized half-space representation as Ωs H = x R 2 : x

85 7 2 Set Theoretic Methods in Control Ω s = x R 2 : x Figure 2.4 presents state trajectories of the closed loop system with the controller u(k) = sat(kx(k)) for different initial conditions and different realizations of α(k) x x Fig. 2.4 State trajectories of the closed loop system with the controller u(k) = sat(kx(k)) for example 2.6.
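To close the chapter with a numerical illustration, the ellipsoidal construction of Section 2.3.3, i.e. the trace objective (2.30) subject to the invariance LMI (2.26) and the constraint conditions (2.27) and (2.29), can be written almost verbatim in a modeling language such as CVXPY. The sketch below is only indicative: the vertex matrices, the symmetric state bounds and u_max are placeholders (they do not reproduce the numbers of the examples above), and the state condition is imposed in the equivalent scalar form F_i P F_i^T ≤ 1.

```python
import numpy as np
import cvxpy as cp

# Illustrative vertex realizations (placeholders, not the thesis examples)
A_list = [np.array([[1.0, 0.1], [0.0, 1.0]]),
          np.array([[1.0, 0.2], [0.0, 1.0]])]
B_list = [np.array([[0.0], [1.0]]),
          np.array([[0.0], [0.7]])]
F_x = np.array([[1.0, 0.0], [0.0, 1.0]])   # |x_i| <= 1  (symmetric constraints)
u_max = np.array([2.0])                    # |u| <= 2
n, m = 2, 1

P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))                    # Y = K P

def psd(M):
    # symmetrize the block matrix before imposing the semidefinite constraint
    return (M + M.T) / 2 >> 0

constraints = [P >> 1e-6 * np.eye(n)]
# invariance LMI at every vertex of the uncertainty polytope
for A, B in zip(A_list, B_list):
    APBY = A @ P + B @ Y
    constraints.append(psd(cp.bmat([[P, APBY], [APBY.T, P]])))
# state constraints: E(P) inside |F_i x| <= 1  <=>  F_i P F_i^T <= 1
for f in F_x:
    constraints.append(f @ P @ f <= 1)
# input constraints: |K_i x| <= u_max_i on E(P), written with Y = K P
for i in range(m):
    blk = cp.bmat([[np.array([[u_max[i] ** 2]]), Y[i:i + 1, :]],
                   [Y[i:i + 1, :].T, P]])
    constraints.append(psd(blk))

prob = cp.Problem(cp.Maximize(cp.trace(P)), constraints)
prob.solve()                      # requires an SDP-capable solver, e.g. SCS
K = Y.value @ np.linalg.inv(P.value)
print("trace(P) =", prob.value, "\nK =", K)
```

If the problem is feasible, the linear gain is recovered as K = Y P^{-1}, and E(P) is a robustly invariant ellipsoid for the closed loop under u = Kx within the constraints.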

Chapter 3
Optimal and Constrained Control - An Overview

In this chapter some of the approaches to constrained and optimal control are briefly reviewed. This review is not intended to be exhaustive, but to provide an insight into the existing theoretical background on which the present manuscript builds. The chapter includes the following sections:
1. Dynamic programming.
2. Pontryagin's maximum principle.
3. Model predictive control: implicit and explicit solutions.
4. Vertex control.

3.1 Dynamic programming

The purpose of this section is to present a brief introduction to dynamic programming, which provides a sufficient condition for optimality. Dynamic programming was developed by R.E. Bellman in the early fifties [3], [4], [5], [6]. It provides insight into the properties of optimal control problems for various classes of systems, e.g. linear, time-varying or nonlinear. In general the optimal solution is found in open loop form, without feedback. Dynamic programming is based on the following principle of optimality [7]:

An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.

To begin, let us consider the following optimal control problem

min_{x,u} { Σ_{k=0}^{N-1} L(x(k),u(k)) + E(x(N)) }   (3.1)

subject to

x(k+1) = f(x(k),u(k)),  k = 0,1,...,N-1
u(k) ∈ U,  k = 0,1,...,N-1
x(k) ∈ X,  k = 0,1,...,N
x(0) = x_0

where x(k) ∈ R^n and u(k) ∈ R^m are respectively the state and control variables. N > 0 is called the time horizon. L(x(k),u(k)) is the Lagrange objective function, which represents a cost along the trajectory. E(x(N)) is the Mayer objective function, which represents the terminal cost. U and X are the sets of constraints on the input and state variables, respectively. x(0) is the initial condition.

Define the value function V_i(x(i)) as follows

V_i(x(i)) = min_{x,u} { E(x(N)) + Σ_{k=i}^{N-1} L(x(k),u(k)) }   (3.2)

subject to
x(k+1) = f(x(k),u(k)),  k = i,i+1,...,N-1
u(k) ∈ U,  k = i,i+1,...,N-1
x(k) ∈ X,  k = i,i+1,...,N

for i = N, N-1, N-2, ..., 0. Clearly V_i(x(i)) is the optimal cost on the remaining horizon [i,N], starting from the state x(i). Based on the principle of optimality, one has

V_i(x(i)) = min_{u(i)} { L(x(i),u(i)) + V_{i+1}(x(i+1)) }

By substituting x(i+1) = f(x(i),u(i)), one gets

V_i(x(i)) = min_{u(i)} { L(x(i),u(i)) + V_{i+1}(f(x(i),u(i))) }   (3.3)

subject to
u(i) ∈ U,  f(x(i),u(i)) ∈ X

The problem (3.3) is much simpler than the one in (3.1) because it involves only one decision variable u(i). To actually solve this problem, we work backwards in time from i = N, starting with

V_N(x(N)) = E(x(N))

Based on the value function V_{i+1}(x(i+1)) with i = N-1, N-2, ..., 0, the optimal control values u*(i) can be obtained as

u*(i) = arg min_{u(i)} { L(x(i),u(i)) + V_{i+1}(f(x(i),u(i))) }

subject to
u(i) ∈ U,  f(x(i),u(i)) ∈ X
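For small problems, this backward recursion can be evaluated numerically on a grid. The sketch below is only meant to show the mechanics of working backwards from V_N = E; the scalar system, costs, horizon and grids are arbitrary illustrative choices, not data from the text.

```python
import numpy as np

# Backward dynamic programming on a crude grid (illustration of (3.2)-(3.3)).
f = lambda x, u: 0.9 * x + 0.5 * u          # x(k+1) = f(x(k), u(k))
L = lambda x, u: x**2 + 0.1 * u**2          # stage cost
E = lambda x: 5.0 * x**2                    # terminal cost
N = 10
X_grid = np.linspace(-2.0, 2.0, 201)        # state constraint set X (gridded)
U_grid = np.linspace(-1.0, 1.0, 21)         # input constraint set U (gridded)

V = E(X_grid)                                # V_N(x) = E(x)
policy = []
for k in reversed(range(N)):
    V_next = np.full_like(X_grid, np.inf)
    u_opt = np.zeros_like(X_grid)
    for i, x in enumerate(X_grid):
        for u in U_grid:
            xp = f(x, u)
            if xp < X_grid[0] or xp > X_grid[-1]:
                continue                     # successor violates the state constraint
            cost = L(x, u) + np.interp(xp, X_grid, V)
            if cost < V_next[i]:
                V_next[i], u_opt[i] = cost, u
    V = V_next
    policy.insert(0, u_opt)                  # lookup table for u*(k, .) over X_grid
```

The stored tables policy[k] approximate the optimal feedback u*(k, ·) on the grid.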

3.2 Pontryagin's maximum principle

The second milestone in optimal control theory is Pontryagin's maximum principle [25], [34], offering a basic mathematical technique for calculating the optimal control values in many important problems of mathematics, engineering, economics, etc. This approach can be seen as a counterpart of the classical calculus of variations, allowing one to solve, in a very general way, control problems in which the control input is subject to constraints. Here, for illustration, we consider the following simple optimal control problem

min_{x,u} { Σ_{k=0}^{N-1} L(x(k),u(k)) + E(x(N)) }   (3.4)

subject to
x(k+1) = f(x(k),u(k)),  k = 0,1,...,N-1
u(k) ∈ U,  k = 0,1,...,N-1
x(0) = x_0

For simplicity, the state variables are considered unconstrained. For solving the optimal control problem (3.4) with Pontryagin's maximum principle, the following Hamiltonian H_k(·) is defined

H_k(x(k),u(k),λ(k+1)) = L(x(k),u(k)) + λ^T(k+1) f(x(k),u(k))   (3.5)

where λ(k) ∈ R^n with k = 1,2,...,N are called the co-state or adjoint variables. For problem (3.4), these variables must satisfy the so-called co-state equation

λ*(k) = ∂H_k/∂x (x*(k),u*(k),λ*(k+1)),  k = 1,2,...,N-1

and the terminal condition

λ*(N) = ∂E/∂x (x*(N))

For given state and co-state variables, the optimal control value is obtained by choosing the control u*(k) that minimizes the Hamiltonian at each time instant, i.e.

H_k(x*(k),u*(k),λ*(k+1)) ≤ H_k(x*(k),u(k),λ*(k+1)),  ∀u(k) ∈ U

Note that a convexity assumption on the Hamiltonian is needed, i.e. the function H_k(x(k),u(k),λ(k+1)) is supposed to be convex with respect to u(k).

3.3 Model predictive control

Model predictive control (MPC), or receding horizon control, is one of the most advanced control approaches and has, over the last decades, become a leading industrial control technology for constrained control systems [32], [], [28], [29], [47], [96], [53]. MPC is an optimization based strategy, where a model of the plant is used to predict the future evolution of the system, see [], [96]. This prediction uses the current state of the plant as the initial state: at each time instant k, the controller computes a finite optimal control sequence, the first control action of this sequence is applied to the plant at time instant k, and at time instant k+1 the optimization procedure is repeated with a new plant measurement. This open loop optimal feedback mechanism of MPC compensates for the prediction error due to structural mismatch between the model and the real system, as well as for disturbances and measurement noise.

In contrast to the maximum principle or dynamic programming solutions, which are open loop optimal, the receding horizon principle behind MPC brings the advantage of a feedback structure. But again, the main advantage which makes MPC industrially desirable is that it can take constraints into account in the control problem. This feature is very important for several reasons:
- Often the best performance, which may correspond to the most efficient operation, is obtained when the system is made to operate near the constraints.
- The possibility to explicitly express constraints in the problem formulation offers a natural way to state complex control objectives.
- Stability and other features can be proved, at least in some cases, in contrast to popular ad-hoc methods to handle constraints, like anti-windup control [5] and override control [43].

3.3.1 Implicit model predictive control

Consider the problem of regulating to the origin the following discrete-time linear time-invariant system

x(k+1) = Ax(k) + Bu(k)   (3.6)

where x(k) ∈ R^n and u(k) ∈ R^m are respectively the state and the input variables, and A ∈ R^{n×n} and B ∈ R^{n×m} are the system matrices. Both the state vector x(k) and the control vector u(k) are subject to polytopic constraints

1 It was named OLOF (Open Loop Optimal Feedback) control by the author of [38].

x(k) ∈ X,  X = {x : F_x x ≤ g_x}
u(k) ∈ U,  U = {u : F_u u ≤ g_u}   ∀k ≥ 0   (3.7)

where the matrices F_x, F_u and the vectors g_x, g_u are assumed to be constant with g_x > 0, g_u > 0 such that the origin is contained in the interior of X and U. Here the inequalities are taken element-wise. It is assumed that the pair (A,B) is stabilizable, i.e. all uncontrollable states have stable dynamics.

Provided that the state x(k) is available from the measurements, the basic finite horizon MPC optimization problem is defined as

V(x(k)) = min_{u=[u_0,u_1,...,u_{N-1}]} { Σ_{t=1}^{N} x_t^T Q x_t + Σ_{t=0}^{N-1} u_t^T R u_t }   (3.8)

subject to
x_{t+1} = A x_t + B u_t,  t = 0,1,...,N-1
x_t ∈ X,  t = 1,2,...,N
u_t ∈ U,  t = 0,1,...,N-1
x_0 = x(k)

where
- x_{t+1} and u_t are, respectively, the predicted states and the predicted inputs, t = 0,1,...,N-1;
- Q ∈ R^{n×n} is a real symmetric positive semi-definite matrix;
- R ∈ R^{m×m} is a real symmetric positive definite matrix;
- N is a fixed integer greater than 0, called the time horizon or the prediction horizon.

The conditions on Q and R guarantee that the cost function V is convex. In terms of eigenvalues, the eigenvalues of Q should be non-negative, while those of R should be positive in order to ensure a unique optimal solution. From the control objective point of view, it is clear that the first term x_t^T Q x_t penalizes the deviation of the state x from the origin, while the second term u_t^T R u_t measures the input control energy. In other words, selecting Q large means that, to keep V small, the state x_t must be as close as possible to the origin in a weighted Euclidean norm. On the other hand, selecting R large means that the control input u_t must be small to keep the cost function V small.

An alternative is a performance measure based on the l_∞ norm

min_{u=[u_0,u_1,...,u_{N-1}]} { Σ_{t=1}^{N} ||Q x_t||_∞ + Σ_{t=0}^{N-1} ||R u_t||_∞ }   (3.9)

or the l_1 norm

min_{u=[u_0,u_1,...,u_{N-1}]} { Σ_{t=1}^{N} ||Q x_t||_1 + Σ_{t=0}^{N-1} ||R u_t||_1 }   (3.10)

Based on the state space model (3.6), the future state variables are expressed sequentially using the set of future control variable values

x_1 = A x_0 + B u_0
x_2 = A x_1 + B u_1 = A^2 x_0 + A B u_0 + B u_1
...
x_N = A^N x_0 + A^{N-1} B u_0 + A^{N-2} B u_1 + ... + B u_{N-1}   (3.11)

The set of equations (3.11) can be rewritten in compact matrix form as

x = A_a x_0 + B_a u = A_a x(k) + B_a u   (3.12)

with
x = [x_1^T x_2^T ... x_N^T]^T,  u = [u_0^T u_1^T ... u_{N-1}^T]^T

and

A_a = [ A ; A^2 ; ... ; A^N ],
B_a = [ B          0          ...  0
        A B        B          ...  0
        ...        ...        ...  ...
        A^{N-1} B  A^{N-2} B  ...  B ]

The MPC optimization problem (3.8) can be expressed as

V(x(k)) = min_u { x^T Q_a x + u^T R_a u }   (3.13)

where
Q_a = diag(Q, Q, ..., Q),  R_a = diag(R, R, ..., R)

and by substituting (3.12) in (3.13), one gets

V(x(k)) = min_u { u^T H u + 2 x^T(k) F u + x^T(k) Y x(k) }   (3.14)

where

H = B_a^T Q_a B_a + R_a,  F = A_a^T Q_a B_a  and  Y = A_a^T Q_a A_a   (3.15)
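The condensed matrices of (3.12) and (3.15) can be assembled directly from A, B, Q, R and the horizon N. The helper below is a minimal numerical sketch; the names follow the text and no particular toolbox is assumed.

```python
import numpy as np

def condensed_mpc_matrices(A, B, Q, R, N):
    """Build A_a, B_a of (3.12) and H, F, Y of (3.15)."""
    n, m = B.shape
    A_a = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
    B_a = np.zeros((N * n, N * m))
    for i in range(N):                     # block row i predicts x_{i+1}
        for j in range(i + 1):             # contribution of u_j, j <= i
            B_a[i * n:(i + 1) * n, j * m:(j + 1) * m] = \
                np.linalg.matrix_power(A, i - j) @ B
    Q_a = np.kron(np.eye(N), Q)
    R_a = np.kron(np.eye(N), R)
    H = B_a.T @ Q_a @ B_a + R_a
    F = A_a.T @ Q_a @ B_a
    Y = A_a.T @ Q_a @ A_a
    return A_a, B_a, H, F, Y
```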

Consider now the constraints on the state and on the input along the horizon. From (3.7) it can be shown that

F_x^a x ≤ g_x^a
F_u^a u ≤ g_u^a   (3.16)

where

F_x^a = diag(F_x, F_x, ..., F_x),  g_x^a = [g_x^T g_x^T ... g_x^T]^T
F_u^a = diag(F_u, F_u, ..., F_u),  g_u^a = [g_u^T g_u^T ... g_u^T]^T

Using (3.12), the state constraints along the horizon can be expressed as

F_x^a {A_a x(k) + B_a u} ≤ g_x^a

or

F_x^a B_a u ≤ -F_x^a A_a x(k) + g_x^a   (3.17)

Combining (3.16) and (3.17), one obtains

G u ≤ E x(k) + S   (3.18)

where

G = [ F_u^a ; F_x^a B_a ],  E = [ 0 ; -F_x^a A_a ],  S = [ g_u^a ; g_x^a ]

Based on (3.13) and (3.18), the MPC quadratic program formulation can be defined as

V*(x(k)) = min_u { u^T H u + 2 x^T(k) F u }   (3.19)

subject to
G u ≤ E x(k) + S

where the term x^T(k) Y x(k) is removed since it does not influence the optimal argument. The value of the cost function at the optimum is simply obtained from (3.19) as

V(x(k)) = V*(x(k)) + x^T(k) Y x(k)

A simple on-line algorithm for MPC is the following.

Algorithm 3.1: Model predictive control - Implicit approach
1. Measure the current state of the system x(k).
2. Compute the control signal sequence u by solving (3.19).
3. Apply the first element of the control sequence u as input to the system (3.6).
4. Wait for the next time instant k := k+1.
5. Go to step 1 and repeat.
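A compact implementation of one iteration of Algorithm 3.1 is sketched below. The QP (3.19) is solved here with cvxpy purely for brevity; the choice of solver is an assumption, and H, F, G, E, S are assumed to have been built as in (3.15)-(3.18), with m the number of inputs.

```python
import numpy as np
import cvxpy as cp

def mpc_step(x, H, F, G, E, S, m):
    """One iteration of Algorithm 3.1: solve the QP (3.19) for the current
    state x and return the first control move u_0."""
    H = 0.5 * (H + H.T)                 # symmetrise for the PSD check
    u = cp.Variable(H.shape[0])
    cost = cp.quad_form(u, H) + 2 * (F.T @ x) @ u
    prob = cp.Problem(cp.Minimize(cost), [G @ u <= E @ x + S])
    prob.solve()
    return u.value[:m]                  # apply only the first element
```

In closed loop, the returned first move is applied to the plant, the state is re-measured and the function is called again at the next sampling instant, exactly as in steps 1-5 above.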

93 78 3 Optimal and Constrained Control - An Overview Example3.. Consider the following discrete time linear time invariant system [ ] [ ] x(k+ )= x(k)+ u(k) (3.2).7 and the MPC problem with weighting matricesq=i andr= and the prediction horizonn = 3. The constraints are 2 x 2, 5 x 2 5, u Based on equation (3.5) and (3.8), the MPC problem can be described as a QP problem { min u T Hu+2x T (k)fu } u={u,u,u 2 } with ] H = , F =[ and subject to the following constraints Gu S+Ex(k) where G=..7, E =, S=

For the initial condition x(0) = [2 ]^T and by using the implicit MPC method, Figure 3.1 shows the state and input trajectories of the closed loop system as functions of time.

Fig. 3.1  State and input trajectory of the closed loop system as a function of time for example 3.1: (a) state trajectory, (b) input trajectory.

3.3.2 Recursive feasibility and stability

Recursive feasibility of the optimization problem and stability of the resulting closed-loop system are two important aspects when designing an MPC controller. Recursive feasibility of the optimization problem (3.19) means that if the problem (3.19) is feasible at time instant k, it will also be feasible at time instant k+1; in other words, there exists an admissible control value that keeps the system within the state constraints. A feasibility problem can arise due to model errors, disturbances or the choice of the cost function.

Stability analysis necessitates the use of Lyapunov theory [73], since the presence of the constraints makes the closed-loop system nonlinear. In addition, it is well known that an unstable input-constrained system cannot be globally stabilized [44], [97], [38]. Another problem is that the control law is generated by the solution of the optimization problem (3.19) and generally there does not exist any simple closed-form expression for the solution, although it can be shown that the solution is a piecewise affine state feedback law [2].

Recursive feasibility and stability can be assured by adding a terminal cost to the objective function (3.8) and by including the final state of the planning horizon in a terminal positively invariant set. Let the matrix P ∈ R^{n×n} be the unique solution of the following discrete-time algebraic Riccati equation

P = A^T P A - A^T P B (B^T P B + R)^{-1} B^T P A + Q   (3.21)

2 Simple here is understood in terms of linear feedback gains, as is the case for the optimal control of unconstrained linear quadratic regulators [68].

and the matrix gain K ∈ R^{m×n} is defined as

K = -(B^T P B + R)^{-1} B^T P A   (3.22)

It is well known [8], [86], [89], [9] that the matrix gain K is the solution of the optimization problem (3.8) when the time horizon N = ∞ and there are no constraints on the state vector and on the input vector. In this case the cost function is

V(x(0)) = Σ_{k=0}^{∞} { x_k^T Q x_k + u_k^T R u_k } = Σ_{k=0}^{∞} x_k^T (Q + K^T R K) x_k = x_0^T P x_0

Once the stabilizing feedback gain u(k) = Kx(k) is defined, the terminal set Ω ⊆ X can be computed as a maximal invariant set associated with the control law u(k) = Kx(k) for system (3.6) and with respect to the constraints (3.7). Generally, the terminal invariant set Ω is chosen to be in ellipsoidal or polyhedral form.

Consider now the following MPC optimization problem

min_{u=[u_0,u_1,...,u_{N-1}]} { x_N^T P x_N + Σ_{t=0}^{N-1} { x_t^T Q x_t + u_t^T R u_t } }   (3.23)

subject to
x_{t+1} = A x_t + B u_t,  t = 0,1,...,N-1
x_t ∈ X,  t = 1,2,...,N
u_t ∈ U,  t = 0,1,...,N-1
x_N ∈ Ω
x_0 = x(k)

Then the following theorem holds [].

Theorem 3.1. [] Assuming feasibility at the initial state, the MPC controller (3.23) guarantees recursive feasibility and asymptotic stability.

Proof. See [].

The MPC problem considered here uses both a terminal cost function and a terminal set constraint and is called dual-mode MPC. This MPC scheme is the most attractive version in the MPC literature. In general, it offers better performance compared with other MPC versions and allows a wider range of control problems to be handled. The downside is the dependence of the feasible domain on the prediction horizon: generally, for a large domain one needs to employ a large prediction horizon.

3 See Section and Section .
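The terminal ingredients P and K of (3.21)-(3.22) are readily computed with a discrete-time algebraic Riccati solver. The snippet below uses scipy, an implementation choice rather than anything prescribed by the text, and adopts the sign convention u = Kx, so that A + BK is the optimal closed loop.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def terminal_ingredients(A, B, Q, R):
    """P solving the DARE (3.21) and the LQ gain K of (3.22), with u = K x."""
    P = solve_discrete_are(A, B, Q, R)
    K = -np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)
    return P, K
```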

96 3.3 Model predictive control Explicit model predictive control- Parameterized vertices Note that the implicit model predictive control requires running on-line optimization algorithms to solve a quadratic programming (QP) problem associated with the objective function (3.8) or to solve a linear programming (LP) problem with the objective function (3.9), (3.). Although computational speed and optimization algorithms are continuously improving, solving a QP or LP problem can be computationally costly, specially when the prediction horizon is large, and this has traditionally limited MPC to applications with relatively low complexity/sampling interval ratio. Indeed the state vector can be interpreted as a vector of parameters in the optimization problem (3.23). The exact optimal solution can be expressed as a piecewise affinefunction of the state over a polyhedral partition of the state space and the MPC control computation can be moved off-line [2], [28], [4], [7]. The control action is then computed on-line by lookup tables and search trees. Several solutions have been proposed in the literature for constructing a polyhedral partition of the state space [2], [4], [7]. In [2], [8] some iterative techniques use a QP or LP to find feasible points and then split the parameters space by inverting one by one the constraints hyper-planes. As an alternative, in [4] the authors construct the unconstrained polyhedral region and then enumerate the others based on the combinations of active constraints. When the cost function is quadratic, the uniqueness of the optimal solution is guaranteed and the methods proposed in [2], [8], [4] work very well, at least for non-degenerate sets of constraints [53]. It is worth noticing that by usingl orl norms as the performance measure, the cost function is only positive semi-definite and the uniqueness of the optimal solution is not guaranteed and as a consequence, neither the continuity. A control law will have a practical advantage if the control action presents no jumps on the boundaries of the polyhedral partitions. When the optimal solution is not unique, the methods in [2], [8], [4] allow discontinuities as long as during the exploration of the parameters space, the optimal basis is chosen arbitrarily. Note that based on the cost function (3.9) or (3.) the MPC problem can be rewritten as follows V(x(k))=minc T z (3.24) z subject to with z=[ξ T ξ T 2 G l z E l x(k)+s l...ξ T N ξ u T u T...u T N ] T where ξ i,i=,2,...,n ξ are slack variables andn ξ depends on the norm used and on the time horizonn. Details of how to compute vectorsc,s l and matricesg l,e l are well known [8]. The feasible domain for the LP problem (3.24) is defined by a finite number of inequalities with a right hand side linearly dependent on the vector of parameters x(k), describing in fact aparameterizedpolytope [93]

97 82 3 Optimal and Constrained Control - An Overview P(x(k))={z :G l z E l x(k)+s l } (3.25) For simplicity, it is assumed that for allx(k) X, the polyhedral setp(x(k)) is bounded. With this assumption, P(x(k)) can be expressed in a dual (generator based) form as P(x)=Conv{v i (x(k))},i=,2,...,n v (3.26) wherev i are the parameterized vertices. Each parameterized vertex in (3.26) is characterized by a set of saturated inequalities. Once this set of active constraints is identified, one can write the linear dependence of the parameterized vertex in the vector of parameters v i (x(k))=g li E li x(k)+g li S li (3.27) whereg li,e li,w li correspond to the subset of saturated inequalities for thei th parameterized vertex. As a first conclusion, the construction of the dual description (3.25), (3.26) requires the determination of the set of parameterized vertices. Efficient algorithms exist in this direction [93], the main idea being the analogy with a non-parameterized polytope in a higher dimension. When the vector of parameter x(k) varies inside the parameters space, the vertices of the feasible domain (3.25) may split or merge. This means that each parameterized vertexv i is defined only over a specific region in the parameters space. These regionsvd i are calledvaliditydomains and can be constructed using simple projection mechanisms [93]. Once the entire family of parameterized vertices and their validity domains are available, the optimal solution can be constructed. It is clear that the space of feasible parameters can be partitioned in non-degenerate polyhedral regionsr k R n such that the minimum min { c T } v i (x(k)) v i (x(k)) vertex ofp(x(k)) valid overr k (3.28) is attained by a constant subset of vertices ofp(x(k)), denotedv i (x(k)). The complete solution overr k is z k (x(k))=conv{v k (x(k)),v 2k (x(k)),...,v sk (x(k))} (3.29) The following theorem holds regarding the structure of the polyhedral partitions of the parameters space [8] Theorem3.2.[8]Letthemulti-parametricprogramin(3.24)andv i (x(k)),i =,2,...,n v betheparameterizedverticesofthefeasibledomain(3.25),(3.26)with theircorrespondingvaliditydomainsvd i.ifaparameterizedvertextakespartin thedescriptionoftheoptimalsolutionforaregionr k,thenitwillbepartofthe familyofoptimalsolutionoveritsentirevaliditydomainvd i. Proof. See [8]. Theorem 3.2 states that if a parameterized vertex is selected as an optimal candidate, then it covers all its validity domain.

98 3.3 Model predictive control 83 It is worth noticing that the complete optimal solution (3.29) takes into account the eventual non-uniqueness of the optimum, and it defines the entire family of optimal solutions using the parameterized vertices and their validity domains. Once the entire family of optimal solutions is available, the continuity of the control law can be guaranteed as follows. Firstly if the optimal solution is unique, then there is no decision to be made, the explicit solution being the collection of the parameterized vertices and their validity domains. The continuity is intrinsic. Conversely, the family of the optimal solutions can be enriched in the presence of several optimal parameterized vertices z k (x(k))=α k v k + α 2kv 2k +...+α skv sk α ik,i=,2,...,s (3.3) α k + α 2k +...+α sk = passing to an infinite number of candidates (any function included in the convex combination of vertices being optimal). As mentioned previously, the vertices of the feasible domain split and merge. The changes occur with a preservation of the continuity. Hence the continuity of the control law is guaranteed by the continuity of the parameterized vertices. The interested reader is referred to [8] for further discussions on the related concepts and constructive procedures. Example 3.2. To illustrate the parameterized vertices concept, consider the following feasible domain for the MPC optimization problem P(x(k))=P P 2 (x(k)) (3.3) wherep is a fixed polytope P = z R 2 : z andp 2 (x(k)) is a parameterized polyhedral set { [ P 2 (x(k))= z R 2 : ] z [ ] x(k)+ [ ]}.5.5 (3.32) (3.33) Note thatp 2 (x(k)) is an unbounded set. From equation (3.33), it is clear that Ifx(k).5, then x(k)+.5. It follows thatp P 2 (x(k)). The polytope P(x(k))=P has the half-space representation as (3.32) and the vertex representation P(x(k))=Conv{v,v 2,v 3,v 4 } where v = [ ],v 2 = [ ],v 3 = [ ],v 4 = [ ]

99 84 3 Optimal and Constrained Control - An Overview If.5 x(k).5, then x(k)+.5. It follows thatp P 2 (x(k)) /. Note that for the polytopep(x(k)) the half-spacesz = andz 2 = are redundant. The polytopep(x(k)) has the half-space representation P(x(k))= z R 2 : and the vertex representation with [ v 5 = z P(x(k))=Conv{v 4,v 5,v 6,v 7 } x.5 ],v 6 = [ ] x.5,v 7 = x(k)+.5.5 [ ] x.5, x.5 If.5 <x(k), then x(k)+.5 <. It follows thatp P 2 (x(k)) = /. Hence P(x(k))= /. In conclusion, the parameterized vertices ofp(x(k)) are [ v = [ v 5 = ],v 2 = x(k).5 and the validity domains [ ] [ ],v 3 =,v 4 = ] [ x(k).5,v 6 = [ ], ],v 7 = [ ] x(k).5, x(k).5 VD =[.5], VD 2 =[.5.5], VD 3 =(.5 + ] Table 3. presents the validity domains and their corresponding parameterized vertices. Table3. Validity domains and their parameterized vertices VD VD 2 VD 3 v,v 2,v 3,v 4 v 4,v 5,v 6,v 7 / Figure 3.2 shows the polyhedral setsp andp 2 (x(k)) withx(k)=.3,x(k)=.9 and x(k) =.5. Example 3.3. Consider the discrete time linear time invariant system in example 3. with the same constraints on the state and input variables. Here we will use an MPC formulation, which guarantees recursive feasibility and stability. By solving equations (3.2) and (3.22) with weighting matrices

100 3.3 Model predictive control P 2 (.5) z P P 2 (.9) P 2 (.3) z Fig.3.2 Polyhedral setsp andp 2 (x(k)) withx(k)=.3,x(k)=.9 andx(k)=.5 for example 3.2. Forx(k).5,P P 2 (x(k))=p. For.5 x(k).5,p P 2 (x(k)) /. Forx(k) >.5, P P 2 (x(k))= / Q= [ ], R=. one obtains P= [ ] , K =[ ] The terminal set Ω is computed as a maximal polyhedral invariant set in Section Ω = x R 2 :.. x Figure 3.3 shows the state space partition obtained by using the parameterized vertices framework as a method to construct an explicit solution to the MPC problem (3.23) with prediction horizonn = 2. The control law over the state space partition is

101 86 3 Optimal and Constrained Control - An Overview x x Fig.3.3 State space partition for example 3.3. Number of regionsn r = x (k).6x 2 (k) if x(k) (Region ) x (k).7x 2 (k)+.47 if.. x(k) (Region 4) x (k).7x 2 (k).47 if.. x(k) (Region 7) if.. x(k) u(k)= (Region 2) if.. x(k) (Region 5) if.... x(k) (Region 3) if.... x(k) (Region 6) 2.33], Figure 3.4 shows the state and input tra- For the initial conditionx()=[ 2 jectory as a function of time.

Fig. 3.4  State and input trajectory of the closed loop system as a function of time for example 3.3: (a) state trajectory, (b) input trajectory.

3.4 Vertex control

The vertex control framework was first proposed by Gutman and Cwikel in [54]. It gives a necessary and sufficient condition for stabilizing a discrete-time linear time-invariant system with polyhedral state and control constraints. The condition is that at each vertex of the controlled invariant set C_N there exists an admissible control action that brings the state to the interior of the set C_N. This condition was then extended to the uncertain plant case by Blanchini in [22]. A stabilizing controller is given by the convex combination of the vertex controls in each sector, with a Lyapunov function given by shrunken images of the boundary of the set C_N [54], [22].

To begin, let us consider the system of the form

x(k+1) = A(k)x(k) + B(k)u(k) + D(k)w(k)   (3.34)

where x(k) ∈ R^n, u(k) ∈ R^m and w(k) ∈ R^d are, respectively, the state, input and disturbance vectors. The matrices A(k) ∈ R^{n×n}, B(k) ∈ R^{n×m} and D(k) ∈ R^{n×d} satisfy

A(k) = Σ_{i=1}^{q} α_i(k) A_i,  B(k) = Σ_{i=1}^{q} α_i(k) B_i,  D(k) = Σ_{i=1}^{q} α_i(k) D_i
Σ_{i=1}^{q} α_i(k) = 1,  α_i(k) ≥ 0   (3.35)

where the matrices A_i, B_i and D_i are given. A somewhat more general uncertainty description is given by equation (2.2) in Chapter 2, which can be transformed to the one in (3.35). The state variables, the control variables and the disturbances are subject to the following polytopic constraints

4 See Section 2.3.4

103 88 3 Optimal and Constrained Control - An Overview x(k) X,X={x R n :F x x g x } u(k) U,U ={u R m :F u u g u } (3.36) w(k) W,W ={w R d :F w w g w } where the matrices F x, F u, F w and the vectors g x, g u and g w are assumed to be constant withg x >,g u >,g w > such that the origin is contained in the interior ofx,u andw. Using the results in Section 2.3.4, it is assumed that the robust controlled invariant setc N with some fixed integern > is determined in the form of a polytope, i.e. C N ={x R n :F N x g N } (3.37) Any statex(k) C N can be decomposed as follows x=sx s +( s)x (3.38) where s,x s C N andx is the origin. In other words, the statexis expressed as a convex combination of the origin and one other pointx s C N. Consider the following optimization problem subject to The following theorem holds s = min s,x s {s} (3.39) sx s =x F N x s g N s Theorem3.3.Forallstatex C N andxisnottheorigin,theoptimalsolutionof theproblem(3.39)isreachedifandonlyifxiswrittenasaconvexcombinationof theoriginandonepointbelongingtotheboundaryofthesetc N. Proof. It the optimal solution candidatex s is strictly inside the setc N, then by setting x s = Fr(C N ) x,x s i.e.x s is the intersection between the boundary ofc N and the line connectingxand x s, one obtains x=s x s +( s )x =s x s withs <s. Hence for the optimal solutionx s, it holds thatx s Fr(C N ). Remark3.. The optimal solutions of the problem (3.39) can be seen as a measure of the distance from statexto the origin. Remark3.2. The optimization problem (3.39) can be transformed to an LP problem as s = min{s} (3.4) s subject to

104 3.4 Vertex control * x s x s x 2 x x = x Fig.3.5 Graphical illustration of the proof of theorem 3.3. { FN x sg N s Clearly, for the given statexthe solution to the optimization problem (3.4) is s = max{ F N g N x} (3.4) where the ratios F N gn are element-wise. It is well known [95], [23] that withs given as in (3.4),s is the Minkownski functional. The level curves of the functions are given by scaling the boundary of the setc N. Hence the explicit solution of the problem (3.39) is a set of n dimensional pyramidsp (j) C, each formed by one facet ofc N as a base and the origin as a common vertex. By decomposing further these pyramidsp (j) C as a sequence of simplicies C (j) N, each formed bynvertices {x(j),x(j) 2,...,x(j) n } of the base ofp (j) C and the origin, having the following properties C (j) N has nonempty interior. Int(C (j) N ) Int(C(l) N )= /, j l. C (j) N =C N. j Let U (j) =[u (j) u (j) 2... u (j) n ] be them n matrix defined by chosen admissible control values 5 satisfying (3.34) at the vertices {x (j),x(j) 2,...,x(j) n }. 5 By an admissible control value we understand any control value that is the first of a sequence of control values that bring the state from the vertex to the interior of the feasible set in afinite number of steps, see [54].
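Evaluating the gauge value s* of (3.41) is a one-line operation once C_N = {x : F_N x ≤ g_N} is available; the helper below is a small illustrative sketch.

```python
import numpy as np

def gauge(F_N, g_N, x):
    """s*(x) of (3.41): the Minkowski-type gauge of C_N evaluated at x.
    x belongs to C_N iff gauge(F_N, g_N, x) <= 1, and the level sets are
    scaled copies of the boundary of C_N."""
    return float(np.max((F_N @ x) / g_N))
```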

105 9 3 Optimal and Constrained Control - An Overview 3 2 x x Fig.3.6 Graphical illustration of the simplex decomposition. Remark3.3. Maximizing the control action at the verticesv R n of the controlled invariant setc N can be achieved by solving the following optimization problem subject to J = max u p (3.42) u { FN (A i v+b i u) g N max w W {F ND i w}, i=,2...,q F u u g u where u p is the p norm of the vectoru. Since the setc N is robust controlled invariant, problem (3.42) is always feasible. For allx(k) C N, there exists an index j corresponding to a simplex decomposition ofc N such thatx(k) C (j) N and hencex s(k) is on the base ofc (j) N. Therefore xs(k) can be written as a convex combination of{x (j),x(j) 2,...,x(j) n }, i.e. with x s(k)=β x (j) + β 2x (j) β nx (j) n (3.43) β i, i=,2,...,n n i= β i = By substitutingx(k)=s (k)x s(k) in equation (3.43), one obtains x(k)=s (k){β x (j) + β 2x (j) β nx (j) n } By denoting γ i =s (k)β i, i=,2,...,n

106 3.4 Vertex control 9 one gets with x(k)=γ x (j) + γ 2x (j) γ nx (j) n (3.44) γ i, i=,2,...,n n i= γ i =s (k) n β i =s (k) i= Remark3.4. Let {x,x 2,...,x nc } be the vertices of the polytopec N andn c be the number of vertices. It is well known [54] that the optimization problem (3.4) is equivalent to the following LP problem subject to min γ i {γ + γ γ nc } (3.45) γ x + γ 2 x γ nc x nc =x(k) γ i, i=,2,...,n n c i= γ i Equation (3.44) can be rewritten in a compact form as where x(k)=x (j) γ X (j) =[x (j) x (j) 2... x (j) γ =[γ γ 2... γ n ] T SinceC (j) N has nonempty interior, matrixx(j) is invertible. It follows that Consider the following control law n ] γ ={X (j) } x(k) (3.46) u(k)=γ u (j) + γ 2u (j) γ nu (j) n (3.47) or u(k)=u (j) γ (3.48) By substituting equation (3.46) in equation (3.48) one gets u(k)=u (j) {X (j) } x(k)=k (j) x(k) (3.49) with K (j) =U (j) {X (j) } (3.5) Hence forx C (j) N the controller is an linear feedback state law whose gains are obtained simply by linear interpolation of the control values at the vertices of the simplex.

107 92 3 Optimal and Constrained Control - An Overview u=k (j) x, x C (j) N (3.5) The piecewise linear control law (3.5) was first proposed by Gutman and Cwikel in [54] for the discrete-time linear time-invariant system case. In the original work [54], the state feedback control (3.5) was called the vertex controller. The extension to the uncertain plant case was proposed by Blanchini in [22]. Remark3.5. Clearly, once the piecewise linear function (3.5) is pre-calculated, the control action can be computed by determining the simplex that contains the current state, which gives an explicit piecewise linear control law. An alternative approach for computing the control action is based on solving on-line the LP problem (3.45) and then apply the control u(k)=γ u + γ 2 u γ nc u nc (3.52) whereu,u 2,...,u nc are the stored control values at the verticesx,x 2,...,x nc. The following theorem holds Theorem 3.4. For system(3.34) and constraints(3.36), the vertex control law(3.47) or(3.5)guaranteesrecursivefeasibilityforallx C N. Proof. A basic explanation is provided in the original work of [54]. Here a new and simpler proof is proposed using convexity of the setc N and linearity of the system (3.34). For recursive feasibility, one has to prove that for allx(k) C N { Fu u(k) g u and x(k+ )=A(k)x(k)+B(k)u(k)+D(k)w(k) C N For allx(k) C N, there exists an index j such thatx(k) C (j) N. It follows that F u u(k) =F u {γ u (j) + γ 2u (j) γ nu (j) n } = γ F u u (j) + γ 2F u u (j) γ nf u u (j) n γ g u + γ 2 g u +...+γ n g u n g u γ i s (k)g u g u i= x(k+ ) =A(k)x(k)+B(k)u(k)+D(k)w(k) One has =A(k) n γ i x (j) i +B(k) n γ i u (j) i +D(k)w(k) i= i= = n γ i {A(k)x (j) i +B(k)u (j) i +D(k)w(k)}+( s (k))d(k)w(k) i=

108 3.4 Vertex control 93 F N x(k+ ) = n γ i F N {A(k)x (j) i +B(k)u (j) i +D(k)w(k)}+( s (k))f N D(k)w(k) i= n γ i g N +( s (k))f N D(k)w(k) i= s (k)g N +( s (k))f N D(k)w(k) Since the setc N is robust controlled invariant and containing the origin in its interior, it follows that max w W {F ND i w(k)} g N Hence or in other words,x(k+ ) C N. F N x(k+ ) s (k)g N +( s (k))g N g N In the absence of disturbances, i.e.w(k)=, k, the following theorem holds Theorem 3.5. Consider the uncertain system(3.34) with input and state constraints (3.36), then the closed loop system with the piecewise linear control law(3.47) or (3.5) is robustly asymptotically stable. Proof. A proof is given in [54], [22]. Here we give an alternative proof providing a geometrical insight into the vertex control scheme. Consider the following positive definite function V(x)=s (k) (3.53) V(x) is a Lyapunov function candidate. For anyx(k) C N, there exists an index j such thatx(k) C (j) N. Hence x(k)=s (k)x s(k), and u(k)=s (k)u s(k) where the control actionu s(k) is given by 3.43 It follows that u s(k)=β u (j) + β 2u (j) β nu (j) n x(k+ ) =A(k)x(k)+B(k)u(k) =A(k)s (k)x s(k)+b(k)s (k)u s(k) =s (k){a(k)x s(k)+b(k)u s(k)}=s (k)x s (k+ ) wherex s (k+) =A(k)x s(k)+b(k)u s(k) C N. Hences (k) gives a possible decomposition (3.38) ofx(k+ ). By using the interpolation based on linear programming (3.4), one gets a different and optimal decomposition, namely x(k+ )=s (k+ )x s(k+ ) withx s(k+) C N. It follows thats (k+) s (k) andv(x) is a non-increasing function.

109 94 3 Optimal and Constrained Control - An Overview From the fact that the level curves of the functionv(x) =s (k) are given by scaling the boundary of the feasible set, and the contractiveness property of the control values at the vertices of the feasible set guarantees that there is no initial conditionx() C N such thats (k) =s () = for sufficiently large and finitek, one concludes thatv(x) =s (k) is a Lyapunov function for allx(k) C N. Hence the closed loop system with the vertex control law (3.5) is robustly asymptotically stable. Example3.4. Consider the discrete time system in example 3. and 3.3 [ ] [ ] x(k+ )= x(k)+ u(k) (3.54).7 The constraints are 2 x 2, 5 x 2 5, u (3.55) Based on procedure 2.3 in Section 2.3.4, the controlled invariant setc N is computed and depicted in Figure x x Fig.3.7 controlled invariant setc N and state space partition of vertex control for example 3.4. The set of vertices ofc N is given by the matrixv(c N ) below, together with the control matrixu v [ ] V(C N )= (3.56) and U v = [ ] (3.57) The vertex control law over the state space partition is

110 3.4 Vertex control 95.25x (k).5x 2 (k), ifx(k) C () N N.33x u(k)= (k).33x 2 (k), ifx(k) C (2) N orx(k) C(6) N.2x (k).43x 2 (k), ifx(k) C N orx(k) C(7) N.4x (k).42x 2 (k), ifx(k) C (4) N orx(k) C(8) N (3.58) with.. C () N = x R2 : x C (2) N = x R2 : x C (3) N = x R2 : x C (4) N = x R2 : x C (5) N = x R2 : x C (6) N = x R2 : x C (7) N = x R2 : x C (8) N = x R2 : x Figure 3.8 presents state trajectories of the closed loop system for different initial conditions. For the initial conditionx()=[ ] T, Figure 3.9 shows the state trajectory, the input trajectory and the interpolating coefficients as a function of time. As expecteds (k) is a positive and non-increasing function. From Figure 3.9(b), it is worth noticing that using the vertex controller, the control values are saturated only on the boundary of the setc N, i.e. whens =. And also the state trajectory at some moments is parallel to the boundary of the setc N, i.e whens is constant. At these moments, the control values are also constant due to the choice of the control values at the vertices of the setc N.

Fig. 3.8  State trajectories of the closed loop system for example 3.4.

Fig. 3.9  State trajectory, input trajectory and interpolating coefficient of the closed loop system as a function of time for example 3.4: (a) state trajectory, (b) input trajectory and interpolating coefficient s*.
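The on-line form of the vertex controller mentioned in Remark 3.5 (solve the LP (3.45), then apply (3.52)) can be sketched as follows. The stored vertex matrix V and vertex-control matrix U_v are placeholders for data such as (3.56)-(3.57); the scipy LP solver is an implementation choice.

```python
import numpy as np
from scipy.optimize import linprog

def vertex_control_step(x, V, U_v):
    """One step of on-line vertex control (Remark 3.5).
    V  : (n_v x n) matrix whose rows are the vertices x_i of C_N.
    U_v: (n_v x m) matrix whose rows are the stored controls u_i.
    Solves the LP (3.45) and returns u of (3.52) together with s*(k)."""
    n_v = V.shape[0]
    res = linprog(np.ones(n_v),
                  A_ub=np.ones((1, n_v)), b_ub=[1.0],   # sum gamma_i <= 1
                  A_eq=V.T, b_eq=x,                     # sum gamma_i x_i = x
                  bounds=[(0.0, None)] * n_v)
    gamma = res.x
    return U_v.T @ gamma, float(np.sum(gamma))          # control value, s*(k)
```

The second returned value is the interpolating coefficient s*(k), i.e. the Lyapunov function of Theorem 3.5, so its monotonicity can be checked directly along a simulated trajectory.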

Part II
Interpolation based control


Chapter 4
Interpolation Based Control - Nominal State Feedback Case

This chapter presents several original contributions on constrained control algorithms for discrete-time linear systems. Using a generic design principle, several types of control laws will be proposed for time-invariant models, their robust versions being investigated in the next chapters. The first control law is based on an interpolation technique between a global vertex controller and a local controller, through the resolution of a simple linear programming problem. Implicit and explicit solutions of this control law will be presented. The second control law is obtained as the solution of a quadratic programming problem. Then, to fully utilize the capacity of the actuators and guarantee the input constraints, a saturation function on the input is considered. For the third control law, it is shown that the convex hull of a set of invariant ellipsoids is invariant; a method for constructing a continuous feedback law based on interpolation between the saturated controllers of the ellipsoids will also be presented. For all types of controllers, recursive feasibility and asymptotic stability will be proved. Several numerical examples are given to support the algorithms with illustrative simulations.

4.1 Problem formulation

In this chapter, we consider the problem of regulating to the origin the following discrete-time linear time-invariant system

x(k+1) = Ax(k) + Bu(k)   (4.1)

where x(k) ∈ R^n and u(k) ∈ R^m are respectively the state and the input, and A ∈ R^{n×n} and B ∈ R^{n×m} are the system matrices. Both the state vector x(k) and the control vector u(k) are subject to polytopic constraints

x(k) ∈ X,  X = {x ∈ R^n : F_x x ≤ g_x}
u(k) ∈ U,  U = {u ∈ R^m : F_u u ≤ g_u}   ∀k ≥ 0   (4.2)

where the matrices F_x, F_u and the vectors g_x, g_u are assumed to be constant with g_x > 0 and g_u > 0 such that the origin is contained in the interior of X and U. Recall that the inequalities are taken element-wise. We assume that the states of the system are measurable. We also assume that the pair (A,B) is stabilizable, i.e. all uncontrollable states have stable dynamics.

4.2 Interpolation based on linear programming - Implicit solution

Define a linear controller K ∈ R^{m×n}, such that

u(k) = Kx(k)   (4.3)

quadratically stabilizes the system (4.1) with some desired performance specifications. The details of such a synthesis procedure are not reproduced here, but we assume that feasibility is guaranteed. Based on procedures 2.1 or 2.2, a maximal invariant set Ω_max can be computed in the form

Ω_max = {x ∈ R^n : F_o x ≤ g_o}   (4.4)

when applying the control law u(k) = Kx(k). Furthermore, with some given and fixed integer N > 0, based on procedure 2.3 one can find a controlled invariant set C_N in the form

C_N = {x ∈ R^n : F_N x ≤ g_N}   (4.5)

such that all x ∈ C_N can be steered into Ω_max in no more than N steps when a suitable control is applied.

As in Section 3.4, the polytope C_N is decomposed into a set of simplices C_N^(j), each formed by n vertices of C_N and the origin. For all x(k) ∈ C_N^(j), the vertex controller

u(k) = K^(j) x(k),  x(k) ∈ C_N^(j)   (4.6)

can be applied, with K^(j) defined as in (3.50). From Section 3.4, it is clear that the closed loop system (4.1) with vertex control is asymptotically stable for all initial states x ∈ C_N.

The main advantage of the vertex control scheme is the size of the domain of attraction, i.e. the set C_N. The controlled invariant set C_N, that is, the feasible domain for vertex control, might be as large as that of any other constrained control scheme. However, a weakness of vertex control is that the full control range is exploited only on the border of the set C_N in the state space, with progressively smaller control action as the state approaches the origin. Hence the time to regulate the plant to the origin is often unnecessarily long. A way to overcome this shortcoming is to switch to another, more aggressive, local controller, e.g.

a state feedback controller u(k) = Kx(k), when the state reaches the maximal invariant set Ω_max of the local controller. The disadvantage of this solution is that the control action becomes non-smooth [3].

In this section, a method to overcome the non-smooth control action [3] will be proposed. For this purpose, any state x(k) ∈ C_N can be decomposed as follows

x(k) = c(k) x_v(k) + (1-c(k)) x_o(k)   (4.7)

with x_v ∈ C_N, x_o ∈ Ω_max and 0 ≤ c ≤ 1. Figure 4.1 illustrates such a decomposition.

Fig. 4.1  Interpolation based control. Any state x(k) can be expressed as a convex combination of x_v(k) ∈ C_N and x_o(k) ∈ Ω_max.

Consider the following control law

u(k) = c(k) u_v(k) + (1-c(k)) u_o(k)   (4.8)

where u_v(k) is obtained by applying the vertex control law (4.6) at x_v(k) and u_o(k) = Kx_o(k) is the control law (4.3) that is feasible in Ω_max.

Theorem 4.1. For system (4.1) and constraints (4.2), the control law (4.8) guarantees recursive feasibility for all initial states x(0) ∈ C_N.

Proof. For recursive feasibility, one has to prove that

F_u u(k) ≤ g_u
x(k+1) = Ax(k) + Bu(k) ∈ C_N

for all x(k) ∈ C_N. For the input constraints,

F_u u(k) = F_u {c(k) u_v(k) + (1-c(k)) u_o(k)}
         = c(k) F_u u_v(k) + (1-c(k)) F_u u_o(k)
         ≤ c(k) g_u + (1-c(k)) g_u = g_u

and

x(k+1) = Ax(k) + Bu(k)
        = A{c(k)x_v(k) + (1-c(k))x_o(k)} + B{c(k)u_v(k) + (1-c(k))u_o(k)}
        = c(k){Ax_v(k) + Bu_v(k)} + (1-c(k)){Ax_o(k) + Bu_o(k)}

Since Ax_v(k) + Bu_v(k) ∈ C_N and Ax_o(k) + Bu_o(k) ∈ Ω_max ⊆ C_N, it follows that x(k+1) ∈ C_N.

In order to approach the unconstrained local controller as much as possible, the minimization of the interpolating coefficient c(k) needs to be considered. This can be done by solving the following nonlinear optimization problem

c* = min_{x_v, x_o, c} {c}   (4.9)

subject to
F_N x_v ≤ g_N
F_o x_o ≤ g_o
c x_v + (1-c) x_o = x
0 ≤ c ≤ 1

Denote r_v = c x_v ∈ R^n and r_o = (1-c) x_o ∈ R^n. Since x_v ∈ C_N and x_o ∈ Ω_max, it follows that r_v ∈ c C_N and r_o ∈ (1-c) Ω_max, or equivalently

F_N r_v ≤ c g_N
F_o r_o ≤ (1-c) g_o

With this change of variables, the nonlinear optimization problem (4.9) is transformed into a linear programming problem as follows

c* = min_{r_v, c} {c}   (4.10)

subject to
F_N r_v ≤ c g_N
F_o (x - r_v) ≤ (1-c) g_o
0 ≤ c ≤ 1

Remark 4.1. It is interesting to observe that the proposed interpolation scheme, through the minimization of the interpolating coefficient, is the antithesis of the maximization of c. It is obvious that in the latter case c* = 1 for all x ∈ C_N and the interpolating controller (4.8) turns out to be the vertex controller.

Theorem 4.2. The control law using the interpolation based on linear programming (4.10) guarantees asymptotic stability for all initial states x(0) ∈ C_N.

Proof. First of all we will prove that all solutions starting in C_N \ Ω_max will reach the set Ω_max in finite time. For this purpose, consider the following non-negative function

V(x(k)) = c*(k),  x(k) ∈ C_N \ Ω_max   (4.11)

V(x(k)) is a candidate Lyapunov function. For any x(k) ∈ C_N \ Ω_max, one has

x(k) = c*(k) x_v*(k) + (1-c*(k)) x_o*(k)

and consequently

u(k) = c*(k) u_v(k) + (1-c*(k)) u_o(k)

It follows that

x(k+1) = Ax(k) + Bu(k) = c*(k) x_v(k+1) + (1-c*(k)) x_o(k+1)

where
x_v(k+1) = A x_v*(k) + B u_v(k) ∈ C_N
x_o(k+1) = A x_o*(k) + B u_o(k) ∈ Ω_max

Hence c*(k) gives a feasible decomposition (4.7) of x(k+1). By using the interpolation based on linear programming (4.10), a possibly different and optimal decomposition is obtained, namely

x(k+1) = c*(k+1) x_v*(k+1) + (1-c*(k+1)) x_o*(k+1)

where x_v*(k+1) ∈ C_N and x_o*(k+1) ∈ Ω_max. It follows that c*(k+1) ≤ c*(k), and V(x(k)) is a non-increasing function and a Lyapunov function in the weak sense, as the inequality is not strict.

Using the vertex controller, an interpolation between the vertices of the feasible controlled invariant set C_N and the origin is obtained. Conversely, using the controller (4.7), (4.8), (4.10), an interpolation is constructed between the vertices of the feasible controlled invariant set C_N and the vertices of the invariant set Ω_max, which contains the origin as an interior point. This last property proves that the vertex controller is a feasible choice for the interpolation based technique. From these facts we conclude that c*(k) ≤ s*(k) for any x(k) ∈ C_N, with s*(k) obtained as in (3.40), Section 3.4. Since the vertex controller is exponentially stable, the state reaches any bounded set around the origin in finite time. In our case this property implies that, using the controller (4.7), (4.8), (4.10), the state of the closed loop system reaches the invariant set Ω_max in finite time, or equivalently that there exists a finite k such that c*(k) = 0. The proof is complete by noting that inside Ω_max the LP problem (4.10) has the trivial solution c* = 0. Hence the controller (4.7), (4.8), (4.10) turns out to be the local controller. The feasible stabilizing controller u(k) = Kx(k) is contractive, and thus the interpolation-based controller assures asymptotic stability for all x ∈ C_N.

Since r_v*(k) = c*(k) x_v*(k) and r_o*(k) = (1-c*(k)) x_o*(k), it follows that

u(k) = u_rv(k) + u_ro(k)   (4.12)

where u_rv(k) is obtained by applying the vertex control law at r_v*(k) and u_ro(k) = K r_o*(k).
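A compact numerical sketch of the resulting controller, one call per sampling instant as in Algorithm 4.1 below, is given here. It solves the LP (4.10) with scipy and then forms the control (4.12); the vertex data V, U_v (vertices of C_N and the associated admissible controls) and the use of the decomposition LP of Remark 3.5 for the term u_rv are implementation choices, not prescriptions of the text.

```python
import numpy as np
from scipy.optimize import linprog

def interpolation_step(x, F_N, g_N, F_o, g_o, V, U_v, K):
    """One step of the implicit interpolation-based controller:
    solve the LP (4.10) for (r_v, c*), then build u = u_rv + K r_o as in (4.12).
    V (n_v x n) and U_v (n_v x m) store the vertices of C_N and the
    corresponding admissible vertex control values."""
    n = x.size
    # decision vector z = [r_v ; c], objective: minimise c
    cost = np.r_[np.zeros(n), 1.0]
    A_ub = np.block([[F_N, -g_N.reshape(-1, 1)],
                     [-F_o, g_o.reshape(-1, 1)]])
    b_ub = np.r_[np.zeros(F_N.shape[0]), g_o - F_o @ x]
    bounds = [(None, None)] * n + [(0.0, 1.0)]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    r_v, c_star = res.x[:n], res.x[n]
    r_o = x - r_v

    # vertex control at r_v via the decomposition LP of Remark 3.5:
    # min sum(gamma) s.t. sum_i gamma_i x_i = r_v, gamma >= 0
    n_v = V.shape[0]
    res_v = linprog(np.ones(n_v), A_eq=V.T, b_eq=r_v,
                    bounds=[(0.0, None)] * n_v)
    u_rv = U_v.T @ res_v.x

    return u_rv + K @ r_o, c_star
```

The scalar c* returned alongside the input is the value of the Lyapunov-like function (4.11) and should be non-increasing along closed-loop trajectories.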

A simple on-line algorithm for the interpolation based controller is given here.

Algorithm 4.1: Interpolation based control - Implicit solution
1. Measure the current state of the system x(k).
2. Solve the LP problem (4.10).
3. Compute u_rv in (4.12) by solving an LP, or otherwise determine in which simplex C_N^(j) the vector r_v* lies and use (3.51), or explicitly solve the LP problem (3.45) and then use (3.52).
4. Implement as input the control value (4.12).
5. Wait for the next time instant k := k+1.
6. Go to step 1 and repeat.

Remark 4.2. From the computational complexity point of view, we note that at each time instant algorithm 4.1 requires the solution of the LP problem (4.10) of dimension n+1, with n being the dimension of the state, and another LP problem to find u_rv. Clearly, this extremely simple optimization problem is comparable with a one-step ahead MPC.

4.3 Interpolation based on linear programming - Explicit solution

4.3.1 Geometrical interpretation

This subsection is dedicated to the computation and structural implications of the interpolation based on linear programming (4.10), which can be assimilated to a multi-parametric optimization by the fact that the current state plays the role of a vector of parameters. The control law can be pre-computed off-line in an explicit form, as a piecewise affine state feedback over a polyhedral partition of the state space, thus avoiding real-time optimization.

Remark 4.3. The following properties can be exploited during the construction stage:
- For all x ∈ Ω_max, the optimal interpolation problem has the trivial solution c* = 0 and thus x_o = x in (4.7).
- Let x ∈ C_N \ Ω_max with a particular convex combination x = c x_v + (1-c) x_o, where x_v ∈ C_N and x_o ∈ Ω_max. If x_o is strictly inside Ω_max, define

1 In the implicit vertex control case.

121 6 4 Interpolation Based Control Nominal State Feedback Case Analogously, it holds that leading to x=cx v +( c)x o x v x=x v cx v ( c)x o =( c)(x v x o ) x v x =( c) x v x o x x o =c x v x o x v x x x o = ( c) x v x o = c x v x o c Apparently,cis minimal if and only if c is maximal, or in other words x v x x x o reaches its maximum AnalysisinR 2 In this subsection an analysis of the LP problem (4.) in ther 2 parameter space is presented with reference to Figure 4.3. The discussion is insightful in what concerns the properties of the partition in the explicit solution. The problem considered here is to decompose the polyhedralx 234 such that the explicit solutionc = min{c} is given in the decomposed cells. X x v T X 2 x X 4 x o x o X 3 Fig.4.3 Graphical illustration in the R 2 case. For illustration we will consider four vertices X i, i =,2,3,4 and any point x Conv(X,X 2,X 3,X 4 ) 2. Denote X ij as the segment connecting X i and X j, for i,j=,2,3,4 andi j. The problem is reduced to find the expression of a convex 2 this schematic view can be generalized to any pair of faces ofc N and Ω max

122 4.3 Interpolation based on linear programming - Explicit solution 7 combinationx =cx v +( c)x o, wherex v X 2 Fr(C N ) andx o X 34 Fr(Ω), providing the minimal value ofc. Without loss of generality, we suppose that the distance fromx 2 tox 34 is larger than the distance fromx tox 34 or equivalently that the distance fromx 4 tox 2 is smaller than the distance fromx 3 tox 2. Theorem4.4.UndertheconditionthatthedistancefromX 2 tox 34 islargerthanthe distancefromx tox 34,orthedistancefromX 4 tox 2 issmallerthanthedistance fromx 3 tox 2,thedecompositionofthepolytopeX 234 =X 24 X 234,istheresult of the minimization of the interpolating coefficient c. Proof. Without loss of generality, suppose thatx X 234. Thenxcan be decomposed as x=cx 2 +( c)x o wherex o X 34, see Figure 4.3. Another possible decomposition is x=c x v+( c )x o wherex o is any point inx 34 andx v is any point inx 2, see Figure 4.3. It is clear that if the distance fromx 2 tox 34 is larger than the distance fromx to X 34 then the distance fromx 2 tox 34 is larger than the distance from any pointx v in X 2 tox 34. As a consequence, there exists a pointt X 2,x such that x x v x x = x T x x o x X 2 x x o Together with theorem 4.3, one obtains o c<c orx 234 represent a polyhedral partition of the explicit solution to problem (4.). Analogously one can prove thatx 24 is polyhedral partition of the explicit solution to problem (4.). Theorem 4.4 states that the minimal value of the interpolating coefficient c is found with the help of the decomposition of the polyhedralx 234 asx 234 =X 24 X 234. Remark4.4. A singular case where the assumption of the previous case is not fulfilled is represented by the segmentsx 2 parallel withx 34. In this case, any convex combinationx=cx v +( c)x o gives the same value ofc. Hence the partition may not be unique. Remark4.5. From theorem 4.4, it is clear that one can subdivide the regionc N \ Ω max into partitions as follows

123 8 4 Interpolation Based Control Nominal State Feedback Case For each facet of the maximal admissible set Ω max, one has to find the furthest point on the boundary of the feasible setc N on the same side of the origin as the facet of Ω max. A polyhedral partition is obtained as the convex hull of that facet of Ω max and the furthest point inc N. By the bounded polyhedral structure ofc N, the existence of such a vertexc N as the furthest point is guaranteed. On the other hand, for each facet of the feasible setc N, one has to find the closest point on the boundary of the set Ω max on the same side of the origin as the facet ofc N. A polyhedral partition will be in this case the convex hull of that facet of C N and the closest point in Ω max. In this case again the existence of some vertex Ω max as the closest point is guaranteed. Remark 4.6. In the n dimensional state space, it may happen that the decomposition of the state space according to the remark 4.5 does not cover the entire setc N, as will be shown in the following example. The feasible outer setc N and the feasible inner set Ω max are given by the following vertex representations, displayed in Figure 4.4. C N = Conv 4, 4 4, 4 4, Ω max = Conv,.5.5,.5.5, Fig.4.4 Graphical illustration for remark 4.6. The white set is the feasible outer setc N. The red set is the feasible inner set Ω max. By solving the parametric linear program (4.) in its explicit form, the state space partition is obtained. Figure 4.5 shows two polyhedral partition of the state space partition. The red one is the set Ω max. The blue one is the set, obtained by the convex hull of two points from the inner set Ω max and two points from the outer set C N.

124 4.3 Interpolation based on linear programming - Explicit solution 9 Fig. 4.5 Graphical illustration for remark 4.6. The partition is obtained by two vertices of the inner set and two vertices of the outer set. In conclusion, in then dimensional state space ifx C N \ Ω max, the smallest valuecwill be reached when the regionc N \ Ω max is decomposed into polytopes with vertices either on the boundary of Ω max or on the boundary ofc N. These polytopes can be further decomposed into simplices, each formed byr vertices ofc N and n r+ vertices of Ω max where r n. An example of a state space partition is given in Figure x x Fig.4.6 Simplex based decomposition as an explicit solution of the LP problem (4.).

125 4 Interpolation Based Control Nominal State Feedback Case Explicit solution of the interpolation-based control scheme Now suppose thatxbelongs to the simplex formed bynvertices {x,x 2,...,x n } ofc N and the vertexx o of Ω max (the other cases ofn+ vertices distributed in a different manner in betweenc N and Ω max can be treated similarly). In this case,x can be written as a convex combination ofnvertices {x,x 2,...,x n } andx o, i.e. with x= n+ i= n i= α i x i + α i+ x o, (4.3) α i =, α i (4.4) For a givenx C N \Ω max, based on equations (4.3) and (4.4), the interpolating coefficients α i, i=,2,...,n+ are defined uniquely as with an invertible matrix [ α α 2... α n α n+ ] T = [ x x 2...x n x o... [ ] x x 2...x n x o... ] [ ] x since a nonempty simplex is formed byn+ linear independent vertices. On the other hand, from equation (4.7), the statexcan also be expressed as x=cx v +( c)x o with c. Due to the uniqueness of the combination, it follows that α i+ = c and x v = n i= α i c x i wheneverc, i.e. for allx C N \ Ω max. By applying the vertex control law, one obtains and or in a compact matrix form u v = n i= u=cu v +( c)u o = α i c u i n i= α i u i + α n+ u o. (4.5)

126 4.3 Interpolation based on linear programming - Explicit solution α u= [ ] α 2 u u 2...u n u. o. α n α n+ Together with equation (4.5), one can obtain a piecewise affine form u = [ ] [ ] [ ] x u u 2...u n u x 2...x n x o x o... =Lx+v where the matrixl R m n and the vectorv R m are defined as [L v]= [ u u 2...u n u o ] [ x x 2...x n x o... Globally over the entire setc N \ Ω max the controller is an affine state feedback whose gains are obtained simply by linear interpolation of the control values at the vertices of each simplex. It is worth noticing that the generalization of a simplexbased partition can be highly improved from the complexity point of view by merging of the elementary simplex cells found above [44], [5], [8], [85]. This is not surprising as long as in general terms the interpolation based on linear programming is parameterized in terms of the state vector and leads to a multiparametric optimization problem. The expected result is a decomposition of the state space corresponding to the distribution of the optimal pairs of extreme points (vertices) used in the interpolation process. ]
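The algebra of (4.13)-(4.15) on a single simplex amounts to one matrix inversion, as the short sketch below illustrates; the two-dimensional vertex data and control values are purely illustrative. The same inverse also yields the affine gains [L v] of the piecewise affine representation.

```python
import numpy as np

# Interpolating control on one simplex with vertices x_1, x_2 of C_N and x_o of
# Omega_max (equations (4.13)-(4.15)); all numerical data here are illustrative.
X = np.array([[4.0, 3.0],
              [1.0, 3.0]])          # columns x_1 = [4,1]^T, x_2 = [3,3]^T
x_o = np.array([1.0, 0.5])          # vertex of Omega_max
U = np.array([[-1.0, -0.8]])        # columns u_1, u_2 (single input, m = 1)
u_o = np.array([-0.3])              # control at x_o (u_o = K x_o)

M = np.vstack([np.column_stack([X, x_o]), np.ones(3)])   # [x_1 x_2 x_o ; 1 1 1]
x = np.array([2.5, 1.2])            # a state inside the simplex
alpha = np.linalg.solve(M, np.r_[x, 1.0])                # coefficients of (4.13)-(4.14)
u = np.column_stack([U, u_o]) @ alpha                    # control value (4.15)

# the same inverse gives the affine gains of u = L x + v on this simplex
LV = np.column_stack([U, u_o]) @ np.linalg.inv(M)
L, v = LV[:, :-1], LV[:, -1]
```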

127 2 4 Interpolation Based Control Nominal State Feedback Case Algorithm 4.2: Interpolation based control- Explicit solution Input: Given the setsc N, Ω max, the optimal feedback controllerk over Ω max and the control values at the vertices ofc N. Output: State space partition, the feedback control laws over the partitions ofc N.. Solve the LP (4.) by using explicit multi-parametric programming by exploiting the parameterized vertices formulation, see Section As a result, one obtains the state space partition ofc N. 2. Decompose each polyhedral partition ofc N \ Ω max in a sequence of simplices, each formed bysvertices ofc N andn s+ vertex of Ω max, where s n. The result is a the state space partition overc N \ Ω max in the form of simplices C (k). 3. The control law over Ω max isu=kx. 4. In each simplexc (k) C N \ Ω max the control law is defined as: u(x)=l k x+v k wherel k R m n andv k R m are defined as { with x (k),x(k) 2...x (k) ] n+ [ ] ] Lk v k = [u [ (k) u (k) 2...u (k) x (k) x (k) n+... { u (k) } 2,...x(k) n+ are vertices ofc (k) that define a full-dimensional sim- },u(k) 2,...u(k) n+ are the corresponding control values at vertices }. plex and { x (k),x(k) 2,...x(k) n+ Remark4.7. Based on remark 4.4, it is worth noticing that by using explicit multiparametric programming, the vertices of the state space partition ofc N \Ω max might not be the vertices ofc N or Ω max, which might happen for example when some facet ofc N is parallel with a facet of Ω max. Remark 4.8. It can be observed that algorithm 4.2 uses only the information about the state space partition of the explicit solution of the LP problem (4.). The explicit form ofc,r v andr o as a piecewise affine function of the state is not used. The sensitive part of algorithm 4.2 is step 2. It is clear that the above simplexbased partition overc N \ Ω max might be very complex. Also the fact that for all facets of the inner invariant set Ω max, the local controller is in the formu =Kx is not exploited. In addition, as practice usually shows, for each facet of the outer controlled invariant setc N, the vertex controller is usually constant. In these cases, the complexity of the explicit piecewise affine solution of a multi-parametric optimization problem might be reduced as follows.

Consider the case when the state space partition P^(k) of C_N \ Ω_max is formed by one vertex x_v of C_N and one facet F_o of Ω_max. Note that, based on remark 4.5, such a partition always exists as an explicit solution to the LP problem (4.). For all x ∈ P^(k) it follows that

x = c x_v + (1 − c) x_o = c x_v + r_o

with x_o ∈ F_o and r_o = (1 − c) x_o. Let u_v ∈ R^m be the control value at the vertex x_v and denote the explicit solution of c and r_o to the LP problem (4.) for all x ∈ P^(k) as

c = F_k^(c) x + g_k^(c),    r_o = F_k^(o) x + g_k^(o)    (4.6)

with F_k^(c) ∈ R^{1×n}, g_k^(c) ∈ R and F_k^(o) ∈ R^{n×n}, g_k^(o) ∈ R^n. The control value for x ∈ P^(k) is computed as

u = c u_v + (1 − c) K x_o = c u_v + K r_o    (4.7)

By substituting equation (4.6) into equation (4.7), one obtains

u = u_v ( F_k^(c) x + g_k^(c) ) + K ( F_k^(o) x + g_k^(o) )

or equivalently

u = ( u_v F_k^(c) + K F_k^(o) ) x + ( u_v g_k^(c) + K g_k^(o) )    (4.8)

The fact that the control value is a piecewise affine (PWA) function of the state is confirmed. Clearly, the complexity of the explicit solution with the control law (4.8) is lower than the complexity of the explicit solution with the simplex based partition, since one does not have to divide up the facets of Ω_max (and the facets of C_N, in the case when the vertex control for such facets is constant) into a set of simplices.

Interpolation based on linear programming - Qualitative analysis

Theorem 4.5 below shows the Lipschitz continuity of the control law based on linear programming (4.7), (4.8), (4.).

Theorem 4.5. Consider the control law resulting from the interpolation based on linear programming (4.7), (4.8), (4.). This control law is Lipschitz continuous with Lipschitz constant M = max_k ||L_k||, where k ranges over the set of indices of partitions and ||·|| denotes the Euclidean norm.

Proof. For any two points x_A and x_B in C_N, there exist r+1 points x_0, x_1, ..., x_r that lie on the segment connecting x_A and x_B, and such that x_A = x_0, x_B = x_r and

(x_{k−1}, x_k) = [x_A, x_B] ∩ Fr(P^(i)), the intersection between the line connecting x_A, x_B and the boundary of some partition P^(i), see Figure 4.7.

Fig. 4.7 Graphical illustration of the construction related to Theorem 4.5.

Due to the continuity property of the control law, one has

||(L_A x_A + v_A) − (L_B x_B + v_B)||
  = ||(L_1 x_0 + v_1) − (L_1 x_1 + v_1) + (L_2 x_1 + v_2) − (L_2 x_2 + v_2) + ... + (L_r x_{r−1} + v_r) − (L_r x_r + v_r)||
  = ||L_1 (x_0 − x_1) + L_2 (x_1 − x_2) + ... + L_r (x_{r−1} − x_r)||
  ≤ Σ_{k=1}^{r} ||L_k (x_{k−1} − x_k)|| ≤ Σ_{k=1}^{r} ||L_k|| ||x_{k−1} − x_k||
  ≤ max_k {||L_k||} Σ_{k=1}^{r} ||x_{k−1} − x_k|| = M ||x_A − x_B||

where the last equality holds since the points x_i with i = 0, 1, ..., r are aligned.

Theorem 4.5 states that the interpolating controller (4.7), (4.8), (4.) is a Lipschitz continuous function of the state, which is a strong form of uniform continuity.

Example 4.. Consider the following discrete-time linear time-invariant system

x(k+1) = [ ] x(k) + [ .3 ] u(k)    (4.9)

The constraints are

x_1(k), −5 ≤ x_2(k) ≤ 5, u(k)    (4.2)

Using linear quadratic (LQ) local control with weighting matrices Q = I and R = , the local feedback gain is obtained

130 4.3 Interpolation based on linear programming - Explicit solution 5 K =[ ] (4.2) The invariant set Ω max and the controlled invariant setc N withn = 4 are shown in Figure 4.. Note thatc 4 =C 5. In this casec 4 is the maximal controlled invariant set. Ω max is presented in minimal normalized half-space representation as Ω max = x R 2 : x.45 (4.22) The setc N can be presented in vertex representation by a set vertices ofc N, given by the matrixv(c N ) V(C N )=[V V ] (4.23) [ ] V = and the corresponding control values at the vertices ofc N U v =[U U ] (4.24) U = [ ] Using algorithm 4.2, the state space partition is obtained in Figure 4.6. Merging the regions with identical control laws, the reduced state space partition is obtained in Figure 4.8. This figure also presents different state trajectories of the closed-loop system for different initial conditions x x Fig. 4.8 State space partition and different state trajectories for example 4. using algorithm 4.2. Number of regionsn r =.
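A local gain of the type used in example 4. can be reproduced with a standard discrete-time LQ design. The sketch below is only indicative: the model matrices are placeholders (they are not claimed to be the exact data of (4.9)), the weights follow the example with Q = I, and R = 1 is an assumption.

    import numpy as np
    from scipy.linalg import solve_discrete_are

    # Placeholder second-order model; the entries are assumptions for illustration.
    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    B = np.array([[1.0],
                  [0.3]])
    Q = np.eye(2)            # state weight (Q = I as in the example)
    R = np.array([[1.0]])    # input weight (assumed)

    # Discrete-time Riccati equation and LQ gain, written so that u(k) = K x(k).
    P = solve_discrete_are(A, B, Q, R)
    K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

The resulting K defines the local controller whose maximal admissible set Ω_max is then constructed as in Chapter 2.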

Figure 4.9(a) shows the Lyapunov function as a piecewise affine function of the state. It is well known (see Section 3.4) that the level sets of the Lyapunov function for vertex control are simply obtained by scaling the boundary of the set C_N. For the interpolation based control method (4.7), (4.8), (4.), the level sets of the Lyapunov function V(x) = c* depicted in Figure 4.9(b) have a more complicated form and generally are not parallel to the boundary of C_N. From Figure 4.9, it can be observed that the Lyapunov level sets V(x) = c* have the outer set C_N as an external level set (for c* = 1). The inner level sets change the polytopic shape in order to approach the boundary of the inner set Ω_max.

Fig. 4.9 Lyapunov function (a, a PWA function over 25 regions) and Lyapunov level curves (b) for the interpolation based control method for example 4..

The control law over the state space partition is

132 4.3 Interpolation based on linear programming - Explicit solution if x(k) x (k)+.59x 2 (k) 2.23 if x(k) x (k).32x 2 (k)+.2 if.6. x(k) x (k).8x 2 (k)+.65 if x(k) x (k).4x 2 (k)+2.2 if. x(k) u(k)= if x(k) x (k)+.59x 2 (k)+2.23 if x(k) x (k).32x 2 (k).2 if.6. x(k) x (k).8x 2 (k).65 if x(k) x (k).4x 2 (k) 2.2 if x(k) x (k).98x 2 (k) if x(k) (4.25)

In view of comparison, consider the explicit model predictive control method with a prediction horizon of 4 steps. Figure 4.(a) presents the state space partition, with a number of regions n_r = 37. Merging the polyhedral regions with an identical piecewise affine control function, the reduced state space partition is obtained in Figure 4.(b).

Fig. 4. State space partition before and after merging for example 4. using the explicit model predictive control method. Before merging, number of regions n_r = 37. After merging, n_r = 97.

The comparison of the interpolation based control method and the explicit MPC in terms of the number of regions before and after merging is given in Table 4..

Table 4. Number of regions for the interpolation based control method versus the explicit MPC for example 4..

                              | Before Merging | After Merging
Interpolation based control   | 25             |
Explicit MPC                  |                |

Figure 4.(a) and Figure 4.(b) show the control value as a piecewise affine function of the state with the interpolation based control method and the MPC method, respectively. For the initial condition x(0) = [ ]^T, Figure 4.2(a) and Figure 4.2(b) show the results of a time-domain simulation for these two control laws.

As a final analysis element, Figure 4.3 presents the interpolating coefficient c*(k) as a function of time. As expected, this function is positive and non-increasing. It is interesting to note that c*(k) = 0 for all k ≥ 5, implying that from time instant k = 5 the state of the closed-loop system is in the invariant set Ω_max and, as a consequence, optimal in the MPC cost function terms. The monotonic decrease and the positivity confirm the Lyapunov interpretation given in the present section.
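The interpolating coefficient reported in Figure 4.3 is obtained by solving, at every sampling instant, the linear program of the implicit scheme. A minimal sketch of this LP, written with SciPy (the routine below and the box-shaped polytopes at the end are our own placeholders, not data from the example), is the following:

    import numpy as np
    from scipy.optimize import linprog

    def interpolation_lp(x, F_N, g_N, F_o, g_o):
        """Minimize the interpolating coefficient c subject to
           F_N r_v <= c g_N,  F_o (x - r_v) <= (1 - c) g_o,  0 <= c <= 1,
        with decision variables [r_v; c].  Returns (c_star, r_v_star)."""
        n = x.size
        cost = np.r_[np.zeros(n), 1.0]                    # minimize c
        A_ub = np.vstack([np.hstack([F_N, -g_N.reshape(-1, 1)]),
                          np.hstack([-F_o, g_o.reshape(-1, 1)])])
        b_ub = np.r_[np.zeros(F_N.shape[0]), g_o - F_o @ x]
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * n + [(0.0, 1.0)])
        return res.x[-1], res.x[:n]

    # Placeholder boxes standing in for C_N and Omega_max.
    F_N = np.vstack([np.eye(2), -np.eye(2)]); g_N = np.array([4.0, 1.0, 4.0, 1.0])
    F_o = np.vstack([np.eye(2), -np.eye(2)]); g_o = 0.5 * np.ones(4)
    c_star, r_v = interpolation_lp(np.array([2.0, 0.4]), F_N, g_N, F_o, g_o)

Logging c_star along a closed-loop simulation reproduces curves of the type shown in Figure 4.3.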

Fig. 4. Control value as a piecewise affine function of state for example 4. with (a) the interpolation based control method and (b) the MPC method.

Fig. 4.2 State and input trajectory for example 4.. The dashed red curve is obtained by using the explicit MPC method and the solid blue curve is obtained by using the interpolation based control method.

Fig. 4.3 Interpolating coefficient c* as a function of time for example 4..

4.4 Performance improvement for the interpolation based control

The interpolation based control method in Section 4.2 and Section 4.3 can be seen as an approximation of model predictive control, which in the last decade has

received significant attention in the control community [62], [83], [4], [3], [9], [66]. From this point of view, it is worthwhile to obtain an interpolation based controller with some given level of accuracy in terms of performance compared with the optimal MPC one. Naturally, the approximation error can be a measure of the level of accuracy. Methods for computing bounds on the approximation error are by now well known in the literature, see for example [62], [9] or [4].

Obviously, the simplest way of improving the contraction factor of the interpolation based control scheme is to use the intermediate s-step controlled invariant sets C_s with s < N. Then there will be not only one level of interpolation but two, or virtually any number of interpolations, as necessary from the performance point of view. For simplicity of the presentation, we provide in the following a brief study of the case when one intermediate controlled invariant set C_s is used. Let this set C_s be polyhedral, of the form

C_s = {x ∈ R^n : F_s x ≤ g_s}    (4.26)

and satisfying Ω_max ⊆ C_s ⊆ C_N.

Remark 4.9. It has to be noted, however, that the expected increase in performance comes at the price of complexity, since this intermediate set needs to be stored along with its vertex controller.

The vertex controller can be applied for the polyhedral set C_s, since C_s is controlled invariant. For further use, the vertex control law applied for the set C_s is denoted as u_s. Using the same philosophy as in Section 4.2, the state x is decomposed as follows.

1. If x ∈ C_N and x ∉ C_s, then x is decomposed as

x = c_1 x_v + (1 − c_1) x_s    (4.27)

with x_v ∈ C_N, x_s ∈ C_s and 0 ≤ c_1 ≤ 1. The corresponding control action is then computed as

u = c_1 u_v + (1 − c_1) u_s    (4.28)

2. Else x ∈ C_s is decomposed as

x = c_2 x_s + (1 − c_2) x_o    (4.29)

with x_s ∈ C_s, x_o ∈ Ω_max and 0 ≤ c_2 ≤ 1. The control action is then computed as

u = c_2 u_s + (1 − c_2) u_o    (4.30)

Depending on the value of x, at each time instant either the interpolating coefficient c_1 or c_2 is minimized in order to be as close as possible to the optimal controller. This can be done by solving the following (in a parallelized manner) nonlinear optimization problems.

Fig. 4.4 Two-level interpolation for improving the performance (the sets C_N, C_s and Ω_max).

1. If x ∈ C_N \ C_s

c_1* = min_{x_v, x_s, c_1} {c_1}    (4.31)

subject to

F_N x_v ≤ g_N
F_s x_s ≤ g_s
c_1 x_v + (1 − c_1) x_s = x
0 ≤ c_1 ≤ 1

2. Else x ∈ C_s

c_2* = min_{x_s, x_o, c_2} {c_2}    (4.32)

subject to

F_s x_s ≤ g_s
F_o x_o ≤ g_o
c_2 x_s + (1 − c_2) x_o = x
0 ≤ c_2 ≤ 1

or, by changing variables r_v = c_1 x_v and r_s = c_2 x_s, the nonlinear optimization problems (4.31) and (4.32) can be transformed into the following LP problems.

1. If x ∈ C_N \ C_s

c_1* = min_{r_v, c_1} {c_1}    (4.33)

subject to

F_N r_v ≤ c_1 g_N
F_s (x − r_v) ≤ (1 − c_1) g_s
0 ≤ c_1 ≤ 1

2. Else x ∈ C_s

c_2* = min_{r_s, c_2} {c_2}    (4.34)

subject to

F_s r_s ≤ c_2 g_s
F_o (x − r_s) ≤ (1 − c_2) g_o
0 ≤ c_2 ≤ 1

The following theorem shows recursive feasibility and asymptotic stability of the interpolation based control (4.27), (4.28), (4.29), (4.30), (4.33), (4.34).

Theorem 4.6. The control law using interpolation based on the solution of the LP problems (4.33), (4.34) guarantees recursive feasibility and asymptotic stability of the closed loop system for all initial states x(0) ∈ C_N.

Proof. The proof of the theorem is omitted here, since it follows the same steps as those presented in the feasibility proof 4.1 and the stability proof 4.2 in Section 4.2.

Remark 4.10. Clearly, instead of the second level of interpolation, the MPC approach can be applied for all states inside the set C_s. This has very practical consequences in applications, since it is well known that the main issue of the MPC method for nominal discrete-time linear time-invariant systems is the trade-off between the overall complexity (computational cost) and the size of the domain of attraction. If the prediction horizon of the MPC method is short, then the domain of attraction is small. If the prediction horizon is long, then the computational cost may be very burdensome for the available hardware. Here the MPC method with a short prediction horizon (equal to one, strictly speaking) is used for the performance, and then, for enlarging the domain of attraction, the interpolation based on linear programming (4.33) is used. In this way one can achieve both the performance and the domain of attraction with a relatively small computational cost.

For the continuity of the control law (4.27), (4.28), (4.29), (4.30), (4.33), (4.34), the following theorem holds.

Theorem 4.7. The control law from the interpolation based on linear programming (4.33), (4.34) can be represented as a continuous function of the state.

Proof. Clearly, the discontinuity of the control law may arise only on the boundary of the intermediate set C_s, since for all x ∈ C_N \ C_s or for all x ∈ C_s, the interpolation based controller (4.27), (4.28), (4.33) or (4.29), (4.30), (4.34) is continuous.

138 4.4 Performance improvement for the interpolation based control 23 It is clear that for allx Fr(C s ), the LP problems (4.33), (4.34) have a trivial solution c =, c 2 = Hence the control action for the interpolation based on 4.27), (4.28), (4.33) is u=u s and the control action for the interpolation based on (4.29), (4.3), (4.34) is u =u s. These control actions coincide and turn out to be the vertex controller for the setc s. Hence the continuity of the control law is guaranteed. Remark4.. It is interesting to note that by usingn intermediate setsc i together with the setsc N and Ω max, a continuous minimum-time controller is obtained, i.e. a controller that steers all statex C N in Ω max in no more thann steps. Concerning the explicit solution of the interpolation based controller using the intermediate setc s, with the same argument as in Section 4.3, it can be concluded that Ifx C N \C s (orx C s \ Ω max ), the smallest valuec (orc 2 ) will be reached when the regionc N \C s (orc S \ Ω max ) is decomposed into partitions in form of simplices with vertices either on the boundary ofc N or on the boundary of C s (or on the boundary ofc s or on the boundary of Ω max ). The control law in each partition is piecewise affine function of state whose gains are obtained by interpolation of control values at the vertices of the simplex. Ifx Ω max, then the control law is the optimal unconstrained controller. Example 4.2. Consider again the discrete-time linear time-invariant system in example 4. with the same state and control constraints. The local feedback controller is the same as in example 4. K =[ ] (4.35) With the local controllerk, the sets Ω max,c s withs=4 andc N withn= 4 is constructed. The representation of the sets Ω max andc N can be found in example 4.. The set of verticesv s of the polytopec s is [ ] V s = (4.36) and the set of the corresponding control actions at the verticesv s is U s = [ ] (4.37) Using explicit multi-parametric linear programming, the state space partition is obtained in Figure 4.5(a). Merging the regions with identical control laws, the reduced state space partition is obtained in Figure 4.5(b). This figure also shows state trajectories of the closed-loop system for different initial conditions. Figure 4.6 shows the control value as a piecewise affine function of the state with two-level interpolation.
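The switch between the two optimization problems of the two-level scheme can be prototyped along the same lines. In the sketch below (our own illustration, with hypothetical names), a single generic LP implements both (4.33) and (4.34); the pair of sets passed to it is chosen according to whether the measured state lies in C_s.

    import numpy as np
    from scipy.optimize import linprog

    def level_lp(x, F_out, g_out, F_in, g_in):
        """Generic form of (4.33)/(4.34): interpolate between an outer set
        {F_out x <= g_out} and an inner set {F_in x <= g_in}."""
        n = x.size
        cost = np.r_[np.zeros(n), 1.0]
        A_ub = np.vstack([np.hstack([F_out, -g_out.reshape(-1, 1)]),
                          np.hstack([-F_in, g_in.reshape(-1, 1)])])
        b_ub = np.r_[np.zeros(F_out.shape[0]), g_in - F_in @ x]
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * n + [(0.0, 1.0)])
        return res.x[-1], res.x[:n]

    def two_level_coefficient(x, F_N, g_N, F_s, g_s, F_o, g_o):
        if np.all(F_s @ x <= g_s):
            return ("c2",) + level_lp(x, F_s, g_s, F_o, g_o)   # problem (4.34)
        return ("c1",) + level_lp(x, F_N, g_N, F_s, g_s)       # problem (4.33)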

139 24 4 Interpolation Based Control Nominal State Feedback Case x 2 x x (a) Before merging (b) After merging x Fig. 4.5 State space partition before and after merging for example 4.2. Before merging, the number of regions isn r = 37. After merging,n r = 9. PWA function over 9 regions u 5 x x Fig. 4.6 Control value as a piecewise affine function of state for example 4.2 using two-level interpolation. For the initial condition x() = [ ] T, Figure 4.7 shows the results of a time-domain simulation. The two curves correspond to the one-level and two-level interpolation based control, respectively Figure 4.8 presents the interpolating coefficients as a function of time. As expectedc andc 2 are positive and non-increasing. It is also interesting to note that c (k)= for allk, indicating thatxis insidec s andc (k)= for allk 4, indicating that statexis inside Ω max. 4.5 Interpolation based on quadratic programming The interpolation based control method in Section 4.2 and Section 4.3 makes use of linear programming, which is extremely simple. However, the main issue regarding the implementation of the algorithm 4. is the non-uniqueness of the solution. Multiple optima are undesirable, as they might lead to a fast switching between the

140 u 4.5 Interpolation based on quadratic programming 25 5 Two level One level Two level One level x Time (Sampling) x Two level One level Time (Sampling) (a) State trajectory Time (Sampling) (b) Input trajectory Fig. 4.7 State and input trajectories for example 4.2. The dashed red curve is obtained by using the one-level interpolation based control, and the solid blue curve is obtained by using the two-level interpolation based control. * c Time (Sampling) * c Time (Sampling) Fig.4.8 Interpolating coefficients as a function of time for example 4.2. different optimal control actions when the LP problem (4.) is solved on-line. Traditionally model predictive control has been formulated using a quadratic criterion []. Hence, also in interpolation based control it is worthwhile to investigate the use of quadratic programming. Before introducing a QP formulation let us note that the idea of using QP for interpolation control is not new. In [], [32], Lyapunov theory is used to compute an upper bound of the infinite horizon cost function J = k= { x(k) T Qx(k)+u(k) T Ru(k) } (4.38) whereq andr are the state and input weighting matrices. At each time instant, the algorithm in [] uses an on-line decomposition of the current state, with each component lying in a separate invariant set, after which the corresponding controller is applied to each component separately in order to calculate the control action. Polytopes are employed as candidate invariant sets. Hence, the on-line optimization problem can be formulated as a QP problem. The approach taken in this

141 26 4 Interpolation Based Control Nominal State Feedback Case section follows ideas originally proposed in [], [32]. In this setting we provide a QP based solution to the constrained control problem. This section begins with a brief summary of the work of Bacic et al. [], [32]. For this purpose, it is assumed that using established results in control theory (LQR, LMI based, etc), one obtains a set of unconstrained asymptotically stabilizing feedback controllersu(k)=k i x(k),i=,2,...,r such that the corresponding invariant sets Ω i X { } Ω i = x R n :F o (i) x g (i) o (4.39) are non-empty fori=,2,...,r. Denote Ω as a convex hull of the sets Ω i,i=,2,...,r. It follows that Ω X, since Ω i X for alli=,2,...,r. Any statex(k) Ω can be decomposed as follows where x Ω i for alli=,2,...,r and x(k)=λ x + λ 2 x λ r x r (4.4) r i= λ i =, λ i With a slight abuse of notation, denotex i = λ i x i. Since x i Ω i, it follows that x i λ i Ω i or equivalently F o (i) x i λ i g (i) o (4.4) for alli=,2,...,r. Consider the following control law u(k)= r i= λ i K i x i = r i= K i x i (4.42) whereu(k)=k i x i (k) is the control law, associated to the invariant construction of the set Ω i. One has or x(k+ ) =Ax(k)+Bu(k)=A r x i (k)+b r K i x i (k) i= i= = r (A+BK i )x i (k) i= x(k+ )= r i= wherex i (k+ )=A ci x i (k) anda ci =A+BK i. Denote a vectorz R rn as follows From equation (4.43), it follows that z= [ x T x T 2... x T r x i (k+ ) (4.43) ] T

142 4.5 Interpolation based on quadratic programming 27 where z(k+ )=Φz(k) (4.44) A c... A c2... Φ = A cr With the given state and control weighting matricesq R n n andr R m m, consider the following quadratic function where matrixp R rn rn,p is chosen to satisfy with V(z)=z T Pz (4.45) V(z(k+ )) V(z(k)) x(k) T Qx(k) u(k) T Ru(k) (4.46) From equation (4.44), the left hand side of inequality (4.46) can be rewritten as V(z(k+ )) V(z(k))=z(k) T (Φ T PΦ P)z(k) (4.47) The right hand side of inequality (4.46) can be rewritten as x(k) T Qx(k) u(k) T Ru(k)=z(k) T (Q +R )z(k) (4.48) I K T I Q =. Q[ II...I ] K T 2,R =.. R[ ] K K 2...K r I Kr T From equations (4.46), (4.47) and (4.48), one gets Φ T PΦ P Q +R or by using the Schur complement, one obtains [ P+Q +R A T ] cp (4.49) PA c P Clearly, problem (4.49) is linear with respect to the matrix P. This problem is feasible since matrix Φ has a sub-unitary spectral radius. One way to obtain matrix P is to solve the following LMP problem subject to constraints (4.49). min{trace(p)} (4.5) P
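The matrix P of (4.45) can be computed with any semidefinite programming solver from the LMI (4.49) by minimizing the trace of P. A particular feasible choice, which enforces the decrease condition (4.46) with equality, only requires a discrete Lyapunov equation. The sketch below is our own illustration (the gains and model at the end are placeholders); it builds Φ, Q_1 and R_1 from a family of local gains and returns such a P.

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    def composite_P(A, B, K_list, Q, R):
        """Build Phi = blkdiag(A + B K_i), Q_1 and R_1 as in (4.47)-(4.48) and
        return P with Phi^T P Phi - P = -(Q_1 + R_1), one feasible choice for
        the decrease condition (4.46)."""
        n, r = A.shape[0], len(K_list)
        Phi = np.zeros((r * n, r * n))
        for i, K in enumerate(K_list):
            Phi[i * n:(i + 1) * n, i * n:(i + 1) * n] = A + B @ K
        E = np.hstack([np.eye(n)] * r)       # x = E z
        Kbar = np.hstack(K_list)             # u = Kbar z
        Q1 = E.T @ Q @ E
        R1 = Kbar.T @ R @ Kbar
        return solve_discrete_lyapunov(Phi.T, Q1 + R1)

    # Placeholder data: two stabilizing gains for an assumed second-order model.
    A = np.array([[1.0, 1.0], [0.0, 1.0]]); B = np.array([[1.0], [0.3]])
    K1 = np.array([[-0.5, -1.0]]); K2 = np.array([[-0.2, -0.6]])
    P = composite_P(A, B, [K1, K2], np.eye(2), np.array([[1.0]]))

Minimizing trace(P) subject to the LMI, as proposed above, generally yields a less conservative bound than this equality solution.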

143 28 4 Interpolation Based Control Nominal State Feedback Case At each time instant, for a given current statex, consider the following optimization problem subject to min x i,λ i [ x T x T 2...xT r F (i) r i= r i= ] P x x 2.. x r o x i λ i g (i) o, i=,2,...,r x i =x λ i =, λ i (4.5) and implement as input the control actionu= r K i x i. i= Theorem 4.8.[],[32] The control law using interpolation based on the solution of the problem(4.5) guarantees recursive feasibility and the closed loop system is asymptotically stable for all initial states x() Ω. Proof. See [], [32]. Using the approach in [], [32], it can be observed that, at each time instant we are trying to minimizex,x 2,...,x r in the weighted Euclidean norm sense. This is somehow a conflicting task, since x +x x r =x In addition, if the first controller is an optimal controller and play the role of a performance controller, and the remaining controller is used to enlarge the domain of attraction, then one would like to be as close as possible to the first controller, i.e. to the optimal one. This means that in the interpolation scheme (4.4), one would like to havex =x and x 2 =x 3 =...=x r = whenever it is possible. And it is not trivial how to do this with the approach in [], [32]. Below we provide a contribution to this line of research by considering one of the interpolation factors, i.e. control gains to be a performance related one, while the remaining factors play the role of degrees of freedom to enlarge the domain of attraction. This alternative approach can provide the appropriate framework for the constrained control design which builds on the unconstrained controller (generally with high gain) and subsequently need to adjusted them to cope with the constraints and limitations (via interpolation with adequate low gain controllers). From this point of view, in the remaining part of this section we try to build a bridge between the linear interpolation scheme presented in Section 4.2 and the QP based interpolation approaches in [], [32].
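Before introducing the modified objective, we note that the decomposition-based optimization problem recalled above can be prototyped directly with a convex-optimization modelling tool. The sketch below is our own illustration with hypothetical names: P is assumed to be the positive semidefinite matrix computed off-line, and F_list, g_list describe the invariant sets Ω_i. The applied input is then u = Σ_i K_i x_i.

    import numpy as np
    import cvxpy as cp

    def decomposition_qp(x, P, F_list, g_list):
        """Decompose x into components x_i with x_i in lambda_i * Omega_i and
        minimize [x_1; ...; x_r]^T P [x_1; ...; x_r]."""
        n, r = x.size, len(F_list)
        X = [cp.Variable(n) for _ in range(r)]
        lam = cp.Variable(r, nonneg=True)
        z = cp.hstack(X)
        P = 0.5 * (P + P.T)                              # enforce exact symmetry
        constraints = [sum(X) == x, cp.sum(lam) == 1]
        constraints += [F_list[i] @ X[i] <= lam[i] * g_list[i] for i in range(r)]
        cp.Problem(cp.Minimize(cp.quad_form(z, P)), constraints).solve()
        return [Xi.value for Xi in X], lam.value

    # Placeholder data: two nested boxes playing the role of Omega_1, Omega_2.
    F = np.vstack([np.eye(2), -np.eye(2)])
    parts, lam = decomposition_qp(np.array([0.7, 0.2]), np.eye(4),
                                  [F, F], [0.5 * np.ones(4), np.ones(4)])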

144 4.5 Interpolation based on quadratic programming 29 For the given set of state and control weighting matricesq i R n n,r i R m m andq i,r i, consider the following set of quadratic functions V i (x i )=x T ip i x i, i=2,3,...,r (4.52) where matrixp i R n n andp i is chosen to satisfy V i (x i (k+ )) V i (x i (k)) x i (k) T Q i x i (k) u i (k) T R i u i (k) (4.53) From inequality (4.53) and sincex i (k+ )=A ci x i (k), it follows that A T cip i A ci P i Q i K T i R i K i By using the Schur complement, one obtains [ Pi Q i Ki T R i K i A T ci P ] i (4.54) P i A ci P i Since matrixa ci has a sub-unitary spectral radius, inequality (4.54) is always feasible. One way to obtain matrixp i is to solve the following LMI problem subject to constraint (4.54). Define the vectorz R (r )(n+) as follows min P i {trace(p i )} (4.55) z =[x T 2 x T 3... x T r λ 2 λ 3... λ r ] T With the vectorz, consider the following quadratic function V (z )= r i=2 x T ip i x i + r i=2 λ 2 i (4.56) We underline the fact that the sums are built on indices{2,...r}, corresponding to the more poorly performing controllers. At each time instant, consider the following optimization problem

145 3 4 Interpolation Based Control Nominal State Feedback Case subject to the constraints F (i) r i= min z {V (z )} (4.57) o x i λ i g (i) o, i=,2,...,r x i =x r i= λ i =, λ i and apply as input the control signalu= r {K i x i }. i= Theorem 4.9. The control law based on solving on-line the optimization problem (4.57) guarantees recursive feasibility and asymptotic stability for all initial states x() Ω. Proof. Theorem 4.9 makes two important claims, namely the recursive feasibility and the asymptotic stability. These can be treated sequentially. Recursivefeasibility: It has to be proved thatf u u(k) g u andx(k+ ) Ω for allx(k) Ω. It holds that and r F u u(k) =F u λ i K i x i = r λ i F u K i x i i= i= r λ i g u =g u i= x(k+ )=Ax(k)+Bu(k)= r λ i A ci x i (k) i= SinceA ci x i (k) Ω i Ω, it follows thatx(k+ ) Ω. Asymptoticstability: Consider the positive functionv (z ) as a candidate Lyapunov function. It is clear that, ifx o (k),xo 2 (k),...,xo r(k) and λ o (k),λo 2 (k),...,λo r (k) is the solution of the optimization problem (4.57) at time instantk, then and x i (k+ )=A ci x o i(k) λ i (k+ )=λ o i (k) for alli=,2,...,r is a feasible solution to the optimization problem (4.57) at time instantk+. Since at each time instant we are trying to minimizev(z), it follows that V (z o (k+ )) V (z (k+ )) therefore V (z o (k+ )) V (z o (k)) V (z (k+ )) V (z o (k))

146 4.5 Interpolation based on quadratic programming 3 together with inequality (4.53), one obtains V (z o (k+ )) V (z o (k)) r i=2 ( x T i Q i x i +u T ir i u i ) Therefore V(z) is a Lyapunov function and the interpolation based controller assures asymptotic stability for allx Ω. Clearly, the objective function (4.57) can be written as where min z {z T Hz } (4.58) P P H =...P r And the constraints of the optimization problem (4.57) can be rewritten as F o () (x x 2... x r ) ( λ 2... λ r )g () o F o (2) x 2 λ 2 g (2) o. F o (r) x r λ r g o (r) λ i, i=2,...,r λ 2 + λ λ r or, equivalently where Gz S+Ex (4.59)

147 32 4 Interpolation Based Control Nominal State Feedback Case F o () F o ()... F o () g () o g () o... g () o G= F o (r)... g (r) o [ ] T S= (g () o ) T [ ] T E = (F o () ) T F o (2)... g (2) o... F o (3)... g (3) o... Hence, the optimization problem (4.57) is transformed into the quadratic programming problem (4.58) subject to the linear constraints (4.59). It is worth noticing that for allx Ω, the QP problem (4.58) subject to the constraints (4.59) has a trivial solution, namely { xi =, i=2,3,...,r λ i = Hencex =x and λ =. That means, inside the invariant set Ω, the interpolating controller turns out to be the optimal unconstrained controller. An on-line algorithm for the interpolation based controller via quadratic programming is Algorithm 4.3: Interpolation based control via quadratic programming. Measure the current state of the systemx(k). 2. Solve the QP problem (4.58) subject to the constraints (4.59). 3. Apply the control input (4.42). 4. Wait for the next time instantk:=k+. 5. Go to step and repeat. Remark 4.2. Note that algorithm 4.3 requires the solution of the QP problem (4.58) of dimension (r )(n+) whereris the number of interpolated controllers andn is the dimension of state. Clearly, solving the QP problem (4.58) can be computationally expensive when the number of interpolated controllers is big. In practice, it is usually enough withr=2 orr=3. Example 4.3. Consider again the discrete-time linear time-invariant system in example (4.) with the same state and control constraints. Two linear feedback controllers are chosen as

148 4.5 Interpolation based on quadratic programming 33 { K =[ ] K 2 =[ ] (4.6) The first controlleru(k)=k x(k) is a high controller and plays the role of the performance controller, while the second controlleru(k)=k 2 x(k) will be used to enlarge the domain of attraction. Figure 4.9(a) shows the invariant sets Ω and Ω 2 correspond to the controllers K andk 2 respectively. Figure 4.9(b) shows different state trajectories of the closed loop system for different initial conditions. The state trajectories are obtained by solving on-line quadratic programming problem (4.58) subject to the constraints (4.59) Ω 2 2 x 2 x 2 Ω x (a) Feasible invariant sets x (b) State trajectories Fig.4.9 Feasible invariant sets and state trajectories of the closed loop system for example 4.3. The sets Ω and Ω 2 are presented in minimal normalized half-space representation as.. Ω = x R 2 : x and

149 34 4 Interpolation Based Control Nominal State Feedback Case Ω 2 = x R 2 : x With the weighting matrices [ ] Q 2 =, R 2 = and by solving the LMI problem (4.55), one obtains [ ] P 2 = (4.6) For the initial condition x() = [ ] T, Figure 4.2(a) and 4.2(b) present the state and input trajectory of the closed loop system as a function of time. The solid blue line is obtained by solving the QP problem (4.58). The dashed red line is obtained by solving the QP interpolation using algorithm in [32]. For the algorithm in [32], the matrixpin the optimization problem (4.5) is computed as P= for the following weighting matrices [ ] Q=, R=

150 u 4.6 An improved interpolation based control method in the presence of actuator saturation 35 8 Proposed approach Rossiter s approach.2 Proposed approach Rossiter s approach 6 x Time (Sampling).4 x Proposed approach Rossiter s approach Time (Sampling) (a) State trajectory Time (Sampling) (b) Input trajectory Fig.4.2 State and input trajectory of the closed loop system for example 4.3. The solid blue line is obtained by solving the QP problem (4.58). The dashed red line is obtained by using the algorithm in [32]. The interpolating coefficient λ 2 and the Lyapunov functionv(z) as a function of time are depicted in Figure 4.2. As expectedv(z) is a positive and non-increasing function. Interpolating coefficient Time (Sampling) Lyapunov function Time (Sampling) Fig.4.2 Interpolating coefficient λ 2 (k) and Lyapunov functionv(z) as a function of time for example An improved interpolation based control method in the presence of actuator saturation In this section, in order to fully utilize the capability of actuators and guarantee the input constraints satisfaction without handling an unnecessarily complex optimization-based control, a saturation function on the input is considered. Saturation will guarantee that the plant input constraints are satisfied. In our design we

151 36 4 Interpolation Based Control Nominal State Feedback Case exploit the fact that the saturating linear feedback law can be expressed as a convex hull of a group of linear feedback laws according to Hu et al. [59]. Thus, the auxiliary control laws in the convex hull rather than the actual control law will handle the input constraints. For simplicity, only the single-input single-output system case is considered here, although extensions to the multi-input multi-output system case are straightforward. Since the saturation function on the input is considered, the system (4.) becomes x(k+ )=Ax(k)+Bsat(u(k)) (4.62) Clearly, the use of a saturation function is an appropriate choice only for the input constraints (4.2) in a form u(k) U,U ={u R:u l u u u } (4.63) whereu l andu u are respectively the lower and upper bounds of inputu. It is assumed thatu l andu u are constant withu l < andu u > such that the origin is contained in the interior ofu. Recall that the state constraints remain the same as in (4.2). From Lemma 2., Section 2.4., recall that for a given stabilizing controller u(k)=kx(k), the saturation function 4 can be expressed as sat(kx(k)) = α(k)kx(k) +( α(k))hx(k) (4.64) for allxsuch thatu l Hx u u and with a suitable choice of α(k). The vectorh R n can be computed using theorem 2.2. Based on procedure 2.5 in Section 2.4., a polyhedral set Ω H s can be computed, with invariance properties with respect to the dynamics x(k+ )=Ax(k)+Bsat(Kx(k)) (4.65) It is assumed that a set of asymptotically stabilizing feedback controllersk i R n is available as well as a set of auxiliary vectorsh i R n for alli =,2,...,r such that the corresponding invariant sets Ω H i s X { } Ω H i s = x R n :F o (i) x g (i) o (4.66) are non-empty fori=,2,...,r. With a slight abuse of notation, denote Ω s as a convex hull of the sets Ω H i s. It follows that Ω s X, since Ω H i s X for alli=,2,...,r. Any statex(k) Ω s can be decomposed as follows x(k)= r i= where x i (k) Ω H i s for alli=,2,...,r and λ i x i (k) (4.67) 4 See Section 2.4. for more details.

152 4.6 An improved interpolation based control method in the presence of actuator saturation 37 r i= Consider the following control law u(k)= Based on Lemma 2., one obtains u(k)= r i= λ i =, λ i r i= λ i sat(k i x i (k)) (4.68) λ i (α i (k)k i +( α i (k))h i ) x i (k) (4.69) where α i for alli=,2,...,r. Similar with the notation employed in the Section 4.5, we denotex i = λ i x i. Since x i Ω H i s, it follows thatx i λ i Ω H i s or F (i) o x i λ i g (i) o, i=,2,...,r (4.7) From equations (4.67) and (4.69), one obtains x= r x i i= u= r (4.7) (α i K i +( α i )H i )x i i= As in Section 4.5, the first controller, identified by the high gaink will play the role of a performance controller, while the remaining low gain controllers will be used to enlarge the domain of attraction. With the control input as in the form (4.7), it is clear thatu(k) U, k. Hencesat(u(k))=u(k). It follows that where or x(k+ ) =Ax(k)+Bsat(u(k))=Ax(k)+Bu(k) =A r x i (k)+b r (α i K i +( α i )H i )x i (k) i= i= = r x i (k+ ) i= x i (k+ )={A+B(α i K i +( α i )H i )}x i (k) x i (k+ )=A ci x i (k) (4.72) witha ci =A+B(α i K i +( α i )H i ). For the given state and control weighting matricesq i R n n andr i R, consider the following set of quadratic functions

153 38 4 Interpolation Based Control Nominal State Feedback Case where matrixp i R n n,p i is chosen to satisfy V i (x i )=x T ip i x i, i=2,3,...,r (4.73) V i (x i (k+ )) V i (x i (k)) x i (k) T Q i x i (k) u i (k) T R i u i (k) (4.74) By using the same arguments as in the previous section, inequality (4.74) can be rewritten as A T cip i A ci P i Q i (α i K i +( α i )H i ) T R i (α i K i +( α i )H i ) DenoteY i =(α i K i +( α i )H i ), by using the Schur complement, the above condition can be transformed into [ Pi Q i Y T i R i Y i A T ci P i P i A ci P i ] or, equivalently [ Pi A T ci P i P i A ci P i ] [ Qi +Y T i R i Y i ] DenoteQ 2 i andr 2 i as the Cholesky factor of the matricesq i andr i, which satisfy (Q 2 i ) T Q 2 i =Q i and (R 2 i ) T R 2 i =R i. The previous condition can be rewritten as [ Pi A T ci P i P i A ci P i ] [ 2 (Q i ) T Yi T 2 (R i ) T or by using the Schur complement, one obtains P i A T ci P i (Q 2 i ) T Y T 2 i (R P i A ci P i Q 2 i I R 2 i Y i I ][ i ) T Q 2 i R 2 i Y i ] (4.75) The left hand side of inequality (4.75) is linear in α i, and hence reaches its minimum at either α i = or α i =. Consequently, the set of LMI conditions to be checked is following (4.75) and the fact thaty i = α i K i +( α i H i )

154 4.6 An improved interpolation based control method in the presence of actuator saturation 39 P i (A+BK i ) T 2 P i (Q i ) T 2 (R i K i ) T P i (A+BK i ) P i Q 2 i I R 2 i K i I P i (A+BH i ) T 2 P i (Q i ) T (R (4.76) 2 i H i ) T P i (A+BH i ) P i Q 2 i I R 2 i H i I Condition (4.76) is linear with respect to the matrixp i. One way to calculatep i is to solve the following LMI problem min P i {trace(p i )} (4.77) subject to constraint (4.76). Once the matricesp i withi=2,3,...,r are computed, they can be used in practice for real-time control based on the following algorithm, which can be formulated as an optimization problem that is efficient with respect to structure and complexity. At each time instant, for a given current statex, minimize on-line the quadratic cost function r min{ x i,λ i i=2 x T ip i x i + subject to the linear constraints F o (i) x i λ i g (i) o, i=,2,...,r x i =x r i= r r i=2 λ i = i= λ i, i=,2,...,r. λ 2 i } (4.78) Theorem 4.. The control law based on solving the optimization problem(4.78) guarantees recursive feasibility and asymptotic stability of the closed loop system forallinitialstatesx() Ω s. Proof. The proof of this theorem is similar to the one of theorem 4.9. Hence it is omitted here. Example 4.4. Consider again the discrete-time linear time-invariant system in example (4.) with the same state and control constraints. Two linear feedback controllers are chosen as { K =[.95.37] (4.79) K 2 =[ ]

155 4 4 Interpolation Based Control Nominal State Feedback Case Based on theorem 2.2, two auxiliary matrices are computed as { H =[ ] H 2 =[ ] (4.8) With the auxiliary matricesh andh 2, the invariant sets Ω H s and Ω H 2 s are respectively constructed for the saturated controllersu=sat(k x) andu=sat(k 2 x), see Figure 4.22(a). Figure 4.22(b) shows different state trajectories of the closed loop system for different initial conditions Ω s H x 2 Ω s H x x (a) Feasible invariant sets x (b) State trajectories Fig.4.22 Feasible invariant sets and state trajectories of the closed loop system for example 4.4. The sets Ω H s and Ω H 2 s are presented in minimal normalized half-space representation as Ω H s = x R 2 : x and

156 4.7 Convex hull of ellipsoids Ω H 2 s = x R 2 : x With the weighting matrices [ ] Q 2 =,R 2 =. and by solving the LMI problem (4.77), one obtains [ ] P 2 = For the initial conditionx()=[ ] T, Figure 4.23 presents the state and input trajectory of the closed loop system as a function of time. The solid blue line is obtained by using the interpolation based control method 4.78, while the dashed red line is obtained by using the saturated controlleru=sat(k 2 x), which is the controller corresponding to the largest invariant set. The interpolating coefficient λ 2 and the objective function as a Lyapunov function are shown in Figure Convex hull of ellipsoids In this section, the convex hull of a family of ellipsoids is used for estimating the stability domain for a constrained control system. This is motivated by problems arising from the estimation of the domain of attraction of stable dynamics and the

157 u 42 4 Interpolation Based Control Nominal State Feedback Case x Interpolation based control 8 u = sat(k 2 x) Time (Sampling) Interpolation based control u = sat(k 2 x) Interpolation based control u = sat(k 2 x).2.4 x Time (Sampling) (a) State trajectory Time (Sampling) (b) Input trajectory Fig.4.23 State and input trajectory of the closed loop system as a function of time for example 4.4. The solid blue line is obtained by using the interpolation based control method 4.78, while the dashed red line is obtained by using the saturated controlleru=sat(k 2 x). Interpolating coefficient Time (Sampling) Lyapunov function Time (Sampling) Fig.4.24 Interpolating coefficient and Lyapunov function as a function of time for example 4.4. control design which aims to enlarge such a domain of attraction. In order to briefly describe the class of problems, let us suppose that a set of invariant ellipsoids and an associated set ofsaturated control laws are available. The questions whether the convex hull of this set of ellipsoids is invariant and how to construct a control law for this region are our objectives. The fact that the convex hull of a set of invariant ellipsoids is also invariant is well known in the literature, for nominal continuous-time linear time-invariant systems, see [58], and for nominal discrete-time linear time-invariant systems, see []. In these papers, a method to construct a continuous feedback law based on a set of linear feedback laws was proposed to make the convex hull of a set of invariant ellipsoids invariant. The main contribution of this section is to provide a new type of interpolation based controller, that makes invariant the convex hull of invariant ellipsoids. It is assumed that the polyhedral state constraints X and the polyhedral input constraints U are symmetric. It is also assumed that a set of asymptotically stabilizing feedback controllersk i R m n and using theorem 2.2, a set of auxiliary matrices

158 4.7 Convex hull of ellipsoids 43 H i R m n fori =,2,...,r are available such that the corresponding ellipsoidal invariant setse(p i ) E(P i )= { x R n :x T P i x } (4.8) are non-empty fori =,2,...,r. Recall that for allx(k) E(P i ), it follows that sat(k i x) U andx(k+)=ax(k)+bsat(k i x(k)) X. Denote Ω E R n as a convex hull ofe(p i ) for alli. It follows that Ω E X, sincee(p i ) X. Any statex(k) Ω E can be decomposed as follows x(k)= r i= λ i x i (k) (4.82) with x i (k) E(P i ) and λ i are interpolating coefficients, that satisfy r i= Consider the following control law u(k)= λ i =, λ i r i= λ i sat(k i x i (k)) (4.83) wheresat(k i x i (k)) is the saturated control law, that is feasible ine(p i ). Theorem 4.. The control law(4.83) is guaranteed to be recursively feasible for anyconditionsx() Ω E. Proof. Starting with the decomposition (4.82), the control law obtained by the corresponding convex combination of the control actions is leading to the expression in (4.83). One has to prove thatu(k) U andx(k+ )=Ax(k)+Bu(k) Ω E for all x Ω E. For the input constraints, from equation (4.83) and sincesat(k i x i (k)) U, it follows thatu(k) U. For the state constraints, it holds that x(k+ ) =Ax(k)+Bu(k) =A r λ i x i (k)+b r λ i sat(k i x i (k)) i= i= = r λ i (A x i (k)+bsat(k i x i (k))) i= One hasa x i (k)+bsat(k i x i (k)) E(P i ) Ω E for alli=,2,...,r, which ultimately assures thatx(k+ ) Ω E. As in the sections 4.5 and 4.6, the first high gain controller will be used for the performance, while the rest of available low gain controllers will be used to enlarge the domain of attraction. For the given current state x, consider the following objective function

159 44 4 Interpolation Based Control Nominal State Feedback Case subject to r min{ x i,λ i i=2 x T i P i x i, i=,2,...,r r i= r λ i x i =x λ i = i= λ i, i=,2,...,r λ i } (4.84) Theorem 4.2. The control law using interpolation based on the objective function (4.84)guaranteesasymptoticstabilityforallinitialstatesx() Ω E. Proof. Let λi o be the solutions of the optimization problem (4.84) and consider the following positive function r V(x)= λi o (k) (4.85) for allx Ω E \E(P ).V(x) is a Lyapunov function candidate. For anyx(k) Ω E \E(P ), one has x(k)= r λi o(k) xo i (k) i= u(k)= r λi o(k)sat(k i x i o(k)) i= It follows that x(k+ ) =Ax(k)+Bu(k) =A r λi o(k) xo i (k)+b r λi o(k)sat(k i x i o(k)) i= i= = r λi o(k) x i(k+ ) i= where x i (k+ )=A x i o(k)+bsat(k i x i o(k)) E(P i) for alli=,2,...,r. By using the interpolation based on the optimization problem (4.84) x(k+ )= r i= where x o i (k+ ) E(P i). It follows that r λ o i=2 andv(x) is a non-increasing function. i=2 i (k+ ) λ o i (k+ ) x o i(k+ ) r i=2 λ o i (k)

160 4.7 Convex hull of ellipsoids 45 The contractive invariant property of the ellipsoide(p i ) assures that there is no initial conditionx() Ω E \E(P ) such that i=2 r i=2 λ o i (k+ ) r i=2 λi o (k) for allk. It follows thatv(x)= r λi o(k) is a Lyapunov function for allx Ω E\E(P ). The proof is completed by noting that insidee(p ), the feasible stabilizing controlleru=sat(k x) is contractive and thus the interpolation based controller assures asymptotic stability for allx Ω E. If we denotex i = λ i x i, then since x i E(P i ), it follows thatxi TP i x i λi 2. The non-linear optimization problem (4.84) can be rewritten as follows subject to xi TP r i= r r min{ x i,λ i i=2 λ i } i x i λi 2, i=,2,...,r x i =x λ i = i= λ i, i=,2,...,r or by using the Schur complement subject to [ λi xi T x i λ i P i x i =x r i= r min r x i,λ i i=2 λ i = i= λ i, i=,2,...,r λ i (4.86) ], i=,2,...,r This is an LMI optimization problem. In summary, at each time instant the interpolation based controller involves the following steps

161 46 4 Interpolation Based Control Nominal State Feedback Case Algorithm4.4 Interpolation based control - Convex hull of ellipsoids. Measure the current state of the systemx(k). 2. Solve the LMI problem (4.86). In the result, one getsxi o E(P i ) and λi o for all i=,2,...,q. 3. Forxi o E(P i ), one associates the control valueu o i =sat(k i xi o). 4. The control signal to be applied to the plantu(k) is found as a convex combination ofu o i u(k)= r i= λ o i (k)u o i Remark4.3. It is worth noticing that for allx(k) E(P ), the LMI problem (4.86) has a trivial solution λ i =, i=2,3,...,r Hence λ = andx= x. In this case, the interpolation based controller turns out to be the saturated controlleru=sat(k x). Example 4.5. Consider again the discrete-time linear time-invariant system in example (4.) with the same state and control constraints. Three linear feedback controllers are chosen as K =[.95.37], K 2 =[ ], (4.87) K 3 =[ ] Based on theorem 2.2 and the optimization problem (2.55), three auxiliary matrices are found as H =[ ], H 2 =[ ], (4.88) H 3 =[ ] With these auxiliary matrices, three invariant ellipsoidse(p ),E(P 2 ),E(P 3 ) are computed corresponding to the saturated controllersu=sat(k x),u=sat(k 2 x) and u=sat(k 3 x). The invariant sets and their convex hull are depicted in Figure 4.25(a). Figure 4.25(b) shows state trajectories of the closed loop system for different initial conditions. The matricesp,p 2 andp 3 are [ ] [ ] P =, P =, [ ] P 3 = For the initial conditionx()=[ ] T, Figure 4.26 presents the state trajectory, the input trajectory and the Lyapunov function of the closed loop system

as a function of time. As expected, the Lyapunov function, i.e. the objective function, is positive and non-increasing.

Fig. 4.25 Feasible invariant sets (a), namely the invariant ellipsoids E(P_1), E(P_2), E(P_3), and state trajectories (b) of the closed loop system for example 4.5.

Fig. 4.26 State trajectory (a), input trajectory and Lyapunov function (b) of the closed loop system as a function of time for example 4.5.
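To close the section, the on-line step of Algorithm 4.4 can be prototyped as a small conic program. The sketch below is our own illustration: it rewrites the quadratic constraints of the decomposition as second-order cone constraints through a Cholesky factor of each P_i, and the matrices at the end are placeholders rather than the data of example 4.5.

    import numpy as np
    import cvxpy as cp

    def ellipsoid_interpolation(x, P_list):
        """Split x into components x_i with x_i^T P_i x_i <= lambda_i^2, i.e.
        x_i in lambda_i * E(P_i), and minimize sum_{i>=2} lambda_i."""
        n, r = x.size, len(P_list)
        X = [cp.Variable(n) for _ in range(r)]
        lam = cp.Variable(r, nonneg=True)
        chol = [np.linalg.cholesky(P) for P in P_list]          # P_i = L_i L_i^T
        cons = [sum(X) == x, cp.sum(lam) == 1]
        cons += [cp.norm(chol[i].T @ X[i]) <= lam[i] for i in range(r)]
        cp.Problem(cp.Minimize(cp.sum(lam[1:])), cons).solve()
        return [Xi.value for Xi in X], lam.value

    # Placeholder shape matrices for two invariant ellipsoids.
    P1 = np.array([[2.0, 0.3], [0.3, 1.5]])
    P2 = np.array([[0.8, 0.1], [0.1, 0.6]])
    parts, lam = ellipsoid_interpolation(np.array([0.6, -0.4]), [P1, P2])
    # Applied input, cf. (4.83): u = sum_i lam_i * sat(K_i parts_i / lam_i), with
    # the K_i taken from the local designs; terms with lam_i = 0 drop out.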


Chapter 5
Interpolation Based Control - Robust State Feedback Case

In this chapter, the problem of regulating a constrained discrete-time linear time-varying or uncertain system to the origin, subject to bounded disturbances, is addressed. The robust counterpart of the interpolation technique generalizes the results presented in the previous chapter, recursive feasibility and robust asymptotic stability being preserved. It is shown that in the implicit case, depending on the shape of the invariant sets (polyhedral or ellipsoidal) and on the objective function (linear or quadratic), two LPs, one QP or one LMI problem is solved at each time instant. In the explicit case, the control law is shown to be a piecewise affine function of the state.

5.1 Problem formulation

Consider the problem of regulating to the origin the following discrete-time linear time-varying or uncertain system subject to additive bounded disturbances

x(k+1) = A(k) x(k) + B(k) u(k) + D(k) w(k)    (5.1)

where x(k) ∈ R^n, u(k) ∈ R^m and w(k) ∈ R^d are, respectively, the state, the input and the disturbance vectors. The system matrices A(k) ∈ R^{n×n}, B(k) ∈ R^{n×m} and D(k) ∈ R^{n×d} satisfy

A(k) = Σ_{i=1}^{q} α_i(k) A_i,   B(k) = Σ_{i=1}^{q} α_i(k) B_i,   D(k) = Σ_{i=1}^{q} α_i(k) D_i,
Σ_{i=1}^{q} α_i(k) = 1,   α_i(k) ≥ 0    (5.2)

where the matrices A_i, B_i and D_i are given. A somewhat more general uncertainty description is given by equation (2.2) in Chapter 2, which can be transformed to the one in (5.2).
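For simulation purposes, one realization of the uncertain model (5.1)-(5.2) is obtained by drawing, at every instant, a point of the unit simplex and forming the corresponding convex combination of the vertex matrices. The following sketch is a minimal illustration; all numerical values in it are placeholders chosen here and not data from the thesis.

    import numpy as np

    def polytopic_matrices(alpha, A_list, B_list, D_list):
        """Evaluate A(k), B(k), D(k) of (5.2) for given convex weights alpha."""
        A = sum(a * Ai for a, Ai in zip(alpha, A_list))
        B = sum(a * Bi for a, Bi in zip(alpha, B_list))
        D = sum(a * Di for a, Di in zip(alpha, D_list))
        return A, B, D

    rng = np.random.default_rng(0)
    A_list = [np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[1.0, 0.2], [0.0, 1.0]])]
    B_list = [np.array([[0.0], [1.0]]), np.array([[0.0], [1.5]])]
    D_list = [np.eye(2), np.eye(2)]
    alpha = rng.dirichlet(np.ones(len(A_list)))     # random point of the simplex
    A, B, D = polytopic_matrices(alpha, A_list, B_list, D_list)
    x = np.array([[1.0], [0.0]]); u = np.array([[-0.5]])
    w = rng.uniform(-0.05, 0.05, size=(2, 1))       # bounded disturbance sample
    x_next = A @ x + B @ u + D @ w                  # one step of (5.1)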

The state, the control and the disturbance are subject to the following polytopic constraints

x(k) ∈ X,  X = {x ∈ R^n : F_x x ≤ g_x}
u(k) ∈ U,  U = {u ∈ R^m : F_u u ≤ g_u}    (5.3)
w(k) ∈ W,  W = {w ∈ R^d : F_w w ≤ g_w}

where the matrices F_x, F_u and F_w and the vectors g_x, g_u and g_w are assumed to be constant with g_x > 0, g_u > 0, g_w > 0, such that the origin is contained in the interior of X, U and W. The inequalities here are component-wise. In this chapter, it is assumed that the states of the system are measurable.

5.2 Interpolation based on linear programming

We start from the assumption that an unconstrained, robustly asymptotically stabilizing feedback controller u(k) = Kx(k) is available such that the corresponding maximal robustly invariant set Ω_max ⊆ X,

Ω_max = {x ∈ R^n : F_o x ≤ g_o}    (5.4)

is non-empty. Furthermore, for some given and fixed integer N > 0, based on procedure 2.3 presented in Chapter 2, one can find a robust controlled invariant set C_N of the form

C_N = {x ∈ R^n : F_N x ≤ g_N}    (5.5)

such that all x ∈ C_N can be steered into Ω_max in no more than N steps when a suitable control is applied. The polytope C_N can be decomposed into a set of simplices C_N^(j), each formed by n vertices of C_N and the origin. For all x ∈ C_N^(j), the vertex control law

u(k) = K^(j) x(k)    (5.6)

can be applied, where K^(j) ∈ R^{m×n} is defined as in (3.5). In Section 3.4, it was shown that the system (5.1) in closed loop with vertex control is robustly asymptotically stable¹ for all initial states x ∈ C_N.

In the robust case, similarly to the nominal case presented in Chapter 4, Section 4.2, the weakness of vertex control is that the full control range is exploited only on the border of the set C_N in the state space, with progressively smaller control action as the state approaches the origin. Hence the time to regulate the plant to the origin is longer than necessary. Here we provide a method to overcome this shortcoming, where the control action is still smooth. For this purpose, any state x(k) ∈ C_N is decomposed as

¹ Here by robust asymptotic stability we understand that the state converges asymptotically to a minimal robust positively invariant set [27], [6], [76], which replaces the origin as attractor for the system (5.1) in closed loop with the vertex controller.

x(k) = c(k) x_v(k) + (1 − c(k)) x_o(k)    (5.7)

where x_v ∈ C_N, x_o ∈ Ω_max and 0 ≤ c ≤ 1. Consider the following control law

u(k) = c(k) u_v(k) + (1 − c(k)) u_o(k)    (5.8)

where u_v(k) is obtained by applying the vertex control law (5.6) for x_v(k) and u_o(k) = K x_o(k) is the control law that is feasible in Ω_max.

Theorem 5.1. For system (5.1) and constraints (5.3), the control law (5.8) guarantees recursive feasibility for all initial states x(0) ∈ C_N.

Proof. For recursive feasibility, it has to be proved that

F_u u(k) ≤ g_u  and  x(k+1) = A(k) x(k) + B(k) u(k) + D(k) w(k) ∈ C_N

for all x(k) ∈ C_N. While the feasibility of the input constraints is proved in a similar way to the nominal case², the state constraint feasibility deserves an adaptation. One has

x(k+1) = A(k) x(k) + B(k) u(k) + D(k) w(k)
 = A(k){c(k) x_v(k) + (1 − c(k)) x_o(k)} + B(k){c(k) u_v(k) + (1 − c(k)) u_o(k)} + D(k) w(k)
 = c(k) x_v(k+1) + (1 − c(k)) x_o(k+1)

where

x_v(k+1) = A(k) x_v(k) + B(k) u_v(k) + D(k) w(k) ∈ C_N
x_o(k+1) = A(k) x_o(k) + B(k) u_o(k) + D(k) w(k) ∈ Ω_max ⊆ C_N

It follows that x(k+1) ∈ C_N.

As in Section 4.2, in order to be as close as possible to the optimal unconstrained local controller, one would like to minimize the interpolating coefficient c(k). This can be done by solving the following nonlinear optimization problem

c* = min_{x_v, x_o, c} {c}    (5.9)

subject to

F_N x_v ≤ g_N
F_o x_o ≤ g_o
c x_v + (1 − c) x_o = x
0 ≤ c ≤ 1

Define r_v = c x_v and r_o = (1 − c) x_o. Since x_v ∈ C_N and x_o ∈ Ω_max, it follows that r_v ∈ c C_N and r_o ∈ (1 − c) Ω_max, or equivalently

² See the proof of theorem 4.1 in Section 4.2.

167 52 5 Interpolation Based Control Robust State Feedback Case { FN r v cg N F o r o ( c)g o With this change of variables, the nonlinear optimization problem (5.9) is transformed into a linear programming problem as follows subject to c = min{c} (5.) r v,c F N r v cg N F o (x r v ) ( c)g o c Theorem 5.2. The control law using interpolation based on linear programming (5.)guaranteesrobustasymptoticstability 3 forallinitialstatesx() C N. Proof. The proof of this theorem is omitted here, since it is the same as the proof of theorem 4.2, Section 4.2. An on-line algorithm for the interpolation based controller via linear programming is Algorithm 5.: Interpolation based control- Implicit solution. Measure the current state of the systemx(k). 2. Solve the LP problem (5.). 3. Implement as input the control action (5.8). 4. Wait for the next time instantk:=k+. 5. Go to step and repeat. Although the dimension of the LP problem (5.) isn+, wherenis the dimension of state, the complexity of the control law (5.8) is in direct relationship with the complexity of the vertex controller and can be very high, since in general the complexity of the setc N is high in terms of vertices. Also it is well known [25] that the number of simplicies of vertex control is typically much greater than the number of vertices. Therefore a question is how to achieve an interpolating controller whose complexity is not correlated with the complexity of the involved sets. It is obvious that vertex control is only one possible choice for the global outer controller. One can consider any other linear or non-linear controller and the principle of interpolation scheme (5.7), (5.8), (5.) holds as long as the convexity of the 3 Here by robust asymptotic stability we understand that the state of the closed loop system with the interpolation based controller converges to the minimal robustly positively invariant set despite the parameter variation and the influence of additive disturbances.

associated controlled invariant set is preserved. A natural candidate for the global controller is the saturated controller u = sat(K_s x), with the associated invariant set Ω_s computed using procedure 2.4 in Chapter 2. Experience usually shows that, by properly choosing the saturated gain K_s ∈ R^{m×n}, the associated invariant set Ω_s may approach the invariant sets other constrained controllers might have. In summary, with the global saturated controller u(k) = sat(K_s x(k)) the interpolation based control law (5.7), (5.8), (5.) involves the following steps.

1. Design a local gain K and a global gain K_s, both stabilizing, with some desired performance specifications. Usually K is chosen for the performance, while K_s is designed for the quality of its domain of attraction.
2. Compute the invariant sets Ω_max and Ω_s associated with the controllers K and K_s respectively. The set Ω_max is computed using procedure 2.2 in Section 2.3.4, while the set Ω_s is computed using procedure 2.4 in the same section.
3. Implement the control law (5.7), (5.8), (5.).

Practically, the interpolation scheme using the saturated controller turns out to be simpler than the interpolation scheme using vertex control, while the domain of attraction typically remains the same. In order to complete the picture of the available possibilities in the choice of control policies in the interpolation scheme, we provide below an alternative for choosing the global controller. With this aim, some geometrical properties of the solution of the LP problem (5.) will be recalled.

Remark 5.1. In the robust case, similarly to the nominal case presented in Section 4.3:
1. If x ∈ Ω_max, the optimal interpolation problem has the trivial solution x_o = x and thus c* = 0 in (5.).
2. If x ∈ C_N \ Ω_max, the interpolating coefficient c will reach its minimum in (5.) if and only if x is written as a convex combination of two points, one belonging to the boundary of Ω_max and the other to the boundary of C_N.

As a consequence of remark 5.1, the vertex control law is only one candidate for the controller at the boundary of C_N. It is clear that any control law that steers the state on the boundary of C_N towards the interior of C_N will make the interpolation based control (5.7), (5.8), (5.) robustly asymptotically stable. An intuitive approach is to devise a controller that pushes the state away from the boundary of the controlled invariant set C_N as far as possible, in a contractive sense. In order to give a precise definition of "far", the following definition is introduced [95], [23].

Definition 5.1. (Minkowski functional) [95], [23] Given a C-set S, the Minkowski functional Ψ_S of S is defined as

Ψ_S(x) = min_µ {µ ≥ 0 : x ∈ µ S}    (5.11)
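For a polyhedral C-set S = {x ∈ R^n : F x ≤ g} with g > 0, the minimization in the definition above admits the closed form Ψ_S(x) = max(0, max_i F_i x / g_i). A small helper (ours, for illustration):

    import numpy as np

    def minkowski_functional(x, F, g):
        """Gauge of the polyhedral C-set S = {x : F x <= g}, g > 0: the
        smallest mu >= 0 such that F x <= mu * g."""
        return max(0.0, float(np.max(F @ x / g)))

    # Example: the unit box {|x_1| <= 1, |x_2| <= 1}.
    F = np.vstack([np.eye(2), -np.eye(2)])
    g = np.ones(4)
    print(minkowski_functional(np.array([0.5, -0.25]), F, g))   # 0.5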

169 54 5 Interpolation Based Control Robust State Feedback Case It is well known [95] that the function Ψ S is convex, positively homogeneous of order one, i.e. for any scalart, it holds that Ψ S (tx)=tψ S (x). Furthermore it is a norm if and only if the setsis symmetric. Its level surfaces are given by scaling the boundary of the sets. Thus such boundary defines the shape of the function. Figure 5. depicts the sets(the red line) and the level surfaces corresponding to Ψ =.6 and Ψ =.3. x 2 Ψ = Ψ =.3 x Ψ =.6 Fig. 5. Minkowski functional. So at each time instant, we will try to minimize the Minkowski functional for the statex v at the boundary of the feasible invariant setc N. This can be done by solving the following linear program [27] subject to for alli=,2,...,q and for allw W. µ = min{µ} (5.2) u,µ F N (A i x v +B i u) µg N maxf N D i w F u u g u µ Remark 5.2. The minimization of the Minkowski functional can be interpreted in terms of a one-step Model Predictive Control method. Remark 5.3. The non-uniqueness of the solution is a main issue regarding the implementation of the global control based on the LP problem (5.2). This issue might arise from the degeneracy of the LP problem (5.2). Multiple optima are undesirable, as they might lead to a fast switching between the different optimal control actions when the LP problem (5.2) is solved on-line [5], [75]. Note that for vertex control there is no such problem.
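Before summarizing the resulting two-stage controller, the two linear programs involved can be sketched as follows. The sketch is illustrative only (Python with scipy and cvxpy; all function and argument names are assumptions): F_N, g_N and F_o, g_o are the half-space descriptions of C_N and Omega_max, F_u, g_u describe the input constraints, vertices_AB_D collects the extreme realizations (A_i, B_i, D_i), and W_vertices the vertices of the disturbance set.

```python
import numpy as np
import cvxpy as cp
from scipy.optimize import linprog

def interpolation_lp(x, F_N, g_N, F_o, g_o):
    """Interpolation LP: with the change of variables r_v = c*x_v, find the
    smallest c in [0, 1] such that F_N r_v <= c g_N and
    F_o (x - r_v) <= (1 - c) g_o.  Decision vector: [r_v ; c]."""
    n = x.size
    A_ub = np.vstack([np.hstack([F_N, -g_N.reshape(-1, 1)]),
                      np.hstack([-F_o, g_o.reshape(-1, 1)])])
    b_ub = np.concatenate([np.zeros(F_N.shape[0]), g_o - F_o @ x])
    cost = np.zeros(n + 1)
    cost[-1] = 1.0                                    # minimize c
    bounds = [(None, None)] * n + [(0.0, 1.0)]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    r_v, c = res.x[:n], res.x[-1]
    x_v = r_v / c if c > 1e-9 else np.zeros(n)        # point on the boundary of C_N
    x_o = (x - r_v) / (1 - c) if c < 1 - 1e-9 else x  # point in Omega_max
    return c, x_v, x_o

def boundary_control_lp(x_v, vertices_AB_D, W_vertices, F_N, g_N, F_u, g_u):
    """Boundary-control LP: choose u that pushes A(k)x_v + B(k)u as deep
    inside C_N = {x : F_N x <= g_N} as possible (smallest Minkowski level mu),
    robustly over the vertices (A_i, B_i, D_i) and over the disturbance set."""
    u = cp.Variable(F_u.shape[1])
    mu = cp.Variable()
    cons = [F_u @ u <= g_u, mu >= 0, mu <= 1]
    for A_i, B_i, D_i in vertices_AB_D:
        # worst-case disturbance offset, row by row, over the vertices of W
        d_max = np.max(np.column_stack([F_N @ D_i @ w for w in W_vertices]), axis=1)
        cons.append(F_N @ (A_i @ x_v) + F_N @ (B_i @ u) <= mu * g_N - d_max)
    cp.Problem(cp.Minimize(mu), cons).solve()
    return u.value
```

At every sampling instant the first problem delivers the decomposition x = c* x_v + (1 - c*) x_o together with the minimal interpolating coefficient c*, the second one delivers the control value u_v associated with the boundary point x_v, and the applied input is the convex combination of u_v and the local control evaluated at x_o.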

170 5.2 Interpolation based on linear programming 55 In summary, the interpolation based controller involves the following steps Algorithm 5.2: Interpolation based control- Implicit solution. Measure the current state of the systemx(k). 2. Solve the LP problem (5.). In the result one getsx v,x o andc withx v Fr(C N ), x o Fr(Ω max ) andx=c x v +( c )x o. 3. For x v Fr(C N ), the control value u v is obtained by solving the LP problem (5.2). 4. Implement as input the control signalu=c u v +( c )u o. 5. Wait for the next time instantk:=k+ and the associated state measurements. 6. Go to step and repeat. It is worth noticing that for algorithm 5.2, at each time instant, two linear programs have to be solved sequentially, one is of dimensionn+ and the other is of dimensionm+ wherenandmare respectively, the dimension of state and control input. Hence algorithm 5.2 is more computationally demanding than algorithm 5.. However if the number of vertices of the feasible setc N exceeds the number of facets, algorithm 5.2 is preferable, due to the storage complexity of the global vertex controller used in the evaluation of the control action in equation (5.8) in the algorithm 5.. Remark 5.4. Concerning the explicit solution of the interpolation based control with the global vertex controller, using the same argument as in Section 4.3, it can be concluded that If x C N \ Ω max, the smallest value of the interpolating coefficient c will be reached when the regionc N \ Ω is decomposed into partitions in form of simplices with vertices either on the boundary ofc N or on the boundary of Ω max. The control law in each partition is piecewise affine function of state whose gains are obtained by interpolation of control values at the vertices of the simplex. Ifx Ω max, then the control law is the optimal unconstrained controller one. Example5.. Consider the following uncertain discrete-time system x(k+ )=A(k)x(k)+B(k)u(k) (5.3) where { A(k)=α(k)A +( α(k))a 2 and A = B(k)=α(k)B +( α(k))b 2 [ ].,A 2 = [ ] [ [ ].2,B =,B ] 2 =.5 At each time instant α(k) [, ] is an uniformly distributed pseudo-random number. The constraints are

171 56 5 Interpolation Based Control Robust State Feedback Case x, x 2, u (5.4) The stabilizing feedback gain for states near the origin is chosen as K =[ ] Using procedure 2.2 and procedure 2.3 in Chapter 2, one obtains the sets Ω max andc N as shown in Figure 5.2. Note thatc 27 =C 28, in this casec 27 is a maximal robustly invariant set for system (5.3) C N 2Ωmax x x Fig.5.2 Feasible invariant sets for example 5.. The set Ω max is presented in minimal normalized half-space representation as Ω max = x R 2 : The set of vertices ofc N is given by the matrixv(c N ) below, together with the control matrixu v at these vertices { V(PN ) = [V V ] U v = [U U ] where

172 5.2 Interpolation based on linear programming 57 [ ] V = U = [ ] Solving explicitly the LP problem (5.) by using multi-parametric linear programming, a state space partition is obtained as depicted in Figure 5.3(a). The number of polyhedral partition isn r = 27. Merging the regions with the identical control law, one obtains the reduced state space partition (N r = 3) in Figure 5.3(b). In the same Figure, different trajectories of the closed loop system are presented for different initial conditions and different realizations of α(k). x x x x (a) Number of regionsn r = 27 (b) Number of regionsn r = 3 Fig.5.3 Explicit solution before and after merging for the interpolation based control method and different trajectories of the closed loop system for example 5.. The control law over the state space partition with 3 regions is

173 58 5 Interpolation Based Control Robust State Feedback Case if x(k) if.. x(k) x (k).25x 2 (k)+2.3 if.59.8 x(k) u(k)= x (k).2x 2 (k)+. if.. x(k) x (k).8x 2 (k)+2.72 if x(k) x (k).56x 2 (k)+2.3 if.2.98 x(k) x (k).8x 2 (k) if x(k) (due to symmetry of the explicit solution, only the control law for seven regions are reported here) The interpolating coefficient and the control input as a piecewise affine function of state is depicted in Figure 5.4. It is worth noticing thatc = inside the set Ω max. For the initial conditionx =[ ] T, Figure 5.5(a) and Figure 5.5(b) show the state and input trajectories as a function of time. The solid blue line is obtained by using the interpolation based control method and confirms the stabilizing as well as good performances for regulation. As a comparison, Figure 5.5(a) and Figure 5.5(b) also show the state and input trajectories obtained by using algorithms

174 u u c 5.2 Interpolation based on linear programming 59 PWA function over 27 regions PWA function over 3 regions x x x 2 (a) Interpolating coefficient (b) Control input x Fig. 5.4 The interpolating coefficient and the control input as a piecewise affine function of state for example 5.. proposed by Kothare et al. in [78]. Note that algorithms in [78] require a solution of a semidefinite problem. x 5 Interpolation based control Kothare s approach x Time (Sampling) Interpolation based control Kothare s approach Time (Sampling) (a) State trajectories Interpolation based control Kothare s approach Time (Sampling) (b) Input trajectories Fig.5.5 State and input trajectories as a function of time for example 5.. The solid blue line is obtained by using the interpolation based control method, and the dashed red line is obtained by using the method in [78]. Figure 5.6 presents the interpolating coefficient and the realization of α(k) as a function of time. As expectedc (k) is a positive and non-increasing function. The following state and control weighting matrices were used for the LMI based MPC algorithm in in [78] [ ] Q =, R= Example 5.2. This example extends the study of the nominal case. The discrete-time linear time-invariant system with disturbances is given as [ ] [ x(k+ )= x(k)+ u(k)+w(k) (5.5) ] The state-input constraints and disturbance bounds are

175 6 5 Interpolation Based Control Robust State Feedback Case c * α Time (Sampling) (a) Interpolating coefficient Time (Sampling) (b) α(k) realization Fig. 5.6 Interpolating coefficient and the realization of α(k) as a function of time for example x 5, 5 x 2 5 u. w.,. w 2. (5.6) An LQ gain with weighting matrices [ ] Q=, R=. leads to a local unconstrained feedback gain K =[ ] The following saturated controlleru(k)=sat(k s x(k)) is chosen as a global controller with the linear gain K s =[ ] Using procedure 2.2 and procedure 2.4 for the control lawsu(k) =Kx(k) and u(k) =sat(k s x(k)) respectively, the maximal invariant sets Ω max and Ω s are computed. The result is depicted in Figure 5.7(a). Note that the set Ω s is actually the maximal domain of attraction for the system (5.5) with constraints (5.6), which can be verified by comparing the equivalence between the set Ω s and its one-step robust controlled invariant set. Figure 5.7(b) presents different state trajectories for different initial conditions and different realizations ofw(k). It can be observed that the trajectories do not converge to the origin but to a minimal robust positively invariant set of the systemx(k+ )=(A+BK)x(k)+w(k) that contains the origin in the interior. The sets Ω max and Ω s are presented in minimal normalized half-space representation as

176 5.3 Interpolation based on quadratic programming for uncertain systems Ω max 2Ωs x 2 Ω max x 2 Ω s x (a) Feasible invariant sets x (b) State trajectories Fig. 5.7 Feasible invariant sets and different trajectories of the closed loop system for example Ω max = x R 2 : x Ω s = x R 2 : x For the initial conditionx()=[ ] T, Figure 5.8 shows the state and input trajectory as a function of time. Figure 5.9 presents the interpolating coefficientc (k) and the realization ofw(k) as a function of time. 5.3 Interpolation based on quadratic programming for uncertain systems The non-uniqueness of the solution is the main issue regarding the implementation of the interpolation via linear programming in Section 5.2. Hence, as in the nominal case, it is also worthwhile in the robust case to have an interpolation scheme with strictly convex objective function.

177 u 62 5 Interpolation Based Control Robust State Feedback Case x Time (Sampling) x Time (Sampling) (a) State trajectory Time (Sampling) (b) Input trajectory Fig. 5.8 State and input trajectory of the closed loop system as a function of time for example w.5 c * Time (Sampling)..2 w Time (Sampling) (a) Interpolating coefficient Time (Sampling) (b)w(k) realization Fig. 5.9 Interpolating coefficient and the realization of w(k) as a function of time for example 5.2. In this section, we consider the problem of regulating to the origin system (5.) in the absence of disturbances. In other words, the system under consideration is of the form x(k+ )=A(k)x(k)+B(k)u(k) (5.7) where the uncertainty description ofa(k) andb(k) is as in (5.2). For a given set of robust asymptotically stabilizing controllersu(k) =K i x(k), i=,2,...,r and corresponding maximal robust positively invariant sets Ω i X Ω i ={x R n :F (i) o x g (i) o } (5.8) denote Ω as a convex hull of Ω i. It follows from the convexity ofx that Ω X, since Ω i X for alli=,2,...,r. By employing the same design scheme in Section 4.5, the first high gain controller in this enumeration will play the role of a performance controller, while the remaining low gain controllers will be used in the interpolation scheme to enlarge the domain of attraction. Any statex(k) Ω can be decomposed as follows x(k)=λ (k) x (k)+λ 2 (k) x 2 (k)+...+λ r (k) x r (k) (5.9) where x i (k) Ω i for alli=,2,...,r and

178 5.3 Interpolation based on quadratic programming for uncertain systems 63 r i= Consider the following control law λ i (k)=, λ i (k) u(k)=λ (k)k x (k)+λ 2 (k)k 2 x 2 (k)+...+λ r (k)k r x r (k) (5.2) whereu i (k)=k i x i (k) is the control law, associated to the invariant construction of the set Ω i. With a slight abuse of notation, denotex i = λ i x i. Since x i Ω i, it follows thatx i λ i Ω i or equivalently that the set of inequalities is verified for alli=,2,...,r. It holds that or F (i) o x i λ i g (i) o (5.2) x(k+ ) =A(k)x(k)+B(k)u(k)=A(k) r x i (k)+b(k) r K i x i (k) i= i= = r (A(k)+B(k)K i )x i (k) i= x(k+ )= r i= x i (k+ ) (5.22) withx i (k+ )=A ci x i (k) anda ci =(A(k)+B(k)K i ). For the given set of state and control weighting matricesq i R n n,r i R m m withq i,r i, consider the following set of quadratic functions wherep i R n n andp i is chosen to satisfy V i (x i )=x T ip i x i, i=2,3,...,r (5.23) V i (x i (k+ )) V i (x i (k)) x i (k) T Q i x i (k) u i (k) T R i u i (k) (5.24) Sincex i (k+ )=A ci x i (k), it follows that A T cip i A ci P i Q i K T i R i K i By using the Schur complement, one obtains [ Pi Q i Ki T R i K i A T ci P ] i (5.25) P i A ci P i Since matrixa ci has a sub-unitary joint spectral radius, problem (5.25) is always feasible [9]. It is clear that this problem reaches the minimum on one of the vertices ofa ci. Therefore the set of LMI conditions to be satisfied is following

179 64 5 Interpolation Based Control Robust State Feedback Case [ Pi Q i Ki T R i K i (A j +B j K i ) T ] P i, j=,2,...,q (5.26) P i (A j +B j K i ) P i One way to obtain matrixp i is to solve the following LMI problem subject to constraints (5.26). Define the vectorz R (r )(n+) as follows min P i {trace(p i )} (5.27) z=[x T 2... x T r λ 2... λ r ] T (5.28) With the vectorz, consider the following quadratic function V(z)= r i=2 x T ip i x i + r i=2 λ 2 i (5.29) We underline the fact that the sum is built on the indices{2,3,...,r}, which correspond to the more poorly performing controllers. Simultaneously, the cost function is intended to diminish the influence of these controller actions in the interpolation scheme toward the unconstrained optimum with λ i =. At each time instant, consider the following optimization problem subject to the constraints F (i) r i= r i= min z {V(z)} (5.3) o x i λ i g (i) o x i =x λ i =, λ i and apply as input the control actionu= r K i x i. i= Theorem 5.3. The interpolation based controller obtained by solving on-line the optimization problem(5.3) guarantees recursive feasibility and robust asymptotic stabilityforallinitialstatesx() Ω. Proof. The proof of this theorem follows the same argumentation as the one of theorem 4.9. Hence it is omitted here. As in Section 4.5, the objective function in (5.3) can be rewritten in a quadratic form as min{z T Hz} (5.3) z with

180 5.3 Interpolation based on quadratic programming for uncertain systems 65 P P H =...P r and the constraints of the optimization problem (5.3) can be rewritten as where G= Gz S+Ex(k) (5.32) F o () F o ()... F o () g () o g () o... g () o F o (r)... g o (r) [ ] T S= (g () o ) T [ ] T E = (F o () ) T F o (2)... g (2) o... F o (3)... g (3) o... Hence the optimization problem (5.3) is transformed into the quadratic programming problem (5.3) subject to the linear constraints (5.32). It is worth noticing that for allx Ω, the QP problem (5.3) subject to the constraints (5.32) has a trivial solution, that is { xi =, i=2,3,...r λ i =, Hencex =x and λ = for allx Ω. This means that, inside the robustly invariant set Ω, the interpolating controller turns out to be the optimal one in the sequence,2,...,r. In summary, an on-line algorithm for the interpolation based controller via quadratic programming is

181 66 5 Interpolation Based Control Robust State Feedback Case Algorithm 5.3: Interpolation based control via quadratic programming. Measure the current state of the systemx(k). 2. Solve the QP problem (5.3). 3. Implement as input the control actionu= r K i x i. 4. Wait for the next time instantk:=k+. 5. Go to step and repeat. Example 5.3. Consider the linear uncertain discrete time system in example (5.) with the same state and control constraints. Two linear feedback controllers are chosen as { K =[ ] (5.33) K 2 =[.786.] The first controlleru(k)=k x(k) plays the role of a performance controller, while the second controlleru(k)=k 2 x(k) will be used for extending the domain of attraction. Figure 5.(a) shows the maximal robustly invariant sets Ω and Ω 2 correspond to the controllersk andk 2 respectively. Figure 5.(b) presents different state trajectories of the closed loop system for different initial conditions and different realizations of α(k). i= 5 5 Ω 2 x 2 Ω x x (a) Feasible invariant sets x (b) State trajectories Fig. 5. Feasible invariant sets and different state trajectories of the closed loop system for example 5.3. The sets Ω and Ω 2 are presented in minimal normalized half-space representation as Ω = x R 2 : x

182 u 5.3 Interpolation based on quadratic programming for uncertain systems Ω 2 = x R 2 : x With the weighting matrices [ ] Q 2 =, R 2 =. and by solving the LMI problem (5.27), one obtains [ ] P 2 = For the initial conditionx() = [ ] T, Figure 5.(a) and 5.(b) show the state and input trajectories as a function of time. Figure 5.(a) and 5.(b) also show the state and input trajectories, obtained by using algorithm proposed by Pluymers et al. in [23]. 6 Proposed approach Pluymers Approach.8 Proposed approach Pluymers approach x Time (Sampling).2 x 2 5 Proposed approach Pluymers approach Time (Sampling) (a) State trajectory Time (Sampling) (b) Input trajectory Fig.5. State and input trajectories of the closed loop system as a function of time for example 5.3. The solid blue line is obtained by using the proposed interpolation based control method, and the dashed red line is obtained by using the method in [23]. The following parameters were used for the approach in [23]. The state and control weighting matricesq=,r=.. Figure 5.2(a), 5.2(b) and 5.2(c) present the interpolating coefficient λ 2 (k), the objective function i.e. the Lyapunov function and the realization of α(k) as a

183 α 68 5 Interpolation Based Control Robust State Feedback Case function of time. It is worth noticing that here λ 2 (k) is allowed to increase (see for example at time instantk=7) λ V(z) Time (Sampling) (a) Interpolating coefficient Time (Sampling) (b) Lyapunov function Time (Sampling) (c) α(k) realization Fig. 5.2 Interpolating coefficient, Lyapunov function and α(k) realization as a function of time for example An improved interpolation based control method in the presence of actuator saturation In this section, in order to fully utilize the capability of actuators and guarantee the input constraints, a saturation function is applied to the input channel. As in the previous section, we consider the case whenw(k)= for allk. For simplicity, only the single input - single output system case is considered here, although extension to the multi-input multi-output system case is straightforward. Since the saturation function on the input is applied, the dynamical system under consideration is of the form x(k + ) = A(k)x(k) + B(k)sat(u(k)) (5.34) It is assumed that the input constraints are in the form u(k) U,U ={u R:u l u u u } (5.35)

184 5.4 An improved interpolation based control method in the presence of actuator saturation 69 whereu l andu u are respectively the lower and the upper bound of the inputu. It is also assumed thatu l andu u are constant withu l < andu u > such that the origin is contained in the interior ofu. With respect to the the state constraints, their formulation remains the same as in (5.3). From Lemma 2., Section 2.4., recall that for a given stabilizing controller u(k) =Kx(k) and for allxsuch thatu l Hx u u, the saturation function can be expressed as sat(kx)=β(k)kx(k)+( β(k))hx(k) (5.36) with a suitable choice of β(k) 4. The instrumental vectorh R n can be computed based on theorem 2.2. Based on procedure 2.5 in Section 2.4., an associated robust polyhedral set Ω H s can be computed, that is invariant for the system x(k + ) = A(k)x(k) + B(k)sat(Kx(k)) (5.37) These design principles can be exploited for a given set of robust asymptotically stabilizing controllersu(k)=k i x(k) in order to obtain a set of auxiliary vectorsh i R n withi=,2,...,r and a set of robustly invariant sets Ω H i s X in the polyhedral form Ω H i s ={x R n :F (i) o x g (i) o } (5.38) Let us denote Ω S as a convex hull of the sets Ω H i s. By the convexity ofx, it follows that Ω S X, since Ω H i s X for alli=,2...,r. Any statex(k) Ω S can be decomposed as follows x(k)= r i= where x i (k) Ω H i s for alli=,2,...,r and r i= λ i =, λ i λ i x i (k) (5.39) As in the previous section we remark the non-uniqueness of the decomposition. Consider the following control law u(k)= Based on Lemma 2., one obtains u(k)= r i= where β i for alli=,2,...,r. r i= λ i sat(k i x i (k)) (5.4) λ i (β i K i +( β i )H i ) x i (k) (5.4) 4 See Section 2.4. for more details.
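Once a feasible decomposition over the invariant sets associated with the saturated controllers is available, the evaluation of the interpolated saturated control is elementary. The following sketch (Python; names are illustrative, and the decomposition (lambda_i, xbar_i) is assumed to be provided by the optimization problem constructed later in this section) implements the convex combination of saturated feedbacks.

```python
import numpy as np

def saturated_interpolation(lambdas, xbars, K_list, u_low, u_up):
    """Interpolated saturated control  u = sum_i lambda_i * sat(K_i xbar_i),
    with sat() the componentwise saturation on [u_low, u_up],
    sum_i lambda_i = 1, lambda_i >= 0 and xbar_i in the i-th invariant set."""
    u = np.zeros(np.atleast_2d(K_list[0]).shape[0])
    for lam, xbar, K in zip(lambdas, xbars, K_list):
        u += lam * np.clip(np.atleast_2d(K) @ xbar, u_low, u_up)
    return u
```

Since each term respects the input bounds and the coefficients sum to one, the combined input automatically satisfies u in U, which is the feasibility argument used in the sequel.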

185 7 5 Interpolation Based Control Robust State Feedback Case With the same notation as in the previous sections, letx i = λ x i. Since x i Ω H i it follows thatx i λω H i s or F (i) o x i λg (i) o, i=,2,...,r (5.42) From equations (5.39) and (5.4), one gets x= r x i i= u= r (5.43) (β i K i +( β i )H i )x i i= The first high gain controller plays the role of a performance controller, while the remaining low gain controllers will be used to enlarge the domain of attraction. When the control input is of the form (5.43), it is clear thatu(k) U andsat(u(k))= u(k) as long as there is no active constraint. It follows that with or x(k+ ) =A(k)x(k)+B(k)sat(u(k))=A(k)x(k)+B(k)u(k) =A(k) r x i (k)+b(k) r (β i K i +( β i )H i )x i (k) i= i= = r x i (k+ ) i= x i (k+ )={A(k)+B(k)(β i K i +( β i )H i )}x i (k) s, x i (k+ )=A ci x i (k) (5.44) witha ci =A(k)+B(k)(β i K i +( β i )H i ). For the given set of state and control weighting matricesq i R n n andr i R, consider the following set of quadratic functions where matrixp i R n n,p i is chosen to satisfy V i (x i )=x T ip i x i, i=2,3,...,r (5.45) V i (x i (k+ )) V i (x i (k)) x i (k) T Q i x i (k) u i (k) T R i u i (k) (5.46) DenoteY i = β i K i +( β i )H i. Based on equation (5.44), one can rewrite inequality (5.46) as A T cip i A ci P i Q i Y T i R i Y i or By using the Schur complement, the previous condition can be transformed into [ Pi Q i Yi T R i Y i A T ci P ] i P i A ci P i

186 5.4 An improved interpolation based control method in the presence of actuator saturation 7 [ Pi A T ci P ] [ i Qi +Y T ] i R i Y i P i A ci P i DenoteQ 2 i andr 2 i as the Cholesky factor of the matricesq i andr i, which satisfy (Q 2 i ) T Q 2 i =Q i and (R 2 i ) T R 2 i =R i. The previous condition can be rewritten as [ Pi A T ci P i P i A ci P i ] [ 2 (Q i ) T Yi T 2 (R i ) T ][ Q 2 i R 2 i Y i or by using the Schur complement, one obtains P i A T ci P 2 i (Q i ) T 2 (R i Y i ) T P i A ci P i Q 2 i I (5.47) R 2 i Y i I Clearly, the left-hand side of inequality (5.47) reaches the minimum on one of vertices ofa ci,y i, so practically the set of LMI conditions to be checked is the following P i (A j +B j K i ) T 2 P i (Q i ) T 2 (R i K i ) T P i (A j +B j K i ) P i Q 2 i I R 2 i K i I P i (A j +B j H i ) T 2 P i (Q i ) T (R (5.48) 2 i H i ) T P i (A j +B j H i ) P i 2 Q i I R 2 i H i I for all j=,2,...,q and for alli=2,3,...,r. Condition (5.48) is linear with respect to the matrixp i. One way to calculatep i is to solve the following LMI problem ] min P i {trace(p i )} (5.49) subject to constraint (5.48). Once the matricesp i withi=2,3,...,r are computed, they can be used in practice for real-time control based on the resolution of a low complexity optimization problem. At each time instant, for a given current state x, minimize on-line the following quadratic cost function subject to linear constraints

187 72 5 Interpolation Based Control Robust State Feedback Case subject to r min{ x i,λ i i=2 F (i) r i= r x T ip i x i + r i=2 o x i λg (i) o, i=,2,...,r x i =x λ i = i= λ i, i=,2,...,r. λ 2 i } (5.5) Theorem 5.4. The control law based on solving the optimization problem(5.5) guarantees recursive feasibility and asymptotic stability of the closed loop system forallinitialstatesx() Ω S. Proof. The proof is omitted here, since it is the same as the one of theorem 4.9. An on-line algorithm for the interpolation based controller between several saturated controllers via quadratic programming is Algorithm 5.4: Interpolation based control via quadratic programming. Measure the current state of the systemx(k). 2. Solve the QP problem (5.5). 3. Implement as input the control actionu= r λ i sat(k i x i ). 4. Wait for the next time instantk:=k+. 5. Go to step and repeat. Example5.4. We recall the linear uncertain discrete time system x(k+ )=(α(k)a +( α(k))a 2 )x(k)+(α(k)b +( α(k))b 2 )u(k) (5.5) in example (5.) with the same state and control constraints. Two linear feedback controllers in the interpolation scheme are chosen as { K =[ ], (5.52) K 2 =[ ] i= Based on theorem 2.2, two auxiliary matrices are computed as { H =[ ] H 2 =[.786.] (5.53) With these auxiliary matricesh andh 2, the robustly invariant sets Ω H s and s are respectively constructed for the saturated controllersu=sat(k x) andu= Ω H 2

188 5.4 An improved interpolation based control method in the presence of actuator saturation 73 sat(k 2 x), see Figure 5.3(a). Figure 5.3(b) shows different state trajectories of the closed loop system for different initial conditions and different realizations of α(k), obtained by solving the QP problem (5.5) Ω s H x Ω s H x x (a) Feasible invariant sets x (b) State trajectories Fig.5.3 Feasible invariant sets and state trajectories of the closed loop system for example 5.4. The sets Ω H s and Ω H 2 s are presented in minimal normalized half-space representation as Ω H s = x R 2 : x and Ω H 2 s = x R 2 : x

189 u 74 5 Interpolation Based Control Robust State Feedback Case With the weighting matrices Q 2 = [ ],R 2 =. and by solving the LMI problem (4.77), one obtains [ ] P 2 = For the initial conditionx()=[ ] T, Figure 5.4 shows the state and input trajectory of the closed loop system as a function of time. x 2 x Time (Sampling) Time (Sampling) (a) State trajectory Time (Sampling) (b) Input trajectory Fig.5.4 State and input trajectory of the closed loop system as a function of time for example 5.4. Figure 5.5 presents the interpolating coefficient λ 2 (k), the objective function i.e. the Lyapunov function and the realization of α(k) as a function of time. 5.5 Interpolation via quadratic programming for uncertain systems with disturbances- Algorithm Note that all the development in Sections 5.3 ad 5.4 avoided handling of additive disturbances due to the impossibility of dealing with the robustly asymptotic stability of the origin as an equilibrium point. In this section, an interpolation based control method for system (5.) with constraints (5.3) using quadratic programming will be proposed to cope with the additive disturbance problem. It is clear that when the disturbance is persistent, it is impossible to guarantee the convergencex(k) ask +. In other words, it is impossible to achieve asymptotic stability of the closed loop system to the origin. The best that can be hoped for is that the controller steers any initial state to some target set around the origin. Therefore an input-to-state (ISS) stability framework proposed in [6], [99], [88] will be used for characterizing this target region.

190 5.5 Interpolation via quadratic programming - Algorithm λ Time (Sampling) (a) Interpolating coefficient Objective function Time (Sampling) (b) Lyapunov function α Time (Sampling) (c) α(k) realization Fig. 5.5 Interpolating coefficient, Lyapunov function and α(k) realization as a function of time for example Input to state stability The input to state stability framework provides a natural way to formulate questions of stability with respect to disturbances [45]. This framework attempts to capture the notion of bounded disturbance input- bounded state. Before using the concepts in the specific case of the interpolation schemes, a series of preliminary definitions is introduced. Definition5.2.(K function) A real valued scalar function φ : R R is of class K if it is continuous, strictly increasing and φ()=. Definition5.3.(K function) A function φ : R R is of class K if it is a K function and φ(s) + ass +. Definition5.4.(K L function) A function β : R R R is of class K L if for each fixedk, it follows that β(,k) is a K function and for each fixed s, it follows that β(s, ) is decreasing and β(s, k) as k. The ISS framework for autonomous discrete-time linear time-varying or uncertain systems, as studied by Jiang and Wang in [6], is briefly reviewed next. Consider system (5.) with a feedback controller u(k) = Kx(k) and the corresponding closed loop matrix wherea c (k)=a(k)+b(k)k. x(k+ )=A c (k)x(k)+d(k)w(k) (5.54)

191 76 5 Interpolation Based Control Robust State Feedback Case Definition 5.5.(ISS stability) The dynamical system (5.54) is ISS with respect to disturbancew(k) if there exist a K L function β and a K function φ such that for all initial statesx() and for all admissible disturbancesw(k), the evolutionx(k) of system (5.54) satisfies ( ) x(k) β( x(),k)+φ sup w(i) i k The function φ( ) is usually called an ISS gain of system (5.54). (5.55) Definition5.6.(ISSLyapunovfunction) A functionv :R n R is an ISS Lyapunov function for system (5.54) is there exist K functions γ, γ 2, γ 3 and a K function θ such that { γ ( x ) V(x) γ 2 ( x ) (5.56) V(x(k+ )) V(x(k)) γ 3 ( x(k) )+θ( w(k) ) Theorem 5.5. System(5.54) is input-to-state stable if it admits an ISS Lyapunov function. Proof. See [6], [99], [88]. Remark5.5. Note that the ISS notion is related to the existence of statesxsuch that γ 3 ( x ) θ( w ) for allw W. This implies that there exists a scalard such that γ 3 (d)=max θ( w ) w W ( ) ord= γ3 max θ( w(k) ). Here γ3 denotes the inverse operator of γ 3. w(k) W It follows that for any x(k) >d, one has V(x(k+ )) V(x(k)) γ 3 ( x(k) )+θ( w(k) ) γ 3 (d)+θ( w(k) )< Thus the trajectory x(k) of the system (5.54) will eventually enter the region R x ={x R n : x(k) d}. Once inside, the trajectory will never leave this region, due to the monotonicity condition imposed onv(x(k)) outside the regionr x Cost function determination The main contribution presented in the following starts from the assumption that using established results in control theory, one disposes a set of unconstrained robust asymptotically stabilizing feedback controllersu(k) =K i x(k),i =,2,...,r, such

192 5.5 Interpolation via quadratic programming - Algorithm 77 that for eachithe joint spectral radius of the parameter varying matrixa ci (k) is less than one wherea ci (k)=a(k)+b(k)k i. For each controlleru(k)=k i x(k), a maximal robustly positively invariant set Ω i can be found in the polyhedral form 5 { } Ω i = x R n :F o (i) x g (i) o (5.57) for alli =,2,...,r, such that for allx(k) Ω i, it follows thatx(k+) Ω i in closed loop with the control lawu(k)=k i x(k) U for allw(k) W. With a slight abuse of notation, denote Ω as a convex hull of the sets Ω i,i=,2,...,r. It follows that Ω X as a consequence of the fact that Ω i X for alli=,2,...,r. Any statex(k) Ω can be decomposed as follows with x i (k) Ω i and r i= x(k)= r i= λ i (k) x i (k) (5.58) λ i (k)=, λ i (k) One of the first remark is that according to the cardinal numberr and the disposition of the regions Ω i, the decomposition (5.58) is not unique. Denotex i (k)=λ i (k) x i (k). Equation (5.58) can be rewritten as Hence x(k)= r i= x (k)=x(k) x i (k) r i=2 Since x i Ω i, it follows thatx i λ i Ω i, or in other words Consider the following control law u(k)= r i= x i (k) (5.59) F (i) o x i λ i g (i) o (5.6) λ i (k)k i x i (k)= r i= K i x i (k) (5.6) wherek i x i (k) is the control law, associated to the construction of the set Ω i. From equations (5.59), (5.6), one gets 5 See procedure 2.2 in Chapter 2 u(k)=k x(k)+ r i=2 (K i K )x i (k) (5.62)

193 78 5 Interpolation Based Control Robust State Feedback Case It holds that or equivalently with x(k+ ) =A(k)x(k)+B(k)u(k)+D(k)w(k) =A(k) r λ i (k) x i (k)+b(k) r λ i (k)k i x i (k)+d(k)w(k) i= i= = r {(A(k)+B(k)K i )λ i (k) x i (k)+λ i (k)d(k)w(k)} i= x(k+ )= r i= x i (k+ ) (5.63) x i (k+ )=A ct (k)x i (k)+d(k)w i (k) (5.64) wherea ci (k)=a(k)+b(k)k i andw i (k)=λ i w(k) for alli=,2,...,r. From equation (5.63), one obtains Therefore x (k+ )=x(k+ ) r i=2 x i (k+ ) Hence x(k+ ) r x i (k+ ) =A c (k)x (k)+d(k)w (k) i=2 x(k+ )=A c (k)x(k)+ =A c (k){x(k) r x i (k)}+d(k)w (k) i=2 r i=2 x i (k+ ) A c (k) r i=2 x i (k)+d(k)w (k) From equation (5.64), one gets x(k+ )=A c (k)x(k)+ r i=2 B(k)(K i K )x i (k)+d(k)w(k) (5.65) Equation (5.65) describes the one-step state prediction of system (5.). Define the vectorszand ω as { z= [ x T x2 T ] T...xT r ω = [ w T w T ] T 2...wT r Based on equations (5.64), (5.65), one has where z(k+ )=Φ(k)z(k)+Γ(k)ω(k) (5.66)

194 5.5 Interpolation via quadratic programming - Algorithm 79 A c (k)b(k)(k 2 K )...B(k)(K r K ) A c2 (k)... Φ(k)= ,... A cr (k) D(k)... D(k)... Γ(k)= D(k) From equation (5.2), it is clear that Φ(k) and Γ(k) can be expressed as a convex combination of Φ j and Γ j, respectively Φ(k)= q α j (k)φ j j= Γ(k)= q (5.67) α j (k)φ j j= where q α j (k)=, α j (k) and j= A j +B j K B j (K 2 K )...B j (K r K ) D j... A j +B j K 2... Φ j =......, Γ D j... j = A j +B j K r...d j For the given state and input weighting matricesq R n n,r R m m,q andr, consider the following quadratic function where matrix P is chosen to satisfy V(z)=z T Pz (5.68) V(z(k+ )) V(z(k)) x(k) T Qx(k) u(k) T Ru(k)+θω(k) T ω(k) (5.69) where θ. Based on equation (5.66), the left hand side of inequality (5.69) can be written as V(z(k+ )) V(z(k))=(Φz+Γ ω) T P(Φz+Γ ω) z T Pz = [ z T ω T ][ Φ T ] Γ T P [ Φ Γ ][ ] z [ z ω T ω T ][ ] P z ][ ω (5.7) And the right hand side

195 8 5 Interpolation Based Control Robust State Feedback Case where x(k) T Qx(k) u(k) T Ru(k)+θω(k) T ω(k) =z(k) T ( Q R )z(k)+θω(k) T ω(k) = [ z T ω T ][ ] (5.7) Q R z θi][ ω K T I Q =. Q[ I... ], (K 2 K ) T R =. R[ K (K 2 K )... (K r K ) ] (K r K ) T From equations (5.69), (5.7), (5.7) one gets [ ] Φ T P [ Φ Γ ] [ ] [ ] P Q R θi Γ T or equivalently [ ] [ ] P Q R Φ T θi Γ T P [ Φ Γ ] (5.72) Using the Schur complement, equation (5.72) can be brought to P Q R Φ T P θi Γ T P (5.73) PΦ PΓ P From equation (5.72) it is clear that the problem (5.73) is feasible if and only if the joint spectral radius of matrix Φ(k) is less than one, or in other words, all matricesa ci (k) are asymptotically stable. The left hand side of the inequality (5.73) is linear with respect to α i (k). Hence it reaches the minimum if and only if α i (k)= or α i (k)=. Therefore the set of LMI conditions to be checked is as follows P Q R Φ T j P θi Γj T P (5.74) PΦ j PΓ j P for all j=,2,...,q. Remark 5.6. The results presented here are based on the common Lyapunov function (5.68) but they can be relaxed by using a parameter dependent Lyapunov function concept, see [36].
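For completeness, this synthesis step can be prototyped with a semidefinite programming modeler. The sketch below is illustrative only (Python with cvxpy; names are assumptions): Phi_list and Gamma_list collect the vertex matrices of the augmented closed loop, while Q_bar and R_bar denote the lifted state and input weights built from Q, R and the gains K_i as defined above. It returns a common matrix P and the smallest admissible theta, anticipating the minimization discussed next.

```python
import numpy as np
import cvxpy as cp

def iss_lmi_synthesis(Phi_list, Gamma_list, Q_bar, R_bar, eps=1e-6):
    """Find a common P > 0 and the smallest theta such that, for every vertex
    (Phi_j, Gamma_j) of the augmented closed loop,
        [[P - Q_bar - R_bar, 0,         Phi_j' P  ],
         [0,                 theta*I,   Gamma_j' P],
         [P Phi_j,           P Gamma_j, P         ]]  >= 0."""
    nz = Phi_list[0].shape[0]
    nw = Gamma_list[0].shape[1]
    P = cp.Variable((nz, nz), symmetric=True)
    theta = cp.Variable(nonneg=True)
    cons = [P >> eps * np.eye(nz)]
    for Phi_j, Gamma_j in zip(Phi_list, Gamma_list):
        M = cp.bmat([
            [P - Q_bar - R_bar,   np.zeros((nz, nw)),  Phi_j.T @ P],
            [np.zeros((nw, nz)),  theta * np.eye(nw),  Gamma_j.T @ P],
            [P @ Phi_j,           P @ Gamma_j,         P],
        ])
        # M is symmetric by construction (P = P'); symmetrize explicitly so the
        # modeler accepts the semidefinite constraint
        cons.append(0.5 * (M + M.T) >> 0)
    cp.Problem(cp.Minimize(theta), cons).solve()
    return P.value, theta.value
```

A block-diagonal restriction on P, as used in the numerical examples, is obtained in the same way by declaring the diagonal blocks as separate matrix variables.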

196 5.5 Interpolation via quadratic programming - Algorithm 8 Structurally, problem (5.74) is linear with respect to the matrix P and to the scalar θ. It is well known [88] that in the sense of the ISS gain having a smaller θ is a desirable property. The smallest value of θ can be found by solving the following LMI optimization problem min{θ} (5.75) P,θ subject to constraint (5.74) Interpolation via quadratic programming Once the matrixpis computed as a solution of the problem (5.75), it can be used in practice for real time control based on the resolution of a low complexity optimization problem with respect to structure and complexity. The resulting control law can be seen as a predictive control type of construction if the function (5.68) is interpreted as an upper bound for a receding horizon cost function. Define the vectorz and the matrixp as follows z = [ x T x2 T ] T...xT [ r λ 2 ] λ 3... λ r P P = I With the vectorsz and matrixp, at each time instant, for a given current state x, minimize on-line the following quadratic cost function V (z )=min{z T z P z } (5.76) subject to linear constraints F o (i) x i λ i g (i) o, i=,2,...,r r x i =x, i= λ i, i=,2,...,r r λ i = i= and implement as input the control actionu=k x+ r (K i K )x i. i=2 Theorem 5.6. The control law using interpolation based on the solution of the problem(5.76) guarantees recursive feasibility and the closed loop system is ISS for all initialstatesx() Ω. Proof. Theorem 5.6 stands on two important claims, namely the recursive feasibility and the input-to-state stability. These can be treated sequentially.

197 82 5 Interpolation Based Control Robust State Feedback Case Recursivefeasibility: It has to be proved thatf u u(k) g u andx(k+ ) Ω for allx(k) Ω. It holds that and r F u u(k) =F u λ i (k)k i x i (k)= r λ i (k)f u K i x i (k) i= i= r λ i (k)g u =g u i= x(k+ ) =A(k)x(k)+B(k)u(k)+D(k)w(k) = r λ i (k){(a(k)+b(k)k i ) x i (k)+d(k)w(k)} i= Since(A(k)+B(k)K i ) x i (k)+d(k)w(k) Ω i Ω, it follows thatx(k+ ) Ω. ISSstability: From the feasibility proof, it is clear that ifxi o (k) and λo i (k),i =,2,...,r are a solution of the optimization problem (5.76) at time instantk, then x i (k+ )=A ci (k)x o i(k)+d(k)w i (k) and λ i (k+) = λi o (k) is a feasible solution at time instantk+. By solving the quadratic programming problem (5.76), one gets V (z o (k+ )) V (z (k+ )) and by using inequality (5.69), it follows that V (z o (k+ )) V (z o (k)) V (z (k+ )) V (z o (k)) x(k) T Qx(k) u(k) T Ru(k)+θω(k) T ω(k) HenceV (z ) is an ISS Lyapunov function of the system (5.66). It follows that the closed loop system with the interpolation based controller is ISS. Remark5.7. MatrixPcan be chosen as follows P... P P= P rr (5.77) In this case, the cost function (5.76) can be written by V (z )=x T P x+ r i=2 x T ip ii x i + Hence, when the current statexis in the set Ω, the optimization problem (5.76) has the trivial solution as { xi =, λ i = i=2,3,...,r r i=2 λ 2 i

198 5.5 Interpolation via quadratic programming - Algorithm 83 and thusx =x and λ =. Therefore, the interpolation based controller turns out to be the optimal unconstrained controlleru=k x. It follows that the minimal robust positively invariant setr of the system x(k+ )=(A(k)+B(k)K )x(k)+d(k)w(k) is an attractor of the closed loop system with the interpolating controller. In the other words, all trajectories will converge to the setr. In summary, the interpolation based controller via quadratic programming involves the following steps Algorithm 5.5: Interpolation based control via quadratic programming- Algorithm. Measure the current state of the systemx(k). 2. Solve the QP problem (5.76). 3. Implement as input the control actionu=k x+ r (K i K )x i. 4. Wait for the next time instantk:=k+. 5. Go to step and repeat. Example5.5. This example is based on a nominal case. Consider the following discrete time system [ ] [ x(k+ )= x(k)+ u(k)+w(k) (5.78) ] The constraints on the state variables, the control variable and the disturbances are { 5 x 5, 5 x 2 5, u. w.,. w 2. Two linear feedback controllers are chosen as { K =[ ] K 2 =[ ] i=2 (5.79) The first controlleru=k x is an LQ controller with the weighting matricesq=i, R=.. The second controlleru=k 2 x is used to enlarge the domain of attraction. The sets Ω and Ω 2 are presented in minimal normalized half-space representation as Ω = x R 2 : x.48.48

199 84 5 Interpolation Based Control Robust State Feedback Case Ω 2 = x R 2 : x With the weighting matricesq =I andr =., by solving the optimization problem (5.75) with a block diagonal matrixp, one obtains P = [ ], P 2 = [ and θ = Figure 5.6 shows the maximal robust positively invariant sets Ω, Ω 2, associated with the feedback gainsk andk 2, respectively. This figure also presents state trajectories of the closed loop system for different initial conditions and different realizations ofw(k). ] Ω 2 x 2 Ω x Fig.5.6 Feasible invariant sets and state trajectories of the closed loop system for example 5.5. For the initial conditionx =[ ] T, Figure 5.7 and Figure 5.8 present the state and input trajectories of the closed loop system as a function of time. The solid blue line is obtained by using the interpolation based control method and confirms the stabilizing as well as good performances for regulation. Needless to say the literature on robust MPC for linear systems is very rich nowadays, and one needs to confront the solutions in terms of complexity and performance. In order to compare the proposed technique and the simulation results,

200 5.5 Interpolation via quadratic programming - Algorithm 85 we choose one of the most attractive solutions, which is the tube MPC in [87]. The dashed red lines in Figure 5.7 and Figure 5.8 are obtained by using this technique. 6 4 Interpolation based control Tube MPC x Time (Sampling) 3 2 Interpolation based control Tube MPC x Time (Sampling) Fig.5.7 State trajectories as functions of time for example 5.5. The solid blue line is obtained by using the interpolation based control method, and the dashed red line is obtained by using the tube model predictive control method in [87]..6.4 Interpolation based control Tube MPC.2.2 u Time (Sampling) Fig.5.8 Input trajectories as functions of time for example 5.5. The solid blue line is obtained by using the interpolation based control method, and the dashed red line is obtained by using the tube model predictive control method in [87]. The following parameters were used for the tube MPC. The minimal robust positively invariant setr was constructed for system x(k+ )=(A+BK )x(k)+w(k) using a method in [27]. The setr is depicted in Figure 5.9. The setup of the MPC

201 86 5 Interpolation Based Control Robust State Feedback Case R x 2. W x Fig.5.9 Minimal invariant set for example 5.5. approach for the nominal system of the tube MPC framework isq =I,R =.. The prediction horizonn =. The objective functionv (z ) is depicted in Figure 5.2(a). It is worth noticing thatv (z ) is only an ISS Lyapunov function. This means, when the state is near to the origin, the functionv (z ) might be increasing at some time instants as shown in Figure 5.2(b). 8.4 ISS Lyapunov function ISS Lyapunov function Time (Sampling) (a) ISS Lyapunov function Time (Sampling) (b) Non-decreasing effect of the ISS Lyapunov function Fig. 5.2 ISS Lyapunov function and non-decreasing effect of the ISS Lyapunov function as a function of time for example 5.5. The realization of disturbancesw(k) and the interpolating coefficient λ 2 are respectively, depicted in Figure 5.2(a) and Figure 5.2(b) as a function of time.
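To make the on-line part of Algorithm 5.5 concrete, one control update can be prototyped as follows. The sketch is illustrative only (Python with cvxpy; names are assumptions): F_list[i], g_list[i] give the half-space description of Omega_i, P_star is the block-diagonal weight formed by the matrix P computed above together with an identity block for the interpolating coefficients, and K_list collects the gains K_i.

```python
import numpy as np
import cvxpy as cp

def interpolation_qp_step(x, P_star, F_list, g_list, K_list):
    """One step of Algorithm 5.5: solve the interpolation QP for the measured
    state x and return u = K_1 x + sum_{i>=2} (K_i - K_1) x_i."""
    r, n = len(K_list), x.size
    xs = [cp.Variable(n) for _ in range(r)]        # x_1, ..., x_r
    lam = cp.Variable(r, nonneg=True)              # lambda_1, ..., lambda_r
    z = cp.hstack(xs + [lam[1:]])                  # z* = [x_1..x_r, lambda_2..lambda_r]
    cons = [sum(xs) == x, cp.sum(lam) == 1]
    for i in range(r):
        cons.append(F_list[i] @ xs[i] <= lam[i] * g_list[i])   # x_i in lambda_i * Omega_i
    cp.Problem(cp.Minimize(cp.quad_form(z, P_star)), cons).solve()
    u = K_list[0] @ x
    for i in range(1, r):
        u = u + (K_list[i] - K_list[0]) @ xs[i].value
    return u
```

Inside Omega_1 the optimizer returns x_1 = x and lambda_1 = 1, so the scheme reduces to the unconstrained controller u = K_1 x, in agreement with the trivial-solution argument given earlier for the block-diagonal choice of the weighting matrix.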

202 5.6 Interpolation via quadratic programming - Algorithm w Time (Sampling). λ w Time (Sampling) (a)w(k) realization Time (Sampling) (b) Interpolating coefficient λ 2 (k) Fig.5.2 w(k) realization and interpolating coefficient λ 2 (k) as a function of time for example Interpolation based on quadratic programming for uncertain systems with bounded disturbances- Algorithm 2 In this section, an alternative approach to constrained control of uncertain systems with bounded disturbances will be proposed. Following [3], any state x is decomposed as x= r i= x i (5.8) wherex i R n withi =,2,...,r are slack variables. The corresponding control value is of the form u= r i= K i x i (5.8) withk i R m n are given such that the joint spectral radius of matricesa ci (k) is sub-unitary where One has A ci (k)=a(k)+b(k)k i, i=,2,...,r x(k+ ) =A(k)x(k)+B(k)u(k)+D(k)w(k) =A(k) r x i (k)+b(k) r K i x i (k)+d(k)w(k) i= i= or equivalently where x(k+ )= r i= x i (k+ ) { x (k+ )=(A(k)+B(k)K )x (k)+d(k)w(k) x i (k+ )=(A(k)+B(k)K i )x i (k), i=2,3,...,r (5.82)

203 88 5 Interpolation Based Control Robust State Feedback Case Thereforex (k+)=x(k+ ) r x i (k+). From the first equation of (5.82), one gets or x(k+ ) r i=2 i=2 x i (k+ )=A c (k) x(k+ )=A c (k)x(k)+b(k) ( r i=2 x(k) r i=2 x i (k) ) +D(k)w(k) (K i K )x i (k)+d(k)w(k) Together with the second equation of (5.82), one obtains an augmented system x(k+ )=A c (k)x(k)+b(k) r (K i K )x i (k)+d(k)w(k) i=2 x i (k+ )=A ci (k)x i (k), i=2,3,...,r or in a matrix form x(k+ ) x(k) x 2 (k+ ). = Λ(k) x 2 (k) + Ξ(k)w(k) (5.83). x r (k+ ) x r (k) with A c (k)b(k)(k 2 K )...B(k)(K r K ) A c2 (k)... Λ(k)= , Ξ(k)=... A cr (k) D(k). Clearly Λ(k) and Ξ(k) can be respectively, expressed as a convex combination of Λ j and Ξ j, i.e. Λ(k)= q j= where q α j (k)=, α j (k) and j= α j (k)λ j, Ξ(k)= q j= α j (k)ξ j (5.84) A j +B j K B j (K 2 K )...B j (K r K ) A j +B j K 2... Λ j = , Ξ(k)=... A j +B j K r The constraints on the augmented state D j.

204 5.6 Interpolation via quadratic programming - Algorithm 2 89 x x 2 x s =.. x r are [ ] Fx... x F u K F u (K 2 K )...F u (K r K ) s [ gx g u ] (5.85) For system (5.83) with constraints (5.85), using procedure 2.2, Chapter 2, one can compute the maximal robust positively invariant set Ψ a R rn in the form Ψ a ={x s R rn :F a x s g a } (5.86) such that for all x s (k) Ψ a, it follows that x s (k+) Ψ a and u(k) =K x(k)+ r (K i K )x i (k) U. Define Ψ R n as a set obtained by projecting the polyhedral i=2 set Ψ a onto the state spacex. Theorem 5.7. For the given system(5.), the polyhedral set Ψ is robust controlled positively invariant and admissible with respect to the constraints(5.3). Proof. Clearly, for allx(k) Ψ, there existx i (k) R n withi=2,3,...,r such that The augmented statex s (k) is in Ψ a. The control actionu(k)=k x(k)+ r (K i K )x i (k) is inu. i=2 The successor augmented statex s (k+ ) is in Ψ a. Sincex s (k+ ) Ψ a, it follows thatx(k+ ) Ψ. Hence Ψ is a robust positively invariant set. Following the principle of the construction introduced in the Section 5.5, for the given state and input weighting matricesq R n n andr R m m, consider the following quadratic function V(x s )=x T spx s (5.87) where the matrixp R rn rn andp is chosen to satisfy V(x s (k+ )) V(x s (k)) x(k) T Qx(k) u(k) T Ru(k)+τw(k) T w(k) (5.88) The left hand side of inequality (5.88) can be rewritten as V(x s (k+ )) V(x s (k)) = [ xs T w T ][ Λ T Ξ T ] P [ Λ Ξ ][ x s w ] [ x T s w T ][ P ][ ] xs w (5.89) and the right hand side

205 9 5 Interpolation Based Control Robust State Feedback Case x(k) T Qx(k) u(k) T Ru(k)+τw(k) T w(k)= [ xs T w T ][ ][ ] Q R xs τi w (5.9) where I Q =. Q[ I... ], K T (K2 T R = KT ). R[ K (K 2 K )... (K r K ) ] (Kr T K T) Substituting equations (5.89) and (5.9) into equation (5.88), one gets [ ] Λ T P [ Λ Ξ ] [ ] [ ] P Q R τi Ξ T or equivalently [ ] [ ] P Q R Λ T τi Ξ T P [ Λ Ξ ] or by using the Schur complement, one obtains P Q R Λ T P τi Ξ T P (5.9) PΛ PΞ P The left hand side of inequality (5.9) reaches the minimum on one of the vertices of Λ(k), Ξ(k) so the set of LMI conditions to be satisfied is the following P Q R Λj TP τi Ξ T j P, j=,2,...,q (5.92) PΛ j PΞ j P Again, one would like to have the smallest value of τ. This can be done by solving the following LMI optimization problem min{τ} (5.93) P,τ subject to constraint (5.92) LetPbe the solution of the problem (5.93). At each time instant for a given current state x, minimize on-line the following quadratic cost function subject to linear constraints

206 5.6 Interpolation via quadratic programming - Algorithm 2 9 subject to The control input is in the form min x s {x T spx s } (5.94) F a x s g a u=k x+ r i=2 (K i K )x i (5.95) Theorem5.8.Thecontrollaw(5.95)wherex s isasolutionofthequadraticprogramming problem(5.94) guarantees recursive feasibility and the closed loop systemisissforallinitialstatesx() Ψ. Proof. Recursivefeasibility: One has to prove thatu(k) U andx(k+) Ψ for all x(k) Ψ. Since for allx(k) Ψ, there existx i (k) withi=2,3,...,r such thatx s (k) Ψ a. Hence the optimization problem (5.94) is always feasible. From equation (5.95), it follows that u(k)=k x(k)+ r i=2 (K i K )x i (k) U With this control input, it holds thatx s (k+ ) Ψ a. Hencex(k+ ) Ψ. ISSstability: Since matrixpis a solution of the LMI problem (5.93), it is clear that the objective function is an ISS Lyapunov function, which then subsequently guarantees the ISS stability. An on-line interpolation based control via quadratic programming is Algorithm 5.6: Interpolation based control via quadratic programming Measure the current state of the systemx(k). Solve the QP problem (5.94). Implement as input the control actionu=k x+ r (K i K )x i. Wait for the next time instantk:=k+. Go to step and repeat. Example5.6. Consider the following uncertain linear discrete-time system x(k+ )=A(k)x(k)+B(k)u(k)+w(k) (5.96) i=2 where A(k)=α(k)A +( α(k))a 2, B(k)=α(k)B +( α(k))b 2

207 92 5 Interpolation Based Control Robust State Feedback Case and [ ] [ ]. A =, B =, [ ] [ ].2 A 2 =, B 2 = 2 At each sampling time α(k) [, ] is an uniformly distributed pseudo-random number. he constraints are x, x 2, u,. w.,. w 2. Two feedback controllers are chosen as { K =[ ], K 2 =[ ] (5.97) Figure 5.22(a) presents the robust controlled invariant set Ψ, obtained by projecting the augmented invariant set Ψ a onto thexparameter space. This figure also shows the maximal robustly invariant sets Ω and Ω 2 obtained by using the single controllersk andk 2. It can be observed that the set Ψ is different from the convex hull of the sets Ω and Ω 2. Figure 5.22(b) presents different state trajectories of the closed loop system for different initial conditions and different realizations ofw(k). 8 Ω Ψ x 2 Ω x x (a) Feasible invariant sets x (b) State trajectories Fig.5.22 Feasible invariant sets and state trajectories of the closed loop system for example 5.6. The set Ψ is presented in minimal normalized half-space representation as

208 5.6 Interpolation via quadratic programming - Algorithm Ψ = x R n : x With the weighting matrices Q= [ ], R= and by solving the optimization problem (5.93) with a block diagonal matrix P, one obtains [ ] P P= P 2 with P = [ ] [ ] , P = , and τ = For the initial conditionx()=[ ] T, Figure 5.23 shows the state and input trajectories of the closed loop system as a function of time. The solid blue lines are obtained by using interpolation based control method. The dashed red lines are obtained by using the controlleru(k)=k 2 x(k). From the figures, it is clear that the performance of the closed loop system with the interpolation based controller is better than the closed loop system with the single feedbacku(k)=k 2 x(k) The ISS Lyapunov function and its non-decreasing effect where states near the origin are depicted in Figure 5.24(a) and Figure 5.24(b), respectively. Figure 5.25 shows the α(k) andw(k) realizations as a function of time.
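Analogously, one control update of Algorithm 5.6 can be prototyped as follows (illustrative Python with cvxpy; names are assumptions). Here F_a, g_a describe the augmented invariant set Psi_a computed off-line, P solves the LMI problem above, and the first block of the augmented vector is fixed to the measured state.

```python
import numpy as np
import cvxpy as cp

def augmented_qp_step(x, P, F_a, g_a, K_list):
    """One step of Algorithm 5.6: with x_s = [x; x_2; ...; x_r], minimize
    x_s' P x_s subject to F_a x_s <= g_a (i.e. x_s in Psi_a), the first block
    being the measured state, then apply u = K_1 x + sum_{i>=2}(K_i - K_1) x_i."""
    r, n = len(K_list), x.size
    slacks = [cp.Variable(n) for _ in range(r - 1)]      # x_2, ..., x_r
    x_s = cp.hstack([x] + slacks)
    cons = [F_a @ x_s <= g_a]
    cp.Problem(cp.Minimize(cp.quad_form(x_s, P)), cons).solve()
    u = K_list[0] @ x
    for K_i, xi in zip(K_list[1:], slacks):
        u = u + (K_i - K_list[0]) @ xi.value
    return u
```

The off-line burden is thus the computation of Psi_a for the augmented dynamics and of the matrix P, while the on-line burden reduces to a single QP in (r-1)n variables.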

209 α u 94 5 Interpolation Based Control Robust State Feedback Case x Interpolation based control u = K x Interpolation based control u = K x Time (Sampling) x Interpolation based control u = K 2 x Time (Sampling) (a) State trajectory Time (Sampling) (b) Input trajectory Fig.5.23 State and input trajectory of the closed loop system as a function of time for example 5.6. The solid blue lines are obtained by using interpolation based control method. The dashed red lines are obtained by using the controlleru(k)=k 2 x(k). 5 x ISS Lyapunov function 5 ISS Lyapunov function Time (Sampling) Time (Sampling) (a) ISS Lyapunov function (b) Non-decreasing effect Fig.5.24 Lyapunov function and its non-decreasing effect as a function of time for example w 2 w Time (Sampling) Time (Sampling) (a) α(k) realization Time (Sampling) (b)w(k) realization Fig.5.25 α(k) andw(k) realizations as a function of time for example Convex hull of invariant ellipsoids for uncertain systems 5.7. Interpolation based on LMI In this subsection, a set of quadratic functions will be used for estimating the domain of attraction for a constrained discrete-time linear time-varying or uncertain

210 5.7 Convex hull of invariant ellipsoids for uncertain systems 95 systems. It will be shown that the convex hull of a set of invariant ellipsoids is invariant. The ultimate goal is to design a method for constructing a constrained feedback law based on an interpolation technique for a given set of saturated feedback laws. In the absence of disturbances, the system considered is of the form x(k+ )=A(k)x(k)+B(k)u(k) (5.98) It is assumed that the polyhedral state constraints X and the polyhedral input constraints U are symmetric. Using established result in control theory and theorem 2.2, one obtains a set of asymptotically stabilizing feedback controllersk i R m n and a set of auxiliary matricesh i R m n fori=,2,...,r such that the corresponding ellipsoidal invariant setse(p i ) E(P i )= { x R n :x T P i x } (5.99) is non-empty for i =,2,...,r. Recall that for all x(k) E(P i ), it follows that sat(k i x) U andx(k+ )=A(k)x(k)+B(k)sat(K i x(k)) X. Denote Ω E R n as a convex hull ofe(p i ). It follows that Ω E X, sincee(p i ) X. Any statex(k) Ω E can be decomposed as follows x(k)= r i= λ i x i (k) (5.) with x i (k) E(P i ) and λ i are interpolating coefficients, that satisfy r i= Consider the following control law u(k)= λ i =, λ i r i= λ i sat(k i x i (k)) (5.) wheresat(k i x i (k)) is the saturated control law, that is feasible ine(p i ). Theorem 5.9. The control law(5.) guarantees recursive feasibility for all x() Ω E. Proof. The proof of this theorem is the same as the proof of theorem 4. and is omitted here. As in the previous Sections, the first feedback gain in the sequence will be used for satisfying performance specifications near the origin, while the remaining gains will be used to enlarge the domain of attraction. For the given current statex, consider the following optimization problem

211 96 5 Interpolation Based Control Robust State Feedback Case r min{ x i,λ i i=2 λ i } (5.2) subject to x T i P i x i, i=,2,...,r r i= r λ i x i =x λ i = i= λ i, i=,2,...,r Theorem 5.. The control law using interpolation based on the objective function in(5.2)guaranteesrobustasymptoticstabilityforallinitialstatesx() Ω E. Proof. Let λi o be the solutions of the optimization problem (5.2) and consider the following positive function r V(x)= λi o (k) (5.3) for allx Ω E \E(P ).V(x) is a Lyapunov function candidate. For anyx(k) Ω E, one has x(k)= r λi o(k) xo i (k) i= u(k)= r λi o(k)sat(k i x i o(k)) i= It follows that x(k+ ) =A(k)x(k)+B(k)u(k) =A(k) r λi o(k) xo i (k)+b(k) r λi o(k)sat(k i x i o(k)) i= i= = r λi o(k) x i(k+ ) i= where x i (k+ )=A(k) x i o(k)+b(k)sat(k i x i o(k)) E(P i) for alli=,2,...,r. By using the interpolation based on the optimization problem (5.2) x(k+ )= r i= where x o i (k+ ) E(P i). It follows that r λ o i=2 andv(x) is a non-increasing function. i=2 i (k+ ) λ o i (k+ ) x o i(k+ ) r i=2 λ o i (k)

212 5.7 Convex hull of invariant ellipsoids for uncertain systems 97 The contractive invariant property of the ellipsoide(p i ) assures that there is no initial conditionx() Ω E \E(P ) such that i=2 r i=2 λ o i (k+ )= r i=2 λi o (k) for allk. It follows thatv(x)= r λi o(k) is a Lyapunov function for allx Ω E\E(P ). The proof is complete by noting that insidee(p ), the robust stabilizing controlleru=sat(k x) is contractive and thus the interpolation based controller assures robust asymptotic stability for allx Ω E. With a slight abuse of notation, denotex i = λ i x i. Since x i E(P i ), it follows thatxi TP i x i λi 2. The non-linear optimization problem (5.2) can be rewritten as follows subject to xi TP r i= r r min{ x i,λ i i=2 λ i } i x i λi 2, i=,2,...,r x i =x λ i = i= λ i, i=,2,...,r or by using the Schur complement subject to [ λi xi T x i λ i P i x i =x r i= r min r x i,λ i i=2 λ i = i= λ i, i=,2,...,r λ i (5.4) ], i=,2,...,r This can be cast in terms of an LMI optimization. In summary, at each time instant the interpolation based controller involves the following steps

213 98 5 Interpolation Based Control Robust State Feedback Case Algorithm 5.7 Interpolation based control- Convex hull of ellipsoids. Measure the current state of the systemx(k). 2. Solve the LMI problem (5.4). In the result, one getsxi o E(P i ) and λi o for all i=,2,...,q. 3. Forxi o E(P i ), one associates the control valueu o i =sat(k i xi o). 4. The control valueu(k) is found as a convex combination ofu o i u(k)= r i= λ o i (k)u o i Geometrical properties of the solution The aim of this section is to highlight the properties of the solution of the optimization problem (5.4). Define a vectorx 2r R n and a set Ω 2r R n as follows x 2r = r λ i x i i=2 Ω 2r = Convex hull(e(p i )),i=2,3,...,r The following theorem holds Theorem5..For all x(k) / E(P ), the solution of the optimization problem (5.4)isreachedifx(k)iswrittenasaconvexcombinationoftwopointsx o and x o 2r,wherexo Fr(E(P ))andx o 2r Fr(Ω 2r). 3 2 E(P ) x 2 2 o x23 x x o x x 23 E(P 3 ) E(P 2 ) x Fig.5.26 Graphical illustration of the construction related to theorem 5..

214 5.7 Convex hull of invariant ellipsoids for uncertain systems 99 Proof. Suppose thatxis decomposed asx = λ x + λ 2r x 2r, wherex E(P) and x 2r Ω 2r. Ifx 2r is strictly inside Ω 2r, by settingx o 2r = Fr(Ω 2r) x,x 2r (the intersection between the boundary of Ω 2r and the line connectingxandx 2r ), one has x=λ o xo + λo 2r xo 2r with λo 2r < λ 2r which leads to a contradiction from the optimization point of view. Thus the first conclusion is that in general terms, for the optimal solution one has(x o,xo 2r ), wherexo 2r Fr(Ω 2r). Analogously, ifx is strictly insidee(p ), by settingx o = Fr(E(P )) x,x (the intersection between the boundary ofe(p ) and the line connectingxandx ) one obtainsx=λ o xo +λo 2r xo 2r where λo λ and λ o 2r λ 2r. This is again a contradiction leading to the conclusion that for the optimal solution(x o,xo 2r ), one hasxo E(P ). Remark5.8. For allx(k) E(P ) the result of the optimization problem (5.4) has a trivial solutionx (k)=x(k) and thus λ = and λ 2r =. For allx(k) / E(P ), the fact thatx o 2r belongs to the boundary of Ω 2r implies that eitherx o i Fr(E(P i )) orx o i =. Or by denoting x i = λ i x i, one concludes that the optimal solutions of the problem (5.4) satisfy and for allx(k) / E(P ) x T ip i x i = λ 2 i, i=2,3,...,r x T P x = λ 2 Hence for allx(k) / E(P ), the optimal solution of the problem (5.4) satisfy x T ip i x i = λ 2 i, i=,2,...,r Example 5.7. Consider the uncertain linear discrete-time system in example (5.) with the same state and control constraints. Two linear feedback controllers is chosen as { K =[ ], (5.5) K 2 =[ ] Based on theorem 2.2, two auxiliary matrices are defined as { H =[.86.86] H 2 =[ ] (5.6) With the auxiliary matrices H and H 2, two invariant ellipsoids E(P ) and E(P 2 ) are respectively constructed for the saturated controllersu =sat(k x) and u=sat(k 2 x), see Figure 5.27(a). Figure 5.27(b) shows the state trajectories of the closed loop system for different initial conditions and different realization of α(k). The matricesp andp 2 are P = [ ], P 2 = [ ]

Fig. 5.27 Feasible invariant sets (a) and state trajectories of the closed loop system (b) for Example 5.7.

For the initial condition x(0) = [ · ]^T, Figure 5.28 shows the state and input trajectories of the closed loop system as functions of time.

Fig. 5.28 State trajectory (a) and input trajectory (b) of the closed loop system as functions of time for Example 5.7.

Figure 5.29 presents the interpolating coefficient λ_2(k), acting as a Lyapunov function, and the realization of α(k) as functions of time. As expected, the function λ_2(k) is positive and non-increasing.

Fig. 5.29 Interpolating coefficient (a) and α(k) realization (b) as functions of time for Example 5.7.
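The trajectories of Example 5.7 are generated by repeatedly solving problem (5.4) and applying steps 3 and 4 of Algorithm 5.7. The sketch below shows one such control update. It is not code from the thesis: it assumes numpy and cvxpy are available, the matrices P_i, gains K_i and saturation level u_max passed in are placeholders for the quantities computed offline, and the ellipsoidal constraint of (5.4) is imposed in its equivalent second-order-cone form ‖chol(P_i) x̃_i‖ ≤ λ_i instead of the Schur-complement LMI.

```python
import numpy as np
import cvxpy as cp

def interpolated_control(x, P_list, K_list, u_max):
    """One update of Algorithm 5.7: decompose x over the ellipsoids E(P_i)
    by solving problem (5.4), then blend the saturated local controls."""
    r, n = len(P_list), x.shape[0]
    xt = [cp.Variable(n) for _ in range(r)]          # x~_i = lambda_i * x_i
    lam = cp.Variable(r, nonneg=True)
    # Cholesky factors: P_i = L_i^T L_i, so ||L_i x~_i|| <= lam_i  <=>  x~_i^T P_i x~_i <= lam_i^2
    L = [np.linalg.cholesky(P).T for P in P_list]
    cons = [cp.sum(lam) == 1, sum(xt) == x]
    cons += [cp.norm(L[i] @ xt[i]) <= lam[i] for i in range(r)]
    cp.Problem(cp.Minimize(cp.sum(lam[1:])), cons).solve()

    u = np.zeros(K_list[0].shape[0])
    for i in range(r):
        if lam.value[i] > 1e-9:
            x_i = xt[i].value / lam.value[i]         # recover x_i from x~_i
            u_i = np.clip(K_list[i] @ x_i, -u_max, u_max)   # u_i = sat(K_i x_i)
            u += lam.value[i] * u_i
    return u, lam.value
```

In a closed-loop run, interpolated_control is called at every sampling instant with the measured x(k); the returned Σ_{i≥2} λ_i^o(k) is the non-increasing coefficient plotted in Figure 5.29.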

Chapter 6
Interpolation Based Control - Output Feedback Case

So far in this manuscript, state feedback control problems have been considered. However, in practice, direct information (measurement) of the complete state of a dynamic system may not be available. In this case, an observer could possibly be used for the state estimation. A serious drawback of observer-based approaches is the observer error, which has to be included in the uncertainty. In addition, whenever the constraints become active, the nonlinearity dominates the properties of the state feedback control system and one cannot expect the separation principle to hold. Moreover, there is no guarantee that the constraints will be satisfied along the closed-loop trajectories.

In this chapter we revisit the problem of state reconstruction through measurement and storage of appropriate previous measurements. Even if the resulting model might be non-minimal, it is directly measurable and provides an appropriate model for control design with constraint handling guarantees. Finally, it will be shown how the interpolation-based control principles lead to an output-feedback control design procedure.

6.1 Problem formulation

Consider the problem of regulating to the origin the following discrete-time linear time-varying or uncertain system, described by the input-output relationship

  y(k+1) + E_1 y(k) + E_2 y(k−1) + ... + E_s y(k−s+1)
        = N_1 u(k) + N_2 u(k−1) + ... + N_r u(k−r+1) + w(k)                    (6.1)

where y(k) ∈ R^p, u(k) ∈ R^m and w(k) ∈ R^p are respectively the output, the input and the disturbance vector. The matrices E_i for i = 1, ..., s and N_i for i = 1, ..., r have suitable dimensions. For simplicity, it is assumed that s = r. The matrices E_i and N_i for i = 1, 2, ..., s satisfy

  Γ = [ E_1 E_2 ... E_s  N_1 N_2 ... N_s ] = Σ_{i=1}^{q} α_i(k) Γ_i            (6.2)

where α_i(k) ≥ 0 and Σ_{i=1}^{q} α_i(k) = 1, and

  Γ_i = [ E_1^i E_2^i ... E_s^i  N_1^i N_2^i ... N_s^i ]

are the extreme realizations of a polytopic model. The output and control vectors are subject to the following hard constraints

  y(k) ∈ Y,  Y = { y ∈ R^p : F_y y ≤ g_y },
  u(k) ∈ U,  U = { u ∈ R^m : F_u u ≤ g_u },                                    (6.3)

where Y and U are C-sets. It is assumed that the disturbance w(k) is unknown, additive and lies in the polytope W, i.e. w(k) ∈ W, where W = { w ∈ R^p : F_w w ≤ g_w } is a C-set.

6.2 Output feedback - Nominal case

In this section we consider the case when the matrices E_j and N_j for j = 1, 2, ..., s are known and fixed. The case when E_j and N_j for j = 1, 2, ..., s are unknown or time-varying will be treated in the next section.

A state space representation will be constructed along the lines of [52]. All the steps of the construction are detailed such that the presentation of the results is self contained. The state of the system is chosen as a vector of dimension p·s with the following components

  x(k) = [ x_1(k)^T  x_2(k)^T  ...  x_s(k)^T ]^T                               (6.4)

where

  x_1(k) = y(k)
  x_2(k) = −E_s x_1(k−1) + N_s u(k−1)
  x_3(k) = −E_{s−1} x_1(k−1) + x_2(k−1) + N_{s−1} u(k−1)
  x_4(k) = −E_{s−2} x_1(k−1) + x_3(k−1) + N_{s−2} u(k−1)
  ...
  x_s(k) = −E_2 x_1(k−1) + x_{s−1}(k−1) + N_2 u(k−1)                           (6.5)

The subcomponents of the state vector can be interpreted exclusively in terms of the input and output contributions as

  x_2(k) = −E_s y(k−1) + N_s u(k−1)
  x_3(k) = −E_{s−1} y(k−1) − E_s y(k−2) + N_{s−1} u(k−1) + N_s u(k−2)
  ...
  x_s(k) = −E_2 y(k−1) − E_3 y(k−2) − ... − E_s y(k−s+1)
           + N_2 u(k−1) + N_3 u(k−2) + ... + N_s u(k−s+1)

One has

  y(k+1) = −E_1 y(k) − E_2 y(k−1) − ... − E_s y(k−s+1)
           + N_1 u(k) + N_2 u(k−1) + ... + N_s u(k−s+1) + w(k)

or, equivalently,

  x_1(k+1) = −E_1 x_1(k) + x_s(k) + N_1 u(k) + w(k)

The state space model is then defined in a compact linear difference equation form as

  x(k+1) = A x(k) + B u(k) + D w(k)
  y(k)   = C x(k)                                                              (6.6)

where

  A = [ −E_1       0   0  ...  0   I
        −E_s       0   0  ...  0   0
        −E_{s−1}   I   0  ...  0   0
        −E_{s−2}   0   I  ...  0   0
          ...
        −E_2       0   0  ...  I   0 ],
  B = [ N_1 ; N_s ; N_{s−1} ; N_{s−2} ; ... ; N_2 ],
  D = [ I ; 0 ; ... ; 0 ],
  C = [ I  0  ...  0 ].

The model (6.6) has been elaborated such that the state is available by simple storage of input values and output signal measurements. One natural question is whether this important advantage has to be paid in terms of dimension. In comparison with classical state space representations, the model (6.6) is minimal in the single-input single-output case. However, in the multi-input multi-output case this realization might not be minimal, as shown in the following example.

Consider the following single-input multi-output discrete-time system

  y(k+1) + [ 2  · ; ·  2 ] y(k) + [ · ] y(k−1) = [ .5 ; 2 ] u(k) + [ .5 ; · ] u(k−1) + w(k)   (6.7)

Using the above construction, the state space model is given as

  x(k+1) = A x(k) + B u(k) + D w(k)
  y(k)   = C x(k)

where

  A = [ · ],  B = [ · ],  C = [ · ],  D = [ · ],

with the blocks filled in from the coefficients of (6.7) according to (6.6). This realization is not minimal, since it unnecessarily replicates the common poles of the denominators in the input-output description. There exists an alternative lower dimensional construction, e.g.

  A = [ 2  · ; ·  · ],  B = [ · ; .5 ],  D = [ .5 ; · ],  C = [ · ].

Denote

  z(k) = [ y(k)^T ... y(k−s+1)^T  u(k−1)^T ... u(k−s+1)^T ]^T                  (6.8)

Based on equation (6.4), the state vector x(k) is related to the vector z(k) as

  x(k) = T z(k)                                                                (6.9)

where T = [ T_1  T_2 ],

  T_1 = [ I      0         0       ...   0
          0     −E_s       0       ...   0
          0     −E_{s−1}  −E_s     ...   0
          ...
          0     −E_2      −E_3     ...  −E_s ],
  T_2 = [ 0         0      ...  0
          N_s       0      ...  0
          N_{s−1}   N_s    ...  0
          ...
          N_2       N_3    ...  N_s ].

From equation (6.9) it becomes obvious that at any time instant k the state vector is available exclusively through measurement and storage of appropriate previous measurements.

Our main objective remains the treatment of the constraints (6.3). After simple set manipulations, these can be translated into state constraints of the type x_i(k) ∈ X_i, where X_i is given by

  X_1 = Y
  X_2 = E_s(−X_1) ⊕ N_s U
  X_i = E_{s+2−i}(−X_1) ⊕ X_{i−1} ⊕ N_{s+2−i} U,  i = 3, ..., s               (6.10)

In summary, the constraints on the state are x ∈ X, where X = { x : F_x x ≤ g_x }.

Example 6.1. Consider the following discrete-time system

  y(k+1) − 2 y(k) + y(k−1) = .5 u(k) + .5 u(k−1) + w(k)                        (6.11)
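Before working through the example, it is worth noting that the construction (6.4)-(6.10) is purely mechanical and can be scripted directly. The sketch below is not code from the thesis; it assumes numpy and builds A, B, C, D of (6.6) and T of (6.9) from the coefficient lists E = [E_1, ..., E_s], N = [N_1, ..., N_s] of (6.1), with p outputs and m inputs.

```python
import numpy as np

def io_to_state_space(E, N, p, m):
    """Build A, B, C, D of (6.6) and T of (6.9) from the coefficient lists
    E = [E_1, ..., E_s], N = [N_1, ..., N_s] of the I/O model (6.1)."""
    s = len(E)
    Zp = np.zeros((p, p))
    Ip = np.eye(p)

    # First block row of A: x_1(k+1) = -E_1 x_1(k) + x_s(k) + N_1 u(k) + w(k).
    rows = [np.hstack([-E[0]] + [Zp] * (s - 2) + [Ip])]
    # Rows 2..s follow (6.5): x_i(k+1) = -E_{s+2-i} x_1(k) + x_{i-1}(k) + N_{s+2-i} u(k).
    for i in range(2, s + 1):
        blocks = [-E[s + 1 - i]] + [Zp] * (s - 1)
        if i >= 3:
            blocks[i - 2] = Ip          # contribution of x_{i-1}(k)
        rows.append(np.hstack(blocks))
    A = np.vstack(rows)

    B = np.vstack([N[0]] + [N[s + 1 - i] for i in range(2, s + 1)])
    D = np.vstack([Ip] + [Zp] * (s - 1))
    C = np.hstack([Ip] + [Zp] * (s - 1))

    # T = [T1 T2] maps z(k) = [y(k), ..., y(k-s+1), u(k-1), ..., u(k-s+1)] to x(k).
    T1 = np.zeros((s * p, s * p))
    T2 = np.zeros((s * p, (s - 1) * m))
    T1[:p, :p] = Ip
    for i in range(2, s + 1):           # subcomponent x_i(k), cf. the expressions above
        for j in range(1, i):           # it depends on y(k-j) and u(k-j)
            T1[(i - 1) * p:i * p, j * p:(j + 1) * p] = -E[s - i + j]
            T2[(i - 1) * p:i * p, (j - 1) * m:j * m] = N[s - i + j]
    return A, B, C, D, np.hstack([T1, T2])
```

Applied to (6.11) with s = 2, p = m = 1 and the coefficients as written there, it returns the 2 x 2 model and the matrix T used in the remainder of the example.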

The constraints on the output, the input and the disturbance are

  −5 ≤ y(k) ≤ 5,  −5 ≤ u(k) ≤ 5,  −0.1 ≤ w(k) ≤ 0.1.

The state space model is given by

  x(k+1) = A x(k) + B u(k) + D w(k)
  y(k)   = C x(k)

where, following (6.6),

  A = [ 2  1 ; −1  0 ],  B = [ .5 ; .5 ],  D = [ 1 ; 0 ],  C = [ 1  0 ].

The state x(k) is available through the measured input, the output and their past measured values as

  x(k) = T z(k)

where

  z(k) = [ y(k)  y(k−1)  u(k−1) ]^T,  T = [ 1  0  0 ; 0  −1  .5 ].

The constraints on the state according to (6.10) are

  −5 ≤ x_1 ≤ 5,  −7.5 ≤ x_2 ≤ 7.5.

Using the linear quadratic regulator with the weighting matrices

  Q = C^T C = [ 1  0 ; 0  0 ],  R = ·

as the local controller, the feedback gain obtained is

  K = [ · ].

Algorithm 5.1 in Section 5.2 will be employed with the global vertex controller in this example. Using Procedures 2.2 and 2.3 of Chapter 2, one obtains the sets Ω_max and C_N as shown in Figure 6.1(a). Note that C_3 = C_4; in this case C_3 is the maximal invariant set for system (6.11). Figure 6.1(b) presents different state trajectories for different initial conditions and different realizations of w(k). The set of vertices of C_N is given by the matrix V(C_N) below, together with the control matrix U_v,

Fig. 6.1 Feasible invariant sets Ω_max, C_3 (a) and state trajectories (b) for Example 6.1.

  V(C_N) = [ · ],  U_v = [ · ].

The set Ω_max is presented in minimal normalized half-space representation as

  Ω_max = { x ∈ R^2 : [ · ] x ≤ 1 }.

For the initial condition x(0) = [ ·  7.5 ]^T, Figure 6.2 shows the output and input trajectories of the closed loop system as functions of time.

Fig. 6.2 Output trajectory (a) and input trajectory (b) of the closed loop system for Example 6.1.

The interpolating coefficient and the realization of w(k) as functions of time are depicted in Figure 6.3. As expected, the interpolating coefficient, i.e. the Lyapunov function, is positive and non-increasing.
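The local LQR gain quoted above can be reproduced with a standard discrete-time Riccati solve. The sketch below is not from the thesis: it uses scipy, the model matrices follow the reconstruction of (6.6) for Example 6.1 given earlier, the input weight R is a placeholder (its printed value did not survive extraction), and the gain is returned for the convention u(k) = −K x(k).

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Measured-state model of Example 6.1 (see (6.6)/(6.9) above).
A = np.array([[2.0, 1.0],
              [-1.0, 0.0]])
B = np.array([[0.5],
              [0.5]])
C = np.array([[1.0, 0.0]])

Q = C.T @ C                 # penalize the measured output y(k) = C x(k)
R = np.array([[1e-3]])      # small input weight (placeholder value)

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u(k) = -K x(k)
print("local LQR gain:", K)
```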

Fig. 6.3 Interpolating coefficient (a) and realization of w(k) (b) for Example 6.1.

In order to provide a term of comparison for the present approach, we present a solution based on the well-known steady state Kalman filter. Figure 6.4(a) shows the output trajectories using the constrained output feedback approach and the Kalman filter + constrained state feedback approach. It is obvious that the minimal robust positively invariant set for the Kalman filter based approach is larger than the minimal robust positively invariant set of the approach presented in this section.

For the sake of completeness of the comparison, we mention that the Matlab routine kalman was used for designing the Kalman filter. The process noise is a white noise with a uniform distribution and no measurement noise was considered. The disturbance w is a random number with a uniform distribution, w_l ≤ w ≤ w_u, where w_l = −0.1 and w_u = 0.1. The variance of w is given as

  C_w = (w_u − w_l + 1)^2 / 12 = .367

The estimator gain of the Kalman filter is obtained as

  L = [ 2  · ]^T.

The Kalman filter is used to estimate the state of the system and this estimate is then used to close the loop with the interpolated control law. In contrast to the output feedback approach, where the state is exact with respect to the measurements, in the Kalman filter approach an extra level of uncertainty is introduced around the state trajectory by mixing the additive disturbances into the estimation process. Thus there is no guarantee that the constraints are satisfied in the transitory stage. This constraint violation effect is shown in Figure 6.4(b). Figure 6.4(c) presents the output trajectories of our approach and of the Kalman filter based approach.
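Outside Matlab, the steady-state estimator gain used in this comparison can be obtained from the filtering Riccati equation by duality. The sketch below is only an illustration of that computation, not the thesis design: scipy is assumed, the noise covariances are placeholders chosen to mimic a small uniformly distributed disturbance entering through D, and a tiny nonzero measurement-noise covariance is used to keep the Riccati solve well posed.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Plant of Example 6.1; the disturbance enters through D (cf. (6.6)).
A = np.array([[2.0, 1.0], [-1.0, 0.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[1.0], [0.0]])

Cw = 3.3e-3                       # disturbance variance (placeholder)
Qn = Cw * (D @ D.T)               # process-noise covariance
Rn = np.array([[1e-6]])           # (near-)zero measurement noise

# Estimation DARE via duality, then the steady-state (predictor) gain.
P = solve_discrete_are(A.T, C.T, Qn, Rn)
L = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + Rn)
print("steady-state Kalman gain:", L.ravel())
```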

Fig. 6.4 Comparison between the output feedback approach and the Kalman filter based approach for Example 6.1: (a) state trajectories, (b) constraint violation, (c) output trajectories.

6.3 Output feedback - Robust case

A weakness of the approach in Section 6.2 is that the state measurement is available if and only if the parameters of the system are known. For an uncertain or time-varying system this is not the case. In this section we provide another method for constructing the state variables, one that does not use any information about the system parameters. Based on the measured plant input, output and their past measured values, the state of the system (6.1) is chosen as

  x(k) = [ y(k)^T ... y(k−s+1)^T  u(k−1)^T ... u(k−s+1)^T ]^T                  (6.12)

The state space model is then defined as

  x(k+1) = A x(k) + B u(k) + D w(k)
  y(k)   = C x(k)                                                              (6.13)

where

the first block row of A is [ −E_1  −E_2 ... −E_s   N_2  N_3 ... N_s ], the remaining block rows of A contain identity blocks that shift the stored outputs y(k), ..., y(k−s+2) and the stored inputs u(k−1), ..., u(k−s+2) one position down (the block row corresponding to u(k−1) in x(k+1) is zero, since the new value u(k) enters through B), and

  B = [ N_1 ; 0 ; ... ; 0 ; I ; 0 ; ... ; 0 ],  D = [ I ; 0 ; ... ; 0 ],  C = [ I  0  ...  0 ].

From equation (6.2) it is clear that the matrices A and B belong to a polytopic set

  (A, B) ∈ Convex hull{ (A_1, B_1), (A_2, B_2), ..., (A_q, B_q) }              (6.14)

where the vertices (A_i, B_i) are obtained from the vertices of (6.2). Although the obtained representation is non-minimal, it has the merit that the original output-feedback problem for the uncertain plant has been transformed into a state-feedback problem where the matrices A and B lie in the polytope defined by (6.14) without any additional uncertainty, and any state-feedback control designed for this representation in the form u = K x can be translated into a dynamic output feedback controller.

Based on equation (6.3), it is clear that x(k) ∈ X ⊂ R^{n_x}. Explicitly, X is given by

  X = Y × ... × Y (s times) × U × ... × U (s−1 times) = { x ∈ R^{n_x} : F_x x ≤ g_x }.

Example 6.2. Consider the following transfer function

  P(s) = (k_1 s + 1) / ( s (s + k_2) )                                         (6.15)

where k_1 = .787 and 0.1 ≤ k_2 ≤ 3. Using a sampling time of 0.1 and Euler's first order approximation for the derivative, the following input-output relationship is obtained

  y(k+1) − (2 − 0.1 k_2) y(k) + (1 − 0.1 k_2) y(k−1)
        = 0.1 k_1 u(k) + (0.01 − 0.1 k_1) u(k−1) + w(k)                        (6.16)

The signal w(k) represents the process noise, with −0.1 ≤ w ≤ 0.1. The following constraints are considered on the measured variables

  |y(k)| ≤ 5,  |u(k)| ≤ 5.
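The matrices of (6.13) are again a mechanical function of the coefficients of (6.1), and evaluating that function at each vertex of (6.2) directly yields the vertices (A_i, B_i) of (6.14). A numpy sketch of the construction used in the rest of this example is given below; it is an illustration, not code from the thesis.

```python
import numpy as np

def measured_state_model(E, N, p, m):
    """Matrices A, B, C, D of (6.13) for the measured state
    x(k) = [y(k), ..., y(k-s+1), u(k-1), ..., u(k-s+1)], built from
    E = [E_1, ..., E_s], N = [N_1, ..., N_s] of the I/O model (6.1)."""
    s = len(E)
    ny, nu = s * p, (s - 1) * m
    A = np.zeros((ny + nu, ny + nu))
    B = np.zeros((ny + nu, m))
    D = np.zeros((ny + nu, p))
    # First block row: y(k+1) = -E_1 y(k) - ... - E_s y(k-s+1)
    #                           + N_1 u(k) + N_2 u(k-1) + ... + N_s u(k-s+1) + w(k)
    for j in range(s):
        A[:p, j * p:(j + 1) * p] = -E[j]
    for j in range(1, s):
        A[:p, ny + (j - 1) * m: ny + j * m] = N[j]
    B[:p, :] = N[0]
    D[:p, :] = np.eye(p)
    # Shift registers for the stored outputs and inputs.
    for j in range(1, s):
        A[j * p:(j + 1) * p, (j - 1) * p:j * p] = np.eye(p)
    B[ny:ny + m, :] = np.eye(m)        # at the next step, the stored u(k-1) is u(k)
    for j in range(1, s - 1):
        A[ny + j * m: ny + (j + 1) * m, ny + (j - 1) * m: ny + j * m] = np.eye(m)
    C = np.zeros((p, ny + nu))
    C[:, :p] = np.eye(p)
    return A, B, C, D
```

For Example 6.2 the two vertices correspond to the extreme values of k_2 in (6.16); with the coefficients as written there, only A varies between the vertices, while B, C and D stay fixed.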

The state x(k) is constructed as

  x(k) = [ y(k)  y(k−1)  u(k−1) ]^T.

Hence, the state space model is given by

  x(k+1) = A x(k) + B u(k) + D w(k)
  y(k)   = C x(k)

where

  A = [ (2 − 0.1 k_2)   −(1 − 0.1 k_2)   (0.01 − 0.1 k_1)
         1               0                0
         0               0                0              ],
  B = [ 0.1 k_1 ; 0 ; 1 ],  D = [ 1 ; 0 ; 0 ],  C = [ 1  0  0 ].

Using the polytopic uncertainty description, one obtains

  A = α A_1 + (1 − α) A_2

where A_1 = [ · ] and A_2 = [ · ] correspond to the two extreme values of k_2. At each time instant, α ∈ [0, 1] and w ∈ [−0.1, 0.1] are uniformly distributed pseudo-random numbers.

Algorithm 5.1 in Section 5.2 will be employed with a global saturated controller in this example. For this purpose, two controllers have been designed:

- the local linear controller u(k) = K x(k), for performance; in this example the peak-to-peak controller is chosen, K = [ · ];
- the global saturated controller u(k) = sat(K_s x(k)), for the domain of attraction, K_s = [ · ].

It is worth noticing that these controllers can be described in output-feedback form: a gain K = [ κ_1  κ_2  κ_3 ] acting on x(k) = [ y(k)  y(k−1)  u(k−1) ]^T is the first-order output-feedback controller K(z) = (κ_1 z + κ_2)/(z − κ_3), and similarly for K_s(z). The peak-to-peak control law is developed in the next section, Section 6.4.
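The conversion just mentioned follows from u(k) = κ_1 y(k) + κ_2 y(k−1) + κ_3 u(k−1): collecting terms in the shift operator gives U(z)(1 − κ_3 z^{-1}) = (κ_1 + κ_2 z^{-1}) Y(z), i.e. K(z) = (κ_1 z + κ_2)/(z − κ_3). A small sketch of the conversion and of running such a controller sample by sample is given below (numpy assumed; the gain value is a placeholder, not the thesis gain).

```python
import numpy as np

def gain_to_tf(K):
    """u(k) = K @ [y(k), y(k-1), u(k-1)]  <=>  K(z) = (k1*z + k2)/(z - k3)."""
    k1, k2, k3 = np.asarray(K, dtype=float).ravel()
    return np.array([k1, k2]), np.array([1.0, -k3])   # numerator, denominator in z

class FirstOrderOutputFeedback:
    """Runs K(z) one sample at a time by storing y(k-1) and u(k-1)."""
    def __init__(self, K):
        self.k1, self.k2, self.k3 = np.asarray(K, dtype=float).ravel()
        self.y_prev = 0.0
        self.u_prev = 0.0

    def step(self, y):
        u = self.k1 * y + self.k2 * self.y_prev + self.k3 * self.u_prev
        self.y_prev, self.u_prev = y, u
        return u

K_placeholder = [0.9, -0.5, 0.3]   # hypothetical gain, for illustration only
print(gain_to_tf(K_placeholder))
```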

Overall, the control scheme is described by a second order plant and two first order controllers, which provides a reduced order solution to the stabilization problem.

Using Procedure 2.2 and Procedure 2.4, and corresponding to the control laws u(k) = K x(k) and u(k) = sat(K_s x(k)), the maximal robustly invariant sets Ω_max and Ω_s are computed and depicted in Figure 6.5(a); the blue set is Ω_max and the red set is Ω_s. Figure 6.5(b) presents the projection of the sets Ω_max and Ω_s onto the (x_1, x_2) state space.

Fig. 6.5 Feasible invariant sets for Example 6.2: (a) feasible sets, (b) projection onto the (x_1, x_2) space.

For the initial condition x(0) = [ · ]^T, Figure 6.6 presents the output and input trajectories of the closed loop system as functions of time.

Fig. 6.6 Output and input trajectory of the closed loop system for Example 6.2: (a) output trajectory, (b) input trajectory.

Finally, Figure 6.7 shows the interpolating coefficient and the realizations of α(k) and w(k) as functions of time.

Fig. 6.7 Interpolating coefficient and realizations of α(k) and w(k) for Example 6.2: (a) interpolating coefficient, (b) α(k) realization, (c) w(k) realization.

6.4 Some remarks on local controllers

In this section we revisit the local control design problem and provide a novel method for it. It is clear that the local controller can be any feasible and stabilizing controller. Usually, one would like to ensure a certain level of optimality for the local controller, since once the state of the system reaches the local feasible invariant set, the interpolating controller turns out to be the local controller. Note that the local controller will not encounter constraints. Therefore it can be designed as, e.g., an optimal controller, or as a controller satisfying some performance specifications, e.g. a QFT controller.

In the presence of persistent bounded disturbances, a controller with a good disturbance rejection ability might be desirable. The measure of disturbance rejection can be defined as the peak value of the state variable over the peak value of the disturbance. This basic idea will be exploited in the design procedure presented in the remainder of this chapter.

6.4.1 Problem formulation

Consider the problem of regulating to the origin the following discrete-time linear time-varying or uncertain system

  x(k+1) = A(k) x(k) + B(k) u(k) + D(k) w(k)                                   (6.17)

where x(k) ∈ R^n and u(k) ∈ R^m are respectively the measurable state variable and the control variable. The matrices A(k) ∈ R^{n×n}, B(k) ∈ R^{n×m} and D(k) ∈ R^{n×d} satisfy

  A(k) = Σ_{i=1}^{q} α_i(k) A_i,  B(k) = Σ_{i=1}^{q} α_i(k) B_i,  D(k) = Σ_{i=1}^{q} α_i(k) D_i,
  Σ_{i=1}^{q} α_i = 1,  α_i ≥ 0,  i = 1, ..., q,                               (6.18)

where the matrices A_i, B_i and D_i are given. Both the state and the control are subject to the following constraints

  x(k) ∈ X,  X = { x ∈ R^n : F x ≤ 1 },
  u(k) ∈ U,  U = { u ∈ R^m : |u_i| ≤ u_{i max} },                              (6.19)

where u_{i max} is the i-th component of the vector u_max ∈ R^m. The matrix F and the vector u_max are assumed to be constant, with u_max > 0, such that the origin is contained in the interior of X and U.

The signal w(k) ∈ R^d represents the additive disturbance input. Using the change of variables D̃(k) = D(k) P^{1/2} and w̃(k) = P^{−1/2} w(k) for an appropriate matrix P, one can always assume that w(k)^T w(k) ≤ 1.

6.4.2 Robustness analysis

Before going to the synthesis problem, let us consider the analysis problem for the following discrete-time system (the autonomous version of equation (6.17))

  x(k+1) = H(k) x(k) + D(k) w(k)                                               (6.20)

where the matrix H(k) ∈ R^{n×n} satisfies

  H(k) = Σ_{i=1}^{q} α_i(k) H_i,  Σ_{i=1}^{q} α_i(k) = 1,  α_i(k) ≥ 0,

and the matrices H_i are the extreme realizations of H(k). It is assumed that w(k)^T w(k) ≤ 1 and ρ(H(k)) < 1, where ρ(H(k)) is the joint spectral radius of H(k). This

condition implies that when w(k) = 0, system (6.20) is robustly asymptotically stable.

Recall that the ellipsoid E(P) is robustly positively invariant for system (6.20) if the condition x(0) ∈ E(P) implies x(k) ∈ E(P) for all k ≥ 0. In other words, starting from any point in E(P), the state of the system will never leave this set under any admissible uncertainty and disturbance. The following theorem provides a necessary and sufficient condition for invariance of the ellipsoid E(P) for system (6.20).

Theorem 6.1. The ellipsoid E(P) is invariant for system (6.20) if and only if there exists a positive definite matrix P ∈ R^{n×n} satisfying the following LMI conditions

  [ (1−τ) P     0      P H_i^T
       0       τ I      D_i^T
     H_i P     D_i        P    ] ⪰ 0

for all i = 1, 2, ..., q and for some number 0 < τ < 1.

Proof. Define the quadratic function V(x(k)) = x(k)^T P^{−1} x(k). For the invariance of the set E(P) = { x ∈ R^n : V(x) ≤ 1 }, it is required that V(x(k+1)) ≤ 1 for all possible state trajectories and disturbance realizations, that is,

  (H x + D w)^T P^{−1} (H x + D w) ≤ 1

for all x and w such that x^T P^{−1} x ≤ 1 and w^T w ≤ 1, or

  [ x ; w ]^T [ H^T P^{−1} H   H^T P^{−1} D ; D^T P^{−1} H   D^T P^{−1} D ] [ x ; w ] ≤ 1      (6.21)

for all x and w such that

  [ x ; w ]^T [ P^{−1}  0 ; 0  0 ] [ x ; w ] ≤ 1                               (6.22)

and

  [ x ; w ]^T [ 0  0 ; 0  I ] [ x ; w ] ≤ 1                                    (6.23)

By using the S-procedure [24], [67] with two quadratic constraints, the conditions (6.21), (6.22), (6.23) can be equivalently rewritten as

  [ H^T P^{−1} H   H^T P^{−1} D ; D^T P^{−1} H   D^T P^{−1} D ]  ⪯  [ τ_1 P^{−1}  0 ; 0  τ_2 I ]      (6.24)

for some values τ_1 ≥ 0, τ_2 ≥ 0 such that τ_1 + τ_2 ≤ 1. As a consequence of the fact that P ≻ 0, it follows that H^T P^{−1} H ⪰ 0 and D^T P^{−1} D ⪰ 0. Hence τ_1 and τ_2 must be strictly positive. It is clear that if the in-

equality (6.24) holds for some τ_1 + τ_2 < 1, then it also holds with τ_1 + τ_2 = 1, since for all τ_1 ≤ 1 − τ_2

  [ τ_1 P^{−1}  0 ; 0  τ_2 I ]  ⪯  [ (1−τ_2) P^{−1}  0 ; 0  τ_2 I ].

Hence, it is non-restrictive to use τ_1 = 1 − τ_2. Condition (6.24) is thus equivalent to the LMI

  [ (1−τ) P^{−1}  0 ; 0  τ I ]  −  [ H^T ; D^T ] P^{−1} [ H  D ]  ⪰ 0

where τ = τ_2, 0 < τ < 1. By using the Schur complement one has

  [ (1−τ) P^{−1}   0     H^T
        0         τ I    D^T
        H          D      P   ] ⪰ 0

or, equivalently (pre- and post-multiplying by diag(P, I, I)),

  [ (1−τ) P    0     P H^T
       0      τ I     D^T
      H P      D       P    ] ⪰ 0                                             (6.25)

Clearly, the left hand side of condition (6.25) reaches its minimum at one of the vertices of H(k), D(k), so the set of LMI conditions to be satisfied is the following

  [ (1−τ) P    0     P H_i^T
       0      τ I     D_i^T
     H_i P    D_i       P    ] ⪰ 0                                            (6.26)

for all i = 1, 2, ..., q and for some number 0 < τ < 1.

Theorem 6.1 states that, for all admissible uncertainties and disturbances, the set E(P) = { x : x^T P^{−1} x ≤ 1 } is invariant, where P is a solution of (6.26).

Remark 6.1. It has to be mentioned that LMI conditions for ellipsoidal sets to be minimal invariant sets have been presented for continuous-time linear time-invariant systems in [·] and for discrete-time linear time-invariant systems in [4], [74]. By the previous theorem these results are extended to discrete-time linear time-varying and uncertain systems. In addition, the LMI conditions (6.26) are applicable to different types of invariant ellipsoids, e.g. minimal invariant ellipsoids, maximal invariant ellipsoids, etc.

Remark 6.2. It is clear that the ellipsoid E(P) resulting from problem (6.26) might not be contractive, although being invariant. In order to ensure such an additional property, it is required that, for all x(k) ∈ E(P) = { x(k) : V(x(k)) ≤ 1 },

  V(x(k+1)) − V(x(k)) < 0                                                      (6.27)

or, in other words, that the Lyapunov function V(x(k)) is strictly decreasing. By using the same argument as in the proof of Theorem 6.1, condition (6.27) can be transformed into

  [ (1−τ) P    0     P H_i^T
       0      τ I     D_i^T
     H_i P    D_i       P    ] ≻ 0                                            (6.28)

for all i = 1, 2, ..., q and for some number 0 < τ < 1.

6.4.3 Robust optimal design

The peak value of the state variable over the peak value of the disturbance is defined as

  J = ‖x‖ / ‖w‖                                                                (6.29)

The existence of an invariant ellipsoid E(P_p) can be used as an upper bound of the peak-to-peak value in (6.29). Based on Theorem 6.1, a linear feedback controller u = K_p x which minimizes the size of the invariant ellipsoid E(P_p) can be designed for system (6.17) with constraints (6.19) by solving the following optimization problem

  min_{P_p, Y_p}  trace( P_p(τ) )                                              (6.30)

subject to

- the invariance condition

  [ (1−τ) P_p            0      P_p A_i^T + Y_p^T B_i^T
        0               τ I              D_i^T
    A_i P_p + B_i Y_p    D_i              P_p             ] ⪰ 0               (6.31)

- constraint satisfaction (see Section 2.3.3):

  on the state,

  [   1          f_i P_p
    P_p f_i^T      P_p    ] ⪰ 0                                               (6.32)

  where f_i is the i-th row of the matrix F;

  on the input,

  [ u_{i max}^2     K_{i p} P_p
    P_p K_{i p}^T       P_p      ] ⪰ 0,  i = 1, 2, ..., m,

  or

  [ u_{i max}^2   Y_{i p}
    Y_{i p}^T       P_p   ] ⪰ 0,  i = 1, 2, ..., m,                           (6.33)

with Y_p = K_p P_p ∈ R^{m×n}; K_{i p} is the i-th row of the matrix K_p and Y_{i p} = K_{i p} P_p is the i-th row of the matrix Y_p.

The trace of a square matrix is the sum of the elements on its main diagonal. Minimizing the trace of a matrix corresponds to searching for the minimal sum of its eigenvalues. It is important to note that when τ is fixed, the optimization problem (6.30), (6.31), (6.32) is an LMI problem, for which there exist nowadays several effective solvers, e.g. [94], [49].

Remark 6.3. At the same time, optimizing the peak-to-peak feedback gain K_p can lead to a maximal invariant ellipsoid E(P_m) ⊆ X such that, for all x(k) ∈ E(P_m), it follows that x(k+1) ∈ E(P_m) and u(k) = K_p x(k) ∈ U. This can be done by solving the following LMI problem, subject to the constraints (6.32) and (6.33),

  max_{P_m}  trace( P_m(τ) )                                                   (6.34)

Remark 6.4. Recall that the ellipsoid E(P_p) is the limit set of all trajectories of system (6.17) with the feedback gain u = K_p x: all trajectories starting from the origin remain bounded by E(P_p), and all trajectories starting outside E(P_p) converge to E(P_p). On the other hand, the set E(P_m) is the admissible invariant ellipsoid which maximizes the cost function (6.34) for system (6.17) with the feedback gain u = K_p x.

Remark 6.5. Aside from performance (here in the sense of disturbance rejection), another desideratum in the control design is the approximation by invariant ellipsoids of the maximal domain of attraction. It is well known that, by using the LMI technique, one can determine the largest invariant ellipsoid E(P) with respect to the inclusion of a reference direction defined by a point x_i, meaning that the set E(P) will include the point θ x_i, where θ is a scaling factor along the direction pointed by the vector x_i. Indeed, θ x_i ∈ E(P) implies that θ^2 x_i^T P^{−1} x_i ≤ 1, or, by using the Schur complement,

  [   1       θ x_i^T
    θ x_i        P     ] ⪰ 0                                                  (6.35)

Therefore the following LMI optimization problem can be used to obtain an invariant ellipsoid E(P_i) that contains the most important extension along a certain direction defined by the reference point x_i,

  max_{P_i, Y_i, θ}  θ(τ)                                                      (6.36)

subject to constraints (6.28), (6.32), (6.33), (6.35).

Example 6.3. Consider the following discrete-time linear time-varying system

  x(k+1) = A(k) x(k) + B(k) u(k) + D w(k)                                      (6.37)

where

  A(k) = α(k) A_1 + (1 − α(k)) A_2,  B(k) = α(k) B_1 + (1 − α(k)) B_2

with

  A_1 = [ · ],  A_2 = [ · ],  B_1 = [ .2  .2  ... ],  B_2 = [ .2  .98  ... ],  D = [ · ].

The constraint on the disturbance is w(k)^T w(k) ≤ 1. For simplicity, we do not consider any constraints on the state or on the input.

By solving the LMI problem (6.30), a robust peak-to-peak controller u(k) = K_p x(k) is obtained with

  K_p = [ · ]

together with the invariant ellipsoid E(P_p), where

  P_p = [ · ].

Figure 6.8 shows the projection of the ellipsoid E(P_p) onto the (x_1, x_2) state space. The figure also shows the state trajectory (x_1, x_2) of the closed loop system as a function of time, corresponding to a certain initial point inside E(P_p). From Figure 6.8 it can be observed that the state trajectory travels close to the boundary of the projection of the set E(P_p).

Fig. 6.8 Optimal robustly invariant ellipsoid and state trajectory for Example 6.3.
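For a fixed τ, the synthesis (6.30)-(6.31) is a semidefinite program. The sketch below is an illustration of that computation with cvxpy, not the thesis implementation: the vertex matrices passed in are placeholders, the constraint blocks (6.32)-(6.33) are omitted because Example 6.3 has no state or input constraints, and in practice one would wrap the call in a scalar search over τ ∈ (0, 1).

```python
import numpy as np
import cvxpy as cp

def peak_to_peak_gain(A_list, B_list, D_list, tau):
    """Solve (6.30)-(6.31) for a fixed tau: minimize trace(P) subject to the
    invariance LMIs at every vertex; returns K_p = Y P^{-1} and P."""
    n = A_list[0].shape[0]
    m = B_list[0].shape[1]
    d = D_list[0].shape[1]
    P = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((m, n))
    cons = [P >> 1e-8 * np.eye(n)]
    for A, B, D in zip(A_list, B_list, D_list):
        ABY = A @ P + B @ Y                       # A_i P + B_i Y
        M = cp.bmat([
            [(1 - tau) * P,     np.zeros((n, d)), ABY.T],
            [np.zeros((d, n)),  tau * np.eye(d),  D.T],
            [ABY,               D,                P],
        ])
        cons.append(M >> 0)                       # vertex invariance LMI (6.31)
    prob = cp.Problem(cp.Minimize(cp.trace(P)), cons)
    prob.solve()
    K = Y.value @ np.linalg.inv(P.value)
    return K, P.value
```

The returned gain follows the convention u(k) = K_p x(k) with K_p = Y_p P_p^{-1}, as in (6.31); adding (6.32)-(6.33) amounts to appending the corresponding 2 x 2 block LMIs to the constraint list.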

Part III
Applications


Chapter 7
Ball and plate system

The main purpose of this chapter is to apply the algorithms discussed in the previous chapters to the constrained control of the ball and plate experiment, with an actuation on the angles of the plate. The presence of constraints on the position of the ball and on the angles of the plate makes the experiment an adequate benchmark test for the proposed theory, and it has been used in previous constrained control experiments, see for example [28].

7.1 System description

The ball and plate benchmark is depicted in Figure 7.1. The system consists of a mechanical plate, two actuation mechanisms for tilting the plate around two orthogonal axes, and a ball position sensor. The entire system is mounted on a mechanical (steel) base plate and is supported by four vertical springs and a central joint. The motors are operated in angular position mode for the simplicity of the modeling and control. A pulse-width modulated signal is employed for this purpose. The servos are powered by a 6V DC power supply.

A resistive touch-sensitive glass screen, actually meant to be a computer touchscreen, is used for sensing the ball position. It provides an extremely reliable, accurate, and economical solution to the ball position sensing problem. The screen consists of three layers: a glass sheet, a conductive coating on the glass sheet, and a hard-coated conductive top-sheet.

7.2 System identification

For the identification purpose, the following notation will be used:

  x_r (m)  - position of the ball along the x axis,
  y_r (m)  - position of the ball along the y axis,

Fig. 7.1 Ball and plate system.

  x_max = .56 m         - maximum position of the ball on the x axis,
  y_max = .792 m        - maximum position of the ball on the y axis,
  u_x^r (rad)           - angle of the plate around the x axis,
  u_y^r (rad)           - angle of the plate around the y axis,
  u_x max = ± π/6 rad   - maximum angle of the plate w.r.t. the x axis,
  u_y max = ± π/6 rad   - maximum angle of the plate w.r.t. the y axis,
  g = 9.8 m/s^2         - acceleration due to gravity,
  x = x_r / x_max       - scaled position of the ball along the x axis,
  y = y_r / y_max       - scaled position of the ball along the y axis,
  u_x = u_x^r / u_x max - scaled angle of the plate around the x axis,
  u_y = u_y^r / u_y max - scaled angle of the plate around the y axis.

7.2.1 The identification procedure

The dynamical model of the ball and plate system can be derived from Newton's second law. It is well known [46] that, under the assumptions of negligible friction between the ball and the plate, negligible motor friction, and small u_x and u_y, the dynamics of the ball and plate system can be modeled as

  ẍ = (5 u_x max g / (7 x_max)) sin(u_x) ≈ (5 u_x max g / (7 x_max)) u_x
  ÿ = (5 u_y max g / (7 y_max)) sin(u_y) ≈ (5 u_y max g / (7 y_max)) u_y       (7.1)

However, it was experienced experimentally that model (7.1) is not accurate enough to capture the dynamics of the ball and plate system. The experimen-


More information

Thomas Lugand. To cite this version: HAL Id: tel

Thomas Lugand. To cite this version: HAL Id: tel Contribution à la Modélisation et à l Optimisation de la Machine Asynchrone Double Alimentation pour des Applications Hydrauliques de Pompage Turbinage Thomas Lugand To cite this version: Thomas Lugand.

More information

BAVER OKUTMUSTUR. pour l obtention du titre de. Sujet : MÉTHODES DE VOLUMES FINIS POUR LES LOIS DE CONSERVATION HYPERBOLIQUES NON-LINÉAIRES

BAVER OKUTMUSTUR. pour l obtention du titre de. Sujet : MÉTHODES DE VOLUMES FINIS POUR LES LOIS DE CONSERVATION HYPERBOLIQUES NON-LINÉAIRES THÈSE DE L UNIVERSITÉ PIERRE ET MARIE CURIE PARIS VI SPÉCIALITÉ MATHÉMATIQUES présentée par BAVER OUTMUSTUR pour l obtention du titre de DOCTEUR DE L UNIVERSITÉ PIERRE ET MARIE CURIE PARIS VI Sujet : MÉTHODES

More information

Ecole doctorale n 575 Electrical, Optical, Bio-physics and Engineering Spécialité de doctorat: Génie Électrique par M. XIAOTAO REN

Ecole doctorale n 575 Electrical, Optical, Bio-physics and Engineering Spécialité de doctorat: Génie Électrique par M. XIAOTAO REN NNT : 2017SACLS159 THÈSE DE DOCTORAT DE L UNIVERSITÉ PARIS-SACLAY PRÉPARÉE À L UNIVERSITÉ PARIS-SUD Ecole doctorale n 575 Electrical, Optical, Bio-physics and Engineering Spécialité de doctorat: Génie

More information

MGDA Variants for Multi-Objective Optimization

MGDA Variants for Multi-Objective Optimization MGDA Variants for Multi-Objective Optimization Jean-Antoine Désidéri RESEARCH REPORT N 8068 September 17, 2012 Project-Team Opale ISSN 0249-6399 ISRN INRIA/RR--8068--FR+ENG MGDA Variants for Multi-Objective

More information

THÈSE. Présentée en vue de l obtention du DOCTORAT DE L UNIVERSITÉ DE TOULOUSE

THÈSE. Présentée en vue de l obtention du DOCTORAT DE L UNIVERSITÉ DE TOULOUSE THÈSE Présentée en vue de l obtention du DOCTORAT DE L UNIVERSITÉ DE TOULOUSE Délivré par l Université Toulouse III - Paul Sabatier Discipline : informatique Soutenue par Sébastien Destercke Le 29 Octobre

More information

Variable selection in model-based clustering for high-dimensional data

Variable selection in model-based clustering for high-dimensional data Variable selection in model-based clustering for high-dimensional data Caroline Meynet To cite this version: Caroline Meynet. Variable selection in model-based clustering for high-dimensional data. General

More information

SINGLE ATOM DETECTABILITY OF A ToF ATOM-PROBE

SINGLE ATOM DETECTABILITY OF A ToF ATOM-PROBE SNGLE ATOM DETECTABLTY OF A ToF ATOM-PROBE T. Sakurai, T. Hashizume, A. Jimbo To cite this version: T. Sakurai, T. Hashizume, A. Jimbo. SNGLE ATOM DETECTABLTY OF A ToF ATOM-PROBE. Journal de Physique Colloques,

More information

Semidefinite Programming. Methods and algorithms for energy management

Semidefinite Programming. Methods and algorithms for energy management Semidefinite Programming. Methods and algorithms for energy management Agnès Maher To cite this version: Agnès Maher. Semidefinite Programming. Methods and algorithms for energy management. Other [cs.oh].

More information

SUPELEC THÈSE DE DOCTORAT

SUPELEC THÈSE DE DOCTORAT N d ordre : 2013-29-TH SUPELEC ECOLE DOCTORALE STITS «Sciences et Technologies de l Information des Télécommunications et des Systèmes» THÈSE DE DOCTORAT DOMAINE : STIC Spécialité : Automatique Soutenue

More information

A DIFFERENT APPROACH TO MULTIPLE CORRESPONDENCE ANALYSIS (MCA) THAN THAT OF SPECIFIC MCA. Odysseas E. MOSCHIDIS 1

A DIFFERENT APPROACH TO MULTIPLE CORRESPONDENCE ANALYSIS (MCA) THAN THAT OF SPECIFIC MCA. Odysseas E. MOSCHIDIS 1 Math. Sci. hum / Mathematics and Social Sciences 47 e année, n 86, 009), p. 77-88) A DIFFERENT APPROACH TO MULTIPLE CORRESPONDENCE ANALYSIS MCA) THAN THAT OF SPECIFIC MCA Odysseas E. MOSCHIDIS RÉSUMÉ Un

More information

A note on the moving hyperplane method

A note on the moving hyperplane method 001-Luminy conference on Quasilinear Elliptic and Parabolic Equations and Systems, Electronic Journal of Differential Equations, Conference 08, 00, pp 1 6. http://ejde.math.swt.edu or http://ejde.math.unt.edu

More information

ANNALES SCIENTIFIQUES L ÉCOLE NORMALE SUPÉRIEURE. Cluster ensembles, quantization and the dilogarithm. Vladimir V. FOCK & Alexander B.

ANNALES SCIENTIFIQUES L ÉCOLE NORMALE SUPÉRIEURE. Cluster ensembles, quantization and the dilogarithm. Vladimir V. FOCK & Alexander B. ISSN 0012-9593 ASENAH quatrième série - tome 42 fascicule 6 novembre-décembre 2009 ANNALES SCIENTIFIQUES de L ÉCOLE NORMALE SUPÉRIEURE Vladimir V. FOCK & Alexander B. GONCHAROV Cluster ensembles, quantization

More information

Pablo Enrique Sartor Del Giudice

Pablo Enrique Sartor Del Giudice THÈSE / UNIVERSITÉ DE RENNES 1 sous le sceau de l Université Européenne de Bretagne en cotutelle internationale avec PEDECIBA - Université de la République, Uruguay pour le grade de DOCTEUR DE L UNIVERSITÉ

More information

Sequential Monte Carlo Sampler for Bayesian Inference in Complex Systems

Sequential Monte Carlo Sampler for Bayesian Inference in Complex Systems N d ordre: 41447 Universite Lille 1 - Sciences et Technologies Ecole Doctorale des Sciences pour l Ingénieur THÈSE présentée en vue d obtenir le grade de DOCTEUR Spécialité: Automatique, Génie informatique,

More information