Proceedings of the 35th Conference on Decision and Control, Kobe, Japan, December 1996

Parametrization of all plants that have the same optimal LQG controller*

Franky De Bruyne†, Brian D. O. Anderson‡, Michel Gevers†

† CESAME, Université Catholique de Louvain, Bâtiment Euler, Avenue Georges Lemaître 4-6, B-1348 Louvain-La-Neuve, Belgium
‡ Department of Systems Engineering, Research School of Physical Sciences & Engineering, Canberra, A.C.T. 0200, Australia

Corresponding author: Franky De Bruyne, e-mail: DeBruyne@auto.ucl.ac.be, Fax: (++32)(10)472180

Keywords: LQG control, MV control, coprime factorizations

* This paper presents research results of the Belgian Programme on Interuniversity Poles of Attraction, initiated by the Belgian State, Prime Minister's Office for Science, Technology and Culture. The scientific responsibility rests with its authors.

Abstract

Using the dual Youla parametrizations of controller-based coprime factor plant perturbations and plant-based coprime factor controller perturbations, we characterize the set of all plants that have the same optimal LQG or MV controller.

1 Introduction

Consider that you have some initial plant, $P_0$, and some controller, $C_0$, that stabilizes $P_0$. Using stable proper coprime factor descriptions of $P_0$ and $C_0$, and the Youla parametrizations, one can then characterize both the set $\mathcal{C}$ of all controllers stabilizing $P_0$ and the set $\mathcal{P}$ of all plants that are stabilized by $C_0$. The first set is parametrized in terms of coprime factors of $P_0$ and $C_0$ and an arbitrary proper stable transfer function $S$, i.e. $\mathcal{C} = \{C(S)\}$. The transfer function $S$ is often called the Youla parameter and its norm indicates the size of the perturbation away from $C_0$. The second (dual) set is parametrized in terms of coprime factors of $P_0$ and $C_0$ and an arbitrary proper stable transfer function $Q$, i.e. $\mathcal{P} = \{P(Q)\}$. The transfer function $Q$ is also called a Youla parameter and its norm indicates the size of the perturbation away from $P_0$. These parametrizations are explicitly described in the following proposition, which collects results from [7]. They are expressed here for scalar systems and apply to both the discrete and the continuous time case; the extension to the multivariable case is straightforward.

Proposition 1.1 [7] Let $P_0$ and $C_0$ have fractional representations $P_0 = N_P D_P^{-1}$ and $C_0 = N_C D_C^{-1}$, where $N_P, D_P, N_C, D_C$ belong to $\mathcal{S}$, the ring of proper stable transfer functions. (We assume a negative feedback convention.) Assume that the following Bezout equation holds:

$$N_C N_P + D_C D_P = 1. \qquad (1.1)$$

This equation expresses both the fact that the factors are coprime and the fact that the feedback loop formed by the plant $P_0$ and the controller $C_0$ is internally stable. For any $S \in \mathcal{S}$, define

$$N_S = N_C - S D_P, \qquad D_S = D_C + S N_P. \qquad (1.2)$$

1. Then $C(S) = N_S D_S^{-1}$ is a stabilizing controller for $P_0 = N_P D_P^{-1}$.
2. Furthermore, any controller that stabilizes $P_0$ has a fractional representation $N_S D_S^{-1}$ with $N_S, D_S$ as in (1.2) for some $S \in \mathcal{S}$.

The dual result can be stated in the following way. For any $Q \in \mathcal{S}$, define

$$N_Q = N_P - Q D_C, \qquad D_Q = D_P + Q N_C. \qquad (1.3)$$

1. Then $P(Q) = N_Q D_Q^{-1}$ is stabilized by $C_0 = N_C D_C^{-1}$.
2. Furthermore, any plant stabilized by $C_0$ has a fractional representation $N_Q D_Q^{-1}$ with $N_Q, D_Q$ as in (1.3) for some $Q \in \mathcal{S}$.

In this paper we use these parametrizations to characterize all plants that have the same optimal LQG or MV controller. Our basic one degree of freedom control loop is that of Figure 1.1, where $y_t$ is the plant output and $u_t$ is the control signal. The disturbance signal $v_t$ is assumed zero mean stationary with spectral density function $\phi_0$.
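Proposition 1.1 can be checked mechanically for any given pair of coprime factors. The following minimal sympy sketch is not part of the paper: the unstable plant $P_0 = 1/(z-2)$, its coprime factors over $\mathcal{S}$ and the constant stabilizing controller $C_0 = 3/2$ are hand-picked assumptions chosen so that (1.1) holds exactly, and the script verifies that the Bezout identity is preserved by the perturbed factors (1.2) and (1.3) for arbitrary $S$ and $Q$.

```python
# Symbolic sanity check of Proposition 1.1: a minimal sketch (not from the
# paper).  The plant P0 = 1/(z-2) is unstable; its coprime factors over the
# ring S of proper stable transfer functions and a stabilizing constant
# controller C0 = 3/2 are written down by hand so that (1.1) holds exactly.
import sympy as sp

z, S, Q = sp.symbols('z S Q')

Np = 1 / (z - sp.Rational(1, 2))            # P0 = Np/Dp = 1/(z-2)
Dp = (z - 2) / (z - sp.Rational(1, 2))
Nc = sp.Rational(3, 2)                      # C0 = Nc/Dc = 3/2
Dc = sp.Integer(1)

# Bezout identity (1.1)
assert sp.simplify(Nc * Np + Dc * Dp - 1) == 0

# Youla-perturbed controller factors (1.2): C(S) = Ns/Ds
Ns = Nc - S * Dp
Ds = Dc + S * Np
assert sp.simplify(Ns * Np + Ds * Dp - 1) == 0   # C(S) stabilizes P0

# Dual-Youla-perturbed plant factors (1.3): P(Q) = Nq/Dq
Nq = Np - Q * Dc
Dq = Dp + Q * Nc
assert sp.simplify(Nc * Nq + Dc * Dq - 1) == 0   # C0 stabilizes P(Q)

print("Bezout identities hold for all S and Q.")
```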
Our control design criterion is the following regulation LQG index (expressed here in discrete time):

$$J_{LQG} = \lim_{N \to \infty} \frac{1}{N} \sum_{t=1}^{N} E\{y_t^2 + \lambda u_t^2\}. \qquad (1.4)$$

The solution of the minimization problem (1.4) using the Youla parametrization was first presented in [8]; [4] proposes a generalization to LQG control in a prescribed domain of stability. The analytic solution to the infinite horizon LQG control problem in the polynomial setting of [8] is recalled in Section 2. Here, we characterize the set of all plants $(P_1, \phi_1)$ that have the same optimal LQG controller, $C_0$, as the original plant $(P_0, \phi_0)$. We believe that this parametrization will prove to be useful in the analysis of iterative identification and control schemes that take an LQG control objective as a starting point. Indeed, using our parametrization, it is possible to characterize the common features of all models that admit the same optimal LQG controller as the "true" plant; these features are representative of a good identified model in the identification stage. A similar parametrization was introduced in [5] in the context of adaptive control and in a state-space framework. In that paper, the author characterizes the set of all models that are equivalent with the real system (which is supposed to be in the model set) in the sense that they lead to the same controlled behaviour as the desired behaviour of the real system. Notice that a "dual" problem was treated in [6]: in that paper the authors parametrize the set of all solutions of the unconstrained $H_2$-optimal control problem in the state feedback case.

The outline of our paper is as follows. In Section 2 we present a solution to the LQG controller design problem in the Youla parametrization framework starting from the plant model $P_0$ and any stabilizing controller $C_0$, using the set $C(S)$ of all stabilizing controllers for $P_0$, i.e. we show how to compute $S_{opt}$. In Section 3 we compute how much change is induced in a controller by a change in a plant model and we characterize the set of all plants that have the same optimal LQG controller. In Section 4 we particularize our results to the case of a discrete time ARMAX system. The validity of the theoretical results is checked on an LQG and an MV example in Section 5. We conclude in Section 6.
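For a concrete loop, the index (1.4) can be estimated simply by time-averaging a closed-loop simulation. The sketch below is not from the paper; the first-order plant, the white disturbance and the proportional regulator are assumptions chosen only to illustrate how (1.4) is evaluated.

```python
# Monte-Carlo estimate of the regulation LQG index (1.4): a minimal sketch,
# not taken from the paper.  Assumed toy loop: first-order plant
# y_{t+1} = 0.9*y_t + u_t + v_{t+1} with white disturbance v, and a fixed
# stabilizing proportional regulator u_t = -k*y_t (negative feedback, r = 0).
import numpy as np

rng = np.random.default_rng(0)
lam, k, N = 0.1, 0.5, 200_000
y = u = 0.0
acc = 0.0
for t in range(N):
    u = -k * y                                 # control signal
    acc += y**2 + lam * u**2                   # running sum of y_t^2 + lambda*u_t^2
    y = 0.9 * y + u + rng.standard_normal()    # plant + disturbance update

J_hat = acc / N                                # sample estimate of (1.4)
print(f"estimated J_LQG ~ {J_hat:.3f}")
```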

Figure 1.1: One degree of freedom control loop $(P_0, \phi_0, C(S))$.

2 Optimal LQG control in the Youla parametrization

Let $P_0 = N_P D_P^{-1}$ and $C_0 = N_C D_C^{-1}$ be coprime factorizations of the plant $P_0$ and of an arbitrary stabilizing controller $C_0$ such that the Bezout equation (1.1) holds. Let $C_0$ in Figure 1.1 be replaced by an arbitrary controller $C(S)$ defined in Proposition 1.1 and consider a disturbance rejection problem, i.e. $r_t = 0$. For this configuration we then have

$$y_t = (D_C + S N_P) D_P \, v_t \quad \text{and} \quad u_t = -(N_C - S D_P) D_P \, v_t, \qquad (2.1)$$

where $\phi_0$ is the spectral density function of $v_t$. The LQG index (1.4) can then be rewritten, using Parseval's theorem, to obtain an integral expression¹ in $S$. If $C_0$ is the optimal LQG controller for $P_0$, then $S = 0$ minimizes $J_{LQG}$ over all $S \in \mathcal{S}$.

Computation of the optimal Youla parameter

It is shown in [8] that a stable minimizing $S$ can be found analytically by means of spectral factorizations and projections, i.e. by taking stable parts. Indeed, it is straightforward to show that, by completing the square, the LQG control criterion can be rewritten² as

$$J_{LQG}(S) = \int \left| \mathcal{A} S + \mathcal{A}^{-*} \mathcal{B} \right|^2 + \text{terms independent of } S, \qquad (2.2)$$

with

$$\mathcal{A} \mathcal{A}^* = \left[ |N_P|^2 + \lambda |D_P|^2 \right] |D_P|^2 \phi_0, \qquad (2.3)$$

$$\mathcal{B} = \left[ N_P^* D_C - \lambda D_P^* N_C \right] |D_P|^2 \phi_0, \qquad (2.4)$$

where $\mathcal{A}$ is minimum phase, stable and of relative degree zero³. The minimizing $S$ is clearly given by

$$S_{opt} = -\frac{1}{\mathcal{A}} \left[ \mathcal{A}^{-*} \mathcal{B} \right]_{st}, \qquad (2.5)$$

where $[\,\cdot\,]_{st}$ denotes the stable part: see the remark below. The optimal control cost is obtained by substituting (2.5) into (2.2):

$$J_{LQG}(S_{opt}) = \int \left| \left[ \mathcal{A}^{-*} \mathcal{B} \right]_{unst} \right|^2 + \text{terms independent of } S. \qquad (2.6)$$
¹ The integration bounds have been omitted to stress the fact that the expressions are valid in both the continuous time case ($s = e^{j\omega}$ replaced by $s = j\omega$) and the discrete time case ($z = e^{j\omega}$).

² If $S = B/A$ is of relative degree $d$, with $A$ and $B$ polynomials, then $S^*$ is defined as $B(-s)/A(-s)$ in continuous time and as $z^d B^*(z)/A^*(z)$ in discrete time, which ensures that $(S^*)^* = S$ and that $S^*(e^{j\omega})$ is the complex conjugate of $S(e^{j\omega})$. $A^*(z)$ is defined here as the reciprocal polynomial of $A(z)$, i.e. if $A(z) = z^{n_a} + a_1 z^{n_a - 1} + \dots + a_{n_a}$, then $A^*(z) = z^{n_a} A(z^{-1}) = 1 + a_1 z + \dots + a_{n_a} z^{n_a}$.

³ In the continuous time case, the relative degree zero constraint cannot always be imposed. In such cases, the infimum of $J_{LQG}$ is not attained for any $S \in \mathcal{S}$. However, one can still compute $\inf_{S \in \mathcal{S}} J_{LQG}(S)$ and construct a family $\{S_\epsilon \in \mathcal{S}\}$ such that $J(S_\epsilon)$ approaches the infimum as $\epsilon \to 0$. See [7] for details.

Remark: Every finite rational transfer function $Z$ can be decomposed into the sum of its stable and unstable parts, $Z = [Z]_{st} + [Z]_{unst}$, as follows. Expand $Z$ into partial fractions (a unique decomposition) and a polynomial; then $[Z]_{st}$ (respectively $[Z]_{unst}$) is the sum of the terms corresponding to poles in the open left half plane (respectively in the closed right half plane) in continuous time, and inside (respectively on or outside) the unit circle in discrete time. The improper part of $Z$ is assigned to the unstable part. In the continuous time decomposition of $\mathcal{A}^{-*}\mathcal{B}$, it is necessary to take the unique solution with the constant part assigned to the stable part in order to make the cost (2.6) finite. In discrete time, one can partition the constant part between the stable and the unstable parts: all these choices lead to a finite cost. Since we optimize over all proper $S$, $[\mathcal{A}^{-*}\mathcal{B}]_{unst}$ has to reflect just that part of the associated impulse response $\{h_k\}$ corresponding to $k < 0$, so that $[\mathcal{A}^{-*}\mathcal{B}]_{unst} = \sum_{k<0} h_k z^{-k}$. The constant term in the partial fraction expansion must therefore be partitioned between $[\mathcal{A}^{-*}\mathcal{B}]_{unst}$ and $[\mathcal{A}^{-*}\mathcal{B}]_{st}$ in such a way that $[\mathcal{A}^{-*}\mathcal{B}]_{unst}$ has $z = 0$ as a zero; this makes the decomposition unique.
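The discrete-time split described in the Remark is easy to mechanize for rational functions with simple poles. The following sympy sketch is not from the paper: it assigns poles strictly inside the unit circle to $[Z]_{st}$, everything else (including the improper part) to $[Z]_{unst}$, and re-partitions the constant so that $[Z]_{unst}$ vanishes at $z = 0$. A routine of this kind is what the computation of $S_{opt}$ in (2.5) requires once $\mathcal{A}^{-*}\mathcal{B}$ has been formed.

```python
# Stable/unstable decomposition with the discrete-time convention of the
# Remark: a sketch (not from the paper), assuming simple poles.
import sympy as sp

z = sp.symbols('z')

def split_stable_unstable(Z):
    """Return ([Z]_st, [Z]_unst) for a rational Z(z) with simple poles."""
    num, den = sp.fraction(sp.cancel(sp.together(Z)))
    num, den = sp.Poly(num, z), sp.Poly(den, z)
    poly_part, rem = sp.div(num, den)        # the improper part goes to [Z]_unst
    st, unst = sp.Integer(0), poly_part.as_expr()
    proper = rem.as_expr() / den.as_expr()
    for p in sp.roots(den).keys():
        r = sp.limit((z - p) * proper, z, p) # residue at the (simple) pole p
        if abs(complex(p)) < 1:
            st += r / (z - p)                # strictly inside the unit circle
        else:
            unst += r / (z - p)              # on or outside the unit circle
    c = unst.subs(z, 0)                      # split the constant so that
    return sp.simplify(st + c), sp.simplify(unst - c)   # [Z]_unst(0) = 0

# Example: one stable pole (z = 1/2) and one unstable pole (z = 3).
Z = 1 / ((z - sp.Rational(1, 2)) * (z - 3))
Zst, Zunst = split_stable_unstable(Z)
assert sp.simplify(Zst + Zunst - Z) == 0     # the two parts reassemble Z
assert Zunst.subs(z, 0) == 0                 # [Z]_unst has a zero at z = 0
print("[Z]_st  =", Zst)
print("[Z]_unst =", Zunst)
```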

3 Parametrization of all plants that have the same LQG or MV controller

Consider a plant model $(P_0, \phi_0)$ and its corresponding optimal (and hence stabilizing) controller $C_0$, both factorized as before. Let now $P_1$ be some plant that is stabilized by $C_0$. It is obvious that $P_1$ is contained in the set of all models stabilized by $C_0$, so that $P_1 = P_1(Q) = N_Q D_Q^{-1}$ with $N_Q, D_Q$ as in (1.3) for some $Q \in \mathcal{S}$. The set of all controllers stabilizing $P_1(Q)$ is then given by

$$\bar{\mathcal{C}}_1 = \left\{ C_1(\bar{S}) = (N_C - \bar{S} D_Q)(D_C + \bar{S} N_Q)^{-1} \right\} \qquad (3.1)$$

for some $\bar{S}, Q \in \mathcal{S}$. We have called this Youla parameter $\bar{S}$ to distinguish it from the $S$ that parametrizes all controllers stabilizing $P_0$. Let $C_1$ be any controller in the set $\bar{\mathcal{C}}_1$. The LQG index (1.4) for $(P_1, \phi_1)$ with controller $C_1$, expressed in the frequency domain, is an integral in $Q$ and $\bar{S}$:

$$J_{LQG} = \int \left[ |D_C + \bar{S} N_Q|^2 + \lambda |N_C - \bar{S} D_Q|^2 \right] \, |D_Q|^2 \, \phi_1. \qquad (3.2)$$

Two situations can occur when a system is perturbed: either the perturbation $Q$ only influences the plant model and the noise model remains unchanged (as happens in an OE model structure), or both the plant model and the noise model are influenced (as happens in an ARX or ARMAX model structure). We consider the second case, and we assume that $\phi_0$ varies with $Q$ in such a way that $|D_P + Q N_C|^2 \phi_1$ is independent of $Q$, i.e.

$$|D_P + Q N_C|^2 \, \phi_1 = |D_P|^2 \, \phi_0. \qquad (3.3)$$

This is typical of an ARX or ARMAX model structure. We note that the product of the last two terms in (3.2) is then independent of $Q$.

Computation of $\bar{S}_{opt}$ as a function of $Q$

In this subsection, we characterize the optimal controller $C_1^{opt}$, i.e. we compute the $\bar{S}_{opt}$ that minimizes $J_{LQG}$ and express it as a function of $Q$ and of the coprime factorizations of the plant $P_0$ and its corresponding optimal controller $C_0$. Thus $\bar{S}_{opt}$, which expresses $C_1^{opt}$ as a perturbation of $C_0$, will be defined as a function of $Q$, which expresses $P_1$ as a perturbation of $P_0$, and as a function of $\phi_0$: see (3.3). Recall that $\mathcal{A}$ and $\mathcal{B}$, related to the plant $P_0$ and its optimal controller $C_0$, are given by the expressions (2.3) and (2.4). We are now in a position to calculate the perturbed versions of $\mathcal{A}$ and $\mathcal{B}$. We start with

$$\bar{\mathcal{B}} = \left[ N_Q^* D_C - \lambda D_Q^* N_C \right] |D_Q|^2 \phi_1 = \mathcal{B} - Q^* \left[ |D_C|^2 + \lambda |N_C|^2 \right] |D_P|^2 \phi_0. \qquad (3.4)$$

There will be a corresponding change from $\mathcal{A}$ to $\bar{\mathcal{A}}$:

$$\bar{\mathcal{A}} \bar{\mathcal{A}}^* = \left[ |N_Q|^2 + \lambda |D_Q|^2 \right] |D_P|^2 \phi_0. \qquad (3.5)$$

We obtain

$$\bar{S}_{opt} = -\frac{1}{\bar{\mathcal{A}}} \left[ \bar{\mathcal{A}}^{-*} \bar{\mathcal{B}} \right]_{st} = \frac{1}{\bar{\mathcal{A}}} \left[ \bar{\mathcal{A}}^{-*} Q^* \left( |D_C|^2 + \lambda |N_C|^2 \right) |D_P|^2 \phi_0 \right]_{st}, \qquad (3.6)$$

because $\mathcal{B}$ is unstable (with a zero at $z = 0$) by optimality of $C_0$ and $\bar{\mathcal{A}}^{-*}$ is unstable by definition. We can now characterize the set of all plants that admit the same optimal controller, $C_0$, as $(P_0, \phi_0)$. This set is parametrized by all $Q \in \mathcal{S}$ for which $\bar{S}_{opt} = 0$, i.e. all $Q \in \mathcal{S}$ that solve

$$\left[ \bar{\mathcal{A}}^{-*} Q^* \left( |D_C|^2 + \lambda |N_C|^2 \right) |D_P|^2 \phi_0 \right]_{st} = 0. \qquad (3.7)$$

Equation (3.7) is a very implicit characterization, not self-evidently allowing any nonzero solutions. In the next section, we shall display a nontrivial solution set more explicitly.
From (3.2) and by optimality of $C_0$, i.e. when $Q \in \mathcal{S}$ solves (3.7) and $\bar{S}_{opt} = 0$, we observe that the optimal LQG cost associated with any triplet $(P_1(Q), \phi_1(Q), C_0)$ satisfying (3.3) and for which $C_0$ is optimal is independent of $Q$:

$$J_{LQG}(P_1, C_0) = J_{LQG}(P_0, C_0).$$
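This $Q$-invariance of the closed-loop behaviour under the fixed controller $C_0$ — perturb the plant along (1.3) and rescale the noise spectrum according to (3.3) — can be checked numerically on a frequency grid. The sketch below is not from the paper; the plant, controller, noise spectrum and the particular $Q$ are all assumed purely for illustration.

```python
# Numeric illustration of the Q-invariance above: perturb the plant along
# (1.3), rescale the noise spectrum per (3.3), and observe that the LQG
# integrand of (1.4) under the SAME controller C0 does not change.
# A sketch with assumed numeric factors, not code from the paper.
import numpy as np

w = np.linspace(1e-3, np.pi - 1e-3, 500)
z = np.exp(1j * w)

Np, Dp = 1.0 / (z - 0.5), (z - 2.0) / (z - 0.5)   # P0 = 1/(z-2)
Nc, Dc = 1.5 + 0 * z, 1.0 + 0 * z                 # C0 = 3/2, Bezout (1.1) holds
phi0 = np.abs((z + 0.3) / (z - 0.4))**2           # assumed disturbance spectrum
lam = 0.1

Q = 0.3 / (z - 0.2)                               # an arbitrary stable proper Q
Nq, Dq = Np - Q * Dc, Dp + Q * Nc                 # perturbed plant factors (1.3)
phi1 = np.abs(Dp)**2 * phi0 / np.abs(Dq)**2       # noise rescaling (3.3)

P0, P1, C0 = Np / Dp, Nq / Dq, Nc / Dc

def integrand(P, phi):
    Sfun = 1.0 / (1.0 + P * C0)                   # sensitivity under C0
    return (np.abs(Sfun)**2 + lam * np.abs(C0 * Sfun)**2) * phi

# difference should be at numerical rounding level (~1e-13)
print(np.max(np.abs(integrand(P0, phi0) - integrand(P1, phi1))))
```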

4 Application in the case of an ARMAX plant model

Let us specialize our results to the case where the plant is described by a discrete time ARMAX model

$$A(z) \, y_t = B(z) \, u_t + C(z) \, e_t, \qquad (4.1)$$

where $e_t$ is white noise of zero mean and unit variance, and $A(z)$, $B(z)$ and $C(z)$ are polynomials in $z$ of degree $n$, $n - d$ and $n$, respectively, that have no common factor, with $A(z)$, $B(z)$ coprime, $d \geq 1$, and $C(z)$ having all its zeros inside the unit circle. Note that this system has a delay $d$. Equation (4.1) is normalized so that the leading coefficients of the polynomials $A(z)$ and $C(z)$ are unity, i.e. $A(z)$ and $C(z)$ are monic. $B(z)$ can be factorized as $B(z) = B_-(z) B_+(z)$, where $B_+(z)$ has all its zeros strictly inside the unit circle and $B_-(z)$ is monic and has all its zeros on or outside the unit circle. We examine successively the case of the LQG criterion, $J_{LQG} = \lim_{N \to \infty} \frac{1}{N} \sum_{t=1}^{N} E\{y_t^2 + \lambda u_t^2\}$, and the case of a minimum variance criterion, $J_{MV} = \lim_{N \to \infty} \frac{1}{N} \sum_{t=1}^{N} E\{y_t^2\}$.

The LQG disturbance rejection problem

The spectral factorization solution method for the infinite horizon LQG regulation problem consists in first computing the stable polynomial spectral factor $G(z)$ of

$$G(z) G^*(z) = \lambda A(z) A^*(z) + z^d B(z) B^*(z). \qquad (4.2)$$

It can be shown that if $\deg A > \deg B$ then there always exists a unique polynomial $G(z)$ with $\deg G(z) = n$ and positive coefficient of the highest degree term [3]. The next step consists in solving the following Diophantine equation for the polynomials $X(z)$ and $Y(z)$:

$$A(z) X(z) + B(z) Y(z) = G(z) C(z). \qquad (4.3)$$

The resulting optimal controller is

$$u_t = -\frac{Y(z)}{X(z)} \, y_t. \qquad (4.4)$$

The complexity of the control law is determined by the polynomials $Y(z)$ and $X(z)$. Since $A(z)$ has degree $n$, it is obvious that we can choose $Y(z)$ such that $\deg Y(z) \leq n$; for any such $Y(z)$, it follows that $\deg X(z) \leq n$. Requiring this degree constraint does not uniquely specify the solution pair $X(z)$, $Y(z)$; this will be done subsequently. For any such pair, define the polynomials

$$X^*(z) = z^n X(z^{-1}) \quad \text{and} \quad Y^*(z) = z^n Y(z^{-1}). \qquad (4.5)$$

Notice that if $X(z)$, say, happens to have degree less than $n$, this definition is non-standard.
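The two design steps (4.2)-(4.3) are straightforward to carry out numerically. The sketch below is not from the paper: it computes the polynomial spectral factor $G(z)$ by root selection and solves the Diophantine equation for one particular pair $(X, Y)$ (the one with $\deg Y \leq n-1$), for an assumed second-order ARMAX example; all coefficient values are illustrative.

```python
# Polynomial LQG design via (4.2)-(4.4): a numeric sketch, not code from the
# paper.  Coefficient arrays are in descending powers of z; A and C are monic.
import numpy as np

def recip(p):                        # reciprocal polynomial: p*(z) = z^deg(p) p(1/z)
    return np.asarray(p, dtype=float)[::-1]

def shift(p, k):                     # multiply a polynomial by z^k
    return np.polymul(p, np.r_[1.0, np.zeros(k)])

# Assumed toy ARMAX system (n = 2, d = 1) and control weight lambda.
A = np.array([1.0, -1.2, 0.32])      # A(z) = z^2 - 1.2 z + 0.32
B = np.array([0.5, 0.2])             # B(z) = 0.5 z + 0.2
C = np.array([1.0, -0.5, 0.06])      # C(z) = z^2 - 0.5 z + 0.06  (stable, monic)
lam, d, n = 0.5, 1, 2

# Spectral factorization (4.2):  G(z) G*(z) = lam A A* + z^d B B*
M = np.polyadd(lam * np.polymul(A, recip(A)), shift(np.polymul(B, recip(B)), d))
r = np.roots(M)
G = np.real(np.poly(r[np.abs(r) < 1.0]))                      # monic, stable roots only
G *= np.sqrt(np.polyval(M, 1.0)) / abs(np.polyval(G, 1.0))    # scale so that G G* = M
assert np.allclose(np.polymul(G, recip(G)), M, atol=1e-8)

# Diophantine equation (4.3):  A X + B Y = G C, one particular solution with
# deg X = n and deg Y <= n - 1 (the solution family is one-dimensional).
rhs = np.polymul(G, C)
m = len(rhs)
basis = [shift(A, n - k) for k in range(n + 1)] + [shift(B, n - 1 - k) for k in range(n)]
Mat = np.column_stack([np.r_[np.zeros(m - len(c)), c] for c in basis])
sol = np.linalg.solve(Mat, rhs)
X, Y = sol[:n + 1], sol[n + 1:]
assert np.allclose(np.polyadd(np.polymul(A, X), np.polymul(B, Y)), rhs, atol=1e-8)

print("G =", np.round(G, 4))
print("X =", np.round(X, 4), "  Y =", np.round(Y, 4))
# The optimal regulator (4.4) is then u_t = -(Y(z)/X(z)) y_t.
```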
We then have the following lemma, which is essentially the same as the one in [3]. However, the proof of [3] does not consider the possibility that $A$ and $B^*$ could have a common zero. Accordingly, we present a complete proof.

Lemma 4.1 Let the polynomials $A(z)$, $B(z)$, $C(z)$ and $G(z)$ have degree $n$, $n - d$, $n$ and $n$ respectively, with $(A, B)$ coprime, and with (4.2) holding for some $\lambda > 0$. Let polynomials $X(z)$, $Y(z)$ with degree at most $n$ be found satisfying (4.3). Then there exists a polynomial $V(z)$ of degree at most $n$ such that

$$X^*(z) B(z) - \lambda Y^*(z) A(z) = G^*(z) V(z), \qquad (4.6)$$

$$A^*(z) V(z) = B(z) C^*(z) - G(z) Y^*(z). \qquad (4.7)$$

Proof: Let $M_+ M_-$ be the greatest common divisor of $A$ and $B^*$, where $M_+$ has all zeros strictly inside the unit circle and $M_-$ all zeros on or outside the unit circle. Then, for some $A_1$, $B_1$ with $A_1 A_1^*$ and $B_1 B_1^*$ coprime, we have

$$A = A_1 M_+ M_- \quad \text{and} \quad B = B_1 M_+^* M_-^*. \qquad (4.8)$$

It follows that

$$G = G_1 M_+ M_-^*, \qquad (4.9)$$

where $A_1 A_1^*$, $B_1 B_1^*$ and $G_1 G_1^*$ are mutually coprime, with $G_1 G_1^* = \lambda A_1 A_1^* + z^d B_1 B_1^*$. Now observe that

$$A^* \left[ X^* B - \lambda Y^* A \right] = A^* X^* B + z^d B B^* Y^* - G G^* Y^* = B \left[ A^* X^* + z^d B^* Y^* \right] - G G^* Y^* = B G^* C^* - G G^* Y^* = G^* \left[ B C^* - G Y^* \right], \qquad (4.10)$$

where the second equality uses (4.2) and the third uses the reciprocal of (4.3), $A^* X^* + z^d B^* Y^* = G^* C^*$. The result of the lemma is immediate if we can conclude that $A^*$ divides $X^* B - \lambda Y^* A$. If $A^*$ and $G^*$ were coprime, this would be immediate. However, $A^*$ and $G^*$ may not be coprime. Use (4.8) and (4.9) to rewrite (4.10), after cancelling the common factor $M_+^*$, as

$$A_1^* M_-^* \left[ X^* B - \lambda Y^* A \right] = G_1^* M_- \left[ B C^* - G Y^* \right]. \qquad (4.11)$$

Next, $M_-^*$ is a factor of $B$ and is therefore coprime with $A$; hence $M_-$ is coprime with $A^* = A_1^* M_+^* M_-^*$. From (4.11) we see then that $M_-$ divides $X^* B - \lambda Y^* A$, and therefore also $X^* B$. Since $M_-$ divides $A$, it is coprime with $B$. Hence $M_-$ divides $X^*$, i.e. $X^* = X_1^* M_-$. Also, (4.3) yields

$$G_1 M_+ M_-^* C = A_1 M_+ M_- X + B Y, \qquad (4.12)$$

and it follows easily that $Y = M_+ Y_1$ for some polynomial $Y_1$. So, substituting $X^* = X_1^* M_-$ and $Y^* = M_+^* Y_1^*$ into (4.11) and cancelling common factors, (4.11) can be rewritten as

$$A_1^* \left[ X_1^* B_1 M_-^* - \lambda Y_1^* A_1 M_+ \right] = G_1^* \left[ B_1 C^* - G_1 M_+ Y_1^* \right].$$

Now $A_1^*$ and $G_1^*$ are coprime. Hence $A_1^*$ divides $B_1 C^* - G_1 M_+ Y_1^*$, i.e. there exists a polynomial $V(z)$ such that $A_1^* V = B_1 C^* - G_1 M_+ Y_1^*$. It follows that

$$A^* V = A_1^* M_+^* M_-^* V = B_1 M_+^* M_-^* C^* - (G_1 M_+ M_-^*)(Y_1^* M_+^*) = B C^* - G Y^*, \qquad (4.13)$$

as required.

The identity (4.6) is then immediate from (4.13) and (4.10). The degree constraint on $V$ is immediate from those on $Y$, $G$, $A$, $B$ and $C$.

It remains to specify how to choose the particular $X$, $Y$ pair satisfying (4.3). A trivial calculation shows that if $X$, $Y$, $V$ is any triple of polynomials satisfying (4.3), (4.6) and (4.7), then these equations are also satisfied by

$$\bar{X} = X + k B, \qquad \bar{Y} = Y - k A, \qquad \bar{V} = V + k G$$

for any constant $k$. If $V$ has degree $n$, choose $k$ such that $\bar{V}$ has degree less than $n$; else choose $k = 0$. It follows that we can assume $\deg V < n$ in solving (4.3), (4.6) and (4.7).

Using earlier notations, we introduce the following plant and controller factorizations:

$$N_P = \frac{B(z)}{C(z)}, \quad D_P = \frac{A(z)}{C(z)}, \quad N_C = \frac{Y(z)}{G(z)}, \quad D_C = \frac{X(z)}{G(z)}, \qquad (4.14)$$

where $G$, $X$ and $Y$ are determined from $A$, $B$, $C$ via (4.2) and (4.3). Note that these fractional representations fulfil the Bezout identity (1.1) and that each transfer function is stable and proper; note also that for the ARMAX noise spectrum $\phi_0 = |C/A|^2$ we have $|D_P|^2 \phi_0 = 1$. The first term in (3.4) is given by

$$\mathcal{B} = \frac{z^d B^*(z) X(z) - \lambda A^*(z) Y(z)}{C^*(z) G(z)} = \frac{z^l V^*(z)}{C^*(z)},$$

where $V(z)$ is defined by (4.6) and $l = n - \deg V \geq 1$. Because $\mathcal{B}|_{z=0} = 0$ and $C^*$ is unstable, we have $[\mathcal{B}]_{st} = 0$, as expected. The second term in (3.4) is

$$Q^* \left[ |D_C|^2 + \lambda |N_C|^2 \right] = Q^* \, \frac{X(z) X^*(z) + \lambda Y(z) Y^*(z)}{G(z) G^*(z)}.$$

If we take

$$Q = G^*(z) \, R, \qquad (4.15)$$

where $R$ is any element of $\mathcal{S}$ that has relative degree $n + 1$, then the second term in (3.4) is unstable and has a zero at $z = 0$. This means that $\bar{S}_{opt} = -\bar{\mathcal{A}}^{-1} [\bar{\mathcal{A}}^{-*} \bar{\mathcal{B}}]_{st} = 0$, because $\bar{\mathcal{A}}^{-*}$ is unstable by definition.

Theorem 4.2 Consider some ARMAX system $(A, B, C)$ and let the system $(P_0, \phi_0) = (B/A, |C/A|^2)$ and its optimal LQG controller $C_0$ have fractional representations defined as in (4.14). Let

$$P_1 = (N_P - Q D_C)(D_P + Q N_C)^{-1} \qquad (4.16)$$

be a perturbation of $P_0$ and consider the following noise spectrum,

$$\phi_1 = \frac{|D_P|^2 \, \phi_0}{|D_P + Q N_C|^2}, \qquad (4.17)$$

which assures that assumption (3.3) is satisfied. Then, for fixed $\lambda$, any system $(P_1, \phi_1)$ with $Q$ defined as in (4.15) has the same optimal LQG controller, $C_0 = N_C D_C^{-1}$, as the unperturbed system $(P_0, \phi_0)$. Moreover, for fixed $\lambda$, any system $(P_1, \phi_1)$ that has optimal LQG controller $C_0 = N_C D_C^{-1}$ can be expressed as (4.16) and (4.17) for some $Q$ defined in (4.15).

The minimum variance disturbance rejection problem ($\lambda = 0$)

For $\lambda = 0$, the LQG regulation criterion reduces to the minimum variance disturbance rejection criterion $J_{MV} = \lim_{N \to \infty} \frac{1}{N} \sum_{t=1}^{N} E\{y_t^2\}$. The minimum variance disturbance rejection control law is given by (see [3])

$$u_t = -\frac{K(z)}{B_+(z) F(z)} \, y_t, \qquad (4.18)$$

where $K(z)$ and $F(z)$ are polynomials that satisfy the Diophantine equation

$$z^{d-1} B_-^*(z) C(z) = A(z) F(z) + B_-(z) K(z), \qquad (4.19)$$

in which the polynomial $F(z)$ has degree $d + \deg B_- - 1$ and $\deg K < n$. The plant and controller factorizations are defined by

$$N_P = \frac{B(z)}{C(z)}, \quad D_P = \frac{A(z)}{C(z)}, \quad N_C = \frac{K(z)}{z^{d-1} B_+(z) B_-^*(z)}, \quad D_C = \frac{F(z)}{z^{d-1} B_-^*(z)}. \qquad (4.20)$$

Note that these fractional representations fulfil the Bezout identity (1.1) and that each transfer function is stable and proper. The first term in (3.4) is given by

$$\mathcal{B} = \frac{z^d B^*(z) F(z)}{z^{d-1} B_-^*(z) C^*(z)} = \frac{z B_+^*(z) F(z)}{C^*(z)}.$$

Because $\mathcal{B}|_{z=0} = 0$ and $C^*$ is unstable, we have $[\mathcal{B}]_{st} = 0$, as expected. The second term in (3.4) is

$$Q^* \, |D_C|^2 = Q^* \, \frac{F(z) F^*(z)}{z^{d-1} B_-(z) B_-^*(z)}.$$

If we take

$$Q = B_-(z) \, R, \qquad (4.21)$$

where $R$ is any element of $\mathcal{S}$ with relative degree $d + \deg B_-$ (i.e. $Q$ has relative degree $d$), then the second term in (3.4) is unstable and has a zero at $z = 0$. This means that $\bar{S}_{opt} = -\bar{\mathcal{A}}^{-1} [\bar{\mathcal{A}}^{-*} \bar{\mathcal{B}}]_{st} = 0$, because $\bar{\mathcal{A}}^{-*}$ is unstable by definition.
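As a quick illustration of the minimum variance case (not taken from the paper), the sketch below simulates an assumed minimum-phase ARMAX example with $d = 1$, for which (4.19) gives $F(z) = 1$ and $K(z) = C(z) - A(z)$. Under the control law (4.18) the regulated output reduces to the driving white noise, so its sample variance should be close to one; all numerical values are illustrative assumptions.

```python
# Simulation sketch of the minimum variance law (4.18) for a minimum-phase
# ARMAX example with d = 1, so that F(z) = 1, B_+ = B, B_- = 1 and
# K(z) = C(z) - A(z).  Assumed numbers, not from the paper.
import numpy as np

# A(z) = z^2 - 1.2 z + 0.32,  B(z) = 0.5 z + 0.2 (minimum phase),
# C(z) = z^2 - 0.5 z + 0.06 (stable):  A(q) y_t = B(q) u_t + C(q) e_t, delay 1.
a1, a2 = -1.2, 0.32
b0, b1 = 0.5, 0.2
c1, c2 = -0.5, 0.06
k1, k2 = c1 - a1, c2 - a2            # K(z) = C(z) - A(z) = k1 z + k2

rng = np.random.default_rng(1)
N = 100_000
e = rng.standard_normal(N)
y = np.zeros(N); u = np.zeros(N)
for t in range(2, N):
    # ARMAX plant (one-step delay from u to y)
    y[t] = (-a1 * y[t-1] - a2 * y[t-2] + b0 * u[t-1] + b1 * u[t-2]
            + e[t] + c1 * e[t-1] + c2 * e[t-2])
    # MV controller (4.18): B(q) u_t = -K(q) y_t
    u[t] = (-k1 * y[t] - k2 * y[t-1] - b1 * u[t-1]) / b0

print("var(y) =", np.var(y[100:]))   # ~ 1.0, the variance of e_t
```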

Theorem 4.3 Consider some ARMAX system $(A, B, C)$ and let the system $(P_0, \phi_0) = (B/A, |C/A|^2)$ and its optimal MV controller $C_0$ have fractional representations defined as in (4.20). Let

$$P_1 = (N_P - Q D_C)(D_P + Q N_C)^{-1} \qquad (4.22)$$

be a perturbation of $P_0$ and consider the following noise spectrum,

$$\phi_1 = \frac{|D_P|^2 \, \phi_0}{|D_P + Q N_C|^2}, \qquad (4.23)$$

which assures that assumption (3.3) is satisfied. Then any system $(P_1, \phi_1)$ with $Q$ defined as in (4.21) has the same optimal MV controller, $C_0 = N_C D_C^{-1}$, as the unperturbed system $(P_0, \phi_0)$. Moreover, any system $(P_1, \phi_1)$ that has optimal MV controller $C_0 = N_C D_C^{-1}$ can be expressed as (4.22) and (4.23) with $Q$ defined in (4.21).

5 Numerical example

To illustrate the theoretical results proposed above, let us take an ARMAX system described by (4.1) with $A(z) = z$, $B(z) = b$ and $C(z) = z + h$, $|h| < 1$. The optimal LQG controller (4.4) is given by

$$u_t = -\frac{b h z}{(b^2 + \lambda) z + \lambda h} \, y_t, \qquad (5.24)$$

and the plant and controller factorizations are given by (4.14) with

$$G(z) = \sqrt{b^2 + \lambda} \, z, \qquad X(z) = \frac{(b^2 + \lambda) z + \lambda h}{\sqrt{b^2 + \lambda}}, \qquad Y(z) = \frac{b h z}{\sqrt{b^2 + \lambda}}.$$

Take $Q = \sqrt{b^2 + \lambda} \, R$, where $R \in \mathcal{S}$ has relative degree 2. Then, for fixed $\lambda$, any ARMAX system $(P_1, \phi_1)$ that has optimal LQG controller (5.24) can be expressed as (4.16) and (4.17) with this $Q$.

Let us now take an ARMAX system (4.1) with $A(z) = z^2 + a_1 z + a_2$, $B(z) = z + b_1$ and $C(z) = z^2 + c_1 z + c_2$, where $C(z)$ is stable and $|b_1| < 1$. The optimal MV controller (4.18) is given by

$$u_t = -\frac{C(z) - A(z)}{B(z)} \, y_t, \qquad (5.25)$$

and the plant and controller factorizations are given by (4.20) with $K(z) = C(z) - A(z)$, $F(z) = 1$, $B_+(z) = B(z)$ and $B_-(z) = 1$. Any ARMAX system $(P_1, \phi_1)$ that has optimal MV controller (5.25) can be expressed as (4.22) and (4.23), where $Q \in \mathcal{S}$ has relative degree 1.

6 Conclusions

In this paper, we have used the Youla parametrization to characterize the set of all plants that have the same optimal LQG or MV controller.

References

[1] Anderson B. D. O., F. De Bruyne and M. Gevers (1994). "Computing LQG plant and controller perturbations", 33rd Conference on Decision and Control, Orlando, Florida, USA, Vol. 2, pp. 1439-1444.

[2] Anderson B. D. O. and J. B. Moore (1990). Optimal Control: Linear Quadratic Methods. Prentice Hall, Englewood Cliffs, New Jersey.

[3] Åström K. J. and B. Wittenmark (1990). Computer Controlled Systems. Prentice Hall, Englewood Cliffs, New Jersey.

[4] De Bruyne F., B. D. O. Anderson, M. Gevers and J. Leblond (1995). "LQG control in a prescribed domain of stability using the Youla parametrization", submitted to Automatica.

[5] Polderman J. W. (1985). "A note on the structure of two subsets of the parameter space in adaptive control problems". Systems & Control Letters, Vol. 7, pp. 25-34.

[6] Rotea M. A. and P. P. Khargonekar (1991). "$H_2$-optimal control with an $H_\infty$ constraint: the state feedback case", Automatica, Vol. 27, pp. 307-316.

[7] Vidyasagar M. (1985). Control System Synthesis. MIT Press, Cambridge, Massachusetts.

[8] Youla D. C., H. A. Jabr and J. J. Bongiorno, Jr. (1976). "Modern Wiener-Hopf design of optimal controllers, part I: the single-input-single-output case". IEEE Trans. Automat. Contr., Vol. AC-21, pp. 3-13.