Intervalwise Receding Horizon H∞-Tracking Control for Discrete Linear Periodic Systems

Ki Baek Kim, Jae-Won Lee, Young Il Lee, and Wook Hyun Kwon
School of Electrical Engineering, Seoul National University, Seoul 151-742, KOREA
FAX: +82-2-871-71, Tel: +82-2-88-7314, E-mail: whkwon@cisl.snu.ac.kr

Abstract

In this paper, a fixed-horizon H∞ tracking control (HTC) for discrete time-varying systems is obtained via dynamic game theory in the state-feedback case. From HTC, an intervalwise receding horizon H∞ tracking control (IHTC) for discrete periodic systems is derived using the intervalwise strategy. Conditions under which IHTC stabilizes the closed-loop system are proposed. Under the proposed stability conditions, it is shown that IHTC guarantees the H∞-norm bound and that IHTC with integral action provides zero offset for a constant command signal. The performance of IHTC is compared with that of RHTC via simulation studies for a discrete periodic system.

1 Introduction

The receding horizon control strategy has been developed as a suitable control strategy for tracking performance and for time-varying systems. It is well known that this strategy is more practical in applications to real systems than the infinite horizon control strategy, because it requires information only over a finite future horizon. The receding horizon strategy obtains the control by optimizing a cost over a finite future horizon.

There are two receding horizon strategies, the pointwise and the intervalwise one. As shown in [3], in the pointwise strategy the terminal point of a fixed-length finite cost horizon recedes at each time instant. In the intervalwise strategy the terminal point is kept fixed for one period and, after that period, it moves by one period and is again kept fixed for the next period. The intervalwise strategy has some advantages over the pointwise one. Its computational burden is much smaller, since it requires the control gains to be computed once per period rather than at every time instant. Moreover, during the horizon in which the optimal solutions are implemented, the intervalwise strategy is optimal, while the pointwise one is suboptimal; hence the tracking performance of the intervalwise strategy is superior. The pointwise strategy has been developed for general time-varying systems [8], [10], [11], while the intervalwise strategy has been developed only for periodic and time-invariant systems [2], [3], [9].

There have been a few studies on receding horizon tracking problems and their stability properties in the H∞ setting [4], [5], [11]. To the authors' knowledge, however, the intervalwise receding horizon strategy has not been investigated for tracking problems or for the H∞ problem. In this paper, an intervalwise receding horizon H∞ tracking control (IHTC) for discrete periodic systems is proposed. The fixed finite horizon H∞ tracking control (HTC), which is first obtained in order to derive IHTC, is different from that of [5]. The solution (HTC) is obtained via dynamic game theory as in [6], [7]. Conditions under which closed-loop stability, zero-offset tracking error, and the infinite horizon H∞-norm bound are guaranteed with IHTC are proposed, respectively.
2 H∞ tracking control for discrete time-varying systems

We derive a finite horizon H∞-tracking control (HTC) using the previous results [6], [7], in which only the regulation problem is dealt with. Consider the following discrete time-varying system:

x(t+1) = A x + B_1 w + B_2 u    (1)
z = [Cx; u],   z_r = [y_r; 0]

where x ∈ R^n, u ∈ R^m, w ∈ R^l, and z ∈ R^{p+m}.

The finite horizon cost index, with the finite terminal weighting matrix F > 0, is

J(z_r; u, w) = [z(N) − z_r(N)]^T F [z(N) − z_r(N)] + Σ_{t=0}^{N−1} [ ‖z − z_r‖²₂ − γ² ‖w‖²₂ ]    (2)

where γ is the disturbance attenuation level and y_r(1), y_r(2), ..., y_r(N) are tracking commands which are assumed to be available over the future horizon N. In the following theorem, we introduce the existing result on the finite horizon H∞-regulation problem, i.e., the case of tracking commands y_r(t) = 0 for all t. From now on, we substitute B_1 with B_γ = γ^{−1} B_1 without loss of generality.

Theorem 1 [6]: When y_r ≡ 0, the dynamic game described by (1), (2) admits a unique feedback saddle-point solution if and only if [I − B_γ^T M(t+1) B_γ] > 0 over t ∈ [0, N−1], where the sequence M(t) over t ∈ [1, N] is generated by the following equation with M(N) = Q_f:

M = Q + A^T M(t+1) Λ^{−1} A    (3)
Λ = I + [B_2 B_2^T − γ^{−2} B_1 B_1^T] M(t+1)

Then the unique saddle-point solution is given for t ∈ [0, N−1] by

u = −B_2^T M(t+1) Λ^{−1} A x    (4)
w = γ^{−2} B_1^T M(t+1) Λ^{−1} A x    (5)

Now, in order to derive a finite horizon H∞ tracking control (HTC), we modify the notation of [1] as follows. Using

P = M [I − B_γ(t−1) B_γ^T(t−1) M]^{−1}    (6)

the matrix inversion lemma

(A_11^{−1} + A_12 A_22 A_21)^{−1} = A_11 − A_11 A_12 (A_21 A_11 A_12 + A_22^{−1})^{−1} A_21 A_11    (7)

and the push-through identity

Σ (I + Δ Σ)^{−1} = (I + Σ Δ)^{−1} Σ    (8)

we can rewrite (3)-(5) as follows:

u = −[I + B_2^T P(t+1) B_2]^{−1} B_2^T P(t+1) A x    (9)
w = γ^{−2} B_1^T [I + P(t+1) B_2 B_2^T]^{−1} P(t+1) A x    (10)
P = A^T P(t+1) A + P B_γ(t−1) [I + B_γ^T(t−1) P B_γ(t−1)]^{−1} B_γ^T(t−1) P
    − A^T P(t+1) B_2 [I + B_2^T P(t+1) B_2]^{−1} B_2^T P(t+1) A + Q    (11)
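To make the recursion of Theorem 1 concrete, the following sketch implements the backward iteration (3) and the saddle-point gains (4)-(5) in Python/NumPy for time-invariant A, B_1, B_2, Q, Q_f. It is an illustrative implementation rather than code from the paper; the function name and interface are assumptions. It also checks the existence condition [I − B_γ^T M(t+1) B_γ] > 0 at each step.

```python
import numpy as np

def h_inf_riccati_recursion(A, B1, B2, Q, Qf, gamma, N):
    """Backward recursion (3) with M(N) = Qf; returns the saddle-point gain
    sequences of (4)-(5).  Raises if the existence condition
    I - B_g^T M(t+1) B_g > 0 fails at some step."""
    n = A.shape[0]
    Bg = B1 / gamma                       # B_gamma = gamma^{-1} B1
    M = Qf.copy()
    Ku, Kw = [None] * N, [None] * N       # u(t) = Ku[t] x(t), w(t) = Kw[t] x(t)
    for t in range(N - 1, -1, -1):
        cond = np.eye(Bg.shape[1]) - Bg.T @ M @ Bg
        if np.min(np.linalg.eigvalsh(cond)) <= 0:
            raise ValueError(f"existence condition fails at t = {t}")
        Lam = np.eye(n) + (B2 @ B2.T - gamma**-2 * B1 @ B1.T) @ M   # Lambda in (3)
        Lam_inv_A = np.linalg.solve(Lam, A)
        Ku[t] = -B2.T @ M @ Lam_inv_A                 # gain of (4)
        Kw[t] = gamma**-2 * B1.T @ M @ Lam_inv_A      # worst-case gain of (5)
        M = Q + A.T @ M @ Lam_inv_A                   # step (3)
    return Ku, Kw
```

In a state-feedback implementation only u(t) = Ku[t] x(t) is applied; Kw[t] x(t) is the worst-case disturbance used in the analysis.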

From the above modified equations, we derive the following result.

Theorem 2: If and only if [I − B̂_γ^T M̂(t+1) B̂_γ] > 0 over t ∈ [0, N−1], the unique saddle-point solution of HTC for discrete time-varying systems is given by

u = −[I + B_2^T P(t+1) B_2]^{−1} B_2^T [P(t+1) A x + g(t+1)]    (12)
w = γ^{−2} B_1^T [I + P(t+1) B_2 B_2^T]^{−1} [P(t+1) A x + g(t+1)]    (13)
g = [I + P B_γ(t−1) B_γ^T(t−1)] {A^T [I + P(t+1) B_2 B_2^T]^{−1} g(t+1) − C^T y_r}
g(N) = −{I + Q_f B_γ(N−1) [I − B_γ^T(N−1) Q_f B_γ(N−1)]^{−1} B_γ^T(N−1)} C^T(N) F y_r(N)
P(N) = Q_f [I − B_γ(N−1) B_γ^T(N−1) Q_f]^{−1}

Proof: It is well known that for a given p × n (p ≤ n) full-rank matrix C there always exists an n × p matrix L such that C L = I_{p×p}. Let x̃ = L y_r. With Q_f = C^T(N) F C(N) and Q = C^T C, (2) is then rewritten as

J = [x(N) − x̃(N)]^T Q_f [x(N) − x̃(N)] + Σ_{t=0}^{N−1} [ (x − x̃)^T Q (x − x̃) + ‖u‖²₂ − γ² ‖w‖²₂ ]    (14)

We define

x̂ = [x; 1],  Â = [A 0; 0 1],  B̂_1 = [B_1; 0],  B̂_2 = [B_2; 0],
Q̂ = [Q  −C^T y_r; −y_r^T C  y_r^T y_r],  Ĉ = [C  −y_r]

Hence (1) and (14) are written as

x̂(t+1) = Â x̂ + B̂_1 w + B̂_2 u    (15)
ẑ = [Ĉ x̂; u],  Q̂_f = Ĉ^T(N) F Ĉ(N)
J = x̂^T(N) Q̂_f x̂(N) + Σ_{t=0}^{N−1} [ x̂^T Q̂ x̂ + ‖u‖²₂ − γ² ‖w‖²₂ ]    (16)

The dynamic game described by (15)-(16) admits a unique feedback saddle-point solution if and only if I − B̂_γ^T M̂(t+1) B̂_γ > 0 over t ∈ [0, N−1] [6], [7]. Let

P̂ = [P  P_12; P_12^T  P_22],  g = P_12

We know that [I − B̂_γ^T M̂(t+1) B̂_γ] = [I − B_γ^T M(t+1) B_γ]. Using (7)-(8), (12) and (13) are obtained from (9)-(11) with A, B_1, B_2, and P replaced by Â, B̂_1, B̂_2, and P̂. □

3 Stability of IHTC for discrete periodic systems

Consider a generally time-varying matrix function L(·). The symbol L^σ(·) will denote a T-periodic matrix function such that L^σ(t) = L(t) for σ ≤ t ≤ σ+T−1 and L^σ(t+T) = L^σ(t) for all t. From the result of the previous section, we propose an intervalwise receding horizon H∞-tracking control (IHTC) which stabilizes discrete T-periodic systems. Assume that N ≥ T+1. Here N is both the cost horizon and the horizon over which the tracking signal is given. Let the initial point be σ and let Q_f be a fixed value. Among the solutions obtained over [σ, σ+N], we use the solutions over [σ, σ+T]. Next, the initial point moves to σ+T and the terminal point of the cost horizon moves to σ+T+N. This procedure repeats. Therefore P^σ(·) is T-periodic.

Let us construct a T-periodic Riccati equation (T-PRE). We define the following notation for k ≥ 0:

E(t) = A^T P(t+1) A − A^T P(t+1) B_2 [I + B_2^T P(t+1) B_2]^{−1} B_2^T P(t+1) A
Q̄(t) = Q + E(σ+T) − E(σ)  if t = σ+(k+1)T,  and  Q̄(t) = Q  otherwise.

Lemma 1: If A is nonsingular for all t, Q̄^σ(·) makes the solutions of the following Riccati equation T-periodic, i.e., it makes the following T-PRE:

P = A^T P(t+1) A + Q̄ − A^T P(t+1) B_2 [I + B_2^T P(t+1) B_2]^{−1} B_2^T P(t+1) A
    + P B_γ(t−1) [I + B_γ^T(t−1) P B_γ(t−1)]^{−1} B_γ^T(t−1) P    (17)

Proof: First, we obtain the solutions over [σ+1, σ+N] from (11). Let the initial point σ be 0. Then from (17), we obtain at the terminal time t = T:

P(T) − P(T) B_γ(T−1) [I + B_γ^T(T−1) P(T) B_γ(T−1)]^{−1} B_γ^T(T−1) P(T) − Q̄(T)
  = A^T(T) P(T+1) A(T) − A^T(T) P(T+1) B_2(T) [I + B_2^T(T) P(T+1) B_2(T)]^{−1} B_2^T(T) P(T+1) A(T)    (18)

From the definition of Q̄(T), the left-hand side of (18) equals E(0). From (18), we observe that P(T+1) = P(1). Similarly, P((k+1)T + i + 1) = P(i+1) for k ≥ 0 and i ∈ [0, T−1]. □
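As a rough illustration of the intervalwise strategy described above, the sketch below recomputes the N-step gains once per period and applies only the first T of them before shifting the horizon. It treats the regulation case (y_r = 0) and, for brevity, a time-invariant plant; the helper h_inf_riccati_recursion is the sketch given after (11), and all names are assumptions rather than the paper's notation. For a genuinely T-periodic plant the matrices, and hence the gains, would be re-evaluated over each shifted horizon, and the tracking feedforward g(t) of Theorem 2 would be added.

```python
import numpy as np

def ihtc_simulate(A, B1, B2, Q, Qf, gamma, N, T, x0, w_seq, n_periods):
    """Illustrative intervalwise receding-horizon loop: every T steps the
    N-step saddle-point gains are recomputed and only the first T of them
    are applied before the horizon is shifted by one period."""
    x = x0.copy()
    traj = [x.copy()]
    t_abs = 0                                # absolute time index
    for _ in range(n_periods):
        # For a periodic plant, A, B1, B2 would be re-evaluated here over
        # the shifted horizon [t_abs, t_abs + N].
        Ku, _ = h_inf_riccati_recursion(A, B1, B2, Q, Qf, gamma, N)
        for i in range(T):                   # implement only the first T gains
            u = Ku[i] @ x
            x = A @ x + B1 @ w_seq[t_abs] + B2 @ u
            traj.append(x.copy())
            t_abs += 1
    return np.array(traj)
```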

Now the stabilizing property of the solution is derived in forward time, i.e., for P^{σ+kT}(·). Consider a periodic matrix function X(·) with X(t+T) = X(t), and let the state feedback gain be

Γ(t) = −[I + B_2^T X(t+1) B_2]^{−1} B_2^T X(t+1) A

Then X(·) is said to be stabilizing if the matrix A(·) + B_2(·) Γ(·) is asymptotically stable. If we replace P and Q̄ with P^{σ+kT} and Q̄^{σ+kT}, respectively, we obtain

P^{σ+kT} = F_k^T P^{σ+kT}(t+1) F_k + Q̄^{σ+kT} + K_k^T K_k
           + P^{σ+kT} B_γ(t−1) [I + B_γ^T(t−1) P^{σ+kT} B_γ(t−1)]^{−1} B_γ^T(t−1) P^{σ+kT}    (19)
K_k = −[I + B_2^T P^{σ+kT}(t+1) B_2]^{−1} B_2^T P^{σ+kT}(t+1) A
F_k = A + B_2 K_k

Before stating the following theorem, we define

Ψ(t) = Q̄^{σ+kT} + K_k^T K_k + P^{σ+kT} B_γ(t−1) [I + B_γ^T(t−1) P^{σ+kT} B_γ(t−1)]^{−1} B_γ^T(t−1) P^{σ+kT}    (20)

Theorem 3: Suppose that y_r over the next N-horizon at σ+kT, i.e., over [σ+kT+1, σ+kT+N], is given for k ≥ 0. Let P(·) be the solution of (11) and assume that
1) P > 0, or M^{−1} > γ^{−2} B_1(t−1) B_1^T(t−1), on t ∈ [σ+1, σ+T]
2) Σ_{t=σ+1}^{σ+T} Ψ(t) > 0
3) (C(·), A(·)) is completely observable
4) A(t) is nonsingular for all t
Then the periodic matrix functions P^{σ+kT}(·) are stabilizing for each k. Note that [I − B_γ^T M(t+1) B_γ] > 0 is satisfied under 1), since [I − B_γ^T M(t+1) B_γ] = [I + B_γ^T P(t+1) B_γ]^{−1}.

Proof: By (19) and (20), P^{σ+kT}(σ+1) can be written as

P^{σ+kT}(σ+1) = Φ̄_{F_k}^T(σ+1) P^{σ+kT}(σ+T+1) Φ̄_{F_k}(σ+1) + Σ_{t=σ+1}^{σ+T} Φ_{F_k}^T(t, σ+1) Ψ(t) Φ_{F_k}(t, σ+1)

where Φ_{F_k}(σ+T, σ) = F_k(σ+T−1) ⋯ F_k(σ+1) F_k(σ) and Φ̄_{F_k}(σ) = Φ_{F_k}(σ+T, σ).

Let v be an eigenvector of Φ̄_{F_k}(σ+1) associated with the eigenvalue λ. Then we obtain

(1 − |λ|²) v* P^{σ+kT}(σ+1) v = Σ_{t=σ+1}^{σ+T} v* Φ_{F_k}^T(t, σ+1) Ψ(t) Φ_{F_k}(t, σ+1) v

From the assumptions, we observe that all characteristic multipliers of Φ̄_{F_k}(σ+1) belong to the open unit disk. Therefore P^{σ+kT}(·) is stabilizing [2]. □

Let us consider assumptions 1) and 2) of Theorem 3. Assumption 2) does not seem to be a strong condition. In LQ problems, it is well known that P(·) > 0 under basic conditions such as controllability and observability; in H∞ problems, however, no general condition guaranteeing P(·) > 0 has been found. In Section 6, it will be shown that quite a small γ satisfies 1).

Consider discrete time-invariant systems. A discrete time-invariant system, as shown in [2], can be viewed as a periodic system of arbitrary period. Then we can derive a time-invariant version of Theorem 3.

Corollary 1: Let P(·) be the solution of (11) for discrete time-invariant systems, with 3) and 4) of Theorem 3. If there exists an integer T such that
1) P > 0, or M^{−1} > γ^{−2} B_1 B_1^T, on [σ+1, σ+T]
2) Σ_{t=σ+1}^{σ+T} Ψ(t) > 0
then the T-periodic matrix functions P^{kT}(·) are stabilizing for each k.

4 The stabilizing IHTC with integral action

In this section, we investigate the zero-offset property of the proposed stabilizing IHTC when the tracking command is constant and the system is time-invariant. It is well known that such a property can be obtained by introducing the following incremental state-space model:

x_e(t+1) = A_e x_e + B_1e Δw + B_2e Δu    (21)
y = C_e x_e
z = [C_e x_e; Δu],  z_r = [y_r; 0]

where

x_e = [y; Δx],  A_e = [I  CA; 0  A],  B_1e = [C B_1; B_1],  B_2e = [C B_2; B_2],  C_e = [I  0]

The dynamic game based on (21) gives the following control:

Δu = −[I + B_2e^T P_e(t+1) B_2e]^{−1} B_2e^T [P_e(t+1) A_e x_e + g_e(t+1)]    (22)

where P_e(·) and g_e(·) are obtained from (11) and from g of Section 2 with A, B, and C replaced by A_e, B_e, and C_e. Note that the IHTC based on the incremental control Δu in (22) is stabilizing by Theorem 3 of the previous section. We now show that the stabilizing IHTC with integral action provides zero offset.

Corollary 2: The stabilizing IHTC with integral action provides zero offset.

Proof: g_e is derived similarly to [10]:

g_e(t) = −Φ_e^T(T+1, t+1) C_e^T F y_r − Σ_{j=t+1}^{T} Φ_e^T(j, t+1) B_ce(j−1) C_e^T y_r,  t ∈ [1, T−1]
Φ_e(t, t_0) = A_ce(t−1) A_ce(t−2) ⋯ A_ce(t_0)
A_ce = [I + B_2e B_2e^T P_e]^{−1} A_e [I + B_γe B_γe^T P_e(t−1)]
B_ce = [I + P_e B_γe B_γe^T]^{−1},  B_γe = γ^{−1} B_1e

Now we demonstrate the following fact:

P_e(t) I_e = Φ_e^T(T+1, t+1) C_e^T F + Σ_{j=t+1}^{T} Φ_e^T(j, t+1) B_ce(j−1) C_e^T,  t ∈ [1, T]    (23)

Define, with H_T(i, t+1) = Φ_e(i, t+1),

G(t) = H_T^T(T+1, t+1) C_e^T F + Σ_{j=t+1}^{T} H_T^T(j, t+1) B_ce(j−1) C_e^T

If t = T, it is clear that P_e(T) I_e = G(T). Assuming that (23) holds for t = n+1, we have

P_e(n+1) I_e = G(n+1) = H_T^T(T+1, n+2) C_e^T F + Σ_{j=n+2}^{T} H_T^T(j, n+2) B_ce(j−1) C_e^T

Let t = n; then

P_e(n) I_e = A_e^T {I − P_e(n+1) B_2e [I + B_2e^T P_e(n+1) B_2e]^{−1} B_2e^T} P_e(n+1) I_e + C_e^T
             + P_e(n) B_γe [I + B_γe^T P_e(n) B_γe]^{−1} B_γe^T P_e(n) I_e
           = H_T^T(T+1, n+1) C_e^T F + Σ_{j=n+1}^{T} H_T^T(j, n+1) B_ce(j−1) C_e^T = G(n)

This means that (23) is true. Using this fact, the control Δu can be written, with e = y − y_r, as

Δu = −[I + B_2e^T P_e(t+1) B_2e]^{−1} B_2e^T P_e(t+1) A_e [e; Δx]    (24)

If we define x_E = [e; Δx], we get

x_E(t+1) = A_e x_E + B_1e Δw + B_2e Δu
e = C_e x_E

Since this system is stable with the control (24), e → 0 as t → ∞, which means y → y_r. □
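A minimal sketch of building the incremental model (21) is given below, under the block structure reconstructed above (x_e = [y; Δx]); since parts of the source display are garbled, that structure should be read as an assumption, and the function name is illustrative only.

```python
import numpy as np

def incremental_model(A, B1, B2, C):
    """Assumed augmented matrices of the incremental model (21):
       x_e = [y; dx],  A_e = [[I, C A], [0, A]],
       B_1e = [[C B1], [B1]],  B_2e = [[C B2], [B2]],  C_e = [I, 0]."""
    n = A.shape[0]          # number of plant states
    p = C.shape[0]          # number of measured outputs
    Ae  = np.block([[np.eye(p), C @ A],
                    [np.zeros((n, p)), A]])
    B1e = np.vstack([C @ B1, B1])
    B2e = np.vstack([C @ B2, B2])
    Ce  = np.hstack([np.eye(p), np.zeros((p, n))])
    return Ae, B1e, B2e, Ce
```

P_e(·) and g_e(·) are then computed exactly as in Section 2, with (A, B_1, B_2, C) replaced by (A_e, B_1e, B_2e, C_e).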

5 The H∞-norm bound of the stabilizing IHTC

In this section, we show that the stabilizing IHTC guarantees the H∞-norm bound when y_r(·) = 0.

Theorem 4: Assume that E(σ+T) − E(σ) ≥ 0. With the stabilizing IHTC u, the H∞-norm bound of the closed-loop system is guaranteed, i.e.,

‖T_zw‖_∞ < γ    (25)

Proof: For convenience, we denote P^{σ+kT}, M^{σ+kT}, F_k, and Q̄^{σ+kT} by P, M, F, and Q̄, respectively. By assumption, M > 0. Therefore

−x^T(0) M(0) x(0) < Σ_{t=0}^{∞} [x^T(t+1) M(t+1) x(t+1) − x^T M x]    (26)

Let ΔV(t) = x^T(t+1) M(t+1) x(t+1) − x^T M x. Then

ΔV(t) = [F x + B_1 w]^T M(t+1) [F x + B_1 w] − x^T M x
      = x^T [X + Y] x − W^T [I − B_γ^T M(t+1) B_γ] W + γ² w^T w

where

W = γ w − [I − B_γ^T M(t+1) B_γ]^{−1} B_γ^T M(t+1) F x
X = F^T M(t+1) F − M
Y = F^T M(t+1) B_γ [I − B_γ^T M(t+1) B_γ]^{−1} B_γ^T M(t+1) F

By (6) and (7),

[I − B_γ^T M(t+1) B_γ]^{−1} = I + B_γ^T P(t+1) B_γ    (27)

By (6) and (27), X and Y can be written as

X = F^T P(t+1) F − M − F^T P(t+1) B_γ [I + B_γ^T P(t+1) B_γ]^{−1} B_γ^T P(t+1) F
Y = F^T {I − P(t+1) B_γ [I + B_γ^T P(t+1) B_γ]^{−1} B_γ^T} P(t+1) B_γ [I + B_γ^T P(t+1) B_γ]
    × B_γ^T P(t+1) {I − B_γ [I + B_γ^T P(t+1) B_γ]^{−1} B_γ^T P(t+1)} F

Denoting Θ = B_γ^T P(t+1) B_γ and Ξ = B_γ^T P(t+1) F,

−F^T P(t+1) B_γ [I + B_γ^T P(t+1) B_γ]^{−1} B_γ^T P(t+1) F + Y
  = Ξ^T [−(I + Θ)^{−1} + (I − (I + Θ)^{−1} Θ)(I + Θ)(I − Θ (I + Θ)^{−1})] Ξ = 0    (28)

using Θ = (Θ + I − I). Using (19) and (28),

X + Y = −Q̄ − K_k^T K_k

Therefore

ΔV(t) = −x^T [Q̄ + K_k^T K_k] x − W^T [I − B_γ^T M(t+1) B_γ] W + γ² ‖w‖²₂
      < −x^T [Q̄ + K_k^T K_k] x + γ² ‖w‖²₂ ≤ −z^T z + γ² ‖w‖²₂    (29)

When x(0) = 0, we see from (26) and (29) that (25) is satisfied. □

6 Simulation studies

We demonstrate the properties of the proposed IHTC through simulation studies. The tracking performance of IHTC is compared with that of RHTC, which is known to show good performance [10]. We consider the following T-periodic system matrices:

A = [0.9 + 0.9α(t)   ∗ ;  0.1 + 0.1α(t)   0.7 + 0.7α(t)],
B_1 = [1 + 0.1α(t);  ∗],  B_2 = [0.2 + 0.2α(t);  0.8 + 0.8α(t)],  C = [q + 0.1q α(t)   0.1q]

where α(t) = cos(2πt/T), q is a tuning parameter, F = 1, and R = I (for RHTC).
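The example can be set up along the following lines. The entries named a12 and b12 below are hypothetical placeholders for matrix entries that are not recoverable from the source; the remaining entries, α(t) = cos(2πt/T), the tuning parameter q, and F = 1 follow the text.

```python
import numpy as np

def periodic_matrices(t, T, q, a12=0.0, b12=0.1):
    """T-periodic example matrices of Section 6.  a12 (the (1,2) entry of A)
    and b12 (the second entry of B1) are hypothetical placeholders; the
    remaining entries follow the paper."""
    alpha = np.cos(2 * np.pi * t / T)
    A  = np.array([[0.9 + 0.9 * alpha, a12],
                   [0.1 + 0.1 * alpha, 0.7 + 0.7 * alpha]])
    B1 = np.array([[1.0 + 0.1 * alpha], [b12]])
    B2 = np.array([[0.2 + 0.2 * alpha], [0.8 + 0.8 * alpha]])
    C  = np.array([[q + 0.1 * q * alpha, 0.1 * q]])
    return A, B1, B2, C
```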

In this simulation we assume that T = 2 and γ = 0.7. With these values we obtain a stabilizing IHTC. The disturbance is generated by multiplying 2% of the tracking command by a random signal with a normal distribution between −0.5 and 0.5. We select the cost horizon as T + 1 for both IHTC and RHTC.

Fig. 1 shows the outputs of IHTC and RHTC for the given command signal, and Fig. 2 shows the difference between the output and the command signal. In Fig. 1, the solid curve represents the tracking command; in Figs. 1-2, '.' represents the result of IHTC and '-' represents the result of RHTC. Figs. 1 and 2 show that the performance of IHTC is better than that of RHTC, and the same holds when there is no disturbance. If the cost horizon is increased, the performance of RHTC improves, but even then the performance of IHTC remains slightly better. The results for time-invariant systems are similar to those for T-periodic systems.

7 Conclusion

In this paper, a fixed finite horizon H∞-tracking control (HTC) for discrete time-varying systems is first derived. An intervalwise receding horizon H∞-tracking control (IHTC) is then proposed for discrete periodic systems. It is shown that the proposed IHTC guarantees closed-loop stability, an infinite horizon H∞-norm bound, and zero-offset tracking error under the proposed conditions. Through the example, it is shown that the proposed IHTC provides better tracking performance than the existing pointwise receding horizon control proposed in [10]; especially when the cost horizon is near the system order, the performance of IHTC is very good compared with that of the pointwise one for some systems. One advantage of the proposed IHTC is that it can show very good tracking performance in spite of external disturbances. Another advantage is that the computational burden is reduced, so that the IHTC can easily be applied to real-time tracking systems.

References

[1] I. Yaesh and U. Shaked, "Minimum H∞ Norm Regulation of Linear Discrete-Time Systems and Its Relation to Linear Quadratic Discrete Games," IEEE Trans. Automat. Contr., vol. AC-35, pp. 1061-1064, 1990.
[2] G. D. Nicolao, "Cyclomonotonicity and Stabilizability Properties of Solutions of the Difference Periodic Riccati Equation," IEEE Trans. Automat. Contr., vol. AC-37, pp. 1405-1410, 1992.
[3] G. D. Nicolao and S. Strada, "What is the Easiest Way to Stabilize a Linear Periodic System?," Proc. ECC'95, 1995.
[4] U. Shaked and C. E. DeSouza, "Continuous-Time Tracking Problems in an H∞ Setting: A Game Theory Approach," IEEE Trans. Automat. Contr., vol. AC-40, pp. 841-852, 1995.
[5] A. Cohen and U. Shaked, "Linear Discrete-Time H∞-Optimal Tracking with Preview," Proc. of the 34th CDC, New Orleans, LA, pp. 2555-2561, December 1995.
[6] T. Basar and P. Bernhard, H∞-Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach, Birkhäuser, Boston Basel Berlin, 1991.
[7] T. Basar, "A Dynamic Games Approach to Controller Design: Disturbance Rejection in Discrete Time," IEEE Trans. Automat. Contr., vol. AC-36, pp. 936-952, 1991.
[8] W. H. Kwon and A. E. Pearson, "On Feedback Stabilization of Time-Varying Discrete Linear Systems," IEEE Trans. Automat. Contr., vol. AC-23, no. 3, pp. 479-481, 1978.
[9] W. H. Kwon and A. E. Pearson, "Linear Systems with Two-Point Boundary Lyapunov and Riccati Equations," IEEE Trans. Automat. Contr., vol. AC-25, no. 2, 1982.
[10] W. H. Kwon and D. G. Byun, "Receding Horizon Tracking Control as a Predictive Control and Its Stability Properties," Int. J. Control, vol. 50, no. 5, pp. 1807-1824, 1989.
[11] S. Lall and K. Glover, "A Game Theoretic Approach to Moving Horizon Control," Oxford University Press, edited by D. Clarke, 1994.