The Generalized Laplace Transform: Applications to Adaptive Control*


J.M. Davis (1), I.A. Gravagne (2), B.J. Jackson (1), R.J. Marks II (2), A.A. Ramos (1)
(1) Department of Mathematics, (2) Department of Electrical Engineering, Baylor University, Waco, TX 76798
*This work was supported by NSF Grant EHS#0410685.

Establish a region of convergence for the forward transform.

Establish a unique inverse transform (and inversion formula).

Explore algebraic properties of the transform: convolution, identity element.

Applications to adaptive control of linear systems (time permitting): controllability and observability for linear systems.

Definition. For f : T → R, the transform of f, denoted by L{f} or F(z), is given by

L{f}(z) = F(z) = ∫_0^∞ f(t) g^σ(t) Δt,   (1)

where g(t) = e_{⊖z}(t, 0).

Recall: ⊖z = −z / (1 + µ(t)z), and e_λ(t, t_0) is the unique solution of the dynamic IVP x^Δ(t) = λ x(t), x(t_0) = 1.
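As a concrete illustration (my addition, not from the slides), here is a minimal numerical sketch of the forward transform on a purely isolated time scale: there e_{⊖z}(t, 0) is the product of the factors 1/(1 + µ(s)z) over earlier points, g^σ(t) = e_{⊖z}(σ(t), 0), and the Δ-integral reduces to a graininess-weighted sum. The time scale, the function f, and the truncation horizon below are all hypothetical.

```python
import numpy as np

def delta_laplace(f, ts, z):
    """Truncated generalized Laplace transform on an isolated time scale.

    ts : increasing array of time-scale points starting at 0 (truncated horizon)
    f  : callable, the transformable function f : T -> R
    z  : complex transform variable (assumed to lie in the region of convergence)
    """
    mu = np.diff(ts)                       # graininess mu(t_i) = t_{i+1} - t_i
    e = 1.0 + 0j                           # e_{(-)z}(t_0, 0) = 1
    total = 0.0 + 0j
    for i in range(len(mu)):
        e_sigma = e / (1.0 + mu[i] * z)    # e_{(-)z}(sigma(t_i), 0) = g^sigma(t_i)
        total += f(ts[i]) * e_sigma * mu[i]
        e = e_sigma                        # advance to the next point
    return total

# Example: f(t) = 1 on (a truncation of) T = Z should give approximately L{1}(z) = 1/z.
ts = np.arange(0, 200)                     # hypothetical truncation of T = Z
print(delta_laplace(lambda t: 1.0, ts, 0.5), 1 / 0.5)
```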

Hilger Complex Plane and Cylinder Transform

Assumptions

T is a time scale with bounded graininess, that is, µ_min ≤ µ(t) ≤ µ_max < ∞ for all t ∈ T. We set µ_min = µ̲ and µ_max = µ̄.

Under these assumptions, if λ ∈ H, where H denotes the Hilger circle given by

H = H_t = { z ∈ C : |z + 1/µ(t)| < 1/µ(t) },

then lim_{t→∞} e_λ(t, 0) = 0.

Careful: λ = ⊖z is time varying!
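A quick numerical illustration (my addition, not from the slides): on an isolated time scale, e_λ(t, 0) is the product of the factors 1 + µ(t_i)λ, and λ ∈ H_{t_i} is exactly the condition |1 + µ(t_i)λ| < 1, so the product decays. The sample time scale and value of λ below are hypothetical.

```python
import numpy as np

def hilger_exponential(lam, ts):
    """e_lambda(t_N, 0) on an isolated time scale: product of (1 + mu(t_i)*lam)."""
    mu = np.diff(ts)
    return np.prod(1.0 + mu * lam)

def in_hilger_circle(lam, mu):
    """lam lies in H_t  <=>  |lam + 1/mu| < 1/mu  <=>  |1 + mu*lam| < 1."""
    return abs(1.0 + mu * lam) < 1.0

# Hypothetical time scale with graininess alternating between 0.1 and 0.5.
ts = np.cumsum(np.r_[0.0, np.tile([0.1, 0.5], 200)])
lam = -1.5                                   # inside every Hilger circle here
print(all(in_hilger_circle(lam, m) for m in np.diff(ts)))   # True
print(abs(hilger_exponential(lam, ts)))                      # ~ 0, i.e. decay
```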

To give an appropriate domain for the transform, which of course is tied to the region of convergence of the integral in (1), for any c > 0 define the set

D = { z ∈ C : z ∉ H_max and z satisfies |1 + zµ̄| > |1 + cµ̄| } = H_max^c ∩ { z : Re_µ̄(z) > Re_µ̄(c) }.

Note that if µ = 0, this set is a right half plane.

Dynamic Hilger circles and R.O.C.s

[Figure: time varying Hilger circles H_max, H_t, H_min.] The largest, H_max, has center −1/µ_min, while the smallest, H_min, has center −1/µ_max. In general, the Hilger circle at time t is denoted by H_t and has center −1/µ(t). The exterior of each circle is shaded, representing the corresponding region of convergence.

[Figure: regions of convergence, showing the Hilger circles H_min, H_t, H_max and the abscissa Re_µ(z) = c.] The region of convergence is shaded. On the left, the µ = 0 case; on the right, the µ ≠ 0 case. In the latter, note that our proof of the inversion formula is only valid for Re(z) > c, i.e. the right half plane bounded by this abscissa of convergence, even though the region of convergence is clearly a superset of this right half plane.

Furthermore, if z ∈ D, then ⊖z ∈ H_min ⊂ H_t, since for all z ∈ D, ⊖z satisfies the inequality |⊖z + 1/µ̄| < 1/µ̄. Finally, if Re_µ̄(z) > Re_µ̄(c), then Re_{µ(t)}(z) > Re_{µ(t)}(c) for all t ∈ T, since the Hilger real part is an increasing function in µ.
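For intuition (my addition, not from the slides), the Hilger real part can be computed directly from its definition Re_µ(z) = (|1 + µz| − 1)/µ for µ > 0, with Re_0(z) = Re(z), so the convergence condition appearing in the definition of D is a one-line check. The sample values of c and µ̄ below are hypothetical.

```python
import numpy as np

def hilger_re(z, mu):
    """Hilger real part: Re_mu(z) = (|1 + mu*z| - 1)/mu, with Re_0(z) = Re(z)."""
    if mu == 0:
        return z.real
    return (abs(1.0 + mu * z) - 1.0) / mu

def satisfies_abscissa(z, c, mu_bar):
    """Check the condition Re_mubar(z) > Re_mubar(c) from the definition of D."""
    return hilger_re(z, mu_bar) > hilger_re(c, mu_bar)

# Hypothetical values: abscissa c = 1, maximal graininess mu_bar = 0.3.
print(satisfies_abscissa(2.0 + 1.0j, 1.0, 0.3))   # True: to the right of the abscissa
print(satisfies_abscissa(0.5 + 0.0j, 1.0, 0.3))   # False
```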

Exponential Type I and Type II Functions

Definition. The function f : T → R is said to be of exponential Type I if there exist constants M, c > 0 such that |f(t)| ≤ M e^{ct}. Furthermore, f is said to be of exponential Type II if there exist constants M, c > 0 such that |f(t)| ≤ M e_c(t, 0).

The time scale exponential function e_α(t, t_0) is Type II. The time scale polynomials h_k(t, 0) are Type I.

Region of Convergence for the Transform

Theorem. The integral ∫_0^∞ e^σ_{⊖z}(t, 0) f(t) Δt converges absolutely for z ∈ D if f(t) is of exponential Type II with exponential constant c.

The same estimates used in the proof of the preceding theorem can be used to show that if f(t) is of exponential Type II with constant c and Re_µ(z) > Re_µ(c), then lim_{t→∞} e_{⊖z}(t, 0) f(t) = 0.

Properties of the Transform

Theorem. Let F denote the generalized transform of f : T → R. Then:
F(z) is analytic in Re_µ(z) > Re_µ(c);
F(z) is bounded in Re_µ(z) > Re_µ(c);
lim_{z→∞} F(z) = 0.

Initial and Final Values Theorem

Theorem. Let f : T → R have generalized transform F(z). Then

f(0) = lim_{z→∞} z F(z),   lim_{t→∞} f(t) = lim_{z→0} z F(z),

when the limits exist.
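A quick sanity check of both limits (my addition, not from the slides), using f(t) ≡ 1, whose transform is F(z) = 1/z (as used in the convolution example later):

f(0) = lim_{z→∞} z F(z) = lim_{z→∞} z · (1/z) = 1,   lim_{t→∞} f(t) = lim_{z→0} z F(z) = lim_{z→0} z · (1/z) = 1,

both of which agree with f ≡ 1.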

Theorem. Suppose that F(z) is analytic in the region Re_µ(z) > Re_µ(c) and F(z) → 0 uniformly in this region as z → ∞. Suppose F(z) has finitely many regressive poles of finite order {z_1, z_2, ..., z_n}, and that F̃_R(z) is the transform of the function f̃(t) on R that corresponds to the transform F(z) = F_T(z) of f(t) on T. If

∫_{c−i∞}^{c+i∞} |F̃_R(z)| dz < ∞,

then

f(t) = Σ_{i=1}^{n} Res_{z=z_i} e_z(t, 0) F(z)

has transform F(z) for all z with Re(z) > c.

Proof. The proof follows from the commutative diagram between the appropriate function spaces.

Let C̃ be the collection of transforms over R, and D the collection of transforms over T, i.e. C̃ = {F̃_R(z)} and D = {F_T(z)}, where F̃_R(z) = G̃(z) e^{−zτ} and F_T(z) = G(z) e_{⊖z}(τ, 0) for G̃ and G rational functions in z and for τ an appropriate constant. Each of θ, γ, θ^{−1}, γ^{−1} maps functions involving the continuous exponential to the time scale exponential and vice versa. For example, γ maps the function F̃(z) = e^{−za}/z to the function F(z) = e_{⊖z}(a, 0)/z, while γ^{−1} maps F(z) back to F̃(z) in the obvious manner. If the representation of F(z) is independent of the exponential (that is, τ = 0), then γ and its inverse will act as the identity. For example,

γ( 1/(z^2 + 1) ) = γ^{−1}( 1/(z^2 + 1) ) = 1/(z^2 + 1).

θ will send the continuous exponential function to the time scale exponential function in the following manner: if we write f̃(t) ∈ C(R, R), with f̃(t) of exponential order, as

f̃(t) = Σ_{i=1}^{n} Res_{z=z_i} e^{zt} F̃_R(z),

then

f(t) = θ(f̃(t)) = Σ_{i=1}^{n} Res_{z=z_i} e_z(t, 0) F_T(z).

To go from F̃_R(z) to F_T(z), we simply switch expressions involving the continuous exponential in F̃_R(z) with the time scale exponential, giving F_T(z), as was done for γ and its inverse. θ^{−1} will then act on

g(t) = Σ_{i=1}^{n} Res_{z=z_i} e_z(t, 0) G_T(z),

with g(t) of exponential Type II, as

θ^{−1}(g(t)) = Σ_{i=1}^{n} Res_{z=z_i} e^{zt} G̃_R(z).

For a given time scale transform F_T(z), we begin by mapping to F̃_R(z) via γ^{−1}. The hypotheses on F_T(z) and F̃_R(z) are enough to guarantee that the inverse of F̃_R(z) exists for all z with Re(z) > c, and is given by

f̃(t) = Σ_{i=1}^{n} Res_{z=z_i} e^{zt} F̃_R(z).

Apply θ to f̃(t) to retrieve the time scale function

f(t) = Σ_{i=1}^{n} Res_{z=z_i} e_z(t, 0) F_T(z),

whereby (γ ∘ L_R ∘ θ^{−1})(f(t)) = F_T(z) as claimed.

Remarks

Does there exist a contour in the complex plane around which it is possible to integrate to obtain the same results through a more operational approach?

For T = R or T = Z, yes! If T is completely discrete, then if we choose any circle in the region of convergence which encloses all of the singularities of F(z), the inversion formula holds.

In general? Sorry, just don't know... (Want a problem to work on?)

We can use these techniques to generate generalized (and unified) versions of any canonical transform (e.g. Fourier, Mellin, Hankel, etc.).

For any transformable function f(t) on T, there is a shadow function f̃(t) defined on R. That is, to determine the appropriate time scale analogue of the function f(t) in terms of the transform, we use the diagram to map its transform on R to its transform on T.

Example. Suppose F(z) = 1/z^2. The hypotheses of the Theorem are readily verified, so that

L^{−1}{F} = f(t) = Res_{z=0} e_z(t, 0)/z^2 = t.

For F(z) = 1/z^3, we have

L^{−1}{F} = f(t) = Res_{z=0} e_z(t, 0)/z^3 = ( t^2 − ∫_0^t µ(τ) Δτ ) / 2 = h_2(t, 0).

The last equality is justified since the function

f(t) = ( t^2 − ∫_0^t µ(τ) Δτ ) / 2

is the unique solution of the initial value problem f^Δ(t) = h_1(t, 0), f(0) = 0.

In a similar manner, we can use an induction argument coupled with the Theorem to show that the inverse of F(z) = 1/z^{k+1}, for k a positive integer, is h_k(t, 0). In fact, it is easy to show that the inversion formula gives the claimed inverses for any of the elementary functions in Bohner and Peterson's table. These elementary functions become the proper time scale analogues, or shadows, of their continuous counterparts.
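The R-side of the diagram can be checked symbolically (my addition, not from the slides): the classical residues Res_{z=0} e^{zt}/z^{k+1} = t^k/k! are the shadow functions whose time scale analogues, via θ and γ, are the polynomials h_k(t, 0). A short SymPy sketch:

```python
import sympy as sp

t, z = sp.symbols('t z', positive=True)

# On R, the inversion formula yields the shadow functions t^k / k!,
# whose time scale analogues (via the theta/gamma diagram) are h_k(t, 0).
for k in range(1, 5):
    shadow = sp.residue(sp.exp(z * t) / z**(k + 1), z, 0)
    print(k, sp.simplify(shadow))      # prints t, t**2/2, t**3/6, t**4/24
```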

A very important inverse...

Consider the function F(z) = e_{⊖z}(σ(a), 0). The Theorem cannot be applied. However, if a is right scattered, one can show that the Hilger delta function, which has representation

δ_a^H(t) = 1/µ(a) if t = a, and 0 if t ≠ a,

has F(z) as a transform, while if a is right dense, the Dirac delta functional has F(z) as a transform.

Uniqueness of the Inverse

If two functions f and g have the same transform, are f and g necessarily the same function? As on R, the answer is affirmative if we define equality in an almost everywhere (a.e.) sense.

Upshot (after some measure theoretic hoops to jump through): To show that a property holds almost everywhere on a time scale, it is necessary to show that the property holds at every right scattered point of the time scale, and that the set of right dense points at which the property fails has Lebesgue measure zero.

Theorem. If the functions f : T → R and g : T → R have the same transform, then f = g a.e.

Setting the table for convolution

Definition. The delay of the function x : T → R by σ(τ) ∈ T, denoted by x(t, σ(τ)), is given by

u_{σ(τ)}(t) x(t, σ(τ)) = Σ_{i=1}^{n} Res_{z=z_i} X(z) e_z(t, σ(τ)).

Here, u_ξ(t) : T → R is the time scale unit step function activated at time t = ξ ∈ T.

Shifts

Notice that u_{σ(τ)}(t) x(t, σ(τ)) has transform X(z) e^σ_{⊖z}(τ, 0). Indeed,

u_{σ(τ)}(t) x(t, σ(τ)) = Σ_{i=1}^{n} Res_{z=z_i} X(z) e_z(t, σ(τ))
                       = Σ_{i=1}^{n} Res_{z=z_i} [ X(z) e^σ_{⊖z}(τ, 0) ] e_z(t, 0)
                       = L^{−1}{ X(z) e^σ_{⊖z}(τ, 0) }.

This allows us to use the term delay to describe x(t, σ(τ)), since on T = R the transformed function X(z) e^{−zτ} corresponds to the function u_τ(t) x(t − τ). With the delay operator defined, we can define the convolution of two arbitrary transformable time scale functions.

The Convolution Product

Definition. The convolution of the functions f : T → R and g : T → R, denoted f ∗ g, is given by

(f ∗ g)(t) = ∫_0^t f(τ) g(t, σ(τ)) Δτ.

The Convolution Theorem

Theorem. The transform of a convolution product is the product of the transforms.

Proof. Assuming absolute integrability of all functions involved, by the delay property and inversion we obtain

L{f ∗ g} = ∫_0^∞ e^σ_{⊖z}(t, 0) [(f ∗ g)(t)] Δt
         = ∫_0^∞ [ ∫_0^t f(τ) g(t, σ(τ)) Δτ ] e^σ_{⊖z}(t, 0) Δt
         = ∫_0^∞ f(τ) [ ∫_{σ(τ)}^∞ g(t, σ(τ)) e^σ_{⊖z}(t, 0) Δt ] Δτ
         = ∫_0^∞ f(τ) L{ u_{σ(τ)}(t) g(t, σ(τ)) } Δτ
         = ∫_0^∞ f(τ) [ G(z) e^σ_{⊖z}(τ, 0) ] Δτ
         = [ ∫_0^∞ f(τ) e^σ_{⊖z}(τ, 0) Δτ ] G(z)
         = F(z) G(z).

Example. Suppose g(t, σ(τ)) ≡ 1. In this instance, we see that for any transformable function f(t), the transform of h(t) = ∫_0^t f(τ) Δτ is given by

L{h} = L{f ∗ 1} = F(z) L{1} = F(z)/z,

another result obtained by direct calculation in Bohner and Peterson.
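On an isolated time scale, the antiderivative h(t) = ∫_0^t f(τ) Δτ is just a graininess-weighted cumulative sum, which makes the example easy to illustrate numerically (my addition; the time scale and f are hypothetical):

```python
import numpy as np

def delta_integral(f, ts):
    """h(t_k) = Delta-integral of f from 0 to t_k on an isolated time scale:
    a cumulative sum of f(t_i) weighted by the graininess mu(t_i)."""
    mu = np.diff(ts)
    vals = np.array([f(t) for t in ts[:-1]])
    return np.concatenate([[0.0], np.cumsum(vals * mu)])   # h(t_0) = 0

# On T = Z with f(t) = t: h(t) = sum_{tau=0}^{t-1} tau = t(t-1)/2 = h_2(t, 0).
ts = np.arange(0, 6)
print(delta_integral(lambda t: t, ts))    # [0, 0, 1, 3, 6, 10]
```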

The convolution product is both commutative and associative. Indeed, the products f ∗ g and g ∗ f have the same transform, as do the products f ∗ (g ∗ h) and (f ∗ g) ∗ h, and since the inverse is unique, the functions defined by these products must agree almost everywhere.

Identity element? At first glance, one may think that the identity is vested in the Hilger delta. Unfortunately, this is not the case. It can easily be shown that any identity for the convolution must, by the convolution theorem, have transform 1. But the Hilger delta does not have transform 1, since its transform is just the exponential.

The Dirac Delta Functional

Let f, g : T → R be given functions, with f(x) having unit area. To define the time scale delta functional, we construct the following functional. Let C_c^∞(T) denote the C^∞(T) functions with compact support. For g^σ ∈ C_c^∞(T) and for all ɛ > 0, define the functional F : C_c^∞(T, R) × T → R by

F(g^σ, a) = ∫_0^∞ δ_a^H(x) g^ρ(σ(x)) Δx, if a is right scattered;
F(g^σ, a) = lim_{ɛ→0} ∫_0^∞ (1/ɛ) f(x/ɛ) g(σ(x)) Δx, if a is right dense.

The (time scale or generalized) Dirac delta functional is then given by ⟨δ_a(x), g^σ⟩ = F(g^σ, a).

Action of the delta functional on C_c^∞(T)?

In a nutshell...

⟨g^σ, δ_a⟩ = g(a),

independently of the time scale involved. Also, if g(t) = e_{⊖z}(t, 0), then ⟨δ_a, g^σ⟩ = e_{⊖z}(a, 0), so that for a = 0 the Dirac delta functional δ_0(t) has transform 1, thereby giving us an identity element for the convolution.

However, our definition of the delay operator holds only for functions; it is necessary to extend this definition to the delta functional. To maintain consistency with the delta functional's action on g(t) = e^σ_{⊖z}(t, 0), it follows that for any t ∈ T, the shift of δ_a(τ) is given by

δ_a(t, σ(τ)) = δ_t(σ(τ)).

Just as before, the algebraic properties hold in this setting, e.g. (δ_0 ∗ g)(t_0) = (g ∗ δ_0)(t_0).

When we perform the convolution and encounter ⟨g, δ^σ⟩, we must give meaning to this symbol; we do so by defining δ^σ(t) to be the Kronecker delta when t is right scattered and the Dirac delta when t is right dense. While this ad hoc approach does not address convolution with an arbitrary (shifted) distribution on the right, it will suffice (at least for now), since our eye is on solving generalizations of canonical partial dynamic equations which involve the Dirac delta distribution.

Uniqueness of the inverse of the Dirac delta? Yes; we just need a new commutative diagram betwixt the dual spaces.

Definition. The regressive linear state equation

x^Δ(t) = A(t)x(t) + B(t)u(t), x(t_0) = x_0,
y(t) = C(t)x(t) + D(t)u(t),                      (2)

is called controllable on [t_0, t_f] if, given any initial state x_0 and any desired final state x_f, there exists an rd-continuous input signal u(t) such that the corresponding solution of the system satisfies x(t_f) = x_f.

Gramian Condition

Theorem. The regressive linear state equation (2) is controllable on [t_0, t_f] if and only if the n × n controllability Gramian matrix

W(t_0, t_f) = ∫_{t_0}^{t_f} Φ_A(t_0, σ(t)) B(t) B^T(t) Φ_A^T(t_0, σ(t)) Δt

is invertible.

Here Φ_A(t, t_0) is the transition matrix for the (time varying!) problem X^Δ(t) = A(t) X(t), X(t_0) = I.
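A rough numerical sketch of the Gramian test (my addition, with a hypothetical isolated time scale and hypothetical system matrices): on an isolated time scale the transition matrix obeys Φ_A(σ(t), t_0) = (I + µ(t)A(t)) Φ_A(t, t_0), so Φ_A(t_0, σ(t)) is its inverse (regressivity makes I + µA invertible), and the Gramian Δ-integral becomes a graininess-weighted sum.

```python
import numpy as np

def controllability_gramian(A, B, ts):
    """Controllability Gramian W(t0, tf) on an isolated time scale.

    A, B : callables t -> matrix, the (possibly time varying) coefficients
    ts   : increasing array of time-scale points t0 = ts[0], ..., tf = ts[-1]
    """
    n = A(ts[0]).shape[0]
    Phi = np.eye(n)                                # Phi_A(t0, t0) = I
    W = np.zeros((n, n))
    for t, t_next in zip(ts[:-1], ts[1:]):
        mu = t_next - t
        Phi = (np.eye(n) + mu * A(t)) @ Phi        # Phi_A(sigma(t), t0)
        Phi_back = np.linalg.inv(Phi)              # Phi_A(t0, sigma(t)), needs regressivity
        W += Phi_back @ B(t) @ B(t).T @ Phi_back.T * mu
    return W

# Hypothetical LTI example on a time scale with alternating graininess.
A = lambda t: np.array([[0.0, 1.0], [-2.0, -3.0]])
B = lambda t: np.array([[0.0], [1.0]])
ts = np.cumsum(np.r_[0.0, np.tile([0.1, 0.4], 10)])
W = controllability_gramian(A, B, ts)
print(np.linalg.matrix_rank(W))                    # 2  => controllable on [t0, tf]
```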

Rank Theorem for LTI Systems

Theorem. The regressive linear time invariant system

x^Δ(t) = Ax(t) + Bu(t), x(t_0) = x_0,
y(t) = Cx(t) + Du(t),

is controllable on [t_0, t_f] if and only if the n × nm controllability matrix [B  AB  ...  A^{n−1}B] satisfies

rank [B  AB  ...  A^{n−1}B] = n.
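A minimal sketch of this rank test (my addition; the matrices are hypothetical). It is identical in form to the classical Kalman test, since the criterion does not depend on the graininess:

```python
import numpy as np

def controllability_matrix(A, B):
    """Kalman controllability matrix [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    blocks, AkB = [], B
    for _ in range(n):
        blocks.append(AkB)
        AkB = A @ AkB
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # hypothetical system matrices
B = np.array([[0.0], [1.0]])
print(np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0])  # True
```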

Definition. The linear state equation

x^Δ(t) = A(t)x(t), x(t_0) = x_0,
y(t) = C(t)x(t),

is called observable on [t_0, t_f] if any initial state x(t_0) = x_0 is uniquely determined by the corresponding response y(t) for t ∈ [t_0, t_f).

Gramian Condition

Theorem. The regressive linear system given above is observable on [t_0, t_f] if and only if the n × n observability Gramian matrix

M(t_0, t_f) = ∫_{t_0}^{t_f} Φ_A^T(t, t_0) C^T(t) C(t) Φ_A(t, t_0) Δt

is invertible.

Rank Theorem for LTI Systems

Theorem. The autonomous linear regressive system

x^Δ(t) = Ax(t), x(t_0) = x_0,
y(t) = Cx(t),

is observable on [t_0, t_f] if and only if the nm × n observability matrix, formed by stacking the blocks C, CA, ..., CA^{n−1} vertically, satisfies

rank [C; CA; ...; CA^{n−1}] = n.
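And the companion sketch for the observability rank test (again my addition, with a hypothetical C and the same hypothetical A as above), stacking the blocks vertically:

```python
import numpy as np

def observability_matrix(A, C):
    """Kalman observability matrix [C; CA; ...; CA^(n-1)], stacked vertically."""
    n = A.shape[0]
    blocks, CAk = [], C
    for _ in range(n):
        blocks.append(CAk)
        CAk = CAk @ A
    return np.vstack(blocks)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # same hypothetical A as above
C = np.array([[1.0, 0.0]])
print(np.linalg.matrix_rank(observability_matrix(A, C)) == A.shape[0])  # True
```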