
My general research area is the study of differential and difference equations. Currently I am working in an emerging field in dynamical systems. I would describe my work as a cross between the theoretical and the applied. The goal of my research is to unify and extend continuous and discrete analysis into a more general theory. Specifically, my work has great potential for the study of hybrid systems sampled with continuous, discrete, or irregular measurements. Computationally, such hybrid systems have advantages in studying a number of real-world applications. In particular, I am interested in applications in control engineering, game theory, forecasting, and mathematical finance.

1. Overview

My main research focus is the study of optimal control and estimation in continuous and discrete time. More specifically, I study such processes via dynamic equations on time scales. A time scale, denoted by T, is a nonempty closed subset of the reals. In order to understand the structure of a time scale, we define the following operators on T. We define the forward jump operator σ : T → T by σ(t) := inf{s ∈ T : s > t} and the graininess function µ : T → [0, ∞) by µ(t) := σ(t) − t. Here σ represents the shift to the next available point in T, while µ represents the distance between two consecutive points in T. For any function f : T → R, we often define the function f^σ : T → R by f^σ := f ∘ σ. Standard calculus operations can be defined on a time scale, namely Δ-differentiation and Δ-integration. The table below offers some examples of time scales and standard operations on them. A more detailed introduction can be found in [2, 3].
T      µ(t)       σ(t)    f^Δ(t)                                  ∫_a^b f(t) Δt
R      0          t       f'(t)                                   ∫_a^b f(τ) dτ
Z      1          t + 1   Δf(t) = f(t + 1) − f(t)                 Σ_{τ=a}^{b−1} f(τ)
hZ     h          t + h   (f(t + h) − f(t))/h                     Σ_{k=a/h}^{b/h−1} h f(kh)
q^Z    (q − 1)t   qt      D_q f(t) := (f(qt) − f(t))/((q − 1)t)   Σ_{τ∈[a,b)∩q^Z} (q − 1)τ f(τ)

Control theory lends itself well to such unification, as the structure and behavior of discrete control processes closely mirror their continuous counterparts. Consequently, the study of control systems on time scales can be thought of as a generalized sampling technique that allows us to evaluate processes with continuous, discrete, or possibly uneven measurements. Due to the infancy of this field, there are still a number of open problems. I will describe my contributions to this field as well as my future plans in detail below.
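These operators are easy to experiment with numerically. Below is a minimal sketch (my own illustrative code, not from the cited literature) of σ and µ evaluated on finite truncations of Z and q^Z:

```python
def sigma(T, t):
    """Forward jump: the next point of the time scale T after t."""
    later = [s for s in T if s > t]
    return min(later) if later else t  # the maximum of T jumps to itself

def mu(T, t):
    """Graininess: mu(t) = sigma(t) - t."""
    return sigma(T, t) - t

Z = list(range(10))              # truncation of T = Z
qZ = [2 ** k for k in range(8)]  # truncation of T = q^Z with q = 2

print(sigma(Z, 3), mu(Z, 3))    # 4 1   (mu = 1 on Z)
print(sigma(qZ, 4), mu(qZ, 4))  # 8 4   (mu(t) = (q - 1)t = t for q = 2)
```

The printed values match the corresponding rows of the table above.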

2. Past Research

2.1. Controllability and Observability. Working under Dr. Martin Bohner, we first extended well-known results on controllability and observability for linear time-invariant systems to time scales in [6]. R. E. Kalman introduced these concepts in the early 1960s for the continuous and discrete time cases (see [13, 16]). Since then, they have become the backbone of modern control theory, with many applications in engineering and mathematics. Here we studied the dynamic system

x^Δ = Ax^σ + Bu,    y = Cx,

where x ∈ R^n is the state, u ∈ R^m is the input (control), and y ∈ R^r is the output. The given dynamic system is said to be controllable if and only if the matrix

Γ_C[A, B] := [B  AB  A²B  ⋯  A^{n−1}B]

has full rank n, while it is said to be observable if and only if

Γ_O[A, C] := [C; CA; ⋯; CA^{n−1}]   (blocks stacked as rows)

has rank n. These are the same controllability and observability matrices as in the discrete and continuous cases. These concepts are also mathematical duals of each other in the discrete and continuous cases, and this relationship is preserved in their unification on time scales. There have been a number of other attempts to examine these concepts for a similar system (see [1, 8, 9]). As it turns out, there seems to be a trade-off between these two systems: it may be more beneficial to use one system over the other depending on whether we need the controllability or the observability property. This is due to the way the matrix exponential is defined on time scales.

2.2. The Linear Quadratic Regulator (LQR) and Tracker (LQT). Much of my work has centered around the formulation and solution of a new kind of optimal control problem. The goal of such problems is to find an optimal control that minimizes a quadratic cost functional associated with some dynamic system. Kalman and Koepcke [17] first used this method in the discrete case. Shortly thereafter, Kalman extended these results to continuous time (see [11, 14]).
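The rank conditions of §2.1 are straightforward to check numerically. Here is a minimal NumPy sketch; the matrices A, B, C are a toy example of my own, not taken from [6]:

```python
import numpy as np

def gamma_C(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def gamma_O(A, C):
    """Observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # toy double integrator
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

print(np.linalg.matrix_rank(gamma_C(A, B)))  # 2: full rank, controllable
print(np.linalg.matrix_rank(gamma_O(A, C)))  # 2: full rank, observable
```

The duality mentioned above is visible here as well: Γ_O[A, C] equals the transpose of Γ_C[Aᵀ, Cᵀ].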
Since then, what is now called the linear quadratic regulator (LQR) plays a central role in control engineering. We have generalized this method (see [7]) to consider

x^Δ(t) = Ax(t) + Bu(t),    x(t_0) = x_0,

associated with the quadratic cost functional

J = (1/2) xᵀ(t_f) S(t_f) x(t_f) + (1/2) ∫_{t_0}^{t_f} (xᵀQx + uᵀRu)(τ) Δτ,

where S(t_f) ≥ 0, Q ≥ 0, and R > 0. Depending on the final state, the optimal control can take two different forms. If the final state is fixed, the (open-loop) optimal control mirrors the controllability criterion we obtained in [6]. On the other hand, if the final state is free, then the (closed-loop) optimal control can be written as u*(t) = −K(t)x(t), where K represents a generalized state feedback gain. Next we extended our results on the LQR (see [5]) to linear quadratic tracking on time scales. Here we determined an affine optimal control that forces our dynamic system to track a desired reference trajectory over a fixed time. In this setting, we found an optimal control that contains a feedback term as well as a feedforward term that anticipates the desired final trajectory. In this paper we also examined the numerical advantages that time scales offer to control theory. When the dynamics are stationary and the isolated time scale is known in advance, our tracking algorithms incorporate the gaps between sampling times within the equations of our states, inputs, and gains. This also offers an advantage when we need to schedule our gains, a useful tool in radar analysis. Other potential applications of the LQT on time scales include disturbance rejection, heat dynamics, game theory, and economics.

2.3. The Kalman Filter. We have also generalized the Kalman filter to dynamic equations on time scales. This optimal estimation method was first tackled by Kalman [12] in the discrete case and later, with Bucy [15], in the continuous case in the early 1960s. The impact of this filter cannot be overstated. Following Kalman's visit to the NASA Ames Research Center, his filter was applied to the problem of navigating to the moon in the Apollo program.
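On T = Z, the filter discussed in this subsection reduces to the familiar discrete predict/correct recursion. The sketch below is my own illustration of that special case; the matrices are toy data, not taken from [4] or [12]:

```python
import numpy as np

def kalman_step(xhat, P, u, y, A, B, C, W, V):
    """One predict/correct step; W, V are process/measurement noise covariances."""
    xpred = A @ xhat + B @ u                   # predict the state
    Ppred = A @ P @ A.T + W                    # predict the error covariance
    S = C @ Ppred @ C.T + V                    # innovation covariance
    L = Ppred @ C.T @ np.linalg.inv(S)         # Kalman gain
    xnew = xpred + L @ (y - C @ xpred)         # correct with the measurement
    Pnew = (np.eye(len(xhat)) - L @ C) @ Ppred
    return xnew, Pnew

# Toy scalar example: estimating a constant state from repeated measurements.
A = np.array([[1.0]]); B = np.array([[0.0]]); C = np.array([[1.0]])
W = np.array([[1e-4]]); V = np.array([[1e-2]])
xhat, P = np.array([0.0]), np.array([[1.0]])
for _ in range(50):
    xhat, P = kalman_step(xhat, P, np.array([0.0]), np.array([5.0]), A, B, C, W, V)
print(xhat)  # close to the true state, 5
```

The estimate converges to the measured value while the error covariance P shrinks toward its steady-state Riccati solution.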
In [4], we considered the linear stochastic dynamic system

x^Δ(t) = Ax(t) + Bu(t) + Gw(t),    x(t_0) = x_0,
y(t) = Cx(t) + v(t),

where the state and output are influenced by unknown disturbances w and v. In this setting, we want an unbiased estimate, x̂, that ensures the smallest error covariance possible. We made the assumptions that our measurements arrive in real time and that the initial estimate is equal to the initial state mean (i.e., x̂(t_0) = x(t_0)), so that the observer is unbiased and mirrors the results for the Kalman–Bucy filter. We proved that this observer is given by

x̂^Δ(t) = Ax̂(t) + Bu(t) + L(t)[y(t) − Cx̂(t)],    x̂(t_0) = x(t_0),

where L is a Kalman gain. Finally, we have argued that the LQR and the Kalman filter are mathematical duals of each other, as the Riccati equations and gains that describe each optimal problem look very similar. This relationship exists in both the discrete and continuous cases and is preserved in their unification on time scales.

3. Current Research and Future Plans

Looking forward, there are a number of open problems based on my work thus far. Below I list some of my current research projects and future plans.

3.1. Extensions to the LQR. Currently, I am working on extending my results on the LQR to other applied problems. One such extension is optimal pursuit-evasion games. This is a two-player game first considered by Ho, Bryson, and Baron in the continuous case (see [10]). The pursuing state seeks to intercept the evading state at time t_f, while the latter seeks to do the opposite. This is essentially a minimax problem. I am also working on extending these results to solve minimum-time problems, regulators with cross-coupling terms and a prescribed degree of stability, and bang-bang control.

3.2. Bilinear Systems. To the best of my knowledge, control theory for nonlinear dynamic systems on time scales has not been investigated in great detail. Currently I am studying a special case of nonlinear dynamic systems given by

x^Δ = Ax + Dxu + bu,    y = cx,

where x ∈ R^n and u ∈ R. This system has been well studied in the discrete and continuous cases. Some potential applications for these systems on time scales include nuclear reactor systems, suspension systems, fermentation processes, and heat exchange systems. I am also interested in extending my results on the LQR and Kalman filter to bilinear systems on time scales, as these systems should give us insight into solving more general nonlinear dynamic systems.

3.3. Other Interests. While my work thus far has been solely in the time domain, I am also interested in studying similar problems in the frequency domain.
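As a quick illustration of the bilinear system of §3.2, it can be stepped forward on T = Z, where x^Δ(t) = x(t + 1) − x(t). The matrices below are my own toy data, not drawn from any of the applications mentioned:

```python
import numpy as np

def simulate_bilinear(A, D, b, c, x0, u_seq):
    """Simulate x^Delta = Ax + Dxu + bu, y = cx on T = Z."""
    x, ys = x0.copy(), []
    for u in u_seq:
        ys.append(float(c @ x))
        x = x + (A @ x + u * (D @ x) + u * b)  # x(t+1) = x(t) + x^Delta(t)
    return ys

A = np.array([[-0.1, 0.0], [0.0, -0.2]])  # toy stable drift
D = np.array([[0.0, 0.05], [0.05, 0.0]])  # bilinear input-state coupling
b = np.array([1.0, 0.0])
c = np.array([0.0, 1.0])

y = simulate_bilinear(A, D, b, c, np.zeros(2), [1.0] * 20)
print(y[0], y[-1])  # output starts at 0, then responds to the constant input
```

Note the term u·(Dx): the input multiplies the state, which is exactly what distinguishes a bilinear system from a linear one.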
There are many engineering problems that involve both discrete and continuous signals, or signals that are sampled at irregular points in time. Therefore, signal processing on time scales has potential for many applications, such as radar systems and electronic warfare. Thus far there has been little, if any, study of signal processing on time scales. My research interests are compatible with those of researchers who work in dynamical systems, time series, dynamic programming, and adaptive control, among others. Also, from a pedagogical viewpoint, the theory of time scales has a lot to offer. In unifying concepts from the discrete and continuous

cases, students will gain a deeper understanding and appreciation of both. With any research I work on, I would like to collaborate with researchers in other departments on campus.

References

[1] Z. Bartosiewicz and E. Pawłuszewicz. Realizations of linear control systems on time scales. Control Cybernet., 35(4):769–786, 2006.
[2] M. Bohner and A. Peterson. Dynamic Equations on Time Scales. Birkhäuser Boston Inc., Boston, MA, 2001.
[3] M. Bohner and A. Peterson, editors. Advances in Dynamic Equations on Time Scales. Birkhäuser Boston Inc., Boston, MA, 2003.
[4] M. Bohner and N. Wintz. The Kalman filter for linear systems on time scales. To be submitted.
[5] M. Bohner and N. Wintz. The linear quadratic tracker on time scales. To be submitted.
[6] M. Bohner and N. Wintz. Controllability and observability of time-invariant linear dynamic systems. Math. Bohem., 2010. To appear.
[7] M. Bohner and N. Wintz. The linear quadratic regulator on time scales. Int. J. Difference Equ., 6(2), 2011. To appear.
[8] J. Davis, I. Gravagne, B. Jackson, and R. Marks II. Controllability, observability, realizability, and stability of dynamic linear systems. Electron. J. Diff. Eqns., 2009(37):1–32, 2009.
[9] L. V. Fausett and K. N. Murty. Controllability, observability and realizability criteria on time scale dynamical systems. Nonlinear Stud., 11(4):627–638, 2004.
[10] Y. Ho, A. Bryson, and S. Baron. Differential games and optimal pursuit-evasion strategies. IEEE Trans. Automat. Control, 10(4):385–389, 1965.
[11] R. E. Kalman. Contributions to the theory of optimal control. Bol. Soc. Mat. Mexicana (2), 5:102–119, 1960.
[12] R. E. Kalman. A new approach to linear filtering and prediction problems. Trans. ASME Ser. D. J. Basic Engineering, 82:35–45, 1960.
[13] R. E. Kalman. On the general theory of control systems. Proc. First IFAC Congress Automatic Control, 1:481–492, 1960.
[14] R. E. Kalman. When is a linear control system optimal? Trans. ASME Ser. D. J. Basic Engineering, 86:81–90, 1964.
[15] R. E. Kalman and R. S. Bucy. New results in linear filtering and prediction theory. Trans. ASME Ser. D. J. Basic Engineering, 83:95–108, 1961.
[16] R. E. Kalman, Y. C. Ho, and K. S. Narendra. Controllability of linear dynamical systems. Contributions to Differential Equations, 1:189–213, 1963.
[17] R. E. Kalman and R. W. Koepcke. Optimal synthesis of linear sampling control systems using generalized performance indexes. Trans. ASME Ser. D. J. Basic Engineering, 80:1820–1826, 1958.