Deterministic Kalman Filtering on Semi-infinite Interval

L. Faybusovich and T. Mouktonglang

Mathematics Department, University of Notre Dame, USA 46556. This paper is based upon work supported in part by National Science Foundation Grant No. DMS7-1289.
Mathematics Department, Faculty of Science, Chiang Mai University, Thailand 522. This paper is based upon work supported in part by the Center of Excellence in Mathematics, Sri Ayutthaya Rd., Bangkok 14, Thailand.

Abstract. We relate a deterministic Kalman filter on the semi-infinite interval to a linear-quadratic tracking control model with an unfixed initial condition.

1 Introduction

In [4], E. Sontag considered the deterministic analogue of the Kalman filtering problem on a finite interval. The deterministic model admits a natural extension to the semi-infinite interval. This extension is of special interest because, for the standard linear-quadratic stochastic control problem, passing to the semi-infinite interval leads to complications with the standard quadratic objective function (see e.g. [1]). Following [4], the model we consider has the form

  J(x, u, x_0) = ∫_0^∞ [u^T R u + (Cx - ȳ)^T Q (Cx - ȳ)] dt,   (1)

  ẋ = Ax + Bu,   (2)

  x(0) = x_0.   (3)

Here we assume that the pair (x, u) ∈ a(x_0) + Z, where Z is the vector subspace of the Hilbert space L_2^n[0, +∞) × L_2^m[0, +∞) (with L_2^n[0, +∞) the Hilbert space of R^n-valued square-integrable functions) defined as follows:

  Z = {(x, u) ∈ L_2^n[0, +∞) × L_2^m[0, +∞) : x is absolutely continuous, ẋ ∈ L_2^n[0, +∞), ẋ = Ax + Bu, x(0) = 0}.

Here A is an n × n matrix; B is an n × m matrix; R = R^T is m × m and positive definite; Q = Q^T is r × r and positive definite; C is an r × n matrix; ȳ ∈ L_2^r[0, +∞). Notice that in (1)-(3) x_0 is not fixed, and we minimize over all triples (x, u, x_0) ∈ L_2^n[0, +∞) × L_2^m[0, +∞) × R^n satisfying our assumptions. Notice also that we interpret (1)-(3) as an estimation problem of the form

  ẋ = Ax + Bu,   ȳ = Cx + v,

where we try to estimate x from the observation ȳ by minimizing the perturbations u, v and choosing an appropriate initial condition x_0.

2 Solution of the deterministic problem

Consider the algebraic Riccati equation

  KA + A^T K + KLK - C^T QC = 0,   (4)

where L = BR^{-1}B^T. Assuming that the pair (A, B) is stabilizable and the pair (C, A) is detectable, there exists a negative definite symmetric solution K_st of (4) such that the matrix A + LK_st is stable (see e.g. Theorem 12.3 in [5]). Using the results of [2], we can describe the optimal solution of (1)-(3) with fixed x_0 as follows. There exists a unique solution ρ* ∈ L_2^n[0, ∞) of the differential equation

  ρ̇* = -(A + LK_st)^T ρ* - C^T Qȳ.   (5)

Moreover, ρ* can be described explicitly as

  ρ*(t) = ∫_0^∞ exp[(A + LK_st)^T τ] C^T Q ȳ(t + τ) dτ.   (6)

The optimal solution (x*, u*) of (1)-(3) has the form

  ẋ* = (A + LK_st)x* + Lρ*,   x*(0) = x_0,   (7)

  u* = R^{-1}B^T(K_st x* + ρ*).   (8)

For details see [2]. Notice that ρ* does not depend on x_0. To solve the original problem (1)-(3) we need to express the minimal value of the functional (1) in terms of x_0.

Theorem 1. Let (x*, u*) be the optimal solution of (1)-(3) with fixed x_0, given by (5)-(8). Then

  J(x*, u*, x_0) = -x_0^T K_st x_0 - 2ρ*(0)^T x_0 + ∫_0^∞ [ȳ^T Qȳ - ρ*^T Lρ*] dt.   (9)
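As a numerical sketch of how the stabilizing solution K_st of (4) can be computed in practice (the matrices A, B, C, Q, R below are made-up illustrative data, and the use of SciPy's CARE solver through the substitution K_st = -S is our own device, not part of the paper): substituting K = -S into (4) yields the standard continuous-time algebraic Riccati equation A^T S + SA - SLS + C^T QC = 0, whose stabilizing solution S > 0 SciPy computes directly.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative data (not from the paper): n = 2 states, m = 1 input, r = 1 output.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
R = np.array([[1.0]])   # R = R^T > 0 (m x m)
Q = np.array([[1.0]])   # Q = Q^T > 0 (r x r)

L = B @ np.linalg.inv(R) @ B.T   # L = B R^{-1} B^T

# Substituting K = -S in (4) gives the standard CARE
#   A^T S + S A - S L S + C^T Q C = 0,
# which SciPy solves for the stabilizing S; hence K_st = -S is negative definite
# and A + L K_st = A - L S is stable.
S = solve_continuous_are(A, B, C.T @ Q @ C, R)
K_st = -S

# Check that K_st satisfies (4): K A + A^T K + K L K - C^T Q C = 0.
residual = K_st @ A + A.T @ K_st + K_st @ L @ K_st - C.T @ Q @ C
assert np.allclose(residual, 0.0, atol=1e-8)

# A + L K_st must have all eigenvalues in the open left half-plane.
assert np.linalg.eigvals(A + L @ K_st).real.max() < 0
print("K_st =", K_st)
```

The same K_st then determines ρ* through (5)-(6) and the optimal pair (x*, u*) through (7)-(8).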

Remark. Notice that J(x*, u*, x_0) is a strictly convex function of x_0 (since K_st is negative definite), and hence the minimum of J as a function of x_0 is attained at

  x_0^opt = -K_st^{-1} ρ*(0).   (10)

Hence (5)-(8) give a complete solution of the original problem (1)-(3).

Proof. Let (y, w) ∈ a(x_0) + Z be a feasible solution of (1)-(3), where x_0 is fixed. Consider

  Δ(y, w) = [w - R^{-1}B^T(K_st y + ρ*)]^T R [w - R^{-1}B^T(K_st y + ρ*)],

where we suppress the explicit dependence on time. Notice that Δ(x*, u*) ≡ 0. Furthermore, Δ(y, w) = Δ_1 + Δ_2 + Δ_3, where

  Δ_1 = w^T Rw,   Δ_2 = -2(K_st y + ρ*)^T Bw,   Δ_3 = (K_st y + ρ*)^T L (K_st y + ρ*).

Now Bw = ẏ - Ay, and consequently

  Δ_2 = -2(y^T K_st + ρ*^T)(ẏ - Ay) = y^T(K_st A + A^T K_st)y - 2y^T K_st ẏ - 2ρ*^T ẏ + 2ρ*^T Ay,

  Δ_3 = y^T K_st L K_st y + ρ*^T Lρ* + 2ρ*^T L K_st y.

Consequently,

  Δ(y, w) = w^T Rw + ρ*^T Lρ* + y^T(K_st L K_st + K_st A + A^T K_st)y - d/dt(y^T K_st y) - 2 d/dt(ρ*^T y) + 2ρ̇*^T y + 2ρ*^T L K_st y + 2ρ*^T Ay.

Using (4) and (5), we obtain

  Δ(y, w) = w^T Rw + ρ*^T Lρ* + y^T C^T QCy - 2 d/dt(ρ*^T y) - d/dt(y^T K_st y) - 2(C^T Qȳ)^T y
          = w^T Rw + ρ*^T Lρ* - 2 d/dt(ρ*^T y) - d/dt(y^T K_st y) + (ȳ - Cy)^T Q(ȳ - Cy) - ȳ^T Qȳ.

Hence, taking into account that ρ*(t) → 0 and y(t) → 0 as t → +∞ (see [2] for details), we obtain

  ∫_0^∞ Δ(y, w) dt = ∫_0^∞ [w^T Rw + (ȳ - Cy)^T Q(ȳ - Cy)] dt + ∫_0^∞ [ρ*^T Lρ* - ȳ^T Qȳ] dt + 2ρ*(0)^T x_0 + x_0^T K_st x_0
                   = J(y, w, x_0) + 2ρ*(0)^T x_0 + x_0^T K_st x_0 + c,

where c = ∫_0^∞ [ρ*^T Lρ* - ȳ^T Qȳ] dt. Notice that Δ(y, w) ≥ 0 and Δ(x*, u*) ≡ 0. This shows that (x*, u*) is indeed an optimal solution of (1)-(3) (with fixed x_0), and proves (9).

3 Steady state deterministic Kalman filtering

In light of (10), it is natural to consider the process

  z(t) = -K_st^{-1} ρ*(t),   t ∈ [0, +∞),   (11)

as a natural estimate of the optimal solution of problem (1)-(3). Let us find the differential equation for z.

Proposition 1. The process z satisfies

  ż = Az + K_st^{-1} C^T Q(ȳ - Cz).   (12)

Remark. Notice that K_st^{-1} is a solution of the algebraic equation

  L - PC^T QCP + AP + PA^T = 0.   (13)

In other words, the differential equation (12) is a precise deterministic analogue of the stochastic differential equation describing the optimal (steady-state) estimate in the Kalman filtering problem. See e.g. [1].

Proof. Using (5) and (11), we obtain

  ż = K_st^{-1}(A + LK_st)^T ρ* + K_st^{-1} C^T Qȳ = -(K_st^{-1} A^T + L)(K_st z) + K_st^{-1} C^T Qȳ.

Since K_st is a solution of (4), we have

  -K_st^{-1} A^T K_st - L K_st = A - K_st^{-1} C^T QC.

Hence

  ż = Az - K_st^{-1} C^T QCz + K_st^{-1} C^T Qȳ,

and we obtain (12).

Remark. Notice that, due to (11), Δ(z, 0) ≡ 0, and consequently (z, 0) would be an optimal solution of (1)-(3) if it were feasible for this problem.
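The Remark above can be sanity-checked numerically (again a sketch with made-up illustrative matrices; computing K_st through SciPy's CARE solver via K_st = -S is our own shortcut, not part of the paper): one verifies that P = K_st^{-1} indeed solves (13) and reads off the output-injection gain appearing in (12).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative data (not from the paper).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
Q = np.array([[1.0]])
L = B @ np.linalg.inv(R) @ B.T

# Negative definite solution of (4): K_st = -S, where S solves the standard
# CARE  A^T S + S A - S L S + C^T Q C = 0.
K_st = -solve_continuous_are(A, B, C.T @ Q @ C, R)

# Remark after Proposition 1:  P = K_st^{-1} solves the filter-type Riccati
# equation (13):  L - P C^T Q C P + A P + P A^T = 0.
P = np.linalg.inv(K_st)
residual = L - P @ C.T @ Q @ C @ P + A @ P + P @ A.T
assert np.allclose(residual, 0.0, atol=1e-8)

# The estimate z then obeys (12):  z' = A z + P C^T Q (ybar - C z),
# with output-injection gain P C^T Q:
gain = P @ C.T @ Q
print("filter gain P C^T Q =", gain.ravel())
```

Note that, with the paper's sign conventions, K_st and hence P are negative definite; the point of the Remark is the formal coincidence of (12)-(13) with the steady-state Kalman filter equations, not a claim about definiteness of P.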

4 Concluding remarks

The differential equation (12) allows one to estimate z recursively from the observation ȳ, provided we know z(0) = x_0^opt. Notice that, according to (6),

  z(0) = -K_st^{-1} ∫_0^∞ exp[(A + LK_st)^T τ] C^T Q ȳ(τ) dτ

requires knowledge of the whole of ȳ. However, due to the integral nature of this relationship, the infinite integral can be very well approximated by an integral over a finite interval [0, T] (since the matrix (A + LK_st)^T is stable), and hence the recursive nature of (12) is recovered for t ≥ T for some finite T. Notice, further, that the similar problem with discrete time can be solved using the formalism developed in [3].

Acknowledgments. The research of L. Faybusovich was partially supported by NSF grant DMS7-1289. The research of T. Mouktonglang was partially supported by the Center of Excellence in Mathematics, Sri Ayutthaya Rd., Bangkok 14, Thailand.

References

[1] M.H.A. Davis, Linear Estimation and Stochastic Control, Chapman and Hall, London, 1977.

[2] L. Faybusovich and T. Mouktonglang, Linear-quadratic control problem with a linear term on semiinfinite interval: theory and applications, Technical Report, University of Notre Dame, December 2003.

[3] L. Faybusovich, T. Mouktonglang and T. Tsuchiya, Implementation of infinite-dimensional interior-point method for solving multi-criteria linear-quadratic control problem, Optim. Methods Softw. 21 (2006), no. 2, 315-341.

[4] E.D. Sontag, Mathematical Control Theory, Springer, 1990.

[5] W.M. Wonham, Linear Multivariable Control: a Geometric Approach, Springer, 1974.