Control, Stabilization and Numerics for Partial Differential Equations


Paris-Sud, Orsay, December 06. Control, Stabilization and Numerics for Partial Differential Equations. Enrique Zuazua, Universidad Autónoma de Madrid, 28049 Madrid, Spain. enrique.zuazua@uam.es, http://www.uam.es/enrique.zuazua

Plan: 1.- Introduction to the control of finite-dimensional systems: The rank condition. 2.- The wave equation: 2.1. Control and observation of waves: an introduction 2.2 Pathological numerical schemes for the 1-d wave equation 2.3 Nonharmonic Fourier series and remedies to the divergence of controls.

2.4 The two-grid algorithm in 1-d. 2.5 Links with the dynamical properties of bicharacteristic rays. 2.6 The two-grid algorithm in 2-d. 2.7 Links with waves in heterogeneous media. 2.8 Stabilization. 2.9 Semilinear wave equations. 2.10 Schrödinger and plate equations.

3.- The heat equation: 3.1 Preliminaries on the control of the heat equation. 3.2 Hyperbolic control implies parabolic control. 3.3 Numerical approximation in 1-d. 3.4 Multi-dimensional pathologies. 3.5 A first result in the multi-dimensional case. 3.6 The Carleman inequalities approach. 4.- Open problems and future directions of research.

Introduction to the control of finite-dimensional systems: The rank condition.

Let n, m ∈ N and T > 0. Consider the following finite-dimensional system:

x'(t) = Ax(t) + Bu(t), t ∈ (0, T); x(0) = x^0. (1)

In (1), A is a real n x n matrix, B is a real n x m matrix and x^0 a vector in R^n. The function x : [0, T] → R^n represents the state and u : [0, T] → R^m the control. Both are vector functions, of n and m components respectively, depending exclusively on time t. For a more complete introduction to these topics several books are available, in particular that by E. Trélat.

Obviously, in practice m ≤ n. The most desirable goal is, of course, controlling the system by means of a minimum number m of controls. Given an initial datum x^0 ∈ R^n and a vector function u ∈ L^2(0, T; R^m), system (1) has a unique solution x ∈ H^1(0, T; R^n) characterized by the variation of constants formula:

x(t) = e^{At} x^0 + ∫_0^t e^{A(t−s)} B u(s) ds, t ∈ [0, T]. (2)

Definition 1. System (1) is said to be exactly controllable in time T > 0 if, given any initial and final states x^0, x^1 ∈ R^n, there exists u ∈ L^2(0, T; R^m) such that the solution of (1) satisfies x(T) = x^1.
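As a sanity check on the variation-of-constants formula (2), here is a minimal scalar sketch (the matrix A shrinks to a number a, B to a number b, and the control is held constant; all numerical values are illustrative):

```python
import math

# Scalar check of the variation-of-constants formula (2) for
# x' = a x + b u with constant control u(t) = 1:
# x(t) = e^{at} x0 + (b/a)(e^{at} - 1).

a, b, x0, t = -0.7, 2.0, 1.5, 3.0
closed_form = math.exp(a * t) * x0 + (b / a) * (math.exp(a * t) - 1.0)

# Forward Euler on x' = a x + b, fine mesh
N = 200000
dt = t / N
x = x0
for _ in range(N):
    x += dt * (a * x + b)

print(abs(x - closed_form))  # small: both computations agree
```

The closed-form value is exactly formula (2) specialized to n = m = 1; the time-stepping result converges to it as the mesh is refined.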

According to this definition, the aim of the control process consists in driving the solution x of (1) from the initial state x^0 to the final one x^1 in time T by acting on the system through the control u.

Example 1. Consider the case

A = ( 1 0 ; 0 1 ), B = ( 1 ; 0 ). (3)

Then the system x' = Ax + Bu can be written as

x_1' = x_1 + u, x_2' = x_2,

or, equivalently,

x_1' = x_1 + u, x_2 = x_2^0 e^t,

where x^0 = (x_1^0, x_2^0) are the initial data. This system is not controllable since the control u does not act on the second component x_2 of the state, which is completely determined by the initial datum x_2^0.

Example 2. On the contrary, the equation of the harmonic oscillator,

x'' + x = u, (4)

is controllable. The matrices A and B are now, respectively,

A = ( 0 1 ; −1 0 ), B = ( 0 ; 1 ).

Once again, we have at our disposal only one control u for both components x and x' of the system. But, unlike in Example 1, now the control acts in the second equation, where both components are present.

CONCLUSION: Controllability depends not only on the dimensions m and n but on the way A and B interact.
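The conclusion can be checked directly on the Kalman matrices [B, AB] of the two examples. A small sketch in pure Python, restricted to the 2 x 2 single-input case (the helper names are ours):

```python
# Kalman controllability matrices for Examples 1 and 2 (n = 2, m = 1):
# [B, AB] has rank n iff the pair (A, B) is controllable (Theorem 1 below).

def mat_vec(A, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

def kalman_rank_2x2(A, B):
    """Rank of the 2x2 Kalman matrix [B, AB] for a single-input system."""
    AB = mat_vec(A, B)
    det = B[0] * AB[1] - B[1] * AB[0]
    if det != 0:
        return 2
    return 1 if any(B) or any(AB) else 0

# Example 1: A = identity, B = (1,0)^T -- u never reaches x_2
print(kalman_rank_2x2([[1, 0], [0, 1]], [1, 0]))   # rank 1: not controllable
# Example 2: harmonic oscillator, B = (0,1)^T
print(kalman_rank_2x2([[0, 1], [-1, 0]], [0, 1]))  # rank 2: controllable
```

Both examples have the same dimensions n = 2, m = 1; only the interaction of A and B differs, and the rank detects it.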

Define the set of reachable states

R(T, x^0) = { x(T) ∈ R^n : x solution of (1) with u ∈ (L^2(0, T))^m }. (5)

The exact controllability property is equivalent to the fact that R(T, x^0) = R^n for any x^0 ∈ R^n.

Observability property

Let A* be the adjoint matrix of A, i.e. the matrix with the property that ⟨Ax, y⟩ = ⟨x, A*y⟩ for all x, y ∈ R^n. Consider the following homogeneous adjoint system of (1):

−ϕ' = A*ϕ, t ∈ (0, T); ϕ(T) = ϕ_T. (6)

The following is an equivalent condition for exact controllability:

Lemma 1. An initial datum x^0 ∈ R^n of (1) is driven to zero in time T by using a control u ∈ L^2(0, T; R^m) if and only if

∫_0^T ⟨u, B*ϕ⟩ dt + ⟨x^0, ϕ(0)⟩ = 0 for all ϕ_T ∈ R^n, (7)

ϕ being the corresponding solution of (6).

Proof: Let ϕ_T be arbitrary in R^n and ϕ the corresponding solution of (6). By multiplying (1) by ϕ and (6) by x we deduce that

⟨x', ϕ⟩ = ⟨Ax, ϕ⟩ + ⟨Bu, ϕ⟩; −⟨ϕ', x⟩ = ⟨A*ϕ, x⟩.

Hence,

(d/dt) ⟨x, ϕ⟩ = ⟨Bu, ϕ⟩,

which, after integration in time, gives

⟨x(T), ϕ_T⟩ − ⟨x^0, ϕ(0)⟩ = ∫_0^T ⟨Bu, ϕ⟩ dt = ∫_0^T ⟨u, B*ϕ⟩ dt. (8)

We obtain that x(T) = 0 if and only if (7) is verified for any ϕ_T ∈ R^n.

Identity (7) is in fact an optimality condition for the critical points of the quadratic functional J : R^n → R,

J(ϕ_T) = (1/2) ∫_0^T |B*ϕ|^2 dt + ⟨x^0, ϕ(0)⟩,

where ϕ is the solution of the adjoint system (6) with initial data ϕ_T at time t = T. More precisely:

Lemma 2. Suppose that J has a minimizer ϕ̂_T ∈ R^n and let ϕ̂ be the solution of the adjoint system (6) with initial data ϕ̂_T. Then

u = B*ϕ̂ (9)

is a control of system (1) with initial data x^0.

Proof: If ϕ̂_T is a point where J achieves its minimum value, then

lim_{h→0} [ J(ϕ̂_T + hϕ_T) − J(ϕ̂_T) ] / h = 0 for all ϕ_T ∈ R^n.

This is equivalent to

∫_0^T ⟨B*ϕ̂, B*ϕ⟩ dt + ⟨x^0, ϕ(0)⟩ = 0 for all ϕ_T ∈ R^n,

which, in view of Lemma 1, implies that u = B*ϕ̂ is a control for (1).
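The variational construction of Lemma 2 can be carried out numerically. The sketch below does it for the harmonic oscillator of Example 2, where e^{At} is an explicit rotation matrix: the Gramian of the adjoint observations is assembled by quadrature, the minimizer ϕ̂_T is obtained from the resulting 2 x 2 linear system, and the control u = B*ϕ̂ is checked to drive x^0 to zero at time T. Discretization parameters and the initial datum are illustrative:

```python
import math

# Control of x'' + x = u, i.e. x' = Ax + Bu with A = [[0,1],[-1,0]],
# B = (0,1)^T.  Here e^{As} is rotation by s, so B* e^{A*s} = (sin s, cos s).

T = math.pi
N = 4000
dt = T / N

def w(s):
    """Row vector B* e^{A*s} for this particular (A, B)."""
    return (math.sin(s), math.cos(s))

# Gramian G = int_0^T (e^{A(T-t)} B)(B* e^{A*(T-t)}) dt, midpoint rule
G = [[0.0, 0.0], [0.0, 0.0]]
for k in range(N):
    ws = w((k + 0.5) * dt)
    for i in range(2):
        for j in range(2):
            G[i][j] += ws[i] * ws[j] * dt

# The vanishing gradient of J reads G phi_T = -e^{AT} x0
x0 = (1.0, -2.0)
eAT_x0 = (math.cos(T) * x0[0] + math.sin(T) * x0[1],
          -math.sin(T) * x0[0] + math.cos(T) * x0[1])
rhs = (-eAT_x0[0], -eAT_x0[1])
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
phiT = ((G[1][1] * rhs[0] - G[0][1] * rhs[1]) / det,
        (G[0][0] * rhs[1] - G[1][0] * rhs[0]) / det)

def u(t):
    """Control u = B* phi(t), with phi(t) = e^{A*(T-t)} phi_T."""
    ws = w(T - t)
    return ws[0] * phiT[0] + ws[1] * phiT[1]

def f(t, x):
    """Right-hand side of the controlled system x' = Ax + Bu."""
    return (x[1], -x[0] + u(t))

# Integrate the controlled system with RK4 and check x(T) ~ 0
x = list(x0)
for k in range(N):
    t = k * dt
    k1 = f(t, x)
    k2 = f(t + dt / 2, [x[i] + dt / 2 * k1[i] for i in range(2)])
    k3 = f(t + dt / 2, [x[i] + dt / 2 * k2[i] for i in range(2)])
    k4 = f(t + dt, [x[i] + dt * k3[i] for i in range(2)])
    x = [x[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
         for i in range(2)]

print(x)  # both components close to 0
```

Taking the inner product of the optimality condition with an arbitrary ϕ_T shows that the minimizer solves the Gramian system above, and by identity (8) the resulting control steers the state exactly to zero.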

This is a variational method to obtain the control as a minimum of the functional J. The controls obtained this way are smooth. This is not the only possible functional allowing one to build the control. By modifying it conveniently, other types of controls (for instance bang-bang ones) can be obtained.

Do the minimizers of J exist? For that, coercivity is needed:

lim_{|ϕ_T| → ∞} J(ϕ_T) = ∞.

Coercivity is in fact equivalent to the following observability property of the adjoint system.

Definition 2. System (6) is said to be observable in time T > 0 if there exists c > 0 such that

∫_0^T |B*ϕ|^2 dt ≥ c |ϕ(0)|^2, (10)

for all ϕ_T ∈ R^n, ϕ being the corresponding solution of (6).

In the sequel (10) will be called the observation or observability inequality. It guarantees that the solution of the adjoint problem at t = 0 is uniquely determined by the observed quantity B*ϕ(t) for 0 < t < T. In other words, the information contained in this term completely characterizes the solution of (6).

The following remark is very important in the context of finite-dimensional control. Unfortunately (?) it is not true for infinite-dimensional systems (PDE, distributed parameter systems).

Proposition 1. Inequality (10) is equivalent to the following unique continuation principle:

B*ϕ(t) = 0 for all t ∈ [0, T] ⟹ ϕ_T = 0. (11)

This is a uniqueness or unique continuation property.

UNIQUE CONTINUATION ⟹ OBSERVABILITY INEQUALITY ⟹ CONTROLLABILITY, WITH A CONSTRUCTIVE PROCEDURE TO BUILD CONTROLS BY MINIMIZING A COERCIVE FUNCTIONAL.

The rank condition for controllability

What about the observability property? Are there algebraic conditions on the state matrix A and the control matrix B for it to be true? The following classical result gives a complete answer to the problem of exact controllability of finite-dimensional linear systems. It shows, in particular, that the time of control is irrelevant, something which is far from being true in the context of PDE.

Is it due to J. P. La Salle (1960), W. L. Chow (1940), R. E. Kalman (1960), L. S. Pontryagin (1956)...? It is often known as the Kalman rank condition. But Kalman himself refers for it to Pontryagin and La Salle, and indicates that the nonlinear version, using Lie brackets in the differential-geometry setting, which of course is more general than the linear one, was introduced by Chow.

Theorem 1. System (1) is exactly controllable in some time T if and only if

rank [B, AB, ..., A^{n−1}B] = n. (12)

Consequently, if system (1) is controllable in some time T > 0 it is controllable in any time.

Remark 1. From now on we shall simply say that (A, B) is controllable if (12) holds. The matrix [B, AB, ..., A^{n−1}B] will be called the controllability matrix.

Proof of Theorem 1: Suppose that rank([B, AB, ..., A^{n−1}B]) < n.

Then the rows of the controllability matrix [B, AB, ..., A^{n−1}B] are linearly dependent and there exists a vector v ∈ R^n, v ≠ 0, such that

B*v = B*A*v = ... = B*(A*)^{n−1}v = 0.

From the Cayley–Hamilton theorem we deduce that there exist constants c_1, ..., c_n such that A^n = c_1 A^{n−1} + ... + c_n I, and therefore B*(A*)^n v = 0 too. In fact, it follows that B*(A*)^k v = 0 for all k ∈ N, and consequently B* e^{A*t} v = 0 for all t as well. But, from the variation of constants formula, the solution x of (1) satisfies

x(t) = e^{At} x^0 + ∫_0^t e^{A(t−s)} B u(s) ds. (13)

Therefore

⟨v, x(T)⟩ = ⟨v, e^{AT} x^0⟩ + ∫_0^T ⟨B* e^{A*(T−s)} v, u(s)⟩ ds = ⟨v, e^{AT} x^0⟩.

Hence, ⟨v, x(T)⟩ is independent of the control u. This shows that the projection of the solution x on v is independent of the value of the control u. Hence, the system is not controllable.

Remark 2. The conservation property for the quantity ⟨v, x⟩ we have just proved holds for any vector v in the kernel of [B*, B*A*, ..., B*(A*)^{n−1}]. Thus, if the rank of the matrix [B, AB, ..., A^{n−1}B] is n − k, the reachable set that x(T) runs over is an affine subspace of R^n of dimension n − k.

Suppose now that rank([B, AB, ..., A^{n−1}B]) = n. It is sufficient to show that system (6) is observable.

Assume B*ϕ ≡ 0, where ϕ(t) = e^{A*(T−t)} ϕ_T; it follows that B* e^{A*(T−t)} ϕ_T ≡ 0 for all 0 ≤ t ≤ T. By computing the derivatives of this function at t = T we obtain that

B*[A*]^k ϕ_T = 0 for all k ≥ 0.

But, since rank([B, AB, ..., A^{n−1}B]) = n, we deduce that rank([B*, B*A*, ..., B*(A*)^{n−1}]) = n, and therefore ϕ_T = 0. Hence, (11) is verified and the proof of Theorem 1 is complete.

Remark 3. The set of controllable pairs (A, B) is open and dense. This means that:

Most systems are controllable;

The controllability property is robust, i.e. it is invariant under small perturbations of A and/or B.

When controllability holds, the norm of the control is proportional to the distance between e^{AT} x^0 (the state freely attained by the system in the absence of control, i.e. with u = 0) and the objective x^1:

||u||_{L^2(0,T)} ≤ C |e^{AT} x^0 − x^1| (14)

for any initial data x^0 and final objective x^1.

Remark 4. Linear scalar equations of any order provide examples of systems of arbitrarily large dimension k that are controllable with only one control:

x^{(k)} + a_1 x^{(k−1)} + ... + a_k x = u.

Exercise: Check that the rank condition is fulfilled in this case.
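A quick way to do the exercise numerically: build the companion matrix of the scalar equation (coefficients a_1, ..., a_k as in the remark) and check the Kalman rank with exact rational arithmetic. The coefficient values below are arbitrary; this is a sketch, not a proof:

```python
from fractions import Fraction

def kalman_matrix(A, B):
    """Columns B, AB, ..., A^(n-1)B for a single-input pair (A, B)."""
    n = len(A)
    cols, v = [], B[:]
    for _ in range(n):
        cols.append(v[:])
        v = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return cols  # list of n column vectors

def rank(cols):
    """Rank by Gaussian elimination over exact rationals."""
    M = [[Fraction(x) for x in col] for col in cols]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def companion(a):
    """Companion matrix of x^(k) + a[0] x^(k-1) + ... + a[k-1] x = u."""
    k = len(a)
    A = [[0] * k for _ in range(k)]
    for i in range(k - 1):
        A[i][i + 1] = 1
    A[k - 1] = [-a[k - 1 - j] for j in range(k)]
    return A

# Any scalar equation of order k is controllable with a single control:
k = 5
A = companion([2, -1, 3, 0, 7])   # arbitrary coefficients a_1, ..., a_5
B = [0] * (k - 1) + [1]           # the control enters the last equation
print(rank(kalman_matrix(A, B)))  # 5
```

Whatever the coefficients, the Kalman matrix of the companion pair is triangular up to lower-order terms, so its rank is always k.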

Bang-bang controls

Let us consider the particular case

B ∈ M_{n×1}, (15)

i.e. m = 1, in which only one scalar control u : [0, T] → R is available and B is a column vector. To build bang-bang controls it is convenient to consider the quadratic functional

J_bb(ϕ_T) = (1/2) [ ∫_0^T |B*ϕ| dt ]^2 + ⟨x^0, ϕ(0)⟩, (16)

where ϕ is the solution of the adjoint system (6) with initial data ϕ_T.

The same arguments as above show that J_bb is also continuous and coercive. It follows that J_bb attains a minimum at some point ϕ̂_T ∈ R^n. The optimality condition (the Euler–Lagrange equation) its minimizers satisfy is:

( ∫_0^T |B*ϕ̂| dt ) ∫_0^T sgn(B*ϕ̂) B*ψ dt + ⟨x^0, ψ(0)⟩ = 0

for all ψ_T ∈ R^n, where ψ is the solution of the adjoint system (6) with initial data ψ_T. The control we are looking for is

u = ( ∫_0^T |B*ϕ̂| dt ) sgn(B*ϕ̂),

where ϕ̂ is the solution of (6) with initial data ϕ̂_T.

Note that the control u is of bang-bang form. Indeed, u takes only the two values ± ∫_0^T |B*ϕ̂| dt. The control switches from one value to the other finitely many times, when the function B*ϕ̂ changes sign. This control is of minimal L^∞(0, T) norm among all possible controls.

Exercise: Check what conditions are needed to guarantee bang-bang controllability in the case m ≥ 2, i.e. when more than one control is applied. In that case the rank condition for the pair (A, B) does not suffice, and one actually needs each column b_j of B to be such that the pair (A, b_j) satisfies the Kalman rank condition.

Stabilization of finite-dimensional linear systems

The controls we have obtained so far are the so-called open-loop controls. In practice, it is interesting to get closed-loop or feedback controls, whose value is related to the state itself in real time.

In this section, to simplify the presentation, we assume that A is a skew-adjoint matrix, i.e. A* = −A. In this case ⟨Ax, x⟩ = 0. Consider the system

x' = Ax + Bu; x(0) = x^0. (17)

When u ≡ 0, the energy of the solution of (17) is conserved. Indeed, by multiplying (17) by x, if u ≡ 0, one obtains

(d/dt) |x(t)|^2 = 0. (18)

Hence,

|x(t)| = |x^0| for all t ≥ 0. (19)

The problem of stabilization can be formulated in the following way. Suppose that the pair (A, B) is controllable. We then look for a matrix L such that the solution of system (17) with the feedback control law

u(t) = Lx(t) (20)

has a uniform exponential decay, i.e. there exist c > 0 and ω > 0 such that

|x(t)| ≤ c e^{−ωt} |x^0| (21)

for any solution. Note that, according to the law (20), the control u is obtained in real time from the state x. In other words, we are looking for matrices L such that the solution of the system

x' = (A + BL)x = Dx (22)

has a uniform exponential decay rate.

Remark that we cannot expect more than (21). Indeed, the solutions of (22) may not satisfy x(T) = 0 in finite time T. If that were the case, from the uniqueness of solutions of (22) with final state 0 at t = T, it would follow that x^0 ≡ 0.

Theorem 2. If A is skew-adjoint and the pair (A, B) is controllable, then L = −B* stabilizes the system, i.e. the solution of

x' = Ax − BB*x; x(0) = x^0 (23)

has a uniform exponential decay (21).

Proof: With L = −B* we obtain that

(1/2) (d/dt) |x(t)|^2 = −⟨BB*x(t), x(t)⟩ = −|B*x(t)|^2 ≤ 0.

Hence, the norm of the solution decreases in time. Moreover,

|x(T)|^2 − |x(0)|^2 = −2 ∫_0^T |B*x|^2 dt. (24)

To prove the uniform exponential decay it is sufficient to show that there exist T > 0 and c > 0 such that

|x(0)|^2 ≤ c ∫_0^T |B*x|^2 dt (25)

for any solution x of (23). Indeed, from (24) and (25) we would obtain that

|x(T)|^2 ≤ |x(0)|^2 − (2/c) |x(0)|^2 (26)

and consequently

|x(T)|^2 ≤ γ |x(0)|^2 (27)

with

γ = 1 − 2/c < 1. (28)

Hence,

|x(kT)|^2 ≤ γ^k |x^0|^2 = e^{(ln γ)k} |x^0|^2 for all k ∈ N. (29)

Now, given any t > 0, we write it in the form t = kT + δ, with δ ∈ [0, T) and k ∈ N, and we obtain that

|x(t)|^2 ≤ |x(kT)|^2 ≤ e^{(ln γ)k} |x^0|^2 = e^{(ln γ)(t−δ)/T} |x^0|^2 ≤ (1/γ) e^{(ln γ / T) t} |x^0|^2.

We have obtained the desired decay result (21) with

c = γ^{−1/2}, ω = −ln γ / (2T). (30)

To prove (25) we decompose the solution x of (23) as x = ϕ + y, with ϕ and y solutions of the following systems:

ϕ' = Aϕ; ϕ(0) = x^0, (31)

and

y' = Ay − BB*x; y(0) = 0. (32)

Remark that, since A is skew-adjoint, (31) is exactly the adjoint system (6) except for the fact that the initial data are taken at t = 0.

As we have seen in the proof of Theorem 1, the pair (A, B) being controllable, the following observability inequality holds for system (31):

|x^0|^2 ≤ C ∫_0^T |B*ϕ|^2 dt. (33)

Since ϕ = x − y, we deduce that

|x^0|^2 ≤ 2C [ ∫_0^T |B*x|^2 dt + ∫_0^T |B*y|^2 dt ].

On the other hand, it is easy to show that the solution y of (32) satisfies

(1/2) (d/dt) |y|^2 = −⟨B*x, B*y⟩ ≤ |B*x| |B*y| ≤ |B| |y| |B*x| ≤ (1/2) ( |y|^2 + |B|^2 |B*x|^2 ).

From Gronwall's inequality we deduce that

|y(t)|^2 ≤ |B|^2 ∫_0^t e^{t−s} |B*x|^2 ds ≤ |B|^2 e^T ∫_0^T |B*x|^2 dt, (34)

and consequently

∫_0^T |B*y|^2 dt ≤ |B|^2 ∫_0^T |y|^2 dt ≤ T |B|^4 e^T ∫_0^T |B*x|^2 dt.

Finally, we obtain that

|x^0|^2 ≤ [ 2C + 2C T |B|^4 e^T ] ∫_0^T |B*x|^2 dt,

and the proof of Theorem 2 is complete.
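Theorem 2 can be illustrated numerically on Example 2: with B = (0, 1)^T, the feedback u = −B*x turns the conservative oscillator into x'' + x' + x = 0, whose energy |x(t)|^2 decreases along trajectories. A short sketch (step size and horizon are illustrative):

```python
import math

# Closed loop x' = (A - B B*) x for A = [[0,1],[-1,0]], B = (0,1)^T,
# i.e. the damped oscillator x'' + x' + x = 0.

def f(x):
    return (x[1], -x[0] - x[1])

dt, steps = 0.001, 20000          # integrate up to t = 20 with RK4
x = [1.0, 0.0]
n0 = math.hypot(x[0], x[1])
for _ in range(steps):
    k1 = f(x)
    k2 = f([x[i] + dt / 2 * k1[i] for i in range(2)])
    k3 = f([x[i] + dt / 2 * k2[i] for i in range(2)])
    k4 = f([x[i] + dt * k3[i] for i in range(2)])
    x = [x[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
         for i in range(2)]

nT = math.hypot(x[0], x[1])
print(n0, nT)   # the norm has decayed (closed-loop Re(lambda) = -1/2)
```

The closed-loop matrix has characteristic polynomial λ^2 + λ + 1, so both eigenvalues have real part −1/2, which matches the observed exponential decay.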

Example: Consider the damped harmonic oscillator

mx'' + Rx' + kx = 0, (35)

where m, k and R are positive constants. Note that (35) may be written in the equivalent form

mx'' + kx = −Rx',

which indicates that an applied force, proportional to the velocity of the point mass and of opposite sign, is acting on the oscillator. It is easy to see that the solutions of this equation have an exponential decay property. Indeed, it is sufficient to remark that the two characteristic roots have negative real part. Indeed,

mr^2 + Rr + k = 0, r± = [ −R ± √(R^2 − 4mk) ] / (2m),

and therefore

Re r± = −R/(2m) if R^2 ≤ 4mk;
Re r± = [ −R ± √(R^2 − 4mk) ] / (2m) if R^2 ≥ 4mk.

We observe here the classical overdamping phenomenon. Contradicting a first intuition, it is not true that the decay rate increases when the value of the damping parameter R increases. This can be avoided if the feedback mechanism also involves the state x and not only its time derivative. In other words, if instead of the previous equation we consider

mx'' + Rx' + k_1 x + k_2 x' = 0, (36)

then the decay rate can be made arbitrarily large.
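The overdamping phenomenon is easy to see on the characteristic roots themselves. The sketch below computes the spectral abscissa max Re r± of mr^2 + Rr + k = 0 (with m = k = 1 for illustration) for growing R: the best decay occurs at critical damping R^2 = 4mk and then deteriorates:

```python
import cmath

def decay_rate(R, m=1.0, k=1.0):
    """Spectral abscissa (largest real part) of m r^2 + R r + k = 0."""
    disc = cmath.sqrt(R * R - 4 * m * k)
    return max(((-R + disc) / (2 * m)).real, ((-R - disc) / (2 * m)).real)

for R in (0.5, 2.0, 10.0, 100.0):
    print(R, decay_rate(R))
# R = 2.0 is critical damping (abscissa -1); larger R gives slower decay
```

For R = 10 the abscissa is roughly −0.1, and for R = 100 roughly −0.01: more damping, slower decay.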

If (A, B) is controllable, we have proved the uniform stability property of system (17) under the hypothesis that A is skew-adjoint. However, this property holds even if A is an arbitrary matrix. More precisely, we have:

Theorem 3. If (A, B) is controllable then it is also stabilizable. Moreover, it is possible to prescribe any complex numbers λ_1, λ_2, ..., λ_n as the eigenvalues of the closed-loop matrix A + BL by an appropriate choice of the feedback matrix L, so that the decay rate may be made arbitrarily fast.

This result is not in contradiction with the behavior we observed above on the harmonic oscillator (the overdamping phenomenon). In order to obtain an arbitrarily fast decay one needs to use all components of the state in the feedback law!
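For the controllable pair of Example 2, the feedback L of Theorem 3 can be written down by hand; a small sketch (the helper `place` and the target polynomial are ours, for this 2 x 2 pair only, not a general pole-placement algorithm):

```python
# Pole placement for the pair of Example 2:
# A = [[0,1],[-1,0]], B = (0,1)^T, closed loop A + B L.
# For L = (l1, l2) the characteristic polynomial of A + BL is
# s^2 - l2 s + (1 - l1), so any target s^2 + p s + q is reached
# with l2 = -p, l1 = 1 - q.

def place(p, q):
    """Feedback row L = (l1, l2) assigning char. polynomial s^2 + p s + q."""
    return (1 - q, -p)

# Prescribe eigenvalues -2 and -3, i.e. s^2 + 5 s + 6:
l1, l2 = place(5, 6)
Acl = [[0, 1], [-1 + l1, l2]]
trace = Acl[0][0] + Acl[1][1]
det = Acl[0][0] * Acl[1][1] - Acl[0][1] * Acl[1][0]
print(trace, det)   # -5, 6  (matches s^2 + 5 s + 6)
```

Both closed-loop coefficients are free parameters of L, which is exactly why the decay rate can be made arbitrarily fast when the full state enters the feedback.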

We have shown that:

Controllability and observability are equivalent notions (Wiener's cybernetics);

Both hold for all T if and only if the Kalman rank condition is fulfilled;

The controls may be obtained as minimizers of suitable quadratic functionals over the space of solutions of the adjoint system;

There are very many controls (smooth ones, in bang-bang form, ...) that can be characterized by variational methods.

The key ingredient is always the observability inequality. Controllable systems are stabilizable by means of closed-loop or feedback controls.