

Linear Systems Notes

1 Lecture 0

Proposition 1. A symmetric matrix A ∈ M_n(R) is positive definite iff all nested (leading principal) minors are strictly greater than zero.

Proof. (⇒): A is positive definite iff all eigenvalues λ_i > 0, and det(A) = ∏_j λ_j > 0. For the leading k×k block A_k, let H = {x : x_{k+1} = … = x_n = 0} and restrict the quadratic form Q to H: if Q is positive on all vectors then it is clearly positive on any subspace of V, so det(A_k) > 0. (⇐): We know that det(A_k) > 0 for all k, including det(A) > 0. By the interlacing property (the eigenvalues of A_k interlace those of A_{k+1}), induction on k shows the matrix is positive definite. ∎

1.1 Lyapunov functions

Theorem 1. ẋ = Ax is globally exponentially stable (GES) iff A is Hurwitz (i.e., max Re(σ(A)) < 0).

The idea is to transform the basis: take a trajectory which doesn't directly admit a Lyapunov function to a basis in which the system decomposes into Jordan form, with each Jordan block admitting a quadratic form (Lyapunov function) for each direct summand; then take the Lyapunov function to be the direct sum of these pieces.

E.g. if U = ‖x‖² = x*x, then U̇ = ẋ*x + x*ẋ = x*(J* + J)x. For a single Jordan block

    ẋ = (λI_k + N_k)x,    (1)

with λ real, J^T + J is the tridiagonal matrix

    [ 2λ  1
      1   2λ  1
          ⋱   ⋱   ⋱
              1   2λ ].

For k = 2 the determinant is 4λ² − 1, which is not positive for small |λ|, so this probably won't work as is; we should change the basis so that instead of having ones off the diagonal we have ε, which can be made arbitrarily small. Write x₁ = y₁, x₂ = ay₂, …, x_d = a^{d−1}y_d; from ẋ_k = λx_k + x_{k−1} we get a^{k−1}ẏ_k = λa^{k−1}y_k + a^{k−2}y_{k−1}, so ẏ_k = λy_k + a^{−1}y_{k−1}, and ε = a^{−1} is small for large a.

If λ ∉ R: for A : R^d → R^d, P_A(z) = z^d − tr(A)z^{d−1} + … ± det(A) has real coefficients; then P_A(λ) = 0 implies P_A(λ̄) = 0 too, which implies that nonreal complex roots always come in conjugate pairs. If z ↦ λz with λ = a + bi, then in a real basis the matrix of this operator is

    [ a   b
     −b   a ].    (2)

Example. Let A = [ 0 1 ; −5 −2 ], char_A(z) = z² + 2z + 5 = (z + 1 + 2i)(z + 1 − 2i), with eigenvalues −1 ± 2i and eigenvectors v₁ = (1, −1 + 2i)^T, v₂ = (1, −1 − 2i)^T. Let e₁ = Re v₁ = (1, −1)^T and e₂ = Im v₁ = (0, 2)^T be the new basis. Then, to calculate:

    [ 1    0   ] [  0   1 ] [  1  0 ]   [ −1   2 ]
    [ 1/2  1/2 ] [ −5  −2 ] [ −1  2 ] = [ −2  −1 ].    (3)
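Sylvester's criterion in Proposition 1 is easy to sanity-check numerically; a minimal sketch (the test matrices are illustrative choices, not from the lecture):

```python
import numpy as np

def is_positive_definite_sylvester(A, tol=1e-12):
    """Sylvester's criterion: a symmetric matrix is positive definite
    iff every leading principal minor det(A_k) is strictly positive."""
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > tol for k in range(1, n + 1))

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3: positive definite
B = np.array([[1.0, 2.0], [2.0, 1.0]])   # eigenvalues 3 and -1: indefinite
assert is_positive_definite_sylvester(A)
assert not is_positive_definite_sylvester(B)
```

The criterion agrees with checking the eigenvalues directly via `np.linalg.eigvalsh`.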

This matrix is simply multiplication by −1 + 2i, as expected. If ẏ₁ = ay₁ − by₂ and ẏ₂ = by₁ + ay₂, then (y₁² + y₂²)˙ = 2ẏ₁y₁ + 2ẏ₂y₂ = 2a(y₁² + y₂²): the cross terms cancel, so y₁² + y₂² decays whenever a = Re λ < 0.

In general, take x^TQx a positive definite quadratic form such that d/dt(x^TQx) = ẋ^TQx + x^TQẋ = x^TA^TQx + x^TQAx = x^T(A^TQ + QA)x is negative definite; i.e., the derivative of a quadratic form along the flow is still a quadratic form. Say a₁‖x‖² ≤ x^TQx ≤ a₂‖x‖² for constants a₂ ≥ a₁ > 0, and d/dt(x^TQx) ≤ −a₃‖x‖². Let f(t) = U(x(t)); then ḟ ≤ −a₃‖x‖² ≤ −(a₃/a₂)f, from which it follows that f(t) ≤ f(0)e^{−(a₃/a₂)t}, and hence ‖x(t)‖² ≤ (f(0)/a₁)e^{−(a₃/a₂)t}.
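The exponential squeeze on a quadratic form can be seen numerically. A small sketch using the companion matrix of z² + 2z + 5 from the example (eigenvalues −1 ± 2i); the choice U = ‖x‖² is illustrative, and as the discussion above suggests, it need not decrease monotonically, but it is still bounded by a decaying exponential:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-5.0, -2.0]])   # Hurwitz: eigenvalues -1 +/- 2i
x0 = np.array([1.0, 0.5])

ts = np.linspace(0.0, 5.0, 201)
# ||x(t)||^2 along the trajectory x(t) = e^{At} x0
U = np.array([(expm(A * t) @ x0) @ (expm(A * t) @ x0) for t in ts])

# Not necessarily monotone (A^T + A is indefinite here), but it decays overall:
assert U[-1] < 1e-2 * U[0]
```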

2 Lecture 1

Theorem 2. Let ẋ = Ax. Then A is Hurwitz iff the origin is globally exponentially stable.

Proof. (⇒): Done. (⇐): Assume Re(λ) ≥ 0 for some eigenvalue λ. In the Jordan basis we have ẋ₁ = λx₁ + …, ẋ₂ = λx₂ + …, etc., if λ is real; if λ ∈ C ∖ R, then ẋ₁ = ax₁ − bx₂ + … and ẋ₂ = bx₁ + ax₂ + … with a = Re λ ≥ 0. Either way there is a solution along this eigendirection whose norm does not decay to zero, so the origin is not GES. ∎

Corollary 1. Let ẋ = f(x) = Ax + g(x) (Taylor expansion at the equilibrium f(0) = 0), where ‖g(x)‖ ≤ c‖x‖² for small ‖x‖. If A is Hurwitz, then any quadratic strict Lyapunov function for ẋ = Ax will also be a strict Lyapunov function for f on a neighborhood of x = 0.

Definition 1 (Domain (region) of stability). Suppose U is positive definite in Ω and ∇U · f(x) is negative definite in Ω; the region of attraction is the set {ω ∈ Ω : x(0) = ω ⇒ lim_{t→∞} x(t) = 0}.

Let ẋ = Ax, x ∈ R^d. We want a quadratic form: U = x^TPx, positive definite, with U̇ negative definite along ẋ = Ax. Plugging in, we have d/dt U(x(t)) = ẋ^TPx + x^TPẋ = x^T(A^TP + PA)x. We want A^TP + PA = Q with Q a negative definite symmetric matrix. This is a linear system of d(d+1)/2 equations in the entries of P. Having solved for P, you then have to verify that P is positive definite.

Proposition 2. If A is Hurwitz, then such a P exists and is positive definite.

Proof. Let x(t) be the solution (trajectory) of ẋ = Ax with x(0) = x, and consider −∫₀^∞ x(t)^TQx(t)dt, which exists since A is Hurwitz. This integral is simply −∫₀^∞ x^Te^{A^Tt}Qe^{At}x dt = x^T(−∫₀^∞ e^{A^Tt}Qe^{At}dt)x. Claim: P = (−1)·∫₀^∞ e^{A^Tt}Qe^{At}dt solves the Lyapunov equation. Positive definiteness of P follows immediately from Q being negative definite: each −e^{A^Tt}Qe^{At} is positive definite, and so is the integral. The only thing to check is that it solves the equation. Calculate: U(x(t)) = x(t)^TPx(t) = −∫_t^∞ x(s)^TQx(s)ds, hence d/dt(x^TPx) = x(t)^TQx(t) for every trajectory, i.e. A^TP + PA = Q. ∎

2.1 Discrete Time Systems

Given ẋ = Ax, the analogous discrete-time system is x[k+1] = Ax[k], and stability is characterized not by max Re(σ(A)) < 0 but by max|σ(A)| < 1 (global exponential stability; as in the continuous case, this also gives exponential stability for nonlinear perturbations x[k+1] = Ax[k] + g(x[k])).
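Proposition 2 can be verified numerically with SciPy's Lyapunov solver; a sketch using the companion matrix of z² + 2z + 5 again (note SciPy's `solve_continuous_lyapunov(a, q)` solves aX + Xaᵀ = q, so we pass Aᵀ to match the convention AᵀP + PA = Q used here):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-5.0, -2.0]])   # Hurwitz
Q = -np.eye(2)                             # negative definite right-hand side

# Solve A^T P + P A = Q (SciPy's convention is A X + X A^T = Q, hence A.T).
P = solve_continuous_lyapunov(A.T, Q)

assert np.allclose(A.T @ P + P @ A, Q, atol=1e-10)    # solves the equation
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)  # P is positive definite
```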

2.2 Hamiltonian Systems

Let ẋ = Ax with x ∈ R^{2d}, x = (q, p). Hamiltonian matrices are never Hurwitz. Example (d = 1): q̇ = aq + bp and ṗ = cq + dp, with matrix [ a b ; c d ], which is Hamiltonian iff tr(A) = a + d = 0; it can't be Hurwitz because Re(λ₁ + λ₂) < 0 contradicts λ₁ + λ₂ = tr(A) = 0. Nevertheless, such systems can still be Lyapunov stable (only Lyapunov stable, never asymptotically), namely if det(A) > 0, since then the eigenvalues are the purely imaginary pair ±i√(det A). If det(A) < 0, the eigenvalues are the real pair ±√(−det A), and the system is unstable.
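The 2×2 trace-zero dichotomy above can be sketched as follows (the classification labels and test matrices are my own):

```python
import numpy as np

def classify_trace_zero(A, tol=1e-9):
    """For real 2x2 A with tr(A) = 0 the characteristic polynomial is
    z^2 + det(A): det > 0 gives a purely imaginary pair (Lyapunov stable,
    never Hurwitz); det < 0 gives a real pair +/-sqrt(-det) (unstable)."""
    assert abs(np.trace(A)) < tol
    return "Lyapunov stable" if np.linalg.det(A) > 0 else "unstable"

# Harmonic oscillator q' = p, p' = -q: det = 1 > 0.
assert classify_trace_zero(np.array([[0.0, 1.0], [-1.0, 0.0]])) == "Lyapunov stable"
# Saddle q' = p, p' = q: det = -1 < 0.
assert classify_trace_zero(np.array([[0.0, 1.0], [1.0, 0.0]])) == "unstable"
```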

3 Lecture 2

3.1 BIBO Stability

Consider the LTV system

    ẋ = A(t)x + B(t)u    (4)
    y = C(t)x + D(t)u    (5)

Question: if the input is bounded, will the output also be? Take first an example: say ẋ = v, v̇ = −x + u, y = x; if u = ε sin(t) we have ẍ + x = ε sin t, which has solutions x(t) = A sin(t) + B cos(t) − (ε/2)t cos(t); hence with bounded u, sup_{t∈[0,T]} |y(t)| grows without bound as T → ∞.

So to formalize this discussion, we consider inputs bounded in the L^∞ sense: if ‖u‖_∞ < C ∈ R, will ‖y‖_∞ < M·C as well? There is not much to say in general beyond writing

    y(t) = C(t) ∫₀^t φ(t, s)B(s)u(s)ds + D(t)u(t).

For one thing, D should be bounded in the matrix norm, i.e. ‖D(t)‖ ≤ M_D for all t; where ‖·‖ is as follows: for a linear transformation M ∈ L(V, W), where V comes equipped with a norm ‖·‖_V and W as well with a norm ‖·‖_W, we define ‖M‖ = sup_{‖v‖_V = 1} ‖Mv‖_W, the operator norm. What remains is to demand that the integrand is bounded. Precisely:

Theorem 3. If there exist constants E, M_D ∈ R such that for all t, ∫₀^t ‖C(t)φ(t, s)B(s)‖ds ≤ E and ‖D(t)‖ ≤ M_D, then the system is BIBO stable.

Proposition 3. For a linear time-invariant system (ẋ = Ax + Bu, y = Cx + Du), if A is Hurwitz then the system is BIBO stable.

3.2 Controllability

Take ẋ = Ax + Bu and a finite time interval [0, t₁]. Define the reachable subspace

    R(0, t₁) = {x̄ ∈ V : ∃{u(t)}_{t∈[0,t₁]} s.t. x(0) = 0, x(t₁) = x̄}.

Note that R is a linear subspace of V, for if x̄₁, x̄₂ ∈ R(0, t₁), then λx̄₁ + x̄₂ ∈ R as well, because if you have controls u₁ ↦ x̄₁ and u₂ ↦ x̄₂, then λu₁ + u₂ ↦ λx̄₁ + x̄₂.

Take ẋ = [ −1 0 ; 2 −3 ] x. This system is asymptotically stable. If we add controls, which states can we reach? Well, if our control enters as B = (1, 1)^T, then if you start at the origin, since (1, 1)^T is an eigenvector of A, the trajectory x(t) will remain on that eigenvector line, i.e. traj(x(t)) ⊆ span((1, 1)^T). If on the other hand the control enters as B = (1, 0)^T, then you have control over one coordinate

direction of travel. This control does allow one to reach the whole space, for you can apply a control to traverse trajectories, then turn off the control and follow the natural dynamics for a period of time, then turn on the control again, etc. (intuition for why R = R² here).

On the other hand, we have controllable subspaces, defined similarly:

    C(0, t₁) = {x̄ ∈ V : x(0) = x̄, ∃{u(t)}_{t∈[0,t₁]} s.t. x(t₁) = 0}.

For φ(t₁, 0) : V₀ → V_{t₁}, C sits in the first space and R in the second. Recall that the transition matrix φ(t, s) : V_s → V_t satisfies ∂φ/∂t = A(t)φ and φ(t, t) = I.

Define the reachability Gramian

    W_R(0, t₁) = ∫₀^{t₁} φ(t₁, s)B(s)B(s)^Tφ(t₁, s)^T ds.

Theorem 4. R(0, t₁) = Im(W_R).

Proof. Assuming x̄ = W_R z, define u(t) = B(t)^Tφ(t₁, t)^Tz. Then simply compute

    x(t₁) = ∫₀^{t₁} φ(t₁, s)B(s)B(s)^Tφ(t₁, s)^Tz ds = W_R z = x̄,

so Im(W_R) ⊆ R(0, t₁). In the other direction, if x̄ ∉ Im(W_R), then we want to show that x̄ ∉ R(0, t₁). But x̄ ∈ Im(W_R) iff cx̄ = 0 for every row vector c with cW_R = 0; so x̄ ∉ Im(W_R) gives a row vector c such that cx̄ ≠ 0 and cW_R = 0. Then

    0 = cW_R c^T = ∫₀^{t₁} (cφ(t₁, s)B(s))(cφ(t₁, s)B(s))^T ds = ∫₀^{t₁} ‖cφ(t₁, s)B(s)‖² ds,

which implies that cφ(t₁, s)B(s) is uniformly zero, so that for any control u,

    c x(t₁) = c ∫₀^{t₁} φ(t₁, s)B(s)u(s)ds = 0 ≠ c x̄,

and x̄ is not reachable. ∎
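Theorem 4 also gives a steering recipe: to reach x̄, solve W_R z = x̄ and apply u(s) = B(s)ᵀφ(t₁, s)ᵀz. A numerical sketch for an LTI pair consistent with the notes' 2×2 example (the sign pattern of A, the choice B = (1, 0)ᵀ, and the horizon t₁ = 2 are assumptions of mine):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, trapezoid

A = np.array([[-1.0, 0.0], [2.0, -3.0]])   # stable, with eigenvector (1,1)^T
B = np.array([[1.0], [0.0]])               # control in the first coordinate
t1 = 2.0

# Reachability Gramian W_R = int_0^t1 phi(t1,s) B B^T phi(t1,s)^T ds,
# with phi(t1,s) = e^{A(t1-s)} in the LTI case (trapezoidal quadrature).
ss = np.linspace(0.0, t1, 2001)
vs = np.array([expm(A * (t1 - s)) @ B for s in ss])   # phi(t1,s) B
W = trapezoid(vs @ vs.transpose(0, 2, 1), ss, axis=0)

xbar = np.array([1.0, -1.0])
z = np.linalg.solve(W, xbar)

def rhs(t, x):
    u = B.T @ expm(A.T * (t1 - t)) @ z                # u(s) = B^T phi(t1,s)^T z
    return A @ x + (B @ u).ravel()

sol = solve_ivp(rhs, (0.0, t1), [0.0, 0.0], rtol=1e-8, atol=1e-10)
assert np.allclose(sol.y[:, -1], xbar, atol=1e-2)     # steered from 0 to xbar
```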

4 Lecture 3

4.1 Lagrange Multipliers

Given f : V → R, with unconstrained optimization one approach would be to find all the critical points and test whether they are local extrema. More precisely, ∇f = (∂f/∂x₁, …, ∂f/∂x_d), and at a minimum point (assuming differentiability of f), ∇f = 0. With constrained optimization, we have a set of conditions which we want to be satisfied, namely g_i : V → R (collected as g : V → W = R^I, with (g)_i = g_i), i = 1, …, I, with each g_i = 0 (i.e. g = 0). Difficult problem. So we introduce a new vector λ ∈ R^I, and consider the function F(x, λ) = f(x) + λ·g(x). Extremum points of f subject to g = 0 correspond exactly to critical points of F(x, λ) on V × W. In this case,

    ∇F = (∂F/∂x₁, …, ∂F/∂x_d, ∂F/∂λ₁, …, ∂F/∂λ_I) = 0.    (6)

The result is that ∂f/∂x_j + Σ_i λ_i ∂g_i/∂x_j = 0, with j = 1, …, d, together with g_i(x) = 0.

Optimization with Lagrange multipliers will help us in reachability. As usual we have ẋ = A(t)x(t) + B(t)u(t), with x ∈ V = R^d, and

    x̄ = ∫₀^{t₁} φ(t₁, s)B(s)u(s)ds.    (7)

Let U be the space of all control functions {u}. Given g : U → V with g(u) = ∫₀^{t₁} φ(t₁, s)B(s)u(s)ds, we want g(u) − x̄ = 0 while minimizing the energy (1/2)∫₀^{t₁} ‖u‖²ds (note that here what we want most, namely a solution to the dynamical system, gives us the constraint in the optimization problem). As before, with λ a row covector, form

    F(u, λ) = λ(∫₀^{t₁} φ(t₁, s)B(s)u(s)ds − x̄) + (1/2)∫₀^{t₁} u(s)^Tu(s)ds,    (8)

equivalent to

    F(u, λ) = ∫₀^{t₁} [(1/2)u(s)^Tu(s) + λφ(t₁, s)B(s)u(s)] ds − λx̄.    (9)

Now we need to take the gradient; the tricky part is differentiating with respect to u, because it is a function. To address this difficulty, we differentiate with respect to the value u(s) at some arbitrary s, denoted ∂F/∂u(s):

    ∂F/∂u(s) = u(s)^T + λφ(t₁, s)B(s) = 0.    (10)

So if we solve this problem, then u(s)^T = −λφ(t₁, s)B(s); this is the optimal control. Alternatively, u(s) = B(s)^Tφ(t₁, s)^Tz, with z = −λ^T. The result:

    x̄ = ∫₀^{t₁} φ(t₁, s)B(s)B(s)^Tφ(t₁, s)^T ds · z = W_R z.    (11)
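The finite-dimensional shadow of this computation is ordinary least squares: minimizing (1/2)‖u‖² subject to Gu = x̄ gives the Lagrange condition u = Gᵀz with (GGᵀ)z = x̄, i.e. the minimum-norm (pseudoinverse) solution, with GGᵀ playing the role of W_R. A sketch with a random G standing in for u ↦ ∫φBu ds (the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((2, 50))    # discretized u -> int phi(t1,s) B u(s) ds
xbar = np.array([1.0, -1.0])

# Lagrange condition: u = G^T z with (G G^T) z = xbar  (G G^T plays W_R).
z = np.linalg.solve(G @ G.T, xbar)
u = G.T @ z

assert np.allclose(G @ u, xbar)                   # constraint satisfied
assert np.allclose(u, np.linalg.pinv(G) @ xbar)   # = minimum-norm solution
```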

Linear Systems Notes 8 The basis of this argument works so long as a feasible point exists. In other words, assuming that a minimal control exists, then it has this form. Controllability is similar. C(, t ) = {x : u x + ϕ(t, s)bu(s)ds = }. The controllability grammian W C = t ϕ(, s)b(s)b (s)ϕ (, s)ds. So we want W C = x V t. We have almost the same setup as before, = ϕ(t, )x + t ϕ(t, s)b(s)u(s)ds = g with u 2 ds minimum. The solution in this case follows the same process of computations as in reachability. 4.2 LTI Case Let ẋ = Ax + Bu, x V d, u U k, define the controllability matrix C = [B AB... A d B] which has kd columns. Theorem 5. R(, t ) = C(, t ) = ImC. Proof. Let x R, iff x = d t e A(t s) Bu(s)ds. By Cayley Hamilton, A d = cj A j so we can d express e A = αj A j, so t α j A j Bu(s)ds = A j B t α j (s)u(s)ds, and hence x ImC. In the other direction, let x ImC, and suppose that CR =. Want to show that c x =. R = ImW R, so = cw R = c t e A(t s) BB e A (t s) dsc = ce A(t s) B 2 ds ce A(t s) B =. Taking derivatives, of any order, will also be zero. So e.g. cb =, cae (t s) B = (when t = s, equality still holds) or cab =, etc. Hence cw R = ca k B =, but x = A k Bu k. Conclusion: if R = C = ImC = V. Theorem 6. LTI system is controllable iff there are no left eigenvectors of A in the kernel of B. u Ker(B), means Bu =, where B : U V. Similarly, B : V U, then c ker(b ) if cb =. Proof. dim R < d rank(c) < d c cc =. So cb =, i.e. c kerb. Now, we want to prove that among these c s, one of them is an eigenvector. Consider L = ker(c ) = {c : C c = } V. Claim: LA L; cb = cab =... = ca d B = (ca)b = (ca)ab = = (ca)a d B = c( α j A j )B =. Hence it s a linear operator on this space, and therefore has an eigenvector. In the other direction, if cb =, ca = λc, then cab = λcb =. Iterating, gives the same. Therefore, the rank cannot be full.