EE363 homework 7 solutions

EE363 Prof. S. Boyd

1. Gain margin for a linear quadratic regulator. Let K be the optimal state feedback gain for the LQR problem with system ẋ = Ax + Bu, state cost matrix Q ≥ 0, and input cost matrix R > 0. You can assume that (A, B) is controllable and (Q, A) is observable. We consider the system ẋ = Ax + Bu, u = αKx, where α > 0. If α = 1, this gives the LQR optimal input, but for α ≠ 1 this input is obviously not LQR optimal, and indeed the closed-loop system ẋ = (A + αBK)x can even be unstable. Show that for α > 1/2, the closed-loop system is stable. In classical control theory, the ability of a system to remain stable when the input is scaled by any factor in the interval (1/2, ∞) is described as a negative gain margin of 6 dB, and an infinite positive gain margin.

Hint. Use the obvious quadratic Lyapunov function.

Solution: Let P be the positive definite solution of the ARE, which can be written as AᵀP + PA = KᵀRK − Q, where K = −R⁻¹BᵀP is the LQR gain. We will use V(z) = zᵀPz as our Lyapunov function. With ẋ = (A + αBK)x, V̇(z) is a quadratic form with matrix

    (A + αBK)ᵀP + P(A + αBK) = AᵀP + PA − 2αKᵀRK = −(Q + (2α − 1)KᵀRK).

Since Q ≥ 0 and KᵀRK ≥ 0, for α > 1/2 we have Q + (2α − 1)KᵀRK ≥ 0, so V̇(z) ≤ 0. To conclude stability, we now just need to show that (Q + (2α − 1)KᵀRK, A + αBK) is observable, i.e., there is no v ≠ 0 for which (A + αBK)v = λv and (Q + (2α − 1)KᵀRK)v = 0. If there were such a v, we would have

    vᵀ(Q + (2α − 1)KᵀRK)v = vᵀQv + (2α − 1)vᵀKᵀRKv = 0,

which implies Qv = 0 and Kv = 0 (since 2α − 1 > 0 and R > 0). This would give (A + αBK)v = Av = λv with Qv = 0, which is not possible since we've assumed (Q, A) observable.
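This result is easy to sanity-check numerically. The sketch below is not part of the original solution; it assumes NumPy and SciPy are available and uses a randomly generated system. It solves the ARE, forms K = −R⁻¹BᵀP, and checks the closed-loop eigenvalues of A + αBK for a few values of α.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    np.random.seed(0)
    n, m = 4, 2
    A = np.random.randn(n, n)    # random system; generically (A, B) is controllable
    B = np.random.randn(n, m)
    Q = np.eye(n)                # state cost, Q >= 0 (and (Q, A) observable)
    R = np.eye(m)                # input cost, R > 0

    # Solve A'P + PA - P B R^{-1} B' P + Q = 0 and form the LQR gain K = -R^{-1} B' P.
    P = solve_continuous_are(A, B, Q, R)
    K = -np.linalg.solve(R, B.T @ P)

    # A + alpha*B*K should be stable (all eigenvalues in the open left half-plane)
    # for every alpha > 1/2; alpha = 0.4 is included only for comparison.
    for alpha in [0.4, 0.51, 1.0, 5.0]:
        max_re = np.linalg.eigvals(A + alpha * B @ K).real.max()
        print(f"alpha = {alpha:4.2f}:  max Re eig(A + alpha*B*K) = {max_re:8.4f}")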

2. Gradient systems. Suppose φ is a scalar valued function on Rⁿ. (You can assume it is smooth, or has any other technical property you need.) We can define several dynamical systems using the gradient of φ. These systems are sometimes called gradient systems, and in this context, φ is sometimes called the potential function.

(a) A second order gradient system has the form ẍ = −∇φ(x). Show that V(x(t)) = φ(x(t)) + (1/2)‖ẋ(t)‖₂² is a conserved quantity.

(b) A first order gradient system has the form ẋ = −∇φ(x). Show that φ is a dissipated quantity.

(c) Suppose φ has bounded sublevel sets. Show that for every solution of ẋ = −∇φ(x), we have ∇φ(x(t)) → 0 as t → ∞. Hint. Show that ∫₀^∞ ‖∇φ(x(t))‖² dt < ∞.

Solution:

(a) To check that V(x(t)) is constant along any trajectory, we compute its time derivative:

    V̇(x(t)) = d/dt ( φ(x(t)) + (1/2)ẋ(t)ᵀẋ(t) ) = ∇φ(x(t))ᵀẋ(t) + ẍ(t)ᵀẋ(t) = (∇φ(x(t)) + ẍ(t))ᵀẋ(t) = 0,

since ẍ(t) = −∇φ(x(t)). Therefore, trajectories of the second order gradient system stay on the level surfaces {z | V(z) = a} of V.

(b) We claim that V(x(t)) = φ(x(t)) is a dissipated quantity, since

    V̇(x(t)) = d/dt φ(x(t)) = ∇φ(x(t))ᵀẋ(t) = −‖∇φ(x(t))‖² ≤ 0.

Therefore, trajectories of the first order gradient system can only hold their current value of φ or slip down to lower values. All sublevel sets {z ∈ Rⁿ | φ(z) ≤ a} are invariant.

(c) Following the hint, we will show that ∫₀^∞ ‖∇φ(x(t))‖² dt < ∞; then, since ∇φ is continuous (φ is smooth), we must have ∇φ(x(t)) → 0 as t → ∞. From part (b), (d/dt)φ(x(t)) = −‖∇φ(x(t))‖², so we can write

    ∫₀ᵗ ‖∇φ(x(τ))‖² dτ = −∫₀ᵗ (d/dτ)φ(x(τ)) dτ = φ(x(0)) − φ(x(t)).

Since φ(x(t)) is nonincreasing, for any t ≥ 0 we have x(t) ∈ C, where C is the φ(x(0))-sublevel set of φ, i.e., C = {z | φ(z) ≤ φ(x(0))}. C is bounded, so its closure cl C is compact. We therefore get

    ∫₀ᵗ ‖∇φ(x(τ))‖² dτ = φ(x(0)) − φ(x(t)) ≤ φ(x(0)) − inf_{τ≥0} φ(x(τ)) ≤ φ(x(0)) − inf_{z ∈ cl C} φ(z) = B < ∞.

The last inequality follows from the fact that the infimum of a continuous real valued function over a compact set is finite. Since the integral is bounded by B uniformly in t, we conclude that ∫₀^∞ ‖∇φ(x(t))‖² dt ≤ B < ∞.
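The following sketch is not part of the original solution; it illustrates parts (a)-(c) numerically for the hypothetical potential φ(x) = (1/2)xᵀHx with H > 0, which is smooth with bounded sublevel sets, and assumes SciPy is available.

    import numpy as np
    from scipy.integrate import solve_ivp

    # hypothetical potential phi(x) = 0.5 x'Hx with H > 0: smooth, bounded sublevel sets
    H = np.array([[2.0, 0.5], [0.5, 1.0]])
    phi = lambda x: 0.5 * x @ H @ x
    grad = lambda x: H @ x

    # (a) second order gradient system xddot = -grad phi(x); state z = (x, xdot)
    def second_order(t, z):
        x, v = z[:2], z[2:]
        return np.concatenate([v, -grad(x)])

    sol2 = solve_ivp(second_order, (0, 20), [1.0, -2.0, 0.5, 0.0], rtol=1e-9, atol=1e-9)
    V = [phi(z[:2]) + 0.5 * z[2:] @ z[2:] for z in sol2.y.T]
    print("(a) variation of V along the trajectory:", max(V) - min(V))

    # (b), (c) first order gradient system xdot = -grad phi(x); grad phi(x(t)) -> 0
    sol1 = solve_ivp(lambda t, x: -grad(x), (0, 20), [1.0, -2.0])
    print("(c) ||grad phi(x(T))|| at T = 20:", np.linalg.norm(grad(sol1.y[:, -1])))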

3. A bound on the peaking factor via Lyapunov theory. Consider the system ẋ = f(x), where f : Rⁿ → Rⁿ and f(0) = 0 (i.e., 0 is an equilibrium point). We define the peaking factor p of the system as

    p = sup { ‖x(t)‖ / ‖x(0)‖ : t ≥ 0 },

where the supremum is taken over all trajectories of the system with x(0) ≠ 0. We can have p = ∞, which would occur, for example, if there is an unbounded trajectory of the system. We always have p ≥ 1; p = 1 only if no trajectory ever increases in norm. The peaking factor gives a bound on how far any trajectory can wander away from the origin, relative to how far it was when it started. In this problem you will show how to bound the peaking factor of a system using Lyapunov methods. Suppose the quadratic Lyapunov function V(z) = zᵀPz, P = Pᵀ > 0, satisfies V̇(z) = 2zᵀPf(z) ≤ 0 for all z. (In other words, the Lyapunov function V proves that all trajectories are bounded.) Show that p ≤ √κ, where κ = λmax(P)/λmin(P) is the condition number of P. Two interpretations/ramifications of this result: by finding a quadratic Lyapunov function that proves the trajectories are bounded, we can bound the peaking factor of the system; and if a system has a large peaking factor, then any quadratic Lyapunov function that proves boundedness of the trajectories must have a large condition number.

Solution. Since V̇(z) ≤ 0 for all z, V(x(t)) is nonincreasing; in particular, V(x(t)) ≤ V(x(0)) for t ≥ 0. We combine this with the basic inequalities

    V(x(t)) = x(t)ᵀPx(t) ≥ λmin(P)‖x(t)‖²,    V(x(0)) = x(0)ᵀPx(0) ≤ λmax(P)‖x(0)‖²

to get λmin(P)‖x(t)‖² ≤ λmax(P)‖x(0)‖² for all t ≥ 0. Assuming x(0) ≠ 0, this yields

    ‖x(t)‖ / ‖x(0)‖ ≤ √( λmax(P)/λmin(P) ) = √κ.

Therefore, we have p ≤ √κ.
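For a linear system ẋ = Ax the bound is easy to test. The sketch below is not part of the original solution; it assumes SciPy, picks an arbitrary stable but non-normal A, certifies boundedness with P obtained from the Lyapunov equation AᵀP + PA = −I, and compares the observed peaking over random initial conditions with the bound √κ.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, expm

    A = np.array([[-1.0, 10.0], [0.0, -2.0]])        # stable but non-normal: trajectories can peak
    P = solve_continuous_lyapunov(A.T, -np.eye(2))   # solves A'P + PA = -I, so Vdot(z) <= 0
    kappa = np.linalg.cond(P)                        # lambda_max(P) / lambda_min(P) for P > 0

    # estimate the peaking factor by simulating from random unit-norm initial conditions
    rng = np.random.default_rng(0)
    dt, steps = 0.01, 2000
    Phi = expm(A * dt)                               # exact one-step transition matrix
    peak = 0.0
    for _ in range(50):
        x = rng.standard_normal(2)
        x /= np.linalg.norm(x)
        for _ in range(steps):
            peak = max(peak, np.linalg.norm(x))
            x = Phi @ x

    print(f"observed peaking ~ {peak:.3f},  bound sqrt(kappa) = {np.sqrt(kappa):.3f}")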

4. Digital filter with saturation. The dynamics of an undriven digital filter which exhibits saturation is x(k + 1) = sat(Ax(k)), where A ∈ R^{n×n} is stable, sat : Rⁿ → Rⁿ is defined by sat(x) = (sat(x₁), ..., sat(xₙ)), and the (unit) saturation function is defined by

    sat(z) = z for |z| ≤ 1,    sat(z) = 1 for z > 1,    sat(z) = −1 for z < −1.

Without saturation, the state would converge to zero, but with saturation, it need not. When a trajectory of the system fails to converge to zero, it is called saturation induced instability. We seek conditions under which the system with saturation is globally asymptotically stable, i.e., saturation induced instability cannot occur. Show that x(k) → 0 if there is a nonsingular diagonal D such that ‖DAD⁻¹‖ < 1. (We will see later how to compute such a D, or determine that none exists, using linear matrix inequalities.) Hint: Use the Lyapunov function V(z) = ‖Dz‖.

Solution: This condition can be obtained by finding an appropriate Lyapunov function for this system, i.e., a positive definite function V(z) such that ΔV = V(sat(Az)) − V(z) ≤ −αV(z) for some α > 0. First note that in the scalar case, |sat(z)| ≤ |z|; for example, when z doesn't saturate, equality is achieved. The sat(·) function just extends this idea to vectors in Rⁿ:

    ‖sat(z)‖² = Σᵢ (sat(zᵢ))² ≤ Σᵢ zᵢ² = ‖z‖².

In fact, ‖D sat(z)‖ ≤ ‖Dz‖ for any diagonal matrix D. Let V(z) = ‖Dz‖, where D is invertible. This gives

    V(sat(Az)) = ‖D sat(Az)‖ ≤ ‖DAz‖ = ‖DAD⁻¹ Dz‖ ≤ ‖DAD⁻¹‖ ‖Dz‖ = ‖DAD⁻¹‖ V(z).

Therefore V(sat(Az)) − V(z) ≤ −(1 − ‖DAD⁻¹‖)V(z), which satisfies the stability condition when ‖DAD⁻¹‖ < 1.
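A small numerical illustration, not part of the original solution: it assumes NumPy and uses a D found by inspection rather than by the LMI method mentioned above. For the A below, ‖A‖ > 1, but D = diag(1, 2) gives ‖DAD⁻¹‖ < 1, and trajectories of x(k+1) = sat(Ax(k)) converge, with ‖Dx(k)‖ shrinking at each step.

    import numpy as np

    A = np.array([[0.5, 1.0], [0.0, 0.5]])
    D = np.diag([1.0, 2.0])
    Dinv = np.diag(1.0 / np.diag(D))
    print("||A||        =", np.linalg.norm(A, 2))             # > 1
    print("||D A D^-1|| =", np.linalg.norm(D @ A @ Dinv, 2))  # < 1

    sat = lambda z: np.clip(z, -1.0, 1.0)   # elementwise unit saturation

    x = np.array([50.0, -30.0])             # start far outside the linear region
    for k in range(40):
        x = sat(A @ x)
        if k % 10 == 9:
            print(f"k = {k+1:2d}:  ||D x(k)|| = {np.linalg.norm(D @ x):.2e}")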

5. Boundaries of sublevel sets and LaSalle's theorem. The set {z ∈ Rⁿ | V̇(z) = 0} arising in LaSalle's theorem is usually a thin hypersurface, but it need not be. Carefully prove global asymptotic stability of

    ẋ₁ = x₂,    ẋ₂ = −x₁ − max{0, x₁} max{0, x₂}.

(You might want to sketch out the vector field to understand the dynamics.)

Solution: Define our Lyapunov function as V(x) = xᵀx, which is positive definite (and radially unbounded). Then

    V̇(x) = 2x₁ẋ₁ + 2x₂ẋ₂ = 2x₁x₂ + 2x₂(−x₁ − max{0, x₁} max{0, x₂}) = −2x₂ max{0, x₁} max{0, x₂} ≤ 0,

since x₂ max{0, x₂} = (max{0, x₂})². Now we see what happens on the set where V̇(x) = 0: we have V̇(x) = −2x₂ max{0, x₁} max{0, x₂} = 0 exactly when x₁ ≤ 0 or x₂ ≤ 0, and on that set the dynamics reduce to

    ẋ₁ = x₂,    ẋ₂ = −x₁,

whose solutions are x₁(t) = A cos t + B sin t, x₂(t) = B cos t − A sin t. The trajectory (x₁, x₂) traces a circle, so unless A = B = 0 we have x₁(t) > 0 and x₂(t) > 0 for some t, at which point V̇(x(t)) < 0. This implies that the only trajectory along which V̇(x(t)) = 0 for all t is x(t) = 0 for all t. By LaSalle's theorem, this system is globally asymptotically stable.
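A quick simulation, not part of the original solution and assuming SciPy, is consistent with this conclusion: the norm of every sampled trajectory decays toward zero, although slowly, since dissipation occurs only while the trajectory passes through the open first quadrant.

    import numpy as np
    from scipy.integrate import solve_ivp

    def f(t, x):
        # xdot1 = x2,  xdot2 = -x1 - max(0, x1) * max(0, x2)
        return [x[1], -x[0] - max(0.0, x[0]) * max(0.0, x[1])]

    rng = np.random.default_rng(1)
    for _ in range(4):
        x0 = 5.0 * rng.standard_normal(2)
        sol = solve_ivp(f, (0, 1000), x0, rtol=1e-8, atol=1e-10)
        r0, rT = np.linalg.norm(x0), np.linalg.norm(sol.y[:, -1])
        print(f"||x(0)|| = {r0:6.2f}  ->  ||x(1000)|| = {rT:8.4f}")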

6. LQR control with quantized gain matrix. Consider the ODE with linear state feedback

    y'''(t) + 0.95ÿ(t) + 0.75ẏ(t) + 0.85y(t) = u(t),    u(t) = k₁y(t) + k₂ẏ(t) + k₃ÿ(t).

(a) Find k₁^opt, k₂^opt, and k₃^opt that minimize J = ∫₀^∞ ( y(t)² + u(t)² ) dt. (Of course, you must explain your method.) What is J^opt, the minimum value of J, for initial condition y(0) = ẏ(0) = ÿ(0) = 1?

(b) Let kᵢ^quant = round(kᵢ^opt) (where round(a) denotes the nearest integer to a), and let J^quant be the value of J when k = k^quant. What is the value of J^quant for initial condition y(0) = ẏ(0) = ÿ(0) = 1? For what (nonzero) initial condition is the ratio J^quant/J^opt maximum? What is this maximum value?

Solution:

(a) If we define x(t) = (y(t), ẏ(t), ÿ(t)), we have ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) with

    A = [  0      1      0
           0      0      1
          −0.85  −0.75  −0.95 ],    B = [ 0; 0; 1 ],    C = [ 1  0  0 ].

This is an LQR problem with Q = CᵀC and R = 1. Defining K = [k₁ k₂ k₃], we have K = −BᵀP, where P is the solution of the ARE

    CᵀC + AᵀP + PA − PBBᵀP = 0.

The minimum value of J for y(0) = ẏ(0) = ÿ(0) = 1 is J^opt = x(0)ᵀPx(0) = 11.35.

(b) Let K^quant be the gain matrix with quantized coefficients. To find the value of J in this case, we need to solve (for P^quant) the Lyapunov equation

    (A + BK^quant)ᵀP^quant + P^quant(A + BK^quant) + (CᵀC + (K^quant)ᵀK^quant) = 0.

We then have J^quant = x(0)ᵀP^quant x(0). Doing this we find that J^quant = 14.1, which is about 25% more than the minimum value. To find the maximum value of J^quant/J^opt we note that

    J^quant/J^opt = (xᵀP^quant x)/(xᵀPx) = (xᵀP^{1/2} · P^{−1/2}P^quant P^{−1/2} · P^{1/2}x)/‖P^{1/2}x‖² = (zᵀTz)/‖z‖²,

where z = P^{1/2}x and T = P^{−1/2}P^quant P^{−1/2}. Therefore J^quant/J^opt ≤ λmax(T). The initial condition that achieves this maximum is found from the eigenvector v_max of T corresponding to its maximum eigenvalue, as x(0) = P^{−1/2}v_max. For this system it turns out that

    J^quant/J^opt ≤ 1.37,

and the maximizing initial condition is x(0) = (0.53, 1.74, 1.6).
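The numbers quoted above can be recomputed with a few lines of SciPy. This sketch is not part of the original solution, and the printed values should agree with the quoted ones only up to rounding (and up to scaling and sign of the worst-case initial condition).

    import numpy as np
    from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov, eigh

    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [-0.85, -0.75, -0.95]])
    B = np.array([[0.0], [0.0], [1.0]])
    C = np.array([[1.0, 0.0, 0.0]])
    Q, R = C.T @ C, np.eye(1)
    x0 = np.array([1.0, 1.0, 1.0])

    # (a) LQR gain K = -B'P (since R = 1) and optimal cost J_opt = x0' P x0
    P = solve_continuous_are(A, B, Q, R)
    K = -B.T @ P
    print("K_opt   =", K.round(3), " J_opt   =", round(float(x0 @ P @ x0), 3))

    # (b) quantized gains: the closed-loop Lyapunov equation gives P_quant and J_quant
    Kq = np.round(K)
    Acl = A + B @ Kq
    Pq = solve_continuous_lyapunov(Acl.T, -(Q + Kq.T @ Kq))
    print("K_quant =", Kq, " J_quant =", round(float(x0 @ Pq @ x0), 3))

    # worst-case ratio = largest generalized eigenvalue of (P_quant, P);
    # the corresponding eigenvector is the worst-case initial condition (up to scale)
    w, V = eigh(Pq, P)
    print("max J_quant/J_opt =", round(w[-1], 3), " at x(0) =", V[:, -1].round(3))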

7. Schur complements and matrix inequalities. Consider a matrix X = Xᵀ ∈ R^{n×n} partitioned as

    X = [ A   B
          Bᵀ  C ],

where A ∈ R^{k×k}. If det A ≠ 0, the matrix S = C − BᵀA⁻¹B is called the Schur complement of A in X. Schur complements arise in many situations and appear in many important formulas and theorems. For example, we have det X = det A det S. (You don't have to prove this.) The Schur complement arises in several characterizations of positive definiteness or semidefiniteness of a block matrix. As examples we have the following three theorems:

- X > 0 if and only if A > 0 and S > 0.
- If A > 0, then X ≥ 0 if and only if S ≥ 0.
- X ≥ 0 if and only if A ≥ 0, Bᵀ(I − AA†) = 0, and C − BᵀA†B ≥ 0, where A† is the pseudo-inverse of A. (C − BᵀA†B serves as a generalization of the Schur complement in the case where A is positive semidefinite but singular.)

Prove one of these theorems. (You can choose which one.)

Solution

If A > 0, then X ≥ 0 if and only if S ≥ 0. Let f(u, v) = [uᵀ vᵀ] X [uᵀ vᵀ]ᵀ = uᵀAu + 2uᵀBv + vᵀCv. From homework 1, we know that if A > 0, then inf_u f(u, v) = vᵀSv. If S ≥ 0, then f(u, v) ≥ inf_u f(u, v) = vᵀSv ≥ 0 for all u, v, and hence X ≥ 0. This proves the "if" part. To prove the "only if" part we note that f(u, v) ≥ 0 for all (u, v) implies that inf_u f(u, v) ≥ 0 for all v, i.e., S ≥ 0.

X ≥ 0 if and only if A ≥ 0, Bᵀ(I − AA†) = 0, and C − BᵀA†B ≥ 0. Suppose A ∈ R^{k×k} with Rank(A) = r. Then there exist matrices Q₁ ∈ R^{k×r}, Q₂ ∈ R^{k×(k−r)} and an invertible diagonal matrix Λ ∈ R^{r×r} such that

    A = [Q₁ Q₂] diag(Λ, 0) [Q₁ Q₂]ᵀ,    [Q₁ Q₂]ᵀ[Q₁ Q₂] = I.

The matrix T = diag([Q₁ Q₂], I) ∈ R^{n×n} is nonsingular, and therefore X ≥ 0 if and only if

    TᵀXT = [ Λ      0      Q₁ᵀB
             0      0      Q₂ᵀB
             BᵀQ₁   BᵀQ₂   C    ] ≥ 0.

We have Λ ≥ 0 if and only if A ≥ 0, and the zero diagonal block forces Q₂ᵀB = 0 (a positive semidefinite matrix with a zero diagonal block must have zero off-diagonal blocks in the corresponding row and column). It can be verified that

    A† = Q₁Λ⁻¹Q₁ᵀ,    I − AA† = Q₂Q₂ᵀ,

so Q₂ᵀB = 0 is equivalent to Q₂Q₂ᵀB = (I − AA†)B = 0, i.e., Bᵀ(I − AA†) = 0. Moreover, since Λ is invertible (so that Λ ≥ 0 means Λ > 0), the remaining condition

    [ Λ      Q₁ᵀB
      BᵀQ₁   C    ] ≥ 0

holds, by the second theorem, if and only if C − BᵀQ₁Λ⁻¹Q₁ᵀB = C − BᵀA†B ≥ 0. Putting these together, X ≥ 0 if and only if A ≥ 0, Bᵀ(I − AA†) = 0, and C − BᵀA†B ≥ 0, as claimed.
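These characterizations are easy to spot-check numerically. The sketch below is not part of the original solution; it assumes NumPy, builds a random X with A > 0 and S ≥ 0, and verifies that X is positive semidefinite and that det X = det A det S.

    import numpy as np

    rng = np.random.default_rng(0)
    k, m = 3, 2

    # build X = [[A, B], [B', C]] with A > 0 and Schur complement S >= 0;
    # by the second theorem, X should then be positive semidefinite
    A = rng.standard_normal((k, k)); A = A @ A.T + np.eye(k)   # A > 0
    B = rng.standard_normal((k, m))
    S = rng.standard_normal((m, m)); S = S @ S.T                # S >= 0
    C = B.T @ np.linalg.solve(A, B) + S
    X = np.block([[A, B], [B.T, C]])

    print("min eig(X)            =", np.linalg.eigvalsh(X).min())   # >= 0 up to roundoff
    print("det X - det A * det S =", np.linalg.det(X) - np.linalg.det(A) * np.linalg.det(S))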