Lectures on Dynamic Systems and Control
Mohammed Dahleh, Munther A. Dahleh, George Verghese
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology

Chapter 8
Stabilization: State Feedback

8.1 Introduction: Stabilization

One reason feedback control systems are designed is to stabilize systems that may be unstable. Although our earlier results show that a reachable but unstable system can have its state controlled by appropriate choice of control input, these results were obtained under some critical assumptions:

- the control must be unrestricted (our reachability results assumed the control could be chosen freely);
- the system must be accurately described (i.e., we must have an accurate model of it);
- the initial state must be accurately known.

The trouble with unstable systems is that they are unforgiving when assumptions such as these do not hold. Even if the first assumption holds, there will undoubtedly be modeling errors, such as improperly modeled dynamics or incompletely modeled disturbances (thus violating the second assumption). And even if we assume that the dynamics are accurately modeled, the initial state of the system is unlikely to be known precisely (violating the third assumption). It is thus clear that we need ongoing feedback of information about the state of the system in order to have any hope of stabilizing an unstable system. Feedback can also improve the performance of a stable system (or, if the feedback is badly chosen, it can degrade the performance and possibly cause instability!). We shall come to understand these issues better over the remaining lectures.

How, then, can we design feedback controllers that stabilize a given system (or plant, the word used to describe the system that we are interested in controlling)? To answer this, we have to address the issue of what kinds of feedback variables are available to the controller. There are, in general, two types of feedback: state feedback and output feedback. With state feedback, all of the state variables (e.g., x) of a system are available for use by the controller, whereas with output feedback, only a set of output variables (e.g., y = Cx + Du) related to the state variables is available.

The state feedback problem is easier than the output feedback one, and richer in the sense that we can do more with control. In the remainder of this chapter, we examine eigenvalue placement by state feedback. All our discussion here will be for the case of a known LTI plant. The issue of uncertainty and unmodeled dynamics should be dealt with as discussed in previous chapters, namely by imposing a norm constraint on an appropriate closed-loop transfer function. Our development in this lecture will use the notation of CT (continuous-time) systems, but there is no essential difference for the DT (discrete-time) case.

8.2 State Feedback

In the case of state feedback, we measure all of the state variables. Thus the plant specification is (A, B, I); we omit the direct-feedthrough matrix D for simplicity, because including it would introduce only notational complications without changing any conclusions. Our plant specification implies that the output equation is simply y = x. (In many applications, direct measurement of all system state variables is either impossible or impractical. We address the important topic of output feedback a little later.) For now, let us examine state feedback in further detail.

Let our control u be specified by u = Fx + v, where F is a constant matrix and v is an external input. This corresponds to LTI state feedback. Combining this control law with the state-space description of our nth-order plant, namely

x^{+} = Ax + Bu \qquad (8.1)

y = x \qquad (8.2)

we find that the closed-loop dynamics are described by

x^{+} = (A + BF)x + Bv \qquad (8.3)

where we adopt the notation

x^{+} = \begin{cases} \dot{x}(t) & \text{for CT systems} \\ x(k+1) & \text{for DT systems.} \end{cases} \qquad (8.4)

As is apparent from (8.3), the closed-loop system is stable if and only if the eigenvalues of A + BF all lie in the stable region. In other words, F stabilizes this system if and only if

\sigma(A + BF) \subset \{\text{open left half of the complex plane in continuous time; open unit disc in discrete time}\}, \qquad (8.5)

where \sigma(A + BF) is the spectrum (set of eigenvalues) of A + BF.

A key question is: "Can F be chosen so that the eigenvalues of A + BF are placed at arbitrary desired locations?" The answer is provided by the following theorem.

Theorem 8.1 (Eigenvalue Placement) There exists a matrix F such that

\det\big(\lambda I - [A + BF]\big) = \prod_{i=1}^{n} (\lambda - \lambda_i) \qquad (8.6)

for any arbitrary self-conjugate set of complex numbers \lambda_1, \ldots, \lambda_n \in \mathbb{C} if and only if (A, B) is reachable.

Proof. To prove that reachability is necessary, suppose that \lambda_i \in \sigma(A) is an unreachable mode. Let w_i be the left eigenvector of A associated with \lambda_i. It follows from the modal reachability test that w_i' A = \lambda_i w_i' and w_i' B = 0. Therefore,

w_i'(A + BF) = w_i' A + (w_i' B)F = w_i' A + 0 = \lambda_i w_i'. \qquad (8.7)

Equation (8.7) implies that \lambda_i is an eigenvalue of A + BF for any F. Thus, if \lambda_i is an unreachable mode of the plant, then there exists no state feedback matrix F that can move it.

We shall prove sufficiency for the single-input (B = b) case only. (The easiest proof for the multi-input case is, as in the single-input case below, based on a canonical form for reachable multi-input systems, which we have not examined in any detail, and this is why we omit the multi-input proof.) Since (A, b) is reachable, there exists a similarity transformation x = Tz such that \tilde{A} = T^{-1}AT and \tilde{b} = T^{-1}b have the controller canonical form

\tilde{A} = T^{-1}AT = \begin{bmatrix} -\alpha_1 & -\alpha_2 & \cdots & -\alpha_{n-1} & -\alpha_n \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix} \qquad (8.8)

\tilde{b} = T^{-1}b = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}. \qquad (8.9)

Recall that the coefficients \alpha_i in the matrix \tilde{A} define the characteristic polynomial of \tilde{A} and A:

\alpha(\lambda) = \lambda^n + \alpha_1 \lambda^{n-1} + \cdots + \alpha_n. \qquad (8.10)

Let the desired characteristic polynomial be

\prod_{i=1}^{n} (\lambda - \lambda_i) = \lambda^n + d_1 \lambda^{n-1} + \cdots + d_n = d(\lambda). \qquad (8.11)

If u = \tilde{F}z with \tilde{F} being the row vector \tilde{F} = \begin{bmatrix} \tilde{f}_1 & \tilde{f}_2 & \cdots & \tilde{f}_n \end{bmatrix}, then

\tilde{A} + \tilde{b}\tilde{F} = \begin{bmatrix} -\alpha_1 + \tilde{f}_1 & -\alpha_2 + \tilde{f}_2 & \cdots & -\alpha_n + \tilde{f}_n \\ 1 & 0 & \cdots & 0 \\ & \ddots & & \vdots \\ 0 & \cdots & 1 & 0 \end{bmatrix}. \qquad (8.12)

It is evident that we simply have to choose \tilde{f}_i = -d_i + \alpha_i for i = 1, \ldots, n to get the desired closed-loop characteristic polynomial d(\lambda). We have thus been able to place the eigenvalues in the desired locations. Now, using the similarity transformation and \tilde{F}, we must find F so that A + bF has the same eigenvalues.

Since u = \tilde{F}z and x = Tz, we have u = \tilde{F}T^{-1}x. Thus we should define F = \tilde{F}T^{-1}. (Verify that A + bF has the same eigenvalues as \tilde{A} + \tilde{b}\tilde{F}.) This completes the proof.

The calculation described above, of a feedback matrix that places the poles of A + bF at the roots of a specified polynomial d(s), can be represented succinctly by a simple formula. The matrices A and \tilde{A} have the same characteristic polynomial \alpha(\lambda), which implies that \tilde{A} satisfies \alpha(\tilde{A}) = 0, i.e.,

\tilde{A}^n = -\alpha_1 \tilde{A}^{n-1} - \alpha_2 \tilde{A}^{n-2} - \cdots - \alpha_n I.

Based on the above relation, the desired characteristic polynomial satisfies

d(\tilde{A}) = \tilde{A}^n + d_1 \tilde{A}^{n-1} + d_2 \tilde{A}^{n-2} + \cdots + d_n I = (d_1 - \alpha_1)\tilde{A}^{n-1} + (d_2 - \alpha_2)\tilde{A}^{n-2} + \cdots + (d_n - \alpha_n)I.

We define the unit vectors e_i \in \mathbb{R}^n, i = 1, \ldots, n, by e_i' = [\,0 \ \cdots \ 0 \ 1 \ 0 \ \cdots \ 0\,] (with the 1 in the ith position). Due to the special structure of the matrix \tilde{A}, the reader should be able to check that e_n'\tilde{A}^{k} = e_{n-k}' for k = 1, \ldots, n-1, so that

e_n'\, d(\tilde{A}) = (d_1 - \alpha_1)e_n'\tilde{A}^{n-1} + (d_2 - \alpha_2)e_n'\tilde{A}^{n-2} + \cdots + (d_n - \alpha_n)e_n' I = (d_1 - \alpha_1)e_1' + (d_2 - \alpha_2)e_2' + \cdots + (d_n - \alpha_n)e_n' = -\tilde{F}.

Recall that the transformation that takes the system into the reachable (controller) canonical form is given by T = R_n \tilde{R}_n^{-1}, where

R_n = \begin{bmatrix} b & Ab & \cdots & A^{n-1}b \end{bmatrix}, \qquad \tilde{R}_n = \begin{bmatrix} \tilde{b} & \tilde{A}\tilde{b} & \cdots & \tilde{A}^{n-1}\tilde{b} \end{bmatrix}.

The matrix \tilde{R}_n has the following form:

\tilde{R}_n = \begin{bmatrix} 1 & \times & \cdots & \times \\ 0 & 1 & \cdots & \times \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}, \qquad (8.13)

where \times denotes entries that can be different from zero. The feedback matrix F is related to \tilde{F} via F = \tilde{F}T^{-1}, which implies that

F = \tilde{F}T^{-1} = -e_n'\, d(\tilde{A})\, T^{-1} = -e_n'\, d(T^{-1}AT)\, T^{-1} = -e_n' T^{-1} d(A) = -e_n' \tilde{R}_n R_n^{-1} d(A).

Note that from Equation (8.13) we have e_n' \tilde{R}_n = e_n', which results in the following formula, commonly called Ackermann's formula:

F = -e_n' R_n^{-1} d(A). \qquad (8.14)
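As a quick numerical illustration of Ackermann's formula (this sketch is not part of the original notes), the computation can be carried out directly in Python with numpy; the third-order system and the desired pole locations below are invented purely for the example.

import numpy as np

def ackermann(A, b, desired_poles):
    # Single-input pole placement via Ackermann's formula F = -e_n' R_n^{-1} d(A).
    # Sign convention follows these notes (u = F x + v), so the eigenvalues of
    # A + b F end up at the desired locations.
    n = A.shape[0]
    # Reachability matrix R_n = [b, Ab, ..., A^{n-1} b]
    Rn = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
    # Coefficients of the desired polynomial d(s), monic, highest power first
    d = np.poly(desired_poles)
    # Evaluate d(A) = A^n + d_1 A^{n-1} + ... + d_n I
    dA = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(d))
    en = np.zeros((1, n))
    en[0, -1] = 1.0
    return -en @ np.linalg.solve(Rn, dA)    # -e_n' R_n^{-1} d(A)

# Hypothetical reachable third-order example
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [2., 1., -1.]])
b = np.array([[0.], [0.], [1.]])
F = ackermann(A, b, [-1., -2., -3.])
print(np.linalg.eigvals(A + b @ F))         # approximately -1, -2, -3

Standard control libraries offer equivalent pole-placement routines, and those are preferable numerically for larger n, since forming R_n^{-1} d(A) can be badly conditioned.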

Some comments are in order:

1. If (A, B) is not reachable, then the reachable modes, and only these, can be changed by state feedback.

2. The pair (A, B) is said to be stabilizable if its unreachable modes are all stable, because in this case, and only in this case, F can be selected to move all unstable modes to stable locations.

3. Despite what the theorem says we can do, there are good practical reasons to temper the application of the theorem. Trying to make the closed-loop dynamics very fast generally requires a large F, and hence large control effort, but in practice there are limits to how much control can be exercised. Furthermore, unmodeled dynamics could lead to instability if we get too ambitious with our feedback. The so-called linear-quadratic regulator (LQR) formulation of the controller problem for linear systems uses an integral-square (i.e., quadratic) cost criterion to pose a compromise between the desire to bring the state to zero and the desire to limit control effort. In the LTI case, and with the integral extending over an infinite interval, the optimal control turns out to be precisely an LTI state feedback. The solution of the LQR problem for this case enables computation of the optimal feedback gain matrix F (most commonly through the solution of an algebraic Riccati equation). You are led through some exploration of this in the homework. See also the article on "Linear Quadratic Regulator Control" by Lublin and Athans in The Control Handbook, W. S. Levine (Ed.), CRC Press, 1996.

4. State feedback cannot change reachability, but it can affect observability, either destroying it or creating it.

5. State feedback can change the poles of an LTI system, but it does not affect the zeros (unless the feedback happens to induce unobservability, in which case what has occurred is that a pole has been shifted to exactly cancel a zero). Note that, if the open-loop and closed-loop descriptions are minimal, then their transmission zeros are precisely the values of s where their respective system matrices drop rank. These system matrices are related by a nonsingular transformation:

\begin{bmatrix} sI - (A + BF) & -B \\ C & 0 \end{bmatrix} = \begin{bmatrix} sI - A & -B \\ C & 0 \end{bmatrix}\begin{bmatrix} I & 0 \\ F & I \end{bmatrix}. \qquad (8.15)

Hence the closed-loop and open-loop zeros are identical; a numerical check of this appears in the sketch below. (We omit a more detailed discussion of what happens in the nonminimal case.)
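The zero-invariance claim in comment 5 is easy to check numerically. The following minimal sketch (not from the original notes, assuming numpy and scipy) computes the finite generalized eigenvalues of the system-matrix pencil before and after state feedback; the third-order SISO system and the gain F are arbitrary illustrative choices.

import numpy as np
from scipy.linalg import eig

# Arbitrary minimal SISO example: transfer function (2s + 1)/(s^3 + 6s^2 + 11s + 6)
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [-6., -11., -6.]])
B = np.array([[0.], [0.], [1.]])
C = np.array([[1., 2., 0.]])
F = np.array([[-114., -63., -9.]])   # moves the poles from {-1,-2,-3} to {-4,-5,-6}

def transmission_zeros(A, B, C):
    # Finite generalized eigenvalues of the system-matrix pencil (D = 0 case).
    n, m = B.shape
    p = C.shape[0]
    M = np.block([[A, B],
                  [C, np.zeros((p, m))]])
    N = np.block([[np.eye(n), np.zeros((n, m))],
                  [np.zeros((p, n)), np.zeros((p, m))]])
    w = eig(M, N, right=False)
    return np.sort_complex(w[np.isfinite(w)])   # drop the infinite eigenvalues

print(transmission_zeros(A, B, C))           # open-loop zero:   -0.5
print(transmission_zeros(A + B @ F, B, C))   # closed-loop zero: -0.5 (unchanged)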

Example 8.1 (Inverted Pendulum)

A cart of mass M slides on a frictionless surface. The cart is pulled by a force u(t). On the cart a pendulum of mass m is attached via a frictionless hinge, as shown in Figure 8.1. The pendulum's center of mass is located at a distance l from either end, and its moment of inertia about the center of mass is denoted by I. The position of the center of mass of the cart is at a distance s(t) from a reference point. The angle θ(t) is the angle that the pendulum makes with respect to the vertical axis, and it is assumed to increase clockwise.

[Figure 8.1: Inverted pendulum. The cart (mass M, weight Mg, supported by the normal force R) is at position s(t) and is pulled by the force u(t); the pendulum (mass m, weight mg) is hinged to the cart, its center of mass is at horizontal position s + l sin θ, and the hinge reaction forces are N (horizontal) and P (vertical).]

First let us write the equations of motion that result from the free-body diagram of the cart. The vertical forces P, R, and Mg balance out. For the horizontal forces we have the equation

M\ddot{s} = u - N. \qquad (8.16)

From the free-body diagram of the pendulum, the balance of forces in the horizontal direction gives the equation

m\,\frac{d^2}{dt^2}\big(s + l\sin\theta\big) = N, \quad \text{i.e.,} \quad m\,\frac{d}{dt}\big(\dot{s} + l\cos\theta\,\dot{\theta}\big) = N, \quad \text{i.e.,} \quad m\big(\ddot{s} - l\sin\theta\,\dot{\theta}^2 + l\cos\theta\,\ddot{\theta}\big) = N, \qquad (8.17)

and the balance of forces in the vertical direction gives the equation

m\,\frac{d^2}{dt^2}\big(l\cos\theta\big) = P - mg, \quad \text{i.e.,} \quad m\,\frac{d}{dt}\big(-l\sin\theta\,\dot{\theta}\big) = P - mg, \quad \text{i.e.,} \quad m\big(-l\cos\theta\,\dot{\theta}^2 - l\sin\theta\,\ddot{\theta}\big) = P - mg. \qquad (8.18)

Finally, balancing the moments around the center of mass gives the equation

I\ddot{\theta} = Pl\sin\theta - Nl\cos\theta. \qquad (8.19)

From equations (8.16) and (8.17) we can eliminate the force N to obtain

(M + m)\ddot{s} + ml\cos\theta\,\ddot{\theta} - ml\sin\theta\,\dot{\theta}^2 = u. \qquad (8.20)

Substituting equations (8.17) and (8.18) into equation (8.19) gives

I\ddot{\theta} = l\big(mg - ml\cos\theta\,\dot{\theta}^2 - ml\sin\theta\,\ddot{\theta}\big)\sin\theta - l\big(m\ddot{s} - ml\sin\theta\,\dot{\theta}^2 + ml\cos\theta\,\ddot{\theta}\big)\cos\theta.

Simplifying the above expression gives the equation

(I + ml^2)\ddot{\theta} = mgl\sin\theta - ml\ddot{s}\cos\theta. \qquad (8.21)

The equations that describe the system are (8.20) and (8.21). We can simplify this pair further by removing the \ddot{\theta} term from equation (8.20) and the \ddot{s} term from equation (8.21). Define the constants

M_t = M + m, \qquad L = \frac{I + ml^2}{ml}.

Substituting \ddot{\theta} from equation (8.21) into equation (8.20) we get

\Big(1 - \frac{ml}{M_t L}\cos^2\theta\Big)\ddot{s} + \frac{gml}{M_t L}\sin\theta\cos\theta - \frac{ml}{M_t}\sin\theta\,\dot{\theta}^2 = \frac{u}{M_t}. \qquad (8.22)

Similarly, we can substitute \ddot{s} from equation (8.20) into equation (8.21) to get

\Big(1 - \frac{ml}{M_t L}\cos^2\theta\Big)\ddot{\theta} - \frac{g}{L}\sin\theta + \frac{ml}{M_t L}\sin\theta\cos\theta\,\dot{\theta}^2 = -\frac{\cos\theta}{M_t L}\,u. \qquad (8.23)

These are nonlinear equations due to the presence of the terms \sin\theta, \cos\theta, and \dot{\theta}^2. We can linearize these equations around \theta = 0 and \dot{\theta} = 0 by assuming that \theta(t) and \dot{\theta}(t) remain small. Recall that for small \theta we have \sin\theta \approx \theta and \cos\theta \approx 1; using these approximations, and dropping the terms that are quadratic in the small quantities, equations (8.22) and (8.23) linearize to

\Big(1 - \frac{ml}{M_t L}\Big)\ddot{s} + \frac{gml}{M_t L}\,\theta = \frac{u}{M_t}, \qquad \Big(1 - \frac{ml}{M_t L}\Big)\ddot{\theta} - \frac{g}{L}\,\theta = -\frac{u}{M_t L}.
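The algebra leading from (8.20)-(8.21) to (8.22)-(8.23) is easy to get wrong by hand. The following sketch (not part of the original notes, assuming sympy is available) solves (8.20)-(8.21) symbolically for the two accelerations and confirms equation (8.22); the check for (8.23) is analogous.

import sympy as sp

t = sp.symbols('t')
M, m, l, I, g = sp.symbols('M m l I g', positive=True)
u = sp.symbols('u')
s = sp.Function('s')(t)
th = sp.Function('theta')(t)

Mt = M + m
L = (I + m*l**2) / (m*l)

# Equations (8.20) and (8.21)
eq20 = sp.Eq((M + m)*s.diff(t, 2) + m*l*sp.cos(th)*th.diff(t, 2)
             - m*l*sp.sin(th)*th.diff(t)**2, u)
eq21 = sp.Eq((I + m*l**2)*th.diff(t, 2),
             m*g*l*sp.sin(th) - m*l*s.diff(t, 2)*sp.cos(th))

# Solve the pair for the two accelerations
acc = sp.solve([eq20, eq21], [s.diff(t, 2), th.diff(t, 2)], dict=True)[0]

# Left-hand side of (8.22) with the solved-for s'' substituted; should equal u/Mt
lhs22 = ((1 - m*l/(Mt*L)*sp.cos(th)**2) * acc[s.diff(t, 2)]
         + g*m*l/(Mt*L)*sp.sin(th)*sp.cos(th)
         - m*l/Mt*sp.sin(th)*th.diff(t)**2)
print(sp.simplify(lhs22 - u/Mt))   # prints 0, confirming (8.22)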

Choose the state variables

x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} s \\ \dot{s} \\ \theta \\ \dot{\theta} \end{bmatrix}

to write a state-space model for the inverted pendulum. Using these state variables, the following state-space model is easily obtained from the linearized equations:

\frac{d}{dt}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & -\dfrac{mlg}{\beta M_t L} & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & \dfrac{g}{\beta L} & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} + \begin{bmatrix} 0 \\ \dfrac{1}{\beta M_t} \\ 0 \\ -\dfrac{1}{\beta M_t L} \end{bmatrix} u, \qquad y = x,

where the constant \beta is given by

\beta = 1 - \frac{ml}{M_t L}.

Intuitively it is clear that the equilibrium point [s = constant, \dot{s} = 0, \theta = 0, \dot{\theta} = 0] is an unstable equilibrium point. To verify this we compute the eigenvalues of the matrix A by solving the equation \det(\lambda I - A) = 0, which here reads \lambda^2\big(\lambda^2 - g/(\beta L)\big) = 0. The eigenvalues are

0, \quad 0, \quad \pm\sqrt{\frac{g}{\beta L}}.

We therefore have two eigenvalues at the origin (on the j\omega axis) and one eigenvalue in the open right half of the complex plane, which indicates instability.

Now suppose specific numerical values are given for M, m, l, and I (and of course g = 9.8 m/s^2). Assume that we can directly measure the state variables s, \dot{s}, \theta, and \dot{\theta}. We want to design a feedback control law u = Fx + r to stabilize this system. To do so, we choose a feedback matrix F that places the poles of the closed-loop system at four desired locations in the open left half-plane, using Ackermann's formula

F = -e_n' R_n^{-1} d(A),

where d(\lambda) is the polynomial whose roots are the desired new pole locations and R_n is the reachability matrix. Substituting the parameters of the problem yields a specific numerical gain F, and the resulting closed-loop system is

\frac{d}{dt}x = (A + BF)x + Br.

In Figure 8.2 we show the trajectories of the closed-loop linearized system when the reference input r(t) is identically zero and the initial angular displacement of the pendulum is small but nonzero; the initial conditions on all other state variables are zero.

[Figure 8.2: Plot of the state variables of the closed-loop linearized system with r = 0 and an initial condition in which only the pendulum angle is nonzero. Panels: distance s from the reference point, velocity ds/dt, pendulum angle theta with respect to the vertical, and angular velocity d theta/dt.]

We can also look at the performance of this controller when it is applied to the nonlinear system. In this case we should simulate the dynamics of the following nonlinear closed-loop system of equations:

\frac{d}{dt}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} x_2 \\[4pt] \dfrac{1}{\beta(x_3)}\Big( -\dfrac{mlg}{M_t L}\sin(x_3)\cos(x_3) + \dfrac{ml}{M_t}\sin(x_3)\,x_4^2 + \dfrac{u}{M_t} \Big) \\[8pt] x_4 \\[4pt] \dfrac{1}{\beta(x_3)}\Big( \dfrac{g}{L}\sin(x_3) - \dfrac{ml}{M_t L}\sin(x_3)\cos(x_3)\,x_4^2 - \dfrac{\cos(x_3)}{M_t L}\,u \Big) \end{bmatrix}, \qquad u = Fx + r,

where \beta(x_3) is defined as

\beta(x_3) = 1 - \frac{ml}{M_t L}\cos^2(x_3).

In Figure 8.3 we show the trajectories of the nonlinear closed-loop system when the reference input r(t) is identically zero and the initial angular displacement of the pendulum is the same small value used for the linearized simulation; the initial conditions on all the other state variables are again zero.

[Figure 8.3: Plot of the state variables of the nonlinear closed-loop system with r = 0 and the same initial condition as in Figure 8.2. Panels: distance s from the reference point, velocity ds/dt, pendulum angle theta with respect to the vertical, and angular velocity d theta/dt.]
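To make the design and simulation steps of this example concrete, here is a minimal Python sketch (not part of the original notes). It assumes numpy and scipy, it uses illustrative parameter values, pole locations, and an initial tilt chosen here rather than the values behind Figures 8.2 and 8.3, and it uses scipy.signal.place_poles in place of Ackermann's formula.

import numpy as np
from scipy.signal import place_poles
from scipy.integrate import solve_ivp

# Illustrative (not the notes') parameter values
M, m, l, I, g = 2.0, 0.1, 0.5, 0.025, 9.8
Mt = M + m
L = (I + m*l**2) / (m*l)
beta = 1 - m*l/(Mt*L)

# Linearized model, states x = [s, s_dot, theta, theta_dot]
A = np.array([[0., 1., 0., 0.],
              [0., 0., -m*l*g/(beta*Mt*L), 0.],
              [0., 0., 0., 1.],
              [0., 0., g/(beta*L), 0.]])
B = np.array([[0.], [1/(beta*Mt)], [0.], [-1/(beta*Mt*L)]])

# Place the closed-loop poles at arbitrary stable locations; u = F x, so F = -K
F = -place_poles(A, B, [-1., -2., -3., -4.]).gain_matrix
print(np.linalg.eigvals(A + B @ F))        # -1, -2, -3, -4

# Simulate the nonlinear closed-loop system (r = 0) given by (8.22)-(8.23)
def rhs(t, x):
    s, sd, th, thd = x
    u = (F @ x).item()
    b = 1 - (m*l/(Mt*L))*np.cos(th)**2            # beta(theta)
    sdd = (u/Mt - (m*l*g/(Mt*L))*np.sin(th)*np.cos(th)
           + (m*l/Mt)*np.sin(th)*thd**2) / b
    thdd = ((g/L)*np.sin(th) - (m*l/(Mt*L))*np.sin(th)*np.cos(th)*thd**2
            - np.cos(th)*u/(Mt*L)) / b
    return [sd, sdd, thd, thdd]

sol = solve_ivp(rhs, (0., 10.), [0., 0., 0.1, 0.], max_step=0.01)
print(sol.y[:, -1])                        # the state decays toward zero

For a small initial tilt, repeating the simulation with the linearized closed-loop dynamics \dot{x} = (A + BF)x gives nearly identical trajectories, which is essentially the comparison between Figures 8.2 and 8.3.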

Exercises

Exercise 8.1 Let (A, B, C) be a reachable and observable LTI state-space description of a discrete-time or continuous-time system. Let its input u be related to its output y by the following output feedback law:

u = Fy + r

for some constant matrix F, where r is a new external input to the closed-loop system that results from the output feedback.

(a) Write down the state-space description of the system mapping r to y.

(b) Is the new system reachable? Prove reachability, or show a counterexample.

(c) Is the new system observable? Prove observability, or show a counterexample.

Exercise 8.2 (Discrete-Time "Linear-Quadratic" or LQ Control) Given the linear system x_{i+1} = Ax_i + Bu_i and a specified initial condition x_0, we wish to find the sequence of controls u_0, u_1, \ldots, u_N that minimizes the quadratic criterion

J_N(x_0, u_0, \ldots, u_N) = \sum_{i=0}^{N} \big(x_{i+1}' Q x_{i+1} + u_i' R u_i\big).

Here Q is positive semi-definite (and hence of the form Q = V'V for some matrix V) and R is positive definite (and hence of the form R = W'W for some nonsingular matrix W). The rationale for this criterion is that it permits us, through proper choice of Q and R, to trade off our desire for small state excursions against our desire to use low control effort (with state excursions and control effort measured in a sum-of-squares sense). This problem will demonstrate that the optimal control sequence for this criterion has the form of a time-varying linear state feedback.

Let the optimal control sequence be denoted by u_0^*, \ldots, u_N^*, let the resulting state sequence be denoted by x_1^*, \ldots, x_{N+1}^*, and let the resulting value of J_N(x_0, u_0, \ldots, u_N) be denoted by L_N(x_0).

(a) Argue that the sequence of controls u_k^*, \ldots, u_N^* is also the sequence of controls u_k, \ldots, u_N that minimizes the cost-to-go J_{k,N}(x_k^*, u_k, \ldots, u_N) = \sum_{i=k}^{N}\big(x_{i+1}' Q x_{i+1} + u_i' R u_i\big), for 0 \le k \le N. [This observation, in its general form, is termed the principle of optimality, and underlies the powerful optimization framework of dynamic programming.]

(b) Show that

J_{k,N}(x_k, u_k, \ldots, u_N) = \sum_{\ell=k}^{N} \|e_\ell\|^2,

where

e_\ell = \begin{bmatrix} V x_{\ell+1} \\ W u_\ell \end{bmatrix} = C_1 x_\ell + D_1 u_\ell, \qquad C_1 = \begin{bmatrix} VA \\ 0 \end{bmatrix}, \qquad D_1 = \begin{bmatrix} VB \\ W \end{bmatrix}.

(c) Let

U_{k,N} = \begin{bmatrix} u_k \\ \vdots \\ u_N \end{bmatrix}, \qquad E_{k,N} = \begin{bmatrix} e_k \\ \vdots \\ e_N \end{bmatrix}.

Show that E_{k,N} = C_{k,N}\, x_k + D_{k,N}\, U_{k,N} for appropriate matrices C_{k,N} and D_{k,N}, and show that D_{k,N} has full column rank.

(d) Note from (b) that J_{k,N}(x_k, u_k, \ldots, u_N) = \|E_{k,N}\|^2. Use this and the results of (a), (c) to show that

U_{k,N}^* = -\big(D_{k,N}' D_{k,N}\big)^{-1} D_{k,N}'\, C_{k,N}\, x_k^*

and hence that u_k^* = F_k x_k^* for some state feedback gain matrix F_k, 0 \le k \le N. The optimal control sequence thus has the form of a time-varying linear state feedback.

(e) Assuming the optimal control problem has a solution for N = \infty, argue that in this "infinite-horizon" case the optimal control is given by u_k^* = F x_k^* for a constant state feedback gain matrix F.

Exercise 8.3 (Continuous-Time LQ Control) Consider the controllable and observable system \dot{x}(t) = Ax(t) + Bu(t), y(t) = Cx(t). It can be shown that the control which minimizes

J = \int_0^\infty \big[\, y'(t)y(t) + u'(t)Ru(t) \,\big]\, dt,

with R positive definite, is of the form u(t) = Fx(t), where

F = -R^{-1}B'P \qquad (8.24)

and where P is the unique, symmetric, positive definite solution of the following equation (called the algebraic Riccati equation or ARE):

PA + A'P + Q - PBR^{-1}B'P = 0, \qquad Q = C'C. \qquad (8.25)

This control is guaranteed to be stabilizing. The significance of P is that the minimum value of J is given by x'(0)Px(0).

In the case where u and y are scalar, so that R is also a scalar which we denote by r, the optimum closed-loop eigenvalues, i.e. the eigenvalues of A + BF, can be shown to be the left-half-plane roots of the so-called root-square characteristic polynomial

a(s)a(-s) + p(s)\, r^{-1}\, p(-s),

where a(s) = \det(sI - A) and p(s)/a(s) is the transfer function from u to y.

Now consider the particular case where

A = \begin{bmatrix} 0 & 1 \\ 9 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad C = \begin{pmatrix} 1 & 0 \end{pmatrix}.

This could represent a magnetic suspension scheme with actuating current u and position y (or a simplified model of an inverted pendulum).

(a) Show that the system is unstable.

(b) Find the transfer function from u to y.

(c) Using the root-square characteristic polynomial for this problem, approximately determine, in terms of r, the optimum closed-loop eigenvalues, assuming r \gg 1.

(d) Determine the optimum closed-loop eigenvalues for r \to \infty, and find the F that gives this set of eigenvalues.

(e) Verify the result in (d) by computing the optimal gain F via the formulas in (8.24) and (8.25). (In order to get a meaningful solution of the ARE, you should not set r^{-1} = 0, but still use the fact that r \gg 1.)

Exercise 8.4 (Eigenstructure Assignment) Let (A, B, I) be an m-input, reachable, nth-order LTI system. Let the input be given by the LTI state feedback u = Fx. Suppose we desire the new, closed-loop eigenvalues to be \lambda_i, with associated eigenvectors p_i. We have seen that the \lambda_i can be placed arbitrarily by choice of F (subject only to the requirement that they lie at self-conjugate locations, i.e. for each complex value we also select its conjugate). Assume in this problem that none of the \lambda_i's are eigenvalues of A.

(a) Show that the eigenvector p_i associated with \lambda_i must lie in the m-dimensional subspace \mathrm{Ra}\big[(\lambda_i I - A)^{-1}B\big], i.e.,

p_i = (\lambda_i I - A)^{-1} B q_i

for some q_i.

(b) Show that if p_1, \ldots, p_n are a set of attainable, linearly independent, closed-loop eigenvectors, then

F = \begin{bmatrix} q_1 & \cdots & q_n \end{bmatrix}\begin{bmatrix} p_1 & \cdots & p_n \end{bmatrix}^{-1},

where q_1, \ldots, q_n are as defined in (a).

(c) Verify that specifying the closed-loop eigenvalues and eigenvectors, subject to the restrictions in (a), involves specifying exactly nm numbers, which matches the number of free parameters in F.
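As a companion to the continuous-time LQ formulas (8.24)-(8.25) quoted in Exercise 8.3, here is a minimal Python sketch of how such a gain is computed numerically (this is only an illustration of those formulas, not a solution to the exercise). It assumes scipy; the second-order system and the weight R are arbitrary choices, different from the exercise's model.

import numpy as np
from scipy.linalg import solve_continuous_are

# u = F x with F = -R^{-1} B' P, where P solves the ARE
# P A + A' P + Q - P B R^{-1} B' P = 0, with Q = C' C.
A = np.array([[0., 1.],
              [2., -1.]])        # unstable open loop (eigenvalues 1 and -2)
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
R = np.array([[0.5]])
Q = C.T @ C

P = solve_continuous_are(A, B, Q, R)
F = -np.linalg.solve(R, B.T @ P)       # F = -R^{-1} B' P, equation (8.24)
print(np.linalg.eigvals(A + B @ F))    # stable closed-loop eigenvalues
print(P)                               # x'(0) P x(0) is the minimum cost

scipy.linalg.solve_continuous_are returns the stabilizing solution P, so A + BF comes out stable, consistent with the statement quoted in the exercise.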