APPPHYS 217 Thursday 8 April 2010


A&M example 6: the double integrator

Consider the motion of a point particle in 1D with the applied force as a control input. This is simply Newton's equation $F = ma$ with $F = u$:

$$ \frac{d}{dt}\begin{pmatrix} q \\ \dot q \end{pmatrix} = \begin{pmatrix} \dot q \\ u/m \end{pmatrix}. $$

The set of equilibrium points of this system clearly corresponds to the line $\dot q = 0$ (from the equations we see that $u = 0$ is required for equilibrium). With reference to A&M Figure 6(b), let us briefly consider how we can use the input $u$ to steer the system, starting from any equilibrium point and ending on any other. It is instructive to visualize this process on the phase portrait.

State-feedback control example: second-order system

Consider the driven second-order system

$$ \ddot q + 2\zeta\omega_0 \dot q + \omega_0^2 q = u, \qquad x_1 = q, \quad x_2 = \dot q, $$

$$ \frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_2 \\ -\omega_0^2 x_1 - 2\zeta\omega_0 x_2 + u \end{pmatrix}. $$

As we have previously discussed, $u$ here could represent an external applied force (in a mechanical mass-spring-damper system) or a voltage (in the LCR circuit realization). Anticipating conventional control-theoretic notation that we will introduce below, let us also write this as

$$ \frac{d}{dt} x = Ax + Bu, \qquad A = \begin{pmatrix} 0 & 1 \\ -\omega_0^2 & -2\zeta\omega_0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. $$

We saw last time that the damping ratio $\zeta$ sets important properties of the step (transient) response.
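As a quick numerical sketch (not part of the original notes), the step responses referred to here can be reproduced by integrating the state-space form directly; the value $\omega_0 = 1$, the unit step input, and the particular damping ratios below are illustrative assumptions.

```python
# Minimal sketch: step response of  qdd + 2*zeta*w0*qd + w0^2*q = u  for a few
# damping ratios. w0 = 1, u = 1 (unit step), and the zeta values are assumed.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

w0, u = 1.0, 1.0

def rhs(t, x, zeta):
    q, qd = x
    return [qd, -w0**2 * q - 2.0 * zeta * w0 * qd + u]

t_eval = np.linspace(0.0, 30.0, 600)
for zeta in (1/3, 2/3, 1.0, 4/3):
    sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 0.0], args=(zeta,), t_eval=t_eval)
    plt.plot(sol.t, sol.y[0], label=f"zeta = {zeta:.2f}")
plt.xlabel("t"); plt.ylabel("q(t)"); plt.legend(); plt.show()
```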

[Figure: step responses $q(t)$ of the driven second-order system for several values of the damping ratio $\zeta$ (red, green, blue $\zeta = 4/3$, and black curves), plotted versus time.]

Let us briefly consider the state-feedback control scenario

$$ u = -Kx, $$

where (since we have taken $u$ to be scalar) $K = \begin{pmatrix} k_1 & k_2 \end{pmatrix}$. With this feedback law we have

$$ \frac{d}{dt} x = Ax + Bu = (A - BK)x, $$

where

$$ A - BK = \begin{pmatrix} 0 & 1 \\ -\omega_0^2 & -2\zeta\omega_0 \end{pmatrix} - \begin{pmatrix} 0 \\ 1 \end{pmatrix}\begin{pmatrix} k_1 & k_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -\omega_0^2 - k_1 & -2\zeta\omega_0 - k_2 \end{pmatrix}. $$

Clearly we can view this as a modified second-order system. If we define $\tilde\omega^2 \equiv \omega_0^2 + k_1$ and $2\tilde\zeta\tilde\omega \equiv 2\zeta\omega_0 + k_2$, then apparently we have $\tilde\omega = \sqrt{\omega_0^2 + k_1}$ and $\tilde\zeta = (2\zeta\omega_0 + k_2)/(2\tilde\omega)$.

It follows that we can actually set $\tilde\omega$ to any value we like through choice of the feedback parameter $k_1$, and once this is set (assuming $\tilde\omega \neq 0$) we can adjust $\tilde\zeta$ as desired through choice of $k_2$. Hence if we don't like the transient behavior of our original open-loop system (without feedback), we can in principle modify it however we like by use of state feedback as described above. We program the desired behavior by setting the feedback gain $K$. Note that if the components of $x$ are available as electronic signals, it is straightforward to produce the feedback signal $u$ with an op-amp circuit (LCR example).

The fact that we have full power to reprogram the oscillation frequency and damping ratio in this state-feedback scenario can be made obvious if we look at the dynamics with feedback transformed back into a single second-order ODE:

$$ \frac{d^2 q}{dt^2} = -\omega_0^2 q - 2\zeta\omega_0 \dot q - k_1 q - k_2 \dot q = -(\omega_0^2 + k_1)\, q - (2\zeta\omega_0 + k_2)\,\dot q. $$

Hence $k_1$ is associated with a modification of the harmonic restoring force, while $k_2$ directly and independently modifies the velocity-damping.

Just for fun, let us calculate the eigenvalues of the feedback-modified $A$ matrix:

$$ \det(A - BK - \lambda I) = \det\begin{pmatrix} -\lambda & 1 \\ -\omega_0^2 - k_1 & -2\zeta\omega_0 - k_2 - \lambda \end{pmatrix} = \lambda^2 + (2\zeta\omega_0 + k_2)\lambda + (\omega_0^2 + k_1). $$

Considering the solutions given by the quadratic formula,

$$ \lambda_\pm = \frac{-(2\zeta\omega_0 + k_2) \pm \sqrt{(2\zeta\omega_0 + k_2)^2 - 4(\omega_0^2 + k_1)}}{2}, $$

we see that we can achieve any desired pair of eigenvalues $\lambda_\pm$ by setting

$$ k_2 = -2\zeta\omega_0 - (\lambda_+ + \lambda_-), \qquad k_1 = \lambda_+\lambda_- - \omega_0^2. $$

Hence we can arbitrarily program the eigenvalues of the closed-loop dynamics matrix.

Before turning to some generalities, we note that the second-order system with the control structure we are considering can be pushed into arbitrary states in a similar manner as the double integrator.
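A minimal numerical check of these pole-placement formulas; the values of $\omega_0$, $\zeta$, and the target eigenvalues below are arbitrary illustrative choices.

```python
# Sketch: verify that k1 = lam1*lam2 - w0^2 and k2 = -2*zeta*w0 - (lam1 + lam2)
# place the eigenvalues of A - B*K at lam1, lam2 (illustrative values assumed).
import numpy as np

w0, zeta = 1.0, 0.2
lam1, lam2 = -2.0, -3.0                       # desired closed-loop eigenvalues

A = np.array([[0.0, 1.0], [-w0**2, -2.0 * zeta * w0]])
B = np.array([[0.0], [1.0]])
K = np.array([[lam1 * lam2 - w0**2, -2.0 * zeta * w0 - (lam1 + lam2)]])

print(np.linalg.eigvals(A - B @ K))           # expect approximately [-2, -3]
```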

Suppose we are starting at the origin at $t = 0$ and want to reach $(q, \dot q) = (q_T, \dot q_T)$ at a target time $T$. If we use the input $u$ and allow our input signals to be unbounded, we can in principle take the following approach. First, give the system a momentum impulse at time $t = 0$ that will cause the system to pass through position $q_T$ at time $T$. To convince ourselves that this is really possible, we could look at the matrix exponential we calculated last time and show that we can always solve for $\dot q(0)$ such that

$$ q_T = \left[\exp(AT)\begin{pmatrix} 0 \\ \dot q(0)\end{pmatrix}\right]_1. $$

In the overdamped case, for example, this leads to

$$ q(T) = \dot q(0)\, \frac{e^{-\zeta\omega_0 T}}{\omega_0\sqrt{\zeta^2 - 1}}\, \sinh\!\left(\omega_0\sqrt{\zeta^2 - 1}\, T\right). $$

Then, at time $T$, apply a second momentum impulse that corrects the final velocity $\dot q(T)$ to its desired value $\dot q_T$. Note that even with this rather singular strategy, in which we rely on our ability to apply controls so strong that they overwhelm most of the system's natural dynamics (harmonic restoring force and velocity-damping), we still rely on the inherent integrator structure $dq/dt = \dot q$ that allows us to use an input to $\dot q$ only to affect the position $q$.

Reachability rank condition

We say that a linear system $\dot x = Ax + Bu$ is reachable if, for any initial state $x(0) = x_0$, desired final state $x_f$, and target time $T$, it is possible to find a control input $u(t)$, $0 \le t \le T$, that steers the system to reach $x(T) = x_f$. There is a theorem (see A&M) that says that a system is reachable if and only if its reachability matrix

$$ W_r = \begin{pmatrix} B & AB & A^2 B & \cdots & A^{n-1} B\end{pmatrix} $$

has full rank (here $x \in \mathbb{R}^n$). If a system is reachable, it is furthermore possible to solve the pole placement problem, in which we want to design a state feedback law $u = -Kx$ such that we can pick any eigenvalues we want for the controlled dynamics $\dot x = Ax + Bu = (A - BK)x$. Here $A$ and $B$ are given, and we must find a $K$ to achieve the desired eigenvalues for $A - BK$.

In the case of our second-order system we have

$$ AB = \begin{pmatrix} 1 \\ -2\zeta\omega_0 \end{pmatrix}, \qquad W_r = \begin{pmatrix} B & AB \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & -2\zeta\omega_0 \end{pmatrix}, $$

and clearly $W_r$ has full rank, as $\det W_r = -1$.

Before moving on, let's look at a simple example (from Åström and Murray, Ex. 5.3) of a system that is not reachable: two identical subsystems driven in exactly the same way by the single input $u$. Here we can easily compute $W_r$, which clearly has determinant zero (its two rows are identical). This can be understood by noting the complete symmetry of the way that $u$ modifies the evolution of $x_1$ and $x_2$: for example, if $x_1(0) = x_2(0)$, there is no way to use $u$ to achieve $x_1(T) \neq x_2(T)$ at any later time.

Predator-prey example

We have the equations of motion

$$ \dot H = f_H(H, L) = rH\left(1 - \frac{H}{k}\right) - \frac{aHL}{c + H}, \qquad \dot L = f_L(H, L) = \frac{b\, aHL}{c + H} - dL. $$

First we find the equilibrium points. For steady state of the lynx equation we have

$$ \frac{b\, aHL}{c + H} - dL = 0 \quad\Rightarrow\quad L\left(\frac{abH}{c + H} - d\right) = 0, $$

which has a trivial solution $L = 0$, as well as

$$ abH = d(c + H) \quad\Rightarrow\quad H(ab - d) = cd \quad\Rightarrow\quad H_{eq} = \frac{cd}{ab - d}. $$

Then, going to the hares equation: if $L = 0$ we need $rH(1 - H/k) = 0$, which has solutions $H = 0$ and $H = k$. Thus we have two distinct equilibria so far, $(H, L) = (0, 0)$ and $(H, L) = (k, 0)$.
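The rank test is easy to carry out numerically. The sketch below checks $W_r$ for the second-order system above and for one concrete instance of the symmetric, unreachable situation (two identical first-order subsystems sharing the same input, used here as an assumed stand-in for the A&M example).

```python
# Sketch: numerical rank of W_r = [B, AB] for the second-order example (reachable)
# and for an assumed symmetric two-subsystem example (unreachable).
import numpy as np

def reachability_matrix(A, B):
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

w0, zeta = 1.0, 0.2                            # illustrative values
A1 = np.array([[0.0, 1.0], [-w0**2, -2.0 * zeta * w0]])
B1 = np.array([[0.0], [1.0]])
print(np.linalg.matrix_rank(reachability_matrix(A1, B1)))   # 2 -> reachable

A2 = np.array([[-1.0, 0.0], [0.0, -1.0]])      # two identical subsystems,
B2 = np.array([[1.0], [1.0]])                  # same input drives both
print(np.linalg.matrix_rank(reachability_matrix(A2, B2)))   # 1 -> not reachable
```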

If we set $H = H_{eq}$, however, then we need

$$ \frac{aH_{eq}L}{c + H_{eq}} = rH_{eq}\left(1 - \frac{H_{eq}}{k}\right) \quad\Rightarrow\quad L = L_{eq} = \frac{r}{a}\,(c + H_{eq})\left(1 - \frac{H_{eq}}{k}\right). $$

Substituting $H_{eq} = cd/(ab - d)$, so that $c + H_{eq} = abc/(ab - d)$ and $1 - H_{eq}/k = (abk - dk - cd)/[k(ab - d)]$, this becomes

$$ L_{eq} = \frac{bcr(abk - dk - cd)}{k(ab - d)^2}. $$

This gives us a third equilibrium point, $(H, L) = (H_{eq}, L_{eq})$.

In order to determine the stability of these equilibria we compute the derivative (Jacobian)

$$ \frac{\partial f}{\partial x} = \begin{pmatrix} \partial f_H/\partial H & \partial f_H/\partial L \\ \partial f_L/\partial H & \partial f_L/\partial L \end{pmatrix} = \begin{pmatrix} r\left(1 - \dfrac{2H}{k}\right) - \dfrac{acL}{(c + H)^2} & -\dfrac{aH}{c + H} \\ \dfrac{abcL}{(c + H)^2} & \dfrac{abH}{c + H} - d \end{pmatrix}. $$

Evaluating this at each equilibrium point:

$$ (H, L) = (0, 0): \quad \frac{\partial f}{\partial x} = \begin{pmatrix} r & 0 \\ 0 & -d \end{pmatrix}, $$

$$ (H, L) = (k, 0): \quad \frac{\partial f}{\partial x} = \begin{pmatrix} -r & -\dfrac{ak}{c + k} \\ 0 & \dfrac{abk}{c + k} - d \end{pmatrix}, $$

$$ (H, L) = (H_{eq}, L_{eq}): \quad \frac{\partial f}{\partial x} = \begin{pmatrix} r\left(1 - \dfrac{2H_{eq}}{k}\right) - \dfrac{acL_{eq}}{(c + H_{eq})^2} & -\dfrac{aH_{eq}}{c + H_{eq}} \\ \dfrac{abcL_{eq}}{(c + H_{eq})^2} & \dfrac{abH_{eq}}{c + H_{eq}} - d \end{pmatrix}. $$

By computing eigenvalues it follows that all three equilibrium points are unstable, with the first two being saddles and the third a source.
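These stability claims can be checked numerically. A minimal sketch, using the parameter values from A&M that are quoted a little further below:

```python
# Sketch: eigenvalues of the predator-prey Jacobian at the three equilibria,
# using the A&M parameter values (a, b, c, d, k, r as quoted in these notes).
import numpy as np

a, b, c, d, k, r = 3.2, 0.6, 50.0, 0.56, 125.0, 1.6

def jacobian(H, L):
    return np.array([
        [r * (1.0 - 2.0 * H / k) - a * c * L / (c + H)**2, -a * H / (c + H)],
        [a * b * c * L / (c + H)**2,                        a * b * H / (c + H) - d],
    ])

H_eq = c * d / (a * b - d)
L_eq = b * c * r * (a * b * k - d * k - c * d) / (k * (a * b - d)**2)

for H, L in [(0.0, 0.0), (k, 0.0), (H_eq, L_eq)]:
    print((round(H, 2), round(L, 2)), np.linalg.eigvals(jacobian(H, L)))
# expected: two saddles (one positive and one negative eigenvalue each) and, at
# (H_eq, L_eq), a complex pair with positive real part (a source).
```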

Following Example 6.5 from A&M, we now consider state-feedback stabilization of the non-trivial equilibrium point $(H_{eq}, L_{eq})$, where once again

$$ H_{eq} = \frac{cd}{ab - d}, \qquad L_{eq} = \frac{bcr(abk - dk - cd)}{k(ab - d)^2}. $$

The feedback strategy will be to modulate the net birth rate of the hares (which could be done, for example, by modulating their food supply), $r \to r + u$, so that

$$ \dot H = f_H(H, L) = (r + u)\,H\left(1 - \frac{H}{k}\right) - \frac{aHL}{c + H}, \qquad \dot L = f_L(H, L) = \frac{b\, aHL}{c + H} - dL. $$

Using the derivative $\partial f/\partial x$ that we derived above, we can linearize the controlled dynamics (assuming $u \ll r$) as

$$ \frac{d}{dt}\begin{pmatrix} H \\ L \end{pmatrix} = \left.\frac{\partial f}{\partial x}\right|_{(H_{eq},\, L_{eq})} \begin{pmatrix} H \\ L \end{pmatrix} + \begin{pmatrix} H_{eq}\left(1 - H_{eq}/k\right) \\ 0 \end{pmatrix} u, $$

where $H$ and $L$ now denote small deviations from the equilibrium values. Hence, in the control-theoretic notation we have been using,

$$ \frac{d}{dt}\begin{pmatrix} H \\ L \end{pmatrix} = A\begin{pmatrix} H \\ L \end{pmatrix} + Bu, $$

and inserting parameter values as in A&M ($a = 3.2$, $b = 0.6$, $c = 50$, $d = 0.56$, $k = 125$, $r = 1.6$) we numerically evaluate

$$ A \approx \begin{pmatrix} 0.13 & -0.93 \\ 0.57 & 0 \end{pmatrix}, \qquad B \approx \begin{pmatrix} 17.2 \\ 0 \end{pmatrix}. $$

The open-loop eigenvalues of $A$ are then $\approx 0.06 \pm 0.73i$: unstable, as claimed. We can now try to find a feedback law

$$ u = -K\begin{pmatrix} H \\ L \end{pmatrix} $$

such that the closed-loop eigenvalues become real and negative, for example. Using the Matlab function place, we find a gain $K = \begin{pmatrix} k_1 & k_2 \end{pmatrix}$ that puts the eigenvalues of $A - BK$ at the desired locations, and this state-feedback law leads to the new linearized dynamics

$$ \frac{d}{dt}\begin{pmatrix} H \\ L \end{pmatrix} = (A - BK)\begin{pmatrix} H \\ L \end{pmatrix}. $$
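In Python, scipy.signal.place_poles plays the role of Matlab's place. A minimal sketch using the numerical $A$ and $B$ above, with an arbitrary illustrative choice of target poles at $-1$ and $-2$ (the notes only specify that the closed-loop eigenvalues should be real and negative):

```python
# Sketch: pole placement for the linearized predator-prey model, with assumed
# target poles at -1 and -2 (any real negative pair would do).
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.13, -0.93], [0.57, 0.0]])
B = np.array([[17.2], [0.0]])

print(np.linalg.eigvals(A))                    # approx 0.06 +/- 0.73i (unstable)

K = place_poles(A, B, [-1.0, -2.0]).gain_matrix
print(K)                                       # gain K = [k1, k2]
print(np.linalg.eigvals(A - B @ K))            # closed-loop poles at -1, -2
```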

Assuming the state-feedback law is implemented linearly (in terms of the deviations of $H$ and $L$ from their equilibrium values), we can in fact write the nonlinear dynamics with feedback as

$$ \dot H = f_H(H, L) = \bigl[r - k_1(H - H_{eq}) - k_2(L - L_{eq})\bigr] H\left(1 - \frac{H}{k}\right) - \frac{aHL}{c + H}, \qquad \dot L = f_L(H, L) = \frac{b\, aHL}{c + H} - dL. $$

Using pplane8 we can investigate the phase portrait of the closed-loop system in the vicinity of the open-loop equilibrium point (a rough numerical substitute is sketched below).
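A rough numerical substitute for the pplane8 phase portrait, under stated assumptions: the A&M parameter values, and an assumed stabilizing gain $K = (0.18,\ 0.15)$ of the kind produced by the pole-placement step above.

```python
# Sketch: integrate the closed-loop nonlinear predator-prey equations near the
# open-loop equilibrium. Parameter values are the A&M ones; the gains k1, k2
# are assumed, illustrative values (any stabilizing gain would do).
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

a, b, c, d, k, r = 3.2, 0.6, 50.0, 0.56, 125.0, 1.6
H_eq = c * d / (a * b - d)
L_eq = b * c * r * (a * b * k - d * k - c * d) / (k * (a * b - d)**2)
k1, k2 = 0.18, 0.15                             # assumed feedback gains

def closed_loop(t, x):
    H, L = x
    u = -k1 * (H - H_eq) - k2 * (L - L_eq)
    dH = (r + u) * H * (1.0 - H / k) - a * H * L / (c + H)
    dL = b * a * H * L / (c + H) - d * L
    return [dH, dL]

for dH0 in (-5.0, 5.0):                         # a few nearby initial conditions
    sol = solve_ivp(closed_loop, (0.0, 40.0), [H_eq + dH0, L_eq + 2.0], max_step=0.1)
    plt.plot(sol.y[0], sol.y[1])
plt.plot([H_eq], [L_eq], "k.")
plt.xlabel("H (hares)"); plt.ylabel("L (lynxes)"); plt.show()
```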

State feedback versus output feedback

Note that in our discussion of stabilization and pole-placement so far, we have assumed that it makes sense to design a control law of the form $u = -Kx$. This is called a state feedback law, since in order to determine the control input $u(t)$ at time $t$ we generally need to have full knowledge of the state $x(t)$. In practice this is often not possible, and thus we usually specify the available output signals when defining a control design problem:

$$ \dot x = Ax + Bu, \qquad y = Cx. $$

Here the output signal $y(t)$, which can in principle be a vector of any dimension, represents the information about the evolving system state that is made available to the controller via sensors. An output feedback law must take the form $u(t) = f\bigl(y(\tau \le t)\bigr)$, where in general we can allow $u(t)$ to depend on the entire history of $y(\tau)$ with $\tau \le t$ (more on this below and later in the course).

Output feedback is a natural setting for practical applications. For example, if we are talking about cruise control for an automobile, $x$ may represent a complex set of variables having to do with the internal state of the engine, wheels, and chassis, while $y$ is only a readout from the speedometer. Hopefully it will seem natural that it is usually prohibitively difficult to install a sensor to monitor every coordinate of the system's state space, and also that it will often be unnecessary to do so (cruise control electronics can function quite well with just the car's speed).

One simple example of a system in which full state knowledge is clearly not necessary is (asymptotic) stabilization of a simple harmonic oscillator. If the natural dynamics of the plant is $m\ddot x = -kx$, and our actuation mechanism is to apply forces directly on the mass, then the control system looks like

$$ \frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_2 \\ -(k/m)\,x_1 + u/m \end{pmatrix} $$

(where $x_1$ is now the position and $x_2$ the velocity). We can clearly make the equilibrium point at the origin asymptotically stable via the feedback law

$$ u = -b_1 x_2, \qquad b_1 > 0, $$

which makes the overall equation of motion

$$ m\ddot x = -kx - b_1\dot x, $$

which we recognize as a damped harmonic oscillator. Thus it is clear that the controller only needs to know the velocity of the oscillator in order to implement a successful feedback strategy. So even if we go to a SISO (single-input, single-output) formulation of this problem,

$$ \dot x = Ax + Bu, \qquad y = Cx, $$

we are obviously fine for any $C$ of the form $C = \begin{pmatrix} 0 & c_2 \end{pmatrix}$, since $x_2 = y/c_2$ and we can implement an output-feedback law of the form $u = -b_1 x_2 = -(b_1/c_2)\, y$.

In contrast to this, consider the analogous stabilization problem for the predator-prey system that we discussed above. Even if we restrict our attention to the immediate vicinity of the natural equilibrium point and assume that a linearized model is sufficient for the design, it is not clear that we could succeed without requiring knowledge of both the lynx and hare populations.

Clearly, if $C$ is a square matrix and $y$ has the same dimension as $x$, everything will be easy if $C$ is invertible. As a generalization of what we did for the simple harmonic oscillator above, we could just design a state feedback controller $K$, set $\hat x = C^{-1}y$, and apply feedback $u = -K\hat x = -KC^{-1}y$. However, this is a special case and not the sort of convenience we want to count on!

State estimation

At this point it might seem like we need completely new theorems about reachability and pole-placement for output-feedback laws, when $u(t)$ is only allowed to depend on $y(\tau \le t)$. However, it turns out that we can build naturally on our previous results by appealing to a separation method. The basic idea is that we will try to construct a procedure for processing the data $y(\tau \le t)$ to obtain an estimate $\hat x(t)$ of the true system state $x(t)$, and then apply a feedback law $u = -K\hat x$ based on this estimate. This can be possible even when $C$ is not invertible (not even square). The controller thus assumes the structure of a dynamical system itself, with $y(t)$ as its input, $u(t)$ as its output, and $\hat x(t)$ as its internal state. There are various ways of designing state estimators to extract $\hat x(t)$ from $y(\tau \le t)$, of which we will discuss two, and there is also a convenient procedure for determining whether or not $y$ contains enough information to make full state reconstruction possible in principle. The latter test looks a lot like the test for reachability, for not accidental reasons.

Let's start by thinking about the simple harmonic oscillator again. We note that in order to asymptotically stabilize the equilibrium point at the origin, it would be most convenient to have an output signal that told us directly about its velocity $x_2$. However, you may have already realized that in a scenario with $C = \begin{pmatrix} 1 & 0\end{pmatrix}$, $y = x_1$, it should be simple to obtain a good estimate of $x_2$ via

$$ \hat x_2(t) = \frac{dy}{dt}. $$

This is certainly a valid procedure for estimating $x_2$, although in practice one should be wary of taking derivatives of measured data, since that tends to accentuate high-frequency noise. In a similar spirit, we note that for any dynamical system $\dot x = Ax + Bu$, $y = Cx$, if we hold $u$ at zero we can make use of the general relations

$$ \dot y = C\dot x = CAx, \qquad \ddot y = C\ddot x = CA\dot x = CA^2 x, \qquad \ldots, \qquad \frac{d^{n-1}y}{dt^{n-1}} = CA^{n-1}x. $$

If we look at how this applies to our simple harmonic oscillator example, with

$$ C = \begin{pmatrix} 1 & 0 \end{pmatrix}, \qquad A = \begin{pmatrix} 0 & 1 \\ -k/m & 0 \end{pmatrix}, $$

we have

$$ y = Cx = x_1, \qquad \dot y = CAx = x_2, $$

and we start to get a sense for how the natural dynamics $A$ can move information about state-space variables into the support of $C$.

Hopefully it should thus seem reasonable that, in order for a system to be observable, we require that the observability matrix

$$ W_o = \begin{pmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{pmatrix} $$

have full rank. Informally, if a system is observable then we are guaranteed that we can design a procedure (such as the derivatives scheme above) to extract a faithful estimate $\hat x$ from $y$. However, it will generally be necessary to monitor $y$ for some time (and with good accuracy) before the estimation error $\tilde x(t) \equiv x(t) - \hat x(t)$ can be made small. In the derivatives scheme, for instance, we can't estimate high derivatives of $y(t)$ until we see enough of it to get an accurate determination of its slope, curvature, etc.
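For the position-only oscillator measurement this rank condition is easy to verify numerically (a minimal sketch, with $m = k = 1$ assumed):

```python
# Sketch: observability matrix W_o = [C; CA] for the oscillator with C = [1, 0].
import numpy as np

m, k = 1.0, 1.0                                 # assumed values
A = np.array([[0.0, 1.0], [-k / m, 0.0]])
C = np.array([[1.0, 0.0]])

def observability_matrix(A, C):
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

W_o = observability_matrix(A, C)
print(W_o)
print(np.linalg.matrix_rank(W_o))               # 2 -> the full state is observable
```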

State observer with innovations

A more common (and more robust) method for estimating $x$ from $y$ is to construct a state observer that applies corrections to an initial guess $\hat x$ until $C\hat x$ becomes an accurate predictor of $y$. Suppose that at some arbitrary point in time $t_0$ we have an estimate $\hat x(t_0)$. How should we update this estimate to generate estimates of the state $x(t)$ with $t > t_0$? Most simply, we could integrate

$$ \frac{d}{dt}\hat x = A\hat x + Bu, $$

assuming we know $A$ and $B$ for the plant. (It is generally assumed that we know $u$, since this signal is under our control!) Then we notice that the estimation error $\tilde x$ evolves as

$$ \frac{d}{dt}\tilde x = \dot x - \frac{d}{dt}\hat x = Ax + Bu - A\hat x - Bu = A(x - \hat x) = A\tilde x. $$

Hence this strategy has the nice feature that if $A$ is stable, $\lim_{t\to\infty}\tilde x(t) = 0$, meaning that our estimate will eventually converge to the true system state. Note that this works even if $B$ and $u$ are non-zero.

What if we are not so lucky as to have sufficiently stable natural dynamics $A$? As mentioned above, a good strategy is to try to apply corrections to $\hat x$ at every time step in proportion to the so-called innovation

$$ w \equiv y - C\hat x. $$

Here $y - C\hat x$ is the error we make in predicting $y(t)$ on the basis of $\hat x(t)$; clearly, when $\tilde x$ is small, so is $w$. A Luenberger state observer can thus be constructed as

$$ \frac{d}{dt}\hat x = A\hat x + Bu + L(y - C\hat x), $$

where $L$ is a gain matrix that is left to our design. This observer equation results in

$$ \frac{d}{dt}\tilde x = \dot x - \frac{d}{dt}\hat x = Ax + Bu - A\hat x - Bu - L(y - C\hat x) = A(x - \hat x) - LC(x - \hat x) = (A - LC)\,\tilde x. $$

Hence we see that our design task should be to choose $L$, given $A$ and $C$, such that $A - LC$ has nice stable eigenvalues. This should remind you immediately of the pole-placement problem in state feedback, in which we wanted to choose $K$, given $A$ and $B$, such that $A - BK$ had desired eigenvalues. Indeed, one can map between the two problems by noting that the transpose $M^T$ of a matrix has the same eigenvalues as $M$. Thus we can view our observer design problem as being the choice of $L^T$ such that $(A - LC)^T = A^T - C^T L^T$ has nice stable eigenvalues, and this now has precisely the same structure as before. Indeed, there is a complete duality between state feedback and observer design, with the correspondences

$$ A \leftrightarrow A^T, \qquad B \leftrightarrow C^T, \qquad K \leftrightarrow L^T, \qquad W_r \leftrightarrow W_o^T. $$

Hence it should be clear, for example, how Matlab's place function can be used for observer design. And as long as the observability matrix has full rank, we are guaranteed to be able to find an $L$ such that $A - LC$ has arbitrary desired eigenvalues.
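A minimal sketch of observer design by this duality, using scipy.signal.place_poles (the Python analogue of place) on $(A^T, C^T)$; the oscillator model and the desired observer poles at $-3$ and $-4$ are illustrative assumptions.

```python
# Sketch: Luenberger observer gain via duality: place the eigenvalues of
# A^T - C^T L^T, then transpose to get L. Model and poles are assumed values.
import numpy as np
from scipy.signal import place_poles

m, k = 1.0, 1.0
A = np.array([[0.0, 1.0], [-k / m, 0.0]])
C = np.array([[1.0, 0.0]])

L = place_poles(A.T, C.T, [-3.0, -4.0]).gain_matrix.T    # L has shape (2, 1)
print(L)
print(np.linalg.eigvals(A - L @ C))                      # expect approx [-3, -4]
```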

Pole placement with output feedback

As discussed in Section 7.3 of Åström and Murray, the following theorem holds (here we simplify to the $r = 0$ case, i.e., no reference input): for a system

$$ \dot x = Ax + Bu, \qquad y = Cx, $$

the controller described by

$$ u = -K\hat x, \qquad \frac{d}{dt}\hat x = A\hat x + Bu + L(y - C\hat x) $$

gives a closed-loop system with the characteristic polynomial

$$ \det(sI - A + BK)\,\det(sI - A + LC). $$

This polynomial can be assigned arbitrary roots if the system is observable and reachable. The overall setup is summarized in the following cartoon:

[Block diagram: the plant $\dot x = Ax + Bu$, $y = Cx$; a state observer driven by $u$ and $y$ that produces the estimate $\hat x$; and the feedback gain $u = -K\hat x$ closing the loop.]

Next time we'll have a look at how this sort of strategy performs in the presence of noise.
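As a closing numerical sketch (assumptions: the oscillator model from earlier, state-feedback poles at $-1, -2$ and observer poles at $-3, -4$ chosen arbitrarily), one can check that the combined plant-plus-observer system has exactly the eigenvalues of $A - BK$ together with those of $A - LC$:

```python
# Sketch: separation structure check. The closed-loop eigenvalues of the combined
# (x, xhat) system should be eig(A - B K) together with eig(A - L C).
# Model and pole locations are assumed illustrative values.
import numpy as np
from scipy.signal import place_poles

m, k = 1.0, 1.0
A = np.array([[0.0, 1.0], [-k / m, 0.0]])
B = np.array([[0.0], [1.0 / m]])
C = np.array([[1.0, 0.0]])

K = place_poles(A, B, [-1.0, -2.0]).gain_matrix
L = place_poles(A.T, C.T, [-3.0, -4.0]).gain_matrix.T

# Combined dynamics: plant  xdot = A x - B K xhat,
# observer  xhatdot = L C x + (A - B K - L C) xhat.
A_cl = np.block([[A, -B @ K], [L @ C, A - B @ K - L @ C]])
print(np.sort_complex(np.linalg.eigvals(A_cl)))   # expect {-1, -2, -3, -4}
```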