Astro 250 Crash course on Control Systems, Part I. March 3, 2003. Andy Packard, Zachary Jarvis-Wloszek, Weehong Tan, Eric Wemhoff. pack@me.berkeley.edu


Feedback Systems: Motivation
Process to be controlled: the input u and a disturbance d_1 enter the process P through a summing junction, a second disturbance d_2 adds at the output, producing y. Goal: regulate y as desired, by freely manipulating u. Problem: the effect of u on y is partially unknown: external disturbances d_1 and d_2 act, and the process behavior P is somewhat unknown, and may drift/change with time. Note: arrows indicate cause/effect relationships, not necessarily power, force, flow, etc.
Open-loop regulation: make H the inverse of P (y_des → H → P → y, with d_1 and d_2 entering at the process input and output). If the unknown effects (d_1, d_2, ...) are small, this calibration strategy may work. We'll not focus on this.

Feedback Systems: Motivation
Feedback regulation: the controller C converts y_des and the fed-back measurement into u; d_1 and d_2 enter at the process input and output, and the sensing element F returns the measured output, corrupted by noise η, to C.
Benefits of feedback:
1. The strategy C turns y_des and y_meas into u, so u depends on d_1, d_2 (and η). Automatic compensation for the unknowns occurs, but it is corrupted by η.
2. If C is properly designed, the feedback mechanism yields several benefits: (a) the effects of d_1 and d_2 on y are reduced, and are modestly insensitive to P's behavior; (b) the output y closely follows the desired trajectory y_des, perhaps responding faster than the process naturally does on its own; (c) if the process P is inherently unstable, the feedback provides constant, corrective inputs u to stabilize the process.

Feedback Systems: Motivation/Nomenclature
Drawbacks of using feedback:
1. A feedback loop requires a sensing element, F.
2. Measurements potentially introduce additional noise, η, into the process.
3. System performance, or even stability, can be degraded if the strategy C is not appropriate for P.
Nomenclature: if keeping the mapping from d_1 and d_2 to y small is the focus, then the problem is a disturbance rejection problem. If keeping the mapping from r to y approximately unity is the focus, then the problem is a reference tracking problem. In either case, the ability of C to augment the system's performance depends on the dynamics of F, the noise level η, and the uncertainty in the process behavior P.

Feedback Loops: Arithmetic
Many principles of feedback control derive from the arithmetic relations (and their sensitivities) implied by the standard loop: the controller C drives the process G, the disturbance d enters through H, the sensor S (with additive noise n) feeds the filter F, whose output y_f closes the loop against the reference r. The analysis is oversimplified and not realistic, but relevant. Lines represent variables, arrows give cause/effect direction, rectangular blocks are multiplication operators, and circles are summing junctions (with subtraction explicitly denoted). (r, d, n) are independent variables; (e, u, y, y_m, y_f) are dependent, being generated (caused) by specific values of (r, d, n). Writing out each operation in the loop gives
  e = r − y_f        (generate the regulation error)
  u = C e            (control strategy)
  y = G u + H d      (process behavior)
  y_m = S y + n      (sensor behavior)
  y_f = F y_m        (filtering the measurement)

There is a cycle in the cause/effect relationships: starting at y_f, we have (r, y_f) → e → u → y → y_m → y_f, i.e., a feedback loop. It can be beneficial and/or detrimental. d (and u) affects y, and through the feedback loop it ultimately affects u, which in turn affects y. So, through feedback, the control action u may compensate for disturbances d. However, through feedback, y is also affected by the imperfection with which it is measured, n. Eliminating the intermediate variables yields the explicit dependence of y on r, d, n:
  y = [GC/(1 + GCFS)] r + [H/(1 + GCFS)] d − [GCF/(1 + GCFS)] n,
called the closed-loop relationship; the three factors are denoted (r → y)_CL, (d → y)_CL and (n → y)_CL. The (unattainable) goal of feedback (the choice of S, F and C) is: for all reasonable (r, d, n), make y ≈ r, independent of d and n, and make this behavior resilient to modest/small changes in G (once C is fixed).
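To make the closed-loop arithmetic concrete, here is a minimal numerical sketch (not part of the original notes) that treats G, C, F, S and H as made-up constant gains, evaluates the three closed-loop factors, and shows y approaching r as the loop gain GCFS grows.

```python
import numpy as np

# hypothetical static gains, chosen only for illustration
G, H, S, F = 2.0, 1.0, 1.0, 1.0
r, d, n = 1.0, 0.5, 0.05

for C in [1.0, 10.0, 100.0]:            # increasing controller gain
    L = G * C * F * S                    # loop gain GCFS
    r_to_y = G * C / (1 + L)             # (r -> y)_CL
    d_to_y = H / (1 + L)                 # (d -> y)_CL
    n_to_y = -G * C * F / (1 + L)        # (n -> y)_CL
    y = r_to_y * r + d_to_y * d + n_to_y * n
    print(f"C={C:6.1f}  loop gain={L:7.1f}  y={y:.3f}  (r={r})")
```

As C grows, the disturbance term shrinks and y approaches r, up to the noise term, exactly as the closed-loop expression predicts.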

Goals: Implications
The first two goals are:
1. Make the magnitude of (d → y)_CL significantly smaller than the uncontrolled effect that d has on y, which is H.
2. Make the magnitude of (n → y)_CL small, relative to 1/S.
Implications: Goal 1 requires |H/(1 + GCFS)| << |H|, which is equivalent to |1/(1 + GCFS)| << 1. This, in turn, is equivalent to |GCFS| >> 1. Goal 2 says that any noise injected at the sensor output should be significantly attenuated at the process output y (with proper accounting for unit changes by S). This requires |GCF/(1 + GCFS)| << 1/|S|, which is equivalent to requiring |GCFS| << 1. So, Goals 1 and 2 are in direct conflict.

Conflict: Impact on achieving Goal 3
Goal 3: make the (r → y)_CL response approximately equal to 1. Depending on which of Goal 1 or Goal 2 is followed, Goal 3 is accomplished in different manners. By itself, Goal 3 requires GC/(1 + GCFS) ≈ 1. If Goal 1 is satisfied, then |GCFS| is large (relative to 1), so GC/(1 + GCFS) ≈ GC/(GCFS) = 1/(FS). The requirement of Goal 3 then becomes FS ≈ 1, with |GC| >> 1. On the other hand, if Goal 2 is satisfied, then |GCFS| is small (relative to 1), so GC/(1 + GCFS) ≈ GC. The requirement of Goal 3 then becomes |FS| << 1, with GC ≈ 1. These are completely different: one is actually a feedback strategy, and the other is not; it is a calibration strategy.

Tradeoffs
Let T(G, C, F, S) denote the factor that relates r to y,
  T(G, C, F, S) = GC/(1 + GCFS).
Use T as shorthand, and consider two sensitivities: the sensitivity of T to G, and the sensitivity of T to the product FS. These work out to
  S^T_G = 1/(1 + GCFS),   S^T_{FS} = −GCFS/(1 + GCFS).
Note that (always) S^T_G = 1 + S^T_{FS}. Hence, if one of the sensitivity measures is very small, then the other will be approximately 1 in magnitude. So, if T is insensitive to G, it will be sensitive to FS, and vice versa.
Definition: for a function F of many variables (say two), the sensitivity of F to x, denoted S^F_x, is the percentage change in F due to a percentage change in x. For infinitesimal changes in x,
  S^F_x = (x/F(x, y)) ∂F/∂x.
Other, more interesting, conservation laws hold.
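A quick finite-difference check of the conservation law S^T_G = 1 + S^T_{FS}; the operating values below are arbitrary and only for illustration (a sketch, not from the notes).

```python
# finite-difference check of the sensitivity identity S^T_G = 1 + S^T_FS
G, C, F, S = 3.0, 5.0, 0.8, 1.2          # arbitrary operating point
eps = 1e-7

def T(G, C, FS):
    return G * C / (1 + G * C * FS)

FS = F * S
T0 = T(G, C, FS)
S_T_G  = (T(G * (1 + eps), C, FS) - T0) / T0 / eps   # % change in T per % change in G
S_T_FS = (T(G, C, FS * (1 + eps)) - T0) / T0 / eps   # % change in T per % change in FS

print(S_T_G,  1 / (1 + G * C * FS))                  # both ~  1/(1+GCFS)
print(S_T_FS, -G * C * FS / (1 + G * C * FS))        # both ~ -GCFS/(1+GCFS)
print(S_T_G - (1 + S_T_FS))                          # ~ 0: the conservation law
```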

Systems: time, signals
Time: the dominant (only) independent variable. Usual notation: t (and τ, ξ, η, ...), a real number, so t ∈ R; often time starts from 0, so t ∈ R_+.
Signals: real-valued functions of the time variable. Usual notation: u, y, x, w, v. Explicit example: u(t) = e^{−3t} sin 4t for all t ∈ R_+.
Systems: mappings from signal to signal (often called operators). Usual notation: L for the mapping, and Lu for L acting on u. Explicit example: (Lu)(t) := ∫_0^t u²(τ) dτ − ∫_0^t e^{−4τ} u(τ) dτ.
A system L is linear if for all signals u and v, and all scalars α, β,
  L(αu + βv) = αLu + βLv.

Linear systems: Examples/Non-Examples
Examples:
  (Lw)(t) = ∫_0^t e^{−2(t−τ)} w(τ) dτ
  (Lw)(t) = ∫_0^t [1/(τ² + 1)] w(τ − 4) dτ
  (Lw)(t) = 5t w(t)
  (Lw)(t) = ∫_0^t w(τ) dτ
  (Lw)(t) = 3w(t − 4)
Non-Examples:
  (Lw)(t) = w²(t) + 1
  (Lw)(t) = ∫_0^t sin(w(τ)) dτ
  (Lw)(t) = 3e^{w(t−4)}
A lot is learned by considering feedback configurations of linear systems, and then studying how nonideal aspects affect the conclusions. Direct consideration of nonlinear systems is also possible, but is not how we structure this crash course.
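The linearity definition can be tested numerically on sampled signals. Below is a small sketch (the discretization and test signals are my own choices) that checks L(αu + βv) = αLu + βLv for one example from the list, (Lw)(t) = 5t w(t), and one non-example, (Lw)(t) = w²(t) + 1.

```python
import numpy as np

t = np.linspace(0, 5, 501)
u = np.exp(-3 * t) * np.sin(4 * t)     # test signals
v = np.cos(2 * t)
alpha, beta = 2.0, -1.5

def L_linear(w):       # example from the slide:     (Lw)(t) = 5 t w(t)
    return 5 * t * w

def L_nonlinear(w):    # non-example from the slide: (Lw)(t) = w(t)^2 + 1
    return w**2 + 1

for L in (L_linear, L_nonlinear):
    lhs = L(alpha * u + beta * v)
    rhs = alpha * L(u) + beta * L(v)
    print(L.__name__, "max violation of superposition:", np.max(np.abs(lhs - rhs)))
```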

Causality, Time Invariance
A system L is causal if, for any two inputs u and v and any time T,
  u(t) = v(t) for all t ≤ T   implies   (Lu)(t) = (Lv)(t) for all t ≤ T,
i.e., the output at time t only depends on past values of the input. Anything operating in real time produces outputs that are causally related to its inputs. Off-line filtering (e.g., what is the best estimate of what was happening at t = 2.3, given the data over the entire recorded window?) is not necessarily causal.
A system L is time-invariant if the system's input/output behavior is not explicitly changing/varying with time.

3 Representations: Linear, Time-Invariant, Causal Systems
Convolution: given a function g (of time), define a system (a relationship between input u and output y) as
  y(t) = ∫_0^t g(t − τ) u(τ) dτ.
g can also be a matrix-valued function of time, with u and y vector-valued signals. g is called the convolution kernel.
Linear ordinary differential equations: given constants a_i and b_i, define a system (relationship between input u and output y) as
  y^[n](t) + a_1 y^[n−1](t) + ... + a_{n−1} y^[1](t) + a_n y(t) = b_0 u^[n](t) + b_1 u^[n−1](t) + ... + b_{n−1} u^[1](t) + b_n u(t)
with given initial conditions on y and its derivatives.
State-space (coupled, first-order LODEs): given matrices A, B, C, D of appropriate dimensions, define a system (relationship between input u, output y, and internal state x) as
  ẋ(t) = A x(t) + B u(t),   y(t) = C x(t) + D u(t)
with given initial conditions on x. We'll focus on these types of descriptions on Wednesday.
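For a concrete first-order example (my own, not from the notes), ẏ + 2y = u, the descriptions coincide: the convolution kernel is g(t) = e^{−2t}, the LODE has a_1 = 2, b_0 = 0, b_1 = 1, and the state-space data are A = −2, B = 1, C = 1, D = 0. The sketch below simulates the convolution and state-space forms numerically and compares them.

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 5, dt)
u = np.sin(3 * t)                       # arbitrary test input

# convolution description: y(t) = int_0^t g(t - tau) u(tau) dtau, with g(t) = exp(-2t)
g = np.exp(-2 * t)
y_conv = np.convolve(g, u)[: len(t)] * dt

# state-space description: xdot = -2 x + u, y = x  (forward-Euler integration)
A, B, C, D = -2.0, 1.0, 1.0, 0.0
x = 0.0
y_state = np.zeros_like(t)
for k in range(len(t)):
    y_state[k] = C * x + D * u[k]
    x += dt * (A * x + B * u[k])

print("max difference between the two representations:", np.max(np.abs(y_conv - y_state)))
```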

Linear, Time-Invariance and Convolution: Equivalence
Fact: given a linear, time-invariant system, if for every T > 0 there is a number M_T such that
  max_{0≤t≤T} |u(t)| ≤ 1   implies   max_{0≤t≤T} |y(t)| ≤ M_T,
then the system can be represented as a convolution system, with ∫_0^T |g(η)| dη < ∞ for all T.
If the convolution kernel g is a finite sum of exponentially weighted sines, cosines, and polynomials in t, then it can also come from a linear ODE, or a system of coupled, 1st-order linear ODEs. Translation between the representations, when possible, is easy...

Stability: Things to know
A system L is BIBO (Bounded-Input, Bounded-Output) stable if there is a number M < ∞ such that max_t |y(t)| ≤ M max_t |u(t)| for all possible input signals u, starting from zero initial conditions. A system L is internally stable if all homogeneous solutions (i.e., u ≡ 0, nonzero initial conditions) decay to zero as t → ∞. Ignoring mathematically relevant, but physically artificial, situations, these are the same, and are equivalent to:
for a convolution description: ∫_0^∞ |g(η)| dη < ∞;
for a LODE description: all roots of λ^n + a_1 λ^{n−1} + ... + a_{n−1} λ + a_n = 0 (which may be complex) have negative real part;
for a state-space description: all eigenvalues of A have negative real part.
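A minimal sketch (with a made-up third-order example) showing the last two tests side by side: roots of the characteristic polynomial, and eigenvalues of the corresponding companion-form state matrix.

```python
import numpy as np

# lambda^3 + 4 lambda^2 + 6 lambda + 4 = 0, a hypothetical stable example
coeffs = [1.0, 4.0, 6.0, 4.0]
roots = np.roots(coeffs)
print("roots:", roots, " stable:", bool(np.all(roots.real < 0)))

# the same dynamics in companion (state-space) form
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-4.0, -6.0, -4.0]])
eigs = np.linalg.eigvals(A)
print("eigenvalues of A:", eigs, " stable:", bool(np.all(eigs.real < 0)))
```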

Simple tools
Frequency response for stable systems, derived from a model and/or obtained from experiment; behavior (model) of an interconnection of a collection of linear systems, from the individual behaviors; quantitative and qualitative reasoning about 1st, 2nd, and 3rd order linear differential equations; decrease in sensitivity and the linearizing effect of feedback; destabilizing potential of time delays in the feedback path; a few relevant architectures for control of simple dynamic processes.

Frequency Response: Convolution
Assume the convolution system is BIBO stable, ∫_0^∞ |g(τ)| dτ < ∞. The tail of the integral satisfies lim_{t→∞} ∫_t^∞ |g(τ)| dτ = 0, and for all ω ∈ R,
  Ĝ(ω) := ∫_0^∞ g(t) e^{−jωt} dt
is well defined. Let ω ∈ R and ū ∈ C. Apply the complex sinusoidal input u(t) = ū e^{jωt}. The output is
  y(t) = ∫_0^t g(t−τ) u(τ) dτ
       = ∫_0^t g(t−τ) ū e^{jωτ} dτ
       = [∫_0^t g(η) e^{jω(t−η)} dη] ū            (using η := t − τ)
       = e^{jωt} [∫_0^t g(η) e^{−jωη} dη] ū
       = e^{jωt} [∫_0^∞ g(η) e^{−jωη} dη] ū − e^{jωt} [∫_t^∞ g(η) e^{−jωη} dη] ū
       = Ĝ(ω) ū e^{jωt} + y_d(t),
where y_d(t) := −e^{jωt} [∫_t^∞ g(η) e^{−jωη} dη] ū. Clearly lim_{t→∞} y_d(t) = 0, and the response tends to a complex sinusoid at the same frequency as the input. For stable, linear time-invariant systems,
  u(t) = e^{jωt}   ⟹   y_ss(t) = H(ω) e^{jωt}.

Frequency Response: Other representations
If the system is given in convolution form, y(t) = ∫_0^t g(t−τ) u(τ) dτ, then H(ω) = Ĝ(ω).
If the system is given in linear ODE form,
  y^[n](t) + a_1 y^[n−1](t) + ... + a_{n−1} y^[1](t) + a_n y(t) = b_0 u^[n](t) + b_1 u^[n−1](t) + ... + b_{n−1} u^[1](t) + b_n u(t),
then
  H(ω) = [b_0 (jω)^n + b_1 (jω)^{n−1} + ... + b_{n−1} (jω) + b_n] / [(jω)^n + a_1 (jω)^{n−1} + ... + a_{n−1} (jω) + a_n].
Finally, if the system is given in 1st-order (state-space) form, ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t), then
  H(ω) = D + C (jωI − A)^{−1} B.
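The rational-function and state-space formulas give the same H(ω); a quick numerical confirmation for an assumed two-state example (the matrices below are my own illustration).

```python
import numpy as np

# assumed example: xdot = A x + B u, y = C x, whose transfer function is 1/((jw)^2 + 2 jw + 4)
A = np.array([[0.0, 1.0], [-4.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
num, den = [1.0], [1.0, 2.0, 4.0]

for w in [0.1, 1.0, 3.0, 10.0]:
    jw = 1j * w
    H_ss = (D + C @ np.linalg.inv(jw * np.eye(2) - A) @ B)[0, 0]   # D + C (jwI - A)^-1 B
    H_tf = np.polyval(num, jw) / np.polyval(den, jw)               # rational-function formula
    print(w, H_ss, H_tf)    # identical up to rounding
```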

Complex Arithmetic: Review
Suppose G ∈ C is not equal to zero. The magnitude of G is denoted |G| and defined as
  |G| := ([Re G]² + [Im G]²)^{1/2}.
The angle ∠G is a real number, unique to within additive multiples of 2π, which has the properties
  cos ∠G = Re G / |G|,   sin ∠G = Im G / |G|.
Then, for any real θ, writing G = G_R + j G_I,
  Re(G e^{jθ}) = Re[(G_R + j G_I)(cos θ + j sin θ)]
              = G_R cos θ − G_I sin θ
              = |G| [(G_R/|G|) cos θ − (G_I/|G|) sin θ]
              = |G| [cos ∠G cos θ − sin ∠G sin θ]
              = |G| cos(θ + ∠G),
and, similarly, Im(G e^{jθ}) = |G| sin(θ + ∠G).

Complex Arithmetic: Real-valued Interpretation
g is real, u is complex, so y(t) := ∫_0^t g(t−τ) u(τ) dτ is complex. The linearity, obvious from the integral form, implies that the real part of u leads to/causes the real part of y, and the imaginary part of u leads to the imaginary part of y, namely
  y(t) = ∫_0^t g(t−τ) u(τ) dτ   ⟹   y_R(t) = ∫_0^t g(t−τ) u_R(τ) dτ,   y_I(t) = ∫_0^t g(t−τ) u_I(τ) dτ.
In steady state (after transients decay), we saw u(t) = e^{jωt} ⟹ y(t) = Ĝ(ω) e^{jωt}. The real and imaginary parts mean
  u(t) = cos ωt   ⟹   y(t) = |Ĝ(ω)| cos(ωt + ∠Ĝ(ω)),
  u(t) = sin ωt   ⟹   y(t) = |Ĝ(ω)| sin(ωt + ∠Ĝ(ω)).

A most important feedback loop... Feedback around an integrator
Diagram and equations: the error r − y − n drives an integrator whose state x is scaled by β; the disturbance d adds at the output:
  ẋ(t) = r(t) − y(t) − n(t),   y(t) = d(t) + β x(t).
Eliminating x yields
  ẏ(t) + β y(t) = β r(t) + ḋ(t) − β n(t).
Frequency response functions (r → y, d → y and n → y):
  G_{r→y}(ω) = −G_{n→y}(ω) = β/(jω + β),   G_{d→y}(ω) = jω/(jω + β).
Properties: if r(t) ≡ r̄ and d(t) ≡ d̄, then y(t) = r̄ + e^{−βt}(β x_0 + d̄ − r̄) → r̄. ḋ, not d itself, affects y, so a slowly varying d has little effect on y. The bandwidth of the closed-loop system is β; the time constant is 1/β. r and n, though interpreted differently, enter in essentially the same manner; the feedback loop and integrator combine to form a low-pass filter from r (and n) to y. This is apparent from time simulations and from the frequency response function plots.
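A short simulation sketch of this loop (β, the step reference, and the slowly varying disturbance below are illustrative choices), confirming that y settles to r and that a slowly varying d has little effect, as claimed.

```python
import numpy as np

beta, dt = 2.0, 1e-3
t = np.arange(0, 12 / beta, dt)
r = np.ones_like(t)                       # step reference
d = 0.5 * np.sin(0.1 * beta * t)          # slowly varying disturbance
n = np.zeros_like(t)                      # no measurement noise in this run

x = 0.0
y = np.zeros_like(t)
for k in range(len(t)):
    y[k] = d[k] + beta * x                # y = d + beta x
    x += dt * (r[k] - y[k] - n[k])        # xdot = r - y - n
print("final tracking error |r - y|:", abs(r[-1] - y[-1]))
```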

MIFL: Step/Frequency Responses
For the loop above,
  ẏ(t) + β y(t) = β r(t) + ḋ(t) − β n(t),
  G_{r→y}(ω) = −G_{n→y}(ω) = β/(jω + β),   G_{d→y}(ω) = jω/(jω + β).
[Figure: time responses of the reference r, the output y, the disturbance d, and the state x (time axis in multiples of 1/β), alongside the frequency magnitude and phase responses from r → y and the magnitude response from d → y (frequency axis in multiples of β).]

MIFL Application: Integral Control
Process model: y(t) = H u(t) + d(t), with H an uncertain gain of the process and d an exogenous disturbance. Goal: regulate y to a given value r, even in the presence of slowly varying d and measurement noise n. A solution: integral control action,
  u(t) = K_I ∫_0^t e(τ) dτ   (equivalently: u(0) = 0, u̇(t) = K_I e(t)),
where e is the measured regulation error (in the loop, e drives the integrator state x, u = K_I x, and y = H u + d, with n corrupting the measurement of y). After K_I is chosen, certain properties of the closed-loop system are insensitive to H, while others are still 1-1 sensitive to H:
  Description                               | Value        | Sensitivity to H
  Time constant                             | 1/(H K_I)    | 1
  Time delay for instability                | π/(2 H K_I)  | 1
  (r − y)_ss for r(t) ≡ r̄, d(t) ≡ d̄         | 0            | 0
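To see the insensitivity claims in action, here is a simulation sketch of the integral controller u̇ = K_I e applied to y = Hu + d for a few made-up values of the uncertain gain H: the settling speed changes with H, but the steady-state value of y reaches r every time.

```python
import numpy as np

KI, r, d, dt = 1.0, 1.0, 0.3, 1e-3
t = np.arange(0, 20, dt)

for H in [0.5, 1.0, 2.0]:                  # uncertain process gain
    u = 0.0
    y = np.zeros_like(t)
    for k in range(len(t)):
        y[k] = H * u + d                   # static process y = H u + d
        u += dt * KI * (r - y[k])          # integral action, udot = KI * e
    print(f"H={H}: time constant ~ {1/(H*KI):.2f}, final y = {y[-1]:.4f} (r = {r})")
```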

MIFL Application: Some plots
[Figure: experimental process data runs; several plots of the (u, y) relationship y = Hu + d for different values of H and d.] Fix these, and try the integral control solution to regulate y. [Figure: closed-loop time responses of y(t) for a staircase reference r.] Note that the time constant is affected by the variability in H, but the steady-state tracking (y = r) is not. [Figure: corresponding value of the control input u.] Even though the regulatory strategy is fixed, namely u(t) = K_I ∫_0^t e(τ) dτ, the value of u clearly depends on the specific d and H.

What limits bandwidth? Discussion
Without loss of generality, take H_nominal := 1; drop r from the discussion (r = 0, or recenter variables around the value of r); and give the sensor a model, S. The closed-loop (nominal) system is then: the error e drives the integrator state x, u = K_I x, y is produced by H (nominally H = 1) plus the disturbance d, and the sensor S returns the measurement corrupted by n. Here, K_I sets the bandwidth of the system. What limits our choice? Time delay in the feedback path. Tradeoff between the effects of d and n on y. H may actually not have constant gain at all frequencies; either we know this, and use a more complex corrective strategy (Wednesday), or we don't know this, or choose not to figure out H's behavior at high frequencies (for instance, too difficult and/or unreliable to predict).

What limits bandwidth? Case 1: Lag/Delay in the Feedback Path
If the feedback signal is subject to a time delay of magnitude T, some of the properties are adversely affected. Diagram and equations:
  ẋ(t) = r(t) − y(t − T),   y(t) = d(t) + β x(t).
Eliminating x yields ẏ(t) = ḋ(t) + β [r(t) − y(t − T)], or
  ẏ(t) + β y(t − T) = β r(t) + ḋ(t).
Properties: if T < π/(2β), then the system is stable. Time delay in the feedback loop degrades the system's ability to reject a rapidly changing disturbance. [Figure: time responses of r and y for T = {0, 0.1, 0.3, 0.5, 0.7, 0.9} × π/(2β).]
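A simple Euler simulation sketch of the delayed loop (β and the delay values are illustrative), showing stable step responses for delays below π/(2β) and divergence just above it.

```python
import numpy as np

beta, dt = 2.0, 1e-3
Tcrit = np.pi / (2 * beta)                  # stability limit from the slide
t = np.arange(0, 40 / beta, dt)

for frac in [0.5, 0.9, 1.1]:                # delay as a fraction of pi/(2*beta)
    nd = int(round(frac * Tcrit / dt))      # delay expressed in samples
    x = 0.0
    y = np.zeros(len(t))
    for k in range(len(t)):
        y[k] = beta * x                     # y(t) = beta x(t)   (d = n = 0)
        y_delayed = y[k - nd] if k >= nd else 0.0
        x += dt * (1.0 - y_delayed)         # xdot = r - y(t - T), with r = 1
    print(f"T = {frac:.1f} * pi/(2 beta):  peak |y| near the end = {np.abs(y[-5000:]).max():.2f}")
```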

What limits bandwidth? Case 2: Sensor Noise
Consider given power spectral densities for independent d and n:
  Φ_d(ω) = γ²/ω²,   Φ_n(ω) = σ².
With integral feedback, the PSD of y and the variance of a weighted (by a scalar q) multiple of y are
  Φ_y(ω) = (γ² + K_I² σ²)/(ω² + K_I² S²),   E(q² y²(t)) = q² (γ² + K_I² σ²)/(2 K_I S).
The integral gain which minimizes the variance is K_I = γ/σ, leading to a closed-loop bandwidth of BW = γS/σ and variance
  E(qy)²(t) = q² γσ/S.
A specification imposes a lower bound on bandwidth: if E(qy)²(t) ≤ M is a requirement, then we must have σ ≤ MS/(q²γ), and relating this to bandwidth gives BW ≥ q²γ²/M. This has implications on how much the actual process H can deviate from its idealized model within the frequency range [0, BW].
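A numerical sketch (with illustrative γ, σ, S, q) that recovers the optimal gain K_I = γ/σ by integrating Φ_y over a frequency grid and minimizing the resulting variance over K_I.

```python
import numpy as np

gamma, sigma, S, q = 1.0, 0.2, 1.0, 1.0
w = np.linspace(1e-3, 2000.0, 400001)          # frequency grid for the integral
dw = w[1] - w[0]

def variance(KI):
    Phi_y = (gamma**2 + KI**2 * sigma**2) / (w**2 + KI**2 * S**2)
    return q**2 * np.sum(Phi_y) * dw / np.pi    # (1/2pi) * integral over the whole real line

KIs = np.linspace(0.5, 20.0, 200)
vals = np.array([variance(KI) for KI in KIs])
print("numerical minimizer:", KIs[vals.argmin()], " formula gamma/sigma:", gamma / sigma)
print("minimum variance   :", vals.min(), " formula q^2 gamma sigma / S:", q**2 * gamma * sigma / S)
```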

What limits bandwidth? Case 3: Process Uncertainty
In the perceived loop the process is the nominal gain H̄; in reality it is some H. How much can the process H change if the disturbance response
  G(ω) := G_{d→y}(ω) = 1/(1 + H̄ K_I S/(jω)) = jω/(jω + H̄ K_I S)
is not to degrade significantly? Let BW denote the bandwidth here, BW = H̄ K_I S. Pick R > 0. It is easy to show that, for all stable H satisfying
  |H(jω) − H̄| ≤ |H̄| (R/(1 + R)) |jω + BW|/BW   for all ω,
the perturbed disturbance response G̃ satisfies |G̃(ω)| ≤ (1 + R) |G(ω)|. [Figure: the allowed percentage deviation (R/(1+R)) |jω + BW|/BW versus frequency, from 0.01·BW to 100·BW, for R = 10, 1, and 0.1.] Take R = 0.1 (for example). For a guarantee of no surprises (no more than 10% degradation across frequency), one should be able to say that |H(jω) − H̄| < |H̄| for ω ∈ [0, 10·BW]. The statement above is general, and tight, in that no stronger statement can be made. There is probably a better way to get the take-home message across...

Effect of Process/Model Mismatch: Toy Example
Take γ = 1, S = 1, q = 1 and σ ranging from 0.5 to 3, giving BW = γS/σ (from 2 down to 1/3) and E(qy)²(t) = q²γσ/S = σ. Suppose that the (u, d) → y relationship is not simply y(t) = u(t) + d(t), but rather y(t) = f(t) + d(t), where f behaves as a second-order system with ω_n = 10 and ξ = 0.1:
  f̈(t) + 2ξω_n ḟ(t) + ω_n² f(t) = ω_n² u(t).
[Figure: frequency responses of H(= 1) and the actual process. They are similar over [0, 1], and differ by roughly 10% at ω = 3 and by on the order of 100% at ω = 8.] As σ decreases (and is exploited by increasing the bandwidth), the output variance decreases. But for large bandwidths (about 1.4 and higher), the performance actually degrades as one attempts to exploit the sensor quality. [Figure: actual vs. expected output variance as a function of the noise level σ, and the percentage mismatch between the actual process and the nominal model compared against the robustness bounds, versus frequency.]

Multi-Input, Multi-Output MIFL
Many control inputs, many disturbances, many sensors (not the regulated variables). Process, measurement, error criterion:
  y(t) = u(t) + d(t),   y_m(t) = S y(t) + n(t),   z(t) = Q y(t).
For now, assume all are of the same dimension, so S is a square matrix. Statistical descriptions of d and n: say, for instance,
  Φ_d(ω) = (1/ω²) ΓΓ^T,   Φ_n(ω) = N N^T.
Note that everything is a matrix: Γ, N, S, Q. Each component of the problem has its own preferred directions, and these will interact...
Goal: find the best feedback strategy, min_C E z^T(t) z(t).
Solution: it is easy to use the singular value decomposition to reduce this to many scalar problems (exercise)...
Facts: the optimal control is integral control, u(t) = K_I x(t), ẋ(t) = e(t). K_I depends in a complicated way on the directionality/magnitudes of the matrices Γ, S, N (though not on Q). The feedback loop has many bandwidths (the eigenvalues of the matrix S K_I).

Linear Algebra: Singular Value Decomposition (SVD)
Theorem: given M ∈ F^{n×m}, there exist U ∈ F^{n×n} with U*U = I_n, V ∈ F^{m×m} with V*V = I_m, an integer k ≤ min(n, m), and real numbers σ_1 ≥ σ_2 ≥ ... ≥ σ_k > 0 such that
  M = U [Σ 0; 0 0] V*,
where Σ ∈ R^{k×k} is Σ = diag(σ_1, σ_2, ..., σ_k). We need to apply it to real, square, invertible matrices...

Multi-Input, Multi-Output MIFL: Solution
Process, measurement, error criterion:
  y(t) = u(t) + d(t),   y_m(t) = S y(t) + n(t),   z(t) = Q y(t).
Statistical descriptions of d and n: Φ_d(ω) = (1/ω²) ΓΓ^T, Φ_n(ω) = N N^T.
Solution:
1. Calculate the SVD of Γ =: U_Γ Σ_Γ V_Γ^T.
2. Calculate the SVD of N =: U_N Σ_N V_N^T.
3. Calculate the SVD of Σ_N^{−1} U_N^T S U_Γ Σ_Γ =: U Σ V^T.
4. Define K_I := U_Γ Σ_Γ V U^T Σ_N^{−1} U_N^T.
This is a special case of the LQG problem.
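The four-step recipe is easy to exercise with numpy; the sketch below follows the steps as stated, using random placeholder matrices for Γ and N and a sensor matrix S near the identity (all made up for illustration), and then reports the loop bandwidths, the eigenvalues of S K_I.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
Gamma = rng.standard_normal((n, n))                      # disturbance directionality (placeholder)
N     = rng.standard_normal((n, n))                      # noise directionality (placeholder)
S     = np.eye(n) + 0.2 * rng.standard_normal((n, n))    # sensor matrix (placeholder)

# Steps 1-2: SVDs of Gamma and N
Ug, sg, Vgt = np.linalg.svd(Gamma)
Un, sn, Vnt = np.linalg.svd(N)
Sg, Sn = np.diag(sg), np.diag(sn)

# Step 3: SVD of Sigma_N^{-1} U_N^T S U_Gamma Sigma_Gamma
U, s, Vt = np.linalg.svd(np.linalg.inv(Sn) @ Un.T @ S @ Ug @ Sg)

# Step 4: the integral gain matrix
KI = Ug @ Sg @ Vt.T @ U.T @ np.linalg.inv(Sn) @ Un.T

print("loop bandwidths, eig(S KI) :", np.sort(np.linalg.eigvals(S @ KI).real))
print("singular values from step 3:", np.sort(s))   # numerically these coincide
```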

Linear-Quadratic Gaussian: Problem Statement
General dynamical system setup for the process:
  ẋ(t) = A x(t) + B_1 d(t) + B_2 u(t)
  e(t) = C_1 x(t) + D_12 u(t)
  y(t) = C_2 x(t) + D_21 d(t) + D_22 u(t).
Assumptions: all matrices are known; d is zero mean, with Φ_d(ω) = I (absorb the actual PSD into the process model); we measure y and manipulate u.
Goal: find the best dynamic, linear control strategy (matrices F, G, H, L),
  η̇(t) = F η(t) + G y(t),   u(t) = H η(t) + L y(t),
to minimize E e^T(t) e(t).
Solution: well known since the 1960s (Kalman, Bucy, Kushner, Wonham, Fleming, and others).
Computation: it is easy to compute the controller matrices by solving 2 quadratic matrix equations; the ordered Schur decomposition is the main tool.
Issues (1978): there are no guarantees as to how sensitive the achieved closed-loop performance is to variations in the process behavior [Doyle, IEEE TAC].
Robust Control (1978-199X): tempering the optimization based on a description of what is possibly unreliable in the process model.

2nd MIFL: Controlling the position of an inertia
Diagram and equations: a controller with proportional gain K_P, integral gain K_I (acting on the integrated error), and rate-feedback gain K_D drives a force u into an inertia m; position and rate measurements are available:
  m ẍ(t) = u(t) + d(t),   y_1 = x + n_1,   y_2 = ẋ + n_2.
Controller equations:
  e(t) = r(t) − y_1(t),   ż(t) = e(t),   u(t) = K_P e(t) + K_I z(t) − K_D y_2(t).
Eliminating z and u:
  m x^[3](t) + K_D ẍ(t) + K_P ẋ(t) + K_I x(t) = K_I r(t) + K_P ṙ(t) + ḋ(t) − K_P ṅ_1(t) − K_I n_1(t) − K_D ṅ_2(t).
Facts: knowledge of m implies the characteristic polynomial can be set arbitrarily. With n_i ≡ 0, r(t) ≡ r̄ and d(t) ≡ d̄, x(t) → r̄. K_P gives the initial control reaction to error, K_I keeps fighting low-frequency biases, and K_D adds damping.

2nd MIFL: Design Equations
The characteristic polynomial is
  p(λ) = λ³ + (K_D/m) λ² + (K_P/m) λ + K_I/m.
Parametrize the roots with positive real numbers ξ, ω_n, α as −ξω_n ± jω_n√(1 − ξ²) and −αω_n, which implies
  p(λ) = (λ² + 2ξω_n λ + ω_n²)(λ + αω_n) = λ³ + ω_n(2ξ + α) λ² + ω_n²(2ξα + 1) λ + ω_n³ α.
Matching coefficients yields the design equations
  K_I = m ω_n³ α,   K_P = m ω_n² (2ξα + 1),   K_D = m ω_n (2ξ + α).
Look at results for ξ = 0.707, and α = 0, 0.4, 2.5. Start with a robust stability calculation: how much variation can be tolerated in the process behavior, which nominally is m ẍ(t) = u(t)? [Figure: the maximum allowable percentage variation in the u → x behavior (described in terms of frequency response) for which closed-loop stability is guaranteed to be maintained, plotted from 0.01 ω_n to 100 ω_n.]
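The design equations are easy to exercise numerically; a sketch with assumed values m = 1, ω_n = 2, ξ = 0.707 and α = 0.4 (illustrative, not a prescribed case), verifying that the closed-loop characteristic roots land where the parametrization intends.

```python
import numpy as np

m, xi, wn, alpha = 1.0, 0.707, 2.0, 0.4      # assumed values for illustration
KI = m * wn**3 * alpha
KP = m * wn**2 * (2 * xi * alpha + 1)
KD = m * wn * (2 * xi + alpha)

# closed-loop characteristic polynomial m s^3 + KD s^2 + KP s + KI
achieved = np.roots([m, KD, KP, KI])
intended = np.array([complex(-xi * wn,  wn * np.sqrt(1 - xi**2)),
                     complex(-xi * wn, -wn * np.sqrt(1 - xi**2)),
                     -alpha * wn])
print("gains K_I, K_P, K_D:", KI, KP, KD)
print("achieved roots:", np.sort_complex(achieved))
print("intended roots:", np.sort_complex(intended))
```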

Results: Frequency Response Functions
The closed-loop frequency response from R → X is
  (K_P (jω) + K_I) / (m (jω)³ + K_D (jω)² + K_P (jω) + K_I).
[Figure: magnitude and phase of the frequency response from R → X, the magnitude of the (normalized) frequency response from D → X, and the magnitudes of the frequency responses from N_1, N_2 → X, plotted over frequencies from 0.01 ω_n to 100 ω_n.]

Results: Time Responses
[Figure: the applied reference signal r (a staircase), the applied disturbance signal d, the output (x) response, and the control action u, plotted against normalized time.]

Reduction in sensitivity from feedback
Constraints are y = d + Le, e = r − y. Eliminating e (for instance) gives
  y = [L/(1 + L)] r + [1/(1 + L)] d =: T(L) r + S(L) d,
with T (or T(L)) relating r to y and S (or S(L)) relating d to y. Obviously, L > 0 (which is negative feedback) means 1/(1 + L) < 1. Suppose L changes to L + Δ. Obviously T changes as well. Compare the percentage change in T to the percentage change in L:
  (% change in T)/(% change in L) = [(T(L + Δ) − T(L))/T(L)] / [Δ/L] = [(T(L + Δ) − T(L))/Δ] · [L/T(L)].
Compute for differential changes in L (take the limit Δ → 0), giving
  (% change in T)/(% change in L) = (dT/dL) · L/T(L) = [1/(1 + L)²] · L/T(L) = 1/(1 + L) = S.
This is why Bode called S the sensitivity function.
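A two-line numerical check of this (the loop gain L = 9 below is an arbitrary choice): the ratio of percentage changes matches S = 1/(1 + L).

```python
L, dL = 9.0, 1e-6
T = lambda L: L / (1 + L)
ratio = ((T(L + dL) - T(L)) / T(L)) / (dL / L)    # % change in T over % change in L
print(ratio, 1 / (1 + L))                         # both ~ 0.1
```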

Linearizing effect of Feedback
Constraints are y = d + φ(e), e = K(r − y) (equivalently, y = r − e/K), whose solution (for certain φ) implicitly defines a function ŷ(r, d). The chain rule implies
  ∂ŷ/∂r = Kφ′(e) [1 − ∂ŷ/∂r],   ∂ŷ/∂d = 1 − Kφ′(e) ∂ŷ/∂d,
where e = K(r − ŷ(r)). Rearranging gives
  ∂ŷ/∂r = Kφ′(e)/(Kφ′(e) + 1),   ∂ŷ/∂d = 1/(Kφ′(e) + 1).
Note: if Kφ′ >> 1 everywhere, then the function ŷ is more linear in r than φ, and nearly unaffected by d. [Figure: graphical solution of y, showing the line y = r − e/K and the curve y = φ(e); their intersection gives y(r).]
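The implicit function ŷ(r, d) can be computed directly; the sketch below uses a made-up saturating nonlinearity φ(e) = 2 tanh(e) and gains K = 1 and K = 50, solving y = d + φ(K(r − y)) by bisection, and shows the closed-loop map from r to y becoming nearly linear (y ≈ r) as K grows.

```python
import numpy as np

def phi(e):                        # made-up saturating nonlinearity
    return 2.0 * np.tanh(e)

def yhat(r, d, K):
    # solve y = d + phi(K*(r - y)) for y by bisection (the residual is monotone in y)
    lo, hi = -10.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid - d - phi(K * (r - mid)) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rs = np.linspace(-1.0, 1.0, 5)
for K in [1.0, 50.0]:
    ys = [yhat(r, 0.0, K) for r in rs]
    print(f"K = {K:4.0f}:  y(r) =", np.round(ys, 3))
```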

Linearizing effect of Feedback: Dynamic Example
Replace K by an integrator and inject a sine wave, r = 10 sin 0.1t. At ω = 0.1, the gain from the integrator is 10. [Figure: three static nonlinearities, (1) y = x + 0.1x³, (2) the piecewise-linear function y = 2e for e < 0, y = 4e for 0 ≤ e ≤ 2, y = e + 6 for e > 2, and (3) y = 2e − 0.5e² + 5; for each, the reference, the closed-loop output, and the open-loop output versus time, and a scatter plot of reference r versus input e, compared to the inverse of the nonlinearity.] The scatter plots of r vs. e look like e = φ^{−1}(r), i.e., the inverse function of φ(e).

Linearizing effect of Feedback: Dynamic Example (contd)
However, if r = 10 sin t, the gain from the integrator at ω = 1 is only 1, and the time responses for this system with the same nonlinear functions are shown. [Figure: for each of the three nonlinearities, the reference, the closed-loop output, and the open-loop output versus time, and a scatter plot of reference r versus input e, compared to the inverse of the nonlinearity.] Notice that the output y does not track the reference r as well as when r = 10 sin 0.1t. Also, the scatter plots of r vs. e have more dispersion and indicate that e does not invert φ(·) as well as in the previous case. This example shows that feedback can have a linearizing effect when the gain is large enough.