Linear Systems. Manfred Morari, Melanie Zeilinger. Institut für Automatik, ETH Zürich; Institute for Dynamic Systems and Control, ETH Zürich
1 Linear Systems
Manfred Morari, Melanie Zeilinger
Institut für Automatik, ETH Zürich; Institute for Dynamic Systems and Control, ETH Zürich
Spring Semester 2016
Linear Systems M. Morari, M. Zeilinger - Spring Semester 2016
2 Table of Contents
1. Linear Systems
   1.1 Models of Dynamic Systems
   1.2 Analysis of LTI Discrete-Time Systems
2. Linear Quadratic Optimal Control
   2.1 Optimal Control
   2.2 Batch Approach
   2.3 Recursive Approach
   2.4 Infinite Horizon Optimal Control
3. Uncertainty Modeling
   3.1 Objective Statement, Stochastic Processes
   3.2 Modeling using State Space Descriptions
   3.3 Obtaining Models from First Principles
   3.4 Obtaining Models from System Identification
4. State Estimation
   4.1 Linear State Estimation
   4.2 State Observer
   4.3 Kalman Filter
4 Linear Systems: Models of Dynamic Systems
5 Models of Dynamic Systems
Goal: Introduce the mathematical models used in Model Predictive Control (MPC) to describe the behavior of dynamic systems.
- Model classification: state space / transfer function, linear / nonlinear, time-varying / time-invariant, continuous-time / discrete-time, deterministic / stochastic
- Unless stated otherwise, we use deterministic models
- Models of physical systems derived from first principles are mainly nonlinear, time-invariant, continuous-time, state space models (*)
- Target models for standard MPC are mainly linear, time-invariant, discrete-time, state space models (**)
- The focus of this section is on how to transform (*) into (**)
6 Nonlinear, Time-Invariant, Continuous-Time, State Space Models (1/3)
$\dot{x} = g(x, u)$
$y = h(x, u)$
$x \in \mathbb{R}^n$ state vector, $u \in \mathbb{R}^m$ input vector, $y \in \mathbb{R}^p$ output vector
$g(x, u): \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ system dynamics
$h(x, u): \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^p$ output function
- Very general class of models
- Higher order ODEs can easily be brought to this form (next slide)
- Analysis and control synthesis are generally hard → linearization to bring it to linear, time-invariant (LTI), continuous-time, state space form
7 Nonlinear, Time-Invariant, Continuous-Time, State Space Models (2/3)
Equivalence of one $n$-th order ODE and $n$ first-order ODEs
$x^{(n)} + g_n(x, \dot{x}, \ddot{x}, \dots, x^{(n-1)}) = 0$
Define $x_{i+1} = x^{(i)}$, $i = 0, \dots, n-1$.
Transformed system:
$\dot{x}_1 = x_2$
$\dot{x}_2 = x_3$
$\ \vdots$
$\dot{x}_{n-1} = x_n$
$\dot{x}_n = -g_n(x_1, x_2, \dots, x_n)$
8 Nonlinear, Time-Invariant, Continuous-Time, State Space Models (3/3)
Example: Pendulum
- Moment of inertia w.r.t. the rotational axis: $m l^2$
- Torque caused by the external force: $T_c$
- Torque caused by gravity: $m g l \sin(\theta)$
System equation: $m l^2 \ddot{\theta} = T_c - m g l \sin(\theta)$
Using $x_1 \triangleq \theta$, $x_2 \triangleq \dot{\theta} = \dot{x}_1$ and $u \triangleq T_c / (m l^2)$, the system can be brought to standard form
$\dot{x} = \begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix} = \begin{pmatrix} x_2 \\ -\frac{g}{l}\sin(x_1) + u \end{pmatrix} = g(x, u)$
The output equation depends on the measurement configuration; e.g., if $\theta$ is measured, then $y = h(x, u) = x_1$.
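The pendulum model above can be simulated directly in its first-order form. The following sketch (not part of the lecture) integrates the unforced nonlinear dynamics with scipy; the value $g/l = 10\,\mathrm{s}^{-2}$ is an assumption borrowed from the discretization example later in this section.

```python
import numpy as np
from scipy.integrate import solve_ivp

G_OVER_L = 10.0  # assumed g/l in 1/s^2, matching the later discretization example

def pendulum(t, x, u=0.0):
    """Nonlinear pendulum dynamics xdot = g(x, u) in standard first-order form."""
    x1, x2 = x
    return [x2, -G_OVER_L * np.sin(x1) + u]

# Simulate the unforced pendulum from a small initial angle of 0.1 rad
sol = solve_ivp(pendulum, (0.0, 2.0), [0.1, 0.0], args=(0.0,), max_step=0.01)
theta = sol.y[0]  # y = h(x, u) = x1 when the angle is measured
```

Since the unforced pendulum conserves energy, the angle oscillates without exceeding its initial amplitude (up to integration error).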
9 LTI Continuous-Time State Space Models (1/6)
$\dot{x} = A_c x + B_c u$
$y = C x + D u$
$x \in \mathbb{R}^n$ state vector, $u \in \mathbb{R}^m$ input vector, $y \in \mathbb{R}^p$ output vector
$A_c \in \mathbb{R}^{n \times n}$ system matrix, $B_c \in \mathbb{R}^{n \times m}$ input matrix
$C \in \mathbb{R}^{p \times n}$ output matrix, $D \in \mathbb{R}^{p \times m}$ throughput matrix
- A vast theory exists for the analysis and control synthesis of linear systems
- Exact solution (next slide)
10 LTI Continuous-Time State Space Models (2/6)
Solution to linear ODEs
Consider the ODE (written with explicit time dependence)
$\dot{x}(t) = A_c x(t) + B_c u(t)$
with initial condition $x_0 \triangleq x(t_0)$; its solution is given by
$x(t) = e^{A_c (t - t_0)} x_0 + \int_{t_0}^{t} e^{A_c (t - \tau)} B_c u(\tau)\, d\tau$
where $e^{A_c t} \triangleq \sum_{n=0}^{\infty} \frac{(A_c t)^n}{n!}$
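The matrix exponential in the solution formula can be checked numerically. A small sketch (my own, not from the slides): truncate the power series defining $e^{A_c t}$ and compare it with scipy's `expm`; the matrix is the linearized-pendulum $A_c$ used later in this section.

```python
import numpy as np
from scipy.linalg import expm

# A_c of the pendulum linearized about pi/4 with g/l = 10 (example values)
Ac = np.array([[0.0, 1.0], [-10.0 / np.sqrt(2.0), 0.0]])
t = 0.1

# Truncated power series: e^{Ac t} = sum_{n>=0} (Ac t)^n / n!
series = np.zeros_like(Ac)
term = np.eye(2)
for n in range(1, 30):
    series += term             # add (Ac t)^{n-1} / (n-1)!
    term = term @ (Ac * t) / n  # next term of the series

exact = expm(Ac * t)  # scipy's matrix exponential
```

For this small matrix the 30-term truncation already agrees with `expm` to machine precision.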
11 LTI Continuous-Time State Space Models (3/6)
Problem: Most physical systems are nonlinear, but linear systems are much better understood.
- Nonlinear systems can be well approximated by a linear system in a small neighborhood around a point in state space
Idea: Control keeps the system around some operating point → replace the nonlinear system by a system linearized around the operating point.
First-order Taylor expansion of $f(\cdot)$ around $\bar{x}$:
$f(x) \approx f(\bar{x}) + \left.\frac{\partial f}{\partial x}\right|_{x=\bar{x}} (x - \bar{x})$, with
$\frac{\partial f}{\partial x} = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & & & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{pmatrix}$
12 LTI Continuous-Time State Space Models (4/6)
Linearization
$u_s$ keeps the system at a stationary operating point $x_s$:
$\dot{x}_s = g(x_s, u_s) = 0, \quad y_s = h(x_s, u_s)$
Expanding the dynamics around $(x_s, u_s)$:
$\dot{x} = \underbrace{g(x_s, u_s)}_{=0} + \underbrace{\left.\tfrac{\partial g}{\partial x}\right|_{x_s, u_s}}_{=A_c} \underbrace{(x - x_s)}_{=\Delta x} + \underbrace{\left.\tfrac{\partial g}{\partial u}\right|_{x_s, u_s}}_{=B_c} \underbrace{(u - u_s)}_{=\Delta u}$
Since $\dot{x}_s = 0$ we have $\dot{x} = \Delta\dot{x}$, hence $\Delta\dot{x} = A_c \Delta x + B_c \Delta u$.
Similarly for the output:
$y = \underbrace{h(x_s, u_s)}_{=y_s} + \underbrace{\left.\tfrac{\partial h}{\partial x}\right|_{x_s, u_s}}_{=C} (x - x_s) + \underbrace{\left.\tfrac{\partial h}{\partial u}\right|_{x_s, u_s}}_{=D} (u - u_s)$
$\Delta y = y - y_s = C \Delta x + D \Delta u$
13 LTI Continuous-Time State Space Models (5/6)
Linearization
- The linearized system is written in terms of the deviation variables $\Delta x$, $\Delta u$, $\Delta y$
- The linearized system is only a good approximation for small $\Delta x$, $\Delta u$
- Subsequently, $x$, $u$ and $y$ are used instead of $\Delta x$, $\Delta u$ and $\Delta y$ for brevity
14 LTI Continuous-Time State Space Models (6/6)
Example: Linearization of the pendulum equations
$\dot{x} = \begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix} = \begin{pmatrix} x_2 \\ -\frac{g}{l}\sin(x_1) + u \end{pmatrix} = g(x, u), \qquad y = x_1 = h(x, u)$
We want to keep the pendulum around $x_s = (\pi/4, 0)$, which requires $u_s = \frac{g}{l}\sin(\pi/4)$.
$A_c = \left.\frac{\partial g}{\partial x}\right|_{x_s, u_s} = \begin{pmatrix} 0 & 1 \\ -\frac{g}{l}\cos(\pi/4) & 0 \end{pmatrix}, \qquad B_c = \left.\frac{\partial g}{\partial u}\right|_{x_s, u_s} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$
$C = \left.\frac{\partial h}{\partial x}\right|_{x_s, u_s} = (1 \;\; 0), \qquad D = \left.\frac{\partial h}{\partial u}\right|_{x_s, u_s} = 0$
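The analytic Jacobians above can be cross-checked with finite differences. A minimal sketch (my own, assuming $g/l = 10$ as in the next slides): approximate $A_c$ and $B_c$ numerically and compare with the closed-form expressions.

```python
import numpy as np

G_OVER_L = 10.0                        # assumed g/l
xs = np.array([np.pi / 4, 0.0])        # operating point x_s
us = G_OVER_L * np.sin(np.pi / 4)      # stationary input u_s, so g(xs, us) = 0

def g(x, u):
    """Pendulum dynamics from the example slide."""
    return np.array([x[1], -G_OVER_L * np.sin(x[0]) + u])

# Forward-difference Jacobians: columns of Ac are dg/dx_i, Bc is dg/du
eps = 1e-6
Ac = np.column_stack([(g(xs + eps * e, us) - g(xs, us)) / eps for e in np.eye(2)])
Bc = ((g(xs, us + eps) - g(xs, us)) / eps).reshape(2, 1)
```

The result should match $A_c = \begin{pmatrix}0 & 1\\ -\frac{g}{l}\cos(\pi/4) & 0\end{pmatrix}$ and $B_c = (0\;\;1)'$ up to the finite-difference error.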
15 LTI Discrete-Time State Space Models (1/4)
Linear discrete-time systems are described by linear difference equations
$x(k+1) = A x(k) + B u(k)$
$y(k) = C x(k) + D u(k)$
- Inputs and outputs of a discrete-time system are defined only at discrete time points, i.e. its inputs and outputs are sequences defined for $k \in \mathbb{Z}^+$
- Discrete-time systems describe either
  1. inherently discrete systems, e.g. the balance of a bank savings account at the $k$-th month, $x(k+1) = (1 + \alpha)x(k) + u(k)$, or
  2. transformed continuous-time systems
16 LTI Discrete-Time State Space Models (2/4)
- The vast majority of controlled systems are not inherently discrete-time systems
- Controllers are almost always implemented on microprocessors
- The finite computation time must be considered in the control system design → discretize the continuous-time system
- Discretization is the procedure of obtaining an equivalent discrete-time system from a continuous-time system
- The discrete-time model describes the state of the continuous-time system only at particular time instants $t_k$, $k \in \mathbb{Z}^+$, where $t_{k+1} = t_k + T_s$ and $T_s$ is called the sampling time
- Usually $u(t) = u(t_k)\ \forall t \in [t_k, t_{k+1})$ is assumed (and implemented)
17 LTI Discrete-Time State Space Models (3/4)
Discretization of LTI continuous-time state space models
Recall the solution of the ODE
$x(t) = e^{A_c (t - t_0)} x_0 + \int_{t_0}^{t} e^{A_c (t - \tau)} B_c u(\tau)\, d\tau$
Choose $t_0 = t_k$ (hence $x_0 = x(t_0) = x(t_k)$), $t = t_{k+1}$, and use $t_{k+1} - t_k = T_s$ and $u(t) = u(t_k)\ \forall t \in [t_k, t_{k+1})$:
$x(t_{k+1}) = e^{A_c T_s} x(t_k) + \int_{t_k}^{t_{k+1}} e^{A_c (t_{k+1} - \tau)} B_c\, d\tau\; u(t_k) = \underbrace{e^{A_c T_s}}_{A} x(t_k) + \underbrace{\int_{0}^{T_s} e^{A_c (T_s - \tau')} B_c\, d\tau'}_{B}\, u(t_k) = A x(t_k) + B u(t_k)$
We found the exact discrete-time model predicting the state of the continuous-time system at time $t_{k+1}$ given $x(t_k)$, $k \in \mathbb{Z}^+$, under the assumption of a constant $u(t)$ during each sampling interval.
$B = (A_c)^{-1}(A - I) B_c$, if $A_c$ is invertible
18 LTI Discrete-Time State Space Models (4/4)
Example: Discretization of the linearized pendulum equations
Using $g/l = 10\,[\mathrm{s}^{-2}]$, the pendulum equations linearized about $x_s = (\pi/4, 0)$ are given by
$\dot{x}(t) = \begin{pmatrix} 0 & 1 \\ -10/\sqrt{2} & 0 \end{pmatrix} x(t) + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u(t)$
Discretizing the continuous-time system using the definitions of $A$ and $B$, and $T_s = 0.1\,\mathrm{s}$, we get the following discrete-time system
$x(k+1) \approx \begin{pmatrix} 0.965 & 0.099 \\ -0.699 & 0.965 \end{pmatrix} x(k) + \begin{pmatrix} 0.005 \\ 0.099 \end{pmatrix} u(k)$
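The zero-order-hold discretization above can be reproduced numerically. A sketch using scipy (my own, not from the slides): `cont2discrete` with `method='zoh'` computes exactly $A = e^{A_c T_s}$ and $B = \int_0^{T_s} e^{A_c s}\,ds\, B_c$.

```python
import numpy as np
from scipy.signal import cont2discrete

# Linearized pendulum with g/l = 10 about x_s = (pi/4, 0)
Ac = np.array([[0.0, 1.0], [-10.0 / np.sqrt(2.0), 0.0]])
Bc = np.array([[0.0], [1.0]])
Ts = 0.1

# Zero-order hold: A = e^{Ac Ts}, B = int_0^{Ts} e^{Ac s} ds @ Bc
A, B, C, D, _ = cont2discrete((Ac, Bc, np.eye(2), np.zeros((2, 1))), Ts,
                              method='zoh')
```

For this oscillator the result can also be checked against the closed form $A = \begin{pmatrix}\cos\omega T_s & \sin(\omega T_s)/\omega\\ -\omega\sin\omega T_s & \cos\omega T_s\end{pmatrix}$ with $\omega = \sqrt{10/\sqrt{2}}$.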
19 Linear Systems: Analysis of LTI Discrete-Time Systems
20 Analysis of LTI Discrete-Time Systems
Goal: Introduce the concepts of stability, controllability and observability.
From this point on we consider only discrete-time LTI systems.
21 Coordinate Transformations (1/2)
Consider again the system
$x(k+1) = A x(k) + B u(k)$
$y(k) = C x(k) + D u(k)$
- The input-output behavior, i.e. the sequence $\{y(k)\}_{k=0,1,2,\dots}$, is entirely defined by $x(0)$ and $\{u(k)\}_{k=0,1,2,\dots}$
- There are infinitely many choices of the state that yield the same input-output behavior
- Certain choices facilitate system analysis
22 Coordinate Transformations (2/2)
Consider the linear transformation $\tilde{x} = T x$ with $\det(T) \neq 0$ (invertible), i.e. $x = T^{-1}\tilde{x}$:
$T^{-1}\tilde{x}(k+1) = A T^{-1}\tilde{x}(k) + B u(k)$
$y(k) = C T^{-1}\tilde{x}(k) + D u(k)$
Multiplying the first equation by $T$:
$\tilde{x}(k+1) = \underbrace{T A T^{-1}}_{\tilde{A}} \tilde{x}(k) + \underbrace{T B}_{\tilde{B}} u(k)$
$y(k) = \underbrace{C T^{-1}}_{\tilde{C}} \tilde{x}(k) + \underbrace{D}_{\tilde{D}} u(k)$
Note: $u(k)$ and $y(k)$ are unchanged.
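The invariance claim is easy to verify numerically. The sketch below (my own example matrices) checks that a similarity transformation leaves both the eigenvalues and the input-output behavior, represented by the Markov parameters $C A^k B$, unchanged.

```python
import numpy as np

A = np.array([[0.5, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
T = np.array([[2.0, 1.0], [0.0, 1.0]])   # any invertible T
Ti = np.linalg.inv(T)

# Transformed realization (A~, B~, C~) = (T A T^-1, T B, C T^-1)
At, Bt, Ct = T @ A @ Ti, T @ B, C @ Ti

# Markov parameters C A^k B fully determine the input-output behavior
markov = [(C @ np.linalg.matrix_power(A, k) @ B)[0, 0] for k in range(5)]
markov_t = [(Ct @ np.linalg.matrix_power(At, k) @ Bt)[0, 0] for k in range(5)]
```

Because $\tilde{C}\tilde{A}^k\tilde{B} = C T^{-1}(TAT^{-1})^k TB = CA^kB$, the two lists must agree.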
23 Stability of Linear Systems (1/3)
Theorem: Asymptotic Stability of Linear Systems
The LTI system
$x(k+1) = A x(k)$
is globally asymptotically stable,
$\lim_{k \to \infty} x(k) = 0, \quad \forall x(0) \in \mathbb{R}^n,$
if and only if $|\lambda_i| < 1$, $i = 1, \dots, n$, where $\lambda_i$ are the eigenvalues of $A$.
24 Stability of Linear Systems (2/3)
Proof of the asymptotic stability condition
Assume that $A$ has $n$ linearly independent eigenvectors $e_1, \dots, e_n$. Then the coordinate transformation $\tilde{x} = [e_1, \dots, e_n]^{-1} x = T x$ brings the LTI discrete-time system into diagonal form:
$\tilde{x}(k+1) = T A T^{-1} \tilde{x}(k) = \begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix} \tilde{x}(k) = \Lambda \tilde{x}(k)$
The state $\tilde{x}(k)$ can be explicitly written as a function of $\tilde{x}(0) = T x(0)$:
$\tilde{x}(k) = \Lambda^k \tilde{x}(0) = \begin{pmatrix} \lambda_1^k & & 0 \\ & \ddots & \\ 0 & & \lambda_n^k \end{pmatrix} \tilde{x}(0)$
25 Stability of Linear Systems (3/3)
Proof of the asymptotic stability condition
From $\tilde{x}(k) = \Lambda^k \tilde{x}(0)$ we have, component-wise,
$\tilde{x}_i(k) = \lambda_i^k\, \tilde{x}_i(0) \quad \Rightarrow \quad |\tilde{x}_i(k)| = |\lambda_i|^k\, |\tilde{x}_i(0)|$
- If any $|\lambda_i| \geq 1$, then $\lim_{k\to\infty} \tilde{x}(k) \neq 0$ for some $\tilde{x}(0) \neq 0$.
- If instead $|\lambda_i| < 1\ \forall i \in \{1, \dots, n\}$, then $\lim_{k\to\infty} \tilde{x}(k) = 0$ and we have asymptotic stability.
If the system does not have $n$ linearly independent eigenvectors, it cannot be brought into diagonal form and Jordan matrices have to be used for the proof, but the assertions still hold.
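The eigenvalue test translates directly into a one-line numerical check. A minimal sketch (example matrices are my own):

```python
import numpy as np

def is_asymptotically_stable(A):
    """A discrete-time LTI system x(k+1) = A x(k) is globally asymptotically
    stable iff all eigenvalues of A lie strictly inside the unit circle."""
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1.0))

A_stable = np.array([[0.5, 1.0], [0.0, 0.9]])    # eigenvalues 0.5, 0.9
A_unstable = np.array([[1.1, 0.0], [0.0, 0.2]])  # eigenvalue 1.1 outside
```

Note the strict inequality: an eigenvalue on the unit circle already rules out asymptotic stability.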
26 Stability of Nonlinear Systems (1/5)
For nonlinear systems there are many definitions of stability. Informally, we define a system to be stable in the sense of Lyapunov if it stays in an arbitrarily small neighborhood of the origin when it is disturbed slightly. In the following we always mean stability in the sense of Lyapunov.
We first consider the stability of a nonlinear, time-invariant, discrete-time system
$x_{k+1} = g(x_k) \qquad (1)$
with an equilibrium point at $0$, i.e. $g(0) = 0$. Note that system (1) encompasses any open- or closed-loop autonomous system. We will then derive simpler stability conditions for the specific case of LTI systems.
Note that stability is always a property of an equilibrium point of a system.
27 Stability of Nonlinear Systems (2/5)
Definitions
Formally, the equilibrium point $x = 0$ of system (1) is
- stable if for every $\epsilon > 0$ there exists a $\delta(\epsilon) > 0$ such that $\|x_0\| < \delta(\epsilon) \Rightarrow \|x_k\| < \epsilon,\ \forall k \geq 0$,
- unstable otherwise.
An equilibrium point $x = 0$ of system (1) is
- asymptotically stable in $\Omega \subseteq \mathbb{R}^n$ if it is Lyapunov stable and $\lim_{k\to\infty} x_k = 0,\ \forall x_0 \in \Omega$,
- globally asymptotically stable if it is asymptotically stable and $\Omega = \mathbb{R}^n$.
28 Stability of Nonlinear Systems (3/5)
Lyapunov functions
- We can show stability by constructing a Lyapunov function
- Idea: A mechanical system is asymptotically stable when the total mechanical energy decreases over time (friction losses). A Lyapunov function is a system-theoretic generalization of energy.
Definition: Lyapunov function
Consider the equilibrium point $x = 0$ of system (1). Let $\Omega \subseteq \mathbb{R}^n$ be a closed and bounded set containing the origin. A function $V : \mathbb{R}^n \to \mathbb{R}$, continuous at the origin, finite for every $x \in \Omega$, and such that
$V(0) = 0$ and $V(x) > 0,\ \forall x \in \Omega \setminus \{0\}$
$V(g(x_k)) - V(x_k) \leq -\alpha(x_k),\ \forall x_k \in \Omega \setminus \{0\}$
where $\alpha : \mathbb{R}^n \to \mathbb{R}$ is continuous and positive definite, is called a Lyapunov function.
29 Stability of Nonlinear Systems (4/5)
Lyapunov theorems
Theorem: Lyapunov stability (asymptotic stability)
If system (1) admits a Lyapunov function $V(x)$, then $x = 0$ is asymptotically stable in $\Omega$.
Theorem: Lyapunov stability (global asymptotic stability)
If system (1) admits a Lyapunov function $V(x)$ that additionally satisfies the radial unboundedness condition $\|x\| \to \infty \Rightarrow V(x) \to \infty$, then $x = 0$ is globally asymptotically stable.
30 Stability of Nonlinear Systems (5/5)
Remarks
- The Lyapunov theorems only provide sufficient conditions
- Lyapunov theory is a powerful concept for proving stability of a control system, but for general nonlinear systems it is usually difficult to find a Lyapunov function
- Lyapunov functions can sometimes be derived from physical considerations
- One common approach:
  - Decide on the form of the Lyapunov function (e.g., quadratic)
  - Search for parameter values, e.g. via optimization, so that the required properties hold
- For linear systems there exist constructive theoretical results on the existence of a quadratic Lyapunov function
31 Global Lyapunov Stability of Linear Systems (1/3)
Consider the linear system
$x(k+1) = A x(k) \qquad (2)$
Take $V(x) = x' P x$ with $P \succ 0$ (positive definite) as a candidate Lyapunov function. It satisfies $V(0) = 0$, $V(x) > 0$ for $x \neq 0$, and $\|x\| \to \infty \Rightarrow V(x) \to \infty$.
Check the energy decrease condition:
$V(Ax(k)) - V(x(k)) = x'(k) A' P A\, x(k) - x'(k) P x(k) = x'(k)(A' P A - P) x(k) \leq -\alpha(x(k))$
We can choose $\alpha(x(k)) = x'(k) Q x(k)$, $Q \succ 0$. Hence, the condition can be satisfied if a $P \succ 0$ can be found that solves the discrete-time Lyapunov equation
$A' P A - P = -Q, \quad Q \succ 0. \qquad (3)$
32 Global Lyapunov Stability of Linear Systems (2/3)
Theorem: Existence of a solution to the DT Lyapunov equation
The discrete-time Lyapunov equation (3) has a unique solution $P \succ 0$ if and only if $A$ has all eigenvalues inside the unit circle, i.e. if the system $x(k+1) = A x(k)$ is stable.
Therefore, for LTI systems global asymptotic Lyapunov stability is not only sufficient but also necessary, and it agrees with the notion of stability based on eigenvalue location. Note that stability is always global for linear systems.
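Equation (3) can be solved directly with scipy. One caveat in the sketch below (example matrices are mine): `solve_discrete_lyapunov(a, q)` solves $a x a' - x + q = 0$, so passing $a = A'$ recovers the slide's convention $A'PA - P = -Q$.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.9, 0.2], [0.0, 0.5]])   # stable: eigenvalues 0.9 and 0.5
Q = np.eye(2)

# scipy solves  a x a' - x + q = 0;  with a = A' this is  A' P A - P = -Q
P = solve_discrete_lyapunov(A.T, Q)
residual = A.T @ P @ A - P + Q           # should vanish
```

Since $A$ is stable, the theorem guarantees this $P$ exists, is unique, and is positive definite.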
33 Global Lyapunov Stability of Linear Systems (3/3)
Property of $P$
The matrix $P$ can also be used to determine the infinite-horizon cost-to-go of an asymptotically stable autonomous system $x(k+1) = A x(k)$ with a quadratic cost function determined by $Q$. More precisely, defining
$\Psi(x(0)) \triangleq \sum_{k=0}^{\infty} x(k)' Q x(k) = \sum_{k=0}^{\infty} x(0)' (A^k)' Q A^k x(0), \qquad (4)$
we have that
$\Psi(x(0)) = x(0)' P x(0). \qquad (5)$
Proof: Define $H_k \triangleq (A^k)' Q A^k$ and $P \triangleq \sum_{k=0}^{\infty} H_k$ (the limit of the sum exists because the system is assumed asymptotically stable). We have $A' H_k A = (A^{k+1})' Q A^{k+1} = H_{k+1}$. Thus
$A' P A = \sum_{k=0}^{\infty} A' H_k A = \sum_{k=0}^{\infty} H_{k+1} = \sum_{k=1}^{\infty} H_k = P - H_0 = P - Q,$
i.e. this $P$ solves the Lyapunov equation $A' P A - P = -Q$.
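The cost-to-go identity (5) can be verified numerically: sum the stage costs along a simulated trajectory and compare with $x(0)'Px(0)$. A sketch with example matrices of my own choosing:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.8, 0.1], [0.0, 0.6]])   # asymptotically stable
Q = np.eye(2)
P = solve_discrete_lyapunov(A.T, Q)      # solves A' P A - P = -Q

# Truncated infinite-horizon cost  Psi(x0) = sum_k x(k)' Q x(k)
x0 = np.array([1.0, -2.0])
cost, x = 0.0, x0.copy()
for _ in range(500):
    cost += x @ Q @ x
    x = A @ x
```

After 500 steps the truncation error is negligible (the state has decayed like $0.8^k$), so the running sum should match $x(0)'Px(0)$ very closely.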
34 Controllability (1/3)
Definition: A system $x(k+1) = A x(k) + B u(k)$ is controllable¹ if for any pair of states $x(0)$, $x^*$ there exist a finite time $N^*$ and a control sequence $\{u(0), \dots, u(N^*-1)\}$ such that $x(N^*) = x^*$, i.e.
$x^* = x(N^*) = A^{N^*} x(0) + \begin{pmatrix} B & AB & \cdots & A^{N^*-1} B \end{pmatrix} \begin{pmatrix} u(N^*-1) \\ u(N^*-2) \\ \vdots \\ u(0) \end{pmatrix}$
It follows from the Cayley-Hamilton theorem that $A^k$ can be expressed as a linear combination of $A^i$, $i \in \{0, 1, \dots, n-1\}$, for $k \geq n$. Hence for all $N^* \geq n$
$\operatorname{range}\begin{pmatrix} B & AB & \cdots & A^{N^*-1} B \end{pmatrix} = \operatorname{range}\begin{pmatrix} B & AB & \cdots & A^{n-1} B \end{pmatrix}$
¹ Often referred to as reachable for discrete-time systems.
35 Controllability (2/3)
- If the system cannot be controlled to $x^*$ in $n$ steps, then it cannot be in any number of steps
- Define the controllability matrix $\mathcal{C} = \begin{pmatrix} B & AB & \cdots & A^{n-1} B \end{pmatrix}$
- The system is controllable if
$\mathcal{C} \begin{pmatrix} u(n-1) \\ u(n-2) \\ \vdots \\ u(0) \end{pmatrix} = x^* - A^n x(0)$
has a solution for all right-hand sides (RHS)
- From linear algebra: a solution exists for all RHS iff $n$ columns of $\mathcal{C}$ are linearly independent
- A necessary and sufficient condition for controllability is $\operatorname{rank}(\mathcal{C}) = n$
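The rank test is straightforward to implement. A minimal sketch (the double-integrator-like example is my own):

```python
import numpy as np

def controllability_matrix(A, B):
    """Build C = [B, AB, ..., A^{n-1} B] for the rank test."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
ctrb = controllability_matrix(A, B)      # here [B, AB] = [[0, 1], [1, 1]]
```

The system is controllable iff `np.linalg.matrix_rank(ctrb) == n`; for this example the rank is 2.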
36 Controllability (3/3)
Remarks
- Another related concept is stabilizability
- A system is called stabilizable if there exists an input sequence that returns the state to the origin asymptotically, starting from an arbitrary initial state
- A system is stabilizable iff all of its uncontrollable modes are stable
- Stabilizability can be checked using the following condition:
if $\operatorname{rank}\left(\begin{bmatrix} \lambda_i I - A & B \end{bmatrix}\right) = n\ \forall \lambda_i \in \Lambda_A^+$, then $(A, B)$ is stabilizable,
where $\Lambda_A^+$ is the set of all eigenvalues of $A$ lying on or outside the unit circle
- Controllability implies stabilizability
37 Observability (1/3)
Consider the following system with zero input:
$x(k+1) = A x(k)$
$y(k) = C x(k)$
Definition: A system is said to be observable if there exists a finite $N^*$ such that for every $x(0)$ the measurements $y(0), y(1), \dots, y(N^*-1)$ uniquely distinguish the initial state $x(0)$.
38 Observability (2/3)
Question of uniqueness of the solution to the linear equations
$\begin{pmatrix} y(0) \\ y(1) \\ \vdots \\ y(N^*-1) \end{pmatrix} = \begin{pmatrix} C \\ CA \\ \vdots \\ CA^{N^*-1} \end{pmatrix} x(0)$
- As previously, we can replace $N^*$ by $n$ w.l.o.g. (Cayley-Hamilton)
- Define $\mathcal{O} = \begin{pmatrix} C' & (CA)' & \cdots & (CA^{n-1})' \end{pmatrix}'$
- From linear algebra: the solution is unique iff the $n$ columns of $\mathcal{O}$ are linearly independent
- A necessary and sufficient condition for observability of $(A, C)$ is $\operatorname{rank}(\mathcal{O}) = n$
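The observability test mirrors the controllability test, stacking $CA^k$ vertically instead. A minimal sketch (example matrices mine; measuring only the first state):

```python
import numpy as np

def observability_matrix(A, C):
    """Build O = [C; CA; ...; CA^{n-1}] for the rank test."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

A = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])               # only the first state is measured
obsv = observability_matrix(A, C)        # here [C; CA] = [[1, 0], [1, 1]]
```

The pair $(A, C)$ is observable iff `np.linalg.matrix_rank(obsv) == n`; here the rank is 2, so the second state can be reconstructed from two output samples.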
39 Observability (3/3)
Remarks
- Another related concept is detectability
- A system is called detectable if it is possible to construct from the measurement sequence a sequence of state estimates that converges to the true state asymptotically, starting from an arbitrary initial estimate
- A system is detectable iff all of its unobservable modes are stable
- Detectability can be checked using the following condition:
if $\operatorname{rank}\left(\begin{bmatrix} A - \lambda_i I \\ C \end{bmatrix}\right) = n\ \forall \lambda_i \in \Lambda_A^+$, then $(A, C)$ is detectable,
where $\Lambda_A^+$ is the set of all eigenvalues of $A$ lying on or outside the unit circle
- Observability implies detectability
41 Linear Quadratic Optimal Control: Optimal Control
2.1 Optimal Control
2.2 Batch Approach
2.3 Recursive Approach
2.4 Infinite Horizon Optimal Control
42 Optimal Control Introduction (1/2)
Discrete-time optimal control is concerned with choosing an optimal input sequence $U_{0 \to N} \triangleq [u_0', u_1', \dots]'$ (as measured by some objective function) over a finite or infinite time horizon, in order to apply it to a system with a given initial state $x(0)$.
The objective, or cost, function is often defined as a sum of stage costs $q(x_k, u_k)$ and, when the horizon has finite length $N$, a terminal cost $p(x_N)$:
$J_{0 \to N}(x_0, U_{0 \to N}) \triangleq p(x_N) + \sum_{k=0}^{N-1} q(x_k, u_k)$
The states $\{x_k\}_{k=0}^{N}$ must satisfy the system dynamics
$x_{k+1} = g(x_k, u_k), \quad k = 0, \dots, N-1, \qquad x_0 = x(0)$
and there may be state and/or input constraints
$h(x_k, u_k) \leq 0, \quad k = 0, \dots, N-1.$
43 Optimal Control Introduction (2/2)
In the finite horizon case, there may also be a constraint that the final state $x_N$ lie in a set $\mathcal{X}_f$: $x_N \in \mathcal{X}_f$.
A general finite horizon optimal control formulation for discrete-time systems is therefore
$J_{0 \to N}^*(x(0)) \triangleq \min_{U_{0 \to N}} J_{0 \to N}(x(0), U_{0 \to N})$
subject to
$x_{k+1} = g(x_k, u_k), \quad k = 0, \dots, N-1$
$h(x_k, u_k) \leq 0, \quad k = 0, \dots, N-1$
$x_N \in \mathcal{X}_f$
$x_0 = x(0)$
44 Linear Quadratic Optimal Control
In this section only linear discrete-time time-invariant systems $x(k+1) = A x(k) + B u(k)$ and quadratic cost functions
$J_0(x_0, U_0) \triangleq x_N' P x_N + \sum_{k=0}^{N-1} \left[ x_k' Q x_k + u_k' R u_k \right] \qquad (6)$
are considered, and we consider only the problem of regulating the state to the origin, without state or input constraints.
The two most common solution approaches are described here:
1. the Batch Approach, which yields a series of numerical values for the input, and
2. the Recursive Approach, which uses dynamic programming to compute control policies or laws, i.e. functions that describe how the control decisions depend on the system states.
45 Unconstrained Finite Horizon Control Problem
Goal: Find a sequence of inputs $U_0 \triangleq [u_0', \dots, u_{N-1}']'$ that minimizes the objective function
$J_0^*(x(0)) \triangleq \min_{U_0}\; x_N' P x_N + \sum_{k=0}^{N-1} \left[ x_k' Q x_k + u_k' R u_k \right]$
subject to
$x_{k+1} = A x_k + B u_k, \quad k = 0, \dots, N-1, \qquad x_0 = x(0)$
- $P \succeq 0$, with $P = P'$, is the terminal weight
- $Q \succeq 0$, with $Q = Q'$, is the state weight
- $R \succ 0$, with $R = R'$, is the input weight
- $N$ is the horizon length
Note that $x(0)$ is the current state, whereas $x_0, \dots, x_N$ and $u_0, \dots, u_{N-1}$ are optimization variables that are constrained to obey the system dynamics and the initial condition.
46 Linear Quadratic Optimal Control: Batch Approach
47 Solution approach 1: Batch Approach (1/4)
The batch solution explicitly represents all future states $x_k$ in terms of the initial condition $x_0$ and the inputs $u_0, \dots, u_{N-1}$.
Starting with $x_0 = x(0)$, we have $x_1 = A x(0) + B u_0$, and $x_2 = A x_1 + B u_1 = A^2 x(0) + A B u_0 + B u_1$ by substitution for $x_1$, and so on. Continuing up to $x_N$ we obtain:
$\begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_N \end{pmatrix} = \begin{pmatrix} I \\ A \\ \vdots \\ A^N \end{pmatrix} x(0) + \begin{pmatrix} 0 & \cdots & \cdots & 0 \\ B & 0 & \cdots & 0 \\ AB & B & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ A^{N-1}B & \cdots & AB & B \end{pmatrix} \begin{pmatrix} u_0 \\ u_1 \\ \vdots \\ u_{N-1} \end{pmatrix}$
The equation above can be written compactly as
$X \triangleq S^x x(0) + S^u U_0. \qquad (7)$
48 Solution approach 1: Batch Approach (2/4)
Define $\bar{Q} \triangleq \operatorname{blockdiag}(Q, \dots, Q, P)$ and $\bar{R} \triangleq \operatorname{blockdiag}(R, \dots, R)$. Then the finite horizon cost function (6) can be written as
$J_0(x(0), U_0) = X' \bar{Q} X + U_0' \bar{R} U_0. \qquad (8)$
Eliminating $X$ by substituting from (7), equation (8) can be expressed as
$J_0(x(0), U_0) = (S^x x(0) + S^u U_0)' \bar{Q} (S^x x(0) + S^u U_0) + U_0' \bar{R} U_0 = U_0' H U_0 + 2\, x(0)' F U_0 + x(0)' (S^x)' \bar{Q} S^x x(0)$
where $H \triangleq (S^u)' \bar{Q} S^u + \bar{R}$ and $F \triangleq (S^x)' \bar{Q} S^u$. Note that $H \succ 0$, since $\bar{R} \succ 0$ and $(S^u)' \bar{Q} S^u \succeq 0$.
49 Solution approach 1: Batch Approach (3/4)
Since the problem is unconstrained and $J_0(x(0), U_0)$ is a positive definite quadratic function of $U_0$, we can solve for the optimal input $U_0^*$ by setting the gradient with respect to $U_0$ to zero:
$\nabla_{U_0} J_0(x(0), U_0) = 2 H U_0 + 2 F' x(0) = 0$
$\Rightarrow\; U_0^*(x(0)) = -H^{-1} F' x(0) = -\big((S^u)' \bar{Q} S^u + \bar{R}\big)^{-1} (S^u)' \bar{Q} S^x x(0),$
which is a linear function of the initial state $x(0)$. Note that $H^{-1}$ always exists, since $H \succ 0$ and therefore has full rank.
The optimal cost can be shown (by back-substitution) to be
$J_0^*(x(0)) = -x(0)' F H^{-1} F' x(0) + x(0)' (S^x)' \bar{Q} S^x x(0) = x(0)' \Big( (S^x)' \bar{Q} S^x - (S^x)' \bar{Q} S^u \big((S^u)' \bar{Q} S^u + \bar{R}\big)^{-1} (S^u)' \bar{Q} S^x \Big) x(0)$
50 Solution approach 1: Batch Approach (4/4)
Summary
- The Batch Approach expresses the cost function in terms of the initial state $x(0)$ and the input sequence $U_0$ by eliminating the states $x_k$.
- Because the cost $J_0(x(0), U_0)$ is a strictly convex quadratic function of $U_0$, its minimizer $U_0^*$ is unique and can be found by setting $\nabla_{U_0} J_0(x(0), U_0) = 0$. This gives the optimal input sequence $U_0^*$ as a linear function of the initial state $x(0)$:
$U_0^*(x(0)) = -\big((S^u)' \bar{Q} S^u + \bar{R}\big)^{-1} (S^u)' \bar{Q} S^x x(0)$
- The optimal cost is a quadratic function of the initial state $x(0)$:
$J_0^*(x(0)) = x(0)' \Big( (S^x)' \bar{Q} S^x - (S^x)' \bar{Q} S^u \big((S^u)' \bar{Q} S^u + \bar{R}\big)^{-1} (S^u)' \bar{Q} S^x \Big) x(0)$
- If there are state or input constraints, solving this problem by matrix inversion is not guaranteed to result in a feasible input sequence
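The batch construction can be coded almost verbatim. The sketch below (my own implementation, not course material) builds $S^x$, $S^u$, $\bar{Q}$, $\bar{R}$ and returns $U_0^* = -H^{-1}F'x(0)$; the scalar test case is a hypothetical example with $A = B = P = Q = R = 1$.

```python
import numpy as np

def batch_lqr(A, B, P, Q, R, N, x0):
    """Unconstrained finite-horizon LQR via the batch approach: U* = -H^-1 F' x0."""
    n, m = B.shape
    # X = Sx x0 + Su U  with U = [u0; u1; ...; u_{N-1}]
    Sx = np.vstack([np.linalg.matrix_power(A, k) for k in range(N + 1)])
    Su = np.zeros(((N + 1) * n, N * m))
    for row in range(1, N + 1):
        for col in range(row):
            Su[row * n:(row + 1) * n, col * m:(col + 1) * m] = \
                np.linalg.matrix_power(A, row - 1 - col) @ B
    Qbar = np.block([
        [np.kron(np.eye(N), Q), np.zeros((N * n, n))],
        [np.zeros((n, N * n)), P],
    ])
    Rbar = np.kron(np.eye(N), R)
    H = Su.T @ Qbar @ Su + Rbar
    F = Sx.T @ Qbar @ Su
    return -np.linalg.solve(H, F.T @ x0)

# Scalar example: A = B = 1, P = Q = R = 1, N = 2, x(0) = 1
U = batch_lqr(np.eye(1), np.eye(1), np.eye(1), np.eye(1), np.eye(1), 2,
              np.array([1.0]))
```

For this scalar example the Riccati recursion of the next subsection gives gains 0.6 and 0.5, so the optimal sequence is $u_0^* = -0.6$, $u_1^* = -0.2$.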
51 Linear Quadratic Optimal Control: Recursive Approach
52 Solution approach 2: Recursive Approach (1/8)
Alternatively, we can use dynamic programming to solve the same problem in a recursive manner. Define the $j$-step optimal cost-to-go as the optimal cost attainable for the problem starting at step $j$:
$J_j^*(x(j)) \triangleq \min_{u_j, \dots, u_{N-1}}\; x_N' P x_N + \sum_{k=j}^{N-1} \left[ x_k' Q x_k + u_k' R u_k \right]$
subject to
$x_{k+1} = A x_k + B u_k, \quad k = j, \dots, N-1, \qquad x_j = x(j)$
This is the minimum cost attainable for the remainder of the horizon after step $j$.
53 Solution approach 2: Recursive Approach (2/8)
Consider the 1-step problem (solved at time $N-1$):
$J_{N-1}^*(x_{N-1}) = \min_{u_{N-1}} \{ x_{N-1}' Q x_{N-1} + u_{N-1}' R u_{N-1} + x_N' P_N x_N \} \qquad (9)$
subject to
$x_N = A x_{N-1} + B u_{N-1} \qquad (10)$
$P_N = P$
where we introduced the notation $P_j$ to express the optimal cost-to-go as $x_j' P_j x_j$. In particular, $P_N = P$.
Substituting (10) into (9):
$J_{N-1}^*(x_{N-1}) = \min_{u_{N-1}} \{ x_{N-1}'(A' P_N A + Q) x_{N-1} + u_{N-1}'(B' P_N B + R) u_{N-1} + 2\, x_{N-1}' A' P_N B u_{N-1} \}$
54 Solution approach 2: Recursive Approach (3/8)
Solving again by setting the gradient to zero leads to the following optimality condition for $u_{N-1}$:
$2(B' P_N B + R) u_{N-1} + 2 B' P_N A x_{N-1} = 0$
Optimal 1-step input:
$u_{N-1}^* = -(B' P_N B + R)^{-1} B' P_N A\, x_{N-1} \triangleq F_{N-1} x_{N-1}$
1-step cost-to-go:
$J_{N-1}^*(x_{N-1}) = x_{N-1}' P_{N-1} x_{N-1},$
where
$P_{N-1} = A' P_N A + Q - A' P_N B (B' P_N B + R)^{-1} B' P_N A.$
55 Solution approach 2: Recursive Approach (4/8)
The recursive solution method used from here on relies on Bellman's Principle of Optimality:
For any solution for steps $0$ to $N$ to be optimal, any solution for steps $j$ to $N$ with $j \geq 0$, taken from the $0$-to-$N$ solution, must itself be optimal for the $j$-to-$N$ problem.
Therefore we have, for any $j = 0, \dots, N-1$,
$J_j^*(x_j) = \min_{u_j} \{ J_{j+1}^*(x_{j+1}) + x_j' Q x_j + u_j' R u_j \}$ s.t. $x_{j+1} = A x_j + B u_j$
Example: Suppose that the fastest route from Los Angeles to Boston passes through Chicago. Then the principle of optimality formalizes the obvious fact that the Chicago-to-Boston portion of the route is also the fastest route for a trip that starts in Chicago and ends in Boston.
56 Solution approach 2: Recursive Approach (5/8)
Now consider the 2-step problem, posed at time $N-2$:
$J_{N-2}^*(x_{N-2}) = \min_{u_{N-2}, u_{N-1}} \left\{ \sum_{k=N-2}^{N-1} \left[ x_k' Q x_k + u_k' R u_k \right] + x_N' P x_N \right\}$
s.t. $x_{k+1} = A x_k + B u_k, \quad k = N-2, N-1$
From the Principle of Optimality, the cost function is equivalent to
$J_{N-2}^*(x_{N-2}) = \min_{u_{N-2}} \{ J_{N-1}^*(x_{N-1}) + x_{N-2}' Q x_{N-2} + u_{N-2}' R u_{N-2} \} = \min_{u_{N-2}} \{ x_{N-1}' P_{N-1} x_{N-1} + x_{N-2}' Q x_{N-2} + u_{N-2}' R u_{N-2} \}$
57 Solution approach 2: Recursive Approach (6/8)
As with the 1-step solution, solve by setting the gradient with respect to $u_{N-2}$ to zero.
Optimal 2-step input:
$u_{N-2}^* = -(B' P_{N-1} B + R)^{-1} B' P_{N-1} A\, x_{N-2} \triangleq F_{N-2} x_{N-2}$
2-step cost-to-go:
$J_{N-2}^*(x_{N-2}) = x_{N-2}' P_{N-2} x_{N-2},$
where
$P_{N-2} = A' P_{N-1} A + Q - A' P_{N-1} B (B' P_{N-1} B + R)^{-1} B' P_{N-1} A$
We now recognize the recursion for $P_j$ and $u_j^*$, $j = N-1, \dots, 0$.
58 Solution approach 2: Recursive Approach (7/8)
We can obtain the solution for any given time step $k$ in the horizon:
$u^*(k) = -(B' P_{k+1} B + R)^{-1} B' P_{k+1} A\, x(k) \triangleq F_k x(k), \quad k = 0, \dots, N-1$
where we can find any $P_k$ by recursive evaluation from $P_N = P$, using
$P_k = A' P_{k+1} A + Q - A' P_{k+1} B (B' P_{k+1} B + R)^{-1} B' P_{k+1} A \qquad (11)$
This is called the Discrete-Time Riccati Equation or Riccati Difference Equation (RDE). Evaluating down to $P_0$, we obtain the $N$-step cost-to-go
$J_0^*(x(0)) = x(0)' P_0 x(0)$
Solution approach 2: Recursive Approach (8/8)

Summary

From the Principle of Optimality, the optimal control policy for any step k is given by

u^*(k) = -(B' P_{k+1} B + R)^{-1} B' P_{k+1} A x(k) = F_k x(k)

and the optimal cost-to-go is

J_k^*(x(k)) = x(k)' P_k x(k)

Each P_k is related to P_{k+1} by the Riccati Difference Equation

P_k = A' P_{k+1} A + Q - A' P_{k+1} B (B' P_{k+1} B + R)^{-1} B' P_{k+1} A,

which is initialized with P_N = P, the given terminal weight.
Comparison of Batch and Recursive Approaches (1/2)

Fundamental difference: the batch optimization returns a sequence U_0^*(x(0)) of numeric values depending only on the initial state x(0), while dynamic programming yields feedback policies u^*(k) = F_k x(k), k = 0, ..., N-1, each depending on the current state x(k).

If the state evolves exactly as modelled, the sequences of control actions obtained from the two approaches are identical.

The recursive solution should be more robust to disturbances and model errors: if the future states deviate from their predicted values, the exact optimal input can still be computed from the measured state.

The Recursive Approach is computationally more attractive because it breaks the problem down into single-step problems. For a large horizon length, the Hessian H in the Batch Approach, which must be inverted, becomes very large.
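The claimed equivalence along the nominal trajectory can be checked numerically. The sketch below (NumPy; all matrices, horizon and initial state are made-up illustrative values) solves a small LQ problem both ways: the batch solution minimizes the stacked cost over the input sequence U, while the recursive solution runs the backward RDE and then simulates the feedback forward.

```python
import numpy as np

# Made-up example data (illustrative values only)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
P = np.eye(2)        # terminal weight
N = 5
n, m = 2, 1
x0 = np.array([[1.0], [-0.5]])

# --- Batch approach: X = Sx x0 + Su U, minimize X' Qbar X + U' Rbar U ---
Sx = np.vstack([np.linalg.matrix_power(A, i) for i in range(N + 1)])
Su = np.zeros(((N + 1) * n, N * m))
for i in range(1, N + 1):
    for j in range(i):
        Su[i * n:(i + 1) * n, j * m:(j + 1) * m] = np.linalg.matrix_power(A, i - 1 - j) @ B
Qbar = np.kron(np.eye(N + 1), Q)
Qbar[-n:, -n:] = P                              # terminal weight on x_N
Rbar = np.kron(np.eye(N), R)
H = Su.T @ Qbar @ Su + Rbar                     # Hessian of the batch problem
U_batch = -np.linalg.solve(H, Su.T @ Qbar @ Sx @ x0)

# --- Recursive approach: backward RDE, then forward feedback simulation ---
Pk, gains = P, []
for _ in range(N):
    S = B.T @ Pk @ B + R
    Fk = -np.linalg.solve(S, B.T @ Pk @ A)
    Pk = A.T @ Pk @ A + Q + A.T @ Pk @ B @ Fk
    gains.append(Fk)
gains = gains[::-1]                             # gains[k] is F_k

x, U_rec = x0.copy(), []
for k in range(N):
    u = gains[k] @ x
    U_rec.append(u)
    x = A @ x + B @ u
U_rec = np.vstack(U_rec)
```

With no disturbance acting on the state, the open-loop sequence U_batch and the closed-loop sequence U_rec coincide to numerical precision.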
Comparison of Batch and Recursive Approaches (2/2)

Without modification, both solution methods break down when inequality constraints on x_k or u_k are added.

The Batch Approach is far easier to adapt than the Recursive Approach when constraints are present: one simply performs a constrained minimization for the current state. Doing this at every time step within the available time, and then applying only the first input of the resulting sequence, amounts to receding horizon control.
Table of Contents

2. Linear Quadratic Optimal Control
2.1 Optimal Control
2.2 Batch Approach
2.3 Recursive Approach
2.4 Infinite Horizon Optimal Control
Linear Quadratic Optimal Control: Infinite Horizon Optimal Control

Infinite Horizon Control Problem: Optimal Solution (1/2)

In some cases we may want to solve the same problem with an infinite horizon:

J_inf^*(x(0)) = min_{u(.)} sum_{k=0}^{inf} (x_k' Q x_k + u_k' R u_k)
s.t. x_{k+1} = A x_k + B u_k, k = 0, 1, 2, ..., x_0 = x(0)

As with the dynamic programming approach, the optimal input is of the form

u^*(k) = -(B' P_inf B + R)^{-1} B' P_inf A x(k) = F_inf x(k)

and the infinite-horizon cost-to-go is J_inf^*(x(k)) = x(k)' P_inf x(k).
Infinite Horizon Control Problem: Optimal Solution (2/2)

The matrix P_inf results from an infinite recursion of the RDE, from a notional point infinitely far into the future. Assuming the RDE converges to some constant matrix P_inf, it must satisfy the following (from (11), with P_k = P_{k+1} = P_inf):

P_inf = A' P_inf A + Q - A' P_inf B (B' P_inf B + R)^{-1} B' P_inf A,

which is called the Algebraic Riccati Equation (ARE).

The constant feedback matrix F_inf is referred to as the asymptotic form of the Linear Quadratic Regulator (LQR).

In fact, if (A, B) is stabilizable and (Q^{1/2}, A) is detectable, then the RDE (initialized with Q and iterated backwards) converges to the unique positive definite solution P_inf of the ARE.
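One simple (if not numerically preferred) way to obtain P_inf is to iterate the RDE until it reaches a fixed point, exactly as the convergence statement above suggests. A sketch with made-up stabilizable/detectable data (a discretized double integrator; illustrative values only):

```python
import numpy as np

def dare_by_iteration(A, B, Q, R, tol=1e-12, max_iter=100_000):
    """Find P_inf by iterating the RDE (11) to a fixed point.
    A sketch; dedicated ARE solvers are preferable for large or ill-conditioned problems."""
    P = Q.copy()
    for _ in range(max_iter):
        S = B.T @ P @ B + R
        P_next = A.T @ P @ A + Q - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A)
        if np.max(np.abs(P_next - P)) < tol:
            return P_next
        P = P_next
    raise RuntimeError("RDE iteration did not converge")

# Made-up example: discrete double integrator, (A, B) controllable, Q = I
A = np.array([[1.0, 0.2], [0.0, 1.0]])
B = np.array([[0.0], [0.2]])
Q = np.eye(2)
R = np.array([[1.0]])
P_inf = dare_by_iteration(A, B, Q, R)
F_inf = -np.linalg.solve(B.T @ P_inf @ B + R, B.T @ P_inf @ A)
```

The returned P_inf satisfies the ARE up to the iteration tolerance and is symmetric positive definite, as the theory predicts under these assumptions.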
Stability of Infinite-Horizon LQR

In addition, the closed-loop system with u(k) = F_inf x(k) is guaranteed to be asymptotically stable, under the stabilizability and detectability assumptions of the previous slide.

The latter statement can be proven by substituting the control law u(k) = F_inf x(k) into x(k+1) = Ax(k) + Bu(k), and then examining the properties of the system

x(k+1) = (A + B F_inf) x(k).    (12)

The asymptotic stability of (12) can be proven by showing that the infinite-horizon cost J_inf^*(x(k)) = x(k)' P_inf x(k) is actually a Lyapunov function for the system, i.e.

J_inf^*(x) > 0 for all x != 0, J_inf^*(0) = 0, and J_inf^*(x(k+1)) < J_inf^*(x(k)) for any x(k) != 0.

This implies that lim_{k->inf} x(k) = 0.
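Both stability certificates, the spectral radius of A + B F_inf being below one and the decrease of the cost along closed-loop trajectories, can be checked numerically. The sketch below uses made-up matrices (an open-loop-unstable example; illustrative values only) and a plain RDE iteration for P_inf.

```python
import numpy as np

# Made-up open-loop-unstable example (eigenvalues 1.2 and 0.8)
A = np.array([[1.2, 0.5], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = Q.copy()
for _ in range(5000):                      # iterate the RDE to (near) convergence
    S = B.T @ P @ B + R
    P = A.T @ P @ A + Q - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A)

F = -np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)
Acl = A + B @ F                            # closed-loop matrix of (12)
rho = max(abs(np.linalg.eigvals(Acl)))     # spectral radius: < 1 iff asymptotically stable

# Lyapunov decrease of J_inf(x) = x' P x along a closed-loop trajectory
x = np.array([[1.0], [1.0]])
V = [(x.T @ P @ x).item()]
for _ in range(10):
    x = Acl @ x
    V.append((x.T @ P @ x).item())
```

Along the trajectory, the decrease per step equals x'(Q + F' R F)x, which is strictly positive away from the origin; this is exactly the Lyapunov argument of the slide in numerical form.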
Choices of Terminal Weight P in Finite Horizon Control (1/2)

1. The terminal cost P of the finite horizon problem can in fact trivially be chosen so that its solution matches the infinite horizon solution. To do this, set P equal to the optimal cost-to-go from N to infinity (i.e. the cost under the optimal controller), which can be computed from the ARE:

P = A' P A + Q - A' P B (B' P B + R)^{-1} B' P A

This approach rests on the assumption that no constraints will be active after the end of the horizon.
Choices of Terminal Weight P in Finite Horizon Control (2/2)

2. Choose P assuming no control action after the end of the horizon, so that x(k+1) = Ax(k) for k = N, ..., infinity. This P can be determined by solving the Lyapunov equation

A' P A + Q = P.

This approach only makes sense if the system is asymptotically stable (otherwise no positive definite solution P exists).

3. Assume we want both the state and the input to be zero after the end of the finite horizon. In this case no P is needed, but an extra constraint is:

x_{k+N} = 0
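The discrete Lyapunov equation of option 2 is linear in P, so it can be solved directly by vectorization, using the identity vec(A' P A) = (A' kron A') vec(P). A minimal sketch (NumPy; the stable A below is a made-up example):

```python
import numpy as np

def dlyap(A, Q):
    """Solve A' P A + Q = P via vectorization: (I - A'⊗A') vec(P) = vec(Q).
    Only sensible when A is asymptotically stable (all eigenvalues strictly inside the unit circle)."""
    n = A.shape[0]
    lhs = np.eye(n * n) - np.kron(A.T, A.T)
    vecP = np.linalg.solve(lhs, Q.reshape(-1, order="F"))   # column-major vec
    return vecP.reshape((n, n), order="F")

# Made-up asymptotically stable example (eigenvalues 0.9 and 0.5)
A = np.array([[0.9, 0.1], [0.0, 0.5]])
Q = np.eye(2)
P = dlyap(A, Q)
```

For larger systems a dedicated Sylvester/Lyapunov solver scales better than the n^2-by-n^2 Kronecker system, but the vectorized form makes the linearity of the equation explicit.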
Uncertainty Modeling

Table of Contents

1. Linear Systems
1.1 Models of Dynamic Systems
1.2 Analysis of LTI Discrete-Time Systems

2. Linear Quadratic Optimal Control
2.1 Optimal Control
2.2 Batch Approach
2.3 Recursive Approach
2.4 Infinite Horizon Optimal Control

3. Uncertainty Modeling
3.1 Objective Statement, Stochastic Processes
3.2 Modeling using State Space Descriptions
3.3 Obtaining Models from First Principles
3.4 Obtaining Models from System Identification

4. State Estimation
4.1 Linear State Estimation
4.2 State Observer
4.3 Kalman Filter
Table of Contents

3. Uncertainty Modeling
3.1 Objective Statement, Stochastic Processes
3.2 Modeling using State Space Descriptions
3.3 Obtaining Models from First Principles
3.4 Obtaining Models from System Identification
Uncertainty Modeling: Objective Statement, Stochastic Processes

Objective Statement

One of the main reasons for control is to suppress the effect of disturbances on key process outputs. A model is needed to predict the disturbances' influence on the outputs on the basis of measured signals. For unmeasured disturbances, stochastic models are used.

Objective: In this part we introduce stochastic models for disturbances and show how to integrate them into deterministic system models for estimation and control. We discuss how to construct models of the form

x(k+1) = Ax(k) + Bu(k) + Fw(k)
y(k)   = Cx(k) + Gw(k)    (13)

where w(k) is a disturbance signal.
Stochastic processes (1/2)

Stochastic processes are the mathematical tool used to model uncertain signals. A discrete-time stochastic process is a sequence of random variables

{w(0), w(1), w(2), ...}

The realization of the process is uncertain; we can model a stochastic process via its probability distribution. In general, one must specify the joint probability distribution function (pdf) for the entire time sequence, P(w(0), w(1), ...).
Stochastic processes (2/2)

Stochastic processes are modeled using data. Estimating the joint pdf is usually intractable, so the normal distribution assumption is often made: then only models of the mean and the covariance are needed.

A further distinction is between stationary and nonstationary stochastic processes. Informally, a stationary process with normal distribution has mean and covariance that do not vary over a shifting time window.
Normal stochastic process

The joint pdf is a normal distribution, so the process is completely defined by its mean and covariance function:

mu_w(k) := E{w(k)}
R_w(k, tau) := E{w(k+tau) w'(k)} - mu_w(k+tau) mu_w'(k)

The process is stationary if mu_w(k) = mu_w and R_w(k, tau) = R_w(tau). Typically, data are used to estimate mu_w(k) and R_w(k, tau).

Special case: normal white noise stochastic process eps(k):

mu_eps = 0,   R_eps(k, tau) = R_eps if tau = 0, and 0 otherwise

Since the values are jointly normally distributed and uncorrelated over time, the eps(k) are independent across time.
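These defining properties, zero mean and zero correlation at any nonzero lag, are easy to verify empirically. The seeded NumPy sketch below uses an assumed variance R_eps = 2 (a made-up value for illustration) and estimates the mean and the lag-0 and lag-1 covariances from a long sample.

```python
import numpy as np

rng = np.random.default_rng(0)             # seeded for a reproducible demo
R_eps = 2.0                                # assumed scalar covariance R_eps
eps = rng.normal(0.0, np.sqrt(R_eps), size=200_000)

mu_hat = eps.mean()                        # estimate of mu_eps (close to 0)
R0_hat = np.mean(eps * eps)                # estimate of R_eps(tau = 0) (close to 2.0)
R1_hat = np.mean(eps[1:] * eps[:-1])       # estimate of R_eps(tau = 1) (close to 0)
```

The lag-1 estimate hovers near zero at the statistical accuracy of the sample size, which is exactly the "uncorrelated over time" property above.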
Table of Contents

3. Uncertainty Modeling
3.1 Objective Statement, Stochastic Processes
3.2 Modeling using State Space Descriptions
3.3 Obtaining Models from First Principles
3.4 Obtaining Models from System Identification
Uncertainty Modeling: Modeling using State Space Descriptions

Stationary Case (1/2)

We model the stochastic process as the output w(k) of a linear system driven by normal white noise eps(k). It will turn out that w(k) is a normal stochastic process whose mu_w(k) and R_w(k, tau) can be shaped through the choice of A_w, B_w, C_w.

In the stationary case, using a state space description, we have

x_w(k+1) = A_w x_w(k) + B_w eps(k)
w(k)     = C_w x_w(k) + eps(k)    (14)

where x_w is an additional state introduced to model the linear system's response to white noise, and all the eigenvalues of A_w lie strictly inside the unit circle.

(14) is the standard form for many filter and control design tools. We will show how to determine A_w, B_w, C_w in practical situations.
Stationary Case (2/2)

The output w(k) of system (14) with white noise eps(k) and stable A_w has the following properties:

E{x_w(k)} = A_w E{x_w(k-1)} = A_w^k E{x_w(0)}
E{x_w(k) x_w'(k)} = A_w E{x_w(k-1) x_w'(k-1)} A_w' + B_w R_eps B_w'
E{x_w(k+tau) x_w'(k)} = A_w^tau E{x_w(k) x_w'(k)}    (15)

From this one can deduce that

w_bar = lim_{k->inf} E{w(k)} = 0
R_w(tau) = lim_{k->inf} E{w(k+tau) w'(k)} = C_w A_w^tau P_w C_w' + C_w A_w^{tau-1} B_w R_eps    (16)

where

P_w = A_w P_w A_w' + B_w R_eps B_w',    (17)

i.e. P_w is a positive semi-definite solution of a Lyapunov equation. These relations can be used to determine A_w, B_w, C_w matching a given covariance R_w(tau).
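Relations (16)-(17) can be checked by simulation. The scalar sketch below uses made-up coefficients a_w, b_w, c_w (illustrative values only); in the scalar case the Lyapunov equation (17) has the closed form P_w = b_w^2 R_eps / (1 - a_w^2), and at lag zero the direct feedthrough gives R_w(0) = C_w P_w C_w' + R_eps.

```python
import numpy as np

# Made-up first-order disturbance model (illustrative scalar values)
a_w, b_w, c_w = 0.8, 1.0, 0.5
R_eps = 1.0

# P_w from the Lyapunov equation (17), scalar closed form
P_w = b_w * R_eps * b_w / (1.0 - a_w * a_w)
R_w0 = c_w * P_w * c_w + R_eps                      # lag 0: C_w P_w C_w' + R_eps
R_w1 = c_w * a_w * P_w * c_w + c_w * b_w * R_eps    # (16) with tau = 1

# Monte Carlo check: simulate (14) and estimate the covariances (seeded)
rng = np.random.default_rng(1)
n = 300_000
eps = rng.normal(0.0, np.sqrt(R_eps), n)
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = a_w * x[k] + b_w * eps[k]
w = c_w * x + eps
R_w0_hat = np.mean(w[1000:] * w[1000:])             # discard the transient
R_w1_hat = np.mean(w[1001:] * w[1000:-1])
```

The sample estimates agree with the theoretical values to within the Monte Carlo accuracy, illustrating how (16)-(17) tie the model matrices to a measured covariance.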
Nonstationary Case (1/2)

If a disturbance signal has persistent characteristics (exhibiting shifts in the mean), it is not appropriate to model it with a stationary stochastic process. For example, controller design based on stationary stochastic processes will generally lead to offset.

In this case one can superimpose on the stationary signal the output of a linear system driven by integrated white noise eps_int(k):

eps_int(k+1) = eps_int(k) + eps(k)    (18)

The state space description is then

x_w(k+1) = A_w x_w(k) + B_w eps_int(k)
w(k)     = C_w x_w(k) + eps_int(k)    (19)
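A quick seeded simulation shows why integrated white noise is nonstationary: since eps_int(k) is a sum of k independent terms, its variance grows linearly, Var(eps_int(k)) = k R_eps, without bound. The sketch below (made-up sizes, illustrative only) estimates this variance across many independent realizations.

```python
import numpy as np

rng = np.random.default_rng(2)             # seeded for a reproducible demo
R_eps = 1.0
n_real, K = 4000, 50                       # 4000 independent realizations, 50 steps each
eps = rng.normal(0.0, np.sqrt(R_eps), size=(n_real, K))
eps_int = np.cumsum(eps, axis=1)           # eps_int(k) = eps(0) + ... + eps(k-1), per row

var_10 = eps_int[:, 9].var()               # estimate of Var(eps_int(10)), close to 10 * R_eps
var_50 = eps_int[:, 49].var()              # estimate of Var(eps_int(50)), close to 50 * R_eps
```

The growing variance is the "shift in the mean" behaviour the slide describes: any single realization wanders away from zero instead of fluctuating around it.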
Nonstationary Case (2/2)

The state space description (19) can be rewritten, using differenced variables, as

dx_w(k+1) = A_w dx_w(k) + B_w eps(k-1)
dw(k)     = C_w dx_w(k) + eps(k-1)    (20)

where dw(k) = w(k) - w(k-1) and eps(k) is a zero-mean stationary process. Since eps(k) is a white noise signal, (20) is equivalent to

dx_w(k+1) = A_w dx_w(k) + B_w eps(k)
dw(k)     = C_w dx_w(k) + eps(k)    (21)
Table of Contents

3. Uncertainty Modeling
3.1 Objective Statement, Stochastic Processes
3.2 Modeling using State Space Descriptions
3.3 Obtaining Models from First Principles
3.4 Obtaining Models from System Identification
Uncertainty Modeling: Obtaining Models from First Principles

Obtaining Models from First Principles

From first principles, after linearization, one obtains an ODE of the form

x_p_dot = A_p^c x_p + B_p^c u + F_p^c w
y = C_p x_p + G_p w    (22)

which can be discretized, leading to

x_p(k+1) = A_p x_p(k) + B_p u(k) + F_p w(k)
y(k)     = C_p x_p(k) + G_p w(k)    (23)

Remarks:
The subscript p is used here to distinguish the process model matrices from the disturbance model matrices introduced before.
If the physical disturbance variables cannot be identified, one can express the overall effect of the disturbances as a signal added directly to the output (an output disturbance), i.e. G_p = I and F_p = 0.
Stationary Case

We can combine the disturbance model (14) with the process model (23) to get

[x_p(k+1); x_w(k+1)] = [A_p, F_p C_w; 0, A_w] [x_p(k); x_w(k)] + [B_p; 0] u(k) + [F_p; B_w] eps(k)
y(k) = [C_p, G_p C_w] [x_p(k); x_w(k)] + G_p eps(k)    (24)

With an appropriate re-definition of the system matrices, the above is in the standard state-space form

x(k+1) = Ax(k) + Bu(k) + F eps(k)
y(k)   = Cx(k) + G eps(k)    (25)

where F eps(k) and G eps(k) play the roles of process noise eps_1(k) and measurement noise eps_2(k), respectively. Notice that the state is now expanded to include both the original system state x_p and the disturbance state x_w.
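The block structure of (24) maps directly onto `np.block`. The sketch below assembles the augmented matrices from made-up process and disturbance matrices (n_p = 2, n_w = 1, one input and one output; all numbers are illustrative, not from the slides).

```python
import numpy as np

# Made-up process model (23): n_p = 2 states, 1 input, 1 output
Ap = np.array([[0.9, 0.1], [0.0, 0.7]])
Bp = np.array([[0.0], [1.0]])
Cp = np.array([[1.0, 0.0]])
Fp = np.array([[0.1], [0.0]])
Gp = np.array([[1.0]])

# Made-up disturbance model (14): n_w = 1 state
Aw = np.array([[0.8]])
Bw = np.array([[1.0]])
Cw = np.array([[0.5]])

n_p, n_w = Ap.shape[0], Aw.shape[0]
A = np.block([[Ap, Fp @ Cw], [np.zeros((n_w, n_p)), Aw]])   # block-triangular: x_w drives x_p, not vice versa
B = np.vstack([Bp, np.zeros((n_w, Bp.shape[1]))])
F = np.vstack([Fp, Bw])
C = np.hstack([Cp, Gp @ Cw])
G = Gp
```

The zero lower-left block makes explicit that the disturbance state evolves autonomously and feeds into the process, while the input u does not affect x_w.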
Nonstationary Case (1/2)

We can combine (21) with a differenced version of (23) to obtain

[dx_p(k+1); dx_w(k+1)] = [A_p, F_p C_w; 0, A_w] [dx_p(k); dx_w(k)] + [B_p; 0] du(k) + [F_p; B_w] eps(k)    (26)
dy(k) = [C_p, G_p C_w] [dx_p(k); dx_w(k)] + G_p eps(k)    (27)

with the block matrices denoted A, B, F and C, G respectively, and where dx_p(k) := x_p(k) - x_p(k-1), and dx_w(k), du(k) are defined similarly.

For estimation and control, it is further desired that the model output be y rather than dy. This requires yet another augmentation of the state.
Nonstationary Case (2/2)

The augmented system is

[dx(k+1); y(k+1)] = [A, 0; C, I] [dx(k); y(k)] + [B; 0] du(k) + [F; G] eps(k)    (28)
y(k) = [0, I] [dx(k); y(k)]    (29)

It can be brought into the standard state-space form after re-definition of the system matrices:

x_bar(k+1) = A_bar x_bar(k) + B_bar du(k) + F_bar eps(k)
y(k)       = C_bar x_bar(k)    (30)

except that now the system input is du rather than u. System (30) contains n_y integrators expressing the effects of the white noise disturbances and of the system input du on the output.
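The augmentation (28)-(30) can be sketched with made-up matrices (illustrative values only); the n_y added output states show up as integrator eigenvalues of A_bar at z = 1, which is what gives the model its offset-free character.

```python
import numpy as np

# Made-up differenced model (26)-(27): n = 2 states, m = 1 input, p = 1 output
A = np.array([[0.9, 0.1], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[0.1], [0.2]])
G = np.array([[1.0]])
n, m, p = 2, 1, 1

# Augmented system (28)-(29): state [dx; y], input du
Abar = np.block([[A, np.zeros((n, p))], [C, np.eye(p)]])
Bbar = np.vstack([B, np.zeros((p, m))])
Fbar = np.vstack([F, G])
Cbar = np.hstack([np.zeros((p, n)), np.eye(p)])

eigs = np.linalg.eigvals(Abar)             # eigenvalues of A plus p integrators at z = 1
```

Because [A, 0; C, I] is block lower triangular, its spectrum is the spectrum of A together with p eigenvalues at 1, one integrator per output.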
Table of Contents

3. Uncertainty Modeling
3.1 Objective Statement, Stochastic Processes
3.2 Modeling using State Space Descriptions
3.3 Obtaining Models from First Principles
3.4 Obtaining Models from System Identification
Uncertainty Modeling: Obtaining Models from System Identification

Stationary Case

Input-output models obtained from system identification typically have the structure

y(z) = H_1(z) u(z) + H_2(z) eps(z)    (31)

where H_1(z) and H_2(z) are stable transfer matrices. This can be brought into the form of (25) by finding a state-space realization of [H_1(z) H_2(z)]:

x(k+1) = Ax(k) + Bu(k) + F eps(k)
y(k)   = Cx(k) + Du(k) + G eps(k)    (32)

Remarks:
H_1(z) has relative degree of at least one.
We may assume without loss of generality that H_2(0) = I, D = 0 and G = I.
Nonstationary Case

In this case, the driving noise should be integrated white noise:

y(z) = H_1(z) u(z) + H_2(z) * 1/(1 - z^{-1}) eps(z),    with eps_int(z) = 1/(1 - z^{-1}) eps(z)    (33)

Using the fact that (1 - z^{-1}) y(z) = (dy)(z), we can rewrite the above as

(dy)(z) = H_1(z) (du)(z) + H_2(z) eps(z)    (34)

Denote a realization of [H_1(z) H_2(z)] by

x(k+1) = Ax(k) + B du(k) + F eps(k)
dy(k)  = Cx(k) + eps(k)    (35)

The state can then be augmented with y as before to bring the model into the form of (30).
State Estimation

Table of Contents

1. Linear Systems
1.1 Models of Dynamic Systems
1.2 Analysis of LTI Discrete-Time Systems

2. Linear Quadratic Optimal Control
2.1 Optimal Control
2.2 Batch Approach
2.3 Recursive Approach
2.4 Infinite Horizon Optimal Control

3. Uncertainty Modeling
3.1 Objective Statement, Stochastic Processes
3.2 Modeling using State Space Descriptions
3.3 Obtaining Models from First Principles
3.4 Obtaining Models from System Identification

4. State Estimation
4.1 Linear State Estimation
4.2 State Observer
4.3 Kalman Filter
More informationEE363 homework 2 solutions
EE363 Prof. S. Boyd EE363 homework 2 solutions. Derivative of matrix inverse. Suppose that X : R R n n, and that X(t is invertible. Show that ( d d dt X(t = X(t dt X(t X(t. Hint: differentiate X(tX(t =
More informationLyapunov Stability Analysis: Open Loop
Copyright F.L. Lewis 008 All rights reserved Updated: hursday, August 8, 008 Lyapunov Stability Analysis: Open Loop We know that the stability of linear time-invariant (LI) dynamical systems can be determined
More informationCDS Solutions to Final Exam
CDS 22 - Solutions to Final Exam Instructor: Danielle C Tarraf Fall 27 Problem (a) We will compute the H 2 norm of G using state-space methods (see Section 26 in DFT) We begin by finding a minimal state-space
More informationIMPROVED MPC DESIGN BASED ON SATURATING CONTROL LAWS
IMPROVED MPC DESIGN BASED ON SATURATING CONTROL LAWS D. Limon, J.M. Gomes da Silva Jr., T. Alamo and E.F. Camacho Dpto. de Ingenieria de Sistemas y Automática. Universidad de Sevilla Camino de los Descubrimientos
More information= 0 otherwise. Eu(n) = 0 and Eu(n)u(m) = δ n m
A-AE 567 Final Homework Spring 212 You will need Matlab and Simulink. You work must be neat and easy to read. Clearly, identify your answers in a box. You will loose points for poorly written work. You
More informationEN Nonlinear Control and Planning in Robotics Lecture 3: Stability February 4, 2015
EN530.678 Nonlinear Control and Planning in Robotics Lecture 3: Stability February 4, 2015 Prof: Marin Kobilarov 0.1 Model prerequisites Consider ẋ = f(t, x). We will make the following basic assumptions
More informationLecture 9: Discrete-Time Linear Quadratic Regulator Finite-Horizon Case
Lecture 9: Discrete-Time Linear Quadratic Regulator Finite-Horizon Case Dr. Burak Demirel Faculty of Electrical Engineering and Information Technology, University of Paderborn December 15, 2015 2 Previous
More informationOPTIMAL CONTROL AND ESTIMATION
OPTIMAL CONTROL AND ESTIMATION Robert F. Stengel Department of Mechanical and Aerospace Engineering Princeton University, Princeton, New Jersey DOVER PUBLICATIONS, INC. New York CONTENTS 1. INTRODUCTION
More information1 The Observability Canonical Form
NONLINEAR OBSERVERS AND SEPARATION PRINCIPLE 1 The Observability Canonical Form In this Chapter we discuss the design of observers for nonlinear systems modelled by equations of the form ẋ = f(x, u) (1)
More informationME 234, Lyapunov and Riccati Problems. 1. This problem is to recall some facts and formulae you already know. e Aτ BB e A τ dτ
ME 234, Lyapunov and Riccati Problems. This problem is to recall some facts and formulae you already know. (a) Let A and B be matrices of appropriate dimension. Show that (A, B) is controllable if and
More informationRECURSIVE ESTIMATION AND KALMAN FILTERING
Chapter 3 RECURSIVE ESTIMATION AND KALMAN FILTERING 3. The Discrete Time Kalman Filter Consider the following estimation problem. Given the stochastic system with x k+ = Ax k + Gw k (3.) y k = Cx k + Hv
More informationFEL3210 Multivariable Feedback Control
FEL3210 Multivariable Feedback Control Lecture 8: Youla parametrization, LMIs, Model Reduction and Summary [Ch. 11-12] Elling W. Jacobsen, Automatic Control Lab, KTH Lecture 8: Youla, LMIs, Model Reduction
More informationAUTOMATIC CONTROL. Andrea M. Zanchettin, PhD Spring Semester, Introduction to Automatic Control & Linear systems (time domain)
1 AUTOMATIC CONTROL Andrea M. Zanchettin, PhD Spring Semester, 2018 Introduction to Automatic Control & Linear systems (time domain) 2 What is automatic control? From Wikipedia Control theory is an interdisciplinary
More informationDiscrete-time linear systems
Automatic Control Discrete-time linear systems Prof. Alberto Bemporad University of Trento Academic year 2-2 Prof. Alberto Bemporad (University of Trento) Automatic Control Academic year 2-2 / 34 Introduction
More informationPredictive Control of Gyroscopic-Force Actuators for Mechanical Vibration Damping
ARC Centre of Excellence for Complex Dynamic Systems and Control, pp 1 15 Predictive Control of Gyroscopic-Force Actuators for Mechanical Vibration Damping Tristan Perez 1, 2 Joris B Termaat 3 1 School
More informationState Estimation using Moving Horizon Estimation and Particle Filtering
State Estimation using Moving Horizon Estimation and Particle Filtering James B. Rawlings Department of Chemical and Biological Engineering UW Math Probability Seminar Spring 2009 Rawlings MHE & PF 1 /
More informationModule 08 Observability and State Estimator Design of Dynamical LTI Systems
Module 08 Observability and State Estimator Design of Dynamical LTI Systems Ahmad F. Taha EE 5143: Linear Systems and Control Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/ataha November
More informationACM/CMS 107 Linear Analysis & Applications Fall 2016 Assignment 4: Linear ODEs and Control Theory Due: 5th December 2016
ACM/CMS 17 Linear Analysis & Applications Fall 216 Assignment 4: Linear ODEs and Control Theory Due: 5th December 216 Introduction Systems of ordinary differential equations (ODEs) can be used to describe
More informationHybrid Systems Course Lyapunov stability
Hybrid Systems Course Lyapunov stability OUTLINE Focus: stability of an equilibrium point continuous systems decribed by ordinary differential equations (brief review) hybrid automata OUTLINE Focus: stability
More informationExtensions and applications of LQ
Extensions and applications of LQ 1 Discrete time systems 2 Assigning closed loop pole location 3 Frequency shaping LQ Regulator for Discrete Time Systems Consider the discrete time system: x(k + 1) =
More informationx(n + 1) = Ax(n) and y(n) = Cx(n) + 2v(n) and C = x(0) = ξ 1 ξ 2 Ex(0)x(0) = I
A-AE 567 Final Homework Spring 213 You will need Matlab and Simulink. You work must be neat and easy to read. Clearly, identify your answers in a box. You will loose points for poorly written work. You
More informationReverse Order Swing-up Control of Serial Double Inverted Pendulums
Reverse Order Swing-up Control of Serial Double Inverted Pendulums T.Henmi, M.Deng, A.Inoue, N.Ueki and Y.Hirashima Okayama University, 3-1-1, Tsushima-Naka, Okayama, Japan inoue@suri.sys.okayama-u.ac.jp
More informationControlled Diffusions and Hamilton-Jacobi Bellman Equations
Controlled Diffusions and Hamilton-Jacobi Bellman Equations Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2014 Emo Todorov (UW) AMATH/CSE 579, Winter
More information1 Continuous-time Systems
Observability Completely controllable systems can be restructured by means of state feedback to have many desirable properties. But what if the state is not available for feedback? What if only the output
More informationBalanced Truncation 1
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.242, Fall 2004: MODEL REDUCTION Balanced Truncation This lecture introduces balanced truncation for LTI
More informationCDS 101/110a: Lecture 2.1 Dynamic Behavior
CDS 11/11a: Lecture.1 Dynamic Behavior Richard M. Murray 6 October 8 Goals: Learn to use phase portraits to visualize behavior of dynamical systems Understand different types of stability for an equilibrium
More informationECE7850 Lecture 7. Discrete Time Optimal Control and Dynamic Programming
ECE7850 Lecture 7 Discrete Time Optimal Control and Dynamic Programming Discrete Time Optimal control Problems Short Introduction to Dynamic Programming Connection to Stabilization Problems 1 DT nonlinear
More informationLinear System Theory. Wonhee Kim Lecture 1. March 7, 2018
Linear System Theory Wonhee Kim Lecture 1 March 7, 2018 1 / 22 Overview Course Information Prerequisites Course Outline What is Control Engineering? Examples of Control Systems Structure of Control Systems
More informationCALIFORNIA INSTITUTE OF TECHNOLOGY Control and Dynamical Systems
CDS 101 1. For each of the following linear systems, determine whether the origin is asymptotically stable and, if so, plot the step response and frequency response for the system. If there are multiple
More informationLecture 4. Chapter 4: Lyapunov Stability. Eugenio Schuster. Mechanical Engineering and Mechanics Lehigh University.
Lecture 4 Chapter 4: Lyapunov Stability Eugenio Schuster schuster@lehigh.edu Mechanical Engineering and Mechanics Lehigh University Lecture 4 p. 1/86 Autonomous Systems Consider the autonomous system ẋ
More informationESC794: Special Topics: Model Predictive Control
ESC794: Special Topics: Model Predictive Control Nonlinear MPC Analysis : Part 1 Reference: Nonlinear Model Predictive Control (Ch.3), Grüne and Pannek Hanz Richter, Professor Mechanical Engineering Department
More informationFINITE HORIZON ROBUST MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES. Danlei Chu, Tongwen Chen, Horacio J. Marquez
FINITE HORIZON ROBUST MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES Danlei Chu Tongwen Chen Horacio J Marquez Department of Electrical and Computer Engineering University of Alberta Edmonton
More informationControl engineering sample exam paper - Model answers
Question Control engineering sample exam paper - Model answers a) By a direct computation we obtain x() =, x(2) =, x(3) =, x(4) = = x(). This trajectory is sketched in Figure (left). Note that A 2 = I
More informationState estimation and the Kalman filter
State estimation and the Kalman filter PhD, David Di Ruscio Telemark university college Department of Technology Systems and Control Engineering N-3914 Porsgrunn, Norway Fax: +47 35 57 52 50 Tel: +47 35
More informationModule 02 CPS Background: Linear Systems Preliminaries
Module 02 CPS Background: Linear Systems Preliminaries Ahmad F. Taha EE 5243: Introduction to Cyber-Physical Systems Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/ taha/index.html August
More informationChapter 8 Stabilization: State Feedback 8. Introduction: Stabilization One reason feedback control systems are designed is to stabilize systems that m
Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of echnology c Chapter 8 Stabilization:
More informationECE7850 Lecture 8. Nonlinear Model Predictive Control: Theoretical Aspects
ECE7850 Lecture 8 Nonlinear Model Predictive Control: Theoretical Aspects Model Predictive control (MPC) is a powerful control design method for constrained dynamical systems. The basic principles and
More informationChap. 3. Controlled Systems, Controllability
Chap. 3. Controlled Systems, Controllability 1. Controllability of Linear Systems 1.1. Kalman s Criterion Consider the linear system ẋ = Ax + Bu where x R n : state vector and u R m : input vector. A :
More informationCDS 101/110a: Lecture 2.1 Dynamic Behavior
CDS 11/11a: Lecture 2.1 Dynamic Behavior Richard M. Murray 6 October 28 Goals: Learn to use phase portraits to visualize behavior of dynamical systems Understand different types of stability for an equilibrium
More informationDecentralized and distributed control
Decentralized and distributed control Centralized control for constrained discrete-time systems M. Farina 1 G. Ferrari Trecate 2 1 Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB) Politecnico
More informationDynamical Systems & Lyapunov Stability
Dynamical Systems & Lyapunov Stability Harry G. Kwatny Department of Mechanical Engineering & Mechanics Drexel University Outline Ordinary Differential Equations Existence & uniqueness Continuous dependence
More informationPontryagin s maximum principle
Pontryagin s maximum principle Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2012 Emo Todorov (UW) AMATH/CSE 579, Winter 2012 Lecture 5 1 / 9 Pontryagin
More information