Dynamic Optimisation: Discrete Dynamic Systems

A single stage example

Suppose that we have a specific single stage dynamic system governed by the following equation:

x_1 = a x_0 + b u_0,    x_0 = x_i    (1)

where x is a scalar state and u is a scalar control input. We wish to minimise the following objective, subject to the dynamics of the system:

min_{u_0} J = x_1^2    (2)

Substituting x_1 to include the system constraint:

min_{u_0} J = (a x_0 + b u_0)^2 = a^2 x_0^2 + b^2 u_0^2 + 2ab x_0 u_0    (3)

We can find ∂J/∂u_0 and set it to zero:

∂J/∂u_0 = 2 b^2 u_0 + 2ab x_0 = 0    (4)

Therefore:

u_0 = -(a/b) x_0    (5)

which gives:

x_1 = a x_0 + b(-a/b) x_0 = 0    (6)
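As a sanity check, the closed-form answer (5) can be verified numerically. A minimal Python sketch; the values a = 2, b = 0.5 and x_0 = 3 are arbitrary test data, not from the notes:

```python
# Verify that u0 = -(a/b)*x0 drives x1 to zero and minimises J = x1^2.
a, b, x0 = 2.0, 0.5, 3.0    # arbitrary test values

u0 = -(a / b) * x0          # optimal control, equation (5)
x1 = a * x0 + b * u0        # system dynamics, equation (1)
J = x1 ** 2                 # objective, equation (2)
print(x1, J)                # both are exactly 0

# Any small perturbation of u0 can only increase J:
for du in (-0.1, 0.1):
    assert (a * x0 + b * (u0 + du)) ** 2 > J
```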

Therefore the optimal control u_0 = -(a/b) x_0 gives the minimum value of the objective function, J = 0.

A two stage example

Now, take the same system over two stages:

x_0 = x_i
x_1 = a x_0 + b u_0    (7)
x_2 = a x_1 + b u_1 = a^2 x_0 + ab u_0 + b u_1

And assume that we wish to minimise the following objective function:

min_{u_0, u_1} J = x_2^2 + u_1^2 + u_0^2    (8)

Substituting x_2, we have

min_{u_0, u_1} J = (a^2 x_0 + ab u_0 + b u_1)^2 + u_1^2 + u_0^2    (9)

The partial derivatives of J with respect to u_0 and u_1 are:

∂J/∂u_0 = 2ab (a^2 x_0 + ab u_0 + b u_1) + 2 u_0 = 0    (10)
∂J/∂u_1 = 2b (a^2 x_0 + ab u_0 + b u_1) + 2 u_1 = 0    (11)

We have two linear equations with two unknowns, u_0 and u_1. The solution is:

u_0 = -(a^3 b)/(a^2 b^2 + b^2 + 1) x_0,    u_1 = -(a^2 b)/(a^2 b^2 + b^2 + 1) x_0    (12)

Suppose that we know that a = 0.5, b = 1 and x_0 = 1. Then we have:

u_0 = -0.0556,  u_1 = -0.1111,  x_1 = 0.4444,  x_2 = 0.1111,  J = 0.0278    (13)

(Figure: plots of the optimal control u_k and state x_k over the two stages.)
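The closed-form expressions (12) can be checked directly by propagating the dynamics (a minimal Python sketch):

```python
# Evaluate the two-stage optimal controls from equation (12) for
# a = 0.5, b = 1, x0 = 1, then propagate the dynamics (7).
a, b, x0 = 0.5, 1.0, 1.0

D = a**2 * b**2 + b**2 + 1.0          # common denominator in (12)
u0 = -(a**3 * b / D) * x0
u1 = -(a**2 * b / D) * x0

x1 = a * x0 + b * u0                  # equation (7)
x2 = a * x1 + b * u1
J = x2**2 + u1**2 + u0**2             # equation (8)
print(round(u0, 4), round(u1, 4))     # -0.0556 -0.1111
print(round(x1, 4), round(x2, 4))     # 0.4444 0.1111
print(round(J, 4))                    # 0.0278
```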

General discrete case

Suppose that a dynamic system is described by the following equation, which determines the transition from the n-dimensional state x_k to state x_{k+1}, given the m-dimensional control vector u_k:

x_{k+1} = f(x_k, u_k, k),    x_0 = x_i    (14)

A fairly general optimisation problem for such systems is to find the sequence of controls u_k, k = 0, ..., N-1, to minimise a performance index of the form:

J = φ(x_N) + Σ_{k=0}^{N-1} L(x_k, u_k, k)    (15)

subject to

x_{k+1} = f(x_k, u_k, k),    x_0 = x_i    (16)

This is an optimisation problem with equality constraints.

Necessary optimality conditions

Adjoin the constraints to the performance index with a sequence of Lagrange multiplier vectors λ_k as follows:

J̄ = φ(x_N) + λ_0^T [x_i - x_0] + Σ_{k=0}^{N-1} { L_k + λ_{k+1}^T [f_k - x_{k+1}] }    (17)

Define the Hamiltonian as follows:

H_k = L(x_k, u_k, k) + λ_{k+1}^T f(x_k, u_k, k)    (18)

So that

J̄ = φ(x_N) - λ_N^T x_N + λ_0^T x_i + Σ_{k=0}^{N-1} { H_k - λ_k^T x_k }    (19)

The optimality conditions are then found using optimisation theory, which involves calculating the increment dJ̄ and setting it to zero.

The optimality conditions are:

x_{k+1} = f(x_k, u_k, k)    (20)
λ_k = (∂H_k/∂x_k)^T    (21)
(∂H_k/∂u_k)^T = 0    (22)

The boundary conditions are:

x_0 = x_i    (23)
λ_N = (∂φ/∂x_N)^T    (24)

We have a two point boundary value problem.

Discrete Linear-Quadratic Regulator

Let the plant to be controlled be described by the linear equation

x_{k+1} = A x_k + B u_k    (25)

Suppose that we wish to minimise the following quadratic performance index:

J = (1/2) x_N^T S x_N + (1/2) Σ_{k=0}^{N-1} [ x_k^T Q x_k + u_k^T R u_k ]    (26)

We assume that Q and S are positive semidefinite matrices and that R is positive definite.

In this case, the Hamiltonian is given by:

H_k = (1/2)( x_k^T Q x_k + u_k^T R u_k ) + λ_{k+1}^T (A x_k + B u_k)    (27)

From the necessary optimality conditions, we have:

x_{k+1} = A x_k + B u_k    (28)
λ_k = (∂H_k/∂x_k)^T = Q x_k + A^T λ_{k+1}    (29)
(∂H_k/∂u_k)^T = R u_k + B^T λ_{k+1} = 0    (30)

From (30) we can obtain the optimal control:

u_k = -R^{-1} B^T λ_{k+1}    (31)

The boundary conditions are:

x_0 specified,    λ_N = S x_N    (32)

We have a linear two point boundary value problem.

Riccati Solution

The solution to the linear two point boundary value problem can be found by solving the following Riccati equation backwards from S_N = S:

S_k = A^T [ S_{k+1} - S_{k+1} B (B^T S_{k+1} B + R)^{-1} B^T S_{k+1} ] A + Q    (33)

The state feedback gain matrix is given by:

K_k = ( B^T S_{k+1} B + R )^{-1} B^T S_{k+1} A    (34)

The optimal control is:

u_k = -K_k x_k    (35)

and the optimal state satisfies:

x_{k+1} = (A - B K_k) x_k    (36)
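The backward recursion (33)-(34) is straightforward to implement. A minimal Python sketch for a scalar system; the data a = 2, b = 1, q = 1, r = 2 match the scalar example used later in these notes:

```python
# Backward Riccati recursion, equations (33)-(34), for the scalar system
# x_{k+1} = a x_k + b u_k with stage cost (1/2)(q x_k^2 + r u_k^2), S_N = 0.
a, b, q, r = 2.0, 1.0, 1.0, 2.0
N = 50

S = 0.0                                  # terminal condition S_N
gains = [None] * N
for k in range(N - 1, -1, -1):           # solve backwards in time
    gains[k] = (b * S * a) / (b * S * b + r)                       # K_k, eq. (34)
    S = a * (S - S * b * (1.0 / (b * S * b + r)) * b * S) * a + q  # S_k, eq. (33)

# Far from the terminal time the gain has converged to its steady-state value.
print(round(gains[0], 4))                # 1.5687
```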

Steady State Solutions

When the number of samples N approaches infinity, then under certain conditions the Riccati solution converges to fixed values of S and K:

S = A^T [ S - S B (B^T S B + R)^{-1} B^T S ] A + Q    (37)
K = ( B^T S B + R )^{-1} B^T S A    (38)

Equation (37) is known as the discrete algebraic Riccati equation (ARE). In this case, the optimal control is:

u_k = -K x_k    (39)

and the optimal state satisfies:

x_{k+1} = (A - B K) x_k    (40)

Example: LQ regulation of an unstable scalar system

Consider the unstable system:

x_{k+1} = 2 x_k + u_k    (41)

Assume that we wish to regulate this system using steady state LQ control, given the following performance index:

J = (1/2) Σ_{k=0}^{∞} [ x_k^2 + 2 u_k^2 ]    (42)

The algebraic Riccati equation becomes:

S^2 - 7S - 2 = 0    (43)

which has the solutions S_1 = -0.2749 and S_2 = 7.2749.

Taking the positive solution gives the following state feedback law:

u_k = -1.5687 x_k    (44)

and the closed loop system becomes stable:

x_{k+1} = 2 x_k + (-1.5687 x_k) = 0.4313 x_k    (45)
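The whole example can be reproduced in a few lines (a minimal Python sketch):

```python
import math

# Scalar discrete algebraic Riccati equation from equation (43):
# S^2 - 7S - 2 = 0, for x_{k+1} = 2 x_k + u_k with q = 1, r = 2.
S1 = (7 - math.sqrt(57)) / 2
S2 = (7 + math.sqrt(57)) / 2
print(round(S1, 4), round(S2, 4))    # -0.2749 7.2749

S = S2                               # the positive (stabilising) root
K = (S + 2) ** -1 * S * 2            # K = (b S b + r)^{-1} b S a, eq. (38)
a_cl = 2 - K                         # closed-loop dynamics, eq. (45)
print(round(K, 4), round(a_cl, 4))   # 1.5687 0.4313
```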

Example: DC motor under state feedback

(Figure: DC motor schematic with input voltage u, resistance R, inductance L, current i, constant armature current I_a, rotor speed ω and inertia J.)

The state equations for a DC motor with constant armature current are:

di(t)/dt = -(R/L) i(t) + (1/L) u(t)    (46)
dω(t)/dt = (K/J) i(t)    (47)

where K = k_t i_a and J is the moment of inertia. Assuming R = L = K = J = 1, find a discrete time state feedback controller with sampling time h = 1 to minimise the following performance index:

J = Σ_{k=0}^{∞} [ i_k^2 + ω_k^2 + u_k^2 ]    (48)

Solution: Continuous time model:

d/dt [ i ; ω ] = [ -1 0 ; 1 0 ] [ i ; ω ] + [ 1 ; 0 ] u    (49)

with Ā = [ -1 0 ; 1 0 ] and B̄ = [ 1 ; 0 ].

Discrete time model using zero order hold discretisation, with A = e^{Āh} and B = ∫_0^h e^{Āt} dt B̄:

[ i_{k+1} ; ω_{k+1} ] = [ 0.368 0 ; 0.632 1 ] [ i_k ; ω_k ] + [ 0.632 ; 0.368 ] u_k    (50)

Riccati solution using Matlab:

K = dlqr( A, B, Q, R )
K = [ 0.615  0.628 ]    (51)

Control law:

u_k = -0.615 i_k - 0.628 ω_k    (52)
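For readers without Matlab, the same numbers can be reproduced in plain Python: the zero order hold discretisation of (49) has a closed form here, and iterating the Riccati recursion (33)-(34) to convergence stands in for dlqr. A sketch, assuming Q = I and R = 1 as implied by the performance index (48):

```python
import math

# Reproduce the DC-motor design without dlqr.  The ZOH discretisation of
# Abar = [[-1, 0], [1, 0]], Bbar = [1, 0] with h = 1 has a closed form,
# and the Riccati recursion (33)-(34) is iterated to steady state.
e = math.exp(-1.0)
A = [[e, 0.0], [1.0 - e, 1.0]]   # A = exp(Abar*h), cf. equation (50)
B = [1.0 - e, e]                 # B = integral_0^h exp(Abar*t) dt * Bbar
Q = [[1.0, 0.0], [0.0, 1.0]]     # state weight from (48)
R = 1.0                          # control weight from (48)

S = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(200):                         # iterate (33) to convergence
    SB = [S[0][0]*B[0] + S[0][1]*B[1], S[1][0]*B[0] + S[1][1]*B[1]]
    denom = B[0]*SB[0] + B[1]*SB[1] + R      # scalar B'SB + R (single input)
    K = [(SB[0]*A[0][j] + SB[1]*A[1][j]) / denom for j in range(2)]   # eq. (34)
    M = [[S[i][j] - SB[i]*SB[j]/denom for j in range(2)] for i in range(2)]
    AtM = [[A[0][i]*M[0][j] + A[1][i]*M[1][j] for j in range(2)] for i in range(2)]
    S = [[AtM[i][0]*A[0][j] + AtM[i][1]*A[1][j] + Q[i][j] for j in range(2)]
         for i in range(2)]                  # eq. (33)

# Steady-state gain; the notes quote dlqr giving K = [0.615 0.628].
print(round(K[0], 3), round(K[1], 3))
```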

Suppose that the system starts from the initial condition i(0) = 1 A and ω(0) = 2 rad/s.

(Figure: closed-loop responses of the states x = [i, ω] and the control u over 10 s.)