Advanced Mechatronics Engineering


Advanced Mechatronics Engineering, German University in Cairo, 21 December 2013

Outline: Necessary conditions for optimal input; Example; Linear regulator problem; Example.

Necessary conditions for optimal input

The problem is to find an optimal input u*(t) that causes the system

    ẋ(t) = f(x(t), u(t), t)    (1)

to follow a trajectory x*(t) that minimizes the performance measure

    J(x(t), t) = h(x(t_f), t_f) + ∫_{t_0}^{t_f} g(x(τ), u(τ), τ) dτ.    (2)

Necessary conditions for optimal input

The Hamiltonian is given by

    H(x(t), u(t), p(t), t) ≜ g(x(t), u(t), t) + p^T(t) f(x(t), u(t), t).    (3)

The necessary conditions of optimality are

    ẋ*(t) = ∂H/∂p (x*(t), u*(t), p*(t), t),    (4)
    ṗ*(t) = −∂H/∂x (x*(t), u*(t), p*(t), t),    (5)
        0 = ∂H/∂u (x*(t), u*(t), p*(t), t).    (6)

Example

The system

    ẋ_1(t) = x_2(t),    (7)
    ẋ_2(t) = −x_2(t) + u(t)    (8)

is to be controlled so that its control effort is minimized. Therefore, the performance measure is given by

    J(u) = ∫_{t_0}^{t_f} ½ u²(t) dt.    (9)

The Hamiltonian (3) is given by

    H(x(t), u(t), p(t), t) = ½u²(t) + p_1(t)x_2(t) − p_2(t)x_2(t) + p_2(t)u(t).    (10)

Example

    H(x(t), u(t), p(t), t) = ½u²(t) + p_1(t)x_2(t) − p_2(t)x_2(t) + p_2(t)u(t).    (11)

The necessary conditions for optimality are

    ṗ_1*(t) = −∂H/∂x_1 = 0,    (12)
    ṗ_2*(t) = −∂H/∂x_2 = −p_1*(t) + p_2*(t),    (13)

and

    0 = ∂H/∂u = u*(t) + p_2*(t).    (14)
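The conditions (12)-(14) can be checked symbolically. The sketch below is an assumption-laden aside (SymPy is not used in the slides); it differentiates the Hamiltonian of (11) and solves the stationarity condition for the optimal input:

```python
import sympy as sp

x1, x2, u, p1, p2 = sp.symbols('x1 x2 u p1 p2')

# Hamiltonian of the example: H = u^2/2 + p1*x2 + p2*(-x2 + u)
H = sp.Rational(1, 2)*u**2 + p1*x2 + p2*(-x2 + u)

# Costate equations: pdot_i = -dH/dx_i
p1_dot = -sp.diff(H, x1)                 # expect 0
p2_dot = -sp.diff(H, x2)                 # expect -p1 + p2
# Stationarity 0 = dH/du gives the optimal input u* = -p2
u_star = sp.solve(sp.diff(H, u), u)[0]

print(p1_dot, p2_dot, u_star)
```

The printed derivatives reproduce equations (12), (13), and u*(t) = −p_2*(t) from (14).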

Linear Regulator Problems

The plant is described by the linear state equations

    ẋ(t) = A(t)x(t) + B(t)u(t),    (15)

which may have time-varying coefficients. The performance measure to be minimized is

    J = ½ x^T(t_f) H x(t_f) + ∫_{t_0}^{t_f} ½ [x^T Q x + u^T R u] dt,    (16)

where the final time t_f is fixed, H and Q are real symmetric positive semi-definite matrices, and R is a real symmetric positive definite matrix. The Hamiltonian is

    H(x(t), u(t), p(t), t) ≜ g(x(t), u(t), t) + p^T(t) f(x(t), u(t), t).    (17)

Linear Regulator Problems

    H(x(t), u(t), p(t), t) = ½x^T Q x + ½u^T R u + p^T A(t)x(t) + p^T B(t)u(t),    (18)

and the necessary conditions for optimality are

    ẋ*(t) = A(t)x*(t) + B(t)u*(t),    (19)
    ṗ*(t) = −Q(t)x*(t) − A^T(t)p*(t),    (20)
        0 = R(t)u*(t) + B^T(t)p*(t).    (21)

Solving (21) for u*(t) yields

    u*(t) = −R⁻¹(t)B^T(t)p*(t).    (22)

Substitution of (22) into (19) yields

    ẋ*(t) = A(t)x*(t) − B(t)R⁻¹(t)B^T(t)p*(t).    (23)

Linear Regulator Problems

Putting (20) and (23) into matrix form:

    [ẋ*(t); ṗ*(t)] = [A(t), −B(t)R⁻¹(t)B^T(t); −Q(t), −A^T(t)] [x*(t); p*(t)].    (24)

The solution of these equations has the form

    [x*(t_f); p*(t_f)] = φ(t_f, t) [x*(t); p*(t)],    (25)

where φ(t_f, t) is the state-transition matrix of the system (24). Partitioning φ into blocks,

    [x*(t_f); p*(t_f)] = [φ_11(t_f, t), φ_12(t_f, t); φ_21(t_f, t), φ_22(t_f, t)] [x*(t); p*(t)].    (26)

Linear Regulator Problems

From the boundary condition, the final co-states are related to the final states by

    p*(t_f) = H x*(t_f).    (27)

Substituting (26) into (27) and solving for p*(t), we obtain

    p*(t) = [φ_22(t_f, t) − Hφ_12(t_f, t)]⁻¹ [Hφ_11(t_f, t) − φ_21(t_f, t)] x*(t),    (28)

which is linear in the state:

    p*(t) = K(t)x*(t).    (29)

The optimal input is then the linear state feedback

    u*(t) = −R⁻¹(t)B^T(t)K(t)x*(t).    (30)
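The gain K(t) of (28) can be computed numerically from the transition matrix of the combined state/costate system (24). The sketch below uses a hypothetical time-invariant double-integrator example (A, B, Q, R are illustrative choices, not from the slides); far from the final time, K(t) should settle to the steady-state solution of the algebraic Riccati equation:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_are

# Hypothetical double-integrator data (illustrative, not from the slides)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Hf = np.zeros((2, 2))          # terminal weight H = 0
tf = 10.0                      # long horizon

# Coefficient matrix of the combined state/costate system, eq. (24)
M = np.block([[A, -B @ np.linalg.inv(R) @ B.T],
              [-Q, -A.T]])

def K_of(t):
    """Gain K(t) built from the transition-matrix blocks, eq. (28)."""
    phi = expm(M * (tf - t))               # phi(tf, t)
    p11, p12 = phi[:2, :2], phi[:2, 2:]
    p21, p22 = phi[2:, :2], phi[2:, 2:]
    # Solve (phi22 - Hf*phi12) K = (Hf*phi11 - phi21)
    return np.linalg.solve(p22 - Hf @ p12, Hf @ p11 - p21)

# Far from tf, K(t) approaches the algebraic Riccati solution
P = solve_continuous_are(A, B, Q, R)
print(np.allclose(K_of(0.0), P, atol=1e-2))
```

The optimal feedback at time t is then u = −R⁻¹BᵀK(t)x, as in (30); for this plant the steady-state Riccati solution is [[√3, 1], [1, √3]].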

Example

It is desired to determine the input (using the principle of optimality and the Hamilton-Jacobi-Bellman equation) that causes the plant

    ẋ_1(t) = x_2(t),    (31)
    ẋ_2(t) = −x_1(t) − 2x_2(t) + u(t)    (32)

to minimize the performance measure

    J = 10x_1²(T) + ½ ∫_0^T [x_1²(t) + 2x_2²(t) + u²(t)] dt.    (33)

State Transition Matrix

Consider the scalar case

    ẋ(t) = ax(t).    (34)

Taking the Laplace transform of (34), we obtain

    sX(s) − x(0) = aX(s),    (35)
    X(s) = x(0)/(s − a) = (s − a)⁻¹ x(0).    (36)

Finally, the inverse Laplace transform of (36) yields

    x(t) = e^{at} x(0).    (37)

State Transition Matrix

Now consider the homogeneous state equation

    ẋ(t) = Ax(t).    (38)

Taking the Laplace transform,

    sX(s) − x(0) = AX(s),    (39)
    X(s) = (sI − A)⁻¹ x(0).    (40)

The inverse Laplace transform yields

    x(t) = L⁻¹[(sI − A)⁻¹] x(0) = e^{At} x(0).    (41)

Therefore, the state transition matrix e^{At} is given by

    e^{At} = L⁻¹[(sI − A)⁻¹].    (42)

State Transition Matrix

Calculate the state transition matrix of the following system:

    [ẋ_1; ẋ_2] = [−1, 0; 2, −3] [x_1; x_2],    (43)

    sI − A = [(s + 1), 0; −2, (s + 3)],    (44)

    (sI − A)⁻¹ = [(s + 3)/((s + 1)(s + 3)), 0; 2/((s + 1)(s + 3)), (s + 1)/((s + 1)(s + 3))]
               = [1/(s + 1), 0; 1/(s + 1) − 1/(s + 3), 1/(s + 3)].

Then

    e^{At} = L⁻¹[(sI − A)⁻¹],    (45)

    e^{At} = [e^{−t}, 0; e^{−t} − e^{−3t}, e^{−3t}].
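A closed-form state transition matrix like this is easy to double-check against a numerical matrix exponential. A minimal sketch, assuming A = [−1, 0; 2, −3] (with the minus signs that transcription easily loses), using SciPy:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 0.0], [2.0, -3.0]])

def eAt(t):
    # Closed-form e^{At} obtained via the resolvent (sI - A)^{-1}
    return np.array([[np.exp(-t), 0.0],
                     [np.exp(-t) - np.exp(-3*t), np.exp(-3*t)]])

t = 0.7
print(np.allclose(expm(A * t), eAt(t)))  # True
```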

State Transition Matrix

Calculate the state transition matrix of the following system:

    [ẋ_1; ẋ_2] = [0, 1; −2, −3] [x_1; x_2],    (46)

    sI − A = [s, −1; 2, (s + 3)],    (47)

    (sI − A)⁻¹ = [(s + 3)/((s + 1)(s + 2)), 1/((s + 1)(s + 2)); −2/((s + 1)(s + 2)), s/((s + 1)(s + 2))].

Then

    e^{At} = L⁻¹[(sI − A)⁻¹],    (48)

    e^{At} = [2e^{−t} − e^{−2t}, e^{−t} − e^{−2t}; −2e^{−t} + 2e^{−2t}, −e^{−t} + 2e^{−2t}].
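The resolvent-and-inverse-Laplace route can also be automated with a computer algebra system. A sketch using SymPy (an assumption; the slides do the partial fractions by hand), taking A = [0, 1; −2, −3]:

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)
A = sp.Matrix([[0, 1], [-2, -3]])

# Resolvent (sI - A)^{-1}, then entrywise inverse Laplace transform
Phi = (s * sp.eye(2) - A).inv()
eAt = sp.simplify(Phi.applyfunc(lambda f: sp.inverse_laplace_transform(f, s, t)))

print(eAt)  # entries are combinations of e^{-t} and e^{-2t}
```

Each entry matches the closed form above, e.g. the (1,1) entry is 2e^{−t} − e^{−2t}.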

State Transition Matrix

If the matrix A can be transformed into diagonal form, then the state transition matrix e^{At} is given by

    e^{At} = P e^{Dt} P⁻¹ = P diag(e^{λ_1 t}, e^{λ_2 t}, …, e^{λ_n t}) P⁻¹,    (49)

where P is a diagonalizing matrix for A, and λ_i is the i-th eigenvalue of A, for i = 1, …, n.

State Transition Matrix

Derivation: Consider the homogeneous state equation

    ẋ = Ax,    (50)

and the similarity transformation

    x = Pξ,  ẋ = Pξ̇.    (51)

Substituting (51) into (50) yields

    ξ̇ = P⁻¹APξ = Dξ.    (52)

The solution of (52) is

    ξ(t) = e^{Dt} ξ(0),    (53)

and using (51),

    x(t) = Pξ(t) = P e^{Dt} ξ(0),  with x(0) = Pξ(0).    (54)

Therefore

    x(t) = P e^{Dt} P⁻¹ x(0) = e^{At} x(0).    (55)

State Transition Matrix

Calculate the state transition matrix of the following system:

    [ẋ_1; ẋ_2] = [0, 1; 0, −2] [x_1; x_2].    (56)

The eigenvalues of A are λ_1 = 0 and λ_2 = −2. A similarity transformation matrix P is

    P = [1, 1; 0, −2].    (57)

Using (49) to calculate the state transition matrix:

    e^{At} = P e^{Dt} P⁻¹    (58)
           = [1, 1; 0, −2] [1, 0; 0, e^{−2t}] [1, 1/2; 0, −1/2],

    e^{At} = [1, ½(1 − e^{−2t}); 0, e^{−2t}].    (59)
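The diagonalization route of (49) can be checked numerically with an eigendecomposition. A sketch using NumPy (the eigenvector matrix it returns need not equal the P chosen above, but the product P e^{Dt} P⁻¹ is the same):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, -2.0]])
lam, P = np.linalg.eig(A)          # eigenvalues 0, -2 and eigenvector matrix P

def eAt(t):
    # e^{At} = P e^{Dt} P^{-1}, with e^{Dt} = diag(e^{lambda_i t}), eq. (49)
    return (P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)).real

t = 1.3
closed_form = np.array([[1.0, 0.5 * (1 - np.exp(-2*t))],
                        [0.0, np.exp(-2*t)]])
print(np.allclose(eAt(t), closed_form), np.allclose(eAt(t), expm(A * t)))
```

Both comparisons agree: the eigendecomposition reproduces the closed form (59) and the direct matrix exponential.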

Thanks. Questions, please.