ACM/CMS 107 Linear Analysis & Applications Fall 2016 Assignment 4: Linear ODEs and Control Theory Due: 5th December 2016


Introduction

Systems of ordinary differential equations (ODEs) can be used to describe many physical processes, from the dynamics of populations to the trajectories of missiles. The general form for such systems is as follows:

    dx/dt(t) = F(x(t), t),   x(0) = x_0,

where x(t) ∈ R^n for each t. Note that a higher order system can always be written as a first order system in a higher dimensional space. The behavior of the solution x(t) may be undesirable, and so it will often be of interest to control the system by, for example, forcing it. The above system is therefore generalized to

    dx/dt(t) = F(x(t), t, u(t)),   x(0) = x_0,

where u : [0, ∞) → R is a control function. The function u may also depend on the state x(t) at time t, in addition to t, to allow for feedback into the system. The question arises of how the control u can be chosen to enforce certain behavior of the solution x.

[Figure 1: The triple pendulum on a cart, with pivot angles θ_1(t), θ_2(t), θ_3(t) and cart control u(t).]

As an example, consider a triple pendulum on a cart as illustrated in Figure 1. The state of the pendulum can be characterized by the pivot angles (θ_1, θ_2, θ_3). Under the laws of classical mechanics, the evolution of the angles (θ_1, θ_2, θ_3) in time is deterministic, though their behavior is chaotic and the system of differential equations that describes their evolution is nonlinear. It may be of interest to ask how the cart should be moved, according to the control u(t), to ensure that each θ_i(t) → π in a finite time, so that the pendulum stands up straight. The following video illustrates the calculation and implementation of such a control: https://www.youtube.com/watch?v=cyn-crnrb3e

In Problem 1 we show how solutions to general linear systems of ODEs can be found via use of the matrix exponential.
In Problem 2 we introduce the notion of controllability, and look at conditions under which this holds for linear systems. In Problem 3 we consider partially observed systems, and look at when we can recover a system's state from these observations using a consequence of controllability. Finally, in Problem 4 we study the systems from Problem 3 numerically, along with the Lorenz 63 model, which, like the triple pendulum, is a nonlinear and chaotic system.
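The reduction of a higher order equation to a first order system, mentioned above, can be made concrete with a small sketch. Below, the single-pendulum equation θ'' = −sin(θ) (an example chosen here for illustration, not part of the assignment) is rewritten as a first order system in (θ, ω) and integrated with the classical fourth order Runge-Kutta method; Python is used for the sketch, although the numerical problems below ask for MATLAB.

```python
import math

# Illustration (an example chosen here, not part of the assignment):
# the second-order pendulum equation theta'' = -sin(theta) rewritten as
# the first-order system x = (theta, omega), dx/dt = (omega, -sin(theta)),
# and integrated with the classical fourth-order Runge-Kutta method.

def F(x):
    theta, omega = x
    return (omega, -math.sin(theta))

def rk4_step(f, x, h):
    k1 = f(x)
    k2 = f(tuple(xi + 0.5 * h * ki for xi, ki in zip(x, k1)))
    k3 = f(tuple(xi + 0.5 * h * ki for xi, ki in zip(x, k2)))
    k4 = f(tuple(xi + h * ki for xi, ki in zip(x, k3)))
    return tuple(xi + (h / 6.0) * (a + 2*b + 2*c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

def energy(x):
    # 0.5*omega^2 + (1 - cos(theta)) is conserved; a convenient accuracy check.
    theta, omega = x
    return 0.5 * omega**2 + (1.0 - math.cos(theta))

x = (1.0, 0.0)            # released from angle 1 rad, at rest
h, steps = 0.01, 1000     # integrate to t = 10
E0 = energy(x)
for _ in range(steps):
    x = rk4_step(F, x, h)
print(abs(energy(x) - E0))   # energy drift stays tiny for RK4
```

The same pattern handles the triple pendulum or any higher order system: stack the angles and angular velocities into one state vector.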

Problem 1. Linear ODEs (20 points)

(a) Let A ∈ R^{n×n}. Define the exponential of A by the Taylor series

    e^A = Σ_{k=0}^∞ (1/k!) A^k.

Show that for each t,

    d/dt ( e^{tA} ) = A e^{tA} = e^{tA} A,

assuming any results about the exponential from lectures and problem sets.

(b) Let A ∈ R^{n×n} and B ∈ R^{n×m}.

(i) Using part (a), write down the solution to the linear system

    dx/dt(t) = Ax(t),   x(0) = x_0.

If x_0 ∈ C^n is an eigenvector of A with eigenvalue λ ∈ C, describe the behavior of x(t) based on the value of λ.

(ii) Let u : [0, ∞) → R^m be a control function. Consider the forced linear system given by

    dx/dt(t) = Ax(t) + Bu(t),   x(0) = x_0.

The solution to this system is given by

    x(t) = e^{tA} x_0 + ∫_0^t e^{(t−s)A} B u(s) ds.

By use of a suitable integrating factor, derive this solution.

Problem 2. Controllability (25 points)

We define the space of unrestricted controls U as the set of functions u : [0, ∞) → R^m:

    U = { u : [0, ∞) → R^m }.

It is often of interest in control theory to work with a subset U_ad ⊆ U, referred to as the set of admissible controls; in this assignment, however, we work only with unrestricted controls.

Let A ∈ R^{n×n} and B ∈ R^{n×m}. Given a control u ∈ U, consider the response x : [0, ∞) → R^n defined via the system

    dx/dt(t) = Ax(t) + Bu(t),   x(0) = x_0    (1)

for some x_0 ∈ R^n. If u(·) depends only upon the initial condition x_0, it is called an open-loop control. If it depends on the path x(·), it is called a closed-loop control. A closed-loop control allows feedback into the system, whereas an open-loop control does not. We introduce the notion of controllability for this system:
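The Taylor series in part (a) can also be summed numerically. The sketch below (Python for illustration; the 2×2 rotation generator is an example chosen here because its exponential is known in closed form, e^{tA} = [[cos t, sin t], [−sin t, cos t]]) checks the truncated series against that closed form.

```python
import math

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_taylor(A, terms=30):
    """Truncated Taylor series e^A = sum_k A^k / k! for a 2x2 matrix A."""
    S = [[1.0, 0.0], [0.0, 1.0]]   # running sum, initialised to I
    T = [[1.0, 0.0], [0.0, 1.0]]   # current term A^k / k!
    for k in range(1, terms):
        T = mat_mul(T, A)
        T = [[T[i][j] / k for j in range(2)] for i in range(2)]
        S = [[S[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return S

# For A = [[0, 1], [-1, 0]], e^{tA} is a rotation by angle t:
# e^{tA} = [[cos t, sin t], [-sin t, cos t]].  Check at t = 1:
A = [[0.0, 1.0], [-1.0, 0.0]]
E = expm_taylor(A)
print(E[0][0], math.cos(1.0))   # the two values agree closely
```

Thirty terms are ample here since the series remainder after k terms is bounded by ‖A‖^k / k!.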

Definition (Controllability). For each time t > 0, the fixed time t controllable set is defined by

    C(t) = { x_0 ∈ R^n : there exists u ∈ U with x(t) = 0 }.

The controllable set is defined by

    C = ⋃_{t>0} C(t).

If C = R^n, we will say that the system is controllable. We will alternatively say that (A, B) is controllable in this case.

The system is hence controllable if it can be controlled to hit zero in a finite time, from any starting point. A useful characterization of controllability is given in terms of the controllability matrix G(A, B) ∈ R^{n×mn} of (1). This is defined by

    G(A, B) = ( B, AB, A^2 B, ..., A^{n−1} B ).

Theorem. (A, B) is controllable if and only if rank( G(A, B) ) = n.

A proof of this theorem may be found in, for example, [1].

(a) The Cayley-Hamilton theorem states that every square matrix satisfies its own characteristic polynomial. Using this, explain why

    rank( G(A, B), A^k B ) = rank( G(A, B) )

for any k ≥ n.

(b) Let n = m = 2.

(i) Let

    B = [ 1  0
          0  1 ].

Under what conditions on A is the system controllable?

(ii) Let

    B = [ 1  0
          0  0 ].

Under what conditions on A is the system controllable?

(c) Let n = 3, m = 1, and define A ∈ R^{3×3} by

    A = [ 0  1  0
          1  0  1
          0  1  0 ].

Consider the three cases

    B = (1, 0, 0)^T,  (0, 1, 0)^T,  and  (0, 0, 1)^T.

For which of these choices of B is the system controllable? In the case(s) where the system is not controllable, find the controllable set C.
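The rank condition in the theorem is straightforward to check numerically. The following sketch (Python for illustration; the double-integrator matrices are an example chosen here, not the matrices of parts (b)-(c)) builds G(A, B) for a single-input system (m = 1, so G is n×n) and computes its rank by Gaussian elimination.

```python
def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def rank(rows, tol=1e-9):
    """Rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in rows]
    r = 0
    for c in range(len(M[0])):
        # find the pivot in column c at or below row r
        piv = max(range(r, len(M)), key=lambda i: abs(M[i][c]), default=None)
        if piv is None or abs(M[piv][c]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def controllability_matrix(A, b):
    """Columns (b, Ab, ..., A^{n-1} b) for a single-input system (m = 1)."""
    n = len(A)
    cols = [b[:]]
    for _ in range(n - 1):
        cols.append(mat_vec(A, cols[-1]))
    # return as an n x n matrix whose j-th column is A^j b
    return [[cols[j][i] for j in range(n)] for i in range(n)]

# Double integrator x1' = x2, x2' = u (an example chosen for illustration):
A = [[0.0, 1.0], [0.0, 0.0]]
G1 = controllability_matrix(A, [0.0, 1.0])  # forcing x2: rank 2, controllable
G2 = controllability_matrix(A, [1.0, 0.0])  # forcing x1: rank 1, not controllable
print(rank(G1), rank(G2))   # prints 2 1
```

For m > 1 the same idea applies with blocks A^k B of width m in place of the columns A^k b.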

Problem 3. Observer (25 points)

Suppose that we wish to recover a signal x(t) that is known to satisfy the system

    dx/dt(t) = Ax(t),   x(0) = x_0,

but we do not know the initial condition x_0. We do not observe x(t) directly, but instead have observations given by y(t) = Hx(t) for some H ∈ R^{m×n}. We wish to use these observations to determine x(t) for sufficiently large t. To this end, we consider the system given by

    dz/dt(t) = Az(t) + B( y(t) − Hz(t) ),   z(0) = z_0    (2)

for some B ∈ R^{n×m}. This is of the form (1) with the closed-loop control u = y − Hz. The reasoning for this is that when z is close to x, so that y is close to Hz, the dynamics of z should be close to those of x. In particular, if z(t_0) = x(t_0) for some t_0, then z(t) = x(t) for all t ≥ t_0. In general we cannot guarantee that the positions of z and x will ever coincide, but we can instead aim for z(t) and x(t) to become arbitrarily close for large times t.

(a) Define the error e(t) = z(t) − x(t). Show that e(t) satisfies the equation

    de/dt(t) = (A − BH)e(t),   e(0) = z_0 − x_0,

and hence give a criterion that ensures e(t) → 0 as t → ∞ for any choice of z_0.

(b) If (A, B) is controllable, it is known that there exists an observation matrix H ∈ R^{m×n} such that e(t) → 0.

(i) Consider the cases from Problem 2(b)-(c) which were controllable. By hand, find observation matrices H such that e(t) → 0.

(ii) In the cases from Problem 2(b)-(c) where (A, B) was not controllable, can you find H ∈ R^{m×n} such that e(t) → 0 for any z_0? Investigate either numerically or by hand, choosing at least one example from each of 2(b) and 2(c) where controllability does not hold, and looking at the spectrum of A − BH for different choices of H.

Remark. Above we are given the matrices A, B, and use controllability to see that we can choose an observation matrix H that causes e(t) → 0. There is also a dual notion to controllability, called observability. If we are instead given the matrices A, H, and the pair (A, H) is observable, then it can be shown that we can choose a matrix B such that e(t) → 0.
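The criterion in part (a) can be explored numerically: once B and H are chosen so that every eigenvalue of A − BH has negative real part, the error decays for any initial error. A sketch (Python for illustration; the double-integrator example with H = [1 0] and gain B = (3, 2)^T is an assumption made here, not a case from the problem):

```python
import math

# Error dynamics de/dt = (A - B H) e for A = [[0,1],[0,0]], observing
# y = x_1 (H = [1, 0]) with gain B = [3, 2]^T (choices made here for
# illustration).  Then A - B H = [[-3, 1], [-2, 0]], whose eigenvalues
# solve lambda^2 + 3 lambda + 2 = 0, i.e. -1 and -2: the error decays.

M = [[-3.0, 1.0], [-2.0, 0.0]]

def rhs(e):
    return [M[0][0]*e[0] + M[0][1]*e[1], M[1][0]*e[0] + M[1][1]*e[1]]

def rk4_step(f, x, h):
    k1 = f(x)
    k2 = f([xi + 0.5*h*ki for xi, ki in zip(x, k1)])
    k3 = f([xi + 0.5*h*ki for xi, ki in zip(x, k2)])
    k4 = f([xi + h*ki for xi, ki in zip(x, k3)])
    return [xi + (h/6.0)*(a + 2*b + 2*c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

e = [5.0, -4.0]          # an arbitrary initial error z_0 - x_0
norm0 = math.hypot(*e)
h = 0.01
for _ in range(1000):    # integrate to t = 10
    e = rk4_step(rhs, e, h)
print(math.hypot(*e) / norm0)   # the ratio is tiny: the error has decayed
```

Repeating the run with a gain that leaves an eigenvalue in the right half-plane shows the error growing instead, which is the situation part (b)(ii) asks about.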
Problem 4. Numerics (30 points)

(a) Implement the systems (1), (2) in MATLAB, for arbitrary A ∈ R^{n×n} and B ∈ R^{n×m}. You can use the forms of the solutions given in Problem 1, solve using one of MATLAB's built-in ODE solvers such as ode45, or implement a Runge-Kutta method directly.
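Whichever route is taken, it is worth cross-checking the implementation against a case with a closed-form solution. The sketch below (Python for illustration; the scalar example A = −1, B = 1, u ≡ 1 is chosen here because Problem 1(b)(ii)'s formula gives x(t) = 1 − e^{−t} exactly) compares a direct RK4 solve against that solution.

```python
import math

# RK4 solve of the scalar forced system dx/dt = A x + B u(t), with
# A = -1, B = 1, constant control u = 1, and x(0) = 0 (an example chosen
# here for its closed-form solution x(t) = 1 - exp(-t)).

A, B = -1.0, 1.0

def u(t):
    return 1.0

def f(t, x):
    return A * x + B * u(t)

def rk4_step(f, t, x, h):
    k1 = f(t, x)
    k2 = f(t + 0.5*h, x + 0.5*h*k1)
    k3 = f(t + 0.5*h, x + 0.5*h*k2)
    k4 = f(t + h, x + h*k3)
    return x + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

h = 0.01
x = 0.0
for i in range(100):            # integrate to t = 1
    x = rk4_step(f, i*h, x, h)

exact = 1.0 - math.exp(-1.0)
print(x, exact)                 # agree to many digits
```

The same check works in MATLAB by comparing the ode45 output against expm-based evaluation of the Problem 1 formula.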

(i) Choose an example from Problem 2(b) where you know that e(t) → 0 for any z_0. Fix x_0, and verify that this is the case by plotting (on the same axes) the trajectories of e(·) for a variety of choices of z_0.

(ii) Let n = m = 3. Define A ∈ R^{3×3} by

    A = diag(0.1, 1, 1)

and let B = I. Observe that rank( G(A, B) ) = 3, and so there exists an observation matrix H ∈ R^{3×3} that drives the error to zero. Fix x_0 = (1, 1, 0) and z_0 = (2, 2, 2). Consider the three observation matrices H given by

    H = diag(1, 1, 0),  diag(1, 0, 0),  and  diag(0, 0, 1).

For each H, plot the error ε(t) = ‖e(t)‖ = ‖x(t) − z(t)‖ for t ∈ [0, 5]. By considering the trajectories of x(·) and z(·), and the structures of A, B, H, explain the behavior of these errors.

(b) We now consider a nonlinear example. The Lorenz 63 model is defined as follows:

    dx_1/dt(t) = σ( x_2(t) − x_1(t) ),
    dx_2/dt(t) = x_1(t)( ρ − x_3(t) ) − x_2(t),
    dx_3/dt(t) = x_1(t) x_2(t) − β x_3(t),

where σ, ρ, β are scalar parameters and x_1(t), x_2(t), x_3(t) ∈ R for each t. In what follows we fix σ = 10, β = 8/3 and ρ = 28; it is known that with these choices the system is chaotic. We can write the above system more compactly as

    dx/dt(t) = F(x(t)),   x(0) = ( x_1(0), x_2(0), x_3(0) ),    (3)

where now F : R^3 → R^3 is nonlinear. Suppose that we have observations y(t) = Hx(t) for some H ∈ R^{3×3}, and analogously to the linear case (2), consider the controlled system

    dz/dt(t) = F(z(t)) + B( y(t) − Hz(t) ),   z(0) = ( z_1(0), z_2(0), z_3(0) ),    (4)

for some B ∈ R^{3×3}.
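The coupled pair (3)-(4) can be sketched as follows (Python for illustration, though the assignment asks for MATLAB; observing only the first component via H = diag(1, 0, 0), taking B = αI with α = 50, and the step size and initial conditions, are all choices made here, not values prescribed by the text).

```python
import math

# Illustrative sketch of the Lorenz 63 pair (3)-(4) with sigma = 10,
# rho = 28, beta = 8/3, observing only the first component
# (H = diag(1, 0, 0)) and B = alpha * I.  The values alpha = 50 and
# dt = 0.002 are choices made here for illustration.

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
ALPHA = 50.0

def F(x):
    x1, x2, x3 = x
    return (SIGMA * (x2 - x1),
            x1 * (RHO - x3) - x2,
            x1 * x2 - BETA * x3)

def rhs_pair(s):
    # State s = (x, z); the observer z is nudged toward the signal x
    # through the observed first component only: B(y - Hz).
    x, z = s[:3], s[3:]
    fx, fz = F(x), F(z)
    nudge = (ALPHA * (x[0] - z[0]), 0.0, 0.0)
    return fx + tuple(a + b for a, b in zip(fz, nudge))

def rk4_step(f, s, h):
    k1 = f(s)
    k2 = f(tuple(si + 0.5*h*ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5*h*ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + h*ki for si, ki in zip(s, k3)))
    return tuple(si + (h/6.0)*(a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

s = (1.0, 1.0, 1.0) + (1.5, 1.0, 1.0)   # x(0) and z(0), distance 0.5
h = 0.002
for _ in range(15000):                  # integrate to t = 30
    s = rk4_step(rhs_pair, s, h)
err = math.sqrt(sum((a - b)**2 for a, b in zip(s[:3], s[3:])))
print(err)   # small: the observer has locked on to the chaotic signal
```

Reducing α weakens the nudge; how small α can be while the error still decays is the experiment described below.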

(i) Implement both systems (3) and (4) in MATLAB, for arbitrary observation and control matrices H, B. Plot the solution to (3) for a number of initial conditions and t ∈ [0, 5]. Observe the sensitivity of the solution to the choice of initial condition: two distinct but arbitrarily close initial conditions can lead to very different trajectories.

(ii) We consider the case where we only observe one component of the solution, so that H ∈ R^{3×3} is given by

    H = diag(1, 0, 0),  diag(0, 1, 0),  or  diag(0, 0, 1).

Additionally we assume B = αI for some α > 0, where I is the identity matrix, and consider t ∈ [0, 5]. Fix x(0) and z(0), with ‖x(0) − z(0)‖ ≤ 1. Define the error ε(t) = ‖x(t) − z(t)‖. For each choice of H above, can you find α > 0 such that ε(t) → 0 for large t? If this is the case, how small can you take α for this to hold? Produce a plot of the error ε(t) for each choice of H, with ε(t) becoming small if possible.

References

[1] L. C. Evans. An Introduction to Mathematical Optimal Control Theory. 2005. https://math.berkeley.edu/~evans/control.course.pdf