ME8281 - Advanced Control Systems Design


ME8281 - Advanced Control Systems Design, Spring 2016. Perry Y. Li, Department of Mechanical Engineering, University of Minnesota.

Lecture 4 - Outline
1. Homework 1 to be posted by tonight.
2. Transition matrix for periodic A(t) = A(t + T).
3. Transition matrix for constant A:
   - Matrix exponential: expm(A(t - t_0))
   - Laplace transform
   - Eigen decomposition
4. Decomposition into system modes:
   - Algebraic and geometric meaning of the eigen decomposition
   - Time-varying eigenvalues, time-invariant eigenvectors
   - Jordan form
5. Zero-initial-state response (response to input).
6. Discrete time response.

Periodic A(t) = A(t + T)
1. Homework problem - hint.
2. For a periodic system with period T,
   Φ(t + T, t_0 + T) = Φ(t, t_0).   (Why?)
3. Floquet theory: with 0 ≤ τ_1, τ_0 < T,
   Φ(t_1, t_0) = Φ(τ_1, 0) Φ(T, 0)^k Φ(T, τ_0).
4. Hence Φ(t, t_0) for all (t, t_0) can be characterized quite easily by knowing Φ(·, ·) over a finite range of (t, t_0).
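
As a quick numerical illustration of these two facts (not part of the original slides), the sketch below integrates dΦ/dt = A(t)Φ for a hypothetical periodic A(t) and checks both the periodicity identity and the Floquet-style composition, under the reading t_0 = τ_0 and t_1 = (k + 1)T + τ_1:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical periodic system, used only for illustration: A(t) = A(t + T).
T = 2 * np.pi
def A(t):
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.5 * np.cos(t)), -0.2]])

def Phi(t1, t0):
    """Transition matrix Phi(t1, t0): integrate dPhi/dt = A(t) Phi with Phi(t0, t0) = I."""
    rhs = lambda t, p: (A(t) @ p.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (t0, t1), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

# Periodicity of the transition matrix: Phi(t + T, t0 + T) = Phi(t, t0)
print(np.allclose(Phi(1.0 + T, 0.3 + T), Phi(1.0, 0.3), atol=1e-6))

# Floquet-style composition: with t0 = tau0 and t1 = (k + 1) T + tau1,
# Phi(t1, t0) = Phi(tau1, 0) @ Phi(T, 0)^k @ Phi(T, tau0)
tau0, tau1, k = 0.7, 1.9, 2
t0, t1 = tau0, (k + 1) * T + tau1
composed = Phi(tau1, 0.0) @ np.linalg.matrix_power(Phi(T, 0.0), k) @ Phi(T, tau0)
print(np.allclose(Phi(t1, t0), composed, atol=1e-6))
```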

Constant A case
1. Matrix exponential method (Matlab: expm(A*t))
2. Laplace transform method
3. Eigen decomposition method

Laplace transform method
For constant A, taking Laplace transforms of ẋ = Ax gives sX(s) - x(0) = AX(s), so X(s) = (sI - A)^{-1} x(0), and hence
  exp(A(t - t_0)) = L^{-1}{(sI - A)^{-1}} evaluated at t - t_0.

Eigen decomposition
A v_i = λ_i v_i
1. (λ_i, v_i) - a pair of eigenvalue and eigenvector.
2. If v_i, i = 1, ..., n are independent (i.e. A is semi-simple), let T = [v_1, v_2, ..., v_n].
3. Show that AT = TΛ, so A = TΛT^{-1} and
   exp(A(t - t_0)) = T exp(Λ(t - t_0)) T^{-1}.
4. Note: exp(Λ(t - t_0)) is diagonal.
5. Similarly for other matrix functions: for semi-simple M = TΛT^{-1}, sin(M) = T sin(Λ) T^{-1}.
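
A minimal NumPy/SciPy sketch (the matrix is illustrative, not from the lecture) comparing the eigen-decomposition formula with a direct matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative semi-simple A (distinct eigenvalues, hence independent eigenvectors).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t = 1.5

lam, T = np.linalg.eig(A)               # columns of T are right eigenvectors v_i
Lam = np.diag(lam)

# A = T Lam T^{-1}  and  exp(A t) = T exp(Lam t) T^{-1}, with exp(Lam t) diagonal
assert np.allclose(A, T @ Lam @ np.linalg.inv(T))
Phi_eig = T @ np.diag(np.exp(lam * t)) @ np.linalg.inv(T)
print(np.allclose(Phi_eig, expm(A * t)))   # matches the matrix exponential
```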

Some terminology. A matrix A ∈ R^{n×n} is:
- Simple: if A has n distinct eigenvalues, λ_i ≠ λ_j for i ≠ j. This guarantees that A has n independent eigenvectors.
- Semi-simple: if A has n independent eigenvectors (but does not necessarily have n distinct eigenvalues). Example: A = Identity.
- Jordan form (not semi-simple): if A has repeated eigenvalues and does not have n independent eigenvectors. Example:
  A = [ 2 1
        0 2 ]

Modal Decomposition
ẋ = Ax + Bu. If A = TΛT^{-1} where Λ = diag(λ_1, ..., λ_n):
Coordinate transformation: let z be such that x = Tz. Then
  ẋ = Tż = TΛz + Bu  ⟹  ż = Λz + T^{-1}Bu  ⟹  ż_i = λ_i z_i + B̄_i u,
where B̄_i is the i-th row of B̄ = T^{-1}B. Note that the z_i are decoupled!
Solve the problem by:
1. z(t_0) = T^{-1} x(t_0);
2. Solve the scalar equations ż_i = λ_i z_i + B̄_i u for i = 1, ..., n;
3. x(t) = T z(t).
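
A sketch of this recipe for a constant, semi-simple A, with an illustrative A, B, and input; it solves the decoupled z equations and checks the result against directly integrating ẋ = Ax + Bu:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constant system with semi-simple A (not from the lecture).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
u = lambda t: np.array([np.sin(t)])      # hypothetical input

lam, T = np.linalg.eig(A)
Tinv = np.linalg.inv(T)
Bbar = Tinv @ B                          # modal input matrix

# Decoupled modal equations: z_i' = lam_i z_i + (Bbar u)_i, with z(t0) = T^{-1} x(t0).
# Cast to complex so the same code works when the eigenvalues are complex.
z0 = (Tinv @ x0).astype(complex)
zdot = lambda t, z: lam * z + (Bbar @ u(t))
zsol = solve_ivp(zdot, (0.0, 5.0), z0, rtol=1e-9, atol=1e-12)
x_modal = (T @ zsol.y[:, -1]).real       # transform back: x(t) = T z(t)

# Reference: integrate the coupled equation x' = A x + B u directly
xdot = lambda t, x: A @ x + (B @ u(t))
xsol = solve_ivp(xdot, (0.0, 5.0), x0, rtol=1e-9, atol=1e-12)
print(np.allclose(x_modal, xsol.y[:, -1], atol=1e-6))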

Dyadic Expansion
  A = Σ_{i=1}^{n} v_i w_i λ_i = Σ_{i=1}^{n} D_i λ_i,
where
  v_i = i-th column of T (right eigenvector),
  w_i = i-th row of T^{-1} (left eigenvector),
and D_i = v_i w_i. Then
  Φ(t, 0) = Σ_{i=1}^{n} D_i exp(λ_i t).
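
The dyadic expansion is easy to verify numerically; the sketch below (same illustrative A as above) builds D_i = v_i w_i and checks both A = Σ D_i λ_i and Φ(t, 0) = Σ D_i e^{λ_i t}:

```python
import numpy as np
from scipy.linalg import expm

# Dyadic expansion sketch for an illustrative semi-simple A.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
lam, T = np.linalg.eig(A)
W = np.linalg.inv(T)                     # rows of W are left eigenvectors w_i

# D_i = v_i w_i (outer product of right and left eigenvectors)
D = [np.outer(T[:, i], W[i, :]) for i in range(A.shape[0])]
assert np.allclose(A, sum(l * Di for l, Di in zip(lam, D)))

t = 0.7
Phi = sum(Di * np.exp(l * t) for l, Di in zip(lam, D))   # Phi(t, 0) = sum_i D_i e^{lam_i t}
print(np.allclose(Phi, expm(A * t)))
```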

Phase portrait
Plot the "flow" (state trajectory) {x(t) : t ≥ t_0} for various initial states x(t_0).
Plot the flow in z(t) coordinates first, and then transform back via x(t) = T z(t).
Several types - the characteristics are the same under coordinate transformation:
- Nodes (real eigenvalues, same signs)
- Saddle (real eigenvalues, different signs)
- Focus (complex eigenvalues)
What happens when the eigenvectors become very close to each other ... Jordan form (repeated eigenvalues but with only one eigenvector).
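
A small plotting sketch (illustrative matrix) of the flow for a stable node; substituting a matrix with real eigenvalues of opposite signs, or with complex eigenvalues, produces the saddle and focus portraits:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import expm

# Phase-portrait sketch: plot flows x(t) = expm(A t) x0 for several initial states.
# This A gives a stable node (real eigenvalues, same sign).
A = np.array([[-1.0, 0.0],
              [0.0, -3.0]])
ts = np.linspace(0.0, 4.0, 200)
for angle in np.linspace(0, 2 * np.pi, 12, endpoint=False):
    x0 = np.array([np.cos(angle), np.sin(angle)])
    traj = np.array([expm(A * t) @ x0 for t in ts])
    plt.plot(traj[:, 0], traj[:, 1])
plt.xlabel("x1"); plt.ylabel("x2"); plt.title("Stable node")
plt.show()
```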

General form of decomposition - 1
Repeated eigenvalues λ_1 = λ_2 = λ_3 = λ with only one eigenvector v_1:
Decompose into a Jordan block, A = TJT^{-1}, with
  J = [ λ 1 0
        0 λ 1
        0 0 λ ],   T = ( v_1, v_2, v_3 ).
Defining equations:
  (A - λI) v_1 = 0,
  (A - λI) v_2 = v_1, so (A - λI)^2 v_2 = 0,
  (A - λI) v_3 = v_2, so (A - λI)^3 v_3 = 0, etc.
Hence, solve for v_1, v_2, v_3 successively by finding the increasing null spaces of (A - λI)^k.
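
A sketch of this computation using sympy's exact Jordan decomposition on an illustrative matrix with a repeated eigenvalue; the growing nullities of (A - λI)^k mirror the "increasing null spaces" statement above:

```python
from sympy import Matrix, eye

# Illustrative matrix (not from the lecture): eigenvalue 2 repeated three times,
# but only two independent eigenvectors, so one Jordan block of size 2 appears.
A = Matrix([[3, 1, 0],
            [-1, 1, 0],
            [0, 0, 2]])
P, J = A.jordan_form()            # A = P J P^{-1}, with the Jordan blocks in J
print(J)
print(A == P * J * P.inv())       # exact check of the decomposition

# Increasing null spaces of (A - lambda*I)^k hold the generalized eigenvectors
lam = 2
for k in (1, 2):
    nullity = len(((A - lam * eye(3)) ** k).nullspace())
    print(k, nullity)             # nullity grows from 2 to 3
```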

General form of decomposition - 2
In general:
  A = ( T_1 T_2 T_3 ) [ J_1 0  0
                        0  J_2 0
                        0  0  J_3 ] ( T_1 T_2 T_3 )^{-1},
where each J_i is a Jordan block, which can have length 1.
Also, different blocks may have the same eigenvalue.
The union of the columns of the T_i associated with an eigenvalue spans the eigen-subspace (generalized eigenspace) for that eigenvalue.

Zero-initial state transition (effect of input)
Recall that
  s(t, t_0, x_0, u) = Φ(t, t_0) x_0 + s(t, t_0, 0_x, u).
We focus now on s(t, t_0, 0_x, u).

Heuristic guess - 1
Decompose the input into piecewise-constant pieces {u_i : R → R^m}, for i = ..., -2, -1, 0, 1, ...:
  u_i(t) = u(t_0 + h i)   for t_0 + h i ≤ t < t_0 + h (i + 1),
  u_i(t) = 0              otherwise,
where h > 0 is a small positive number. Let ū(t) = Σ_{i=-∞}^{∞} u_i(t). Intuitively, ū(t) → u(t) as h → 0.
By linearity of the transition map, s(t, t_0, 0, ū) = Σ_i s(t, t_0, 0, u_i).

Heuristic guess - 2. Response to u_i(·)
Step 1: t_0 ≤ t < t_0 + h i. Since u_i(τ) = 0 for τ ∈ [t_0, t_0 + h i) and x(t_0) = 0, we have x(t) = 0 for t_0 ≤ t < t_0 + h i.
Step 2: t ∈ [t_0 + h i, t_0 + h(i + 1)). The input is active:
  x(t) ≈ x(t_0 + h i) + [A(t_0 + h i) x(t_0 + h i) + B(t_0 + h i) u(t_0 + h i)] ΔT
       = [B(t_0 + h i) u(t_0 + h i)] ΔT,
where ΔT = t - (t_0 + h i) and we used x(t_0 + h i) = 0.
Step 3: t ≥ t_0 + h (i + 1). The input is no longer active, u_i(t) = 0, so the state is again given by the zero-input transition map:
  x(t) ≈ Φ(t, t_0 + h(i + 1)) · h B(t_0 + h i) u(t_0 + h i),
where h B(t_0 + h i) u(t_0 + h i) ≈ x(t_0 + h(i + 1)) from Step 2 (with ΔT = h).

Heuristic guess - 3
Since Φ(t, τ) is continuous in τ, making the approximation Φ(t, t_0 + h(i + 1)) ≈ Φ(t, t_0 + h i) only introduces a second-order error in h. Hence,
  s(t, t_0, 0, u_i) ≈ Φ(t, t_0 + h i) B(t_0 + h i) u(t_0 + h i) h.
The total zero-state transition due to the input u(·) is therefore
  s(t, t_0, 0, u) ≈ Σ_{i=0}^{(t - t_0)/h} Φ(t, t_0 + h i) B(t_0 + h i) u(t_0 + h i) h.
As h → 0, the sum becomes an integral, so that
  s(t, t_0, 0, u) = ∫_{t_0}^{t} Φ(t, τ) B(τ) u(τ) dτ.   (4)
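
A numerical sanity check of this limit (illustrative constant A, B, and input): the Riemann sum above, with the step h made small, approaches the state obtained by integrating ẋ = Ax + Bu from x(t_0) = 0:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Illustrative constant A, B and input (not from the lecture).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
u = lambda tau: np.array([np.sin(tau)])
t0, t = 0.0, 3.0

def riemann_sum(h):
    """sum_i Phi(t, t0 + h i) B u(t0 + h i) h, with Phi(t, tau) = expm(A (t - tau))."""
    taus = np.arange(t0, t, h)
    return sum(expm(A * (t - tau)) @ B @ u(tau) * h for tau in taus)

# Reference: integrate xdot = A x + B u from x(t0) = 0 directly.
ref = solve_ivp(lambda tt, x: A @ x + B @ u(tt), (t0, t),
                np.zeros(2), rtol=1e-10, atol=1e-12).y[:, -1]

for h in (0.1, 0.01, 0.001):
    print(h, np.linalg.norm(riemann_sum(h) - ref))   # error shrinks with h
```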

Formal proof
It suffices to show that z(t) = ∫_{t_0}^{t} Φ(t, τ) B(τ) u(τ) dτ satisfies ż = A(t)z + B(t)u(t) and z(t_0) = 0.
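
One way to carry out this check (a step not spelled out on the slide) uses the Leibniz rule together with ∂Φ(t, τ)/∂t = A(t)Φ(t, τ) and Φ(t, t) = I:
  ż(t) = Φ(t, t) B(t) u(t) + ∫_{t_0}^{t} ∂Φ(t, τ)/∂t B(τ) u(τ) dτ
       = B(t) u(t) + A(t) ∫_{t_0}^{t} Φ(t, τ) B(τ) u(τ) dτ
       = A(t) z(t) + B(t) u(t),
and z(t_0) = 0 since the integral is over an empty interval.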

Discrete time system transition map
x(k + 1) = A(k) x(k) + B(k) u(k)
- Existence and uniqueness of solutions only in the forward time direction (unless A(·) is invertible).
- Linearity in (x_0, u(·)):
  s(k_1, k_0, α x_a + β x_b, α u_a(·) + β u_b(·)) = α s(k_1, k_0, x_a, u_a(·)) + β s(k_1, k_0, x_b, u_b(·)).
- Decomposition into zero-input and zero-initial-state transitions:
  s(k_1, k_0, x_0, u(·)) = s(k_1, k_0, x_0, 0_u)  [zero-input]  +  s(k_1, k_0, 0_x, u(·))  [zero-initial-state].
- Linearity of the zero-input transition map:
  s(k_1, k_0, x_0, 0_u) = Φ(k_1, k_0) x_0.

Discrete time transition matrix
Matrix difference equations:
  Φ(k + 1, k_0) = A(k) Φ(k, k_0),
  Φ(k_1, k - 1) = Φ(k_1, k) A(k - 1),
  Φ(k_1, k_0) = A(k_1 - 1) A(k_1 - 2) ... A(k_0).
Semi-group property: for k_0 ≤ k_1 ≤ k_2,
  Φ(k_2, k_0) = Φ(k_2, k_1) Φ(k_1, k_0).
Invertibility of Φ(k_1, k_0)? Only if A(k) is invertible for all k ∈ [k_0, k_1 - 1].
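
A short sketch (illustrative, hypothetical A(k)) that builds Φ(k_1, k_0) as the product A(k_1 - 1) ... A(k_0) and checks the semi-group property:

```python
import numpy as np

# Discrete-time transition matrix as a product: Phi(k1, k0) = A(k1-1) A(k1-2) ... A(k0).
# A(k) here is an arbitrary illustrative time-varying sequence.
def A(k):
    return np.array([[1.0, 0.1 * k],
                     [0.0, 0.9]])

def Phi(k1, k0):
    P = np.eye(2)
    for j in range(k0, k1):          # left-multiply the newest A(j): A(k1-1)...A(k0)
        P = A(j) @ P
    return P

# Semi-group property: Phi(k2, k0) = Phi(k2, k1) Phi(k1, k0) for k0 <= k1 <= k2
k0, k1, k2 = 1, 4, 7
print(np.allclose(Phi(k2, k0), Phi(k2, k1) @ Phi(k1, k0)))
```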

Discrete time - zero-initial state response
x(k) = A(k - 1) x(k - 1) + B(k - 1) u(k - 1)
     = A(k - 1) A(k - 2) x(k - 2) + A(k - 1) B(k - 2) u(k - 2) + B(k - 1) u(k - 1)
     = A(k - 1) A(k - 2) ... A(k_0) x(k_0) + Σ_{i=k_0}^{k-1} [ Π_{j=i+1}^{k-1} A(j) ] B(i) u(i).
Thus, since x(k_0) = 0 for the zero-initial-state response:
  s(k, k_0, 0_x, u) = Σ_{i=k_0}^{k-1} [ Π_{j=i+1}^{k-1} A(j) ] B(i) u(i) = Σ_{i=k_0}^{k-1} Φ(k, i + 1) B(i) u(i).
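
A sketch of this sum for the same illustrative A(k) as above, with a hypothetical B(k) and u(k), checked against simply running the recursion x(k + 1) = A(k)x(k) + B(k)u(k) from x(k_0) = 0:

```python
import numpy as np

# Zero-initial-state response: s(k, k0, 0_x, u) = sum_{i=k0}^{k-1} Phi(k, i+1) B(i) u(i).
def A(k):
    return np.array([[1.0, 0.1 * k],
                     [0.0, 0.9]])
B = lambda k: np.array([[0.0], [1.0]])
u = lambda k: np.array([np.sin(0.3 * k)])

def Phi(k1, k0):
    P = np.eye(2)
    for j in range(k0, k1):
        P = A(j) @ P
    return P

k0, k = 0, 10
zero_state = sum(Phi(k, i + 1) @ B(i) @ u(i) for i in range(k0, k))

# Check against running the recursion x(k+1) = A(k)x(k) + B(k)u(k) from x(k0) = 0
x = np.zeros(2)
for i in range(k0, k):
    x = A(i) @ x + B(i) @ u(i)
print(np.allclose(zero_state, x))
```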

Summary - Lecture 4
- Transition matrix for periodic A(t) = A(t + T) and for constant A.
- Eigen decomposition leads to modal decomposition: for a constant A matrix, the eigen decomposition decouples the system into simpler (scalar) systems.
- Geometric meaning of eigenvalues and eigenvectors.
- Generalized decomposition (allowing for Jordan blocks); eigenvectors become eigen-subspaces.
- The response due to inputs (zero-initial-state response) is a convolution.
- Discrete-time system response is similar to the continuous-time case, except that uniqueness is guaranteed only in forward time (unless the A(k) are invertible).