ME8281 - Advanced Control Systems Design, Spring 2016
Perry Y. Li
Department of Mechanical Engineering, University of Minnesota
Lecture 4 - Outline
1. Homework 1 to be posted by tonight
2. Transition matrix for periodic A(t) = A(t + T)
3. ... and for constant A:
   - Matrix exponential: expm(A(t - t0))
   - Laplace transform
   - Eigen decomposition
4. Decomposition into system modes
   - Algebraic and geometric meaning of eigen-decomposition
   - Time-varying eigen values, time-invariant eigen vectors
   - Jordan form
5. Zero-initial state response (response to input)
6. Discrete time response
Periodic A(t) = A(t + T)
1. Homework problem - Hint.
2. For a periodic system with period T:
   Phi(t + T, t0 + T) = Phi(t, t0). Why?
3. Floquet theory: writing t0 = tau0 and t1 = (k+1)T + tau1 with 0 <= tau1, tau0 < T,
   Phi(t1, t0) = Phi(tau1, 0) Phi(T, 0)^k Phi(T, tau0)
4. Hence Phi(t, t0) for all (t, t0) can be characterized quite easily by knowing Phi(., .) over a finite range of (t, t0).
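The Floquet property Phi(t + T, t0 + T) = Phi(t, t0) can be checked numerically. A minimal sketch, using a hypothetical periodic A(t) (not from the lecture) and integrating the matrix differential equation dPhi/dt = A(t) Phi with scipy:

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi  # period of the hypothetical system below

def A(t):
    # hypothetical periodic system matrix, A(t) = A(t + T)
    return np.array([[0.0, 1.0],
                     [-1.0, -(2.0 + np.sin(t))]])

def transition(t1, t0):
    """Integrate dPhi/dt = A(t) Phi with Phi(t0, t0) = I."""
    def rhs(t, phi_flat):
        return (A(t) @ phi_flat.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (t0, t1), np.eye(2).ravel(),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

t0, dt = 0.3, 1.7
P1 = transition(t0 + dt, t0)
P2 = transition(t0 + T + dt, t0 + T)   # same window, shifted by one period
print(np.allclose(P1, P2, atol=1e-6))  # True: Phi(t+T, t0+T) = Phi(t, t0)
```

Shifting both arguments by one period leaves the transition matrix unchanged, which is exactly why knowing Phi over one period suffices.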
Constant A case
1. Matrix exponential method (Matlab: expm(A*t))
2. Laplace transform method
3. Eigen decomposition method
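The first and third methods can be compared directly. A minimal sketch, using a hypothetical 2x2 matrix A (chosen here for illustration, not from the lecture):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])  # hypothetical example; eigen values -1, -2
t = 0.5

# Method 1: matrix exponential (Python analogue of Matlab's expm(A*t))
Phi_expm = expm(A * t)

# Method 3: eigen decomposition A = T Lambda T^{-1}
lam, T = np.linalg.eig(A)
Phi_eig = (T @ np.diag(np.exp(lam * t)) @ np.linalg.inv(T)).real

print(np.allclose(Phi_expm, Phi_eig))  # True: both give exp(A t)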
Laplace transform method: exp(A(t - t0)) is the inverse Laplace transform of the resolvent, exp(At) = L^{-1}{(sI - A)^{-1}}.
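The Laplace method can be carried out symbolically: invert (sI - A), then take the inverse Laplace transform entry by entry. A sketch with sympy, using the same hypothetical matrix A as above (eigen values -1 and -2; the matrix is an assumption for illustration):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
A = sp.Matrix([[0, 1], [-2, -3]])  # hypothetical example

# resolvent (sI - A)^{-1}, then entrywise inverse Laplace transform
resolvent = (s * sp.eye(2) - A).inv()
Phi = resolvent.applyfunc(lambda e: sp.inverse_laplace_transform(e, s, t))
Phi = sp.simplify(Phi)
print(Phi[0, 0])  # a combination of exp(-t) and exp(-2*t) terms
```

Each entry of exp(At) comes out as a combination of the modes exp(-t) and exp(-2t), matching the eigen values of A.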
Eigen decomposition
A v_i = lambda_i v_i
1. (lambda_i, v_i) - a pair of eigen value and eigen vector
2. If v_i, i = 1, ..., n are independent (i.e. A is semi-simple), let T = [v_1, v_2, ..., v_n].
3. Show that A T = T Lambda, so that
   A = T Lambda T^{-1}   and   exp(A(t - t0)) = T exp(Lambda(t - t0)) T^{-1}
4. Note: exp(Lambda(t - t0)) is diagonal.
5. Similarly for other matrix functions: for semi-simple M = T Lambda T^{-1}, sin(M) = T sin(Lambda) T^{-1}
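Point 5 is easy to check numerically: apply sin to the eigen values and conjugate by T. A minimal sketch with a hypothetical symmetric (hence semi-simple) matrix M, comparing against scipy's matrix sine:

```python
import numpy as np
from scipy.linalg import sinm

# hypothetical semi-simple example: symmetric M = T Lambda T^{-1}
M = np.array([[1.0, 2.0],
              [2.0, 1.0]])
lam, T = np.linalg.eig(M)

# sin(M) = T sin(Lambda) T^{-1}, with sin applied to the diagonal only
sin_eig = T @ np.diag(np.sin(lam)) @ np.linalg.inv(T)

print(np.allclose(sin_eig, sinm(M)))  # True
```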
Some terminology
A matrix A in R^{n x n} is:
- Simple: if A has n distinct eigen values, lambda_i != lambda_j for i != j. This guarantees that A has n independent eigen vectors.
- Semi-simple: if A has n independent eigen vectors (but does not necessarily have n distinct eigen values). Example: A = Identity.
- Jordan form: if A has repeated eigen values and does not have n independent eigen vectors. Example:
  A = [2 1; 0 2]
Modal Decomposition
xdot = Ax + Bu. If A = T Lambda T^{-1} where Lambda = diag(lambda_1, ..., lambda_n), ...
Coordinate transformation: Let z be such that x = Tz. Then
  xdot = T zdot = A T z + Bu   =>   zdot = Lambda z + T^{-1} B u   =>   zdot_i = lambda_i z_i + (T^{-1}B)_i u
Note: the z_i are decoupled!!
Solve problem by:
1. z(t0) = T^{-1} x(t0);
2. Solve scalar eqns: zdot_i = lambda_i z_i + (T^{-1}B)_i u for i = 1, ..., n.
3. x(t) = T z(t).
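The three steps above can be sketched numerically. Assuming a hypothetical LTI system with constant input u (so each scalar mode has the closed form z_i(t) = e^{lambda_i t} z_i(0) + (e^{lambda_i t} - 1)/lambda_i * b_i u, valid for lambda_i != 0), and comparing against a direct expm-based solution:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # hypothetical example
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])
u = 1.0   # constant input, for a closed-form scalar solution
t = 0.8

# Step 1: transform initial condition into modal coordinates
lam, T = np.linalg.eig(A)
Tinv = np.linalg.inv(T)
z0 = Tinv @ x0
b = Tinv @ B

# Step 2: solve each decoupled scalar equation zdot_i = lam_i z_i + b_i u
z = (np.exp(lam * t)[:, None] * z0
     + ((np.exp(lam * t) - 1) / lam)[:, None] * b * u)

# Step 3: transform back
x_modal = (T @ z).real

# reference: x(t) = e^{At} x0 + A^{-1}(e^{At} - I) B u (A invertible, u constant)
x_ref = expm(A * t) @ x0 + np.linalg.inv(A) @ (expm(A * t) - np.eye(2)) @ B * u
print(np.allclose(x_modal, x_ref))  # True
```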
Dyadic Expansion
  A = sum_{i=1}^{n} lambda_i v_i w_i = sum_{i=1}^{n} lambda_i D_i
where
  v_i = i-th column of T (right eigen vector)
  w_i = i-th row of T^{-1} (left eigen vector)
and D_i = v_i w_i. Then
  Phi(t, 0) = sum_{i=1}^{n} D_i exp(lambda_i t)
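Both identities can be verified numerically: build the dyads D_i = v_i w_i from the right and left eigen vectors and sum them. A sketch with the same hypothetical 2x2 example used earlier:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # hypothetical example
lam, T = np.linalg.eig(A)
W = np.linalg.inv(T)  # rows of T^{-1} are the left eigen vectors w_i

# dyads D_i = v_i w_i (outer products)
D = [np.outer(T[:, i], W[i, :]) for i in range(2)]

A_rebuilt = sum(lam[i] * D[i] for i in range(2))       # sum lambda_i D_i
t = 0.7
Phi = sum(np.exp(lam[i] * t) * D[i] for i in range(2)) # sum D_i exp(lambda_i t)

print(np.allclose(A_rebuilt, A), np.allclose(Phi, expm(A * t)))  # True True
```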
Phase portrait
Plot the "flow" (state trajectory) {x(t) : t >= t0} for various initial states x(t0).
Plot the flow in z(t) coordinates first, and then transform back via x(t) = T z(t).
Several types - the characteristics are the same under coordinate transformation:
- Nodes (real eigen values, same signs)
- Saddle (real eigen values, different signs)
- Focus (imaginary eigen values)
What happens when the eigen vectors become very close to each other... Jordan form (repeated eigen values but with only 1 eigen vector)
General form of decomposition - 1
Repeated eigen values lambda_1 = lambda_2 = lambda_3 = lambda with only one eigen vector v_1.
Decompose into a Jordan block: A = T J T^{-1} with
  J = [lambda 1 0; 0 lambda 1; 0 0 lambda],   T = (v_1, v_2, v_3)
Defining equations:
  0 = (A - lambda I) v_1
  v_1 = (A - lambda I) v_2   =>   (A - lambda I)^2 v_2 = 0
  v_2 = (A - lambda I) v_3   =>   (A - lambda I)^3 v_3 = 0, etc.
Hence, solve for v_1, v_2, v_3 successively by finding the increasing null spaces of (A - lambda I)^k.
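The "increasing null spaces" can be seen concretely. A sketch with a hypothetical 3x3 matrix having lambda = 1 repeated three times and a single eigen vector, using sympy's Jordan decomposition:

```python
import sympy as sp

# hypothetical example: eigen value 1 repeated three times, one eigen vector
A = sp.Matrix([[1, 1, 0],
               [0, 1, 1],
               [0, 0, 1]])

# A = T J T^{-1} with a single 3x3 Jordan block
T, J = A.jordan_form()
assert sp.simplify(T * J * T.inv() - A) == sp.zeros(3, 3)

# null spaces of (A - lambda I)^k grow by one dimension per power
lam = 1
N = A - lam * sp.eye(3)
dims = [len((N**k).nullspace()) for k in (1, 2, 3)]
print(dims)  # [1, 2, 3]: one eigen vector, then the generalized eigen vectors
```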
General form of decomposition - 2
In general:
  A = (T_1 T_2 T_3) [J_1 0 0; 0 J_2 0; 0 0 J_3] (T_1 T_2 T_3)^{-1}
where each J_i is a Jordan block, which can have length 1.
Also, different blocks may have the same eigen value.
The union of the columns of the T_i associated with a given eigen value spans the eigen-subspace for that eigen value.
Zero-initial state transition (effect of input)
Recall that:
  s(t, t0, x0, u) = Phi(t, t0) x0 + s(t, t0, 0_x, u)
We focus now on s(t, t0, 0_x, u).
Heuristic guess - 1
Decompose the input into piecewise continuous parts {u_i : R -> R^m} for i = ..., -2, -1, 0, 1, ...:
  u_i(t) = u(t0 + h i)   if t0 + h i <= t < t0 + h (i + 1)
  u_i(t) = 0             otherwise
where h > 0 is a small positive number. Intuitively, ubar(t) := sum_{i=-inf}^{inf} u_i(t) -> u(t) as h -> 0.
By linearity of the transition map,
  s(t, t0, 0, ubar) = sum_i s(t, t0, 0, u_i).
Heuristic guess - 2: Response to u_i(.)
Step 1: t0 <= t < t0 + h i. Since u_i(tau) = 0 for tau in [t0, t0 + h i) and x(t0) = 0, we have x(t) = 0 for t0 <= t < t0 + h i.
Step 2: t in [t0 + h i, t0 + h (i + 1)). The input is active:
  x(t) ~ x(t0 + h i) + [A(t0 + h i) x(t0 + h i) + B(t0 + h i) u(t0 + h i)] dT
       = [B(t0 + h i) u(t0 + h i)] dT
where dT = t - (t0 + h i) and we used x(t0 + h i) = 0.
Step 3: t >= t0 + h (i + 1). The input is no longer active, u_i(t) = 0, so the state is again given by the zero-input transition map:
  x(t) ~ Phi(t, t0 + h (i + 1)) x(t0 + h (i + 1)),   x(t0 + h (i + 1)) ~ h B(t0 + h i) u(t0 + h i)
Heuristic guess - 3
Since Phi(t, .) is continuous, the approximation Phi(t, t0 + h (i + 1)) ~ Phi(t, t0 + h i) introduces only a second order error in h. Hence,
  s(t, t0, 0, u_i) ~ h Phi(t, t0 + h i) B(t0 + h i) u(t0 + h i).
The total zero-state transition due to the input u(.) is therefore given by:
  s(t, t0, 0, u) ~ sum_{i=0}^{(t - t0)/h} Phi(t, t0 + h i) B(t0 + h i) u(t0 + h i) h
As h -> 0, the sum becomes an integral, so that:
  s(t, t0, 0, u) = int_{t0}^{t} Phi(t, tau) B(tau) u(tau) dtau.   (4)
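The limiting argument above can be illustrated numerically: for an LTI system (so Phi(t, tau) = e^{A(t - tau)}), the Riemann sum should approach the convolution integral as h shrinks. A sketch with a hypothetical A, B, and input (all assumptions for illustration):

```python
import numpy as np
from scipy.linalg import expm

# hypothetical LTI example, so Phi(t, tau) = exp(A (t - tau))
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
u = lambda tau: np.array([[np.sin(tau)]])  # hypothetical input
t0, t = 0.0, 2.0

def riemann(h):
    """Riemann sum approximation of int Phi(t, tau) B u(tau) dtau."""
    n = int((t - t0) / h)
    s = np.zeros((2, 1))
    for i in range(n):
        ti = t0 + h * i
        s += expm(A * (t - ti)) @ B @ u(ti) * h
    return s

ref = riemann(1e-4)            # fine grid as a stand-in for the integral
e1 = np.linalg.norm(riemann(0.1) - ref)
e2 = np.linalg.norm(riemann(0.05) - ref)
print(e2 < e1)  # error shrinks with h: the sum converges to the integral
```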
Formal proof
Showing that z(t) = int_{t0}^{t} Phi(t, tau) B(tau) u(tau) dtau satisfies zdot = A(t) z + B(t) u(t) and z(t0) = 0 will do.
Discrete time system transition map
  x(k + 1) = A(k) x(k) + B(k) u(k)
- Existence and uniqueness of solutions only in the forward time direction (unless A(.) is invertible).
- This leads to linearity in (x0, u(.)):
  s(k1, k0, alpha x_a + beta x_b, alpha u_a(.) + beta u_b(.)) = alpha s(k1, k0, x_a, u_a(.)) + beta s(k1, k0, x_b, u_b(.))
- Decomposition into zero-input and zero-initial-state transitions:
  s(k1, k0, x0, u(.)) = s(k1, k0, x0, 0_u) [zero-input] + s(k1, k0, 0_x, u(.)) [zero-initial-state]
- Linearity of the zero-input transition map:
  s(k1, k0, x0, 0_u) = Phi(k1, k0) x0
Discrete time transition matrix
Matrix difference equations:
  Phi(k + 1, k0) = A(k) Phi(k, k0)
  Phi(k1, k - 1) = Phi(k1, k) A(k - 1)
  Phi(k1, k0) = A(k1 - 1) A(k1 - 2) ... A(k0)
Semi-group property: for k0 <= k1 <= k2:
  Phi(k2, k0) = Phi(k2, k1) Phi(k1, k0)
Invertibility of Phi(k1, k0)??? Only if A(k) is invertible for all k in [k0, k1 - 1].
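The product formula and the semi-group property are one-liners to check. A sketch with a hypothetical time-varying A(k):

```python
import numpy as np

# hypothetical time-varying system matrix
A = lambda k: np.array([[1.0, 0.1],
                        [0.0, 1.0 - 0.05 * k]])

def Phi(k1, k0):
    """Phi(k1, k0) = A(k1-1) A(k1-2) ... A(k0), built by the recursion
    Phi(k+1, k0) = A(k) Phi(k, k0)."""
    P = np.eye(2)
    for k in range(k0, k1):
        P = A(k) @ P
    return P

# semi-group property: Phi(k2, k0) = Phi(k2, k1) Phi(k1, k0)
print(np.allclose(Phi(7, 0), Phi(7, 3) @ Phi(3, 0)))  # True
```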
Discrete time - zero-initial state response
  x(k) = A(k - 1) x(k - 1) + B(k - 1) u(k - 1)
       = A(k - 1) A(k - 2) x(k - 2) + A(k - 1) B(k - 2) u(k - 2) + B(k - 1) u(k - 1)
       = A(k - 1) A(k - 2) ... A(k0) x(k0) + sum_{i=k0}^{k-1} [prod_{j=i+1}^{k-1} A(j)] B(i) u(i)
Thus, since x(k0) = 0 for the zero-initial state response:
  s(k, k0, 0_x, u) = sum_{i=k0}^{k-1} [prod_{j=i+1}^{k-1} A(j)] B(i) u(i) = sum_{i=k0}^{k-1} Phi(k, i + 1) B(i) u(i)
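The convolution sum can be checked against the direct recursion from x(k0) = 0. A sketch with hypothetical A(k), B(k), and u(k) (all assumptions for illustration):

```python
import numpy as np

# hypothetical discrete-time system and input
A = lambda k: np.array([[1.0, 0.1], [0.0, 0.9]])
B = lambda k: np.array([[0.0], [1.0]])
u = lambda k: np.array([[np.cos(0.3 * k)]])

def Phi(k1, k0):
    """Phi(k1, k0) = A(k1-1) ... A(k0)."""
    P = np.eye(2)
    for k in range(k0, k1):
        P = A(k) @ P
    return P

k0, kf = 0, 10

# direct recursion from x(k0) = 0
x = np.zeros((2, 1))
for k in range(k0, kf):
    x = A(k) @ x + B(k) @ u(k)

# convolution sum: s(k, k0, 0_x, u) = sum_i Phi(k, i+1) B(i) u(i)
s = sum(Phi(kf, i + 1) @ B(i) @ u(i) for i in range(k0, kf))

print(np.allclose(x, s))  # True: both give the zero-initial-state response
```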
Summary - Lecture 4
- Transition matrix for periodic A(t) = A(t + T) & constant A
- Eigen decomposition -> modal decomposition
- For a constant A matrix, eigen decomposition provides a decoupling of the system into simpler systems
- Geometric meaning of eigen values and eigen vectors
- Generalized decomposition (allowing for Jordan blocks): eigen vectors become eigen subspaces
- Response due to inputs (zero-initial state response) is a convolution
- Discrete time system responses are similar to continuous-time LDS, except: uniqueness is guaranteed only in forward time (unless the A(k) are invertible).