EE16B Designing Information Devices and Systems II Lecture 6B Controllability
Administration
Lecture slides (Miki Lustig): Alpha version will be posted a few days before lecture. Beta version will be posted the same day, before noon, and includes a 4-slides-per-page version for printing. Version 1.0 will be posted after students' corrections are integrated.
Lecture notes (Murat Arcak): Class Reader: http://inst.eecs.berkeley.edu/~ee16b/fa17/note/16breader.pdf
Today
Last time: Derived stability conditions for discrete- and continuous-time systems. These are easy to analyze using eigenvalues, and the eigenvalues predict system behavior: envelope (decay) and oscillation (frequency).
Today: Controllability of systems
Controllability
Discrete-time: x(t+1) = A x(t) + B u(t)
From last time:
x(t) = A^t x(0) + Σ_{k=0}^{t-1} A^{t-1-k} B u(k)
     = A^t x(0) + A^{t-1} B u(0) + A^{t-2} B u(1) + ... + B u(t-1)
so
x(t) − A^t x(0) = [ ? ] [u(0); u(1); ...; u(t-1)]
Controllability
x(t) − A^t x(0) = A^{t-1} B u(0) + A^{t-2} B u(1) + ... + B u(t-1)
x(t) − A^t x(0) = [A^{t-1}B  A^{t-2}B  ...  AB  B] [u(0); u(1); ...; u(t-1)]
                  (this matrix is R_t)
Q) Given any x(0), can we find inputs u(0), ..., u(t-1) s.t. x(t) = x_target for some t?
Controllability
R_t = [A^{t-1}B  A^{t-2}B  ...  AB  B]
Q) For t < n?
Q) At t = n: what if the columns are independent?
Q) If not independent, does increasing t help?
Cayley-Hamilton Theorem: If A is n x n, then A^n can be written as a linear combination of A^{n-1}, ..., A, I:
A^n = α_{n-1} A^{n-1} + ... + α_1 A + α_0 I
So also:
A^n B = α_{n-1} A^{n-1} B + ... + α_1 A B + α_0 B
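The Cayley-Hamilton identity on this slide can be checked numerically. The following sketch (using numpy, with a hypothetical matrix chosen only for illustration) recovers the characteristic-polynomial coefficients and verifies that A^n equals the stated combination of lower powers:

```python
import numpy as np

# Hypothetical 3x3 matrix, chosen only to illustrate the theorem.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [2.0, -3.0, 1.0]])
n = A.shape[0]

# np.poly gives the characteristic polynomial coefficients
# [1, c_{n-1}, ..., c_0] of det(sI - A).
c = np.poly(A)

# Cayley-Hamilton: A^n + c_{n-1} A^{n-1} + ... + c_0 I = 0,
# i.e. A^n = -(c_{n-1} A^{n-1} + ... + c_0 I).
An = np.linalg.matrix_power(A, n)
combo = -sum(c[k] * np.linalg.matrix_power(A, n - k) for k in range(1, n + 1))
print(np.allclose(An, combo))  # True
```

The same relation multiplied by B on the right is what lets A^n B be expressed through the columns already in R_n.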
Controllability
R_n = [A^{n-1}B  A^{n-2}B  ...  AB  B]
What about R_{n+1}?
R_{n+1} = [A^n B  A^{n-1}B  A^{n-2}B  ...  AB  B]
But A^n B = α_{n-1} A^{n-1} B + ... + α_1 A B + α_0 B, so the new columns are linear combinations of columns already in R_n and add nothing to the span.
Controllability Test
If R_t doesn't have n independent columns at t = n, it never will for t > n either! Therefore, we only need to examine R_n for controllability:
R_n = [A^{n-1}B  A^{n-2}B  ...  AB  B]
Conclusion: x(t+1) = A x(t) + B u(t) is controllable if and only if rank(R_n) = n
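The rank test is mechanical enough to code directly. A minimal numpy sketch (function names are my own, and the test system is an illustrative double integrator, not from the slides):

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [A^{n-1}B ... AB B] column-wise for an n-state system."""
    n = A.shape[0]
    blocks = [np.linalg.matrix_power(A, k) @ B for k in range(n - 1, -1, -1)]
    return np.hstack(blocks)

def is_controllable(A, B):
    """Rank test: controllable iff rank(R_n) = n."""
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# Illustrative system: discrete double integrator, single input.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
print(is_controllable(A, B))  # True
```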
Example 1:
State equations:
x1(t+1) = x1(t) + x2(t) + u(t)
x2(t+1) = 2 x2(t)   (not stable)
i.e. x(t+1) = A x(t) + B u(t) with
A = [1 1; 0 2],  B = [1; 0]
R_2 = [AB  B] = [1 1; 0 0]
rank(R_2) = 1 < n = 2  →  Not controllable!
We cannot control x2: not with u, and not through x1.
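The numbers on this slide can be reproduced in a few lines of numpy:

```python
import numpy as np

# Example 1: x1(t+1) = x1(t) + x2(t) + u(t),  x2(t+1) = 2 x2(t)
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
B = np.array([[1.0],
              [0.0]])

R2 = np.hstack([A @ B, B])           # [AB  B]; both columns equal [1; 0]
print(np.linalg.matrix_rank(R2))     # 1 < n = 2  ->  not controllable
```

The zero second row of R_2 is exactly the statement that no input sequence ever reaches x2.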
Example 2: a mass m pushed by a force u(t), with position p(t):
m d²p(t)/dt² = u(t)
State variables: p(t) and v(t), with dp(t)/dt = v(t):
d/dt [p(t); v(t)] = [0 1; 0 0] [p(t); v(t)] + [0; 1/m] u(t)
This is a continuous-time model! Let's use it as a stepping stone and convert it to discrete time.
Example 2: Let's convert it to discrete time, sampling with period T (at t = T, 2T, 3T, ...) and holding u(t) constant over each interval:
dp(t)/dt = v(t)
dv(t)/dt = u(t)/m
v(t+T) − v(t) = (1/m) ∫_t^{t+T} u(τ) dτ = (T/m) u(t)
p(t+T) − p(t) = T v(t) + (T²/2m) u(t)
See homework!
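The closed-form update can be sanity-checked against a brute-force integration of the continuous model with the input held constant over one period. A sketch with illustrative values of m, T, and u (mine, not from the slides):

```python
import numpy as np

m, T, u = 2.0, 0.1, 5.0            # illustrative mass, period, held input
p, v = 1.0, -0.5                   # arbitrary initial state

# Integrate p' = v, v' = u/m over [0, T] with tiny Euler steps
# (u constant, per the zero-order-hold assumption).
dt = 1e-6
pe, ve = p, v
for _ in range(round(T / dt)):
    pe += ve * dt
    ve += (u / m) * dt

# Closed-form discrete update from the slide:
pd = p + T * v + 0.5 * T**2 * u / m
vd = v + T * u / m
print(abs(pe - pd) < 1e-4, abs(ve - vd) < 1e-6)  # True True
```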
Example 2:
p(t+T) = p(t) + T v(t) + (T²/2m) u(t)
v(t+T) = v(t) + (T/m) u(t)
[p(t+T); v(t+T)] = [1 T; 0 1] [p(t); v(t)] + [T²/2m; T/m] u(t)
                       A                         B
R_2 = [AB  B] = [3T²/2m  T²/2m; T/m  T/m]
rank(R_2) = 2  →  Controllable!
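The rank-2 conclusion can be confirmed numerically; a short numpy sketch with illustrative values of T and m (chosen here, not given on the slides):

```python
import numpy as np

T, m = 0.1, 2.0                    # illustrative sampling period and mass
A = np.array([[1.0, T],
              [0.0, 1.0]])
B = np.array([[T**2 / (2 * m)],
              [T / m]])

R2 = np.hstack([A @ B, B])         # [AB  B]
print(np.linalg.matrix_rank(R2))   # 2  ->  controllable
```

The two columns differ only in the first row, but that difference is nonzero for any T > 0, which is why the discretized mass stays controllable.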
Example 2: We showed how to convert a simple continuous-time model to discrete time. It is not always this simple! In homework you will show that the continuous system is controllable.
Continuous Time (no derivation here)
The continuous-time system
d x(t)/dt = A x(t) + B u(t)
is controllable if and only if
R_n = [A^{n-1}B  A^{n-2}B  ...  AB  B]
has rank n.
Example 3 + Quiz
Write the state model
d/dt [x1(t); x2(t)] = A [x1(t); x2(t)] + B u(t)
for the circuit shown: a source u feeding a resistor R and two parallel inductors L1 and L2 carrying currents x1 and x2.
Quiz: From the circuit, V_r = R(u − x1 − x2) = L1 dx1/dt = L2 dx2/dt, so the state model is
d/dt [x1(t); x2(t)] = [−R/L1 −R/L1; −R/L2 −R/L2] [x1(t); x2(t)] + [R/L1; R/L2] u(t)
Example 3 Controllability:
B = [R/L1; R/L2]
AB = [−(R/L1)(R/L1 + R/L2); −(R/L2)(R/L1 + R/L2)] = −(R/L1 + R/L2) B
R = [AB  B]: since AB is a scalar multiple of B,
rank = 1  →  Not controllable
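A quick numerical check of this conclusion, with illustrative component values (mine, not from the slides):

```python
import numpy as np

R, L1, L2 = 10.0, 1e-3, 2e-3       # illustrative resistance and inductances
A = np.array([[-R / L1, -R / L1],
              [-R / L2, -R / L2]])
B = np.array([[R / L1],
              [R / L2]])

AB = A @ B
print(np.allclose(AB, -(R / L1 + R / L2) * B))   # True: AB is a multiple of B
R2 = np.hstack([AB, B])
print(np.linalg.matrix_rank(R2))                 # 1  ->  not controllable
```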
Physical explanation: Why can't we drive the currents x1 and x2 independently using u(t)?
L1 dx1/dt = L2 dx2/dt = R i_R = V_R
⇒  L1 dx1/dt − L2 dx2/dt = 0
⇒  d/dt (L1 x1 − L2 x2) = 0
⇒  L1 x1 − L2 x2 = const
L1 x1 − L2 x2 = const: in the (x1, x2) plane, given an initial condition, we can only move along this line by changing u(t).
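The invariance of L1 x1 − L2 x2 can also be seen in simulation. A minimal numpy sketch (component values, initial state, and input are all illustrative choices of mine) drives the state model with a randomly varying input and confirms the quantity does not move:

```python
import numpy as np

R, L1, L2 = 10.0, 1e-3, 2e-3       # illustrative component values
A = np.array([[-R / L1, -R / L1],
              [-R / L2, -R / L2]])
B = np.array([[R / L1], [R / L2]])

# Simulate x' = Ax + Bu with small Euler steps and a random input;
# L1*x1 - L2*x2 should stay at its initial value no matter what u does.
rng = np.random.default_rng(0)
x = np.array([[0.3], [0.1]])
invariant0 = L1 * x[0, 0] - L2 * x[1, 0]
dt = 1e-7
for _ in range(20000):
    u = rng.uniform(-1.0, 1.0)
    x = x + dt * (A @ x + B * u)

print(abs(L1 * x[0, 0] - L2 * x[1, 0] - invariant0) < 1e-9)  # True
```

The drift is zero (up to floating point) because both ẋ1 and ẋ2 are driven by the same voltage V_R, so their L-weighted difference cancels at every step.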
Q) What if A = 0? Can the system d x(t)/dt = B u(t) be controllable?
R_n = [0  0  ...  0  B]
A) Only if B itself has rank n, which requires u(t) to be a vector with at least as many components as there are states.
Summary
Described and derived conditions for controllability of linear state models: the rank of R_n, for both discrete- and continuous-time systems.
Showed how to discretize continuous systems.
Showed examples of controllable and uncontrollable systems.
Next time: Open-loop and state-feedback control. Controllers to make systems do what we want!