EE16B Designing Information Devices and Systems II Lecture 6B Controllability

Administration
Lecture slides (Miki Lustig): The alpha version will be posted a few days in advance. The beta version will be posted the same day, before noon, and includes a 4-slides-per-page layout for printing. Version 1.0 will be posted after students' corrections are integrated.
Lecture notes (Murat Arcak): Class Reader: http://inst.eecs.berkeley.edu/~ee16b/fa17/note/16breader.pdf

Today
Last time: derived stability conditions for discrete- and continuous-time systems. These are easy to analyze using eigenvalues, and the eigenvalues predict system behaviour: envelope (decay) and oscillation (frequency).
Today: controllability of systems.

Controllability
Discrete-time: x(t+1) = A x(t) + B u(t)
From last time:
  x(t) = A^t x(0) + sum_{k=0}^{t-1} A^(t-1-k) B u(k)
       = A^t x(0) + A^(t-1) B u(0) + A^(t-2) B u(1) + ... + B u(t-1)
so
  x(t) - A^t x(0) = [ ? ] [u(0); u(1); ...; u(t-1)]

Controllability
  x(t) - A^t x(0) = A^(t-1) B u(0) + A^(t-2) B u(1) + ... + B u(t-1)
  x(t) - A^t x(0) = R_t [u(0); u(1); ...; u(t-1)],   where   R_t = [A^(t-1)B  A^(t-2)B  ...  AB  B]
Q) Given any x(0), can we find an input sequence u(0), ..., u(t-1) such that x(t) = x_target, for some t?
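
As a numerical sketch of this idea (my own toy system, not one from the lecture), the code below builds R_t in numpy, solves for an input sequence u(0), ..., u(t-1) that drives the state to a target, and verifies it by simulating the recursion:

```python
import numpy as np

# Illustrative system (my own choice, not the lecture's): n = 2 states, scalar input
A = np.array([[1.0, 1.0],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
n = A.shape[0]

t = n                                 # horizon: t input samples
x0 = np.array([1.0, -1.0])
x_target = np.zeros(n)

# R_t = [A^(t-1)B  A^(t-2)B  ...  AB  B]
R_t = np.hstack([np.linalg.matrix_power(A, t - 1 - k) @ B for k in range(t)])

# Solve R_t [u(0), ..., u(t-1)]^T = x_target - A^t x0 in the least-squares sense
rhs = x_target - np.linalg.matrix_power(A, t) @ x0
u_seq, *_ = np.linalg.lstsq(R_t, rhs, rcond=None)

# Verify by simulating x(k+1) = A x(k) + B u(k)
x = x0.copy()
for k in range(t):
    x = A @ x + B[:, 0] * u_seq[k]
print("inputs:", u_seq)
print("final state:", x)              # matches x_target up to round-off
```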

Controllability
  R_t = [A^(t-1)B  A^(t-2)B  ...  AB  B]
Q) What happens for t < n?
Q) At t = n, what if the columns are independent?
Q) If they are not independent, does increasing t help?
Cayley-Hamilton Theorem: If A is n x n, then A^n can be written as a linear combination of A^(n-1), ..., A, I:
  A^n = α_(n-1) A^(n-1) + ... + α_1 A + α_0 I
and therefore so can A^n B:
  A^n B = α_(n-1) A^(n-1) B + ... + α_1 A B + α_0 B
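
A quick numerical sanity check of the Cayley-Hamilton claim (a sketch with an arbitrary 2x2 matrix of my choosing; the coefficients come from numpy's characteristic-polynomial routine, which carries a sign flip relative to the alphas above):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # arbitrary 2x2 example
n = A.shape[0]

# Coefficients of det(lambda*I - A) = lambda^n + c_1 lambda^(n-1) + ... + c_n
c = np.poly(A)                         # c = [1, c_1, ..., c_n]

# Cayley-Hamilton: A^n = -(c_1 A^(n-1) + ... + c_n I), i.e. alpha_i = -c_(n-i)
A_n = -sum(c[i] * np.linalg.matrix_power(A, n - i) for i in range(1, n + 1))
print(np.allclose(np.linalg.matrix_power(A, n), A_n))   # True
```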

Controllability
  R_n = [A^(n-1)B  A^(n-2)B  ...  AB  B]
What about R_(n+1)?
  R_(n+1) = [A^n B  A^(n-1)B  A^(n-2)B  ...  AB  B]
By Cayley-Hamilton, A^n B = α_(n-1) A^(n-1) B + ... + α_1 A B + α_0 B, so the new block A^n B is a linear combination of columns already present in R_n.

Controllability Test
If R_t does not have n independent columns at t = n, it never will for t > n either! Therefore, we only need to examine R_n for controllability:
  R_n = [A^(n-1)B  A^(n-2)B  ...  AB  B]
Conclusion: x(t+1) = A x(t) + B u(t) is controllable if and only if rank{R_n} = n.
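
This rank test is only a few lines of numpy. The helper names below are mine, a sketch rather than a library API:

```python
import numpy as np

def controllability_matrix(A, B):
    """R_n = [A^(n-1)B  ...  AB  B] for x(t+1) = A x(t) + B u(t)."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, n - 1 - k) @ B for k in range(n)])

def is_controllable(A, B):
    """Controllable iff rank(R_n) = n."""
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]
```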

Example 1:
State equations:
  x_1(t+1) = x_1(t) + x_2(t) + u(t)
  x_2(t+1) = 2 x_2(t)    (not stable)
In matrix form, x(t+1) = A x(t) + B u(t) with A = [1 1; 0 2] and B = [1; 0].
Controllability:
  R_2 = [AB  B] = [1 1; 0 0],   rank{R_2} = 1 < n = 2
  R_3 = [A^2 B  AB  B] = [1 1 1; 0 0 0],   still rank 1
Not controllable! We cannot control x_2: not with u, and not through x_1.
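
Checking Example 1 numerically (a small sketch with the A and B above):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
B = np.array([[1.0],
              [0.0]])

R2 = np.hstack([A @ B, B])               # [AB  B]
print(R2)                                 # [[1. 1.] [0. 0.]]
print(np.linalg.matrix_rank(R2))          # 1 < n = 2, so not controllable
```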

Example 2: a mass m at position p(t), pushed by a force u(t):
  m d^2 p(t)/dt^2 = u(t)
State variables p(t) and v(t), with dp(t)/dt = v(t):
  d/dt [p(t); v(t)] = [0 1; 0 0] [p(t); v(t)] + [0; 1/m] u(t)
This is a continuous-time model! Let's use it as a stepping stone and convert it to discrete time.

Example 2: Let's convert it to discrete time. The input u is held constant over each sample interval [t, t+T] (a piecewise-constant input), so integrating dv/dt = u(t)/m and dp/dt = v(t) over one interval gives
  v(t+T) - v(t) = integral_t^{t+T} u(tau)/m d(tau) = (T/m) u(t)
  p(t+T) - p(t) = T v(t) + (T^2 / (2m)) u(t)
See homework!

Example 2:
  p(t+T) = p(t) + T v(t) + (T^2 / (2m)) u(t)
  v(t+T) = v(t) + (T/m) u(t)
In matrix form, with A = [1 T; 0 1] and B = [T^2/(2m); T/m]:
  [p(t+T); v(t+T)] = A [p(t); v(t)] + B u(t)
Controllability:
  R_2 = [AB  B] = [3T^2/(2m)  T^2/(2m); T/m  T/m],   rank = 2: controllable!
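
The same rank check for the discretized mass (a sketch; the values of T and m are illustrative picks of mine):

```python
import numpy as np

T, m = 0.1, 1.0                           # sample period and mass (illustrative values)
A = np.array([[1.0, T],
              [0.0, 1.0]])
B = np.array([[T**2 / (2 * m)],
              [T / m]])

R2 = np.hstack([A @ B, B])                # [AB  B]
print(np.linalg.matrix_rank(R2))          # 2: controllable for any T > 0
```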

Example 2: We showed how to convert a simple continuous-time model to discrete time; it is not always this simple! In the homework you will show that the continuous-time system is controllable.

Continuous Time (no derivation here): The continuous-time system
  d/dt x(t) = A x(t) + B u(t)
is controllable if and only if
  R_n = [A^(n-1)B  A^(n-2)B  ...  AB  B]
has rank n.

Example 3 + Quiz: Write the state model
  d/dt [x_1(t); x_2(t)] = [ ? ] [x_1(t); x_2(t)] + [ ? ] u(t)
for the circuit shown: a current source u feeding a resistor R in parallel with two inductors L_1 and L_2, whose currents are x_1 and x_2.

Quiz: For that circuit, the resistor carries the current u - x_1 - x_2, and the voltage across it also appears across both inductors:
  V_R = R (u - x_1 - x_2) = L_1 dx_1/dt = L_2 dx_2/dt
so the state model is
  d/dt [x_1(t); x_2(t)] = [-R/L_1  -R/L_1; -R/L_2  -R/L_2] [x_1(t); x_2(t)] + [R/L_1; R/L_2] u(t)

Example 3, Controllability:
  B = [R/L_1; R/L_2]
  AB = [-(R/L_1)(R/L_1 + R/L_2); -(R/L_2)(R/L_1 + R/L_2)] = -(R/L_1 + R/L_2) B
  R_2 = [AB  B]
AB is just a scalar multiple of B, so rank{R_2} = 1. Not controllable!
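
Numerically, with illustrative values for R, L_1, and L_2 (a sketch, not part of the lecture):

```python
import numpy as np

R, L1, L2 = 1.0, 1e-3, 2e-3               # illustrative component values
A = np.array([[-R / L1, -R / L1],
              [-R / L2, -R / L2]])
B = np.array([[R / L1],
              [R / L2]])

print(np.allclose(A @ B, -(R / L1 + R / L2) * B))     # True: AB is a multiple of B
print(np.linalg.matrix_rank(np.hstack([A @ B, B])))   # 1: not controllable
```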

Physical explanation: Why can't I drive the currents x_1 and x_2 freely using u(t)?
  L_1 dx_1/dt = L_2 dx_2/dt = R i_R = V_R
  =>  L_1 dx_1/dt - L_2 dx_2/dt = 0
  =>  d/dt (L_1 x_1 - L_2 x_2) = 0
  =>  L_1 x_1 - L_2 x_2 = const

L_1 x_1 - L_2 x_2 = const
[Figure: this constraint is a line in the (x_1, x_2) plane.]
Given an initial condition, I can only move along that line by changing u.
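
The conserved quantity also shows up in simulation (a sketch using a crude forward-Euler integration and an arbitrary input; the step size and component values are my own picks):

```python
import numpy as np

R, L1, L2 = 1.0, 1e-3, 2e-3
A = np.array([[-R / L1, -R / L1],
              [-R / L2, -R / L2]])
B = np.array([R / L1, R / L2])

dt = 1e-7                                  # small step for forward Euler
x = np.array([0.3, -0.1])                  # arbitrary initial inductor currents
for k in range(10_000):
    u = np.sin(2 * np.pi * 1e3 * k * dt)   # arbitrary input waveform
    x = x + dt * (A @ x + B * u)
# L1*x1 - L2*x2 never changes, no matter what u does
print(L1 * x[0] - L2 * x[1])               # stays at 1e-3*0.3 - 2e-3*(-0.1) = 5e-4
```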

Q) What if A = 0? Can the system be controllable?
  d/dt x(t) = B u(t),   R_n = [0  0  ...  0  B]
A) Only if B itself has rank n, which requires u(t) to be a vector with at least as many elements as the number of states.
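
A small check of the A = 0 case (a sketch with n = 2): the rank of R_n then comes down to the rank of B alone.

```python
import numpy as np

n = 2
A = np.zeros((n, n))
B_one_input = np.array([[1.0], [0.0]])     # single input: rank [0  B] = 1 < n
B_n_inputs = np.eye(n)                     # n independent inputs: rank = n

for B in (B_one_input, B_n_inputs):
    Rn = np.hstack([np.linalg.matrix_power(A, n - 1 - k) @ B for k in range(n)])
    print(np.linalg.matrix_rank(Rn))       # prints 1, then 2
```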

Summary
Described and derived conditions for controllability of linear state models: the rank of R_n, for both discrete-time and continuous-time systems.
Showed how to discretize continuous-time systems.
Showed examples of controllable and uncontrollable systems.
Next time: open-loop and state feedback control, i.e. controllers that make systems do what we want!