ELEC4631 Lecture 2: Dynamic Control Systems (7 March 2011)

Topics:
- Overview of dynamic control systems
- Goals of controller design
- Autonomous dynamic systems
- Linear multi-input multi-output (MIMO) systems
- Bat flight example

Overview of dynamic control systems

On one hand, control is a branch of engineering science applied to almost any area of human endeavour: if it moves, it almost certainly could not do so without a control system. On the other hand, it is almost always invisible, only coming to attention when systems fail. In engineering the emphasis is on physical systems, but the methodology is widely applied elsewhere, for instance in macroeconomics, finance and actuarial studies.

The goal is to improve, or in some cases enable, the performance of a system by adding sensors, a control processor and actuators. The sensors measure or sense various signals in the system as well as operator commands; the signals might be transmitted as analog or digitally encoded electrical signals. The control processor processes the sensed signals and drives the actuators, which affect the behavior of the system. Control processors may be mechanical, pneumatic, hydraulic, analog electrical, or general-purpose or custom digital computers.

The system to be controlled might be a vehicle, a robot, a large electric power generation and distribution system, a computer disk drive, or an economic system. When the sensor signals can affect the system to be controlled (via the control processor and the actuators), the control system is called a feedback or closed-loop control system.

Examples: robot control

Humanoid robots can carry out many heavy or intelligent tasks, such as caring for a person (RIBA for nursing), playing music and sports (baseball, soccer), and talking to and understanding human beings (the Asimo robot). The system consists of robot arms and legs connected through the robot body. The dynamics (motions) of the arms and legs are controlled by hydraulic actuators (joint motors). There are many sensors to sense various signals, such as environmental conditions (for adaptive motion). The robot's actions are the result of the interaction between the robot and its environment.

Mobile robots can move and search in wide spaces under hazardous conditions (in the deep ocean, as a pathfinder on Mars, etc.). The system consists of several wheels connected through the robot body. The shape of a mobile robot depends on the kind of dynamic and static environmental conditions we expect. The wheel motions are controlled by actuators (engines, motors). There are many sensors and cameras to sense and examine the environmental conditions (obstacles, external forces such as wind). The robot motions are the result of the sensor feedback.

Cars are driven by the control commands of the driver. They are very complex systems, needing around 20 computers to coordinate and implement the driving tasks (from injecting petrol into the engine to accelerating or braking the car). There are many sensors in a car: for example, braking appropriately to avoid skidding requires sensing the road surface condition in order to compute the friction between the road and the tyre. One interest, of course, is car speed control, for which a speed sensor is needed. Usually the economical choice is to sense the angular velocity of the tyres, but if the wheel locks (i.e. the wheel is locked while the car is still moving) then such a sensor stops working. Another issue is how to use petrol efficiently, which concerns not only energy consumption but also emissions.

Aircraft are like cars but without wheels, and are also very complex systems.

Studies of dynamic control systems

System design and control configuration. The selection and placement of the actuators on the system to be controlled is a very important aspect. However, the control engineer is often handed an already designed system and starts with the control configuration.

Actuator selection and placement. Decide where to put actuators such as pumps, heaters, valves and other actuator hardware. Relevant characteristics include cost, power limits, and speed and accuracy of response.

Sensor selection and placement. Decide which signals in the system will be measured or sensed, and with what sensor hardware. For example, in an industrial process: which temperatures, flow rates, pressures and concentrations to sense. The relevant characteristics of the chosen sensors are important.

Modeling. Develop mathematical models of
- the system to be controlled
- noises or disturbances that may act on the system
- the commands the operator may issue
- desirable or required qualities of the final system
These models might be deterministic (e.g. differential equations and transfer functions) or stochastic/probabilistic (e.g. power spectral densities). Models are developed in several ways:
- Physical modeling: apply various laws of physics (e.g. Newton's equations, energy conservation, flow balance) to derive differential equations.
- Empirical modeling or identification: develop models from observed or collected data (a minimal sketch follows below).
Models involve a trade-off between complexity and fidelity. A simple model might capture some basic features and characteristics of the system, at the risk of inaccuracy. A complex model could be very detailed and accurate, but greatly complicates the design, simulation or analysis of the system.
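As an illustration of the identification idea, here is a minimal sketch assuming a discrete-time linear model and synthetic data (the matrices and noise level are made-up, not from the lecture): fit x[k+1] ≈ A x[k] to a recorded state trajectory by least squares.

import numpy as np

# Empirical modeling sketch: fit a discrete-time linear model x[k+1] ~ A_hat x[k]
# to a recorded state trajectory by least squares (synthetic data, made-up system).
rng = np.random.default_rng(0)

A_true = np.array([[0.95, 0.10],
                   [-0.10, 0.90]])       # the "unknown" system generating the data

N = 200
X = np.zeros((2, N))
X[:, 0] = [1.0, -0.5]
for k in range(N - 1):
    X[:, k + 1] = A_true @ X[:, k] + 0.01 * rng.standard_normal(2)

# Least squares: A_hat minimizes || X_next - A X_prev ||_F
X_prev, X_next = X[:, :-1], X[:, 1:]
A_hat = X_next @ np.linalg.pinv(X_prev)

print("identified model:\n", A_hat.round(3))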

Controller design (OUR TOPIC). The controller or control law describes the algorithm or signal processing used by the control processor to generate the actuator signals from the sensor and command signals it receives.
- PID (proportional-integral-derivative) controllers are the most widely and effectively used in many industries. There are only three parameters to tune empirically (studied in the previous course); a minimal loop is sketched after this list.
- Linear quadratic regulator (LQR) and linear quadratic Gaussian (LQG) controllers (estimated-state-feedback control; will be studied in this course).
- Steering and coordinating controllers (will be studied in this course).

System testing, validation and tuning: extensive computer simulation, real-time simulation, field tests.
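As a concrete illustration of the PID idea, here is a minimal sketch of a discrete PID loop driving a simple first-order plant. The plant, the gains Kp, Ki, Kd and the setpoint are illustrative choices, not values from the lecture.

# Minimal discrete PID loop on a first-order plant dx/dt = -x + u.
# Plant, gains and setpoint are illustrative, not tuned for any real application.
dt, T = 0.01, 5.0
Kp, Ki, Kd = 4.0, 2.0, 0.1        # the three parameters tuned empirically
setpoint = 1.0

x = 0.0                           # plant state, also the measured output here
integral, prev_error = 0.0, 0.0
for _ in range(int(T / dt)):
    error = setpoint - x
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = Kp * error + Ki * integral + Kd * derivative
    prev_error = error
    x += dt * (-x + u)            # forward-Euler step of the plant

print(f"output after {T:.0f} s: {x:.3f}")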

Goals of controller design

Performance specifications:
- good regulation against disturbances
- desirable responses to commands
- critical signals are not too big

Robustness specifications limit the change in performance of the system that can be caused by differences between the system and its model.
- Low differential sensitivities: the derivative of some closed-loop quantity with respect to some system parameter is small.
- Guaranteed margins: the system must be able to meet some performance specifications despite some specific set of perturbations.

There is a trade-off between system performance and control robustness.

Dynamical systems

A linear dynamical system has the form

    ẋ(t) = A(t) x(t)

where
- x(t) ∈ R^n is called the state
- n is the state dimension
- A(t) is a time-varying or time-invariant matrix.

A linear system can be analyzed through conditions on the matrix A(t); we will study this later.

Example 1: a mechanical system with k degrees of freedom undergoing small motions,

    M q̈ + D q̇ + K q = 0

where
- q(t) ∈ R^k is the vector of generalized displacements
- M is the mass matrix
- K is the stiffness matrix
- D is the damping matrix.

With state x = [q; q̇], the state equation is

    ẋ = [    0        I    ] x
        [ -M⁻¹K    -M⁻¹D  ]
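A minimal sketch of this conversion in code (the 2-DOF mass, damping and stiffness matrices below are made-up illustrative values, not from the lecture): build the block state matrix and propagate the state with the matrix exponential.

import numpy as np
from scipy.linalg import expm

# Build A = [[0, I], [-M^-1 K, -M^-1 D]] for  M q'' + D q' + K q = 0
# (illustrative 2-DOF matrices).
M = np.diag([1.0, 2.0])
K = np.array([[3.0, -1.0],
              [-1.0, 2.0]])
D = 0.1 * K                       # light proportional damping

k = M.shape[0]
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((k, k)), np.eye(k)],
              [-Minv @ K,        -Minv @ D]])

# Propagate the state x = [q; q'] for one second: x(t) = expm(A t) x(0)
x0 = np.array([0.1, 0.0, 0.0, 0.0])
print(expm(A * 1.0) @ x0)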

A general nonlinear dynamic system has the form

    ẋ(t) = f(x(t))

where x ∈ R^n is the system state and f : R^n → R^n is a nonlinear map. Nonlinear systems are extremely difficult to analyze: a global analysis of a nonlinear dynamic system requires a huge computational load. Usually a local analysis is applied instead: one takes a point x₀ (called the operating point) and analyzes the system around it, using the local approximation

    f(x₀ + δx(t)) ≈ f(x₀) + ∇f(x₀) δx(t)

Writing x(t) = x₀ + δx(t), so that d(x₀ + δx(t))/dt = δẋ(t), the system is locally approximated by the linear dynamical system

    δẋ(t) = A δx(t) + f(x₀),   with A = ∇f(x₀).

The state of the approximated system is δx(t).

Example:

    ẋ₁ = x₁² + 2x₂
    ẋ₂ = x₁²
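A minimal numerical sketch of this linearization step (the operating point x₀ below is an arbitrary illustrative choice, and the vector field is the example as reconstructed above): approximate A = ∇f(x₀) by finite differences.

import numpy as np

# Local linearization: approximate A = grad f(x0) by finite differences,
# so that d(dx)/dt ~ A dx + f(x0) near the operating point x0.
def f(x):
    # the example vector field, as reconstructed above
    return np.array([x[0]**2 + 2.0 * x[1],
                     x[0]**2])

def jacobian(f, x0, eps=1e-6):
    fx = f(x0)
    J = np.zeros((len(fx), len(x0)))
    for i in range(len(x0)):
        step = np.zeros(len(x0))
        step[i] = eps
        J[:, i] = (f(x0 + step) - fx) / eps
    return J

x0 = np.array([1.0, 0.5])
print("f(x0) =", f(x0))
print("A =\n", jacobian(f, x0).round(4))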

Linear MIMO systems

A linear MIMO system has the form

    ẋ = A x + B u,    y = C x + D u

The first is the state equation and the second is the output equation.
- u ∈ R^m is the control input: it is what we can apply to the system in order to control it, based on the available information about the system.
- y ∈ R^p is the output. It can have different meanings: it may express what information about the system we can access in reality, or it may express the target of the control task.
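A minimal simulation sketch of such a system (the matrices, the zero-order-hold discretization and the constant step input are illustrative choices, not from the lecture): two states, two inputs, one output.

import numpy as np
from scipy.linalg import expm

# Illustrative linear MIMO system: 2 states, 2 inputs, 1 output (made-up numbers).
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 2))

# Zero-order-hold discretization: x[k+1] = Ad x[k] + Bd u[k] (A is invertible here)
dt, steps = 0.01, 500
Ad = expm(A * dt)
Bd = np.linalg.solve(A, Ad - np.eye(2)) @ B

x = np.zeros(2)
u = np.array([1.0, 0.0])          # constant step on the first input
for _ in range(steps):
    x = Ad @ x + Bd @ u

y = C @ x + D @ u                 # output equation y = C x + D u
print("y(T) =", y)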

The control may be open loop, i.e. we design a control law u = u(t) that depends only on time. This is more suitable for a simple environment and a simple task. More often the control is closed loop (feedback): this is more natural, because the control law should be updated based on the updated information about the system state.

Two basic problems of control:
- If the system state is not available, we have to solve a state estimation problem.
- Based on the state information, design the control law.

Structured state equations

    d/dt [ x₁ ]   [ A₁₁  A₁₂ ] [ x₁ ]   [ B₁ ]
         [ x₂ ] = [  0   A₂₂ ] [ x₂ ] + [  0 ] u

The state x₂ is not affected by the input u, i.e. x₂ propagates autonomously. We cannot control x₂; we can control only the state x₁, which is also affected by x₂.

Example 1: a simple analysis of a mass-spring-damper system
- three unit masses connected by unit springs and dampers
- the inputs are the tensions between two masses (1st and 2nd, 2nd and 3rd)

The state is x = [y; ẏ], where y ∈ R³ is the vector of mass displacements y₁, y₂, y₃. The state equation is

    ẋ = [  0  0  0   1  0  0 ]     [  0   0 ]
         [  0  0  0   0  1  0 ]     [  0   0 ]
         [  0  0  0   0  0  1 ] x + [  0   0 ] u
         [ -2  1  0  -2  1  0 ]     [  1   0 ]
         [  1 -2  1   1 -2  1 ]     [ -1   1 ]
         [  0  1 -2   0  1 -2 ]     [  0  -1 ]

- an impulse at u₁ affects the third mass less than the other two
- an impulse at u₂ affects the first mass later than the other two
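A minimal sketch that checks these observations numerically (assuming the A and B matrices as reconstructed above; a unit impulse on input j is modelled by the initial state x(0) = B[:, j]):

import numpy as np
from scipy.linalg import expm

# Three unit masses in a chain with unit springs and dampers; inputs are the
# tensions between masses 1-2 and 2-3 (matrices as reconstructed above).
A = np.array([
    [ 0,  0,  0,  1,  0,  0],
    [ 0,  0,  0,  0,  1,  0],
    [ 0,  0,  0,  0,  0,  1],
    [-2,  1,  0, -2,  1,  0],
    [ 1, -2,  1,  1, -2,  1],
    [ 0,  1, -2,  0,  1, -2]], dtype=float)
B = np.array([
    [0, 0], [0, 0], [0, 0],
    [1, 0], [-1, 1], [0, -1]], dtype=float)

# A unit impulse on input j is equivalent to starting from x(0) = B[:, j];
# the first three state components are the displacements y1, y2, y3.
t = np.linspace(0.0, 10.0, 201)
for j, name in enumerate(["u1", "u2"]):
    disp = np.array([(expm(A * tk) @ B[:, j])[:3] for tk in t])
    peaks = np.max(np.abs(disp), axis=0)
    print(f"impulse at {name}: peak |displacement| of masses 1..3 = {peaks.round(3)}")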

Analysis of echolocating bat flight in real time: how an echolocating bat achieves compound and adaptive flight with energy efficiency when preying on insects, through its interaction with the local environment. The equation of the adaptive PI control governing its locomotion is

    θ̇_flight(t + τ) = k θ_gaze(t)

where θ_flight(t) is the bat's flight direction, so θ̇_flight(t + τ) is the flight turn rate a delay τ ahead, and θ_gaze(t) is the acoustic gaze angle. The bat adapts to different behavioral requirements by tuning only one proportional parameter k.

Only these two variables and one parameter give full information about the bat's present state and its predicted flight path. cos θ_flight(t) plays the role of a gain-scheduling parameter for the nonlinear dynamic equation of the bat, so if θ_flight(t) varies linearly in time, the bat's gaze point at time t is exactly the bat's trajectory position at the later time t + τ. Thus, when θ_gaze(t) is fixed at the direction to the insect, the bat captures it quickly.
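A minimal planar simulation of this delayed proportional steering law (reading the slide's equation as: turn rate at time t + τ equals k times the gaze angle at time t, with the gaze angle taken as the bearing to the prey minus the flight direction; k, τ, the flight speed and the geometry are illustrative values, not measurements):

import numpy as np

# Planar sketch of the delayed proportional steering law:
#   turn rate at time t + tau  =  k * gaze angle at time t,
# with the gaze angle taken as (bearing to the prey) - (flight direction).
dt, tau, k, speed = 0.01, 0.1, 4.0, 3.0
delay = int(tau / dt)

target = np.array([10.0, 5.0])            # stationary "insect"
pos = np.array([0.0, 0.0])
theta_flight = 0.0
gaze_history = [0.0] * delay              # buffer implementing the reaction delay
closest = np.inf

for _ in range(2000):
    dx, dy = target - pos
    theta_gaze = np.arctan2(dy, dx) - theta_flight
    gaze_history.append(theta_gaze)
    theta_flight += dt * k * gaze_history.pop(0)   # delayed proportional turn
    pos = pos + dt * speed * np.array([np.cos(theta_flight), np.sin(theta_flight)])
    closest = min(closest, np.linalg.norm(target - pos))

print(f"closest approach to the prey: {closest:.3f}")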

Quiz

Linear systems. We have seen that linear systems can exhibit many kinds of motion, even for two-dimensional systems (coordinates in the plane).
- What is the behavior of the motion described by ẋ₁ = x₁, ẋ₂ = x₂?
- What is the behavior of the motion described by ẋ₁ = x₁, ẋ₂ = −x₂?
- What is the behavior of the motion described by ẋ₁ = x₁ + x₂, ẋ₂ = x₂?

Nonlinear systems. They are very complicated, so usually they must be linearized around so-called operating points. Give the linearized equations of the following nonlinear systems at the operating points (0, 0) and (2, 2), respectively:
- ẋ₁ = x₁x₂ + x₂²,  ẋ₂ = x₁²x₂ + x₂x₁
- ẋ₁ = (sin x₁) cos x₂ + x₂²,  ẋ₂ = (cos x₁) x₂ + x₂³