Overview of the Seminar Topic


Simo Särkkä <simo.sarkka@hut.fi>
Laboratory of Computational Engineering, Helsinki University of Technology
September 17, 2007

Contents
1 What is Control Theory?
2 History of Control
3 Classical, Optimal and Stochastic Control
4 Classification of Control Problems
5 Summary

Purpose of Control Theory
(Diagram: control signals entering a physical system.)
Control theory is the mathematical theory of what kind of control signals we should inject into a physical system in order to reach a goal. For example, the physical system can be a car, which obeys Newton's laws (or those of general relativity, if desired). The control signals are the steering wheel, the brakes and the throttle. The goal is to maintain the car on the road.

Building Blocks of Control Systems
- The physical system is the actual system to be controlled, sometimes called the plant.
- The controller is the system (e.g., a computer program) which computes the required control signal (voltage value, motor speed).
- Actuators are the devices used for realizing the physical control signals (e.g., motors, valves, jet engines).
- Sensors are used for monitoring the state of the physical system (e.g., speedometer, acceleration sensor, odometer).

Example Applications of Control
- Cruise control: maintain constant speed of a car.
- Airplane control: fly the plane to a given altitude.
- Spacecraft control: steer Apollo 11 to the moon with minimum consumed fuel.
- Chemical process control: maintain the desired level of a chemical substance.
- Nuclear reactor control: keep the number of neutrons at a suitable level.
- Robot control: drive a robot from one place to another in minimum time.
- Disk drive control: move the reading head over the correct track.

Topic of the Seminar
- Topic: automatic control of uncertain dynamic systems.
- Uncertainty appears in different parts of the system: system dynamics, system parameters, sensor measurements.
- Stochastic (optimal) control: uncertainties are modeled as stochastic processes and random variables (Bayesian approach).
- Adaptive control: adaptation mechanisms are used for making the system behave as it should.
- Classical control forms the basis of the above methods.

History: Prehistory and Primitive Period
- Water clock control systems of the ancient Greeks and Arabs.
- Windmill controllers, temperature regulators, float regulators.
- Watt's flyball governor in 1769 for steam engine control; Polzunov's water level regulator in 1765.
- In 1868, Maxwell's mathematical analysis of the governor; Vyshnegradskii's analysis of regulators in 1877.
- In the late 1800s, Lyapunov's stability theory (in Russia) and Heaviside's operational calculus.

History: Classical Period
- Early 1900s: frequency domain (Fourier transform) analysis of feedback amplifiers by Bode, Nyquist and Black.
- In 1922, Minorsky introduced the PID controller.
- World War II: airplane pilots, gun-positioning systems, radar control systems.
- Systematic engineering methods of classical control in the 1940s.
- In the 1940s, Wiener's work on stochastic analysis and optimal filtering (and "cybernetics").
- In the 1950s, s-plane (Laplace domain) methods such as the root locus were developed, as well as digital control and the z-transform.

History: Modern Period
- In the late 1950s: state space models, Bellman's dynamic programming approach to discrete-time optimal control, Pontryagin's calculus-of-variations approach to optimal control.
- In the 1960s: Kalman's theory of Linear Quadratic Regulators (LQR), the Kalman filter and the Kalman-Bucy filter. Stability analysis of linear state space models (mostly by Kalman).
- In the 1960s: non-linear control, stochastic control, adaptive control and numerical methods for computer implementation.
- 1970s to present: robust control, numerical methods, approximation methods and industrial applications.

Classical Control [1/2]
- Systems are modeled with linear time-invariant (LTI) ordinary differential equations (ODEs) with a single input and single output (SISO), e.g. d²y(t)/dt² + a dy(t)/dt + b y(t) = u(t).
- Based on forming an error signal e(t) = y(t) − r(t) (feedback signal) between the reference signal and the process output. The controller is designed to force the error to zero.
- Analysis is typically done using Laplace transforms and s-domain transfer functions of the differential equations: H(s) = 1 / (s² + a s + b)
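To make the transfer-function view concrete, here is a small Python sketch (not part of the original slides; scipy and the coefficient values a = 1, b = 2 are my assumptions) that builds H(s) = 1/(s² + a s + b) and computes its step response:

```python
from scipy import signal

a, b = 1.0, 2.0                                   # example coefficients
H = signal.TransferFunction([1.0], [1.0, a, b])   # H(s) = 1 / (s^2 + a s + b)

t, y = signal.step(H)                             # step response of the open-loop plant
print(f"final value ~ {y[-1]:.3f} (steady-state gain 1/b = {1/b:.3f})")
```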

Classical Control [2/2]
- The most common control architecture is the proportional-integral-derivative (PID) controller: C(s) = K_p + K_i/s + K_d s
- The parameters of the controller are heuristically selected such that the controlled closed-loop system is stable.
- Performance criteria are, for example, settling time, overshoot, oscillations and robustness properties.
- Phase-lead, phase-lag and other compensators can be used for improving robustness.
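The following is a hedged sketch of a discrete PID loop closed around the second-order plant from the previous slide. The gains K_p, K_i, K_d are hand-tuned illustrative values, not from the slides, and the sign convention e = r − y is used so that positive error drives the control upward:

```python
a, b = 1.0, 2.0                   # plant coefficients from the example ODE
Kp, Ki, Kd = 8.0, 4.0, 1.0        # heuristically selected gains (assumed values)
dt, r = 0.01, 1.0                 # integration step and reference signal
y, ydot, integral, e_prev = 0.0, 0.0, 0.0, r

for k in range(2000):             # simulate 20 seconds
    e = r - y                     # error between reference and process output
    integral += e * dt            # integral term accumulator
    derivative = (e - e_prev) / dt
    e_prev = e
    u = Kp * e + Ki * integral + Kd * derivative   # PID control law
    # Euler step of the plant y'' + a y' + b y = u:
    yddot = u - a * ydot - b * y
    ydot += yddot * dt
    y += ydot * dt

print(f"output after 20 s: {y:.3f} (reference {r})")
```

With these gains the closed loop is stable and the integral term removes the steady-state error, so the output settles at the reference.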

Optimal Control [1/2]
- Systems are generally modeled as non-linear state space models, that is, non-linear vectorial differential or difference equations, e.g. dx(t)/dt = f(x(t), u(t), t), where x(t) = (x_1(t), ..., x_n(t)) is the state vector and u(t) = (u_1(t), ..., u_d(t)) is the control signal.
- The controller is selected to optimize a given performance cost functional such as J(u) = ∫₀ᵀ [ ||x(t) − r(t)||² + ||u(t)||² ] dt
- All classical controllers can be seen as special cases of optimal controllers.
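As a sketch of how such a cost functional can be evaluated numerically (the dynamics f, the feedback law and all numbers below are hypothetical examples, not from the slides), one can Euler-integrate the state equation and accumulate the running cost:

```python
import numpy as np

def f(x, u, t):
    # hypothetical pendulum-like dynamics: x = (angle, angular velocity)
    return np.array([x[1], -np.sin(x[0]) + u[0]])

def cost(u_fn, x0, r, T=5.0, dt=0.01):
    # Euler-integrate dx/dt = f(x, u, t) and accumulate the running cost
    x, J = np.array(x0, dtype=float), 0.0
    for k in range(int(T / dt)):
        t = k * dt
        u = u_fn(x, t)
        J += (np.sum((x - r) ** 2) + np.sum(u ** 2)) * dt  # ||x-r||^2 + ||u||^2
        x = x + f(x, u, t) * dt
    return J

# example feedback law (hypothetical): u = -2*angle - 1*velocity
u_fn = lambda x, t: np.array([-2.0 * x[0] - 1.0 * x[1]])
print("J(u) =", cost(u_fn, x0=[1.0, 0.0], r=np.zeros(2)))
```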

Optimal Control [2/2]
- In continuous time, the solution is given by the calculus of variations.
- In the discrete-time case, the problem is a finite-dimensional optimization problem.
- The solutions can be expressed in several forms: Euler-Lagrange equations, Pontryagin's maximum principle, dynamic programming, the Hamilton-Jacobi-Bellman equation.
- The Linear Quadratic Regulator (LQR) is an important closed-form solution and can be used for approximating non-linear solutions (see the sketch below).
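A minimal LQR sketch, assuming the standard infinite-horizon linear-quadratic setup and example matrices A, B, Q, R (none of these values come from the slides); the gain is obtained from the algebraic Riccati equation via scipy:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double-integrator dynamics (example)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                             # state cost weight
R = np.array([[1.0]])                     # control cost weight

P = solve_continuous_are(A, B, Q, R)      # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)           # optimal gain K = R^{-1} B' P, u = -K x
print("LQR gain K =", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```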

Stochastic Control [1/2]
- The system is assumed to contain unknown processes and parameters, which are modeled using stochastic processes and random variables.
- Dynamics are modeled using stochastic differential or difference equations: dx(t)/dt = f(x(t), u(t), t) + G(x(t), t) w(t), where the noise process w(t) models the uncertainties in the dynamics.
- The system state is not measured directly, but through sensors. Sensor measurements also contain uncertainties or noise, which are modeled as stochastic processes and random variables: y(t) = h(x(t), u(t), t) + v(t)
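A small simulation sketch of such a model (scalar example dynamics of my choosing, simulated with the Euler-Maruyama scheme; the noise intensities are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, q, rvar = 0.01, 0.1, 0.05    # step size, process noise intensity, meas. variance
x, last_y = 1.0, None

for k in range(500):
    u = -0.5 * x                                   # example feedback control
    drift, G = -x + u, 1.0                         # f(x,u,t) and noise gain G(x,t)
    x += drift * dt + G * np.sqrt(q * dt) * rng.standard_normal()  # Euler-Maruyama
    last_y = x + np.sqrt(rvar) * rng.standard_normal()  # y = h(x,u,t) + v with h = x

print(f"true final state {x:.3f}, last noisy measurement {last_y:.3f}")
```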

Stochastic Control [2/2]
- The performance measure is the expected value of a cost function given the measurements y[0, t], e.g. J(u) = E{ ∫₀ᵀ [ ||x(t) − r(t)||² + ||u(t)||² ] dt }
- The solution is a complicated combination of optimal filtering and stochastic generalizations of the optimal control solutions.
- The most general solution can be thought of as a recursive application of Bayesian inference and Bayesian decision theory.
- The Linear Quadratic Gaussian (LQG) regulator is an important closed-form solution, which is a combination of the Kalman filter and the deterministic LQR (see the sketch below).
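A compact sketch of that combination for a scalar linear-Gaussian system, with all numbers illustrative and the LQR gain assumed precomputed; by certainty equivalence, the Kalman filter estimate is fed into the deterministic LQR feedback:

```python
import numpy as np

rng = np.random.default_rng(1)
ad, bd, q, r = 0.99, 0.01, 1e-4, 0.01   # discrete-time dynamics and noise variances
K = 5.0                                  # LQR feedback gain (assumed precomputed)

x, m, P = 1.0, 0.0, 1.0                  # true state; filter mean and covariance
for k in range(1000):
    u = -K * m                           # control computed from the estimate, not the state
    # true system: process noise in the dynamics, noise in the measurement
    x = ad * x + bd * u + np.sqrt(q) * rng.standard_normal()
    y = x + np.sqrt(r) * rng.standard_normal()
    # Kalman filter: prediction step, then measurement update
    m, P = ad * m + bd * u, ad * P * ad + q
    S = P + r                            # innovation variance
    Kk = P / S                           # Kalman gain
    m, P = m + Kk * (y - m), (1.0 - Kk) * P

print(f"true state {x:.4f}, filter estimate {m:.4f}")
```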

Division into Classical, Optimal and Stochastic Control
- Control approaches can be thought of as subsets of each other: Classical ⊂ Optimal ⊂ Stochastic Control
- This division is similar to the division: Least squares ⊂ Maximum likelihood ⊂ Bayesian inference
- Stochastic control is based on Bayesian decision theory: Stochastic Control ⊂ Bayesian decision theory

Robust, Adaptive, Intelligent and Other Controls
- Robust control refers to methodology for designing controllers that are robust to model uncertainty and unknown disturbances.
- In adaptive control, model or parameter identification is included in the control system.
- Intelligent control is a general name for control methods utilizing methods from machine learning (AI), e.g., neural networks.
- These can be thought of as applications or approximations of general Bayesian decision theory.

Division into Classical and Modern Control
- In the 1960s, a division into classical and modern control arose.
- Classical refers to transfer-function-based analysis of scalar linear time-invariant systems.
- Modern refers to state space model based control, e.g., optimal, robust and stochastic control.
- This division is a bit outdated: if control was "modern" in the 1960s, what is it today?

Division into Continuous-Time and Discrete-Time
- Physical systems are naturally modeled as continuous-time systems, i.e., with differential equations.
- In a computer, due to sampling of the signals, a reasonable way of modeling is sometimes a discrete-time formulation with difference equations.
- There are processes that are more naturally modeled with discrete-time than continuous-time models.
- Mathematically, every discrete-time process can be represented as a sampled version of a continuous-time process (see the sketch below).
- Numerically, discrete-time processes are easier, because computers work in discrete steps.
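A short sketch of such sampling (example system of my choosing; scipy's cont2discrete performs zero-order-hold discretization), turning dx/dt = A x + B u into the difference equation x[k+1] = Ad x[k] + Bd u[k]:

```python
import numpy as np
from scipy.signal import cont2discrete

A = np.array([[0.0, 1.0], [0.0, -1.0]])   # continuous-time dynamics (example)
B = np.array([[0.0], [1.0]])
C, D = np.eye(2), np.zeros((2, 1))
Ts = 0.1                                   # sampling period

Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), Ts, method='zoh')
print("Ad =\n", Ad)
print("Bd =\n", Bd)
```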

Summary
- Control theory studies how a physical system can be steered to a given goal by applying a control force.
- The field dates back to the 1700s and 1800s, but most of the theory was developed in the 1900s.
- The main approaches are classical, optimal and stochastic control. Robust, adaptive, intelligent and other approaches also exist, but they can be considered special cases of the main methods.
- The mathematical foundations are in differential equations, Laplace/Fourier transforms, calculus of variations, stochastic analysis and Bayesian decision theory.