Reglerteknik, TNG028. Lecture 1. Anna Lombardi


Today's lecture. We will try to answer the following questions: What is automatic control? Where can we find automatic control? Why do we need automatic control?

Automatic control. Automatic control is the art of making things accomplish desired objectives.

Automatic control From Wikipedia: Automatic control is the research area and theoretical base for mechanization and automation, employing methods from mathematics and engineering.

What is Automatic Control? Control is the adjustment of some knob in response to some measure of desirability. Humans do it all the time. If the shower water is too cold, we open the hot water valve to heat it up. If the chips are not salty enough, we shake some salt on them. If it is raining, we open the umbrella. If there is background noise, we converse in a louder voice. Control is taking action to fix something, to regulate something, to make something come out in a desirable way. Biological beings do it, but when the control is done by a device built by humans, when it is performed autonomously by a machine or computer, it is called automatic control. From "American Automatic Control Council"

Successful examples of automatic control... Segway: a device similar to the inverted pendulum. It is kept upright by a feedback control system.

Successful examples of automatic control... Computer: hard disk controller. The read arm must be moved to the right position as fast as possible while the disk is rotating. Without control, the arm oscillates when it is moved, with the consequence that it takes longer before it stops and data can be read.

Successful examples of automatic control... Home: climate control. It works by comparing the actual temperature with the desired temperature; the system then operates according to this comparison. For example, if the actual temperature is too hot, the air conditioner is turned on. The goal is to bring the difference between the actual and desired temperatures as close to zero as possible.

Successful examples of automatic control... Cars. Adaptive Cruise Control: the regular cruise control system maintains the set speed of the vehicle regardless of any external influences. Adaptive Cruise Control systems use sensors to monitor the vehicles ahead, and if they become too close, the speed of the vehicle is decreased in order to maintain a safe distance between the cars. ESP - Electronic Stability Program.

Successful examples of automatic control... Mobile telephony: mobile phones switch from one tower to another automatically. Internet: maximisation of transmission speed.

Successful examples of automatic control... Industry. Quadrupedal pack robot. Processes are controlled to obtain high product quality with minimisation of discharge and effect on the environment. For example: a thickness control system for metal. After the metal passes through the rollers, X-rays measure its thickness and compare it to the desired thickness. Any difference is corrected by a screw-down position control that changes the gap between the rollers through which the metal passes.

Successful examples of automatic control... Human body. Our bodies regulate themselves, all the way from sweating to regulate overall temperature down to the mechanisms generating antibodies. Medical treatment: anaesthesia, insulin pump, pacemaker.

Control problem Example: adjusting the temperature of a shower From "American Automatic Control Council"

Control problem. General feedback loop: the reference is compared with the measured output; the resulting error drives an adjustment, which actuates an input to the physical system; the system output is measured and fed back to the comparison. From "American Automatic Control Council"
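The compare/adjust/actuate/measure loop can be sketched as a small simulation. The first-order plant dy/dt = u − y and the gain K below are illustrative assumptions, not part of the lecture:

```python
# Minimal sketch of the general feedback loop:
# compare -> adjust -> actuate -> system -> measure -> compare ...
# The plant model (dy/dt = u - y) and gain K are illustrative assumptions.

def simulate_feedback(r=20.0, K=2.0, steps=200, dt=0.05):
    """Drive a simple first-order plant toward the reference r."""
    y = 0.0                      # measured output
    for _ in range(steps):
        error = r - y            # compare reference with measurement
        u = K * error            # adjustment (here: proportional)
        y += dt * (u - y)        # actuate: forward-Euler step of dy/dt = u - y
    return y

final = simulate_feedback()
```

With this simple adjustment rule the output settles at K·r/(1 + K), close to but not exactly at the reference, a point the lecture returns to with P-control.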

Control problem. What is common to all these control problems? They can all be described in the following way: u = control signal (input), y = measured signal (output), v = disturbance, S = system. Choose the input signal u so that the system (in terms of the output signal y) behaves as desired according to a reference signal r.

Example of a process: cruise control. Cruise control maintains constant speed in a car independently of road inclines or wind. Vehicle dynamics: input u = force (accelerator pedal); output y = speed; reference signal r = desired speed; disturbance v = head wind, uphill slopes.

Control models - Cruise control. y(t) = speed of the car [m/s]; u(t) = driving/braking force generated by motor and brakes [N]; v(t) = disturbances depending on the inclination of the road [N]; f(y) = air resistance, friction, and so on [N]; m = mass of the car [kg]. Reference signal: r(t) = 25 m/s = 90 km/h (desired speed).

Control models - Cruise control. Study the behaviour of the car. Newton's law: F(t) = m a(t), i.e. m ẏ(t) = u(t) − f(y(t)) − v(t). Assumption: air resistance proportional to speed, f(y(t)) = c y(t), which gives ẏ(t) + (c/m) y(t) = (1/m)(u(t) − v(t)). A dynamic system!

Control models - Cruise control. Study the behaviour of the car on a flat road with a constant input. Flat road: v = 0. Input chosen as a step: u(t) = k for t ≥ 0, u(t) = 0 for t < 0. The model becomes ẏ(t) + (c/m) y(t) = (1/m) k, t ≥ 0, with solution y(t) = (k/c)(1 − e^(−(c/m)t)). The speed asymptotically settles at k/c, so the desired speed can be reached with a proper choice of the input k.

Control models - Cruise control. Study the behaviour of the car on a flat road with a constant input. Assume m = 1000 kg, c = 200 Ns/m. Input chosen as u(t) = 200 r(t) for t ≥ 0, u(t) = 0 for t < 0, i.e. u = 5000 N. The model becomes ẏ(t) + (200/1000) y(t) = 5000/1000, t ≥ 0, with solution y(t) = 25(1 − e^(−0.2t)). The desired speed is reached asymptotically. Cruise control computations
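The step response above can be checked numerically. A forward-Euler sketch of m ẏ = u − c y with u = 5000 N (the step size dt is an illustrative choice):

```python
# Numerical check of the cruise-control step response:
# m*ydot = u - c*y with m = 1000 kg, c = 200 Ns/m, u = 5000 N.
import math

m, c, u = 1000.0, 200.0, 5000.0
dt, T = 0.01, 30.0

y, t = 0.0, 0.0
while t < T:
    y += dt * (u - c * y) / m          # ydot = (u - c*y)/m
    t += dt

analytic = 25.0 * (1.0 - math.exp(-0.2 * T))   # y(t) = 25(1 - e^{-0.2t})
```

After 30 s the simulated speed agrees with the analytic solution and is essentially at the desired 25 m/s.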

Control models - Cruise control. Study the behaviour of the car on a flat road with a constant input.

Control models - Cruise control. Study the behaviour of the car on a flat road with a constant input. We have seen that for our car (m = 1000 kg, c = 200 Ns/m) the desired speed can be reached if we choose the input as u(t) = 200 r(t) for t ≥ 0, u(t) = 0 for t < 0. What happens if: the model is not correct, i.e. m ≠ 1000 kg, c ≠ 200 Ns/m; the road has a slope, i.e. v(t) = m g sin ϕ(t)? Is it still possible to reach the desired speed?

Control models - Cruise control. Errors in the model: the wind-tunnel test was wrong and actually c = 150 Ns/m. The model becomes ẏ(t) + (150/1000) y(t) = 5000/1000, t ≥ 0. With the same input signal, the speed becomes y(t) = 33.3(1 − e^(−0.15t)). The car reaches a speed that is too high. The reason is that we do not take the actual speed into consideration.
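The open-loop input u = 5000 N was tuned for c = 200 Ns/m. This small sketch shows what the same fixed input gives when the true air-resistance coefficient differs: the steady-state speed is u/c, not the desired 25 m/s:

```python
# Open-loop steady-state speed is u/c: correct only when c matches the model.
u = 5000.0                                 # fixed open-loop input [N]
steady_speed = {c: u / c for c in (200.0, 150.0)}
# c = 200 gives 25 m/s as designed; c = 150 gives about 33.3 m/s,
# too fast, because there is no feedback of the actual speed.
```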

Control models - Cruise control

Control models - Cruise control. Disturbances depending on the slope of the road: v(t) = m g sin ϕ(t), where ϕ(t) = slope of the road. Assumption: ϕ = 10° (uphill slope), so v(t) = m g sin(10°) ≈ 1700 N. Model: ẏ(t) + (200/1000) y(t) = 3300/1000, t ≥ 0. Speed: y(t) = (3300/200)(1 − e^(−0.2t)).

Control models - Cruise control. y(t) → 3300/200 = 16.5 ≠ 25 when t → ∞.
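The slope-disturbance numbers above can be reproduced directly:

```python
# Effect of a 10-degree uphill slope on the open-loop cruise control.
import math

m, c = 1000.0, 200.0
u = 5000.0                           # open-loop input tuned for a flat road
g = 9.81
phi = math.radians(10)               # 10 degree uphill slope
v = m * g * math.sin(phi)            # disturbance force, about 1700 N
steady = (u - v) / c                 # about 16.5 m/s instead of 25 m/s
```

With the disturbance, the same input now balances u − v ≈ 3300 N against the air resistance, so the car settles far below the desired speed.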

What are the difficulties? The process is never known exactly. Cruise control example: different values of the air resistance coefficient c. There are disturbances in the process. Cruise control example: slope of the road ϕ(t).

Control problem. Servo problem: the task is to make the output variables follow the reference signal as exactly as possible (e.g. car driving, industrial robots). Regulator problem: the purpose is to keep the output variables as constant as possible in spite of disturbances and variations in the process dynamics (e.g. manufacturing processes, thermostat control).

Control strategies. Open-loop control: the reference r goes through the regulator Reg, producing the input u to the system S, which gives the output y. Closed-loop (feedback) control: the output y is subtracted from the reference r at a summation point Σ; the difference drives the regulator Reg, producing the input u to the system S.

Control models - Cruise control. Closed-loop control: feed back the velocity. A strategy is to accelerate when driving too slowly and to brake when driving too fast: u(t) = K_P (r(t) − y(t)). This is called proportional control, or P-control: the constant K_P is the only parameter to design in the controller. Closed-loop system: ẏ(t) + (c/m) y(t) = (1/m) K_P (25 − y(t)), with solution y(t) = (25/(1 + c/K_P))(1 − e^(−((K_P + c)/m) t)), where 25 is the desired speed: r(t) = 25 m/s.
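The closed-loop behaviour can be simulated to confirm the steady-state formula 25/(1 + c/K_P); the gain K_P = 2000 below is an illustrative choice:

```python
# P-control u = Kp*(r - y) applied to the cruise model m*ydot = u - c*y.
# Kp = 2000 is an illustrative choice, not from the lecture.

m, c, r, Kp = 1000.0, 200.0, 25.0, 2000.0
dt, y, t = 0.01, 0.0, 0.0
while t < 60.0:
    u = Kp * (r - y)                 # proportional control law
    y += dt * (u - c * y) / m        # forward-Euler step of the plant
    t += dt

predicted = r / (1.0 + c / Kp)       # steady state from the formula above
```

The simulated speed settles at the predicted value, slightly below 25 m/s: pure P-control leaves a steady-state error, which shrinks as K_P grows.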

Control models - Cruise control with P-control

Outline of the course Mathematical models I: time-domain Synthesis I: PID Mathematical models II: frequency-domain Synthesis II: lead-lag compensation Mathematical models III: state-space description Synthesis III: state-feedback, pole placement, LQ

Control problem. S: system to be controlled, with input u, output y, and disturbance v. In this course we assume the system to be dynamic and linear.

Dynamic system. A system with memory, i.e. the current output depends on what has happened previously. Mathematically, a dynamic system is described by a differential equation: ẏ(t) = f(y(t), u(t), v(t)). The opposite is a static system: y(t) = f(u(t), v(t)).
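The memory distinction can be illustrated with two toy systems (both are illustrative assumptions, not from the lecture): a static map and a simulated first-order ODE whose output depends on the whole input history:

```python
# Static vs. dynamic system: a dynamic system has memory.

def static_system(u):
    return 2.0 * u                       # y(t) = f(u(t)): no memory

def dynamic_system(inputs, y0=0.0, dt=0.1):
    """Forward-Euler simulation of ydot = -y + u: output depends on history."""
    y = y0
    for u in inputs:
        y += dt * (-y + u)
    return y

# The final input value is 1.0 in both cases, but the histories differ,
# so the dynamic system's outputs differ:
a = dynamic_system([1.0] * 50)               # input held at 1 the whole time
b = dynamic_system([0.0] * 49 + [1.0])       # input 0, then 1 at the last step
```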

Linear system. If the input u(t) = u₁(t) gives the output y(t) = y₁(t), and u(t) = u₂(t) gives y(t) = y₂(t), then the superposition principle applies: u(t) = k₁u₁(t) + k₂u₂(t) gives y(t) = k₁y₁(t) + k₂y₂(t). Linear systems are described by linear ordinary differential equations.
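Superposition can be checked numerically on a linear ODE; the system ẏ = −y + u below is an illustrative linear example:

```python
# Numerical check of the superposition principle for a linear system,
# ydot = -y + u, simulated with forward Euler (an illustrative choice).

def simulate(inputs, dt=0.01):
    y, out = 0.0, []
    for u in inputs:
        y += dt * (-y + u)
        out.append(y)
    return out

n = 500
u1 = [1.0] * n
u2 = [0.5] * n
k1, k2 = 2.0, -3.0

y1 = simulate(u1)
y2 = simulate(u2)
y_combo = simulate([k1 * a + k2 * b for a, b in zip(u1, u2)])
expected = [k1 * a + k2 * b for a, b in zip(y1, y2)]
# y_combo matches expected (up to float rounding): superposition holds.
```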

Summary. Important concepts: Control problem: input, output, reference signal, disturbance. Control problem: servo and regulator problem. Control strategies: open-loop and closed-loop control. System: dynamic, linear. Remember to answer the quiz!