Reglerteknik, TNG028 Lecture 1 Anna Lombardi
Today's lecture We will try to answer the following questions: What is automatic control? Where can we find automatic control? Why do we need automatic control?
Automatic control Automatic control is the art of making things accomplish desired objectives
Automatic control From Wikipedia: Automatic control is the research area and theoretical base for mechanization and automation, employing methods from mathematics and engineering.
What is Automatic Control? Control is the adjustment of some knob in response to some measure of desirability. Humans do it all the time. If the shower water is too cold, we open the hot water valve to heat it up. If the chips are not salty enough, we shake some salt on them. If it is raining, we open the umbrella. If there is background noise, we converse in a louder voice. Control is taking action to fix something, to regulate something, to make something come out in a desirable way. Biological beings do it, but when the control is done by a device built by humans, when it is performed autonomously by a machine or computer, it is called automatic control. From "American Automatic Control Council"
Successful examples of automatic control... Segway Device similar to the inverted pendulum It is kept upright by a feedback control system
Successful examples of automatic control... Computer: hard disk controller The read arm must be moved to the right position as fast as possible while the disk is rotating. Without control, the arm oscillates when moved, with the consequence that it takes longer before it stops and data can be read.
Successful examples of automatic control... Home: climate control It works by comparing the actual temperature with the desired temperature, the system then operates according to this comparison. For example if the actual temperature is too hot the air conditioner is turned on. The goal is to bring the comparison between the actual and desired temperatures as close to zero as possible.
Successful examples of automatic control... Cars Adaptive Cruise Control The regular cruise control system maintains the set speed of the vehicle regardless of any external influences. Adaptive Cruise Control systems use sensors to monitor the vehicles ahead, and if they come too close, the speed of the vehicle is decreased in order to maintain a safe distance between the cars. ESP - Electronic Stability Program
Successful examples of automatic control... Mobile telephony Mobile phones switch from one tower to another automatically Internet: maximisation of transmission speed
Successful examples of automatic control... Industry Quadrupedal pack robot A process is controlled to obtain high product quality while minimising discharge and effect on the environment. For example: thickness control system for metal. After the metal passes through the rollers, X-rays measure its thickness and compare it to a desired thickness. Any difference is adjusted by a screw-down position control that changes the gap at the rollers through which the metal passes.
Successful examples of automatic control... Human body Our bodies regulate themselves, all the way from sweating to regulate overall temperature down to the mechanisms generating antibodies. Medical treatment: anaesthesia, insulin pumps, pacemakers
Control problem Example: adjusting the temperature of a shower From "American Automatic Control Council"
Control problem General feedback loop [Block diagram: the reference is compared with the measured output to form an error; a feedback adjustment block computes the input that actuates the physical system; the system output is fed back through a measurement block.] From "American Automatic Control Council"
Control problem What is common to all these control problems? They can all be described in the following way: u control signal (input); y measured signal (output); v disturbance; S system. Choose the input signal u so that the system (in terms of the output signal y) behaves as desired according to a reference signal r.
Example of a process: cruise control Cruise control maintains constant speed in a car independently of road inclines or wind. Vehicle dynamics: input - u force (accelerator pedal) output - y speed reference signal - r desired speed disturbance - v head wind, uphill slopes
Control models - Cruise control y(t) = speed of the car [m/s] u(t) = driving/braking force generated by motor and brakes [N] v(t) = disturbances depending on the inclination of the road [N] f(y) = air resistance, friction, and so on [N] m = mass of the car [kg] Reference signal r(t) = 25 m/s = 90 km/h (desired speed)
Control models - Cruise control Study the behaviour of the car Newton's law: F(t) = ma(t), i.e. mẏ(t) = u(t) − f(y(t)) − v(t) Assumption: air resistance proportional to speed: f(y(t)) = cy(t) This gives ẏ(t) + (c/m) y(t) = (1/m)(u(t) − v(t)) Dynamic system!
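The model above can be explored numerically. A minimal sketch, not from the lecture: forward-Euler integration of mẏ(t) = u(t) − c·y(t) − v(t); the function name and step size are illustrative assumptions.

```python
def simulate_car(u, v, m=1000.0, c=200.0, y0=0.0, dt=0.01, t_end=30.0):
    """Integrate ydot = (u(t) - c*y - v(t)) / m with forward Euler."""
    y, t = y0, 0.0
    ys = [y]
    while t < t_end:
        ydot = (u(t) - c * y - v(t)) / m
        y += dt * ydot
        t += dt
        ys.append(y)
    return ys

# Flat road, constant force u = 5000 N: the speed should approach k/c = 25 m/s.
ys = simulate_car(u=lambda t: 5000.0, v=lambda t: 0.0)
print(round(ys[-1], 2))  # close to 25 after 30 s
```

The same function can later be reused with a nonzero disturbance v(t) to study slopes.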
Control models - Cruise control Study the behaviour of the car on a flat road with a constant input Flat road: v = 0. Input chosen as a step: u(t) = k for t ≥ 0, u(t) = 0 for t < 0. The model becomes ẏ(t) + (c/m) y(t) = k/m for t ≥ 0, with solution y(t) = (k/c)(1 − e^(−(c/m)t)). The speed asymptotically settles at k/c, so the desired speed can be reached with a proper choice of the input k.
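The step response above follows from an integrating factor; a brief sketch of the derivation:

```latex
% Solving  \dot y + (c/m)\,y = k/m,  y(0) = 0,  for t \ge 0.
\begin{align*}
\frac{d}{dt}\!\left(e^{(c/m)t} y(t)\right) &= e^{(c/m)t}\,\frac{k}{m} \\
e^{(c/m)t} y(t) &= \frac{k}{m}\cdot\frac{m}{c}\left(e^{(c/m)t} - 1\right) \\
y(t) &= \frac{k}{c}\left(1 - e^{-(c/m)t}\right)
\end{align*}
```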
Control models - Cruise control Study the behaviour of the car on a flat road with a constant input Assume: m = 1000 kg, c = 200 Ns/m. Input chosen as: u(t) = 200 r(t) for t ≥ 0, u(t) = 0 for t < 0. With r(t) = 25 m/s this gives ẏ(t) + (200/1000) y(t) = 5000/1000 for t ≥ 0, with solution y(t) = 25(1 − e^(−0.2t)). The desired speed is reached asymptotically. Cruise control computations
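A quick numerical check of the closed-form step response, using the slide's values (m = 1000 kg, c = 200 Ns/m, u = 200·r = 5000 N); an illustrative sketch only:

```python
import math

def y(t, k=5000.0, m=1000.0, c=200.0):
    """Analytic solution of ydot + (c/m)*y = k/m with y(0) = 0."""
    return (k / c) * (1.0 - math.exp(-(c / m) * t))

# The speed approaches k/c = 25 m/s as t grows.
for t in (0, 5, 10, 30):
    print(t, round(y(t), 2))
```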
Control models - Cruise control Study the behaviour of the car on a flat road with a constant input [Plot: step response y(t) = 25(1 − e^(−0.2t)) approaching 25 m/s.]
Control models - Cruise control Study the behaviour of the car on a flat road with a constant input We have seen that for our car (m = 1000 kg, c = 200 Ns/m) the desired speed can be reached if we choose the input as u(t) = 200 r(t) for t ≥ 0, u(t) = 0 for t < 0. What happens if: the model is not correct, i.e. m ≠ 1000 kg, c ≠ 200 Ns/m; the road has a slope, i.e. v(t) = mg sin ϕ(t)? Is it still possible to reach the desired speed?
Control models - Cruise control Errors in the model: the air-resistance measurement was wrong: c = 150 Ns/m instead of 200 Ns/m. The model becomes ẏ(t) + (150/1000) y(t) = 5000/1000 for t ≥ 0. With the same input signal, the speed becomes y(t) = 33.3(1 − e^(−0.15t)). The car reaches a speed that is too high. The reason is that we don't take the actual speed into consideration.
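The sensitivity to the model error can be seen directly from the steady state: with the open-loop input u = 5000 N held fixed, the final speed is k/c. A small illustrative check:

```python
# With open-loop input u = k = 5000 N, the steady-state speed is k/c,
# so an error in the air-resistance coefficient c shifts the final speed.
k = 5000.0
for c in (200.0, 150.0):
    print(c, round(k / c, 1))  # 25.0 for c = 200, 33.3 for c = 150
```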
Control models - Cruise control [Plot: with c = 150 Ns/m the speed settles at 33.3 m/s instead of the desired 25 m/s.]
Control models - Cruise control Disturbances depending on the slope of the road: v(t) = mg sin ϕ(t), where ϕ(t) = slope of the road. Assumption: ϕ = 10° (uphill slope), so v(t) = mg sin(10°) ≈ 1700 N. Model: ẏ(t) + (200/1000) y(t) = 3300/1000 for t ≥ 0. Speed: y(t) = (3300/200)(1 − e^(−0.2t))
Control models - Cruise control y(t) → 3300/200 = 16.5 ≠ 25 when t → ∞
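The steady-state value above can be checked by hand: the equilibrium of ẏ + (c/m)y = (u − v)/m is (u − v)/c. A short illustrative computation, assuming g = 9.81 m/s²:

```python
import math

# A constant uphill disturbance v = m*g*sin(phi) lowers the steady-state
# speed of the open-loop system from u/c to (u - v)/c.
u, c, m, g = 5000.0, 200.0, 1000.0, 9.81
v = m * g * math.sin(math.radians(10))  # ≈ 1700 N, as on the slide
print(round((u - v) / c, 1))            # ≈ 16.5 m/s, well below 25 m/s
```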
What are the difficulties? The process is never known exactly. Cruise control example: different values of the air resistance coefficient c. There are disturbances in the process. Cruise control example: slope of the road ϕ(t).
Control problem Servo problem: the task is to make the output variables follow the reference signal as exactly as possible (e.g. car driving, industrial robots) Regulator problem: the purpose is to keep the output variables as constant as possible in spite of disturbances and variations in process dynamics (e.g. manufacturing processes, thermostat control)
Control strategies Open-loop control: the controller Reg computes the input u from the reference r alone. Closed-loop (feedback) control: the measured output y is fed back and compared with the reference, and the controller Reg computes the input u from the error r − y.
Control models - Cruise control Closed-loop control: feed back the velocity. A strategy is to accelerate when driving too slowly and to brake when driving too fast: u(t) = K_P (r(t) − y(t)). This is called proportional control, P-control: the constant K_P is the only variable to design in the controller. Closed-loop system: ẏ(t) + (c/m) y(t) = (1/m) K_P (25 − y(t)), with solution y(t) = 25/(1 + c/K_P) · (1 − e^(−((K_P + c)/m) t)), where 25 is the desired speed: r(t) = 25 m/s.
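The closed-loop steady state 25/(1 + c/K_P) shows why a larger gain shrinks the steady-state error. A minimal simulation sketch (forward Euler; the gains tried are illustrative, not from the lecture):

```python
def p_control_final_speed(Kp, r=25.0, m=1000.0, c=200.0, dt=0.01, t_end=60.0):
    """Simulate the P-controlled car on a flat road and return the final speed."""
    y = 0.0
    for _ in range(int(t_end / dt)):
        u = Kp * (r - y)           # P-controller: u = Kp*(r - y)
        y += dt * (u - c * y) / m  # plant: ydot = (u - c*y)/m, v = 0
    return y

# Larger Kp pushes the steady state r/(1 + c/Kp) closer to r = 25 m/s.
for Kp in (200.0, 2000.0, 20000.0):
    print(Kp, round(p_control_final_speed(Kp), 2))
```

Note that a pure P-controller never removes the error completely; it only makes it smaller, which motivates the integral action covered later in the course (PID).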
Control models - Cruise control with P-control [Plot: closed-loop step responses for different values of K_P.]
Outline of the course Mathematical models I: time-domain Synthesis I: PID Mathematical models II: frequency-domain Synthesis II: lead-lag compensation Mathematical models III: state-space description Synthesis III: state-feedback, pole placement, LQ
Control problem S: system to be controlled, with input u, output y, and disturbance v. In this course we assume the system to be dynamic and linear.
Dynamic system A system with memory, i.e. the current output depends on what has happened previously. Mathematically a dynamic system is described by a differential equation: ẏ(t) = f(y(t), u(t), v(t)). Opposite: static system: y(t) = f(u(t), v(t)).
Linear system If the input u(t) = u_1(t) gives the output y(t) = y_1(t), and u(t) = u_2(t) gives y(t) = y_2(t), then the superposition principle applies: u(t) = k_1 u_1(t) + k_2 u_2(t) gives y(t) = k_1 y_1(t) + k_2 y_2(t). Such systems are described by linear ordinary differential equations.
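The superposition property can be verified numerically for the cruise-control model, which is linear. A sketch under illustrative inputs (a step and a ramp), with zero initial state and no disturbance:

```python
def response(u, m=1000.0, c=200.0, dt=0.01, t_end=20.0):
    """Forward-Euler response of ydot = (u(t) - c*y)/m with y(0) = 0."""
    y, t, ys = 0.0, 0.0, []
    for _ in range(int(t_end / dt)):
        y += dt * (u(t) - c * y) / m
        t += dt
        ys.append(y)
    return ys

u1 = lambda t: 5000.0      # step input
u2 = lambda t: 1000.0 * t  # ramp input
k1, k2 = 2.0, -0.5

y1, y2 = response(u1), response(u2)
y12 = response(lambda t: k1 * u1(t) + k2 * u2(t))

# For a linear system, the response to k1*u1 + k2*u2 equals k1*y1 + k2*y2.
err = max(abs(a - (k1 * b + k2 * c_)) for a, b, c_ in zip(y12, y1, y2))
print(err < 1e-9)
```

This check would fail for a nonlinear model, e.g. air resistance proportional to y², which is why linearity is singled out as an assumption.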
Summary Important concepts Control problem: input, output, reference signal, disturbance Control problem: servo and regulator problem Control strategies: open-loop and closed-loop control System: dynamic, linear Remember to answer the quiz!