CHAPTER 1. Introduction

Linear geometric control theory was initiated in the beginning of the 1970s; see, for example, [1, 7]. A good summary of the subject is the book by Wonham [17]. The term geometric suggests several things. First, it suggests that the setting is a linear state space and that the mathematics behind it is primarily linear algebra (with a geometric flavor). Secondly, it suggests that the underlying methodology is geometric: it treats many important system concepts, for example controllability, as geometric properties of the state space or its subspaces. These are properties that are preserved under coordinate changes, for example the so-called invariant or controlled invariant subspaces. On the other hand, we know that things like distance and shape do depend on the coordinate system one chooses. Using these concepts, the geometric approach captures the essence of many analysis and synthesis problems and treats them in a coordinate-free fashion. By characterizing the solvability of a control problem as a verifiable property of some constructible subspace, calculation of the control law becomes much easier. In many cases, the geometric approach can convert what is usually a difficult nonlinear problem into a straightforward linear one.

The linear geometric systems theory was extended to nonlinear systems in the 1970s and 1980s [11]. The underlying fundamental concepts are almost the same, but the mathematics is different: for nonlinear systems, tools from differential geometry are primarily used.

In the rest of the chapter, we use some typical problems and examples to illustrate the basic ideas of the geometric approach. We begin by recalling the basic notions of linear and nonlinear systems.

1.1. Linear and nonlinear systems

As we know, a linear control system can be written as follows:

(1.1)    ẋ = Ax + Bu
         y = Cx,

where x ∈ R^n is called the state, u ∈ R^m the input and y ∈ R^p the output, and A, B and C are matrices of proper dimensions, either constant or time-varying.
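
As a small numerical illustration of the coordinate-free viewpoint, the following Python sketch (the matrices are arbitrarily chosen example data, not taken from the notes) computes the dimension of the controllable subspace, the image of [B, AB, ..., A^(n−1)B], and checks that it is unchanged under a change of coordinates z = Tx, under which (A, B) becomes (T A T^(−1), T B).

    import numpy as np

    def controllable_subspace(A, B):
        """Return an orthonormal basis for im [B, AB, ..., A^(n-1) B]."""
        n = A.shape[0]
        ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
        U, s, _ = np.linalg.svd(ctrb)
        rank = int(np.sum(s > 1e-10))
        return U[:, :rank]

    # Arbitrary example data (illustration only)
    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [-1.0, -2.0, -3.0]])
    B = np.array([[0.0], [0.0], [1.0]])

    # Change of coordinates z = T x maps (A, B) to (T A T^-1, T B)
    T = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 3.0]])
    Az, Bz = T @ A @ np.linalg.inv(T), T @ B

    dim_x = controllable_subspace(A, B).shape[1]
    dim_z = controllable_subspace(Az, Bz).shape[1]
    print(dim_x, dim_z)   # same dimension (here 3) in both coordinate systems

The basis vectors returned in the two coordinate systems are of course different; what is intrinsic is the subspace itself and its dimension.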

For a linear system (1.1), the solution is given by

    x(x_0, t, t_0, u(·)) = Φ(t, t_0) x_0 + ∫_{t_0}^{t} Φ(t, s) B u(s) ds,

where x_0 is the initial state and t_0 the initial time. In particular, if A is constant, then the transition matrix is Φ(t, t_0) = e^{A(t−t_0)}. Thus for time-invariant systems one can always assume t_0 = 0. One can easily see that solutions to (1.1) satisfy the principle of superposition.

In contrast, a nonlinear (affine) control system can be defined as

(1.2)    ẋ = f(x) + g(x)u
         y = h(x),

where f, g and h are nonlinear mappings of proper dimensions. If we let f = Ax, g = B and h = Cx, (1.2) becomes a linear system. For nonlinear systems, a closed-form solution is in general very difficult to find, and solutions in general do not satisfy the principle of superposition. In addition, nonlinear control systems also exhibit many properties a linear system does not have, some of which will be discussed in Chapter 8.
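
The principle of superposition is easy to verify numerically for a time-invariant instance of (1.1). The sketch below (the matrices, inputs and initial states are arbitrary choices for illustration) integrates the system for two input/initial-state pairs and checks that the response to their sum equals the sum of the responses; a nonlinear system of the form (1.2) would in general fail this test.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Arbitrary LTI example (illustration only)
    A = np.array([[0.0, 1.0], [-2.0, -0.5]])
    B = np.array([[0.0], [1.0]])

    def simulate(x0, u, t_end=5.0):
        """Integrate xdot = A x + B u(t) from x(0) = x0 and return x(t_end)."""
        f = lambda t, x: A @ x + (B * u(t)).ravel()
        sol = solve_ivp(f, (0.0, t_end), x0, rtol=1e-9, atol=1e-12)
        return sol.y[:, -1]

    u1 = lambda t: np.sin(t)          # first input
    u2 = lambda t: 1.0                # second input (a step)
    x01 = np.array([1.0, 0.0])
    x02 = np.array([0.0, -1.0])

    lhs = simulate(x01 + x02, lambda t: u1(t) + u2(t))
    rhs = simulate(x01, u1) + simulate(x02, u2)
    print(np.allclose(lhs, rhs, atol=1e-6))   # True: superposition holds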

Let us end this section with an example of a linear system. The purpose of the example is to show that we can model even quite complex practical systems as linear systems.

Example 1.1 (Longitudinal motions of an aircraft). By longitudinal motions of an aircraft we mean the movement of an aircraft as if it were constrained to move exclusively in a vertical plane. This is a very important type of motion of an aircraft. It is possible to show that, under perfect geometrical and dynamical symmetry conditions, the linearized equations of motion of any aircraft exhibit an exact longitudinal-lateral decoupling. The full explanation of the model equations falls outside the scope of this work and can be found in [6]. In our notation, V stands for the velocity modulus, m is the mass of the aircraft, T is the thrust force exerted by the engines, assumed to act along the main body axis, D is the magnitude of the total aerodynamic drag acting opposite to the velocity vector, L is the magnitude of the total aerodynamic lift force in a direction orthogonal to the velocity, and mg is the gravity force acting at the center of gravity (CG) and pointing downwards (flat Earth approximation). M is the total pitching moment around the axis perpendicular to the plane of movement, α is the angle of attack of the aircraft (the angle between the velocity and the x-axis), γ is the angle of climb (between the horizontal axis and the velocity), and θ = γ + α is the pitch angle of the aircraft. By convention, the pitch rate (θ̇ in the longitudinal motions) is denoted q.

Figure 1. Variables for the aircraft model.

Figure 2. Detailed system architecture: taileron (δ_T), canard (δ_C) and thrust commands pass through actuator and engine dynamics into the aircraft model, which is also driven by atmospheric disturbances; the measurements (q, Δα and others) are obtained through sensor dynamics corrupted by sensor noise.

The linearized equations of motion are:

(1.3)    m ΔV̇ = ΔT − mg cos(γ_e) Δγ − ΔD
         m V_e Δγ̇ = ΔL + T_e Δα + mg sin(γ_e) Δγ
         I q̇ = ΔM + z_ATP ΔT
         Δθ̇ = q

where I and z_ATP are some physical parameters. These are differential equations in incremental variables with respect to an admissible equilibrium condition (denoted with subscript e); for example, V = V_e + ΔV, γ = γ_e + Δγ, α = α_e + Δα. Around each flight equilibrium condition, the pilot can increase or decrease the thrust of the engines, that is, ΔT. This he (or she) does by adjusting the throttle lever δ_th. To correct the attitude of the aircraft, he deflects the stick, leading to changes in the deflection angles of the elevator δ_E and of the canards δ_C (if the airplane has canards). Thus, we assume here that the pilot controls ΔT, ΔL, ΔD and ΔM via δ_E, δ_C and δ_th through a linear mapping. An equivalent version of (1.3), written in compact vector notation, is

(1.4)    ẋ = Ax + Bu

with state vector x = [ΔV Δγ q Δθ] and inputs u = [δ_E δ_C δ_th]. The elevator and canard actuator dynamics are assumed to be so fast that no states are used to represent them; the situation is depicted in Fig. 2. Thus, neglecting the actuator dynamics means here that u_1 = δ_E and u_2 = δ_C. Naturally, we need to include external disturbances such as turbulence to make the model complete.
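
To pass from (1.3) to the compact form (1.4), the increments ΔD, ΔL, ΔM and ΔT have to be expressed as linear functions of the state [ΔV Δγ q Δθ] and of the inputs [δ_E δ_C δ_th], using Δα = Δθ − Δγ. The sketch below only illustrates this bookkeeping; all derivative names and numerical values are hypothetical placeholders and are not taken from [6].

    import numpy as np

    # Hypothetical trim values and stability/control derivatives (placeholders)
    m, g, Ve, Te, Iyy, z_ATP = 1.0e4, 9.81, 150.0, 2.0e4, 8.0e4, 0.5
    gamma_e = 0.0
    D_V, D_a = 40.0, 500.0                               # dD/dV, dD/dalpha
    L_V, L_a, L_dE, L_dC = 600.0, 9.0e4, 4.0e3, 2.0e3    # lift derivatives
    M_V, M_a, M_q, M_dE, M_dC = 0.0, -6.0e5, -4.0e4, -8.0e4, 3.0e4
    T_dth = 5.0e4                                        # thrust per unit throttle

    cg, sg = np.cos(gamma_e), np.sin(gamma_e)

    # State x = [dV, dgamma, q, dtheta], inputs u = [dE, dC, dth],
    # with dalpha = dtheta - dgamma substituted into (1.3).
    A = np.array([
        [-D_V/m,      (D_a - m*g*cg)/m,            0.0,     -D_a/m           ],  # from m dV'    = dT - mg cos(ge) dg - dD
        [L_V/(m*Ve),  (m*g*sg - L_a - Te)/(m*Ve),  0.0,     (L_a + Te)/(m*Ve)],  # from m Ve dg' = dL + Te da + mg sin(ge) dg
        [M_V/Iyy,     -M_a/Iyy,                    M_q/Iyy, M_a/Iyy          ],  # from I q'     = dM + z_ATP dT
        [0.0,         0.0,                         1.0,     0.0              ],  # dtheta' = q
    ])
    B = np.array([
        [0.0,          0.0,          T_dth/m        ],
        [L_dE/(m*Ve),  L_dC/(m*Ve),  0.0            ],
        [M_dE/Iyy,     M_dC/Iyy,     z_ATP*T_dth/Iyy],
        [0.0,          0.0,          0.0            ],
    ])

    print(np.linalg.eigvals(A))   # longitudinal modes of this hypothetical model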

1.2. The geometric approach

We use an example to introduce some of the problems we will study in this course.

Example 1.2 (Lateral control of a car). Figure 3 shows the so-called single-track dynamic model for car steering, where α_f is the front tire sideslip angle, r the yaw rate, δ_f the steering angle and v the velocity.

Figure 3. Single-track model for car steering.

Under the assumptions that the car has an ideal mass (m) distribution (i.e. the moment of inertia J = m l_r l_f), that the velocity of the car is constant, and that the sideslip angle is small, we can get a simplified model of the lateral motion as follows:

(1.5)    α̇_f = a_11 α_f + r + δ_f
         ψ̇ = r
         ṙ = a_21 α_f + a_22 r + b_21 δ_f + d(t),

where ψ is the orientation of the car, and the a and b coefficients depend on the physical parameters of the car. The disturbance d(t) could be, for example, side wind or roughness of the road surface. With this example, several interesting problems arise:

(1) Can we design a driver assistant that decouples the yaw motion from the lateral motion, so that the driver can focus on controlling the lateral motion?
(2) Can we design an automatic driver that rejects the effect of d(t) on the orientation?
(3) Can we estimate the unknown d(t) by measuring ψ and r? Such estimates are useful if we have, for example, a camera or a manipulator mounted on top of the car which also needs to be controlled.

All these problems, we hope, will find solutions in this course. Here are a few samples of the problems we will study.

Problem 1.1 (Disturbance decoupling). Consider the system

(1.6)    ẋ = Ax + Bu + Ew
         y = Cx,

where x ∈ R^n, u is the control signal and w is an external disturbance that cannot be measured. The question is whether there exists a state feedback u = Fx + v such that the output y is unaffected by the disturbance w. Suppose such a feedback exists. Then, by plugging in the control, we have

(1.7)    ẋ = (A + BF)x + Bv + Ew
         y = Cx.

The fact that the output y is unaffected by the disturbance w implies that the ith derivative y^(i)(t), for any i ≥ 1 and any t, does not depend on w. For the sake of simplicity, let us first assume v = 0. Since

    y^(1)(t) = C(A + BF)x(t) + CEw(t),

we must have CE = 0. Deductively, under the assumption C(A + BF)^(i−2)E = 0, i ≥ 2, we have

    y^(i)(t) = C(A + BF)^i x(t) + C(A + BF)^(i−1)Ew(t).

Thus, we must have

    C(A + BF)^(i−1)E = 0,   i ≥ 1.

By the Cayley-Hamilton theorem, it suffices to have this equality hold up to i = n. On the other hand, if we can find an F that satisfies the above equations, then the problem is solved. However, the equations are highly nonlinear in F and thus difficult to solve directly. In Chapter 3 we will use the idea of controlled invariance to reduce the problem to a linear one.
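
These conditions are easy to test numerically for a given candidate feedback F. The following sketch (the helper name, matrices and feedback gains are made up for illustration) checks whether C(A + BF)^k E = 0 for k = 0, ..., n − 1, which by the Cayley-Hamilton argument above is all that needs to be verified.

    import numpy as np

    def output_decoupled_from_w(A, B, C, E, F, tol=1e-9):
        """Check C (A + B F)^k E = 0 for k = 0, ..., n-1 (enough by Cayley-Hamilton)."""
        n = A.shape[0]
        Acl = A + B @ F
        P = C.astype(float)
        for _ in range(n):
            if np.max(np.abs(P @ E)) > tol:
                return False
            P = P @ Acl
        return True

    # Made-up example (illustration only)
    A = np.array([[1.0, 2.0, 3.0],
                  [0.0, 0.0, 1.0],
                  [0.0, -1.0, 0.0]])
    B = np.array([[1.0], [0.0], [0.0]])
    C = np.array([[1.0, 0.0, 0.0]])
    E = np.array([[0.0], [0.0], [1.0]])

    print(output_decoupled_from_w(A, B, C, E, F=np.zeros((1, 3))))            # False: open loop not decoupled
    print(output_decoupled_from_w(A, B, C, E, F=np.array([[0.0, -2.0, -3.0]])))  # True: this F decouples y from w

Searching over feedback gains in this way quickly becomes hopeless; the point of Chapter 3 is to characterize the existence of such an F geometrically instead.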

Problem 1.2 (Output regulation). Consider

(1.8)    ẋ = Ax + Bu + Ew
         e = Cx − Dw,

where w models both the disturbance to reject and the reference signal to track (we can consider y = Cx as the output; thus w_1 = Dw is the reference to track, while the rest of w is disturbance). Furthermore, we assume that w is generated by the following system:

(1.9)    ẇ = Γw.

A system that generates an input signal is sometimes called an exogenous system. The problem of output regulation is to find a feedback control law such that the closed-loop system is asymptotically stable when w is set to zero, and such that e(t) tends to zero as t tends to infinity. The difference with the disturbance decoupling problem is that here we only require the output to reject the disturbance in steady state, and we also have some knowledge about the disturbance. The discussion of this problem in Chapter 7 will lead to the famous internal model principle [7]. Surprisingly, we will show that this problem is generically solvable, while the disturbance decoupling problem is not. Not surprisingly, the idea of controlled invariance also plays a central role here.

Problem 1.3 (Controllability under constraints). Consider the system

(1.10)   ẋ = Ax + Bu
         y = Cx.

Let K be a subspace of R^n. The question is which subset R ⊆ K can be reached in finite time t_1 from any initial point in K with controls of the form u = Fx + Gv, if we require that the trajectory {x(t) : t ∈ [0, t_1]} remain in K. This problem is quite relevant in many applications, for example in path planning for mobile systems.

In this set of lecture notes, we will also discuss some system concepts for nonlinear systems. We give an example here to illustrate why one sometimes has to use nonlinear tools.

Example 1.3 (Car steering). In this example, we revisit the car steering problem, but now the focus is on the longitudinal control. For this purpose, the steering system of a car can be modeled as

Figure 4. The geometry of the car-like robot, with position (x, y), orientation θ and steering angle φ.

(1.11)   ẋ = v cos(θ)
         ẏ = v sin(θ)
         θ̇ = (v/L) tan(φ),

where x and y are the Cartesian coordinates of the middle point of the rear axle, θ is the orientation angle, v is the longitudinal velocity measured at that point, L is the distance between the two axles, and φ is the steering angle. In this case v and φ are the two controls. Let us reduce the complexity by defining u_1 = v and u_2 = (v/L) tan(φ); then

(1.12)   ẋ = cos(θ)u_1
         ẏ = sin(θ)u_1
         θ̇ = u_2.

This is sometimes called the unicycle model. If we linearize (1.12) around a point (x_0, y_0, θ_0), we have

(1.13)   ẋ = cos(θ_0)u_1
         ẏ = sin(θ_0)u_1
         θ̇ = u_2,

which is obviously not controllable. However, using geometric tools we will show in Chapter 8 that the nonlinear system (1.12) is controllable (which is what you, as a driver, would expect, right?).
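
The failure of controllability in (1.13) can also be confirmed by a direct rank computation. The short sketch below (the value of θ_0 is arbitrary) builds the controllability matrix of the linearization and shows that its rank is 2 < 3: the linearized model cannot generate motion in the sideways direction (−sin θ_0, cos θ_0, 0), whereas, as stated above, the nonlinear model (1.12) is controllable.

    import numpy as np

    theta0 = 0.7    # any fixed linearization point (arbitrary value)

    # Linearization (1.13): A = 0, and B depends on theta0 only
    A = np.zeros((3, 3))
    B = np.array([[np.cos(theta0), 0.0],
                  [np.sin(theta0), 0.0],
                  [0.0,            1.0]])

    # Kalman controllability matrix [B, AB, A^2 B]
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(3)])
    print(np.linalg.matrix_rank(ctrb))   # 2 < 3: no sideways motion in the linearization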

The notes are organized as follows. In Chapter 2, invariant and controlled invariant subspaces will be discussed; in Chapter 3, the disturbance decoupling problem will be introduced; in Chapter 4, we will introduce transmission zeros and their geometric interpretations; in Chapter 5, noninteracting control and tracking will be studied as applications of the zero dynamics normal form; in Chapter 6, we will discuss some input-output behaviors from a geometric point of view; in Chapter 7, we will discuss the output regulator problem in some detail; in Chapter 8, we will extend some of the central concepts of geometric control to nonlinear systems. Finally, in Chapter 9, some robotic applications will be discussed.