Visual Servoing for a Quadrotor UAV in Target Tracking Applications. Marinela Georgieva Popova


Visual Servoing for a Quadrotor UAV in Target Tracking Applications

by

Marinela Georgieva Popova

A thesis submitted in conformity with the requirements for the degree of Master of Applied Science, Graduate Department of Aerospace Engineering, University of Toronto

© Copyright 2015 by Marinela Georgieva Popova

Abstract

Visual Servoing for a Quadrotor UAV in Target Tracking Applications
Marinela Georgieva Popova
Master of Applied Science
Graduate Department of Aerospace Engineering, University of Toronto, 2015

This research study investigates the design and implementation of position-based and image-based visual servoing techniques for controlling the motion of quadrotor unmanned aerial vehicles (UAVs). The primary applications considered are tracking stationary and moving targets. A novel position-based tracking law is developed and integrated with an inner-loop proportional-integral-derivative (PID) control algorithm. A theoretical proof of the stability of the proposed method is provided, and numerical simulations are performed to validate the performance of the closed-loop system. A classical image-based visual servoing technique is also implemented, and a modification of the classical method is suggested to reduce the undesirable effects caused by the underactuated quadrotor dynamics. Finally, the case when the quadrotor loses sight of the target is investigated and several solutions are proposed to help maintain the view of the target.

Acknowledgements

First and foremost, I would like to express my gratitude to my supervisor, Professor Hugh H.T. Liu, for the continuous support of my Master's project over the past two years. His guidance, patience, and encouragement have been a tremendous help in the successful completion of this research study. My sincere thanks go to my Research Assessment Committee members, Professor Peter Grant and Professor Christopher Damaren, for their insightful comments and valuable feedback. I am further grateful to all members of the FSC lab for the interesting discussions and for their useful suggestions during our weekly group meetings. I would also like to thank my family and my dear Evgeni Dimitrov for their constant love and support.

Contents

1 Introduction
  1.1 Literature Review
    1.1.1 Visual Servoing
    1.1.2 Visual Servoing and UAVs
    1.1.3 Quadrotor Control
  1.2 Problem Statement and Approach
  1.3 Thesis Contributions and Outline
2 Quadrotor Model and Control
  2.1 Quadrotor Dynamics
  2.2 Approximation Model
    2.2.1 Equations of Motion
    2.2.2 Model Validation
3 Stability Analysis of Approximation Model
  3.1 Tracking Control and Simulink Model
  3.2 Induced DEs
  3.3 Stability of Height Control
  3.4 Cascade Systems
4 Position Based Tracking Law
  4.1 PBVS Navigation
    4.1.1 Estimation of GMT's Pose
    4.1.2 Control Law
  4.2 Simulations and Results
    4.2.1 Stationary Target
    4.2.2 Constant Velocity
    4.2.3 Constant Acceleration
    4.2.4 Circular Motion
  4.3 Summary
5 Image-Based Visual Servoing
  5.1 Introduction
  5.2 Camera Model and Image Plane Dynamics
  5.3 Classical IBVS Control Design for the Quadrotor
    5.3.1 Control Law in Image Space
    5.3.2 Moving Target
  5.4 IBVS with Virtual Camera
    5.4.1 Classical IBVS with Virtual Camera
    5.4.2 IBVS with GMT Velocity Estimation
  5.5 Simulations and Results
    5.5.1 Stationary Target
    5.5.2 Moving Target
  5.6 Summary
6 Field of View Challenges
  6.1 Introduction
  6.2 Keeping Target in FOV
    6.2.1 Managing Attitude of UAV
    6.2.2 Increasing FOV of UAV
  6.3 Target Leaving FOV
    6.3.1 Dead Reckoning
    6.3.2 Simulations
  6.4 Summary
7 Conclusions
A Appendix
  A.1 Stability of Yaw Control
  A.2 Stability of Tracking Control
  A.3 Stability of System
Bibliography

List of Tables

4.1 PID controller gains

List of Figures

2.1 Configuration of the quadrotor UAV [5]
2.2 Response of full and approximation models (Position)
2.3 Response of full and approximation models (Angle)
2.4 Response of full and approximation models (Velocity)
2.5 Response of full and approximation models (Angle rate)
3.1 Quadrotor Tracking Control Structure
4.1 Camera Perspective Projection Model
4.2 Horizontal position of the four models (Stationary target)
4.3 Height and yaw of the four models (Stationary target)
4.4 Comparison of thrust
4.5 Horizontal position of the four models (Constant velocity)
4.6 Height and yaw of the four models (Constant velocity)
4.7 Horizontal position of the four models (Constant acceleration)
4.8 Height and yaw of the four models (Constant acceleration)
4.9 Horizontal position of the four models (Circular motion)
4.10 Height and yaw of the four models (Circular motion)
4.11 Bird's eye view of horizontal position
5.1 Desired Target View in the Image Plane
5.2 Initial Target View in the Image Plane
5.3 IBVS Block Diagram
5.4 Initial and desired images (Stationary target)
5.5 Horizontal position of the models (Stationary target)
5.6 Height and yaw of the models (Stationary target)
5.7 Pitch and roll of the models (Stationary target)
5.8 Error for the models (Stationary target)
5.9 Trajectory of the target for the models (Stationary target)
5.10 Initial and desired images (Moving target)
5.11 Horizontal position of the models (Moving target)
5.12 Height and yaw of the models (Moving target)
5.13 Pitch and roll of the models (Moving target)
6.1 Saturation Function
6.2 Position and velocity for models (Signal saturation)
6.3 Trajectory of the target for the models (Signal saturation)
6.4 Pitch for two models (Signal saturation)
6.5 X position and camera position for models (Changing height algorithm)
6.6 Height for models (Changing height algorithm)
6.7 Y position and velocity for models (Changing height algorithm)
6.8 Camera Y position for models (Dead reckoning)
6.9 Horizontal position for models (Dead reckoning and changing height algorithm)
6.10 Camera positions for models (Dead reckoning and changing height algorithm)
6.11 Height for models (Dead reckoning and changing height algorithm)

Chapter 1
Introduction

In the past several years there has been a significant increase in interest in unmanned aerial systems (UASs) due to their high potential for military and civil applications. In a large variety of situations, unmanned air vehicles (UAVs) offer numerous advantages over manned aircraft, especially when it comes to safety and cost-efficiency. In dangerous military operations like enemy surveillance, battlefield exploration or devastated-territory monitoring, the use of UAVs virtually negates the risk to human pilots. In addition, the usually small size of UAVs makes them economically advantageous, as it greatly reduces their production, maintenance, operation and fuel costs [3]. Successful applications of UAVs include surveillance, reconnaissance, battlefield assessment, target designation and monitoring, search and rescue, traffic monitoring and pipeline inspection [3], [35], [11] and [30]. The surge in practical applications for UAVs has created a strong impetus for researchers to develop new and better control methods. One specific area, which has experienced significant development in recent years, is vision-based control for quadrotor UAVs. Quadrotors are of particular interest in practice because they are excellent candidates for performing target tracking tasks. Specifically, their high maneuverability and hovering capability allow them to navigate through dynamic environments and respond more adequately to evasive target motion. In addition, they are capable of flying at lower altitudes, and their small size makes them difficult to detect, increasing their potential for surveillance missions. From a theoretical point of view, developing target tracking control strategies for quadrotors has been challenging due to the complex nature of their equations of motion and the underactuated properties of the system. The work presented in this thesis aims to address some of these difficulties by developing a control law that allows a quadrotor UAV to successfully track a ground moving target (GMT) through visual aid.

1.1 Literature Review

1.1.1 Visual Servoing

Visual servoing is a technique for controlling the motion of an object so that it reaches a desired position and orientation based on visual information from the environment. The idea of using mobile cameras to control the motion of a vehicle was first introduced in the 1980s [18]. The implementation of vision sensors continues to be an active research topic in aerospace and robotics due to its impact on increasing the versatility and application domain of both robots and aircraft. Visual servoing systems typically use two types of camera configuration: eye-in-hand, and fixed in the workspace. In the first configuration the camera is mounted on the controlled vehicle and the relationship between the pose of the camera and the pose of the vehicle is usually constant. In the second configuration, the camera is fixed in the workspace and the image is independent of the vehicle motion. In this thesis the focus lies entirely on the eye-in-hand configuration. The main mechanism of the visual servoing scheme is as follows: the camera records images from the environment and, based on certain image feature points from the observed target, information can be derived about the current position of the vehicle in relation to the desired position. This difference in image features between the current and desired location is then used to generate a velocity command that forces the vehicle to adjust its position and orientation to achieve the desired state. Visual servoing methods can be classified into two main approaches, depending on whether the visual measurements are used directly in the control law. In the position-based visual servoing (PBVS) approach, the image features are an intermediate step in the control law design and are used to reconstruct the 3-D Cartesian coordinates of the observed object. The reconstruction can be performed using a single or multiple vision sensors and often requires some knowledge of the geometry of the observed object. The advantage of PBVS is that this technique separates the control problem (computation of the feedback signal) from the pose estimation problem. However, since the reconstruction relies on the camera intrinsic parameters, it is more susceptible to camera calibration errors and can lead to loss of accuracy. In image-based visual servoing (IBVS), the controller design is based directly on the visual measurements obtained from the camera. It does not require estimation of the 3-D geometry of the observed object and is computationally efficient. However, the use of visual measurements complicates the controller design, since it leads to a highly nonlinear and coupled system.
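The velocity-command step described above has a standard closed form in the classical IBVS literature: the camera velocity is obtained from the image-feature error through the pseudo-inverse of an interaction matrix. The following is a minimal sketch of that generic law for point features (the gain, feature values and depth estimates are illustrative; this is background material, not the controller developed in this thesis):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classical IBVS law: camera twist v = -gain * pinv(L) @ (s - s_desired)."""
    error = (features - desired).reshape(-1)              # stacked feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -gain * np.linalg.pinv(L) @ error               # [vx, vy, vz, wx, wy, wz]

# Example: four point features observed slightly away from their desired locations.
s = np.array([[0.12, 0.10], [-0.08, 0.11], [-0.09, -0.12], [0.10, -0.09]])
s_star = np.array([[0.1, 0.1], [-0.1, 0.1], [-0.1, -0.1], [0.1, -0.1]])
print(ibvs_velocity(s, s_star, depths=[2.0] * 4))
```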

Each of the two major visual servoing schemes has certain advantages and disadvantages, and the choice of which one to use depends on the application. For example, PBVS has been found more appropriate in systems dealing with moving objects, because the motion is easier to express in the Cartesian frame [18]. Other researchers have proposed different approaches for visual servoing that use some characteristics of both IBVS and PBVS. One such method is the 2-1/2-D visual servoing developed in [24], which expressed the input partly in 3-D Cartesian space and partly in 2-D image space. In this technique the rotational and translational motions were decoupled: the rotational information was based on pose estimation using epipolar geometry and homography [9], while the translational information was obtained directly from image features. This method led to some improvements in stability and convergence compared to PBVS and IBVS and prevents image Jacobian singularities. However, the method had some drawbacks, such as 1) the requirement to consider 8 image points from the observed object (to construct the homography matrix) and 2) sensitivity to image noise. Another recently developed hybrid approach is the partitioned visual servoing in [10], in which the rotational and translational motions about the Z-axis were decoupled and controlled separately from the motions about the X and Y axes. The technique has most often been implemented for applications involving large rotations about the Z-axis, which cannot be completed by classical IBVS schemes. The partitioned visual servoing solved a problem known as the Chaumette conundrum. The same work also proposed a method to help maintain the image features of the observed object inside the field of view of the camera by incorporating a repulsive potential function in the control law design. Another variation of the IBVS method, based on nonlinear predictive control and suggested in [14] in 2010, formulated the IBVS scheme as an optimization problem which also took into account visibility constraints and workspace constraints. Finally, Reference [13] studied the topic of switching control for visual servoing, which could be implemented to choose the most appropriate among several lower-level visual servoing controllers. The visual servoing controllers described above have been designed for fully actuated six-degree-of-freedom systems. In this thesis, we propose both IBVS and PBVS schemes that are specifically designed for underactuated quadrotor UAVs. The next section outlines the recent developments in visual servoing in relation to UAVs and discusses how we extend the existing approaches.

1.1.2 Visual Servoing and UAVs

The advantages of using vision sensors have attracted significant interest in the aerospace sector. In particular, the resurgent interest in unmanned aerial vehicles during the last decade has led to their improved performance and newly acquired capabilities, making them suitable for a wide range of applications. UAVs primarily depend on the Global Positioning System, which can cause problems in some environments where satellite signals may be interrupted or unreliable. This has motivated researchers to investigate the use of vision sensors as an alternative for obtaining UAV position coordinates. Some of the visual servoing schemes described in Section 1.1.1 have been adopted for different UAV control applications. In fact, since vision sensors are readily available on many UAV platforms, the collected visual information can easily be incorporated into the control loop in place of, or in addition to, IMU or GPS measurements. Reference [15], for example, proposed an image-based control strategy for visual servoing that can be applied for the stabilization of an autonomous helicopter over a marked landing pad. Similar applications were considered in [29], where a real-time vision-based landing algorithm was developed for an autonomous helicopter, and in [12], where vision-based control was implemented for autonomous road following. In this thesis visual controllers are implemented for quadrotor UAVs. The design of visual servoing for quadrotor UAVs has been a challenging task due to the underactuated properties of the quadrotor system. The implementation of visual servoing typically consists of two control loops: the outer loop (vision-based loop) creates a command for the desired translational and rotational velocity components based on visual measurements, and the inner loop forces the quadrotor to track the desired references. Since the system is underactuated, the horizontal velocity components are coupled with the roll and pitch angles of the vehicle. This dependence makes standard visual servo controllers (which assume that translational velocities are controlled separately from the angular velocities) difficult to apply. The most recent developments in visual servoing control for quadrotors have been discussed in [7], [22], and [26]. Reference [7] compared a hybrid visual servoing scheme to a classical IBVS scheme but did not address the negative effects of the underactuated property, such as misinterpretation of the image error due to tilting and loss of field of view. Reference [22] suggested several modifications of the classical visual servoing scheme that overcame the underactuated property, such as the introduction of a virtual camera frame and adaptive gains for the visual servoing controller. However, this reference did not provide a comparison of the results with the classical approach and considered only the case when the observed target was static. Another solution to counteract the negative effects of the underactuated property was given in [26], where the authors proposed a new approach based on positive image-feature feedback with a virtual spring to control the horizontal motion of the quadrotor.

In contrast to the existing reports, in this thesis we consider the case when the quadrotor has to track a moving target. When implementing the IBVS method we extend the ideas in [22] to accommodate target motion, with the assumption that the quadrotor has to track only the position of the target but not its orientation. We compare the moving-target results with the classical IBVS approach and propose a method to restore the field of view of the target when it is lost. Since the application of visual servoing controllers requires successful inner-loop velocity control of the quadrotor, we discuss some of the commonly used quadrotor control strategies in the next section and justify our choice of PID control.

1.1.3 Quadrotor Control

Different control techniques for quadrotor stabilization have been studied extensively in a variety of applications, but more rarely in combination with visual servoing. One method to achieve stable flight is PID control, which has been implemented in [23]. In that work the authors demonstrated through simulations and experiments that PID can successfully and robustly regulate the quadrotor pose (position and orientation). Other control strategies for the quadrotor are based on nonlinear sliding mode control [22] and adaptive backstepping [21], [25]. The advantage of sliding mode control is that it is robust to internal and external uncertainties. However, an undesirable effect associated with this method is the characteristic chattering behavior. Reference [21] showed that a backstepping algorithm may be successfully adopted in combination with IBVS control for a quadrotor. Still other researchers proposed control methods such as feedback linearization [34] and switching mode predictive control [1]. The latter has been designed to ensure accurate navigation in harsh environmental conditions where the quadrotor is subject to forcible wind disturbances. In this thesis, we propose a PID algorithm to control the translational velocity components of the quadrotor and the yaw angle. The PID algorithm is chosen since maintaining the field of view (FOV) is a key aspect of vision-based navigation and we are interested in keeping the quadrotor from performing aggressive maneuvers requiring drastic changes in the pitch and roll angles. Linearized controllers such as the PID controller provide good performance around hover conditions and allow us to put constraints on the allowed values of the pitch and roll angles.
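Since the inner loop adopted here is built from PID blocks, a minimal discrete-time PID update of the kind referred to above is sketched below (the gains, time step and the climb-rate example are placeholders, not the tuned values used later in the thesis):

```python
class PID:
    """Minimal discrete-time PID controller acting on an error signal."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: regulate vertical speed toward a commanded value.
pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=0.01)
thrust_correction = pid.update(error=0.3)   # desired minus measured climb rate
```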

1.2 Problem Statement and Approach

We now turn to a careful description of the problem we are trying to solve and of our methodology. As mentioned earlier, we are interested in developing a control algorithm for a quadrotor UAV for vision-aided target tracking. In this thesis, we assume that the target is located on the ground and never changes its altitude, without any restrictions on its horizontal movement. Information about the target's position is obtained from an on-board camera whose orientation with respect to the quadrotor is fixed (physically, this means that the camera is mounted at the bottom of the quadrotor and cannot move). In addition, we assume that we can measure the pose of the quadrotor (its position, orientation, and their first derivatives). Finally, we assume we are given some desired height and yaw angle (both constant), which we would like the quadrotor to reach. The goal then is to define a control algorithm which forces the quadrotor's horizontal position to converge to that of the target, its altitude to the desired height, and its yaw to the desired value. In order to solve this problem we propose a novel closed-loop, nested PID control algorithm. The structure of our controller is reminiscent of the one considered in [36]; however, there are several crucial differences.

1. We greatly reduce the number of PID controllers compared to [36], which makes the analysis of the algorithm more straightforward and the choice of parameters more manageable.

2. We include several transformations of the outputs of our PID controllers, which lead in the end to significantly different values for the control inputs in our system.

3. We introduce a non-linear error function h into the algorithm, which aims to remove the need for the quadrotor to make aggressive maneuvers.

Typically in the literature, PID controllers are combined with linear error functions. One concern with using linear error functions is that when the error is large, the control algorithm produces an input requiring aggressive action to compensate the error, which may lead to instability. This issue has typically been handled by picking a small coefficient to rescale the error, but this slows down convergence. Our approach aims to bypass this problem by replacing the error function altogether with one which is better behaved for large error values but still ensures fast convergence. To our knowledge, this thesis provides the first example of a non-linear error function used in conjunction with PID controllers.

In order to validate our control algorithm, we perform a stability analysis of the system. The complex nature of the equations of motion for the quadrotor makes the full system difficult to analyze, so instead we consider a linearization of the system around the stable point near hovering. We demonstrate that the full system behaves very similarly to its linear approximation, and prove that with the control inputs we develop one can obtain global asymptotic stability for the approximation model. In our stability analysis we use ideas from the theory of cascade control and the Lyapunov method. We remark that cascade control seems to be a very good framework for proving stability for quadrotor systems, and to our knowledge this is the first time it has been applied to UAV models. We also mention that we believe our stability proof can be generalized to incorporate the full system, or systems similar to it, although currently this appears to be out of reach. The stability analysis demonstrates that our choice of control inputs leads a system closely resembling ours to the desired behavior. We further support this choice by conducting a wide variety of simulations, which indicate that even the full system performs the task of target tracking very well under our control method. Specifically, we implement our control method for both a PBVS and an IBVS tracking algorithm and show that their performance is very good. Finally, we investigate the cases when the target leaves the FOV. Since the information about the target is obtained from visual data, it is important to have ways to handle situations when this data is lost. Losing vision of the target can be attributed to various causes, from camera malfunction to aggressive target motion. We consider different causes for losing the FOV and propose different solutions for restoring vision once it is lost.

1.3 Thesis Contributions and Outline

The literature review provided in Section 1.1 describes the recent developments in the areas of visual servoing, quadrotor UAV control methods, and target tracking. In summary, this thesis investigates the intersection of these fields and addresses the question of developing reliable visual servoing target tracking algorithms for an underactuated quadrotor system. The main contributions of this thesis include:

- development of a novel closed-loop PID control algorithm and verification of its validity;

- an extension of the IBVS approach to take into account both the underactuated dynamics and the target motion;

- development of methods for keeping the target in the FOV and restoring vision once it is lost.

This thesis consists of seven chapters and one appendix. Chapter 1 is an introduction describing the problem this project attempts to solve and summarizing previous research done in related fields. Chapter 2 presents the nonlinear equations of motion for the quadrotor and a simplified model of the quadrotor dynamics based on a small-angle approximation. In Chapter 3 we introduce a new control strategy for tracking moving targets and provide a closed-loop stability analysis of the proposed quadrotor control method and tracking law. Chapter 4 is a discussion of the camera model and of the position-based visual servoing scheme, which is constructed using the tracking law outlined in the previous chapter. Chapter 4 also includes numerical simulations illustrating the performance of the control law in tracking both stationary and moving targets. A classical image-based visual servoing technique applied to the quadrotor UAV is presented in Chapter 5. This chapter also describes a modification of the classical approach based on a virtual camera model that aims to resolve some problems associated with the underactuated property of the quadrotor. The performance of the suggested IBVS methods is compared through numerical simulations in the cases of tracking static and moving targets. Chapter 6 addresses the issue of the target leaving the field of view (FOV) of the camera. Several possible solutions are provided, depending on the factors causing the FOV loss. Chapter 7 provides a summary of the research project and proposes future research directions. Finally, in Appendix A we supply the proofs of some of the statements in Chapter 3.

Chapter 2
Quadrotor Model and Control

In this chapter we write down the drag-free equations of motion for a quadrotor. Subsequently, we linearize those equations around the stable point near hovering. This produces a new approximation model, which is demonstrated to behave similarly to the full model through numerical simulations.

2.1 Quadrotor Dynamics

In this section the drag-free equations of motion for a quadrotor are developed in the body-fixed frame and in the inertial frame (see Figure 2.1). The main sources we use are [5] and [6]. The origin of the body-fixed frame coincides with the center of mass of the vehicle, and the orientation of the coordinate axes is shown in Figure 2.1. The x and y axes are chosen in the planes of vertical symmetry, while the z axis is directed upwards. Let $u, v, w$ denote the components of the quadrotor velocity $\mathbf{v}$ in the body frame and $p, q, r$ the components of the angular velocity vector $\boldsymbol{\omega}$. To develop the translational equations of motion, we use Newton's second law:

$$\mathbf{f} = m\mathbf{a} = m\dot{\mathbf{v}} + m\,\boldsymbol{\omega} \times \mathbf{v}.$$

If the unit vectors of the body-fixed frame are represented by $\{x_B, y_B, z_B\}$ and those of the inertial frame by $\{x_I, y_I, z_I\}$, the force vector becomes

$$\mathbf{f} = T z_B - m g\, z_I,$$

where $T$ is the total thrust of the four motors of the quadrotor. To express $\mathbf{f}$ in the body frame, we use the rotation matrix between the inertial and the body frame, given by:

$$C_{BI} = \begin{bmatrix} \cos\psi\cos\theta & \sin\psi\cos\theta & -\sin\theta \\ \cos\psi\sin\theta\sin\phi - \sin\psi\cos\phi & \sin\psi\sin\theta\sin\phi + \cos\psi\cos\phi & \cos\theta\sin\phi \\ \cos\psi\sin\theta\cos\phi + \sin\psi\sin\phi & \sin\psi\sin\theta\cos\phi - \cos\psi\sin\phi & \cos\theta\cos\phi \end{bmatrix},$$

where $\psi, \theta, \phi$ represent the Euler angles shown in Figure 2.1.

Figure 2.1: Configuration of the quadrotor UAV [5]

The translational equations of motion are therefore:

$$\begin{bmatrix} 0 \\ 0 \\ T \end{bmatrix} + C_{BI}\begin{bmatrix} 0 \\ 0 \\ -mg \end{bmatrix} = m\begin{bmatrix} \dot{u} \\ \dot{v} \\ \dot{w} \end{bmatrix} + m\begin{bmatrix} qw - rv \\ ru - pw \\ pv - qu \end{bmatrix}.$$

After rearranging, we obtain:

$$\begin{aligned} \dot{u} &= g\sin\theta + vr - qw \\ \dot{v} &= -g\cos\theta\sin\phi + pw - ur \\ \dot{w} &= T/m - g\cos\theta\cos\phi + uq - pv. \end{aligned}$$

The rotational (moment) equations in the body-fixed frame are derived using Euler's rotation equation, given by:

$$I\dot{\boldsymbol{\omega}} + \boldsymbol{\omega} \times (I\boldsymbol{\omega}) = \mathbf{M}.$$
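For numerical checks of the algebra that follows, $C_{BI}$ can be transcribed directly; the sketch below is ours (function and variable names are illustrative assumptions):

```python
import numpy as np

def C_BI(phi, theta, psi):
    """Rotation matrix from the inertial frame to the body frame (Z-Y-X Euler angles)."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [cps * cth,                   sps * cth,                   -sth],
        [cps * sth * sph - sps * cph, sps * sth * sph + cps * cph, cth * sph],
        [cps * sth * cph + sps * sph, sps * sth * cph - cps * sph, cth * cph],
    ])

# Sanity check: the matrix is orthogonal, so C_BI^T C_BI = I.
R = C_BI(0.1, -0.05, 0.3)
assert np.allclose(R.T @ R, np.eye(3))
```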

Expanding the result above leads to the following set of equations:

$$\begin{aligned} \dot{p} &= \left(M_x - (I_{zz} - I_{yy})\,qr\right)/I_{xx} \\ \dot{q} &= \left(M_y - (I_{xx} - I_{zz})\,pr\right)/I_{yy} \\ \dot{r} &= M_z/I_{zz}. \end{aligned} \qquad (2.1)$$

In the moment equations, $I_{xx}, I_{yy}, I_{zz}$ represent the quadrotor's moments of inertia, while $M_x$, $M_y$, and $M_z$ are the moments about the corresponding axes generated by thrust differences between opposing motors. The total thrust $T$ and the moments $M_x$, $M_y$, and $M_z$ are obtained from the thrust forces $F_i$, $i = 1, 2, 3, 4$, generated by each of the four motors through the following relation:

$$\begin{bmatrix} T \\ M_x \\ M_y \\ M_z \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & l & 0 & -l \\ -l & 0 & l & 0 \\ \mu & -\mu & \mu & -\mu \end{bmatrix} \begin{bmatrix} F_1 \\ F_2 \\ F_3 \\ F_4 \end{bmatrix},$$

where $l$ is the distance between a motor and the center of the quadrotor and $\mu$ is a torque coefficient.

We next derive the equations in the inertial frame. The state vector is given by $\mathbf{r} = [x, y, z, \dot{x}, \dot{y}, \dot{z}, \phi, \theta, \psi, \dot{\phi}, \dot{\theta}, \dot{\psi}]^T$. From Newton's second law we have

$$m\begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \ddot{z} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ -mg \end{bmatrix} + C_{BI}^{-1}\begin{bmatrix} 0 \\ 0 \\ T \end{bmatrix},$$

from which we get the equations

$$\begin{aligned} \ddot{x} &= \frac{T}{m}\left(\sin\phi\sin\psi + \cos\phi\cos\psi\sin\theta\right) \\ \ddot{y} &= \frac{T}{m}\left(\cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi\right) \\ \ddot{z} &= \frac{T}{m}\cos\theta\cos\phi - g. \end{aligned} \qquad (2.2)$$
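The thrust/moment relation can be inverted to recover the individual motor thrusts from a commanded $T$, $M_x$, $M_y$, $M_z$. The sketch below assumes one particular motor numbering and spin direction, so the sign pattern is an illustrative assumption rather than the exact configuration of Figure 2.1; the arm length and torque coefficient are also placeholder values:

```python
import numpy as np

l, mu = 0.2, 0.016   # arm length [m] and torque coefficient (illustrative values)

# Mixer: [T, Mx, My, Mz]^T = M @ [F1, F2, F3, F4]^T (signs assume a plus configuration).
M = np.array([
    [1.0,  1.0,  1.0,  1.0],
    [0.0,  l,    0.0, -l  ],
    [-l,   0.0,  l,    0.0],
    [mu,  -mu,   mu,  -mu ],
])

def motor_thrusts(T, Mx, My, Mz):
    """Solve the mixer for the four motor thrusts."""
    return np.linalg.solve(M, np.array([T, Mx, My, Mz]))

print(motor_thrusts(T=7.95, Mx=0.0, My=0.0, Mz=0.0))   # hover: each motor carries T/4
```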

The relation between the angular rates $p, q, r$ and the rates of change of the Euler angles can be expressed as (see [5]):

$$\begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix} = C_2\begin{bmatrix} p \\ q \\ r \end{bmatrix}, \qquad \text{where} \qquad C_2 = \begin{bmatrix} 1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi\sec\theta & \cos\phi\sec\theta \end{bmatrix}.$$

The above expression can be differentiated and equation (2.1) used to represent $\ddot{\theta}, \ddot{\phi}, \ddot{\psi}$ in terms of $T$, $M_x$, $M_y$, $M_z$ and other relevant constants ($m$, $l$, etc.). However, such a representation is complicated, and a simplified one will be proposed in the next subsection, where we linearize the above system around its stable point near the hover position. Finally, we remark that one has the following relationship between body-fixed frame and inertial frame velocities. Let $\dot{\mathbf{x}} = [\dot{x}, \dot{y}, \dot{z}]^T$, $\dot{\boldsymbol{\theta}} = [\dot{\phi}, \dot{\theta}, \dot{\psi}]^T$, and let $\mathbf{v}$ and $\boldsymbol{\omega}$ denote the velocity vector and the angular velocity vector in the body-fixed frame. Then the relationship is given explicitly as:

$$\begin{bmatrix} \dot{\mathbf{x}} \\ \dot{\boldsymbol{\theta}} \end{bmatrix} = \begin{bmatrix} C_{BI}^{-1} & 0 \\ 0 & C_2 \end{bmatrix}\begin{bmatrix} \mathbf{v} \\ \boldsymbol{\omega} \end{bmatrix}. \qquad (2.3)$$

2.2 Approximation Model

In the previous section we presented the equations of motion for the quadrotor, which we now linearize around the stable point near hovering. This corresponds to taking the angles $\phi$ and $\theta$ to be small and $T$ close to $mg$. It will be convenient for us to change notation slightly and denote by $\mathbf{r}_A = [x_A, y_A, z_A, \dot{x}_A, \dot{y}_A, \dot{z}_A, \phi_A, \theta_A, \psi_A, \dot{\phi}_A, \dot{\theta}_A, \dot{\psi}_A]^T$ the pose of the quadrotor in the inertial frame (here the subscript A stands for "aircraft").

2.2.1 Equations of Motion

If the quadrotor is moving at constant velocity, the pitch and roll angles are both zero and the thrust is equal to the quadrotor's weight. Consequently, if we assume that the quadrotor does not perform too aggressive maneuvers, the pitch and roll angles will both be very small. This allows us to use the small angle approximations $\sin\alpha \approx \alpha$ and $\cos\alpha \approx 1$. We remark that these approximations are very good whenever $\alpha$ is less than ten degrees; simulations performed in later chapters indicate that for a large variety of cases the pitch and roll remain within such a range. Substituting $\sin\theta_A$ with $\theta_A$, $\sin\phi_A$ with $\phi_A$, and $\cos\theta_A$ and $\cos\phi_A$ with 1 in equation (2.2), we get

$$\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix} = g\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix}, \qquad \begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix} = \frac{1}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix}. \qquad (2.4)$$

Next, the small angle approximation allows one to replace $C_2$ with the identity matrix, so that the Euler angle rates and accelerations $\dot{\phi}_A, \dot{\theta}_A, \dot{\psi}_A, \ddot{\phi}_A, \ddot{\theta}_A, \ddot{\psi}_A$ and the body angular rates and accelerations $p_A, q_A, r_A, \dot{p}_A, \dot{q}_A, \dot{r}_A$ become approximately the same. This, together with equation (2.1), allows one to write

$$\begin{aligned} \ddot{\phi}_A &= \left(M_x - (I_{zz} - I_{yy})\,\dot{\theta}_A\dot{\psi}_A\right)/I_{xx} \\ \ddot{\theta}_A &= \left(M_y - (I_{xx} - I_{zz})\,\dot{\phi}_A\dot{\psi}_A\right)/I_{yy} \\ \ddot{\psi}_A &= M_z/I_{zz}. \end{aligned} \qquad (2.5)$$

The next task is to designate our control inputs. Quadrotors are underactuated systems, in which four inputs are used to control motion in six degrees of freedom. In the literature several different control inputs have been chosen, but typically they are linear transformations of the four motor thrusts. For example, in [6], [32] the torques and total thrust were controlled, while in [17] the control inputs were the thrusts themselves. In our case we choose $U_1 = T$, $U_2 = M_y$, $U_3 = M_x$, $U_4 = M_z$. It should be noted that, in view of equations (2.5), by choosing to control $M_x$, $M_y$ and $M_z$ we can assign desired values for $\ddot{\theta}_A$, $\ddot{\phi}_A$ and $\ddot{\psi}_A$. With the specified control inputs the equation-of-motion system is now given by:

$$\frac{d}{dt}\begin{bmatrix} x_A \\ y_A \\ z_A \\ \dot{x}_A \\ \dot{y}_A \\ \dot{z}_A \\ \phi_A \\ \theta_A \\ \psi_A \\ \dot{\phi}_A \\ \dot{\theta}_A \\ \dot{\psi}_A \end{bmatrix} = \begin{bmatrix} \dot{x}_A \\ \dot{y}_A \\ \dot{z}_A \\ g\cos\psi_A\,\theta_A + g\sin\psi_A\,\phi_A \\ g\sin\psi_A\,\theta_A - g\cos\psi_A\,\phi_A \\ \frac{U_1}{m}\cos\theta_A\cos\phi_A - g \\ \dot{\phi}_A \\ \dot{\theta}_A \\ \dot{\psi}_A \\ U_3' \\ U_2' \\ U_4' \end{bmatrix}, \qquad (2.6)$$

where

$$U_2' = \left(U_2 - (I_{xx} - I_{zz})\,\dot{\phi}_A\dot{\psi}_A\right)/I_{yy}, \qquad U_3' = \left(U_3 - (I_{zz} - I_{yy})\,\dot{\theta}_A\dot{\psi}_A\right)/I_{xx}, \qquad U_4' = U_4/I_{zz}.$$

2.2.2 Model Validation

In order to assess the validity of the approximation model, its response to various inputs was compared to the response of the full model. In this simulation, signals for $T$, $M_x$, $M_y$, and $M_z$ were chosen and passed through both models, which have the same initial condition $\mathbf{r}_A = [0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0]^T$. The parameters $m = 0.81\,\mathrm{kg}$, $I_{xx} = I_{yy} = 0.00676\,\mathrm{kg\,m^2}$, $I_{zz} = 0.00158\,\mathrm{kg\,m^2}$ used in all simulations are based on the quadrotor vehicle from the Flight Systems and Controls Laboratory.
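For the simulations discussed next, the approximation model (2.6) can be transcribed as a state-derivative function; the following is a minimal sketch under the stated small-angle assumptions (function and variable names are ours, parameter values as listed above):

```python
import numpy as np

m, g = 0.81, 9.81
Ixx = Iyy = 0.00676
Izz = 0.00158

def approx_dynamics(t, r, U1, U2, U3, U4):
    """State derivative of the approximation model (2.6).
    State r = [x, y, z, xd, yd, zd, phi, theta, psi, phid, thetad, psid]."""
    x, y, z, xd, yd, zd, phi, theta, psi, phid, thetad, psid = r
    xdd = g * (np.cos(psi) * theta + np.sin(psi) * phi)
    ydd = g * (np.sin(psi) * theta - np.cos(psi) * phi)
    zdd = U1 * np.cos(theta) * np.cos(phi) / m - g
    thetadd = (U2 - (Ixx - Izz) * phid * psid) / Iyy     # U2'
    phidd = (U3 - (Izz - Iyy) * thetad * psid) / Ixx     # U3'
    psidd = U4 / Izz                                     # U4'
    return [xd, yd, zd, xdd, ydd, zdd, phid, thetad, psid, phidd, thetadd, psidd]
```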

For $T$ we chose a sine signal centered at $mg$ (in our case $m = 0.81\,\mathrm{kg}$ and $g = 9.81\,\mathrm{m/s^2}$, so $mg = 7.9461\,\mathrm{N}$) with amplitude $0.02\,\mathrm{N}$ and frequency $1\,\mathrm{rad/s}$. The signals for $M_x$ and $M_y$ were chosen to be sinusoidal as well, with amplitude $0.0001\,\mathrm{N\,m}$ and frequencies $1\,\mathrm{rad/s}$ and $2\,\mathrm{rad/s}$ respectively. The input for $M_z$ was a step signal, with step time at $1\,\mathrm{s}$ and step size $0.0001\,\mathrm{N\,m}$. Each model has twelve outputs, $x, y, z, \theta, \phi, \psi, \dot{x}, \dot{y}, \dot{z}, \dot{\theta}, \dot{\phi}, \dot{\psi}$, and they are compared in the remainder of this subsection. We remark that the following is a representative result; simulations were run for a variety of initial conditions and input signals. In general, provided that the roll and pitch angles are small, the two models agree very well; however, if the angles become large the models begin to deviate from each other, as expected. In Figure 2.2 the positions $x, y, z$ of the two models are compared, and in Figure 2.4 the velocities $\dot{x}, \dot{y}, \dot{z}$ are presented. Figure 2.3 compares the Euler angles, and Figure 2.5 their rates. From these figures we make the following observations:

1. While the values of $\theta$ and $\phi$ (i.e. pitch and roll) are small, the two models agree very well on all counts.

2. As the pitch and roll become large (in Figure 2.3, for example, the roll reaches 10 degrees), the values of $\dot{\theta}$ and $\dot{\phi}$ begin to differ significantly between the two models. The other outputs do not exhibit such a big difference, because the changes in the angle rates have not had enough time to influence them. Thus the roll and pitch rates are the most sensitive to the approximation.

3. The yaw and its rate are influenced very little by the approximation, even when the yaw is large. This is of course expected, as we did not impose a small-angle approximation on the yaw in the model.

4. The inertial-frame position and velocity match better than the angles and their rates. This makes the approximation model suitable for target tracking, as opposed to landing, for example, where orientation is more important.

Since the problem we consider is that of target tracking, we find that the approximation model is suitable for our purposes.
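The comparison just described can be reproduced in outline as follows, reusing the approx_dynamics sketch above and assuming a full-model counterpart full_dynamics with the same signature (that function, and the plotting that produces Figures 2.2 to 2.5, are omitted here):

```python
import numpy as np
from scipy.integrate import solve_ivp

m, g = 0.81, 9.81
r0 = [0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0]          # initial condition used in the text

def inputs(t):
    """Test signals described above: sinusoidal T, Mx, My and a step in Mz at t = 1 s."""
    T  = m * g + 0.02 * np.sin(1.0 * t)
    Mx = 1e-4 * np.sin(1.0 * t)
    My = 1e-4 * np.sin(2.0 * t)
    Mz = 1e-4 if t >= 1.0 else 0.0
    return T, Mx, My, Mz

def wrap(dynamics):
    return lambda t, r: dynamics(t, r, *inputs(t))

t_span, t_eval = (0.0, 10.0), np.linspace(0.0, 10.0, 500)
sol_approx = solve_ivp(wrap(approx_dynamics), t_span, r0, t_eval=t_eval)
# sol_full = solve_ivp(wrap(full_dynamics), t_span, r0, t_eval=t_eval)  # hypothetical full model
# Comparing sol_approx.y and sol_full.y row by row reproduces Figures 2.2 to 2.5.
```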

Figure 2.2: Response of full and approximation models (Position). (a) Horizontal position response. (b) Vertical position response.

Figure 2.3: Response of full and approximation models (Angle). (a) Pitch and roll response. (b) Yaw response.

Figure 2.4: Response of full and approximation models (Velocity). (a) Horizontal velocity response. (b) Vertical velocity response.

Figure 2.5: Response of full and approximation models (Angle rate). (a) Pitch and roll rate response. (b) Yaw rate response.

Chapter 3
Stability Analysis of Approximation Model

In this chapter we describe our proposed control algorithm. We then show that, for the approximation model, the control inputs lead to a certain system of differential equations. The stability of the obtained system is shown using the theory of cascade control.

3.1 Tracking Control and Simulink Model

In this section we assume that we have a ground moving target (GMT) whose position is given by $(x_T, y_T)$ (here the subscript T stands for "target"). We assume that the position is four times differentiable (as a function of time) and that the values of $x_T, \dot{x}_T, \ddot{x}_T, \dddot{x}_T, x_T^{(4)}$ and $y_T, \dot{y}_T, \ddot{y}_T, \dddot{y}_T, y_T^{(4)}$ are all known. In addition, we assume that we are given a target yaw $\psi_T$ and a target height $z_T$, which are constant. Finally, we assume that we can measure the state of the quadrotor, i.e. we know the value of $\mathbf{r}_A$. The goal is to develop a control law so that the quadrotor can successfully track the GMT and, at the same time, reach the desired height and yaw angle. In order to achieve this task we need to design particular values for the control inputs $U_1$, $U_2$, $U_3$ and $U_4$.

Remark: The high differentiability of the target position is a very mild condition, and most reasonable types of motion, like constant acceleration, constant velocity and circular motion, are infinitely differentiable. In addition, we mention that the assumption that we know the higher-order derivatives of the target's motion is practically unfeasible; however, it is necessary for the stability analysis. In later chapters, where we perform simulations, we will only measure the target's position and velocity from visual data and set all other derivatives to zero. What this means mathematically is that, in the interval of time between measurements of the position, we assume that the target is moving with constant velocity equal to the average velocity during that time interval.
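In the simulations, the constant-velocity assumption between vision updates amounts to a simple hold-and-extrapolate estimator for the target state; a minimal sketch follows (the class and its update logic are our illustration, not the implementation used later in the thesis):

```python
class TargetEstimate:
    """Hold the last measured target position/velocity and extrapolate between vision updates."""
    def __init__(self):
        self.pos = (0.0, 0.0)
        self.vel = (0.0, 0.0)
        self.t_meas = 0.0

    def update(self, t, x_meas, y_meas):
        """Store a new position measurement; velocity is the average over the elapsed interval."""
        dt = t - self.t_meas
        if dt > 0.0:
            self.vel = ((x_meas - self.pos[0]) / dt, (y_meas - self.pos[1]) / dt)
        self.pos, self.t_meas = (x_meas, y_meas), t

    def predict(self, t):
        """Constant-velocity extrapolation; all higher derivatives are taken as zero."""
        dt = t - self.t_meas
        x = self.pos[0] + self.vel[0] * dt
        y = self.pos[1] + self.vel[1] * dt
        return (x, y), self.vel
```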

As will be shown, the proposed control algorithm still successfully tracks the GMT even if only information about the position and velocity is used.

The quadrotor is entirely controlled by PID controllers, which we split into two groups. The first group controls the yaw angle and height, and is independent of the second group, which controls the horizontal position of the quadrotor through the pitch and roll accelerations. In view of equation (2.6), the equations for $z_A$ and $\psi_A$ are decoupled from those for the other variables, so they can be treated separately. For the altitude we construct a desired velocity in the $z$ direction, denoted by $(\dot{z})_d$, which is such that the quadrotor reaches $z_T$. One then passes $(\dot{z})_d - \dot{z}_A$ through a PD controller, whose output is denoted by $u_1$. The thrust control input is then defined to be

$$U_1 = \frac{m\,(u_1 + g)}{\cos\theta_A\cos\phi_A}.$$

Similarly, we construct $(\dot{\psi})_d$ and pass $(\dot{\psi})_d - \dot{\psi}_A$ through a PD controller, whose output is denoted by $u_4$. We set $U_4 = I_{zz}\,u_4$.

The control of the roll and pitch accelerations is more subtle and will be further explained in the next section. To shorten the notation below, let

$$R(\psi) = \begin{bmatrix} \cos\psi & \sin\psi \\ \sin\psi & -\cos\psi \end{bmatrix} \quad \text{(the matrix appearing in (2.4))}, \qquad S(\psi) = \frac{dR}{d\psi} = \begin{bmatrix} -\sin\psi & \cos\psi \\ \cos\psi & \sin\psi \end{bmatrix}.$$

We begin by constructing desired horizontal velocities $(\dot{x})_d$ and $(\dot{y})_d$. Using these values we define $In_x$ and $In_y$ as

$$In_x = (\dot{x})_d - \dot{x}_A, \qquad In_y = (\dot{y})_d - \dot{y}_A.$$

These are passed through PD controllers, and the outputs are called $Out_x$ and $Out_y$. One then defines

$$\begin{bmatrix} In_\theta \\ In_\phi \end{bmatrix} = \frac{1}{g}\,R(\psi_A)\begin{bmatrix} \ddot{x}_T + Out_x \\ \ddot{y}_T + Out_y \end{bmatrix} - \begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix}.$$

These values are then passed through PD controllers, and the outputs are called $Out_\theta$ and $Out_\phi$. One then defines

$$\begin{bmatrix} In_{\dot{\theta}} \\ In_{\dot{\phi}} \end{bmatrix} = \frac{1}{g}\,R(\psi_A)\begin{bmatrix} \dddot{x}_T \\ \dddot{y}_T \end{bmatrix} + \frac{\dot{\psi}_A}{g}\,S(\psi_A)\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} + \begin{bmatrix} Out_\theta \\ Out_\phi \end{bmatrix} - \begin{bmatrix} \dot{\theta}_A \\ \dot{\phi}_A \end{bmatrix}.$$
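For the two decoupled loops, the mapping from the PD outputs to the physical inputs just described is a one-line computation each; a small sketch, with u1 and u4 denoting the PD outputs (parameter values as in Chapter 2, names ours):

```python
import numpy as np

m, g, Izz = 0.81, 9.81, 0.00158

def thrust_input(u1, theta_A, phi_A):
    """U1 = m (u1 + g) / (cos(theta_A) cos(phi_A)), so that zdd = u1 in model (2.6)."""
    return m * (u1 + g) / (np.cos(theta_A) * np.cos(phi_A))

def yaw_input(u4):
    """U4 = Izz * u4, so that psidd = u4."""
    return Izz * u4
```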

The values $In_{\dot{\theta}}$ and $In_{\dot{\phi}}$ are passed through a last pair of PI controllers and the outputs are called $Out_{\dot{\theta}}$ and $Out_{\dot{\phi}}$. One defines

$$\begin{bmatrix} U_2' \\ U_3' \end{bmatrix} = \frac{1}{g}\,R(\psi_A)\begin{bmatrix} x_T^{(4)} \\ y_T^{(4)} \end{bmatrix} + \frac{2\dot{\psi}_A}{g}\,S(\psi_A)\begin{bmatrix} \dddot{x}_T \\ \dddot{y}_T \end{bmatrix} + \frac{U_4}{g\,I_{zz}}\,S(\psi_A)\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} - \frac{\dot{\psi}_A^2}{g}\,R(\psi_A)\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} + \begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix}.$$

The inputs $U_2$ and $U_3$ for the roll and pitch accelerations are then given by

$$U_2 = I_{yy}\,U_2' + (I_{xx} - I_{zz})\,\dot{\phi}_A\dot{\psi}_A, \qquad U_3 = I_{xx}\,U_3' + (I_{zz} - I_{yy})\,\dot{\theta}_A\dot{\psi}_A.$$

We remark that the control inputs $U_2$, $U_3$, $U_4$ are designed to ensure that $\ddot{\theta}_A = U_2'$, $\ddot{\phi}_A = U_3'$ and $\ddot{\psi}_A = U_4'$, which is the main reason behind their design.

Before we proceed to the next section, where the meaning of the above controller is more carefully explained, we give an intuitive description of what the controller does and also specify the desired translational velocities and yaw rate fed into the system. We use the following desired velocities and rate:

$$\begin{aligned} (\dot{x})_d &= \dot{x}_T + \lambda_x\, h(x_T - x_A) \\ (\dot{y})_d &= \dot{y}_T + \lambda_y\, h(y_T - y_A) \\ (\dot{z})_d &= \lambda_z\, h(z_T - z_A) \\ (\dot{\psi})_d &= \lambda_\psi\,(\psi_T - \psi_A), \end{aligned} \qquad (3.1)$$

where $h(x) = \dfrac{x}{1 + |x|}$. The first two equations should be understood as follows: the first term is necessary to match the target velocity, while the second one drives the displacement to 0. In fact, we observe that $h(x)\,x \geq 0$ with equality if and only if $x = 0$; thus the second term increases the velocity precisely when the target is ahead of the UAV and decreases it when the target is behind. The reason we choose the function $h(x_T - x_A)$, as opposed to what is more common in the literature, just $x_T - x_A$, is that $h$ is a bounded function. This ensures that the value we pass is not too large, preventing aggressive maneuvers of the quadrotor. The latter is especially important, since low velocity commands lead to lower values of the pitch and roll, which is consistent with the assumptions we made for the linearized system. Similarly, for the $z$ direction, we do not want to change the thrust too much, as that would break the near-hover condition we assume. We remark that near 0 the function $h(x)$ looks like $x$; however, its boundedness means that it acts like a saturation function for the input.
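The saturating error function and the desired rates (3.1) translate directly into code; a minimal sketch (the gain values and the dictionary-based interface are illustrative assumptions):

```python
def h(x):
    """Bounded, odd error function h(x) = x / (1 + |x|); behaves like x near 0, saturates at +/-1."""
    return x / (1.0 + abs(x))

def desired_rates(target, quad, lam_x=1.0, lam_y=1.0, lam_z=1.0, lam_psi=1.0):
    """Desired translational velocities and yaw rate from equation (3.1)."""
    xd_des = target["xdot"] + lam_x * h(target["x"] - quad["x"])
    yd_des = target["ydot"] + lam_y * h(target["y"] - quad["y"])
    zd_des = lam_z * h(target["z"] - quad["z"])
    psid_des = lam_psi * (target["psi"] - quad["psi"])
    return xd_des, yd_des, zd_des, psid_des
```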

3.2 Induced DEs

The key to understanding the proposed desired values is to see that they induce particular differential equations for the horizontal differences $x_A - x_T$ and $y_A - y_T$. We begin by recalling equations (2.4), differentiating the second one once and the first one twice:

$$\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix} = g\,R(\psi_A)\begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix}, \qquad \begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix} = \frac{1}{g}\,R(\psi_A)\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix}, \qquad (3.2)$$

$$\begin{bmatrix} \dot{\theta}_A \\ \dot{\phi}_A \end{bmatrix} = \frac{1}{g}\,R(\psi_A)\begin{bmatrix} \dddot{x}_A \\ \dddot{y}_A \end{bmatrix} + \frac{\dot{\psi}_A}{g}\,S(\psi_A)\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix}, \qquad (3.3)$$

$$\begin{bmatrix} x_A^{(4)} \\ y_A^{(4)} \end{bmatrix} = g\,R(\psi_A)\begin{bmatrix} \ddot{\theta}_A \\ \ddot{\phi}_A \end{bmatrix} + 2g\dot{\psi}_A\,S(\psi_A)\begin{bmatrix} \dot{\theta}_A \\ \dot{\phi}_A \end{bmatrix} + g\ddot{\psi}_A\,S(\psi_A)\begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix} - g\dot{\psi}_A^2\,R(\psi_A)\begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix}. \qquad (3.4)$$

Equation (3.4) shows that, in a sense, by controlling $\ddot{\theta}_A$ and $\ddot{\phi}_A$ one controls $x_A^{(4)}$ and $y_A^{(4)}$. The latter is especially clear when $\psi_A \equiv 0$, in which case the dependence is explicitly given by $x_A^{(4)} = g\,\ddot{\theta}_A$ and $y_A^{(4)} = -g\,\ddot{\phi}_A$. Since our problem is that of target tracking, we want to design the controllers so that the induced differential equations for $x_A$ and $y_A$ force $x_A - x_T$ and $y_A - y_T$ to both converge to 0 as time goes to infinity. As will be shown, this will indeed happen if we define $U_1$, $U_2$, $U_3$ and $U_4$ as in the previous section. Recall that picking these control values forces $\ddot{\theta}_A = U_2'$, $\ddot{\phi}_A = U_3'$ and $\ddot{\psi}_A = U_4'$, with $U_2'$, $U_3'$ and $U_4'$ defined as in the previous section. We recall

$$\begin{bmatrix} U_2' \\ U_3' \end{bmatrix} = \frac{1}{g}\,R(\psi_A)\begin{bmatrix} x_T^{(4)} \\ y_T^{(4)} \end{bmatrix} + \frac{2\dot{\psi}_A}{g}\,S(\psi_A)\begin{bmatrix} \dddot{x}_T \\ \dddot{y}_T \end{bmatrix} + \frac{U_4}{g\,I_{zz}}\,S(\psi_A)\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} - \frac{\dot{\psi}_A^2}{g}\,R(\psi_A)\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} + \begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix}.$$

Substituting this expression into equation (3.4), and also putting $U_4 = I_{zz}\,U_4' = I_{zz}\,\ddot{\psi}_A$, we get (using $R(\psi_A)^2 = I$ and writing $J = R(\psi_A)S(\psi_A)$)

$$\begin{bmatrix} x_A^{(4)} - x_T^{(4)} \\ y_A^{(4)} - y_T^{(4)} \end{bmatrix} = 2\dot{\psi}_A\,J\begin{bmatrix} \dddot{x}_T \\ \dddot{y}_T \end{bmatrix} + \ddot{\psi}_A\,J\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} - \dot{\psi}_A^2\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} + g\,R(\psi_A)\begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix} + 2g\dot{\psi}_A\,S(\psi_A)\begin{bmatrix} \dot{\theta}_A \\ \dot{\phi}_A \end{bmatrix} + g\ddot{\psi}_A\,S(\psi_A)\begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix} - g\dot{\psi}_A^2\,R(\psi_A)\begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix}.$$

We may now substitute $\dot{\theta}_A$ and $\dot{\phi}_A$ from equation (3.3) in the above expression. Since $S(\psi_A)R(\psi_A) = -J$ and $S(\psi_A)^2 = I$, this gives

$$\begin{bmatrix} x_A^{(4)} - x_T^{(4)} \\ y_A^{(4)} - y_T^{(4)} \end{bmatrix} = 2\dot{\psi}_A\,J\begin{bmatrix} \dddot{x}_T - \dddot{x}_A \\ \dddot{y}_T - \dddot{y}_A \end{bmatrix} + \ddot{\psi}_A\,J\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} - \dot{\psi}_A^2\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} + 2\dot{\psi}_A^2\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix} + g\,R(\psi_A)\begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix} + g\ddot{\psi}_A\,S(\psi_A)\begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix} - g\dot{\psi}_A^2\,R(\psi_A)\begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix}.$$

Substituting $\theta_A$ and $\phi_A$ from equation (3.2) in the above, the remaining angle terms become $-\ddot{\psi}_A\,J\,[\ddot{x}_A,\ \ddot{y}_A]^T - \dot{\psi}_A^2\,[\ddot{x}_A,\ \ddot{y}_A]^T$. If we set $\Delta x = x_A - x_T$ and $\Delta y = y_A - y_T$, then the above becomes

$$\begin{bmatrix} \Delta x^{(4)} \\ \Delta y^{(4)} \end{bmatrix} = -2\dot{\psi}_A\,J\begin{bmatrix} \Delta\dddot{x} \\ \Delta\dddot{y} \end{bmatrix} - \ddot{\psi}_A\,J\begin{bmatrix} \Delta\ddot{x} \\ \Delta\ddot{y} \end{bmatrix} + \dot{\psi}_A^2\begin{bmatrix} \Delta\ddot{x} \\ \Delta\ddot{y} \end{bmatrix} + g\,R(\psi_A)\begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix}. \qquad (3.5)$$
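For later reference, the matrix identities used in the cancellations above follow directly from the definitions of $R$ and $S$ in Section 3.1:

$$R(\psi)^2 = S(\psi)^2 = I, \qquad R(\psi)\,S(\psi) = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} = J, \qquad S(\psi)\,R(\psi) = -J.$$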

Next we assume that the PI controllers, whose outputs are $Out_{\dot{\theta}}$ and $Out_{\dot{\phi}}$, are just proportional controllers with parameter $P_{\dot{\theta}} = P_{\dot{\phi}} = A$. This means that

$$\begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix} = A\begin{bmatrix} In_{\dot{\theta}} \\ In_{\dot{\phi}} \end{bmatrix}.$$

Substituting the formula for $In_{\dot{\theta}}$ and $In_{\dot{\phi}}$, and $\dot{\theta}_A$ and $\dot{\phi}_A$ from equation (3.3), we get

$$\begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix} = -\frac{A}{g}\,R(\psi_A)\begin{bmatrix} \Delta\dddot{x} \\ \Delta\dddot{y} \end{bmatrix} - \frac{A\dot{\psi}_A}{g}\,S(\psi_A)\begin{bmatrix} \Delta\ddot{x} \\ \Delta\ddot{y} \end{bmatrix} + A\begin{bmatrix} Out_\theta \\ Out_\phi \end{bmatrix}.$$

Next we assume that the PD controllers, whose outputs are $Out_\theta$ and $Out_\phi$, are just proportional controllers with parameter $P_\theta = P_\phi = B$. This means that

$$\begin{bmatrix} Out_\theta \\ Out_\phi \end{bmatrix} = B\begin{bmatrix} In_\theta \\ In_\phi \end{bmatrix}.$$

Substituting the formula for $In_\theta$ and $In_\phi$, and $\theta_A$ and $\phi_A$ from equation (3.2), we get

$$\begin{bmatrix} Out_\theta \\ Out_\phi \end{bmatrix} = \frac{B}{g}\,R(\psi_A)\left(\begin{bmatrix} Out_x \\ Out_y \end{bmatrix} - \begin{bmatrix} \Delta\ddot{x} \\ \Delta\ddot{y} \end{bmatrix}\right).$$

We may now substitute this above to get

$$\begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix} = -\frac{A}{g}\,R(\psi_A)\begin{bmatrix} \Delta\dddot{x} \\ \Delta\dddot{y} \end{bmatrix} - \frac{A\dot{\psi}_A}{g}\,S(\psi_A)\begin{bmatrix} \Delta\ddot{x} \\ \Delta\ddot{y} \end{bmatrix} - \frac{AB}{g}\,R(\psi_A)\begin{bmatrix} \Delta\ddot{x} \\ \Delta\ddot{y} \end{bmatrix} + \frac{AB}{g}\,R(\psi_A)\begin{bmatrix} Out_x \\ Out_y \end{bmatrix}.$$

Next we assume that the PD controllers, whose outputs are $Out_x$ and $Out_y$, are just proportional controllers with parameter $P_x = P_y = C$. This means that

$$\begin{bmatrix} Out_x \\ Out_y \end{bmatrix} = C\left(\begin{bmatrix} (\dot{x})_d \\ (\dot{y})_d \end{bmatrix} - \begin{bmatrix} \dot{x}_A \\ \dot{y}_A \end{bmatrix}\right).$$

Substituting the formula for $(\dot{x})_d$ and $(\dot{y})_d$, we get

$$\begin{bmatrix} Out_x \\ Out_y \end{bmatrix} = C\left(\begin{bmatrix} \dot{x}_T + \lambda_x\,h(x_T - x_A) \\ \dot{y}_T + \lambda_y\,h(y_T - y_A) \end{bmatrix} - \begin{bmatrix} \dot{x}_A \\ \dot{y}_A \end{bmatrix}\right) = -C\begin{bmatrix} \Delta\dot{x} \\ \Delta\dot{y} \end{bmatrix} - C\begin{bmatrix} \lambda_x\,h(\Delta x) \\ \lambda_y\,h(\Delta y) \end{bmatrix},$$

where we used that $h$ is an odd function. Substituting the latter above, we get

$$\begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix} = -\frac{A}{g}\,R(\psi_A)\begin{bmatrix} \Delta\dddot{x} \\ \Delta\dddot{y} \end{bmatrix} - \frac{A\dot{\psi}_A}{g}\,S(\psi_A)\begin{bmatrix} \Delta\ddot{x} \\ \Delta\ddot{y} \end{bmatrix} - \frac{AB}{g}\,R(\psi_A)\begin{bmatrix} \Delta\ddot{x} \\ \Delta\ddot{y} \end{bmatrix} - \frac{ABC}{g}\,R(\psi_A)\begin{bmatrix} \Delta\dot{x} \\ \Delta\dot{y} \end{bmatrix} - \frac{ABC}{g}\,R(\psi_A)\begin{bmatrix} \lambda_x\,h(\Delta x) \\ \lambda_y\,h(\Delta y) \end{bmatrix}.$$

We can finally substitute the above into equation (3.5). Setting $D = \lambda_x = \lambda_y$ and rearranging, we get:

$$\begin{bmatrix} \Delta x^{(4)} \\ \Delta y^{(4)} \end{bmatrix} = -A\begin{bmatrix} \Delta\dddot{x} \\ \Delta\dddot{y} \end{bmatrix} - AB\begin{bmatrix} \Delta\ddot{x} \\ \Delta\ddot{y} \end{bmatrix} - ABC\begin{bmatrix} \Delta\dot{x} \\ \Delta\dot{y} \end{bmatrix} - ABCD\begin{bmatrix} h(\Delta x) \\ h(\Delta y) \end{bmatrix} + \left(\dot{\psi}_A^2\,I - (\ddot{\psi}_A + A\dot{\psi}_A)\,J\right)\begin{bmatrix} \Delta\ddot{x} \\ \Delta\ddot{y} \end{bmatrix} - 2\dot{\psi}_A\,J\begin{bmatrix} \Delta\dddot{x} \\ \Delta\dddot{y} \end{bmatrix}. \qquad (3.6)$$
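To see the role of the four gains, equation (3.6) can be integrated numerically in the decoupled case $\dot{\psi}_A \equiv 0$, where it reduces to two identical scalar equations; below is a small sketch under that assumption, with illustrative gain values (not the gains used in the thesis):

```python
import numpy as np
from scipy.integrate import solve_ivp

A, B, C, D = 4.0, 3.0, 2.0, 1.0      # illustrative positive gains

def h(x):
    return x / (1.0 + abs(x))

def closed_loop(t, s):
    """s = [e, e', e'', e''']; scalar form of (3.6) with psi_dot = 0."""
    e, e1, e2, e3 = s
    e4 = -A * e3 - A * B * e2 - A * B * C * e1 - A * B * C * D * h(e)
    return [e1, e2, e3, e4]

sol = solve_ivp(closed_loop, (0.0, 30.0), [5.0, 0.0, 0.0, 0.0], max_step=0.01)
print(abs(sol.y[0, -1]))   # position error at t = 30 s, decaying toward zero
```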

The remaining differential systems for $\tilde z = z_A - z_T$ and $\tilde\psi = \psi_A - \psi_T$ are given as follows (we recall that $z_T$ and $\psi_T$ are assumed constant, so their derivatives vanish):
$$
\frac{d}{dt}\begin{bmatrix}\tilde\psi\\ \dot{\tilde\psi}\end{bmatrix}
= \begin{bmatrix}\dot{\tilde\psi}\\ -a_\psi\tilde\psi - b_\psi\dot{\tilde\psi}\end{bmatrix},
\tag{3.7}
$$
where $a_\psi = \dfrac{P_\psi\lambda_\psi}{1+D_\psi}$ and $b_\psi = \dfrac{P_\psi + \lambda_\psi D_\psi}{1+D_\psi}$ are positive constants and $P_\psi$, $D_\psi$ denote the PD constants for the yaw controller;
$$
\frac{d}{dt}\begin{bmatrix}\tilde z\\ \dot{\tilde z}\end{bmatrix}
= \begin{bmatrix}\dot{\tilde z}\\[4pt] -a_z\dfrac{\tilde z}{1+|\tilde z|} - b_z\dot{\tilde z} - c_z\dfrac{\dot{\tilde z}}{(1+|\tilde z|)^{2}}\end{bmatrix},
\tag{3.8}
$$
where $a_z = \dfrac{\lambda_z P_z}{1+D_z}$, $b_z = \dfrac{P_z}{1+D_z}$ and $c_z = \dfrac{\lambda_z D_z}{1+D_z}$ are positive constants and $P_z$, $D_z$ denote the PD constants for the altitude controller. A block diagram of the control method is provided in Figure 3.1.

Figure 3.1: Quadrotor Tracking Control Structure

The stability of the system presented in (3.8) can be treated separately from the other equations, because of our assumptions. The systems (3.6) and (3.7), however, are coupled and must be treated simultaneously. Nevertheless, we will show that the system is stable for certain values of the proportional control constants $A$, $B$, $C$, $D$ in (3.6). The precise statements and proofs are given in the next sections.
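As a quick numerical illustration of the decoupled subsystems (3.7) and (3.8), the following sketch integrates both error systems with a forward-Euler step. It is only a sanity check of the structure above: the gain values $a_\psi$, $b_\psi$, $a_z$, $b_z$, $c_z$ are placeholder choices, not the constants used in the thesis, and Python/NumPy is assumed only for illustration.

```python
import numpy as np

# Placeholder gains (any positive values); not the tuned constants from the thesis.
a_psi, b_psi = 2.0, 3.0
a_z, b_z, c_z = 1.0, 1.5, 0.5

def yaw_error_rhs(w):
    # Equation (3.7): w = (psi_tilde, psi_tilde_dot)
    return np.array([w[1], -a_psi * w[0] - b_psi * w[1]])

def altitude_error_rhs(s):
    # Equation (3.8): s = (z_tilde, z_tilde_dot), with saturation h(z) = z / (1 + |z|)
    z, zdot = s
    return np.array([zdot,
                     -a_z * z / (1 + abs(z))
                     - b_z * zdot
                     - c_z * zdot / (1 + abs(z)) ** 2])

dt, T = 0.01, 20.0
w = np.array([1.0, 0.0])    # 1 rad initial yaw error
s = np.array([10.0, 0.0])   # 10 m initial altitude error
for _ in range(int(T / dt)):
    w = w + dt * yaw_error_rhs(w)
    s = s + dt * altitude_error_rhs(s)

print("final yaw error state:", w)
print("final altitude error state:", s)   # both should be close to zero
```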

3.3 Stability of Height Control

In this section we analyze the stability of the altitude control. From Section 3.2, equation (3.8), the differential equation governing the height error is
$$
\frac{d}{dt}\begin{bmatrix}\tilde z\\ \dot{\tilde z}\end{bmatrix}
= \begin{bmatrix}\dot{\tilde z}\\[4pt] -a_z\dfrac{\tilde z}{1+|\tilde z|} - b_z\dot{\tilde z} - c_z\dfrac{\dot{\tilde z}}{(1+|\tilde z|)^{2}}\end{bmatrix},
\tag{3.9}
$$
where we recall that $a_z$, $b_z$, $c_z$ are positive constants and $\tilde z = z_A - z_T$ is the difference between the quadrotor height and the target height that must be reached. We will show that the above is globally asymptotically stable, converging to $0$ as time goes to infinity.

Consider the following Lyapunov function candidate
$$
V(x, y) = \frac{1}{2}y^{2} + (a_z + A)\big(|x| - \log(1+|x|)\big) + B\,\frac{xy}{1+|x|},
$$
where the constants $A$, $B$ are to be chosen later. In order to demonstrate that the solutions of the differential equation are globally asymptotically stable it is sufficient to establish the following properties of $V$ (see Theorem 3.2 in [20]):

1. $V(x, y) \ge 0$, with equality if and only if $x = y = 0$.
2. $\dot V(\tilde z, \dot{\tilde z}) \le 0$, with equality if and only if $\tilde z = \dot{\tilde z} = 0$.
3. $V(x, y) \to \infty$ as $x^{2} + y^{2} \to \infty$.

We first start with condition 2. and observe that
$$
\dot V(\tilde z, \dot{\tilde z})
= -a_z\frac{\tilde z\dot{\tilde z}}{1+|\tilde z|} - b_z\dot{\tilde z}^{2} - c_z\frac{\dot{\tilde z}^{2}}{(1+|\tilde z|)^{2}}
+ (a_z + A)\frac{\tilde z\dot{\tilde z}}{1+|\tilde z|}
+ B\frac{\dot{\tilde z}^{2}}{(1+|\tilde z|)^{2}}
- Ba_z\frac{\tilde z^{2}}{(1+|\tilde z|)^{2}}
- Bb_z\frac{\tilde z\dot{\tilde z}}{1+|\tilde z|}
- Bc_z\frac{\tilde z\dot{\tilde z}}{(1+|\tilde z|)^{3}}.
$$
We now choose $A = Bb_z$, and upon cancellation and regrouping of terms the above becomes
$$
\dot V(\tilde z, \dot{\tilde z})
= -b_z\dot{\tilde z}^{2} - c_z\frac{\dot{\tilde z}^{2}}{(1+|\tilde z|)^{2}}
+ B\frac{\dot{\tilde z}^{2}}{(1+|\tilde z|)^{2}}
- Ba_z\frac{\tilde z^{2}}{(1+|\tilde z|)^{2}}
- Bc_z\frac{\tilde z\dot{\tilde z}}{(1+|\tilde z|)^{3}}.
$$
The goal will be to choose $B$ small enough and positive. From $s^{2} + t^{2} \ge 2|st|$ we know

that
$$
Bc_z\frac{|\tilde z|\,|\dot{\tilde z}|}{(1+|\tilde z|)^{3}}
\le \frac{Ba_z}{2}\frac{\tilde z^{2}}{(1+|\tilde z|)^{4}} + \frac{Bc_z^{2}}{2a_z}\frac{\dot{\tilde z}^{2}}{(1+|\tilde z|)^{2}}.
$$
Consequently we see that
$$
\dot V(\tilde z, \dot{\tilde z})
\le -\left(b_z + \frac{c_z - B}{(1+|\tilde z|)^{2}} - \frac{Bc_z^{2}}{2a_z}\frac{1}{(1+|\tilde z|)^{2}}\right)\dot{\tilde z}^{2}
- \left(Ba_z - \frac{Ba_z}{2}\frac{1}{(1+|\tilde z|)^{2}}\right)\frac{\tilde z^{2}}{(1+|\tilde z|)^{2}}.
$$
So if we choose $B$ small enough and positive we will have
$$
\dot V(\tilde z, \dot{\tilde z}) \le -c_1\dot{\tilde z}^{2} - c_2\frac{\tilde z^{2}}{(1+|\tilde z|)^{2}},
$$
for some positive constants $c_1$, $c_2$. This proves Property 2.

Property 3. is easy to see. Indeed, since by definition $A > 0$ and $|x| - \log(1+|x|) \ge 0$, we have
$$
V(x, y) \ge \frac{1}{2}y^{2} - B|y|,
$$
which goes to $\infty$ along any sequence $(x_n, y_n)$ with $|y_n| \to \infty$. Thus we only need to consider sequences $(x_n, y_n)$ along which $|y_n|$ remains bounded, say by $M$. In the latter case $|x_n|$ is forced to go to infinity, and so
$$
V(x_n, y_n) \ge (a_z + A)\big(|x_n| - \log(1+|x_n|)\big) - BM \to \infty,
$$
where we used that $|x| - \log(1+|x|) \to \infty$ as $|x| \to \infty$. This proves Property 3.

Finally, Property 1 follows from Lemma 2. Indeed, if we choose $B$ sufficiently small so that $2(a_z + A) \ge B^{2}$, then the conclusion of Lemma 2 is precisely Property 1. Hence Properties 1, 2 and 3 are all satisfied, and by Theorem 3.2 in [20] the system is globally asymptotically stable, as claimed.
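The Lyapunov argument above can also be checked numerically: along any simulated trajectory of (3.9) the value of $V$ should be non-increasing. The sketch below does exactly that, with illustrative constants, $A = Bb_z$ as in the proof, and a small positive $B$; it is a sanity check, not part of the proof.

```python
import numpy as np

a_z, b_z, c_z = 1.0, 1.5, 0.5     # illustrative positive constants
B = 0.1                           # small positive B, as required in the proof
A = B * b_z                       # the choice A = B*b_z made above

def V(x, y):
    # Lyapunov candidate from Section 3.3
    return 0.5 * y**2 + (a_z + A) * (abs(x) - np.log(1 + abs(x))) + B * x * y / (1 + abs(x))

def rhs(s):
    # Height-error dynamics (3.9)
    z, zdot = s
    return np.array([zdot,
                     -a_z * z / (1 + abs(z)) - b_z * zdot - c_z * zdot / (1 + abs(z)) ** 2])

dt = 0.001
s = np.array([8.0, -2.0])
values = []
for _ in range(20000):
    values.append(V(*s))
    s = s + dt * rhs(s)

print("max increase of V along the trajectory:",
      max(np.diff(values)))       # expected to be <= 0 up to discretization error
```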

3.4 Cascade Systems

In Section 3.2 we derived a system of differential equations, whose stability we now analyze. We phrase our problem in the language of cascade systems, which allows us to reduce the question of global asymptotic stability to verifying three conditions. These conditions are subsequently proved in the appendix at the end of the thesis. We start with the definition of cascade systems.

Suppose we have the following nonlinear cascade system
$$
\dot x = f(x, \omega), \qquad \dot\omega = s(\omega), \tag{3.10}
$$
where $x(t) \in \mathbb{R}^n$ and $\omega(t) \in \mathbb{R}^m$. We assume that $f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ and $s : \mathbb{R}^m \to \mathbb{R}^m$ are $C^1$ vector fields, and that $f(0, 0) = 0$ and $s(0) = 0$, so that $(x, \omega) = (0, 0)$ is an equilibrium of the cascade system. In [31] the following result was proved:

Theorem 1. Suppose the following assumptions are satisfied:

1. $x = 0$ is a globally asymptotically stable solution of the system $\dot x = f(x, 0)$;
2. $\omega = 0$ is a globally asymptotically stable solution of the system $\dot\omega = s(\omega)$;
3. for any initial condition $(x(0), \omega(0))$ the trajectories $(x(t), \omega(t))$ remain bounded for $t > 0$.

Then $(0, 0)$ is a globally asymptotically stable solution of the cascade system (3.10).

Roughly speaking, the theorem says that if the control law $s$ forces $\omega$ to converge to $0$, and with $\omega \equiv 0$ the control law $f$ forces $x$ to converge to $0$, then the whole system $(x, \omega)$ converges to $0$ from any initial condition.
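To make Theorem 1 concrete, here is a toy cascade, unrelated to the quadrotor system, for which all three conditions are easy to check by hand: with $\omega = 0$ the driven subsystem reduces to $\dot x = -x$, the driving subsystem $\dot\omega = -\omega$ is globally asymptotically stable, and the coupling term decays with $\omega$, so trajectories remain bounded.

```python
import numpy as np

def f(x, w):
    # Driven subsystem: x' = -x + x*w (reduces to x' = -x when w = 0)
    return -x + x * w

def s(w):
    # Driving subsystem: w' = -w
    return -w

dt, steps = 0.001, 50000
x, w = 5.0, 3.0
for _ in range(steps):
    x, w = x + dt * f(x, w), w + dt * s(w)

print(x, w)   # both tend to 0, as Theorem 1 predicts
```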

The reason we are interested in this result is that equations (3.6) and (3.7) can be described as a cascade system. Utilizing this structure will allow us to establish global asymptotic stability for the entire system by verifying the conditions of the above theorem.

The first task is to phrase our problem in the language of cascade systems. Let $\omega = (\omega_1, \omega_2) \in \mathbb{R}^2$ and $x = (x_1, x_2, \ldots, x_8) \in \mathbb{R}^8$. We let $s : \mathbb{R}^2 \to \mathbb{R}^2$ be given by
$$
s(\omega_1, \omega_2) = \big[\,\omega_2,\; -a_\psi\omega_1 - b_\psi\omega_2\,\big]^T.
$$
From equation (3.7) and its derivative we know that
$$
\frac{d}{dt}\begin{bmatrix}\tilde\psi\\ \dot{\tilde\psi}\end{bmatrix} = \begin{bmatrix}\dot{\tilde\psi}\\ -a_\psi\tilde\psi - b_\psi\dot{\tilde\psi}\end{bmatrix}.
$$
We thus see that $\omega = (\tilde\psi, \dot{\tilde\psi})$ is a solution of the differential equation $\dot\omega = s(\omega)$. Next let $f : \mathbb{R}^{10} \to \mathbb{R}^8$ be given by
$$
f(x_1, x_2, \ldots, x_8, \omega_1, \omega_2)_i = x_{i+1} \quad \text{for } i = 1, 2, 3, 5, 6, 7,
$$
and
$$
\begin{bmatrix}f(x_1, \ldots, x_8, \omega_1, \omega_2)_4\\ f(x_1, \ldots, x_8, \omega_1, \omega_2)_8\end{bmatrix}
= -A\begin{bmatrix}x_4\\ x_8\end{bmatrix} - AB\begin{bmatrix}x_3\\ x_7\end{bmatrix} - ABC\begin{bmatrix}x_2\\ x_6\end{bmatrix} - ABCD\begin{bmatrix}h(x_1)\\ h(x_5)\end{bmatrix}
+ \left(\omega_2^{2} - \big(a_\psi\omega_1 + b_\psi\omega_2 + A\omega_2\big)\begin{bmatrix}0 & 1\\ -1 & 0\end{bmatrix}\right)\begin{bmatrix}x_2\\ x_6\end{bmatrix}
+ 2\omega_2\begin{bmatrix}0 & 1\\ -1 & 0\end{bmatrix}\begin{bmatrix}x_4\\ x_8\end{bmatrix}.
$$
Then, in view of equations (3.6) and (3.7), we see that
$$
x = \big(\tilde x, \dot{\tilde x}, \ddot{\tilde x}, \dddot{\tilde x}, \tilde y, \dot{\tilde y}, \ddot{\tilde y}, \dddot{\tilde y}\big), \qquad \omega = \big(\tilde\psi, \dot{\tilde\psi}\big)
$$
is a solution of the cascade system
$$
\dot x = f(x, \omega), \qquad \dot\omega = s(\omega).
$$
In reconciling the above system with equations (3.6) and (3.7) we used that $\dot{\tilde\psi} = \dot\psi_A$, since $\psi_T$ is assumed constant. Since $h$ is $C^1$, we conclude that $f$ and $s$ are $C^1$ vector fields. In addition, it is clear that $f(0, 0) = 0$ and $s(0) = 0$. Thus the above is indeed of the form described in (3.10). It thus follows from Theorem 1 that in order to prove the global asymptotic stability of our system it is sufficient to verify conditions 1. through 3. This will be done in the appendix.

Chapter 4

Position Based Tracking Law

In this chapter we develop a position based visual servoing (PBVS) model for target tracking. The proposed tracking law is implemented for both the full and the approximation model, and their performance is analyzed through several simulations.

4.1 PBVS Navigation

We will consider the following problem. Suppose that a camera is mounted on the quadrotor and is fixed, so that its orientation changes with the orientation of the UAV. We assume that all relevant camera parameters are known. There is a GMT, observed with the camera, which appears as a point in the image frame, and measurements of its image coordinates are available for all time. In addition, we assume that the quadrotor has knowledge of its position and velocity in the inertial frame, as well as its orientation (expressed through the Euler angles) and the rate at which the orientation changes (expressed through the pitch, roll and yaw rates). Suppose that a constant desired height $z_T$ and yaw $\psi_T$ are given. Based on this information, we wish to construct a PBVS model for the quadrotor for tracking the GMT and reaching the desired height and yaw. We understand the above problem as forcing the values $x_A - x_T$, $y_A - y_T$, $z_A - z_T$ and $\psi_A - \psi_T$ to converge to $0$ as time becomes large, starting from any initial condition.

Based on the above formulation, the PBVS navigation algorithm can be split into two parts. In the first part, visual data is analyzed to obtain information about the target's pose. Subsequently, this information is used to create inputs for the control design developed in the previous chapters.

4.1.1 Estimation of the GMT's Pose

We will begin with the first part, where we use [8] as a basic reference. Since the camera is assumed to be fixed to the quadrotor, its frame coincides with the body-fixed frame of the quadrotor, except that its vertical axis points downwards. We let $\mathbf{X} = (X, Y, Z)$ denote the position of the target in the camera frame, and $\mathbf{x} = (x, y)$ its projection in the image as a 2-D point. We have
$$
x = X/Z = (u - c_u)/(f\alpha), \qquad y = Y/Z = (v - c_v)/f,
$$
where $m = (u, v)$ are the coordinates of the image point in pixel units and $a = (c_u, c_v, f, \alpha)$ is the set of intrinsic camera parameters; $c_u$ and $c_v$ are the coordinates of the principal point, $f$ is the focal length, and $\alpha$ is the ratio of the pixel dimensions. The camera geometry is shown in Figure 4.1.

Figure 4.1: Camera Perspective Projection Model

In our problem we assumed that the camera parameters are all known, as are $u$ and $v$. Thus the quantities $X/Z$ and $Y/Z$ can be readily obtained from the visual measurements. The next task is to calculate the depth $Z$. In order to achieve this, we use the information that our target is constrained to the ground (the plane corresponding to $z = 0$ in inertial coordinates). Let $C_{BI}$ denote the rotation matrix from the inertial to the body-fixed frame (it was given in Section 2.1). Let $\mathbf{X}_I = (X_I, Y_I, Z_I)$ denote the coordinates of the GMT in a frame centered at $(x_A, y_A, z_A)$ and with axes parallel to the inertial frame axes. Then

we have that $Z_I = z_A$ and $[X, Y, Z]^T = C_{BI}\mathbf{X}_I$, where we used the fact that the camera and quadrotor have opposite vertical orientation. Dividing both sides by $Z$ and multiplying by $C_{BI}^{-1}$ we see that
$$
C_{BI}^{-1}\,[X/Z,\; Y/Z,\; 1]^T = \frac{1}{Z}\,[X_I,\; Y_I,\; z_A]^T.
$$
The left-hand side of the above equation is known from the visual measurements and the orientation of the quadrotor, and hence so is the right-hand side. In particular, we know the value $z_A/Z$, and since $z_A$ is also known, we conclude that the depth $Z$ can be measured in our problem. Finally, since $X/Z$, $Y/Z$ and $Z$ are all known, we can reconstruct the camera-frame position of the target $(X, Y, Z)$ from visual data and knowledge of the camera orientation and position. The latter can now be transformed via $C_{BI}^{-1}$ to obtain the relative position of the target with respect to the quadrotor in the inertial frame. As the quadrotor's position is also known, we conclude that the target's inertial position can be reconstructed.

As will be seen later, the above procedure can be readily implemented to accurately measure the target's position when there is no noise in the image. This thesis will not consider the problem of denoising the image, although it could be addressed using various filtering techniques, such as the Kalman filter. Instead, we will try to obtain further information about the target's pose from the image data. In particular, we would like to obtain estimates of its velocity. We do the latter by directly using the measured positions. Specifically, if at time $t_1$ the position of the target in the inertial frame was calculated to be $(x_T(t_1), y_T(t_1))$ and at time $t_2 > t_1$ it is $(x_T(t_2), y_T(t_2))$, then an approximation for the target's velocity is given by
$$
v_x \approx \frac{x_T(t_2) - x_T(t_1)}{t_2 - t_1}, \qquad v_y \approx \frac{y_T(t_2) - y_T(t_1)}{t_2 - t_1}.
$$
If the next measurement is made at time $t_3$, then the target is assumed to move with the constant velocity $(v_x, v_y)$ in the interval $(t_2, t_3)$. We will see that the latter provides a good estimate of the target's velocity so long as it does not change too quickly.
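A compact sketch of the reconstruction just described is given below. It assumes the intrinsic parameters $a = (c_u, c_v, f, \alpha)$ are known and takes as input a rotation matrix mapping camera-frame vectors to a frame parallel to the inertial axes but with its vertical axis pointing toward the ground, so that the altitude $z_A$ acts as a positive depth; this matrix plays the role of $C_{BI}^{-1}$ in the text, up to the camera/body axis conventions of Section 2.1. The function and variable names are placeholders.

```python
import numpy as np

def target_position_from_pixel(u, v, cam, R_ci, quad_xyz):
    """Reconstruct the GMT's inertial position from a single pixel measurement (u, v)."""
    c_u, c_v, f, alpha = cam
    x = (u - c_u) / (f * alpha)          # X/Z from the projection model
    y = (v - c_v) / f                    # Y/Z
    d = R_ci @ np.array([x, y, 1.0])     # target direction, known up to the depth Z
    Z = quad_xyz[2] / d[2]               # depth from the ground-plane constraint z = 0
    rel = Z * d                          # relative position of the target w.r.t. the quadrotor
    return np.array([quad_xyz[0] + rel[0], quad_xyz[1] + rel[1], 0.0])

def gmt_velocity_estimate(p_prev, t_prev, p_curr, t_curr):
    # Finite-difference estimate of the horizontal GMT velocity, held constant
    # until the next measurement arrives.
    return (p_curr[:2] - p_prev[:2]) / (t_curr - t_prev)

# Example: downward-looking camera at zero roll/pitch/yaw (R_ci = identity), 10 m altitude.
cam = (320.0, 240.0, 500.0, 1.0)
p1 = target_position_from_pixel(350.0, 250.0, cam, np.eye(3), np.array([0.0, 0.0, 10.0]))
p2 = target_position_from_pixel(360.0, 250.0, cam, np.eye(3), np.array([0.0, 0.0, 10.0]))
print(p1, gmt_velocity_estimate(p1, 0.0, p2, 0.1))
```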

We could try to iterate the above procedure to calculate higher-order derivatives of the GMT's motion; however, we run into some issues. In particular, each measurement (even in the absence of noise) comes with a rounding error, which comes from the picture being pixelated. This error is small and does not significantly affect the calculated position or velocity. If, however, one tries to calculate higher derivatives, this rounding error becomes sizable and leads to considerable differences between the estimated and real values, especially when the time steps of the algorithm are small. We will thus restrict our pose estimation for the target to finding its horizontal position and velocity. The latter problem will be further discussed in Chapter 6.

4.1.2 Control Law

Once we have approximations for the target's position and velocity we can feed those values into the controller described in Section 2. We remark that in that section we assumed that we knew not only $x_T$, $y_T$, $\dot x_T$, $\dot y_T$, but also three higher derivatives of the target's motion. As already explained, finding estimates for the latter is difficult and practically not feasible. Thus we will simply set those values to $0$ whenever they are not available. The particular values we choose for the PID controllers are as follows (see Section 2 for notation):

Table 4.1: PID controller gains

Control channel    P     I     D
$(\dot x)_d$       2     0     0
$(\dot y)_d$       2     0     0
$\theta_d$         30    0     5
$\phi_d$           30    0     5
$(\dot\theta)_d$   1     0     0
$(\dot\phi)_d$     1     0     0
$(\dot z)_d$       0.5   0     0.05
$(\dot\psi)_d$     10    0.1   0.5

We also have $\lambda_z = \lambda_\psi = 1$ and $\lambda_x = \lambda_y = 2$. We remark that not all PID controllers are purely proportional. Specifically, we have a derivative term in the pitch and roll controllers (the $\theta_d$ and $\phi_d$ loops), which was introduced to reduce the overshoot of those loops. In addition, we added a small integral term to the $\dot\psi$ controller to reduce the stationary error to $0$. The proposed controller is analyzed in the next section in a variety of cases.

4.2 Simulations and Results

We now examine the proposed model in four different situations. In each situation we compare four different implementations. Specifically, we will consider

the full and the approximation model, either when the target pose is estimated from visual data as described in Section 4.1 above, or when all relevant derivatives of the target's motion (up to and including the fourth) are given directly, i.e. no visual measurement is performed. We have several goals:

1. Demonstrate that the approximation and full model behave similarly.
2. Demonstrate that the approximation and full model are capable of target tracking when all information on the GMT motion is provided.
3. Demonstrate that even if only partial information (i.e. only horizontal position and velocity) for the GMT is known, the control law still ensures successful target tracking.

For brevity we denote the four models as:

PBC   - the full model with all derivatives given,
PBCL  - the approximation model with all derivatives given,
PBVS  - the full model with visual measurements,
PBVSL - the approximation model with visual measurements.

4.2.1 Stationary Target

We start with the case when the target is stationary at position (10, 15), and we want to reach the desired height of 20 m and yaw angle $\pi/4$ rad from the initial state $r_A = [0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]$. The results are presented in Figures 4.2 and 4.3.

Figure 4.2: Horizontal position of the four models (Stationary target). (a) Position in X. (b) Position in Y.

As can be seen from the above figures, there is hardly any difference between the behavior of the four models. Moreover, they adequately satisfy the conditions of the problem,

reaching the desired horizontal position in about 10 seconds and the desired height in about 20. The desired yaw is reached in about 5 seconds, and we see that there is very little overshoot in any of the four measured quantities.

Figure 4.3: Height and yaw of the four models (Stationary target). (a) Position in Z. (b) Yaw.

Finally, we present a measure of the thrust for the four models. We recall that an important assumption of our approximation model is that the thrust should be close to $mg$. As Figure 4.4a shows, this is indeed the case. Essentially, the thrust changes initially to create an upward velocity to reach the desired height, but very quickly goes back to $mg$. After the quadrotor reaches the desired height around the 15th second, the velocity is decreased (the thrust becomes less than $mg$) and afterwards the thrust quickly stabilizes. The latter behavior is a result of the saturation properties of the function $h$ mentioned near the end of Section 2.3. Indeed, in Figure 4.4b we performed a simulation where the function $h(x) = \frac{x}{1+|x|}$ in the z-control is replaced by $x$, and as we can see the change in thrust is much more dramatic.

Figure 4.4: Comparison of thrust. (a) Thrust with the function h. (b) Thrust with x.
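The effect of the saturation function can be seen directly from its values: the climb-rate command produced through $h$ is bounded, whereas a linear term keeps growing with the altitude error, which is what produces the large thrust transient in Figure 4.4b. A minimal comparison (with an illustrative gain $\lambda_z = 1$, not a claim about the exact reference computation):

```python
def h(x):
    # Saturation function used in the z-control; bounded by 1 in absolute value.
    return x / (1 + abs(x))

lam_z = 1.0   # illustrative gain
for z_err in [0.5, 2.0, 10.0, 50.0]:
    print(z_err, lam_z * h(z_err), lam_z * z_err)
# The h-based command saturates near lam_z, while the linear command grows without bound.
```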

4.2.2 Constant Velocity

We consider the case when the target moves with constant velocity, starting from position (0, 0). The horizontal velocity is (2, 3) m/s, and we again want to reach the desired height of 20 m and yaw angle $\pi/4$ rad from the initial state $r_A = [0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]$. The results are presented in Figures 4.5 and 4.6.

Figure 4.5: Horizontal position of the four models (Constant velocity). (a) Position in X. (b) Position in Y.

Figure 4.6: Height and yaw of the four models (Constant velocity). (a) Position in Z. (b) Yaw.

As can be seen from the above figures, there is hardly any difference between the

behavior of the four models. Moreover, they adequately satisfy the conditions of the problem, matching the target's velocity in about 2 seconds and reaching the desired height in about 20. The desired yaw is reached in about 5 seconds, and we see that there is very little overshoot in any of the four measured quantities.

4.2.3 Constant Acceleration

We consider the case when the target moves with constant acceleration, starting from rest at position (0, 0) m. The horizontal acceleration is (1, 2) m/s², and we also want to reach the desired height of 10 m and yaw angle $\pi$ rad from the initial state $r_A = [10, 10, 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]$. The results are presented in Figures 4.7 and 4.8.

Figure 4.7: Horizontal position of the four models (Constant acceleration). (a) Position in X. (b) Position in Y.

As can be seen from the above figures, there is hardly any difference between the behavior of the four models. However, one can see that the models where the GMT's acceleration is directly fed into the system converge faster. The latter is expected; however, it is important to observe that even the models where only the GMT's position and velocity are measured manage to successfully track the target. All the models adequately satisfy the conditions of the problem, matching the target's displacement, velocity and acceleration in about 10 seconds and reaching the desired height in about 20. The desired yaw is reached in about 5 seconds, and we see that there is very little overshoot in any

of the four measured quantities.

Figure 4.8: Height and yaw of the four models (Constant acceleration). (a) Position in Z. (b) Yaw.

4.2.4 Circular Motion

We finally consider the case when the target moves with constant speed 4 m/s around a circle centered at (0, 0) m with radius 4 m. We also want to reach the desired height of 10 m and yaw angle $\pi/3$ rad from the initial state $r_A = [10, 10, 20, 0.3, 0.2, 0.8, 0, 0, 0, 0, 0, 0, 0]$. The results are presented in Figures 4.9 and 4.10.

Figure 4.9: Horizontal position of the four models (Circular motion). (a) Position in X. (b) Position in Y.

As can be seen from the above figures, there is a significant difference between the behavior of the four models. One can see that the models where the GMT's higher-order position derivatives are directly fed into the system converge to the target, eventually driving the displacement to 0. The models where only velocity and position are measured fail to drive the displacement to 0; however, they still remain at a bounded distance from the target. All the models manage to reach the desired height and yaw angle, although we see that the full model's yaw oscillates around the target yaw. The latter can be explained by the fact that in the full model the yaw rate depends on the pitch, roll and their derivatives. We no longer have the small-angle approximation: in fact, θ and φ both oscillate between −20 and 20 degrees, and their rates between −20 deg/s and 20 deg/s. These are of course not small, which leads to this behavior. Finally, we observe that there is a significant difference between the full and approximation models, which is again due to the small-angle condition failing in this case. We present a bird's-eye view of the horizontal positions of the four models in Figure 4.11, where the differences are especially evident.

Figure 4.10: Height and yaw of the four models (Circular motion). (a) Position in Z. (b) Yaw.

Figure 4.11: Bird's-eye view of horizontal position.

4.3 Summary

This chapter presented a new position based model for target tracking, based on our earlier control algorithm. The model enables a quadrotor to achieve a target height and yaw while successfully following a GMT. The position based model was implemented in the setting of visual servoing, where only the target's position and velocity are measured from visual data. The performance of the tracking algorithm was validated through numerical simulations in a large variety of situations.

Chapter 5

Image-Based Visual Servoing

5.1 Introduction

This chapter presents several image-based visual servoing (IBVS) tracking algorithms specifically developed for quadrotor UAVs. These algorithms are inspired by the classical visual servo control theory originally designed for fully actuated six-degree-of-freedom robotic systems [8], [9], [18]. The proposed algorithms use visual measurements provided by an onboard camera to guide the quadrotor UAV to complete a desired task, which here includes tracking both stationary and moving targets. Two of the tracking laws use only image coordinates in the feedback loop, without reconstructing any information about the target motion in Cartesian space. The third law uses a combination of visual measurements and an estimate of the target velocity in the x and y directions obtained from known image features. The performance of all IBVS tracking algorithms is tested through numerical simulations.

5.2 Camera Model and Image Plane Dynamics

The camera model considered in this problem is based on the perspective projection model described in Chapter 4, Section 4.1. When the quadrotor is moving with translational velocity $v_{tc} = [v_x\ v_y\ v_z]^T$ and rotational velocity $v_{rc} = [\omega_x\ \omega_y\ \omega_z]^T$, the dynamics of the observed point $P_c = (X_c, Y_c, Z_c)$ with respect to the camera frame are given by:
$$
\dot X_c = -v_x - \omega_y Z_c + \omega_z Y_c
$$
$$
\dot Y_c = -v_y - \omega_z X_c + \omega_x Z_c
$$

$$
\dot Z_c = -v_z - \omega_x Y_c + \omega_y X_c
$$
To obtain the dynamics in the image plane, we first take the time derivative of the projection equations:
$$
\dot x = \dot X_c/Z_c - X_c\dot Z_c/Z_c^{2} = (\dot X_c - x\dot Z_c)/Z_c
$$
$$
\dot y = \dot Y_c/Z_c - Y_c\dot Z_c/Z_c^{2} = (\dot Y_c - y\dot Z_c)/Z_c
$$
Substituting the relations for $\dot X_c$, $\dot Y_c$, and $\dot Z_c$ into the time derivative of the projection equations gives:
$$
\dot x = -v_x/Z_c + x v_z/Z_c + xy\,\omega_x - (1 + x^{2})\,\omega_y + y\,\omega_z
$$
$$
\dot y = -v_y/Z_c + y v_z/Z_c + (1 + y^{2})\,\omega_x - xy\,\omega_y - x\,\omega_z
$$
Let $s = [x\ y]^T$ and $v_c = [v_x\ v_y\ v_z\ \omega_x\ \omega_y\ \omega_z]^T$. Then the relationship between the time variation of the image features $\dot s$ and the camera velocity can be expressed in vector form:
$$
\dot s = L_e v_c
$$
where $L_e$ is the interaction matrix:
$$
L_e = \begin{bmatrix}-1/Z_c & 0 & x/Z_c & xy & -(1+x^{2}) & y\\ 0 & -1/Z_c & y/Z_c & 1+y^{2} & -xy & -x\end{bmatrix}
$$
The relationship between the dynamics of the point in the image plane and the camera velocity is a key component in the construction of the tracking control law for the quadrotor UAV, which is described in detail in the next section.
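The interaction matrix can be assembled directly from the normalized image coordinates and an estimate of the depth. A minimal sketch, assuming NumPy and using exactly the expression above:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix L_e for one image point (x, y) with depth Z > 0;
    columns correspond to (v_x, v_y, v_z, w_x, w_y, w_z)."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,    -(1 + x**2),  y],
        [0.0,      -1.0 / Z, y / Z, 1 + y**2, -x * y,      -x],
    ])

# Example: a point slightly off the optical axis, seen at 10 m depth.
print(interaction_matrix(0.1, -0.05, 10.0))
```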

5.3 Classical IBVS Control Design for the Quadrotor

5.3.1 Control Law in Image Space

The main objective of the classical visual servoing control scheme is to minimize the error between the current image coordinates of the observed target and the desired image coordinates:
$$
e(t) = s(t) - s^*
$$
In this problem the target is modeled as a square in the image plane, and four points of interest are considered when constructing the image feature vector, so that $s = [x_1\ y_1\ \ldots\ x_4\ y_4]^T$ contains the set of image coordinates of each vertex of the square. Similarly, the vector $s^* = [x_1^*\ y_1^*\ \ldots\ x_4^*\ y_4^*]^T$ contains the desired image coordinates of the square vertices. An example of the image plane view of the target at two different quadrotor poses is shown in Figures 5.1 and 5.2.

Figure 5.1: Desired Target View in the Image Plane

Figure 5.2: Initial Target View in the Image Plane

In order to ensure that the error decays to zero exponentially fast, we require that the error vector satisfies the following differential equation:
$$
\dot e = -\lambda e
$$
Using the relationship between the image features and the camera spatial velocity derived earlier, $\dot s = L_e v_c$, where $L_e \in \mathbb{R}^{8\times 6}$ is formed by stacking the interaction matrices associated with each of the four points of interest of the target, we obtain:
$$
\dot e = L_e v_c = -\lambda e
$$
$$
v_c = [v_{cx}\ v_{cy}\ v_{cz}\ p\ q\ r]^T = -\lambda L_e^{+} e
$$
where $L_e^{+} = (L_e^T L_e)^{-1} L_e^T$ is the Moore-Penrose pseudo-inverse of $L_e$. This method provides a way to calculate the reference velocity, expressed in the camera frame, that would drive the system to the desired state if the observed object is stationary. The classical IBVS method assumes that we can control both the translational and rotational

motions of the camera. However, for the underactuated quadrotor UAV only four of these states are directly controlled. In particular, as outlined in the quadrotor control section, we can provide a reference signal for the inertial velocities in the X, Y, Z directions and for the yaw rate $\dot\psi$. The quadrotor states θ, φ are not directly controlled, and the reference signals for these states are derived from the reference signals for the x and y inertial velocity components by passing the desired signal through several PID feedback loops. Since the IBVS law generates the desired velocity in the body-fixed frame, the translational components are transformed to the inertial frame and the angular velocities are transformed to Euler angular rates as follows:
$$
[\dot X_{ref}\ \dot Y_{ref}\ \dot Z_{ref}]^T = C_{BI}\,[v_{cx}\ v_{cy}\ v_{cz}]^T
$$
$$
[\dot\phi_{ref}\ \dot\theta_{ref}\ \dot\psi_{ref}]^T = C_2\,[p\ q\ r]^T
$$
The final reference signal for the controlled quadrotor states is $[\dot X_{ref}\ \dot Y_{ref}\ \dot Z_{ref}\ \dot\psi_{ref}]$. A block diagram of the closed-loop dynamics is presented in Figure 5.3.

Figure 5.3: IBVS Block Diagram
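Putting the pieces of this section together, the sketch below performs one control update for a four-point target: it stacks the per-point interaction matrices, applies the pseudo-inverse law $v_c = -\lambda L_e^{+} e$, and maps the result to the four quadrotor references. The transformations $C_{BI}$ and $C_2$ are passed in as arguments, since their exact definitions come from the quadrotor model chapters; the depths are assumed to be estimated separately.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,    -(1 + x**2),  y],
        [0.0,      -1.0 / Z, y / Z, 1 + y**2, -x * y,      -x],
    ])

def classical_ibvs_references(s, s_star, depths, lam, C_BI, C_2):
    """One IBVS update for four image points.

    s, s_star -- current and desired image features, arrays of shape (4, 2)
    depths    -- estimated depth of each point, shape (4,)
    lam       -- control gain lambda
    C_BI, C_2 -- transformations to the inertial frame / Euler angular rates
    """
    e = (s - s_star).reshape(-1)                         # stacked 8-vector error
    L = np.vstack([interaction_matrix(x, y, Z)           # stacked 8x6 interaction matrix
                   for (x, y), Z in zip(s, depths)])
    v_c = -lam * np.linalg.pinv(L) @ e                   # [v_cx, v_cy, v_cz, p, q, r]
    xyz_ref = C_BI @ v_c[:3]                             # translational references
    euler_rate_ref = C_2 @ v_c[3:]                       # (phi_dot, theta_dot, psi_dot)
    return np.array([xyz_ref[0], xyz_ref[1], xyz_ref[2], euler_rate_ref[2]])

# Example call with identity transforms and all points at 10 m depth.
s = np.array([[0.1, 0.1], [0.1, -0.1], [-0.1, -0.1], [-0.1, 0.1]])
s_star = 0.5 * s
print(classical_ibvs_references(s, s_star, np.full(4, 10.0), 0.5, np.eye(3), np.eye(3)))
```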

5.3.2 Moving Target

When the observed target is moving, the equation relating the time variation of the error between the current and desired image coordinates to the camera velocity is modified to take into account the unknown target motion:
$$
\dot e = \dot s = L_e v_c + \frac{\partial e}{\partial t}
$$
Setting $\dot e = -\lambda e$ for an exponential decay of the error, the new control law becomes:
$$
v_c = -\lambda L_e^{+} e - L_e^{+}\,\widehat{\frac{\partial e}{\partial t}}
$$
where $\widehat{\dfrac{\partial e}{\partial t}}$ is an estimate of $\dfrac{\partial e}{\partial t}$, which can be obtained using the error and velocity components from the previous time step:
$$
\widehat{\frac{\partial e}{\partial t}} = \big(e(t) - e(t - \Delta t)\big)/\Delta t - L_e\,v_c(t - \Delta t)
$$
In general, the motion compensation term might be omitted if a larger control gain $\lambda$ is used. However, for the quadrotor system a large gain leads to an undesirable system response with large oscillations, and adding the motion compensation term is preferred.
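The estimate $\widehat{\partial e/\partial t}$ only requires remembering the error, the interaction matrix and the command from the previous time step. A minimal sketch of the moving-target law above (the numeric values are placeholders):

```python
import numpy as np

def estimate_error_rate(e_now, e_prev, L_prev, v_c_prev, dt):
    # \hat{de/dt} = (e(t) - e(t - dt)) / dt - L_e(t - dt) v_c(t - dt)
    return (e_now - e_prev) / dt - L_prev @ v_c_prev

def ibvs_moving_target(e_now, L_now, e_prev, L_prev, v_c_prev, lam, dt):
    L_pinv = np.linalg.pinv(L_now)
    e_t_hat = estimate_error_rate(e_now, e_prev, L_prev, v_c_prev, dt)
    return -lam * L_pinv @ e_now - L_pinv @ e_t_hat   # camera-frame velocity command

# Tiny numeric example with the 2x6 interaction matrix of a single point.
L = np.array([[-0.1, 0.0, 0.01, 0.01, -1.01, 0.1],
              [0.0, -0.1, -0.005, 1.0025, -0.005, -0.1]])
e_prev, e_now = np.array([0.12, -0.06]), np.array([0.10, -0.05])
print(ibvs_moving_target(e_now, L, e_prev, L, np.zeros(6), 0.5, 0.05))
```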

5.4 IBVS with Virtual Camera

In this section we describe an algorithm based on applying the ideas from the previous section to a virtual camera. This modification aims to compensate for the underactuated nature of the UAV and to improve the tracking capabilities of the algorithm.

5.4.1 Classical IBVS with Virtual Camera

The classical IBVS approach is designed for controlling 6 degrees of freedom and for tasks in which the controlled system must adjust both its position and orientation with respect to an observed object. Since moving target tracking applications are the primary focus of this project, the quadrotor UAV is only required to adjust its position to complete the desired task. Ideally, in target tracking tasks the same orientation must be maintained and the generated reference signals for the angular velocities of the camera are zero. However, for a quadrotor UAV motion in the x and y directions is associated with changes in the pitch and roll angles. The tilting of the quadrotor leads to several problems which have not been addressed for moving target tracking. The first problem is that the tilting might cause the target to disappear from the field of view of the camera. The proposed solution is discussed in detail in Chapter 6. The second problem is that tilting changes the orientation of the target with respect to the quadrotor, which is interpreted as error in image space. The IBVS control method would correct this error by generating a counteracting signal for the angular velocity components of the camera. However, due to the underactuated property the quadrotor is unable to match these signals and only responds to the changes in the translational velocity. Since the uncorrected roll and pitch angles continue to increase, the error in the image becomes larger. The IBVS control compensates for this error by generating larger reference velocities. In the stationary target case these undesirable effects become significant when the quadrotor's initial position is far from the target. In the moving target case, the mismatch between the actual and desired angular rates leads to oscillations in the quadrotor position.

To avoid the undesirable effects associated with the quadrotor tilt in the classical IBVS control, a modification of the method is proposed that is based on virtual image measurements. The control law for the three velocity components of the quadrotor and the yaw rate is obtained using virtual image measurements taken from a virtual camera that is aligned with the real camera but is not free to rotate. In particular, the virtual camera has the same position as the real camera, but is oriented to always point downwards (so that its roll and pitch are both 0 for all time and its yaw is the same as that of the quadrotor). The stable point of the IBVS algorithm corresponds to the quadrotor hovering at some desired height directly above the target. However, if the target is moving (especially with some non-zero acceleration) the quadrotor is unable to hover above the target, since to maintain its horizontal speed it requires non-zero roll and pitch. The virtual camera, however, is capable of doing this, since its orientation is decoupled from that of the quadrotor. In addition, reaching the stable point for the virtual camera means that the real camera, and hence the quadrotor, is directly above the target, which agrees with our target tracking task. The controller equation is given by:
$$
v_c = -\lambda L_{e_v}^{+} e_v - L_{e_v}^{+}\,\widehat{\frac{\partial e_v}{\partial t}}
$$
where the error $e_v$ is the difference between the virtual image coordinates and the desired image coordinates, and the image Jacobian $L_{e_v}$ contains the virtual image coordinates of the target. The virtual image coordinates $[x_v\ y_v]^T$ are derived from the real coordinates $[x\ y]^T$ as follows:
$$
X_c = x Z_c, \qquad Y_c = y Z_c,
$$
where $[X_c\ Y_c\ Z_c]^T$ denote the 3D coordinates of the target in the camera frame,
$$
\begin{bmatrix}X_{cv}\\ Y_{cv}\\ Z_{cv}\end{bmatrix}
= \begin{bmatrix}
\cos\psi\cos\theta & \cos\psi\sin\theta\sin\phi - \sin\psi\cos\phi & \cos\psi\sin\theta\cos\phi + \sin\psi\sin\phi\\
\sin\psi\cos\theta & \sin\psi\sin\theta\sin\phi + \cos\psi\cos\phi & \sin\psi\sin\theta\cos\phi - \cos\psi\sin\phi\\
-\sin\theta & \cos\theta\sin\phi & \cos\theta\cos\phi
\end{bmatrix}
\begin{bmatrix}X_c\\ Y_c\\ Z_c\end{bmatrix},
$$
$$
\begin{bmatrix}x_v\\ y_v\end{bmatrix} = \begin{bmatrix}X_{cv}/Z_{cv}\\ Y_{cv}/Z_{cv}\end{bmatrix}.
$$
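A short sketch of the virtual-camera mapping above; it follows the displayed rotation matrix, with the quadrotor attitude $(\phi, \theta, \psi)$ and the target depth $Z_c$ assumed known.

```python
import numpy as np

def virtual_image_coordinates(x, y, Z_c, phi, theta, psi):
    """Map real image coordinates (x, y) with depth Z_c to virtual image coordinates."""
    X_c, Y_c = x * Z_c, y * Z_c                  # 3D target coordinates in the camera frame
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    R = np.array([                               # rotation used in the expression above
        [cps * cth, cps * sth * sph - sps * cph, cps * sth * cph + sps * sph],
        [sps * cth, sps * sth * sph + cps * cph, sps * sth * cph - cps * sph],
        [-sth,      cth * sph,                   cth * cph],
    ])
    X_cv, Y_cv, Z_cv = R @ np.array([X_c, Y_c, Z_c])
    return np.array([X_cv / Z_cv, Y_cv / Z_cv])

# With zero roll, pitch and yaw the rotation is the identity, so virtual and real
# coordinates coincide.
print(virtual_image_coordinates(0.1, -0.05, 10.0, 0.0, 0.0, 0.0))
```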

5.4.2 IBVS with GMT Velocity Estimation

In this subsection we propose a hybrid model for the virtual camera, which uses some of the features of the IBVS method described above and of the PBVS method developed in Chapter 4. This model is used to improve the tracking capabilities of the IBVS scheme for a moving target. In particular, the decay of the error between the position of the target and the position of the quadrotor is realized in image space using the IBVS scheme, while the motion compensation term is based on an estimate of the real target velocity reconstructed from 3D inertial coordinates. This control law is also based on the virtual camera model and is a combination of the PBVS and IBVS approaches. The estimation of the GMT inertial velocity components $v_x$, $v_y$ is described in Section 4.1. The process of obtaining the final reference signals for the quadrotor velocity is as follows:

1. Obtain the velocity reference in the camera frame from the IBVS virtual camera model, $v_c = [v_{cx}\ v_{cy}\ v_{cz}\ p\ q\ r]^T = -\lambda L_{e_v}^{+} e_v$, where $L_{e_v}$ and $e_v$ are the virtual interaction matrix and the error between the virtual and desired image coordinates, respectively.
2. Transform the translational velocities to the inertial frame: $[\dot X_{ref}\ \dot Y_{ref}\ \dot Z_{ref}]^T = C_{BI}\,[v_{cx}\ v_{cy}\ v_{cz}]^T$.
3. Transform the camera angular velocities to Euler angular rates: $[\dot\phi_{ref}\ \dot\theta_{ref}\ \dot\psi_{ref}]^T = C_2\,[p\ q\ r]^T$.
4. The final quadrotor velocity references are: $\dot X_{ref} + v_x$, $\dot Y_{ref} + v_y$, $\dot Z_{ref}$, $\dot\psi_{ref}$.
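The four steps compose into a short routine. The sketch below reuses the stacked virtual interaction matrix $L_{e_v}$ and error $e_v$, takes $C_{BI}$ and $C_2$ as inputs as in the earlier sketches, and adds the estimated GMT velocity $(v_x, v_y)$ from Section 4.1; names and numeric values are illustrative.

```python
import numpy as np

def hybrid_references(e_v, L_ev, lam, C_BI, C_2, gmt_velocity):
    """Hybrid IBVS/PBVS reference generation following steps 1-4 above."""
    v_c = -lam * np.linalg.pinv(L_ev) @ e_v      # step 1: virtual-camera IBVS velocity
    xyz_ref = C_BI @ v_c[:3]                     # step 2: translational part to the inertial frame
    euler_rate_ref = C_2 @ v_c[3:]               # step 3: angular part to Euler angular rates
    vx, vy = gmt_velocity                        # step 4: add the estimated GMT velocity
    return np.array([xyz_ref[0] + vx, xyz_ref[1] + vy, xyz_ref[2], euler_rate_ref[2]])

# Example with an 8x6 stacked virtual interaction matrix and a small uniform error.
L_ev = np.tile(np.array([[-0.1, 0.0, 0.01, 0.01, -1.01, 0.1],
                         [0.0, -0.1, -0.005, 1.0025, -0.005, -0.1]]), (4, 1))
e_v = 0.05 * np.ones(8)
print(hybrid_references(e_v, L_ev, 0.5, np.eye(3), np.eye(3), (1.0, 0.5)))
```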

5.5 Simulations and Results

In this section we perform numerical experiments for the proposed models. In particular, we first compare the classical IBVS model with our proposed model, to demonstrate the effect of replacing the real camera measurements with virtual ones. We then examine the performance of the hybrid model for a moving target. For brevity we denote the three models as:

CIBVS  - the classical IBVS model,
VIBVS  - the virtual camera model,
Hybrid - the hybrid model.

The particular values we choose for the PID controllers are as described in Section 4.1.2.

5.5.1 Stationary Target

We start with the case when the target is stationary at position (0, 0) and the quadrotor starts from the initial state $r_A = [1, 2, 20, \pi/12, \pi/24, \pi/6, 0, 0, 0, 0, 0, 0, 0]$. The initial and desired images of the target are given in Figure 5.4. We remark that the desired image is taken from a height of 10 m directly above the target with 0 pitch, roll and yaw. We will compare the CIBVS and VIBVS models. The results are presented in Figures 5.5, 5.6 and 5.7.

Figure 5.4: Initial and desired images (Stationary target). (a) Initial image. (b) Desired image.

From the figures we can see that both algorithms converge to

Figure 5.5: Horizontal position of the models (Stationary target). (a) Position in X. (b) Position in Y.

Figure 5.6: Height and yaw of the models (Stationary target). (a) Position in Z. (b) Yaw.

the desired value; however, the VIBVS model is much better behaved than the CIBVS. To understand this difference we can look at the error function for one of the points, given in Figure 5.8. As can be seen from Figure 5.8, the error for the CIBVS model (in red) is much more erratic, the reason being that the model interprets changes in pitch and roll as errors in the image. On the other hand, since pitch and roll are discounted in the VIBVS model, we see that the error is much better behaved. This leads to a smoother signal for the velocities, which in turn improves the performance of the algorithm.

Figure 5.7: Pitch and roll of the models (Stationary target). (a) Pitch. (b) Roll.

Figure 5.8: Error for the models (Stationary target).

Nevertheless, we remark that both algorithms reach the desired image, although in both cases the target leaves the field of view. In the VIBVS case it only does so for a brief period, while for the CIBVS the target is not in the FOV for a considerable portion of the simulation. The trajectories of the target in the FOV for the two models are given in Figure 5.9.

Figure 5.9: Trajectory of the target for the models (Stationary target). (a) Trajectory for VIBVS. (b) Trajectory for CIBVS.

5.5.2 Moving Target

We consider the case when the target starts from position (0, 0) and the quadrotor starts from the initial state $r_A = [0, 0, 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]$. The target moves with constant velocity (1, 0.5) m/s for the first 15 seconds and then makes a sharp turn, moving with constant velocity (2, 0.5) m/s for the remainder of the simulation. The initial and desired images of the target are given in Figure 5.10. We remark that the desired image is taken from a height of 10 m directly above the target with 0 pitch, roll and yaw. We will compare the CIBVS and Hybrid models. The results are presented in Figures 5.11, 5.12 and 5.13. As can be seen from the figures, the behavior of the two models is very similar and they both achieve tracking of the target. The Hybrid model appears to react slightly better to changes in the velocity and also converges slightly faster to the target's position.