Implementation Considerations for Vision-Aided Inertial Navigation. Gregory L. Andrews


Implementation Considerations for Vision-Aided Inertial Navigation

A Thesis Presented by Gregory L. Andrews to the Department of Electrical and Computer Engineering in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering in the field of Control Systems and Signal Processing

Northeastern University
Boston, Massachusetts
March 2008


Implementation Considerations for Vision-Aided Inertial Navigation

by Gregory L. Andrews

Submitted to the Department of Electrical and Computer Engineering on March 11, 2008, in partial fulfillment of the requirements for the degree of Master of Science in Control Systems and Signal Processing

Abstract

As UAVs and other unmanned vehicles continue to become smaller, so too does their available processing and sensing capability. For current UAVs, size, weight, power, and cost are all driving factors behind the selection of the onboard avionics, and thus vehicle capability. Nearly all vehicles used in practice today rely on GPS for their navigation abilities. While this is not a compromise in performance for many applications, it does not work well in the environments that would benefit most from the use of small unmanned vehicles: indoors, under water, and in dense urban settings. Nearly all small vehicles, by virtue of their intended purpose, already include cameras. In addition, most include inexpensive IMUs, especially those with fast dynamics such as flying vehicles. Even the highest quality IMUs are insufficient for navigation without an external measurement to aid in drift reduction. Likewise, a monocular camera is unable to provide an accurate scale of the environment without some a priori knowledge of the scene. A recent focus in the available literature has been on fusing these two complementary sensing modalities for use in GPS-challenged navigation. Many approaches are based on the well known Simultaneous Localization and Mapping algorithm. These approaches can provide accurate navigation solutions, but at a high computational cost. This thesis focuses on a vision-aided inertial navigation strategy suitable for the small embedded processors found in many of today's unmanned vehicles. A suboptimal covariance analysis and error budget are presented which give a sound theoretical backing to several design decisions related to algorithm efficiency. Analysis for several parameters is presented, including reduced state space, measurement error, and initial uncertainty of the scene. Consideration is given to the numerical dynamic range needed for the Kalman filter implementation, and matrix factorization methods are employed to ensure numerical stability in the presence of the low precision integer math found on many low-cost embedded processors.

Approved for public release; distribution is unlimited


Acknowledgments

This thesis would not have been possible without the support, insights, and encouragement of many. To everyone that has helped me along the way, I give my sincerest thanks.

First, I must express gratitude to my advisor and teacher at Northeastern University, Professor Octavia Camps. Her insights into the world of computer vision proved to be invaluable to this work.

To all of my mentors and colleagues at Draper Laboratory, I give you my utmost respect. In particular, I would like to thank Paul DeBitetto, Brent Appleby, Matthew Bottkol, and Dale Landis. Paul and Brent's experience and encouragement have made my time here both enjoyable and extremely educational. Without Matt and Dale's tremendous knowledge of navigation and estimation, this thesis would never have been completed.

To Spencer Ahrens at MIT, I thank you for all of your help in collecting the data sets used in this thesis.

To my family, I thank you for a lifetime of support and encouragement. Without you, I would still be frying chicken instead of writing a thesis.

Finally, to my beautiful girlfriend Natalie, thank you for your love and support. I hope that I may continue to enjoy your love for years to come.


Contents

1 Introduction
  1.1 Problem Motivation
  1.2 Related Work
  1.3 Thesis Organization
2 Inertial Navigation Review
  2.1 Coordinate Frame Definitions
  2.2 Coordinate Frame Transformations
  2.3 Navigation Equations
  2.4 Inertial Measurement Unit Error Model
3 Estimator Design
  3.1 Linearized Error Model
  3.2 Filter Implementation
  3.3 Time Propagation
  3.4 Measurement Update
4 Vision-based Measurement Update
  4.1 Image Feature Tracking and Selection
    4.1.1 Feature Tracking
    4.1.2 Feature Selection
  4.2 Measurement Equation
  4.3 Improvements for Measurement Linearity
  4.4 Covariance Initialization
5 Suboptimal Covariance Analysis
  5.1 Sensitivity Analysis
  5.2 Error Budget Construction
6 Results
  6.1 Simulation Results
    Simulation
    Dynamic Model
    Sensor Model
    Synthetic Image Generation
    Simulated Flight
    Simulated Closed-loop Hover
  6.2 Experimental Results
7 Conclusions and Future Work
  7.1 Thesis Summary and Conclusions
  7.2 Future Research Directions

List of Figures

2-1 Coordinate Frame Definitions
2-2 Navigation Equation Mechanization [12]
2-3 Allan Variance plot of Experimental and Simulated Gyro Data
4-1 Scale ambiguity with bearing-only measurements
4-2 Pyramidal Feature Tracking
4-3 Camera and image frame definitions
Converged state estimate example
Divergent state estimate examples
Navigation sensitivity to initial range uncertainty
Inverse Depth Likelihood
Feature parameterization and measurement equation [25]
Suboptimal Covariance Analysis Flow Diagram [18]
Navigation Filter Error Budget
Simulation Architecture
Camera Motion Definitions
Example Synthetic Imagery and Feature Selection
Flight path overlaid on overhead imagery
Simulated flight trajectory performance comparison
Position and velocity estimates over the course of the entire flight
Velocity estimation results for simulated hover
Position estimation error for simulated hover
6-9 Closed-loop position holding capability
Effect of shutting off vision updates in a closed-loop hover
Indoor UAV testbed at MIT's Aerospace Controls Lab [35]
Video captured with selected features overlaid
Navigation performance with real imagery
Experimental RSS position error and uncertainty bounds
Experimental attitude estimation performance
IMU bias estimation results
Experimental unaided RSS position error and uncertainty bounds with IMU bias
Experimental unaided attitude estimation performance with IMU bias
Experimental unaided RSS position error and uncertainty bounds without IMU bias
Experimental unaided attitude estimation performance without IMU bias

Chapter 1
Introduction

This thesis focuses on a vision-aided inertial navigation strategy suitable for implementation in the small embedded processors found in many of today's unmanned vehicles. A numerically stable estimation algorithm is explored which provides reliable navigation performance in the face of round-off errors, and a vision-based measurement update is developed which fits into the framework of the stabilized navigation filter. A covariance analysis is performed to determine the effects of various error sources on the navigation performance, and an estimation filter is designed to balance those effects with the computational load of the filter. Results from both simulated and experimental scenarios are discussed, and a clear benefit of the vision-aided algorithm is shown in situations where an external position reference does not exist.

1.1 Problem Motivation

Unmanned vehicles are playing an increasingly important role in modern military applications [16][28]. Currently, unmanned aerial vehicles (UAVs) provide critical over-the-hill reconnaissance to ground soldiers. With a UAV, ground forces are able to have their own personal eye in the sky, and can receive advanced warning of surrounding threats. It is foreseeable that as the technology continues to improve, the application space of unmanned systems will grow significantly. Near-term scenarios might include the aiding of first responders to natural disasters, and border defense

applications for homeland security. As one looks further into the future of possible UAV applications, however, it is clear that the current systems have severe limitations. Future scenarios will almost certainly include the need for operation in the urban canyon, i.e. below roof tops and in dense structure environments. Likewise, the need for indoor operation may also be seen on the horizon. The ability for soldiers, law enforcement, and search-and-rescue personnel to have a glimpse of the interior of a building before entering is an attractive prospect.

The limiting factors alluded to earlier are numerous. Communication links to the vehicles suffer from latency, dropouts, and interference. A high throughput, robust communication link is of paramount importance to today's UAV missions. Generally, the operation of any robotic vehicle is done from a camera view over a wireless link. Latencies and dropouts in that link can lead to slow progress or even complete mission failure in certain situations. This is particularly true of aerial vehicles, as they tend to have fast dynamics, and a latent communication link is a recipe for disaster.

Another non-trivial limitation of current UAV systems is the ability to navigate without an absolute reference, such as the global positioning system (GPS). The loss, or complete absence, of GPS takes away several platform capabilities. The position of the vehicle relative to the operator becomes unknown, and all navigation must be done by pilotage using the remaining on-board sensors, such as a magnetic compass and camera. Typically, small UAVs contain an inexpensive inertial measurement unit (IMU) for stability augmentation. The specific forces and angular rates sensed by such a device are useful for stabilizing the attitude dynamics of the vehicle in a feedback control system. With a very high quality IMU, some semblance of position estimation is possible using inertial navigation techniques. These position estimates, however, suffer greatly from the drift characteristics inherent to all inertial sensors, regardless of size and cost. In a traditional navigation system found on unmanned vehicles, the IMU is combined with GPS in order to form a stable solution. The GPS measurement, which provides a low-rate, low-drift position estimate, is fused with an IMU measurement providing high-frequency rate information prone to large drift errors. These two complementary sensing modalities can provide an accurate

navigation solution in the presence of fast dynamics. Difficulties arise when the GPS signal is blocked by buildings, degraded by multi-path effects, or jammed. In these situations, the navigation solution quickly becomes invalid, and the vehicle must either return to a location with GPS or abort its mission altogether. These problems are more likely to occur as UAVs begin operating in increasingly challenging environments, such as indoors. For certain classes of vehicles such as helicopters, ducted fans, or otherwise naturally unstable vehicles, the loss of GPS can lead to a failure in the control system. In a traditional GPS/INS navigation system, the GPS can provide observability into the biases inherent to INS sensors. Without this aid, the biases are left unestimated and can cause a vehicle to wander off course or tip over.

The goal of this thesis is to develop a navigation algorithm which complements INS in the absence of GPS. While the algorithm cannot provide the same kind of absolute positioning information that a GPS solution would, it can allow the system to operate during periods of GPS outage. Since nearly all unmanned vehicles already contain an on-board camera, it seems logical to leverage this as a navigation aid. As UAV technology continues to develop, the size, weight, and power available decrease. By using pre-existing sensors, particularly those as information-rich as a camera, we avoid placing additional weight and power burdens on the vehicle. Among the significant challenges in using the camera as a navigation aid, however, is the non-intuitive process of extracting usable information from the image and combining it with the IMU measurements. Compounding this difficulty is the fact that there is relatively modest computing power available to extract this information. The contribution of this thesis is to develop an algorithm which explicitly considers this lack of computing power.

1.2 Related Work

The details of this implementation fall into what has become an enormous body of work. Vision-aided navigation is not a new concept, and many others have contributed

valuable research to the field upon which this thesis builds. Furthermore, the notion of combining a camera with inertial sensors is a topic that has enjoyed particular attention in a newly rejuvenated robotics field.

The concept for vision-aided navigation presented in this thesis may be considered an off-shoot of the well known Simultaneous Localization and Mapping (SLAM) algorithm. The SLAM algorithm solves a joint estimation problem of both a vehicle pose and a local map. Usually, these estimates are made based on statistical methods involving probability distributions over all possible poses and all possible maps [30][33]. In practice, some form of a Kalman filter is typically used to recursively estimate these probabilities with the assumption that the posterior distribution is Gaussian [24]. Other nonlinear estimation techniques, such as particle filtering, have also been explored as solutions to the SLAM problem. The algorithm usually involves identifying objects in the environment to serve as landmarks. The positions of these landmarks are estimated based on the current estimate of the vehicle's position, and whatever measurement of the landmark is available. This landmark position estimate is preserved indefinitely. The major attraction of a SLAM-based solution is that if a landmark is revisited after some excursion into the environment, the prior knowledge of that landmark's position may be used to correct accumulated errors in a process called loop-closure. The downside to this approach, of course, is the computational burden associated with maintaining knowledge of every landmark ever seen in hopes that we may see it again.

Recent approaches have veered away from the complexity of a full SLAM algorithm in favor of reduced-order solutions. These solutions typically involve maintaining landmark estimates only for those landmarks which are currently visible. While this approach does not offer the benefits of loop-closure, it does allow for reduced navigation drift with an algorithm of bounded computational complexity. Recent works such as [26][22][29] have produced vision-aided inertial navigation strategies very similar to those in this thesis. In particular, the work of Sukkarieh et al. shows a coupled vision/INS algorithm which uses the measurement formulation of [25], an identical formulation to the one used in this thesis. While this thesis does not necessarily provide a novel solution to the coupled vision/INS problem, it does

pay particular attention to the practical issues that arise when implementing such an algorithm. Of particular note is the reformulation of the vision-aided algorithm to fit within the framework of a numerically stable form of the Kalman filter. These considerations, along with the development of an error budget examining the exact contribution of various error sources to the algorithm's performance, provide a step towards real-time implementation of a vision-aided navigation algorithm on a small, embedded processor.

1.3 Thesis Organization

This introductory chapter has provided a motivation for this thesis work as well as a summary of related approaches to vision-aided navigation. Chapter 2 provides an overview of the principles of inertial navigation. This includes a discussion of the mathematical foundations, sensor models, and difficulties associated with navigation in general. Chapter 3 gives a detailed design of the navigation filter to be used in this thesis. Particular attention is paid to the necessary practical considerations to be made when implementing any type of estimation filter. Chapter 4 presents an overview of well-known image processing techniques for feature selection and tracking. The chapter continues on to derive the mathematical model relating image information to the navigation state, with a focus on robustness to initial scene uncertainty. Chapter 5 describes the techniques of suboptimal covariance analysis for estimation filter design. An error budget is developed showing the precise contribution of individual error sources to the overall navigation performance. Chapter 6 presents the outcome of various simulated and experimental scenarios. The performance of the vision-aided navigation algorithm is compared to a system without the aid of a camera. Finally, chapter 7 discusses the conclusions drawn from this thesis, and presents suggested areas of future work.


Chapter 2
Inertial Navigation Review

In the most basic sense, inertial navigation may be compared to the strategy of path integration used by certain species of desert ants [27]. In their long search for food, the ants are able to keep track of their position relative to their nest in order to make a quick return home out of the desert heat. Through the use of odometry and a celestial compass, a simple integration strategy is used in order to derive position. In much the same way, the integration of sensed specific forces and angular rates is the principle behind inertial navigation. This chapter is intended only as a cursory overview of a rich field. For a more detailed discussion of inertial navigation, see [20].

2.1 Coordinate Frame Definitions

Before the kinematic equations for navigation may be derived, several coordinate frames must be defined. The equations of navigation may be expressed in any arbitrarily defined frame of reference, but it is the combination of measurements from several different frames that forms the solution. It is for this reason that several coordinate frames must be defined, as well as the transformations between them [31]. Figure 2-1 depicts the various coordinate frames defined in this section.

First, the Earth-centered Inertial (ECI) frame is defined with its origin at the earth's center. This inertial frame, defined as a non-accelerating frame in which Newton's laws apply, is fixed in space and does not rotate with the earth. All sensed

Figure 2-1: Coordinate Frame Definitions

inertial accelerations and angular rates are with respect to this frame. It should be noted, however, that the ECI frame defined here is only an approximation of a true inertial frame. Though it does not rotate with the earth, it is still influenced by the earth's gravitational pull, and therefore does not constitute a non-accelerating frame as required by Newton.

Next, the Earth-centered Earth-fixed (ECEF) frame is also defined with its origin at the earth's center, but in this case it is fixed to the rotating earth. The x_e axis lies in the equatorial plane, and points from the origin to the prime meridian (0° longitude). The z_e axis lies parallel to the mean spin axis of the earth, and the y_e axis completes the right-handed coordinate system. It is in this frame that the navigation equations will be expressed.

The third coordinate frame to be defined is the local-level frame. This frame, often referred to as the North-East-Down (NED) frame, lies with its x_l axis pointing north, and its z_l axis pointing towards the center of the earth. Though vehicle attitude will be expressed as a quaternion defining the transformation from the body to earth frame (see section 2.2), it is also useful to think of the vehicle's orientation as a set of Euler angles describing the roll, pitch, and yaw of the vehicle with respect to

the NED frame.

Finally, the body frame is chosen to have its origin located at the vehicle's center of gravity. For purposes of simplification, it is assumed that the body frame coincides with the origin of the on-board inertial sensors. It is in this frame that inertial forces and angular rates are resolved. The body frame is defined with the x_b axis pointing in the forward direction of the vehicle (the nose), the z_b axis pointing out of the bottom of the vehicle, and the y_b axis completing the right-handed coordinate system.

2.2 Coordinate Frame Transformations

In order to transform a vector represented in one coordinate frame to another frame of reference, a transformation must be applied. Several standards exist for representing rotations, including various Euler rotation sequences and quaternion representations. For this thesis, a unit quaternion representation is chosen to avoid the pitch ambiguity associated with Euler angle representations. Since any sequence of rotations may be represented by a single rotation about a fixed axis, the quaternion representation specifies a unit direction vector along with a rotation angle about that vector.

$$ q = \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix} = \begin{bmatrix} \cos(\alpha/2) \\ \bar{e}\,\sin(\alpha/2) \end{bmatrix} \tag{2.1} $$

$$ \|q\| = 1 \tag{2.2} $$

For example, if frame a is to be rotated into frame b by the quaternion q, frame a will be aligned with b when rotated by the angle α about the unit vector ē. For a quaternion q coordinatized in the body axes, we define the rotation matrix R^e_b(q)

from body to ECEF coordinates as in Eq. 2.3

$$ R^e_b(q) = \begin{bmatrix} q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2 - q_1^2 - q_2^2 + q_3^2 \end{bmatrix} \tag{2.3} $$

The transformation from ECEF to ECI takes into account the earth's rotation about the ECEF z-axis. The ECEF frame is initially aligned with the ECI frame at the start of navigation, time t_0. The transformation may be propagated from this start time by the following relationship:

$$ R^i_e = \begin{bmatrix} \cos(\omega^e_{ie} t) & -\sin(\omega^e_{ie} t) & 0 \\ \sin(\omega^e_{ie} t) & \cos(\omega^e_{ie} t) & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{2.4} $$

Finally, the transformation from the ECEF to NED frames may be expressed as a function of the vehicle's current latitude λ and longitude ψ, as shown in Eq. 2.5

$$ R^l_e = \begin{bmatrix} -\sin(\lambda)\cos(\psi) & -\sin(\lambda)\sin(\psi) & \cos(\lambda) \\ -\sin(\psi) & \cos(\psi) & 0 \\ -\cos(\lambda)\cos(\psi) & -\cos(\lambda)\sin(\psi) & -\sin(\lambda) \end{bmatrix} \tag{2.5} $$

2.3 Navigation Equations

As mentioned previously, it is the integration of sensed accelerations and angular rates that forms the basis of inertial navigation. An inertial measurement unit (IMU) is a device capable of sensing acceleration and angular velocity relative to inertial space. If acceleration is twice integrated, the change in position is given. An accelerometer measures specific forces along its input axis. This specific force is a combination of acceleration and gravitation. If we were interested in navigating in an inertial space, far from any gravitational influence, we could simply integrate the accelerometer output to get position. However, since we are instead navigating on or close to

the earth's surface, the influence of earth's gravity field must be compensated for. Likewise, since the earth is rotating with respect to inertial space, this rotation must be compensated for in the gyro output as well.

The kinematic equations of navigation may be derived with respect to any coordinate frame. Here, the ECEF frame is chosen as the frame in which the vehicle will navigate. This choice is made so that GPS measurements, if and when available, may be easily integrated, since they are given in an earth-centered reference frame. First, we define x^i to be the ECEF-referenced position vector x^e rotated into the ECI frame:

$$ x^i = R^i_e x^e \tag{2.6} $$

Twice differentiating Eq. 2.6 gives the following relationships:

$$ \dot{x}^i = R^i_e \Omega^e_{ie} x^e + R^i_e \dot{x}^e \tag{2.7} $$

$$ \ddot{x}^i = 2 R^i_e \Omega^e_{ie} \dot{x}^e + R^i_e \left( \dot{\Omega}^e_{ie} + \Omega^e_{ie}\Omega^e_{ie} \right) x^e + R^i_e \ddot{x}^e \tag{2.8} $$

For clarity, the notation used in Eq. 2.8 and throughout this thesis is explained here. A term Ω^c_ab describes the angular rate of frame b with respect to frame a, resolved in frame c. For example, a gyroscope, described in the next section, measures angular rate with respect to inertial space, resolved along its own input axes. Any Ω term may be used interchangeably with the term [ω×], which is a skew-symmetric matrix inducing the cross product with the ω vector, as in Eq. 2.9

$$ [\omega \times] = \begin{bmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{bmatrix} \tag{2.9} $$

The Ω̇^e_ie term, which describes the angular acceleration of the rotating earth frame, may be neglected as it is very close to zero. Likewise, the centripetal force term Ω^e_ie Ω^e_ie x^e is also neglected, as it is well beneath the noise floor of the inertial sensors used in this application (section 2.4).
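Two primitives recur throughout the remaining chapters: the skew-symmetric operator of Eq. 2.9 and the quaternion-to-rotation-matrix map of Eq. 2.3. The following minimal Python sketch is illustrative only, not part of the original thesis, and the function names are invented here:

```python
import numpy as np

def skew(w):
    """[w x] of Eq. 2.9: skew(w) @ v equals np.cross(w, v)."""
    return np.array([[0.0,  -w[2],  w[1]],
                     [w[2],  0.0,  -w[0]],
                     [-w[1], w[0],  0.0]])

def quat_to_dcm(q):
    """Body-to-ECEF rotation matrix R^e_b(q) of Eq. 2.3 for a unit quaternion."""
    q0, q1, q2, q3 = q / np.linalg.norm(q)   # re-normalize to guard against drift
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ])

# Sanity checks: skew reproduces the cross product, and a 90-degree yaw
# quaternion maps the body x-axis onto the earth-frame y-axis.
w, v = np.random.randn(3), np.random.randn(3)
assert np.allclose(skew(w) @ v, np.cross(w, v))
q = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
assert np.allclose(quat_to_dcm(q) @ [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```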

Substituting gravitation and specific force in for ẍ^i and rotating both sides of Eq. 2.8 to the ECEF frame yields:

$$ \ddot{x}^e = g^e + f^e - 2\Omega^e_{ie}\dot{x}^e \tag{2.10} $$

Equation 2.10 gives the relationship for the acceleration relative to the rotating ECEF frame. This relationship is a combination of earth's gravity, the applied specific force, and the Coriolis force due to the earth's rotation.

Figure 2-2: Navigation Equation Mechanization [12]

Figure 2-2 is a block diagram of the navigation mechanization. The gyro output is first integrated to form the current body-to-ECEF attitude quaternion, which is then used to rotate the body-referenced specific forces to the earth frame. The Coriolis and gravitational forces are then subtracted from this quantity, and the result is twice integrated to find earth-referenced position and velocity. The attitude quaternion evolves as

$$ \frac{dq}{dt} = \frac{1}{2}\Omega q \tag{2.11} $$

where

$$ \Omega = \begin{bmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{bmatrix} \tag{2.12} $$

and ω_k is the compensated gyro reading after removing any estimated instrument error contributions, as described in section 2.4.

2.4 Inertial Measurement Unit Error Model

As mentioned in section 2.3, the IMU provides measurements of inertial specific forces and angular rates in the body frame, i.e. f^b_ib and ω^b_ib. In order to navigate in the earth-centered frame, the measured specific force must first be rotated to the ECEF frame before being integrated. The crux of inertial navigation, neglected until this point, is that small imperfections in measurements of these quantities can lead to enormous errors in the navigated solution, due to the fact that any small bias in the measurement becomes integrated twice in time to find position. This becomes an obvious problem when considering the fact that the accelerometer measures a combination of gravitation and acceleration. Any small error in the estimate of attitude will result in an integration of a component of gravity, nominally g = 9.8 m/s². Likewise, it may seem that the earth's rotation rate, ω^e_ie = 15 deg/hr, is rather insignificant on its own. However, when one considers that the tangential velocity at the equator due to this rotation rate is approximately 456 m/s, it is obvious that this quantity is nontrivial.

For use in the estimator design described in section 3.2, a model of the various error sources pertaining to an IMU is given here. There are several classes of inertial measurement sensors, ranging from inexpensive micro-electro-mechanical (MEMS) devices commonly found in automotive and consumer electronics, to very large, very expensive devices used in military hardware. Since this thesis is concerned only with avionics systems for small unmanned vehicles, details pertaining to the larger, more

accurate systems are left to the references [20].

A typical IMU sensor error model consists of a variety of error sources. Commonly used error sources include random walk, bias, scale factor, and misalignment. More detailed models may use additional terms relating to temperature, nonorthogonality, and shock. In this thesis, only the prior four will be considered.

The random walk term, referred to as velocity random walk (VRW) in an accelerometer, and angle random walk (ARW) in a gyro, is the effect of integrating white noise. This term, whose standard deviation will grow with the square-root of time, can be the dominant error source due to the wide band noise characteristics of inexpensive inertial sensors. The contribution of all IMU related error sources will be examined in section 5.2, in which an error budget is developed.

The bias term, commonly referred to as a null offset or zero-motion offset, is a shift in the output of the sensor. Most inertial sensors are characterized in terms of a turn-on bias and a bias instability. Turn-on biases, which are present from the initial sensor power-up, may be calibrated out prior to the beginning of a flight. The bias instability refers to the change in bias over time and temperature.

The scale factor term, as its name would imply, refers to a linear multiplier placed on the sensor input. For example, in the absence of all other error terms, an accelerometer may output a reading that is N times its input. Like the bias term, the scale factor error may be characterized in terms of a constant turn-on value, as well as a time varying component.

Misalignment refers to the physical misalignment of the sensing axes with respect to the body frame. This may be due to a combination of fabrication issues in the MEMS package, or in the physical mounting of the sensor on the vehicle. The misalignment error will cause, for example, components of an axial acceleration to appear on multiple axes of the sensor.

Both of the time varying error components, namely the bias and scale factor errors, will be modeled as first-order Markov processes according to the stochastic differential equation:

$$ \tau\dot{x} + x = \omega \tag{2.13} $$

where τ is the correlation time constant of the process, and ω(t) is a white noise process with the statistics of Eq. 2.14.

$$ E[\omega(t)] = 0, \qquad E[\omega(t)\omega(\tau)] = \sigma_\omega^2\,\delta(t - \tau) \tag{2.14} $$

This first order process yields exponentially correlated noise, often referred to as colored noise. The variance of this process evolves according to the differential equation [7]

$$ \frac{d\sigma_x^2}{dt} = -\frac{2\sigma_x^2}{\tau} + \sigma_\omega^2 \tag{2.15} $$

If one assumes that the statistics of Eq. 2.15 have reached a steady state value, then

$$ \sigma_\omega^2 = \frac{2\sigma_x^2}{\tau} \tag{2.16} $$

In discrete form, which will be necessary for use in the estimator design, the statistics of Eq. 2.16 may be found as in Eq. 2.17

$$ \sigma_\omega^2 = \left( 1 - e^{-2\Delta t/\tau} \right) \sigma_x^2 \tag{2.17} $$

Table 2.1 gives the list of specifications used for the IMU model. These values represent typical automotive grade inertial sensors. The 1σ values given are the steady-state statistics of the process, and the time constant is the associated correlation time constant. The random walk statistics are given as the peak of the power spectral density of the sensor output. Empirical determination of the random walk, bias terms, and correlation time constants is often carried out using Allan variance techniques. In order to verify the model described here, an hour's worth of data was collected from a stationary Crista IMU, manufactured by Cloudcap Technology [1]. An Allan variance plot was generated using the techniques of [13] that compares the collected data with simulated data generated using the model of Eq. 2.13. Figure 2-3 shows the comparison of the experimental and simulated data, and a close resemblance between the two is seen. The random walk noise characteristics can be found

at a correlation time of τ = 1. The area of negative slope indicates that the sensor is dominated by its wide band noise characteristics for correlation times of τ < 100 s, and it becomes dominated by exponentially correlated noise in the area with a positive slope, for which a first-order Markov process is a suitable model.

Figure 2-3: Allan Variance plot of Experimental and Simulated Gyro Data

Table 2.1: IMU Model Specifications

Instrument    | Error Source         | 1σ Value         | Time constant
Accelerometer | Bias                 | 300 mg           | 100 s
              | Scale Factor         | ppm              | 100 s
              | Misalignment         | 3 mrad           | -
              | Velocity Random Walk | 280 µg/√Hz       | -
Gyro          | Bias                 | 500 deg/hr       | 100 s
              | Scale Factor         | ppm              | 100 s
              | Misalignment         | 3 mrad           | -
              | Angle Random Walk    | 0.05 deg/sec/√Hz | -

The measured accelerometer and gyro measurements are given as a combination of the true value and the above mentioned error sources, as in Eqs. 2.18 and 2.19

$$ a_M = f + b_a + [f\cdot]s_a + [f\cdot]m_a + w_V \tag{2.18} $$

$$ g_M = \omega + b_g + [\omega\cdot]s_g + [\omega\cdot]m_g + w_A \tag{2.19} $$

where (f, ω) represent the true specific forces and angular rates, (b_a, b_g) are the accelerometer and gyro biases, (s_a, s_g) are the scale factors, (m_a, m_g) are the misalignments, and (w_V, w_A) are white noise processes describing the velocity and angle random walk quantities, respectively. The [·] nomenclature refers to a diagonal matrix inducing the dot product with its parameter.
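To make the error model concrete, the sketch below simulates a static gyro channel using the first-order Markov bias of Eqs. 2.13-2.17 plus angle random walk, with the gyro values of Table 2.1 as defaults. This is an illustration written for this discussion, not thesis code; the function name and the restriction to only the bias and random walk terms of Eq. 2.19 are choices made here.

```python
import numpy as np

def simulate_gyro(omega_true, dt, tau=100.0,
                  sigma_b=np.deg2rad(500.0 / 3600.0),   # 500 deg/hr bias, in rad/s
                  arw=np.deg2rad(0.05)):                # 0.05 deg/sec/sqrt(Hz)
    """Corrupt a true angular-rate history with a first-order Markov bias
    (Eqs. 2.13-2.17) and angle random walk, i.e. the b_g and w_A terms of
    Eq. 2.19. Defaults follow the gyro rows of Table 2.1."""
    n = len(omega_true)
    phi = np.exp(-dt / tau)                    # discrete Markov decay
    sigma_w = sigma_b * np.sqrt(1.0 - phi**2)  # driving noise std, from Eq. 2.17
    bias = np.zeros(n)
    bias[0] = sigma_b * np.random.randn()      # draw from the steady-state statistics
    for k in range(1, n):
        bias[k] = phi * bias[k - 1] + sigma_w * np.random.randn()
    white = (arw / np.sqrt(dt)) * np.random.randn(n)  # PSD converted to sample std
    return omega_true + bias + white

# One hour of static data at 100 Hz, suitable for an Allan variance check
meas = simulate_gyro(np.zeros(360000), dt=0.01)
```

An Allan variance computed from such a record should reproduce the two regimes seen in Figure 2-3: a negative slope where white noise dominates, and a positive slope where the exponentially correlated bias takes over.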


Chapter 3
Estimator Design

Despite the various incarnations of estimation filters, the underlying concepts remain the same. In general, one is interested in combining a known dynamic model of the system to be estimated with measurements of the states of that system. In the case of small UAV navigation systems, we wish to estimate the position, velocity, and attitude of the vehicle over time. One approach would be to first model the dynamics of the UAV, and then treat all other sources of information, such as an IMU, GPS, altimeter, magnetometer, etc., as measurements of that system's state. The drawback of this approach is that the state estimate will only be as good as the dynamic model used in the filter. One may painstakingly model the system in question, only to find that some aspect of the airframe has changed, and the filter must be redesigned. Instead, a common approach among navigation systems is to use the IMU in place of a vehicle dynamic model for propagation. The reasoning behind this is that, in all likelihood, the IMU will provide a better measure of acceleration and angular rate than a mathematical model. As an added benefit, the estimator design becomes fairly independent of the vehicle's design, and may be applied to any platform that contains the same sensor configuration.

3.1 Linearized Error Model

For this thesis, a form of the extended Kalman filter (EKF) will be employed. The Kalman filter provides a method of automatically weighting all measurements based on their statistical worth. In general, an EKF may be thought of as a standard Kalman filter, but with the system and measurement models linearized about the current state estimate at each time step. If a nonlinear system's state evolves according to x_{k+1} = f(x_k), its Taylor series expansion with respect to the current state estimate x̂_k is computed as:

$$ x_{k+1} = f(x_k) \tag{3.1} $$

$$ \approx f(\hat{x}_k) + \left.\frac{\partial f(x_k)}{\partial x}\right|_{x=\hat{x}_k} \delta x_k \tag{3.2} $$

neglecting higher order terms. For the filter in this application, the states to be estimated will be the errors in the overall navigation state. For example, instead of estimating the vehicle's true ECEF position, only the error in our estimate of vehicle position, δx, will be estimated. The error quantity may be defined as the difference between the true state and its estimate, as in Eq. 3.3

$$ \delta x_{k+1} = x_{k+1} - \hat{x}_{k+1} \tag{3.3} $$

$$ \approx \left.\frac{\partial f(x_k)}{\partial x}\right|_{x=\hat{x}_k} \delta x_k \tag{3.4} $$

The use of error states is generally more robust than estimating the full system state. After each measurement update, the error state estimates are absorbed into a whole-state estimate maintained outside the filter, and the error state estimate is reset to zero, thus keeping the linearized filter states closer to their valid operating region [18].

In order to work within the EKF framework, the system dynamic model must be linear in the region around the current state estimate. The dynamic model in question is formed from the kinematic navigation equation given in Eq. 2.10. First, the error in the attitude estimate will be expressed as a 3-element misalignment vector coordinatized

in the body frame. This vector describes the infinitesimal rotations about the three body axes which compose the error in our estimate of attitude. If R = R(t) represents the true body-to-ECEF coordinate transformation, and R̂ represents our estimate of that quantity, the error in the estimate may be described as in Eq. 3.5, where Ψ is the misalignment vector.

$$ R^{-1}\hat{R} = I + [\Psi \times] \tag{3.5} $$

The time evolution of this error quantity may be derived as follows:

$$ \frac{d}{dt}(R^{-1}) = -\Omega^b_{eb} R^{-1}, \qquad \frac{d}{dt}\hat{R} = \hat{R}\,[(\omega^b_{eb} + \delta g) \times] $$

where ω^b_eb is the angular rate vector of the body with respect to the earth, resolved in body coordinates, and δg are the errors associated with the gyro. The ω^b_eb term is found by subtracting earth's rotation rate from the measured angular rate, as in ω^b_eb = ω^b_ib − ω^b_ie. Combining these two terms and applying the product rule yields:

$$ \frac{d}{dt}(R^{-1}\hat{R}) = -\Omega^b_{eb} R^{-1}\hat{R} + R^{-1}\hat{R}\,[(\omega^b_{eb} + \delta g) \times] $$
$$ = -\Omega^b_{eb}(I + [\Psi \times]) + (I + [\Psi \times])\,[(\omega^b_{eb} + \delta g) \times] $$
$$ \approx [\Psi \times]\Omega^b_{eb} - \Omega^b_{eb}[\Psi \times] + [\delta g \times] $$
$$ = [\dot{\Psi} \times] $$

Using the relationship [a×][b×] − [b×][a×] = [(a × b)×], the last result can be written as in Eq. 3.6

$$ \dot{\Psi} = -\omega^b_{eb} \times \Psi + \delta g \tag{3.6} $$

Now, by re-writing equations 2.10, 2.18, and 2.19 in terms of error quantities, we

arrive at the following linear state propagation model to be used in the filter

$$ \delta\dot{x} = \delta v \tag{3.7} $$
$$ \delta\dot{v} = G(t)\,\delta x - 2[\omega^e_{ie} \times]\,\delta v + R(\Psi \times f + \delta a) \tag{3.8} $$
$$ \dot{\Psi} = -[\omega^b_{eb} \times]\,\Psi + \delta g \tag{3.9} $$
$$ \delta a = \delta b_A + [f\cdot]\,\delta s_A + [f\cdot]\,\delta m_A + w_V \tag{3.10} $$
$$ \delta g = \delta b_G + [\omega\cdot]\,\delta s_G + [\omega\cdot]\,\delta m_G + w_A \tag{3.11} $$

The G(t) term represents Newtonian gravitation linearized at the current ECEF position, x = x(t), given by Eq. 3.12. Earth's gravitational constant is given as µ = 3.986e14 m³/s².¹

$$ G(t) = -\frac{\mu}{\|x\|^3}\left( I - 3uu^T \right) - [\Omega^e_{ie}]^2, \qquad u = \frac{x}{\|x\|} \tag{3.12} $$

The above linearized error model allows the system dynamic equations to be written in the form of Eq. 3.13, with the form of the linearized state transition matrix, F, given below it.

$$ \delta\dot{\hat{X}} = F(t)\,\delta\hat{X} + \nu \tag{3.13} $$

$$ \delta\hat{X} = \begin{bmatrix} \delta x \\ \delta v \\ \Psi \\ \delta a \\ \delta g \end{bmatrix} \tag{3.14} $$

¹Personal communications with Dr. Matthew Bottkol at The Charles Stark Draper Laboratory

Here δa and δg collect the instrument error states of Eqs. 3.10 and 3.11. Expanding those states in the order (δb_A, δs_A, δb_G, δs_G, δm_A, δm_G), the matrix takes the block form

$$ F(t) = \begin{bmatrix}
0 & I & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
G(t) & -2[\omega^e_{ie}\times] & -R[f\times] & R & R[f\cdot] & 0 & 0 & R[f\cdot] & 0 \\
0 & 0 & -[\omega^b_{eb}\times] & 0 & 0 & I & [\omega\cdot] & 0 & [\omega\cdot] \\
0 & 0 & 0 & -I/\tau & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -I/\tau & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -I/\tau & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & -I/\tau & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix} $$

where the −I/τ blocks describe the decay of the exponentially correlated bias and scale factor states with their respective correlation time constants, and the constant misalignment states have zero dynamics.

One may use the above derived covariance propagation model to perform various covariance analyses independent of any state estimation, as described in section 5.1. Though the ordering of states is technically arbitrary, the form of F(t) is arranged in such a manner that it may be exploited for performance gains in the implementation described in section 4.4. Namely, the dense navigation states are first, the exponentially correlated noise states second, and the constant bias states third. This block upper-triangular form lends itself to factorization methods that can lead to significant performance increases.

3.2 Filter Implementation

The EKF algorithm consists of a two-phase, recursive set of update equations for the state estimate and its associated covariance. The Kalman algorithm uses the covariance as an optimal weight for state estimation, as described in [18]. Since the estimation algorithm is intended for implementation in a discrete processing environment, the discrete form of the Kalman algorithm is chosen.

The first phase of the EKF algorithm is a propagation of state estimates and covariance. The resulting quantities will be denoted by the terms x̂_k(−) and P_k(−) respectively, where the (−) indicates that the quantities are values before

being updated by an external measurement. The state estimates may be propagated according to the full nonlinear dynamic model of Eq. 2.10, which may be expressed as a function of the current state and input. Since the Kalman framework requires that covariance be propagated linearly, the linearized model of Eq. 3.13 must be used. Covariance propagates according to the matrix differential equation

$$ \dot{P} = FP + PF^T + Q \tag{3.15} $$

where Q is the process noise covariance matrix. For a zero initial condition, P(0) = 0, the solution to the equation becomes

$$ P(t) = \int_0^t e^{F(t-\tau)}\, Q\, e^{F^T(t-\tau)}\, d\tau \tag{3.16} $$

For small Δt, the matrix exponential e^{FΔt} may be approximated to first order by

$$ e^{F\Delta t} = \Phi \approx I + F\Delta t \tag{3.17} $$

and thus the discrete-time equations for state and covariance propagation are given by Eqs. 3.18 and 3.19. The equation for state propagation is generally numerically integrated using a first-order Euler integration over the Δt interval.

$$ \hat{x}_k(-) = f(\hat{x}_{k-1}, u) \tag{3.18} $$

$$ P_k(-) = \Phi_{k-1} P_{k-1} \Phi^T_{k-1} + Q_{k-1} \tag{3.19} $$

The second phase of the EKF algorithm involves updating the estimates of state and covariance based on some external measurement. The measurement to be considered must be either a direct measurement of a system state, or some linear combination thereof. Much like the system dynamics, these measurements are rarely linear in practice, and therefore must also be linearized about the current state estimate. The measurement in this application will be the pixel location of an image feature, as described in section 4.2. Its nonlinear form may be expressed as a function of system

state, as in Eq. 3.20, and its linearized form may be expressed as the Jacobian of the nonlinear measurement, evaluated at the current state estimate, as in Eq. 3.21

$$ z = h(x) \tag{3.20} $$

$$ H = \left.\frac{\partial h}{\partial x}\right|_{x=\hat{x}} \tag{3.21} $$

From this linearized form of the measurement equation, the optimal Kalman gain may be computed and applied to the estimate of state and covariance.

$$ K_k = P_k(-) H_k^T \left[ H_k P_k(-) H_k^T + R_k \right]^{-1} \tag{3.22} $$

$$ \hat{x}_k(+) = \hat{x}_k(-) + K_k \left[ z_k - h(\hat{x}_k(-)) \right] \tag{3.23} $$

$$ P_k(+) = \left[ I - K_k H_k \right] P_k(-) \tag{3.24} $$

where K_k is the optimal Kalman gain at time k, and R_k is the measurement noise covariance term.

It is a necessary condition of the Kalman filter that the covariance matrix remain symmetric and positive definite. It has long been recognized that the algorithm is particularly sensitive to computer roundoff errors, and due to the subtraction in Eq. 3.24, the covariance can quickly become singular in ill conditioned applications. As an example of the adverse effects of computer round-off, consider a second order system and a scalar measurement quantified by the following properties [21]:

$$ P(-) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad H = \begin{bmatrix} 1 & 0 \end{bmatrix}, \qquad r = \epsilon^2 \tag{3.25} $$

To simulate round-off error, it is assumed that 1 + ε ≠ 1, but that 1 + ε² rounds to 1. The updated covariance computed both with and without roundoff error is:

$$ P_{exact}(+) = \begin{bmatrix} \frac{\epsilon^2}{1+\epsilon^2} & 0 \\ 0 & 1 \end{bmatrix}, \qquad P_{conventional}(+) = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} $$

For this second order system, the exact Kalman gain and the one computed by the standard algorithm in the presence of roundoff are given below. Clearly, the roundoff error will lead to a divergent filter when the standard Kalman algorithm is used to compute updated covariances.

$$ K_{exact} = \begin{bmatrix} \frac{1}{1+\epsilon^2} \\ 0 \end{bmatrix}, \qquad K_{conventional} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} $$

Several methods have been proposed to mitigate the numerical issues associated with the Kalman filter. One method that is often used is the so-called Symmetric Joseph Form of the covariance update equation

$$ P_k(+) = (I - K_k H_k) P_k(-) (I - K_k H_k)^T + K_k R_k K_k^T \tag{3.26} $$

This addition of a positive definite matrix with a positive semi-definite matrix can better prevent singular covariance matrices from developing; however, this comes at a significant computational cost due to the increased complexity of the matrix equation.

As part of the Apollo space program, Potter recognized that by reformulating the Kalman filter in terms of the square root of the covariance, significant numerical stability was regained [5]. In addition to the increased numerical stability, the square root form of the covariance matrix substantially reduces the dynamic range required to represent the matrix, i.e. the ratio of the largest and smallest elements of the matrix. This fact is particularly useful when the algorithm is intended to be implemented in computer hardware with a finite word length. It has been shown that square root forms of the Kalman filter can provide the effect of double word length precision on a single precision computer. Potter's original square root formulation was limited in the sense that it could not handle the inclusion of process noise, as is often found in navigation problems. In the years following, several enhancements to the algorithm were proposed that allow the inclusion of process noise. One such formulation, and the one that will be used in this thesis, is the U-D form of the filter proposed in [7].
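The round-off failure above is easy to reproduce numerically. The sketch below (illustrative only, not from the thesis) applies the conventional update of Eq. 3.24 and the Joseph form of Eq. 3.26 to the example of Eq. 3.25 in single precision, where ε² falls below the machine precision:

```python
import numpy as np

eps = 1e-4                                    # eps**2 = 1e-8 < float32 machine eps
P = np.eye(2, dtype=np.float32)
H = np.array([[1.0, 0.0]], dtype=np.float32)
R = np.array([[eps**2]], dtype=np.float32)

S = H @ P @ H.T + R                           # 1 + eps**2 rounds to 1.0 in float32
K = P @ H.T @ np.linalg.inv(S)

P_conventional = (np.eye(2, dtype=np.float32) - K @ H) @ P   # Eq. 3.24
I_KH = np.eye(2, dtype=np.float32) - K @ H
P_joseph = I_KH @ P @ I_KH.T + K @ R @ K.T                   # Eq. 3.26

print(P_conventional[0, 0])   # 0.0  -- the variance collapses to singularity
print(P_joseph[0, 0])         # ~1e-8 -- retained through the K R K^T term
```

The conventional form zeroes the variance of the observed state, so every subsequent gain for that state collapses to zero, while the Joseph form keeps the covariance positive definite.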

Any symmetric positive definite matrix may be decomposed as

$$ P = SS^T \tag{3.27} $$

where S is said to be the square root of the matrix P. In general, this factorization may be found by means of a Cholesky decomposition [19]. Alternate forms of the Cholesky decomposition include the so-called U-D factorization, as in Eq. 3.28

$$ P = UDU^T \tag{3.28} $$

where U is an upper triangular matrix with unit diagonals, and D is a diagonal matrix. This form has several advantages over the Cholesky decomposition, not the least of which is the fact that no square root operations are necessary in order to compute it. Another distinct advantage is that it yields an upper triangular matrix which can be exploited for efficient computer implementation.

In general, the second moment of the state estimate evolves according to Eq. 3.29

$$ \bar{P} = \Phi P \Phi^T + GQG^T \tag{3.29} $$

where Φ is the discrete state transition matrix, Q is the process noise covariance matrix, and G maps Q to P(−). Due to the factored form of the covariance matrix, Eq. 3.29 would instead read as:

$$ \bar{P} = [\Phi U]\, D\, [\Phi U]^T + GQG^T \tag{3.30} $$

While Eq. 3.30 could be applied directly to find the propagated covariance value, we wish instead to find the updated U-D factors directly. Applying Eq. 3.30 would not only require the added computation of reconstructing the full covariance matrix, but it would also require that the updated covariance be decomposed back into its updated U-D factors. This procedure would effectively counteract the desirable numerical properties for which the U-D algorithm was originally chosen.
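For reference, the factorization of Eq. 3.28 can be computed with a short backward recursion. The NumPy sketch below is an illustration written for this discussion (not thesis code), and assumes P is symmetric positive definite:

```python
import numpy as np

def udu_factor(P):
    """Factor symmetric positive definite P as P = U diag(d) U^T (Eq. 3.28),
    with U unit upper triangular. No square roots are required."""
    P = P.astype(float).copy()
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):        # work from the last column backwards
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        # remove the rank-one contribution of column j from the leading block
        P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
    return U, d

P = np.array([[4.0, 2.0], [2.0, 2.0]])
U, d = udu_factor(P)
assert np.allclose(U @ np.diag(d) @ U.T, P)
```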

3.3 Time Propagation

In order to update the U-D factors, Thornton [32] proposed to factor Eq. 3.30 as:

$$ \bar{P} = \begin{bmatrix} \Phi U & G \end{bmatrix} \mathrm{Diag}(D, Q) \begin{bmatrix} \Phi U & G \end{bmatrix}^T \tag{3.31} $$
$$ = W \tilde{D} W^T \tag{3.32} $$

where

$$ W = [\underbrace{\Phi U}_{n} \quad \underbrace{G}_{N_w}], \qquad \tilde{D} = \mathrm{Diag}(D, Q) \tag{3.33} $$

and n is the state space dimension, and N_w is the column dimension of the process noise mapping matrix G. A Modified Weighted Gram-Schmidt (MWG-S) procedure is used to orthogonalize the rows of W, weighted by D̃. This operation is similar to the so-called QR decomposition, in which a matrix of linearly independent columns may be decomposed into a combination of an orthogonal matrix and an upper-triangular matrix. Generally, a Gram-Schmidt procedure is applied in order to find the orthogonal matrix, and it is usually normalized, resulting in an orthonormal matrix. If instead the Gram-Schmidt procedure is used to find an orthogonal matrix weighted by D̃, the result is that

$$ W = \begin{bmatrix} w_1^T \\ w_2^T \\ \vdots \\ w_n^T \end{bmatrix} = \bar{U} \begin{bmatrix} v_1^T \\ v_2^T \\ \vdots \\ v_n^T \end{bmatrix} \tag{3.34} $$

with Ū unit upper triangular, and the vectors v_i D̃-orthogonal such that their inner product, weighted by D̃, is:

$$ \langle v_i, v_j \rangle_{\tilde{D}} = v_i^T \tilde{D}\, v_j = \|v_i\|^2_{\tilde{D}}\,\delta_{ij} \tag{3.35} $$

Due to the vectors v_i being D̃-orthogonal, the expression given in Eq. 3.32 simplifies:

$$ \bar{P} = W\tilde{D}W^T \tag{3.36} $$
$$ = \bar{U}\left( V\tilde{D}V^T \right)\bar{U}^T \tag{3.37} $$
$$ = \bar{U}\bar{D}\bar{U}^T \tag{3.38} $$

where V is the matrix whose rows are the v_iᵀ, giving the updated covariance factors Ū and D̄. The resulting Ū is unit upper triangular, and D̄ is diagonal due to our choice of weighting in the Gram-Schmidt algorithm.

Using this approach, the updated covariance factors may be found without explicit expansion of Eq. 3.30. Instead, the only matrix multiplication which must occur is that of ΦU. Due to the fact that U is unit upper triangular, and Φ is block triangular, the number of operations may be reduced significantly. In addition to exploiting these properties of the matrices, we are granted some flexibility in the ordering of the states, and thus the form of the state transition matrix, Φ. As mentioned in section 3.1, the transition matrix may be arranged such that the dense navigation states precede the colored noise states and bias states. In chapter 4, a set of augmentation states will be added to the filter in order to improve navigation performance. These added states may be considered bias states, and will be placed at the end of the state vector. By expansion of WD̃Wᵀ, we see that the bias states are unaffected by the orthogonalization, and thus the MWG-S algorithm need only be applied to the states which are either coupled or contain a driving noise term. This observation can lead to substantial performance increases when implemented. Bierman points to a 97% speed increase by judicious arrangement of the state vector [6].

The Modified Weighted Gram-Schmidt algorithm is given here. The term "modified" comes from the fact that the algorithm differs slightly from the classic Gram-Schmidt algorithm for enhanced numerical stability [8]:

$$ v_k^{(n)} = w_k, \qquad k = 1, \ldots, n $$

For j = n−1, …, 1:

$$ \bar{D}_{j+1} = (v_{j+1}^{(j+1)})^T\, \tilde{D}\, v_{j+1}^{(j+1)} $$
$$ \bar{U}(k, j+1) = (v_k^{(j+1)})^T\, \tilde{D}\, v_{j+1}^{(j+1)} \,/\, \bar{D}_{j+1}, \qquad k = 1, \ldots, j $$
$$ v_k^{(j)} = v_k^{(j+1)} - \bar{U}(k, j+1)\, v_{j+1}^{(j+1)}, \qquad k = 1, \ldots, j $$

and

$$ \bar{D}_1 = (v_1^{(1)})^T\, \tilde{D}\, v_1^{(1)} $$

3.4 Measurement Update

Although Eq. 3.22 suggests that multi-dimensional measurements should be processed as a matrix operation, it is always possible to process them as a sequence of scalar observations, and it is more efficient to do so. By processing measurements sequentially, the matrix inversion operation reduces to a scalar divide. This is accomplished by simply processing the measurement update for each row hᵀ of the measurement matrix H. For a scalar observation z = hᵀx + v and a priori state and covariance estimates x̃ and P̃ = ŨD̃Ũᵀ, the method of Agee and Turner may be used to find the updated factors x̂, Û, and D̂ [4].

If we now revisit the second-order example given in Eq. 3.25, we may see the benefit of using the factored form of the filter. Using the new filter formulation, the updated covariance value and associated Kalman gain are computed to be:

$$ P_{factored}(+) = \begin{bmatrix} \epsilon^2 & 0 \\ 0 & 1 \end{bmatrix}, \qquad K_{factored} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} $$

These results show a clear benefit of the square-root form of the Kalman filter in cases where roundoff error is to be expected. Situations with computer roundoff are nearly unavoidable if the algorithm is ever to be implemented in real hardware. If the algorithm is to be implemented for use beyond simulations, a numerically stable form such as the U-D factorization presented here should always be used.
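A compact implementation of the scalar factored measurement update is sketched below. This is a Python port of the standard Bierman/Agee-Turner rank-one update from the estimation literature, written here for illustration; it is not code from the thesis.

```python
import numpy as np

def ud_scalar_update(x, U, d, h, z, r):
    """Scalar measurement update z = h^T x + v, var(v) = r, applied to the
    factors P = U diag(d) U^T. Returns updated x, U, d without forming P."""
    n = len(x)
    U = U.copy(); d = d.copy(); x = x.copy()
    a = U.T @ h                  # intermediate vectors of Bierman's algorithm
    b = d * a
    dz = z - h @ x               # measurement residual
    alpha = r                    # innovation variance, built up term by term
    gamma = 1.0 / alpha
    for j in range(n):
        beta = alpha
        alpha += a[j] * b[j]
        lam = -a[j] * gamma      # gamma still holds 1/beta here
        gamma = 1.0 / alpha
        d[j] *= beta * gamma
        for i in range(j):
            Uij = U[i, j]
            U[i, j] = Uij + b[i] * lam
            b[i] += b[j] * Uij
    x += gamma * dz * b          # b/alpha is the Kalman gain
    return x, U, d
```

Run on the example of Eq. 3.25 (U = I, d = [1, 1], h = [1, 0], r = ε²), the first update yields d[0] = ε²/(1 + ε²) and gain [1/(1 + ε²), 0]ᵀ, which agrees with the exact solution rather than collapsing as the conventional form does.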

Chapter 4
Vision-based Measurement Update

In order to reduce the effect of INS errors on the navigation solution, a vision-based update will be employed. The motivation behind a vision-based approach is the sudden ubiquity of cameras on small UAV platforms. Oftentimes, a camera is necessary for an operator on the ground to control the vehicle. Since every component added to the vehicle also adds weight, cost, and power consumption, it is highly desirable to use the existing sensor core for multiple purposes.

The approach taken in this thesis is similar to the well known Simultaneous Localization and Mapping (SLAM) algorithm, in which a sensor platform both builds a map of its environment and localizes itself relative to that map. Previous work in this area is discussed in Chapter 1. Due to the high computational cost of a full SLAM algorithm, a reduced-order version is proposed which is suitable for a small avionics system. This modified algorithm aims to reduce the state space of the estimation filter by only mapping the portion of the environment which is visible in the current image frame. By doing so, the state size is bounded, and thus computation is bounded.

The vision-based update is accomplished by first finding and tracking features in the camera image. These image features, which may be thought of as unit line-of-sight (LOS) vectors, provide a bearing update to the navigation filter. By relating these line-of-sight measurements to the vehicle's navigation state, we may estimate the errors present in the inertial sensors and therefore improve the accuracy of the

navigation solution. Unlike GPS, however, the camera cannot provide a true position update to the filter unless the tracked image features correspond to points of known location. This is seldom the case when operating in the unstructured environments typical of small UAV missions. Furthermore, the vision-based measurement can only provide a scaled navigation solution, due to the lack of depth information provided by the image. This is evidenced by Fig. 4-1, where a camera is shown making a bearing-only observation of a building. Even with the large discrepancy in size between the two buildings, the bearing measurements of the object remain exactly the same.

Figure 4-1: Scale ambiguity with bearing-only measurements

This scale ambiguity is left to be determined by the navigation filter, and as will be shown in section 4.2, large initial range errors can quickly lead to filter divergence. This source of instability may be mitigated with the approaches outlined in section 4.3; however, the lack of a true depth measurement remains. It is for this reason that the coupling between the INS and the camera measurements is critical to an accurate navigation solution. These two sensing modalities may combine in a complementary fashion. The inertial sensors can provide high-rate information regarding the orientation and velocity of the camera, as well as add a sense of scale to the scene. Conversely, the camera can provide low-rate information to the navigation filter that serves to counteract the drift errors found in inertial sensors.

4.1 Image Feature Tracking and Selection

The vision-based update described in this chapter requires that a set of persistent points in the image be tracked in order to give bearing measurements to the filter. These points could come from a variety of sources, including operator-selected points or automatically generated ones. For this thesis, feature points will be automatically selected and tracked using the well known Tomasi-Kanade method [34].

4.1.1 Feature Tracking

We wish to detect highly salient features in the current image, and find their corresponding locations in the next image. This operation may be thought of as a minimization problem in which the objective is to find a displacement vector d which minimizes the difference between the intensities in successive images around the feature point u. Intuitively, this method must assume that the local intensities around the feature to be tracked remain constant over the time interval between images. Optical flow, often referred to as image velocity, is the displacement vector which minimizes the cost function given in Eq. 4.1

$$ \epsilon(d) = \epsilon(d_x, d_y) = \sum_{x=u_x-\omega_x}^{u_x+\omega_x}\; \sum_{y=u_y-\omega_y}^{u_y+\omega_y} \left( A(x, y) - B(x + d_x, y + d_y) \right)^2 \tag{4.1} $$

where A and B are two successive images. In typical fashion, the optimal solution occurs when the first derivative of ε with respect to d is zero. Expanding the derivative and replacing the B(x + d_x, y + d_y) term by its first order Taylor series expansion about the point d = [0 0]ᵀ gives:

$$ \frac{\partial \epsilon(d)}{\partial d} \approx -2 \sum_{x=u_x-\omega_x}^{u_x+\omega_x}\; \sum_{y=u_y-\omega_y}^{u_y+\omega_y} \left( A(x, y) - B(x, y) - \begin{bmatrix} \frac{\partial B}{\partial x} & \frac{\partial B}{\partial y} \end{bmatrix} d \right) \begin{bmatrix} \frac{\partial B}{\partial x} & \frac{\partial B}{\partial y} \end{bmatrix} \tag{4.2} $$

Making the following substitutions:

$$ \delta I(x, y) \doteq A(x, y) - B(x, y) \tag{4.3} $$

$$ \nabla I = \begin{bmatrix} I_x & I_y \end{bmatrix}^T = \begin{bmatrix} \frac{\partial B}{\partial x} & \frac{\partial B}{\partial y} \end{bmatrix}^T \tag{4.4} $$

where I_x and I_y are the image gradients in both dimensions, yields:

$$ \frac{1}{2} \frac{\partial \epsilon(d)}{\partial d} \approx \sum_{x=u_x-\omega_x}^{u_x+\omega_x}\; \sum_{y=u_y-\omega_y}^{u_y+\omega_y} \left( \nabla I^T d - \delta I \right) \nabla I^T \tag{4.5} $$

$$ \frac{1}{2} \left[ \frac{\partial \epsilon(d)}{\partial d} \right]^T \approx \sum_{x=u_x-\omega_x}^{u_x+\omega_x}\; \sum_{y=u_y-\omega_y}^{u_y+\omega_y} \left( \begin{bmatrix} I_x^2 & I_x I_y \\ I_y I_x & I_y^2 \end{bmatrix} d - \begin{bmatrix} \delta I\, I_x \\ \delta I\, I_y \end{bmatrix} \right) \tag{4.6} $$

Equation 4.6, which expresses the derivative of the cost function ε with respect to the displacement vector d, may be written in the simplified form:

$$ \frac{1}{2} \left[ \frac{\partial \epsilon(d)}{\partial d} \right]^T \approx G d - b \tag{4.7} $$

where

$$ G \doteq \sum_{x=u_x-\omega_x}^{u_x+\omega_x}\; \sum_{y=u_y-\omega_y}^{u_y+\omega_y} \begin{bmatrix} I_x^2 & I_x I_y \\ I_y I_x & I_y^2 \end{bmatrix} \qquad \text{and} \qquad b \doteq \sum_{x=u_x-\omega_x}^{u_x+\omega_x}\; \sum_{y=u_y-\omega_y}^{u_y+\omega_y} \begin{bmatrix} \delta I\, I_x \\ \delta I\, I_y \end{bmatrix} \tag{4.8} $$

From Eq. 4.7, the optimal displacement vector may be found as

$$ d_{opt} = G^{-1} b \tag{4.9} $$

Since all of the above equations are given with respect to a fixed window size of ±ω, it is clear that the algorithm will not easily handle large image motion without selecting a large search window. A large search window would effectively smooth out the image, and the details which are useful for tracking may be lost. To counteract this problem, an iterative, pyramidal implementation of the algorithm has been proposed. The algorithm first generates an image pyramid consisting of consecutively sub-sampled images. The tracking algorithm given above is repeated, starting at the lowest resolution image and continuing on until the full resolution image. The displacement vector found for a low resolution image is used as an initial guess to begin

the search in the next, higher-resolution image. Figure 4-2 shows an example of such an image pyramid. This form of the algorithm is implemented in the freely available OpenCV computer vision library [3]. A more detailed description of the pyramidal implementation is available in [9].

Figure 4-2: Pyramidal Feature Tracking
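In practice, both the pyramidal tracker and the corner selection described in the next subsection are available directly through OpenCV's Python bindings [3]. The sketch below is illustrative only; the file names and parameter values are placeholders chosen here, not values from the thesis.

```python
import cv2

# Two consecutive grayscale frames (file names are placeholders)
A = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
B = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Select corners in A using the minimum-eigenvalue (Shi-Tomasi) criterion
pts = cv2.goodFeaturesToTrack(A, maxCorners=100, qualityLevel=0.01,
                              minDistance=10, useHarrisDetector=False)

# Pyramidal Lucas-Kanade: winSize is the +/-omega window, maxLevel the pyramid depth
new_pts, status, err = cv2.calcOpticalFlowPyrLK(
    A, B, pts, None, winSize=(21, 21), maxLevel=3)

tracked = new_pts[status.ravel() == 1]   # keep only successfully tracked features
```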

Two large eigenvalues indicate a corner, or some other spatial pattern which may be tracked reliably

Image corners are found by computing the image gradients in each dimension and forming the 2×2 variation matrix G by summing over a neighborhood around each pixel in the image. A trackable feature is determined by examining the minimum eigenvalue and comparing it to an empirically determined threshold, as in the sketch below:

min(λ₁, λ₂) > λ    (4.10)
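A brief sketch of this selection rule with the OpenCV Python bindings; the block size and the relative threshold are illustrative choices, not values from the thesis.

```python
import cv2
import numpy as np

img = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)

# min(lambda_1, lambda_2) of the 2x2 gradient matrix G, accumulated over a
# blockSize x blockSize neighborhood around every pixel.
min_eig = cv2.cornerMinEigenVal(img, blockSize=3, ksize=3)

# Empirical threshold of Eq. 4.10, here taken relative to the strongest
# response in the image; argwhere returns (row, col) candidate corners.
lam = 0.01 * min_eig.max()
corners = np.argwhere(min_eig > lam)
```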

4.2 Measurement Equation

In order to relate the tracked pixel locations to the navigation state, a linearized measurement model must be formed. To accomplish this, the state vector of Chapter 3 is augmented with additional target states corresponding to the ECEF-referenced positions of the points tracked in the image. This augmented state may simply be given as the error in the ECEF location of each image feature, δx_T. From this augmented state, the line-of-sight vector in the ECEF frame may be expressed as:

û = (x̂_T − x̂) / ‖x̂_T − x̂‖    (4.11)

The image feature locations (u, v) described in section 4.1 may be transformed into a camera-referenced line-of-sight vector by application of the pin-hole camera model. Figure 4-3 shows the camera and image frame definitions. The camera frame is located at the optical center of the on-board camera, with its x_c axis pointed along the optical axis, the z_c axis pointing down, and the y_c axis pointing to the right. This coordinate frame is assumed, without loss of generality, to be of a fixed offset and orientation from the body frame. The image frame is defined with its origin at the upper-left-most pixel of the image. The u axis points right, and the v axis points downward. The principal point of the camera (u₀, v₀) is defined with respect to this frame.

Figure 4-3: Camera and image frame definitions

The camera-referenced unit line-of-sight vector may be found as:

y = [ 0  0  1 ; 1/f_u  0  −u₀/f_u ; 0  1/f_v  −v₀/f_v ] [ u ; v ; 1 ]    (4.12)

zᶜ = y / ‖y‖    (4.13)

where (u₀, v₀, f_u, f_v) are the principal point and the focal length of the camera in each dimension, given in units of pixels. A measurement residual may be formed by rotating the measured LOS vector into the ECEF frame and subtracting the predicted LOS vector of Eq. 4.11:

δz = R̂ᵉ_c zᶜ − û    (4.14)
   = H δX̂    (4.15)

where the linearized measurement matrix H is the Jacobian of Eq. 4.14 evaluated at the current state estimate. The component of H with respect to the target state may be thought of as the projection of a perturbation of the target location δx̂_T onto the unit sphere S² = {x ∈ ℝ³ : ‖x‖ = 1}. This may be found by projecting the error onto the space orthogonal to the LOS vector of Eq. 4.11. The projected error is normalized by the estimated range to the target, and the result is given in Eq. 4.16:

∂δz/∂δx̂_T = (1/‖x̂_T − x̂‖) (I − û ûᵀ)    (4.16)

The component of H with respect to the navigation position state is simply the negated value of the quantity given in Eq. 4.16. From Eq. 3.5, we see that the error in the estimate of attitude may be expressed as

δRᵉ_c = R̂ᵉ_c [Ψ×]    (4.17)

and so the component of H with respect to vehicle attitude error may be found as:

δz|_Ψ = δR̂ᵉ_c ûᶜ    (4.18)
      = R̂ᵉ_c (Ψ × ûᶜ)    (4.19)
      = −[û×] R̂ᵉ_c Ψ    (4.20)

Combining Eqs. 4.16 and 4.20 gives the final linearized measurement Jacobian:

H = [ −M   0   −[û×] R̂ᵉ_c   0  ⋯  M  ⋯  0 ]    (4.21)

M = (1/‖x̂_T − x̂‖) (I − û ûᵀ)    (4.22)

where the −M block occupies the position-error columns, −[û×]R̂ᵉ_c the attitude-error columns, and M the columns of the corresponding target state. H has 3 rows and as many columns as there are filter states. In this case, there are 9 navigation states corresponding to position, velocity, and attitude errors, 18 states corresponding to accelerometer and gyro sensor errors, and 3N states corresponding to the N target states included in the filter. As part of the Kalman update process described in section 3.2, the measurement Jacobian is formed separately for every target state in the filter. Furthermore, each of the 3 rows of H is processed sequentially in order to avoid a costly matrix inversion. A sketch of the construction of H for a single target is given below.
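A minimal numpy sketch of Eqs. 4.16-4.22 for one target follows; the state-index arguments (idx_pos, idx_att, idx_tgt) are hypothetical stand-ins for the filter's actual state ordering.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v x], so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def los_jacobian(x_hat, xT_hat, R_ec, n_states, idx_pos, idx_att, idx_tgt):
    """Rows of H (Eq. 4.21) for one tracked target.

    x_hat, xT_hat : estimated vehicle and target ECEF positions
    R_ec          : estimated camera-to-ECEF rotation matrix
    idx_*         : starting column of each 3-wide error-state block
    """
    d = xT_hat - x_hat
    rng = np.linalg.norm(d)
    u = d / rng                                  # predicted LOS, Eq. 4.11
    M = (np.eye(3) - np.outer(u, u)) / rng       # Eq. 4.22

    H = np.zeros((3, n_states))
    H[:, idx_pos:idx_pos + 3] = -M               # navigation position block
    H[:, idx_att:idx_att + 3] = -skew(u) @ R_ec  # attitude block, Eq. 4.20
    H[:, idx_tgt:idx_tgt + 3] = M                # target position block
    return H
```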

In order to initialize the augmented state vector, an estimate of the range to the target must be used. In reality, the true range to the target may vary uniformly from a distance of zero to infinity, while the EKF framework requires that the probability distribution of the estimated target location be normally distributed with a known variance. This discrepancy in the measurement model can severely limit the usefulness of the vision-based update. One may limit the initial variance of the distribution by considering the intended operating environment and selecting the parameters accordingly. However, one of two possible issues may arise: the true range to the target may lie substantially outside of the normal distribution, or the initial variance of the distribution may be so large that the EKF puts very little weight on the observation, thus placing more trust in the state propagation model.

In the first case, where the true range to an observed target lies outside of the linearized regime, the filter may become unstable and/or divergent. Under this scenario, the state estimate may deviate from the truth while the filter's trust in the estimate remains the same. In the second case, the measurement will provide very little useful information to the filter, and so the state error covariance will grow at or near the rate of the INS error model. These cases are often evidenced by examining a plot of the filter's covariance bounds alongside the error in the state estimate. Figure 4-4 shows an example of a convergent state estimate. As can be seen, the state estimation error is zero-mean and normally distributed, with 1σ error bounds closely matching the filter's covariance estimate. Figure 4-5 shows examples of divergent state estimates. In the first example, the state estimate error escapes the covariance bounds, which is often associated with a violation of the linearity assumption of the filter. In the second, the filter places little trust in the observation, and thus the covariance grows at the rate of the INS propagation model.

In order to test the sensitivity to initial range error, a simulation was developed for the case of a UAV-mounted camera attempting to hover in place. A small oscillatory trajectory was used to simulate the closed-loop hovering dynamics of a small UAV. A group of 10 targets was randomly distributed to be within the field of view of the camera, with distances normally distributed with a mean of 50 meters and a varying standard deviation. An initial range estimate of 50 meters was used to seed the filter's state estimate. This simulation was performed several times; each time, the standard deviation of the targets' range distribution was increased and the root-sum-squared navigation error was recorded.

Figure 4-4: Converged state estimate example

Figure 4-5: Divergent state estimate examples

Figure 4-6 shows the results of this series of simulations. As can be seen, an initial range error of approximately 20% led to filter divergence. This result suggests that a random range initialization strategy, while possible, does not provide a satisfactory level of robustness.

Figure 4-6: Navigation sensitivity to initial range uncertainty

One possible strategy to overcome the linearization-violation potential of a random range initialization would be to employ a standard shape-from-motion technique common in the computer vision arena [14]. Such approaches would involve using an alternate sensing modality, such as the IMU or a barometric altimeter, to gauge the baseline between successive looks at the target. Once a sufficient baseline is achieved, the multiple sightings of the target can be used to triangulate its position to within 20% error, thereby achieving the maturity necessary for inclusion in the filter's state space [23]. These approaches are not without drawbacks, however. If, for example, the UAV's mission is to loiter in place for surveillance purposes, a series of maneuvers must be performed in order to acquire a stereo baseline, which adds complexity to an otherwise simple task. Another drawback of the shape-from-motion approach is the fact that it involves a delayed initialization of new target states. This implies that the information content of a target may not be absorbed until an off-line process is able to sufficiently estimate the range to the target.

Not only does this add the complexity of an additional off-line process, it has obvious disadvantages for fast-moving vehicles for which a target may only be seen a handful of times before leaving the camera's field of view. An alternative approach, and the one taken for this thesis, is to re-parameterize the measurement equation for greater linearity. This approach, known as an inverse-depth parameterization, is described in the next section.

4.3 Improvements for Measurement Linearity

As described in [25], an inverse-depth parameterization of feature points leads to greater linearity of the measurement equation, especially in situations where low parallax exists. This method allows un-delayed initialization of new feature points into the filter, and allows the filter to maintain stability in the presence of large range uncertainties. The major factor contributing to filter divergence with the previous method was that initial feature locations must be approximated by a Gaussian distribution, as required by any Kalman filter. A Gaussian model, however, is a poor fit to the true distribution of possible feature locations. In reality, a feature may lie anywhere along the line-of-sight in which it is seen, varying from very close to very far. If instead the feature state is posed as an inverse depth along the line-of-sight, the likelihood of possible depths becomes a much better fit to the true uniform distribution. Figure 4-7 gives an example of this. The top plot shows a Gaussian distribution whose parameter is the inverse depth of a point feature (ρ = 1/d). If instead the inverse of the parameter is used (in this case, the true depth), the likelihood of true depth becomes a much better fit for the possible distribution. The bottom plot in Fig. 4-7 shows this, where the likelihood is concentrated at some initial range guess, but the tail continues outward to include a much larger range of initial depths. As multiple looks at the stationary feature are obtained, the distribution tends towards a Gaussian one. One downside of this method is that additional states for the feature must be spawned within the filter, leading to higher computational costs. However, the feature may be reduced back to the old 3-dimensional state once a linearity check is satisfied, as described in [11].

Figure 4-7: Inverse Depth Likelihood

Aside from the enhanced linearity of the approach taken in [25], the formulation has the substantial benefit of an un-delayed target initialization using a single image. Given that the formulation is based on the inverse of the range to the target, points at infinity are handled seamlessly, and can instantly provide attitude information to the navigation filter. If sufficient parallax for these distant features is generated from camera motion, then the range estimate to the target will be improved, thus improving the self-location estimate of the vehicle. The feature state vector for this parameterization is defined as:

x_T = [ x_i  y_i  z_i  θ_i  φ_i  ρ_i ]ᵀ    (4.23)

which encodes the line-of-sight for the first observation of a point feature. The camera center from which the point was first observed is given as x_I = [ x_i  y_i  z_i ]ᵀ, and θ_i, φ_i are the azimuth and elevation (in the world frame) of the LOS vector m(θ_i, φ_i). The inverse of the depth along the LOS vector is given as ρ_i = 1/d_i.

Figure 4-8: Feature parameterization and measurement equation [25]

The observation of a point in the ECEF frame is given as:

h = ρ (x_I − x) + m(θ_i, φ_i)    (4.24)

m(θ_i, φ_i) = [ cos(φ)cos(θ),  cos(φ)sin(θ),  sin(φ) ]ᵀ    (4.25)

which is then projected onto the 2D image plane by first rotating into the camera frame by the quantity (Rᵉ_C)ᵀ and applying the pinhole camera model:

z = [ u ; v ] = [ u₀ + f_u h_y/h_x ; v₀ + f_v h_z/h_x ]    (4.26)
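A short sketch of this measurement prediction; the function and argument names are illustrative, not from the thesis.

```python
import numpy as np

def predict_pixel(x, R_ec, x_I, theta, phi, rho, fu, fv, u0, v0):
    """Predicted pixel for an inverse-depth feature (Eqs. 4.24-4.26).

    x    : current camera position in the ECEF frame
    R_ec : camera-to-ECEF rotation matrix
    (x_I, theta, phi, rho): inverse-depth feature state of Eq. 4.23
    """
    m = np.array([np.cos(phi) * np.cos(theta),     # LOS direction, Eq. 4.25
                  np.cos(phi) * np.sin(theta),
                  np.sin(phi)])
    h_e = rho * (x_I - x) + m                      # Eq. 4.24, ECEF frame
    h_c = R_ec.T @ h_e                             # rotate into camera frame
    u = u0 + fu * h_c[1] / h_c[0]                  # Eq. 4.26; x_c is the
    v = v0 + fv * h_c[2] / h_c[0]                  # optical axis
    return np.array([u, v])
```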

The target state is initialized as x_T = y(x, z, ρ₀), a function of the current state, the observation of the target, and an empirically determined value for ρ₀:

x_I = x    (4.27)

θ_i = arctan( h_y / h_x )    (4.28)

φ_i = arctan( h_z / √(h_x² + h_y²) )    (4.29)

ρ_i = ρ₀    (4.30)
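Putting Eqs. 4.27-4.30 together, a sketch of this un-delayed initialization from a single sighting might look as follows. The world-frame LOS h is assumed here to come from back-projecting the measured pixel through Eq. 4.12 and rotating into the world frame; names are illustrative.

```python
import numpy as np

def init_feature(x, R_ec, u, v, fu, fv, u0, v0, rho0):
    """New inverse-depth feature from one pixel measurement (Eqs. 4.27-4.30).

    (u, v) is the measured pixel; rho0 is the empirical initial inverse depth.
    """
    y_c = np.array([1.0, (u - u0) / fu, (v - v0) / fv])  # camera-frame LOS
    h = R_ec @ y_c                                       # world-frame LOS
    theta = np.arctan2(h[1], h[0])                       # Eq. 4.28
    phi = np.arctan2(h[2], np.hypot(h[0], h[1]))         # Eq. 4.29
    return np.concatenate([x, [theta, phi, rho0]])       # Eq. 4.23 ordering
```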

In [25], the full state vector consists of an Earth-referenced position, attitude quaternion, velocity, and angular rate, in addition to the augmented state variables as in Eq. 4.23. We wish, instead, to form the linearized measurement matrix with respect to the error-state vector

δx = [ δx  δv  Ψ  δa  δg  δy_i ]ᵀ    (4.31)

which consists of Earth-referenced position and velocity errors, a camera (body) referenced attitude misalignment error, the inertial sensor errors, and the feature state errors. The measurement matrix is formed as:

H = ∂z/∂X = (∂z/∂h)(∂h/∂δx)    (4.32)

The error equation for the camera-to-Earth coordinate transformation is computed as:

R̂ᵉ_c = Rᵉ_c (I + [Ψ×]),   δRᵉ_c = Rᵉ_c [Ψ×]

and so, from Eq. 4.24, the linearization of the measurement model with respect to attitude misalignment may be expressed as:

δhᶜ = (Rᵉ_c [Ψ×])ᵀ h = −[Ψ×] Rᶜ_e h = [hᶜ×] Ψ

Differentiating the remaining terms, the measurement matrix H is computed as:

H = (∂z/∂h) [ −ρ(Rᵉ_c)ᵀ   0   [hᶜ×]   0   0   ρ(Rᵉ_c)ᵀ   (Rᵉ_c)ᵀ ∂h/∂θ   (Rᵉ_c)ᵀ ∂h/∂φ   (Rᵉ_c)ᵀ (x_I − x) ]    (4.33)

with:

∂z/∂h = [ −f_u hᶜ_y/(hᶜ_x)²   f_u/hᶜ_x   0 ;  −f_v hᶜ_z/(hᶜ_x)²   0   f_v/hᶜ_x ]

∂h/∂θ = [ −cos(φ)sin(θ),  cos(φ)cos(θ),  0 ]ᵀ

∂h/∂φ = [ −sin(φ)cos(θ),  −sin(φ)sin(θ),  cos(φ) ]ᵀ

where the column blocks of Eq. 4.33 correspond to δx, δv, Ψ, δa, δg, and the feature-state errors (δx_I, δθ, δφ, δρ). In contrast to the formulation in [25], this measurement linearization serves to couple the camera with the inertial sensors. This allows a sense of scale to be recovered from the onboard accelerometers, and allows navigation to continue during periods when vision updates are unavailable.

4.4 Covariance Initialization

Although the estimated state vector expresses the target locations with respect to the ECEF frame, their life begins with a 2D camera measurement and no prior knowledge of their earth-referenced location. Just as the target's location must be initialized in 3D space, so must the filter's uncertainty in that location. This uncertainty should be a function of both the measurement noise covariance and the current navigation error covariance of the camera's location and orientation. Much like the propagation of covariance by means of the state transition matrix, a linearized similarity transformation must be applied to the current covariance matrix in order to map it to its new, augmented state. For this particular filter design, the state space does not change in dimension. Target states are simply replaced by new ones as their corresponding pixel locations leave the image frame, serving to limit the computation necessary for navigation.

While this does not allow for the loop-closure techniques that are commonplace among SLAM algorithms, this capability may be regained using an approach described in Chapter 7. The required change of basis may be found by first writing the new state vector as a function of the old state vector and the observation of the new target, X̄ = Γ(X). If we consider the case when the j-th feature state is to be replaced, the following relationships may be written:

x̄ = x    (4.34)
v̄ = v    (4.35)
R̄ᵉ_b = Rᵉ_b    (4.36)
ā = a    (4.37)
ḡ = g    (4.38)
x̄_Ti = x_Ti,  i ≠ j    (4.39)
x̄_Tj = y(x, z, ρ₀)    (4.40)

The similarity transformation for covariance initialization may be found from the gradient of this function:

Γ = [ I        0       0 ]  } n₁
    [ 0        I       0 ]  } n₂
    [ ∂y/∂δx   ∂y/∂Ψ   0 ]  } n₃
    (4.41)

where the first n₁ rows cover the navigation and sensor error states, the next n₂ rows cover the retained target states, and the last n₃ rows replace the j-th target state, with ∂y/∂δx and ∂y/∂Ψ occupying the columns of the position and attitude errors, respectively. This operator serves to initialize the covariance of the new target state based on the current navigation state error covariance. The intuition behind this operation is that the estimator can only be as certain about the location of an object as it is about its self-location. Unless the target has some a priori information associated with it, such as a geo-registered point on a map, it cannot improve the localization of the vehicle with respect to the ECEF frame. For this reason, we may think of the navigation approach taken here as a relative one, despite the fact that the solution is referenced to a global frame.

In addition to the navigator's self-localization uncertainty contribution to the new target, the measurement noise covariance adds a component as well. These noise terms stem from the fact that there is noise inherent to the pixel tracking algorithm, as well as large uncertainty in the initial range to the feature, as described in the previous sections of this chapter. These noise sources are assumed to be zero-mean Gaussian white noise values, and are assumed to be uncorrelated. This may be expressed in matrix form as:

R = [ σ_u²  0  0 ;  0  σ_v²  0 ;  0  0  σ_ρ² ]    (4.42)

where (σ_u², σ_v²) are the pixel tracking noise variances in units of pixels, and σ_ρ² is the variance of the initial range (inverse-depth) error. These noise terms may be linearly mapped to the new covariance matrix by the partial derivative of the target state with respect to the measurement:

K = [ 0 ; ⋮ ; ∂y/∂h ; ⋮ ; 0 ]    (4.43)

The new covariance matrix after feature initialization may be computed as:

P_new = Γ P Γᵀ + K R Kᵀ    (4.44)

Since the filter formulation uses a U-D factored form of the covariance matrix, we wish instead to compute the updated U-D factors directly.
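In dense form, the initialization of Eqs. 4.42-4.44 is a one-line operation; the sketch below shows only the underlying algebra, whereas the thesis applies the equivalent U-D factored update described next.

```python
import numpy as np

def init_covariance_dense(P, Gamma, K, sigma_u, sigma_v, sigma_rho):
    """Dense form of Eq. 4.44: map the old covariance through Gamma and
    add the measurement-noise contribution through K."""
    R = np.diag([sigma_u**2, sigma_v**2, sigma_rho**2])  # Eq. 4.42
    return Gamma @ P @ Gamma.T + K @ R @ K.T             # Eq. 4.44
```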

At first glance, the appropriate solution may seem to involve computing the full covariance matrix, applying the change of basis, and then decomposing back to the U-D form by means of a Cholesky decomposition. This solution, however, is unattractive for two reasons. First, the amount of computation necessary to reconstruct the full covariance, apply the transformation, and decompose it is substantial. If possible, it is highly desirable to avoid such a computation. Secondly, and probably more importantly, the advantages of using a square-root type estimation filter are lost by performing such an operation. The numerical round-offs that would occur through reconstruction and decomposition could lead to poor filter performance, and are the very reason for choosing a U-D filter formulation in the first place.

The key to avoiding such an operation is to recognize that this change of basis is, in fact, the same operation that occurs every time the covariance is propagated forward in time as in section 3.2. The Γ operator introduced in Eq. 4.41 may be thought of as just another state transition matrix such as Φ. With this important observation made, the U-D factors may be calculated directly, thus avoiding the expensive and error-prone method of full covariance reconstruction. Equation 4.44 may be re-written in terms of the U-D covariance factors as:

Ū D̄ Ūᵀ = (Γ U) D (Γ U)ᵀ + K R Kᵀ    (4.45)
        = [ Γ U  K ] Diag(D, R) [ Γ U  K ]ᵀ    (4.46)
        = W D̂ Wᵀ    (4.47)

and the Modified Weighted Gram-Schmidt (MWG-S) algorithm may be applied to orthogonalize the rows of W and find the updated U-D covariance factors; a sketch is given at the end of this section. Instead of the process noise matrix Q being mapped to the updated covariance factors, the measurement noise matrix R is mapped through the K matrix. A significant portion of the computation required for the filter is in the application of the MWG-S algorithm. This computational load may be substantially reduced by exploiting the structure of the transformation operator Γ. By expanding W D̂ Wᵀ = Ū D̄ Ūᵀ, we make the following observations:

The first n₁ rows of W are simply a copy of the first n₁ rows of U, due to the fact that the first n₁ rows of Γ are identity.

60 The k = n n rows of W are a copy of the k = rows of U due [ ] T [ ] T to the fact that y = δx I and y = Ψ 0 3 θ Ψ The k = n 3 rows of Ū and D are unaffected by the orthogonalization, and may be found from the k = n 3 rows of the product Γ U By exploiting the structure of the transformation operator, the number of arithmetic operations can be significantly reduced, especially for high-order filter implementations. Care should be taken to identify possible computation reductions for any filter design to ensure that the implementation is tractable in hardware. φ Ψ ρ Ψ 60

Chapter 5

Suboptimal Covariance Analysis

The previous chapters have proposed an estimation algorithm based on the fusion of information from complementary sensing modalities, namely an inertial measurement unit and a camera. The estimator design explicitly models all known error sources, including the various INS error properties, pixel tracking noise, initial scene depth uncertainty, and initial state uncertainty. One could apply the estimator design directly and, if all error sources were correctly modeled, would obtain an optimal estimate of the true state vector. While this may be appropriate in many cases, the limited nature of the processing architectures in small UAV avionics systems may require that the filter be of a reduced order. These reduced-order filters are in fact suboptimal versions of the true optimal filter, since they knowingly neglect the presence of true error sources. This chapter presents the methods used for analyzing these classes of suboptimal filters.

If one wished to find the estimation error covariance of a particular filter design, one could simply exercise the covariance propagation and update equations of section 3.2 with an example trajectory of IMU inputs and camera measurement updates, and store the diagonal elements of P at various points of interest. This would give the estimation uncertainty of the minimum mean-square estimate at the desired point in the trajectory. However, this estimate is only optimal in relation to the model against which it is compared. If, for example, the filter contained states for the position, velocity, attitude, and gyro bias of the system, the resulting state estimate would be optimal in a world in which the only INS error was gyro bias.

The uncertainty expressed by the filter's covariance matrix would likely give the designer an overly optimistic view of the filter's performance in the real world. Thus, simply removing states from the filter and comparing the covariance results is not an adequate method of determining an appropriate filter design. The analysis methods presented here enable the designer to determine the performance of an m-dimensional filter in an n-dimensional world, where often m < n due to hardware limitations. There are two cases of such analysis: when the implemented filter represents a true portion of the truth model, and when the implemented filter dynamics differ from the truth model. For this thesis, only the first case will be considered, since we are interested in determining a reduced-order filter's performance. The second case allows the designer to analyze the sensitivity of the filter to modeling errors. For example, if the true correlation time constant of the accelerometer bias differed from that which was implemented in the filter, the effect of this modeling error could be analyzed with a set of generalized sensitivity equations. For a detailed treatment of the general case of sensitivity analysis, see [18].

5.1 Sensitivity Analysis

Fundamentally, the sensitivity analysis method presented here involves computing a time history of filter gains for the suboptimal filter and applying it to the truth model. The state vector may be partitioned as:

x = [ x_E ; x_N ]    (5.1)

where x_E and x_N represent the estimated states and the states neglected in the suboptimal design, respectively.

Similarly, the state transition and measurement matrices may be partitioned as:

Φ = [ Φ_EE  Φ_EN ; Φ_NE  Φ_NN ]    (5.2)

H = [ H_E  H_N ]    (5.3)

Due to the neglect of the states x_N, the suboptimal filter gain becomes:

K* = [ K ; 0 ]    (5.4)

Quantities with a (*) superscript will denote those associated with the implemented (suboptimal) filter. The general sensitivity equations for covariance propagation are given as:

P_{k+1}(−) = Φ*_k P_k(+) Φ*ᵀ_k + Φ*_k Vᵀ_k(+) Φ̃ᵀ_k + Φ̃_k V_k(+) Φ*ᵀ_k + Φ̃_k U_k Φ̃ᵀ_k + Q_k

V_{k+1}(−) = Φ_k V_k(+) Φ*ᵀ_k + Φ_k U_k Φ̃ᵀ_k + Q_k

U_{k+1} = Φ_k U_k Φᵀ_k + Q_k

and for a measurement update:

P_k(+) = (I − K*_k H*_k) P_k(−) (I − K*_k H*_k)ᵀ − (I − K*_k H*_k) Vᵀ_k(−) H̃ᵀ_k K*ᵀ_k
         − K*_k H̃_k V_k(−) (I − K*_k H*_k)ᵀ + K*_k H̃_k U_k H̃ᵀ_k K*ᵀ_k + K*_k R_k K*ᵀ_k

V_k(+) = V_k(−) (I − K*_k H*_k)ᵀ − U_k H̃ᵀ_k K*ᵀ_k

where

Φ̃ ≐ Φ − Φ*,   H̃ ≐ H − H*

For the special case when Φ̃ = 0 and H̃ = 0, the above equations simplify greatly. This case arises when the implemented filter reflects a true portion of the truth model, and there are no modeling errors for those states. In this case, the above equations reduce to:

P_{k+1}(−) = Φ_k P_k(+) Φᵀ_k + Q_k    (5.5)

P_k(+) = (I − K*_k H_k) P_k(−) (I − K*_k H_k)ᵀ + K*_k R_k K*ᵀ_k    (5.6)

One should note that the dimension of the implemented filter is often less than that of the truth model. This is certainly the case when evaluating a reduced-order filter, as we wish to do. In order to account for the dimensionality mismatch between the two models, the following matrix is introduced:

Wᵀ = [ I  0 ]    (5.7)

where W is an n × m matrix. This transformation may be applied to the suboptimal filter gain, and the new gain to be used in Eq. 5.6 is W K*_k; a sketch of the resulting recursion is given below. Figure 5-1 shows a flow diagram for suboptimal covariance analysis. As described above, the filter gains for the reduced-order filter are computed and applied to the truth model. Through proper selection of the initial conditions and error sources present, an error budget may be constructed. This procedure is described in more detail in the next section.
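A minimal sketch of the special-case recursion of Eqs. 5.5-5.6, assuming the suboptimal gain history has already been computed and stored; the function and argument names are illustrative.

```python
import numpy as np

def suboptimal_covariance(P0, Phi_seq, H_seq, K_seq, Q, R, W):
    """Propagate the truth-model covariance with the reduced-order gains.

    Phi_seq, H_seq : per-step truth-model matrices (n x n, p x n)
    K_seq          : per-step suboptimal gains (m x p), embedded by W (n x m)
    """
    P = P0.copy()
    history = []
    for Phi, H, K in zip(Phi_seq, H_seq, K_seq):
        P = Phi @ P @ Phi.T + Q                # Eq. 5.5, time propagation
        Kw = W @ K                             # lift gain into truth space
        IKH = np.eye(P.shape[0]) - Kw @ H
        P = IKH @ P @ IKH.T + Kw @ R @ Kw.T    # Eq. 5.6, Joseph-form update
        history.append(np.diag(P).copy())      # uncertainties of interest
    return P, np.array(history)
```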

5.2 Error Budget Construction

Using the sensitivity equations given in the previous section, an error budget for the navigation filter may be constructed. This procedure allows the filter designer to study the effects of various error sources on the overall navigation performance. Armed with this information, decisions can be made about the required number of filter states, as well as sensor selection, in order to satisfy navigation requirements. This becomes especially important when the available on-board computing power is minimal, as is the case with many small UAVs.

Figure 5-1: Suboptimal Covariance Analysis Flow Diagram [18]

In order to construct an error budget, the sensitivity equations must be exercised repeatedly with many different initial conditions. In order to find the error contribution due solely to angular random walk, for example, the sensitivity equations are exercised with the process noise corresponding to the Ψ state present and all other noises set to zero. Likewise, to find the error contribution of a single state's a priori covariance, that value is set appropriately and all others are set to zero. It is important to remember that a particular error budget is only valid for the conditions under which it is constructed. If the designer is interested in the minimal number of filter states required for a particular operation such as a persistent hover, the error budget should be constructed with IMU inputs and vision-based measurements representative of such a situation. It may turn out that certain error sources only become observable under certain flight conditions. For example, if it is found that accelerometer scale factor has little contribution to the navigation performance under hovering conditions, this result does not necessarily translate to forward flight conditions. Care must be taken to design a filter that will satisfy performance requirements under all expected conditions.

As an example of error budget construction, a simulation of a hovering camera was used; the details of the simulation are described in Chapter 6. We wish to determine the dominant error sources in the system which affect hovering performance, and subsequently design a filter which performs adequately under those conditions. In order to do so, the sensitivity equations are exercised several times, each time with a different error source present. The error sources to be examined include:

INS error sources

Initial feature location uncertainty

Pixel tracking error

To determine the contribution of the INS error, all initial covariances and measurement noises are set to zero. Individual process noises are set to their correct values in order to determine their particular contribution to the navigation error. For example, in order to determine the effect of angular random walk, the following system matrices are used to seed the covariance simulation:

P₀ = 0,   Q₀ = diag(0, …, 0, σ²_ARW, 0, …, 0),   R₀ = 0

After running the covariance analysis for every initial condition, process noise, and measurement noise, the resulting contribution of each was tabulated.
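A schematic of this per-source procedure; the source table, index ranges, values, and the run_sensitivity driver below are hypothetical illustrations, not code or parameters from the thesis.

```python
import numpy as np

def error_budget(sources, run_sensitivity):
    """Exercise the sensitivity equations once per error source; sources
    maps a name to (P0, Q, R) in which only that source is non-zero, and
    run_sensitivity returns the resulting terminal navigation error."""
    return {name: run_sensitivity(P0, Q, R)
            for name, (P0, Q, R) in sources.items()}

# Seeding for the angular-random-walk run: everything zero except the ARW
# process noise on the attitude-error (Psi) states.
n = 33                               # e.g., 9 nav + 18 sensor + 2 targets
Q_arw = np.zeros((n, n))
Q_arw[6:9, 6:9] = 1e-8 * np.eye(3)   # sigma_ARW^2, placeholder value
sources = {"ARW": (np.zeros((n, n)), Q_arw, np.zeros((3, 3)))}
```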

Figure 5-2 shows the various error contributions. As can be seen, the INS error source with the greatest effect on navigation performance is the angular random walk. Conversely, the scale factor and misalignment terms have little effect on the system under hovering conditions. As mentioned previously, this result does not reliably transfer to other operating conditions such as forward flight, turning, etc. The bias terms of both the accelerometer and gyro have an appreciable effect on the navigation error, and so they should be considered for inclusion in the final filter design. As expected, the largest contributing factor to navigation error is the fact that no depth information is available using the vision-based updates. This fact is evidenced by the initial feature location covariance contribution to the navigation error. The second largest contributing factor is the measurement noise value. Care should be taken to select feature tracking parameters to minimize the pixel noise variance for the available optics and expected operating conditions (see Eq. 4.42).

Figure 5-2: Navigation Filter Error Budget

The above example explores a filter design for one of many possible scenarios. Future work should involve developing an error budget for other operating conditions, and examining the observability of various error sources under those conditions. The covariance analysis architecture developed here is a useful tool for filter design in any scenario. For a given design scenario, the time-varying state transition and measurement matrices must be supplied along with a reference trajectory and measurement values. The process for error budget construction presented here is not specific to vision-aided navigation filter design, and should be used any time that computation limitations are a factor in the design. Even if such limitations are not present, the information gained about various measurement and sensor noise contributions can alert the filter designer to specific problems to be addressed in the overall system design.


Chapter 6

Results

This chapter provides a summary of results based on the vision-aided navigation strategy designed in the previous chapters. First, a simulation architecture is developed in order to verify algorithm performance in a synthetic environment. The simulation provides insight into the state estimation performance both in terms of navigation accuracy and in a closed-loop control scenario. Finally, the algorithm is tested using real imagery, and an evaluation of performance is given.

6.1 Simulation Results

In order to test the algorithm design in a controlled environment, a full simulation architecture was developed. For full performance characterization of the navigation algorithm, the following information is required:

A motion model to produce specific forces and angular rates

A sensor error model

A 3D environment to produce synthetic imagery

A camera model to produce image projections

To satisfy the above requirements, a combination of MATLAB, Simulink, and OpenGL Performer was used to create a full dynamic simulation with synthetic imagery.

The MathWorks MATLAB® and Simulink® products were used to implement a simple motion model and associated control system [2]. This model produces both a camera trajectory, consisting of position and orientation, and a history of the linear and angular accelerations used to produce that trajectory. Silicon Graphics' OpenGL Performer® allows for efficient rendering of 3D environments, as well as easily defined camera placement and parameters [3]. The overall simulation architecture is given in Figure 6-1, and the individual components making up the full simulation are described below.

Figure 6-1: Simulation Architecture

Simulation Dynamic Model

The equations governing the camera dynamics come about by application of the well known Newton-Euler equations for rigid body dynamics. The forces and moments acting on the camera body are given by:

F_b = m ( v̇_b + ω × v_b )    (6.1)

τ_b = I ω̇ + ω × (I ω)    (6.2)

Re-writing in terms of linear and angular accelerations gives the following equations of motion [17]:

u̇ = vr − wq − g sin θ + F_x/m    (6.3)
v̇ = wp − ur + g sin φ cos θ + F_y/m    (6.4)
ẇ = uq − vp + g cos φ cos θ + F_z/m    (6.5)
ṗ = qr (I_yy − I_zz)/I_xx + τ_x/I_xx    (6.6)
q̇ = pr (I_zz − I_xx)/I_yy + τ_y/I_yy    (6.7)
ṙ = pq (I_xx − I_yy)/I_zz + τ_z/I_zz    (6.8)

where (u, v, w) are the components of velocity in the body frame, and (p, q, r) are the components of angular rate in the body frame. The (I_xx, I_yy, I_zz) terms are the moments of inertia about each body axis (the cross products of inertia are omitted for simplification). The forces and torques acting upon the body are given by the (F, τ) terms and are treated as control inputs; a sketch of these dynamics follows.
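A minimal sketch of Eqs. 6.3-6.8 as a derivative function suitable for numerical integration; names and the packing of the state are illustrative.

```python
import numpy as np

def body_accels(state, F, tau, m, I, g=9.81):
    """Rigid-body accelerations, Eqs. 6.3-6.8.

    state = (u, v, w, p, q, r, phi, theta); F, tau are body-frame force and
    torque control inputs; I = (Ixx, Iyy, Izz)."""
    u, v, w, p, q, r, phi, theta = state
    Ixx, Iyy, Izz = I
    du = v*r - w*q - g*np.sin(theta) + F[0]/m                # Eq. 6.3
    dv = w*p - u*r + g*np.sin(phi)*np.cos(theta) + F[1]/m    # Eq. 6.4
    dw = u*q - v*p + g*np.cos(phi)*np.cos(theta) + F[2]/m    # Eq. 6.5
    dp = q*r*(Iyy - Izz)/Ixx + tau[0]/Ixx                    # Eq. 6.6
    dq = p*r*(Izz - Ixx)/Iyy + tau[1]/Iyy                    # Eq. 6.7
    dr = p*q*(Ixx - Iyy)/Izz + tau[2]/Izz                    # Eq. 6.8
    return np.array([du, dv, dw, dp, dq, dr])
```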

Figure 6-2: Camera Motion Definitions

Position in the ECEF frame is found by first rotating the accelerations given in Eqs. 6.3-6.5 to the navigation frame and integrating twice:

x = ∬ Rᵉ_b v̇ᵇ dt    (6.9)

The current body-to-ECEF transformation is found through integration of the rates of change of the Euler angles. The relationship between body rotation rates and Euler rates is given by:

[ φ̇ ; θ̇ ; ψ̇ ] = [ 1   sin φ tan θ   cos φ tan θ ;  0   cos φ   −sin φ ;  0   sin φ / cos θ   cos φ / cos θ ] [ p ; q ; r ]    (6.10)

A simple control system was designed in order for the camera to follow specified trajectories. The control system maps the user's commands to the required forces and torques acting on the body. A Linear Quadratic Regulator (LQR) controller was used to generate the appropriate control inputs. The LQR design allows a weight to be assigned to each state variable and control input, resulting in an optimal control design satisfying the cost function:

J = ∫₀^∞ [ xᵀ(t) Q x(t) + uᵀ(t) R u(t) ] dt    (6.11)

where x(t) and u(t) are the state and control trajectories, and Q and R are their respective weights used to penalize deviation from the reference trajectory. The state to be controlled consists of the camera's position, velocity, attitude, and angular rates. The control inputs, as mentioned earlier, consist of the forces and moments acting upon the camera. By writing the system dynamics in state-space form:

ẋ = Ax + Bu    (6.12)

the optimal control gains may be found as K = R⁻¹BᵀP, where P is a positive definite solution to the steady-state Riccati equation:

P B R⁻¹ Bᵀ P = P A + Aᵀ P + Q    (6.13)

From this gain matrix, the optimal control inputs may be found as:

u(t) = −K x(t)    (6.14)
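A short sketch of the gain computation using SciPy's Riccati solver; this is one standard way to obtain the LQR gain of Eqs. 6.13-6.14, not necessarily the tooling used in the thesis.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """LQR gain from the steady-state algebraic Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)   # solves Eq. 6.13 for P
    K = np.linalg.solve(R, B.T @ P)        # K = R^-1 B^T P
    return K

# Usage: the feedback law of Eq. 6.14 is u = -K @ (x - x_ref) when tracking
# a reference state x_ref (x_ref = 0 for pure regulation).
```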

This thesis is not intended to be an in-depth reference on optimal control design, and therefore the details of LQR design are left to the references [10][15].

Sensor Model

The linear and angular accelerations generated by the above equations of motion may be used to generate synthetic IMU sensor readings. As described in section 2.4, the output of the IMU consists of inertial specific forces and angular rates resolved in the body axes. The specific force measured by the accelerometer is due to a combination of acceleration, gravitation, and Coriolis forces. The angular rates measured by the gyro are a combination of the angular rate of the body and the earth's rotation rate. Therefore, the values measured by the sensors are given by:

f = v̇ᵇ − (Rᵉ_b)ᵀ ( gᵉ − 2 ωᵉ_ie × vᵉ )    (6.15)

ω = ωᵇ_ib + ωᵇ_ie    (6.16)

These measured values are then corrupted with the INS error models given in Eqs. 2.18 and 2.19, consisting of bias, scale factor, misalignment, and random walk noise; a sketch of this corruption step follows.
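A minimal sketch of the corruption step in the spirit of the thesis's error models (Eqs. 2.18-2.19 are not reproduced here, so misalignment is omitted and all names and values are illustrative):

```python
import numpy as np

def corrupt_imu(f_true, w_true, bias_a, bias_g, sf_a, sf_g,
                sigma_a, sigma_g, rng):
    """Apply bias, scale factor, and white noise to the true specific force
    and angular rate produced by the motion model."""
    f_meas = (1.0 + sf_a) * f_true + bias_a + sigma_a * rng.standard_normal(3)
    w_meas = (1.0 + sf_g) * w_true + bias_g + sigma_g * rng.standard_normal(3)
    return f_meas, w_meas

rng = np.random.default_rng(0)
f, w = corrupt_imu(np.array([0.0, 0.0, -9.81]), np.zeros(3),
                   bias_a=np.full(3, 1e-3), bias_g=np.full(3, 1e-5),
                   sf_a=1e-4, sf_g=1e-4, sigma_a=1e-3, sigma_g=1e-5, rng=rng)
```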

Synthetic Image Generation

The camera motion model and controller described above may be used to move the camera through space with a specified velocity and orientation. In order to capture imagery from the camera's simulated viewpoint, a synthetic 3D environment was employed. The OpenGL Performer product from Silicon Graphics provides a solution for real-time scene generation. The system allows for placement of multiple cameras within a scene, and can provide imagery with a specified resolution and field of view. These images may be fed to the feature selection and tracking algorithm described in section 4.1 in order to provide updates to the vision-aided navigation algorithm. Figure 6-3 shows an example of a synthetic image taken from the Performer Town supplied with the software. The red squares overlaid on the image correspond to features automatically selected by the Lucas-Kanade algorithm. These pixel locations are used as measurement updates to the Kalman filter as described previously.

Figure 6-3: Example Synthetic Imagery and Feature Selection

This synthetic imagery generation system allows for full vision/INS simulations with synchronized data. The synchronization of images with other sensor data is often a difficult barrier to overcome for algorithm validation. With this simulation in place, the algorithm may be tested in a controlled environment, separate from the complexities of hardware.

Simulated Flight

To test the performance of the vision-aided navigation algorithm, a circuitous trajectory was created to fly the camera through the synthetic environment. The route was flown at 10 meters above ground level, and consisted of a 710 meter traverse with several turns at an average speed of 4.5 m/s. The camera model assumed a 45 degree field of view at a fixed image resolution. The camera was pitched 25 degrees down from the body axes, and imagery was collected at a 5 Hz rate. Figure 6-4 shows the trajectory overlaid on an overhead image of the flight area.

Figure 6-4: Flight path overlaid on overhead imagery

Simulated GPS updates were supplied to the navigation filter for the first 30 seconds of flight in order to simulate an operating condition where GPS becomes either obscured by the environment or jammed. The goal of the vision-aided navigation algorithm is to allow continued operation of the vehicle during these periods of GPS outage. As in previous examples, the true IMU sensor readings were corrupted by the error model provided in Table 2.1. Figure 6-5 shows a comparison of performance between the INS-only navigation solution and the vision-aided INS solution. As can clearly be seen, the vision-aided algorithm provides a significant navigation performance increase over the INS-only solution. Once the GPS updates are shut off, the inertial-only solution quickly drifts and yields an unacceptable error in position. Using the vision aiding, the overall RSS navigation error at the end of the flight was 22.3 meters, corresponding to 3.1% of the total distance traveled; the INS-only solution, however, ended the flight with an error far in excess of this. Figure 6-6 shows the position and velocity estimation performance over time. Although the velocity estimates from the vision-aiding algorithm are much noisier than those of the INS-only solution due to the initial range errors, the estimates continue to follow the velocity profile without drift. It is the drift in velocity estimation that produces large errors in the position estimate.

Figure 6-5: Simulated flight trajectory performance comparison

Figure 6-6: Position and velocity estimates over the course of the entire flight

Simulated Closed-loop Hover

Another possible usage scenario of the vision-aided navigation algorithm would be in a closed-loop control system. In this case, the navigation algorithm provides state estimates for use in the feedback control loops.

Specifically, we are interested in the velocity estimation capabilities for use in a rate-damping approach, where the vehicle is meant to hover in place and null out residual velocity errors. In order to test this, the simulation architecture was again used; however, the vehicle was commanded to remain stationary for 60 seconds. The velocity estimates from the navigation filter were fed back to the control system described earlier in this chapter. Figure 6-7 shows a comparison of velocity estimation results. Similar to the results presented in the previous section, the estimation results with the vision-aided algorithm are substantially improved over the inertial-only performance.

Figure 6-7: Velocity estimation results for simulated hover

Figure 6-8 shows the lateral position estimation error. This error is separate from the closed-loop position holding error, in that this plot represents the error in the position signal fed back to the control system. As can be seen, the position estimation error remains under approximately 30 cm RMS throughout the duration of the simulated hover. Finally, the position holding capability of the closed-loop system is shown in Figure 6-9. This is the actual position error of the vehicle, based on both estimation and control errors.

Figure 6-8: Position estimation error for simulated hover

The controlled position error is actually less than the estimation error due to the high quality of the velocity estimates. With the full vehicle state available for feedback, the closed-loop hovering error remains below 20 cm RMS. Figure 6-10 shows the result of shutting off the vision updates after 30 seconds of hover. Once the vision updates are shut off, the estimation error rapidly diverges, accumulating approximately 30 meters of error within seconds of shutting off the updates.

6.2 Experimental Results

In order to test the vision-aided navigation algorithm in a slightly more realistic setting, an experiment was devised using a video camera and a motion tracking system. As part of the Aerospace Controls Laboratory at the Massachusetts Institute of Technology's Department of Aeronautics and Astronautics, a motion capture system has been developed to conduct indoor flight tests of small quad-rotor helicopters [35]. The motion capture system, developed by Vicon Motion Systems, provides accurate position and orientation information for objects within the room at a rate of 50 Hz. In order to track objects, a series of three or more infra-red reflective dots are placed in a stationary configuration on the object.

Figure 6-9: Closed-loop position holding capability

Figure 6-10: Effect of shutting off vision updates in a closed-loop hover

Several infra-red cameras are placed throughout the indoor space, and the Vicon system is able to triangulate the position and orientation of the reflectors. Figure 6-11 shows examples of the motion tracking system in use. The first image shows the configuration of the reflective balls on each vehicle, and the second shows the infra-red cameras placed throughout the room. The position and orientation of the vehicle may be fed back to an off-board computer where control signals are generated. These control signals are then sent to the quad-rotor helicopters over a traditional RC hobby communication link in order to control the vehicle.

Figure 6-11: Indoor UAV testbed at MIT's Aerospace Controls Lab [35]

For the experiments in this thesis, the Vicon motion capture system was used to track the position and orientation of a hand-held video camera. The motion capture system provided three very important streams of information to be used in verifying the vision-aided navigation algorithm's performance:

Accurate, high-rate position and attitude truth data to act as a comparison for the navigation algorithm's solution

Simulated IMU sensor information, generated by corrupting the motion capture system's output

Time synchronization between the simulated IMU data and the real video stream

Without the motion capture system, a ground truth against which to compare the algorithm's performance would have been difficult to obtain. Even if a hardware GPS could have been used to collect truth data, its accuracy would be in the 5-10 meter range, and attitude information would not be available. A 40 second motion was captured in which the camera was moved through combinations of 3-dimensional movement as well as changes in its orientation. Figure 6-12 shows an example of the imagery captured with the hand-held camera.

Figure 6-12: Video captured with selected features overlaid

The images were sub-sampled to produce images with the same resolution as the synthetic imagery used in the previous section. Just as in the simulation architecture, the captured imagery was passed through the Lucas-Kanade feature selection and tracking algorithm, and the results were fed into the navigation filter along with the IMU data. Figure 6-13 shows the vision-aided navigation algorithm's performance with the experimental data.

Despite having no prior knowledge of the scene, the algorithm is able to effectively fuse the imagery with the IMU data in order to estimate a navigation solution. The estimated position follows the true path closely in all three dimensions. The ending position estimate varies from the true position by approximately 10 cm RSS. Figure 6-14 shows the root-sum-squared navigation error and the associated filter 1σ bounds for the duration of the experiment. The filter's 1σ bounds do not exceed 10 cm throughout the duration of the flight.

Figure 6-13: Navigation performance with real imagery

In addition to the camera's position, its orientation was estimated by the filter. Figure 6-15 shows the results of the navigation filter's attitude estimation performance on the roll and pitch channels. The attitude estimate closely follows the true values, and the overall error remains under approximately 0.5 degrees throughout the collection run. Also estimated were the IMU sensor calibration errors. Similar to the position and attitude estimation performance, the filter is able to converge to appropriate values within approximately 5 seconds. These calibration error estimates will allow the system to operate in INS-only mode for a longer duration. This situation may arise in conditions where image features may be difficult to select and track.

Figure 6-14: Experimental RSS position error and uncertainty bounds

Figure 6-15: Experimental attitude estimation performance


More information

TTK4190 Guidance and Control Exam Suggested Solution Spring 2011

TTK4190 Guidance and Control Exam Suggested Solution Spring 2011 TTK4190 Guidance and Control Exam Suggested Solution Spring 011 Problem 1 A) The weight and buoyancy of the vehicle can be found as follows: W = mg = 15 9.81 = 16.3 N (1) B = 106 4 ( ) 0.6 3 3 π 9.81 =

More information

ENGR352 Problem Set 02

ENGR352 Problem Set 02 engr352/engr352p02 September 13, 2018) ENGR352 Problem Set 02 Transfer function of an estimator 1. Using Eq. (1.1.4-27) from the text, find the correct value of r ss (the result given in the text is incorrect).

More information

Chapter 4 State Estimation

Chapter 4 State Estimation Chapter 4 State Estimation Navigation of an unmanned vehicle, always depends on a good estimation of the vehicle states. Especially if no external sensors or marers are available, more or less complex

More information

Space Surveillance with Star Trackers. Part II: Orbit Estimation

Space Surveillance with Star Trackers. Part II: Orbit Estimation AAS -3 Space Surveillance with Star Trackers. Part II: Orbit Estimation Ossama Abdelkhalik, Daniele Mortari, and John L. Junkins Texas A&M University, College Station, Texas 7783-3 Abstract The problem

More information

A Study of Covariances within Basic and Extended Kalman Filters

A Study of Covariances within Basic and Extended Kalman Filters A Study of Covariances within Basic and Extended Kalman Filters David Wheeler Kyle Ingersoll December 2, 2013 Abstract This paper explores the role of covariance in the context of Kalman filters. The underlying

More information

Sigma-Point Kalman Filters for Integrated Navigation

Sigma-Point Kalman Filters for Integrated Navigation Sigma-Point Kalman Filters for Integrated Navigation Rudolph van der Merwe and Eric A. Wan Adaptive Systems Lab, OGI School of Science & Engineering, Oregon Health & Science University BIOGRAPHY Rudolph

More information

SLAM Techniques and Algorithms. Jack Collier. Canada. Recherche et développement pour la défense Canada. Defence Research and Development Canada

SLAM Techniques and Algorithms. Jack Collier. Canada. Recherche et développement pour la défense Canada. Defence Research and Development Canada SLAM Techniques and Algorithms Jack Collier Defence Research and Development Canada Recherche et développement pour la défense Canada Canada Goals What will we learn Gain an appreciation for what SLAM

More information

NAWCWPNS TM 8128 CONTENTS. Introduction Two-Dimensinal Motion Three-Dimensional Motion Nonrotating Spherical Earth...

NAWCWPNS TM 8128 CONTENTS. Introduction Two-Dimensinal Motion Three-Dimensional Motion Nonrotating Spherical Earth... CONTENTS Introduction... 3 Two-Dimensinal Motion... 3 Three-Dimensional Motion... 5 Nonrotating Spherical Earth...10 Rotating Spherical Earth...12 WGS84...14 Conclusion...14 Appendixes: A. Kalman Filter...15

More information

Baro-INS Integration with Kalman Filter

Baro-INS Integration with Kalman Filter Baro-INS Integration with Kalman Filter Vivek Dadu c,b.venugopal Reddy a, Brajnish Sitara a, R.S.Chandrasekhar a & G.Satheesh Reddy a c Hindustan Aeronautics Ltd, Korwa, India. a Research Centre Imarat,

More information

Design and modelling of an airship station holding controller for low cost satellite operations

Design and modelling of an airship station holding controller for low cost satellite operations AIAA Guidance, Navigation, and Control Conference and Exhibit 15-18 August 25, San Francisco, California AIAA 25-62 Design and modelling of an airship station holding controller for low cost satellite

More information

GPS/INS Tightly coupled position and attitude determination with low-cost sensors Master Thesis

GPS/INS Tightly coupled position and attitude determination with low-cost sensors Master Thesis GPS/INS Tightly coupled position and attitude determination with low-cost sensors Master Thesis Michele Iafrancesco Institute for Communications and Navigation Prof. Dr. Christoph Günther Supervised by

More information

Angular Velocity Determination Directly from Star Tracker Measurements

Angular Velocity Determination Directly from Star Tracker Measurements Angular Velocity Determination Directly from Star Tracker Measurements John L. Crassidis Introduction Star trackers are increasingly used on modern day spacecraft. With the rapid advancement of imaging

More information

Verification of a Dual-State Extended Kalman Filter with Lidar-Enabled Autonomous Hazard- Detection for Planetary Landers

Verification of a Dual-State Extended Kalman Filter with Lidar-Enabled Autonomous Hazard- Detection for Planetary Landers Marquette University e-publications@marquette Master's Theses (29 -) Dissertations, Theses, and Professional Projects Verification of a Dual-State Extended Kalman Filter with Lidar-Enabled Autonomous Hazard-

More information

Non-Linear Optimization Applied to Angle-of- Arrival Satellite-Based Geolocation with Correlated Measurements

Non-Linear Optimization Applied to Angle-of- Arrival Satellite-Based Geolocation with Correlated Measurements Air Force Institute of Technology AFIT Scholar Theses and Dissertations 3-26-2015 Non-Linear Optimization Applied to Angle-of- Arrival Satellite-Based Geolocation with Correlated Measurements Joshua S.

More information

EE 570: Location and Navigation

EE 570: Location and Navigation EE 570: Location and Navigation Sensor Technology Stephen Bruder 1 Aly El-Osery 2 1 Electrical and Computer Engineering Department, Embry-Riddle Aeronautical Univesity Prescott, Arizona, USA 2 Electrical

More information

Lecture 13 Visual Inertial Fusion

Lecture 13 Visual Inertial Fusion Lecture 13 Visual Inertial Fusion Davide Scaramuzza Outline Introduction IMU model and Camera-IMU system Different paradigms Filtering Maximum a posteriori estimation Fix-lag smoothing 2 What is an IMU?

More information

From Bayes to Extended Kalman Filter

From Bayes to Extended Kalman Filter From Bayes to Extended Kalman Filter Michal Reinštein Czech Technical University in Prague Faculty of Electrical Engineering, Department of Cybernetics Center for Machine Perception http://cmp.felk.cvut.cz/

More information

State Estimation for Autopilot Control of Small Unmanned Aerial Vehicles in Windy Conditions

State Estimation for Autopilot Control of Small Unmanned Aerial Vehicles in Windy Conditions University of Colorado, Boulder CU Scholar Aerospace Engineering Sciences Graduate Theses & Dissertations Aerospace Engineering Sciences Summer 7-23-2014 State Estimation for Autopilot Control of Small

More information

Design and Flight Performance of the Orion. Pre-Launch Navigation System

Design and Flight Performance of the Orion. Pre-Launch Navigation System Design and Flight Performance of the Orion Pre-Launch Navigation System Renato Zanetti 1, Greg Holt 2, Robert Gay 3, and Christopher D Souza 4 NASA Johnson Space Center, Houston, Texas 77058. Jastesh Sud

More information

Inertial Navigation and Various Applications of Inertial Data. Yongcai Wang. 9 November 2016

Inertial Navigation and Various Applications of Inertial Data. Yongcai Wang. 9 November 2016 Inertial Navigation and Various Applications of Inertial Data Yongcai Wang 9 November 2016 Types of Gyroscope Mechanical Gyroscope Laser Gyroscope Sagnac Effect Stable Platform IMU and Strapdown IMU In

More information

EE 570: Location and Navigation

EE 570: Location and Navigation EE 570: Location and Navigation Sensor Technology Aly El-Osery Kevin Wedeward Electrical Engineering Department, New Mexico Tech Socorro, New Mexico, USA In Collaboration with Stephen Bruder Electrical

More information

Chapter 1. Introduction. 1.1 System Architecture

Chapter 1. Introduction. 1.1 System Architecture Chapter 1 Introduction 1.1 System Architecture The objective of this book is to prepare the reader to do research in the exciting and rapidly developing field of autonomous navigation, guidance, and control

More information

c 2009 John Gregory Warner

c 2009 John Gregory Warner c 2009 John Gregory Warner ATTITUDE DETERMINATION AND CONTROL OF NANO-SATELLITES BY JOHN GREGORY WARNER THESIS Submitted in partial fulfillment of the requirements for the degree of Master of Science in

More information

Rao-Blackwellized Particle Filtering for 6-DOF Estimation of Attitude and Position via GPS and Inertial Sensors

Rao-Blackwellized Particle Filtering for 6-DOF Estimation of Attitude and Position via GPS and Inertial Sensors Rao-Blackwellized Particle Filtering for 6-DOF Estimation of Attitude and Position via GPS and Inertial Sensors GRASP Laboratory University of Pennsylvania June 6, 06 Outline Motivation Motivation 3 Problem

More information

Inertial Navigation System Aiding Using Vision

Inertial Navigation System Aiding Using Vision Air Force Institute of Technology AFIT Scholar Theses and Dissertations 3-21-2013 Inertial Navigation System Aiding Using Vision James O. Quarmyne Follow this and additional works at: https://scholar.afit.edu/etd

More information

EE 570: Location and Navigation

EE 570: Location and Navigation EE 570: Location and Navigation Aided INS Aly El-Osery Kevin Wedeward Electrical Engineering Department, New Mexico Tech Socorro, New Mexico, USA In Collaboration with Stephen Bruder Electrical and Computer

More information

Robotics 2 Target Tracking. Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard

Robotics 2 Target Tracking. Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard Robotics 2 Target Tracking Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard Slides by Kai Arras, Gian Diego Tipaldi, v.1.1, Jan 2012 Chapter Contents Target Tracking Overview Applications

More information

Dead Reckoning navigation (DR navigation)

Dead Reckoning navigation (DR navigation) Dead Reckoning navigation (DR navigation) Prepared by A.Kaviyarasu Assistant Professor Department of Aerospace Engineering Madras Institute Of Technology Chromepet, Chennai A Navigation which uses a Inertial

More information

Inertial Odometry using AR Drone s IMU and calculating measurement s covariance

Inertial Odometry using AR Drone s IMU and calculating measurement s covariance Inertial Odometry using AR Drone s IMU and calculating measurement s covariance Welcome Lab 6 Dr. Ahmad Kamal Nasir 25.02.2015 Dr. Ahmad Kamal Nasir 1 Today s Objectives Introduction to AR-Drone On-board

More information

A Monocular Vision-aided Inertial Navigation System with Improved Numerical Stability

A Monocular Vision-aided Inertial Navigation System with Improved Numerical Stability AIAA SciTech 5-9 January 215, Kissimmee, Florida AIAA Guidance, Navigation, and Control Conference AIAA 215-97 A Monocular Vision-aided Inertial Navigation System with Improved Numerical Stability Daniel

More information

Invariant Extended Kalman Filter: Theory and application to a velocity-aided estimation problem

Invariant Extended Kalman Filter: Theory and application to a velocity-aided estimation problem Invariant Extene Kalman Filter: Theory an application to a velocity-aie estimation problem S. Bonnabel (Mines ParisTech) Joint work with P. Martin (Mines ParisTech) E. Salaun (Georgia Institute of Technology)

More information

Design of Adaptive Filtering Algorithm for Relative Navigation

Design of Adaptive Filtering Algorithm for Relative Navigation Design of Adaptive Filtering Algorithm for Relative Navigation Je Young Lee, Hee Sung Kim, Kwang Ho Choi, Joonhoo Lim, Sung Jin Kang, Sebum Chun, and Hyung Keun Lee Abstract Recently, relative navigation

More information

Kalman Filters with Uncompensated Biases

Kalman Filters with Uncompensated Biases Kalman Filters with Uncompensated Biases Renato Zanetti he Charles Stark Draper Laboratory, Houston, exas, 77058 Robert H. Bishop Marquette University, Milwaukee, WI 53201 I. INRODUCION An underlying assumption

More information

Fundamentals of High Accuracy Inertial Navigation Averil B. Chatfield Table of Contents

Fundamentals of High Accuracy Inertial Navigation Averil B. Chatfield Table of Contents Navtech Part #2440 Preface Fundamentals of High Accuracy Inertial Navigation Averil B. Chatfield Table of Contents Chapter 1. Introduction...... 1 I. Forces Producing Motion.... 1 A. Gravitation......

More information

Information and Observability Metrics of Inertial SLAM for On-line Path-planning on an Aerial Vehicle

Information and Observability Metrics of Inertial SLAM for On-line Path-planning on an Aerial Vehicle Information and Observability Metrics of Inertial SLAM for On-line Path-planning on an Aerial Vehicle Mitch Bryson, Jonghyuk Kim and Salah Sukkarieh ARC Centre of Excellence in Autonomous Systems University

More information

Lecture 9: Modeling and motion models

Lecture 9: Modeling and motion models Sensor Fusion, 2014 Lecture 9: 1 Lecture 9: Modeling and motion models Whiteboard: Principles and some examples. Slides: Sampling formulas. Noise models. Standard motion models. Position as integrated

More information

MEMS Gyroscope Control Systems for Direct Angle Measurements

MEMS Gyroscope Control Systems for Direct Angle Measurements MEMS Gyroscope Control Systems for Direct Angle Measurements Chien-Yu Chi Mechanical Engineering National Chiao Tung University Hsin-Chu, Taiwan (R.O.C.) 3 Email: chienyu.me93g@nctu.edu.tw Tsung-Lin Chen

More information

Autonomous Mobile Robot Design

Autonomous Mobile Robot Design Autonomous Mobile Robot Design Topic: Extended Kalman Filter Dr. Kostas Alexis (CSE) These slides relied on the lectures from C. Stachniss, J. Sturm and the book Probabilistic Robotics from Thurn et al.

More information

Locating and supervising relief forces in buildings without the use of infrastructure

Locating and supervising relief forces in buildings without the use of infrastructure Locating and supervising relief forces in buildings without the use of infrastructure Tracking of position with low-cost inertial sensors Martin Trächtler 17.10.2014 18th Leibniz Conference of advanced

More information

Sensors for mobile robots

Sensors for mobile robots ROBOTICS 01PEEQW Basilio Bona DAUIN Politecnico di Torino Mobile & Service Robotics Sensors for Robotics 2 Sensors for mobile robots Sensors are used to perceive, analyze and understand the environment

More information

Predictive Control of Gyroscopic-Force Actuators for Mechanical Vibration Damping

Predictive Control of Gyroscopic-Force Actuators for Mechanical Vibration Damping ARC Centre of Excellence for Complex Dynamic Systems and Control, pp 1 15 Predictive Control of Gyroscopic-Force Actuators for Mechanical Vibration Damping Tristan Perez 1, 2 Joris B Termaat 3 1 School

More information

Robot Control Basics CS 685

Robot Control Basics CS 685 Robot Control Basics CS 685 Control basics Use some concepts from control theory to understand and learn how to control robots Control Theory general field studies control and understanding of behavior

More information

Eigenstructure Assignment for Helicopter Hover Control

Eigenstructure Assignment for Helicopter Hover Control Proceedings of the 17th World Congress The International Federation of Automatic Control Eigenstructure Assignment for Helicopter Hover Control Andrew Pomfret Stuart Griffin Tim Clarke Department of Electronics,

More information

Trajectory tracking & Path-following control

Trajectory tracking & Path-following control Cooperative Control of Multiple Robotic Vehicles: Theory and Practice Trajectory tracking & Path-following control EECI Graduate School on Control Supélec, Feb. 21-25, 2011 A word about T Tracking and

More information

Quadrotor Modeling and Control

Quadrotor Modeling and Control 16-311 Introduction to Robotics Guest Lecture on Aerial Robotics Quadrotor Modeling and Control Nathan Michael February 05, 2014 Lecture Outline Modeling: Dynamic model from first principles Propeller

More information

Attitude Estimation for Augmented Reality with Smartphones

Attitude Estimation for Augmented Reality with Smartphones Attitude Estimation for Augmented Reality with Smartphones Thibaud Michel Pierre Genevès Hassen Fourati Nabil Layaïda Université Grenoble Alpes, INRIA LIG, GIPSA-Lab, CNRS June 13 th, 2017 http://tyrex.inria.fr/mobile/benchmarks-attitude

More information

Information Formulation of the UDU Kalman Filter

Information Formulation of the UDU Kalman Filter Information Formulation of the UDU Kalman Filter Christopher D Souza and Renato Zanetti 1 Abstract A new information formulation of the Kalman filter is presented where the information matrix is parameterized

More information

Distributed Data Fusion with Kalman Filters. Simon Julier Computer Science Department University College London

Distributed Data Fusion with Kalman Filters. Simon Julier Computer Science Department University College London Distributed Data Fusion with Kalman Filters Simon Julier Computer Science Department University College London S.Julier@cs.ucl.ac.uk Structure of Talk Motivation Kalman Filters Double Counting Optimal

More information

Linear Combinations of Optic Flow Vectors for Estimating Self-Motion a Real-World Test of a Neural Model

Linear Combinations of Optic Flow Vectors for Estimating Self-Motion a Real-World Test of a Neural Model Linear Combinations of Optic Flow Vectors for Estimating Self-Motion a Real-World Test of a Neural Model Matthias O. Franz MPI für biologische Kybernetik Spemannstr. 38 D-72076 Tübingen, Germany mof@tuebingen.mpg.de

More information

Tactical Ballistic Missile Tracking using the Interacting Multiple Model Algorithm

Tactical Ballistic Missile Tracking using the Interacting Multiple Model Algorithm Tactical Ballistic Missile Tracking using the Interacting Multiple Model Algorithm Robert L Cooperman Raytheon Co C 3 S Division St Petersburg, FL Robert_L_Cooperman@raytheoncom Abstract The problem of

More information

Refinements to the General Methodology Behind Strapdown Airborne Gravimetry

Refinements to the General Methodology Behind Strapdown Airborne Gravimetry Refinements to the General Methodology Behind Strapdown Airborne Gravimetry AE 8900 MS Special Problems Report Space Systems Design Lab (SSDL) Guggenheim School of Aerospace Engineering Georgia Institute

More information

Extension of Farrenkopf Steady-State Solutions with Estimated Angular Rate

Extension of Farrenkopf Steady-State Solutions with Estimated Angular Rate Extension of Farrenopf Steady-State Solutions with Estimated Angular Rate Andrew D. Dianetti and John L. Crassidis University at Buffalo, State University of New Yor, Amherst, NY 46-44 Steady-state solutions

More information

Application of Data Fusion to Aerial Robotics. Paul Riseborough March 24, 2015

Application of Data Fusion to Aerial Robotics. Paul Riseborough March 24, 2015 Application of Data Fusion to Aerial Robotics Paul Riseborough March 24, 2015 Outline Introduction to APM project What is data fusion and why do we use it? Where is data fusion used in APM? Development

More information

Navigation System for Reusable Launch Vehicle

Navigation System for Reusable Launch Vehicle AAS 8-76 Navigation System for Reusable Launch Vehicle Markus Schlotterer German Aerospace Center (DLR) 31 st ANNUAL AAS GUIDANCE AND CONTROL CONFERENCE February 1-6, 28 Breckenridge, Colorado Sponsored

More information

Research Article Design of an Attitude and Heading Reference System Based on Distributed Filtering for Small UAV

Research Article Design of an Attitude and Heading Reference System Based on Distributed Filtering for Small UAV Mathematical Problems in Engineering Volume 13 Article ID 498739 8 pages http://dx.doi.org/1.1155/13/498739 Research Article Design of an Attitude and Heading System Based on Distributed Filtering for

More information

An Adaptive Filter for a Small Attitude and Heading Reference System Using Low Cost Sensors

An Adaptive Filter for a Small Attitude and Heading Reference System Using Low Cost Sensors An Adaptive Filter for a Small Attitude and eading Reference System Using Low Cost Sensors Tongyue Gao *, Chuntao Shen, Zhenbang Gong, Jinjun Rao, and Jun Luo Department of Precision Mechanical Engineering

More information

Delayed Fusion of Relative State Measurements by Extending Stochastic Cloning via Direct Kalman Filtering

Delayed Fusion of Relative State Measurements by Extending Stochastic Cloning via Direct Kalman Filtering Delayed Fusion of Relative State Measurements by Extending Stochastic Cloning via Direct Kalman Filtering Ehsan Asadi and Carlo L Bottasso Department of Aerospace Science and echnology Politecnico di Milano,

More information

NAVIGATION OF THE WHEELED TRANSPORT ROBOT UNDER MEASUREMENT NOISE

NAVIGATION OF THE WHEELED TRANSPORT ROBOT UNDER MEASUREMENT NOISE WMS J. Pure Appl. Math. V.7, N.1, 2016, pp.20-27 NAVIGAION OF HE WHEELED RANSPOR ROBO UNDER MEASUREMEN NOISE VLADIMIR LARIN 1 Abstract. he problem of navigation of the simple wheeled transport robot is

More information

Model Reference Adaptive Control of Underwater Robotic Vehicle in Plane Motion

Model Reference Adaptive Control of Underwater Robotic Vehicle in Plane Motion Proceedings of the 11th WSEAS International Conference on SSTEMS Agios ikolaos Crete Island Greece July 23-25 27 38 Model Reference Adaptive Control of Underwater Robotic Vehicle in Plane Motion j.garus@amw.gdynia.pl

More information

Using the Kalman Filter for SLAM AIMS 2015

Using the Kalman Filter for SLAM AIMS 2015 Using the Kalman Filter for SLAM AIMS 2015 Contents Trivial Kinematics Rapid sweep over localisation and mapping (components of SLAM) Basic EKF Feature Based SLAM Feature types and representations Implementation

More information

CS 532: 3D Computer Vision 6 th Set of Notes

CS 532: 3D Computer Vision 6 th Set of Notes 1 CS 532: 3D Computer Vision 6 th Set of Notes Instructor: Philippos Mordohai Webpage: www.cs.stevens.edu/~mordohai E-mail: Philippos.Mordohai@stevens.edu Office: Lieb 215 Lecture Outline Intro to Covariance

More information