Relative Position Sensing by Fusing Monocular Vision and Inertial Rate Sensors


Proceedings of ICAR 2003, the 11th International Conference on Advanced Robotics, Coimbra, Portugal, June 30 - July 3, 2003

Andreas Huster and Stephen M. Rock
Aerospace Robotics Lab, Stanford University, Stanford, CA 94305, USA
{huster,rock}@arl.stanford.edu

Abstract

Presented is a system that fuses monocular vision with inertial rate sensor measurements to generate an estimate of relative position between a moving observer and a stationary object. Although vision-only solutions are possible, fusing inertial rate sensors generates a sensing strategy that is robust to vision drop-outs and is able to determine relative position with minimal requirements on the vision system. However, the combination of limited observability and significant nonlinearities, which are inherent to this sensing strategy, creates an estimation problem which cannot be solved with a standard Extended Kalman Filter (EKF). This paper describes an estimation algorithm that is uniquely adapted to this sensor fusion problem and presents results from laboratory experiments with a manipulator system. For these experiments, the estimator was implemented as part of a closed-loop control system that can perform an object pick-up task.

1 Introduction

Relative position sensing and control are core requirements for a wide range of autonomous, intervention-capable robots in terrestrial, space and underwater environments. This paper focuses on a new relative position sensing system that fuses bearing information from monocular vision with inertial rate sensor measurements to estimate relative position, velocity and orientation. The estimate is computed in real time and is suitable for closed-loop control. A feature of this system is that it relies on very simple vision requirements: tracking a single feature in a single camera image.
Real environments exacerbate the typical challenges of identifying good visual features, establishing feature correspondences and robust tracking. Reducing the vision requirements by integrating inertial rate sensors has the potential to produce a more robust sensing system than typical vision-only techniques.

The sensing strategy takes advantage of the complementary nature of monocular vision measurements, inertial rate sensor measurements, and observer motion (in this paper, "observer" refers to the moving sensor platform, not the estimation algorithm). The motion of the camera between successive images generates a baseline for range computations by triangulation. Inertial rate sensors, whose acceleration and angular rate measurements can be integrated to obtain velocity, position and orientation, can account for the 6-DOF motion of the camera along this baseline. When these measurements are fused, the relative position between the observer and the object can be computed. A key benefit of this system is that the inertial rate sensors continue to maintain a useful estimate of relative position during vision drop-outs (e.g., occlusions, detection errors, lack of correspondence). Furthermore, both inertial rate sensors (for navigation) and monocular vision systems (for science purposes) are already common sensors on many field robots.

However, inertial rate sensors suffer from bias and random noise errors. These lead to unbounded drift during a simple integration of the measurements to account for observer motion. The fusion algorithm estimates these inertial rate measurement errors. This is especially important for low-cost inertial rate sensors, which are subject to significant drift errors. Inherent to this sensing strategy based on a single bearing measurement is a combination of limited observability and significant nonlinearities.
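To make the triangulation baseline concrete, here is a toy bearing-only sketch (the function name and all numbers are illustrative assumptions, not from the paper): two bearings separated by a known transverse translation fix the range, whereas motion straight toward the feature leaves the bearings parallel and the range unobservable.

```python
import numpy as np

# Toy bearing-only triangulation (illustrative only). Two bearings taken
# across a known camera motion fix the feature range, provided the
# motion has a transverse component.

def range_from_bearings(p, q0, q1):
    """Distance from q1 to feature p, using only the unit bearings
    observed at the two camera positions q0 and q1."""
    b0 = (p - q0) / np.linalg.norm(p - q0)    # bearing at the first pose
    b1 = (p - q1) / np.linalg.norm(p - q1)    # bearing at the second pose
    # Intersect the rays: q0 + t0*b0 = q1 + t1*b1 (least squares).
    A = np.column_stack([b0, -b1])
    t0, t1 = np.linalg.lstsq(A, q1 - q0, rcond=None)[0]
    return t1                                  # range from the second pose

p = np.array([0.0, 0.0, 1.0])                  # feature ahead of the camera

# Transverse motion: the bearings differ, so the range is recovered.
r_transverse = range_from_bearings(p, np.zeros(3), np.array([0.2, 0.0, 0.0]))
print(r_transverse)                            # ≈ 1.020 = |p - q1|

# Motion straight toward the feature would make b0 parallel to b1 and A
# rank deficient, so the range is unobservable -- as noted in the text.
```

The fusion with inertial sensors, in effect, supplies the known camera motion (the baseline) that this sketch takes for granted.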
The Extended Kalman Filter (EKF), a standard nonlinear estimation technique, performs poorly in this context, because it relies on linearizations of the nonlinear sensor and process models in order to apply the Kalman Filter equations. The EKF is unable to account for the uncertainty that results from these linearizations when the states about which the models are linearized are uncertain. This is a critical issue for this estimation problem. Consequently, the EKF underpredicts the estimate uncertainty and generates estimates with significant biases. A new estimator design, which handles nonlinearities without linearizing them, has been created to solve this problem.

However, the ability of this estimator to converge depends on the trajectory of the observer. For example, during camera motion directly toward the feature, the estimator has no new information with which to improve its range estimate. Only camera motions transverse to the feature direction provide observability for the range estimate. However, motion directly toward the object is typically required to complete a manipulation task. Furthermore, moving toward the object improves the effectiveness of future transverse motion.

In previous work, we addressed and demonstrated techniques to develop estimators that will be effective for these applications. In [1], we demonstrated that the EKF fails to provide adequate solutions for this sensor fusion problem and explored an alternative approach for a simplified (2D) problem. In [2], we presented a laboratory testbed and demonstration task to evaluate the new sensing system. In [3], we outlined an algorithm to solve the complete estimation problem and described the first application of this sensing strategy to perform a useful manipulation task.

In this paper, we extend these results and explore the performance of the algorithm in the presence of significant process noise. In particular, our goal is to predict the expected accuracy of the estimator when it is applied to an AUV (our target AUV is the OTTER vehicle shown in Figure 1 and described in [4]). Our approach is to use a laboratory testbed manipulator configured to obey equations of motion characteristic of an AUV operating in an ocean environment.

Figure 1: OTTER, a Small Underwater Vehicle Operated in MBARI's Test Tank

Section 2 defines the sensor fusion problem and presents models for the vision and inertial rate sensor measurements, the dynamics, and disturbances. Section 3 describes the estimator design. Initial experiments, conducted in the laboratory with a fixed-base 7-DOF manipulator arm, are described in Section 4. This platform provides a truth measurement and can be used to investigate competing approaches, to simulate different disturbance environments, and to quantify performance. Section 5 demonstrates the effectiveness of the estimator design by combining the estimator with a trajectory and a controller to perform a simple manipulation task.

1.1 Related Work

Other relative position sensing systems that fuse measurements from a bearing sensor and inertial rate sensors have been discussed by Kaminer et al. [5] (airplane tracking a ship) and by Gurfil and Kasdin [6] (missile intercepting a target). These authors are also concerned with finding alternatives to the EKF. Both papers present simulations of systems with known inertial rate sensor biases, known observer orientation, and perfect gravity compensation. These assumptions do not hold for our work with low-cost inertial rate sensors.

2 Sensor Fusion Problem

2.1 Estimation Scenario

Figure 2 shows a stationary object and a moving observer, composed of a camera and inertial rate sensors. The camera is tracking the object and the inertial rate sensors are reporting the observer's acceleration and angular velocity. The purpose of the estimator is to determine the relative position and velocity between the object and the observer.

Figure 2: Geometry of the Estimation Problem (inertial frame N, feature position p, observer position q, relative position r, observer body frame B)

Frame N is the inertial frame and p indicates the position of a feature on the stationary object tracked by the camera. q is the position of the observer body frame B. To simplify the discussion, we assume that the camera and the inertial rate sensors are all co-located at q. In practice, any known position offset between the sensors and the origin of frame B can be incorporated into the algorithm. The z-axis of the body frame is aligned with the optical axis of the camera. R is the rotation matrix that transforms a vector resolved in inertial coordinates to the body frame. ω is the associated rotational velocity resolved in the body frame. A leading superscript (e.g., $^B x$) indicates that the vector is resolved in a specific frame.

The position of the feature as seen by the observer is $r = p - q$. We assume that the feature is stationary in the inertial frame, so $\dot p = \ddot p = 0$. Therefore, $\dot r = -\dot q$ and $\ddot r = -\ddot q$. Because of this assumption, a measurement of the observer acceleration $\ddot q$ is useful for estimating the relative feature position r.

2.2 System Block Diagram

The block diagram in Figure 3 shows a system that uses the sensing strategy to perform a control task.
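Before detailing the block diagram, the stationary-feature relation ($\ddot r = -\ddot q$) already shows why the inertial sensors can carry the estimate through a vision drop-out: the relative position can be dead-reckoned from acceleration alone. A minimal sketch (illustrative function and numbers, not the paper's filter; it ignores rotation, gravity, bias and noise):

```python
import numpy as np

# Minimal dead-reckoning sketch (illustrative only). With a stationary
# feature, r = p - q gives r_ddot = -q_ddot, so the relative position
# can be propagated from accelerometer data alone while vision is lost.

def propagate_relative(r0, r_dot0, q_ddot_seq, dt):
    """Euler-integrate r_ddot = -q_ddot over a sequence of samples."""
    r, r_dot = r0.astype(float), r_dot0.astype(float)
    for q_ddot in q_ddot_seq:
        r_dot = r_dot - q_ddot * dt     # relative velocity update
        r = r + r_dot * dt              # relative position update
    return r

# Observer accelerates at 1 m/s^2 along x for 1 s (100 steps of 10 ms):
acc = [np.array([1.0, 0.0, 0.0])] * 100
r = propagate_relative(np.array([0.0, 0.0, 2.0]), np.zeros(3), acc, 0.01)
# The observer has moved ~0.5 m along +x, so r_x has drifted to ~ -0.5 m
# while r_z stays at 2 m; a later vision fix corrects the residual drift.
```

In the real system the same integration runs on biased, noisy measurements, which is exactly why the filter below also estimates the sensor errors.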
The dashed box represents the environment and includes the moving observer, the stationary object, the camera, the inertial rate sensors and disturbance forces. This environment can be described by the state vector x. The sensor measurements z together with the observer command u are inputs to the estimator, which computes an estimate $\hat x$ of the state. The difference between this estimate and the desired state $x_{des}$, specified by the trajectory, is used in the controller to compute the observer commands.

Figure 3: Block Diagram of a System Based on the Sensing Strategy

2.3 Sensor Models

The vision measurement $z_s$ is the projection of r onto the image plane, and is modeled as follows:

$$ S = [\,S_x\ \ S_y\ \ S_z\,]^T = R\,{}^N r = R\,(p - q) \qquad (1) $$

$$ z_s = [\,s_x\ \ s_y\,]^T + n_s = \frac{1}{S_z}\,[\,S_x\ \ S_y\,]^T + n_s \qquad (2) $$

$n_s$ is zero-mean white Gaussian measurement noise. The camera measurements are normalized so the effective focal length is 1.

The accelerometer measures specific force, which includes the acceleration $\ddot q$ of the observer and a component due to gravity. The measurement $z_a$ also includes sensor biases $b_a$ and zero-mean white Gaussian sensor noise $n_a$:

$$ z_a = \alpha\,R\,(\ddot q + g) + b_a + n_a \qquad (3) $$

$g = [\,0\ \ 0\ \ g\,]^T$ is the acceleration due to gravity in inertial coordinates. $\alpha \in \mathbb{R}^{3 \times 3}$ are scale factors induced by the sensor.

The rate gyro measurement includes the rotational velocity ω of the observer, sensor biases $b_\omega$, and zero-mean white Gaussian sensor noise $n_\omega$:

$$ z_\omega = \omega + b_\omega + n_\omega \qquad (4) $$

Random walk models are used to capture the dynamics of the inertial rate sensor parameters. $n_{b_a}$, $n_{b_\omega}$ and $n_\alpha$ are zero-mean white Gaussian driving terms:

$$ \frac{d}{dt}\,b_a = n_{b_a} \qquad (5) $$

$$ \frac{d}{dt}\,b_\omega = n_{b_\omega} \qquad (6) $$

$$ \frac{d}{dt}\,\alpha = n_\alpha \qquad (7) $$

2.4 Process Model

A linear drag model relates the control input u to observer velocity:

$$ \frac{d}{dt}\,\dot q = \ddot q = R^T u + \gamma\,(w - \dot q) \qquad (8) $$

u is known and represents the actuator commands in observer body coordinates. w represents the water velocity and can be interpreted as the source for a disturbance $d = \gamma w$. $d$ represents unknown external forces on the observer as well as errors in the actuator model. It is modeled by a first-order Gauss-Markov process:

$$ \frac{d}{dt}\,d = -\frac{1}{\tau}\,d + n_d \qquad (9) $$

where $n_d$ is zero-mean white Gaussian noise.

3 Estimator Design

The sensor fusion algorithm requires a nonlinear estimator, like the Extended Kalman Filter (EKF).
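The EKF failure mode discussed in this paper can be seen in a few lines: push an uncertain range through the projection nonlinearity of Eq. (2) and compare the true (Monte Carlo) spread with the variance an EKF-style linearization about the mean would predict. Everything below (the numbers, the noise-free geometry) is a toy illustration, not the paper's model:

```python
import numpy as np

# Toy illustration of why linearizing s = X / Z is dangerous when the
# range Z is very uncertain: the Jacobian at the mean ignores the
# curvature of 1/Z, understating the variance and hiding the bias.
rng = np.random.default_rng(1)

X = 0.5                        # transverse feature coordinate, assumed known
Z_mean, Z_std = 1.0, 0.4       # very uncertain range, as early in estimation

Z = rng.normal(Z_mean, Z_std, 200_000)
Z = Z[Z > 0.1]                 # keep the feature in front of the camera
s_mc = X / Z                   # full nonlinear propagation

# Linearization: s ≈ X/Z_mean + (-X/Z_mean**2) * (Z - Z_mean)
var_lin = (X / Z_mean**2) ** 2 * Z_std**2

print(np.var(s_mc), var_lin)   # the Monte Carlo variance is far larger,
print(np.mean(s_mc))           # and the mean is biased away from X/Z_mean
```

This underprediction of uncertainty and the accompanying bias are precisely the symptoms attributed to the EKF in the text.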
However, while the EKF is a useful solution for many nonlinear estimation problems, we have shown in [1] that the direct application of the EKF to this problem fails to generate an adequate solution. The EKF fails because it relies on linearizations of the sensor and process models in order to apply the Kalman Filter equations. When the states about which these models are linearized are uncertain, the linearization can represent significant uncertainty, which is not accounted for in the EKF. In this problem, the object range is a state that can be very uncertain and also one that is involved in the linearizations. As a result, the EKF underpredicts the estimate uncertainty, which leads to biased estimates.

The estimator design is based on the Kalman Filter framework, but it uses a specific state representation, x, that leads to a linear sensor model. Insisting on a linear sensor model transfers all of the nonlinearities into the state dynamics and avoids linearizations in the measurement update. The time update, which captures these dynamics, is implemented with the unscented transform [7, 8], which does not require linearization. The motivation for this design and additional detail are presented in [3, 9].

Implementing a measurement update without linearization requires a state representation that leads to a linear sensor model $z = Hx$, where H is constant. Therefore, terms that appear in the measurement models, such as $s_x$, $s_y$, $Rg$, $R\dot q$, and $b_a$, have to appear explicitly in the state vector. The representation of feature range $S_z$ presents an additional design choice. Because it does not appear in the sensor model, it is not constrained by the requirement of a linear measurement update and we are free to choose a convenient representation. We represent feature range with $\zeta = 1/S_z$, which leads to low-order polynomials as the dominant nonlinearities in the state dynamics. Polynomials tend to result in more accurate estimator time updates than, for example, ratios, which are induced by representing range with $S_z$.

The design includes a simplifying assumption on the accelerometer model. We assume that $\alpha = \alpha I_{3 \times 3}$ and that the scale factor variation can be ignored for the contribution due to real accelerations, which is much smaller than the contribution from gravity. This assumption has a small impact on overall accuracy and greatly simplifies the estimator design. The modified accelerometer model is

$$ z_a = R\,(\ddot q + \alpha g) + b_a + n_a. \qquad (10) $$

One important remaining source of estimate error that is not handled by this design is the assumption that the states can be modeled as Gaussian random variables, which is not necessarily true for nonlinear problems. If the estimate uncertainties are very large, this assumption can cause problems. In that case, a more elaborate solution, like a Particle Filter design [10], might be necessary. However, for the moderate uncertainties encountered by this application, this design has produced good results.

Figure 4: RRC K-1607 7-DOF Manipulator

The estimator state vector is given by:

$$ x = [\,s_x\ \ s_y\ \ \zeta\ \ v\ \ a\ \ b_a\ \ Z\ \ \psi\ \ b_\omega\,]^T, \quad v = [\,v_x\ \ v_y\ \ v_z\,]^T = R\,\dot q, \quad a = R\,d, \quad Z = [\,Z_x\ \ Z_y\ \ Z_z\,]^T = R\,[\,0\ \ 0\ \ \alpha\,]^T \qquad (11) $$

Z represents the direction of gravity in the body frame modified by the accelerometer scale factor α. ψ is the heading, or rotation about Z. Together, Z and ψ define the observer orientation. A similar representation for attitude is described in [11].

The corresponding state dynamics are given by (5), (6), and

$$ \dot s_x = -v_x \zeta + s_x v_z \zeta + s_x s_y \omega_x - (1 + s_x^2)\,\omega_y + s_y \omega_z \qquad (12) $$

$$ \dot s_y = -v_y \zeta + s_y v_z \zeta + (1 + s_y^2)\,\omega_x - s_x s_y \omega_y - s_x \omega_z \qquad (13) $$

$$ \dot \zeta = v_z \zeta^2 + \zeta s_y \omega_x - \zeta s_x \omega_y \qquad (14) $$

$$ \dot v = u + a - \gamma v - \omega \times v \qquad (15) $$

$$ \dot a = -\frac{1}{\tau}\,a - \omega \times a + n_d \qquad (16) $$

$$ \dot Z = -\omega \times Z + \frac{1}{\alpha}\,Z n_\alpha \qquad (17) $$

$$ \dot \psi = \frac{\alpha}{Z_y^2 + Z_z^2}\,[\,0\ \ Z_y\ \ Z_z\,]\,\omega \qquad (18) $$

$$ \omega = [\,\omega_x\ \ \omega_y\ \ \omega_z\,]^T = z_\omega - b_\omega - n_\omega \qquad (19) $$

Note that the rate gyro measurement $z_\omega$ is used directly in the dynamics. The noise sources $n_\omega$, $n_d$, $n_{b_a}$, $n_{b_\omega}$ and $n_\alpha$ represent process noise.
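The image-feature dynamics (12)-(14) transcribe directly into code; the sketch below evaluates them at arbitrary test values (none of the numbers come from the paper):

```python
# Direct transcription of the image-feature dynamics (12)-(14); the
# test values below are arbitrary, chosen only to exercise the terms.

def feature_dynamics(s_x, s_y, zeta, v, w):
    """Time derivatives of (s_x, s_y, zeta) for body-frame velocity
    v = (v_x, v_y, v_z) and angular rate w = (w_x, w_y, w_z)."""
    v_x, v_y, v_z = v
    w_x, w_y, w_z = w
    ds_x = -v_x*zeta + s_x*v_z*zeta + s_x*s_y*w_x - (1 + s_x**2)*w_y + s_y*w_z
    ds_y = -v_y*zeta + s_y*v_z*zeta + (1 + s_y**2)*w_x - s_x*s_y*w_y - s_x*w_z
    dzeta = v_z*zeta**2 + zeta*(s_y*w_x - s_x*w_y)
    return ds_x, ds_y, dzeta

# Pure forward motion (v_z = 0.5, no rotation): the feature slides
# outward along s_x and the inverse range zeta grows as range closes.
print(feature_dynamics(0.1, 0.0, 1.0, (0.0, 0.0, 0.5), (0.0, 0.0, 0.0)))
```

Note the polynomial form: every right-hand side is a low-order polynomial in the states, which is exactly the property the $\zeta = 1/S_z$ representation was chosen to obtain.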
In practice, ψ is updated with a difference equation that avoids the singularity when $Z_y = Z_z = 0$. We use the square-root version of the unscented transform [12] to propagate the estimate and covariance forward in time. We chose this algorithm based on accuracy, ease of implementation, and computational efficiency. The camera measurements $z_s$ and the accelerometer measurements $z_a$ are incorporated with a linear measurement update, which is implemented with an array algorithm [13]. The combination of an array algorithm for the measurement update and the square-root version of the unscented transform for the time update leads to an algorithm that operates on and stores only the square root of the covariance, and not the actual covariance, which results in better numerical properties and reduced computational cost.

4 Laboratory Testbed

Figure 4 shows the experimental hardware used in this research. It is a K-1607 manipulator built by Robotics Research Corporation (http://www.robotics-research.com). It is a 7-DOF, kinematically redundant manipulator whose endpoint can be moved to any position and orientation in its workspace. We have attached a camera and inertial rate sensors (a DMU-6X Inertial Measurement Unit by Crossbow, http://www.xbow.com) to the endpoint of the manipulator in order to demonstrate the estimator performance in the context of real sensor measurements. All of the manipulator joints are instrumented with encoders, so that the exact position of the endpoint can also be computed.

We have developed a simple robotic task, picking up an object, to demonstrate the relative position estimator. The object is a large plastic cup which the robot can pick up using a pneumatic gripper. The manipulator endpoint, with the gripper, camera, and inertial rate sensors, as well as the cup, are shown in Figure 5. An LED on the handle of the cup is the only feature that the camera can see. The relative position estimate obtained by fusing the bearing to this LED with inertial rate sensor measurements is used to control the motion of the robot. This experiment is based on the system in Figure 3. A more detailed description of this experiment, as well as a discussion on trajectories for this task, appears in [3].

Figure 5: Manipulator Endpoint with the Camera, Inertial Rate Sensors, and Gripper; and the Cup with the Infrared LED

5 Results

This section presents the performance of the estimator in the presence of disturbances scaled to be typical of underwater environments. Specifically, the time constant was estimated to be τ = 60 s and the standard deviation of d was estimated to be σ(d) = 0.012 m/s². This leads to σ(n_d) = 0.0022. Note that the manipulator has been programmed to emulate the motion of an underwater vehicle subject to disturbances.

Figure 6: Observer Position for Twenty Runs with Different Initial Conditions
Before each run of the experiment, the observer (manipulator endpoint) was moved to a random initial position $q_0$ and the estimator was reset to the initial estimate $\hat x_0$, which assumes that the initial target range is 0.65 m and that all other states, including inertial rate sensor biases, are unknown. The system uses the current state estimate to control the observer along a precomputed trajectory, which defines the desired relative position and observer orientation. Throughout the experiment, the actual feature position p remains constant.

Figure 6 overlays the observer position q (a truth value computed from the manipulator joint encoders) for twenty runs, each with different initial observer positions $q_0$. These range from 0.50 to 0.02 m in x, 0.11 to 0.45 m in y, and 0.51 to 0.89 m in z. This plot shows that the controller, which has access only to the relative position estimate and not the truth measurement q, is successful in moving the observer from various initial positions toward a desired position near the object. Table 1 shows the mean and standard deviation of the final observer position for these runs. Both the plot and the table show that the x-coordinate of position, which corresponds to the optical axis of the camera, is the most difficult to estimate.

Table 1: Mean and Standard Deviation of Final Observer Position With Artificial Disturbance

        mean (m)    standard deviation (m)
  q_x   0.230       0.032
  q_y   0.343       0.011
  q_z   0.715       0.009

This experiment shows that the estimator, implemented as part of a real-time control system, can determine relative position by fusing monocular vision measurements of a single feature and inertial rate sensor measurements.

Table 2: Standard Deviation of Final Observer Position Without Artificial Disturbance (previously reported in [3])

        standard deviation (m)
  q_x   0.006
  q_y   0.003
  q_z   0.002

Further, the estimator performs well in the context of real sensors and realistic disturbance environments. In particular, these results predict an accuracy of 3.2 cm when the estimator is applied to the OTTER AUV experiments. For comparison, Table 2 reproduces the standard deviations of the final observer position that were reported in [3] for the case when no artificial disturbances were used.

6 Conclusions

This paper discusses a sensing strategy that fuses monocular vision measurements of a stationary object with inertial rate sensor measurements to estimate relative position. This capability satisfies a core requirement for many autonomous, intervention-capable robots: a robust, real-time estimate of the relative position between a moving observer and a stationary object. The sensing strategy inherits the advantages of vision in unstructured environments, but provides greater robustness than vision-only solutions and can operate with only a single trackable feature.

We have defined this sensor strategy, outlined an estimation algorithm to solve the sensor fusion problem, and performed laboratory experiments to evaluate the approach. The experimental results demonstrate the effectiveness of this sensing strategy in the context of real sensors and typical disturbances. We have shown that the estimator can be used as part of a closed-loop control system for a manipulator arm performing an object pick-up task. Our future work will focus on demonstrating this sensor strategy on underwater vehicles.

Acknowledgments

This research was supported in part by the Packard Foundation under Grants 98-3816 and 98-6228.

References

[1] Andreas Huster and Stephen M. Rock, "Relative position estimation for intervention-capable AUVs by fusing vision and inertial measurements," in Proceedings of the 12th International Symposium on Unmanned Untethered Submersible Technology, Durham, NH, August 2001, Autonomous Undersea Systems Institute.

[2] Andreas Huster and Stephen M. Rock, "Relative position estimation for manipulation tasks by fusing vision and inertial measurements," in Proceedings of the Oceans 2001 Conference, Honolulu, November 2001, MTS/IEEE, vol. 2, pp. 1025-1031.

[3] Andreas Huster, Eric W. Frew, and Stephen M. Rock, "Relative position estimation for AUVs by fusing bearing and inertial rate sensor measurements," in Proceedings of the Oceans 2002 Conference, Biloxi, MS, October 2002, MTS/IEEE, vol. 3, pp. 1857-1864.

[4] H. H. Wang, S. M. Rock, and M. J. Lee, "OTTER: The design and development of an intelligent underwater robot," Autonomous Robots, vol. 3, no. 2-3, pp. 297-320, June-July 1996.

[5] Isaac Kaminer, Wei Kang, Oleg Yakimenko, and Antonio Pascoal, "Application of nonlinear filtering to navigation system design using passive sensors," IEEE Transactions on Aerospace and Electronic Systems, vol. 37, no. 1, pp. 158-172, January 2001.

[6] Pini Gurfil and N. Jeremy Kasdin, "Two-step optimal estimator for three dimensional target tracking," in Proceedings of the 2002 American Control Conference, Anchorage, AK, May 2002, vol. 1, pp. 209-214.

[7] Simon Julier and Jeffrey K. Uhlmann, "Data fusion in nonlinear systems," in Handbook of Multisensor Data Fusion, David L. Hall and James Llinas, Eds., chapter 13. CRC Press, 2001.

[8] Simon Julier, Jeffrey Uhlmann, and Hugh F. Durrant-Whyte, "A new method for the nonlinear transformation of means and covariances in filters and estimators," IEEE Transactions on Automatic Control, vol. 45, no. 3, pp. 477-482, March 2000.

[9] Andreas Huster, Relative Position Sensing by Fusing Monocular Vision and Inertial Rate Sensors, Ph.D. thesis, Stanford University, Stanford, California, 2003. Also available at http://arl.stanford.edu/~huster/dissertation/.

[10] A. Doucet, J. F. G. de Freitas, and N. J. Gordon, Eds., Sequential Monte Carlo Methods in Practice, Springer Verlag, New York, 2001.

[11] Henrik Rehbinder and Xiaoming Hu, "Drift-free attitude estimation for accelerated rigid bodies," in IEEE International Conference on Robotics and Automation, Seoul, South Korea, May 2001, vol. 4, pp. 4244-4249.

[12] Rudolph van der Merwe and Eric A. Wan, "The square-root unscented Kalman filter for state and parameter-estimation," in International Conference on Acoustics, Speech, and Signal Processing, Salt Lake City, Utah, May 2001, IEEE, pp. 3461-3464.

[13] Thomas Kailath, Ali H. Sayed, and Babak Hassibi, Linear Estimation, Prentice Hall, 2000.