DISTRIBUTED ORBIT DETERMINATION VIA ESTIMATION CONSENSUS


AAS 11-117

Ran Dai, Unsik Lee, and Mehran Mesbahi

This paper proposes an optimal algorithm for distributed orbit determination using discrete observations provided by multiple tracking stations in a connected network. The objective is to increase the precision of estimation through communication and cooperation among the tracking stations. We focus on the dynamical approach, considering a simplified low-Earth-orbit satellite model with random perturbations introduced into the observation data, which is provided as ranges between the tracking stations and the satellite. Dual decomposition, combined with the subgradient method, is applied to decompose the estimation task into a series of subproblems that are then solved individually at each tracking station while still achieving global optimality.

INTRODUCTION

With the development of the Global Positioning System (GPS), satellite navigation has brought immense benefit and convenience to the modern world. However, without accurate knowledge of the GPS satellite locations, users, who determine their own position relative to the satellites, would be completely lost.[1] Generally, there are two approaches to determining a satellite orbit: the geometric approach and the dynamical approach.[2] The geometric approach requires high-accuracy instruments collecting data at a high sampling rate, and determines the orbit purely from the collected data without modeling the satellite's motion. The dynamical approach, in contrast, adopts a two-body model of the forces acting on the satellite. The well-known batch and sequential estimation algorithms[3] are applied in the dynamical approach to estimate the states or unknown parameters in the nonlinear system equations of the orbit determination problem.
In this paper, we consider multiple tracking stations in a connected network that monitor the satellite location cooperatively, improving the precision of orbit determination without centralized data collection and processing. Distributed estimation techniques are common in process control, signal processing, and information systems.[4, 5, 6] A subset of these efforts has focused on integrating the measurements from all sensors into a common estimate without a centralized processor. For example, the well-known information filter computes the local estimates asynchronously and assimilates the innovation covariance in inverse-covariance form to simplify the resulting algebraic expressions. Nies et al.[7] applied a data fusion strategy to distributed orbit determination in which each sensor in a fully connected network processes its own estimates before communicating with the other nodes. A summary of centralized, decentralized, and hierarchic approaches to state estimation for formation flying spacecraft can be found in Reference 8. However, the consensus terms between neighboring sensors in the network have not been well examined in the information filter, and there is no guarantee that the common estimate minimizes the overall mean squared estimation error under consensus constraints. Insight into consensus-based distributed estimation can be found in References 9, 10, and 11. By introducing the consensus conditions as additional constraints on the objective function, the optimization problem becomes one of minimizing the estimation error while satisfying the consensus constraints. Stanković et al. applied the consensus strategy to an overlapping estimator[12] and designed a control law for vehicle formations based on decentralized consensus estimation.[13]

In order to solve the optimization problem with consensus constraints, the dual decomposition methodology and the associated subgradient method[14] are introduced. Dual decomposition has been applied to many optimization problems in which each component of the dual function is a strictly convex (or concave) problem that can be solved independently. The subgradient method, initiated by Shor[15] in the 1970s, then drives the dual function to a convergent solution by updating, at each iteration, the dual variables associated with the structure of the network. Samar et al.[10] applied dual decomposition to a distributed estimation problem in which parts of the observation data are tracked by multiple sensors.

Research Associate, Department of Aeronautics and Astronautics, University of Washington, Box 352400, Seattle, WA 98195-2400. Email: dairan@u.washington.edu
Research Assistant, Department of Aeronautics and Astronautics, University of Washington, Box 352400, Seattle, WA 98195-2400. Email: unsik@u.washington.edu
Professor, Department of Aeronautics and Astronautics, University of Washington, Box 352400, Seattle, WA 98195-2400. Email: mesbahi@u.washington.edu
Each subproblem in their formulation is a concave log-likelihood function with consensus constraints for the multiply tracked data. In this paper, we implement dual decomposition for the distributed orbit determination problem over a network, focusing on the global optimality properties of the distributed estimates. In the following, we first introduce the formulation of the distributed orbit determination problem and the conventional centralized estimation solution, then present the dual decomposition and subgradient methods and their application to distributed orbit determination, followed by a simulation example.

PROBLEM FORMULATION

The problem considered here is a two-dimensional, two-body problem under the assumption that there are no perturbations and the Earth is non-rotating. The dynamical equations of the satellite in Earth orbit are

    Ẋ = F(X):  ẋ = u,  ẏ = v,  u̇ = −µx / (x² + y²)^(3/2),  v̇ = −µy / (x² + y²)^(3/2)    (1)

where (x, y) is the satellite position in the Earth-fixed coordinate frame, u and v are the corresponding velocity components, and µ is the gravitational parameter. A set of m discrete observations is provided in the form of range measurements from n different tracking stations, labeled R_{i,k}, where i indexes the observation points and k is the tracking station serial number. Once the initial condition of the state variables in Eq. (1) is defined, we can proceed to calculate the states at the desired observation points by numerical integration. The observed range between the satellite and tracking station k at observation point i is then

    Y_{i,k} = G(X) = √((x_i − x_track_k)² + (y_i − y_track_k)²) + ε_i,  k = 1, …, n,  i = 1, …, m    (2)

where √((x_i − x_track_k)² + (y_i − y_track_k)²) corresponds to the actual range R_{i,k}, (x_track_k, y_track_k) is the position of tracking station k, and ε_i is the observation error at point i.

Due to view constraints, each tracking station can only obtain ranges, in sequence, while the satellite is inside its observable area, as illustrated in Figure 1. When the satellite leaves the view of the first tracking station, no range data are provided by that station, and the next tracking site supplies range data as soon as the satellite enters its view. We denote the set of observable points for tracking station k as m_k, with the sets m_1, …, m_n together covering all m observation points. Our objective is to estimate the initial position (x_0, y_0) and velocity (u_0, v_0) of the satellite at t_0 so as to minimize the sum of the squares of the observation errors; the states at any point thereafter can then be propagated from the initial state through the dynamical equations. Mathematically, the objective function is

    J = Σ_{k=1}^{n} Σ_{i∈m_k} (R_{i,k} − Y_{i,k})²    (3)

Figure 1. Tracking Station Network and View Constraints

This problem is analogous to a group of people watching a movie, each of whom can record only one section of it. After the movie ends, the people communicate with one another and together reconstruct the complete story. We assume the communication between the tracking stations is described by the topology of a graph G = (V, E), where the stations are represented by the node set V with cardinality n.
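As a concrete illustration of Eqs. (1) and (2), the sketch below propagates the planar two-body dynamics numerically and samples range measurements. The gravitational parameter is Earth's standard value, while the initial orbit and the station coordinates are hypothetical choices, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def two_body(t, X):
    """Planar two-body dynamics of Eq. (1): X = [x, y, u, v]."""
    x, y, u, v = X
    r3 = (x**2 + y**2) ** 1.5
    return [u, v, -MU * x / r3, -MU * y / r3]

def range_measurement(X, station):
    """Noise-free range from a ground station to the satellite, Eq. (2)."""
    return float(np.hypot(X[0] - station[0], X[1] - station[1]))

# Propagate a circular LEO-like orbit and sample the range every 15 s.
r0 = 7.0e6                              # initial orbital radius, m
X0 = [r0, 0.0, 0.0, np.sqrt(MU / r0)]   # circular-orbit initial state
t_obs = np.arange(0.0, 300.0, 15.0)
sol = solve_ivp(two_body, (0.0, 300.0), X0, t_eval=t_obs,
                rtol=1e-10, atol=1e-9)
station = (6.378e6, 0.0)                # hypothetical station position, m
ranges = [range_measurement(sol.y[:, i], station) for i in range(len(t_obs))]
```

The observation error ε_i would be added to each sampled range; it is omitted here to keep the sketch deterministic.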
If there is a communication channel between nodes v_k and v_h (k ≠ h), then the connection between v_k and v_h belongs to the edge set E, denoted {v_k, v_h} ∈ E. We also define the n × n adjacency matrix A with entries [A(G)]_{k,h} = 1 indicating the existence of communication between nodes v_k and v_h, and [A(G)]_{k,h} = 0 otherwise. Since the communication is assumed to be bidirectional, A is a symmetric matrix, with [A(G)]_{k,h} = [A(G)]_{h,k}.

CENTRALIZED ESTIMATION

In this problem formulation, the measurement data are collected in time sequence and the quantity to be estimated is the initial state, prior to the start of the observations. Therefore, a batch filter is applied to solve this type of historical observation and estimation problem. In the batch filter, the nonlinear system equations are linearized about the reference state X* by first-order Taylor expansion, so that the nonlinear system is approximated by

    Ẋ = F(X*) + [∂F/∂X]* (X − X*) + h.o.t.
    Y = G(X*) + [∂G/∂X]* (X − X*) + h.o.t. + ε    (4)

where the partial-derivative matrices are defined as

    A(t) = [∂F/∂X]*,  H̃(t) = [∂G/∂X]*    (5)

The deviation x between the unknown state X and the nominal state X* then obeys the linear equation

    ẋ = A(t) x    (6)

and the observation deviation y satisfies

    y = H̃(t) x + ε    (7)

By defining the state transition matrix Φ(t_i, t_0), the state at epoch t_0 can be mapped to another epoch t_i by

    x_i = Φ(t_i, t_0) x_0    (8)

where the state transition matrix is obtained by integrating the differential equation

    Φ̇(t, t_0) = A(t) Φ(t, t_0),  Φ(t_0, t_0) = I    (9)

Substituting Eq. (8) into Eq. (7) simplifies the observation deviation to

    y = H x_0 + ε    (10)

Figure 2 illustrates the data collection and calculation framework of the centralized estimator. After collecting the ranges R_{i,k} from all tracking stations, the centralized processor obtains H̃_i for every observation point t_i and assembles them into one measurement sensitivity matrix H as

    H = [H̃_1 Φ(t_1, t_0); H̃_2 Φ(t_2, t_0); … ; H̃_i Φ(t_i, t_0)]    (11)
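A minimal numerical sketch of Eqs. (11) and (12): a 1-D constant-velocity model stands in for the orbit dynamics, so that Φ(t_i, t_0) is available in closed form, the per-epoch blocks H̃ Φ(t_i, t_0) are stacked into one sensitivity matrix, and the normal equations are solved. All matrices and noise levels below are illustrative.

```python
import numpy as np

def phi(t, t0):
    """Closed-form transition matrix for x = [position, velocity],
    i.e. the linear system xdot = A x with A = [[0, 1], [0, 0]]."""
    return np.array([[1.0, t - t0], [0.0, 1.0]])

H_tilde = np.array([[1.0, 0.0]])      # each epoch measures position only
t0, times = 0.0, [1.0, 2.0, 3.0, 4.0]

x0_true = np.array([5.0, -0.5])       # true initial state deviation
rng = np.random.default_rng(0)
y = np.concatenate([H_tilde @ phi(t, t0) @ x0_true + rng.normal(0.0, 1e-3, 1)
                    for t in times])

# Stacked measurement sensitivity matrix of Eq. (11).
H = np.vstack([H_tilde @ phi(t, t0) for t in times])

# Best linear unbiased estimate of Eq. (12): xhat = (H^T H)^{-1} H^T y.
x0_hat = np.linalg.solve(H.T @ H, H.T @ y)
```

The recovered initial deviation x0_hat matches x0_true up to the measurement noise, mirroring the centralized batch solution.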

The objective now is to find the best estimate of the state correction, minimizing the sum of the squares of the observation errors, J(x̂(t_0)) = ε^T ε. Solving this least-squares problem, the best linear unbiased estimate is

    x̂(t_0) = (H^T H)^{−1} H^T y    (12)

and the optimal state solution X̂(t_0) is then obtained as

    X̂(t_0) = X*(t_0) + x̂(t_0)    (13)

Figure 2. Data Collection and Framework of Centralized Estimator

CONSENSUS BASED DISTRIBUTED ESTIMATOR

In this section, we develop the distributed estimator with consensus. We first describe the framework of the distributed estimator and then introduce the dual decomposition and subgradient methods, which are applicable to solving this type of decomposable concave subproblem with constraints. The detailed algorithm is summarized at the end of the section.

Framework

In a connected tracking station network such as the one shown in Figure 3, we assume each station can exchange information with its neighboring stations. There is no central processor, and the network is not fully connected. However, as long as there is a connection, defined by the entries of the adjacency matrix A(G), between two nodes in the network, the two connected stations can communicate with each other, and through such links information can eventually spread across the entire connected network. In such a system, the information filter and other data fusion algorithms that require a fully connected network cannot be applied. In addition, a fully connected network requires extensive data communication and storage, which complicates implementation as the amount of data grows. Furthermore, without consensus constraints, nothing forces the final local results to converge to a common estimate.

For the network system described in Figure 3, the neighbors of station k are determined by the entries A(G)_{k,h}, h = 1, …, n, h ≠ k. If A_{k,h} = 1, then station k can communicate with station h, and we require the estimate obtained by station k, x̂_k(t_0), to be identical to x̂_h(t_0) and to the other estimates from the connected neighbors. By passing this identity requirement from one node to the next, the consensus requirement is propagated through the network. With neighborhood consensus constraints on the estimates, we have the relationships

    a_{k,h} x̂_k(t_0) − a_{k,h} x̂_h(t_0) = 0,  k = 1, …, n,  h > k    (14)

where a_{k,h} denotes the element A_{k,h} of the matrix A. If there is a connection between stations k and h, the consensus condition in Eq. (14) enforces x̂_k(t_0) = x̂_h(t_0); otherwise no consensus constraint exists between nodes k and h. Since the matrix A is symmetric, a_{k,h} = a_{h,k}, so x̂_k(t_0) = x̂_h(t_0) is equivalent to x̂_h(t_0) = x̂_k(t_0). To avoid repeated consensus constraints, only the upper-triangular elements of A are included in Eq. (14); this is the origin of the condition h > k.

As no central processor is available in the distributed framework, each station propagates the nominal states from t_0 to its own observation points t_i and calculates H_{i,k} independently. The objective function in Eq. (3) is rewritten as

    J = Σ_{k=1}^{n} Σ_{i∈m_k} (R_{i,k} − Y_{i,k})² = Σ_{k=1}^{n} Σ_{i∈m_k} (y_{i,k} − H_{i,k} x̂_k(t_0))^T (y_{i,k} − H_{i,k} x̂_k(t_0))    (15)

We stack y_k = [y_{1,k}, …, y_{m_k,k}]^T and H_k = [H_{1,k}; …; H_{m_k,k}] from all the y_{i,k} and H_{i,k}, i = 1, …, m_k, obtained at station k. The consensus relationships specified in Eq. (14) become additional constraints on the batch filtering problem. Introducing the Lagrange multipliers λ_{k,h} for these equality constraints, the Lagrangian of Eq. (15) is expressed as

    L = Σ_{k=1}^{n} (y_k − H_k x̂_k(t_0))^T (y_k − H_k x̂_k(t_0)) + Σ_{k=1}^{n} Σ_{h=k+1}^{n} λ_{k,h}^T [a_{k,h} x̂_k(t_0) − a_{k,h} x̂_h(t_0)]    (16)

Before we proceed to solve this problem, we first review dual decomposition and the subgradient method.
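For concreteness, the pairing in Eq. (14) — one consensus constraint per connected pair of stations, taken from the upper triangle of the adjacency matrix — can be enumerated as below. The 4-station path-graph adjacency matrix is an illustrative example, and the indices are 0-based here.

```python
import numpy as np

# Adjacency matrix of a hypothetical 4-station path graph: 1-2-3-4.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

n = A.shape[0]
# One constraint pair (k, h) per nonzero upper-triangular entry of A;
# each pair carries one multiplier vector lambda_{k,h} in Eq. (16).
pairs = [(k, h) for k in range(n) for h in range(k + 1, n) if A[k, h] == 1]
```

Because A is symmetric, the lower triangle adds no new constraints, which is why only pairs with h > k are kept.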

Dual Decomposition and Subgradient Method

For a nonlinear optimization problem of minimizing an objective f(x) under equality constraints h_i(x) = 0 (i = 1, …, p) and inequality constraints g_i(x) ≤ 0 (i = 1, …, m), the Lagrangian has the form

    L(x, λ, µ) = f(x) + Σ_{i=1}^{p} µ_i h_i(x) + Σ_{i=1}^{m} λ_i g_i(x)    (17)

If the objective and constraint functions can be expressed in separable form as

    f(x) = Σ_{k=1}^{K} f^k(x_k),  g_i(x) = Σ_{k=1}^{K} g_i^k(x_k),  h_i(x) = Σ_{k=1}^{K} h_i^k(x_k)    (18)

by partitioning the state vector x into subvectors x = (x_1, …, x_K), then the Lagrangian can be rewritten as

    L(x, λ, µ) = Σ_{k=1}^{K} ( f^k(x_k) + Σ_{i=1}^{p} µ_i h_i^k(x_k) + Σ_{i=1}^{m} λ_i g_i^k(x_k) )    (19)

This function decomposes into K subproblems, one per subvector x_k, k = 1, …, K. Each subproblem is solved by minimizing its part of the dual function,

    L_D^k(λ, µ) = min_{x_k} ( f^k(x_k) + Σ_{i=1}^{p} µ_i h_i^k(x_k) + Σ_{i=1}^{m} λ_i g_i^k(x_k) )    (20)

for a given pair of multipliers (λ, µ). The dual function, defined as

    L_D(λ, µ) = inf_x L(x, λ, µ)    (21)

is always concave, so the dual problem over the sum of the subproblems,

    max_{(λ, µ)} Σ_{k=1}^{K} L_D^k(λ, µ)    (22)

is a convex optimization problem and can be solved by the subgradient method. The subgradient method is an iterative procedure that advances the optimization by finding an ascent direction for the dual problem. At iteration j, with the multipliers (λ_j, µ_j) given, the subgradient at this point is

    d_j = ( g(x_j), h(x_j) )    (23)

and the multipliers are updated as

    λ_{j+1} = max(0, λ_j + α_j g(x_j)),  µ_{j+1} = µ_j + α_j h(x_j)    (24)

where α_j is a step size that controls the convergence speed of the subgradient method. A maximum iteration count on j is generally imposed as a stopping criterion.

The algorithm proposed in this paper builds on this framework to search for a more precise optimal solution through communication between tracking stations without a central processor. In the following, we explain how to solve distributed orbit determination under estimate consensus using the dual decomposition method.
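The machinery of Eqs. (17)-(24) can be seen on a two-block toy problem (all values illustrative): minimize (x1 − c1)² + (x2 − c2)² subject to the coupling constraint x1 − x2 = 0. For a fixed multiplier the two quadratics separate and minimize in closed form, and the multiplier then follows the subgradient update of Eq. (24).

```python
# Dual decomposition with a subgradient update on
#   minimize (x1 - c1)^2 + (x2 - c2)^2  subject to  x1 - x2 = 0.
c1, c2 = 1.0, 3.0
mu = 0.0          # multiplier on the equality constraint h(x) = x1 - x2
alpha = 0.5       # constant step size

for _ in range(50):
    # Independent closed-form subproblems for the current mu (Eq. (20)):
    #   argmin_x1 (x1 - c1)^2 + mu * x1  and  argmin_x2 (x2 - c2)^2 - mu * x2
    x1 = c1 - mu / 2.0
    x2 = c2 + mu / 2.0
    # Subgradient step on the dual variable (Eq. (24)): mu += alpha * h(x).
    mu += alpha * (x1 - x2)

# Both copies converge to the constrained optimum (c1 + c2) / 2 = 2.0.
```

The dual iteration here is a contraction for 0 < alpha < 2, so a constant step size suffices; for the general problem a diminishing step size is the usual choice.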

Application in Distributed Orbit Determination

As introduced above, the Lagrangian for the distributed batch filtering problem under consensus is given by Eq. (16). Introducing the linear model of the batch filtering algorithm and collecting the terms in x̂_k(t_0), the Lagrangian can be rewritten as

    L = Σ_{k=1}^{n} [ (y_k − H_k x̂_k(t_0))^T (y_k − H_k x̂_k(t_0)) + Σ_{h=k+1}^{n} λ_{k,h}^T a_{k,h} x̂_k(t_0) ] − Σ_{k=1}^{n} Σ_{h=k+1}^{n} λ_{k,h}^T a_{k,h} x̂_h(t_0)    (25)

From this reformulated expression it is easy to see that Eq. (25) is composed of n subproblems, which can be written as

    L^k = min_{x̂_k(t_0)} [ (y_k − H_k x̂_k(t_0))^T (y_k − H_k x̂_k(t_0)) + ( Σ_{h=k+1}^{n} λ_{k,h} a_{k,h} − Σ_{h=1}^{k−1} λ_{h,k} a_{h,k} )^T x̂_k(t_0) ],  k = 1, …, n    (26)

For a given set of Lagrange multiplier vectors λ_{k,h}, these subproblems are independent of one another, each with its own estimates and observations, so they can be solved individually. A careful comparison of the subproblems with the local batch filtering formulation in Eq. (3) shows that they are very similar, apart from the additional term ( Σ_{h=k+1}^{n} λ_{k,h} a_{k,h} − Σ_{h=1}^{k−1} λ_{h,k} a_{h,k} )^T x̂_k(t_0). With this additional term in the objective function, the estimates must be recalculated to obtain the new optimal solution of each subproblem. Minimizing L^k with respect to x̂_k(t_0) yields the update

    x̂_k(t_0) = (H_k^T H_k)^{−1} ( H_k^T y_k − Σ_{h=k+1}^{n} λ_{k,h} a_{k,h} + Σ_{h=1}^{k−1} λ_{h,k} a_{h,k} ),  k = 1, …, n    (27)

This estimation can be performed by each station, given the Lagrange multiplier vectors and its local observations. Comparing the local estimates of distributed batch filtering using dual decomposition, Eq. (27), with the estimates of local batch filtering, Eq. (12), the additional term in Eq. (27) comes from the Lagrange multipliers. This extra term acts as a correction to the local batch filtering estimates, adjusting them toward a uniform solution.
Using the subgradient method, the Lagrange multipliers are updated by

    λ_{k,h}^{j+1} = λ_{k,h}^{j} + α_j a_{k,h} ( x̂_k(t_0) − x̂_h(t_0) ),  k = 1, …, n,  h = k + 1, …, n    (28)

The estimates and the Lagrange multiplier vectors of neighboring sensors are thus coupled. The iterations repeat a procedure of adjustment and comparison: the adjustment compensates estimates that fall below the average and counteracts the excess of estimates above the average, while the adjusted estimates of neighboring sensors are compared in the subgradient step to find the new adjustment for the next loop. Ideally, the differences between the estimates shrink as the number of iterations grows. When the estimates in the network have converged to one another, we can expect λ^{j+1} = λ^{j}. As a stopping criterion, one can either fix a maximum iteration number j_max and require j ≤ j_max, or require the sum of squared differences (x̂_k(t_0) − x̂_h(t_0))^T (x̂_k(t_0) − x̂_h(t_0)) to fall below a threshold ϵ.

In each iteration, each local batch filter calculates its own estimate from the new optimal formulation and then exchanges this information with its neighboring sensors to obtain the updated multipliers in the correction term. Unlike the information filter, which requires the sensors to exchange large amounts of information (for example, information matrices, observations, and estimates) as public variables, the new algorithm only requires the exchange of estimates and Lagrange multipliers. Furthermore, the local estimation can be carried out in parallel by the individual processors, which reduces the overall computation time. Finally, the algorithm is summarized in the following protocol chart:

for iter = 1 to iter_max do
    Initialization: X* = X, λ = 0
    for k = 1 to n do
        Calculate y_k, H_k at tracking station k with the current X*
    end
    for j = 1 to j_max do
        for k = 1 to n do
            x̂_k(t_0) = (H_k^T H_k)^{−1} ( H_k^T y_k − Σ_{h=k+1}^{n} λ_{k,h} a_{k,h} + Σ_{h=1}^{k−1} λ_{h,k} a_{h,k} )
        end
        for k = 1 to n do
            for h = k + 1 to n do
                λ_{k,h}^{j+1} = λ_{k,h}^{j} + α_j a_{k,h} ( x̂_k(t_0) − x̂_h(t_0) )
            end
        end
    end
    Update: x̂(t_0) = (1/n) Σ_{k=1}^{n} x̂_k(t_0),  X = X* + x̂(t_0)
end

Algorithm 1: Decentralized Batch Filtering Algorithm with Estimate Consensus using Dual Decomposition and the Subgradient Method

SIMULATION EXAMPLE

In this section, we apply the distributed estimation framework to a realistic orbit determination scenario in which range observations are provided by two tracking stations over the time period [0, 3105] seconds. Due to view constraints, the first tracking station observes ranges from 0 to 1320 seconds, every 15 seconds. The satellite then passes through an area with no observation data. From 1680 to 3105 seconds, the satellite is in the view zone of the second tracking station, which also provides range data every 15 seconds. The observed range data are shown in Figure 4 as a function of time. The objective is to estimate the complete trajectory of the satellite over the whole time period [0, 3105] seconds.
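The iteration at the core of Algorithm 1 can be sketched for two stations on a synthetic linear problem: each station applies the closed-form update of Eq. (27), and the multiplier then moves along the estimate disagreement as in Eq. (28). The matrices, noise level, and step size below are illustrative choices; at convergence both local estimates coincide with the centralized solution of Eq. (12).

```python
import numpy as np

rng = np.random.default_rng(1)
x0_true = np.array([1.0, 2.0])
H1 = np.array([[1.0, 0.0], [1.0, 1.0]])     # station 1 sensitivity matrix
H2 = np.array([[0.0, 1.0], [1.0, 2.0]])     # station 2 sensitivity matrix
y1 = H1 @ x0_true + rng.normal(0.0, 0.1, 2) # noisy local observations
y2 = H2 @ x0_true + rng.normal(0.0, 0.1, 2)

lam = np.zeros(2)   # multiplier lambda_{1,2}; the two stations are connected,
alpha = 0.1         # so a_{1,2} = 1; constant subgradient step size

for _ in range(500):
    # Local closed-form updates, Eq. (27): the multiplier term corrects
    # each station's batch estimate toward the network-wide solution.
    x1_hat = np.linalg.solve(H1.T @ H1, H1.T @ y1 - lam)
    x2_hat = np.linalg.solve(H2.T @ H2, H2.T @ y2 + lam)
    # Multiplier update along the disagreement, Eq. (28).
    lam += alpha * (x1_hat - x2_hat)

# Centralized least-squares estimate of Eq. (12) for comparison.
x_central = np.linalg.solve(H1.T @ H1 + H2.T @ H2, H1.T @ y1 + H2.T @ y2)
```

At the fixed point the two stationarity conditions sum to the centralized normal equations, which is why the consensus estimate reproduces the centralized one.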
In order to estimate the complete trajectory over the specified period, we select t_0 = 1510 s, between the end of the observations from station 1 and the start of the observations from station 2, and first solve for the consensus-based estimates at t_0. The states at any other time can then be propagated forward or backward from t_0 by numerical integration of the dynamics using the estimated state vector. Since t_0 is after the observation points of station 1, we integrate the states and the corresponding elements of the transition matrix backward from t_0 to those observation points, while for station 2 we integrate the same set of variables forward to its observation points. Following Algorithm 1, the estimates X̂(t_0) obtained from the centralized estimator, the single tracking station estimators, and the distributed estimator with consensus are listed in Table 1.

Figure 4. Time History of Observed Ranges by Tracking Stations 1 and 2

Table 1. State Estimates at t_0 using Different Methods

Method                                  x(t_0) (m)       y(t_0) (m)         v_x(t_0) (m/s)    v_y(t_0) (m/s)
Centralized Estimator                   -2897.69411735   7182144.01656091   -7449.82637504    369.48640491
Local Estimator of Station 1            -2907.16521028   7182133.59162168   -7449.83580531    369.46778527
Local Estimator of Station 2            -2894.92052376   7182140.07737831   -7449.82938718    369.49360532
Distributed Estimator with Consensus    -2897.78458401   7182142.81635334   -7449.82675938    369.48641575

Comparing the results of the different methods in Table 1, it is clear that the estimates from the distributed estimator with consensus constraints are much closer to those of the centralized estimator. For example, the position deviation between the centralized and distributed estimates stays under 2 meters, whereas the deviation between the centralized estimates and a single-station estimator can reach 10 meters. Next, we propagate the four sets of states from t_0 to all observation points in [0, 3105] and generate the corresponding time histories of the observation errors, illustrated in Figure 5. The results propagated from the individual estimates overlap the observation errors produced by the centralized estimator only within their own observable time zones, and diverge when the propagation goes beyond those zones. In contrast, the results propagated from the distributed estimates with consensus coincide with the centralized ones over the whole time period.

CONCLUSION

In this paper a novel distributed batch filtering algorithm using dual decomposition and the subgradient method has been proposed. The distributed batch filtering task was first formulated as a constrained optimization problem.
The distributed dual-decomposition algorithm then approaches the optimal solution with consensus in an iterative manner. The new algorithm was applied to a distributed orbit determination problem, where it yields results close to the centralized estimates and performs better than the estimates from a single tracking station. In the future, larger numbers of tracking stations in more complex networks will be tested on the distributed orbit determination problem.

Figure 5. Time History of Observation Errors using Different Methods

Furthermore, applications of the proposed algorithm in other areas will be considered.

APPENDIX: PARTIAL DIFFERENTIAL AND TRANSITION MATRICES

The partial-derivative matrix of the dynamics in Eq. (1) is

    A = [∂F/∂X] =
        [ 0                             0                             1   0 ]
        [ 0                             0                             0   1 ]
        [ µ(2x² − y²)/(x² + y²)^(5/2)   3µxy/(x² + y²)^(5/2)          0   0 ]
        [ 3µxy/(x² + y²)^(5/2)          µ(2y² − x²)/(x² + y²)^(5/2)   0   0 ]

The time derivative of the transition matrix is given by Φ̇ = AΦ, i.e.,

    Φ̇ =
        [ Φ_31                    Φ_32                    Φ_33                    Φ_34                  ]
        [ Φ_41                    Φ_42                    Φ_43                    Φ_44                  ]
        [ A_31 Φ_11 + A_32 Φ_21   A_31 Φ_12 + A_32 Φ_22   A_31 Φ_13 + A_32 Φ_23   A_31 Φ_14 + A_32 Φ_24 ]
        [ A_41 Φ_11 + A_42 Φ_21   A_41 Φ_12 + A_42 Φ_22   A_41 Φ_13 + A_42 Φ_23   A_41 Φ_14 + A_42 Φ_24 ]

where A_ij denotes the (i, j) component of the matrix A.

REFERENCES

[1] P. T. Langer, J., and C. J., "Orbit Determination and Satellite Navigation," The Aerospace Corporation Magazine of Advances in Aerospace Technology: Satellite Navigation, Vol. 2, No. 2, 2002.
[2] D. Hobbs and P. Bohn, "Precise Orbit Determination for Low Earth Orbit Satellites," MCFA Annals, Vol. 4, 2006.
[3] B. D. Tapley, "Statistical Orbit Determination Theory," in Recent Advances in Dynamical Astronomy, D. Reidel Publishing Company, Dordrecht, 1973, pp. 396-425.

[4] M. Mesbahi and M. Egerstedt, Graph Theoretic Methods in Multiagent Networks. Princeton, NJ: Princeton University Press, 1st ed., 2010.
[5] B. S. Rao and H. F. Durrant-Whyte, "Fully Decentralised Algorithm for Multisensor Kalman Filtering," IEE Proceedings D - Control Theory and Applications, Vol. 138, No. 5, 1991, pp. 413-420.
[6] S. Grime and H. Durrant-Whyte, "Data fusion in decentralized sensor networks," Control Engineering Practice, Vol. 2, No. 5, 1994, pp. 849-863.
[7] H. Nies, O. Loffeld, et al., "A data fusion approach for distributed orbit estimation," Anchorage, AK, 2004, Vol. 6, pp. 3736-3739.
[8] P. Ferguson and J. How, "Decentralized Estimation Algorithms for Formation Flying Spacecraft," AIAA Paper 2003-5442, 2003.
[9] M. G. Rabbat, R. D. Nowak, and J. A. Bucklew, "Generalized consensus computation in networked systems with erasure links," New York, NY, July 2005, pp. 1088-1092.
[10] S. Samar, S. Boyd, and D. Gorinevsky, "Distributed Estimation via Dual Decomposition," Kos, Greece, July 2007, pp. 849-863.
[11] I. D. Schizas, A. Ribeiro, and G. B. Giannakis, "Consensus in ad hoc WSNs with noisy links - Part I: Distributed estimation of deterministic signals," IEEE Transactions on Signal Processing, Vol. 56, January 2008, pp. 350-364.
[12] S. S. Stanković, M. S. Stanković, and D. Stipanović, "Consensus Based Overlapping Decentralized Estimator," IEEE Transactions on Automatic Control, Vol. 54, February 2009, pp. 410-415.
[13] S. S. Stanković, M. S. Stanković, and D. Stipanović, "Decentralized consensus based control methodology for vehicle formations in air and deep space," July 2010, pp. 3660-3665.
[14] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge: Cambridge University Press, 2004.
[15] N. Z. Shor, Minimization Methods for Non-differentiable Functions. New York, NY: Springer-Verlag, 1985.