iSAM: Incremental Smoothing and Mapping

Michael Kaess, Student Member, IEEE, Ananth Ranganathan, Student Member, IEEE, and Frank Dellaert, Member, IEEE

Abstract: We present incremental smoothing and mapping (iSAM), a novel approach to the simultaneous localization and mapping problem that is based on fast incremental matrix factorization. iSAM provides an efficient and exact solution by updating a QR factorization of the naturally sparse smoothing information matrix, therefore recalculating only the matrix entries that actually change. iSAM is efficient even for robot trajectories with many loops as it avoids unnecessary fill-in in the factor matrix by periodic variable reordering. Also, to enable data association in real-time, we provide efficient algorithms to access the estimation uncertainties of interest based on the factored information matrix. We systematically evaluate the different components of iSAM as well as the overall algorithm using various simulated and real-world datasets for both landmark and pose-only settings.

Index Terms: Data association, localization, mapping, mobile robots, nonlinear estimation, simultaneous localization and mapping (SLAM), smoothing.

Footnote: Draft manuscript, September 7, 2008. Accepted by TRO as a regular paper. This work was partially supported by the National Science Foundation under Grant No. IIS and by DARPA under grant number FA C. This work was presented in part at the International Joint Conference on Artificial Intelligence, Hyderabad, India, January 2007, and in part at the IEEE International Conference on Robotics and Automation, Rome, Italy, April 2007. This paper has supplementary downloadable material available online, provided by the authors: four multimedia AVI format movie clips, which visualize different aspects of the algorithms presented in this paper (4.3 MB in size). Color versions of most figures are available online. The first and last authors are with the College of Computing, Georgia Institute of Technology, Atlanta, GA 30332, USA. The second author is with the Honda Research Institute, One Cambridge Center, Suite 401, Cambridge, MA 02142, USA. (kaess@cc.gatech.edu; ananth@cc.gatech.edu; dellaert@cc.gatech.edu.)

I. INTRODUCTION

The goal of simultaneous localization and mapping (SLAM) [1]-[3] is to provide an estimate after every step for both the robot trajectory and the map, given all available sensor data. In addition to being incremental, to be practically useful, a solution to SLAM has to perform in real-time, be applicable to large-scale environments, and support online data association. Such a solution is essential for many applications, ranging from search and rescue and reconnaissance to commercial products such as entertainment and household robots. While there has been much progress over the past decade, none of the work presented so far fulfills all of these requirements at the same time. Our previous work, called square root SAM [4], [5], gets close to this goal by factorizing the information matrix of the smoothing problem. Formulating SLAM in a smoothing context adds the complete trajectory into the estimation problem, therefore simplifying its solution. While this seems counterintuitive at first, because more variables are added to the estimation problem, the simplification arises from the fact that the smoothing information matrix is naturally sparse.
In contrast, in filtering approaches the information matrix becomes dense when marginalizing out robot poses. As a consequence of applying smoothing, we are able to provide an exact, yet efficient solution based on a sparse matrix factorization of the smoothing information matrix in combination with back-substitution. We call this matrix factor the square root information matrix, based on earlier work on square root information filtering (SRIF) and smoothing (SRIS), as recounted in [6], [7]. In this paper we present incremental smoothing and mapping (iSAM), which performs fast incremental updates of the square root information matrix yet is able to compute the full map and trajectory at any time. Our previous work, square root SAM, is a batch algorithm that first updates the information matrix when new measurements become available and then factors it completely. Hence it performs unnecessary calculations when applied incrementally. In this work, in contrast, we directly update the square root information matrix with new measurements as they arrive, using standard matrix update equations [8]. That means we reuse the previously calculated components of the square root factor, and only perform calculations for entries that are actually affected by the new measurements. Thus we obtain a local and constant time operation for exploration tasks. For trajectories with loops, periodic variable reordering prevents unnecessary fill-in in the square root factor that would otherwise slow down the incremental factor update as well as the recovery of the current estimate by back-substitution. Fill-in is a well-known problem for matrix factorization, as the resulting matrix factor can contain a large number of additional non-zero entries that reduce or even destroy the sparsity, with associated negative consequences for the computational complexity. As the variable ordering influences the amount of fill-in obtained, it allows us to influence the computational complexity involved in solving the system. While finding the best variable ordering is infeasible, good heuristics are available. We perform incremental updates most of the time, but periodically apply a variable reordering heuristic, followed by refactoring the resulting measurement Jacobian. Incremental mapping also requires online data association, hence we provide efficient algorithms to access the relevant estimation uncertainties from the incrementally updated square root factor. The key insight for efficient retrieval of these quantities is that only some entries of the full covariance matrix are needed, at least some of which can readily be accessed [9]. We present an efficient algorithm that avoids calculating all entries of the covariance matrix, and instead

focuses on the relevant parts by exploiting the sparsity of the square root factor. In addition to this exact solution, we also provide conservative estimates that are again derived from the square root factor. We evaluate iSAM on simulated and real-world datasets for both landmark-based and pose-only settings. The pose-only setting is a special case of iSAM, in which no landmarks are used, but general pose constraints between pairs of poses are considered in addition to odometry. The results show that iSAM provides an efficient and exact solution for both types of SLAM settings. They also show that the square root factor indeed remains sparse even for large-scale environments with a significant number of loops.

This paper is organized as follows. We continue in the next section with a review of the smoothing approach to SLAM as a least squares problem, providing a solution based on matrix factorization. We then present our incremental solution in Section III, addressing the topics of loops in the trajectory and nonlinear measurement functions in Section IV. For data association we discuss efficient algorithms to retrieve the necessary components of the estimation uncertainty in Section V. We follow up with experimental results in Section VI and finally discuss related work in Section VII.

II. SAM: A SMOOTHING APPROACH TO SLAM

In this section we review the formulation of the SLAM problem in the context of smoothing, following [4], but focusing on a solution based on QR matrix factorization. We start with the probabilistic model underlying the smoothing approach to SLAM, and show how inference on this model leads to a least squares problem. We then obtain an equivalent linear formulation in matrix form by linearization of the measurement functions. We finally provide an efficient batch solution based on QR matrix factorization.

A. A Probabilistic Model for SLAM

We formulate the SLAM problem in terms of the belief network model shown in Fig. 1.

Fig. 1. Bayesian belief network representation of the SLAM problem. $x_i$ is the state of the robot at time $i$, $l_j$ the location of landmark $j$, $u_i$ the control input at time $i$, and $z_k$ the $k$-th landmark measurement.

We denote the robot states by $X = \{x_i\}$ with $i \in 0 \ldots M$, the landmarks by $L = \{l_j\}$ with $j \in 1 \ldots N$, the control inputs by $U = \{u_i\}$ for $i \in 1 \ldots M$ and finally the landmark measurements by $Z = \{z_k\}$ with $k \in 1 \ldots K$. The joint probability of all variables and measurements is given by

$$P(X, L, U, Z) \propto P(x_0) \prod_{i=1}^{M} P(x_i \mid x_{i-1}, u_i) \prod_{k=1}^{K} P(z_k \mid x_{i_k}, l_{j_k}) \quad (1)$$

where $P(x_0)$ is a prior on the initial state, $P(x_i \mid x_{i-1}, u_i)$ is the motion model, parametrized by the control input $u_i$, and $P(z_k \mid x_{i_k}, l_{j_k})$ is the landmark measurement model. Initially, we assume known correspondences $(i_k, j_k)$ for each measurement $z_k$. The problem of establishing correspondences, which is also called data association, is deferred until Section V. We assume Gaussian measurement models, as is standard in the SLAM literature [10]. The process model

$$x_i = f_i(x_{i-1}, u_i) + w_i \quad (2)$$

describes the odometry sensor or scan-matching process, where $w_i$ is normally distributed zero-mean process noise with covariance matrix $\Lambda_i$. The Gaussian measurement equation

$$z_k = h_k(x_{i_k}, l_{j_k}) + v_k \quad (3)$$

models the robot's landmark sensors, where $v_k$ is normally distributed zero-mean measurement noise with covariance $\Gamma_k$.
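To make the generative model concrete, the following minimal Python sketch samples from the process and measurement models (2) and (3). The planar state, the linear stand-in models f and h, and all noise values are hypothetical choices for illustration, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical planar models: state x = (px, py), control u = (dx, dy),
# landmark measurement = landmark position relative to the robot.
def f(x_prev, u):          # process model f_i, eq. (2), without noise
    return x_prev + u

def h(x, l):               # measurement model h_k, eq. (3), without noise
    return l - x

Lambda = 0.01 * np.eye(2)  # process noise covariance Lambda_i
Gamma = 0.05 * np.eye(2)   # measurement noise covariance Gamma_k

landmarks = np.array([[2.0, 1.0], [4.0, -1.0]])
x = np.zeros(2)
measurements = []
for _ in range(5):
    u = np.array([1.0, 0.0])
    w = rng.multivariate_normal(np.zeros(2), Lambda)
    x = f(x, u) + w                        # x_i = f_i(x_{i-1}, u_i) + w_i
    for l in landmarks:
        v = rng.multivariate_normal(np.zeros(2), Gamma)
        measurements.append(h(x, l) + v)   # z_k = h_k(x_{i_k}, l_{j_k}) + v_k
```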
B. SLAM as a Least Squares Problem

To obtain an optimal estimate for the set of unknowns given all available measurements, we convert the problem into an equivalent least squares formulation based on a maximum a posteriori (MAP) estimate. As we perform smoothing rather than filtering, we are interested in the MAP estimate for the entire trajectory $X$ and the map of landmarks $L$, given the control inputs $U$ and the landmark measurements $Z$. The MAP estimate $X^*, L^*$ for trajectory and map is obtained by minimizing the negative log of the joint probability from (1):

$$X^*, L^* = \arg\max_{X,L} P(X, L, U, Z) = \arg\min_{X,L} -\log P(X, L, U, Z). \quad (4)$$

Combined with the process and measurement models, this leads to the following nonlinear least squares problem:

$$X^*, L^* = \arg\min_{X,L} \left\{ \sum_{i=1}^{M} \left\| f_i(x_{i-1}, u_i) - x_i \right\|^2_{\Lambda_i} + \sum_{k=1}^{K} \left\| h_k(x_{i_k}, l_{j_k}) - z_k \right\|^2_{\Gamma_k} \right\} \quad (5)$$

where we use the notation $\|e\|^2_{\Sigma} = e^T \Sigma^{-1} e$ for the squared Mahalanobis distance with covariance matrix $\Sigma$. Note that we have dropped the prior $P(x_0)$ on the first pose for simplicity. If the process models $f_i$ and measurement functions $h_k$ are nonlinear and a good linearization point is not available, nonlinear optimization methods are used, such as Gauss-Newton or the Levenberg-Marquardt algorithm, which solve a succession of linear approximations to (5) to approach the minimum [11]. This is similar to the extended Kalman filter approach to SLAM as pioneered by [12], but allows for

iterating multiple times to convergence, therefore avoiding the problems arising from wrong linearization points. As derived in the Appendix, linearization of the measurement equations and subsequent collection of all components in one large linear system yields the following standard least squares problem:

$$\theta^* = \arg\min_{\theta} \|A\theta - b\|^2 \quad (6)$$

where the vector $\theta \in \mathbb{R}^n$ contains all pose and landmark variables, the matrix $A \in \mathbb{R}^{m \times n}$ is a large but sparse measurement Jacobian with $m$ measurement rows, and $b \in \mathbb{R}^m$ is the right-hand side (RHS) vector. Such sparse least squares systems are converted into an ordinary linear equation system by setting the derivative of $\|A\theta - b\|^2$ to 0, resulting in the so-called normal equations $A^T A \theta = A^T b$. This equation system can be solved by Cholesky decomposition of $A^T A$.

C. Solving by QR Factorization

We apply standard QR matrix factorization to the measurement Jacobian $A$ to solve the least squares problem (6). In contrast to Cholesky factorization, this avoids having to calculate the information matrix $A^T A$ with the associated squaring of the matrix condition number. QR factorization of the measurement Jacobian $A$ yields:

$$A = Q \begin{bmatrix} R \\ 0 \end{bmatrix} \quad (7)$$

where $R \in \mathbb{R}^{n \times n}$ is the upper triangular square root information matrix (note that the information matrix is given by $R^T R = A^T A$) and $Q \in \mathbb{R}^{m \times m}$ is an orthogonal matrix. We apply this factorization to the least squares problem (6):

$$\|A\theta - b\|^2 = \left\| Q \begin{bmatrix} R \\ 0 \end{bmatrix} \theta - b \right\|^2 = \left\| Q^T Q \begin{bmatrix} R \\ 0 \end{bmatrix} \theta - Q^T b \right\|^2 = \left\| \begin{bmatrix} R \\ 0 \end{bmatrix} \theta - \begin{bmatrix} d \\ e \end{bmatrix} \right\|^2 = \|R\theta - d\|^2 + \|e\|^2 \quad (8)$$

where we define $[d, e]^T := Q^T b$ with $d \in \mathbb{R}^n$ and $e \in \mathbb{R}^{m-n}$. (8) becomes minimal if and only if $R\theta = d$, leaving the second term $\|e\|^2$ as the residual of the least squares problem. Therefore, QR factorization simplifies the least squares problem to a linear system with a single unique solution $\theta^*$:

$$R\theta^* = d \quad (9)$$

Most of the work for solving this equation system has already been done by the QR decomposition, because $R$ is upper triangular, so simple back-substitution can be used. The result is the least squares estimate $\theta^*$ for the complete robot trajectory as well as the map, conditioned on all measurements.
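As a sketch of the batch solution (7)-(9) in Python, with random dense stand-ins for the sparse measurement Jacobian A and the RHS b (in iSAM both are sparse and come from the linearized models):

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

m, n = 50, 20
A = np.random.randn(m, n)          # stand-in measurement Jacobian
b = np.random.randn(m)             # stand-in RHS

Q, R = qr(A, mode='economic')      # eq. (7): A = Q [R; 0], R upper triangular
d = Q.T @ b                        # the first n entries of Q^T b, eq. (8)

theta = solve_triangular(R, d, lower=False)   # back-substitution, eq. (9)

# The result matches a dense least squares solver.
assert np.allclose(theta, np.linalg.lstsq(A, b, rcond=None)[0])
```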
Fig. 2. Using a Givens rotation as a step in transforming a general matrix into upper triangular form. The entry marked 'x' is eliminated, changing some of the entries marked in red (dark), depending on sparsity.

III. iSAM: INCREMENTAL SMOOTHING AND MAPPING

We present our incremental smoothing and mapping (iSAM) algorithm that avoids unnecessary calculations by directly updating the square root factor when a new measurement arrives. We begin with a review of Givens rotations for batch and incremental QR matrix factorization. We then apply this technique to update the square root factor, and discuss how to retrieve the map and trajectory. We also analyze performance for exploration tasks in simulated environments.

A. Matrix Factorization by Givens Rotations

A standard approach to obtain the QR factorization of a matrix $A$ uses Givens rotations [8] to clean out all entries below the diagonal, one at a time. While this is not the preferred way to do full QR factorization, we will later see that this approach readily extends to factorization updates, which are needed to incorporate new measurements. The process starts from the left-most non-zero entry, and proceeds column-wise, by applying the Givens rotation

$$\Phi := \begin{bmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{bmatrix} \quad (10)$$

to rows $i$ and $k$, with $i > k$, as shown in Fig. 2. The parameter $\phi$ is chosen so that $a_{ik}$, the $(i, k)$ entry of $A$, becomes 0. After all entries below the diagonal are zeroed out in this manner, the upper triangular entries contain the $R$ factor. The orthogonal rotation matrix $Q$ is typically dense, which is why this matrix is never explicitly formed in practice. Instead, it is sufficient to update the RHS vector $b$ with the same rotations that are applied to $A$. Solving a least squares system $Ax = b$ by matrix factorization using Givens rotations is numerically stable and accurate to machine precision if the rotations are determined as follows [8, Sec. 5.1]:

$$(\cos\phi, \sin\phi) = \begin{cases} (1, 0) & \text{if } \beta = 0 \\ \left( \dfrac{-\alpha}{\beta \sqrt{1 + (\alpha/\beta)^2}}, \dfrac{1}{\sqrt{1 + (\alpha/\beta)^2}} \right) & \text{if } |\beta| > |\alpha| \\ \left( \dfrac{1}{\sqrt{1 + (\beta/\alpha)^2}}, \dfrac{-\beta}{\alpha \sqrt{1 + (\beta/\alpha)^2}} \right) & \text{otherwise} \end{cases}$$

where $\alpha := a_{kk}$ and $\beta := a_{ik}$.
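A direct transcription of this rule as a Python sketch; the helper names givens and rotate_rows are our own, not the authors' code, and indices are 0-based:

```python
import numpy as np

def givens(alpha, beta):
    """Stable Givens parameters per [8, Sec. 5.1], chosen so that
    s * alpha + c * beta == 0."""
    if beta == 0.0:
        return 1.0, 0.0
    if abs(beta) > abs(alpha):
        t = alpha / beta
        den = np.sqrt(1.0 + t * t)
        return -t / den, 1.0 / den
    t = beta / alpha
    den = np.sqrt(1.0 + t * t)
    return 1.0 / den, -t / den

def rotate_rows(M, k, i):
    """Rotate rows k and i of M (i > k) in place so that M[i, k] becomes 0.
    Stacking the RHS as an extra column of M updates it with the same rotation."""
    c, s = givens(M[k, k], M[i, k])
    row_k, row_i = M[k, :].copy(), M[i, :].copy()
    M[k, :] = c * row_k - s * row_i
    M[i, :] = s * row_k + c * row_i
    M[i, k] = 0.0          # clamp the eliminated entry to exactly zero

# Batch QR by Givens rotations: zero all subdiagonal entries column-wise.
A = np.random.randn(6, 4)
for k in range(A.shape[1]):
    for i in range(k + 1, A.shape[0]):
        rotate_rows(A, k, i)
assert np.allclose(np.tril(A, -1), 0.0)   # A now holds [R; 0]
```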

B. Incremental Updating

When a new measurement arrives, it is more efficient to modify the previous factorization directly by QR-updating, instead of updating and refactoring the matrix $A$. Adding a new measurement row $w^T$ and RHS $\gamma$ into the current factor $R$ and RHS $d$ yields a new system that is not yet in the correct factorized form:

$$\begin{bmatrix} Q^T & \\ & 1 \end{bmatrix} \begin{bmatrix} A \\ w^T \end{bmatrix} = \begin{bmatrix} R \\ w^T \end{bmatrix}, \quad \text{new RHS:} \begin{bmatrix} d \\ \gamma \end{bmatrix}. \quad (11)$$

Note that this is the same system that is obtained by applying Givens rotations to the updated matrix $A$ to eliminate all entries below the diagonal, except for the last (new) row. Therefore Givens rotations can be determined that zero out this new row, yielding the updated factor $R$. As for the full factorization, we simultaneously update the RHS with the same rotations to obtain $d$. Several steps of this update process are shown in Fig. 3.

Fig. 3. Updating the factored representation of the smoothing information matrix for the example of an exploration task: New measurement rows are added to the upper triangular factor R and the right-hand side (RHS). The left column shows the updates for the first three steps, the right column shows the update after 50 steps. The update operation is symbolically denoted by . Entries that remain unchanged are shown in light blue (gray). For the exploration task, the number of operations is bounded by a constant.

New variables are added to the QR factorization by expanding the factor $R$ by the appropriate number of empty columns and rows. This expansion is simply done before new measurement rows containing the new variables are added. At the same time, the RHS $d$ is augmented by the same number of zeros.

C. Incremental SAM

Applying the Givens rotations-based updating process to the square root factor provides the basis for our efficient incremental solution to smoothing and mapping. In general, the maximum number of Givens rotations needed for adding a new measurement row is $n$. However, as both $R$ and the new measurement row are sparse, only a constant number of Givens rotations are needed. Furthermore, new measurements typically refer to recently added variables, so that often only the rightmost part of the new measurement row is (sparsely) populated. For a linear exploration task, incorporating a set of new landmark and odometry measurements takes constant time. Examples of such updates are shown in Fig. 3. The simulation results in Fig. 4 show that the number of rotations needed is independent of the size of the trajectory and the map.

Fig. 4. Number of Givens rotations needed per step for a simulated linear exploration task. In each step, the square root factor is updated by adding the new measurement rows by Givens rotations. The number of rotations is independent of the length of the trajectory.

Updating the square root factor therefore takes $O(1)$ time, but this does not yet provide the current least squares estimate. The current least squares estimate for the map and the full trajectory can be obtained at any time by back-substitution in time linear in the number of variables. While back-substitution has quadratic time complexity for general dense matrices, it is more efficient in the context of iSAM. For exploration tasks, the information matrix is band-diagonal. Therefore, the square root factor has a constant number of entries per column, independent of the number of variables $n$ that make up the map and trajectory. Therefore, back-substitution requires $O(n)$ time in iSAM. In the linear exploration example from above, this results in about 0.12s computation time after steps.
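A sketch of this update, reusing the hypothetical rotate_rows helper from the Section III-A sketch: append the new row w^T and RHS entry gamma, then eliminate the row left to right, per (11).

```python
import numpy as np

def update_factor(R, d, w, gamma):
    """Incorporate one new measurement row w^T with RHS gamma into
    R theta = d, eq. (11). Dense for brevity; in iSAM, R and w are sparse,
    so only a constant number of rotations fire for exploration tasks."""
    n = R.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = R              # current factor
    M[:n, n] = d               # current RHS, carried as an extra column
    M[n, :n] = w               # new measurement row
    M[n, n] = gamma            # new RHS entry
    for k in range(n):         # the appended row is the only violation of
        if M[n, k] != 0.0:     # triangularity; rotate it against row k
            rotate_rows(M, k, n)
    return M[:n, :n], M[:n, n]     # updated R and d; M[n, n] joins the residual
```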
In fact, for this special case of exploration, only a constant number of the most recent values has to be retrieved in each step to obtain the exact solution incrementally.

IV. LOOPS AND NONLINEAR FUNCTIONS

We discuss how iSAM deals with loops in the robot trajectory, as well as with nonlinear sensor measurement functions. While realistic SLAM applications include much exploration, the robot often returns to previously visited places, closing loops in the trajectory. We discuss the consequences of loops on the matrix factorization and show how to use periodic variable reordering to avoid unnecessary increases in computational complexity. Furthermore, real-world sensing often leads to nonlinear measurement functions, typically by means of angles, such as the bearing to a landmark or the robot heading. We show how iSAM allows relinearization of measurement equations, and suggest a solution in connection with the periodic variable reordering.

A. Loops and Periodic Variable Reordering

Environments with loops do not have the nice property of local updates, resulting in increased complexity. In contrast to pure exploration, where a landmark is only visible from a small part of the trajectory, a loop in the trajectory brings the

robot back to a previously visited location. A loop introduces correlations between current poses and previously observed landmarks, which themselves are connected to earlier parts of the trajectory. An example based on a simulated environment with a robot trajectory in the form of a double 8-loop is shown in Fig. 5.

Fig. 5. For a simulated environment consisting of an 8-loop that is traversed twice (a; for simplicity, only a reduced example is shown here), the upper triangular factor R (b) shows significant fill-in, yielding bad performance (d, continuous red). Some fill-in occurs at the time of the first loop closing (A). Note that there are no negative consequences on the subsequent exploration along the second loop until the next loop closure occurs (B). However, the fill-in then becomes significant when the complete 8-loop is traversed for the second time, with a peak when visiting the center point of the 8-loop for the third time (C). After variable reordering according to an approximate minimum degree heuristic, the factor matrix again is completely sparse (c). In the presence of loops, reordering the variables after each step (d, dashed green) is sometimes less expensive than incremental updates. However, a considerable increase in efficiency is achieved by using fast incremental updates interleaved with only occasional variable reordering (d, dotted blue), here performed every 100 steps. Panel (d) shows the execution time per step for the different updating strategies in both linear (top) and log scale (bottom).

Loops in the trajectory can result in a significant increase of computational complexity through a large increase of non-zero entries in the factor matrix. Non-zero entries beyond the sparsity pattern of the information matrix are called fill-in. While the smoothing information matrix remains sparse even in the case of closing loops, the incremental updating of the factor matrix R leads to fill-in, as can be seen from Fig. 5(b). We avoid fill-in by variable reordering, a technique well known in the linear algebra community, using a heuristic to efficiently find a good block ordering. The order of the columns in the information matrix influences the variable elimination order and therefore also the resulting number of entries in the factor R. While obtaining the best column variable ordering is NP-hard, efficient heuristics such as the COLAMD (column approximate minimum degree) ordering by Davis et al. [13] have been developed that yield good results for the SLAM problem, as shown in [4], [5]. We apply this ordering heuristic to blocks of variables that correspond to the robot poses and landmark locations. As has been shown in [4], operating on these blocks leads to a further increase in efficiency as it exploits the special structure of the SLAM problem. The corresponding factor R after applying the COLAMD ordering shows negligible fill-in, as can be seen in Fig. 5(c). We propose fast incremental updates with periodic variable reordering, combining the advantages of both methods.
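The resulting strategy can be sketched as follows, assuming A is a scipy.sparse measurement Jacobian. SciPy does not ship COLAMD, so reverse Cuthill-McKee stands in here as an available fill-reducing heuristic, and the refactorization is dense for brevity; the structure of the loop, not the particular heuristic, is the point.

```python
import numpy as np
from scipy.sparse.csgraph import reverse_cuthill_mckee

REORDER_INTERVAL = 100     # fixed interval, as in the paper's experiments

def maybe_reorder(A, step):
    """Every REORDER_INTERVAL steps, compute a fill-reducing ordering and
    refactor from scratch; otherwise the caller keeps Givens-updating R."""
    if step % REORDER_INTERVAL != 0:
        return None                          # keep incremental updates
    info = (A.T @ A).tocsr()                 # sparse information matrix
    perm = reverse_cuthill_mckee(info, symmetric_mode=True)
    A_perm = A.tocsc()[:, perm]              # apply the column permutation
    R = np.linalg.qr(A_perm.toarray(), mode='r')   # batch refactorization
    return perm, R
```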
Factorization of the new measurement Jacobian after variable reordering is expensive when performed in each step. But combined with incremental updates it avoids fill-in and still yields a fast algorithm, as supported by the timing results in Fig. 5(d). In fact, as the log scale version of this figure shows, our solution is one to three orders of magnitude faster than either the purely incremental or the batch solution, with the exception of occasional peaks caused by reordering of the variables and subsequent matrix factorization. In this example we use a fixed interval of 100 steps after which we reorder the variables and refactor the complete matrix.

B. Nonlinear Systems

While we have so far only discussed the case of linear measurement functions, SLAM applications usually are faced with nonlinear measurement functions. Angular measurements, such as the robot orientation or the bearing to a landmark, are the main source of nonlinear dependencies between measurements and variables. In that case everything discussed so far is still valid. However, after solving the linearized version based on the current variable estimate we might obtain a better estimate, resulting in a modified Jacobian based on this new linearization point. For standard nonlinear optimization techniques this process is iterated, as explained in Section II-B, until the change in the estimate is sufficiently

small. Convergence is guaranteed, at least to a local minimum, and the convergence speed is quadratic because we apply a second order method. As relinearization is not needed in every step, we propose combining it with periodic variable reordering. The SLAM problem is different from a standard nonlinear optimization, as new measurements arrive sequentially. First, we already have a very good estimate for most if not all of the old variables. Second, measurements are typically fairly accurate on a local scale, so that good estimates are available for a newly added robot pose as well as for newly added landmarks. We can therefore avoid calculating a new Jacobian and refactoring it in each step, a fairly expensive batch operation. Instead, for the results presented here, we combine relinearization with the periodic variable reordering for fill-in reduction, as discussed in the previous section. In other words, in the variable reordering steps only, we also relinearize the measurement functions, as the new measurement Jacobian has to be refactored anyways.

V. DATA ASSOCIATION

As an incremental solution to the SLAM problem also requires incremental data association, we provide efficient algorithms to access the quantities of interest of the underlying estimation uncertainty. The data association problem in SLAM consists of matching measurements to their corresponding landmarks. While data association is required on a frame-to-frame basis, it is particularly problematic when closing large loops in the trajectory. We start with a general discussion of the data association problem, based on a maximum likelihood formulation. We discuss how to access the exact values of interest as well as conservative estimates from the square root factor of the smoothing information matrix. We then compare the performance of these algorithms to fast inversion of the information matrix and to nearest neighbor matching.

A. Maximum Likelihood Data Association

The often used nearest neighbor (NN) approach to data association is not sufficient in many cases, as it does not take into account the estimation uncertainties. NN assigns each measurement to the closest predicted landmark measurement. The NN approach corresponds to a minimum cost assignment problem, based on a cost matrix that contains all the prediction errors. Details of this minimum cost assignment problem and of how to deal with spurious measurements are given in [14]. Instead of NN, we use the maximum likelihood (ML) solution to data association [15], which is more sophisticated in that it takes into account the relative uncertainties between the current robot location and the landmarks in the map. The ML formulation can again be reduced to a minimum cost assignment problem, using a Mahalanobis distance rather than the Euclidean distance. This Mahalanobis distance is based on the projection $\Xi$ of the combined pose and landmark uncertainties $\Sigma$ into the sensor measurement space

$$\Xi = J \Sigma J^T + \Gamma \quad (12)$$

where $\Gamma$ is the measurement noise and $J$ is the Jacobian of the linearized measurement function $h$ from (3). We use the Jonker-Volgenant-Castanon (JVC) assignment algorithm [16] to solve this assignment problem.
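A sketch of this gating and assignment step: build the squared Mahalanobis cost matrix from (12) and solve the assignment. SciPy's Hungarian solver (linear_sum_assignment) stands in for the JVC algorithm, which SciPy does not provide, and the innovation covariances Xi are assumed precomputed per (12).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ml_associate(z, z_pred, Xi):
    """Match measurements z (K x D) to predicted landmark measurements
    z_pred (N x D) by minimum squared Mahalanobis distance; Xi[n] is the
    D x D innovation covariance of prediction n from eq. (12)."""
    K, N = z.shape[0], z_pred.shape[0]
    cost = np.empty((K, N))
    for n in range(N):
        Xi_inv = np.linalg.inv(Xi[n])
        diff = z - z_pred[n]                         # K x D innovations
        cost[:, n] = np.einsum('kd,de,ke->k', diff, Xi_inv, diff)
    rows, cols = linear_sum_assignment(cost)         # Hungarian, JVC stand-in
    return list(zip(rows, cols))                     # (measurement, landmark)
```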
Fig. 6. Only a small number of entries of the dense covariance matrix are of interest for data association. In this example, the marginals between the latest pose $x_2$ and the landmarks $l_1$ and $l_3$ are retrieved. The entries that need to be calculated in general are marked in gray: Only the triangular blocks along the diagonal and the right-most block column are needed, due to symmetry. Based on our factored information matrix representation, the last column can be obtained by simple back-substitution. As we show here, the blocks on the diagonal can either be calculated exactly by only calculating the entries corresponding to non-zeros in the sparse factor R, or approximated by conservative estimates for online data association.

B. Marginal Covariances

Knowledge of the relative uncertainties between the current pose $x_i$ and any visible landmark $l_j$ is needed for the ML solution to the data association problem. These marginal covariances

$$\Sigma = \begin{bmatrix} \Sigma_{jj} & \Sigma_{ij}^T \\ \Sigma_{ij} & \Sigma_{ii} \end{bmatrix} \quad (13)$$

contain blocks from the diagonal of the full covariance matrix, as well as the last block row and column, as is shown in Fig. 6. Note that the off-diagonal blocks are essential, because the uncertainties are relative to an arbitrary reference frame, which is often fixed at the origin of the trajectory. Calculating the full covariance matrix to recover these entries is not an option, because the covariance matrix is always completely populated with $n^2$ entries, where $n$ is the number of variables. However, calculating all entries is not necessary if we always add the most recent pose at the end, that is, we first add the newly observed landmarks, then optionally perform variable reordering, and finally add the next pose. In that case, only some triangular blocks on the diagonal and the last block column are needed, as indicated by the gray areas in Fig. 6, and the remaining entries are given by symmetry. Our factored representation allows us to retrieve the exact values of interest without having to calculate the complete dense covariance matrix, as well as to obtain a more efficient conservative estimate. Common to both the exact solution and the conservative estimates is the recovery of the last columns of the covariance matrix from the square root factor, which can be done efficiently by back-substitution. The exact pose uncertainty $\Sigma_{ii}$ and the covariances $\Sigma_{ij}$ can be recovered in linear time, based on the sparse $R$ factor. As we choose the current pose to be the last variable in our factor $R$, the last block-column $X$ of the full covariance matrix $(R^T R)^{-1}$ contains $\Sigma_{ii}$ as well as all $\Sigma_{ij}$, as observed in [9]. But instead of having to keep an incremental estimate of these quantities, we can retrieve the exact values efficiently from the factor $R$ by back-substitution. With $d_x$ the

dimension of the last pose, we define $B \in \mathbb{R}^{n \times d_x}$ as the last $d_x$ unit vectors

$$B = \begin{bmatrix} 0_{(n-d_x) \times d_x} \\ I_{d_x \times d_x} \end{bmatrix} \quad (14)$$

and solve

$$R^T R X = B \quad (15)$$

by a forward and a back-substitution

$$R^T Y = B, \qquad R X = Y. \quad (16)$$

The key to efficiency is that we never have to recover a full dense matrix, but due to $R$ being upper triangular immediately obtain

$$Y = [0, \ldots, 0, R_{ii}^{-T}]^T. \quad (17)$$

Recovering these columns is efficient, because only a constant number ($d_x$) of back-substitutions are needed.

C. Conservative Estimates

Conservative estimates for the structure uncertainties $\Sigma_{jj}$ are obtained from the initial uncertainties, as proposed by Eustice [9]. As the uncertainty can never grow when new measurements are added to the system, the initial uncertainties $\tilde{\Sigma}_{jj}$ provide conservative estimates. These are obtained by

$$\tilde{\Sigma}_{jj} = J \begin{bmatrix} \Sigma_{ii} & \\ & \Gamma \end{bmatrix} J^T \quad (18)$$

where $J$ is the Jacobian of the linearized back-projection function (an inverse of the measurement function is not always available, for example in the bearing-only case), and $\Sigma_{ii}$ and $\Gamma$ are the current pose uncertainty and the measurement noise, respectively. Fig. 7 provides a comparison of the conservative and exact covariances. A tighter conservative estimate on a landmark can be obtained after multiple measurements are available, or later in the process by means of the exact algorithm that is presented next.

Fig. 7. Comparison of marginal covariance estimates projected into the current robot frame (robot indicated by red rectangle), for a short trajectory (red curve) and some landmarks (green crosses). Conservative covariances (green, large ellipses) are shown as well as the exact covariances (blue, smaller ellipses) obtained by our fast algorithm. Note that the exact covariances based on full inversion are also shown (orange, mostly hidden by blue).

D. Exact Covariances

Recovering the exact structure uncertainties $\Sigma_{jj}$ is not straightforward, as they are spread out along the diagonal, but can still be done efficiently by again exploiting the sparsity structure of $R$. In general, the covariance matrix is obtained as the inverse of the information matrix

$$\Sigma := (A^T A)^{-1} = (R^T R)^{-1} \quad (19)$$

based on the factor $R$ by noting that

$$R^T R \Sigma = I \quad (20)$$

and performing a forward, followed by a back-substitution

$$R^T Y = I, \qquad R \Sigma = Y. \quad (21)$$

Because the information matrix is not band-diagonal in general, this would seem to require calculating all $n^2$ entries of the fully dense covariance matrix, which is infeasible for any non-trivial problem. Here is where the sparsity of the factor $R$ is of advantage again. Both [17] and [18] present an efficient method for recovering exactly all entries $\sigma_{ij}$ of the covariance matrix $\Sigma$ that coincide with non-zero entries in the factor $R = (r_{ij})$:

$$\sigma_{ll} = \frac{1}{r_{ll}} \left( \frac{1}{r_{ll}} - \sum_{j=l+1, r_{lj} \neq 0}^{n} r_{lj} \sigma_{jl} \right) \quad (22)$$

$$\sigma_{il} = \frac{1}{r_{ii}} \left( - \sum_{j=i+1, r_{ij} \neq 0}^{l} r_{ij} \sigma_{jl} - \sum_{j=l+1, r_{ij} \neq 0}^{n} r_{ij} \sigma_{lj} \right) \quad (23)$$

for $l = n, \ldots, 1$ and $i = l-1, \ldots, 1$, where the other half of the matrix is given by symmetry. Note that the summations only apply to non-zero entries of single columns or rows of the sparse matrix $R$. This means that in order to obtain the top-left-most entry of the covariance matrix, we at most have to calculate all other entries that correspond to non-zeros in $R$. The algorithm has $O(n)$ time complexity for band-diagonal matrices and matrices with only a small number of entries far from the diagonal, but can be more expensive for general sparse $R$.
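The pattern of (15)-(16) is just two triangular solves; a minimal sketch, assuming a dense upper triangular R:

```python
import numpy as np
from scipy.linalg import solve_triangular

def last_block_column(R, d_x):
    """Last d_x columns of the covariance (R^T R)^{-1}, eqs. (14)-(16):
    forward substitution R^T Y = B, then back-substitution R X = Y."""
    n = R.shape[0]
    B = np.zeros((n, d_x))
    B[n - d_x:, :] = np.eye(d_x)                        # eq. (14)
    Y = solve_triangular(R, B, trans='T', lower=False)  # R^T Y = B
    return solve_triangular(R, Y, lower=False)          # R X = Y

# Sanity check against explicit inversion on a small random factor.
R = np.triu(np.random.rand(5, 5) + np.eye(5))
assert np.allclose(last_block_column(R, 2), np.linalg.inv(R.T @ R)[:, -2:])
```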
Based on a dynamic programming approach, our algorithm provides access to all entries of interest for data association. Fig. 7 shows the marginal covariances obtained by this algorithm for a small example. Note that they coincide with the exact covariances obtained by full inversion. As the upper triangular parts of the block diagonals of $R$ are fully populated, and due to symmetry of the covariance matrix, we obtain all block diagonals and therefore also the structure uncertainties $\Sigma_{jj}$. Even if any of these populated entries in $R$ happen to be zero, or if additional entries are needed that are outside the sparsity pattern of $R$, they are easily accessible. We use a dynamic programming approach that obtains the entries we need, and automatically calculates any intermediate entries as required. That also allows efficient retrieval of additional quantities that may be required for other data association techniques, such as the joint compatibility test [19].
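A minimal memoized sketch of the recurrences (22) and (23); dense indexing is used for brevity, while the practical savings come from iterating only over the non-zeros of a sparse R.

```python
import numpy as np

def covariance_entries(R):
    """Memoized recovery of entries of (R^T R)^{-1} via (22)-(23); only the
    requested entries and the intermediates they depend on are computed."""
    n = R.shape[0]
    cache = {}

    def sigma(i, l):
        if i > l:
            i, l = l, i                 # symmetry: sigma_il == sigma_li
        if (i, l) not in cache:
            if i == l:                  # diagonal entry, eq. (22)
                s = (1.0 / R[l, l]) * (1.0 / R[l, l] - sum(
                    R[l, j] * sigma(j, l)
                    for j in range(l + 1, n) if R[l, j] != 0))
            else:                       # off-diagonal entry, eq. (23)
                s = (1.0 / R[i, i]) * (
                    -sum(R[i, j] * sigma(j, l)
                         for j in range(i + 1, l + 1) if R[i, j] != 0)
                    - sum(R[i, j] * sigma(l, j)
                          for j in range(l + 1, n) if R[i, j] != 0))
            cache[(i, l)] = s
        return cache[(i, l)]

    return sigma

# Usage: recovered entries match full inversion on a random factor.
R = np.triu(np.random.rand(6, 6) + np.eye(6))
sigma = covariance_entries(R)
full = np.linalg.inv(R.T @ R)
assert np.isclose(sigma(5, 5), full[5, 5]) and np.isclose(sigma(0, 5), full[0, 5])
```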

Fig. 8. Results for the full Victoria Park sequence. Solving the complete problem including data association in each step took 7.7 minutes on a laptop computer. For known correspondences, the time reduces to 5.9 minutes. Since the dataset is from a 26 minute long robot run, iSAM with unknown correspondences is over 3 times faster than real-time in this case, calculating the complete and exact solution in each step. The trajectory and landmarks are shown in yellow (light), manually overlaid on an aerial image for reference. Differential GPS was not used in obtaining our experimental results, but is shown in blue (dark) for comparison; note that in many places it is not available. (a) Trajectory based on odometry only. (b) Trajectory and map after incremental optimization. (c) Final R factor.

E. Evaluation

TABLE I. Execution times for different data association techniques for a simulated loop. The times include updating of the factorization, solving for all variables, and performing the respective data association technique, for every step.

    Method                 Overall    Avg./step    Max./step
    NN                     2.03s      4.1ms        81ms
    ML conservative        2.80s      5.6ms        95ms
    ML exact, efficient    27.5s      55ms         304ms
    ML exact, full         429s       858ms        3300ms

Table I compares the execution times of NN as well as ML data association for different methods of obtaining the marginal covariances. The results are based on a simulated environment with a 500-pose loop and 240 landmarks, with significant measurement noise added. Undetected landmarks and spurious measurements are simulated by replacing 3% of the measurements by random measurements. The nearest neighbor (NN) approach is dominated by the time needed for factorization updates and back-substitution in each step. As those same calculations are also performed for all covariance-based approaches that follow, these times are a close approximation to the overall calculation time without data association. The numbers show that while our exact algorithm is much more efficient than full inversion, our conservative solution is better suited for real-time application. In the second and third rows, the maximum likelihood (ML) approach is evaluated for our fast conservative estimate and our efficient exact solution. The conservative estimate only adds a small overhead to the NN time, which is mostly due to back-substitution to obtain the last columns of the exact covariance matrix. This is computationally the same as the back-substitution used for solving, except that it covers a number of columns equal to the dimension of a single pose instead of just one. Recovering the exact marginal covariances becomes fairly expensive in comparison, as additionally the block-diagonal entries have to be recovered. Our exact efficient algorithm is an order of magnitude faster than direct inversion of the information matrix, even though we have used an efficient algorithm based on a sparse $LDL^T$ matrix factorization [8] for the inversion. However, this does not change the fact that all $n^2$ entries of the covariance matrix have to be calculated for the full inversion. Nevertheless, even our fast algorithm will get expensive for large environments and cannot be calculated in every step. Instead, as the uncertainties can never grow when new measurements are added, exact values can be calculated when time permits in order to update the conservative estimates.
VI. EXPERIMENTAL RESULTS AND DISCUSSION

While we evaluate the individual components of iSAM in their respective sections, we now evaluate our overall algorithm on simulated data as well as real-world datasets. The simulated data allow comparison with ground truth, while the real-world data prove the applicability of iSAM to practical problems. We explore both landmark-based as well as pose constraint based SLAM. We have implemented iSAM in the functional programming language OCaml, using exact, automatic differentiation [20] to obtain the Jacobians. All timing results in this section are obtained on a 2 GHz Pentium M laptop computer.

A. Landmark-based iSAM

Landmark-based iSAM works well in real-world settings, even in the presence of many loops in the robot trajectory. We

have evaluated iSAM on the Sydney Victoria Park dataset, a popular test dataset in the SLAM community, that consists of laser-range data and vehicle odometry, recorded in a park with sparse tree coverage. It contains 7247 frames along a trajectory of 4 kilometers length, recorded over a time frame of 26 minutes. As repeated measurements taken by a stopped vehicle do not add any new information, we have dropped these, leaving 6969 frames. We have extracted 3640 measurements of landmarks from the laser data by a simple tree detector. iSAM with unknown correspondences runs comfortably in real-time. Performing data association based on conservative estimates, the incremental reconstruction, including solving for all variables after each new frame is added, took 464s or 7.7 minutes, which is significantly less than the 26 minutes it took to record the data. That means that even though the Victoria Park trajectory contains a significant number of loops (several places are traversed 8 times), increasing fill-in, iSAM is still over 3 times faster than real-time. The resulting map contains 140 distinct landmarks, as shown in Fig. 8(b). Solving after every step is not necessary, as the measurements are fairly accurate locally, therefore providing good estimates. Calculating all variables only every 10 steps yields a significant improvement to 270s or 4.5 minutes. Naturally, iSAM runs even faster with known data association, for example due to uniquely identifiable landmarks. For this test, we use the correspondences that were automatically obtained before. Under known correspondences, the time reduces to 351s or 5.9 minutes. The difference is mainly caused by the back-substitution over the last three columns to obtain the off-diagonal entries in each step that are needed for data association. The decrease is not significant because a similar back-substitution over a single column still has to be performed to solve for all variables in each step. But, as a consequence, solving by back-substitution only every 10 steps now significantly reduces the time to 159s or 2.7 minutes. More importantly, iSAM still performs in real-time towards the end of the trajectory, where the computations get more expensive. The final 100 steps are the most expensive ones to compute, as the number of variables is largest. Even for the slow case of retrieving the full solution after every step, iSAM takes on average 0.120s and 0.095s per step over these final 100 steps, with and without data association, respectively. These results compare favorably to the 0.22s needed for real-time performance. These computation times include a full linearization, COLAMD variable reordering step and matrix factorization, which took 1.8s in total in both cases. Despite all the loops, the final factor R as shown in Fig. 8(c) is still sparse, with an average of only 9.79 entries per column.

B. Pose Constraint-based iSAM

iSAM can straightforwardly be applied to estimation problems without landmarks, purely based on pose constraints, as we show in this section based on known correspondences. Such pose constraints most commonly arise from scan-matching dense laser range data, but can also be generated from visual input [9]. Pose constraints either connect subsequent poses, similar to odometry measurements, or they connect two arbitrary poses when closing loops.
Pose constraints are incorporated in a similar way to the odometry measurements, by introducing new terms that represent the error between the predicted and the measured difference between a pair of poses. We evaluate pose-only iSAM on simulated as well as real-world datasets, assuming known data association, as the generation of pose constraints by scan-matching has been well studied and good algorithms are available [22], [23]. The incremental solution of iSAM is comparable in quality to the solution obtained by full nonlinear optimization. To allow ground truth comparison, we use the simulated Manhattan world from [21], shown in Fig. 9(a),(b). This dataset contains 3500 poses and 5598 constraints, 3499 of which are odometry measurements. While the result in [21] seems to be better as the left part is more straightened out, our solution has a slightly lower normalized χ² value, compared to theirs. After one extra relinearization and back-substitution, the normalized χ² is the same value that we obtain by full nonlinear optimization until convergence. These results show that iSAM is comparable in accuracy to the exact solution provided by square root SAM. In terms of computational speed, iSAM also fares well for this dataset. Solving the full problem for each newly added pose, while reordering the variables and relinearizing the problem every 100 steps, takes iSAM 140.9s, or an average of 40ms per step. The last 100 steps take an average of 48ms each, which includes 1.08s for variable reordering and matrix factorization. The resulting R factor shown in Fig. 9(c) is sparse. iSAM also performs well on real-world data, both in quality of the solution as well as in speed. We apply iSAM to two publicly available laser range datasets that appear in several publications. The first one is the Intel dataset shown in Fig. 10(b), providing a trajectory with many loops, with continued exploration of the same environment in increasing detail. Preprocessing by scan matching results in 910 poses and 4453 constraints. iSAM obtains the full solution after each step, with variable reordering every 20 frames, in 77.4s, or about 85ms per step. The last 100 steps take an average of 290ms, including 0.92s for each of the five reordering and factorization steps. The final R factor with 2730 variables is shown in Fig. 10(c). The second real-world pose-only example we use to evaluate iSAM is the MIT Killian Court dataset shown in Fig. 11(b), which features much exploration with a few large-scale loops. Again the dataset was preprocessed, yielding 1941 poses and 2190 pose constraints. iSAM takes 23.7s for a complete solution after each step, or about 12.2ms per step, with variable reordering after every 100 steps. The last 100 steps take an average of 31ms, including 0.36s for reordering/refactorization. The final R factor is shown in Fig. 11(c).

C. Sparsity of Square Root Factor

The complexity of iSAM heavily depends on the sparsity of the square root factor, as this affects both retrieving the solution as well as access to the covariances.

Fig. 9. iSAM results for the simulated Manhattan world from [21] with 3500 poses and 5598 constraints. iSAM takes about 40ms per step. The resulting R factor corresponds to a density of 0.34%, or an average of 17.8 entries per column. (a) Original noisy data set. (b) Trajectory after incremental optimization. (c) Final R factor.

Fig. 10. Results from iSAM applied to the Intel dataset. iSAM calculates the full solution for 910 poses and 4453 constraints with an average of 85ms per step, while reordering the variables every 20 steps. The problem has 2730 variables. The R factor corresponds to a density of 2.42%, or 33.1 entries per column. (a) Trajectory based on odometry only. (b) Final trajectory and evidence grid map. (c) Final R factor.

Fig. 11. iSAM results for the MIT Killian Court dataset. iSAM calculates the full solution for the 1941 poses and 2190 pose constraints with an average of 12.2ms per step. The R factor for the 5823 variables corresponds to a density of 0.31%, or 9.0 entries per column. (a) Trajectory based on odometry only. (b) Final trajectory and evidence grid map. (c) Final R factor.

Fig. 12. Average number of non-zero matrix entries per column in the R factor over time for the different datasets in this section (Victoria Park, Manhattan, Intel, Killian). Even though the environments contain many loops, the average converges to a low constant in most cases, confirming our assumption that the number of entries per column is approximately independent of the number of variables n.

Retrieving the solution from the square root factor requires back-substitution, which usually has quadratic time complexity. However, if there are only a constant number of entries per column in the square root factor, then back-substitution only requires O(n) time. The same applies to retrieval of the last columns of the covariance matrix, which is the dominant cost for our conservative estimates. Our results show that the number of entries per column is typically bounded by a low constant. Figure 12 shows how the density of the factor matrix R develops over time for each dataset used in this section. The densities initially increase, showing very large changes: Increases are caused by incremental updating of the matrix factor, while sudden drops are the consequence of the periodic variable reordering. Except for the Intel sequence, all curves clearly converge to a low constant, explaining the good performance of iSAM. For the Intel dataset, the density increases more significantly because: 1) the trajectory starts with a coarse run through the building, followed by more detailed exploration, and 2) there are unnecessarily many pose constraints to previous parts of the trajectory that lead to fill-in. This is also the reason for choosing a shorter interval for the periodic variable reordering (20 steps) than for all other datasets (100 steps). Nevertheless, as the results for that sequence show, iSAM still performs faster than needed for real-time. From a theoretical point of view, some bounds can be specified depending on the nature of the environment. [24] provides an upper bound of O(n log n) for the fill-in of the square root factor for planar mapping with restricted sensor range. That means that the average fill-in per column is bounded by O(log n). One special case is a pure exploration task, in which the robot never returns to previously mapped environments. In that case the information matrix is band-diagonal, and therefore the factor matrix without variable reordering has a constant number of entries per column. Another special case is a robot that remains in the same restricted environment, continuously observing the same landmarks. In that case the optimal solution is given by the variable ordering that puts all landmarks at the end, and iSAM performs a calculation similar to the Schur complement. The result is a system that requires computation time linear in the length of the trajectory, but with a large constant that is quadratic in the constant number of landmarks. However, once the complete environment of the robot is mapped, there is no longer any need for SLAM; rather, localization based on the obtained map is sufficient.

VII. RELATED WORK

There is a large body of literature on the field of robot localization and mapping, and we will only address closely related work as well as some of the most influential algorithms. A general overview of the area of SLAM can be found in [3], [10], [25], [26].
Initial work on probabilistic SLAM was based on the extended Kalman filter (EKF) and is due to Smith et al. [12], building on earlier work [1], [27], [28]. It was soon shown that filtering is inconsistent in nonlinear SLAM settings [29], and much later work [30], [31] focuses on reducing the effect of nonlinearities and providing more efficient, but typically approximate, solutions to deal with larger environments. Smoothing in the SLAM context avoids these problems by keeping the complete robot trajectory as part of the estimation problem. It is also called the full SLAM problem [10] and is closely related to bundle adjustment [18] in photogrammetry, and to structure from motion (SFM) [32] in computer vision. While those are typically solved by batch processing of all measurements, SLAM by nature is an incremental problem. Also, SLAM provides additional constraints in the form of odometry measurements and an ordered sequence of poses that are not present in general SFM problems. The first smoothing approach to the SLAM problem is presented in [33], where the estimation problem is formulated as a network of constraints between robot poses. The first implementation [34] was based on matrix inversion. A number of improved and numerically more stable algorithms have since been developed, based on well-known iterative techniques such as relaxation [10], [35], [36], gradient descent [37], [38], conjugate gradient [39], and more recently multi-level relaxation [40], [41]. The latter is based on a general multi-grid approach that has proven very successful in other fields for solving systems of equations. While most of these approaches represent interesting solutions to the SLAM problem, they all have in common that it is very expensive to recover the uncertainties of the estimation process. Recently, the information form of SLAM has become very popular. Filter-based approaches include the sparse extended information filter (SEIF) [42] and the thin junction tree filter (TJTF) [43]. On the smoothing side, Treemap [44] exploits the information form, but applies multiple approximations to provide a highly efficient algorithm. Square root SAM [4], [5] provides an efficient and exact solution based on a batch factorization of the information matrix, but does not address how to efficiently access the marginal covariances. While iSAM includes the complete trajectory and map, this is not always the case when smoothing is applied. Instead, the complexity of the estimation problem can be reduced by omitting the trajectory altogether, as for example pursued by D-SLAM [45], where measurements are transformed into relative constraints between landmarks. Similarly, parts of

the trajectory can be omitted, as done by Folkesson and Christensen [37], where parts of the underlying graph are collapsed into so-called star nodes. Alternatively, the problem can be stated as estimating the trajectory only, which leads to an exactly sparse information matrix [46], [47], where image measurements are converted to relative pose constraints. While this approach is similar to our pose-only case, they employ iterative methods to solve the estimation problem. While conservative estimates are available in Eustice's work [46], and in fact were the inspiration for our work, efficient access to the exact covariances is not possible based on the iterative solver. Recently, some SLAM algorithms employ direct equation solvers based on Cholesky or QR factorization. Treemap [48] uses Cholesky factors to represent probability distributions in a tree-based algorithm. However, multiple approximations are employed to reduce the complexity, while iSAM solves the full and exact problem, and therefore allows relinearization of all variables at any time. The problem of data association is not addressed in [48]. Square root SAM [4] solves the estimation problem by factorization of the naturally sparse information matrix. However, the matrix has to be factored completely after each step, resulting in unnecessary computational burden. We have recently presented our incremental solution [49] and its extension to unknown data association [50] in conference versions of this extended article. To the best of our knowledge, updating of matrix factorizations has not been applied in the context of SLAM yet. However, it is a well-known technique in many areas, with applications such as computer vision [18], [51] and signal processing [52]. Golub and Van Loan [8] present general methods for updating matrix factorizations based on [53], [54], including the Givens rotations we use in this work. Davis has done much work in the areas of variable ordering and factorization updates, and provides highly optimized software libraries [13], [55] for various such tasks.

VIII. CONCLUSION

We have presented iSAM, a fast incremental solution to the SLAM problem that updates a sparse matrix factorization. By employing smoothing we obtain a naturally sparse information matrix. As our approach is based on a direct equation solver using QR factorization, it has multiple advantages over iterative methods. Most importantly, iSAM allows access to the underlying estimation uncertainties, and we have shown how to access those efficiently, both the exact values as well as conservative estimates. We have further evaluated iSAM for both simulated and real-world data. In addition to the typical landmark-based application, we have also presented results for trajectory-only estimation problems. iSAM compares favorably with other methods in terms of computational speed. Even though some other algorithms are faster, they either only provide approximations, or they do not provide access to the exact estimation uncertainties. iSAM, in contrast, combines exact recovery of the map and trajectory with efficient retrieval of the covariances needed for data association, while providing real-time processing on readily available hardware. Therefore we expect that approximate solutions will become less important in the future, and that methods like iSAM that are based on direct equation solvers will take their place. There is still potential for improvements in several aspects of iSAM.
VIII. CONCLUSION

We have presented iSAM, a fast incremental solution to the SLAM problem that updates a sparse matrix factorization. By employing smoothing we obtain a naturally sparse information matrix. As our approach is based on a direct equation solver using QR factorization, it has multiple advantages over iterative methods. Most importantly, iSAM allows access to the underlying estimation uncertainties, and we have shown how to access those efficiently, both as exact values and as conservative estimates.

We have further evaluated iSAM on both simulated and real-world data. In addition to the typical landmark-based application, we have also presented results for trajectory-only estimation problems. iSAM compares favorably with other methods in terms of computational speed. Even though some other algorithms are faster, they either provide only approximations, or they do not provide access to the exact estimation uncertainties. iSAM, in contrast, combines exact recovery of the map and trajectory with efficient retrieval of the covariances needed for data association, while providing real-time processing on readily available hardware. We therefore expect that approximate solutions will become less important in the future, and that methods like iSAM that are based on direct equation solvers will take their place.

There is still potential for improvements in several aspects of iSAM. An incremental variable ordering that balances fill-in against the cost of incrementally updating the matrix could prove beneficial. This could allow batch steps to be eliminated completely, because relinearization can also be performed incrementally, as typically only a small number of variables are affected. Finally, for both recovery of the solution and access to the covariances, back-substitution could be restricted to only the entries that are actually needed.

Our incremental solution should also be of interest beyond the applications presented here. One potential application is tracking a sensor in unknown settings for augmented reality, as a cheap alternative to instrumenting the environment. We are working on visual SLAM applications of iSAM that will benefit from scalable and exact solutions, especially for unstructured outdoor environments. Furthermore, the real-time properties of iSAM allow for autonomous operation when mapping buildings or entire cities for virtual reality applications.

APPENDIX

For completeness of this paper, we review how to linearize the measurement functions and collect all components of the nonlinear least squares objective function (5) into one general least squares formulation, following [4].

We linearize the measurement functions in (5) by Taylor expansion, assuming that either a good linearization point is available or that we are working on one iteration of a nonlinear optimization method. In either case, the first-order linearization of the process term in (5) is given by:

$$ f_i(x_{i-1}, u_i) - x_i \approx \left\{ f_i(x^0_{i-1}, u_i) + F_i^{i-1} \delta x_{i-1} \right\} - \left\{ x^0_i + \delta x_i \right\} = \left\{ F_i^{i-1} \delta x_{i-1} - \delta x_i \right\} - a_i \quad (24) $$

where $F_i^{i-1}$ is the Jacobian of the process model $f_i(\cdot)$ at the linearization point $x^0_{i-1}$, as defined by

$$ F_i^{i-1} := \left. \frac{\partial f_i(x_{i-1}, u_i)}{\partial x_{i-1}} \right|_{x^0_{i-1}} \quad (25) $$

and $a_i := x^0_i - f_i(x^0_{i-1}, u_i)$ is the odometry prediction error (note that $u_i$ here is given and hence constant). The first-order linearization of the measurement term in (5) is obtained similarly:

$$ h_k(x_{i_k}, l_{j_k}) - z_k \approx \left\{ h_k(x^0_{i_k}, l^0_{j_k}) + H_k^{i_k} \delta x_{i_k} + J_k^{j_k} \delta l_{j_k} \right\} - z_k = \left\{ H_k^{i_k} \delta x_{i_k} + J_k^{j_k} \delta l_{j_k} \right\} - c_k \quad (26) $$

where $H_k^{i_k}$ and $J_k^{j_k}$ are respectively the Jacobians of the measurement function $h_k(\cdot)$ with respect to a change in $x_{i_k}$ and $l_{j_k}$, evaluated at the linearization point $(x^0_{i_k}, l^0_{j_k})$:

$$ H_k^{i_k} := \left. \frac{\partial h_k(x_{i_k}, l_{j_k})}{\partial x_{i_k}} \right|_{(x^0_{i_k}, l^0_{j_k})} \quad (27) $$

$$ J_k^{j_k} := \left. \frac{\partial h_k(x_{i_k}, l_{j_k})}{\partial l_{j_k}} \right|_{(x^0_{i_k}, l^0_{j_k})} \quad (28) $$

and $c_k := z_k - h_k(x^0_{i_k}, l^0_{j_k})$ is the measurement prediction error.

Using the linearized process and measurement models (24) and (26), respectively, (5) becomes

$$ \delta\theta^* = \arg\min_{\delta\theta} \left\{ \sum_{i=1}^{M} \left\| F_i^{i-1} \delta x_{i-1} + G_i^i \delta x_i - a_i \right\|^2_{\Lambda_i} + \sum_{k=1}^{K} \left\| H_k^{i_k} \delta x_{i_k} + J_k^{j_k} \delta l_{j_k} - c_k \right\|^2_{\Gamma_k} \right\} \quad (29) $$

That is, we obtain a linear least squares problem in $\delta\theta$ that needs to be solved efficiently. To avoid treating $\delta x_i$ in a special way, we introduce the matrix $G_i^i = -I_{d_x \times d_x}$.

By a simple change of variables we can drop the covariance matrices $\Lambda_i$ and $\Gamma_k$. With $\Lambda^{1/2}$ the matrix square root of $\Lambda$, we can rewrite the Mahalanobis norm as follows:

$$ \| e \|^2_{\Lambda} := e^T \Lambda^{-1} e = \left( \Lambda^{-T/2} e \right)^T \left( \Lambda^{-T/2} e \right) = \left\| \Lambda^{-T/2} e \right\|^2 \quad (30) $$

that is, we can always eliminate $\Lambda_i$ from (29) by premultiplying $F_i^{i-1}$, $G_i^i$, and $a_i$ in each term with $\Lambda_i^{-T/2}$, and similarly eliminate $\Gamma_k$ from the measurement terms. For scalar measurements this simply means dividing each term by the measurement standard deviation. Below we assume that this has been done and drop the Mahalanobis notation.

Finally, after collecting the Jacobian matrices into one large matrix $A$, and the vectors $a_i$ and $c_k$ into one right-hand side vector $b$, we obtain the following standard least squares problem:

$$ \delta\theta^* = \arg\min_{\delta\theta} \| A\,\delta\theta - b \|^2 \quad (31) $$

where we drop the $\delta$ notation for simplicity outside of this appendix.
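As a concrete, if toy, illustration of (24)-(28), the sketch below linearizes a simple planar odometry model and a range-bearing measurement model around given linearization points, computing the Jacobians numerically along with the prediction errors a_i and c_k. The models and all numerical values are hypothetical and chosen only for illustration; a real implementation would typically use analytic Jacobians.

```python
import numpy as np

def num_jacobian(fun, x, eps=1e-6):
    """Jacobian of fun at x by central finite differences."""
    fx = fun(x)
    J = np.zeros((fx.size, x.size))
    for k in range(x.size):
        step = np.zeros_like(x)
        step[k] = eps
        J[:, k] = (fun(x + step) - fun(x - step)) / (2 * eps)
    return J

def f(x, u):
    """Toy process model: compose pose x = (x, y, theta) with odometry
    u given in the robot frame (angle wrap-around ignored)."""
    c, s = np.cos(x[2]), np.sin(x[2])
    return x + np.array([c * u[0] - s * u[1],
                         s * u[0] + c * u[1],
                         u[2]])

def h(x, l):
    """Toy range-bearing measurement of landmark l = (lx, ly)."""
    dx, dy = l[0] - x[0], l[1] - x[1]
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - x[2]])

x0_prev = np.array([0.0, 0.0, 0.0])       # linearization points
x0_curr = np.array([1.0, 0.1, 0.05])
l0 = np.array([2.0, 1.0])
u = np.array([1.0, 0.0, 0.05])            # given odometry input
z = np.array([1.5, 0.7])                  # actual measurement

F = num_jacobian(lambda x: f(x, u), x0_prev)     # cf. (25)
H = num_jacobian(lambda x: h(x, l0), x0_curr)    # cf. (27)
J = num_jacobian(lambda l: h(x0_curr, l), l0)    # cf. (28)
a = x0_curr - f(x0_prev, u)    # odometry prediction error, cf. (24)
c = z - h(x0_curr, l0)         # measurement prediction error, cf. (26)
```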

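Continuing the sketch above (reusing F, H, J, a, and c from it), the fragment below performs the whitening of (30) with a Cholesky factor as the matrix square root, assembles the single linear system of (31) for the stacked variables (dx_prev, dx_curr, dl), and solves it by QR factorization and back-substitution. The noise covariances and the prior row block that anchors the first pose are hypothetical additions needed to make this miniature system well posed.

```python
import numpy as np
from scipy.linalg import solve_triangular

# Hypothetical noise covariances; with Lambda = L L^T (Cholesky),
# multiplying a block by L^{-1} realizes the whitening of (30).
Lam = np.diag([0.1, 0.1, 0.02]) ** 2      # process noise covariance
Gam = np.diag([0.05, 0.01]) ** 2          # measurement noise covariance
Ll, Lg = np.linalg.cholesky(Lam), np.linalg.cholesky(Gam)

G = -np.eye(3)                            # G_i^i = -I, see (29)
Fw = solve_triangular(Ll, F, lower=True)  # whitened process blocks
Gw = solve_triangular(Ll, G, lower=True)
aw = solve_triangular(Ll, a, lower=True)
Hw = solve_triangular(Lg, H, lower=True)  # whitened measurement blocks
Jw = solve_triangular(Lg, J, lower=True)
cw = solve_triangular(Lg, c, lower=True)

# Stack into A, b of (31); columns: dx_prev (3), dx_curr (3), dl (2).
A = np.zeros((8, 8))
b = np.zeros(8)
A[0:3, 0:3] = np.eye(3)                   # prior anchoring the first pose
A[3:6, 0:3] = Fw
A[3:6, 3:6] = Gw
b[3:6] = aw
A[6:8, 3:6] = Hw
A[6:8, 6:8] = Jw
b[6:8] = cw

Q, R = np.linalg.qr(A)                    # QR factorization of A
delta = solve_triangular(R, Q.T @ b)      # back-substitute R delta = Q^T b
```

In iSAM the factor R is of course not recomputed from scratch like this; new measurement rows are folded into the existing factor with Givens rotations as sketched earlier.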
ACKNOWLEDGMENTS

We are grateful to U. Frese, C. Stachniss, and E. Olson for helpful discussions, and to W. Burgard's group, especially G. Grisetti, for making their code available. We also thank E. Olson for sharing the Manhattan world dataset, D. Haehnel for the Intel dataset, M. Bosse for the Killian Court dataset, and E. Nebot and H. Durrant-Whyte for the Victoria Park dataset. Finally, we thank the reviewers for their valuable comments.

REFERENCES

[1] R. Smith and P. Cheeseman, "On the representation and estimation of spatial uncertainty," Intl. J. of Robotics Research, vol. 5, no. 4.
[2] J. Leonard, I. Cox, and H. Durrant-Whyte, "Dynamic map building for an autonomous mobile robot," Intl. J. of Robotics Research, vol. 11, no. 4.
[3] S. Thrun, "Robotic mapping: a survey," in Exploring Artificial Intelligence in the New Millennium. Morgan Kaufmann, 2003.
[4] F. Dellaert and M. Kaess, "Square Root SAM: Simultaneous localization and mapping via square root information smoothing," Intl. J. of Robotics Research, vol. 25, no. 12, Dec.
[5] F. Dellaert, "Square Root SAM: Simultaneous location and mapping via square root information smoothing," in Robotics: Science and Systems (RSS).
[6] G. Bierman, Factorization Methods for Discrete Sequential Estimation, ser. Mathematics in Science and Engineering. New York: Academic Press, 1977.
[7] P. Maybeck, Stochastic Models, Estimation and Control. New York: Academic Press, 1979, vol. 1.
[8] G. Golub and C. Van Loan, Matrix Computations, 3rd ed. Baltimore: Johns Hopkins University Press.
[9] R. Eustice, H. Singh, J. Leonard, M. Walter, and R. Ballard, "Visually navigating the RMS Titanic with SLAM information filters," in Robotics: Science and Systems (RSS), Jun.
[10] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. Cambridge, MA: The MIT Press.
[11] J. Dennis and R. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall.
[12] R. Smith, M. Self, and P. Cheeseman, "Estimating uncertain spatial relationships in robotics," in Autonomous Robot Vehicles, I. Cox and G. Wilfong, Eds. Springer-Verlag, 1990.
[13] T. Davis, J. Gilbert, S. Larimore, and E. Ng, "A column approximate minimum degree ordering algorithm," ACM Trans. Math. Softw., vol. 30, no. 3.
[14] F. Dellaert, "Monte Carlo EM for data association and its applications in computer vision," Ph.D. dissertation, School of Computer Science, Carnegie Mellon, September 2001; also available as Technical Report CMU-CS.
[15] Y. Bar-Shalom and X. Li, Estimation and Tracking: Principles, Techniques and Software. Boston, London: Artech House.
[16] R. Jonker and A. Volgenant, "A shortest augmenting path algorithm for dense and sparse linear assignment problems," Computing, vol. 38, no. 4.
[17] G. Golub and R. Plemmons, "Large-scale geodetic least-squares adjustment by dissection and orthogonal decomposition," Linear Algebra and Its Applications, vol. 34, pp. 3-28, Dec.
[18] B. Triggs, P. McLauchlan, R. Hartley, and A. Fitzgibbon, "Bundle adjustment: a modern synthesis," in Vision Algorithms: Theory and Practice, ser. LNCS, W. Triggs, A. Zisserman, and R. Szeliski, Eds. Springer Verlag, Sep. 1999.
[19] J. Neira and J. Tardos, "Data association in stochastic mapping using the joint compatibility test," IEEE Trans. Robot. Automat., vol. 17, no. 6, December.
[20] A. Griewank, "On automatic differentiation," in Mathematical Programming: Recent Developments and Applications, M. Iri and K. Tanabe, Eds. Kluwer Academic Publishers, 1989.
[21] E. Olson, J. Leonard, and S. Teller, "Fast iterative optimization of pose graphs with poor initial estimates," in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2006.
[22] G. Grisetti, C. Stachniss, and W. Burgard, "Improved techniques for grid mapping with Rao-Blackwellized particle filters," IEEE Trans. Robotics, vol. 23, no. 1.
[23] D. Hähnel, W. Burgard, D. Fox, and S. Thrun, "A highly efficient FastSLAM algorithm for generating cyclic maps of large-scale environments from raw laser range measurements," in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2003.
[24] P. Krauthausen, F. Dellaert, and A. Kipp, "Exploiting locality by nested dissection for square root smoothing and mapping," in Robotics: Science and Systems (RSS).
[25] H. Durrant-Whyte and T. Bailey, "Simultaneous localisation and mapping (SLAM): Part I, the essential algorithms," Robotics & Automation Magazine, Jun.
[26] T. Bailey and H. Durrant-Whyte, "Simultaneous localisation and mapping (SLAM): Part II, state of the art," Robotics & Automation Magazine, Sep.
[27] R. Smith, M. Self, and P. Cheeseman, "A stochastic map for uncertain spatial relationships," in Int. Symp. on Robotics Research.
[28] H. Durrant-Whyte, "Uncertain geometry in robotics," IEEE Trans. Robot. Automat., vol. 4, no. 1.
[29] S. Julier and J. Uhlmann, "A counter example to the theory of simultaneous localization and map building," in IEEE Intl. Conf. on Robotics and Automation (ICRA), vol. 4, 2001.

[30] J. Guivant and E. Nebot, "Optimization of the simultaneous localization and map building algorithm for real time implementation," IEEE Trans. Robot. Automat., vol. 17, no. 3, June.
[31] J. Knight, A. Davison, and I. Reid, "Towards constant time SLAM using postponement," in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2001.
[32] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision. Cambridge University Press.
[33] F. Lu and E. Milios, "Globally consistent range scan alignment for environment mapping," Autonomous Robots, Apr.
[34] J.-S. Gutmann and B. Nebel, "Navigation mobiler Roboter mit Laserscans" [Navigation of mobile robots with laser scans], in Autonome Mobile Systeme. Berlin: Springer Verlag.
[35] T. Duckett, S. Marsland, and J. Shapiro, "Fast, on-line learning of globally consistent maps," Autonomous Robots, vol. 12, no. 3.
[36] M. Bosse, P. Newman, J. Leonard, and S. Teller, "Simultaneous localization and map building in large-scale cyclic environments using the Atlas framework," Intl. J. of Robotics Research, vol. 23, no. 12, Dec.
[37] J. Folkesson and H. Christensen, "Graphical SLAM: a self-correcting map," in IEEE Intl. Conf. on Robotics and Automation (ICRA), vol. 1, 2004.
[38] J. Folkesson and H. Christensen, "Closing the loop with graphical SLAM," IEEE Trans. Robotics, vol. 23, no. 4, Aug.
[39] K. Konolige, "Large-scale map-making," in Proc. 21st AAAI National Conference on AI, San Jose, CA.
[40] U. Frese, "An O(log n) algorithm for simultaneous localization and mapping of mobile robots in indoor environments," Ph.D. dissertation, University of Erlangen-Nürnberg.
[41] U. Frese, P. Larsson, and T. Duckett, "A multilevel relaxation algorithm for simultaneous localisation and mapping," IEEE Trans. Robotics, vol. 21, no. 2, April.
[42] S. Thrun and Y. Liu, "Multi-robot SLAM with sparse extended information filters," in Proceedings of the 11th International Symposium of Robotics Research (ISRR'03). Siena, Italy: Springer.
[43] M. Paskin, "Thin junction tree filters for simultaneous localization and mapping," in Intl. Joint Conf. on Artificial Intelligence (IJCAI).
[44] U. Frese, "Treemap: An O(log n) algorithm for indoor simultaneous localization and mapping," Autonomous Robots, vol. 21, no. 2.
[45] Z. Wang, S. Huang, and G. Dissanayake, "D-SLAM: Decoupled localization and mapping for autonomous robots," in International Symposium of Robotics Research (ISRR'05), Oct.
[46] R. Eustice, H. Singh, and J. Leonard, "Exactly sparse delayed-state filters," in IEEE Intl. Conf. on Robotics and Automation (ICRA), April 2005.
[47] R. Eustice, H. Singh, J. Leonard, and M. Walter, "Visually mapping the RMS Titanic: Conservative covariance estimates for SLAM information filters," Intl. J. of Robotics Research, vol. 25, no. 12, Dec.
[48] U. Frese and L. Schröder, "Closing a million-landmarks loop," in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Oct. 2006.
[49] M. Kaess, A. Ranganathan, and F. Dellaert, "Fast incremental square root information smoothing," in Intl. Joint Conf. on AI (IJCAI), Hyderabad, India, 2007.
[50] M. Kaess, A. Ranganathan, and F. Dellaert, "iSAM: Fast incremental smoothing and mapping with efficient data association," in IEEE Intl. Conf. on Robotics and Automation (ICRA), Rome, Italy, April 2007.
[51] Z. Khan, T. Balch, and F. Dellaert, "MCMC data association and sparse factorization updating for real time multitarget tracking with merged and multiple measurements," IEEE Trans. Pattern Anal. Machine Intell., vol. 28, no. 12, December.
[52] F. Ling, "Givens rotation based least squares lattice and related algorithms," IEEE Trans. Signal Processing, vol. 39, no. 7, Jul.
[53] P. Gill, G. Golub, W. Murray, and M. Saunders, "Methods for modifying matrix factorizations," Mathematics of Computation, vol. 28, no. 126.
[54] W. Gentleman, "Least squares computations by Givens transformations without square roots," IMA J. of Appl. Math., vol. 12.
[55] T. Davis and W. Hager, "Modifying a sparse Cholesky factorization," SIAM Journal on Matrix Analysis and Applications, vol. 20, no. 3.

Michael Kaess (S'02) is finishing the Ph.D. degree in computer science in the College of Computing, Georgia Institute of Technology, Atlanta, where he received the M.S. degree in computer science. He did his undergraduate studies at the University of Karlsruhe, Germany. His research interests include probabilistic methods in mobile robotics and computer vision, in particular efficient localization and large-scale mapping, as well as data association.

Ananth Ranganathan (S'02) received the Ph.D. degree in computer science from the College of Computing, Georgia Institute of Technology, Atlanta. He received the B.Tech. degree in computer science from the Indian Institute of Technology, Roorkee, India. His Ph.D. thesis research was on applying Bayesian modeling and inference techniques to the problem of topological mapping in robotics. Currently, he is a Senior Research Scientist at the Honda Research Institute, Cambridge, MA. His research focuses on probabilistic methods in robotics and computer vision, specifically for mapping and planning in robotics, and scene and object recognition in computer vision.

Frank Dellaert (M'96) received the Ph.D. degree in computer science from Carnegie Mellon University in 2001, an M.S. degree in computer science and engineering from Case Western Reserve University in 1995, and the equivalent of an M.S. degree in electrical engineering from the Catholic University of Leuven, Belgium. He is currently an Associate Professor in the College of Computing at the Georgia Institute of Technology. His research focuses on probabilistic methods in robotics and vision. Prof. Dellaert has published more than 60 articles in journals and refereed conference proceedings, as well as several book chapters. He also serves as Associate Editor for IEEE TPAMI.
