Towards Gauge Invariant Bundle Adjustment: A Solution Based on Gauge Dependent Damping


EXTENDED VERSION. SHORT VERSION APPEARED IN THE 9TH ICCV, NICE, FRANCE, OCTOBER 2003.

Towards Gauge Invariant Bundle Adjustment: A Solution Based on Gauge Dependent Damping

Adrien Bartoli
INRIA Rhône-Alpes, 655, avenue de l'Europe, Saint Ismier cedex, France. First.Last@inria.fr

Abstract

Bundle adjustment is used to obtain accurate visual reconstructions by minimizing the reprojection error. The coordinate frame ambiguity, or more generally the gauge freedoms, has been dealt with in different manners. It has often been reported that standard bundle adjustment algorithms are not gauge invariant: two iterations within different gauges can lead to geometrically very different results. Surprisingly, most algorithms do not exploit gauge freedoms to improve performance. We consider this issue. We analyze theoretically the impact of the gauge on standard algorithms. We show that a sufficiently general damping matrix in a Levenberg-Marquardt iteration can be used to implicitly reproduce a gauge transformation. We show that if the damping matrix is chosen such that the decrease in the reprojection error is maximized, then the iteration is gauge invariant. Experimental results on simulated and real data show that our gauge invariant bundle adjustment algorithm outperforms existing ones in terms of stability.

1. Introduction

The recovery of accurate 3D structure and camera motion from images is a major research challenge in photogrammetry [1] and computer vision [7]. Bundle adjustment is a technique to refine a visual reconstruction to produce jointly optimal 3D structure and camera motion. Under certain assumptions on the noise corrupting the observed features, bundle adjustment consists in minimizing the reprojection error, which is in general a non-linear procedure. Second-order non-linear least squares algorithms are usually employed, namely the Gauss-Newton and Levenberg-Marquardt methods. These methods iteratively improve sub-optimal parameter estimates by solving the normal equations. Efficient solutions are possible thanks to the sparse block structure of the normal equations. The Levenberg-Marquardt method has proved the most successful due to the use of a trust region strategy, implemented via a damping of the normal equations, also called augmentation.

An inherent problem in bundle adjustment is the choice of the coordinate frame in which the reconstruction is expressed. This coordinate frame is called the gauge, or the datum in the photogrammetry community. A gauge is a subset of parameter vectors such that any two of them do not share the same underlying geometry. Parameter vectors corresponding to the same geometry are related by gauge transformations. It has been shown in [8] that these transformations have a group structure. The reprojection error is gauge invariant since it reflects the inherent merit of a reconstruction. These considerations hold whichever camera model is used and whichever calibration level is available. Examples of geometric gauge freedoms are the global rotation, translation and scale in metric reconstruction, and the position of points along the line in the two-point representation of lines. An example of an algebraic gauge freedom is the scale factor of homogeneous coordinates.

When ignored, gauge freedoms may induce numerical estimation problems, since they imply the rank deficiency of the normal equations. It has been reported that most bundle adjustment algorithms initialized with the same reconstruction expressed within different gauges can yield extremely different results in terms of computational cost [2, 3, 8, 11, 12, 14, 17]. In other words, these algorithms are not gauge invariant.
Carefully choosing the gauge can greatly improve the reliability. In practice, we observe that most algorithms converge to the same solution, but may require different numbers of iterations.

Gauge freedoms imply that the normal equations to be solved at each iteration do not have a unique solution, i.e. the design matrix is singular, but rather a family of solutions, whose dimension is the number of gauge freedoms. Damping the normal equations consists in adding a symmetric positive definite matrix to the design matrix. This matrix is most of the time chosen as a diagonal one, which corresponds to an elliptical trust region, see e.g. [16] and §3 for more details. Surprisingly, gauge freedoms have rarely been exploited to improve on the reliability of algorithms. Most algorithms are not gauge invariant and are based on somewhat arbitrary gauges which are not chosen to meet some efficiency criteria. More details are given in §2.

Note that we employ the term "gauge invariant" in a manner different from [8, 11, 12]. These authors propose algorithms based on using a pre-defined global gauge. Hence, no matter the gauge within which the initial solution is expressed, their algorithms will give the same result. However, the behaviour of these algorithms does depend upon the pre-defined global gauge. In this sense, these algorithms are not gauge invariant.

The main contribution of this paper is an algorithm that maximizes the decrease in the reprojection error at each iteration, in a gauge invariant manner. In more detail: in §4, we formally state the gauge invariance property of a bundle adjustment algorithm. A detailed investigation of how the gauge influences Gauss-Newton and Levenberg-Marquardt iterations allows us to conclude that in general neither of them is gauge invariant. This analysis shows that the gauge and the damping matrix in Levenberg-Marquardt iterations are closely linked: gauge transformations change the standard elliptical trust region to a more complex shape. In §5, we concentrate on Levenberg-Marquardt iterations and propose implicit gauges, as opposed to the explicit transfer of a reconstruction to the desired gauge. This shows that, in some sense, the damping matrix can encapsulate gauge transformations. We state the following important result: if the form of the damping matrix is sufficiently general, then the iteration can be made gauge invariant by carefully choosing this matrix, based on a gauge dependent criterion. In §6, we propose a gauge dependent damping matrix which defines a trust region with the following important properties: (i) it makes the underlying iteration gauge invariant, (ii) it maximizes the decrease in the reprojection error up to first order and (iii) it preserves the sparse block structure of the normal equations. Finally, §7 reports some experimental results and §8 concludes.

For simplicity of notation, we deal with projective reconstruction of points and cameras. Extension to other camera models (e.g. affine cameras), other calibration levels (e.g. metric reconstruction) and other types of features (e.g. lines) is straightforward, following [8, 17]. Our final gauge invariant bundle adjustment algorithm is summarized in a practical manner in Table 1.

Notation. We make no formal distinction between coordinate vectors and physical entities. Equality up to a non-null scale factor is denoted by ~ and is sometimes written by introducing the scale factor explicitly: (x ~ x') ⇔ (∃α : x = αx'). Transposition and transposed inverse are denoted by ^T and ^{-T}. Vectors are typeset using bold fonts (q, Q), matrices using sans-serif fonts (P, T) and scalars in italics. Indices are used to indicate the size of a matrix or vector (P_(3×4), q_(3×1)) or to index a set of entities. The row-wise matrix vectorization is written vect.

Let P_i, i = 1...n, denote the n reconstructed camera matrices and Q_j, j = 1...m, the m reconstructed 3D points. Structure and motion parameters are contained in a (p×1) vector X, partitioned into 12n motion parameters M and 4m structure parameters S (homogeneous coordinates of points) as:

  X^T = ( vect^T(P_1) ... vect^T(P_n) | Q_1^T ... Q_m^T ),

where the first block is M^T and the second is S^T. Changing the 15-degree-of-freedom projective reconstruction basis and the individual scale factors of the camera matrices and point coordinate vectors leaves the underlying geometry invariant. Hence there are g = 15 + n + m degrees of gauge freedom.
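To make the parameter packing concrete, here is a minimal sketch (Python/NumPy, not part of the original paper; the function names are illustrative only) of how X can be assembled from n camera matrices and m homogeneous points, together with the gauge freedom count g = 15 + n + m:

```python
import numpy as np

def pack_parameters(cameras, points):
    """Stack the row-wise vectorized (3x4) camera matrices and the (4x1)
    homogeneous point coordinates into a single vector X = (M^T S^T)^T."""
    M = np.concatenate([P.reshape(-1) for P in cameras])  # 12n motion parameters
    S = np.concatenate([Q.reshape(-1) for Q in points])   # 4m structure parameters
    return np.concatenate([M, S])

def gauge_freedoms(n, m):
    """15 dof of the projective basis, plus one scale per camera and per point."""
    return 15 + n + m

# Example: n = 5 cameras and m = 100 points give p = 12*5 + 4*100 = 460
# parameters and g = 120 gauge freedoms.
cameras = [np.random.randn(3, 4) for _ in range(5)]
points = [np.random.randn(4) for _ in range(100)]
X = pack_parameters(cameras, points)
print(X.shape, gauge_freedoms(len(cameras), len(points)))
```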
2. Previous work

A survey of gauge freedom handling strategies is [17, §9]. Methods to deal with this problem are either globally free, i.e. they leave the gauge free to drift, or globally fixed, i.e. they enforce a global gauge.

Globally free methods are based on selecting a solution of the normal equations, using a pseudo-inverse [17], numerical damping [9, 10, 16, 17] or local gauge constraints. For example, the standard photogrammetric inner constraints specify that the reconstructed points should not be translated, rotated and scaled, but they do not specify where the reconstructed points are, see [3, 17]. They are devised to minimize the norm of the parameter vector update, as well as the variance of the parameter estimate. In [17, §9.5], it is shown experimentally that this behaves better than globally fixed methods (see below), but that leaving the gauge to drift freely can sometimes be better.

Globally fixed methods enforce a pre-chosen gauge. Some methods use reference elements (trivial gauge), either features [1, 4, 13] (object-centered gauge) or cameras [2, 5, 6]. In [1, p.40] the author suggests that control points or reference lengths can be used to fix a metric reconstruction basis, while in [4, 13] a similar method based on 5 reference points is used to fix a projective reconstruction basis. The constraints are enforced using either Lagrange multipliers [1, p.40] or elimination [1, p.41], [4, 13]. In [6], the author considers projective reconstruction and partially fixes the gauge by setting a canonical form for a reference camera, while [2] uses two reference cameras. Another possibility for a globally fixed gauge is to use global gauge constraints [8, 11, 12], which raises the problem of enforcing these constraints during optimization. In [11], artificial extra observations are added to the cost function with a heavy weight. In [8], the gauge is left free to drift during the iteration, but the result is projected onto the gauge afterwards, while in [12], a parameter subspace projected onto the gauge is computed before the iteration. Another possibility is [1, p.41], where Lagrange multipliers are introduced and incorporated as unknowns in the optimization. Besides the globally free photogrammetric inner constraints, all these methods are based on somewhat arbitrary gauges which are not chosen to meet some efficiency criteria. A recent attempt in this direction is [14]: based on gauge theory, the authors analyze, in the context of Euclidean reconstruction, which physical measure is the most likely to maximize the accuracy of the reconstruction while setting the scene scale.

3. Standard Algorithms

Bundle adjustment consists in refining a visual reconstruction to produce jointly optimal 3D structure and camera motion. By optimal, we mean that the maximum likelihood estimate of structure and motion is sought, which is achieved by minimizing an appropriate cost function. If we assume that the observed point features are corrupted by independently and identically distributed Gaussian noise, then the cost function is the reprojection error, given by the sum of squared differences between observed features q_ij and predicted features q̂_ij:

  C(X) = Σ_{i=1}^{n} Σ_{j=1}^{m} w_ij d²(q_ij, q̂_ij) = r^T r,    (1)

where w_ij is one if point j is visible in view i and zero otherwise. The (r×1) vector r is the residual error vector. Predicted features are given by q̂_ij ~ P_i Q_j.

Given a sub-optimal solution X_0, the parameter vector X is iteratively updated by X ← X + δ, where the increment δ is obtained as follows. Let J = ∂r/∂X denote the (r×p) Jacobian matrix of the residual error vector r with respect to the structure and motion parameters X, let g = J^T r be the (p×1) gradient vector of C, and let N = J^T J be the Gauss-Newton approximation of the (p×p) Hessian matrix. The reprojection error is approximated by:

  C(X + δ) ≈ C(X) + g^T δ + (1/2) δ^T N δ,    (2)

where the last two terms form the incremental error. The minimum of this simple local quadratic model is found by zeroing its derivative with respect to δ, which gives the Gauss-Newton iteration through the following normal equations:

  N δ = −g.    (3)

Note that N has a rank deficiency of g (the number of gauge freedoms). The Levenberg-Marquardt iteration is based on damping, or augmenting, the normal equations, as follows:

  (N + W(λ)) δ = −g,    (4)

where we write N̄ = N + W(λ), λ > 0 is related to the trust region radius and W(λ) is some symmetric positive definite (p×p) weight matrix, called the damping matrix, often chosen as W(λ) = λ I_(p×p), which corresponds to a spherical trust region. This is the original strategy proposed by Levenberg and Marquardt [9, 10] and recommended in [17]. Another commonly used solution, due to [16] and recommended in [15, 7, 5], is W(λ) = λD, where D is a diagonal matrix containing the diagonal entries of N, which gives an elliptical trust region. The damping matrix must satisfy a normalization constraint so that the trust region radius is meaningful, e.g. ||I_(p×p)|| = p for the chosen matrix norm. Note that the damping guarantees that N̄ is full-rank. Parameter λ is tuned as follows: if the parameters X + δ decrease the error, i.e. if C(X + δ) < C(X), then the step is accepted and the value of λ is divided by some constant, often 10, else the step is rejected and λ is multiplied by the constant.
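The following Python/NumPy sketch illustrates one damped iteration of the kind described above; residual_fn and jacobian_fn are hypothetical callbacks standing in for the reprojection machinery, and the sparse block structure of the normal equations is ignored for brevity:

```python
import numpy as np

def lm_iteration(X, residual_fn, jacobian_fn, lam, damping="spherical"):
    """One damped (Levenberg-Marquardt) iteration on the normal equations (3)-(4)."""
    r = residual_fn(X)                  # residual error vector
    J = jacobian_fn(X)                  # (r x p) Jacobian of the residuals
    g = J.T @ r                         # gradient of the reprojection error
    N = J.T @ J                         # Gauss-Newton Hessian approximation
    if damping == "spherical":          # W(lambda) = lambda * I
        W = lam * np.eye(len(X))
    else:                               # elliptical: W(lambda) = lambda * diag(N)
        W = lam * np.diag(np.diag(N))
    delta = np.linalg.solve(N + W, -g)  # damped normal equations (4)
    return X + delta

def update_lambda(C_old, C_new, lam, factor=10.0):
    """Accept the step and decrease lambda if the error dropped, else reject and increase."""
    return (lam / factor, True) if C_new < C_old else (lam * factor, False)
```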
4. Influence of the Gauge

Not surprisingly, it has been reported that in general, bundle adjustment is not gauge invariant, i.e. an iteration within different gauges yields geometrically different results [2, 3, 8, 11, 12, 14, 17]. We formulate the gauge invariance property and examine under which conditions standard algorithms satisfy it.

4.1. Gauge Transformation and Invariance

Gauge transformations change the parameter vector without changing the underlying geometry, e.g. [8]. Let X^T = (M^T S^T) and X̌^T = (M̌^T Š^T) be two parameter vectors of the same reconstruction expressed within two different gauges G and Ǧ. Let T̄ be the full-rank (4×4) transformation relating the two underlying projective bases, defined such that P̌_i = γ_i P_i T̄ and Q̌_j = α_j T̄^{-1} Q_j, where the γ_i and α_j are unknown non-zero scale factors. Entities written with a ˇ are expressed within the gauge Ǧ. The structure and motion parameters transform as:

  Š = diag(α_1 T̄^{-1}, ..., α_m T̄^{-1}) S   and   M̌ = diag(γ_1 T̄^T, ..., γ_n T̄^T) M,

the latter containing 3n (4×4) blocks (three per camera), since vect(γ P T̄) = diag(γ T̄^T, γ T̄^T, γ T̄^T) vect(P). We deduce that the parameter vectors are related by X̌ = TX, where the gauge transformation T is defined by:

  T = diag(γ_1 T̄^T, ..., γ_n T̄^T, α_1 T̄^{-1}, ..., α_m T̄^{-1}),    (5)

with 3n camera blocks followed by m point blocks. Note that det(T) ≠ 0 ⇔ det(T̄) ≠ 0 since the γ_i and α_j are non-zero. In general, X and X̌ are related by a unique gauge transformation.
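As an illustration (not part of the original paper), the block-diagonal gauge transformation of equation (5) could be assembled as follows, assuming NumPy/SciPy; Tbar denotes the (4×4) change of projective basis:

```python
import numpy as np
from scipy.linalg import block_diag

def gauge_transformation(Tbar, gammas, alphas):
    """Build the block-diagonal gauge transformation T of equation (5):
    three gamma_i * Tbar^T blocks per camera (3n blocks in total), followed by
    one alpha_j * Tbar^{-1} block per point (m blocks)."""
    Tbar_inv = np.linalg.inv(Tbar)
    blocks = []
    for gamma in gammas:      # vect(gamma_i P_i Tbar) = diag(gamma_i Tbar^T, ...) vect(P_i)
        blocks.extend([gamma * Tbar.T] * 3)
    for alpha in alphas:      # point blocks: alpha_j * Tbar^{-1}
        blocks.append(alpha * Tbar_inv)
    return block_diag(*blocks)

# X_check = gauge_transformation(Tbar, gammas, alphas) @ X describes the same
# geometry as X, since (gamma_i P_i Tbar)(alpha_j Tbar^{-1} Q_j) ~ P_i Q_j.
```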

The geometric equivalence of two reconstructions, denoted by ≐, is an equality up to the gauge, defined by:

  (X ≐ X̌) ⇔ (∃T : X̌ = TX),    (6)

with T of the form (5). We can now formally state the gauge invariance property. A bundle adjustment algorithm is gauge invariant if it preserves geometric equivalence: (X ≐ X̌) ⇒ ((X + δ) ≐ (X̌ + δ̌)). By writing this property explicitly using equation (6), expanding, and since X and X̌ are related by a unique T, we obtain the following definition of gauge invariance:

  for any gauge transformation T: (X̌ = TX) ⇒ (δ̌ = Tδ).    (7)

Note that although derived in a different manner, this definition is similar to the one given in [12]. We examine successively under which conditions the Gauss-Newton and Levenberg-Marquardt iterations are gauge invariant.

4.2. Gauss-Newton

Consider a Gauss-Newton iteration. The increment δ within gauge G is given by solving the normal equations (3). Due to gauge freedoms, matrix N has a rank deficiency of g, and multiple solutions exist, corresponding to different gauges. Let G be a (p×g) full column-rank matrix defined by NG = 0, i.e. the columns of G span the nullspace of N. Denoting by † the Moore-Penrose pseudo-inverse, the possible solutions of the normal equations are parameterized by a (g×1) vector v as δ = −(J^T J)^† J^T r + Gv, where we substituted N = J^T J and g = J^T r. Since J = ∂r/∂X = (∂r/∂X̌)(∂X̌/∂X) = J̌ T, we obtain δ = −(T^T J̌^T J̌ T)^† T^T J̌^T r + T^{-1} Ǧ v, where Ǧ = TG spans the nullspace of Ň. From equation (7), gauge invariance holds if and only if (T^T Ň T)^† = T^{-1} Ň^† T^{-T} and v = v̌, which is verified if and only if T is orthonormal [12], i.e. T^{-1} = T^T. Hence, Gauss-Newton-based bundle adjustment is not gauge invariant. The condition derived above does not leave enough flexibility to be exploited in this direction.

4.3. Levenberg-Marquardt

Consider a Levenberg-Marquardt iteration. The increment δ within gauge G is given by solving the damped normal equations (4), N̄ δ = −g. By substituting N̄ = N + W(λ) = J^T J + W(λ) and g = J^T r in this equation, and since det(N̄) ≠ 0, we obtain δ = −(J^T J + W(λ))^{-1} J^T r. By substituting J = J̌ T and expanding, we get:

  δ = −(T^T J̌^T J̌ T + W(λ))^{-1} T^T J̌^T r = −T^{-1} (Ň + T^{-T} W(λ) T^{-1})^{-1} J̌^T r.

From equation (7), gauge invariance holds if and only if:

  W̌(λ) = T^{-T} W(λ) T^{-1}.    (8)

The usual choices for matrices W(λ) and W̌(λ), e.g. W(λ) = W̌(λ) = λI, do not fulfill this property. Hence, standard implementations of Levenberg-Marquardt-based algorithms are not gauge invariant. Equation (8) can be verified if an appropriate gauge dependent choice is made for the damping matrices. This is the cornerstone of the gauge invariant method proposed in the next section.

5. Explicit and Implicit Gauges

Standard bundle adjustment algorithms are not gauge invariant. Hence, given a parameter estimate, there must exist an optimal local gauge within which the iteration reduces the reprojection error better than within the others. The problem of finding this gauge is dealt with in the next section. In this section, we examine the relationship between the damping matrix and gauge transformations.

A commonly used solution to globally enforce a gauge, e.g. [8, 11, 12], is what we call explicit gauge fixing. It consists in explicitly expressing the reconstruction within the desired gauge before each iteration. It implies the transfer of the points, the cameras, and all other entities being optimized, into the desired gauge.

We propose implicit gauge fixing. Consider equation (8). On the one hand, it tells us that the algorithm is not gauge invariant. On the other hand, it shows that there exists a close link between the gauge and the damping matrix.
Hence, it can be used to perform bundle adjustment within gauge Ǧ, while choosing W̌(λ) such that the iteration behaves exactly as if it were conducted within gauge G. For example, if W(λ) = λI and W̌(λ) = λ T^{-T} T^{-1}, then running the algorithm within gauges G and Ǧ is strictly equivalent. (It can be shown that equation (8) with W(λ) = W̌(λ) = λI holds, as in the Gauss-Newton case, only if T is orthonormal.) We confirm in our experiments that explicit and implicit gauges give exactly the same results. Let A = T̄^{-1} T̄^{-T}; then W̌(λ) = λ T^{-T} T^{-1} is given by:

  W̌(λ) = λ diag(γ_1^{-2} A, ..., γ_n^{-2} A, α_1^{-2} A^{-1}, ..., α_m^{-2} A^{-1}),    (9)

with 3n camera blocks followed by m point blocks. In other words, a spherical trust region within gauge G corresponds to a more complex trust region within gauge Ǧ, encapsulated by the symmetric positive definite matrix W̌(λ) defined by equation (9). To summarize, it is possible to reproduce the behaviour of bundle adjustment within a certain gauge, without explicitly expressing the reconstruction within this gauge, only by choosing the appropriate damping matrix, depending on the gauge transformation.
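The equivalence between explicit and implicit gauges can also be checked numerically. The following small sketch (Python/NumPy, synthetic dense Jacobian and a generic full-rank T, illustrative only) verifies that damping with W in gauge G and with W̌ = T^{-T} W T^{-1} in gauge Ǧ yields increments related by δ̌ = Tδ, as required by equation (7):

```python
import numpy as np

def damped_step(J, r, W):
    """delta = -(J^T J + W)^{-1} J^T r, the damped normal equations (4)."""
    return np.linalg.solve(J.T @ J + W, -(J.T @ r))

rng = np.random.default_rng(0)
p, n_res = 8, 20
J_check = rng.standard_normal((n_res, p))   # Jacobian within gauge G-check
r = rng.standard_normal(n_res)              # residual vector (gauge invariant)
T = rng.standard_normal((p, p))             # a full-rank gauge transformation
J = J_check @ T                             # J = J_check T, since X_check = T X

W = 0.1 * np.eye(p)                         # spherical trust region in gauge G
T_inv = np.linalg.inv(T)
W_check = T_inv.T @ W @ T_inv               # implicit gauge: equation (8)

delta = damped_step(J, r, W)
delta_check = damped_step(J_check, r, W_check)
print(np.allclose(delta_check, T @ delta))  # True: the two iterations are equivalent
```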

Based on this reasoning, we propose the following result.

Proposition 1 (Gauge invariance conditions). Levenberg-Marquardt iterations can be made gauge invariant if the following two conditions are verified: (i) the form of the damping matrix is at least as general as the form given by equation (9) and (ii) the damping matrix is uniquely determined by a gauge dependent criterion defined such that equation (8) is verified.

6. Gauge Dependent Damping

We propose a solution for choosing a gauge dependent damping matrix W(λ) = λW which has the three following important properties. First, it makes the underlying iteration gauge invariant. Second, it maximizes the decrease in the reprojection error up to first order. Third, it preserves the sparse block structure of the normal equations.

In order to achieve this result, we begin by deriving an approximation of the reprojection error, depending upon W(λ). A quadratic approximation of C(X + δ) is given by equation (2). The incremental error e is given by:

  e = g^T δ + (1/2) δ^T N δ.    (10)

The increment vector δ is given by solving the augmented normal equations (4): δ = −(N + W(λ))^{-1} g. Since det(W(λ)) ≠ 0, we can formulate the following Taylor expansion: (N + W(λ))^{-1} = Σ_{n≥0} (−1)^n Z(n), where Z(n) = (W(λ)^{-1} N)^n W(λ)^{-1}. This expression is valid provided ||W(λ)^{-1} N|| < 1, i.e. when ||N|| is small or λ is large. This leads to δ = −Σ_{n≥0} (−1)^n Z(n) g. We substitute this expression in equation (10). The first term is rewritten as g^T δ = Σ_{n≥0} (−1)^{n+1} g^T Z(n) g = Σ_{n≥0} (−1)^{n+1} K(n), where K(n) = g^T Z(n) g. The second term becomes (1/2) δ^T N δ = (1/2) (Σ_n (−1)^n g^T Z(n)) N (Σ_{n'} (−1)^{n'} Z(n') g). After expansion, and using the following property arising from the symmetry of W(λ): Z(n) N Z(n') = Z(n + n' + 1), we obtain (1/2) δ^T N δ = (1/2) Σ_{n≥0} (−1)^n (n + 1) K(n + 1). This gives the following Taylor expansion for e:

  e = Σ_{n≥0} (−1)^{n+1} (1 + n/2) K(n).    (11)

Proposition 2 (Gauge invariant iterations). The damping matrix defined such that equation (11) is minimized satisfies Proposition 1. Hence, the underlying iteration is gauge invariant.

Proof. We drop the parameter λ for this proof. Equation (11) can be rewritten as e = g^T ( −W^{-1} + (3/2) W^{-1} N W^{-1} − ... ) g. Transfer the reconstruction from the gauge G to Ǧ by applying the gauge transformation T. By substituting g = T^T ǧ and N = T^T Ň T, we obtain e = ǧ^T T ( −W^{-1} + (3/2) W^{-1} T^T Ň T W^{-1} − ... ) T^T ǧ, and hence W̌^{-1} = T W^{-1} T^T, which verifies the gauge invariance equation (8) and concludes the proof.

Finding the damping matrix that minimizes e, equation (11), is a complicated problem. We propose to keep only the first-order term, e ≈ −g^T W(λ)^{-1} g, which reduces the problem to finding a symmetric, positive definite damping matrix λW such that W = argmax_{W, ||W|| = p} g^T (λW)^{-1} g. The normalization condition ||W|| = p is the same as for the standard choice W(λ) = λI since ||I_(p×p)|| = p. Obviously, the solution depends upon the form chosen for matrix W. For example, if W = diag(d), where d is a (p×1) vector with strictly positive elements, then the problem transforms into W = diag( argmax_{d, ||d|| = p} Σ_{k=1}^{p} g_k²/d_k ), which has the simple solution (up to scale) W ∝ diag(g). According to Proposition 1, this solution cannot yield a gauge invariant algorithm, since a diagonal damping matrix is not general enough for implicit gauges.

In a more general manner, since W(λ) is a symmetric matrix, we can write W(λ)^{-1} = λ^{-1} R^T R, where R is an upper-triangular matrix. With this parameterization, and since g^T (λ^{-1} R^T R) g = λ^{-1} ||Rg||², we obtain:

  R = argmax_{R, ||R^T R|| = p} ||Rg||.    (12)

Hence, the coefficients of matrix R can be found by solving a simple linear least squares optimization problem.

From Proposition 1, and imposing the fact that the damping should not spoil the sparse block structure of the normal equations, we are left with the following possible form: R = diag(R_M,1, ..., R_M,n, R_S,1, ..., R_S,m), where the R_M,i and the R_S,j are respectively (12×12) and (4×4) upper-triangular matrices. Given that each of the p parameters gives one constraint on the damping matrix, and since it must be uniquely determined (i.e. it must not have more than p parameters), we propose the following 10(n + 1)-parameter choice: R_M,i = diag(M_i, M_i, M_i) and R_S,j = S, where the M_i and S are (4×4) upper-triangular matrices. This choice means that the parameter block of each camera i has its own 10-parameter damping block M_i^T M_i, while a unique 10-parameter damping block S^T S is defined for the point parameters. This form is handy since each block can be found independently by solving a problem of the form (12). More details are given in Table 1.

Below, we give details about solving the low-dimensional maximization problems of Table 1. Let the problem to be solved be max_{x, ||x|| = b} ||Cx||. The singular vector v associated with the largest singular value of matrix C gives the solution for x as x = bv. The singular value decomposition of matrix C can be used to compute v.

1. Compute the damping matrix W(λ) from W(λ)^{-1} = λ^{-1} R^T R, where
   R = diag(M_1, ..., M_n, S, ..., S),
   with three copies of M_i per camera (3n camera blocks) followed by m copies of S. The (4×4) upper-triangular matrices M_i and S are formed by solving (see main text):
   M_i = argmax_{M_i, ||M_i^T M_i|| = 12} Σ_{k=1}^{3} ||M_i g_M,i^k||²,
   S = argmax_{S, ||S^T S|| = 4} Σ_{j=1}^{m} ||S g_S,j||²,
   where g_S,j and g_M,i^k are (4×1) gradient sub-vectors for the j-th point and for the k-th row of the i-th camera matrix respectively.
2. Perform one Levenberg-Marquardt iteration with the damping matrix W(λ). This iteration is gauge invariant due to the above choice.
3. Optional: project the estimate onto a global gauge.

Table 1. The gauge invariant iteration we propose. Periodical enforcement of global gauge constraints (step 3), such as renormalization of homogeneous coordinates, is recommended. Gauge invariance holds in the sense that any global gauge can be enforced in step 3 without affecting the result.
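As a rough illustration of step 1 of Table 1 (not the paper's implementation), each per-block maximization can be cast as a largest-right-singular-vector problem in the entries of the upper-triangular block, following the note on solving max ||Cx|| subject to ||x|| = b above. The normalization constant b and the simple constraint ||vec(R)|| = b used here are simplifications of the constraints stated in the paper.

```python
import numpy as np

def block_damping(grad_vectors, b=2.0, d=4):
    """Upper-triangular (d x d) R maximizing sum_k ||R g_k||^2 subject to
    ||vec(R)|| = b, via the largest right singular vector of a matrix C built
    from the gradient sub-vectors (cf. the note below Table 1)."""
    rows, cols = np.triu_indices(d)
    C = np.zeros((d * len(grad_vectors), len(rows)))
    for k, g in enumerate(grad_vectors):
        for idx, (i, j) in enumerate(zip(rows, cols)):
            C[k * d + i, idx] = g[j]          # (R g)_i = sum_{j >= i} R_ij g_j
    _, _, Vt = np.linalg.svd(C, full_matrices=False)
    x = b * Vt[0]                             # top right singular vector, scaled to norm b
    R = np.zeros((d, d))
    R[rows, cols] = x
    return R

# Per-camera blocks M_i from the three row-gradients of camera i, and a single
# shared block S from the m point gradients (hypothetical gradient sub-vectors):
# M_i = block_damping([g_M_i_1, g_M_i_2, g_M_i_3]); S = block_damping(point_grads)
```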
7. Experimental Results

We compare our gauge invariant algorithm, denoted GAUGE INVARIANT, to some other ones. FREE directly optimizes the camera matrices and 3D points; a pseudo-inverse is used to solve the normal equations. HARTLEY [6] consists in using a reference camera to partially eliminate the gauge and BARTOLI [2] consists in using two reference cameras to eliminate the gauge. FAUGERAS [4] is based on fixing the coordinates of 5 reference points to enforce the gauge; a similar approach is proposed by Mohr et al. in [13]. MCLAUCHLAN [11, 12] consists in enforcing a normalized basis after each iteration and in using first order gauge constraints in the normal equations, and KANATANI [8] is similar to method FREE but periodically projects the estimate onto a pre-chosen gauge to prevent it from drifting too much. When reference cameras or points are required by the parameterization, they are chosen at random. Note that we have implemented all these algorithms in a unified manner: the optimization engine is unique, only the parameterization and the damping strategy change. We measure the number of iterations and the reprojection error at convergence. The initial solution, denoted by INIT, is computed by registering cameras in turn to a two-view reconstruction. Image point coordinates are standardized such that they lie in [−1 ... 1].

7.1. Simulated Data

We simulate m = 100 points lying in a cube with one meter side length, observed by n = 5 cameras with a focal length of 1000 pixels. The points are offset from a base plane lying inside the cube, with a mean offset denoted by d. Cameras are situated 10 meters away from the center of the cube. The baseline between consecutive cameras is 3 meters. All points are visible in all views. We add centered Gaussian noise with a one pixel variance to the true point positions. We vary some parameters of the above-described setup to compare the algorithms in different situations. The results are averaged over 50 trials.

Figure 1 shows the results when the scene flatness, i.e. the mean offset d from the plane, is varied. We observe that the reprojection errors, i.e. the accuracy, are indistinguishable for all methods, except for method FREE, which converges to a different local minimum than the others for weak geometry, i.e. when the mean offset d from the plane is small. Concerning the number of iterations, i.e. the computational cost, we observe that when the mean offset is large, i.e. when the geometry is strong, there are only slight differences between the methods, besides method FREE, which takes clearly more iterations to converge. As the geometry becomes weaker, i.e. when the offset decreases, large discrepancies can be observed between the different methods. Method FAUGERAS, based on reference points, gives bad results, since using reference points to fix a projective basis can be very unstable. Methods HARTLEY and BARTOLI, based on reference cameras, and KANATANI, based on periodical projection onto a pre-chosen gauge, give reliable results when the geometry is strong enough. Method MCLAUCHLAN performs better, while GAUGE INVARIANT is the method least sensitive to the instability of the scene.

[Figure 1. Reprojection error and number of iterations when varying the scene unflatness, from weak to strong geometry.]

Figure 2 shows the reprojection error as a function of the iterations for the scene setting with d = 0.5. As expected, we observe that the faster the error decreases, the lower the resulting number of iterations.

[Figure 2. Reprojection error as a function of the iterations.]

We tested other strong to weak scene configurations, based on varying the number of points and cameras, the baseline between consecutive cameras and the visibility (not shown here due to lack of space). We observed similar results as above: strong geometries reduce the discrepancies between the different methods. Similarly, we tested the influence of the initialization. As expected, discrepancies between the different methods reduce as the initialization gets more accurate.

7.2. Real Data

We compare the algorithms on various image streams. For two of them, the office sequence (Figure 3) and the hotel sequence, we show results in Table 2. For the office sequence, all methods give the same reprojection error. For the hotel sequence, they all converge to the same error as well, besides method FREE, which converges to a larger one. Concerning the number of iterations, the same observations as for simulated data can be made. The hotel sequence gives a weak geometry since an affine camera model is well adapted to these data; hence, a full perspective projection model is not well constrained, which tends to increase the discrepancies between the different methods.

Algorithm          office seq.   hotel seq.
FREE               8
HARTLEY            7             6
BARTOLI            8             9
FAUGERAS           0
MCLAUCHLAN         6             5
KANATANI           7             6
GAUGE INVARIANT    7             3

Table 2. Number of iterations of the algorithms for the office and the hotel sequences.

(The hotel sequence data have been provided by the Modeling by Videotaping group in the Robotics Institute, Carnegie Mellon University.)

Figure 3. One out of the 5 frames of the office sequence, overlaid with the 38 corner features (left) and a snapshot of the reconstructed shape (right).

8. Conclusions

We proposed a bundle adjustment algorithm which maximizes the decrease in the error at each iteration, regardless of the gauge within which the reconstruction is expressed. We derived this algorithm based on a careful study of how the gauge influences the iterations of standard algorithms, which are not gauge invariant. In particular, concerning Levenberg-Marquardt iterations, we showed that the standard diagonal damping matrices, defining elliptical trust regions, transform to a more complex shaped trust region when changing the gauge. Based on this, we proposed a gauge dependent damping matrix, and a practical algorithm to compute it, which allows gauge invariant iterations while maximizing the decrease in the reprojection error. The final algorithm can be incorporated in a straightforward manner into existing minimization engines since it consists in appropriately choosing the damping matrix by solving low-dimensional linear least squares systems. The sparse block structure of the normal equations is preserved. We compared our algorithm to existing ones using simulated and real data. We observed that it is more reliable in the sense that it is less sensitive to weak geometry and weak initialization. Moreover, the error decreases more steeply throughout the iterations. Most algorithms converge to the same solution, but require different numbers of iterations.

References

[1] K. Atkinson, editor. Close Range Photogrammetry and Machine Vision. Whittles Publishing, 1996.
[2] A. Bartoli. On the non-linear optimization of projective motion using minimal parameters. In Proceedings of the 7th European Conference on Computer Vision, Copenhagen, Denmark, May 2002.
[3] A. Dermanis. The photogrammetric inner constraints. ISPRS Journal of Photogrammetry and Remote Sensing, 49:5-39, 1994.
[4] O. Faugeras. What can be seen in three dimensions with an uncalibrated stereo rig? In G. Sandini, editor, Proceedings of the 2nd European Conference on Computer Vision, Santa Margherita Ligure, Italy, Springer-Verlag, May 1992.
[5] R. Hartley. Euclidean reconstruction from uncalibrated views. In Proceedings of the DARPA-ESPRIT Workshop on Applications of Invariants in Computer Vision, Azores, Portugal, October 1993.
[6] R. Hartley. Projective reconstruction and invariants from multiple images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(10):1036-1041, October 1994.
[7] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, June 2000.
[8] K. Kanatani and D. D. Morris. Gauges and gauge transformations for uncertainty description of geometric structure with indeterminacy. IEEE Transactions on Information Theory, 47(5), July 2001.
[9] K. Levenberg. A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics, 2:164-168, 1944.
[10] D. Marquardt. An algorithm for least-squares estimation of nonlinear parameters. Journal of the Society for Industrial and Applied Mathematics, 11(2):431-441, June 1963.
[11] P. F. McLauchlan. Gauge invariance in projective 3D reconstruction. In Proceedings of the Multi-View Workshop, Fort Collins, Colorado, USA, 1999.
[12] P. F. McLauchlan. Gauge independence in optimization algorithms for 3D vision. In Proceedings of the Vision Algorithms Workshop, Dublin, Ireland, 2000.
[13] R. Mohr, L. Quan, and F. Veillon. Relative 3D reconstruction using multiple uncalibrated images. The International Journal of Robotics Research, 14(6):619-632, 1995.
[14] D. D. Morris, K. Kanatani, and T. Kanade. Gauge fixing for accurate 3D estimation. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Kauai, Hawaii, USA, December 2001.
[15] W. Press, S. Teukolsky, W. Vetterling, and B. Flannery. Numerical Recipes in C - The Art of Scientific Computing. Cambridge University Press, 2nd edition, 1992.
[16] G. A. F. Seber and C. J. Wild. Non-linear Regression. John Wiley & Sons, New York, 1989.
[17] B. Triggs, P. McLauchlan, R. Hartley, and A. Fitzgibbon. Bundle adjustment - a modern synthesis. In B. Triggs, A. Zisserman, and R. Szeliski, editors, Proceedings of the International Workshop on Vision Algorithms: Theory and Practice, Corfu, Greece, volume 1883 of Lecture Notes in Computer Science, Springer-Verlag, 2000.


More information

Iterative Linear Solvers and Jacobian-free Newton-Krylov Methods

Iterative Linear Solvers and Jacobian-free Newton-Krylov Methods Eric de Sturler Iterative Linear Solvers and Jacobian-free Newton-Krylov Methods Eric de Sturler Departent of Matheatics, Virginia Tech www.ath.vt.edu/people/sturler/index.htl sturler@vt.edu Efficient

More information

CHAPTER 8 CONSTRAINED OPTIMIZATION 2: SEQUENTIAL QUADRATIC PROGRAMMING, INTERIOR POINT AND GENERALIZED REDUCED GRADIENT METHODS

CHAPTER 8 CONSTRAINED OPTIMIZATION 2: SEQUENTIAL QUADRATIC PROGRAMMING, INTERIOR POINT AND GENERALIZED REDUCED GRADIENT METHODS CHAPER 8 CONSRAINED OPIMIZAION : SEQUENIAL QUADRAIC PROGRAMMING, INERIOR POIN AND GENERALIZED REDUCED GRADIEN MEHODS 8. Introduction In the previous chapter we eained the necessary and sufficient conditions

More information

RESTARTED FULL ORTHOGONALIZATION METHOD FOR SHIFTED LINEAR SYSTEMS

RESTARTED FULL ORTHOGONALIZATION METHOD FOR SHIFTED LINEAR SYSTEMS BIT Nuerical Matheatics 43: 459 466, 2003. 2003 Kluwer Acadeic Publishers. Printed in The Netherlands 459 RESTARTED FULL ORTHOGONALIZATION METHOD FOR SHIFTED LINEAR SYSTEMS V. SIMONCINI Dipartiento di

More information

Optimal nonlinear Bayesian experimental design: an application to amplitude versus offset experiments

Optimal nonlinear Bayesian experimental design: an application to amplitude versus offset experiments Geophys. J. Int. (23) 155, 411 421 Optial nonlinear Bayesian experiental design: an application to aplitude versus offset experients Jojanneke van den Berg, 1, Andrew Curtis 2,3 and Jeannot Trapert 1 1

More information

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials Fast Montgoery-like Square Root Coputation over GF( ) for All Trinoials Yin Li a, Yu Zhang a, a Departent of Coputer Science and Technology, Xinyang Noral University, Henan, P.R.China Abstract This letter

More information

The Transactional Nature of Quantum Information

The Transactional Nature of Quantum Information The Transactional Nature of Quantu Inforation Subhash Kak Departent of Coputer Science Oklahoa State University Stillwater, OK 7478 ABSTRACT Inforation, in its counications sense, is a transactional property.

More information

An improved self-adaptive harmony search algorithm for joint replenishment problems

An improved self-adaptive harmony search algorithm for joint replenishment problems An iproved self-adaptive harony search algorith for joint replenishent probles Lin Wang School of Manageent, Huazhong University of Science & Technology zhoulearner@gail.co Xiaojian Zhou School of Manageent,

More information

RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS MEMBRANE

RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS MEMBRANE Proceedings of ICIPE rd International Conference on Inverse Probles in Engineering: Theory and Practice June -8, 999, Port Ludlow, Washington, USA : RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS

More information

3.3 Variational Characterization of Singular Values

3.3 Variational Characterization of Singular Values 3.3. Variational Characterization of Singular Values 61 3.3 Variational Characterization of Singular Values Since the singular values are square roots of the eigenvalues of the Heritian atrices A A and

More information

On Constant Power Water-filling

On Constant Power Water-filling On Constant Power Water-filling Wei Yu and John M. Cioffi Electrical Engineering Departent Stanford University, Stanford, CA94305, U.S.A. eails: {weiyu,cioffi}@stanford.edu Abstract This paper derives

More information

A method to determine relative stroke detection efficiencies from multiplicity distributions

A method to determine relative stroke detection efficiencies from multiplicity distributions A ethod to deterine relative stroke detection eiciencies ro ultiplicity distributions Schulz W. and Cuins K. 2. Austrian Lightning Detection and Inoration Syste (ALDIS), Kahlenberger Str.2A, 90 Vienna,

More information

Explicit solution of the polynomial least-squares approximation problem on Chebyshev extrema nodes

Explicit solution of the polynomial least-squares approximation problem on Chebyshev extrema nodes Explicit solution of the polynoial least-squares approxiation proble on Chebyshev extrea nodes Alfredo Eisinberg, Giuseppe Fedele Dipartiento di Elettronica Inforatica e Sisteistica, Università degli Studi

More information

PHY307F/407F - Computational Physics Background Material for Expt. 3 - Heat Equation David Harrison

PHY307F/407F - Computational Physics Background Material for Expt. 3 - Heat Equation David Harrison INTRODUCTION PHY37F/47F - Coputational Physics Background Material for Expt 3 - Heat Equation David Harrison In the Pendulu Experient, we studied the Runge-Kutta algorith for solving ordinary differential

More information

Finding Rightmost Eigenvalues of Large Sparse. Non-symmetric Parameterized Eigenvalue Problems. Abstract. Introduction

Finding Rightmost Eigenvalues of Large Sparse. Non-symmetric Parameterized Eigenvalue Problems. Abstract. Introduction Finding Rightost Eigenvalues of Large Sparse Non-syetric Paraeterized Eigenvalue Probles Applied Matheatics and Scientific Coputation Progra Departent of Matheatics University of Maryland, College Par,

More information

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay A Low-Coplexity Congestion Control and Scheduling Algorith for Multihop Wireless Networks with Order-Optial Per-Flow Delay Po-Kai Huang, Xiaojun Lin, and Chih-Chun Wang School of Electrical and Coputer

More information

Bayes Decision Rule and Naïve Bayes Classifier

Bayes Decision Rule and Naïve Bayes Classifier Bayes Decision Rule and Naïve Bayes Classifier Le Song Machine Learning I CSE 6740, Fall 2013 Gaussian Mixture odel A density odel p(x) ay be ulti-odal: odel it as a ixture of uni-odal distributions (e.g.

More information

Multi-Dimensional Hegselmann-Krause Dynamics

Multi-Dimensional Hegselmann-Krause Dynamics Multi-Diensional Hegselann-Krause Dynaics A. Nedić Industrial and Enterprise Systes Engineering Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu B. Touri Coordinated Science Laboratory

More information

1 Bounding the Margin

1 Bounding the Margin COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #12 Scribe: Jian Min Si March 14, 2013 1 Bounding the Margin We are continuing the proof of a bound on the generalization error of AdaBoost

More information

1 Proof of learning bounds

1 Proof of learning bounds COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #4 Scribe: Akshay Mittal February 13, 2013 1 Proof of learning bounds For intuition of the following theore, suppose there exists a

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

Page 1 Lab 1 Elementary Matrix and Linear Algebra Spring 2011

Page 1 Lab 1 Elementary Matrix and Linear Algebra Spring 2011 Page Lab Eleentary Matri and Linear Algebra Spring 0 Nae Due /03/0 Score /5 Probles through 4 are each worth 4 points.. Go to the Linear Algebra oolkit site ransforing a atri to reduced row echelon for

More information

When Short Runs Beat Long Runs

When Short Runs Beat Long Runs When Short Runs Beat Long Runs Sean Luke George Mason University http://www.cs.gu.edu/ sean/ Abstract What will yield the best results: doing one run n generations long or doing runs n/ generations long

More information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information Cite as: Straub D. (2014). Value of inforation analysis with structural reliability ethods. Structural Safety, 49: 75-86. Value of Inforation Analysis with Structural Reliability Methods Daniel Straub

More information

Upper bound on false alarm rate for landmine detection and classification using syntactic pattern recognition

Upper bound on false alarm rate for landmine detection and classification using syntactic pattern recognition Upper bound on false alar rate for landine detection and classification using syntactic pattern recognition Ahed O. Nasif, Brian L. Mark, Kenneth J. Hintz, and Nathalia Peixoto Dept. of Electrical and

More information

NBN Algorithm Introduction Computational Fundamentals. Bogdan M. Wilamoswki Auburn University. Hao Yu Auburn University

NBN Algorithm Introduction Computational Fundamentals. Bogdan M. Wilamoswki Auburn University. Hao Yu Auburn University NBN Algorith Bogdan M. Wilaoswki Auburn University Hao Yu Auburn University Nicholas Cotton Auburn University. Introduction. -. Coputational Fundaentals - Definition of Basic Concepts in Neural Network

More information

The Distribution of the Covariance Matrix for a Subset of Elliptical Distributions with Extension to Two Kurtosis Parameters

The Distribution of the Covariance Matrix for a Subset of Elliptical Distributions with Extension to Two Kurtosis Parameters journal of ultivariate analysis 58, 96106 (1996) article no. 0041 The Distribution of the Covariance Matrix for a Subset of Elliptical Distributions with Extension to Two Kurtosis Paraeters H. S. Steyn

More information

A Smoothed Boosting Algorithm Using Probabilistic Output Codes

A Smoothed Boosting Algorithm Using Probabilistic Output Codes A Soothed Boosting Algorith Using Probabilistic Output Codes Rong Jin rongjin@cse.su.edu Dept. of Coputer Science and Engineering, Michigan State University, MI 48824, USA Jian Zhang jian.zhang@cs.cu.edu

More information