Pose estimation from point and line correspondences


Giorgio Panin

October 17

1 Problem formulation

Estimate (in an LSE sense) the pose of an object from $N$ correspondences between known object points (3D or 2D) $X_i$ and their noisy projections on the 2D image plane, $x_i$. The pose is represented by a $(3 \times 3)$ or $(4 \times 4)$ homogeneous transformation matrix $T$, belonging to a given sub-group $G$ of object-space transformations. $G$ is a subset of all possible transforms, closed under matrix product:

$T_1, T_2 \in G \Rightarrow (T_1 T_2) \in G$

The intrinsic camera matrix $K$ ($(3 \times 3)$ or $(3 \times 4)$, respectively) is supposed to be known in advance. Moreover, a reference (constant) transform matrix $\bar{T}$ may be present, which pre-multiplies $T$ (e.g. a fixed displacement in articulated structures, with Denavit-Hartenberg parameters); this matrix may also not belong to the same transformation group $G$.

Overall, this is a projective geometry problem that can be formulated in basically two ways: using homogeneous or non-homogeneous coordinates. The two formulations give rise to two different LSE error measures to be optimized: the algebraic and the geometric error, respectively. The latter is the real target of our estimation (the Maximum-Likelihood solution), but is usually non-linear, while the former gives redundant and sub-optimal equations, but is usually linear. Therefore, we can use the algebraic solution as a starting point from which to minimize the geometric error.

1.1 Geometric error

We have the following problem: given $N$ exact model points $X_i$ and corresponding noisy image points $x_i$ in homogeneous coordinates

$X = (X, Y, 1)^T$ or $X = (X, Y, Z, 1)^T, \quad x = (x, y, 1)^T$

find the optimal transformation $T$ belonging to the group $G$, such that

$T^* = \arg\min_{T \in G} \sum_{i=1}^{N} \left\| \pi(K \bar{T} T X_i) - \pi(x_i) \right\|^2$

where $\pi$ is the nonlinear projection operator, from homogeneous to non-homogeneous coordinates,

$\pi(x, y, w) = \left[ x/w \;\; y/w \right]^T$

and $\bar{T}$ is a constant matrix, not necessarily belonging to the same group $G$.

Solution: nonlinear LSE optimization (Gauss-Newton), starting from an initial guess $T_0$ close enough to $T^*$ in order to ensure convergence. All of the groups we will consider hereafter are smooth manifolds with a Lie group structure, and the local tangent space at $T$ is a Lie algebra, which maps to the whole group through the exponential mapping. Therefore, Gauss-Newton optimization is straightforwardly performed with the Lie generators $G_i$ and the compositional update (see e.g. [1][2][3]).

1.2 Algebraic error

In homogeneous coordinates, we look for $T$ in $G$ that satisfies the equations

$T \in G: \;\; \forall i, \; \exists \lambda_i: \;\; (K \bar{T} T X_i) = \lambda_i x_i$

As we can see, the homogeneous formulation carries the projective ambiguity as coefficients $\lambda_i$, which account for the augmented number of equations (3 instead of 2) per point. We remove $\lambda_i$ by writing the problem as a cross product

$T \in G: \;\; x_i \times (K \bar{T} T X_i) = 0, \;\; \forall i$

which provides a redundant set of homogeneous equations in $T$; one component of each cross product (usually the third) can be discarded from the equations above, which then number $2N$ again. For noisy data, the problem above can be cast into the LSE form

$T^* = \arg\min_{T \in G} \sum_{i=1}^{N} \left\| x_i \times (K \bar{T} T X_i) \right\|^2$

and this problem can usually be put into a linear form

$\min_p \| A p \|$ or $\min_p \| A p - b \|$

where $p$ is a vector parametrizing the transformation $T$, and $A$ is a column-rank-deficient matrix with an $\infty^1$ family of solutions in $p$. By imposing a constraint on $p$, such as the unit-norm condition $\| p \| = 1$, the globally optimal solution can be found in one step via the SVD. The resulting algorithm is called DLT (Direct Linear Transform).

To summarize, we solve the original problem in two steps:

1. Estimate a $T_0$ matrix (hopefully close enough to $T^*$) by minimizing the algebraic error (DLT).

2. Starting from $T_0$, minimize the geometric error with the Gauss-Newton (or Levenberg-Marquardt) method, in order to obtain $T^*$. For this purpose, we use compositional updates of $T$ and Lie algebra derivatives (a code sketch is given at the end of this introduction).

In some cases (especially 2D-2D or 3D-3D problems), the geometric error is linear and can be solved at once, without need for $T_0$. Some nonlinear cases (e.g. the absolute orientation problem) can also be solved in one step for the geometric error. However, most 3D-2D cases have an inherent nonlinearity due to the projection $\pi()$, therefore the two-step procedure cannot generally be avoided. In that case, although step 2 has a common formulation for all poses (apart from different Jacobians, obtained through the respective Lie generators), step 1 must instead be solved differently for each class.

2 2D-2D transforms: (3x3) matrix

In a 2D-2D problem, both $X$ and $x$ are given by 3 homogeneous coordinates, of which the third is usually set to 1. Moreover, we suppose all $(3 \times 3)$ $K$ matrices to have the simple form

$K_{2D} = \begin{bmatrix} 1 & 0 & r_x/2 \\ 0 & 1 & r_y/2 \\ 0 & 0 & 1 \end{bmatrix}$

where $(r_x, r_y)$ are the horizontal and vertical image resolution, respectively. Therefore, all image data points $x_i$ can be pre-processed in order to remove both $K$ and $\bar{T}$:

$\bar{x}_i = \bar{T}^{-1} K^{-1} x_i = \bar{T}^{-1} \left( x_i - [r_x/2, \; r_y/2, \; 0]^T \right)$

and the two LSE errors are re-formulated as

Algebraic: $T^* = \arg\min_{T \in G} \sum_{i=1}^{N} \| \bar{x}_i \times (H X_i) \|^2$

Geometric: $T^* = \arg\min_{T \in G} \sum_{i=1}^{N} \| \pi(H X_i) - \pi(\bar{x}_i) \|^2$

where $H$ denotes the $(3 \times 3)$ matrix of $T$.

NOTE: If the last row of $T$ is $[0, 0, 1]$, then the geometric error

$T^* = \arg\min_{T \in G} \sum_{i=1}^{N} \| T X_i - \bar{x}_i \|^2$

has a linear form, and the problem can be directly solved in non-homogeneous coordinates.
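Before specializing step 1 to the individual 2D groups, the following Python sketch illustrates the generic step-2 refinement described above: a compositional Gauss-Newton loop over a matrix group, given its Lie generators. The function names, the finite-difference Jacobian and the fixed iteration count are illustrative choices made here (an actual implementation would use the analytic Lie-algebra derivatives); the points are assumed to be homogeneous column vectors.

    import numpy as np
    from scipy.linalg import expm

    def project(P):
        """pi(): homogeneous -> non-homogeneous coordinates."""
        return P[:2] / P[2]

    def geometric_residual(T, K, Tbar, X, x):
        """Stacked reprojection residuals pi(K Tbar T X_i) - pi(x_i)."""
        return np.concatenate([project(K @ Tbar @ T @ Xi) - project(xi)
                               for Xi, xi in zip(X, x)])

    def gauss_newton_pose(T0, K, Tbar, X, x, generators, n_iter=20, eps=1e-6):
        """Compositional Gauss-Newton on the group spanned by `generators`.

        Each update is T <- T @ expm(sum_k alpha_k G_k); the Jacobian w.r.t.
        alpha is evaluated at alpha = 0 by finite differences.
        """
        T = T0.copy()
        for _ in range(n_iter):
            r0 = geometric_residual(T, K, Tbar, X, x)
            J = np.empty((r0.size, len(generators)))
            for k, Gk in enumerate(generators):
                Tk = T @ expm(eps * Gk)
                J[:, k] = (geometric_residual(Tk, K, Tbar, X, x) - r0) / eps
            alpha, *_ = np.linalg.lstsq(J, -r0, rcond=None)
            T = T @ expm(sum(a * Gk for a, Gk in zip(alpha, generators)))
            if np.linalg.norm(alpha) < 1e-10:
                break
        return T

For a 2D Euclidean pose, for instance, `generators` would hold the three $(3 \times 3)$ generators of SE(2), $K$ would be the $K_{2D}$ matrix above, and $\bar{T}$ the reference transform (or the identity).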

2.1 Additional symbols

$I_n$ = $(n \times n)$ identity matrix
$t_n$ = $(n \times 1)$ translation vector
$R_n$ = $(n \times n)$ rotation matrix: $R_n^T R_n = I_n$
$R_{x,y,z}$ = $(3 \times 3)$ single-axis rotation matrices
$s$ = uniform scale factor
$D_n = \mathrm{diag}(s_1, \ldots, s_n)$: non-uniform scale matrix
$A_n$ = $(n \times n)$ linear transformation
$v_n$ = $(n \times 1)$ perspective distortion vector

2.2 Pose2DTranslation (2 dof, min. 1 point)

The simplest case is a pure translation

$H = \begin{bmatrix} I_2 & t_2 \\ 0^T & 1 \end{bmatrix}$

Figure 1: Pure translation.

The geometric error is

$T^* = \min_{(t_x, t_y)} \sum_{i=1}^{N} \| A_i t - b_i \|^2$ with $A_i = I_2$, $b_i = \bar{x}_i - X_i$

that is,

$T^* = \min_t \| A t - b \|^2$ with $A = \begin{bmatrix} I_2 \\ \vdots \\ I_2 \end{bmatrix}$, $b = \begin{bmatrix} b_1 \\ \vdots \\ b_N \end{bmatrix}$

This is a linear LSE, solved by $t = A^{+} b$; in this case, the LSE solution corresponds to the displacement of the point centroids:

$t = \frac{1}{N} \sum_{i=1}^{N} (\bar{x}_i - X_i) = \mu_{\bar{x}} - \mu_X$

2.3 Pose2D1ScaleTranslation (3 dof, min. 2 points)

Next, we add a uniform scale factor:

$H = \begin{bmatrix} s I_2 & t \\ 0^T & 1 \end{bmatrix}$

Figure 2: Translation and uniform scale.

The geometric error results in

$\sum_{i=1}^{N} \| \pi(T X_i) - \pi(\bar{x}_i) \|^2 = \sum_{i=1}^{N} \left\| \begin{bmatrix} s X_i + t_x - \bar{x}_i \\ s Y_i + t_y - \bar{y}_i \end{bmatrix} \right\|^2 = \left\| A \begin{bmatrix} s \\ t_x \\ t_y \end{bmatrix} - b \right\|^2$

with

$A = \begin{bmatrix} X_1 & 1 & 0 \\ Y_1 & 0 & 1 \\ \vdots & & \\ X_N & 1 & 0 \\ Y_N & 0 & 1 \end{bmatrix}, \quad b = \begin{bmatrix} \bar{x}_1 \\ \bar{y}_1 \\ \vdots \\ \bar{x}_N \\ \bar{y}_N \end{bmatrix}$

This is again a linear LSE problem in $(s, t_x, t_y)$.
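As an illustration, the stacked system above can be assembled and solved in a few lines of Python (a minimal sketch; the array shapes and names are assumptions made here):

    import numpy as np

    def fit_scale_translation_2d(X, xbar):
        """Linear LSE for (s, tx, ty) in xbar_i ~ s X_i + t (Sec. 2.3 stacking).

        X, xbar: (N, 2) arrays of model points and pre-processed image points.
        """
        N = X.shape[0]
        A = np.zeros((2 * N, 3))
        A[0::2, 0] = X[:, 0]; A[0::2, 1] = 1.0   # rows [X_i, 1, 0]
        A[1::2, 0] = X[:, 1]; A[1::2, 2] = 1.0   # rows [Y_i, 0, 1]
        b = xbar.reshape(-1)
        s, tx, ty = np.linalg.lstsq(A, b, rcond=None)[0]
        return s, np.array([tx, ty])

The pure-translation case of Sec. 2.2 reduces to the centroid difference, i.e. `xbar.mean(0) - X.mean(0)`.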

2.4 Pose2DScalesTranslation (4 dof, min. 2 points)

A non-uniform scale with translation is given by

$H = \begin{bmatrix} D_2 & t \\ 0^T & 1 \end{bmatrix}$

Figure 3: Translation and non-uniform scale.

This is similar to the previous problem, but with two scales, i.e. parameters $(s_1, s_2, t_x, t_y)$; the geometric error minimization gives:

$\min_{(s_1, s_2, t_x, t_y)} \left\| \begin{bmatrix} X_1 & 0 & 1 & 0 \\ 0 & Y_1 & 0 & 1 \\ \vdots & & & \\ X_N & 0 & 1 & 0 \\ 0 & Y_N & 0 & 1 \end{bmatrix} \begin{bmatrix} s_1 \\ s_2 \\ t_x \\ t_y \end{bmatrix} - \begin{bmatrix} \bar{x}_1 \\ \bar{y}_1 \\ \vdots \\ \bar{x}_N \\ \bar{y}_N \end{bmatrix} \right\|^2$

2.5 Pose2D1ScaleRotoTranslation (4 dof, min. 2 points)

Now we consider transformations involving rotations. These are in principle nonlinear problems (because of the rotation matrix $R$), but fortunately, due to the nature of the problem, the LSE geometric error can still be globally optimized in one step by using the SVD decomposition. We start by giving here the solution for the general similarity transform (uniform scale, rotation and translation), and deduce its sub-cases afterwards.

$H = \begin{bmatrix} s R & t \\ 0^T & 1 \end{bmatrix}$

From the [Umeyama] paper, the LSE geometric error is optimized by the $(s, R, t)$ parameters obtained from the following steps:

Figure 4: Similarity (rigid roto-translation and uniform scale).

Mean vectors: $\mu_{\bar{x}} = \frac{1}{n} \sum_i \bar{x}_i$, $\mu_X = \frac{1}{n} \sum_i X_i$

Variance of the norms: $\sigma_{\bar{x}}^2 = \frac{1}{n} \sum_i \| \bar{x}_i - \mu_{\bar{x}} \|^2$, $\sigma_X^2 = \frac{1}{n} \sum_i \| X_i - \mu_X \|^2$

Cross-covariance matrix $(2 \times 2)$: $\Sigma_{\bar{x}X} = \frac{1}{n} \sum_i (\bar{x}_i - \mu_{\bar{x}})(X_i - \mu_X)^T$

SVD of the cross-covariance: $\Sigma_{\bar{x}X} = U D V^T$

Sign correction for $\det(R)$: $S = \begin{cases} I & \text{if } \det(\Sigma_{\bar{x}X}) \geq 0 \\ \mathrm{diag}(1, 1, \ldots, 1, -1) & \text{if } \det(\Sigma_{\bar{x}X}) < 0 \end{cases}$

Rotation reconstruction: $R = U S V^T$

Scale reconstruction: $s = \frac{1}{\sigma_X^2} \mathrm{tr}(D S)$

Translation vector: $t = \mu_{\bar{x}} - s R \mu_X$
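The recipe above translates directly into Python; the sketch below assumes 2D point arrays and follows the listed steps literally (a minimal illustration, not a robust implementation):

    import numpy as np

    def similarity_umeyama_2d(X, xbar):
        """Similarity (s, R, t) minimizing sum_i ||s R X_i + t - xbar_i||^2."""
        mu_X, mu_x = X.mean(axis=0), xbar.mean(axis=0)
        var_X = np.mean(np.sum((X - mu_X) ** 2, axis=1))
        Sigma = (xbar - mu_x).T @ (X - mu_X) / len(X)   # 2x2 cross-covariance
        U, D, Vt = np.linalg.svd(Sigma)
        S = np.eye(2)
        if np.linalg.det(Sigma) < 0:                    # sign correction for det(R)
            S[-1, -1] = -1.0
        R = U @ S @ Vt
        s = np.trace(np.diag(D) @ S) / var_X
        t = mu_x - s * R @ mu_X
        return s, R, t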

2.6 Pose2D1ScaleRotation (2 dof, min. 1 point)

Similar to the previous case, but without translation:

$H = \begin{bmatrix} s R & 0 \\ 0^T & 1 \end{bmatrix}$

Figure 5: Rotation and uniform scale.

This implies that the mean vectors are $\mu_{\bar{x}} = \mu_X = 0$. Therefore we have:

Variance of the norms: $\sigma_{\bar{x}}^2 = \frac{1}{n} \sum_i \| \bar{x}_i \|^2$, $\sigma_X^2 = \frac{1}{n} \sum_i \| X_i \|^2$

Cross-covariance matrix $(2 \times 2)$: $\Sigma_{\bar{x}X} = \frac{1}{n} \sum_i \bar{x}_i X_i^T$

SVD of the cross-covariance: $\Sigma_{\bar{x}X} = U D V^T$

Sign correction for $\det(R)$: $S = \begin{cases} I & \text{if } \det(\Sigma_{\bar{x}X}) \geq 0 \\ \mathrm{diag}(1, 1, \ldots, 1, -1) & \text{if } \det(\Sigma_{\bar{x}X}) < 0 \end{cases}$

Rotation reconstruction: $R = U S V^T$

Scale reconstruction: $s = \frac{1}{\sigma_X^2} \mathrm{tr}(D S)$

2.7 Pose2DRotoTranslation (3 dof, min. 2 points)

By removing the scale ($s = 1$), we get the Euclidean transform (rigid roto-translation):

$H = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}$

where the scale step is skipped (equivalently, $\sigma_{\bar{x}} = \sigma_X = 1$ can be assumed). The algorithm becomes:

Mean vectors: $\mu_{\bar{x}} = \frac{1}{n} \sum_i \bar{x}_i$, $\mu_X = \frac{1}{n} \sum_i X_i$

Cross-covariance matrix $(2 \times 2)$: $\Sigma_{\bar{x}X} = \frac{1}{n} \sum_i (\bar{x}_i - \mu_{\bar{x}})(X_i - \mu_X)^T$

SVD of the cross-covariance: $\Sigma_{\bar{x}X} = U D V^T$

Sign correction for $\det(R)$: $S = \begin{cases} I & \text{if } \det(\Sigma_{\bar{x}X}) \geq 0 \\ \mathrm{diag}(1, 1, \ldots, 1, -1) & \text{if } \det(\Sigma_{\bar{x}X}) < 0 \end{cases}$

Rotation reconstruction: $R = U S V^T$

Translation vector: $t = \mu_{\bar{x}} - R \mu_X$

Figure 6: Euclidean transform (rigid roto-translation).
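The rigid sub-case is a short adaptation of the similarity sketch above, with the scale step dropped (again a hypothetical helper, 2D arrays assumed):

    import numpy as np

    def rigid_2d(X, xbar):
        """Rigid (R, t): the similarity fit of Sec. 2.5 with the scale fixed to 1."""
        mu_X, mu_x = X.mean(axis=0), xbar.mean(axis=0)
        Sigma = (xbar - mu_x).T @ (X - mu_X) / len(X)
        U, _, Vt = np.linalg.svd(Sigma)
        S = np.eye(2)
        if np.linalg.det(Sigma) < 0:
            S[-1, -1] = -1.0
        R = U @ S @ Vt
        return R, mu_x - R @ mu_X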

2.8 Pose2DRotation (1 dof, min. 1 point)

Finally, if both scale and translation are removed, we obtain the absolute orientation problem:

$H = \begin{bmatrix} R & 0 \\ 0^T & 1 \end{bmatrix}$

Figure 7: Pure rotation.

which is solved by

Cross-covariance matrix $(2 \times 2)$: $\Sigma_{\bar{x}X} = \frac{1}{n} \sum_i \bar{x}_i X_i^T$

SVD of the cross-covariance: $\Sigma_{\bar{x}X} = U D V^T$

Sign correction for $\det(R)$: $S = \begin{cases} I & \text{if } \det(\Sigma_{\bar{x}X}) \geq 0 \\ \mathrm{diag}(1, 1, \ldots, 1, -1) & \text{if } \det(\Sigma_{\bar{x}X}) < 0 \end{cases}$

Rotation reconstruction: $R = U S V^T$

2.9 Pose2DScalesRotoTranslation (5 dof, min. 3 points)

The problems with non-uniform scale and rotation cannot be solved like the uniform-scale cases. Since the number of degrees of freedom is close to that of an affinity (6 vs. 5 dof), we prefer to solve first for an affinity, and then upgrade to the non-uniform-scale similarity by removing one degree of freedom. We consider here two sub-cases: with and without translation. The first one is given by

$H = \begin{bmatrix} R D_2 & t \\ 0^T & 1 \end{bmatrix}$

Figure 8: Roto-translation with non-uniform scale.

and the complete procedure is the following:

1. solve for an affine transform (see below), and find $A$, $t$
2. using the SVD, compute $A = R(\theta) R(-\phi) D R(\phi)$ with $D = \mathrm{diag}(s_1, s_2)$ and $s_1 > s_2$ (see the sketch after this list)
3. set $R = R(\theta)$ and remove $\phi$: if $\phi \approx 0$, keep the order of $(s_1, s_2)$ in $D$; if $\phi \approx 90$ deg, swap the scales
4. finally, since this solution is generally not the optimal LSE, it is recommended to run a Gauss-Newton optimization on the geometric error.
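A possible implementation of the upgrade in steps 2 and 3 is sketched below; the 45-degree threshold used to decide whether to swap the scales is an assumption made here, since the text only distinguishes the two limiting cases:

    import numpy as np

    def upgrade_affine_to_scaled_rotation(A):
        """Decompose a 2x2 affine block as A = R(theta) R(-phi) D R(phi).

        Returns R(theta), diag(s1, s2) and phi; assumes det(A) > 0.
        """
        U, d, Vt = np.linalg.svd(A)            # A = U diag(d) Vt
        if np.linalg.det(Vt) < 0:              # absorb a reflection so that
            Z = np.diag([1.0, -1.0])           # both factors are rotations
            U, Vt = U @ Z, Z @ Vt
        R_theta = U @ Vt                       # rotation part of the similarity
        phi = np.arctan2(Vt[1, 0], Vt[0, 0])   # angle of R(phi) = Vt
        s1, s2 = d
        if abs(phi) > np.pi / 4:               # "phi close to 90 deg": swap scales
            s1, s2 = s2, s1
        return R_theta, np.diag([s1, s2]), phi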

2.10 Pose2DScalesRotation (3 dof, min. 2 points)

Next, we consider the rotation with non-uniform scale:

$H = \begin{bmatrix} R D_2 & 0 \\ 0^T & 1 \end{bmatrix}$

Figure 9: Rotation with non-uniform scale.

This is a special case of the previous one, where $t = 0$. We can solve it as a purely linear (affine without translation) transform with 4 dof, then upgrade $A$ to the non-uniform scale and rotation matrix as in the previous Section.

2.11 Pose2DAffine (6 or 4 dof, min. 3 or 2 points respectively)

$H = \begin{bmatrix} A & t \\ 0^T & 1 \end{bmatrix}$

Figure 10: Affine transform (linear + constant).

The affine case is again a linear LSE problem in the geometric error

$[A^*, t^*] = \arg\min_{(A,t)} \sum_{i=1}^{N} \| A X_i + t - \bar{x}_i \|^2 = \arg\min_a \sum_i \| M_i a - \bar{x}_i \|^2$

with $a$ the 6 stacked parameters

$a = \begin{bmatrix} A_{11} & A_{12} & A_{21} & A_{22} & t_x & t_y \end{bmatrix}^T$

and $M_i$ a coefficient matrix, function of $X_i$:

$M_i = \begin{bmatrix} X_i & Y_i & 0 & 0 & 1 & 0 \\ 0 & 0 & X_i & Y_i & 0 & 1 \end{bmatrix}$

By stacking together the $M_i$ matrices and the $\bar{x}_i$ vectors, and solving for $a$, we get the LSE affine parameters. Similar equations can be written for the purely linear case ($t = 0$)

$H = \begin{bmatrix} A & 0 \\ 0^T & 1 \end{bmatrix}$

Figure 11: Purely linear transform.

with 4 parameters only:

$a = \begin{bmatrix} A_{11} & A_{12} & A_{21} & A_{22} \end{bmatrix}^T, \quad M_i = \begin{bmatrix} X_i & Y_i & 0 & 0 \\ 0 & 0 & X_i & Y_i \end{bmatrix}$
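In Python, the stacking and solution can be written as follows (a minimal sketch; the array layout is an assumption made here):

    import numpy as np

    def fit_affine_2d(X, xbar):
        """Linear LSE for the 2D affine pose (Sec. 2.11 stacking).

        X, xbar: (N, 2) arrays; returns a = [A11, A12, A21, A22, tx, ty].
        """
        N = X.shape[0]
        M = np.zeros((2 * N, 6))
        M[0::2, 0:2] = X; M[0::2, 4] = 1.0     # rows [X_i, Y_i, 0, 0, 1, 0]
        M[1::2, 2:4] = X; M[1::2, 5] = 1.0     # rows [0, 0, X_i, Y_i, 0, 1]
        a, *_ = np.linalg.lstsq(M, xbar.reshape(-1), rcond=None)
        return a

The purely linear case simply drops the last two columns of the stacked matrix.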

2.12 Pose2DHomography (8 dof, min. 4 points): the DLT algorithm

In the most general 2D-2D case, the matrix $T$ can be any linear transform in homogeneous coordinates. This is defined up to a scale factor, which we remove by setting $T(3,3) = 1$:

$H = \begin{bmatrix} A & t \\ v^T & 1 \end{bmatrix}$

Figure 12: General 2D homography.

By applying the projection operator $\pi$, this leads to a non-linear geometric error; therefore, the pose estimation problem must be formulated in two steps as described in the introduction (algebraic and geometric error minimization). Concerning the first, after pre-processing (removing $K$ and $\bar{T}$ from the data points $x_i$) we have an algebraic error of the form

$\min_T \sum_{i=1}^{N} \| \bar{x}_i \times (T X_i) \|^2$

with $X_i, \bar{x}_i \in \mathbb{R}^3$, $T \in \mathbb{R}^{3 \times 3}$. By writing $X_i = (X_i, Y_i, Z_i)$, $\bar{x}_i = (\bar{x}_i, \bar{y}_i, \bar{z}_i)$, we take the first two terms of the cross product and we have, for each point $i$, two homogeneous equations

$\begin{bmatrix} 0^T & -\bar{z}_i X_i^T & \bar{y}_i X_i^T \\ \bar{z}_i X_i^T & 0^T & -\bar{x}_i X_i^T \end{bmatrix} \begin{bmatrix} h_1 \\ h_2 \\ h_3 \end{bmatrix} = 0$

where $h_j$ are the three (transposed) rows of

$H = \begin{bmatrix} h_1^T \\ h_2^T \\ h_3^T \end{bmatrix}$

If $A$ is the $(2N \times 9)$ stacked matrix of all l.h.s. terms above, we get the DLT equation

$\min_h \| A h \|$

with rank-deficient $A$ that, as we expect, has an $\infty^1$ family of solutions in $h$. By imposing $\| h \| = 1$, the solution is obtained from the SVD decomposition $A = U S V^T$ as the last column of $V$ (corresponding to the minimum singular value in $S$).

In addition, the normalization technique (removing the centroids of $X$ and $\bar{x}$, followed by isotropic coordinate scaling) ensures a better numerical stability, and therefore we use it:

1. Removing centroids: for both point sets, we compute the mean values $\mu_{\bar{x}} = \frac{1}{n} \sum_i \bar{x}_i$, $\mu_X = \frac{1}{n} \sum_i X_i$ and remove them from each point.
2. Isotropic scaling: afterwards, we make sure that the average distance from the mass center (which is now 0) is equal to $\sqrt{2}$, i.e. that the "average point" of both sets is $(1, 1, 1)^T$.

These two steps ultimately correspond to multiplying the two point sets by two matrices $T_X$, $T_{\bar{x}}$. Therefore, after estimating the normalized transformation $\hat{T}$, the two normalizations are removed by

$T = T_{\bar{x}}^{-1} \hat{T} T_X$
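A compact normalized-DLT sketch is given below (illustrative only; the points are assumed to be inhomogeneous (N, 2) arrays and no degeneracy checks are performed):

    import numpy as np

    def normalization_matrix(pts):
        """Similarity moving pts to zero centroid and average distance sqrt(2)."""
        mu = pts.mean(axis=0)
        scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - mu, axis=1))
        return np.array([[scale, 0.0, -scale * mu[0]],
                         [0.0, scale, -scale * mu[1]],
                         [0.0, 0.0, 1.0]])

    def dlt_homography(X, xbar):
        """Normalized DLT for the general 2D homography (Sec. 2.12)."""
        TX, Tx = normalization_matrix(X), normalization_matrix(xbar)
        Xh = (TX @ np.c_[X, np.ones(len(X))].T).T
        xh = (Tx @ np.c_[xbar, np.ones(len(xbar))].T).T
        A = []
        for (Xi, Yi, Zi), (xi, yi, zi) in zip(Xh, xh):
            A.append([0, 0, 0, -zi * Xi, -zi * Yi, -zi * Zi, yi * Xi, yi * Yi, yi * Zi])
            A.append([zi * Xi, zi * Yi, zi * Zi, 0, 0, 0, -xi * Xi, -xi * Yi, -xi * Zi])
        h = np.linalg.svd(np.asarray(A))[2][-1]        # last right singular vector
        H = np.linalg.inv(Tx) @ h.reshape(3, 3) @ TX   # undo the normalizations
        return H / H[2, 2]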

3 3D-2D transforms: T = (4x4), K = (3x4)

Here, the calibration matrix $K$ can be more general; in particular, we consider pinhole models, without distortion and skew, and with equal focal lengths:

$K = \begin{bmatrix} \bar{K} & 0 \end{bmatrix}, \quad \bar{K} = \begin{bmatrix} f & 0 & r_x/2 \\ 0 & f & r_y/2 \\ 0 & 0 & 1 \end{bmatrix}$

Moreover, we have extrinsic transformation matrices of the type

$T = \begin{bmatrix} A_{3 \times 3} & t_3 \\ 0^T & 1 \end{bmatrix} \in G, \quad \bar{T} = \begin{bmatrix} \bar{A}_{3 \times 3} & \bar{t}_3 \\ 0^T & 1 \end{bmatrix} \in \bar{G}$

belonging to possibly different groups $G$, $\bar{G}$. The base estimation problem becomes: find $(A^*, t^*)$ such that

$\begin{bmatrix} A & t \end{bmatrix} \in G: \;\; \forall i, \; \exists \lambda_i: \;\; \bar{K} \begin{bmatrix} \bar{A} A & \bar{A} t + \bar{t} \end{bmatrix} X_i = \lambda_i x_i$

We can pre-process the image data $x_i$,

$\bar{x}_i = \bar{K}^{-1} x_i$

and re-write the equations as

$\begin{bmatrix} \bar{A} A & \bar{A} t + \bar{t} \end{bmatrix} X_i = \lambda_i \bar{x}_i$

However, unlike the 2D-2D case, in general we can remove neither $\bar{A}$ nor $\bar{t}$, because of the non-square matrix on the left side (the only exception is $\bar{t} = 0$, in which case $\bar{A}$ can also be removed from the data points, $\bar{x}_i = (\bar{K} \bar{A})^{-1} x_i$, and the problem becomes a purely projective one, $[A \;\; t] X_i = P X_i = \lambda_i \bar{x}_i$). Moreover, as already mentioned in the introduction, because of the dimensionality loss (from 3D to 2D) the projection operator $\pi()$ always introduces a nonlinearity, no matter the form of $A$, $t$. Therefore, the linear approach to the algebraic error (DLT) cannot be avoided, in order to provide an initial guess $T_0$ for the geometric error optimization.

Finally, we need the DLT approach for all transform classes, which may have far fewer than the maximum number of parameters (12); therefore, we assume that each $T \in G$ can be parametrized (or at least approximated) by a vector $q$,

$\begin{bmatrix} A(q) & t(q) \end{bmatrix}$ with $\dim(q) = d_q \leq 12$.

In what follows, we will call this method generalized projective DLT (GP-DLT).

3.1 Algebraic error for 3D-2D projections: the GP-DLT approach

We re-write the algebraic error in terms of the reduced parameters $q$ as follows:

$q^* = \arg\min_{q \in \mathbb{R}^{d_q}} \sum_{i=1}^{n} \left\| \bar{x}_i \times \left( \begin{bmatrix} \bar{A} A(q) & \bar{A} t(q) + \bar{t} \end{bmatrix} X_i \right) \right\|^2$

which can be written as

$q^* = \arg\min_{q \in \mathbb{R}^{d_q}} \sum_{i=1}^{n} \| F_i(q) + f_i \|^2$

with

$F_i(q) = [\bar{x}_i]_\times \bar{A} \begin{bmatrix} A(q) & t(q) \end{bmatrix} X_i, \quad f_i = [\bar{x}_i]_\times \begin{bmatrix} 0_{3 \times 3} & \bar{t} \end{bmatrix} X_i$

and

$[\bar{x}_i]_\times = \begin{bmatrix} 0 & -\bar{z}_i & \bar{y}_i \\ \bar{z}_i & 0 & -\bar{x}_i \\ -\bar{y}_i & \bar{x}_i & 0 \end{bmatrix}$

the cross-product matrix. If we further impose $A(q)$ and $t(q)$ to be linear in $q$, then we can show that

$F_i(q) = F_i q$

and the algebraic error becomes a linear LSE problem. This condition seems to be quite restrictive, since most parametrizations of transformation groups are nonlinear (particularly if a rotation matrix is involved); nevertheless, we can always find a parametrization $q_l$ in a linear group $G_l$ that includes $G$, where $\dim(G_l)$ is higher than $\dim(G)$, but as close as possible to it. Afterwards, the transform $T_l$ so obtained can be upgraded to the actual $T$ by a Procrustes analysis ($\min_{T \in G} \| T_l - T \|_F$) or by simpler means, such as clamping the affine parameter $\phi$ for upgrading to a similarity, etc. (similarly to the 2D-2D cases). In any case, the result of this procedure is needed only as a starting point for the geometric error optimization.
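The Appendix derives the explicit matrices $\hat{X}_i$, $\hat{A}$, $Q$ of the linear system; as a practical sketch, the same problem can also be assembled numerically by probing a user-supplied linear parametrization on the canonical basis vectors of $q$. The callback name `P_of_q` and the array conventions below are assumptions made here for illustration:

    import numpy as np

    def cross_matrix(v):
        """[v]_x, the 3x3 cross-product matrix of v."""
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    def gp_dlt(q_dim, P_of_q, Abar, tbar, X, xbar):
        """Linear algebraic error for a linear parametrization q (GP-DLT sketch).

        P_of_q(q) must be affine in q and return the 3x4 matrix [A(q) t(q)].
        X: (N, 4) homogeneous model points, xbar: (N, 3) pre-processed
        homogeneous image points (Kbar removed).
        """
        P0 = P_of_q(np.zeros(q_dim))
        const_block = np.c_[np.zeros((3, 3)), tbar]   # the constant [0 | tbar] part
        rows, rhs = [], []
        for Xi, xi in zip(X, xbar):
            C = cross_matrix(xi)
            f_i = C @ (Abar @ P0 @ Xi + const_block @ Xi)   # constant residual part
            F_i = np.column_stack([C @ Abar @ (P_of_q(e) - P0) @ Xi
                                   for e in np.eye(q_dim)])
            rows.append(F_i)
            rhs.append(-f_i)
        q, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
        return q

For the pure 3D translation of the next Section, for instance, `P_of_q` would simply return `np.c_[np.eye(3), q]`.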

3.2 Examples of linear constraints for 3D pose parameters

We mention here a few examples of linear constraints that we can impose on $p$ (the column-wise stacking of $P = [A \;\; t]$, see the Appendix), writing $p(q) = Q q + p_0$.

Pure translation in 3D. This corresponds to imposing

$P = \begin{bmatrix} I_3 & t_3 \end{bmatrix}, \quad Q = \begin{bmatrix} 0_{9 \times 3} \\ I_3 \end{bmatrix}, \quad p_0 = \begin{bmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}^T, \quad q = \begin{bmatrix} t_x & t_y & t_z \end{bmatrix}^T$

Almost pure rotation around $z$ (without the non-linear constraint $c^2 + s^2 = 1$):

$P = \begin{bmatrix} c & -s & 0 & 0 \\ s & c & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$

This corresponds to $q = \begin{bmatrix} c & s \end{bmatrix}^T$, $p_0 = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}^T$, and a $(12 \times 2)$ matrix $Q$ placing $c$ in entries 1 and 5 of $p$, and $s$ in entries 2 and 4 (the latter with a minus sign).

4 Line correspondences

In some cases, the image measurement consists of line segments that have to be matched to corresponding model segments. For example, when performing a hand detection task (Fig. 13), the Hough transform provides very well-aligned line segments on the fingers, associated to the corresponding model lines.

Segment correspondences in principle provide 2 point correspondences (i.e. 4 measurement data). However, as we can see from the fingers in Fig. 13, the end-points of the detected segments are not as well localized as the line itself (direction and distance from the origin); therefore the most reliable matching is obtained by considering pure line correspondences. This unfortunately provides fewer equations (2 instead of 4) for each feature, but at least ensures that only the most reliable information source is used for pose estimation.

As far as pure lines are concerned, a simpler way to describe correspondences consists of replacing them by two point-to-line correspondences (i.e. a segment-to-line correspondence), where the two model points can be arbitrarily chosen on the respective line (in 2D or 3D space). Alternatively, the end-point information can still be included, but with a lower weight in the LSE optimization process.

Figure 13: Line detection with the Hough transform for planar hand detection.

Therefore, we formulate the problem as follows: given a model segment $(L^1, L^2)$ and a corresponding image line $l = (l_1, l_2, l_3)^T$, find a transformation $H$ such that both points $H L^1$ and $H L^2$ lie on $l$. When a noisy measurement $l$ is given, the error can again be formulated in two ways (geometric and algebraic), which lead to different LSE errors. In order to provide the geometric error, we also assume that the normal direction to the image line, $n = (l_1, l_2)$, is normalized, $\| n \| = 1$; in this way, the third component $l_3 = d$ represents the distance of the line from the origin of image coordinates. In particular, algebraic errors are defined in homogeneous coordinates, and geometric errors in projected (non-homogeneous) coordinates, through the $\pi()$ function.

Algebraic errors:

$l^T (K \bar{T} T L^1) = l^T (K \bar{T} T L^2) = 0$

Geometric errors:

$n^T \pi(K \bar{T} T L^1) + d = n^T \pi(K \bar{T} T L^2) + d = 0$

which, in a least-squares setting, become

Algebraic LSE: $T^* = \arg\min_{T \in G} \sum_{i=1}^{n} \left[ \left( l_i^T K \bar{T} T L_i^1 \right)^2 + \left( l_i^T K \bar{T} T L_i^2 \right)^2 \right]$

Geometric LSE: $T^* = \arg\min_{T \in G} \sum_{i=1}^{n} \left[ \left( n_i^T \pi(K \bar{T} T L_i^1) + d_i \right)^2 + \left( n_i^T \pi(K \bar{T} T L_i^2) + d_i \right)^2 \right]$

4.1 2D-2D line correspondences

For most 2D cases (apart from the general homography of Sec. 2.12), the geometric LSE is equivalent to the algebraic one. In fact, if the transformation matrix has the form

$T = \begin{bmatrix} A & t \\ 0^T & 1 \end{bmatrix}$

then we have

$n^T \pi(K \bar{T} T L) + d = l^T (K \bar{T} T L) = l^T K \bar{T} \begin{bmatrix} A \tilde{L} + t \\ 1 \end{bmatrix}$

(up to a scale factor), where $\tilde{L}$ are the first two (non-homogeneous) coordinates of $L$. For the sake of clarity, in the following we omit the proportionality sign whenever the context avoids ambiguity of interpretation. The above equations clearly show how the two terms $K \bar{T}$ can be removed by pre-processing the lines:

$\bar{l}^T = l^T K \bar{T}, \quad \bar{l} = (\bar{n}^T, \bar{d})$ (re-normalized so that $\| \bar{n} \| = 1$)

Furthermore, if the parametrization of the group $A(q)$, $t(q)$ is linear in $q$, then the problem becomes linear, and can be solved in one step via the SVD decomposition. For example, if we consider the general affine transform (Sec. 2.11), parametrized by

$q = \begin{bmatrix} A_{11} & A_{12} & A_{21} & A_{22} & t_x & t_y \end{bmatrix}^T$

then we have

$A \tilde{L} + t = \begin{bmatrix} L_x & L_y & 0 & 0 & 1 & 0 \\ 0 & 0 & L_x & L_y & 0 & 1 \end{bmatrix} q = \hat{L} q$

so that the LSE problem becomes

$q^* = \arg\min_{q \in \mathbb{R}^6} \sum_{i=1}^{n} \left[ \left( \bar{n}_i^T \hat{L}_i^1 q + \bar{d}_i \right)^2 + \left( \bar{n}_i^T \hat{L}_i^2 q + \bar{d}_i \right)^2 \right] = \arg\min_{q \in \mathbb{R}^6} \sum_{i=1}^{n} \left\| \hat{\mathcal{L}}_i q + \mathbf{d}_i \right\|^2$

with

$\hat{\mathcal{L}}_i = \begin{bmatrix} \bar{n}_i^T \hat{L}_i^1 \\ \bar{n}_i^T \hat{L}_i^2 \end{bmatrix}, \quad \mathbf{d}_i = \begin{bmatrix} \bar{d}_i \\ \bar{d}_i \end{bmatrix}$
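A sketch of this linear solver in Python (names and array conventions are assumptions made here; the lines are assumed pre-processed, with unit normals):

    import numpy as np

    def affine_from_segment_line_matches(L1, L2, n, d):
        """Affine 2D pose from segment-to-line correspondences (Sec. 4.1).

        L1, L2: (N, 2) model end-points; n: (N, 2) unit line normals;
        d: (N,) line offsets.  Returns q = [A11, A12, A21, A22, tx, ty].
        """
        def Lhat(p):                   # 2x6 matrix mapping q to A p + t
            return np.array([[p[0], p[1], 0.0, 0.0, 1.0, 0.0],
                             [0.0, 0.0, p[0], p[1], 0.0, 1.0]])
        rows, rhs = [], []
        for P1, P2, ni, di in zip(L1, L2, n, d):
            rows += [ni @ Lhat(P1), ni @ Lhat(P2)]   # n_i^T Lhat q + d_i = 0
            rhs += [-di, -di]
        q, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
        return q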

A similar result can be obtained for the similarity case (uniform scale, rotation and translation) with 4 parameters. In order to keep the linearity, we parametrize it as

$H = \begin{bmatrix} c & -s & t_x \\ s & c & t_y \\ 0 & 0 & 1 \end{bmatrix}$

with

$q = \begin{bmatrix} c & s & t_x & t_y \end{bmatrix}^T$

so that in this case

$\hat{L} = \begin{bmatrix} L_x & -L_y & 1 & 0 \\ L_y & L_x & 0 & 1 \end{bmatrix}$

and $\hat{\mathcal{L}}_i$ is computed from this expression. However, the pure rotational cases (with $c = \cos\theta$, $s = \sin\theta$) involve a nonlinearity that, in the point-to-point case, had been dealt with by using the Umeyama approach. Here, we can simply estimate the pose as a similarity, and afterwards remove the scale by dividing $(c, s)$ by $\sqrt{c^2 + s^2}$.

5 Point and line correspondences

The most general case involves matching points and lines simultaneously. In order to formulate it in an elegant way, we start from the result of the previous Section and add the point-related terms. Concerning the geometric error term for a linear pose parametrization $q$, we have

$K \bar{T} T(q) X - x = K \bar{T} \hat{X} q - x$

where the $\hat{X}$ matrix is defined in the same way as $\hat{L}$ (for segments). The pre-processing step for points (already described in the related Section) becomes

$\bar{x} = (K \bar{T})^{-1} x$

which we can see as the dual version of the line pre-processing. Therefore, for $n_l$ line and $n_p$ point correspondences, we have

$q^* = \arg\min_{q \in \mathbb{R}^6} \left[ \sum_{i=1}^{n_l} \| \hat{\mathcal{L}}_i q + \mathbf{d}_i \|^2 + \sum_{j=1}^{n_p} \| \hat{X}_j q - \bar{x}_j \|^2 \right]$

with $\hat{\mathcal{L}}_i$, $\mathbf{d}_i$ defined in the previous Section (notice the minus sign in the second, point-related terms).
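The joint problem stacks the two sets of linear residuals into one system; a sketch for the affine parametrization follows (array shapes and names are, again, illustrative assumptions):

    import numpy as np

    def point_block(p):
        """2x6 matrix mapping q = [A11,A12,A21,A22,tx,ty] to A p + t."""
        return np.array([[p[0], p[1], 0.0, 0.0, 1.0, 0.0],
                         [0.0, 0.0, p[0], p[1], 0.0, 1.0]])

    def affine_from_points_and_lines(Xp, xbar, L1, L2, n, d):
        """Joint affine LSE over point and segment-to-line terms (Sec. 5).

        Xp, xbar: (Np, 2) point correspondences; L1, L2: (Nl, 2) segment
        end-points; n: (Nl, 2) unit normals; d: (Nl,) line offsets.
        """
        rows, rhs = [], []
        for P, xb in zip(Xp, xbar):                    # point terms: Xhat_j q - xbar_j
            M = point_block(P)
            rows += [M[0], M[1]]
            rhs += [xb[0], xb[1]]
        for P1, P2, ni, di in zip(L1, L2, n, d):       # line terms: Lhat_i q + d_i
            rows += [ni @ point_block(P1), ni @ point_block(P2)]
            rhs += [-di, -di]
        q, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
        return q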

A Derivation of the GP-DLT linear equations

In order to derive the linear LSE matrix $F_i$, we first consider the internal product

$\bar{A} \begin{bmatrix} A(q) & t(q) \end{bmatrix} X_i = W X_i$

with $W = \bar{A} \begin{bmatrix} A(q) & t(q) \end{bmatrix}$ a $(3 \times 4)$ matrix. We express it row-wise,

$W = \begin{bmatrix} w_1^T \\ w_2^T \\ w_3^T \end{bmatrix}$

where $w_j$ is the $j$-th row (transposed to a column vector). Therefore, we can write the product as

$W X_i = \begin{bmatrix} X_i^T & 0^T & 0^T \\ 0^T & X_i^T & 0^T \\ 0^T & 0^T & X_i^T \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix}$

which, after including the cross-product matrix $[\bar{x}_i]_\times$, becomes

$F_i(q) = \hat{X}_i \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix}, \quad \hat{X}_i = \begin{bmatrix} 0^T & -\bar{z}_i X_i^T & \bar{y}_i X_i^T \\ \bar{z}_i X_i^T & 0^T & -\bar{x}_i X_i^T \\ -\bar{y}_i X_i^T & \bar{x}_i X_i^T & 0^T \end{bmatrix}$

Next, we consider again the $W$ matrix,

$W = \bar{A} P(q), \quad P(q) = \begin{bmatrix} A(q) & t(q) \end{bmatrix}$

Each element of the product is given by

$W_{hk} = \bar{a}_h^T p_k, \quad \bar{A} = \begin{bmatrix} \bar{a}_1^T \\ \bar{a}_2^T \\ \bar{a}_3^T \end{bmatrix}, \quad P = \begin{bmatrix} p_1 & p_2 & p_3 & p_4 \end{bmatrix}$

where $\bar{A}$ has been expressed row-wise, and $P$ column-wise. Therefore, we have

$w_h = M_h \, p(q), \quad M_h = \begin{bmatrix} \bar{a}_h^T & 0^T & 0^T & 0^T \\ 0^T & \bar{a}_h^T & 0^T & 0^T \\ 0^T & 0^T & \bar{a}_h^T & 0^T \\ 0^T & 0^T & 0^T & \bar{a}_h^T \end{bmatrix}$

where the vector $p$ contains the 12 entries of $P$ (stacked column-wise), which are parametrized by $q$. By stacking the matrices $M_h$ row-wise, we have

$\begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix} = \hat{A} \, p(q), \quad \hat{A} = \begin{bmatrix} M_1 \\ M_2 \\ M_3 \end{bmatrix} = \begin{bmatrix} I_4 \otimes \bar{a}_1^T \\ I_4 \otimes \bar{a}_2^T \\ I_4 \otimes \bar{a}_3^T \end{bmatrix}$

Our assumption about the linear dependency on $q$ can be expressed by

$p(q) = Q q + p_0$

with suitable values for the $(12 \times d_q)$ matrix $Q$ and the vector $p_0$. Therefore, in the linear case the l.h.s. becomes

$[\bar{x}_i]_\times \bar{A} P(q) X_i = \hat{X}_i \hat{A} (Q q + p_0)$

where the last term does not depend on $q$, and therefore can be moved to the right-hand side $f_i$ of the LSE. The r.h.s. term can be developed similarly:

$[\bar{x}_i]_\times \begin{bmatrix} 0_{3 \times 3} & \bar{t} \end{bmatrix} X_i = \hat{X}_i m, \quad m = \begin{bmatrix} 0^T & \bar{t}_1 & 0^T & \bar{t}_2 & 0^T & \bar{t}_3 \end{bmatrix}^T$

so that, finally, the LSE problem becomes

$q^* = \arg\min_{q \in \mathbb{R}^{d_q}} \sum_{i=1}^{n} \| F_i q + f_i \|^2$

where

$F_i = \hat{X}_i \hat{A} Q, \quad f_i = \hat{X}_i (m + \hat{A} p_0)$

References

[1] T. Drummond and R. Cipolla, "Visual tracking and control using Lie algebras," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), vol. 2, p. 652, 1999.

[2] T. Drummond and R. Cipolla, "Real-time visual tracking of complex structures," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 932-946, 2002.

[3] T. Drummond and R. Cipolla, "Real-time tracking of multiple articulated structures in multiple views," in ECCV '00: Proceedings of the 6th European Conference on Computer Vision, Part II. London, UK: Springer-Verlag, 2000.
