A Linear Algorithm for Computing the Homography from Conics in Correspondence

Journal of Mathematical Imaging and Vision 13, 2000. © 2000 Kluwer Academic Publishers. Manufactured in The Netherlands.

AKIHIRO SUGIMOTO
Advanced Research Laboratory, Hitachi, Ltd., Japan

Abstract. This paper presents a study, based on conic correspondences, on the relationship between two perspective images acquired by an uncalibrated camera. We show that for a pair of corresponding conics, the parameters representing the conics satisfy a linear constraint. To be more specific, the parameters that represent a conic in one image are transformed by a five-dimensional projective transformation to the parameters that represent the corresponding conic in the other image. We also show that this transformation is expressed as the symmetric component of the tensor product of the transformation based on point/line correspondences and itself. In addition, we present a linear algorithm for uniquely determining, up to a scale factor, the point-based transformation corresponding to a given conic-based transformation. Accordingly, conic correspondences enable us to easily handle both points and lines in uncalibrated images of a planar object.

Keywords: conic correspondences, homography, uncalibrated images, tensor product, linear computation, 2-D objects

1. Introduction

One of the fundamental difficulties in recognizing objects from images is how to deal with a number of images obtained from the same object. Clarifying the relationship between images of the same object is thus of fundamental importance. Knowledge of this relationship provides many advantages in handling images of the same object. For example, it allows us to know the number of stored images required to establish a geometric model of an object. We can also synthesize a new image of an object from its stored images with the help of the relationship between images. Moreover, we have only to transmit the stored images of an object
to a recipient, who can then synthesize an image of the object from any viewpoint. Thus, the relationship between images of the same object plays a key role in important problems in computer vision and multimedia, including three-dimensional reconstruction from multiple images, object recognition, image synthesis, and image coding.

(Present address: Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Sakyo, Kyoto, Japan; sugimoto@i.kyoto-u.ac.jp)

A conic is one of the most important image features. This is because many man-made objects have circular parts, and circles are perspectively projected onto conics. Furthermore, a conic is a more compact primitive than points or lines, and can be extracted from images more robustly and more exactly. In addition, finding correspondences between conics is much easier than between points: unlike points, conics have features that distinguish one from another and can be used to narrow down the possible matches. Hence, developing algorithms dealing with conics is significant. Nevertheless, there are fewer articles (for instance, [12-16, 23, 26, 27]) dealing with conics to clarify the relationship between images, whereas the multilinear constraints on multiple images of points or lines have been derived; see, for example, [3, 4, 8-11, 19-21, 24] for points and [3, 6-8] for lines. Ma [12] and Ma, Si, and Chen [13] developed an analytical method, based on conic correspondences, for motion and structure reconstruction and pose determination from stereo images. This method, however, is developed under the

assumption that the camera is calibrated. In addition, the procedure requires complex nonlinear computation in the general case to obtain closed-form solutions, and it does not ensure the uniqueness of the solution. Quan [14, 15] showed that two polynomial constraints are satisfied by corresponding conics in two uncalibrated perspective images. Based on this result, Quan [14, 15] addressed a geometric invariant and conic reconstruction from two images. However, the uniqueness of the solution has not yet been established, owing to the nonlinearity of the derived polynomial constraints. Weiss [26, 27] clarified that two conics are sufficient for calibration under affine projection and derived a nonlinear algorithm for calibration. Perspective images are not considered there, and complex nonlinear computation is also required. As seen above, existing methods dealing with conics are based on nonlinear constraints on conics; the procedures therefore require complex nonlinear computation, and the uniqueness of the solution is not ensured. This paper presents a study, based on conic correspondences, on the relationship between two perspective images acquired by an uncalibrated camera. We show that for a pair of corresponding conics, the parameters representing the conics satisfy a linear constraint. To be more specific, the parameters representing a conic in one image are transformed by a five-dimensional projective transformation (called a conic-based transformation) to the parameters representing the corresponding conic in the other image. We also establish the link between this transformation and the point-based transformation, i.e., the transformation based on point correspondences in the two images. We show that a conic-based transformation is expressed as the symmetric component of the tensor product of the corresponding point-based transformation and itself. Furthermore, we address the problem of determining the corresponding point-based transformation from a given
conic-based transformation. This problem is reduced to solving an overdetermined system of nonlinear equations. Full use is made of the redundant information, so that the problem is solved with only linear computation. Our method is simple and ensures the uniqueness of the solution. Accordingly, conic correspondences enable us to easily handle both points and lines in uncalibrated images of a planar object. In particular, when we observe a planar object, we can use conic correspondences to transfer one image to another and to establish point correspondences and line correspondences between the two images (see Fig. 1). Note that a part of this work was presented in [22].

Figure 1. Two views of a planar object.

This paper is organized as follows. In Section 2, we discuss the point-based transformation between two images of coplanar points in 3-D within the framework of projective geometry. In Section 3, we first clarify that when we observe a conic in 3-D from two different viewpoints, the parameters characterizing its images are related by a five-dimensional projective transformation. We then investigate the relationship between a conic-based transformation and its corresponding point-based transformation and establish the link between them. In Section 4, we present a linear algorithm for uniquely determining the corresponding point-based transformation from a conic-based transformation up to a scale factor. A description of the proposed algorithm is given in Section 5. Experimental results evaluating the performance of the proposed algorithm are presented in Section 6.

2. Plane Transformation

In this section, we discuss the relationship between two different images of coplanar points on the basis of the framework of projective geometry. Note that unless explicitly stated otherwise, the coordinates of a point/line are understood to be homogeneous throughout this paper. An introduction to elementary projective geometry can be found in [2] or [18]. Let P^n be the n-dimensional projective space over
the real number field R. We embed the plane in 3-D into P^2 so that the Euclidean coordinates (X, Y)^T are expressed by the homogeneous coordinates (X, Y, 1)^T. We also embed the image planes I (with viewpoint V) and I′ (with viewpoint V′) into P^2. Embedding I and I′ in P^2 allows us to express the transformation from I to I′ as a plane projective transformation (see Fig. 2). That is, letting the homogeneous coordinates of corresponding points in I and I′ be x and

x′, we have

x′ ≃ M x,

where M is a 3×3 nonsingular matrix and ≃ denotes equality up to a scale factor.

Figure 2. Relationship between two different images of points in a plane (V and V′ are the viewpoints, and I and I′ are the images).

In this formulation, all the information about the camera parameters is included in M; we need not assume that the camera is calibrated. M is known as the homography relating I and I′; in this paper, however, we call M a point-based transformation. Every time we observe a point in the plane and are given the correspondence between its images in I and I′, we obtain two independent linear constraints on the entries of M. We can thereby uniquely determine the point-based transformation up to a scale factor if we observe four points in the plane, no three of which are collinear. A line and a point are dual to each other in P^2; that is, lines are in one-to-one correspondence with points. When we observe a line in the plane, therefore, the transformation between the coordinates of corresponding lines in I and I′ is also expressed in terms of the point-based transformation.

3. Conic-Based Relationship Between Two Images

We exploit a relationship between two images based on conic correspondences, and then establish the link between the conic-based relationship and a point-based transformation.

3.1. Conic-Based Transformation

A conic in the image plane is uniquely expressed by a quadratic homogeneous equation:

x^T C x = 0,   (3.1)

where C is the 3×3 symmetric matrix

C ≃ [ a f e
      f b d
      e d c ].

Since in (3.1) one conic corresponds to one C up to scale and another conic corresponds to another C up to scale, conics are in one-to-one correspondence with 3×3 symmetric matrices up to scale. C has six independent entries, and only the ratios between them are significant. In other words, C bijectively corresponds to a point in P^5.
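The conic-vector/conic-matrix correspondence of (3.1) is easy to sketch numerically; the following is our own minimal illustration (the function names are ours, not the paper's):

```python
import numpy as np

def conic_matrix(q):
    """Map a conic vector q = (a, b, c, d, e, f) to the symmetric 3x3
    conic matrix of (3.1): C = [[a, f, e], [f, b, d], [e, d, c]]."""
    a, b, c, d, e, f = q
    return np.array([[a, f, e],
                     [f, b, d],
                     [e, d, c]], dtype=float)

def conic_vector(C):
    """Inverse map: read the six independent entries back off C."""
    return np.array([C[0, 0], C[1, 1], C[2, 2], C[1, 2], C[0, 2], C[0, 1]])

# The unit circle x^2 + y^2 - 1 = 0 as a conic vector:
q = np.array([1.0, 1.0, -1.0, 0.0, 0.0, 0.0])
C = conic_matrix(q)
x = np.array([1.0, 0.0, 1.0])   # homogeneous point (1, 0), on the circle
print(x @ C @ x)                # -> 0.0
```

Since C is symmetric, only the six entries collected in q are significant, and x^T C x expands to a x^2 + b y^2 + c z^2 + 2d yz + 2e xz + 2f xy.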
Specifically, corresponding lines in I and I′ satisfy

l ≃ M^T l′.

The relationship between two images with respect to points or lines is thus aggregated into a point-based transformation. When a point-based transformation between two images is given, for example, we can establish both the point correspondences and the line correspondences between the two images. We can also predict the coordinates of points or lines in the second image from those in the first image.

Hereafter, for conic (3.1), we refer to C as a conic matrix and to q = (a, b, c, d, e, f)^T (∈ P^5) as a conic vector. Conics are transformed into conics under projective transformations. Denote by q and q′ the conic vectors of the corresponding conics in I and I′, both obtained by observing a conic in the plane (see Fig. 3). Since each conic in the image plane bijectively corresponds to a point in P^5, q′ is transformed to q by

Figure 3. Two different images of conics in a plane (V and V′ are the viewpoints).

a five-dimensional projective transformation. That is, letting S be a 6×6 matrix, we have

q ≃ S q′.   (3.2)

Here S is nonsingular, by Remark 3.2 below. (3.2) gives the linear constraint on the corresponding conics in two images when we observe a conic in space. In this paper, we call S a conic-based transformation. When we observe coplanar conics from two different viewpoints, a conic-based transformation relates their two images. Every time we observe a conic in the plane and are given the correspondence between its images in I and I′, we obtain five independent linear constraints on the entries of S. We can thereby uniquely determine the conic-based transformation up to a scale factor if we observe seven conics in general position in the plane. Here seven conics are said to be in general position when the seven points in P^5 that correspond to them can serve as a projective coordinate frame for P^5. We thus obtain the following proposition, which is known in projective geometry [18].

Proposition 3.1 [18]. Suppose that we observe conics in the plane and that we are given the correspondences of their images in I and I′. Then we can uniquely determine the conic-based transformation between I and I′ up to a scale factor if we observe seven conics in general position in the plane. In particular, if nothing is known about the entries of S, at least seven conics are needed.

Remark 3.1. A conic-based transformation S belongs to an eight-dimensional submanifold of the set of all 6×6 matrices defined up to scale. This is because the two images are essentially determined by a point-based transformation M, and M has only eight independent parameters. In other words, the entries of S are not independent; algebraic constraints exist on the entries of S, and only eight entries are independent (see Appendix A for the properties of this submanifold). This implies that using these constraints allows us to reduce the number of conics needed to determine S. In principle, it should be possible to estimate S from two
conics, since only eight entries of S are independent and each conic gives five independent constraints on them.

3.2. Link Between Conic-Based Transformation and Point-Based Transformation

When points are subject to a point-based transformation M, the conic matrix C in I and the conic matrix C′ in I′ are related by

C ≃ M^T C′ M.   (3.3)

This equation indicates that each entry of C is a linear combination of the entries of C′ and that the coefficients are quadratic homogeneous in the entries of M. Each entry of S is thus quadratic homogeneous in the entries of M. In fact, if we define

Q_ijkl := M_ki M_lj   (i, j, k, l ∈ {1, 2, 3})

and rewrite (3.3) in terms of the entries of the conic vectors, it follows that

C_(ij) ≃ Σ_(kl) σ(k, l) Q_(ij)(kl) C′_(kl),   (3.4)

where C′_(kl) denotes the entry of the conic vector deduced from C′_kl (C_(ij) is defined in the same manner). The notation ( ) implies symmetrization of the indices inside the parentheses; the symmetrized indices are aligned in the order (11), (22), (33), (23), (31), (12). Moreover, the function σ is defined by

σ(k, l) := 1 (k = l), 2 (k ≠ l).

To be concrete, C′_(kl) and Q_(ij)(kl) are given as follows:

C′_(kl) := (C′_kl + C′_lk)/2 = C′_kl (since C′^T = C′),

Q_(ij)(kl) := (Q_ijkl + Q_jikl + Q_ijlk + Q_jilk)/4 = (M_ki M_lj + M_kj M_li)/2.

Here we used tensor analysis; see [17] for details of tensor analysis and the notation. From (3.2) and (3.4), we establish the link between a conic-based transformation S and its corresponding point-based transformation M:

S_(ij)(kl) ≃ σ(k, l) M_(k(i M_l)j).   (3.5)

Note that, from the definition,

M_(k(i M_l)j) = (M_ki M_lj + M_kj M_li)/2.

We obtain this expression by taking the tensor product of M with itself and then symmetrizing the rows and the columns, respectively.

Proposition 3.2. Let S be a conic-based transformation and M its corresponding point-based transformation. We denote by S(M ⊗ M) the matrix that is

deduced from the tensor product of M with itself by symmetrizing the rows and the columns, respectively. Then we have the following link between S and M:

S ≃ S(M ⊗ M) D,

where D is the 6×6 diagonal matrix whose entries are 1, 1, 1, 2, 2, and 2, i.e., D = diag(1, 1, 1, 2, 2, 2).

Remark 3.2. It follows from (3.5) that S is nonsingular iff M is nonsingular.

We see that a conic-based transformation is essentially the symmetric component of the tensor product of the corresponding point-based transformation with itself. A conic-based transformation is therefore determined by its corresponding point-based transformation. Is the converse true? Namely, can we determine the corresponding point-based transformation from a conic-based transformation? In the next section, we answer this question affirmatively.

Remark 3.3. In general, two coplanar conics algebraically have four intersection points, though these may be complex. However, we have no way, in particular in the case of complex points, to establish the correspondences between intersection points in two images (we face a combinatorial problem even when the conics actually have real intersection points). In this sense, determining the corresponding point-based transformation from a conic-based transformation is not trivial. Moreover, deriving a linear algorithm for this determination is significant from the practical point of view.

4. Decomposing a Conic-Based Transformation into a Point-Based Transformation

Our aim here is, for a given Ŝ ≃ S D^(-1), to uniquely determine M up to a scale factor. We have eight unknown entries of M and 36 quadratic constraints on them up to a scale factor. The problem is thus reduced to solving an overdetermined system of nonlinear equations. Here we develop a method that makes full use of the redundancy in Ŝ and uniquely determines M up to a scale factor with only linear computation. We remark that, for an unknown nonzero real number ρ,

Ŝ_(ij)(kl) = ρ M_(k(i M_l)j)   (4.1)

is satisfied.

4.1. Column-Based Decomposition

We derive the differences of the column vectors of M by handling the row vectors of Ŝ. Denote by u_(ij) the (ij)th row vector of Ŝ ((ij) = (11), (22), ..., (12)), and denote by c_i the ith column vector of M (i = 1, 2, 3). Defining a mapping from a vector in R^6 to a symmetric 3×3 matrix,

Mtx : (x_1, x_2, x_3, x_4, x_5, x_6)^T ↦ [ x_1 x_6 x_5
                                           x_6 x_2 x_4
                                           x_5 x_4 x_3 ],

we have

Mtx(u_(ij)) = (ρ/2)(c_i c_j^T + c_j c_i^T),

which yields

Mtx(u_(ii) + u_(jj) − 2u_(ij)) = ρ (c_i − c_j)(c_i − c_j)^T
= ρ ‖c_i − c_j‖^2 ((c_i − c_j)/‖c_i − c_j‖)((c_i − c_j)/‖c_i − c_j‖)^T.

This equation gives the spectral decomposition of Mtx(u_(ii) + u_(jj) − 2u_(ij)).

Proposition 4.1. Let i ≠ j. Mtx(u_(ii) + u_(jj) − 2u_(ij)) is a matrix of rank 1; its eigenvalue λ^c_ij of maximum absolute value and the associated eigenvector c_ij (‖c_ij‖ = 1) are given by

λ^c_ij = sgn(ρ)[√|ρ| ‖c_i − c_j‖]^2,   (4.2)
c_ij ≃ c_i − c_j,   (4.3)

where sgn(ρ) := 1 (ρ > 0), −1 (ρ < 0).

Remark 4.1. In the presence of low levels of noise, the rank of Mtx(u_(ii) + u_(jj) − 2u_(ij)) is no less than 2. In this case, the two eigenvalues other than λ^c_ij are near each other but far from λ^c_ij (λ^c_ij is separated from the other eigenvalues). That is, in magnitude, those two eigenvalues are grouped together, whereas λ^c_ij belongs to another group. λ^c_ij and c_ij are therefore computed robustly with respect to noise [1, 5].

Expressing c_i in terms of λ^c_ij and c_ij, we obtain the following column-based decomposition of M (c_ij and

c_i − c_j do not necessarily have the same sign):

c_2 − c_3 = μ^c_23 γ √|λ^c_23| c_23,   (4.4)
c_3 − c_1 = μ^c_31 γ √|λ^c_31| c_31,   (4.5)
c_1 − c_2 = μ^c_12 γ √|λ^c_12| c_12,   (4.6)

where μ^c_ij ∈ {1, −1} and γ = 1/√|ρ|. Together, (4.4), (4.5), and (4.6) give a system of linear homogeneous equations in the entries of the c_i and γ. Only six of the nine equations are independent, because the sum of (4.4), (4.5), and (4.6) is the zero vector. Paying attention to this sum allows us to determine the μ^c_ij: we first fix a certain component on the right-hand side, and then determine the μ^c_ij so that the sum of the three values of that component becomes zero. The following procedure determines the μ^c_ij, and it requires only judging which of two given values is the greater; it is therefore robust with respect to noise. We denote by c^k_ij the kth component of √|λ^c_ij| c_ij (i, j, k ∈ {1, 2, 3}).

[Determining μ^c_ij]
Step 1. Fix k ∈ {1, 2, 3} so that c^k_ij ≠ 0 for all i and j, and align the c^k_ij in order of magnitude. Let c^k_{i1 j1} ≥ c^k_{i2 j2} ≥ c^k_{i3 j3}.
Step 2. Evaluate the signs of the c^k_ij:
1. When c^k_{i3 j3} > 0, set μ^c_{i1 j1} = −1, μ^c_{i2 j2} = μ^c_{i3 j3} = 1.
2. When c^k_{i1 j1} < 0, set μ^c_{i1 j1} = μ^c_{i2 j2} = 1, μ^c_{i3 j3} = −1.
3. When c^k_{i2 j2} > 0 > c^k_{i3 j3}: if |c^k_{i1 j1}| = max_(ij) |c^k_ij|, set μ^c_{i1 j1} = μ^c_{i3 j3} = 1, μ^c_{i2 j2} = −1; if |c^k_{i3 j3}| = max_(ij) |c^k_ij|, set μ^c_{i1 j1} = μ^c_{i2 j2} = μ^c_{i3 j3} = 1.
4. When c^k_{i1 j1} > 0 > c^k_{i2 j2}: if |c^k_{i1 j1}| = max_(ij) |c^k_ij|, set μ^c_{i1 j1} = μ^c_{i2 j2} = μ^c_{i3 j3} = 1; if |c^k_{i3 j3}| = max_(ij) |c^k_ij|, set μ^c_{i1 j1} = μ^c_{i3 j3} = 1, μ^c_{i2 j2} = −1.

4.2. Row-Based Decomposition

In the same manner as in the previous subsection, we derive the differences of the row vectors of M by handling the column vectors of Ŝ. The procedure here is similar to the one in the previous subsection, but the two procedures are independent of each other. We denote by v_(kl) the (kl)th column vector of Ŝ ((kl) =
(11), (22), ..., (12)) and by r_k the kth row vector of M (k = 1, 2, 3). Then we have

Mtx(v_(kk) + v_(ll) − 2v_(kl)) = ρ (r_k − r_l)(r_k − r_l)^T
= ρ ‖r_k − r_l‖^2 ((r_k − r_l)/‖r_k − r_l‖)((r_k − r_l)/‖r_k − r_l‖)^T.

Proposition 4.2. Let k ≠ l. Mtx(v_(kk) + v_(ll) − 2v_(kl)) is a matrix of rank 1; its eigenvalue λ^r_kl of maximum absolute value and the associated eigenvector r_kl (‖r_kl‖ = 1) are given by

λ^r_kl = sgn(ρ)[√|ρ| ‖r_k − r_l‖]^2,   (4.7)
r_kl ≃ r_k − r_l.   (4.8)

From Proposition 4.2 we obtain the row-based decomposition of M:

r_2 − r_3 = μ^r_23 γ √|λ^r_23| r_23,   (4.9)
r_3 − r_1 = μ^r_31 γ √|λ^r_31| r_31,   (4.10)
r_1 − r_2 = μ^r_12 γ √|λ^r_12| r_12,   (4.11)

where μ^r_kl ∈ {1, −1}. Note that we can easily determine the μ^r_kl (cf. the procedure for determining the μ^c_ij). Together, (4.9), (4.10), and (4.11) therefore give a system of linear homogeneous equations in the entries of the r_k and γ, only six of which are independent. Although (4.4), (4.5), (4.6) and (4.9), (4.10), (4.11) each give a system of linear homogeneous equations in M and γ, we cannot regard them all together as

one system of linear homogeneous equations. This is because the column-based decomposition and the row-based decomposition are obtained independently, and because in deriving them we did not use the fact that the c_i and the r_k express the same M. In the next subsection, we take this fact into account and modify (4.4), (4.5), (4.6) and (4.9), (4.10), (4.11) so that they can all be regarded as one system. We then apply the method of least squares to the system to express M with two parameters.

4.3. Parametrizing the Point-Based Transformation

The column-based decomposition of M and the row-based decomposition of M are obtained independently. In other words, the μ^c_ij ensure the consistency of signs within (4.4), (4.5), and (4.6), whereas the μ^r_kl ensure the consistency of signs within (4.9), (4.10), and (4.11); the μ^c_ij and μ^r_kl together, however, do not necessarily ensure the consistency of signs when we regard all the equations as one system. The μ^c_ij and μ^r_kl are closely related to each other, since the c_i (i = 1, 2, 3) and the r_k (k = 1, 2, 3) express the same M in different ways. In fact, the kth component of c_i is equal to the ith component of r_k. This indicates that, for example, the order in magnitude of four values, i.e., the first and second components of c_2 − c_3 and the second and third components of r_1 − r_2, has to be considered in determining μ^c_23 and μ^r_12. The consistency of the order in their magnitude is ensured either by the pair (μ^c_23, μ^r_12) or by the pair (μ^c_23, −μ^r_12). The consistency of signs between the column-based decomposition and the row-based decomposition is ensured either by the pair {μ^c_ij} and {μ^r_kl} or by the pair {μ^c_ij} and {−μ^r_kl}. Proposition 4.3 below allows us to easily determine which pair ensures this consistency. That is, we take κ ∈ {1, −1} and choose the κ that nullifies

Σ_{(ij), i ≠ j} (the π(i, j)th component of [μ^c_ij √|λ^c_ij| c_ij + κ μ^r_ij √|λ^r_ij| r_ij]).

Here, for i ≠ j (i, j ∈ {1, 2, 3}), we define

π(i, j) := {1, 2, 3} − {i, j}.   (4.12)

If κ = 1, the
consistency is ensured by the pair {μ^c_ij} and {μ^r_kl}; otherwise, it is ensured by {μ^c_ij} and {−μ^r_kl}. When noise exists, we choose the κ that minimizes the absolute value of the above expression. Since κ is determined by judging which of two given values is the greater, this computation is also robust.

Proposition 4.3. The column vectors c_i and the row vectors r_i of M satisfy

Σ_{(ij), i ≠ j} (the π(i, j)th component of [(c_i − c_j) + (r_i − r_j)]) = 0,
Σ_{(ij), i ≠ j} (the π(i, j)th component of [(c_i − c_j) − (r_i − r_j)]) ≠ 0,

where π(i, j) is defined in (4.12) and the summation is taken over (ij) = (23), (31), (12).

We replace each μ^r_kl with κ μ^r_kl on the right-hand side of (4.9), (4.10), and (4.11). Consequently, we can regard all the equations together as one system of linear homogeneous equations in 10 unknowns (M and γ). We have 18 constraints, only eight of which are independent. The pair of γ = 0 and any point-based transformation whose entries are all equal to each other is always a solution of this system. This solution, however, is spurious, since γ ≠ 0 by definition. To set aside this spurious solution, we express M in a special form.

Proposition 4.4. Let

U := [ 1 1 1
       1 1 1
       1 1 1 ],
V := [ m_11 m_12 m_13
       m_21 m_22 m_23
       m_31 m_32 0 ]

(where m_ij ∈ R), and put

M = sU + V (s ∈ R).   (4.13)

The system of linear homogeneous equations derived from (4.4), (4.5), (4.6) and the modified (4.9), (4.10), (4.11) (i.e., with μ^r_kl replaced by κ μ^r_kl) can be expressed in the form

A f = 0 (rank A = 8; f ≠ 0),   (4.14)

where A is an 18×9 coefficient matrix and f = (m_11, m_12, ..., m_32, γ)^T ∈ R^9.

Expressing M in the form (4.13) decreases the number of unknowns in our system from 10 to 9 (indeed, s disappears). Since we know the rank of A, linear computation such as the QR factorization or the singular value decomposition (SVD) of A enables us to uniquely

determine the least-squares solution, i.e., the optimal solution in the sense that it minimizes the residual sum of squares, up to a scale factor. Denote by f̂ = (m̂_11, m̂_12, ..., m̂_32, γ̂)^T (γ̂ = 1) the solution obtained by setting γ to 1. (Any other criterion for eliminating the scale factor could be used; it leads to the same result.) Letting V̂ be the matrix derived from the m̂_ij, we now express M with the two parameters s and t in the following form (when t = 0, all the entries of M are equal to each other and γ = 0, which contradicts γ ≠ 0; therefore t ≠ 0):

M = sU + tV̂ (s, t ∈ R; t ≠ 0).   (4.15)

4.4. Determining s/t

From (4.15) it is easy to see that the scale factor eliminated by setting γ̂ = 1 is equal to t. We thus have γ = γ̂ t = t. Noting that sgn(ρ) = sgn(λ^c_ij) (cf. (4.2)), we express (4.1) in terms of s and t, from which we obtain

s^2 + ((m̂_ki m̂_lj + m̂_kj m̂_li)/2 − sgn(λ^c_ij) Ŝ_(ij)(kl)) t^2 + ((m̂_ki + m̂_lj + m̂_kj + m̂_li)/2) st = 0.   (4.16)

This is a system of quadratic homogeneous equations in s and t (the number of equations is 36). We regard (4.16) as a system of linear homogeneous equations in the independent parameters s^2, t^2, and st. Solving (4.16) with respect to s and t is then equivalent to solving (4.16) with respect to s^2, t^2, and st under the decomposition constraint

det [ s^2 st
      st  t^2 ] = 0.   (4.17)

We perform two steps to obtain a solution: solve the given system ignoring the decomposition constraint, and then modify the solution so that it satisfies the constraint. In general, the solution finally obtained in this way is not necessarily the least-squares solution of the original system. In our case, however, we can expect our solution to be almost the same as the least-squares solution of the original system, for two reasons: the system is highly redundant (36 constraints for only two parameters), and the decomposition constraint is unique and simple (cf. (4.17)). Putting g = (s^2, t^2, st)^T, we can express (4.16) in the form

B g
= 0 (rank B = 2; g ≠ 0),   (4.18)

where B is a 36×3 coefficient matrix. Linear computation such as the QR factorization or the SVD of B enables us to uniquely determine the solution of (4.18) up to a scale factor. Denote by ĝ = (ŝ^2, t̂^2, ŝt)^T (ŝt = 1) the solution obtained by setting st to 1, and define

W := [ ŝ^2 ŝt
       ŝt  t̂^2 ].

Letting W = ψ_1 w_1 w_1^T + ψ_2 w_2 w_2^T (ψ_1 ≥ ψ_2 ≥ 0) be the spectral decomposition of W, and putting W̄ := ψ_1 w_1 w_1^T, we see that W̄ has the following property; the proof is similar to that of Tsai and Huang [25].

Proposition 4.5. Of the 2×2 symmetric matrices that satisfy the decomposition constraint (4.17), W̄ is the nearest to W under the Frobenius norm.

Hence, putting w_1 = (w_1, w_2)^T, we see that w_1/w_2 gives s/t. Returning to (4.15), we can then uniquely determine M up to a scale factor. (The scale factor is fixed by requiring that the determinant of M be positive and that M be normalized under the Frobenius norm, i.e., det M > 0 and ‖M‖_F = 1.)

Remark 4.2. Whereas ψ_2 = 0 when no noise exists, we have ψ_2 > 0 in the presence of noise. Even in the presence of noise, however, ψ_1 ≫ ψ_2. This indicates that the computation of w_1 is robust.

5. Description of the Algorithm

The following algorithm determines the corresponding point-based transformation M from a given conic-based transformation S. Step 2 and Step 3 can be executed independently and in parallel. It should be noted that the spectral decomposition of a symmetric matrix is easily and robustly computed by the singular value decomposition.
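Before the step-by-step description, the algebra of Sections 3.2 and 4.1 can be sanity-checked numerically. The following sketch is our own code (not the paper's); it assumes ρ = 1 and the index order (11), (22), (33), (23), (31), (12), builds S from a random M via Proposition 3.2, and verifies both the linear constraint (3.2) and the rank-one property of Proposition 4.1:

```python
import numpy as np

IDX = [(0, 0), (1, 1), (2, 2), (1, 2), (2, 0), (0, 1)]   # (11),(22),(33),(23),(31),(12)
D = np.diag([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])

def sym_tensor_square(M):
    """S(M (x) M): the tensor product of M with itself, with rows and
    columns symmetrized, in the index order of IDX."""
    S = np.empty((6, 6))
    for r, (i, j) in enumerate(IDX):
        for c, (k, l) in enumerate(IDX):
            S[r, c] = (M[k, i] * M[l, j] + M[k, j] * M[l, i]) / 2.0
    return S

def Mtx(x):
    """The mapping of Section 4.1: a 6-vector to a symmetric 3x3 matrix."""
    x1, x2, x3, x4, x5, x6 = x
    return np.array([[x1, x6, x5], [x6, x2, x4], [x5, x4, x3]])

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
S = sym_tensor_square(M) @ D            # Proposition 3.2: S = S(M (x) M) D

# Check (3.2): q ~ S q' for a random conic C', with C = M^T C' M as in (3.3).
Cp = rng.standard_normal((3, 3))
Cp = (Cp + Cp.T) / 2.0
C = M.T @ Cp @ M
vec = lambda X: np.array([X[0, 0], X[1, 1], X[2, 2], X[1, 2], X[0, 2], X[0, 1]])
assert np.allclose(S @ vec(Cp), vec(C))

# Check Proposition 4.1: Mtx(u_ii + u_jj - 2 u_ij) is rank one with
# eigenvector proportional to c_i - c_j (columns of M); here (i, j) = (1, 2).
Shat = S @ np.linalg.inv(D)             # Shat = S D^{-1}, so rho = 1
u11, u22, u12 = Shat[0], Shat[1], Shat[5]   # rows (11), (22), (12)
R = Mtx(u11 + u22 - 2.0 * u12)
d = M[:, 0] - M[:, 1]                   # c_1 - c_2
assert np.allclose(R, np.outer(d, d))
```

With noiseless data the assertions hold exactly up to floating-point error; under noise, R is only approximately rank one, which is where the eigenvalue-separation argument of Remark 4.1 comes in.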

Step 0 [Computing a conic-based transformation]. Using (3.2), compute S in the least-squares sense from the given conic correspondences.

Step 1. Ŝ := S D^(-1).

Step 2 [Column-based decomposition].
(1) For (ij) = (23), (31), (12), compute the spectral decomposition of Mtx(u_(ii) + u_(jj) − 2u_(ij)). Let λ^c_ij be the eigenvalue of maximum absolute value and c_ij the eigenvector associated with λ^c_ij.
(2) Determine μ^c_ij according to the procedure in Section 4.1.

Step 3 [Row-based decomposition].
(1) For (kl) = (23), (31), (12), compute the spectral decomposition of Mtx(v_(kk) + v_(ll) − 2v_(kl)). Let λ^r_kl be the eigenvalue of maximum absolute value and r_kl the eigenvector associated with λ^r_kl.
(2) Determine μ^r_kl (cf. the procedure for determining μ^c_ij).

Step 4 [Parametrizing the point-based transformation].
(1) For κ ∈ {1, −1}, choose the κ that minimizes the absolute value of

Σ_{(ij), i ≠ j} (the π(i, j)th component of [μ^c_ij √|λ^c_ij| c_ij + κ μ^r_ij √|λ^r_ij| r_ij]).

Let κ* be the solution.
(2) From (4.4), (4.5), (4.6) and the modified (4.9), (4.10), (4.11) (i.e., with μ^r_kl replaced by κ* μ^r_kl), construct the 18×9 matrix A as in (4.14).
(3) Compute the least-squares solution of (4.14). Let f̂ = (m̂_11, m̂_12, ..., m̂_32, γ̂)^T (γ̂ = 1) be the solution.
(4) Put

U := [ 1 1 1
       1 1 1
       1 1 1 ],
V̂ := [ m̂_11 m̂_12 m̂_13
       m̂_21 m̂_22 m̂_23
       m̂_31 m̂_32 0 ].

Step 5 [Determining the ratio s/t].
(1) From (4.16), construct the 36×3 matrix B as in (4.18).
(2) Compute the least-squares solution of (4.18). Let ĝ = (ŝ^2, t̂^2, ŝt)^T (ŝt = 1) be the solution.
(3) Put

W := [ ŝ^2 ŝt
       ŝt  t̂^2 ].

(4) Compute the spectral decomposition of W. Let w_1 = (w_1, w_2)^T be the eigenvector associated with the eigenvalue of maximum absolute value.
(5) Put

M := ξ ((w_1/w_2) U + V̂).   (5.1)

Step 6. Determine ξ so that det M > 0 and ‖M‖_F = 1. Substitute the computed ξ into (5.1) to obtain M.

6. Experimental Results

On the basis of the algorithm above, we present two kinds of experiments: one is on noise sensitivity, using numerical data.
The other is on conic-based image transfer, using real images. In both cases, our experimental results show that the outputs of the algorithm are stable and robust with respect to noise.

6.1. Robustness Experiments Using Numerical Data

We conducted two kinds of evaluations of the noise sensitivity of the proposed algorithm. First, we perturbed each datum based on its magnitude. Second, we perturbed each datum based on the variance of the distribution characterized by all the data. In the first experiment, we created a 3×3 matrix M, each entry of which was randomly and independently generated within the interval [−500.0, 500.0]. We used this M to compute the 6×6 matrix S(M ⊗ M). We then computed S by multiplying S(M ⊗ M) by an unknown scale factor, randomly generated within the interval [−100.0, 100.0]. Next, we added Gaussian noise to each entry s_ij of S (i, j ∈ {1, 2, ..., 6}) independently. Namely, the s_ij were perturbed by independently adding Gaussian noise, and we

Figure 4. Mean under several noise levels (Experiment 1).

obtained the perturbed matrix, denoted S̃ here. The mean of the Gaussian distribution was set to 0.0 and its standard deviation to r |s_ij|, where r represents the noise level; r was changed in steps of 0.01 from 0.0 (noiseless) to 0.1 (10% noise). Here we evaluate the accuracy of each entry of the computed conic-based transformation in terms of noise levels based on the magnitude of the entry concerned. By applying our algorithm to S̃, we estimated M. For each noise level, we iterated the procedure "compute S̃ by adding noise to S, and apply our algorithm to S̃ to estimate M" 50 times. The mean values of the estimated M over the 50 trials are shown in Fig. 4. We also calculated the coefficients of variation, i.e., the ratio of the standard deviation to the mean, which are shown in Fig. 5. In the second experiment, only the way of adding noise to S was changed: the perturbation of each datum was based on the variance of the distribution characterized by all the data. That is, each s_ij was perturbed by independently adding Gaussian noise whose mean was 0.0 and whose standard deviation was r σ_S, where σ_S is the standard deviation calculated from all the entries of S. As a result, the noise levels here indicate the ratio of the noise to the size of the distribution characterized by the entries of S. The mean values of the estimated M over 50 trials and the coefficients of variation in this case are shown in Figs. 6 and 7, respectively. Figures 4 and 6 show that each entry of M is accurately estimated up to the second digit of precision under all noise levels in both experiments. Depending on the noise level, the third digit of precision changes, but the change is limited to at most 2% of each value. We can thus conclude that M is accurately estimated even in the presence of noise; our algorithm is robust. Figures 5 and 7, on the other hand, show that the coefficients of variation increase (almost) monotonically as the noise level increases. This increase
is, however, commensurate with the noise level. This indicates that our algorithm estimates M stably. We therefore found numerically that our algorithm performs stably in the presence of noise and that its outputs are reliable.

6.2 Experiments Using Real Images

We applied our algorithm to real images to assess its effectiveness for conic-based image transfer. We randomly put 10 different conics on a planar board⁶ and then acquired two different images of the board with an uncalibrated camera (see Fig. 8). We

Figure 5. Coefficients of variation under several noise levels (Experiment 1).
Figure 6. Means under several noise levels (Experiment 2).
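The perturbation protocol of the two synthetic experiments can be sketched as follows. This is a minimal illustration, not the paper's code: `estimate_M` is a placeholder for the S-to-M algorithm described earlier, and the function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_entrywise(S, r, rng):
    """Experiment 1: each entry s_ij receives zero-mean Gaussian
    noise whose standard deviation is r * |s_ij|."""
    return S + rng.normal(0.0, 1.0, S.shape) * (r * np.abs(S))

def perturb_global(S, r, rng):
    """Experiment 2: every entry receives zero-mean Gaussian noise
    whose standard deviation is r * sigma_S, the standard deviation
    of all 36 entries of S."""
    return S + r * S.std() * rng.normal(0.0, 1.0, S.shape)

def coefficient_of_variation(estimates):
    """Per-entry ratio of the standard deviation to the mean over a
    list of estimated matrices."""
    stack = np.stack(estimates)
    return stack.std(axis=0) / stack.mean(axis=0)

def evaluate(S, estimate_M, perturb, trials=50):
    """Noise levels r = 0.00, 0.01, ..., 0.10; at each level, perturb
    S `trials` times, run the S -> M algorithm on each noisy copy,
    and record the mean and coefficient of variation of the
    estimates."""
    results = {}
    for r in np.linspace(0.0, 0.1, 11):
        Ms = [estimate_M(perturb(S, r, rng)) for _ in range(trials)]
        results[r] = (np.mean(Ms, axis=0), coefficient_of_variation(Ms))
    return results
```

At noise level r = 0 both schemes leave S unchanged, so the harness reproduces the noiseless baseline before the noisy sweeps.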

Figure 7. Coefficients of variation under several noise levels (Experiment 2).
Figure 8. Two uncalibrated images of 10 conics on a planar board: (a) image 1; (b) image 2.

extracted the 10 conics from image 1 and seven conics, i.e., conics 0, 1, 2, 3, 4, 5 and 7, from image 2. Here each conic is named by the number inside it. We note that the seven conics in image 2 were randomly selected. These results are shown in Fig. 9. For each extracted conic, we randomly selected by hand about 35 points on the conic and then applied the method of least squares to the points to determine its conic vector. Next, we gave the correspondences between the seven conics 0, 1, 2, 3, 4, 5 and 7 in the two images. We used their conic vectors to estimate the conic-based transformation S between images 1 and 2. We then input the estimated S to our algorithm to calculate the point-based transformation M between the two images. We used this M to transfer all the conics in image 1 to those in image 2. To be more specific, for one conic in

Figure 9. Extracted conics and lines of the two uncalibrated images: (a) conics and lines of image 1; (b) conics of image 2. (In image 2, only seven conics (0, 1, 2, 3, 4, 5, and 7) were randomly selected to determine the conic-based transformation between the two images.)
Figure 10. Transferred conics and lines onto image 2 (a) and the extracted conics and lines of image 2 (b).

image 1, we transferred the points on the conic that were selected in determining its conic vector to points in image 2. This transformation was conducted based on the calculated M. We then applied the method of least squares to the transferred points to determine the conic vector in image 2, called the transferred conic vector. The conic having the transferred conic vector was superimposed on image 2. This procedure was applied to all the conics in image 1. The results are shown in Fig. 10(a). (When we directly used the estimated S to transfer the conics in image 1 to those in image 2, we obtained (almost) the same results.) As the counterparts, we extracted all the conics in image 2 (Fig. 10(b)) and calculated their conic vectors. A comparison between the transferred conic vectors and the extracted conic vectors is shown in Table 1, where conic vectors are normalized to unit length. In the same way, we extracted three edges of the board in image 1 (Fig. 9(a)) and determined the slant and y-intercept of each extracted edge. We also transferred, based on the calculated M, these edges to those in image 2. The transferred edges and extracted edges are also shown in Fig. 10. Since we applied the line equation to each edge to determine its slant and y-intercept, the transferred edges have no terminal points. A comparison of the slants and y-intercepts between the transferred edges and the extracted edges is shown in Table 2. Here, the three lines in Fig. 10 were numbered according to their locations relative to the conics. Namely, line 1 is the line nearest to conic 1; line 0 is the one

Table 1. Comparison between the transferred conic vectors and the extracted conic vectors.

Conic no.  Transferred conic vector                        Extracted conic vector
0  (0.394, 0.751, 0.253, 0.421, 0.153, 0.122)^T    (0.394, 0.753, 0.251, 0.420, 0.154, 0.123)^T
1  (0.595, 0.668, 0.243, 0.274, 0.238, 0.0994)^T   (0.597, 0.667, 0.241, 0.273, 0.239, 0.0970)^T
2  (0.595, 0.530, 0.323, 0.288, 0.390, 0.160)^T    (0.581, 0.530, 0.332, 0.293, 0.395, 0.171)^T
3  (0.859, 0.181, 0.219, 0.151, 0.377, 0.129)^T    (0.864, 0.184, 0.215, 0.149, 0.371, 0.118)^T
4  (0.248, 0.799, 0.299, 0.442, 0.0494, 0.116)^T   (0.257, 0.796, 0.298, 0.440, 0.0487, 0.121)^T
5  (0.601, 0.366, 0.488, 0.248, 0.452, 0.0186)^T   (0.600, 0.369, 0.488, 0.249, 0.451, 0.0171)^T
6  (0.581, 0.694, 0.196, 0.234, 0.286, 0.0675)^T   (0.607, 0.677, 0.190, 0.229, 0.284, 0.0579)^T
7  (0.687, 0.629, 0.165, 0.250, 0.205, 0.0272)^T   (0.659, 0.652, 0.171, 0.260, 0.207, 0.0156)^T
8  (0.506, 0.632, 0.369, 0.294, 0.348, )^T         (0.508, 0.636, 0.365, 0.292, 0.345, )^T
9  (0.636, 0.449, 0.428, 0.311, 0.333, 0.0522)^T   (0.636, 0.443, 0.432, 0.312, 0.336, 0.0454)^T

Table 2. Comparison between the parameters of the transferred lines and those of the extracted lines: for each line, the transferred slant and y-intercept against the extracted slant and y-intercept.

nearest to both conic 0 and conic 8; line 2 is the one nearest to both conic 2 and conic 9.

From Fig. 10 we see that all the conics and edges in image 1 are almost correctly transferred to those in image 2. In particular, the three conics (i.e., conics 6, 8 and 9) that were not actually used to determine the conic-based transformation have fairly correct fittings. This observation is numerically supported by Tables 1 and 2. The slightly incorrect fitting of conics and lines in Fig. 10(a) is due to numerical errors. We may thus conclude that conic correspondences enable us to transfer conics and lines in one image to another, and that our algorithm performs stably for real images.

7 Conclusions

We clarified the relationship, based on conic correspondences, between two perspective images acquired by an uncalibrated camera. Namely, we showed that the parameters representing a conic
in one image are transformed by a five-dimensional projective transformation to the parameters representing the corresponding conic in another image. We then established the link between a conic-based transformation and its corresponding point-based transformation, i.e., the corresponding homography: a conic-based transformation is essentially equivalent to the symmetric component of the tensor product of its corresponding point-based transformation and itself. We developed an algorithm that uses only linear computation to uniquely determine, from a conic-based transformation, the corresponding point-based transformation up to a scale factor. Our algorithm makes full use of the redundancy involved in a conic-based transformation to determine the point-based transformation with only linear computation. The algorithm is simple and ensures the uniqueness of the solution. Furthermore, it requires only two kinds of robust computation: one is the singular value decomposition and the other is judging which of two given values is the greater. In fact, our experimental results showed that the algorithm performs stably in the presence of noise and that its outputs are reliable.

Conic correspondences thus enable us to easily handle both points and lines in uncalibrated images of a planar object: when we observe a planar object, we can transfer one image to another image, and we can establish point correspondences and line correspondences between two images. Extracting points from images is sensitive to noise, while conics are robustly and exactly extracted from images (see [28] for a survey of conic fitting); furthermore, finding correspondences between conics is much easier than finding them between points. Therefore, we can expect that computation based on a conic-based transformation eventually results in a more stable point-based transformation than one computed from just extracted points. Evaluating their differences in stability is left for future research. Also left for
future investigations are (1) theoretical analysis of the rounding errors in the actual implementation of the

proposed algorithm and (2) the evaluation of the practical efficiency of the algorithm in more realistic situations.

Appendix A: Algebraic Properties of the Conic-Based Transformation

As we have seen, S belongs to an eight-dimensional submanifold of the set of all 6 × 6 matrices up to scale. We here investigate this submanifold in detail and give the necessary and sufficient conditions on the entries of S for belonging to it. A 6 × 6 matrix up to scale has 36 − 1 = 35 degrees of freedom and an eight-dimensional manifold has 8 degrees of freedom. To give the necessary and sufficient conditions on S for belonging to the submanifold constructed essentially by M, it is thus sufficient to derive 35 − 8 = 27 algebraically independent constraints on the entries of S.

In Section 3.2 we clarified the relationship between S and M as follows:

    S_(ij)(kl) ∝ σ(k,l) M_(k(i) M_(l)j),   (35)

where (ij) = (11), (22), (33), (23), (31), (12) and (kl) = (11), (22), (33), (23), (31), (12). Without loss of generality, we may handle Ŝ instead of S since Ŝ = SD⁻¹:

    Ŝ_(ij)(kl) = ρ M_(k(i) M_(l)j)   (ρ ≠ 0)
               = ρ (M_ki M_lj + M_kj M_li) / 2.   (41)

We pay attention to the particular cases where i = j or k = l, from which we have

    Ŝ_(ii)(kk) = ρ M_ki²,   Ŝ_(ii)(kl) = ρ M_ki M_li,   Ŝ_(ij)(kk) = ρ M_ki M_kj.

It is now easy to see that the entries of Ŝ satisfy the following relationships:

    2Ŝ_(ij)(kl) = Ŝ_(ii)(kl) Ŝ_(ij)(kk) / Ŝ_(ii)(kk) + Ŝ_(ii)(kl) Ŝ_(ij)(ll) / Ŝ_(ii)(ll),   (A1)
    2Ŝ_(ij)(kl) = Ŝ_(jj)(kl) Ŝ_(ij)(kk) / Ŝ_(jj)(kk) + Ŝ_(jj)(kl) Ŝ_(ij)(ll) / Ŝ_(jj)(ll),   (A2)
    2Ŝ_(ij)(kl) = Ŝ_(ii)(kl) Ŝ_(ij)(kk) / Ŝ_(ii)(kk) + Ŝ_(jj)(kl) Ŝ_(ij)(kk) / Ŝ_(jj)(kk),   (A3)
    2Ŝ_(ij)(kl) = Ŝ_(ii)(kl) Ŝ_(ij)(ll) / Ŝ_(ii)(ll) + Ŝ_(jj)(kl) Ŝ_(ij)(ll) / Ŝ_(jj)(ll).   (A4)

When i ≠ j and k ≠ l, these equations make sense and indicate that each Ŝ_(ij)(kl) is expressed in four different ways in terms of other entries of Ŝ. (The equations above may be rewritten as cubic constraints on the entries of Ŝ.) It is not difficult to see that any three of the four expressions are algebraically
independent (from any three expressions, the remaining one can be derived). We thus have 9 × 3 = 27 algebraically independent (cubic) constraints on the entries of Ŝ, which are the necessary and sufficient constraints on the entries of Ŝ for belonging to the eight-dimensional submanifold essentially constructed by M. In fact, 6 × 6 matrices up to scale satisfying these 27 constraints have a one-to-one correspondence to homographies. The concrete description of the one-to-one mapping is given by (35).

Acknowledgments

The author is thankful to Kiyotake Yachi and Shohei Nobuhara for helping to perform the experiments with real images. He is also grateful to the anonymous referees for their helpful comments in improving this paper.

Notes

1. This transformation is widely known as a homography in the computer vision literature. In this paper, however, we use the terminology "point-based transformation" to stress the distinction from a transformation based on conic correspondences.
2. We use column vectors and denote by x^T the transposition of a vector x.
3. M_ki or M^ki denotes the (k, i) entry of a matrix M.
4. ‖x‖ denotes the Euclidean norm of a vector x.
5. It is easy to see that A is a sparse matrix.
6. The color of the board was coincidentally similar to that of the background. This unfortunately causes difficulty in observing the board.

References

1. F. Chatelin, Valeurs propres de matrices, Masson: Paris, 1988 (W. Ledermann (trans.), Eigenvalues of Matrices, John Wiley and Sons: Chichester, 1993).
2. H.S.M. Coxeter, Projective Geometry, 2nd edn., Springer-Verlag: New York, 1987.

3. O. Faugeras and B. Mourrain, On the geometry and algebra of the points and lines correspondences between N images, in Proc. of the 5th ICCV, 1995.
4. O.D. Faugeras and L. Robert, What can two images tell us about a third one?, Int. J. of Computer Vision, Vol. 18, No. 1, pp. 5–19.
5. G.H. Golub and C.F. van Loan, Matrix Computation, 2nd edn., Johns Hopkins Univ. Press.
6. R. Hartley, Lines and points in three views: An integrated approach, in Proc. of ARPA Image Understanding Workshop, 1994.
7. R. Hartley, A linear method for reconstruction from lines and points, in Proc. of the 5th ICCV, 1995.
8. R. Hartley, Lines and points in three views and the trifocal tensor, Int. J. of Computer Vision, Vol. 22, No. 2.
9. R. Hartley, Computation of the quadrifocal tensor, in Proc. of the 5th ECCV, Vol. 1, 1998.
10. A. Heyden, A common framework for multiple view tensors, in Proc. of the 5th ECCV, Vol. 1, 1998.
11. A. Heyden, Reduced multilinear constraints: Theory and experiments, Int. J. of Computer Vision, Vol. 30, No. 1, pp. 5–26.
12. S.D. Ma, Conics-based stereo, motion estimation, and pose determination, Int. J. of Computer Vision, Vol. 10, No. 1, pp. 7–25.
13. S.D. Ma, S.H. Si, and Z.Y. Chen, Quadric curve based stereo, in Proc. of the 11th ICPR, Vol. 1, 1992.
14. L. Quan, Invariant of a pair of non-coplanar conics in space: Definition, geometric interpretation and computation, in Proc. of the 5th ICCV, 1995.
15. L. Quan, Conic reconstruction and correspondence from two views, IEEE Trans. on PAMI, Vol. 18, No. 2.
16. C. Schmid and A. Zisserman, The geometry and matching of curves in multiple views, in Proc. of the 5th ECCV, Vol. 1, 1998.
17. J.A. Schouten, Tensor Analysis for Physicists, 2nd edn., Dover: New York.
18. J.G. Semple and G.T. Kneebone, Algebraic Projective Geometry, Clarendon Press: Oxford, 1952 (reprinted in 1979).
19. A. Shashua, Trilinearity in visual recognition by alignment, in Proc. of the 3rd ECCV, Vol. 1, 1994.
20. A. Shashua, Algebraic functions for recognition, IEEE Trans. on PAMI, Vol. 17, No. 8.
21. A. Shashua and M. Werman, Trilinearity of three perspective views and its associated tensor, in Proc. of the 5th ICCV, 1995.
22. A. Sugimoto, Conic based image transfer for 2-D objects: A linear algorithm, in Proc. of the 3rd ACCV, Vol. II, 1998.
23. A. Sugimoto and T. Matsuyama, Multilinear relationships between the coordinates of corresponding image conics, in Proc. of the 15th ICPR.
24. B. Triggs, Matching constraints and the joint image, in Proc. of the 5th ICCV, 1995.
25. R.Y. Tsai and T.S. Huang, Uniqueness and estimation of three dimensional motion parameters of rigid objects with curved surfaces, IEEE Trans. on PAMI, Vol. 6, No. 1, pp. 13–27.
26. I. Weiss, 3D curve reconstruction from uncalibrated cameras, Technical Report CS-TR-3605, Computer Vision Laboratory, Center for Automation Research, University of Maryland.
27. I. Weiss, 3-D curve reconstruction from uncalibrated cameras, in Proc. of ICPR, Vol. 1, 1996.
28. Z. Zhang, Parameter estimation techniques: A tutorial with application to conic fitting, Image and Vision Computing, Vol. 15, No. 1, pp. 59–76, 1997.

Akihiro Sugimoto received his B.S. degree, M.S. degree and Dr. Eng. in mathematical engineering from the University of Tokyo, Japan, in 1987, 1989 and 1996, respectively. He joined Hitachi Advanced Research Laboratory in 1989 and moved to ATR (Advanced Telecommunications Research Institute), Japan, in 1991, where he worked for the Auditory and Visual Perception Labs and for the Human Information Processing Labs, and where he entered the research field of computer vision. In 1995, he returned to Hitachi Advanced Research Laboratory, where he led a project on content-based image retrieval supported by the Ministry of International Trade and Industry in Japan. Since 1999, he has worked for Kyoto University, where he is currently a lecturer at the Graduate School of Informatics. Dr. Sugimoto's research interest is in mathematical methods in/for engineering. In particular, his current main research interests are discrete mathematics, approximation algorithms, vision geometry and modeling of human vision.
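As a numerical sanity check, not part of the original paper, one can generate a random 3 × 3 matrix M, build Ŝ from it with ρ = 1 as in (41), and confirm relation (A1) entry by entry over the index pairs used above; a sketch (function and variable names are ours):

```python
import numpy as np

# Index pairs in the order used in the paper: (11),(22),(33),(23),(31),(12),
# written 0-based here.
PAIRS = [(0, 0), (1, 1), (2, 2), (1, 2), (2, 0), (0, 1)]

def shat_entry(M, i, j, k, l):
    """Entry Shat_(ij)(kl) = (M_ki*M_lj + M_kj*M_li)/2, taking rho = 1."""
    return 0.5 * (M[k, i] * M[l, j] + M[k, j] * M[l, i])

def check_constraints(M):
    """Verify relation (A1) for every ((ij),(kl)) with i != j and k != l:
    2*Shat_(ij)(kl) = Shat_(ii)(kl)*Shat_(ij)(kk)/Shat_(ii)(kk)
                    + Shat_(ii)(kl)*Shat_(ij)(ll)/Shat_(ii)(ll)."""
    s = lambda i, j, k, l: shat_entry(M, i, j, k, l)
    for (i, j) in PAIRS:
        for (k, l) in PAIRS:
            if i == j or k == l:
                continue  # the relation only makes sense off the diagonal pairs
            lhs = 2.0 * s(i, j, k, l)
            rhs = (s(i, i, k, l) * s(i, j, k, k) / s(i, i, k, k)
                   + s(i, i, k, l) * s(i, j, l, l) / s(i, i, l, l))
            if not np.isclose(lhs, rhs):
                return False
    return True
```

Expanding the right-hand side with the special-case entries gives ρM_li M_kj + ρM_ki M_lj, which is exactly 2Ŝ_(ij)(kl), so the check passes for any M with nonzero entries.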


Quadrifocal Tensor. Amnon Shashua and Lior Wolf. The Hebrew University, Jerusalem 91904, Israel.

Quadrifocal Tensor. Amnon Shashua and Lior Wolf. The Hebrew University, Jerusalem 91904, Israel. On the Structure and Properties of the Quadrifocal Tensor Amnon Shashua and Lior Wolf School of Computer Science and Engineering, The Hebrew University, Jerusalem 91904, Israel e-mail: fshashua,lwolfg@cs.huji.ac.il

More information

15 Singular Value Decomposition

15 Singular Value Decomposition 15 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Principal Component Analysis

Principal Component Analysis Machine Learning Michaelmas 2017 James Worrell Principal Component Analysis 1 Introduction 1.1 Goals of PCA Principal components analysis (PCA) is a dimensionality reduction technique that can be used

More information

Chapter 7. Linear Algebra: Matrices, Vectors,

Chapter 7. Linear Algebra: Matrices, Vectors, Chapter 7. Linear Algebra: Matrices, Vectors, Determinants. Linear Systems Linear algebra includes the theory and application of linear systems of equations, linear transformations, and eigenvalue problems.

More information

Chapter 2 Canonical Correlation Analysis

Chapter 2 Canonical Correlation Analysis Chapter 2 Canonical Correlation Analysis Canonical correlation analysis CCA, which is a multivariate analysis method, tries to quantify the amount of linear relationships etween two sets of random variales,

More information

When Fisher meets Fukunaga-Koontz: A New Look at Linear Discriminants

When Fisher meets Fukunaga-Koontz: A New Look at Linear Discriminants When Fisher meets Fukunaga-Koontz: A New Look at Linear Discriminants Sheng Zhang erence Sim School of Computing, National University of Singapore 3 Science Drive 2, Singapore 7543 {zhangshe, tsim}@comp.nus.edu.sg

More information

Review of Linear Algebra

Review of Linear Algebra Review of Linear Algebra Definitions An m n (read "m by n") matrix, is a rectangular array of entries, where m is the number of rows and n the number of columns. 2 Definitions (Con t) A is square if m=

More information

Rectangular Systems and Echelon Forms

Rectangular Systems and Echelon Forms CHAPTER 2 Rectangular Systems and Echelon Forms 2.1 ROW ECHELON FORM AND RANK We are now ready to analyze more general linear systems consisting of m linear equations involving n unknowns a 11 x 1 + a

More information

Representations of Sp(6,R) and SU(3) carried by homogeneous polynomials

Representations of Sp(6,R) and SU(3) carried by homogeneous polynomials Representations of Sp(6,R) and SU(3) carried by homogeneous polynomials Govindan Rangarajan a) Department of Mathematics and Centre for Theoretical Studies, Indian Institute of Science, Bangalore 560 012,

More information

Singular Value Decomposition Compared to cross Product Matrix in an ill Conditioned Regression Model

Singular Value Decomposition Compared to cross Product Matrix in an ill Conditioned Regression Model International Journal of Statistics and Applications 04, 4(): 4-33 DOI: 0.593/j.statistics.04040.07 Singular Value Decomposition Compared to cross Product Matrix in an ill Conditioned Regression Model

More information

Linear Algebra and Eigenproblems

Linear Algebra and Eigenproblems Appendix A A Linear Algebra and Eigenproblems A working knowledge of linear algebra is key to understanding many of the issues raised in this work. In particular, many of the discussions of the details

More information

Statistical Geometry Processing Winter Semester 2011/2012

Statistical Geometry Processing Winter Semester 2011/2012 Statistical Geometry Processing Winter Semester 2011/2012 Linear Algebra, Function Spaces & Inverse Problems Vector and Function Spaces 3 Vectors vectors are arrows in space classically: 2 or 3 dim. Euclidian

More information

Total least squares. Gérard MEURANT. October, 2008

Total least squares. Gérard MEURANT. October, 2008 Total least squares Gérard MEURANT October, 2008 1 Introduction to total least squares 2 Approximation of the TLS secular equation 3 Numerical experiments Introduction to total least squares In least squares

More information

CSE 252B: Computer Vision II

CSE 252B: Computer Vision II CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribe: Hamed Masnadi Shirazi, Solmaz Alipour LECTURE 5 Relationships between the Homography and the Essential Matrix 5.1. Introduction In practice,

More information

ON THE QR ITERATIONS OF REAL MATRICES

ON THE QR ITERATIONS OF REAL MATRICES Unspecified Journal Volume, Number, Pages S????-????(XX- ON THE QR ITERATIONS OF REAL MATRICES HUAJUN HUANG AND TIN-YAU TAM Abstract. We answer a question of D. Serre on the QR iterations of a real matrix

More information

Discriminative Direction for Kernel Classifiers

Discriminative Direction for Kernel Classifiers Discriminative Direction for Kernel Classifiers Polina Golland Artificial Intelligence Lab Massachusetts Institute of Technology Cambridge, MA 02139 polina@ai.mit.edu Abstract In many scientific and engineering

More information

A New Algorithm for Solving Cross Coupled Algebraic Riccati Equations of Singularly Perturbed Nash Games

A New Algorithm for Solving Cross Coupled Algebraic Riccati Equations of Singularly Perturbed Nash Games A New Algorithm for Solving Cross Coupled Algebraic Riccati Equations of Singularly Perturbed Nash Games Hiroaki Mukaidani Hua Xu and Koichi Mizukami Faculty of Information Sciences Hiroshima City University

More information

Numerical Analysis: Solving Systems of Linear Equations

Numerical Analysis: Solving Systems of Linear Equations Numerical Analysis: Solving Systems of Linear Equations Mirko Navara http://cmpfelkcvutcz/ navara/ Center for Machine Perception, Department of Cybernetics, FEE, CTU Karlovo náměstí, building G, office

More information

2. Preliminaries. x 2 + y 2 + z 2 = a 2 ( 1 )

2. Preliminaries. x 2 + y 2 + z 2 = a 2 ( 1 ) x 2 + y 2 + z 2 = a 2 ( 1 ) V. Kumar 2. Preliminaries 2.1 Homogeneous coordinates When writing algebraic equations for such geometric objects as planes and circles, we are used to writing equations that

More information

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations. POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems

More information

Singular Value Decompsition

Singular Value Decompsition Singular Value Decompsition Massoud Malek One of the most useful results from linear algebra, is a matrix decomposition known as the singular value decomposition It has many useful applications in almost

More information

SIAM Conference on Applied Algebraic Geometry Daejeon, South Korea, Irina Kogan North Carolina State University. Supported in part by the

SIAM Conference on Applied Algebraic Geometry Daejeon, South Korea, Irina Kogan North Carolina State University. Supported in part by the SIAM Conference on Applied Algebraic Geometry Daejeon, South Korea, 2015 Irina Kogan North Carolina State University Supported in part by the 1 Based on: 1. J. M. Burdis, I. A. Kogan and H. Hong Object-image

More information

3D Computer Vision - WT 2004

3D Computer Vision - WT 2004 3D Computer Vision - WT 2004 Singular Value Decomposition Darko Zikic CAMP - Chair for Computer Aided Medical Procedures November 4, 2004 1 2 3 4 5 Properties For any given matrix A R m n there exists

More information

Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian Matrices Using Newton s Method

Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian Matrices Using Newton s Method Journal of Mathematics Research; Vol 6, No ; 014 ISSN 1916-9795 E-ISSN 1916-9809 Published by Canadian Center of Science and Education Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian

More information

Jim Lambers MAT 610 Summer Session Lecture 2 Notes

Jim Lambers MAT 610 Summer Session Lecture 2 Notes Jim Lambers MAT 610 Summer Session 2009-10 Lecture 2 Notes These notes correspond to Sections 2.2-2.4 in the text. Vector Norms Given vectors x and y of length one, which are simply scalars x and y, the

More information

Kazuhiro Fukui, University of Tsukuba

Kazuhiro Fukui, University of Tsukuba Subspace Methods Kazuhiro Fukui, University of Tsukuba Synonyms Multiple similarity method Related Concepts Principal component analysis (PCA) Subspace analysis Dimensionality reduction Definition Subspace

More information

1 Number Systems and Errors 1

1 Number Systems and Errors 1 Contents 1 Number Systems and Errors 1 1.1 Introduction................................ 1 1.2 Number Representation and Base of Numbers............. 1 1.2.1 Normalized Floating-point Representation...........

More information

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03 Page 5 Lecture : Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 008/10/0 Date Given: 008/10/0 Inner Product Spaces: Definitions Section. Mathematical Preliminaries: Inner

More information

The Solution of Linear Systems AX = B

The Solution of Linear Systems AX = B Chapter 2 The Solution of Linear Systems AX = B 21 Upper-triangular Linear Systems We will now develop the back-substitution algorithm, which is useful for solving a linear system of equations that has

More information

Dimensionality Reduction

Dimensionality Reduction 394 Chapter 11 Dimensionality Reduction There are many sources of data that can be viewed as a large matrix. We saw in Chapter 5 how the Web can be represented as a transition matrix. In Chapter 9, the

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

Singular Value Decomposition

Singular Value Decomposition Chapter 6 Singular Value Decomposition In Chapter 5, we derived a number of algorithms for computing the eigenvalues and eigenvectors of matrices A R n n. Having developed this machinery, we complete our

More information

Introduction to Machine Learning

Introduction to Machine Learning 10-701 Introduction to Machine Learning PCA Slides based on 18-661 Fall 2018 PCA Raw data can be Complex, High-dimensional To understand a phenomenon we measure various related quantities If we knew what

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

High Accuracy Fundamental Matrix Computation and Its Performance Evaluation

High Accuracy Fundamental Matrix Computation and Its Performance Evaluation High Accuracy Fundamental Matrix Computation and Its Performance Evaluation Kenichi Kanatani Department of Computer Science, Okayama University, Okayama 700-8530 Japan kanatani@suri.it.okayama-u.ac.jp

More information

Dynamic P n to P n Alignment

Dynamic P n to P n Alignment Dynamic P n to P n Alignment Amnon Shashua and Lior Wolf School of Engineering and Computer Science, the Hebrew University of Jerusalem, Jerusalem, 91904, Israel {shashua,lwolf}@cs.huji.ac.il We introduce

More information

Eventually reducible matrix, eventually nonnegative matrix, eventually r-cyclic

Eventually reducible matrix, eventually nonnegative matrix, eventually r-cyclic December 15, 2012 EVENUAL PROPERIES OF MARICES LESLIE HOGBEN AND ULRICA WILSON Abstract. An eventual property of a matrix M C n n is a property that holds for all powers M k, k k 0, for some positive integer

More information

The Principal Component Analysis

The Principal Component Analysis The Principal Component Analysis Philippe B. Laval KSU Fall 2017 Philippe B. Laval (KSU) PCA Fall 2017 1 / 27 Introduction Every 80 minutes, the two Landsat satellites go around the world, recording images

More information

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Vectors Arrays of numbers Vectors represent a point in a n dimensional space

More information

Seminar on Linear Algebra

Seminar on Linear Algebra Supplement Seminar on Linear Algebra Projection, Singular Value Decomposition, Pseudoinverse Kenichi Kanatani Kyoritsu Shuppan Co., Ltd. Contents 1 Linear Space and Projection 1 1.1 Expression of Linear

More information

On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 1: block Toeplitz algorithms.

On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 1: block Toeplitz algorithms. On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 1: block Toeplitz algorithms. J.C. Zúñiga and D. Henrion Abstract Four different algorithms are designed

More information

Conics and their duals

Conics and their duals 9 Conics and their duals You always admire what you really don t understand. Blaise Pascal So far we dealt almost exclusively with situations in which only points and lines were involved. Geometry would

More information

Scientific Computing

Scientific Computing Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting

More information

Camera Calibration The purpose of camera calibration is to determine the intrinsic camera parameters (c 0,r 0 ), f, s x, s y, skew parameter (s =

Camera Calibration The purpose of camera calibration is to determine the intrinsic camera parameters (c 0,r 0 ), f, s x, s y, skew parameter (s = Camera Calibration The purpose of camera calibration is to determine the intrinsic camera parameters (c 0,r 0 ), f, s x, s y, skew parameter (s = cotα), and the lens distortion (radial distortion coefficient

More information