Tutorial. Fitting Ellipse and Computing Fundamental Matrix and Homography. Kenichi Kanatani. Professor Emeritus Okayama University, Japan



2 This tutorial is based on:
K. Kanatani, Y. Sugaya, and Y. Kanazawa, Ellipse Fitting for Computer Vision: Implementation and Applications, Morgan & Claypool Publishers, San Rafael, CA, U.S., April. ISBN (print), ISBN (E-book).
K. Kanatani, Y. Sugaya, and Y. Kanazawa, Guide to 3D Vision Computation: Geometric Analysis and Implementation, Springer International, Cham, Switzerland, December. ISBN (print), ISBN (E-book).

3 Introduction

4 Ellipse fitting
Circular objects are projected as ellipses in images. By fitting ellipses, we can detect circular objects in the scene. Ellipse fitting is also used for detecting objects of approximately elliptic shape, e.g., human faces. Circles are often used as markers for camera calibration. Ellipse fitting provides a mathematical basis for various problems, including computation of fundamental matrices and homographies. From the fitted ellipse, we can compute the 3-D position of the circular object in the scene.

5 Ellipse-based 3-D analysis

6 Ellipse representation
Task: Fit an ellipse of the form
$Ax^2 + 2Bxy + Cy^2 + 2f_0(Dx + Ey) + f_0^2 F = 0$
to noisy data points $(x_\alpha, y_\alpha)$, $\alpha = 1, \dots, N$.
$f_0$: scaling constant chosen so that $x_\alpha/f_0$ and $y_\alpha/f_0$ have order $O(1)$.
To remove the scale indeterminacy, the coefficients need to be normalized, e.g.,
(1) $F = 1$, (2) $A + C = 1$, (3) $A^2 + B^2 + C^2 + D^2 + E^2 + F^2 = 1$ (we adopt this), (4) $A^2 + B^2 + C^2 + D^2 + E^2 = 1$, (5) $A^2 + 2B^2 + C^2 = 1$, (6) $AC - B^2 = 1$.

7 Vector representation
Define
$\xi_\alpha = (x_\alpha^2,\; 2x_\alpha y_\alpha,\; y_\alpha^2,\; 2f_0 x_\alpha,\; 2f_0 y_\alpha,\; f_0^2)^\top$, $\theta = (A, B, C, D, E, F)^\top$.
Then
$Ax_\alpha^2 + 2Bx_\alpha y_\alpha + Cy_\alpha^2 + 2f_0(Dx_\alpha + Ey_\alpha) + f_0^2 F = 0 \;\Leftrightarrow\; (\xi_\alpha, \theta) = 0$,
$A^2 + B^2 + C^2 + D^2 + E^2 + F^2 = 1 \;\Leftrightarrow\; \|\theta\| = 1$.
Task: Find a unit vector $\theta$ such that $(\xi_\alpha, \theta) \approx 0$, $\alpha = 1, \dots, N$.
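As a concrete illustration, a minimal NumPy sketch of how the data vectors could be assembled; the function name make_xi, the (x, y) point format, and the default f0 = 600 are our own illustrative choices, not prescribed by the tutorial.

```python
import numpy as np

def make_xi(x, y, f0=600.0):
    """6-D data vector xi = (x^2, 2xy, y^2, 2 f0 x, 2 f0 y, f0^2)."""
    return np.array([x*x, 2*x*y, y*y, 2*f0*x, 2*f0*y, f0*f0])

# (make_xi(x, y), theta) = 0 with theta = (A, B, C, D, E, F) reproduces
# A x^2 + 2B xy + C y^2 + 2 f0 (D x + E y) + f0^2 F = 0.
```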

8 Least squares (LS) approach
The simplest and most naive method is least squares (LS).
1. Compute the 6×6 matrix $M = \frac{1}{N}\sum_{\alpha=1}^{N} \xi_\alpha \xi_\alpha^\top$.
2. Solve the eigenvalue problem $M\theta = \lambda\theta$, and return the unit eigenvector $\theta$ for the smallest eigenvalue $\lambda$.
Motivation: We minimize the algebraic distance
$J = \frac{1}{N}\sum_{\alpha=1}^{N} (\xi_\alpha, \theta)^2 = (\theta, M\theta)$.
The computation is very easy, and the solution is immediately obtained. Widely used since the 1970s. But it produces a small, flat ellipse very different from the true shape, in particular when the input points cover only a small part of the ellipse. How can we improve the accuracy? The reason for the poor accuracy is that the properties of image noise are not considered. We need to consider the statistical properties of noise.
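A minimal sketch of this LS step under the same assumptions (NumPy, points given as (x, y) pairs, illustrative f0); fit_ellipse_ls is a hypothetical name.

```python
import numpy as np

def fit_ellipse_ls(points, f0=600.0):
    """Least squares: unit eigenvector of M = (1/N) sum xi xi^T for the smallest eigenvalue."""
    Xi = np.array([[x*x, 2*x*y, y*y, 2*f0*x, 2*f0*y, f0*f0] for x, y in points])
    M = Xi.T @ Xi / len(points)
    w, V = np.linalg.eigh(M)      # eigenvalues in ascending order
    return V[:, 0]                # theta = (A, B, C, D, E, F), unit norm
```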

9 Noise assumption
Let $\bar{x}_\alpha$ and $\bar{y}_\alpha$ be the true values of the observed $x_\alpha$ and $y_\alpha$:
$x_\alpha = \bar{x}_\alpha + \Delta x_\alpha$, $y_\alpha = \bar{y}_\alpha + \Delta y_\alpha$.
Then $\xi_\alpha = \bar{\xi}_\alpha + \Delta_1\xi_\alpha + \Delta_2\xi_\alpha$, where $\bar{\xi}_\alpha$ is the true value of $\xi_\alpha$, $\Delta_1\xi_\alpha$ is the noise term linear in $\Delta x_\alpha$ and $\Delta y_\alpha$, and $\Delta_2\xi_\alpha$ is the noise term quadratic in $\Delta x_\alpha$ and $\Delta y_\alpha$:
$\bar{\xi}_\alpha = (\bar{x}_\alpha^2,\; 2\bar{x}_\alpha\bar{y}_\alpha,\; \bar{y}_\alpha^2,\; 2f_0\bar{x}_\alpha,\; 2f_0\bar{y}_\alpha,\; f_0^2)^\top$,
$\Delta_1\xi_\alpha = (2\bar{x}_\alpha\Delta x_\alpha,\; 2(\bar{y}_\alpha\Delta x_\alpha + \bar{x}_\alpha\Delta y_\alpha),\; 2\bar{y}_\alpha\Delta y_\alpha,\; 2f_0\Delta x_\alpha,\; 2f_0\Delta y_\alpha,\; 0)^\top$,
$\Delta_2\xi_\alpha = (\Delta x_\alpha^2,\; 2\Delta x_\alpha\Delta y_\alpha,\; \Delta y_\alpha^2,\; 0,\; 0,\; 0)^\top$.

10 Covariance matrix
The noise terms $\Delta x_\alpha$ and $\Delta y_\alpha$ are regarded as independent Gaussian random variables of mean 0 and variance $\sigma^2$:
$E[\Delta x_\alpha] = E[\Delta y_\alpha] = 0$, $E[\Delta x_\alpha^2] = E[\Delta y_\alpha^2] = \sigma^2$, $E[\Delta x_\alpha\Delta y_\alpha] = 0$.
The covariance matrix of $\xi_\alpha$ is defined by $V[\xi_\alpha] = E[\Delta_1\xi_\alpha\Delta_1\xi_\alpha^\top]$. Then $V[\xi_\alpha] = \sigma^2 V_0[\xi_\alpha]$,
$V_0[\xi_\alpha] = 4\begin{pmatrix} \bar{x}_\alpha^2 & \bar{x}_\alpha\bar{y}_\alpha & 0 & f_0\bar{x}_\alpha & 0 & 0\\ \bar{x}_\alpha\bar{y}_\alpha & \bar{x}_\alpha^2 + \bar{y}_\alpha^2 & \bar{x}_\alpha\bar{y}_\alpha & f_0\bar{y}_\alpha & f_0\bar{x}_\alpha & 0\\ 0 & \bar{x}_\alpha\bar{y}_\alpha & \bar{y}_\alpha^2 & 0 & f_0\bar{y}_\alpha & 0\\ f_0\bar{x}_\alpha & f_0\bar{y}_\alpha & 0 & f_0^2 & 0 & 0\\ 0 & f_0\bar{x}_\alpha & f_0\bar{y}_\alpha & 0 & f_0^2 & 0\\ 0 & 0 & 0 & 0 & 0 & 0\end{pmatrix}$.
$\sigma^2$: noise level, $V_0[\xi_\alpha]$: normalized covariance matrix.
The true values $\bar{x}_\alpha$ and $\bar{y}_\alpha$ are replaced by their observations $x_\alpha$ and $y_\alpha$ in actual computation. This does not affect the final results.
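The normalized covariance matrix can be formed from the Jacobian of $\xi$ with respect to $(x, y)$, which reproduces the 6×6 matrix above; a minimal sketch under the same illustrative assumptions (make_V0 is our own name).

```python
import numpy as np

def make_V0(x, y, f0=600.0):
    """V0[xi] = (dxi/dx)(dxi/dx)^T + (dxi/dy)(dxi/dy)^T; the overall factor 4 appears automatically."""
    Jx = np.array([2*x, 2*y, 0.0, 2*f0, 0.0, 0.0])
    Jy = np.array([0.0, 2*x, 2*y, 0.0, 2*f0, 0.0])
    return np.outer(Jx, Jx) + np.outer(Jy, Jy)
```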

11 Ellipse fitting approaches
Algebraic methods: We solve an algebraic equation for computing $\theta$. The solution may or may not minimize any cost function. Our task is to find a good equation to solve. The resulting solution $\theta$ should be as close to its true value $\bar{\theta}$ as possible. We need detailed statistical error analysis.
Geometric methods: We minimize some cost function $J$. The solution is uniquely determined once the cost $J$ is set. Our task is to find a good cost to minimize. The minimizing $\theta$ should be close to its true value $\bar{\theta}$. We need to consider the geometry of the ellipse and the data points, and we need a convenient minimization algorithm; minimization of a given cost is not always easy.

12 Algebraic Fitting

13 Iterative reweight
1. Let $\theta_0 = 0$ and $W_\alpha = 1$, $\alpha = 1, \dots, N$.
2. Compute the 6×6 matrix $M = \frac{1}{N}\sum_{\alpha=1}^{N} W_\alpha\xi_\alpha\xi_\alpha^\top$.
3. Solve the eigenvalue problem $M\theta = \lambda\theta$, and compute the unit eigenvector $\theta$ for the smallest eigenvalue $\lambda$.
4. If $\theta \approx \theta_0$ up to sign, return $\theta$ and stop. Else, update $W_\alpha \leftarrow 1/(\theta, V_0[\xi_\alpha]\theta)$ and $\theta_0 \leftarrow \theta$, and go back to Step 2.

14 Motivation of iterative reweight
Minimize the weighted sum of squares
$\frac{1}{N}\sum_{\alpha=1}^{N} W_\alpha(\xi_\alpha, \theta)^2 = (\theta, M\theta)$, $M = \frac{1}{N}\sum_{\alpha=1}^{N} W_\alpha\xi_\alpha\xi_\alpha^\top$.
This is minimized by the unit eigenvector of $M$ for the smallest eigenvalue. The weight $W_\alpha$ is optimal if it is inversely proportional to the variance of each term, ideally $W_\alpha = 1/(\theta, V_0[\xi_\alpha]\theta)$, because
$E[(\xi_\alpha, \theta)^2] = (\theta, E[\Delta_1\xi_\alpha\Delta_1\xi_\alpha^\top]\theta) = \sigma^2(\theta, V_0[\xi_\alpha]\theta)$.
The true $\theta$ is unknown, so the weight is iteratively updated. The iteration starts from the LS solution.
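A self-contained sketch of the iterative reweight loop under the same illustrative assumptions (function name, f0 default, and tolerances are ours).

```python
import numpy as np

def fit_ellipse_iterative_reweight(points, f0=600.0, tol=1e-8, max_iter=100):
    Xi = np.array([[x*x, 2*x*y, y*y, 2*f0*x, 2*f0*y, f0*f0] for x, y in points])
    V0 = np.array([np.outer([2*x, 2*y, 0, 2*f0, 0, 0], [2*x, 2*y, 0, 2*f0, 0, 0]) +
                   np.outer([0, 2*x, 2*y, 0, 2*f0, 0], [0, 2*x, 2*y, 0, 2*f0, 0])
                   for x, y in points])
    W = np.ones(len(Xi))                    # W = 1: the first pass gives the LS solution
    theta0 = np.zeros(6)
    for _ in range(max_iter):
        M = (W[:, None, None] * np.einsum('ai,aj->aij', Xi, Xi)).mean(axis=0)
        theta = np.linalg.eigh(M)[1][:, 0]  # unit eigenvector for the smallest eigenvalue
        if min(np.linalg.norm(theta - theta0), np.linalg.norm(theta + theta0)) < tol:
            break
        W = 1.0 / np.einsum('i,aij,j->a', theta, V0, theta)  # W_a = 1/(theta, V0[xi_a] theta)
        theta0 = theta
    return theta
```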

15 Renormalization (Kanatani 1993)
1. Let $\theta_0 = 0$ and $W_\alpha = 1$, $\alpha = 1, \dots, N$.
2. Compute the 6×6 matrices
$M = \frac{1}{N}\sum_{\alpha=1}^{N} W_\alpha\xi_\alpha\xi_\alpha^\top$, $N = \frac{1}{N}\sum_{\alpha=1}^{N} W_\alpha V_0[\xi_\alpha]$.
3. Solve the generalized eigenvalue problem $M\theta = \lambda N\theta$, and compute the unit generalized eigenvector $\theta$ for the generalized eigenvalue $\lambda$ of the smallest absolute value.
4. If $\theta \approx \theta_0$ up to sign, return $\theta$ and stop. Else, update $W_\alpha \leftarrow 1/(\theta, V_0[\xi_\alpha]\theta)$ and $\theta_0 \leftarrow \theta$, and go back to Step 2.

16 Motivation of renormalization
From $(\bar{\xi}_\alpha, \theta) = 0$, i.e., $\bar{\xi}_\alpha^\top\theta = 0$, we see that $\bar{M}\theta = 0$ for $\bar{M} = (1/N)\sum_{\alpha=1}^{N} W_\alpha\bar{\xi}_\alpha\bar{\xi}_\alpha^\top$. If $\bar{M}$ were known, $\theta$ would be given by its eigenvector for eigenvalue 0, but $\bar{M}$ is unknown. The expectation of $M$ is
$E[M] = E\Bigl[\frac{1}{N}\sum_\alpha W_\alpha(\bar{\xi}_\alpha + \Delta\xi_\alpha)(\bar{\xi}_\alpha + \Delta\xi_\alpha)^\top\Bigr] = \bar{M} + \frac{1}{N}\sum_\alpha W_\alpha E[\Delta\xi_\alpha\Delta\xi_\alpha^\top] = \bar{M} + \sigma^2 N$, $N = \frac{1}{N}\sum_\alpha W_\alpha V_0[\xi_\alpha]$.
Hence $\bar{M} = E[M] - \sigma^2 N \approx M - \sigma^2 N$, so we solve $(M - \sigma^2 N)\theta = 0$, i.e., $M\theta = \sigma^2 N\theta$. We solve $M\theta = \lambda N\theta$ for the $\lambda$ of the smallest absolute value. The optimal weight $W_\alpha = 1/(\theta, V_0[\xi_\alpha]\theta)$ is unknown, so it is iteratively updated. The iterations start from $W_\alpha = 1$, i.e., initially we solve $M\theta = \lambda N\theta$ for $M = (1/N)\sum_\alpha\xi_\alpha\xi_\alpha^\top$ and $N = (1/N)\sum_\alpha V_0[\xi_\alpha]$: the Taubin method.

17 Taubin method (Taubin 1991)
1. Compute the 6×6 matrices
$M = \frac{1}{N}\sum_{\alpha=1}^{N}\xi_\alpha\xi_\alpha^\top$, $N = \frac{1}{N}\sum_{\alpha=1}^{N} V_0[\xi_\alpha]$.
2. Solve the generalized eigenvalue problem $M\theta = \lambda N\theta$, and compute the unit generalized eigenvector $\theta$ for the smallest generalized eigenvalue $\lambda$.
This method was derived by Taubin (1991) heuristically, without considering the statistical properties of noise.
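A non-iterative sketch of the Taubin method under the same illustrative assumptions. Since this N is singular, the generalized problem is solved here as $N\theta = \mu M\theta$ with $\mu = 1/\lambda$ and the largest $\mu$ is taken; SciPy is assumed for the generalized eigenproblem.

```python
import numpy as np
from scipy.linalg import eig

def fit_ellipse_taubin(points, f0=600.0):
    Xi = np.array([[x*x, 2*x*y, y*y, 2*f0*x, 2*f0*y, f0*f0] for x, y in points])
    V0 = [np.outer([2*x, 2*y, 0, 2*f0, 0, 0], [2*x, 2*y, 0, 2*f0, 0, 0]) +
          np.outer([0, 2*x, 2*y, 0, 2*f0, 0], [0, 2*x, 2*y, 0, 2*f0, 0])
          for x, y in points]
    M = Xi.T @ Xi / len(Xi)
    N = np.mean(V0, axis=0)
    mu, V = eig(N, M)                       # N theta = mu M theta, mu = 1/lambda
    theta = np.real(V[:, np.argmax(np.real(mu))])
    return theta / np.linalg.norm(theta)
```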

18 Hyper-renormalization (Kanatani et al. 2012)
1. Let $\theta_0 = 0$ and $W_\alpha = 1$, $\alpha = 1, \dots, N$.
2. Compute the 6×6 matrices
$M = \frac{1}{N}\sum_{\alpha=1}^{N} W_\alpha\xi_\alpha\xi_\alpha^\top$,
$N = \frac{1}{N}\sum_{\alpha=1}^{N} W_\alpha\bigl(V_0[\xi_\alpha] + 2S[\xi_\alpha e^\top]\bigr) - \frac{1}{N^2}\sum_{\alpha=1}^{N} W_\alpha^2\bigl((\xi_\alpha, M_5^-\xi_\alpha)V_0[\xi_\alpha] + 2S[V_0[\xi_\alpha]M_5^-\xi_\alpha\xi_\alpha^\top]\bigr)$,
where $S[\,\cdot\,]$ is the symmetrization operator ($S[A] = (A + A^\top)/2$), $e = (1, 0, 1, 0, 0, 0)^\top$, and $M_5^-$ is the pseudoinverse of $M$ of truncated rank 5: if $M = \mu_1\theta_1\theta_1^\top + \cdots + \mu_6\theta_6\theta_6^\top$ with $\mu_1 \ge \cdots \ge \mu_6$, then $M_5^- = \theta_1\theta_1^\top/\mu_1 + \cdots + \theta_5\theta_5^\top/\mu_5$.
3. Solve the generalized eigenvalue problem $M\theta = \lambda N\theta$, and compute the unit generalized eigenvector $\theta$ for the generalized eigenvalue $\lambda$ of the smallest absolute value.
4. If $\theta \approx \theta_0$ up to sign, return $\theta$ and stop. Else, update $W_\alpha \leftarrow 1/(\theta, V_0[\xi_\alpha]\theta)$ and $\theta_0 \leftarrow \theta$, and go back to Step 2.
This method was derived so that the resulting solution has the highest accuracy. The iterations start from $W_\alpha = 1$, i.e., from the HyperLS solution.
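A sketch of hyper-renormalization following the steps above (with the second term of N subtracted, as in the reconstruction); the function name and defaults are illustrative, and SciPy is assumed for the generalized eigenproblem.

```python
import numpy as np
from scipy.linalg import eig

def fit_ellipse_hyper_renorm(points, f0=600.0, tol=1e-8, max_iter=100):
    n = len(points)
    Xi = np.array([[x*x, 2*x*y, y*y, 2*f0*x, 2*f0*y, f0*f0] for x, y in points])
    V0 = np.array([np.outer([2*x, 2*y, 0, 2*f0, 0, 0], [2*x, 2*y, 0, 2*f0, 0, 0]) +
                   np.outer([0, 2*x, 2*y, 0, 2*f0, 0], [0, 2*x, 2*y, 0, 2*f0, 0])
                   for x, y in points])
    e = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 0.0])
    sym = lambda A: (A + A.T) / 2.0
    W = np.ones(n)                                  # W = 1: the first pass is HyperLS
    theta0 = np.zeros(6)
    for _ in range(max_iter):
        M = (W[:, None, None] * np.einsum('ai,aj->aij', Xi, Xi)).mean(axis=0)
        lam, U = np.linalg.eigh(M)                  # ascending eigenvalues
        M5 = sum(np.outer(U[:, k], U[:, k]) / lam[k] for k in range(1, 6))  # rank-5 pseudoinverse
        Nmat = np.zeros((6, 6))
        for a in range(n):
            Nmat += W[a] * (V0[a] + 2.0 * sym(np.outer(Xi[a], e))) / n
            Nmat -= W[a]**2 * ((Xi[a] @ M5 @ Xi[a]) * V0[a] +
                               2.0 * sym(V0[a] @ M5 @ np.outer(Xi[a], Xi[a]))) / n**2
        mu, V = eig(Nmat, M)                        # N theta = mu M theta, mu = 1/lambda
        theta = np.real(V[:, np.argmax(np.abs(np.real(mu)))])
        theta /= np.linalg.norm(theta)
        if min(np.linalg.norm(theta - theta0), np.linalg.norm(theta + theta0)) < tol:
            break
        W = 1.0 / np.einsum('i,aij,j->a', theta, V0, theta)
        theta0 = theta
    return theta
```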

19 HyperLS (Rangarajan and Kanatani 2009)
1. Compute the 6×6 matrices
$M = \frac{1}{N}\sum_{\alpha=1}^{N}\xi_\alpha\xi_\alpha^\top$,
$N = \frac{1}{N}\sum_{\alpha=1}^{N}\bigl(V_0[\xi_\alpha] + 2S[\xi_\alpha e^\top]\bigr) - \frac{1}{N^2}\sum_{\alpha=1}^{N}\bigl((\xi_\alpha, M_5^-\xi_\alpha)V_0[\xi_\alpha] + 2S[V_0[\xi_\alpha]M_5^-\xi_\alpha\xi_\alpha^\top]\bigr)$.
2. Solve the generalized eigenvalue problem $M\theta = \lambda N\theta$, and compute the unit generalized eigenvector $\theta$ for the generalized eigenvalue $\lambda$ of the smallest absolute value.
This method was derived so that the highest accuracy is achieved among all non-iterative schemes.

20 Summary of algebraic methods
All algebraic methods solve $M\theta = \lambda N\theta$, where $M$ and $N$ involve the observed data and may or may not involve $\theta$:
$M = \frac{1}{N}\sum_\alpha\xi_\alpha\xi_\alpha^\top$ (LS, Taubin, HyperLS), or $M = \frac{1}{N}\sum_\alpha\frac{\xi_\alpha\xi_\alpha^\top}{(\theta, V_0[\xi_\alpha]\theta)}$ (iterative reweight, renormalization, hyper-renormalization).
$N = I$ (LS, iterative reweight),
$N = \frac{1}{N}\sum_\alpha V_0[\xi_\alpha]$ (Taubin),
$N = \frac{1}{N}\sum_\alpha\frac{V_0[\xi_\alpha]}{(\theta, V_0[\xi_\alpha]\theta)}$ (renormalization),
$N = \frac{1}{N}\sum_\alpha\bigl(V_0[\xi_\alpha] + 2S[\xi_\alpha e^\top]\bigr) - \frac{1}{N^2}\sum_\alpha\bigl((\xi_\alpha, M_5^-\xi_\alpha)V_0[\xi_\alpha] + 2S[V_0[\xi_\alpha]M_5^-\xi_\alpha\xi_\alpha^\top]\bigr)$ (HyperLS),
$N = \frac{1}{N}\sum_\alpha\frac{1}{(\theta, V_0[\xi_\alpha]\theta)}\bigl(V_0[\xi_\alpha] + 2S[\xi_\alpha e^\top]\bigr) - \frac{1}{N^2}\sum_\alpha\frac{1}{(\theta, V_0[\xi_\alpha]\theta)^2}\bigl((\xi_\alpha, M_5^-\xi_\alpha)V_0[\xi_\alpha] + 2S[V_0[\xi_\alpha]M_5^-\xi_\alpha\xi_\alpha^\top]\bigr)$ (hyper-renormalization).
If $M$ and $N$ do not involve $\theta$, we solve the generalized eigenvalue problem $M\theta = \lambda N\theta$ once; no iterations are necessary. If $M$ and $N$ involve $\theta$, we solve the generalized eigenvalue problem iteratively, updating the weights. $N$ is generally not positive definite, so we solve $N\theta = (1/\lambda)M\theta$ instead; $M$ is always positive definite for noisy data.

21 Characterization of algebraic methods
Problem: $M(\theta)\theta = \lambda N(\theta)\theta$. The data are noisy, so the solution has a distribution. $M(\theta)$ controls the covariance of the solution; $N(\theta)$ determines the bias of the solution.
Issue: What $M(\theta)$ minimizes the covariance the most? What $N(\theta)$ minimizes the bias the most?
Solution:
$M(\theta) = \frac{1}{N}\sum_\alpha\frac{\xi_\alpha\xi_\alpha^\top}{(\theta, V_0[\xi_\alpha]\theta)}$: the covariance reaches the theoretical accuracy bound up to $O(\sigma^4)$.
$N(\theta) = \frac{1}{N}\sum_\alpha\frac{1}{(\theta, V_0[\xi_\alpha]\theta)}\bigl(V_0[\xi_\alpha] + 2S[\xi_\alpha e^\top]\bigr) - \frac{1}{N^2}\sum_\alpha\frac{1}{(\theta, V_0[\xi_\alpha]\theta)^2}\bigl((\xi_\alpha, M_5^-\xi_\alpha)V_0[\xi_\alpha] + 2S[V_0[\xi_\alpha]M_5^-\xi_\alpha\xi_\alpha^\top]\bigr)$: the bias is 0 up to $O(\sigma^4)$.
Hyper-renormalization achieves both.

22 Geometric Fitting

23 Geometric approach
Minimize the geometric distance
$S = \frac{1}{N}\sum_{\alpha=1}^{N}\bigl((x_\alpha - \bar{x}_\alpha)^2 + (y_\alpha - \bar{y}_\alpha)^2\bigr) = \frac{1}{N}\sum_{\alpha=1}^{N} d_\alpha^2$,
i.e., the average of the squared distances $d_\alpha^2$ from the data points $(x_\alpha, y_\alpha)$ to the nearest points $(\bar{x}_\alpha, \bar{y}_\alpha)$ on the ellipse.
The computation is very difficult: $S$ is minimized subject to the constraint $(\bar{\xi}_\alpha, \theta) = 0$. $S$ does not contain $\theta$, over which it is minimized; $\theta$ is contained only in the constraint $(\bar{\xi}_\alpha, \theta) = 0$. The minimization is done in the joint space of $\theta$ and $(\bar{x}_1, \bar{y}_1), \dots, (\bar{x}_N, \bar{y}_N)$.
$\theta$: structural parameter, $(\bar{x}_\alpha, \bar{y}_\alpha)$: nuisance parameters.

24 Sampson error
If $(x_\alpha, y_\alpha)$ is close to the ellipse, the squared distance $d_\alpha^2$ is approximated by
$d_\alpha^2 = (x_\alpha - \bar{x}_\alpha)^2 + (y_\alpha - \bar{y}_\alpha)^2 \approx \frac{(\xi_\alpha, \theta)^2}{(\theta, V_0[\xi_\alpha]\theta)}$.
Hence the geometric distance $S$ is approximated by the Sampson error
$J = \frac{1}{N}\sum_{\alpha=1}^{N}\frac{(\xi_\alpha, \theta)^2}{(\theta, V_0[\xi_\alpha]\theta)}$.
Minimization is done in the space of $\theta$: unconstrained minimization without nuisance parameters.
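A minimal sketch of evaluating the Sampson error under the same illustrative assumptions.

```python
import numpy as np

def sampson_error(theta, points, f0=600.0):
    """J = (1/N) sum (xi, theta)^2 / (theta, V0[xi] theta)."""
    J = 0.0
    for x, y in points:
        xi = np.array([x*x, 2*x*y, y*y, 2*f0*x, 2*f0*y, f0*f0])
        Jx = np.array([2*x, 2*y, 0.0, 2*f0, 0.0, 0.0])
        Jy = np.array([0.0, 2*x, 2*y, 0.0, 2*f0, 0.0])
        V0 = np.outer(Jx, Jx) + np.outer(Jy, Jy)
        J += (xi @ theta)**2 / (theta @ V0 @ theta)
    return J / len(points)
```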

25 FNS: Fundamental Numerical Scheme (Chojnacki et al. 2000)
1. Let $\theta = \theta_0 = 0$ and $W_\alpha = 1$.
2. Compute the 6×6 matrices
$M = \frac{1}{N}\sum_{\alpha=1}^{N} W_\alpha\xi_\alpha\xi_\alpha^\top$, $L = \frac{1}{N}\sum_{\alpha=1}^{N} W_\alpha^2(\xi_\alpha, \theta)^2 V_0[\xi_\alpha]$.
3. Let $X = M - L$.
4. Solve the eigenvalue problem $X\theta = \lambda\theta$, and compute the unit eigenvector $\theta$ for the smallest eigenvalue $\lambda$.
5. If $\theta \approx \theta_0$ up to sign, return $\theta$ and stop. Else, update $W_\alpha \leftarrow 1/(\theta, V_0[\xi_\alpha]\theta)$ and $\theta_0 \leftarrow \theta$, and go back to Step 2.
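A sketch of FNS following the steps above, under the same illustrative assumptions (function name, defaults, and stopping rule are ours).

```python
import numpy as np

def fit_ellipse_fns(points, f0=600.0, tol=1e-8, max_iter=100):
    Xi = np.array([[x*x, 2*x*y, y*y, 2*f0*x, 2*f0*y, f0*f0] for x, y in points])
    V0 = np.array([np.outer([2*x, 2*y, 0, 2*f0, 0, 0], [2*x, 2*y, 0, 2*f0, 0, 0]) +
                   np.outer([0, 2*x, 2*y, 0, 2*f0, 0], [0, 2*x, 2*y, 0, 2*f0, 0])
                   for x, y in points])
    W = np.ones(len(Xi))
    theta = np.zeros(6)                       # residuals are zero at the start, so L = O
    for _ in range(max_iter):
        r = Xi @ theta                        # (xi_a, theta)
        M = (W[:, None, None] * np.einsum('ai,aj->aij', Xi, Xi)).mean(axis=0)
        L = ((W**2 * r**2)[:, None, None] * V0).mean(axis=0)
        w, V = np.linalg.eigh(M - L)
        theta_new = V[:, 0]                   # unit eigenvector for the smallest eigenvalue
        if min(np.linalg.norm(theta_new - theta), np.linalg.norm(theta_new + theta)) < tol:
            return theta_new
        theta = theta_new
        W = 1.0 / np.einsum('i,aij,j->a', theta, V0, theta)
    return theta
```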

26 Motivation of FNS
We can see that $\nabla_\theta J = 2(M - L)\theta = 2X\theta$. We iteratively solve the eigenvalue problem $X\theta = \lambda\theta$. When the iterations have converged, it can be proved that $\lambda = 0$, so the solution satisfies $\nabla_\theta J = 0$. Initially $L = O$, so the iterations start from the LS solution.

27 Geometric distance minimization (Kanatani and Sugaya 2010)
1. Let $J_0 = \infty$, $\hat{x}_\alpha = x_\alpha$, $\hat{y}_\alpha = y_\alpha$, and $\tilde{x}_\alpha = \tilde{y}_\alpha = 0$.
2. Compute the normalized covariance matrix $V_0[\hat{\xi}_\alpha]$ using $\hat{x}_\alpha$ and $\hat{y}_\alpha$, and let
$\xi^*_\alpha = \bigl(\hat{x}_\alpha^2 + 2\hat{x}_\alpha\tilde{x}_\alpha,\; 2(\hat{x}_\alpha\hat{y}_\alpha + \hat{y}_\alpha\tilde{x}_\alpha + \hat{x}_\alpha\tilde{y}_\alpha),\; \hat{y}_\alpha^2 + 2\hat{y}_\alpha\tilde{y}_\alpha,\; 2f_0(\hat{x}_\alpha + \tilde{x}_\alpha),\; 2f_0(\hat{y}_\alpha + \tilde{y}_\alpha),\; f_0^2\bigr)^\top$.
3. Compute the $\theta$ that minimizes the modified Sampson error
$J^* = \frac{1}{N}\sum_{\alpha=1}^{N}\frac{(\xi^*_\alpha, \theta)^2}{(\theta, V_0[\hat{\xi}_\alpha]\theta)}$.
4. Update $\tilde{x}_\alpha$, $\tilde{y}_\alpha$, $\hat{x}_\alpha$, and $\hat{y}_\alpha$ to
$\begin{pmatrix}\tilde{x}_\alpha\\ \tilde{y}_\alpha\end{pmatrix} \leftarrow \frac{2(\xi^*_\alpha, \theta)}{(\theta, V_0[\hat{\xi}_\alpha]\theta)}\begin{pmatrix}\theta_1 & \theta_2 & \theta_4\\ \theta_2 & \theta_3 & \theta_5\end{pmatrix}\begin{pmatrix}\hat{x}_\alpha\\ \hat{y}_\alpha\\ f_0\end{pmatrix}$, $\hat{x}_\alpha \leftarrow x_\alpha - \tilde{x}_\alpha$, $\hat{y}_\alpha \leftarrow y_\alpha - \tilde{y}_\alpha$.
5. Compute $J^* = \frac{1}{N}\sum_{\alpha=1}^{N}(\tilde{x}_\alpha^2 + \tilde{y}_\alpha^2)$. If $J^* \approx J_0$, return $\theta$ and stop. Else, let $J_0 \leftarrow J^*$ and go back to Step 2.

28 Motivation
We first minimize the Sampson error $J$, say by FNS, and modify the data $\xi_\alpha$ to $\xi^*_\alpha$ using the computed solution $\theta$. Regarding them as the data, we define the modified Sampson error $J^*$ and minimize it, say by FNS. If this is repeated, the modified Sampson error $J^*$ eventually coincides with the geometric distance $S$, and we obtain the solution that minimizes $S$. However, the Sampson error minimization solution and the geometric distance minimization solution usually coincide up to several significant digits: minimizing the Sampson error is practically the same as minimizing the geometric distance.

29 Bias removal
The geometric fitting solution $\hat{\theta}$ is known to be biased: $E[\hat{\theta}] \ne \bar{\theta}$. An ellipse has a convex shape, so points are more likely to move outside the ellipse under random noise. If we write
$\hat{\theta} = \bar{\theta} + \Delta_1\theta + \Delta_2\theta + \cdots$ ($\Delta_k\theta$: $k$th order in noise),
we have $E[\Delta_1\theta] = 0$ but $E[\Delta_2\theta] \ne 0$.
Hyperaccurate correction: if we can evaluate $E[\Delta_2\theta]$, we obtain a better solution
$\theta = \hat{\theta} - E[\Delta_2\theta]$.

30 Hyperaccurate correction (Kanatani 2006)
1. Compute $\theta$ by FNS.
2. Estimate $\sigma^2$ by
$\hat{\sigma}^2 = \frac{(\theta, M\theta)}{1 - 5/N}$,
using the value of $M$ after the FNS iterations have converged.
3. Compute the correction term
$\Delta_c\theta = \frac{\hat{\sigma}^2}{N} M_5^-\sum_{\alpha=1}^{N} W_\alpha(e, \theta)\xi_\alpha + \frac{\hat{\sigma}^2}{N^2} M_5^-\sum_{\alpha=1}^{N} W_\alpha^2(\xi_\alpha, M_5^- V_0[\xi_\alpha]\theta)\xi_\alpha$,
using the values of $W_\alpha$ after the FNS iterations have converged, where $M_5^-$ is the pseudoinverse of $M$ of truncated rank 5.
4. Correct $\theta$ to $\theta \leftarrow N[\theta - \Delta_c\theta]$, where $N[\,\cdot\,]$ is a normalization operation to unit norm.
Since the bias is $O(\sigma^4)$, the corrected solution has the same accuracy as hyper-renormalization.

31 Experimental Comparisons

32 Some examples
Gaussian noise of standard deviation $\sigma$ is added to 30 data points (the dashed lines show the true shape). Fitting examples for one noise level $\sigma$:
1. LS, 2. iterative reweight, 3. Taubin, 4. renormalization, 5. HyperLS, 6. hyper-renormalization, 7. FNS, 8. FNS + hyperaccurate correction.
Methods 1, 3, and 5 are algebraic, hence non-iterative. Methods 7 and 8 have the same complexity; hyperaccurate correction is an analytical procedure. FNS requires about twice as many iterations.

33 Statistical comparison
$\bar{\theta}$: true value (unit vector), $\hat{\theta}$: computed value (unit vector). The deviation is measured by the orthogonal error component
$\Delta_\perp\theta = P_{\bar{\theta}}\hat{\theta}$, $P_{\bar{\theta}} \equiv I - \bar{\theta}\bar{\theta}^\top$.
The bias $B$ and the RMS error $D$ are measured over $M$ (= 10000) trials:
$B = \Bigl\|\frac{1}{M}\sum_{a=1}^{M}\Delta_\perp\theta^{(a)}\Bigr\|$, $D = \sqrt{\frac{1}{M}\sum_{a=1}^{M}\|\Delta_\perp\theta^{(a)}\|^2}$.
KCR lower bound:
$D \ge \frac{\sigma}{\sqrt{N}}\sqrt{\mathrm{tr}\Bigl[\Bigl(\frac{1}{N}\sum_{\alpha=1}^{N}\frac{\bar{\xi}_\alpha\bar{\xi}_\alpha^\top}{(\bar{\theta}, V_0[\xi_\alpha]\bar{\theta})}\Bigr)_5^-\Bigr]}$.

34 Bias and RMS error
Simulation over 10000 independent trials for different $\sigma$ (the dotted lines show the KCR lower bound). The bias and the RMS error are plotted for:
1. LS, 2. iterative reweight, 3. Taubin, 4. renormalization, 5. HyperLS, 6. hyper-renormalization, 7. FNS, 8. FNS + hyperaccurate correction.
LS and iterative reweight have a large bias and hence large RMS errors. FNS has some bias, which is reduced by hyperaccurate correction to a large extent. The bias of HyperLS and hyper-renormalization is very small. The iterations of iterative reweight and FNS do not converge for large $\sigma$.

35 Bias and RMS error (enlargement)
Enlarged plots of the bias and the RMS error for small $\sigma$, for the same methods:
1. LS, 2. iterative reweight, 3. Taubin, 4. renormalization, 5. HyperLS, 6. hyper-renormalization, 7. FNS, 8. FNS + hyperaccurate correction.
Hyper-renormalization outperforms FNS for small $\sigma$. The highest accuracy is given by hyperaccurate correction of FNS. However, the FNS iterations may not converge for large $\sigma$. Hyper-renormalization is robust to noise, and its initial solution (HyperLS) is already very accurate. It is the best method in practice.

36 Real image example
Methods compared:
1. LS, 2. iterative reweight, 3. Taubin, 4. renormalization, 5. HyperLS, 6. hyper-renormalization, 7. FNS, 8. FNS + hyperaccurate correction.
Methods 1, 3, and 5 are algebraic, hence non-iterative. Methods 7 and 8 have the same complexity; hyperaccurate correction is an analytical procedure. ML requires about twice as many iterations.

37 Robust Fitting

38 When does ellipse fitting fail?
Superfluous data: Some segments may belong to other objects. Inliers: segments that belong to the object of interest. Outliers: segments that belong to different objects. It is difficult to find outliers if they are smoothly connected to inliers.
Scarcity of information: If the segment is too short and/or too noisy, a hyperbola may be fit. How can we modify a hyperbola into an ellipse? How can we produce only an ellipse? (ellipse-specific methods) Sometimes the information is too scarce to produce a good fit by any method.

39 RANSAC
Find an ellipse such that the number of points close to it is as large as possible.
1. Randomly select five points from the input sequence, and let $\xi_1, \dots, \xi_5$ be their vectors.
2. Compute the unit eigenvector $\theta$ of the matrix $M_5 = \sum_{\alpha=1}^{5}\xi_\alpha\xi_\alpha^\top$ for the smallest eigenvalue, and store it as a candidate.
3. Let $n$ be the number of points in the input sequence that satisfy
$(x - \bar{x})^2 + (y - \bar{y})^2 \approx \frac{(\xi, \theta)^2}{(\theta, V_0[\xi]\theta)} < d^2$,
where $d$ is a threshold for the admissible deviation from the ellipse, e.g., $d = 2$ (pixels). Store that $n$.
4. Select a new set of five points from the input sequence, and do the same. Repeat this many times, and return from among the stored candidate ellipses the one for which $n$ is the largest.
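A sketch of this RANSAC loop under the same illustrative assumptions (a fixed number of trials is used for simplicity; names and defaults are ours).

```python
import numpy as np

def fit_ellipse_ransac(points, f0=600.0, d=2.0, trials=1000, seed=None):
    rng = np.random.default_rng(seed)
    Xi = np.array([[x*x, 2*x*y, y*y, 2*f0*x, 2*f0*y, f0*f0] for x, y in points])
    V0 = np.array([np.outer([2*x, 2*y, 0, 2*f0, 0, 0], [2*x, 2*y, 0, 2*f0, 0, 0]) +
                   np.outer([0, 2*x, 2*y, 0, 2*f0, 0], [0, 2*x, 2*y, 0, 2*f0, 0])
                   for x, y in points])
    best_theta, best_n = None, -1
    for _ in range(trials):
        idx = rng.choice(len(Xi), 5, replace=False)
        M5 = Xi[idx].T @ Xi[idx]                       # sum over the 5 sampled points
        theta = np.linalg.eigh(M5)[1][:, 0]            # candidate: smallest eigenvalue
        d2 = (Xi @ theta)**2 / np.einsum('i,aij,j->a', theta, V0, theta)
        n = int(np.sum(d2 < d*d))                      # points within distance d of the candidate
        if n > best_n:
            best_theta, best_n = theta, n
    return best_theta
```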

40 Ellipse-specific method of Fitzgibbon et al. (1999)
The equation $Ax^2 + 2Bxy + Cy^2 + 2f_0(Dx + Ey) + f_0^2F = 0$ represents an ellipse if and only if $AC - B^2 > 0$.
1. Compute the 6×6 matrices $M = \frac{1}{N}\sum_{\alpha=1}^{N}\xi_\alpha\xi_\alpha^\top$ and the constant matrix $N$ defined by $(\theta, N\theta) = AC - B^2$ (its only nonzero elements are $N_{13} = N_{31} = 1/2$ and $N_{22} = -1$).
2. Solve the generalized eigenvalue problem $M\theta = \lambda N\theta$, and compute the unit generalized eigenvector $\theta$ for the smallest generalized eigenvalue $\lambda$.
Motivation: We minimize the algebraic distance $(1/N)\sum_{\alpha=1}^{N}(\xi_\alpha, \theta)^2$ subject to $AC - B^2 = (\theta, N\theta) = 1$. $N$ is not positive definite, so we solve $N\theta = (1/\lambda)M\theta$ instead for the largest eigenvalue.
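A sketch of the Fitzgibbon et al. method in this tutorial's (A, B, C, D, E, F) parameterization; the explicit entries of N follow from (θ, Nθ) = AC − B², SciPy is assumed, and the function name is ours.

```python
import numpy as np
from scipy.linalg import eig

def fit_ellipse_fitzgibbon(points, f0=600.0):
    Xi = np.array([[x*x, 2*x*y, y*y, 2*f0*x, 2*f0*y, f0*f0] for x, y in points])
    M = Xi.T @ Xi / len(Xi)
    N = np.zeros((6, 6))                  # (theta, N theta) = A*C - B^2
    N[0, 2] = N[2, 0] = 0.5
    N[1, 1] = -1.0
    mu, V = eig(N, M)                     # N theta = (1/lambda) M theta; take the largest eigenvalue
    theta = np.real(V[:, np.argmax(np.real(mu))])
    return theta / np.linalg.norm(theta)  # satisfies AC - B^2 > 0, i.e., always an ellipse
```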

41 Random sampling of Masuzaki et al. (2013)
1. Fit an ellipse by the standard method. Stop if the solution $\theta$ satisfies $\theta_1\theta_3 - \theta_2^2 > 0$.
2. Else, randomly select five points from the sequence. Let $\xi_1, \xi_2, \dots, \xi_5$ be their vector representations.
3. Compute the unit eigenvector $\theta$ of $M_5 = \sum_{\alpha=1}^{5}\xi_\alpha\xi_\alpha^\top$ for the smallest eigenvalue.
4. If the resulting $\theta$ does not define an ellipse, discard it. Newly select another set of five points randomly and do the same.
5. If the resulting $\theta$ defines an ellipse, keep it as a candidate and compute its Sampson error.
6. Repeat this many times, and return from among the candidates the one with the smallest Sampson error $J$.
We can obtain an ellipse less biased than the solution of the method of Fitzgibbon et al.

42 Penalty method of Szpak et al. (2015)
Minimize
$J = \frac{1}{N}\sum_{\alpha=1}^{N}\frac{(\xi_\alpha, \theta)^2}{(\theta, V_0[\xi_\alpha]\theta)} + \lambda\frac{\|\theta\|^4}{(\theta, N\theta)^2}$
using the Levenberg-Marquardt method. The first term is the Sampson error; $(\theta, N\theta) = 0$ at the ellipse-hyperbola boundary; $\lambda$ is a regularization constant.

43 Comparison simulations
1. Fitzgibbon et al.: a small, flat ellipse.
2. Hyper-renormalization: a hyperbola.
3. Penalty method: a large ellipse close to 2.
4. Random sampling: an ellipse between 1 and 3.

44 Real image examples
Fitzgibbon et al. [1] produces a small, flat ellipse. If hyper-renormalization [2] returns an ellipse, random sampling [4] returns the same ellipse, and the penalty method [3] fits an ellipse close to it. If hyper-renormalization [2] returns a hyperbola, the penalty method [3] fits a large ellipse close to it, and random sampling [4] fits a somewhat more moderate ellipse.
Conclusion: If hyper-renormalization returns a hyperbola, no ellipse-specific method produces a reasonable ellipse. Ellipse-specific methods do not make much practical sense. Use random sampling if you need an ellipse by all means.

45 Fundamental Matrix Computation

46 Fundamental matrix
For two images of the same scene, the following epipolar equation holds for corresponding points $(x, y)$ and $(x', y')$:
$\Bigl(\begin{pmatrix}x/f_0\\ y/f_0\\ 1\end{pmatrix}, F\begin{pmatrix}x'/f_0\\ y'/f_0\\ 1\end{pmatrix}\Bigr) = 0$.
$f_0$: scale factor ($\approx$ the size of the image), $F$: fundamental matrix.
To remove the scale indeterminacy, $F$ is normalized to unit norm: $\|F\|\;(\equiv\sqrt{\sum_{i,j=1}^{3}F_{ij}^2}) = 1$.
From the computed $F$, we can reconstruct the 3-D structure of the scene.

47 Vector representation
$\Bigl(\begin{pmatrix}x/f_0\\ y/f_0\\ 1\end{pmatrix}, F\begin{pmatrix}x'/f_0\\ y'/f_0\\ 1\end{pmatrix}\Bigr) = 0 \;\Leftrightarrow\; (\xi, \theta) = 0$,
$\xi = (xx',\; xy',\; f_0x,\; yx',\; yy',\; f_0y,\; f_0x',\; f_0y',\; f_0^2)^\top$,
$\theta = (F_{11}, F_{12}, F_{13}, F_{21}, F_{22}, F_{23}, F_{31}, F_{32}, F_{33})^\top$,
$\|F\| = 1 \;\Leftrightarrow\; \|\theta\| = 1$.
Task: From noisy observations $\xi_1, \dots, \xi_N$, estimate a unit vector $\theta$ such that $(\xi_\alpha, \theta) \approx 0$, $\alpha = 1, \dots, N$.
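A sketch of the 9-D data vector and the plain LS estimate (without the rank constraint yet), under the same illustrative assumptions; make_xi_F, compute_F_ls, and the default f0 are hypothetical names and values.

```python
import numpy as np

def make_xi_F(p, p_prime, f0=600.0):
    """xi = (xx', xy', f0 x, yx', yy', f0 y, f0 x', f0 y', f0^2) for the epipolar constraint."""
    x, y = p
    xp, yp = p_prime
    return np.array([x*xp, x*yp, f0*x, y*xp, y*yp, f0*y, f0*xp, f0*yp, f0*f0])

def compute_F_ls(pairs, f0=600.0):
    """Least squares: unit eigenvector of M for the smallest eigenvalue, reshaped to F."""
    Xi = np.array([make_xi_F(p, pp, f0) for p, pp in pairs])
    M = Xi.T @ Xi / len(pairs)
    theta = np.linalg.eigh(M)[1][:, 0]
    return theta.reshape(3, 3)           # F, determined up to sign and scale
```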

48 Noise assumption
$(\bar{x}_\alpha, \bar{y}_\alpha)$, $(\bar{x}'_\alpha, \bar{y}'_\alpha)$: true values of $(x_\alpha, y_\alpha)$, $(x'_\alpha, y'_\alpha)$:
$x_\alpha = \bar{x}_\alpha + \Delta x_\alpha$, $y_\alpha = \bar{y}_\alpha + \Delta y_\alpha$, $x'_\alpha = \bar{x}'_\alpha + \Delta x'_\alpha$, $y'_\alpha = \bar{y}'_\alpha + \Delta y'_\alpha$.
Then $\xi_\alpha = \bar{\xi}_\alpha + \Delta_1\xi_\alpha + \Delta_2\xi_\alpha$, where $\bar{\xi}_\alpha$ is the true value of $\xi_\alpha$, $\Delta_1\xi_\alpha$ is the noise term linear in $\Delta x_\alpha$, $\Delta y_\alpha$, $\Delta x'_\alpha$, $\Delta y'_\alpha$, and $\Delta_2\xi_\alpha$ is the noise term quadratic in them:
$\bar{\xi}_\alpha = (\bar{x}_\alpha\bar{x}'_\alpha,\; \bar{x}_\alpha\bar{y}'_\alpha,\; f_0\bar{x}_\alpha,\; \bar{y}_\alpha\bar{x}'_\alpha,\; \bar{y}_\alpha\bar{y}'_\alpha,\; f_0\bar{y}_\alpha,\; f_0\bar{x}'_\alpha,\; f_0\bar{y}'_\alpha,\; f_0^2)^\top$,
$\Delta_1\xi_\alpha = (\bar{x}'_\alpha\Delta x_\alpha + \bar{x}_\alpha\Delta x'_\alpha,\; \bar{y}'_\alpha\Delta x_\alpha + \bar{x}_\alpha\Delta y'_\alpha,\; f_0\Delta x_\alpha,\; \bar{x}'_\alpha\Delta y_\alpha + \bar{y}_\alpha\Delta x'_\alpha,\; \bar{y}'_\alpha\Delta y_\alpha + \bar{y}_\alpha\Delta y'_\alpha,\; f_0\Delta y_\alpha,\; f_0\Delta x'_\alpha,\; f_0\Delta y'_\alpha,\; 0)^\top$,
$\Delta_2\xi_\alpha = (\Delta x_\alpha\Delta x'_\alpha,\; \Delta x_\alpha\Delta y'_\alpha,\; 0,\; \Delta y_\alpha\Delta x'_\alpha,\; \Delta y_\alpha\Delta y'_\alpha,\; 0,\; 0,\; 0,\; 0)^\top$.

49 Covariance matrix
The noise terms $\Delta x_\alpha$, $\Delta y_\alpha$, $\Delta x'_\alpha$, and $\Delta y'_\alpha$ are regarded as independent Gaussian random variables of mean 0 and variance $\sigma^2$. The covariance matrix of $\xi_\alpha$ is defined by $V[\xi_\alpha] = E[\Delta_1\xi_\alpha\Delta_1\xi_\alpha^\top] = \sigma^2 V_0[\xi_\alpha]$, with
$V_0[\xi_\alpha] = \begin{pmatrix}
\bar{x}_\alpha^2 + \bar{x}_\alpha'^2 & \bar{x}'_\alpha\bar{y}'_\alpha & f_0\bar{x}'_\alpha & \bar{x}_\alpha\bar{y}_\alpha & 0 & 0 & f_0\bar{x}_\alpha & 0 & 0\\
\bar{x}'_\alpha\bar{y}'_\alpha & \bar{x}_\alpha^2 + \bar{y}_\alpha'^2 & f_0\bar{y}'_\alpha & 0 & \bar{x}_\alpha\bar{y}_\alpha & 0 & 0 & f_0\bar{x}_\alpha & 0\\
f_0\bar{x}'_\alpha & f_0\bar{y}'_\alpha & f_0^2 & 0 & 0 & 0 & 0 & 0 & 0\\
\bar{x}_\alpha\bar{y}_\alpha & 0 & 0 & \bar{y}_\alpha^2 + \bar{x}_\alpha'^2 & \bar{x}'_\alpha\bar{y}'_\alpha & f_0\bar{x}'_\alpha & f_0\bar{y}_\alpha & 0 & 0\\
0 & \bar{x}_\alpha\bar{y}_\alpha & 0 & \bar{x}'_\alpha\bar{y}'_\alpha & \bar{y}_\alpha^2 + \bar{y}_\alpha'^2 & f_0\bar{y}'_\alpha & 0 & f_0\bar{y}_\alpha & 0\\
0 & 0 & 0 & f_0\bar{x}'_\alpha & f_0\bar{y}'_\alpha & f_0^2 & 0 & 0 & 0\\
f_0\bar{x}_\alpha & 0 & 0 & f_0\bar{y}_\alpha & 0 & 0 & f_0^2 & 0 & 0\\
0 & f_0\bar{x}_\alpha & 0 & 0 & f_0\bar{y}_\alpha & 0 & 0 & f_0^2 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}$.
$\sigma^2$: noise level, $V_0[\xi_\alpha]$: normalized covariance matrix.

50 Fundamental matrix computation
Algebraic methods:
non-iterative methods: least squares (LS), Taubin method, HyperLS
iterative methods: iterative reweight, renormalization, hyper-renormalization
Geometric methods: Sampson error minimization (FNS), geometric error minimization, hyperaccurate correction.
However, ...

51 Rank constraint
The fundamental matrix $F$ must have rank 2: $\det F = 0$.
Existing three approaches:
A posteriori correction: SVD correction, optimal correction.
Internal access: parameterize $F$ so that $\det F = 0$ is identically satisfied, and do the optimization in the internal parameter space of a smaller dimension.
External access: iterate in the external (redundant) space of $\theta$ in such a way that $\theta$ approaches the true value and yet $\det F = 0$ holds at the time of convergence.

52 SVD correction
1. Compute $F$ without considering the rank constraint.
2. Compute the SVD of $F$:
$F = U\,\mathrm{diag}(\sigma_1, \sigma_2, \sigma_3)\,V^\top$ ($\sigma_1 \ge \sigma_2 \ge \sigma_3$).
3. Correct $F$ to
$F \leftarrow U\,\mathrm{diag}\Bigl(\frac{\sigma_1}{\sqrt{\sigma_1^2 + \sigma_2^2}},\; \frac{\sigma_2}{\sqrt{\sigma_1^2 + \sigma_2^2}},\; 0\Bigr)V^\top$.
The norm $\|F\|$ is scaled to 1.
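A minimal sketch of this correction (NumPy assumed):

```python
import numpy as np

def svd_rank_correction(F):
    """Zero the smallest singular value and rescale so that ||F|| = 1."""
    U, s, Vt = np.linalg.svd(F)
    scale = np.sqrt(s[0]**2 + s[1]**2)
    return U @ np.diag([s[0] / scale, s[1] / scale, 0.0]) @ Vt
```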

53 Optimal correction (Kanatani and Sugaya 2007)
1. Compute $\theta$ without considering the rank constraint.
2. Compute the 9×9 matrix
$\hat{M} = \frac{1}{N}\sum_{\alpha=1}^{N}\frac{(P_\theta\xi_\alpha)(P_\theta\xi_\alpha)^\top}{(\theta, V_0[\xi_\alpha]\theta)}$, $P_\theta \equiv I - \theta\theta^\top$,
where $P_\theta$ is the projection matrix onto the space orthogonal to $\theta$.
3. Compute the eigenvalues $\lambda_1 \ge \cdots \ge \lambda_9$ (= 0) of $\hat{M}$ and the corresponding unit eigenvectors $u_1, u_2, \dots, u_9$ (= $\theta$). Then define
$V_0[\theta] = \frac{1}{N}\Bigl(\frac{u_1u_1^\top}{\lambda_1} + \cdots + \frac{u_8u_8^\top}{\lambda_8}\Bigr)$.
4. Modify $\theta$ to
$\theta \leftarrow N\Bigl[\theta - \frac{(\theta^\dagger, \theta)V_0[\theta]\theta^\dagger}{3(\theta^\dagger, V_0[\theta]\theta^\dagger)}\Bigr]$,
where $N[\,\cdot\,]$ denotes normalization to unit norm and $\theta^\dagger$ is the cofactor vector
$\theta^\dagger = (\theta_5\theta_9 - \theta_8\theta_6,\; \theta_6\theta_7 - \theta_9\theta_4,\; \theta_4\theta_8 - \theta_7\theta_5,\; \theta_8\theta_3 - \theta_2\theta_9,\; \theta_9\theta_1 - \theta_3\theta_7,\; \theta_7\theta_2 - \theta_1\theta_8,\; \theta_2\theta_6 - \theta_5\theta_3,\; \theta_3\theta_4 - \theta_6\theta_1,\; \theta_1\theta_5 - \theta_4\theta_2)^\top$.
5. If $(\theta^\dagger, \theta) \approx 0$, return $\theta$ and stop. Else, update $V_0[\theta] \leftarrow P_\theta V_0[\theta]P_\theta$ and go back to Step 4.
Note: $V_0[\theta] = \hat{M}_8^-/N$ (truncated pseudoinverse of rank 8), which equals the KCR lower bound, and $V_0[\theta]\theta = 0$ is always ensured.

54 Internal access (Sugaya and Kanatani 2007)
SVD of $F$: $F = U\,\mathrm{diag}(\sigma_1, \sigma_2, 0)\,V^\top$, $\sigma_1 = \cos\phi$, $\sigma_2 = \sin\phi$.
We regard $U$, $V$, and $\phi$ as the independent variables and minimize the Sampson error $J$ by the Levenberg-Marquardt method.
1. Compute an $F$ such that $\det F = 0$, and express its SVD in the form $F = U\,\mathrm{diag}(\cos\phi, \sin\phi, 0)\,V^\top$.
2. Compute the Sampson error $J$, and let $c = 0.0001$.
3. Compute the 9×3 matrices $F_U$ and $F_V$ whose columns are the derivatives of $\theta$ with respect to small rotations $\omega$ of $U$ and $\omega'$ of $V$, i.e., $\partial F/\partial\omega_k = A(e_k)F$ and $\partial F/\partial\omega'_k = -FA(e_k)$, where $A(\omega)$ is the skew-symmetric matrix of $\omega$ and $e_k$ is the $k$th coordinate vector; the entries are combinations of the elements $F_{ij}$.
4. Compute the 9-D vector $\theta_\phi$ whose component for $F_{ij}$ is $\sigma_1 U_{i2}V_{j2} - \sigma_2 U_{i1}V_{j1}$.
5. Compute the 9×9 matrices
$M = \frac{1}{N}\sum_\alpha\frac{\xi_\alpha\xi_\alpha^\top}{(\theta, V_0[\xi_\alpha]\theta)}$, $L = \frac{1}{N}\sum_\alpha\frac{(\xi_\alpha, \theta)^2}{(\theta, V_0[\xi_\alpha]\theta)^2}V_0[\xi_\alpha]$,
and let $X = M - L$.
6. Compute the first derivatives of $J$,
$\nabla_\omega J = 2F_U^\top X\theta$, $\nabla_{\omega'} J = 2F_V^\top X\theta$, $\partial J/\partial\phi = 2(\theta_\phi, X\theta)$,
and the second derivatives,
$\nabla^2_{\omega\omega}J = 2F_U^\top XF_U$, $\nabla^2_{\omega'\omega'}J = 2F_V^\top XF_V$, $\nabla^2_{\omega\omega'}J = 2F_U^\top XF_V$, $\partial^2 J/\partial\phi^2 = 2(\theta_\phi, X\theta_\phi)$, $\nabla_\omega\partial J/\partial\phi = 2F_U^\top X\theta_\phi$, $\nabla_{\omega'}\partial J/\partial\phi = 2F_V^\top X\theta_\phi$.
7. Compute the 7×7 Hessian
$H = \begin{pmatrix}\nabla^2_{\omega\omega}J & \nabla^2_{\omega\omega'}J & \nabla_\omega\partial J/\partial\phi\\ (\nabla^2_{\omega\omega'}J)^\top & \nabla^2_{\omega'\omega'}J & \nabla_{\omega'}\partial J/\partial\phi\\ (\nabla_\omega\partial J/\partial\phi)^\top & (\nabla_{\omega'}\partial J/\partial\phi)^\top & \partial^2 J/\partial\phi^2\end{pmatrix}$.
8. Solve the linear equation
$(H + cD[H])\begin{pmatrix}\Delta\omega\\ \Delta\omega'\\ \Delta\phi\end{pmatrix} = -\begin{pmatrix}\nabla_\omega J\\ \nabla_{\omega'} J\\ \partial J/\partial\phi\end{pmatrix}$,
where $D[\,\cdot\,]$ is the diagonal matrix of the diagonal elements.

55 9. Update $U$, $V$, and $\phi$ to
$U' = R(\Delta\omega)U$, $V' = R(\Delta\omega')V$, $\phi' = \phi + \Delta\phi$,
where $R(w)$ is the rotation around axis $w$ by angle $\|w\|$.
10. Update $F$ to $F' = U'\,\mathrm{diag}(\cos\phi', \sin\phi', 0)\,V'^\top$.
11. Compute the Sampson error $J'$ of $F'$. If neither $J' < J$ nor $J' \approx J$ is satisfied, let $c \leftarrow 10c$ and go back to Step 8.
12. If $F' \approx F$, return $F'$ and stop. Else, let $F \leftarrow F'$, $U \leftarrow U'$, $V \leftarrow V'$, $\phi \leftarrow \phi'$, and $c \leftarrow c/10$, and go back to Step 3.

56 External access (Kanatani and Sugaya 2010)
1. Initialize $\theta$.
2. Compute the 9×9 matrices
$M = \frac{1}{N}\sum_\alpha\frac{\xi_\alpha\xi_\alpha^\top}{(\theta, V_0[\xi_\alpha]\theta)}$, $L = \frac{1}{N}\sum_\alpha\frac{(\xi_\alpha, \theta)^2}{(\theta, V_0[\xi_\alpha]\theta)^2}V_0[\xi_\alpha]$.
3. Compute the 9-D cofactor vector $\theta^\dagger$ (as defined in the optimal correction) and the 9×9 matrix
$P_{\theta^\dagger} = I - \frac{\theta^\dagger\theta^{\dagger\top}}{\|\theta^\dagger\|^2}$.
4. Compute the 9×9 matrices $X = M - L$ and $Y = P_{\theta^\dagger}XP_{\theta^\dagger}$. Compute the unit eigenvectors $v_1$ and $v_2$ of $Y$ for the smallest two eigenvalues, and let
$\hat{\theta} = (\theta, v_1)v_1 + (\theta, v_2)v_2$.
5. Compute $\theta' = N[P_{\theta^\dagger}\hat{\theta}]$.
6. If $\theta' \approx \theta$ up to sign, return $\theta'$ as $\theta$ and stop. Else, let $\theta \leftarrow N[\theta + \theta']$ and go back to Step 2.

57 Geometric distance minimization (Kanatani and Sugaya 2010)
1. Let $J_0 = \infty$, $\hat{x}_\alpha = x_\alpha$, $\hat{y}_\alpha = y_\alpha$, $\hat{x}'_\alpha = x'_\alpha$, $\hat{y}'_\alpha = y'_\alpha$, and $\tilde{x}_\alpha = \tilde{y}_\alpha = \tilde{x}'_\alpha = \tilde{y}'_\alpha = 0$.
2. Compute the normalized covariance matrix $V_0[\hat{\xi}_\alpha]$ using $\hat{x}_\alpha$, $\hat{y}_\alpha$, $\hat{x}'_\alpha$, and $\hat{y}'_\alpha$, and let
$\xi^*_\alpha = \bigl(\hat{x}_\alpha\hat{x}'_\alpha + \hat{x}'_\alpha\tilde{x}_\alpha + \hat{x}_\alpha\tilde{x}'_\alpha,\; \hat{x}_\alpha\hat{y}'_\alpha + \hat{y}'_\alpha\tilde{x}_\alpha + \hat{x}_\alpha\tilde{y}'_\alpha,\; f_0(\hat{x}_\alpha + \tilde{x}_\alpha),\; \hat{y}_\alpha\hat{x}'_\alpha + \hat{x}'_\alpha\tilde{y}_\alpha + \hat{y}_\alpha\tilde{x}'_\alpha,\; \hat{y}_\alpha\hat{y}'_\alpha + \hat{y}'_\alpha\tilde{y}_\alpha + \hat{y}_\alpha\tilde{y}'_\alpha,\; f_0(\hat{y}_\alpha + \tilde{y}_\alpha),\; f_0(\hat{x}'_\alpha + \tilde{x}'_\alpha),\; f_0(\hat{y}'_\alpha + \tilde{y}'_\alpha),\; f_0^2\bigr)^\top$.
3. Compute the $\theta$ that minimizes the modified Sampson error
$J^* = \frac{1}{N}\sum_{\alpha=1}^{N}\frac{(\xi^*_\alpha, \theta)^2}{(\theta, V_0[\hat{\xi}_\alpha]\theta)}$.
4. Update $\tilde{x}_\alpha$, $\tilde{y}_\alpha$, $\tilde{x}'_\alpha$, and $\tilde{y}'_\alpha$ to
$\begin{pmatrix}\tilde{x}_\alpha\\ \tilde{y}_\alpha\end{pmatrix} \leftarrow \frac{(\xi^*_\alpha, \theta)}{(\theta, V_0[\hat{\xi}_\alpha]\theta)}\begin{pmatrix}\theta_1 & \theta_2 & \theta_3\\ \theta_4 & \theta_5 & \theta_6\end{pmatrix}\begin{pmatrix}\hat{x}'_\alpha\\ \hat{y}'_\alpha\\ f_0\end{pmatrix}$, $\begin{pmatrix}\tilde{x}'_\alpha\\ \tilde{y}'_\alpha\end{pmatrix} \leftarrow \frac{(\xi^*_\alpha, \theta)}{(\theta, V_0[\hat{\xi}_\alpha]\theta)}\begin{pmatrix}\theta_1 & \theta_4 & \theta_7\\ \theta_2 & \theta_5 & \theta_8\end{pmatrix}\begin{pmatrix}\hat{x}_\alpha\\ \hat{y}_\alpha\\ f_0\end{pmatrix}$,
and update $\hat{x}_\alpha \leftarrow x_\alpha - \tilde{x}_\alpha$, $\hat{y}_\alpha \leftarrow y_\alpha - \tilde{y}_\alpha$, $\hat{x}'_\alpha \leftarrow x'_\alpha - \tilde{x}'_\alpha$, $\hat{y}'_\alpha \leftarrow y'_\alpha - \tilde{y}'_\alpha$.
5. Compute $J^* = \frac{1}{N}\sum_{\alpha=1}^{N}(\tilde{x}_\alpha^2 + \tilde{y}_\alpha^2 + \tilde{x}_\alpha'^2 + \tilde{y}_\alpha'^2)$. If $J^* \approx J_0$, return $\theta$ and stop. Else, let $J_0 \leftarrow J^*$ and go back to Step 2.
The Sampson error minimization solution and the geometric distance minimization solution usually coincide up to several significant digits: minimizing the Sampson error is practically the same as minimizing the geometric distance.

58 Examples
Noise of level $\sigma = 1.0$ is added to the corresponding points, and the computation error
$E = \sqrt{\textstyle\sum_{i,j=1}^{3}(\hat{F}_{ij} - \bar{F}_{ij})^2}$
is compared for LS + SVD, FNS + SVD, optimal correction, internal access, external access, and geometric distance minimization.
LS + SVD (= Hartley's 8-point method) has poor accuracy. Optimal correction, internal access, and external access all have almost optimal accuracy ($\approx$ KCR lower bound). Geometric distance minimization by iterations results in little improvement.

59 Homography Computation

60 Homography
Two images of a planar surface are related by a homography:
$x' = f_0\frac{H_{11}x + H_{12}y + H_{13}f_0}{H_{31}x + H_{32}y + H_{33}f_0}$, $y' = f_0\frac{H_{21}x + H_{22}y + H_{23}f_0}{H_{31}x + H_{32}y + H_{33}f_0}$.
$f_0$: scale factor ($\approx$ the size of the image). This can be written as
$\begin{pmatrix}x'/f_0\\ y'/f_0\\ 1\end{pmatrix} \cong H\begin{pmatrix}x/f_0\\ y/f_0\\ 1\end{pmatrix}$, $H = \begin{pmatrix}H_{11} & H_{12} & H_{13}\\ H_{21} & H_{22} & H_{23}\\ H_{31} & H_{32} & H_{33}\end{pmatrix}$,
where $\cong$ denotes equality up to a nonzero constant and $H$ is the homography matrix. To remove the scale indeterminacy, $H$ is normalized to unit norm: $\|H\|\;(\equiv\sqrt{\sum_{i,j=1}^{3}H_{ij}^2}) = 1$.
From the computed $H$, we can reconstruct the position and orientation of the plane and compute the relative camera positions.

61 Vector representation
The relation $(x'/f_0, y'/f_0, 1)^\top \cong H(x/f_0, y/f_0, 1)^\top$ is equivalent to
$\begin{pmatrix}x'/f_0\\ y'/f_0\\ 1\end{pmatrix} \times H\begin{pmatrix}x/f_0\\ y/f_0\\ 1\end{pmatrix} = 0$.
The three components of this vector equation are $(\xi^{(1)}, \theta) = 0$, $(\xi^{(2)}, \theta) = 0$, and $(\xi^{(3)}, \theta) = 0$, where
$\theta = (H_{11}, H_{12}, H_{13}, H_{21}, H_{22}, H_{23}, H_{31}, H_{32}, H_{33})^\top$,
$\xi^{(1)} = (0,\; 0,\; 0,\; -f_0x,\; -f_0y,\; -f_0^2,\; xy',\; yy',\; f_0y')^\top$,
$\xi^{(2)} = (f_0x,\; f_0y,\; f_0^2,\; 0,\; 0,\; 0,\; -xx',\; -yx',\; -f_0x')^\top$,
$\xi^{(3)} = (-xy',\; -yy',\; -f_0y',\; xx',\; yx',\; f_0x',\; 0,\; 0,\; 0)^\top$,
and $\|H\| = 1 \;\Leftrightarrow\; \|\theta\| = 1$.
Task: From noisy observations $\xi^{(k)}_\alpha$, estimate a unit vector $\theta$ such that $(\xi^{(k)}_\alpha, \theta) \approx 0$, $k = 1, 2, 3$, $\alpha = 1, \dots, N$.
The three equations are not linearly independent: if two of them are satisfied, the remaining one is automatically satisfied.
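A sketch of the three data vectors for one point pair; the overall signs follow from the cross product above (any consistent sign convention gives the same constraints), and the function name and default f0 are illustrative.

```python
import numpy as np

def make_xi_H(p, p_prime, f0=600.0):
    """xi^(1), xi^(2), xi^(3): (xi^(k), theta) = 0 are the components of
    (x'/f0, y'/f0, 1) x H (x/f0, y/f0, 1), scaled by f0^2."""
    x, y = p
    xp, yp = p_prime
    xi1 = np.array([0, 0, 0, -f0*x, -f0*y, -f0*f0, x*yp, y*yp, f0*yp], dtype=float)
    xi2 = np.array([f0*x, f0*y, f0*f0, 0, 0, 0, -x*xp, -y*xp, -f0*xp], dtype=float)
    xi3 = np.array([-x*yp, -y*yp, -f0*yp, x*xp, y*xp, f0*xp, 0, 0, 0], dtype=float)
    return xi1, xi2, xi3
```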

62 Covariance matrices
The noise terms $\Delta x_\alpha$, $\Delta y_\alpha$, $\Delta x'_\alpha$, and $\Delta y'_\alpha$ are regarded as independent Gaussian random variables of mean 0 and variance $\sigma^2$. The covariance matrices of $\xi^{(k)}_\alpha$ are defined by
$V^{(kl)}[\xi_\alpha] = E[\Delta_1\xi^{(k)}_\alpha\Delta_1\xi^{(l)\top}_\alpha]\;(= \sigma^2 V^{(kl)}_0[\xi_\alpha])$.
Then
$V^{(kl)}_0[\xi_\alpha] = T^{(k)}_\alpha T^{(l)\top}_\alpha$, $T^{(k)}_\alpha = \Bigl(\dfrac{\partial\xi^{(k)}}{\partial x}\;\dfrac{\partial\xi^{(k)}}{\partial y}\;\dfrac{\partial\xi^{(k)}}{\partial x'}\;\dfrac{\partial\xi^{(k)}}{\partial y'}\Bigr)_\alpha$,
where $T^{(k)}_\alpha$ is the 9×4 Jacobi matrix, $(\,\cdot\,)_\alpha$ denotes the value for $x = x_\alpha$, $y = y_\alpha$, $x' = x'_\alpha$, $y' = y'_\alpha$, and $V^{(kl)}_0[\xi_\alpha]$ are the normalized covariance matrices.

63 Iterative reweight
1. Let $\theta_0 = 0$ and $W^{(kl)}_\alpha = \delta_{kl}$, $\alpha = 1, \dots, N$, $k, l = 1, 2, 3$.
2. Compute the 9×9 matrix
$M = \frac{1}{N}\sum_{\alpha=1}^{N}\sum_{k,l=1}^{3} W^{(kl)}_\alpha\xi^{(k)}_\alpha\xi^{(l)\top}_\alpha$.
3. Solve the eigenvalue problem $M\theta = \lambda\theta$, and compute the unit eigenvector $\theta$ for the smallest eigenvalue $\lambda$.
4. If $\theta \approx \theta_0$ up to sign, return $\theta$ and stop. Else, update
$\bigl(W^{(kl)}_\alpha\bigr) \leftarrow \Bigl(\bigl(\theta, V^{(kl)}_0[\xi_\alpha]\theta\bigr)\Bigr)^-_2$, $\theta_0 \leftarrow \theta$,
and go back to Step 2.
$\delta_{kl}$: Kronecker delta (1 for $k = l$ and 0 otherwise). $\bigl((\theta, V^{(kl)}_0[\xi_\alpha]\theta)\bigr)$ is the 3×3 matrix whose $(k, l)$ element is $(\theta, V^{(kl)}_0[\xi_\alpha]\theta)$, and $(\,\cdot\,)^-_2$ is its pseudoinverse of truncated rank 2.
The initial solution corresponds to least squares.

64 Renormalization (Kanatani et al. 2000)
1. Let $\theta_0 = 0$ and $W^{(kl)}_\alpha = \delta_{kl}$, $\alpha = 1, \dots, N$, $k, l = 1, 2, 3$.
2. Compute the 9×9 matrices
$M = \frac{1}{N}\sum_{\alpha=1}^{N}\sum_{k,l=1}^{3} W^{(kl)}_\alpha\xi^{(k)}_\alpha\xi^{(l)\top}_\alpha$, $N = \frac{1}{N}\sum_{\alpha=1}^{N}\sum_{k,l=1}^{3} W^{(kl)}_\alpha V^{(kl)}_0[\xi_\alpha]$.
3. Solve the generalized eigenvalue problem $M\theta = \lambda N\theta$, and compute the unit generalized eigenvector $\theta$ for the generalized eigenvalue $\lambda$ of the smallest absolute value.
4. If $\theta \approx \theta_0$ up to sign, return $\theta$ and stop. Else, update
$\bigl(W^{(kl)}_\alpha\bigr) \leftarrow \Bigl(\bigl(\theta, V^{(kl)}_0[\xi_\alpha]\theta\bigr)\Bigr)^-_2$, $\theta_0 \leftarrow \theta$,
and go back to Step 2.
The initial solution corresponds to the Taubin method.

65 Hyper-renormalization (Kanatani et al. 2014)
1. Let $\theta_0 = 0$ and $W^{(kl)}_\alpha = \delta_{kl}$, $\alpha = 1, \dots, N$, $k, l = 1, 2, 3$.
2. Compute the 9×9 matrices
$M = \frac{1}{N}\sum_{\alpha=1}^{N}\sum_{k,l=1}^{3} W^{(kl)}_\alpha\xi^{(k)}_\alpha\xi^{(l)\top}_\alpha$,
$N = \frac{1}{N}\sum_{\alpha=1}^{N}\sum_{k,l=1}^{3} W^{(kl)}_\alpha V^{(kl)}_0[\xi_\alpha] - \frac{1}{N^2}\sum_{\alpha=1}^{N}\sum_{k,l,m,n=1}^{3} W^{(kl)}_\alpha W^{(mn)}_\alpha\Bigl(\bigl(\xi^{(k)}_\alpha, M^-_8\xi^{(m)}_\alpha\bigr)V^{(ln)}_0[\xi_\alpha] + 2S\bigl[V^{(km)}_0[\xi_\alpha]M^-_8\xi^{(l)}_\alpha\xi^{(n)\top}_\alpha\bigr]\Bigr)$,
where $M^-_8$ is the pseudoinverse of $M$ of truncated rank 8.
3. Solve the generalized eigenvalue problem $M\theta = \lambda N\theta$, and compute the unit generalized eigenvector $\theta$ for the generalized eigenvalue $\lambda$ of the smallest absolute value.
4. If $\theta \approx \theta_0$ up to sign, return $\theta$ and stop. Else, update
$\bigl(W^{(kl)}_\alpha\bigr) \leftarrow \Bigl(\bigl(\theta, V^{(kl)}_0[\xi_\alpha]\theta\bigr)\Bigr)^-_2$, $\theta_0 \leftarrow \theta$,
and go back to Step 2.
The initial solution corresponds to HyperLS.

66 FNS (Kanatani and Niitsuma 2011)
1. Let $\theta = \theta_0 = 0$ and $W^{(kl)}_\alpha = \delta_{kl}$, $\alpha = 1, \dots, N$, $k, l = 1, 2, 3$.
2. Compute the 9×9 matrices
$M = \frac{1}{N}\sum_{\alpha=1}^{N}\sum_{k,l=1}^{3} W^{(kl)}_\alpha\xi^{(k)}_\alpha\xi^{(l)\top}_\alpha$, $L = \frac{1}{N}\sum_{\alpha=1}^{N}\sum_{k,l=1}^{3} v^{(k)}_\alpha v^{(l)}_\alpha V^{(kl)}_0[\xi_\alpha]$,
where $v^{(k)}_\alpha = \sum_{l=1}^{3} W^{(kl)}_\alpha(\xi^{(l)}_\alpha, \theta)$.
3. Compute the 9×9 matrix $X = M - L$.
4. Solve the eigenvalue problem $X\theta = \lambda\theta$, and compute the unit eigenvector $\theta$ for the smallest eigenvalue $\lambda$.
5. If $\theta \approx \theta_0$ up to sign, return $\theta$ and stop. Else, update
$\bigl(W^{(kl)}_\alpha\bigr) \leftarrow \Bigl(\bigl(\theta, V^{(kl)}_0[\xi_\alpha]\theta\bigr)\Bigr)^-_2$, $\theta_0 \leftarrow \theta$,
and go back to Step 2.
This minimizes the Sampson error
$J = \frac{1}{N}\sum_{\alpha=1}^{N}\sum_{k,l=1}^{3} W^{(kl)}_\alpha(\xi^{(k)}_\alpha, \theta)(\xi^{(l)}_\alpha, \theta)$, $\bigl(W^{(kl)}_\alpha\bigr) = \Bigl(\bigl(\theta, V^{(kl)}_0[\xi_\alpha]\theta\bigr)\Bigr)^-_2$.
The initial solution corresponds to least squares. This reduces to the FNS of Chojnacki et al. (2000) for a single constraint.

67 Geometric distance minimization
We strictly minimize the geometric distance
$S = \frac{1}{N}\sum_{\alpha=1}^{N}\bigl((x_\alpha - \bar{x}_\alpha)^2 + (y_\alpha - \bar{y}_\alpha)^2 + (x'_\alpha - \bar{x}'_\alpha)^2 + (y'_\alpha - \bar{y}'_\alpha)^2\bigr)$.
We first minimize the Sampson error $J$ by FNS and modify the data $\xi^{(k)}_\alpha$ to $\xi^{(k)*}_\alpha$ using the computed solution $\theta$. Regarding them as the data, we define the modified Sampson error $J^*$ and minimize it by FNS. If this is repeated, the modified Sampson error $J^*$ eventually coincides with the geometric distance $S$, and we obtain the solution that minimizes $S$. In practice, the iterations do not alter the value of $\theta$ over several significant digits: Sampson error minimization is practically the same as geometric distance minimization.

68 Hyperaccurate correction
The geometric distance minimization solution is theoretically biased. We can theoretically improve the accuracy by evaluating and subtracting the bias (hyperaccurate correction). However, the accuracy gain is very small, because the bias of the solution is already very small: the data $\xi^{(k)}_\alpha$ consist of bilinear expressions in $x_\alpha$, $y_\alpha$, $x'_\alpha$, and $y'_\alpha$; unlike ellipse fitting, no quadratic terms such as $x_\alpha^2$ are involved, and the noise in different images is assumed to be independent. The bias of fundamental matrix computation is also small for the same reason.

69 Examples
Noise of level $\sigma = 1.0$ is added to the corresponding points, and the computation error
$E = \sqrt{\textstyle\sum_{i,j=1}^{3}(\hat{H}_{ij} - \bar{H}_{ij})^2}$
is compared for LS, iterative reweight, Taubin, renormalization, HyperLS, hyper-renormalization, FNS, geometric distance minimization, and hyperaccurate correction.
LS and iterative reweight have poor accuracy. Taubin and HyperLS improve the accuracy. Renormalization and hyper-renormalization further improve it. FNS $\approx$ geometric distance minimization $\approx$ hyperaccurate correction. FNS is the most suitable in practice.

70 Acknowledgments This work has been done in collaboration with: Prasanna Rangarajan (Southern Methodist University, U.S.A.) Ali Al-Sharadqah (California State University, Northridge, U.S.A). Nikolai Chernov (University of Alabama at Birmingham, U.S.A.); passed away August 7, 2014 Special thanks are due to: Takayuki Okatani (Tohoku University, Japan) Michael Felsberg (Linköping University, Sweden) Rudolf Mester (University of Frankfurt, Germany) Wolfgang Förstner (University of Bonn, Germany) Peter Meer (Rutgers University, U.S.A.) Michael Brooks (University of Adelaide, Australia) Wojciech Chojnacki (University of Adelaide, Australia) Alexander Kukush (University of Kiev, Ukraine)

71 For further details, see:
K. Kanatani, Y. Sugaya, and Y. Kanazawa, Ellipse Fitting for Computer Vision: Implementation and Applications, Morgan & Claypool Publishers, San Rafael, CA, U.S., April. ISBN (print), ISBN (E-book).
K. Kanatani, Y. Sugaya, and Y. Kanazawa, Guide to 3D Vision Computation: Geometric Analysis and Implementation, Springer International, Cham, Switzerland, December. ISBN (print), ISBN (E-book).


More information

Ph.D. Qualifying Exam Friday Saturday, January 6 7, 2017

Ph.D. Qualifying Exam Friday Saturday, January 6 7, 2017 Ph.D. Qualifying Exam Friday Saturday, January 6 7, 2017 Put your solution to each problem on a separate sheet of paper. Problem 1. (5106) Let X 1, X 2,, X n be a sequence of i.i.d. observations from a

More information

Multiple View Geometry in Computer Vision

Multiple View Geometry in Computer Vision Multiple View Geometry in Computer Vision Prasanna Sahoo Department of Mathematics University of Louisville 1 Scene Planes & Homographies Lecture 19 March 24, 2005 2 In our last lecture, we examined various

More information

Lecture 1: Review of linear algebra

Lecture 1: Review of linear algebra Lecture 1: Review of linear algebra Linear functions and linearization Inverse matrix, least-squares and least-norm solutions Subspaces, basis, and dimension Change of basis and similarity transformations

More information

Least Squares Optimization

Least Squares Optimization Least Squares Optimization The following is a brief review of least squares optimization and constrained optimization techniques, which are widely used to analyze and visualize data. Least squares (LS)

More information

Segmentation of Subspace Arrangements III Robust GPCA

Segmentation of Subspace Arrangements III Robust GPCA Segmentation of Subspace Arrangements III Robust GPCA Berkeley CS 294-6, Lecture 25 Dec. 3, 2006 Generalized Principal Component Analysis (GPCA): (an overview) x V 1 V 2 (x 3 = 0)or(x 1 = x 2 = 0) {x 1x

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

Principal Component Analysis

Principal Component Analysis Principal Component Analysis Laurenz Wiskott Institute for Theoretical Biology Humboldt-University Berlin Invalidenstraße 43 D-10115 Berlin, Germany 11 March 2004 1 Intuition Problem Statement Experimental

More information

Neural Network Training

Neural Network Training Neural Network Training Sargur Srihari Topics in Network Training 0. Neural network parameters Probabilistic problem formulation Specifying the activation and error functions for Regression Binary classification

More information

1 Introduction. Systems 2: Simulating Errors. Mobile Robot Systems. System Under. Environment

1 Introduction. Systems 2: Simulating Errors. Mobile Robot Systems. System Under. Environment Systems 2: Simulating Errors Introduction Simulating errors is a great way to test you calibration algorithms, your real-time identification algorithms, and your estimation algorithms. Conceptually, the

More information

Linear Algebra Review. Fei-Fei Li

Linear Algebra Review. Fei-Fei Li Linear Algebra Review Fei-Fei Li 1 / 37 Vectors Vectors and matrices are just collections of ordered numbers that represent something: movements in space, scaling factors, pixel brightnesses, etc. A vector

More information

Linear Classifiers as Pattern Detectors

Linear Classifiers as Pattern Detectors Intelligent Systems: Reasoning and Recognition James L. Crowley ENSIMAG 2 / MoSIG M1 Second Semester 2014/2015 Lesson 16 8 April 2015 Contents Linear Classifiers as Pattern Detectors Notation...2 Linear

More information

VAR Model. (k-variate) VAR(p) model (in the Reduced Form): Y t-2. Y t-1 = A + B 1. Y t + B 2. Y t-p. + ε t. + + B p. where:

VAR Model. (k-variate) VAR(p) model (in the Reduced Form): Y t-2. Y t-1 = A + B 1. Y t + B 2. Y t-p. + ε t. + + B p. where: VAR Model (k-variate VAR(p model (in the Reduced Form: where: Y t = A + B 1 Y t-1 + B 2 Y t-2 + + B p Y t-p + ε t Y t = (y 1t, y 2t,, y kt : a (k x 1 vector of time series variables A: a (k x 1 vector

More information

Lecture 6 Positive Definite Matrices

Lecture 6 Positive Definite Matrices Linear Algebra Lecture 6 Positive Definite Matrices Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Spring 2017 2017/6/8 Lecture 6: Positive Definite Matrices

More information

Lecture 7. Econ August 18

Lecture 7. Econ August 18 Lecture 7 Econ 2001 2015 August 18 Lecture 7 Outline First, the theorem of the maximum, an amazing result about continuity in optimization problems. Then, we start linear algebra, mostly looking at familiar

More information

Lecture 1: Systems of linear equations and their solutions

Lecture 1: Systems of linear equations and their solutions Lecture 1: Systems of linear equations and their solutions Course overview Topics to be covered this semester: Systems of linear equations and Gaussian elimination: Solving linear equations and applications

More information

Probabilistic Latent Semantic Analysis

Probabilistic Latent Semantic Analysis Probabilistic Latent Semantic Analysis Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr

More information

CITS 4402 Computer Vision

CITS 4402 Computer Vision CITS 4402 Computer Vision A/Prof Ajmal Mian Adj/A/Prof Mehdi Ravanbakhsh Lecture 06 Object Recognition Objectives To understand the concept of image based object recognition To learn how to match images

More information

MA 265 FINAL EXAM Fall 2012

MA 265 FINAL EXAM Fall 2012 MA 265 FINAL EXAM Fall 22 NAME: INSTRUCTOR S NAME:. There are a total of 25 problems. You should show work on the exam sheet, and pencil in the correct answer on the scantron. 2. No books, notes, or calculators

More information

7. Symmetric Matrices and Quadratic Forms

7. Symmetric Matrices and Quadratic Forms Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value

More information

Week Quadratic forms. Principal axes theorem. Text reference: this material corresponds to parts of sections 5.5, 8.2,

Week Quadratic forms. Principal axes theorem. Text reference: this material corresponds to parts of sections 5.5, 8.2, Math 051 W008 Margo Kondratieva Week 10-11 Quadratic forms Principal axes theorem Text reference: this material corresponds to parts of sections 55, 8, 83 89 Section 41 Motivation and introduction Consider

More information

Class Averaging in Cryo-Electron Microscopy

Class Averaging in Cryo-Electron Microscopy Class Averaging in Cryo-Electron Microscopy Zhizhen Jane Zhao Courant Institute of Mathematical Sciences, NYU Mathematics in Data Sciences ICERM, Brown University July 30 2015 Single Particle Reconstruction

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Dimensionality Reduction and Principle Components

Dimensionality Reduction and Principle Components Dimensionality Reduction and Principle Components Ken Kreutz-Delgado (Nuno Vasconcelos) UCSD ECE Department Winter 2012 Motivation Recall, in Bayesian decision theory we have: World: States Y in {1,...,

More information

Least squares: introduction to the network adjustment

Least squares: introduction to the network adjustment Least squares: introduction to the network adjustment Experimental evidence and consequences Observations of the same quantity that have been performed at the highest possible accuracy provide different

More information

Errata for Robot Vision

Errata for Robot Vision Errata for Robot Vision This is a list of known nontrivial bugs in Robot Vision 1986 by B.K.P. Horn, MIT Press, Cambridge, MA ISBN 0-262-08159-8 and McGraw-Hill, New York, NY ISBN 0-07-030349-5. If you

More information

Distance Between Ellipses in 2D

Distance Between Ellipses in 2D Distance Between Ellipses in 2D David Eberly, Geometric Tools, Redmond WA 98052 https://www.geometrictools.com/ This work is licensed under the Creative Commons Attribution 4.0 International License. To

More information