Augmented Reality VU numerical optimization algorithms. Prof. Vincent Lepetit


1 Augmented Reality VU: Numerical Optimization Algorithms. Prof. Vincent Lepetit

2 P3P: can exploit only 3 correspondences; DLT: difficult to exploit the knowledge of the internal parameters correctly, and to enforce the constraint that R should be a rotation matrix. [Figure: 3D points M_1, ..., M_4 projected to image points m_1, ..., m_4 through the camera center C]

3 Minimization of the Reprojection Error

$$\min_{R,T} \sum_i \left\| \mathrm{Proj}_{A,R,T}(M_i) - m_i \right\|^2$$

Minimization of a physical, meaningful error (the reprojection error, in pixels); no restriction on the number of correspondences; can be very accurate. [Figure: 3D points M_i, their measured image points m_i, and the projection Proj_{A,R,T}(M_4) obtained from A[R|T]M_4]

4 Minimization of the Reprojection Error

$$\min_{R,T} \sum_i \left\| \mathrm{Proj}_{A,R,T}(M_i) - m_i \right\|^2$$

Non-linear least-squares minimization; requires an iterative numerical optimization → requires an initialization. [Figure: same configuration as on the previous slide]

5 Toy Problem. True camera position at (0, 0); 1D camera under 2D translation; the objective function is the reprojection error; 100 "3D points" taken at random in [400; 1000] × [-500; +500].

6 Gaussian Noise on the Projections. White cross: true camera position; black cross: global minimum of the objective function. In this case, the global minimum of the objective function is close to the true camera pose.

7 Numerical Optimization. Start from an initial guess p_0. p_0 can be taken randomly but should be as close as possible to the global minimum, for example: - the pose computed at time t-1; - the pose predicted from the pose computed at time t-1 and a motion model; - ... [Figure: successive iterates p_0, p_1, p_2 descending toward the minimum]

8 Numerical Optimization General methods: Gradient descent / Steepest Descent; Conjugate Gradient;... Non-linear Least-squares optimization: Gauss-Newton; Levenberg-Marquardt;...

9 Numerical Optimization. We want to find p that minimizes:

$$E(p) = \sum_i \left\| \mathrm{Proj}_{A,R(p),T(p)}(M_i) - m_i \right\|^2 = \| f(p) - b \|^2$$

where p is a vector of parameters that define the camera pose (translation vector + parameters of the rotation matrix); b is a vector made of the measurements (here the m_i); f is the function that relates the camera pose to these measurements:

$$f(p) = \begin{pmatrix} u\!\left(\mathrm{Proj}_{A,R(p),T(p)}(M_1)\right) \\ v\!\left(\mathrm{Proj}_{A,R(p),T(p)}(M_1)\right) \\ \vdots \end{pmatrix}, \qquad b = \begin{pmatrix} u(m_1) \\ v(m_1) \\ \vdots \end{pmatrix}$$
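For concreteness, here is a minimal sketch of f(p) - b in Python/NumPy, assuming an exponential-map pose vector p = (w, T) and a `rodrigues` helper that maps w to a rotation matrix (an implementation of that helper is sketched further below); names and conventions are illustrative, not from the slides:

```python
import numpy as np

def residuals(p, A, Ms, ms, rodrigues):
    """Stacked reprojection residuals f(p) - b.

    p  : 6-vector (w_x, w_y, w_z, t_x, t_y, t_z), rotation + translation
    A  : 3x3 internal parameters matrix
    Ms : Nx3 array of 3D points M_i
    ms : Nx2 array of measured 2D points m_i
    """
    R, T = rodrigues(p[:3]), p[3:]
    M_cam = Ms @ R.T + T                 # M_i in camera coordinates
    m_hom = M_cam @ A.T                  # homogeneous projections A [R|T] M_i
    proj = m_hom[:, :2] / m_hom[:, 2:3]  # dehomogenize to (u, v)
    return (proj - ms).ravel()           # f(p) - b, stacked (u_1, v_1, u_2, ...)
```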

10 Gradient Descent / Steepest Descent

$$p_{i+1} = p_i - \lambda \nabla E(p_i)$$

with

$$E(p_i) = \| f(p_i) - b \|^2, \qquad \nabla E(p_i) = 2 J^\top \left( f(p_i) - b \right)$$

where J is the Jacobian matrix of f, computed at p_i. Weaknesses: - How to choose λ? - Needs a lot of iterations in long and narrow valleys:
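A bare-bones gradient-descent loop under the same assumptions (`residuals_fn` and `jacobian_fn` are placeholders for the functions above; a fixed step λ is used only for illustration):

```python
import numpy as np

def gradient_descent(p0, residuals_fn, jacobian_fn, lam=1e-6, n_iters=1000):
    """Minimize E(p) = ||f(p) - b||^2 with fixed-step gradient descent."""
    p = p0.copy()
    for _ in range(n_iters):
        r = residuals_fn(p)            # f(p_i) - b
        J = jacobian_fn(p)
        p = p - lam * 2.0 * J.T @ r    # p_{i+1} = p_i - lambda * grad E(p_i)
    return p
```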

11 The Gauss-Newton and Levenberg-Marquardt Algorithms. But first, the linear least-squares case:

$$E(p) = \| f(p) - b \|^2$$

If the function f is linear, i.e. f(p) = Ap, p can be estimated in closed form as p = A⁺b, where A⁺ is the pseudo-inverse of A: A⁺ = (AᵀA)⁻¹Aᵀ.
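A quick numerical check of this closed-form solution (random data, purely illustrative; `np.linalg.lstsq` is the numerically safer way to apply the pseudo-inverse):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 6))    # linear model f(p) = A p
b = rng.standard_normal(100)

p_pinv = np.linalg.inv(A.T @ A) @ A.T @ b        # p = (A^T A)^{-1} A^T b
p_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]   # same minimizer, more stable
assert np.allclose(p_pinv, p_lstsq)
```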

12 Non-Linear Least-Squares: The Gauss-Newton Algorithm. Iteration steps: p_{i+1} = p_i + Δ_i, where Δ_i is chosen to minimize the residual ||f(p_{i+1}) - b||². It is computed by approximating f to the first order, f(p_i + Δ) ≈ f(p_i) + JΔ:

$$\Delta_i = \arg\min_\Delta \| f(p_i + \Delta) - b \|^2 \approx \arg\min_\Delta \| f(p_i) + J\Delta - b \|^2 = \arg\min_\Delta \| J\Delta - \varepsilon_i \|^2$$

where ε_i = b - f(p_i) denotes the residual at iteration i. Δ_i is the solution of the system JΔ = ε_i in the least-squares sense:

$$\Delta_i = J^+ \varepsilon_i$$

where J⁺ is the pseudo-inverse of J.
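A corresponding Gauss-Newton loop, again a sketch under the assumptions above:

```python
import numpy as np

def gauss_newton(p0, residuals_fn, jacobian_fn, n_iters=20):
    """At each iteration, solve J @ delta = eps_i (eps_i = b - f(p_i)) in the
    least-squares sense and update p_{i+1} = p_i + delta_i."""
    p = p0.copy()
    for _ in range(n_iters):
        eps = -residuals_fn(p)                          # eps_i = b - f(p_i)
        J = jacobian_fn(p)
        delta = np.linalg.lstsq(J, eps, rcond=None)[0]  # delta_i = J^+ eps_i
        p = p + delta
    return p
```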

13 Non-Linear Least-Squares: The Levenberg-Marquardt Algorithm. In the Gauss-Newton algorithm:

$$\Delta_i = \left( J^\top J \right)^{-1} J^\top \varepsilon_i$$

In the Levenberg-Marquardt algorithm:

$$\Delta_i = \left( J^\top J + \lambda I \right)^{-1} J^\top \varepsilon_i$$

Levenberg-Marquardt algorithm:
0. Initialize λ with a small value: λ = 0.001.
1. Compute Δ_i and E(p_i + Δ_i).
2. If E(p_i + Δ_i) > E(p_i): λ ← 10λ and go back to 1 [happens when the linear approximation of f is too rough].
3. If E(p_i + Δ_i) < E(p_i): λ ← λ/10, p_{i+1} ← p_i + Δ_i and go back to 1.
Once converged, set λ ← 0 and continue up to convergence.
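The λ schedule above maps directly to code; a compact sketch (fixed iteration budget instead of a convergence test, and the final λ ← 0 refinement step is omitted for brevity):

```python
import numpy as np

def levenberg_marquardt(p0, residuals_fn, jacobian_fn, lam=1e-3, n_iters=50):
    """Levenberg-Marquardt with the multiplicative lambda schedule of the slide."""
    p = p0.copy()
    E = np.sum(residuals_fn(p) ** 2)
    for _ in range(n_iters):
        eps = -residuals_fn(p)              # eps_i = b - f(p_i)
        J = jacobian_fn(p)
        H = J.T @ J
        delta = np.linalg.solve(H + lam * np.eye(H.shape[0]), J.T @ eps)
        E_new = np.sum(residuals_fn(p + delta) ** 2)
        if E_new > E:   # linear approximation too rough: act more like gradient descent
            lam *= 10.0
        else:           # accept the step and move back toward Gauss-Newton
            lam /= 10.0
            p, E = p + delta, E_new
    return p
```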

14 Non-Linear Least-Squares: The Levenberg-Marquardt Algorithm

$$\Delta_i = \left( J^\top J + \lambda I \right)^{-1} J^\top \varepsilon_i$$

When λ is small, LM behaves similarly to the Gauss-Newton algorithm. When λ becomes large, LM behaves similarly to a steepest descent, which guarantees convergence.

15 Error Estimation. Let us first assume that p can be computed by p = g(m), where m is the vector made of the coordinates of the m_i. If we want to represent the density of p by a normal distribution, the covariance matrix of this normal distribution is

$$\Sigma_p = A \, \Sigma_m A^\top$$

with A the Jacobian matrix of g. We have in fact m = f(p), so the Jacobian of g is J⁻¹ and the covariance matrix is

$$\Sigma_p = J^{-1} \Sigma_m J^{-\top}$$
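In practice J is usually rectangular (more measurements than pose parameters); a common generalization of the formula above, used here as an assumption rather than something stated on the slide, is Σ_p = (Jᵀ Σ_m⁻¹ J)⁻¹:

```python
import numpy as np

def pose_covariance(J, sigma=1.0):
    """Propagate image noise to the pose estimate.

    Assumes independent, isotropic Gaussian noise of standard deviation
    sigma on each 2D measurement, i.e. Sigma_m = sigma^2 I, so that
    Sigma_p = (J^T Sigma_m^{-1} J)^{-1}. For a square invertible J this
    reduces to the slide's J^{-1} Sigma_m J^{-T}.
    """
    return np.linalg.inv(J.T @ J / sigma**2)
```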

16 Error Estimation. The true pose p̄ lies inside the ellipsoid

$$\left( \bar p - p \right)^\top \Sigma_p^{-1} \left( \bar p - p \right) = F_n^{-1}(\alpha)$$

with probability α, where p is the minimum found by the optimization.

17 Examples

18 Examples. [Here the pose is computed using both 3D/2D correspondences and apparent motion, which is why the uncertainty in depth is smaller than in the previous example. Note however that the uncertainties grow when the camera moves far from the 3D model.]

19 [image-only slide]

20 Parameterizing the Rotation Matrix

21 Possible Parameterizations of the Rotation Matrix. A rotation in 3D space has only 3 degrees of freedom, so it would be awkward to use the nine matrix elements as its parameters. Possible parameterizations: Euler angles; quaternions; exponential map. All have singularities, which can be avoided by locally reparameterizing the rotation. The exponential map has the best properties. [From Grassia JGT98]

22 Euler Angles. Rotation defined by the angles of rotation around the X-, Y-, and Z-axes. Different conventions exist. For example:

$$R = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{pmatrix}$$

[Figure: angles α, β, γ around the Z-, Y-, and X-axes]
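The same convention in code (a small helper, assumed for the gimbal-lock experiment on the next slides):

```python
import numpy as np

def euler_to_R(alpha, beta, gamma):
    """R = Rz(alpha) @ Ry(beta) @ Rx(gamma), the convention of the slide."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cb, 0.0, sb], [0.0, 1.0, 0.0], [-sb, 0.0, cb]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cg, -sg], [0.0, sg, cg]])
    return Rz @ Ry @ Rx

# Gimbal lock: with beta = pi/2, only gamma - alpha matters, so these two
# different parameter triples give the same rotation matrix.
assert np.allclose(euler_to_R(0.1, np.pi / 2, 0.4),
                   euler_to_R(0.2, np.pi / 2, 0.5))
```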

23 Gimbal Lock and Optimization. When β = π/2,

$$R = \begin{pmatrix} 0 & \sin(\gamma-\alpha) & \cos(\gamma-\alpha) \\ 0 & \cos(\gamma-\alpha) & -\sin(\gamma-\alpha) \\ -1 & 0 & 0 \end{pmatrix}$$

and the rotation by γ can be cancelled by taking α = γ. That means that for each possible angle θ, all triples (α, β = π/2, γ = α + θ) correspond to the same rotation matrix ⇒ for each possible angle θ, there is a flat valley of axis (α, β = π/2, γ = α + θ) in the energy to be minimized.

24 Gimbal Lock. When β = π/2,

$$R = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{pmatrix}$$

$$= \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{pmatrix}$$

$$= \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & \sin\gamma & \cos\gamma \\ 0 & \cos\gamma & -\sin\gamma \\ -1 & 0 & 0 \end{pmatrix}$$

$$= \begin{pmatrix} 0 & \cos\alpha\sin\gamma - \sin\alpha\cos\gamma & \cos\alpha\cos\gamma + \sin\alpha\sin\gamma \\ 0 & \sin\alpha\sin\gamma + \cos\alpha\cos\gamma & \sin\alpha\cos\gamma - \cos\alpha\sin\gamma \\ -1 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & \sin(\gamma-\alpha) & \cos(\gamma-\alpha) \\ 0 & \cos(\gamma-\alpha) & -\sin(\gamma-\alpha) \\ -1 & 0 & 0 \end{pmatrix}$$

25 A Unit Quaternion. Quaternions are hyper-complex numbers that can be written as the linear combination a + bi + cj + dk, with i² = j² = k² = ijk = -1. A quaternion can also be interpreted as a scalar plus a 3-vector: (a, v). A rotation about the unit vector w by an angle θ can be represented by the unit quaternion:

$$q = \left( \cos\frac{\theta}{2},\; w \sin\frac{\theta}{2} \right)$$

[Figure: rotation axis w and angle θ in the (X, Y, Z) frame]

26 A Unit Quaternion. To rotate a 3D point M: write it as a quaternion p = (0, M), and take the rotated point p' to be

$$p' = q \cdot p \cdot \bar q \quad \text{with} \quad \bar q = \left( \cos\frac{\theta}{2},\; -w \sin\frac{\theta}{2} \right)$$

No gimbal lock. Parameterization of the rotation using the 4 coordinates of a quaternion q:
1. No constraint, the rotation being performed using q / ||q||; singularity: kq yields the same rotation whatever the value of k > 0;
2. Additional constraint: the norm of q must be constrained to be equal to 1, for example by adding the quadratic term K(1 - ||q||²).
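A sketch of the quaternion rotation p' = q p q̄ (Hamilton product, scalar-first convention; helper names are illustrative):

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of quaternions stored as [a, x, y, z] (scalar first)."""
    a1, v1 = q1[0], q1[1:]
    a2, v2 = q2[0], q2[1:]
    return np.concatenate(([a1 * a2 - v1 @ v2],
                           a1 * v2 + a2 * v1 + np.cross(v1, v2)))

def rotate_point(M, w, theta):
    """Rotate 3D point M about the unit axis w by angle theta via p' = q p q-bar."""
    q = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * w))
    q_bar = np.concatenate(([q[0]], -q[1:]))    # conjugate of the unit quaternion
    p = np.concatenate(([0.0], M))              # M written as the quaternion (0, M)
    return quat_mul(quat_mul(q, p), q_bar)[1:]  # drop the (zero) scalar part
```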

27 Exponential Maps. No gimbal lock; no additional constraints; singularities occur in a region that can easily be avoided. Parameterization by a 3D vector w = [w_1, w_2, w_3]ᵀ: rotation around the axis of direction w, by an angle equal to ||w||. [Figure: rotation axis w in the (X, Y, Z) frame]

28 Rodrigues' Formula. The rotation matrix is given by:

$$\Omega = \begin{pmatrix} 0 & -w_z & w_y \\ w_z & 0 & -w_x \\ -w_y & w_x & 0 \end{pmatrix}$$

$$R(\Omega) = \exp(\Omega) = I + \Omega + \frac{1}{2!}\Omega^2 + \frac{1}{3!}\Omega^3 + \dots = I + \frac{\sin\theta}{\theta}\,\Omega + \frac{1-\cos\theta}{\theta^2}\,\Omega^2 \quad \text{(Rodrigues' formula)}$$

with θ = ||w||. Not singular for small values of θ even if we divide by θ (see the Taylor expansions).
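A direct implementation (this also provides the `rodrigues` helper assumed in the residual code earlier; the small-angle branch uses the Taylor expansions mentioned on the slide):

```python
import numpy as np

def rodrigues(w, eps=1e-8):
    """Exponential map: 3-vector w -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    Omega = np.array([[0.0, -w[2], w[1]],
                      [w[2], 0.0, -w[0]],
                      [-w[1], w[0], 0.0]])
    if theta < eps:
        # Taylor expansions: sin(t)/t -> 1 and (1 - cos t)/t^2 -> 1/2
        return np.eye(3) + Omega + 0.5 * Omega @ Omega
    return (np.eye(3) + (np.sin(theta) / theta) * Omega
            + ((1.0 - np.cos(theta)) / theta**2) * Omega @ Omega)
```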

29 The Singularities of Exponential Maps. Rotation around the axis of direction w, by an angle equal to ||w|| → singularities for w such that ||w|| = 2nπ: no rotation, whatever the direction of w. Avoided during optimization as follows: when ||w|| becomes close to 2π, say larger than π, w can be replaced by

$$\left( 1 - \frac{2\pi}{\|w\|} \right) w$$

which represents the same rotation. [From Grassia JGT98]
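This reparameterization trick in code (same rotation, but the norm is pulled back below π, away from the 2π singularity):

```python
import numpy as np

def reparameterize(w):
    """Replace w by (1 - 2*pi/||w||) w when ||w|| > pi [Grassia JGT98]."""
    theta = np.linalg.norm(w)
    if theta > np.pi:
        # A rotation of theta about w equals a rotation of 2*pi - theta
        # about -w, so this vector encodes the same rotation matrix.
        return (1.0 - 2.0 * np.pi / theta) * w
    return w
```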

30 Parameterization of the Rotation Matrix Conclusion: Use exponential maps. More details in [Grassia JGT98]

31 Computing J. We need to compute J, the Jacobian of f:

$$f(p) = \begin{pmatrix} u\!\left(\mathrm{Proj}_{A,R(p),T(p)}(M_1)\right) \\ v\!\left(\mathrm{Proj}_{A,R(p),T(p)}(M_1)\right) \\ \vdots \end{pmatrix}$$

Solution 1: use Maple or Matlab to produce the analytical form, AND the code. Solution 2: do it the old way...

32 Solution 2. First, decompose f:

$$f(p) = \begin{pmatrix} u\!\left(\mathrm{Proj}_{A,R(p),T(p)}(M_1)\right) \\ v\!\left(\mathrm{Proj}_{A,R(p),T(p)}(M_1)\right) \\ \vdots \end{pmatrix} = \begin{pmatrix} f_{M_1}(p) \\ \vdots \end{pmatrix} \quad \text{with} \quad f_{M_1}(p) = m\!\left( \tilde m\!\left( M^{\text{cam}}_{M_1}(p) \right) \right)$$

where: M^{cam}_{M_1}(p) returns M_1 in the camera coordinate system defined by p; m̃(M^{cam}_{M_1}) returns the projection of M^{cam}_{M_1} in homogeneous coordinates; m(m̃) returns the 2D vector corresponding to m̃.

33 Chain Rule

$$J = \frac{\partial f}{\partial p} = \begin{pmatrix} \dfrac{\partial f_{M_1}}{\partial p} \\ \vdots \end{pmatrix} \quad \text{with} \quad \underbrace{\frac{\partial f_{M_1}}{\partial p}}_{2 \times 6} = \underbrace{\frac{\partial m}{\partial \tilde m}}_{2 \times 3} \; \underbrace{\frac{\partial \tilde m}{\partial M^{\text{cam}}}}_{3 \times 3} \; \underbrace{\frac{\partial M^{\text{cam}}_{M_1}}{\partial p}}_{3 \times 6}$$

34

$$m(\tilde m) = \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} U/W \\ V/W \end{pmatrix} \quad \text{with} \quad \tilde m = \begin{pmatrix} U \\ V \\ W \end{pmatrix}$$

$$\frac{\partial m}{\partial \tilde m} = \begin{pmatrix} \dfrac{\partial u}{\partial U} & \dfrac{\partial u}{\partial V} & \dfrac{\partial u}{\partial W} \\ \dfrac{\partial v}{\partial U} & \dfrac{\partial v}{\partial V} & \dfrac{\partial v}{\partial W} \end{pmatrix} = \begin{pmatrix} 1/W & 0 & -U/W^2 \\ 0 & 1/W & -V/W^2 \end{pmatrix}$$

35

$$\tilde m\!\left( M^{\text{cam}}_{M_1} \right) = A \, M^{\text{cam}}_{M_1} \quad\Rightarrow\quad \frac{\partial \tilde m}{\partial M^{\text{cam}}} = A$$

36

$$M^{\text{cam}}_{M_1}(p) = R(p) M_1 + T$$

With $p = [w_x, w_y, w_z, t_x, t_y, t_z]^\top$ and $T = [t_x, t_y, t_z]^\top$:

$$\frac{\partial M^{\text{cam}}_{M_1}}{\partial p} = \begin{pmatrix} \dfrac{\partial M^{\text{cam}}_{M_1}}{\partial w_x} & \dfrac{\partial M^{\text{cam}}_{M_1}}{\partial w_y} & \dfrac{\partial M^{\text{cam}}_{M_1}}{\partial w_z} & \dfrac{\partial M^{\text{cam}}_{M_1}}{\partial t_x} & \dfrac{\partial M^{\text{cam}}_{M_1}}{\partial t_y} & \dfrac{\partial M^{\text{cam}}_{M_1}}{\partial t_z} \end{pmatrix} = \begin{pmatrix} \dfrac{\partial R}{\partial w_x} M_1 & \dfrac{\partial R}{\partial w_y} M_1 & \dfrac{\partial R}{\partial w_z} M_1 & \begin{pmatrix}1\\0\\0\end{pmatrix} & \begin{pmatrix}0\\1\\0\end{pmatrix} & \begin{pmatrix}0\\0\\1\end{pmatrix} \end{pmatrix}$$

The 3×3 matrices ∂R/∂w_x, ∂R/∂w_y, ∂R/∂w_z can be computed from the expansion of R. In C, one can use the cvRodrigues function from the OpenCV library.
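Putting the three factors together for one point (a sketch; `dR_dw(w, k)` is a hypothetical helper returning ∂R/∂w_k, which can for instance be read off the Jacobian that OpenCV's Rodrigues function also returns):

```python
import numpy as np

def jacobian_one_point(p, A, M, rodrigues, dR_dw):
    """2x6 Jacobian block (dm/dmtilde)(dmtilde/dMcam)(dMcam/dp) for one point M."""
    w, T = p[:3], p[3:]
    M_cam = rodrigues(w) @ M + T
    # dMcam/dp = [ (dR/dw_x) M | (dR/dw_y) M | (dR/dw_z) M | I ],  3x6
    dMcam_dp = np.hstack([np.column_stack([dR_dw(w, k) @ M for k in range(3)]),
                          np.eye(3)])
    # dmtilde/dMcam = A, since the homogeneous projection mtilde = A Mcam is linear
    U, V, W = A @ M_cam
    # dm/dmtilde for the dehomogenization m = (U/W, V/W),  2x3
    dm_dmtilde = np.array([[1.0 / W, 0.0, -U / W**2],
                           [0.0, 1.0 / W, -V / W**2]])
    return dm_dmtilde @ A @ dMcam_dp
```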

37 Robust Estimation

38 What if there are Outliers? [Figure: same correspondences as before, but one m_i is an incorrect measure]

39 Gaussian Noise on the Projections + 20% outliers White cross: true camera position; Black cross: global minimum of the objective function. The global minimum is now far from the true camera pose.

40 What Happened? Bayesian interpretation:

$$\arg\min_p \sum_i \left\| \mathrm{Proj}_{A,R(p),T(p)}(M_i) - m_i \right\|^2 = \arg\max_p \prod_i N\!\left( \mathrm{Proj}_{A,R(p),T(p)}(M_i);\; m_i, \sigma I \right)$$

The error on the 2D point locations m_i is assumed to have a Gaussian (Normal) distribution with identical covariance matrices σI, and the errors are assumed independent. This assumption is violated when m_i is an outlier. [Figure: correspondences with one outlier]

41 (the two equivalent formulations)

$$\begin{aligned} &\arg\min_p \sum_i \left\| \mathrm{Proj}_{A,R(p),T(p)}(M_i) - m_i \right\|^2 \\ =\; &\arg\min_p \sum_i \left( \mathrm{Proj}_{A,R(p),T(p)}(M_i) - m_i \right)^\top \left( \mathrm{Proj}_{A,R(p),T(p)}(M_i) - m_i \right) \\ =\; &\arg\min_p \sum_i \left( \mathrm{Proj}_{A,R(p),T(p)}(M_i) - m_i \right)^\top \begin{pmatrix} 1/\sigma & 0 \\ 0 & 1/\sigma \end{pmatrix} \left( \mathrm{Proj}_{A,R(p),T(p)}(M_i) - m_i \right) \\ =\; &\arg\min_p \sum_i \left( \mathrm{Proj}_{A,R(p),T(p)}(M_i) - m_i \right)^\top \Sigma_m^{-1} \left( \mathrm{Proj}_{A,R(p),T(p)}(M_i) - m_i \right) \quad \text{with } \Sigma_m = \begin{pmatrix} \sigma & 0 \\ 0 & \sigma \end{pmatrix} \\ =\; &\arg\max_p \prod_i \frac{1}{2\pi \left| \Sigma_m \right|^{1/2}} \exp\!\left( -\tfrac{1}{2} \left( \mathrm{Proj}_{A,R(p),T(p)}(M_i) - m_i \right)^\top \Sigma_m^{-1} \left( \mathrm{Proj}_{A,R(p),T(p)}(M_i) - m_i \right) \right) \\ =\; &\arg\max_p \prod_i N\!\left( \mathrm{Proj}_{A,R(p),T(p)}(M_i);\; m_i, \Sigma_m \right) \end{aligned}$$

42 Robust Estimation. Idea: replace the Normal distribution by a more suitable distribution or, equivalently, replace the least-squares estimator by a "robust estimator":

$$\min_p \sum_i \left\| \mathrm{Proj}_{A,R(p),T(p)}(M_i) - m_i \right\|^2 \quad\longrightarrow\quad \min_p \sum_i \rho\!\left( \left\| \mathrm{Proj}_{A,R(p),T(p)}(M_i) - m_i \right\| \right)$$

43 Example of an M-Estimator: The Tukey Estimator

$$\rho(x) = \begin{cases} \dfrac{c^2}{6} \left[ 1 - \left( 1 - \left( \dfrac{x}{c} \right)^2 \right)^3 \right] & \text{if } |x| \le c \\[2ex] \dfrac{c^2}{6} & \text{if } |x| > c \end{cases}$$

The Tukey estimator assumes the measures follow a distribution that is a mixture of: a Normal distribution, for the inliers; a uniform distribution, for the outliers. [Figure: ρ(x) compared with the least-squares cost x²]
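The standard Tukey biweight in code (the c²/6 normalization is the common textbook form, assumed here since the constants are hard to read off the slide):

```python
import numpy as np

def tukey_rho(x, c):
    """Tukey's rho: quadratic-like near 0, constant (= c^2/6) beyond the
    cutoff c, so gross outliers stop influencing the minimization."""
    x = np.abs(np.asarray(x, dtype=float))
    rho = np.full_like(x, c**2 / 6.0)    # |x| > c: constant plateau
    inliers = x <= c
    rho[inliers] = (c**2 / 6.0) * (1.0 - (1.0 - (x[inliers] / c) ** 2) ** 3)
    return rho
```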

44 [Figure: a Normal distribution (inliers) plus a uniform distribution (outliers) gives the mixture; applying -log(·) to the Normal alone yields the least-squares cost, applying -log(·) to the mixture yields the Tukey estimator]

45 The Tukey Estimator in Levenberg-Marquardt Optimization. Use the following approximation:

$$\rho(x) \approx \begin{cases} \tfrac{1}{2} x^2 & \text{(least squares)} \quad \text{if } |x| \le c \\ \tfrac{1}{2} c^2 & \text{(constant)} \quad \text{if } |x| > c \end{cases}$$
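Inside an LM iteration, this approximation amounts to weighting each correspondence: inliers keep their least-squares influence, while residuals beyond c get weight 0 (constant cost, zero gradient). A sketch of such weights:

```python
import numpy as np

def tukey_lm_weights(residual_norms, c):
    """Per-correspondence weights implementing the slide's approximation:
    1 for |x| <= c (plain least squares), 0 for |x| > c (ignored outlier).
    Each residual r_i and Jacobian block J_i would be scaled by its weight
    before forming J^T J and J^T eps in the Levenberg-Marquardt update."""
    return (np.abs(residual_norms) <= c).astype(float)
```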

46 Other M-Estimators

47 Drawbacks of the Tukey Estimator. - Non-convex → creates local minima; - the function becomes flat when too far from the global minimum.

48 Gaussian Noise on the Projections + 20% Outliers + Tukey Estimator. White cross: true camera position; black cross: global minimum of the objective function. The global minimum is very close to the true camera pose. BUT: - there are local minima; - the objective function is flat where all the correspondences are considered outliers.

49 Gaussian Noise on the Projections + 50% Outliers + Tukey Estimator. Even more local minima. Numerical optimization can get trapped in a local minimum.

50 RANSAC

51 How to Optimize? Idea: sampling the space of solutions (the camera pose space here):

52 How to Optimize? Idea: sampling the space of solutions + numerical optimization from the best sampled pose. Problem: exhaustive regular sampling is too expensive in 6 dimensions. Can we do a smarter sampling?

53 RANSAC: RANdom SAmple Consensus. Line fitting: the "throwing out the worst residual" heuristic can fail (example from the original paper [Fischler81]). [Figure: a single outlier pulls the final least-squares solution away from the ideal line]

54 RANSAC. As before, we could do a regular sampling, but it would not be optimal. [Figure: ideal line]

55 Idea: generate hypotheses from subsets of the measurements. If a subset contains no gross errors, the estimated parameters (the hypothesis) are close to the true ones. Take several subsets at random, and retain the best one. [Figure: ideal line]

56 The quality of a hypothesis is evaluated by the number of measures that lie "close enough" to the predicted line. We need to choose a threshold T to decide if a measure is "close enough":

$$\text{score}(p) = \sum_i \begin{cases} 1 & \text{if } \mathrm{dist}\!\left( m_i, \mathrm{line}(p) \right) \le T \\ 0 & \text{if } \mathrm{dist}\!\left( m_i, \mathrm{line}(p) \right) > T \end{cases}$$

RANSAC returns the best hypothesis, i.e. the hypothesis with the largest number of inliers (see the sketch below).
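A minimal RANSAC for the line-fitting example (hypotheses from random 2-point subsets, scored by inlier count; parameter names are illustrative):

```python
import numpy as np

def ransac_line(points, n_trials=100, T=2.0, seed=0):
    """Fit a line to Nx2 `points`: each trial draws 2 points, forms the line
    through them, and counts the measures within distance T of that line."""
    rng = np.random.default_rng(seed)
    best_line, best_score = None, -1
    for _ in range(n_trials):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = p2 - p1
        n = np.array([-d[1], d[0]])        # normal to the hypothesized line
        if np.linalg.norm(n) < 1e-12:
            continue                        # degenerate subset, draw again
        n /= np.linalg.norm(n)
        dists = np.abs((points - p1) @ n)   # point-to-line distances
        score = np.sum(dists <= T)          # number of inliers
        if score > best_score:
            best_score, best_line = score, (p1, n)
    return best_line, best_score
```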

57 Required Number N of Trials. To ensure with a probability p that at least one of the samples is free from outliers, take:

$$N > \frac{\log(1-p)}{\log\!\left( 1 - (1-w)^n \right)}$$

where w is the proportion of outliers and n is the number of measures in a subset. Unfortunately, N increases exponentially with w, the proportion of outliers...
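The formula is a one-liner; the example values below illustrate how fast N grows with the outlier ratio w (n = 3 as for a P3P-based sampler):

```python
import numpy as np

def n_trials(p=0.99, w=0.3, n=3):
    """Smallest integer N with N > log(1-p) / log(1 - (1-w)^n)."""
    return int(np.ceil(np.log(1.0 - p) / np.log(1.0 - (1.0 - w) ** n)))

print(n_trials(w=0.3))   # 11 trials for 30% outliers
print(n_trials(w=0.7))   # 169 trials for 70% outliers
```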

58 Pose Estimation. To apply RANSAC to pose estimation, we need a way to compute a camera pose from a subset of measurements, for example a P3P algorithm. Since RANSAC only provides a solution estimated from a limited number of measurements, it must be followed by a robust minimization to refine the solution.

59 RANSAC. What happens when too many measurements are outliers? RANSAC needs a large number of trials to find a subset without any outlier.

60 PROSAC [Chum & Matas05] In some applications, we have a measure of confidence for each measurement, for example a correlation measure.

61 PROSAC [Chum & Matas05]. Algorithm: 1. Sort the measurements according to their confidence. 2. First trial: subset taken to be the first n measurements. 3. Second trial: subset taken at random among the first n+1 measurements...

62 PROSAC. Starts by testing the most promising hypotheses; converges to uniform sampling as in RANSAC. → PROSAC tests the same hypotheses as RANSAC, but in a different order.

63 Other Variants of RANSAC. MLESAC [Torr & Zisserman CVIU'00]: the quality of a hypothesis is measured using a likelihood function, not the number of inliers. The T(d,d) test [Matas & Chum, IVC'04]: first evaluate a hypothesis only on a random small subset of measures (1 measure is actually sufficient); if all the measures in the subset are inliers, evaluate the hypothesis on all the measures. Preemptive RANSAC [Nistér, ICCV'03]: all the hypotheses are generated first, and then evaluated in parallel on a subset of measures; only the best hypotheses are kept and evaluated on a next subset; the evaluation can be interrupted at any time. A good review: "A Comparative Analysis of RANSAC Techniques Leading to Adaptive Real-Time Random Sample Consensus", Rahul Raguram, Jan-Michael Frahm, and Marc Pollefeys, ECCV'08.
