Linear and Nonlinear Optimization German University in Cairo October 10, 2016
Outline Introduction Gradient descent method Gauss-Newton method Levenberg-Marquardt method Case study: Straight lines have to be straight
Introduction
Optimization can be used to compensate for the radial distortion in this image.
Figure: Image subjected to a radial distortion.
Introduction
In fitting a function f(x) of the model parameters x to a set of measured data points f, it is convenient to minimize the weighted sum of squared errors between the measured data f and the curve-fit function f(x):

e^2(x) = \frac{1}{2} \sum_{i=1}^{m} \left( \frac{f_i - f_i(x)}{w_i} \right)^2   (1)
       = \frac{1}{2} (f - f(x))^T W (f - f(x))   (2)
       = \frac{1}{2} f^T W f - f^T W f(x) + \frac{1}{2} f(x)^T W f(x),   (3)

where W is a diagonal weighting matrix with W_{ii} = 1/w_i^2.
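The objective in (1)-(3) can be sketched in a few lines of NumPy. This is a minimal illustration; the function name `weighted_sse` and the toy data are illustrative, not from the slides.

```python
import numpy as np

def weighted_sse(f_meas, f_model, w):
    """Weighted sum-of-squares error e^2(x) = 1/2 (f - f(x))^T W (f - f(x)),
    with W = diag(1/w_i^2), as in eqs. (1)-(3)."""
    r = f_meas - f_model          # residual vector f - f(x)
    W = 1.0 / w**2                # diagonal of the weighting matrix
    return 0.5 * np.sum(W * r**2)

# toy example: three data points with unit weights
f_meas = np.array([1.0, 2.0, 3.0])
f_model = np.array([1.1, 1.9, 3.2])
w = np.ones(3)
e2 = weighted_sse(f_meas, f_model, w)
```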
Introduction
The function f is nonlinear in the model parameters x. Therefore, the minimization of e^2 with respect to the parameters x must be done iteratively. The goal of each iteration is to find a perturbation h to the parameters x that reduces e^2. We consider three methods: the gradient descent method, the Gauss-Newton method, and the Levenberg-Marquardt method.
Gradient Descent Method
The steepest descent method is a general minimization method which updates parameter values in the direction opposite to the gradient of the objective function. The gradient of e^2 with respect to the parameters is

\frac{\partial e^2(x)}{\partial x} = (f - f(x))^T W \frac{\partial}{\partial x} (f - f(x))   (4)
  = -(f - f(x))^T W \left[ \frac{\partial f(x)}{\partial x} \right]   (5)
  = -(f - f(x))^T W J.   (6)

The perturbation h that moves the parameters in the direction of steepest descent is given by

h_{gd} = \alpha J^T W (f - f(x)),   (7)

where \alpha is a positive scalar that determines the length of the step in the steepest descent direction.
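Equation (7) can be sketched directly with NumPy. The function name `gd_step` and the toy Jacobian below are illustrative; `W` holds the diagonal of the weighting matrix and `r` the residual f - f(x).

```python
import numpy as np

def gd_step(J, W, r, alpha):
    """Steepest-descent perturbation h_gd = alpha * J^T W (f - f(x)), eq. (7).
    J: m x n Jacobian, W: diagonal weights (length m), r: residual f - f(x)."""
    return alpha * J.T @ (W * r)

# tiny example: identity Jacobian, unit weights
h = gd_step(np.eye(2), np.ones(2), np.array([1.0, 2.0]), alpha=0.5)
```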
Gauss-Newton method
The Gauss-Newton method is a method of minimizing a sum-of-squares objective function. It presumes that the objective function is approximately quadratic in the parameters near the optimal solution. The function evaluated with perturbed model parameters may be locally approximated through a first-order Taylor series expansion:

f(x + h) \approx f(x) + \left[ \frac{\partial f(x)}{\partial x} \right] h   (8)
  = f(x) + Jh.   (9)

Substituting the approximation for the perturbed function into (1) yields

e^2(x + h) \approx \frac{1}{2} f^T W f + \frac{1}{2} f(x)^T W f(x) - f^T W f(x) - (f - f(x))^T W J h + \frac{1}{2} h^T J^T W J h.   (10)
Gauss-Newton method
The perturbation h that minimizes e^2(x + h) follows from \frac{\partial e^2(x + h)}{\partial h} = 0:

\frac{\partial}{\partial h} e^2(x + h) = -(f - f(x))^T W J + h^T J^T W J,   (11)

and setting this to zero gives the normal equations for the Gauss-Newton perturbation:

[J^T W J] h_{gn} = J^T W (f - f(x)).   (12)
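Solving the normal equations (12) is one linear solve. A minimal NumPy sketch (the name `gn_step` and the toy data are illustrative): for a model that is exactly linear, f(x) = Jx, a single Gauss-Newton step from x = 0 lands on the least-squares solution.

```python
import numpy as np

def gn_step(J, W, r):
    """Solve the Gauss-Newton normal equations [J^T W J] h_gn = J^T W r,
    eq. (12), where r = f - f(x) and W is the diagonal of the weights."""
    JTW = J.T * W                      # J^T W, exploiting the diagonal W
    return np.linalg.solve(JTW @ J, JTW @ r)

# linear model f(x) = J x, starting from x = 0 so that r = f
J = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
h = gn_step(J, np.ones(3), np.array([1.0, 2.0, 3.0]))
```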
Levenberg-Marquardt Method
The Levenberg-Marquardt algorithm adaptively varies the parameter updates between the gradient descent and Gauss-Newton updates,

[J^T W J + \lambda I] h_{lm} = J^T W (f - f(x)),   (13)

where small values of the parameter \lambda result in a Gauss-Newton update and large values of \lambda result in a gradient descent update. Marquardt's variant scales each component of the update according to the local curvature:

[J^T W J + \lambda\, \mathrm{diag}(J^T W J)] h_{lm} = J^T W (f - f(x)).   (14)
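Both damped systems (13) and (14) differ from the Gauss-Newton normal equations only in the matrix on the left. A minimal NumPy sketch (the name `lm_step` and the toy data are illustrative): as lambda goes to zero the step reduces to the Gauss-Newton step, and for large lambda it shrinks toward a small gradient-descent step.

```python
import numpy as np

def lm_step(J, W, r, lam, marquardt=True):
    """One Levenberg-Marquardt perturbation h_lm.
    marquardt=False: [J^T W J + lam*I] h = J^T W r              (eq. 13)
    marquardt=True:  [J^T W J + lam*diag(J^T W J)] h = J^T W r  (eq. 14)"""
    A = (J.T * W) @ J
    D = np.diag(np.diag(A)) if marquardt else np.eye(A.shape[0])
    return np.linalg.solve(A + lam * D, J.T @ (W * r))

J = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
W = np.ones(3)
r = np.array([1.0, 2.0, 3.0])
h_small = lm_step(J, W, r, lam=0.0)                   # Gauss-Newton limit
h_large = lm_step(J, W, r, lam=1e6, marquardt=False)  # tiny gradient step
```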
Levenberg-Marquardt Method
In each iteration, calculate the Jacobian matrix J and the candidate step h.
If e^2(x) - e^2(x + h) > h^T (\lambda h + J^T W (f - f(x))), then x + h is sufficiently better than x: accept the step and reduce \lambda by a factor of ten.
Otherwise, x + h is not sufficiently better than x: reject the step and increase \lambda by a factor of ten.
Convergence is achieved if \max(|J^T W (f - f(x))|) < t, where t denotes a threshold.
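The update rules above can be sketched as a complete fitting loop. This is a minimal NumPy illustration under stated assumptions: a simplified two-parameter exponential model (a reduced version of eq. (15)), unit weights, and a plain-decrease acceptance test in place of the slides' full criterion; the names `lm_fit`, `model`, and `jacobian` are illustrative. The convergence test matches the slides: max|J^T W (f - f(x))| below a threshold.

```python
import numpy as np

def model(x, t):
    # illustrative model f(x) = x1 * exp(-t / x2)
    return x[0] * np.exp(-t / x[1])

def jacobian(x, t):
    # analytic Jacobian df/dx, one row per data point
    J = np.empty((t.size, 2))
    J[:, 0] = np.exp(-t / x[1])
    J[:, 1] = x[0] * t / x[1]**2 * np.exp(-t / x[1])
    return J

def lm_fit(f_meas, t, x0, lam=1e-2, tol=1e-9, max_iter=200):
    """Levenberg-Marquardt loop with the factor-of-ten lambda schedule.
    Acceptance is a plain decrease in e^2 (a simplification of the
    slides' criterion); convergence: max|J^T W (f - f(x))| < tol."""
    x = np.asarray(x0, dtype=float)
    W = np.ones_like(f_meas)               # unit weights for this sketch
    for _ in range(max_iter):
        r = f_meas - model(x, t)
        J = jacobian(x, t)
        g = J.T @ (W * r)                  # J^T W (f - f(x))
        if np.max(np.abs(g)) < tol:
            break                          # converged
        A = (J.T * W) @ J
        h = np.linalg.solve(A + lam * np.diag(np.diag(A)), g)
        r_new = f_meas - model(x + h, t)
        if np.sum(W * r**2) - np.sum(W * r_new**2) > 0:
            x, lam = x + h, lam / 10.0     # accept: toward Gauss-Newton
        else:
            lam *= 10.0                    # reject: toward gradient descent
    return x

# noiseless synthetic data: the fit should recover the true parameters
t = np.linspace(0.1, 5.0, 50)
x_true = np.array([2.0, 1.5])
x_hat = lm_fit(model(x_true, t), t, x0=[1.0, 1.0])
```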
Levenberg-Marquardt Method
The following functions can be used in fitting a set of measured data and finding the minimum of the function e^2(x):

f(x) = x_1 \exp(-t/x_2) + x_3 \sin(t/x_4)   (15)
f(x) = \left( x_1 t_1^{x_2} + (1 - x_1) t_2^{x_2} \right)^{1/x_2}   (16)
f(x) = x_1 (t/\max(t)) + x_2 (t/\max(t))^2 + x_3 (t/\max(t))^3 + x_4 (t/\max(t))^4   (17)
Case study: Straight lines have to be straight Figure: Data are collected from a radially distorted image.
Case study: Straight lines have to be straight
The lens distortion model can be written as an infinite series:

x_u = x_d (1 + k_1 r_d^2 + k_2 r_d^4 + \cdots)   (18)
y_u = y_d (1 + k_1 r_d^2 + k_2 r_d^4 + \cdots),   (19)

where x_u and y_u are the undistorted coordinates, whereas x_d and y_d are the distorted coordinates. Further, k_1 and k_2 are the radial distortion parameters. The distorted radius is given by

r_d = \sqrt{x_d^2 + y_d^2}.   (20)

Figure: Radial and tangential distortions.
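Truncated after the k_2 term, the model of eqs. (18)-(20) maps distorted to undistorted coordinates in a couple of lines. A minimal NumPy sketch (the name `undistort` is illustrative):

```python
import numpy as np

def undistort(xd, yd, k1, k2):
    """Map distorted coordinates (x_d, y_d) to undistorted (x_u, y_u)
    using the two-term radial model of eqs. (18)-(20)."""
    rd2 = xd**2 + yd**2                 # r_d^2, from eq. (20)
    scale = 1.0 + k1 * rd2 + k2 * rd2**2
    return xd * scale, yd * scale

# a point at radius 1 moves outward by the factor 1 + k1
xu, yu = undistort(1.0, 0.0, k1=0.1, k2=0.0)
```

With k_1 = k_2 = 0 the mapping is the identity, so the zero-distortion case is a quick sanity check.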
Case study: Straight lines have to be straight
The distortion error of each edge segment is given by

e^2 = a \sin^2\phi - 2b \sin\phi \cos\phi + c \cos^2\phi,   (21)

where

a = \sum_{j=1}^{n} x_j^2 - \frac{1}{n} \left( \sum_{j=1}^{n} x_j \right)^2,   (22)
b = \sum_{j=1}^{n} x_j y_j - \frac{1}{n} \left( \sum_{j=1}^{n} x_j \right) \left( \sum_{j=1}^{n} y_j \right),   (23)
c = \sum_{j=1}^{n} y_j^2 - \frac{1}{n} \left( \sum_{j=1}^{n} y_j \right)^2.   (24)

Figure: The distortion error is the sum of squares of the distances from the edgels to the least-squares fit line.
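Eqs. (21)-(24) can be evaluated directly: the angle phi that minimizes the quadratic form in (21) is the orientation of the least-squares line through the edgels' centroid, and the minimized value is the segment's distortion error. A minimal NumPy sketch (the name `segment_error` is illustrative):

```python
import numpy as np

def segment_error(x, y):
    """Sum of squared distances from edgels to their least-squares line,
    via eqs. (21)-(24). The minimizing angle satisfies tan(2*phi) = 2b/(a-c)."""
    n = len(x)
    a = np.sum(x**2) - np.sum(x)**2 / n
    b = np.sum(x * y) - np.sum(x) * np.sum(y) / n
    c = np.sum(y**2) - np.sum(y)**2 / n
    phi = 0.5 * np.arctan2(2 * b, a - c)   # angle minimizing eq. (21)
    return a*np.sin(phi)**2 - 2*b*np.sin(phi)*np.cos(phi) + c*np.cos(phi)**2

# collinear edgels give zero error; a bent segment gives a positive error
err_line = segment_error(np.array([0.0, 1.0, 2.0]), np.array([0.0, 1.0, 2.0]))
err_bent = segment_error(np.array([0.0, 1.0, 2.0]), np.array([0.0, 1.0, 0.0]))
```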
Case study: Straight lines have to be straight
Figure: Data are collected from a radially distorted image after edge detection.
Case study: Straight lines have to be straight Figure: Compensation of the radial distortion on the processed image.
Case study: Straight lines have to be straight Figure: Compensation of the radial distortion on the original image.
Thanks! Questions, please.