Generalized Newton-Type Method for Energy Formulations in Image Processing


Generalized Newton-Type Method for Energy Formulations in Image Processing
Leah Bar and Guillermo Sapiro
Department of Electrical and Computer Engineering, University of Minnesota

Outline
- Optimization of real functions: gradient descent, Newton method
- Trust-region methods
- Optimization in the variational framework: gradient descent, Newton method, generalized Newton method
- Numerical simulations
- Conclusions

Introduction
Optimization of a cost functional is a fundamental task in image processing and computer vision: segmentation, denoising, deblurring, registration, etc.

Problem Statement
minimize f(x), x ∈ ℝⁿ
- What is the best path?
- How do we avoid a maximum or a saddle point?
- Can we impose some preferences on the path?
NEW: a new optimization approach which incorporates knowledge/information.

Descent Methods
minimize f(x), x ∈ dom f ⊆ ℝⁿ
Given a starting point, repeat:
1. Compute a search direction d.
2. Line search: choose a step size t > 0.
3. Update x := x + t·d.
Until a stopping criterion is satisfied.
[Figure: level sets of f with a descent step d from f(x_k) to f(x_{k+1}).]
Gradient descent and Newton methods are the most widely used in practice.
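A minimal sketch of this generic loop (not from the talk), using steepest descent as the direction and a backtracking (Armijo) line search for the step size; f and grad_f are assumed callables:

import numpy as np

def descent(f, grad_f, x0, tol=1e-8, max_iter=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:      # stopping criterion
            break
        d = -g                           # 1. search direction
        t = 1.0                          # 2. backtracking line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d                    # 3. update
    return x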

Descent Methods: Gradient Descent Derivation
First-order Taylor approximation:
f(x + d) ≈ f(x) + ∇f(x)ᵀd,
where ∇f(x)ᵀd is the directional derivative.
Minimizing this model with respect to d is unbounded: the linear term can be made as negative as we want, so the size of d must be constrained.

Descent Methods: Gradient Descent Derivation (cont.)
Constrain the step with a quadratic norm ‖z‖_P = (zᵀPz)^{1/2}, P ∈ S₊₊ⁿ. Minimizing the first-order model over this norm ball gives
d = −P⁻¹∇f(x).
Newton step derivation: second-order Taylor approximation
f(x + d) ≈ f(x) + ∇f(x)ᵀd + ½ dᵀ∇²f(x) d,
minimized by the step satisfying ∇²f(x) d = −∇f(x), i.e. d = −∇²f(x)⁻¹∇f(x).
Quadratic convergence if m·I ⪯ ∇²f(x) ⪯ L·I and ‖∇²f(x) − ∇²f(y)‖ ≤ L‖x − y‖.
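The two directions just derived, as a small sketch; grad_f and hess_f are assumed callables for ∇f and ∇²f. The Newton step is simply the gradient step measured in the metric P = ∇²f(x):

import numpy as np

def gradient_direction(grad_f, x):
    return -grad_f(x)                    # d = -∇f(x), i.e. P = I

def newton_direction(grad_f, hess_f, x):
    # d = -∇²f(x)⁻¹ ∇f(x); solve the linear system rather than invert
    return np.linalg.solve(hess_f(x), -grad_f(x))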

Descent Methods: Trust Region
The problem with the Newton method is that the solution may be attracted to a local maximum or a saddle point if the Hessian is not positive definite.
Possible solution: the trust-region method. Basic concept:
- Define a trust-region set B = {x : ‖x − x_k‖ ≤ Δ}.
- Define a model m (e.g., a Taylor expansion) in the trust region.
- Compute a step d that sufficiently reduces the model: min m(x_k + d) s.t. x_k + d ∈ B.
- Accept the trial point if r := [f(x_k) − f(x_k + d)] / [m(x_k) − m(x_k + d)] > η, η ∈ (0, 0.25).
- Update the trust-region radius Δ: if r < 0.25 then decrease; if r > 0.75 then increase.
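A minimal sketch of this accept/adjust logic (my own simplification: the subproblem is "solved" by clipping the Newton step to the radius, rather than by a proper subproblem solver):

import numpy as np

def trust_region(f, grad_f, hess_f, x0, delta=1.0, eta=0.1, tol=1e-8, max_iter=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, B = grad_f(x), hess_f(x)
        if np.linalg.norm(g) < tol:
            break
        # model m(d) = f(x) + g.d + 0.5 d.B.d; crude subproblem solve:
        try:
            d = np.linalg.solve(B, -g)
        except np.linalg.LinAlgError:
            d = -g
        if np.linalg.norm(d) > delta:    # clip the step to the region
            d *= delta / np.linalg.norm(d)
        pred = -(g @ d + 0.5 * d @ B @ d)    # predicted decrease m(x) - m(x+d)
        r = (f(x) - f(x + d)) / pred if pred > 0 else -1.0
        if r > eta:                      # accept the trial point
            x = x + d
        if r < 0.25:                     # update the radius
            delta *= 0.5
        elif r > 0.75:
            delta *= 2.0
    return x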

Illustration of the Trust-Region Method
f(x₁, x₂) = 10x₁² + 10x₂² + 4·sin(x₁x₂) − 2x₁ + x₁⁴
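A usage example on this illustration function, with the gradient and Hessian derived by hand and fed to the trust_region sketch above (a worked check, not the authors' code):

import numpy as np

def f(x):
    return 10*x[0]**2 + 10*x[1]**2 + 4*np.sin(x[0]*x[1]) - 2*x[0] + x[0]**4

def grad_f(x):
    return np.array([20*x[0] + 4*x[1]*np.cos(x[0]*x[1]) - 2 + 4*x[0]**3,
                     20*x[1] + 4*x[0]*np.cos(x[0]*x[1])])

def hess_f(x):
    s, c = np.sin(x[0]*x[1]), np.cos(x[0]*x[1])
    return np.array([[20 - 4*x[1]**2*s + 12*x[0]**2, 4*c - 4*x[0]*x[1]*s],
                     [4*c - 4*x[0]*x[1]*s,           20 - 4*x[0]**2*s]])

x_star = trust_region(f, grad_f, hess_f, x0=[2.0, 1.0])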

Illustration of the Trust-Region Method
[Figure from Conn, Gould, Toint, Trust-Region Methods, 2000.]

Convergence Results
If f(x) is twice continuously differentiable, f(x) ≥ κ_lbf (bounded below), and ‖∇²f(x)‖ ≤ κ_ufh, then:
- the sequence f(x_k) is strictly decreasing;
- lim_k ‖∇f(x_k)‖ = 0, hence ∇f(x*) = 0;
- superlinear convergence (CG with trust region): lim_k ‖∇f(x_{k+1})‖ / ‖∇f(x_k)‖ = 0.
Sorensen, SIAM J. Numer. Anal., 1982
Moré and Sorensen, SIAM J. Sci. Stat. Comput., 1983
Steihaug, SIAM J. Numer. Anal., 1983
Conn, Gould, Toint, Trust-Region Methods, 2000

Truncated Conjugate Gradients Approach
Quadratic model: m(d) = f(x_k) + gᵀd + ½ dᵀBd, where g = ∇f(x_k) and B = ∇²f(x_k).

set d_0 = 0, r_0 = g, v_0 = −r_0; if ‖r_0‖ < ε, return d = d_0
for j = 0, 1, 2, ...
    if v_jᵀBv_j ≤ 0
        find τ such that d = d_j + τv_j minimizes m(d) and satisfies ‖d‖ = Δ; return d
    set α_j = r_jᵀr_j / v_jᵀBv_j
    set d_{j+1} = d_j + α_j v_j
    if ‖d_{j+1}‖ ≥ Δ
        find τ ≥ 0 such that d = d_j + τv_j satisfies ‖d‖ = Δ; return d
    set r_{j+1} = r_j + α_j Bv_j
    if ‖r_{j+1}‖ < ε‖r_0‖, return d = d_{j+1}
    set β_{j+1} = r_{j+1}ᵀr_{j+1} / r_jᵀr_j
    set v_{j+1} = −r_{j+1} + β_{j+1} v_j
end
Steihaug, SIAM J. Numer. Anal., 1983
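A sketch of this subproblem solver in Python, assuming B is available as a matrix-vector product; in the negative-curvature and boundary cases it takes the positive root of ‖d + τv‖ = Δ, a common simplification:

import numpy as np

def steihaug_cg(g, Bv, delta, eps=1e-6, max_iter=None):
    n = g.size
    max_iter = max_iter or n
    d, r, v = np.zeros(n), g.copy(), -g.copy()
    if np.linalg.norm(r) < eps:
        return d
    for _ in range(max_iter):
        Bvj = Bv(v)
        vBv = v @ Bvj
        if vBv <= 0:                          # negative curvature: stop on boundary
            return d + _tau_to_boundary(d, v, delta) * v
        alpha = (r @ r) / vBv
        d_next = d + alpha * v
        if np.linalg.norm(d_next) >= delta:   # step leaves the trust region
            return d + _tau_to_boundary(d, v, delta) * v
        r_next = r + alpha * Bvj
        if np.linalg.norm(r_next) < eps * np.linalg.norm(g):
            return d_next
        beta = (r_next @ r_next) / (r @ r)
        v = -r_next + beta * v
        d, r = d_next, r_next
    return d

def _tau_to_boundary(d, v, delta):
    # positive root of ||d + tau*v|| = delta
    a, b, c = v @ v, 2 * (d @ v), d @ d - delta**2
    return (-b + np.sqrt(b*b - 4*a*c)) / (2*a)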

So Far
- Descent methods for real functions (gradient descent, Newton).
- Trust-region methods for numerical stability.

What Next?
- Descent methods for real functions (gradient descent, Newton).
- Trust-region methods for numerical stability.
Can we go further?
- How can we modify and generalize optimization methods in the variational framework?
- Can we impose some knowledge by changing the metric of the model?

Optimization in the Variational Framework
E(f) = ∫ I(x, f(x), ∇f(x)) dx → min
Gradient descent: the descent direction ψ is the negative gradient with respect to a chosen inner product X,
ψ = −∇_X E(f) = argmin_{‖ψ‖_X = 1} δE(f, ψ).
In the classical gradient descent, X = L².
Generalized gradient descent method: a new inner product is defined by
⟨u, v⟩_L = ⟨Lu, v⟩_{L²},
where L is a symmetric, positive definite operator; the gradient then becomes
∇_L E(f) = L⁻¹ ∇_{L²} E(f).
- Prior on the deformation field in shape warping and tracking applications: Charpiat, Maurel, Pons, Keriven, Faugeras, IJCV 2007.
- Improved segmentation by Sobolev active contours: Sundaramoorthi, Yezzi, Mennucci, VLSM 2005, IJCV 2007.
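A minimal sketch of one generalized gradient step, under the assumption L = I − λΔ (a Sobolev-type choice of mine, not necessarily the operator used in the cited works); L⁻¹ is applied to the L² gradient in the Fourier domain with periodic boundaries:

import numpy as np

def sobolev_gradient(grad_l2, lam=10.0):
    # Solve (I - lam*Laplacian) h = grad_l2 for h by FFT diagonalization.
    n1, n2 = grad_l2.shape
    k1 = 2 * np.pi * np.fft.fftfreq(n1)[:, None]
    k2 = 2 * np.pi * np.fft.fftfreq(n2)[None, :]
    symbol = 1 + lam * (4 * np.sin(k1 / 2)**2 + 4 * np.sin(k2 / 2)**2)
    return np.real(np.fft.ifft2(np.fft.fft2(grad_l2) / symbol))

# Descent update: f <- f - dt * sobolev_gradient(grad_E_l2(f))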

Generalized Newton Step Derivation
E(f) = ∫ I(x, f(x), ∇f(x)) dx → min
Second-order model of the energy:
Q(ψ) = E(f) + ⟨∇_{L²}E(f), ψ⟩_{L²} + ½ ⟨Hessian E(f)·ψ, ψ⟩_{L²}
s.t. ‖ψ‖_{L²} ≤ Δ.
Is it good enough?

Geometric Active Contour
F(c_1, c_2, φ) = ∫ [ (u − c_1)² H(φ) + (u − c_2)² (1 − H(φ)) + g(|∇u|)·|∇H(φ)| ] dx
g(|∇u|) = 1 / (1 + |∇u|²)
φ: level set function, u: given image, c_1, c_2: scalars.
Caselles, Kimmel, Sapiro, IJCV 1997; Chan-Vese, IEEE TIP 2001.
[Figures: evolution by gradient descent and by Newton with trust region.]
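A minimal sketch of evaluating this functional on a discrete grid, with a smoothed Heaviside H_ε and numpy finite differences (the discretization choices are mine, not from the talk):

import numpy as np

def heaviside(phi, eps=1.0):
    # smoothed Heaviside H_eps(phi), a common level-set regularization
    return 0.5 * (1 + (2 / np.pi) * np.arctan(phi / eps))

def gac_energy(phi, u, c1, c2):
    H = heaviside(phi)
    ux, uy = np.gradient(u)
    g = 1.0 / (1.0 + ux**2 + uy**2)          # edge indicator g(|grad u|)
    Hx, Hy = np.gradient(H)
    integrand = ((u - c1)**2 * H + (u - c2)**2 * (1 - H)
                 + g * np.hypot(Hx, Hy))     # |grad H(phi)| term
    return integrand.sum()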

Newton Method with Trust Region
[Result figure.]

Generalized Newton Step Derivation
E(f) = ∫ I(x, f(x), ∇f(x)) dx → min
Q(ψ) = E(f) + ⟨∇_{L²}E(f), ψ⟩_{L²} + ½ ⟨Hessian E(f)·ψ, ψ⟩_{L²}
s.t. ‖ψ‖_L ≤ Δ,
i.e., the trust region is now measured in the generalized norm ‖·‖_L. This leads to the following PDE for the step:
(Hessian E(f) + λL) ψ = −∇_{L²}E(f),
and B = Hessian E(f) + λL (a self-adjoint operator) satisfies the convergence conditions!
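A minimal sketch of solving this PDE matrix-free with conjugate gradients, which applies since B is self-adjoint; hess_vec and L_vec are assumed callables for the operator-vector products (the names are mine, for illustration):

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def generalized_newton_direction(grad_E, hess_vec, L_vec, lam):
    n = grad_E.size
    B = LinearOperator((n, n), matvec=lambda v: hess_vec(v) + lam * L_vec(v))
    psi, info = cg(B, -grad_E)       # info == 0 on successful convergence
    return psi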

Generalized Newton Step Derivation
Given a starting point f, repeat:
1. Compute a search direction ψ by minimizing Q(ψ): solve the Euler-Lagrange equation by truncated CG with trust region.
2. Update f := f + ψ.
3. Accept/reject f; update Δ and λ.
Until a stopping criterion is satisfied.

The Second Variation
Besides the Euler-Lagrange equations, an additional necessary condition for a relative minimum is that the second variation is nonnegative:
δ²E(f, ψ) ≥ 0.
In the 2-D case (with entries I_{f_{x_i} f_{x_j}}, i, j ∈ {1, ..., N}):

δ²E(f, ψ) = ∫ (ψ, ψ_x, ψ_y) R (ψ, ψ_x, ψ_y)ᵀ dx,

R = [ I_{ff}     I_{f f_x}    I_{f f_y}
      I_{f_x f}  I_{f_x f_x}  I_{f_x f_y}
      I_{f_y f}  I_{f_y f_x}  I_{f_y f_y} ]

Theorem: a positive definite R(x) is a necessary condition for a relative minimum (strengthened Legendre condition).
The matrix R indicates the local convexity.
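A small numeric helper illustrating this check (not from the talk): positive definiteness of the pixelwise 3×3 matrices R(x) can be tested through their eigenvalues:

import numpy as np

def legendre_condition_holds(R, tol=0.0):
    # R: (..., 3, 3) array of symmetric matrices, one per pixel
    eigvals = np.linalg.eigvalsh(R)
    return np.all(eigvals > tol, axis=-1)   # True where R(x) is positive definite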

Geometric Active Contour
For F(c_1, c_2, φ) as above, the Hessian of the integrand with respect to (φ, φ_x, φ_y) has the form

R = [ g·δ″(φ)·|∇φ| + δ′(φ)[(u − c_1)² − (u − c_2)²]   g·δ′(φ)·φ_x/|∇φ|        g·δ′(φ)·φ_y/|∇φ|
      g·δ′(φ)·φ_x/|∇φ|                                g·δ(φ)·φ_y²/|∇φ|³      −g·δ(φ)·φ_x·φ_y/|∇φ|³
      g·δ′(φ)·φ_y/|∇φ|                               −g·δ(φ)·φ_x·φ_y/|∇φ|³    g·δ(φ)·φ_x²/|∇φ|³ ]

with |∇φ| = (φ_x² + φ_y²)^{1/2}.
Indefinite sub-Hessian: the Legendre condition is not satisfied!

Geometric Active Contour
F(c_1, c_2, φ) = ∫ [ (u − c_1)² H(φ) + (u − c_2)² (1 − H(φ)) + g(|∇u|)·|∇H(φ)| ] dx
repeat
    c_1, c_2 ← argmin F(c_1, c_2, φ)
    ψ ← argmin Q(ψ), φ ← φ + ψ    (by the generalized Newton method)
until convergence criterion
Here L = h∗(·), a smoothing (convolution) operator, self-adjoint and positive definite.
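A minimal sketch of the c_1, c_2 update: for fixed φ the optimal region constants are the region averages of u (the standard Chan-Vese closed form); heaviside is the helper from the earlier sketch:

def update_means(phi, u, eps=1.0):
    H = heaviside(phi, eps)
    c1 = (u * H).sum() / H.sum()                  # mean inside  {phi > 0}
    c2 = (u * (1 - H)).sum() / (1 - H).sum()      # mean outside
    return c1, c2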

Results - Geometric Active Contour
[Five result slides, each comparing four panels: gradient descent, Newton with trust region, Sobolev active contour, and the suggested generalized Newton.]

Running Time
Implementation of the GAC in the MATLAB environment; running times in seconds.

image      | Generalized Newton | Newton | Gradient descent | Sobolev GD
shapes     | 2.6                | 5.3    | 3.4              | 6.3
dancer     | 11.3               | 14.9   | 16.8             | 3.48
newspaper  | 9.8                | 36.9   | 77.9             | 8.8
ultrasound | 4.8                | 15.6   | 106.54           | 63.4

Mumford-Shah Type Color Deblurring
F(f, v) = Σ_{c ∈ {R,G,B}} [ ∫ (h∗f^c − g^c)² dx + β ∫ v²·|∇f^c| dx ] + α ∫ [ ε|∇v|² + (v − 1)²/(4ε) ] dx
h: blur kernel, g: observed image, f: recovered image, v: edge set; |∇f^c| = ((f^c_x)² + (f^c_y)²)^{1/2}.
Mumford-Shah, CVPR 1985; J. Shah, CVPR 1996; Bar, Sochen, Kiryati, VLSM 2005.
repeat
    v ← argmin F(f, v)    (by the generalized minimal residual method, GMRES)
    f^c ← argmin F(f^c, v)    (by the generalized Newton method)
until convergence criterion
Here L is an adaptive edge-based Hamiltonian operator H[v](x), self-adjoint and positive definite.
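For fixed f the v-subproblem above is linear in v: setting the first variation to zero gives (2β·s(x) + α/(2ε))·v − 2αε·Δv = α/(2ε), where s(x) is the edge weight Σ_c |∇f^c(x)|. A minimal matrix-free GMRES sketch follows (my own discretization, with periodic boundaries for simplicity):

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def solve_v(s, alpha, beta, eps):
    n1, n2 = s.shape
    diag = 2 * beta * s + alpha / (2 * eps)

    def laplacian(v2d):
        return (np.roll(v2d, 1, 0) + np.roll(v2d, -1, 0) +
                np.roll(v2d, 1, 1) + np.roll(v2d, -1, 1) - 4 * v2d)

    def matvec(v):
        v2d = v.reshape(n1, n2)
        return (diag * v2d - 2 * alpha * eps * laplacian(v2d)).ravel()

    A = LinearOperator((n1 * n2, n1 * n2), matvec=matvec)
    rhs = np.full(n1 * n2, alpha / (2 * eps))
    v, info = gmres(A, rhs)          # info == 0 on successful convergence
    return v.reshape(n1, n2)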

Color Deblurring
For each channel c, the Hessian of F with respect to (f^c, f^c_x, f^c_y) has the form

R = [ h(−x) ∗ h(x) ∗ (·)    0                                 0
      0                     β·v²·(f^c_y)²/|∇f^c|³            −β·v²·f^c_x·f^c_y/|∇f^c|³
      0                    −β·v²·f^c_x·f^c_y/|∇f^c|³          β·v²·(f^c_x)²/|∇f^c|³ ]

Indefinite sub-Hessian: the Legendre condition is not satisfied!

Results - Color Deblurring
[Figure panels: blurred input; Newton, E = 81; Newton with trust region, E = 200; smoothing norm, E = 106; Hamiltonian norm, E = 14.1, t = 47 sec; CG method, t = 176.8 sec.]

Results - Color Deblurring
[Figure panels: blurred input; Newton, E = 97; Newton with trust region, E = 309; smoothing norm, E = 161; Hamiltonian norm, E = 4, t = 3 sec; CG method, t = 65 sec.]

Conclusions
- An efficient generalized Newton-type method with trust region is suggested.
- It is numerically stabilized by the trust-region constraint.
- The method is flexible: the inner product can be designed for different applications.
- Future research: extension to shape spaces and manifolds.

Thank you!