Image restoration: numerical optimisation


1 Image restoration: numerical optimisation. Short and partial presentation. Jean-François Giovannelli, Groupe Signal Image, Laboratoire de l'Intégration du Matériau au Système, Univ. Bordeaux, CNRS, BINP

2 Context and topic
Image restoration, deconvolution. Missing information: ill-posedness and regularisation. Previous / next lectures, exercises and practical works.
Three types of regularised inversion: quadratic penalties; convex non-quadratic penalties; constraints (positivity and support). Bayesian strategy: an incursion for hyperparameter tuning.
A basic component: quadratic criterion and Gaussian model; circulant approximation and computations based on FFT.
Other approaches: numerical linear system solvers; numerical quadratic optimizers.

3 Direct / Inverse, Convolution / Deconvolution, ...
Direct model: y = Hx + ε = h * x + ε (data y, unknown image x, observation operator H, noise ε). Inversion: x̂ = X(y).
Reconstruction, restoration, deconvolution, denoising. General problem: ill-posed inverse problems, i.e., lack of information. Methodology: regularisation, i.e., information compensation.

4 Various quadratic criteria
Quadratic criterion and linear solution:
J(x) = ||y - Hx||^2 + µ Σ (x_p - x_q)^2 = ||y - Hx||^2 + µ ||Dx||^2
Huber penalty, extended criterion and half-quadratic approaches:
J(x) = ||y - Hx||^2 + µ Σ ϕ(x_p - x_q)
J(x, b) = ||y - Hx||^2 + µ Σ { [(x_p - x_q) - b_pq]^2 + ζ(b_pq) }
Constraints, augmented Lagrangian and ADMM:
J(x) = ||y - Hx||^2 + µ ||Dx||^2, with x_p = 0 for p outside the support S and x_p ≥ 0 for p in M
L(x, s, l) = ||y - Hx||^2 + µ ||Dx||^2 + ρ ||x - s||^2 + l^t (x - s)

5 Various solutions (no circulant approximation)
[Figures: true image, observation, quadratic-penalty restoration, Huber-penalty restoration; true image, observation, quadratic-penalty restoration, constrained restoration.]

6 A reminder of the calculi
The criterion, its gradient...
J(x) = ||y - Hx||^2 + µ ||Dx||^2
∂J/∂x = -2 H^t (y - Hx) + 2 µ D^t D x = 2 [(H^t H + µ D^t D) x - H^t y]
... and its Hessian...
∂^2 J/∂x^2 = 2 (H^t H + µ D^t D)
The multivariate linear system of equations and the minimiser:
(H^t H + µ D^t D) x̂ = H^t y, hence x̂ = (H^t H + µ D^t D)^{-1} H^t y

7 Reformulation, notations...
Rewrite the criterion:
J(x) = ||y - Hx||^2 + µ ||Dx||^2 = (1/2) x^t Q x + q^t x + q_0
Gradient: g(x) = Q x + q. Hessian: Q = 2 (H^t H + µ D^t D). Linear term: q = g(0) = -2 H^t y, the gradient at x = 0.
... the system of equations: (H^t H + µ D^t D) x̂ = H^t y, i.e., Q x̂ = -q,
... and the minimiser: x̂ = (H^t H + µ D^t D)^{-1} H^t y = -Q^{-1} q
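
To make this reformulation concrete, here is a minimal NumPy sketch on a toy 1-D deconvolution problem; the blur kernel, circular difference operator, sizes and µ value are my own illustrative choices, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
# Toy 1-D deconvolution: H applies a short blur, D takes circular first differences.
h = np.array([1.0, 2.0, 3.0, 2.0, 1.0]); h /= h.sum()
H = np.array([np.convolve(e, h, mode="same") for e in np.eye(N)]).T  # column i = blur of basis vector i
D = np.eye(N) - np.roll(np.eye(N), -1, axis=1)                        # circular first-difference operator
mu = 0.1

x_true = np.cumsum(rng.standard_normal(N))         # smooth-ish truth
y = H @ x_true + 0.01 * rng.standard_normal(N)     # noisy observation

# Quadratic reformulation: J(x) = 1/2 x^t Q x + q^t x + q0
Q = 2.0 * (H.T @ H + mu * D.T @ D)                 # Hessian
q = -2.0 * H.T @ y                                 # gradient at x = 0
x_hat = np.linalg.solve(Q, -q)                     # minimiser: Q x_hat = -q

# Same solution written with the original operators (normal equations).
x_hat2 = np.linalg.solve(H.T @ H + mu * D.T @ D, H.T @ y)
print(np.allclose(x_hat, x_hat2))
```

Both forms give the same minimiser, since Q x̂ = -q is just the normal equation scaled by 2.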

8 Quadratic criterion: illustration
[Figures: one- and two-variable quadratic criteria,
α_1 (x_1 - x̄_1)^2 + γ and α_1 (x_1 - x̄_1)^2 + α_2 (x_2 - x̄_2)^2 - 2 ρ √(α_1 α_2) (x_1 - x̄_1)(x_2 - x̄_2) + γ,
plotted against the variables x_1 and x_2.]
Comment: convex, concave, saddle point, proper direction.

9 Quadratic criterion: illustration
[Same figures and comment as slide 8.]

10 Computational and algorithmic aspects
Various options and many relationships...
Direct calculus, compact (closed) form, matrix inversion. Circulant approximation and diagonalisation by FFT.
Special algorithms for the 1-D case: recursive least squares; Kalman smoother or filter (and fast versions, ...).
Algorithms for linear system solving: splitting idea; Gauss, Gauss-Jordan; substitution, triangularisation, ...; and Levinson, ...
Algorithms for criterion optimisation: component-wise, ...; gradient descent and various modifications.

11 A family of linear system solvers
Matrix splitting algorithms

12 Linear solver: illustration
Criterion:
J(x) = α_1 (x_1 - x̄_1)^2 + α_2 (x_2 - x̄_2)^2 - 2 ρ √(α_1 α_2) (x_1 - x̄_1)(x_2 - x̄_2)
Nullification of the gradient:
∂J/∂x_1 = 2 α_1 (x_1 - x̄_1) - 2 ρ √(α_1 α_2) (x_2 - x̄_2) = 0
∂J/∂x_2 = 2 α_2 (x_2 - x̄_2) - 2 ρ √(α_1 α_2) (x_1 - x̄_1) = 0

13 An example of linear solver: matrix splitting
A simple and fruitful idea: solve Q x̂ + q = 0, that is, Q x̂ = -q.
Matrix splitting: Q = A - B.
Take advantage...
Q x̂ = -q ⟺ (A - B) x̂ = -q ⟺ A x̂ = B x̂ - q ⟺ x̂ = A^{-1} [B x̂ - q]

14 An example of linear solver: matrix splitting (continued)
The same identity, x̂ = A^{-1} [B x̂ - q], read as the definition of an iterative solver.

15 An example of linear solver: matrix splitting (continued)
Iterative solver:
x[k] = A^{-1} [ B x[k-1] - q ]
If it exists, a fixed point satisfies x̂ = A^{-1} [B x̂ - q], so Q x̂ = -q.
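
As an illustration of this fixed-point iteration, a short sketch; the toy matrices are my own, and the diagonal splitting used here is only one possible choice of A.

```python
import numpy as np

# Toy 2x2 system Q x = -q and a splitting Q = A - B (values of my own).
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, -1.0])
A = np.diag(np.diag(Q))          # an easy-to-invert part
B = A - Q                        # so that Q = A - B

x = np.zeros(2)
for k in range(100):
    x = np.linalg.solve(A, B @ x - q)   # x[k] = A^{-1} (B x[k-1] - q)

# The fixed point solves Q x = -q, i.e. it is the minimiser of the criterion.
print(np.allclose(Q @ x, -q))
```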

16 Matrix splitting: sketch of proof
Notations and iterates:
x[k] = A^{-1} [ B x[k-1] - q ] = A^{-1} B x[k-1] - A^{-1} q = M x[k-1] - m
so, at iteration K: x[K] = M x[K-1] - m

17 Matrix splitting: sketch of proof (continued)
x[K] = M x[K-1] - m = M (M x[K-2] - m) - m = M^2 x[K-2] - (M + I) m

18 Matrix splitting: sketch of proof (continued)
x[K] = M^2 x[K-2] - (M + I) m = M^2 (M x[K-3] - m) - (M + I) m = M^3 x[K-3] - (M^2 + M + I) m

19 Matrix splitting: sketch of proof (continued)
Unrolling down to the initialisation:
x[K] = M^K x[0] - (M^{K-1} + ... + M^2 + M + I) m = M^K x[0] - ( Σ_{k=0}^{K-1} M^k ) m

20 Matrix splitting: sketch of proof (continued)
Convergence... in relation with the eigenvalues of M.

21 Matrix splitting: sketch of proof (continued)
Convergence... in relation with the eigenvalues of M.
Increasing powers, using the diagonalisation M = P Λ P^{-1}:
M^2 = P Λ P^{-1} P Λ P^{-1} = P Λ^2 P^{-1}, ..., M^K = P Λ^K P^{-1} → O

22 Matrix splitting: sketch of proof (continued)
... and a well-known series:
Σ_{k=0}^{K-1} M^k = Σ_{k=0}^{K-1} (P Λ P^{-1})^k = P ( Σ_{k=0}^{K-1} Λ^k ) P^{-1}

23 Matrix splitting: sketch of proof (continued)
... and a well-known series:
Σ_{k=0}^{K-1} M^k = P ( Σ_{k=0}^{K-1} Λ^k ) P^{-1} → P (I - Λ)^{-1} P^{-1} = (I - M)^{-1}
Convergence if ρ(M) < 1: the spectral radius of M must be smaller than 1.

24 Matrix splitting: sketch of proof (continued)
Iterates: x[K] = M^K x[0] - ( Σ_{k=0}^{K-1} M^k ) m. Convergence if ρ(M) < 1, spectral radius of M smaller than 1.
Limit as K tends to +∞:
x̂ = lim M^K x[0] - lim ( Σ_{k=0}^{K-1} M^k ) m = O - (I - M)^{-1} m = -(I - A^{-1} B)^{-1} A^{-1} q = -(A - B)^{-1} q = -Q^{-1} q

25 Matrix splitting: recap and examples
Matrix splitting recap. Solve / compute: Q x̂ = -q, i.e., x̂ = -Q^{-1} q. Matrix splitting: Q = A - B. Iterate:
x[k] = A^{-1} [ B x[k-1] - q ], i.e., A x[k] = B x[k-1] - q
Computability: efficient inversion of A, that is, efficient solving of A u = v. Convergence requires ρ(A^{-1} B) < 1.

26 Matrix splitting: recap and examples
Matrix splitting examples.
Diagonal (Jacobi): Q = D - R, x[k] = D^{-1} [ R x[k-1] - q ]
Lower and upper triangular (Gauss-Seidel): Q = L + U, with L lower triangular (including the diagonal) and U strictly upper triangular: L x[k] = -[ U x[k-1] + q ]
Circulant: Q = C - R, x[k] = C^{-1} [ R x[k-1] - q ]
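
A self-contained sketch of the diagonal (Jacobi) and lower/upper (Gauss-Seidel) splittings on a small symmetric positive-definite system; the matrix and right-hand side are my own toy values.

```python
import numpy as np

# Toy symmetric positive-definite system Q x = -q.
Q = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
q = np.array([-1.0, -2.0, -3.0])
x_star = np.linalg.solve(Q, -q)

def iterate(A, B, q, n_iter=50):
    # Splitting Q = A - B, iteration A x[k] = B x[k-1] - q.
    x = np.zeros(len(q))
    for _ in range(n_iter):
        x = np.linalg.solve(A, B @ x - q)
    return x

# Jacobi: A = diagonal part, B = A - Q (off-diagonal part, sign flipped).
A_jac = np.diag(np.diag(Q))
x_jac = iterate(A_jac, A_jac - Q, q)

# Gauss-Seidel: A = lower triangle with diagonal, B = A - Q = minus the strict upper part.
A_gs = np.tril(Q)
x_gs = iterate(A_gs, A_gs - Q, q)

print(np.allclose(x_jac, x_star, atol=1e-8), np.allclose(x_gs, x_star, atol=1e-8))
```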

27-29 Matrix splitting: numerical results
[Figures: iterates x_1 (top) and x_2 (bottom) versus iteration.]

30 Numerical results (all of them)
[Figures for correlation coefficients ρ = 0.7, ρ = 0.9 and a third value of ρ.]

31 Two families of quadratic optimizers
Component-wise; gradient direction.

32 Quadratic criterion: illustration
[Same figures and comment as slide 8.]

33 Minimisation, a first approach: component-wise
Criterion: J(x) = (1/2) x^t Q x + q^t x + q_0
Iterative component-wise update:
For k = 1, 2, ...
  For p = 1, 2, ..., P: update x_p by minimisation of J(x) w.r.t. x_p, given the other variables.
  End p
End k
Properties: fixed-point algorithm; convergence is proved.

34 Component-wise minimisation
Criterion: J(x) = (1/2) x^t Q x + q^t x + q_0
Two ingredients:
Select pixel p: vector 1_p = [0, ..., 0, 1, 0, ..., 0]^t. Nullify pixel p: matrix I - 1_p 1_p^t.
Rewrite the current image and rewrite the criterion:
x_p = 1_p^t x, x_/p = (I - 1_p 1_p^t) x, x = (I - 1_p 1_p^t) x + x_p 1_p = x_/p + x_p 1_p
J_p(x_p) = J[ x_/p + x_p 1_p ]

35 Component-wise minimisation (continued)
Consider the criterion as a function of pixel p:
J_p(x_p) = J[ x_/p + x_p 1_p ] = (1/2) (x_/p + x_p 1_p)^t Q (x_/p + x_p 1_p) + q^t (x_/p + x_p 1_p) + q_0
         = (1/2) (1_p^t Q 1_p) x_p^2 + [(Q x_/p + q)^t 1_p] x_p + constant
and minimise it: update pixel p as
x_p^opt = - (Q x_/p + q)^t 1_p / (1_p^t Q 1_p)
Computation load: practicability of computing 1_p^t Q 1_p (think about side-effects); possible efficient update of φ_p = (Q x_/p + q)^t 1_p itself.
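
A hedged sketch of this component-wise (coordinate-descent) update for a generic quadratic criterion; the function name and the toy Q, q used for the check are mine, not from the slides.

```python
import numpy as np

def coordinate_descent(Q, q, n_sweeps=100):
    """Component-wise minimisation of J(x) = 1/2 x^t Q x + q^t x + q0.

    Each pixel p is updated to the exact minimiser of J along its axis:
    x_p <- -(Q @ x_/p + q)_p / Q_pp, with x_/p the current image with
    pixel p set to zero.
    """
    P = len(q)
    x = np.zeros(P)
    for _ in range(n_sweeps):
        for p in range(P):
            x_wo_p = x.copy()
            x_wo_p[p] = 0.0
            x[p] = -(Q[p] @ x_wo_p + q[p]) / Q[p, p]
    return x

# Small check against the closed-form minimiser (toy Q, q of my own).
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
print(np.allclose(coordinate_descent(Q, q), np.linalg.solve(Q, -q)))
```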

36-38 Component-wise minimisation: numerical results
[Figures: iterates x_1 (top) and x_2 (bottom) and criterion value versus iteration.]

39 Component-wise minimisation: numerical results (all of them)
[Figures for ρ = 0.7, ρ = 0.9 and a third value of ρ.]

40 Extensions: multipixel and separability

41 Two families of quadratic optimizers
Component-wise; gradient direction.

42 Quadratic criterion: generalities
J(x) = α_1 (x_1 - x̄_1)^2 + α_2 (x_2 - x̄_2)^2 - 2 ρ √(α_1 α_2) (x_1 - x̄_1)(x_2 - x̄_2) + γ
Optimality over R^N (no constraint):
Null gradient: ∇J = g(x̂) = 0 at x = x̂. Positive Hessian: Q = ∇^2 J > 0.

43 Iterative algorithms, fixed point, ...
Goal: x̂ = arg min_x J(x).
Iterative update: initialisation x[0]; iteration k = 0, 1, 2, ...
x[k+1] = x[k] + τ[k] δ[k]
Direction δ ∈ R^N, step length τ ∈ R_+, and x[k] → x̂ as k → ∞.
Stopping rule, e.g. ||g(x[k])|| < ε.

44 Iterative algorithms, fixed point, descent, ...
Direction δ: optimal, the opposite of the gradient; Newton, the inverse Hessian; preconditioned; corrected directions: bisector, Vignes, Polak-Ribière, ..., conjugate direction, ...
Step length τ: optimal; over-relaxed / under-relaxed; Armijo, Goldstein, Wolfe, ... and also fixed step... optimal, yes, ... and no...

45 Gradient with optimal step length
Strategy: optimal direction, optimal step length.
Iteration: x[k+1] = x[k] + τ[k] δ[k]
Direction δ ∈ R^N: δ[k] = -g(x[k])
Step length τ ∈ R_+: J_δ(τ) = J(x[k] + τ δ[k]), a second-order polynomial J_δ(τ) = ... τ^2 + ... τ + ...
Optimal step length:
τ[k] = g(x[k])^t g(x[k]) / [ g(x[k])^t Q g(x[k]) ]
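
The optimal-step gradient iteration fits in a few lines of NumPy; a sketch with my own toy Q and q, checked against the closed-form minimiser.

```python
import numpy as np

def gradient_optimal_step(Q, q, n_iter=200, eps=1e-10):
    """Gradient descent with exact (optimal) step for J(x) = 1/2 x^t Q x + q^t x."""
    x = np.zeros(len(q))
    for _ in range(n_iter):
        g = Q @ x + q                      # gradient
        if np.linalg.norm(g) < eps:        # stopping rule ||g|| < eps
            break
        tau = (g @ g) / (g @ (Q @ g))      # optimal step along -g
        x = x - tau * g                    # x[k+1] = x[k] + tau * delta, delta = -g
    return x

# Check on a small SPD system (toy values of my own).
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, -1.0])
print(np.allclose(gradient_optimal_step(Q, q), np.linalg.solve(Q, -q)))
```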

46 Illustration
Two readings: optimisation along the given direction (line search); constrained optimisation.
Remark: orthogonality of successive directions (see end of exercise).

47 Sketch of convergence proof
A reminder of the notations: J(x) = (1/2) x^t Q x + q^t x + q_0, g(x) = Q x + q.
Some preliminary results:
x̂ = arg min_x J(x) = -Q^{-1} q
J(x̂) = -(1/2) q^t Q^{-1} q + q_0
J(x) - J(x̂) = (1/2) g(x)^t Q^{-1} g(x)
J(x_0 + h) - J(x_0) = g(x_0)^t h + (1/2) h^t Q h

48 Sketch of convergence proof (continued)
Notational convenience: x[k] = x, x[k+1] = x⁺, τ[k] = τ, g(x[k]) = g.
Criterion increase / decrease:
J(x⁺) - J(x) = J(x + τ δ) - J(x) = g^t [τ δ] + (1/2) [τ δ]^t Q [τ δ] = ... = -(1/2) [g^t g]^2 / (g^t Q g)
since δ = -g and τ = g^t g / (g^t Q g).
Comment: it decreases, it reduces...

49 Sketch of convergence proof (continued)
Distance to minimiser:
J(x⁺) - J(x̂) = [J(x) - J(x̂)] - (1/2) [g^t g]^2 / (g^t Q g)
             = [J(x) - J(x̂)] [ 1 - (1/2) [g^t g]^2 / ( (g^t Q g) (J(x) - J(x̂)) ) ]
             = [J(x) - J(x̂)] [ 1 - [g^t g]^2 / ( (g^t Q g) (g^t Q^{-1} g) ) ]
             ≤ [J(x) - J(x̂)] ( (M - m) / (M + m) )^2
... so it converges. M and m: maximal and minimal eigenvalues of Q.
... and the last step follows from the Kantorovich inequality (see next slide).
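
This contraction factor can be observed numerically; a small sketch (toy diagonal Q of my own, so M and m are explicit) comparing the one-step decrease of J(x) - J(x̂) with the bound.

```python
import numpy as np

# Toy SPD criterion J(x) = 1/2 x^t Q x + q^t x (values of my own).
Q = np.array([[10.0, 0.0], [0.0, 1.0]])
q = np.array([1.0, 1.0])
x_hat = np.linalg.solve(Q, -q)

def J(x):
    return 0.5 * x @ Q @ x + q @ x

eigvals = np.linalg.eigvalsh(Q)
m, M = eigvals[0], eigvals[-1]
bound = ((M - m) / (M + m)) ** 2

x = np.array([3.0, -2.0])
for _ in range(5):
    g = Q @ x + q
    tau = (g @ g) / (g @ (Q @ g))
    x_next = x - tau * g
    ratio = (J(x_next) - J(x_hat)) / (J(x) - J(x_hat))
    print(bool(ratio <= bound + 1e-12))    # observed one-step contraction vs bound
    x = x_next
```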

50 Kantorovich inequality
Result: [ u^t Q u ] [ u^t Q^{-1} u ] ≤ ( (M + m)^2 / (4 M m) ) ||u||^4
for Q symmetric and positive-definite, with M and m the maximal and minimal eigenvalues of Q.
Short sketch of proof (quite long and complex): consider the case ||u|| = 1; diagonalise Q: Q = P^t Λ P and Q^{-1} = P^t Λ^{-1} P; write convex combinations of the eigenvalues and their inverses; use the convexity of t ↦ 1/t.
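
A quick numerical sanity check of the inequality on random vectors; the test matrix is my own, and this only illustrates the statement, it is not the proof.

```python
import numpy as np

rng = np.random.default_rng(1)
# Random symmetric positive-definite Q (my own test matrix).
A = rng.standard_normal((5, 5))
Q = A @ A.T + 5 * np.eye(5)
Qinv = np.linalg.inv(Q)
eigvals = np.linalg.eigvalsh(Q)
m, M = eigvals[0], eigvals[-1]
c = (M + m) ** 2 / (4 * M * m)             # Kantorovich constant

ok = True
for _ in range(1000):
    u = rng.standard_normal(5)
    lhs = (u @ Q @ u) * (u @ Qinv @ u)
    rhs = c * (u @ u) ** 2
    ok &= bool(lhs <= rhs * (1 + 1e-12))
print(ok)
```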

51-53 Numerical results
[Figures: iterates x_1 (top) and x_2 (bottom) and criterion value versus iteration.]

54 Numerical results (all of them)
[Figures for ρ = 0.7, ρ = 0.9 and a third value of ρ.]

55 Preconditioned gradient: short digest...
Strategy: modified direction, optimal step length.
Iteration: x[k+1] = x[k] + τ[k] δ[k]
Direction δ ∈ R^N: δ[k] = -P g(x[k])
Step length τ ∈ R_+: J_δ(τ) = J(x[k] + τ δ[k]), a second-order polynomial J_δ(τ) = ... τ^2 + ... τ + ...
Optimal step length:
τ[k] = g(x[k])^t P g(x[k]) / [ g(x[k])^t P Q P g(x[k]) ]

56 Preconditioned gradient: two special cases...
Non-preconditioned, P = I: standard gradient algorithm.
Direction δ[k] = -g(x[k]); step length τ[k] = g(x[k])^t g(x[k]) / [ g(x[k])^t Q g(x[k]) ]
Perfect preconditioner, P = Q^{-1}: optimum in one step.
Direction δ[k] = -Q^{-1} g(x[k]); step length τ[k] = g(x[k])^t P g(x[k]) / [ g(x[k])^t P Q P g(x[k]) ] = 1
First iterate: x[1] = x[0] - Q^{-1} g(x[0]) = x[0] - Q^{-1} (Q x[0] + q) = -Q^{-1} q = x̂ !
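
A sketch contrasting the two special cases; the toy system and the function name are my own. With P = I it behaves as the plain gradient algorithm, while P = Q^{-1} reaches the minimiser after a single step.

```python
import numpy as np

def preconditioned_gradient(Q, q, P, n_iter=100, eps=1e-12):
    """Preconditioned gradient: delta = -P g, with the optimal step along delta."""
    x = np.zeros(len(q))
    for k in range(n_iter):
        g = Q @ x + q
        if np.linalg.norm(g) < eps:
            return x, k                    # k iterations were needed
        d = -P @ g
        tau = -(g @ d) / (d @ (Q @ d))     # optimal step = (g^t P g) / (g^t P Q P g)
        x = x + tau * d
    return x, n_iter

Q = np.array([[10.0, 2.0], [2.0, 3.0]])
q = np.array([1.0, -2.0])
x_hat = np.linalg.solve(Q, -q)

x1, k1 = preconditioned_gradient(Q, q, np.eye(2))           # P = I: plain gradient
x2, k2 = preconditioned_gradient(Q, q, np.linalg.inv(Q))    # P = Q^{-1}: one step
print(k1, k2, np.allclose(x1, x_hat), np.allclose(x2, x_hat))
```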

57 Preconditioned gradient: numerical results
Better and better approximation of the inverse Hessian.

58 Synthesis
Image restoration, deconvolution and other inverse problems.
Three types of regularised inversion (quadratic penalties; convex non-quadratic penalties; constraints: positivity and support), ... based on unconstrained quadratic optimisation.
A basic component: quadratic criterion and Gaussian model; circulant approximation and computations based on FFT.
Numerical linear system solvers: matrix splitting. Numerical quadratic optimizers: component-wise; gradient methods.

59 Decreasing criterion...
[Figure: criterion value versus iteration.]

60 Various solutions
[Figures: true image, observation, quadratic-penalty restoration, Huber-penalty restoration; true image, observation, quadratic-penalty restoration, constrained restoration.]
