Image restoration: edge preserving


1 Image restoration: edge preserving. Convex penalties
Jean-François Giovannelli
Groupe Signal Image, Laboratoire de l'Intégration du Matériau au Système, Univ. Bordeaux, CNRS, BINP

2 Topics
Image restoration, deconvolution
- Motivating examples: medical, astrophysical, industrial, ...
- Various problems: Fourier synthesis, deconvolution, ...
- Missing information: ill-posed character and regularisation
Three types of regularised inversion
1. Quadratic penalties and linear solutions: closed-form expression; computation through FFT; numerical optimisation, gradient algorithm
2. Non-quadratic penalties and edge preservation: half-quadratic approaches, including computation through FFT; numerical optimisation, gradient algorithm
3. Constraints, positivity and support: augmented Lagrangian and ADMM
Bayesian strategy: a few incursions
- Tuning hyperparameters, instrument parameters, ...
- Hidden / latent parameters, segmentation, detection, ...

3 Convolution / Deconvolution
Direct model:
  y = Hx + ε = h * x + ε
[Diagram: x → H → (+ε) → y; restoration x̂ = X(y)]
Restoration, deconvolution-denoising.
General problem: ill-posed inverse problems, i.e., lack of information.
Methodology: regularisation, i.e., compensating for the missing information.
Specificity of the inversion / reconstruction / restoration methods:
- trade-off and tuning parameters
- limited quality of the results
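To make the setting concrete, here is a minimal NumPy sketch of the direct model y = h * x + ε; the signal, kernel and noise level are illustrative assumptions, not values from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
x_true = np.zeros(n)
x_true[40:60] = 1.0                          # piecewise-constant scene with two edges
h = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)
h /= h.sum()                                 # normalised Gaussian blur kernel
y = np.convolve(x_true, h, mode="same") + 0.02 * rng.standard_normal(n)   # y = h*x + eps
```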

4 Competition: adequacy to data
Compare the observations y with the model output Hx.
Unknown: x. Known: H and y.
[Diagram: x → H → Hx, compared with the observation y]
Quadratic criterion: distance between observation and model output
  J_LS(x) = ‖y − Hx‖²

5 Competition: smoothness prior
Data insufficiently informative: account for prior information.
1. Question of compromise, of competition
2. Specificity of the methods
Here: smoothness of images. Quadratic penalty on the gray-level gradient (or other linear combinations):
  P(x) = Σ_{p∼q} (x_p − x_q)² = ‖Dx‖²

6 Quadratic penalty: criterion and solution
Criterion including a quadratic penalty:
  J_PLS(x) = ‖y − Hx‖² + µ ‖Dx‖² = ‖y − Hx‖² + µ Σ_{p∼q} (x_p − x_q)²
Restored image:
  x̂_PLS = arg min_x J_PLS(x)
  (HᵗH + µ DᵗD) x̂_PLS = Hᵗy
  x̂_PLS = (HᵗH + µ DᵗD)⁻¹ Hᵗy
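A hedged sketch of this closed-form solution on the 1-D example above, with H built as a dense convolution matrix and D as the first-difference matrix (an illustrative helper, not the lecture's code):

```python
import numpy as np
from scipy.linalg import solve

def pls_restore(y, h, mu):
    n = len(y)
    I = np.eye(n)
    H = np.array([np.convolve(I[k], h, mode="same") for k in range(n)]).T  # column k = h * e_k
    D = I[1:] - I[:-1]                       # (n-1) x n first-difference matrix
    A = H.T @ H + mu * D.T @ D               # normal-equation matrix H^T H + mu D^T D
    return solve(A, H.T @ y, assume_a="pos")

x_pls = pls_restore(y, h, mu=1.0)            # smooth, but edges are blurred
```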

7 Computational aspects and implementation
Various options and many relationships...
- Direct calculus: compact (closed) form, matrix inversion
- Algorithms for linear system solution: Gauss, Gauss-Jordan, substitution, triangularisation, ...
- Special algorithms, especially for the 1D case: recursive least squares, Kalman filter or smoother (and fast versions, ...)
- Numerical optimisation: gradient descent, i.e., descent in the direction opposite to the gradient, and various modifications
- Diagonalisation: circulant approximation and diagonalisation by FFT (see the sketch below)
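Under the circulant approximation, H and D diagonalise in the Fourier basis and the whole solution costs a few FFTs. A minimal sketch, assuming periodic boundaries, with the kernel zero-padded and centred:

```python
import numpy as np

def wiener_hunt(y, h, mu):
    n = len(y)
    hp = np.zeros(n)
    hp[:len(h)] = h
    hp = np.roll(hp, -(len(h) // 2))         # centre the kernel on index 0
    Hf = np.fft.fft(hp)                      # transfer function of the blur
    df = np.zeros(n); df[0], df[1] = 1.0, -1.0
    Df = np.fft.fft(df)                      # transfer function of the first difference
    Xf = np.conj(Hf) * np.fft.fft(y) / (np.abs(Hf) ** 2 + mu * np.abs(Df) ** 2)
    return np.real(np.fft.ifft(Xf))

x_wh = wiener_hunt(y, h, mu=1.0)             # same solution as above, in O(n log n)
```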

8 Solution from least squares and quadratic penalty
[Figure: true image, observation, and quadratic-penalty restorations, with circulant approximation and non-circulant.]

9 Edge preserving approaches
Limited resolution of the linear solutions; desirable: less smoothing around discontinuities.
Ambivalence:
- smoothing (homogeneous regions)
- heightening, enhancement, sharpening (discontinuities, edges)
... and a new compromise, trade-off, conciliation.
Algorithmic structure based on the linear solution (quadratic criterion):
- FFT (Wiener-Hunt)
- Kalman filter or smoother (and its fast, factorised versions, ...)
- Numerical optimisation (gradient, ...)

10 Edge preservation and non-quadratic penalties
The restored image is still defined as the minimiser of a penalised criterion:
  x̂ = arg min_x J(x),  J(x) = ‖y − Hx‖² + µ P(x)
... once again penalising small variations, but penalising discontinuities less:
  P(x) = Σ_{p∼q} ϕ(x_p − x_q),  ϕ(δ) = δ² ?  ϕ(δ) = ... ?
Ambivalence, new compromise, trade-off, conciliation:
- smoothing (homogeneous regions)
- heightening, enhancement, sharpening (discontinuities, edges)

11 Typical potentials
Again ϕ(δ) ≈ δ² for small δ. Behaviour for large δ:
1. Horizontal asymptote
2. Horizontal parabolic behaviour
3. Oblique (slant) asymptote
4. Vertical parabolic behaviour
[Figure: potentials ϕ(δ) versus δ (α = 0.50, threshold s = 1.00): quadratic, Huber, Hebert & Leahy, Geman & McClure, Blake & Zisserman.]

12 Four major types of potentials
1. Horizontal asymptote, ϕ(δ) → 1 [Blake and Zisserman (87), Geman and McClure (87)]:
   ϕ(δ) = αs² min( 1, (δ/s)² ) ;  ϕ(δ) = αs² (δ/s)² / ( 1 + (δ/s)² )
2. Horizontal parabolic behaviour, ϕ(δ) ∼ log δ [Hebert and Leahy (89)]:
   ϕ(δ) = αs² log[ 1 + (δ/s)² ]
3. Oblique (slant) asymptote, ϕ(δ) ∼ |δ|: Huber, hyperbolic, log hyperbolic-cosine, fair function
   ϕ(δ) = α { δ² if |δ| ≤ s ; 2s|δ| − s² if |δ| ≥ s } ;  ϕ(δ) = 2αs² ( √(1 + (δ/s)²) − 1 )
4. Vertical parabolic behaviour, ϕ(δ) ∼ δ² (Wiener-Hunt solution):
   ϕ(δ) = αδ²

13 Potentials with oblique asymptote (L2 / L1): details
  Huber:         ϕ(δ) = αs² { (δ/s)² if |δ| ≤ s ; 2|δ|/s − 1 if |δ| ≥ s }
  Hyperbolic:    ϕ(δ) = 2αs² ( √(1 + (δ/s)²) − 1 )
  LogCosh:       ϕ(δ) = 2αs² log cosh( |δ|/s )
  Fair function: ϕ(δ) = 2αs² [ |δ|/s − log(1 + |δ|/s) ]
[Figure: these potentials versus δ (α = 0.50).]
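These four potentials translate directly into NumPy (a sketch; α and s as in the slides). Each behaves like αδ² near the origin and grows linearly for large |δ|:

```python
import numpy as np

def huber(d, a, s):
    return a * s**2 * np.where(np.abs(d) <= s, (d / s)**2, 2 * np.abs(d) / s - 1)

def hyperbolic(d, a, s):
    return 2 * a * s**2 * (np.sqrt(1 + (d / s)**2) - 1)

def log_cosh(d, a, s):
    return 2 * a * s**2 * np.log(np.cosh(np.abs(d) / s))

def fair(d, a, s):
    return 2 * a * s**2 * (np.abs(d) / s - np.log(1 + np.abs(d) / s))
```

All four agree to second order at δ = 0, which is what preserves the quadratic (smoothing) behaviour in homogeneous regions.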

14 More general non-quadratic penalties (1)
Differences, higher-order derivatives, generalisations, ...
  P(x) = Σ_n ϕ( x_{n+1} − x_n )
  P(x) = Σ_n ϕ( x_{n+1} − 2x_n + x_{n−1} )
  P(x) = Σ_n ϕ( α x_{n+1} − x_n + α x_{n−1} )
  P(x) = Σ_n ϕ( aₙᵗ x )
Linear combinations (wavelets and other "-et" dictionaries, ...):
  P(x) = Σ_n ϕ( wₙᵗ x ) = Σ_n ϕ( Σ_m w_{nm} x_m )
Redundant or not; link with the Haar wavelet and others.

15 More general non-quadratic penalties (2)
2D aspects: derivatives, finite differences, gradient approximations
  P(x) = Σ_{p∼q} ϕ(x_p − x_q) = Σ_{n,m} ϕ( x_{n+1,m} − x_{n,m} ) + Σ_{n,m} ϕ( x_{n,m+1} − x_{n,m} )
Notion of neighbourhood and Markov field.
Any highpass filter, contour detector (Prewitt, Sobel, ...).
Linear combinations: wavelet, contourlet and other "-et" transforms, ...
Other (slightly different) possibilities:
- enforcement towards a known shape x̄:  P(x) = Σ_p ϕ( x_p − x̄_p )
- separable penalty:  P(x) = Σ_p ϕ( x_p )
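A sketch of the 2-D penalty above: sum a potential over the horizontal and vertical first differences of the image (the image x and the potential phi are whatever the user supplies):

```python
import numpy as np

def penalty_2d(x, phi):
    dv = x[1:, :] - x[:-1, :]                # differences along columns (vertical neighbours)
    dh = x[:, 1:] - x[:, :-1]                # differences along rows (horizontal neighbours)
    return phi(dv).sum() + phi(dh).sum()

# e.g. with the Huber potential defined earlier:
# P = penalty_2d(image, lambda d: huber(d, a=0.5, s=1.0))
```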

16 Penalised least squares solution
A reminder of the criterion and the restored image:
  J(x) = ‖y − Hx‖² + µ Σ_{p∼q} ϕ_s(x_p − x_q),  x̂ = arg min_x J(x)
with ϕ_s one of the mentioned (non-quadratic) potentials and two hyperparameters, µ and s.
Non-quadratic criterion, non-linear gradient, no closed-form expression. Two questions:
- practical computation: numerical optimisation algorithm, ... (a gradient sketch follows)
- minimiser: existence, uniqueness, ... continuity
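For the practical-computation question, the simplest route is plain gradient descent on J. A hedged 1-D sketch with the Huber potential; conv and conv_adj stand for H and Hᵗ, and the step size is an illustrative constant, not a tuned value:

```python
import numpy as np

def huber_grad(d, a, s):                     # phi_s'(delta) for the Huber potential
    return 2 * a * np.where(np.abs(d) <= s, d, s * np.sign(d))

def restore_gradient(y, conv, conv_adj, mu, a, s, step=1e-2, iters=500):
    x = y.copy()
    for _ in range(iters):
        g = 2 * conv_adj(conv(x) - y)        # gradient of ||y - Hx||^2
        pd = mu * huber_grad(np.diff(x), a, s)
        g[:-1] -= pd                         # ... plus mu * D^T phi'(Dx)
        g[1:] += pd
        x = x - step * g
    return x

# e.g. for a symmetric kernel h: conv = conv_adj = lambda v: np.convolve(v, h, mode="same")
```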

17 Convexity and existence-uniqueness
Convex sets: R^N, R^N₊, intervals of R^N, ...
- Properties: intersection, convex envelope, projection, ...
Strictly convex criteria, convex criteria: Θ(u) = u², Θ(u) = |u|, Huber, ...
- Properties: sums of convex functions, level sets, ...
Key result
- The set of minimisers of a convex criterion on a convex set is a convex set
- Strict convexity ⟹ unique minimiser
Application: ϕ convex ⟹ J convex.
In the following developments the potential ϕ_s is Huber or hyperbolic: (strictly) convex ⟹ guarantees.
In addition: non-convex ⟹ no guarantees (although, sometimes...)

18 Half-quadratic enchantment (start)
Reminder of the criterion:
  J(x) = ‖y − Hx‖² + µ Σ_{p∼q} ϕ(x_p − x_q)
Minimisation based on a quadratic form: the original idea of [Geman + Yang, 95].
Introduce a set of auxiliary variables b_pq such that, with an appropriate ζ(b):
  ϕ(δ) = inf_b [ (δ − b)²/2 + ζ(b) ]
Extended criterion:
  J(x, b) = ‖y − Hx‖² + µ Σ_{p∼q} { [(x_p − x_q) − b_pq]²/2 + ζ(b_pq) }
and naturally: J(x) = inf_b J(x, b)

19 Legendre-Fenchel Transform (LFT) or Convex Conjugate (CC)
Definition of the LFT or CC
Consider f : R → R, strictly convex, once (or twice) differentiable. The LFT or CC is the function f* : R → R defined by:
  f*(t) = sup_{x∈R} [ xt − f(x) ]
Remarks
  f*(0) = sup_{x∈R} [ −f(x) ] = − inf_{x∈R} [ f(x) ]
  ∀ t, x ∈ R:  xt − f(x) ≤ f*(t),  i.e.  f*(t) + f(x) ≥ xt

20 LFT: some shift and dilation / contraction properties
  f*(t) = sup_{x∈R} [ xt − f(x) ]
Vertical shift and dilation (α ∈ R, β > 0):
  g(x) = α + β f(x)  ⟹  g*(t) = β f*(t/β) − α
Horizontal shift (x₀ ∈ R) and dilation (γ > 0):
  g(x) = f(x − x₀)  ⟹  g*(t) = f*(t) + t x₀
  g(x) = f(γx)      ⟹  g*(t) = f*(t/γ)
The specific cases α = 0, β = 1 / x₀ = 0 / γ = 1 recover f* itself.

21 LFT: a first example
Quadratic case (α ∈ R, β > 0). Consider the function
  f(x) = α + β(x − x₀)²/2
We look for the CC, f*(t) = sup_{x∈R} [ xt − f(x) ].
Let g_t(x) = xt − f(x) = xt − ( α + β(x − x₀)²/2 ).
The derivative reads g_t′(x) = t − β(x − x₀), and the second derivative is g_t″(x) = −β < 0.
Nullifying g_t′: x̄ = x₀ + t/β. Then, by substitution, f*(t) = g_t(x̄):
  f*(t) = (t + βx₀)² / (2β) − ( α + βx₀²/2 )
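A quick numerical sanity check of this closed form (a sketch; the grid and test values are arbitrary): the brute-force sup over x matches the formula.

```python
import numpy as np

alpha, beta, x0 = 1.0, 2.0, 0.5
xs = np.linspace(-50.0, 50.0, 200001)
f = alpha + beta * (xs - x0) ** 2 / 2

for t in (-3.0, 0.0, 1.7):
    brute = np.max(xs * t - f)               # sup_x [xt - f(x)] on a fine grid
    closed = (t + beta * x0) ** 2 / (2 * beta) - (alpha + beta * x0 ** 2 / 2)
    print(t, brute, closed)                  # the two values agree to grid accuracy
```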

22 LFT: a generic result for explicitation (a)
A Swiss-army formula: the Legendre formula
  f*(t) = sup_{x∈R} [ xt − f(x) ]
Let g_t(x) = xt − f(x).
The derivative reads g_t′(x) = t − f′(x), and the second derivative is g_t″(x) = −f″(x) < 0.
Nullifying g_t′(x̄):  t − f′(x̄) = 0,  i.e.  x̄ = f′⁻¹(t) = χ(t)
Then by substitution:
  f*(t) = g_t(x̄) = t x̄ − f(x̄) = t χ(t) − f[χ(t)]

23 LFT: a generic result for explicitation (b)
Derivatives of the explicit convex conjugate
  f*(t) = t χ(t) − f[χ(t)],  with χ = f′⁻¹
The derivative reads:
  f*′(t) = χ(t) + t χ′(t) − χ′(t) f′[χ(t)] = χ(t) = f′⁻¹(t)
since f′[χ(t)] = t. And the second derivative is:
  f*″(t) = χ′(t) = 1 / f″[χ(t)] > 0
hence f* is convex.

24 LFT: a key result
Double conjugate: transforming twice gives back the original, f**(x) = f(x).
  f**(t) = sup_{x∈R} [ xt − f*(x) ]
Let h_t(x) = xt − f*(x) and nullify the derivative:
  0 = h_t′(x̄) = t − f*′(x̄) = t − f′⁻¹(x̄) = t − χ(x̄)  ⟹  x̄ = χ⁻¹(t) = f′(t)
By substitution, using χ(x̄) = t:
  f**(t) = h_t(x̄) = x̄t − f*(x̄) = x̄t − [ x̄ χ(x̄) − f(χ(x̄)) ] = x̄t − x̄t + f(t) = f(t)

25 Outcome for the LFT: a living theorem
Definition
Consider f : R → R, strictly convex, once (or twice) differentiable:
  f*(t) = sup_{x∈R} [ xt − f(x) ]
Properties
  f*(t) = t χ(t) − f[χ(t)]  with  χ = f′⁻¹
  f*′ = f′⁻¹ = χ
  f*″(t) = 1 / f″[χ(t)] > 0, so f* is convex
  f**(x) = f(x)

26 A theorem in action: half-quadratic (beginning)
Problem statement
Consider a potential ϕ, convex or not, and look for ζ such that
  ϕ(δ) = inf_{b∈R} [ (δ − b)²/2 + ζ(b) ]
Define g, assumed strictly convex:
  g(δ) = δ²/2 − ϕ(δ)
Consider its CC:
  g*(b) = sup_{δ∈R} [ bδ − g(δ) ] = sup_{δ∈R} [ ϕ(δ) − (δ − b)²/2 ] + b²/2
Set (the reason is explained on the next slide):
  ζ(b) = g*(b) − b²/2 = sup_{δ∈R} [ ϕ(δ) − (δ − b)²/2 ]

27 A theorem in action: half-quadratic (middle)
Take advantage of g = g**:
  g(δ) = g**(δ)  ⟹  δ²/2 − ϕ(δ) = sup_{b∈R} [ bδ − g*(b) ]
hence, since g*(b) = b²/2 + ζ(b):
  ϕ(δ) = δ²/2 − sup_{b∈R} [ bδ − g*(b) ] = inf_{b∈R} [ (δ − b)²/2 + ζ(b) ]
The icing on the cake: we also have the minimiser. From
  0 = d/db [ (δ − b)²/2 + ζ(b) ] at b̄ = (b̄ − δ) + ζ′(b̄) = g*′(b̄) − δ
and g*′ = g′⁻¹, it follows that:
  b̄ = g′(δ) = δ − ϕ′(δ)
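The identity and its minimiser can be checked numerically. A sketch for the Huber potential, with α < 1/2 so that g(δ) = δ²/2 − ϕ(δ) is indeed strictly convex; the grids and test points are arbitrary:

```python
import numpy as np

a, s = 0.25, 1.0
phi = lambda d: a * s**2 * np.where(np.abs(d) <= s, (d / s)**2, 2 * np.abs(d) / s - 1)
dphi = lambda d: 2 * a * np.where(np.abs(d) <= s, d, s * np.sign(d))

grid = np.linspace(-10.0, 10.0, 4001)
zeta = lambda b: np.max(phi(grid) - (grid - b) ** 2 / 2)   # sup_delta [phi - (delta-b)^2/2]

bs = np.linspace(-5.0, 5.0, 2001)
zs = np.array([zeta(b) for b in bs])
for delta in (0.3, 2.5):
    vals = (delta - bs) ** 2 / 2 + zs
    print(phi(delta), vals.min())                          # phi(delta) = inf_b [...]
    print(delta - dphi(delta), bs[vals.argmin()])          # minimiser b = delta - phi'(delta)
```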

28 A theorem in action: half-quadratic (ending)
Reminder: original criterion and extended criterion
  J(x) = ‖y − Hx‖² + µ Σ_{p∼q} ϕ(x_p − x_q)
  J(x, b) = ‖y − Hx‖² + µ Σ_{p∼q} { [(x_p − x_q) − b_pq]²/2 + ζ(b_pq) }
Algorithmic strategy: alternating minimisation
1. Minimisation w.r.t. x for fixed b: x̂(b) = arg min_x J(x, b), a quadratic problem
2. Minimisation w.r.t. b for fixed x: b̂(x) = arg min_b J(x, b), a separable and explicit update

29 Image update
Non-separable but quadratic:
  J(x, b) = ‖y − Hx‖² + µ Σ_{p∼q} [(x_p − x_q) − b_pq]²/2 = ‖y − Hx‖² + (µ/2) ‖Dx − b‖²
Multivariate system of equations and minimiser:
  (HᵗH + (µ/2) DᵗD) x̂ = Hᵗy + (µ/2) Dᵗb
  x̂ = (HᵗH + (µ/2) DᵗD)⁻¹ (Hᵗy + (µ/2) Dᵗb)
Numerical aspects
- Descent algorithm (with inner iterations)
- Circulant case and FFT (without inner iterations)

30 Auxiliary variables update
Separable (parallel computation), explicit (no inner iterations), easy.
For each b = b_pq, with δ = δ_pq = x_p − x_q, the update is b̄ = δ − ϕ′(δ):
  Huber:           b̄ = δ [ 1 − 2α min(1, s/|δ|) ]
  Geman & McClure: b̄ = δ [ 1 − 2α / (1 + (δ/s)²)² ]
  ...
[Figure: contraction factor b̄/δ versus δ (α = 0.40) for the Huber, Hebert & Leahy, Geman & McClure and Blake & Zisserman potentials.]
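Putting both updates together gives the full half-quadratic iteration. An end-to-end sketch on the 1-D circulant example, with the Huber potential and the FFT image update; parameter values are illustrative, and the factor 1/2 of the extended criterion is kept explicit:

```python
import numpy as np

def hq_restore(y, h, mu, a, s, iters=50):
    n = len(y)
    hp = np.zeros(n); hp[:len(h)] = h
    Hf = np.fft.fft(np.roll(hp, -(len(h) // 2)))    # blur transfer function
    df = np.zeros(n); df[0], df[1] = 1.0, -1.0
    Df = np.fft.fft(df)                             # circular first-difference filter
    Yf = np.fft.fft(y)
    m = mu / 2                                      # the 1/2 of the extended criterion
    x = y.copy()
    for _ in range(iters):
        delta = x - np.roll(x, 1)                   # Dx (periodic boundaries)
        b = delta - 2 * a * np.where(np.abs(delta) <= s, delta, s * np.sign(delta))
        Xf = (np.conj(Hf) * Yf + m * np.conj(Df) * np.fft.fft(b)) / \
             (np.abs(Hf) ** 2 + m * np.abs(Df) ** 2)
        x = np.real(np.fft.ifft(Xf))                # quadratic x-update, no inner iterations
    return x

x_hq = hq_restore(y, h, mu=1.0, a=0.25, s=0.05)     # edges survive the smoothing
```

Note the choice α < 1/2, which keeps g(δ) = δ²/2 − ϕ(δ) convex and hence the half-quadratic representation valid.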

31 Result: L2 / L1 (Huber)
[Figure: true image, observation, quadratic-penalty restoration, Huber-penalty restoration.]

32 Synthesis and extensions
Synthesis
- Image deconvolution; also available for non-invariant linear direct models, and for the signal (1D) case
- Edge preservation through non-quadratic penalties: gradient of gray levels (and other transforms); convex (and differentiable) cases, and also some non-convex cases
- Numerical computation: half-quadratic approach (separable updates, FFT); numerical optimisation (gradient, ...)
Extensions
- Including constraints, for better image resolution (next lecture)
- Hyperparameter estimation, instrument parameter estimation, ...
