Inverse problem and optimization


Laurent Condat (CNRS, Gipsa-lab) and Nelly Pustelnik (CNRS, Laboratoire de Physique de l'ENS de Lyon)
December 15th, 2016

Outline
1. Examples of inverse problems
2. Direct model: linear operator, noise
3. Ill-posed problem
4. Inversion
5. Regularization
6. Gradient descent

Example: denoising (figure: noisy observations and the denoised image)


Example: motion blur (figure: blurred observations and the restored image)


Example: structured illumination microscopy

Example: positron emission tomography (figure: measurements and reconstructed images)

Example: cartoon-texture decomposition (figure: observations decomposed into a texture component and a cartoon component)

Notations
Image: $x \in \mathbb{R}^{N_1 \times N_2}$, $x = (x_{n_1,n_2})_{1 \le n_1 \le N_1,\, 1 \le n_2 \le N_2}$.
Vector consisting of the values of the image, of size $N = N_1 N_2$, arranged column-wise: $x \in \mathbb{R}^N$, $x = (x_n)_{1 \le n \le N}$.

Direct model
$$z = D_\alpha(Hx)$$
- $x = (x_n)_{1 \le n \le N} \in \mathbb{R}^N$: vector consisting of the (unknown) values of the original image, of size $N = N_1 N_2$.
- $z = (z_j)_{1 \le j \le M} \in \mathbb{R}^M$: vector containing the observed values, of size $M = M_1 M_2$.
- $H \in \mathbb{R}^{M \times N}$: matrix associated with a linear degradation operator.
- $D_\alpha \colon \mathbb{R}^M \to \mathbb{R}^M$: models other degradations, such as nonlinear ones or the effect of noise, parameterized by $\alpha$ (e.g., additive noise with variance $\alpha$, Poisson noise with scaling parameter $\alpha$).


Direct model: convolution
$$z = D_\alpha(Hx) = D_\alpha(h * x),$$
where $h * x$ denotes the convolution of $x$ with the point spread function (PSF) $h$, of size $Q_1 \times Q_2$.
Link between $h$ and $H$: under zero-end conditions, the unknown image $x$ is zero outside its domain $[0, N_1 - 1] \times [0, N_2 - 1]$ and the kernel $h$ is zero outside its domain $[0, Q_1 - 1] \times [0, Q_2 - 1]$.

Direct model: convolution
Link between $h$ and $H$: zero-pad $x$ and $h$ to an extended image $x^e$ and kernel $h^e$ of size $M_1 \times M_2$:
$$x^e_{i_1,i_2} = \begin{cases} x_{i_1,i_2} & \text{if } 0 \le i_1 \le N_1 - 1 \text{ and } 0 \le i_2 \le N_2 - 1, \\ 0 & \text{if } N_1 \le i_1 \le M_1 - 1 \text{ or } N_2 \le i_2 \le M_2 - 1, \end{cases}$$
$$h^e_{i_1,i_2} = \begin{cases} h_{i_1,i_2} & \text{if } 0 \le i_1 \le Q_1 - 1 \text{ and } 0 \le i_2 \le Q_2 - 1, \\ 0 & \text{if } Q_1 \le i_1 \le M_1 - 1 \text{ or } Q_2 \le i_2 \le M_2 - 1. \end{cases}$$
This yields
$$(Hx)_{j_1,j_2} = \sum_{i_1=0}^{M_1-1} \sum_{i_2=0}^{M_2-1} h^e_{j_1 - i_1,\, j_2 - i_2}\, x^e_{i_1,i_2},$$
where $j_1 \in \{0, \dots, M_1 - 1\}$ and $j_2 \in \{0, \dots, M_2 - 1\}$.

Direct model: convolution
Link between $h$ and $H$: $H$ is block-circulant with circulant blocks,
$$H = \begin{pmatrix} H_0 & H_{M_1-1} & \cdots & H_1 \\ H_1 & H_0 & \cdots & H_2 \\ \vdots & \vdots & \ddots & \vdots \\ H_{M_1-1} & H_{M_1-2} & \cdots & H_0 \end{pmatrix},$$
where $H_{j_1} \in \mathbb{R}^{M_2 \times M_2}$ denotes the circulant matrix
$$H_{j_1} = \begin{pmatrix} h^e_{j_1,0} & h^e_{j_1,M_2-1} & \cdots & h^e_{j_1,1} \\ h^e_{j_1,1} & h^e_{j_1,0} & \cdots & h^e_{j_1,2} \\ \vdots & \vdots & \ddots & \vdots \\ h^e_{j_1,M_2-1} & h^e_{j_1,M_2-2} & \cdots & h^e_{j_1,0} \end{pmatrix}.$$

Direct model: convolution
If $H$ is a block-circulant matrix with circulant blocks, then $H = U^* D U$, where
- $D$ is a diagonal matrix,
- $U$ is a unitary matrix (i.e., $U^* = U^{-1}$) representing the discrete Fourier transform,
- $^*$ denotes here the transpose conjugate.
Efficient computation of $Hx$: $Hx = U^* D U x = U^* D X$, where $X = Ux$ denotes the Fourier transform of $x$.
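This diagonalization is what makes applying $H$ cheap in practice: two FFTs and a pointwise product. A minimal NumPy sketch (an illustration, not the authors' code):

```python
import numpy as np

def circular_convolve(x, h):
    """Apply H (circular convolution with PSF h) to image x via the FFT.

    x : 2-D array of size M1 x M2
    h : 2-D PSF, zero-padded to the same size M1 x M2
    """
    X = np.fft.fft2(x)            # U x : forward DFT of the image
    D = np.fft.fft2(h)            # diagonal of D: DFT of the kernel
    return np.real(np.fft.ifft2(D * X))  # U* (D X): back to image domain

# Example: blur a random image with a 3x3 uniform PSF
rng = np.random.default_rng(0)
x = rng.random((64, 64))
h = np.zeros_like(x)
h[:3, :3] = 1 / 9                 # uniform PSF, supported on [0, Q-1]
z = circular_convolve(x, h)
```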

Direct model: convolution (figure: Gaussian filter $h$, original image $x$, their Fourier transforms $\mathcal{F}(h)$ and $\mathcal{F}(x)$, and the blurred image $h * x$)


Direct model: convolution (figure: uniform filter $h$, original image $x$, their Fourier transforms $\mathcal{F}(h)$ and $\mathcal{F}(x)$, and the blurred image $h * x$)


Direct model: tomography (without noise)
$$z = Hx$$
- $x = (x_n)_{1 \le n \le N} \in \mathbb{R}^N$: vector consisting of the (unknown) values of the original image, of size $N = N_1 N_2$.
- $H = (H_{j,n})_{1 \le j \le M,\, 1 \le n \le N}$: $H_{j,n}$ is the probability of detecting in tube/line of response $j$ an event emitted in pixel $n$.
- $z = (z_j)_{1 \le j \le M} \in \mathbb{R}^M$: vector containing the observed values (the sinogram).

Direct model: compressed sensing
$$z = Hx$$
- $x = (x_n)_{1 \le n \le N} \in \mathbb{R}^N$: vector consisting of the (unknown) values of the original image, of size $N = N_1 N_2$.
- $z = (z_j)_{1 \le j \le M} \in \mathbb{R}^M$: vector containing the observed values, of size $M \ll N$.
- $H = (H_{j,n})_{1 \le j \le M,\, 1 \le n \le N}$: random measurement matrix, of size $M \times N$.

Direct model: super-resolution
For every $b \in \{1, \dots, B\}$:
$$z_b = D_b T W x + \varepsilon_b$$
- $z_b$: the $b$-th of $B$ multicomponent low-resolution images (size $M$),
- $x$: high-resolution image to be recovered (size $N$),
- $D_b$: downsampling matrix (size $M \times N$, with $M < N$),
- $T$: matrix associated with the blur (size $N \times N$),
- $W$: warp matrix (size $N \times N$),
- $\varepsilon_b \sim \mathcal{N}(0, \sigma^2 \mathrm{Id})$: noise, often assumed to be zero-mean additive white Gaussian.

Direct model (recalled)
$z = D_\alpha(Hx)$, with $x \in \mathbb{R}^N$ the unknown original image, $z \in \mathbb{R}^M$ the observations, $H \in \mathbb{R}^{M \times N}$ a linear degradation operator, and $D_\alpha$ modeling the other degradations (nonlinearities, noise), parameterized by $\alpha$.

Direct model: Gaussian noise
$$z = D_\alpha(Hx) = Hx + b,$$
where $b$ is additive white Gaussian noise with variance $\alpha = \sigma^2$.
(Figures: original image and degraded images for $\sigma = 10$, $\sigma = 30$, $\sigma = 50$.)

Direct model: Poisson noise
$$z = D_\alpha(Hx),$$
where $D_\alpha$ applies Poisson noise with scaling parameter $\alpha$; the noise variance varies with the image intensity.
(Figures: original image and Poisson-degraded images for $\alpha = 1$, $\alpha = 0.1$, $\alpha = 0.01$.)
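A minimal sketch of these two degradation models (illustrative, not from the slides), using NumPy's random generators:

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade_gaussian(hx, sigma):
    """z = Hx + b, with b additive white Gaussian noise of variance sigma**2."""
    return hx + sigma * rng.standard_normal(hx.shape)

def degrade_poisson(hx, alpha):
    """z = D_alpha(Hx): Poisson counts of mean alpha * Hx, rescaled by alpha.
    Smaller alpha means fewer counts, hence relatively stronger noise."""
    return rng.poisson(alpha * np.clip(hx, 0, None)) / alpha
```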

Inverse problem
Given the observations $z = D_\alpha(Hx) \in \mathbb{R}^M$, find a restored image $\hat{x} \in \mathbb{R}^N$ as close as possible to the original $x$.

Hadamard conditions
The problem $z = Hx$ is said to be well-posed if it fulfills the Hadamard conditions (1902):
1. existence of a solution, i.e., the range $\operatorname{ran} H$ of $H$ is equal to $\mathbb{R}^M$;
2. uniqueness of the solution, i.e., the nullspace $\ker H$ of $H$ is equal to $\{0\}$;
3. stability of the solution $x$ with respect to the observation, i.e., for all $(z, z') \in (\mathbb{R}^M)^2$, $\|z - z'\| \to 0 \Rightarrow \|x(z) - x(z')\| \to 0$.

Hadamard conditions, in words:
1. existence of a solution: every vector $z \in \mathbb{R}^M$ is the image of some vector $x \in \mathbb{R}^N$;
2. uniqueness of the solution: if $x(z)$ and $x'(z)$ are two solutions, they are necessarily equal, since $x(z) - x'(z)$ belongs to $\ker H = \{0\}$;
3. stability of the solution $x$ with respect to the observation: a small perturbation of the observed image leads to only a slight variation of the recovered image.

Inversion
Inverse filtering (if $M = N$ and $H$ is invertible):
$$\hat{x} = H^{-1} z = H^{-1}(Hx + b) = x + H^{-1} b \quad \text{(for additive noise } b \in \mathbb{R}^M\text{)}.$$
Remark: closed-form expression, but noise amplification if $H$ is ill-conditioned (ill-posed problem).

Inversion
Inverse filtering via the pseudo-inverse (if $M \ge N$ and $H$ has rank $N$):
$$\hat{x} = (H^* H)^{-1} H^* z = (H^* H)^{-1} H^* (Hx + b) = x + (H^* H)^{-1} H^* b \quad \text{(for additive noise } b \in \mathbb{R}^M\text{)}.$$
Remark: closed-form expression, but noise amplification if $H$ is ill-conditioned (ill-posed problem).
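A small numerical illustration of this remark (my sketch, not from the slides): build a strongly ill-conditioned blur matrix, add tiny noise, and invert.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D Gaussian blur matrix: smooth, hence severely ill-conditioned
n = 100
t = np.arange(n)
H = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 4.0) ** 2)
H /= H.sum(axis=1, keepdims=True)

x = (np.abs(t - n / 2) < 20).astype(float)  # original signal
b = 1e-3 * rng.standard_normal(n)           # very small additive noise
z = H @ x + b

x_hat = np.linalg.solve(H, z)               # inverse filtering: x_hat = H^{-1} z
print(np.linalg.cond(H))                    # enormous condition number
print(np.linalg.norm(x_hat - x), np.linalg.norm(b))  # error >> noise norm
```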

Regularization
Variational approach:
$$\hat{x} \in \operatorname*{Argmin}_{x \in \mathbb{R}^N} \; \|z - Hx\|_2^2 + \lambda\, \Omega(x),$$
where
- $\|z - Hx\|_2^2$: data term,
- $\Omega(x)$: regularization term (e.g., $\Omega(x) = \|x\|_2^2$),
- $\lambda \ge 0$: regularization parameter.
Remark: if $\lambda = 0$, this reduces to inverse filtering.
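For the quadratic choice $\Omega(x) = \|x\|_2^2$ (Tikhonov regularization), the minimizer has the closed form $\hat{x} = (H^* H + \lambda\, \mathrm{Id})^{-1} H^* z$. A minimal NumPy sketch:

```python
import numpy as np

def tikhonov_restore(H, z, lam):
    """Solve min_x ||z - Hx||^2 + lam * ||x||^2 via the normal equations."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ z)

# lam = 0 recovers plain inverse filtering (noise amplification included);
# increasing lam stabilizes the inversion at the cost of some bias.
```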

Maximum A Posteriori (MAP)
Let $x$ and $z$ be realizations of random vectors $X$ and $Z$. The MAP estimate
$$\hat{x} \in \operatorname*{Argmax}_{x \in \mathbb{R}^N} \; \mu_{X|Z=z}(x)$$
is the $x$ that maximizes the posterior $\mu_{X|Z=z}(x)$. By Bayes' rule,
$$\max_{x \in \mathbb{R}^N} \mu_{X|Z=z}(x) \;\Leftrightarrow\; \max_{x \in \mathbb{R}^N} \mu_{Z|X=x}(z)\, \mu_X(x) \;\Leftrightarrow\; \min_{x \in \mathbb{R}^N} \big\{ \underbrace{-\log \mu_{Z|X=x}(z)}_{\text{data term}} \; \underbrace{-\, \log \mu_X(x)}_{\text{a priori}} \big\},$$
i.e., a problem of the form $\min_{x \in \mathbb{R}^N} f_1(x) + f_2(x)$.
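For instance (a standard specialization, not spelled out on this slide), with the Gaussian data term derived on the next slide and a zero-mean Gaussian prior on $x$ with variance $\tau^2$ (an assumed parameter), the MAP criterion becomes Tikhonov-regularized least squares:
$$f_1(x) = \frac{1}{2\alpha}\|Hx - z\|_2^2 + \mathrm{const}, \qquad f_2(x) = \frac{1}{2\tau^2}\|x\|_2^2 + \mathrm{const},$$
$$\hat{x} \in \operatorname*{Argmin}_{x \in \mathbb{R}^N} \; \|Hx - z\|_2^2 + \lambda \|x\|_2^2, \qquad \lambda = \frac{\alpha}{\tau^2}.$$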

Data term: Gaussian noise
For all $x \in \mathbb{R}^N$, $f_1(x) = -\log \mu_{Z|X=x}(z)$. Let $z = Hx + b$ with $b \sim \mathcal{N}(0, \alpha)$.
Gaussian likelihood:
$$\mu_{Z|X=x}(z) = \prod_{i=1}^{M} \frac{1}{\sqrt{2\pi\alpha}} \exp\left( -\frac{\left((Hx)^{(i)} - z^{(i)}\right)^2}{2\alpha} \right)$$
Data term:
$$f_1(x) = \sum_{i=1}^{M} \frac{1}{2\alpha} \left( (Hx)^{(i)} - z^{(i)} \right)^2$$

Data term: Poisson noise
For all $x \in \mathbb{R}^N$, $f_1(x) = -\log \mu_{Z|X=x}(z)$. Let $z = D_\alpha(Hx)$, where $D_\alpha$ is Poisson noise with parameter $\alpha$.
Poisson likelihood:
$$\mu_{Z|X=x}(z) = \prod_{i=1}^{M} \frac{\exp\left( -\alpha (Hx)^{(i)} \right) \left( \alpha (Hx)^{(i)} \right)^{z^{(i)}}}{z^{(i)}!}$$
Data term:
$$f_1(x) = \sum_{i=1}^{M} \Psi_i\left( (Hx)^{(i)} \right), \qquad (\forall \upsilon \in \mathbb{R}) \quad \Psi_i(\upsilon) = \begin{cases} \alpha\upsilon - z^{(i)} \ln(\alpha\upsilon) & \text{if } z^{(i)} > 0 \text{ and } \upsilon > 0, \\ \alpha\upsilon & \text{if } z^{(i)} = 0 \text{ and } \upsilon \ge 0, \\ +\infty & \text{otherwise.} \end{cases}$$
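A direct NumPy transcription of this data term (a sketch; `hx` denotes the vector $Hx$):

```python
import numpy as np

def poisson_data_term(hx, z, alpha):
    """f1(x) = sum_i Psi_i((Hx)^(i)) for the Poisson likelihood."""
    hx = np.asarray(hx, dtype=float)
    z = np.asarray(z, dtype=float)
    out = np.full(hx.shape, np.inf)          # +inf outside the domain
    pos = (z > 0) & (hx > 0)
    out[pos] = alpha * hx[pos] - z[pos] * np.log(alpha * hx[pos])
    zero = (z == 0) & (hx >= 0)
    out[zero] = alpha * hx[zero]
    return out.sum()
```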

Prior: Tikhonov / TV
For all $x \in \mathbb{R}^N$, $f_2(x) = -\log \mu_X(x)$.
Tikhonov [Tikhonov, 1963]:
$$f_2(x) = \|Lx\|^2 = \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \{l * x\}_{i,j}^2,$$
with $l$ a convolution kernel.
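The slide's kernel $l$ was lost in extraction; as an illustration only, here is the penalty with a 5-point discrete Laplacian substituted for $l$ (an assumption, not necessarily the slide's kernel):

```python
import numpy as np

def tikhonov_penalty(x):
    """f2(x) = ||l * x||^2, with l taken (by assumption) as the 5-point
    discrete Laplacian, applied with periodic boundaries via np.roll."""
    lap = (np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0)
           + np.roll(x, 1, axis=1) + np.roll(x, -1, axis=1) - 4 * x)
    return np.sum(lap ** 2)
```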

Minimization problem
Variational approach:
$$\hat{x} \in \operatorname*{Argmin}_{x \in \mathbb{R}^N} \; L(x) + \lambda\, \Omega(x),$$
where
- $L(x) = f(z, Hx)$: data term,
- $\Omega(x)$: regularization term,
- $\lambda \ge 0$: regularization parameter.

Convex optimization
Let $f \colon \mathbb{R}^N \to \left]-\infty, +\infty\right]$. An optimization problem consists in solving
$$\hat{x} \in \operatorname*{Argmin}_{x \in \mathbb{R}^N} f(x).$$
$\hat{x}$ is a global solution if, for every $x \in \mathbb{R}^N$, $f(\hat{x}) \le f(x)$.
Course objective: build a sequence $(x^{[n]})_{n \in \mathbb{N}}$ that converges to $\hat{x}$.
(Figure: a convex function $f$ with iterates $x^{[0]}, \dots, x^{[n]}, x^{[n+1]}, \dots$ approaching $\hat{x}$.)

Gradient descent
(Illustration: $f$ displayed with its level lines.)
Remarks:
- Fermat's rule: $\hat{x} \in \operatorname*{Argmin}_{x \in \mathbb{R}^N} f(x) \Leftrightarrow \nabla f(\hat{x}) = 0$; this amounts to solving a system of $N$ equations with $N$ unknowns.
- A closed-form expression exists for very few $f$.
- If no closed-form expression is available: iterative method.

Solving the mean-square problem
Find $\hat{x} \in \operatorname*{Argmin}_{x \in \mathbb{R}^N} \|Hx - z\|_2^2$, with $H \in \mathbb{R}^{M \times N}$ and $z \in \mathbb{R}^M$.
Optimality conditions:
$$\nabla f(\hat{x}) = 0 \;\Leftrightarrow\; H^*(H\hat{x} - z) = 0 \;\Leftrightarrow\; \hat{x} = (H^* H)^{-1} H^* z.$$
Difficulty: inverting $H^* H$.

Solving logistic regression
Find $\hat{x} \in \operatorname*{Argmin}_{x \in \mathbb{R}} \log(1 + \exp(-yx))$, with $y \in \mathbb{R}$.
Optimality conditions:
$$\nabla f(\hat{x}) = 0 \;\Leftrightarrow\; \frac{-y \exp(-y\hat{x})}{1 + \exp(-y\hat{x})} = 0.$$
Difficulty: no closed-form expression.

Gradient descent
Iterations: for all $k \in \mathbb{N}$,
$$x^{[k+1]} = x^{[k]} + \gamma^{[k]} d^{[k]},$$
where $d^{[k]} \in \mathbb{R}^N$ is a descent direction and $\gamma^{[k]} > 0$ a step size.
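A minimal sketch (not from the slides) instantiating these iterations for the earlier mean-square problem, with the steepest-descent direction $d^{[k]} = -\nabla f(x^{[k]})$ and a constant step size, assuming $f(x) = \tfrac{1}{2}\|Hx - z\|_2^2$:

```python
import numpy as np

def gradient_descent_ls(H, z, n_iter=500):
    """Minimize f(x) = 0.5 * ||Hx - z||^2 by gradient descent.

    The constant step gamma = 1 / ||H||_2^2 (inverse of the Lipschitz
    constant of the gradient) guarantees convergence for this f.
    """
    x = np.zeros(H.shape[1])
    gamma = 1.0 / np.linalg.norm(H, 2) ** 2
    for _ in range(n_iter):
        d = -H.T @ (H @ x - z)   # descent direction d^[k] = -grad f(x^[k])
        x = x + gamma * d        # x^[k+1] = x^[k] + gamma^[k] d^[k]
    return x
```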
