Mumford-Shah Level-Set Ideas for Problems in Medical Imaging Paris Dauphine 2005


Wolfgang Ring, Marc Droske, Ronny Ramlau — Institut für Mathematik, Universität Graz

General Mumford-Shah like functionals

We consider the problem of determining an unknown function v : D → R^N, which might be singular (discontinuous) across an unknown singularity set Γ ⊂ D, from given data y_d ∈ Y.

Suppose for the moment that the singularity set Γ is known. Then the function v|_{D\Γ} is an element of a Hilbert space X(Γ) which might depend on Γ.

Note: X(Γ) is a space of functions with domain of definition D \ Γ. The norm on X(Γ) does not measure what happens on/across Γ.

The relation between v and the given (exact) data is supposed to be of the form

    y_d = K v    (1)

where K = K(Γ), with K : X(Γ) → Y a continuous (possibly nonlinear) operator.

Simultaneous determination of functional and geometric data

The singularity set Γ and the function v ∈ X(Γ) are to be determined simultaneously as the solution to

    min_{Γ ∈ G, v ∈ X(Γ)} J_MS(v, Γ),

    J_MS(v, Γ) = 1/2 ‖Kv − y_d‖²_Y + ν/2 ‖v‖²_X(Γ) + µ ∫_Γ 1 ds

The three terms are: data fit, regularization without penalization on/across Γ, and control of the size of Γ.
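As a concrete (hypothetical) illustration, the three terms of J_MS can be written down for a 1D signal with K = identity and Γ modeled as a set of jump indices — an assumed discretization for this sketch, not the authors' implementation:

```python
# Discrete 1D sketch of J_MS(v, Γ) = 1/2 ||Kv - y_d||^2
#   + nu/2 |v'|^2 (measured only off Γ) + mu |Γ|,
# with K = identity and Γ a set of jump indices (assumptions for this sketch).

def j_ms(v, y_d, gamma, nu=0.1, mu=1.0):
    """v, y_d: lists of equal length; gamma: set of indices i where a
    jump between v[i] and v[i+1] is allowed (not penalized)."""
    data = 0.5 * sum((vi - yi) ** 2 for vi, yi in zip(v, y_d))
    # Smoothness is measured only across grid edges NOT in Γ.
    smooth = 0.5 * nu * sum(
        (v[i + 1] - v[i]) ** 2 for i in range(len(v) - 1) if i not in gamma
    )
    perimeter = mu * len(gamma)   # |Γ| = number of jump points in 1D
    return data + smooth + perimeter
```

Declaring the jump at the true edge removes the smoothness penalty there at the price µ per jump point, which is exactly the trade-off the functional encodes.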

Elimination of the functional variable — shape optimization approach

Step 1: For fixed Γ ∈ G solve

    min_{v ∈ X(Γ)} J_MS(v, Γ).

Denote the solution by v(Γ).

Step 2: Consider the shape optimization problem

    min_{Γ ∈ G} J_MS(v(Γ), Γ).

Calculate a descent direction F using techniques from shape sensitivity analysis.

Step 3: Update the geometric variable Γ in the chosen descent direction, using a level-set formulation for the propagation of the geometric variable. Solve

    u_t + F |∇u| = 0

for an appropriate time-step. Here Γ = {u = 0}.
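The level-set transport u_t + F|∇u| = 0 of Step 3 can be sketched with a first-order Osher-Sethian upwind scheme in 1D; the grid, speed, and time step below are illustrative choices, and real implementations work in 2D/3D with reinitialization:

```python
# Minimal 1D level-set step for u_t + F|u_x| = 0 (Osher-Sethian upwind).
# F > 0 moves the front Γ = {u = 0} outward; grid and speed are illustrative.

def level_set_step(u, F, dx, dt):
    n = len(u)
    new = u[:]
    for i in range(1, n - 1):
        dm = (u[i] - u[i - 1]) / dx      # backward difference
        dp = (u[i + 1] - u[i]) / dx      # forward difference
        if F >= 0:   # upwind gradient magnitude for outward motion
            grad = (max(dm, 0.0) ** 2 + min(dp, 0.0) ** 2) ** 0.5
        else:
            grad = (min(dm, 0.0) ** 2 + max(dp, 0.0) ** 2) ** 0.5
        new[i] = u[i] - dt * F * grad
    return new

# Signed distance to the interval [0.4, 0.6]: u < 0 inside, Γ = {0.4, 0.6}.
dx = 0.01
u = [max(0.4 - i * dx, i * dx - 0.6) for i in range(101)]
for _ in range(10):
    u = level_set_step(u, F=1.0, dx=dx, dt=0.005)
# After t = 0.05 the zero crossings have moved outward to roughly 0.35 and 0.65.
```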

Step 1: Solution of the optimality system w.r.t. v

The optimal v(Γ) is found by solving the optimality system

    ∂_v J_MS(v(Γ), Γ) = 0.

For linear K:

    K*(K v(Γ) − y_d) + ν v(Γ) = 0

with v(Γ) ∈ X(Γ).
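For linear K this optimality condition is the Tikhonov normal equation (K*K + νI) v = K* y_d. A minimal finite-dimensional sketch with a made-up 2×2 operator, solved by hand:

```python
# Tikhonov normal equation (K^T K + nu I) v = K^T y_d for a hypothetical
# 2x2 operator K, solved directly with Cramer's rule (no libraries).

def solve_tikhonov_2x2(K, y_d, nu):
    # Form A = K^T K + nu I and b = K^T y_d.
    A = [[sum(K[k][i] * K[k][j] for k in range(2)) + (nu if i == j else 0.0)
          for j in range(2)] for i in range(2)]
    b = [sum(K[k][i] * y_d[k] for k in range(2)) for i in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

K = [[2.0, 0.0], [0.0, 1.0]]
v = solve_tikhonov_2x2(K, y_d=[2.0, 1.0], nu=0.0)   # exact data, no reg.
# -> v = [1.0, 1.0]; with nu > 0 the solution is damped toward zero.
```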

Shape sensitivity analysis for reduced functionals

We consider the reduced functional

    Ĵ(Γ) = J_MS(v(Γ), Γ),  where  v(Γ) = argmin_v J_MS(v, Γ).

We get

    dĴ(Γ; F) = ∂_v J_MS(v(Γ), Γ) · v′(Γ; F) + ∂_Γ J_MS(v(Γ), Γ; F).    (2)

Since v(Γ) satisfies the Euler-Lagrange equations for J_MS w.r.t. v, the first term in (2) vanishes.
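The vanishing of the first term in (2) is the envelope-theorem argument; a finite-dimensional toy functional (a placeholder, not J_MS itself) makes it easy to check numerically:

```python
# Finite-dimensional analogue of (2): for J(v, g) with v(g) = argmin_v J,
# the derivative of the reduced functional equals the partial derivative
# in g alone, because d_v J vanishes at the minimizer (toy J, not J_MS).

def J(v, g):
    return 0.5 * (v - g) ** 2 + 0.5 * v ** 2

def v_opt(g):                 # argmin_v J(v, g), here in closed form
    return 0.5 * g

def J_hat(g):                 # reduced functional
    return J(v_opt(g), g)

g, h = 1.3, 1e-6
total = (J_hat(g + h) - J_hat(g - h)) / (2 * h)                 # dJ_hat/dg
partial = (J(v_opt(g), g + h) - J(v_opt(g), g - h)) / (2 * h)   # d_g J, v fixed
# The two agree: the d_v J * v'(g) term drops out at the optimum.
```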

Descent directions and the choice of an appropriate metric

Shape differential and steepest descent direction

Suppose F ∈ Z (Z ... space of perturbations). The shape differential δJ(Γ) ∈ Z′ of J at Γ is the element of Z′ satisfying

    dJ(Γ; F) = ⟨δJ(Γ), F⟩_{Z′,Z}.

(δJ ... covariant representation of the shape derivative)

A steepest descent direction F_sd is an element of Z satisfying

    dJ(Γ; F_sd) = min_{F ∈ Z, ‖F‖_Z ≤ 1} dJ(Γ; F).

(F_sd ... contravariant representation of the shape derivative)
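The dependence of F_sd on the chosen metric can be seen already in R²: minimizing the linear form ⟨G, F⟩ over the unit ball of the norm ‖F‖²_M = ⟨MF, F⟩ gives F_sd ∝ −M⁻¹G. A sketch with a hypothetical diagonal metric (the H¹ case of the next slides replaces M by an elliptic operator):

```python
# Steepest descent is metric-dependent: minimizing <G, F> over ||F||_M <= 1
# yields F_sd ~ -M^{-1} G. Illustrative diagonal metric M in R^2.
import math

def m_norm(F, M):
    return math.sqrt(sum(m * f * f for m, f in zip(M, F)))

def normalize(F, M):
    n = m_norm(F, M)
    return tuple(f / n for f in F)

G = (1.0, 1.0)
M = (1.0, 9.0)                                         # diagonal metric

F_euclid = normalize((-G[0], -G[1]), M)                # -G, unit M-norm
F_metric = normalize((-G[0] / M[0], -G[1] / M[1]), M)  # -M^{-1}G, unit M-norm

slope_euclid = sum(g * f for g, f in zip(G, F_euclid))
slope_metric = sum(g * f for g, f in zip(G, F_metric))
# slope_metric < slope_euclid: -M^{-1}G descends faster in the M-metric.
```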

Choice of the metric

For the update of the geometry, the descent direction must be determined. The descent direction depends on the choice of the space Z and on the respective norm. Different norms lead to different behaviors of the algorithm.

Restrictive condition on the choice of Z: the shape differential δJ(Γ) must be an element of Z′.

A preconditioned gradient method

Suppose that there exists G = G(Γ) ∈ H⁻¹(Γ) such that

    dJ_MS(Γ; F) = ⟨G, F⟩_{H⁻¹(Γ), H¹₀(Γ)}.

To find the steepest descent direction w.r.t. the H¹₀-norm, we have to solve the constrained optimization problem

    min_{F ∈ H¹₀(Γ), ‖F‖_{H¹₀(Γ)} ≤ 1} ⟨G, F⟩_{H⁻¹(Γ), H¹₀(Γ)}.

Introducing the Lagrange functional

    L(F, λ) = ⟨G, F⟩_Γ + λ ( ∫_Γ ( |∇_Γ F|² + F² ) ds − 1 )

we get the optimality condition

    F = −1/(2λ) (−Δ_Γ + id)⁻¹ G.    (3)

The gradient direction G is preconditioned by an inverse elliptic operator. The multiplier λ is chosen such that the H¹₀-norm of F is one.
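The preconditioning step (3) amounts to one elliptic solve per descent iteration. A sketch on a closed curve discretized as a periodic 1D grid — size and step are illustrative, and the −1/(2λ) scaling is omitted — showing that the inverse elliptic operator damps the oscillatory part of G:

```python
# Sketch of the preconditioning solve (-Lap_Gamma + id) F = G on a closed
# curve, discretized as a periodic 1D grid (illustrative size; the -1/(2*lam)
# scaling from (3) is omitted).
import math

def solve_dense(A, b):
    """Gaussian elimination with partial pivoting (tiny systems only)."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

n, h = 16, 1.0
# (-Laplacian + id) with periodic wrap-around in the curve parameter.
A = [[(2.0 / h**2 + 1.0) if i == j
      else (-1.0 / h**2 if (i - j) % n in (1, n - 1) else 0.0)
      for j in range(n)] for i in range(n)]
G = [math.cos(2 * math.pi * i / n) + math.cos(2 * math.pi * 4 * i / n)
     for i in range(n)]                    # low + high frequency gradient
F = solve_dense(A, G)
# The high-frequency component of G is damped much more strongly than the
# low-frequency one, giving a smoother descent velocity for the level set.
```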

A piecewise constant Mumford-Shah approach for x-ray tomography (with Ronny Ramlau)

Objectives:

Simultaneously reconstruct the density distribution and the boundaries of regions with approximately homogeneous densities from Radon-transform data.

- Segment directly from the data, estimate objects
- Reduce ill-posedness by reducing the dimension of the unknown
- Inversion of noisy data
- Comparable in quality and speed to established methods

The functional and the resulting geometric flow

We fix the space of admissible densities as

    X(Γ) = PC(D \ Γ) = { Σ_{i=1}^{n(Γ)} f_i χ_{Ω_i^Γ} : f_i ∈ R } ⊂ L²(D),    (4)

where Ω_i^Γ ... connected components of D \ Γ.

The Radon Mumford-Shah functional is given as

    J(f, Γ) = ‖Rf − g_d‖²_{L²(R×S¹)} + α |Γ|.    (5)

Solving the Euler-Lagrange system (Step 1):

    ∂_f J(f(Γ), Γ) · h = ⟨R f(Γ) − g_d, R h⟩_{L²(R×S¹)} = 0  for all h.    (6)

This is equivalent to

    A f(Γ) = g    (7)

where A = (a_ij) and g = (g_i)^t, with

    a_ij = 2 ∫_{x ∈ ∂Ω_i^Γ} ∫_{y ∈ ∂Ω_j^Γ} |y − x| ⟨n_i(x), n_j(y)⟩ ds(y) ds(x),    (8)

    g_i = ∫_{Ω_i^Γ} (R* g_d)(x) dx = ∫_{Ω_i^Γ} ∫_{ω ∈ S¹} g_d(⟨ω, x⟩, ω) dω dx.    (9)
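The structure of (7)-(9) — assemble a small Gram system over the region indicators and solve for the densities — can be sketched with a simple local-averaging operator standing in for the Radon transform. That substitution is an assumption made purely so the sketch is self-contained; only the algebraic structure is the point:

```python
# Step 1 in the piecewise-constant setting reduces to the small system
# A f = g with a_ij = <K chi_i, K chi_j> and g_i = <g_d, K chi_i>.
# A 3-point moving average K_op stands in for the Radon transform R
# (an assumption; the Gram-system structure is identical).

def K_op(x):
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

# Two regions on a 1D "domain" of 8 cells: indices 0-3 and 4-7.
chi = [[1.0] * 4 + [0.0] * 4, [0.0] * 4 + [1.0] * 4]
f_true = [2.0, 5.0]
g_d = K_op([f_true[0] * c0 + f_true[1] * c1 for c0, c1 in zip(*chi)])

Kchi = [K_op(c) for c in chi]
A = [[dot(Kchi[i], Kchi[j]) for j in range(2)] for i in range(2)]
g = [dot(g_d, Kchi[i]) for i in range(2)]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
f = [(A[1][1] * g[0] - A[0][1] * g[1]) / det,
     (A[0][0] * g[1] - A[1][0] * g[0]) / det]
# f recovers the region densities (2.0, 5.0) from the transformed data.
```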

The shape derivative (Step 2):

    dJ(Γ; F) = 4 Σ_k ∫_{x ∈ Γ_k} ( Σ_{p ∈ d(k)} s_p f_p ) Σ_l ( Σ_{q ∈ d(l)} s_q f_q ) ∫_{y ∈ Γ_l} ⟨ (y − x)/|y − x| , n_k(y) ⟩ ds(y) F(x) ds(x)
        − 2 Σ_k ∫_{x ∈ Γ_k} ( Σ_{p ∈ d(k)} s_p f_p ) ∫_{ω ∈ S¹} g_d(⟨ω, x⟩, ω) dω F(x) ds(x)
        + α Σ_k ∫_{x ∈ Γ_k} κ(x) F(x) ds(x)

f_p ... density values, s_p = ±1, κ ... curvature.

Numerical observations

- Mild degree of ill-posedness
- Benefit of H¹-preconditioning: reduction of iteration numbers
- Works for real-data simulations

[Figure/table: density reconstructions for several (α, ν) and iteration counts for ν = 10⁻², 10⁻¹, 10⁰, 10¹, 10², 10³, 0 at two values of α; the numerical values did not survive transcription.]

A Mumford-Shah like approach for simultaneous registration and segmentation of multimodal data sets (with Marc Droske)

Objectives:

Simultaneously segment two images and match similar structures onto each other.

- Work with noisy multimodal images
- Transfer features which are strong in one image onto the other, where the feature is weak
- Work with real datasets

The functional:

    E_MS(Γ, Φ, R, T) = 1/2 ∫_D |R − R_0|² dx + µ/2 ∫_{D\Γ} |∇R|² dx
        + 1/2 ∫_D |T − T_0|² dx + µ/2 ∫_{D\Γ_Φ} |∇T|² dx + α H^{N−1}(Γ) + ν E_reg(Φ)

R, T ... reference and template images, Φ ... transformation between the images, Γ ... edge-set in the reference image, Γ_Φ = Φ(Γ) ... edge-set in the template image, E_reg ... regularization term penalizing deviation from a rigid body motion.

Algorithm:

- Minimize w.r.t. T and R for fixed Γ, Φ. Consider the reduced functional

      Ê(Γ, Φ) = E(Γ, Φ, R(Γ), T(Γ, Φ)).

- Calculate descent directions with respect to Γ and Φ. Use composite finite elements and multigrid solvers.
- Update Γ and Φ. Use the level-set equation for the update of Γ.
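The algorithm alternates exact inner minimization (in the image variables) with descent steps in the remaining variables. The pattern can be sketched on a toy scalar energy — a placeholder, not E_MS:

```python
# Alternating scheme: exact minimization in the "image" variable v,
# gradient descent steps in the "geometric" variable c, on the toy energy
# E(v, c) = (v - c)^2 + (c - 3)^2 + v^2 (placeholder, not E_MS).

def solve_v(c):              # exact inner minimization: dE/dv = 0
    return c / 2.0

def step_c(v, c, tau=0.2):   # gradient step in the outer variable
    grad = 2.0 * (c - v) + 2.0 * (c - 3.0)
    return c - tau * grad

c = 0.0
for _ in range(100):
    v = solve_v(c)           # eliminate the functional variable ...
    c = step_c(v, c)         # ... then descend in the geometric one
# Converges to the joint minimizer (v, c) = (1, 2) of E.
```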

An application from dentistry (image slides).

Further applications

- Optical flow estimation
- Inversion of SPECT data
- Occlusions, etc.


Statistical Geometry Processing Winter Semester 2011/2012 Statistical Geometry Processing Winter Semester 2011/2012 Linear Algebra, Function Spaces & Inverse Problems Vector and Function Spaces 3 Vectors vectors are arrows in space classically: 2 or 3 dim. Euclidian

More information

Iterative Methods for Smooth Objective Functions

Iterative Methods for Smooth Objective Functions Optimization Iterative Methods for Smooth Objective Functions Quadratic Objective Functions Stationary Iterative Methods (first/second order) Steepest Descent Method Landweber/Projected Landweber Methods

More information

Adjoint Based Optimization of PDE Systems with Alternative Gradients

Adjoint Based Optimization of PDE Systems with Alternative Gradients Adjoint Based Optimization of PDE Systems with Alternative Gradients Bartosz Protas a a Department of Mathematics & Statistics, McMaster University, Hamilton, Ontario, Canada Abstract In this work we investigate

More information

Recovery of anisotropic metrics from travel times

Recovery of anisotropic metrics from travel times Purdue University The Lens Rigidity and the Boundary Rigidity Problems Let M be a bounded domain with boundary. Let g be a Riemannian metric on M. Define the scattering relation σ and the length (travel

More information

Preconditioning. Noisy, Ill-Conditioned Linear Systems

Preconditioning. Noisy, Ill-Conditioned Linear Systems Preconditioning Noisy, Ill-Conditioned Linear Systems James G. Nagy Emory University Atlanta, GA Outline 1. The Basic Problem 2. Regularization / Iterative Methods 3. Preconditioning 4. Example: Image

More information

arxiv: v1 [math.na] 30 Jan 2018

arxiv: v1 [math.na] 30 Jan 2018 Modern Regularization Methods for Inverse Problems Martin Benning and Martin Burger December 18, 2017 arxiv:1801.09922v1 [math.na] 30 Jan 2018 Abstract Regularization methods are a key tool in the solution

More information

Recent progress on the explicit inversion of geodesic X-ray transforms

Recent progress on the explicit inversion of geodesic X-ray transforms Recent progress on the explicit inversion of geodesic X-ray transforms François Monard Department of Mathematics, University of Washington. Geometric Analysis and PDE seminar University of Cambridge, May

More information

Sample ECE275A Midterm Exam Questions

Sample ECE275A Midterm Exam Questions Sample ECE275A Midterm Exam Questions The questions given below are actual problems taken from exams given in in the past few years. Solutions to these problems will NOT be provided. These problems and

More information

Statistical and Computational Inverse Problems with Applications Part 2: Introduction to inverse problems and example applications

Statistical and Computational Inverse Problems with Applications Part 2: Introduction to inverse problems and example applications Statistical and Computational Inverse Problems with Applications Part 2: Introduction to inverse problems and example applications Aku Seppänen Inverse Problems Group Department of Applied Physics University

More information

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search

More information

Tikhonov Regularization in General Form 8.1

Tikhonov Regularization in General Form 8.1 Tikhonov Regularization in General Form 8.1 To introduce a more general formulation, let us return to the continuous formulation of the first-kind Fredholm integral equation. In this setting, the residual

More information

Penalty and Barrier Methods. So we again build on our unconstrained algorithms, but in a different way.

Penalty and Barrier Methods. So we again build on our unconstrained algorithms, but in a different way. AMSC 607 / CMSC 878o Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 3: Penalty and Barrier Methods Dianne P. O Leary c 2008 Reference: N&S Chapter 16 Penalty and Barrier

More information

PDEs in Image Processing, Tutorials

PDEs in Image Processing, Tutorials PDEs in Image Processing, Tutorials Markus Grasmair Vienna, Winter Term 2010 2011 Direct Methods Let X be a topological space and R: X R {+ } some functional. following definitions: The mapping R is lower

More information

nonrobust estimation The n measurement vectors taken together give the vector X R N. The unknown parameter vector is P R M.

nonrobust estimation The n measurement vectors taken together give the vector X R N. The unknown parameter vector is P R M. Introduction to nonlinear LS estimation R. I. Hartley and A. Zisserman: Multiple View Geometry in Computer Vision. Cambridge University Press, 2ed., 2004. After Chapter 5 and Appendix 6. We will use x

More information

Review for Exam 2 Ben Wang and Mark Styczynski

Review for Exam 2 Ben Wang and Mark Styczynski Review for Exam Ben Wang and Mark Styczynski This is a rough approximation of what we went over in the review session. This is actually more detailed in portions than what we went over. Also, please note

More information

AppliedMathematics. Iterative Methods for Photoacoustic Tomography with Variable Sound Speed. Markus Haltmeier, Linh V. Nguyen

AppliedMathematics. Iterative Methods for Photoacoustic Tomography with Variable Sound Speed. Markus Haltmeier, Linh V. Nguyen Nr. 29 7. December 26 Leopold-Franzens-Universität Innsbruck Preprint-Series: Department of Mathematics - Applied Mathematics Iterative Methods for Photoacoustic Tomography with Variable Sound Speed Markus

More information

Chapter 3 Numerical Methods

Chapter 3 Numerical Methods Chapter 3 Numerical Methods Part 2 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization 1 Outline 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization Summary 2 Outline 3.2

More information

Numerical methods for computing vortex states in rotating Bose-Einstein condensates. Ionut Danaila

Numerical methods for computing vortex states in rotating Bose-Einstein condensates. Ionut Danaila Numerical methods for computing vortex states in rotating Bose-Einstein condensates Ionut Danaila Laboratoire de mathématiques Raphaël Salem Université de Rouen www.univ-rouen.fr/lmrs/persopage/danaila

More information

Regularization in Banach Space

Regularization in Banach Space Regularization in Banach Space Barbara Kaltenbacher, Alpen-Adria-Universität Klagenfurt joint work with Uno Hämarik, University of Tartu Bernd Hofmann, Technical University of Chemnitz Urve Kangro, University

More information

Sparse Recovery in Inverse Problems

Sparse Recovery in Inverse Problems Radon Series Comp. Appl. Math XX, 1 63 de Gruyter 20YY Sparse Recovery in Inverse Problems Ronny Ramlau and Gerd Teschke Abstract. Within this chapter we present recent results on sparse recovery algorithms

More information

Spring 2014: Computational and Variational Methods for Inverse Problems CSE 397/GEO 391/ME 397/ORI 397 Assignment 4 (due 14 April 2014)

Spring 2014: Computational and Variational Methods for Inverse Problems CSE 397/GEO 391/ME 397/ORI 397 Assignment 4 (due 14 April 2014) Spring 2014: Computational and Variational Methods for Inverse Problems CSE 397/GEO 391/ME 397/ORI 397 Assignment 4 (due 14 April 2014) The first problem in this assignment is a paper-and-pencil exercise

More information

Active Contours Snakes and Level Set Method

Active Contours Snakes and Level Set Method Active Contours Snakes and Level Set Method Niels Chr. Overgaard 2010-09-21 N. Chr. Overgaard Lecture 7 2010-09-21 1 / 26 Outline What is an mage? What is an Active Contour? Deforming Force: The Edge Map

More information

Geometric Gyrokinetic Theory and its Applications to Large-Scale Simulations of Magnetized Plasmas

Geometric Gyrokinetic Theory and its Applications to Large-Scale Simulations of Magnetized Plasmas Geometric Gyrokinetic Theory and its Applications to Large-Scale Simulations of Magnetized Plasmas Hong Qin Princeton Plasma Physics Laboratory, Princeton University CEA-EDF-INRIA School -- Numerical models

More information

Week 6: Differential geometry I

Week 6: Differential geometry I Week 6: Differential geometry I Tensor algebra Covariant and contravariant tensors Consider two n dimensional coordinate systems x and x and assume that we can express the x i as functions of the x i,

More information

Numerical Solution of a Coefficient Identification Problem in the Poisson equation

Numerical Solution of a Coefficient Identification Problem in the Poisson equation Swiss Federal Institute of Technology Zurich Seminar for Applied Mathematics Department of Mathematics Bachelor Thesis Spring semester 2014 Thomas Haener Numerical Solution of a Coefficient Identification

More information

INTRODUCTION TO FINITE ELEMENT METHODS ON ELLIPTIC EQUATIONS LONG CHEN

INTRODUCTION TO FINITE ELEMENT METHODS ON ELLIPTIC EQUATIONS LONG CHEN INTROUCTION TO FINITE ELEMENT METHOS ON ELLIPTIC EQUATIONS LONG CHEN CONTENTS 1. Poisson Equation 1 2. Outline of Topics 3 2.1. Finite ifference Method 3 2.2. Finite Element Method 3 2.3. Finite Volume

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1394 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1394 1 / 34 Table

More information

A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator

A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator Ismael Rodrigo Bleyer Prof. Dr. Ronny Ramlau Johannes Kepler Universität - Linz Cambridge - July 28, 211. Doctoral

More information

A GLOBALLY CONVERGENT NUMERICAL METHOD FOR SOME COEFFICIENT INVERSE PROBLEMS WITH RESULTING SECOND ORDER ELLIPTIC EQUATIONS

A GLOBALLY CONVERGENT NUMERICAL METHOD FOR SOME COEFFICIENT INVERSE PROBLEMS WITH RESULTING SECOND ORDER ELLIPTIC EQUATIONS A GLOBALLY CONVERGENT NUMERICAL METHOD FOR SOME COEFFICIENT INVERSE PROBLEMS WITH RESULTING SECOND ORDER ELLIPTIC EQUATIONS LARISA BEILINA AND MICHAEL V. KLIBANOV Abstract. A new globally convergent numerical

More information

Numerical Optimization Professor Horst Cerjak, Horst Bischof, Thomas Pock Mat Vis-Gra SS09

Numerical Optimization Professor Horst Cerjak, Horst Bischof, Thomas Pock Mat Vis-Gra SS09 Numerical Optimization 1 Working Horse in Computer Vision Variational Methods Shape Analysis Machine Learning Markov Random Fields Geometry Common denominator: optimization problems 2 Overview of Methods

More information

Newton-Multigrid Least-Squares FEM for S-V-P Formulation of the Navier-Stokes Equations

Newton-Multigrid Least-Squares FEM for S-V-P Formulation of the Navier-Stokes Equations Newton-Multigrid Least-Squares FEM for S-V-P Formulation of the Navier-Stokes Equations A. Ouazzi, M. Nickaeen, S. Turek, and M. Waseem Institut für Angewandte Mathematik, LSIII, TU Dortmund, Vogelpothsweg

More information

arxiv: v1 [math.oc] 19 Mar 2010

arxiv: v1 [math.oc] 19 Mar 2010 SHAPE SPLINES AND STOCHASTIC SHAPE EVOLUTIONS: A SECOND ORDER POINT OF VIEW ALAIN TROUVÉ AND FRANÇOIS-XAVIER VIALARD. arxiv:13.3895v1 [math.oc] 19 Mar 21 Contents 1. Introduction 1 2. Hamiltonian equations

More information

Introduction into Implementation of Optimization problems with PDEs: Sheet 3

Introduction into Implementation of Optimization problems with PDEs: Sheet 3 Technische Universität München Center for Mathematical Sciences, M17 Lucas Bonifacius, Korbinian Singhammer www-m17.ma.tum.de/lehrstuhl/lehresose16labcourseoptpde Summer 216 Introduction into Implementation

More information

u xx + u yy = 0. (5.1)

u xx + u yy = 0. (5.1) Chapter 5 Laplace Equation The following equation is called Laplace equation in two independent variables x, y: The non-homogeneous problem u xx + u yy =. (5.1) u xx + u yy = F, (5.) where F is a function

More information

Iterative Methods for Inverse & Ill-posed Problems

Iterative Methods for Inverse & Ill-posed Problems Iterative Methods for Inverse & Ill-posed Problems Gerd Teschke Konrad Zuse Institute Research Group: Inverse Problems in Science and Technology http://www.zib.de/ag InverseProblems ESI, Wien, December

More information

A THEORETICAL INTRODUCTION TO NUMERICAL ANALYSIS

A THEORETICAL INTRODUCTION TO NUMERICAL ANALYSIS A THEORETICAL INTRODUCTION TO NUMERICAL ANALYSIS Victor S. Ryaben'kii Semyon V. Tsynkov Chapman &. Hall/CRC Taylor & Francis Group Boca Raton London New York Chapman & Hall/CRC is an imprint of the Taylor

More information