Iterative Methods for Inverse & Ill-posed Problems
1 Iterative Methods for Inverse & Ill-posed Problems. Gerd Teschke, Konrad Zuse Institute, Research Group: Inverse Problems in Science and Technology. Inverse Problems, ESI, Wien, December 2006
2 Outline 1 Scope of the problem 2 Linear Problems & Sparsity 3 Nonlinear Inverse Problems & Sparsity Setting Iteration Minimization Convergence Regularization Examples 4 Adaptivity for linear problems
3 Scope of the problem Scope of the Problem
4 Scope of the problem Computation of an approximation to a solution of $T(x) = y$, where $T : X \to Y$ and $X, Y$ are Hilbert spaces. In many relevant cases only noisy data $y^\delta$ with $\|y^\delta - y\| \le \delta$ are available
5 Scope of the problem There is a very wide range of possible applications Image deblurring + decomposition (Daubechies,T. 05) Audio coding (T. 06) Sparseness (acceleration) of support vector machines (Rätsch,T. 05,06) SPECT (Ramlau,T. 06) Astrophysical data processing (Anthoine 05 + DeMol 04) Geophysics: seismic wave decomposition (Holschneider 06) Meteorological radar data processing (Lehmann,T. 06)...
6 Scope of the problem
7 Scope of the problem
8 Scope of the problem
9 Scope of the problem Mathematical description: $\operatorname{div}(\sigma \nabla \Phi) = \operatorname{div} j$ in $\Omega$, $\langle \sigma \nabla \Phi, n \rangle = 0$ at $\Gamma = \partial\Omega$. Inverse problem: $R : (\sigma, j) \mapsto \Phi|_{\partial\Omega}$. Variational formulation: $J(\sigma, j) = \|R(\sigma, j) - \Phi^\delta|_{\partial\Omega}\|^2 + \alpha \psi(\sigma, \nabla\sigma, j)$
10 Scope of the problem
11 Scope of the problem Linear case: ill-posed integral equation of the first kind, $(Rf)(s, \omega) = \int_{\mathbb{R}} f(s\omega + t\omega^\perp)\,dt = \log\big(I_0(s,\omega)/I_L(s,\omega)\big)$. Nonlinear case (SPECT): $R[f, \mu](s, \omega) = \int_{\mathbb{R}} f(s\omega + t\omega^\perp)\, e^{-\int_t^\infty \mu(s\omega + \tau\omega^\perp)\,d\tau}\,dt$. Consider $J(f, \mu) = \|y^\delta - R[f, \mu]\|^2 + \alpha \psi(f, \mu)$
12 Scope of the problem
13 Scope of the problem Sparse approximation of a set of vectors $x_i$: reduced SVM, sparse SVM. Sparsifying both simultaneously: $\|\Psi_1(\alpha, x) - \Psi(\beta, z)\|^2 + \operatorname{Sparsity}(\beta) + \operatorname{Sparsity}(z)$, where $\Psi_1(\alpha, x) = \sum_{i=1}^{N_x} \alpha_i \phi(x_i)$
14 Scope of the problem
15 Linear Inverse Problems & Sparsity Linear Problems & Sparsity
16 Linear Inverse Problems & Sparsity Signal representation: $v$ may be represented by a preassigned basis. But sometimes this is too restrictive; way out: frame. But sometimes still too restrictive; way out: dictionary of frames, ...
17 Linear Inverse Problems & Sparsity Sparsity Constraints Certain physical constraints, e.g. the well-known energy norm: $\|y - AFg\|^2 + \alpha \|g\|^2$. Promotion of sparsity ($0 < p < 2$): $\|y - AFg\|^2 + \alpha \|g\|^p_{\ell_p}$. More general: $\|y - Av\|^2 + \alpha \sup_{h \in C} \langle v, h \rangle$
18 Linear Inverse Problems & Sparsity Sparsity Constraints
19 Linear Inverse Problems & Sparsity Sparsity Constraints and Iterative Process Consider for instance: $\|y - AFg\|^2 + \alpha \|g\|_{\ell_1}$. Problem: the term $\|AFg\|^2$ induces a nonlinear coupling of the coefficients. Way out: $\|y - AFg\|^2 + \alpha \|g\|_{\ell_1} + C\|g - a\|^2 - \|AF(g - a)\|^2 = C\|g\|^2 - 2\langle g, F^*A^*y + Ca - F^*A^*AFa \rangle + \alpha \|g\|_{\ell_1} + \|y\|^2 + C\|a\|^2 - \|AFa\|^2$
20 Linear Inverse Problems & Sparsity Sparsity Constraints and Iterative Process Define $J(g, a) := \|y - AFg\|^2 + \alpha\|g\|_{\ell_1} + C\|g - a\|^2 - \|AF(g - a)\|^2$. Create an iteration process by setting $a = g^0$ and $g^{m+1} = \arg\min_g J(g, g^m)$
21 Linear Inverse Problems & Sparsity Sparsity Constraints and Minimization Reduces to componentwise variational equations of the form $a = b - \alpha \operatorname{sign}(a)$, solved by the soft-thresholding operator $a = S_\alpha(b) = \begin{cases} b - \alpha, & b \ge \alpha \\ 0, & -\alpha < b < \alpha \\ b + \alpha, & b \le -\alpha \end{cases}$. In its full glory: $g^{m+1} = S_\alpha(F^*A^*y + g^m - F^*A^*AFg^m)$
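The thresholded Landweber iteration above is straightforward to prototype. The following is a minimal sketch, assuming matrix representations of $A$ and $F$, the normalization $\|AF\| < 1$ (i.e. $C = 1$), and a componentwise soft-thresholding step; the toy data at the end are purely illustrative.

```python
import numpy as np

def soft_threshold(b, alpha):
    """Componentwise soft-thresholding S_alpha."""
    return np.sign(b) * np.maximum(np.abs(b) - alpha, 0.0)

def thresholded_landweber(A, F, y, alpha, n_iter=200):
    """Iterate g^{m+1} = S_alpha(F^* A^* y + g^m - F^* A^* A F g^m).

    Assumes ||A F|| < 1 (rescale A or y beforehand if necessary)."""
    K = A @ F                       # composed operator acting on the coefficients g
    Kty = K.T @ y
    g = np.zeros(K.shape[1])
    for _ in range(n_iter):
        g = soft_threshold(Kty + g - K.T @ (K @ g), alpha)
    return g

# toy usage: random operator, sparse coefficient vector, noiseless data
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 30)); F = rng.standard_normal((30, 30))
A /= 1.1 * np.linalg.norm(A @ F, 2)            # enforce ||A F|| < 1
g_true = np.zeros(30); g_true[[3, 17]] = [1.0, -2.0]
y = A @ F @ g_true
print(np.round(thresholded_landweber(A, F, y, alpha=0.01), 2))
```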
22 Linear Inverse Problems & Sparsity Provided analysis Daubechies+Defrise+DeMol 2003: minimization by Gaussian surrogate functionals, iterative Landweber approach, proof of norm convergence and regularization properties. General case: $\|y - Av\|^2 + 2\alpha \sup_{h \in C} \langle v, h \rangle$; minimization, norm convergence, (regularization theory): Daubechies + T. + Vese 2006
23 Linear Inverse Problems & Sparsity Well-posed case Theorem (Daubechies/T./Vese 06) Suppose some technical conditions on $C$, and that $A^*A$ has a bounded inverse on its range. If we define $T := (A^*A)^{-1/2}$ and, for an arbitrary closed convex set $K$, $S_K := \operatorname{Id} - P_K$, where $P_K$ is the (nonlinear) projection onto $K$, then the minimizing $v$ is given by $\bar v = T\, S_{\alpha TC}\{T A^* y\}$.
24 Linear Inverse Problems & Sparsity Ill-posed case, convergence Gaussian surrogate approach yields: $v^{n+1} := (\operatorname{Id} - P_{\alpha C})(v^n + A^* y - A^* A v^n)$. By the same techniques as in Daubechies/Defrise/DeMol: $v^n \rightharpoonup \bar v$ weakly. Norm convergence requires special knowledge of $C$!!!
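The projected Landweber step is easy to prototype once a routine for the projection $P_{\alpha C}$ is at hand. Below is a minimal sketch (not the authors' implementation): the projection is passed in as a function, and as one illustrative choice of $C$ we take the Euclidean unit ball, for which $P_{\alpha C}$ is a simple rescaling and the penalty $2\alpha\sup_{h\in C}\langle v, h\rangle$ reduces to $2\alpha\|v\|$. The operator is assumed rescaled so that $\|A\| < 1$.

```python
import numpy as np

def project_l2_ball(w, radius):
    """Orthogonal projection onto the Euclidean ball of the given radius."""
    nrm = np.linalg.norm(w)
    return w if nrm <= radius else (radius / nrm) * w

def projected_landweber(A, y, project_alphaC, n_iter=500):
    """v^{n+1} = (Id - P_{alpha C})(v^n + A^*(y - A v^n)).

    project_alphaC: callable applying the projection onto the scaled set alpha*C.
    Assumes ||A|| < 1."""
    v = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = v + A.T @ (y - A @ v)        # Landweber (gradient) step
        v = w - project_alphaC(w)        # apply (Id - P_{alpha C})
    return v

# illustrative usage with C = Euclidean unit ball (penalty 2*alpha*||v||)
alpha = 0.1
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 15)); A /= 1.1 * np.linalg.norm(A, 2)
y = A @ rng.standard_normal(15)
v_hat = projected_landweber(A, y, lambda w: project_l2_ball(w, alpha))
```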
25 Linear Inverse Problems & Sparsity Ill-posed case, convergence Theorem (Daubechies/T./Vese 06) Suppose $v^n - \bar v \rightharpoonup 0$ weakly and $\|P_{\alpha C}(g) - P_{\alpha C}(g + v^n - \bar v)\| \to 0$. Moreover, assume that $v^n - \bar v$ is orthogonal to $g - P_C(g)$. If for some sequence $\gamma_n$ (with $\gamma_n \to \infty$) the convex set $C$ satisfies $\gamma_n (v^n - \bar v) \in C$, then $\|v^n - \bar v\| \to 0$.
26 Linear Inverse Problems & Sparsity Left: Shepp-Logan Phantom (64x64), right: FBP (0:10:180)
27 Linear Inverse Problems & Sparsity (... here is the movie theater)
28 Linear Inverse Problems & Sparsity
29 Nonlinear Inverse Problems & Sparsity Nonlinear Problems & Sparsity
30 Nonlinear Inverse Problems & Sparsity Setting The setting Nonlinear problem: $T : X \to Y$, $T(x) = y$. Variational form (vector valued): $J_\alpha(g_1, \ldots, g_n) = \|y^\delta - T(g_1, \ldots, g_n)\|^2 + 2\alpha\,\Psi(g_1, \ldots, g_n)$
31 Nonlinear Inverse Problems & Sparsity Setting The setting Requirements on $T$ (essentially): $T$ strongly continuous; $T'$ Lipschitz continuous with constant $L$. Further requirements: $\|g\|_{(\ell_2)^n} \le c\,\Psi(g)$, ... technical conditions
32 Nonlinear Inverse Problems & Sparsity Setting Linear mixing: $T(Kg) = T\Big(\big\{\sum_{l=1}^{r} A_{l,i} F g_l\big\}_{i=1,\ldots,n}\Big)$. Simple cases: nonlinear scalar valued, $T(Kg) = T(K(g_1, \ldots, g_n)) = T\big(\sum_{i=1}^{n} F g_i\big)$; purely linear, $T$ some linear and bounded operator.
33 Nonlinear Inverse Problems & Sparsity Setting Non-coupled sparsity (T. 05): $\Psi(g) = (\Psi_1(g_1), \ldots, \Psi_n(g_n))$. Joint sparsity (linear case: Fornasier/Rauhut 06): $\Psi(u) = \sum_{\lambda\in\Lambda} \omega_\lambda \|u_\lambda\|_q$. Complementary sparsity, ...
34 Nonlinear Inverse Problems & Sparsity Iteration Basic Idea For $g \in (\ell_2)^n$ and some auxiliary $a \in (\ell_2)^n$, consider $J^s_\alpha(g, a) := J_\alpha(g) + C\|g - a\|^2_{(\ell_2)^n} - \|T(g) - T(a)\|^2_Y$. Create an iteration process: 1. Pick $g^0 \in (\ell_2)^n$ and some proper constant $C > 0$. 2. Derive a sequence $\{g^k\}_{k=0,1,\ldots}$ by the iteration $g^{k+1} = \arg\min_{g \in (\ell_2)^n} J^s_\alpha(g, g^k)$, $k = 0, 1, 2, \ldots$
35 Nonlinear Inverse Problems & Sparsity Iteration Proper Surrogate Functionals Given a multi-parameter $\alpha \in \mathbb{R}_+$ and $g^0 \in (\ell_2)^n$, define a ball $K_r := \{g \in (\ell_2)^n : \Psi(g) \le r\}$ with radius $r = J_\alpha(g^0)/(2\alpha)$. Define $C := 2 \max\Big\{\big(\sup_{g \in K_r} \|T'(g)\|\big)^2,\; L \sqrt{J_\alpha(g^0)}\Big\}$
36 Nonlinear Inverse Problems & Sparsity Iteration Proper Surrogate Functionals Properties: $C\|g - g^0\|^2_{(\ell_2)^n} - \|T(g) - T(g^0)\|^2_Y \ge 0$; all $J^s_\alpha(g, g^k)$ are bounded from below; $g^k \in K_r$; all $J_\alpha(g^k)$ and $J^s_\alpha(g^{k+1}, g^k)$ are non-increasing
37 Nonlinear Inverse Problems & Sparsity Minimization Necessary Condition The necessary condition for a minimum of $J^s_\alpha(g, a)$ is given by $0 \in -T'(g)^*(y^\delta - T(a)) + Cg - Ca + \alpha\, \partial\Psi(g)$
38 Nonlinear Inverse Problems & Sparsity Minimization Recasting the Necessary Condition Let $M(g, a) := T'(g)^*(y^\delta - T(a))/C + a$; then the necessary condition can be cast as the fixed point problem $g = \frac{\alpha}{C}(I - P_C)\Big(\frac{C}{\alpha} M(g, a)\Big)$, where $P_C$ is the orthogonal projection onto the convex set $C$.
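To make the structure of the scheme concrete, here is a minimal sketch of the outer surrogate iteration with this inner fixed-point loop, specialized to a plain $\ell_1$ penalty, for which $\frac{\alpha}{C}(I - P_C)(\frac{C}{\alpha}\,\cdot\,)$ reduces to soft-thresholding at level $\alpha/C$. The forward map $T(g) = B\tanh(Fg)$ and the matrices $B$, $F$ are hypothetical stand-ins for a differentiable nonlinear operator; this illustrates the iteration, it is not the implementation used in the talk.

```python
import numpy as np

# Hypothetical smooth nonlinear forward map T(g) = B tanh(F g); purely illustrative.
def T(g, B, F):
    return B @ np.tanh(F @ g)

def T_adjoint_deriv(g, w, B, F):
    """Apply T'(g)^* to w for the toy map above."""
    return F.T @ ((1.0 - np.tanh(F @ g) ** 2) * (B.T @ w))

def soft_threshold(b, tau):
    return np.sign(b) * np.maximum(np.abs(b) - tau, 0.0)

def surrogate_iteration(y_delta, B, F, alpha, C, n_outer=50, n_inner=20):
    """Outer loop: g^{k+1} = argmin_g J^s_alpha(g, g^k).
    Inner loop: fixed-point iteration for the necessary condition; for an l1
    penalty this is soft-thresholding of M(g, a) = T'(g)^*(y^delta - T(a))/C + a
    at level alpha/C."""
    g = np.zeros(F.shape[1])
    for _ in range(n_outer):
        a = g.copy()                       # a = g^k
        residual = y_delta - T(a, B, F)
        for _ in range(n_inner):           # inner fixed-point loop
            M = T_adjoint_deriv(g, residual, B, F) / C + a
            g = soft_threshold(M, alpha / C)
    return g
```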
39 Nonlinear Inverse Problems & Sparsity Minimization Fixed Point Iteration with Projection Lemma The fixed point iteration converges towards the minimizer of $J^s_\alpha(g, g^k)$. Lemma If $\sup_g \|T'(g)\|^2 < C$, then $J^s_\alpha(g, g^k)$ is strictly convex.
40 Nonlinear Inverse Problems & Sparsity Minimization Joint Sparsity Measure: $\Psi(u) = \sum_{\lambda\in\Lambda} \omega_\lambda \|u_\lambda\|_q$. Fixed point iteration: $g^{l+1} = \frac{\alpha}{C}(I - P_C)\Big(\frac{C}{\alpha} M(g^l, a)\Big)$. Equivalent description: $g^{l+1}$ minimizes $\|g - M(g^l, a)\|^2_{(\ell_2)^n} + \frac{2\alpha}{C} \sum_{\lambda\in\Lambda} \omega_\lambda \|g_\lambda\|_q$
41 Nonlinear Inverse Problems & Sparsity Minimization Joint Sparsity Proposition Let $1 \le q \le \infty$ and $1 = 1/q + 1/q'$. The coefficients of the iterates of the fixed point equation are given by $(g_\lambda)^{l+1} = (g^1_\lambda, \ldots, g^n_\lambda)^{l+1} = \big(I - P_{B_{q'}(C^{-1}\alpha\omega_\lambda)}\big)\big((M(g^l, a))_\lambda\big)$, where $B_{q'}(r)$ denotes the $\ell_{q'}$-ball of radius $r$.
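For the joint-sparsity measure, the coefficientwise shrinkage above is simple to realize. Below is a minimal sketch for $q = 2$ (hence $q' = 2$), where $I - P_{B_2(r)}$ acts as block soft-thresholding on each coefficient group; the array layout and names are assumptions made for illustration only.

```python
import numpy as np

def project_l2_ball(v, radius):
    """Orthogonal projection of v onto the Euclidean ball of the given radius."""
    nrm = np.linalg.norm(v)
    return v if nrm <= radius else (radius / nrm) * v

def joint_sparsity_shrinkage(M_coeffs, alpha, C, omega):
    """Apply (I - P_{B_{q'}(alpha*omega_lambda/C)}) coefficientwise for q = q' = 2.

    M_coeffs: array of shape (num_lambda, n); row lambda holds (M(g^l, a))_lambda.
    omega:    array of length num_lambda with the weights omega_lambda.
    Returns the coefficients (g_lambda)^{l+1}, same shape as M_coeffs."""
    out = np.empty_like(M_coeffs)
    for lam, v in enumerate(M_coeffs):
        out[lam] = v - project_l2_ball(v, alpha * omega[lam] / C)
    return out
```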
42 Nonlinear Inverse Problems & Sparsity Convergence Convergence Theorem Assume that there exists at least one isolated limit $g_\alpha$ of a subsequence $g^{k_l}$ of $g^k$. Then $g^k \to g_\alpha$ as $k \to \infty$. The accumulation point $g_\alpha$ is a minimizer of the functional $J^s_\alpha(g, g_\alpha)$ and satisfies the necessary condition for a minimum of $J_\alpha$.
43 Nonlinear Inverse Problems & Sparsity Regularization Regularization Theorem Let $\alpha(\delta) \to 0$ and $\delta^2/\alpha(\delta) \to 0$ as $\delta \to 0$. Then every sequence $\{g_{\alpha(\delta)}\}$ of minimizers of the functional $J_\alpha(g)$, where $\delta \to 0$ and $\alpha = \alpha(\delta)$, has a convergent subsequence. The limit of every convergent subsequence is a solution of $T(g) = y$ with minimal value of $\Psi(g)$.
44 Nonlinear Inverse Problems & Sparsity Examples (Figure: toy example showing data $y$, $y^\delta$, the components $F_1^* g$, $F_2^* g$ and $Kg$; discrepancy (red) and penalty (green); sparsity; error; $J_\alpha$ (red) vs. $J_\alpha$+Add (blue))
45 Nonlinear Inverse Problems & Sparsity Examples (Figure: signal $X$, noisy data $Y + \delta$, reconstruction $G$; discrepancy (red) and penalty (green); sparsity; error; $J_\alpha$ (red) vs. $J_\alpha$+Add (blue))
46 Nonlinear Inverse Problems & Sparsity Examples $R[f, \mu](s, \omega) = \int_{\mathbb{R}} f(s\omega + t\omega^\perp)\, e^{-\int_t^\infty \mu(s\omega + \tau\omega^\perp)\,d\tau}\, dt$. Left: density $f$, right: attenuation $\mu$
47 Nonlinear Inverse Problems & Sparsity Examples $R[f, \mu](s, \omega) = \int_{\mathbb{R}} f(s\omega + t\omega^\perp)\, e^{-\int_t^\infty \mu(s\omega + \tau\omega^\perp)\,d\tau}\, dt$. Simulated data $R[f, \mu]$
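For orientation, here is a crude numerical sketch of the attenuated Radon transform $R[f, \mu]$ on a pixel grid (nearest-neighbour sampling, uniform discretization of the rays). It only illustrates the formula above; it is not the discretization behind the reconstructions shown in the talk.

```python
import numpy as np

def attenuated_radon(f, mu, n_angles=60, n_shifts=64, n_steps=256):
    """Sinogram of R[f, mu](s, omega) for f, mu given as square arrays on [-1, 1]^2.

    For each ray s*omega + t*omega_perp, integrates f weighted by
    exp(-int_t^inf mu(s*omega + tau*omega_perp) dtau).
    Returns an array of shape (n_angles, n_shifts)."""
    N = f.shape[0]
    ts = np.linspace(-1.5, 1.5, n_steps)
    dt = ts[1] - ts[0]
    shifts = np.linspace(-1.0, 1.0, n_shifts)
    sino = np.zeros((n_angles, n_shifts))

    def sample(img, x, y):
        # nearest-neighbour lookup; points outside [-1, 1]^2 are clamped to the border
        i = np.clip(((y + 1) / 2 * (N - 1)).round().astype(int), 0, N - 1)
        j = np.clip(((x + 1) / 2 * (N - 1)).round().astype(int), 0, N - 1)
        return img[i, j]

    for ia, theta in enumerate(np.linspace(0.0, np.pi, n_angles, endpoint=False)):
        omega = np.array([np.cos(theta), np.sin(theta)])
        omega_perp = np.array([-np.sin(theta), np.cos(theta)])
        for js, s in enumerate(shifts):
            pts = s * omega[None, :] + ts[:, None] * omega_perp[None, :]
            f_vals = sample(f, pts[:, 0], pts[:, 1])
            mu_vals = sample(mu, pts[:, 0], pts[:, 1])
            # suffix sums approximate the attenuation integral from t to +infinity
            atten = np.exp(-np.cumsum(mu_vals[::-1])[::-1] * dt)
            sino[ia, js] = np.sum(f_vals * atten) * dt
    return sino
```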
48 Nonlinear Inverse Problems & Sparsity Examples Reconstruction of density f (3 percent error)
49 Nonlinear Inverse Problems & Sparsity Examples Reduced Support Vector Machines Cascade Classification
50 Nonlinear Inverse Problems & Sparsity Examples Input image, and images showing the amount of rejected pixels at the 1st, 3rd and 50th stages of the cascade
51 Nonlinear Inverse Problems & Sparsity Examples Reduced Support Vector Machines Percentage of rejected non-face patches as a function of the number of operations required
52 Nonlinear Inverse Problems & Sparsity Examples Time per patch: SVM: ... µs, RVM: 22.51 µs, W-RVM: 1.48 µs. Comparison of the speed improvement of the W-RVM over the RVM and SVM
53 Nonlinear Inverse Problems & Sparsity Examples Drawback - High Computational Complexity Adaptivity!
54 Adaptivity for linear problems Frame based concept (Stevenson, Dahlke et al.) for positive operators. Consider the regularized problem. Construction of RHS, APPLY, COARSE routines
55 Adaptivity for linear problems Operator $s^*$-admissibility. New definition of $s^*$-compressibility (NEW: density function). New routine APPLY: if our operator fulfills the new $s^*$-compressibility, then with the new APPLY routine our operator is $s^*$-admissible
56 Adaptivity for linear problems Verification for the linear Radon transform: $|\langle (R^*R + \alpha)\phi_\lambda, \phi_{\lambda'} \rangle| \lesssim 2^{-\sigma||\lambda| - |\lambda'||}\,\big(1 + 2^{\min(|\lambda|, |\lambda'|)}\,\delta(\lambda, \lambda')\big)^{-\beta}$; remember the Lemarié class: $2^{-\sigma||\lambda| - |\lambda'||}\,\big(1 + 2^{\min(|\lambda|, |\lambda'|)}\,\delta(\lambda, \lambda')\big)^{-\beta}$ with $\beta > n$, $\sigma > n/2$
57 Adaptivity for linear problems see Matlab images...