Proximal tools for image reconstruction in dynamic Positron Emission Tomography
1 Proximal tools for image reconstruction in dynamic Positron Emission Tomography. Nelly Pustelnik (1), joint work with Caroline Chaux (2), Jean-Christophe Pesquet (3), and Claude Comtat (4). (1) Laboratoire de Physique, ENS Lyon, CNRS UMR; (2) LATP, Université d'Aix-Marseille, CNRS UMR; (3) LIGM, Université Paris-Est, CNRS UMR; (4) SHFJ, CEA. Séminaire DROITE, September 17th, 2012.
2 Positron Emission Tomography (PET): image reconstruction for dynamic PET. [Figure: measured sinograms → reconstructed images; benzodiazepine receptor density.] Difficulties: degradation (projection operator and noise); large data size.
3 Outline
1. Degradation model and state of the art: variational approach, sparsity, frames.
2. Proximal algorithms: proximity operators, properties, DR, PPXA.
3. Space+time PET reconstruction: PPXA, hybrid regularization, simulated and preclinical data.
4. Conclusion and future work.
4 Degradation model
[Figure: original image y ∈ R^N → A → Ay → P_α → sinogram z ∈ R^M.]
Degradation model: z = P_α(Ay), where A ∈ R^{M×N} is the matrix associated with the projection operator and P_α denotes Poisson noise with scaling parameter α > 0.
GOAL: find a vector ŷ as close as possible to y from the observations z.
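As a toy illustration of this degradation model (the matrix A, the sizes, and the value of α below are made up for the sketch; a real PET system matrix encodes line integrals along tubes of response):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the PET system matrix A in R^{M x N} (nonnegative entries).
N, M, alpha = 64, 96, 0.5
A = rng.random((M, N))
y = rng.random(N)  # toy "original image", flattened to a vector

# Degradation model z = P_alpha(A y): Poisson-distributed counts with mean alpha*(A y).
z = rng.poisson(alpha * (A @ y))
```

The reconstruction problem is then to recover y from the integer-valued sinogram z, knowing A and α.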
5 Variational approach: state of the art
Quadratic regularization (Wiener filtering [Wiener, 1949]):
ŷ = argmin_{y ∈ R^N} ||Ay − z||_2^2 + λ ||y||_2^2, where λ > 0.
6 Variational approach: state of the art
Use of frames [Haar, 1910][Mallat, 2009] (soft-thresholding):
ŷ = argmin_{y ∈ R^N} ||Ay − z||_2^2 + λ ||Fy||_1, where λ > 0.
7 Variational approach: state of the art
[Figure: y ∈ R^N → F (analysis) → x = Fy ∈ R^K, with K ≥ N, and back through F* (synthesis).]
F ∈ R^{K×N}: matrix associated with the analysis frame operator.
F* ∈ R^{N×K}: matrix associated with the synthesis frame operator.
Tight frame condition: F* F = µ Id for some µ > 0.
8 Variational approach: state of the art
The frame-based formulation ŷ = argmin_{y ∈ R^N} ||Ay − z||_2^2 + λ ||Fy||_1, where λ > 0, admits two interpretations: a Bayesian one and a compressed-sensing one.
9 Variational approach: Bayesian formulation
u = Ay: realization of a random vector U.
z: realization of a random vector Z.
x = Fy = (x^(k))_{1≤k≤K}: realization of a random vector X = (X^(k))_{1≤k≤K} having independent components.
MAP estimator (Maximum A Posteriori):
max_y P(U = Ay | Z = z)  ⇔  max_x P(Z = z | U = A F* x) P(X = x)
⇔  min_x  − ln P(Z = z | U = A F* x)  −  Σ_{k=1}^K ln p_{X^(k)}(x^(k))
(first term: data fidelity; second term: a priori).
10 Variational approach: Bayesian formulation
[Figure repeated: y ∈ R^N → F → x = Fy ∈ R^K, with K ≥ N.]
11 Variational approach: Bayesian formulation
min_x − ln P(Z = z | U = A F* x) − Σ_{k=1}^K ln p_{X^(k)}(x^(k))   (fidelity term + a priori),
where
P(Z = z | U = A F* x) ∝ exp( − ||A F* x − z||_2^2 / (2α) )
and
p_{X^(k)}(x^(k)) = (1/C_k) exp( − λ_k |x^(k)| ).
Consequently,
min_x (1/(2α)) ||A F* x − z||_2^2 + Σ_{k=1}^K λ_k |x^(k)|   (fidelity term + a priori).
12 Variational approach: compressed sensing [Donoho, 2004][Candès, 2008]
Sparsity assumption:
ŷ_0 = argmin_{y ∈ R^N} ||Fy||_0  s.t.  ||Ay − z||_2^2 ≤ ε,
where, for y = (y_i)_{1≤i≤N} ∈ R^N, ||y||_0 = #{i : y_i ≠ 0}, and ε > 0.
Convex relaxation:
ŷ = argmin_{y ∈ R^N} ||Fy||_1  s.t.  ||Ay − z||_2^2 ≤ ε,
where ||y||_1 = Σ_{i=1}^N |y_i|.
Lagrangian formulation:
ŷ = argmin_{y ∈ R^N} ||Ay − z||_2^2 + λ ||Fy||_1, where λ > 0.
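For A = F = Id, the Lagrangian formulation decouples coordinatewise and is solved by soft-thresholding, the operator named on the earlier slides; a minimal numpy sketch (the input values are arbitrary):

```python
import numpy as np

def soft_threshold(x, tau):
    # Soft-thresholding: shrinks each coefficient toward zero by tau,
    # setting to zero every coefficient with magnitude below tau.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

coeffs = np.array([-3.0, -0.2, 0.0, 0.5, 2.0])
shrunk = soft_threshold(coeffs, 1.0)  # small coefficients are zeroed out
```

This is the l1-promoted sparsity in action: entries below the threshold vanish, large entries survive (shrunk by tau).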
13 Variational approach: criterion choice
Minimization problem: find ŷ ∈ Argmin_{y ∈ R^N} Σ_{j=1}^J f_j(y), where the (f_j)_{1≤j≤J} belong to the class of convex, l.s.c., proper functions from R^N to ]−∞, +∞].
14 Variational approach: criterion choice
f_j may model the data-fidelity term:
- f_j(y) = (1/(2α)) ||Ay − z||^2 for Gaussian noise;
- f_j(y) = D_KL(αAy, z) for Poisson noise.
15 Variational approach: criterion choice
Poisson distribution:
P(Z = z | U = Ay) = Π_{i=1}^M exp(−α(Ay)_i) (α(Ay)_i)^{z_i} / z_i!
Poisson anti-log-likelihood:
−log P(Z = z | U = Ay) = Σ_{i=1}^M ( −z_i log(α(Ay)_i) + α(Ay)_i ) + const.
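A quick numerical sketch of this anti-log-likelihood (sizes and values are arbitrary): it checks that the data-fidelity term used in the criterion differs from the exact Poisson negative log-likelihood only by the log(z_i!) terms, which do not depend on y.

```python
import math
import numpy as np

rng = np.random.default_rng(4)
alpha = 0.8
Ay = rng.random(5) + 0.1          # strictly positive toy sinogram means
z = rng.poisson(alpha * Ay)       # observed counts

lam = alpha * Ay
# Exact Poisson negative log-likelihood, including the log(z_i!) constant.
nll_exact = np.sum(lam - z * np.log(lam)
                   + np.array([math.lgamma(k + 1) for k in z]))

# Fidelity term of the criterion: drops the y-independent log(z_i!) part.
fidelity = np.sum(lam - z * np.log(lam))
```

Minimizing the fidelity term and minimizing the exact negative log-likelihood over y are therefore the same problem.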
16 Variational approach: criterion choice
f_j may model a prior:
- f_j(y) = λ ||y||_2^2: Tikhonov regularization;
- f_j(y) = λ tv(y): total variation;
- f_j(y) = λ ||Fy||_1: sparsity model.
17 Variational approach: criterion choice
f_j may model constraints:
- f_j(y) = ι_C(y) with, e.g., C = [0, 255]^N: data-range constraint.
18 Variational approach: algorithm choice
Minimization problem: find ŷ ∈ Argmin_{y ∈ R^N} Σ_{j=1}^J f_j(y).
Properties of the involved functions:
- smooth functions → gradient-based methods (Newton, quasi-Newton, ...);
- constraints → projection-based methods (POCS, SIRT, ...);
- non-smooth functions → proximal algorithms (FB, DR, PPXA, ...).
19 Proximity operator
Definition [Moreau, 1965]. Let φ: H → ]−∞, +∞] be a convex, l.s.c., proper function. The proximity operator of φ at a point u ∈ H is the unique point denoted by prox_φ u such that
(∀u ∈ H)  prox_φ u = argmin_{v ∈ H} φ(v) + (1/2) ||u − v||^2.
Examples:
- If φ = ι_C, then prox_{ι_C} = P_C, where P_C denotes the projection onto the convex set C.
- If φ = χ|·| with χ > 0, then prox_φ is soft-thresholding with fixed threshold χ.
- If φ = D_KL(α·, ζ) with ζ, α > 0, then prox_φ u = (1/2)( u − α + sqrt( (u − α)^2 + 4ζ ) ).
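A scalar sanity check of the last example (assuming the quoted closed form for the per-bin Poisson anti-log-likelihood φ(v) = αv − ζ log v; the helper name and the numbers are illustrative):

```python
import numpy as np

def prox_kl(u, alpha, zeta):
    # Closed-form prox of phi(v) = alpha*v - zeta*log(v) at the point u.
    return 0.5 * (u - alpha + np.sqrt((u - alpha) ** 2 + 4.0 * zeta))

u, alpha, zeta = 1.5, 0.7, 3.0
v = prox_kl(u, alpha, zeta)

# First-order optimality of min_v phi(v) + 0.5*(v - u)^2:
#   alpha - zeta/v + v - u = 0
residual = alpha - zeta / v + v - u
```

The residual vanishes (up to floating point), confirming that the quoted point is indeed the minimizer of φ(v) + (1/2)(v − u)^2.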
20-22 Proximity operator
Proposition [Combettes, Pesquet, 2007]. Let H_1 and H_2 be real separable Hilbert spaces, let φ be a convex, l.s.c., proper function from H_2 to ]−∞, +∞], and let L: H_1 → H_2 be a bounded linear operator such that L ∘ L* = σ Id with σ > 0. Then
prox_{φ∘L} = Id + σ^{−1} L* ∘ (prox_{σφ} − Id) ∘ L.
If L is built from a tight frame, i.e. L = F* with F* F = µ Id, the proposition can be used (example: prox_{D_KL(F*·, z)}).
Extension to the case of a separable φ and a linear operator such that L ∘ L* = D, where D denotes a diagonal matrix [Pustelnik et al., 2011].
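A small numerical check of the proposition (the matrix Q and the l1 penalty below are arbitrary choices for the sketch; Q satisfies Q Q^T = 2 Id, so σ = 2):

```python
import numpy as np

def soft(x, tau):
    # prox of tau*||.||_1 (componentwise soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

Q = np.array([[1.0, 1.0], [1.0, -1.0]])  # Q @ Q.T = 2 * Id, so sigma = 2
sigma = 2.0

def prox_l1_of_L(u, tau):
    # prox of tau*||Q .||_1 via the semi-orthogonal composition formula:
    #   prox_{phi o L} = Id + (1/sigma) L* (prox_{sigma*phi} - Id) L
    Lu = Q @ u
    return u + (Q.T @ (soft(Lu, sigma * tau) - Lu)) / sigma

u = np.array([0.8, -1.7])
p = prox_l1_of_L(u, 0.3)
```

Strong convexity of v ↦ 0.3 ||Qv||_1 + (1/2)||v − u||^2 makes the minimizer unique, so random perturbations around p can only increase the objective.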
23 Proximal algorithm: forward-backward
Forward-backward algorithm. Let u_0 ∈ H. The algorithm constructs a sequence (u_l)_{l≥1} by the iterations
u_{l+1} = u_l + λ_l ( prox_{γ_l f_1}( u_l − γ_l ∇f_2(u_l) ) − u_l ).
24 Proximal algorithm: forward-backward
u_{l+1} = u_l + λ_l ( prox_{γ_l f_1}( u_l − γ_l ∇f_2(u_l) ) − u_l ).
Forward-backward convergence [Combettes and Wajs, 2005]:
- f_2 is β-Lipschitz differentiable on H with β > 0;
- γ_l ∈ ]0, 2/β[ : algorithm step-size;
- λ_l ∈ ]0, 1] : relaxation parameter.
Under these assumptions, (u_l)_{l∈N} converges to a solution of min f_1(u) + f_2(u).
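With f_1 = λ||·||_1 and f_2 = (1/2)||A· − z||^2, the forward-backward iteration is the classical ISTA scheme; a minimal sketch on random data (problem sizes and λ are arbitrary, λ_l = 1):

```python
import numpy as np

rng = np.random.default_rng(2)

def soft(x, tau):
    # prox of tau*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# min_u  lam*||u||_1 + 0.5*||A u - z||^2  by forward-backward splitting
A = rng.standard_normal((20, 10))
z = rng.standard_normal(20)
lam = 0.5

beta = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of grad f_2
gamma = 1.0 / beta                 # step-size in ]0, 2/beta[
u = np.zeros(10)
for _ in range(2000):
    grad = A.T @ (A @ u - z)               # gradient (forward) step on f_2
    u = soft(u - gamma * grad, gamma * lam)  # prox (backward) step on f_1
```

At convergence, u satisfies the lasso optimality conditions: the gradient of the smooth term balances a subgradient of λ||·||_1.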
25 Proximal algorithm: PPXA
Parallel ProXimal Algorithm [Combettes and Pesquet, 2008]. Let γ ∈ ]0, +∞[, let (ω_j)_{1≤j≤J} ∈ ]0, 1]^J be such that Σ_{j=1}^J ω_j = 1, let (v_{j,0})_{1≤j≤J} ∈ (R^N)^J, and set u_0 = Σ_{j=1}^J ω_j v_{j,0}. The sequence (u_l)_{l≥1} is generated by the iterations:
  for j = 1, ..., J:  p_{j,l} = prox_{γ/ω_j f_j}(v_{j,l})   (compute every proximity operator, in parallel)
  p_l = Σ_{j=1}^J ω_j p_{j,l}
  choose λ_l ∈ ]0, 2[
  for j = 1, ..., J:  v_{j,l+1} = v_{j,l} + λ_l (2 p_l − u_l − p_{j,l})
  u_{l+1} = u_l + λ_l (p_l − u_l)
Convergence [Combettes and Pesquet, 2008]: under some assumptions, the sequence (u_l)_{l≥1} converges to a solution of min Σ_{j=1}^J f_j(u).
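A minimal sketch of these iterations on a toy problem with two quadratic terms, whose minimizer is known in closed form (all problem data below are made up):

```python
import numpy as np

# PPXA on min f_1(x) + f_2(x) with f_j(x) = 0.5*||x - a_j||^2,
# whose minimizer is (a_1 + a_2)/2.
a = [np.array([1.0, 3.0]), np.array([5.0, -1.0])]
J, gamma, lam = 2, 1.0, 1.0
w = [0.5, 0.5]  # weights omega_j summing to 1

def prox_quad(v, c, aj):
    # prox of c * 0.5*||. - aj||^2  ->  (v + c*aj) / (1 + c)
    return (v + c * aj) / (1.0 + c)

v = [np.zeros(2) for _ in range(J)]
u = sum(w[j] * v[j] for j in range(J))
for _ in range(300):
    # parallel prox steps on each f_j
    p = [prox_quad(v[j], gamma / w[j], a[j]) for j in range(J)]
    pbar = sum(w[j] * p[j] for j in range(J))
    for j in range(J):
        v[j] = v[j] + lam * (2.0 * pbar - u - p[j])
    u = u + lam * (pbar - u)
```

The J prox evaluations in each iteration are independent, which is what makes the scheme parallelizable over the criterion's terms.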
26-31 Proximal algorithms for ŷ ∈ Argmin_{y ∈ R^N} Σ_{j=1}^J f_j(L_j y):
- Forward-backward [Combettes, Wajs, 2005]: J = 2, L_j = Id, and one function with a Lipschitz gradient.
- Douglas-Rachford [Combettes, Pesquet, 2007]: J = 2 and L_j = Id.
- PPXA [Combettes, Pesquet, 2008]: L_j = Id.
- PPXA+ [Pesquet, Pustelnik, 2012]: Σ_j L_j* L_j invertible.
- M+SFBF [Combettes, Briceño, 2011]: no conditions required.
- M+LFBF [Combettes, Pesquet, 2012]: one function with a Lipschitz gradient.
32 PET reconstruction (dynamic)
[Figure: measured sinograms → reconstructed images; benzodiazepine receptor distribution.]
33 PET reconstruction (dynamic)
[Figure: original image ȳ_t ∈ R^N → A → Aȳ_t → P_α → sinogram z_t ∈ R^M.]
Degradation model: ∀t ∈ {1, ..., T}, z_t = P_α(A ȳ_t), where A ∈ R^{M×N} is the matrix associated with the projection operator and P_α denotes Poisson noise with scaling parameter α > 0.
34 PET reconstruction: state of the art
- Filtered Back-Projection (FBP)
- EM-ML [Shepp, Vardi, 1982][Lange, Carson, 1984]
- Ordered Subsets EM (OSEM) [Hudson, Larkin, 1994]
- Space-Alternating Generalized EM (SAGE) [Fessler, Hero, 1994]
- RAMLA [Browne, De Pierro, 1996]
- EM with stopping criteria [Veklerov, Llacer, 1987]
- SIEVES [Snyder, Miller, 1985]
- MAP [Herman et al., 1979][Sauer, Bouman, 1993][Fessler, 1995]
- Multiresolution [Turkheimer et al., 1999][Alpert et al., 2006][Zhou et al., 2007]
- Spatio-temporal assumptions [Nichols et al., 2002][Kamasac et al., 2005][Reader et al., 2006][Verhaeghe et al., 2008]
35-41 Proposed method
Criterion:
ŷ ∈ Argmin_{y ∈ R^{NT}} Σ_{t=1}^T ( D_KL(αAy_t, z_t) + ϑ tv(y_t) ) + κ ||Fy||_1 + ι_C(y)
- D_KL(αA·, z_t): minus Poisson log-likelihood (data-fidelity term);
- tv: total variation (non-differentiable);
- F ∈ R^{K×NT}: wavelet transform such that F* F = F F* = Id;
- ||·||_1: ℓ_1 norm (non-differentiable);
- C ⊂ R^{NT}: convex constraint set;
- ϑ ≥ 0 and κ ≥ 0.
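A toy evaluation of this hybrid criterion, with F replaced by the identity and C by a nonnegativity constraint for simplicity (the deck uses an orthonormal space-time wavelet transform and a data-range constraint, so these are stand-ins):

```python
import numpy as np

def kl_fidelity(lam, z):
    # Poisson data-fidelity D_KL-type term, up to a y-independent constant.
    return np.sum(lam - z * np.log(lam))

def tv_aniso(img):
    # Anisotropic total variation: l1 norm of vertical + horizontal differences.
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def objective(frames, A, z, alpha, theta, kappa):
    # frames: list of T 2-D images y_t; z: list of T sinograms z_t.
    if np.any(np.concatenate([f.ravel() for f in frames]) < 0):
        return np.inf  # indicator function of the constraint set C
    data = sum(kl_fidelity(alpha * (A @ f.ravel()), zt)
               for f, zt in zip(frames, z))
    reg_tv = theta * sum(tv_aniso(f) for f in frames)
    reg_l1 = kappa * sum(np.abs(f).sum() for f in frames)  # ||Fy||_1 with F = Id
    return data + reg_tv + reg_l1
```

The non-differentiable pieces (tv, l1 norm, indicator) are exactly the terms that rule out plain gradient methods and call for the proximal machinery of the previous slides.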
42-44 Proposed method
Criterion:
ŷ ∈ Argmin_{y ∈ R^{NT}} Σ_{t=1}^T ( D_KL(αAy_t, z_t) + ϑ tv(y_t) ) + κ ||Fy||_1 + ι_C(y)
Availability of the proximity operators:
- tv (anisotropic total variation): closed form for prox_tv after splitting;
- ||·||_1: closed form for prox_{||·||_1};
- ι_C (C ⊂ R^{NT} a convex constraint set): closed form for prox_{ι_C};
- D_KL (Poisson anti-log-likelihood): closed form for prox_{D_KL(α·, z)}, but NO closed form for prox_{D_KL(αA·, z)} because A A* ≠ D (where D denotes a diagonal matrix).
45-48 Proposed method
How to compute prox_{D_KL(αA·, z)}? [Figure: system matrix A indexed by tubes of response m (rows) and pixels n (columns).]
For every m ∈ {1, ..., M}, ψ_m ∈ Γ_0(R), so we can write
D_KL(αAy, z) = Σ_{m=1}^M ψ_m(α A_m y) = Σ_{r=1}^R Σ_{m ∈ I_r} ψ_m(α A_m y) = Σ_{r=1}^R Ψ^(r)(α A^(r) y),
where (I_r)_{1≤r≤R} is a partition of {1, ..., M} and A^(r) = (A_{m,n})_{m ∈ I_r, 1≤n≤N}.
Choice of I_r: each subset gathers non-overlapping, and thus orthogonal, rows of A, so that A^(r) (A^(r))* is diagonal and prox_{Ψ^(r) ∘ A^(r)} takes a closed form.
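One way to build such a partition is a greedy pass over the rows of A, grouping rows with pairwise disjoint supports (an illustrative sketch, not necessarily the scheme used in the paper):

```python
import numpy as np

def partition_disjoint_rows(A, tol=0.0):
    # Greedily group row indices of A into subsets I_r whose rows have
    # pairwise disjoint supports; within each subset, A_r @ A_r.T is diagonal.
    subsets = []  # list of (row-index list, occupied-column boolean mask)
    for m, row in enumerate(A):
        support = np.abs(row) > tol
        for idx, occupied in subsets:
            if not np.any(occupied & support):  # no column overlap: rows orthogonal
                idx.append(m)
                occupied |= support
                break
        else:
            subsets.append(([m], support.copy()))
    return [idx for idx, _ in subsets]
```

Within each subset the rows are orthogonal by construction, so the diagonal-case extension of the prox composition result applies block by block.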
49-50 Simulated data: materials & methods
Materials:
- PET 2D+time cerebral data, [18F]-FDG, HR+ scanner, based on the Zubal brain phantom.
- 16 temporal frames, with durations between 30 seconds and 5 minutes.
- Number of events: between 48 and …
Method:
- PPXA: 400 iterations, about 10 s each.
- System matrix split into R = 432 subsets.
- Wavelets: symlet filters of length 6 over 3 resolution levels (space) and 2 levels of Daubechies-6 wavelets on the interval (time).
- Comparisons: EM-ML with/without post-reconstruction smoothing; PPXA with/without temporal regularization.
51-52 Simulated data: results
[Figure: reconstructed frames for EM-ML, Sieves, PPXA 2D, and PPXA 2D+t, shown at two time frames (min).]
53 Simulated data: MSE
[Table: mean-squared error per time frame (min) for EM-ML, SIEVES, PPXA 2D, and PPXA 2D+t, in the cortex and in the striatum.]
54-55 Simulated data: TAC
[Plots: time-activity curves, activity (Bq/cc) versus time (min), for Original, SIEVES 1, SIEVES 2, PPXA 1, and PPXA 2, in the cortex and in the striatum.]
56-57 Preclinical data: materials & methods
- Baboon 2D brain PET: [18F]-FDG study on the HR+ scanner.
- Prompt and delayed coincidences stored separately.
- 128 time frames, from 10 to 30 seconds in duration.
- Sinograms rebinned into 16 time frames, from 80 to 240 seconds in duration.
- OP-PPXA: all corrections (except scatter) included in the model.
58-59 Preclinical data: results
[Figure: reconstructed frames for FBP, Sieves, and PPXA 2D+t at two time frames (min).]
60 Conclusion
- Efficient method to reconstruct dynamic PET data and obtain parametric maps.
- Time-frame characteristics taken into account by considering wavelets.
- Uses the Parallel ProXimal Algorithm (PPXA).
- Can deal with a large panel of regularization terms and constraints.
- Results presented on simulated and preclinical data.
Future work:
- Impact of considering 16, 32, 64, ... temporal frames for parametric map estimation.
- Improve the convergence rate of the algorithm.
61 References
- J. Verhaeghe, D. Van De Ville, I. Khalidov, Y. D'Asseler, I. Lemahieu, and M. Unser, Dynamic PET reconstruction using wavelet regularization with adapted basis functions, IEEE TMI, 27(7), 2008.
- F. C. Sureau, J.-C. Pesquet, C. Chaux, N. Pustelnik, A. J. Reader, C. Comtat, and R. Trebossen, Temporal wavelet denoising of PET sinograms and images, IEEE NSS, 6 pp., Dresden, Germany, Oct.
- N. Pustelnik, C. Chaux, J.-C. Pesquet, F. C. Sureau, E. Dusch, and C. Comtat, Adapted convex optimization algorithm for wavelet-based dynamic PET reconstruction, Fully 3D, 4 pp., Beijing, China, Sept.
- N. Pustelnik, C. Chaux, J.-C. Pesquet, and C. Comtat, Parallel algorithm and hybrid regularization for dynamic PET reconstruction, IEEE MIC, 4 pp., Knoxville, Tennessee, Oct.-Nov.
- N. Pustelnik, C. Chaux, and J.-C. Pesquet, Parallel ProXimal Algorithm for image restoration using hybrid regularization, IEEE TIP, 20(9), Sep. 2011.
- S. Anthoine, J.-F. Aujol, Y. Boursier, and C. Mélot, Some proximal methods for Poisson intensity CBCT and PET, Inverse Problems and Imaging, to appear, 2012.
MLCC 2018 Variable Selection and Sparsity Lorenzo Rosasco UNIGE-MIT-IIT Outline Variable Selection Subset Selection Greedy Methods: (Orthogonal) Matching Pursuit Convex Relaxation: LASSO & Elastic Net
More informationAuxiliary-Function Methods in Optimization
Auxiliary-Function Methods in Optimization Charles Byrne (Charles Byrne@uml.edu) http://faculty.uml.edu/cbyrne/cbyrne.html Department of Mathematical Sciences University of Massachusetts Lowell Lowell,
More informationRecovery of Sparse Signals from Noisy Measurements Using an l p -Regularized Least-Squares Algorithm
Recovery of Sparse Signals from Noisy Measurements Using an l p -Regularized Least-Squares Algorithm J. K. Pant, W.-S. Lu, and A. Antoniou University of Victoria August 25, 2011 Compressive Sensing 1 University
More informationComparison of Lesion Detection and Quantification in MAP Reconstruction with Gaussian and Non-Gaussian Priors
Hindawi Publishing Corporation International Journal of Biomedical Imaging Volume 6, Article ID 87567, Pages DOI.55/IJBI/6/87567 Comparison of Lesion Detection and Quantification in MAP Reconstruction
More informationImage restoration. An example in astronomy
Image restoration Convex approaches: penalties and constraints An example in astronomy Jean-François Giovannelli Groupe Signal Image Laboratoire de l Intégration du Matériau au Système Univ. Bordeaux CNRS
More informationSparse Regularization via Convex Analysis
Sparse Regularization via Convex Analysis Ivan Selesnick Electrical and Computer Engineering Tandon School of Engineering New York University Brooklyn, New York, USA 29 / 66 Convex or non-convex: Which
More informationConvex Optimization. (EE227A: UC Berkeley) Lecture 15. Suvrit Sra. (Gradient methods III) 12 March, 2013
Convex Optimization (EE227A: UC Berkeley) Lecture 15 (Gradient methods III) 12 March, 2013 Suvrit Sra Optimal gradient methods 2 / 27 Optimal gradient methods We saw following efficiency estimates for
More informationSequential Unconstrained Minimization: A Survey
Sequential Unconstrained Minimization: A Survey Charles L. Byrne February 21, 2013 Abstract The problem is to minimize a function f : X (, ], over a non-empty subset C of X, where X is an arbitrary set.
More informationInverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 2007 Technische Universiteit Eindh ove n University of Technology
Inverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 27 Introduction Fredholm first kind integral equation of convolution type in one space dimension: g(x) = 1 k(x x )f(x
More informationRelaxed linearized algorithms for faster X-ray CT image reconstruction
Relaxed linearized algorithms for faster X-ray CT image reconstruction Hung Nien and Jeffrey A. Fessler University of Michigan, Ann Arbor The 13th Fully 3D Meeting June 2, 2015 1/20 Statistical image reconstruction
More informationAccelerated Dual Gradient-Based Methods for Total Variation Image Denoising/Deblurring Problems (and other Inverse Problems)
Accelerated Dual Gradient-Based Methods for Total Variation Image Denoising/Deblurring Problems (and other Inverse Problems) Donghwan Kim and Jeffrey A. Fessler EECS Department, University of Michigan
More informationIEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 23, NO. 5, MAY
IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 23, NO. 5, MAY 2004 591 Emission Image Reconstruction for Roms-Precorrected PET Allowing Negative Sinogram Values Sangtae Ahn*, Student Member, IEEE, Jeffrey
More informationAccelerated Block-Coordinate Relaxation for Regularized Optimization
Accelerated Block-Coordinate Relaxation for Regularized Optimization Stephen J. Wright Computer Sciences University of Wisconsin, Madison October 09, 2012 Problem descriptions Consider where f is smooth
More informationCoordinate Update Algorithm Short Course Proximal Operators and Algorithms
Coordinate Update Algorithm Short Course Proximal Operators and Algorithms Instructor: Wotao Yin (UCLA Math) Summer 2016 1 / 36 Why proximal? Newton s method: for C 2 -smooth, unconstrained problems allow
More informationStatistical regularization theory for Inverse Problems with Poisson data
Statistical regularization theory for Inverse Problems with Poisson data Frank Werner 1,2, joint with Thorsten Hohage 1 Statistical Inverse Problems in Biophysics Group Max Planck Institute for Biophysical
More informationA Bayesian MAP-EM Algorithm for PET Image Reconstruction Using Wavelet Transform
A Bayesian MAP-EM Algorithm for PET Image Reconstruction Using Wavelet Transform Jian Zhou, Jean-Louis Coatrieux, Alexandre Bousse, Huazhong Shu, Limin Luo To cite this version: Jian Zhou, Jean-Louis Coatrieux,
More informationA Dykstra-like algorithm for two monotone operators
A Dykstra-like algorithm for two monotone operators Heinz H. Bauschke and Patrick L. Combettes Abstract Dykstra s algorithm employs the projectors onto two closed convex sets in a Hilbert space to construct
More informationProjected Nesterov s Proximal-Gradient Algorithm for Sparse Signal Reconstruction with a Convex Constraint
1 Projected Nesterov s Proximal-Gradient Algorithm for Sparse Signal Reconstruction with a Convex Constraint Renliang Gu and Aleksandar Dogandžić arxiv:1502.02613v6 [stat.co] 5 Oct 2016 Abstract We develop
More informationAPPLICATIONS OF A NONNEGATIVELY CONSTRAINED ITERATIVE METHOD WITH STATISTICALLY BASED STOPPING RULES TO CT, PET, AND SPECT IMAGING
APPLICATIONS OF A NONNEGATIVELY CONSTRAINED ITERATIVE METHOD WITH STATISTICALLY BASED STOPPING RULES TO CT, PET, AND SPECT IMAGING JOHNATHAN M. BARDSLEY Abstract. In this paper, we extend a nonnegatively
More informationContinuous State MRF s
EE64 Digital Image Processing II: Purdue University VISE - December 4, Continuous State MRF s Topics to be covered: Quadratic functions Non-Convex functions Continuous MAP estimation Convex functions EE64
More informationImage restoration: numerical optimisation
Image restoration: numerical optimisation Short and partial presentation Jean-François Giovannelli Groupe Signal Image Laboratoire de l Intégration du Matériau au Système Univ. Bordeaux CNRS BINP / 6 Context
More informationSimultaneous Multi-frame MAP Super-Resolution Video Enhancement using Spatio-temporal Priors
Simultaneous Multi-frame MAP Super-Resolution Video Enhancement using Spatio-temporal Priors Sean Borman and Robert L. Stevenson Department of Electrical Engineering, University of Notre Dame Notre Dame,
More informationModeling Multiscale Differential Pixel Statistics
Modeling Multiscale Differential Pixel Statistics David Odom a and Peyman Milanfar a a Electrical Engineering Department, University of California, Santa Cruz CA. 95064 USA ABSTRACT The statistics of natural
More informationarxiv: v1 [math.oc] 20 Jun 2014
A forward-backward view of some primal-dual optimization methods in image recovery arxiv:1406.5439v1 [math.oc] 20 Jun 2014 P. L. Combettes, 1 L. Condat, 2 J.-C. Pesquet, 3 and B. C. Vũ 4 1 Sorbonne Universités
More informationMinimizing Isotropic Total Variation without Subiterations
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Minimizing Isotropic Total Variation without Subiterations Kamilov, U. S. TR206-09 August 206 Abstract Total variation (TV) is one of the most
More informationDictionary Learning for photo-z estimation
Dictionary Learning for photo-z estimation Joana Frontera-Pons, Florent Sureau, Jérôme Bobin 5th September 2017 - Workshop Dictionary Learning on Manifolds MOTIVATION! Goal : Measure the radial positions
More informationA New Look at First Order Methods Lifting the Lipschitz Gradient Continuity Restriction
A New Look at First Order Methods Lifting the Lipschitz Gradient Continuity Restriction Marc Teboulle School of Mathematical Sciences Tel Aviv University Joint work with H. Bauschke and J. Bolte Optimization
More informationRecovering overcomplete sparse representations from structured sensing
Recovering overcomplete sparse representations from structured sensing Deanna Needell Claremont McKenna College Feb. 2015 Support: Alfred P. Sloan Foundation and NSF CAREER #1348721. Joint work with Felix
More informationVariational inference
Simon Leglaive Télécom ParisTech, CNRS LTCI, Université Paris Saclay November 18, 2016, Télécom ParisTech, Paris, France. Outline Introduction Probabilistic model Problem Log-likelihood decomposition EM
More informationDenosing Using Wavelets and Projections onto the l 1 -Ball
1 Denosing Using Wavelets and Projections onto the l 1 -Ball October 6, 2014 A. Enis Cetin, M. Tofighi Dept. of Electrical and Electronic Engineering, Bilkent University, Ankara, Turkey cetin@bilkent.edu.tr,
More informationVariational Image Restoration
Variational Image Restoration Yuling Jiao yljiaostatistics@znufe.edu.cn School of and Statistics and Mathematics ZNUFE Dec 30, 2014 Outline 1 1 Classical Variational Restoration Models and Algorithms 1.1
More informationSparse regression. Optimization-Based Data Analysis. Carlos Fernandez-Granda
Sparse regression Optimization-Based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_spring16 Carlos Fernandez-Granda 3/28/2016 Regression Least-squares regression Example: Global warming Logistic
More informationSélection adaptative des paramètres pour le débruitage des images
Journées SIERRA 2014, Saint-Etienne, France, 25 mars, 2014 Sélection adaptative des paramètres pour le débruitage des images Adaptive selection of parameters for image denoising Charles Deledalle 1 Joint
More informationSparse tomography. Samuli Siltanen. Department of Mathematics and Statistics University of Helsinki, Finland
Sparse tomography Samuli Siltanen Department of Mathematics and Statistics University of Helsinki, Finland Minisymposium: Fourier analytic methods in tomographic image reconstruction Applied Inverse Problems
More informationInexact Alternating Direction Method of Multipliers for Separable Convex Optimization
Inexact Alternating Direction Method of Multipliers for Separable Convex Optimization Hongchao Zhang hozhang@math.lsu.edu Department of Mathematics Center for Computation and Technology Louisiana State
More informationSparse Time-Frequency Transforms and Applications.
Sparse Time-Frequency Transforms and Applications. Bruno Torrésani http://www.cmi.univ-mrs.fr/~torresan LATP, Université de Provence, Marseille DAFx, Montreal, September 2006 B. Torrésani (LATP Marseille)
More information(4D) Variational Models Preserving Sharp Edges. Martin Burger. Institute for Computational and Applied Mathematics
(4D) Variational Models Preserving Sharp Edges Institute for Computational and Applied Mathematics Intensity (cnt) Mathematical Imaging Workgroup @WWU 2 0.65 0.60 DNA Akrosom Flagellum Glass 0.55 0.50
More informationA General Framework for a Class of Primal-Dual Algorithms for TV Minimization
A General Framework for a Class of Primal-Dual Algorithms for TV Minimization Ernie Esser UCLA 1 Outline A Model Convex Minimization Problem Main Idea Behind the Primal Dual Hybrid Gradient (PDHG) Method
More informationOn a Data Assimilation Method coupling Kalman Filtering, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model
On a Data Assimilation Method coupling, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model 2016 SIAM Conference on Uncertainty Quantification Basile Marchand 1, Ludovic
More informationExponential Weighted Aggregation vs Penalized Estimation: Guarantees And Algorithms
Exponential Weighted Aggregation vs Penalized Estimation: Guarantees And Algorithms Jalal Fadili Normandie Université-ENSICAEN, GREYC CNRS UMR 6072 Joint work with Tùng Luu and Christophe Chesneau SPARS
More informationA posteriori error estimates for the adaptivity technique for the Tikhonov functional and global convergence for a coefficient inverse problem
A posteriori error estimates for the adaptivity technique for the Tikhonov functional and global convergence for a coefficient inverse problem Larisa Beilina Michael V. Klibanov December 18, 29 Abstract
More informationA VARIATIONAL FORMULATION FOR FRAME-BASED INVERSE PROBLEMS
A VARIATIONAL FORMULATION FOR FRAME-BASED INVERSE PROBLEMS Caroline Chaux, 1 Patrick L. Combettes, 2 Jean-Christophe Pesquet, 1 and Valérie R. Wajs 2 1 Institut Gaspard Monge and UMR CNRS 8049 Université
More informationarxiv: v1 [math.oc] 12 Mar 2013
On the convergence rate improvement of a primal-dual splitting algorithm for solving monotone inclusion problems arxiv:303.875v [math.oc] Mar 03 Radu Ioan Boţ Ernö Robert Csetnek André Heinrich February
More informationModel Selection and Geometry
Model Selection and Geometry Pascal Massart Université Paris-Sud, Orsay Leipzig, February Purpose of the talk! Concentration of measure plays a fundamental role in the theory of model selection! Model
More informationdans les modèles à vraisemblance non explicite par des algorithmes gradient-proximaux perturbés
Inférence pénalisée dans les modèles à vraisemblance non explicite par des algorithmes gradient-proximaux perturbés Gersende Fort Institut de Mathématiques de Toulouse, CNRS and Univ. Paul Sabatier Toulouse,
More informationAdaptive Corrected Procedure for TVL1 Image Deblurring under Impulsive Noise
Adaptive Corrected Procedure for TVL1 Image Deblurring under Impulsive Noise Minru Bai(x T) College of Mathematics and Econometrics Hunan University Joint work with Xiongjun Zhang, Qianqian Shao June 30,
More information