Convex and Non-Convex Optimization in Image Recovery and Segmentation


1 Convex and Non-Convex Optimization in Image Recovery and Segmentation. Tieyong Zeng, Dept. of Mathematics, HKBU. 29 May to 2 June 2017, NUS, Singapore.

2 Outline. 1. Variational Models for Rician Noise Removal 2. Two-stage Segmentation 3. Dictionary and Weighted Nuclear Norm

3 Background. We assume that the noisy image f is obtained from a perfect unknown image u via f = u + b, where b is additive Gaussian noise. TV-ROF:
\[ \min_u \frac{\mu}{2}\int_\Omega (u - f)^2\,dx + \int_\Omega |Du|, \]
where the constant $\mu > 0$ and $\Omega \subset \mathbb{R}^n$. The MAP estimation leads to the data-fitting term $\int_\Omega (u - f)^2$; smoothness comes from the edge-preserving total variation (TV) term $\int_\Omega |Du|$.
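The TV-ROF model can be tried directly with off-the-shelf tools; a minimal sketch, assuming scikit-image is available (its denoise_tv_chambolle solves a TV-ROF-type problem, with weight playing the role of the fidelity/TV trade-off):

```python
# A minimal TV-ROF denoising sketch, assuming scikit-image is installed.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

u_true = img_as_float(data.camera())                 # clean image u
f = u_true + 0.1 * np.random.randn(*u_true.shape)    # f = u + b, Gaussian b
u_hat = denoise_tv_chambolle(f, weight=0.1)          # TV-regularized estimate
```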

4 Rician noise with blur. The degradation process reads
\[ f = \sqrt{(Au + \eta_1)^2 + \eta_2^2}, \]
where A describes the blur operator and the Rician noise is built from white Gaussian noise $\eta_1, \eta_2 \sim \mathcal{N}(0, \sigma^2)$. The distribution has probability density
\[ p(f \mid u) = \frac{f}{\sigma^2}\, e^{-\frac{(Au)^2 + f^2}{2\sigma^2}}\, I_0\!\Big(\frac{Auf}{\sigma^2}\Big). \]
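A minimal sketch of simulating this degradation, assuming a Gaussian blur as a stand-in for the operator A (any linear blur would do):

```python
# Simulate f = sqrt((Au + eta1)^2 + eta2^2) with Gaussian blur as A (an assumption).
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_rician(u, sigma, blur_sigma=1.5):
    Au = gaussian_filter(u, blur_sigma)           # A: blur operator
    eta1 = sigma * np.random.randn(*u.shape)      # eta1 ~ N(0, sigma^2)
    eta2 = sigma * np.random.randn(*u.shape)      # eta2 ~ N(0, sigma^2)
    return np.sqrt((Au + eta1) ** 2 + eta2 ** 2)
```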

5 TV-model for removing Rician noise with blur. The MAP estimation leads to
\[ \inf_u \frac{1}{2\sigma^2}\int_\Omega (Au)^2\,dx - \int_\Omega \log I_0\!\Big(\frac{Auf}{\sigma^2}\Big)dx + \gamma\,\mathrm{TV}(u), \]
where $I_0$ is the zero-order modified Bessel function of the first kind, i.e., the solution of $xy'' + y' - xy = 0$. The TV prior can recover sharp edges; the constant $\gamma > 0$. Non-convex!

6 Convex variational model for denoising and deblurring [Getreuer-Tong-Vese, ISVC, 2011]
\[ \inf_u \int_\Omega G_\sigma(Au, f)\,dx + \gamma\,\mathrm{TV}(u). \quad (1) \]
Let $z = Au$ and let $c$ be a fixed constant. Then
\[ G_\sigma(z, f) = \begin{cases} H_\sigma(z) & \text{if } z \ge c\sigma, \\ H_\sigma(c\sigma) + H'_\sigma(c\sigma)(z - c\sigma) & \text{if } z < c\sigma, \end{cases} \]
\[ H_\sigma(z) = \frac{f^2 + z^2}{2\sigma^2} - \log I_0\!\Big(\frac{fz}{\sigma^2}\Big), \quad H'_\sigma(z) = \frac{z}{\sigma^2} - \frac{f}{\sigma^2}\, B\!\Big(\frac{fz}{\sigma^2}\Big), \quad B(t) = \frac{I_1(t)}{I_0(t)}. \]
Complicated, and its mathematical properties are difficult to derive.
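A sketch of evaluating $G_\sigma$ numerically; the linearization constant c (unspecified above) is left as a user-supplied parameter, and $\log I_0$ is computed stably via SciPy's exponentially scaled Bessel functions:

```python
# Sketch of the Getreuer-Tong-Vese data term; c is a placeholder constant here.
import numpy as np
from scipy.special import i0e, i1e

def log_I0(x):
    return np.log(i0e(x)) + np.abs(x)             # log I0(x), overflow-free

def G_sigma(z, f, sigma, c=1.0):                  # c: assumed, not from the source
    t = f * z / sigma**2
    H = (f**2 + z**2) / (2 * sigma**2) - log_I0(t)
    z0 = c * sigma                                # linear extension below z = c*sigma
    t0 = f * z0 / sigma**2
    H0 = (f**2 + z0**2) / (2 * sigma**2) - log_I0(t0)
    dH0 = z0 / sigma**2 - (f / sigma**2) * (i1e(t0) / i0e(t0))  # uses B = I1/I0
    return np.where(z >= z0, H, H0 + dH0 * (z - z0))
```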

7 New elegant convex TV-model. We introduce a quadratic penalty term:
\[ \inf_u \frac{1}{2\sigma^2}\int_\Omega (Au)^2\,dx - \int_\Omega \log I_0\!\Big(\frac{Auf}{\sigma^2}\Big)dx + \frac{1}{\sigma}\int_\Omega \big(\sqrt{Au} - \sqrt{f}\big)^2 dx + \gamma\,\mathrm{TV}(u). \quad (2) \]

8 The quadratic penalty term. Why the penalty term $\frac{1}{\sigma}\int_\Omega \big(\sqrt{f} - \sqrt{Au}\big)^2 dx$? Because the value of e, defined below, is always bounded.
Proposition 1. Suppose that the variables $\eta_1$ and $\eta_2$ independently follow the normal distribution $\mathcal{N}(0, \sigma^2)$. Set $f = \sqrt{(u + \eta_1)^2 + \eta_2^2}$, where u is fixed and $u \ge 0$. Then we have the inequality
\[ e := \frac{E\big((\sqrt{f} - \sqrt{u})^2\big)}{\sigma} \le \frac{\sqrt{2}\,(\pi + 2)}{\sqrt{\pi}}. \]
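A quick Monte Carlo check of the bound in Proposition 1 (a sketch; the values of u, σ and the sample size are arbitrary choices):

```python
# Monte Carlo sanity check of Proposition 1 (u, sigma, n are arbitrary).
import numpy as np

u, sigma, n = 100.0, 20.0, 10**6
eta1 = sigma * np.random.randn(n)
eta2 = sigma * np.random.randn(n)
f = np.sqrt((u + eta1) ** 2 + eta2 ** 2)
e = np.mean((np.sqrt(f) - np.sqrt(u)) ** 2) / sigma
bound = np.sqrt(2) * (np.pi + 2) / np.sqrt(np.pi)
print(e, bound)   # the empirical e stays well below the bound
```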

9 Proof of the Proposition.
Lemma 1. Assume that $a, b \in \mathbb{R}$. Then $(u^2 + 2au + b^2)^{1/4} - u^{1/2} \le \sqrt{|a| + b}$ holds whenever $u \ge 0$ and $|a| \le b$.
Based on the lemma, we get $E\big((\sqrt{f} - \sqrt{u})^2\big) \le 2E(|\eta_1|) + 2E\big((\eta_1^2 + \eta_2^2)^{1/2}\big)$. Set $Y := (\eta_1^2 + \eta_2^2)/\sigma^2$, where $\eta_1, \eta_2 \sim \mathcal{N}(0, \sigma^2)$; thus $Y \sim \chi^2(2)$. It can be shown that
\[ E(\sqrt{Y}) = \frac{E\big((\eta_1^2 + \eta_2^2)^{1/2}\big)}{\sigma} = \frac{\sqrt{2\pi}}{2}, \qquad E(|\eta_1|) = \sqrt{\frac{2}{\pi}}\,\sigma. \]

10 Approximation table. Table: the values of e for σ = 5, 10, 15, 20, 25 on the original images Cameraman, Bird, Skull, and Leg joint (numerical entries omitted).

11 The quadratic penalty term. Why the penalty term $\frac{1}{\sigma}\int_\Omega \big(\sqrt{f} - \sqrt{Au}\big)^2 dx$? By adding this term, we obtain a strictly convex model (2) for restoring blurry images with Rician noise.

12 Convexity of the proposed model (2). How to prove the convexity of the proposed model (2)? Define the function $g_0$ as $g_0(t) := -\log I_0(t) - 2\sqrt{t}$. If $g_0(t)$ is strictly convex on $[0, +\infty)$, then the convexity of the first three terms in model (2) as a whole can be easily proved by letting $t = f(x)Au(x)/\sigma^2$. Since $\int_\Omega |Du|$ is convex, the proposed model is strictly convex.

13 Convexity of the proposed model (2). How to prove the strict convexity of the function $g_0(t)$? We have
\[ g_0''(t) = -\frac{\big(I_0(t) + I_2(t)\big) I_0(t) - 2I_1^2(t)}{2 I_0^2(t)} + \frac{1}{2}\, t^{-3/2}. \]
Here $I_n(t)$ is the modified Bessel function of the first kind of order n. Based on the following lemma, we can prove that $g_0(t)$ is strictly convex.
Lemma 2. Let $h(t) = t^{3/2}\,\frac{(I_0(t) + I_2(t)) I_0(t) - 2I_1^2(t)}{I_0^2(t)}$. Then $0 \le h(t) < 1$ on $[0, +\infty)$, and hence $g_0''(t) = \frac{1}{2} t^{-3/2}\big(1 - h(t)\big) > 0$.
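A numerical sanity check of Lemma 2 (a sketch), again using exponentially scaled Bessel functions to avoid overflow:

```python
# Check numerically that 0 <= h(t) < 1 on a grid; scaled Bessels cancel exp factors.
import numpy as np
from scipy.special import i0e, i1e, ive

t = np.linspace(1e-6, 200, 100_000)
h = t**1.5 * ((i0e(t) + ive(2, t)) * i0e(t) - 2 * i1e(t)**2) / i0e(t)**2
print(h.min(), h.max())   # stays inside [0, 1)
```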

14 Existence and uniqueness of a solution.
Theorem 1. Let f be in $L^\infty(\Omega)$ with $\inf_\Omega f > 0$. Then the model (2) has a unique solution $u^*$ in $BV(\Omega)$ satisfying
\[ 0 < \frac{\sigma^2}{\big(2\sqrt{\sup_\Omega f} + \sigma\big)^2}\, \inf_\Omega f \;\le\; u^* \;\le\; \sup_\Omega f. \]
Set $c_1 := \frac{\sigma^2}{(2\sqrt{\sup_\Omega f} + \sigma)^2} \inf_\Omega f$ and $c_2 := \sup_\Omega f$, and define two functionals as follows:
\[ E_0(u) := \frac{1}{2\sigma^2}\int_\Omega u^2\,dx - \int_\Omega \log I_0\!\Big(\frac{fu}{\sigma^2}\Big)dx + \frac{1}{\sigma}\int_\Omega \big(\sqrt{u} - \sqrt{f}\big)^2 dx, \]
\[ E_1(u) := E_0(u) + \gamma \int_\Omega |Du|, \quad (3) \]
where $E_1(u)$ is the objective functional of model (2) (here with A = I).

15 Existence and uniqueness of a solution. From the integral form of the modified Bessel functions of the first kind of integer order n, we have
\[ I_0(x) = \frac{1}{\pi}\int_0^\pi e^{x\cos\theta}\,d\theta \le e^x, \quad x \ge 0, \quad (4) \]
thus, for each fixed $x \in \Omega$, $\log I_0\big(f(x)t/\sigma^2\big) \le f(x)t/\sigma^2$ for $t \ge 0$. With this, $E_1(u)$ in (3) is bounded below:
\[ E_1(u) \ge E_0(u) \ge \frac{1}{2\sigma^2}\int_\Omega u^2\,dx - \int_\Omega \log I_0\!\Big(\frac{fu}{\sigma^2}\Big)dx \ge \int_\Omega \Big(\frac{u^2}{2\sigma^2} - \frac{fu}{\sigma^2}\Big)dx \ge -\frac{1}{2\sigma^2}\int_\Omega f^2\,dx. \]

16 Existence and uniqueness of a solution. For each fixed $x \in \Omega$, define the real function g on $\mathbb{R}_+ \cup \{0\}$ by
\[ g(t) := \frac{t^2}{2\sigma^2} - \log I_0\!\Big(\frac{f(x)t}{\sigma^2}\Big) + \frac{1}{\sigma}\big(\sqrt{t} - \sqrt{f(x)}\big)^2, \]
\[ g'(t) = \frac{t}{\sigma^2} - \frac{f(x)}{\sigma^2}\,\frac{I_1\big(f(x)t/\sigma^2\big)}{I_0\big(f(x)t/\sigma^2\big)} + \frac{1}{\sigma}\Big(1 - \sqrt{\frac{f(x)}{t}}\Big). \]
We can prove that g(t) is increasing for $t \in (f(x), +\infty)$ and decreasing for $0 \le t < \frac{\sigma^2 f(x)}{(2\sqrt{f(x)} + \sigma)^2}$. This implies $g(\min(t, V)) \le g(t)$ whenever $V \ge f(x)$. With $\int_\Omega |D\inf(u, c_2)| \le \int_\Omega |Du|$ [Kornprobst, Deriche, Aubert 1999], we have $E_1(\inf(u, c_2)) \le E_1(u)$. Similarly, $E_1(\sup(u, c_1)) \le E_1(u)$. Hence the unique solution $u^*$ of model (2) must lie in $[c_1, c_2]$.

17 Minimum-maximum principle.
Lemma 3. The function $I_0(x)$ is strictly log-convex for all $x > 0$.
To prove that $I_0(x)$ is strictly log-convex on $(0, +\infty)$, it suffices to show that its logarithmic second-order derivative is positive on $(0, +\infty)$:
\[ \big(\log I_0(x)\big)'' = \frac{\frac{1}{2}\big(I_0(x) + I_2(x)\big) I_0(x) - I_1(x)^2}{I_0^2(x)}. \]
Using the Cauchy-Schwarz inequality, we obtain
\[ \frac{1}{2}\big(I_0(x) + I_2(x)\big) I_0(x) = \Big(\frac{1}{\pi}\int_0^\pi \cos^2\theta\, e^{x\cos\theta}\,d\theta\Big)\Big(\frac{1}{\pi}\int_0^\pi e^{x\cos\theta}\,d\theta\Big) \ge \Big(\frac{1}{\pi}\int_0^\pi \cos\theta\, e^{x\cos\theta}\,d\theta\Big)^2 = I_1(x)^2. \]
Since $\cos\theta\, e^{\frac{1}{2}x\cos\theta}$ and $e^{\frac{1}{2}x\cos\theta}$ are not linearly dependent as θ varies, the inequality above is strict.

18 Minimum-maximum principle.
Lemma 4. Let g(x) be a strictly convex, strictly increasing, differentiable function on $(0, +\infty)$. Assume that $0 < a < b$ and $0 < c < d$. Then $g(ac) + g(bd) > g(ad) + g(bc)$.
Based on Theorem 1, Lemma 3 and Lemma 4, we can further establish the following comparison principle (minimum-maximum principle).

19 Minimum-maximum principle.
Proposition 3. Let $f_1$ and $f_2$ be in $L^\infty(\Omega)$ with $\inf_\Omega f_1 > 0$ and $\inf_\Omega f_2 > 0$. Suppose $u_1^*$ (resp. $u_2^*$) is a solution of model (2) with $f = f_1$ (resp. $f = f_2$). If $f_1 < f_2$, then $u_1^* \le u_2^*$ a.e. in Ω.
Notation: $u_1 \wedge u_2 := \inf(u_1, u_2)$, $u_1 \vee u_2 := \sup(u_1, u_2)$, and $E_1^i(u)$ denotes $E_1(u)$ defined in (3) with $f = f_i$. Since $u_1^*$ (resp. $u_2^*$) is a solution of model (2) with $f = f_1$ (resp. $f = f_2$), we easily get
\[ E_1^1(u_1^* \wedge u_2^*) \ge E_1^1(u_1^*), \qquad E_1^2(u_1^* \vee u_2^*) \ge E_1^2(u_2^*). \]

20 Minimum-maximum principle. Adding the two inequalities together, and using the fact that $\int_\Omega |D(u_1^* \wedge u_2^*)| + \int_\Omega |D(u_1^* \vee u_2^*)| \le \int_\Omega |Du_1^*| + \int_\Omega |Du_2^*|$, we obtain
\[ \int_\Omega \Big[ \frac{(u_1^* \wedge u_2^*)^2}{2\sigma^2} - \log I_0\Big(\frac{f_1 (u_1^* \wedge u_2^*)}{\sigma^2}\Big) + \frac{1}{\sigma}\big(\sqrt{u_1^* \wedge u_2^*} - \sqrt{f_1}\big)^2 \Big] dx \]
\[ + \int_\Omega \Big[ \frac{(u_1^* \vee u_2^*)^2}{2\sigma^2} - \log I_0\Big(\frac{f_2 (u_1^* \vee u_2^*)}{\sigma^2}\Big) + \frac{1}{\sigma}\big(\sqrt{u_1^* \vee u_2^*} - \sqrt{f_2}\big)^2 \Big] dx \]
\[ \ge \int_\Omega \Big[ \frac{(u_1^*)^2}{2\sigma^2} - \log I_0\Big(\frac{f_1 u_1^*}{\sigma^2}\Big) + \frac{1}{\sigma}\big(\sqrt{u_1^*} - \sqrt{f_1}\big)^2 \Big] dx + \int_\Omega \Big[ \frac{(u_2^*)^2}{2\sigma^2} - \log I_0\Big(\frac{f_2 u_2^*}{\sigma^2}\Big) + \frac{1}{\sigma}\big(\sqrt{u_2^*} - \sqrt{f_2}\big)^2 \Big] dx. \]

21 Minimum-maximum principle. Writing $\Omega = \{u_1^* > u_2^*\} \cup \{u_1^* \le u_2^*\}$, it is clear that
\[ \int_\Omega \big( (u_1^* \wedge u_2^*)^2 + (u_1^* \vee u_2^*)^2 \big)\,dx = \int_\Omega \big( (u_1^*)^2 + (u_2^*)^2 \big)\,dx. \]
The inequality can therefore be simplified as follows:
\[ \int_{\{u_1^* > u_2^*\}} \Big[ \log \frac{I_0(f_1 u_1^*/\sigma^2)\, I_0(f_2 u_2^*/\sigma^2)}{I_0(f_1 u_2^*/\sigma^2)\, I_0(f_2 u_1^*/\sigma^2)} + \frac{2}{\sigma}\big(\sqrt{u_1^*} - \sqrt{u_2^*}\big)\big(\sqrt{f_1} - \sqrt{f_2}\big) \Big] dx \ge 0. \]
Since $I_0(t)$ is a strictly increasing function, $\log I_0$ is strictly increasing.

22 Minimum-maximum principle. Based on Lemma 3 and Lemma 4, if $f_1 < f_2$ and $u_1^* > u_2^*$, then
\[ \log I_0\Big(\frac{f_1 u_1^*}{\sigma^2}\Big) + \log I_0\Big(\frac{f_2 u_2^*}{\sigma^2}\Big) < \log I_0\Big(\frac{f_1 u_2^*}{\sigma^2}\Big) + \log I_0\Big(\frac{f_2 u_1^*}{\sigma^2}\Big), \]
which is equivalent to
\[ \log \frac{I_0(f_1 u_1^*/\sigma^2)\, I_0(f_2 u_2^*/\sigma^2)}{I_0(f_1 u_2^*/\sigma^2)\, I_0(f_2 u_1^*/\sigma^2)} < 0. \]
Moreover, the assumption $f_1 < f_2$ makes the second term of the integrand negative on $\{u_1^* > u_2^*\}$, so the integrand is negative there. We conclude that $\{u_1^* > u_2^*\}$ has zero Lebesgue measure, i.e., $u_1^* \le u_2^*$ a.e. in Ω.

23 Existence and uniqueness of a solution.
Theorem 2. Assume that $A \in \mathcal{L}(L^2(\Omega))$ is nonnegative and does not annihilate constant functions, i.e., $A1 \ne 0$. Let f be in $L^\infty(\Omega)$ with $\inf_\Omega f > 0$. Then model (2) has a solution $u^*$. Moreover, if A is injective, then the solution is unique.
Define $E_A(u)$ as
\[ E_A(u) = \frac{1}{2\sigma^2}\int_\Omega (Au)^2\,dx - \int_\Omega \log I_0\!\Big(\frac{Auf}{\sigma^2}\Big)dx + \frac{1}{\sigma}\int_\Omega \big(\sqrt{Au} - \sqrt{f}\big)^2 dx + \gamma \int_\Omega |Du|. \]
Similar to the proof of Theorem 1, $E_A$ is bounded from below. Thus we choose a minimizing sequence $\{u_n\}$ for (2), for which $\{\int_\Omega |Du_n|\}$ is bounded.

24 Existence and uniqueness of a solution. Applying the Poincaré inequality, we get
\[ \|u_n - m_\Omega(u_n)\|_2 \le C\,\|D(u_n - m_\Omega(u_n))\| = C \int_\Omega |Du_n|, \quad (5) \]
where $m_\Omega(u_n) = \frac{1}{|\Omega|}\int_\Omega u_n\,dx$, $|\Omega|$ denotes the measure of Ω, and C is a constant. Hence $\|u_n - m_\Omega(u_n)\|_2$ is bounded for each n. Since $A \in \mathcal{L}(L^2(\Omega))$ is continuous and Ω is bounded, $\{A(u_n - m_\Omega(u_n))\}$ must be bounded in $L^2(\Omega)$ and in $L^1(\Omega)$. From the boundedness of $E_A(u_n)$, $\|\sqrt{Au_n} - \sqrt{f}\|_2$ is bounded, which implies that $Au_n$ is bounded in $L^1(\Omega)$. Meanwhile, we have
\[ |m_\Omega(u_n)|\,\|A1\|_1 = \|A(u_n - m_\Omega(u_n)) - Au_n\|_1 \le \|A(u_n - m_\Omega(u_n))\|_1 + \|Au_n\|_1, \]
so $m_\Omega(u_n)$ is uniformly bounded, because $A1 \ne 0$.

25 Existence and uniqueness of a solution. Since $\{u_n - m_\Omega(u_n)\}$ and $\{m_\Omega(u_n)\}$ are bounded, the boundedness of $\{u_n\}$ in $L^2(\Omega)$, and thus in $L^1(\Omega)$, is immediate. Therefore, there exists a subsequence $\{u_{n_k}\}$ that converges weakly in $L^2(\Omega)$ to some $u^* \in L^2(\Omega)$, and $\{Du_{n_k}\}$ converges weakly-* as measures to $Du^*$. Since the linear operator A is continuous, $\{Au_{n_k}\}$ converges weakly to $Au^*$ in $L^2(\Omega)$ as well. Then, by the lower semi-continuity of the total variation and Fatou's lemma, we obtain that $u^*$ is a solution of model (2). Furthermore, if A is injective, the minimizer is unique since (2) is strictly convex.

26 Numerical algorithm. Primal-dual algorithm for solving model (2):
1: Fix τ and β. Initialize $u^0 = \bar u^0 = w^0 = \bar w^0 = f$, $v^0 = \nabla u^0$, $p^0 = (0, \dots, 0) \in \mathbb{R}^{2n}$, and $q^0 = (0, \dots, 0) \in \mathbb{R}^n$.
2: Compute $p^{k+1}$, $q^{k+1}$, $u^{k+1}$, $v^{k+1}$ and $w^{k+1}$ from
\[ p^{k+1} = \arg\max_p\, \langle \bar v^k - \nabla \bar u^k, p \rangle - \tfrac{1}{2\beta}\|p - p^k\|_2^2 = p^k + \beta(\bar v^k - \nabla \bar u^k), \quad (6) \]
\[ q^{k+1} = \arg\max_q\, \langle \bar w^k - A\bar u^k, q \rangle - \tfrac{1}{2\beta}\|q - q^k\|_2^2 = q^k + \beta(\bar w^k - A\bar u^k), \quad (7) \]
\[ u^{k+1} = \arg\min_{0 \le u \le 255}\, \langle u, \operatorname{div} p^{k+1} - A^* q^{k+1} \rangle + \tfrac{1}{2\tau}\|u - u^k\|_2^2 = P_{[0,255]}\big(u^k + \tau(A^* q^{k+1} - \operatorname{div} p^{k+1})\big), \quad (8) \]
\[ v^{k+1} = \arg\min_v\, \gamma\|v\|_1 + \langle v, p^{k+1} \rangle + \tfrac{1}{2\tau}\|v - v^k\|_2^2, \quad (9) \]
\[ w^{k+1} = \arg\min_{0 \le w \le 255}\, G(w) + \langle w, q^{k+1} \rangle + \tfrac{1}{2\tau}\|w - w^k\|_2^2, \quad (10) \]
\[ \bar u^{k+1} = 2u^{k+1} - u^k, \quad (11) \qquad \bar v^{k+1} = 2v^{k+1} - v^k, \quad (12) \qquad \bar w^{k+1} = 2w^{k+1} - w^k. \quad (13) \]
3: Stop; or set $k := k + 1$ and go to step 2.
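To show the structure of such an iteration without the Bessel machinery, here is a simplified sketch on the TV-ROF special case (A = I, quadratic data term); in the full algorithm above, the w-update (10) additionally solves the Rician data term pointwise:

```python
# A simplified primal-dual sketch for min_u gamma*TV(u) + 0.5*||u - f||^2
# (not the full Rician algorithm; same extrapolation/prox structure).
import numpy as np

def grad(u):                      # forward differences, periodic boundary
    return np.stack((np.roll(u, -1, 1) - u, np.roll(u, -1, 0) - u))

def div(p):                       # negative adjoint of grad
    return (p[0] - np.roll(p[0], 1, 1)) + (p[1] - np.roll(p[1], 1, 0))

def tv_rof_primal_dual(f, gamma=0.1, tau=0.25, beta=0.25, iters=300):
    u, ubar = f.copy(), f.copy()
    p = np.zeros((2,) + f.shape)
    for _ in range(iters):
        p += beta * grad(ubar)                                  # dual ascent
        p /= np.maximum(1.0, np.sqrt((p**2).sum(0)) / gamma)    # project ||p|| <= gamma
        u_old = u
        u = (u + tau * div(p) + tau * f) / (1 + tau)            # prox of 0.5||u - f||^2
        ubar = 2 * u - u_old                                    # extrapolation
    return u
```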

27 Numerical experiments: denoising. Figure: results (PSNR values omitted) of the different methods when removing Rician noise with σ = 20 from the natural image Cameraman. Row 1: the recovered images; Row 2: the residual images. (a) Zoomed original Cameraman; (b) MAP model (γ = 0.05); (c) Getreuer's model (λ = 20); (d) our proposed model (γ = 0.05); (e) noisy Cameraman with σ = 20; (f)-(h) residual images of the MAP model, Getreuer's model, and our model, respectively.

28 Denoising. Table: comparison of PSNR values, SSIM values and CPU time (seconds) of the different methods (noisy input, MAP, Getreuer's, ours) for the denoising case with σ = 20 and σ = 30, on Cameraman, Lumbar Spine, Brain and Liver, together with the averages (numerical entries omitted).

29 Deblurring and denoising. Figure: results (PSNR values omitted) of the different methods when removing Rician noise with σ = 15 and motion blur from the MR image Lumbar Spine. Row 1: the recovered images; Row 2: the residual images. (a) Original Lumbar Spine; (b) MAP model (γ = 0.05); (c) Getreuer's model (λ = 20); (d) our proposed model (γ = 0.04); (e) blurry Lumbar Spine image with Rician noise; (f)-(h) residual images of the MAP model, Getreuer's model, and our model, respectively.

30 Deblurring and denoising. Table: comparison of PSNR values, SSIM values and CPU time (seconds) of the different methods (degraded input, MAP, Getreuer's, ours) for deblurring with denoising, under motion blur and Gaussian blur, on Cameraman, Lumbar Spine, Brain and Liver, together with the averages (numerical entries omitted).

31 Energy comparison. Table: comparison of the energy values $E(u_{\mathrm{MAP}})$, $E(u_{\mathrm{Getreuer's}})$ and $E(u_{\mathrm{Ours}})$ for σ = 20 and σ = 30 on Cameraman, Skull, Liver and Brain, with averages (numerical entries omitted). Here
\[ E(u) := \frac{1}{2\sigma^2}\int_\Omega u^2\,dx - \int_\Omega \log I_0\!\Big(\frac{fu}{\sigma^2}\Big)dx + \gamma \int_\Omega |Du|, \]
with the same γ; the three values are nearly the same. Since one of our aims is to obtain a good approximation of the global solution of the original non-convex model (the functional above), we plugged the solutions obtained by the MAP model, Getreuer's convex model, and our model into the original functional to get the corresponding energy values. As these three energy values are quite close, Getreuer's and our approximation approaches are quite reasonable; in the denoising case, our method is comparable to the MAP and Getreuer's methods.

32 Comments on numerical results. Our convexified model is highly competitive with other recently proposed methods: higher PSNR values and better visual results; less computational time; a unique minimizer, independent of the initialization. The effort of convexification seems justified, so we continue to another non-convex problem...

33 Other non-convex models. There are many other non-convex models in image processing. Generalization? Possible... but not so easy.

34 Multiplicative Gamma noise with blur. Degradation model: $f = (Au)\eta$, where η is multiplicative noise. We assume η follows the Gamma distribution, i.e.,
\[ p_\eta(x; \theta, K) = \frac{1}{\theta^K \Gamma(K)}\, x^{K-1} e^{-x/\theta}. \]
The mean and variance of η are $K\theta$ and $K\theta^2$. We assume the mean of η equals 1.
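A minimal sketch of this degradation (θ = 1/K so that E[η] = 1), with a Gaussian blur standing in for A:

```python
# Simulate f = (Au) * eta with Gamma-distributed eta of mean 1 (theta = 1/K).
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_gamma(u, K=10, blur_sigma=1.5):
    Au = gaussian_filter(u, blur_sigma)                          # A: blur operator
    eta = np.random.gamma(shape=K, scale=1.0 / K, size=u.shape)  # E[eta] = 1
    return Au * eta
```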

35 TV-multiplicative model. MAP analysis leads to
\[ \min_u \int_\Omega \Big( \log(Au) + \frac{f}{Au} \Big)\,dx + \lambda \int_\Omega |Du|, \]
known as the AA model, with the edge-preserving TV term $\int_\Omega |Du|$ and the constant λ > 0. Non-convex.

36 New approach: convexified TV-multiplicative model. We introduce a quadratic penalty term:
\[ \min_u \int_\Omega \Big( \log(Au) + \frac{f}{Au} \Big)\,dx + \alpha \int_\Omega \Big( \sqrt{\frac{Au}{f}} - 1 \Big)^2 dx + \lambda \int_\Omega |Du|. \quad (14) \]
Here α > 0 is a penalty parameter. If $\alpha \ge 2\sqrt{6}/9$, the model (14) is convex!

37 The quadratic penalty term. More reasons to add the penalty term $\int_\Omega \big(\sqrt{Au/f} - 1\big)^2 dx$: set $Y = 1/\sqrt{\eta}$, where η follows the Gamma distribution with mean 1. It can be shown that (K is the shape parameter of the Gamma distribution)
\[ \lim_{K \to +\infty} E\big((Y - 1)^2\big) = 0. \]
For large K, Y can be well approximated by a Gaussian distribution. (We will introduce the Kullback-Leibler (KL) divergence below to reveal this relationship.) Recall that in additive Gaussian noise removal, the MAP estimation leads to the data-fitting term $\int_\Omega (v - f)^2\,dx$.

38 Approximation by a Gaussian distribution.
Proposition. Suppose that the random variable η follows the Gamma distribution with mean 1. Set $Y = 1/\sqrt{\eta}$, with mean $\mu_K$ and variance $\sigma_K^2$. Then
\[ \lim_{K \to +\infty} D_{\mathrm{KL}}\big(Y \,\|\, \mathcal{N}(\mu_K, \sigma_K^2)\big) = 0, \]
where $\mathcal{N}(\mu_K, \sigma_K^2)$ is the Gaussian distribution. It can be shown that $D_{\mathrm{KL}}\big(Y \,\|\, \mathcal{N}(\mu_K, \sigma_K^2)\big) = O(1/K)$.

39 Approximation by a Gaussian distribution (cont.) Indeed, we can prove that:
(i) $\int_0^{+\infty} p_Y(y) \log p_Y(y)\,dy = \log 2 - \log\big(\sqrt{K}\,\Gamma(K)\big) + \frac{2K+1}{2}\,\psi(K) - K$, where $\psi(K) := \frac{d \log \Gamma(K)}{dK}$ is the digamma function;
(ii) $\int_0^{+\infty} p_Y(y) \log p_{\mathcal{N}(\mu_K, \sigma_K^2)}(y)\,dy = -\frac{1}{2}\log(2\pi e \sigma_K^2)$, where $p_{\mathcal{N}(\mu_K, \sigma_K^2)}(y)$ denotes the PDF of the Gaussian distribution $\mathcal{N}(\mu_K, \sigma_K^2)$;
(iii) $D_{\mathrm{KL}}\big(Y \,\|\, \mathcal{N}(\mu_K, \sigma_K^2)\big) = O(1/K)$.

40 Approximation by a Gaussian distribution (cont.) Indeed, by calculation we get
\[ \sigma_K^2 = E(Y^2) - E^2(Y) = \frac{K\Gamma(K-1)}{\Gamma(K)} - \frac{K\Gamma^2(K - \tfrac12)}{\Gamma^2(K)}, \]
and
\[ D_{\mathrm{KL}}\big(Y \,\|\, \mathcal{N}(\mu_K, \sigma_K^2)\big) = \log 2 - \log\big(\sqrt{K}\,\Gamma(K)\big) + \frac{2K+1}{2}\,\psi(K) - K + \frac{1}{2}\log(2\pi e \sigma_K^2) \]
\[ = \log 2 + \frac{1}{2}\log\big(K\sigma_K^2\big) + O\Big(\frac{1}{K}\Big) = \log 2 + \frac{1}{2}\log\Big(\frac{1}{4} + O\Big(\frac{1}{K}\Big)\Big) + O\Big(\frac{1}{K}\Big) = O\Big(\frac{1}{K}\Big). \]

41 Approximation by a Gaussian distribution (cont.) Furthermore, if we set $Z := (Y - \mu_K)/\sigma_K$, then
\[ \lim_{K \to \infty} D_{\mathrm{KL}}\big(Z \,\|\, \mathcal{N}(0, 1)\big) = 0. \]
A simple change of variables shows $D_{\mathrm{KL}}(Z \,\|\, \mathcal{N}(0, 1)) = D_{\mathrm{KL}}(Y \,\|\, \mathcal{N}(\mu_K, \sigma_K^2))$, so Z tends to the standard Gaussian distribution $\mathcal{N}(0, 1)$.

42 Approximation graph. Figure: comparison of the PDFs of Y and $\mathcal{N}(\mu, \sigma^2)$ for different K: (a) K = 6, (b) K = 10.
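The comparison can be reproduced by sampling (a sketch; the sample size and bin count are arbitrary choices):

```python
# Compare the empirical density of Y = 1/sqrt(eta) with the moment-matched normal.
import numpy as np

for K in (6, 10):
    eta = np.random.gamma(K, 1.0 / K, size=10**6)
    Y = 1.0 / np.sqrt(eta)
    muK, sK = Y.mean(), Y.std()
    hist, edges = np.histogram(Y, bins=200, density=True)
    mid = 0.5 * (edges[:-1] + edges[1:])
    normal = np.exp(-(mid - muK)**2 / (2 * sK**2)) / (sK * np.sqrt(2 * np.pi))
    print(K, np.abs(hist - normal).max())   # the gap typically shrinks as K grows
```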

43 Numerical experiments: denoising. Figure: results of the different methods when removing multiplicative noise with K = 10. (a) Noisy Cameraman; (b) AA method; (c) RLO method; (d) our method.

44 Cauchy noise with blur. Degradation model: $f = Au + v$, where v is Cauchy noise. A random variable V follows the Cauchy distribution if it has density
\[ g(v) = \frac{1}{\pi}\, \frac{\gamma}{\gamma^2 + (v - \delta)^2}, \]
where γ > 0 is the scale parameter and δ is the localization parameter.
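A minimal sketch of generating this degradation (with δ = 0), using numpy's standard Cauchy generator:

```python
# Additive Cauchy noise with scale gamma and localization delta = 0.
import numpy as np

def degrade_cauchy(Au, gamma=0.02):
    v = gamma * np.random.standard_cauchy(size=Au.shape)  # Cauchy(0, gamma)
    return Au + v
```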

45 Cauchy noise. Figure: alpha-stable noise in 1D; note that the y-axis scales differ (roughly 30 to 120 in (a) and (b), 100 to 400 in (c)). (a) 1D noise-free signal; (b) signal degraded by additive Gaussian noise; (c) signal degraded by additive Cauchy noise. Cauchy noise is more impulsive than Gaussian noise.

46 TV-Cauchy model. MAP analysis leads to
\[ \inf_{u \in BV(\Omega)} \mathrm{TV}(u) + \frac{\lambda}{2}\int_\Omega \log\big(\gamma^2 + (Au - f)^2\big)\,dx, \]
where γ > 0 is the scale parameter and A is the blurring operator. Edge-preserving. Non-convex.

47 Image corrupted by Cauchy noise. Figure: comparison of different noisy images. (a) Original image; (b) corrupted by additive Gaussian noise; (c) corrupted by additive Cauchy noise; (d) corrupted by impulse noise; (e)-(h) zooms of the top-left corners of (a)-(d), respectively. Cauchy noise and impulse noise are more impulsive than Gaussian noise.

48 New approach: convexified TV-Cauchy model. We introduce a quadratic penalty term:
\[ \inf_{u \in BV(\Omega)} \mathrm{TV}(u) + \frac{\lambda}{2}\int_\Omega \Big( \log\big(\gamma^2 + (Au - f)^2\big) + \mu (Au - u_0)^2 \Big)\,dx. \]
Cauchy noise has an impulsive character; $u_0$ is the image obtained by applying the median filter to the noisy image. The median filter is used instead of the myriad filter for simplicity and computational time.
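Computing $u_0$ is a one-liner; a sketch assuming SciPy's median filter stands in for the median filtering step described above:

```python
# u0: median-filtered version of the noisy image f.
from scipy.ndimage import median_filter

def compute_u0(f, size=3):
    return median_filter(f, size=size)
```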

49 Numerical experiments: denoising. Figure: recovered images (PSNR in dB omitted) of the different approaches for removing Cauchy noise from the noisy image Peppers. (a) Wavelet shrinkage; (b) SURE-LET; (c) BM3D; (d) our model.

50 Denoising. Table: PSNR values of the noisy images and of the images recovered by the different methods (ROF, MD, MR, $L^1$-TV, ours) with ξ = 0.02, on Peppers, Parrot, Cameraman, Lena, Baboon, Goldhill and Boat; the last line reports the averages (numerical entries omitted).

51 Deblurring and denoising. Figure: zoomed-in regions of the images recovered from blurry images with Cauchy noise. First row: details of the original images; second row: details restored by the $L^1$-TV approach; third row: details restored by our approach.

52 Deblurring and denoising. Table: PSNR values of the degraded images and of the images recovered by the different methods (ROF, MD, MR, $L^1$-TV, ours) with ξ = 0.02, on Peppers, Parrot, Cameraman, Lena, Baboon, Goldhill and Boat; the last line reports the averages (numerical entries omitted).

53 Remarks. We introduced three convex models based on total variation. Under mild conditions, our models have unique solutions. Because of convexity, fast algorithms can be employed. Numerical experiments suggest good performance of the models. Generalization?

54 Outline. 1. Variational Models for Rician Noise Removal 2. Two-stage Segmentation 3. Dictionary and Weighted Nuclear Norm

55 Mumford-Shah model (1989). The Mumford-Shah model is an energy minimization problem for image segmentation:
\[ \min_{g, \Gamma} \frac{\lambda}{2}\int_\Omega (f - g)^2\,dx + \frac{\mu}{2}\int_{\Omega \setminus \Gamma} |\nabla g|^2\,dx + \mathrm{Length}(\Gamma), \]
where f is the observed image, g a piecewise smooth approximation of f, and Γ the boundary of the segmented regions; λ and µ are positive. Non-convex, very difficult to solve!

56 Simplifying the Mumford-Shah model: the Chan-Vese model. When $\mu \to +\infty$, it reduces to
\[ \min_{\Gamma, c_1, c_2} \frac{\lambda}{2}\int_{\Omega_1} (f - c_1)^2\,dx + \frac{\lambda}{2}\int_{\Omega_2} (f - c_2)^2\,dx + \mathrm{Length}(\Gamma), \]
where Ω is separated into $\Omega_1$, $\Omega_2$ by Γ. The level-set method can be applied. Rather complex! Still non-convex. Multi-phase?

57 Our approach. We propose a novel two-stage segmentation approach. First stage: solve a convexified variant of the Mumford-Shah model. Second stage: use a data clustering method to threshold the computed solution.

58 Stage one. $\mathrm{Length}(\Gamma) \le \int_\Omega |\nabla u|$, with equality when u is binary. The phases of g can be obtained from u by thresholding.

59 Stage one. $\mathrm{Length}(\Gamma) \le \int_\Omega |\nabla u|$, with equality when u is binary. The phases of g can be obtained from u by thresholding. Observation (especially when g is nearly binary): the jump set of g is close to the jump set of u, so
\[ \int_\Omega |\nabla g| \approx \int_\Omega |\nabla u| \approx \mathrm{Length}(\Gamma). \]

60 Stage one: unique minimizer. Our convex variant of the Mumford-Shah model (cf. Hintermüller's $H^1$-norm model for image restoration) is
\[ E(g) = \frac{\lambda}{2}\int_\Omega (f - Ag)^2\,dx + \frac{\mu}{2}\int_\Omega |\nabla g|^2\,dx + \int_\Omega |\nabla g|\,dx. \quad (15) \]
Its discrete version:
\[ \frac{\lambda}{2}\|f - Ag\|_2^2 + \frac{\mu}{2}\|\nabla g\|_2^2 + \|\nabla g\|_1. \]
Theorem. Let Ω be a bounded connected open subset of $\mathbb{R}^2$ with a Lipschitz boundary. Let $\mathrm{Ker}(A) \cap \mathrm{Ker}(\nabla) = \{0\}$ and $f \in L^2(\Omega)$, where A is a bounded linear operator from $L^2(\Omega)$ to itself. Then E(g) has a unique minimizer $\bar g \in W^{1,2}(\Omega)$.
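For intuition, a gradient-descent sketch of the discrete stage-one objective with A = I and a smoothed TV term (the actual model is better solved exactly by convex splitting methods such as primal-dual algorithms):

```python
# Smoothed stage-one objective: lam/2||f-g||^2 + mu/2||grad g||^2 + sum sqrt(|grad g|^2 + eps^2).
import numpy as np

def grad(u):
    return np.stack((np.roll(u, -1, 1) - u, np.roll(u, -1, 0) - u))

def div(p):
    return (p[0] - np.roll(p[0], 1, 1)) + (p[1] - np.roll(p[1], 1, 0))

def stage_one(f, lam=20.0, mu=1.0, eps=1e-2, step=5e-3, iters=1000):
    g = f.copy()
    for _ in range(iters):
        Dg = grad(g)
        norm = np.sqrt((Dg**2).sum(0) + eps**2)
        dE = lam * (g - f) - mu * div(Dg) - div(Dg / norm)  # gradient of the energy
        g -= step * dE
    return g
```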

61 Stage two. With the minimizer $\bar g$ from the first stage, we obtain a segmentation by thresholding $\bar g$; we use the K-means method to obtain proper thresholds. This gives fast multiphase segmentation: the number of phases K and the thresholds ρ are determined after $\bar g$ is calculated, and changing K or ρ costs little computation since there is no need to recalculate $\bar g$! Users can try different K and ρ; see the sketch below.
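A sketch of stage two with scikit-learn's k-means; note that changing K only re-runs this cheap step, not the variational solve:

```python
# Threshold the smooth minimizer g_bar into K phases via 1-D k-means.
import numpy as np
from sklearn.cluster import KMeans

def threshold_phases(g_bar, K=4):
    km = KMeans(n_clusters=K, n_init=10).fit(g_bar.reshape(-1, 1))
    centers = np.sort(km.cluster_centers_.ravel())
    rho = 0.5 * (centers[:-1] + centers[1:])   # thresholds between cluster centers
    return np.digitize(g_bar, rho)             # phase label per pixel
```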

62 Extensions to other noise models. First stage: solve
\[ \min_g \Big\{ \lambda \int_\Omega \big(Ag - f \log Ag\big)\,dx + \frac{\mu}{2}\int_\Omega |\nabla g|^2\,dx + \int_\Omega |\nabla g|\,dx \Big\}. \quad (16) \]
The data-fitting term is good for Poisson noise (from MAP analysis) and also suitable for Gamma noise (Steidl and Teuber '10). The objective functional is convex (solved by Chambolle-Pock) and admits a unique solution if $\mathrm{Ker}(A) \cap \mathrm{Ker}(\nabla) = \{0\}$. Second stage: threshold the solution to get the phases.

63 Example: Poisson noise with motion blur. Figure: original image; noisy and blurred image; results of Yuan et al. (10), Dong et al. (10), Sawatzky et al. (13), and our method.

64 Example: Gamma noise. Figure: original image; noisy image; results of Yuan et al. (10), Li et al. (10), and our method.

65 Three-stage model. We propose a SLaT (Smoothing, Lifting and Thresholding) method for multiphase segmentation of color images corrupted by different types of degradations: noise, information loss, and blur. Stage one: smoothing. Stage two: lifting. Stage three: thresholding.

66 Three-stage model. Let the given degraded image be in the color space $V_1$. Smoothing: the convex variational model ((15) or (16)) for grayscale images is applied in parallel to each channel of $V_1$, yielding a restored smooth image. Color dimension lifting: we transform the smoothed color image to a secondary color space $V_2$ that provides complementary information, then combine the channels from $V_1$ and $V_2$ into a new vector-valued image. Thresholding: according to the desired number of phases K, we apply multichannel thresholding to the combined $V_1$-$V_2$ image to obtain a segmented image.

67 Smoothing stage. Let $f = (f_1, \dots, f_d)$ be a given color image with channels $f_i : \Omega \to \mathbb{R}$, $i = 1, \dots, d$, and denote by $\Omega_0^i$ the set where $f_i$ is known. We consider minimizing the functionals
\[ E(g_i) = \frac{\lambda}{2}\int_\Omega \omega_i\, \Phi(f_i, g_i)\,dx + \frac{\mu}{2}\int_\Omega |\nabla g_i|^2\,dx + \int_\Omega |\nabla g_i|\,dx, \quad i = 1, \dots, d, \quad (17) \]
where $\omega_i(\cdot)$ is the characteristic function of $\Omega_0^i$, i.e.
\[ \omega_i(x) = \begin{cases} 1, & x \in \Omega_0^i, \\ 0, & x \in \Omega \setminus \Omega_0^i. \end{cases} \quad (18) \]
For Φ in (17) we consider the following options: 1. $\Phi(f, g) = (f - Ag)^2$, the usual choice; 2. $\Phi(f, g) = Ag - f \log(Ag)$ if the data are corrupted by Poisson noise.

68 Uniqueness and existence. The theorem below establishes the existence and uniqueness of the minimizer of (17), where we define the linear operator $(\omega_i A)$ by $(\omega_i A) : u(x) \in L^2(\Omega) \mapsto \omega_i(x)(Au)(x) \in L^2(\Omega)$.
Theorem. Let Ω be a bounded connected open subset of $\mathbb{R}^2$ with a Lipschitz boundary. Let $A : L^2(\Omega) \to L^2(\Omega)$ be bounded and linear. For $i \in \{1, \dots, d\}$, assume that $f_i \in L^2(\Omega)$ and that $\mathrm{Ker}(\omega_i A) \cap \mathrm{Ker}(\nabla) = \{0\}$, where Ker stands for the null-space. Then $E(g_i)$ in (17), with either $\Phi(f_i, g_i) = (f_i - Ag_i)^2$ or $\Phi(f_i, g_i) = Ag_i - f_i \log(Ag_i)$, has a unique minimizer $\bar g_i \in W^{1,2}(\Omega)$.

69 Segmentation of real-world color images. Figure: four-phase sunflower segmentation. (A) Gaussian noisy image (mean 0, variance 0.1) and results (A1) FRC, (A2) PCMS, (A3) DPP, (A4) ours; (B) Gaussian noisy image with 60% information loss and results (B1)-(B4); (C) blurry image with Gaussian noise and results (C1)-(C4).

70 Observations on two/three-stage segmentation. Convex segmentation model with a unique solution; can be solved easily and fast. No need to solve the model again when the thresholds or the number of phases change. Easily extendable, e.g., to blurry images and non-Gaussian noise. Links image segmentation and image restoration. Efficient algorithms for color image segmentation.

71 Outline. 1. Variational Models for Rician Noise Removal 2. Two-stage Segmentation 3. Dictionary and Weighted Nuclear Norm

72 Deblurring under impulse noise. Degradation model: $g = N_{\mathrm{imp}}(Hu)$, where g is the corrupted image and H is the blur kernel.
One-phase model: $\min_u J(u) + \lambda F(Hu - g)$, where J is the regularization term, λ a positive parameter, and F the data-fidelity term.
Two-phase methods: first detect a noise candidate set $\mathcal{N}$ (see the sketch below), then solve
\[ \min_u J(u) + \lambda \sum_s F\big(\Lambda_s (Hu - g)_s\big), \qquad \Lambda_s = \begin{cases} 0, & \text{if } s \in \mathcal{N}, \\ 1, & \text{otherwise}. \end{cases} \]
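A sketch of building the noise candidate set $\mathcal{N}$, assuming a simple median-deviation detector (adaptive median or ACWMF detectors are the usual, more refined choices):

```python
# Flag pixels deviating strongly from a local median as impulse-noise candidates.
import numpy as np
from scipy.ndimage import median_filter

def noise_candidates(g, size=3, thresh=0.2):
    med = median_filter(g, size=size)
    return np.abs(g - med) > thresh   # True where pixel s belongs to N

# Lambda_s = 0 on N, 1 elsewhere:
# Lambda = (~noise_candidates(g)).astype(float)
```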

73 Variational model [Ma, Yu, and Zeng, SIIMS 2013]. To restore the image from the degradation model $g = N_{\mathrm{imp}}(Hu)$, we consider the following model:
\[ \min_{\alpha_s, u, D} \sum_{s \in P} \mu_s \|\alpha_s\|_0 + \sum_{s \in P} \|D\alpha_s - R_s u\|_2^2 + \eta \|\nabla u\|_1 + \lambda \|\Lambda(Hu - g)\|_1, \]
where $\|\nabla u\|_1$ denotes the discrete version of the isotropic total variation norm:
\[ \|\nabla u\|_1 = \sum_s |(\nabla u)_s|, \qquad |(\nabla u)_s| = \sqrt{\big((\nabla u)_s^1\big)^2 + \big((\nabla u)_s^2\big)^2}. \quad (19) \]

74 Experimental results. Table: PSNR (dB) values of the various methods (FTVd, Dong, Cai1, Cai2, Yan, ours) on the test images Barbara, Cameraman and Lena corrupted by Gaussian blur and random-valued noise with noise levels r = 20%, 40%, 60% (numerical entries omitted).

75 Numerical experiments: denoising. Figure: recovered images (PSNR in dB omitted) of the different methods (degraded, FTVd, Dong, Cai1, Cai2, Yan, ours) on the image Cameraman corrupted by Gaussian blur and random-valued noise with noise level 60%.

76 SSMS deblurring. 1. Apply the Wiener filter to remove the blur; 2. apply SSMS denoising to remove the colored noise. The underlying clean patch can be estimated from the noisy patch f via
\[ \hat f = \sum_{l \in \Lambda} \langle f, \phi_{k_0, l} \rangle\, \phi_{k_0, l}, \qquad \Lambda = \{ l : |\langle f, \phi_{k_0, l} \rangle| > T \}, \]
where the threshold value T depends on the noise variance. Drawback: the Wiener filter does not work well when the noise level is high.
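A sketch of the thresholding estimate, assuming a 2-D DCT as a stand-in for the SSMS best orthogonal basis $\Phi_{k_0}$:

```python
# Hard-threshold patch coefficients in an orthogonal basis (DCT as a stand-in).
import numpy as np
from scipy.fft import dctn, idctn

def threshold_patch(patch, T):
    coef = dctn(patch, norm='ortho')     # coefficients <f, phi_l>
    coef[np.abs(coef) <= T] = 0.0        # keep Lambda = {l : |coef_l| > T}
    return idctn(coef, norm='ortho')
```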

77 Deblurring via total variation based SSMS [Ma and Zeng, Journal of Scientific Computing 2015]. Model:
\[ \min_{u, D, \alpha_s} \frac{\lambda}{2}\|Au - g\|_2^2 + \frac{\beta}{2}\sum_{s \in P} \|R_s u - D\alpha_s\|_2^2 + \sum_{s \in P} \mu_s \|\alpha_s\|_1 + \|\nabla u\|_1. \quad (20) \]
For each patch $R_s u$, a best orthogonal basis $\Phi_{k_0}$ of size $N \times N$ is selected by SSMS. We employ the alternating minimization method to solve the model.

78 Experiments. The proposed algorithm is compared with five related existing image deblurring methods: 1. the TV based method; 2. the Fourier-wavelet regularized deconvolution (ForWaRD) method; 3. Structured Sparse Model Selection (SSMS) deblurring; 4. the framelet based image deblurring approach; 5. the non-local TV based method.

79 Experimental results. Table: comparison of the PSNR (dB) of the results recovered by the different methods (TV, ForWaRD, SSMS, Framelet, NLTV, ours) at noise level σ = 2, for box, Gaussian and motion blur kernels on Boat, Lena and Cameraman (numerical entries omitted).

80 Experimental results. Table: comparison of the PSNR (dB) of the results recovered by the different methods (TV, ForWaRD, SSMS, Framelet, NLTV, ours) at noise level σ = 10, for box, Gaussian and motion blur kernels on Boat, Lena and Cameraman (numerical entries omitted).

81 Low rank. Figure: grouping blocks. BM3D: grouping by matching, then collaborative filtering. Low rank: grouping by matching, then low-rank minimization.

82 Low rank minimization.
\[ \min_X \|Y - X\|_F^2, \quad \text{s.t. } \mathrm{rank}(X) \le r. \]
Drawback: nonconvex. Convex relaxation: nuclear norm minimization (NNM):
\[ \min_X \frac{1}{2}\|Y - X\|_F^2 + \lambda \|X\|_*, \quad (21) \]
with solution $\hat X = U S_\lambda(\Sigma) V^T$. Here $Y = U\Sigma V^T$ is the singular value decomposition (SVD) of the matrix Y, where
\[ \Sigma = \begin{pmatrix} \mathrm{diag}(\sigma_1, \sigma_2, \dots, \sigma_n) \\ 0 \end{pmatrix}, \]
and the singular value shrinkage operator is defined as $S_\lambda(\Sigma) = \max(\Sigma - \lambda, 0)$, which is in fact the proximal operator of the nuclear norm function (Cai, Candès and Shen).
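The closed-form NNM solution (21) in a few lines of numpy:

```python
# Singular value soft-thresholding: solves min_X 0.5||Y - X||_F^2 + lam ||X||_*.
import numpy as np

def svt(Y, lam):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
```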

83 WNNM. NNM uses the same value λ to penalize every singular value. However, larger singular values are more important and should shrink less in many cases. The weighted nuclear norm minimization (WNNM) problem:
\[ \min_X \frac{1}{2}\|Y - X\|_F^2 + \|X\|_{w,*}, \quad (22) \]
where
\[ \|X\|_{w,*} = \sum_{i=1}^n w_i\, \sigma_i(X), \quad (23) \]
with $\sigma_1(X) \ge \sigma_2(X) \ge \dots \ge \sigma_n(X) \ge 0$, the weight vector $w = [w_1, w_2, \dots, w_n]$, and $w_i \ge 0$.

84 Existing mathematical properties of WNNM.
Proposition. Assume $Y \in \mathbb{R}^{m \times n}$ with SVD $Y = U\Sigma V^T$, and $0 \le w_1 \le w_2 \le \dots \le w_n$. Then the global optimal solution of the WNNM problem (22) is
\[ \hat X = U D V^T, \quad (24) \]
where $D = \begin{pmatrix} \mathrm{diag}(d_1, d_2, \dots, d_n) \\ 0 \end{pmatrix}$ is a diagonal non-negative matrix with $d_i = \max(\sigma_i - w_i, 0)$, $i = 1, \dots, n$.
Proposition. Assume $Y \in \mathbb{R}^{m \times n}$ with SVD $Y = U\Sigma V^T$. Then the solution of the WNNM problem (22) can be expressed as $\hat X = U D V^T$, where $D = \begin{pmatrix} \mathrm{diag}(d_1, d_2, \dots, d_n) \\ 0 \end{pmatrix}$ is a diagonal non-negative matrix and $(d_1, d_2, \dots, d_n)$ solves the convex optimization problem
\[ \min_{d_1, \dots, d_n} \sum_{i=1}^n \frac{1}{2}(d_i - \sigma_i)^2 + w_i d_i, \quad \text{s.t. } d_1 \ge d_2 \ge \dots \ge d_n \ge 0. \quad (25) \]

85 Existing mathematical properties of WNNM. It is important to find an explicit solution of Problem (22) with the weights in an arbitrary order. This is one of our main contributions.
Theorem 3. Assume that $(a_1, \dots, a_n) \in \mathbb{R}^n$ and $n \in \mathbb{N}$. Then the minimization problem
\[ \min_{d_1 \ge \dots \ge d_n \ge 0} \sum_{i=1}^n (d_i - a_i)^2 \quad (26) \]
has a unique global optimal solution in closed form. Moreover, for any $k \in \mathbb{N}$, denote by $M_k = (d_1, \dots, d_k)$ the solution to the k-th problem of (26) (that is, (26) with n = k); in particular, denote by $\bar d_k = d_k$ its last component, and set $\bar d_0 = +\infty$.

86 Existing mathematical properties of WNNM. If
\[ s_0 := \sup\Big\{ s \in \mathbb{N} \;\Big|\; \bar d_{s-1} \ge \frac{\sum_{k=s}^n a_k}{n - s + 1},\; s = 1, \dots, n \Big\}, \quad (27) \]
then the solution $M_n = (d_1, \dots, d_n)$ to (26) satisfies
\[ (d_1, \dots, d_{s_0 - 1}) = M_{s_0 - 1} \quad \text{and} \quad d_{s_0} = \dots = d_n = \max\Big( \frac{\sum_{k=s_0}^n a_k}{n - s_0 + 1},\, 0 \Big). \quad (28) \]
Furthermore, the global solution of (22) is given in (24) with $D = \begin{pmatrix} \mathrm{diag}(d_1, d_2, \dots, d_n) \\ 0 \end{pmatrix}$.
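The construction in Theorem 3 repeatedly averages a violating tail; a sketch computing the solution of (26) with the classical pool-adjacent-violators scheme (which performs the same tail-averaging), followed by clipping at 0:

```python
# Solve min sum (d_i - a_i)^2 s.t. d_1 >= ... >= d_n >= 0 via PAV + clipping.
import numpy as np

def solve_26(a):
    blocks = []                                   # (mean, count), means decreasing
    for ai in a:
        mean, cnt = float(ai), 1
        while blocks and blocks[-1][0] <= mean:   # merge while order is violated
            m, c = blocks.pop()
            mean = (m * c + mean * cnt) / (c + cnt)
            cnt += c
        blocks.append((mean, cnt))
    d = np.concatenate([np.full(c, m) for m, c in blocks])
    return np.maximum(d, 0.0)                     # enforce d_n >= 0
```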

87 Mathematical properties. Uniqueness is clear since the objective function in (26) is strictly convex. We employ induction to prove the existence result for (26). First, for n = 1, (26) has the unique solution $d_1 = \max\{a_1, 0\}$. Assume that for any $k \le n - 1$ the k-th problem of (26) has a unique solution $M_k = (d_1, \dots, d_k)$. To solve the n-th problem (26), we compare $\bar d_{n-1}$ and $a_n$:
if $\bar d_{n-1} \ge a_n$, then $d_n = \max\{a_n, 0\}$ and (26) is solvable; moreover, the solution $M_n$ satisfies (28) with $s_0 = n$;
if $\bar d_{n-1} < a_n$, we take $d = d_{n-1} = d_n$ and have to solve the reduced (n-1)-th problem
\[ \min_{d_1 \ge \dots \ge d_{n-2} \ge d \ge 0} \Big( \sum_{i=1}^{n-2} (d_i - a_i)^2 + 2\Big(d - \frac{a_{n-1} + a_n}{2}\Big)^2 \Big). \quad (29) \]

88 Mathematical properties. In the latter case, we solve (29) by comparing $\bar d_{n-2}$ and $\frac{a_{n-1} + a_n}{2}$ in the same way:
if $\bar d_{n-2} \ge \frac{a_{n-1} + a_n}{2}$, then $d_{n-1} = d_n = d = \max\{\frac{a_{n-1} + a_n}{2}, 0\}$ and (26) is solvable; moreover, the solution $M_n$ satisfies (28) with $s_0 = n - 1$;
if $\bar d_{n-2} < \frac{a_{n-1} + a_n}{2}$, we take $d = d_{n-2} = d_{n-1} = d_n$ and have to solve the reduced (n-2)-th problem
\[ \min_{d_1 \ge \dots \ge d_{n-3} \ge d \ge 0} \Big( \sum_{i=1}^{n-3} (d_i - a_i)^2 + 3\Big(d - \frac{a_{n-2} + a_{n-1} + a_n}{3}\Big)^2 \Big). \quad (30) \]

89 Mathematical properties. We repeat the above argument and stop at the $s_0$-th problem
\[ \min_{d_1 \ge \dots \ge d_{s_0 - 1} \ge d \ge 0} \Big( \sum_{i=1}^{s_0 - 1} (d_i - a_i)^2 + (n - s_0 + 1)\Big(d - \frac{\sum_{k=s_0}^n a_k}{n - s_0 + 1}\Big)^2 \Big), \quad (31) \]
where $d = d_{s_0} = \dots = d_n$ and $s_0$ is defined in (27). Since $\bar d_{s_0 - 1} \ge \frac{\sum_{k=s_0}^n a_k}{n - s_0 + 1}$, we have $d_{s_0} = \dots = d_n = d = \max\big(\frac{\sum_{k=s_0}^n a_k}{n - s_0 + 1}, 0\big)$. Then (26) is solvable and the solution $M_n$ satisfies (28). This completes the proof of the theorem.

90 Deblurring based on WNNM. The proposed model is
\[ \min_u \sum_{j \in P} \|R_j u\|_{w,*} + \|\nabla u\|_1 + \frac{\lambda}{2}\|Au - g\|_F^2, \quad (32) \]
where P denotes the set of indices where small image patches exist. The operator $R_j$ first collects the patches similar to the reference patch located at j, and then stacks those patches into a matrix, which should be low rank. Introduce an auxiliary variable v:
\[ \min_{u, v} \sum_{j \in P} \|R_j v\|_{w,*} + \|\nabla u\|_1 + \frac{\lambda}{2}\|Au - g\|_F^2, \quad \text{s.t. } v = u. \quad (33) \]
Then, using the split Bregman method, we get
\[ (u^{k+1}, v^{k+1}) = \arg\min_{u, v} \sum_{j \in P} \|R_j v\|_{w,*} + \|\nabla u\|_1 + \frac{\lambda}{2}\|Au - g\|_F^2 + \frac{\alpha}{2}\|u - v - b^k\|_F^2, \]
\[ b^{k+1} = b^k + \big(v^{k+1} - u^{k+1}\big). \]

91 Minimization for each patch group. For each group of similar patches $R_j v$, we solve
\[ \min_{R_j v} \|R_j v\|_{w,*} + \frac{\alpha}{2}\big\|R_j v - \big(R_j u^{k+1} - R_j b^k\big)\big\|_F^2. \]
Solving this with the WNNM results above gives
\[ R_j v^{k+1} = U \begin{pmatrix} \mathrm{diag}(d_1, d_2, \dots, d_n) \\ 0 \end{pmatrix} V^T, \]
where $\big(R_j u^{k+1} - R_j b^k\big) = U \begin{pmatrix} \mathrm{diag}(\sigma_{ub,1}, \sigma_{ub,2}, \dots, \sigma_{ub,n}) \\ 0 \end{pmatrix} V^T$, $d_i = \max\big(0, \sigma_{ub,i} - w_i/\alpha\big)$, and $w_i = \frac{c\sqrt{N_{sp}}}{\sigma_i(R_j v^{k+1}) + \varepsilon}$ for $i = 1, 2, \dots, n$.
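A sketch of this per-group step, with the weights estimated from the singular values of the current group as a one-step approximation (the shrinkage and weight formulas follow the text above; c and eps are tuning parameters):

```python
# One WNNM step on a stacked patch-group matrix M = R_j u^{k+1} - R_j b^k.
import numpy as np

def wnnm_group(M, alpha, c, eps=1e-6):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    n_sp = M.shape[1]                       # number of similar patches in the group
    w = c * np.sqrt(n_sp) / (s + eps)       # weights from current singular values
    d = np.maximum(s - w / alpha, 0.0)      # weighted singular value shrinkage
    return U @ np.diag(d) @ Vt
```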

92 Experimental results. Table: comparison of the PSNR (dB) of the results recovered by the different methods (ROF, ForWaRD, Framelet, NLTV, BM3DDEB, ours) at noise level σ = 5, for box, Gaussian and motion blur kernels on Barbara, Cameraman and Lena (numerical entries omitted).

93 Comparison. Figure: recovered results (with PSNR in dB) of the different methods on the image Cameraman corrupted by 9×9 uniform blur and Gaussian noise with standard deviation σ = 5. Original; degraded: 20.57; ROF: 24.00; ForWaRD: 23.83; Framelet: 24.67; NLTV: 24.73; BM3DDEB: 24.92; ours: 25.84.

94 Remarks. Good recovery results (blur + Gaussian noise / impulse noise). Extends to non-Gaussian noises such as Poisson noise and multiplicative noise. Extends to other image applications such as image classification.

95 Summary. There are many non-convex image recovery and segmentation models; we considered three ways to handle them: 1. add an extra convex term; 2. relax the functional; 3. compute the analytic solution. Messages: 1. an extra convex term is good for overcoming non-convexity; 2. the relaxation technique is good for segmentation; 3. sparsity models lead to good image recovery results.

96 References.
L. Chen and T. Zeng, A Convex Variational Model for Restoring Blurred Images with Large Rician Noise, J. Math. Imaging Vis., 2015.
Y. Dong and T. Zeng, A Convex Variational Model for Restoring Blurred Images with Multiplicative Noise, SIAM J. Imaging Sci., 2013.
F. Sciacchitano, Y. Dong, and T. Zeng, Variational Approach for Restoring Blurred Images with Cauchy Noise, SIAM J. Imaging Sci., 2015.
L. Ma, J. Yu, and T. Zeng, Sparse Representation Prior and Total Variation Based Image Deblurring Under Impulse Noise, SIAM J. Imaging Sci., 2013.
L. Ma and T. Zeng, Image Deblurring via Total Variation Based Structured Sparse Model Selection, J. Sci. Comput., 2015.
L. Ma, L. Xu, and T. Zeng, Low Rank Prior and Total Variation Regularization for Image Deblurring, J. Sci. Comput., 2017.

97 References.
X. Cai, R. Chan, and T. Zeng, A Two-Stage Image Segmentation Method Using a Convex Variant of the Mumford-Shah Model and Thresholding, SIAM J. Imaging Sci., 2013.
R. Chan, H. Yang, and T. Zeng, A Two-Stage Image Segmentation Method for Blurry Images with Poisson or Multiplicative Gamma Noise, SIAM J. Imaging Sci., 2014.
X. Cai, R. Chan, M. Nikolova, and T. Zeng, A Three-Stage Approach for Segmenting Degraded Color Images: Smoothing, Lifting and Thresholding (SLaT), J. Sci. Comput., to appear.

98 Contributors. Xiaohao Cai (University of Cambridge), Raymond Chan (The Chinese University of Hong Kong), Liyuan Chen (University of Texas at Dallas), Yiqiu Dong (Technical University of Denmark), Jian Yu (Beijing Jiaotong University), Lionel Moisan (Université Paris Descartes), Zhi Li (Michigan State University), Liyan Ma (Institute of Microelectronics, CAS), Mila Nikolova (École Normale Supérieure de Cachan), Federica Sciacchitano (Technical University of Denmark), Li Xu (Chinese Academy of Sciences), Hongfei Yang (The Chinese University of Hong Kong).

99 THANK YOU!


More information

Adaptive Primal Dual Optimization for Image Processing and Learning

Adaptive Primal Dual Optimization for Image Processing and Learning Adaptive Primal Dual Optimization for Image Processing and Learning Tom Goldstein Rice University tag7@rice.edu Ernie Esser University of British Columbia eesser@eos.ubc.ca Richard Baraniuk Rice University

More information

arxiv: v4 [cs.cv] 31 May 2017

arxiv: v4 [cs.cv] 31 May 2017 ANALYZING THE WEIGHTED NUCLEAR NORM MINIMIZATION AND NUCLEAR NORM MINIMIZATION BASED ON GROUP SPARSE REPRESENTATION Zhiyuan Zha, Xinggan Zhang, Yu Wu, Qiong Wang, Yechao Bai and Lan Tang School of Electronic

More information

ACCELERATED FIRST-ORDER PRIMAL-DUAL PROXIMAL METHODS FOR LINEARLY CONSTRAINED COMPOSITE CONVEX PROGRAMMING

ACCELERATED FIRST-ORDER PRIMAL-DUAL PROXIMAL METHODS FOR LINEARLY CONSTRAINED COMPOSITE CONVEX PROGRAMMING ACCELERATED FIRST-ORDER PRIMAL-DUAL PROXIMAL METHODS FOR LINEARLY CONSTRAINED COMPOSITE CONVEX PROGRAMMING YANGYANG XU Abstract. Motivated by big data applications, first-order methods have been extremely

More information

l 0 TV: A Sparse Optimization Method for Impulse Noise Image Restoration

l 0 TV: A Sparse Optimization Method for Impulse Noise Image Restoration IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 1 l 0 TV: A Sparse Optimization Method for Impulse Noise Image Restoration Ganzhao Yuan, Bernard Ghanem arxiv:1802.09879v2 [cs.na] 28 Dec

More information

A Variational Approach to Reconstructing Images Corrupted by Poisson Noise

A Variational Approach to Reconstructing Images Corrupted by Poisson Noise J Math Imaging Vis c 27 Springer Science + Business Media, LLC. Manufactured in The Netherlands. DOI: 1.7/s1851-7-652-y A Variational Approach to Reconstructing Images Corrupted by Poisson Noise TRIET

More information

Laplace s Equation. Chapter Mean Value Formulas

Laplace s Equation. Chapter Mean Value Formulas Chapter 1 Laplace s Equation Let be an open set in R n. A function u C 2 () is called harmonic in if it satisfies Laplace s equation n (1.1) u := D ii u = 0 in. i=1 A function u C 2 () is called subharmonic

More information

Design of Image Adaptive Wavelets for Denoising Applications

Design of Image Adaptive Wavelets for Denoising Applications Design of Image Adaptive Wavelets for Denoising Applications Sanjeev Pragada and Jayanthi Sivaswamy Center for Visual Information Technology International Institute of Information Technology - Hyderabad,

More information

Accelerated primal-dual methods for linearly constrained convex problems

Accelerated primal-dual methods for linearly constrained convex problems Accelerated primal-dual methods for linearly constrained convex problems Yangyang Xu SIAM Conference on Optimization May 24, 2017 1 / 23 Accelerated proximal gradient For convex composite problem: minimize

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

A Primal-Dual Method for Total Variation-Based. Wavelet Domain Inpainting

A Primal-Dual Method for Total Variation-Based. Wavelet Domain Inpainting A Primal-Dual Method for Total Variation-Based 1 Wavelet Domain Inpainting You-Wei Wen, Raymond H. Chan, Andy M. Yip Abstract Loss of information in a wavelet domain can occur during storage or transmission

More information

A Primal-dual Three-operator Splitting Scheme

A Primal-dual Three-operator Splitting Scheme Noname manuscript No. (will be inserted by the editor) A Primal-dual Three-operator Splitting Scheme Ming Yan Received: date / Accepted: date Abstract In this paper, we propose a new primal-dual algorithm

More information

6 The SVD Applied to Signal and Image Deblurring

6 The SVD Applied to Signal and Image Deblurring 6 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

l0tv: A Sparse Optimization Method for Impulse Noise Image Restoration

l0tv: A Sparse Optimization Method for Impulse Noise Image Restoration l0tv: A Sparse Optimization Method for Impulse Noise Image Restoration Item Type Article Authors Yuan, Ganzhao; Ghanem, Bernard Citation Yuan G, Ghanem B (2017) l0tv: A Sparse Optimization Method for Impulse

More information

Asymmetric Cheeger cut and application to multi-class unsupervised clustering

Asymmetric Cheeger cut and application to multi-class unsupervised clustering Asymmetric Cheeger cut and application to multi-class unsupervised clustering Xavier Bresson Thomas Laurent April 8, 0 Abstract Cheeger cut has recently been shown to provide excellent classification results

More information

An unconstrained multiphase thresholding approach for image segmentation

An unconstrained multiphase thresholding approach for image segmentation An unconstrained multiphase thresholding approach for image segmentation Benjamin Berkels Institut für Numerische Simulation, Rheinische Friedrich-Wilhelms-Universität Bonn, Nussallee 15, 53115 Bonn, Germany

More information

Lecture 4 Colorization and Segmentation

Lecture 4 Colorization and Segmentation Lecture 4 Colorization and Segmentation Summer School Mathematics in Imaging Science University of Bologna, Itay June 1st 2018 Friday 11:15-13:15 Sung Ha Kang School of Mathematics Georgia Institute of

More information

Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms

Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms François Caron Department of Statistics, Oxford STATLEARN 2014, Paris April 7, 2014 Joint work with Adrien Todeschini,

More information

Inverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 2007 Technische Universiteit Eindh ove n University of Technology

Inverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 2007 Technische Universiteit Eindh ove n University of Technology Inverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 27 Introduction Fredholm first kind integral equation of convolution type in one space dimension: g(x) = 1 k(x x )f(x

More information

Generalized SURE for optimal shrinkage of singular values in low-rank matrix denoising

Generalized SURE for optimal shrinkage of singular values in low-rank matrix denoising Journal of Machine Learning Research 8 (207) -50 Submitted 5/6; Revised 0/7; Published /7 Generalized SURE for optimal shrinkage of singular values in low-rank matrix denoising Jérémie Bigot Institut de

More information

Computer Vision & Digital Image Processing

Computer Vision & Digital Image Processing Computer Vision & Digital Image Processing Image Restoration and Reconstruction I Dr. D. J. Jackson Lecture 11-1 Image restoration Restoration is an objective process that attempts to recover an image

More information

8 The SVD Applied to Signal and Image Deblurring

8 The SVD Applied to Signal and Image Deblurring 8 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

MIXED GAUSSIAN-IMPULSE NOISE IMAGE RESTORATION VIA TOTAL VARIATION

MIXED GAUSSIAN-IMPULSE NOISE IMAGE RESTORATION VIA TOTAL VARIATION MIXED GAUSSIAN-IMPULSE NOISE IMAGE RESTORATION VIA TOTAL VARIATION P. Rodríguez, R. Rojas Department of Electrical Engineering Pontificia Universidad Católica del Perú Lima, Peru B. Wohlberg T5: Applied

More information

Image restoration by minimizing zero norm of wavelet frame coefficients

Image restoration by minimizing zero norm of wavelet frame coefficients Image restoration by minimizing zero norm of wavelet frame coefficients Chenglong Bao a, Bin Dong b, Likun Hou c, Zuowei Shen a, Xiaoqun Zhang c,d, Xue Zhang d a Department of Mathematics, National University

More information

Solution-recovery in l 1 -norm for non-square linear systems: deterministic conditions and open questions

Solution-recovery in l 1 -norm for non-square linear systems: deterministic conditions and open questions Solution-recovery in l 1 -norm for non-square linear systems: deterministic conditions and open questions Yin Zhang Technical Report TR05-06 Department of Computational and Applied Mathematics Rice University,

More information

Noise-Blind Image Deblurring Supplementary Material

Noise-Blind Image Deblurring Supplementary Material Noise-Blind Image Deblurring Supplementary Material Meiguang Jin University of Bern Switzerland Stefan Roth TU Darmstadt Germany Paolo Favaro University of Bern Switzerland A. Upper and Lower Bounds Our

More information

Primal-dual algorithms for the sum of two and three functions 1

Primal-dual algorithms for the sum of two and three functions 1 Primal-dual algorithms for the sum of two and three functions 1 Ming Yan Michigan State University, CMSE/Mathematics 1 This works is partially supported by NSF. optimization problems for primal-dual algorithms

More information

Levenberg-Marquardt method in Banach spaces with general convex regularization terms

Levenberg-Marquardt method in Banach spaces with general convex regularization terms Levenberg-Marquardt method in Banach spaces with general convex regularization terms Qinian Jin Hongqi Yang Abstract We propose a Levenberg-Marquardt method with general uniformly convex regularization

More information

THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR

THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR 1. Definition Existence Theorem 1. Assume that A R m n. Then there exist orthogonal matrices U R m m V R n n, values σ 1 σ 2... σ p 0 with p = min{m, n},

More information

Fully smoothed L1-TV energies : properties of the optimal solutions, fast minimization and applications. Mila Nikolova

Fully smoothed L1-TV energies : properties of the optimal solutions, fast minimization and applications. Mila Nikolova Fully smoothed L1-TV energies : properties of the optimal solutions, fast minimization and applications Mila Nikolova CMLA, ENS Cachan, CNRS 61 Av. President Wilson, F-94230 Cachan, FRANCE nikolova@cmla.ens-cachan.fr

More information

INF-SUP CONDITION FOR OPERATOR EQUATIONS

INF-SUP CONDITION FOR OPERATOR EQUATIONS INF-SUP CONDITION FOR OPERATOR EQUATIONS LONG CHEN We study the well-posedness of the operator equation (1) T u = f. where T is a linear and bounded operator between two linear vector spaces. We give equivalent

More information

Variational Methods in Signal and Image Processing

Variational Methods in Signal and Image Processing Variational Methods in Signal and Image Processing XU WANG Texas A&M University Dept. of Electrical & Computer Eng. College Station, Texas United States xu.wang@tamu.edu ERCHIN SERPEDIN Texas A&M University

More information

Lecture 22 Girsanov s Theorem

Lecture 22 Girsanov s Theorem Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n

More information