A posteriori error control for the binary Mumford Shah model
A posteriori error control for the binary Mumford-Shah model

Benjamin Berkels (1), Alexander Effland (2), Martin Rumpf (2)
(1) AICES Graduate School, RWTH Aachen University, Germany
(2) Institute for Numerical Simulation, University of Bonn

Minisymposium on Recent Advances in Convex Relaxations
SIAM Conference on Imaging Science, May 23rd-26th, Hotel Albuquerque at Old Town
1 The binary Mumford-Shah model

For an input image $u_0$ and a set $O \subset \Omega$ with weights $\theta_1, \theta_2 \in L^1(\Omega)$, the binary Mumford-Shah model reads

$E[O] = \int_O \theta_1 \, dx + \int_{\Omega \setminus O} \theta_2 \, dx + \operatorname{Per}[O]$,

or, in terms of a characteristic function $\chi \in BV(\Omega, \{0,1\})$,

$E[\chi, c_1, c_2] = \int_\Omega \theta_1 \chi + \theta_2 (1 - \chi) \, dx + |D\chi|(\Omega)$,

$\theta_i = \frac{1}{\nu}(c_i - u_0)^2$ ($i = 1, 2$) for a weight parameter $\nu > 0$,

$c_1 = \left(\int_\Omega \chi \, dx\right)^{-1} \int_\Omega \chi u_0 \, dx$, $\quad c_2 = \left(\int_\Omega (1 - \chi) \, dx\right)^{-1} \int_\Omega (1 - \chi) u_0 \, dx$.
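For a fixed labeling the optimal gray values $c_1, c_2$ are simply region means. The following numpy sketch illustrates this in a discrete setting; the function names and the anisotropic perimeter approximation are choices of this example, not of the talk:

```python
import numpy as np

def region_means(u0, chi):
    """Optimal gray values c1, c2 for a fixed binary labeling chi:
    c1 is the mean of u0 over {chi = 1}, c2 over {chi = 0}."""
    return u0[chi == 1].mean(), u0[chi == 0].mean()

def binary_ms_energy(u0, chi, nu):
    """Discrete binary Mumford-Shah energy on a pixel grid: fidelity
    theta_i = (c_i - u0)^2 / nu plus the perimeter of {chi = 1},
    approximated here by the anisotropic total variation of chi."""
    c1, c2 = region_means(u0, chi)
    theta1 = (c1 - u0) ** 2 / nu
    theta2 = (c2 - u0) ** 2 / nu
    fidelity = np.sum(theta1 * chi + theta2 * (1 - chi))
    perimeter = np.abs(np.diff(chi, axis=0)).sum() + np.abs(np.diff(chi, axis=1)).sum()
    return fidelity + perimeter

# A noiseless two-region image segmented exactly: the fidelity vanishes
# and only the perimeter of the vertical interface remains.
u0 = np.zeros((8, 8)); u0[:, 4:] = 1.0
chi = (u0 > 0.5).astype(float)
print(binary_ms_energy(u0, chi, nu=0.1))  # 8.0 (interface of length 8)
```

For a noisy image the fidelity term no longer vanishes, and the competition between fidelity and perimeter is what the model balances.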
2 The binary Mumford-Shah model (cont.)

$E[\chi, c_1, c_2] = \int_\Omega \theta_1 \chi + \theta_2 (1 - \chi) \, dx + |D\chi|(\Omega)$

Minimize $E$ over the set $\chi \in BV(\Omega, \{0,1\})$ ($c_1, c_2$ fixed).

Goal: derive functional a posteriori error estimates for $\|\chi - \chi_h\|_{L^1(\Omega)}$.

Major problems:
- optimization over a non-convex set
- no uniform convexity of $E$

Strategy:
- use a suitable uniformly convex relaxation
- error estimate for the relaxation based on the duality gap
- thresholding of the relaxed solution and a cut-out argument
3 Convex relaxation and thresholding - CEN

Idea: relax the range of $\chi$ to $[0,1]$ [Nikolova, Esedoḡlu, Chan 06]:

$E^{CEN}[u] = \int_\Omega u \theta_1 \, dx + \int_\Omega (1 - u) \theta_2 \, dx + |Du|(\Omega)$.

Theorem: $u \in \operatorname{argmin}_{u \in BV(\Omega,[0,1])} E^{CEN}[u] \;\Rightarrow\; [u > s] \in \operatorname{argmin}_{\chi \in BV(\Omega,\{0,1\})} E[\chi]$ for a.e. $s \in [0,1]$, where $[u > s]$ is the $s$-superlevel set of $u$, i.e. $\{x \in \Omega : u(x) > s\}$.

$BV(\Omega, [0,1])$ is convex and $E^{CEN}$ is convex, but not uniformly convex in $u$.
4 Convex relaxation and thresholding - UC

Uniformly convex approach:

$E^{rel}[u] = \int_\Omega u^2 \theta_1 \, dx + \int_\Omega (1 - u)^2 \theta_2 \, dx + |Du|(\Omega)$

Theorem [Chambolle/Darbon 08][B. SSVM 09]: Let $\theta_1, \theta_2 \in L^1(\Omega)$ with $\theta_1, \theta_2 \geq 0$ a.e. Then $E^{rel}$ has a unique minimizer on $BV(\Omega)$, and

$u = \operatorname{argmin}_{v \in BV(\Omega)} E^{rel}[v] \;\Rightarrow\; \chi_{[u > 0.5]} \in \operatorname{argmin}_{\chi \in BV(\Omega,\{0,1\})} E[\chi]$.

Moreover, $u(x) \in [0,1]$ for a.e. $x \in \Omega$.

$BV(\Omega)$ is convex (the problem is even unconstrained) and $E^{rel}$ is uniformly convex in $u$.
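One way to see why the relaxed minimizer lands in $[0,1]$ without a box constraint: ignoring the TV term, the integrand at a fixed point is a parabola in $t$ with minimizer $\theta_2/(\theta_1 + \theta_2) \in [0,1]$ whenever $\theta_1, \theta_2 \geq 0$. A quick numerical check of this pointwise fact (the values of $\theta_1, \theta_2$ are arbitrary):

```python
import numpy as np

def pointwise_integrand(t, theta1, theta2):
    # integrand of E^rel at a fixed point x, as a function of t = u(x)
    return t ** 2 * theta1 + (1 - t) ** 2 * theta2

# For theta1, theta2 >= 0 the unconstrained minimizer over t is
# t* = theta2 / (theta1 + theta2), which automatically lies in [0, 1].
theta1, theta2 = 0.3, 0.9
t_star = theta2 / (theta1 + theta2)
ts = np.linspace(-1.0, 2.0, 3001)
t_num = ts[np.argmin(pointwise_integrand(ts, theta1, theta2))]
print(t_star, t_num)  # both close to 0.75
assert 0.0 <= t_star <= 1.0
```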
5 Convex relaxation and thresholding - UC (cont.)

Proof sketch [Chambolle/Darbon 09]:

- Truncation does not increase the energy: $E^{rel}[\min\{\max\{0, v\}, 1\}] \leq E^{rel}[v]$ for any $v \in BV(\Omega)$.
- Set $\Psi(x,t) := t^2 \theta_1(x) + (1 - t)^2 \theta_2(x)$ and $C_\Psi := \int_\Omega \Psi(x, 0) \, dx$. The layer-cake formula gives
  $\int_\Omega \Psi(x, u) \, dx = \int_\Omega \Psi(x, 0) \, dx + \int_0^1 \int_\Omega \partial_t \Psi(x, s) \, \chi_{[u > s]} \, dx \, ds$.
- Comparison principle: if $h_1 \leq h_2$ a.e. and $\chi_i \in \operatorname{argmin}_{\chi \in BV(\Omega,\{0,1\})} \int_\Omega h_i \chi \, dx + |D\chi|(\Omega)$, then $\chi_1 \geq \chi_2$.
- For each threshold $s$, let $\chi^s \in \operatorname{argmin}_{\chi \in BV(\Omega,\{0,1\})} \left\{ E^{rel}_s[\chi] := \int_\Omega \partial_s \Psi(x, s) \chi \, dx + |D\chi|(\Omega) \right\}$.
- $\Psi(x, \cdot)$ strictly convex $\Rightarrow$ $\partial_t \Psi(x, \cdot)$ is strictly increasing $\Rightarrow$ $\chi^{s_1} \geq \chi^{s_2}$ a.e. for $s_1 \leq s_2$.
- Graph reconstruction: $u^*(x) = \sup\{s : \chi^s(x) = 1\}$ is well-defined.
- By the layer-cake formula and the coarea formula,
  $E^{rel}[u] = C_\Psi + \int_0^1 E^{rel}_s[\chi_{[u > s]}] \, ds \geq C_\Psi + \int_0^1 E^{rel}_s[\chi^s] \, ds = E^{rel}[u^*]$.
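The layer-cake identity in the second step can be checked numerically. The sketch below (sample size and quadrature resolution are arbitrary choices of this example) compares both sides for random $\theta_1, \theta_2$ and $u$:

```python
import numpy as np

# Psi(x, t) = t^2 theta1(x) + (1 - t)^2 theta2(x) and its derivative in t
def Psi(t, th1, th2):
    return t ** 2 * th1 + (1 - t) ** 2 * th2

def dPsi(t, th1, th2):
    return 2 * t * th1 - 2 * (1 - t) * th2

rng = np.random.default_rng(0)
n = 50
th1, th2 = rng.random(n), rng.random(n)
u = rng.random(n)                    # values of u in [0, 1] at n sample points

lhs = Psi(u, th1, th2).mean()        # "integral" = average over sample points

# layer-cake side: Psi(x, 0) plus the s-integral of dPsi(x, s) over [u > s],
# approximated by a midpoint rule in s
m = 4000
s_mid = (np.arange(m) + 0.5) / m
rhs = Psi(0.0, th1, th2).mean() + sum(
    dPsi(s, th1, th2)[u > s].sum() / n for s in s_mid) / m

print(abs(lhs - rhs))                # small quadrature error
```

Pointwise the identity is just $\Psi(x, u) - \Psi(x, 0) = \int_0^u \partial_t \Psi(x, s) \, ds$; the indicator $\chi_{[u > s]}$ merely rewrites the upper limit.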
6 Predual functional

The dual of $BV(\Omega)$ is difficult to handle. Use the predual of $E^{rel}$ instead.

Starting point [Hintermüller/Kunisch 04]: $D^{rel}[q] = F[q] + G[\Lambda q]$ with

$F[q] = I_{B_1}[q]$ ($= 0$ if $|q| \leq 1$ a.e., $+\infty$ else),

$G[v] = \int_\Omega \frac{\frac{1}{4} v^2 + v \theta_2 - \theta_1 \theta_2}{\theta_1 + \theta_2} \, dx$.

$D^{rel}$ is the predual functional of $E^{rel}$, i.e. $(D^{rel})^* = E^{rel}$.

$\Lambda \in L(Q, V)$ with $Q = H_N(\operatorname{div}, \Omega)$ and $V = L^2(\Omega)$, where

$H(\operatorname{div}, \Omega) = \{q \in L^2(\Omega, \mathbb{R}^n) : \operatorname{div} q \in L^2(\Omega)\}$ and $H_N(\operatorname{div}, \Omega) = H(\operatorname{div}, \Omega) \cap \{q \cdot \nu = 0 \text{ on } \partial\Omega\}$.

$\Lambda = \operatorname{div}$; formally $\Lambda^* = -\nabla$, in the sense $\langle \Lambda^* v, q \rangle = (v, \operatorname{div} q)_{L^2(\Omega)}$ for all $v \in V$, $q \in Q$.
7 Predual functional (cont.)

Recall $F[q] = I_{B_1}[q]$ and $G[v] = \int_\Omega \frac{\frac{1}{4} v^2 + v \theta_2 - \theta_1 \theta_2}{\theta_1 + \theta_2} \, dx$.

From general theory one knows $(D^{rel})^*[v] = F^*[-\Lambda^* v] + G^*[v]$.

The Fenchel conjugates can be computed as follows:

$F^*[-\Lambda^* v] = \sup_{q \in Q} \left( \langle -\Lambda^* v, q \rangle - F[q] \right) = \sup_{q \in Q} \left( -\int_\Omega v \operatorname{div} q \, dx - I_{B_1}[q] \right) = \sup_{q \in Q, |q| \leq 1} -\int_\Omega v \operatorname{div} q \, dx = |Dv|(\Omega)$,

$G^*[v] = \sup_{w \in L^2(\Omega)} \left( (v, w)_{L^2(\Omega)} - G[w] \right) = \int_\Omega v^2 \theta_1 + (1 - v)^2 \theta_2 \, dx$,

where the last supremum is attained pointwise for $w = 2v(\theta_1 + \theta_2) - 2\theta_2$.

Thus, $(D^{rel})^*[v] = F^*[-\Lambda^* v] + G^*[v] = E^{rel}[v]$.
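The closed form of $G^*$ can be sanity-checked pointwise by brute force: for fixed $\theta_1, \theta_2$ the conjugate of the integrand should equal $v^2 \theta_1 + (1 - v)^2 \theta_2$, with the supremum attained at $w = 2v(\theta_1 + \theta_2) - 2\theta_2$. A small numerical check (grid bounds and sample values are arbitrary):

```python
import numpy as np

def g(w, th1, th2):
    # pointwise integrand of the predual term G
    return (0.25 * w ** 2 + w * th2 - th1 * th2) / (th1 + th2)

def g_star(v, th1, th2):
    # numerical Fenchel conjugate: sup_w ( v*w - g(w) ) over a wide grid
    w = np.linspace(-50.0, 50.0, 200001)
    return np.max(v * w - g(w, th1, th2))

th1, th2 = 0.7, 0.4
for v in [-0.5, 0.0, 0.3, 1.0, 1.5]:
    closed_form = v ** 2 * th1 + (1 - v) ** 2 * th2
    assert abs(g_star(v, th1, th2) - closed_form) < 1e-6
    # the supremum is attained at w* = 2 v (th1 + th2) - 2 th2
    w_star = 2 * v * (th1 + th2) - 2 * th2
    assert abs(v * w_star - g(w_star, th1, th2) - closed_form) < 1e-9
print("conjugate matches v^2 th1 + (1 - v)^2 th2")
```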
8 Predual functional (cont.)

Central insight: minimizers $p$ of $D^{rel}$ and $u$ of $(D^{rel})^*$ fulfill $D^{rel}[p] = -(D^{rel})^*[u]$.

This can be seen by formally exchanging inf and sup:

$D^{rel}[p] = \inf_{q \in H_N(\operatorname{div},\Omega)} (F[q] + G[\Lambda q]) = \inf_{q \in H_N(\operatorname{div},\Omega)} (F[q] + G^{**}[\Lambda q])$

$= \inf_{q \in H_N(\operatorname{div},\Omega)} \sup_{v \in L^2(\Omega)} \left( F[q] + \langle v, \Lambda q \rangle - G^*[v] \right)$

$= \sup_{v \in L^2(\Omega)} \left( -\sup_{q \in H_N(\operatorname{div},\Omega)} \left( \langle -\Lambda^* v, q \rangle - F[q] \right) - G^*[v] \right)$

$= \sup_{v \in L^2(\Omega)} \left( -F^*[-\Lambda^* v] - G^*[v] \right)$

$= \sup_{v \in L^2(\Omega)} \left( -(D^{rel})^*[v] \right) = -\inf_{v \in L^2(\Omega)} (D^{rel})^*[v] = -(D^{rel})^*[u]$.
9 Functional a posteriori error estimates for $E^{rel}$

Two measures of uniform convexity for $J : X \to \mathbb{R}$:

$J\left[\frac{x_1 + x_2}{2}\right] + \Phi_J(x_2 - x_1) \leq \frac{1}{2}(J[x_1] + J[x_2])$ for all $x_1, x_2 \in X$,

$\langle x^*, x_2 - x_1 \rangle + \Psi_J(x_2 - x_1) \leq J[x_2] - J[x_1]$ for all $x^* \in \partial J[x_1]$.

Theorem [Repin 00]: Let $u \in \operatorname{argmin}_{\tilde v \in V} E^{rel}[\tilde v]$ and $q \in Q$, $v \in V = V^* = L^2(\Omega)$. Then,

$\Phi_{G^*}(v - u) + \Phi_{F^*}(-\Lambda^*(v - u)) + \Psi_{E^{rel}}\left(\frac{v - u}{2}\right) \leq \frac{1}{2}\left(E^{rel}[v] + D^{rel}[q]\right)$.
10 Repin theorem - Proof sketch

The convexity property of $\Phi_{G^*}$ and $\Phi_{F^*}$ gives:

$\Phi_{G^*}(v - u) + \Phi_{F^*}(-\Lambda^*(v - u)) \leq \frac{1}{2}\left(F^*[-\Lambda^* v] + G^*[v] + F^*[-\Lambda^* u] + G^*[u]\right) - \left(F^*\left[-\Lambda^* \frac{u + v}{2}\right] + G^*\left[\frac{u + v}{2}\right]\right) = \frac{1}{2}\left(E^{rel}[v] + E^{rel}[u]\right) - E^{rel}\left[\frac{u + v}{2}\right]$

Since $u$ is a minimizer of $E^{rel}$ and thus $0 \in \partial E^{rel}[u]$, we get

$\Psi_{E^{rel}}\left(\frac{v - u}{2}\right) = \Psi_{E^{rel}}\left(\frac{u + v}{2} - u\right) \leq E^{rel}\left[\frac{u + v}{2}\right] - E^{rel}[u]$.

Adding both inequalities gives

$\Phi_{G^*}(v - u) + \Phi_{F^*}(-\Lambda^*(v - u)) + \Psi_{E^{rel}}\left(\frac{v - u}{2}\right) \leq \frac{1}{2}\left(E^{rel}[v] - E^{rel}[u]\right)$.

The claim follows using the weak complementarity principle $E^{rel}[u] \geq -D^{rel}[q]$.
11 Functional a posteriori error estimates for $E^{rel}$ (cont.)

In our case, one gets $\Phi_{F^*} \equiv 0$, $\Phi_{G^*}(v) = \frac{1}{4} \int_\Omega v^2 (\theta_1 + \theta_2) \, dx$, $\Psi_{E^{rel}}(v) = \int_\Omega v^2 (\theta_1 + \theta_2) \, dx$.

Using $\frac{1}{2\nu}(c_1 - c_2)^2 \leq \theta_1 + \theta_2$, Repin's estimate leads to:

Theorem [A posteriori error estimate for $E^{rel}$]: Let $u \in V$ be the minimizer of $E^{rel}$. Then, for any $v \in V$ and $q \in Q$ with $|q| \leq 1$ a.e. it holds that

$\|u - v\|^2_{L^2(\Omega)} \leq \operatorname{err}^2_u[v, q] := \frac{2\nu}{(c_1 - c_2)^2}\left(E^{rel}[v] + D^{rel}[q]\right) = \frac{2\nu}{(c_1 - c_2)^2}\left(\int_\Omega v^2 \theta_1 + (1 - v)^2 \theta_2 + \frac{\frac{1}{4}(\operatorname{div} q)^2 + \operatorname{div} q \, \theta_2 - \theta_1 \theta_2}{\theta_1 + \theta_2} \, dx + |Dv|(\Omega)\right)$.
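To make the estimator concrete, here is a naive 1D finite-difference evaluation of $\operatorname{err}^2_u$; this is a sketch of this summary, not the adaptive FE evaluation of the talk. The discretizations of $|Dv|(\Omega)$ and $\operatorname{div} q$, the scaling $\nu = 1$ (so the $1/\nu$ is absorbed into $\theta_i$), and the crude dual candidate $q = 0$ are all assumptions of this example:

```python
import numpy as np

def err_u_squared(v, q, th1, th2, c1, c2, h):
    """Finite-difference evaluation of
    err_u^2[v, q] = 2 nu / (c1 - c2)^2 * ( E^rel[v] + D^rel[q] )
    on a 1D grid, with theta_i already scaled by 1/nu (so nu = 1 here).
    Requires theta1 + theta2 > 0 on the grid."""
    # E^rel[v]: fidelity plus the total variation of v
    e_rel = h * np.sum(v ** 2 * th1 + (1 - v) ** 2 * th2) + np.sum(np.abs(np.diff(v)))
    # D^rel[q]: only admissible for |q| <= 1; div q via finite differences
    assert np.max(np.abs(q)) <= 1.0
    divq = np.gradient(q, h)
    d_rel = h * np.sum((0.25 * divq ** 2 + divq * th2 - th1 * th2) / (th1 + th2))
    return 2.0 / (c1 - c2) ** 2 * (e_rel + d_rel)

n, h = 100, 1.0 / 100
x = (np.arange(n) + 0.5) * h
u0 = (x > 0.5).astype(float)              # clean step image on (0, 1)
c1, c2 = 1.0, 0.0
th1, th2 = (c1 - u0) ** 2, (c2 - u0) ** 2
v = u0.copy()                              # candidate primal: the exact step
q = np.zeros(n)                            # crude dual candidate
print(err_u_squared(v, q, th1, th2, c1, c2, h))  # 2.0
```

With $q = 0$ the bound is dominated by the TV term of the step and is far from sharp; a good dual candidate $q$ (as produced by the primal-dual iteration below on the talk's slides) shrinks the duality gap and hence the estimator.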
12 A posteriori error estimates for the binary model

Key observation: solutions of $E^{rel}$ are characterized by steep profiles.

$S_\eta = \left[\frac{1}{2} - \eta \leq v \leq \frac{1}{2} + \eta\right]$ for $\eta \in \left(0, \frac{1}{2}\right)$ is the set of non-properly identified phase regions, and $a[v, \eta] = |S_\eta|$.

[Figure: $u_0$, $u$, $\chi_{[u > 0.5]}$, and a plot of $a[v, \eta]$ and $\operatorname{err}_\chi[v, q, \eta]$]

Theorem [A posteriori error estimate for $E$]: Let $\chi \in BV(\Omega, \{0,1\})$ be the minimizer of the binary Mumford-Shah functional obtained from $E^{rel}$. Then for all $v \in V = L^2(\Omega)$ and $q \in Q = H_N(\operatorname{div}, \Omega)$ we have

$\left\|\chi - \chi_{[v > \frac{1}{2}]}\right\|_{L^1(\Omega)} \leq \inf_{\eta \in (0, \frac{1}{2})} \operatorname{err}_\chi[v, q, \eta] = \inf_{\eta \in (0, \frac{1}{2})} \left( a[v, \eta] + \frac{1}{\eta^2} \operatorname{err}^2_u[v, q] \right)$.
12 A posteriori error estimates for the binary model (cont.)

Proof: For the minimizer $u$ of $E^{rel}$ and any $v \in V$,

$\left[u > \tfrac{1}{2}\right] \triangle \left[v > \tfrac{1}{2}\right] \subset \left[\tfrac{1}{2} - \eta \leq v \leq \tfrac{1}{2} + \eta\right] \cup \left[|u - v| > \eta\right]$.

Using the estimate for $\|u - v\|^2_{L^2(\Omega)}$, we get by Chebyshev's inequality

$\mathcal{L}^n\left(\left[|u - v| > \eta\right]\right) \leq \frac{1}{\eta^2} \int_{\{|u - v| > \eta\}} |u - v|^2 \, dx \leq \frac{1}{\eta^2} \operatorname{err}^2_u[v, q]$.

The above holds for any $\eta \in (0, \frac{1}{2})$, so also for the infimum over $\eta \in (0, \frac{1}{2})$.
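Evaluating the bound then amounts to a one-dimensional minimization over the threshold band width $\eta$: a wider band increases $a[v, \eta]$ but relaxes the Chebyshev factor $1/\eta^2$. A 1D sketch (the grid of thresholds, the test profile, and the value of $\operatorname{err}^2_u$ are arbitrary choices of this example):

```python
import numpy as np

def err_chi(v, err_u2, h, etas=None):
    """Evaluate err_chi[v, q, eta] = a[v, eta] + err_u^2 / eta^2 on a grid
    of band widths eta and return the smallest value. Here a[v, eta] is the
    measure of the band 1/2 - eta <= v <= 1/2 + eta on a 1D grid."""
    if etas is None:
        etas = np.linspace(0.01, 0.49, 97)
    vals = []
    for eta in etas:
        a = h * np.count_nonzero((v >= 0.5 - eta) & (v <= 0.5 + eta))
        vals.append(a + err_u2 / eta ** 2)
    return min(vals)

# a steep profile: v crosses 1/2 linearly over a region of width 0.1
n, h = 1000, 1.0 / 1000
x = (np.arange(n) + 0.5) * h
v = np.clip((x - 0.45) / 0.1, 0.0, 1.0)
bound = err_chi(v, err_u2=1e-4, h=h)
print(bound)  # about 0.03, attained near eta = 0.1
```

The steeper the profile of $v$ near $1/2$, the smaller $a[v, \eta]$, which is why steep relaxed solutions give tight $L^1$ bounds for the thresholded segmentation.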
13 Two discretizations and primal-dual algorithm

$\mathcal{T}$ adaptive simplicial mesh
$V_h^0$ space of piecewise constant finite element functions on $\mathcal{T}$
$V_h^1$ space of continuous, piecewise affine finite element functions on $\mathcal{T}$

$F_h[q_h] = I_{B_1}[q_h]$, $\quad G_h[v_h] = \int_\Omega \frac{\frac{1}{4} v_h^2 + v_h \theta_{2,h} - \theta_{1,h} \theta_{2,h}}{\theta_{1,h} + \theta_{2,h}} \, dx$,

$F_h^*[q_h] = \int_\Omega I_h(|q_h|) \, dx$, $\quad G_h^*[v_h] = \int_\Omega v_h^2 \theta_{1,h} + (1 - v_h)^2 \theta_{2,h} \, dx$,

for $\theta_{i,h} = I_h\left(\frac{1}{\nu}(c_i - u_0)^2\right)$, $i = 1, 2$, with $I_h$ the Lagrange interpolation.

(FE) discretization: $V_h = V_h^1$ with $L^2$-scalar product and $Q_h = (V_h^1)^2$ with lumped-mass scalar product; $\Lambda_h^* : V_h \to Q_h$ implicitly defined via (cf. [Bartels 14])

$\int_\Omega I_h(\Lambda_h^* v_h \cdot q_h) \, dx = \int_\Omega v_h \, P_h \operatorname{div} q_h \, dx$ for all $q_h \in Q_h$, $v_h \in V_h$,

where $P_h : L^2(\Omega) \to V_h$ denotes the $L^2$-projection.
13 Two discretizations and primal-dual algorithm (cont.)

(FE') discretization: $V_h = V_h^1$ and $Q_h = (V_h^0)^2$, both with the $L^2$-scalar product; $\Lambda_h : Q_h \to V_h$ given by

$\int_\Omega I_h(\Lambda_h q_h \, v_h) \, dx = \int_\Omega q_h \cdot \nabla v_h \, dx$ for all $q_h \in Q_h$, $v_h \in V_h$, i.e. $\Lambda_h^* v_h = \nabla v_h$.

To evaluate $\operatorname{err}^2_u$, an $L^2$-projection of the dual solution is performed.
14 The primal-dual algorithm

Primal-dual algorithm [Chambolle, Pock 11]:

while $\|u_h^{k+1} - u_h^k\| > \text{THRESHOLD}$ do
  $p_h^{k+1} = (\operatorname{Id} + \sigma \partial F_h)^{-1}(p_h^k - \sigma \Lambda_h^* \bar{u}_h^k)$;
  $u_h^{k+1} = (\operatorname{Id} + \tau \partial G_h^*)^{-1}(u_h^k + \tau \Lambda_h p_h^{k+1})$;
  $\bar{u}_h^{k+1} = 2 u_h^{k+1} - u_h^k$;
end

The resolvent $(\operatorname{Id} + \sigma \partial F_h)^{-1}$ for $F_h$ is well known (a pointwise projection onto the unit ball).

The resolvent $(\operatorname{Id} + \tau \partial G_h^*)^{-1}$ for $G_h^*[v_h] = \int_\Omega v_h^2 \theta_{1,h} + (1 - v_h)^2 \theta_{2,h} \, dx$ can be computed straightforwardly, but involves mass matrices.

The operator norm of $\Lambda_h$ can be estimated as follows:
$\|\Lambda_h\|^2 \leq 48 \, h_{\min}^{-2}$ for (FE') and $n = 2$,
$\|\Lambda_h\|^2 \leq 96 \, h_{\min}^{-2}$ for (FE).
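For illustration, here is a minimal Chambolle-Pock iteration for the relaxed model on a uniform pixel grid with plain finite differences; it is a sketch, not the adaptive FE schemes above. It mirrors the two resolvents named on the slide (pointwise ball projection for the dual variable, a pointwise quadratic prox for the primal one), and the grid size, $\nu$, and the fixed values $c_1 = 1$, $c_2 = 0$ are assumptions of this example:

```python
import numpy as np

def grad(u):
    # forward differences with homogeneous Neumann boundary
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # negative adjoint of grad
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def solve_relaxed(theta1, theta2, iters=500):
    """Chambolle-Pock iteration for
    min_u  sum u^2 theta1 + (1 - u)^2 theta2 + TV(u)  on a pixel grid."""
    sigma = tau = 1.0 / np.sqrt(8.0)       # sigma * tau * ||grad||^2 <= 1
    u = np.zeros_like(theta1); ubar = u.copy()
    px = np.zeros_like(u); py = np.zeros_like(u)
    for _ in range(iters):
        gx, gy = grad(ubar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
        px, py = px / norm, py / norm      # resolvent of the indicator: projection
        w = u + tau * div(px, py)
        # pointwise prox of u^2 theta1 + (1 - u)^2 theta2
        u_new = (w + 2 * tau * theta2) / (1 + 2 * tau * (theta1 + theta2))
        ubar = 2 * u_new - u
        u = u_new
    return u

rng = np.random.default_rng(1)
truth = np.zeros((32, 32)); truth[8:24, 8:24] = 1.0
u0 = truth + 0.1 * rng.standard_normal(truth.shape)
nu = 0.05
theta1 = (1.0 - u0) ** 2 / nu              # c1 = 1, c2 = 0 assumed known
theta2 = (0.0 - u0) ** 2 / nu
u = solve_relaxed(theta1, theta2)
chi = (u > 0.5).astype(float)
print(np.abs(chi - truth).sum())           # mis-segmented area
```

Thresholding the relaxed solution at $0.5$ recovers the square block; the profile of $u$ is steep across the interface, which is exactly the behavior the error estimator exploits.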
15 Numerical results - Flower

[Figures: input $u_0$, relaxed solution $u_h$, segmentation $\chi_{[u_h > 0.5]}$; adaptive mesh after the 2nd, 5th and 10th iteration]
16 Numerical results - Flower (cont.)

[Plots over degrees of freedom: $\operatorname{err}^2_u$, the estimator for $\|u - u_h\|^2_{L^2(\Omega)}$ ((FE) black, (FE') red), and $\operatorname{err}_\chi$, the estimator for $\|\chi - \chi_{[u_h > 0.5]}\|_{L^1(\Omega)}$ ((FE) orange, (FE') blue)]
17 Numerical results - Cameraman

[Figures: input $u_0$, relaxed solution $u_h$, segmentation $\chi_{[u_h > 0.5]}$; adaptive mesh after the 5th and 10th iteration. Image source: MATLAB]
18 Numerical results - Cameraman (cont.)

[Plots over degrees of freedom: $\operatorname{err}^2_u$, the estimator for $\|u - u_h\|^2_{L^2(\Omega)}$ ((FE) black, (FE') red), and $\operatorname{err}_\chi$, the estimator for $\|\chi - \chi_{[u_h > 0.5]}\|_{L^1(\Omega)}$ ((FE) orange, (FE') blue)]
19 Numerical results - Gaussians using (FE')

[Figures: input $u_0$, relaxed solution $u_h$, segmentation $\chi_{[u_h > 0.5]}$; components $(p_h)_1$, $(p_h)_2$ of the dual solution]
20 Numerical results - Gaussians using (FE') (cont.)

[Plots over degrees of freedom: $\operatorname{err}^2_u$, the estimator for $\|u - u_h\|^2_{L^2(\Omega)}$, and $\operatorname{err}_\chi$, the estimator for $\|\chi - \chi_{[u_h > 0.5]}\|_{L^1(\Omega)}$]
21 Conclusion and outlook

Summary:
- A posteriori error estimates for the binary Mumford-Shah model
- Natural error quantity: area of the non-properly segmented region
- Key ingredients:
  - uniformly convex, unconstrained relaxation of the original problem
  - Repin's theorem for a posteriori error estimation of the relaxation
  - cut-out argument to estimate the area mismatch
- The estimate can be used for adaptive mesh refinement
- Two different adaptive primal-dual finite element schemes
- Numerical experiments show qualitative and quantitative properties

Outlook:
- Transfer the approach to more general computer vision problems
- Develop/use minimization algorithms that are tailored to the adaptive structure
22 References

- S. Bartels, Error control and adaptivity for a variational model problem defined on functions of bounded variation, Math. Comp. (online first), 2014.
- B. Berkels, A. Effland and M. Rumpf, A posteriori error control for the binary Mumford-Shah model, Math. Comp. (accepted).
- B. Berkels, An unconstrained multiphase thresholding approach for image segmentation, in Proceedings of the Second International Conference on Scale Space and Variational Methods in Computer Vision (SSVM), 2009.
- A. Chambolle and J. Darbon, On total variation minimization and surface evolution using parametric maximum flows, International Journal of Computer Vision, 84 (2009).
- A. Chambolle and T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, Journal of Mathematical Imaging and Vision, 40 (2011).
- M. Hintermüller and K. Kunisch, Total bounded variation regularization as a bilaterally constrained optimization problem, SIAM J. Appl. Math., 64 (2004).
- M. Nikolova, S. Esedoḡlu and T. F. Chan, Algorithms for finding global minimizers of image segmentation and denoising models, SIAM Journal on Applied Mathematics, 66 (2006).
- S. Repin, A posteriori error estimation for variational problems with uniformly convex functionals, Mathematics of Computation, 69 (2000).
Thank you for your attention!
ITERATIVE SOLUTION OF A CONSTRAINED TOTAL VARIATION REGULARIZED MODEL PROBLEM Abstract. Te discretization of a bilaterally constrained total variation minimization problem wit conforming low order finite
More informationConstrained Optimization and Lagrangian Duality
CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may
More informationNumerical Solution of Nonsmooth Problems and Application to Damage Evolution and Optimal Insulation
Numerical Solution of Nonsmooth Problems and Application to Damage Evolution and Optimal Insulation Sören Bartels University of Freiburg, Germany Joint work with G. Buttazzo (U Pisa), L. Diening (U Bielefeld),
More informationarxiv: v1 [math.na] 3 Jan 2019
arxiv manuscript No. (will be inserted by the editor) A Finite Element Nonoverlapping Domain Decomposition Method with Lagrange Multipliers for the Dual Total Variation Minimizations Chang-Ock Lee Jongho
More informationPDEs in Image Processing, Tutorials
PDEs in Image Processing, Tutorials Markus Grasmair Vienna, Winter Term 2010 2011 Direct Methods Let X be a topological space and R: X R {+ } some functional. following definitions: The mapping R is lower
More informationb i (x) u + c(x)u = f in Ω,
SIAM J. NUMER. ANAL. Vol. 39, No. 6, pp. 1938 1953 c 2002 Society for Industrial and Applied Mathematics SUBOPTIMAL AND OPTIMAL CONVERGENCE IN MIXED FINITE ELEMENT METHODS ALAN DEMLOW Abstract. An elliptic
More informationProperties of L 1 -TGV 2 : The one-dimensional case
Properties of L 1 -TGV 2 : The one-dimensional case Kristian Bredies Karl Kunisch Tuomo Valkonen May 5, 2011 Abstract We study properties of solutions to second order Total Generalized Variation (TGV 2
More informationConvex Optimization Conjugate, Subdifferential, Proximation
1 Lecture Notes, HCI, 3.11.211 Chapter 6 Convex Optimization Conjugate, Subdifferential, Proximation Bastian Goldlücke Computer Vision Group Technical University of Munich 2 Bastian Goldlücke Overview
More informationAn optimal control problem for a parabolic PDE with control constraints
An optimal control problem for a parabolic PDE with control constraints PhD Summer School on Reduced Basis Methods, Ulm Martin Gubisch University of Konstanz October 7 Martin Gubisch (University of Konstanz)
More informationHamburger Beiträge zur Angewandten Mathematik
Hamburger Beiträge zur Angewandten Mathematik A finite element approximation to elliptic control problems in the presence of control and state constraints Klaus Deckelnick and Michael Hinze Nr. 2007-0
More informationA function space framework for structural total variation regularization with applications in inverse problems
A function space framework for structural total variation regularization with applications in inverse problems Michael Hintermüller, Martin Holler and Kostas Papafitsoros Abstract. In this work, we introduce
More informationNumerical Solutions to Partial Differential Equations
Numerical Solutions to Partial Differential Equations Zhiping Li LMAM and School of Mathematical Sciences Peking University Nonconformity and the Consistency Error First Strang Lemma Abstract Error Estimate
More informationConvex representation for curvature dependent functionals
Convex representation for curvature dependent functionals Antonin Chambolle CMAP, Ecole Polytechnique, CNRS, Palaiseau, France joint work with T. Pock (T.U. Graz) Laboratoire Jacques-Louis Lions, 25 nov.
More informationNUMERICAL SOLUTION OF NONSMOOTH PROBLEMS
NUMERICAL SOLUTION OF NONSMOOTH PROBLEMS Abstract. Nonsmooth minimization problems arise in problems involving constraints or models describing certain discontinuities. The existence of solutions is established
More informationMultigrid Methods for Saddle Point Problems
Multigrid Methods for Saddle Point Problems Susanne C. Brenner Department of Mathematics and Center for Computation & Technology Louisiana State University Advances in Mathematics of Finite Elements (In
More informationInvestigating the Influence of Box-Constraints on the Solution of a Total Variation Model via an Efficient Primal-Dual Method
Article Investigating the Influence of Box-Constraints on the Solution of a Total Variation Model via an Efficient Primal-Dual Method Andreas Langer Department of Mathematics, University of Stuttgart,
More informationConvergence of a finite element approximation to a state constrained elliptic control problem
Als Manuskript gedruckt Technische Universität Dresden Herausgeber: Der Rektor Convergence of a finite element approximation to a state constrained elliptic control problem Klaus Deckelnick & Michael Hinze
More informationMINIMAL GRAPHS PART I: EXISTENCE OF LIPSCHITZ WEAK SOLUTIONS TO THE DIRICHLET PROBLEM WITH C 2 BOUNDARY DATA
MINIMAL GRAPHS PART I: EXISTENCE OF LIPSCHITZ WEAK SOLUTIONS TO THE DIRICHLET PROBLEM WITH C 2 BOUNDARY DATA SPENCER HUGHES In these notes we prove that for any given smooth function on the boundary of
More informationExtreme Abridgment of Boyd and Vandenberghe s Convex Optimization
Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The
More informationAdaptive approximation of eigenproblems: multiple eigenvalues and clusters
Adaptive approximation of eigenproblems: multiple eigenvalues and clusters Francesca Gardini Dipartimento di Matematica F. Casorati, Università di Pavia http://www-dimat.unipv.it/gardini Banff, July 1-6,
More informationLagrange Relaxation and Duality
Lagrange Relaxation and Duality As we have already known, constrained optimization problems are harder to solve than unconstrained problems. By relaxation we can solve a more difficult problem by a simpler
More informationA First Order Primal-Dual Algorithm for Nonconvex T V q Regularization
A First Order Primal-Dual Algorithm for Nonconvex T V q Regularization Thomas Möllenhoff, Evgeny Strekalovskiy, and Daniel Cremers TU Munich, Germany Abstract. We propose an efficient first order primal-dual
More informationConstruction of wavelets. Rob Stevenson Korteweg-de Vries Institute for Mathematics University of Amsterdam
Construction of wavelets Rob Stevenson Korteweg-de Vries Institute for Mathematics University of Amsterdam Contents Stability of biorthogonal wavelets. Examples on IR, (0, 1), and (0, 1) n. General domains
More informationShiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 4. Subgradient
Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 4 Subgradient Shiqian Ma, MAT-258A: Numerical Optimization 2 4.1. Subgradients definition subgradient calculus duality and optimality conditions Shiqian
More informationLecture 16: FTRL and Online Mirror Descent
Lecture 6: FTRL and Online Mirror Descent Akshay Krishnamurthy akshay@cs.umass.edu November, 07 Recap Last time we saw two online learning algorithms. First we saw the Weighted Majority algorithm, which
More informationGeometric Multigrid Methods
Geometric Multigrid Methods Susanne C. Brenner Department of Mathematics and Center for Computation & Technology Louisiana State University IMA Tutorial: Fast Solution Techniques November 28, 2010 Ideas
More informationOptimization for Machine Learning
Optimization for Machine Learning (Problems; Algorithms - A) SUVRIT SRA Massachusetts Institute of Technology PKU Summer School on Data Science (July 2017) Course materials http://suvrit.de/teaching.html
More informationMIXED FINITE ELEMENTS FOR PLATES. Ricardo G. Durán Universidad de Buenos Aires
MIXED FINITE ELEMENTS FOR PLATES Ricardo G. Durán Universidad de Buenos Aires - Necessity of 2D models. - Reissner-Mindlin Equations. - Finite Element Approximations. - Locking. - Mixed interpolation or
More informationAdaptive methods for control problems with finite-dimensional control space
Adaptive methods for control problems with finite-dimensional control space Saheed Akindeinde and Daniel Wachsmuth Johann Radon Institute for Computational and Applied Mathematics (RICAM) Austrian Academy
More informationEE 227A: Convex Optimization and Applications October 14, 2008
EE 227A: Convex Optimization and Applications October 14, 2008 Lecture 13: SDP Duality Lecturer: Laurent El Ghaoui Reading assignment: Chapter 5 of BV. 13.1 Direct approach 13.1.1 Primal problem Consider
More informationminimize x subject to (x 2)(x 4) u,
Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for
More informationSpace-time Finite Element Methods for Parabolic Evolution Problems
Space-time Finite Element Methods for Parabolic Evolution Problems with Variable Coefficients Ulrich Langer, Martin Neumüller, Andreas Schafelner Johannes Kepler University, Linz Doctoral Program Computational
More informationA Finite Element Method for the Surface Stokes Problem
J A N U A R Y 2 0 1 8 P R E P R I N T 4 7 5 A Finite Element Method for the Surface Stokes Problem Maxim A. Olshanskii *, Annalisa Quaini, Arnold Reusken and Vladimir Yushutin Institut für Geometrie und
More informationConvex Optimization & Lagrange Duality
Convex Optimization & Lagrange Duality Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Convex optimization Optimality condition Lagrange duality KKT
More informationA Level Set Based. Finite Element Algorithm. for Image Segmentation
A Level Set Based Finite Element Algorithm for Image Segmentation Michael Fried, IAM Universität Freiburg, Germany Image Segmentation Ω IR n rectangular domain, n = (1), 2, 3 u 0 : Ω [0, C max ] intensity
More informationSpace-time sparse discretization of linear parabolic equations
Space-time sparse discretization of linear parabolic equations Roman Andreev August 2, 200 Seminar for Applied Mathematics, ETH Zürich, Switzerland Support by SNF Grant No. PDFMP2-27034/ Part of PhD thesis
More informationA Locking-Free MHM Method for Elasticity
Trabalho apresentado no CNMAC, Gramado - RS, 2016. Proceeding Series of the Brazilian Society of Computational and Applied Mathematics A Locking-Free MHM Method for Elasticity Weslley S. Pereira 1 Frédéric
More informationOptimization and Optimal Control in Banach Spaces
Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,
More informationA duality-based approach to elliptic control problems in non-reflexive Banach spaces
A duality-based approach to elliptic control problems in non-reflexive Banach spaces Christian Clason Karl Kunisch June 3, 29 Convex duality is a powerful framework for solving non-smooth optimal control
More informationConvergence and optimality of an adaptive FEM for controlling L 2 errors
Convergence and optimality of an adaptive FEM for controlling L 2 errors Alan Demlow (University of Kentucky) joint work with Rob Stevenson (University of Amsterdam) Partially supported by NSF DMS-0713770.
More informationLagrangian Duality and Convex Optimization
Lagrangian Duality and Convex Optimization David Rosenberg New York University February 11, 2015 David Rosenberg (New York University) DS-GA 1003 February 11, 2015 1 / 24 Introduction Why Convex Optimization?
More informationChapter 6 A posteriori error estimates for finite element approximations 6.1 Introduction
Chapter 6 A posteriori error estimates for finite element approximations 6.1 Introduction The a posteriori error estimation of finite element approximations of elliptic boundary value problems has reached
More informationBasic Concepts of Adaptive Finite Element Methods for Elliptic Boundary Value Problems
Basic Concepts of Adaptive Finite lement Methods for lliptic Boundary Value Problems Ronald H.W. Hoppe 1,2 1 Department of Mathematics, University of Houston 2 Institute of Mathematics, University of Augsburg
More informationAccelerated Dual Gradient-Based Methods for Total Variation Image Denoising/Deblurring Problems (and other Inverse Problems)
Accelerated Dual Gradient-Based Methods for Total Variation Image Denoising/Deblurring Problems (and other Inverse Problems) Donghwan Kim and Jeffrey A. Fessler EECS Department, University of Michigan
More informationA General Framework for a Class of Primal-Dual Algorithms for TV Minimization
A General Framework for a Class of Primal-Dual Algorithms for TV Minimization Ernie Esser UCLA 1 Outline A Model Convex Minimization Problem Main Idea Behind the Primal Dual Hybrid Gradient (PDHG) Method
More informationDuality Uses and Correspondences. Ryan Tibshirani Convex Optimization
Duality Uses and Correspondences Ryan Tibshirani Conve Optimization 10-725 Recall that for the problem Last time: KKT conditions subject to f() h i () 0, i = 1,... m l j () = 0, j = 1,... r the KKT conditions
More informationFinite Element Error Estimates in Non-Energy Norms for the Two-Dimensional Scalar Signorini Problem
Journal manuscript No. (will be inserted by the editor Finite Element Error Estimates in Non-Energy Norms for the Two-Dimensional Scalar Signorini Problem Constantin Christof Christof Haubner Received:
More informationFinite Elements. Colin Cotter. February 22, Colin Cotter FEM
Finite Elements February 22, 2019 In the previous sections, we introduced the concept of finite element spaces, which contain certain functions defined on a domain. Finite element spaces are examples of
More informationA GENERAL FRAMEWORK FOR A CLASS OF FIRST ORDER PRIMAL-DUAL ALGORITHMS FOR CONVEX OPTIMIZATION IN IMAGING SCIENCE
A GENERAL FRAMEWORK FOR A CLASS OF FIRST ORDER PRIMAL-DUAL ALGORITHMS FOR CONVEX OPTIMIZATION IN IMAGING SCIENCE ERNIE ESSER XIAOQUN ZHANG TONY CHAN Abstract. We generalize the primal-dual hybrid gradient
More informationOn Gap Functions for Equilibrium Problems via Fenchel Duality
On Gap Functions for Equilibrium Problems via Fenchel Duality Lkhamsuren Altangerel 1 Radu Ioan Boţ 2 Gert Wanka 3 Abstract: In this paper we deal with the construction of gap functions for equilibrium
More informationAMS subject classifications. Primary, 65N15, 65N30, 76D07; Secondary, 35B45, 35J50
A SIMPLE FINITE ELEMENT METHOD FOR THE STOKES EQUATIONS LIN MU AND XIU YE Abstract. The goal of this paper is to introduce a simple finite element method to solve the Stokes equations. This method is in
More informationOn the p-laplacian and p-fluids
LMU Munich, Germany Lars Diening On the p-laplacian and p-fluids Lars Diening On the p-laplacian and p-fluids 1/50 p-laplacian Part I p-laplace and basic properties Lars Diening On the p-laplacian and
More informationALTERNATING DIRECTION METHOD OF MULTIPLIERS WITH VARIABLE STEP SIZES
ALTERNATING DIRECTION METHOD OF MULTIPLIERS WITH VARIABLE STEP SIZES Abstract. The alternating direction method of multipliers (ADMM) is a flexible method to solve a large class of convex minimization
More informationLocal pointwise a posteriori gradient error bounds for the Stokes equations. Stig Larsson. Heraklion, September 19, 2011 Joint work with A.
Local pointwise a posteriori gradient error bounds for the Stokes equations Stig Larsson Department of Mathematical Sciences Chalmers University of Technology and University of Gothenburg Heraklion, September
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko
More informationNumerical Modeling of Methane Hydrate Evolution
Numerical Modeling of Methane Hydrate Evolution Nathan L. Gibson Joint work with F. P. Medina, M. Peszynska, R. E. Showalter Department of Mathematics SIAM Annual Meeting 2013 Friday, July 12 This work
More informationNonlinear Flows for Displacement Correction and Applications in Tomography
Nonlinear Flows for Displacement Correction and Applications in Tomography Guozhi Dong 1 and Otmar Scherzer 1,2 1 Computational Science Center, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Wien,
More informationConvex Optimization in Computer Vision:
Convex Optimization in Computer Vision: Segmentation and Multiview 3D Reconstruction Yiyong Feng and Daniel P. Palomar The Hong Kong University of Science and Technology (HKUST) ELEC 5470 - Convex Optimization
More informationOn the approximation of the principal eigenvalue for a class of nonlinear elliptic operators
On the approximation of the principal eigenvalue for a class of nonlinear elliptic operators Fabio Camilli ("Sapienza" Università di Roma) joint work with I.Birindelli ("Sapienza") I.Capuzzo Dolcetta ("Sapienza")
More informationSubgradient. Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes. definition. subgradient calculus
1/41 Subgradient Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes definition subgradient calculus duality and optimality conditions directional derivative Basic inequality
More informationA posteriori error estimation in the FEM
A posteriori error estimation in the FEM Plan 1. Introduction 2. Goal-oriented error estimates 3. Residual error estimates 3.1 Explicit 3.2 Subdomain error estimate 3.3 Self-equilibrated residuals 3.4
More information- Well-characterized problems, min-max relations, approximate certificates. - LP problems in the standard form, primal and dual linear programs
LP-Duality ( Approximation Algorithms by V. Vazirani, Chapter 12) - Well-characterized problems, min-max relations, approximate certificates - LP problems in the standard form, primal and dual linear programs
More informationSolving Dual Problems
Lecture 20 Solving Dual Problems We consider a constrained problem where, in addition to the constraint set X, there are also inequality and linear equality constraints. Specifically the minimization problem
More informationRestoration of images with rotated shapes
Numer Algor (008) 48:49 66 DOI 0.007/s075-008-98-y ORIGINAL PAPER Restoration of images with rotated shapes S. Setzer G. Steidl T. Teuber Received: 6 September 007 / Accepted: 3 January 008 / Published
More informationOptimization for Communications and Networks. Poompat Saengudomlert. Session 4 Duality and Lagrange Multipliers
Optimization for Communications and Networks Poompat Saengudomlert Session 4 Duality and Lagrange Multipliers P Saengudomlert (2015) Optimization Session 4 1 / 14 24 Dual Problems Consider a primal convex
More informationRobust error estimates for regularization and discretization of bang-bang control problems
Robust error estimates for regularization and discretization of bang-bang control problems Daniel Wachsmuth September 2, 205 Abstract We investigate the simultaneous regularization and discretization of
More informationLecture 8. Strong Duality Results. September 22, 2008
Strong Duality Results September 22, 2008 Outline Lecture 8 Slater Condition and its Variations Convex Objective with Linear Inequality Constraints Quadratic Objective over Quadratic Constraints Representation
More informationP(E t, Ω)dt, (2) 4t has an advantage with respect. to the compactly supported mollifiers, i.e., the function W (t)f satisfies a semigroup law:
Introduction Functions of bounded variation, usually denoted by BV, have had and have an important role in several problems of calculus of variations. The main features that make BV functions suitable
More informationConvergence analysis for a primal-dual monotone + skew splitting algorithm with applications to total variation minimization
Convergence analysis for a primal-dual monotone + skew splitting algorithm with applications to total variation minimization Radu Ioan Boţ Christopher Hendrich November 7, 202 Abstract. In this paper we
More informationAxioms of Adaptivity (AoA) in Lecture 3 (sufficient for optimal convergence rates)
Axioms of Adaptivity (AoA) in Lecture 3 (sufficient for optimal convergence rates) Carsten Carstensen Humboldt-Universität zu Berlin 2018 International Graduate Summer School on Frontiers of Applied and
More informationLinear and Combinatorial Optimization
Linear and Combinatorial Optimization The dual of an LP-problem. Connections between primal and dual. Duality theorems and complementary slack. Philipp Birken (Ctr. for the Math. Sc.) Lecture 3: Duality
More informationSpline Element Method for Partial Differential Equations
for Partial Differential Equations Department of Mathematical Sciences Northern Illinois University 2009 Multivariate Splines Summer School, Summer 2009 Outline 1 Why multivariate splines for PDEs? Motivation
More informationLecture Note III: Least-Squares Method
Lecture Note III: Least-Squares Method Zhiqiang Cai October 4, 004 In this chapter, we shall present least-squares methods for second-order scalar partial differential equations, elastic equations of solids,
More informationAdaptive discretization and first-order methods for nonsmooth inverse problems for PDEs
Adaptive discretization and first-order methods for nonsmooth inverse problems for PDEs Christian Clason Faculty of Mathematics, Universität Duisburg-Essen joint work with Barbara Kaltenbacher, Tuomo Valkonen,
More informationFINITE ENERGY SOLUTIONS OF MIXED 3D DIV-CURL SYSTEMS
FINITE ENERGY SOLUTIONS OF MIXED 3D DIV-CURL SYSTEMS GILES AUCHMUTY AND JAMES C. ALEXANDER Abstract. This paper describes the existence and representation of certain finite energy (L 2 -) solutions of
More informationNumerical Solution of Elliptic Optimal Control Problems
Department of Mathematics, University of Houston Numerical Solution of Elliptic Optimal Control Problems Ronald H.W. Hoppe 1,2 1 Department of Mathematics, University of Houston 2 Institute of Mathematics,
More informationPreconditioned space-time boundary element methods for the heat equation
W I S S E N T E C H N I K L E I D E N S C H A F T Preconditioned space-time boundary element methods for the heat equation S. Dohr and O. Steinbach Institut für Numerische Mathematik Space-Time Methods
More informationDiscontinuous Galerkin Time Discretization Methods for Parabolic Problems with Linear Constraints
J A N U A R Y 0 1 8 P R E P R N T 4 7 4 Discontinuous Galerkin Time Discretization Methods for Parabolic Problems with Linear Constraints gor Voulis * and Arnold Reusken nstitut für Geometrie und Praktische
More informationA VOLUME MESH FINITE ELEMENT METHOD FOR PDES ON SURFACES
European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS 2012) J. Eberhardsteiner et.al. (eds.) Vienna, Austria, September 10-14, 2012 A VOLUME MESH FINIE ELEMEN MEHOD FOR
More informationLinear and non-linear programming
Linear and non-linear programming Benjamin Recht March 11, 2005 The Gameplan Constrained Optimization Convexity Duality Applications/Taxonomy 1 Constrained Optimization minimize f(x) subject to g j (x)
More informationOn John type ellipsoids
On John type ellipsoids B. Klartag Tel Aviv University Abstract Given an arbitrary convex symmetric body K R n, we construct a natural and non-trivial continuous map u K which associates ellipsoids to
More information1 Definition of the Riemann integral
MAT337H1, Introduction to Real Analysis: notes on Riemann integration 1 Definition of the Riemann integral Definition 1.1. Let [a, b] R be a closed interval. A partition P of [a, b] is a finite set of
More informationESAIM: M2AN Modélisation Mathématique et Analyse Numérique Vol. 34, N o 4, 2000, pp
Mathematical Modelling and Numerical Analysis ESAIM: M2AN Modélisation Mathématique et Analyse Numérique Vol. 34, N o 4, 2, pp. 799 8 STRUCTURAL PROPERTIES OF SOLUTIONS TO TOTAL VARIATION REGULARIZATION
More informationarxiv: v1 [math.na] 19 Nov 2018
QUASINORMS IN SEMILINEAR ELLIPTIC PROBLEMS JAMES JACKAMAN AND TRISTAN PRYER arxiv:1811.07816v1 [math.na] 19 Nov 2018 (1.1) Abstract. In this note we examine the a priori and a posteriori analysis of discontinuous
More informationChapter 1: The Finite Element Method
Chapter 1: The Finite Element Method Michael Hanke Read: Strang, p 428 436 A Model Problem Mathematical Models, Analysis and Simulation, Part Applications: u = fx), < x < 1 u) = u1) = D) axial deformation
More informationA-posteriori error estimates for optimal control problems with state and control constraints
www.oeaw.ac.at A-posteriori error estimates for optimal control problems with state and control constraints A. Rösch, D. Wachsmuth RICAM-Report 2010-08 www.ricam.oeaw.ac.at A-POSTERIORI ERROR ESTIMATES
More informationConstrained optimization
Constrained optimization DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Compressed sensing Convex constrained
More informationNon Convex Minimization using Convex Relaxation Some Hints to Formulate Equivalent Convex Energies
Non Convex Minimization using Convex Relaxation Some Hints to Formulate Equivalent Convex Energies Mila Nikolova (CMLA, ENS Cachan, CNRS, France) SIAM Imaging Conference (IS14) Hong Kong Minitutorial:
More informationSupport Vector Machines: Maximum Margin Classifiers
Support Vector Machines: Maximum Margin Classifiers Machine Learning and Pattern Recognition: September 16, 2008 Piotr Mirowski Based on slides by Sumit Chopra and Fu-Jie Huang 1 Outline What is behind
More information