Image Editing in the Gradient Domain
Shai Avidan, Tel Aviv University
Slide Credits (partial list): Rick Szeliski, Steve Seitz, Alyosha Efros, Yacov Hel-Or, Marc Levoy, Bill Freeman, Fredo Durand, Sylvain Paris
Image Composition: source images and target image
Basics
Images as scalar fields: $I : \mathbb{R}^2 \to \mathbb{R}$
Vector Field
A vector function $G : \mathbb{R}^2 \to \mathbb{R}^2$. Each point $(x,y)$ is associated with a vector $(u,v)$: $G(x,y) = [\,u(x,y),\ v(x,y)\,]$.
Gradient Field
Partial derivatives of a scalar field: $\nabla I(x,y) = \left( \frac{\partial I}{\partial x}, \frac{\partial I}{\partial y} \right)$. Direction: the maximum rate of change of the scalar field. Magnitude: the rate of change. Not all vector fields are gradients of an image; only if they are curl-free (a.k.a. conservative). What's the difference between 1D and 2D gradient fields?
Continuous vs. Discrete
Continuous case: the derivative. Discrete case: finite differences, e.g. $I_x = I * [\,1\ \ 0\ \ {-1}\,]$ and $I_y = I * [\,1\ \ 0\ \ {-1}\,]^T$ for an image $I(x,y)$.
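A minimal sketch of the discrete case, assuming NumPy/SciPy; the function name `gradient_field` is illustrative, and the kernel is the central-difference filter named above:

```python
import numpy as np
from scipy.signal import convolve2d

def gradient_field(I):
    """Central-difference approximation of the image gradient (unnormalized,
    as in the slides: convolving with [1 0 -1] gives I(x+1,y) - I(x-1,y))."""
    dx = np.array([[1.0, 0.0, -1.0]])   # horizontal derivative filter
    dy = dx.T                           # vertical derivative filter (transposed)
    Ix = convolve2d(I, dx, mode='same', boundary='symm')
    Iy = convolve2d(I, dy, mode='same', boundary='symm')
    return Ix, Iy
```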
Interpolation
$S$: a closed subset of $\mathbb{R}^2$. $\Omega$: a closed subset of $S$, with boundary $\partial\Omega$. $f^*$: a known scalar function over $S \setminus \Omega$. $f$: an unknown scalar function over $\Omega$.
Intuition: hole filling (1D and 2D examples)
Membrane Interpolation
Solve the following minimization problem: $\min_f \iint_\Omega |\nabla f|^2$, subject to the Dirichlet boundary condition $f|_{\partial\Omega} = f^*|_{\partial\Omega}$.
Variational Methods to the Rescue! Calculus: when we want to minimize $g(x)$ over the space of real values, we differentiate and set $g'(x) = 0$. But what is the derivative of a functional? Variational methods: express our problem as an energy minimization over a space of functions.
1D Derivative
Definition: $f'(x) = \lim_{\varepsilon \to 0} \frac{f(x+\varepsilon) - f(x)}{\varepsilon}$. Multidimensional derivative for some direction vector $w$: $D_w f(x) = \lim_{\varepsilon \to 0} \frac{f(x + \varepsilon w) - f(x)}{\varepsilon}$.
We want to minimize $\int f'(x)^2\,dx$ with $f(x_1) = a$ and $f(x_2) = b$. Assume we have a solution $f$ and try to define some notion of 1D derivative with respect to a 1D parameter $\varepsilon$ in a given direction of function space: for a perturbation function $\eta(x)$ that also respects the boundary conditions (i.e. $\eta(x_1) = \eta(x_2) = 0$) and a scalar $\varepsilon$, the integral $\int (f'(x) + \varepsilon\,\eta'(x))^2\,dx$ should be bigger than $\int f'(x)^2\,dx$ alone.
Calculus of Variations
Let's expand the square: $\int (f'(x) + \varepsilon\,\eta'(x))^2\,dx = \int f'(x)^2\,dx + 2\varepsilon \int \eta'(x) f'(x)\,dx + \varepsilon^2 \int \eta'(x)^2\,dx$. The third term is always positive and is negligible as $\varepsilon$ goes to zero. So differentiate the rest with respect to $\varepsilon$ and set to zero: $\int \eta'(x) f'(x)\,dx = 0$. Integrate by parts: $\int \eta'(x) f'(x)\,dx = [\eta(x) f'(x)]_{x_1}^{x_2} - \int \eta(x) f''(x)\,dx$, where $[f(x)g(x)]_a^b = f(b)g(b) - f(a)g(a)$. Since $\eta(x_1) = \eta(x_2) = 0$, the expression in the square brackets equals zero, and we are left with $\int \eta(x) f''(x)\,dx = 0$. But since this must be true for every $\eta$, it holds that $f''(x) = 0$ everywhere.
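The same derivation written out as one display, for reference (a restatement of the steps above, not new material):

```latex
\begin{aligned}
E(\varepsilon) &= \int_{x_1}^{x_2} \big(f'(x) + \varepsilon\,\eta'(x)\big)^2\,dx
 = \int f'^2\,dx \;+\; 2\varepsilon \int \eta' f'\,dx \;+\; \varepsilon^2 \int \eta'^2\,dx,\\[4pt]
0 &= \frac{dE}{d\varepsilon}\Big|_{\varepsilon=0}
 = 2\int_{x_1}^{x_2} \eta'(x)\,f'(x)\,dx
 = 2\Big[\eta(x)\,f'(x)\Big]_{x_1}^{x_2} - 2\int_{x_1}^{x_2} \eta(x)\,f''(x)\,dx,\\[4pt]
&\Rightarrow\ \int_{x_1}^{x_2} \eta(x)\,f''(x)\,dx = 0 \quad \forall\,\eta
 \quad\Rightarrow\quad f''(x) = 0.
\end{aligned}
```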
Intuition
The quantity we minimize, $\int f'^2$, is the (squared) slope integrated over the interval. Locally, if the second derivative were not zero, the first derivative would be varying, which is bad since we want the integral to be minimized.
Recap: we start with the functional we need to minimize, introduce the perturbation function, use the calculus of variations, set the derivative to zero, integrate by parts, and obtain the solution.
Euler-Lagrange Equation
A fundamental equation of the calculus of variations, which states that if $J$ is defined by an integral of the form $J = \int F(x, f, f_x)\,dx$ (1), then $J$ has a stationary value if the following differential equation is satisfied: $\frac{\partial F}{\partial f} - \frac{d}{dx}\frac{\partial F}{\partial f_x} = 0$ (2).
Recall, we want to solve the minimization problem $\min_f \iint_\Omega |\nabla f|^2$, subject to the Dirichlet boundary condition $f|_{\partial\Omega} = f^*|_{\partial\Omega}$.
Membrane Interpolation
In our case $F = f_x^2 + f_y^2$, and equation (2) becomes $\frac{\partial F}{\partial f} - \frac{d}{dx}\frac{\partial F}{\partial f_x} - \frac{d}{dy}\frac{\partial F}{\partial f_y} = 0$. Here $\frac{\partial F}{\partial f} = 0$, $\frac{d}{dx}\frac{\partial F}{\partial f_x} = 2 f_{xx}$, and $\frac{d}{dy}\frac{\partial F}{\partial f_y} = 2 f_{yy}$, so we get the Laplacian: $\Delta f = f_{xx} + f_{yy} = 0$.
Smooth Image Completion
Euler-Lagrange: $f = \arg\min_f \iint_\Omega |\nabla f|^2$ s.t. $f|_{\partial\Omega} = f^*|_{\partial\Omega}$. The minimum is achieved when $\Delta f = 0$ over $\Omega$, s.t. $f|_{\partial\Omega} = f^*|_{\partial\Omega}$.
Discrete Approximation (Membrane Interpolation)
$\Delta f = 0$ over $\Omega$ s.t. $f|_{\partial\Omega} = f^*|_{\partial\Omega}$. Discretizing the Laplacian with finite differences: $f_{x+1,y} + f_{x-1,y} + f_{x,y+1} + f_{x,y-1} - 4 f_{x,y} = 0$.
Discrete Approimation b 0 0 0 4 4 4 Each, is an unknown variable i, there are N unknown (the piel values) This reduces to a sparse linear sstem o equations: We have A_ * I 0 A_ * I 0 A_boundar * I boundar so We can combine all and get A b Gradient constraints Boundar conditions
What's in the picture?
What's in the picture?
What's in the picture?
Editing in the Gradient Domain
Given a vector field $G = (u(x,y), v(x,y))$ (the pasted gradient) in a bounded region $\Omega$, find the values of $f$ in $\Omega$ that optimize: $\min_f \iint_\Omega |\nabla f - G|^2$, with $f|_{\partial\Omega} = f^*|_{\partial\Omega}$.
Intuition: what if G is null?
$\min_f \iint_\Omega |\nabla f|^2$ with $f|_{\partial\Omega} = f^*|_{\partial\Omega}$ (1D and 2D examples).
1D case: what if G is not null?
Seamlessly paste one signal onto another: add a linear function so that the boundary condition is respected; the gradient error is distributed equally all over $\Omega$ in order to respect the boundary condition.
2D case (from Pérez et al. 2003)
2D case (from Pérez et al. 2003)
2D case
Poisson Equation
In our case $F = (f_x - u)^2 + (f_y - v)^2$, and $\frac{\partial F}{\partial f} - \frac{d}{dx}\frac{\partial F}{\partial f_x} - \frac{d}{dy}\frac{\partial F}{\partial f_y} = 0$. Here $\frac{\partial F}{\partial f} = 0$, $\frac{d}{dx}\frac{\partial F}{\partial f_x} = 2(f_{xx} - u_x)$, and $\frac{d}{dy}\frac{\partial F}{\partial f_y} = 2(f_{yy} - v_y)$, so we get the Poisson equation: $\Delta f = \mathrm{div}\,G = u_x + v_y$.
Discrete Approimation (Poisson Cloning) Ω Ω Ω *.. s t divg over,,,,, ( ) 0 4,,,,,,,,,,,, ( ) ( ) ( ) ( ),,,, G G G G G G divg
Alternative Derivation (discrete notation) v u I D D D [ ]* 0 * I D I Let D - Toeplitz matri
$\min_I \|D_x I - u\|^2 + \|D_y I - v\|^2$. Normal equation: $(D_x^T D_x + D_y^T D_y)\,I = D_x^T u + D_y^T v$. Note: $D^T$ corresponds to convolution with the flipped filter, $\mathrm{flip}([\,1\ \ 0\ \ {-1}\,]) = [\,{-1}\ \ 0\ \ 1\,]$.
Numerical Solution
Discretizing the Laplacian: $D_x^T D_x + D_y^T D_y$ is a sparse Toeplitz matrix whose rows apply the 5-point Laplacian stencil ($-4$ on the diagonal and $1$s for the four neighbors).
$(D_x^T D_x + D_y^T D_y)\,I = D_x^T u + D_y^T v$, i.e. $A I = b$.
Comments: $A$ is sparse. $A$ is symmetric and can be inverted. If $\Omega$ is rectangular, $A$ is a Toeplitz matrix. The size of $A$ is N×N for N pixels, so it is impractical to form or store $A^{-1}$, and impractical to invert $A$ directly.
Iterative Solution: Conjugate Gradient
Solves a linear system $Ax = b$ (in our case $x = I$), where $A$ is square, symmetric, and positive semi-definite. Advantages: fast! No need to store $A$, only to compute $Ax$; in our case $Ax$ can be calculated using a single convolution. Can deal with constraints.
Conjugate Gradient as a Minimization Problem
Minimizes the quadratic form $g(x) = \frac{1}{2} x^T A x - b^T x$; and since $A$ is symmetric, $\nabla g(x) = Ax - b$, so the minimizer satisfies $Ax = b$.
Steepest Descent Method
Pick the gradient direction $r^{(i)}$, then find the optimum along this direction: $x^{(i)} + \alpha r^{(i)}$, minimizing the energy along the gradient.
Behavior of Gradient Descent
It zigzags or goes straight depending on whether we are lucky, and ends up taking multiple steps in the same direction.
Conjugate Gradient
For each step i: take the residual $d^{(i)} = b - A x^{(i)}$ (the negative gradient); make it A-orthogonal to the previous directions; find the minimum along this direction. Needs at most N iterations. Matlab command: cgs(A,b), where A can be a function handle afun such that afun(x) returns A*x.
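An analogue of the function-handle idea in Python, assuming SciPy (names like `apply_A` are illustrative): $A$ is never formed, and applying it is one Laplacian convolution.

```python
import numpy as np
from scipy.ndimage import convolve
from scipy.sparse.linalg import LinearOperator, cg

h, w = 128, 128
lap = np.array([[0.,  1., 0.],
                [1., -4., 1.],
                [0.,  1., 0.]])

def apply_A(x):
    """One matrix-vector product A @ x, computed as a single convolution.
    The sign flip makes the operator positive semi-definite, as cg requires."""
    return -convolve(x.reshape(h, w), lap, mode='nearest').ravel()

A = LinearOperator((h * w, h * w), matvec=apply_A)
b = np.random.rand(h * w)   # stand-in right-hand side, e.g. -div G
b -= b.mean()               # project out the constant null space (Neumann Laplacian)
x, info = cg(A, b)          # conjugate gradient; info == 0 on convergence
```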
Solving the Poisson Equation with Boundary Conditions
Define a circumscribing square $\Pi \supseteq \Omega$. Let $\Omega \subset \Pi$ denote the edited image area, and $\Omega^* = \Pi - \Omega$ the surrounding area. With $S = D_x^T D_x + D_y^T D_y$, we require $(S\,I)|_\Omega = (S\,T)|_\Omega$ (the Laplacian of the result matches that of the source $T$ inside $\Omega$), s.t. $I|_{\Omega^*} = I^*|_{\Omega^*}$ (the result equals the target outside).
The above requirements can be expressed as a linear set of equations $A I = b$, stacking the rows of $S$ for pixels in $\Omega$ with identity rows $U$ for pixels in $\Omega^*$: $A = \begin{bmatrix} S_\Omega \\ U_{\Omega^*} \end{bmatrix}$, $b = \begin{bmatrix} (S\,T)_\Omega \\ I^*_{\Omega^*} \end{bmatrix}$, solved with cgs(A,b).
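Putting the pieces together, a compact sketch of Poisson cloning with a direct sparse solver (assuming SciPy; `poisson_clone` is an illustrative name, and the mask is assumed not to touch the image border). Inside $\Omega$ the Laplacian of the result is constrained to the Laplacian of the source; outside, pixels are pinned to the target:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def poisson_clone(src, dst, mask):
    """Seamless cloning: Lap(I) = Lap(src) inside mask, I = dst outside."""
    h, w = dst.shape
    n = h * w
    ind = np.arange(n).reshape(h, w)
    A = sp.lil_matrix((n, n))
    b = np.zeros(n)
    for y in range(h):
        for x in range(w):
            k = ind[y, x]
            if mask[y, x]:
                A[k, k] = -4.0
                lap = -4.0 * src[y, x]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    A[k, ind[ny, nx]] = 1.0
                    lap += src[ny, nx]
                b[k] = lap                # guidance: Laplacian of the source
            else:
                A[k, k] = 1.0             # pin surrounding pixels to the target
                b[k] = dst[y, x]
    return spsolve(A.tocsr(), b).reshape(h, w)
```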
Image stitching
Gradient Domain Composition
Cut & Paste vs. Paste in the Gradient Domain
Another example
Transparent Cloning
Instead of pasting the source gradient alone, combine source $S$ and target $T$ over $\Omega$: use the average, $G = (\nabla S + \nabla T)/2$, or mixed cloning, $G = \nabla S$ where $|\nabla S| > |\nabla T|$ and $G = \nabla T$ otherwise (i.e. a max over gradient magnitudes).
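A sketch of building the guidance field for these variants (the exact rule on the slide is partly garbled, so this follows Pérez et al. 2003; `guidance_field` is an illustrative name):

```python
import numpy as np

def guidance_field(Sx, Sy, Tx, Ty, mode='transparent'):
    """Combine source (S) and target (T) gradient fields over Omega."""
    if mode == 'transparent':          # average the two gradient fields
        return (Sx + Tx) / 2.0, (Sy + Ty) / 2.0
    # mixed cloning: at each pixel keep whichever gradient is stronger
    keep_src = Sx**2 + Sy**2 > Tx**2 + Ty**2
    return np.where(keep_src, Sx, Tx), np.where(keep_src, Sy, Ty)
```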
Transparent Cloning
Another example
Another example
Changing local illumination
Defect concealment
High Dynamic Range Compression
Small exposure: the inside is dark.
High Dynamic Range Compression
Large exposure: the outside is saturated.
Manipulate Gradients
Attenuate large gradients: following Fattal et al. 2002, each gradient of the log-luminance $H$ is scaled by $\Phi(x,y) = \left( \frac{\|\nabla H(x,y)\|}{\alpha} \right)^{\beta - 1}$, where $\alpha$ is set to 0.1 of the average gradient magnitude and $\beta$ is set between 0.8 and 0.9, and the gradient $\nabla H$ is computed with finite differences.
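A sketch of this attenuation rule (the formula on the slide is garbled, so the exponent form here is taken from Fattal et al. 2002, and `attenuate_gradients` is an illustrative name): with $\beta < 1$, large gradients are compressed and small ones slightly boosted; the result is then recovered by solving the Poisson equation.

```python
import numpy as np

def attenuate_gradients(Hx, Hy, beta=0.85, eps=1e-8):
    """Scale log-luminance gradients: magnitudes above alpha are compressed."""
    mag = np.sqrt(Hx**2 + Hy**2)
    alpha = 0.1 * mag.mean()           # alpha = 0.1 x average gradient magnitude
    phi = (np.maximum(mag, eps) / alpha) ** (beta - 1.0)
    return Hx * phi, Hy * phi          # attenuated field; solve Poisson to get the image
```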
High Dynamic Range Compression
Desired image.
High Dynamic Range Compression
Short exposure, software tone mapping, long exposure.
Shadow Removal
Color2Gray Algorithm
Optimization: $\min_g \sum_i \sum_{j = i - \mu}^{i + \mu} \big( (g_i - g_j) - \delta_{ij} \big)^2$. If $\delta_{ij} = \Delta L_{ij}$ (the luminance difference) then the ideal image $g$ is the luminance channel; otherwise $\delta_{ij}$ is selectively modulated by the chrominance difference $\Delta C_{ij}$.
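A toy sketch of evaluating this objective for a flattened image, assuming NumPy (`color2gray_energy`, `delta`, and `mu` are illustrative; a real implementation of Gooch et al. 2005 minimizes this energy iteratively, here we only compute it):

```python
import numpy as np

def color2gray_energy(g, delta, mu):
    """Sum over pixels i and neighbors j in [i-mu, i+mu] of ((g_i - g_j) - delta_ij)^2."""
    E = 0.0
    n = len(g)
    for i in range(n):
        for off in range(-mu, mu + 1):
            j = i + off
            if 0 <= j < n and j != i:
                E += ((g[i] - g[j]) - delta[i, j]) ** 2
    return E
```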
Results
Original | Photoshop Gray | Color2Gray | Color2Gray + Color
Original | Photoshop Gray | Color2Gray + Color
Original | Photoshop Gray | Color2Gray
Original | Photoshop Gray | Color2Gray