Lecture 4 Colorization and Segmentation
1 Lecture 4: Colorization and Segmentation. Summer School "Mathematics in Imaging Science", University of Bologna, Italy. June 1st 2018, Friday 11:15-13:15. Sung Ha Kang, School of Mathematics, Georgia Institute of Technology.
2 Outline
Lecture 1 & 2 (May 30): Variational/PDE based image restoration models: total variation, anisotropic diffusion, TVL1, image decomposition, color models, deblurring, inpainting.
Lecture 3 (May 31): Image segmentation: Mumford-Shah, Chan-Vese model; multiphase segmentation and phase transition model.
Lecture 4 (June 1): Related topics: colorization, unsupervised multiphase segmentation.
3 Colorization M. Fornasier, Nonlinear projection digital image inpainting and restoration methods, Journal of Mathematical Imaging and Vision, Volume 24, Number 3, pages ,
4 Related research - colorization
The term "colorization" was introduced by Wilson Markle, who first processed the gray scale moon images from the Apollo mission. The term was used to describe the process of adding color to grayscale movies or TV broadcast programs [Encyclopedia of Television, 1997].
Variational/PDE approaches:
[Sapiro 2005] inpaints colors by minimizing the difference between the gradient of luminance and the gradient of color.
[Yatziv and Sapiro 2006] utilized Dijkstra's shortest path algorithm for fast computation.
[Fornasier 2006] related this problem to the inpainting literature and used a nonlinear distortion function L for fitting the grayscale data:
$$\mu \int_{\Omega \setminus D} |u(x) - \bar{u}(x)|^2\, dx + \lambda \int_{D} |L(u(x)) - \bar{u}(x)|^2\, dx + \int_\Omega |\nabla u(x)|^p\, dx.$$
[Fornasier and March 2007]: mathematical analysis, existence of solutions using Γ-convergence of this model.
[Fonseca, Leoni, Maggi and Morini 2009]: used the calibration method for minimizers of a colorization model.
[K. and March 2007]: a couple of variational models via chromaticity and brightness decomposition.
[Ha Quang, K. and Le 2010], and many more...
5 TV Colorization with S² penalization
For a color image C ∈ S² and D the missing color information [K and March, 2007]:
$$E_\alpha(C) = \int_\Omega |\nabla C|\, dx + \lambda \int_{D^c} |C - C_o|^2\, dx + \alpha \int_\Omega (1 - |C|)^2\, dx,$$
and the corresponding minimization problem
$$(P_\alpha)\quad \min \big\{ E_\alpha(C) : C \in BV(\Omega; \mathbb{R}^3),\ |C(x)| \le 1 \text{ for a.e. } x \in \Omega \big\}.$$
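A minimal numpy sketch of a discrete version of this energy, using forward differences (my own discretization, not code from the lecture; function and parameter names are hypothetical):

```python
import numpy as np

def tv_colorization_energy(C, C_o, mask, lam=10.0, alpha=1.0):
    """Discrete E_alpha(C): TV term + fidelity on D^c + unit-norm penalty.

    C    : (H, W, 3) color field
    C_o  : (H, W, 3) observed colors (meaningful where mask is False)
    mask : (H, W) boolean, True on D (missing color), False on D^c (given color)
    """
    # forward differences for the vectorial total variation |grad C|
    dx = np.diff(C, axis=1, append=C[:, -1:, :])
    dy = np.diff(C, axis=0, append=C[-1:, :, :])
    tv = np.sqrt((dx ** 2 + dy ** 2).sum(axis=2)).sum()
    # fidelity only on D^c, where color is observed
    fid = ((C - C_o) ** 2).sum(axis=2)[~mask].sum()
    # penalty pushing |C| toward 1 (the S^2 constraint, relaxed)
    norm_pen = ((1.0 - np.linalg.norm(C, axis=2)) ** 2).sum()
    return tv + lam * fid + alpha * norm_pen
```

A constant unit-norm field that matches the observations has zero energy, which is a quick sanity check on the discretization.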
6 Given 6
7 Result 7
8 Original 8
9 Colorization via weighted harmonic maps
For a color image C ∈ S² and D the missing color information [K and March, 07]:
$$F(C) = \int_\Omega g(|\nabla B|)\, |\nabla C|^2\, dx + \lambda \int_{D^c} |C - C_o|^2\, dx + \alpha \int_\Omega (1 - |C|)^2\, dx,$$
and the corresponding minimization problem
$$(P)\quad \min \big\{ F(C) : C \in W^{1,2}(\Omega; S^2) \big\}.$$
If C is a solution of (P), the colorized image f can again be defined by setting f = B_o C. Here g : R⁺ → R⁺ is a monotone decreasing function such that g(0) = 1, g(t) > 0 for any t > 0, and lim_{t→+∞} g(t) = 0; e.g. g(t) = 1/(1 + (t/a)²), or g(t) = e^{−(t/a)²}, with a > 0. The value of g(|∇B|) is close to one in regions where B_o is slowly varying, while it is small at the edges of brightness.
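The edge-stopping weight can be illustrated in a few lines; this sketch assumes the choice g(t) = 1/(1 + (t/a)²), and the function name and parameter value are mine:

```python
import numpy as np

def edge_weight(B, a=0.1):
    """g(|grad B|) with g(t) = 1/(1 + (t/a)^2): close to 1 in flat
    regions of the brightness B, small across brightness edges."""
    gx = np.gradient(B, axis=1)
    gy = np.gradient(B, axis=0)
    t = np.hypot(gx, gy)          # gradient magnitude |grad B|
    return 1.0 / (1.0 + (t / a) ** 2)
```

On a flat image the weight is 1 everywhere (diffusion is unimpeded); across a step edge it drops sharply, so the harmonic-map smoothing of the chromaticity stops at brightness edges.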
10 Given 10
11 Result 11
12 Given 12
13 Result 13
14 Given 14
15 Result 15
16 True 16
17 Image decomposition
[Meyer, 01] introduced an image decomposition model based on the Rudin-Osher-Fatemi total variation minimization (TV) model. A given image f is separated into f = u + v by minimizing the functional
$$\inf_{(u,v) \in BV \times G,\ f = u + v} \int_\Omega |\nabla u| + \alpha \|v\|_G.$$
The Banach space G contains signals with large oscillations. A distribution v belongs to G if v can be written as v = ∂₁g₁ + ∂₂g₂ = Div(g) with g₁, g₂ ∈ L^∞. The G-norm ‖v‖_G is defined as the infimum of all ‖g‖_{L^∞} = sup_{x∈Ω} |g(x)| over g with v = Div(g), where |g(x)| = √(g₁² + g₂²)(x).
G-norm approaches: [Vese, Sole, Osher 03], [Tadmor, Nezzar, Vese 04], [Le, Vese 04], [Daubechies and Teschke 04], [Aujol and Chambolle 05], [Aujol, Aubert, Blanc-Féraud and Chambolle 05], [Aubert and Aujol 05], [Starck, Elad and Donoho 05], [Garnett, Le, Vese 07], [Lieu, Vese 08], [Kim and Vese 09]...
18 Texture Colorization
Decompose the image brightness B_o into two components B_o = B_u + B_v, where B_u is the BV component of the original image and B_v is the oscillatory part representing noise or texture [joint work with J-F. Aujol, 2006]. Then use B = G_σ * B_u to minimize the same functional,
$$F_\alpha(C) = \int_\Omega g(|\nabla B|)\, |\nabla C|^2\, dx + \lambda \int_{D^c} |C - C_o|^2\, dx + \alpha \int_\Omega (1 - |C|)^2\, dx.$$
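The smoothing step B = G_σ * B_u can be sketched with a separable truncated Gaussian; this is a generic implementation under my own naming, and the TV-G decomposition producing B_u is omitted (it requires a solver for the model above):

```python
import numpy as np

def gaussian_smooth(B, sigma=1.5):
    """Approximate B = G_sigma * B_u by separable convolution with a
    truncated, normalized Gaussian kernel (reflect boundary handling)."""
    r = int(3 * sigma)                       # truncate at 3 sigma
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()                             # unit mass -> preserves constants
    pad = np.pad(B, r, mode="reflect")
    # convolve rows, then columns (separability of the 2-D Gaussian)
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, tmp)
```

Because the kernel is normalized, constants are preserved and an interior impulse keeps its total mass while being spread out.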
19 Given 19
20 Result 20
21 True 21
22 Colorization via Reproducing Kernel Hilbert Spaces (RKHS) [Ha Quang, K. and Le 2010]
Let Ω ⊂ R² be the image domain and D ⊂ Ω a nonempty subset. The gray scale image is g : Ω → R; the color is given on the domain D, and f is the given color, i.e. f : D → R³ (RGB, a 3-dimensional vector). The goal is to find F : Ω → R³ such that F|_D ≈ f.
In machine learning, reproducing kernel Hilbert spaces (RKHS) have become a powerful paradigm, both from algorithmic and theoretical perspectives; e.g. [Schölkopf and Smola 2002] Learning with Kernels, [Shawe-Taylor and Cristianini 2004] Kernel Methods for Pattern Analysis, [Vapnik 1998]. The goal of machine learning is to make inferences and generalizations based on limited sampled data. [Coifman and Lafon 2006] is one recent RKHS-based approach.
Colorization: find an extension of f : D → R³ to F : Ω → R³.
23 Reproducing Kernel Hilbert Space [Aronszajn 1950]
Let D be an arbitrary nonempty set and K : D × D → R a symmetric, positive definite kernel on D. There exists a unique Hilbert space H_K of functions f : D → R satisfying:
1. K_x ∈ H_K for all x ∈ D, where K_x(t) = K(x, t);
2. span{K_x}_{x∈D} is dense in H_K;
3. the inner product ⟨·,·⟩_{H_K} of H_K satisfies f(x) = ⟨f, K_x⟩_{H_K} (reproducing property), for all f ∈ H_K and all x ∈ D.
On the dense set span{K_x}, the inner product is defined by
$$\Big\langle \sum_i a_i K_{x_i}, \sum_j b_j K_{y_j} \Big\rangle_{H_K} = \sum_{i,j} a_i b_j K(x_i, y_j).$$
The Hilbert space H_K is called the Reproducing Kernel Hilbert Space with reproducing kernel K, with norm ‖·‖_{H_K}.
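A small numerical check of these defining properties with a Gaussian kernel on D = R (the kernel choice, sample points, and coefficients are my own):

```python
import numpy as np

def K(x, y, sigma=1.0):
    """Gaussian kernel K(x, y) = exp(-|x - y|^2 / (2 sigma^2)) on D = R."""
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

xs = np.array([0.0, 0.5, 2.0])   # sample points x_i in D
a = np.array([1.0, -2.0, 0.7])   # coefficients a_i

def f(t):
    """f = sum_i a_i K_{x_i}, an element of span{K_x}."""
    return np.sum(a * K(xs, t))

# 1) the Gram matrix (K(x_i, x_j)) is positive semi-definite
G = K(xs[:, None], xs[None, :])
assert np.all(np.linalg.eigvalsh(G) > -1e-12)

# 2) ||f||_{H_K}^2 = a^T G a >= 0, by the inner product formula on the span
assert a @ G @ a >= 0

# 3) reproducing property: f(x) = <f, K_x>_{H_K} = sum_i a_i K(x_i, x)
x = 1.3
assert abs(f(x) - np.sum(a * K(xs, x))) < 1e-12
```

The third check is the reproducing property restricted to the span, where the inner product of f with K_x reduces to kernel evaluations.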
24 RKHS for Vector-valued functions
Let W^D denote the vector space of all functions f : D → W. A function K : D × D → L(W) is said to be an operator-valued positive definite kernel if for each pair (x, y) ∈ D × D, K(x, y) ∈ L(W) is a self-adjoint operator and
$$\sum_{i,j=1}^{N} \langle w_i, K(x_i, x_j) w_j \rangle_W \ge 0$$
for every finite set of points {x_i}_{i=1}^N in D [Carmeli, De Vito, and Toigo 2006], [Micchelli and Pontil 2005]; cf. [Caponnetto, Pontil, Micchelli, and Ying 2008].
Colorization [Ha Quang, K. and Le 2010]: given f ∈ L²_μ(D; W),
$$\inf_{F \in H_K(\Omega)} \|L_K F - f\|^2_{L^2_\mu(D;W)} + \gamma \|F\|^2_{H_K(\Omega)},$$
for some γ > 0. This is a standard least-squares Tikhonov regularization problem in Hilbert spaces, which has a unique minimizer F_γ satisfying the normal equation
$$(L_K^* L_K + \gamma I) F_\gamma = L_K^* f \quad\Longrightarrow\quad F_\gamma = (L_K^* L_K + \gamma I)^{-1} L_K^* f.$$
25 Numerical Algorithm - regularized least squares
The explicit solution can be computed as
$$F_\gamma = \sum_{i=1}^{m} K(x, x_i)\, a_i,$$
where the a_i are the solutions of
$$\sum_{j=1}^{m} K(x_i, x_j)\, a_j + m\gamma\, a_i = f(x_i).$$
Use K_D(x, y), with (x, y) ∈ D × D, for solving the system of linear equations, and K_{ΩD}(x, y), with (x, y) ∈ Ω × D, for evaluating the result.
Let (2r+1) × (2r+1) be the size of a square patch around each x ∈ Ω, and let l = (2r+1)², so the patch vector is x̃ = (x₁, ..., x_l) ∈ R^l (r = 0 gives only the intensity value). The kernel is
$$k(x, y) = \exp\Big( -\frac{\|g(\tilde{x}) - g(\tilde{y})\|^p}{2\sigma_1 (2r+1)^p} \Big) \exp\Big( -\frac{\|x - y\|^p}{\sigma_2 \rho^p} \Big),$$
where ρ = √(N² + M²). We experimented with 0 < p ≤ 2 and various σ₁ and σ₂ values.
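A 1-D toy version of this linear system and evaluation step (the scalar Gaussian kernel and the names here are mine; the lecture's kernel compares intensity patches as above):

```python
import numpy as np

def rkhs_colorize(X_D, f_D, X_eval, kernel, gamma=1e-3):
    """Solve sum_j K(x_i, x_j) a_j + m*gamma*a_i = f(x_i) for the a_i,
    then evaluate F_gamma(x) = sum_i K(x, x_i) a_i.
    f_D has one column per color channel, so a is m x 3."""
    m = len(X_D)
    K_DD = kernel(X_D[:, None], X_D[None, :])        # kernel on D x D
    a = np.linalg.solve(K_DD + m * gamma * np.eye(m), f_D)
    K_eval = kernel(X_eval[:, None], X_D[None, :])   # kernel on Omega x D
    return K_eval @ a

# toy data: three "pixels" with known RGB, color interpolated in between
kernel = lambda x, y: np.exp(-np.abs(x - y) ** 2)
X_D = np.array([0.0, 1.0, 2.0])
f_D = np.eye(3)                     # pure R, G, B at the three sample points
F = rkhs_colorize(X_D, f_D, np.linspace(0, 2, 5), kernel, gamma=1e-9)
```

With a tiny γ the interpolant nearly reproduces the given colors at the sample points; larger γ trades fidelity for a smoother (smaller-norm) extension.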
26 Given: less than 3% of color given.
27 Result: r = 3, p = 2, σ₁ = 0.05, σ₂ = 10.
28 Given: less than 0.5% of color given.
29 Result: p = 1, r = 2, σ₁ = 0.5, σ₂ = 10.
36 Supervised and Transductive multi-class segmentation using p-Laplacians and RKHS methods
Let A ∈ R^{νn×N} denote the matrix corresponding to the linear mapping
$$x \mapsto \big( w_{i,j}^{1/p} (x(i) - x(j)) \big)_{i,j=1}^{N}.$$
Let A_U ∈ R^{νn×|U|} and A_L ∈ R^{νn×|L|} denote the matrices containing the columns of A corresponding to the indices in U and L. The minimization problem (p-Laplacian) becomes
$$\arg\min_{u_U} \frac{1}{p} \sum_{k=1}^{c} \big\| A_U u_U^k + A_L l_L^k \big\|_p^p \quad \text{subject to } u_U \in S_U^c.$$
Use the primal-dual hybrid gradient algorithm with modified (extrapolated) primal variable (PDHGMp). PDHGMp was proved to converge for our setting if the parameters γ and τ are chosen such that γτ ≤ 1/‖M_U‖².
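For p = 2 the minimization is quadratic and reduces to a linear system (harmonic label propagation), so no PDHGMp iteration is needed; a minimal sketch under that simplification, with my own names:

```python
import numpy as np

def transductive_labels(W, labeled_idx, labels, n_classes):
    """p = 2 case of the graph p-Laplacian: per class, minimize
    sum_ij w_ij (u(i) - u(j))^2 with labeled nodes clamped to one-hot
    values; the optimality condition is L_UU u_U = -L_UL l_L."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W            # unnormalized graph Laplacian
    U = np.setdiff1d(np.arange(n), labeled_idx)
    l_L = np.eye(n_classes)[labels]           # one-hot labels, |L| x c
    u_U = np.linalg.solve(L[np.ix_(U, U)], -L[np.ix_(U, labeled_idx)] @ l_L)
    pred = np.empty(n, dtype=int)
    pred[labeled_idx] = labels
    pred[U] = u_U.argmax(axis=1)              # assign each node its top class
    return pred
```

On a graph with two dense clusters joined by one weak edge, labeling a single node in each cluster propagates the two classes across the whole graph.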
37 RKHS 1-Lap. 37
38 1-Lap 2-Lap RKHS 38
41 Panels: input, truth, truth, RKHS, p=1, combined, p=2.
42 Other problems: Multiphase segmentation via Modica-Mortola phase transition model
$$E_\epsilon[z, C_k \mid u] = \int_\Omega \Big[\, \epsilon\, |\nabla z|^2 + \frac{1}{\epsilon} \sin^2(\pi z) \Big]\, dx + \lambda \sum_{k=0}^{K-1} \int_\Omega |u_o - C_k|^2\, \mathrm{sinc}^2(z - k)\, dx.$$
43 Other problems: Multiphase image segmentation via an equally distanced multiple-well potential
$$E_\varphi^\epsilon(u) = \int_\Omega \Big[\, \epsilon^\alpha\, \varphi(|\nabla u|) + \frac{W(u)}{\epsilon} \Big]\, dx + \lambda \int_\Omega (\langle c, u \rangle - f)^2\, dx.$$
44 Effect of Weighted Length 43
45 Effect of True Length 44
46 Comparison with the Sin-Sinc model. Panels: sin-sinc model, true length, TV.
47 Other problems: Infinite Perimeter Segmentation model - relaxed length
$$E = \int_\Omega \mathrm{dist}(x, \Gamma)\, f(\cdot)\, dx + \epsilon \sum_{i=1}^{2} \int_\Omega \chi_i\, |u_o - c_i|^2\, dx.$$
48 Infinite Perimeter Segmentation Model

                          Chan-Vese             Proposed
  Denoising               δ_cv = 2/λ_cv         δ = δ_cv
  Cornering               δ_cv = 1/λ_cv         δ = 1/(δ_cv λ)
  Resolution              δ_cv = 2/λ_cv         δ = c δ_cv + 2/λ
  Oscillatory boundaries  δ_cv = c/λ_cv         any δ if λ ≥ 2

Comparison in terms of 1/λ_cv, c, ε, λ.
49 Other problems: Unsupervised multiphase segmentation model
$$E[K, \chi_i, c_i \mid u_o] = \mu \sum_{i=1}^{K} \frac{P(\chi_i)}{|\chi_i|} + \sum_{i=1}^{K} \int_\Omega \chi_i\, |u_o - c_i|^2\, dx,$$
where P(χ_i) is the perimeter of phase i, measured by H¹(Γ).
Results shown for μ = 0.1 and μ = 10.
50 Corner Smoothing: Mumford and Shah
(Figure: a corner of χ₁ with opening angle α is cut by a ball B_ε, turning the partition χ into Λ.)
The length term is bounded by ε, while the other terms are bounded by ε² or ε^{2π/α}. The change of the energy can be computed as
$$E_{MS}(\Lambda) - E_{MS}(\chi) \le c \left( \epsilon^2 + \epsilon^{2\pi/\alpha} + \epsilon \left( \sin\frac{\alpha}{2} - 1 \right) \right)$$
for some constant c. For a sufficiently small ε, the main change is governed by (sin(α/2) − 1), which is negative for any 0 < α < π.
Let
$$q_i = \frac{S(\chi)}{|\chi_i|} + 2 P(\chi), \qquad i \in \{1, \ldots, K\}.$$
Then the energy change ΔE becomes
$$S(\Lambda)P(\Lambda) - S(\chi)P(\chi) = \frac{\epsilon^2}{2} \left( -1 + \sin\frac{\alpha}{2} \right)(q_1 + q_2) + o(\epsilon^2),$$
using S(Λ) − S(χ) = O(ε²). Therefore, for any 0 < α < π, this is also negative, i.e. the energy is reduced by cutting the corner, as in the Mumford-Shah case.
51 Optimal Angle: Mumford-Shah
(Figure: a triple junction P among χ₁, χ₂, χ₃ with angle α, perturbed to Λ₁, Λ₂, Λ₃ with angles 2π/3.)
For Mumford-Shah, for any angle α the energy is reduced as the angle moves toward 2π/3. Let Δ_i = P(Λ_i) − P(χ_i). Then the energy change can be computed as
$$(S(\Lambda) - S(\chi))P(\chi) + S(\Lambda)(P(\Lambda) - P(\chi)) = \frac{1}{2} \sum_i q_i \Delta_i + O(\Delta^2).$$
The energy decreases if Σ_i q_i Δ_i < 0; therefore the optimal angle depends on the values of q_i, and is NOT necessarily 120°. The minimum of the unsupervised model can have multiple junctions, not only triple junctions. This is related to the work by Morgan and others on immiscible flow in R²: there are different possibilities when the length is weighted.
52 Other problems: Scale Segmentation - a regularized k-means data clustering
$$E[k, I_i, c_i \mid D] = \lambda \sum_{i=1}^{k} \frac{1}{n_i} + \sum_{i=1}^{k} \sum_{d_j \in I_i} |d_j - c_i|^2.$$
Intensity segmentation vs. scale segmentation.
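The trade-off in this energy, where λ Σ 1/n_i penalizes over-segmentation into many small clusters, can be checked numerically. This sketch evaluates only the energy under my own naming; the minimization itself alternates assignments and centers as in standard k-means:

```python
import numpy as np

def reg_kmeans_energy(d, assign, centers, lam=1.0):
    """E[k] = lam * sum_i 1/n_i + sum_i sum_{d_j in I_i} |d_j - c_i|^2."""
    e = 0.0
    for i, c in enumerate(centers):
        pts = d[assign == i]                       # cluster I_i
        e += lam / len(pts) + ((pts - c) ** 2).sum()
    return e

d = np.array([0.0, 0.1, 0.2, 10.0, 10.1, 10.2])
# one cluster: huge fidelity error; two clusters: tiny error, slightly larger penalty
e1 = reg_kmeans_energy(d, np.zeros(6, dtype=int), [d.mean()])
e2 = reg_kmeans_energy(d, np.array([0, 0, 0, 1, 1, 1]), [0.1, 10.1])
```

On this clearly bimodal data, k = 2 wins: the regularizer grows only from λ/6 to 2λ/3, while the fidelity term drops from about 150 to 0.04.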
53 Summary
* Restoration
  - Gray vs. vector-valued: denoising, deblurring
  - Still vs. video: inpainting, colorization, dejittering...
  - Surface, high-dimensional
  - Texture, texture synthesis
* Segmentation
  - Two-phase, multiphase
  - Shape analysis, object recognition, image registration
  - Learning: supervised, unsupervised
  - Data clustering
* Compression: wavelet applications
* Fast numerical computation
  - Data analysis, signal processing, computer vision, computer graphics
New devices, new technologies, new imaging challenges. Mathematical models: applied analysis, stable models, theory. Numerical challenges and advances: there is always a need for fast computation.
54 Thank you Sung Ha Kang kang/ 53
More informationKey words: denoising, higher-order regularization, stability, weak convergence, Brezis-Lieb condition. AMS subject classifications: 49N45, 94A08.
A FEW REMARKS ON VARIATIONAL MODELS FOR DENOISING RUSTUM CHOKSI, IRENE FONSECA, AND BARBARA ZWICKNAGL Abstract. Variational models for image and signal denoising are based on the minimization of energy
More informationEECS 598: Statistical Learning Theory, Winter 2014 Topic 11. Kernels
EECS 598: Statistical Learning Theory, Winter 2014 Topic 11 Kernels Lecturer: Clayton Scott Scribe: Jun Guo, Soumik Chatterjee Disclaimer: These notes have not been subjected to the usual scrutiny reserved
More informationProblem Set 6: Solutions Math 201A: Fall a n x n,
Problem Set 6: Solutions Math 201A: Fall 2016 Problem 1. Is (x n ) n=0 a Schauder basis of C([0, 1])? No. If f(x) = a n x n, n=0 where the series converges uniformly on [0, 1], then f has a power series
More informationDiffeomorphic Warping. Ben Recht August 17, 2006 Joint work with Ali Rahimi (Intel)
Diffeomorphic Warping Ben Recht August 17, 2006 Joint work with Ali Rahimi (Intel) What Manifold Learning Isn t Common features of Manifold Learning Algorithms: 1-1 charting Dense sampling Geometric Assumptions
More informationarxiv: v1 [math.oc] 3 Jul 2014
SIAM J. IMAGING SCIENCES Vol. xx, pp. x c xxxx Society for Industrial and Applied Mathematics x x Solving QVIs for Image Restoration with Adaptive Constraint Sets F. Lenzen, J. Lellmann, F. Becker, and
More informationBranched transport limit of the Ginzburg-Landau functional
Branched transport limit of the Ginzburg-Landau functional Michael Goldman CNRS, LJLL, Paris 7 Joint work with S. Conti, F. Otto and S. Serfaty Introduction Superconductivity was first observed by Onnes
More informationMathematical Methods for Physics and Engineering
Mathematical Methods for Physics and Engineering Lecture notes for PDEs Sergei V. Shabanov Department of Mathematics, University of Florida, Gainesville, FL 32611 USA CHAPTER 1 The integration theory
More informationJoint distribution optimal transportation for domain adaptation
Joint distribution optimal transportation for domain adaptation Changhuang Wan Mechanical and Aerospace Engineering Department The Ohio State University March 8 th, 2018 Joint distribution optimal transportation
More informationBasic Principles of Weak Galerkin Finite Element Methods for PDEs
Basic Principles of Weak Galerkin Finite Element Methods for PDEs Junping Wang Computational Mathematics Division of Mathematical Sciences National Science Foundation Arlington, VA 22230 Polytopal Element
More informationUsing Duality as a Method to Solve SVM Regression. Problems. Langley DeWitt
Using Duality as a Method to Solve SVM Regression 1. Introduction. Reproducing Kernel Hilbert Space 3. SVM Definition 4. Measuring the Quality of an SVM 5. Representor Theorem Problems Langley DeWitt 6.
More informationA Variational Approach to Reconstructing Images Corrupted by Poisson Noise
J Math Imaging Vis c 27 Springer Science + Business Media, LLC. Manufactured in The Netherlands. DOI: 1.7/s1851-7-652-y A Variational Approach to Reconstructing Images Corrupted by Poisson Noise TRIET
More informationApplied Analysis (APPM 5440): Final exam 1:30pm 4:00pm, Dec. 14, Closed books.
Applied Analysis APPM 44: Final exam 1:3pm 4:pm, Dec. 14, 29. Closed books. Problem 1: 2p Set I = [, 1]. Prove that there is a continuous function u on I such that 1 ux 1 x sin ut 2 dt = cosx, x I. Define
More informationControl of Interface Evolution in Multi-Phase Fluid Flows
Control of Interface Evolution in Multi-Phase Fluid Flows Markus Klein Department of Mathematics University of Tübingen Workshop on Numerical Methods for Optimal Control and Inverse Problems Garching,
More informationMeasure and Integration: Solutions of CW2
Measure and Integration: s of CW2 Fall 206 [G. Holzegel] December 9, 206 Problem of Sheet 5 a) Left (f n ) and (g n ) be sequences of integrable functions with f n (x) f (x) and g n (x) g (x) for almost
More informationVariational image restoration by means of wavelets: simultaneous decomposition, deblurring and denoising
Variational image restoration by means of wavelets: simultaneous decomposition, deblurring and denoising I. Daubechies and G. Teschke December 2, 2004 Abstract Inspired by papers of Vese Osher [20] and
More informationIntroduction to Computer Vision. 2D Linear Systems
Introduction to Computer Vision D Linear Systems Review: Linear Systems We define a system as a unit that converts an input function into an output function Independent variable System operator or Transfer
More informationGeneralization theory
Generalization theory Daniel Hsu Columbia TRIPODS Bootcamp 1 Motivation 2 Support vector machines X = R d, Y = { 1, +1}. Return solution ŵ R d to following optimization problem: λ min w R d 2 w 2 2 + 1
More informationTHE STOKES SYSTEM R.E. SHOWALTER
THE STOKES SYSTEM R.E. SHOWALTER Contents 1. Stokes System 1 Stokes System 2 2. The Weak Solution of the Stokes System 3 3. The Strong Solution 4 4. The Normal Trace 6 5. The Mixed Problem 7 6. The Navier-Stokes
More informationWeek 6 Notes, Math 865, Tanveer
Week 6 Notes, Math 865, Tanveer. Energy Methods for Euler and Navier-Stokes Equation We will consider this week basic energy estimates. These are estimates on the L 2 spatial norms of the solution u(x,
More informationStrictly positive definite functions on a real inner product space
Advances in Computational Mathematics 20: 263 271, 2004. 2004 Kluwer Academic Publishers. Printed in the Netherlands. Strictly positive definite functions on a real inner product space Allan Pinkus Department
More informationSemi-Supervised Learning in Reproducing Kernel Hilbert Spaces Using Local Invariances
Semi-Supervised Learning in Reproducing Kernel Hilbert Spaces Using Local Invariances Wee Sun Lee,2, Xinhua Zhang,2, and Yee Whye Teh Department of Computer Science, National University of Singapore. 2
More information