Regularization Theory
1 Regularization Theory: Solving the Inverse Problem of Super-Resolution with CNNs. Aditya Ganeshan, under the guidance of Dr. Ankik Kumar Giri. December 13, 2016
2 Table of Contents: 1. Introduction (Material Coverage; Introduction to Inverse Problems). 2. Regularization Theory (Moore-Penrose Generalized Inverse; Regularization Operator; Order Optimality; Continuous Regularization Methods). 3. Image Super-Resolution (Introduction; Image Super-Resolution; Training the CNN).
3 Material origin: (i) Regularization of Inverse Problems, a book by Dr. Heinz Werner Engl, Dr. Martin Hanke-Bourgeois and Dr. Andreas Neubauer. (ii) Image Super-Resolution Using Deep Convolutional Networks, a research paper by Chao Dong, Chen Change Loy, Kaiming He and Xiaoou Tang in Computer Vision -- ECCV 2014.
4 What are Inverse Problems?
5 What are Inverse Problems? Hadamard's conditions for well-posedness: (i) For all admissible data, a solution exists. (ii) For all admissible data, the solution is unique. (iii) The solution depends continuously on the data.
8 What are Inverse Problems? Ill-posed problems: problems that violate at least one of Hadamard's conditions are called ill-posed. Inverse problems are mostly ill-posed.
9 What are Inverse Problems? Inverse problems are concerned with determining causes for a desired or observed effect. Compared with Hadamard's conditions: (i) they might not have a solution in the strict sense; (ii) they might not have a unique solution; (iii) the solution might not depend continuously on the data.
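The failure of the third Hadamard condition can be seen in a tiny numerical sketch (an assumption-laden illustration, not from the slides): a forward operator with a very small singular value amplifies tiny data noise enormously upon naive inversion.

```python
import numpy as np

# Forward operator with a rapidly decaying singular value: inverting it
# multiplies data errors in the second component by 1/1e-8.
T = np.array([[1.0, 0.0],
              [0.0, 1e-8]])
x_true = np.array([1.0, 1.0])
y = T @ x_true

noise = np.array([0.0, 1e-6])            # tiny perturbation of the data
x_noisy = np.linalg.solve(T, y + noise)  # naive inversion of noisy data

# A data error of size 1e-6 produces a solution error of size ~100:
# the solution does not depend continuously on the data.
```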
12 Table of Contents: 1. Introduction (Material Coverage; Introduction to Inverse Problems). 2. Regularization Theory (Moore-Penrose Generalized Inverse; Regularization Operator; Order Optimality; Continuous Regularization Methods). 3. Image Super-Resolution (Introduction; Image Super-Resolution; Training the CNN).
13 Generalized Inverse. Definition (2.1): Let $T : X \to Y$ be a bounded linear operator. 1. $x \in X$ is called a least-squares solution of $Tx = y$ if $\|Tx - y\| = \inf\{\|Tz - y\| : z \in X\}$. (1)
14 Generalized Inverse. Definition (2.1, continued): 2. $x \in X$ is called the best-approximate solution of $Tx = y$ if $x$ is a least-squares solution of $Tx = y$ and $\|x\| = \inf\{\|z\| : z \text{ is a least-squares solution of } Tx = y\}$. (2)
15 Generalized Inverse. Definition (2.2): The Moore-Penrose generalized inverse $T^\dagger$ of $T \in \mathcal{L}(X, Y)$ is defined as the unique linear extension of $\tilde{T}^{-1}$ to $D(T^\dagger) = \mathcal{R}(T) + \mathcal{R}(T)^\perp$ (3) with $\mathcal{N}(T^\dagger) = \mathcal{R}(T)^\perp$, (4) where $\tilde{T} := T|_{\mathcal{N}(T)^\perp} : \mathcal{N}(T)^\perp \to \mathcal{R}(T)$. (5)
16 Generalized Inverse. Theorem: Let $y \in D(T^\dagger)$. Then $Tx = y$ has a unique best-approximate solution, given by $x^\dagger := T^\dagger y$. (6) The set of all least-squares solutions is $x^\dagger + \mathcal{N}(T)$.
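For matrices, numpy's `pinv` computes the Moore-Penrose generalized inverse, so $T^\dagger y$ and the structure $x^\dagger + \mathcal{N}(T)$ can be checked directly. A minimal sketch (the rank-deficient matrix is a made-up example):

```python
import numpy as np

# Rank-deficient T: Tx = y has infinitely many least-squares solutions,
# forming the affine set x_dagger + N(T) with N(T) = span{(1, -1)}.
T = np.array([[1.0, 1.0],
              [1.0, 1.0]])
y = np.array([2.0, 2.0])

x_dagger = np.linalg.pinv(T) @ y   # best-approximate solution T†y = (1, 1)

# (2, 0) is also a least-squares solution (it fits y exactly),
# but x_dagger has the smaller norm among all of them.
x_other = np.array([2.0, 0.0])
```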
17 Regularization. Regularization is the approximation of an ill-posed problem by a family of neighboring well-posed problems. We want to find the best-approximate solution $x^\dagger = T^\dagger y$, but only noisy data $y^\delta$ is known, with $\|y^\delta - y\| \le \delta$.
18 Regularization. For ill-posed problems, $T^\dagger$ is unbounded, so $T^\dagger y^\delta$ might not even exist. Hence we look for an approximation $x_\alpha^\delta$ which (i) depends continuously on the noisy data $y^\delta$, and (ii) tends to $x^\dagger$ as the noise level decreases to zero (if the regularization parameter $\alpha$ is selected appropriately).
19 Regularization. As we look not for specific values of $y$ but for every $y \in \mathcal{R}(T)$, we regularize the solution operator $T^\dagger$. A simple regularization of $T^\dagger$ is the replacement of the unbounded operator $T^\dagger$ by a parameter-dependent family $\{R_\alpha\}$ of continuous operators, taking $x_\alpha^\delta = R_\alpha y^\delta$. This way we define the regularization operator for the whole collection of equations $Tx = y$, $y \in D(T^\dagger)$.
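The classical instance of such a family $\{R_\alpha\}$ is Tikhonov regularization, $R_\alpha = (T^*T + \alpha I)^{-1} T^*$. A minimal numpy sketch with operators as matrices (the test matrix and noise level are made-up illustration values):

```python
import numpy as np

def tikhonov(T, y_delta, alpha):
    """R_alpha y^delta = (T^T T + alpha I)^{-1} T^T y^delta."""
    n = T.shape[1]
    return np.linalg.solve(T.T @ T + alpha * np.eye(n), T.T @ y_delta)

T = np.array([[1.0, 0.0],
              [0.0, 1e-4]])            # ill-conditioned forward operator
x_true = np.array([1.0, 1.0])
y_delta = T @ x_true + np.array([0.0, 1e-4])   # noisy data

# For each alpha > 0 the map y^delta -> x_alpha^delta is continuous;
# alpha trades fidelity to the data against stability.
x_alpha = tikhonov(T, y_delta, alpha=1e-8)
x_naive = np.linalg.solve(T, y_delta)  # unregularized: noise is amplified
```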
20 Regularization. Definition (3.1): Let $T : X \to Y$ be a bounded linear operator between Hilbert spaces $X$ and $Y$, and let $\alpha_0 \in (0, +\infty)$. For every $\alpha \in (0, \alpha_0)$, let $R_\alpha : Y \to X$ be a continuous (not necessarily linear) operator. The family $\{R_\alpha\}$ is called a regularization or a regularization operator for $T^\dagger$ if, for all $y \in D(T^\dagger)$, there exists a parameter choice rule $\alpha = \alpha(\delta, y^\delta)$ such that $\lim_{\delta \to 0} \sup\{\|R_{\alpha(\delta, y^\delta)} y^\delta - T^\dagger y\| : y^\delta \in Y, \|y^\delta - y\| \le \delta\} = 0$ (7) holds.
21 Regularization. Definition (continued): Here, $\alpha : \mathbb{R}^+ \times Y \to (0, \alpha_0)$ (8) is such that $\lim_{\delta \to 0} \sup\{\alpha(\delta, y^\delta) : y^\delta \in Y, \|y^\delta - y\| \le \delta\} = 0$. (9) For a specific $y \in D(T^\dagger)$, a pair $(R_\alpha, \alpha)$ is called a convergent regularization method if (7) and (9) hold.
22 Definition (3.2): Let $\alpha$ be a parameter choice rule according to Definition 3.1. If $\alpha$ does not depend on $y^\delta$, but only on $\delta$, then we call $\alpha$ an a-priori parameter choice rule and write $\alpha = \alpha(\delta)$. Otherwise, we call it an a-posteriori parameter choice rule. (If $\alpha = \alpha(y^\delta)$, $\alpha$ is called an error-free parameter choice rule.)
23 Order Optimality. Of interest is the rate at which $\|x_\alpha - x^\dagger\| \to 0$ as $\alpha \to 0$ (10) or $\|x_{\alpha(\delta, y^\delta)}^\delta - x^\dagger\| \to 0$ as $\delta \to 0$. (11)
24 Order Optimality. Definition (3.3): The worst-case error under the information that $\|y^\delta - y\| \le \delta$ and the a-priori information that $x \in M$ is given by $\Delta(\delta, M, R) = \sup\{\|R y^\delta - x\| : x \in M, y^\delta \in Y, \|Tx - y^\delta\| \le \delta\}$. (12)
25 Order Optimality. Convergence rates can only be obtained on subsets of $D(T^\dagger)$, i.e., under a-priori assumptions on the exact solution. Hence we consider subsets of the form $\{x \in X : x = Bw, \|w\| \le \rho\}$, where $B$ is a linear operator from some Hilbert space into $X$. For the choice $B = (T^*T)^\mu$ for some $\mu > 0$, we denote the resulting set by $X_{\mu,\rho} := \{x \in X : x = (T^*T)^\mu w, \|w\| \le \rho\}$. (13)
26 Order Optimality. We further use the notation $X_\mu := \bigcup_{\rho > 0} X_{\mu,\rho} = \mathcal{R}((T^*T)^\mu)$. (14) These sets are usually called source sets; an $x \in X_{\mu,\rho}$ is said to have a source representation. This requirement can be viewed as a smoothness condition.
27 Order Optimality. Definition (3.4): Let $\mathcal{R}(T)$ be non-closed and let $\{R_\alpha\}$ be a regularization operator for $T^\dagger$. For $\mu, \rho > 0$ and $y \in T X_{\mu,\rho}$, let $\alpha$ be a parameter choice rule. We call $(R_\alpha, \alpha)$ optimal in $X_{\mu,\rho}$ if $\Delta(\delta, X_{\mu,\rho}, R_\alpha) = \delta^{\frac{2\mu}{2\mu+1}} \rho^{\frac{1}{2\mu+1}}$ (15) holds for all $\delta > 0$. We call $(R_\alpha, \alpha)$ of optimal order in $X_{\mu,\rho}$ if there exists a constant $c \ge 1$ such that $\Delta(\delta, X_{\mu,\rho}, R_\alpha) \le c\, \delta^{\frac{2\mu}{2\mu+1}} \rho^{\frac{1}{2\mu+1}}$ (16) holds for all $\delta > 0$.
28 Continuous Regularization Methods. Various parameter choice rules exist which give order-optimal solutions under specific conditions, such as: (i) the a-priori parameter choice rule $\alpha \sim \left(\frac{\delta}{\rho}\right)^{\frac{2}{2\mu+1}}$; (17) (ii) the discrepancy principle $\alpha(\delta, y^\delta) = \sup\{\alpha > 0 : \|T x_\alpha^\delta - y^\delta\| \le \tau\delta\}$, (18) where $\tau > \sup\{|r_\alpha(\lambda)| : \alpha > 0, \lambda \in [0, \|T\|^2]\}$. (19)
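In practice the discrepancy principle is often implemented by walking down a geometric grid of $\alpha$ values until the residual drops below $\tau\delta$. A sketch under stated assumptions (Tikhonov as the regularization method; the grid, $\tau = 1.5$, and test operator are illustrative choices, not from the slides):

```python
import numpy as np

def tikhonov(T, y_delta, alpha):
    n = T.shape[1]
    return np.linalg.solve(T.T @ T + alpha * np.eye(n), T.T @ y_delta)

def discrepancy_alpha(T, y_delta, delta, tau=1.5, alpha0=1.0, q=0.5):
    """First alpha on the grid alpha0 * q^k with ||T x_alpha - y_delta|| <= tau*delta."""
    alpha = alpha0
    while np.linalg.norm(T @ tikhonov(T, y_delta, alpha) - y_delta) > tau * delta:
        alpha *= q
        if alpha < 1e-16:        # safeguard in case the target is unreachable
            break
    return alpha

T = np.diag([1.0, 0.5, 0.1])
x_true = np.ones(3)
delta = 1e-3
y_delta = T @ x_true + delta * np.array([1.0, 0.0, 0.0])  # noise of norm delta

alpha = discrepancy_alpha(T, y_delta, delta)
x_ad = tikhonov(T, y_delta, alpha)   # a-posteriori regularized solution
```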
29 Continuous Regularization Methods. From here on, various types of regularization methods are treated in a unified framework, and the conditions required for the existence of a solution as well as for its (order) optimality are studied. The regularization techniques covered include Tikhonov regularization, Landweber iteration, the ν-methods, etc.
30 Table of Contents: 1. Introduction (Material Coverage; Introduction to Inverse Problems). 2. Regularization Theory (Moore-Penrose Generalized Inverse; Regularization Operator; Order Optimality; Continuous Regularization Methods). 3. Image Super-Resolution (Introduction; Image Super-Resolution; Training the CNN).
31 Image Super-Resolution. Single-image super-resolution aims at recovering a high-resolution image from a single low-resolution image. Since multiple high-resolution solutions exist for any given low-resolution input, this problem is inherently ill-posed due to the non-uniqueness of the solution.
32 Image Super-Resolution. Most state-of-the-art methods adopt an example-based strategy. These methods either exploit internal similarities within the same image, or learn mapping functions from external pairs of low- and high-resolution exemplars.
33 How is this model different? (i) This method trains a convolutional neural network that directly learns an end-to-end mapping between low-resolution and high-resolution images. (ii) It does not explicitly learn dictionaries; these are implicitly achieved through the hidden layers. (iii) In this approach the entire super-resolution pipeline is fully obtained through learning, with little pre-/post-processing.
34 How is this model different? (i) Its structure is deliberately designed with simplicity in mind. (ii) It provides superior accuracy compared with other state-of-the-art example-based methods. (iii) With a moderate number of filters and layers, this method achieves fast speed for practical on-line usage, even on a CPU. (It also does not require solving any optimization problem at test time, which makes it faster still.)
35 Image Super-Resolution. Single-image super-resolution algorithms can be classified into four types: prediction models, edge-based models, image statistical methods, and patch-based methods. The majority of SR algorithms focus on grey-scale or single-channel image super-resolution. For color images, the aforementioned methods first transform the problem to a different color space, such as YCbCr, and SR is applied only to the luminance channel.
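The color handling can be sketched with the ITU-R BT.601 luma formula (an assumption; the slides do not specify which YCbCr variant is used): SR operates on Y only, while Cb/Cr are upscaled by plain interpolation.

```python
import numpy as np

def rgb_to_y(rgb):
    """BT.601 luma channel of an RGB image in [0, 1], shape (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

rgb = np.ones((2, 2, 3))   # all-white test image
y = rgb_to_y(rgb)          # luma of white is 1 everywhere (weights sum to 1)
```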
36 CNN For Super-Resolution. Given a single low-resolution image, we first upscale it to the desired size using bicubic interpolation. Let the interpolated image be Y. The aim is to recover from Y an image F(Y) that is as similar as possible to the ground-truth high-resolution image X.
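The upscaling step can be sketched with scipy's `ndimage.zoom`, whose `order=3` cubic spline interpolation stands in for bicubic interpolation here (an assumption; the image and scale factor are illustrative):

```python
import numpy as np
from scipy.ndimage import zoom

low_res = np.arange(16, dtype=float).reshape(4, 4)  # toy low-res image

# Upscale by a factor of 3 with cubic interpolation, producing the
# interpolated image Y on which the CNN operates.
Y = zoom(low_res, 3, order=3)
```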
37 CNN For Super-Resolution. We learn the mapping F, which conceptually consists of three operations: (i) patch extraction and representation, (ii) non-linear mapping, (iii) reconstruction.
38 Patch Extraction and Representation. We convolve the image with a set of filters. The first layer can be represented as $F_1(Y) = \max\{0, W_1 * Y + B_1\}$, (20) where $W_1$ and $B_1$ represent the filters and biases respectively. $W_1$ corresponds to $n_1$ filters of support $c \times f_1 \times f_1$, where $c$ denotes the number of channels in the input image and $f_1$ the spatial size of the filter. $B_1$ is an $n_1$-dimensional vector, each element of which is associated with one filter.
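Equation (20) can be sketched for a single-channel image ($c = 1$) with `scipy.signal.correlate2d`: one feature map per filter, then a ReLU. The sizes $n_1 = 4$, $f_1 = 3$ and the random weights are illustrative assumptions:

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
n1, f1 = 4, 3                            # 4 filters of spatial size 3x3, c = 1
W1 = rng.standard_normal((n1, f1, f1))   # filter bank
B1 = rng.standard_normal(n1)             # one bias per filter
Y = rng.standard_normal((8, 8))          # stand-in for the interpolated image

# F1(Y) = max(0, W1 * Y + B1): correlate each filter with Y, add its bias,
# stack the n1 feature maps, and clip at zero (ReLU).
F1 = np.maximum(0.0, np.stack(
    [correlate2d(Y, w, mode='same') + b for w, b in zip(W1, B1)]))
```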
39 Non-Linear Mapping. The first layer extracts an $n_1$-dimensional feature vector for each patch. Each of these $n_1$-dimensional vectors is now mapped into an $n_2$-dimensional vector. This is equivalent to applying $n_2$ filters of trivial spatial support $1 \times 1$; when the filter size is $3 \times 3$ etc., the non-linear mapping instead acts on a patch of the feature map. The operation of the second layer is $F_2(Y) = \max\{0, W_2 * F_1(Y) + B_2\}$, (21) where $W_2$ corresponds to $n_2$ filters of support $n_1 \times f_2 \times f_2$, and $B_2$ is an $n_2$-dimensional vector.
40 Reconstruction. Traditionally, the predicted overlapping high-resolution patches are averaged to produce the final full image. Here we consider averaging as a pre-defined filter acting on a set of feature maps. This layer is defined as $F_3(Y) = W_3 * F_2(Y) + B_3$, (22) where $W_3$ corresponds to $c$ filters of support $n_2 \times f_3 \times f_3$, and $B_3$ is a $c$-dimensional vector.
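The three operations (20)-(22) compose into a single forward pass. A minimal sketch for $c = 1$ with random weights; the sizes $n_1 = 4$, $n_2 = 3$, $f_1 = 9$, $f_2 = 1$, $f_3 = 5$ follow the layer shapes described above but are otherwise illustrative assumptions:

```python
import numpy as np
from scipy.signal import correlate2d

def conv_layer(maps, W, B, relu=True):
    """maps: (c_in, H, W); W: (c_out, c_in, f, f); B: (c_out,)."""
    out = np.stack([
        sum(correlate2d(m, w, mode='same') for m, w in zip(maps, Wk)) + bk
        for Wk, bk in zip(W, B)])
    return np.maximum(0.0, out) if relu else out

rng = np.random.default_rng(0)
n1, n2, f1, f2, f3 = 4, 3, 9, 1, 5       # hypothetical layer sizes, c = 1
Y = rng.standard_normal((1, 16, 16))     # bicubic-interpolated input

W1, B1 = 0.1 * rng.standard_normal((n1, 1, f1, f1)), np.zeros(n1)
W2, B2 = 0.1 * rng.standard_normal((n2, n1, f2, f2)), np.zeros(n2)
W3, B3 = 0.1 * rng.standard_normal((1, n2, f3, f3)), np.zeros(1)

# F(Y) = F3(F2(F1(Y))): ReLU after the first two layers, linear output.
F = conv_layer(conv_layer(conv_layer(Y, W1, B1), W2, B2), W3, B3, relu=False)
```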
41 Training The CNN. Learning the end-to-end mapping function F requires estimating the network parameters $\Theta = \{W_1, W_2, W_3, B_1, B_2, B_3\}$. This is done by minimizing the loss between the reconstructed images $F(Y_i; \Theta)$ and the ground-truth images $X_i$: $L(\Theta) = \frac{1}{n} \sum_{i=1}^{n} \|F(Y_i; \Theta) - X_i\|^2$. (23)
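The loss (23) itself is a plain mean of squared Frobenius norms over the training pairs; minimizing it over $\Theta$ would be done by stochastic gradient descent, which is not shown here. A sketch with made-up toy images:

```python
import numpy as np

def srcnn_loss(F_pred, X):
    """L(Theta) = (1/n) * sum_i || F(Y_i; Theta) - X_i ||^2."""
    n = len(X)
    return sum(np.sum((f - x) ** 2) for f, x in zip(F_pred, X)) / n

# Toy example: two 2x2 image pairs; only the second pair differs,
# contributing ||0 - 1||^2 = 4 over its four pixels, so the mean is 2.
X = [np.zeros((2, 2)), np.ones((2, 2))]
F_pred = [np.zeros((2, 2)), np.zeros((2, 2))]
loss = srcnn_loss(F_pred, X)
```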
42 Thank you
Bayesian Paradigm Maximum A Posteriori Estimation Simple acquisition model noise + degradation Constraint minimization or Equivalent formulation Constraint minimization Lagrangian (unconstraint minimization)
More informationMollifying Networks. ICLR,2017 Presenter: Arshdeep Sekhon & Be
Mollifying Networks Caglar Gulcehre 1 Marcin Moczulski 2 Francesco Visin 3 Yoshua Bengio 1 1 University of Montreal, 2 University of Oxford, 3 Politecnico di Milano ICLR,2017 Presenter: Arshdeep Sekhon
More informationON THE DYNAMICAL SYSTEMS METHOD FOR SOLVING NONLINEAR EQUATIONS WITH MONOTONE OPERATORS
PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume, Number, Pages S -9939(XX- ON THE DYNAMICAL SYSTEMS METHOD FOR SOLVING NONLINEAR EQUATIONS WITH MONOTONE OPERATORS N. S. HOANG AND A. G. RAMM (Communicated
More informationImproved Rotational Invariance for Statistical Inverse in Electrical Impedance Tomography
Improved Rotational Invariance for Statistical Inverse in Electrical Impedance Tomography Jani Lahtinen, Tomas Martinsen and Jouko Lampinen Laboratory of Computational Engineering Helsinki University of
More informationMCMC Sampling for Bayesian Inference using L1-type Priors
MÜNSTER MCMC Sampling for Bayesian Inference using L1-type Priors (what I do whenever the ill-posedness of EEG/MEG is just not frustrating enough!) AG Imaging Seminar Felix Lucka 26.06.2012 , MÜNSTER Sampling
More informationInverse scattering problem from an impedance obstacle
Inverse Inverse scattering problem from an impedance obstacle Department of Mathematics, NCKU 5 th Workshop on Boundary Element Methods, Integral Equations and Related Topics in Taiwan NSYSU, October 4,
More informationInterpolation via weighted l 1 minimization
Interpolation via weighted l 1 minimization Rachel Ward University of Texas at Austin December 12, 2014 Joint work with Holger Rauhut (Aachen University) Function interpolation Given a function f : D C
More informationStatistical Machine Learning
Statistical Machine Learning Lecture 9 Numerical optimization and deep learning Niklas Wahlström Division of Systems and Control Department of Information Technology Uppsala University niklas.wahlstrom@it.uu.se
More informationOptimal stopping time formulation of adaptive image filtering
Optimal stopping time formulation of adaptive image filtering I. Capuzzo Dolcetta, R. Ferretti 19.04.2000 Abstract This paper presents an approach to image filtering based on an optimal stopping time problem
More informationAction-Decision Networks for Visual Tracking with Deep Reinforcement Learning
Action-Decision Networks for Visual Tracking with Deep Reinforcement Learning Sangdoo Yun 1 Jongwon Choi 1 Youngjoon Yoo 2 Kimin Yun 3 and Jin Young Choi 1 1 ASRI, Dept. of Electrical and Computer Eng.,
More informationLearning-Based Image Super-Resolution
Limits of Algorithms Learning-Based Image Super-Resolution zhoulin@microsoft.com Microsoft Research Asia Nov. 8, 2008 Limits of Algorithms Outline 1 What is Super-Resolution (SR)? 2 3 Limits of Algorithms
More informationIntroduction to Convolutional Neural Networks 2018 / 02 / 23
Introduction to Convolutional Neural Networks 2018 / 02 / 23 Buzzword: CNN Convolutional neural networks (CNN, ConvNet) is a class of deep, feed-forward (not recurrent) artificial neural networks that
More informationA model function method in total least squares
www.oeaw.ac.at A model function method in total least squares S. Lu, S. Pereverzyev, U. Tautenhahn RICAM-Report 2008-18 www.ricam.oeaw.ac.at A MODEL FUNCTION METHOD IN TOTAL LEAST SQUARES SHUAI LU, SERGEI
More informationAdaptive and multilevel methods for parameter identification in partial differential equations
Adaptive and multilevel methods for parameter identification in partial differential equations Barbara Kaltenbacher, University of Stuttgart joint work with Hend Ben Ameur, Université de Tunis Anke Griesbaum,
More informationOutline: Ensemble Learning. Ensemble Learning. The Wisdom of Crowds. The Wisdom of Crowds - Really? Crowd wiser than any individual
Outline: Ensemble Learning We will describe and investigate algorithms to Ensemble Learning Lecture 10, DD2431 Machine Learning A. Maki, J. Sullivan October 2014 train weak classifiers/regressors and how
More informationREGULARIZATION PARAMETER SELECTION IN DISCRETE ILL POSED PROBLEMS THE USE OF THE U CURVE
Int. J. Appl. Math. Comput. Sci., 007, Vol. 17, No., 157 164 DOI: 10.478/v10006-007-0014-3 REGULARIZATION PARAMETER SELECTION IN DISCRETE ILL POSED PROBLEMS THE USE OF THE U CURVE DOROTA KRAWCZYK-STAŃDO,
More informationReproducing Kernel Hilbert Spaces
Reproducing Kernel Hilbert Spaces Lorenzo Rosasco 9.520 Class 03 February 11, 2009 About this class Goal To introduce a particularly useful family of hypothesis spaces called Reproducing Kernel Hilbert
More informationReview: Continuous Fourier Transform
Review: Continuous Fourier Transform Review: convolution x t h t = x τ h(t τ)dτ Convolution in time domain Derivation Convolution Property Interchange the order of integrals Let Convolution Property By
More informationA DISCREPANCY PRINCIPLE FOR PARAMETER SELECTION IN LOCAL REGULARIZATION OF LINEAR VOLTERRA INVERSE PROBLEMS. Cara Dylyn Brooks A DISSERTATION
A DISCREPANCY PRINCIPLE FOR PARAMETER SELECTION IN LOCAL REGULARIZATION OF LINEAR VOLTERRA INVERSE PROBLEMS By Cara Dylyn Brooks A DISSERTATION Submitted to Michigan State University in partial fulfillment
More informationDeep Bayesian Inversion Computational uncertainty quantification for large scale inverse problems
Deep Bayesian Inversion Computational uncertainty quantification for large scale inverse problems arxiv:1811.05910v1 stat.ml] 14 Nov 018 Jonas Adler Department of Mathematics KTH - Royal institute of Technology
More informationSimultaneous Multi-frame MAP Super-Resolution Video Enhancement using Spatio-temporal Priors
Simultaneous Multi-frame MAP Super-Resolution Video Enhancement using Spatio-temporal Priors Sean Borman and Robert L. Stevenson Department of Electrical Engineering, University of Notre Dame Notre Dame,
More informationMASTER. The numerical inversion of the Laplace transform. Egonmwan, A.O. Award date: Link to publication
MASTER The numerical inversion of the Laplace transform Egonmwan, A.O. Award date: 212 Link to publication Disclaimer This document contains a student thesis (bachelor's or master's), as authored by a
More informationarxiv: v3 [math.na] 11 Oct 2017
Indirect Image Registration with Large Diffeomorphic Deformations Chong Chen and Ozan Öktem arxiv:1706.04048v3 [math.na] 11 Oct 2017 Abstract. The paper adapts the large deformation diffeomorphic metric
More information