Resolving the White Noise Paradox in the Regularisation of Inverse Problems


Slide 1/32: Resolving the White Noise Paradox in the Regularisation of Inverse Problems

Hanne Kekkonen, joint work with Matti Lassas and Samuli Siltanen
Department of Mathematics and Statistics, University of Helsinki, Finland
January 14, 2016


Slide 3/32: Outline

- Tikhonov approach
- Bayesian approach


Slide 5/32: Our starting point is an indirect measurement problem

We consider the continuous linear measurement model

$$m = Au + \delta\varepsilon, \qquad \delta > 0, \tag{1}$$

where

- $u \in H^r(N)$ is the unknown physical quantity of interest and $m \in H^{-s}(N)$ is the measurement, with $N$ a closed $d$-dimensional manifold,
- $\varepsilon$ is Gaussian white noise,
- $A$ is a finitely many times smoothing linear operator that models the measurement.

Slide 6/32: Tikhonov regularisation is a classical way to gain a noise-robust solution

If $\delta\varepsilon = m - Au \in L^2(N)$, we can use Tikhonov regularisation to get a reconstruction of $u$ from the measurement $m$:

$$u_\delta = \operatorname*{argmin}_{u \in H^r} \Big\{ \|Au - m\|_{L^2}^2 + \delta^2 \|(I - \Delta)^{r/2} u\|_{L^2}^2 \Big\}. \tag{2}$$

However, (2) is not well defined if we assume $\varepsilon$ to be white noise, since then $\varepsilon \notin L^2$.

Slide 7/32: White noise does not belong to $L^2$

Formally

$$\varepsilon = \sum_{j} \langle \varepsilon, \psi_j \rangle \psi_j,$$

where the $\psi_j$ form an orthonormal basis of $L^2$. The Fourier coefficients of white noise satisfy $\langle \varepsilon, e_k \rangle \sim N(0, 1)$, where $e_k(t) = e^{ikt}$. Hence

$$\|\varepsilon\|_{L^2}^2 = \sum_{k=-\infty}^{\infty} \langle \varepsilon, e_k \rangle^2 < \infty$$

with probability zero. For white noise we have

(i) $\varepsilon \in L^2$ with probability zero,
(ii) $\varepsilon \in H^{-s}$, $s > d/2$, with probability one.

Slide 8/32: The white noise paradox

If we work in a discrete world, $\|\varepsilon_k\|_{\ell^2} < \infty$ for all $k \in \mathbb{N}$. Hence the minimisation problem

$$u_n^\delta = \operatorname*{argmin}_{u_n} \Big\{ \|Au_n - m_k\|_{\ell^2}^2 + \delta^2 \|(I - \Delta_D)^{r/2} u_n\|_{\ell^2}^2 \Big\}$$

is well defined. However, we know that

$$\lim_{k \to \infty} \|\varepsilon_k\|_{\ell^2} = \infty.$$

The goal is to build a rigorous theory removing the apparent paradox arising from the infinite $L^2$-norm of the natural limit of white Gaussian noise in $\mathbb{R}^k$ as $k \to \infty$.
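The paradox can be watched happening numerically. A minimal sketch (the truncation levels and the value $s = 0.51$ are my own choices, with $d = 1$): the Fourier coefficients $\langle \varepsilon, e_j \rangle$ are drawn as independent $N(0,1)$ variables; the truncated $\ell^2$ norm grows without bound while the $H^{-s}$ norm stabilises.

```python
import numpy as np

rng = np.random.default_rng(0)
s = 0.51  # any s > d/2 works; here d = 1

for k in (10**2, 10**4, 10**6):
    j = np.arange(-k, k + 1)                 # frequencies |j| <= k
    c = rng.standard_normal(j.size)          # <eps, e_j> ~ N(0, 1)
    l2 = np.sum(c**2)                        # truncated L^2 norm^2, grows like 2k+1
    hs = np.sum((1 + j**2) ** (-s) * c**2)   # truncated H^{-s} norm^2, stays bounded
    print(f"k = {k:>7}:  ||eps_k||_l2^2 = {l2:.3e},  ||eps_k||_H^-s^2 = {hs:.3f}")
```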

Slide 9/32: Why are we interested in continuous white noise?

- It is important to be able to connect discrete models to their infinite-dimensional limit models: in practice we do not solve the continuous problem but its discretisation.
- Discrete white noise is used as the noise model in many practical inverse problems.
- If the discrete model is an orthogonal projection of the continuous model onto a finite-dimensional subspace, we can switch consistently between different discretisations, which is important e.g. for multigrid methods.

Slide 10/32: We can define a new regularised solution by omitting the constant term $\|m\|_{L^2}^2$

When $m - Au \in L^2$ we can write

$$\|m - Au\|_{L^2}^2 = \|Au\|_{L^2}^2 - 2\langle m, Au \rangle_{L^2} + \|m\|_{L^2}^2.$$

Omitting the constant term $\|m\|_{L^2}^2$ in (2), we get a new, well-defined minimisation problem

$$u_\delta = \operatorname*{argmin}_{u \in H^r} \Big\{ \|Au\|_{L^2}^2 - 2\langle m, Au \rangle + \delta^2 \|u\|_{H^r}^2 \Big\}.$$

The solution of the problem above is

$$u_\delta = \big( A^*A + \delta^2 (I - \Delta)^r \big)^{-1} A^* m,$$

where $A^*$ is a pseudodifferential operator.
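Since the solution formula is explicit, it can be evaluated directly on the torus, where a Fourier-multiplier $A$ makes $A^*A + \delta^2 (I - \Delta)^r$ diagonal in the basis $e_n$. A minimal sketch under my own assumptions (periodic domain, symbol $a(n) = (1 + n^2)^{-t/2}$ for $A$, so that $A^* = A$; the function name is mine):

```python
import numpy as np

def regularised_solution(m, delta, r=1.0, t=2.0):
    """Evaluate u_d = (A*A + d^2 (I - Lap)^r)^{-1} A* m on T^1 via the FFT.

    Assumes A is the Fourier multiplier with symbol a(n) = (1 + n^2)^(-t/2),
    so A* = A and the normal equations are diagonal in the Fourier basis.
    """
    N = m.size
    n = np.fft.fftfreq(N, d=1.0 / N)       # integer frequencies
    a = (1.0 + n**2) ** (-t / 2.0)         # symbol of A
    lap_r = (1.0 + n**2) ** r              # symbol of (I - Lap)^r
    u_hat = a * np.fft.fft(m) / (a**2 + delta**2 * lap_r)
    return np.fft.ifft(u_hat).real
```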

Slide 11/32: Does omitting $\|m\|_{L^2}^2 = \infty$ cause any trouble?

Example. We consider the deconvolution problem

$$m = Au + \delta\varepsilon = \int \Phi(\cdot - y)\, u(y)\, dy + \delta\varepsilon,$$

where $u \in H^1$ is a piecewise linear function, $\varepsilon$ is white noise, and

$$Au = \mathcal{F}^{-1}\big( (1 + n^2)^{-1} (\mathcal{F}u)(n) \big).$$

We have $u \in H^1$ and $u_\delta \in H^1$ for all $\delta > 0$, so does $u_\delta \to u$ in $H^1$ when $\delta \to 0$?

[Figures: the unknown function $u$ and the noiseless data $m = Au$.]

Slide 12/32: The solution $u_\delta$ does not converge to $u$ in $H^1$

We are interested in knowing what happens to the regularised solution $u_\delta$ in different Sobolev spaces when $\delta \to 0$.

[Figure: normalised errors $c(\zeta)\, \|u - u_\delta\|_{H^\zeta(\mathbb{T}^2)}$ in logarithmic scale for different values of $\zeta$.]

We observe that $\lim_{\delta \to 0} \|u - u_\delta\|_{H^1(\mathbb{T}^2)} \neq 0$.
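A sketch reproducing the flavour of this experiment (the talk's example lives on $\mathbb{T}^2$; here everything is one-dimensional for brevity, and the hat function, grid size and values of $\zeta$ are my own choices). As $\delta \to 0$ the $H^1$ error refuses to decay, while the error in the weaker norm $H^{1/4}$, with $1/4 < r - d/2 = 1/2$, keeps decreasing:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2**12
x = np.arange(N) / N
n = np.fft.fftfreq(N, d=1.0 / N)

u = np.clip(1.0 - 8.0 * np.abs(x - 0.5), 0.0, None)   # piecewise linear, u in H^1
a = (1.0 + n**2) ** (-1.0)                            # symbol of A = F^{-1}(1+n^2)^{-1}F
u_hat = np.fft.fft(u) / N
m0_hat = a * u_hat                                    # noiseless data on the Fourier side

for delta in (1e-1, 1e-2, 1e-3, 1e-4):
    eps_hat = rng.standard_normal(N)                  # <eps, e_n> ~ N(0, 1)
    m_hat = m0_hat + delta * eps_hat
    ud_hat = a * m_hat / (a**2 + delta**2 * (1.0 + n**2))   # r = 1 penalty
    for zeta in (1.0, 0.25):
        e = np.sqrt(np.sum((1.0 + n**2) ** zeta * np.abs(ud_hat - u_hat) ** 2))
        print(f"delta = {delta:.0e}, zeta = {zeta}: H^zeta error ~ {e:.3f}")
```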

Slide 13/32: Theorem 1

Let $N$ be a $d$-dimensional closed manifold and $u \in H^r(N)$ with $r \geq 0$. Let $\varepsilon \in H^{-s}(N)$ with some $s > d/2$ and consider the measurement

$$m_\delta = Au + \delta\varepsilon,$$

where $A : H^r \to H^{r+t}$ is an elliptic pseudodifferential operator of order $-t < \min\{0, r - s\}$. Assume that $A : L^2(N) \to L^2(N)$ is injective. Then for the regularised solution $u_\delta \in H^r$ we have

$$\lim_{\delta \to 0} u_\delta = u \quad \text{in } H^\zeta(N), \qquad \text{where } \zeta \leq r - s < r - d/2.$$

Slide 14/32: Why do we not get convergence in the original space $H^r$?

We know that both $u$ and $u_\delta$ belong to $H^r$, but we can only prove the convergence $u_\delta \to u$ in $H^\zeta$, $\zeta < r - d/2$. If we study the problem only from the regularisation point of view, it is hard to see why we get the upper limit $\zeta < r - d/2$ for the smoothness of the space of convergence.

Slide 15/32: Outline

- Tikhonov approach
- Bayesian approach

Slide 16/32: The indirect measurement problem revisited

Let us return to the continuous linear measurement model

$$M = AU + \delta E, \qquad \delta > 0. \tag{3}$$

This time $M$, $U$ and $E$ are treated as random variables. The unknown $U$ takes values in $H^\tau(N)$ with some $\tau \in \mathbb{R}$. We assume $E$ to be Gaussian white noise taking values in $H^{-s}(N)$, $s > d/2$. The unknown is treated as a random variable since we have only incomplete data on $U$.

Slide 17/32: The prior distribution contains all other information about the unknown $U$

- The prior distribution shouldn't depend on the measurement.
- The prior distribution should assign a clearly higher probability to those values of $u$ that we expect to see than to unexpected $u$.
- Designing a good prior distribution is one of the main difficulties in Bayesian inversion.

Slide 18/32: Bayes formula combines data and a priori information

We want to reconstruct the most probable $u$ in light of

- measurement information,
- a priori information.

In the discrete case, Bayes formula gives us the posterior distribution $\pi(u_n \mid m_k)$:

$$\pi(u_n \mid m_k) = \frac{\pi_{\mathrm{pr}}(u_n)\, \pi_\varepsilon(m_k \mid u_n)}{\pi(m_k)} = C \exp\Big( -\frac{1}{2\delta^2} \|m_k - Au_n\|_{\ell^2}^2 - \frac{1}{2} \|(I - \Delta_D)^{r/2} u_n\|_{\ell^2}^2 \Big). \tag{4}$$

We recover $u_\delta$ as a point estimate from $\pi(u_n \mid m_k)$.

Slide 19/32: The result of Bayesian inversion is the posterior distribution, but typically one looks at point estimates

- Maximum a posteriori (MAP) estimate: $\displaystyle \operatorname*{arg\,max}_{u \in \mathbb{R}^n} \pi(u \mid m)$
- Conditional mean (CM) estimate: $\displaystyle \int_{\mathbb{R}^n} u\, \pi(u \mid m)\, du$
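Since the posterior (4) is Gaussian, the MAP and CM estimates coincide; for non-Gaussian posteriors they generally differ. A toy sketch checking this by sampling (the blurring matrix, prior and noise level are my own choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, delta = 40, 0.1
i = np.arange(n)
A = np.exp(-((i[:, None] - i[None, :]) / 3.0) ** 2)   # toy blurring matrix
L = np.eye(n)                                         # stand-in for (I - Lap_D)^r

u_true = np.sin(2 * np.pi * i / n)
m = A @ u_true + delta * rng.standard_normal(n)

# The Gaussian posterior has closed-form covariance and mean (= mode).
cov = np.linalg.inv(A.T @ A / delta**2 + L)
cov = (cov + cov.T) / 2.0                             # symmetrise against rounding
u_map = cov @ A.T @ m / delta**2

# Monte Carlo check that the conditional mean agrees with the MAP estimate.
samples = rng.multivariate_normal(u_map, cov, size=50_000)
print("max |CM - MAP| ~", np.max(np.abs(samples.mean(axis=0) - u_map)))
```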

Slide 20/32: There is no general rule for deciding which estimate is better.

Slide 21/32: We don't have a Bayes formula for the continuous problem

If we assume that the noise takes values in $L^2$, the MAP estimate of (4) $\Gamma$-converges to the following infinite-dimensional minimisation problem:

$$\operatorname*{argmin}_{u \in H^r} \Big\{ \frac{1}{2\delta^2} \|m - Au\|_{L^2}^2 + \frac{1}{2} \|u\|_{H^r}^2 \Big\}.$$

Now if we think of the above as a MAP estimate of a Bayesian problem, we have to assume that $U$ formally has the distribution

$$\pi_{\mathrm{pr}}(u) \overset{\text{formally}}{=} c \exp\Big( -\frac{1}{2} \|u\|_{H^r}^2 \Big).$$

Slide 22/32: A simple example in $\mathbb{T}^1$

Next we assume that $U, E \sim N(0, I)$. We know that the realisations satisfy $u, \varepsilon \in H^{-s}$, $s > 1/2$, a.s. The unknown $U$ has the formal distribution

$$\pi_{\mathrm{pr}}(u) \overset{\text{formally}}{=} c \exp\Big( -\frac{1}{2} \|u\|_{L^2}^2 \Big).$$

Solving the CM/MAP estimate is linked to solving the minimisation problem

$$u_\delta = \operatorname*{argmin}_{u \in L^2} \Big\{ \|Au\|_{L^2}^2 - 2\langle m, Au \rangle + \delta^2 \|u\|_{L^2}^2 \Big\}.$$

That is, we are looking for an approximation in $L^2$ even though the realisations of $U$ are in $L^2$ with probability zero.

Slide 23/32: The prior $U$ takes values in $H^\tau$, where $\tau < r - d/2$

In which Sobolev space $H^\tau$ does $U$ take values? Assume that $U \sim N(0, C_U)$, where $C_U$ is a $2r$ times smoothing covariance operator. Then

$$U(\omega) \in H^\tau \quad \Longleftrightarrow \quad C_U (I - \Delta)^\tau \text{ is a trace class operator in } H^\tau.$$

The above is true if $\tau < r - d/2$. Now the CM/MAP estimate for $U$ is given by

$$U_\delta = \big( A^*A + \delta^2 C_U^{-1} \big)^{-1} A^* M.$$

Note that the approximation $U_\delta(\omega) \in H^r$ with probability one, but the prior $U$ takes values in $H^\tau$, where $\tau < r - d/2$.
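The threshold $\tau < r - d/2$ is visible in the coefficients of a single draw: with $C_U = (I - \Delta)^{-r}$ on $\mathbb{T}^1$ the coefficients are $U_j \sim N(0, (1 + j^2)^{-r})$, and $\mathbb{E} \sum_j (1 + j^2)^\tau U_j^2 = \sum_j (1 + j^2)^{\tau - r}$ is finite exactly when $\tau < r - d/2$. A sketch with $d = 1$, $r = 1$ (truncation levels my own): the partial sums saturate for $\tau = 0.4$ and keep growing for $\tau \geq 1/2$.

```python
import numpy as np

rng = np.random.default_rng(3)
r = 1.0
j = np.arange(1, 10**6 + 1, dtype=float)

# Draw from N(0, C_U) with C_U = (I - Lap)^{-r}: U_j ~ N(0, (1 + j^2)^{-r}).
U = rng.standard_normal(j.size) * (1.0 + j**2) ** (-r / 2.0)

for K in (10**4, 10**6):                 # compare two truncation levels
    for tau in (0.4, 0.5, 0.6):          # threshold: tau < r - d/2 = 1/2
        partial = np.sum((1.0 + j[:K]**2) ** tau * U[:K]**2)
        print(f"K = {K:>7}, tau = {tau}: partial H^tau norm^2 ~ {partial:8.2f}")
```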

Slide 24/32: What does this mean for our example?

For the Gaussian smoothness prior $r = 1$, but in the two-dimensional case we get $\tau < 0$: the realisations of the prior do not belong to $L^2$.

Slide 25/32: A Bayesian explanation for why we do not get convergence in $H^r$

Assume that we are looking for an approximation $U_\delta(\omega) \in H^r$. To get the wanted penalty term we have to assume a prior $U$ whose realisations satisfy $U(\omega) \in H^\tau$, $\tau < r - d/2$, a.s. Now it is natural that we only get the convergence $U_\delta(\omega) \to U(\omega)$ a.s. in $H^\zeta$ with $\zeta < r - d/2$, since the realisations of the prior are in $H^r$ only with probability zero.

Slide 26/32: Theorem 2 (assumptions)

- $N$ is a $d$-dimensional closed manifold.
- The operator $C_U$ is $2r$ times smoothing, self-adjoint, injective and elliptic.
- The unknown $U$ is a generalised random variable taking values in $H^{-\tau}(N)$, $\tau > d/2 - r$, with mean zero and covariance operator $C_U$.
- $E$ is white Gaussian noise taking values in $H^{-s}(N)$, $s > d/2$.
- Consider the measurement $M_\delta = AU + \delta E$, where $A \in \Psi^{-t}$ is an elliptic pseudodifferential operator of order $-t < \min\{0, -\tau - s\}$.
- Assume that $A : L^2(N) \to L^2(N)$ is injective.

Slide 27/32: Theorem 2 (convergence results)

Take $\zeta \leq r - s < r - d/2$. Then the following convergence takes place in the $H^\zeta(N)$ norm:

$$\lim_{\delta \to 0} \mathbb{E}\, \|U_\delta(\omega) - U(\omega)\|_{H^\zeta(N)} = 0.$$

Above, $U_\delta(\omega) \in H^r$ a.s. We have the following estimates for the speed of convergence:

(i) If $\zeta \leq -s - t$, then $\mathbb{E}\, \|U_\delta(\omega) - U(\omega)\|_{H^\zeta} \leq C\delta$.
(ii) If $-s - t \leq \zeta \leq r - s$, then $\mathbb{E}\, \|U_\delta(\omega) - U(\omega)\|_{H^\zeta} \leq C \delta^{\frac{r - s - \zeta}{t + r}}$.
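For a Fourier-multiplier $A$ the expected squared error $\mathbb{E}\|U_\delta(\omega) - U(\omega)\|_{H^\zeta}^2$ can be summed exactly mode by mode, so the rate can be checked against the slope of the log-log error curve. A sketch with my own parameter choices $d = 1$, $r = 1$, $t = 2$, $\zeta = -1$; as $s \to d/2$ the predicted exponent $(r - s - \zeta)/(t + r)$ tends to $1/2$:

```python
import numpy as np

r, t, zeta = 1.0, 2.0, -1.0
w = 1.0 + np.arange(1, 10**5 + 1, dtype=float) ** 2   # (1 + j^2)
a = w ** (-t / 2.0)                                   # symbol of A
cu = w ** (-r)                                        # eigenvalues of C_U

def expected_sq_error(delta):
    g = a / (a**2 + delta**2 / cu)       # symbol of (A*A + d^2 C_U^{-1})^{-1} A*
    bias2 = (g * a - 1.0) ** 2 * cu      # contribution of (gA - I)U
    var = delta**2 * g**2                # contribution of delta * g * E
    return np.sum(w**zeta * (bias2 + var))

deltas = np.array([1e-2, 1e-3, 1e-4])
errs = np.sqrt([expected_sq_error(dl) for dl in deltas])
slope = np.polyfit(np.log(deltas), np.log(errs), 1)[0]
print(f"observed rate ~ {slope:.2f}, prediction -> 1/2 as s -> d/2")
```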

Slide 28/32: Back to the minimisation problem

Both the regularised solution in Tikhonov regularisation and the MAP estimate in the Bayesian setting are given by

$$u_\delta = \operatorname*{argmin}_{u \in H^r} \Big\{ \|Au\|_{L^2}^2 - 2\langle m, Au \rangle + \alpha \|C_U^{-1/2} u\|_{L^2}^2 \Big\},$$

where $\alpha = \delta^\kappa$, $\kappa > 0$. We get the classical Tikhonov problem by choosing $C_U = (I - \Delta)^{-r}$. In the Bayesian case we have to choose $\kappa = 2$.

Slide 29/32: Convergence results in the Tikhonov case

Take $\zeta < \min\big\{ r, \; \tfrac{2}{\kappa} r + \big( \tfrac{2}{\kappa} - 1 \big) t - s \big\}$. Then we get the following convergence, with the speed of convergence

$$\lim_{\delta \to 0} \|u_\delta - u\|_{H^\zeta} = 0, \qquad \|u_\delta - u\|_{H^\zeta} \leq C \max\Big\{ \delta^{\frac{\kappa(r - \zeta)}{2(t + r)}}, \; \delta^{\,1 - \frac{\kappa(s + t + \zeta)}{2(t + r)}} \Big\}.$$

Note that above $u, u_\delta \in H^r(N)$ with $r \geq 0$.

Slide 30/32: What can we say about the convergence depending on $\kappa$?

The regularised solution

$$u_\delta = \operatorname*{argmin}_{u \in H^r} \Big\{ \|Au\|_{L^2}^2 - 2\langle m, Au \rangle + \delta^\kappa \|u\|_{H^r}^2 \Big\}$$

can be written in the form

$$u_\delta = K^{-1} A^* A u + K^{-1} A^* \delta\varepsilon = u_{\text{noiseless}} + u_{\text{noise}},$$

where $K = A^*A + \delta^\kappa (I - \Delta)^r$.

- The noise term dominates if we choose $\kappa \geq 2$.
- We get convergence in $H^r$ when $\kappa \leq 1$.
- Note that in the classical theory we get convergence when $\kappa < 2$.
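The two parts can be monitored separately in the Fourier picture. A sketch (one-dimensional, with my own symbol choices $t = 2$, $r = 1$): for small $\kappa$ the $H^r$ norm of the noise part dies out with $\delta$, while for larger $\kappa$ it takes over.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 2**14
n = np.fft.fftfreq(N, d=1.0 / N)
a = (1.0 + n**2) ** (-1.0)          # symbol of A (2-times smoothing), my choice
lap_r = 1.0 + n**2                  # symbol of (I - Lap)^r with r = 1

x = np.arange(N) / N
u_hat = np.fft.fft(np.clip(1.0 - 8.0 * np.abs(x - 0.5), 0.0, None)) / N
eps_hat = rng.standard_normal(N)    # <eps, e_n> ~ N(0, 1)

def h1_norm(c):                     # H^1 norm from Fourier coefficients
    return np.sqrt(np.sum(lap_r * np.abs(c) ** 2))

for kappa in (0.5, 1.0, 2.0, 3.0):
    for delta in (1e-2, 1e-4):
        g = a / (a**2 + delta**kappa * lap_r)           # symbol of K^{-1} A*
        err_noiseless = h1_norm(g * a * u_hat - u_hat)  # ||u_noiseless - u||_{H^1}
        nrm_noise = h1_norm(delta * g * eps_hat)        # ||u_noise||_{H^1}
        print(f"kappa = {kappa}, delta = {delta:.0e}: "
              f"noiseless error {err_noiseless:.2e}, noise part {nrm_noise:.2e}")
```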

Slide 31/32: From Bayesian to Tikhonov and (almost) back

- A prior distribution $U \sim N(0, C_U)$, independent of the measurement.
- The MAP estimate is given by

$$U_\delta(\omega) = \operatorname*{argmin}_{u \in H^r} \Big\{ \|Au\|_{L^2}^2 - 2\langle m, Au \rangle + \alpha \|C_U^{-1/2} u\|_{L^2}^2 \Big\},$$

where $\alpha = \delta^2$.

- Relax the previous assumption: $\alpha = \delta^\kappa$, $\kappa > 0$.
- New a priori distribution $U \sim N(0, \delta^{2-\kappa} C_U)$. Note that this distribution depends on the noise level!
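The rescaling can be read off from the exponent of the posterior: under a prior $U \sim N(0, \sigma^2 C_U)$ the penalty in the MAP functional picks up the factor $\delta^2/\sigma^2$, so choosing $\sigma^2 = \delta^{2-\kappa}$ reproduces $\alpha = \delta^\kappa$. Written out as a one-line check:

$$U \sim N(0, \sigma^2 C_U) \;\Longrightarrow\; U_\delta(\omega) = \operatorname*{argmin}_{u \in H^r} \Big\{ \|Au\|_{L^2}^2 - 2\langle m, Au \rangle + \frac{\delta^2}{\sigma^2} \|C_U^{-1/2} u\|_{L^2}^2 \Big\}, \qquad \sigma^2 = \delta^{2-\kappa} \;\Rightarrow\; \frac{\delta^2}{\sigma^2} = \delta^\kappa = \alpha.$$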

Slide 32/32: Thank you for your attention!
