Bayesian inverse problems with Laplacian noise


1 Bayesian inverse problems with Laplacian noise
Remo Kretschmann, Faculty of Mathematics, University of Duisburg-Essen
Applied Inverse Problems 2017, M27, Hangzhou, 1 June 2017

2 Outline: 1. Inverse heat equation and Laplacian measures  2. Bayesian inversion  3. Maximum a posteriori estimators

3 Classical setting: X, Y separable Hilbert spaces, F: X → Y. Given observed data y ∈ Y, find the unknown u ∈ X, where y = F(u) + η with observational noise η ∈ Y.

4 Bayesian approach: X, Y separable Hilbert spaces, F: X → Y, probability measures μ_0 on (X, B(X)) and Q_0 on (Y, B(Y)); prior u ∼ μ_0, noise η ∼ Q_0, η independent of u, and y = F(u) + η. Given observed data y ∈ Y, find the posterior distribution μ^y, the conditional distribution of u | y. Extract information out of μ^y in the form of estimators.

5-6 Motivation: Bayesian inverse problems in function spaces. Dashti, Law, Stuart and Voss have studied nonlinear inverse problems with Gaussian prior and noise that satisfies certain conditions [Dashti et al 2013]. In this case, the MAP estimator can be described as the minimiser of the Onsager-Machlup functional. Dashti and Stuart have analysed the inverse heat equation with Gaussian noise and different priors (i.a. Gaussian) [Dashti, Stuart 2015].
Questions: What happens if the prior is Gaussian but the noise is non-Gaussian? Does Laplacian noise lead to an l1-discrepancy term?

7 Motivation: We study the inverse heat equation with Laplacian noise in combination with a Gaussian prior. Problem: Laplacian noise violates the conditions of [Dashti et al 2013]. Does a solution exist? What is the connection between the MAP estimator and an optimisation problem? Does the MAP estimator converge towards the true solution as the variance of the noise tends to zero?

8 The heat conduction equation
D ⊂ R^d bounded domain with C^k boundary for some k ≥ 1, A := −Δ defined on D(A) = H²(D) ∩ H¹_0(D). For every u ∈ L²(D) there is a unique solution v ∈ C([0, ∞), L²(D)) ∩ C¹((0, ∞), D(A)) of the heat equation on D with Dirichlet boundary conditions,
dv/dt(t) = −A v(t) for t > 0,  v(0) = u,
given by v(t) = exp(−At) u for all t ≥ 0.
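The forward map can be made concrete in one space dimension. Below is a minimal numerical sketch (not from the talk), assuming D = (0, 1), where the Dirichlet Laplacian has eigenfunctions φ_k(x) = √2 sin(kπx) and eigenvalues α_k = (kπ)², so that exp(−At)u is obtained by damping the eigen-coefficients of u.

```python
import numpy as np

# Minimal sketch of the forward map u -> exp(-A t) u for D = (0, 1):
# A = -Laplacian with Dirichlet boundary conditions has eigenpairs
#   phi_k(x) = sqrt(2) sin(k pi x),  alpha_k = (k pi)^2,  k = 1, 2, ...
K = 200
k = np.arange(1, K + 1)
alpha = (k * np.pi) ** 2                            # eigenvalues of A

x = np.linspace(0.0, 1.0, 501)
phi = np.sqrt(2) * np.sin(np.pi * np.outer(k, x))   # phi[k-1, :] samples phi_k

def heat_semigroup(u_coeff, t):
    """Eigen-coefficients of v(t) = exp(-A t) u, given those of u."""
    return np.exp(-alpha * t) * u_coeff

u_vals = x * (1.0 - x)                          # example initial temperature
u_coeff = phi @ u_vals / (len(x) - 1)           # crude quadrature for (u, phi_k)
v_vals = heat_semigroup(u_coeff, t=0.01) @ phi  # synthesis of v(0.01, x)
print(u_vals.max(), v_vals.max())               # the maximum has already decayed
```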

9 The inverse problem (outline): Fix t = 1, i.e., F(u) = v(1) = e^{−A} u. Given a temperature measurement y at time t = 1, find the initial temperature u ∈ L²(D) at time t = 0, where y = e^{−A} u + η.

10 The Bayesian inverse problem (outline): Given a temperature measurement y at time t = 1, find the conditional distribution of the posterior u | y, where y = e^{−A} u + η. We assume that A is a Laplace-like operator, the noise η has a centred Laplacian distribution with covariance operator A^{s−β}, and the prior u has a centred Gaussian distribution with covariance operator A^{−τ}.

11 Laplace-like operators: We assume that the operator A in L²(D) satisfies the following properties:
1. The eigenvectors {φ_k}_{k∈N} of A form an orthonormal basis of L²(D).
2. The corresponding eigenvalues 0 < α_1 ≤ α_2 ≤ ⋯ of A satisfy (1/C_A) k^{2/d} ≤ α_k ≤ C_A k^{2/d} for all k ∈ N and a constant C_A > 1.
3. A is densely defined and surjective.
4. A is self-adjoint.

12 Hilbert scales: A induces a Hilbert scale {H^s}_{s∈R}, where
H^s := A^{−s/2}(L²(D)) = { u ∈ L²(D) : Σ_{k=1}^∞ α_k^s (u, φ_k)_{L²}² < ∞ } for all s ≥ 0,
equipped with ‖u‖_{H^s} := ‖A^{s/2} u‖_{L²} and (u, v)_{H^s} := (A^{s/2} u, A^{s/2} v)_{L²}.
Now we set X := L²(D) = H^0 and Y := H^s with s ≥ 0, i.e., u ∈ L²(D) and η, y ∈ H^s.
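A truncated version of the H^s norm is easy to evaluate from the eigen-coefficients. The following sketch is illustrative (it reuses the 1-D Dirichlet eigenvalues assumed above) and computes ‖u‖_{H^s} = ‖A^{s/2}u‖_{L²} from the first K coefficients.

```python
import numpy as np

def hs_norm(u_coeff, alpha, s):
    """Truncated H^s norm: ||u||_{H^s} = ||A^{s/2} u||_{L^2}
    = sqrt(sum_k alpha_k^s (u, phi_k)_{L^2}^2)."""
    return np.sqrt(np.sum(alpha ** s * u_coeff ** 2))

alpha = (np.arange(1, 201) * np.pi) ** 2       # 1-D Dirichlet eigenvalues again
u_coeff = 1.0 / np.arange(1, 201) ** 2         # some decaying eigen-coefficients
print(hs_norm(u_coeff, alpha, s=0.0))          # the L^2 norm (s = 0)
print(hs_norm(u_coeff, alpha, s=1.0))          # larger: H^1 weights the tail more
```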

13 Standard Laplacian measure on R

14 Laplacian measure on R: For a ∈ R and λ > 0 define the probability measure L_{a,λ} on (R, B(R)) by
L_{a,λ}(B) = (1/√(2λ)) ∫_B exp( −√2 |x − a| / √λ ) dx for all B ∈ B(R).
Then L_{a,λ} has mean a and variance λ, i.e., ∫_R x L_{a,λ}(dx) = a and ∫_R (x − a)² L_{a,λ}(dx) = λ.
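This is the usual Laplace distribution, parametrised by its variance rather than its scale. A quick sanity check (illustrative; numpy's scale parameter corresponds to √(λ/2) here):

```python
import numpy as np

# L_{a,lambda} is the Laplace distribution with mean a and *variance* lambda;
# numpy parametrises it by the scale, which here equals sqrt(lambda / 2).
a, lam = 1.5, 0.4
rng = np.random.default_rng(0)
samples = rng.laplace(loc=a, scale=np.sqrt(lam / 2), size=1_000_000)
print(samples.mean(), samples.var())   # approximately 1.5 and 0.4
```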

15-16 Infinite-dimensional product measure: H separable real Hilbert space. For every compact self-adjoint operator Q on H there is an orthonormal basis {e_k}_{k∈N} of H consisting of eigenvectors of Q. Identify H with l² by means of x ↦ {(x, e_k)_H}_{k∈N}.
Idea: For any a ∈ H and any positive definite trace class operator Q ∈ L(H), define the Laplacian measure L_{a,Q} on (l², B(l²)) as the product measure
L_{a,Q} = ⊗_{k=1}^∞ L_{a_k, λ_k},
with a_k := (a, e_k)_H and λ_k := (Q e_k, e_k)_H for all k ∈ N.
Caution: This definition depends on the choice of the basis {e_k}_{k∈N}.
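A truncated draw from L_{a,Q} can be produced by sampling the coordinates independently in the eigenbasis of Q. A minimal sketch, with illustrative eigenvalues λ_k (not taken from the talk):

```python
import numpy as np

# Truncated draw from the product measure L_{a,Q}: sample each coordinate
# x_k ~ L_{a_k, lambda_k} independently in the eigenbasis {e_k} of Q.
# The eigenvalues lambda_k below are illustrative (summable, so Q is trace class).
K = 500
lam = 1.0 / np.arange(1, K + 1) ** 2
a = np.zeros(K)                                  # centred case, a = 0
rng = np.random.default_rng(1)
x = rng.laplace(loc=a, scale=np.sqrt(lam / 2))   # coefficients (x, e_k)_H
print(np.sum(x ** 2))                            # squared norm of the draw, finite a.s.
```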

17 Basic properties: This way, L_{a,Q} has mean a and covariance operator Q, i.e.,
∫_H (x, y)_H L_{a,Q}(dx) = (a, y)_H for all y ∈ H,
∫_H (x − a, y)_H (x − a, z)_H L_{a,Q}(dx) = (Q y, z)_H for all y, z ∈ H.
In case a = 0 we write L_Q := L_{0,Q}.

18 The Bayesian inverse problem: Given y ∈ Y, find the conditional distribution of u | y on X, where
y = e^{−A} u + η,
the noise η ∼ L_{A^{s−β}} with the Laplacian measure L_{A^{s−β}} on Y := H^s, using the basis e_k := α_k^{−s/2} φ_k and 0 ≤ s < β − d/2,
the prior u ∼ N_{A^{−τ}}, independent of η, with the Gaussian measure N_{A^{−τ}} on X := L²(D) = H^0 and τ > d/2.
Idea: Use Bayes' Theorem to obtain the posterior distribution.

19 Outline: 1. Inverse heat equation and Laplacian measures  2. Bayesian inversion  3. Maximum a posteriori estimators

20 Bayes' Theorem: (X, A), (Y, B) measurable spaces, ν, ν_0 probability measures on X × Y such that ν ≪ ν_0, i.e., ν is absolutely continuous with respect to ν_0. Then ν has a density f = dν/dν_0 with respect to ν_0, i.e., ν = f ν_0.
Theorem (Bayes). Assume that the conditional random variable x | y exists under ν_0 with probability distribution ν_0^y on X. Then the conditional random variable x | y exists under ν with probability distribution ν^y on X, and ν^y ≪ ν_0^y. If, additionally,
Z(y) := ∫_X (dν/dν_0)(x, y) ν_0^y(dx) > 0,
then
(dν^y/dν_0^y)(x) = (1/Z(y)) (dν/dν_0)(x, y).

21 Posterior distribution: In our case, (u, η) ∼ ν_0 and (u, y) ∼ ν on X × Y = L²(D) × H^s. In order for ν ≪ ν_0 to hold, we require L_{e^{−A}u, A^{s−β}} ≪ L_{A^{s−β}} for all u ∈ X. Then, by Bayes' Theorem, the posterior measure μ^y of u | y is absolutely continuous with respect to the prior measure N_{A^{−τ}} with density
dμ^y/dN_{A^{−τ}}(u) = (1/Z(y)) exp(−Φ(u, y)) ν_0-a.e.,
Φ(u, y) = √2 Σ_{k=1}^∞ α_k^{β/2} ( |y_k − e^{−α_k} u_k| − |y_k| ),
where y_k := (y, φ_k)_X and u_k := (u, φ_k)_X.
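For a numerical illustration, the potential Φ can be evaluated on a truncated eigenbasis. The sketch below is not from the talk; it uses generic eigenvalues α_k = k (roughly the d = 2 growth) and arbitrary coefficients.

```python
import numpy as np

def Phi(u_coeff, y_coeff, alpha, beta):
    """Truncated potential Phi(u, y) = sqrt(2) * sum_k alpha_k^{beta/2}
    * (|y_k - exp(-alpha_k) u_k| - |y_k|)."""
    resid = np.abs(y_coeff - np.exp(-alpha) * u_coeff) - np.abs(y_coeff)
    return np.sqrt(2) * np.sum(alpha ** (beta / 2) * resid)

alpha = np.arange(1, 51, dtype=float)    # illustrative eigenvalues (d = 2 growth)
beta = 3.0
rng = np.random.default_rng(2)
u = rng.standard_normal(50)
y = np.exp(-alpha) * u + 0.1 * rng.laplace(size=50)
print(Phi(u, y, alpha, beta))
```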

22 Admissible shifts: H separable Hilbert space, Q ∈ L(H) positive definite trace class operator.
Theorem.
1. If a ∉ Q^{1/2}(H), then L_{a,Q} and L_Q are singular.
2. If a ∈ Q^{1/2}(H), then L_{a,Q} and L_Q are equivalent (L_{a,Q} ≪ L_Q and L_Q ≪ L_{a,Q}) and
dL_{a,Q}/dL_Q(y) = exp( −√2 Σ_{k=1}^∞ ( |y_k − a_k| − |y_k| ) / √λ_k ) L_Q-a.e.,
where y_k := (y, e_k)_H, a_k := (a, e_k)_H and λ_k = (Q e_k, e_k)_H.
Idea of proof: Apply Kakutani's Theorem.
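The density formula can be checked in finite dimensions, where dL_{a,Q}/dL_Q is simply a ratio of product Laplace densities. A small sketch with illustrative values of λ_k and a:

```python
import numpy as np

# Finite-dimensional check of the density formula: the log-ratio of the product
# Laplace densities at y equals -sqrt(2) * sum_k (|y_k - a_k| - |y_k|) / sqrt(lambda_k).
# The values of lambda_k and a below are illustrative.
K = 10
lam = 1.0 / np.arange(1, K + 1) ** 2
a = 0.3 * np.sqrt(lam)              # a in Q^{1/2}(H): a_k / sqrt(lambda_k) is bounded
rng = np.random.default_rng(3)
y = rng.laplace(scale=np.sqrt(lam / 2))

def log_density(y, mean, lam):
    """log of the product of L_{mean_k, lambda_k} densities at y."""
    return np.sum(-0.5 * np.log(2 * lam) - np.sqrt(2) * np.abs(y - mean) / np.sqrt(lam))

lhs = log_density(y, a, lam) - log_density(y, np.zeros(K), lam)
rhs = -np.sqrt(2) * np.sum((np.abs(y - a) - np.abs(y)) / np.sqrt(lam))
print(np.isclose(lhs, rhs))         # True
```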

23 Outline: 1. Inverse heat equation and Laplacian measures  2. Bayesian inversion  3. Maximum a posteriori estimators

24 Maximum a posteriori estimators: Let μ be a probability measure on a separable Hilbert space X and define
M_ε := sup_{u ∈ X} μ(B_ε(u)) for all ε > 0.
Any point û ∈ X satisfying
lim_{ε→0} μ(B_ε(û)) / M_ε = 1
is called a maximum a posteriori (MAP) estimator for μ.

25-26 Onsager-Machlup functional: I: E → R is called the Onsager-Machlup functional for μ if
lim_{ε→0} μ(B_ε(u)) / μ(B_ε(v)) = exp( I(v) − I(u) )
for all u, v ∈ E, where E ⊆ X denotes the space of all admissible shifts that yield an equivalent measure.
For a centred Gaussian measure N_Q on X, E = Q^{1/2}(X) and
I(u) = ½ ‖Q^{−1/2} u‖²_X for all u ∈ E.

27 Characterisation of MAP estimators: μ_0 centred Gaussian measure on X, μ^y posterior measure on X with dμ^y/dμ_0(u) = exp(−Φ(u)) μ_0-a.e., Φ: X → R, X separable Banach space, E ⊆ X space of admissible shifts for μ_0, μ^y that yield an equivalent measure.
Theorem [Dashti et al 2013]. Assume that
1. Φ is bounded from below,
2. Φ is locally bounded from above,
3. Φ is locally Lipschitz continuous.
Then û ∈ E is a MAP estimator for μ^y if and only if it minimises the Onsager-Machlup functional I for μ^y.

28 Onsager-Machlup functional (2): For μ_0 = N_{A^{−τ}} and μ^y, the space of admissible shifts is given by E = A^{−τ/2}(L²(D)) = H^τ. In our case, the Onsager-Machlup functional I: H^τ → R for μ^y is
I(u) := Φ(u) + ½ ‖u‖²_{H^τ} = √2 Σ_{k=1}^∞ α_k^{β/2} ( |y_k − e^{−α_k} u_k| − |y_k| ) + ½ Σ_{k=1}^∞ α_k^τ u_k²,
where u_k := (u, φ_k)_{L²} and y_k := (y, φ_k)_{L²}.
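A truncated version of I is straightforward to evaluate; the sketch below restates the Φ term and adds the H^τ penalty (illustrative, same eigenbasis conventions as in the earlier sketches):

```python
import numpy as np

def onsager_machlup(u_coeff, y_coeff, alpha, beta, tau):
    """Truncated Onsager-Machlup functional
    I(u) = Phi(u, y) + (1/2) ||u||_{H^tau}^2 in the eigenbasis of A."""
    resid = np.abs(y_coeff - np.exp(-alpha) * u_coeff) - np.abs(y_coeff)
    phi = np.sqrt(2) * np.sum(alpha ** (beta / 2) * resid)
    penalty = 0.5 * np.sum(alpha ** tau * u_coeff ** 2)
    return phi + penalty

# e.g. onsager_machlup(u, y, alpha, beta=3.0, tau=1.0) with arrays as in the Phi sketch
```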

29 Characterisation of MAP estimators (2): Theorem [Dashti et al 2013]. Assume that 1. Φ is bounded from below, 2. Φ is locally bounded from above, 3. Φ is locally Lipschitz continuous. Then û ∈ E is a MAP estimator for μ^y if and only if it minimises the Onsager-Machlup functional I for μ^y.
Problem: For Laplacian noise, Φ is not bounded from below.
Upside: Φ is globally Lipschitz continuous.

30 Characterisation of MAP estimators (3): μ_0 centred Gaussian measure on X, μ^y posterior measure on X with dμ^y/dμ_0(u) = exp(−Φ(u)) μ_0-a.e., Φ: X → R, X separable Hilbert space, E ⊆ X space of admissible shifts for μ_0, μ^y that yield an equivalent measure.
Theorem. Assume that
1. Φ is globally Lipschitz continuous,
2. Φ(0) = 0.
Then û ∈ E is a MAP estimator for μ^y if and only if it minimises the Onsager-Machlup functional I for μ^y.

31 Characterisation of MAP estimators (4): Idea of proof:
Show that {u_ε}_{ε>0}, u_ε := arg max_{u∈X} μ^y(B_ε(u)), contains a subsequence {u_{ε_n}}_{n∈N} that converges in X and that its limit u_0 ∈ E is both a MAP estimator for μ^y and a minimiser of I.
Show that every MAP estimator û ∈ X also minimises I.
Show that every minimiser ū ∈ E of I is also a MAP estimator.

32 Consistency of the MAP estimator: Does the MAP estimator converge towards the true solution as the variance of the noise tends to zero? How should the variance of the prior be chosen?

33-34 Scaled distributions: Noise distribution L_{b²A^{s−β}}, b > 0, prior distribution N_{r²A^{−τ}}, r > 0. Associated Onsager-Machlup functional I: H^τ → R for y ∈ H^s,
I(u) = (1/b) Φ(u) + (1/(2r²)) ‖u‖²_{H^τ}.
By the previous theorem, every minimiser ū(y) of I is a MAP estimator for μ^y. Its components are
(ū(y), φ_k)_{L²} = max{ −(r²/b) c_k, min{ e^{α_k} y_k, (r²/b) c_k } },
where y_k = (y, φ_k)_{L²} and c_k := √2 α_k^{β/2 − τ} e^{−α_k}.
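In other words, the MAP estimate clips the naively inverted data e^{α_k} y_k to a band whose width is governed by the prior-to-noise ratio r²/b. A minimal sketch of this formula (parameter values are illustrative; as an assumption consistent with the form of Φ, the φ_k-coefficients of the noise are taken to have variance b²α_k^{−β}):

```python
import numpy as np

def map_estimate(y_coeff, alpha, beta, tau, b, r):
    """Componentwise minimiser of I(u) = (1/b) Phi(u, y) + (1/(2 r^2)) ||u||_{H^tau}^2:
    clip exp(alpha_k) y_k to the band [-(r^2/b) c_k, (r^2/b) c_k],
    where c_k = sqrt(2) alpha_k^{beta/2 - tau} exp(-alpha_k)."""
    c = np.sqrt(2) * alpha ** (beta / 2 - tau) * np.exp(-alpha)
    thresh = (r ** 2 / b) * c
    return np.clip(np.exp(alpha) * y_coeff, -thresh, thresh)

# Illustrative use (modest alpha_k to avoid overflow in exp(alpha_k)):
alpha = np.arange(1, 51, dtype=float)
beta, tau, b, r = 3.0, 1.0, 0.1, 0.3
rng = np.random.default_rng(4)
u_true = rng.standard_normal(50) / alpha
noise = b * rng.laplace(scale=np.sqrt(alpha ** (-beta) / 2))   # Var = b^2 alpha^{-beta}
y = np.exp(-alpha) * u_true + noise
print(map_estimate(y, alpha, beta, tau, b, r)[:5])
```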

35 Frequentist consistency: True solution u† ∈ L²(D) (fixed, no prior), positive sequences {b_n}_{n∈N}, {r_n}_{n∈N} with b_n → 0, Laplacian noise η_n ∈ H^s with η_n ∼ L_{b_n² A^{s−β}} and
y_n = e^{−A} u† + η_n.
Let u_n denote the respective minimisers of I_n: H^τ → R,
I_n(u) := (1/b_n) Φ(u, y_n) + (1/(2r_n²)) ‖u‖²_{H^τ}.

36 Convergence in mean square:
Theorem. If a w ∈ H^{2τ−β} ∩ L²(D) with ‖w‖_{H^{2τ−β}} ≤ ρ exists such that u† = e^{−A} w, and if C > 0 and N ∈ N exist such that
ρ^{1/2} b_n^{1/2} ≤ r_n ≤ C^{1/2} b_n^{1/2} for all n ≥ N,
then
E[ ‖u_n − u†‖²_{L²} ] ≤ 2C Tr(A^{−τ}) b_n for all n ≥ N.
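The rate can be probed numerically by shrinking b_n with r_n² = C b_n and averaging the squared error of the clipped estimator over noise realisations. A Monte Carlo sketch (all parameter values illustrative, with the same noise model as in the previous sketch):

```python
import numpy as np

# Monte Carlo sketch: shrink b_n with r_n^2 = C b_n and watch the mean-squared
# error of the clipped MAP estimate decay roughly proportionally to b_n.
# All parameter values are illustrative choices, not taken from the talk.
K, beta, tau, C = 50, 3.0, 1.0, 1.0
alpha = np.arange(1, K + 1, dtype=float)
w = 1.0 / alpha ** 2                          # source element w in H^{2 tau - beta}
u_true = np.exp(-alpha) * w                   # source condition u = exp(-A) w
c = np.sqrt(2) * alpha ** (beta / 2 - tau) * np.exp(-alpha)
rng = np.random.default_rng(5)

for b in [1e-1, 1e-2, 1e-3]:
    r2 = C * b                                # r_n^2 = C b_n
    mse = 0.0
    for _ in range(200):
        noise = b * rng.laplace(scale=np.sqrt(alpha ** (-beta) / 2))
        y = np.exp(-alpha) * u_true + noise
        u_map = np.clip(np.exp(alpha) * y, -(r2 / b) * c, (r2 / b) * c)
        mse += np.sum((u_map - u_true) ** 2) / 200
    print(b, mse)                             # mse shrinks roughly like b
```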

37 Conclusion: Bayesian inverse heat equation with Laplacian noise: The posterior distribution exists. Every minimiser of the Onsager-Machlup functional is a MAP estimator. The MAP estimator is consistent in a frequentist sense.
Outlook: Conditional mean estimator in explicit form. Direct posterior sampling.
