Bayesian Inverse Problems


Bayesian Inverse Problems. Jonas Latz, Input/Output: www.latz.io. Technical University of Munich, Department of Mathematics, Chair for Numerical Analysis. Email: jonas.latz@tum.de. Garching, July 10, 2018. Guest lecture in Algorithms for Uncertainty Quantification with Dr. Tobias Neckel and Friedrich Menhorn.

Outline: Motivation: Forward and Inverse Problem; Conditional Probabilities and Bayes Theorem; Bayesian Inverse Problem; Examples; Conclusions.

Outline: Motivation: Forward and Inverse Problem; Conditional Probabilities and Bayes Theorem; Bayesian Inverse Problem; Examples; Conclusions.

Forward Problem: A Picture. [Figure: the uncertain parameter θ ∼ µ is mapped by the model G (e.g. ODE, PDE, ...) to the quantity of interest Q(θ) = Q ∘ G(θ); what is its distribution µ_Q?]

Forward Problem: A few examples. Groundwater pollution. G: transport equation (PDE). θ: permeability of the groundwater reservoir. Q: travel time of a particle in the groundwater reservoir. Figure: final disposal site for nuclear waste (Image: Spiegel Online)

Forward Problem: A few examples. Diabetes patient. G: glucose-insulin ODE for a type-2 diabetes patient. θ: model parameters, such as the exchange rate from plasma insulin to interstitial insulin. Q: time to inject insulin. Figure: glucometer (Image: Bayer AG)

Forward Problem: A few examples. Geotechnical engineering. G: deformation model. θ: soil properties. Q: deformation/stability/probability of failure. Figure: construction on soil (Image: www.ottawaconstructionnews.com)

Distribution of the parameter θ. How do we get the distribution of θ? Can we use data to characterise the distribution of θ? [Figure: probability density of the uncertain parameter θ ∼ µ]

Data? Let θ_true be the actual parameter. We define the data y by

y := O(G(θ_true)) + η,

where O is the observation operator, G is the model, and the measurement noise η ∼ N(0, Γ) is a random variable.
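To make the data model concrete, here is a minimal sketch in Python. The forward map G, the observation operator O, the true parameter and the noise covariance Γ are placeholder assumptions for illustration, not the models used later in the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def G(theta):
    """Hypothetical forward model: maps the parameter to a full model output."""
    x = np.linspace(0.0, 1.0, 100)
    return np.sin(np.pi * x) * theta          # stand-in for an ODE/PDE solve

def O(model_output):
    """Hypothetical observation operator: picks out a few sensor locations."""
    return model_output[[20, 40, 60, 80]]

theta_true = 0.7                               # the (unknown) true parameter
Gamma = 0.1 ** 2                               # noise variance (assumed)
eta = rng.normal(0.0, np.sqrt(Gamma), size=4)  # measurement noise η ~ N(0, Γ)

y = O(G(theta_true)) + eta                     # observational data
print(y)
```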

Data? A Picture. [Figure: the true parameter θ_true is mapped by the model G (e.g. ODE, PDE, ...) to the quantity of interest Q(θ_true) = Q ∘ G(θ_true) and, via the observation operator O, to the observational data y := O(G(θ_true)) + η.]

Identify θ_true?! Can we use the data to identify θ_true? Can we solve the equation y = O(G(θ_true)) + η?

Identify θ_true?! Can we solve the equation y = O(G(θ_true)) + η? No. The problem is ill-posed.¹ The operator O ∘ G is very complex, and dim(X × Y) > dim(Y), where (θ, η) ∈ X × Y and y ∈ Y.

¹ Hadamard (1902) - Sur les problèmes aux dérivées partielles et leur signification physique, Princeton University Bulletin 13, pp. 49-52

Summary. We want to use noisy observational data y to find θ_true, but we cannot. The uncertain parameter θ is still uncertain, even if we observe data y. Two questions: How can we quantify the uncertainty in θ considering the data y? How does this change the probability distribution of our quantity of interest Q?

Outline: Motivation: Forward and Inverse Problem; Conditional Probabilities and Bayes Theorem; Bayesian Inverse Problem; Examples; Conclusions.

An Experiment. We roll a die. The sample space of this experiment is Ω := {1, ..., 6}. The space of events is the power set of Ω: A := 2^Ω := {A : A ⊆ Ω}. The probability measure is the uniform measure on Ω: P := Unif(Ω) := Σ_{ω ∈ Ω} (1/6) δ_ω.

An Experiment. We roll a die. Consider the event A := {6}. The probability of A is P(A) = 1/6. Now, an oracle tells us, before we roll the die, whether the outcome will be even or odd: B := {2, 4, 6}, B^c := {1, 3, 5}. How does the probability of A change if we know whether B or B^c occurs? → Conditional probabilities.

Conditional probabilities. Consider a probability space (Ω, A, P) and two events D_1, D_2 ∈ A such that P(D_2) > 0. The conditional probability of D_1 given the event D_2 is defined by

P(D_1 | D_2) := P(D_1 and D_2) / P(D_2) := P(D_1 ∩ D_2) / P(D_2).

Conditional probabilities: moving back to the experiment. We roll a die.

P(A | B) = P({6}) / P({2, 4, 6}) = (1/6) / (1/2) = 1/3,
P(A | B^c) = P(∅) / P({1, 3, 5}) = 0 / (1/2) = 0.
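A minimal sketch of this computation in Python; the event names mirror the slide (A = {6}, B = {2, 4, 6}) and are otherwise arbitrary.

```python
from fractions import Fraction

# Uniform measure on the die: P({ω}) = 1/6 for each face ω
P = {omega: Fraction(1, 6) for omega in range(1, 7)}

def prob(event):
    """Probability of an event (a set of faces)."""
    return sum(P[omega] for omega in event)

def cond_prob(D1, D2):
    """P(D1 | D2) = P(D1 ∩ D2) / P(D2), assuming P(D2) > 0."""
    return prob(D1 & D2) / prob(D2)

A = {6}
B = {2, 4, 6}
B_c = {1, 3, 5}

print(cond_prob(A, B))    # 1/3
print(cond_prob(A, B_c))  # 0
```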

Probability and Knowledge. Probability distributions can be used to model knowledge. When using a fair die, we have no knowledge whatsoever concerning the outcome: P({1}) = P({2}) = P({3}) = P({4}) = P({5}) = P({6}) = 1/6.

Probability and Knowledge. Probability distributions can be used to model knowledge. When did the French Revolution start? Rough knowledge from school: end of the 18th century, definitely not before 1750 / after 1800. [Figure: a probability density function over the years 1750-1800 representing this rough knowledge]

Probability and Knowledge. Probability distributions can be used to model knowledge. [Figure: the same probability density function over the years 1750-1800] Here, the probability distribution is given by a probability density function (pdf), i.e.

P(A) = ∫_A pdf(x) dx.

Conditional probability and Learning. We represent content we learn by an event B ∈ 2^Ω. Learning B is a map P(·) ↦ P(· | B).

Conditional probability and Learning. We learn that B = {2, 4, 6} occurs. Hence, we map

[P({1}) = P({2}) = P({3}) = P({4}) = P({5}) = P({6}) = 1/6]
↦ [P({1} | B) = P({3} | B) = P({5} | B) = 0; P({2} | B) = P({4} | B) = P({6} | B) = 1/3].

But how do we do this in general?

Elementary Bayes Theorem. Consider a probability space (Ω, A, P) and two events D_1, D_2 ∈ A such that P(D_2) > 0. Then

P(D_1 | D_2) = P(D_2 | D_1) P(D_1) / P(D_2).

Proof: Let P(D_1) > 0. We have

P(D_1 | D_2) = P(D_1 ∩ D_2) / P(D_2)   (1)

and

P(D_2 | D_1) = P(D_2 ∩ D_1) / P(D_1)   (2).

Equation (2) is equivalent to P(D_2 ∩ D_1) = P(D_2 | D_1) P(D_1), which can be substituted into (1) to get the final result. If P(D_1) = 0, we get from its definition P(D_1 | D_2) = 0. Thus, the statement also holds in this case.

Who is Bayes? Figure: Bayes (Image: Terence O'Donnell, History of Life Insurance in Its Formative Years (Chicago: American Conservation Co., 1936)). Thomas Bayes, 1701-1761. English, Presbyterian minister, mathematician, philosopher. Proposed a (very) special case of Bayes Theorem. Not much is known about him (the image above might not be him).

Who do we know Bayes Theorem from? Figure: Laplace (Image: Wikipedia). Pierre-Simon Laplace, 1749-1827. French, mathematician and astronomer. Published Bayes Theorem in Théorie analytique des probabilités in 1812.

Conditional probability and Learning. When did the French Revolution start? (1) Rough knowledge from school: end of the 18th century, definitely not before 1750 / after 1800. [Figure: a broad pdf over the years 1750-1800] (2) Heard today on the radio: it was in the 1780s, so in the interval [1780, 1790). [Figure: the updated, narrower pdf concentrated on [1780, 1790)]

Conditional probability and Learning. When did the French Revolution start? (2) Heard today on the radio: it was in the 1780s, so in the interval [1780, 1790). [Figure: the pdf concentrated on [1780, 1790)] (Image: Wikipedia) (3) Reading in a textbook: it was in the middle of the year 1789. Problem: the point in time x we are looking for is now observed as a particular value, 1789.5 = x + η, where η ∼ N(0, 0.0625). Hence, the event we learn is B = {x + η = 1789.5}. But P(B) = 0. Hence P(· | B) is not defined and Bayes Theorem does not hold.

Non-elementary Conditional Probability. It is possible to define conditional probabilities for (non-empty) events B with P(B) = 0 (rather complicated). Easier: consider the learning in terms of continuous random variables (rather simple).

Conditional Densities. We want to learn a random variable x_1 and observe another random variable x_2. The joint distribution of x_1 and x_2 is given by a 2-dimensional probability density function pdf(x_1, x_2). Given pdf(x_1, x_2), the marginal distributions of x_1 and x_2 are given by

mpdf_1(x_1) = ∫ pdf(x_1, x_2) dx_2;   mpdf_2(x_2) = ∫ pdf(x_1, x_2) dx_1.

We learn the event B = {x_2 = b}, for some b ∈ R. Here, the conditional distribution is given by

cpdf_{1|2}(x_1 | x_2 = b) = pdf(x_1, b) / mpdf_2(b).

Bayes Theorem for Conditional Densities. Similarly to the Elementary Bayes Theorem, we can give a Bayes Theorem for densities:

cpdf_{1|2}(· | x_2 = b)  [posterior]  =  cpdf_{2|1}(b | x_1 = ·)  [(data) likelihood]  ×  mpdf_1(·)  [prior]  /  mpdf_2(b)  [evidence]

prior: knowledge we have a priori concerning x_1. likelihood: the probability distribution of the data given x_1. posterior: knowledge we have concerning x_1, knowing that x_2 = b. evidence: assesses the model assumptions.
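A minimal numerical sketch of this density-version update on a grid, using the French Revolution example: the Gaussian prior and its parameters are assumptions chosen for illustration; only the observation value 1789.5 and the noise variance 0.0625 come from the slides.

```python
import numpy as np

# Grid over the years of interest
x = np.linspace(1750.0, 1800.0, 2001)

# Prior: rough knowledge from school (assumed Gaussian centred at 1780)
prior = np.exp(-0.5 * ((x - 1780.0) / 10.0) ** 2)
prior /= np.trapz(prior, x)

# Likelihood of the textbook observation b = 1789.5 with noise variance 0.0625
b, var = 1789.5, 0.0625
likelihood = np.exp(-0.5 * (b - x) ** 2 / var)

# Bayes theorem for densities: posterior = likelihood × prior / evidence
evidence = np.trapz(likelihood * prior, x)      # mpdf_2(b)
posterior = likelihood * prior / evidence

print("posterior mean:", np.trapz(x * posterior, x))
```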

Laplace's formulation. Figure: Bayes Theorem in Théorie analytique des probabilités by Pierre-Simon Laplace (1812, p. 182). Prior p, likelihood H, posterior P, integral/sum S.

Conditional probability and Learning. When did the French Revolution start? (3) Reading in a textbook: it was in the middle of the year 1789. [Figure: the posterior pdf, sharply concentrated around 1789.5] (4) Looking it up on wikipedia.org: the actual date is 14 July 1789.

Figure: Prise de la Bastille by Jean-Pierre Louis Laurent Houël, 1789 (Image: Bibliothèque nationale de France)

Figure: one more example concerning conditional probabilities (Image: xkcd)

Outline: Motivation: Forward and Inverse Problem; Conditional Probabilities and Bayes Theorem; Bayesian Inverse Problem; Examples; Conclusions.

Bayesian Inverse Problem. Given data y and a prior distribution µ_0, the parameter θ is a random variable: θ ∼ µ_0. Determine the posterior distribution µ^y, that is,

µ^y = P(θ ∈ · | O(G(θ)) + η = y).

The problem "find µ^y" is well-posed.

Bayesian Inverse Problem: A Picture. [Figure: the prior information θ ∼ µ_0, the true parameter θ_true and the observational data y := O(G(θ_true)) + η determine the posterior distribution µ^y := P(θ ∈ · | O(G(θ)) + η = y); the uncertain parameter θ ∼ µ^y is then pushed through the model G (e.g. ODE, PDE, ...) to the quantity of interest Q(θ) = Q ∘ G(θ); what is its distribution µ^y_Q?]

Bayes Theorem (revisited).

cpdf(θ | O(G(θ)) + η = y)  [posterior]  =  cpdf(y | θ)  [(data) likelihood]  ×  mpdf_1(θ)  [prior]  /  mpdf_2(y)  [evidence]

prior: given by the probability measure µ_0. likelihood: O(G(θ)) − y = −η ∼ N(0, Γ), i.e. y | θ ∼ N(O(G(θ)), Γ). posterior: given by the probability measure µ^y. evidence: chosen as a normalising constant.
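The Gaussian-likelihood structure translates directly into code. A minimal sketch, assuming a generic parameter-to-observable map `forward` (standing in for O ∘ G) and a uniform prior; the function names and the toy model are ours, not the lecture's.

```python
import numpy as np

def forward(theta):
    """Hypothetical parameter-to-observable map, standing in for O ∘ G(θ)."""
    return np.array([np.sin(theta), np.cos(theta)])

def log_likelihood(theta, y, gamma):
    """log cpdf(y | θ) for y | θ ~ N(O∘G(θ), γ² I), up to an additive constant."""
    residual = y - forward(theta)
    return -0.5 * np.dot(residual, residual) / gamma**2

def log_prior(theta):
    """Uniform prior on [0, 1] (an assumption for this sketch)."""
    return 0.0 if 0.0 <= theta <= 1.0 else -np.inf

def log_posterior(theta, y, gamma):
    """Unnormalised log-posterior: log likelihood + log prior (evidence omitted)."""
    return log_likelihood(theta, y, gamma) + log_prior(theta)

y_obs = np.array([0.2, 0.9])                  # some assumed data
print(log_posterior(0.3, y_obs, gamma=0.1))
```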

How do we invert Bayesian? Sampling based: sample from the posterior measure µ^y (Importance Sampling; Markov Chain Monte Carlo; Sequential Monte Carlo/Particle Filters). Deterministic: use a deterministic quadrature rule to approximate µ^y (Sparse Grids; QMC).

Sampling-based methods for Bayesian Inverse Problems. Idea: generate samples from µ^y and use these samples in a Monte Carlo manner to approximate the distribution of Q(θ), where θ ∼ µ^y. Problem: we typically cannot generate i.i.d. samples from µ^y; instead we use weighted samples from the wrong distribution (Importance Sampling, SMC) or dependent samples from the right distribution (MCMC).

Importance Sampling. Importance sampling applies Bayes Theorem directly and uses the following identity:

E_{µ^y}[Q] = E_{µ_0}[Q · cpdf(y | ·)] / E_{µ_0}[cpdf(y | ·)],

where cpdf(y | ·) is the likelihood and the denominator is the evidence. Hence, we can integrate w.r.t. µ^y using only integrals w.r.t. µ_0. In practice: sample i.i.d. (θ_j : j = 1, ..., J) ∼ µ_0 and approximate

E_{µ^y}[Q] ≈ ( J^{-1} Σ_{j=1}^J Q(θ_j) cpdf(y | θ_j) ) / ( J^{-1} Σ_{j=1}^J cpdf(y | θ_j) ).
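A minimal self-normalised importance sampling sketch of this estimator; the prior, the forward map, the quantity of interest and the noise level are illustrative assumptions, not the lecture's groundwater model.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(theta):
    """Hypothetical observable O ∘ G(θ); stands in for a real model evaluation."""
    return np.array([theta, theta**2])

def Q(theta):
    """Hypothetical quantity of interest."""
    return np.sin(theta)

y = np.array([0.3, 0.1])           # observed data (assumed)
gamma = 0.1                        # noise standard deviation, Γ = γ² I (assumed)
J = 10_000

theta = rng.uniform(0.0, 1.0, J)   # i.i.d. samples from the prior µ_0 = Unif[0, 1]

# Unnormalised likelihood weights cpdf(y | θ_j) for Gaussian noise
residuals = np.array([y - forward(t) for t in theta])
weights = np.exp(-0.5 * np.sum(residuals**2, axis=1) / gamma**2)

# Self-normalised importance sampling estimate of E_{µ^y}[Q]
estimate = np.sum(Q(theta) * weights) / np.sum(weights)
print(estimate)
```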

Markov Chain Monte Carlo. Construct an ergodic Markov chain (θ_n)_{n ≥ 1} that is stationary with respect to µ^y. Then θ_n ∼ µ^y approximately for n large, and the dependent samples can be used for Monte Carlo-type estimation. Some methods: Metropolis-Hastings MCMC, Gibbs sampling, Hamiltonian/Langevin MCMC, slice sampling, ... Often: accept-reject mechanisms.
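A minimal random-walk Metropolis-Hastings sketch targeting an unnormalised posterior density; the toy target, proposal step size and chain length are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(theta):
    """Unnormalised log-posterior; here an assumed toy density on [0, 1]."""
    if not (0.0 <= theta <= 1.0):
        return -np.inf                               # uniform prior support
    return -0.5 * (theta - 0.4) ** 2 / 0.05**2       # Gaussian-shaped likelihood term

n_steps, step = 20_000, 0.1
chain = np.empty(n_steps)
theta = 0.5                                          # initial state
log_p = log_target(theta)

for n in range(n_steps):
    proposal = theta + step * rng.standard_normal()  # random-walk proposal
    log_p_prop = log_target(proposal)
    # Accept with probability min(1, target(proposal) / target(theta))
    if np.log(rng.uniform()) < log_p_prop - log_p:
        theta, log_p = proposal, log_p_prop
    chain[n] = theta

burn_in = 2_000
print("posterior mean estimate:", chain[burn_in:].mean())
```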

Deterministic Strategies. Several deterministic methods have been proposed. General issue: estimating the model evidence is difficult (this also contraindicates importance sampling).

Outline: Motivation: Forward and Inverse Problem; Conditional Probabilities and Bayes Theorem; Bayesian Inverse Problem; Examples; Conclusions.

Example 1: 1D groundwater flow with uncertain source. Consider the following partial differential equation on D = [0, 1]:

−∇ · (k ∇p) = f(θ)   (on D),
p = 0                 (on ∂D),

where the diffusion coefficient k is known. The source term f(θ) contains one Gaussian-type source at position θ ∈ [0.1, 0.9]. (We solve the PDE using 48 linear finite elements.)
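A minimal forward-solver sketch for this type of problem, assuming a uniform grid, a constant permeability and second-order finite differences instead of the 48 linear finite elements mentioned above; the source width and all numbers are illustrative assumptions.

```python
import numpy as np

def solve_pressure(theta, n=48, k=1.0, width=0.05):
    """Solve -(k p')' = f_θ on [0, 1] with p(0) = p(1) = 0 by finite differences.

    f_θ is a Gaussian-type source centred at θ; k is a constant permeability here.
    """
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    f = np.exp(-0.5 * ((x - theta) / width) ** 2)   # Gaussian-type source term

    # Assemble the tridiagonal system for the interior nodes
    main = 2.0 * k / h**2 * np.ones(n - 1)
    off = -k / h**2 * np.ones(n - 2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    p = np.zeros(n + 1)
    p[1:-1] = np.linalg.solve(A, f[1:-1])           # homogeneous Dirichlet BCs
    return x, p

x, p = solve_pressure(theta=0.2)
print("p(5/12) ≈", np.interp(5 / 12, x, p))         # quantity of interest in Example 1
```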

Example 1: (deterministic) log-permeability. [Figure: the known log-permeability field on [0, 1], varying between roughly -0.1 and 0.3]

Example 1: Quantity of Interest. Considering the uncertainty in f(θ), determine the distribution of the quantity of interest Q : [0.1, 0.9] → R, θ ↦ p(5/12). [Figure: pressure p over length x, with the true quantity of interest marked]

Example 1: Data. The observations are based on the observation operator O, which maps p ↦ [p(2/12), p(4/12), p(6/12), p(8/12)], given θ_true = 0.2. [Figure: the true pressure over length x and the observational data at the sensor locations]

Example 1: Bayesian Setting. We assume uncorrelated Gaussian noise with different variances: (a) Γ = 0.8², (b) Γ = 0.4², (c) Γ = 0.2², (d) Γ = 0.1². Prior distribution: θ ∼ µ_0 = Unif[0.1, 0.9]. Compare the prior and the different posteriors (with different noise levels), and the uncertainty propagation of the prior and the posteriors. (Estimations with standard Monte Carlo/Importance Sampling using J = 10000.)

Example 1: No data (i.e. prior).

Example 1: Very high noise level, Γ = 0.8².

Example 1: High noise level, Γ = 0.4².

Example 1: Small noise level, Γ = 0.2².

Example 1: Very small noise level, Γ = 0.1².

Example 1: Summary. Smaller noise level ⇒ less uncertainty in the parameter ⇒ less uncertainty² in the quantity of interest. The unknown parameter can be estimated pretty well in this setting. Importance Sampling can be used in such simple settings.

² "Less uncertainty" meaning smaller variance.

Example 2: 1D groundwater flow with uncertain number and positions of sources. Consider again

−∇ · (k ∇p) = g(θ)   (on D),
p = 0                 (on ∂D),
θ := (N, ξ_1, ..., ξ_N),

where N is the number of Gaussian-type sources and ξ_1, ..., ξ_N are the positions of the sources (sorted ascendingly). The log-permeability is known, but with a higher spatial variability.

Example 2: (deterministic) log-permeability. [Figure: the known log-permeability field on [0, 1], varying between roughly -2 and 2]

Example 2: Quantity of Interest. Considering the uncertainty in g(θ), determine the distribution of the quantity of interest Q : θ ↦ p(1/2). [Figure: pressure p over length x, with the true quantity of interest marked]

Example 2: Data. The observations are based on the observation operator O, which maps p ↦ [p(1/12), p(3/12), p(5/12), p(7/12), p(9/12), p(11/12)], given θ_true := (4, 0.2, 0.4, 0.6, 0.8). [Figure: the true pressure over length x and the observational data at the six sensor locations]

Example 2: Bayesian Setting. We assume uncorrelated Gaussian noise with variance Γ = 0.4². Prior distribution θ ∼ µ_0, where µ_0 is given by the following sampling procedure (sketched in code below): (1) sample N ∼ Unif{1, ..., 8}; (2) sample ξ ∼ Unif[0.1, 0.9]^N; (3) set ξ := sort(ξ); (4) set θ := (N, ξ_1, ..., ξ_N). Compare the prior and posterior and their uncertainty propagation. (Estimations with standard Monte Carlo/Importance Sampling using J = 10000.)
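A minimal sketch of this transdimensional prior sampler; only the four sampling steps come from the slide, the function name and seed are ours.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_prior():
    """Draw θ = (N, ξ_1, ..., ξ_N) from the prior µ_0 described above."""
    N = rng.integers(1, 9)                    # 1. N ~ Unif{1, ..., 8}
    xi = rng.uniform(0.1, 0.9, size=N)        # 2. ξ ~ Unif[0.1, 0.9]^N
    xi = np.sort(xi)                          # 3. sort the positions ascendingly
    return (N, *xi)                           # 4. θ := (N, ξ_1, ..., ξ_N)

print(sample_prior())
```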

Example 2: Prior. Figure: prior distribution of N (left) and 100 samples of the prior distribution of the source terms (right).

Example 2: Posterior. Figure: posterior distribution of N (left) and 100 samples of the posterior distribution of the source terms (right).

Example 2: Quantity of Interest. Figure: quantities of interest, where the source term is distributed according to the prior (left) and posterior (right).

Example 2: Summary. Bayesian estimation is possible in complicated settings (such as this transdimensional setting). Importance Sampling is not very efficient.

Outline: Motivation: Forward and Inverse Problem; Conditional Probabilities and Bayes Theorem; Bayesian Inverse Problem; Examples; Conclusions.

Messages to take home.
+ Bayesian statistics can be used to incorporate data into an uncertain model.
+ Bayesian Inverse Problems are well-posed and thus a consistent approach to parameter estimation.
+ Applying the Bayesian framework is possible in many different settings, also in ones that are genuinely difficult (e.g. transdimensional parameter spaces).
- Solving Bayesian Inverse Problems is computationally very expensive: it requires many forward solves and is algorithmically complex.

How to learn Bayesian. Various lectures at TUM: Bayesian strategies for inverse problems, Prof. Koutsourelakis (Mechanical Engineering); various Machine Learning lectures in CS. Speak with Prof. Dr. Elisabeth Ullmann or Jonas Latz (both M2). GitHub/latz-io: a short review on algorithms for Bayesian Inverse Problems, sample code (MATLAB), these slides.

How to learn Bayesian. Various books/papers:
Allmaras et al.: Estimating Parameters in Physical Models through Bayesian Inversion: A Complete Example (2013; SIAM Rev. 55(1))
Liu: Monte Carlo Strategies in Scientific Computing (2004; Springer)
McGrayne: The Theory That Would Not Die (2011; Yale University Press)
Robert: The Bayesian Choice (2007; Springer)
Stuart: Inverse Problems: A Bayesian Perspective (2010; Acta Numerica 19)

Jonas Latz. Input/Output: www.latz.io