Bayesian inversion. Main Credits: ftp://ist-ftp.ujf-grenoble.fr/users/coutanto/jig.zip


1 Bayesian inversion Main Credits: Inverse Problem Theory, Tarantola, 2005; Parameter Estimation and Inverse Problems, R. Aster et al., 2013; Information theory (inverse problems & signal processing), lecture by D. Gibert and F. Lopez, IPGP, 2013 (many slides borrowed from this source). Olivier Coutant, seismology group, ISTerre Laboratory, Grenoble University. Practice: ftp://ist-ftp.ujf-grenoble.fr/users/coutanto/jig.zip

2 Plan
1. Introduction to Bayesian inversion
2. Bayesian solution of the inverse problem
   1. Basic examples
3. How to find the solution of the Bayesian inversion
   1. Deterministic or stochastic
4. The deterministic approach
   1. Linear and non-linear
   2. More on regularization
   3. Resolution operator
   4. Joint inversion (e.g. seismic and gravity)
5. A practical case: seismic and gravity on a volcano

3 Bayesian inversion We saw this morning the inversion of a linear problem: data = Model × parameters ⇔ d = M p, i.e.

$$\begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_N \end{pmatrix} = \left( M_{ij} \right) \begin{pmatrix} p_1 \\ p_2 \\ \vdots \\ p_M \end{pmatrix}$$

4 Introduction to Bayesian inversion We solve this by means of different kinds of solutions, e.g. least squares: minimize $\|d - Mp\|^2$ and obtain the parameters from the normal equations $M^T d = M^T M\,p$, i.e. $p = (M^T M)^{-1} M^T d$. What if the data include inherent measurement errors? What if the model also includes inherent errors? How do we deal with that? How do we infer the transmission of errors from data to parameters?
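As a hedged illustration (the matrix M, parameters, and noise level below are made-up stand-ins, not the lecture's data), the least-squares solution can be computed either from the normal equations or with a dedicated solver:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(20, 3))                 # N = 20 data, M = 3 parameters
p_true = np.array([1.0, -2.0, 0.5])
d = M @ p_true + 0.01 * rng.normal(size=20)  # data with small random errors

# Normal equations: p = (M^T M)^{-1} M^T d
p_normal = np.linalg.solve(M.T @ M, M.T @ d)

# Numerically safer equivalent based on an orthogonal factorization
p_lstsq, *_ = np.linalg.lstsq(M, d, rcond=None)
print(p_normal, p_lstsq)                     # both recover p_true approximately
```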

5 Introduction to Bayesian inversion Data with errors: $d = d_{true} + \varepsilon_d$. Model with errors: $M = M_{true} + \varepsilon_M$. M and d may be treated as statistical quantities given the statistics of the errors ε: they are random variables with a given probability density p(x), $P(x \in I) = \int_I p(x)\,dx$, characterized by moments of increasing order:
- Expectation (mean): $E(x) = \int x\,p(x)\,dx$
- Variance: $\mathrm{Var}(x) = \int (x - E(x))^2\,p(x)\,dx$

6 Introduction to Bayesian inversion What if we have a priori knowledge about the parameters? «There is a good chance that the earthquake is not located in the air, nor in the sea; most likely in Ubaye beneath Barcelonnette.» The Bayesian approach gives a good formalism to introduce:
- A priori knowledge on the parameters
- A description of data and model errors
- Non-deterministic or non-exact forward models and coupling

7 Bayesian solution of the inverse problem The basic concept is both simple and very powerful. Probability(A and B) = Probability(B and A): $P(A,B) = P(B,A)$. P(A,B) = Probability(A) × Probability(B with A realized): $P(A,B) = P(A)\,P(B|A)$. P(B,A) = Probability(B) × Probability(A with B realized): $P(B,A) = P(B)\,P(A|B)$. Marginals: $P(A) = \sum_B P(A|B)\,P(B)$ and $P(B) = \sum_A P(B|A)\,P(A)$. Bayes formula: $P(A)\,P(B|A) = P(B)\,P(A|B)$. Let us call event A the data and event B the parameters or model; then

$$P(B|A) = \frac{P(B)\,P(A|B)}{\sum_B P(A|B)\,P(B)}$$

posterior probability on the model = prior probability on the model × data likelihood knowing the model, normalized by the evidence.
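A toy numerical check of the formula, with made-up probabilities (the two candidate models and their likelihoods are assumptions for illustration only):

```python
# Two candidate models B with prior P(B), and the likelihood P(A|B)
# of the observed data A under each model.
P_B = {"shallow": 0.7, "deep": 0.3}
P_A_given_B = {"shallow": 0.2, "deep": 0.9}

P_A = sum(P_A_given_B[b] * P_B[b] for b in P_B)               # evidence
posterior = {b: P_B[b] * P_A_given_B[b] / P_A for b in P_B}   # Bayes formula
print(posterior)  # {'shallow': ~0.34, 'deep': ~0.66}; sums to 1
```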

8 Bayesian solution to inverse problem Practical issues to obtain the Bayesian posterior probability:

$$P(B|A) = \frac{P(B)\,P(A|B)}{\int P(A|B)\,P(B)\,dB}$$

The data likelihood for model B, $P(A|B)$, is obtained by computing the probability for the data to be actually observed if model B is the true model. $P(A|B)$ is a probability because: the data contain errors, so there is some probability that the data depart from the model prediction; and the relationship giving the data from the model, the so-called forward problem, may be imprecise or fundamentally of a probabilistic nature, as in quantum mechanics (but not only). $P(A|B)$ is what will be called the forward problem.

9 Bayesian solution to inverse problem: more on data and model error Earthquake location problem: knowing seismic arrival times (P, S, Pg, Pn), invert for the source location (x, y, z). (Figure: three velocity models, homogeneous, gradient, gradient + layers.) There is a systematic bias between these three models: model 1 introduces a systematic error on the Pn (refracted waves) data type.

10 Bayesian solution to inverse problem Practical issues to obtain the Bayesian posterior probability:

$$P(B|A) = \frac{P(B)\,P(A|B)}{\int P(A|B)\,P(B)\,dB}$$

The prior probability for model B, $P(B)$, represents the amount of information available BEFORE performing the Bayesian inversion. $P(B)$ may be obtained or applied by: using the posterior probability of another, previously solved, inverse problem; or defining the parameter space so as to restrict it to the a priori probable values of the parameters. Finally, the Bayesian solution is extended to the continuous case:

$$\rho(m|d) = \frac{\rho(m)\,\rho(d|m)}{\int \rho(d|m)\,\rho(m)\,dm}$$

m: model, d: data.

11 Bayesian solution to inverse problem The solution of a given inverse problem may constitute the prior information of another inverse problem.

12 Bayesian solution to inverse problem Probabilistic forward problem: deterministic physical law and exact data. (Figure: data space A vs model space B; the law maps model B1 to a single datum A1, so $P(A_1|B_1) = 1$ and $P(A|B_1) = 0$ for A ≠ A1.)

13 Bayesian solution to inverse problem Probabilistic forward problem: fuzzy physical law. (Figure: the law maps model B1 to a spread of data, so $P(A|B_1)$ is a distribution peaked near A1.)

14 Bayesian solution to inverse problem Probabilistic forward problem: deterministic physical law and imprecise data. (Figure: measurement error spreads the predicted datum, so $P(A|B_1)$ is again a distribution around A1.)

15 Remarks The forward problem may be more or less difficult to solve, either analytically or numerically; its resolution often represents a great deal of work. The relationship between data and parameters may be linear:
- For instance, the gravity anomaly is doubled if the density is multiplied by a factor of 2.
The relationship between data and parameters is often non-linear:
- The gravity anomaly is not divided by 2 if the depth of the causative source is multiplied by 2.
The forward problem often has a unique solution:
- A given geological structure produces one gravity anomaly.
A remarkable property of inverse problems is that they generally have multiple acceptable solutions: different geological structures may produce the same, or very comparable, gravity anomalies.

16 Gravity measurements from points δg1, δg2, ... (figure)

17 Example: solve the inverse problem for a single measurement and a single parameter We want to solve:

$$\rho(m|d) = \frac{\rho(m)\,\rho(d|m)}{\rho(d)} = \frac{\rho(m)\,\rho(d|m)}{\int \rho(m)\,\rho(d|m)\,dm}$$

The theoretical relation giving the vertical gravitational field of the (cylindrical) tunnel is:

$$g_z(x) = \frac{2\pi G \rho\, r_t^2\, z_t}{(x - x_t)^2 + z_t^2}$$

We assume here that the measurement is above the tunnel, that we know the radius and the density, and we solve only for the depth.

18 Example With the measurement above the tunnel ($x = x_t$), the relation reduces to $g_z(z_t) = \alpha / z_t$ with $\alpha = 2\pi G \rho\, r_t^2$. In our problem we measure g = … µGal; ρ = 2700 kg/m³ and r_t = 3 m. The Bayes relation to be solved is

$$\rho(z_t|g) = \frac{\rho(z_t)\,\rho(g|z_t)}{\int \rho(z_t)\,\rho(g|z_t)\,dz_t}$$

Let us assume that the measurement g is subject to a Gaussian error with respect to the true value:

$$p(g|g_{true}) \propto e^{-\frac{(g - g_{true})^2}{2\sigma_g^2}}$$

19 Example And that the a priori density on the parameter is uniform for $z_{min} < z_t < z_{max}$:

$$\rho(z_t) = \frac{1}{\Delta z}\,\Pi\!\left(\frac{z_t - z_{moy}}{\Delta z}\right)$$

Then, with the model prediction $g_{true} = \alpha/z_t$, the solution of the inversion is

$$\rho(z_t|g) = \frac{\Pi\!\left(\frac{z_t - z_{moy}}{\Delta z}\right)\, e^{-\frac{(g - \alpha/z_t)^2}{2\sigma_g^2}}}{\int_{z_{min}}^{z_{max}} e^{-\frac{(g - \alpha/z_t)^2}{2\sigma_g^2}}\,dz_t}$$
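A minimal numerical sketch of this posterior on a depth grid (the density, radius, noise level, and true depth below are assumed values, not those of the lecture's practical):

```python
import numpy as np

G_newton = 6.674e-11                 # gravitational constant (SI)
rho, r_t = 2700.0, 3.0               # assumed density contrast and tunnel radius
alpha = 2 * np.pi * G_newton * rho * r_t**2   # g_z(z_t) = alpha / z_t

sigma_g = 0.5e-8                     # assumed data std (m/s^2), ~0.5 microGal
g_obs = alpha / 10.0                 # synthetic datum for a 10 m deep tunnel

z = np.linspace(2.0, 30.0, 2000)                     # depth grid z_t
prior = np.where((z > 5.0) & (z < 25.0), 1.0, 0.0)   # boxcar prior
likelihood = np.exp(-0.5 * ((g_obs - alpha / z) / sigma_g) ** 2)

post = prior * likelihood
post /= post.sum() * (z[1] - z[0])   # normalize: posterior integrates to 1
print(z[np.argmax(post)])            # maximum a posteriori depth, close to 10 m
```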

20 Example

21 Example δg1 Find the posterior probability of the tunnel depth. The prior probability is non-uniform and a single datum (δg1) is used.

22 Example δg1, δg2 Find the posterior probability of the tunnel depth. The prior probability is uniform and two data (δg1 and δg2) are used.

23 Example δg2 Find the posterior probability of the tunnel depth. The prior probability is uniform and a single datum (δg2) is used.

24 Example δg1 Find the posterior probability of both the depth and the horizontal position of the tunnel from a single datum. The prior probability is uniform.

25 Example δg1, δg2 Find the posterior probability of both the depth and the horizontal position of the tunnel from two data. The prior probability is uniform.

26 How to solve a Bayesian inversion?

$$\rho(m|d) = \frac{\rho(m)\,\rho(d|m)}{\int \rho(m)\,\rho(d|m)\,dm}$$

1. Find the maximum likelihood point (i.e. the maximum of the a posteriori probability distribution): $\max_m \rho(m|d)$ ⇔ a cost function ⇔ an optimization problem, the deterministic approach.
2. Estimate the a posteriori expectation (mean value): $\int y\,\rho(y|x)\,dy$.
Deterministic approaches.

27 How to solve a Bayesian inversion?
3. Explore the a posteriori probability density ρ(m|d) by a numerical approach, «stochastic sampling»:
- Metropolis-Hastings (see Richard Hobbs' talk, and the sketch below)
- Gibbs sampling; nested sampling
- simulated annealing; genetic algorithms
Stochastic approaches.
4. Model the a posteriori density by means of an analytical representation: Laplace approximation (Gaussian probability); variational Bayesian methods (see ...).
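A minimal Metropolis-Hastings sketch for the 1-D tunnel-depth posterior of the example above (the constants reuse the assumed values of the grid example; the proposal step size is arbitrary):

```python
import numpy as np

alpha, g_obs, sigma_g = 1.02e-5, 1.02e-6, 0.5e-8   # assumed values (see above)

def log_post(z, z_min=5.0, z_max=25.0):
    if not (z_min < z < z_max):
        return -np.inf                             # boxcar prior: zero outside
    return -0.5 * ((g_obs - alpha / z) / sigma_g) ** 2

def metropolis(log_p, z0, n=20000, step=1.0, seed=1):
    rng = np.random.default_rng(seed)
    z, lp = z0, log_p(z0)
    samples = np.empty(n)
    for i in range(n):
        z_new = z + step * rng.normal()            # symmetric Gaussian proposal
        lp_new = log_p(z_new)
        if np.log(rng.uniform()) < lp_new - lp:    # accept with prob min(1, ratio)
            z, lp = z_new, lp_new
        samples[i] = z
    return samples

samples = metropolis(log_post, z0=15.0)
print(samples.mean(), samples.std())               # posterior mean and spread
```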

28 Deterministic approach (cf. the example solved above) We assume known probability densities for the a priori density on the parameters, ρ(m), and for the forward problem, ρ(d|m), and solve for the maximum likelihood. In practice we mostly consider Gaussian densities:

$$\rho(m) \propto e^{-\frac{1}{2}(m - m_{prior})^T C_M^{-1}(m - m_{prior})}$$
$$\rho(d|m) \propto e^{-\frac{1}{2}(g(m) - d)^T C_d^{-1}(g(m) - d)}$$

29 Deterministic approach The Bayes formula becomes:

$$\rho(m|d) \propto e^{-\frac{1}{2}\left[(g(m) - d)^T C_d^{-1}(g(m) - d) + (m - m_{prior})^T C_M^{-1}(m - m_{prior})\right]}$$

where g(m) is a general non-linear relation between data and model, and m_prior is our a priori hypothesis on the model. C_m and C_d are covariance matrices on the model and the data, e.g.

$$C_m = \begin{pmatrix} \sigma_{m_1}^2 & \cdots & C_{ij} \\ \vdots & \ddots & \vdots \\ C_{ji} & \cdots & \sigma_{m_j}^2 \end{pmatrix}, \quad C_{ij} = C_{ji}$$

30 Deterministic approach, linear case In the linear case, g(m) = G·m, it reduces to:

$$\rho(m|d) = e^{-\frac{1}{2}\left[(Gm - d)^T C_d^{-1}(Gm - d) + (m - m_{prior})^T C_m^{-1}(m - m_{prior})\right]}$$

which can also be written as a function of the maximum-likelihood estimator $\hat{m}$:

$$\rho(m|d) = e^{-\frac{1}{2}(m - \hat{m})^T C_M'^{-1}(m - \hat{m})}$$

where $\hat{m}$ maximizes ρ(m|d), or equivalently minimizes

$$(Gm - d)^T C_d^{-1}(Gm - d) + (m - m_{prior})^T C_m^{-1}(m - m_{prior})$$

and $C_M' = (G^T C_d^{-1} G + C_m^{-1})^{-1}$ is the a posteriori covariance on m.

31 Deterministic approach, linear case And the solution is (model space):

$$\hat{m} = m_{prior} + (G^T C_d^{-1} G + C_m^{-1})^{-1} G^T C_d^{-1}(d - G\,m_{prior})$$

(recall the least-squares solution $\hat{m} = (G^T G)^{-1} G^T d$), or (data space):

$$\hat{m} = m_{prior} + C_m G^T (G\,C_m G^T + C_d)^{-1}(d - G\,m_{prior})$$

which again minimizes $\|Gm - d\|^2_{C_d} + \|m - m_{prior}\|^2_{C_m}$.
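A direct transcription of the model-space formula into Python (a hedged sketch; the tiny test problem at the bottom is an assumption for illustration):

```python
import numpy as np

def bayes_linear(G, d, m_prior, C_d, C_m):
    """Model-space linear Gaussian solution and its posterior covariance:
    m_hat = m_prior + (G^T Cd^-1 G + Cm^-1)^-1 G^T Cd^-1 (d - G m_prior)."""
    Cd_inv = np.linalg.inv(C_d)
    C_post = np.linalg.inv(G.T @ Cd_inv @ G + np.linalg.inv(C_m))  # C_M'
    m_hat = m_prior + C_post @ G.T @ Cd_inv @ (d - G @ m_prior)
    return m_hat, C_post

# Tiny synthetic test: 5 data, 2 parameters
rng = np.random.default_rng(3)
G = rng.normal(size=(5, 2))
m_true = np.array([1.0, 2.0])
d = G @ m_true + 0.05 * rng.normal(size=5)
m_hat, C_post = bayes_linear(G, d, np.zeros(2), 0.05**2 * np.eye(5), np.eye(2))
print(m_hat)   # close to m_true; C_post quantifies the remaining uncertainty
```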

32 Deterministic approach, non-linear case For the non-linear case, we wish to find $\hat{m}$ that minimizes:

$$\|g(m) - d\|^2_{C_d} + \|m - m_{prior}\|^2_{C_m}$$

We are now back to iterative optimization procedures. The solution using a quasi-Newton method, for instance, is (Tarantola, 2005):

$$m_{n+1} = m_n - \mu_n \left(G_n^T C_d^{-1} G_n + C_m^{-1}\right)^{-1}\left(G_n^T C_d^{-1}(d_n - d) + C_m^{-1}(m_n - m_{prior})\right)$$

with $d_n = g(m_n)$ and $G_n$ the Jacobian at $m_n$. Deterministic approach pros and cons:
- easy to program
- flexible for fixing a priori constraints on the model
- can fall into secondary minima
- not suited for large-scale problems
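One hedged sketch of a single quasi-Newton update (g_fun and jac_fun, the forward model and its Jacobian G_n, are placeholder names to be supplied by the user):

```python
import numpy as np

def quasi_newton_step(m_n, d, m_prior, Cd_inv, Cm_inv, g_fun, jac_fun, mu=1.0):
    """One iteration m_n -> m_{n+1} of the quasi-Newton scheme above."""
    G_n = jac_fun(m_n)                                 # Jacobian at m_n
    grad = G_n.T @ Cd_inv @ (g_fun(m_n) - d) + Cm_inv @ (m_n - m_prior)
    H = G_n.T @ Cd_inv @ G_n + Cm_inv                  # approximate Hessian
    return m_n - mu * np.linalg.solve(H, grad)         # damped Newton step
```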

33 More on regularization Back to the least-squares problem d = Gm with solution $\hat{m} = (G^T G)^{-1} G^T d$. If G, and hence $G^T G$, is ill-conditioned, or the system is under-determined, or the solution is non-unique, the above linear system may not be solvable and we need to regularize it, e.g. with an SVD solution, or with damped least squares (Levenberg-Marquardt):

$$\hat{m} = (G^T G + \lambda I)^{-1} G^T d$$

which minimizes $\|Gm - d\|^2 + \lambda\|m\|^2$. Let's compare our Bayesian deterministic solution to the LS solution using regularization techniques (Tikhonov).

34 More on regularization To stabilize inversions, one often uses Tikhonov regularization techniques.
Zero order: minimize $\|Gm - d\|^2 + \lambda\|m\|^2$ ⇔ $\hat{m} = (G^T G + \lambda I)^{-1} G^T d$; favors model solutions with minimum perturbation.
First order: $\|Gm - d\|^2 + \lambda\|Lm\|^2$, L ~ first-derivative operator; favors model solutions with zero gradient (flat solutions).
Second order: $\|Gm - d\|^2 + \lambda\|Lm\|^2$, L ~ second-derivative operator; favors model solutions with minimum roughness (L ~ Laplacian). A sketch of these operators follows.
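A hedged sketch of the three operators on a 1-D model of n parameters, with the regularized problem solved as an augmented least-squares system (λ and the finite-difference stencils are the usual choices, not prescribed by the slides):

```python
import numpy as np

def tikhonov(G, d, L, lam):
    """Minimize ||G m - d||^2 + lam * ||L m||^2 via the augmented system."""
    A = np.vstack([G, np.sqrt(lam) * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

n = 50
L0 = np.eye(n)                         # zero order: minimum perturbation
L1 = np.diff(np.eye(n), 1, axis=0)     # first order: (L1 m)_i = m_{i+1} - m_i
L2 = np.diff(np.eye(n), 2, axis=0)     # second order: discrete Laplacian
```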

35 More on regularization How does Bayesian inversion compare to Tikhonov regularization? One minimizes:

$$\|Gm - d\|^2 + (m - m_{prior})^T C_m^{-1}(m - m_{prior})$$

The operator acting on the model is $C_m^{-1}$, the inverse of the covariance matrix. In 1D, a covariance matrix is strictly equivalent to a smoothing or convolution operator: $C_m s$ ⇔ $f * s$, where s is a signal and f the filter impulse response, e.g. $C_m = e^{-|i-j|/\Delta}$ with Δ the correlation distance.

36 More on regularization $C_m^{-1}$ is thus the reciprocal filter (i.e. a high-pass filter). If we choose a covariance matrix with an exponential shape, $e^{-|i-j|/\Delta}$, its Fourier transform is (k = wavenumber):

$$e^{-|x|/\Delta} \;\Leftrightarrow\; \frac{2\Delta}{1 + k^2\Delta^2}$$

The reciprocal filter (i.e. $C_m^{-1}$) is $\frac{1 + k^2\Delta^2}{2\Delta}$. By comparison, the second-order Tikhonov regularization operator ($\partial^2/\partial x^2$, the Laplacian) is $k^2$. The Bayesian inversion regularization operator is thus a high-pass filter with a characteristic length.
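A quick numerical illustration of this duality (the grid size and correlation length are arbitrary assumptions): applying $C_m$ smooths a white-noise signal over roughly Δ samples, while applying $C_m^{-1}$ roughens it.

```python
import numpy as np

n, Delta = 200, 10.0
i = np.arange(n)
C = np.exp(-np.abs(i[:, None] - i[None, :]) / Delta)   # exponential covariance

s = np.random.default_rng(2).normal(size=n)            # white-noise "signal"
smooth = C @ s                                         # C_m acts as a low-pass filter
rough = np.linalg.solve(C, s)                          # C_m^{-1} acts as a high-pass filter

# Ratio of the increment std to the signal std: small = smooth, large = rough
print(np.std(np.diff(smooth)) / np.std(smooth),
      np.std(np.diff(rough)) / np.std(rough))
```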

37 More on regularization Conclusion:

$$\|Gm - d\|^2_{C_d} + (m - m_{prior})^T C_m^{-1}(m - m_{prior})$$

This regularization minimizes the model variations that are smaller than the correlation length Δ. The covariance on the parameters allows one to perform a scaled inversion to enhance model variations that are larger than Δ ⇔ allows multiscale or multigrid inversion.

38 More on regularization Other regularizations:
Total variation: minimize $\|Gm - d\|^2 + \lambda\|Lm\|_1$ where L = gradient operator ⇒ enhances sharp variations.
Sparsity regularization: minimize $\|Gm - d\|^2 + \lambda\|m\|_1$ ⇒ maximizes the number of null parameters.

39 Resolution operator It yields some insight into how well a parameter is resolved, and whether neighboring solutions are correlated or independent. For the Bayesian, linear case, the resolution operator takes the form:

$$R = I - C_M' C_M^{-1} \quad\text{and}\quad \hat{m} = R\,m_{true}$$

where $C_M'$ is the a posteriori covariance matrix and $C_M^{-1}$ the inverse of the a priori covariance matrix.
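A direct sketch of this operator for the linear Gaussian case, reusing the posterior covariance formula of the linear solution above:

```python
import numpy as np

def resolution_operator(G, C_d, C_m):
    """R = I - C_M' C_M^{-1}, with C_M' the a posteriori covariance."""
    Cm_inv = np.linalg.inv(C_m)
    C_post = np.linalg.inv(G.T @ np.linalg.inv(C_d) @ G + Cm_inv)
    return np.eye(C_m.shape[0]) - C_post @ Cm_inv

# Diagonal terms near 1 flag well-resolved parameters; each row shows how
# m_hat = R m_true smears the corresponding true parameter over its neighbors.
```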

40 Resolution operator (Barnoud et al., 2015) (Figure: (a) row of the resolution matrix for a parameter at 2 km depth; (b) diagonal terms along the horizontal profile A-B for parameters at 2, 4 and 6 km depth; (c) resolution lengths at the nodes of the density grid; (d)-(e) resolution along the vertical profile C-D; X/Z UTM coordinates in km.)

41 Resolution operator (Barnoud et al., 2015) (Same figure as the previous slide.) References: A simple method for determining the spatial resolution of a general inverse problem, Geophysical Journal International, 2012, 191. Trampert, J.; Fichtner, A. & Ritsema, J., Resolution tests revisited: the power of random numbers, Geophysical Journal International, 2013, 192.

42 Joint inversion with a Bayesian deterministic method We now wish to invert simultaneously the slowness s and the density ρ from travel times t and gravity measurements g, assuming that density and slowness are coupled via s = f(ρ), and minimize:

$$\|T(s) - t\|^2_{C_t} + \|G\rho - g\|^2_{C_g} + \|s - s_{prior}\|^2_{C_s} + \|\rho - \rho_{prior}\|^2_{C_\rho} + \|s - f(\rho)\|^2_{C_c}$$

We gather the parameters and the data in the same vectors m and d:

$$\|g(m) - d\|^2_{C_d} + \|m - m_{prior}\|^2_{C_m} + \|s - f(\rho)\|^2_{C_c}$$

data + model + coupling terms (Onizawa et al., 2002; Coutant et al., 2012).

43 Joint inversion (Figure: (a) Sp slowness (s/km) vs density, with points from independent (ρ, Vp) inversions, the Birch (1961) law, Onizawa (2002), our study, and a linear (ρ, Sp) fit to Mt Pelée samples; (b) Vp velocity (km/s) vs density; (c) Sp slowness vs density at all nodes and at the 25% best-resolved nodes, with our study's (ρ, Sp) relation; Coutant et al.) Relation slowness-density: use a linear coupling?

44 Joint inversion: an experiment A joint inversion: seismic and gravity on a volcano (active). Seismic and gravimetric stations. Horizontal travel time: $T = \sum_{nodes} S_i X_i$. Gravity: $g = \sum_{nodes} \rho_i G_i$, with parameter vector (ρ, S). (Figure: two dykes in a medium with Vp = 2000 m/s and ρ = 2700 kg/m³.)

45 Joint inversion: an experiment (Same setup as the previous slide.)

46 Joint inversion: an experiment

47 Joint inversion: an experiment How to solve?

$$\|Gm - d\|^2_{C_d} + \|m - m_{prior}\|^2_{C_m} + \|S - f(\rho)\|^2_{C_\rho}$$

(data + model + coupling). Assume a linear coupling between ρ and S, which can then be absorbed into $C_m$:

$$\|Gm - d\|^2_{C_d} + \|m - m_{prior}\|^2_{C_m}$$

48 Joint inversion: an experiment The linear system d = G m is built as:

$$d = \begin{pmatrix} \text{ns travel times} \\ \text{ns gravity, left slope} \\ \text{ns gravity, right slope} \end{pmatrix}, \quad G = \begin{pmatrix} \partial t/\partial S & 0 \\ 0 & \partial g/\partial\rho \end{pmatrix}, \quad m = \begin{pmatrix} \text{np slowness, dyke 1} \\ \text{np slowness, dyke 2} \\ \text{np density, dyke 1} \\ \text{np density, dyke 2} \end{pmatrix}$$

$$C_d = \mathrm{diag}(\sigma_t^2, \sigma_g^2, \sigma_g^2), \quad C_m = \begin{pmatrix} \sigma_{S,1}^2 & 0 & c & 0 \\ 0 & \sigma_{S,2}^2 & 0 & c \\ c & 0 & \sigma_{\rho,1}^2 & 0 \\ 0 & c & 0 & \sigma_{\rho,2}^2 \end{pmatrix}$$

where c denotes the slowness-density coupling terms, and

$$\hat{m} = m_{prior} + (G^T C_d^{-1} G + C_m^{-1})^{-1} G^T C_d^{-1}(d - G\,m_{prior})$$
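A hedged sketch of assembling this block system in Python (Gt and Gg stand for the travel-time and gravity Jacobians, and c is an assumed slowness-density coupling coefficient; these names are illustrative, not prescribed by the slides):

```python
import numpy as np

def assemble_joint(Gt, Gg, sig_t, sig_g, sig_s, sig_rho, c):
    """Build the joint G, C_d and C_m; assumes slowness and density are
    discretized on the same nodes (Gt.shape[1] == Gg.shape[1])."""
    ns_t, n_par = Gt.shape
    ns_g = Gg.shape[0]
    G = np.block([[Gt, np.zeros((ns_t, n_par))],
                  [np.zeros((ns_g, n_par)), Gg]])
    C_d = np.diag(np.r_[np.full(ns_t, sig_t**2), np.full(ns_g, sig_g**2)])
    # Variances on the diagonal, coupling terms between the two blocks:
    C_m = np.block([[sig_s**2 * np.eye(n_par), c * np.eye(n_par)],
                    [c * np.eye(n_par), sig_rho**2 * np.eye(n_par)]])
    return G, C_d, C_m
```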

49 Joint inversion: an experiment Independent inversion, no coupling

50 Joint inversion: an experiment Independent inversion, no coupling

51 Joint inversion: an experiment Joint inversion: parameters

52 Joint inversion: an experiment Joint inversion: data fitting

53 ftp://ist-ftp.ujf-grenoble.fr/users/coutanto/jig.zip
