Course on Inverse Problems


Stanford University, School of Earth Sciences
Course on Inverse Problems, Albert Tarantola
Third Lesson: Probability (Elementary Notions)

Let u and v be two Cartesian parameters (then, volumetric probabilities and probability densities are identical). Given a probability density f(u, v), one defines the two marginal probability densities

$$ f_u(u) = \int dv \, f(u,v) \; , \qquad f_v(v) = \int du \, f(u,v) $$

and the two conditional probability densities

$$ f_{u|v}(u \mid v = v') = \frac{f(u, v')}{\int du \, f(u, v')} \; , \qquad f_{v|u}(v \mid u = u') = \frac{f(u', v)}{\int dv \, f(u', v)} \; . $$
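These definitions can be checked numerically. The sketch below (an assumed illustration, not part of the lecture) discretizes an example joint density on a grid and forms the two marginals and one conditional by summation, mimicking the integrals above:

```python
import numpy as np

u = np.linspace(-10.0, 10.0, 201)
v = np.linspace(-10.0, 10.0, 201)
du, dv = u[1] - u[0], v[1] - v[0]
U, V = np.meshgrid(u, v, indexing="ij")

# Example joint density (assumed): a correlated two-dimensional Gaussian shape.
f = np.exp(-0.5 * (U**2 - 1.2 * U * V + V**2) / 0.64)
f /= f.sum() * du * dv                         # normalize: ∫∫ f du dv = 1

f_u = f.sum(axis=1) * dv                       # f_u(u) = ∫ dv f(u, v)
f_v = f.sum(axis=0) * du                       # f_v(v) = ∫ du f(u, v)

j = 120                                        # condition on the value v' = v[j]
f_u_given_v = f[:, j] / (f[:, j].sum() * du)   # f_{u|v}(u | v') = f(u, v') / ∫ du f(u, v')
```

Both the marginal and the conditional come out normalized to unit integral, as the definitions require.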

[Figure: a joint probability density, its two marginal densities, and several conditional densities.]


One has

$$ f_{u|v}(u \mid v) = \frac{f(u,v)}{f_v(v)} \; , \qquad f_{v|u}(v \mid u) = \frac{f(u,v)}{f_u(u)} \; , $$

from where (a joint distribution can be expressed as a conditional distribution times a marginal distribution)

$$ f(u,v) = f_{u|v}(u \mid v) \, f_v(v) = f_{v|u}(v \mid u) \, f_u(u) \; , $$

and from where (Bayes' theorem)

$$ f_{u|v}(u \mid v) = \frac{f_{v|u}(v \mid u) \, f_u(u)}{f_v(v)} \; . $$
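The Bayes identity can be verified pointwise on a discretized example (the density chosen here is an arbitrary illustration): the left-hand conditional should equal the right-hand combination of the other conditional and the two marginals.

```python
import numpy as np

u = np.linspace(-5.0, 5.0, 101)
v = np.linspace(-5.0, 5.0, 101)
du, dv = u[1] - u[0], v[1] - v[0]
U, V = np.meshgrid(u, v, indexing="ij")

# Any positive, integrable example density works here (assumed for illustration).
f = np.exp(-0.5 * (U**2 + U * V + V**2))
f /= f.sum() * du * dv

f_u = f.sum(axis=1) * dv                       # marginal in u
f_v = f.sum(axis=0) * du                       # marginal in v

f_u_given_v = f / f_v[None, :]                 # f_{u|v}(u|v) = f(u,v) / f_v(v)
f_v_given_u = f / f_u[:, None]                 # f_{v|u}(v|u) = f(u,v) / f_u(u)

# Bayes' theorem: f_{u|v} = f_{v|u} f_u / f_v.
bayes_rhs = f_v_given_u * f_u[:, None] / f_v[None, :]
```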

Recall: $f(u,v) = f_{u|v}(u \mid v) \, f_v(v) = f_{v|u}(v \mid u) \, f_u(u)$. The two quantities u and v are said to have independent uncertainties if, in fact,

$$ f(u,v) = f_u(u) \, f_v(v) $$

(the joint distribution equals the product of the two marginal distributions). This implies (and is implied by)

$$ f_{u|v}(u \mid v) = f_u(u) \; , \qquad f_{v|u}(v \mid u) = f_v(v) \; . $$
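For a product density the conditionals indeed coincide with the marginals. A minimal sketch, with two assumed example marginals (one Gaussian-shaped, one Laplacian-shaped):

```python
import numpy as np

u = np.linspace(-5.0, 5.0, 101)
v = np.linspace(-5.0, 5.0, 101)
du, dv = u[1] - u[0], v[1] - v[0]

f_u = np.exp(-0.5 * u**2); f_u /= f_u.sum() * du   # example marginal in u
f_v = np.exp(-np.abs(v));  f_v /= f_v.sum() * dv   # example marginal in v

f = np.outer(f_u, f_v)          # joint = product of the marginals (independence)

# Conditioning on any fixed v leaves the marginal in u unchanged:
# f_{u|v}(u | v_j) = f(u, v_j) / f_v(v_j) = f_u(u).
conditionals = [f[:, j] / f_v[j] for j in (10, 50, 90)]
```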

[Figure: two quantities with independent uncertainties (the joint distribution is the product of the two marginal distributions).]

Let u and v be two Cartesian parameters (then, volumetric probabilities and probability densities are identical). Let f(u, v) be a probability density that is not qualitatively different from a two-dimensional Gaussian. The mean values are

$$ \bar u = \int du \int dv \; u \, f(u,v) \; , \qquad \bar v = \int du \int dv \; v \, f(u,v) \; , $$

the variances are

$$ c_{uu} = \sigma_u^2 = \int du \int dv \; (u - \bar u)^2 \, f(u,v) \; , \qquad c_{vv} = \sigma_v^2 = \int du \int dv \; (v - \bar v)^2 \, f(u,v) \; , $$

and the covariance is

$$ c_{uv} = \int du \int dv \; (u - \bar u)(v - \bar v) \, f(u,v) \; . $$
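These integrals can be approximated by grid quadrature. In the sketch below the example density is a Gaussian whose true covariance matrix is $\frac{4}{3}\begin{pmatrix}1 & 1/2 \\ 1/2 & 1\end{pmatrix}$ (the density and grid are assumed for illustration):

```python
import numpy as np

u = np.linspace(-8.0, 8.0, 161)
v = np.linspace(-8.0, 8.0, 161)
du, dv = u[1] - u[0], v[1] - v[0]
U, V = np.meshgrid(u, v, indexing="ij")

f = np.exp(-0.5 * (U**2 - U * V + V**2))       # example Gaussian density
f /= f.sum() * du * dv

w = f * du * dv                                 # quadrature weights
u_bar = (U * w).sum()                           # ū = ∫∫ du dv u f(u, v)
v_bar = (V * w).sum()                           # v̄
c_uu = ((U - u_bar) ** 2 * w).sum()             # σ_u²
c_vv = ((V - v_bar) ** 2 * w).sum()             # σ_v²
c_uv = ((U - u_bar) * (V - v_bar) * w).sum()    # covariance c_uv
```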

The covariance matrix is

$$ C = \begin{pmatrix} c_{uu} & c_{uv} \\ c_{vu} & c_{vv} \end{pmatrix} = \begin{pmatrix} \sigma_u^2 & c_{uv} \\ c_{vu} & \sigma_v^2 \end{pmatrix} \; . $$

It is symmetric and positive definite (or, at least, non-negative definite). Note: the correlation, defined as

$$ \rho_{uv} = \frac{c_{uv}}{\sigma_u \sigma_v} = \frac{c_{uv}}{\sqrt{c_{uu} \, c_{vv}}} \; , $$

has the property $-1 \le \rho_{uv} \le +1$.

The general form of a covariance matrix is

$$ C = \begin{pmatrix} c_{11} & c_{12} & c_{13} & \cdots \\ c_{21} & c_{22} & c_{23} & \cdots \\ c_{31} & c_{32} & c_{33} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} = \begin{pmatrix} \sigma_1^2 & c_{12} & c_{13} & \cdots \\ c_{21} & \sigma_2^2 & c_{23} & \cdots \\ c_{31} & c_{32} & \sigma_3^2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \; . $$

The quantities with immediate interpretation are the standard deviations $\{ \sigma_1, \sigma_2, \sigma_3, \dots \}$ and the correlation matrix

$$ R = \begin{pmatrix} 1 & \rho_{12} & \rho_{13} & \cdots \\ \rho_{21} & 1 & \rho_{23} & \cdots \\ \rho_{31} & \rho_{32} & 1 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \; . $$
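Passing from a covariance matrix to its standard deviations and correlation matrix is a one-line computation. The numerical values of C below are assumed purely for illustration:

```python
import numpy as np

# Illustrative 3x3 covariance matrix (symmetric, positive definite; values assumed).
C = np.array([[ 4.0, 1.2, -0.8],
              [ 1.2, 9.0,  0.6],
              [-0.8, 0.6,  1.0]])

sigma = np.sqrt(np.diag(C))            # standard deviations {σ_1, σ_2, σ_3}
R = C / np.outer(sigma, sigma)         # correlation matrix, ρ_ij = c_ij / (σ_i σ_j)
```

For a valid covariance matrix, R has unit diagonal and all off-diagonal entries in [-1, +1].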

The multidimensional Gaussian distribution is defined as

$$ f(x_1, x_2, \dots, x_n) \equiv f(\mathbf{x}) = k \, \exp\!\left( -\tfrac{1}{2} \, (\mathbf{x} - \bar{\mathbf{x}})^t \, C^{-1} \, (\mathbf{x} - \bar{\mathbf{x}}) \right) \; . $$

Its mean is $\bar{\mathbf{x}}$ and its covariance is $C$ (not obvious!).
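The "not obvious" claim can be checked empirically: draw many samples from a Gaussian with given parameters and verify that their sample mean and sample covariance approach $\bar{\mathbf{x}}$ and $C$. The mean vector and covariance matrix below are assumed example values:

```python
import numpy as np

rng = np.random.default_rng(0)
x_bar = np.array([1.0, -2.0])          # assumed mean vector x̄
C = np.array([[2.0, 0.6],
              [0.6, 1.0]])             # assumed covariance matrix C

# Draw samples from the Gaussian density k exp(-(x - x̄)ᵗ C⁻¹ (x - x̄) / 2).
samples = rng.multivariate_normal(x_bar, C, size=200_000)

mean_est = samples.mean(axis=0)        # should approach x̄
cov_est = np.cov(samples.T)            # should approach C
```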