Bayesian Methods with Monte Carlo Markov Chains II


1 Bayesian Methods with Monte Carlo Markov Chains II. Henry Horng-Shing Lu, Institute of Statistics, National Chiao Tung University.

2 Part 3: An Example in Genetics

3 Example 1 in Genetics (1). Two linked loci with alleles A and a, and B and b. A, B: dominant; a, b: recessive. A double heterozygote AaBb will produce gametes of four types: AB, Ab, aB, ab. [Diagram: parental chromosomes pairing to form recombinant and non-recombinant gametes.] F (Female): 1 − r', r' (female recombination fraction); M (Male): 1 − r, r (male recombination fraction).

4 Example 1 in Genetics (2). r and r' are the recombination rates for male and female, respectively. Suppose these heterozygotes originate from the mating AABB × aabb. The problem is to estimate r and r' from the offspring of selfed heterozygotes. Fisher, R. A. and Balmukand, B. (1928). The estimation of linkage from the offspring of selfed heterozygotes. Journal of Genetics, 20. ank/handout12.pdf

5 Example 1 in Genetics (3). Gametic combinations from selfing AaBb (male gamete probabilities use r, female gamete probabilities use r'):

FEMALE \ MALE | AB (1-r)/2         | ab (1-r)/2         | aB r/2          | Ab r/2
AB (1-r')/2   | AABB (1-r)(1-r')/4 | AaBb (1-r)(1-r')/4 | AaBB r(1-r')/4  | AABb r(1-r')/4
ab (1-r')/2   | AaBb (1-r)(1-r')/4 | aabb (1-r)(1-r')/4 | aaBb r(1-r')/4  | Aabb r(1-r')/4
aB r'/2       | AaBB (1-r)r'/4     | aaBb (1-r)r'/4     | aaBB r r'/4     | AaBb r r'/4
Ab r'/2       | AABb (1-r)r'/4     | Aabb (1-r)r'/4     | AaBb r r'/4     | AAbb r r'/4

6 Example 1 in Genetics (4). Four distinct phenotypes: A*B*, A*b*, a*B*, and a*b*. A*: the dominant phenotype, from genotypes AA or Aa. a*: the recessive phenotype, from aa. B*: the dominant phenotype, from genotypes BB or Bb. b*: the recessive phenotype, from bb. A*B*: 9 gametic combinations. A*b*: 3 gametic combinations. a*B*: 3 gametic combinations. a*b*: 1 gametic combination. Total: 16 combinations.

7 Example 1 in Genetics (5). Let φ = (1 − r)(1 − r'). Then
P(A*B*) = (2 + φ)/4,
P(A*b*) = P(a*B*) = (1 − φ)/4,
P(a*b*) = φ/4.

8 Example 1 in Genetics (6). Hence, a random sample of n from the offspring of selfed heterozygotes will follow a multinomial distribution:
Multinomial(n; (2 + φ)/4, (1 − φ)/4, (1 − φ)/4, φ/4).
We know that φ = (1 − r)(1 − r'), 0 ≤ r ≤ 1/2, and 0 ≤ r' ≤ 1/2. So 1 ≥ φ ≥ 1/4.

9 Bayesian for Example 1 in Genetics (1). To simplify computation, we let
P(A*B*) = φ1, P(A*b*) = φ2, P(a*B*) = φ3, P(a*b*) = φ4.
The random sample of n from the offspring of selfed heterozygotes will follow a multinomial distribution:
Multinomial(n; φ1, φ2, φ3, φ4).

10 Bayesian for Example 1 in Genetics (2). We assume a Dirichlet prior distribution with parameters D(α1, α2, α3, α4) for the probabilities of A*B*, A*b*, a*B*, and a*b*. Recall that A*B*: 9 gametic combinations; A*b*: 3 gametic combinations; a*B*: 3 gametic combinations; a*b*: 1 gametic combination. We therefore consider (α1, α2, α3, α4) = (9, 3, 3, 1).

11 Bayesian for Example 1 in Genetics (3). Suppose we observe the data y = (y1, y2, y3, y4) = (125, 18, 20, 24). Then the posterior distribution is also Dirichlet, with parameters D(134, 21, 23, 25). The Bayesian estimates (posterior means) of the probabilities are (φ1, φ2, φ3, φ4) = (0.660, 0.103, 0.113, 0.123).
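
This conjugate update is easy to verify in R (the language used for later examples in this talk). The counts and prior are the slides' own; the rdirichlet1 helper is a name made up here, since base R has no Dirichlet sampler:

    alpha <- c(9, 3, 3, 1)          # Dirichlet prior D(9, 3, 3, 1)
    y     <- c(125, 18, 20, 24)     # observed multinomial counts
    post  <- alpha + y              # posterior: D(134, 21, 23, 25)
    post / sum(post)                # posterior means: 0.660 0.103 0.113 0.123

    # Draw from the posterior Dirichlet via independent Gammas
    # (rdirichlet1 is a hypothetical helper, not a base R function)
    rdirichlet1 <- function(a) { g <- rgamma(length(a), shape = a); g / sum(g) }
    rdirichlet1(post)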

12 Bayesian for Example 1 in Genetics (4). Consider the original model. The random sample of n also follows a multinomial distribution:
P(A*B*) = (2 + φ)/4, P(A*b*) = P(a*B*) = (1 − φ)/4, P(a*b*) = φ/4,
(y1, y2, y3, y4) ~ Multinomial(n; (2 + φ)/4, (1 − φ)/4, (1 − φ)/4, φ/4).
We will assume a Beta prior distribution for φ: Beta(ν1, ν2).

13 Bayesian for Example 1 in Genetics (5). The posterior distribution becomes
P(φ | y1, y2, y3, y4) = P(y1, y2, y3, y4 | φ) P(φ) / ∫ P(y1, y2, y3, y4 | φ) P(φ) dφ.
The integral in the denominator, ∫ P(y1, y2, y3, y4 | φ) P(φ) dφ, does not have a closed form.

14 Bayesian for Example 1 in Genetics (6). How do we solve this problem? The Monte Carlo Markov Chains (MCMC) method! And what value is appropriate for (ν1, ν2)?

15 Part 4: Monte Carlo Methods

16 Monte Carlo Methods (1). Consider the game of solitaire: what's the chance of winning with a properly shuffled deck? g/wiki/monte_carlo_method u/local/talks/mcmc_2004_07_01.ppt [Figure: four simulated games — Lose, Lose, Win, Lose.] Chance of winning is 1 in 4!

17 Monte Carlo Methods (2). This is hard to compute analytically because winning or losing depends on a complex procedure of reorganizing cards. Insight: why not just play a few hands and see empirically how many do in fact win? More generally, we can approximate a probability density function using only samples from that density.

18 Monte Carlo Methods (3). Given a very large set X and a distribution f(x) over it, we draw a set of N i.i.d. random samples. We can then approximate the distribution using these samples:
f_N(x) = (1/N) Σ_{i=1}^{N} 1(x^(i) = x) → f(x) as N → ∞.

19 Monte Carlo Methods (4). We can also use these samples to compute expectations:
E_N(g) = (1/N) Σ_{i=1}^{N} g(x^(i)) → E(g) = Σ_x g(x) f(x) as N → ∞,
and even use them to find a maximum:
x̂ = argmax_{x^(i)} f(x^(i)).

20 Monte Carlo Example. Let X_1, ..., X_n be i.i.d. N(0,1); find E(X_1^4). Use the Monte Carlo method to approximate it: (1/n) Σ_{i=1}^{n} X_i^4. The exact solution is
E(X^4) = ∫ x^4 (1/√(2π)) exp(−x²/2) dx = 4!/(2² · 2!) = 3.
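
A quick R check of this example (a minimal sketch; the sample size 10^6 is an arbitrary choice):

    set.seed(1)
    x <- rnorm(1e6)   # 10^6 i.i.d. N(0,1) draws
    mean(x^4)         # Monte Carlo estimate, close to the exact value 3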

21 Part 5: Monte Carlo Method vs. Numerical Integration

22 Numerical Integration (1). Theorem (Riemann integral): If f is continuous and integrable, then the Riemann sum
S_n = Σ_{i=1}^{n} f(c_i)(x_{i+1} − x_i), where c_i ∈ [x_i, x_{i+1}],
satisfies S_n → I = ∫_a^b f(x) dx, a constant, as max_{i=1,...,n} (x_{i+1} − x_i) → 0 (n → ∞).

23 Numerical Integration (2). Trapezoidal rule: with equally spaced intervals
x_i = a + (i − 1)h, h = (b − a)/n = x_{i+1} − x_i, i = 1, ..., n + 1,
S_T = Σ_{i=1}^{n} h [f(x_i) + f(x_{i+1})]/2 = h [f(x_1)/2 + f(x_2) + ⋯ + f(x_n) + f(x_{n+1})/2].

24 Numerical Integration (3). An example of the trapezoidal rule by R:
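
The R screenshot on this slide did not survive transcription. A minimal sketch of such code, assuming a trapezoid function name and sin as a test integrand:

    trapezoid <- function(f, a, b, n) {
      # composite trapezoidal rule on n equal subintervals
      x <- seq(a, b, length.out = n + 1)
      h <- (b - a) / n
      h * (sum(f(x)) - (f(a) + f(b)) / 2)   # interior points weight 1, ends 1/2
    }
    trapezoid(sin, 0, pi, 100)   # approximately 2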

25 Numerical Integration (4). Simpson's rule:
∫_a^b f(x) dx ≈ [(b − a)/6] [f(a) + 4 f((a+b)/2) + f(b)].
One derivation replaces the integrand f(x) by the quadratic polynomial P(x) which takes the same values as f(x) at the end points a and b and the midpoint m = (a + b)/2:
P(x) = f(a) (x − m)(x − b) / [(a − m)(a − b)] + f(m) (x − a)(x − b) / [(m − a)(m − b)] + f(b) (x − a)(x − m) / [(b − a)(b − m)],
∫_a^b P(x) dx = [(b − a)/6] [f(a) + 4 f((a+b)/2) + f(b)].

26 Numerical Integration (5). An example of Simpson's rule by R:
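
The screenshot is again missing; a composite-Simpson sketch in the same spirit (the simpson name and the test integrand are assumptions):

    simpson <- function(f, a, b, n) {
      # composite Simpson's rule; n must be even
      stopifnot(n %% 2 == 0)
      x <- seq(a, b, length.out = n + 1)
      h <- (b - a) / n
      w <- c(1, rep(c(4, 2), length.out = n - 1), 1)  # weights 1,4,2,...,2,4,1
      (h / 3) * sum(w * f(x))
    }
    simpson(sin, 0, pi, 100)   # approximately 2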

27 Numerical Integration (6). Two problems in numerical integration of ∫ f(x) dx:
1. (a, b) = (−∞, ∞): how do we use numerical integration? Logistic transform: y = 1/(1 + e^(−x)).
2. Integration in two or more (high) dimensions? Monte Carlo method!

28 Monte Carlo Integration.
I = ∫ f(x) dx = ∫ [f(x)/g(x)] g(x) dx,
where g(x) is a probability density function; let h(x) = f(x)/g(x). If x_1, ..., x_n ~ g(x), i.i.d., then by the LLN
(1/n) Σ_{i=1}^{n} h(x_i) → E(h(x)) = ∫ h(x) g(x) dx = ∫ f(x) dx = I.

29 Example (1).
∫_0^1 [Γ(18)/(Γ(7)Γ(11))] x^8 (1 − x)^10 dx = ? (It is E(X²) for X ~ Beta(7,11), i.e. 28/171 ≈ 0.1637!)
Numerical integration by the trapezoidal rule:
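
The trapezoidal-rule screenshot is missing. Reusing the trapezoid() sketch from slide 24, and assuming the integrand as reconstructed above:

    f <- function(x) gamma(18) / (gamma(7) * gamma(11)) * x^8 * (1 - x)^10
    trapezoid(f, 0, 1, 1000)   # approximately 0.1637 = 28/171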

30 Example (2). Monte Carlo integration:
∫_0^1 [Γ(18)/(Γ(7)Γ(11))] x^8 (1 − x)^10 dx = ∫_0^1 x² · [Γ(7+11)/(Γ(7)Γ(11))] x^6 (1 − x)^10 dx.
Let x ~ Beta(7, 11); then
∫_0^1 x² · [Γ(7+11)/(Γ(7)Γ(11))] x^6 (1 − x)^10 dx = E(x²),
and by the LLN
(1/n) Σ_{i=1}^{n} x_i² → E(x²) = ∫_0^1 [Γ(18)/(Γ(7)Γ(11))] x^8 (1 − x)^10 dx.

31 Example by C/C++ and R
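
The code screenshots were lost in transcription. The R half of the example plausibly reduces to a few lines like the following sketch (the sample size is an assumption):

    set.seed(1)
    x <- rbeta(1e5, 7, 11)   # i.i.d. Beta(7, 11) draws
    mean(x^2)                # Monte Carlo estimate, approximately 0.1637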

32 Part 6: Markov Chains

33 Markov Chains (1). A Markov chain is a mathematical model for stochastic systems whose states, discrete or continuous, are governed by transition probabilities. Suppose the random variables X_0, X_1, ... take values in a state space Ω that is a countable set. A Markov chain is a process that corresponds to the network X_0 → X_1 → ⋯ → X_{t−1} → X_t → X_{t+1} → ⋯.

34 Markov Chains (2). The current state of a Markov chain depends only on the most recent previous state:
P(X_{t+1} = j | X_t = i, X_{t−1} = i_{t−1}, ..., X_0 = i_0) = P(X_{t+1} = j | X_t = i),
the transition probability, where (i_0, ..., i_{t−1}, i, j) ∈ Ω. orial/lect1_mcmc_intro.pdf

35 An Example of Markov Chains. Ω = {1, 2, 3, 4, 5}, X = (X_0, X_1, ..., X_t, ...), X_t ∈ Ω, where X_0 is the initial state and so on, and P is the transition matrix. [The 5×5 matrix P was not recovered in transcription.]
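
Since the slide's matrix was lost, the following R sketch simulates a 5-state chain with an illustrative stand-in for P (its entries are not the slide's values):

    P <- matrix(c(0.5, 0.5, 0.0, 0.0, 0.0,
                  0.2, 0.3, 0.5, 0.0, 0.0,
                  0.0, 0.2, 0.3, 0.5, 0.0,
                  0.0, 0.0, 0.2, 0.3, 0.5,
                  0.0, 0.0, 0.0, 0.5, 0.5), 5, 5, byrow = TRUE)
    simulate_chain <- function(P, x0, steps) {
      x <- numeric(steps + 1); x[1] <- x0
      for (t in 1:steps) x[t + 1] <- sample(ncol(P), 1, prob = P[x[t], ])
      x
    }
    table(simulate_chain(P, 1, 10000))  # visit counts approach the stationary law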

36 Definition (1). Define the probability of going from state i to state j in n time steps as
p_ij^(n) = P(X_{t+n} = j | X_t = i).
A state j is accessible from state i if there is an n such that p_ij^(n) > 0, where n = 0, 1, .... A state i is said to communicate with state j (denoted i ↔ j) if it is true that both i is accessible from j and j is accessible from i.

37 Definition (2). A state i has period d(i) if any return to state i must occur in multiples of d(i) time steps. Formally, the period of a state is defined as
d(i) = gcd{ n : p_ii^(n) > 0 }.
If d(i) = 1, the state is said to be aperiodic; otherwise (d(i) > 1), the state is said to be periodic with period d(i).

38 Definition (3). A set of states C is a communicating class if every pair of states in C communicates with each other. Every state in a communicating class must have the same period. Example: [state-transition diagram]

39 Definition (4). A finite Markov chain is said to be irreducible if its state space Ω is a communicating class; this means that, in an irreducible Markov chain, it is possible to get to any state from any state. Example: [state-transition diagram]

40 Definition (5). A finite-state irreducible Markov chain is said to be ergodic if its states are aperiodic. Example: [state-transition diagram]

41 Definition (6). A state i is said to be transient if, given that we start in state i, there is a non-zero probability that we will never return to i. Formally, let the random variable T_i be the next return time to state i (the "hitting time"):
T_i = min{ n ≥ 1 : X_n = i | X_0 = i }.
Then state i is transient iff P(T_i < ∞) < 1.

42 Definition (7). A state i is said to be recurrent or persistent iff P(T_i < ∞) = 1. The mean recurrence time is μ_i = E[T_i]. State i is positive recurrent if μ_i is finite; otherwise, state i is null recurrent. A state i is said to be ergodic if it is aperiodic and positive recurrent. If all states in a Markov chain are ergodic, then the chain is said to be ergodic.

43 Stationary Distributions. Theorem: If a Markov chain is irreducible and aperiodic, then
P_ij^(n) → 1/μ_j as n → ∞, for all i, j ∈ Ω.
Theorem: If a Markov chain is irreducible and aperiodic, then there exists a unique
π_j = lim_{n→∞} P(X_n = j), with π_j = Σ_{i∈Ω} π_i P_ij and Σ_{j∈Ω} π_j = 1,
where π is the stationary distribution.

44 Definition (8). A Markov chain is said to be reversible if there is a stationary distribution π such that
π_i P_ij = π_j P_ji for all i, j ∈ Ω.
Theorem: if a Markov chain is reversible, then
π_j = Σ_{i∈Ω} π_i P_ij.

45 An Example of Stationary Distributions. A Markov chain with transition matrix P: [matrix not recovered in transcription]. The stationary distribution is π = [values not recovered].
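
Since the slide's numbers are gone, here is a self-contained R sketch that finds a stationary distribution as the left eigenvector of P for eigenvalue 1; the 2-state P is illustrative, not the slide's chain:

    P <- matrix(c(0.9, 0.1,
                  0.4, 0.6), 2, 2, byrow = TRUE)      # illustrative chain
    e <- eigen(t(P))                                  # left eigenvectors of P
    v <- Re(e$vectors[, which.min(abs(e$values - 1))])
    pi_stat <- v / sum(v)
    pi_stat              # c(0.8, 0.2); check: pi_stat %*% P equals pi_stat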

46 Properties of Stationary Distributions. Regardless of the starting point, an irreducible and aperiodic Markov chain will converge to its stationary distribution. The rate of convergence depends on properties of the transition probability.

47 Part 7: Monte Carlo Markov Chains

48 Applications of MCMC. Simulation, e.g.:
(x, y) ~ f(x, y) = c C(n, x) y^(x+α−1) (1 − y)^(n−x+β−1),
where C(n, x) is the binomial coefficient, x = 0, 1, 2, ..., n, 0 ≤ y ≤ 1, and α, β are known. Integration: computing in high dimensions, e.g.:
E(g(y)) = ∫_0^1 g(y) f(y) dy.
Bayesian inference, e.g.: posterior distributions, posterior means.

49 Monte Carlo Markov Chains. MCMC methods are a class of algorithms for sampling from probability distributions, based on constructing a Markov chain that has the desired distribution as its stationary distribution. The state of the chain after a large number of steps is then used as a sample from the desired distribution.

50 Inversion Method vs. MCMC (1). Inverse transform sampling, also known as the probability integral transform, is a method of sampling a number at random from any probability distribution given its cumulative distribution function (cdf). form_sampling_method

51 Inversion Method vs. MCMC (2). If a random variable has cdf F, then F of that variable has a uniform distribution on [0, 1]. The inverse transform sampling method works as follows: 1. Generate a random number from the standard uniform distribution; call this u. 2. Compute the value x such that F(x) = u; call this x_chosen. 3. Take x_chosen to be the random number drawn from the distribution described by F.
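
A minimal R illustration of these three steps for the exponential distribution, whose cdf F(x) = 1 − e^(−rate·x) inverts in closed form (the rate value 2 is an arbitrary choice):

    set.seed(1)
    u <- runif(1e5)         # step 1: standard uniform draws
    x <- -log(1 - u) / 2    # step 2: x = F^{-1}(u) for Exponential(rate = 2)
    mean(x)                 # step 3: x are Exponential(2) draws; mean ~ 1/2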

52 Inversion Method vs. MCMC (3). For a one-dimensional random variable, the inversion method works well, but for two or more dimensions it may not. In higher dimensions, the marginal distributions that the inversion method requires can be difficult and time-consuming to calculate.

53 Gibbs Sampling. One kind of MCMC method. The point of Gibbs sampling is that, given a multivariate distribution, it is simpler to sample from a conditional distribution than to integrate over a joint distribution. George Casella and Edward I. George. "Explaining the Gibbs sampler". The American Statistician, 46: 167-174, 1992. (Basic summary and many references.)

54 Example 1 (1). To sample x from
(x, y) ~ f(x, y) = c C(n, x) y^(x+α−1) (1 − y)^(n−x+β−1),
where x = 0, 1, 2, ..., n, 0 ≤ y ≤ 1, n, α, β are known, and c is a constant. One can see that
f(x | y) = f(x, y)/f(y) = C(n, x) y^x (1 − y)^(n−x) ~ Binomial(n, y),
f(y | x) = f(x, y)/f(x) ∝ y^(x+α−1) (1 − y)^(n−x+β−1) ~ Beta(x + α, n − x + β).

55 Example 1 (2). Gibbs sampling algorithm. Initial setting: y_0 ~ Uniform[0, 1] or an arbitrary value in [0, 1]; x_0 ~ Bin(n, y_0). For t = 0, ..., N, sample a value (x_{t+1}, y_{t+1}) from
y_{t+1} ~ Beta(x_t + α, n − x_t + β),
x_{t+1} ~ Bin(n, y_{t+1}).
Return (x_N, y_N).

56 Example 1 (3). Under regular conditions, how many steps are needed for convergence? Run until the change is within an acceptable error, such as
|(1/10) Σ_{t=i+1}^{i+10} x_t − (1/10) Σ_{t=i+11}^{i+20} x_t| < 0.001,
and similarly for y_t; or until N is large enough.

57 Example 1 (4). Inversion method: x has a Beta-Binomial distribution,
x ~ f(x) = ∫_0^1 f(x, y) dy = C(n, x) [Γ(α+β)/(Γ(α)Γ(β))] [Γ(x+α) Γ(n−x+β)/Γ(n+α+β)].
The cdf of x is F(x), which is used for inverse transform sampling:
F(x) = Σ_{i=0}^{x} C(n, i) [Γ(α+β)/(Γ(α)Γ(β))] [Γ(i+α) Γ(n−i+β)/Γ(n+α+β)].

58 Gibbs sampling by R (1)
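
The R screenshots on slides 58-60 did not survive transcription. A minimal sketch of the Gibbs sampler of slide 55 (function and variable names are made up here; the setting n = 16, α = 5, β = 7 matches Figure 1):

    gibbs <- function(n, alpha, beta, iters = 1000) {
      y <- runif(1)             # initial y ~ Uniform[0, 1]
      x <- rbinom(1, n, y)      # initial x ~ Bin(n, y)
      for (t in 1:iters) {      # alternate the two full conditionals
        y <- rbeta(1, x + alpha, n - x + beta)
        x <- rbinom(1, n, y)
      }
      c(x = x, y = y)           # return the final state as one draw
    }
    set.seed(1)
    samples <- t(replicate(1000, gibbs(16, 5, 7)))
    hist(samples[, "x"])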

59 Gibbs sampling by R (2): Inversion method

60 Gibbs sampling by R (3)

61 Inversion Method by R
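
The inversion-method screenshot is also missing. A sketch that builds the Beta-Binomial pmf and cdf from slide 57 and inverts the cdf (the names dbetabinom and rbetabinom are assumptions, not base R functions):

    dbetabinom <- function(x, n, a, b)   # Beta-Binomial pmf, slide 57
      choose(n, x) * beta(x + a, n - x + b) / beta(a, b)
    rbetabinom <- function(m, n, a, b) {
      cdf <- cumsum(dbetabinom(0:n, n, a, b))
      # smallest x with F(x) >= u, i.e. inverse transform sampling
      sapply(runif(m), function(u) sum(u > cdf))
    }
    set.seed(1)
    hist(rbetabinom(1000, 16, 5, 7))   # same setting as Figure 1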

62 Gibbs sampling by C/C++ (1)

63 Gibbs sampling by C/C++ (2): Inversion method

64 Gibbs sampling by C/C++ (3)

65 Gibbs sampling by C/C++ (4)

66 Gibbs sampling by C/C++ (5)

67 Inversion Method by C/C++ (1)

68 Inversion Method by C/C++ (2)

69 Plot Histograms by Maple (1). Figure 1: 1000 samples with n = 16, α = 5, and β = 7. Blue: inversion method; red: Gibbs sampling.

70 Plot Histograms by Maple (2)

71 Probability Histograms by Maple (1). Figure 2: The blue histogram and yellow line are the pmf of x. The red histogram is
P̂(X = x) = (1/m) Σ_{i=1}^{m} P(X = x | Y_i = y_i)
from Gibbs sampling.

72 Probability Histograms by Maple (2). The probability histogram of the blue histogram of Figure 1 would be similar to the blue probability histogram of Figure 2 as the sample size tends to infinity. The probability histogram of the red histogram of Figure 1 would be similar to the red probability histogram of Figure 2 as the number of iterations N tends to infinity.

73 Probability Histograms by Maple (3)

74 Exercises. Write your own programs similar to the examples presented in this talk, including Example 1 in Genetics and other examples. Write programs for the examples mentioned at the reference web pages. Write programs for other examples that you know.
