Conditional Distributions

The goal is to provide a general definition of the conditional distribution of $Y$ given $X$, when $(X, Y)$ are jointly distributed. Let $F$ be a distribution function on $\mathbb{R}$. Let $G(\cdot, \cdot)$ be a map from $\mathbb{R} \times \mathcal{B}_{\mathbb{R}}$ to $[0, 1]$ satisfying:

(a) $G(x, \cdot)$ is a probability measure on $\mathcal{B}_{\mathbb{R}}$ for every $x$ in $\mathbb{R}$, and

(b) $G(\cdot, A)$ is a measurable function for every Borel set $A$.

We can then form the generalized product $F \otimes G$ in the following sense: there exists a measure $H$ on $\mathcal{B}_{\mathbb{R}^2}$, which we call $F \otimes G$, such that
\[
(F \otimes G)\big[(-\infty, x_0] \times (-\infty, y_0]\big] = \int_{-\infty}^{x_0} G(x, (-\infty, y_0])\, dF(x).
\]
More generally, for Borel subsets $A, B$, we should have:
\[
(F \otimes G)(A \times B) = \int_A G(x, B)\, dF(x).
\]
If $(X, Y)$ has distribution $F \otimes G$, then the marginal of $X$ is $F$ and the conditional of $Y$ given $X = x$ is $G(x, \cdot)$. From $F \otimes G$, we can recover the marginal distribution of $Y$, say $\tilde{F}$, and the conditional of $X$ given $Y = y$, say $\tilde{G}(y, \cdot)$, where $\tilde{G}$ has the same properties as $G$ and $\tilde{F} \otimes \tilde{G}$ gives the distribution of $(Y, X)$. Note that:
\[
\tilde{F}(y_0) = P(X < \infty, Y \le y_0) = \int_{-\infty}^{\infty} G(x, (-\infty, y_0])\, dF(x).
\]
When $G$ does not depend on its first co-ordinate, i.e. $G(x, \cdot)$ is the same measure for all $x$, $X$ and $Y$ are independent, and $G(\cdot) \equiv G(x, \cdot)$ gives the marginal distribution of $Y$.

Example 1: Suppose that $F$ is $U(0, 1)$, and that $G$ is defined by: $G(x, \{1\}) = x$ and $G(x, \{0\}) = 1 - x$.

We seek to find $F \otimes G$, which is defined on $([0,1] \times \{0,1\},\ \mathcal{B}_{[0,1]} \otimes 2^{\{0,1\}})$. Let $(X, Y)$ follow $F \otimes G$. Now,
\[
P(X \le x, Y = 1) = \int_0^x G(u, \{1\})\, dF(u) = \int_0^x u\, du = \frac{x^2}{2}. \tag{0.1}
\]
Similarly,
\[
P(X \le x, Y = 0) = x - \frac{x^2}{2}. \tag{0.2}
\]
So $P(Y = 1) = 1/2$; therefore $Y \sim \mathrm{Ber}(1/2)$. It is also clear from the above discussion that given $X = x$, $Y \sim \mathrm{Ber}(x)$. This can also be verified through the limiting definition of conditional probabilities that was discussed before:
\[
P(Y = 1 \mid X = x) = \lim_{h \to 0} P(Y = 1 \mid X \in [x, x+h]) = \lim_{h \to 0} \frac{P(Y = 1, X \in [x, x+h])}{P(X \in [x, x+h])} = \lim_{h \to 0} \frac{\int_x^{x+h} u\, du}{h} = x.
\]
Next, we seek to find $H(y, \cdot)$, the conditional of $X$ given $Y = y$. We could do this via the definition of conditional probabilities when conditioning on a discrete random variable, but let's try the more formal recipe. We have:
\[
P(Y = 1, X \le x) = \int_{\{1\}} H(y, (-\infty, x])\, d\tilde{F}(y) = H(1, (-\infty, x]) \cdot \frac{1}{2},
\]
and
\[
P(Y = 0, X \le x) = \int_{\{0\}} H(y, (-\infty, x])\, d\tilde{F}(y) = H(0, (-\infty, x]) \cdot \frac{1}{2},
\]
and using (0.1) and (0.2), we get:
\[
H(1, (-\infty, x]) = x^2 \quad \text{and} \quad H(0, (-\infty, x]) = 2x - x^2.
\]
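The computations in Example 1 are easy to sanity-check by simulation. Below is a minimal Monte Carlo sketch (Python with NumPy; the variable names and test point are our own, not part of the notes): draw $X \sim U(0,1)$, then $Y \mid X = x \sim \mathrm{Ber}(x)$, and compare the empirical conditional CDFs of $X$ given $Y$ against $x^2$ and $2x - x^2$.

```python
import numpy as np

# Monte Carlo sketch of Example 1 (names are ours, not the notes'):
# draw X ~ U(0,1), then Y | X = x ~ Ber(x).
rng = np.random.default_rng(0)
n = 200_000
X = rng.uniform(0.0, 1.0, size=n)
Y = (rng.uniform(0.0, 1.0, size=n) < X).astype(int)   # P(Y = 1 | X = x) = x

# Marginal of Y: P(Y = 1) should be close to 1/2.
print("P(Y=1) ~", Y.mean())

# Conditional of X given Y: H(1, (-inf, x]) = x^2, H(0, (-inf, x]) = 2x - x^2.
x = 0.7
print("P(X<=0.7 | Y=1) ~", (X[Y == 1] <= x).mean(), "vs x^2 =", x**2)
print("P(X<=0.7 | Y=0) ~", (X[Y == 0] <= x).mean(), "vs 2x-x^2 =", 2*x - x**2)
```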

Example 2: Let $X$ be a continuous random variable with density
\[
f(x) = \frac{2}{3} e^{-x}\, 1(x > 0) + \frac{1}{3} e^{x}\, 1(x < 0).
\]
Note that the distribution function of $X$ is:
\[
F(x) = P(X \le x) = \frac{1}{3} e^{x}\, 1(x \le 0) + \left( \frac{1}{3} + \frac{2}{3}\,(1 - e^{-x}) \right) 1(x > 0).
\]
We first find the conditional distribution of $Y = \mathrm{sign}(X)$ given $|X|$, i.e. $P(Y = 1 \mid |X| = x)$ for $x > 0$. So, it suffices to get a transition function $G(t, a)$, $a \in \{-1, 1\}$, $t > 0$, such that:
\[
P(|X| \le x, Y = 1) = \int_0^x G(t, 1)\, dF_{|X|}(t), \tag{0.3}
\]
and
\[
P(|X| \le x, Y = -1) = \int_0^x G(t, -1)\, dF_{|X|}(t).
\]
Now,
\[
P(|X| \le x, Y = 1) = P(0 < X \le x) = \frac{2}{3}(1 - e^{-x}),
\]
and
\[
P(|X| \le x, Y = -1) = P(-x \le X < 0) = \frac{1}{3}(1 - e^{-x}).
\]
So $P(|X| \le x) = 1 - e^{-x}$. Now, by (0.3),
\[
\frac{2}{3}(1 - e^{-x}) = \int_0^x G(t, 1)\, d(1 - e^{-t}) = \int_0^x G(t, 1)\, e^{-t}\, dt,
\]
showing that $G(x, 1) = 2/3$. Similarly, $G(x, -1) = 1/3$.

Note that the distribution corresponding to $f$ can be generated by the following stochastic mechanism: let $V$ follow $\mathrm{Exp}(1)$ and let $B$ be a $\{-1, 1\}$-valued random variable independent of $V$, with $p_B(1) = 2/3 = 1 - p_B(-1)$, and let $X = V\, 1\{B = 1\} - V\, 1\{B = -1\}$. Then $V$ is precisely $|X|$ and $X \sim f$. Note that the sign of $X$ is precisely $B$ and it is independent of $V = |X|$ by the mechanism itself. So the conditional of $\mathrm{sign}(X)$ given $|X|$ is simply the unconditional distribution of $B$, and we obtain the same result as with the formal derivation.

Next, consider the distribution of $X$ given $Y$. Note that $P(Y = -1) = 1/3$ and $P(Y = 1) = 2/3$. We have:
\[
P(Y = -1, X \le x) = H(-1, (-\infty, x]) \cdot \frac{1}{3},
\]
so
\[
H(-1, (-\infty, x]) = 3\, P(Y = -1, X \le x) = 3 \left[ \frac{1}{3}\, 1(x > 0) + P(X \le x)\, 1(x \le 0) \right] = e^{x}\, 1(x \le 0) + 1(x > 0).
\]
Similarly, we compute $H(1, (-\infty, x])$.
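The stochastic mechanism described above also gives a direct numerical check on both conditional computations. A minimal sketch, assuming NumPy and our own choice of test points: generate $X = BV$ with $V \sim \mathrm{Exp}(1)$ and $P(B = 1) = 2/3$, verify that the conditional probability of a positive sign is $2/3$ on events defined through $|X|$, and compare the empirical conditional CDF of $X$ given $Y = -1$ with $e^x$ for $x \le 0$.

```python
import numpy as np

# Sketch of the stochastic mechanism in Example 2 (names are ours):
# V ~ Exp(1), B in {-1, 1} independent of V with P(B = 1) = 2/3, X = B * V.
rng = np.random.default_rng(1)
n = 300_000
V = rng.exponential(1.0, size=n)
B = np.where(rng.uniform(size=n) < 2/3, 1, -1)
X = B * V                     # X has density f from Example 2; |X| = V

# sign(X) should be independent of |X|: P(Y = 1 | event in |X|) = 2/3.
print("P(Y=1 | |X| <= 1) ~", (B[V <= 1.0] == 1).mean())
print("P(Y=1 | |X| >  1) ~", (B[V > 1.0] == 1).mean())

# Conditional of X given Y = -1: H(-1, (-inf, x]) = e^x for x <= 0.
x = -0.5
print("P(X <= -0.5 | Y=-1) ~", (X[B == -1] <= x).mean(), "vs e^x =", np.exp(x))
```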

0.1 Order statistics and conditional distributions

Let $X_1, X_2, \ldots, X_n$ be i.i.d. from a distribution $F$ with Lebesgue density $f$. Let $X_{(1)}, X_{(2)}, \ldots, X_{(n)}$ be the corresponding order statistics. Note that the order statistics are all distinct with probability 1 and $P((X_{(1)}, X_{(2)}, \ldots, X_{(n)}) \in B) = 1$, where $B = \{(x_1, x_2, \ldots, x_n) : x_1 < x_2 < \ldots < x_n\}$.

Let's first find the joint density of the order statistics. Let $\Pi$ be the set of all permutations of the numbers 1 through $n$. For a measurable subset of $B$, say $A$, we have:
\[
P((X_{(1)}, X_{(2)}, \ldots, X_{(n)}) \in A) = P\Big( \bigcup_{\pi \in \Pi} \{(X_{\pi_1}, X_{\pi_2}, \ldots, X_{\pi_n}) \in A\} \Big) = \sum_{\pi \in \Pi} P\big( (X_{\pi_1}, X_{\pi_2}, \ldots, X_{\pi_n}) \in A \big)
\]
\[
= n!\, P((X_1, X_2, \ldots, X_n) \in A) = n! \int_A \prod_{i=1}^n f(x_i)\, dx_1\, dx_2 \ldots dx_n.
\]
This shows that:
\[
f_{\mathrm{ord}}(x_1, x_2, \ldots, x_n) = n! \prod_{i=1}^n f(x_i), \quad (x_1, x_2, \ldots, x_n) \in B.
\]
Remark: If we assumed that the $X_i$'s were not independent but came from an exchangeable distribution with density $f(x_1, x_2, \ldots, x_n)$, i.e. the distribution of the $X_i$'s is invariant under permutations of the $X_i$'s, then $f$ is necessarily symmetric in its arguments and an argument similar to the one above would show that
\[
f_{\mathrm{ord}}(x_1, x_2, \ldots, x_n) = n!\, f(x_1, x_2, \ldots, x_n), \quad (x_1, x_2, \ldots, x_n) \in B.
\]
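The $n!$ factor can itself be checked by simulation. Here is a minimal sketch with our own choices ($n = 3$ and $F = \mathrm{Exp}(1)$; not part of the notes): for $A = \{x_1 < x_2 < x_3 \le c\} \subset B$, the identity $P((X_{(1)}, X_{(2)}, X_{(3)}) \in A) = 3!\, P((X_1, X_2, X_3) \in A)$ reduces to $F(c)^3 = 3!\, P(X_1 < X_2 < X_3 \le c)$.

```python
import numpy as np

# Monte Carlo check of f_ord = n! * prod f, with n = 3 and Exp(1) samples.
# For A = {x1 < x2 < x3 <= c}: P(order stats in A) = P(all three <= c) = F(c)^3,
# which should match 3! * P((X1, X2, X3) in A).
rng = np.random.default_rng(2)
reps = 500_000
X = rng.exponential(1.0, size=(reps, 3))
c = 1.5
F_c = 1.0 - np.exp(-c)                      # Exp(1) CDF at c
in_A = (X[:, 0] < X[:, 1]) & (X[:, 1] < X[:, 2]) & (X[:, 2] <= c)
print("F(c)^3 =", F_c**3, " vs  3! * P(X1<X2<X3<=c) ~", 6 * in_A.mean())
```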

Now, consider the situation where the distribution of $(X_1, X_2, \ldots, X_n)$ is exchangeable. We seek to find:
\[
P((X_1, X_2, \ldots, X_n) = (x_{\pi_1}, x_{\pi_2}, \ldots, x_{\pi_n}) \mid X_{(1)} = x_1, X_{(2)} = x_2, \ldots, X_{(n)} = x_n)
\]
for some permutation $\pi$. Let $\tau$ be an arbitrary permutation. Note that $(Y_1, Y_2, \ldots, Y_n) \equiv (X_{\tau_1}, X_{\tau_2}, \ldots, X_{\tau_n})$ has the same distribution as $(X_1, X_2, \ldots, X_n)$. Thus,
\[
P((X_1, \ldots, X_n) = (x_{\pi_1}, \ldots, x_{\pi_n}) \mid X_{(i)} = x_i,\ i = 1, \ldots, n)
\]
\[
= P((Y_1, \ldots, Y_n) = (x_{\pi_1}, \ldots, x_{\pi_n}) \mid Y_{(i)} = x_i,\ i = 1, \ldots, n)
\]
\[
= P((X_{\tau_1}, \ldots, X_{\tau_n}) = (x_{\pi_1}, \ldots, x_{\pi_n}) \mid X_{(i)} = x_i,\ i = 1, \ldots, n)
\]
\[
= P((X_1, \ldots, X_n) = (x_{(\pi \circ \tau^{-1})_1}, \ldots, x_{(\pi \circ \tau^{-1})_n}) \mid X_{(i)} = x_i,\ i = 1, \ldots, n).
\]
As $\tau$ runs over all permutations, so does $\pi \circ \tau^{-1}$, showing that the conditional probability under consideration does not depend upon the permutation $\pi$ initially fixed. As there are $n!$ permutations, we conclude that:
\[
P((X_1, X_2, \ldots, X_n) = (x_{\pi_1}, x_{\pi_2}, \ldots, x_{\pi_n}) \mid X_{(1)} = x_1, X_{(2)} = x_2, \ldots, X_{(n)} = x_n) = \frac{1}{n!}.
\]
An example with Uniforms: Suppose that $X_1, X_2, \ldots, X_n$ are i.i.d. Uniform$(0, \theta)$. The joint density of $\{X_{(i)}\}$ is given by:
\[
f_{\mathrm{ord}}(x_1, x_2, \ldots, x_n) = \frac{n!}{\theta^n}\, 1\{0 < x_1 < x_2 < \ldots < x_n < \theta\}.
\]
The marginal density of the maximum, $X_{(n)}$, is:
\[
f_{X_{(n)}}(x_n) = \frac{n}{\theta^n}\, x_n^{n-1}\, 1\{0 < x_n < \theta\}.
\]
So, the conditional density of $(X_{(1)}, X_{(2)}, \ldots, X_{(n-1)})$ given $X_{(n)} = x_n$, by direct division, is seen to be:
\[
f_{\mathrm{cond}}(x_1, x_2, \ldots, x_{n-1}) = \frac{(n-1)!}{x_n^{n-1}}\, 1\{0 < x_1 < x_2 < \ldots < x_{n-1} < x_n\}.
\]
This shows that the first $n-1$ order statistics, given the maximum $x_n$, are distributed as the $n-1$ order statistics from a sample of size $n-1$ from Uniform$(0, x_n)$. But note that the distribution of the vector $\{X_1, X_2, \ldots, X_n\} \setminus \{X_{(n)}\}$ given $(X_{(1)}, X_{(2)}, \ldots, X_{(n)})$ must be uniform over all the $(n-1)!$ permutations of the first $n-1$ order statistics. Thus, the random vector $\{X_1, X_2, \ldots, X_n\} \setminus \{X_{(n)}\}$, conditional on $X_{(n)}$, must behave like an i.i.d. random sample from Uniform$(0, X_{(n)})$. These arguments can be made more rigorous, but at the expense of much notation.
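A quick simulation makes this last conclusion concrete. In the sketch below (our construction, assuming NumPy), the $n-1$ non-maximal values, rescaled by the maximum, should look like an i.i.d. Uniform$(0,1)$ sample regardless of the value of the maximum.

```python
import numpy as np

# Sketch: X1..Xn i.i.d. Uniform(0, theta); conditionally on X_(n), the other
# n-1 values should be i.i.d. Uniform(0, X_(n)), i.e. the rescaled values
# rest / max should be Uniform(0, 1) and independent of the maximum.
rng = np.random.default_rng(3)
theta, n, reps = 2.0, 5, 100_000
X = rng.uniform(0.0, theta, size=(reps, n))
mx = X.max(axis=1)
rest = np.sort(X, axis=1)[:, :-1]          # the n-1 smaller values per row
scaled = rest / mx[:, None]

t = 0.3
print("P(scaled <= 0.3) ~", (scaled <= t).mean(), "vs", t)
# Independence spot-check: same answer whether the maximum is small or large.
small = mx < np.median(mx)
print("given small max:", (scaled[small] <= t).mean(),
      "given large max:", (scaled[~small] <= t).mean())
```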

Order statistics and non-exchangeable distributions: Take $(X, Y)$ to be a pair of independent random variables, each supported on $(0, 1)$, with $X$ having Lebesgue density $f$ and $Y$ having Lebesgue density $g$. Now, $X$ and $Y$ are not exchangeable. We consider $P((X, Y) \in A, (U, V) \in B)$, where $U = X \wedge Y$, $V = X \vee Y$, $A$ is a Borel subset of $(0, 1)^2$ and $B$ a Borel subset of $\{(x, y) : x < y,\ x, y \in (0, 1)\}$. Let $\pi$ be the permutation on $\{1, 2\}$ that swaps indices, so that $\pi(x, y) = (y, x)$ and $\pi C$ denotes the reflection of a set $C$ about the diagonal. Then:
\[
P((X, Y) \in A, (U, V) \in B) = P((X, Y) \in A \cap (B \cup \pi B)) = P((X, Y) \in A \cap B) + P((X, Y) \in A \cap \pi B)
\]
\[
= \int_{A \cap B} f(x)\, g(y)\, dx\, dy + \int_{A \cap \pi B} f(x)\, g(y)\, dx\, dy
\]
\[
= \int_{A \cap B} f(u)\, g(v)\, du\, dv + \int_{\pi A \cap B} f(v)\, g(u)\, du\, dv \quad \text{(change of variable)}
\]
\[
= \int_B \big\{ f(u)\, g(v)\, 1((u, v) \in A) + f(v)\, g(u)\, 1((u, v) \in \pi A) \big\}\, du\, dv.
\]
From the above derivation, taking $A$ to be the unit square, we find that:
\[
P((U, V) \in B) = \int_B (f(u)\, g(v) + f(v)\, g(u))\, du\, dv,
\]
so that $dF_{U,V}(u, v) = (f(u)\, g(v) + f(v)\, g(u))\, du\, dv$. Conclude that:
\[
P((X, Y) \in A, (U, V) \in B) = \int_B \xi((u, v), A)\, dF_{U,V}(u, v),
\]
where, for $u < v$,
\[
\xi((u, v), A) = \frac{f(u)\, g(v)}{f(u)\, g(v) + f(v)\, g(u)}\, 1((u, v) \in A) + \frac{f(v)\, g(u)}{f(u)\, g(v) + f(v)\, g(u)}\, 1((u, v) \in \pi A).
\]
Remark: If $(X_1, X_2, \ldots, X_n)$ is a random vector with density $\prod_{i=1}^n f_i(x_i)$, you should be able to guess the form of the conditional distribution of the $X_i$'s given the order statistics.
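To see the transition function $\xi$ in action, here is a minimal sketch with concrete densities of our choosing (not from the notes): $f(x) = 2x$ on $(0,1)$, i.e. Beta$(2,1)$, and $g \equiv 1$, i.e. Uniform$(0,1)$. For $u < v$, $\xi$ predicts $P(X = u \mid (U, V) = (u, v)) = f(u)g(v)/(f(u)g(v) + f(v)g(u)) = u/(u+v)$, which we compare against the empirical frequency on a small box around $(u, v) = (0.3, 0.7)$.

```python
import numpy as np

# Sketch of the xi formula with f(x) = 2x (Beta(2,1)) and g = 1 (Uniform(0,1)).
# For u < v: P(X = u | U = u, V = v) = 2u / (2u + 2v) = u / (u + v).
rng = np.random.default_rng(4)
n = 2_000_000
X = rng.beta(2.0, 1.0, size=n)        # density f(x) = 2x on (0, 1)
Y = rng.uniform(0.0, 1.0, size=n)     # density g = 1 on (0, 1)
U, V = np.minimum(X, Y), np.maximum(X, Y)

# Condition on (U, V) in a small box around (u0, v0) = (0.3, 0.7).
u0, v0, h = 0.3, 0.7, 0.02
box = (np.abs(U - u0) < h) & (np.abs(V - v0) < h)
print("P(X = U | box) ~", (X < Y)[box].mean(), "vs u/(u+v) =", u0 / (u0 + v0))
```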
