1 Introduction

These notes aim to recall some basic definitions needed for dealing with random variables. Sections 2 to 5 follow mostly the presentation given in chapter two of [1].

2 Measure theoretic definitions

Let $\Omega$ be a non-empty set.

Definition 2.1 (σ-algebra). A σ-algebra is a collection $\mathcal{F}$ of subsets of $\Omega$ with these properties:

1. $\emptyset, \Omega \in \mathcal{F}$.
2. If $F \in \mathcal{F}$ then $F^c \in \mathcal{F}$, for $F^c := \Omega \setminus F$ the complement of $F$.
3. If $\{F_k\}_{k=1}^{\infty} \subset \mathcal{F}$ then $\bigcup_{k=1}^{\infty} F_k,\ \bigcap_{k=1}^{\infty} F_k \in \mathcal{F}$.

Definition 2.2 (Probability measure). Let $\mathcal{F}$ be a σ-algebra of subsets of $\Omega$. We call $P : \mathcal{F} \to [0,1]$ a probability measure provided:

1. $P(\emptyset) = 0$, $P(\Omega) = 1$.
2. If $\{F_k\}_{k=1}^{\infty} \subset \mathcal{F}$ then $P\left(\bigcup_{k=1}^{\infty} F_k\right) \le \sum_{k=1}^{\infty} P(F_k)$.
3. If $\{F_k\}_{k=1}^{\infty}$ are disjoint sets then $P\left(\bigcup_{k=1}^{\infty} F_k\right) = \sum_{k=1}^{\infty} P(F_k)$.

It follows that if $F_1, F_2 \in \mathcal{F}$ and $F_1 \subseteq F_2$ then $P(F_1) \le P(F_2)$.

Definition 2.3 (Borel σ-algebra). The smallest σ-algebra containing all the open subsets of $\mathbb{R}^d$ is called the Borel σ-algebra, denoted by $\mathcal{B}$. The Borel subsets of $\mathbb{R}^d$, i.e. the content of $\mathcal{B}$, may be thought of as the collection of all the well-behaved subsets of $\mathbb{R}^d$ to which Lebesgue measure theory applies.
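For a finite sample space these axioms can be checked exhaustively. The sketch below (the three-point set $\Omega$ and the uniform measure are illustrative choices, not taken from the notes) verifies that the power set of $\Omega$ is a σ-algebra and that the uniform measure is additive on disjoint events:

```python
from itertools import chain, combinations

# Illustrative finite example: Omega = {1, 2, 3}, F = power set (always a sigma-algebra).
omega = frozenset({1, 2, 3})
F = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(omega), r) for r in range(len(omega) + 1))]

def is_sigma_algebra(F, omega):
    """Check the three defining properties on a finite collection F."""
    Fset = set(F)
    if frozenset() not in Fset or omega not in Fset:
        return False                        # property 1: contains empty set and Omega
    if any(omega - A not in Fset for A in Fset):
        return False                        # property 2: closed under complements
    # property 3: closed under (finite) unions and intersections
    return all(A | B in Fset and A & B in Fset for A in Fset for B in Fset)

# Uniform probability measure P(A) = |A| / |Omega|: additive on disjoint events.
P = {A: len(A) / len(omega) for A in F}
disjoint_additive = all(
    abs(P[A | B] - P[A] - P[B]) < 1e-12
    for A in F for B in F if not (A & B))
```

On an infinite $\Omega$ no such exhaustive check is possible, which is precisely why the closure axioms are stated for countable families.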
3 Probability Space

Definition 3.1 (Probability space). A triple $(\Omega, \mathcal{F}, P)$ is called a probability space provided:

1. $\Omega$ is any set.
2. $\mathcal{F}$ is a σ-algebra of subsets of $\Omega$.
3. $P$ is a probability measure on $\mathcal{F}$.

Points $\omega \in \Omega$ are sample (outcome) points. A set $F \in \mathcal{F}$ is called an event; $P(F)$ is the probability of the event $F$. A property which is true except for an event of probability zero is said to hold almost surely (usually abbreviated "a.s.").

Example 3.1 (Single unbiased coin tossing). Outcome: head or tail, so $\Omega = \{\text{head}, \text{tail}\}$. The σ-algebra $\mathcal{F}$ comprises $2^2 = 4$ events:

1. $T$ = tail;
2. $H$ = head;
3. $\emptyset$ = neither head nor tail;
4. $T \cup H$ = head or tail.

Probability measure: $P(T) = P(H) = 1/2$, $P(\emptyset) = 0$, $P(T \cup H) = 1$.

Example 3.2 (Uniform distribution). $\Omega = [0,1]$; $\mathcal{F}$: the σ-algebra of all Borel subsets of $[0,1]$; $P$: the Lebesgue measure on $[0,1]$. (Note: as $\{0\} \cup \{1\}$ has zero measure, $[0,1]$ and $(0,1)$ coincide up to a set of measure zero.)
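The coin-toss space of Example 3.1 can be encoded directly; a minimal sketch (the string labels are arbitrary):

```python
# Example 3.1: Omega = {head, tail}, F the full power set (4 events),
# P assigning probability 1/2 to each elementary outcome.
omega = frozenset({"head", "tail"})
T = frozenset({"tail"})
H = frozenset({"head"})
events = [frozenset(), T, H, omega]

P = {frozenset(): 0.0, T: 0.5, H: 0.5, omega: 1.0}

# Additivity on the disjoint events T and H recovers P(T u H) = 1.
p_union = P[T] + P[H]
```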
Definition 3.2 (Probability density on $\mathbb{R}^d$). Let $p$ be a non-negative, integrable function such that
$$\int_{\mathbb{R}^d} d^d x\, p(x) = 1.$$
Then to each $B \in \mathcal{B}$ (Borel σ-algebra) it is possible to associate a probability
$$P(B) = \int_B d^d x\, p(x)$$
so that $(\mathbb{R}^d, \mathcal{B}, P)$ is a probability space. The function $p$ is called the density of the probability $P$.

Example 3.3 (Gaussian distribution). The function $g^{\bar{x}}_{\sigma} : \mathbb{R} \to \mathbb{R}_+$,
$$g^{\bar{x}}_{\sigma}(x) = \frac{e^{-\frac{(x-\bar{x})^2}{2\sigma^2}}}{\sqrt{2\pi\sigma^2}}, \qquad (3.1)$$
is a probability density on $(\mathbb{R}, \mathcal{B}, P)$.

Example 3.4 (Dirac mass and Dirac δ-function). Let $y$ be the coordinate of a point in $\mathbb{R}^d$. Define for any $B \in \mathcal{B}$
$$P_y(B) = \begin{cases} 1 & \text{if } y \in B \\ 0 & \text{if } y \notin B \end{cases} \qquad (3.2)$$
then $(\mathbb{R}^d, \mathcal{B}, P_y)$ is a probability space. The probability $P_y$ is the Dirac mass concentrated at $y$. The density associated to $P_y$ is the Dirac δ-function (distribution). A possible definition of the Dirac δ-function on $\mathbb{R}$ is
$$\delta(x-y) \stackrel{w}{:=} \lim_{\sigma \downarrow 0} g^{y}_{\sigma}(x).$$
The definition must be understood in weak sense. Namely, let $f : \mathbb{R} \to \mathbb{R}$ be a bounded Lebesgue measurable test function; then
$$\int_{\mathbb{R}} dx\, \delta(x-y)\, f(x) = \lim_{\sigma \downarrow 0} \int_{\mathbb{R}} dx\, g^{y}_{\sigma}(x)\, f(x) = \lim_{\sigma \downarrow 0} \int_{\mathbb{R}} dx\, g^{0}_{1}(x)\, f(\sigma x + y) = f(y).$$
The above chain of equalities shows that the Dirac δ is not a density with respect to the standard Lebesgue measure, as it has support on a set of zero Lebesgue measure. A consequence is that the indefinite integral
$$H_y(x) = \int_{-\infty}^{x} dz\, \delta(z-y) = \frac{1 + \mathrm{sgn}(x-y)}{2}$$
is not well defined at $x = y$. Interpreting the Dirac δ as a singular measure assigning a point mass to $y$ (i.e. the definition (3.2)) renders the limit well defined:
$$\lim_{x \downarrow y} H_y(x) = 1, \qquad \lim_{x \uparrow y} H_y(x) = 0.$$
In weak sense (i.e. applied to suitable test functions), the Dirac δ over $\mathbb{R}$ satisfies:
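The weak limit defining the δ-function can be illustrated numerically: the integral of $g^{y}_{\sigma}$ against a smooth test function approaches $f(y)$ as $\sigma \downarrow 0$. A sketch (the test function $\cos$ and the point $y = 0.3$ are arbitrary choices):

```python
import numpy as np

# g^y_sigma from Example 3.3, used as an approximation of delta(x - y).
def gauss(x, y, sigma):
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

f = np.cos                         # bounded smooth test function
y = 0.3
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]

# Integral of g^y_sigma(x) f(x) dx for decreasing sigma: should approach f(y).
approx = [float(np.sum(gauss(x, y, s) * f(x)) * dx) for s in (1.0, 0.1, 0.01)]
```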
i. $\displaystyle\int_{y-\varepsilon}^{y+\varepsilon} dx\, \delta(x-y)\, f(x) = f(y)$.

ii. $\displaystyle\int_{y-\varepsilon}^{y+\varepsilon} dx\, \frac{d\delta}{dx}(x-y)\, f(x) = -\frac{df}{dy}(y)$, i.e. $f(x)\,\dfrac{d\delta}{dx}(x-y) \stackrel{w}{=} -\dfrac{df}{dx}(x)\,\delta(x-y)$.

iii. For $h(x)$ having a simple zero at $x = x_*$ and otherwise non-vanishing and smooth in $(x_*-\varepsilon, x_*+\varepsilon)$ with $\varepsilon > 0$:
$$\int_{x_*-\varepsilon}^{x_*+\varepsilon} dx\, f(x)\, \delta(h(x)) = \frac{f(x_*)}{\left|\frac{dh}{dx}(x_*)\right|}, \qquad \delta(h(x)) \stackrel{w}{=} \frac{\delta(x-x_*)}{\left|\frac{dh}{dx}(x_*)\right|}.$$

iv. The $d$-dimensional Dirac-δ
$$\delta^{(d)}(x-y) = \prod_{i=1}^{d} \delta(x_i - y_i) \qquad (3.3)$$
may be defined by repeating the limiting procedure on each variable, e.g.
$$\delta^{(d)}(x-y) \stackrel{w}{=} \lim_{\sigma_i \downarrow 0} \prod_{i=1}^{d} g^{y_i}_{\sigma_i}(x_i). \qquad (3.4)$$

v. Let
$$h : \mathbb{R}^d \to \mathbb{R} \qquad (3.5)$$
be such that
$$h(x) = 0 \qquad (3.6)$$
describes a smooth $(d-1)$-dimensional hyper-surface $\Sigma$ embedded in $\mathbb{R}^d$; then
$$\int_{\mathbb{R}^d} d^d x\, f(x)\, \delta(h(x)) = \int_{\Sigma} d\sigma\, \frac{f(x)}{\|\nabla h\|}. \qquad (3.7)$$

4 Random variables

Definition 4.1 (Random variable). Let $(\Omega, \mathcal{F}, P)$ be a probability space. A mapping $\xi : \Omega \to \mathbb{R}^d$ is called a $d$-dimensional random variable if for each $B \in \mathcal{B}$ one has
$$\xi^{-1}(B) \in \mathcal{F},$$
i.e. if $\xi$ is $\mathcal{F}$-measurable. The definition associates to each Borel subset an event.

Example 4.1 (Indicator function). Let $F \in \mathcal{F}$. The indicator function of $F$ is
$$\chi_F(\omega) = \begin{cases} 1 & \text{if } \omega \in F \\ 0 & \text{if } \omega \notin F \end{cases}$$
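Property (iii) can be checked numerically by representing the δ with a narrow Gaussian, as in the limiting definition above. A sketch (the choices $h(x) = x^2 - 4$, $f(x) = \sin x + 2$, and the window around the zero $x_* = 2$ are arbitrary):

```python
import numpy as np

# Numerical check of property (iii): for h with a simple zero at x_*,
# integral of f(x) delta(h(x)) dx = f(x_*) / |h'(x_*)|.
def gauss(u, sigma):
    return np.exp(-u ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

h = lambda x: x ** 2 - 4.0       # simple zero at x_* = 2 inside the window
dh = lambda x: 2 * x             # h'(x), so |h'(x_*)| = 4
f = lambda x: np.sin(x) + 2.0

x = np.linspace(1.0, 3.0, 400001)    # window (x_* - 1, x_* + 1) around the zero
dx = x[1] - x[0]

# delta represented by a narrow Gaussian applied to h(x)
lhs = float(np.sum(f(x) * gauss(h(x), 1e-3)) * dx)
rhs = f(2.0) / abs(dh(2.0))
```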
Example 4.2 (Simple function). Let $\{F_i\}_{i=1}^{m} \subset \mathcal{F}$ be disjoint (i.e. $F_i \cap F_j = \emptyset$ for $i \ne j$) and form a partition of $\Omega$ (i.e. $\bigcup_{i=1}^{m} F_i = \Omega$), and let $\{x_i\}_{i=1}^{m} \subset \mathbb{R}$. Then
$$\xi(\omega) = \sum_{i=1}^{m} x_i\, \chi_{F_i}(\omega)$$
is a random variable, called a simple function.

Lemma 4.1. Let $\xi : \Omega \to \mathbb{R}^d$ be a random variable. Then
$$\mathcal{F}(\xi) = \left\{ \xi^{-1}(B) \mid B \in \mathcal{B} \right\}$$
is a σ-algebra, called the σ-algebra generated by $\xi$. This is the smallest sub-σ-algebra of $\mathcal{F}$ with respect to which $\xi$ is measurable.

Proof. It suffices to verify that $\mathcal{F}(\xi)$ is a σ-algebra.

5 Expectation values

Expectation values of generic random variables are defined following the same steps taken to define the Lebesgue integral of measurable functions. Let $(\Omega, \mathcal{F}, P)$ be a probability space and $\xi$ a simple 1-dimensional random variable, $\xi = \sum_{i=1}^{n} x_i\, \chi_{F_i}$.

Definition 5.1 (Expectation value (integral) of a simple random variable).
$$\int_{\Omega} dP\, \xi = \sum_{i=1}^{n} x_i\, P(F_i).$$

Definition 5.2 (Expectation value (integral) of a non-negative random variable η). For $\eta : \Omega \to \mathbb{R}_+$ we define
$$\int_{\Omega} dP\, \eta := \sup_{\substack{\xi \le \eta \\ \xi\ \text{simple}}} \int_{\Omega} dP\, \xi.$$

Definition 5.3 (Expectation value of a random variable η). For $\eta : \Omega \to \mathbb{R}$ we define
$$\eta_+ := \max\{\eta, 0\} \quad \text{and} \quad \eta_- := \max\{-\eta, 0\},$$
so that $\eta = \eta_+ - \eta_-$. If $\min\left\{ \int_{\Omega} dP\, \eta_+, \int_{\Omega} dP\, \eta_- \right\} < \infty$, we define the expectation value of $\eta$
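Definition 5.1 can be illustrated on the probability space of Example 3.2 ($\Omega = [0,1]$ with Lebesgue measure): for a simple function built on a partition into intervals, the value $\sum_i x_i P(F_i)$ matches a Monte Carlo average. A sketch (the partition and the values $x_i$ are arbitrary choices):

```python
import numpy as np

# Simple random variable xi = sum_i x_i chi_{F_i} on Omega = [0,1)
# with F_i a partition into intervals and P the Lebesgue measure.
rng = np.random.default_rng(0)

edges = np.array([0.0, 0.2, 0.7, 1.0])   # F_1=[0,.2), F_2=[.2,.7), F_3=[.7,1)
values = np.array([5.0, -1.0, 2.0])      # x_i

# Definition 5.1: integral = sum_i x_i P(F_i), with P(F_i) = interval length.
exact = float(np.sum(values * np.diff(edges)))   # 5*0.2 - 1*0.5 + 2*0.3 = 1.1

# Monte Carlo check: sample omega uniformly, evaluate xi(omega), average.
omega = rng.random(200_000)
xi = values[np.searchsorted(edges, omega, side="right") - 1]
mc = float(xi.mean())
```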
6 as dp η := dp η + dp η With these definitions all the standard rules of Lebesgue integrals apply to expectation values. Lemma 5.1. Chebyshev s inequality If ξ is a random variable and 1 n <, then P ( ξ x) 1 x n ξ n n Proof. ξ p = dp ξ n x n ξ ξ dp ξ p x n P ( ξ ξ) 6 Moments of a random variable Definition 6.1. (Distribution function) The distribution function of a random variable ξ : d is the function P ξ : d + such that P ξ (x) = P ξ (ξ 1 x 1,..., ξ d x d ) Definition 6.. (PDF of a random variable) Let ξ : d be a random variable and P ξ its distribution function. If there exists a non-negative, integrable function p : d + such that P ξ (x) = d then p ξ specifies the probability density function of ξ (PDF). Lemma 6.1. Let xi ξ : d dy i p ξ (y) be a random variable, with statistics described by PDF p ξ. Suppose f : d and is integrable. Then y = f(x) y E {y} = d d x p ξ (x)f(x) 6
In particular, if $\xi_i$ denotes the $i$-th component of $\xi$:
$$\langle \xi_i \rangle = \int_{\mathbb{R}^d} d^d x\, p_\xi(x)\, x_i \quad \text{(average)}, \qquad \left\langle (\xi - \langle \xi \rangle)^2 \right\rangle = \int_{\mathbb{R}^d} d^d x\, p_\xi(x)\, (x - \langle \xi \rangle)^2 \quad \text{(variance)}.$$

Proof. Suppose first $f$ is a simple function on $\mathbb{R}^d$, $f = \sum_{i=1}^{n} f_i\, \chi_{B_i}$. Then
$$\int_{\Omega} f(\xi)\, dP = \sum_{i=1}^{n} f_i\, P(B_i) = \sum_{i=1}^{n} f_i \int_{B_i} d^d x\, p_\xi(x).$$
Consequently the formula holds for all simple functions and, by approximation, it holds therefore for general functions $f$.

Definition 6.3 (Moments of a random variable). Let $\xi : \Omega \to \mathbb{R}$. We call the expectation value of the $n$-th power of $\xi$,
$$\langle \xi^n \rangle = \int_{\Omega} dP\, \xi^n,$$
the moment of order $n$ of $\xi$. The lower order moments are those most recurrent in applications and as such are given special names, such as the average and the variance.

Example 6.1 (Average and variance of a Gaussian variable). Average: as $g^{0}_{1}(x) = g^{0}_{1}(-x)$, one finds
$$\langle \xi \rangle = \int_{\mathbb{R}} dx\, x\, g^{\bar{x}}_{\sigma}(x) = \int_{\mathbb{R}} dx\, (\bar{x} + \sigma x)\, g^{0}_{1}(x) = \bar{x}.$$
Variance:
$$\left\langle (\xi - \langle \xi \rangle)^2 \right\rangle = \sigma^2 \int_{\mathbb{R}} dx\, x^2\, g^{0}_{1}(x).$$
The remaining integral can be evaluated, for example, using the identity
$$\int_{\mathbb{R}} dx\, x^2\, g^{0}_{1}(x) = -2\, \frac{d}{dl}\Big|_{l=1} I(l), \qquad I(l) := \int_{\mathbb{R}} dx\, \frac{e^{-\frac{l x^2}{2}}}{\sqrt{2\pi}}.$$
Namely,
$$I^2(l) = \int_{\mathbb{R}^2} \frac{dx_1\, dx_2}{2\pi}\, e^{-\frac{l (x_1^2 + x_2^2)}{2}} = l^{-1} \int_{0}^{\infty} dr\, r\, e^{-\frac{r^2}{2}} = l^{-1},$$
so $I(l) = l^{-1/2}$, $-2\, \frac{d}{dl}\big|_{l=1} l^{-1/2} = 1$, and the variance equals $\sigma^2$. The statistical properties of a Gaussian variable are therefore fully specified by its first two moments.
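The Gaussian average and variance of Example 6.1 can be reproduced by direct quadrature of the density (the parameter values $\bar{x} = 1.5$, $\sigma = 0.7$ are arbitrary):

```python
import numpy as np

# Average and variance of g^xbar_sigma by a Riemann sum over a wide window.
xbar, sigma = 1.5, 0.7
x = np.linspace(xbar - 12 * sigma, xbar + 12 * sigma, 400001)
dx = x[1] - x[0]
g = np.exp(-(x - xbar) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

norm = float(np.sum(g) * dx)                    # should be 1
mean = float(np.sum(x * g) * dx)                # should equal xbar
var = float(np.sum((x - mean) ** 2 * g) * dx)   # should equal sigma**2
```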
7 Characteristic function of a random variable

Definition 7.1 (Characteristic function). Let $\xi : \Omega \to \mathbb{R}^d$. The expectation value
$$\check{p}_\xi(q) := \left\langle e^{\imath\, \xi \cdot q} \right\rangle$$
is referred to as the characteristic function of the random variable $\xi$.

Example 7.1 (Characteristic function of a Gaussian variable). Let $\xi : \Omega \to \mathbb{R}$ be distributed with Gaussian PDF. The characteristic function is
$$\check{g}_{\bar{x},\sigma}(q) = \int_{\mathbb{R}} dx\, e^{\imath q x}\, g_{\bar{x},\sigma}(x) = e^{\imath q \bar{x} - \frac{\sigma^2 q^2}{2}}.$$
For a centered Gaussian variable ($\bar{x} = 0$) it is also true that
$$\check{g}_{0,\sigma}(q) = \sum_{n=0}^{\infty} \frac{\imath^{2n} q^{2n}}{\Gamma(2n+1)} \left\langle \xi^{2n} \right\rangle \quad \text{with} \quad \left\langle \xi^{2n} \right\rangle = \frac{\Gamma(2n+1)}{2^n\, \Gamma(n+1)}\, \sigma^{2n} = (2n-1)!!\, \sigma^{2n}. \qquad (7.1)$$
In general, but formally, for the random variable $\xi$ one can write
$$\langle \xi^n \rangle = \frac{1}{\imath^n} \frac{d^n}{dq^n} \check{p}_\xi(q) \Big|_{q=0}. \qquad (7.2)$$
The relation is formal because it may be a relation between infinities.

Example 7.2 (Lorentz distribution). Let $p : \mathbb{R} \to \mathbb{R}_+$ be
$$p^{y}_{\sigma}(x) = \frac{\sigma}{\pi \left\{ (x-y)^2 + \sigma^2 \right\}},$$
the Lorentz probability density, so that $(\mathbb{R}, \mathcal{B}, P^{y}_{\sigma})$ is a probability space. Note that $p^{0}_{\sigma}(x) = p^{0}_{\sigma}(-x)$. Using a change of variable it is straightforward to verify that
$$\int_{\mathbb{R}} dx\, x\, p^{y}_{\sigma}(x) = y, \quad \text{however} \quad \int_{\mathbb{R}} dx\, x^2\, p^{y}_{\sigma}(x) = \infty.$$
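The Gaussian characteristic function admits a direct numerical check: the sample average of $e^{\imath q \xi}$ over normal draws should match $e^{\imath q \bar{x} - \sigma^2 q^2/2}$ (the parameters are arbitrary; the statistical error scales as $N^{-1/2}$):

```python
import numpy as np

# Empirical characteristic function of a Gaussian sample versus the closed
# form exp(i q xbar - sigma^2 q^2 / 2) of Example 7.1.
rng = np.random.default_rng(2)
xbar, sigma = 0.5, 1.3
xi = rng.normal(xbar, sigma, size=1_000_000)

qs = np.array([0.0, 0.5, 1.0, 2.0])
empirical = np.array([np.mean(np.exp(1j * q * xi)) for q in qs])
exact = np.exp(1j * qs * xbar - sigma ** 2 * qs ** 2 / 2)
err = float(np.max(np.abs(empirical - exact)))
```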
The characteristic function can be computed using the Cauchy theorem:
$$\check{p}^{y}_{\sigma}(q) = e^{\imath q y} \int_{\mathbb{R}} dx\, e^{\imath q x}\, p^{0}_{\sigma}(x) = \frac{e^{\imath q y}}{2 \pi \imath} \int_{\mathbb{R}} dx\, e^{\imath q x} \left\{ \frac{1}{x - \imath \sigma} - \frac{1}{x + \imath \sigma} \right\} = e^{\imath q y} \begin{cases} e^{-q \sigma} & \text{if } q > 0 \\ e^{q \sigma} & \text{if } q < 0 \end{cases}$$
The characteristic function develops a cusp at $q = 0$:
$$\check{p}^{y}_{\sigma}(q) = e^{\imath q y - \sigma |q|}.$$
Finally note that
$$\delta(x - y) \stackrel{w}{=} \lim_{\sigma \downarrow 0} p^{y}_{\sigma}(x).$$

Remark 7.1 (A formal interpretation of the Dirac δ). Let $f_i : \mathbb{R}^d \to \mathbb{R}$, $i = 1, 2$, be smooth integrable functions and
$$\check{f}_i(q) = \int_{\mathbb{R}^d} d^d x\, e^{\imath q \cdot x}\, f_i(x), \quad i = 1, 2,$$
their Fourier transforms. The convolution identity
$$(f_1 \star f_2)(x) := \int_{\mathbb{R}^d} d^d y\, f_1(x - y)\, f_2(y) = \int_{\mathbb{R}^d} \frac{d^d q}{(2\pi)^d}\, e^{\imath q \cdot x}\, \check{f}_1(q)\, \check{f}_2(q)$$
may be thought of as a consequence of
$$\delta^{(d)}(x - y) \stackrel{w}{=} \int_{\mathbb{R}^d} \frac{d^d q}{(2\pi)^d}\, e^{\imath q \cdot (x - y)}.$$
The Dirac δ in such a case could very formally be thought of as the characteristic function of a random variable uniformly distributed over $\mathbb{R}^d$.

8 Probability density and Dirac-δ

Let $\xi : \Omega \to \mathbb{R}^d$ with PDF $p_\xi(\cdot)$. From the properties of the δ-function we have
$$\left\langle \delta^{(d)}(\xi - x) \right\rangle = \int_{\mathbb{R}^d} d^d y\, p_\xi(y)\, \delta^{(d)}(y - x) = p_\xi(x).$$
This relation allows us to derive the relation between the PDFs of functionally dependent random variables.

8.1 Derivation in one dimension

Suppose the random variable $\xi$ has PDF $p_\xi$:
$$P(x < \xi < x + dx) = p_\xi(x)\, dx.$$
Consider a functional relation between random variables,
$$\varphi = f(\xi). \qquad (8.1)$$
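The cusp formula for the Lorentz characteristic function can likewise be checked against the empirical average of $e^{\imath q \xi}$ over Cauchy samples (the parameters are arbitrary; note that the sample mean of $\xi$ itself does not converge, but $e^{\imath q \xi}$ is bounded, so its average does):

```python
import numpy as np

# Empirical characteristic function of Lorentz (Cauchy) samples versus
# the closed form exp(i q y - sigma |q|).
rng = np.random.default_rng(3)
y, sigma = 0.4, 2.0
xi = y + sigma * rng.standard_cauchy(size=1_000_000)

qs = np.array([-2.0, -0.5, 0.5, 2.0])
empirical = np.array([np.mean(np.exp(1j * q * xi)) for q in qs])
exact = np.exp(1j * qs * y - sigma * np.abs(qs))
err = float(np.max(np.abs(empirical - exact)))
```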
Again,
$$P(y < \varphi < y + dy) = p_\varphi(y)\, dy.$$
From $P(x < \xi < x + dx) = P(y < \varphi < y + dy)$ one gets
$$p_\xi(x) = p_\varphi(f(x)) \left| \frac{df}{dx}(x) \right|.$$

8.2 Multi-dimensional case using the δ-function

Suppose $f$ is one-to-one and write
$$p_\varphi(y) = \left\langle \delta^{(d)}(\varphi - y) \right\rangle = \left\langle \delta^{(d)}(f(\xi) - y) \right\rangle.$$
It follows that
$$p_\varphi(y) = \int_{\mathbb{R}^d} d^d x\, \delta^{(d)}(f(x) - y)\, p_\xi(x) = \lim_{\sigma \downarrow 0} \int_{\mathbb{R}^d} d^d x\, \frac{e^{-\frac{|f(x) - y|^2}{2\sigma^2}}}{(2\pi\sigma^2)^{d/2}}\, p_\xi(x).$$
Since $f$ is one-to-one,
$$f^{-1}(y) = x_* \qquad (8.2)$$
is globally well defined. Taylor-expanding the argument of the exponential we get, for the $i$-th component of $y$,
$$y_i = f_i(x_*) + (x_j - x_{*j}) \frac{\partial f_i}{\partial x_j}(x_*) + O((x_j - x_{*j})^2).$$
Calling
$$A_{ij} := \frac{\partial f_i}{\partial x_j}(x_*), \qquad z_j = \frac{x_j - x_{*j}}{\sigma},$$
the integral becomes
$$p_\varphi(y) = \lim_{\sigma \downarrow 0} \int_{\mathbb{R}^d} d^d z\, \frac{e^{-\frac{1}{2} z_i A_{li} A_{lj} z_j + O(\sigma)}}{(2\pi)^{d/2}}\, p_\xi(x_* + O(\sigma)) = \frac{p_\xi(x_*)}{|\det A|}. \qquad (8.3)$$

9 Independence

Definition 9.1 (Conditional probability). Let $(\Omega, \mathcal{F}, P)$ be a probability space and $F_1, F_2$ two events in $\mathcal{F}$. Suppose $P(F_1) > 0$. Then the probability of the event $F_2$ given the occurrence of $F_1$ is
$$P(F_2 \mid F_1) = \frac{P(F_1 \cap F_2)}{P(F_1)}.$$
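Formula (8.3) can be verified by sampling: for a linear one-to-one map $\varphi = A\xi$ with standard Gaussian $\xi$ in $d = 2$, the histogram density of $\varphi$ near a point $y_0$ should match $p_\xi(A^{-1} y_0)/|\det A|$. A sketch (the matrix $A$, the point $y_0$, and the box size are arbitrary choices):

```python
import numpy as np

# Change of variables (8.3) for a linear one-to-one map phi = A xi in d = 2.
rng = np.random.default_rng(4)
A = np.array([[2.0, 1.0], [0.0, 3.0]])       # invertible, det A = 6

xi = rng.normal(size=(1_000_000, 2))         # standard Gaussian, p_xi known
phi = xi @ A.T                               # phi_i = A_ij xi_j

# Empirical density of phi in a small box around y0.
y0 = np.array([0.5, -0.3])
h = 0.1
in_box = np.all(np.abs(phi - y0) < h / 2, axis=1)
p_emp = float(in_box.mean()) / h ** 2

# Prediction (8.3): p_xi(x_*) / |det A| with x_* = A^{-1} y0.
x_star = np.linalg.solve(A, y0)
p_xi = float(np.exp(-0.5 * np.sum(x_star ** 2)) / (2 * np.pi))
p_pred = p_xi / abs(np.linalg.det(A))
```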
For a clear interpretation of this definition see [1], p. 17.

Definition 9.2 (Independence). $F_2$ is said to be independent of $F_1$ if
$$P(F_2 \mid F_1) = P(F_2) \quad \Leftrightarrow \quad P(F_1 \cap F_2) = P(F_1)\, P(F_2).$$

Definition 9.3 (Independence of random variables). The random variables $\xi_i : \Omega \to \mathbb{R}^d$, $i = 1, 2, \dots$ are said to be independent if for all integers $1 \le k_1 < k_2 < \dots < k_m$ and all choices of Borel sets $\{B_{k_i}\}_{i=1}^{m} \subset \mathcal{B}$ the following factorisation property holds true:
$$P(\xi_{k_1} \in B_{k_1}, \xi_{k_2} \in B_{k_2}, \dots, \xi_{k_m} \in B_{k_m}) = \prod_{i=1}^{m} P(\xi_{k_i} \in B_{k_i}).$$
The definition implies that if there exists a PDF
$$p_{\xi_{k_1} \dots \xi_{k_m}} : \underbrace{\mathbb{R}^d \times \dots \times \mathbb{R}^d}_{m\ \text{times}} \to \mathbb{R}_+ \qquad (9.1)$$
such that
$$P(\xi_{k_1} \in B_{k_1}, \dots, \xi_{k_m} \in B_{k_m}) = \int_{B_{k_1}} \cdots \int_{B_{k_m}} \prod_{i=1}^{m} d^d x_{k_i}\; p_{\xi_{k_1} \dots \xi_{k_m}}(x_{k_1}, \dots, x_{k_m}),$$
then
$$p_{\xi_{k_1} \dots \xi_{k_m}}(x_{k_1}, \dots, x_{k_m}) = \prod_{i=1}^{m} p_{\xi_{k_i}}(x_{k_i}).$$
Furthermore, the characteristic function of $m$ independent random variables is equal to the product of the characteristic functions.

10 The main results of classical probability theory

The two theorems below give a mathematical model to describe the statistical properties of the outcomes of repeated identical experiments. The first theorem gives information about the average outcome, the second about the typical size of the fluctuations around the average.

Theorem 10.1 (The strong law of large numbers). Let $\{\xi_i\}_{i=1}^{\infty}$ be a sequence of independent identically distributed integrable random variables defined over the same probability space. One has
$$P\left( \lim_{n \to \infty} \frac{\sum_{i=1}^{n} \xi_i}{n} = \langle \xi_1 \rangle \right) = 1. \qquad (10.1)$$
Proof: see e.g. [1].
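Theorem 10.1 can be visualised numerically: the running average of i.i.d. samples settles on $\langle \xi_1 \rangle$ as $n$ grows. A sketch with Uniform(0,1) variables, so $\langle \xi_1 \rangle = 1/2$ (the distribution is an arbitrary choice):

```python
import numpy as np

# Running averages of i.i.d. Uniform(0,1) samples versus the mean 1/2.
rng = np.random.default_rng(5)
xi = rng.random(1_000_000)
running = np.cumsum(xi) / np.arange(1, xi.size + 1)

# |running average - 1/2| at increasing sample sizes n
errors = [float(abs(running[n - 1] - 0.5)) for n in (100, 10_000, 1_000_000)]
```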
Theorem 10.2 (The central limit theorem). Let $\{\xi_i\}_{i=1}^{\infty}$ be a sequence of independent identically distributed real-valued integrable random variables defined over the same probability space. Assume
$$\langle \xi_1 \rangle = \bar{y}, \qquad \left\langle (\xi_1 - \langle \xi_1 \rangle)^2 \right\rangle = \sigma^2 > 0.$$
Set $\Sigma_n = \sum_{i=1}^{n} \xi_i$. Then for all $-\infty < a < b < \infty$ the limit
$$\lim_{n \to \infty} P\left( a < \frac{\Sigma_n - n \bar{y}}{\sqrt{n}\, \sigma} < b \right) = \int_{a}^{b} dx\, g^{0}_{1}(x) \qquad (10.2)$$
holds, where
$$g^{0}_{1}(x) = \frac{e^{-\frac{x^2}{2}}}{\sqrt{2\pi}}.$$
Idea behind the proof. Consider the characteristic function
$$\left\langle e^{\imath q \frac{\Sigma_n - n \bar{y}}{\sqrt{n}\, \sigma}} \right\rangle = \prod_{i=1}^{n} \left\langle e^{\imath q \frac{\xi_i - \bar{y}}{\sqrt{n}\, \sigma}} \right\rangle = \left[ \int_{\mathbb{R}} dx\, e^{\imath q \frac{x - \bar{y}}{\sqrt{n}\, \sigma}}\, p_{\xi_1}(x) \right]^n. \qquad (10.3)$$
As $n$ increases to infinity, one expects the characteristic function for small values of $q$ to be well approximated by a Taylor expansion of the exponential:
$$\left\langle e^{\imath q \frac{\Sigma_n - n \bar{y}}{\sqrt{n}\, \sigma}} \right\rangle = \left[ 1 - \frac{q^2}{2 n} \int_{\mathbb{R}} dx\, \frac{(x - \bar{y})^2}{\sigma^2}\, p_{\xi_1}(x) + O(n^{-3/2}) \right]^n \xrightarrow{n \to \infty} e^{-\frac{q^2}{2}}. \qquad (10.4)$$
Thus the small wave-number behaviour of the characteristic function is approximated by the characteristic function of the Gaussian distribution.

References

[1] L.C. Evans, An Introduction to Stochastic Differential Equations, lecture notes, evans/

[2] M. San Miguel and R. Toral, Stochastic effects in Physical Systems, in Instabilities and Nonequilibrium Structures VI, E. Tirapegui and W. Zeller, eds., Kluwer Academic, and arXiv:cond-mat/
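Theorem 10.2 can also be checked numerically: standardise sums of i.i.d. Uniform(0,1) variables and compare the probability of an interval with the Gaussian limit. A sketch (the interval $(-1,1)$ and the sample sizes are arbitrary choices):

```python
import numpy as np
from math import erf, sqrt

# CLT check: standardized sums of uniforms versus the Gaussian limit (10.2).
rng = np.random.default_rng(6)
n, trials = 100, 50_000
y_bar, sigma = 0.5, sqrt(1.0 / 12.0)     # mean and std of Uniform(0, 1)

sums = rng.random((trials, n)).sum(axis=1)
z = (sums - n * y_bar) / (sqrt(n) * sigma)

# Fraction of standardized sums falling in (a, b), versus the Gaussian integral.
a, b = -1.0, 1.0
empirical = float(np.mean((z > a) & (z < b)))
exact = erf(1.0 / sqrt(2.0))             # integral of g^0_1 over (-1, 1), ~0.683
```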
NAME: SOLUTION problem # 1 2 3 4 5 6 7 8 9 points max 15 20 10 15 10 10 10 10 10 110 MATH 172: Lebesgue Integration and Fourier Analysis (winter 2012 Final exam Wednesday, March 21, 2012 time: 2.5h Please
More informationPart II Probability and Measure
Part II Probability and Measure Theorems Based on lectures by J. Miller Notes taken by Dexter Chua Michaelmas 2016 These notes are not endorsed by the lecturers, and I have modified them (often significantly)
More informationThe main results about probability measures are the following two facts:
Chapter 2 Probability measures The main results about probability measures are the following two facts: Theorem 2.1 (extension). If P is a (continuous) probability measure on a field F 0 then it has a
More informationLebesgue Integration on R n
Lebesgue Integration on R n The treatment here is based loosely on that of Jones, Lebesgue Integration on Euclidean Space We give an overview from the perspective of a user of the theory Riemann integration
More information2 n k In particular, using Stirling formula, we can calculate the asymptotic of obtaining heads exactly half of the time:
Chapter 1 Random Variables 1.1 Elementary Examples We will start with elementary and intuitive examples of probability. The most well-known example is that of a fair coin: if flipped, the probability of
More informationChapter 3: Random Variables 1
Chapter 3: Random Variables 1 Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw 1 Modified from the lecture notes by Prof.
More informationX n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2)
14:17 11/16/2 TOPIC. Convergence in distribution and related notions. This section studies the notion of the so-called convergence in distribution of real random variables. This is the kind of convergence
More informationMATH MEASURE THEORY AND FOURIER ANALYSIS. Contents
MATH 3969 - MEASURE THEORY AND FOURIER ANALYSIS ANDREW TULLOCH Contents 1. Measure Theory 2 1.1. Properties of Measures 3 1.2. Constructing σ-algebras and measures 3 1.3. Properties of the Lebesgue measure
More information2 Lebesgue integration
2 Lebesgue integration 1. Let (, A, µ) be a measure space. We will always assume that µ is complete, otherwise we first take its completion. The example to have in mind is the Lebesgue measure on R n,
More informationNorthwestern University Department of Electrical Engineering and Computer Science
Northwestern University Department of Electrical Engineering and Computer Science EECS 454: Modeling and Analysis of Communication Networks Spring 2008 Probability Review As discussed in Lecture 1, probability
More informationλ n = L φ n = π L eınπx/l, for n Z
Chapter 32 The Fourier Transform 32. Derivation from a Fourier Series Consider the eigenvalue problem y + λy =, y( L = y(l, y ( L = y (L. The eigenvalues and eigenfunctions are ( nπ λ n = L 2 for n Z +
More information1 Sequences of events and their limits
O.H. Probability II (MATH 2647 M15 1 Sequences of events and their limits 1.1 Monotone sequences of events Sequences of events arise naturally when a probabilistic experiment is repeated many times. For
More informationChapter 2. Random Variable. Define single random variables in terms of their PDF and CDF, and calculate moments such as the mean and variance.
Chapter 2 Random Variable CLO2 Define single random variables in terms of their PDF and CDF, and calculate moments such as the mean and variance. 1 1. Introduction In Chapter 1, we introduced the concept
More informationChapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University
Chapter 3, 4 Random Variables ENCS6161 - Probability and Stochastic Processes Concordia University ENCS6161 p.1/47 The Notion of a Random Variable A random variable X is a function that assigns a real
More informationNotes on Measure, Probability and Stochastic Processes. João Lopes Dias
Notes on Measure, Probability and Stochastic Processes João Lopes Dias Departamento de Matemática, ISEG, Universidade de Lisboa, Rua do Quelhas 6, 1200-781 Lisboa, Portugal E-mail address: jldias@iseg.ulisboa.pt
More informationProbability and Measure
Chapter 4 Probability and Measure 4.1 Introduction In this chapter we will examine probability theory from the measure theoretic perspective. The realisation that measure theory is the foundation of probability
More informationMath 565: Introduction to Harmonic Analysis - Spring 2008 Homework # 2, Daewon Chung
Math 565: Introduction to Harmonic Analysis - Spring 8 Homework #, Daewon Chung. Show that, and that Hχ [a, b] : lim H χ [a, b] a b. H χ [a, b] : sup > H χ [a, b] a b. [Solution] Let us pick < min a, b
More information4th Preparation Sheet - Solutions
Prof. Dr. Rainer Dahlhaus Probability Theory Summer term 017 4th Preparation Sheet - Solutions Remark: Throughout the exercise sheet we use the two equivalent definitions of separability of a metric space
More informationMath-Stat-491-Fall2014-Notes-I
Math-Stat-491-Fall2014-Notes-I Hariharan Narayanan October 2, 2014 1 Introduction This writeup is intended to supplement material in the prescribed texts: Introduction to Probability Models, 10th Edition,
More informationLet R be the line parameterized by x. Let f be a complex function on R that is integrable. The Fourier transform ˆf = F f is. e ikx f(x) dx. (1.
Chapter 1 Fourier transforms 1.1 Introduction Let R be the line parameterized by x. Let f be a complex function on R that is integrable. The Fourier transform ˆf = F f is ˆf(k) = e ikx f(x) dx. (1.1) It
More informationX 1 ((, a]) = {ω Ω : X(ω) a} F, which leads us to the following definition:
nna Janicka Probability Calculus 08/09 Lecture 4. Real-valued Random Variables We already know how to describe the results of a random experiment in terms of a formal mathematical construction, i.e. the
More information04. Random Variables: Concepts
University of Rhode Island DigitalCommons@URI Nonequilibrium Statistical Physics Physics Course Materials 215 4. Random Variables: Concepts Gerhard Müller University of Rhode Island, gmuller@uri.edu Creative
More informationMeasure and integration
Chapter 5 Measure and integration In calculus you have learned how to calculate the size of different kinds of sets: the length of a curve, the area of a region or a surface, the volume or mass of a solid.
More informationS chauder Theory. x 2. = log( x 1 + x 2 ) + 1 ( x 1 + x 2 ) 2. ( 5) x 1 + x 2 x 1 + x 2. 2 = 2 x 1. x 1 x 2. 1 x 1.
Sep. 1 9 Intuitively, the solution u to the Poisson equation S chauder Theory u = f 1 should have better regularity than the right hand side f. In particular one expects u to be twice more differentiable
More informationAppendix A : Introduction to Probability and stochastic processes
A-1 Mathematical methods in communication July 5th, 2009 Appendix A : Introduction to Probability and stochastic processes Lecturer: Haim Permuter Scribe: Shai Shapira and Uri Livnat The probability of
More informationProduct measures, Tonelli s and Fubini s theorems For use in MAT4410, autumn 2017 Nadia S. Larsen. 17 November 2017.
Product measures, Tonelli s and Fubini s theorems For use in MAT4410, autumn 017 Nadia S. Larsen 17 November 017. 1. Construction of the product measure The purpose of these notes is to prove the main
More information