Solutions Homework 6
- Andrew O'Brien
October 26, 2015

Solution to Exercise 1.5.9:

Part (a) is easy: we know $E[X \mid Y] = k$ if $X = k$ a.s. The constant function $\varphi(y) \equiv k$ is Borel measurable from the range space of $Y$ to $\mathbb{R}$, and $E[X \mid Y] = \varphi(Y)$ a.s.

For part (b), if $X_1 \le X_2$ a.s., then $\varphi_1(Y) = E[X_1 \mid Y] \le E[X_2 \mid Y] = \varphi_2(Y)$ a.s., so we claim that
$$\varphi_1(y) = E[X_1 \mid Y = y] \le E[X_2 \mid Y = y] = \varphi_2(y), \quad P_Y\text{-a.s.}$$
Letting $A = \{y : \varphi_1(y) > \varphi_2(y)\}$, we have
$$P_Y(A) = P(Y^{-1}(A)) = P(\{\omega : Y(\omega) \in A\}) = P(\{\omega : \varphi_1(Y(\omega)) > \varphi_2(Y(\omega))\}) = 0.$$

For part (c), the obvious conjecture is that
$$E[a_1 X_1 + a_2 X_2 \mid Y = y] = a_1 E[X_1 \mid Y = y] + a_2 E[X_2 \mid Y = y], \quad P_Y\text{-a.s.}$$
We have from the theorem that $E[a_1 X_1 + a_2 X_2 \mid Y] = a_1 E[X_1 \mid Y] + a_2 E[X_2 \mid Y]$, $P$-a.s. Now we know that $E[a_1 X_1 + a_2 X_2 \mid Y]$, $E[X_1 \mid Y]$, and $E[X_2 \mid Y]$ can each be expressed as a function of $Y$, say
$$E[a_1 X_1 + a_2 X_2 \mid Y] = h(Y), \qquad E[X_1 \mid Y] = h_1(Y), \qquad E[X_2 \mid Y] = h_2(Y).$$
(See the discussion beginning in the middle of p. 70.) Our equation above then says $h(Y) = a_1 h_1(Y) + a_2 h_2(Y)$, $P$-a.s. Also, by definition,
$$E[a_1 X_1 + a_2 X_2 \mid Y = y] = h(y), \qquad E[X_1 \mid Y = y] = h_1(y), \qquad E[X_2 \mid Y = y] = h_2(y).$$
So, can't we conclude that $h(y) = a_1 h_1(y) + a_2 h_2(y)$, $P_Y$-a.s.?

Let $A = \{y : h(y) \ne a_1 h_1(y) + a_2 h_2(y)\}$. This is a subset of the range space of $Y$. It is measurable in that space since it is the inverse image of the Borel set $\mathbb{R} \setminus \{0\}$ under the measurable map $h - (a_1 h_1 + a_2 h_2)$. We want to show that $P_Y(A) = 0$. Now
$$P_Y(A) = P\big(Y^{-1}(A)\big) = P(\{\omega : h(Y(\omega)) \ne a_1 h_1(Y(\omega)) + a_2 h_2(Y(\omega))\}).$$
We already observed that this latter event has probability 0 when we noted that $h(Y) = a_1 h_1(Y) + a_2 h_2(Y)$, $P$-a.s. So we are done.

The previous paragraph illustrates one way of proving a result for a conditional expectation of the type $E[X \mid Y = y]$: prove the corresponding result for the conditional expectation $E[X \mid Y]$ and simply translate it over. This works in most cases, so one doesn't have to do separate proofs. The $P$-null set in $\Omega$ where an equality fails automatically becomes a $P_Y$-null set on the range of $Y$, by the same sort of argument as above. However, most students seem to want to derive such results from the defining properties of $E[X \mid Y = y]$. So, for example, we observe that $a_1 E[X_1 \mid Y = y] + a_2 E[X_2 \mid Y = y]$ is a Borel measurable function of $y$ (whose domain is the range space of $Y$), and if $A$ is a measurable set in the range space of $Y$, then
$$\int_{Y^{-1}(A)} (a_1 X_1 + a_2 X_2)\, dP = \int_{Y^{-1}(A)} \big(a_1 E[X_1 \mid Y] + a_2 E[X_2 \mid Y]\big)\, dP = \int_A \big(a_1 E[X_1 \mid Y = y] + a_2 E[X_2 \mid Y = y]\big)\, dP_Y(y),$$
which shows that $a_1 E[X_1 \mid Y = y] + a_2 E[X_2 \mid Y = y]$ satisfies the integral property required to be $E[a_1 X_1 + a_2 X_2 \mid Y = y]$.

Moving on to part (d), it is very tempting to write $E\big[E[X \mid Y = y]\big]$, but this doesn't make sense. For a r.v. $Z$, $E[Z] = \int_\Omega Z\, dP$ is an integral over the underlying probability space, but the domain of the function $E[X \mid Y = y]$ is the range of $Y$, not $\Omega$. Of course, the range of $Y$ carries the probability measure $P_Y$, so it makes sense to write
$$\int_\Lambda E[X \mid Y = y]\, dP_Y(y) = E[X],$$
where $\Lambda$ is the range of $Y$. Since $E[X \mid Y = y]$ evaluated at $y = Y$ is $E[X \mid Y]$, we have by the law of the unconscious statistician that
$$\int_\Lambda E[X \mid Y = y]\, dP_Y(y) = \int_{Y^{-1}(\Lambda)} E[X \mid Y]\, dP = E\big[E[X \mid Y]\big] = E[X],$$
where the last equality follows from part (d) of Theorem 1.5.7.

To deal with part (e) of the theorem, we need to translate $E[X \mid \{\emptyset, \Omega\}]$ into some kind of statement about $E[X \mid Y = y]$: we need to think of it as $E[X \mid Y]$ where $\sigma(Y) = \{\emptyset, \Omega\}$. But this happens if and only if $Y$ is a constant r.v. (Check it out!) Hence, we claim:

If $Y$ is a constant r.v., then $E[X \mid Y = y] = E[X]$, $P_Y$-a.s.

Clearly if $Y \equiv c$, where $c \in \Lambda$ is fixed, then for $h(Y) = E[X \mid Y] = E[X]$ we must have $h(c) = E[X]$, and $h(y)$ can be defined arbitrarily for $y \ne c$. But this means $h(y) = E[X]$, $P_Y$-a.s., since $P_Y = \delta_c$.

Theorem 1.5.7(f) tells us that if $\sigma(X) \subset \sigma(Y)$, then $E[X \mid Y] = X$ a.s. Now, it wouldn't make sense to claim that $E[X \mid Y = y]$ is equal to $X$, since they are functions with different domains. However, if $\sigma(X) \subset \sigma(Y)$, then we know by the factorization theorem that $X = \varphi(Y)$ for some function $\varphi$ whose domain is $\Lambda$, the range of $Y$. Thus, it makes sense to claim:

If $X = \varphi(Y)$ for some measurable $\varphi$, then $E[X \mid Y = y] = \varphi(y)$, $P_Y$-a.s.

The proof is immediate from $\varphi(Y) = X = E[X \mid Y]$, a.s.

Part (g) is a little trickier. Suppose $Y_1$ is some other random element (with range space $(\Lambda_1, \mathcal{G}_1)$, say), and $\sigma(Y_1) \subset \sigma(Y)$. We know (at least if the range space of $Y_1$ is $(\mathbb{R}, \mathcal{B})$) that then $Y_1 = \psi(Y)$ for some $\psi$, by the factorization theorem. So let us just assume $Y_1 = \psi(Y)$, with $\psi : (\Lambda, \mathcal{G}) \to (\Lambda_1, \mathcal{G}_1)$. Now it makes no sense to write $E\big[E[X \mid Y_1 = y_1] \mid Y = y\big]$, since the domain of $E[X \mid Y_1 = y_1]$ is $\Lambda_1$, and not $\Omega$. (Recall that when we write $E[Z \mid Y = y]$, $Z$ must be a r.v., i.e. a mapping from $\Omega$ to $\mathbb{R}$.) Also, how are we to match up the given values $y_1$ and $y$? If we are given $Y = y$ and $Y_1 = \psi(Y)$, then it must be that $Y_1 = \psi(y)$. So let's try the following:

If $Y_1 = \psi(Y)$, then $E\big[E[X \mid Y_1] \mid Y = y\big] = E[X \mid Y_1 = \psi(y)]$, $P_Y$-a.s.

Of course, we haven't fully interpreted $E\big[E[X \mid Y_1] \mid Y = y\big]$ into the requisite kind of conditional expectation, but there is no way to do so. Also, what does the r.h.s. mean? We are taking the function $h_1$ given by $h_1(y_1) = E[X \mid Y_1 = y_1]$ and composing it with $\psi$; i.e., the r.h.s. is $h_1 \circ \psi$ evaluated at $y$, and $h_1 \circ \psi$ is a map from $\Lambda$ to $\mathbb{R}$, so the domains and ranges of the two sides match. Again, the proof is trivial: $E[X \mid Y_1] = h_1(Y_1) = h_1(\psi(Y)) = (h_1 \circ \psi)(Y)$, so we apply our result above on part (f) of the theorem.

Now for the other side of the law of successive conditioning, we need to deal with $E\big[E[X \mid Y] \mid Y_1 = y_1\big]$ when $Y_1 = \psi(Y)$. We could just write:

If $Y_1 = \psi(Y)$, then $E\big[E[X \mid Y] \mid Y_1 = y_1\big] = E[X \mid Y_1 = y_1]$, $P_{Y_1}$-a.s.

Certainly everything makes sense: the l.h.s. and r.h.s. of the equation are the same kind of objects (both sides are functions of an argument $y_1$ varying over the domain $\Lambda_1$, with values in $\mathbb{R}$). And we know that $E\big[E[X \mid Y] \mid Y_1\big] = E[X \mid Y_1]$ a.s. in this case, which proves the result. This one is easy.

Finally, part (h) is fairly straightforward. If $X_1$ is $\sigma(Y)$-measurable, then $X_1 = \psi(Y)$ for some measurable $\psi$, and we claim that $E[\psi(Y) X_2 \mid Y = y] = \psi(y) E[X_2 \mid Y = y]$, $P_Y$-a.s. We see that $\varphi(y) = \psi(y) E[X_2 \mid Y = y]$ is a Borel measurable function on the range of $Y$ (since it is the product of two such functions), and $\varphi(Y) = \psi(Y) E[X_2 \mid Y] = E[\psi(Y) X_2 \mid Y]$, where the last equality follows from the result given in the theorem.

The translation and proof of the convergence theorem for conditional expectations is straightforward. For part (a), if $0 \le X_n \uparrow X$, then we claim $E[X_n \mid Y = y] \uparrow E[X \mid Y = y]$, $P_Y$-a.s. We know that $\varphi_n(Y) = E[X_n \mid Y] \uparrow E[X \mid Y] = \varphi(Y)$, a.s. Simply translate the null set to the range space of $Y$ as in part (b) of the previous theorem. The dominated convergence theorem is similar.
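The translation identities proved above, linearity in part (c), total expectation in part (d), and successive conditioning in part (g), can be checked exactly on a small finite probability space, where $E[X \mid Y = y]$ is just the average of $X$ over the level set $\{Y = y\}$. The following sketch is illustrative only: the space $\Omega$, the variables $Y$, $X_1$, $X_2$, and the map $\psi$ are all invented for the example, not taken from the text.

```python
from fractions import Fraction

# Finite probability space Omega = {0,...,5} with the uniform measure.
# Fractions keep every computation exact, so equalities can be tested
# literally rather than approximately.
omega = range(6)
P = {w: Fraction(1, 6) for w in omega}

# Illustrative random variables (not from the text).
Y = {w: w % 3 for w in omega}        # Y takes the values 0, 1, 2
X1 = {w: w for w in omega}
X2 = {w: w * w for w in omega}

def cond_exp(X, G):
    """Return y -> E[X | G = y]: average X over each level set of G."""
    h = {}
    for g in set(G.values()):
        block = [w for w in omega if G[w] == g]
        mass = sum(P[w] for w in block)
        h[g] = sum(X[w] * P[w] for w in block) / mass
    return h

# Part (c): E[a1 X1 + a2 X2 | Y = y] = a1 E[X1|Y=y] + a2 E[X2|Y=y].
a1, a2 = 2, 3
Z = {w: a1 * X1[w] + a2 * X2[w] for w in omega}
h, h1, h2 = cond_exp(Z, Y), cond_exp(X1, Y), cond_exp(X2, Y)
assert all(h[y] == a1 * h1[y] + a2 * h2[y] for y in h)

# Part (g): with Y1 = psi(Y) coarser than Y,
# E[ E[X1|Y] | Y1 = y1 ] = E[X1 | Y1 = y1] for every y1.
def psi(y):
    return y % 2
Y1 = {w: psi(Y[w]) for w in omega}
EX1_given_Y = {w: h1[Y[w]] for w in omega}   # the r.v. E[X1|Y]
assert cond_exp(EX1_given_Y, Y1) == cond_exp(X1, Y1)

# Part (d): integrating E[X1 | Y = y] against P_Y recovers E[X1].
EX1 = sum(X1[w] * P[w] for w in omega)
assert sum(h1[Y[w]] * P[w] for w in omega) == EX1

print("linearity, successive conditioning, and total expectation verified")
```

Because every level set of $Y$ here carries positive mass, the $P_Y$-null-set caveats in the solutions are invisible: on atoms of positive probability the version $h_1$ is uniquely determined.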
Solution to Exercise: Let $Z \in \mathcal{Z}$ (the class of $\sigma(Y)$-measurable predictors); then $\mathrm{MSPE}(Z)$ is given by
$$\begin{aligned}
E\big[(X - Z)^2\big] &= E\Big[\big(X - E[X \mid Y] + E[X \mid Y] - Z\big)^2\Big] \\
&\qquad \text{(adding and subtracting } E[X \mid Y]) \\
&= E\big[(X - E[X \mid Y])^2\big] + 2E\big[(X - E[X \mid Y])(E[X \mid Y] - Z)\big] + E\big[(E[X \mid Y] - Z)^2\big] \\
&\qquad \text{(algebra and linearity of expectation)} \\
&= E\big[(X - E[X \mid Y])^2\big] + 2E\Big[E\big[(X - E[X \mid Y])(E[X \mid Y] - Z) \mid Y\big]\Big] + E\big[(E[X \mid Y] - Z)^2\big] \\
&\qquad \text{(total expectation)} \\
&= E\big[(X - E[X \mid Y])^2\big] + 2E\Big[(E[X \mid Y] - Z)\, E\big[X - E[X \mid Y] \mid Y\big]\Big] + E\big[(E[X \mid Y] - Z)^2\big] \\
&\qquad \text{(factorization result, Theorem 1.5.7(h), since both } E[X \mid Y] \text{ and } Z \text{ are } \sigma(Y)\text{-measurable)} \\
&= E\big[(X - E[X \mid Y])^2\big] + 2E\Big[(E[X \mid Y] - Z)\big(E[X \mid Y] - E[X \mid Y]\big)\Big] + E\big[(E[X \mid Y] - Z)^2\big] \\
&\qquad \text{(linearity of } E[\,\cdot \mid \mathcal{G}] \text{ and Theorem 1.5.7(f) applied to } E[X \mid \mathcal{G}]) \\
&= E\big[(X - E[X \mid Y])^2\big] + E\big[(E[X \mid Y] - Z)^2\big] \\
&= \mathrm{MSPE}\big(E[X \mid Y]\big) + E\big[(E[X \mid Y] - Z)^2\big].
\end{aligned}$$
Thus
$$\mathrm{MSPE}(Z) - \mathrm{MSPE}\big(E[X \mid Y]\big) = E\big[(E[X \mid Y] - Z)^2\big] \ge 0,$$
so the result follows. The desired uniqueness result is evident as well: $E[(E[X \mid Y] - Z)^2] > 0$ unless $(E[X \mid Y] - Z)^2 = 0$ a.s., which is equivalent to $Z = E[X \mid Y]$, a.s.

REMARK. Note the similarities between the calculations in this solution and those in Exercise 1.5.7(b).

Solution to Exercise: Let $p(B, x) = \delta_x(B) = I_B(x)$, for $x \in \mathbb{R}$ and $B \in \mathcal{B}$. Clearly $p(\cdot, x)$ is a Borel probability measure for each $x$, so condition (ii) in the definition of conditional distributions (Definition 1.5.2) holds. Turning to condition (i), of course $P[X \in B \mid X] = E[I_B(X) \mid X]$ by the definition of conditional probability, and by Theorem 1.5.7(f), $E[I_B(X) \mid X] = I_B(X)$, a.s., so $E[I_B(X) \mid X = x] = I_B(x)$, $P_X$-a.s. But, as noted above, this is just $p(B, x)$. This is intuitively the right answer: given $X = x$, we should have a conditional probability totally concentrated on $x$, which is exactly what $\delta_x$ does.

Solution to Exercise: We are looking for a conditional distribution of a two-dimensional random vector, so our conditional distribution has to live on two-dimensional space. Note that we are given that the first component $X_1 = x_1$, so its marginal conditional distribution should be $\delta_{x_1}$ (see the result in the preceding exercise). The marginal conditional distribution for $X_2$ should be the obvious answer: $P_{X_2 \mid X_1}(\cdot \mid x_1) = \mathrm{Law}[X_2 \mid X_1 = x_1]$. Note that any r.v. is independent of a degenerate r.v., so there is only one way to make a joint distribution with these marginals:
$$p(\cdot, x_1) = \delta_{x_1} \otimes \mathrm{Law}[X_2 \mid X_1 = x_1].$$
Let's check that this satisfies the requisite properties as spelled out in Remark 1.5.7(a):

(1) For all $x_1$, $p(\cdot, x_1)$ is a Borel p.m. on $(\mathbb{R}^2, \mathcal{B}^2)$, since it is the product of two Borel p.m.'s on $\mathbb{R}$.

(2) We want to check that for all $B \in \mathcal{B}^2$, $p(B, x_1)$ is a measurable function of $x_1$. Note that
$$\begin{aligned}
p(B, x_1) &= \int_{\mathbb{R}^2} I_B(\xi_1, \xi_2)\, d\big[\delta_{x_1} \otimes P_{X_2 \mid X_1}(\cdot \mid x_1)\big](\xi_1, \xi_2) \\
&= \int \Big[ \int I_B(\xi_1, \xi_2)\, d\delta_{x_1}(\xi_1) \Big]\, dP_{X_2 \mid X_1}(\cdot \mid x_1)(\xi_2) \\
&= \int I_B(x_1, \xi_2)\, dP_{X_2 \mid X_1}(\cdot \mid x_1)(\xi_2) \\
&= P_{X_2 \mid X_1}\big(B_1(x_1) \mid x_1\big) = P[X_2 \in B_1(x_1) \mid X_1 = x_1],
\end{aligned}$$
where $B_1(x_1) = \{x_2 : (x_1, x_2) \in B\}$. Note that for each $x_1$, this section is measurable; this is implicitly one of the conclusions of Fubini's theorem, that we can fix the value of one variable and the function remains measurable in the other variable. Now, the last expression in our little calculation above, namely $P[X_2 \in B_1(x_1) \mid X_1 = x_1]$, is a Borel function of $x_1$ by the definition of (this kind of) conditional probability (expectation).

(3) Finally, we want to show that for all Borel sets $A \subset \mathbb{R}$ and $B \subset \mathbb{R}^2$,
$$P[X_1 \in A \ \&\ (X_1, X_2) \in B] = \int I_A(x_1)\, p(B, x_1)\, dP_{X_1}(x_1).$$
If we write out $p(B, x_1)$ as an integral and carry out the calculation as in the previous step, we obtain
$$\begin{aligned}
\int I_A(x_1)\, p(B, x_1)\, dP_{X_1}(x_1) &= \int I_A(x_1) \Big[ \int I_B(x_1, x_2)\, dP_{X_2 \mid X_1}(\cdot \mid x_1)(x_2) \Big]\, dP_{X_1}(x_1) \\
&= \iint I_A(x_1) I_B(x_1, x_2)\, dP_{X_2 \mid X_1}(\cdot \mid x_1)(x_2)\, dP_{X_1}(x_1) \\
&= \int E\big[I_A(X_1) I_B(X_1, X_2) \mid X_1 = x_1\big]\, dP_{X_1}(x_1) \quad \text{(by equation (1.71) in Theorem 1.5.6)} \\
&= E\Big[E\big[I_A(X_1) I_B(X_1, X_2) \mid X_1\big]\Big] = E\big[I_A(X_1) I_B(X_1, X_2)\big] \\
&= P[X_1 \in A \ \&\ (X_1, X_2) \in B],
\end{aligned}$$
which is the desired result.
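For a discrete joint distribution, the defining property (3) can also be confirmed by brute force over every event $A$ and $B$. The sketch below is illustrative only: the joint law of $(X_1, X_2)$ is invented for the example, and $p(B, x_1)$ is computed as the mass that $\mathrm{Law}[X_2 \mid X_1 = x_1]$ assigns to the section $B_1(x_1)$, exactly as in step (2) above.

```python
from fractions import Fraction
from itertools import combinations

# Illustrative joint distribution of (X1, X2) on {0,1} x {0,1}.
joint = {
    (0, 0): Fraction(1, 4), (0, 1): Fraction(1, 4),
    (1, 0): Fraction(1, 6), (1, 1): Fraction(1, 3),
}

# Marginal of X1 and the conditional law of X2 given X1 = x1.
PX1 = {0: Fraction(0), 1: Fraction(0)}
for (x1, _), p in joint.items():
    PX1[x1] += p
law_X2 = {x1: {x2: joint[(x1, x2)] / PX1[x1] for x2 in (0, 1)}
          for x1 in (0, 1)}

def p_cond(B, x1):
    """(delta_{x1} x Law[X2|X1=x1])(B): mass of the section B1(x1)."""
    return sum(q for x2, q in law_X2[x1].items() if (x1, x2) in B)

def powerset(points):
    pts = list(points)
    return [set(c) for r in range(len(pts) + 1)
            for c in combinations(pts, r)]

# Property (3): P[X1 in A, (X1,X2) in B] = sum_{x1 in A} p(B,x1) P_X1(x1),
# checked for every A subset of {0,1} and every B subset of {0,1}^2.
for A in powerset((0, 1)):
    for B in powerset(joint):
        lhs = sum(p for (x1, x2), p in joint.items()
                  if x1 in A and (x1, x2) in B)
        rhs = sum(p_cond(B, x1) * PX1[x1] for x1 in A)
        assert lhs == rhs

print("defining property (3) holds for all 4 x 16 choices of A and B")
```

As with the earlier exercises, exact rational arithmetic turns the a.s. identities of the text into literal equalities, since every atom here has positive mass.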
More informationCartesian Products and Relations
Cartesian Products and Relations Definition (Cartesian product) If A and B are sets, the Cartesian product of A and B is the set A B = {(a, b) : (a A) and (b B)}. The following points are worth special
More informationMATH 51H Section 4. October 16, Recall what it means for a function between metric spaces to be continuous:
MATH 51H Section 4 October 16, 2015 1 Continuity Recall what it means for a function between metric spaces to be continuous: Definition. Let (X, d X ), (Y, d Y ) be metric spaces. A function f : X Y is
More information2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?
MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due 9/5). Prove that every countable set A is measurable and µ(a) = 0. 2 (Bonus). Let A consist of points (x, y) such that either x or y is
More informationHarmonic Oscillator with raising and lowering operators. We write the Schrödinger equation for the harmonic oscillator in one dimension as follows:
We write the Schrödinger equation for the harmonic oscillator in one dimension as follows: H ˆ! = "!2 d 2! + 1 2µ dx 2 2 kx 2! = E! T ˆ = "! 2 2µ d 2 dx 2 V ˆ = 1 2 kx 2 H ˆ = ˆ T + ˆ V (1) where µ is
More informationHarmonic Oscillator I
Physics 34 Lecture 7 Harmonic Oscillator I Lecture 7 Physics 34 Quantum Mechanics I Monday, February th, 008 We can manipulate operators, to a certain extent, as we would algebraic expressions. By considering
More information8. TRANSFORMING TOOL #1 (the Addition Property of Equality)
8 TRANSFORMING TOOL #1 (the Addition Property of Equality) sentences that look different, but always have the same truth values What can you DO to a sentence that will make it LOOK different, but not change
More informationFOUNDATIONS OF ALGEBRAIC GEOMETRY CLASS 37
FOUNDATIONS OF ALGEBRAIC GEOMETRY CLASS 37 RAVI VAKIL CONTENTS 1. Motivation and game plan 1 2. The affine case: three definitions 2 Welcome back to the third quarter! The theme for this quarter, insofar
More informationOctober 16, 2018 Notes on Cournot. 1. Teaching Cournot Equilibrium
October 1, 2018 Notes on Cournot 1. Teaching Cournot Equilibrium Typically Cournot equilibrium is taught with identical zero or constant-mc cost functions for the two firms, because that is simpler. I
More informationa. Do you think the function is linear or non-linear? Explain using what you know about powers of variables.
8.5.8 Lesson Date: Graphs of Non-Linear Functions Student Objectives I can examine the average rate of change for non-linear functions and learn that they do not have a constant rate of change. I can determine
More information1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989),
Real Analysis 2, Math 651, Spring 2005 April 26, 2005 1 Real Analysis 2, Math 651, Spring 2005 Krzysztof Chris Ciesielski 1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer
More information2 Systems of Linear Equations
2 Systems of Linear Equations A system of equations of the form or is called a system of linear equations. x + 2y = 7 2x y = 4 5p 6q + r = 4 2p + 3q 5r = 7 6p q + 4r = 2 Definition. An equation involving
More informationUniversal examples. Chapter The Bernoulli process
Chapter 1 Universal examples 1.1 The Bernoulli process First description: Bernoulli random variables Y i for i = 1, 2, 3,... independent with P [Y i = 1] = p and P [Y i = ] = 1 p. Second description: Binomial
More informationModern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur
Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur Lecture 02 Groups: Subgroups and homomorphism (Refer Slide Time: 00:13) We looked
More informationSequence convergence, the weak T-axioms, and first countability
Sequence convergence, the weak T-axioms, and first countability 1 Motivation Up to now we have been mentioning the notion of sequence convergence without actually defining it. So in this section we will
More informationSynthetic Probability Theory
Synthetic Probability Theory Alex Simpson Faculty of Mathematics and Physics University of Ljubljana, Slovenia PSSL, Amsterdam, October 2018 Gian-Carlo Rota (1932-1999): The beginning definitions in any
More informationLecture 4 October 18th
Directed and undirected graphical models Fall 2017 Lecture 4 October 18th Lecturer: Guillaume Obozinski Scribe: In this lecture, we will assume that all random variables are discrete, to keep notations
More informationLinear Algebra in Hilbert Space
Physics 342 Lecture 16 Linear Algebra in Hilbert Space Lecture 16 Physics 342 Quantum Mechanics I Monday, March 1st, 2010 We have seen the importance of the plane wave solutions to the potentialfree Schrödinger
More informationExpectation of geometric distribution
Expectation of geometric distribution What is the probability that X is finite? Can now compute E(X): Σ k=1f X (k) = Σ k=1(1 p) k 1 p = pσ j=0(1 p) j = p 1 1 (1 p) = 1 E(X) = Σ k=1k (1 p) k 1 p = p [ Σ
More information