ECE534, Spring 2018: Solutions for Problem Set #4 Due Friday April 6, 2018


1. MMSE Estimation, Data Processing and Innovations

The random variables $X, Y, Z$ on a common probability space $(\Omega, \mathcal{F}, P)$ are said to form a Markov chain in that order, denoted by $X \to Y \to Z$, if the conditional distribution of $Z$ depends only on $Y$ and is conditionally independent of $X$. In other words, $X, Y, Z$ form a Markov chain $X \to Y \to Z$ if for any two events $A, B \in \mathcal{F}$,
$$P(X \in A, Z \in B \mid Y) = P(X \in A \mid Y)\, P(Z \in B \mid Y).$$
Assuming that $X, Y, Z$ are jointly continuous, this condition can be equivalently described by
$$f_{XYZ}(x, y, z) = f_X(x)\, f_{Y|X}(y \mid x)\, f_{Z|Y}(z \mid y).$$
Moreover, for any two random variables $S, T$, define the average conditional variance of $S$ given $T$ as the mean square error of the conditional mean estimator of $S$ given $T$:
$$E[\mathrm{Var}(S \mid T)] = \mathrm{MMSE}(S \mid T) = E\big[(S - E[S \mid T])^2\big],$$
where $\mathrm{Var}(S \mid T) = E[(S - E[S \mid T])^2 \mid T]$. Here, we assume that all the involved expectations exist (i.e., the random variables $S, T$ are second order).

(a) Suppose that $X \to Y \to Z$. Show that
$$E[\mathrm{Var}(X \mid Z)] - E[\mathrm{Var}(X \mid Y)] = E\big[(E[X \mid Z] - E[X \mid Y])^2\big].$$
Conclude that $E[\mathrm{Var}(X \mid Y)] \le E[\mathrm{Var}(X \mid Z)]$, i.e., the MMSE increases along Markov chains.

Note: The inequality in this part is an analogue of the data processing inequality in information theory: if $X \to Y \to Z$, then $I(X; Y) \ge I(X; Z)$, where $I(S; T)$ is the mutual information of two random variables $S, T$.

Hint: To solve this part, you need to use the Markov property of $X \to Y \to Z$ and to think in terms of the Orthogonality Principle.

There are alternative ways to argue on this problem; all of them combine the Markov property with the orthogonality principle and related results. Directly using the definitions, we have:

$$\begin{aligned}
E[\mathrm{Var}(X \mid Z)] - E[\mathrm{Var}(X \mid Y)]
&= E[(X - E[X \mid Z])^2] - E[(X - E[X \mid Y])^2] \\
&= E\big[-2X E[X \mid Z] + (E[X \mid Z])^2 + 2X E[X \mid Y] - (E[X \mid Y])^2\big] \\
&= E\big[(E[X \mid Z])^2 + (E[X \mid Y])^2 - 2E[X \mid Y]E[X \mid Z] \\
&\qquad\quad + 2E[X \mid Y]E[X \mid Z] - 2(E[X \mid Y])^2 - 2X E[X \mid Z] + 2X E[X \mid Y]\big] \\
&= E\big[(E[X \mid Z] - E[X \mid Y])^2\big] - 2E\big[(E[X \mid Y] - X)(E[X \mid Y] - E[X \mid Z])\big].
\end{aligned}$$
To finish, we need to show that $2E[(E[X \mid Y] - X)(E[X \mid Y] - E[X \mid Z])] = 0$. Note that
$$\begin{aligned}
E[(E[X \mid Y] - X)(E[X \mid Y] - E[X \mid Z])]
&= E[(E[X \mid Y] - X)\,E[X \mid Y]] + E[E[X \mid Z](X - E[X \mid Y])] \\
&= E[E[X \mid Z](X - E[X \mid Y])] && \text{(orthogonality principle)} \\
&= E[E[X \mid Z](X - E[X \mid Y, Z])] && \text{(Markov property: $X \perp Z$ given $Y$)} \\
&= 0 && \text{(projection onto nested classes).}
\end{aligned}$$
A different way to see this is the following:
$$\begin{aligned}
E[E[X \mid Z](X - E[X \mid Y])]
&= E[E[X \mid Z]\,X] - E[E[X \mid Z]\,E[X \mid Y]] \\
&= E[(E[X \mid Z])^2] - E\big[E[X \mid Z]\,E[E[X \mid Y] \mid Z]\big] \\
&= E[(E[X \mid Z])^2] - E\big[E[X \mid Z]\,E[E[X \mid Y, Z] \mid Z]\big] && \text{(Markov property)} \\
&= E[(E[X \mid Z])^2] - E[(E[X \mid Z])^2] = 0. && \text{(smoothing property of conditional expectation)}
\end{aligned}$$
Alternatively, one may use Proposition 3.4 in the textbook to directly obtain:
$$\begin{aligned}
E[(X - E[X \mid Z])^2] &= E[(X - E[X \mid Y, Z])^2] + E[(E[X \mid Y, Z] - E[X \mid Z])^2] \\
&= E[(X - E[X \mid Y])^2] + E[(E[X \mid Y] - E[X \mid Z])^2]. && \text{(Markov property)}
\end{aligned}$$
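As a quick numerical sanity check of part (a) (not part of the original solution), the sketch below simulates a jointly Gaussian Markov chain $X \to Y \to Z$ with $Y = X + N_1$, $Z = Y + N_2$, where the conditional means are linear and available in closed form; the noise variances, sample size, and seed are illustrative choices.

```python
import numpy as np

# Sanity check of part (a) on a Gaussian Markov chain X -> Y -> Z:
# X ~ N(0,1), Y = X + N(0, s1), Z = Y + N(0, s2), all noises independent,
# so E[X|Y] = Y/(1+s1) and E[X|Z] = Z/(1+s1+s2).
rng = np.random.default_rng(0)
n, s1, s2 = 1_000_000, 0.5, 1.0          # illustrative noise variances

X = rng.standard_normal(n)
Y = X + np.sqrt(s1) * rng.standard_normal(n)
Z = Y + np.sqrt(s2) * rng.standard_normal(n)

xhat_Y = Y / (1.0 + s1)                  # MMSE estimate of X from Y
xhat_Z = Z / (1.0 + s1 + s2)             # MMSE estimate of X from Z

lhs = np.mean((X - xhat_Z) ** 2) - np.mean((X - xhat_Y) ** 2)   # MMSE(X|Z) - MMSE(X|Y)
rhs = np.mean((xhat_Z - xhat_Y) ** 2)                           # E[(E[X|Z] - E[X|Y])^2]
print(lhs, rhs)   # the two values agree up to Monte Carlo error, and both are nonnegative
```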

[Note: Parts (b) and (c) are bonus questions.] In the remainder of this problem, all random variables are finite-variance, zero-mean, circularly-symmetric complex Gaussian random variables. (Note: We only assume complex Gaussian random variables in order to state the following results in full generality. No complex calculations are required for the solution of this part. Moreover, this part aims to show that the geometric intuition does not change when the involved random variables are complex-valued.) We use calligraphic letters, e.g., $\mathcal{X} = \{X_i\}$, to denote finite sets of such variables. On sets of Gaussian random variables, we will assume that their statistics are jointly Gaussian.

The set of all complex linear combinations of a given finite set $\mathcal{X}$ of finite-variance, zero-mean, circularly-symmetric jointly Gaussian complex random variables is a complex vector space $\mathcal{G}$. Every element of $\mathcal{G}$ is a finite-variance, zero-mean, circularly-symmetric complex Gaussian random variable, and every subset of $\mathcal{G}$ is jointly Gaussian. The zero vector of $\mathcal{G}$ is the unique zero variable $0$. The dimension of $\mathcal{G}$ is at most the cardinality $|\mathcal{X}|$ of $\mathcal{X}$. If an inner product is defined on $\mathcal{G}$ as the correlation $\langle X, Y \rangle = E[XY^*]$ ($^*$ denotes complex conjugation), then $\mathcal{G}$ becomes a Hilbert space. The squared norm of $X \in \mathcal{G}$ is therefore its variance, i.e., $\|X\|^2 = E[|X|^2]$, where $|\cdot|$ denotes the complex modulus. The geometry of $\mathcal{G}$ is totally determined by the matrix of inner products between the elements of $\mathcal{X}$, i.e., by the autocorrelation or Gram matrix of $\mathcal{X}$, $R_{\mathcal{X}\mathcal{X}} = [E[X_i X_j^*]]$. Two random variables $X, Y \in \mathcal{G}$ are orthogonal if $\langle X, Y \rangle = E[XY^*] = 0$. For jointly Gaussian random variables, orthogonality means independence. If $\langle X, Y \rangle = 0$, the Pythagorean Theorem holds: $\|X + Y\|^2 = \|X\|^2 + \|Y\|^2$.

Given any subset $\mathcal{Z} \subseteq \mathcal{G}$, the closure $\overline{\mathcal{Z}}$ of $\mathcal{Z}$, or the subspace generated by $\mathcal{Z}$, is the set of all linear combinations of elements of $\mathcal{Z}$. Also, the orthogonal complement $\mathcal{Z}^\perp$ of $\mathcal{Z}$ is defined as $\mathcal{Z}^\perp = \{X \in \mathcal{G} : \langle X, Z \rangle = 0 \ \forall Z \in \mathcal{Z}\}$; $\mathcal{Z}^\perp$ is also a subspace of $\mathcal{G}$. Moreover, $0$ is the only common element of $\overline{\mathcal{Z}}$ and $\mathcal{Z}^\perp$. The key geometric property of the Hilbert space $\mathcal{G}$ is the projection theorem: if $\mathcal{V}$ is a subspace of $\mathcal{G}$ and $\mathcal{V}^\perp$ is its orthogonal complement, then every $X \in \mathcal{G}$ can be written as $X = X_{\mathcal{V}} + X_{\mathcal{V}^\perp}$ for a unique $X_{\mathcal{V}} \in \mathcal{V}$ and a unique $X_{\mathcal{V}^\perp} \in \mathcal{V}^\perp$; $X_{\mathcal{V}}$ and $X_{\mathcal{V}^\perp}$ are called the projections of $X$ onto $\mathcal{V}$ and $\mathcal{V}^\perp$, respectively. Also, the Pythagorean Theorem implies that $\|X\|^2 = \|X_{\mathcal{V}}\|^2 + \|X_{\mathcal{V}^\perp}\|^2$.

Let $\mathcal{X}$ be a subset of $\mathcal{G}$ and $\overline{\mathcal{X}}$ be the subspace generated by the elements of $\mathcal{X}$. An orthogonal basis of $\overline{\mathcal{X}}$ can be found by a recursive procedure known as Gram-Schmidt orthogonalization. Let $X_1, X_2, \ldots$ denote the elements of $\mathcal{X}$ and let $\overline{\mathcal{X}}_{i-1}$ be the subspace generated by $\{X_1, X_2, \ldots, X_{i-1}\}$, with $\overline{\mathcal{X}}_0 = \{0\}$ (the trivial subspace). For the $i$th iteration, we use the Projection Theorem to write
$$X_i = (X_i)_{\overline{\mathcal{X}}_{i-1}} + (X_i)_{\overline{\mathcal{X}}_{i-1}^\perp}.$$
We now have that $(X_i)_{\overline{\mathcal{X}}_{i-1}^\perp} = 0$ if and only if $\overline{\mathcal{X}}_{i-1} = \overline{\mathcal{X}}_i$, and therefore $X_i$ may be excluded from the generating set $\mathcal{X}$ without changing $\overline{\mathcal{X}}$. If $(X_i)_{\overline{\mathcal{X}}_{i-1}^\perp} \neq 0$, we can replace $X_i$ by the variable $E_i = (X_i)_{\overline{\mathcal{X}}_{i-1}^\perp}$ in the generating set of $\overline{\mathcal{X}}$ without changing $\overline{\mathcal{X}}$. Thus, the space generated by $\overline{\mathcal{X}}_{i-1} \cup \{E_i\}$ is still $\overline{\mathcal{X}}_i$, but $E_i \perp \overline{\mathcal{X}}_{i-1}$.

$E_i$ is called an innovation variable. Summarizing, given a generating set $\mathcal{X} = \{X_1, X_2, \ldots\}$ of some subspace $\overline{\mathcal{X}}$ of $\mathcal{G}$, we can reduce $\mathcal{X}$ to a linearly independent set generating the same subspace $\overline{\mathcal{X}}$. Assuming therefore, without loss of generality, that $\mathcal{X}$ is a linearly independent set, we can find an orthogonal basis $\mathcal{E} = \{E_1, E_2, \ldots\}$ of $\overline{\mathcal{X}}$ such that
$$E_i = (X_i)_{\overline{\mathcal{X}}_{i-1}^\perp} = X_i - (X_i)_{\overline{\mathcal{X}}_{i-1}},$$
which corresponds to the innovation set. Using the above introductory material, try to answer the following:

(b) Let $X$ and the sets $\mathcal{Y}, \mathcal{Z}$ be jointly Gaussian. The MMSE estimate of $X$ based on $\mathcal{Y}$ and $\mathcal{Z}$ is $X_{\mathcal{YZ}}$, which corresponds to the projection of $X$ onto the subspace generated by the variables in $\mathcal{Y}$ and $\mathcal{Z}$ collectively. Show that
$$X_{\mathcal{YZ}} = X_{\mathcal{Y}} + X_{\tilde{\mathcal{Z}}},$$
where $X_{\mathcal{Y}}$ is the projection of $X$ onto $\overline{\mathcal{Y}}$ and $\tilde{\mathcal{Z}} = (\overline{\mathcal{Z}})_{\overline{\mathcal{Y}}^\perp}$ is the projection of $\overline{\mathcal{Z}}$ onto the orthogonal complement of $\overline{\mathcal{Y}}$ (the innovation subspace of $\mathcal{Z}$ given $\mathcal{Y}$).

$X_{\mathcal{YZ}}$ is the projection of $X$ onto the subspace generated by the variables in $\mathcal{Y}$ and $\mathcal{Z}$; denote this subspace by $\overline{\mathcal{YZ}}$ (it is the sum of the subspaces $\overline{\mathcal{Y}}$ and $\overline{\mathcal{Z}}$). This subspace can be decomposed into the sum of two orthogonal subspaces:
$$\overline{\mathcal{Y}} + \overline{\mathcal{Z}} = \overline{\mathcal{Y}} \oplus \tilde{\mathcal{Z}}, \qquad \tilde{\mathcal{Z}} = (\overline{\mathcal{Z}})_{\overline{\mathcal{Y}}^\perp}.$$
Therefore, applying the projection theorem to these two orthogonal subspaces,
$$X_{\mathcal{YZ}} = X_{\mathcal{Y}} + X_{\tilde{\mathcal{Z}}}.$$

(c) Generalize the expression in the previous part to a general expression for the case where we wish to estimate $X$ from a sequence of observations $\mathcal{Y} = \{Y_1, Y_2, \ldots\}$, when $X, \mathcal{Y}$ are jointly Gaussian.

By generalizing part (b), we have
$$X_{\mathcal{Y}} = X_{\tilde{\mathcal{Y}}_1} + X_{\tilde{\mathcal{Y}}_2} + \cdots + X_{\tilde{\mathcal{Y}}_i} + \cdots,$$
where $\tilde{\mathcal{Y}}_i$ is the innovation subspace contributed by the new single observation $Y_i$ (the new information), i.e., the projection of the span of $Y_i$ onto the orthogonal complement of $\overline{\mathcal{Y}}^{i-1}$, the subspace spanned by all previous observations up to time $i-1$ (with $\tilde{\mathcal{Y}}_1$ simply the span of $Y_1$).
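To make the innovations construction concrete, here is a small numerical sketch (not part of the original solution): starting from samples of a correlated Gaussian vector, it forms each innovation $E_i$ by least-squares projection of $X_i$ onto its predecessors and checks that the resulting variables are empirically uncorrelated. The covariance matrix, sample size, and seed are arbitrary illustrative choices.

```python
import numpy as np

# Numerical sketch of the innovations (Gram-Schmidt) construction:
# E_i is the part of X_i orthogonal to span(X_1, ..., X_{i-1}),
# computed here by least-squares projection on Monte Carlo samples.
rng = np.random.default_rng(1)
n, d = 200_000, 4

A = rng.standard_normal((d, d))
cov = A @ A.T                                           # an arbitrary covariance matrix
X = rng.multivariate_normal(np.zeros(d), cov, size=n)   # rows are i.i.d. samples of (X_1,...,X_d)

E = np.zeros_like(X)
E[:, 0] = X[:, 0]                                       # E_1 = X_1
for i in range(1, d):
    past = X[:, :i]                                     # samples of X_1, ..., X_i
    coeffs, *_ = np.linalg.lstsq(past, X[:, i], rcond=None)
    E[:, i] = X[:, i] - past @ coeffs                   # innovation E_i = X_i - (X_i) projected onto the past

# The Gram matrix of the innovations is (approximately) diagonal.
print(np.round(E.T @ E / n, 3))
```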

2. DTMCs: A First Example

Let $\{X_n\}$ be a Discrete-Time Markov Chain (DTMC) taking values in $\{1, 2\}$ with probability transition matrix $P$, where $P_{ij} = P(X_{n+1} = j \mid X_n = i)$. Let $\{Y_n\}$ be a different random process defined as
$$Y_n = \begin{cases} X_n, & \text{with probability } 0.9, \\ X_n - 1, & \text{with probability } 0.1. \end{cases}$$
Find $\lim_{n \to \infty} P(X_n = 1 \mid Y_n = 1)$.

By Bayes' rule,
$$P(X_n = 1 \mid Y_n = 1) = \frac{P(Y_n = 1 \mid X_n = 1)\, P(X_n = 1)}{P(Y_n = 1)}.$$
Also, by the definition of $Y_n$,
$$P(Y_n = 1 \mid X_n = 1) = P(Y_n = X_n) = 0.9, \qquad P(Y_n = 1 \mid X_n = 2) = P(Y_n = X_n - 1) = 0.1.$$
We now solve for the stationary distribution of the DTMC by solving $\pi P = \pi$ for $\pi = (\pi_1, \pi_2)$ subject to $\pi_1 + \pi_2 = 1$. The solution is $\pi_1 = 1/3$, $\pi_2 = 2/3$. In this case, one can check that the stationary distribution is also limiting, and therefore
$$\lim_{n \to \infty} P(X_n = 1) = \pi_1 = 1/3, \qquad \lim_{n \to \infty} P(X_n = 2) = \pi_2 = 2/3.$$
Also, by total probability,
$$\lim_{n \to \infty} P(Y_n = 1) = \lim_{n \to \infty} \big[ P(Y_n = 1 \mid X_n = 1) P(X_n = 1) + P(Y_n = 1 \mid X_n = 2) P(X_n = 2) \big] = 0.9 \cdot \tfrac{1}{3} + 0.1 \cdot \tfrac{2}{3} = \tfrac{11}{30}.$$
Therefore,
$$\lim_{n \to \infty} P(X_n = 1 \mid Y_n = 1) = P(Y_n = 1 \mid X_n = 1) \lim_{n \to \infty} \frac{P(X_n = 1)}{P(Y_n = 1)} = \frac{0.9 \cdot \tfrac{1}{3}}{\tfrac{11}{30}} = \frac{9}{11}.$$
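The entries of the original transition matrix do not survive the extraction above, so the simulation sketch below (not part of the original solution) uses a hypothetical two-state matrix chosen only so that its stationary distribution is $(1/3, 2/3)$; the limit $9/11$ depends on the chain only through that distribution.

```python
import numpy as np

# Simulation sketch for Problem 2. The transition matrix is a hypothetical
# choice with stationary distribution (1/3, 2/3); any such matrix yields the
# same limit P(X_n = 1 | Y_n = 1) -> 9/11.
rng = np.random.default_rng(2)
P = np.array([[0.6, 0.4],            # states {1, 2} stored as indices {0, 1}
              [0.2, 0.8]])
n = 500_000
X = np.zeros(n, dtype=int)
for t in range(1, n):
    X[t] = int(rng.random() < P[X[t - 1], 1])   # move to the second state w.p. P[i, 1]

flip = rng.random(n) < 0.1
Y = X - flip.astype(int)             # Y_n = X_n w.p. 0.9, X_n - 1 w.p. 0.1 (label arithmetic)

burn = n // 10                       # discard the transient
sel = Y[burn:] == 0                  # event {Y_n = 1}: label 1 corresponds to index 0
print(np.mean(X[burn:][sel] == 0))   # ~ 9/11 = 0.818...
```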

3. Markov Chains and Martingales

Let $X$ be a random process taking values in a discrete, finite set $S$. Prove that the following statements are equivalent:

(a) $X$ is a Markov chain.

(b) For any function $f$,
$$M_t = f(X_t) - f(X_0) - \sum_{s=0}^{t-1} \big( E[f(X_{s+1}) \mid X_s] - f(X_s) \big)$$
is a martingale with respect to $X$.

Let $X$ be a finite state Markov chain. Then
$$|M_t| \le |f(X_t)| + |f(X_0)| + \sum_{s=0}^{t-1} \big| E[f(X_{s+1}) \mid X_s] - f(X_s) \big| < \infty.$$
Additionally,
$$M_{t+1} - M_t = f(X_{t+1}) - f(X_t) - E[f(X_{t+1}) \mid X_t] + f(X_t) = f(X_{t+1}) - E[f(X_{t+1}) \mid X_t].$$
Therefore,
$$E[M_{t+1} - M_t \mid X_0, X_1, \ldots, X_t] = E[f(X_{t+1}) \mid X_t] - E[f(X_{t+1}) \mid X_t] = 0,$$
where the first conditional expectation reduces to conditioning on $X_t$ alone by the Markov property; this shows that $M_t$ is a martingale with respect to $X$.

For the reverse implication, let $f(X_t) = I_{\{X_t = x\}}$. Then
$$M_{t+1} - M_t = I_{\{X_{t+1} = x\}} - E[I_{\{X_{t+1} = x\}} \mid X_t].$$
By assumption, $E[M_{t+1} - M_t \mid X_0, X_1, \ldots, X_t] = 0$, since $M$ is a martingale with respect to $X$. By conditioning, we obtain
$$E[I_{\{X_{t+1} = x\}} \mid X_0, X_1, \ldots, X_t] = E\big[E[I_{\{X_{t+1} = x\}} \mid X_t] \,\big|\, X_0, X_1, \ldots, X_t\big] = E[I_{\{X_{t+1} = x\}} \mid X_t],$$
or
$$P(X_{t+1} = x \mid X_0, X_1, \ldots, X_t) = P(X_{t+1} = x \mid X_t).$$
Since $x$ is arbitrary, $X$ is a Markov process.
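As an illustration (not part of the original solution), the sketch below builds the increments $M_{t+1} - M_t = f(X_{t+1}) - E[f(X_{t+1}) \mid X_t]$ for a small Markov chain and a test function $f$, and checks empirically that their conditional mean given the current state is zero. The chain, the function $f$, and all numerical choices are illustrative.

```python
import numpy as np

# Empirical check that M_t = f(X_t) - f(X_0) - sum_s (E[f(X_{s+1})|X_s] - f(X_s))
# has conditionally mean-zero increments when X is a Markov chain.
rng = np.random.default_rng(3)
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
f = np.array([1.0, -2.0, 5.0])       # an arbitrary test function on the 3 states
Pf = P @ f                           # Pf[i] = E[f(X_{t+1}) | X_t = i]

T, x = 200_000, 0
increments, states = [], []
for t in range(T):
    x_next = rng.choice(3, p=P[x])
    increments.append(f[x_next] - Pf[x])   # M_{t+1} - M_t
    states.append(x)
    x = x_next

increments, states = np.array(increments), np.array(states)
for i in range(3):                   # conditional means are ~0 for each current state
    print(i, increments[states == i].mean())
```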

4. DTMCs Again

A fair coin is flipped until three consecutive heads are seen. What is the expected number of coin flips? Start your proof by first drawing the associated DTMC.

Hint: Define a DTMC $X$ as follows: $X_k$ takes one of the four values $\{0, 1, 2, 3\}$ and keeps track of the latest consecutive streak of heads seen up to time $k$. For example, when $X_k = 2$, $X_{k+1} = 3$ with probability $1/2$ (if the $(k+1)$th coin flip results in heads) and $X_{k+1} = 0$ with probability $1/2$ (if the $(k+1)$th coin flip results in tails). Compute the corresponding hitting time in this DTMC.

The DTMC has states $\{0, 1, 2, 3\}$: from each state $i \in \{0, 1, 2\}$ it moves to $i+1$ with probability $1/2$ (heads) and back to $0$ with probability $1/2$ (tails), and state $3$ is absorbing.

Figure 1: DTMC

Let $T_{i3}$ be the expected time to hit state $3$ starting from state $i \in \{0, 1, 2, 3\}$. By Fig. 1, we have
$$T_{03} = \tfrac{1}{2}(1 + T_{03}) + \tfrac{1}{2}(1 + T_{13}),$$
which results in
$$T_{03} = 2 + T_{13}. \tag{1}$$
Similarly,
$$T_{13} = \tfrac{1}{2}(1 + T_{03}) + \tfrac{1}{2}(1 + T_{23}),$$
i.e.,
$$T_{13} = 1 + \tfrac{1}{2} T_{03} + \tfrac{1}{2} T_{23}, \tag{2}$$
and
$$T_{23} = \tfrac{1}{2}(1 + T_{03}) + \tfrac{1}{2},$$
i.e.,
$$T_{23} = 1 + \tfrac{1}{2} T_{03}. \tag{3}$$
Combining (1) and (2), we obtain $T_{03} = 6 + T_{23}$, and then using (3), we finally have
$$T_{03} = 14.$$
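A quick numerical check (not in the original solution): solve the hitting-time equations directly as a linear system, or estimate the expectation by simulation; both give 14.

```python
import numpy as np

# Expected number of fair-coin flips until three consecutive heads.
# States 0, 1, 2 track the current run of heads; state 3 is absorbing.
# Hitting-time equations: T_i = 1 + 0.5*T_0 + 0.5*T_{i+1}, with T_3 = 0.
A = np.array([[ 0.5, -0.5,  0.0],
              [-0.5,  1.0, -0.5],
              [-0.5,  0.0,  1.0]])
b = np.ones(3)
print(np.linalg.solve(A, b))         # [14., 12., 8.]  ->  T_03 = 14

# Monte Carlo cross-check.
rng = np.random.default_rng(4)
def flips_until_three_heads():
    run = count = 0
    while run < 3:
        run = run + 1 if rng.random() < 0.5 else 0
        count += 1
    return count
print(np.mean([flips_until_three_heads() for _ in range(100_000)]))  # ~ 14
```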

5. Martingales and Stopping Times

Given a discrete-time stochastic process $X = \{X_n : n \ge 0\}$, a random time is a discrete random variable $T$ on the same probability space as $X$ taking values in $\mathbb{N} = \{0, 1, 2, \ldots\}$; $X_T$ denotes the state at the random time $T$, i.e., if $T = n$, then $X_T = X_n$. If the event $\{T = n\}$ is determined by (at most) the information $\{X_0, X_1, \ldots, X_n\}$ known up to time $n$, then $T$ is a stopping time.

Let $\{M_n : n \ge 0\}$ be a martingale with respect to $\{Z_n : n \ge 0\}$. An important property of martingales is that $E[M_n] = E[M_0]$ for all $n \ge 0$. Assume now that $T$ is a bounded random variable, i.e., $P(T \le M) = 1$ for some $M < \infty$, which is a stopping time with respect to $\{Z_n : n \ge 0\}$. Show that $E[M_T] = E[M_0]$.

Hint: Express $M_T$ using the martingale difference sequence $\{D_1, D_2, \ldots\}$, where $D_i = M_i - M_{i-1}$. More specifically, start by expressing $M_T$ as
$$M_T = M_0 + \sum_{i=1}^{M} D_i\, I(T \ge i).$$

Taking expectations on both sides of the expression in the hint, we have
$$\begin{aligned}
E[M_T] &= E[M_0] + \sum_{i=1}^{M} E[D_i\, I(T \ge i)] \\
&= E[M_0] + \sum_{i=1}^{M} E\big[E[D_i\, I(T \ge i) \mid Z_0, Z_1, \ldots, Z_{i-1}]\big] \\
&\overset{(a)}{=} E[M_0] + \sum_{i=1}^{M} E\big[I(T \ge i)\, E[D_i \mid Z_0, Z_1, \ldots, Z_{i-1}]\big] \\
&\overset{(b)}{=} E[M_0].
\end{aligned}$$
In (a) we use the fact that, since $T$ is a stopping time, the event $\{T \ge i\}$ (the complement of $\{T \le i-1\}$) is determined by the knowledge of $Z_0, \ldots, Z_{i-1}$, so $I(T \ge i)$ can be pulled out of the conditional expectation. In (b) we use the definition of a martingale with respect to another sequence to get $E[D_i \mid Z_0, Z_1, \ldots, Z_{i-1}] = 0$. This observation was also used in the proofs of the Azuma-Hoeffding and McDiarmid inequalities taught in class.
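A simulation sketch of the result (illustrative, not from the original solution): take a simple symmetric random walk, which is a martingale, and a bounded stopping time, and check that the sample mean of $M_T$ matches $E[M_0] = 0$. The walk, the stopping rule, and the cap are assumed for illustration.

```python
import numpy as np

# Optional stopping check: M_n = simple symmetric random walk (a martingale),
# T = first time the walk hits +3, capped at 50 steps, so T is a bounded stopping time.
rng = np.random.default_rng(5)
trials, cap, vals = 50_000, 50, []
for _ in range(trials):
    m, t = 0, 0
    while t < cap and m < 3:
        m += 1 if rng.random() < 0.5 else -1
        t += 1
    vals.append(m)                   # M_T for this sample path
print(np.mean(vals))                 # ~ 0 = E[M_0]
```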

6. Erdős-Rényi Graphs and Concentration

Consider again the ensemble $G(n, p)$ of Erdős-Rényi graphs from Problem 3 of HW#1.

(a) Let $(A_1, \ldots, A_m)$ be a partition of the edge set of the complete graph on $n$ vertices, usually denoted by $K_n$. Define the graph function $f$ such that $|f(G) - f(G')| \le 1$ whenever the symmetric difference $E(G) \,\triangle\, E(G') = (E(G) \setminus E(G')) \cup (E(G') \setminus E(G))$ of the edge sets of $G$ and $G'$ is contained in a single set $A_k$ for some $k \in \{1, 2, \ldots, m\}$. Then, for a graph $G_{n,p}$ drawn from $G(n,p)$, the random variable $Z = f(G_{n,p})$ satisfies
$$P(Z - E[Z] \ge t) \le e^{-2t^2/m}, \quad t \ge 0.$$

This follows from McDiarmid's inequality with one coordinate per part $A_k$ and $c_i = 1$ for all $i$.

(b) Two edges are said to be adjacent if they contain a common endpoint. Three edges form a triangle if, together, they contain only three distinct vertices. For a graph $G = (V, E)$, the subset $\mathcal{T}$ of $E \times E \times E$ containing all possible triangles has size $N = \binom{n}{3}$. Let $\{X_e \sim \mathrm{Ber}(p)\}$ be a set of i.i.d. Bernoulli variables representing the existence or not of each edge in an Erdős-Rényi graph. The number of triangles in a graph $G_{n,p}$ drawn from $G(n,p)$ is
$$f(G_{n,p}) = \sum_{\{e_1, e_2, e_3\} \in \mathcal{T}} X_{e_1} X_{e_2} X_{e_3}.$$

i. What is the expected number of triangles in $G_{n,p}$?

ii. Give an upper bound for $P\big(|f(G_{n,p}) - E[f(G_{n,p})]| \ge t\big)$. Based on this bound, assume that we wish to show $P\big(|f(G_{n,p}) - E[f(G_{n,p})]| \ge \tfrac{1}{2} E[f(G_{n,p})]\big) \to 0$. For what values of $p$, depending on $n$, is this possible?

(i) The expected number of triangles is $N p^3 = \binom{n}{3} p^3$.

(ii) From the two-sided version of McDiarmid's inequality, we have
$$P\big(|f(G_{n,p}) - E[f(G_{n,p})]| \ge t\big) \le 2 \exp\left( -\frac{2 t^2}{m (n-2)^2} \right),$$
where $m = \binom{n}{2}$ is the number of pairs of distinct vertices, or equivalently the number of possible edges.

Explanation: a change in a single $x_e$ ($x_e$ is the realization of $X_e$) can have a significant impact on $f(G_{n,p})$. If $x_e = 1$ for all $e \in E$ and just one $x_e$ is changed to $0$, then $n - 2$ triangles disappear. This shows that the $c_i$'s in McDiarmid's inequality can all be taken equal to $n - 2$.

For $P\big(|f(G_{n,p}) - E[f(G_{n,p})]| \ge \tfrac{1}{2} E[f(G_{n,p})]\big)$, we have that $N$ grows as $n^3$, and therefore $t$ grows as $n^3 p^3$ (part (i)). Also, $m$ grows as $n^2$. Thus, the exponent is of order $n^2 p^6$, and we need $n^{-1/3} \ll p$ to make the above probability go to zero.
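The sketch below (illustrative, not from the original solution) estimates the triangle count of $G(n,p)$ by simulation, compares its empirical mean with $N p^3$, and checks that deviations of half the mean essentially never occur for a $p$ above the $n^{-1/3}$ threshold. The values of $n$, $p$, and the number of repetitions are illustrative.

```python
import numpy as np

# Triangle counts in G(n, p): compare the empirical mean to N p^3 and look at
# relative fluctuations for p above the n^{-1/3} threshold (~0.26 for n = 60).
rng = np.random.default_rng(6)
n, p, reps = 60, 0.5, 200
counts = []
for _ in range(reps):
    U = rng.random((n, n)) < p
    A = np.triu(U, 1)
    A = (A | A.T).astype(int)                 # symmetric adjacency matrix, no self-loops
    counts.append(np.trace(A @ A @ A) // 6)   # number of triangles = tr(A^3) / 6
counts = np.array(counts)

mean_theory = (n * (n - 1) * (n - 2) / 6) * p ** 3
print(counts.mean(), mean_theory)                                   # close to each other
print(np.mean(np.abs(counts - mean_theory) >= 0.5 * mean_theory))   # ~ 0
```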

7. Erdős-Rényi Graphs and Doob Martingales

We have given in the class notes the construction resulting in the so-called Doob martingale; let's revisit it here. Consider a set of random variables $\mathcal{X} = \{X_1, X_2, \ldots, X_n\}$, each taking values in some set $S$, and consider a function $f : S^n \to \mathbb{R}$. Define the random variables
$$Z_i = E[f(X_1, \ldots, X_n) \mid X_1, \ldots, X_i].$$
Then $\{Z_i\}$ is always a martingale, regardless of the properties of $\mathcal{X}$, and it is called the Doob martingale for $f$.

Let now $f$ be a function on graphs with $n$ vertices. Let $m = \binom{n}{2}$ be the number of edges in the corresponding complete graph, denoted by $K_n$, and assume that $e_1, e_2, \ldots, e_m$ is an arbitrary labeling (and ordering) of these edges. For a graph $G \sim G(n,p)$ (the Erdős-Rényi ensemble), let
$$Z_i = E[f(G) \mid X_1, X_2, \ldots, X_i], \quad i \in \{0, 1, \ldots, m\},$$
where $X_i = I\{e_i \in G\}$ and $i = 0$ corresponds to no conditioning. In other words, $Z_i$ corresponds to the expectation of $f$ over all graphs in $G(n,p)$ which agree with $G$ on $\{e_1, e_2, \ldots, e_i\}$. Clearly, $\{Z_i\}$ is a martingale; it is called the edge exposure martingale, because the edges of $G$ are exposed sequentially in the conditioning.

Similarly, let $m' = n - 1$ and let $v_1, v_2, \ldots, v_n$ be an arbitrary labeling (and ordering) of the vertices of $K_n$. Define
$$Z_i' = E[f(G) \mid G_{v_1, \ldots, v_{i+1}}], \quad i \in \{0, 1, \ldots, m'\},$$
where conditioning on $G_{v_1, \ldots, v_{i+1}}$ is interpreted as conditioning on graphs which agree with $G$ on $\{v_1, \ldots, v_{i+1}\}$. In other words, in step $i$ we reveal all the edges between the vertex $v_{i+1}$ and its predecessors $\{v_1, \ldots, v_i\}$. Again, $\{Z_i'\}$ is a martingale; it is called the vertex exposure martingale. Note that in both of the above martingales $Z_0 = E[f(G)]$ (and similarly for $Z_0'$) and $Z_m = f(G)$ (and similarly for $Z_{m'}'$).

Definition: A proper graph coloring of a graph $\Gamma = (V, E)$ using $k$ colors is a mapping $\phi : V \to \{1, 2, \ldots, k\}$ that maps neighboring vertices (i.e., vertices joined by an edge) to distinct colors: $\phi(u) \neq \phi(v)$ for all $(u, v) \in E$. The smallest number $k$ of colors resulting in a proper coloring is called the chromatic number of $\Gamma$ and is denoted by $\chi(\Gamma)$.

Show that for $G \sim G(n,p)$,
$$P\big(|\chi(G) - E[\chi(G)]| > t \sqrt{n}\big) \le 2 e^{-t^2/2},$$
by following these steps:

(a) (Bonus part) Consider the vertex exposure martingale for $f = \chi$. Argue that $|Z_i' - Z_{i-1}'| \le 1$ for all $i$.

(b) Using the previous part, show the desired result.

(b) Follows directly from the Azuma-Hoeffding inequality, by using part (a).
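To see the exposure idea in action, the following sketch (illustrative, not part of the original solution) computes the edge exposure Doob martingale exactly for the triangle-count function on a small graph: each potential triangle contributes the product of its revealed edge indicators times $p$ raised to the number of unrevealed edges. It checks that $Z_0 = E[f(G)]$, $Z_m = f(G)$, and that the increments are bounded by $n - 2$ (the Lipschitz constant used in Problem 6). The values of $n$, $p$, and the seed are illustrative.

```python
import numpy as np
from itertools import combinations

# Edge-exposure Doob martingale Z_i = E[f(G) | first i edge indicators]
# for the triangle count f(G) in G(n, p), computed exactly for one realization.
rng = np.random.default_rng(7)
n, p = 8, 0.5
edges = list(combinations(range(n), 2))               # e_1, ..., e_m in a fixed order
edge_index = {e: k for k, e in enumerate(edges)}
x = (rng.random(len(edges)) < p).astype(int)          # one realization of the edge indicators

triangles = [tuple(edge_index[e] for e in combinations(tri, 2))
             for tri in combinations(range(n), 3)]    # each triangle as a triple of edge indices

Z = []
for i in range(len(edges) + 1):                       # i = number of exposed edges
    z = 0.0
    for tri in triangles:
        prod = 1.0
        for k in tri:
            prod *= x[k] if k < i else p              # revealed edge -> its value, unrevealed -> p
        z += prod
    Z.append(z)

print(Z[0], len(triangles) * p ** 3)                                  # Z_0 = E[f(G)] = N p^3
print(Z[-1], sum(np.prod(x[list(tri)]) for tri in triangles))         # Z_m = f(G)
print(max(abs(Z[i] - Z[i - 1]) for i in range(1, len(Z))), n - 2)     # increments <= n - 2
```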

8. Poisson Processes

Assume that tourists arrive at the port of a small Greek island in the Aegean Sea as a Poisson process with rate $\lambda$.

(a) The only ferry departs after a deterministic time $T$. Let $W$ be the total waiting time of all tourists. Compute $E[W]$.

Let $T_1, T_2, \ldots$ be the arrival times of the tourists in $[0, T]$. The combined waiting time is $W = (T - T_1) + (T - T_2) + \cdots$. Let $N(t)$ be the number of arrivals in $[0, t]$. Conditioning with respect to $N(T)$, we have
$$E[W] = \sum_{k=0}^{\infty} E[W \mid N(T) = k]\, P(N(T) = k) = \sum_{k=0}^{\infty} \frac{kT}{2}\, P(N(T) = k) = \frac{T}{2}\, E[N(T)] = \frac{\lambda T^2}{2},$$
where we use the fact that, conditioned on $N(T) = k$, the arrival times are i.i.d. $\mathrm{Unif}[0, T]$ random variables (so each tourist waits $T/2$ on average).

(b) Suppose now that two ferries depart, one at $T$ and one at $S < T$. What is now $E[W]$?

We have two independent Poisson processes on $[0, S]$ and $[S, T]$, with $N(T) = N(S) + \big(N(T) - N(S)\big)$. Applying part (a) to each interval,
$$E[W] = \frac{\lambda S^2}{2} + \frac{\lambda (T - S)^2}{2}.$$

(c) What is $E[W]$ if $T \sim \mathrm{Exp}(\mu)$, independent of the arrivals of the tourists?

In this case,
$$E[W] = \int_0^{\infty} E[W \mid T = t]\, f_T(t)\, dt = \int_0^{\infty} \frac{\lambda t^2}{2}\, f_T(t)\, dt = \frac{\lambda}{2} E[T^2] = \frac{\lambda}{\mu^2}.$$
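A short simulation sketch for part (a) (not from the original solution): generate Poisson arrivals on $[0, T]$, using the fact that given the number of arrivals they are i.i.d. uniform, and compare the average total waiting time with $\lambda T^2 / 2$. The rate, horizon, and number of repetitions are illustrative.

```python
import numpy as np

# Monte Carlo check of part (a): the total waiting time of Poisson(lambda)
# arrivals on [0, T] until a departure at time T has mean lambda * T^2 / 2.
rng = np.random.default_rng(8)
lam, T, reps = 2.0, 5.0, 100_000
totals = np.empty(reps)
for r in range(reps):
    k = rng.poisson(lam * T)                  # number of arrivals in [0, T]
    arrivals = rng.uniform(0.0, T, size=k)    # given k, arrival times are i.i.d. Unif[0, T]
    totals[r] = np.sum(T - arrivals)
print(totals.mean(), lam * T ** 2 / 2)        # both ~ 25
```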

9. Poisson Processes Again

Let $N(t)$ be a Poisson process with rate $\lambda$.

(a) Obtain $P(N(3) = 5)$.

$N(3) \sim \mathrm{Poi}(\lambda(3 - 0)) = \mathrm{Poi}(3\lambda)$. Thus, $P(N(3) = 5) = e^{-3\lambda} \dfrac{(3\lambda)^5}{5!}$.

(b) Obtain $P(N(7) - N(4) = 5)$ and $E[N(7) - N(4)]$.

$N(7) - N(4) \sim \mathrm{Poi}(3\lambda)$. Thus, $P(N(7) - N(4) = 5) = e^{-3\lambda} \dfrac{(3\lambda)^5}{5!}$. Also, $E[N(7) - N(4)] = 3\lambda$.

(c) Obtain $P(N(7) - N(4) = 5 \mid N(6) - N(4) = 2)$.

Split the interval $[4, 7]$ into $[4, 6]$ and $[6, 7]$. These two intervals are disjoint, and therefore $N(7) - N(6) \sim \mathrm{Poi}(\lambda)$ is independent of $N(6) - N(4) \sim \mathrm{Poi}(2\lambda)$. Thus,
$$P(N(7) - N(4) = 5 \mid N(6) - N(4) = 2) = P(N(7) - N(6) = 3) = e^{-\lambda} \frac{\lambda^3}{3!}.$$

(d) Obtain $P(N(6) - N(4) = 2 \mid N(7) - N(4) = 5)$.

$$\begin{aligned}
P(N(6) - N(4) = 2 \mid N(7) - N(4) = 5)
&= \frac{P(N(6) - N(4) = 2,\ N(7) - N(4) = 5)}{P(N(7) - N(4) = 5)} \\
&= \frac{P(N(6) - N(4) = 2,\ N(7) - N(6) = 3)}{P(N(7) - N(4) = 5)} \\
&= \frac{P(N(6) - N(4) = 2)\, P(N(7) - N(6) = 3)}{P(N(7) - N(4) = 5)} \\
&= \frac{\left(e^{-2\lambda} \frac{(2\lambda)^2}{2!}\right)\left(e^{-\lambda} \frac{\lambda^3}{3!}\right)}{e^{-3\lambda} \frac{(3\lambda)^5}{5!}} = \binom{5}{2} \left(\frac{2}{3}\right)^2 \left(\frac{1}{3}\right)^3 = \frac{40}{243}.
\end{aligned}$$
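A simulation sketch for part (d) (not in the original solution): conditioned on $N(7) - N(4) = 5$, each of the 5 points falls in $[4, 6]$ independently with probability $2/3$, so the answer is the Binomial$(5, 2/3)$ probability of exactly 2, i.e., $40/243 \approx 0.165$. The rate and sample size below are illustrative; the answer does not depend on $\lambda$.

```python
import numpy as np

# Empirical check of part (d): P(N(6) - N(4) = 2 | N(7) - N(4) = 5) = 40/243.
rng = np.random.default_rng(9)
lam, reps = 1.3, 2_000_000
n_46 = rng.poisson(2 * lam, reps)    # arrivals in [4, 6]
n_67 = rng.poisson(lam, reps)        # arrivals in [6, 7], independent increment
cond = (n_46 + n_67) == 5            # condition on N(7) - N(4) = 5
print(np.mean(n_46[cond] == 2), 40 / 243)
```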

10. Brownian Motion

Let's revisit the definition of standard Brownian motion.

Definition: A real-valued stochastic process $\{W(t) : t \ge 0\}$ is called a (linear) Brownian motion started at $x \in \mathbb{R}$ if:

(a) $W(0) = x$.

(b) The process has independent increments, i.e., for all $0 \le t_1 \le t_2 \le \cdots \le t_n$, the increments $W(t_n) - W(t_{n-1}), \ldots, W(t_2) - W(t_1)$ are independent random variables.

(c) For all $t \ge 0$ and $h > 0$, the increments are distributed as $W(t + h) - W(t) \sim \mathcal{N}(0, h)$.

(d) Almost surely, the function $t \mapsto W(t)$ is continuous.

We say that $\{W(t) : t \ge 0\}$ is a standard Brownian motion if $x = 0$. Suppose that $\{W(t) : t \ge 0\}$ is a standard Brownian motion. Define the process
$$X(t) = \begin{cases} 0, & t = 0, \\ t\, W(1/t), & t > 0. \end{cases}$$
Show that $X(t)$ is also a standard Brownian motion.

Hint: You need to use the fact that a Brownian motion is a Gaussian process whose finite-dimensional distributions are totally characterized by the mean and autocovariance functions (see the class notes). You then need to verify that $X(t)$ satisfies all the properties of the provided definition.

By the lecture notes, $(W(t_1), \ldots, W(t_n))$ is a Gaussian vector characterized by $E[W(t_i)] = 0$ and $\mathrm{Cov}(W(t_i), W(t_j)) = \min(t_i, t_j) = t_i$ if $0 \le t_i \le t_j$. Clearly, $X(t)$ is also a Gaussian process with $E[X(t_i)] = 0$, and for $t, s \ge 0$,
$$\mathrm{Cov}(X(t + s), X(t)) = (t + s)\, t\, \mathrm{Cov}\!\left( W\!\left(\tfrac{1}{t + s}\right), W\!\left(\tfrac{1}{t}\right) \right) = t(t + s)\, \frac{1}{t + s} = t.$$
Therefore, all the finite-dimensional vectors $(X(t_1), \ldots, X(t_n))$ of $X(t)$ are distributed as in a Brownian motion. The paths $t \mapsto X(t)$ are clearly continuous for $t > 0$. Also, $X(t) \to 0$ as $t \to 0$ almost surely (equivalently, $W(u)/u \to 0$ as $u \to \infty$), which, together with the continuity of the sample paths of $W$, gives continuity at $0$.
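A small numerical sketch (not part of the original solution): simulate $W$ at the two reversed time points needed to evaluate $X(t) = t\, W(1/t)$ at a pair of times, and check that the empirical covariance of $X$ matches $\min(s, t)$. The two time points, sample size, and seed are illustrative choices.

```python
import numpy as np

# Check that X(t) = t * W(1/t) has Brownian covariance Cov(X(s), X(t)) = min(s, t).
rng = np.random.default_rng(10)
n = 1_000_000
t1, t2 = 0.5, 2.0                     # two illustrative time points, t1 < t2

# Simulate W at the reversed times 1/t2 < 1/t1 using independent increments.
W_a = np.sqrt(1 / t2) * rng.standard_normal(n)                   # W(1/t2) ~ N(0, 1/t2)
W_b = W_a + np.sqrt(1 / t1 - 1 / t2) * rng.standard_normal(n)    # W(1/t1)

X1 = t1 * W_b                         # X(t1) = t1 * W(1/t1)
X2 = t2 * W_a                         # X(t2) = t2 * W(1/t2)
print(np.cov(X1, X2))                 # ~ [[0.5, 0.5], [0.5, 2.0]] = [[t1, t1], [t1, t2]]
```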
