Module 2. Random Processes. Version 2 ECE IIT, Kharagpur

Module 2 Random Processes

Lesson 6 Functions of Random Variables

After reading this lesson, you will learn about:
- the cdf of a function of a random variable;
- a formula for determining the pdf of a function of a random variable.

Let X be a random variable and let g(a) be a function of a real variable a. Then the expression Y = g(X) leads to a new random variable Y with the following connotation:

Let s indicate an outcome of a random experiment, as introduced earlier in Lesson #5. For a given s, x(s) is a real number and g[x(s)] is another real number specified in terms of x(s) and g(a). This new number is the value y(s) = g[x(s)], which is assigned to the random variable Y. In brief, Y = g(X) indicates this functional relationship between the random variables X and Y.

The cdf F_Y(b) of the new random variable Y, so formed, is the probability of the event {Y ≤ b}, consisting of all outcomes s such that y(s) = g[x(s)] ≤ b. This means,

F_Y(b) = P{Y ≤ b} = P{g[x(s)] ≤ b}        (2.6.1)

For a specific b, there may be multiple values of a for which g(a) ≤ b. Let us assume that all these values of a for which g(a) ≤ b form a set on the a-axis, and let us denote this set as I_b. This set is known as the point set. So, g[x(s)] ≤ b if x(s) is a number in the set I_b, i.e.,

F_Y(b) = P{x ∈ I_b}        (2.6.2)

Now, g(a) must have the following properties so that g(X) is a random variable:
a) The domain of g(a) must include the range of the random variable X.
b) For every b such that g(a) ≤ b, the set I_b must consist of the union and intersection of a countable number of intervals, since only then is {Y ≤ b} an event.
c) The events {g(X) = ±∞} must have zero probability.

Cumulative Distribution Function [cdf] of g(X)

We wish to express the cdf F_Y(b) of the new random variable Y, where Y = g(X), in terms of the cdf F_X(a) of the random variable X and the function g(a). To do this, we determine the set I_b on the a-axis such that g(a) ≤ b, and also the probability that the random variable X is in this set. Let us assume that F_X(a) is continuous and consider a few examples to illustrate the point.
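Before taking up the examples, note that Eqs. (2.6.1) and (2.6.2) lend themselves to a quick numerical check. The short sketch below is an illustrative aid rather than part of the original lesson; it assumes Python with NumPy and picks g(a) = exp(a) with a standard Gaussian X purely for illustration, so that the point set is I_b = (−∞, ln b] and F_Y(b) = F_X(ln b).

```python
# Minimal sketch of Eqs. (2.6.1)-(2.6.2) by Monte Carlo.
# Assumptions (illustrative only): X ~ N(0, 1) and g(a) = exp(a).
import numpy as np
from math import erf, log, sqrt

def F_X(a):
    # cdf of the standard Gaussian random variable X
    return 0.5 * (1.0 + erf(a / sqrt(2.0)))

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)   # samples of X
y = np.exp(x)                      # Y = g(X) = exp(X)

b = 2.0
print(np.mean(y <= b))             # empirical F_Y(b) = P{Y <= b}
print(F_X(log(b)))                 # P{X in I_b} with I_b = (-inf, ln b]
# The two printed values should agree up to sampling error.
```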

Let, = g(x) = c.x + d, where c and d are constants [Ths s an equaton of a straght lne]. To fnd F (b), we have to fnd the values of a such that, c.a + d b. b d For c > 0: ca + d b means a c b d b d So, F ( b) = P x = Fx c c b d Whle, for c < 0, ca + d b means a and so c b d b d F( b) = P x = Fx c c Example #.6. Let, = g(x) = x It s eas to see that, for b < 0, F (b) = 0 However, for b 0 a b for b a b and hence, F ( b) P b x b = Fx b Fx b = { } ( ) ( ) Example #.6.3 Let us consder the followng functon g(a): a+ c, a< c ga ( ) = 0, c a c a c, a> c It s a good dea to sketch g(a) versus a to gan a closer look at the functon. Note that, F (b) s dscontnuous at b= g( a) = 0 b the amount Fx( c) Fx( c) Further, for b 0, P{ b} = P{ x b + c} = Fx ( b + c) & for b < 0, P{ b} = P{ x b c} = Fx ( b c) Example #.6.4 Whle we wll dscuss more about lnear and non-lnear quantzers n the next Module, let us consder the smple transfer characterstcs of a lnear quantzer here: Let, ga ( ) = ns.,( n as ) < a nswhere s s a constant, ndcatng a fxed step sze and n s an nteger, representng the n-th quantzaton level. Then for = g( x), the random varable Y takes values bn = ns wth P{ = ns} = P{( n ) s < x ns} = Fx( ns) Fx( ns s)

Example #2.6.5

Let

g(a) = a + c,   a ≥ 0
       a − c,   a < 0

where c is a constant. Plot g(a) versus a and see that g(a) is discontinuous at a = 0, with g(0⁻) = −c and g(0⁺) = +c. This implies that F_Y(b) = F_X(0) for |b| ≤ c. Further,

for b ≥ c:        g(a) ≤ b for a ≤ b − c; hence F_Y(b) = F_X(b − c)
for −c ≤ b ≤ c:   g(a) ≤ b for a < 0;     hence F_Y(b) = F_X(0)
for b ≤ −c:       g(a) ≤ b for a ≤ b + c; hence F_Y(b) = F_X(b + c)

An important step while dealing with functions of random variables is to find the point set I_b, and thereby the cdf F_Y(y), when the functions g(x) and F_X(x) are known. In terms of probability, it is equivalent to finding the values of the random variable X such that F_Y(y) = P{Y ≤ y} = P{X ∈ I_b}. We now briefly discuss a concise and convenient relationship for determination of the pdf of Y, i.e., f_Y(y).

Formula for determining the pdf of Y, i.e., f_Y(y):

Let X be a continuous random variable with pdf f_X(x), and let g(x) be a differentiable function of x [i.e., g′(x) ≠ 0]. We wish to establish a general expression for the pdf of Y = g(X). Note that an event {y < Y ≤ y + dy} can be written as a union of several disjoint elementary events {E_i}. Let the equation y = g(x) have n real roots x_1, x_2, …, x_n, i.e., y − g(x_i) = 0 for i = 1, 2, …, n. Then the disjoint events are of the forms:

E_i = {x_i − |dx_i| < X < x_i},   if g′(x_i) is −ve, or
E_i = {x_i < X < x_i + |dx_i|},   if g′(x_i) is +ve

In either case, we can write (following the basic definition of pdf) that Pr. of an event = (pdf at x = x_i)·|dx_i|. So, for the above disjoint events {E_i}, we may approximately write,

P{E_i} = Probability of event E_i = f_X(x_i)·|dx_i|

As we have considered the events E_i as disjoint, we may now write that,

Prob.{y < Y ≤ (y + dy)} = f_Y(y)·dy = f_X(x_1)·|dx_1| + f_X(x_2)·|dx_2| + … + f_X(x_n)·|dx_n| = Σ_{i=1}^{n} f_X(x_i)·|dx_i|

The above expression can equivalently be written as,

f_Y(y) = Σ_{i=1}^{n} f_X(x_i)·|dx_i|/dy = Σ_{i=1}^{n} f_X(x_i)·|dx_i/dy|

Let us note that, at the i-th root of y = g(x), dy/dx_i is the value of the derivative of g(x) with respect to x, evaluated at x = x_i. Using the above convenient notation, we finally get,

f_Y(y) = Σ_{i=1}^{n} f_X(x_i)/|g′(x_i)|        (2.6.3)

Here, x_i is the i-th real root of y = g(x) and g′(x_i) ≠ 0. If, for a given y, the equation y = g(x) has no real root, then f_Y(y) = 0, since X, being a random variable taking real values x, cannot take imaginary values with non-zero probability.
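Equation (2.6.3) translates directly into a small computational routine: locate the real roots x_i of y = g(x), then accumulate f_X(x_i)/|g′(x_i)|. The sketch below is an illustrative aid, not part of the lesson; it assumes a standard Gaussian X and g(x) = x² (the same transformation as in Example 2.6.2), so that for y > 0 the roots are ±√y and g′(x) = 2x, and it compares the formula with a direct simulation of Y = X².

```python
# Minimal sketch of Eq. (2.6.3): f_Y(y) = sum over roots of f_X(x_i)/|g'(x_i)|.
# Assumptions (illustrative only): X ~ N(0, 1), g(x) = x^2, roots +sqrt(y) and -sqrt(y).
import numpy as np

def f_X(x):
    # pdf of the standard Gaussian random variable X
    return np.exp(-x ** 2 / 2.0) / np.sqrt(2.0 * np.pi)

def f_Y(y):
    # Eq. (2.6.3) applied to Y = X^2 for y > 0: roots +r and -r, |g'(x)| = 2r
    r = np.sqrt(y)
    return (f_X(r) + f_X(-r)) / (2.0 * r)

rng = np.random.default_rng(2)
samples = rng.standard_normal(500_000) ** 2    # direct simulation of Y = X^2

h = 0.02                                       # half-width of a probing window
for y0 in (0.5, 1.0, 2.0):
    # empirical density near y0: P{y0 - h < Y <= y0 + h} / (2h)
    p_sim = np.mean((samples > y0 - h) & (samples <= y0 + h)) / (2.0 * h)
    print(y0, round(p_sim, 3), round(f_Y(y0), 3))
# Each printed pair should agree up to sampling error.
```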

Let us take up a small example before concluding this lesson.

Example #2.6.6

Let X be a random variable known to follow a uniform distribution between −π and +π. So, the mean of X is 0 and its probability density function [pdf] is:

f_X(x) = 1/(2π),   −π < x ≤ π
         0,        otherwise

Now consider a new random variable Y which is a function of X, with the functional relationship Y = g(X) = sin X. So we can write y = g(x) = sin x. Further, one can easily observe that the pdf of Y exists for −1.0 < y ≤ 1.0.

Let us first consider the interval 0 < y ≤ 1.0. The roots of y − sin x = 0 for y > 0 are x_1 = sin⁻¹(y) and x_2 = π − sin⁻¹(y). Further, dg(x)/dx = cos x, while

dg(x)/dx at x = x_1:   cos(sin⁻¹ y)
dg(x)/dx at x = x_2:   cos(π − sin⁻¹ y) = cos π · cos(sin⁻¹ y) + sin π · sin(sin⁻¹ y) = −cos(sin⁻¹ y)

We see that |dg(x)/dx| takes the same value at x = x_1 and at x = x_2, namely cos(sin⁻¹ y) = √(1 − y²). Hence,

f_Y(y) = f_X(x_1)/|g′(x_1)| + f_X(x_2)/|g′(x_2)|
       = [f_X(sin⁻¹ y) + f_X(π − sin⁻¹ y)]/√(1 − y²)
       = [1/(2π) + 1/(2π)]/√(1 − y²)
       = 1/(π·√(1 − y²)),   0 < y ≤ 1

Following a similar procedure for the range −1 < y ≤ 0, it can ultimately be shown that,

f_Y(y) = 1/(π·√(1 − y²)),   |y| < 1
         0,                 otherwise
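The result of Example 2.6.6 is easy to verify by simulation. The sketch below is an illustration under the same assumptions as the example (X uniform over (−π, π], Y = sin X); the sample size and bin settings are arbitrary choices made only for the demonstration.

```python
# Minimal sketch checking Example 2.6.6: for X uniform on (-pi, pi],
# Y = sin(X) should have pdf f_Y(y) = 1 / (pi * sqrt(1 - y^2)) for |y| < 1.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-np.pi, np.pi, size=500_000)   # X ~ uniform(-pi, pi)
y = np.sin(x)                                  # Y = g(X) = sin X

hist, edges = np.histogram(y, bins=100, range=(-1.0, 1.0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])

for y0 in (-0.8, -0.3, 0.0, 0.3, 0.8):
    k = np.argmin(np.abs(centres - y0))        # histogram bin nearest to y0
    f_true = 1.0 / (np.pi * np.sqrt(1.0 - centres[k] ** 2))
    print(round(float(centres[k]), 2), round(float(hist[k]), 3), round(float(f_true), 3))
# Away from the endpoints y = -1 and y = +1, the empirical and analytical
# densities should agree closely.
```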

Problems

Q2.6.1) Let Y = 2X² + 3X + 1. If the pdf of X is f_X(x), determine an expression for the pdf of Y.

Q2.6.2) Sketch the pdf of Y of Problem 2.6.1, if X has a uniform distribution between −1 and +1.