9.07 Introduction to Probability and Statistics for Brain and Cognitive Sciences Emery N. Brown


Lecture 6: Expectation and Variance, Covariance and Correlation, and Moment Generating Functions

I. Objectives

Understand how to compute expectations and variances for linear combinations of random variables.

Understand how to compute the covariance between linear combinations of random variables.

Understand the concept of moments of a probability density.

Understand how the moments of a probability density or probability mass function can be derived from the moment generating function.

Understand the basic properties of moment generating functions and their use in probability calculations.

II. Expectations and Covariances

A. Expectation

In Lectures 2 and 3, we defined the expected value of a discrete and a continuous random variable respectively as follows.

1. Discrete
If X : p_X(x), then

    E(X) = \sum_x x p_X(x)    (6.1)

provided that

    \sum_x |x| p_X(x) < \infty.    (6.2)

2. Continuous
If X : f_X(x), then

    E(X) = \int x f_X(x) dx    (6.3)

provided that

    \int |x| f_X(x) dx < \infty.    (6.4)
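The two definitions can be evaluated directly in a few lines of Python. This is a minimal sketch, not from the lecture: the pmf and the Exponential(lam) density below are hypothetical choices. The weighted sum implements (6.1), and a left Riemann sum approximates the integral in (6.3); for an exponential density the mean should come out near 1/lam.

```python
# Hypothetical pmf, chosen only to illustrate definition (6.1).
pmf = {0: 0.2, 1: 0.5, 2: 0.3}
e_discrete = sum(x * p for x, p in pmf.items())   # E(X) = sum_x x p(x), ~1.1

# Exponential(lam) density f(x) = lam * exp(-lam * x); E(X) = 1/lam by (6.3).
# Left Riemann sum over [0, 20] with step dx approximates the integral.
import math
lam, dx = 2.0, 1e-4
e_continuous = sum(i * dx * lam * math.exp(-lam * i * dx) * dx
                   for i in range(200000))        # ~0.5 = 1/lam
print(e_discrete, e_continuous)
```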

Remark 6.1. The expected value has an important physical interpretation as the center of mass or the first moment.

We also stated that if Y = g(X), we can obtain the expected value of Y directly without first computing p_Y(y) or f_Y(y). That is, given Y = g(X):

1. Discrete
If X : p_X(x), then

    E(Y) = \sum_x g(x) p_X(x)    (6.5)

provided that \sum_x |g(x)| p_X(x) < \infty.

2. Continuous
If X : f_X(x), then

    E(Y) = \int g(x) f_X(x) dx    (6.6)

provided that \int |g(x)| f_X(x) dx < \infty.

Proof (Discrete Case). By definition, E(Y) = \sum_y y p_Y(y). Let A_y be the set of x's mapped into y by g, that is, A_y = {x : g(x) = y}. Hence p_Y(y) = \sum_{x \in A_y} p_X(x), and

    E(Y) = \sum_y y \sum_{x \in A_y} p_X(x)
         = \sum_y \sum_{x \in A_y} y p_X(x)
         = \sum_y \sum_{x \in A_y} g(x) p_X(x)
         = \sum_x g(x) p_X(x).    (6.7)

Remark 6.2. A very important function g(x) whose expected value we considered is g(x) = (x - \mu)^2, which gives the variance of X. We know that

    \sigma^2 = E(g(X)) = E[(X - \mu)^2] = E[X^2 - 2\mu X + \mu^2] = E(X^2) - \mu^2.
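A small sketch makes this concrete (the pmf is hypothetical): E(Y) computed directly from p_X via (6.5) agrees with E(Y) computed from the induced pmf of Y = g(X), which is exactly what the proof's A_y sets construct, and the same code checks the variance identity from Remark 6.2.

```python
# Hypothetical pmf for X; Y = g(X) = X^2.
pmf = {-1: 0.25, 0: 0.25, 1: 0.25, 2: 0.25}
g = lambda x: x * x

# Route 1: E(Y) directly from p_X via (6.5); no pmf of Y needed.
e_y_direct = sum(g(x) * p for x, p in pmf.items())

# Route 2: build p_Y explicitly by collecting the sets A_y = {x : g(x) = y}.
p_y = {}
for x, p in pmf.items():
    p_y[g(x)] = p_y.get(g(x), 0.0) + p
e_y_induced = sum(y * p for y, p in p_y.items())
assert abs(e_y_direct - e_y_induced) < 1e-12   # both routes agree

# Variance identity from Remark 6.2: E[(X - mu)^2] = E(X^2) - mu^2.
mu = sum(x * p for x, p in pmf.items())
var_central = sum((x - mu) ** 2 * p for x, p in pmf.items())
var_identity = sum(x * x * p for x, p in pmf.items()) - mu ** 2
assert abs(var_central - var_identity) < 1e-12
print(e_y_direct, var_central)
```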

The standard deviation of X is \sigma = [E(X^2) - \mu^2]^{1/2}.

In general, if X_1, ..., X_n are jointly distributed random variables and Y = g(X_1, ..., X_n), then:

1. Discrete
If X_1, ..., X_n : p(x_1, ..., x_n), then

    E(Y) = \sum ... \sum g(x_1, ..., x_n) p(x_1, ..., x_n)    (6.8)

provided that \sum ... \sum |g(x_1, ..., x_n)| p(x_1, ..., x_n) < \infty.

2. Continuous
If X_1, ..., X_n : f(x_1, ..., x_n), then

    E(Y) = \int ... \int g(x_1, ..., x_n) f(x_1, ..., x_n) dx_1 ... dx_n    (6.9)

provided that E(|Y|) < \infty.

Remark 6.3. If X and Y are independent and g and h are fixed functions, then E[g(X)h(Y)] = E[g(X)] E[h(Y)], provided that each of the expectations on the right is finite.

B. Expectations of Linear Combinations of Random Variables

In Lectures 2 and 3 we stated that if Y = aX + b, then E(Y) = aE(X) + b. That is, expectation is a linear operation. We now state a very important generalization of this result.

Proposition 6.1. If X_1, ..., X_n are jointly distributed random variables with expectations E(X_i), and Y is a linear function of the X_i's, Y = b + \sum_{i=1}^n a_i X_i, then

    E(Y) = b + \sum_{i=1}^n a_i E(X_i).    (6.10)

Simply stated, the expectation of a linear combination of a set of random variables is the linear combination of the expectations, provided that each expectation is finite. Proof: see Rice.

Example (continued). The rat executed 40 trials of a learning experiment with a probability p of a correct response on each trial. What is the expected number of correct responses? Each trial is a Bernoulli experiment with probability p of the animal executing it correctly. Let X_i be the outcome on trial i; then Y = \sum_{i=1}^{40} X_i is the total number of correct responses. Hence,

    E(Y) = E(\sum_{i=1}^{40} X_i) = \sum_{i=1}^{40} E(X_i) = \sum_{i=1}^{40} p = 40p.    (6.11)

In general, if there were n trials, E(Y) = np. We showed this result before in Lecture 2 using the definition of expectation directly and the binomial theorem.

Example 5.4 (continued). Suppose that the number of channel openings in a 500 msec time interval at each of n sites at the frog neuromuscular junction is a Poisson random variable. Each site has an expected number of openings \lambda per millisecond, for i = 1, ..., n. What is the expected number of openings in 500 msec across all n sites? We have

    X_i : P(\lambda),    E(X_i) = \lambda    (6.12)

and in 500 msec we have

    Z_i = 500 X_i,    E(Z_i) = 500\lambda    (6.13)

and

    Y = \sum_{i=1}^n Z_i    (6.14)

    E(Y) = E(\sum_{i=1}^n Z_i) = \sum_{i=1}^n E(Z_i) = 500 n \lambda.

Example 5.4 (continued). We assumed that the length of time that each of two channels is open is an exponential random variable with parameters \lambda_1 and \lambda_2 respectively. What is the expected sum of the opening times at the two sites? We have X_i : E(\lambda_i), E(X_i) = \lambda_i^{-1}, and Z = X_1 + X_2, and

    E(Z) = E(X_1 + X_2) = \lambda_1^{-1} + \lambda_2^{-1}.

Notice that when \lambda_1 = \lambda_2 = \lambda we have E(Z) = 2\lambda^{-1}. This is completely consistent with our observation that when \lambda_1 = \lambda_2 = \lambda, Z : \Gamma(2, \lambda), which we showed in Lecture 5.
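These expectation calculations can be checked by simulation. The sketch below uses arbitrary illustrative parameter values: the average over many simulated 40-trial Bernoulli experiments should be near 40p, and the average simulated sum of two exponential opening times should be near 1/\lambda_1 + 1/\lambda_2.

```python
import random
random.seed(7)                      # fixed seed so the run is reproducible
n_sims = 20000

# 40 Bernoulli(p) trials per experiment: E(number correct) = 40p.
n_trials, p = 40, 0.3               # p is an arbitrary illustrative value
mean_correct = sum(
    sum(1 for _ in range(n_trials) if random.random() < p)
    for _ in range(n_sims)) / n_sims

# Sum of two independent exponential opening times: E(Z) = 1/lam1 + 1/lam2.
lam1, lam2 = 2.0, 5.0               # arbitrary rate parameters
mean_sum = sum(random.expovariate(lam1) + random.expovariate(lam2)
               for _ in range(n_sims)) / n_sims

print(mean_correct, mean_sum)       # near 40*0.3 = 12 and 1/2 + 1/5 = 0.7
```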

III. Covariance and Correlation

A. Covariance

Example 4.6 (continued). In this example, we saw that positive values on one sensor were associated with positive values on the other sensor, and negative values on one sensor were associated with negative values on the other sensor. The values seemed to be related. We formalize the characterization of this relationship in terms of the concepts of covariance and correlation. The latter we have already discussed in terms of the correlation coefficient.

Definition 6.1. If X and Y are jointly distributed random variables with E(X) = \mu_x and E(Y) = \mu_y, then the covariance of X and Y is

    Cov(X, Y) = E[(X - \mu_x)(Y - \mu_y)]
              = \int \int (x - \mu_x)(y - \mu_y) f_{xy}(x, y) dx dy.    (6.15)

Remark 6.4. The covariance of X and Y measures the strength of the association between X and Y in the sense mentioned above in Example 4.6. It is the first cross moment of X and Y. It has units of the random variables squared, assuming X and Y measure the same thing.

Remark 6.5. If X = Y, we have

    Cov(X, Y) = E[(X - \mu_x)(Y - \mu_y)] = E[(X - \mu_x)^2] = \sigma_x^2.    (6.16)

Therefore, the covariance of X with itself is simply the variance of X.

Remark 6.6. Variance is the second central moment, and E(X^2) is the second moment. The variance tells us how mass is distributed away from the mean. We report the standard deviation because \sigma is on the same scale (units) as X.

Remark 6.7. Expanding the terms in Definition 6.1, we have

    Cov(X, Y) = E[(X - \mu_x)(Y - \mu_y)]
              = E[XY - \mu_x Y - \mu_y X + \mu_x \mu_y]
              = E(XY) - \mu_x E(Y) - \mu_y E(X) + \mu_x \mu_y
              = E(XY) - \mu_x \mu_y - \mu_y \mu_x + \mu_x \mu_y
              = E(XY) - E(X) E(Y).    (6.17)

Example 6.1 (Rice p. 38). Suppose X and Y have the joint probability density

    f(x, y) = 2x + 2y - 4xy    (6.18)

where x \in (0, 1) and y \in (0, 1).

Find Cov(X, Y). First, it is easy to show that E(X) = E(Y) = 1/2:

    E(X) = \int_0^1 \int_0^1 x (2x + 2y - 4xy) dy dx = 1/2
    E(Y) = \int_0^1 \int_0^1 y (2x + 2y - 4xy) dx dy = 1/2.    (6.19)

We need to compute

    E(XY) = \int_0^1 \int_0^1 xy (2x + 2y - 4xy) dx dy = 2/9.    (6.20)

Hence,

    Cov(X, Y) = E(XY) - E(X) E(Y) = 2/9 - (1/2)(1/2) = -1/36.    (6.21)

Here are 4 useful results.

Result 6.1. Cov(a + X, Y):

    Cov(a + X, Y) = E[(a + X)Y] - E(a + X) E(Y)
                  = E(aY) + E(XY) - [E(a) + E(X)] E(Y)
                  = a E(Y) + E(XY) - a E(Y) - E(X) E(Y)
                  = E(XY) - E(X) E(Y)
                  = Cov(X, Y).    (6.22)

Result 6.2. Cov(aX, bY):

    Cov(aX, bY) = E(aX bY) - E(aX) E(bY)
                = ab E(XY) - ab E(X) E(Y)
                = ab Cov(X, Y).    (6.23)

Result 6.3. Cov(X, Y + Z):

    Cov(X, Y + Z) = E[X(Y + Z)] - E(X) E(Y + Z)
                  = E(XY + XZ) - E(X)[E(Y) + E(Z)]
                  = E(XY) - E(X) E(Y) + E(XZ) - E(X) E(Z)
                  = Cov(X, Y) + Cov(X, Z).    (6.24)
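The moments in Example 6.1 can be checked numerically. This sketch approximates the double integrals by midpoint Riemann sums on the unit square, taking the Example 6.1 density to be f(x, y) = 2x + 2y - 4xy (consistent with the moments computed in the example); the four printed values should land near 1/2, 1/2, 2/9, and -1/36.

```python
# Midpoint Riemann sums on (0,1)^2 for the moments of Example 6.1.
n = 400
h = 1.0 / n
grid = [(i + 0.5) * h for i in range(n)]

f = lambda x, y: 2 * x + 2 * y - 4 * x * y       # Example 6.1 density
e_x = sum(x * f(x, y) * h * h for x in grid for y in grid)
e_y = sum(y * f(x, y) * h * h for x in grid for y in grid)
e_xy = sum(x * y * f(x, y) * h * h for x in grid for y in grid)
cov = e_xy - e_x * e_y

print(e_x, e_y, e_xy, cov)   # ~0.5, 0.5, 0.2222 (= 2/9), -0.02778 (= -1/36)
```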

Result 6.4. Cov(aW + bX, cY + dZ):

    Cov(aW + bX, cY + dZ) = Cov(aW + bX, cY) + Cov(aW + bX, dZ)
                          = Cov(aW, cY) + Cov(bX, cY) + Cov(aW, dZ) + Cov(bX, dZ)
                          = ac Cov(W, Y) + bc Cov(X, Y) + ad Cov(W, Z) + bd Cov(X, Z).    (6.25)

We can state the following general result.

Proposition 6.2. Let

    U = a_0 + \sum_{i=1}^{I} a_i X_i
    V = b_0 + \sum_{j=1}^{J} b_j X_j.

Then

    Cov(U, V) = \sum_{i=1}^{I} \sum_{j=1}^{J} a_i b_j Cov(X_i, X_j).    (6.26)

We will be more interested in the following special cases:

    Cov(X + Y, X + Y) = Cov(X, X) + Cov(Y, Y) + 2 Cov(X, Y)
                      = Var(X) + Var(Y) + 2 Cov(X, Y)    (6.27)

    Cov(X - Y, X - Y) = Var(X) + Var(Y) - 2 Cov(X, Y).    (6.28)

If U = a + \sum_{i=1}^{I} a_i X_i and the X_i are independent, then

    Var(U) = Cov(U, U) = Cov(a + \sum_{i=1}^{I} a_i X_i, a + \sum_{j=1}^{I} a_j X_j)
           = \sum_{i=1}^{I} \sum_{j=1}^{I} a_i a_j Cov(X_i, X_j).    (6.29)

If i \neq j, Cov(X_i, X_j) = 0 because the X_i are independent. If i = j, Cov(X_i, X_j) = Var(X_i). Thus,

    Var(U) = \sum_{i=1}^{I} a_i^2 Var(X_i) = \sum_{i=1}^{I} a_i^2 \sigma_i^2.    (6.30)

This result is to some extent the analog for variances of the result for expectations established in Proposition 6.1. We required independence in the variance calculation.
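Equation (6.30) can be verified exactly for discrete random variables by enumerating the joint pmf. In the sketch below the two marginal pmf's and the coefficients are hypothetical; independence enters only through forming the joint pmf as the product of the marginals.

```python
from itertools import product

# Hypothetical marginal pmf's and coefficients.
pmf1 = {1: 0.5, 3: 0.5}            # Var(X1) = 1
pmf2 = {0: 0.25, 2: 0.75}          # Var(X2) = 0.75
a0, a1, a2 = 2.0, 3.0, -1.0

def var(pmf):
    mu = sum(x * p for x, p in pmf.items())
    return sum((x - mu) ** 2 * p for x, p in pmf.items())

# Left-hand side of (6.30): enumerate the pmf of U = a0 + a1*X1 + a2*X2
# (independence means the joint pmf is the product of the marginals).
pmf_u = {}
for (x1, p1), (x2, p2) in product(pmf1.items(), pmf2.items()):
    u = a0 + a1 * x1 + a2 * x2
    pmf_u[u] = pmf_u.get(u, 0.0) + p1 * p2
lhs = var(pmf_u)

# Right-hand side of (6.30): a1^2 Var(X1) + a2^2 Var(X2).
rhs = a1 ** 2 * var(pmf1) + a2 ** 2 * var(pmf2)
assert abs(lhs - rhs) < 1e-12      # both equal 9.75 here
print(lhs, rhs)
```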

Example (continued). Here X : B(n, p) where n = 40. What is the variance of X? We can write

    X = \sum_{i=1}^{40} Y_i    (6.31)

where Y_i is a Bernoulli random variable with probability of a correct response p. If we assume (as we do) that the Y_i's are independent, then by Proposition 6.2,

    Var(X) = \sum_{i=1}^{40} Var(Y_i) = \sum_{i=1}^{40} p(1 - p) = 40 p(1 - p).    (6.32)

In general, for n trials we have Var(X) = np(1 - p). We did not show this before because it is very tedious to do directly from the definition of the variance.

Example 6.2. Integrate-and-Fire Model as a Random Walk + Drift. As we discussed in Lecture 3, an elementary stochastic model for the subthreshold membrane voltage of a neuron is the random walk with drift model proposed by Gerstein and Mandelbrot (1964). In continuous time, it is

    V(t) = \mu t + \sigma \int_0^t dW(u)    (6.33)

where W(u) is a Wiener process, i.e. a continuous-time Gaussian process with mean 0 whose disjoint increments are independent, so that E(dW(s) dW(t)) = 0 for s \neq t. To simulate this on the computer we rewrite it in discrete time as

    V(t_k) = \mu t_k + \sigma \sum_{j=0}^{k} W_j    (6.34)

where W_j : N(0, 1). Find the variance of V(t_k). Applying Proposition 6.2 we get

    Var(V(t_k)) = Var(\mu t_k + \sigma \sum_{j=0}^{k} W_j)
                = \sigma^2 \sum_{j=0}^{k} Var(W_j)
                = \sigma^2 (k + 1).    (6.35)

This is a restatement of the well-known classic result that the variance of a random walk is proportional to the length of the walk. Uncertainty increases with time. Notice, however, that

    E(V(t_k)) = E(\mu t_k + \sigma \sum_{j=0}^{k} W_j)
              = \mu t_k + \sigma \sum_{j=0}^{k} E(W_j)
              = \mu t_k + 0
              = \mu t_k.    (6.36)

On average the random walk with drift will follow a course of linear increase in membrane voltage. (See Gerstein GL, Mandelbrot B. Random walk models for the spiking activity of a single neuron. Biophysical Journal 4: 41-68, 1964.)

B. Correlation

Definition 6.2. If X and Y are jointly distributed random variables and the variances and covariances of X and Y are non-zero, then the correlation of X and Y is

    \rho = Cov(X, Y) / [Var(X) Var(Y)]^{1/2} = Cov(X, Y) / (\sigma_x \sigma_y).    (6.37)

Example 6.1 (continued). Find \rho. We have E(X) = E(Y) = 1/2 and we computed Cov(X, Y) = -1/36. Because X and Y are both uniform on (0, 1), it is easy to show that Var(X) = Var(Y) = 1/12. Now we have

    \rho = (-1/36) / [(1/12)(1/12)]^{1/2} = (-1/36) / (1/12) = -1/3.    (6.38)

We now establish an important property of the correlation coefficient.

Proposition 6.3. -1 \leq \rho \leq 1, and \rho = \pm 1 if and only if Pr(Y = a + bX) = 1 for some constants a and b.

Proof:

    0 \leq Var(X/\sigma_x + Y/\sigma_y)
      = Var(X)/\sigma_x^2 + Var(Y)/\sigma_y^2 + 2 Cov(X, Y)/(\sigma_x \sigma_y)
      = 1 + 1 + 2\rho
      = 2(1 + \rho).    (6.39)

Hence 2(1 + \rho) \geq 0, so 1 + \rho \geq 0, or \rho \geq -1. Also,

    0 \leq Var(X/\sigma_x - Y/\sigma_y) = 2(1 - \rho).    (6.40)

Hence 2(1 - \rho) \geq 0, so 1 - \rho \geq 0, or \rho \leq 1. To prove the last statement of the proposition requires Chebyshev's inequality, which we will prove in Lecture 7.

Remark 6.8. Correlation provides a normalized (scale-free) measure of covariance.

Example 4.6 (continued). We previously stated in Lecture 4 that \rho was the correlation coefficient for a bivariate Gaussian probability density. Now we demonstrate this by showing that Cov(X, Y) = \rho \sigma_x \sigma_y. We have

    Cov(X, Y) = \int \int (x - \mu_x)(y - \mu_y) f(x, y) dx dy.    (6.41)

Making the change of variables u = (x - \mu_x)/\sigma_x, v = (y - \mu_y)/\sigma_y, we have

    Cov(X, Y) = [\sigma_x \sigma_y / (2\pi (1 - \rho^2)^{1/2})]
                \int \int uv \exp{-(u^2 + v^2 - 2\rho uv) / (2(1 - \rho^2))} du dv.    (6.42)

Completing the square in u, this becomes

    = [\sigma_x \sigma_y / (2\pi (1 - \rho^2)^{1/2})]
      \int v \exp(-v^2/2) [ \int u \exp{-(u - \rho v)^2 / (2(1 - \rho^2))} du ] dv.    (6.43)

The inner integral is the mean of a Gaussian density with mean \rho v, times its normalizing constant; that is, it equals \rho v [2\pi (1 - \rho^2)]^{1/2}. Hence

    Cov(X, Y) = \rho \sigma_x \sigma_y (2\pi)^{-1/2} \int v^2 \exp(-v^2/2) dv
              = \rho \sigma_x \sigma_y E(V^2)
              = \rho \sigma_x \sigma_y,    (6.44)

since E(V^2) = 1 for a standard Gaussian random variable V.

IV. Moment Generating Functions

"Learning to use moment generating functions in probability calculations is like going from using a PC to using a supercomputer." -- E.N. Brown

As we will see, the moment generating function simplifies many of the calculations we have already done and will enable many more, the most important of which is the proof of the Central Limit Theorem.

Definition 6.3. The r-th moment of a random variable X is

    E(X^r) = \int x^r f(x) dx    (6.45)

provided the expectation exists. The r-th central moment of a random variable X is

    E{[X - E(X)]^r} = \int (x - E(X))^r f(x) dx    (6.46)

provided that the expectation exists.

Remark 6.9. Some key factoids about moments:

    E(X) = \mu                  1st moment
    E(X^2)                      2nd moment
    E(X - \mu)^2 = \sigma^2     2nd central moment (variance)
    E(X - \mu)^3                3rd central moment
    E(X - \mu)^4                4th central moment

The skewness measures the asymmetry of the distribution.

Figure 6.A. Examples of negative (left panel) and positive (right panel) skew in a probability density.

It is defined as

    skewness = E(X - \mu)^3 / \sigma^3.    (6.47)

The skewness of the standard Gaussian distribution is 0. The kurtosis measures how fat the tails of a distribution are. It is often reported as

    E(X - \mu)^4 / \sigma^4    (6.48)

or, to make a comparison relative to a standard Gaussian distribution, as

    E(X - \mu)^4 / \sigma^4 - 3    (6.49)

because the kurtosis of the standard Gaussian distribution is 3.

Definition 6.4. The moment generating function (mgf) of a random variable X is M(t) = E(e^{tX}), which is, in the

Discrete Case,

    M(t) = \sum_x e^{tx} p(x)    (6.50)

and in the Continuous Case,

    M(t) = \int e^{tx} f(x) dx    (6.51)

provided the expectations exist. Existence depends critically on how fast the tails of the probability density die out.

Result 6.5. If M(t) exists in an open interval containing zero, it uniquely determines f(x). This is an important, deep result which allows us to perform computations on M(t) to establish properties of X and f(x). For example, if we differentiate M(t) we obtain

    M'(t) = (d/dt) \int e^{tx} f(x) dx = \int (d/dt) e^{tx} f(x) dx = \int x e^{tx} f(x) dx    (6.52)

and

    M'(0) = \int x f(x) dx = E(X).    (6.53)

Differentiating r times and evaluating the derivative at zero gives

    M^{(r)}(0) = E(X^r).    (6.54)

Result 6.6. If M(t) exists in an open interval containing 0, then M^{(r)}(0) = E(X^r). In this sense M(t) can be used to generate the moments of X.

Example 6.3. Poisson MGF.

    M(t) = \sum_{k=0}^{\infty} e^{tk} \lambda^k e^{-\lambda} / k!
         = e^{-\lambda} \sum_{k=0}^{\infty} (\lambda e^t)^k / k!
         = e^{-\lambda} e^{\lambda e^t}
         = e^{\lambda(e^t - 1)}.    (6.55)

The sum converges for all t. Differentiating, we have

    M'(t) = \lambda e^t e^{\lambda(e^t - 1)}
    M''(t) = \lambda e^t e^{\lambda(e^t - 1)} + \lambda^2 e^{2t} e^{\lambda(e^t - 1)}

and

    M'(0) = \lambda, so E(k) = \lambda    (6.56)
    M''(0) = \lambda + \lambda^2, so E(k^2) = \lambda + \lambda^2,

or

    Var(k) = E(k^2) - [E(k)]^2 = \lambda + \lambda^2 - \lambda^2 = \lambda.

Example 6.4. Gamma MGF.

    M(t) = \int_0^{\infty} e^{tx} [\beta^{\alpha} / \Gamma(\alpha)] x^{\alpha - 1} e^{-\beta x} dx
         = [\beta^{\alpha} / \Gamma(\alpha)] \int_0^{\infty} x^{\alpha - 1} e^{-(\beta - t) x} dx    (6.57)
         = [\beta^{\alpha} / \Gamma(\alpha)] [\Gamma(\alpha) / (\beta - t)^{\alpha}]
         = \beta^{\alpha} / (\beta - t)^{\alpha},

where we require t < \beta for the integral to exist.

    M'(t) = \alpha \beta^{\alpha} (\beta - t)^{-(\alpha + 1)},        M'(0) = E(X) = \alpha / \beta
    M''(t) = \alpha(\alpha + 1) \beta^{\alpha} (\beta - t)^{-(\alpha + 2)},  M''(0) = E(X^2) = \alpha(\alpha + 1) / \beta^2    (6.58)

and

    Var(X) = E(X^2) - [E(X)]^2 = \alpha(\alpha + 1)/\beta^2 - \alpha^2/\beta^2 = \alpha / \beta^2.

Example 6.5. Standard Gaussian MGF.

    M(t) = (2\pi)^{-1/2} \int e^{tx} e^{-x^2/2} dx.    (6.59)

Complete the square to rewrite

    tx - x^2/2 = -(x - t)^2/2 + t^2/2    (6.60)

and hence,

    M(t) = e^{t^2/2} (2\pi)^{-1/2} \int e^{-(x - t)^2/2} dx.    (6.61)

Let u = x - t; then dx = du and

    M(t) = e^{t^2/2} (2\pi)^{-1/2} \int e^{-u^2/2} du = e^{t^2/2}.    (6.62)

Result 6.7 (Linear Transformation). If X has mgf M_x(t) and Y = a + bX, then Y has mgf M_y(t) = e^{at} M_x(bt).

Proof:

    M_y(t) = E(e^{tY}) = E(e^{at + btX}) = E(e^{at} e^{btX}) = e^{at} E(e^{btX}) = e^{at} M_x(bt).    (6.63)

Example 6.5 (continued). Let Y = \mu + \sigma X where X : N(0, 1); then M_y(t) = e^{\mu t} M_x(\sigma t) = e^{\mu t} e^{\sigma^2 t^2 / 2}.

Result 6.8 (Convolution of Independent Random Variables). If X and Y are independent with mgf's M_x(t) and M_y(t) respectively, and Z = X + Y, then M_z(t) = M_x(t) M_y(t) on the common interval where both mgf's exist.

Proof:

    M_z(t) = E(e^{tZ}) = E(e^{t(X + Y)}) = E(e^{tX} e^{tY}) = E(e^{tX}) E(e^{tY}) = M_x(t) M_y(t)    (6.64)

by independence.

Example 5.4 (continued). We had X : Exponential(\lambda_1), Y : Exponential(\lambda_2), and Z = X + Y. We showed that if \lambda_1 = \lambda_2 = \lambda and X and Y are independent, then Z : \Gamma(\alpha = 2, \beta = \lambda). We now establish the same result using mgf's (Example 6.4 and Result 6.8):

    M_z(t) = M_x(t) M_y(t) = [\lambda / (\lambda - t)] [\lambda / (\lambda - t)] = [\lambda / (\lambda - t)]^2,    (6.65)

which is the mgf of a gamma random variable with \alpha = 2 and \beta = \lambda.

Example 6.6. X : N(\mu_x, \sigma_x^2), Y : N(\mu_y, \sigma_y^2), and X and Y are independent. Find the distribution of Z = X + Y. By Example 6.5,

    M_z(t) = M_x(t) M_y(t) = e^{\mu_x t} e^{\sigma_x^2 t^2/2} e^{\mu_y t} e^{\sigma_y^2 t^2/2}
           = e^{(\mu_x + \mu_y) t} e^{(\sigma_x^2 + \sigma_y^2) t^2/2}.    (6.66)

Hence, Z : N(\mu_x + \mu_y, \sigma_x^2 + \sigma_y^2).

Remark 6.10. There is an obvious generalization of Result 6.8: if X_1, ..., X_n are independent random variables with mgf's M_i(t), i = 1, ..., n, and Z = \sum_{i=1}^n X_i, then

    M_z(t) = \prod_{i=1}^{n} M_i(t).    (6.67)

The X_i's do not have to come from the same distribution.

Remark 6.11. In general, if two random variables follow the same distribution, their sum will not necessarily follow the same distribution. See Examples 5.4 and 5.5, and consider the case of two independent gamma random variables with \beta_1 \neq \beta_2.

Remark 6.12. Result 6.8 is a special case of the well-known mathematical fact that the convolution of two functions (in this case probability density functions) in the probability (time)

domain corresponds to multiplying the transforms in the t (frequency) domain. This result is exploited a lot in linear systems theory and Fourier analysis.

Remark 6.13. There is a multivariate generalization of the mgf, which we will not consider.

Remark 6.14. The limitation that the mgf may not exist is rectified by working instead with the characteristic function

    \phi(t) = E(e^{itX}) = \int e^{itx} f(x) dx    (6.68)

where i = (-1)^{1/2}. The integral converges for all t because |e^{itx}| \leq 1. The characteristic function has the same properties as the mgf without the existence limitation. Its use requires familiarity with complex numbers and their manipulations.

V. Summary

We can now perform basic probability calculations on linear combinations of random variables. These results have established the essential groundwork for proving the Central Limit Theorem in Lecture 7.

Acknowledgments

I am grateful to Julie Scott for technical assistance in preparing this lecture.

Reference

Rice JA. Mathematical Statistics and Data Analysis, 3rd edition. Boston, MA, 2007.
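As a closing numerical illustration of Results 6.6 and 6.8 (the parameter values below are arbitrary choices): central finite differences of the Poisson mgf at t = 0 recover E(X) = \lambda and Var(X) = \lambda, and the product of two Exponential(\lambda) mgf's coincides with the \Gamma(2, \lambda) mgf.

```python
import math

# Poisson(lam) mgf M(t) = exp(lam * (e^t - 1)); lam is arbitrary.
lam = 3.0
M = lambda t: math.exp(lam * (math.exp(t) - 1.0))

# Central finite differences approximate M'(0) and M''(0) (Result 6.6).
h = 1e-4
m1 = (M(h) - M(-h)) / (2 * h)               # ~ E(X)   = lam
m2 = (M(h) - 2 * M(0.0) + M(-h)) / h ** 2   # ~ E(X^2) = lam + lam^2
print(m1, m2 - m1 ** 2)                     # mean ~ 3, variance ~ 3

# Result 6.8: product of two Exponential(lam) mgf's = Gamma(2, lam) mgf.
mgf_exp = lambda t: lam / (lam - t)
mgf_gamma = lambda t: (lam / (lam - t)) ** 2
t0 = 0.7                                    # any test point with t0 < lam
assert abs(mgf_exp(t0) * mgf_exp(t0) - mgf_gamma(t0)) < 1e-12
```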

MIT OpenCourseWare
Statistics for Brain and Cognitive Science, Fall 2016
For information about citing these materials or our Terms of Use, visit:


More information

Suites of Tests. DIEHARD TESTS (Marsaglia, 1985) See

Suites of Tests. DIEHARD TESTS (Marsaglia, 1985) See Sutes of Tests DIEHARD TESTS (Marsagla, 985 See http://stat.fsu.edu/~geo/dehard.html NIST Test sute- 6 tests on the sequences of bts http://csrc.nst.gov/rng/ Test U0 Includes the above tests. http://www.ro.umontreal.ca/~lecuyer/

More information

Randomness and Computation

Randomness and Computation Randomness and Computaton or, Randomzed Algorthms Mary Cryan School of Informatcs Unversty of Ednburgh RC 208/9) Lecture 0 slde Balls n Bns m balls, n bns, and balls thrown unformly at random nto bns usually

More information

Chapter 11: Simple Linear Regression and Correlation

Chapter 11: Simple Linear Regression and Correlation Chapter 11: Smple Lnear Regresson and Correlaton 11-1 Emprcal Models 11-2 Smple Lnear Regresson 11-3 Propertes of the Least Squares Estmators 11-4 Hypothess Test n Smple Lnear Regresson 11-4.1 Use of t-tests

More information

18.1 Introduction and Recap

18.1 Introduction and Recap CS787: Advanced Algorthms Scrbe: Pryananda Shenoy and Shjn Kong Lecturer: Shuch Chawla Topc: Streamng Algorthmscontnued) Date: 0/26/2007 We contnue talng about streamng algorthms n ths lecture, ncludng

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.

More information

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011 Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected

More information

Econ Statistical Properties of the OLS estimator. Sanjaya DeSilva

Econ Statistical Properties of the OLS estimator. Sanjaya DeSilva Econ 39 - Statstcal Propertes of the OLS estmator Sanjaya DeSlva September, 008 1 Overvew Recall that the true regresson model s Y = β 0 + β 1 X + u (1) Applyng the OLS method to a sample of data, we estmate

More information

Computation of Higher Order Moments from Two Multinomial Overdispersion Likelihood Models

Computation of Higher Order Moments from Two Multinomial Overdispersion Likelihood Models Computaton of Hgher Order Moments from Two Multnomal Overdsperson Lkelhood Models BY J. T. NEWCOMER, N. K. NEERCHAL Department of Mathematcs and Statstcs, Unversty of Maryland, Baltmore County, Baltmore,

More information

More metrics on cartesian products

More metrics on cartesian products More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of

More information

Chat eld, C. and A.J.Collins, Introduction to multivariate analysis. Chapman & Hall, 1980

Chat eld, C. and A.J.Collins, Introduction to multivariate analysis. Chapman & Hall, 1980 MT07: Multvarate Statstcal Methods Mke Tso: emal mke.tso@manchester.ac.uk Webpage for notes: http://www.maths.manchester.ac.uk/~mkt/new_teachng.htm. Introducton to multvarate data. Books Chat eld, C. and

More information

x i1 =1 for all i (the constant ).

x i1 =1 for all i (the constant ). Chapter 5 The Multple Regresson Model Consder an economc model where the dependent varable s a functon of K explanatory varables. The economc model has the form: y = f ( x,x,..., ) xk Approxmate ths by

More information

Economics 130. Lecture 4 Simple Linear Regression Continued

Economics 130. Lecture 4 Simple Linear Regression Continued Economcs 130 Lecture 4 Contnued Readngs for Week 4 Text, Chapter and 3. We contnue wth addressng our second ssue + add n how we evaluate these relatonshps: Where do we get data to do ths analyss? How do

More information

Dr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur

Dr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur Analyss of Varance and Desgn of Experment-I MODULE VII LECTURE - 3 ANALYSIS OF COVARIANCE Dr Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur Any scentfc experment s performed

More information

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2 Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to

More information

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law:

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law: CE304, Sprng 2004 Lecture 4 Introducton to Vapor/Lqud Equlbrum, part 2 Raoult s Law: The smplest model that allows us do VLE calculatons s obtaned when we assume that the vapor phase s an deal gas, and

More information

Uncertainty and auto-correlation in. Measurement

Uncertainty and auto-correlation in. Measurement Uncertanty and auto-correlaton n arxv:1707.03276v2 [physcs.data-an] 30 Dec 2017 Measurement Markus Schebl Federal Offce of Metrology and Surveyng (BEV), 1160 Venna, Austra E-mal: markus.schebl@bev.gv.at

More information

Distributions /06. G.Serazzi 05/06 Dimensionamento degli Impianti Informatici distrib - 1

Distributions /06. G.Serazzi 05/06 Dimensionamento degli Impianti Informatici distrib - 1 Dstrbutons 8/03/06 /06 G.Serazz 05/06 Dmensonamento degl Impant Informatc dstrb - outlne densty, dstrbuton, moments unform dstrbuton Posson process, eponental dstrbuton Pareto functon densty and dstrbuton

More information

Expected Value and Variance

Expected Value and Variance MATH 38 Expected Value and Varance Dr. Neal, WKU We now shall dscuss how to fnd the average and standard devaton of a random varable X. Expected Value Defnton. The expected value (or average value, or

More information

CS 798: Homework Assignment 2 (Probability)

CS 798: Homework Assignment 2 (Probability) 0 Sample space Assgned: September 30, 2009 In the IEEE 802 protocol, the congeston wndow (CW) parameter s used as follows: ntally, a termnal wats for a random tme perod (called backoff) chosen n the range

More information

Lecture 3 Stat102, Spring 2007

Lecture 3 Stat102, Spring 2007 Lecture 3 Stat0, Sprng 007 Chapter 3. 3.: Introducton to regresson analyss Lnear regresson as a descrptve technque The least-squares equatons Chapter 3.3 Samplng dstrbuton of b 0, b. Contnued n net lecture

More information

STAT 511 FINAL EXAM NAME Spring 2001

STAT 511 FINAL EXAM NAME Spring 2001 STAT 5 FINAL EXAM NAME Sprng Instructons: Ths s a closed book exam. No notes or books are allowed. ou may use a calculator but you are not allowed to store notes or formulas n the calculator. Please wrte

More information

The Order Relation and Trace Inequalities for. Hermitian Operators

The Order Relation and Trace Inequalities for. Hermitian Operators Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence

More information

CS-433: Simulation and Modeling Modeling and Probability Review

CS-433: Simulation and Modeling Modeling and Probability Review CS-433: Smulaton and Modelng Modelng and Probablty Revew Exercse 1. (Probablty of Smple Events) Exercse 1.1 The owner of a camera shop receves a shpment of fve cameras from a camera manufacturer. Unknown

More information

See Book Chapter 11 2 nd Edition (Chapter 10 1 st Edition)

See Book Chapter 11 2 nd Edition (Chapter 10 1 st Edition) Count Data Models See Book Chapter 11 2 nd Edton (Chapter 10 1 st Edton) Count data consst of non-negatve nteger values Examples: number of drver route changes per week, the number of trp departure changes

More information

On mutual information estimation for mixed-pair random variables

On mutual information estimation for mixed-pair random variables On mutual nformaton estmaton for mxed-par random varables November 3, 218 Aleksandr Beknazaryan, Xn Dang and Haln Sang 1 Department of Mathematcs, The Unversty of Msssspp, Unversty, MS 38677, USA. E-mal:

More information

β0 + β1xi and want to estimate the unknown

β0 + β1xi and want to estimate the unknown SLR Models Estmaton Those OLS Estmates Estmators (e ante) v. estmates (e post) The Smple Lnear Regresson (SLR) Condtons -4 An Asde: The Populaton Regresson Functon B and B are Lnear Estmators (condtonal

More information

A be a probability space. A random vector

A be a probability space. A random vector Statstcs 1: Probablty Theory II 8 1 JOINT AND MARGINAL DISTRIBUTIONS In Probablty Theory I we formulate the concept of a (real) random varable and descrbe the probablstc behavor of ths random varable by

More information

Another converse of Jensen s inequality

Another converse of Jensen s inequality Another converse of Jensen s nequalty Slavko Smc Abstract. We gve the best possble global bounds for a form of dscrete Jensen s nequalty. By some examples ts frutfulness s shown. 1. Introducton Throughout

More information

Composite Hypotheses testing

Composite Hypotheses testing Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter

More information

Predictive Analytics : QM901.1x Prof U Dinesh Kumar, IIMB. All Rights Reserved, Indian Institute of Management Bangalore

Predictive Analytics : QM901.1x Prof U Dinesh Kumar, IIMB. All Rights Reserved, Indian Institute of Management Bangalore Sesson Outlne Introducton to classfcaton problems and dscrete choce models. Introducton to Logstcs Regresson. Logstc functon and Logt functon. Maxmum Lkelhood Estmator (MLE) for estmaton of LR parameters.

More information

Notes prepared by Prof Mrs) M.J. Gholba Class M.Sc Part(I) Information Technology

Notes prepared by Prof Mrs) M.J. Gholba Class M.Sc Part(I) Information Technology Inverse transformatons Generaton of random observatons from gven dstrbutons Assume that random numbers,,, are readly avalable, where each tself s a random varable whch s unformly dstrbuted over the range(,).

More information

RELIABILITY ASSESSMENT

RELIABILITY ASSESSMENT CHAPTER Rsk Analyss n Engneerng and Economcs RELIABILITY ASSESSMENT A. J. Clark School of Engneerng Department of Cvl and Envronmental Engneerng 4a CHAPMAN HALL/CRC Rsk Analyss for Engneerng Department

More information

STAT 3008 Applied Regression Analysis

STAT 3008 Applied Regression Analysis STAT 3008 Appled Regresson Analyss Tutoral : Smple Lnear Regresson LAI Chun He Department of Statstcs, The Chnese Unversty of Hong Kong 1 Model Assumpton To quantfy the relatonshp between two factors,

More information

Chapter 3 Describing Data Using Numerical Measures

Chapter 3 Describing Data Using Numerical Measures Chapter 3 Student Lecture Notes 3-1 Chapter 3 Descrbng Data Usng Numercal Measures Fall 2006 Fundamentals of Busness Statstcs 1 Chapter Goals To establsh the usefulness of summary measures of data. The

More information

Chapter 2 - The Simple Linear Regression Model S =0. e i is a random error. S β2 β. This is a minimization problem. Solution is a calculus exercise.

Chapter 2 - The Simple Linear Regression Model S =0. e i is a random error. S β2 β. This is a minimization problem. Solution is a calculus exercise. Chapter - The Smple Lnear Regresson Model The lnear regresson equaton s: where y + = β + β e for =,..., y and are observable varables e s a random error How can an estmaton rule be constructed for the

More information

APPENDIX A Some Linear Algebra

APPENDIX A Some Linear Algebra APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,

More information

MIMA Group. Chapter 2 Bayesian Decision Theory. School of Computer Science and Technology, Shandong University. Xin-Shun SDU

MIMA Group. Chapter 2 Bayesian Decision Theory. School of Computer Science and Technology, Shandong University. Xin-Shun SDU Group M D L M Chapter Bayesan Decson heory Xn-Shun Xu @ SDU School of Computer Scence and echnology, Shandong Unversty Bayesan Decson heory Bayesan decson theory s a statstcal approach to data mnng/pattern

More information

Lecture 6: Introduction to Linear Regression

Lecture 6: Introduction to Linear Regression Lecture 6: Introducton to Lnear Regresson An Manchakul amancha@jhsph.edu 24 Aprl 27 Lnear regresson: man dea Lnear regresson can be used to study an outcome as a lnear functon of a predctor Example: 6

More information

MA 323 Geometric Modelling Course Notes: Day 13 Bezier Curves & Bernstein Polynomials

MA 323 Geometric Modelling Course Notes: Day 13 Bezier Curves & Bernstein Polynomials MA 323 Geometrc Modellng Course Notes: Day 13 Bezer Curves & Bernsten Polynomals Davd L. Fnn Over the past few days, we have looked at de Casteljau s algorthm for generatng a polynomal curve, and we have

More information

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1 C/CS/Phy9 Problem Set 3 Solutons Out: Oct, 8 Suppose you have two qubts n some arbtrary entangled state ψ You apply the teleportaton protocol to each of the qubts separately What s the resultng state obtaned

More information

A Robust Method for Calculating the Correlation Coefficient

A Robust Method for Calculating the Correlation Coefficient A Robust Method for Calculatng the Correlaton Coeffcent E.B. Nven and C. V. Deutsch Relatonshps between prmary and secondary data are frequently quantfed usng the correlaton coeffcent; however, the tradtonal

More information

), it produces a response (output function g (x)

), it produces a response (output function g (x) Lnear Systems Revew Notes adapted from notes by Mchael Braun Typcally n electrcal engneerng, one s concerned wth functons of tme, such as a voltage waveform System descrpton s therefore defned n the domans

More information

Polynomials. 1 More properties of polynomials

Polynomials. 1 More properties of polynomials Polynomals 1 More propertes of polynomals Recall that, for R a commutatve rng wth unty (as wth all rngs n ths course unless otherwse noted), we defne R[x] to be the set of expressons n =0 a x, where a

More information

CIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M

CIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M CIS56: achne Learnng Lecture 3 (Sept 6, 003) Preparaton help: Xaoyng Huang Lnear Regresson Lnear regresson can be represented by a functonal form: f(; θ) = θ 0 0 +θ + + θ = θ = 0 ote: 0 s a dummy attrbute

More information

CS 2750 Machine Learning. Lecture 5. Density estimation. CS 2750 Machine Learning. Announcements

CS 2750 Machine Learning. Lecture 5. Density estimation. CS 2750 Machine Learning. Announcements CS 750 Machne Learnng Lecture 5 Densty estmaton Mlos Hauskrecht mlos@cs.ptt.edu 539 Sennott Square CS 750 Machne Learnng Announcements Homework Due on Wednesday before the class Reports: hand n before

More information

Correlation and Regression. Correlation 9.1. Correlation. Chapter 9

Correlation and Regression. Correlation 9.1. Correlation. Chapter 9 Chapter 9 Correlaton and Regresson 9. Correlaton Correlaton A correlaton s a relatonshp between two varables. The data can be represented b the ordered pars (, ) where s the ndependent (or eplanator) varable,

More information

Inductance Calculation for Conductors of Arbitrary Shape

Inductance Calculation for Conductors of Arbitrary Shape CRYO/02/028 Aprl 5, 2002 Inductance Calculaton for Conductors of Arbtrary Shape L. Bottura Dstrbuton: Internal Summary In ths note we descrbe a method for the numercal calculaton of nductances among conductors

More information

Generalized Linear Methods

Generalized Linear Methods Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set

More information

e i is a random error

e i is a random error Chapter - The Smple Lnear Regresson Model The lnear regresson equaton s: where + β + β e for,..., and are observable varables e s a random error How can an estmaton rule be constructed for the unknown

More information

Estimation: Part 2. Chapter GREG estimation

Estimation: Part 2. Chapter GREG estimation Chapter 9 Estmaton: Part 2 9. GREG estmaton In Chapter 8, we have seen that the regresson estmator s an effcent estmator when there s a lnear relatonshp between y and x. In ths chapter, we generalzed the

More information

Stochastic Structural Dynamics

Stochastic Structural Dynamics Stochastc Structural Dynamcs Lecture-1 Defnton of probablty measure and condtonal probablty Dr C S Manohar Department of Cvl Engneerng Professor of Structural Engneerng Indan Insttute of Scence angalore

More information

Uncertainty in measurements of power and energy on power networks

Uncertainty in measurements of power and energy on power networks Uncertanty n measurements of power and energy on power networks E. Manov, N. Kolev Department of Measurement and Instrumentaton, Techncal Unversty Sofa, bul. Klment Ohrdsk No8, bl., 000 Sofa, Bulgara Tel./fax:

More information

The EM Algorithm (Dempster, Laird, Rubin 1977) The missing data or incomplete data setting: ODL(φ;Y ) = [Y;φ] = [Y X,φ][X φ] = X

The EM Algorithm (Dempster, Laird, Rubin 1977) The missing data or incomplete data setting: ODL(φ;Y ) = [Y;φ] = [Y X,φ][X φ] = X The EM Algorthm (Dempster, Lard, Rubn 1977 The mssng data or ncomplete data settng: An Observed Data Lkelhood (ODL that s a mxture or ntegral of Complete Data Lkelhoods (CDL. (1a ODL(;Y = [Y;] = [Y,][

More information

Week 5: Neural Networks

Week 5: Neural Networks Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple

More information

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0 MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector

More information

U-Pb Geochronology Practical: Background

U-Pb Geochronology Practical: Background U-Pb Geochronology Practcal: Background Basc Concepts: accuracy: measure of the dfference between an expermental measurement and the true value precson: measure of the reproducblty of the expermental result

More information

Error Probability for M Signals

Error Probability for M Signals Chapter 3 rror Probablty for M Sgnals In ths chapter we dscuss the error probablty n decdng whch of M sgnals was transmtted over an arbtrary channel. We assume the sgnals are represented by a set of orthonormal

More information

Lecture 3. Ax x i a i. i i

Lecture 3. Ax x i a i. i i 18.409 The Behavor of Algorthms n Practce 2/14/2 Lecturer: Dan Spelman Lecture 3 Scrbe: Arvnd Sankar 1 Largest sngular value In order to bound the condton number, we need an upper bound on the largest

More information