7. Multivariate Probability

Chris Piech and Mehran Sahami
May 2017

Often you will work on problems where there are several random variables (often interacting with one another). We are going to start to formally look at how those interactions play out. For now we will think of joint probabilities with two random variables X and Y.

1 Discrete Joint Distributions

In the discrete case a joint probability mass function tells you the probability of any combination of events X = a and Y = b:

    p_{X,Y}(a, b) = P(X = a, Y = b)

This function tells you the probability of all combinations of events (the "," means "and"). If you want to back calculate the probability of an event for only one variable you can calculate a marginal from the joint probability mass function:

    p_X(a) = P(X = a) = Σ_y p_{X,Y}(a, y)
    p_Y(b) = P(Y = b) = Σ_x p_{X,Y}(x, b)

In the continuous case a joint probability density function tells you the relative probability of any combination of events X = a and Y = y. In the discrete case, we can define the function p_{X,Y} non-parametrically: instead of using a formula for p we simply state the probability of each possible outcome.

2 Continuous Joint Distributions

Random variables X and Y are Jointly Continuous if there exists a Probability Density Function (PDF) f_{X,Y} such that:

    P(a_1 < X ≤ a_2, b_1 < Y ≤ b_2) = ∫_{a_1}^{a_2} ∫_{b_1}^{b_2} f_{X,Y}(x, y) dy dx

Using the PDF we can compute marginal probability densities:

    f_X(a) = ∫_{−∞}^{∞} f_{X,Y}(a, y) dy
    f_Y(b) = ∫_{−∞}^{∞} f_{X,Y}(x, b) dx

Let F(a, b) be the Cumulative Density Function (CDF). Then:

    P(a_1 < X ≤ a_2, b_1 < Y ≤ b_2) = F(a_2, b_2) − F(a_1, b_2) + F(a_1, b_1) − F(a_2, b_1)
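
The marginalization sums above map directly onto code. Below is a minimal sketch (not from the original notes) that stores a made-up joint PMF non-parametrically as a Python dictionary and sums out one variable to get each marginal.

from collections import defaultdict

# Hypothetical joint PMF p_{X,Y}(x, y) for X in {0, 1}, Y in {0, 1, 2}.
joint = {
    (0, 0): 0.10, (0, 1): 0.20, (0, 2): 0.10,
    (1, 0): 0.25, (1, 1): 0.15, (1, 2): 0.20,
}

def marginal_x(joint):
    """p_X(a) = sum over y of p_{X,Y}(a, y)."""
    p_x = defaultdict(float)
    for (x, y), p in joint.items():
        p_x[x] += p
    return dict(p_x)

def marginal_y(joint):
    """p_Y(b) = sum over x of p_{X,Y}(x, b)."""
    p_y = defaultdict(float)
    for (x, y), p in joint.items():
        p_y[y] += p
    return dict(p_y)

print(marginal_x(joint))  # {0: 0.4, 1: 0.6}
print(marginal_y(joint))  # {0: 0.35, 1: 0.35, 2: 0.3}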

3 Multinomial Distribution

Say you perform n independent trials of an experiment where each trial results in one of m outcomes, with respective probabilities p_1, p_2, ..., p_m (constrained so that Σ_i p_i = 1). Define X_i to be the number of trials with outcome i. A multinomial distribution is a closed form function that answers the question: what is the probability that there are c_i trials with outcome i? Mathematically:

    P(X_1 = c_1, X_2 = c_2, ..., X_m = c_m) = (n choose c_1, c_2, ..., c_m) p_1^{c_1} p_2^{c_2} ... p_m^{c_m}

Example 1

A 6-sided die is rolled 7 times. What is the probability that you roll 1 one, 1 two, 0 threes, 2 fours, 0 fives, and 3 sixes (disregarding order)?

    P(X_1 = 1, X_2 = 1, X_3 = 0, X_4 = 2, X_5 = 0, X_6 = 3)
        = [7! / (1! 1! 0! 2! 0! 3!)] (1/6)^1 (1/6)^1 (1/6)^0 (1/6)^2 (1/6)^0 (1/6)^3
        = 420 (1/6)^7

Federalist Papers

In class we wrote a program to decide whether James Madison or Alexander Hamilton wrote Federalist Paper 49. Both men claimed to have written it, and hence the authorship is in dispute. First we used historical essays to estimate p_i, the probability that Hamilton generates the word i (independent of all previous and future word choices). Similarly we estimated q_i, the probability that Madison generates the word i. For each word i we observe the number of times that word occurs in Federalist Paper 49 (we call that count c_i). We assume that, given no evidence, the paper is equally likely to have been written by Madison or Hamilton.

Define three events: H is the event that Hamilton wrote the paper, M is the event that Madison wrote the paper, and D is the event that a paper has the collection of words observed in Federalist Paper 49. We would like to know whether P(H|D) is larger than P(M|D). This is equivalent to trying to decide whether P(H|D)/P(M|D) is larger than 1.

The event D|H is a multinomial parameterized by the values p_i. The event D|M is also a multinomial, this time parameterized by the values q_i. Using Bayes' rule we can simplify the desired probability:

    P(H|D) / P(M|D) = [P(D|H) P(H) / P(D)] / [P(D|M) P(M) / P(D)]
                    = P(D|H) P(H) / [P(D|M) P(M)]
                    = P(D|H) / P(D|M)
                    = [(n choose c_1, ..., c_m) Π_i p_i^{c_i}] / [(n choose c_1, ..., c_m) Π_i q_i^{c_i}]
                    = Π_i p_i^{c_i} / Π_i q_i^{c_i}

This seems great! We have our desired probability statement expressed in terms of a product of values we have already estimated. However, when we plug this into a computer, both the numerator and denominator come out to be zero. The product of many numbers close to zero is too hard for a computer to represent. To fix this problem, we use a standard trick in computational probability: we apply a log to both sides and apply some basic rules of logs.
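
To see the numerical problem concretely, here is a small sketch (with made-up word probabilities and counts, not the class program). The naive products underflow to zero, while the log-space sum, which the derivation that follows arrives at, stays well-behaved.

import math
import random

random.seed(0)
m = 2000                                              # hypothetical vocabulary size
p = [random.uniform(1e-5, 1e-3) for _ in range(m)]    # stand-in "Hamilton" word probabilities
q = [random.uniform(1e-5, 1e-3) for _ in range(m)]    # stand-in "Madison" word probabilities
c = [random.randint(0, 3) for _ in range(m)]          # stand-in word counts for the disputed paper

# Naive products: both underflow to 0.0, so the ratio num/den is undefined.
num = math.prod(pi ** ci for pi, ci in zip(p, c))
den = math.prod(qi ** ci for qi, ci in zip(q, c))
print(num, den)          # 0.0 0.0

# Log-space version: sum_i c_i log(p_i) - sum_i c_i log(q_i) is numerically stable.
log_ratio = sum(ci * math.log(pi) for pi, ci in zip(p, c)) \
          - sum(ci * math.log(qi) for qi, ci in zip(q, c))
print(log_ratio)         # if negative, P(H|D)/P(M|D) < 1 (the notes' conclusion for the real data)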

Taking the log of the ratio and expanding:

    log[ P(H|D) / P(M|D) ] = log[ Π_i p_i^{c_i} / Π_i q_i^{c_i} ]
                           = log( Π_i p_i^{c_i} ) − log( Π_i q_i^{c_i} )
                           = Σ_i log( p_i^{c_i} ) − Σ_i log( q_i^{c_i} )
                           = Σ_i c_i log(p_i) − Σ_i c_i log(q_i)

This expression is numerically stable, and my computer returned that the answer was a negative number. We can use exponentiation to solve for P(H|D)/P(M|D). Since the exponential of a negative number is a number smaller than 1, this implies that P(H|D)/P(M|D) is smaller than 1. As a result, we conclude that Madison was more likely to have written Federalist Paper 49.

4 Expectation with Multiple RVs

Expectation over a joint isn't nicely defined because it is not clear how to compose the multiple variables. However, expectations over functions of random variables (for example sums or products) are nicely defined:

    E[g(X, Y)] = Σ_{x,y} g(x, y) p(x, y)

for any function g(X, Y). When you expand that result for the function g(X, Y) = X + Y you get a beautiful result:

    E[X + Y] = E[g(X, Y)]
             = Σ_{x,y} g(x, y) p(x, y)
             = Σ_{x,y} [x + y] p(x, y)
             = Σ_{x,y} x p(x, y) + Σ_{x,y} y p(x, y)
             = Σ_x x Σ_y p(x, y) + Σ_y y Σ_x p(x, y)
             = Σ_x x p(x) + Σ_y y p(y)
             = E[X] + E[Y]

This can be generalized to multiple variables:

    E[ Σ_{i=1}^{n} X_i ] = Σ_{i=1}^{n} E[X_i]

Expectations of Products Lemma

Unfortunately the expectation of the product of two random variables only has a nice decomposition in the case where the random variables are independent of one another:

    E[g(X) h(Y)] = E[g(X)] E[h(Y)]   if and only if X and Y are independent

Example 3

A disk surface is a circle of radius R. A single point imperfection is uniformly distributed on the disk with joint PDF:

    f_{X,Y}(x, y) = 1 / (πR^2)   if x^2 + y^2 ≤ R^2
                    0            else

Let D be the distance from the origin: D = √(X^2 + Y^2). What is E[D]? Hint: use the lemmas.
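
The closed form for E[D] is left as an exercise, but a quick Monte Carlo estimate (not part of the original notes) gives a target to check your answer against: sample points uniformly on the disk by rejection and average the distance to the origin.

import random
import math

random.seed(1)
R = 2.0
n = 200_000
total = 0.0
accepted = 0
while accepted < n:
    x = random.uniform(-R, R)
    y = random.uniform(-R, R)
    if x * x + y * y <= R * R:          # keep only points inside the disk of radius R
        total += math.sqrt(x * x + y * y)
        accepted += 1

print(total / accepted)   # ~1.333 for R = 2, i.e. about 2R/3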

5 Independence with Multiple RVs

Discrete

Two discrete random variables X and Y are called independent if:

    P(X = x, Y = y) = P(X = x) P(Y = y)   for all x, y

Intuitively: knowing the value of X tells us nothing about the distribution of Y. If two variables are not independent, they are called dependent. This is conceptually similar to independent events, but we are dealing with multiple variables. Make sure to keep your events and variables distinct.

Continuous

Two continuous random variables X and Y are called independent if:

    P(X ≤ a, Y ≤ b) = P(X ≤ a) P(Y ≤ b)   for all a, b

This can be stated equivalently as:

    F_{X,Y}(a, b) = F_X(a) F_Y(b)   for all a, b
    f_{X,Y}(a, b) = f_X(a) f_Y(b)   for all a, b

More generally, if you can factor the joint density function then your continuous random variables are independent:

    f_{X,Y}(x, y) = h(x) g(y)   where −∞ < x, y < ∞

Example 2

Let N be the # of requests to a web server/day, where N ~ Po(λ). Each request comes from a human (with probability p) or from a bot (with probability 1 − p), independently. Define X to be the # of requests from humans/day and Y to be the # of requests from bots/day. Since requests come in independently, the probability of X conditioned on knowing the number of requests is a Binomial. Specifically:

    (X | N) ~ Bin(N, p)
    (Y | N) ~ Bin(N, 1 − p)

Calculate the probability of getting exactly i human requests and j bot requests. Start by expanding using the chain rule:

    P(X = i, Y = j) = P(X = i, Y = j | X + Y = i + j) P(X + Y = i + j)

We can calculate each term in this expression:

    P(X = i, Y = j | X + Y = i + j) = (i + j choose i) p^i (1 − p)^j

    P(X + Y = i + j) = e^{−λ} λ^{i+j} / (i + j)!

Now we can put those together and simplify:

    P(X = i, Y = j) = (i + j choose i) p^i (1 − p)^j e^{−λ} λ^{i+j} / (i + j)!

As an exercise you can simplify this expression into two independent Poisson distributions.
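
The exercise's claim can be checked numerically. The sketch below (with assumed values for λ, p, i and j, not from the notes) evaluates the joint probability derived above and compares it to the product of two independent Poisson PMFs with rates λp and λ(1 − p).

import math

lam, p = 10.0, 0.3
i, j = 4, 6

def poisson_pmf(k, rate):
    # P(K = k) for K ~ Po(rate)
    return math.exp(-rate) * rate ** k / math.factorial(k)

# Formula derived via the chain rule in Example 2.
joint = math.comb(i + j, i) * p**i * (1 - p)**j * poisson_pmf(i + j, lam)

# Product of two independent Poisson PMFs with rates lam*p and lam*(1 - p).
factored = poisson_pmf(i, lam * p) * poisson_pmf(j, lam * (1 - p))

print(joint, factored)   # the two values agree (up to floating point error)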

Symmetry of Independence

Independence is symmetric. That means that if random variables X and Y are independent, X is independent of Y and Y is independent of X. This claim may seem meaningless but it can be very useful. Imagine a sequence of random variables X_1, X_2, ... Let A_i be the event that X_i is a record value (e.g. it is larger than all previous values). Is A_{n+1} independent of A_n? It is easier to answer that A_n is independent of A_{n+1}. By symmetry of independence both claims must be true.

6 Convolution of Distributions

Convolution is the result of adding two different random variables together. For some particular random variables computing convolution has intuitive closed form equations. Importantly, convolution is the sum of the random variables themselves, not the addition of the probability density functions (PDFs) that correspond to the random variables.

Independent Binomials with equal p

For any two Binomial random variables with the same success probability, X ~ Bin(n_1, p) and Y ~ Bin(n_2, p), the sum of those two random variables is another Binomial: X + Y ~ Bin(n_1 + n_2, p). This does not hold when the two distributions have different parameters p.

Independent Poissons

For any two Poisson random variables, X ~ Po(λ_1) and Y ~ Po(λ_2), the sum of those two random variables is another Poisson: X + Y ~ Po(λ_1 + λ_2). This holds even when λ_1 is not the same as λ_2.

Independent Normals

For any two normal random variables X ~ N(µ_1, σ_1^2) and Y ~ N(µ_2, σ_2^2), the sum of those two random variables is another normal: X + Y ~ N(µ_1 + µ_2, σ_1^2 + σ_2^2).

General Independent Case

For two general independent random variables (aka cases of independent random variables that don't fit the above special situations) you can calculate the CDF or the PDF of the sum of two random variables using the following formulas:

    F_{X+Y}(a) = P(X + Y ≤ a) = ∫_{y=−∞}^{∞} F_X(a − y) f_Y(y) dy

    f_{X+Y}(a) = ∫_{y=−∞}^{∞} f_X(a − y) f_Y(y) dy

There are direct analogies in the discrete case where you replace the integrals with sums and change notation for CDF and PDF.
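
As a sanity check of the Poisson closure, the discrete analogue of the convolution formula can be evaluated directly. The sketch below (assumed rates, not from the notes) compares the convolution sum P(X + Y = a) = Σ_k P(X = a − k) P(Y = k) against the PMF of Po(λ_1 + λ_2).

import math

lam1, lam2 = 2.0, 5.0

def poisson_pmf(k, rate):
    return math.exp(-rate) * rate ** k / math.factorial(k)

a = 6
# Discrete convolution: sum over all ways of splitting a into (a - k) + k.
convolved = sum(poisson_pmf(a - k, lam1) * poisson_pmf(k, lam2) for k in range(a + 1))
direct = poisson_pmf(a, lam1 + lam2)
print(convolved, direct)   # both ≈ P(X + Y = 6) under Po(λ1 + λ2)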

Example 1

Calculate the PDF of X + Y for independent uniform random variables X ~ Uni(0, 1) and Y ~ Uni(0, 1). First plug in the equation for general convolution of independent random variables:

    f_{X+Y}(a) = ∫_{y=0}^{1} f_X(a − y) f_Y(y) dy
    f_{X+Y}(a) = ∫_{y=0}^{1} f_X(a − y) dy        because f_Y(y) = 1

It turns out that is not the easiest thing to integrate. By trying a few different values of a in the range [0, 2] we can observe that the PDF we are trying to calculate is discontinuous at the point a = 1 and thus will be easier to think about as two cases: a < 1 and a > 1. If we calculate f_{X+Y} for both cases and correctly constrain the bounds of the integral we get simple closed forms for each case:

    f_{X+Y}(a) = a         if 0 < a ≤ 1
                 2 − a     if 1 < a ≤ 2
                 0         else

7 Conditional Distributions

Before, we looked at conditional probabilities for events. Here we formally go over conditional probabilities for random variables. The equations for both the discrete and continuous case are intuitive extensions of our understanding of conditional probability.

Discrete

The conditional probability mass function (PMF) for the discrete case:

    p_{X|Y}(x | y) = P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y) = p_{X,Y}(x, y) / p_Y(y)

The conditional cumulative density function (CDF) for the discrete case:

    F_{X|Y}(a | y) = P(X ≤ a | Y = y) = [Σ_{x ≤ a} p_{X,Y}(x, y)] / p_Y(y) = Σ_{x ≤ a} p_{X|Y}(x | y)

Continuous

The conditional probability density function (PDF) for the continuous case:

    f_{X|Y}(x | y) = f_{X,Y}(x, y) / f_Y(y)

The conditional cumulative density function (CDF) for the continuous case:

    F_{X|Y}(a | y) = P(X ≤ a | Y = y) = ∫_{−∞}^{a} f_{X|Y}(x | y) dx

Mixing Discrete and Continuous

These equations are straightforward once you have your head around the notation for probability density functions (f_X(x)) and probability mass functions (p_X(x)).
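
Returning to Example 1 above, a rough numerical check (a sketch, not from the notes): approximate the convolution integral with a Riemann sum at a few values of a and compare against the piecewise closed form.

def f_uniform(x):
    # PDF of Uni(0, 1)
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def f_sum(a, steps=100_000):
    # Midpoint Riemann sum for ∫_0^1 f_X(a - y) f_Y(y) dy
    dy = 1.0 / steps
    return sum(f_uniform(a - (k + 0.5) * dy) * f_uniform((k + 0.5) * dy)
               for k in range(steps)) * dy

def closed_form(a):
    if 0.0 < a <= 1.0:
        return a
    if 1.0 < a <= 2.0:
        return 2.0 - a
    return 0.0

for a in (0.25, 0.75, 1.0, 1.5, 1.9):
    print(a, round(f_sum(a), 4), closed_form(a))   # numeric and closed forms agree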

Let X be a continuous random variable and let N be a discrete random variable. The conditional probabilities of X given N and N given X respectively are:

    f_{X|N}(x | n) = p_{N|X}(n | x) f_X(x) / p_N(n)

    p_{N|X}(n | x) = f_{X|N}(x | n) p_N(n) / f_X(x)

Example 2

Let's say we have two independent random Poisson variables for requests received at a web server in a day: X = # requests from humans/day, X ~ Po(λ_1), and Y = # requests from bots/day, Y ~ Po(λ_2). Since the convolution of Poisson random variables is also a Poisson, we know that the total number of requests (X + Y) is also a Poisson: (X + Y) ~ Po(λ_1 + λ_2). What is the probability of having k human requests on a particular day given that there were n total requests?

    P(X = k | X + Y = n) = P(X = k, Y = n − k) / P(X + Y = n)
                         = P(X = k) P(Y = n − k) / P(X + Y = n)
                         = [e^{−λ_1} λ_1^k / k!] [e^{−λ_2} λ_2^{n−k} / (n − k)!] / [e^{−(λ_1 + λ_2)} (λ_1 + λ_2)^n / n!]
                         = (n choose k) (λ_1 / (λ_1 + λ_2))^k (λ_2 / (λ_1 + λ_2))^{n−k}

which is the PMF of Bin(n, λ_1 / (λ_1 + λ_2)).

8 Covariance and Correlation

Consider the two multivariate distributions shown below. In both images I have plotted one thousand samples drawn from the underlying joint distribution. Clearly the two distributions are different. However, the mean and variance are the same in both the x and the y dimension. What is different?

Covariance is a quantitative measure of the extent to which the deviation of one variable from its mean matches the deviation of the other from its mean. It is a mathematical relationship that is defined as:

    Cov(X, Y) = E[(X − E[X])(Y − E[Y])]

That is a little hard to wrap your mind around (but worth pushing on a bit). The outer expectation will be a weighted sum of the inner function evaluated at a particular (x, y), weighted by the probability of (x, y). If x and y are both above their respective means, or if x and y are both below their respective means, that term will be positive. If one is above its mean and the other is below, the term is negative. If the weighted sum of terms is positive, the two random variables will have a positive correlation.
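
The point of that comparison can be reproduced with a quick simulation (a sketch with made-up parameters, not the notes' actual figures): construct two joint distributions whose x and y means and variances match, but whose covariances have opposite signs.

import random

random.seed(2)

def sample_cloud(sign, n=1000):
    # Y = sign*0.8*X + 0.6*Z with X, Z standard normal, so Var(Y) = 0.64 + 0.36 = 1 either way.
    pts = []
    for _ in range(n):
        x = random.gauss(0, 1)
        z = random.gauss(0, 1)
        pts.append((x, sign * 0.8 * x + 0.6 * z))
    return pts

def mean(v):
    return sum(v) / len(v)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

for sign in (+1, -1):
    pts = sample_cloud(sign)
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    print(round(mean(xs), 2), round(mean(ys), 2),    # means ≈ 0 in both clouds
          round(cov(xs, xs), 2), round(cov(ys, ys), 2),  # variances ≈ 1 in both clouds
          round(cov(xs, ys), 2))                     # covariance ≈ +0.8 vs −0.8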

We can rewrite the above equation to get an equivalent equation:

    Cov(X, Y) = E[XY] − E[X]E[Y]

Using this equation (and the product lemma) it is easy to see that if two random variables are independent their covariance is 0. The reverse is not true in general.

Properties of Covariance

Say that X and Y are arbitrary random variables:

    Cov(X, Y) = Cov(Y, X)
    Cov(X, X) = E[X^2] − E[X]E[X] = Var(X)
    Cov(aX + b, Y) = a Cov(X, Y)

Let X = X_1 + X_2 + ... + X_n and let Y = Y_1 + Y_2 + ... + Y_m. The covariance of X and Y is:

    Cov(X, Y) = Σ_{i=1}^{n} Σ_{j=1}^{m} Cov(X_i, Y_j)

    Cov(X, X) = Var(X) = Σ_{i=1}^{n} Σ_{j=1}^{n} Cov(X_i, X_j)

That last property gives us a third way to calculate variance. You could use this definition to calculate the variance of the binomial.

Correlation

Covariance is interesting because it is a quantitative measurement of the relationship between two variables. Correlation between two random variables, ρ(X, Y), is the covariance of the two variables normalized by the standard deviation of each variable. This normalization cancels the units out and normalizes the measure so that it is always in the range [−1, 1]:

    ρ(X, Y) = Cov(X, Y) / √(Var(X) Var(Y))

Correlation measures linearity between X and Y:

    ρ(X, Y) = 1      Y = aX + b where a = σ_y / σ_x
    ρ(X, Y) = −1     Y = aX + b where a = −σ_y / σ_x
    ρ(X, Y) = 0      absence of linear relationship

If ρ(X, Y) = 0 we say that X and Y are uncorrelated. If two variables are independent, then their correlation will be 0. However, it doesn't go the other way: a correlation of 0 does not imply independence.

When people use the term correlation, they are actually referring to a specific type of correlation called Pearson correlation. It measures the degree to which there is a linear relationship between the two variables. An alternative measure is Spearman correlation, which has a formula almost identical to your regular correlation score, with the exception that the underlying random variables are first transformed into their ranks. Spearman correlation is outside the scope of CS109.
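
The correlation table above is easy to check empirically. The sketch below (made-up data, not from the notes) computes the sample Pearson correlation for an exact positive linear relationship, an exact negative one, and a pair of independent variables.

import math
import random

random.seed(3)

def pearson(xs, ys):
    # Sample version of ρ(X, Y) = Cov(X, Y) / sqrt(Var(X) Var(Y))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / math.sqrt(vx * vy)

xs = [random.gauss(0, 1) for _ in range(10_000)]
zs = [random.gauss(0, 1) for _ in range(10_000)]

print(pearson(xs, [3 * x + 1 for x in xs]))    # ≈ +1: Y = aX + b with a > 0
print(pearson(xs, [-3 * x + 1 for x in xs]))   # ≈ −1: Y = aX + b with a < 0
print(pearson(xs, zs))                         # ≈ 0: independent X and Z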
