Target tracking example. Filtering: [X_t | Y_{1:t}] (main interest). Smoothing: [X_{1:t} | Y_{1:t}] (also given with SIS)


Target tracking example. Filtering: [X_t | Y_{1:t}] (main interest). Smoothing: [X_{1:t} | Y_{1:t}] (also given with SIS). However, as we have seen, the estimate of this distribution breaks down when t gets large due to the weights becoming degenerate (if we don't resample). If we resample, most of the values sampled for X_1 will disappear when t gets large (related to the weight breakdown). So SIS isn't useful for all problems.

Gibbs sampling

A special case of Markov Chain Monte Carlo (MCMC). Instead of generating independent samples, it generates dependent samples via a Markov chain X^0 -> X^1 -> X^2 -> ... Useful for a wide range of problems.

Popular for Bayesian analyses, but it is a general sampling procedure. For example, it can be used to do smoothing in the target tracking example. Similar to SIS in that the random variable X is decomposed into X = {X_1, X_2, ..., X_k} and each piece is simulated separately. However, the conditioning structure is different. When sampling X_j, it is drawn conditional on all other components of X.

Gibbs sampler

A) Starting value: X^0 = {X_1^0, X_2^0, ..., X_k^0}, picked by some mechanism.

B) Sample X^t = {X_1^t, X_2^t, ..., X_k^t} by

1) X_1^t ~ [X_1 | X_2^{t-1}, X_3^{t-1}, ..., X_k^{t-1}]
2) X_2^t ~ [X_2 | X_1^t, X_3^{t-1}, ..., X_k^{t-1}]

j) X_j^t ~ [X_j | X_1^t, ..., X_{j-1}^t, X_{j+1}^{t-1}, ..., X_k^{t-1}]
...
k) X_k^t ~ [X_k | X_1^t, ..., X_{k-1}^t]

Under certain regularity conditions, the realizations X^1, X^2, X^3, ... form a Markov chain with stationary distribution [X]. Thus the realizations can be treated as dependent samples from the desired distribution.

Example: Nuclear pump failure (Gaver & O'Muircheartaigh, Technometrics, 1987; Gelfand & Smith, JASA, 1990). Observed 10 nuclear reactor pumps and counted the number of failures for each pump.
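The scan in A)-B) is easy to sketch in code. As an illustration (my own, not from the notes), a Gibbs sampler for a bivariate normal target, where both full conditionals are univariate normals:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter, seed=0):
    """Gibbs sampler for (X1, X2) ~ N(0, [[1, rho], [rho, 1]]).

    Both full conditionals are univariate normal:
      [X1 | X2 = x2] = N(rho * x2, 1 - rho^2)
      [X2 | X1 = x1] = N(rho * x1, 1 - rho^2)
    """
    rng = np.random.default_rng(seed)
    sd = np.sqrt(1.0 - rho ** 2)
    x1 = x2 = 0.0                        # step A: starting value
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):              # step B: one full scan per iteration
        x1 = rng.normal(rho * x2, sd)    # 1) draw X1 | X2
        x2 = rng.normal(rho * x1, sd)    # 2) draw X2 | X1
        draws[t] = (x1, x2)
    return draws

draws = gibbs_bivariate_normal(rho=0.8, n_iter=20000)
print(np.corrcoef(draws[5000:].T)[0, 1])  # sample correlation near 0.8
```

Successive scans are correlated, but the long-run distribution of the draws is the bivariate normal target, which is exactly the "dependent samples" property claimed above.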

[Table: for each of the 10 pumps, the number of failures (s_i), the observation time (t_i, in 1000s of hours), and the observed rate (lambda-hat_i = failures / time).]

Want to determine the true failure rate lambda_i for each pump with the following hierarchical model:

s_i | lambda_i ~ Poisson(lambda_i t_i)
lambda_i | beta ~ Gamma(alpha, beta)
beta ~ IGamma(gamma, delta)

Note: beta ~ IGamma(gamma, delta) is equivalent to 1/beta ~ Gamma(gamma, 1/delta) (shape gamma, scale 1/delta).

The joint density satisfies

pi(lambda, beta) ∝ [ prod_{i=1}^{10} ((lambda_i t_i)^{s_i} e^{-lambda_i t_i} / s_i!) (lambda_i^{alpha-1} e^{-lambda_i/beta} / (Gamma(alpha) beta^alpha)) ] × delta^gamma e^{-delta/beta} / (Gamma(gamma) beta^{gamma+1})

Want to determine
1) [lambda_i | S] for each pump i = 1, ..., 10
2) [beta | S], where S = {s_1, ..., s_10}

Note that both sets of these distributions are hard to get analytically. Can show (integrating beta out of the joint density) that

p(lambda | S) ∝ [ prod_{i=1}^{10} lambda_i^{alpha + s_i - 1} e^{-t_i lambda_i} ] / (delta + sum_{i=1}^{10} lambda_i)^{10 alpha + gamma}

where lambda = {lambda_1, ..., lambda_10}. Note that the lambda_i are correlated, and trying to get the marginal for each looks to be intractable analytically. Run a Gibbs sampler to determine [beta, lambda | S]. From this sampler we can get the desired distributions [lambda_i | S] and [beta | S].

A possible Gibbs scheme:

Step 1) Sample lambda_1 ~ [lambda_1 | lambda_(1), beta, S]
...
Step 10) Sample lambda_10 ~ [lambda_10 | lambda_(10), beta, S]
Step 11) Sample beta ~ [beta | lambda, S]

where lambda_(j) = {lambda_1, ..., lambda_{j-1}, lambda_{j+1}, ..., lambda_10}.

Need the following conditional distributions:

lambda_j ~ [lambda_j | lambda_(j), beta, S] = [lambda_j | beta, s_j] = Gamma(alpha + s_j, (t_j + 1/beta)^{-1}) (shape, scale)

beta ~ [beta | lambda, S] = [beta | lambda] = IGamma(gamma + 10 alpha, delta + sum_{i=1}^{10} lambda_i)

These can be gotten from the joint distribution by including only the terms in the product that contain the random variable of interest:

[beta, lambda, S] ∝ [ prod_{i=1}^{10} ((lambda_i t_i)^{s_i} e^{-lambda_i t_i} / s_i!) (lambda_i^{alpha-1} e^{-lambda_i/beta} / (Gamma(alpha) beta^alpha)) ] × delta^gamma e^{-delta/beta} / (Gamma(gamma) beta^{gamma+1})

e.g. for lambda_j, which terms above have a lambda_j in them?
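Putting the two conditionals together gives the whole sampler in a few lines. A sketch (mine, not from the notes), using stand-in failure counts and observation times since the data table did not survive transcription; the published values are in Gaver & O'Muircheartaigh (1987):

```python
import numpy as np

# Stand-in data: failure counts s_i and observation times t_i (in 1000s of
# hours) for the 10 pumps. Illustrative only.
s = np.array([5, 1, 5, 14, 3, 19, 1, 1, 4, 22], dtype=float)
t = np.array([94.3, 15.7, 62.9, 126.0, 5.2, 31.4, 1.0, 1.0, 2.1, 10.5])

alpha, gamma_, delta = 1.8, 0.1, 1.0   # hyperparameters from the example run
n_iter = 5000
rng = np.random.default_rng(1)

lam = s / t                            # start at the observed rates
beta = 1.0
lam_draws = np.empty((n_iter, 10))
beta_draws = np.empty(n_iter)
for it in range(n_iter):
    # Steps 1-10: lambda_j | beta, s_j ~ Gamma(alpha + s_j, scale 1/(t_j + 1/beta))
    lam = rng.gamma(alpha + s, 1.0 / (t + 1.0 / beta))
    # Step 11: beta | lambda ~ IGamma(gamma + 10 alpha, delta + sum(lambda)),
    # drawn as the reciprocal of a Gamma(gamma + 10 alpha, rate = delta + sum) draw.
    beta = 1.0 / rng.gamma(gamma_ + 10.0 * alpha, 1.0 / (delta + lam.sum()))
    lam_draws[it] = lam
    beta_draws[it] = beta

print(lam_draws[1000:].mean(axis=0))   # estimated posterior means of lambda_i
print(beta_draws[1000:].mean())        # estimated posterior mean of beta
```

Averaging the retained draws of each lambda_i estimates the desired marginals [lambda_i | S] even though each draw was made conditional on beta.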

Equivalently, you can do this by looking at the graph structure of the model, including only the terms that correspond to edges joining the node of interest. e.g. for beta, which edges connect with the node for beta?

[Graph: each s_i connects to its lambda_i, and each lambda_i connects to beta.]

Example run: alpha = 1.8, delta = 1, gamma = 0.1, n = …, beta^0 = …

[Figure: estimated posterior densities of lambda for pumps 1, 2, 7, 8, and 10, and of beta.]

[Table: posterior mean, median, and standard deviation for each pump's lambda_i and for beta.]

[Figure: trace plot of beta and its lag-1 autocorrelation, Cor(beta_i, beta_{i+1}).]

[Figure: trace plots of lambda and lag-1 autocorrelations Cor(lambda_i, lambda_{i+1}) for pumps 1 and 9.]

[Figure: trace plots of lambda and lag-1 autocorrelations Cor(lambda_i, lambda_{i+1}) for pumps 7 and 8.]

Target tracking with the Gibbs sampler

As mentioned last time, the smoothing problem, [X_{1:k} | Y_{1:k}], isn't solved very well with SIS. However, it can be done very easily with Gibbs sampling.

Step j, j = 1, ..., k-1: Draw X_j ~ [X_j | X_{j-1}, X_{j+1}, Y_j]
Step k: Draw X_k ~ [X_k | X_{k-1}, Y_k]

As all the components involved in these conditional distributions are normal, each of these conditional distributions is normal, and thus easily sampled. In the SIS analysis, it was assumed that all of the parameters of the movement and measurement error distributions (all variances) and the starting point were known. This can easily be relaxed by putting priors on X_0, Lambda, and Sigma and sampling them as well as part of the Markov chain.

[Graph: Lambda governs the state transitions X_0 -> X_1 -> X_2 -> X_3; Sigma governs the observations Z_1, Z_2, Z_3.]

The sampler needs to be modified as:

Step 0: Draw X_0 ~ [X_0 | X_1, Lambda]
Step j, j = 1, ..., k-1: Draw X_j ~ [X_j | X_{j-1}, X_{j+1}, Y_j, Lambda]
Step k: Draw X_k ~ [X_k | X_{k-1}, Y_k, Lambda]

Step k+1: Draw Lambda ~ [Lambda | X_{0:k}]
Step k+2: Draw Sigma ~ [Sigma | X_{0:k}, Y_{1:k}]

This can be performed by Gibbs sampling if the prior on X_0 is Normal and the priors on Lambda and Sigma are IGamma.

Conditions for Gibbs sampling to work

While you can always run the chain, it may not give the answer you want. That is, the realizations may not have the desired stationary distribution.

One-step transitions: p(x | y). n-step transitions: p_n(x | y). Stationary distribution: pi(x) = lim_{n -> inf} p_n(x | y).

If it exists, it satisfies

pi(x) = ∫ p(x | y) pi(y) dy

A stronger condition which shows that pi(x) is the density of the stationary distribution is

pi(x) p(y | x) = pi(y) p(x | y) for all x and y (detailed balance).

Note that detailed balance implies stationarity, but stationarity doesn't imply detailed balance. If the following two conditions hold, the chain will have the desired stationary distribution.

Irreducibility: The chain generated must be irreducible. That is, it is possible to get from each state to every other state in a finite number of steps.

Not all problems lead to irreducible chains.

Example: ABO blood types. The children's data implies that the child with blood type AB must have genotype AB and that the child with blood type O must have genotype OO. The only possible way for the two children to inherit those genotypes is for one parent to have genotype AO and the other parent to have genotype BO. However, it is not possible to say which parent has which genotype with certainty. By a simple symmetry argument,

P[Dad = AO & Mom = BO] = P[Dad = BO & Mom = AO] = 1/2

Let's try running a Gibbs sampler, by first generating Mom's genotype given Dad's and then Dad's given Mom's. Let's start the chain with Dad = AO.

Step 1: Generate Mom.
P[Mom = AO | Dad = AO] = 0
P[Mom = BO | Dad = AO] = 1
so Mom = BO.

Step 2: Generate Dad.
P[Dad = AO | Mom = BO] = 1
P[Dad = BO | Mom = BO] = 0
so Dad = AO.

This implies that every realization of the chain has Mom = BO & Dad = AO. If the chain is started with Dad = BO, every realization of that chain will have Mom = AO & Dad = BO.
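The stuck chain can be checked directly. A small sketch (mine) of the two degenerate Gibbs updates described above:

```python
def abo_gibbs(dad_start, n_iter):
    """Gibbs scan over (Mom, Dad) genotypes in {AO, BO}. Both full
    conditionals are degenerate (probabilities 0 and 1), so each 'draw'
    is deterministic."""
    dad = dad_start
    states = []
    for _ in range(n_iter):
        mom = "BO" if dad == "AO" else "AO"  # P[Mom = BO | Dad = AO] = 1, etc.
        dad = "AO" if mom == "BO" else "BO"  # P[Dad = AO | Mom = BO] = 1, etc.
        states.append((mom, dad))
    return states

print(set(abo_gibbs("AO", 1000)))  # only ever visits (Mom=BO, Dad=AO)
print(set(abo_gibbs("BO", 1000)))  # from the other start: (Mom=AO, Dad=BO)
```

Each starting point traps the chain in a single state, so the long-run frequencies are 0/1 rather than the correct 1/2-1/2 split: the chain is reducible.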

The reducible chain in this case does not have the correct stationary distribution. (Well, reducible chains don't really have stationary distributions anyway.) But running the described Gibbs sampler will not correctly describe the distribution of the mother's and father's genotypes.

Aperiodicity: Don't want a periodic chain (e.g. certain states can only occur when t is even). This violates the idea that each state has a long-run frequency marginally.

Starting points

For every chain you need to specify a starting point. There are a number of approaches for choosing this.

1) Prior means. In the pump example, set beta^0 = E[beta] = delta/(gamma - 1) (infinite when gamma <= 1).

2) Estimate from data. In the pump example, E[lambda_i] = alpha beta, so set beta^0 = lambda-bar / alpha. In the target tracking example, set the starting positions at each time to the average observed positions, and difference these to get the velocities.

3) Sample from the prior.

4) Ad hoc choices. In the pump example, set beta^0 = 0.

For many problems, this choice can be important. The stationary distribution is an asymptotic property and it may take a long time for the chain to converge.

[Figure: trace plots of beta against imputation number for three starting values: beta^0 = lambda-bar / alpha, beta^0 = infinity, and beta^0 = 0.]

Starting with beta^0 = 0 (actually 10^-100), the initial draws are not consistent with the stationary distribution seen later in the chain. While for this example the problem clears up quickly, for other problems it can take a while. This is more common with larger problems, which might have millions, or maybe billions, of variables being sampled in a complete single scan through the data. This can occur with large space-time problems, such as the Tropical Pacific sea surface temperature predictions discussed at <

[Figures: forecast map for December 2002 based on data from January 1970 to May 2002; observed December 2002 map.]

The usual approach is to have a burn-in period where the initial samples are thrown away, since they may not be representative of samples from the stationary distribution.

The following table contains estimates of the posterior means of the 11 parameters in the pump example with 3 different starting points. The first 200 imputations were discarded and then the next 1000 imputations were sampled.

[Table: posterior mean estimates for each pump's lambda_i and for beta under the starting values beta^0 = lambda-bar / alpha, beta^0 = 0, and beta^0 = infinity.]

Often the bigger the problem, the longer the burn-in period desired. However, those are the problems where time considerations will limit the total number of imputations that can be done. So you do want to think about starting values for your chain.

Gibbs sampling and Bayes: choice of priors

For Gibbs sampling to be efficient, the draws in each step of the procedure need to be feasible. That suggests that conjugate distributions be used as part of the hierarchical model, as was done in the pump and target tracking examples. However, conjugacy is not strictly required, as rejection sampling with log-concave distributions might be able to be used in some problems. This idea is sometimes used in the software package WinBUGS (Bayesian analysis Using Gibbs Sampling).

However, for some problems the model you want to analyze is not conjugate and the tricks to get around non-conjugacy won't work. For example, let's change the model for the pump example to

s_i | lambda_i ~ Poisson(lambda_i t_i)
lambda_i | mu, sigma^2 ~ LogN(mu, sigma^2)
mu ~ Logistic(nu, tau)
sigma^2 ~ Weibull(alpha, beta)

Good luck on running a Gibbs sampler on this model (I think). Other sampling techniques are needed for this and other more complicated problems.

Metropolis-Hastings Algorithm (M-H)

A general approach for constructing a Markov chain that has the desired stationary distribution (pi_i = pi(i)).

1) Proposal distribution: Assume that X_t = i. Propose a new state j with distribution q_ij = q(j | i).

2) Calculate the Hastings ratio:

a_ij = min( (pi_j q_ji) / (pi_i q_ij), 1 )

3) Acceptance/rejection step: Generate U ~ U(0, 1) and set

X_{t+1} = j if U <= a_ij, and X_{t+1} = i (= X_t) otherwise.
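Steps 1)-3) translate directly into code. A minimal sketch (mine, not from the notes), written for a continuous state with an unnormalized log target; the random-walk example at the end uses a symmetric proposal, so the q terms cancel:

```python
import numpy as np

def metropolis_hastings(log_target, propose, log_q, x0, n_iter, seed=0):
    """Generic M-H. log_target is log pi up to a constant; propose(rng, x)
    draws a candidate j ~ q(. | x); log_q(j, i) is log q(j | i), also up
    to a constant."""
    rng = np.random.default_rng(seed)
    x = x0
    draws = np.empty(n_iter)
    for t in range(n_iter):
        j = propose(rng, x)
        # Hastings ratio a_ij = min(pi_j q_ji / (pi_i q_ij), 1), on the log scale.
        log_a = (log_target(j) + log_q(x, j)) - (log_target(x) + log_q(j, x))
        if np.log(rng.uniform()) < log_a:  # acceptance/rejection step
            x = j                          # accept: move to j
        draws[t] = x                       # on rejection, stay at the current state
    return draws

# Random-walk proposal (symmetric, so the q terms cancel) targeting N(3, 1):
draws = metropolis_hastings(
    log_target=lambda x: -0.5 * (x - 3.0) ** 2,
    propose=lambda rng, x: x + rng.normal(0.0, 1.0),
    log_q=lambda y, x: 0.0,                # symmetric proposal: constant log q
    x0=0.0, n_iter=20000)
print(draws[2000:].mean(), draws[2000:].std())  # near 3 and 1
```

Note that rejected proposals still contribute a copy of the current state to the output; dropping them would bias the sample.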

Notes:

1) Gibbs sampling is a special case of M-H: for each step, (pi_j q_ji) / (pi_i q_ij) = 1, which implies the relationship also holds for a complete scan through all the variables.

2) The Metropolis algorithm (Metropolis et al., 1953) was based on a symmetric proposal distribution (q_ij = q_ji), giving

a_ij = min( pi_j / pi_i, 1 )

So a higher probability state will always be accepted.

3) As with many other sampling procedures, pi and q only need to be known up to normalizing constants, as these cancel when calculating the Hastings ratio.

4) Periodicity usually isn't a problem. For many proposals, q_ii > 0 for all i. Also, if a_ij < 1 then P[X_{t+1} = i | X_t = i] > 0, thus some states have period 1, which implies the chain is aperiodic.

5) q_ij a_ij gives the 1-step transition probabilities of the chain (e.g. its p(x | y) in the earlier notation).

6) Detailed balance is easy. Without loss of generality, assume that (pi_j q_ji) / (pi_i q_ij) < 1 (which implies a_ij < 1 and a_ji = 1). Then

pi_i q_ij a_ij = pi_i q_ij (pi_j q_ji) / (pi_i q_ij) = pi_j q_ji = pi_j q_ji a_ji

7) The big problem is irreducibility. However, setting the proposal to correspond to an irreducible chain solves this.

Proposal distribution ideas:

1) Approximate the distribution. For example, use a normal with similar mean and variance, or a t with a moderate number of degrees of freedom.

2) Random walk: q(y | x) = q(y - x). For a continuous state process, you could use y = x + eps, eps ~ q. For a discrete process, you could use

q(j | i) = 0.4 if j = i - 1, 0.2 if j = i, 0.4 if j = i + 1.

3) Autoregressive chain: y = a + B(x - a) + z, z ~ q. For the random walk and autoregressive chains, q does not need to correspond to a symmetric distribution (though that is common).

4) Independence sampler: q(y | x) = q(y). For an independence sampler you want q to be similar to pi:

a_ij = min( (pi_j q_i) / (pi_i q_j), 1 )

If they are too different, q_j / pi_j could get very small for some state j, making it difficult to move away from that state. (The chain mixes slowly.)

5) Block at a time. Deal with variables in blocks, like the Gibbs sampler. Sometimes referred to as Metropolis within Gibbs. Allows complex problems to be broken down into simpler ones. Any M-H style update can be used within each block (e.g. random walk for one block, independence sampler for the next, Gibbs for the one after that). Allows for a Gibbs style sampler, but without the worry about conjugate distributions in the model to make sampling easier.

Pump example:

s_i | lambda_i ~ Poisson(lambda_i t_i)
lambda_i | mu, sigma^2 ~ LogN(mu, sigma^2)
mu ~ N(nu, tau^2)
sigma^2 ~ IGamma(gamma, delta)

Can perform Gibbs on mu and sigma^2, but not on the lambda_i, due to the non-conjugacy of the Poisson and log-normal distributions.

Step i, i = 1, ..., 10 (M-H): Sample lambda_i from [lambda_i | s_i, mu, sigma^2] with proposal

lambda_i* ~ LogN(log lambda_i, theta^2) (multiplicative random walk)

HR = [ (lambda_i* t_i)^{s_i} e^{-lambda_i* t_i} (1/(lambda_i* sigma)) phi((log lambda_i* - mu)/sigma) (1/(lambda_i theta)) phi((log lambda_i - log lambda_i*)/theta) ] / [ (lambda_i t_i)^{s_i} e^{-lambda_i t_i} (1/(lambda_i sigma)) phi((log lambda_i - mu)/sigma) (1/(lambda_i* theta)) phi((log lambda_i* - log lambda_i)/theta) ]

a_ij = min(HR, 1)
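On the log scale the HR collapses nicely: the 1/lambda factors from the log-normal prior and the log-normal proposal cancel, leaving only likelihood and Gaussian terms. A sketch of one such update (my code, with illustrative values of s_i, t_i, mu, sigma, and theta, not the values from the notes' run):

```python
import numpy as np

def lam_mh_step(lam, s_i, t_i, mu, sigma, theta, rng):
    """One Metropolis-within-Gibbs update of lambda_i under the
    Poisson(lambda_i t_i) likelihood and LogN(mu, sigma^2) prior, with the
    multiplicative random walk proposal lambda* ~ LogN(log lambda, theta^2)."""
    lam_star = lam * np.exp(theta * rng.normal())
    # Log Hastings ratio: the 1/lambda factors of the log-normal prior and
    # the log-normal proposal cancel, leaving likelihood and Gaussian terms.
    log_hr = (s_i * np.log(lam_star / lam)
              - t_i * (lam_star - lam)
              - ((np.log(lam_star) - mu) ** 2
                 - (np.log(lam) - mu) ** 2) / (2.0 * sigma ** 2))
    accept = np.log(rng.uniform()) < log_hr
    return (lam_star, True) if accept else (lam, False)

# Illustrative run for a single pump (made-up s_i, t_i and fixed mu, sigma):
rng = np.random.default_rng(2)
lam, n_acc = 1.0, 0
draws = np.empty(20000)
for i in range(draws.size):
    lam, accepted = lam_mh_step(lam, s_i=4.0, t_i=2.0, mu=0.0, sigma=1.0,
                                theta=0.5, rng=rng)
    n_acc += accepted
    draws[i] = lam
print(draws[2000:].mean(), n_acc / draws.size)  # posterior mean, acceptance rate
```

In the full sampler this step would run once per pump inside each scan, with mu and sigma^2 then refreshed by their Gibbs steps.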

Step 11 (Gibbs): Sample mu from [mu | lambda, sigma^2, nu, tau^2] ~ N(mean, var), where

mean = var × ( (1/sigma^2) sum_i log lambda_i + nu/tau^2 )
var = ( n/sigma^2 + 1/tau^2 )^{-1}

Step 12 (Gibbs): Sample sigma^2 from [sigma^2 | lambda, mu, gamma, delta] ~ IGamma( gamma + 5, delta + (1/2) sum_i (log lambda_i - mu)^2 )

Parameters for run:

Burn-in: 1000. Imputations: 100,000.
nu = …, tau = 100, gamma = 1, delta = …, theta^2 = 0.01

Starting values:

lambda_i^0 = lambda-hat_i (the observed rates)
mu^0 = (1/10) sum_i log lambda-hat_i
(sigma^2)^0 = sum_i (log lambda-hat_i - mu^0)^2 / 9

Other options:

1) Combine steps 1-10 into a single draw. With this option, all the lambda_i change or none do. In the sampler used, whether each lambda_i changes is independent of the others. The option used is probably preferable, as it should lead to better mixing of the chain.

2) Combine sampling lambda, mu, and sigma^2 into a single M-H step. Probably suboptimal, as the proposal distribution won't be a great match for the joint posterior distribution of lambda, mu, and sigma^2.

Rejection rates

Having some rejection can be good. With the multiplicative random walk sampler used, if theta^2 is too small, there will be very few rejections, but the sampler will move too slowly through the space. Increasing theta^2 will lead to better mixing, as bigger jumps can be made, though it will lead to higher rejection rates. You need to find a balance between rejection rates, mixing of the chain, and coverage of the state space. For some problems a rejection rate of 50% is fine, and I've seen reports that for large problems using normal random walk proposals, rejection rates around 75% are optimal.
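The tradeoff is easy to demonstrate. A sketch (mine) running a normal random-walk M-H sampler on a standard normal target at three step sizes and recording the rejection rate of each:

```python
import numpy as np

def rw_mh_rejection_rate(step_sd, n_iter=20000, seed=3):
    """Normal random-walk Metropolis targeting N(0, 1); returns the fraction
    of proposals rejected for the given proposal standard deviation."""
    rng = np.random.default_rng(seed)
    x, n_rej = 0.0, 0
    for _ in range(n_iter):
        y = x + rng.normal(0.0, step_sd)
        # Symmetric proposal, so the ratio is pi(y)/pi(x) = exp((x^2 - y^2)/2).
        if np.log(rng.uniform()) < 0.5 * (x ** 2 - y ** 2):
            x = y
        else:
            n_rej += 1
    return n_rej / n_iter

for sd in (0.1, 1.0, 10.0):
    print(sd, rw_mh_rejection_rate(sd))  # rejection rate grows with step size
```

The tiny step is almost never rejected but crawls through the space; the huge step proposes far into the tails and is usually rejected. A moderate step trades some rejection for much better mixing.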

Rejection rates for failure rate proposals under different random walk variances:

[Table: rejection rate for each of the 10 pumps under each value of theta^2.]

[Figure: trace plots of one pump's lambda against iteration (0 to 100,000) for four random walk variances, including theta^2 = 0.01 and theta^2 = 0.04.]

Standard errors in MCMC

As discussed before, the correlation of the chain must be taken into account when determining standard errors of quantities estimated by the sampler. Suppose we use x-bar to estimate E[X] and that the burn-in period was long enough to get into the stationary distribution. Then

Var(x-bar) = (sigma^2 / n^2) [ n + 2 sum_{j=1}^{n-1} (n - j) rho_j ]

For a reasonable chain, the autocorrelations will die off, so let's assume that they are negligible for j > K. Then the above reduces to

Var(x-bar) = (sigma^2 / n^2) [ n + 2 sum_{j=1}^{K} (n - j) rho_j ]

If the autocorrelations die off fairly quickly, sigma^2 and the rho_j can be estimated consistently (though with some bias) by the usual empirical moments.

Another approach is blocking. Assume that n = Jm for integers J and m. Then let

x-bar_j = (1/m) sum_{i=(j-1)m+1}^{jm} x_i,  j = 1, ..., J

Note that the average of the x-bar_j is x-bar. If m is large relative to K, then the correlations between the x-bar_j should be negligible and the variance can be estimated as if the x-bar_j were independent.

If the correlation is slightly larger, it might be reasonable to assume that the correlation between x-bar_j and x-bar_{j+1} is some value rho to be determined, but that correlations at larger lags are negligible. In this case

Var(x-bar) ≈ (1 + 2 rho) Var(x-bar_j) / J
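The blocking estimate is a few lines of code. A sketch (mine) applying it to an AR(1) series standing in for correlated MCMC output, with the naive SE that pretends the draws are independent shown for comparison:

```python
import numpy as np

def batch_means_se(x, m):
    """Standard error of the mean of a correlated sequence via blocking:
    split x (n = J*m values) into J blocks of length m; if m is large
    relative to the correlation length K, the block means are roughly
    independent, and Var(x-bar) is estimated by Var(block means) / J."""
    J = x.size // m
    block_means = x[:J * m].reshape(J, m).mean(axis=1)
    return np.sqrt(block_means.var(ddof=1) / J)

# An AR(1) series x_t = phi * x_{t-1} + e_t mimics positively correlated
# MCMC output (phi near 1 means slow mixing).
rng = np.random.default_rng(4)
phi, n = 0.9, 100_000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

naive_se = x.std(ddof=1) / np.sqrt(n)       # pretends the draws are independent
print(batch_means_se(x, m=1000), naive_se)  # blocked SE is several times larger
```

For this chain the true SE inflation factor is roughly sqrt((1 + phi)/(1 - phi)), about 4.4, so treating the draws as independent would understate the standard error badly.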

Estimates with m = 100:

[Table: x-bar, SE, and rho-hat for each pump's lambda_i, for mu, and for sigma^2.]

Estimates with m = 1000:

[Table: x-bar, SE, and rho-hat for each pump's lambda_i, for mu, and for sigma^2.]

Standard error estimates for the pump example:

[Table: SE estimates for each parameter under m = 1000, m = 100, and assuming independence; parameters include the lambda_i, mu, and sigma^2.]


More information

STATS 306B: Unsupervised Learning Spring Lecture 10 April 30

STATS 306B: Unsupervised Learning Spring Lecture 10 April 30 STATS 306B: Unsupervsed Learnng Sprng 2014 Lecture 10 Aprl 30 Lecturer: Lester Mackey Scrbe: Joey Arthur, Rakesh Achanta 10.1 Factor Analyss 10.1.1 Recap Recall the factor analyss (FA) model for lnear

More information

Week3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity

Week3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity Week3, Chapter 4 Moton n Two Dmensons Lecture Quz A partcle confned to moton along the x axs moves wth constant acceleraton from x =.0 m to x = 8.0 m durng a 1-s tme nterval. The velocty of the partcle

More information

BAYESIAN CURVE FITTING USING PIECEWISE POLYNOMIALS. Dariusz Biskup

BAYESIAN CURVE FITTING USING PIECEWISE POLYNOMIALS. Dariusz Biskup BAYESIAN CURVE FITTING USING PIECEWISE POLYNOMIALS Darusz Bskup 1. Introducton The paper presents a nonparaetrc procedure for estaton of an unknown functon f n the regresson odel y = f x + ε = N. (1) (

More information

DS-GA 1002 Lecture notes 5 Fall Random processes

DS-GA 1002 Lecture notes 5 Fall Random processes DS-GA Lecture notes 5 Fall 6 Introducton Random processes Random processes, also known as stochastc processes, allow us to model quanttes that evolve n tme (or space n an uncertan way: the trajectory of

More information

STATISTICS QUESTIONS. Step by Step Solutions.

STATISTICS QUESTIONS. Step by Step Solutions. STATISTICS QUESTIONS Step by Step Solutons www.mathcracker.com 9//016 Problem 1: A researcher s nterested n the effects of famly sze on delnquency for a group of offenders and examnes famles wth one to

More information

Topic 23 - Randomized Complete Block Designs (RCBD)

Topic 23 - Randomized Complete Block Designs (RCBD) Topc 3 ANOVA (III) 3-1 Topc 3 - Randomzed Complete Block Desgns (RCBD) Defn: A Randomzed Complete Block Desgn s a varant of the completely randomzed desgn (CRD) that we recently learned. In ths desgn,

More information

Tracking with Kalman Filter

Tracking with Kalman Filter Trackng wth Kalman Flter Scott T. Acton Vrgna Image and Vdeo Analyss (VIVA), Charles L. Brown Department of Electrcal and Computer Engneerng Department of Bomedcal Engneerng Unversty of Vrgna, Charlottesvlle,

More information

Negative Binomial Regression

Negative Binomial Regression STATGRAPHICS Rev. 9/16/2013 Negatve Bnomal Regresson Summary... 1 Data Input... 3 Statstcal Model... 3 Analyss Summary... 4 Analyss Optons... 7 Plot of Ftted Model... 8 Observed Versus Predcted... 10 Predctons...

More information

Estimation: Part 2. Chapter GREG estimation

Estimation: Part 2. Chapter GREG estimation Chapter 9 Estmaton: Part 2 9. GREG estmaton In Chapter 8, we have seen that the regresson estmator s an effcent estmator when there s a lnear relatonshp between y and x. In ths chapter, we generalzed the

More information

Analysis of Discrete Time Queues (Section 4.6)

Analysis of Discrete Time Queues (Section 4.6) Analyss of Dscrete Tme Queues (Secton 4.6) Copyrght 2002, Sanjay K. Bose Tme axs dvded nto slots slot slot boundares Arrvals can only occur at slot boundares Servce to a job can only start at a slot boundary

More information

Problem Set 9 - Solutions Due: April 27, 2005

Problem Set 9 - Solutions Due: April 27, 2005 Problem Set - Solutons Due: Aprl 27, 2005. (a) Frst note that spam messages, nvtatons and other e-mal are all ndependent Posson processes, at rates pλ, qλ, and ( p q)λ. The event of the tme T at whch you

More information

Lecture 3 Stat102, Spring 2007

Lecture 3 Stat102, Spring 2007 Lecture 3 Stat0, Sprng 007 Chapter 3. 3.: Introducton to regresson analyss Lnear regresson as a descrptve technque The least-squares equatons Chapter 3.3 Samplng dstrbuton of b 0, b. Contnued n net lecture

More information

Markov chains. Definition of a CTMC: [2, page 381] is a continuous time, discrete value random process such that for an infinitesimal

Markov chains. Definition of a CTMC: [2, page 381] is a continuous time, discrete value random process such that for an infinitesimal Markov chans M. Veeraraghavan; March 17, 2004 [Tp: Study the MC, QT, and Lttle s law lectures together: CTMC (MC lecture), M/M/1 queue (QT lecture), Lttle s law lecture (when dervng the mean response tme

More information

Why Monte Carlo Integration? Introduction to Monte Carlo Method. Continuous Probability. Continuous Probability

Why Monte Carlo Integration? Introduction to Monte Carlo Method. Continuous Probability. Continuous Probability Introducton to Monte Carlo Method Kad Bouatouch IRISA Emal: kad@rsa.fr Wh Monte Carlo Integraton? To generate realstc lookng mages, we need to solve ntegrals of or hgher dmenson Pel flterng and lens smulaton

More information

Artificial Intelligence Bayesian Networks

Artificial Intelligence Bayesian Networks Artfcal Intellgence Bayesan Networks Adapted from sldes by Tm Fnn and Mare desjardns. Some materal borrowed from Lse Getoor. 1 Outlne Bayesan networks Network structure Condtonal probablty tables Condtonal

More information

A Bayesian methodology for systemic risk assessment in financial networks

A Bayesian methodology for systemic risk assessment in financial networks A Bayesan methodology for systemc rsk assessment n fnancal networks Lutgard A. M. Veraart London School of Economcs and Poltcal Scence September 2015 Jont work wth Axel Gandy (Imperal College London) 7th

More information

Economics 130. Lecture 4 Simple Linear Regression Continued

Economics 130. Lecture 4 Simple Linear Regression Continued Economcs 130 Lecture 4 Contnued Readngs for Week 4 Text, Chapter and 3. We contnue wth addressng our second ssue + add n how we evaluate these relatonshps: Where do we get data to do ths analyss? How do

More information

Appendix B: Resampling Algorithms

Appendix B: Resampling Algorithms 407 Appendx B: Resamplng Algorthms A common problem of all partcle flters s the degeneracy of weghts, whch conssts of the unbounded ncrease of the varance of the mportance weghts ω [ ] of the partcles

More information

Why Bayesian? 3. Bayes and Normal Models. State of nature: class. Decision rule. Rev. Thomas Bayes ( ) Bayes Theorem (yes, the famous one)

Why Bayesian? 3. Bayes and Normal Models. State of nature: class. Decision rule. Rev. Thomas Bayes ( ) Bayes Theorem (yes, the famous one) Why Bayesan? 3. Bayes and Normal Models Alex M. Martnez alex@ece.osu.edu Handouts Handoutsfor forece ECE874 874Sp Sp007 If all our research (n PR was to dsappear and you could only save one theory, whch

More information

Lecture 4: November 17, Part 1 Single Buffer Management

Lecture 4: November 17, Part 1 Single Buffer Management Lecturer: Ad Rosén Algorthms for the anagement of Networs Fall 2003-2004 Lecture 4: November 7, 2003 Scrbe: Guy Grebla Part Sngle Buffer anagement In the prevous lecture we taled about the Combned Input

More information

Lecture 4: Universal Hash Functions/Streaming Cont d

Lecture 4: Universal Hash Functions/Streaming Cont d CSE 5: Desgn and Analyss of Algorthms I Sprng 06 Lecture 4: Unversal Hash Functons/Streamng Cont d Lecturer: Shayan Oves Gharan Aprl 6th Scrbe: Jacob Schreber Dsclamer: These notes have not been subjected

More information

= z 20 z n. (k 20) + 4 z k = 4

= z 20 z n. (k 20) + 4 z k = 4 Problem Set #7 solutons 7.2.. (a Fnd the coeffcent of z k n (z + z 5 + z 6 + z 7 + 5, k 20. We use the known seres expanson ( n+l ( z l l z n below: (z + z 5 + z 6 + z 7 + 5 (z 5 ( + z + z 2 + z + 5 5

More information

Module 9. Lecture 6. Duality in Assignment Problems

Module 9. Lecture 6. Duality in Assignment Problems Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept

More information

Durban Watson for Testing the Lack-of-Fit of Polynomial Regression Models without Replications

Durban Watson for Testing the Lack-of-Fit of Polynomial Regression Models without Replications Durban Watson for Testng the Lack-of-Ft of Polynomal Regresson Models wthout Replcatons Ruba A. Alyaf, Maha A. Omar, Abdullah A. Al-Shha ralyaf@ksu.edu.sa, maomar@ksu.edu.sa, aalshha@ksu.edu.sa Department

More information

The EM Algorithm (Dempster, Laird, Rubin 1977) The missing data or incomplete data setting: ODL(φ;Y ) = [Y;φ] = [Y X,φ][X φ] = X

The EM Algorithm (Dempster, Laird, Rubin 1977) The missing data or incomplete data setting: ODL(φ;Y ) = [Y;φ] = [Y X,φ][X φ] = X The EM Algorthm (Dempster, Lard, Rubn 1977 The mssng data or ncomplete data settng: An Observed Data Lkelhood (ODL that s a mxture or ntegral of Complete Data Lkelhoods (CDL. (1a ODL(;Y = [Y;] = [Y,][

More information

Homework Assignment 3 Due in class, Thursday October 15

Homework Assignment 3 Due in class, Thursday October 15 Homework Assgnment 3 Due n class, Thursday October 15 SDS 383C Statstcal Modelng I 1 Rdge regresson and Lasso 1. Get the Prostrate cancer data from http://statweb.stanford.edu/~tbs/elemstatlearn/ datasets/prostate.data.

More information

EM and Structure Learning

EM and Structure Learning EM and Structure Learnng Le Song Machne Learnng II: Advanced Topcs CSE 8803ML, Sprng 2012 Partally observed graphcal models Mxture Models N(μ 1, Σ 1 ) Z X N N(μ 2, Σ 2 ) 2 Gaussan mxture model Consder

More information

Continuous Time Markov Chains

Continuous Time Markov Chains Contnuous Tme Markov Chans Brth and Death Processes,Transton Probablty Functon, Kolmogorov Equatons, Lmtng Probabltes, Unformzaton Chapter 6 1 Markovan Processes State Space Parameter Space (Tme) Dscrete

More information

The Gaussian classifier. Nuno Vasconcelos ECE Department, UCSD

The Gaussian classifier. Nuno Vasconcelos ECE Department, UCSD he Gaussan classfer Nuno Vasconcelos ECE Department, UCSD Bayesan decson theory recall that we have state of the world X observatons g decson functon L[g,y] loss of predctng y wth g Bayes decson rule s

More information

Hierarchical Bayes. Peter Lenk. Stephen M Ross School of Business at the University of Michigan September 2004

Hierarchical Bayes. Peter Lenk. Stephen M Ross School of Business at the University of Michigan September 2004 Herarchcal Bayes Peter Lenk Stephen M Ross School of Busness at the Unversty of Mchgan September 2004 Outlne Bayesan Decson Theory Smple Bayes and Shrnkage Estmates Herarchcal Bayes Numercal Methods Battng

More information

Hidden Markov Models

Hidden Markov Models Hdden Markov Models Namrata Vaswan, Iowa State Unversty Aprl 24, 204 Hdden Markov Model Defntons and Examples Defntons:. A hdden Markov model (HMM) refers to a set of hdden states X 0, X,..., X t,...,

More information

Sampling Theory MODULE VII LECTURE - 23 VARYING PROBABILITY SAMPLING

Sampling Theory MODULE VII LECTURE - 23 VARYING PROBABILITY SAMPLING Samplng heory MODULE VII LECURE - 3 VARYIG PROBABILIY SAMPLIG DR. SHALABH DEPARME OF MAHEMAICS AD SAISICS IDIA ISIUE OF ECHOLOGY KAPUR he smple random samplng scheme provdes a random sample where every

More information

Notes prepared by Prof Mrs) M.J. Gholba Class M.Sc Part(I) Information Technology

Notes prepared by Prof Mrs) M.J. Gholba Class M.Sc Part(I) Information Technology Inverse transformatons Generaton of random observatons from gven dstrbutons Assume that random numbers,,, are readly avalable, where each tself s a random varable whch s unformly dstrbuted over the range(,).

More information

Outline. Communication. Bellman Ford Algorithm. Bellman Ford Example. Bellman Ford Shortest Path [1]

Outline. Communication. Bellman Ford Algorithm. Bellman Ford Example. Bellman Ford Shortest Path [1] DYNAMIC SHORTEST PATH SEARCH AND SYNCHRONIZED TASK SWITCHING Jay Wagenpfel, Adran Trachte 2 Outlne Shortest Communcaton Path Searchng Bellmann Ford algorthm Algorthm for dynamc case Modfcatons to our algorthm

More information

Copyright 2017 by Taylor Enterprises, Inc., All Rights Reserved. Adjusted Control Limits for P Charts. Dr. Wayne A. Taylor

Copyright 2017 by Taylor Enterprises, Inc., All Rights Reserved. Adjusted Control Limits for P Charts. Dr. Wayne A. Taylor Taylor Enterprses, Inc. Control Lmts for P Charts Copyrght 2017 by Taylor Enterprses, Inc., All Rghts Reserved. Control Lmts for P Charts Dr. Wayne A. Taylor Abstract: P charts are used for count data

More information

Multiple Choice. Choose the one that best completes the statement or answers the question.

Multiple Choice. Choose the one that best completes the statement or answers the question. ECON 56 Homework Multple Choce Choose the one that best completes the statement or answers the queston ) The probablty of an event A or B (Pr(A or B)) to occur equals a Pr(A) Pr(B) b Pr(A) + Pr(B) f A

More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

Assortment Optimization under MNL

Assortment Optimization under MNL Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

THE ROYAL STATISTICAL SOCIETY 2006 EXAMINATIONS SOLUTIONS HIGHER CERTIFICATE

THE ROYAL STATISTICAL SOCIETY 2006 EXAMINATIONS SOLUTIONS HIGHER CERTIFICATE THE ROYAL STATISTICAL SOCIETY 6 EXAMINATIONS SOLUTIONS HIGHER CERTIFICATE PAPER I STATISTICAL THEORY The Socety provdes these solutons to assst canddates preparng for the eamnatons n future years and for

More information

Lecture 17 : Stochastic Processes II

Lecture 17 : Stochastic Processes II : Stochastc Processes II 1 Contnuous-tme stochastc process So far we have studed dscrete-tme stochastc processes. We studed the concept of Makov chans and martngales, tme seres analyss, and regresson analyss

More information

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons

More information

Physics 5153 Classical Mechanics. Principle of Virtual Work-1

Physics 5153 Classical Mechanics. Principle of Virtual Work-1 P. Guterrez 1 Introducton Physcs 5153 Classcal Mechancs Prncple of Vrtual Work The frst varatonal prncple we encounter n mechancs s the prncple of vrtual work. It establshes the equlbrum condton of a mechancal

More information

Chapter 20 Duration Analysis

Chapter 20 Duration Analysis Chapter 20 Duraton Analyss Duraton: tme elapsed untl a certan event occurs (weeks unemployed, months spent on welfare). Survval analyss: duraton of nterest s survval tme of a subject, begn n an ntal state

More information

Notes on Frequency Estimation in Data Streams

Notes on Frequency Estimation in Data Streams Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to

More information

Checking Pairwise Relationships. Lecture 19 Biostatistics 666

Checking Pairwise Relationships. Lecture 19 Biostatistics 666 Checkng Parwse Relatonshps Lecture 19 Bostatstcs 666 Last Lecture: Markov Model for Multpont Analyss X X X 1 3 X M P X 1 I P X I P X 3 I P X M I 1 3 M I 1 I I 3 I M P I I P I 3 I P... 1 IBD states along

More information

Copyright 2017 by Taylor Enterprises, Inc., All Rights Reserved. Adjusted Control Limits for U Charts. Dr. Wayne A. Taylor

Copyright 2017 by Taylor Enterprises, Inc., All Rights Reserved. Adjusted Control Limits for U Charts. Dr. Wayne A. Taylor Taylor Enterprses, Inc. Adjusted Control Lmts for U Charts Copyrght 207 by Taylor Enterprses, Inc., All Rghts Reserved. Adjusted Control Lmts for U Charts Dr. Wayne A. Taylor Abstract: U charts are used

More information

Problem Set 9 Solutions

Problem Set 9 Solutions Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem

More information

Information Geometry of Gibbs Sampler

Information Geometry of Gibbs Sampler Informaton Geometry of Gbbs Sampler Kazuya Takabatake Neuroscence Research Insttute AIST Central 2, Umezono 1-1-1, Tsukuba JAPAN 305-8568 k.takabatake@ast.go.jp Abstract: - Ths paper shows some nformaton

More information