Supplement: Bayesian Nonparametric Estimation for Dynamic Treatment Regimes with Sequential Transition Times


A: Details of MCMC for Fitting DDP-GP Model

Summary of Model

p(y_i^k | x_i^k, F^k) = F^k(y_i^k | x_i^k),   F^k ~ DDP-GP({µ_h^k}, C^k; α^k, {β_h^k}, σ^k),   (1)

F^k(y | x^k) = Σ_{h=1}^∞ w_h^k N(y; θ_h^k(x^k), σ^k),   (2)

with {θ_h^k(x^k)} ~ GP(µ_h^k(x^k), C^k(x^k)) for h = 1, 2, ..., µ_h^k(x^k) = x^k β_h^k, and k = 1, ..., n_trans. We complete the model construction by assuming β_h^k ~ N(β_0^k, Σ_0^k), (σ^k)^{-2} ~ i.i.d. Ga(λ_1, λ_2), and α^k ~ i.i.d. Ga(λ_3, λ_4).

Posterior Computation

To evaluate the posterior in a DDP-GP model, we first marginalize (1) analytically with respect to the random probability measures F^k(· | x^k). The form is not obvious from the earlier definition. Temporarily suppress the superscripted transition index k. Consider generating a sample (Y_1, x_1), ..., (Y_n, x_n) by first sampling from a covariate distribution, p(x), and then from a conditional transition time distribution, F(· | x). We rewrite (2) as a hierarchical model with a new latent indicator variable γ_i for the normal mixture summand index h,

(Y_i | γ_i = h, x_i) ~ N(θ_h(x_i), σ^2)  and  p(γ_i = h) = w_h,   (3)

for i = 1, ..., n.
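For concreteness, the following minimal Python/NumPy sketch simulates data from a truncated version of (2)-(3). It is illustrative only (not code from the paper) and assumes a truncation at H components, a linear mean function µ_h(x) = xβ_h, and a squared-exponential GP covariance; all names and settings are hypothetical choices.

import numpy as np

rng = np.random.default_rng(0)

def sq_exp_cov(X, length=1.0, scale=1.0, jitter=1e-6):
    """Squared-exponential covariance matrix over the observed covariates (an assumed choice)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return scale * np.exp(-0.5 * d2 / length**2) + jitter * np.eye(len(X))

n, p, H = 200, 2, 10                  # patients, covariate dimension, truncation level
alpha, sigma = 1.0, 0.3               # total mass and kernel standard deviation

X = rng.normal(size=(n, p))                                   # covariates x_i
v = rng.beta(1.0, alpha, size=H)                              # stick-breaking fractions v_h
w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))     # w_h = v_h * prod_{j<h} (1 - v_j)
w /= w.sum()                                                  # renormalize after truncation

C = sq_exp_cov(X)                                             # GP covariance C(x_i, x_j)
beta = rng.normal(size=(H, p))                                # component-specific coefficients beta_h
theta = np.stack([rng.multivariate_normal(X @ beta[h], C) for h in range(H)])  # theta_h(x_i)

gamma = rng.choice(H, size=n, p=w)                            # latent indicators gamma_i
y = theta[gamma, np.arange(n)] + sigma * rng.normal(size=n)   # y_i | gamma_i = h ~ N(theta_h(x_i), sigma^2)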

Let θ̃_i(·) = θ_{γ_i}(·) denote a realization of the stochastic process selected by γ_i. Next, we re-index the θ_h(·) such that Σ_{i=1}^n I(γ_i = h) ≥ 1 for h = 1, ..., H. That is, we let h = 1, ..., H index the realizations θ_h(·) that are selected by at least one of the γ_i, so that {θ_1, ..., θ_H} are the unique values of the n realizations {θ̃_i, i = 1, ..., n}. If clusters of patients are defined as S_h = {i : θ̃_i = θ_h}, then the γ_i are interpreted as cluster membership indicators. Posterior simulation makes use of these indicators and the vectors θ_h = (θ_h(x_1), ..., θ_h(x_n)). After marginalization with respect to F(· | x), we are left with the marginal model for {γ_i, θ_h(x_i); i = 1, ..., n, h = 1, ..., H}.

For each transition k, we update parameters using a finite DP algorithm as follows. Denote n_h^k = #{i : γ_i^k = h}.

Update σ^k:

(σ^k)^2 | · ~ Inverse-Gamma( λ_1 + n^k/2,  λ_2 + (1/2) Σ_{h=1}^H Σ_{i: γ_i^k = h} (y_i^k − θ_h^k(x_i))^2 ).   (4)

Update θ_h^k:

p(θ_h^k | ·) ∝ p(θ_h^k) ∏_{i: γ_i^k = h} p(y_i^k | θ_h^k(x_i^k))
         ∝ exp{ −(1/2)(θ_h^k − X^k β_h^k)' (C^k)^{−1} (θ_h^k − X^k β_h^k) } exp{ −Σ_{i: γ_i^k = h} (y_i^k − θ_h^k(x_i))^2 / (2(σ^k)^2) }
         = N( ( (C^k)^{−1} + U'U/(σ^k)^2 )^{−1} ( U' y_h^k/(σ^k)^2 + (C^k)^{−1} X^k β_h^k ),  ( (C^k)^{−1} + U'U/(σ^k)^2 )^{−1} ),

where y_h^k = {y_i^k : γ_i^k = h}, X^k is the n^k × M^k matrix with the covariates x_i^k of the i-th patient in row i, and U is an n_h^k × n^k matrix with U_{ji} = 1 if patient i is the j-th element of {i : γ_i^k = h} and all other elements 0, so that U'U is the n^k × n^k diagonal matrix indicating membership in cluster h.

Update β_h^k:

p(β_h^k | ·) ∝ p(β_h^k) exp{ −(1/2)(θ_h^k − X^k β_h^k)' (C^k)^{−1} (θ_h^k − X^k β_h^k) }
         = N( Σ_h^k [ (X^k)'(C^k)^{−1} θ_h^k + (Σ_0^k)^{−1} β_0^k ],  Σ_h^k ),

where Σ_h^k = ( (X^k)'(C^k)^{−1} X^k + (Σ_0^k)^{−1} )^{−1}.
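The three conditional draws above are all conjugate. The sketch below (illustrative Python/NumPy, not the paper's code) spells them out for a single transition, dropping the superscript k and treating the GP covariance C as fixed; variable names follow the notation above.

import numpy as np

def update_sigma2(y, theta, gamma, lam1, lam2, rng):
    """(sigma^2 | .) ~ Inverse-Gamma(lam1 + n/2, lam2 + sum of squared residuals / 2), as in (4)."""
    resid = y - theta[gamma, np.arange(len(y))]          # y_i - theta_{gamma_i}(x_i)
    shape = lam1 + len(y) / 2.0
    rate = lam2 + 0.5 * (resid ** 2).sum()
    return 1.0 / rng.gamma(shape, 1.0 / rate)            # inverse-gamma via reciprocal of a gamma draw

def update_theta_h(y, X, C_inv, gamma, h, beta_h, sigma2, rng):
    """theta_h | . : multivariate normal combining the GP prior and the cluster-h likelihood."""
    member = (gamma == h).astype(float)                  # diagonal of U'U: 1 for patients in cluster h
    prec = C_inv + np.diag(member) / sigma2              # posterior precision
    cov = np.linalg.inv(prec)
    mean = cov @ (member * y / sigma2 + C_inv @ (X @ beta_h))
    return rng.multivariate_normal(mean, cov)

def update_beta_h(theta_h, X, C_inv, beta0, Sigma0_inv, rng):
    """beta_h | . ~ N(Sigma_h [X' C^{-1} theta_h + Sigma0^{-1} beta0], Sigma_h)."""
    Sigma_h = np.linalg.inv(X.T @ C_inv @ X + Sigma0_inv)
    mean = Sigma_h @ (X.T @ C_inv @ theta_h + Sigma0_inv @ beta0)
    return rng.multivariate_normal(mean, Sigma_h)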

Update w_h^k:

v_h^k ~ Beta( 1 + n_h^k,  α^k + Σ_{j>h} n_j^k ),

where n_h^k = Σ_{i=1}^{n^k} I(γ_i^k = h) is the number of observations such that γ_i^k = h. Then w_h^k = v_h^k ∏_{j<h} (1 − v_j^k).

Update γ_i^k:

If y_i^k is not censored,

Pr(γ_i^k = h | ·) ∝ w_h^k ∫ p(y_i^k | θ_h^k(x_i)) p(θ_h^k(x_i^k) | θ_h^k(x_{−i}^k)) dθ_h^k(x_i^k),

where x_{−i}^k = {x_j^k : γ_j^k = h, j ≠ i}. If y_i^k is censored, let

p_h^k(t) = ∫ p(t | θ_h^k(x_i)) p(θ_h^k(x_i^k) | θ_h^k(x_{−i}^k)) dθ_h^k(x_i^k).

Then

Pr(γ_i^k = h | ·) ∝ w_h^k ∫_{V_i^k}^∞ p_h^k(t) dt,

where V_i^k is the observed time for patient i in transition k.

Update α^k:

Using data augmentation, we first sample an m from the beta distribution Beta(α^k + 1, n^k). Then we sample the new α^k value from the mixture

α^k ~ π Ga(λ_3 + H, λ_4 − log(m)) + (1 − π) Ga(λ_3 + H − 1, λ_4 − log(m)),

where π / (1 − π) = (λ_3 + H − 1) / { n^k (λ_4 − log(m)) }.
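Below is a sketch of the weight update and the data-augmentation update for α^k, again in illustrative Python/NumPy for a single transition (superscript k dropped); the α update follows the standard Escobar-West construction that the formulas above describe, and the code is a sketch rather than the implementation used in the paper.

import numpy as np

def update_weights(gamma, H, alpha, rng):
    """v_h ~ Beta(1 + n_h, alpha + sum_{j>h} n_j); w_h = v_h * prod_{j<h} (1 - v_j)."""
    n_h = np.bincount(gamma, minlength=H).astype(float)
    tail = np.concatenate((np.cumsum(n_h[::-1])[::-1][1:], [0.0]))   # sum_{j>h} n_j
    v = rng.beta(1.0 + n_h, alpha + tail)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return w / w.sum()                                               # renormalize the truncated weights

def update_alpha(alpha, n, H, lam3, lam4, rng):
    """Data-augmentation update for the total mass parameter alpha."""
    m = rng.beta(alpha + 1.0, n)                                     # auxiliary variable m
    odds = (lam3 + H - 1.0) / (n * (lam4 - np.log(m)))               # pi / (1 - pi)
    pi = odds / (1.0 + odds)
    shape = lam3 + H if rng.random() < pi else lam3 + H - 1.0        # pick the mixture component
    return rng.gamma(shape, 1.0 / (lam4 - np.log(m)))                # Ga(shape, rate = lam4 - log m)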

B: Determining Prior Hyperparameters

As priors for β_h^k in (1) we assume β_h^k ~ N(β_0^k, Σ_0^k) for each transition k and h = 1, 2, .... For σ^k we assume (σ^k)^{-2} ~ i.i.d. Ga(λ_1, λ_2). And, finally, α^k ~ i.i.d. Ga(λ_3, λ_4). To apply the DDP-GP model, one must first determine numerical values for the fixed hyperparameters {β_0^k, Σ_0^k, k = 1, 2, ...} and λ = (λ_1, λ_2, λ_3, λ_4). This is a critical step. These numerical hyperparameter values must facilitate posterior computation, and they should not introduce inappropriate information into the prior that would invalidate posterior inferences. With this in mind, the hyperparameters (β_0^k, Σ_0^k) for the k-th transition time covariate effect distribution may be obtained via empirical Bayes by doing a preliminary fit of a lognormal distribution Y_i^k = log(T_i^k) ~ N(x_i^k β_0^k, σ_0^k) for each transition k. Similarly, we assume a diagonal matrix for Σ_0^k, with the diagonal values also obtained from the preliminary fit of the lognormal distribution. Once an empirical estimate of σ^k is obtained, one can tune (λ_1, λ_2) so that the prior mean of σ^k matches the empirical estimate and the variance equals 1, or a suitably large value, to ensure a vague prior. Finally, information about α^k typically is not available in practice. We use λ_3 = λ_4 = 1.

This approach works in practice because the parameter β_0^k specifies the prior mean for the mean function of the GP prior, which in turn formalizes the regression of T^k on the covariates x^k, including treatment selection. The imputed treatment effects hinge on the predictive distribution under that regression. Excessive prior shrinkage could smooth away the treatment effect that is the main focus. The use of an empirical Bayes type prior in the present setting is similar to empirical Bayes priors in hierarchical models. This type of empirical Bayes approach for hyperparameter selection is commonly used when a full prior elicitation is either not possible or impractical.

Inference is not sensitive to the values of the hyperparameters λ that determine the priors of σ^k and α^k, for two reasons. First, the standard deviation σ^k is the scale of the kernel that is used to smooth the discrete random probability measure generated by the DDP prior. It is important for reporting a smooth fit, that is, for display, but it is not critical for the imputed fits in our regression setting. Assuming some regularity of the posterior mean function, smoothing adds only minor corrections. Second, the total mass parameter α^k determines the number of unique clusters formed in the underlying Polya urn. However, because most clusters are small, changing the prior of α^k does not significantly change the posterior predictive values that are the basis for the proposed inference.
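The empirical Bayes choice of (β_0^k, Σ_0^k) and (λ_1, λ_2) described above can be sketched as follows (illustrative Python/NumPy, not the paper's code). For simplicity the preliminary lognormal fit is done by ordinary least squares on uncensored log times; with censored data a parametric AFT fit would be used instead. Matching the prior mean and variance of (σ^k)^2 under an Inverse-Gamma(λ_1, λ_2) prior is one simple moment-matching choice and is an assumption of this sketch.

import numpy as np

def empirical_bayes_hyperparams(T, X, prior_var_sigma2=1.0):
    """Preliminary lognormal fit log T ~ N(x beta0, sigma0^2) to choose (beta0, Sigma0, lam1, lam2)."""
    y = np.log(T)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta0 = XtX_inv @ X.T @ y                           # OLS estimate used as prior mean beta0
    resid = y - X @ beta0
    sigma2_hat = resid @ resid / (len(y) - X.shape[1])  # empirical residual variance
    Sigma0 = np.diag(np.diag(XtX_inv) * sigma2_hat)     # diagonal prior covariance, as in the text

    # Moment-match an Inverse-Gamma(lam1, lam2) prior for sigma^2:
    # prior mean = sigma2_hat, prior variance = prior_var_sigma2.
    lam1 = 2.0 + sigma2_hat**2 / prior_var_sigma2
    lam2 = sigma2_hat * (lam1 - 1.0)
    return beta0, Sigma0, lam1, lam2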

5 are the bass for the proposed nference. The conjugacy of the mpled multvarate normal on {θh k(xk ), = 0,..., n} and the normal kernel n (2) greatly smplfy computatons, snce any Markov chan Monte Carlo (MCMC) scheme for DP mxture models can be used. MacEachern and Müller (1998) and Neal (2000) descrbed specfc algorthms to mplement posteror MCMC smulaton n DPM models. Ishwaran and James (2001) developed alternatve computatonal algorthms based on fnte DPs, whch truncated (2) after a fnte number of terms. C: Survval Tme Regresson Smulaton Ths smulaton was desgned to study the DDP-GP regresson model by comparng nference for a survval functon wth the smulaton truth. In ths study, we dd not evaluate a regme effect, but rather focused on nference for the survval curve. For each subject, we generated T = survval tme, the covarates x 1 = tumor sze (0=small, 1=large) and x 2 = body weght, and x 3 = a bomarker (0=absent, 1=present). We assumed that small and large tumor szes each had probablty.50. Body weghts were computed by samplng from a unform dstrbuton, Unf(80, 150), wth the covarate x 2 defned by shftng and scalng to obtan mean 0 and varance 1. The bomarker was assocated wth tumor sze, as follows. Patents n the large tumor sze group were bomarker negatve wth probablty 0.7 and bomarker postve wth probablty 0.3. Patents wth small tumor sze were bomarker negatve wth probablty 0.3 and bomarker postve wth probablty 0.7. Let Y LN(m, s) denote a lognormal random varable Y = log T for T N(m, s). By a slght abuse of notaton, we also use LN(m, s) to denote the lognormal p.d.f. Let x = (1, x,1, x,2, x,3 ) denote the covarates for patent, here we nclude 1 n the covarate to ndcate the ntercept. We smulated each sample Y 1,, Y n of n observatons from a mxture of lognormal dstrbutons, Y x 0.4 LN(x β 1, σ 2 ) LN(x β 2, σ 2 ), where the true covarate parameters of the mxture components were β 1 = (1, 2, 2, 1) and β 2 = (2, 1, 3, 3), wth σ 2 = 0.4. For comparson, we also ft an AFT regresson model, assumng Y = log(t ) = x β + σɛ, = 1,..., n wth ɛ followng an extreme value dstrbuton, so that T follows a Webull dstrbuton. 5

In this simulation, we considered four scenarios, with n = 50, 100, or 200 observations without censoring, or n = 200 with 23% censoring. For each scenario, N = 1,000 trials were simulated. For each simulated data set we fit a DDP-GP survival regression model F(Y_i | x_i). For simulation j, let S(t | x) = p(T_{n+1} > t | x_{n+1,j} = x, data) denote the posterior expected survival function for a future patient with covariate x. Using the empirical distribution (1/n) Σ_{i=1}^n δ_{x_i^j} to marginalize with respect to x_{n+1,j}, and averaging across simulations, we get

S̄(t) = (1/N) Σ_{j=1}^N (1/n) Σ_{i=1}^n S(t | x_i^j).

Figure S1 compares S̄(·) under the DDP-GP model with the simulation truth

S̄_0(t) = (1/N) Σ_{j=1}^N (1/n) Σ_{i=1}^n S_0(t | x_i^j),

and maximum likelihood estimates (MLEs) under Weibull AFT, Lognormal AFT, and Exponential AFT models. In each scenario, the true curve is given as a solid black line; the MLEs of the survival functions under the AFT regression models assuming Weibull, Lognormal and Exponential distributions are shown as green, blue and magenta solid lines, respectively; and the posterior mean survival function under the DDP-GP model is shown as a solid red line with point-wise 90% credible bands as two dotted red lines. In all four scenarios, the DDP-GP model based estimate reliably recovered the shape of the true survival function and avoided the excessive bias seen with the Weibull, lognormal and exponential MLEs. As expected, the three scenarios without censoring show that increasing sample size gives more accurate estimation. With 23% censoring, the DDP-GP estimate becomes less accurate, but it is still much closer to the simulation truth than the AFT regression models with Weibull, lognormal and exponential distributions.

Figure S1: Simulation 1. True mean survival functions (black) and estimated mean survival functions under the DDP-GP model (red) for sample sizes n = 50, 100, 200 without censoring and n = 200 with 23% censoring, for 1,000 simulations. For comparison, we also show the MLE under an AFT regression with Weibull (green), Lognormal (blue) and Exponential (magenta) distributions. In all cases, the point-wise 90% credible bands are displayed as the region between two dotted red lines. (Panels: Npat = 50, 100, 200 without censoring and Npat = 200 with 23% censoring; axes: Time versus Survival.)
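The double average defining S̄(t) above is straightforward to compute once the posterior mean survival curves S(t | x_i^j) are available; a small illustrative sketch (hypothetical array layout, not the paper's code) is:

import numpy as np

def average_survival(S_list):
    """S_bar(t): average S(t | x_i^j) over patients i and simulated data sets j.

    S_list has one entry per simulated data set; entry j is an (n, n_grid) array
    whose row i holds S(t | x_i^j) evaluated on a common time grid.
    """
    per_sim = [S_j.mean(axis=0) for S_j in S_list]   # (1/n) sum_i S(t | x_i^j)
    return np.mean(per_sim, axis=0)                  # (1/N) sum_j of the above

# Toy usage: N = 3 data sets, n = 5 patients, exponential-shaped curves on a grid of 50 times.
rng = np.random.default_rng(2)
tgrid = np.linspace(0.0, 10.0, 50)
S_list = [np.exp(-rng.uniform(0.1, 0.5, size=(5, 1)) * tgrid[None, :]) for _ in range(3)]
print(average_survival(S_list).shape)                # (50,)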

D: Computing Mean Survival Time

The risk sets of the seven transition times in the leukemia trial are defined as follows. Let G_0 = {1, ..., n} denote the initial risk set at the start of induction chemotherapy, and G_{(0,r)} = {i : s_{1i} = r} for r = D, C, R, so that G_0 = G_{(0,D)} ∪ G_{(0,C)} ∪ G_{(0,R)}. Similarly, G_{(C,P)} = {i : s_{1i} = C, s_{2i} = P} is the later risk set for T^{(P,D)}.

To record right censoring, let U_i denote the time from the start of induction to last follow-up for patient i.

We assume that U_i is conditionally independent of the transition times given prior transition times and other covariates. Censoring of event times occurs by competing risks and/or loss to follow-up. For a patient in the risk set for event time T^k, let δ_i^k = 1 if patient i is not censored and δ_i^k = 0 if patient i is right censored. For example, δ_i^{(0,D)} = 1 for i ∈ G_0 if T_i^{(0,D)} = min(U_i, T_i^{(0,k)}, k = D, C, R). Similarly, δ_i^{(R,D)} = 1 for i ∈ G_{(0,R)} if T_i^{(0,R)} + T_i^{(R,D)} < U_i, and δ_i^{(P,D)} = 1 for i ∈ G_{(C,P)} if T_i^{(0,C)} + T_i^{(C,P)} + T_i^{(P,D)} < U_i.

Let V_{x,i} denote the observed time for patient i in risk set G_x, as follows. For i ∈ G_0, let V_{1,i} = min(T_i^{(0,D)}, T_i^{(0,R)}, T_i^{(0,C)}, U_i) denote the observed time for the stage 1 event or censoring. For i ∈ G_{(0,C)}, let V_{C,i} = min(T_i^{(C,D)}, T_i^{(C,P)}, U_i − T_i^{(0,C)}) denote the observed event time for the competing risks D and P and loss to follow-up. Similarly, for i ∈ G_{(0,R)}, let V_{R,i} = min(T_i^{(R,D)}, U_i − T_i^{(0,R)}), and for i ∈ G_{(C,P)}, let V_{(C,P),i} = min(T_i^{(P,D)}, U_i − T_i^{(0,C)} − T_i^{(C,P)}).

The joint likelihood function is the product L = L_1 L_2 L_3 L_4. The first factor L_1 corresponds to response to induction therapy,

L_1 = ∏_{i ∈ G_0} ∏_{k ∈ {D,R,C}} f^{(0,k)}(V_{1,i} | x_i^{(0,k)})^{δ_i^{(0,k)}} S^{(0,k)}(V_{1,i} | x_i^{(0,k)})^{1 − δ_i^{(0,k)}},   (5)

where S^k = 1 − F^k. The second factor L_2 corresponds to patients i ∈ G_{(0,R)} who experience resistance to induction and receive salvage Z_{2,1},

L_2 = ∏_{i ∈ G_{(0,R)}} f^{(R,D)}(V_{R,i} | x_i^{(R,D)})^{δ_i^{(R,D)}} S^{(R,D)}(V_{R,i} | x_i^{(R,D)})^{1 − δ_i^{(R,D)}}.   (6)

The third factor L_3 is the likelihood contribution from patients achieving CR,

L_3 = ∏_{i ∈ G_{(0,C)}} ∏_{k = (C,D),(C,P)} f^k(V_{C,i} | x_i^k)^{δ_i^k} S^k(V_{C,i} | x_i^k)^{1 − δ_i^k}.   (7)

The fourth factor L_4 is the contribution from patients who experience tumor progression after CR,

L_4 = ∏_{i ∈ G_{(C,P)}} f^{(P,D)}(V_{(C,P),i} | x_i^{(P,D)})^{δ_i^{(P,D)}} S^{(P,D)}(V_{(C,P),i} | x_i^{(P,D)})^{1 − δ_i^{(P,D)}}.   (8)
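Each factor in (5)-(8) is built from the same generic censored-data contribution f(V)^δ S(V)^{1−δ}: the density at the observed time when the event is observed, and the survival function when the time is censored. The short sketch below illustrates this single contribution with a lognormal working model standing in for the transition-time distribution; it is illustrative only, and the parameter values are made up.

import numpy as np
from scipy import stats

def log_contrib(log_f, log_S, V, delta):
    """log[ f(V)^delta * S(V)^(1 - delta) ]: density if observed, survival if right censored."""
    return delta * log_f(V) + (1.0 - delta) * log_S(V)

def lognormal_pair(mu, sigma):
    """Log-density and log-survival of a lognormal(mu, sigma) working model."""
    log_f = lambda v: stats.lognorm.logpdf(v, s=sigma, scale=np.exp(mu))
    log_S = lambda v: stats.lognorm.logsf(v, s=sigma, scale=np.exp(mu))
    return log_f, log_S

log_f, log_S = lognormal_pair(mu=3.0, sigma=0.5)      # illustrative parameter values
print(log_contrib(log_f, log_S, V=10.0, delta=1))     # event observed at V = 10
print(log_contrib(log_f, log_S, V=10.0, delta=0))     # right censored at V = 10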

The mean survival time of a patient treated with regime Z = (Z_1, Z_{2,1}, Z_{2,2}) is

η(Z) = ∫ p(s_1 = D | x_0, Z_1) η^{(0,D)}(x_0, Z_1) dp(x_0)
  + ∫ p(s_1 = R | x_0, Z_1) { η^R(x_0, Z_1) + ∫ η^{(R,D)}(x_0, Z_1, Z_{2,1}, T^{(0,R)}) dµ(T^{(0,R)}) } dp(x_0)
  + ∫ p(s_1 = C | x_0, Z_1) { η^C(x_0, Z_1) + ∫ [ p(s_2 = D | s_1 = C, x_0, Z_1, T^{(0,C)}) η^{(C,D)}(x_0, Z_1, T^{(0,C)})
      + p(s_2 = P | s_1 = C, x_0, Z_1, T^{(0,C)}) ( η^{(C,P)}(x_0, Z_1, T^{(0,C)}) + ∫ η^{(P,D)}(x_0, Z_1, Z_{2,2}, T^{(0,C)}, T^{(C,P)}) dµ(T^{(C,P)}) ) ] dµ(T^{(0,C)}) } dp(x_0).   (9)

We compute the IPTW estimates for overall mean survival with regime Z as

IPTW(Z) = Σ_{i=1}^n w_i(Z) T_i / Σ_{i=1}^n w_i(Z),   (10)

where

w_i(Z) = { I(Z_i = Z) δ_i / K̂(U_i) } [ I(s_{1i} = D) + I(s_{1i} = R) I_i(Z_{2,1}) / P̂r(Z_{2,1} | s_{1i} = R, Z_1, x_{0i}, T_i^{(0,R)})
      + I(s_{1i} = C, s_{2i} = D) + I(s_{1i} = C, s_{2i} = P) I_i(Z_{2,2}) / P̂r(Z_{2,2} | s_{1i} = C, s_{2i} = P, Z_1, x_{0i}, T_i^{(0,C)}, T_i^{(C,P)}) ].   (11)

In (11), K̂ is the Kaplan-Meier estimator of the censoring survival distribution K(t) = Pr(U > t) at time t, I_i(Z) is an indicator that patient i received treatment Z and 0 otherwise, and P̂r(Z_{2,1} | s_{1i} = R, Z_1, x_{0i}, T_i^{(0,R)}) is the probability of receiving salvage treatment Z_{2,1}, estimated using logistic regression, and similarly for P̂r(Z_{2,2} | s_{1i} = C, s_{2i} = P, Z_1, x_{0i}, T_i^{(0,C)}, T_i^{(C,P)}). The above estimator has been shown to be consistent under suitable assumptions (Wahed and Thall, 2013; Scharfstein et al., 1999).
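The structure of (10)-(11) is a weighted mean with one inverse-probability factor for censoring and one for the stage-2 treatment actually received. The sketch below (illustrative Python/NumPy, not the paper's code) computes such weights in a simplified form: the four mutually exclusive response cases in the bracket of (11) are collapsed into a single indicator of whether a stage-2 treatment decision applies, and all inputs are assumed to be precomputed arrays.

import numpy as np

def iptw_weights(follow_regime, delta, Khat_U, stage2_applies, stage2_prob):
    """Simplified version of the weights in (11).

    follow_regime  : indicator that patient i's assigned treatments equal regime Z
    delta          : death indicator for patient i
    Khat_U         : Kaplan-Meier estimate of the censoring survivor function at U_i
    stage2_applies : 1 if a stage-2 (salvage) decision applies to patient i, else 0
    stage2_prob    : estimated probability (e.g., from logistic regression) of the
                     stage-2 treatment actually received; ignored when stage2_applies == 0
    """
    denom2 = np.where(stage2_applies == 1, stage2_prob, 1.0)
    return follow_regime * delta / (Khat_U * denom2)

def iptw_mean_survival(T, w):
    """IPTW(Z) in (10): sum_i w_i(Z) T_i / sum_i w_i(Z)."""
    w = np.asarray(w, dtype=float)
    return np.sum(w * T) / np.sum(w)

# Toy usage with made-up numbers.
w = iptw_weights(np.array([1, 1, 0, 1]), np.array([1, 1, 1, 0]),
                 np.array([0.9, 0.8, 0.7, 0.6]),
                 np.array([1, 0, 1, 1]), np.array([0.5, 1.0, 0.4, 0.3]))
print(iptw_mean_survival(np.array([12.0, 30.0, 8.0, 20.0]), w))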

E: Survival Regression for T^{(C,P)} and T^{(P,D)}

Here we summarize results for the survival regression for T^{(C,P)}. Among the n = 210 patients, 102 (48.6%) achieved C, with C rates of 37%, 48%, 53% and 56% in the FAI, FAI plus ATRA, FAI plus GCSF, and FAI plus GCSF plus ATRA arms, respectively. Of the 102 patients who achieved CR, 93 experienced disease progression before death or being lost to follow-up. Among these 93 relapsed patients, 53 received salvage treatment with HDAC. For a hypothetical future patient at age 61 with poor cytogenetic abnormality, Figure S2 summarizes survival regression functions for each of the four induction therapies, with solid lines representing T^{(0,C)} = 20 and dashed lines representing T^{(0,C)} = 30. The four dashed lines are below the four corresponding solid lines, indicating that T^{(0,C)} was associated with T^{(C,P)}. This observation coincides with the well-known phenomenon in chemotherapy for AML or MDS that, regardless of induction therapy, the longer it takes to achieve C, the shorter the period that the patient remains in C.

Similarly, we summarize results for the survival regression for T^{(P,D)}. For a patient with poor cytogenetic abnormality, Figure S3 shows the posterior predicted survival functions under different combinations of induction therapy and age. Panels (a) and (c) show the survival functions of a patient assigned salvage treatment HDAC at age 46 or 76, while panels (b) and (d) plot the corresponding survival functions for the patient assigned non-HDAC as salvage. Four different colors represent the four induction therapies. Figure S3 shows that residual time to D after disease progression following C was associated with both age and salvage therapy. Older patients are more likely to have shorter residual life once their disease progresses, and patients given HDAC as salvage die more quickly than patients given non-HDAC salvage.

Figure S2: The effect of T^{(0,C)} on T^{(C,P)} at age 61 with poor cytogenetic abnormality. Black, red, green and blue curves represent induction treatments FAI, FAI+ATRA, FAI+GCSF and FAI+ATRA+GCSF, respectively. Solid lines and dashed lines represent T^{(0,C)} = 20 and T^{(0,C)} = 30, respectively. The longer it takes to achieve C, the shorter the period of time that the patient remained in C. (Axes: Time versus Survival.)

Figure S3: AML-MDS trial data in transition (P, D). Panels (a) and (c) show the posterior estimated survival functions of a patient at age 46 and 76, respectively, with poor cytogenetic abnormality, assigned to salvage treatment HDAC, for the four induction therapies. Panels (b) and (d) show the corresponding posterior estimated survival functions of a patient at age 46 and 76 with poor cytogenetic abnormality assigned to non-HDAC salvage. Black, red, green and blue curves represent induction treatments FAI, FAI+ATRA, FAI+GCSF and FAI+ATRA+GCSF, respectively. (Panel titles: Salvage = HDAC, Age = 46; Salvage = non HDAC, Age = 46; Salvage = HDAC, Age = 76; Salvage = non HDAC, Age = 76. Axes: Time versus Survival.)

References

Ishwaran, H. and James, L. F. (2001). Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453).

MacEachern, S. N. and Müller, P. (1998). Estimating mixture of Dirichlet process models. Journal of Computational and Graphical Statistics, 7(2).

Neal, R. M. (2000). Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2).

Scharfstein, D. O., Rotnitzky, A., and Robins, J. M. (1999). Adjusting for nonignorable drop-out using semiparametric nonresponse models. Journal of the American Statistical Association, 94(448).

Wahed, A. S. and Thall, P. F. (2013). Evaluating joint effects of induction-salvage treatment regimes on overall survival in acute leukaemia. Journal of the Royal Statistical Society: Series C (Applied Statistics), 62(1).
