4.1 basic idea of interval mapping
4 Interval Mapping for a Single QTL
- 4.1 basic idea of interval mapping
- 4.2 interval mapping by maximum likelihood
  - maximum likelihood using EM
  - ML via MCMC
- 4.3 Bayesian interval mapping
  - "natural" Bayesian priors
  - multiple imputation
  - MCMC
- 4.4 bootstrapped variance estimates
- 4.5 advantages & shortcomings of IM
- 4.6 Haley-Knott regression approximation
(ch. 4: Broman, Churchill, Yandell, Zeng)

4.1 basic idea of interval mapping
- study properties of the likelihood at each possible QTL
  - treating QTL genotypes Q as missing data
  - assuming only a single QTL (for now)
- recall the likelihood as a mixture over the unknown QTL genotypes
  - likelihood = product of sums of products
  - complicated to evaluate -- requires iteration

L(θ, λ | Y, X) = pr(Y | X, θ, λ) = prod_i sum_Q pr(Q | X_i, λ) pr(Y_i | Q, θ)
uncertainty in QTL genotype
- how to improve the guess on Q with data and parameters?
  - prior recombination: pr(Q | X, λ)
  - posterior recombination: pr(Q | Y, X, θ, λ)
- main philosophies for assessing the likelihood
  - maximum likelihood: study the peak(s)
  - Bayesian analysis: study the whole shape
- implementation methodologies
  - Expectation-Maximization (EM)
  - Markov chain Monte Carlo (MCMC)
  - multiple imputation
  - genetic algorithms, GEE

posterior on QTL genotypes
- full conditional of Q given data and parameters
  - proportional to the prior pr(Q | X, λ)
    - weight toward Q that agrees with flanking markers
  - proportional to the likelihood pr(Y | Q, θ)
    - weight toward Q so that the group mean G_Q ≈ Y
  - phenotype and flanking markers may conflict
    - posterior recombination balances these two weights

pr(Q | Y, X, θ, λ) = pr(Q | X, λ) pr(Y | Q, θ) / pr(Y | X, θ, λ)
how does phenotype Y affect Q?
[figure: phenotypes by genotype at a locus between two D4Mit markers on chr 4]
- what are the probabilities for genotype Q between markers?
- recombinants AA:AB all 1:1 if we ignore Y -- and if we use Y?

maximum likelihood (ML) idea
- pick a QTL locus λ (usually scan the whole genome)
- find ML estimates of gene action θ given λ
- maximum likelihood is at the peak of the likelihood
  - slope (derivative with respect to θ) is zero
  - sometimes the maximum is at a boundary (non-zero slope)
- slope is a weighted average using posteriors for Q
  - cannot write the estimate in "closed form"
  - need to know θ to estimate it!
  - iterate toward the maximum in some clever way

d log L(θ, λ | Y, X) / dθ = sum_i sum_Q pr(Q | Y_i, X_i, θ, λ) d log pr(Y_i | Q, θ) / dθ
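The balance of prior recombination weight and phenotype likelihood can be sketched in Python; the genotype probabilities, group means, and the function name below are illustrative, not taken from the slides:

```python
import math

def posterior_genotype(prior, means, sigma, y):
    """Posterior pr(Q | y, X, theta, lambda) for one individual:
    prior recombination weights pr(Q | X, lambda) reweighted by the
    normal phenotype likelihood f(y | G_Q, sigma^2), then normalized."""
    lik = [math.exp(-0.5 * ((y - g) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
           for g in means]
    w = [p * l for p, l in zip(prior, lik)]
    total = sum(w)
    return [wi / total for wi in w]

# illustrative numbers: flanking markers favor AA (prior 0.7 vs 0.3),
# but the phenotype y = 1.8 sits much nearer the AB group mean
post = posterior_genotype(prior=[0.7, 0.3], means=[0.0, 2.0], sigma=1.0, y=1.8)
```

Here the phenotype pulls most of the posterior weight onto AB even though the markers favored AA, which is exactly the conflict the slide describes.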
Bayesian model posterior
- augment the data (Y, X) with the unknowns Q
- study the unknowns (θ, λ, Q) given the data (Y, X)
  - Q ~ pr(Q | Y, X, θ, λ)
  - no longer need a weighted average over Q
  - instead we average over Q to study the parameters: pr(θ, λ | Y, X) = sum_Q pr(θ, λ, Q | Y, X)
- study properties of the posterior
  - need to specify priors for (θ, λ, Q)
  - the denominator is very difficult to compute in practice
  - draw samples from the posterior in some clever way

pr(θ, λ, Q | Y, X) = pr(Q | X, λ) pr(Y | Q, θ) pr(λ | X) pr(θ) / pr(Y | X)

4.2 interval mapping by ML
- search the whole genome for a putative QTL
- "profile" the likelihood across all possible λ
  - find the ML estimate of θ given λ
  - ML estimate of (θ, λ) at the maximum over the genome

L(θ̂, λ | Y, X) = prod_i sum_Q pr(Q | X_i, λ) f(Y_i | Ĝ_Q, σ̂²)
L₀(θ̂₀ | Y) = prod_i f(Y_i | μ̂, s²_pool)
LOD(λ) = log₁₀ { L(θ̂, λ | Y, X) / L₀(θ̂₀ | Y) }
LOD for hyper dataset
[figure: LOD curves across the chromosomes; highest LOD on chr 4 -- other QTL?]

LOD(λ) on chr 4 of hyper
[figure: LOD against map position (cM) by three methods: EM ("exact"), Haley-Knott regression (HK), single imputation (IMP)]
- all agree at the peak and mostly at the markers
- note the marker spacing
EM method for interval mapping
- fix a possible QTL locus λ
- iterate between expectation & maximization
  - the likelihood increases with each iteration
  - stop iterating when the change is "negligible"
- initial values
  - P_iQ = pr(Q | X_i, λ): recombination model in the absence of phenotype data
  - or use Haley-Knott regression estimates of θ

EM method for interval mapping (continued)
- E-step: estimate the posterior recombination P_iQ = pr(Q | Y_i, X_i, θ, λ)
  - estimate for every individual i and genotype Q
  - depends on the effects θ
- M-step: maximize the likelihood for θ
  - may be many parameters
  - technical point: caution on parallel updates
  - solve a system of equations: derivatives set to zero, depending on the P_iQ

0 = sum_i sum_Q P_iQ d log pr(Y_i | Q, θ) / dθ
4.2.1 M-step for normal phenotype
- Y_i = G_Q + e_i with e_i ~ N(0, σ²), so pr(Y_i | Q, θ) = f(Y_i | G_Q, σ²)
- see notes in the book for derivative details
- estimates given the E-step posteriors P_iQ:

Ĝ_Q = sum_i Y_i P_iQ / sum_i P_iQ
σ̂² = sum_i sum_Q P_iQ (Y_i − Ĝ_Q)² / n

ML via MCMC
- basic idea of simulated annealing
  - start with non-informative priors on (θ, λ)
  - sample from the posterior (somehow...)
  - gradually shrink the priors toward the ML estimate
- slight difficulty: need to know (θ, λ) to sample from the posterior
  - iteration leads to a Markov chain
- point of this section: MCMC does not imply a Bayesian perspective!
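A minimal EM sketch for the normal-phenotype case, using the E- and M-step formulas above; the toy data, prior genotype probabilities, and function name are my own illustration, not the slides':

```python
import math

def em_interval(y, prior, n_iter=50):
    """EM for a single-QTL normal mixture Y_i = G_{Q_i} + e_i.
    prior[i][Q] = pr(Q | X_i, lambda) from the recombination model.
    Returns genotype means G and variance s2, per the slide M-step."""
    n, k = len(y), len(prior[0])
    G = [min(y) + j * (max(y) - min(y)) / (k - 1) for j in range(k)]
    ybar = sum(y) / n
    s2 = sum((yi - ybar) ** 2 for yi in y) / n
    for _ in range(n_iter):
        # E-step: posterior P_iQ = pr(Q | Y_i, X_i, theta, lambda)
        P = []
        for yi, pri in zip(y, prior):
            w = [p * math.exp(-0.5 * (yi - g) ** 2 / s2) for p, g in zip(pri, G)]
            t = sum(w)
            P.append([wi / t for wi in w])
        # M-step: G_Q = sum_i Y_i P_iQ / sum_i P_iQ ;
        #         s2  = sum_i sum_Q P_iQ (Y_i - G_Q)^2 / n
        G = [sum(yi * Pi[q] for yi, Pi in zip(y, P)) / sum(Pi[q] for Pi in P)
             for q in range(k)]
        s2 = sum(Pi[q] * (yi - G[q]) ** 2
                 for yi, Pi in zip(y, P) for q in range(k)) / n
    return G, s2

# toy backcross-style data: two genotype groups near 0 and 2,
# with informative (illustrative) prior genotype probabilities
G, s2 = em_interval([0.1, -0.2, 0.0, 2.1, 1.9, 2.0],
                    [[0.9, 0.1]] * 3 + [[0.1, 0.9]] * 3)
```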
4.3 Bayesian interval mapping
- sample the missing genotypes Q
- decouple the effects θ from the QTL locus λ
  - but Q depends on (θ, λ) and vice versa
- also need to specify priors

λ ~ pr(λ | X, Q) ∝ pr(Q | X, λ) pr(λ | X)
Q ~ pr(Q | Y, X, θ, λ)
θ ~ pr(θ | Y, Q) ∝ pr(Y | Q, θ) pr(θ)

Bayesian priors for QTL
- locus λ: may be uniform over the genome, pr(λ | X) = 1 / (length of genome)
- missing genotypes Q: the recombination model pr(Q | X, λ) is formally a prior
- effects θ = (G, σ²), G = (G_QQ, G_Qq, G_qq)
  - conjugate priors for a normal phenotype:
  - G_Q ~ N(μ, κσ²)
  - σ² ~ inverse-χ²(ν, τ²), or ντ² / σ² ~ χ²_ν
effect of prior variance on posterior
[figure: normal prior, posteriors for n = 1 and n = 5, and the true mean, for small and large prior scales κ]

details of phenotype priors
- priors depend on "hyper-parameters"
- G_Q ~ N(μ, κσ²)
  - center around the phenotype grand mean: μ ≈ Ȳ
  - κσ² ≈ σ²_G, the genetic variance, so κ ≈ σ²_G / σ² = h² / (1 − h²)
  - with heritability h² = σ²_G / (σ²_G + σ²)
- σ² ~ inverse-χ²(ν, τ²), or ντ² / σ² ~ χ²_ν
  - τ² is the total sample variance
  - ν = prior degrees of freedom, a small integer
posterior for a single individual
- Y = G + E, environment E ~ N(0, σ²), σ² known
- likelihood: pr(Y | G, σ²) = N(Y | G, σ²)
- prior: pr(G | σ², μ, κ) = N(G | μ, κσ²)
- posterior: pr(G | Y, σ², μ, κ) = N(G | μ + B₁(Y − μ), B₁σ²), with B₁ = κ / (κ + 1)

posterior for a sample of n individuals
- shrinkage weights B_n go to 1 (Bayes for normal data)

pr(G | Y, σ², μ, κ) = N(G | μ + B_n(Ȳ_n − μ), B_n σ² / n)
with Ȳ_n = sum_i Y_i / n and B_n = κn / (κn + 1)

posterior by QT genetic value
- Y_i = G_{Q_i} + E_i, genetic Q = QQ, Qq, qq; environment E_i ~ N(0, σ²), σ² known
- parameters θ = (G, σ²)
- likelihood: pr(Y_i | G, Q_i, σ²) = N(Y_i | G_{Q_i}, σ²)
- prior: pr(G_Q | σ², μ, κ) = N(G_Q | μ, κσ²)
- posterior:

pr(G_Q | Y, Q, σ², μ, κ) = N(G_Q | μ + B_{n_Q}(Ȳ_Q − μ), B_{n_Q} σ² / n_Q)
with n_Q = count{i: Q_i = Q}, Ȳ_Q = sum_{i: Q_i = Q} Y_i / n_Q, B_{n_Q} = κn_Q / (κn_Q + 1)
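The shrinkage formulas above can be checked numerically; this sketch assumes σ² known, and the function name and numbers are illustrative:

```python
def posterior_group_mean(y_group, mu, kappa, sigma2):
    """Posterior of a genotype-group mean G_Q given its n phenotypes:
    N(mu + B_n*(ybar - mu), B_n*sigma2/n), B_n = kappa*n/(kappa*n + 1)."""
    n = len(y_group)
    ybar = sum(y_group) / n
    B = kappa * n / (kappa * n + 1)
    return mu + B * (ybar - mu), B * sigma2 / n

# n = 1: B = 1/2, so the single observation is shrunk halfway to mu
mean1, var1 = posterior_group_mean([3.0], mu=0.0, kappa=1.0, sigma2=1.0)
# n = 5: B = 5/6, so the data dominate and the posterior tightens
mean5, var5 = posterior_group_mean([3.0] * 5, mu=0.0, kappa=1.0, sigma2=1.0)
```

As the slide's figure suggests, more data both pulls the posterior toward the sample mean and shrinks its variance.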
Empirical Bayes: choosing hyper-parameters
- how do we choose the hyper-parameters μ, κ?
- Empirical Bayes: marginalize over the prior, estimate μ, κ from the marginal posterior
  - likelihood: pr(Y_i | G, Q_i, σ²) = N(Y_i | G(Q_i), σ²)
  - prior: pr(G_Q | σ², μ, κ) = N(G_Q | μ, κσ²)
  - marginal: pr(Y_i | σ², μ, κ) = N(Y_i | μ, (κ + 1)σ²)
- estimates: μ̂ = Ȳ, s² = sum_i (Y_i − Ȳ)² / n, σ̂² = s² / (κ + 1) = s²(1 − h²)
- EB posterior: pr(G_Q | Y, Q) ≈ N(G_Q | Ȳ + B_{n_Q}(Ȳ_Q − Ȳ), B_{n_Q} σ̂² / n_Q)

What if the variance σ² is unknown?
- recall that the sample variance is proportional to a chi-square:
  pr(s² | σ²) = χ²(ns²/σ² | n), or equivalently ns²/σ² | σ² ~ χ²_n
- the conjugate prior is inverse chi-square:
  pr(σ² | ν, τ²) = inv-χ²(σ² | ν, τ²), or equivalently ντ²/σ² | ν, τ² ~ χ²_ν
- empirical choice: τ² = s²/3, ν = 6, giving E(σ² | ν, τ²) = s²/2 and Var(σ² | ν, τ²) = s⁴/4
- posterior given data: pr(σ² | Y, ν, τ²) = inv-χ²(σ² | ν + n, (ντ² + ns²)/(ν + n))
  - a weighted average of prior and data
joint effects posterior details
- Y_i = G(Q_i) + E_i, genetic Q = QQ, Qq, qq; environment E_i ~ N(0, σ²)
- parameters θ = (G, σ²)
- likelihood: pr(Y_i | G, Q_i, σ²) = N(Y_i | G(Q_i), σ²)
- priors: pr(G_Q | σ², μ, κ) = N(G_Q | μ, κσ²), pr(σ² | ν, τ²) = inv-χ²(σ² | ν, τ²)
- posterior:

pr(G_Q | Y, Q, σ², μ, κ) = N(G_Q | μ + B_{n_Q}(Ȳ_Q − μ), B_{n_Q} σ² / n_Q)
pr(σ² | Y, Q, G, ν, τ²) = inv-χ²(σ² | ν + n, (ντ² + ns²_G)/(ν + n))
with s²_G = sum_i (Y_i − G(Q_i))² / n

Bayesian multiple imputation
- basic idea
  - impute multiple copies of the missing genotypes Q
  - sample Q ~ pr(Q | X, λ), weighted to appear as draws from the posterior
  - average out the gene effects θ
  - study the posterior for the putative QTL locus λ
- most effective for multiple QTL; use a single QTL to introduce the idea
- consider all loci as possible QTL
  - sample on a grid Λ of 'pseudomarkers' (evenly spaced in cM)
  - similar to an interval-mapping scan of the whole genome
importance sampling idea
- draw samples from one distribution: Q₁, Q₂, Q₃, ..., Q_n ~ f(Q)
- weight them appropriately by ω(Q)
- sample summaries then refer to the distribution g(Q) = f(Q) ω(Q) / constant
- mean for f: sum_i Q_i / n
- mean for g: sum_i Q_i ω(Q_i) / sum_i ω(Q_i)

example: mean copies of allele q
- genotypes QQ, Qq, qq carry 0, 1, 2 copies of q
- draw genotypes uniformly: f = (1/3, 1/3, 1/3), summing to 1.0
- weight by ω = g/f, here summing to 4/3
- importance sampling: g = f ω / constant = 0.75 ω
- compare the plain sample mean under f with the weighted mean under g
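The weighted-mean recipe above can be sketched as a runnable example; the target distribution, sample size, and names are illustrative stand-ins, not the slide's exact numbers:

```python
import random

def importance_mean(draws, value, weight):
    """Self-normalized importance-sampling mean:
    sum_i value(Q_i) * w(Q_i) / sum_i w(Q_i)."""
    w = [weight(q) for q in draws]
    return sum(value(q) * wi for q, wi in zip(draws, w)) / sum(w)

random.seed(1)
genotypes = ["QQ", "Qq", "qq"]
f = {g: 1 / 3 for g in genotypes}               # sampling distribution f: uniform
g_target = {"QQ": 0.5, "Qq": 0.25, "qq": 0.25}  # target g (illustrative)
omega = {k: g_target[k] / f[k] for k in genotypes}  # weights omega = g/f
copies = {"QQ": 2, "Qq": 1, "qq": 0}            # copies of allele Q

draws = random.choices(genotypes, k=20000)      # draws from f
est = importance_mean(draws, lambda q: copies[q], lambda q: omega[q])
# true mean under g: 2*0.5 + 1*0.25 + 0*0.25 = 1.25
```

The unweighted sample mean would estimate the mean under f (here 1.0); the ω-weighted mean targets g instead.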
what are appropriate weights?
- ideally, draw the genotypes from the posterior: want a sample Q ~ g(Q) ∝ sum_θ pr(Q | Y, X, θ, λ) pr(θ)
- but we have a sample Q ~ f(Q) = pr(Q | X, λ)
- appropriate weights: ω(Q, λ | Y, X) = pr(λ | X) sum_θ pr(Y | Q, θ) pr(θ)
- estimate the marginal posterior for the QTL locus λ
  - draw N samples from the prior at each locus λ: Q₁, Q₂, Q₃, ..., Q_N ~ pr(Q | X, λ)
  - pr(λ | Y, X) = sum_Q ω(Q, λ | Y, X) pr(Q | X, λ) / constant ≈ sum_j ω(Q_j, λ | Y, X) / constant
  - the constant is summed over all λ, but not actually needed

relating weights to posterior
- the posterior is simply averaged over θ
- the weights comprise all terms except pr(Q | X, λ)
- estimating the weights: see Sen & Churchill

pr(λ, Q | Y, X) = sum_θ pr(θ, λ, Q | Y, X)
               = pr(Q | X, λ) pr(λ | X) sum_θ pr(Y | Q, θ) pr(θ) / pr(Y | X)
               = pr(Q | X, λ) ω(Q, λ | Y, X) / pr(Y | X)
estimating effects via imputation
- multiple imputation averages over the effects θ
- difficult to study the posterior of the effects directly
- can estimate the usual summaries:

E(θ | Y, X) = sum_λ sum_Q E(θ | Y, Q) pr(Q | X, λ) ω(Q, λ | Y, X) / pr(Y | X)
            ≈ sum_λ sum_j E(θ | Y, Q_j) ω(Q_j, λ | Y, X) / constant

Bayesian MCMC
- Markov chain Monte Carlo: Monte Carlo samples along a Markov chain
- What is a Markov chain? What is MCMC?
- sampling from full conditionals
  - Gibbs sampler
  - Metropolis-Hastings
What is a Markov chain?
- future given present is independent of past
- update the chain based on its current value
- can make the chain arbitrarily complicated
- the chain converges to a stable pattern π(), which is what we wish to study
- two-state example: move 0 → 1 with probability p (stay with 1 − p), move 1 → 0 with probability q (stay with 1 − q); stationary distribution π(1) = p / (p + q)

Markov chain idea
[figure: a realization of the two-state chain, state (0 or 1) against time, with π(1) = p / (p + q)]
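The two-state chain can be simulated to check the stationary value π(1) = p/(p + q); the particular p and q are arbitrary illustrative choices:

```python
import random

def simulate_chain(p, q, n_steps, seed=0):
    """Two-state Markov chain: from 0 move to 1 w.p. p, from 1 move to 0
    w.p. q. Returns the long-run fraction of time spent in state 1."""
    rng = random.Random(seed)
    state, ones = 0, 0
    for _ in range(n_steps):
        if state == 0:
            state = 1 if rng.random() < p else 0
        else:
            state = 0 if rng.random() < q else 1
        ones += state
    return ones / n_steps

frac = simulate_chain(p=0.2, q=0.3, n_steps=100000)
# stationary pi(1) = p/(p + q) = 0.2/0.5 = 0.4
```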
Markov chain Monte Carlo
- can study arbitrarily complex models
  - need only specify how parameters affect each other
  - can reduce to specifying full conditionals
- construct a Markov chain with the right model
  - joint posterior of the unknowns as the limiting stable distribution
  - update unknowns given data and all other unknowns
  - sample from full conditionals
  - cycle at random through all parameters; the next step depends only on current values
- nice Markov chains have nice properties
  - sample summaries make sense
  - consider almost as a random sample from the distribution
  - ergodic theorem and all that stuff

Markov chain Monte Carlo idea
- have a posterior pr(θ | Y); want to draw samples
- ideal: sample directly, θ ~ pr(θ | Y) (Gibbs sample)
- otherwise propose a new θ nearby
  - accept if more probable
  - toss a coin if less probable, based on relative heights (Metropolis-Hastings)
[figure: posterior density pr(θ | Y) against θ]
MCMC realization
[figure: MCMC sequences of θ alongside the posterior pr(θ | Y)]
- added twist: occasionally propose from the whole domain

marginal posteriors
- joint posterior: pr(λ, Q, θ | Y, X) = pr(θ) pr(λ) pr(Q | X, λ) pr(Y | Q, θ) / constant
  - observed: X, Y; missing: Q; unknown: λ, θ
- genetic effects: pr(θ | Y, X) = sum_Q pr(θ | Y, Q) pr(Q | Y, X)
- QTL locus: pr(λ | Y, X) = sum_Q pr(λ | X, Q) pr(Q | Y, X)
- QTL genotypes are more complicated:
  pr(Q | Y, X) = sum_{λ,θ} pr(Q | Y, X, λ, θ) pr(λ, θ | Y, X)
  - impossible to separate λ and θ in the sum
Why not ordinary Monte Carlo?
- independent samples of the joint distribution
- chaining (or peeling) of effects: pr(θ | Y, Q) = pr(G | Y, Q, σ²) pr(σ² | Y, Q)
  - possible analytically here, given the genotypes Q
- Monte Carlo: draw N samples from the posterior
  - sample the variance σ²
  - sample the genetic values G given the variance σ²
- but we know the markers X, not the genotypes Q!
  - would have a messy average over possible Q:
    pr(θ | Y, X) = sum_Q pr(θ | Y, Q) pr(Q | Y, X)

MCMC idea for QTLs
- construct a Markov chain around the posterior
  - want the posterior as the stable distribution of the Markov chain
  - in practice the chain tends toward the stable distribution
    - initial values may have low posterior probability
    - burn-in period to get the chain mixing well
- update components from full conditionals
  - update effects θ given genotypes & traits
  - update locus λ given genotypes & marker map
  - update genotypes Q given traits, marker map, locus & effects

(λ, Q, θ)₁ → (λ, Q, θ)₂ → ... → (λ, Q, θ)_N ~ pr(λ, Q, θ | Y, X)
sample from full conditionals
- hard to sample from the joint posterior
- update each unknown given all the others
  - examine the posterior: keep the terms with that unknown
  - the normalizing denominator makes it a distribution

λ ~ pr(λ | X, Q) ∝ pr(Q | X, λ) pr(λ | X)
Q ~ pr(Q | Y, X, θ, λ)
θ ~ pr(θ | Y, Q) ∝ pr(Y | Q, θ) pr(θ)

sample from full conditionals for a model with m QTL
- hard to sample from the joint posterior
  pr(λ, Q, θ | Y, X) = pr(θ) pr(λ) pr(Q | X, λ) pr(Y | Q, θ) / constant
- easy to sample the parameters from their full conditionals
  - genetic effects: pr(θ | Y, X, λ, Q) = pr(θ | Y, Q) = pr(θ) pr(Y | Q, θ) / constant
  - QTL locus: pr(λ | Y, X, θ, Q) = pr(λ | X, Q) = pr(λ) pr(Q | X, λ) / constant
  - QTL genotypes: pr(Q | Y, X, λ, θ) = pr(Q | X, λ) pr(Y | Q, θ) / constant
- observed: X, Y; missing: Q; unknown: λ, θ
Gibbs sampler idea
- want to study two correlated normals (θ₁, θ₂), means (μ₁, μ₂), correlation ρ
- could sample directly from the bivariate normal
- Gibbs sampler: sample each from its full conditional
  - pick the order of sampling at random
  - repeat N times

θ₁ | θ₂ ~ N(μ₁ + ρ(θ₂ − μ₂), 1 − ρ²)
θ₂ | θ₁ ~ N(μ₂ + ρ(θ₁ − μ₁), 1 − ρ²)

Gibbs sampler samples: ρ = 0.6
[figure: trace plots against Markov chain index and running Gibbs means, for two run lengths]
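The bivariate-normal Gibbs sampler above can be sketched directly from its full conditionals; the run length and function name are illustrative:

```python
import random

def gibbs_bvn(mu1, mu2, rho, n_samples, seed=0):
    """Gibbs sampler for a bivariate normal with unit variances and
    correlation rho, alternating the slide's full conditionals:
    theta1 | theta2 ~ N(mu1 + rho*(theta2 - mu2), 1 - rho^2), and vice versa."""
    rng = random.Random(seed)
    sd = (1 - rho ** 2) ** 0.5
    t1, t2 = mu1, mu2
    out = []
    for _ in range(n_samples):
        t1 = rng.gauss(mu1 + rho * (t2 - mu2), sd)
        t2 = rng.gauss(mu2 + rho * (t1 - mu1), sd)
        out.append((t1, t2))
    return out

samples = gibbs_bvn(mu1=0.0, mu2=0.0, rho=0.6, n_samples=20000)
```

The sample means and correlation should settle near (μ₁, μ₂) and ρ, even though no draw was ever made from the joint distribution.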
Gibbs sampler: effects & genotypes
- for a given locus λ, can sample effects θ and genotypes Q
  - effects parameter vector θ = (G, σ²), with G = (G_qq, G_Qq, G_QQ)
  - missing genotype vector Q = (Q₁, ..., Q_n)
- Gibbs sampler: update one at a time via full conditionals
  - randomly select the order of unknowns
  - update each given current values of all others, the locus λ and the data (Y, X)
    - sample the variance σ² given Y and the genetic values G
    - sample genotype Q_i given the markers X_i and locus λ
- can do block updates if more efficient
  - sample all genetic values G given Y and the variance σ²

phenotype model: alternate form
- genetic value G_Q in cell-means form is easy
- but often useful to model effects directly
  - sort out additive and dominance effects
  - useful for reduced models with multiple QTL
    - QTL main effects and interactions (pairwise, 3-way, etc.)
  - we only consider additive effects here:
    G_qq = μ − a, G_Qq = μ, G_QQ = μ + a
- recoding for the regression model:
  Q_i = −1 for genotype qq, 0 for genotype Qq, 1 for genotype QQ
  G(Q_i) = μ + aQ_i
MCMC run of mean & additive
[figure: MCMC traces and frequency histograms for the mean μ and additive effect a]

MCMC run for variance
[figure: MCMC trace and frequency histogram for the variance σ²]
missing marker data
- sample missing marker data a la QT genotypes
- the full conditional for missing markers depends on
  - flanking markers
  - possible flanking QTL
- can explicitly decompose by individual i: binomial (or trinomial) probability

pr(X_ik = aa, Aa or AA | Y, X, θ, λ) = pr(X_ik | X_i,k−1, X_i,k+1, Q_i, λ)

Metropolis-Hastings idea
- want to study a distribution f(θ)
  - take Monte Carlo samples, unless too complicated
- Metropolis-Hastings samples:
  - current sample value θ
  - propose a new value θ* from some distribution g(θ, θ*)
    - Gibbs sampler: g(θ, θ*) = f(θ*)
  - accept the new value with probability A
    - Gibbs sampler: A = 1

A = min{ 1, [f(θ*) g(θ*, θ)] / [f(θ) g(θ, θ*)] }
[figure: f(θ) with proposal density g(θ, θ*)]
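A minimal random-walk Metropolis-Hastings sketch; with a symmetric proposal the g ratio cancels from A. The standard-normal target, step size, and function name are illustrative choices, not from the slides:

```python
import math, random

def metropolis_hastings(log_f, n_samples, step=1.0, theta0=0.0, seed=0):
    """Random-walk M-H: propose theta* ~ N(theta, step^2) (symmetric g,
    so g(theta*, theta)/g(theta, theta*) = 1) and accept with probability
    A = min(1, f(theta*)/f(theta)), computed on the log scale."""
    rng = random.Random(seed)
    theta, lf = theta0, log_f(theta0)
    out = []
    for _ in range(n_samples):
        prop = rng.gauss(theta, step)
        lf_prop = log_f(prop)
        if rng.random() < math.exp(min(0.0, lf_prop - lf)):  # accept
            theta, lf = prop, lf_prop
        out.append(theta)  # on rejection, stick with the current value
    return out

# illustrative target: standard normal posterior pr(theta | Y)
chain = metropolis_hastings(lambda t: -0.5 * t * t, n_samples=50000, step=2.0)
```

Note that rejected proposals repeat the current value in the chain, which is what makes the stationary distribution come out right.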
Metropolis-Hastings samples
[figure: MCMC sequences and summaries for two run lengths, with narrow and wide proposal g, against the posterior pr(θ | Y)]

full conditional for the locus
- cannot easily sample from the locus full conditional
  pr(λ | Y, X, θ, Q) = pr(λ | X, Q) = pr(λ) pr(Q | X, λ) / constant
- cannot explicitly determine the full conditional
  - difficult to normalize
  - need to average over all possible genotypes over the entire map
- the Gibbs sampler will not work
  - but can use a method based on ratios of probabilities
Metropolis-Hastings step for the locus
- pick a new locus based upon the current locus
  - propose a new locus from a distribution q(λ_old, λ_new)
    - pick a value near the current one? pick uniformly across the genome?
  - accept the new locus with probability A(λ_old, λ_new)
- the Gibbs sampler is a special case of M-H: always accept the new proposal
- acceptance ensures the right stable distribution
  - accept the new proposal with probability A, otherwise stick with the current value

A(λ_old, λ_new) = min{ 1, [π(λ_new | Q, x) q(λ_new, λ_old)] / [π(λ_old | Q, x) q(λ_old, λ_new)] }

MCMC run for locus at 40 cM
[figure: trace of the sampled distance (cM) along the MCMC run, with frequency histogram]
Care & use of MCMC
- sample the chain for a long run; longer for more complicated likelihoods
- use diagnostic plots to assess mixing
- standard error of estimates
  - use the histogram of the posterior
  - compute the variance of the posterior -- just another summary
- studying the Markov chain
  - Monte Carlo error of the series (Geyer 1992): time-series estimate based on lagged auto-covariances
  - convergence diagnostics for proper mixing

4.4 bootstrapped variance estimates
- (re)sample (Y_i, X_i) with replacement
  - create a bootstrap sample: "new" data of size n
  - estimate the loci λ and effects θ
  - repeat this N times
- construct summaries of these: mean, variance, median, percentiles
- construct 95% confidence intervals for λ and θ
  - (2.5%ile, 97.5%ile)
  - order the estimates, pick numbers 0.025N and 0.975N
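The percentile-bootstrap recipe above can be sketched as follows; a simple mean stands in for the QTL location/effect estimator, and the data and function name are illustrative:

```python
import random

def bootstrap_ci(data, estimator, n_boot=2000, level=0.95, seed=0):
    """Percentile bootstrap: resample the data with replacement,
    re-estimate n_boot times, then take the (1-level)/2 and (1+level)/2
    quantiles of the ordered estimates (e.g. 2.5%ile and 97.5%ile)."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(estimator([data[rng.randrange(n)] for _ in range(n)])
                   for _ in range(n_boot))
    lo = stats[int((1 - level) / 2 * n_boot)]
    hi = stats[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi

# illustrative: CI for a mean effect from noisy phenotypes around 1.0
random.seed(0)
y = [1.0 + random.gauss(0, 0.5) for _ in range(100)]
lo, hi = bootstrap_ci(y, lambda d: sum(d) / len(d))
```

For QTL mapping, `estimator` would rerun the genome scan on the resampled (Y_i, X_i) pairs and return the peak location or effect.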
4.5 advantages & shortcomings of IM
- advantages over single marker analysis
  - can infer position and effect of a QTL
  - estimated locations & effects almost unbiased if only one segregating QTL per chromosome
  - requires fewer individuals for detection of a QTL

shortcomings of IM
- not an interval test
  - cannot say whether or not a QTL is in an interval
- not independent of effects of QTL outside the interval
  - can give false positives due to linkage: high LOD score due to a nearby QTL (less of a problem for unlinked QTL)
  - can detect a "ghost QTL": higher peak between two linked QTL
  - estimated position and effect are biased
4.6 Haley-Knott regression approximation
- the likelihood mixes over the missing genotypes: normal data, mixture of normals
- approximate the mixture by one normal: just estimate a mean and variance
- advantages
  - works well for closely spaced markers
  - mean is correct; can exploit flanking markers for missing data
  - calculations are easy and fast (PLABQTL)
- disadvantages
  - variance depends on marker genotypes and spacings
  - approximation errors accumulate for multiple QTL

Haley-Knott regression idea
- replace the missing genotypes by their expected values:
  P_i = E(Q_i | X_i, λ) = sum_Q Q pr(Q | X_i, λ)
- fit a regression model (e.g. additive gene action):
  Y_i = μ + αP_i + e_i, i = 1, ..., n
- assume constant variance
  - correct mean: E(Y_i | X_i, θ, λ) = μ + αP_i
  - wrong variance: V(Y_i | X_i, θ, λ) = σ² + α² Var(Q_i | X_i, λ), which varies with i
Haley-Knott and EM
- both use an expected value of the genotypes
  - HK: P_i = E(Q_i | X_i, λ), the prior expectation
  - EM: P_i = E(Q_i | Y_i, X_i, θ, λ), the posterior expectation
- both solve regression problems for the effects
- the difference is in iteration
  - HK is the first step of the iteration
  - EM iterates the E-step and M-step to convergence
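Haley-Knott regression can be sketched with the additive −1/0/1 genotype coding from the earlier slide: compute P_i from the prior genotype probabilities, then fit Y on P by ordinary least squares. The genotype probabilities, phenotypes, and function name are illustrative:

```python
def haley_knott(y, prior_probs, codes=(-1, 0, 1)):
    """Haley-Knott regression: replace each missing genotype by its prior
    expectation P_i = sum_Q Q * pr(Q | X_i, lambda), then fit
    Y = mu + a*P + e by simple least squares. Returns (mu_hat, a_hat)."""
    P = [sum(c * p for c, p in zip(codes, probs)) for probs in prior_probs]
    n = len(y)
    pbar, ybar = sum(P) / n, sum(y) / n
    sxy = sum((pi - pbar) * (yi - ybar) for pi, yi in zip(P, y))
    sxx = sum((pi - pbar) ** 2 for pi in P)
    a = sxy / sxx
    return ybar - a * pbar, a

# illustrative data: genotypes nearly known, additive effect near 2
probs = [[0.9, 0.05, 0.05], [0.05, 0.05, 0.9],
         [0.9, 0.05, 0.05], [0.05, 0.05, 0.9]]
y = [-2.0, 2.0, -1.8, 2.2]
mu_hat, a_hat = haley_knott(y, probs)
```

Swapping the prior probabilities for the E-step posteriors pr(Q | Y_i, X_i, θ, λ) in the same regression is exactly the EM connection drawn above.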
More informationP R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /
Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons
More informationLecture 4 Hypothesis Testing
Lecture 4 Hypothess Testng We may wsh to test pror hypotheses about the coeffcents we estmate. We can use the estmates to test whether the data rejects our hypothess. An example mght be that we wsh to
More informationProperties of Least Squares
Week 3 3.1 Smple Lnear Regresson Model 3. Propertes of Least Squares Estmators Y Y β 1 + β X + u weekly famly expendtures X weekly famly ncome For a gven level of x, the expected level of food expendtures
More informationU-Pb Geochronology Practical: Background
U-Pb Geochronology Practcal: Background Basc Concepts: accuracy: measure of the dfference between an expermental measurement and the true value precson: measure of the reproducblty of the expermental result
More informationChapter Newton s Method
Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve
More informationChapter 4: Regression With One Regressor
Chapter 4: Regresson Wth One Regressor Copyrght 2011 Pearson Addson-Wesley. All rghts reserved. 1-1 Outlne 1. Fttng a lne to data 2. The ordnary least squares (OLS) lne/regresson 3. Measures of ft 4. Populaton
More informationTopic 5: Non-Linear Regression
Topc 5: Non-Lnear Regresson The models we ve worked wth so far have been lnear n the parameters. They ve been of the form: y = Xβ + ε Many models based on economc theory are actually non-lnear n the parameters.
More informationOn an Extension of Stochastic Approximation EM Algorithm for Incomplete Data Problems. Vahid Tadayon 1
On an Extenson of Stochastc Approxmaton EM Algorthm for Incomplete Data Problems Vahd Tadayon Abstract: The Stochastc Approxmaton EM (SAEM algorthm, a varant stochastc approxmaton of EM, s a versatle tool
More informationNUMERICAL DIFFERENTIATION
NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the
More informationx = , so that calculated
Stat 4, secton Sngle Factor ANOVA notes by Tm Plachowsk n chapter 8 we conducted hypothess tests n whch we compared a sngle sample s mean or proporton to some hypotheszed value Chapter 9 expanded ths to
More informationExpectation Maximization Mixture Models HMMs
-755 Machne Learnng for Sgnal Processng Mture Models HMMs Class 9. 2 Sep 200 Learnng Dstrbutons for Data Problem: Gven a collecton of eamples from some data, estmate ts dstrbuton Basc deas of Mamum Lelhood
More informationLogistic Regression. CAP 5610: Machine Learning Instructor: Guo-Jun QI
Logstc Regresson CAP 561: achne Learnng Instructor: Guo-Jun QI Bayes Classfer: A Generatve model odel the posteror dstrbuton P(Y X) Estmate class-condtonal dstrbuton P(X Y) for each Y Estmate pror dstrbuton
More informationGaussian Mixture Models
Lab Gaussan Mxture Models Lab Objectve: Understand the formulaton of Gaussan Mxture Models (GMMs) and how to estmate GMM parameters. You ve already seen GMMs as the observaton dstrbuton n certan contnuous
More informationBAYESIAN CURVE FITTING USING PIECEWISE POLYNOMIALS. Dariusz Biskup
BAYESIAN CURVE FITTING USING PIECEWISE POLYNOMIALS Darusz Bskup 1. Introducton The paper presents a nonparaetrc procedure for estaton of an unknown functon f n the regresson odel y = f x + ε = N. (1) (
More informationModeling and Simulation NETW 707
Modelng and Smulaton NETW 707 Lecture 5 Tests for Random Numbers Course Instructor: Dr.-Ing. Magge Mashaly magge.ezzat@guc.edu.eg C3.220 1 Propertes of Random Numbers Random Number Generators (RNGs) must
More informationStatistical Inference. 2.3 Summary Statistics Measures of Center and Spread. parameters ( population characteristics )
Ismor Fscher, 8//008 Stat 54 / -8.3 Summary Statstcs Measures of Center and Spread Dstrbuton of dscrete contnuous POPULATION Random Varable, numercal True center =??? True spread =???? parameters ( populaton
More informationANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U)
Econ 413 Exam 13 H ANSWERS Settet er nndelt 9 deloppgaver, A,B,C, som alle anbefales å telle lkt for å gøre det ltt lettere å stå. Svar er gtt . Unfortunately, there s a prntng error n the hnt of
More informationRelevance Vector Machines Explained
October 19, 2010 Relevance Vector Machnes Explaned Trstan Fletcher www.cs.ucl.ac.uk/staff/t.fletcher/ Introducton Ths document has been wrtten n an attempt to make Tppng s [1] Relevance Vector Machnes
More informationCathy Walker March 5, 2010
Cathy Walker March 5, 010 Part : Problem Set 1. What s the level of measurement for the followng varables? a) SAT scores b) Number of tests or quzzes n statstcal course c) Acres of land devoted to corn
More informationLecture 6 More on Complete Randomized Block Design (RBD)
Lecture 6 More on Complete Randomzed Block Desgn (RBD) Multple test Multple test The multple comparsons or multple testng problem occurs when one consders a set of statstcal nferences smultaneously. For
More informationComposite Hypotheses testing
Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter
More informationSpace of ML Problems. CSE 473: Artificial Intelligence. Parameter Estimation and Bayesian Networks. Learning Topics
/7/7 CSE 73: Artfcal Intellgence Bayesan - Learnng Deter Fox Sldes adapted from Dan Weld, Jack Breese, Dan Klen, Daphne Koller, Stuart Russell, Andrew Moore & Luke Zettlemoyer What s Beng Learned? Space
More informationFeature Selection: Part 1
CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?
More information/ n ) are compared. The logic is: if the two
STAT C141, Sprng 2005 Lecture 13 Two sample tests One sample tests: examples of goodness of ft tests, where we are testng whether our data supports predctons. Two sample tests: called as tests of ndependence
More informationClassification as a Regression Problem
Target varable y C C, C,, ; Classfcaton as a Regresson Problem { }, 3 L C K To treat classfcaton as a regresson problem we should transform the target y nto numercal values; The choce of numercal class
More informationA be a probability space. A random vector
Statstcs 1: Probablty Theory II 8 1 JOINT AND MARGINAL DISTRIBUTIONS In Probablty Theory I we formulate the concept of a (real) random varable and descrbe the probablstc behavor of ths random varable by
More informationLecture 4: Universal Hash Functions/Streaming Cont d
CSE 5: Desgn and Analyss of Algorthms I Sprng 06 Lecture 4: Unversal Hash Functons/Streamng Cont d Lecturer: Shayan Oves Gharan Aprl 6th Scrbe: Jacob Schreber Dsclamer: These notes have not been subjected
More informationMaximum Likelihood Estimation of Binary Dependent Variables Models: Probit and Logit. 1. General Formulation of Binary Dependent Variables Models
ECO 452 -- OE 4: Probt and Logt Models ECO 452 -- OE 4 Maxmum Lkelhood Estmaton of Bnary Dependent Varables Models: Probt and Logt hs note demonstrates how to formulate bnary dependent varables models
More informationRockefeller College University at Albany
Rockefeller College Unverst at Alban PAD 705 Handout: Maxmum Lkelhood Estmaton Orgnal b Davd A. Wse John F. Kenned School of Government, Harvard Unverst Modfcatons b R. Karl Rethemeer Up to ths pont n
More informationRetrieval Models: Language models
CS-590I Informaton Retreval Retreval Models: Language models Luo S Department of Computer Scence Purdue Unversty Introducton to language model Ungram language model Document language model estmaton Maxmum
More informationHere is the rationale: If X and y have a strong positive relationship to one another, then ( x x) will tend to be positive when ( y y)
Secton 1.5 Correlaton In the prevous sectons, we looked at regresson and the value r was a measurement of how much of the varaton n y can be attrbuted to the lnear relatonshp between y and x. In ths secton,
More informationStatistical analysis using matlab. HY 439 Presented by: George Fortetsanakis
Statstcal analyss usng matlab HY 439 Presented by: George Fortetsanaks Roadmap Probablty dstrbutons Statstcal estmaton Fttng data to probablty dstrbutons Contnuous dstrbutons Contnuous random varable X
More informationInformation Geometry of Gibbs Sampler
Informaton Geometry of Gbbs Sampler Kazuya Takabatake Neuroscence Research Insttute AIST Central 2, Umezono 1-1-1, Tsukuba JAPAN 305-8568 k.takabatake@ast.go.jp Abstract: - Ths paper shows some nformaton
More informationMultiple Choice. Choose the one that best completes the statement or answers the question.
ECON 56 Homework Multple Choce Choose the one that best completes the statement or answers the queston ) The probablty of an event A or B (Pr(A or B)) to occur equals a Pr(A) Pr(B) b Pr(A) + Pr(B) f A
More informationLecture 21: Numerical methods for pricing American type derivatives
Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)
More informationLecture Nov
Lecture 18 Nov 07 2008 Revew Clusterng Groupng smlar obects nto clusters Herarchcal clusterng Agglomeratve approach (HAC: teratvely merge smlar clusters Dfferent lnkage algorthms for computng dstances
More informationA New Method for Estimating Overdispersion. David Fletcher and Peter Green Department of Mathematics and Statistics
A New Method for Estmatng Overdsperson Davd Fletcher and Peter Green Department of Mathematcs and Statstcs Byron Morgan Insttute of Mathematcs, Statstcs and Actuaral Scence Unversty of Kent, England Overvew
More informationSection 8.3 Polar Form of Complex Numbers
80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the
More informationGoodness of fit and Wilks theorem
DRAFT 0.0 Glen Cowan 3 June, 2013 Goodness of ft and Wlks theorem Suppose we model data y wth a lkelhood L(µ) that depends on a set of N parameters µ = (µ 1,...,µ N ). Defne the statstc t µ ln L(µ) L(ˆµ),
More informationLinear Regression Analysis: Terminology and Notation
ECON 35* -- Secton : Basc Concepts of Regresson Analyss (Page ) Lnear Regresson Analyss: Termnology and Notaton Consder the generc verson of the smple (two-varable) lnear regresson model. It s represented
More informationNOVEL METHODS FOR INCREASING EFFICIENCY OF QUANTITATIVE TRAIT LOCUS MAPPING ZHIGANG GUO. M. S., Nanjing Agricultural University, 1998
NOVEL METHODS FOR INCREASING EFFICIENCY OF QUANTITATIVE TRAIT LOCUS MAPPING by ZHIGANG GUO M. S., Nanjng Agrcultural Unversty, 1998 AN ABSTRACT OF A DISSERTATION submtted n partal fulfllment of the requrements
More informationThe Gaussian classifier. Nuno Vasconcelos ECE Department, UCSD
he Gaussan classfer Nuno Vasconcelos ECE Department, UCSD Bayesan decson theory recall that we have state of the world X observatons g decson functon L[g,y] loss of predctng y wth g Bayes decson rule s
More information1 Binary Response Models
Bnary and Ordered Multnomal Response Models Dscrete qualtatve response models deal wth dscrete dependent varables. bnary: yes/no, partcpaton/non-partcpaton lnear probablty model LPM, probt or logt models
More information18.1 Introduction and Recap
CS787: Advanced Algorthms Scrbe: Pryananda Shenoy and Shjn Kong Lecturer: Shuch Chawla Topc: Streamng Algorthmscontnued) Date: 0/26/2007 We contnue talng about streamng algorthms n ths lecture, ncludng
More information2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification
E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton
More informationSingular Value Decomposition: Theory and Applications
Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real
More informationAnswers Problem Set 2 Chem 314A Williamsen Spring 2000
Answers Problem Set Chem 314A Wllamsen Sprng 000 1) Gve me the followng crtcal values from the statstcal tables. a) z-statstc,-sded test, 99.7% confdence lmt ±3 b) t-statstc (Case I), 1-sded test, 95%
More information