Likelihood Methods: A Companion to the NEPPSR analysis project. Colin Gay, Yale University

1 Likelihood Methods: A Companion to the NEPPSR analysis project

2 Outline We have arranged a hands-on mini-course on fitting techniques using the ROOT framework. To fit data, we often use the MINUIT package from CERN, which is a fitting engine for numerical minimization. It is built in to ROOT, or can be called stand-alone. Instead of using it as a black box, we thought we'd show how conceptually simple its operation really is. Quick review of some basics of probabilities; Maximum Likelihood basics; properties of the ML method, using specific examples. And YES, there will be a test (especially for repeat NEPPSR offenders).

3 Probability Basics Suppose X is a random variable. The probability of throwing X between x and x+dx is P(x)dx; P(x) is called the probability density function (pdf) for X, with ∫ P(x) dx = 1. The expectation value of a function f(x) over P(x) is E(f) = ∫ f(x) P(x) dx. The most common expectations we use are the first few moments of the pdf: E(1) = ∫ P(x) dx = 1 (normalization), E(x) = ∫ x P(x) dx = x̄ (mean), and E((x − x̄)²) = ∫ (x − x̄)² P(x) dx = σ² (variance). See Craig Blocker's talk for a more detailed introduction.

4 Probability Basics The conditional probability P(x|a) is the pdf for X, given that a is true. For example, P(x|d) = probability that our detector measures a particle passing a wire at distance x, given that the particle is truly at distance d. Or: P(m|m₀) = probability of measuring mass m given that the true mass is m₀. We use pdfs all the time in our Monte Carlos: we know the true values of masses, trajectories, etc., and turn them into finite samples of quantities reconstructed by our detector via these pdfs. Our job with real data is the inverse: given a finite sample of measurements of a quantity, to infer the pdf and the true value of the quantity.

5 Samples Let X have pdf P(x). A sample of size N is a set {xᵢ} of N throws of X. When we plot, e.g., the mass of all of our reconstructed top quarks, we are visualizing our sample pulled from the pdf P(m|m₀), where m₀ is the true top mass. Our job is, based on our finite sample, to estimate the true value m₀, and to quantify how certain our estimate is. For this, we need an estimator for m₀. Some estimators are better than others, e.g. "My estimator is 50 GeV, no matter what." This is an estimator, but not a very good one!

6 Estimators What properties would we like in our estimators? Consistent: as our sample size increases, we'd like our estimator to converge to the true value; such an estimator is called consistent. Efficient: there is a theoretical minimum for the variance of an estimator about the true value, given a sample of size N (called the Minimum Variance Bound); an estimator with variance equal to the MVB is efficient. Unbiased: an estimator whose expectation value (mean) is equal to the true value is called unbiased.

7 Maximum Likelihood Estimators Suppose we have a sample of N measurements of a variable m, and we know the pdf for m is P(m|m₀). However, we don't know m₀; in fact, it is the physics quantity we are interested in measuring. Form the likelihood L(m₀) = Πᵢ P(mᵢ|m₀). The Maximum Likelihood Estimator (MLE) m* for m₀ is the value of m₀ that maximizes the joint likelihood. MLEs are not always unbiased, but they are consistent and efficient. They are also asymptotically normal. Extracting the MLE for a quantity is called fitting the data.

8 Least-squares fit (review?) The first fit most of us learn is a least-squares or χ² fit. Put the data into histogram bins: centers xᵢ, values yᵢ with uncertainties σᵢ. Choose a function f(x|θ) which predicts the bin contents y as a function of the fit parameters θ. Form χ² = Σᵢ (f(xᵢ|θ) − yᵢ)² / σᵢ². The Least-Squares Principle states that the best estimates for the parameters θ are the ones which minimize χ².
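
As a concrete illustration (not from the original slides), here is a minimal C++ sketch of that χ² sum; the bin centers, contents, uncertainties, and the one-parameter model f are all hypothetical placeholders, and a real fit would hand the minimization to MINUIT.

#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical model: predicted bin content as a function of x and one parameter theta.
double f(double x, double theta) { return theta * std::exp(-x); }

// Chi-square of the model against binned data (centers x, contents y, uncertainties sigma).
double chi2(const std::vector<double>& x, const std::vector<double>& y,
            const std::vector<double>& sigma, double theta) {
    double sum = 0.0;
    for (size_t i = 0; i < x.size(); ++i) {
        double r = (f(x[i], theta) - y[i]) / sigma[i];
        sum += r * r;
    }
    return sum;
}

int main() {
    std::vector<double> x = {0.5, 1.5, 2.5}, y = {60.0, 22.0, 8.0}, s = {7.7, 4.7, 2.8};
    // Crude scan over theta; in practice MINUIT would do the minimization.
    for (double theta = 80.0; theta <= 120.0; theta += 10.0)
        std::printf("theta = %5.1f  chi2 = %6.2f\n", theta, chi2(x, y, s, theta));
    return 0;
}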

9 Binned vs Unbinned fits The LSQ fit is an example of a binned fit. A LSQ fit has the nice property that in addition to supplying the fit parameters, it also tells us how well the fit function approximates the data. Binned fits with few (or zero) events per bin are problematic: for a bin with zero contents the Gaussian approximation gives an uncertainty σ of 0, so its contribution to χ² is undefined. ROOT just ignores these bins => fit biased high. Likelihood fits give us the ability to deal with data in an unbinned way. Ideal for small statistics, or sparse data.

10 MLE Properties NEPPSR project I'll develop the concepts of the ML fit using an analytically solvable case. You will write a fitting program, fit to data supplied by Stephane, Kevin and John, and can compare to the analytic result. I'll use a common real-world case: fitting for the lifetime of a data sample of decay times.

11 Lifetime Likelihood We can write the probability density function for an exponential decay in two ways: P(t|τ) = (1/τ) e^(−t/τ) or P(t|Γ) = Γ e^(−Γt). These are properly normalized (more on this later): ∫₀^∞ P(t|τ) dt = 1. Construct the likelihood L(τ) = Πᵢ P(tᵢ|τ) from the N time measurements {tᵢ}, i = 1, ..., N.

12 Lifetime Likelihood Rather than maximizing the likelihood, we usually take the log. Numerically, the likelihood can get extremely small, resulting in precision issues; the log is better behaved. The log is monotonic, so maximizing log L is equivalent to maximizing L. Our main fitting package, MINUIT, finds minima, so we multiply by −1 and minimize: −log L(τ) = −Σᵢ log P(tᵢ|τ) = −Σᵢ log[(1/τ) e^(−tᵢ/τ)] = Σᵢ (log τ + tᵢ/τ) = N log τ + (1/τ) Σᵢ tᵢ.

13 Maximizing the Likelihood To find the minimum, we set the first derivative to 0: ∂(−log L)/∂τ = N/τ − (1/τ²) Σᵢ tᵢ = 0, giving τ* = (1/N) Σᵢ tᵢ = t̄ (i.e. the sample mean of the decay times). τ* is the ML estimator for the true lifetime.

14 Likelihood for a Lifetime

15 Numerical method While we can solve this system analytically, in general we must build up the likelihood numerically by looping over the events and adding up the log P: sum = 0; for (int i = 0; i < N; i++) { sum += log(tau) + t[i]/tau; } Find the minimum sum by scanning in tau. This allows very complicated probability functions to be handled in a straightforward way.
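
Fleshing that loop out into a small self-contained C++ program (a sketch only, not the NEPPSR exercise code; the sample size, true lifetime, and scan range are made up):

#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// -log L for an exponential decay: N*log(tau) + (1/tau) * sum of t_i
double negLogL(const std::vector<double>& t, double tau) {
    double sum = 0.0;
    for (double ti : t) sum += std::log(tau) + ti / tau;
    return sum;
}

int main() {
    // Toy data: 1000 decay times drawn from an exponential with true lifetime 1.5 (arbitrary units).
    std::mt19937 gen(12345);
    std::exponential_distribution<double> decay(1.0 / 1.5);
    std::vector<double> t(1000);
    for (double& ti : t) ti = decay(gen);

    // Scan tau and keep the minimum of -log L (MINUIT would do this minimization in practice).
    double bestTau = 0.0, bestNLL = 1e300;
    for (double tau = 0.5; tau <= 3.0; tau += 0.001) {
        double nll = negLogL(t, tau);
        if (nll < bestNLL) { bestNLL = nll; bestTau = tau; }
    }

    // The analytic MLE is just the sample mean -- compare.
    double mean = 0.0;
    for (double ti : t) mean += ti;
    mean /= t.size();
    std::printf("scan minimum: tau* = %.3f   analytic (sample mean): %.3f\n", bestTau, mean);
    return 0;
}

The scan and the sample mean should agree to within the scan step.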

16 What about the width? What if we considered the width Γ to be our unknown, rather than the lifetime τ? −log L(Γ) = −Σᵢ log(Γ e^(−Γtᵢ)) = Σᵢ (−log Γ + Γ tᵢ) = −N log Γ + Γ Σᵢ tᵢ. Setting ∂(−log L)/∂Γ = −N/Γ + Σᵢ tᵢ = 0 gives Γ* = N / Σᵢ tᵢ (i.e. Γ* = 1/τ*).

17 Unbiasedness? Let's calculate the expectation (mean) value of our estimator τ* = (1/N) Σᵢ tᵢ, given that the true lifetime is τ: E(τ*) = (1/N) Σᵢ E(tᵢ) = E(t) = ∫₀^∞ t (1/τ) e^(−t/τ) dt = [−t e^(−t/τ)]₀^∞ + ∫₀^∞ e^(−t/τ) dt = τ. The mean value of our fit result for τ, if we repeat the experiment many times, is the true value => τ* is unbiased.

18 Bias (continued) What about Γ? E(Γ*) = E(N / Σᵢ tᵢ) = ∫...∫ (N / Σᵢ tᵢ) P(t₁|Γ) ... P(t_N|Γ) dt₁ ... dt_N = [N/(N−1)] Γ ≠ Γ. Hence Γ* is a biased estimator. This sounds bad. However, fitting for lifetime or width gives the same answer: remember Γ* = 1/τ*. You can get a fine fit with either. It should be no surprise that E(1/x) ≠ 1/E(x), so that if the estimator x is unbiased, 1/x must be biased.
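
A quick toy Monte Carlo (not part of the slides; the sample size and true width are made up) makes the bias visible: averaged over many pseudo-experiments, Γ* = 1/t̄ comes out a factor of roughly N/(N−1) above the true width, while τ* = t̄ sits at the true lifetime.

#include <cstdio>
#include <random>

int main() {
    const int N = 5;              // events per pseudo-experiment (small N makes the bias obvious)
    const int nExp = 200000;      // number of pseudo-experiments
    const double gammaTrue = 2.0; // true width, so the true lifetime tau = 0.5

    std::mt19937 gen(42);
    std::exponential_distribution<double> decay(gammaTrue);

    double sumTauHat = 0.0, sumGammaHat = 0.0;
    for (int e = 0; e < nExp; ++e) {
        double tbar = 0.0;
        for (int i = 0; i < N; ++i) tbar += decay(gen);
        tbar /= N;
        sumTauHat   += tbar;        // tau*   = sample mean
        sumGammaHat += 1.0 / tbar;  // Gamma* = 1 / sample mean
    }
    std::printf("<tau*>   = %.4f  (true tau   = %.4f)\n", sumTauHat / nExp, 1.0 / gammaTrue);
    std::printf("<Gamma*> = %.4f  (true Gamma = %.4f, N/(N-1)*Gamma = %.4f)\n",
                sumGammaHat / nExp, gammaTrue, N / double(N - 1) * gammaTrue);
    return 0;
}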

19 MLE transformation invariance In fact, the result of the likelihood fit is invariant under parameter transformation: if f(x) is any function of the estimator x, then the MLE for f(x) satisfies (f(x))* = f(x*). That is, fitting for x and then applying the transformation f gives the same result as fitting for f(x). Thus there is more than one way to skin any cat.

20 Uncertainty on ML Estimator Taylor expand the log likelihood about the minimum: log L(θ) = log L(θ*) + (1/2)(θ − θ*)² [∂² log L/∂θ²]_(θ=θ*) + ... (the first-derivative term vanishes at θ*), so L(θ) ≈ L(θ*) exp(−(θ − θ*)²/2σ²). If we consider this a probability density for the true value of the parameter θ, we see it is a Gaussian, with mean θ* and variance σ² = V(θ) = −1 / [∂² log L/∂θ²]_(θ=θ*).

21 Meaning of Likelihood Depends on whether you are a Bayesian or a frequentist. While most particle physicists are professed frequentists, they also seem to me to be closet Bayesians at times. Let's be a bit more careful in notation. The likelihood is L({tᵢ}|τ) = Πᵢ P(tᵢ|τ). Bayes: We'd like to invert the probability, and consider this as the probability density for the true value τ given the observations tᵢ. In general P(A|B) ≠ P(B|A). In fact, Bayes' theorem is: P(A|B) = P(B|A) P(A) / P(B) (which follows from P(A|B) P(B) = P(B|A) P(A)).

22 Meaning of Likelihood Thus P(τ|data) = P(data|τ) P(τ) / P(data), where P(data) is some constant normalization. If we take the so-called prior probability function P(τ) to be flat, then the likelihood is the probability distribution for the true value of τ: P(τ|data) ∝ P(data|τ) = L(τ).

23 ML Uncertainties Writing this in a more suggestive way: −log L(θ) = −log L(θ*) + (θ − θ*)²/(2σ²). Hence the values of the likelihood for the true value being 1, 2, ..., n σ from the central value are: −log L(θ* ± σ) = −log L(θ*) + 1/2, −log L(θ* ± 2σ) = −log L(θ*) + 2, −log L(θ* ± nσ) = −log L(θ*) + n²/2. For χ² fits, χ² = Σᵢ (f(xᵢ|θ) − yᵢ)²/σᵢ², and Δχ² = 1, 4, 9 corresponds to 1, 2, 3 σ excursions. Same for ML fits, except for the factor of 1/2.
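
In code, the ±1σ interval can be read off a scan of −log L by finding where it rises 0.5 above its minimum. Here is a sketch reusing the toy lifetime sample idea from above (sample size, seed, and step size are arbitrary), compared with the analytic τ*/√N that appears two slides later:

#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

double negLogL(const std::vector<double>& t, double tau) {
    double sum = 0.0;
    for (double ti : t) sum += std::log(tau) + ti / tau;
    return sum;
}

int main() {
    // Toy exponential sample, true lifetime 1.5 (arbitrary units).
    std::mt19937 gen(7);
    std::exponential_distribution<double> decay(1.0 / 1.5);
    std::vector<double> t(400);
    for (double& ti : t) ti = decay(gen);

    // The MLE is the sample mean; evaluate the NLL there.
    double tauHat = 0.0;
    for (double ti : t) tauHat += ti;
    tauHat /= t.size();
    const double nllMin = negLogL(t, tauHat);

    // Walk outward until -log L has risen by 0.5 (the 1-sigma points).
    auto cross = [&](double step) {
        double tau = tauHat;
        while (negLogL(t, tau) - nllMin < 0.5) tau += step;
        return tau;
    };
    double lo = cross(-1e-4), hi = cross(+1e-4);
    std::printf("tau* = %.4f  +%.4f -%.4f  (analytic tau*/sqrt(N) = %.4f)\n",
                tauHat, hi - tauHat, tauHat - lo, tauHat / std::sqrt((double)t.size()));
    return 0;
}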

24 Likelihood Uncertainty

25 Estimated Variance of Lifetime fit In our case of the lifetime fit: −log L = N log τ + (1/τ) Σᵢ tᵢ. The first derivative ∂(−log L)/∂τ = N/τ − (1/τ²) Σᵢ tᵢ vanishes at τ = τ*. The second derivative is ∂²(−log L)/∂τ² = −N/τ² + (2/τ³) Σᵢ tᵢ, which at τ = τ* (where Σᵢ tᵢ = N τ*) equals −N/τ*² + 2N/τ*² = N/τ*². Hence σ² = 1 / [∂²(−log L)/∂τ²]_(τ=τ*) = τ*²/N, i.e. σ_τ = τ*/√N.

26 Lifetime fit Uncertainties Note that samples with fitted lifetimes that fluctuate low ALSO have the smallest uncertainty! This can easily (and did/does) cause world-averages of many measurements to be biased low, as low values get the largest weight. It is more correct to combine the likelihood curves to average many experiments. This is now done behind the scenes for many measurements: experts supply a sampling of the likelihood curve for their fit, the −log L curves are ADDED, the minimum of the summed −log L is found, and Δ(−log L) = 1/2 gives the combined uncertainty.

27 Goodness of Fit Consider the two data sets below, made into histograms for visualization. Both result in the same ML estimator for the lifetime. Surely, data set 1 is more likely than data set 2, right? Surely, since it is more exponential, the value of the likelihood function for the 1st should be larger than for the 2nd, right? Each event's probability should be higher, resulting in a net larger likelihood.

28 Goodness of Fit Unfortunately, this is wrong. Recall: −log L = N log τ + (1/τ) Σᵢ tᵢ and τ* = t̄ = (1/N) Σᵢ tᵢ, so −log L_min = N log τ* + (1/τ*) Σᵢ tᵢ = N log t̄ + N = N (log t̄ + 1). ANY sample with the same t̄ produces the SAME estimate τ* with the SAME value for the −log L of N (log t̄ + 1). The two previous fits are equally likely! This is a weakness of the ML method: it doesn't naturally supply a goodness-of-fit metric.

29 Moral The ML method will not save you from yourself! A sanity check of results is essential! "How likely is likely" needs care: e.g. from a self-help mathematics website: "Definition of Unlikely Event: The event that may not happen is an unlikely event. In other words, an unlikely event is an event that is not likely to happen."

30 Likelihood Normalization You must be vigilant that your likelihood is properly normalized (or at least that its normalization doesn't change during the fit). The easiest way is to normalize each building-block probability distribution. Ignoring this can cause real problems: consider a ML fit with 10,000 data events, and suppose the normalization of the underlying pdfs drifts from 1 to 1.0001. Then log L = Σᵢ log[1.0001 · P(tᵢ)] = Σᵢ [log 1.0001 + log P(tᵢ)] ≈ Σᵢ [0.0001 + log P(tᵢ)] = 1.0 + Σᵢ log P(tᵢ), i.e. the whole log L shifts by a full unit. More than a 1 sigma change!
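
A two-line sketch of that arithmetic (the 10,000 events and the 1.0001 drift are just the example numbers above): a constant factor c multiplying every per-event pdf shifts −log L by N·log c, regardless of the data or parameter values.

#include <cmath>
#include <cstdio>

int main() {
    const int N = 10000;     // number of events in the fit
    const double c = 1.0001; // drift in the pdf normalization (1 -> 1.0001)

    // Each event's log P picks up log(c), so -log L shifts by N*log(c),
    // independent of the data or the fitted parameter.
    double shift = N * std::log(c);
    std::printf("shift in -log L = %.4f units (1 sigma corresponds to 0.5)\n", shift);
    return 0;
}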

31 ML Fit with Constraint It is easy to add external constraints on fit parameters. Let's assume a Gaussian uncertainty on the constraint, e.g. add in the world-average knowledge as given by the PDG: L = P(τ|τ_PDG, σ_PDG) · Πᵢ P(tᵢ|τ), so −log L = −log P(τ|τ_PDG, σ_PDG) − Σᵢ log P(tᵢ|τ) = −log[(1/(√(2π) σ_PDG)) e^(−(τ − τ_PDG)²/(2σ_PDG²))] − Σᵢ log P(tᵢ|τ) = (τ − τ_PDG)²/(2σ_PDG²) [χ² penalty] + log(√(2π) σ_PDG) [constant] − Σᵢ log P(tᵢ|τ) [original]. Wandering by 1 sigma from the constraint costs ½ unit of −log L (= 1 sigma).
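
A minimal sketch of that constrained −log L in C++ (the constraint value, its uncertainty, and the toy decay times are hypothetical placeholders; the constant log(√(2π) σ_PDG) term is dropped since it doesn't depend on τ):

#include <cmath>
#include <cstdio>
#include <vector>

// -log L for the lifetime fit with a Gaussian (PDG-style) constraint on tau.
double negLogLConstrained(const std::vector<double>& t, double tau,
                          double tauPDG, double sigmaPDG) {
    double nll = 0.0;
    for (double ti : t) nll += std::log(tau) + ti / tau;  // original lifetime NLL
    double pull = (tau - tauPDG) / sigmaPDG;
    nll += 0.5 * pull * pull;                             // chi^2/2 penalty: 1 sigma costs 0.5
    return nll;
}

int main() {
    std::vector<double> t = {0.3, 1.2, 0.7, 2.1, 0.9};    // placeholder decay times
    double tauPDG = 1.0, sigmaPDG = 0.1;                  // hypothetical world-average constraint
    for (double tau = 0.8; tau <= 1.3; tau += 0.1)
        std::printf("tau = %.1f  -logL = %.3f\n", tau, negLogLConstrained(t, tau, tauPDG, sigmaPDG));
    return 0;
}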

32 Lifetime with Imperfect Detector All detectors have non-zero time resolution, so let's add this effect in to our fit. Assume the detector has a Gaussian resolution function, with mean = 0 (i.e. unbiased) and width σ: P(t|τ) = Exp(t'|τ) ⊗ G(t − t'|0, σ) = (1/(√(2π) σ τ)) ∫₀^∞ e^(−t'/τ) e^(−(t − t')²/(2σ²)) dt'. Consider the exponentials. The combined exponent is −t'/τ − (t − t')²/(2σ²) = −(1/(2σ²)) [t'² − 2t t' + t² + (2σ²/τ) t'] = −(1/(2σ²)) [t' − (t − σ²/τ)]² − t/τ + σ²/(2τ²), i.e. a Gaussian in t' centred at t − σ²/τ times a factor independent of t'.

33 Lifetime with Imperfections The PDF becomes P(t|τ) = (1/(√(2π) σ τ)) e^(σ²/(2τ²)) e^(−t/τ) ∫₀^∞ exp(−[t' − (t − σ²/τ)]²/(2σ²)) dt'. The remaining integral is a Gaussian integral with a lower cutoff at t' = 0, which gives an error function: P(t|τ) = (1/(2τ)) e^(σ²/(2τ²)) e^(−t/τ) erfc[(σ/τ − t/σ)/√2].
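
A direct transcription of that result into C++ using the standard-library std::erfc (a sketch; the lifetime and resolution values are arbitrary):

#include <cmath>
#include <cstdio>

// Exponential decay pdf convolved with a Gaussian resolution of width sigma
// (mean 0), following the erfc form derived on the slide.
double smearedExp(double t, double tau, double sigma) {
    double arg = (sigma / tau - t / sigma) / std::sqrt(2.0);
    return 0.5 / tau * std::exp(0.5 * sigma * sigma / (tau * tau))
                     * std::exp(-t / tau) * std::erfc(arg);
}

int main() {
    const double tau = 1.5, sigma = 0.3; // hypothetical lifetime and resolution
    for (double t = -1.0; t <= 4.0; t += 0.5)
        std::printf("t = %4.1f  P(t|tau,sigma) = %.5f\n", t, smearedExp(t, tau, sigma));
    return 0;
}

Note that the smeared pdf is non-zero for t < 0, as it must be once resolution is included.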

34 Effect of Worsening Resolution P(t|τ) = (1/(2τ)) e^(σ²/(2τ²)) e^(−t/τ) erfc[(σ/τ − t/σ)/√2]. As σ grows, the pdf looks less like an exponential and more like the Gaussian resolution function.

35 Lifetime with Cutoff Suppose we are measuring the ½ life of some radioactive material, but we only have T days to run our experiment. Our probability density for observing a decay must cut off at T days, affecting the normalization: P(t|τ) = (1/τ) e^(−t/τ) / (1 − e^(−T/τ)) for 0 ≤ t ≤ T. Then −log L(τ) = −Σᵢ log[(1/τ) e^(−tᵢ/τ) / (1 − e^(−T/τ))] = Σᵢ [log τ + tᵢ/τ + log(1 − e^(−T/τ))]. Now solve numerically. Note that the normalization changes with τ.
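
A sketch of that truncated-exponential −log L in C++ (the run length T and the handful of toy decay times are placeholders); the key point is that the τ-dependent normalization term is added once per event:

#include <cmath>
#include <cstdio>
#include <vector>

// -log L for an exponential decay observed only in 0 <= t <= T.
// The tau-dependent normalization 1/(1 - exp(-T/tau)) must be kept in the pdf.
double negLogLTruncated(const std::vector<double>& t, double tau, double T) {
    double nll = 0.0;
    double norm = std::log(1.0 - std::exp(-T / tau)); // changes with tau!
    for (double ti : t) nll += std::log(tau) + ti / tau + norm;
    return nll;
}

int main() {
    const double T = 3.0;                               // length of the run, same units as t
    std::vector<double> t = {0.2, 1.1, 0.6, 2.4, 0.9};  // placeholder decay times, all < T
    for (double tau = 0.5; tau <= 2.5; tau += 0.5)
        std::printf("tau = %.1f  -logL = %.3f\n", tau, negLogLTruncated(t, tau, T));
    return 0;
}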

36 Lifetime with Background Suppose we have a prompt (zero lifetime) background in our sample (measured with resolution σ). We can easily handle this by making our per-event probability P(t|τ, f) = f · [(1/τ) e^(−t/τ) ⊗ G(t|0, σ)] + (1 − f) · G(t|0, σ), where f = fraction of signal and G = Gaussian. Form −log L as usual, and minimize with respect to both τ and f.
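
A sketch of that per-event probability and its −log L (the signal fraction, resolution, and toy times are placeholders; in a real fit MINUIT would vary both τ and f simultaneously):

#include <cmath>
#include <cstdio>
#include <vector>

// Smeared exponential signal pdf (same erfc form as on the previous slides).
double smearedExp(double t, double tau, double sigma) {
    double arg = (sigma / tau - t / sigma) / std::sqrt(2.0);
    return 0.5 / tau * std::exp(0.5 * sigma * sigma / (tau * tau))
                     * std::exp(-t / tau) * std::erfc(arg);
}

// Prompt background: Gaussian centred at t = 0 with the detector resolution.
double gauss(double t, double sigma) {
    const double PI = 3.14159265358979323846;
    return std::exp(-0.5 * t * t / (sigma * sigma)) / (std::sqrt(2.0 * PI) * sigma);
}

// Per-event probability: signal fraction f times the smeared exponential,
// plus (1 - f) times the prompt Gaussian.
double pdf(double t, double tau, double f, double sigma) {
    return f * smearedExp(t, tau, sigma) + (1.0 - f) * gauss(t, sigma);
}

// -log L over the sample, to be minimized with respect to both tau and f.
double negLogL(const std::vector<double>& t, double tau, double f, double sigma) {
    double nll = 0.0;
    for (double ti : t) nll -= std::log(pdf(ti, tau, f, sigma));
    return nll;
}

int main() {
    const double sigma = 0.2;                                  // known resolution (hypothetical)
    std::vector<double> t = {-0.1, 0.05, 0.8, 1.7, 0.3, 2.2};  // placeholder decay times
    for (double f = 0.3; f <= 0.9; f += 0.3)
        std::printf("f = %.1f  -logL(tau=1.0) = %.3f\n", f, negLogL(t, 1.0, f, sigma));
    return 0;
}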

37 Multiple variables The Maximum Likelihood fit easily handles multiple fit variables: simply multiply in the probability densities for each new variable. The least-squares fit has problems with such fits: as we add more and more dimensions, binned methods encounter trouble, since the data get spread thin over a large number of histogram bins, resulting in few entries (or zero) per bin => problematic for the fit. e.g. Adding the reconstructed mass m to the decay time t as fit variables, to distinguish signal from background: P(t, m|τ, f, m_B) = f · {[(1/τ) e^(−t/τ) ⊗ G(t|0, σ)] · G(m|m_B, σ_m)} + (1 − f) · {G(t|0, σ) · (a m + b)}, i.e. a Gaussian mass peak for the signal and a linear mass shape for the background.

38 Summary The Maximum Likelihood fit is a powerful, convenient technique for estimating parameters from finite samples. It is unbinned, so small statistics and sparse data are ok, and it is the best choice for many-parameter fits. For large N it gives an unbiased value, converges to the true value, and has the minimum uncertainty possible. Constant normalization is critical. An auxiliary goodness of fit is required. Visualization can be difficult. Most serious fits you do in your career will be Maximum Likelihood fits. Stephane will talk about the hands-on part of the project.
