On Centralized Composite Detection with Distributed Sensors


Cuichun Xu and Steven Kay

Abstract: For composite hypothesis testing, the generalized likelihood ratio test (GLRT) and the Bayesian approach are two widely used methods. This paper investigates the two methods for signal detection of a known waveform and unknown amplitude with distributed sensors. It is first proved that the performance of the GLRT can be poor and hence improved for this problem, and then an approximated Bayesian detector is proposed. Compared with the exact Bayesian approach, the proposed method always has a closed form and hence is easy to implement. Computer simulation results show that the proposed method has comparable performance to the exact Bayesian approach.

I. INTRODUCTION

Detection with distributed sensors has been studied for nearly three decades [1] [2] [3]. With respect to different data assumptions, it can be categorized into centralized detection and decentralized detection. In centralized detection, it is assumed that all data from all local sensors are available for processing. In decentralized detection, only compressed data or local decisions from the local sensors are communicated to a central processor, where a central decision is made. Since centralized detection can largely resort to classical detection theory, many works focus on decentralized detection [1] [2] [3]. However, some problems remain for centralized detection, especially when there are unknown parameters in the hypotheses, i.e. composite detection. In [4], a minimax constant false alarm rate (CFAR) centralized detector is proposed for composite detection. However, it assumes that the unknown parameters take discrete values and the detection involves a large amount of computation. In [3], many distributed detection techniques are reviewed, with a focus on the locally optimum distributed detectors, which are optimal only when the signal is weak. Other common composite detection methods include the generalized likelihood ratio test (GLRT) and the Bayesian approach. Because of its ease of implementation, the GLRT is widely used and usually has satisfactory performance. However, its optimality is hard to establish, even though some attempts have been made based on different criteria [8] [9].

On the other hand, the Bayesian approach is optimal if the assumed prior probabilities and prior probability density functions (PDFs) of the unknown parameters are true. However, in most applications the true priors and true prior PDFs are unknown and hard to assume. Furthermore, the integration involved in the calculation of the marginal PDF may not be easy to evaluate [12].

In this paper, we examine the important special case of independent sensors (conditioned on each hypothesis). In particular, we focus on situations where, even when a signal is present, only part of the sensors receive the signal. It is shown in Section III that in this case the GLRT is not optimal. Some improvements of the GLRT are also discussed. In Section IV, an approximated Bayesian detector is proposed based on vague prior PDFs, which do not need a specific form. Some computer simulation results are shown in Section V, and finally Section VI offers conclusions and further discussion.

II. STATEMENT OF THE PROBLEM

To simplify the mathematical exposition, we assume the sensors are ordered with respect to their importance. That is, if $i \le j$ and sensor $j$ receives the signal, then sensor $i$ must also receive the signal. This order of importance may be due to the physical locations of the sensors. The case when the sensors are not ordered is discussed in Section VI. We assume that there are $M$ independent sensors or channels. The detection problem is stated as

$H_0 : x_i[n] = w_i[n]$, $\quad n = 0, 1, \ldots, N-1$
$H_1 : x_i[n] = A_i s[n] + w_i[n]$, $\quad n = 0, 1, \ldots, N-1$, $\quad i = 1, \ldots, M$   (1)

where $A$ is the $M \times 1$ unknown amplitude vector, with possibly different elements, $s[n]$ is a known waveform, and $w[n]$ is a Gaussian random vector. Each element of $w[n]$ is white both in the time domain and in the space domain, i.e.

$E[w_i[n] w_i[m]] = \sigma^2$ for $n = m$ and $0$ for $n \ne m$, for any $1 \le i \le M$

and

$E[w_i[n] w_j[n]] = \sigma^2$ for $i = j$ and $0$ for $i \ne j$, for any $0 \le n \le N-1$

where $w_i[n]$ denotes the $i$th element of $w[n]$ and $E[\cdot]$ denotes the expectation operator.
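As a concrete illustration of the data model (1) (not part of the original development), a short Python sketch that generates one realization is given below; the parameter values, the use of numpy, and the function name are assumptions made purely for illustration.

```python
import numpy as np

def generate_data(A, s, sigma2, L, rng, signal_present=True):
    """One realization of model (1): an M x N data matrix.

    Under H1 only the first L channels receive A_i * s[n]; every channel
    contains noise that is white in time and space with variance sigma2.
    """
    M, N = len(A), len(s)
    X = np.sqrt(sigma2) * rng.standard_normal((M, N))   # w_i[n]
    if signal_present:
        X[:L] += np.outer(A[:L], s)                     # A_i * s[n] on the first L channels
    return X

# Illustrative values only: M = 5 sensors, N = 100 samples, L = 2 active sensors.
rng = np.random.default_rng(0)
s = np.ones(100)                                        # known waveform
A = np.array([2.0, 2.0, 0.0, 0.0, 0.0])
X = generate_data(A, s, sigma2=1.0, L=2, rng=rng)
```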

Since the sensors are ordered, under $H_1$ only the first $L$ elements of $A$ are nonzero. $L$ is referred to as the true model order under $H_1$. Both $L$ and the values of the first $L$ elements of $A$ are unknown. If we assume there are $i$ nonzero elements in $A$, the corresponding model of (1) is called model $i$ and is denoted as $M_i$. Under model $M_i$, $A$ is denoted as $A_i$ and

$A_i = [A_1, A_2, \ldots, A_i, 0, 0, \ldots, 0]^T$, $\quad 1 \le i \le M$

Clearly, under $H_0$ the data come from model $M_0$ and under $H_1$ the data come from model $M_i$, $1 \le i \le M$. With this modeling, the detection problem can equivalently be stated as assigning the given data to $M_0$ ($H_0$) or to one of $M_1, M_2, \ldots, M_M$ ($H_1$).

III. GENERALIZED LIKELIHOOD RATIO TEST

A. Performance of the GLRT

Firstly, suppose the variance $\sigma^2$ is known. Since $L$ and $A$ are unknown, the GLRT decides $H_1$ if

$L_G(X) = \max_i \frac{p(X; \hat{A}_i, M_i)}{p(X; M_0)} > \lambda$

where $X = [x[0], x[1], \ldots, x[N-1]]^T$ denotes the whole data set and $p(X; \hat{A}_i, M_i)$ is the probability density function (PDF) under $M_i$ parameterized by the maximum likelihood estimator (MLE) $\hat{A}_i$, which is estimated assuming $M_i$ is true. Since, if $i \le j$, $M_i$ can be thought of as a special case of $M_j$, it is readily shown that the GLRT will always implement the test statistic based on the maximum model, i.e. [5]

$L_G(X) = \frac{p(X; \hat{A}_M, M_M)}{p(X; M_0)} > \lambda$

or equivalently

$2 \ln L_G(X) = 2 \ln \frac{p(X; \hat{A}_M, M_M)}{p(X; M_0)} > \lambda'$   (2)

The test statistic of (2) has the PDF [5]

$2 \ln L_G(X) \sim \chi^2_M$ under $H_0$, and $2 \ln L_G(X) \sim \chi'^2_M(\lambda)$ under $H_1$   (3)

where $\chi^2_M$ denotes the chi-squared distribution with $M$ degrees of freedom and $\chi'^2_M(\lambda)$ denotes the noncentral chi-squared distribution with $M$ degrees of freedom and noncentrality parameter $\lambda$. The noncentrality parameter is given by

$\lambda = \frac{\varepsilon \sum_{i=1}^{M} A_i^2}{\sigma^2}$

where $\varepsilon = \sum_{n=0}^{N-1} s^2[n]$ is the signal waveform energy. Since only the first $L$ elements of $A$ are nonzero, we have

$\lambda = \frac{\varepsilon \sum_{i=1}^{L} A_i^2}{\sigma^2}$
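For the linear model (1) with known $\sigma^2$, the statistics $2 \ln L_{G_i}(X)$ have a simple closed form; the sketch below (the vectorized form and function name are assumptions for illustration) computes them for all nested models, with the value for $i = M$ being the GLRT statistic of (2).

```python
import numpy as np

def glrt_statistics(X, s, sigma2):
    """Return 2*ln L_{G_i}(X) for i = 1,...,M (known noise variance).

    For model (1) the MLE is A_k_hat = sum_n x_k[n]s[n] / eps, and the
    per-channel contribution to 2*ln L_{G_i} is (sum_n x_k[n]s[n])^2 / (sigma2*eps),
    so 2*ln L_{G_i} is the cumulative sum over the first i channels.
    """
    eps = np.sum(s ** 2)              # signal waveform energy
    corr = X @ s                      # sum_n x_k[n] s[n] for each channel k
    return np.cumsum(corr ** 2 / (sigma2 * eps))

# The GLRT of (2) uses the largest model:
# two_ln_LG_full = glrt_statistics(X, s, sigma2)[-1]
```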

With the PDF of the GLRT test statistic given in (3), we now show that the performance of the GLRT is generally not optimal and can be improved. Suppose $L < M$ and let $K$ be an integer such that $L \le K < M$. If there is a generalized likelihood ratio test based on $M_K$, which decides $H_1$ if

$2 \ln L_{G_K}(X) = 2 \ln \frac{p(X; \hat{A}_K, M_K)}{p(X; M_0)} > \lambda'$   (4)

the PDF of this test statistic is given by

$2 \ln L_{G_K}(X) \sim \chi^2_K$ under $H_0$, and $2 \ln L_{G_K}(X) \sim \chi'^2_K(\lambda)$ under $H_1$   (5)

We note that if the variance $\sigma^2$ is unknown, (2) and (4) should be changed to

$2 \ln L_G(X) = 2 \ln \frac{p(X; \hat{A}_M, \hat{\sigma}^2_M, M_M)}{p(X; \hat{\sigma}^2_0, M_0)} > \lambda'$

and

$2 \ln L_{G_K}(X) = 2 \ln \frac{p(X; \hat{A}_K, \hat{\sigma}^2_K, M_K)}{p(X; \hat{\sigma}^2_0, M_0)} > \lambda'$

where $\hat{\sigma}^2_i$ is the MLE of $\sigma^2$ under $M_i$. However, asymptotically (3) and (5) still hold [5]. Since $K < M$, from Theorem 3.1, which is proved in Appendix I, the detector based on model $M_K$ has better performance than that of the GLRT, which is based on $M_M$.

Theorem 3.1: Suppose there are two detection statistics $T_1$ and $T_2$. The detection statistic $T_1$ has the PDF

$T_1 \sim \chi^2_{\nu_1}$ under $H_0$, and $T_1 \sim \chi'^2_{\nu_1}(\lambda)$ under $H_1$

and the detector decides a signal is present if $T_1 > \gamma_1$. The detection statistic $T_2$ has the PDF

$T_2 \sim \chi^2_{\nu_2}$ under $H_0$, and $T_2 \sim \chi'^2_{\nu_2}(\lambda)$ under $H_1$

and the detector decides a signal is present if $T_2 > \gamma_2$. Then, if $\nu_1 < \nu_2$, the performance of detector 1 is better than that of detector 2 in a Neyman-Pearson (NP) sense, i.e. for the same false alarm rate, detector 1 has a higher probability of detection than detector 2.

In other words, for this distributed detection problem, if $L < M$ the GLRT is not optimal. This is because the GLRT is based on $M_M$ and therefore the channels that contain only noise samples are included in the test statistic. Including these channels increases the degrees of freedom of the test statistic, while the noncentrality parameter remains the same.
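Theorem 3.1 can be checked numerically. The following sketch (assuming scipy is available; the noncentrality parameter and false alarm rate are illustrative values, not taken from the paper) computes $P_D$ at a fixed $P_{FA}$ for two chi-squared detectors sharing the same noncentrality parameter but with different degrees of freedom.

```python
from scipy.stats import chi2, ncx2

def detection_prob(pfa, dof, lam):
    """P_D of a statistic distributed chi2(dof) under H0 and
    noncentral chi2(dof, lam) under H1, at false alarm probability pfa."""
    gamma = chi2.ppf(1.0 - pfa, dof)     # threshold set from the H0 distribution
    return ncx2.sf(gamma, dof, lam)      # P(T > gamma) under H1

# Same noncentrality parameter, different degrees of freedom:
lam, pfa = 10.0, 0.01
print(detection_prob(pfa, 2, lam))       # detector based on the smaller model
print(detection_prob(pfa, 5, lam))       # detector based on the full model
```

The smaller number of degrees of freedom yields the larger $P_D$, as the theorem states.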

B. Improved GLRT

From Theorem 3.1, any detector based on $M_i$ with $i > L$ will have worse performance than the detector based on $M_L$. It is clear that the unknown order $L$ is critical to the performance of the detectors. In order to improve the GLRT, $L$ can first be estimated and a generalized likelihood ratio test based on model $M_{\hat{L}}$ can then be conducted. This can be done by using various model order selection criteria [6]. A widely used criterion is the minimum description length (MDL) criterion, which selects the model that minimizes

$\mathrm{MDL}(i) = -\ln L_{G_i}(X) + \frac{i}{2} \ln N$, $\quad 1 \le i \le M$   (6)

The GLRT with the estimated model order is referred to as the rGLRT in this paper. Recently, a multifamily likelihood ratio test (MFLRT) has been proposed to modify the GLRT to accommodate nested PDF families [7]. The MFLRT decides $H_1$ if

$T_{\mathrm{MFLRT}}(X) = \max_{1 \le i \le M} \left\{ \left[ 2 \ln L_{G_i}(X) - i \left( \ln \frac{2 \ln L_{G_i}(X)}{i} + 1 \right) \right] u\!\left( \frac{2 \ln L_{G_i}(X)}{i} - 1 \right) \right\} > \gamma$   (7)

where $u(\cdot)$ is the unit step function. These two revised GLRTs will be compared with the GLRT in Section V.
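A sketch of the two revised tests is given below. It assumes the per-model statistics $2 \ln L_{G_i}(X)$ have already been computed (for example with the earlier sketch), and it follows the MDL and MFLRT expressions as reconstructed in (6) and (7); the function names are illustrative.

```python
import numpy as np

def rglrt_statistic(two_ln_LG, N):
    """rGLRT: select the model order with the MDL rule (6), then use its GLR.

    two_ln_LG[i-1] holds 2*ln L_{G_i}(X) for i = 1,...,M; N is the number
    of samples per channel.
    """
    i = np.arange(1, len(two_ln_LG) + 1)
    mdl = -0.5 * two_ln_LG + 0.5 * i * np.log(N)   # -ln L_{G_i}(X) + (i/2) ln N
    return two_ln_LG[np.argmin(mdl)]

def mflrt_statistic(two_ln_LG):
    """MFLRT of (7): a penalized maximum over the nested models."""
    i = np.arange(1, len(two_ln_LG) + 1)
    ratio = two_ln_LG / i
    term = two_ln_LG - i * (np.log(np.maximum(ratio, 1.0)) + 1.0)
    return np.where(ratio > 1.0, term, 0.0).max()  # u(2 ln L_{G_i}/i - 1)
```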

IV. BAYESIAN APPROACH

A. Approximated Bayesian Approach

Firstly, suppose the variance $\sigma^2$ is known. If we are willing to make Bayesian assumptions, that is, model $M_i$, $0 \le i \le M$, has a prior probability $\pi_i$ and under each model in $H_1$, $A_j$, $1 \le j \le M$, has a prior PDF given as $p(A_j | M_j)$, the optimal NP detector decides $H_1$ if

$\frac{p(X | H_1)}{p(X | H_0)} > \lambda$   (8)

From the Bayesian assumptions

$p(X | H_1) = \sum_{i=1}^{M} \pi_i \, p(X | M_i)$   (9)

Plugging (9) into (8), the detector equivalently decides $H_1$ if

$\sum_{i=1}^{M} \pi_i \, \frac{p(X | M_i)}{p(X | M_0)} > \lambda'$   (10)

where the marginal PDF $p(X | M_i)$ is given by

$p(X | M_i) = \int p(X, A_i | M_i) \, dA_i = \int p(X | A_i, M_i) \, p(A_i | M_i) \, dA_i$   (11)

As mentioned previously, the true prior PDFs are usually unknown and hard to assume. Furthermore, the integration in (11) may not have a closed form. If the variance $\sigma^2$ is unknown, $\int p(X | A_i, M_i) p(A_i | M_i) \, dA_i$ should be replaced by $\int\!\!\int p(X | A_i, \sigma^2, M_i) \, p(A_i, \sigma^2 | M_i) \, dA_i \, d\sigma^2$ and it is even more difficult to assume the prior PDFs as well as to calculate the integration. It is shown in [10] that, assuming vague prior PDFs, the marginal PDF $p(X | M_i)$ can be asymptotically approximated as

$p(X | M_i) \approx \mathrm{const} \cdot p(X | \hat{A}_i, M_i) \left| \frac{I(\hat{A}_i)}{2\pi e} \right|^{-1/2}$

where $|\cdot|$ denotes the determinant and $I(\hat{A}_i)$ is the observed information matrix, or

$I(\hat{A}_i) = -\left. \frac{\partial^2 \ln p(X | A_i, M_i)}{\partial A_i \, \partial A_i^T} \right|_{A_i = \hat{A}_i}$

For the model given by (1), it can be shown that asymptotically $I(\hat{A}_i) \approx N \mathbf{I}$, and therefore

$p(X | M_i) \approx \mathrm{const} \cdot p(X | \hat{A}_i, M_i) \left( \frac{N}{2\pi e} \right)^{-i/2}$   (12)

Plugging the above into (10) we have

$\sum_{i=1}^{M} \pi_i \, \mathrm{const} \cdot \frac{p(X | \hat{A}_i, M_i)}{p(X | M_0)} \left( \frac{N}{2\pi e} \right)^{-i/2} > \lambda'$

or

$\sum_{i=1}^{M} \pi_i \left( \frac{N}{2\pi e} \right)^{-i/2} L_{G_i}(X) > \lambda''$   (13)

When $\sigma^2$ is unknown, we similarly have

$p(X | M_i) \approx \mathrm{const} \cdot p(X | \hat{A}_i, \hat{\sigma}^2_i, M_i) \left( \frac{N}{2\pi e} \right)^{-(i+1)/2}$

and (13) is still valid. The detector given by (13) is referred to as the aBayesian detector in the simulations of Section V.
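In implementation it is convenient to evaluate (13) in the log domain. A short sketch follows; scipy's logsumexp is assumed to be available, the priors $\pi_i$ are supplied by the user, and the function name is illustrative.

```python
import numpy as np
from scipy.special import logsumexp

def abayesian_statistic(two_ln_LG, N, priors):
    """Approximated Bayesian detector (13):
        sum_i pi_i * (N/(2*pi*e))**(-i/2) * L_{G_i}(X),
    computed in the log domain for numerical stability."""
    i = np.arange(1, len(two_ln_LG) + 1)
    log_terms = (np.log(priors)
                 - 0.5 * i * np.log(N / (2.0 * np.pi * np.e))
                 + 0.5 * two_ln_LG)                 # ln L_{G_i}(X)
    return logsumexp(log_terms)                     # ln of the sum in (13)
```

Since the logarithm is monotone, comparing this value with $\ln \lambda''$ is equivalent to the comparison in (13).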

B. An Exact Bayesian Detector

In order to make a comparison with the approximated Bayesian detector in the simulations of Section V, in this subsection we provide a set of Bayesian prior PDFs that result in a detector with a closed form. Suppose $\sigma^2$ is known, and under $M_i$ the elements of $A_i$ are independent and identically distributed, or

$p(A_i | M_i) = \prod_{k=1}^{i} p(A_k)$   (14)

Assume under any model $M_i$, $1 \le i \le M$,

$p(A_k) = \frac{1}{\sqrt{2\pi \sigma^2_{A_k}}} \exp\!\left( -\frac{1}{2\sigma^2_{A_k}} (A_k - \mu_k)^2 \right)$, $\quad 1 \le k \le i$   (15)

Denoting $x_k = [X_k[0], X_k[1], \ldots, X_k[N-1]]^T$ as the data vector from channel $k$, because each channel is independent of the other channels, we have

$\frac{p(X | M_i)}{p(X | M_0)} = \prod_{k=1}^{i} \frac{p(x_k | M_i)}{p(x_k | M_0)}$   (16)

Denoting $p(x_k | M_i)/p(x_k | M_0)$ as $R(x_k)$, with the assumed prior PDFs, after some algebra we have

$R(x_k) = \sqrt{\frac{\sigma^2}{N \sigma^2_{A_k} + \sigma^2}} \exp\!\left( \frac{N \left( N \sigma^2_{A_k} \bar{x}_k^2 + 2 \sigma^2 \mu_k \bar{x}_k - \sigma^2 \mu_k^2 \right)}{2 \sigma^2 (N \sigma^2_{A_k} + \sigma^2)} \right)$   (17)

where $\bar{x}_k$ denotes the mean of the data from sensor $k$, given as

$\bar{x}_k = \frac{1}{N} \sum_{n=0}^{N-1} X_k[n]$

From (10) and (16) the Bayesian detector decides $H_1$ if

$\sum_{i=1}^{M} \pi_i \prod_{k=1}^{i} R(x_k) > \lambda'$   (18)
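A sketch of the exact Bayesian detector of (17) and (18) is given below. It is written for the $s[n] = 1$ waveform used in the simulations of Section V, the computation is done in the log domain, and the function and argument names are illustrative.

```python
import numpy as np
from scipy.special import logsumexp

def exact_bayesian_statistic(X, sigma2, sigma2_A, mu, priors):
    """Exact Bayesian detector of (17)-(18) under the priors (14)-(15).

    X is the M x N data matrix; sigma2_A and mu are length-M arrays with the
    prior variances and means of A_k; priors holds pi_1,...,pi_M.
    """
    M, N = X.shape
    xbar = X.mean(axis=1)                                  # sample mean of each channel
    denom = N * sigma2_A + sigma2
    log_R = (0.5 * np.log(sigma2 / denom)
             + N * (N * sigma2_A * xbar**2 + 2.0 * sigma2 * mu * xbar
                    - sigma2 * mu**2) / (2.0 * sigma2 * denom))   # ln R(x_k), eq. (17)
    log_prod = np.cumsum(log_R)                            # ln prod_{k<=i} R(x_k)
    return logsumexp(np.log(priors) + log_prod)            # ln of the sum in (18)
```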

V. COMPUTER SIMULATIONS

In this section the previously discussed detectors are compared via computer simulations. We note that these detectors are derived from different origins and have quite different properties. The performance of each detector requires further investigation. This section only serves to give some examples of these detectors and to show that the GLRT can be improved.

A. Known Variance

Define the SNR of channel $k$ as

$\mathrm{SNR}_k = 10 \log_{10}\!\left( \frac{\varepsilon A_k^2}{\sigma^2} \right)$

With this definition, when $\mathrm{SNR}_k \ge 20$ dB all methods yield pretty good results. Therefore the main concern is the situation when $\mathrm{SNR}_k < 20$ dB, i.e. when $A_k^2 < 100\sigma^2/\varepsilon$. Hence we assume $\sigma^2_{A_k} = \frac{100\sigma^2}{9\varepsilon}$ for all $k$ for the $p(A_k)$ given by (15), so that $(3\sigma_{A_k})^2 = 100\sigma^2/\varepsilon$ and for most values that $A_k$ will take, $\mathrm{SNR}_k$ is less than 20 dB. Since $\mathrm{SNR}_k$ is the same for each sensor, we simply denote it as SNR. We also assume $\mu_k = 0$ for all $k$ for $p(A_k)$, and the priors $\pi_i$, $1 \le i \le M$, are all equal. Now all the assumptions required for the exact Bayesian detector given by (17) and (18) have been made. If all these assumptions are true, the exact Bayesian detector is optimal. However, since in practice the assumptions are rarely completely true, the Bayesian approach is only used to find a realizable detector and we hope the detector will be robust; that is, when the true priors and prior PDFs are not the same as the assumptions, the detector still has satisfactory performance. In the simulation the true priors under $H_1$ are set to be equal, which is the same as the assumed priors. However, the true $A_k$ takes a fixed value; in other words, $A_k$ is deterministic. Since the detection performance only depends on the energy of the signal waveform, we simply let $s[n] = 1$, $0 \le n \le N-1$, and $\sigma^2 = N$, so that $\varepsilon/\sigma^2 = 1$. The number of sensors $M$ is set to be 5. With these parameters, it can be shown that

$L_{G_i}(X) = \exp\!\left( \frac{1}{2} \sum_{k=1}^{i} \bar{x}_k^2 \right)$

We compare the receiver operating characteristic (ROC) curves of the GLRT given by (2), the rGLRT with the model order selected by the MDL, the MFLRT given by (7), the approximated Bayesian detector given by (13), and the exact Bayesian detector given by (17) and (18). When SNR = 6 dB, or $A_k = 2$, the performances of the detectors over 5000 realizations are shown in Fig. 1, and when SNR = 12 dB, or $A_k = 4$, the performances of the detectors over 5000 realizations are shown in Fig. 2.

Fig. 1. ROC curves of the detectors in relatively low SNR (known variance).

Fig. 2. ROC curves of the detectors in relatively high SNR (known variance).

For the relatively low SNR case, it can be seen in Fig. 1 that the exact Bayesian detector, the aBayesian detector and the MFLRT are better than the GLRT over the whole ROC curve. The rGLRT is better than the GLRT in the region of the ROC curve where $P_{FA}$ is relatively large. For the relatively high SNR case, it can be seen in Fig. 2 that the GLRT is the worst over the whole ROC curve. Comparing Fig. 1 and Fig. 2, it is seen that the aBayesian detector has very similar performance to the exact Bayesian detector. It is also seen from Fig. 1 and Fig. 2 that when the SNR is low, the rGLRT is close to the GLRT, which is the worst detector, and when the SNR is high, the rGLRT is the best detector. Compared with the rGLRT, the MFLRT is more robust. Actually, it can be shown that the MFLRT has some minimax properties, which will be addressed in a future work.
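The Monte Carlo procedure used to produce such ROC curves can be sketched as follows. This is a simplified illustration only, with hypothetical values matching the known-variance setup above ($s[n] = 1$, $\sigma^2 = N$, $M = 5$); the true model order L used here is an assumption, and only the GLRT statistic is shown.

```python
import numpy as np

def empirical_roc(stat_h0, stat_h1):
    """Empirical (P_FA, P_D) pairs from detector outputs under H0 and H1."""
    thresholds = np.sort(np.concatenate([stat_h0, stat_h1]))[::-1]
    pfa = np.array([(stat_h0 > t).mean() for t in thresholds])
    pd = np.array([(stat_h1 > t).mean() for t in thresholds])
    return pfa, pd

rng = np.random.default_rng(1)
M, N, L, trials = 5, 100, 2, 5000
sigma2 = float(N)                       # so that eps/sigma^2 = 1 with s[n] = 1
A_true = 2.0                            # SNR = 10*log10(eps*A^2/sigma^2), about 6 dB

def glrt_stat(X):
    # With s[n] = 1 and sigma^2 = N, 2 ln L_{G_M}(X) = sum_k xbar_k^2.
    return np.sum(X.mean(axis=1) ** 2)

stat_h0 = np.array([glrt_stat(np.sqrt(sigma2) * rng.standard_normal((M, N)))
                    for _ in range(trials)])
stat_h1 = np.empty(trials)
for t in range(trials):
    X = np.sqrt(sigma2) * rng.standard_normal((M, N))
    X[:L] += A_true                     # first L sensors receive the signal
    stat_h1[t] = glrt_stat(X)
pfa, pd = empirical_roc(stat_h0, stat_h1)
```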

B. Unknown Variance

When the variance $\sigma^2$ is unknown, it can be shown that

$L_{G_i}(X) = \left( \frac{\hat{\sigma}^2_0}{\hat{\sigma}^2_i} \right)^{MN/2}$

where

$\hat{\sigma}^2_k = \frac{1}{NM} \sum_{n=0}^{N-1} \left( \sum_{i=1}^{k} (X_i[n] - \bar{x}_i)^2 + \sum_{j=k+1}^{M} X_j^2[n] \right)$
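A corresponding sketch for the unknown-variance statistics is given below (again written for $s[n] = 1$; the function name is illustrative).

```python
import numpy as np

def glrt_statistics_unknown_var(X):
    """Return 2*ln L_{G_i}(X), i = 1,...,M, for the unknown-variance case,
    where L_{G_i}(X) = (sigma2_hat_0 / sigma2_hat_i)**(M*N/2) and s[n] = 1."""
    M, N = X.shape
    xbar = X.mean(axis=1)
    total = np.sum(X ** 2)                    # N*M*sigma2_hat_0
    # Fitting channel k's mean lowers its residual sum of squares by N*xbar_k^2.
    reduction = np.cumsum(N * xbar ** 2)      # for models M_1,...,M_M
    sigma2_hat_0 = total / (N * M)
    sigma2_hat_i = (total - reduction) / (N * M)
    return M * N * np.log(sigma2_hat_0 / sigma2_hat_i)
```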

Because in this case it is even more difficult to assume valid prior PDFs, we only compare the other four detectors. When SNR = 6 dB the performances of the detectors over 5000 realizations are shown in Fig. 3, and when SNR = 12 dB the performances of the detectors over 5000 realizations are shown in Fig. 4.

Fig. 3. ROC curves of the detectors in relatively low SNR (unknown variance).

Fig. 4. ROC curves of the detectors in relatively high SNR (unknown variance).

For the relatively low SNR case, it can be seen in Fig. 3 that the aBayesian detector and the MFLRT are better than the GLRT over the whole ROC curve. The rGLRT is better than the GLRT in the region of the ROC curve where $P_{FA}$ is relatively large. For the relatively high SNR case, it can be seen in Fig. 4 that the GLRT is the worst over the whole ROC curve. The minimax property of the MFLRT can also be observed by comparing Fig. 3 and Fig. 4.

VI. CONCLUSIONS AND DISCUSSION

We have investigated the GLRT and the Bayesian approach for centralized composite distributed detection. In particular, we have proved that the performance of the GLRT is poor and can be improved for this problem. An approximated Bayesian detector has also been proposed based on vague prior PDFs, so that the marginal PDF can be obtained without integration. As a result, the detector always has a closed form.

In our exposition, we assume the sensors have already been ordered with respect to their importance. Therefore, when there are $M$ sensors, only $M + 1$ models need to be considered.

When the original data from the sensors are not ordered, for $M$ sensors we have to consider $2^M$ models as in [4]. This will greatly increase the computational load. Alternatively, some preliminary processing can be conducted to order the sensors, and techniques similar to those in [11] can be used. However, the performance degradation that results from the ordering needs further investigation.

REFERENCES

[1] R. R. Tenney and N. R. Sandell Jr., "Detection with distributed sensors," IEEE Trans. Aerosp. Electron. Syst., vol. AES-17, July 1981.
[2] R. Viswanathan and P. K. Varshney, "Distributed detection with multiple sensors, Part I: Fundamentals," Proceedings of the IEEE, vol. 85, Jan. 1997.
[3] R. S. Blum, S. A. Kassam and H. V. Poor, "Distributed detection with multiple sensors, Part II: Advanced topics," Proceedings of the IEEE, vol. 85, Jan. 1997.
[4] B. Baygun and A. O. Hero III, "Optimal simultaneous detection and estimation under a false alarm constraint," IEEE Trans. Inform. Theory, vol. 41, May 1995.
[5] S. M. Kay, Fundamentals of Statistical Signal Processing: Detection Theory. Upper Saddle River, NJ: Prentice-Hall, 1998.
[6] P. Stoica and Y. Selen, "Model-order selection: A review of information criterion rules," IEEE Signal Processing Magazine, vol. 21, July 2004.
[7] S. M. Kay, "The multifamily likelihood ratio test for multiple signal model detection," IEEE Signal Process. Letters, vol. 12, no. 5, May 2005.

[8] L. L. Scharf and B. Friedlander, "Matched subspace detectors," IEEE Trans. Signal Process., vol. 42, no. 8, Aug. 1994.
[9] O. Zeitouni, J. Ziv and N. Merhav, "When is the generalized likelihood ratio test optimal?," IEEE Trans. Inform. Theory, vol. 38, Sept. 1992.
[10] P. M. Djuric, "Asymptotic MAP criteria for model selection," IEEE Trans. Signal Process., vol. 46, no. 10, Oct. 1998.
[11] R. R. Hocking, "The analysis and selection of variables in linear regression," Biometrics, vol. 32, pp. 1-49, Mar. 1976.
[12] A. Gelman, J. Carlin, H. Stern and D. Rubin, Bayesian Data Analysis. London, U.K.: Chapman & Hall, 1995.

APPENDIX I
PROOF OF THEOREM 3.1

Since $\nu_1 < \nu_2$, consider a hypothesis testing problem in which

$H_0$: $x[1] \sim \chi^2_{\nu_1}$, $\quad x[2] \sim \chi^2_{\nu_2 - \nu_1}$
$H_1$: $x[1] \sim \chi'^2_{\nu_1}(\lambda)$, $\quad x[2] \sim \chi^2_{\nu_2 - \nu_1}$

and $x[1]$ is independent of $x[2]$ under each hypothesis. The optimal NP detector decides $H_1$ if [5]

$\frac{p_{\chi'^2_{\nu_1}(\lambda)}(x[1]) \, p_{\chi^2_{\nu_2 - \nu_1}}(x[2])}{p_{\chi^2_{\nu_1}}(x[1]) \, p_{\chi^2_{\nu_2 - \nu_1}}(x[2])} = \frac{p_{\chi'^2_{\nu_1}(\lambda)}(x[1])}{p_{\chi^2_{\nu_1}}(x[1])} > \gamma$

Plugging in the expressions for the PDFs of $\chi^2_{\nu_1}$ and $\chi'^2_{\nu_1}(\lambda)$ yields [5]

$\frac{p_{\chi'^2_{\nu_1}(\lambda)}(x[1])}{p_{\chi^2_{\nu_1}}(x[1])} = \exp\!\left( -\frac{\lambda}{2} \right) \sum_{k=0}^{\infty} \frac{\Gamma(\nu_1/2)}{k! \, \Gamma(\nu_1/2 + k)} \left( \frac{\lambda x[1]}{4} \right)^k$

The above expression can be simplified to

$\sum_{k=0}^{\infty} c_k x[1]^k > \gamma'$

with all positive $c_k$'s. Since $x[1] \ge 0$ and $\sum_{k=0}^{\infty} c_k x[1]^k$ is a monotonically non-decreasing function of $x[1]$, the detector equivalently decides $H_1$ if

$T_{NP}([x[1], x[2]]) = x[1] > \gamma''$   (19)

This is the optimal detector in the NP sense. Now consider another detector which decides $H_1$ if

$T_G([x[1], x[2]]) = x[1] + x[2] > \gamma'''$

This detector is different from the optimal detector (19) and therefore it has poorer performance. Since

$T_{NP} \sim \chi^2_{\nu_1}$ under $H_0$ and $T_{NP} \sim \chi'^2_{\nu_1}(\lambda)$ under $H_1$

and

$T_G \sim \chi^2_{\nu_2}$ under $H_0$ and $T_G \sim \chi'^2_{\nu_2}(\lambda)$ under $H_1$

the theorem is proved by letting $T_1 = T_{NP}$ and $T_2 = T_G$.
