EECS564 Estimation, Filtering, and Detection Exam 2 Week of April 20, 2015
This is an open-book take-home exam. You have 48 hours to complete the exam. All work on the exam should be your own. All problems have equal weight. If you feel that additional assumptions need to be made to answer any part of a question, state your assumptions explicitly. Please make sure that your name and student ID number are on your exam, and if not using a blue book, make sure all of your pages are stapled in the correct order before handing in. Do not forget to write and sign the honor pledge before you hand in your work.

1. In this problem you will explore the so-called change detection problem for detecting a shift in the mean. Let $w_k$ be white Gaussian noise with variance $\sigma_w^2$. Let $u(k)$ be the unit step function, i.e., $u(k) = 0$ for $k < 0$ and $u(k) = 1$ for $k \ge 0$. It is of interest to test the hypotheses

$H_0$: $x_k = w_k$, $\quad k = 1, \dots, n$
$H_1$: $x_k = a_k u(k - \tau) + w_k$, $\quad k = 1, \dots, n$

where $a_k > 0$ and $\tau \in \{1, \dots, n\}$ is the change time, assumed fixed and known.

(a) Find the most powerful test of level $\alpha$ for the case that the sequence $\{a_k\}_k$ and $\tau$ are known and non-random. Be sure to specify an expression for the threshold. Find an expression for the power $\beta$. Is your test UMP against unknown positive values of $a_k$?

Solution: Note that in vector form the signal component under $H_1$ has the form $s = [0, \dots, 0, a_\tau, \dots, a_n]^T$, where there are $\tau - 1$ zeros in this vector and $n - \tau + 1$ nonzero entries. The likelihood ratio test statistic is

$$\Lambda = \exp\left( \frac{1}{\sigma_w^2} \sum_{k=\tau}^n a_k x_k - \frac{1}{2\sigma_w^2} \sum_{k=\tau}^n a_k^2 \right).$$

Therefore, the MP-LRT is

$$\sum_{k=\tau}^n a_k x_k \;\overset{H_1}{\underset{H_0}{\gtrless}}\; \gamma, \qquad \gamma = \sigma_w \sqrt{\sum_{k=\tau}^n a_k^2}\;\mathcal{N}^{-1}(1-\alpha),$$

where $\mathcal{N}^{-1}$ denotes the standard normal quantile function. The test is not UMP unless $a_k = a$, i.e., a constant signal. The power is

$$\beta = 1 - \mathcal{N}\!\left( \mathcal{N}^{-1}(1-\alpha) - d \right),$$

where the detectability index $d$ is given by the positive square root of

$$d^2 = \frac{1}{\sigma_w^2} \sum_{k=\tau}^n a_k^2.$$
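As a numerical illustration of part (a), the following sketch computes the threshold and power using only Python's standard library; the values of $a_k$, $\sigma_w$ and $\alpha$ below are hypothetical, chosen purely for illustration.

```python
# Numerical check of part (a): threshold and power of the MP test
# sum_k a_k x_k > gamma for a known deterministic shift.
from math import sqrt
from statistics import NormalDist

a = [0.5, 1.0, 1.5, 2.0]      # a_tau, ..., a_n (hypothetical values)
sigma_w = 1.0                 # noise standard deviation
alpha = 0.05                  # test level

S = sum(ak ** 2 for ak in a)  # sum of a_k^2 over k = tau..n
z = NormalDist().inv_cdf(1 - alpha)
gamma = sigma_w * sqrt(S) * z           # MP-LRT threshold
d = sqrt(S) / sigma_w                   # detectability index
beta = 1 - NormalDist().cdf(z - d)      # power

print(gamma, d, beta)
```

Note that the power depends on the signal only through the single scalar $d$, consistent with the formula above.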
(b) Find the most powerful test of level $\alpha$ for the case that the $\{a_k\}_k$ are i.i.d. zero-mean Gaussian with variance $\sigma_a^2$ and $\tau$ is known and non-random. Be sure to specify an expression for the threshold. Find an expression for the power $\beta$. Is the test uniformly most powerful against unknown $\sigma_a^2$?

Solution: In this case, under $H_1$ the observations $x_k$ are independent $\mathcal{N}(0, \sigma_w^2)$ for $k < \tau$ and independent $\mathcal{N}(0, \sigma_a^2 + \sigma_w^2)$ for $k \ge \tau$. Hence the likelihood ratio test statistic is

$$\Lambda = \left( \frac{\sigma_w^2}{\sigma_a^2 + \sigma_w^2} \right)^{(n-\tau+1)/2} \exp\left( \frac{\sigma_a^2}{2(\sigma_a^2 + \sigma_w^2)\sigma_w^2} \sum_{k=\tau}^n x_k^2 \right).$$

Therefore the MP-LRT is

$$\sum_{k=\tau}^n x_k^2 \;\overset{H_1}{\underset{H_0}{\gtrless}}\; \gamma, \qquad \gamma = \sigma_w^2\,\mathcal{X}_{n-\tau+1}^{-1}(1-\alpha),$$

where $\mathcal{X}_m^{-1}$ denotes the quantile function of the chi-square distribution with $m$ degrees of freedom. This is UMP, as the test does not depend on $\sigma_a^2$. The power is

$$\beta = 1 - \mathcal{X}_{n-\tau+1}\!\left( \rho\,\mathcal{X}_{n-\tau+1}^{-1}(1-\alpha) \right),$$

where the detectability index $\rho \in [0, 1]$ is given by

$$\rho = \frac{\sigma_w^2}{\sigma_a^2 + \sigma_w^2}.$$

(c) Find the most powerful test of level $\alpha$ for the case that $a_k = a$, i.e., $a_k$ is constant over time, where $a$ is a zero-mean Gaussian r.v. with variance $\sigma_a^2$. Be sure to specify an expression for the threshold. Find an expression for the power $\beta$. Is the test uniformly most powerful against unknown $\sigma_a^2$?

Solution: This is a bit more involved than parts (a)-(b), since the random constant $a$ makes the observations correlated over time. The covariance matrix of the vector of observations $x$ under $H_1$ is obtained by using the vector form of the signal $s = [0, \dots, 0, 1, \dots, 1]^T a$, denoted $s = \mathbf{1}_\tau a$:

$$R_1 = \mathrm{cov}(x) = \sigma_a^2 \mathbf{1}_\tau \mathbf{1}_\tau^T + \sigma_w^2 I.$$

The form of the MP-LRT can now be derived using the Woodbury identity for $R_1^{-1}$ to express $f_1(x)$. Alternatively, one can derive it directly, without using matrix identities, via the method of conditioning

$$f_1(x) = \int f(x \mid a)\, f(a)\, da,$$

which, using the results of part (a) and completion of the square in the exponent of the integrand, gives

$$\Lambda = \frac{f_1(x)}{f_0(x)} = \frac{1}{\sqrt{1 + (n-\tau+1)\sigma_a^2/\sigma_w^2}} \exp\left( \frac{\sigma_a^2}{2\sigma_w^2\left[(n-\tau+1)\sigma_a^2 + \sigma_w^2\right]} \left( \sum_{k=\tau}^n x_k \right)^{\!2} \right).$$

Therefore, the MP-LRT reduces to

$$\left( \sum_{k=\tau}^n x_k \right)^{\!2} \;\overset{H_1}{\underset{H_0}{\gtrless}}\; \gamma,$$
where $\gamma = (n - \tau + 1)\,\sigma_w^2\,\mathcal{X}_1^{-1}(1-\alpha)$. This is UMP with respect to $\sigma_a^2$. The power is

$$\beta = 1 - \mathcal{X}_1\!\left( \rho\,\mathcal{X}_1^{-1}(1-\alpha) \right),$$

where the detectability index $\rho \in [0, 1]$ is given by

$$\rho = \frac{\sigma_w^2}{(n-\tau+1)\sigma_a^2 + \sigma_w^2}.$$

(d) If the change time $\tau$ is unknown but $a$ is known as in part (a), what does the GLRT look like? You do not need to specify the level-$\alpha$ threshold or the power of the GLRT.

Solution: The test statistic is simply the maximized version of the LRT statistic found in part (a):

$$\Lambda_{GLR} = \max_{\tau} \exp\left( \frac{a}{\sigma_w^2} \sum_{k=\tau}^n x_k - \frac{a^2 (n-\tau+1)}{2\sigma_w^2} \right).$$

This cannot be taken any further.

2. Consider the following study of survival statistics among a particular population. A number $n$ of individuals have enrolled in a long-term observational study, e.g., a study of life expectancy for heart disease patients or chain smokers. The exact time of death of some of the individuals is reported. The other individuals stopped participating in the study at some known time. For these latter patients the exact time of death is unknown; their survival statistics are said to be censored. The objective is to estimate or test the mean survival time of the entire population. For the $i$-th individual define the indicator variable $w_i$, where $w_i = 1$ if the time of death is reported and otherwise $w_i = 0$. For an individual with $w_i = 1$, let $t_i$ denote their reported time of death. For an individual with $w_i = 0$, let $\tau_i$ denote the time they stopped participating in the study. Let $T_i$ be a random variable that is the time of death of individual $i$. Assume that the $T_i$'s are i.i.d. with density $f(t; \lambda)$ parameterized by $\lambda$, which is related to the mean survival time. Then, for $w_i = 1$ the observation is the real value $X_i = t_i$, while for $w_i = 0$ the observation is the binary value $X_i = I(T_i > \tau_i)$, where $I(A)$ is the indicator function of event $A$. Therefore, the likelihood function associated with the observation $x = [x_1, \dots, x_n]^T$ is

$$f(x; \lambda) = \prod_{i=1}^n f^{w_i}(t_i; \lambda)\, \left[ 1 - F(\tau_i; \lambda) \right]^{1 - w_i},$$

where $F(t; \lambda)$ is the cumulative distribution function $\int_0^t f(u; \lambda)\, du$.
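The censored-data likelihood above is straightforward to evaluate numerically for any parametric family: each reported death contributes $\log f(t_i;\lambda)$ and each censored individual contributes $\log[1-F(\tau_i;\lambda)]$. A minimal sketch, using the exponential density that the problem adopts below and hypothetical data values:

```python
# Sketch: censored-data log-likelihood
# log f(x; lam) = sum_i [ w_i log f(t_i; lam) + (1 - w_i) log(1 - F(tau_i; lam)) ]
from math import log, exp

def censored_loglik(obs, pdf, sf):
    """obs: list of (w_i, time) pairs; time is t_i if w_i == 1, else tau_i.
    pdf(t) is f(t; lam); sf(t) is the survival function 1 - F(t; lam)."""
    return sum(log(pdf(t)) if w else log(sf(t)) for w, t in obs)

lam = 0.5
# Hypothetical study: three reported deaths, two individuals censored at t = 3
obs = [(1, 1.2), (1, 0.7), (0, 3.0), (1, 2.4), (0, 3.0)]
ll = censored_loglik(obs,
                     pdf=lambda t: lam * exp(-lam * t),   # f(t; lam)
                     sf=lambda t: exp(-lam * t))          # 1 - F(t; lam)
print(ll)
```

For the exponential family this collapses to $N_d \log\lambda - \lambda\left[\sum_i w_i t_i + \sum_i (1-w_i)\tau_i\right]$, as derived in part (a) below.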
In the following you should assume that $T_i$ is exponentially distributed with density $f(t; \lambda) = \lambda e^{-\lambda t}$, $\lambda > 0$.

(a) Find the maximum likelihood estimator of $\lambda$. Find the CR bound on unbiased estimators of $\lambda$. Is your estimator an efficient estimator?

Solution: First, note that the likelihood function is only proper if $w_i$ is interpreted as a Bernoulli variable with parameter $P(w_i = 1) = F(\tau_i; \lambda)$, where the $\tau_i$ are non-random exit times (those who do not exit the study before death have $\tau_i = \infty$). Otherwise, the likelihood function does not integrate to one.
Define $N_d = \sum_{i=1}^n w_i$, the number of deaths reported; the remaining $n - N_d$ individuals are censored. The MLE is simply the maximizing value over $\lambda$ of the likelihood function

$$f(x; \lambda) = \prod_{i=1}^n f^{w_i}(t_i; \lambda)\,[1 - F(\tau_i; \lambda)]^{1-w_i} = \lambda^{N_d} \exp\left( -\lambda \left[ \sum_{i=1}^n w_i t_i + \sum_{i=1}^n (1 - w_i) \tau_i \right] \right),$$

where we have used the fact that $1 - F(\tau_i; \lambda) = \exp(-\lambda \tau_i)$. Define $\bar{x} = n^{-1}\left[ \sum_{i=1}^n w_i t_i + \sum_{i=1}^n (1 - w_i)\tau_i \right]$ and $\bar{w} = n^{-1} \sum_{i=1}^n w_i = N_d/n$. Then a simple analysis of the gradient of the log-likelihood function reveals

$$\frac{d \ln f}{d\lambda} = n\left( \lambda^{-1} \bar{w} - \bar{x} \right), \qquad \frac{d^2 \ln f}{d\lambda^2} = -n \lambda^{-2} \bar{w}.$$

Hence the MLE is

$$\hat{\lambda} = \frac{\bar{w}}{\bar{x}} = \frac{N_d}{\sum_{i=1}^n w_i t_i + \sum_{i=1}^n (1-w_i)\tau_i}.$$

The estimator is not efficient since $d \ln f/d\lambda \ne k_\lambda (\hat{\lambda} - \lambda)$ for any non-random factor $k_\lambda$. The negative second derivative, $\lambda^{-2} \sum_{i=1}^n w_i$, is the conditional Fisher information given the $w_i$, giving the conditional CR bound

$$\mathrm{var}_\lambda(\hat{\lambda}) \ge \lambda^2 / N_d.$$

The unconditional CRB is found by taking the expectation of the negative second derivative of the log-likelihood, which is

$$F(\lambda) = \lambda^{-2} \sum_{i=1}^n E_\lambda[w_i] = \lambda^{-2} \sum_{i=1}^n \left( 1 - \exp(-\lambda \tau_i) \right),$$

using $E_\lambda[w_i] = F(\tau_i; \lambda) = 1 - \exp(-\lambda\tau_i)$. Or, defining the indicator $z_i = I(\tau_i = \infty)$, indicating the indices of those individuals who do not exit the study before they die,

$$F(\lambda) = \lambda^{-2} \left[ M + \sum_{i=1}^n (1 - z_i)\left( 1 - \exp(-\lambda \tau_i) \right) \right],$$

where $M$ is the number of these individuals. Note that the Fisher information decreases as more people exit the study, as expected, since this corresponds to less informative observations for $\lambda$.

(b) Consider testing the one-sided hypotheses

$H_0$: $\lambda = \lambda_0$ versus $H_1$: $\lambda < \lambda_0$,

where $\lambda_0 > 0$ is fixed. Find the most powerful test of level $\alpha$.¹ Is your test uniformly most powerful?

¹ Hint: the sum of $m$ i.i.d. standard exponential variables $Y_i$ with $E[Y_i] = 1$ is Erlang distributed with parameter $m$, denoted $\mathrm{Er}_m$.
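A quick simulation check of the part (a) MLE $\hat{\lambda} = N_d/\left[\sum_i w_i t_i + \sum_i (1-w_i)\tau_i\right]$: generate exponential death times, censor at a common exit time, and verify that the estimate lands near the true rate. The sample size, true rate, and exit time below are hypothetical illustration values.

```python
# Simulation check of the censored-exponential MLE
import random

random.seed(2)
lam_true = 0.5
n = 50_000
tau_exit = 3.0        # common non-random study exit time for everyone

num, den = 0, 0.0     # num -> N_d, den -> sum w_i t_i + sum (1 - w_i) tau_i
for _ in range(n):
    T = random.expovariate(lam_true)      # time of death
    if T <= tau_exit:                     # death observed: w_i = 1
        num += 1
        den += T
    else:                                 # censored: w_i = 0
        den += tau_exit

lam_hat = num / den
print(lam_hat)
```

Consistency follows since $E[\min(T,\tau)] = (1 - e^{-\lambda\tau})/\lambda$ and $E[w_i] = 1 - e^{-\lambda\tau}$, so the ratio converges to $\lambda$.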
Solution: The LRT is

$$\Lambda = \left( \frac{\lambda}{\lambda_0} \right)^{N_d} \exp\left( -(\lambda - \lambda_0) \left[ \sum_i w_i t_i + \sum_i (1-w_i)\tau_i \right] \right) \;\overset{H_1}{\underset{H_0}{\gtrless}}\; \eta.$$

Since $\lambda < \lambda_0$, this reduces to

$$\sum_i w_i t_i + \sum_i (1-w_i)\tau_i \;\overset{H_1}{\underset{H_0}{\gtrless}}\; \gamma,$$

or, since the times $\tau_i$ at which individuals left the study are known and non-random,

$$\sum_i w_i t_i \;\overset{H_1}{\underset{H_0}{\gtrless}}\; \gamma_1.$$

As the $t_i$ are i.i.d. exponential($\lambda_0$) random variables under $H_0$, the $\lambda_0 t_i$ are i.i.d. standard exponential random variables, and we can set the threshold using the fact that under $H_0$, $\lambda_0 \sum_i w_i t_i$ is Erlang with $N_d$ degrees of freedom, so that $\gamma_1 = \lambda_0^{-1}\, \mathrm{Er}_{N_d}^{-1}(1 - \alpha)$. The test is uniformly most powerful since it does not depend on the unknown value of $\lambda$.

(c) Now consider the two-sided hypotheses

$H_0$: $\lambda = \lambda_0$ versus $H_1$: $\lambda \ne \lambda_0$.

Does there exist a UMP test for these hypotheses? Find the level-$\alpha$ GLRT for testing $H_0$ versus $H_1$.

Solution: There does not exist a UMP test, since the sign of $\lambda - \lambda_0$ changes over the uncertainty domain of $\lambda$, unlike in part (b). The GLRT is

$$\Lambda_{GLR} = \left( \frac{\hat{\lambda}}{\lambda_0} \right)^{N_d} \exp\left( -(\hat{\lambda} - \lambda_0)\, n\bar{x} \right),$$

which, substituting $\hat{\lambda} = N_d/(n\bar{x})$, can be rewritten as

$$\Lambda_{GLR} = C\, (\lambda_0 n\bar{x})^{-N_d} \exp(\lambda_0 n\bar{x}),$$

where $C = N_d^{N_d} \exp(-N_d)$ is a positive constant. Thus $\Lambda_{GLR}$ is of the form $C u^{-N_d} \exp(u)$, where $u = \lambda_0 n\bar{x}$, which is a convex function of $u$. Hence the GLRT has a decision region of the form: decide $H_1$ when $u \le \gamma_-$ or $u \ge \gamma_+$. Since $n\bar{x} = \sum_i w_i t_i + \sum_i (1-w_i)\tau_i$, and the $w_i$ and $\tau_i$ are non-random and known, this test is equivalent to deciding $H_1$ when $\lambda_0 \sum_i w_i t_i$ falls outside the interval

$$\left[ \gamma_- - \lambda_0 \sum_i (1-w_i)\tau_i,\;\; \gamma_+ - \lambda_0 \sum_i (1-w_i)\tau_i \right].$$

Under $H_0$ the distribution of $\lambda_0 \sum_i w_i t_i$ is $\mathrm{Er}_{N_d}$, which allows us to set the thresholds $\gamma_\pm$ to give false-alarm probability equal to $\alpha$.
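The Erlang thresholds used in parts (b) and (c) are easy to compute without statistical libraries, since the Erlang CDF has the closed form $1 - e^{-x}\sum_{k<m} x^k/k!$ and the quantile can be found by bisection. A sketch, with hypothetical values of $N_d$, $\lambda_0$, and $\alpha$:

```python
# Setting the part (b) threshold gamma_1 = Er^{-1}_{N_d}(1 - alpha) / lambda_0
from math import exp, factorial

def erlang_cdf(x, m):
    """CDF of a sum of m i.i.d. standard exponentials, evaluated at x >= 0."""
    return 1.0 - exp(-x) * sum(x ** k / factorial(k) for k in range(m))

def erlang_ppf(p, m, tol=1e-10):
    """Quantile by bisection (the CDF is strictly increasing in x)."""
    lo, hi = 0.0, 10.0 * m + 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if erlang_cdf(mid, m) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

N_d, lam0, alpha = 10, 0.5, 0.05   # hypothetical illustration values
q = erlang_ppf(1 - alpha, N_d)      # Er^{-1}_{N_d}(1 - alpha)
gamma_1 = q / lam0
print(q, gamma_1)
```

Equivalently, $2\,\mathrm{Er}_m$ is chi-square with $2m$ degrees of freedom, so standard chi-square tables give the same thresholds.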
(d) Find a $1 - \alpha$ confidence interval for the parameter $\lambda$.

Solution: Using the results of (c), if $\gamma_-$ and $\gamma_+$ are determined so that the GLRT is of level $\alpha$, we have

$$P_{\lambda_0}\!\left( \gamma_- \le \lambda_0\, n\bar{x} \le \gamma_+ \right) = 1 - \alpha,$$

which, since $\lambda_0$ is arbitrary, specifies the following $1-\alpha$ confidence interval for the parameter $\lambda$:

$$\left[ \gamma_- / (n\bar{x}),\;\; \gamma_+ / (n\bar{x}) \right].$$

3. Here you will consider the multichannel simultaneous signal detection problem. Consider testing the following $N$ hypotheses on the presence of a signal $s_i$ in at least one of $N$ channels, where $b_i$ is a Bernoulli distributed random variable indicating the presence of signal $s_i$, and $w_i$ is noise, all in the $i$-th channel:

$H_1$: $x_1 = s_1 b_1 + w_1$
$\;\;\vdots$
$H_N$: $x_N = s_N b_N + w_N$

Here we assume that $s_i$, $b_i$ and $w_i$ are mutually independent random variables and all quantities are independent over $i$. Let $\hat{b}_i \in \{0, 1\}$ be the decision function for the $i$-th channel, where $\hat{b}_i = 0$ and $\hat{b}_i = 1$ correspond to deciding $b_i = 0$ and $b_i = 1$, respectively.

(a) What is the decision function $\hat{b}_i$ that corresponds to the most powerful likelihood ratio test of level $\alpha$ for testing any one channel for signal, i.e., testing $H_{i0}$: $x_i = w_i$ vs. $H_{i1}$: $x_i = s_i + w_i$? Specify the test with threshold for the case that $s_i$ and $w_i$ are independent zero-mean Gaussian random variables with variances $\sigma_s^2$ and $\sigma_w^2$, respectively.

Solution: An equivalent pair of hypotheses is $H_{i0}$: $b_i = 0$ vs. $H_{i1}$: $b_i = 1$. These are simple hypotheses, so the MP-LRT of level $\alpha$ is the Bayes-optimal test, which has the form $P(X_i \mid b_i = 1)/P(X_i \mid b_i = 0) \gtrless \eta$ with $\eta = (1 - p_i)/p_i$, where $p_i = E[b_i] = P(b_i = 1)$. This is equivalent to the posterior odds ratio test:

$$\Lambda_B = \frac{E[b_i \mid X]}{E[1 - b_i \mid X]} \;\overset{H_{i1}}{\underset{H_{i0}}{\gtrless}}\; \eta.$$

Thus the Bayes-optimal decision function is

$$\hat{b}_i = \begin{cases} 1, & \Lambda_B \ge \eta \\ 0, & \Lambda_B < \eta \end{cases}$$

Under the Gaussian assumption given, the likelihood ratio is

$$\Lambda = \sqrt{\frac{\sigma_w^2}{\sigma_s^2 + \sigma_w^2}}\, \exp\left( \frac{\sigma_s^2}{2(\sigma_s^2 + \sigma_w^2)\sigma_w^2}\, x_i^2 \right),$$

so that the Bayes-optimal test is equivalent to $x_i^2 \gtrless \gamma$.
To find the power function and the ROC curve, we first find the $\gamma$ that gives probability of false alarm equal to $\alpha$:

$$\gamma = \sigma_w^2\, \mathcal{X}_1^{-1}(1 - \alpha),$$

where $\mathcal{X}_1^{-1}(1-\alpha)$ denotes the $1-\alpha$ quantile of the chi-square distribution with 1 degree of freedom. Therefore, the power function is

$$\beta = 1 - \mathcal{X}_1\!\left( \rho\, \mathcal{X}_1^{-1}(1 - \alpha) \right), \qquad \rho = \frac{\sigma_w^2}{\sigma_s^2 + \sigma_w^2}.$$

(b) With the threshold found in part (a), what is the probability that at least one test of level $\alpha$ gives a false alarm among all of the $N$ channels? This is the multichannel false-alarm rate. Adjust the threshold on your test in part (a) so that the multichannel false-alarm rate is equal to $\alpha$. What is the new single-channel false-alarm rate and how does it compare to that of part (a)? What is the power of your test for correctly detecting signal presence in a given channel with this multichannel test of level $\alpha$? Evaluate this power for the case that $s_i$ and $w_i$ are independent zero-mean Gaussian random variables with variances $\sigma_s^2$ and $\sigma_w^2$, respectively.

Solution: The power function associated with $\hat{b}$ is found by expressing $\beta_m = P(\hat{b}_i = 1 \mid H_1)$ in terms of $\alpha_m = P(\hat{b}_i = 1 \mid H_0)$. We will make the assumption here that the channels are all identical, i.e., the signals, noises and Bernoulli variables in each channel are independent with identical distributions (i.i.d.), so that $P(\hat{b}_i = 1 \mid H_0)$ and $P(\hat{b}_i = 1 \mid H_1)$ do not depend on $i$. In this case, the probability that at least one test $\hat{b}_i$ gives a positive is

$$P\Big( \bigcup_i \{\hat{b}_i = 1\} \,\Big|\, H_k \Big) = 1 - P\Big( \bigcap_i \{\hat{b}_i = 0\} \,\Big|\, H_k \Big) = 1 - \prod_{i=1}^N P(\hat{b}_i = 0 \mid H_k) = 1 - \left( 1 - P(\hat{b}_i = 1 \mid H_k) \right)^N$$

for $H_k$ equal to $H_0$ or to $H_1$. Hence, if $\alpha_1$ denotes the level of the single-channel test,

$$P\Big( \bigcup_i \{\hat{b}_i = 1\} \,\Big|\, H_0 \Big) = 1 - (1 - \alpha_1)^N.$$

Therefore, setting the left side equal to the desired multichannel level $\alpha$ and solving for the single-channel level, we obtain $\alpha_1 = 1 - (1 - \alpha)^{1/N}$.
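The threshold adjustment above is a one-line computation; the following sketch (with hypothetical values of $N$ and the multichannel level) shows that the required per-channel level is far stricter than the nominal level, close to the $\alpha/N$ Bonferroni rule.

```python
# Part (b) multichannel correction: per-channel level
# alpha_1 = 1 - (1 - alpha)^(1/N) holds the multichannel
# false-alarm rate 1 - (1 - alpha_1)^N at exactly alpha.
N = 100                           # number of channels (hypothetical)
alpha_m = 0.05                    # desired multichannel false-alarm rate

alpha_single = 1 - (1 - alpha_m) ** (1 / N)
fwer = 1 - (1 - alpha_single) ** N   # recovers alpha_m exactly

print(alpha_single, fwer)
```

Here the per-channel level comes out near $5 \times 10^{-4}$, i.e., each channel's threshold must be pushed well into the tail relative to a standalone level-$\alpha$ test.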
Therefore, if $g(\alpha_1)$ denotes the power function for a single-channel test $\hat{b}_i$, then the power function under the constraint $P(\bigcup_i \{\hat{b}_i = 1\} \mid H_0) = \alpha$ is

$$\beta = 1 - \left( 1 - g\!\left( 1 - (1-\alpha)^{1/N} \right) \right)^N.$$

For the case that $s_i$ and $w_i$ are independent zero-mean Gaussian random variables with variances $\sigma_s^2$ and $\sigma_w^2$, respectively, from part (a), $g(\alpha_1) = 1 - \mathcal{X}_1\!\left(\rho\, \mathcal{X}_1^{-1}(1 - \alpha_1)\right)$.

(c) As an alternative to defining the decision function $\hat{b}_i$ as an MP-LRT and adjusting the test for multichannel error protection as in part (b), here you will consider a different approach to optimal detection that accounts for the multichannel problem directly. Specifically, we define the optimal multichannel set of decision rules $\{\hat{b}_i\}_{i=1}^N$ as those that maximize the average number of true positives subject to a constraint on the average proportion of false positives over the $N$ channels:

$$\max \sum_{k=1}^N E[\hat{b}_k b_k] \quad \text{subject to} \quad \frac{1}{N}\sum_{k=1}^N E[\hat{b}_k (1 - b_k)] \le q,$$
where $q \in [0, 1]$ is the mean false-positive rate, set by the user. Derive a general form of the optimal decision rule, illustrate it for the case that $s_i$ and $w_i$ are independent zero-mean Gaussian random variables with variances $\sigma_s^2$ and $\sigma_w^2$, respectively, and evaluate the power function.

Solution: Using the Lagrange multiplier approach discussed in class to derive the Neyman-Pearson lemma, we formulate this as the optimization

$$\max \left\{ \sum_{k=1}^N E[\hat{b}_k b_k] + \lambda \left( q - \frac{1}{N}\sum_{k=1}^N E[\hat{b}_k (1 - b_k)] \right) \right\}.$$

Collect terms and, for $Q$ a random variable, express $E[Q]$ as $E[E[Q \mid X]]$, where $X$ denotes all the observations:

$$\max\, E\left\{ \sum_{k=1}^N \hat{b}_k \left( E[b_k \mid X] - \frac{\lambda}{N} E[1 - b_k \mid X] \right) \right\} + \lambda q.$$

The sum in the first expectation is maximized by assigning $\hat{b}_k = 1$ if and only if $E[b_k \mid X] - \lambda E[1 - b_k \mid X]/N \ge 0$, or equivalently $\hat{b}_k$ is the decision rule associated with the test

$$\frac{E[b_k \mid X]}{E[1 - b_k \mid X]} \;\overset{\hat{b}_k = 1}{\underset{\hat{b}_k = 0}{\gtrless}}\; \frac{\lambda}{N},$$

which has exactly the same form as the single-channel test in (a), and its multichannel power curve is the same as that derived in (b).

(d) As another alternative, consider defining the optimal multichannel set of decision rules $\{\hat{b}_i\}_{i=1}^N$ as those that maximize the average number of true positives, but now subject to a constraint on the average proportion of false positives among all positives found:

$$\max \sum_{k=1}^N E[\hat{b}_k b_k] \quad \text{subject to} \quad \sum_{k=1}^N E[\hat{b}_k (1 - b_k)/M] \le q,$$

where $M = \sum_{i=1}^N \hat{b}_i$ is the total number of positives found by the test. Derive a general form of the optimal decision rule and illustrate it for the case that $s_i$ and $w_i$ are independent zero-mean Gaussian random variables with variances $\sigma_s^2$ and $\sigma_w^2$, respectively. You do not have to evaluate the power function for this part.

Solution: Again, using the Lagrange multiplier approach discussed in class to derive the Neyman-Pearson lemma, we formulate this as the optimization

$$\max \left\{ \sum_{k=1}^N E[\hat{b}_k b_k] + \lambda \left( q - \sum_{k=1}^N E[\hat{b}_k (1 - b_k)]/M \right) \right\}.$$

Collect terms and, again, for $Q$ a random variable, express $E[Q]$ as $E[E[Q \mid X]]$, where $X$ denotes all the observations:

$$\max\, E\left\{ \sum_{k=1}^N \hat{b}_k \left( E[b_k \mid X] - \frac{\lambda}{M} E[1 - b_k \mid X] \right) \right\} + \lambda q.$$
Now the fact that $M = \sum_i \hat{b}_i$ couples the multichannel tests together. However, if we fix $M$, then the only $\hat{b}_k$'s that should be equal to one are those for which $E[b_k \mid X] - \lambda E[1 - b_k \mid X]/M \ge 0$, or equivalently $\hat{b}_k$ is the decision rule associated with the test

$$\frac{E[b_k \mid X]}{E[1 - b_k \mid X]} \;\overset{\hat{b}_k = 1}{\underset{\hat{b}_k = 0}{\gtrless}}\; \frac{\lambda}{M}.$$

An equivalent form for this decision rule is

$$T_k = \frac{E[1 - b_k \mid X]}{E[b_k \mid X]} = \frac{P(b_k = 0 \mid X)}{P(b_k = 1 \mid X)} \;\overset{\hat{b}_k = 0}{\underset{\hat{b}_k = 1}{\gtrless}}\; \frac{M}{\lambda}.$$

This test can be implemented by rank-ordering all of the scores (also called "posterior odds ratios") $T_k$ in increasing order $T_{(1)} \le \dots \le T_{(N)}$ and finding the first index $M$ at which $T_{(k)}$ goes above the straight line $T_i = i/\lambda$. Only those hypotheses having scores less than $M/\lambda$ should be declared as valid discoveries. This is very similar to the Benjamini-Hochberg False Discovery Rate (FDR) test, and this Bayesian generalization is due to John Storey.
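The rank-ordering procedure just described can be sketched in a few lines. This is one reading of the rule under stated assumptions: the scores $T_k$ and the multiplier $\lambda$ below are hypothetical, and the function name is invented for illustration.

```python
# Sketch of the part (d) rank-ordering rule: sort the posterior odds
# T_k = P(b_k = 0 | X) / P(b_k = 1 | X) in increasing order, find the
# first rank at which T_(k) exceeds the line i / lam, and declare the
# hypotheses whose scores fall below the resulting threshold M / lam.
def rank_order_select(scores, lam):
    order = sorted(range(len(scores)), key=lambda k: scores[k])
    M = 0
    for rank, k in enumerate(order, start=1):
        if scores[k] > rank / lam:        # first crossing of the line i/lam
            break
        M = rank
    return order[:M], M / lam             # declared indices, threshold M/lam

scores = [0.2, 5.0, 0.05, 1.4, 0.6, 9.0]  # hypothetical posterior odds T_k
lam = 2.0
declared, thresh = rank_order_select(scores, lam)
print(sorted(declared), thresh)
```

Channels with small $T_k$ (strong posterior evidence of a signal) are declared first, and the data-dependent threshold $M/\lambda$ adapts to how many discoveries the data support, which is the FDR-like feature of the rule.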
More informationSpring 2012 Math 541B Exam 1
Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote
More informationProbability and Statistics qualifying exam, May 2015
Probability and Statistics qualifying exam, May 2015 Name: Instructions: 1. The exam is divided into 3 sections: Linear Models, Mathematical Statistics and Probability. You must pass each section to pass
More informationSCHOOL OF MATHEMATICS AND STATISTICS. Linear and Generalised Linear Models
SCHOOL OF MATHEMATICS AND STATISTICS Linear and Generalised Linear Models Autumn Semester 2017 18 2 hours Attempt all the questions. The allocation of marks is shown in brackets. RESTRICTED OPEN BOOK EXAMINATION
More informationChapter 4 HOMEWORK ASSIGNMENTS. 4.1 Homework #1
Chapter 4 HOMEWORK ASSIGNMENTS These homeworks may be modified as the semester progresses. It is your responsibility to keep up to date with the correctly assigned homeworks. There may be some errors in
More information10-704: Information Processing and Learning Fall Lecture 24: Dec 7
0-704: Information Processing and Learning Fall 206 Lecturer: Aarti Singh Lecture 24: Dec 7 Note: These notes are based on scribed notes from Spring5 offering of this course. LaTeX template courtesy of
More informationMAS3301 / MAS8311 Biostatistics Part II: Survival
MAS330 / MAS83 Biostatistics Part II: Survival M. Farrow School of Mathematics and Statistics Newcastle University Semester 2, 2009-0 8 Parametric models 8. Introduction In the last few sections (the KM
More informationCh. 5 Hypothesis Testing
Ch. 5 Hypothesis Testing The current framework of hypothesis testing is largely due to the work of Neyman and Pearson in the late 1920s, early 30s, complementing Fisher s work on estimation. As in estimation,
More informationIf there exists a threshold k 0 such that. then we can take k = k 0 γ =0 and achieve a test of size α. c 2004 by Mark R. Bell,
Recall The Neyman-Pearson Lemma Neyman-Pearson Lemma: Let Θ = {θ 0, θ }, and let F θ0 (x) be the cdf of the random vector X under hypothesis and F θ (x) be its cdf under hypothesis. Assume that the cdfs
More informationThe exam is closed book, closed notes except your one-page (two sides) or two-page (one side) crib sheet.
CS 189 Spring 013 Introduction to Machine Learning Final You have 3 hours for the exam. The exam is closed book, closed notes except your one-page (two sides) or two-page (one side) crib sheet. Please
More informationPart III. A Decision-Theoretic Approach and Bayesian testing
Part III A Decision-Theoretic Approach and Bayesian testing 1 Chapter 10 Bayesian Inference as a Decision Problem The decision-theoretic framework starts with the following situation. We would like to
More informationF2E5216/TS1002 Adaptive Filtering and Change Detection. Course Organization. Lecture plan. The Books. Lecture 1
Adaptive Filtering and Change Detection Bo Wahlberg (KTH and Fredrik Gustafsson (LiTH Course Organization Lectures and compendium: Theory, Algorithms, Applications, Evaluation Toolbox and manual: Algorithms,
More informationSPRING 2007 EXAM C SOLUTIONS
SPRING 007 EXAM C SOLUTIONS Question #1 The data are already shifted (have had the policy limit and the deductible of 50 applied). The two 350 payments are censored. Thus the likelihood function is L =
More information3 Joint Distributions 71
2.2.3 The Normal Distribution 54 2.2.4 The Beta Density 58 2.3 Functions of a Random Variable 58 2.4 Concluding Remarks 64 2.5 Problems 64 3 Joint Distributions 71 3.1 Introduction 71 3.2 Discrete Random
More informationChapter 2 Signal Processing at Receivers: Detection Theory
Chapter Signal Processing at Receivers: Detection Theory As an application of the statistical hypothesis testing, signal detection plays a key role in signal processing at receivers of wireless communication
More informationChapter 9: Hypothesis Testing Sections
Chapter 9: Hypothesis Testing Sections 9.1 Problems of Testing Hypotheses 9.2 Testing Simple Hypotheses 9.3 Uniformly Most Powerful Tests Skip: 9.4 Two-Sided Alternatives 9.6 Comparing the Means of Two
More informationECE531 Lecture 4b: Composite Hypothesis Testing
ECE531 Lecture 4b: Composite Hypothesis Testing D. Richard Brown III Worcester Polytechnic Institute 16-February-2011 Worcester Polytechnic Institute D. Richard Brown III 16-February-2011 1 / 44 Introduction
More information1 Glivenko-Cantelli type theorems
STA79 Lecture Spring Semester Glivenko-Cantelli type theorems Given i.i.d. observations X,..., X n with unknown distribution function F (t, consider the empirical (sample CDF ˆF n (t = I [Xi t]. n Then
More informationSTOCHASTIC PROCESSES, DETECTION AND ESTIMATION Course Notes
STOCHASTIC PROCESSES, DETECTION AND ESTIMATION 6.432 Course Notes Alan S. Willsky, Gregory W. Wornell, and Jeffrey H. Shapiro Department of Electrical Engineering and Computer Science Massachusetts Institute
More informationChapter 4. Theory of Tests. 4.1 Introduction
Chapter 4 Theory of Tests 4.1 Introduction Parametric model: (X, B X, P θ ), P θ P = {P θ θ Θ} where Θ = H 0 +H 1 X = K +A : K: critical region = rejection region / A: acceptance region A decision rule
More informationReal time detection through Generalized Likelihood Ratio Test of position and speed readings inconsistencies in automated moving objects
Real time detection through Generalized Likelihood Ratio Test of position and speed readings inconsistencies in automated moving objects Sternheim Misuraca, M. R. Degree in Electronics Engineering, University
More informationReview. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda
Review DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall16 Carlos Fernandez-Granda Probability and statistics Probability: Framework for dealing with
More informationSummary: the confidence interval for the mean (σ 2 known) with gaussian assumption
Summary: the confidence interval for the mean (σ known) with gaussian assumption on X Let X be a Gaussian r.v. with mean µ and variance σ. If X 1, X,..., X n is a random sample drawn from X then the confidence
More informationMidterm Examination. STA 215: Statistical Inference. Due Wednesday, 2006 Mar 8, 1:15 pm
Midterm Examination STA 215: Statistical Inference Due Wednesday, 2006 Mar 8, 1:15 pm This is an open-book take-home examination. You may work on it during any consecutive 24-hour period you like; please
More information2. Variance and Covariance: We will now derive some classic properties of variance and covariance. Assume real-valued random variables X and Y.
CS450 Final Review Problems Fall 08 Solutions or worked answers provided Problems -6 are based on the midterm review Identical problems are marked recap] Please consult previous recitations and textbook
More informationFundamental Probability and Statistics
Fundamental Probability and Statistics "There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are
More informationMultivariate Distributions
IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate
More informationMidterm Exam 1 Solution
EECS 126 Probability and Random Processes University of California, Berkeley: Fall 2015 Kannan Ramchandran September 22, 2015 Midterm Exam 1 Solution Last name First name SID Name of student on your left:
More informationCoding for Digital Communication and Beyond Fall 2013 Anant Sahai MT 1
EECS 121 Coding for Digital Communication and Beyond Fall 2013 Anant Sahai MT 1 PRINT your student ID: PRINT AND SIGN your name:, (last) (first) (signature) PRINT your Unix account login: ee121- Prob.
More informationThis exam contains 13 pages (including this cover page) and 10 questions. A Formulae sheet is provided with the exam.
Probability and Statistics FS 2017 Session Exam 22.08.2017 Time Limit: 180 Minutes Name: Student ID: This exam contains 13 pages (including this cover page) and 10 questions. A Formulae sheet is provided
More informationDetection Theory. Chapter 3. Statistical Decision Theory I. Isael Diaz Oct 26th 2010
Detection Theory Chapter 3. Statistical Decision Theory I. Isael Diaz Oct 26th 2010 Outline Neyman-Pearson Theorem Detector Performance Irrelevant Data Minimum Probability of Error Bayes Risk Multiple
More informationOn the Optimality of Likelihood Ratio Test for Prospect Theory Based Binary Hypothesis Testing
1 On the Optimality of Likelihood Ratio Test for Prospect Theory Based Binary Hypothesis Testing Sinan Gezici, Senior Member, IEEE, and Pramod K. Varshney, Life Fellow, IEEE Abstract In this letter, the
More informationDetecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf
Detecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf Reading: Ch. 5 in Kay-II. (Part of) Ch. III.B in Poor. EE 527, Detection and Estimation Theory, # 5c Detecting Parametric Signals in Noise
More informationIntroduction to Detection Theory
Introduction to Detection Theory Detection Theory (a.k.a. decision theory or hypothesis testing) is concerned with situations where we need to make a decision on whether an event (out of M possible events)
More informationPractical Statistics
Practical Statistics Lecture 1 (Nov. 9): - Correlation - Hypothesis Testing Lecture 2 (Nov. 16): - Error Estimation - Bayesian Analysis - Rejecting Outliers Lecture 3 (Nov. 18) - Monte Carlo Modeling -
More informationProblem Selected Scores
Statistics Ph.D. Qualifying Exam: Part II November 20, 2010 Student Name: 1. Answer 8 out of 12 problems. Mark the problems you selected in the following table. Problem 1 2 3 4 5 6 7 8 9 10 11 12 Selected
More informationQuiz 2 Date: Monday, November 21, 2016
10-704 Information Processing and Learning Fall 2016 Quiz 2 Date: Monday, November 21, 2016 Name: Andrew ID: Department: Guidelines: 1. PLEASE DO NOT TURN THIS PAGE UNTIL INSTRUCTED. 2. Write your name,
More information