Real time detection through Generalized Likelihood Ratio Test of position and speed readings inconsistencies in automated moving objects
Sternheim Misuraca, M. R.
Degree in Electronics Engineering, University of Buenos Aires, Argentina
Project Engineer, Sistemas Industriales, ABB Argentina

Abstract

In the present work we describe a system to detect inconsistencies between the position and speed readings of automated moving objects, in order to prevent collisions. The system is implemented through a Generalized Likelihood Ratio Test (GLRT) strategy: position and speed measurements are used as inputs of a hypothesis-testing system for consistency checking. The type I error (false alarm) probability can be specified and set, while the type II error (mis-detection) probability is minimized. We also show the results of the successful implementation of this strategy in the trans-elevator control system we installed in April 2013 in the Distribution Center of Molinos Río de la Plata in Barracas, Buenos Aires, Argentina.

1 Introduction and problem set-up

There are several industrial applications where automated moving objects, such as cranes and trans-elevators, rely on position and speed readings to control their movement. Independent position and speed measurements are easily implemented through laser positioning, bar codes or encoders. However, they are usually noisy, leaving consistency-checking systems open to errors such as stating that the readings are inconsistent when they are consistent (type I, or false alarm, error), or stating that the readings are consistent when they are not (type II, or mis-detection, error). We examine the Distribution Center of Molinos Río de la Plata S.A., located in Barracas, Buenos Aires, Argentina, where we implemented the system in the trans-elevator control program we developed and installed in April 2013. The Distribution Center consists of a vertical warehouse that stores pallets carrying comestible loads of up to one ton.
The warehouse has three corridors, each 100 m long by 12 m high, with capacity for 2200 pallets. Through each corridor, a fully automated two-ton trans-elevator moves at 10 km/h (see figure 1).
In the event of a mechanical or electronic issue in the position or speed readings used for positioning control, it is essential that the control system stop the device in order to protect the people and the facilities involved.

Fig. 1: Empty trans-elevator moving through the warehouse

In this work we present a hypothesis-testing strategy based on the Generalized Likelihood Ratio Test (GLRT). The system runs in real time with negligible computational burden, and allows the false alarm probability to be set while minimizing the mis-detection error probability. The aforementioned trans-elevators are automated by means of an AC800M controller, model PM851. Movement is controlled individually for each axis by means of an attached ACS800 drive controlling the corresponding motor. The drive receives motor speed measurements through an incremental encoder coupled to the motor axis, and position measurements of the trans-elevator cage through a different incremental encoder coupled to an independent mechanical system (elevation is taken from a wire, and horizontal movement from a timing chain). We added the consistency-check layer to the existing control system without adding computational burden to the controller. In the next section we detail the
implemented algorithm and we compare it with more conventional integration and differentiation methods to address this issue.

2 Mathematical model

We start by modeling the available measurements. Time is discretized according to the execution time of the controller's task running the program, which we will call $\Delta t$. Time is referenced by a subindex (e.g. $x_i$). For each axis, at instant $i$, independent position ($\tilde{x}_i$) and speed ($\tilde{v}_i$) measurements are available. They are, however, noisy, with mean equal to the actual position and speed, respectively. Noise statistics are approximated by a Gaussian distribution (due to it being mathematically easy to use). We thus have

$$\tilde{x}_i = x_i + \eta_i \quad (1)$$
$$\tilde{v}_i = v_i + \nu_i \quad (2)$$

with $\eta_i \in \mathbb{R}$ and $\nu_i \in \mathbb{R}$ Gaussian, zero mean, with variances $\sigma_x^2$ and $\sigma_v^2$ respectively. We assume they are both ergodic processes (measurement noises are uncorrelated if measurements are taken at different times, though the statistical parameters are equal). Next we derive the most common approaches to this problem, along with the issues they present (differentiation and integration), and finally we describe the method we chose to implement, the generalized likelihood ratio test.

2.1 Differentiation strategy

The simplest strategy to check for consistency is to approximate the derivative of the position via finite differences, and then compare it to the speed measurement. We start by defining

$$\frac{dx}{dt}(i) \approx \frac{\tilde{x}_i - \tilde{x}_{i-1}}{\Delta t} \quad (3)$$
$$\Lambda = \frac{\tilde{x}_i - \tilde{x}_{i-1}}{\Delta t} - \tilde{v}_i \quad (4)$$

Now we define $\epsilon > 0$ as the bound for the difference between the measured speed and the speed approximated through derivatives. Thus, when $|\Lambda| > \epsilon$, the inconsistency warning is set. This strategy presents several issues. First, position measurement noise is amplified, since (3) is a high-pass filter, whose effect is to filter out the mean and amplify the random component.
Since successive position measurements are taken very fast compared to the trans-elevator speed, the means are similar, rendering the strategy vulnerable to the filter. Moreover, since the subtraction is divided by a very small factor ($\Delta t$), the noise is amplified even further. Secondly, the speed measurement $\tilde{v}_i$ is unfiltered, so its noise is not mitigated, which is not desirable. Finally, there is no control over the error probabilities.
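The amplification can be illustrated with a short simulation (a sketch with illustrative noise levels and a constant true speed; the numeric values are our assumptions, not the plant's actual parameters):

```python
import random

random.seed(0)
dt, sigma_x, sigma_v = 0.060, 0.001, 0.0001  # illustrative: 60 ms task, 1 mm, 0.1 mm/s
v_true = 1.0                                  # constant true speed, m/s

# Perfectly consistent trajectory, noisy readings, finite-difference discriminant (4).
lams = []
x_prev = random.gauss(0.0, sigma_x)
for i in range(1, 10_000):
    x_i = i * dt * v_true + random.gauss(0.0, sigma_x)  # noisy position reading
    v_i = v_true + random.gauss(0.0, sigma_v)           # noisy speed reading
    lams.append((x_i - x_prev) / dt - v_i)              # eq. (4)
    x_prev = x_i

std = (sum(l * l for l in lams) / len(lams)) ** 0.5
# Analytic prediction: sqrt(2*sigma_x**2/dt**2 + sigma_v**2), i.e. the position
# noise enters scaled by sqrt(2)/dt, dominating the raw speed noise sigma_v.
```

Even though the readings are perfectly consistent on average, the discriminant fluctuates with a standard deviation two orders of magnitude above $\sigma_v$, so any usable bound $\epsilon$ must be large, defeating the check.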
2.2 Integration strategy

Instead of differentiating the position to compare it with the speed, we can integrate the speed to compare it with the position. The speed measurement, at each instant, is multiplied by the time interval $\Delta t$ (and by a factor $\rho$ if the speed needs scaling), and the products are accumulated to approximate the integral of the speed. We then calculate the discriminant as

$$\Lambda = (\tilde{x}_N - \tilde{x}_1) - \rho\,\Delta t \sum_{i=1}^{N} \tilde{v}_i$$

Again, the decision strategy boils down to comparing the absolute value of the discriminant with the bound $\epsilon$: if it is greater, we decide there is an inconsistency in the measurements. This strategy is better than differentiating, since the speed measurement noise is mitigated. The proof lies in averaging the zero-mean noise components. Due to eq. (2), we have

$$\frac{1}{N}\sum_{i=1}^{N} \tilde{v}_i = \frac{1}{N}\sum_{i=1}^{N} v_i + \frac{1}{N}\sum_{i=1}^{N} \nu_i = \frac{1}{N}\sum_{i=1}^{N} v_i + \hat{\mu}_\nu$$

Since adding independent, identically distributed (i.i.d.) samples and dividing by the number of samples is an unbiased estimator of the mean (recall that the measurement noise is ergodic by hypothesis), and $\mu_\nu = 0$ by hypothesis, the noise effect is mitigated. Another improvement is that the position measurement noise is not as highly amplified, since the means of positions separated by the data window ($N$) are bound to be less similar (unless the trans-elevator is not moving at all). Besides, we are no longer dividing by the sampling factor $\Delta t$. We must underline the fact that, as in the previous case, we do not have the means to quantify the errors, much less set them according to specifications.

2.3 Likelihood ratio strategy

Constructing the sample set. To overcome the problems inherent to integration, we will use hypothesis testing. We start by defining the random variable $z$ as follows:

$$z_N = (\tilde{x}_N - \tilde{x}_1) - \rho\,\Delta t \sum_{i=1}^{N} \tilde{v}_i \quad (5)$$

This random variable is Gaussian, since it is a linear combination of independent Gaussian variables, with mean equal to the linear combination of their means, and variance equal to the sum of the individual variances weighted by the squared coefficients.
Since the linear combination of the means is equal to zero (in normal conditions, the mean value of the integral of the speed is equal to the position difference), we have

$$z \sim \mathcal{N}(0,\; 2\sigma_x^2 + \rho^2 \Delta t^2 N \sigma_v^2) = \mathcal{N}(0, \sigma_z^2) \quad (6)$$

Using (5) we build the sample set $\{z_1, \ldots, z_N\}$ of i.i.d. variables. The samples will be i.i.d. if the data windows are independent. However, this delays sample production, so data overlapping is permitted if required by the application's sampling-speed requirement, assuming i.i.d. samples to simplify the mathematical model.

Generalized likelihood ratio test (GLRT). Next we define two hypotheses. The null hypothesis is that the mean of $z$ is zero, which corresponds to consistency between the speed and position measurements. The alternative hypothesis is that the mean is non-zero, which signifies that there is a deterministic difference between the speed and position measurement dynamics, and thus an inconsistency in the data, which might be caused by electrical or mechanical issues. Formalizing:

$$H_0: \mu_z = 0 \qquad H_A: \mu_z \neq 0$$

To contrast these hypotheses, we use the Likelihood Ratio Test, which consists of obtaining the discriminant by comparing the probability of obtaining the sample set conditioned on the null hypothesis or on the alternative hypothesis. At this point, we must redefine the alternative hypothesis (to avoid comparing the zero-mean probability against every possible mean): the alternative mean is now equal to the maximum likelihood estimator of the mean computed from the sample set, which is the data set average. We do not lose generality in the process, in the sense that even though the alternative hypothesis will always be more likely, the key is to accept the null hypothesis when the alternative is not likely enough. Later we will see that the error probability, which can now be computed, will help us set the bound to choose between the alternative and null hypotheses. See [1] and [2] for details of this detection strategy. Next we formally redefine the hypotheses:

$$H_0: \mu_z = 0 \qquad H_A: \mu_z = \hat{\mu}_z^{MLE} = \frac{1}{N}\sum_{i=1}^{N} z_i$$

$$\Lambda_0 = \frac{p(z_1 \ldots z_N \mid H_0)}{p(z_1 \ldots z_N \mid H_A)} = \frac{p(z_1 \ldots z_N \mid \mu_z = 0)}{p(z_1 \ldots z_N \mid \mu_z = \hat{\mu}_z^{MLE})} \quad (7)$$

Since the samples are independent and Gaussian, we write
$$\Lambda_0 = \frac{\prod_{i=1}^{N} \frac{1}{\sigma_z\sqrt{2\pi}}\, e^{-\frac{z_i^2}{2\sigma_z^2}}}{\prod_{i=1}^{N} \frac{1}{\sigma_z\sqrt{2\pi}}\, e^{-\frac{(z_i - \hat{\mu}_z^{MLE})^2}{2\sigma_z^2}}}$$

Taking logarithms and simplifying constants, we have

$$\Lambda_1 = -\sum_{i=1}^{N} \frac{z_i^2}{2\sigma_z^2} + \sum_{i=1}^{N} \frac{(z_i - \hat{\mu}_z^{MLE})^2}{2\sigma_z^2}$$

Multiplying by $-2\sigma_z^2$ and expanding $(z_i - \hat{\mu}_z^{MLE})^2$, we obtain

$$\Lambda_2 = N\,(\hat{\mu}_z^{MLE})^2 = \frac{1}{N}\left(\sum_{i=1}^{N} z_i\right)^2$$

Finally, multiplying by the constant $N$ and taking the square root (note that $\Lambda_1$ is always negative or zero, since $\Lambda_0 \leq 1$ because the denominator of (7) is always greater than or equal to the numerator by definition of the maximum likelihood estimator), we define

$$\Lambda = \left|\sum_{i=1}^{N} z_i\right| \quad (8)$$

The discriminant $\Lambda$ is then compared with $\epsilon$:

If $\Lambda \leq \epsilon$, the null hypothesis is accepted and the system is branded consistent.
If $\Lambda > \epsilon$, the null hypothesis is rejected and the system is branded inconsistent.

The most important difference with respect to the previously described methods is that we can now compute the probability of false alarm (type I error) $\alpha$, that is, the probability of $\Lambda$ exceeding the decision boundary conditioned on $z$ having zero mean (the inconsistency warning is set even though the system is consistent):

$$\alpha = p(\Lambda > \epsilon \mid \mu_z = 0)$$

The statistics of $\Lambda$ can be derived from the statistics of $z$. According to (8), $\Lambda$ is the absolute value of a zero-mean Gaussian variable with variance $N\sigma_z^2$ (see (6)), namely a Folded Normal Distribution. The variance is a function of the linear-angular speed ratio $\rho$, the sampling time $\Delta t$, and the standard deviations $\sigma_x$ and $\sigma_v$. Applying the same principle, we derive a mis-detection error bound $\beta_{Det}$, the probability of $\Lambda$ being smaller than the decision boundary given a non-zero mean (the readings are inconsistent but the warning is not set):

$$\beta_{Det} = p(\Lambda \leq \epsilon \mid \mu_z > \mu_{\beta_{Det}})$$
According to the Neyman-Pearson theorem ([1]), the likelihood ratio test minimizes the mis-detection error once the false alarm error is set, which renders the method optimal when the latter is set by the system's specifications. High values of mis-detection error are more acceptable than high values of false alarm error. The reason is that monitoring is done in real time (a check at each time instant), so the probability of a type II error persisting through $k$ instants is $\beta_{Det}^k$. This is why the type I (false alarm) error should be set to a small value, and the type II error then minimized. Details on how to choose the bound for the discriminant are given in the next section. To minimize the global error, Bayesian estimation methods should be used, which require a priori knowledge of the class probability density functions (i.e. the a priori probability of the system being consistent/inconsistent); see [2] for details.

3 Implementation

3.1 Algorithm description and computational burden

The algorithm consists of building the position and speed measurement vectors, and the sample vector, to decide on system consistency/inconsistency:

1. Initialize the position and speed measurement vectors, and the sample vector z, of size N. Initialize the cumulative sum of speed measurements and the cumulative sum of samples z.
2. Refresh the measurement vectors, eliminating the oldest measurement and adding the newest one. By the same token, refresh the cumulative speed measurement.
3. Compute a new sample z, according to (5). Refresh the sample vector and the cumulative sum of samples z, as in the previous step.
4. Take the absolute value of the cumulative sum of z to build the discriminant Λ, according to (8), and compare it to the decision boundary ε. Set the inconsistency warning if the discriminant surpasses the decision boundary.
5. Increment time and go back to step 2.

The algorithm is O(1) with respect to the vector size, since the number of operations is independent of that size.
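The steps above can be sketched in a few lines (a minimal illustration, not the actual AC800M controller program; the parameter values, the class and variable names, and the choice to integrate the speed over the intervals spanned by the position window are our assumptions):

```python
from collections import deque

class ConsistencyChecker:
    """O(1)-per-step GLRT consistency check between position and speed readings."""

    def __init__(self, n=20, dt=0.060, rho=1.0, epsilon=0.15):
        self.dt, self.rho, self.epsilon = dt, rho, epsilon
        self.win = deque(maxlen=n)  # (position, cumulative speed integral) pairs
        self.integral = 0.0         # running integral of scaled speed
        self.z = deque(maxlen=n)    # sample window
        self.z_sum = 0.0            # running sum of samples (for Lambda)

    def step(self, x_meas, v_meas):
        """Push one (position, speed) reading; return True on inconsistency warning."""
        # Accumulate the speed integral and refresh the measurement window (step 2).
        self.integral += self.rho * self.dt * v_meas
        self.win.append((x_meas, self.integral))
        if len(self.win) < self.win.maxlen:
            return False  # window not full yet
        # New sample z: position change minus speed integral over the window, eq. (5).
        x_old, i_old = self.win[0]
        z_new = (x_meas - x_old) - (self.integral - i_old)
        # Refresh the sample window and its running sum in O(1) (step 3).
        if len(self.z) == self.z.maxlen:
            self.z_sum -= self.z[0]
        self.z.append(z_new)
        self.z_sum += z_new
        # Discriminant Lambda = |sum of z_i|, eq. (8), against the boundary (step 4).
        return abs(self.z_sum) > self.epsilon
```

With consistent readings the samples stay near zero and no warning is raised; a sustained bias between the two readings (e.g. a failed encoder) accumulates in the sample sum and trips the boundary within a few task cycles.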
The system is therefore implementable in real time without burdening the process controller.

3.2 Practical considerations

Next we detail the implementation of the system in the elevation axis of the trans-elevators in the Distribution Center of Molinos Río de la Plata S.A., located in Barracas, Buenos Aires, Argentina. The procedure for horizontal movement is analogous. The linear-angular speed ratio $\rho$ is identified by setting the trans-elevator to move at constant speed, so as to have a constant movement difference and a constant speed integral. We selected $\Delta t = 60$ ms and vector size $N = 20$. We approximated the standard deviation of the speed measurement as $\sigma_v = 0.1$ mHz, and of the position measurement as
$\sigma_x = 1$ mm.

3.3 Type I and II error selection

Using the data of the previous section, we obtained curves that allow the false alarm and mis-detection errors to be set. First we show the false alarm error probability: setting the decision boundary to $\epsilon = 0.15$ determines a type I error $\alpha < 1\%$. For this type of error, increasing the decision boundary improves performance.

Fig. 2: Estimation of $P(\Lambda > x \mid H_0)$

Finally, we show the detection error probability: the same decision boundary $\epsilon = 0.15$ determines a mis-detection error, sustained over ten instants (600 ms) for a 15 cm displacement, of $\beta_{15cm}^{10} = 0.1\%$. For this type of error, lowering the decision boundary improves performance.

Fig. 3: Estimation of $P(\Lambda < x \mid H_A)$

It is not possible to lower both error probabilities simultaneously. However, due to the Neyman-Pearson theorem, fixing one type of error minimizes the other.
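Under the i.i.d. Gaussian model, the false alarm probability has the closed form of a folded normal tail, $\alpha = 2\,(1 - \Phi(\epsilon / (\sqrt{N}\,\sigma_z)))$, so the boundary can also be checked numerically. A sketch of that computation follows (the parameter values are illustrative, and $\rho$, which is identified on site and not reported here, is assumed to be 1; correlation between overlapping windows is ignored, as in the text):

```python
from math import erf, sqrt

def sigma_z(sigma_x, sigma_v, rho, dt, n):
    """Std of one sample z, per eq. (6): var = 2*sx^2 + (rho*dt)^2 * N * sv^2."""
    return sqrt(2 * sigma_x**2 + (rho * dt)**2 * n * sigma_v**2)

def false_alarm_prob(eps, sigma_x, sigma_v, rho, dt, n):
    """alpha = P(|G| > eps) with G ~ N(0, N*sigma_z^2): folded normal tail."""
    s = sqrt(n) * sigma_z(sigma_x, sigma_v, rho, dt, n)
    phi = 0.5 * (1 + erf(eps / (s * sqrt(2))))  # standard normal CDF at eps/s
    return 2 * (1 - phi)
```

Sweeping `eps` upward until `false_alarm_prob` drops below the specified type I error reproduces the boundary-selection procedure; the mis-detection curve is then evaluated at that fixed boundary.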
In this case, selecting $\epsilon = 0.15$ renders a false alarm error of $\alpha < 1\%$ and a mis-detection error lower bound (for a displacement of 15 cm) of $\beta_{15cm} = 50\%$. Recall that the probability of a detection error for a displacement over 15 cm lasting over 600 ms is $\beta_{15cm}^{10} = 0.1\%$. A distance of 15 cm is an acceptable stopping margin for the trans-elevator in the event of an inconsistency in the speed and position readings. Tests conducted on site supported the simulated results: in the event of induced encoder failures, the trans-elevators stopped safely and reported inconsistency warnings.

4 Conclusions

We have implemented, in real time, a GLRT strategy to detect inconsistencies in the position and speed readings of an automated moving object, with minimal computational and hardware requirements. The implemented system constitutes a safety layer to avoid collisions; its performance can be quantified, and the false alarm error can be adjusted to meet the application requirements, while the mis-detection error is optimally minimized. Finally, we described the solution implemented in the trans-elevator system of the Distribution Center of Molinos Río de la Plata S.A., located in Barracas, Buenos Aires, Argentina.

Acknowledgments

To Alejandro Carrasco, Engineering Manager of Industrial Systems, ABB Argentina, for his assistance in the writing of the present paper. To Juan Pablo Giuttari, Service Engineer, ABB Argentina, for his assistance in the testing process.

5 Bibliography

[1] S. Kay, Fundamentals of Statistical Signal Processing, Vol. II: Detection Theory. New Jersey: Prentice Hall, 1998.
[2] R. Duda, P. Hart, and D. Stork, Pattern Classification, 2nd Ed. New York: John Wiley & Sons, Inc., 2001.
[3] S. Kay, Fundamentals of Statistical Signal Processing, Vol. I: Estimation Theory. Prentice Hall, 1993.
[4] T. Kailath, A. H. Sayed, and B. Hassibi, Linear Estimation. New Jersey: Prentice Hall, 2000.
Chap 4. Software Reliability 4.2 Reliability Growth 1. Introduction 2. Reliability Growth Models 3. The Basic Execution Model 4. Calendar Time Computation 5. Reliability Demonstration Testing 1. Introduction
More informationM(t) = 1 t. (1 t), 6 M (0) = 20 P (95. X i 110) i=1
Math 66/566 - Midterm Solutions NOTE: These solutions are for both the 66 and 566 exam. The problems are the same until questions and 5. 1. The moment generating function of a random variable X is M(t)
More informationCourse content (will be adapted to the background knowledge of the class):
Biomedical Signal Processing and Signal Modeling Lucas C Parra, parra@ccny.cuny.edu Departamento the Fisica, UBA Synopsis This course introduces two fundamental concepts of signal processing: linear systems
More informationDecentralized Sequential Hypothesis Testing. Change Detection
Decentralized Sequential Hypothesis Testing & Change Detection Giorgos Fellouris, Columbia University, NY, USA George V. Moustakides, University of Patras, Greece Outline Sequential hypothesis testing
More informationSTAT 135 Lab 7 Distributions derived from the normal distribution, and comparing independent samples.
STAT 135 Lab 7 Distributions derived from the normal distribution, and comparing independent samples. Rebecca Barter March 16, 2015 The χ 2 distribution The χ 2 distribution We have seen several instances
More informationLecture 8: Signal Detection and Noise Assumption
ECE 830 Fall 0 Statistical Signal Processing instructor: R. Nowak Lecture 8: Signal Detection and Noise Assumption Signal Detection : X = W H : X = S + W where W N(0, σ I n n and S = [s, s,..., s n ] T
More informationCOS513 LECTURE 8 STATISTICAL CONCEPTS
COS513 LECTURE 8 STATISTICAL CONCEPTS NIKOLAI SLAVOV AND ANKUR PARIKH 1. MAKING MEANINGFUL STATEMENTS FROM JOINT PROBABILITY DISTRIBUTIONS. A graphical model (GM) represents a family of probability distributions
More informationBasic Concepts of Inference
Basic Concepts of Inference Corresponds to Chapter 6 of Tamhane and Dunlop Slides prepared by Elizabeth Newton (MIT) with some slides by Jacqueline Telford (Johns Hopkins University) and Roy Welsch (MIT).
More informationInformation Theory and Hypothesis Testing
Summer School on Game Theory and Telecommunications Campione, 7-12 September, 2014 Information Theory and Hypothesis Testing Mauro Barni University of Siena September 8 Review of some basic results linking
More informationBEST TESTS. Abstract. We will discuss the Neymann-Pearson theorem and certain best test where the power function is optimized.
BEST TESTS Abstract. We will discuss the Neymann-Pearson theorem and certain best test where the power function is optimized. 1. Most powerful test Let {f θ } θ Θ be a family of pdfs. We will consider
More informationNotes on the Multivariate Normal and Related Topics
Version: July 10, 2013 Notes on the Multivariate Normal and Related Topics Let me refresh your memory about the distinctions between population and sample; parameters and statistics; population distributions
More informationsimple if it completely specifies the density of x
3. Hypothesis Testing Pure significance tests Data x = (x 1,..., x n ) from f(x, θ) Hypothesis H 0 : restricts f(x, θ) Are the data consistent with H 0? H 0 is called the null hypothesis simple if it completely
More informationLecture Notes on the Gaussian Distribution
Lecture Notes on the Gaussian Distribution Hairong Qi The Gaussian distribution is also referred to as the normal distribution or the bell curve distribution for its bell-shaped density curve. There s
More informationAutomatic Differentiation Equipped Variable Elimination for Sensitivity Analysis on Probabilistic Inference Queries
Automatic Differentiation Equipped Variable Elimination for Sensitivity Analysis on Probabilistic Inference Queries Anonymous Author(s) Affiliation Address email Abstract 1 2 3 4 5 6 7 8 9 10 11 12 Probabilistic
More informationParameter Estimation, Sampling Distributions & Hypothesis Testing
Parameter Estimation, Sampling Distributions & Hypothesis Testing Parameter Estimation & Hypothesis Testing In doing research, we are usually interested in some feature of a population distribution (which
More informationUni- and Bivariate Power
Uni- and Bivariate Power Copyright 2002, 2014, J. Toby Mordkoff Note that the relationship between risk and power is unidirectional. Power depends on risk, but risk is completely independent of power.
More informationF79SM STATISTICAL METHODS
F79SM STATISTICAL METHODS SUMMARY NOTES 9 Hypothesis testing 9.1 Introduction As before we have a random sample x of size n of a population r.v. X with pdf/pf f(x;θ). The distribution we assign to X is
More informationMachine Learning 2017
Machine Learning 2017 Volker Roth Department of Mathematics & Computer Science University of Basel 21st March 2017 Volker Roth (University of Basel) Machine Learning 2017 21st March 2017 1 / 41 Section
More informationLecture 7 Introduction to Statistical Decision Theory
Lecture 7 Introduction to Statistical Decision Theory I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 20, 2016 1 / 55 I-Hsiang Wang IT Lecture 7
More informationSolution: chapter 2, problem 5, part a:
Learning Chap. 4/8/ 5:38 page Solution: chapter, problem 5, part a: Let y be the observed value of a sampling from a normal distribution with mean µ and standard deviation. We ll reserve µ for the estimator
More informationOutline Lecture 2 2(32)
Outline Lecture (3), Lecture Linear Regression and Classification it is our firm belief that an understanding of linear models is essential for understanding nonlinear ones Thomas Schön Division of Automatic
More informationSimple and Multiple Linear Regression
Sta. 113 Chapter 12 and 13 of Devore March 12, 2010 Table of contents 1 Simple Linear Regression 2 Model Simple Linear Regression A simple linear regression model is given by Y = β 0 + β 1 x + ɛ where
More informationDetection & Estimation Lecture 1
Detection & Estimation Lecture 1 Intro, MVUE, CRLB Xiliang Luo General Course Information Textbooks & References Fundamentals of Statistical Signal Processing: Estimation Theory/Detection Theory, Steven
More informationUncertainty. Jayakrishnan Unnikrishnan. CSL June PhD Defense ECE Department
Decision-Making under Statistical Uncertainty Jayakrishnan Unnikrishnan PhD Defense ECE Department University of Illinois at Urbana-Champaign CSL 141 12 June 2010 Statistical Decision-Making Relevant in
More informationSelecting an optimal set of parameters using an Akaike like criterion
Selecting an optimal set of parameters using an Akaike like criterion R. Moddemeijer a a University of Groningen, Department of Computing Science, P.O. Box 800, L-9700 AV Groningen, The etherlands, e-mail:
More informationMachine Learning, Midterm Exam
10-601 Machine Learning, Midterm Exam Instructors: Tom Mitchell, Ziv Bar-Joseph Wednesday 12 th December, 2012 There are 9 questions, for a total of 100 points. This exam has 20 pages, make sure you have
More informationStatistical Data Analysis Stat 3: p-values, parameter estimation
Statistical Data Analysis Stat 3: p-values, parameter estimation London Postgraduate Lectures on Particle Physics; University of London MSci course PH4515 Glen Cowan Physics Department Royal Holloway,
More informationIntroduction 1. STA442/2101 Fall See last slide for copyright information. 1 / 33
Introduction 1 STA442/2101 Fall 2016 1 See last slide for copyright information. 1 / 33 Background Reading Optional Chapter 1 of Linear models with R Chapter 1 of Davison s Statistical models: Data, and
More informationInstitute of Actuaries of India
Institute of Actuaries of India Subject CT3 Probability & Mathematical Statistics May 2011 Examinations INDICATIVE SOLUTION Introduction The indicative solution has been written by the Examiners with the
More informationIntroduction to Machine Learning CMU-10701
Introduction to Machine Learning CMU-10701 2. MLE, MAP, Bayes classification Barnabás Póczos & Aarti Singh 2014 Spring Administration http://www.cs.cmu.edu/~aarti/class/10701_spring14/index.html Blackboard
More informationSeismic Analysis of Structures Prof. T.K. Datta Department of Civil Engineering Indian Institute of Technology, Delhi. Lecture 03 Seismology (Contd.
Seismic Analysis of Structures Prof. T.K. Datta Department of Civil Engineering Indian Institute of Technology, Delhi Lecture 03 Seismology (Contd.) In the previous lecture, we discussed about the earth
More informationLearning Methods for Linear Detectors
Intelligent Systems: Reasoning and Recognition James L. Crowley ENSIMAG 2 / MoSIG M1 Second Semester 2011/2012 Lesson 20 27 April 2012 Contents Learning Methods for Linear Detectors Learning Linear Detectors...2
More informationParametric Techniques Lecture 3
Parametric Techniques Lecture 3 Jason Corso SUNY at Buffalo 22 January 2009 J. Corso (SUNY at Buffalo) Parametric Techniques Lecture 3 22 January 2009 1 / 39 Introduction In Lecture 2, we learned how to
More informationMonte Carlo. Lecture 15 4/9/18. Harvard SEAS AP 275 Atomistic Modeling of Materials Boris Kozinsky
Monte Carlo Lecture 15 4/9/18 1 Sampling with dynamics In Molecular Dynamics we simulate evolution of a system over time according to Newton s equations, conserving energy Averages (thermodynamic properties)
More informationDesigning Information Devices and Systems I Fall 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way
EECS 16A Designing Information Devices and Systems I Fall 018 Lecture Notes Note 1 1.1 Introduction to Linear Algebra the EECS Way In this note, we will teach the basics of linear algebra and relate it
More informationEstimating the accuracy of a hypothesis Setting. Assume a binary classification setting
Estimating the accuracy of a hypothesis Setting Assume a binary classification setting Assume input/output pairs (x, y) are sampled from an unknown probability distribution D = p(x, y) Train a binary classifier
More informationIf we want to analyze experimental or simulated data we might encounter the following tasks:
Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction
More informationp(x ω i 0.4 ω 2 ω
p( ω i ). ω.3.. 9 3 FIGURE.. Hypothetical class-conditional probability density functions show the probability density of measuring a particular feature value given the pattern is in category ω i.if represents
More informationCentral Limit Theorem ( 5.3)
Central Limit Theorem ( 5.3) Let X 1, X 2,... be a sequence of independent random variables, each having n mean µ and variance σ 2. Then the distribution of the partial sum S n = X i i=1 becomes approximately
More informationGrowing Window Recursive Quadratic Optimization with Variable Regularization
49th IEEE Conference on Decision and Control December 15-17, Hilton Atlanta Hotel, Atlanta, GA, USA Growing Window Recursive Quadratic Optimization with Variable Regularization Asad A. Ali 1, Jesse B.
More information