Hazard Function, Failure Rate, and A Rule of Thumb for Calculating Empirical Hazard Function of Continuous-Time Failure Data

Feng-feng Li 1,2, Gang Xie 1,2, Yong Sun 1,2, Lin Ma 1,2

1 CRC for Infrastructure and Engineering Asset Management (CIEAM)
2 School of Engineering Systems & Mathematical Sciences, Queensland University of Technology, Brisbane, Australia

Abstract

The hazard function plays an essential role in engineering reliability studies. Distribution-free hazard rate values calculated from observed sample data define the empirical hazard function. A theoretically sound and accurate empirical hazard function may be used directly to analyse the lifetime distribution of continuous-time failure data, or may serve as a basis for further parametric modelling in asset management. To help bridge the gap between academic theory and data analysis practice, this paper starts by clarifying the relationship between the concepts of hazard function and failure rate. Two often-used empirical hazard function formulas for continuous-time data are then derived directly by discretising the theoretic definitions of the hazard function. The properties of these two formulas are investigated, and their estimation performance against the true hazard function values is compared using simulation samples from an exponential and a Weibull distribution. It is found that one formula calculates the average hazard rate over a specified time interval while the other underestimates the true hazard function values; however, we also show that in most cases the relative error of the underestimation is less than 6%. Both formulas are valid for right censored data and, under certain conditions, for left and interval censored data too. The simulation results show that the average-hazard formula always gives more accurate estimates while the other consistently underestimates, matching the theoretic conclusions completely.
Based on the results of this study, we propose a rule of thumb for the application of these two most often-used empirical hazard function formulas in data analysis practice.

Keywords: hazard function; failure rate; empirical hazard function; continuous-time failure data.

1 Introduction

The hazard function plays an essential role in the application of probability theory to engineering reliability studies. For example, the Mean Time To Failure (MTTF) is calculated

as the inverse of the hazard rate if we assume the asset system lifetime distribution is exponential. In the data analysis stage of asset management, however, the term failure rate is more often used when we try to work out the MTTF. In a sense, there is a gap between probability theory and data analysis when we talk about hazard function and failure rate, because people can be confused by questions like: are these two terms interchangeable; if yes, why not just use one of them; and if they are different, what are the differences? A short answer is: the hazard, or hazard rate, h_i ≡ h(t_i) is the instantaneous failure rate (for non-repairable asset systems) at a time instant t_i, i = 1, 2, .... However, when we talk about failure rate in data analysis, it is more often shorthand for the Average Failure Rate (AFR) over a time period t_2 − t_1 (assuming 0 ≤ t_1 < t_2). The AFR can be calculated using the formula [8]

    AFR = (1/(t_2 − t_1)) ∫_{t_1}^{t_2} h(u) du.    (1)

Equation (1) is nothing but the average hazard function formula, which is considered the most typical estimate of the true hazard function values [5]. Therefore, we need an empirical hazard function formula so that we can estimate the hazard function h(t) from observed sample data.

We may treat sample failure time data as discrete data, i.e. we consider the observed sample failure times as events that occur at pre-assigned times 0 ≤ t_1 < t_2 < ..., and assume that under a parametric model of interest the hazard function at t_i is h_i = h_i(θ). Let us consider a set of intervals I_i = [t_i, t_{i+1}) covering [0, ∞) for an engineering asset system with N functional components at t = 0. Let us also denote d_i = N(t_i) − N(t_{i+1}), where N(t_i) and N(t_{i+1}) are the numbers of components which are functional at times t_i and t_{i+1}, respectively. Then d_i is the number of failures in interval I_i, and r_i ≡ N(t_i) is the number of components at risk (i.e. having the potential to fail) at t_i.
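Equation (1) is easy to check numerically. The sketch below (in Python rather than the R code used for the analyses later in the paper; the Weibull parameters and interval endpoints are illustrative only) approximates the AFR integral with a midpoint rule and compares it with the closed form (H(t_2) − H(t_1))/(t_2 − t_1) available for the Weibull distribution:

```python
import math

def weibull_hazard(t, shape, scale):
    # Weibull hazard: h(t) = (shape/scale) * (t/scale)^(shape - 1)
    return (shape / scale) * (t / scale) ** (shape - 1)

def afr(h, t1, t2, n=100000):
    # Equation (1): AFR = (1/(t2 - t1)) * integral of h(u) du over [t1, t2],
    # approximated here with a simple midpoint rule
    dt = (t2 - t1) / n
    return sum(h(t1 + (k + 0.5) * dt) for k in range(n)) * dt / (t2 - t1)

shape, scale = 1.8, 30.0          # illustrative parameters
t1, t2 = 10.0, 20.0               # illustrative time interval
afr_numeric = afr(lambda t: weibull_hazard(t, shape, scale), t1, t2)
# For the Weibull, the cumulative hazard is H(t) = (t/scale)^shape,
# so the AFR has the closed form (H(t2) - H(t1)) / (t2 - t1).
afr_closed = ((t2 / scale) ** shape - (t1 / scale) ** shape) / (t2 - t1)
print(afr_numeric, afr_closed)    # the two values agree to many decimals
```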
It can be shown that the maximum likelihood estimator (MLE) of h_i is

    ĥ_i = d_i / r_i,    (2)

from which the well-known Kaplan-Meier estimator of the reliability function,

    R̂(y) = ∏_{i: t_i < y} (1 − ĥ_i) = ∏_{i: t_i < y} (1 − d_i/r_i),

is derived. Equation (2) is valid under independent right censoring [2] (pp. 93-97) and [9]. However, in data analysis practice, we may be interested in treating the sample failure time data as continuous-time data, as in Equation (1). Two often-used empirical hazard function formulas for continuous-time data are

    ĥ¹_i = [N(t_i) − N(t_i + Δt)] / [Δt N(t_i)] = (1/Δt) (d_i/r_i) = (1/Δt) ĥ_i,    (3)

and

    ĥ²_i = −(1/Δt) log[N(t_i + Δt)/N(t_i)] = −(1/Δt) log(1 − d_i/r_i),    (4)
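To make the quantities d_i, r_i and the estimator (2) concrete, here is a minimal sketch (Python; the five-point sample and the interval grid are toy values invented for illustration) that tabulates d_i and r_i on a grid of intervals and builds the Kaplan-Meier product:

```python
def km_pieces(failure_times, grid):
    # d_i: failures in [t_i, t_{i+1}); r_i: units still functional (at risk) at t_i
    out = []
    for i in range(len(grid) - 1):
        r_i = sum(1 for t in failure_times if t >= grid[i])
        d_i = sum(1 for t in failure_times if grid[i] <= t < grid[i + 1])
        out.append((d_i, r_i))
    return out

def km_reliability(pieces):
    # Kaplan-Meier product over intervals: R-hat = prod(1 - d_i/r_i)
    r_hat = 1.0
    curve = []
    for d_i, r_i in pieces:
        if r_i > 0:
            r_hat *= 1.0 - d_i / r_i
        curve.append(r_hat)
    return curve

times = [2.0, 3.5, 3.5, 7.0, 9.0]    # toy failure-time sample
grid = [0.0, 4.0, 8.0, 12.0]         # interval endpoints t_i
pieces = km_pieces(times, grid)      # [(3, 5), (1, 2), (1, 1)]
print(km_reliability(pieces))        # [0.4, 0.2, 0.0]
```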

where Δt ≡ t_{i+1} − t_i, written this way to emphasize that failures can happen at any time instant, not necessarily at some t_i, i = 1, 2, ..., under the continuous-time data setting.

At first glance, Equations (3) and (4) look very different. When people need to choose one of the two formulas for calculating the empirical hazard function, questions like "which one should I use and why" naturally arise. In addition, industry practitioners may well not know, and hence want to know, how Equations (3) and (4) relate to Equation (1). These questions need to be answered for correctly estimating the true hazard function values from sample failure time data in asset management practice. Although they are not theoretically difficult, they seem to have been ignored in the literature so far. This paper aims to fill this gap.

The rest of the paper is arranged as follows. In Section 2 we derive Equations (3) and (4) directly by discretising the theoretic definitions of the hazard function, followed by a detailed discussion of the properties of these two formulas as estimators of the true hazard function values. In Section 3, we verify our theoretic conclusions by calculating the empirical hazards for two simulation samples, one generated from an exponential distribution and the other from a Weibull distribution. Section 4 shows how to use Equations (3) and (4) properly in a real-life scenario. Section 5 concludes the paper with a proposed rule of thumb for applying Equations (3) and (4) in engineering reliability analysis practice.

2 Empirical hazard function derivation and discussion

The empirical hazard function formulas can be derived in various ways. For example, Equation (3) was given in [4] and [7]; Equation (4) was derived from a discussion of the probability of failure in the period [t_i, t_{i+1}) given survival to t_i in [2].
We will derive Equations (3) and (4) directly from the definition of the hazard function. As can be found in any standard textbook on failure time data analysis, we have the following definition and relationships for the hazard function. Assuming the time to failure T is a random variable which can take any value in the interval [0, ∞), the hazard function of T is defined as

    h(t) = f(t) / (1 − F(t)) = lim_{Δt→0} [F(t + Δt) − F(t)] / [Δt (1 − F(t))],    (5)

where f(t) and F(t) are the probability density function (pdf) and the cumulative distribution function (cdf) of T, respectively. Since f(t) = dF(t)/dt, after some algebra we get another form of the definition of the hazard function:

    h(t) = −d[log(1 − F(t))]/dt = −lim_{Δt→0} [log(1 − F(t + Δt)) − log(1 − F(t))] / Δt.    (6)
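As a quick sanity check on definition (5), the following sketch (Python; the Weibull parameters and evaluation point are again just examples) evaluates f(t)/(1 − F(t)) for a Weibull distribution and confirms that it matches the closed-form Weibull hazard, since the exponential factors in f(t) and 1 − F(t) cancel:

```python
import math

def weibull_pdf(t, k, lam):
    return (k / lam) * (t / lam) ** (k - 1) * math.exp(-((t / lam) ** k))

def weibull_cdf(t, k, lam):
    return 1.0 - math.exp(-((t / lam) ** k))

def hazard(t, k, lam):
    # Equation (5): h(t) = f(t) / (1 - F(t))
    return weibull_pdf(t, k, lam) / (1.0 - weibull_cdf(t, k, lam))

k, lam = 1.8, 30.0                              # illustrative parameters
t = 15.0
h_def = hazard(t, k, lam)
h_closed = (k / lam) * (t / lam) ** (k - 1)     # exponential factors cancel
print(h_def, h_closed)
```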

By discretising Equations (5) and (6) respectively, we get

    ĥ(t) = [F(t + Δt) − F(t)] / [Δt (1 − F(t))],    (7)

and

    ĥ(t) = −[log(1 − F(t + Δt)) − log(1 − F(t))] / Δt = −(1/Δt) log[(1 − F(t + Δt)) / (1 − F(t))].    (8)

Given our earlier notation N, N(t_i), Δt ≡ t_{i+1} − t_i and h_i ≡ h(t_i), and using the relative frequency as the estimator of F(t_i), we have

    F̂(t_i) = [N − N(t_i)] / N = 1 − N(t_i)/N.    (9)

By applying Equation (9) to Equations (7) and (8) accordingly, Equations (3) and (4) fall out after some trivial but tedious algebra. Up to this point it is clear that both formulas (3) and (4) converge to the true values of h_i as Δt approaches zero. Note that this asymptotic convergence still holds after the introduction of Equation (9) in the derivation, by the law of large numbers [1].

We now investigate their properties when Δt > 0. First, let us rewrite Equation (7) as

    ĥ(t) = [ (1/Δt) ∫_t^{t+Δt} f(u) du ] / (1 − F(t)).    (10)

Equation (10) implies that Equation (3) will underestimate the true hazard function values, because the numerator is the average density over [t, t + Δt] while the denominator is held fixed at 1 − F(t), the largest value of the monotonically decreasing survival function over that interval. Another way to see that Equation (3) underestimates the true h_i values is to consider Δt as a unit time interval, e.g. one hour, one day, or one year. Then we have

    ĥ¹_i = [N(t_i) − N(t_i + Δt)] / N(t_i) = ĥ_i ≤ 1,

which implies the empirical hazard values can never be greater than 1 per unit time. Now let us rewrite Equation (8) as

    ĥ(t) = [H(t + Δt) − H(t)] / Δt,    (11)

where H(t) = ∫_0^t h(u) du = −log(1 − F(t)) is the cumulative hazard function. Equation (11) implies that Equation (4) calculates the average values of the true hazard function. Therefore, we should expect Equation (4) to give more accurate and unbiased estimates of the true hazard function values than Equation (3) does. If we denote t + Δt ≡ t_2 and t ≡ t_1, hence Δt = t_2 − t_1, we see that Equation (11) and Equation (1) are identical. This is how Equation (4) relates to the AFR; Equation (3) has no such direct connection.
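The contrast between Equations (7) and (8) can be seen directly with population quantities, with no sampling involved. In the sketch below (Python; the rate, time point and interval width are illustrative), the exponential distribution makes the effect transparent: the discretised ratio (7) comes out strictly below the constant true hazard, while the cumulative-hazard difference (8) recovers it exactly:

```python
import math

lam, t, dt = 0.1, 10.0, 5.0
F = lambda u: 1.0 - math.exp(-lam * u)   # exponential cdf; true hazard is lam

# Equation (7): underestimates, since 1 - F(t) is taken at the interval's start
h1 = (F(t + dt) - F(t)) / (dt * (1.0 - F(t)))
# Equation (8): the average hazard over [t, t + dt]; exact for the exponential
h2 = -(math.log(1.0 - F(t + dt)) - math.log(1.0 - F(t))) / dt
print(h1, h2)   # h1 is about 0.0787 < 0.1, while h2 equals 0.1
```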
As seen from Equation (5), the hazard function h(t), also referred to as the hazard rate at time t, is defined as a conditional density, i.e. the ratio of the probability density f(t) over the reliability 1 − F(t) (a probability), which is not as intuitive to interpret as the concept of failure rate used in data analysis. The direct connection of Equation (4) with the AFR fills this mental gap between probability theory and data analysis.

Theoretically, the difference between formulas (3) and (4) is significant. However, in data analysis practice, the numeric results from both formulas can be very close. Before we verify this in the next section, we examine how different the estimation results of Equations (3) and (4) can be. As a standard mathematical result [1], it is known that, if |x| ≤ 2/3, then

    log(1 + x) = x − x²/2 + θ(x), where |θ(x)| ≤ |x|³.

Therefore, it is straightforward to show that if 0 < x ≤ 0.1, the relative difference between −log(1 − x) and x, i.e. [−log(1 − x) − x] / [−log(1 − x)], is less than 6%. We are now ready to compare the estimation performance of Equations (3) and (4) to verify the theoretic results obtained so far.

3 Comparison of empirical hazard function formulas using simulation samples

In this section the open source statistical package R [6] is used for data analysis. A random sample of size n = 10000 is generated from an exponential distribution with rate = 0.1 (random seed 0, for exact repeatability of the analysis results); a second random sample of size n = 10000 is generated from a Weibull distribution with shape = 1.8 and scale = 30 (random seed 0). Based on these two simulation samples, the empirical hazard values ĥ¹_i of Equation (3) and ĥ²_i of Equation (4) are calculated and compared with the true hazard function values to verify the theoretic results of Section 2.
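The 6% bound is easy to confirm by brute force. This sketch (Python; the grid of x values is arbitrary) scans x over (0, 0.1] and records the worst relative difference between −log(1 − x) and x:

```python
import math

def rel_diff(x):
    # relative difference between -log(1 - x) and x; this is the relative
    # error of using x where -log(1 - x) is the exact quantity
    return (-math.log(1.0 - x) - x) / (-math.log(1.0 - x))

# rel_diff is increasing in x, so the worst case over (0, 0.1] is at x = 0.1
worst = max(rel_diff(k / 1000.0) for k in range(1, 101))
print(worst)   # about 0.0509, i.e. just over 5%, under the 6% bound
```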
Figure 1 presents the simulation results comparing the empirical hazard values ĥ¹_i and ĥ²_i (vertical bars) against the true hazard function values (circles connected by a fine solid line) for the exponential sample. In calculating ĥ¹_i and ĥ²_i, the most important setting is the number of intervals over the full sample data range; specifying the number of intervals is equivalent to specifying the length of Δt. Therefore, we would expect that the larger the number of intervals, the better ĥ¹_i and ĥ²_i approximate the true hazard values. In Figure 1, the empirical hazards in the top two panels are calculated using 20 intervals, and in the bottom two panels the number of intervals is 50. The graph shows that ĥ²_i always performs better than ĥ¹_i, which consistently underestimates the true hazards. The difference is much more significant when the number of intervals is small. We also notice that ĥ¹_i is much more sensitive to the number of intervals, while the estimates from ĥ²_i are very robust (i.e. almost unaffected by a change in the number of intervals).

With this particular exponential sample, the 99% quantile is at about 45 time units, which spans less than 60% of the full sample data range.

Figure 1: Empirical hazard function values calculated using ĥ¹_i (first and third panels) and ĥ²_i (second and fourth panels): circles are the true hazard function values, connected by a fine solid line; vertical bars are the empirical hazard function values.

Note that, for both ĥ¹_i and ĥ²_i, the estimates fluctuate wildly after the 99% quantile point because of the sparseness of observations over the upper part of the range. In fact, ĥ²_i will always give an infinitely large hazard value for the last interval, because surely all items must die out in the end, while ĥ¹_i will always equal 1/Δt for the last interval. Therefore, empirical values for the very last interval should not be used. We propose using only the estimates calculated from sample observations up to the 99% quantile point.

Table 1 presents the averages of ĥ¹_i and ĥ²_i under different conditions, compared with the true hazard value. Since this is an exponential sample, the true hazard rate is the constant 0.1, given in column 1. The conditions under which the empirical hazards are calculated are specified in columns 4 and 5: the number of intervals (20 or 50) and the data range used (full range, with the last interval discarded, or up to the 99% quantile). For example, the first numeric line of the table shows that, with 20 intervals and the full-range empirical hazard values, the averages of ĥ¹_i and ĥ²_i are not very good estimates of the true hazard value 0.1. Based on the numeric results in Table 1, we conclude that (a) the conclusions drawn from Figure 1 are confirmed; (b) only those empirical hazard estimates calculated up to the 99% quantile point are reliable and robust.

Table 1: Comparison of calculated empirical hazard values versus the true hazard value, for the exponential sample (true hazard 0.1), by number of intervals (20, 50) and data range (full range, up to the 99% quantile). [Numeric entries omitted.]

Figure 2 examines the simulation results comparing the empirical hazard values ĥ¹_i (top panel) and ĥ²_i (bottom panel) against the true hazard function values for the Weibull sample. Figure 2 follows the same drawing format as Figure 1, i.e. the empirical hazard values ĥ¹_i and ĥ²_i are represented by vertical bars and the true hazard function values by circles connected by a fine solid line. The number of intervals is chosen to be 45, i.e. Δt = 2 time units. In addition, approximate 95% confidence bands for the ĥ¹_i and ĥ²_i values are constructed using the parametric bootstrap method [3]. Based on the Weibull distribution specification, 500 bootstrap samples (each of n = 10000) are generated and ĥ¹_i and ĥ²_i are calculated for each bootstrap sample. The medians of the empirical hazards are superimposed as a thick solid line (blue), with dashed lines (grey) for the lower and upper limits, respectively.

Examining the graphic comparison of the empirical hazards ĥ¹_i (top panel) and ĥ²_i (bottom panel) against the true hazard values in Figure 2, we find once again what we already found in Figure 1. With the specified Weibull sample, the 99% quantile point is at about 70 time units. The superimposed confidence bands in Figure 2 show visually how large the sampling variation can be over the upper part of the sample data range.
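The parametric bootstrap step can be sketched as follows (Python standard library in place of R; the exponential rate, sample size, interval and number of replicates are scaled down from the paper's settings to keep the example fast). Each replicate redraws a sample from the fitted distribution and recomputes the ĥ²-type estimate on one interval; the percentile band then summarises the sampling variation:

```python
import math
import random

def h2_first_interval(sample, dt):
    # h-hat-2 on [0, dt): -log(1 - d/r) / dt, with r the number at risk at t = 0
    r = len(sample)
    d = sum(1 for t in sample if t < dt)
    return -math.log(1.0 - d / r) / dt

random.seed(0)
lam, n, dt, B = 0.1, 2000, 5.0, 200   # smaller than the paper's n = 10000, B = 500
boot = sorted(
    h2_first_interval([random.expovariate(lam) for _ in range(n)], dt)
    for _ in range(B)
)
lo, hi = boot[4], boot[194]   # approximate 95% percentile band
print(lo, hi)                 # a narrow band around the true hazard 0.1
```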

Figure 2: Empirical hazard function values calculated using ĥ¹_i (top panel) and ĥ²_i (bottom panel): circles are the true hazard function values, connected by a fine solid line; vertical bars are the empirical hazard function values; the thick blue line shows the medians of the empirical hazard function values calculated from 500 bootstrap samples (each of sample size n = 10000); the two grey dashed lines form the approximate 95% confidence band.

So far, the simulation verification has been done with the full sample data sets. In the next section, we examine how ĥ¹_i and ĥ²_i perform with right censored data, based on the same Weibull sample specified in this section. Furthermore, based on a real-life scenario of a water pipelines data set, we examine the different types of data censoring and how they may affect the calculation of ĥ¹_i and ĥ²_i.

4 Empirical hazard function and censored failure time data

Figure 3 examines the simulation results comparing the empirical hazard values ĥ¹_i (top panel) and ĥ²_i (bottom panel) against the true hazard function values for a right censored Weibull sample. Figure 3 follows the same drawing format as Figure 2 in Section 3.

Figure 3: Empirical hazard function values calculated using ĥ¹_i (top panel) and ĥ²_i (bottom panel) with a right censored sample: circles are the true hazard function values, connected by a fine solid line; vertical bars are the empirical hazard function values; the thick blue line shows the medians of the empirical hazard function values calculated from 500 bootstrap samples (each of sample size n = 10000); the two grey dashed lines form the approximate 95% confidence band.

In the analysis of a censored data set, we should distinguish a censored sample from a truncated sample. For example, in this study we created a right censored sample with censoring time 50, as shown in Figure 3: we set any observations greater than 50 to 50 in the full sample, whereas we would discard any observations greater than 50 if what we were after was a truncated sample. A close look at Equations (3) and (4) reveals that the calculation of the empirical hazards ĥ_i does not depend on observations that failed before t_i, and is not affected by right censoring. Therefore, as expected, Figure 3 is just the part of Figure 2 with t ≤ 50, and all the relevant conclusions from the examination of Figure 2 in Section 3 still hold. In fact, the estimation of the true hazards with right censored data is in general more reliable than the estimation based on the full data set,
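That right censoring leaves the per-interval estimates below the censoring time unchanged can be demonstrated directly. In this sketch (Python; the toy failure times, grid and censoring time of 50 are invented for illustration), censoring at 50 only moves later failure times onto 50, which changes neither d_i nor r_i for any interval ending at or before 50:

```python
def h1_per_interval(sample, grid):
    # Equation (3)-style estimates d_i / (dt * r_i) on each interval of the grid
    out = []
    for i in range(len(grid) - 1):
        dt = grid[i + 1] - grid[i]
        r = sum(1 for t in sample if t >= grid[i])
        d = sum(1 for t in sample if grid[i] <= t < grid[i + 1])
        out.append(d / (dt * r) if r else None)
    return out

full = [3.0, 8.0, 14.0, 22.0, 31.0, 47.0, 55.0, 62.0]
censored = [min(t, 50.0) for t in full]   # right-censor at time 50
grid = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0]
print(h1_per_interval(full, grid) == h1_per_interval(censored, grid))  # True
```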

because the wild fluctuation of the empirical hazards calculated over the upper part of the sample range is more or less avoided. Of course, the cost is that we are unable to estimate the true hazards beyond the right censoring time.

Figure 4: A schematic of the data types in a continuous-time failure events sample, for a water pipelines scenario: small vertical bars represent the starting/installation times; circles represent missing records or unknown times (either installation or failure times); crosses represent the failure time records; each horizontal line segment represents one pipeline section with the same asset ID, and its length represents the corresponding time period in years.

The motivation of this study is to justify and verify the proper calculation of empirical hazards in a real-life case: the analysis of a water pipelines data set obtained from a water company located in Queensland, Australia [10]. In the raw data treatment stage, we found that the classification of data types does not match the normally defined censoring categories of standard failure time data analysis. We now present our findings on the classification of the water pipelines data; these findings may apply to linear assets in general. Finally, we give a very brief discussion of how the calculation of ĥ¹_i and ĥ²_i may be affected by these different data types.

The data types of the water pipeline assets are shown schematically in Figure 4. The earliest water pipelines were installed about 60 years ago in the region, but the asset management data are properly recorded only for about the last 10 years. Since linear assets like water pipelines are long-lived, it is not surprising that no failure/repair records are found for the majority of the pipelines over the observation period, i.e. they are right censored. As shown in Figure 4, these right censored observations are labelled 0 at the right end of the horizontal line segments. Observations labelled 1 are pipelines with known installation dates and observed failures; observations with unknown installation dates but known failure dates are labelled 2; observations with both installation and failure dates unknown, but functional over the whole observation period and beyond, are labelled 3; and observations with both installation and failure dates unknown, which failed before the observation period, are labelled 4. Obviously, observations labelled 4 are missing values of which we are not even aware. The existence of this type of missing value makes the calculated empirical hazards overestimate the true hazards, since what we are after is the asset age specific hazard distribution. Observations labelled 3 may be treated as right censored data; by doing so, we underestimate the true hazards. Similarly, we may treat observations labelled 2 as fully observed failure data, and we again underestimate the true hazards. Overall, it is reasonable to believe that the bias effects caused by the data labelled 2, 3 and 4 may cancel each other out to some extent.
If we can reasonably assume that the asset management records have been well collected and kept, i.e. that missing values or loss of installation information are not serious, we conclude that ĥ¹_i and ĥ²_i are valid estimators of the true hazard function values.

5 Conclusions

In this paper, we have presented theoretic proofs and numeric verification of the proper use of two often-used formulas (reproduced below from Section 1) for calculating the empirical hazard function in reliability analysis for complete or censored continuous-time failure data:

    ĥ¹_i = [N(t_i) − N(t_i + Δt)] / [Δt N(t_i)] = (1/Δt) (d_i/r_i) = (1/Δt) ĥ_i,

and

    ĥ²_i = −(1/Δt) log[N(t_i + Δt)/N(t_i)] = −(1/Δt) log(1 − d_i/r_i).

Our research shows that ĥ²_i is nothing but a finite approximation of the AFR, whereas ĥ¹_i is a finite approximation of the instantaneous hazard rate; in their limiting forms, both ĥ¹_i and ĥ²_i converge to the true hazard function h_i. For data analysis purposes, a rule of thumb for calculating the empirical hazard function of continuous-time failure data may be summarised as follows: if the maximum failure rate over the time intervals of concern is less than 0.1, both ĥ¹_i and ĥ²_i are good estimators

of the true hazard function values; most asset management reliability study cases should fall into this category. Otherwise, ĥ²_i should be used for calculating the empirical hazard function. Note that both formulas are valid for right censored continuous-time failure data. If the data contain left censored, interval censored, or missing value cases, one must be aware of the limitations of these formulas. We also recommend that, when using ĥ¹_i and ĥ²_i to estimate the true hazard function values, any empirical hazard values calculated from sample observations beyond the 99% quantile be discarded. As shown in Section 3, empirical hazard values calculated from observations beyond the 99% quantile are inaccurate and fluctuate widely because of the sparse observations over a long stretch of the lifetime distribution. The proposed rule of thumb should fill the gap between probability theory and data analysis practice in applications of the hazard function.

References

[1] Kai Lai Chung and Farid AitSahlia. Elementary Probability Theory with Stochastic Processes and an Introduction to Mathematical Finance. Springer-Verlag, New York, fourth edition.

[2] A. C. Davison. Statistical Models. Cambridge University Press.

[3] Bradley Efron and Robert J. Tibshirani. An Introduction to the Bootstrap. Chapman & Hall/CRC, 1993.

[4] E. A. Elsayed. Reliability Engineering. Addison Wesley Longman, Reading, Massachusetts, 1996.

[5] William Q. Meeker and Luis A. Escobar. Statistical Methods for Reliability Data. John Wiley & Sons, 1998.

[6] R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2012.

[7] B. Rai and N. Singh. Hazard rate estimation from incomplete and unclean warranty data. Reliability Engineering & System Safety, 81:79-92, 2003.

[8] R. Ramakumar. Engineering Reliability: Fundamentals and Applications. Prentice Hall, 1993.

[9] W. N. Venables and B. D. Ripley. Modern Applied Statistics with S-Plus. Springer-Verlag, New York, corrected fourth printing.

[10] Yong Sun, Colin Fidge, and Lin Ma. Reliability prediction of long-lived linear assets with incomplete failure data. In Quality, Reliability, Risk, Maintenance, and Safety Engineering (ICQR2MSE), 2011 International Conference, Xi'an, IEEE, pages 43-47, 2011.


Problem Set 3: Bootstrap, Quantile Regression and MCMC Methods. MIT , Fall Due: Wednesday, 07 November 2007, 5:00 PM

Problem Set 3: Bootstrap, Quantile Regression and MCMC Methods. MIT , Fall Due: Wednesday, 07 November 2007, 5:00 PM Problem Set 3: Bootstrap, Quantile Regression and MCMC Methods MIT 14.385, Fall 2007 Due: Wednesday, 07 November 2007, 5:00 PM 1 Applied Problems Instructions: The page indications given below give you

More information

Survival Distributions, Hazard Functions, Cumulative Hazards

Survival Distributions, Hazard Functions, Cumulative Hazards BIO 244: Unit 1 Survival Distributions, Hazard Functions, Cumulative Hazards 1.1 Definitions: The goals of this unit are to introduce notation, discuss ways of probabilistically describing the distribution

More information

ter. on Can we get a still better result? Yes, by making the rectangles still smaller. As we make the rectangles smaller and smaller, the

ter. on Can we get a still better result? Yes, by making the rectangles still smaller. As we make the rectangles smaller and smaller, the Area and Tangent Problem Calculus is motivated by two main problems. The first is the area problem. It is a well known result that the area of a rectangle with length l and width w is given by A = wl.

More information

Estimation of AUC from 0 to Infinity in Serial Sacrifice Designs

Estimation of AUC from 0 to Infinity in Serial Sacrifice Designs Estimation of AUC from 0 to Infinity in Serial Sacrifice Designs Martin J. Wolfsegger Department of Biostatistics, Baxter AG, Vienna, Austria Thomas Jaki Department of Statistics, University of South Carolina,

More information

A hidden semi-markov model for the occurrences of water pipe bursts

A hidden semi-markov model for the occurrences of water pipe bursts A hidden semi-markov model for the occurrences of water pipe bursts T. Economou 1, T.C. Bailey 1 and Z. Kapelan 1 1 School of Engineering, Computer Science and Mathematics, University of Exeter, Harrison

More information

Chapter 15. System Reliability Concepts and Methods. William Q. Meeker and Luis A. Escobar Iowa State University and Louisiana State University

Chapter 15. System Reliability Concepts and Methods. William Q. Meeker and Luis A. Escobar Iowa State University and Louisiana State University Chapter 15 System Reliability Concepts and Methods William Q. Meeker and Luis A. Escobar Iowa State University and Louisiana State University Copyright 1998-2008 W. Q. Meeker and L. A. Escobar. Based on

More information

Optimal Cusum Control Chart for Censored Reliability Data with Log-logistic Distribution

Optimal Cusum Control Chart for Censored Reliability Data with Log-logistic Distribution CMST 21(4) 221-227 (2015) DOI:10.12921/cmst.2015.21.04.006 Optimal Cusum Control Chart for Censored Reliability Data with Log-logistic Distribution B. Sadeghpour Gildeh, M. Taghizadeh Ashkavaey Department

More information

Multistate Modeling and Applications

Multistate Modeling and Applications Multistate Modeling and Applications Yang Yang Department of Statistics University of Michigan, Ann Arbor IBM Research Graduate Student Workshop: Statistics for a Smarter Planet Yang Yang (UM, Ann Arbor)

More information

TMA 4275 Lifetime Analysis June 2004 Solution

TMA 4275 Lifetime Analysis June 2004 Solution TMA 4275 Lifetime Analysis June 2004 Solution Problem 1 a) Observation of the outcome is censored, if the time of the outcome is not known exactly and only the last time when it was observed being intact,

More information

Seismic Analysis of Structures Prof. T.K. Datta Department of Civil Engineering Indian Institute of Technology, Delhi. Lecture 03 Seismology (Contd.

Seismic Analysis of Structures Prof. T.K. Datta Department of Civil Engineering Indian Institute of Technology, Delhi. Lecture 03 Seismology (Contd. Seismic Analysis of Structures Prof. T.K. Datta Department of Civil Engineering Indian Institute of Technology, Delhi Lecture 03 Seismology (Contd.) In the previous lecture, we discussed about the earth

More information

Chapter 6. a. Open Circuit. Only if both resistors fail open-circuit, i.e. they are in parallel.

Chapter 6. a. Open Circuit. Only if both resistors fail open-circuit, i.e. they are in parallel. Chapter 6 1. a. Section 6.1. b. Section 6.3, see also Section 6.2. c. Predictions based on most published sources of reliability data tend to underestimate the reliability that is achievable, given that

More information

Availability and Reliability Analysis for Dependent System with Load-Sharing and Degradation Facility

Availability and Reliability Analysis for Dependent System with Load-Sharing and Degradation Facility International Journal of Systems Science and Applied Mathematics 2018; 3(1): 10-15 http://www.sciencepublishinggroup.com/j/ijssam doi: 10.11648/j.ijssam.20180301.12 ISSN: 2575-5838 (Print); ISSN: 2575-5803

More information

Math 016 Lessons Wimayra LUY

Math 016 Lessons Wimayra LUY Math 016 Lessons Wimayra LUY wluy@ccp.edu MATH 016 Lessons LESSON 1 Natural Numbers The set of natural numbers is given by N = {0, 1, 2, 3, 4...}. Natural numbers are used for two main reasons: 1. counting,

More information

Step-Stress Models and Associated Inference

Step-Stress Models and Associated Inference Department of Mathematics & Statistics Indian Institute of Technology Kanpur August 19, 2014 Outline Accelerated Life Test 1 Accelerated Life Test 2 3 4 5 6 7 Outline Accelerated Life Test 1 Accelerated

More information

Cox s proportional hazards model and Cox s partial likelihood

Cox s proportional hazards model and Cox s partial likelihood Cox s proportional hazards model and Cox s partial likelihood Rasmus Waagepetersen October 12, 2018 1 / 27 Non-parametric vs. parametric Suppose we want to estimate unknown function, e.g. survival function.

More information

Simultaneous Prediction Intervals for the (Log)- Location-Scale Family of Distributions

Simultaneous Prediction Intervals for the (Log)- Location-Scale Family of Distributions Statistics Preprints Statistics 10-2014 Simultaneous Prediction Intervals for the (Log)- Location-Scale Family of Distributions Yimeng Xie Virginia Tech Yili Hong Virginia Tech Luis A. Escobar Louisiana

More information

Open book, but no loose leaf notes and no electronic devices. Points (out of 200) are in parentheses. Put all answers on the paper provided to you.

Open book, but no loose leaf notes and no electronic devices. Points (out of 200) are in parentheses. Put all answers on the paper provided to you. ISQS 5347 Final Exam Spring 2017 Open book, but no loose leaf notes and no electronic devices. Points (out of 200) are in parentheses. Put all answers on the paper provided to you. 1. Recall the commute

More information

EAS 535 Laboratory Exercise Weather Station Setup and Verification

EAS 535 Laboratory Exercise Weather Station Setup and Verification EAS 535 Laboratory Exercise Weather Station Setup and Verification Lab Objectives: In this lab exercise, you are going to examine and describe the error characteristics of several instruments, all purportedly

More information

Predicting the Probability of Correct Classification

Predicting the Probability of Correct Classification Predicting the Probability of Correct Classification Gregory Z. Grudic Department of Computer Science University of Colorado, Boulder grudic@cs.colorado.edu Abstract We propose a formulation for binary

More information

Smooth nonparametric estimation of a quantile function under right censoring using beta kernels

Smooth nonparametric estimation of a quantile function under right censoring using beta kernels Smooth nonparametric estimation of a quantile function under right censoring using beta kernels Chanseok Park 1 Department of Mathematical Sciences, Clemson University, Clemson, SC 29634 Short Title: Smooth

More information

An Evaluation of the Reliability of Complex Systems Using Shadowed Sets and Fuzzy Lifetime Data

An Evaluation of the Reliability of Complex Systems Using Shadowed Sets and Fuzzy Lifetime Data International Journal of Automation and Computing 2 (2006) 145-150 An Evaluation of the Reliability of Complex Systems Using Shadowed Sets and Fuzzy Lifetime Data Olgierd Hryniewicz Systems Research Institute

More information

Point and Interval Estimation for Gaussian Distribution, Based on Progressively Type-II Censored Samples

Point and Interval Estimation for Gaussian Distribution, Based on Progressively Type-II Censored Samples 90 IEEE TRANSACTIONS ON RELIABILITY, VOL. 52, NO. 1, MARCH 2003 Point and Interval Estimation for Gaussian Distribution, Based on Progressively Type-II Censored Samples N. Balakrishnan, N. Kannan, C. T.

More information

FULL LIKELIHOOD INFERENCES IN THE COX MODEL

FULL LIKELIHOOD INFERENCES IN THE COX MODEL October 20, 2007 FULL LIKELIHOOD INFERENCES IN THE COX MODEL BY JIAN-JIAN REN 1 AND MAI ZHOU 2 University of Central Florida and University of Kentucky Abstract We use the empirical likelihood approach

More information

Analytical Bootstrap Methods for Censored Data

Analytical Bootstrap Methods for Censored Data JOURNAL OF APPLIED MATHEMATICS AND DECISION SCIENCES, 6(2, 129 141 Copyright c 2002, Lawrence Erlbaum Associates, Inc. Analytical Bootstrap Methods for Censored Data ALAN D. HUTSON Division of Biostatistics,

More information

Supporting Information for Estimating restricted mean. treatment effects with stacked survival models

Supporting Information for Estimating restricted mean. treatment effects with stacked survival models Supporting Information for Estimating restricted mean treatment effects with stacked survival models Andrew Wey, David Vock, John Connett, and Kyle Rudser Section 1 presents several extensions to the simulation

More information

n =10,220 observations. Smaller samples analyzed here to illustrate sample size effect.

n =10,220 observations. Smaller samples analyzed here to illustrate sample size effect. Chapter 7 Parametric Likelihood Fitting Concepts: Chapter 7 Parametric Likelihood Fitting Concepts: Objectives Show how to compute a likelihood for a parametric model using discrete data. Show how to compute

More information

Estimation of Quantiles

Estimation of Quantiles 9 Estimation of Quantiles The notion of quantiles was introduced in Section 3.2: recall that a quantile x α for an r.v. X is a constant such that P(X x α )=1 α. (9.1) In this chapter we examine quantiles

More information

Objective Experiments Glossary of Statistical Terms

Objective Experiments Glossary of Statistical Terms Objective Experiments Glossary of Statistical Terms This glossary is intended to provide friendly definitions for terms used commonly in engineering and science. It is not intended to be absolutely precise.

More information

Load-strength Dynamic Interaction Principle and Failure Rate Model

Load-strength Dynamic Interaction Principle and Failure Rate Model International Journal of Performability Engineering Vol. 6, No. 3, May 21, pp. 25-214. RAMS Consultants Printed in India Load-strength Dynamic Interaction Principle and Failure Rate Model LIYANG XIE and

More information

ISQS 5349 Spring 2013 Final Exam

ISQS 5349 Spring 2013 Final Exam ISQS 5349 Spring 2013 Final Exam Name: General Instructions: Closed books, notes, no electronic devices. Points (out of 200) are in parentheses. Put written answers on separate paper; multiple choices

More information

Application of Time-to-Event Methods in the Assessment of Safety in Clinical Trials

Application of Time-to-Event Methods in the Assessment of Safety in Clinical Trials Application of Time-to-Event Methods in the Assessment of Safety in Clinical Trials Progress, Updates, Problems William Jen Hoe Koh May 9, 2013 Overview Marginal vs Conditional What is TMLE? Key Estimation

More information

Statistics 262: Intermediate Biostatistics Non-parametric Survival Analysis

Statistics 262: Intermediate Biostatistics Non-parametric Survival Analysis Statistics 262: Intermediate Biostatistics Non-parametric Survival Analysis Jonathan Taylor & Kristin Cobb Statistics 262: Intermediate Biostatistics p.1/?? Overview of today s class Kaplan-Meier Curve

More information

Chapter 17. Failure-Time Regression Analysis. William Q. Meeker and Luis A. Escobar Iowa State University and Louisiana State University

Chapter 17. Failure-Time Regression Analysis. William Q. Meeker and Luis A. Escobar Iowa State University and Louisiana State University Chapter 17 Failure-Time Regression Analysis William Q. Meeker and Luis A. Escobar Iowa State University and Louisiana State University Copyright 1998-2008 W. Q. Meeker and L. A. Escobar. Based on the authors

More information

Structure of Materials Prof. Anandh Subramaniam Department of Material Science and Engineering Indian Institute of Technology, Kanpur

Structure of Materials Prof. Anandh Subramaniam Department of Material Science and Engineering Indian Institute of Technology, Kanpur Structure of Materials Prof. Anandh Subramaniam Department of Material Science and Engineering Indian Institute of Technology, Kanpur Lecture - 5 Geometry of Crystals: Symmetry, Lattices The next question

More information

Distribution Fitting (Censored Data)

Distribution Fitting (Censored Data) Distribution Fitting (Censored Data) Summary... 1 Data Input... 2 Analysis Summary... 3 Analysis Options... 4 Goodness-of-Fit Tests... 6 Frequency Histogram... 8 Comparison of Alternative Distributions...

More information

Parametric Techniques

Parametric Techniques Parametric Techniques Jason J. Corso SUNY at Buffalo J. Corso (SUNY at Buffalo) Parametric Techniques 1 / 39 Introduction When covering Bayesian Decision Theory, we assumed the full probabilistic structure

More information

For right censored data with Y i = T i C i and censoring indicator, δ i = I(T i < C i ), arising from such a parametric model we have the likelihood,

For right censored data with Y i = T i C i and censoring indicator, δ i = I(T i < C i ), arising from such a parametric model we have the likelihood, A NOTE ON LAPLACE REGRESSION WITH CENSORED DATA ROGER KOENKER Abstract. The Laplace likelihood method for estimating linear conditional quantile functions with right censored data proposed by Bottai and

More information

Sample Size and Number of Failure Requirements for Demonstration Tests with Log-Location-Scale Distributions and Type II Censoring

Sample Size and Number of Failure Requirements for Demonstration Tests with Log-Location-Scale Distributions and Type II Censoring Statistics Preprints Statistics 3-2-2002 Sample Size and Number of Failure Requirements for Demonstration Tests with Log-Location-Scale Distributions and Type II Censoring Scott W. McKane 3M Pharmaceuticals

More information

Practical Applications of Reliability Theory

Practical Applications of Reliability Theory Practical Applications of Reliability Theory George Dodson Spallation Neutron Source Managed by UT-Battelle Topics Reliability Terms and Definitions Reliability Modeling as a tool for evaluating system

More information

Quantile POD for Hit-Miss Data

Quantile POD for Hit-Miss Data Quantile POD for Hit-Miss Data Yew-Meng Koh a and William Q. Meeker a a Center for Nondestructive Evaluation, Department of Statistics, Iowa State niversity, Ames, Iowa 50010 Abstract. Probability of detection

More information

ON THE FAILURE RATE ESTIMATION OF THE INVERSE GAUSSIAN DISTRIBUTION

ON THE FAILURE RATE ESTIMATION OF THE INVERSE GAUSSIAN DISTRIBUTION ON THE FAILURE RATE ESTIMATION OF THE INVERSE GAUSSIAN DISTRIBUTION ZHENLINYANGandRONNIET.C.LEE Department of Statistics and Applied Probability, National University of Singapore, 3 Science Drive 2, Singapore

More information

Survival Analysis: Weeks 2-3. Lu Tian and Richard Olshen Stanford University

Survival Analysis: Weeks 2-3. Lu Tian and Richard Olshen Stanford University Survival Analysis: Weeks 2-3 Lu Tian and Richard Olshen Stanford University 2 Kaplan-Meier(KM) Estimator Nonparametric estimation of the survival function S(t) = pr(t > t) The nonparametric estimation

More information

Asymptotic distribution of the sample average value-at-risk

Asymptotic distribution of the sample average value-at-risk Asymptotic distribution of the sample average value-at-risk Stoyan V. Stoyanov Svetlozar T. Rachev September 3, 7 Abstract In this paper, we prove a result for the asymptotic distribution of the sample

More information

Bootstrap Method for Dependent Data Structure and Measure of Statistical Precision

Bootstrap Method for Dependent Data Structure and Measure of Statistical Precision Journal of Mathematics and Statistics 6 (): 84-88, 00 ISSN 549-3644 00 Science Publications ootstrap Method for Dependent Data Structure and Measure of Statistical Precision T.O. Olatayo, G.N. Amahia and

More information

Robustness and Distribution Assumptions

Robustness and Distribution Assumptions Chapter 1 Robustness and Distribution Assumptions 1.1 Introduction In statistics, one often works with model assumptions, i.e., one assumes that data follow a certain model. Then one makes use of methodology

More information

Double Bootstrap Confidence Interval Estimates with Censored and Truncated Data

Double Bootstrap Confidence Interval Estimates with Censored and Truncated Data Journal of Modern Applied Statistical Methods Volume 13 Issue 2 Article 22 11-2014 Double Bootstrap Confidence Interval Estimates with Censored and Truncated Data Jayanthi Arasan University Putra Malaysia,

More information

A Note on Bayesian Inference After Multiple Imputation

A Note on Bayesian Inference After Multiple Imputation A Note on Bayesian Inference After Multiple Imputation Xiang Zhou and Jerome P. Reiter Abstract This article is aimed at practitioners who plan to use Bayesian inference on multiplyimputed datasets in

More information

Lecture 7. Poisson and lifetime processes in risk analysis

Lecture 7. Poisson and lifetime processes in risk analysis Lecture 7. Poisson and lifetime processes in risk analysis Jesper Rydén Department of Mathematics, Uppsala University jesper.ryden@math.uu.se Statistical Risk Analysis Spring 2014 Example: Life times of

More information

Statistics for Engineers Lecture 4 Reliability and Lifetime Distributions

Statistics for Engineers Lecture 4 Reliability and Lifetime Distributions Statistics for Engineers Lecture 4 Reliability and Lifetime Distributions Chong Ma Department of Statistics University of South Carolina chongm@email.sc.edu February 15, 2017 Chong Ma (Statistics, USC)

More information

BAYESIAN MODELING OF DYNAMIC SOFTWARE GROWTH CURVE MODELS

BAYESIAN MODELING OF DYNAMIC SOFTWARE GROWTH CURVE MODELS BAYESIAN MODELING OF DYNAMIC SOFTWARE GROWTH CURVE MODELS Zhaohui Liu, Nalini Ravishanker, University of Connecticut Bonnie K. Ray, IBM Watson Research Center Department of Mathematical Sciences, IBM Watson

More information

These notes will supplement the textbook not replace what is there. defined for α >0

These notes will supplement the textbook not replace what is there. defined for α >0 Gamma Distribution These notes will supplement the textbook not replace what is there. Gamma Function ( ) = x 0 e dx 1 x defined for >0 Properties of the Gamma Function 1. For any >1 () = ( 1)( 1) Proof

More information

Introducing the Normal Distribution

Introducing the Normal Distribution Department of Mathematics Ma 3/103 KC Border Introduction to Probability and Statistics Winter 2017 Lecture 10: Introducing the Normal Distribution Relevant textbook passages: Pitman [5]: Sections 1.2,

More information

THE WEIBULL GENERALIZED FLEXIBLE WEIBULL EXTENSION DISTRIBUTION

THE WEIBULL GENERALIZED FLEXIBLE WEIBULL EXTENSION DISTRIBUTION Journal of Data Science 14(2016), 453-478 THE WEIBULL GENERALIZED FLEXIBLE WEIBULL EXTENSION DISTRIBUTION Abdelfattah Mustafa, Beih S. El-Desouky, Shamsan AL-Garash Department of Mathematics, Faculty of

More information

Probability Methods in Civil Engineering Prof. Dr. Rajib Maity Department of Civil Engineering Indian Institute of Technology, Kharagpur

Probability Methods in Civil Engineering Prof. Dr. Rajib Maity Department of Civil Engineering Indian Institute of Technology, Kharagpur Probability Methods in Civil Engineering Prof. Dr. Rajib Maity Department of Civil Engineering Indian Institute of Technology, Kharagpur Lecture No. # 38 Goodness - of fit tests Hello and welcome to this

More information

Bayesian vs frequentist techniques for the analysis of binary outcome data

Bayesian vs frequentist techniques for the analysis of binary outcome data 1 Bayesian vs frequentist techniques for the analysis of binary outcome data By M. Stapleton Abstract We compare Bayesian and frequentist techniques for analysing binary outcome data. Such data are commonly

More information

Robust Parameter Estimation in the Weibull and the Birnbaum-Saunders Distribution

Robust Parameter Estimation in the Weibull and the Birnbaum-Saunders Distribution Clemson University TigerPrints All Theses Theses 8-2012 Robust Parameter Estimation in the Weibull and the Birnbaum-Saunders Distribution Jing Zhao Clemson University, jzhao2@clemson.edu Follow this and

More information

Introducing the Normal Distribution

Introducing the Normal Distribution Department of Mathematics Ma 3/13 KC Border Introduction to Probability and Statistics Winter 219 Lecture 1: Introducing the Normal Distribution Relevant textbook passages: Pitman [5]: Sections 1.2, 2.2,

More information

Bivariate Degradation Modeling Based on Gamma Process

Bivariate Degradation Modeling Based on Gamma Process Bivariate Degradation Modeling Based on Gamma Process Jinglun Zhou Zhengqiang Pan Member IAENG and Quan Sun Abstract Many highly reliable products have two or more performance characteristics (PCs). The

More information

DIFFERENTIAL EQUATIONS

DIFFERENTIAL EQUATIONS DIFFERENTIAL EQUATIONS Basic Concepts Paul Dawkins Table of Contents Preface... Basic Concepts... 1 Introduction... 1 Definitions... Direction Fields... 8 Final Thoughts...19 007 Paul Dawkins i http://tutorial.math.lamar.edu/terms.aspx

More information

Industrial Engineering Prof. Inderdeep Singh Department of Mechanical & Industrial Engineering Indian Institute of Technology, Roorkee

Industrial Engineering Prof. Inderdeep Singh Department of Mechanical & Industrial Engineering Indian Institute of Technology, Roorkee Industrial Engineering Prof. Inderdeep Singh Department of Mechanical & Industrial Engineering Indian Institute of Technology, Roorkee Module - 04 Lecture - 05 Sales Forecasting - II A very warm welcome

More information

Dependable Systems. ! Dependability Attributes. Dr. Peter Tröger. Sources:

Dependable Systems. ! Dependability Attributes. Dr. Peter Tröger. Sources: Dependable Systems! Dependability Attributes Dr. Peter Tröger! Sources:! J.C. Laprie. Dependability: Basic Concepts and Terminology Eusgeld, Irene et al.: Dependability Metrics. 4909. Springer Publishing,

More information

Parametric Evaluation of Lifetime Data

Parametric Evaluation of Lifetime Data IPN Progress Report 42-155 November 15, 2003 Parametric Evaluation of Lifetime Data J. Shell 1 The proposed large array of small antennas for the DSN requires very reliable systems. Reliability can be

More information

=.55 = = 5.05

=.55 = = 5.05 MAT1193 4c Definition of derivative With a better understanding of limits we return to idea of the instantaneous velocity or instantaneous rate of change. Remember that in the example of calculating the

More information

Exercises. (a) Prove that m(t) =

Exercises. (a) Prove that m(t) = Exercises 1. Lack of memory. Verify that the exponential distribution has the lack of memory property, that is, if T is exponentially distributed with parameter λ > then so is T t given that T > t for

More information

STATISTICAL INFERENCE IN ACCELERATED LIFE TESTING WITH GEOMETRIC PROCESS MODEL. A Thesis. Presented to the. Faculty of. San Diego State University

STATISTICAL INFERENCE IN ACCELERATED LIFE TESTING WITH GEOMETRIC PROCESS MODEL. A Thesis. Presented to the. Faculty of. San Diego State University STATISTICAL INFERENCE IN ACCELERATED LIFE TESTING WITH GEOMETRIC PROCESS MODEL A Thesis Presented to the Faculty of San Diego State University In Partial Fulfillment of the Requirements for the Degree

More information

Uncertainty. Michael Peters December 27, 2013

Uncertainty. Michael Peters December 27, 2013 Uncertainty Michael Peters December 27, 20 Lotteries In many problems in economics, people are forced to make decisions without knowing exactly what the consequences will be. For example, when you buy

More information

Reliability Growth in JMP 10

Reliability Growth in JMP 10 Reliability Growth in JMP 10 Presented at Discovery Summit 2012 September 13, 2012 Marie Gaudard and Leo Wright Purpose of Talk The goal of this talk is to provide a brief introduction to: The area of

More information

UNIVERSITÄT POTSDAM Institut für Mathematik

UNIVERSITÄT POTSDAM Institut für Mathematik UNIVERSITÄT POTSDAM Institut für Mathematik Testing the Acceleration Function in Life Time Models Hannelore Liero Matthias Liero Mathematische Statistik und Wahrscheinlichkeitstheorie Universität Potsdam

More information

Research Article A Nonparametric Two-Sample Wald Test of Equality of Variances

Research Article A Nonparametric Two-Sample Wald Test of Equality of Variances Advances in Decision Sciences Volume 211, Article ID 74858, 8 pages doi:1.1155/211/74858 Research Article A Nonparametric Two-Sample Wald Test of Equality of Variances David Allingham 1 andj.c.w.rayner

More information

MAS3301 / MAS8311 Biostatistics Part II: Survival

MAS3301 / MAS8311 Biostatistics Part II: Survival MAS3301 / MAS8311 Biostatistics Part II: Survival M. Farrow School of Mathematics and Statistics Newcastle University Semester 2, 2009-10 1 13 The Cox proportional hazards model 13.1 Introduction In the

More information

Fundamentals of Reliability Engineering and Applications

Fundamentals of Reliability Engineering and Applications Fundamentals of Reliability Engineering and Applications E. A. Elsayed elsayed@rci.rutgers.edu Rutgers University Quality Control & Reliability Engineering (QCRE) IIE February 21, 2012 1 Outline Part 1.

More information

Statistical Analysis of Competing Risks With Missing Causes of Failure

Statistical Analysis of Competing Risks With Missing Causes of Failure Proceedings 59th ISI World Statistics Congress, 25-3 August 213, Hong Kong (Session STS9) p.1223 Statistical Analysis of Competing Risks With Missing Causes of Failure Isha Dewan 1,3 and Uttara V. Naik-Nimbalkar

More information

Maejo International Journal of Science and Technology

Maejo International Journal of Science and Technology Maejo Int. J. Sci. Technol. 018, 1(01), 11-7 Full Paper Maejo International Journal of Science and Technology ISSN 1905-7873 Available online at www.mijst.mju.ac.th Parameter estimation of Pareto distribution:

More information

The Fundamental Theorem of Calculus with Gossamer numbers

The Fundamental Theorem of Calculus with Gossamer numbers The Fundamental Theorem of Calculus with Gossamer numbers Chelton D. Evans and William K. Pattinson Abstract Within the gossamer numbers G which extend R to include infinitesimals and infinities we prove

More information

Bootstrap Procedures for Testing Homogeneity Hypotheses

Bootstrap Procedures for Testing Homogeneity Hypotheses Journal of Statistical Theory and Applications Volume 11, Number 2, 2012, pp. 183-195 ISSN 1538-7887 Bootstrap Procedures for Testing Homogeneity Hypotheses Bimal Sinha 1, Arvind Shah 2, Dihua Xu 1, Jianxin

More information

STAT Sample Problem: General Asymptotic Results

STAT Sample Problem: General Asymptotic Results STAT331 1-Sample Problem: General Asymptotic Results In this unit we will consider the 1-sample problem and prove the consistency and asymptotic normality of the Nelson-Aalen estimator of the cumulative

More information

STANDARDS OF LEARNING CONTENT REVIEW NOTES. ALGEBRA I Part II 1 st Nine Weeks,

STANDARDS OF LEARNING CONTENT REVIEW NOTES. ALGEBRA I Part II 1 st Nine Weeks, STANDARDS OF LEARNING CONTENT REVIEW NOTES ALGEBRA I Part II 1 st Nine Weeks, 2016-2017 OVERVIEW Algebra I Content Review Notes are designed by the High School Mathematics Steering Committee as a resource

More information

Measurement: The Basics

Measurement: The Basics I. Introduction Measurement: The Basics Physics is first and foremost an experimental science, meaning that its accumulated body of knowledge is due to the meticulous experiments performed by teams of

More information

Teaching Linear Algebra, Analytic Geometry and Basic Vector Calculus with Mathematica at Riga Technical University

Teaching Linear Algebra, Analytic Geometry and Basic Vector Calculus with Mathematica at Riga Technical University 5th WSEAS / IASME International Conference on ENGINEERING EDUCATION (EE'8), Heraklion, Greece, July -4, 8 Teaching Linear Algebra, Analytic Geometry and Basic Vector Calculus with Mathematica at Riga Technical

More information

18.465, further revised November 27, 2012 Survival analysis and the Kaplan Meier estimator

18.465, further revised November 27, 2012 Survival analysis and the Kaplan Meier estimator 18.465, further revised November 27, 2012 Survival analysis and the Kaplan Meier estimator 1. Definitions Ordinarily, an unknown distribution function F is estimated by an empirical distribution function

More information