Lund Institute of Technology Centre for Mathematical Sciences Mathematical Statistics
STATISTICAL METHODS FOR SAFETY ANALYSIS, FMS065
Computer exercise 3: The bootstrap algorithm and Bayesian analysis

In this exercise we will focus on two different approaches to studying the uncertainty of estimates: the bootstrap algorithm and Bayesian analysis. In the first exercise we will analyse times between earthquakes, and in the second the life times of ball bearings. The earthquake data and ball-bearing data are stored in the files quakeper.mat and ballbearings.mat.

1 Preparatory exercises

1. Read the instructions for the computer exercise and chapters 4 and 6 in the book.
2. Write down Bayes' formula for updating the prior density to the posterior density.

2 Earthquake data

Time intervals in days between successive serious earthquakes (>7.5 on the Richter scale or more than 1000 casualties) worldwide have been recorded. In all, 63 serious earthquakes are recorded, i.e. 62 waiting times. This particular set covers the period from the 16th December 1902 to the 4th March 1977. Our objective is to get an opinion about the probability of a period of more than 1500 days (about 4 years) between serious earthquakes. We will estimate this probability and plot a histogram to describe the uncertainty of the estimate. A confidence interval will also be given. Load the earthquake data into MATLAB.

load quakeper
x = quakeper;

Plot the data in a histogram.

whisto(x)
xlabel('days')

From previous analysis, we believe that the exponential distribution is a good candidate to model periods between earthquakes (judging from probability papers and χ² tests). The exponential distribution depends on the parameter θ (θ = return period of earthquakes),

F_X(x; θ) = 1 − e^(−x/θ),    (1)

and we search the probability

p = P(X > 1500) = 1 − F_X(1500; θ) = e^(−1500/θ).    (2)
First we will analytically write down an estimate of p and (try to) analyse its variability. Consider n random variables X_k, k = 1, ..., n, independent and distributed as given by Equation (1). The ML estimate of θ is

Θ* = (1/n) Σ_{k=1}^n X_k.

Equation (2) gives

P* = e^(−1500/Θ*).    (3)

Note that, since the variables X_k are random, Θ* and P* are also random. The uncertainty of the estimator P* is measured by its variance. Try to compute

V[P*] =

This variance is clearly difficult to compute. Instead we will use the bootstrap to overcome this problem.

2.1 Point estimate of p

To obtain an estimate p* of p, first calculate the ML estimate of θ:

thetastar = mean(x)
pstar = exp(-1500/thetastar)

Write down the estimate of p: p* =     (How accurate is this result?)

Another method to estimate the probability is to use the empirical distribution. Plot the empirical distribution and use the zoom button in the figure window to get the value at 1500.

F = empdistr(x)

The empirical estimate can also be obtained by the following line:

pemp = 1 - F(max(find(F(:,1) <= 1500)), 2)

Are the estimates pstar and pemp close to each other? Now we have estimated the searched probability. We will next use the bootstrap to evaluate the uncertainty of p*.

2.2 Estimating the uncertainty of p* with the bootstrap

In the past decades, the use of bootstrap techniques has attracted a lot of interest, from scientists in different fields handling data as well as from researchers in statistical theory. Roughly speaking, bootstrap techniques combine notions from classical inference with computer-intensive methods. We will here only point out some of the basic ideas and demonstrate how to use the bootstrap to derive the distribution of the estimation error E = p − p*. Of course we cannot calculate this distribution straightforwardly, since we only have one estimate p* and p is unknown. But with the bootstrap we will be able to estimate the distribution of the error E. To understand the basic idea of the bootstrap, we first test the basic commands on a short vector.
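For readers working outside MATLAB/WAFO, the two point estimates above (the parametric ML plug-in and the empirical fraction) can be sketched in plain Python. The data below are a synthetic stand-in with an invented θ = 437 days, not the course data set:

```python
import math
import random
import statistics

def pstar_exponential(x, level=1500.0):
    """Parametric estimate: fit theta by ML (the sample mean under an
    exponential model), then plug into p = exp(-level/theta)."""
    thetastar = statistics.mean(x)
    return math.exp(-level / thetastar)

def pemp(x, level=1500.0):
    """Empirical estimate: the fraction of observed waiting times
    exceeding `level`."""
    return sum(xi > level for xi in x) / len(x)

# Synthetic stand-in for the 62 earthquake waiting times
# (theta = 437 days is an invented value, not the course data).
random.seed(1)
x = [random.expovariate(1 / 437) for _ in range(62)]

print(pstar_exponential(x), pemp(x))
```

With real data the two numbers should be roughly comparable; the parametric estimate leans on the exponential assumption, the empirical one does not.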
The following three lines create a bootstrap sample of the vector y.

y = [2 1 2 6 1 1]
randomind = ceil(rand(1,6)*6)
m = y(randomind)
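The same sampling-with-replacement idea can be sketched in Python, where `random.choices` plays the role of the `ceil(rand(1,6)*6)` indexing; the toy vector here is illustrative:

```python
import random

y = [2, 1, 2, 6, 1, 1]   # toy data vector (values are illustrative)

random.seed(0)
# Draw 6 indices uniformly with replacement, as ceil(rand(1,6)*6) does:
randomind = [random.randrange(len(y)) for _ in range(len(y))]
m = [y[i] for i in randomind]

# Equivalent one-liner: sample the values with replacement directly.
m2 = random.choices(y, k=len(y))

print(m, m2)
```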
Convince yourself that ceil(rand(1,6)*6) produces 6 random numbers from 1 to 6, and that they occur with equal probability (i.e. comparable to throwing a die 6 times). Thus, one index can occur more than once. Repeat the command ceil(rand(1,6)*6) to get new results. The line m = y(randomind) gives a bootstrap sample of y: the vector m contains 6 values, and each element of m takes one of the values 2 1 2 6 1 1 with equal probability.

Now we consider the 62 samples in the vector x. Recall that we have assumed that they are independent. From the values in x we obtained the estimate p* of p. Now, to see how p* varies, we randomly pick 62 numbers with replacement from the sample x:

randomind = ceil(rand(62,1)*62);
m = x(randomind);
pboot = exp(-1500/mean(m))   % bootstrap sample

The value of pboot is a new estimate of p, estimated from the bootstrap sample m. Recall that the purpose of bootstrapping x was to obtain the distribution of E = p − p*. One can prove that the error E = p − p* is approximately equally distributed with the difference pstar - pboot. So if we repeat the lines above to produce several estimates pboot, we can make a histogram of the differences pstar - pboot, which will describe the variability of the error p − p* and hence the uncertainty of the estimate p*.

N = 1000;
for i = 1:N
  m(:,i) = x(ceil(rand(62,1)*62));
end
pboot = exp(-1500./mean(m));

This is the crucial step in the algorithm. Explain what the MATLAB code does. Then plot the bootstrap estimates to visualise the variability:

whisto(pboot,20,1,1)
grid on

Estimate the errors and plot a histogram:

esterror = pstar - pboot;
whisto(esterror,20,1,1)

2.3 Bootstrap confidence intervals

Finally, we will construct a confidence interval for our estimate based on the distribution of the error. Chapter 4.5.1 in the course compendium gives a more thorough explanation of how to construct a confidence interval based on the bootstrap. Here we will only give the MATLAB code. From the distribution of the error we obtain the 95% quantiles of the error.
esterror = sort(esterror);
q = esterror(round([N*0.025 N*0.975]))
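The whole chain up to this point (resample, recompute the estimate, sort the errors pstar − pboot, pick the 2.5% and 97.5% quantiles) can be mirrored in a few lines of stdlib Python. The data here are an invented stand-in, not the earthquake set, and the function name is my own:

```python
import math
import random
import statistics

def bootstrap_confint(x, n_boot=1000, level=1500.0, alpha=0.05):
    """95% bootstrap confidence interval for p = P(X > level) under an
    exponential model: quantiles of the errors pstar - pboot are added
    back to pstar, mirroring the construction in the text."""
    pstar = math.exp(-level / statistics.mean(x))
    errors = sorted(
        pstar - math.exp(-level / statistics.mean(random.choices(x, k=len(x))))
        for _ in range(n_boot)
    )
    q1 = errors[int(round(n_boot * alpha / 2)) - 1]       # 2.5% quantile
    q2 = errors[int(round(n_boot * (1 - alpha / 2))) - 1] # 97.5% quantile
    return pstar + q1, pstar + q2

random.seed(3)
x = [random.expovariate(1 / 437) for _ in range(62)]  # invented stand-in data
lo, hi = bootstrap_confint(x)
print(lo, hi)
```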
We then have that 95% of the errors belong to the interval [q(1), q(2)]. Since the error E = p − p*, the confidence interval for p is

confint = [pstar+q(1) pstar+q(2)]

Write down the confidence interval of p: I_p =

3 Bayesian analysis of earthquake data

Instead of bootstrap techniques we will now analyse the same data set with a Bayesian approach. We study the intensity Λ = 1/Θ instead of the return period Θ. The probability we are interested in is p = P(X > 1500) = exp(−1500 Λ). Similarly to the bootstrap exercise, we want to obtain the distribution of p. In the Bayesian approach, Λ (capital lambda) is a random variable, therefore we use the capital version of λ in this chapter. (Recall that in the classical approach λ is a fixed unknown constant, but an estimator of it is random.) So in the Bayesian approach, p is a function of the random variable Λ, and thus also a random variable. However, to not confuse p with the probability function P, we do not use a capital letter. We will first calculate the posterior distribution of Λ and then the posterior distribution of p. We assume, as in the previous chapter, that the time between earthquakes is exponentially distributed. This implies that the number of earthquakes, N, in a time interval [0, t] is Poisson distributed, given that we know the value of Λ. The probability of k earthquakes is

P(N = k | Λ = λ) = e^(−λt) (λt)^k / k!

The Bayesian technique requires a prior distribution of Λ. Then, using the information that we have 63 earthquakes during the time period, we get the posterior distribution. The updating formula is

f_Λ^post(λ) = c · P(N = 63 | Λ = λ) · f_Λ^prior(λ),    (4)

where c is a constant (i.e. not depending on λ).

3.1 Choice of prior

The first step in a Bayesian analysis is to choose the prior distribution. The prior distribution should reflect our knowledge about Λ prior to the measurement. If we know very little about Λ, we should choose a flat distribution.
The course book discusses how to choose suitable priors for intensities. Here we will choose an improper prior,

f_Λ^prior(λ) = λ^(−1),

which gives a non-informative prior. Explain why this prior is called non-informative and improper.
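One way to see the "improper" part explicitly: the prior has infinite total mass, so no normalising constant can turn it into a probability density. A short calculation (my sketch, not from the compendium):

```latex
% For any 0 < a < b, \int_a^b \lambda^{-1}\,d\lambda = \log(b/a),
% which diverges both as a -> 0^+ and as b -> \infty:
\int_0^\infty \lambda^{-1}\,\mathrm{d}\lambda
  = \lim_{\substack{a \to 0^+ \\ b \to \infty}}
    \bigl[\log\lambda\bigr]_a^b
  = \infty .
```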
3.2 Calculating the posterior density

To calculate the posterior density we need to take into account the information of 63 earthquakes during 27120 days. More precisely, we want to calculate the posterior distribution using Equation (4). Since N is a Poisson variable, we have that

P(N = 63 | Λ = λ) = e^(−27120λ) (27120λ)^63 / 63!

So the posterior density is

f_Λ^post(λ) = c₁ · e^(−27120λ)(27120λ)^63 · λ^(−1)    [updating probability × prior density]
            = constant · e^(−27120λ) λ^63 · λ^(−1)    (5)
            = c₁ λ^(63−1) e^(−27120λ),    (6)

which is a gamma density with parameters a = 63 and b = 27120. (In MATLAB the parameters are a = 63 and 1/b = 1/27120.) Use gampdf to plot the posterior density and distribution.

figure
xx = 0:0.00001:0.01;
plot(xx, gampdf(xx, 63, 1/27120), 'b')

For simplicity we can approximate this distribution with a normal density. Using a table of the gamma distribution we get the expectation m and the variance σ²:

m = 63/27120        % expectation
s2 = 63/27120^2     % variance
approx = normpdf(xx, m, sqrt(s2));
hold on
plot(xx, approx, 'r')
legend('exact', 'approximation')

From the figure we see that the posterior density of Λ is approximately normal, which is easier to handle:

Λ ∈ N(m, σ²)    (7)

However, we are interested in the posterior density of p = exp(−1500 Λ). Taking logarithms we obtain log(p) = −1500 Λ, and since Λ is (approximately) normal, log(p) is also normal:

log(p) ∈ N(−1500 m, 1500² σ²).    (8)

Fill in the numerical values of the expectation and variance of log(p):

E(log(p)) = −1500 m =
V(log(p)) = 1500² σ² =

Now we can plot the posterior density and distribution of p.

logpmean = -1500*m;
logpvar = 1500^2*s2;
pp = 0:0.0001:0.12;
fp = lognpdf(pp, logpmean, sqrt(logpvar));
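The moments above, and the lognormal interval that the normal approximation implies for p, can be cross-checked numerically with stdlib Python (`statistics.NormalDist`). The constants 63 and 27120 are taken from the text; everything else is a sketch:

```python
import math
from statistics import NormalDist

a, b = 63, 27120          # posterior gamma parameters from the text
m = a / b                 # posterior mean of Lambda
s2 = a / b**2             # posterior variance of Lambda

# Normal approximation for log(p) = -1500 * Lambda:
logpmean = -1500 * m
logpvar = 1500**2 * s2
logp = NormalDist(mu=logpmean, sigma=math.sqrt(logpvar))

# 95% interval for p: transform the 2.5% and 97.5% quantiles of log(p)
# back with exp (exp is increasing, so the order is preserved).
lo = math.exp(logp.inv_cdf(0.025))
hi = math.exp(logp.inv_cdf(0.975))
print(m, s2, logpmean, logpvar, lo, hi)
```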
Fp = logncdf(pp, logpmean, sqrt(logpvar));
subplot(211), plot(pp, fp), title('Posterior density of p')
subplot(212), plot(pp, Fp), title('Posterior distribution of p')

Write down the credibility interval of p with help from the figure you just plotted:

[p_0.975, p_0.025] =

Compare the credibility interval with the confidence interval from the previous chapter. Are they similar?

4 Ball bearings

Now we will study measurements of the life times of ball bearings. We will assume a parametric model for the distribution and study the uncertainty of an estimate of the quantile x_0.9. Load the data set into MATLAB.

load ballbearings
x = ball;

The unit is million cycles. By experience we know that the Weibull distribution often is a suitable model for this kind of data. Plot the data on Weibull paper to graphically check the assumption.

wweibplot(x)

Is it a reasonable assumption? The Weibull distribution has two parameters, a and c,

F_X(x) = 1 − e^(−(x/a)^c).    (9)

Now an expert claims that only 10% of the bearings have a lifetime less than 40 million cycles. We will investigate this statement with the bootstrap.

5 Ball bearings, a bootstrap study

We will now analyse the quantile of interest, x_0.9. More precisely, we will estimate this quantile and analyse its variability by means of the bootstrap. First we estimate the parameters a and c in the Weibull distribution and plot the empirical distribution.

par = wweibfit(x);
a = par(1), c = par(2)

Get the empirical estimate of x_0.9 from the empirical distribution function, the blue line in the figure. Use the zoom button if necessary.

x_0.9^emp =
It is also possible to get an estimate of the quantile using the ML estimates:

x09star = a*(-log(0.9))^(1/c)

x_0.9* =

To get a notion of the variability of these estimates, we bootstrap the data set to get more estimates of x_0.9.

N = 1000;
for i = 1:N
  xb = x(ceil(rand(22,1)*22));
  par(i,1:2) = wweibfit(xb);
  x09(i) = par(i,1)*(-log(0.9))^(1/par(i,2));
end

Then plot a histogram of the obtained estimates of the quantile.

whisto(x09, 45, 1)

Does this confirm the expert's statement or not?
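A rough stdlib-Python analogue of this bootstrap, for readers without WAFO: it fits the Weibull shape by solving the standard profile-likelihood equation with bisection (my own helper, not a course function), and resamples invented stand-in lifetimes rather than the real 22 measurements:

```python
import math
import random

def weibull_mle(x, lo=0.05, hi=50.0, iters=80):
    """ML fit of the two-parameter Weibull F(t) = 1 - exp(-(t/a)^c).
    The shape c solves sum(x^c log x)/sum(x^c) - 1/c - mean(log x) = 0,
    found by bisection; then a = (mean(x^c))^(1/c)."""
    logs = [math.log(v) for v in x]
    mean_log = sum(logs) / len(x)

    def g(c):
        powc = [v ** c for v in x]
        return (sum(p * l for p, l in zip(powc, logs)) / sum(powc)
                - 1.0 / c - mean_log)

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    c = 0.5 * (lo + hi)
    a = (sum(v ** c for v in x) / len(x)) ** (1.0 / c)
    return a, c

def x09(a, c):
    """The quantile with P(X <= x09) = 0.1 under the fitted Weibull."""
    return a * (-math.log(0.9)) ** (1.0 / c)

# Invented stand-in for the 22 ball-bearing lifetimes (million cycles),
# drawn by inverse transform from a Weibull with made-up parameters.
random.seed(4)
true_a, true_c = 80.0, 2.0
x = [true_a * (-math.log(1.0 - random.random())) ** (1 / true_c)
     for _ in range(22)]

a, c = weibull_mle(x)
boot = []
for _ in range(500):
    xb = random.choices(x, k=len(x))      # resample with replacement
    ab, cb = weibull_mle(xb)
    boot.append(x09(ab, cb))
print(x09(a, c), min(boot), max(boot))
```

The spread between `min(boot)` and `max(boot)` plays the role of the histogram in the text: it shows how far the quantile estimate can move under resampling.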
Markov Chain Monte Carlo methods By Oleg Makhnin 1 Introduction a b c M = d e f g h i 0 f(x)dx 1.1 Motivation 1.1.1 Just here Supresses numbering 1.1.2 After this 1.2 Literature 2 Method 2.1 New math As
More informationSTAT 135 Lab 5 Bootstrapping and Hypothesis Testing
STAT 135 Lab 5 Bootstrapping and Hypothesis Testing Rebecca Barter March 2, 2015 The Bootstrap Bootstrap Suppose that we are interested in estimating a parameter θ from some population with members x 1,...,
More informationBayesian Linear Regression
Bayesian Linear Regression Sudipto Banerjee 1 Biostatistics, School of Public Health, University of Minnesota, Minneapolis, Minnesota, U.S.A. September 15, 2010 1 Linear regression models: a Bayesian perspective
More informationprobability of k samples out of J fall in R.
Nonparametric Techniques for Density Estimation (DHS Ch. 4) n Introduction n Estimation Procedure n Parzen Window Estimation n Parzen Window Example n K n -Nearest Neighbor Estimation Introduction Suppose
More informationAdvanced Statistical Modelling
Markov chain Monte Carlo (MCMC) Methods and Their Applications in Bayesian Statistics School of Technology and Business Studies/Statistics Dalarna University Borlänge, Sweden. Feb. 05, 2014. Outlines 1
More informationBayesian statistics, simulation and software
Module 4: Normal model, improper and conjugate priors Department of Mathematical Sciences Aalborg University 1/25 Another example: normal sample with known precision Heights of some Copenhageners in 1995:
More informationThe comparative studies on reliability for Rayleigh models
Journal of the Korean Data & Information Science Society 018, 9, 533 545 http://dx.doi.org/10.7465/jkdi.018.9..533 한국데이터정보과학회지 The comparative studies on reliability for Rayleigh models Ji Eun Oh 1 Joong
More informationConcrete subjected to combined mechanical and thermal loading: New experimental insight and micromechanical modeling
Concrete subjected to combined mechanical thermal loading: New experimental insight micromechanical modeling Thomas Ring 1, Matthias Zeiml 1,2, Roman Lackner 3 1 Institute for Mechanics of Materials Structures
More informationTopic 16 Interval Estimation. The Bootstrap and the Bayesian Approach
Topic 16 Interval Estimation and the Bayesian Approach 1 / 9 Outline 2 / 9 The confidence regions have been determined using aspects of the distribution of the data, by, for example, appealing to the central
More informationStatistics/Mathematics 310 Larget April 14, Solution to Assignment #9
Solution to Assignment #9 1. A sample s = (753.0, 1396.9, 1528.2, 1646.9, 717.6, 446.5, 848.0, 1222.0, 748.5, 610.1) is modeled as an i.i.d sample from a Gamma(α, λ) distribution. You may think of this
More informationStep-Stress Models and Associated Inference
Department of Mathematics & Statistics Indian Institute of Technology Kanpur August 19, 2014 Outline Accelerated Life Test 1 Accelerated Life Test 2 3 4 5 6 7 Outline Accelerated Life Test 1 Accelerated
More informationPart 3: Parametric Models
Part 3: Parametric Models Matthew Sperrin and Juhyun Park August 19, 2008 1 Introduction There are three main objectives to this section: 1. To introduce the concepts of probability and random variables.
More informationFinding small factors of integers. Speed of the number-field sieve. D. J. Bernstein University of Illinois at Chicago
The number-field sieve Finding small factors of integers Speed of the number-field sieve D. J. Bernstein University of Illinois at Chicago Prelude: finding denominators 87366 22322444 in R. Easily compute
More informationStat 5101 Lecture Notes
Stat 5101 Lecture Notes Charles J. Geyer Copyright 1998, 1999, 2000, 2001 by Charles J. Geyer May 7, 2001 ii Stat 5101 (Geyer) Course Notes Contents 1 Random Variables and Change of Variables 1 1.1 Random
More informationEstimation of reliability parameters from Experimental data (Parte 2) Prof. Enrico Zio
Estimation of reliability parameters from Experimental data (Parte 2) This lecture Life test (t 1,t 2,...,t n ) Estimate θ of f T t θ For example: λ of f T (t)= λe - λt Classical approach (frequentist
More informationPetr Volf. Model for Difference of Two Series of Poisson-like Count Data
Petr Volf Institute of Information Theory and Automation Academy of Sciences of the Czech Republic Pod vodárenskou věží 4, 182 8 Praha 8 e-mail: volf@utia.cas.cz Model for Difference of Two Series of Poisson-like
More informationProblem 1 (From the reservoir to the grid)
ÈÖÓ º ĺ ÙÞÞ ÐÐ ÈÖÓ º ʺ ³ Ò Ö ½ ½¹¼ ¹¼¼ ËÝ Ø Ñ ÅÓ Ð Ò ÀË ¾¼½ µ Ü Ö ËÓÐÙØ ÓÒ ÌÓÔ ÀÝ ÖÓ Ð ØÖ ÔÓÛ Ö ÔÐ ÒØ À Èȵ ¹ È ÖØ ÁÁ Ð ÖÒ Ø Þº ÇØÓ Ö ¾ ¾¼½ Problem 1 (From the reservoir to the grid) The causality diagram
More informationDepinning transition for domain walls with an internal degree of freedom
Depinning transition for domain walls with an internal degree of freedom Vivien Lecomte 1, Stewart Barnes 1,2, Jean-Pierre Eckmann 3, Thierry Giamarchi 1 1 Département de Physique de la Matière Condensée,
More informationarxiv:hep-ph/ v1 10 May 2001
New data and the hard pomeron A Donnachie Department of Physics, Manchester University P V Landshoff DAMTP, Cambridge University DAMTP-200-38 M/C-TH-0/03 arxiv:hep-ph/005088v 0 May 200 Abstract New structure-function
More informationOptimal Routing Policy in Two Deterministic Queues
INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE Optimal Routing Policy in Two Deterministic Queues Bruno Gaujal Emmanuel Hyon N 3997 Septembre 2000 THÈME 1 ISSN 0249-6399 ISRN INRIA/RR--3997--FR+ENG
More informationThe posterior - the goal of Bayesian inference
Chapter 7 The posterior - the goal of Bayesian inference 7.1 Googling Suppose you are chosen, for your knowledge of Bayesian statistics, to work at Google as a search traffic analyst. Based on historical
More information