Bayesian techniques for fatigue life prediction and for inference in linear time dependent PDEs


1 Bayesian techniques for fatigue life prediction and for inference in linear time dependent PDEs Marco Scavino SRI Center for Uncertainty Quantification Computer, Electrical and Mathematical Sciences & Engineering Division, King Abdullah University of Science and Technology, KSA Universidad de la República, Montevideo, Uruguay SRI UQ Center CEMSE KAUST 4th Annual Workshop, January 8, 2016, KAEC, Kingdom of Saudi Arabia

2 Overview: Introduction; Statistical models; Bayesian approach; Model comparison; Conclusions; References. Linear Parabolic PDEs with Noisy Boundary Conditions: Introduction; Formulation of the problem; Statistical setting; The marginal likelihood of θ; Example - inference for thermal diffusivity

3 Introduction Bayesian inference and model comparison for metallic fatigue data. Joint work with Ivo Babuška (ICES, The University of Texas at Austin, Austin, USA), Zaid Sawlan (CEMSE, King Abdullah University of Science and Technology, Thuwal, KSA), Barna Szabó (Washington University in St. Louis, St. Louis, USA), and Raúl Tempone (CEMSE, King Abdullah University of Science and Technology, Thuwal, KSA).

4 Introduction Mechanical and structural components subjected to cyclic loading are susceptible to cumulative damage and eventual failure through an irreversible process called metal fatigue. Goal: to predict the fatigue life of materials by providing a systematic approach to model calibration, model selection and model ranking with reference to stress-life (S-N) data drawn from a collection of records of fatigue experiments.

5 Introduction The 75S-T6 aluminum sheet specimens data set. Data are available on a collection of records of fatigue tests conducted at the Battelle Memorial Institute on 85 75S-T6 aluminum sheet specimens by means of a Krouse direct repeated-stress testing machine [1, Table 3, pp. 22-24]. The following data are recorded for each specimen: the maximum stress, S_max, measured in ksi units; the test ratio, R, defined as the minimum to maximum stress ratio; the fatigue life, N, defined as the number of load cycles at which fatigue failure occurred; and a binary variable (0/1) denoting whether or not the test had been stopped prior to the occurrence of failure (run-out).

6 Introduction The 75S-T6 aluminum sheet specimens data set (cont.) In 12 of the 85 experiments, the specimens remained unbroken when the tests were stopped. [Table: excerpt of the records, listing for each specimen (B24M1, B81M4, B91M3, B95M4, B94M1, B93M1, B15M2, B23M4, B19M2, B19M3, B39M4, B16M1, B19M1, B35M3, ...) the maximum stress (ksi), the number of load cycles, the test ratio, and the failure/run-out type.]

7 Statistical models Statistical models for metallic fatigue data In the Metallic Materials Properties Development and Standardization (MMPDS-01) Handbook (2003, Battelle Memorial Institute), the so-called logarithmic fit is considered, with mean function
$$\mu(S_{eq}^{(lg)}) = A_1 + A_2 \log_{10}\!\big(S_{eq}^{(lg)} - A_3\big)$$
and objective function
$$e_{std} = \left( \frac{\sum_{i=1}^{n} \big(\log_{10}(n_i) - \mu(S_{eq,i}^{(lg)})\big)^2}{n - p} \right)^{1/2},$$
where n is the number of data points and p is the number of fitting parameters (namely A_1, A_2, A_3 and q).
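
As an illustration (not part of the original slides), a minimal Python sketch of this objective function might look as follows; the function name e_std and the array names s_max, r and n_cycles are hypothetical placeholders for the per-specimen data.

```python
# Minimal sketch, assuming arrays of maximum stresses, test ratios and observed lives.
import numpy as np

def e_std(params, s_max, r, n_cycles):
    """MMPDS-style standard error of the logarithmic fit.
    params : (A1, A2, A3, q); s_max, r, n_cycles : per-specimen data arrays."""
    A1, A2, A3, q = params
    s_eq = s_max * (1.0 - r) ** q              # equivalent stress S_eq^(lg)
    mu = A1 + A2 * np.log10(s_eq - A3)         # fitted mean of log10(N)
    resid = np.log10(n_cycles) - mu
    n, p = len(n_cycles), len(params)
    return np.sqrt(np.sum(resid ** 2) / (n - p))
```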

8 Statistical models Statistical models for metallic fatigue data (cont.) The resulting estimated mean value function, without the run-outs, is given by
$$\mu(S_{eq}^{(lg)}) = A_1 + A_2 \log_{10}\!\big(S_{eq}^{(lg)} - 31.53\big),$$
where S_eq^(lg) = S_max (1 - R)^q, with A_1, A_2 and q taking their fitted values, and e_std denotes the corresponding value of the objective function. The estimated fatigue limit equals 31.53/(1 - R)^q evaluated at R = -1, expressed in ksi, since the fatigue limit is the value of the maximum stress when the test ratio, R, is equal to -1 (the fully reversed condition).

9 Statistical models Statistical models for metallic fatigue data (cont.) Figure 1: Logarithmic fit of the 75S-T6 data set without run-outs (estimated 0.05 and 0.95 quantile functions, estimated median function, and fatigue limit parameter, in ksi; S_eq (ksi) versus cycles).

10 Statistical models We consider fatigue-limit models [2] and random fatigue-limit models [3] that are specially designed to allow the treatment of the run-outs (right-censored data). Fatigue data obtained for particular test ratios need to be generalized to arbitrary test ratios. To this purpose the equivalent stress S_eq is defined as
$$S_{eq}(q) = S_{\max}\,(1 - R)^{q},$$
where q is a fitting parameter. We distinguish between the fatigue limit, which is a physical notion, and the fatigue limit parameter, which is an unknown parameter, expressed in the same scale as the equivalent stress and calibrated for different models.

11 Statistical models Model Ia Let A_3 be the fatigue limit parameter. At each equivalent stress with S_eq > A_3, the fatigue life N is modeled with a lognormal distribution: log_10(N) ~ N(μ(S_eq), σ(S_eq)). Assume that
$$\mu(S_{eq}) = A_1 + A_2 \log_{10}(S_{eq} - A_3), \quad \text{if } S_{eq} > A_3, \qquad \sigma(S_{eq}) = \tau.$$
Given the sample data, n = (n_1, ..., n_m), we consider the likelihood function L(A_1, A_2, A_3, τ, q; n), given by
$$\prod_{i=1}^{m} \left[ \frac{g\big(\log_{10}(n_i);\, \mu(S_{eq}), \sigma(S_{eq})\big)}{n_i \log(10)} \right]^{\delta_i} \left[ 1 - \Phi\!\left( \frac{\log_{10}(n_i) - \mu(S_{eq})}{\sigma(S_{eq})} \right) \right]^{1-\delta_i},$$

12 Statistical models Model Ia (cont.) where
$$g(t; \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left\{ -\frac{(t-\mu)^2}{2\sigma^2} \right\},$$
Φ is the cumulative distribution function of the standard normal distribution, and δ_i = 1 if n_i is a failure, δ_i = 0 if n_i is a run-out.
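
The following is a minimal Python sketch (not the authors' code) of this censored log-likelihood; the array names s_max, r, n_cycles and failed are hypothetical, with failed[i] = 1 for a failure and 0 for a run-out, and for simplicity the sketch requires S_eq > A_3 for every specimen.

```python
# Minimal sketch of the Model Ia log-likelihood with right-censored run-outs.
import numpy as np
from scipy.stats import norm

def loglik_model_Ia(params, s_max, r, n_cycles, failed):
    A1, A2, A3, tau, q = params
    s_eq = s_max * (1.0 - r) ** q                 # equivalent stress S_eq(q)
    if np.any(s_eq <= A3):                        # simplification: require S_eq > A3
        return -np.inf
    mu = A1 + A2 * np.log10(s_eq - A3)
    z = (np.log10(n_cycles) - mu) / tau
    # failures: lognormal density of N, including the Jacobian term n_i * log(10)
    log_f = norm.logpdf(z) - np.log(tau) - np.log(n_cycles * np.log(10.0))
    # run-outs: survival probability 1 - Phi(z)
    log_s = norm.logsf(z)
    return np.sum(np.where(failed == 1, log_f, log_s))
```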

13 Statistical models Model Ia (cont.) Figure 2: Model Ia fit of the 75S-T6 data set (estimated 0.05 and 0.95 quantile functions, estimated median function, and fatigue limit parameter; S_eq (ksi) versus cycles), with A_1 = 7.38, A_2 = -2.01, A_3 = 35.04, q = 0.563, τ = ...

14 Statistical models Model Ib We extend Model Ia by allowing a non-constant standard deviation:
$$\mu(S_{eq}) = A_1 + A_2 \log_{10}(S_{eq} - A_3), \quad \text{if } S_{eq} > A_3,$$
$$\sigma(S_{eq}) = 10^{\,B_1 + B_2 \log_{10}(S_{eq})}, \quad \text{if } S_{eq} > A_3.$$

15 Statistical models Model Ib (cont.) Figure 3: Model Ib fit of the 75S-T6 data set (estimated 0.05 and 0.95 quantile functions, estimated median function, and fatigue limit parameter; S_eq (ksi) versus cycles), with A_1 = 6.72, A_2 = -1.57, A_3 = 36.21, q = 0.551, B_1 = 4.55, B_2 = ...

16 Statistical models Profile likelihood To assess the plausibility of a range of values of the fatigue limit parameter, A_3, we construct the profile likelihood [2, p. 294]:
$$R(A_3) = \max_{\theta_0} \left[ \frac{L(\theta_0, A_3)}{L(\hat{\theta})} \right],$$
where θ_0 denotes all parameters except for the fatigue limit parameter, A_3, and θ̂ is the ML estimate of θ.
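
A minimal sketch of how such a profile can be computed numerically is shown below (illustrative only; loglik, theta0_init and loglik_at_mle are hypothetical names for a log-likelihood with A_3 held fixed, a starting point for the remaining parameters, and the maximized log-likelihood of the full model).

```python
# Minimal sketch: profile likelihood R(A3) on a grid, re-maximizing over the
# remaining parameters theta_0 = (A1, A2, tau, q) for each fixed A3.
import numpy as np
from scipy.optimize import minimize

def profile_likelihood_A3(a3_grid, loglik, theta0_init, loglik_at_mle):
    prof = []
    x0 = np.asarray(theta0_init, float)
    for a3 in a3_grid:
        res = minimize(lambda th: -loglik(th, a3), x0, method="Nelder-Mead")
        prof.append(np.exp(-res.fun - loglik_at_mle))  # L(theta0_hat, A3) / L(theta_hat)
        x0 = res.x                                     # warm start for the next A3 value
    return np.array(prof)
```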

17 Statistical models Profile likelihood (cont.) Figure 4: Profile likelihood estimates R(A_3) for the fatigue limit parameter, A_3 (ksi), with Model Ia fit (blue curve) and Model Ib fit (red curve).

18 Statistical models Model II We extend Model Ia to allow a random fatigue limit parameter as in [3]:
$$\mu(S_{eq}) = A_1 + A_2 \log_{10}(S_{eq} - A_3), \quad \text{if } S_{eq} > A_3, \qquad \sigma(S_{eq}) = \tau;$$
the density of log_10(A_3) is φ(t; μ_f, σ_f); the conditional density of log_10(N) given S_eq > A_3 is φ(t; μ(S_eq), σ(S_eq)), where
$$\phi(t; \mu, \sigma) = \frac{1}{\sigma} \exp\!\left\{ \left(\frac{t-\mu}{\sigma}\right) - \exp\!\left(\frac{t-\mu}{\sigma}\right) \right\}$$
is the smallest extreme value (sev) probability density function with location parameter μ and scale parameter σ [4, Chapter 4].
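
For concreteness, here is a small Python sketch (not from the slides) of the sev density and of one generative draw from the random fatigue-limit model; the numerical values are purely illustrative, and scipy's gumbel_l is used because it implements exactly this sev parametrization.

```python
# Minimal sketch: sev density and a generative draw of (A3, log10 N) under Model II.
import numpy as np
from scipy.stats import gumbel_l   # smallest-extreme-value distribution

def sev_pdf(t, mu, sigma):
    z = (t - mu) / sigma
    return np.exp(z - np.exp(z)) / sigma          # (1/sigma) exp{ z - exp(z) }

rng = np.random.default_rng(0)
mu_f, sigma_f = 1.6, 0.02                         # illustrative hyperparameters
log10_A3 = gumbel_l.rvs(loc=mu_f, scale=sigma_f, random_state=rng)
A3 = 10.0 ** log10_A3                             # random fatigue limit parameter (ksi)

A1, A2, tau, S_eq = 6.5, -1.5, 0.08, 60.0         # illustrative values, with S_eq > A3
log10_N = gumbel_l.rvs(loc=A1 + A2 * np.log10(S_eq - A3), scale=tau, random_state=rng)
```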

19 Statistical models Model II (cont.) Figure 5: Model II fit of the 75S-T6 data set (estimated 0.05 and 0.95 quantile functions and estimated median function; S_eq (ksi) versus cycles), with A_1 = 6.51, A_2 = -1.47, μ_f = 1.60, σ_f = ..., q = 0.489, τ = 0.0822.

20 Bayesian approach Bayesian approach Model Ia Model Ia priors: A_1 ~ U(5, 13), A_2 ~ U(-5, 0), A_3 ~ U(24, 40), q ~ U(0.1, 0.9), τ ~ U(0.1, 1.5). Figure 6: Prior densities (red line) and approximate marginal posterior densities (blue line) for A_1, A_2, q, τ and A_3.
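
The slides report simulation-based (MCMC) approximations of these posteriors. As a hedged illustration of how such draws can be obtained (not necessarily the sampler used by the authors), a generic random-walk Metropolis sketch follows; log_post is a hypothetical function returning the censored log-likelihood plus the log of the uniform prior (i.e. -inf outside the prior box).

```python
# Minimal sketch: random-walk Metropolis targeting the Model Ia posterior.
import numpy as np

def metropolis(log_post, theta0, step, n_iter=50_000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)   # Gaussian proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:                 # accept / reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Example call (illustrative starting point and step sizes):
# chain = metropolis(log_post, theta0=[9.0, -2.0, 32.0, 0.5, 0.5],
#                    step=np.array([0.1, 0.05, 0.5, 0.02, 0.02]))
```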

21 Bayesian approach Bayesian approach Model Ia (cont.) Figure 7: Contour plots of the approximate bivariate densities of each pair of parameters in Model Ia.

22 Bayesian approach Bayesian approach Model Ib Model Ib priors: A_1 ~ U(4, 10), A_2 ~ U(-4, 0), A_3 ~ U(30, 40), q ~ U(0.1, 0.9), B_1 ~ U(2, 7), B_2 ~ U(-5, -1). Figure 8: Prior densities (red line) and approximate marginal posterior densities (blue line) for A_1, A_2, q, B_1, B_2 and A_3.

23 Bayesian approach Bayesian approach Model Ib (cont.) Figure 9: Contour plots of the approximate bivariate densities of each pair of parameters in Model Ib.

24 Bayesian approach Bayesian approach Model II Model II priors: A_1 ~ U(4, 10), A_2 ~ U(-4, 0), μ_f ~ U(1.4, 1.8), σ_f ~ U(0, 0.1), q ~ U(0.1, 0.9), τ ~ U(0, 0.25). Figure 10: Prior densities (red line) and approximate marginal posterior densities (blue line) for A_1, A_2, q, τ, μ_f and σ_f.

25 Bayesian approach Bayesian approach Model II (cont.) Figure 11: Contour plots of the approximate bivariate densities of each pair of parameters in Model II.

26 Model comparison Model comparison - classical information criteria. The maximum log-likelihood, the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the AIC with correction are compared for Models Ia, Ib and II, where
$$AIC = 2k - 2\log L(\hat{\theta} \mid y), \qquad BIC = k\log(n) - 2\log L(\hat{\theta} \mid y), \qquad AIC_c = AIC + \frac{2k(k+1)}{n-k-1}.$$
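
A minimal Python sketch of these three criteria (illustrative only; the function name information_criteria is hypothetical):

```python
# Minimal sketch: AIC, BIC and corrected AIC from a maximized log-likelihood,
# the number of fitted parameters k, and the sample size n.
import numpy as np

def information_criteria(max_loglik, k, n):
    aic = 2 * k - 2 * max_loglik
    bic = k * np.log(n) - 2 * max_loglik
    aic_c = aic + 2 * k * (k + 1) / (n - k - 1)
    return {"AIC": aic, "BIC": bic, "AICc": aic_c}
```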

27 Model comparison Model comparison - Bayesian predictive criteria. The log marginal likelihood (Laplace and Laplace-Metropolis approximations), the log pointwise predictive density (lppd), the deviance information criterion (DIC), the Watanabe-Akaike information criterion (WAIC) and the K-fold cross-validation expected log predictive density (elpd) are compared for Models Ia, Ib and II, where
$$lppd = \sum_{i=1}^{n} \log\!\left( \frac{1}{S} \sum_{m=1}^{S} p(y_i \mid \theta^m) \right),$$
with θ^m, m = 1, ..., S, draws from the posterior of θ,
$$DIC = 2\,p_{DIC} - 2 \log p(y \mid \hat{\theta}_{Bayes}), \quad \text{where } p_{DIC} = 2\left( \log p(y \mid \hat{\theta}_{Bayes}) - \frac{1}{S} \sum_{m=1}^{S} \log p(y \mid \theta^m) \right),$$
$$WAIC = -2\,(lppd - p_{WAIC}),$$

28 Model comparison Model comparison - Bayesian predictive criteria (cont.) where
$$p_{WAIC} = 2 \sum_{i=1}^{n} \left( \log\!\left( \frac{1}{S} \sum_{m=1}^{S} p(y_i \mid \theta^m) \right) - \frac{1}{S} \sum_{m=1}^{S} \log p(y_i \mid \theta^m) \right).$$
K-fold cross-validation (elpd): Data are randomly partitioned into K disjoint subsets {y_k}_{k=1}^{K}. The training set is {y_(-k)} = {y_1, ..., y_{k-1}, y_{k+1}, ..., y_K}. For each training set compute the corresponding posterior distribution, p(θ | y_(-k)). The log predictive density for y_i ∈ y_k is computed using the training set {y_(-k)}:
$$lpd_i = \log\!\left( \frac{1}{S} \sum_{m=1}^{S} p(y_i \mid \theta^{k,m}) \right), \quad i \in k,$$

29 Model comparison Model comparison - Bayesian predictive criteria (cont.) where {θ^{k,m}}_{m=1}^{S} are MCMC samples from the posterior p(θ | y_(-k)). Finally, compute the expected log predictive density (elpd):
$$elpd = \sum_{i=1}^{n} lpd_i.$$
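
As an illustration (not from the slides), lppd and WAIC can be computed from an S x n matrix of pointwise log-likelihoods evaluated at posterior draws; the K-fold elpd is obtained in the same way, except that each lpd_i uses draws conditioned on the training set excluding the fold containing y_i. The matrix name log_lik below is hypothetical.

```python
# Minimal sketch: lppd, p_WAIC and WAIC from log_lik of shape (S, n), where
# log_lik[m, i] = log p(y_i | theta^m) for posterior draws theta^1, ..., theta^S.
import numpy as np
from scipy.special import logsumexp

def lppd_waic(log_lik):
    S = log_lik.shape[0]
    lppd_i = logsumexp(log_lik, axis=0) - np.log(S)       # log (1/S) sum_m p(y_i | theta^m)
    p_waic = 2.0 * np.sum(lppd_i - log_lik.mean(axis=0))   # p_WAIC as defined above
    lppd = lppd_i.sum()
    waic = -2.0 * (lppd - p_waic)
    return lppd, p_waic, waic
```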

30 Conclusions Conclusions Models of various complexity that were designed to account for right-censored data were calibrated by means of the maximum likelihood method. A Bayesian approach was also considered, and simulation-based posterior distributions were provided. Classical measures of fit based on information criteria and Bayesian model comparison criteria were applied to determine which model might be preferred under different a priori scenarios. Both the classical and the Bayesian approach to model comparison provided evidence in favor of Model II given the 75S-T6 data set.

31 Conclusions Conclusions (cont.) An integrated set of computational tools has been developed for model calibration, cross-validation, consistency and model comparison, allowing the user to rank alternative statistical models based on objective criteria.

32 References References [1] H. J. Grover, S. M. Bishop & L. R. Jackson (March 1951). Fatigue Strengths of Aircraft Materials. Axial-Load Fatigue Tests on Unnotched Sheet Specimens of 24S-T3 and 75S-T6 Aluminum Alloys and of SAE 4130 Steel, Battelle Memorial Institute, National Advisory Committee on Aeronautics (NACA), Technical Note 2324, Washington. [2] Francis G. Pascual & William Q. Meeker (1997). Analysis of fatigue data with runouts based on a model with nonconstant standard deviation and a fatigue limit parameter, Journal of Testing and Evaluation, 25. [3] Francis G. Pascual & William Q. Meeker (1999). Estimating fatigue curves with the random fatigue-limit model, Technometrics, 41(4).

33 References References (cont.) [4] William Q. Meeker & Luis A. Escobar (1998). Statistical Methods for Reliability Data, Wiley. [5] Andrew Gelman, Jessica Hwang & Aki Vehtari (2014). Understanding predictive information criteria for Bayesian models, Statistics and Computing, 24(6). [6] Sumio Watanabe (2010). Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory, Journal of Machine Learning Research, 11.

34 Linear Parabolic PDEs with Noisy Boundary Conditions Introduction A hierarchical Bayesian setting for an inverse problem in linear parabolic PDEs with noisy boundary conditions. Joint work with Fabrizio Ruggeri (CNR - IMATI, Consiglio Nazionale delle Ricerche, Milano, Italy, fabrizio@mi.imati.cnr.it), Zaid Sawlan (CEMSE, King Abdullah University of Science and Technology, Thuwal, KSA, zaid.sawlan@kaust.edu.sa), and Raúl Tempone (CEMSE, King Abdullah University of Science and Technology, Thuwal, KSA, raul.tempone@kaust.edu.sa).

35 Linear Parabolic PDEs with Noisy Boundary Conditions Introduction We develop a hierarchical Bayesian setting to infer unknown parameters of linear parabolic partial differential equations, as an example of linear time dependent PDEs, under the assumption that noisy measurements are available in the interior of a domain of interest and for the unknown boundary conditions. Goal: to solve the inverse problem based on the assumption that the boundary parameters are unknown and modeled by means of adequate probability distributions.

36 Linear Parabolic PDEs with Noisy Boundary Conditions Formulation of the problem Formulation of the problem Consider the deterministic one-dimensional parabolic initial-boundary value problem:
$$\begin{cases} \partial_t T + \mathcal{L}_\theta T = 0, & x \in (x_L, x_R),\ 0 < t \le t_N < \infty \\ T(x_L, t) = T_L(t), & t \in [0, t_N] \\ T(x_R, t) = T_R(t), & t \in [0, t_N] \\ T(x, 0) = g(x), & x \in (x_L, x_R), \end{cases} \qquad (1)$$
where L_θ is a linear second-order partial differential operator that takes the form
$$\mathcal{L}_\theta T = -\partial_x\big(a(x)\,\partial_x T\big) + b(x)\,\partial_x T + c(x)\,T,$$

37 Linear Parabolic PDEs with Noisy Boundary Conditions Formulation of the problem Formulation of the problem (cont.) θ(x) = (a(x), b(x), c(x))^tr, and the partial differential operator ∂_t + L_θ is parabolic. Our main objective is to provide a Bayesian solution to an inverse problem for θ, where we assume that (i) θ is unknown, while the initial condition g in the initial-boundary value problem is known; (ii) θ is allowed to vary with the spatial variable x. Noisy readings of the function T(x, t) at the I + 1 spatial locations, including the boundaries, x_L = x_0, x_1, x_2, ..., x_{I-1}, x_I = x_R, at each of the N times t_1, t_2, ..., t_N, are assumed available.

38 Linear Parabolic PDEs with Noisy Boundary Conditions Statistical setting Statistical setting Let Y_n := (Y_{0,n}, ..., Y_{I,n})^tr denote the (I+1)-dimensional vector of observed readings at time t_n, and assume a statistical model with an additive Gaussian experimental noise ε_n:
$$Y_n = \big( T_L(t_n),\ T(x_1, t_n),\ \ldots,\ T(x_{I-1}, t_n),\ T_R(t_n) \big)^{tr} + \epsilon_n, \qquad (2)$$
where ε_n ~ i.i.d. N(0_{I+1}, σ² I_{I+1}) for some measurement error variance σ² > 0.

39 Linear Parabolic PDEs with Noisy Boundary Conditions Statistical setting Statistical setting (cont.) Let Y_n^I := (Y_{1,n}, ..., Y_{I-1,n})^tr, of size (I-1) x 1, denote the vector of observed data at the interior locations x_1, x_2, ..., x_{I-1}, and let Y_n^B := (Y_{L,n}, Y_{R,n})^tr be the vector of observed data at the boundary locations x_0, x_I at time t_n. Consider the time local problem, defined between consecutive measurement times, i.e.
$$\begin{cases} \partial_t T + \mathcal{L}_\theta T = 0, & x \in (x_L, x_R),\ t_{n-1} < t \le t_n, \\ T(x_L, t) = T_L(t), & t \in [t_{n-1}, t_n], \\ T(x_R, t) = T_R(t), & t \in [t_{n-1}, t_n], \\ T(x, t_{n-1}) \ \text{given (the state at time } t_{n-1}\text{)}, & x \in (x_L, x_R), \end{cases} \qquad (3)$$

40 Linear Parabolic PDEs with Noisy Boundary Conditions Statistical setting Statistical setting (cont.) whose exact solution, denoted by T(·, t_n), depends only on θ, T(·, t_{n-1}) and the boundary values {T_L(t), T_R(t)}_{t ∈ (t_{n-1}, t_n)}. Lemma. The probability density function (pdf) of Y_n^I is given by
$$\rho\big(Y_n^I \mid \theta, T(\cdot, t_{n-1}), \{T_L(t), T_R(t)\}_{t \in (t_{n-1}, t_n)}\big) = \left( \frac{1}{\sqrt{2\pi}\,\sigma} \right)^{I-1} \exp\!\left( -\frac{1}{2\sigma^2} \,\|R_{t_n}\|^2_{\ell^2} \right), \qquad (4)$$
where R_{t_n} := (T(x_1, t_n) - Y_{1,n}, ..., T(x_{I-1}, t_n) - Y_{I-1,n})^tr, of size (I-1) x 1, denotes the data residual vector at time t = t_n.
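
A minimal sketch of evaluating the log of this density, assuming a hypothetical forward solver solve_local(theta, T_prev, TL, TR) that returns the (exact or numerically approximated) solution at the interior measurement locations at time t_n:

```python
# Minimal sketch: log of the density (4) of the interior data at time t_n.
import numpy as np

def loglik_interior(theta, T_prev, TL, TR, y_interior, sigma, solve_local):
    """y_interior : observed interior readings Y_n^I (length I-1)
    solve_local   : hypothetical forward solver for the local problem (3)"""
    T_n = solve_local(theta, T_prev, TL, TR)      # T(x_1..x_{I-1}, t_n)
    resid = T_n - y_interior                      # residual vector R_{t_n}
    m = y_interior.size                           # m = I - 1
    return -0.5 * np.sum(resid ** 2) / sigma ** 2 - m * np.log(np.sqrt(2.0 * np.pi) * sigma)
```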

41 Linear Parabolic PDEs with Noisy Boundary Conditions Statistical setting Statistical setting (cont.) Assume now that the Dirichlet boundary condition functions, T_L(·) and T_R(·), are well approximated by piecewise linear continuous functions in the time partition {t_n}_{n=1,...,N}. In this way, only 2N parameters, say T_L(t_n) = T_{L,n}, T_R(t_n) = T_{R,n}, n = 1, 2, ..., N, suffice to determine the boundary conditions that are, in principle, infinite dimensional parameters. Let LBC_n denote the time nodes that determine the local boundary conditions {T_{L,n-1}, T_{L,n}, T_{R,n-1}, T_{R,n}}.

42 Linear Parabolic PDEs with Noisy Boundary Conditions Statistical setting Statistical setting (cont.) Lemma. The joint likelihood function of θ and the boundary parameters {LBC_n}_{n=1,...,N} is given by
$$\rho\big(Y_1, \ldots, Y_N \mid \theta, \{LBC_n\}_{n=1,\ldots,N}\big) = \prod_{n=1}^{N} \left( \frac{1}{\sqrt{2\pi}\,\sigma} \right)^{I-1} \!\exp\!\left( -\frac{\|R_{t_n}\|^2_{\ell^2}}{2\sigma^2} \right) \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(T_{L,n} - Y_{L,n})^2}{2\sigma^2} \right) \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(T_{R,n} - Y_{R,n})^2}{2\sigma^2} \right). \qquad (5)$$
Notation: T_L = (T_{L,1}, ..., T_{L,N})^tr, T_R = (T_{R,1}, ..., T_{R,N})^tr, Y_L = (Y_{L,1}, ..., Y_{L,N})^tr and Y_R = (Y_{R,1}, ..., Y_{R,N})^tr.

43 Linear Parabolic PDEs with Noisy Boundary Conditions Statistical setting Statistical setting (cont.) The joint likelihood function (5) can be written as
$$\rho(Y_1, \ldots, Y_N \mid \theta, T_L, T_R) = \big(\sqrt{2\pi}\,\sigma\big)^{-N(I+1)} \exp\!\left( -\frac{1}{2\sigma^2} \sum_{n=1}^{N} \|R_{t_n}\|^2_{\ell^2} \right) \exp\!\left( -\frac{1}{2\sigma^2} \Big[ \|T_L - Y_L\|^2_{\ell^2} + \|T_R - Y_R\|^2_{\ell^2} \Big] \right). \qquad (6)$$

44 Linear Parabolic PDEs with Noisy Boundary Conditions The marginal likelihood of θ The marginal likelihood of θ Assume that the nuisance parameters T_L and T_R are independent Gaussian distributed:
$$T_L \sim \mathcal{N}(\mu_L, \sigma_p^2 I_N), \qquad T_R \sim \mathcal{N}(\mu_R, \sigma_p^2 I_N). \qquad (7)$$
Using (6), the marginal likelihood of θ is given by
$$\rho(Y_1, \ldots, Y_N \mid \theta) = \big(\sqrt{2\pi}\,\sigma\big)^{-N(I+1)} \big(\sqrt{2\pi}\,\sigma_p\big)^{-2N} \int\!\!\int \exp\!\left( -\frac{1}{2\sigma^2} \sum_{n=1}^{N} \|R_{t_n}\|^2_{\ell^2} \right) \exp\!\left( -\frac{1}{2\sigma^2} \Big[ (T_L - Y_L)^{tr}(T_L - Y_L) + (T_R - Y_R)^{tr}(T_R - Y_R) \Big] \right) \exp\!\left( -\frac{1}{2\sigma_p^2} \Big[ (T_L - \mu_L)^{tr}(T_L - \mu_L) + (T_R - \mu_R)^{tr}(T_R - \mu_R) \Big] \right) dT_L\, dT_R, \qquad (8)$$
where R_{t_n} is approximated by
$$R_{t_n} = B_n(T_0, Y_n^I) + A_{L,n}(\theta)\, T_L + A_{R,n}(\theta)\, T_R, \qquad (9)$$
i.e. by an expression that is affine in the boundary parameters T_L and T_R.
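
Since (9) is affine in T_L and T_R, the integral (8) is a Gaussian integral and admits a closed form; as a simple hedged alternative that avoids that algebra, the sketch below approximates the marginal likelihood by plain Monte Carlo, drawing the boundary parameters from the prior (7) and averaging the conditional likelihood (6). The function loglik_conditional is a hypothetical implementation of (6).

```python
# Minimal sketch: Monte Carlo approximation of the marginal likelihood (8).
import numpy as np
from scipy.special import logsumexp

def log_marginal_likelihood(theta, mu_L, mu_R, sigma_p, loglik_conditional,
                            n_samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    N = mu_L.size
    logs = np.empty(n_samples)
    for m in range(n_samples):
        TL = mu_L + sigma_p * rng.standard_normal(N)   # draw T_L ~ N(mu_L, sigma_p^2 I_N)
        TR = mu_R + sigma_p * rng.standard_normal(N)   # draw T_R ~ N(mu_R, sigma_p^2 I_N)
        logs[m] = loglik_conditional(theta, TL, TR)    # log of (6)
    # log of the prior average of the conditional likelihood
    return logsumexp(logs) - np.log(n_samples)
```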

45 Linear Parabolic PDEs with Noisy Boundary Conditions Example - inference for thermal diffusivity Consider the heat equation (one-dimensional diffusion equation for T(x, t)):
$$\begin{cases} \partial_t T - \partial_x\big(\theta(x)\,\partial_x T\big) = 0, & x \in (x_L, x_R),\ 0 < t \le t_N < \infty \\ T(0, t) = T_L(t), & t \in [0, t_N] \\ T(1, t) = T_R(t), & t \in [0, t_N] \\ T(x, 0) = g(x), & x \in (x_L, x_R). \end{cases} \qquad (10)$$
Goal: to infer the thermal diffusivity, θ(x), an unknown parameter that measures the rapidity of the heat propagation through a material, using a Bayesian approach, when the temperature is measured at I + 1 locations, x_0 = x_L, x_1, x_2, ..., x_{I-1}, x_I = x_R, at each of the N times, t_1, t_2, ..., t_N. Clearly, this problem is a special case of (1) where L_θ T = -∂_x(θ(x) ∂_x T) and θ(x) > 0.

46 Linear Parabolic PDEs with Noisy Boundary Conditions Example - inference for thermal diffusivity Consider a lognormal prior, log θ ~ N(ν, τ), where ν ∈ R and τ > 0. Assume noisy boundary measurements and a Gaussian distribution for the nuisance boundary parameters as in (7). The non-normalized posterior density for θ is given by
$$\rho_{\nu,\tau}(\theta \mid Y_1, \ldots, Y_N) \propto \frac{1}{\sqrt{2\pi}\,\theta\,\tau} \exp\!\left( -\frac{(\log\theta - \nu)^2}{2\tau^2} \right) \rho(Y_1, \ldots, Y_N \mid \theta),$$
where ρ(Y_1, ..., Y_N | θ) is the marginal likelihood of θ. To assess the performance of our method we use a synthetic dataset, generated assuming that θ is a lognormal random variable with given ν and τ.
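
A minimal sketch of evaluating this unnormalized log-posterior on a grid of diffusivity values (for a scalar θ), reusing a marginal-likelihood routine such as the one sketched above; log_marglik is a hypothetical function name.

```python
# Minimal sketch: unnormalized log-posterior of a scalar theta on a grid,
# combining the lognormal prior with the log marginal likelihood of theta.
import numpy as np

def log_posterior_grid(theta_grid, nu, tau, log_marglik):
    theta_grid = np.asarray(theta_grid, float)
    log_prior = (-np.log(theta_grid) - np.log(tau) - 0.5 * np.log(2.0 * np.pi)
                 - (np.log(theta_grid) - nu) ** 2 / (2.0 * tau ** 2))
    log_lik = np.array([log_marglik(th) for th in theta_grid])
    return log_prior + log_lik
```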

47 Linear Parabolic PDEs with Noisy Boundary Conditions Example - inference for thermal diffusivity Bayesian inference for thermal diffusivity Figure 12: Example: Comparison between log-likelihoods (on the left) and log-posteriors (on the right) for θ using different numbers of observations, N, and different values of σ.

48 Linear Parabolic PDEs with Noisy Boundary Conditions Example - inference for thermal diffusivity Bayesian inference for thermal diffusivity (cont.) Figure 13: Example: Comparison between log-likelihoods (on the left) and log-posteriors (on the right) for θ using different values of σ and σ_p, with N fixed.

49 Linear Parabolic PDEs with Noisy Boundary Conditions Example - inference for thermal diffusivity Bayesian inference for thermal diffusivity (cont.) Figure 14: Example: Lognormal prior and approximated Gaussian posterior densities for θ, where σ_p = σ = 0.5, for a given N.

50 Linear Parabolic PDEs with Noisy Boundary Conditions Example - inference for thermal diffusivity Bayesian inference for thermal diffusivity (cont.) Introduce three experimental setups (es's) of interest: es1) ξ consists of three non-overlapping time intervals, of the same length, which cover the entire observational period [0, 1]; es2) ξ consists of the five inner thermocouples; es3) ξ is the combination of the two previous experimental setups, es1 and es2.

51 Linear Parabolic PDEs with Noisy Boundary Conditions Example - inference for thermal diffusivity Bayesian inference for thermal diffusivity (cont.) Figure 15: Example: The expected information gain compared with the information divergence for the synthetic dataset, for the three time intervals experimental setup (es1).

52 Linear Parabolic PDEs with Noisy Boundary Conditions Example - inference for thermal diffusivity Bayesian inference for thermal diffusivity (cont.) Figure 16: Example: The expected information gain compared with the information divergence for the synthetic dataset, for the five inner thermocouples experimental setup (es2).

53 Linear Parabolic PDEs with Noisy Boundary Conditions Example - inference for thermal diffusivity Bayesian inference for thermal diffusivity (cont.) Figure 17: Example: The expected information gain computed for the combination (es3) of the three time intervals and three out of five inner thermocouples experimental setups.

54 Linear Parabolic PDEs with Noisy Boundary Conditions Example - inference for thermal diffusivity Many thanks for your attention!

55 Table 1: Maximum posterior estimates for Model Ia (Laplace and Laplace-Metropolis estimators of A_1, A_2, A_3, q and τ). Table 2: MCMC posterior empirical mean estimates, with their standard deviations, for A_1, A_2, A_3, q and τ.

56 Table 3: Maximum posterior estimates for Model Ib (Laplace and Laplace-Metropolis estimators of A_1, A_2, A_3, q, B_1 and B_2). Table 4: MCMC posterior empirical mean estimates, with their standard deviations, for A_1, A_2, A_3, q, B_1 and B_2.

57 Table 5: Maximum posterior estimates for Model IIb (Laplace and Laplace-Metropolis estimators of A_1, A_2, μ_f, σ_f, q and τ). Table 6: MCMC posterior empirical mean estimates, with their standard deviations, for A_1, A_2, μ_f, σ_f, q and τ.
